Introduction to Floating-Point Formats (FP32, FP16, BF16)
Mixed Precision Training. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu, 2018. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1710.03740 - Presents the original approach for training deep neural networks with FP16 mixed-precision arithmetic, showcasing its benefits and the strategies needed for stable training.
Automatic Mixed Precision for Deep Learning. PyTorch Team, 2025. PyTorch Foundation - Official PyTorch documentation detailing how to implement automatic mixed precision (AMP) training, including support for both FP16 and BF16.
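For context, the following is a minimal sketch of the AMP pattern described in the PyTorch documentation cited above, not a definitive recipe. The tiny linear model, random tensors, and hyperparameters are placeholders chosen for illustration, and a reasonably recent PyTorch build is assumed. On CUDA it autocasts to FP16 with loss scaling; the CPU fallback uses BF16, which keeps FP32's exponent range and therefore does not require a gradient scaler.

```python
import torch
import torch.nn as nn

# Placeholder setup: a tiny model and random data, for illustration only.
device = "cuda" if torch.cuda.is_available() else "cpu"
# FP16 is the usual autocast dtype on CUDA; BF16 is used as the CPU fallback.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

model = nn.Linear(1024, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# GradScaler performs loss scaling, which FP16 gradients need to avoid
# underflow; it is disabled for BF16, where the extra range makes it unnecessary.
scaler = torch.cuda.amp.GradScaler(enabled=(amp_dtype == torch.float16))

inputs = torch.randn(32, 1024, device=device)
targets = torch.randint(0, 10, (32,), device=device)

for _ in range(3):
    optimizer.zero_grad(set_to_none=True)
    # autocast runs selected ops in the lower-precision dtype while keeping
    # numerically sensitive ops in FP32.
    with torch.autocast(device_type=device, dtype=amp_dtype):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss before backprop
    scaler.step(optimizer)         # unscales gradients, then steps the optimizer
    scaler.update()                # adjusts the scale factor for the next step
```

Switching the same loop to BF16 on CUDA amounts to setting `amp_dtype = torch.bfloat16` and disabling the scaler, which is one of the practical differences between the two 16-bit formats this section introduces.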