LLVM Optimization Remarks, The LLVM Project, n.d. - Describes how modern compilers such as LLVM emit detailed optimization remarks, covering successful transformations, missed opportunities, and diagnostic analyses, which is central to this section's content.
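As a brief, hedged illustration of what the cited documentation covers (not an excerpt from it), the sketch below shows a loop Clang/LLVM will typically try to vectorize, with the standard remark-related flags noted in a comment:

```c
/* saxpy.c - a loop LLVM will usually attempt to vectorize.
 * Illustrative compile commands using Clang's remark flags
 * (adjust the pass-name regex to the passes of interest):
 *   clang -O2 -Rpass=loop-vectorize -Rpass-missed=loop-vectorize \
 *         -Rpass-analysis=loop-vectorize -c saxpy.c
 *   clang -O2 -fsave-optimization-record -c saxpy.c   // writes saxpy.opt.yaml
 */
void saxpy(int n, float a, float *restrict x, float *restrict y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];  /* expected remark: loop vectorized */
}
```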
The Polyhedral Model for Loop Optimization, Paul Feautrier, 2007, Springer US. DOI: 10.1007/978-0-387-39129-8_1 - Provides a theoretical foundation for advanced loop transformations like tiling, fusion, and permutation, which are critical for performance optimization and often documented in compiler reports.
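To make one of those transformations concrete, here is a minimal, illustrative sketch of loop tiling on a dense matrix multiply; the tile size TILE and the function name are hypothetical choices, not taken from the cited work:

```c
#define TILE 64  /* hypothetical tile size; tuned to cache size in practice */

/* Tiled matrix multiply: the i/k/j loops are blocked so each TILE-sized
 * working set stays cache-resident - the kind of transformation the
 * polyhedral model derives and an optimization report would record.
 * Assumes row-major n x n matrices and C initialized to zero. */
void matmul_tiled(int n, const double *A, const double *B, double *C) {
    for (int ii = 0; ii < n; ii += TILE)
        for (int kk = 0; kk < n; kk += TILE)
            for (int jj = 0; jj < n; jj += TILE)
                for (int i = ii; i < ii + TILE && i < n; ++i)
                    for (int k = kk; k < kk + TILE && k < n; ++k) {
                        double a = A[i * n + k];
                        for (int j = jj; j < jj + TILE && j < n; ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```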
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko, 2018, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE. DOI: 10.1109/CVPR.2018.00097 - Introduces a widely adopted method for quantizing deep neural networks for efficient inference, explaining the techniques and their implications for what appears in compiler quantization reports.
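The paper's quantization scheme maps a real value r to an integer q via real = scale * (q - zero_point). The sketch below is an illustrative quantize/dequantize pair under that scheme; the helper names and the idea of fixed scale/zero_point arguments are assumptions for illustration (in practice they are derived from each tensor's observed value range):

```c
#include <math.h>
#include <stdint.h>

/* Affine quantization in the style of Jacob et al. (2018):
 *   real = scale * (quantized - zero_point)
 * scale and zero_point are illustrative parameters here. */
static uint8_t quantize(float real, float scale, int32_t zero_point) {
    int32_t q = (int32_t)lroundf(real / scale) + zero_point;
    if (q < 0)   q = 0;    /* clamp to the uint8 range */
    if (q > 255) q = 255;
    return (uint8_t)q;
}

static float dequantize(uint8_t q, float scale, int32_t zero_point) {
    return scale * (float)((int32_t)q - zero_point);
}
```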