Generative Adversarial Nets, Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, 2014. Advances in Neural Information Processing Systems (NIPS 2014). DOI: 10.48550/arXiv.1406.2661 - This paper introduces the fundamental framework of Generative Adversarial Networks, including the minimax objective function. Understanding its training dynamics is key to recognizing the evaluation challenges posed by its loss behavior.
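For reference, the two-player value function introduced in this paper, with generator G, discriminator D, data distribution p_data, and noise prior p_z:

```latex
% Minimax objective of the original GAN formulation:
% D maximizes, and G minimizes, the value function V(D, G).
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Because this objective is adversarial, the generator and discriminator losses do not decrease monotonically, which is exactly why separate evaluation metrics such as the two below are needed.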
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter, 2017. Advances in Neural Information Processing Systems (NIPS 2017), Vol. 30. DOI: 10.48550/arXiv.1706.08500 - Presents the Fréchet Inception Distance (FID), a widely adopted and robust metric for evaluating the quality and diversity of generated images by comparing statistics of feature representations of real and generated samples.
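A minimal sketch of how the Fréchet distance between two sets of feature statistics can be computed; the array names `real_feats` and `gen_feats`, the feature dimensionality, and the random example are placeholders, and the Inception-v3 feature extraction step the paper relies on is omitted:

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """FID-style distance: ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    cov_sqrt, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)   # matrix square root
    cov_sqrt = cov_sqrt.real                                 # drop tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * cov_sqrt))

# Illustrative usage with random "features" standing in for Inception activations:
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(1000, 64)), rng.normal(loc=0.5, size=(1000, 64))))
```

Lower values indicate that the generated feature distribution is closer to the real one in both mean and covariance.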
Improved Precision and Recall Metric for Assessing Generative Models, Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila, 2019. Advances in Neural Information Processing Systems (NeurIPS 2019). DOI: 10.48550/arXiv.1904.06991 - Proposes a refined precision and recall metric that separately assesses fidelity (precision) and diversity (recall) in generative models, clarifying the trade-off between these two aspects.
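A rough sketch of the k-nearest-neighbour manifold idea behind this metric, assuming features have already been extracted; the helper names, the choice of k, and the random data are illustrative, not the authors' reference implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(feats: np.ndarray, k: int = 3) -> np.ndarray:
    d = cdist(feats, feats)            # pairwise distances within one feature set
    return np.sort(d, axis=1)[:, k]    # distance to k-th nearest neighbour (index 0 is the point itself)

def manifold_coverage(queries: np.ndarray, refs: np.ndarray, k: int = 3) -> float:
    """Fraction of `queries` that fall inside at least one reference hypersphere."""
    radii = knn_radii(refs, k)
    d = cdist(queries, refs)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

# Precision: generated samples covered by the real manifold; recall: the reverse.
rng = np.random.default_rng(0)
real = rng.normal(size=(500, 32))
fake = rng.normal(scale=0.8, size=(500, 32))
print("precision:", manifold_coverage(fake, real))
print("recall:   ", manifold_coverage(real, fake))
```

Reporting the two numbers separately makes it visible when a model trades diversity for fidelity (high precision, low recall) or vice versa, which a single scalar such as FID can mask.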