Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016 (MIT Press) - This authoritative textbook gives a comprehensive introduction to autoencoders, covering their architecture, the role of the bottleneck layer, the reconstruction objective, and common loss functions such as Mean Squared Error and cross-entropy. It is a fundamental reference for understanding autoencoder mechanics.
Reducing the Dimensionality of Data with Neural Networks, Geoffrey E. Hinton and Ruslan R. Salakhutdinov, 2006, Science, Vol. 313 (American Association for the Advancement of Science), DOI: 10.1126/science.1127647 - This seminal paper demonstrates that deep autoencoders can perform effective dimensionality reduction by learning a compressed (bottleneck) representation from which the input can be accurately reconstructed, directly underpinning the input-output matching principle.
Lecture 4: Unsupervised Learning (Autoencoders), Alexander Amini and Ava Soleimany, 2021, MIT 6.S191 Introduction to Deep Learning (Massachusetts Institute of Technology) - These lecture notes offer a clear, accessible explanation of autoencoder components, the reconstruction process, the role of the output-layer activation (e.g., Sigmoid for data in [0, 1]), and common loss functions such as Mean Squared Error and Binary Cross-Entropy, making them well suited to an introductory audience.
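For quick reference alongside these sources, the two reconstruction losses they all mention can be sketched as follows. This uses generic notation (encoder f, decoder g, reconstruction \hat{x} = g(f(x)), input dimension n) and is not a formula quoted from any of the texts above:

L_{MSE}(x, \hat{x}) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{x}_i)^2

L_{BCE}(x, \hat{x}) = -\sum_{i=1}^{n} \big[ x_i \log \hat{x}_i + (1 - x_i) \log(1 - \hat{x}_i) \big]

MSE is the usual choice for real-valued inputs, while binary cross-entropy pairs with a Sigmoid output layer so that each \hat{x}_i lies in (0, 1), matching inputs scaled to [0, 1].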