Learning long-term dependencies with gradient descent is difficult, Yoshua Bengio, Patrice Simard, Paolo Frasconi, 1994. IEEE Transactions on Neural Networks, Vol. 5 (IEEE). DOI: 10.1109/72.279181 - This seminal paper formally identified and analyzed the vanishing and exploding gradient problems in recurrent neural networks, providing a theoretical foundation for understanding these challenges.
Long Short-Term Memory, Sepp Hochreiter, Jürgen Schmidhuber, 1997. Neural Computation, Vol. 9 (MIT Press). DOI: 10.1162/neco.1997.9.8.1735 - The original paper introducing Long Short-Term Memory (LSTM) networks, which were specifically designed to mitigate the vanishing gradient problem in RNNs.
Deep Learning, Ian Goodfellow, Yoshua Bengio, Aaron Courville, 2016 (MIT Press) - An authoritative textbook covering the core concepts of deep learning, including recurrent neural networks, backpropagation through time, and the vanishing/exploding gradient problems.