LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen, 2022. International Conference on Learning Representations (ICLR 2022), OpenReview.net. DOI: 10.48550/arXiv.2106.09685 - Introduces the LoRA method for efficient fine-tuning of large language models, detailing its mechanism, benefits, and experimental results. A brief sketch of the mechanism follows.
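For readers who want the mechanism at a glance: the paper freezes the pretrained weight W0 and learns a low-rank update ΔW = BA scaled by α/r, with A initialized from a Gaussian and B from zeros so training starts exactly at the pretrained model. The PyTorch sketch below is our illustration of that idea, not code from the paper; the class name, Gaussian init scale, and the values of r and α are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update (alpha/r) * B @ A.

    Illustrative sketch of the reparameterization described in Hu et al. (2022);
    names and defaults here are ours, not the paper's reference implementation.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        # Per the paper: A gets a random Gaussian init, B starts at zero,
        # so B @ A = 0 and the adapted layer initially matches the base layer.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # 0.01 scale is illustrative
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base(x) + (alpha/r) * x A^T B^T, i.e. the low-rank correction to W0 x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Only A and B are trained, so the number of trainable parameters is r(d_in + d_out) per adapted layer rather than d_in * d_out.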
Parameter-Efficient Fine-Tuning of Large-Scale Pre-trained Language Models: A Survey, Ning Lou, Hongye Song, Wenxiao Shang, Xiao Liu, Ziyang Li, Yuxiao Dong, Xin Xu, Jing Chen, Yiqi Wang, Yu Zhang, Jiazeng Fang, Xiaoqing Zheng, and Jie Zhou, 2023. arXiv preprint. DOI: 10.48550/arXiv.2303.15647 - A survey that systematically reviews parameter-efficient fine-tuning methods for large language models, providing a detailed comparison and analysis.
PEFT: Parameter-Efficient Fine-Tuning library, Hugging Face, 2024 - Official documentation for the Hugging Face PEFT library, offering practical guidance and examples for implementing parameter-efficient fine-tuning techniques such as LoRA; a usage sketch follows.
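As a companion to the entry above, here is a minimal sketch of applying LoRA through PEFT. `LoraConfig`, `get_peft_model`, and `print_trainable_parameters` are the library's documented API; the checkpoint name, target modules, and hyperparameter values are illustrative choices on our part, not recommendations from the docs.

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT (illustrative values).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example checkpoint

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling numerator (effective scale alpha/r)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)  # freezes the base model, injects LoRA adapters
model.print_trainable_parameters()     # reports the trainable fraction of parameters
```

The last call makes the headline benefit concrete: only the injected adapter weights are trainable, which for a setup like this is typically well under 1% of the model's total parameters.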