LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2106.09685 - Presents the original LoRA method for parameter-efficient fine-tuning, foundational to understanding adapter techniques.
QLoRA: Efficient Finetuning of Quantized LLMs, Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer, 2023. Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2305.14314 - Introduces QLoRA, a method for fine-tuning quantized large language models, relevant to advanced PEFT and quantization scenarios.