Parameter-Efficient Fine-tuning (PEFT) Library, Hugging Face, 2024 - Official documentation for the Hugging Face peft library, detailing its components, API, and usage for various PEFT methods.
LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. arXiv preprint arXiv:2106.09685. DOI: 10.48550/arXiv.2106.09685 - The original research paper introducing Low-Rank Adaptation (LoRA) for fine-tuning large language models, explaining the theoretical foundation and methodology.
QLoRA: Efficient Finetuning of Quantized LLMs, Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer, 2023. arXiv preprint arXiv:2305.14314. DOI: 10.48550/arXiv.2305.14314 - The research paper presenting QLoRA, an efficient fine-tuning approach for quantized large language models, combining LoRA with 4-bit quantization.
Trainer, Hugging Face, 2024 - Official documentation for the Hugging Face transformers.Trainer class, which provides a high-level API for training models, including PEFT models.