Prefix-Tuning: Optimizing Continuous Prompts for Generation, Xiang Lisa Li, Percy Liang, 2021. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics. DOI: 10.18653/v1/2021.acl-long.353 - The foundational paper introducing Prefix-Tuning, a parameter-efficient fine-tuning method that learns continuous prefix vectors prepended to the attention keys and values at every layer of a frozen language model.
The Power of Scale for Parameter-Efficient Prompt Tuning, Brian Lester, Rami Al-Rfou, Noah Constant, 2021. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics. DOI: 10.18653/v1/2021.emnlp-main.243 - Presents Prompt Tuning, a method that prepends learnable soft prompt embeddings directly to the input of a frozen LLM for efficient adaptation (a minimal sketch of this idea follows these references).
GPT Understands, Too, Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang, 2021. arXiv preprint. DOI: 10.48550/arXiv.2103.10385 - Introduces P-Tuning (v1), which uses a trainable prompt encoder to generate continuous prompt vectors for the input text; it is a precursor to P-Tuning v2.
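To make the shared idea behind these methods concrete, the sketch below shows soft prompt tuning in the spirit of Lester et al. (2021): a small matrix of learnable virtual-token embeddings is prepended to the frozen model's input embeddings, and only that matrix is trained. This is a minimal illustration, not the authors' reference implementation; the class and parameter names (`SoftPromptedModel`, `num_virtual_tokens`) are invented for this example, and Prefix-Tuning differs in that it injects learned vectors into the attention keys and values of every layer rather than only the input.

```python
# Minimal sketch of soft prompt tuning (assumptions noted in comments).
import torch
import torch.nn as nn

class SoftPromptedModel(nn.Module):
    def __init__(self, base_model: nn.Module, embed_dim: int, num_virtual_tokens: int = 20):
        super().__init__()
        self.base_model = base_model                    # pretrained model, kept frozen
        for p in self.base_model.parameters():
            p.requires_grad = False                     # no gradient updates to the base model
        # The only trainable parameters: one embedding per virtual prompt token.
        self.soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim), already produced by the model's
        # embedding layer. Prepend the shared soft prompt to every example in the batch.
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        extended = torch.cat([prompt, input_embeds], dim=1)
        # Assumes base_model accepts precomputed embeddings (real LMs typically expose
        # this through an argument such as `inputs_embeds`).
        return self.base_model(extended)

# Usage with a stand-in module (a single linear layer) just to show the shapes:
dummy_lm = nn.Linear(768, 768)
model = SoftPromptedModel(dummy_lm, embed_dim=768, num_virtual_tokens=10)
out = model(torch.randn(2, 16, 768))    # output shape: (2, 26, 768)
```

In practice only `model.soft_prompt` would be passed to the optimizer, so the number of trained parameters is `num_virtual_tokens * embed_dim`, independent of the size of the underlying model.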