Language Models are Few-Shot Learners, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020, Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2005.14165 - This paper introduced few-shot learning for large language models, demonstrating their ability to perform new tasks from only a few in-context examples, without fine-tuning.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 2022, arXiv. DOI: 10.48550/arXiv.2201.11903 - This paper introduced Chain-of-Thought prompting, a technique that improves LLM reasoning by explicitly showing intermediate reasoning steps, which can be combined with few-shot examples for agent guidance.
ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2023, International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2210.03629 - This paper presents the ReAct framework, which combines reasoning and acting for LLM agents. Few-shot examples are often used within ReAct to guide tool use and decision-making for complex tasks.
Prompt Engineering Guide, OpenAI, 2023 - Official guide from OpenAI offering practical techniques and best practices for prompt engineering, including the effective use of few-shot examples for model control.
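The few-shot and chain-of-thought techniques cited above can be sketched as simple prompt assembly: worked examples (question, intermediate reasoning, answer) are concatenated ahead of the new question, so the model continues the pattern. This is an illustrative sketch, not code from any of the papers; the example question and answer follow the well-known tennis-ball example from the Chain-of-Thought paper, and the function and variable names are assumptions.

```python
# Minimal sketch of few-shot chain-of-thought prompt construction.
# The example below (question, reasoning, answer) mirrors the canonical
# worked example from the Chain-of-Thought paper; names are illustrative.

FEW_SHOT_COT_EXAMPLES = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
            "Each can has 3 tennis balls. How many tennis balls does he have now?"
        ),
        "reasoning": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
            "6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_few_shot_cot_prompt(examples, new_question):
    """Concatenate worked examples, then the unanswered question, so the
    model is prompted to emit reasoning steps before its final answer."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}."
        )
    # The new question ends with an open "A:" for the model to complete.
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)


prompt = build_few_shot_cot_prompt(
    FEW_SHOT_COT_EXAMPLES,
    "A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?",
)
```

The resulting string would be sent to a language model as-is; adding more worked examples (more "shots") typically improves task adherence, per the few-shot learning results in Brown et al. (2020).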