Language Models are Few-Shot Learners, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020, NeurIPS. DOI: 10.48550/arXiv.2005.14165 - This paper introduces in-context learning and demonstrates the effectiveness of few-shot prompting with large language models like GPT-3, serving as the academic foundation for the technique.
Few-Shot Prompt Templates, LangChain Team, 2024 - The official LangChain documentation offering practical guidance on using FewShotPromptTemplate and ExampleSelector for dynamic few-shot prompting.
Prompt engineering, OpenAI, 2023 - OpenAI's official guide to prompt engineering, covering principles and methods for interacting with LLMs, including how few-shot examples improve model performance and control.
Embeddings, OpenAI, 2024 - OpenAI's guide to embeddings, covering how they are generated and how they are used in tasks like semantic search and example selection, which is fundamental to SemanticSimilarityExampleSelector.