Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 2022. Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2201.11903 - Explains how language models can perform complex reasoning by breaking problems down into intermediate steps, forming the basis for prompt chaining (see the sketch after this list).
Tree of Thoughts: Deliberate Problem Solving with Large Language Models, Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan, 2023. Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2305.10601 - Presents a method for structured reasoning and planning with LLMs by exploring multiple thought paths, relevant to complex task execution and error handling in chains.
Generative Agents: Interactive Simulacra of Human Behavior, Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein, 2023. arXiv preprint arXiv:2304.03442. DOI: 10.48550/arXiv.2304.03442 - Describes an architecture for interactive AI agents with memory and planning, showing how complex, long-term behaviors are managed through sequential prompting and state updates.
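Together, these works motivate the prompt-chaining pattern: each prompt handles one focused step, and its output is fed into the next prompt. The sketch below is a minimal illustration of that pattern, not code from any of the cited papers; `call_llm`, `solve_with_chain`, and the prompt wording are hypothetical placeholders for whatever completion API and prompts you actually use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. Replace the body with a real completion API."""
    # Return a canned string so the sketch runs end to end without a provider.
    return f"[model output for a prompt of {len(prompt)} characters]"


def solve_with_chain(question: str) -> str:
    """Answer a question via two chained prompts: reason first, then conclude."""
    # Step 1: elicit intermediate reasoning steps (chain-of-thought style).
    reasoning = call_llm(
        "Think step by step and write out your intermediate reasoning.\n"
        f"Question: {question}\nReasoning:"
    )
    # Step 2: chain the reasoning into a second prompt that asks only for the
    # final answer, keeping each prompt focused on a single sub-task.
    return call_llm(
        f"Question: {question}\nReasoning: {reasoning}\n"
        "Using the reasoning above, state only the final answer:"
    )


if __name__ == "__main__":
    print(solve_with_chain(
        "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    ))
```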