ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2022. arXiv preprint arXiv:2210.03629. DOI: 10.48550/arXiv.2210.03629 - Introduces a paradigm in which LLMs interleave generated reasoning traces (thoughts) with task-specific actions, helping the model break down and execute complex tasks (a minimal loop sketch appears after this list).
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 2022. arXiv preprint arXiv:2201.11903. DOI: 10.48550/arXiv.2201.11903 - Demonstrates that prompting LLMs to generate intermediate reasoning steps (a form of problem decomposition) improves performance on complex tasks (see the prompt sketch after this list).
Generative Agents: Interactive Simulacra of Human Behavior, Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein, 2023. arXiv preprint arXiv:2304.03442. DOI: 10.48550/arXiv.2304.03442 - Details an architecture for AI agents that includes planning and reflection, demonstrating how agents decompose high-level goals into actionable steps over time.
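The loop below is a minimal, illustrative sketch of the ReAct-style interleaving described in the first entry: the model emits a free-form thought plus an action, the action is executed against a tool, and the observation is fed back as context. The `call_llm` stub and the `lookup` tool are hypothetical placeholders, not the paper's implementation.

```python
# Minimal, illustrative ReAct-style loop (not the authors' implementation).
# `call_llm` stands in for a real language-model call and `lookup` is a toy
# tool; both are hypothetical placeholders used only to show the
# thought -> action -> observation cycle.

def call_llm(prompt: str) -> str:
    """Stub LLM: in practice this would query a real model with the prompt."""
    if "Observation:" in prompt:
        return "Thought: The observation answers the question.\nAnswer: Paris"
    return 'Thought: I should look up the capital of France.\nAction: lookup["capital of France"]'

def lookup(query: str) -> str:
    """Toy tool standing in for a search or knowledge-base API."""
    return "Paris is the capital of France."

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        output = call_llm(prompt)          # model emits a thought and either an action or an answer
        prompt += output + "\n"
        if "Answer:" in output:            # model has decided it is done
            return output.split("Answer:", 1)[1].strip()
        if 'Action: lookup["' in output:   # parse and execute the requested action
            query = output.split('lookup["', 1)[1].split('"]', 1)[0]
            prompt += f"Observation: {lookup(query)}\n"  # feed the result back as context
    return "No answer within step budget."

if __name__ == "__main__":
    print(react_loop("What is the capital of France?"))  # -> Paris
```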
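Similarly, the snippet below sketches what a chain-of-thought prompt can look like: a few-shot exemplar whose answer spells out intermediate reasoning steps before the final result, followed by the new question. The exemplar wording and the `COT_PROMPT` name are illustrative assumptions, not an excerpt of the paper's exact prompts.

```python
# Illustrative chain-of-thought prompt: the exemplar answer walks through
# intermediate reasoning steps so the model imitates that step-by-step style.
COT_PROMPT = """Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. How many apples do they have?
A: The cafeteria started with 23 apples. After using 20, 23 - 20 = 3 remained. Buying 6 more gives 3 + 6 = 9. The answer is 9.

Q: {question}
A:"""  # the model is expected to continue with its own step-by-step reasoning

print(COT_PROMPT.format(question="If there are 3 cars and each car has 4 wheels, how many wheels are there in total?"))
```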