The Function of Prompt Engineering in Agentic Systems
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou, 2022. Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2201.11903 - This paper introduces Chain-of-Thought prompting, a technique that improves LLM reasoning by explicitly instructing models to show their intermediate thinking steps, which is fundamental for guiding agent planning.
ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao, 2023. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2210.03629 - This paper presents the ReAct framework, which interleaves verbalized reasoning traces with actions that invoke external tools, directly addressing task orchestration and tool use in agents.
Generative Agents: Interactive Simulacra of Human Behavior, Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein, 2023. arXiv preprint. DOI: 10.48550/arXiv.2304.03442 - This paper introduces a system for building generative agents that simulate human behavior, demonstrating how prompt-driven architectures can manage memory, planning, and interaction for complex agentic tasks.