Chain-of-Thought and Tree-of-Thought in Agent Prompts
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 2022. arXiv preprint arXiv:2201.11903. DOI: 10.48550/arXiv.2201.11903 - This paper introduces Chain-of-Thought (CoT) prompting, a method that enables large language models to perform complex reasoning by generating intermediate reasoning steps before the final answer.
Large Language Models are Zero-Shot Reasoners, Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa, 2022. NeurIPS 2022. DOI: 10.48550/arXiv.2205.11916 - This paper demonstrates that simply appending "Let's think step by step" to a prompt can elicit CoT reasoning in LLMs without task-specific exemplars (a prompt sketch contrasting both styles follows these references).
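The two papers above describe prompt formats rather than a particular library API, so the sketch below only assembles the prompts. It is a minimal illustration, not either paper's exact setup: `call_llm` is a hypothetical placeholder for whichever model client an agent uses, and the arithmetic questions are invented examples.

```python
# Minimal sketch of the two CoT prompting styles referenced above.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError("Wire this up to your model client of choice.")

# Few-shot CoT (Wei et al., 2022): include worked exemplars whose answers
# spell out intermediate reasoning steps before the final answer.
FEW_SHOT_COT_PROMPT = """\
Q: A farmer has 15 apples and gives away 6. How many remain?
A: The farmer starts with 15 apples. Giving away 6 leaves 15 - 6 = 9. The answer is 9.

Q: A library has 42 books and buys 18 more. How many books does it have?
A:"""

# Zero-shot CoT (Kojima et al., 2022): no exemplars, just append the trigger
# phrase "Let's think step by step." to elicit intermediate reasoning.
ZERO_SHOT_COT_PROMPT = (
    "Q: A library has 42 books and buys 18 more. How many books does it have?\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    # Print the prompts so the two styles can be compared side by side.
    for name, prompt in [("few-shot CoT", FEW_SHOT_COT_PROMPT),
                         ("zero-shot CoT", ZERO_SHOT_COT_PROMPT)]:
        print(f"--- {name} prompt ---\n{prompt}\n")
```

In an agent, either prompt string would be passed to the model (here, via the placeholder `call_llm`) and the final answer parsed from the end of the generated reasoning.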