ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2022. arXiv preprint arXiv:2210.03629. DOI: 10.48550/arXiv.2210.03629 - The original research paper that introduces the ReAct framework, detailing its design for combining reasoning and acting in LLM-based agents.
A Survey of Large Language Model Based Agents, Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen, 2023. arXiv preprint arXiv:2308.11432. DOI: 10.48550/arXiv.2308.11432 - A broad survey providing an overview of different architectures and methodologies employed in large language model-based agents.
Toolformer: Language Models Can Teach Themselves to Use Tools, Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom, 2023. arXiv preprint arXiv:2302.04761. DOI: 10.48550/arXiv.2302.04761 - This paper demonstrates how LLMs can learn to interact with and use external tools, which is a fundamental aspect of the 'Action' component in agent architectures like ReAct.