Finetuned Language Models are Zero-Shot Learners, Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le, ICLR 2022. DOI: 10.48550/arXiv.2109.01652 - This paper presents FLAN, demonstrating that finetuning language models on a diverse set of instructions significantly improves their zero-shot generalization to unseen tasks, directly supporting the section's claim about instruction-tuned models.