Position: Agentic Evolution is the Path to Evolving LLMs
By: Minhua Lin, Hanqing Lu, Zhan Shi, Bing He, Rui Mao, Zhiwei Zhang, Zongyu Wu, Xianfeng Tang, Hui Liu, Zhenwei Dai, Xiang Zhang, Suhang Wang, Benoit Dumoulin, Jian Pei
Published: 2026-01-30
Abstract
As Large Language Models (LLMs) move from controlled training environments to open-ended real-world applications, a critical limitation emerges: static training cannot keep pace with continuous environmental change. This paper argues for a new scaling axis, evolution, to bridge this "train-deploy gap." It proposes agentic evolution as the future of LLM adaptation, elevating evolution from a fixed pipeline to an autonomous evolver agent. The resulting framework, A-Evolve, treats deployment-time improvement as deliberate, goal-directed optimization over persistent system state. It further posits an evolution-scaling hypothesis: adaptation capacity scales with the compute allocated to evolution, enabling sustained, open-ended adaptation in real-world scenarios.