Recursive Language Models

By: Alex L. Zhang, Omar Khattab

Published: 2025-12-30

Category: cs.AI

Abstract

Recursive Language Models (RLMs) introduce a general inference strategy that allows Large Language Models (LLMs) to process arbitrarily long prompts (exceeding 10 million tokens) by treating them as external environment variables within a Python REPL. The model programmatically examines, decomposes, and recursively calls itself over prompt snippets. This approach achieves superior performance and robustness on diverse long-context tasks compared to direct LLM calls and other scaling methods, mitigating "context rot" and making LLMs effective for applications requiring extensive context understanding.
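The recursive decomposition described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: `llm_call` is a hypothetical stand-in for any LLM API, and the halving-based split and character-count context limit are simplifying assumptions.

```python
def llm_call(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a dummy "answer"
    # so the sketch is runnable without any external service.
    return f"summary({len(prompt)} chars)"

def rlm(prompt: str, context_limit: int = 1000) -> str:
    """Recursively decompose a long prompt until each snippet fits the limit."""
    if len(prompt) <= context_limit:
        # Base case: the snippet fits in context, so answer directly.
        return llm_call(prompt)
    # Recursive case: split the oversized prompt, recurse on each half,
    # then merge the partial answers with one more model call.
    mid = len(prompt) // 2
    left = rlm(prompt[:mid], context_limit)
    right = rlm(prompt[mid:], context_limit)
    return llm_call(f"Combine these partial answers:\n{left}\n{right}")
```

In the paper's actual setting, the root model decides interactively inside a REPL how to slice and query the prompt variable, rather than following a fixed halving schedule as above.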

