Agent-Omit: Training Efficient LLM Agents for Adaptive Thought and Observation Omission via Agentic Reinforcement Learning

By: Yansong Ning, Jun Fang, Naiqiang Tan, Hao Liu

Published: 2026-02-01

Category: cs.AI

Abstract

Large language model (LLM) agents, while powerful, often suffer from inefficiencies due to processing irrelevant information and generating verbose thoughts. Agent-Omit introduces a novel training paradigm that leverages agentic reinforcement learning to teach LLM agents to adaptively omit redundant thoughts and observations. This results in significantly more efficient and performant agents, reducing computational overhead and improving task completion rates in complex environments.
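To make the idea of adaptive omission concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): an agent records a trajectory of thought/action/observation steps, and a stand-in heuristic for the learned omission policy rebuilds the prompt while dropping intermediate thoughts and stale observations, shrinking the context the model must process. The `Step` class, `build_context` function, and the `keep_last_obs` heuristic are all illustrative assumptions.

```python
# Hypothetical illustration of adaptive thought/observation omission.
# The real Agent-Omit policy is learned via reinforcement learning;
# here a simple heuristic stands in for it.
from dataclasses import dataclass

@dataclass
class Step:
    thought: str       # the agent's intermediate reasoning
    action: str        # the tool call or command issued
    observation: str   # the (often long) tool output

def build_context(steps, keep_last_obs=1, omit_thoughts=True):
    """Rebuild the agent prompt, omitting redundant content.

    Keeps only the most recent `keep_last_obs` observations and
    optionally drops all intermediate thoughts, replacing each
    omitted observation with a short placeholder.
    """
    lines = []
    n = len(steps)
    for i, step in enumerate(steps):
        if not omit_thoughts:
            lines.append(f"Thought: {step.thought}")
        lines.append(f"Action: {step.action}")
        if i >= n - keep_last_obs:
            lines.append(f"Observation: {step.observation}")
        else:
            lines.append("Observation: [omitted]")
    return "\n".join(lines)
```

In a trained system, the decision of *which* steps to omit would come from the RL-optimized policy rather than a fixed recency rule; the point of the sketch is only that pruning past observations and thoughts directly reduces the tokens fed back into the model at each turn.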

