Training-Time Action Conditioning for Efficient Real-Time Robot Control

By: Oliver Schmidt, Chloe Brown, Daniel Kim

Published: 2025-12-02

View on arXiv →
#cs.AI

Abstract

Researchers at Physical Intelligence developed a method for real-time robot control that shifts action-chunk conditioning from inference time to training time, achieving lower latency and improved robustness for Vision-Language-Action (VLA) models, especially under high inference delays. The approach reduces end-to-end latency by an average of 27 ms compared to prior methods while maintaining task performance on complex real-world tasks.
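The core idea can be sketched as follows: during training, each example exposes the prefix of the previous action chunk that will still be executing when the new chunk arrives, so the model learns consistency with in-flight actions and no extra conditioning step (e.g., inpainting or guidance) is needed at inference time. This is a minimal illustrative sketch, not the paper's implementation; all names (`make_training_example`, `MAX_DELAY`, the padding/mask scheme) and the specific input encoding are assumptions.

```python
import numpy as np

# Illustrative sketch of training-time action conditioning (names and
# encoding are hypothetical, not taken from the paper). Each training
# sample pairs an observation with the "delay prefix": the actions from
# the previous chunk that will execute while the new chunk is computed.

CHUNK_LEN = 8      # actions per chunk (assumed value)
ACT_DIM = 2        # action dimensionality (assumed value)
MAX_DELAY = 3      # max simulated inference delay, in control steps

def make_training_example(prev_chunk, obs, rng):
    """Sample a random inference delay and build a model input that
    includes the overlapping action prefix as conditioning."""
    d = int(rng.integers(0, MAX_DELAY + 1))  # sampled delay in steps
    prefix = prev_chunk[:d]                  # actions executed during delay
    # Pad the prefix to a fixed length so the model input size is constant.
    padded = np.zeros((MAX_DELAY, ACT_DIM))
    padded[:d] = prefix
    mask = np.zeros(MAX_DELAY)               # marks which slots are real
    mask[:d] = 1.0
    model_input = np.concatenate([obs, padded.ravel(), mask])
    return model_input, d

rng = np.random.default_rng(0)
prev_chunk = rng.normal(size=(CHUNK_LEN, ACT_DIM))
obs = rng.normal(size=4)
x, d = make_training_example(prev_chunk, obs, rng)
print(x.shape)  # observation + flattened prefix + mask
```

Because the delay is randomized per sample at training time, the resulting policy tolerates a range of real inference delays without any inference-time correction, which is where the latency saving comes from.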
