Cross-Embodiment Offline Reinforcement Learning for Heterogeneous Robot Datasets

By: Haruki Abe, Takayuki Osa, Yusuke Mukuta, Tatsuya Harada

Published: 2026-02-23

#cs.AI

Abstract

This paper introduces an approach to offline reinforcement learning that lets robots learn from heterogeneous datasets collected across different embodiments. This capability matters for real-world robotics: it allows skills and knowledge to transfer between diverse robotic platforms without requiring extensive online interaction with each new robot.
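The abstract does not describe the paper's algorithm, but a common first step in cross-embodiment offline RL is putting transitions from robots with different observation and action spaces into one shared buffer. The sketch below is purely illustrative (not the authors' method): it zero-pads observations and actions to a common dimension and tags each transition with a one-hot embodiment ID. All names (`Transition`, `merge_datasets`) are assumptions for illustration.

```python
# Illustrative sketch, NOT the paper's method: merge heterogeneous
# per-robot offline datasets into one cross-embodiment replay buffer
# by zero-padding to shared dimensions and adding an embodiment tag.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Transition:
    obs: List[float]
    action: List[float]
    reward: float
    next_obs: List[float]
    embodiment: List[float]  # one-hot indicator of the source robot

def pad(vec: List[float], dim: int) -> List[float]:
    """Zero-pad a vector to a fixed target dimension."""
    return vec + [0.0] * (dim - len(vec))

def merge_datasets(datasets: List[List[Tuple]]) -> List[Transition]:
    """Combine per-robot datasets of (obs, action, reward, next_obs)
    tuples into one buffer with unified dimensions."""
    obs_dim = max(len(t[0]) for d in datasets for t in d)
    act_dim = max(len(t[1]) for d in datasets for t in d)
    n = len(datasets)
    buffer = []
    for i, d in enumerate(datasets):
        onehot = [1.0 if j == i else 0.0 for j in range(n)]
        for obs, act, rew, nxt in d:
            buffer.append(Transition(pad(obs, obs_dim), pad(act, act_dim),
                                     rew, pad(nxt, obs_dim), onehot))
    return buffer

# Usage: a 3-DoF-observation arm and a 2-DoF-observation mobile base.
arm = [([0.1, 0.2, 0.3], [0.5], 1.0, [0.2, 0.3, 0.4])]
mobile = [([0.0, 1.0], [0.1, 0.2], 0.0, [0.1, 1.1])]
buf = merge_datasets([arm, mobile])
```

An offline RL algorithm (e.g., a conservative Q-learning variant) could then train on `buf` as a single homogeneous dataset; the embodiment tag lets the policy condition on which robot generated each transition.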

