RoboSafe: Safeguarding Embodied Agents via Executable Safety Logic

By: Le Wang, Zonghao Ying, Xiao Yang, Quanchen Zou, Zhenfei Yin, Tianlin Li, Jian Yang, Yaodong Yang, Aishan Liu, Xianglong Liu

Published: 2025-12-24

#cs.AI

Abstract

Ensuring the safety of embodied AI agents in complex, unstructured environments is a critical challenge. This paper introduces RoboSafe, a novel framework that integrates executable safety logic directly into the agent's control loop. By leveraging formal methods and runtime verification, RoboSafe enables agents to proactively detect and prevent unsafe actions, even in unforeseen situations. The system allows for defining safety policies using a high-level, human-readable language, which are then compiled into efficient, verifiable runtime monitors. Experimental results demonstrate RoboSafe's effectiveness in preventing collisions, respecting human boundaries, and avoiding hazardous states, significantly enhancing the trustworthiness and deployability of embodied AI systems in safety-critical applications.
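The paper itself does not publish its policy language, but the idea of compiling declarative safety rules into a runtime monitor that vets each action before execution can be sketched as follows. All names here (`Action`, `WorldState`, `compile_policy`, the distance thresholds) are illustrative assumptions, not RoboSafe's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical types standing in for the agent's action/state interfaces.
@dataclass
class Action:
    name: str
    target_pos: Tuple[float, float]  # (x, y) position the action would reach

@dataclass
class WorldState:
    obstacles: List[Tuple[float, float]]  # obstacle centers
    humans: List[Tuple[float, float]]     # human positions

Predicate = Callable[[WorldState, Action], bool]

def min_dist(pos: Tuple[float, float],
             points: List[Tuple[float, float]]) -> float:
    """Euclidean distance from pos to the nearest point (inf if none)."""
    if not points:
        return float("inf")
    return min(((pos[0] - x) ** 2 + (pos[1] - y) ** 2) ** 0.5
               for x, y in points)

def compile_policy(rules: List[Tuple[str, Predicate]]
                   ) -> Callable[[WorldState, Action], bool]:
    """'Compile' named safety rules into a single runtime monitor.

    The monitor admits an action only if every predicate holds,
    mirroring the idea of checking each action before execution.
    """
    def monitor(state: WorldState, action: Action) -> bool:
        return all(pred(state, action) for _, pred in rules)
    return monitor

# Example policy: clearance from obstacles, larger margin around humans.
# The 0.5 m and 1.5 m thresholds are arbitrary illustrative values.
rules = [
    ("no_collision",   lambda s, a: min_dist(a.target_pos, s.obstacles) > 0.5),
    ("human_boundary", lambda s, a: min_dist(a.target_pos, s.humans) > 1.5),
]
monitor = compile_policy(rules)

state = WorldState(obstacles=[(2.0, 0.0)], humans=[(0.0, 3.0)])
print(monitor(state, Action("move", (0.0, 0.0))))  # True: both margins respected
print(monitor(state, Action("move", (2.1, 0.0))))  # False: 0.1 m from an obstacle
```

In this sketch the "compilation" step is just closure construction; a real system would likely translate policies into more efficient checkable forms (e.g., temporal-logic monitors), but the control-loop placement, vetting every action against all active rules, is the same.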
