Learning Control Barrier Functions for Multi-robot Navigation

Learning-based control methods must satisfy safety requirements to be deployed in real-world robotics systems. Control barrier functions (CBFs), a leading candidate for standardizing the notion of safety in the learning community, provide a theoretical guarantee of controller safety by specifying forward-invariant safe sets of the system through a Lyapunov-like condition.
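As a concrete illustration (not taken from the paper), the forward-invariance condition can be checked numerically for a hand-designed barrier. The sketch below assumes single-integrator dynamics and a disk obstacle; all names are illustrative.

```python
import numpy as np

# Hypothetical example: a robot with single-integrator dynamics
# x_dot = u must avoid a disk obstacle of radius r centered at x_obs.
x_obs = np.array([2.0, 0.0])
r = 0.5

def h(x):
    # Barrier function: h(x) >= 0 defines the safe set
    # (everywhere outside the obstacle disk).
    return np.dot(x - x_obs, x - x_obs) - r**2

def h_dot(x, u):
    # Time derivative of h along the dynamics x_dot = u.
    return 2.0 * np.dot(x - x_obs, u)

def cbf_condition(x, u, alpha=1.0):
    # Lyapunov-like forward-invariance condition:
    # h_dot(x, u) + alpha * h(x) >= 0 keeps the safe set invariant,
    # so trajectories starting in the safe set never leave it.
    return h_dot(x, u) + alpha * h(x) >= 0.0

x = np.array([0.0, 0.0])
u_toward = np.array([1.0, 0.0])   # drives toward the obstacle
u_away = np.array([-1.0, 0.0])    # drives away from it
print(cbf_condition(x, u_toward), cbf_condition(x, u_away))
# prints: False True
```

A safety filter built on this condition would reject `u_toward` (it decreases the barrier too fast) while admitting `u_away`.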

We develop a model-based learning approach that synthesizes robust safety-critical controllers by constructing neural control barriers solely from offline data. An actor model is learned to capture the safest controls under the neural control barriers. We incorporate the actor model when optimizing the derivative of the barrier model to satisfy the Lyapunov condition, so that no optimality assumption is imposed on the controls in the data. The actor also lets us annotate unlabeled data, e.g., demonstrations of questionable safety, via out-of-distribution analysis. This best serves the offline setting, which targets learning directly from real-world demonstrations with limited labeled data.
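A minimal sketch of a Lyapunov-style training penalty for a learned barrier, assuming the common three-part loss structure for neural CBFs; the function names, margins, and loss weighting here are illustrative assumptions, not the paper's actual implementation. The key point it shows: the derivative condition is evaluated at the actor's proposed control rather than the logged control, so the data need not contain optimal actions.

```python
import numpy as np

def relu(z):
    # Hinge used to penalize only violated constraints.
    return np.maximum(z, 0.0)

def barrier_loss(h, h_dot, pi, safe_x, unsafe_x, alpha=1.0, eps=0.1):
    """Illustrative loss for training a barrier h with actor pi.

    h      : state -> scalar barrier value
    h_dot  : (state, control) -> time derivative of h
    pi     : state -> control proposed by the learned actor
    safe_x, unsafe_x : lists of states labeled safe / unsafe
    """
    # 1) Safe states should satisfy h(x) >= eps (strictly inside the set).
    l_safe = relu(eps - np.array([h(x) for x in safe_x])).mean()
    # 2) Unsafe states should satisfy h(x) <= -eps.
    l_unsafe = relu(eps + np.array([h(x) for x in unsafe_x])).mean()
    # 3) Forward-invariance condition under the actor's control:
    #    h_dot(x, pi(x)) + alpha * h(x) >= 0 on safe states.
    l_deriv = relu(-np.array([h_dot(x, pi(x)) + alpha * h(x)
                              for x in safe_x])).mean()
    return l_safe + l_unsafe + l_deriv

# Toy 1-D check: h(x) = x, safe if x > 0, actor holds position (u = 0).
toy_h = lambda x: x[0]
toy_h_dot = lambda x, u: u[0]
toy_pi = lambda x: np.array([0.0])
loss = barrier_loss(toy_h, toy_h_dot, toy_pi,
                    safe_x=[np.array([1.0])],
                    unsafe_x=[np.array([-1.0])])
print(loss)  # prints: 0.0 (all three conditions already satisfied)
```

In practice each term would be minimized by gradient descent over the barrier and actor network parameters; the toy check above only verifies that a consistent barrier/actor pair incurs zero penalty.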

We evaluate the proposed algorithm on obstacle avoidance both in simulation and on real-world platforms. Trained on a limited amount of real-world data, the new method achieves performance comparable to the DWB local planner included with ROS 2 Nav2 for static obstacle avoidance, and handles dynamic obstacle avoidance from sensory data on real-world platforms.

Team Members

Henrik Christensen1
Sicun Gao1

1. UC San Diego