MoTo: A Zero-shot Plug-in Interaction-aware Navigation for General Mobile Manipulation

CoRL 2025


Zhenyu Wu1   Angyuan Ma2   Xiuwei Xu2   Hang Yin2   Yinan Liang2   Ziwei Wang3   Jiwen Lu2   Haibin Yan1†

1Beijing University of Posts and Telecommunications  2Tsinghua University  3Nanyang Technological University


Paper (arXiv)

Abstract



Mobile manipulation stands as a core challenge in robotics, enabling robots to assist humans across varied tasks in dynamic daily environments. Conventional mobile manipulation approaches often struggle to generalize across different tasks and environments due to the lack of large-scale training data.

However, recent advances in manipulation foundation models demonstrate impressive generalization across a wide range of manipulation tasks, yet these models remain limited to fixed-base settings. We therefore devise a plug-in module named MoTo, which can be combined with any off-the-shelf manipulation foundation model to empower it with mobile manipulation ability.

Specifically, we propose an interaction-aware navigation policy that generates robot docking points for generalized mobile manipulation. To enable zero-shot operation, we introduce an interaction keypoint framework that queries vision-language models (VLMs) with multi-view consistency voting to extract instruction-following keypoints on both the target object and the robotic arm, so that fixed-base manipulation foundation models can be employed directly.
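As a concrete illustration, below is a minimal sketch of multi-view consistency voting over per-view VLM keypoint proposals: each view's 2D proposal is lifted to 3D, and the candidate supported by the most views wins. The query_vlm callback, the intrinsics K, and the poses T_world_cam are hypothetical inputs for the sketch, not MoTo's actual interface.

import numpy as np

def backproject(uv, depth, K, T_world_cam):
    # Lift a 2D pixel (u, v) with its depth into a 3D point in the world frame.
    u, v = uv
    z = depth[int(v), int(u)]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return (T_world_cam @ np.array([x, y, z, 1.0]))[:3]

def vote_keypoint(views, query_vlm, radius=0.05):
    # views: iterable of (rgb, depth, K, T_world_cam) tuples.
    # query_vlm: hypothetical callback that asks the VLM for the keypoint
    # pixel in one view, returning (u, v) or None if it is not visible.
    candidates = []
    for rgb, depth, K, T_world_cam in views:
        uv = query_vlm(rgb)
        if uv is not None:
            candidates.append(backproject(uv, depth, K, T_world_cam))
    # Consistency voting: score each candidate by how many views' proposals
    # land within `radius` meters of it, and keep the best-supported one.
    scores = [sum(np.linalg.norm(p - q) < radius for q in candidates)
              for p in candidates]
    return candidates[int(np.argmax(scores))]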

We further propose motion planning objectives for the mobile base and the robot arm that minimize the distance between the two keypoints while maintaining the physical feasibility of the trajectories.
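To make the objectives concrete, here is a minimal sketch assuming a planar base pose plus arm joint angles as decision variables; kp_arm_fn, clearance_fn, and the 0.3 m clearance margin are illustrative placeholders, not the paper's exact formulation.

import numpy as np
from scipy.optimize import minimize

def plan_docking(base0, kp_object, kp_arm_fn, joint_limits, clearance_fn):
    # Decision variables: planar base pose (x, y, yaw) plus arm joint angles.
    # kp_arm_fn maps (base, q) to the arm keypoint's 3D position (forward
    # kinematics); clearance_fn returns the base's distance to the nearest
    # obstacle. Both are assumed callables for this sketch.
    def cost(x):
        base, q = x[:3], x[3:]
        keypoint_dist = np.linalg.norm(kp_arm_fn(base, q) - kp_object)
        # Soft feasibility term: penalize base poses closer than 0.3 m
        # to obstacles (the margin is an illustrative choice).
        infeasibility = max(0.0, 0.3 - clearance_fn(base))
        return keypoint_dist + 10.0 * infeasibility
    x0 = np.concatenate([base0, np.zeros(len(joint_limits))])
    bounds = [(None, None)] * 3 + list(joint_limits)  # joint limits bound q
    return minimize(cost, x0, bounds=bounds, method="L-BFGS-B")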

In this way, MoTo guides the robot to docking points where fixed-base manipulation can be performed successfully, and leverages VLM generation and trajectory optimization to achieve mobile manipulation in a zero-shot manner, without any mobile manipulation expert data. Extensive experiments on the OVMM benchmark and in the real world demonstrate that MoTo achieves success rates 2.68% and 16.67% higher than state-of-the-art mobile manipulation methods, respectively, without requiring additional training data.


Approach


The pipeline of MoTo. The robot scans the scene with RGB-D observations to build a 3D point cloud and scene graph; a VLM with multi-view consistency voting then extracts interaction keypoints, from which mobile manipulation trajectories are generated via the proposed cost-constrained optimization.



© Zhenyu Wu | Last update: June 17, 2024