RSS 2025
V-HOP: Visuo-Haptic 6D Object Pose Tracking
¹Brown University
²The University of Texas at Dallas
![Teaser Figure](/assets/images/projects/v-hop/v-hop-teaser.jpg)
Abstract
Humans naturally integrate vision and haptics for robust object perception during manipulation. The loss of either modality significantly degrades performance. Inspired by this multisensory integration, prior object pose estimation research has attempted to combine visual and haptic/tactile feedback. Although these works demonstrate improvements in controlled environments or on synthetic datasets, they often underperform vision-only approaches in real-world settings due to poor generalization across diverse grippers, sensor layouts, or sim-to-real environments. Furthermore, they typically estimate the object pose for each frame independently, resulting in less coherent tracking over sequences in real-world deployments. To address these limitations, we introduce a novel unified haptic representation that effectively handles multiple gripper embodiments. Building on this representation, we introduce a new visuo-haptic transformer-based object pose tracker that seamlessly integrates visual and haptic input. We validate our framework on our dataset and the Feelsight dataset, demonstrating significant performance improvements on challenging sequences. Notably, our method achieves superior generalization and robustness across novel embodiments, objects, and sensor types (both taxel-based and vision-based tactile sensors). In real-world experiments, we demonstrate that our approach outperforms state-of-the-art visual trackers by a large margin. We further show that we can achieve precise manipulation tasks by incorporating our real-time object tracking results into motion plans, underscoring the advantages of visuo-haptic perception.
Methodology
![Method Figure](/assets/images/projects/v-hop/overview.png)
The visual modality, based on FoundationPose, uses a visual encoder to process RGB-D observations (real and rendered) into feature maps, which are concatenated and refined through a ResBlock to produce visual embeddings. The haptic modality encodes a unified hand-object point cloud, derived from 9D hand and object point clouds, into a haptic embedding that captures hand-object interactions. These visual and haptic embeddings are processed by Transformer encoders to estimate 3D translation and rotation.
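To make the fusion step concrete, below is a minimal PyTorch sketch of a PointNet-style haptic encoder and a transformer-based fusion head that regresses the pose update. The module names, feature dimensions, pooling choices, and the 6D rotation parameterization are illustrative assumptions for this sketch, not the released V-HOP implementation.

```python
# Minimal sketch of visuo-haptic fusion for pose regression (assumptions noted above).
import torch
import torch.nn as nn


class HapticEncoder(nn.Module):
    """Encodes the unified 9D hand-object point cloud into a single haptic embedding."""

    def __init__(self, in_dim: int = 9, embed_dim: int = 256):
        super().__init__()
        # Shared per-point MLP followed by max-pooling (PointNet-style; assumed here).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 9) -> per-point features (B, N, D) -> pooled embedding (B, D)
        return self.point_mlp(points).max(dim=1).values


class VisuoHapticPoseHead(nn.Module):
    """Fuses visual tokens and the haptic embedding with a Transformer encoder,
    then regresses a translation update (3D) and a rotation update (6D rep)."""

    def __init__(self, embed_dim: int = 256, num_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.trans_head = nn.Linear(embed_dim, 3)  # translation
        self.rot_head = nn.Linear(embed_dim, 6)    # rotation (6D parameterization assumed)

    def forward(self, visual_tokens: torch.Tensor, haptic_embed: torch.Tensor):
        # visual_tokens: (B, T, D) from the visual branch; haptic_embed: (B, D)
        tokens = torch.cat([visual_tokens, haptic_embed.unsqueeze(1)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)   # pooled joint embedding
        return self.trans_head(fused), self.rot_head(fused)


if __name__ == "__main__":
    haptic_enc = HapticEncoder()
    head = VisuoHapticPoseHead()
    pts = torch.randn(2, 1024, 9)     # unified hand-object point cloud (batch of 2)
    vis = torch.randn(2, 196, 256)    # visual feature tokens (e.g., flattened feature map)
    t, r = head(visual_tokens=vis, haptic_embed=haptic_enc(pts))
    print(t.shape, r.shape)           # torch.Size([2, 3]) torch.Size([2, 6])
```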
Tracking results
Power Drill
Sugar Box
Tomato Can
Mug
Supplementary Video
Citation
@inproceedings{li2025vhop,
  title={V-HOP: Visuo-Haptic 6D Object Pose Tracking},
  author={Li, Hongyu and Jia, Mingxi and Akbulut, Tuluhan and Xiang, Yu and Konidaris, George and Sridhar, Srinath},
  booktitle={Proceedings of Robotics: Science and Systems},
  year={2025}
}
Acknowledgements
This work is supported by the National Science Foundation (NSF) under CAREER grant #2143576 and grant #2346528, and by the Office of Naval Research (ONR) under grant #N00014-22-1-259. We thank Ying Wang, Tao Lu, Zekun Li, and Xiaoyan Cong for their valuable discussions. We thank the area chair and the reviewers for their constructive feedback on improving the quality and clarity of our paper. This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University.
Contact
Hongyu Li (contact email)