Suite-IN: Aggregating Motion Features from Apple Suite for Robust Inertial Navigation
Authors
Abstract
With the rapid development of wearable technology, devices such as smartphones, smartwatches, and headphones equipped with IMUs have become essential for applications such as pedestrian positioning. However, traditional pedestrian dead reckoning (PDR) methods struggle with diverse motion patterns, while recent data-driven approaches, though more accurate, often lack robustness because they rely on a single device. In this work, we enhance positioning performance using the low-cost commodity IMUs embedded in wearable devices. We propose Suite-IN, a multi-device deep learning framework that aggregates motion data from an Apple device suite for inertial navigation. Motion data captured by sensors on different body parts contains both local and global motion information, so it is essential to suppress the negative effects of localized movements and extract global motion representations from multiple devices. Our model introduces a contrastive learning module that disentangles motion-shared and motion-private latent representations, improving positioning accuracy. We validate our method on a self-collected dataset covering the Apple suite: iPhone, Apple Watch, and AirPods, which supports a variety of movement patterns and flexible device configurations. Experimental results demonstrate that our approach outperforms state-of-the-art models while remaining robust across diverse sensor configurations.
Venue: 2025 IEEE International Conference on Robotics and Automation (ICRA)
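
The abstract describes disentangling motion-shared and motion-private latent representations across devices via contrastive learning. The sketch below is only a minimal illustration of that general idea, not the authors' Suite-IN implementation: it assumes hypothetical per-device encoders with separate shared/private heads and an InfoNCE-style loss that pulls the shared latents of two devices observing the same motion together. All names (SharedPrivateEncoder, contrastive_shared_loss) and hyperparameters are placeholders.

    # Illustrative sketch only; NOT the paper's actual architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SharedPrivateEncoder(nn.Module):
        """Encode one device's IMU window into shared + private latents."""

        def __init__(self, in_dim: int = 6, hidden: int = 64, latent: int = 32):
            super().__init__()
            self.backbone = nn.GRU(in_dim, hidden, batch_first=True)
            self.shared_head = nn.Linear(hidden, latent)   # global motion
            self.private_head = nn.Linear(hidden, latent)  # device-local motion

        def forward(self, x: torch.Tensor):
            # x: (batch, time, in_dim) raw accelerometer + gyroscope samples
            _, h = self.backbone(x)
            h = h.squeeze(0)                               # (batch, hidden)
            return self.shared_head(h), self.private_head(h)

    def contrastive_shared_loss(z_a, z_b, temperature: float = 0.1):
        """InfoNCE-style loss: shared latents of two devices recording the
        same motion window are positives (diagonal), all others negatives."""
        z_a = F.normalize(z_a, dim=-1)
        z_b = F.normalize(z_b, dim=-1)
        logits = z_a @ z_b.t() / temperature               # (batch, batch)
        targets = torch.arange(z_a.size(0))                # positives on diagonal
        return F.cross_entropy(logits, targets)

    if __name__ == "__main__":
        phone_enc, watch_enc = SharedPrivateEncoder(), SharedPrivateEncoder()
        imu_phone = torch.randn(8, 100, 6)   # 8 windows, 100 samples, 6 axes
        imu_watch = torch.randn(8, 100, 6)   # synchronized watch windows
        s_phone, _ = phone_enc(imu_phone)
        s_watch, _ = watch_enc(imu_watch)
        loss = contrastive_shared_loss(s_phone, s_watch)
        print(f"contrastive loss on shared latents: {loss.item():.3f}")

Under this framing, the private heads are free to absorb device-specific artifacts (e.g., arm swing at the wrist), while the contrastive objective encourages the shared latents to capture the global pedestrian motion used for position estimation.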