SeePerSea: Multimodal Perception Dataset of In-Water Objects for Autonomous Surface Vehicles
/ Authors
/ Abstract
This article introduces the first publicly accessible labeled multimodal perception dataset for autonomous maritime navigation, focusing on in-water obstacles to enhance situational awareness for autonomous surface vehicles (ASVs). The dataset, collected over four years, covers diverse objects encountered under varying environmental conditions and aims to bridge a research gap in ASV perception by providing a multimodal, annotated, ego-centric dataset for object detection and classification. We demonstrate the dataset's applicability by training and testing current open-source, deep learning-based perception algorithms that have proven successful in the autonomous ground vehicle domain. Based on the training and testing results, we discuss open challenges for existing datasets and methods and identify future research directions. We expect our dataset to contribute to the development of future marine autonomy pipelines and marine (field) robotics. The dataset is open source and available at https://seepersea.github.io/
Journal: IEEE Transactions on Field Robotics