Mizuki Takenawa, Naoki Sugimoto, Leslie Wöhler, Satoshi Ikehata, Kiyoharu Aizawa
We propose to build realistic virtual worlds, called 360RVW, for large urban environments directly from 360° videos. We provide an interface for interactive exploration, where users can freely navigate via their own avatars. 360° videos record the entire environment of the shooting location simultaneously, leading to highly realistic and immersive representations. Our system uses 360° videos recorded along streets and builds a 360RVW through four main operations: video segmentation by intersection detection, video completion to remove the videographer, semantic segmentation for virtual collision detection with the avatar, and projection onto a distorted sphere that moves along the camera trajectory following the avatar's movements. Our interface allows users to explore large urban environments by changing their walking direction at intersections or choosing a new location by clicking on a map. Even without a 3D model, users can experience collisions with buildings using metadata produced by semantic segmentation. Furthermore, we stream the 360° videos so users can directly access 360RVW via their web browser. We fully evaluate our system, including a perceptual experiment comparing our approach to previous exploratory interfaces. The results confirm the quality of our system, especially regarding users' sense of presence and interactive exploration, making it well suited for virtual tours of urban environments.
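The sphere-projection step above relies on the standard equirectangular mapping of 360° video: each viewing direction corresponds to one pixel of the panoramic frame. A minimal sketch of that mapping, assuming a y-up axis convention and ignoring the paper's sphere distortion and trajectory-following logic:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit viewing direction to pixel coordinates in an
    equirectangular 360° frame (y is up; axis convention assumed)."""
    lon = math.atan2(x, z)                    # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, y)))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width   # horizontal pixel coordinate
    v = (0.5 - lat / math.pi) * height        # vertical pixel coordinate
    return u, v
```

For example, the forward direction (0, 0, 1) lands at the center of the frame. Renderers typically apply the inverse of this mapping per fragment when texturing the viewing sphere.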
Tatsuro Banno, Koki Kawada, Mizuki Takenawa, Masatoshi Denda, Kiyoharu Aizawa
Virtual flood experience systems, which enable users to vividly experience flooding, are attracting increasing attention as effective tools for communicating flood risks. However, existing systems typically rely on virtual cities that do not correspond to real locations and often lack sufficient photorealism, limiting users' ability to relate scenarios to their own surroundings. Although 360° video-based virtual environments offer a simple and scalable way to visually replicate real-world scenes, effective 3D flood visualization in these environments typically requires 3D building geometry of the target area, which is not readily available in many regions. To address this limitation, we propose a new virtual flood experience framework that integrates 360° videos with 3D models automatically constructed from widely available 2D building footprints. By extruding footprints to plausible heights and spatially aligning the constructed models with 360° videos, our framework enables 3D flood visualization in photorealistic environments without relying on pre-existing city models such as CityGML. We demonstrate the framework in Memuro, Hokkaido, Japan, an area vulnerable to river flooding. A user study with local residents showed that the proposed system enhances users' ability to envision location-specific flood evacuation situations, demonstrating its potential as an effective tool for disaster risk communication and education.
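The footprint-extrusion step can be sketched as lifting each 2D building footprint polygon into a prism of an assumed height, yielding wall triangles for rendering and flood-water occlusion. The function name and the flat-list triangle representation are illustrative, not the paper's implementation:

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D footprint (list of (x, y) vertices, counter-clockwise)
    into the wall triangles of a prism with the given height.
    Returns a list of triangles, each a tuple of three (x, y, z) points."""
    triangles = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        base0, base1 = (x0, y0, 0.0), (x1, y1, 0.0)
        top0, top1 = (x0, y0, height), (x1, y1, height)
        # two triangles per wall quad
        triangles.append((base0, base1, top1))
        triangles.append((base0, top1, top0))
    return triangles
```

A rectangular footprint thus yields eight wall triangles; roof triangulation (e.g. by ear clipping) is omitted for brevity.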
Tatsuro Banno, Mizuki Takenawa, Leslie Wöhler, Satoshi Ikehata, Kiyoharu Aizawa
We introduce a novel urban visualization system that integrates a 3D urban model (CityGML) with 360° walkthrough videos. By aligning the videos with the model and dynamically projecting relevant video frames onto the geometries, our system creates photorealistic urban visualizations, allowing users to intuitively interpret geospatial data from a pedestrian viewpoint.
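The alignment step described above requires expressing model geometry in the coordinate frame of the registered 360° camera before the frame lookup. A minimal sketch handling only camera position and yaw (full roll/pitch alignment and the frame lookup itself are omitted; names are illustrative):

```python
import math

def world_to_camera(point, cam_pos, cam_yaw_deg):
    """Transform a world-space (x, y, z) point into the frame of a
    360° camera aligned by position and yaw about the vertical y axis."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    c = math.cos(math.radians(-cam_yaw_deg))
    s = math.sin(math.radians(-cam_yaw_deg))
    # rotate the offset by the inverse of the camera yaw
    return (c * dx + s * dz, dy, -s * dx + c * dz)
```

The resulting camera-local direction can then be mapped to equirectangular pixel coordinates of the current video frame for projective texturing.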