MyUnderwaterWorld: Intelligent Underwater Scene Representation

Underwater Scene Enhancement and 3D Modelling

(Left) Raw video by Piyapong Suwannakul and (Right) Processed video using VI-Lab tools. See video HERE

Aim

Our oceans have been explored for hundreds of years, and this activity is becoming increasingly important because of the need to manage and conserve mineral and biological resources effectively, as well as to better understand planetary-scale processes including tectonics and marine hazards. Exploration and analysis are, however, limited by the number of available diving experts, by technology, and, in particular, by cost. Advanced imaging methods now support a new paradigm of remote discovery, in which onshore experts with specific knowledge, such as geologists, archaeologists and biologists, can remotely model and explore underwater scenes.

The underwater environment presents a combination of challenges. Water is a dynamic medium in which suspended particles are constantly moving. Light scattering causes blur and halo effects, whilst light absorption leads to colour distortion and reduced contrast. A model of underwater imagery should therefore comprise temporally and spatially variant distortion, uneven intensity bias, multiplicative noise, and additive noise. This project aims to exploit underwater image priors so that 3D mapping can be performed directly on raw underwater sequences.
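The degradation described above can be illustrated with a toy simulator. This is a minimal sketch using the common per-channel attenuation form I = J·t + B·(1 − t) plus multiplicative and additive noise; the attenuation coefficients, backscatter colour, and noise levels are illustrative assumptions, not the project's calibrated model:

```python
import numpy as np

def degrade_underwater(clean, beta=(0.8, 0.3, 0.1), backlight=(0.05, 0.25, 0.35),
                       depth=2.0, sigma_mult=0.05, sigma_add=0.02, seed=0):
    """Apply a simplified underwater degradation to an RGB image in [0, 1].

    Uses the attenuation model I = J * t + B * (1 - t), with per-channel
    transmission t = exp(-beta * depth), then adds multiplicative and
    additive Gaussian noise. All parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    clean = np.asarray(clean, dtype=np.float64)
    t = np.exp(-np.asarray(beta) * depth)            # red attenuates fastest
    degraded = clean * t + np.asarray(backlight) * (1.0 - t)
    degraded *= 1.0 + sigma_mult * rng.standard_normal(clean.shape)
    degraded += sigma_add * rng.standard_normal(clean.shape)
    return np.clip(degraded, 0.0, 1.0)

# A mid-grey patch loses red and picks up a blue-green cast:
patch = np.full((4, 4, 3), 0.5)
out = degrade_underwater(patch)
```

Running this on a neutral grey patch shows the characteristic colour shift: the red channel mean drops well below the blue channel mean, mimicking the colour distortion that restoration methods must invert.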

Collaborators
National Park Service Submerged Resources Center, Woods Hole Oceanographic Institution, Marine Imaging Technologies, National Oceanic and Atmospheric Administration, Gates Underwater Products, Esprit Film & Television, Beam UK
Funder
EPSRC ECR International Collaboration Grant (EP/Y002490/1), UKRI MyWorld Strength in Places Programme (SIPF00006/1), EPSRC IAA

Methods

  • RUSplatting: Robust 3D Gaussian Splatting for Sparse-View Underwater Scene Reconstruction (BMVC2025)

    [PDF] [Code] [Dataset]

    Our enhanced Gaussian Splatting framework improves both visual quality and geometric accuracy in underwater rendering. We employ physics-guided decoupled RGB learning for accurate colour restoration, a frame interpolation strategy with adaptive weighting to address sparse views, and a new loss function that reduces noise while preserving edges, which is crucial for deep-sea content.

  • SWAGSplatting: Semantic-guided Water-scene Augmented Gaussian Splatting

    [PDF]

    We present a semantic-guided 3D Gaussian Splatting framework for deep-sea scene reconstruction, where each Gaussian embeds CLIP-derived features to enforce semantic and structural consistency. A dedicated semantic loss and stage-wise training strategy further enhance stability and reconstruction fidelity.
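To make the edge-preserving objective in RUSplatting concrete, here is a minimal sketch of a generic edge-aware loss: an L1 reconstruction term plus a smoothness term whose weight is switched off at target edges. The structure, the hyperparameters `lam` and `alpha`, and the finite-difference gradient are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def grad_mag(img):
    """Finite-difference gradient magnitude of a 2D image, cropped to a common shape."""
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    return gx + gy

def edge_aware_loss(rendered, target, lam=0.1, alpha=10.0):
    """L1 reconstruction plus edge-aware smoothness.

    The smoothness term penalises gradients in the rendered image, but the
    exp(-alpha * |grad target|) weight suppresses the penalty at target
    edges, so flat (noisy) regions get smoothed while edges survive.
    lam and alpha are illustrative hyperparameters.
    """
    recon = np.mean(np.abs(rendered - target))
    weight = np.exp(-alpha * grad_mag(target))
    smooth = np.mean(weight * grad_mag(rendered))
    return recon + lam * smooth
```

On a clean step-edge target, a noisy render scores a strictly higher loss than the clean one, while the edge itself contributes almost nothing to the smoothness penalty.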
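The semantic consistency idea in SWAGSplatting can be sketched as a cosine-similarity loss between per-Gaussian embeddings and target features distilled from a CLIP-style encoder. This is a schematic stand-in: the feature dimension, the Gaussian-to-target pairing (which in practice would come from projecting Gaussians into training views), and the loss form are all assumptions for illustration:

```python
import numpy as np

def semantic_loss(gaussian_feats, target_feats):
    """Mean (1 - cosine similarity) between per-Gaussian semantic embeddings
    and their paired target features, e.g. distilled from a CLIP-style image
    encoder. Both arrays have shape (N, D); how each Gaussian is matched to
    a target feature is omitted here.
    """
    g = gaussian_feats / np.linalg.norm(gaussian_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(g * t, axis=1)))
```

The loss is zero when embeddings align with their targets and approaches one for orthogonal features, giving a simple scalar that can be added to the photometric objective during stage-wise training.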


Research team

Core
Undergrad/Postgrad projects
  • [Best AI Project Prize] Zhuodong Jiang (2024/2025), Underwater 3D Gaussian Splatting With Frame Interpolation, Colour Channel Decoupling and Adaptive Bilateral Filtering [Thesis] [Paper] [Submerged3D Dataset]
  • Luca Gough (2023/2024), 3D Representation of Underwater Scenes using Neural Radiance Fields [Thesis] [Paper]
  • George Atkinson (2023/2024), Generative Deep Learning for Temporally Consistent Underwater Video Enhancement [Thesis]

Downloads

Publications
White papers
Datasets
  • BVI-Coral: Underwater scenes for 3D reconstruction [dataset]
  • S-UW: Underwater images from shallow water areas [dataset]
  • Submerged3D: Deep underwater environments (in collaboration with National Park Service Submerged Resources Center) [dataset]

Related publications from VI-Lab
Denoising in different modalities