Publication | Open Access
True Color Correction of Autonomous Underwater Vehicle Imagery
Citations: 126 | References: 38 | Year: 2015
Keywords: Engineering, Seafloor Mapping, Underwater Systems, Field Robotics, Color Correction, Oceanography, Underwater Imaging, Image Analysis, Stereo Vision, ROV Observation, Image-based Modeling, Computational Imaging, Underwater 3D Reconstruction, Machine Vision, Acoustic Communications, True Color, Structure from Motion, Automated Approach, Computer Vision, True Color Correction, Reflectance Panels, Underwater Vehicle, Remote Sensing, Underwater Sensing
Underwater imaging is distorted by color‑dependent attenuation, backscatter, and uneven artificial lighting, which hampers visual interpretation, benthic identification, and automated classification. The study proposes an automated method to recover the true color of seafloor objects from AUV images used in 3D modeling and mosaicking. By combining 3D scene structure recovered via structure‑from‑motion with an underwater image‑formation model, the method estimates water‑column parameters directly from the imagery, correcting for attenuation, backscatter, vignetting, and the artificial lighting pattern. The corrected images accurately recover reflectance, resembling photographs taken in air, and the method is validated against color targets across multiple AUV deployments.
This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three‐dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color‐dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20‐30 m; the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability to use color to identify benthic biota or quantify changes over multiple dives, and confound computer‐based techniques for clustering and classification. Our approach exploits the 3D structure of the scene generated using structure‐from‐motion and photogrammetry techniques to provide basic spatial data to an underwater image formation model. Parameters that are dependent on the properties of the water column are estimated from the image data itself, rather than using fixed in situ infrastructure, such as reflectance panels or detailed data on water constituents. The model accounts for distance‐based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color (reflectance) and thus allowing us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.
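To illustrate the kind of correction described above, the sketch below inverts a simplified underwater image‑formation model with distance‑based attenuation and backscatter, given per‑pixel scene distances such as those a structure‑from‑motion reconstruction provides. This is a minimal, hypothetical example, not the paper's full method: the function name and parameters are illustrative, the model omits the vignetting and lighting‑pattern terms the paper also estimates, and the coefficients are assumed known rather than estimated from the imagery.

```python
import numpy as np

def correct_underwater_color(image, distance, attenuation, backscatter):
    """Invert a simplified underwater image-formation model.

    image       : (H, W, 3) float array of raw intensities in [0, 1]
    distance    : (H, W) array of camera-to-scene distances in metres
                  (e.g. from a structure-from-motion 3D reconstruction)
    attenuation : length-3 per-channel attenuation coefficients (1/m)
    backscatter : length-3 per-channel backscatter intensities

    Per channel c, the observed intensity is modeled as
        I_c = J_c * exp(-a_c * d) + B_c * (1 - exp(-a_c * d)),
    so with transmission T = exp(-a_c * d) the direct signal
    (proportional to reflectance) is recovered as
        J_c = (I_c - B_c * (1 - T)) / T.
    """
    d = distance[..., None]                    # broadcast over channels
    T = np.exp(-np.asarray(attenuation) * d)   # per-channel transmission
    B = np.asarray(backscatter)
    J = (image - B * (1.0 - T)) / T
    return np.clip(J, 0.0, 1.0)                # keep result in valid range
```

Red light attenuates fastest underwater, so the red-channel coefficient is typically largest; dividing by the small red transmission is what restores the reds that appear washed out in raw AUV imagery.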