Publication | Open Access
Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging
54 Citations | 25 References | 2024
Keywords: Engineering, Sparse Imaging, Fringe Pattern, Fringe Pattern Aliasing, Image Analysis, Differentiable Rendering, Image-based Modeling, Three-dimensional Imaging, Computational Imaging, Radiology, Health Sciences, Machine Vision, Synchronization Constraint, Medical Image Computing, Deep Learning, Computational Optical Imaging, Optical Imaging, Computer Vision, Generative Adversarial Network, Biomedical Imaging, Structured Light, 3D Imaging
Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured-light methods rely on projector–camera synchronization, which precludes the use of affordable imaging devices and limits consumer applications. In this work, we introduce an asynchronous structured-light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and addresses the resulting challenge of fringe pattern aliasing, without relying on any a priori constraint of the projection system. To this end, we propose a generative deep neural network with a U-Net-like encoder–decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic prior structure of aliased fringe patterns. The network is trained within an adversarial learning framework and supervised by a statistics-informed loss function. We evaluate its performance in terms of intensity, phase, and 3D reconstruction, and show that the trained network can separate aliased fringe patterns and produce results comparable to those of synchronous imaging: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation on multiple objects and pattern types shows that the approach generalizes to arbitrary asynchronous structured-light scenes.
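To illustrate the aliasing problem the network must undo, the following is a minimal NumPy sketch of how an unsynchronized exposure can blend two consecutive phase-shifted fringe projections into a single aliased frame. The additive blend model, the pattern parameters, and the `fringe_pattern` helper are illustrative assumptions for this sketch, not the paper's formulation.

```python
import numpy as np

def fringe_pattern(shape, period_px, phase, amplitude=0.5, offset=0.5):
    """Ideal sinusoidal fringe image: I(x) = offset + amplitude*cos(2*pi*x/period + phase).
    (Hypothetical helper for illustration only.)"""
    h, w = shape
    x = np.arange(w)
    row = offset + amplitude * np.cos(2 * np.pi * x / period_px + phase)
    return np.tile(row, (h, 1))

# Two consecutive projected patterns, phase-shifted by pi/2 as in
# standard phase-shifting profilometry.
p1 = fringe_pattern((64, 128), period_px=16, phase=0.0)
p2 = fringe_pattern((64, 128), period_px=16, phase=np.pi / 2)

# Asynchronous capture: the camera exposure straddles both projections,
# so the recorded frame is a weighted blend of the two patterns
# (alpha = fraction of the exposure overlapping the first pattern).
alpha = 0.6
aliased = alpha * p1 + (1 - alpha) * p2
```

Because a weighted sum of two equal-frequency sinusoids is itself a sinusoid with an intermediate, exposure-dependent phase, the aliased frame no longer carries the known phase shifts that synchronous phase-shifting reconstruction relies on; this is the ambiguity the generative network is trained to resolve by separating the aliased frame back into its constituent patterns.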