Publication | Closed Access
DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion
Citations: 246 | References: 62 | Year: 2023 | Venue: Unknown
Keywords: Engineering, Machine Learning, Multi-image Fusion, Image Analysis, Data Science, Pattern Recognition, Generative Model, Multi-modality Image Fusion, Radiology, Health Sciences, Synthetic Image Generation, Medical Imaging, Inverse Problems, Novel Fusion Algorithm, Human Image Synthesis, Deep Learning, Medical Image Computing, Computer Vision, Generative Adversarial Network, Denoising Diffusion Model, Biomedical Imaging, Maximum Likelihood Subproblem, Multi-focus Image Fusion, Image Denoising
Multi-modality image fusion aims to combine different modalities to produce fused images that retain the complementary features of each modality, such as functional highlights and texture details. To leverage strong generative priors and to address challenges of GAN-based generative methods, such as unstable training and lack of interpretability, we propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM). The fusion task is formulated as a conditional generation problem under the DDPM sampling framework, which is further divided into an unconditional generation subproblem and a maximum likelihood subproblem. The latter is modeled in a hierarchical Bayesian manner with latent variables and inferred by the expectation-maximization (EM) algorithm. By integrating the inference solution into the diffusion sampling iteration, our method can generate high-quality fused images with natural image generative priors and cross-modality information from source images. Note that all we require is an unconditional pre-trained generative model, and no fine-tuning is needed. Our extensive experiments indicate that our approach yields promising fusion results in infrared-visible image fusion and medical image fusion. The code is available at https://github.com/Zhaozixiang1228/MMIF-DDFM.
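The abstract describes a two-part reverse-sampling loop: an unconditional DDPM denoising step followed by a likelihood correction derived from the EM-inferred maximum likelihood subproblem. The toy sketch below illustrates that control flow only; the prior update and the closed-form likelihood correction are hypothetical stand-ins, not the authors' actual network or EM solution.

```python
import numpy as np

def ddfm_sampling_sketch(ir, vis, timesteps=50, seed=0):
    """Toy sketch of DDFM-style conditional sampling.

    Each reverse iteration has two parts, mirroring the paper's split:
      1) an unconditional generation subproblem (here: a placeholder
         contraction-plus-noise update standing in for a pretrained
         diffusion prior step), and
      2) a maximum likelihood subproblem (here: a toy closed-form pull
         toward information from both source modalities, standing in
         for the EM-inferred solution).
    """
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(ir.shape)  # start from Gaussian noise
    for t in range(timesteps, 0, -1):
        # 1) placeholder unconditional prior update; noise shrinks as t -> 0
        f = 0.95 * f + 0.05 * (t / timesteps) * rng.standard_normal(ir.shape)
        # 2) placeholder likelihood correction: move the fused estimate
        #    toward a simple combination of the two modalities
        target = 0.5 * (ir + vis)
        f = f + 0.2 * (target - f)
    return f
```

A usage example: `ddfm_sampling_sketch(ir_img, vis_img)` returns an array of the same shape as the inputs whose values settle near a blend of the two modalities, showing how cross-modality information enters at every sampling step rather than only at the end.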