Publication | Open Access
TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation
87 Citations · 41 References · Year: 2020
Keywords: Medical Image Segmentation, Engineering, Machine Learning, Tumor Segmentation, Brain Tumor Segmentation, Tumor Segmentation Performance, Neuro-oncology, Image Analysis, Data Science, Generative Model, Radiology, Synthetic Image Generation, Health Sciences, Data Augmentation, Machine Vision, Medical Imaging, Segmentation Network Training, Neuroimaging, Human Image Synthesis, Deep Learning, Medical Image Computing, Computer Vision, Generative Adversarial Network, Biomedical Imaging, Neuroscience, Generative AI, Medical Image Analysis, Image Segmentation
The heavy manual effort required to collect paired medical imaging data severely impedes the application of deep learning to medical image processing tasks such as tumor segmentation, and the problem is compounded when multi-modal image pairs must be collected. This shortage can be alleviated with generative adversarial networks, which can synthesize realistic images. In this work, we propose a novel framework, TumorGAN, that generates image-segmentation pairs through unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss that strengthens the discriminator, and we develop a regional L1 loss that constrains the color of the synthesized brain tissue. Finally, we evaluate TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our method practically improve tumor segmentation performance when used to train a segmentation network.
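The abstract's "regional L1 loss" can be read as an L1 pixel difference restricted to a region of interest (e.g. normal brain tissue), so that only that region constrains the synthesized intensities. The following NumPy sketch illustrates that reading; the function name and formulation are assumptions for illustration, not the paper's actual implementation (which would operate on network tensors during training).

```python
import numpy as np

def regional_l1_loss(generated: np.ndarray,
                     real: np.ndarray,
                     region_mask: np.ndarray) -> float:
    """Hedged sketch of a region-restricted L1 loss.

    Only pixels where `region_mask` is nonzero contribute, so the loss
    constrains the intensity/color of the masked tissue region while
    leaving the rest of the image (e.g. the tumor area) unconstrained.
    """
    mask = region_mask.astype(bool)
    if not mask.any():           # empty region: nothing to penalize
        return 0.0
    # Mean absolute difference over the masked pixels only.
    return float(np.abs(generated - real)[mask].mean())
```

For example, with a mask covering half the image, differences outside the mask are ignored entirely, which is the point of making the loss "regional".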