Concepedia

TLDR

Colorectal cancer is the third leading cause of cancer death worldwide, and while regular colonoscopy screening reduces mortality, it suffers from high polyp miss rates and limited visual assessment of malignancy. This study introduces an extended benchmark for colonoscopy image segmentation to support decision‑making systems that mitigate these screening limitations. The benchmark comprises four clinically relevant classes and provides baseline fully convolutional network models trained on the dataset. Fully convolutional networks outperform previous methods in endoluminal scene segmentation, notably improving polyp detection and localization without additional post‑processing.

Abstract

Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of four clinically relevant classes for inspecting the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
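Comparative studies of segmentation baselines like the one described above are commonly scored with per-class intersection over union (IoU). As an illustration only (the exact metric and class labels used by the paper are not stated here), a minimal NumPy sketch of per-class IoU over integer label maps might look like:

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Compute intersection over union for each class label.

    pred, target: integer label maps of identical shape.
    Returns one IoU per class (NaN when the class is absent from both maps).
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(float(inter / union) if union > 0 else float("nan"))
    return ious

# Toy example with four classes; the label semantics (e.g. background,
# polyp, lumen, specularity) are hypothetical placeholders.
pred = np.array([[0, 1], [2, 3]])
target = np.array([[0, 1], [2, 2]])
print(per_class_iou(pred, target, 4))
```

Reporting IoU per class rather than a single pooled score keeps rare but clinically important classes (such as polyps) from being masked by large easy regions like background.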
