Publication | Closed Access
Interleaved text/image Deep Mining on a large-scale radiology database
Citations: 96 | References: 52 | Year: 2015 | Venue: unknown
Despite tremendous progress in computer vision, effective learning on very large-scale (> 100K patients) medical image databases has been severely hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of roughly 216K representative 2D key images/slices (selected by clinicians for diagnostic reference) with text-derived scalar and vector labels. Our system alternates between unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural network language models) on document- and sentence-level text to generate semantic labels, and supervised learning via deep convolutional neural networks (CNNs) that map images to those label spaces. Disease-related keywords can then be predicted for radiology images in a retrieval setting. We demonstrate promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels, and sentence descriptions can be harnessed to alleviate the "data-hungry" obstacle facing deep learning in the medical domain.
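The first half of the interleaved procedure the abstract describes (unsupervised topic modeling on report text to produce labels that supervise later CNN training on the paired key images) can be sketched roughly as follows. The toy reports, vectorizer settings, and topic count are illustrative assumptions, not details taken from the paper.

```python
# Sketch of text-driven label generation: latent Dirichlet allocation (LDA)
# over radiology report text yields per-document topic assignments, which can
# then serve as scalar labels for supervised CNN training on the key images.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical miniature corpus of report sentences (not from the paper's data).
reports = [
    "mild cardiomegaly with clear lungs",
    "no acute cardiopulmonary abnormality",
    "right lower lobe opacity suggesting pneumonia",
    "stable cardiomegaly, no pleural effusion",
    "left lower lobe consolidation, possible pneumonia",
]

# Bag-of-words counts over the document-level text.
counts = CountVectorizer().fit_transform(reports)

# Fit a small LDA model; n_components=2 is an arbitrary choice for this toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape: (n_reports, n_topics)

# The dominant topic per report becomes a scalar label for its paired image.
labels = doc_topics.argmax(axis=1)
print(labels.shape)  # one label per report
```

In the full system these labels would be the supervision targets for a CNN over the associated 2D key images, with the loop repeating as refined labels are produced.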