Publication | Closed Access
Convolutional neural networks for histopathology image classification: Training vs. Using pre-trained networks
Year: 2017 · Citations: 138 · References: 22 · Venue: unknown
Keywords: Convolutional Neural Networks, Engineering, Machine Learning, Digital Pathology, Histopathology Image Classification, Feature Vector, Image Classification, Pre-training, Image Analysis, Data Science, Pattern Recognition, Radiology, Medical Imaging, Feature Learning, Histopathology, Deep Learning, Medical Image Computing, Computer Vision, Computer-aided Diagnosis, Transfer Learning, Medicine, Medical Image Analysis
The study investigates whether deep features from pre‑trained convolutional neural networks can match or surpass models trained from scratch for histopathology image classification, and how transfer learning with limited samples affects performance. Feature vectors were extracted from the deepest layers of several pre‑trained CNNs, with and without transfer learning, and evaluated on the Kimia Path24 dataset comprising 27,055 training patches across 24 tissue classes and 1,325 test patches. Pre‑trained networks performed comparably to models trained from scratch; fine‑tuning VGG16 offered no benefit, whereas fine‑tuning Inception significantly improved retrieval and classification accuracy.
We explore the problem of classification within a medical image dataset based on feature vectors extracted from the deepest layer of pre-trained Convolutional Neural Networks. We used feature vectors from several pre-trained architectures, with and without transfer learning, to evaluate the performance of pre-trained deep features against CNNs trained from scratch on the specific dataset, as well as the impact of transfer learning with a small number of samples. All experiments were conducted on the Kimia Path24 dataset, which consists of 27,055 histopathology training patches in 24 tissue texture classes along with 1,325 test patches for evaluation. The results show that pre-trained networks are quite competitive against training from scratch. Moreover, fine-tuning did not add any tangible improvement for VGG16 to justify the additional training, whereas we observed considerable improvement in retrieval and classification accuracy when fine-tuning the Inception architecture.
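The pipeline the abstract describes, classifying patches by feature vectors taken from the deepest layer of a pre-trained CNN, can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's code: it stands in for the CNN by generating small random "deep feature" vectors (in practice these would come from, e.g., VGG16's last fully connected layer) and then classifies a query patch with a simple k-nearest-neighbour vote over cosine similarity, one common way to use frozen deep features for retrieval and classification.

```python
import numpy as np

# Hypothetical stand-in for deep features: in the real pipeline each patch
# would be pushed through a pre-trained CNN and the deepest-layer activations
# flattened into a vector (e.g. 4096-D for VGG16). Here we fabricate compact
# 8-D features for 3 of the 24 tissue classes, clustered around class centres.
rng = np.random.default_rng(0)
n_classes, dim, per_class = 3, 8, 50
centers = 2.0 * rng.normal(size=(n_classes, dim))
train_feats = np.vstack(
    [c + 0.1 * rng.normal(size=(per_class, dim)) for c in centers]
)
train_labels = np.repeat(np.arange(n_classes), per_class)

def classify(test_feat, feats, labels, k=5):
    """k-NN vote over cosine similarity between deep feature vectors."""
    a = test_feat / np.linalg.norm(test_feat)
    b = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = b @ a                          # cosine similarity to every patch
    top = labels[np.argsort(sims)[-k:]]   # labels of the k nearest patches
    return np.bincount(top, minlength=labels.max() + 1).argmax()

# A query patch whose features lie near the class-1 centre.
query = centers[1] + 0.1 * rng.normal(size=dim)
print(classify(query, train_feats, train_labels))
```

Because the classifier operates on fixed feature vectors, swapping a fine-tuned network for a frozen one only changes how the vectors are produced, which is exactly the comparison the study makes between off-the-shelf and fine-tuned VGG16/Inception features.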