Torres, Felipe S; Akbar, Shazia; Raman, Srinivas; Yasufuku, Kazuhiro; Schmidt, Carola; Hosny, Ahmed; Baldauf-Lenschen, Felix; Leighl, Natasha B End-to-End Non-Small-Cell Lung Cancer Prognostication Using Deep Learning Applied to Pretreatment Computed Tomography Journal Article JCO Clin Cancer Inform, 5, pp. 1141-1150, 2021. @article{Torres2021, title = {End-to-End Non-Small-Cell Lung Cancer Prognostication Using Deep Learning Applied to Pretreatment Computed Tomography}, author = {Felipe S Torres and Shazia Akbar and Srinivas Raman and Kazuhiro Yasufuku and Carola Schmidt and Ahmed Hosny and Felix Baldauf-Lenschen and Natasha B Leighl}, url = {https://pubmed.ncbi.nlm.nih.gov/34797702/}, doi = {10.1200/CCI.21.00096}, year = {2021}, date = {2021-10-05}, journal = {JCO Clin Cancer Inform}, volume = {5}, pages = {1141-1150}, abstract = {Purpose: Clinical TNM staging is a key prognostic factor for patients with lung cancer and is used to inform treatment and monitoring. Computed tomography (CT) plays a central role in defining the stage of disease. Deep learning applied to pretreatment CTs may offer additional, individualized prognostic information to facilitate more precise mortality risk prediction and stratification. Methods: We developed a fully automated imaging-based prognostication technique (IPRO) using deep learning to predict 1-year, 2-year, and 5-year mortality from pretreatment CTs of patients with stage I-IV lung cancer. Using six publicly available data sets from The Cancer Imaging Archive, we performed a retrospective five-fold cross-validation using pretreatment CTs of 1,689 patients, of whom 1,110 were diagnosed with non-small-cell lung cancer and had available TNM staging information. We compared the association of IPRO and TNM staging with patients' survival status and assessed an Ensemble risk score that combines IPRO and TNM staging.
Finally, we evaluated IPRO's ability to stratify patients within TNM stages using hazard ratios (HRs) and Kaplan-Meier curves. Results: IPRO showed similar prognostic power (concordance index [C-index] 1-year: 0.72, 2-year: 0.70, 5-year: 0.68) compared with that of TNM staging (C-index 1-year: 0.71, 2-year: 0.71, 5-year: 0.70) in predicting 1-year, 2-year, and 5-year mortality. The Ensemble risk score yielded superior performance across all time points (C-index 1-year: 0.77, 2-year: 0.77, 5-year: 0.76). IPRO stratified patients within TNM stages, discriminating between highest- and lowest-risk quintiles in stages I (HR: 8.60), II (HR: 5.03), III (HR: 3.18), and IV (HR: 1.91). Conclusion: Deep learning applied to pretreatment CT combined with TNM staging enhances prognostication and risk stratification in patients with lung cancer.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
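The IPRO and TNM comparisons above are reported as concordance indices (C-index), which measure how often the model assigns a higher risk to the patient who dies earlier. As a minimal, hedged sketch of the metric itself (not the authors' implementation), Harrell's C-index with right-censoring can be computed as:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable patient pairs, the fraction
    where the higher predicted risk corresponds to the earlier event.
    A pair (i, j) is comparable when patient i's follow-up is shorter
    and patient i's event was observed (not censored)."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # tied risk scores get half credit
    return concordant / comparable

# Toy cohort (hypothetical numbers): higher risk aligns with earlier death.
times = [2, 5, 7, 10]          # follow-up in years
events = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
risks = [0.9, 0.7, 0.4, 0.1]   # model risk scores
cindex = concordance_index(times, events, risks)  # 1.0: perfectly ranked
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.68-0.77 values in context.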
Petrick, Nicholas; Akbar, Shazia; Cha, Kenny H; Nofech-Mozes, Sharon; Sahiner, Berkman; Gavrielides, Marios A; Kalpathy-Cramer, Jayashree; Drukker, Karen; Martel, Anne L; BreastPathQ Challenge Group SPIE-AAPM-NCI BreastPathQ challenge: an image analysis challenge for quantitative tumor cellularity assessment in breast cancer histology images following neoadjuvant treatment Journal Article Journal of Medical Imaging, 8 (3), 2021. @article{Petrick2021, title = {SPIE-AAPM-NCI BreastPathQ challenge: an image analysis challenge for quantitative tumor cellularity assessment in breast cancer histology images following neoadjuvant treatment}, author = {Nicholas Petrick and Shazia Akbar and Kenny H. Cha and Sharon Nofech-Mozes and Berkman Sahiner and Marios A. Gavrielides and Jayashree Kalpathy-Cramer and Karen Drukker and Anne L. Martel and BreastPathQ Challenge Group}, url = {http://dx.doi.org/10.1117/1.JMI.8.3.034501}, year = {2021}, date = {2021-05-08}, journal = {Journal of Medical Imaging}, volume = {8}, number = {3}, abstract = {Purpose: The breast pathology quantitative biomarkers (BreastPathQ) challenge was a grand challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment. Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard. Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms.
The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining comparable results to the average interrater PK of 0.927 from the two pathologists providing the reference TC scores. Conclusions: The SPIE-AAPM-NCI BreastPathQ challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ challenge can be accessed on the Grand Challenge website.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
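The BreastPathQ ranking metric was the average prediction probability concordance (PK). The official challenge scorer is not reproduced here; the sketch below illustrates the underlying idea with a simplified pairwise concordance in which correctly ordered pairs score 1 and tied predictions score 0.5 (an assumption for illustration, not the exact challenge metric):

```python
def pairwise_concordance(reference, predicted):
    """Fraction of case pairs, ordered by the pathologists' reference
    scores, whose predicted scores preserve that ordering; tied
    predictions receive half credit and reference ties are skipped."""
    concordant = 0.0
    pairs = 0
    n = len(reference)
    for i in range(n):
        for j in range(i + 1, n):
            if reference[i] == reference[j]:
                continue  # pairs tied in the reference are not scored
            pairs += 1
            if (reference[i] - reference[j]) * (predicted[i] - predicted[j]) > 0:
                concordant += 1.0
            elif predicted[i] == predicted[j]:
                concordant += 0.5
    return concordant / pairs

# Hypothetical tumor-cellularity scores in [0, 1]:
reference = [0.10, 0.50, 0.90]   # pathologist reference
predicted = [0.20, 0.40, 0.80]   # algorithm output
pk_like = pairwise_concordance(reference, predicted)  # 1.0: ordering preserved
```

On such a scale, the reported interrater PK of 0.927 is the ceiling the top algorithms approached.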
Akbar, Shazia; Peikari, Mohammad; Salama, Sherine; Panah, Azadeh Yazdan; Nofech-Mozes, Sharon; Martel, Anne L Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment Journal Article Scientific Reports, 2019. @article{Akbar2019, title = {Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment}, author = {Shazia Akbar and Mohammad Peikari and Sherine Salama and Azadeh Yazdan Panah and Sharon Nofech-Mozes and Anne L. Martel}, url = {http://www.nature.com/articles/s41598-019-50568-4}, year = {2019}, date = {2019-01-01}, journal = {Scientific Reports}, abstract = {The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in-situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed through eye-balling of routine histopathology slides estimating the proportion of tumour cells within the TB. With the advances in production of digitized slides and increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists’ workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreements between automated and manual analysis of digital slides. Agreements between our trained deep neural networks and experts in this study (0.82) approach the inter-rater agreements between pathologists (0.89). 
We also reveal properties that are captured when we apply deep neural network to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Akbar, Shazia; Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L The transition module: A method for preventing overfitting in convolutional neural networks Journal Article Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 7 , 2019. @article{Akbar2018b, title = {The transition module: A method for preventing overfitting in convolutional neural networks}, author = {Shazia Akbar and Mohammad Peikari and Sherine Salama and Sharon Nofech-Mozes and Anne L. Martel}, url = {https://www.tandfonline.com/doi/abs/10.1080/21681163.2018.1427148}, year = {2019}, date = {2019-01-01}, journal = {Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization}, volume = {7}, abstract = {Digital pathology has advanced substantially over the last decade with the adoption of slide scanners in pathology labs. The use of digital slides to analyse diseases at the microscopic level is both cost-effective and efficient. Identifying complex tumour patterns in digital slides is a challenging problem but holds significant importance for tumour burden assessment, grading and many other pathological assessments in cancer research. The use of convolutional neural networks (CNNs) to analyse such complex images has been well adopted in digital pathology. However, in recent years, the architecture of CNNs has altered with the introduction of inception modules which have shown great promise for classification tasks. In this paper, we propose a modified ‘transition’ module which encourages generalisation in a deep learning framework with few training samples. In the transition module, filters of varying sizes are used to encourage class-specific filters at multiple spatial resolutions followed by global average pooling. 
We demonstrate the performance of the transition module in AlexNet and ZFNet, for classifying breast tumours in two independent data-sets of scanned histology sections; the inclusion of the transition module in these CNNs improved performance.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
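The transition module described above combines filters of varying sizes with global average pooling. The pure-Python sketch below illustrates that idea in miniature on a single-channel input; the mean-filter kernels are illustrative assumptions, and a real implementation would live inside a CNN framework with learned filters:

```python
def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation on nested lists (one channel)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]

def global_average_pool(feature_map):
    """Collapse a 2D response map to a single scalar."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)

def transition_features(image, kernels):
    """Transition-module idea in miniature: filter the input at several
    kernel sizes, then global-average-pool each response map, yielding
    one pooled activation per spatial scale."""
    return [global_average_pool(conv2d_valid(image, k)) for k in kernels]

# Illustrative mean filters at two spatial scales (3x3 and 5x5).
kernels = [[[1 / 9] * 3 for _ in range(3)],
           [[1 / 25] * 5 for _ in range(5)]]
image = [[1.0] * 8 for _ in range(8)]        # constant toy input
feats = transition_features(image, kernels)  # ≈ [1.0, 1.0] for a constant image
```

Because each scale is reduced to a single pooled value, the module feeds a compact, resolution-aware summary into the fully connected layers, which is what discourages overfitting with few training samples.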
Akbar, Shazia; Martel, Anne L Cluster-based learning from weakly labeled bags in digital pathology Conference Machine Learning for Health Workshop, NeurIPS 2018, 2018. @conference{Akbar2018a, title = {Cluster-based learning from weakly labeled bags in digital pathology}, author = {Shazia Akbar and Anne L. Martel}, url = {https://arxiv.org/abs/1812.00884}, year = {2018}, date = {2018-01-01}, booktitle = {Machine Learning for Health Workshop, NeurIPS 2018}, abstract = {To alleviate the burden of gathering detailed expert annotations when training deep neural networks, we propose a weakly supervised learning approach to recognize metastases in microscopic images of breast lymph nodes. We describe an alternative training loss which clusters weakly labeled bags in latent space to inform relevance of patch-instances during training of a convolutional neural network. We evaluate our method on the Camelyon dataset which contains high-resolution digital slides of breast lymph nodes, where labels are provided at the image-level and only subsets of patches are made available during training.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
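The paper above learns from weak, image-level labels over bags of patches. For orientation only, the snippet below sketches the generic multiple-instance pooling baseline such methods build on, where patch-level probabilities are aggregated into a slide-level score; the paper's cluster-based training loss itself is not reproduced:

```python
def mil_bag_probability(instance_probs, pooling="max"):
    """Aggregate patch-level tumour probabilities into one slide-level
    score. Max pooling flags a slide from its single most suspicious
    patch; mean pooling averages evidence across all patches. This is
    a generic weakly supervised baseline, not the cluster-based method
    described in the abstract."""
    if pooling == "max":
        return max(instance_probs)
    if pooling == "mean":
        return sum(instance_probs) / len(instance_probs)
    raise ValueError("pooling must be 'max' or 'mean'")

# Hypothetical patch probabilities from one slide:
patch_probs = [0.1, 0.9, 0.2]
slide_score = mil_bag_probability(patch_probs)  # 0.9 under max pooling
```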
Akbar, Shazia; Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L Determining tumor cellularity in digital slides using ResNet Conference SPIE Medical Imaging, 2018. @conference{Akbar2018c, title = {Determining tumor cellularity in digital slides using ResNet}, author = {Shazia Akbar and Mohammad Peikari and Sherine Salama and Sharon Nofech-Mozes and Anne L. Martel}, url = {https://doi.org/10.1117/12.2292813}, year = {2018}, date = {2018-01-01}, booktitle = {SPIE Medical Imaging}, abstract = {The residual cancer burden index is a powerful prognostic factor which is used to measure neoadjuvant therapy response in invasive breast cancers. Tumor cellularity is one component of the residual cancer burden index and is currently measured manually through eyeballing. As such it is subject to inter- and intra-variability and is currently restricted to discrete values. We propose a method for automatically determining tumor cellularity in digital slides using deep learning techniques. We train a series of ResNet architectures to output both discrete and continuous values and compare our outcomes with scores acquired manually by an expert pathologist. Our configurations were validated on a dataset of image patches extracted from digital slides, each containing various degrees of tumor cellularity. Results showed that, in the case of discrete values, our models were able to distinguish between regions-of-interest containing tumor and healthy cells with over 97% test accuracy rates. Overall, we achieved 76% accuracy over four predefined tumor cellularity classes (no tumor/tumor; low, medium and high tumor cellularity).
When computing tumor cellularity scores on a continuous scale, ResNet showed good correlations with manually-identified scores, showing potential for computing reproducible scores consistent with expert opinion using deep learning techniques.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
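The abstract reports accuracy over four cellularity classes alongside continuous scores. A trivial sketch of such a discretization follows; the cut-points are hypothetical assumptions for illustration, not the study's definitions:

```python
def cellularity_class(score):
    """Map a continuous tumor-cellularity score in [0, 1] to the four
    coarse classes named in the abstract (no tumor; low, medium, high).
    The thresholds below are illustrative, not the paper's."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    if score == 0.0:
        return "no tumor"
    if score < 1 / 3:
        return "low"
    if score < 2 / 3:
        return "medium"
    return "high"
```

Training one head on these discrete labels and another on the raw score mirrors the paper's comparison of classification against regression outputs.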
Akbar, Shazia; Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L Transitioning between convolutional and fully connected layers in neural networks Workshop International Workshop on Deep Learning in Medical Image Analysis, MICCAI 2017, 2017. @workshop{Akbar2017a, title = {Transitioning between convolutional and fully connected layers in neural networks}, author = {Shazia Akbar and Mohammad Peikari and Sherine Salama and Sharon Nofech-Mozes and Anne L. Martel}, url = {https://arxiv.org/abs/1707.05743}, year = {2017}, date = {2017-01-01}, booktitle = {International Workshop on Deep Learning in Medical Image Analysis, MICCAI 2017}, abstract = {Digital pathology has advanced substantially over the last decade however tumor localization continues to be a challenging problem due to highly complex patterns and textures in the underlying tissue bed. The use of convolutional neural networks (CNNs) to analyze such complex images has been well adopted in digital pathology. However in recent years, the architecture of CNNs have altered with the introduction of inception modules which have shown great promise for classification tasks. In this paper, we propose a modified "transition" module which learns global average pooling layers from filters of varying sizes to encourage class-specific filters at multiple spatial resolutions. We demonstrate the performance of the transition module in AlexNet and ZFNet, for classifying breast tumors in two independent datasets of scanned histology sections, of which the transition module was superior.}, keywords = {}, pubstate = {published}, tppubtype = {workshop} }
Manivannan, Siyamalan; Li, Wenqi; Akbar, Shazia; Wang, Ruixuan; Zhang, Jianguo; McKenna, Stephen J An automated pattern recognition system for classifying indirect immunofluorescence images of HEp-2 cells and specimens Journal Article Pattern Recognition, 2016. @article{Akbar2016a, title = {An automated pattern recognition system for classifying indirect immunofluorescence images of HEp-2 cells and specimens}, author = {Siyamalan Manivannan and Wenqi Li and Shazia Akbar and Ruixuan Wang and Jianguo Zhang and Stephen J. McKenna}, url = {https://doi.org/10.1016/j.patcog.2015.09.015}, year = {2016}, date = {2016-01-01}, journal = {Pattern Recognition}, abstract = {Immunofluorescence antinuclear antibody tests are important for diagnosis and management of autoimmune conditions; a key step that would benefit from reliable automation is the recognition of subcellular patterns suggestive of different diseases. We present a system to recognize such patterns, at cellular and specimen levels, in images of HEp-2 cells. Ensembles of SVMs were trained to classify cells into six classes based on sparse encoding of texture features with cell pyramids, capturing spatial, multi-scale structure. A similar approach was used to classify specimens into seven classes. Software implementations were submitted to an international contest hosted by ICPR 2014 (Performance Evaluation of Indirect Immunofluorescence Image Analysis Systems). Mean class accuracies obtained on heldout test data sets were 87.1% and 88.5% for cell and specimen classification respectively. These were the highest achieved in the competition, suggesting that our methods are state-of-the-art. 
We provide detailed descriptions and extensive experiments with various features and encoding methods.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Manivannan, Siyamalan; Li, Wenqi; Akbar, Shazia; Zhang, Jianguo; Trucco, Emanuel; McKenna, Stephen J Local structure prediction for gland segmentation Conference IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016. @conference{Manivannan2016a, title = {Local structure prediction for gland segmentation}, author = {Siyamalan Manivannan and Wenqi Li and Shazia Akbar and Jianguo Zhang and Emanuel Trucco and Stephen J. McKenna}, url = {https://ieeexplore.ieee.org/document/7493387}, year = {2016}, date = {2016-01-01}, booktitle = {IEEE 13th International Symposium on Biomedical Imaging (ISBI)}, abstract = {We present a method to segment individual glands from colon histopathology images. Segmentation based on sliding window classification does not usually make explicit use of information about the spatial configurations of class labels. To improve on this we propose to segment glands using a structure learning approach in which the local label configurations (structures) are considered when training a support vector machine classifier. The proposed method not only distinguishes foreground from background, it also distinguishes between different local structures in pixel labelling, e.g. locations between adjacent glands and locations far from glands. It directly predicts these label configurations at test time. Experiments demonstrate that it produces better segmentations than when the local label structure is not used to train the classifier.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
Manivannan, Siyamalan; Li, Wenqi; Akbar, Shazia; Zhang, Jianguo; Trucco, Emanuel; McKenna, Stephen J Gland segmentation in colon histology images using hand-crafted features and convolutional neural networks Conference IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016. @conference{Manivannan2016b, title = {Gland segmentation in colon histology images using hand-crafted features and convolutional neural networks}, author = {Siyamalan Manivannan and Wenqi Li and Shazia Akbar and Jianguo Zhang and Emanuel Trucco and Stephen J. McKenna}, url = {https://ieeexplore.ieee.org/document/7493530}, year = {2016}, date = {2016-01-01}, booktitle = {IEEE 13th International Symposium on Biomedical Imaging (ISBI)}, abstract = {We investigate glandular structure segmentation in colon histology images as a window-based classification problem. We compare and combine methods based on fine-tuned convolutional neural networks (CNN) and hand-crafted features with support vector machines (HC-SVM). On 85 images of H&E-stained tissue, we find that fine-tuned CNN outperforms HC-SVM in gland segmentation measured by pixel-wise Jaccard and Dice indices. For HC-SVM we further observe that training a second-level window classifier on the posterior probabilities - as an output refinement - can substantially improve the segmentation performance. The final performance of HC-SVM with refinement is comparable to that of CNN. Furthermore, we show that by combining and refining the posterior probability outputs of CNN and HC-SVM together, a further performance boost is obtained.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
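Both gland-segmentation papers above evaluate with pixel-wise Jaccard and Dice indices. As a self-contained refresher (not the papers' evaluation code), the two overlap measures can be computed from flat binary masks as:

```python
def dice_and_jaccard(pred, target):
    """Pixel-wise Dice and Jaccard indices for flat binary masks
    (lists of 0/1 values); empty masks score 1.0 by convention."""
    intersection = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    jaccard = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return dice, jaccard

# Tiny hypothetical masks: prediction overlaps the target in one pixel.
dice, jaccard = dice_and_jaccard([1, 1, 0, 0], [1, 0, 1, 0])  # 0.5, 1/3
```

Dice weights the overlap against the average mask size and Jaccard against the union, so Dice is always at least as large as Jaccard on the same pair of masks.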