Article

Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging

1 Keen Eye, 75012 Paris, France
2 LTCI, Télécom Paris, Institut Polytechnique de Paris, 91120 Palaiseau, France
3 Centre National de la Recherche Scientifique, Laboratoire d'Informatique de Paris 6, Sorbonne Université, 75005 Paris, France
* Author to whom correspondence should be addressed.
Academic Editors: Jaime Cardoso, Nicholas Heller, Pedro Henriques Abreu, Ivana Išgum and Diana Mateus
Mach. Learn. Knowl. Extr. 2021, 3(1), 243-262; https://0-doi-org.brum.beds.ac.uk/10.3390/make3010012
Received: 16 January 2021 / Revised: 10 February 2021 / Accepted: 13 February 2021 / Published: 22 February 2021
Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While their performance reaches expert level, interpretability (highlighting how and what a trained model learned, and why it makes a specific decision) is the next important challenge that deep learning methods must address before they can be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification: we formalize the design of WSI classification architectures and propose a piece-wise interpretability approach that relies on gradient-based methods, feature visualization, and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call the Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Finally, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on the activation colocalization of selected features, that improves the performance and stability of our proposed method.
Keywords: histopathology; WSI classification; explainability; interpretability; heat-maps
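The tile-level "Localization AUC" mentioned in the abstract can be illustrated with a minimal sketch: given a heat-map score per tile and a ground-truth tumor/normal label per tile, it is the standard ROC AUC computed at the tile level. The helper below is hypothetical (the paper's actual pipeline operates on Camelyon-16 annotations and learned features); it implements the rank-based AUC directly with NumPy.

```python
import numpy as np

def localization_auc(tile_scores, tile_labels):
    """Tile-level ROC AUC: the probability that a randomly chosen tumor
    tile receives a higher heat-map score than a randomly chosen normal
    tile. Hypothetical helper for illustration only."""
    scores = np.asarray(tile_scores, dtype=float)
    labels = np.asarray(tile_labels, dtype=int)
    pos = scores[labels == 1]  # scores of tumor tiles
    neg = scores[labels == 0]  # scores of normal tiles
    # Mann-Whitney U formulation: compare every (tumor, normal) pair,
    # giving half credit to ties.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Example: heat-map scores for six tiles, label 1 = tumor tile.
auc = localization_auc([0.9, 0.8, 0.3, 0.7, 0.2, 0.1], [1, 1, 0, 1, 0, 0])
```

Here every tumor tile outscores every normal tile, so the sketch returns a perfect AUC of 1.0; an improvement "of more than 0.2" in this metric, as reported in the abstract, means the heat-maps rank tumor tiles substantially better.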
Figure 1

MDPI and ACS Style

Pirovano, A.; Heuberger, H.; Berlemont, S.; Ladjal, S.; Bloch, I. Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging. Mach. Learn. Knowl. Extr. 2021, 3, 243-262. https://0-doi-org.brum.beds.ac.uk/10.3390/make3010012

