Deep Learning Generates Synthetic Cancer Histology for Explainability and Education

James M. Dolezal, Rachelle Wolk, Hanna M. Hieromnimon, Frederick M. Howard, Andrew Srisuwananukorn, Dmitry Karpeyev, Siddhi Ramesh, Sara Kochanny, Jung Woo Kwon, Meghana Agni, Richard C. Simon, Chandni Desai, Raghad Kherallah, Tung D. Nguyen, Jefree J. Schulte, Kimberly Cole, Galina Khramtsova, Marina Chiara Garassino, Aliya N. Husain, Huihua Li, Robert Grossman, Nicole A. Cipriani, Alexander T. Pearson

Artificial intelligence (AI) methods, including deep neural networks, can provide rapid molecular classification of tumors from routine histology with accuracy that can match or exceed that of human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools can help provide insights into what models have learned when corresponding histologic features are poorly understood. Conditional generative adversarial networks (cGANs) are AI models that generate synthetic images and can illustrate subtle differences between image classes. Here, we describe the use of a cGAN for explaining models trained to classify molecularly subtyped tumors, exposing their associated histologic features. We leverage cGANs to create class- and layer-blending visualizations to improve understanding of subtype morphology. Finally, we demonstrate the potential use of synthetic histology for augmenting pathology trainee education and show that clear, intuitive cGAN visualizations can reinforce and improve human understanding of the histologic manifestations of tumor biology.
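The class-blending visualizations described above rest on a simple mechanism: a cGAN conditions generation on a class embedding, so interpolating between two subtypes' embeddings morphs a synthetic image from one class toward the other while the latent code holds other image content fixed. The sketch below illustrates only that conditioning interface with a toy linear generator; the names (`generate`, `class_blend`) and the linear map are illustrative assumptions, not the paper's actual StyleGAN-family architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for a trained cGAN generator: maps a latent
# vector z plus a class embedding c to a flat "image". The paper's real
# model is a deep convolutional network; this linear map exists only to
# show how class conditioning and blending fit together.
LATENT_DIM, EMBED_DIM, IMG_PIXELS, N_CLASSES = 64, 16, 32 * 32, 2
W_z = rng.standard_normal((IMG_PIXELS, LATENT_DIM)) * 0.1
W_c = rng.standard_normal((IMG_PIXELS, EMBED_DIM)) * 0.1
class_embeddings = rng.standard_normal((N_CLASSES, EMBED_DIM))

def generate(z, c):
    """Toy conditional generator: image = tanh(W_z @ z + W_c @ c)."""
    return np.tanh(W_z @ z + W_c @ c)

def class_blend(z, cls_a, cls_b, alpha):
    """Interpolate between two class embeddings (alpha in [0, 1]) so the
    synthetic image morphs from subtype cls_a toward subtype cls_b while
    the latent code z, and thus the shared image content, stays fixed."""
    c = (1.0 - alpha) * class_embeddings[cls_a] + alpha * class_embeddings[cls_b]
    return generate(z, c)

# One latent code, five blending steps between the two subtypes.
z = rng.standard_normal(LATENT_DIM)
frames = [class_blend(z, 0, 1, a) for a in np.linspace(0.0, 1.0, 5)]

# The endpoints of the blend coincide with pure class-conditional output.
assert np.allclose(frames[0], generate(z, class_embeddings[0]))
assert np.allclose(frames[-1], generate(z, class_embeddings[1]))
```

In a trained model, stepping `alpha` through intermediate values yields the smooth subtype-to-subtype transitions used for the explainability and teaching visualizations described in the abstract.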
