Past Recording
Representation Learning of Histopathology Images using Graph Neural Networks
Tuesday May 26 2020 16:00 GMT
Why This Is Interesting

Representation learning for Whole Slide Images (WSIs) is pivotal in developing image-based systems that achieve higher precision in diagnostic pathology. The authors propose a two-stage framework for WSI representation learning: they use graph neural networks to learn relations among sampled representative patches and aggregate the image information into a single vector representation, and they introduce attention via graph pooling to automatically infer which patches are most relevant. Experimenting on 1,026 lung cancer WSIs at 40× magnification from The Cancer Genome Atlas (TCGA), the largest public repository of histopathology images, and using features extracted from a pre-trained DenseNet model, they achieve state-of-the-art accuracy of 88.8% and an AUC of 0.89 on lung cancer sub-type classification. The authors will be presenting this work at CVPR 2020!
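
A minimal sketch (not the authors' released code) of how such a two-stage pipeline could look in PyTorch: a pre-trained DenseNet embeds each sampled patch, a single dense GCN layer propagates information over the patch graph, and attention pooling collapses the node features into one slide-level vector for classification. Class names, layer sizes, and the classifier head are illustrative assumptions.

```python
# Sketch of the two-stage idea, not the paper's implementation.
import torch
import torch.nn as nn
import torchvision.models as models

class PatchEncoder(nn.Module):
    """Stage 1: DenseNet features for each sampled WSI patch."""
    def __init__(self):
        super().__init__()
        backbone = models.densenet121(weights="DEFAULT")
        self.features = backbone.features           # conv trunk only
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, patches):                     # (N, 3, 224, 224)
        h = self.pool(self.features(patches))       # (N, 1024, 1, 1)
        return h.flatten(1)                         # (N, 1024) node features

class SlideGCN(nn.Module):
    """Stage 2: one dense GCN layer + attention pooling to a slide vector."""
    def __init__(self, in_dim=1024, hid_dim=256, n_classes=2):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)
        self.attn = nn.Linear(hid_dim, 1)           # per-node attention score
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):                      # x: (N, 1024), adj: (N, N)
        # Symmetric normalization of the adjacency (with self-loops).
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(1).rsqrt()
        a = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = torch.relu(self.gcn(a @ x))             # message passing
        w = torch.softmax(self.attn(h), dim=0)      # attention over patches
        slide_vec = (w * h).sum(0)                  # single WSI representation
        return self.classifier(slide_vec), w
```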

Discussion Points
  • Modelling multiple instance learning as a graph problem
  • Graph pooling with attention
  • Adjacency matrix learning
Takeaways
  • Whole Slide Images (WSIs) in histopathology are extremely high-resolution, and it is hard to apply ML to them without making the i.i.d. assumption and splitting each image into multiple instances (patches)
  • GNNs can be used to learn connectivity between cancerous patches of these WSIs
  • Learning the connectivity can be modelled as learning the adjacency matrix
  • To improve the contextual information captured by the adjacency matrix, the authors obtain a global DenseNet feature for the entire WSI and concatenate it with each node/patch feature (a rough sketch follows this list)
  • Using GCNs and DenseNet features for each image patch, the authors achieve state-of-the-art classification results
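
As a rough sketch of the last two takeaways (my own reading, not the paper's implementation): a global DenseNet feature for the whole slide is concatenated onto every patch feature, and the adjacency matrix is then learned as a differentiable pairwise similarity over the resulting node embeddings. The dimensions and the sigmoid similarity below are assumptions.

```python
# Sketch only: learned adjacency from patch features + a global slide feature.
import torch
import torch.nn as nn

class LearnedAdjacency(nn.Module):
    def __init__(self, patch_dim=1024, global_dim=1024, emb_dim=256):
        super().__init__()
        self.embed = nn.Linear(patch_dim + global_dim, emb_dim)

    def forward(self, patch_feats, global_feat):
        # patch_feats: (N, patch_dim) per-patch DenseNet features
        # global_feat: (global_dim,) DenseNet feature for the whole (downscaled) WSI
        ctx = global_feat.expand(patch_feats.size(0), -1)
        nodes = torch.cat([patch_feats, ctx], dim=1)   # add global context to each node
        z = self.embed(nodes)                          # (N, emb_dim)
        # Learned, differentiable adjacency: sigmoid of pairwise dot products.
        adj = torch.sigmoid(z @ z.t())
        return adj, nodes
```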
Time of Recording: Tuesday May 26 2020 16:00 GMT