Self-Supervised Learning with Graph Neural Networks for Region of Interest Retrieval in Histopathology


Ozen Y., Aksoy S., Kösemehmetoğlu K., Onder S., Üner A.

25th International Conference on Pattern Recognition (ICPR), held online, 10 - 15 January 2021, pp. 6329-6334

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/icpr48806.2021.9412903
  • Country: Online (virtual conference)
  • Page Numbers: pp. 6329-6334
  • Keywords: Digital pathology, histopathological image analysis, self-supervised learning, graph neural networks, content-based image retrieval
  • Hacettepe University Affiliated: Yes

Abstract

Deep learning has achieved strong performance in representation learning and content-based retrieval of histopathology images. The common setting in deep learning-based approaches is to train deep neural networks for classification in a supervised manner, and to use the trained model to extract representations for computing and ranking distances between images. However, two major challenges remain. First, supervised training of deep neural networks requires a large amount of manually labeled data, which is often scarce in the medical field. Transfer learning has been used to overcome this challenge, but its success has remained limited. Second, clinical practice in histopathology requires working with regions of interest (ROIs) of multiple diagnostic classes with arbitrary shapes and sizes. The typical solution is to aggregate the representations of fixed-size patches cropped from these regions to obtain region-level representations. However, naive aggregation methods cannot sufficiently exploit the rich contextual information in complex tissue structures. To tackle these two challenges, we propose a generic method that uses graph neural networks (GNNs) combined with self-supervised training based on a contrastive loss. The GNN enables representing arbitrarily shaped ROIs as graphs and encoding their contextual information. Self-supervised contrastive learning improves the quality of the learned representations without requiring labeled data. Experiments on a challenging breast histopathology data set show that the proposed method achieves better performance than the state-of-the-art.
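
To make the pipeline described in the abstract concrete, the sketch below (in PyTorch) illustrates the general idea: an ROI is split into fixed-size patches whose features become graph nodes, a small GNN aggregates neighborhood context into a region-level embedding, and two augmented views of the same ROI are pulled together by a contrastive (NT-Xent) loss. This is an illustrative reconstruction, not the authors' implementation; the k-nearest-neighbor graph construction, the simple mean-aggregation GNN, the feature-jitter augmentation, and all names and dimensions (build_knn_adjacency, SimpleGNN, nt_xent) are assumptions made here for clarity.

# Minimal sketch of a GNN + contrastive learning pipeline for ROI retrieval.
# Illustrative only: graph construction, architecture, and hyperparameters
# are assumptions, not the values used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_knn_adjacency(coords: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Connect each patch to its k nearest neighbors by patch-center distance."""
    dists = torch.cdist(coords, coords)                      # (N, N) pairwise distances
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]    # drop self-match
    adj = torch.zeros_like(dists)
    adj.scatter_(1, knn, 1.0)
    adj = ((adj + adj.t()) > 0).float()                      # symmetrize
    adj = adj + torch.eye(adj.size(0))                       # add self-loops
    return adj / adj.sum(dim=1, keepdim=True)                # row-normalize


class SimpleGNN(nn.Module):
    """Two rounds of mean-neighborhood message passing, pooled to one ROI vector."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.lin1(adj @ x))       # propagate over neighbors, then transform
        x = self.lin2(adj @ x)
        return x.mean(dim=0)                 # pool node embeddings -> region-level embedding


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over a batch of paired ROI embeddings (two views per ROI)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2B, D)
    sim = z @ z.t() / temperature
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    sim = sim.masked_fill(mask, -float("inf"))               # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])  # index of each positive pair
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    gnn = SimpleGNN(in_dim=128, hid_dim=256, out_dim=64)
    views1, views2 = [], []
    for _ in range(4):                                        # a toy batch of 4 ROIs
        n = int(torch.randint(16, 48, (1,)))                  # arbitrary number of patches per ROI
        coords, feats = torch.rand(n, 2), torch.rand(n, 128)
        adj = build_knn_adjacency(coords)
        # Feature jitter stands in for the image-level augmentations of each view.
        views1.append(gnn(feats + 0.05 * torch.randn_like(feats), adj))
        views2.append(gnn(feats + 0.05 * torch.randn_like(feats), adj))
    loss = nt_xent(torch.stack(views1), torch.stack(views2))
    print(f"contrastive loss: {loss.item():.4f}")

In the actual method, the per-patch features would come from a network applied to the histopathology patches and the augmentations would act on the images themselves; the random features and jitter above are only stand-ins so the example runs without image data.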