mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis


Yurt M., Dar S. U. H., Erdem A., Erdem E., Oguz K. K., Çukur T.

MEDICAL IMAGE ANALYSIS, vol. 70, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 70
  • Publication Date: 2021
  • DOI: 10.1016/j.media.2020.101944
  • Journal Name: MEDICAL IMAGE ANALYSIS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Biotechnology Research Abstracts, Compendex, EMBASE, INSPEC, MEDLINE
  • Hacettepe University Affiliated: Yes

Abstract

Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods. © 2020 Elsevier B.V. All rights reserved.
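
To make the multi-stream idea in the abstract concrete, the following is a minimal PyTorch-style sketch of a generator with per-source one-to-one encoder streams, a joint many-to-one stream, a fusion block, and a shared decoder. All module names, channel counts, and the fixed fusion position here are illustrative assumptions; the paper's actual mustGAN architecture adaptively selects the fusion block's location and differs in detail.

```python
# Hypothetical sketch of a multi-stream generator; not the paper's implementation.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """A simple conv + instance norm + ReLU unit, common in image-synthesis GANs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class MultiStreamGenerator(nn.Module):
    """Aggregates one-to-one and many-to-one streams via a fusion block.

    - one encoder per source contrast captures contrast-unique features,
    - one joint encoder over all sources captures shared features,
    - a fusion block concatenates and merges both kinds of feature maps,
    - a shared decoder synthesizes the single target contrast.
    """
    def __init__(self, n_sources, feat=64):
        super().__init__()
        # One-to-one streams: each sees exactly one source contrast.
        self.one_to_one = nn.ModuleList(
            [ConvBlock(1, feat) for _ in range(n_sources)]
        )
        # Many-to-one stream: sees all source contrasts jointly.
        self.many_to_one = ConvBlock(n_sources, feat)
        # Fusion block: merges complementary and shared feature maps.
        # (Fixed here for simplicity; the paper positions this adaptively.)
        self.fusion = ConvBlock(feat * (n_sources + 1), feat)
        # Decoder: maps fused features to the target contrast image.
        self.decoder = nn.Sequential(
            ConvBlock(feat, feat),
            nn.Conv2d(feat, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, sources):
        # sources: (batch, n_sources, H, W), one channel per source contrast.
        unique = [enc(sources[:, i:i + 1]) for i, enc in enumerate(self.one_to_one)]
        shared = self.many_to_one(sources)
        fused = self.fusion(torch.cat(unique + [shared], dim=1))
        return self.decoder(fused)

# Example: synthesize one target contrast from three sources (e.g., T2, PD, FLAIR).
gen = MultiStreamGenerator(n_sources=3)
target = gen(torch.randn(1, 3, 256, 256))  # -> shape (1, 1, 256, 256)
```

Concatenation followed by a learned convolution is only one plausible fusion choice; the key design point from the abstract is that unique (per-stream) and shared (joint-stream) representations are combined at a stage chosen to maximize task-specific performance.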