open access

Vol 26, No 1 (2021)
Research paper
Published online: 2021-01-22
Submitted: 2021-01-07

T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks

Daisuke Kawahara, Yasushi Nagata
DOI: 10.5603/RPOR.a2021.0005
Rep Pract Oncol Radiother 2021;26(1):35-42.

Abstract

Background: The objective of this study was to propose an optimal input image quality for a conditional generative adversarial network (GAN) used to synthesize T1-weighted and T2-weighted magnetic resonance imaging (MRI) images.

Materials and methods: A total of 2,024 images, scanned from 2017 to 2018 in 104 patients, were used. Prediction frameworks for T1-weighted to T2-weighted and T2-weighted to T1-weighted MRI image synthesis were created with the GAN. Two image sizes (512 × 512 and 256 × 256) and two grayscale conversion methods (simple and adaptive) were used for the input images. In the simple conversion method, the images were converted from 16-bit to 8-bit by dividing the pixel values by 256. In the adaptive conversion method, the unused levels were first eliminated from the 16-bit images, which were then converted to 8-bit by dividing the pixel values by the value obtained after dividing the maximum pixel value by 256.

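For concreteness, the two grayscale conversion methods can be sketched as follows (a minimal illustration in Python with NumPy; the function names are ours, not from the paper):

    import numpy as np

    def simple_conversion(img16):
        # Simple method: map the full 16-bit range to 8 bits by
        # dividing every pixel value by 256.
        return (img16 // 256).astype(np.uint8)

    def adaptive_conversion(img16):
        # Adaptive method: unused upper levels are discarded by
        # scaling with (maximum pixel value / 256), so that the
        # occupied intensity range fills the whole 8-bit scale.
        divisor = img16.max() / 256.0
        return np.clip(img16 / divisor, 0, 255).astype(np.uint8)
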
Results: The relative mean absolute error (rMAE) was smallest with the adaptive conversion method: 0.15 for T1-weighted to T2-weighted synthesis and 0.17 for T2-weighted to T1-weighted synthesis. The adaptive conversion method also had the smallest relative mean square error (rMSE) and relative root mean square error (rRMSE), and the largest peak signal-to-noise ratio (PSNR) and mutual information (MI). The computation time depended on the image size.

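The reported metrics can be computed as in the sketch below (assumed definitions, since the abstract does not spell out the normalisation: rMAE is taken as the mean absolute error divided by the mean reference intensity, and PSNR assumes 8-bit images):

    import numpy as np

    def rmae(pred, ref):
        # Relative MAE: mean absolute error normalised by the
        # mean reference intensity (assumed definition).
        pred, ref = pred.astype(np.float64), ref.astype(np.float64)
        return np.mean(np.abs(pred - ref)) / np.mean(np.abs(ref))

    def psnr(pred, ref, peak=255.0):
        # Peak signal-to-noise ratio in dB, assuming 8-bit images.
        pred, ref = pred.astype(np.float64), ref.astype(np.float64)
        mse = np.mean((pred - ref) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)
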
Conclusions: Input resolution and image size affect the prediction accuracy. The proposed model and prediction framework can help improve the versatility and quality of multi-contrast MRI examinations without prolonging them.

Keywords

convolutional generative adversarial networks; image synthesis; MRI
