Research on Dual-adversarial MR Image Fusion Network Using Pre-trained Model for Feature Extraction
Abstract:

With the popularization of multimodal medical images in clinical diagnosis and treatment, fusion technology based on spatiotemporal correlation characteristics has developed rapidly. Fused medical images not only retain the unique features of source images of various modalities but also strengthen their complementary information, which facilitates image reading. At present, most methods perform feature extraction and feature fusion with manually defined constraints, which easily leads to the loss of useful information and to unclear details in the fused images. In light of this, a dual-adversarial fusion network using a pre-trained model for feature extraction is proposed in this study to fuse MR-T1/MR-T2 images. The network consists of a feature extraction module, a feature fusion module, and two discriminator network modules. Because registered multimodal medical image datasets are small, a feature extraction network cannot be fully trained on them from scratch; given the powerful data representation ability of pre-trained models, a pre-trained convolutional neural network model is therefore embedded into the feature extraction module to generate the feature maps. The feature fusion network then fuses the deep features and outputs the fused image. By accurately classifying the source and fused images, the two discriminator networks each establish an adversarial relationship with the feature fusion network and eventually drive it to learn the optimal fusion parameters. The experimental results confirm the effectiveness of the pre-training technique in this method. Compared with six existing typical fusion methods, the proposed method produces fused results with the best performance in both visual quality and quantitative metrics.
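To make the dual-adversarial scheme concrete, the sketch below illustrates it in PyTorch. This is a minimal sketch under our own assumptions, not the authors' published architecture: the VGG16 backbone, all layer sizes, and the names FusionNet, d_t1, and d_t2 are hypothetical stand-ins.

import torch
import torch.nn as nn
from torchvision.models import vgg16

# Frozen pre-trained backbone stands in for the feature extraction module.
# The first four VGG16 layers (Conv-ReLU-Conv-ReLU) give 64-channel feature
# maps at the input resolution.
backbone = vgg16(pretrained=True).features[:4].eval()
for p in backbone.parameters():
    p.requires_grad = False

class FusionNet(nn.Module):
    """Fuses the deep features of the two MR modalities into one image."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch * 2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Tanh())

    def forward(self, f1, f2):
        return self.fuse(torch.cat([f1, f2], dim=1))

def make_discriminator():
    # One discriminator per source modality (MR-T1 and MR-T2).
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

fusion, d_t1, d_t2 = FusionNet(), make_discriminator(), make_discriminator()
bce = nn.BCEWithLogitsLoss()

def generator_loss(t1, t2):
    # Single-channel MR slices are repeated to 3 channels for the backbone.
    f1 = backbone(t1.repeat(1, 3, 1, 1))
    f2 = backbone(t2.repeat(1, 3, 1, 1))
    fused = fusion(f1, f2)
    # The fusion network tries to make each discriminator label the fused
    # image as its own source modality.
    ones = torch.ones(fused.size(0), 1)
    return bce(d_t1(fused), ones) + bce(d_t2(fused), ones), fused

def discriminator_loss(d, real, fused):
    # Each discriminator learns to tell its source modality from the fused
    # output, which is what makes the two adversarial games "dual".
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(fused.size(0), 1)
    return bce(d(real), ones) + bce(d(fused.detach()), zeros)

In a training loop, one would alternate the updates: d_t1 is trained with MR-T1 slices as real and the fused output as fake, d_t2 symmetrically with MR-T2 slices, and the fusion network is then updated with generator_loss, so the fused image is pushed to retain features of both modalities.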

Get Citation

Liu Hui, Li Shanshan, Gao Shanshan, Deng Kai, Xu Gang, Zhang Caiming. Research on Dual-adversarial MR Image Fusion Network Using Pre-trained Model for Feature Extraction. Journal of Software, 2023, 34(5): 2134-2151 (in Chinese).

History
  • Received: April 18, 2022
  • Revised: May 29, 2022
  • Online: September 20, 2022