Multi-scale Feature Frequency Domain Decomposition Filtering for Medical Image Fusion
Author: LIU Hui, ZHU Ji-Cheng, WANG Xin-Yu, SHENG Yu-Rui, ZHANG Cai-Ming, NIE Li-Qiang
CLC Number: TP391

Abstract:

Multi-modal medical image fusion combines the tissue structure and lesion information reflected by different imaging modalities into a single, more comprehensive and accurate description, supporting medical diagnosis, surgical navigation, and other clinical applications. This study addresses three shortcomings of current fusion methods: partial spectral degradation, loss of edges and details, and insufficient color reproduction in regions invaded by adherent lesions. It proposes a novel multi-modal medical image fusion method that achieves multi-feature enhancement and color preservation in a multi-scale feature frequency domain decomposition filtering domain. The method decomposes each source image into four parts, namely the smoothing, texture, contour, and edge feature layers, applies a dedicated fusion rule to each layer, and generates the fusion result by image reconstruction. In particular, given the latent feature information contained in the smoothing layer, a visual saliency decomposition strategy is proposed to exploit energy and partial fiber-texture features across multiple scales and dimensions, improving the utilization of source image information. In the texture layer, a texture enhancement operator extracts detail and hierarchical information through spatial structure and information measurement, addressing the difficulty current methods have in distinguishing the invasion status of adherent lesion regions. In addition, because no public abdominal dataset was available, 403 sets of registered abdominal images are released in this study for public access and download. Experiments on the public Atlas dataset and the abdominal dataset compare the proposed method with six baseline methods. Relative to the most advanced of these methods, the similarity between the fused image and the source images improves by 22.92%, while the edge retention, spatial frequency, and contrast of the fused images improve by 35.79%, 28.79%, and 32.92%, respectively. The proposed method also outperforms the other methods in visual quality and computational efficiency.
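The pipeline described in the abstract can be summarized as a layer-wise decompose, fuse, and reconstruct scheme. The sketch below illustrates only that overall structure: the Gaussian residual decomposition, the max-absolute and averaging fusion rules, the function names decompose and fuse, and the random stand-in inputs are all illustrative assumptions, not the paper's actual multi-scale frequency domain filters, visual saliency decomposition strategy, or texture enhancement operator.

# Minimal sketch of a layer-wise decompose-fuse-reconstruct pipeline,
# assuming two pre-registered single-channel source images.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigmas=(1.0, 2.0, 4.0)):
    """Split an image into fine-to-coarse detail layers plus a smoothing layer."""
    layers = []
    current = img.astype(np.float64)
    for sigma in sigmas:                      # fine-to-coarse residuals
        blurred = gaussian_filter(current, sigma)
        layers.append(current - blurred)      # detail retained at this scale
        current = blurred
    layers.append(current)                    # smoothing (base) layer
    return layers                             # [edge, contour, texture, smoothing]

def fuse(img_a, img_b):
    """Fuse two registered source images layer by layer and reconstruct."""
    layers_a, layers_b = decompose(img_a), decompose(img_b)
    fused_layers = []
    for la, lb in zip(layers_a[:-1], layers_b[:-1]):
        # placeholder rule for detail layers: keep the larger-magnitude response
        fused_layers.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    # placeholder rule for the smoothing layer: simple averaging
    fused_layers.append(0.5 * (layers_a[-1] + layers_b[-1]))
    return np.clip(sum(fused_layers), 0.0, 255.0)   # reconstruction by summation

if __name__ == "__main__":
    ct = np.random.rand(256, 256) * 255.0     # stand-ins for registered CT/MRI slices
    mri = np.random.rand(256, 256) * 255.0
    print(fuse(ct, mri).shape)

In the paper, each of the four layers is assigned its own fusion rule; the max-absolute and averaging rules above are only generic stand-ins used to keep the sketch self-contained.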

Citation:

LIU Hui, ZHU Ji-Cheng, WANG Xin-Yu, SHENG Yu-Rui, ZHANG Cai-Ming, NIE Li-Qiang. Multi-scale feature frequency domain decomposition filtering for medical image fusion. Journal of Software (Ruan Jian Xue Bao), 2024, 35(12): 5687-5709 (in Chinese).

History
  • Received: July 13, 2023
  • Revised: September 11, 2023
  • Online: March 06, 2024