Title
Multimodality Medical Image Fusion
Author
Abd El-Hafiez, Nahed Tawfik Hamed.
Thesis Committee
Researcher / Nahed Tawfik Hamed Abd El-Hafiez
Supervisor / Moawad Ibrahim Dessouky
Examiner / Taha El-Sayed Taha
Examiner / Ahmed Farag Seddik
Subject
Multispectral imaging. Computer vision.
Publication Date
2021.
Number of Pages
141 p.
Language
English
Degree
Doctorate (Ph.D.)
Specialization
Electrical and Electronic Engineering
Date of Approval
27/4/2021
Place of Approval
Menoufia University - Faculty of Electronic Engineering - Department of Electronics and Electrical Communications Engineering
Table of Contents
Only 14 pages out of 141 are available for public view.

Abstract

Multimodality medical image fusion is the process of combining multiple images acquired from one or more imaging modalities into a single image. Medical image fusion methods improve the quality of medical images by preserving the salient features of the source images in the fusion result, thereby increasing the clinical utility of medical images for assessment and diagnosis. This is achieved by capturing, in the fused image, the complementary information present in two or more images of different modalities. Medical image fusion is generally concerned with the Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Positron Emission Tomography (PET), Computed Tomography (CT), and Single-Photon Emission Computed Tomography (SPECT) modalities. Each modality has its own merits and drawbacks, which motivates new methods for merging information across modalities. Researchers have presented several medical image fusion methods that achieve good results; however, the field is still evolving and must continue to be enhanced to meet its growing challenges. This thesis presents multimodality medical image fusion methods that enhance the quality of the fused image.
In the first proposed method, a hybrid algorithm that operates at both the pixel and feature levels of multimodal medical image fusion is presented. For the pixel-level fusion, the source images are decomposed into low- and high-frequency components using the Discrete Wavelet Transform (DWT), and the low-frequency coefficients are fused with the maximum fusion rule. The curvelet transform is then applied to the high-frequency coefficients, and the resulting fine-scale high-frequency sub-bands are fused using Principal Component Analysis (PCA). The feature-level fusion, in turn, is accomplished by extracting various features from the coarse and detail sub-bands and employing them in the fusion process; these features include the mean, variance, entropy, visibility, and standard deviation (std). Thereafter, the inverse curvelet transform is applied to the fused high-frequency coefficients, and the final fused image is obtained by applying the inverse DWT to the fused low- and high-frequency components. A simplified sketch of this pipeline is given below.
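The following Python sketch illustrates the pixel-level part of this pipeline under simplifying assumptions: it uses PyWavelets for a single-level DWT, fuses the approximation (low-frequency) coefficients with the maximum rule, and fuses the detail (high-frequency) sub-bands with PCA-derived weights. The curvelet stage and the feature-level fusion described in the thesis are omitted, and the function names and the "db1" wavelet are illustrative choices, not the thesis's exact configuration.

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """Derive fusion weights from the principal eigenvector of the
    2x2 covariance matrix of the two coefficient sets."""
    data = np.vstack([a.ravel(), b.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant eigenvector
    return v / v.sum()                          # normalize to weights

def fuse_dwt_pca(img1, img2, wavelet="db1"):
    """Pixel-level fusion sketch: maximum rule on the approximation
    coefficients, PCA-weighted average on each detail sub-band."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)

    cA = np.maximum(cA1, cA2)  # maximum fusion rule (low frequency)

    details = []
    for d1, d2 in [(cH1, cH2), (cV1, cV2), (cD1, cD2)]:
        w = pca_weights(d1, d2)
        details.append(w[0] * d1 + w[1] * d2)  # PCA fusion (high frequency)

    return pywt.idwt2((cA, tuple(details)), wavelet)
```

Assuming img1 and img2 are co-registered grayscale arrays of equal size (e.g., an MRI and a CT slice normalized to [0, 1]), fuse_dwt_pca(img1, img2) returns a fused slice of the same size; the thesis's full method additionally passes the detail coefficients through a curvelet decomposition before fusion.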
In the second proposed method, a Stacked Sparse Auto-Encoder (SSAE), a widespread class of deep neural networks, is applied for feature extraction. The SSAE is an efficient unsupervised feature extraction technique with a strong capability for representing complex data. In addition, the Non-Subsampled Contourlet Transform (NSCT) is a flexible multi-scale decomposition technique that is superior to traditional decomposition techniques in several respects. Motivated by these merits, the image fusion is based on both the SSAE and the NSCT. First, the input images are decomposed into low- and high-frequency coefficient sub-bands. Second, a two-layer SSAE is employed as a feature extractor to obtain a deep, sparse representation of the high-frequency coefficients. Third, the spatial frequencies of the obtained features are estimated to drive the high-frequency coefficient fusion. Then, the maximum fusion rule is applied to combine the low-frequency sub-band coefficients. Finally, the fused image is obtained via the inverse NSCT. A sketch of the spatial-frequency selection rule appears after this paragraph.

The proposed methods are implemented and evaluated on different pairs of medical image modalities. The results demonstrate that the proposed methods improve the quality of the final fused image in terms of Mutual Information (MI), Correlation Coefficient (CC), entropy, Structural Similarity Index (SSIM), Edge Strength Similarity for Image Quality (ESSIM), Feature Similarity Index (FSIM), variance, std, edge intensity, local contrast, and the edge-based similarity measure (QAB/F). In addition, the experimental results show that the proposed methods are superior to traditional methods in both subjective and objective evaluations.
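As a minimal illustration of the spatial-frequency rule in the third step, the Python sketch below computes the spatial frequency of two feature maps and selects, per high-frequency sub-band, the coefficients whose features score higher. The NSCT decomposition and the SSAE feature extraction are not reproduced here; spatial_frequency follows the standard row/column-frequency definition, and selecting whole sub-bands (rather than blocks) is a simplifying assumption.

```python
import numpy as np

def spatial_frequency(x):
    """Standard spatial frequency: sqrt(RF^2 + CF^2), where RF and CF
    are the RMS of horizontal and vertical first differences."""
    rf = np.sqrt(np.mean(np.diff(x, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(x, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_highpass(coeffs1, coeffs2, feats1, feats2):
    """For each high-frequency sub-band, keep the source coefficients
    whose extracted features have the larger spatial frequency."""
    fused = []
    for c1, c2, f1, f2 in zip(coeffs1, coeffs2, feats1, feats2):
        if spatial_frequency(f1) >= spatial_frequency(f2):
            fused.append(c1)
        else:
            fused.append(c2)
    return fused
```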
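To make the evaluation concrete, the snippet below shows one common way to compute two of the listed metrics, entropy and Mutual Information, from image histograms with NumPy. The 256-bin histograms and the fusion-MI convention MI = MI(A, F) + MI(B, F) are widely used choices, assumed here rather than taken from the thesis.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """MI between two images, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_mi(src1, src2, fused):
    """Information the fused image shares with each source image."""
    return mutual_information(src1, fused) + mutual_information(src2, fused)
```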