Multi-modal Multi-Level Neuroimaging Fusion with Modality-Aware Mask-Guided Attention and Deep Canonical Correlation Analysis to Improve Dementia Risk Prediction
dc.contributor.author | Singh, Swapnil Satyendra | en |
dc.contributor.committeechair | Zhang, Liqing | en |
dc.contributor.committeechair | Ma, Da | en |
dc.contributor.committeemember | Thomas, Christopher Lee | en |
dc.contributor.committeemember | Murali, T. M. | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-06-12T08:02:29Z | en |
dc.date.available | 2025-06-12T08:02:29Z | en |
dc.date.issued | 2025-06-11 | en |
dc.description.abstract | Alzheimer's Disease (AD) is a progressive neurodegenerative disorder characterized by structural and molecular changes in the brain. Early diagnosis and accurate subtyping are essential for timely intervention and therapeutic planning. This thesis presents a novel multimodal deep learning framework that integrates T1-weighted MRI and Amyloid PET imaging to improve the diagnosis and stratification of AD. The proposed architecture leverages a two-stage pipeline involving modality-specific feature extraction using ResNet50 backbones, followed by middle fusion enhanced with a Modality-Aware Mask-Guided Attention (MAMGA) mechanism. To address missing modalities and inter-modal misalignment, the model incorporates Random Modality Masking and Deep Canonical Correlation Analysis (DCCA) for cross-modal feature alignment. Experiments on the ADNI dataset demonstrate that the proposed MRI+PET (MAMGA+DCCA) model achieves a balanced accuracy of 0.998 and an AUC-ROC of 0.999 in distinguishing stable normal cognition (sNC) from stable Alzheimer's Disease (sDAT). For the more challenging task of separating stable and progressive MCI (sMCI vs. pMCI), the best-performing fusion model achieved a balanced accuracy of 0.732 and an AUC of 0.789. Extensive ablation studies confirm the contributions of MAMGA, DCCA, and dual-optimizer strategies in enhancing diagnostic robustness. This work highlights the clinical potential of multimodal deep learning frameworks in improving early Alzheimer's detection and stratification. | en |
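The following is a minimal illustrative sketch of the kind of pipeline the abstract describes, not the author's implementation. It assumes PyTorch and torchvision, uses 2D ResNet50 backbones for brevity (the thesis works with volumetric MRI/PET, which would need a 3D-adapted encoder), and all names (MRIPETFusionNet, dcca_loss, the attention head standing in for MAMGA) are hypothetical.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class MRIPETFusionNet(nn.Module):
    def __init__(self, n_classes=2, feat_dim=256, p_mask=0.3):
        super().__init__()
        # Modality-specific ResNet50 backbones with the final fc removed
        self.mri_enc = resnet50(weights=None)
        self.pet_enc = resnet50(weights=None)
        self.mri_enc.fc = nn.Identity()
        self.pet_enc.fc = nn.Identity()
        self.mri_proj = nn.Linear(2048, feat_dim)
        self.pet_proj = nn.Linear(2048, feat_dim)
        # Simplified stand-in for mask-guided attention: weights each modality's
        # features conditioned on which modalities are present (the mask)
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim + 2, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.p_mask = p_mask

    def forward(self, mri, pet):
        f_mri = self.mri_proj(self.mri_enc(mri))
        f_pet = self.pet_proj(self.pet_enc(pet))
        # Random Modality Masking: during training, randomly zero out one
        # modality for a subset of samples to simulate missing scans
        mask = torch.ones(mri.size(0), 2, device=mri.device)
        if self.training:
            drop = torch.rand(mri.size(0), device=mri.device) < self.p_mask
            which = torch.randint(0, 2, (mri.size(0),), device=mri.device)
            mask[drop, which[drop]] = 0.0
        f_mri = f_mri * mask[:, :1]
        f_pet = f_pet * mask[:, 1:]
        # Middle fusion: attention weights combine the two feature streams
        w = self.attn(torch.cat([f_mri, f_pet, mask], dim=-1))
        fused = w[:, :1] * f_mri + w[:, 1:] * f_pet
        return self.classifier(fused), f_mri, f_pet


def dcca_loss(h1, h2, eps=1e-4):
    """Negative sum of canonical correlations between the two views."""
    n = h1.size(0)
    h1 = h1 - h1.mean(0)
    h2 = h2 - h2.mean(0)
    s11 = h1.T @ h1 / (n - 1) + eps * torch.eye(h1.size(1), device=h1.device)
    s22 = h2.T @ h2 / (n - 1) + eps * torch.eye(h2.size(1), device=h2.device)
    s12 = h1.T @ h2 / (n - 1)
    # T = S11^{-1/2} S12 S22^{-1/2}; canonical correlations are its singular values
    e1, v1 = torch.linalg.eigh(s11)
    e2, v2 = torch.linalg.eigh(s22)
    s11_inv_sqrt = v1 @ torch.diag(e1.clamp_min(eps).rsqrt()) @ v1.T
    s22_inv_sqrt = v2 @ torch.diag(e2.clamp_min(eps).rsqrt()) @ v2.T
    t = s11_inv_sqrt @ s12 @ s22_inv_sqrt
    return -torch.linalg.svdvals(t).sum()

In this sketch, training would combine a cross-entropy classification loss on the logits with dcca_loss applied to the two projected feature streams, e.g. loss = ce(logits, y) + lam * dcca_loss(f_mri, f_pet). The dual-optimizer strategy mentioned in the abstract is not reproduced here.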
dc.description.abstractgeneral | Alzheimer's Disease is a brain disorder that slowly destroys memory, thinking ability, and the capacity to carry out everyday tasks. Detecting it at an early stage is crucial so that doctors can begin timely treatment, plan care, and support the patient and their family. This research introduces a computer-based system that uses brain scans to help detect the early signs of Alzheimer's more accurately and reliably. It uses two types of medical images: MRI scans, which show the shape and size of different brain regions, and PET scans, which highlight chemical changes linked to the disease. The system combines information from both types of scans using advanced artificial intelligence (AI) techniques. It learns patterns in the brain that are linked to Alzheimer's and uses them to predict which patients are at risk. Even in real-world scenarios where only one scan may be available, the method still works reliably due to a special design that simulates missing data during training. When tested on a large dataset of brain scans from real patients, the model achieved very high accuracy—over 99% in identifying people with healthy memory compared to those with Alzheimer's. It also showed promising results in identifying individuals who might progress from early memory problems to full-blown Alzheimer's. This study shows how AI can be used to support doctors in making earlier, more confident diagnoses, which is an important step toward better treatment outcomes and quality of life for patients. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:44257 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135490 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Alzheimer's Disease | en |
dc.subject | Deep Learning | en |
dc.subject | DCCA | en |
dc.subject | Early Detection | en |
dc.subject | Modality Fusion | en |
dc.subject | MRI | en |
dc.subject | Multimodal Fusion | en |
dc.subject | PET | en |
dc.title | Multi-modal Multi-Level Neuroimaging Fusion with Modality-Aware Mask-Guided Attention and Deep Canonical Correlation Analysis to Improve Dementia Risk Prediction | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |