Abstract
The early and precise identification of Alzheimer's disease (AD) remains a considerable clinical challenge because structural alterations are subtle and symptoms overlap across disease stages. This study presents a novel Deformable Attention Vision Transformer (DA-ViT) architecture that integrates deformable multi-head self-attention (MHSA) with a multi-layer perceptron (MLP) block for efficient classification of AD from magnetic resonance imaging (MRI) scans. In contrast to traditional vision transformers, our deformable MHSA module selectively concentrates on spatially relevant patches through learned offset predictions, markedly reducing computational demands while improving localized feature representation. DA-ViT contains only 0.93 million parameters, making it exceptionally suitable for deployment in resource-limited settings. We evaluate the model on a class-imbalanced Alzheimer's MRI dataset comprising 6400 images across four categories, achieving a test accuracy of 80.31%, a macro F1-score of 0.80, and an area under the receiver operating characteristic curve (AUC) of 1.00 for the Mild Demented category. Thorough ablation studies identify the optimal configuration of transformer depth, number of attention heads, and embedding dimension. Moreover, comparative experiments show that DA-ViT surpasses state-of-the-art pre-trained convolutional neural network (CNN) models in both accuracy and parameter efficiency.
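The abstract describes a deformable MHSA module in which each query patch predicts learned 2-D offsets and attends only to features sampled at those offset locations, rather than to the full patch grid. The paper's exact DA-ViT implementation is not reproduced here; the following is a minimal NumPy sketch of the general deformable-attention idea (per-query learned offsets, bilinear sampling, attention over the sampled points only), with all weight matrices and the `n_points` parameter being illustrative assumptions rather than the authors' configuration.

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Bilinearly sample a feature map feat (H, W, C) at fractional coords."""
    H, W, C = feat.shape
    ys = np.clip(ys, 0, H - 1)
    xs = np.clip(xs, 0, W - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[:, None]
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def deformable_attention_head(feat, W_q, W_off, W_k, W_v, n_points=4):
    """One deformable attention head over a feature map feat (H, W, C).

    Each query position predicts n_points 2-D offsets from its own feature,
    samples keys/values at those offset locations, and attends over only
    those n_points samples instead of the full H*W grid (illustrative
    sketch; weight matrices here are hypothetical, not the paper's).
    """
    H, W, C = feat.shape
    tokens = feat.reshape(-1, C)                        # (HW, C)
    queries = tokens @ W_q                              # (HW, C)
    offsets = (tokens @ W_off).reshape(-1, n_points, 2)  # learned offsets
    grid_y, grid_x = np.mgrid[0:H, 0:W]
    base = np.stack([grid_y.ravel(), grid_x.ravel()], axis=-1)  # (HW, 2)
    loc = base[:, None, :] + offsets                    # (HW, n_points, 2)
    sampled = bilinear_sample(feat, loc[..., 0].ravel(), loc[..., 1].ravel())
    sampled = sampled.reshape(-1, n_points, C)
    keys, values = sampled @ W_k, sampled @ W_v         # (HW, n_points, C)
    logits = np.einsum('qc,qpc->qp', queries, keys) / np.sqrt(C)
    attn = np.exp(logits - logits.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                 # softmax over points
    out = np.einsum('qp,qpc->qc', attn, values)
    return out.reshape(H, W, C)
```

Because each of the H*W queries attends to only `n_points` sampled locations rather than all H*W positions, the attention cost scales linearly with the number of patches, which is consistent with the parameter- and compute-efficiency claims in the abstract.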
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 2395-2418 |
| Number of pages | 24 |
| Journal | CMES - Computer Modeling in Engineering and Sciences |
| Volume | 144 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Alzheimer's disease classification
- MRI analysis
- Bayesian optimization
- deformable attention
- vision transformer
Fingerprint
Dive into the research topics of 'DA-ViT: Deformable Attention Vision Transformer for Alzheimer's Disease Classification from MRI Scans'. Together they form a unique fingerprint.