Sparse Multi-Modal Transformer with Masking for Alzheimer's Disease Classification
By: Cheng-Han Lu, Pei-Hsuan Tsai
Published: 2025-12-17
Subject: cs.AI
Abstract
This paper introduces SMMT, a sparse multi-modal transformer architecture that addresses the high computational and energy costs of dense self-attention in intelligent systems. SMMT combines cluster-based sparse attention, which reduces attention cost to near-linear complexity, with modality-wise masking, which provides robustness to incomplete inputs. The approach is evaluated on Alzheimer's Disease classification using the ADNI dataset.
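The abstract does not spell out how the two mechanisms are implemented, so the following is only a minimal PyTorch sketch of what cluster-based sparse attention (tokens grouped by k-means over their keys, attention computed within clusters only) and modality-wise masking (randomly dropping whole modalities during training) could look like. The function names, the k-means clustering step, and the drop probability are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code.
import torch
import torch.nn.functional as F


def cluster_sparse_attention(q, k, v, num_clusters=8, iters=5):
    """Attention restricted to clusters of tokens.

    q, k, v: (n_tokens, d_model). With balanced clusters, cost scales
    roughly with n * (n / num_clusters) instead of n^2.
    """
    n, d = q.shape
    # Simple k-means on the keys to assign each token to a cluster
    # (an assumed clustering choice for illustration).
    centroids = k[torch.randperm(n)[:num_clusters]].clone()
    for _ in range(iters):
        assign = torch.cdist(k, centroids).argmin(dim=1)  # (n,)
        for c in range(num_clusters):
            members = k[assign == c]
            if members.numel() > 0:
                centroids[c] = members.mean(dim=0)
    assign = torch.cdist(k, centroids).argmin(dim=1)  # final assignment

    # Attend only within each cluster.
    out = torch.zeros_like(v)
    for c in range(num_clusters):
        idx = (assign == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        qc, kc, vc = q[idx], k[idx], v[idx]
        attn = F.softmax(qc @ kc.T / d ** 0.5, dim=-1)
        out[idx] = attn @ vc
    return out


def mask_modalities(tokens_by_modality, p_drop=0.3):
    """Zero out entire modalities at random during training so the model
    learns to tolerate missing inputs (drop probability is assumed)."""
    return [t if torch.rand(()) > p_drop else torch.zeros_like(t)
            for t in tokens_by_modality]
```

As a usage sketch, per-modality token embeddings (e.g., imaging, clinical, and genetic features) would be passed through `mask_modalities` during training, concatenated, and fed to `cluster_sparse_attention` in place of dense self-attention.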