Medical Science | Volume 14, Issue 5, May 2025 | Pages: 1225-1234
Multimodal Learning for Breast Cancer Detection: Integrating Vision and Clinical Text Data
Abstract: Early and accurate detection of breast cancer is critical for improving patient outcomes and reducing mortality. In this paper, we propose a multimodal deep learning framework that integrates high-resolution mammographic image analysis using Vision Transformers (ViTs) with clinical text interpretation through BERT-based language models. By combining visual and textual information via an early fusion strategy, our approach captures complementary diagnostic cues to enhance prediction accuracy. We evaluate our model on two publicly available datasets, CBIS-DDSM and MIMIC-CXR, and demonstrate that the multimodal system significantly outperforms unimodal baselines. Our best-performing model achieves an accuracy of 91.4% and an AUROC of 0.94, surpassing both the ViT-only and BERT-only models. Additional experiments and ablation studies confirm the effectiveness of the fusion strategy and the contribution of each modality. These findings highlight the potential of multimodal transformer-based learning to support radiologists in early breast cancer diagnosis through more holistic and robust decision-making.
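To make the fusion strategy concrete, the sketch below shows one common way feature-level (early) fusion of ViT and BERT embeddings can be implemented in PyTorch with HuggingFace Transformers: the [CLS] representations of both encoders are concatenated and passed to a small classification head. The pretrained checkpoints, layer sizes, and classifier design here are illustrative assumptions, not the authors' reported configuration.

import torch
import torch.nn as nn
from transformers import ViTModel, BertModel

class MultimodalFusionClassifier(nn.Module):
    """Early fusion of ViT image features and BERT text features (illustrative sketch)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Assumed backbones; the abstract does not name specific checkpoints.
        self.vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        fused_dim = self.vit.config.hidden_size + self.bert.config.hidden_size  # 768 + 768
        # Classification head over the concatenated embeddings (hypothetical sizes).
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_classes),
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        # [CLS] token embedding from each encoder: shape (batch, 768).
        img_cls = self.vit(pixel_values=pixel_values).last_hidden_state[:, 0]
        txt_cls = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        # Early fusion: concatenate feature vectors before classification.
        fused = torch.cat([img_cls, txt_cls], dim=-1)
        return self.classifier(fused)

Concatenation at the feature level, as above, lets the classifier weigh visual and textual cues jointly; this contrasts with late fusion, where each modality's predictions would be combined only after separate classification.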
Keywords: breast cancer detection, multimodal learning, vision transformers, clinical text analysis, deep learning
How to Cite: Maryam Alaei, Mohammad Zare, Mehdi Hazrati, Amir Chekini, "Multimodal Learning for Breast Cancer Detection: Integrating Vision and Clinical Text Data", International Journal of Science and Research (IJSR), Volume 14, Issue 5, May 2025, Pages: 1225-1234, https://www.ijsr.net/getabstract.php?paperid=SR25509140031, DOI: https://dx.doi.org/10.21275/SR25509140031