A Semantic Web-Enabled Explainable AI Framework for Interoperable and Scalable Detection of Autism Spectrum Disorder

Research output: Contribution to journal › Article › peer-review

Abstract

Autism Spectrum Disorder (ASD) is a lifelong condition that affects communication, social interaction, and behavior. Artificial intelligence (AI) shows promise for early detection, but many models struggle with accuracy, scalability, and interpretability, limiting clinical use. To address these gaps, this paper proposes a semantic web–enabled explainable AI (XAI) framework for accurate and interoperable ASD diagnosis. The framework has three parts: (1) a semantic data integration layer that harmonizes heterogeneous datasets, (2) a scalable feature engineering process using MapReduce with the Binary Capuchin Search Algorithm (BCSA), and (3) interpretable classifiers enriched with SHAP (SHapley Additive exPlanations) for transparent predictions. In experiments on ASD datasets, the framework achieved about 87% accuracy, outperforming baseline models by 7–10% and federated methods by 5%. Precision and F1 score improved by 6–8%, while semantic integration enhanced interpretability and trust. By uniting semantic technologies with explainable ML, the framework ensures scalability and offers a reliable, transparent pathway toward clinically useful AI.
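The classifier-plus-SHAP component described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the CSV path, column names, and model choice are hypothetical placeholders, and the BCSA/MapReduce feature-selection stage is assumed to have already produced the input features.

```python
# Minimal illustrative sketch of an interpretable ASD classifier with SHAP.
# NOTE: "asd_screening.csv" and the "asd_label" column are hypothetical;
# in the framework, the feature subset would come from the MapReduce + BCSA stage.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("asd_screening.csv")      # hypothetical harmonized dataset
X = df.drop(columns=["asd_label"])         # BCSA-selected features assumed
y = df["asd_label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

# SHAP values attribute each prediction to individual features,
# which is the per-case transparency the framework relies on.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)       # global view of feature influence
```

The same pattern applies regardless of the underlying classifier; only the choice of SHAP explainer (tree-based vs. model-agnostic) changes.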

Original language: English
Journal: International Journal on Semantic Web and Information Systems
Volume: 21
Issue number: 1
DOIs
State: Published - 2025

Keywords

  • Autism Spectrum Disorder
  • Clinical Decision Support
  • Explainable AI
  • Feature Selection
  • Interoperability
  • Semantic Web
