Responsible Artificial Intelligence for Mental Health Disorders: Current Applications and Future Challenges

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Mental health disorders (MHDs) have significant medical and financial impacts on patients and society. Despite the potential opportunities for artificial intelligence (AI) in the mental health field, these systems play no noticeable role in real medical environments. The main reason for this limitation is domain experts' lack of trust in the decisions of AI-based systems. Recently, trustworthy AI (TAI) guidelines have been proposed to support the building of responsible AI (RAI) systems that are robust, fair, and transparent. This review investigates the literature on TAI for machine learning (ML) and deep learning (DL) architectures in the MHD domain. To the best of our knowledge, this is the first study to analyze the literature on the trustworthiness of ML and DL models in the MHD domain. The review identifies the advances in the literature on RAI models in the MHD domain and examines how these relate to the current limitations on the applicability of such models in real medical environments. We find that the current literature on AI-based models in MHD has severe limitations compared to other domains regarding TAI standards and implementations. We discuss these limitations and suggest possible future research directions that could address these challenges.

Original language: English
Article number: e20240101
Journal: Journal of Disability Research
Volume: 4
Issue number: 1
DOIs
State: Published - 3 Jan 2025

Keywords

  • deep learning
  • machine learning robustness
  • mental health disorders
  • model explainability
  • responsible AI
  • trustworthy AI

