TY - JOUR
T1 - Guaranteeing Correctness in Black-Box Machine Learning
T2 - A Fusion of Explainable AI and Formal Methods for Healthcare Decision-Making
AU - Khan, Nadia
AU - Nauman, Muhammad
AU - Almadhor, Ahmad S.
AU - Akhtar, Nadeem
AU - Alghuried, Abdullah
AU - Alhudhaif, Adi
N1 - Publisher Copyright:
© 2024 The Authors.
PY - 2024
Y1 - 2024
AB - In recent years, Explainable Artificial Intelligence (XAI) has attracted considerable attention from the research community, with efforts focused primarily on elucidating the opaque decision-making processes of complex black-box machine learning systems such as deep neural networks. This surge of interest stems from the widespread adoption of black-box models in critical domains such as healthcare and fraud detection, which underscores the pressing need to understand and rigorously validate their decision-making mechanisms. However, prominent XAI techniques, including LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), rely on heuristics and cannot guarantee the correctness of the explanations they provide. This article systematically addresses this critical limitation of machine learning and deep learning models, underscoring XAI's role in promoting model transparency and improving decision-making quality. Furthermore, this study advocates integrating Formal Methods to provide correctness guarantees for the internal decision-making of black-box models. The proposed methodology unfolds in three stages: first, training black-box neural network models and using them to generate synthetic datasets; second, employing LIME and SHAP to interpret the models and visualize their internal decision-making; and finally, training decision trees on the synthetic datasets and applying Formal Methods to verify the correctness of the black-box models' decisions. To validate the proposed approach, experiments were conducted on four widely used medical datasets, including the Wisconsin Breast Cancer and Thyroid Cancer (TC) datasets from the UCI Machine Learning Repository. This research contributes a novel approach that integrates XAI and Formal Methods, thereby furnishing correctness guarantees for internal decision-making processes in the healthcare domain.
KW - Black-box machine learning
KW - cancer prognosis
KW - colored Petri nets
KW - decision-making
KW - formal methods
KW - formal verification
KW - interpretable machine learning
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=85197024646&partnerID=8YFLogxK
DO - 10.1109/ACCESS.2024.3420415
M3 - Article
AN - SCOPUS:85197024646
SN - 2169-3536
VL - 12
SP - 90299
EP - 90316
JO - IEEE Access
JF - IEEE Access
ER -