TY - JOUR
T1 - Explainable AI for Unraveling the Significance of Visual Cues in High Stakes Deception Detection
AU - Salah, Suhaib
AU - Elbatanouny, Hagar
AU - Sobuh, Abrar
AU - Almajali, Eqab
AU - Khan, Wasiq
AU - Alaskar, Haya
AU - Binbusayyis, Adel
AU - Hassan, Taimur
AU - Yousaf, Jawad
AU - Hussain, Abir
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Deception, a widespread aspect of human behavior, has significant implications in fields such as law enforcement, security, judicial proceedings, and social settings. Detecting deception accurately, especially in high-stakes environments, is critical for ensuring justice and security. Recently, machine learning has significantly enhanced deception detection capabilities by analyzing various behavioral and visual cues. However, machine learning models often operate as opaque "black boxes," offering high predictive accuracy without explaining the reasoning behind their decisions. This lack of transparency necessitates the integration of Explainable Artificial Intelligence to make the models' decisions understandable and trustworthy. This study proposes the implementation of existing model-agnostic Explainable Artificial Intelligence techniques - Permutation Importance, Partial Dependence Plots, and SHapley Additive exPlanations - to showcase the contributions of visual features in deception detection. Using the Real-Life Trial dataset, recognized as the most valuable high-stakes dataset, we demonstrate that a Multi-layer Perceptron achieved the highest accuracy of 88% and a recall of 92.86%. Along with the aforementioned existing techniques, the Real-Life Trial dataset inspired us to develop a novel technique: 'set-of-features permutation importance'. Additionally, this study is novel in that it extensively applies XAI techniques to deception detection on the Real-Life Trial dataset. Experimental results show that visual cues related to eyebrow movements are the most indicative of deceptive behavior. Along with these new findings, our work underscores the importance of making machine learning models more transparent and explainable, thereby enhancing their utility for human-in-loop AI and ethical acceptability.
AB - Deception, a widespread aspect of human behavior, has significant implications in fields such as law enforcement, security, judicial proceedings, and social settings. Detecting deception accurately, especially in high-stakes environments, is critical for ensuring justice and security. Recently, machine learning has significantly enhanced deception detection capabilities by analyzing various behavioral and visual cues. However, machine learning models often operate as opaque "black boxes," offering high predictive accuracy without explaining the reasoning behind their decisions. This lack of transparency necessitates the integration of Explainable Artificial Intelligence to make the models' decisions understandable and trustworthy. This study proposes the implementation of existing model-agnostic Explainable Artificial Intelligence techniques - Permutation Importance, Partial Dependence Plots, and SHapley Additive exPlanations - to showcase the contributions of visual features in deception detection. Using the Real-Life Trial dataset, recognized as the most valuable high-stakes dataset, we demonstrate that a Multi-layer Perceptron achieved the highest accuracy of 88% and a recall of 92.86%. Along with the aforementioned existing techniques, the Real-Life Trial dataset inspired us to develop a novel technique: 'set-of-features permutation importance'. Additionally, this study is novel in that it extensively applies XAI techniques to deception detection on the Real-Life Trial dataset. Experimental results show that visual cues related to eyebrow movements are the most indicative of deceptive behavior. Along with these new findings, our work underscores the importance of making machine learning models more transparent and explainable, thereby enhancing their utility for human-in-loop AI and ethical acceptability.
KW - Deception detection
KW - black-box models
KW - explainable machine learning
KW - human-in-loop AI
KW - model-agnostic techniques
KW - multi-layer perceptron
KW - partial dependence plots
KW - permutation importance
KW - shap
KW - trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=105003111329&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2025.3558875
DO - 10.1109/ACCESS.2025.3558875
M3 - Article
AN - SCOPUS:105003111329
SN - 2169-3536
VL - 13
SP - 65839
EP - 65862
JO - IEEE Access
JF - IEEE Access
ER -