Explainable AI for Unraveling the Significance of Visual Cues in High Stakes Deception Detection

Suhaib Salah, Hagar Elbatanouny, Abrar Sobuh, Eqab Almajali, Wasiq Khan, Haya Alaskar, Adel Binbusayyis, Taimur Hassan, Jawad Yousaf, Abir Hussain

Research output: Contribution to journal › Article › peer-review

Abstract

Deception, a widespread aspect of human behavior, has significant implications in fields such as law enforcement, security, judicial proceedings, and social interactions. Detecting deception accurately, especially in high-stakes environments, is critical for ensuring justice and security. Recently, machine learning has significantly enhanced deception detection capabilities by analyzing various behavioral and visual cues. However, machine learning models often operate as opaque "black boxes," offering high predictive accuracy without explaining the reasoning behind their decisions. This lack of transparency necessitates the integration of Explainable Artificial Intelligence (XAI) to make the models' decisions understandable and trustworthy. This study applies existing model-agnostic XAI techniques (Permutation Importance, Partial Dependence Plots, and SHapley Additive exPlanations) to showcase the contributions of visual features in deception detection. Using the Real-Life Trial dataset, recognized as the most valuable high-stakes dataset, we demonstrate that a Multi-layer Perceptron achieved the highest accuracy of 88% and a recall of 92.86%. In addition to these existing techniques, the Real-Life Trial dataset inspired us to develop a novel technique: 'set-of-features permutation importance'. This study is also novel in that it extensively applies XAI techniques to deception detection on the Real-Life Trial dataset. Experimental results show that visual cues related to eyebrow movements are the most indicative of deceptive behavior. Beyond these findings, our work underscores the importance of making machine learning models more transparent and explainable, thereby enhancing their utility for human-in-the-loop AI and their ethical acceptability.
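The abstract refers to standard permutation importance and the proposed 'set-of-features permutation importance'. As an illustration only, the sketch below shows one way these ideas could be realized with scikit-learn and NumPy: single-feature permutation importance for a Multi-layer Perceptron, plus a grouped variant in which a set of related visual cues (e.g., eyebrow movements) is permuted jointly. The feature names, cue groupings, and synthetic data are assumptions made for demonstration and are not taken from the paper.

```python
# Minimal sketch, not the authors' code: trains a scikit-learn MLPClassifier on
# synthetic stand-ins for binary visual-cue annotations, then computes
# (a) standard single-feature permutation importance and (b) a simple
# "set-of-features" permutation importance, where a whole group of related cues
# is permuted jointly. Feature names, groupings, and data are hypothetical.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

feature_names = ["eyebrows_raise", "eyebrows_frown", "gaze_aversion",
                 "lips_protruded", "head_side_turn", "hand_gesture"]
X = rng.integers(0, 2, size=(300, len(feature_names))).astype(float)
# Synthetic labels loosely driven by the eyebrow cues, purely for illustration.
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(300) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))

# (a) Standard permutation importance: shuffle one feature at a time.
pi = permutation_importance(clf, X_te, y_te, n_repeats=30, random_state=0)
for name, score in sorted(zip(feature_names, pi.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>15s}: {score:+.3f}")

# (b) Set-of-features permutation importance: shuffle an entire group of related
# cues together and report the mean drop in accuracy from the baseline.
def set_permutation_importance(model, X, y, cols, n_repeats=30, seed=0):
    local_rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    drops = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        order = local_rng.permutation(len(X))
        X_perm[:, cols] = X[order][:, cols]   # permute the whole set jointly
        drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
    return float(np.mean(drops))

groups = {"eyebrows": [0, 1], "gaze": [2], "lips": [3], "head": [4], "hands": [5]}
for group_name, cols in groups.items():
    drop = set_permutation_importance(clf, X_te, y_te, cols)
    print(f"set importance ({group_name}): {drop:+.3f}")
```

A larger accuracy drop for a group than for any of its individual members suggests that the cues in that set carry complementary signal, which is the kind of conclusion the grouped variant is intended to support.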

Original language: English
Pages (from-to): 65839-65862
Number of pages: 24
Journal: IEEE Access
Volume: 13
DOIs
State: Published - 2025

Keywords

  • Deception detection
  • black-box models
  • explainable machine learning
  • human-in-the-loop AI
  • model-agnostic techniques
  • multi-layer perceptron
  • partial dependence plots
  • permutation importance
  • SHAP
  • trustworthy AI
