TY - JOUR
T1 - Deepfake detection using optimized VGG16-based framework enhanced with LIME for secure digital content
AU - Aldrees, Asma
AU - Abuzinadah, Nihal
AU - Umer, Muhammad
AU - AlHammadi, Dina Abdulaziz
AU - Alsubai, Shtwai
AU - Alharthi, Raed
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/10
Y1 - 2025/10
N2 - The rapid evolution of facial image manipulation technologies, such as Generative Adversarial Networks (GANs) and Stable Diffusion-based models, has increased the need for effective deepfake detection mechanisms to mitigate their misuse. This paper addresses the critical challenge of detecting deepfake images through a new deep learning-based approach built on the VGG16 model combined with a dedicated preprocessing stage. The VGG16 architecture was chosen for its deep structure and strong ability to capture intricate facial patterns when classifying facial images as real or manipulated. A robust preprocessing pipeline, including normalization, augmentation, facial alignment, and noise reduction, was implemented to optimize the input data and improve the detection of subtle manipulations. Additionally, Explainable AI (XAI) techniques, specifically the Local Interpretable Model-agnostic Explanations (LIME) framework, were integrated to provide transparent, visual explanations of the model's predictions, enhancing interpretability and user trust. To assess generalizability, the evaluation was extended beyond the initial dataset to three additional benchmark datasets: FaceForensics++, Celeb-DF (v2), and the DFDC Preview Set. These datasets cover a range of manipulation techniques, allowing comprehensive testing of the model's robustness across different scenarios. The proposed method outperformed baseline approaches, achieving accuracy, precision, recall, and F1-score of up to 0.99, and maintained strong results across the different datasets. These findings demonstrate that combining XAI with a VGG16 model and thorough preprocessing effectively counters advanced deepfake generation techniques such as StyleGAN2. This research contributes to a safer digital landscape by improving the detection and understanding of manipulated content, providing a practical way to confront the growing threat of deepfakes.
KW - Deep learning
KW - Deepfake detection
KW - Explainable AI (XAI)
KW - Image processing
KW - Local Interpretable Model-agnostic Explanations (LIME)
KW - VGG16 model
UR - https://www.scopus.com/pages/publications/105014802481
U2 - 10.1016/j.imavis.2025.105696
DO - 10.1016/j.imavis.2025.105696
M3 - Article
AN - SCOPUS:105014802481
SN - 0262-8856
VL - 162
JO - Image and Vision Computing
JF - Image and Vision Computing
M1 - 105696
ER -