TY - JOUR
T1 - Exposing low-quality deepfake videos of Social Network Service using Spatial Restored Detection Framework
AU - Li, Ying
AU - Bian, Shan
AU - Wang, Chuntao
AU - Polat, Kemal
AU - Alhudhaif, Adi
AU - Alenezi, Fayadh
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/11/30
Y1 - 2023/11/30
N2 - The increasing abuse of facial manipulation methods such as FaceSwap and Deepfakes seriously threatens the authenticity of digital images and videos on the Internet. It is therefore of great importance to verify facial videos in order to confirm their contents and curb fake news and rumors. Many researchers have devoted attention to deepfake detection and have put forward a number of deep-learning-based detection models. However, most existing approaches suffer performance degradation when detecting low-quality (LQ) videos, i.e., videos heavily compressed or downscaled by Social Network Services (SNS), which limits their use in real applications. To address this issue, this paper proposes a novel Spatial Restore Detection Framework (SRDF) that improves detection performance on LQ videos by restoring spatial features. We designed a feature extraction-enhancement block and a mapping block, inspired by super-resolution methods, to restore and enhance texture features. An attention module was introduced to guide the restoration and enhancement stage toward different local areas. In addition, an improved isolated loss was put forward to prevent any single attended area from expanding excessively. Moreover, we adopted a regional data augmentation strategy to promote feature restoration and enhancement in the attended regions. Extensive experiments on two deepfake datasets validate the superiority of the proposed method over the state of the art, especially in detecting low-quality deepfake videos.
AB - The increasing abuse of facial manipulation methods such as FaceSwap and Deepfakes seriously threatens the authenticity of digital images and videos on the Internet. It is therefore of great importance to verify facial videos in order to confirm their contents and curb fake news and rumors. Many researchers have devoted attention to deepfake detection and have put forward a number of deep-learning-based detection models. However, most existing approaches suffer performance degradation when detecting low-quality (LQ) videos, i.e., videos heavily compressed or downscaled by Social Network Services (SNS), which limits their use in real applications. To address this issue, this paper proposes a novel Spatial Restore Detection Framework (SRDF) that improves detection performance on LQ videos by restoring spatial features. We designed a feature extraction-enhancement block and a mapping block, inspired by super-resolution methods, to restore and enhance texture features. An attention module was introduced to guide the restoration and enhancement stage toward different local areas. In addition, an improved isolated loss was put forward to prevent any single attended area from expanding excessively. Moreover, we adopted a regional data augmentation strategy to promote feature restoration and enhancement in the attended regions. Extensive experiments on two deepfake datasets validate the superiority of the proposed method over the state of the art, especially in detecting low-quality deepfake videos.
KW - Attention mechanism
KW - Deepfake detection
KW - Super resolution
KW - Video forensics
UR - http://www.scopus.com/inward/record.url?scp=85162054325&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2023.120646
DO - 10.1016/j.eswa.2023.120646
M3 - Article
AN - SCOPUS:85162054325
SN - 0957-4174
VL - 231
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 120646
ER -