TY - JOUR
T1 - Enhancing Security in Real-Time Video Surveillance
T2 - A Deep Learning-Based Remedial Approach for Adversarial Attack Mitigation
AU - Panigrahi, Gyana Ranjana
AU - Sethy, Prabira Kumar
AU - Behera, Santi Kumari
AU - Gupta, Manoj
AU - Alenizi, Farhan A.
AU - Nanthaamornphong, Aziz
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - This paper introduces an innovative methodology to disrupt deep-learning (DL) surveillance systems by implementing an adversarial framework strategy, inducing misclassification in live video objects and extending attacks to real-time models. Focusing on the vulnerability of image-categorization models, the study evaluates the effectiveness of face mask surveillance against adversarial threats. A real-time system, employing the ShuffleNet V1 transfer-learning algorithm, was trained on a Kaggle dataset for face mask detection accuracy. Using a white-box Fast Gradient Sign Method (FGSM) attack with epsilon at 0.13, the study successfully generated adversarial frames, deceiving the face mask detection system and prompting unintended video predictions. The findings highlight the risks posed by adversarial attacks on critical video surveillance systems, specifically those designed for face mask detection. The paper emphasizes the need for proactive measures to safeguard these systems before real-world deployment, crucial for ensuring their robustness and reliability in the face of potential adversarial threats.
AB - This paper introduces an innovative methodology to disrupt deep-learning (DL) surveillance systems by implementing an adversarial framework strategy, inducing misclassification in live video objects and extending attacks to real-time models. Focusing on the vulnerability of image-categorization models, the study evaluates the effectiveness of face mask surveillance against adversarial threats. A real-time system, employing the ShuffleNet V1 transfer-learning algorithm, was trained on a Kaggle dataset for face mask detection accuracy. Using a white-box Fast Gradient Sign Method (FGSM) attack with epsilon at 0.13, the study successfully generated adversarial frames, deceiving the face mask detection system and prompting unintended video predictions. The findings highlight the risks posed by adversarial attacks on critical video surveillance systems, specifically those designed for face mask detection. The paper emphasizes the need for proactive measures to safeguard these systems before real-world deployment, crucial for ensuring their robustness and reliability in the face of potential adversarial threats.
KW - Adversarial attacks
KW - ShuffleNetV1
KW - deep learning
KW - face mask detection
KW - face mask recognition
KW - video surveillance systems
UR - http://www.scopus.com/inward/record.url?scp=85197063620&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3418614
DO - 10.1109/ACCESS.2024.3418614
M3 - Article
AN - SCOPUS:85197063620
SN - 2169-3536
VL - 12
SP - 88913
EP - 88926
JO - IEEE Access
JF - IEEE Access
ER -