TY - JOUR
T1 - A Novel Framework for Vehicle Detection and Tracking in Night Ware Surveillance Systems
AU - Abdullah Almujally, Nouf
AU - Mehmood Qureshi, Asifa
AU - Alazeb, Abdulwahab
AU - Rahman, Hameedur
AU - Sadiq, Touseef
AU - Alonazi, Mohammed
AU - Algarni, Asaad
AU - Jalal, Ahmad
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - In traffic surveillance systems, where effective traffic management and safety are the primary concerns, vehicle detection and tracking play an important role. Low-light environments suffer from low brightness, low contrast, and noise caused by poor lighting or insufficient exposure. In this paper, we propose a vehicle detection and tracking model for aerial images captured at nighttime. Before object detection, we perform defogging and image enhancement using the MIRNet architecture. After pre-processing, YOLOv5 is used to locate each vehicle's position in the image. Each detected vehicle is passed to a Scale-Invariant Feature Transform (SIFT) feature extraction algorithm to assign a unique identifier for tracking multiple vehicles across image frames. To obtain the best possible location of each vehicle in succeeding frames, templates are extracted and template matching is performed. The proposed model achieves a precision score of 0.924 for detection and 0.861 for tracking on the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset, and 0.904 for detection and 0.833 for tracking on the Vision Meets Drone Single-Object Tracking (VisDrone) dataset.
AB - In traffic surveillance systems, where effective traffic management and safety are the primary concerns, vehicle detection and tracking play an important role. Low-light environments suffer from low brightness, low contrast, and noise caused by poor lighting or insufficient exposure. In this paper, we propose a vehicle detection and tracking model for aerial images captured at nighttime. Before object detection, we perform defogging and image enhancement using the MIRNet architecture. After pre-processing, YOLOv5 is used to locate each vehicle's position in the image. Each detected vehicle is passed to a Scale-Invariant Feature Transform (SIFT) feature extraction algorithm to assign a unique identifier for tracking multiple vehicles across image frames. To obtain the best possible location of each vehicle in succeeding frames, templates are extracted and template matching is performed. The proposed model achieves a precision score of 0.924 for detection and 0.861 for tracking on the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset, and 0.904 for detection and 0.833 for tracking on the Vision Meets Drone Single-Object Tracking (VisDrone) dataset.
KW - deep learning
KW - Defogging
KW - feature fusion
KW - image normalization
KW - vehicle detection and tracking
KW - YOLOv5
UR - http://www.scopus.com/inward/record.url?scp=85196713123&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3417267
DO - 10.1109/ACCESS.2024.3417267
M3 - Article
AN - SCOPUS:85196713123
SN - 2169-3536
VL - 12
SP - 88075
EP - 88085
JO - IEEE Access
JF - IEEE Access
ER -