TY - JOUR
T1 - Urban traffic signal control optimization through Deep Q Learning and double Deep Q Learning: a novel approach for efficient traffic management
T2 - Multimedia Tools and Applications
AU - Jamil, Qazi Umer
AU - Kallu, Karam Dad
AU - Khan, Muhammad Jawad
AU - Safdar, Muhammad
AU - Zafar, Amad
AU - Ali, Muhammad Umair
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
PY - 2024
Y1 - 2024
N2 - Traffic congestion remains a persistent challenge in urban areas, necessitating efficient traffic control strategies. This research explores the application of advanced reinforcement learning (RL) techniques, specifically Deep Q-Learning (DQN) and Double Deep Q-Learning (DDQN), to traffic signal control at a four-way intersection. The RL agents are trained with a reward function that minimizes waiting times, enabling them to learn effective traffic signal control policies. The study compares the performance of a simple non-reinforcement-learning (non-RL) agent, a Deep Q-Network (DQN) agent, and an improved Double Deep Q-Learning (DDQN) agent in different traffic scenarios. The non-RL agent, which follows a fixed order of traffic phases, shows limitations in both low- and high-traffic situations, leading to inefficiencies and imbalanced queue lengths. The DQN agent, on the other hand, performs well in low-traffic conditions but struggles in high traffic because of its greedy behavior. The DDQN agent, with an extended green-light base time, outperforms both the non-RL agent and the original DQN agent in high-traffic scenarios, making it more suitable for real-world traffic conditions, although it shows some inefficiency in low-traffic scenarios. Future research is recommended to address multi-agent deep reinforcement learning challenges, incorporate attention mechanisms and hierarchical reinforcement learning, explore graph theory applications, and develop efficient communication protocols among agents to further enhance traffic control solutions.
KW - Deep Q-Learning
KW - Double Deep Q-Learning
KW - Reinforcement Learning
KW - Traffic Flow Optimization
KW - Traffic Intersection
KW - Traffic Light Control
KW - Wait Time Reduction
UR - https://www.scopus.com/pages/publications/85202063569
U2 - 10.1007/s11042-024-20060-x
DO - 10.1007/s11042-024-20060-x
M3 - Article
AN - SCOPUS:85202063569
SN - 1380-7501
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
ER -