
Browsing by Author "Bouktif, Salah"

Now showing 1 - 2 of 2
  • Item
    Traffic signal control based on deep reinforcement learning with simplified state and reward definitions
    (IEEE, 2021) Bouktif, Salah; Cheniki, Abderraouf; Ouni, Ali; El-Sayed, Hesham
     Traffic congestion has recently become a real issue, especially within crowded cities and urban areas. Intelligent transportation systems (ITS) leverage various advanced techniques to optimize traffic flow and thereby alleviate congestion. In particular, traffic signal control (TSC) is one of the essential ITS techniques for controlling traffic flow at intersections. Many research works have proposed algorithms and techniques that optimize TSC behavior. Recent works leverage Deep Learning (DL) and Reinforcement Learning (RL) to optimize TSCs. However, most Deep RL proposals rely on complex definitions of state and reward in the RL framework. In this work, we propose an alternative way of formulating the state and reward: the basic idea is to define both in a simplified and straightforward manner rather than through a complex design. We hypothesize that such a design approach simplifies the learning task of the RL agent and hence yields rapid convergence to optimal policies. For the agent architecture, we employ a double deep Q-network (DDQN) along with prioritized experience replay (PER). We conduct experiments using the Simulation of Urban MObility (SUMO) simulator interfaced with a Python framework, and we compare the performance of our proposal to traditional and learning-based techniques.
  • Item
    Traffic signal control using hybrid action space deep reinforcement learning
    (MDPI AG, 2021) Bouktif, Salah; Cheniki, Abderraouf; Ouni, Ali
     Recent research on intelligent traffic signal control (TSC) has mainly focused on leveraging deep reinforcement learning (DRL) due to its proven capability and performance. DRL-based traffic signal control frameworks use either discrete or continuous control. In discrete control, the DRL agent selects the appropriate traffic light phase from a finite set of phases, whereas in continuous control the agent decides the appropriate duration for each signal phase within a predetermined sequence of phases. Among existing works, no prior approach proposes a flexible framework combining both discrete and continuous DRL for controlling traffic signals. Thus, our ultimate objective in this paper is to propose an approach capable of simultaneously deciding the proper phase and its associated duration. Our contribution resides in adapting a hybrid deep reinforcement learning method that considers discrete and continuous decisions at the same time. Precisely, we customize a Parameterized Deep Q-Network (P-DQN) architecture that permits a hierarchical decision-making process, which first decides the traffic light's next phase and then specifies its associated timing. The evaluation of our approach using the Simulation of Urban MObility (SUMO) simulator shows that it outperforms the benchmarks: the proposed framework reduces the average queue length of vehicles and the average travel time by 22.20% and 5.78%, respectively, over alternative DRL-based TSC systems.

DSpace software copyright © 2002-2026 LYRASIS
