ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control

Pengyuan Zhou, Tristan Braud, Ahmad Alhilal, Pan Hui, Jussi Kangasharju

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

Traffic congestion is worsening in every major city and imposes increasing costs on governments and drivers. Vehicular networks make it possible to collect more data from vehicles and roadside units and to sense traffic in real time, and thus represent a promising way to alleviate traffic jams in urban environments. However, although the collected information is valuable, an efficient solution that exploits it quickly enough to relieve congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution based on edge computing nodes that collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the parameters of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. An evaluation based on real map data shows that our system decreases average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.
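The abstract describes the approach (per-intersection reinforcement learning running on edge servers) but does not reproduce the paper's algorithm or interfaces. The sketch below is only an illustration of that idea under stated assumptions: a tabular Q-learning agent that tunes the green-phase split of a single intersection from locally sensed queue lengths, using the negative total queue length as a stand-in reward for waiting time. All names (EdgeIntersectionAgent, simulate_step, the candidate phase splits) are hypothetical and are not taken from the paper.

import random
from collections import defaultdict


class EdgeIntersectionAgent:
    """Hypothetical Q-learning agent assumed to run on one edge server for one intersection."""

    def __init__(self, phase_splits, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.phase_splits = phase_splits          # candidate green-time splits (actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * len(phase_splits))

    def discretize(self, queues):
        # Bucket per-approach queue lengths into a small, tabular state space.
        return tuple(min(q // 5, 4) for q in queues)

    def act(self, state):
        # Epsilon-greedy action selection over the candidate phase splits.
        if random.random() < self.epsilon:
            return random.randrange(len(self.phase_splits))
        values = self.q[state]
        return values.index(max(values))

    def learn(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def simulate_step(queues, split):
    """Toy stand-in for roadside sensing and traffic dynamics (not the paper's simulator).

    A longer north-south green (split[0]) discharges more of queues[0..1];
    the east-west green discharges queues[2..3]; random arrivals are added.
    """
    ns_green, ew_green = split
    served = [ns_green // 10, ns_green // 10, ew_green // 10, ew_green // 10]
    return [max(0, q - s) + random.randint(0, 3) for q, s in zip(queues, served)]


if __name__ == "__main__":
    # Actions: (north-south green, east-west green) in seconds, fixed 60 s cycle.
    agent = EdgeIntersectionAgent(phase_splits=[(20, 40), (30, 30), (40, 20)])
    queues = [0, 0, 0, 0]
    for _ in range(5000):
        state = agent.discretize(queues)
        action = agent.act(state)
        queues = simulate_step(queues, agent.phase_splits[action])
        reward = -sum(queues)                     # proxy for negative total waiting time
        agent.learn(state, action, reward, agent.discretize(queues))
    print("learned states:", len(agent.q))

In the paper's setting, simulate_step would be replaced by real measurements from vehicles and roadside units inside the edge server's coverage area, and neighboring edge servers would additionally exchange aggregated state to coordinate at city scale.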

Original language: English
Title of host publication: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Number of pages: 6
Place of Publication: New York
Publisher: IEEE
Publication date: 2019
Pages: 849-854
ISBN (Electronic): 978-1-5386-9151-9
DOIs: 10.1109/PERCOMW.2019.8730706
Publication status: Published - 2019
MoE publication type: A4 Article in conference proceedings
Event: International Workshop on Smart Edge Computing and Networking - Kyoto, Japan
Duration: 15 Mar 2019 – 15 Mar 2019
Conference number: 3

Publication series

Name: International Conference on Pervasive Computing and Communications
Publisher: IEEE
ISSN (Print): 2474-2503

Fields of Science

  • 113 Computer and information sciences

Cite this

Zhou, P., Braud, T., Alhilal, A., Hui, P., & Kangasharju, J. (2019). ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 849-854). (International Conference on Pervasive Computing and Communications). New York: IEEE. https://doi.org/10.1109/PERCOMW.2019.8730706