ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control

Pengyuan Zhou, Tristan Braud, Ahmad Alhilal, Pan Hui, Jussi Kangasharju

Research output: Chapter in book/report/conference proceedings › Conference article › Scientific › Peer-reviewed

Abstract

Traffic congestion is worsening in every major city and imposes increasing costs on governments and drivers. Vehicular networks make it possible to collect more data from vehicles and roadside units and to sense traffic in real time, making them a promising means of alleviating traffic jams in urban environments. However, while the collected information is valuable, an efficient way to exploit it quickly enough to alleviate congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution that uses Edge Computing nodes to collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the parameters of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. An evaluation based on real map data shows that our system decreases average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.
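
The abstract is the only technical description in this record: edge servers run fast reinforcement learning to tune the parameters of a per-intersection signal control algorithm, using traffic sensed from vehicles and roadside units. The sketch below is a minimal, hypothetical illustration of that idea and not the paper's implementation: plain tabular Q-learning adjusting the green-phase split of a single intersection against a toy queue model. The state encoding, action set, reward, and simulator are all assumptions made purely for illustration.

import random
from collections import defaultdict

# Hypothetical toy example, not ERL's algorithm.
ACTIONS = [-5, 0, 5]           # seconds added to the north-south green phase
GREEN_MIN, GREEN_MAX = 10, 60  # bounds on the tunable phase duration

def simulate_cycle(green_ns, arrivals_ns, arrivals_ew, cycle=90):
    """Toy stand-in for one signal cycle: returns vehicles left queuing."""
    green_ew = cycle - green_ns
    served_ns = green_ns // 2          # assume one vehicle clears per 2 s of green
    served_ew = green_ew // 2
    return max(0, arrivals_ns - served_ns), max(0, arrivals_ew - served_ew)

def state_of(green_ns, queue_ns, queue_ew):
    """Coarse state: current split plus bucketed queue lengths."""
    return (green_ns, min(queue_ns // 5, 4), min(queue_ew // 5, 4))

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)             # Q-table over (state, action) pairs
    green_ns = 45
    for _ in range(episodes):
        # Demand "sensed" by vehicles and roadside units (random here).
        arrivals_ns = random.randint(5, 30)
        arrivals_ew = random.randint(5, 30)
        queue_ns, queue_ew = simulate_cycle(green_ns, arrivals_ns, arrivals_ew)
        s = state_of(green_ns, queue_ns, queue_ew)

        # Epsilon-greedy choice of how to adjust the phase split.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        green_ns = min(GREEN_MAX, max(GREEN_MIN, green_ns + a))

        # Reward: fewer vehicles left waiting after the adjusted cycle.
        next_ns, next_ew = simulate_cycle(green_ns, arrivals_ns, arrivals_ew)
        reward = -(next_ns + next_ew)
        s2 = state_of(green_ns, next_ns, next_ew)

        # Standard one-step Q-learning update.
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
    return q

if __name__ == "__main__":
    q_table = train()
    print(f"learned {len(q_table)} state-action values")

In ERL, such an agent would presumably run on the edge server covering the intersection, with aggregated data from neighboring edge servers folded into its state to coordinate beyond a single junction; the toy model above deliberately ignores that coordination.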

Original language: English
Title of host publication: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Number of pages: 6
Place of publication: New York
Publisher: IEEE
Publication date: 2019
Pages: 849-854
ISBN (electronic): 978-1-5386-9151-9
DOI - permanent links: https://doi.org/10.1109/PERCOMW.2019.8730706
Status: Published - 2019
OKM publication type: A4 Article in conference proceedings
Event: International Workshop on Smart Edge Computing and Networking - Kyoto, Japan
Duration: 15 March 2019 - 15 March 2019
Conference number: 3

Publication series

Name: International Conference on Pervasive Computing and Communications
Publisher: IEEE
ISSN (print): 2474-2503

Fields of science

  • 113 Computer and information sciences

Cite this

Zhou, P., Braud, T., Alhilal, A., Hui, P., & Kangasharju, J. (2019). ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 849-854). (International Conference on Pervasive Computing and Communications). New York: IEEE. https://doi.org/10.1109/PERCOMW.2019.8730706
Zhou, Pengyuan ; Braud, Tristan ; Alhilal, Ahmad ; Hui, Pan ; Kangasharju, Jussi. / ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). New York : IEEE, 2019. pp. 849-854 (International Conference on Pervasive Computing and Communications).
@inproceedings{3f643dcf9694421590afa6963e1cce5b,
title = "ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control",
abstract = "Traffic congestion is worsening in every major city and brings increasing costs to governments and drivers. Vehicular networks provide the ability to collect more data from vehicles and roadside units, and sense traffic in real time. They represent a promising solution to alleviate traffic jams in urban environments. However, while the collected information is valuable, an efficient solution for better and faster utilization to alleviate congestion has yet to be developed. Current solutions are either based on mathematical models, which do not account for complex traffic scenarios or small-scale machine learning algorithms. In this paper, we propose ERL, a solution based on Edge Computing nodes to collect traffic data. ERL alleviates congestion by providing intelligent optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the metrics of the traffic signal control algorithm ran for each intersection. ERL operates within the coverage area of the edge server, and uses aggregated data from neighboring edge servers to provide city-scale congestion control. The evaluation based on real map data shows that our system decreases 48.71 {\%} average waiting time and 32.77{\%} trip duration in normally congested areas, with very fast training in ordinary servers.",
keywords = "113 Computer and information sciences",
author = "Pengyuan Zhou and Tristan Braud and Ahmad Alhilal and Pan Hui and Jussi Kangasharju",
year = "2019",
doi = "10.1109/PERCOMW.2019.8730706",
language = "English",
series = "International Conference on Pervasive Computing and Communications",
publisher = "IEEE",
pages = "849--854",
booktitle = "2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)",
address = "United States",

}

Zhou, P, Braud, T, Alhilal, A, Hui, P & Kangasharju, J 2019, ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. in 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). International Conference on Pervasive Computing and Communications, IEEE, New York, pp. 849-854, International Workshop on Smart Edge Computing and Networking, Kyoto, Japan, 15/03/2019. https://doi.org/10.1109/PERCOMW.2019.8730706

ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. / Zhou, Pengyuan; Braud, Tristan; Alhilal, Ahmad; Hui, Pan; Kangasharju, Jussi.

2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). New York : IEEE, 2019. pp. 849-854 (International Conference on Pervasive Computing and Communications).

Research output: Chapter in book/report/conference proceedings › Conference article › Scientific › Peer-reviewed

TY - GEN

T1 - ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control

AU - Zhou, Pengyuan

AU - Braud, Tristan

AU - Alhilal, Ahmad

AU - Hui, Pan

AU - Kangasharju, Jussi

PY - 2019

Y1 - 2019

N2 - Traffic congestion is worsening in every major city and imposes increasing costs on governments and drivers. Vehicular networks make it possible to collect more data from vehicles and roadside units and to sense traffic in real time, making them a promising means of alleviating traffic jams in urban environments. However, while the collected information is valuable, an efficient way to exploit it quickly enough to alleviate congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution that uses Edge Computing nodes to collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the parameters of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. An evaluation based on real map data shows that our system decreases average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.

AB - Traffic congestion is worsening in every major city and imposes increasing costs on governments and drivers. Vehicular networks make it possible to collect more data from vehicles and roadside units and to sense traffic in real time, making them a promising means of alleviating traffic jams in urban environments. However, while the collected information is valuable, an efficient way to exploit it quickly enough to alleviate congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution that uses Edge Computing nodes to collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the parameters of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. An evaluation based on real map data shows that our system decreases average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.

KW - 113 Computer and information sciences

U2 - 10.1109/PERCOMW.2019.8730706

DO - 10.1109/PERCOMW.2019.8730706

M3 - Conference contribution

T3 - International Conference on Pervasive Computing and Communications

SP - 849

EP - 854

BT - 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)

PB - IEEE

CY - New York

ER -

Zhou P, Braud T, Alhilal A, Hui P, Kangasharju J. ERL: Edge Based Reinforcement Learning for Optimized Urban Traffic Light Control. In: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). New York: IEEE. 2019. p. 849-854. (International Conference on Pervasive Computing and Communications). https://doi.org/10.1109/PERCOMW.2019.8730706