Deep Reinforcement Learning for URLLC data management on top of scheduled eMBB traffic

Abrardo A.
2021-01-01

Abstract

With the advent of 5G and the research into beyond 5G (B5G) networks, a key research issue is how to manage the coexistence of different types of traffic, each with stringent but very different requirements. We propose a Deep Reinforcement Learning (DRL) algorithm to slice the available physical-layer resources between ultra-reliable low-latency communications (URLLC) and enhanced Mobile Broadband (eMBB) traffic. Specifically, in our setting the time-frequency resource grid is fully occupied by eMBB traffic, and we use Proximal Policy Optimization (PPO), a state-of-the-art DRL algorithm, to train the agent to dynamically allocate incoming URLLC traffic by puncturing eMBB codewords. Assuming that each eMBB codeword can tolerate a limited amount of puncturing, beyond which it is in outage, we show that the policy devised by the DRL agent never violates the latency requirement of URLLC traffic and, at the same time, keeps the number of eMBB codewords in outage at a minimum compared with other state-of-the-art schemes.
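The setting the abstract describes lends itself to a small illustration. The Python sketch below sets up a toy version of the puncturing environment: a state tracking how many times each eMBB codeword has been punctured, an action choosing which codeword absorbs an incoming URLLC packet, and a reward penalizing outages. All names and constants (N_EMBB, MINI_SLOTS, MAX_PUNCTURES, URLLC_ARRIVAL_P) are assumptions for illustration only; the paper's actual system model, parameters, and PPO training setup are not reproduced here, and only a simple least-loaded baseline policy is evaluated.

```python
import numpy as np

# Minimal sketch of the slicing setting described in the abstract.
# All constants and dynamics below are illustrative assumptions, not
# the paper's actual system model. One scheduling slot carries N_EMBB
# eMBB codewords; URLLC packets arriving in mini-slots must be served
# immediately (hard latency constraint) by puncturing resources of one
# eMBB codeword. A codeword tolerates up to MAX_PUNCTURES punctured
# resources; beyond that it is in outage.
N_EMBB = 10            # eMBB codewords per slot (assumed)
MINI_SLOTS = 8         # URLLC arrival opportunities per slot (assumed)
MAX_PUNCTURES = 3      # per-codeword puncturing budget (assumed)
URLLC_ARRIVAL_P = 0.5  # URLLC arrival probability per mini-slot (assumed)

rng = np.random.default_rng(0)

def run_slot(policy):
    """Simulate one slot; policy(state) picks the codeword to puncture."""
    punctures = np.zeros(N_EMBB, dtype=int)  # state: punctures per codeword
    for _ in range(MINI_SLOTS):
        if rng.random() < URLLC_ARRIVAL_P:  # a URLLC packet arrives
            k = policy(punctures)           # choose the victim codeword
            punctures[k] += 1               # serve it now: latency never violated
    outages = int(np.sum(punctures > MAX_PUNCTURES))
    return -outages  # reward: minimize eMBB outages

def least_loaded(state):
    """Baseline: always puncture the least-punctured codeword."""
    return int(np.argmin(state))

# A PPO agent would learn a stochastic policy over codewords from
# (state, action, reward) rollouts of an environment like this one;
# here we only evaluate the simple baseline.
print(np.mean([run_slot(least_loaded) for _ in range(1000)]))
```

By construction the URLLC latency constraint is always met, since every arrival is served within its own mini-slot; the trade-off the DRL agent must learn is where to place the punctures so that as few codewords as possible exceed their budget.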
2021
978-1-7281-8104-2
Saggese, F., Pasqualini, L., Moretti, M., Abrardo, A. (2021). Deep Reinforcement Learning for URLLC data management on top of scheduled eMBB traffic. In 2021 IEEE Global Communications Conference, GLOBECOM 2021 - Proceedings (pp. 1-6). New York: Institute of Electrical and Electronics Engineers Inc. [10.1109/GLOBECOM46510.2021.9685777].
Files in this record:
File: Deep_Reinforcement_Learning_for_URLLC_data_management_on_top_of_scheduled_eMBB_traffic.pdf (not available)
Type: Publisher's PDF
License: NON-PUBLIC - Private/restricted access
Size: 296.81 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1217198