Fatima, N., Saxena, P., & Giambene, G. (2024). Deep reinforcement learning-based computation offloading for xURLLC services with UAV-assisted IoT-based multi-access edge computing system. Wireless Networks, 30(9), 7275-7291. https://doi.org/10.1007/s11276-023-03596-y
Deep reinforcement learning-based computation offloading for xURLLC services with UAV-assisted IoT-based multi-access edge computing system
Giovanni Giambene
2024-01-01
Abstract
New Internet of Things (IoT)-based applications with stricter key performance indicator (KPI) requirements, such as round-trip delay, network availability, energy efficiency, spectral efficiency, security, age of information, throughput, and jitter, present unprecedented challenges in achieving next-generation ultra-reliable and low-latency communications (xURLLC) for sixth-generation (6G) communication systems and beyond. In this paper, we aim to collaboratively utilize technologies such as deep reinforcement learning (DRL), unmanned aerial vehicles (UAVs), and multi-access edge computing (MEC) to meet the aforementioned KPIs and support xURLLC services. We present a DRL-empowered UAV-assisted IoT-based MEC system in which a UAV carries a MEC server and provides computation services to IoT devices. Specifically, we employ the twin delayed deep deterministic policy gradient (TD3) algorithm, a DRL method, to find optimal computation offloading policies while simultaneously minimizing both the processing delay and the energy consumption of IoT devices, which inherently influence the KPI requirements. Numerical results illustrate the effectiveness of the proposed approach, which significantly reduces processing delay and energy consumption and converges quickly, outperforming other state-of-the-art DRL-based computation offloading algorithms, including Double Deep Q-Network (DDQN) and Deep Deterministic Policy Gradient (DDPG).
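The abstract states that TD3 is used to learn offloading policies that jointly minimize processing delay and IoT-device energy consumption. Below is a minimal sketch of what such a TD3-style update could look like; it is not the paper's implementation: the state layout, network sizes, reward weights, and the reading of the action as an offloading ratio are illustrative assumptions.

```python
# Minimal TD3-style offloading sketch (illustrative only; not the authors' code).
# Assumptions: the state is a small feature vector (task size, channel gain, CPU load, ...),
# the action in [0, 1] is the fraction of a task offloaded to the UAV-mounted MEC server,
# and the reward is a negative weighted sum of processing delay and device energy.
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 6, 1
GAMMA, TAU, POLICY_DELAY, SIGMA, CLIP = 0.99, 0.005, 2, 0.2, 0.5

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU(),
              nn.Linear(256, out_dim)]
    return nn.Sequential(*(layers + ([out_act] if out_act else [])))

actor = mlp(STATE_DIM, ACTION_DIM, nn.Sigmoid())                     # offloading ratio in [0, 1]
critic1, critic2 = mlp(STATE_DIM + ACTION_DIM, 1), mlp(STATE_DIM + ACTION_DIM, 1)
actor_t, critic1_t, critic2_t = map(copy.deepcopy, (actor, critic1, critic2))
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=3e-4)

def reward(delay_s, energy_j, w_delay=0.5, w_energy=0.5):
    # Illustrative reward: negative weighted cost of processing delay and device energy.
    return -(w_delay * delay_s + w_energy * energy_j)

def td3_update(batch, step):
    s, a, r, s2, done = batch                       # tensors from a replay buffer; done is 0/1 float
    with torch.no_grad():
        noise = (torch.randn_like(a) * SIGMA).clamp(-CLIP, CLIP)
        a2 = (actor_t(s2) + noise).clamp(0.0, 1.0)  # target policy smoothing
        sa2 = torch.cat([s2, a2], dim=1)
        y = r + GAMMA * (1 - done) * torch.min(critic1_t(sa2), critic2_t(sa2))  # clipped double-Q
    sa = torch.cat([s, a], dim=1)
    loss_c = nn.functional.mse_loss(critic1(sa), y) + nn.functional.mse_loss(critic2(sa), y)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    if step % POLICY_DELAY == 0:                    # delayed actor and target-network updates
        loss_a = -critic1(torch.cat([s, actor(s)], dim=1)).mean()
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, p_t in zip(net.parameters(), tgt.parameters()):
                p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

The twin critics combined with the min operator and the delayed actor/target updates are what distinguish TD3 from DDPG and typically reduce value overestimation, which is consistent with the convergence advantage reported in the abstract.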
| File | Type | License | Size | Format |
|---|---|---|---|---|
| s11276-023-03596-y.pdf (not available) | Publisher's PDF | NOT PUBLIC - Private/restricted access | 1.94 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1259814