This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing

School of Information Science and Engineering, Yunnan University, Kunming 650091, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 26 April 2024 / Revised: 12 June 2024 / Accepted: 13 June 2024 / Published: 18 June 2024
(This article belongs to the Special Issue Emerging and New Technologies in Mobile Edge Computing Networks)

Abstract

In the mobile edge computing (MEC) architecture, base stations with computational capabilities have limited service coverage, and device mobility causes their connections to change dynamically, which directly affects the offloading decisions of agents. The connections between base stations and mobile devices, as well as those among the base stations themselves, are abstracted into an MEC structural graph. Because deep reinforcement learning (DRL) has difficulty capturing the complex relationships between nodes and their multi-order neighbors in such a graph, decisions generated by DRL alone are limited. To address this issue, this study proposes a hierarchical-mechanism strategy based on Graph Neural Reinforcement Learning (M-GNRL) under multiple constraints. Specifically, the MEC structural graph is constructed with the current device as the observation point, and node features are learned through aggregation so that each node's contextual information is considered comprehensively; the learned graph representation then serves as the environment for deep reinforcement learning, effectively integrating a graph neural network (GNN) with DRL. In the M-GNRL strategy, edge features from the GNN are introduced into the DRL network architecture to improve the accuracy of the agents' decisions. In addition, this study proposes an update algorithm to obtain graph data that change with the observation point. Comparative experiments demonstrate that the M-GNRL algorithm outperforms other baseline algorithms in terms of system cost and convergence performance.
Keywords: MEC; service coverage; deep reinforcement learning; hierarchical mechanism; graph neural network; system cost
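To make the GNN–DRL coupling described in the abstract concrete, the following is a minimal, hypothetical sketch: a single mean-aggregation (GraphSAGE-style) layer computes node embeddings over a toy MEC graph centered on the observing device, and a small policy head scores candidate offloading targets from the device and base-station embeddings together with edge features. The graph, feature dimensions, weights, and aggregation rule are illustrative assumptions, not the authors' M-GNRL implementation.

```python
# Minimal sketch (not the paper's code): mean-aggregation over a toy MEC graph,
# with node/edge features fed to a small policy head that scores offloading targets.
import numpy as np

rng = np.random.default_rng(0)

# Toy MEC graph: node 0 is the mobile device (the observation point),
# nodes 1..3 are base stations; edges carry link features (e.g., rate, latency).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
node_feat = rng.normal(size=(4, 8))          # per-node features (e.g., load, CPU capacity)
edge_feat = {(u, v): rng.normal(size=4)      # per-edge features, assumed 4-dimensional
             for u, nbrs in adj.items() for v in nbrs}

def aggregate(x, adj, W_self, W_nbr):
    """One mean-aggregation layer: h_v = ReLU(x_v W_self + mean(x_u, u in N(v)) W_nbr)."""
    out = np.zeros((x.shape[0], W_self.shape[1]))
    for v, nbrs in adj.items():
        nbr_mean = x[nbrs].mean(axis=0)
        out[v] = np.maximum(0.0, x[v] @ W_self + nbr_mean @ W_nbr)
    return out

hidden = 16
W_self = rng.normal(scale=0.1, size=(8, hidden))
W_nbr = rng.normal(scale=0.1, size=(8, hidden))
h = aggregate(node_feat, adj, W_self, W_nbr)   # node embeddings

# Policy head: for the device (node 0), score each base station in coverage using
# the device embedding, the base-station embedding, and the connecting edge features.
W_pi = rng.normal(scale=0.1, size=(hidden * 2 + 4, 1))
candidates = adj[0]
scores = np.array([
    (np.concatenate([h[0], h[bs], edge_feat[(0, bs)]]) @ W_pi).item()
    for bs in candidates
])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                           # softmax over offloading actions
action = candidates[int(np.argmax(probs))]
print(f"offload to base station {action}, action probabilities {probs.round(3)}")
```

In a DRL setting, the embeddings produced by the aggregation step would serve as the state observed by the agent, and the policy head would be trained end to end from the offloading reward; here the weights are simply random to keep the example self-contained.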

Share and Cite

MDPI and ACS Style

Wang, T.; Ouyang, X.; Sun, D.; Chen, Y.; Li, H. Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing. Electronics 2024, 13, 2387. https://doi.org/10.3390/electronics13122387

AMA Style

Wang T, Ouyang X, Sun D, Chen Y, Li H. Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing. Electronics. 2024; 13(12):2387. https://doi.org/10.3390/electronics13122387

Chicago/Turabian Style

Wang, Tao, Xue Ouyang, Dingmi Sun, Yimin Chen, and Hao Li. 2024. "Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing" Electronics 13, no. 12: 2387. https://doi.org/10.3390/electronics13122387

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
