TY - JOUR
T1 - Secured Computation Offloading in Multi-Access Mobile Edge Computing Networks through Deep Reinforcement Learning
AU - Abdullah, Rijal
AU - Yaacob, Noorulsadiqin Azbiya
AU - Salameh, Anas A.
AU - Zaki, Nur Amalina Mohamad
AU - Bahardin, Nur Fadhilah
N1 - Publisher Copyright:
© 2024 by the authors of this article. Published under CC-BY.
PY - 2024/6/12
Y1 - 2024/6/12
N2 - Mobile edge computing (MEC) has emerged as a pivotal technology to address the computational demands of resource-constrained mobile devices by offloading tasks to nearby edge servers. However, ensuring the security and efficiency of computation offloading in multi-access MEC networks remains a critical challenge. This paper proposes a novel approach that leverages deep reinforcement learning (DRL) for secure computation offloading in multi-access MEC networks. The proposed framework utilizes DRL agents to dynamically make offloading decisions based on the current network conditions, resource availability, and security requirements. The agents learn optimal offloading policies through interactions with the environment, aiming to maximize task completion efficiency while minimizing security risks. To enhance security, the framework integrates encryption techniques and access control mechanisms to protect sensitive data during offloading. The proposed approach undergoes comprehensive simulations to assess its performance in terms of security, efficiency, and scalability. The results demonstrate that the DRL-based approach effectively balances the trade-offs between security and efficiency, achieving robust and adaptive computation offloading in multi-access MEC networks. This study contributes to advancing the state-of-the-art in secure and efficient mobile edge computing systems, fostering the development of intelligent and resilient MEC solutions for future mobile networks.
AB - Mobile edge computing (MEC) has emerged as a pivotal technology to address the computational demands of resource-constrained mobile devices by offloading tasks to nearby edge servers. However, ensuring the security and efficiency of computation offloading in multi-access MEC networks remains a critical challenge. This paper proposes a novel approach that leverages deep reinforcement learning (DRL) for secure computation offloading in multi-access MEC networks. The proposed framework utilizes DRL agents to dynamically make offloading decisions based on the current network conditions, resource availability, and security requirements. The agents learn optimal offloading policies through interactions with the environment, aiming to maximize task completion efficiency while minimizing security risks. To enhance security, the framework integrates encryption techniques and access control mechanisms to protect sensitive data during offloading. The proposed approach undergoes comprehensive simulations to assess its performance in terms of security, efficiency, and scalability. The results demonstrate that the DRL-based approach effectively balances the trade-offs between security and efficiency, achieving robust and adaptive computation offloading in multi-access MEC networks. This study contributes to advancing the state-of-the-art in secure and efficient mobile edge computing systems, fostering the development of intelligent and resilient MEC solutions for future mobile networks.
KW - computation offloading
KW - deep reinforcement learning (DRL)
KW - mobile edge computing (MEC)
KW - multi-access networks
KW - resource allocation
KW - security
KW - task efficiency
UR - http://www.scopus.com/inward/record.url?scp=85196817478&partnerID=8YFLogxK
U2 - 10.3991/ijim.v18i11.49051
DO - 10.3991/ijim.v18i11.49051
M3 - Article
AN - SCOPUS:85196817478
SN - 1865-7923
VL - 18
SP - 80
EP - 91
JO - International Journal of Interactive Mobile Technologies
JF - International Journal of Interactive Mobile Technologies
IS - 11
ER -