Energy-Efficient Task Scheduling in Distributed Edge Networks Using Reinforcement Learning

Keywords: Energy-efficient computing, Task scheduling, Distributed edge networks, Reinforcement learning (RL), Resource optimization, Edge computing, Real-time applications, Markov Decision Process (MDP), Dynamic scheduling, Intelligent task allocation

Vol. 7 No. 03 (2019)
Education And Language
March 28, 2019

Energy efficiency has become a critical concern in distributed edge networks due to the increasing demand for real-time processing in applications such as IoT, autonomous systems, and industrial automation. Efficient task scheduling is essential to optimize resource utilization and reduce energy consumption while maintaining system performance. This paper explores the application of reinforcement learning (RL) as an innovative approach for energy-efficient task scheduling in distributed edge networks. The proposed RL-based framework dynamically allocates tasks to edge devices, adapting to varying workloads and network conditions. By formulating the scheduling problem as a Markov Decision Process (MDP), the framework employs an intelligent agent to learn optimal scheduling policies through a reward mechanism designed to minimize energy consumption and ensure timely task execution. Experimental evaluations demonstrate the proposed method's superiority over traditional scheduling techniques, achieving significant energy savings while maintaining high task throughput. The findings highlight the potential of RL in transforming task scheduling strategies for energy-efficient and sustainable edge computing environments.
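The MDP-plus-reward formulation described above can be sketched in code. The paper does not publish its state space, reward weights, or learning algorithm, so everything below is an illustrative assumption: a tabular Q-learning agent whose state is a discretized task size, whose actions are candidate edge devices, and whose reward penalizes energy use and deadline misses. The device energy rates, speeds, and deadline are invented for demonstration.

```python
import random


class EdgeScheduler:
    """Tabular Q-learning agent that assigns incoming tasks to edge devices.

    Hypothetical sketch: the paper's actual MDP and reward design are not
    specified, so the hyperparameters here are placeholders.
    """

    def __init__(self, n_devices, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
        self.n_devices = n_devices
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                      # (state, device) -> Q-value
        self.rng = random.Random(seed)

    def _q(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state):
        # Epsilon-greedy action selection over candidate devices.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_devices)
        return max(range(self.n_devices), key=lambda a: self._q(state, a))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup toward the temporal-difference target.
        best_next = max(self._q(next_state, a) for a in range(self.n_devices))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] = ((1 - self.alpha) * self._q(state, action)
                                   + self.alpha * td_target)


def simulate(episodes=2000, seed=0):
    """Train the agent on a toy 3-device edge network (made-up parameters)."""
    rng = random.Random(seed)
    energy_rate = [1.0, 2.0, 4.0]        # assumed energy per unit of work
    speed = [1.0, 2.0, 4.0]              # assumed processing speed
    deadline = 1.5                       # assumed latency deadline
    agent = EdgeScheduler(n_devices=3, seed=seed)
    for _ in range(episodes):
        task_size = rng.choice([1, 2, 3])    # discretized workload = state
        device = agent.choose(task_size)
        energy = task_size * energy_rate[device]
        latency = task_size / speed[device]
        # Reward penalizes energy consumption and any missed deadline.
        reward = -energy - (5.0 if latency > deadline else 0.0)
        agent.update(task_size, device, reward, rng.choice([1, 2, 3]))
    return agent
```

Under these assumed parameters, the learned greedy policy sends small tasks to the low-power device and offloads large tasks to a faster device only when the slow one would miss the deadline, which is the qualitative trade-off the abstract describes.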