This research work introduces an Ensemble Adaptive Reinforcement
Learning (EARL) approach for efficient load balancing in Mobile Ad
Hoc Networks (MANETs). Traditional methods often fail to adapt to
the dynamic nature of MANETs, leading to congestion and
inefficiency. EARL leverages multiple reinforcement learning agents,
trained with Q-learning and Deep Q-Networks (DQN), to optimize
routing decisions based on real-time network conditions. The ensemble
mechanism combines the strengths of individual agents, enhancing
adaptability and performance. Simulation results demonstrate that
EARL significantly outperforms traditional methods like AODV and
DSR, achieving higher packet delivery ratios, lower end-to-end delays,
increased throughput, better energy efficiency, and reduced packet
loss, thereby proving its effectiveness in dynamic network
environments.
G. Rajiv Suresh Kumar, Hindusthan College of Engineering and Technology, India; G. Arul Geetha, Bishop Appasamy College of Arts and Science, India
Keywords: Ad Hoc Networks, Load Balancing, Adaptive, Learning, Efficient
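The abstract describes an ensemble of Q-learning agents whose outputs are combined to pick routing actions. A minimal sketch of that idea is below; the class names (`QAgent`, `Ensemble`), the averaging rule, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
class QAgent:
    """Tabular Q-learning agent over (state, action) pairs.

    Illustrative only: the paper's agents also include DQN models,
    which are omitted here for brevity.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.q = {}              # (state, action) -> Q-value
        self.n_actions = n_actions
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.value(next_state, a) for a in range(self.n_actions))
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)


class Ensemble:
    """Combines several agents by averaging their Q-values per action."""

    def __init__(self, agents):
        self.agents = agents

    def act(self, state):
        # Pick the action (e.g. next hop) with the highest average Q-value
        # across all ensemble members.
        n = self.agents[0].n_actions
        avg = [sum(a.value(state, act) for a in self.agents) / len(self.agents)
               for act in range(n)]
        return max(range(n), key=lambda act: avg[act])
```

In a MANET routing setting, the state could encode local link loads and the actions could be candidate next hops; each agent would be trained on rewards reflecting delivery ratio, delay, and energy, and the ensemble vote would select the hop. These mappings are assumptions for illustration.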
Published By: ICTACT
Published In: ICTACT Journal on Communication Technology (Volume: 15, Issue: 2, Pages: 3223-3227)
Date of Publication: June 2024