MULTI-AGENT REINFORCEMENT LEARNING FRAMEWORK FOR DYNAMIC RESOURCE SLICING AND ADAPTIVE ALLOCATION IN 6G NETWORK CORE

ICTACT Journal on Communication Technology (Volume: 17, Issue: 1)

Abstract

The rapid expansion of heterogeneous services in sixth-generation (6G) communication networks has increased the complexity of resource orchestration within the network core. Emerging applications such as autonomous systems, immersive communication, and large-scale Internet of Things environments require highly flexible and efficient resource slicing mechanisms. Conventional resource allocation techniques rely on static or semi-dynamic policies with limited adaptability to fluctuating traffic patterns and diverse quality of service (QoS) requirements. As network scale grows and service diversity intensifies, these approaches struggle to maintain efficient utilization and service reliability, so the dynamic management of network resources remains a critical issue in the evolving 6G infrastructure. This study investigates a dynamic resource slicing mechanism, Multi-Agent Reinforcement Learning based Adaptive Resource Slicing (MARL-ARS), for the 6G network core environment. The proposed framework introduces multiple intelligent agents that interact with the network environment and cooperatively optimize the allocation of bandwidth, computational capacity, and storage resources across different network slices. Each agent learns an optimal allocation policy through continuous interaction with the system state, while the cooperative learning structure enables coordinated decision making among distributed agents. The reinforcement learning mechanism incorporates reward optimization strategies that account for network latency, resource utilization efficiency, and service reliability. Through iterative learning, the model gradually refines its slicing policies and achieves adaptive resource allocation under varying traffic loads and service demands.
The experimental results demonstrate that the proposed MARL-ARS framework significantly improves the performance of dynamic resource slicing in the 6G network core. The system achieves 93% resource utilization under high network load, compared with 78–85% for the baseline approaches. The proposed model also raises network throughput to 8.6 Gbps, exceeding the 6.6–7.5 Gbps achieved by existing approaches. Slice allocation accuracy reaches 94% after 35 training episodes, indicating that the cooperative learning agents effectively interpret the network state and allocate resources accordingly. In addition, the framework reduces network latency to 35 ms under heavy traffic conditions and maintains a 96% QoS satisfaction rate across heterogeneous service slices.
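The abstract describes agents that learn slicing policies from a reward built on latency, utilization, and reliability. A minimal illustrative sketch of that idea is shown below; the weights, state/action spaces, and independent Q-learning update are assumptions for illustration, not the authors' implementation.

```python
import random

# Assumed reward weights -- the paper names the reward components
# (latency, utilization, reliability) but not their relative weights.
W_LATENCY, W_UTIL, W_RELIABILITY = 0.4, 0.4, 0.2

def slice_reward(latency_ms, utilization, reliability, latency_target_ms=35.0):
    """Reward grows with utilization and reliability and shrinks as
    measured latency exceeds the target (35 ms, per the reported results)."""
    latency_score = min(latency_target_ms / max(latency_ms, 1e-6), 1.0)
    return (W_LATENCY * latency_score
            + W_UTIL * utilization
            + W_RELIABILITY * reliability)

class SliceAgent:
    """Hypothetical independent Q-learning agent controlling the resource
    share of one network slice (tabular, epsilon-greedy)."""
    def __init__(self, actions=(0.1, 0.2, 0.3), alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                # (state, action) -> estimated value
        self.actions = actions     # candidate resource shares for this slice
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, rng=random):
        # Explore with probability eps, otherwise pick the greedy action.
        if rng.random() < self.eps:
            return rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In a full MARL-ARS-style setup one agent per slice would act on a shared network state, with coordination arising from the common reward; the cooperative signalling mechanism itself is not specified in the abstract.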

Authors

Sreedevi Kadiyala¹, Chandra Srinivas Potluri²
¹Guru Nanak Institutions Technical Campus, India; ²Siddhartha Institute of Engineering and Technology, India

Keywords

6G Network Core, Multi-Agent Reinforcement Learning, Dynamic Resource Slicing, Intelligent Resource Allocation, Network Optimization

Published By
ICTACT
Published In
ICTACT Journal on Communication Technology
(Volume: 17, Issue: 1)
Date of Publication
March 2026
Pages
3815 - 3824