BIO-INSPIRED METAHEURISTIC FRAMEWORK FOR HYPERPARAMETER OPTIMIZATION IN GRAPH NEURAL NETWORKS

ICTACT Journal on Soft Computing (Volume: 16, Issue: 4)

Abstract

Graph Neural Networks (GNNs) have emerged as an effective paradigm for learning from graph-structured data in domains such as social network analysis, bioinformatics, and recommendation systems. However, GNN performance is highly sensitive to the choice of hyperparameters, including the learning rate, hidden dimensions, aggregation functions, and regularization coefficients. Manual tuning and grid-based search strategies often incur high computational cost and yield suboptimal configurations, limiting scalability and reproducibility. Hyperparameter optimization for GNNs poses a complex, non-convex, and high-dimensional search problem, and conventional optimization approaches struggle to explore this space adaptively, especially under limited computational budgets. As a result, GNN models frequently suffer from overfitting, unstable convergence, or degraded generalization across different graph datasets. This study proposes a bio-inspired metaheuristic optimization framework that integrates population-based search principles with GNN hyperparameter tuning. A nature-inspired algorithm that mimics collective intelligence and adaptive behavior guides the exploration and exploitation of the hyperparameter space. The framework encodes critical GNN hyperparameters as candidate solutions, which are iteratively evolved using fitness feedback derived from validation accuracy and loss stability. The optimization process is coupled with a training pipeline that ensures fair comparison across candidate configurations. Experimental evaluation is conducted on benchmark graph datasets, including Cora, Citeseer, and Pubmed. The proposed method achieves a peak classification accuracy of 88.0%, precision of 86.8%, recall of 86.5%, and F1-score of 87.0%, consistently outperforming Random Search, Bayesian Optimization, and PSO by 2–4.5%. Training time is reduced by approximately 10–15%, demonstrating both efficiency and scalability. Statistical analysis confirms that the improvements are significant, indicating robust generalization across datasets and stable convergence during hyperparameter optimization.
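To make the population-based loop described in the abstract concrete, the sketch below shows one generic form such a framework could take. It is a minimal illustration, not the authors' algorithm: the search-space values, the elitist mutate-and-refill strategy, and the fitness stub are all assumptions. In the paper, the fitness evaluation would train a GNN under the candidate configuration and score it on validation accuracy and loss stability; here a random stand-in is used so the sketch runs end to end.

```python
import random

# Hypothetical search space; the paper tunes learning rate, hidden
# dimensions, aggregation functions, and regularization coefficients.
SEARCH_SPACE = {
    "learning_rate": [1e-3, 5e-3, 1e-2, 5e-2],
    "hidden_dim": [16, 32, 64, 128],
    "aggregation": ["mean", "max", "sum"],
    "weight_decay": [0.0, 1e-4, 5e-4, 1e-3],
}

def random_candidate():
    """Sample one hyperparameter configuration uniformly at random."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(candidate):
    """Placeholder fitness (assumption). The real framework would train a
    GNN with `candidate` and combine validation accuracy with a loss-
    stability term; a random score keeps this sketch self-contained."""
    return random.random()

def mutate(candidate, rate=0.3):
    """Re-sample each hyperparameter with probability `rate` (exploration)."""
    return {
        k: random.choice(SEARCH_SPACE[k]) if random.random() < rate else v
        for k, v in candidate.items()
    }

def optimize(pop_size=10, generations=20, elite_frac=0.3):
    """Generic population-based search: keep the fittest configurations
    (exploitation) and refill the population with mutated copies of them
    (exploration)."""
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elites = scored[: max(1, int(elite_frac * pop_size))]
        population = elites + [
            mutate(random.choice(elites))
            for _ in range(pop_size - len(elites))
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(optimize())
```

Any concrete bio-inspired variant (swarm, evolutionary, or foraging-based) would replace the mutate-and-refill step with its own update rule while keeping the same encode-evaluate-evolve structure.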

Authors

S. Madhusudhanan
Rajalakshmi Engineering College, India

Keywords

Graph Neural Networks, Hyperparameter Optimization, Bio-Inspired Algorithms, Metaheuristic Search, Graph Learning

Published By
ICTACT
Published In
ICTACT Journal on Soft Computing
(Volume: 16, Issue: 4)
Date of Publication
January 2026
Pages
4132 - 4138