Abstract
This paper presents a Bayesian optimization approach to hyperparameter tuning for the Rainbow DQN reinforcement learning algorithm, using the Hyperopt library and the CartPole-v1 environment as a benchmark. The study investigates the impact of search space definition on the convergence and quality of optimized hyperparameters. Further, it analyzes the effectiveness of different evaluation methods in the context of hyperparameter optimization for deep reinforcement learning. Results demonstrate the efficacy of Bayesian optimization in identifying high-performing hyperparameter configurations for Rainbow DQN in this control task.
Authors
Akhil Veluru
Naveen Jindal School of Management, University of Texas at Dallas, United States of America
Keywords
Bayesian Optimization, Rainbow DQN, Reinforcement Learning, Hyperparameter Optimization