Abstract
Deep learning has significantly improved medical image analysis, but accurate segmentation remains challenging due to variability in disease patterns, anatomical structures, and image quality. This paper proposes a Hybrid CNN-U-Net Model with an Adaptive Attention Mechanism (AAM) for automated brain tumor segmentation. The methodology combines CNN-based hierarchical feature extraction with U-Net’s segmentation capability, enhanced by an AAM that dynamically focuses on salient tumor regions. Evaluated on the BraTS 2021 dataset, the proposed model achieves a Dice coefficient of 93.1±1.4 with a 95% confidence interval of [91.5, 94.7], a 1.1-point improvement over the Li et al. baseline (92.0±1.5). Ablation studies isolate the contribution of each component, and statistical hypothesis testing confirms significant improvements over standard U-Net and U-Net + Attention baselines (p < 0.001). The model demonstrates strong performance in segmenting the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions, with per-class Dice scores of WT = 92.8±1.2, TC = 89.9±1.8, and ET = 85.6±2.3. Cross-dataset evaluation on BraTS 2020 shows generalization capability (Dice = 90.1±2.1, degradation = -2.9%).
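For readers unfamiliar with the reported metric, the Dice coefficient measures overlap between a predicted mask and the ground truth as 2|A∩B| / (|A| + |B|). The sketch below is an illustrative NumPy implementation on a toy binary mask, not the paper's evaluation code; the function name and the smoothing term `eps` are our own choices.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|).

    `eps` (an illustrative smoothing term) avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: prediction and ground truth each mark 3 pixels,
# agreeing on 2 of them, so Dice = 2*2 / (3 + 3) ≈ 0.667.
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [1, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 3))  # → 0.667
```

In the BraTS setting this score is computed per region (WT, TC, ET) by binarizing the multi-class prediction for each region before applying the formula.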
Authors
T. Thanga Jency, S. Gajalakshmi
Alpha Arts and Science College, Chennai, India
Keywords
Deep Learning, CNN, U-Net, Adaptive Attention Mechanism, Medical Image Segmentation, Tumor Detection