REDUCING COMPUTATIONAL DEMANDS IN CAPSULE NET THROUGH KNOWLEDGE DISTILLATION AND TRANSFER LEARNING

ICTACT Journal on Soft Computing (Volume: 15, Issue: 3)

Abstract

Capsule networks have emerged as a robust alternative to traditional convolutional neural networks, offering superior performance in recognizing spatial hierarchies and capturing intricate relationships in image data. However, their computational intensity and memory demands present significant challenges, particularly in resource-constrained environments. To address this limitation, the proposed study integrates knowledge distillation and transfer learning techniques to enhance the computational efficiency of Capsule Networks without compromising their accuracy. Knowledge distillation compresses the model by transferring learned knowledge from a high-capacity teacher network to a lightweight student network, effectively reducing computational overhead. Transfer learning further minimizes resource demands by leveraging pre-trained models, expediting the training process and optimizing performance. Experiments were conducted on the MNIST and CIFAR-10 datasets, with the optimized Capsule Network achieving classification accuracies of 99.1% and 93.7%, respectively, while reducing computational requirements by 45%. The proposed approach also improved training time and memory efficiency, achieving a 40% reduction in model parameters compared to baseline Capsule Network implementations. These results underline the potential of combining knowledge distillation and transfer learning to make advanced architectures such as Capsule Networks accessible for real-time and edge applications. Future directions include extending this framework to more complex datasets and to applications such as object detection and medical imaging.
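Illustrative Code Sketch

The paper's implementation is not reproduced with this abstract, so the snippet below is only a minimal, hypothetical PyTorch sketch of the standard teacher-student soft-target distillation loss the abstract describes. The function name, temperature T, and blending weight alpha are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soften both output distributions with temperature T and match them via
    # KL divergence; the T*T factor keeps gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Typical use inside a training loop: a frozen high-capacity teacher scores
# each batch, and the lightweight student is optimized against both signals.
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)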

Authors

Vince Paul¹, A. Anbu Megelin Star², A. Anto Spiritus Kingsly³, S.J. Jereesha Mary⁴
¹Christ College of Engineering, India; ²DMI Engineering College, India; ³Oasys Institute of Technology, India; ⁴Annai Velankanni College of Engineering, India

Keywords

Capsule Networks, Knowledge Distillation, Transfer Learning, Computational Efficiency, Model Compression

Published By
ICTACT
Published In
ICTACT Journal on Soft Computing
(Volume: 15, Issue: 3)
Date of Publication
January 2025
Pages
3578 - 3588

ICT Academy is an initiative of the Government of India in collaboration with state governments and industry. It is a not-for-profit society and a pioneering venture of its kind under the Public-Private-Partnership (PPP) model.

Contact Us

ICT Academy
Module No. E6-03, 6th Floor, Block E
IIT Madras Research Park
Kanagam Road, Taramani,
Chennai 600 113,
Tamil Nadu, India

For Journal Subscription: journalsales@ictacademy.in

For further queries and assistance, write to us at: ictacademy.journal@ictacademy.in