Geometric transformations of the training and test data present challenges to the use of deep neural networks in vision-based learning tasks. To address this issue, we present a deep neural network model that exhibits the desirable property of transformation robustness. Our model, termed RobustCaps, uses group-equivariant convolutions in an improved capsule network model. RobustCaps uses a global context-normalised procedure in its routing algorithm to learn transformation-invariant part-whole relationships within image data. Learning such relationships allows our model to outperform both capsule and convolutional neural network baselines on transformation-robust classification tasks. Specifically, RobustCaps achieves state-of-the-art accuracies on CIFAR-10, FashionMNIST, and CIFAR-100 when the images in these datasets are subjected to train- and test-time rotations and translations.
Sai Raam Venkataraman, S. Balasubramanian, R. Raghunatha Sarma
Sri Sathya Sai Institute of Higher Learning, India
Deep Learning, Capsule Networks, Transformation Robustness, Equivariance
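The equivariance property named in the keywords refers to convolutions whose outputs transform predictably when the input is transformed. The sketch below is a minimal, generic illustration of a p4 (90-degree rotation) group-equivariant "lifting" convolution in PyTorch; it is not the RobustCaps implementation described in the paper, and the class name P4LiftingConv and all parameter choices are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    """Generic p4 lifting convolution (rotations by multiples of 90 degrees).

    The same filter bank is applied at 4 rotations, so the output carries an
    explicit orientation axis and rotating the input permutes that axis.
    This is an illustrative sketch only, not the authors' RobustCaps code.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.02
        )
        self.padding = padding

    def forward(self, x):                       # x: (B, C_in, H, W)
        outs = []
        for k in range(4):                      # rotate the filters by k * 90 degrees
            w = torch.rot90(self.weight, k, dims=(2, 3))
            outs.append(F.conv2d(x, w, padding=self.padding))
        return torch.stack(outs, dim=2)         # (B, C_out, 4, H, W): orientation axis


if __name__ == "__main__":
    conv = P4LiftingConv(3, 8)
    img = torch.randn(1, 3, 32, 32)
    y = conv(img)
    y_rot = conv(torch.rot90(img, 1, dims=(2, 3)))
    # Rotating the input rotates the feature maps and cyclically shifts the
    # orientation axis -- the equivariance property the abstract relies on.
    print(y.shape, y_rot.shape)                 # both torch.Size([1, 8, 4, 32, 32])
```

In a capsule pipeline, feature maps with such an orientation axis would typically be grouped into pose-carrying capsules before routing; the specific context-normalised routing procedure is described in the paper itself.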
Published By : ICTACT
Published In :
ICTACT Journal on Image and Video Processing (Volume: 13, Issue: 3, Pages: 2883 - 2892)
Date of Publication :
February 2023
Page Views :
643
Full Text Views :
3