The text presented in videos contains important information for a wide range of vision-based applications. The key modules for extracting this information are text detection followed by text recognition, which are the subject of our study. In this paper, we propose an innovative end-to-end subtitle detection and recognition system for videos. Our system consists of three modules. Video subtitles are first detected by a novel image operator based on our blob extraction method. The detected subtitle is then segmented into single characters by a simple technique on the binary image and passed to the recognition module. Finally, a capsule neural network (CapsNet) trained on the Chars74K dataset is adopted to recognize the characters. The proposed detection method is robust and performs well on video subtitle detection, as evaluated on a dataset we constructed. In addition, CapsNet shows its validity and effectiveness for recognizing video subtitles. To the best of our knowledge, this is the first work in which capsule networks have been empirically investigated for character recognition of video subtitles.
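The abstract describes segmenting the detected subtitle into single characters "by a simple technique on the binary image". The paper's exact technique is not given here; one common realization of such a step is connected-component labelling, where each blob of foreground pixels becomes a character candidate. The sketch below illustrates that idea; all function and variable names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (assumption: connected-component labelling is used
# to split a binarized subtitle line into per-character blobs).
from collections import deque

def segment_characters(binary):
    """Return bounding boxes (x0, y0, x1, y1) of foreground blobs in a
    2D 0/1 image, sorted left-to-right (reading order of a subtitle)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # BFS flood fill over 4-connected foreground pixels
                q = deque([(y, x)])
                seen[y][x] = True
                x0 = x1 = x
                y0 = y1 = y
                while q:
                    cy, cx = q.popleft()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    boxes.sort(key=lambda b: b[0])  # left-to-right character order
    return boxes
```

Each returned box would then be cropped and passed to the CapsNet recognition module as a single-character image.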
Ahmed Tibermacine, Selmi Mohamed Amine (Biskra University, Algeria)
Keywords : Capsule Networks, Convolutional Neural Networks, Subtitle Text Detection, Text Recognition
Monthly Views : November: 1; all other months: 0
Published By : ICTACT
Published In :
ICTACT Journal on Image and Video Processing (Volume: 11, Issue: 3, Pages: 2378-2384)
Date of Publication :
February 2021
Page Views :
205
Full Text Views :
1