In the realm of scene reconstruction, conventional methods often struggle with occlusions, lighting variations, and noisy data. To address these limitations, this paper introduces a Transduction-based Deep Belief Network (T-DBN) within a learning-based multi-camera fusion framework. The proposed T-DBN achieves robust scene reconstruction by fusing information from multiple cameras through a transduction scheme, allowing it to adapt to varying conditions. The network learns to infer scene structure and characteristics by training on a diverse dataset. Experimental results demonstrate that the proposed T-DBN achieves more accurate and reliable scene reconstruction than existing techniques. This work presents a significant advancement in multi-camera fusion and scene reconstruction through the integration of deep learning and transduction strategies.
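The abstract describes fusing multi-camera data through a Deep Belief Network. As a minimal sketch of that general idea, the snippet below concatenates feature vectors from two synthetic "cameras" and pretrains a small DBN (a stack of RBMs trained greedily with contrastive divergence) on the fused input. The layer sizes, CD-1 training rule, and concatenation-based fusion are illustrative assumptions, not the paper's actual T-DBN architecture or transduction scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A minimal binary restricted Boltzmann machine."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # One step of contrastive divergence (CD-1).
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def train_dbn(fused, layer_sizes, epochs=20):
    # Greedy layer-wise pretraining: each RBM models the hidden
    # activations of the layer below it.
    rbms, x = [], fused
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        x = rbm.hidden_probs(x)
        rbms.append(rbm)
    return rbms, x

# Two synthetic "camera" feature matrices for the same scene, fused by
# simple concatenation (a stand-in for the paper's fusion scheme).
cam_a = rng.random((64, 16))
cam_b = rng.random((64, 16))
fused = np.concatenate([cam_a, cam_b], axis=1)   # shape (64, 32)

rbms, code = train_dbn(fused, layer_sizes=[24, 12])
print(code.shape)  # (64, 12): a compact joint representation
```

A full system would decode such a joint representation back into scene geometry; here the sketch stops at the fused latent code.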
Arvind Kumar Shukla1, Meenakshi2, Amaresh Jha3, S. Balu4, Mohammad Shabbir Alam5
1IFTM University, India; 2Apeejay Stya University, India; 3University of Petroleum and Energy Studies, Dehradun, India; 4KSR Institute for Engineering and Technology, India; 5Jazan University, Kingdom of Saudi Arabia
Keywords: Transduction, Deep Belief Networks, Multi-Camera Fusion, Scene Reconstruction, Robustness
Monthly Statistics:
| January | February | March | April | May | June | July | August | September | October | November | December |
| 0 | 103 | 2 | 0 | 0 | 12 | 0 | 1 | 0 | 6 | 1 | 0 |
Published By: ICTACT
Published In: ICTACT Journal on Image and Video Processing (Volume: 14, Issue: 1, Pages: 3060-3065)
Date of Publication: August 2023
Page Views: 1037
Full Text Views: 258