Radar-based target recognition plays a crucial role in a variety of applications, such as surveillance, defense, and autonomous systems. High-resolution radar imagery, when processed effectively, can provide detailed information about objects of interest. However, due to the complex nature of radar signals and the limitations of traditional processing methods, extracting accurate and reliable target information remains challenging. Recent advancements in deep learning, particularly in the domain of image and video processing, have opened new avenues for improving radar-based target recognition.

The primary challenge in radar target recognition is the effective use of high-resolution radar imagery, which often contains noise, motion blur, and other distortions. Traditional signal processing techniques struggle to handle these complexities, leading to reduced accuracy in real-world applications. Furthermore, most existing methods are not well equipped to handle the temporal dynamics and motion information inherent in radar-based video data, which are vital for identifying and tracking moving targets.

This paper proposes a novel deep video processing technique designed for radar-based target recognition using high-resolution images. The approach leverages convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract spatial and temporal features from radar video sequences. By integrating image enhancement algorithms and advanced feature fusion techniques, the system is capable of processing high-resolution radar frames in real time. The method involves a two-stage process: first, extracting high-level spatial features from individual radar images using CNNs; second, capturing temporal relationships between frames with RNNs for robust target identification and tracking. Experimental results on a radar video dataset show significant improvements in target recognition accuracy.
The proposed technique achieves a recognition rate of 94.3% in identifying static and dynamic targets, outperforming traditional methods by 15-20%. In terms of processing speed, the method demonstrates real-time performance with an average frame processing time of 32 ms, ensuring its suitability for operational environments. The system is also robust against noise, reducing false positive rates by 12%.
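The two-stage pipeline described above can be sketched in code. The following is a minimal illustrative model, not the authors' implementation: it applies a small CNN to each radar frame independently to obtain spatial features, then feeds the resulting feature sequence to an LSTM (one common RNN variant) and classifies from the final time step. All layer sizes, the input resolution (64x64 single-channel frames), and the number of target classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RadarTargetNet(nn.Module):
    """Sketch of a two-stage radar recognizer: per-frame CNN, then LSTM over time."""

    def __init__(self, num_classes=4, feat_dim=64):
        super().__init__()
        # Stage 1: spatial feature extractor applied to each radar frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Stage 2: temporal model over the per-frame feature sequence.
        self.rnn = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, num_classes)

    def forward(self, clips):
        # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        # Fold time into the batch dimension so the CNN sees single frames.
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        # Classify from the last time step's hidden state.
        return self.head(out[:, -1])

model = RadarTargetNet()
logits = model(torch.randn(2, 8, 1, 64, 64))  # 2 clips of 8 frames each
print(tuple(logits.shape))  # (2, 4): one score vector per clip
```

In practice a deeper backbone, the paper's image-enhancement preprocessing, and feature fusion would replace the toy CNN, but the fold-time-into-batch pattern for stage one and the sequence model for stage two are the core of the described architecture.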
R. Krithika (United Institute of Technology, India), A.N. Jayanthi (Sri Ramakrishna Institute of Technology, India)
Keywords: Radar-Based Recognition, Deep Learning, High-Resolution Images, Video Processing, Target Tracking
Published By: ICTACT
Published In: ICTACT Journal on Image and Video Processing (Volume: 15, Issue: 2, Pages: 3454-3462)
Date of Publication: November 2024