Remote sensing imagery has become a pivotal source of land-use information at broad spatial scales due to advances in satellite technology. However, challenges persist in accurately segmenting and classifying remote sensing data, particularly high-resolution imagery. This paper proposes a novel hybrid deep learning model for spatiotemporal fusion that integrates SRCNN and LSTM networks to address these challenges. The SRCNN enhances spatial detail using MODIS-Landsat image pairs, while the LSTM learns phenological patterns in the enhanced images, enabling dynamic predictions for agricultural systems. The model is evaluated against benchmark fusion models, and implementation details are provided, including the segmentation loss functions and training configuration. Results demonstrate superior land cover extraction accuracy compared with existing models, with an overall accuracy of 95.77% and a mean Intersection over Union (MIoU) of 82.23%. The study highlights the effectiveness of the proposed hybrid model in capturing both spatial and temporal dynamics, which is essential for applications ranging from land cover mapping to disaster assessment.
G. Brindha, Dr. N.G.P. Institute of Technology, India
Deep Learning, SRCNN, LSTM, MIoU Scores
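The abstract describes a two-stage pipeline: an SRCNN that sharpens coarse (MODIS-like) imagery using MODIS-Landsat pairs, followed by an LSTM that models the temporal (phenological) signal for land cover prediction. The sketch below in PyTorch illustrates that arrangement only in outline; the band count, hidden size, number of classes, sequence length, and the per-pixel treatment of the LSTM are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of the hybrid SRCNN + LSTM fusion idea, assuming
# hypothetical layer sizes and data shapes (not the authors' configuration).
import torch
import torch.nn as nn


class SRCNN(nn.Module):
    """Classic 3-layer SRCNN: patch extraction, non-linear mapping, reconstruction."""

    def __init__(self, bands: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),
        )

    def forward(self, x):  # x: (B, bands, H, W) upsampled coarse-resolution patch
        return self.net(x)


class HybridFusion(nn.Module):
    """SRCNN sharpens each date; an LSTM then models the temporal pattern per pixel."""

    def __init__(self, bands: int = 6, hidden: int = 128, classes: int = 8):
        super().__init__()
        self.srcnn = SRCNN(bands)
        self.lstm = nn.LSTM(input_size=bands, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)  # per-pixel land-cover logits

    def forward(self, seq):  # seq: (B, T, bands, H, W) time series of coarse images
        b, t, c, h, w = seq.shape
        sharp = self.srcnn(seq.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        # Treat each pixel as an independent temporal sequence of band values.
        pix = sharp.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.lstm(pix)
        logits = self.head(out[:, -1])           # last time step -> class scores
        return logits.reshape(b, h, w, -1).permute(0, 3, 1, 2)


if __name__ == "__main__":
    model = HybridFusion()
    dummy = torch.randn(1, 4, 6, 32, 32)         # 4 dates, 6 bands, 32x32 patch
    print(model(dummy).shape)                    # torch.Size([1, 8, 32, 32])
```

The per-pixel logits could then be trained with a segmentation loss (e.g., cross-entropy) and scored with overall accuracy and MIoU, the metrics reported in the abstract.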
Published By: ICTACT
Published In: ICTACT Journal on Data Science and Machine Learning (Volume: 5, Issue: 2, Pages: 598-600)
Date of Publication: March 2024
Page Views: 249
Full Text Views: 48