ENHANCING REMOTE SENSING IMAGE FUSION AND CLASSIFICATION ACCURACY USING DEEP LEARNING MODELS
Abstract
Remote sensing imagery has become a pivotal source of land-use information at broad spatial scales due to advances in satellite technology. However, accurately segmenting and classifying remote sensing data remains challenging, particularly for high-resolution imagery. This paper proposes a novel hybrid deep learning model for spatiotemporal fusion that integrates SRCNN and LSTM networks. The SRCNN enhances spatial detail using MODIS-Landsat image pairs, while the LSTM learns phenological patterns in the enhanced images, enabling dynamic predictions for agricultural systems. The model is evaluated against benchmark fusion models, and implementation details are provided, including the loss functions used for image segmentation and the training configuration. Results demonstrate superior land cover extraction accuracy compared to existing models, with an overall accuracy of 95.77% and a mean Intersection over Union (MIoU) of 82.23%. This study highlights the effectiveness of the proposed hybrid model in capturing both spatial and temporal dynamics, which is essential for applications ranging from land cover mapping to disaster assessment.
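The abstract reports segmentation accuracy as mean Intersection over Union (MIoU). A minimal sketch of how this metric is typically computed over integer label maps is shown below; the function name and the choice to skip classes absent from both maps are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union (MIoU) across classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both prediction and ground truth are
    excluded from the mean (an assumed convention).
    """
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class never appears; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes:
# class 0 -> IoU 1/2, class 1 -> IoU 2/3, MIoU = 7/12
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, 2))
```

A per-class IoU breakdown (rather than only the mean) is often reported alongside overall accuracy, since a high overall accuracy can mask poor performance on rare land cover classes.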

Authors
G. Brindha
Dr. N.G.P. Institute of Technology, India

Keywords
Deep Learning, SRCNN, LSTM, MIoU Scores
Published By:
ICTACT
Published In:
ICTACT Journal on Data Science and Machine Learning
(Volume: 5, Issue: 2, Pages: 598-600)
Date of Publication:
March 2024
Page Views:
201
Full Text Views:
45

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.