Several models have previously been developed for learning correlated representations between source and target modalities. In this paper, we present a novel Joint Fusion model for learning cross-spectral image representations for heterogeneous face recognition. The coupled receptive face recognition model is built using a ResNet architecture as the backbone, a fully connected neural network, and triple autoencoders to learn perceptible feature points that are invariant to changes in spectrum. The performance of the model is evaluated on the CelebA and LFW datasets. The empirical results show that the common latent embeddings learnt by the integrated networks produce competitive cross-spectrum face recognition results. These results are obtained by training the model with the Adam optimizer and the Mean Squared Error (MSE) loss function. The proposed model shows a 20% improvement in AUC (Area Under the Curve) over the state of the art with polarization state information, and a 23% improvement in AUC over state-of-the-art models in the traditional thermal-to-visible synthesis process. In addition, a 12% improvement in EER (Equal Error Rate) in the polarimetric case and a 9% improvement in EER in the conventional case are observed when comparing with state-of-the-art models for traditional thermal imagery.
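To make the described pipeline concrete, the sketch below shows one possible way to couple two ResNet-based spectral encoders through a fully connected projection into a common latent space, with an autoencoder-style reconstruction term, trained with the Adam optimizer and MSE loss as stated in the abstract. The backbone variant (resnet18), layer sizes, loss combination, and dummy data are all illustrative assumptions and do not reproduce the authors' implementation.

# Illustrative sketch only: a coupled cross-spectral embedding model trained with
# Adam and MSE. Component choices (resnet18 backbone, latent size, loss weighting)
# are assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision import models


class SpectralEncoder(nn.Module):
    """ResNet backbone followed by a fully connected projection to a common latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed backbone variant
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.project = nn.Sequential(
            nn.Linear(512, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, x):
        return self.project(self.backbone(x))


class LatentDecoder(nn.Module):
    """Small decoder that reconstructs backbone features from the latent code (autoencoder-style term)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
        )

    def forward(self, z):
        return self.decode(z)


# Coupled networks: one encoder per spectrum, trained so paired latent embeddings agree.
vis_enc, thm_enc = SpectralEncoder(), SpectralEncoder()
decoder = LatentDecoder()
params = list(vis_enc.parameters()) + list(thm_enc.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)      # Adam, as stated in the abstract
mse = nn.MSELoss()                                 # MSE loss, as stated in the abstract

# One training step on a dummy batch of paired visible/thermal face crops.
visible = torch.randn(8, 3, 224, 224)
thermal = torch.randn(8, 3, 224, 224)

z_vis, z_thm = vis_enc(visible), thm_enc(thermal)
align_loss = mse(z_vis, z_thm)                     # pull paired cross-spectral embeddings together
recon_loss = mse(decoder(z_vis), vis_enc.backbone(visible).detach())  # reconstruction term
loss = align_loss + recon_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()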
Anita Sigamani, B.M.S College for Women, India
Prema Selvaraj, Arulmigu Arthanareeswarar Arts and Science College, India
Keywords: Flexible Filter, Bi-HCRV, FCNN, Face Detection, Recognition
Published By: ICTACT
Published In: ICTACT Journal on Image and Video Processing (Volume 15, Issue 4, Pages 3620-3629)
Date of Publication: May 2025