Emotion recognition through facial expressions is a crucial area of the man-machine interface. Despite its significance, it faces challenges such as facial accessories, non-uniform illumination, and pose variation, and it must be reconciled with complementary cues from speech audio, text conversation, and hand and facial gestures. Accurate recognition also requires understanding emotions such as happiness, anger, anxiety, joy, and shock, together with their varying degrees and overlaps. Humans excel at reading mood from facial expression, conversation, voice modulation, and gesture, but replicating this ability in machines has proven challenging and costly. This paper addresses these challenges by proposing diverse approaches to emotion detection. By exploring multiple modes, including facial expressions, conversation analysis, voice modulation, and gestures, it tackles current research problems and lends itself to practical applications such as public experiments and exhaustive sentiment analysis. The paper combines these modes of emotion recognition across multiple datasets, each tried and tested widely before being amalgamated, so that the model produces an optimal result.
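To make the amalgamation step concrete, the sketch below performs weighted late fusion of per-modality class probabilities in Python, assuming each modality (face, voice, text) already yields a softmax vector over a shared emotion set. This is a minimal illustration, not the paper's actual fusion method; the emotion labels, modality weights, and probability values are hypothetical.

```python
import numpy as np

EMOTIONS = ["happiness", "anger", "anxiety", "joy", "shock"]

def late_fusion(probabilities, weights=None):
    """Combine per-modality class probabilities by weighted averaging.

    probabilities: list of 1-D softmax vectors, one per modality.
    weights: optional per-modality weights (e.g., validation accuracy).
    """
    probs = np.stack(probabilities)          # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(probabilities))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                 # normalize weights to sum to 1
    fused = weights @ probs                  # weighted average over modalities
    return fused / fused.sum()               # renormalize against numerical drift

# Hypothetical outputs from face, voice, and text classifiers.
face_probs = np.array([0.70, 0.10, 0.05, 0.10, 0.05])
voice_probs = np.array([0.40, 0.30, 0.10, 0.10, 0.10])
text_probs = np.array([0.55, 0.15, 0.10, 0.15, 0.05])

fused = late_fusion([face_probs, voice_probs, text_probs],
                    weights=[0.5, 0.3, 0.2])
print(EMOTIONS[int(np.argmax(fused))])       # predicted emotion label
```

Weighting each modality by its validation accuracy is one common design choice for this kind of fusion; uniform weights are the natural default when no such estimate is available.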
Bharat Gupta, Manas Gupta
Ministry of Electronics and Information Technology, Government of India, India; Indian Institute of Technology Banaras Hindu University, Varanasi, India
Keywords: FER, ASR, MFCC, Multimodal Deep Learning
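Among the keywords above, MFCC stands for mel-frequency cepstral coefficients, the standard acoustic features for the voice-modulation modality. The sketch below is a non-authoritative illustration of extracting and pooling MFCCs; the use of librosa and the synthetic sine-wave input are assumptions standing in for real speech, not choices named in the source.

```python
import numpy as np
import librosa

sr = 16000                                   # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)        # 1 s synthetic tone as a stand-in

# 13 mel-frequency cepstral coefficients per frame, a common choice
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)                            # (13, n_frames)

# Pool frame-level coefficients into a fixed-length utterance descriptor
# (per-coefficient mean and std), suitable as input to a simple
# speech-emotion classifier.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```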
Published By: ICTACT
Published In: ICTACT Journal on Soft Computing (Volume 15, Issue 1, Pages 3392-3399)
Date of Publication: July 2024