Human Emotion Recognition System

© 2024 by IJETT Journal
Volume-72 Issue-5
Year of Publication : 2024
Author : Aharon Rushanyan, Artak Khemchyan
DOI : 10.14445/22315381/IJETT-V72I5P111
How to Cite?
Aharon Rushanyan, Artak Khemchyan, "Human Emotion Recognition System," International Journal of Engineering Trends and Technology, vol. 72, no. 5, pp. 105-112, 2024. Crossref, https://doi.org/10.14445/22315381/IJETT-V72I5P111
Abstract
Emotions play an important role in human interaction and behaviour, affecting our decisions, interactions, and overall well-being. Facial expressions are a primary medium for conveying and understanding these emotions. According to David Mortensen's Communication Theory [1], only one-third of other people's emotions can be understood through words and tone of voice, while the remaining two-thirds come from facial expressions. Understanding and recognizing emotions is a key element in fields such as psychology, human-computer interaction, and artificial intelligence. Improving human-machine interaction involves recognizing and understanding human emotions. As a result, emotion recognition technology has grown into a large industry, finding applications in areas such as marketing research, driver impairment monitoring, user experience testing, and health evaluation [2]. In some cases, such systems recognize human emotions from video images, identifying the facial expressions that allow basic human emotions to be detected.
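A video-based system of the kind described above typically classifies the facial expression in each sampled frame and then aggregates those per-frame predictions into a single video-level emotion. The snippet below is a minimal sketch of that aggregation step only; the `frame_emotions` list and the `video_emotion` helper are hypothetical, standing in for the output of a per-frame facial expression classifier (e.g. one built on the DeepFace library mentioned in the keywords).

```python
from collections import Counter

def video_emotion(frame_labels):
    """Majority vote over per-frame emotion labels gives a video-level label."""
    if not frame_labels:
        return None
    # most_common(1) returns [(label, count)] for the most frequent label
    return Counter(frame_labels).most_common(1)[0][0]

# Hypothetical per-frame predictions from a facial expression classifier
frame_emotions = ["happy", "happy", "neutral", "happy", "surprise"]

print(video_emotion(frame_emotions))  # -> happy
```

Majority voting is only one aggregation strategy; averaging per-frame class probabilities before taking the argmax is a common alternative when the classifier exposes confidence scores.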
Keywords
Emotion recognition, Video-based emotion recognition, Facial expressions, DeepFace library, Emotion detection technology.
References
[1] C. David Mortensen, Communication Theory, Taylor & Francis, pp. 1-484, 2017.
[Google Scholar] [Publisher Link]
[2] Don’t Look Now: Why You Should Be Worried About Machines Reading Your Emotions, The Guardian, 2019. [Online]. Available: https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science
[3] Yaniv Taigman et al., “DeepFace: Closing the Gap to Human-Level Performance in Face Verification,” 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, pp. 1701-1708, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Boris Delovski, How Emotional Artificial Intelligence Can Improve Education, Edlitera, 2023. [Online]. Available: https://www.edlitera.com/blog/posts/emotional-artificial-intelligence-education
[5] Ali Mollahosseini, David Chan, and Mohammad H. Mahoor, “Going Deeper in Facial Expression Recognition using Deep Neural Networks,” 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, pp. 1-10, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Mohammad H. Mahoor, AffectNet, Database. [Online]. Available: http://mohammadmahoor.com/affectnet/
[7] Sayf A. Majeed et al., “Mel Frequency Cepstral Coefficients (MFCC) Feature Extraction Enhancement in the Application of Speech Recognition: A Comparison Study,” Journal of Theoretical and Applied Information Technology, vol. 79, no. 1, pp. 38-56, 2015.
[Google Scholar] [Publisher Link]
[8] Karen Simonyan, and Andrew Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv, pp. 1-14, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Kyunghyun Cho et al., “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation,” arXiv, pp. 1-15, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Martín Abadi et al., “TensorFlow: A Framework for Large-Scale Machine Learning,” 12th USENIX Symposium on Operating System Design and Implementation (OSDI 16), pp. 265-283, 2016.
[Google Scholar] [Publisher Link]
[11] Using the Saved Model Format, TensorFlow. [Online]. Available: https://www.tensorflow.org/guide/saved_model
[12] Open Neural Network Exchange (ONNX): Towards an Open Ecosystem for AI, Microsoft. [Online]. Available: https://github.com/onnx/onnx
[13] Flask Documentation, Palletsprojects. [Online]. Available: https://flask.palletsprojects.com/en/3.0.x/
[14] FastAPI is described as a Modern and High-Performance Web Framework for Developing APIs with Python 3.7+, Cilans System, [Online]. Available: https://cilans.net/ai-ml-python-r/fastapi-is-described-as-a-modern-and-high-performance-web-frameworkfor-developing-apis-with-python-3-7/
[15] Django Documentation, Django. [Online]. Available: https://docs.djangoproject.com/
[16] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep Learning,” Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor, “AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild,” IEEE Transactions on Affective Computing, vol. 10, no. 1, pp. 18-31, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Steven R. Livingstone, and Frank A. Russo, “The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A Dynamic, Multimodal Set of Facial and Vocal Expressions in North American English,” PLoS One, vol. 13, no. 5, pp. 1-35, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Jicheng Li et al., “MMASD: A Multimodal Dataset for Autism Intervention Analysis,” arXiv, pp. 1-9, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Tongshuai Song, Guanming Lu, and Jingjie Yan, “Emotion Recognition Based On Physiological Signals Using Convolutional Neural Networks,” Proceedings of the 2020 12th International Conference on Machine Learning and Computing, pp. 161-165, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Rosalind W. Picard, Affective Computing, MIT Press, pp. 1-292, 1997.
[Google Scholar] [Publisher Link]
[22] Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer New York, pp. 1-738, 2006.
[Google Scholar] [Publisher Link]
[23] Xun Chen, Z. Jane Wang, and Martin McKeown, “Joint Blind Source Separation for Neurophysiological Data Analysis: Multiset and multimodal methods,” IEEE Signal Processing Magazine, vol. 33, no. 3, pp. 86-107, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, pp. 1-800, 2016.
[Google Scholar] [Publisher Link]
[25] Paul Ekman, “An Argument for Basic Emotions,” Cognition and Emotion, vol. 6, no. 3-4, pp. 169-200, 1992.
[CrossRef] [Google Scholar] [Publisher Link]
[26] Guoying Zhao, and Matti Pietikainen, “Dynamic Texture Recognition using Local Binary Patterns with an Application to Facial Expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 915-928, 2007.
[CrossRef] [Google Scholar] [Publisher Link]
[27] Heysem Kaya, Furkan Gürpınar, and Albert Ali Salah, “Video-Based Emotion Recognition in the Wild using Deep Transfer Learning and Score Fusion,” Image and Vision Computing, vol. 65, pp. 66-75, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Qingchen Zhang et al., “A Survey on Deep Learning for Big Data,” Information Fusion, vol. 42, pp. 146-157, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[29] Ziwei Liu et al., “Deep Learning Face Attributes in the Wild,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 3730-3738, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[30] Emad Barsoum et al., “Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution,” Computer Vision and Pattern Recognition, pp. 1-6, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[31] Zhongzheng Fu et al., “Emotion Recognition Based on Multi-Modal Physiological Signals and Transfer Learning,” Frontiers in Neuroscience, vol. 16, pp. 1-15, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[32] Hailun Lian et al., “A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face,” Entropy, vol. 25, no. 10, pp. 1-33, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[33] Deepanway Ghosal et al., “DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation,” arXiv, pp. 1-11, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[34] Paul Ekman, Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life, Henry Holt and Company, pp. 1-274, 2004.
[Google Scholar] [Publisher Link]
[35] Marina Sokolova, and Guy Lapalme, “A Systematic Analysis of Performance Measures for Classification Tasks,” Information Processing & Management, vol. 45, no. 4, pp. 427-437, 2009.
[CrossRef] [Google Scholar] [Publisher Link]
[36] Daniel Roy Greenfeld, and Audrey Roy Greenfeld, Two Scoops of Django 3.x: Best Practices for the Django Web Framework, Two Scoops Press, 2019.
[Publisher Link]
[37] T. Gutierrez, FastAPI Cookbook: Build Robust, Highly Scalable, and Secure Web APIs with Python, Packt Publishing, 2020. [Online]. Available: https://github.com/PacktPublishing/FastAPI-Cookbook
[39] Adam Paszke et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pp. 8024-8035, 2019.
[Google Scholar] [Publisher Link]