Emotion Recognition Using PIZAM-ANFIS by Considering Partial Occlusion and Behind the Mask

© 2025 by IJETT Journal
Volume-73 Issue-2
Year of Publication : 2025
Author : Jyoti S. Bedre, P. Lakshmi Prasanna
DOI : 10.14445/22315381/IJETT-V73I2P120

How to Cite?
Jyoti S. Bedre, P. Lakshmi Prasanna, "Emotion Recognition Using PIZAM-ANFIS by Considering Partial Occlusion and Behind the Mask," International Journal of Engineering Trends and Technology, vol. 73, no. 2, pp. 230-243, 2025. Crossref, https://doi.org/10.14445/22315381/IJETT-V73I2P120

Abstract
Emotional expressions, encompassing verbal and non-verbal communication, convey an individual’s emotional state or attitude to others. Understanding complex human behavior requires analyzing physical features across multiple modalities, and recent studies have focused extensively on spontaneous multi-modal emotion recognition for human behavior analysis. However, accurate Facial Emotion Recognition (FER) faces significant challenges from partial facial occlusions caused by random objects and mask-wearing. To address this, the paper introduces a novel classification method, Pizam-ANFIS-based FER considering Occlusions and Masks (PAFEROM). The input image is first preprocessed, then cropped, and the face is detected with the Viola-Jones Algorithm (VJA). Next, the skin tone is analyzed, and several regions of the face are segmented using LSW-KMC. Contour formation, edge detection with CGED, and feature extraction are then performed. The dimensionality of the extracted features is reduced using Principal Component Analysis with Information Gain Analysis (PIGA) before the CSE processes them to identify Action Units (AUs). Finally, the identified AUs and the dimensionally reduced features are classified using Pizam-ANFIS to recognize human emotions. Experimental results indicate that the proposed model surpasses existing techniques in both effectiveness and accuracy.
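The abstract outlines a staged pipeline: preprocessing, face detection, segmentation, edge-based feature extraction, dimensionality reduction, AU identification, and classification. The snippet below is a minimal, illustrative sketch of the generic stages only, not the authors' PAFEROM implementation: it uses OpenCV's Haar cascade as a stand-in for the Viola-Jones detector, Gaussian smoothing plus cv2.Canny as a stand-in for CGED, and scikit-learn PCA as a stand-in for PIGA. The paper-specific modules (LSW-KMC segmentation, CSE-based AU detection, Pizam-ANFIS classification) are not publicly available and appear only as placeholders in the comments; all file names and parameter values are assumptions.

```python
# Illustrative FER pre-processing sketch with standard stand-ins
# (not the PAFEROM modules described in the paper).
import cv2
import numpy as np
from sklearn.decomposition import PCA

def detect_and_crop_face(image_bgr):
    """Viola-Jones-style face detection via OpenCV's bundled Haar cascade."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]               # take the first detected face
    return gray[y:y + h, x:x + w]

def edge_features(face_gray, size=(64, 64)):
    """Gaussian smoothing + Canny edges as a simple stand-in for CGED."""
    face = cv2.resize(face_gray, size)
    smoothed = cv2.GaussianBlur(face, (5, 5), sigmaX=1.0)
    edges = cv2.Canny(smoothed, threshold1=50, threshold2=150)
    return edges.flatten().astype(np.float32) / 255.0

def reduce_dimensions(feature_matrix, n_components=50):
    """PCA stand-in for the PIGA step (information-gain ranking omitted)."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(feature_matrix)

# Usage (hypothetical file name):
# img = cv2.imread("subject_01.jpg")
# face = detect_and_crop_face(img)
# if face is not None:
#     feats = edge_features(face)
#     # Stack feats from many images into a matrix, then:
#     # reduced = reduce_dimensions(np.vstack(all_feats))
#     # In PAFEROM, the reduced features plus the AUs identified by CSE
#     # would feed the Pizam-ANFIS classifier rather than a generic model.
```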

Keywords
Local Structural Weighted K-Means Clustering (LSW-KMC) algorithm, Canny Gaussian Edge Detector (CGED), PizMamdani (Pizam)-Adaptive Neuro-Fuzzy Inference System (Pizam-ANFIS), Correlated Swish Embedding Network (CSE).
