Dual-Cross Label Smoothing and Attention Driven Joint Multimodal Deep Learning Framework for Unsupervised Person Re-Identification

© 2025 by IJETT Journal
Volume-73 Issue-9
Year of Publication : 2025
Author : Badireddygari Anurag Reddy, Deepika Ghai, Danvir Mandal
DOI : 10.14445/22315381/IJETT-V73I9P123

How to Cite?
Badireddygari Anurag Reddy, Deepika Ghai, and Danvir Mandal, "Dual-Cross Label Smoothing and Attention Driven Joint Multimodal Deep Learning Framework for Unsupervised Person Re-Identification," International Journal of Engineering Trends and Technology, vol. 73, no. 9, pp. 250-261, 2025. Crossref, https://doi.org/10.14445/22315381/IJETT-V73I9P123

Abstract
The advancement of smart city infrastructure necessitates robust person Re-identification (Re-ID) systems capable of addressing challenges such as scalability, privacy, and security. This paper presents an unsupervised Re-ID framework that integrates enhanced data preprocessing, EfficientNet-B0 for feature extraction, K-Means++ clustering for stable pseudo-labeling, a dual-branch discriminative learning structure, and Context-Aware Label Smoothing (CALS) to improve resilience to pseudo-label noise, occlusion, and viewpoint variation. The framework was evaluated on three challenging datasets (CASIA, Market-1501, and DukeMTMC-Re-ID), each containing significant challenges such as pose variation, illumination changes, and background clutter. Experimental results demonstrate superior performance over conventional baselines, including ResNet-50, DBSCAN, and Dual Cross-Neighbor Label Smoothing (DCLS). Both the global and local learning branches achieved over 99% training accuracy within five epochs, indicating rapid convergence. The method achieved Rank-1 accuracies of 89.7%, 91.8%, and 87.5% and mAP scores of 82.5%, 85.7%, and 80.2% on CASIA, Market-1501, and DukeMTMC-Re-ID, respectively. Qualitative assessments and t-SNE visualizations confirmed improved retrieval accuracy and enhanced feature discrimination. The proposed approach demonstrates strong generalization, stability, and robustness against label noise, highlighting its suitability for real-world deployment in intelligent surveillance and public safety applications.
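The pseudo-labeling and smoothing steps the abstract names can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy features stand in for backbone embeddings (e.g., from EfficientNet-B0), and a uniform smoothing epsilon is used, whereas CALS adapts the smoothing per sample based on context.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy "features": two well-separated Gaussian blobs standing in for
# embeddings that a backbone would produce for two distinct identities.
feats = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
                   rng.normal(3.0, 0.1, (50, 8))])

# Step 1: cluster with K-Means++ initialization to obtain pseudo-labels.
kmeans = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0)
pseudo = kmeans.fit_predict(feats)

# Step 2: soften the one-hot pseudo-labels so the classifier is less
# confident about possibly noisy cluster assignments.
def smooth_labels(labels, n_classes, eps=0.1):
    one_hot = np.eye(n_classes)[labels]
    return one_hot * (1.0 - eps) + eps / n_classes

targets = smooth_labels(pseudo, n_classes=2)
# Each row still sums to 1; the confident entry is 0.95, the other 0.05.
print(targets[0])
```

The smoothed targets would then supervise the dual-branch classifier in place of hard cluster labels.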

Keywords
Context-Aware Label Smoothing (CALS), Deep learning, Local Soft Attention (LSA), Unsupervised person re-identification.

References
[1] Silvio Sampaio et al., “Collecting, Processing and Secondary Using Personal and (Pseudo) Anonymized Data in Smart Cities,” Applied Sciences, vol. 13, no. 6, pp. 3831-3861, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Irfan Yaqoob et al., “A Novel Person Re-Identification Network to Address Low-Resolution Problem in Smart City Context,” ICT Express, vol. 9, no. 5, pp. 809-814, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Samee Ullah Khan et al., “Efficient Person Reidentification for IoT-Assisted Cyber-Physical Systems,” IEEE Internet of Things Journal, vol. 10, no. 21, pp. 18695-18707, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Nazia Perwaiz, M.M. Fraz, and Muhammad Shahzad, “Smart Surveillance with Simultaneous Person Detection and Re-Identification,” Multimedia Tools and Applications, vol. 83, no. 5, pp. 15461-15482, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[5] Zia-ur-Rehman, Arif Mahmood, and Wenxiong Kang, “Pseudo-Label Refinement for Improving Self-Supervised Learning Systems,” arXiv Preprint, vol. 1, pp. 1-11, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Jiachen Li, Menglin Wang, and Xiaojin Gong, “Transformer Based Multi-Grained Features for Unsupervised Person Re-Identification,” 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, Waikoloa, HI, USA, pp. 42-50, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Ang Li et al., “Global Information-Assisted Fine-Grained Visual Categorization in Internet of Things,” IEEE Internet of Things Journal, vol. 10, no. 1, pp. 940-952, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Yan Hui et al., “Unsupervised Cross-Domain Person Re-Identification Method based on Attention Block and Refined Clustering,” IEEE Access, vol. 10, pp. 105930-105941, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Lun Wang et al., “Attention-Disentangled Re-ID Network for Unsupervised Domain Adaptive Person Re-Identification,” Knowledge-Based Systems, vol. 304, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Haiqin Chen et al., “Dual Attention Network for Unsupervised Domain Adaptive Person Re-Identification,” IEEE Access, vol. 11, pp. 88184-88192, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Jian Han, Ya-Li Li, and Shengjin Wang, “Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 1, pp. 790-798, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Yue Zou et al., “An Improved Method for Cross-Domain Pedestrian Re-Identification,” Proceedings of the World Conference on Intelligent and 3-D Technologies (WCI3DT 2022), pp. 351-367, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Yanfeng Li et al., “Unsupervised Person Re-Identification based on High-Quality Pseudo Labels,” Applied Intelligence, vol. 53, no. 12, pp. 15112-15126, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Haiming Sun and Shiwei Ma, “Pro-ReID: Producing Reliable Pseudo Labels for Unsupervised Person Re-Identification,” Image and Vision Computing, vol. 150, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Muhammad Fayyaz et al., “Person Re-Identification with Features-Based Clustering and Deep Features,” Neural Computing and Applications, vol. 32, no. 14, pp. 10519-10540, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Xiai Yan et al., “Unsupervised Domain Adaptive Person Re-Identification Method Based on Transformer,” Electronics, vol. 11, no. 19, pp. 1-13, 2022.
[CrossRef] [Google Scholar] [Publisher Link]