Evolutionary Optimization with Deep Learning-Driven Visual Place Recognition for Seasonal Variant Environment
© 2022 by IJETT Journal
Year of Publication: 2022
Authors: P. Sasikumar, S. Sathiamoorthy
DOI: 10.14445/22315381/IJETT-V70I7P235
How to Cite?
P. Sasikumar, S. Sathiamoorthy, "Evolutionary Optimization with Deep Learning-Driven Visual Place Recognition for Seasonal Variant Environment," International Journal of Engineering Trends and Technology, vol. 70, no. 7, pp. 339-347, 2022. Crossref, https://doi.org/10.14445/22315381/IJETT-V70I7P235
Visual Place Recognition (VPR) is the task of identifying the same place despite considerable variations in appearance and viewpoint. VPR is a major element of Spatial Artificial Intelligence, allowing robotic and intelligent augmentation platforms to perceive and understand the real world. Long-term navigation in changing environments is a challenging VPR problem because the appearance of a place can vary significantly across times of day, months, and seasons. Recently, researchers have turned to advanced deep learning techniques to address this issue. This paper presents a novel Remora Optimization with Deep Learning-Driven Visual Place Recognition for Seasonal Variant Environment (ROADL-VPRSI) model. The proposed ROADL-VPRSI model employs a pretrained capsule network (CapsNet) to learn image descriptors. The Remora Optimization Algorithm (ROA) is then applied to tune the hyperparameters of the CapsNet model, such as the learning rate, batch size, and number of hidden layers. Next, the feature vectors are transformed into binary codes to reduce the computational complexity of image matching. Finally, a Minkowski distance-based similarity measurement is carried out to recognize places effectively. The experimental validation of the ROADL-VPRSI model is performed on a benchmark dataset, and the results are inspected under several measures. The comparative study highlights the superior performance of the ROADL-VPRSI model over recent methods.
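The abstract describes ROA tuning the CapsNet hyperparameters (learning rate, batch size, number of hidden layers). The sketch below is a greatly simplified population-based search standing in for ROA: candidates drift toward the current best solution, loosely mirroring the remora's host-following behavior. The search-space bounds and the `optimize` interface are illustrative assumptions, not details from the paper, and the exact ROA update equations are omitted.

```python
import random

# Hypothetical search space mirroring the hyperparameters named in the
# abstract; the bounds are illustrative, not taken from the paper.
SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "batch_size": (16, 128),
    "hidden_layers": (1, 5),
}

def sample():
    """Draw one random hyperparameter configuration from SPACE."""
    return {
        "learning_rate": random.uniform(*SPACE["learning_rate"]),
        "batch_size": random.randint(*SPACE["batch_size"]),
        "hidden_layers": random.randint(*SPACE["hidden_layers"]),
    }

def optimize(objective, pop_size=10, iters=20):
    """Simplified population search (a stand-in for ROA): each candidate
    moves a random fraction of the way toward the best-so-far solution,
    and a move is kept only if it improves the objective."""
    pop = [sample() for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iters):
        new_pop = []
        for cand in pop:
            child = dict(cand)
            for k in child:
                lo, hi = SPACE[k]
                # nudge toward the "host" (current best), then clamp
                v = child[k] + (best[k] - child[k]) * random.random()
                child[k] = min(max(v, lo), hi)
                if k != "learning_rate":  # integer-valued hyperparameters
                    child[k] = int(round(child[k]))
            new_pop.append(min((cand, child), key=objective))
        pop = new_pop
        best = min(pop + [best], key=objective)
    return best
```

In practice the `objective` would train the CapsNet with a candidate configuration and return a validation loss; here any callable over the hyperparameter dictionary works.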
Visual place recognition, Computer vision, Similarity measurement, Remora optimization algorithm.
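The matching stage described in the abstract (binarized descriptors compared with a Minkowski distance) can be sketched as follows. The median-threshold binarization and the helper names are assumptions for illustration; the paper's exact encoding is not specified here. Note that for binary codes the Minkowski distance with p = 1 reduces to the Hamming distance.

```python
import numpy as np

def binarize(descriptors: np.ndarray) -> np.ndarray:
    """Binarize real-valued descriptors by thresholding each dimension
    at its median (one common encoding scheme; illustrative only)."""
    thresholds = np.median(descriptors, axis=0)
    return (descriptors > thresholds).astype(np.uint8)

def minkowski(a: np.ndarray, b: np.ndarray, p: int = 1) -> float:
    """Minkowski distance; on binary codes, p=1 equals Hamming distance."""
    return float(np.sum(np.abs(a.astype(int) - b.astype(int)) ** p) ** (1.0 / p))

def match_place(query: np.ndarray, database: np.ndarray, p: int = 1) -> int:
    """Return the index of the database code nearest to the query."""
    distances = [minkowski(query, code, p) for code in database]
    return int(np.argmin(distances))
```

For example, matching a 4-bit query against three stored codes returns the index whose code differs in the fewest bits; binarization keeps the comparison cheap relative to real-valued descriptors.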