Evaluating the Real-World Application Efficacy of MobileNet Models

© 2024 by IJETT Journal
Volume-72 Issue-9
Year of Publication: 2024
Authors: Sara Bouraya, Abdessamad Belangour
DOI: 10.14445/22315381/IJETT-V72I9P116

How to Cite?
Sara Bouraya, Abdessamad Belangour, "Evaluating the Real-World Application Efficacy of MobileNet Models," International Journal of Engineering Trends and Technology, vol. 72, no. 9, pp. 197-202, 2024. Crossref, https://doi.org/10.14445/22315381/IJETT-V72I9P116

Abstract
This experimental study evaluates MobileNet and three of its variants for object classification in support of object detection under varying lighting conditions. Each model is trained on the 'Car Object Detection' dataset, augmented with changes in lighting, weather, and urban or rural settings so that the training data better reflects real-world scenes. We describe the architectural adjustments and training techniques intended to improve adaptability across these environments while preserving accuracy. The best-performing model reached 97% validation accuracy in tests spanning the varied environmental conditions. The results show that lightweight convolutional networks for object detection are both effective and resource-efficient, making them well suited to dynamic settings that require real-time operation with limited computational resources.
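To illustrate the kind of training pipeline the abstract describes, the following is a minimal sketch of fine-tuning a pre-trained MobileNetV2 classifier with lighting-style augmentation in TensorFlow/Keras. The dataset directory layout, class structure, and hyperparameters here are illustrative assumptions, not the authors' actual configuration.

# Sketch: fine-tune a MobileNet variant for car/background classification
# with augmentation that loosely mimics lighting variation.
# Paths and hyperparameters are assumptions, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)
BATCH = 32

# Assumed layout: data/train/<class_name>/*.jpg and data/val/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH)
num_classes = len(train_ds.class_names)

# Augmentation approximating lighting/weather variation:
# random brightness, contrast, horizontal flip, and slight rotation.
augment = tf.keras.Sequential([
    layers.RandomBrightness(0.3),
    layers.RandomContrast(0.3),
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
])

# ImageNet-pretrained MobileNetV2 backbone, frozen for transfer learning.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

The same skeleton applies to the other MobileNet variants by swapping the backbone (e.g. tf.keras.applications.MobileNetV3Small) and its matching preprocess_input function.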

Keywords
CNNs, Convolutional Neural Networks, Computer Vision, MobileNet, Object Classification.
