Predicting Processor Performance Using Machine Learning Techniques: A Study on SPEC CPU2017 Benchmark Suite

© 2021 by IJETT Journal
Volume-69 Issue-10
Year of Publication: 2021
Authors: Mutaz A. B. Al-Tarawneh, Sami A. Al-Tarawneh, Khaled S. Al-Maaitah
DOI: 10.14445/22315381/IJETT-V69I10P214

How to Cite?

Mutaz A. B. Al-Tarawneh, Sami A. Al-Tarawneh, Khaled S. Al-Maaitah, "Predicting Processor Performance Using Machine Learning Techniques: A Study on SPEC CPU2017 Benchmark Suite," International Journal of Engineering Trends and Technology, vol. 69, no. 10, pp. 108-117, 2021. Crossref, https://doi.org/10.14445/22315381/IJETT-V69I10P214

Abstract
Recent advances in the microprocessor industry have introduced a plethora of processor models with diverse microarchitectural characteristics. Such diversity complicates the choice of the best processor model for a particular application class. Hence, an efficient tool is required to estimate and compare the performance of different processor models on a particular application. This paper reports on using different machine learning models to predict the performance of modern processor models on various benchmark applications. These models include Linear Regression (LR), Artificial Neural Networks (ANNs), and Random Forests (RF). They are trained and evaluated on a dataset constructed from the Standard Performance Evaluation Corporation (SPEC) CPU2017 benchmark results. The SPEC CPU2017 suite includes both integer and floating-point applications. Both training and evaluation are performed using the WEKA data mining and machine learning tool. Evaluation metrics include the correlation coefficient, mean absolute error (MAE), relative absolute error (RAE), root mean squared error (RMSE), and root relative squared error (RRSE). Evaluation results show that the Random Forest-based model outperforms the other models under all evaluation metrics. Ultimately, the trained models can provide viable tools for predicting the performance of new processor models on standard benchmark applications.
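As an illustration of the workflow described in the abstract, the sketch below trains and cross-validates a Random Forest regressor with WEKA's Java API and prints the same five evaluation metrics. This is a minimal sketch under stated assumptions, not the authors' code: the input file name spec_cpu2017_results.arff is hypothetical, the numeric performance score is assumed to be the last attribute of the dataset, and default Random Forest settings are used.

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SpecCpu2017Prediction {
    public static void main(String[] args) throws Exception {
        // Load the SPEC CPU2017 result dataset (hypothetical ARFF export).
        DataSource source = new DataSource("spec_cpu2017_results.arff");
        Instances data = source.getDataSet();

        // Assume the numeric performance score is the last attribute.
        data.setClassIndex(data.numAttributes() - 1);

        // WEKA's RandomForest performs regression when the class attribute is numeric.
        RandomForest rf = new RandomForest();

        // 10-fold cross-validation with a fixed seed for repeatability.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(rf, data, 10, new Random(1));

        // The evaluation metrics reported in the paper.
        System.out.println("Correlation coefficient: " + eval.correlationCoefficient());
        System.out.println("MAE : " + eval.meanAbsoluteError());
        System.out.println("RMSE: " + eval.rootMeanSquaredError());
        System.out.println("RAE : " + eval.relativeAbsoluteError() + " %");
        System.out.println("RRSE: " + eval.rootRelativeSquaredError() + " %");
    }
}

Note that WEKA reports RAE and RRSE as percentages relative to a baseline predictor that always outputs the mean of the training targets, so values below 100 % indicate an improvement over that baseline.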

Keywords
Processor, microarchitecture, performance, machine learning.
