Best Architecture Recommendations of ANN Backpropagation Based on Combination of Learning Rate, Momentum, and Number of Hidden Layers

Syaharuddin Syaharuddin, Fatmawati Fatmawati, Herry Suprajitno

Abstract


This article presents a meta-analysis of the combination of learning rate, momentum, and number of hidden-layer neurons in the ANN Backpropagation (ANN-BP) architecture. The study aims to identify the most recommended values on the learning-rate and momentum interval [0,1], as well as the number of hidden-layer neurons to use during data training. Of 63 eligible data sets, 44 contained complete learning-rate data, 30 complete momentum data, and 45 complete data on the number of hidden-layer neurons. The analysis shows that the learning rate is recommended in the interval 0.1-0.2 with an RE model value of 0.938 (very high), the momentum in the interval 0.7-0.9 with an RE model value of 0.925 (very high), and a number of input-layer neurons smaller than the number of hidden-layer neurons with an RE model value of 0.932 (very high). These recommendations were obtained by analyzing the data in JASP and examining the effect size of the accuracy levels reported in the sample studies.
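For readers unfamiliar with the two hyperparameters being tuned, the following sketch (not the authors' code; a minimal illustration only) shows a single gradient-descent-with-momentum weight update of the kind used in backpropagation training, with the learning rate and momentum defaults drawn from the intervals recommended above:

```python
import numpy as np

def momentum_update(w, grad, velocity, learning_rate=0.1, momentum=0.9):
    """One backpropagation weight update with momentum.

    learning_rate scales the gradient step (recommended 0.1-0.2);
    momentum carries over a fraction of the previous step (recommended 0.7-0.9).
    """
    velocity = momentum * velocity - learning_rate * grad
    return w + velocity, velocity

# One update step from zero-initialized weights and velocity.
w = np.zeros(3)
v = np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])
w, v = momentum_update(w, grad, v)
# With zero initial velocity, the first step is simply -learning_rate * grad.
```

With nonzero velocity, subsequent steps blend the new gradient with the previous direction, which is what allows larger effective steps along consistent descent directions.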

Keywords


Neural Network; Backpropagation; Learning Rate; Momentum; Number of Neurons; Hidden Layer.






DOI: https://doi.org/10.31764/jtam.v6i3.8524



Copyright (c) 2022 Syaharuddin Syaharuddin, Fatmawati Fatmawati, Herry Suprajitno

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
