Research Paper | Computer Science | China | Volume 9 Issue 4, April 2020
Effect of Local Dynamic Learning Rate Adaptation on Mini-Batch Gradient Descent for Training Deep Learning Models with Back Propagation Algorithm
Joyce K. Ndauka, Dr. Zheng Xiao Yan
Backpropagation has gained much popularity for training neural network models, including deep learning models. Despite its popularity, ordinary backpropagation suffers from a low convergence rate due to its use of a constant learning rate. Since the error surface is not smooth, the learning rate needs to be dynamically adapted to speed up convergence. Much prior work has demonstrated the benefits of employing a dynamic learning rate in the backpropagation algorithm, focusing on global learning rate adaptation and on local adaptive learning rates with batch gradient descent. In this work we observe the effect of local dynamic learning rate adaptation, using the improved iRprop- algorithm with mini-batch gradient descent, on the convergence rate of backpropagation. The experiment was conducted in Python using the CIFAR-10 dataset. Results show that the proposed algorithm outperforms the ordinary backpropagation algorithm in terms of convergence speed.
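For readers unfamiliar with the update rule the abstract refers to, the following is a minimal NumPy sketch of the iRprop- per-weight step-size adaptation, applied to the gradient of one mini-batch. The increase/decrease factors (1.2 and 0.5) and the step-size bounds follow values commonly used in the Rprop literature; they, and the function shape, are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def irprop_minus_update(w, grad, prev_grad, step,
                        eta_plus=1.2, eta_minus=0.5,
                        step_min=1e-6, step_max=1.0):
    """One iRprop- update, applied element-wise to a weight array.

    w, grad, prev_grad, and step are NumPy arrays of the same shape;
    step holds the local (per-weight) step size that replaces the
    single constant learning rate of ordinary backpropagation.
    """
    sign_change = grad * prev_grad  # >0: same sign, <0: sign flipped

    # Same gradient sign: accelerate by growing the local step size.
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)

    # Sign flipped: the last step overshot a minimum, so shrink the
    # step size and zero the gradient so this weight skips the
    # current update (the "minus" in iRprop-: no weight backtracking).
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)

    # Move each weight against its gradient sign by its own local step.
    w = w - np.sign(grad) * step
    return w, grad, step  # returned grad becomes prev_grad next time
```

In the setting the paper studies, this update would be invoked once per mini-batch (with grad computed from that mini-batch by backpropagation) rather than once per full pass over the training set, which is the standard batch-gradient setting for Rprop-family methods.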
Keywords: Mini-batch gradient descent, Learning rate, Backpropagation, Deep learning
Edition: Volume 9 Issue 4, April 2020
Pages: 339 - 342
How to Cite this Article?
Joyce K. Ndauka, Dr. Zheng Xiao Yan, "Effect of Local Dynamic Learning Rate Adaptation on Mini-Batch Gradient Descent for Training Deep Learning Models with Back Propagation Algorithm", International Journal of Science and Research (IJSR), Volume 9 Issue 4, April 2020, pp. 339 - 342, https://www.ijsr.net/search_index_results_paperid.php?id=SR20401112354