A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches

D. Geraldine Bessie Amali, Dinakaran M.


Artificial Neural Networks have gained popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and the network output. The resulting error surface is nonconvex and highly multimodal, and finding the global minimum of a multimodal function is NP-hard in general, so exact solutions are intractable. This motivates the application of heuristic global optimization algorithms, which compute a good approximation to the global minimum, to neural network training. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of learning rate, convergence speed, and accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization.
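As a concrete illustration of the approach the abstract describes, the sketch below trains a tiny feedforward network by minimizing mean square error with particle swarm optimization (PSO), one representative heuristic global optimizer. The network architecture (2-2-1 with tanh hidden units), the XOR data set, and all PSO hyperparameters are illustrative assumptions, not taken from the paper under review.

```python
import numpy as np

# Illustrative sketch (not the reviewed paper's method): train a small
# feedforward network with global-best PSO by minimizing MSE.
rng = np.random.default_rng(0)

# XOR data: a classic nonconvex training problem for a 2-2-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    # Unpack a flat 9-dimensional weight vector into a 2-2-1 network
    # with tanh hidden units and a linear output.
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(w):
    # Mean square error between network output and target.
    return np.mean((forward(w, X) - y) ** 2)

# Global-best PSO with inertia weight (hyperparameters are assumptions).
n_particles, dim, iters = 30, 9, 300
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(round(mse(gbest), 4))  # MSE of the best weight vector found
```

Because PSO evaluates only the error value and never its gradient, the same loop applies unchanged to nondifferentiable or highly multimodal error surfaces, which is the property that makes such heuristics attractive for the training problems the paper surveys.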


Artificial Neural Networks, Feedforward Neural Networks, Optimization Algorithms, Recurrent Neural Networks, Training Algorithms



DOI: http://doi.org/10.11591/ijai.v6.i1.pp26-32




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.