A study on performance of MHDA in training MLPs



Engineering Computations , Volume 36 (6): 15 – Aug 15, 2019

Publisher
Emerald Publishing
Copyright
© Emerald Publishing Limited
ISSN
0264-4401
DOI
10.1108/ec-05-2018-0216

Abstract

Purpose – In recent years, the application of metaheuristics to training neural network models has gained significance due to the drawbacks of deterministic algorithms. This paper proposes the use of a recently developed "memory based hybrid dragonfly algorithm" (MHDA) for training a multi-layer perceptron (MLP) model by finding the optimal set of weights and biases.

Design/methodology/approach – The efficiency of MHDA in training MLPs is evaluated by applying it to classification and approximation benchmark data sets. A performance comparison between MHDA and other training algorithms is carried out, and the significance of the results is established by statistical methods. The computational complexity of the MHDA-trained MLP is also estimated.

Findings – Simulation results show that MHDA can effectively find a near-optimum set of weights and biases at a higher convergence rate than other training algorithms.

Originality/value – This paper presents MHDA as an alternative optimization algorithm for training MLPs. MHDA can effectively optimize the set of weights and biases and is a potential trainer for MLPs.
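The core idea in the abstract is to treat MLP training as a black-box optimization problem: flatten all weights and biases into a single parameter vector and let a metaheuristic search that vector directly, using the network's error as the fitness function. The sketch below illustrates this framing only; it uses a generic (1+1) random-perturbation search on a hypothetical XOR toy task, not the paper's MHDA, and all names and settings (layer sizes, step size, iteration count) are illustrative assumptions.

```python
import numpy as np

# Toy task (hypothetical, for illustration): XOR classification.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# A tiny 2-4-1 MLP; the metaheuristic searches the flattened
# weight/bias vector theta of this network.
N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(theta):
    """Split the flat parameter vector into weight matrices and biases."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:]
    return W1, b1, W2, b2

def mse(theta):
    """Fitness function: mean squared error of the MLP on the data set."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return float(np.mean((out.ravel() - y) ** 2))

# Generic (1+1) metaheuristic: keep the best candidate seen so far,
# propose a Gaussian perturbation each generation, accept if better.
rng = np.random.default_rng(0)
best = rng.normal(0.0, 1.0, DIM)
f0 = best_f = mse(best)
for _ in range(2000):
    cand = best + rng.normal(0.0, 0.3, DIM)
    f = mse(cand)
    if f < best_f:
        best, best_f = cand, f

print(f"initial MSE: {f0:.4f}, final MSE: {best_f:.4f}")
```

Because the search only ever accepts improvements, the final error is never worse than the initial one; MHDA plays the same role as the perturbation-and-accept loop here, but with the dragonfly swarm dynamics and memory mechanism described in the paper.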

Journal

Engineering Computations, Emerald Publishing

Published: Aug 15, 2019

Keywords: Classification; Approximation; Computational complexity; MHDA; MLPs
