Mathematical formulation for the second derivative of backpropagation error with non-linear output function in feedforward neural networks

Publisher: Inderscience Publishers
Copyright: © Inderscience Enterprises Ltd. All rights reserved
ISSN: 1756-7017
eISSN: 1756-7025
DOI: 10.1504/IJIDS.2010.037231

Abstract

The feedforward neural network architecture uses backpropagation learning to determine the optimal weights between the interconnected layers so that the network provides good approximation and generalisation. The optimal weight vector can be determined only when the total, or global, error (the mean of the minimum local errors over all patterns in the training set) is minimised. In this paper, we present a generalised mathematical formulation for the second derivative of the error function of a feedforward neural network, which is used to obtain the optimal weight vector for a given training set. The new global minimum error point can be evaluated from the current global minimum error and the currently minimised local error. The proposed method indicates that weights determined by minimising the mean error are closer to the optimal solution than those obtained with conventional gradient descent approaches.
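
For orientation only, here is a minimal sketch, in assumed notation that is not taken from the paper itself, of what the second derivative of the backpropagation error looks like for a single output-layer weight, using the standard sum-of-squares local error and a non-linear output function f:

% Hedged sketch in assumed notation: t_{pk} target, y_{pk} network output, o_{pj} incoming activation
E_p = \tfrac{1}{2}\sum_k \bigl(t_{pk} - y_{pk}\bigr)^2, \qquad y_{pk} = f(\mathrm{net}_{pk}), \qquad \mathrm{net}_{pk} = \sum_j w_{jk}\, o_{pj}
% First derivative (the usual backpropagation gradient for an output weight)
\frac{\partial E_p}{\partial w_{jk}} = -\bigl(t_{pk} - y_{pk}\bigr)\, f'(\mathrm{net}_{pk})\, o_{pj}
% Second derivative of the local error with respect to the same weight
\frac{\partial^2 E_p}{\partial w_{jk}^2} = \Bigl[f'(\mathrm{net}_{pk})^2 - \bigl(t_{pk} - y_{pk}\bigr)\, f''(\mathrm{net}_{pk})\Bigr]\, o_{pj}^2
% The global error is the mean of the local errors, so its second derivative averages the same way
E = \frac{1}{P}\sum_{p=1}^{P} E_p \quad\Longrightarrow\quad \frac{\partial^2 E}{\partial w_{jk}^2} = \frac{1}{P}\sum_{p=1}^{P} \frac{\partial^2 E_p}{\partial w_{jk}^2}

Note that the non-linearity enters the curvature through f''; for a linear output unit (f'' = 0) the bracketed term reduces to the always non-negative f'(\mathrm{net}_{pk})^2, which is the Gauss-Newton approximation often used in place of the full second derivative.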

Journal

International Journal of Information and Decision Sciences, Inderscience Publishers

Published: Jan 1, 2010
