Most neural network applications rely on the fundamental approximation property of feedforward networks. Supervised learning is a means of implementing this approximate mapping. In a realistic problem setting, a mechanism is needed to devise the learning process from available data: choosing an appropriate set of parameters to avoid overfitting, using a learning algorithm that is efficient in computation and memory, ensuring the accuracy of the training procedure as measured by the training error, and testing and cross-validating for generalization. We develop a comprehensive supervised learning algorithm that addresses these issues. The algorithm combines training and pruning into one procedure by exploiting the commonly observed rank deficiency of the Jacobian in feedforward networks. It not only reduces training time and overall complexity but also achieves training accuracy and generalization comparable to more standard approaches. Extensive simulation results demonstrate the effectiveness of the algorithm.
Neural Computation – MIT Press
Published: May 15, 1998
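
The paper's actual procedure is not reproduced on this page, but the central idea, detecting rank deficiency in the Jacobian of the network's residuals with respect to its parameters, can be illustrated with a short sketch. The Python example below is an illustrative assumption, not the authors' code: it fits a deliberately oversized one-hidden-layer network with damped Gauss-Newton (Levenberg-Marquardt-style) steps, then estimates the numerical rank of the Jacobian from its singular values. The gap between the parameter count and that rank indicates how many parameter directions are redundant and are therefore candidates for pruning.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 1, 8                          # hidden layer deliberately oversized
P = N_HID * N_IN + N_HID + N_HID + 1        # total parameter count

def unpack(theta):
    # Split the flat parameter vector into layer weights and biases.
    i = 0
    W1 = theta[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = theta[i:i + N_HID]; i += N_HID
    w2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, w2, b2

def forward(theta, X):
    W1, b1, w2, b2 = unpack(theta)
    return np.tanh(X @ W1.T + b1) @ w2 + b2

def residuals(theta, X, t):
    return forward(theta, X) - t

def jacobian(theta, X, t, eps=1e-6):
    # Forward-difference Jacobian of the residual vector w.r.t. all parameters.
    r0 = residuals(theta, X, t)
    J = np.empty((len(r0), len(theta)))
    for j in range(len(theta)):
        d = np.zeros_like(theta); d[j] = eps
        J[:, j] = (residuals(theta + d, X, t) - r0) / eps
    return J

# Toy data from a small 2-hidden-unit target, so the 8-hidden-unit
# model is overparameterized by construction.
X = np.linspace(-2.0, 2.0, 100).reshape(-1, 1)
t = np.tanh(2.0 * X[:, 0]) - 0.5 * np.tanh(X[:, 0] - 1.0)

theta = 0.1 * rng.standard_normal(P)
mu = 1e-2                                   # fixed damping for simplicity; real LM adapts it
for step in range(200):
    # Damped Gauss-Newton step: (J^T J + mu I) dtheta = -J^T r
    r = residuals(theta, X, t)
    J = jacobian(theta, X, t)
    theta += np.linalg.solve(J.T @ J + mu * np.eye(P), -J.T @ r)

# Numerical rank of the converged Jacobian via its singular values.
s = np.linalg.svd(jacobian(theta, X, t), compute_uv=False)
rank = int(np.sum(s > 1e-6 * s[0]))
print(f"parameters: {P}, numerical Jacobian rank: {rank}")
print(f"=> {P - rank} redundant parameter directions are pruning candidates")

The rank tolerance (here a fixed fraction of the largest singular value) is a simplifying assumption; in practice it would be chosen relative to the noise level and the conditioning of the problem, and the redundant directions would guide which weights to remove during training rather than after it.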