Similarity Metric Learning for a Variable-Kernel Classifier

Neural Computation, Volume 7 (1) – Jan 1, 1995


Publisher: MIT Press
Copyright: © 1995 Massachusetts Institute of Technology
ISSN: 0899-7667
eISSN: 1530-888X
DOI: 10.1162/neco.1995.7.1.72

Abstract

Nearest-neighbor interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than backpropagation and most other learning methods. The number of parameters that must be determined through optimization is orders of magnitude smaller than for backpropagation or radial basis function (RBF) networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning are discussed that make it relevant to models for biological learning in the brain.
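
The abstract describes the classifier only at a high level. As a concrete illustration, the sketch below implements the core idea: a kernel-weighted vote among the k nearest neighbors under a weighted distance, with a Gaussian kernel whose width scales with the mean neighbor distance, so that the kernel varies with local data density. The names (`vsm_predict`, the diagonal metric `w`, the kernel-size factor `r`, the neighbor count `k`) are illustrative assumptions, not the paper's notation, and the conjugate-gradient optimization of the metric and kernel size that the abstract refers to is omitted; `w` and `r` are simply taken as given.

```python
import numpy as np

def vsm_predict(X_train, y_train, x, w, r=1.0, k=5, n_classes=2):
    """Classify query x by a kernel-weighted vote of its k nearest
    neighbors under a diagonal similarity metric w (a sketch of the
    idea in the abstract, not the paper's exact algorithm)."""
    # Weighted Euclidean distance to every training point:
    # d_i = sqrt(sum_j w_j * (x_j - X_ij)^2)
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    nn = np.argsort(d)[:k]  # indices of the k nearest neighbors
    # Variable kernel: the width tracks the mean neighbor distance,
    # so it adapts to the local density of the training data.
    sigma = r * d[nn].mean() + 1e-12
    weights = np.exp(-d[nn] ** 2 / (2 * sigma ** 2))
    # Kernel-weighted, normalized class vote (soft class scores).
    votes = np.zeros(n_classes)
    for i, wt in zip(nn, weights):
        votes[y_train[i]] += wt
    return votes / votes.sum()

# Toy usage: two Gaussian blobs in 2-D, equal feature weights.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(vsm_predict(X, y, np.array([2.5, 2.5]), w=np.ones(2)))
```

Tying sigma to the mean neighbor distance is what makes the kernel "variable": in dense regions of the training data the kernel narrows, and in sparse regions it widens.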

Journal: Neural Computation, MIT Press

Published: Jan 1, 1995
