Bounds on the rate of convergence of learning processes based on random sets and set‐valued probability


Kybernetes , Volume 40 (9/10): 27 – Oct 18, 2011

Publisher
Emerald Publishing
Copyright
Copyright © 2011 Emerald Group Publishing Limited. All rights reserved.
ISSN
0368-492X
DOI
10.1108/03684921111169486

Abstract

Purpose – Bounds on the rate of convergence of learning processes based on random samples and probability are an essential component of statistical learning theory (SLT), and constructive distribution‐independent bounds on generalization are the cornerstone of constructing support vector machines. Random sets and set‐valued probability are important extensions of random variables and probability, respectively. The paper aims to address these issues. Design/methodology/approach – In this study, bounds on the rate of convergence of learning processes based on random sets and set‐valued probability are discussed. First, the Hoeffding inequality is extended to random sets and then, using the key theorem, the non‐constructive distribution‐dependent bounds for learning machines based on random sets in a set‐valued probability space are revisited. Second, some properties of random sets and set‐valued probability are discussed. Findings – Next, the concepts of the annealed entropy, the growth function, and the VC dimension of a set of random sets are presented. Finally, the paper establishes the VC‐dimension theory of SLT based on random sets and set‐valued probability, and then develops constructive distribution‐independent bounds on the rate of uniform convergence of learning processes. It shows that such bounds are important to the analysis of the generalization abilities of learning machines. Originality/value – SLT is currently regarded as one of the fundamental theories of statistical learning from small samples.
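The two kinds of bound the abstract refers to have well-known classical scalar forms: the Hoeffding tail inequality for bounded i.i.d. samples, and Vapnik's constructive distribution-independent bound on uniform deviation in terms of VC dimension. The sketch below illustrates only these standard forms, not the paper's extension to random sets and set-valued probability; the function names and default interval `[a, b] = [0, 1]` are our own choices.

```python
import math

def hoeffding_bound(n, eps, a=0.0, b=1.0):
    """Classical Hoeffding inequality: for n i.i.d. samples taking values
    in [a, b], the probability that the empirical mean deviates from the
    expectation by at least eps satisfies
        P(|mean - E| >= eps) <= 2 * exp(-2 * n * eps^2 / (b - a)^2).
    Returns that right-hand side."""
    return 2.0 * math.exp(-2.0 * n * eps ** 2 / (b - a) ** 2)

def vc_uniform_deviation(n, h, delta):
    """Vapnik-style distribution-independent bound: with probability at
    least 1 - delta, the gap between true and empirical risk is at most
        sqrt((h * (ln(2n/h) + 1) + ln(4/delta)) / n)
    simultaneously for every function in a class of VC dimension h
    (assumes n > h). Returns that deviation term."""
    return math.sqrt(
        (h * (math.log(2.0 * n / h) + 1.0) + math.log(4.0 / delta)) / n
    )
```

For example, `hoeffding_bound(100, 0.1)` evaluates to `2 * exp(-2)`, and both bounds shrink as the sample size n grows, which is the sense in which they control the rate of uniform convergence.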

Journal

Kybernetes, Emerald Publishing

Published: Oct 18, 2011

Keywords: Random sets; Set‐valued probability; The key theorem; VC dimension; Rate of uniform convergence; Learning processes; Probability theory
