Statistical learning theory (SLT) provides the theoretical basis for many machine learning algorithms (e.g., SVMs and kernel methods). Invariance, a popular form of prior knowledge in pattern analysis, has been widely incorporated into statistical learning algorithms to improve learning performance. Though successful in some applications, existing invariance learning algorithms are task-specific and lack a solid theoretical basis, including consistency guarantees. In this paper, we first propose the problem of statistical learning with group invariance (group invariance learning for short) to provide a unifying framework for existing invariance learning algorithms in pattern analysis by exploiting group invariance. We then introduce the group invariance empirical risk minimization (GIERM) method to solve the group invariance learning problem, which incorporates the group action on the original data into empirical risk minimization (ERM). Finally, we investigate the consistency of the GIERM method in detail. Our theoretical results include three theorems, covering the necessary and sufficient conditions of consistency, uniform two-sided convergence and uniform one-sided convergence for the group invariance learning process based on the GIERM method.

Keywords: Statistical learning · Group invariance · Group invariance empirical risk minimization · Consistency · Uniform convergence

1 Introduction

results when applied directly to the learning tasks such as
International Journal of Machine Learning and Cybernetics – Springer Journals
Published: Jun 5, 2018
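The core idea the abstract attributes to GIERM — incorporating the group action on the data into empirical risk minimization — can be sketched by averaging the loss over each sample's orbit under a finite group. The group below (sign flips of the input, i.e. Z/2Z), the linear predictor, and the squared loss are illustrative assumptions for this sketch, not the paper's actual construction.

```python
import numpy as np

def squared_loss(w, x, y):
    """Squared loss of a linear predictor <w, x> against target y."""
    return (np.dot(w, x) - y) ** 2

def gierm_risk(w, X, Y, group_actions):
    """Empirical risk averaged over each sample's orbit under the group.

    For every sample x, the loss is averaged over all group-transformed
    copies g(x); minimizing this risk pushes the predictor toward
    invariance under the group action.
    """
    total = 0.0
    for x, y in zip(X, Y):
        total += np.mean([squared_loss(w, g(x), y) for g in group_actions])
    return total / len(X)

# Z/2Z acting by sign flip: a predictor with low orbit-averaged risk must be
# (approximately) invariant to x -> -x.
group = [lambda x: x, lambda x: -x]

X = np.array([[1.0, 2.0], [0.5, -1.0]])
Y = np.array([0.0, 0.0])
w = np.array([0.3, -0.2])

risk = gierm_risk(w, X, Y, group)
```

With zero targets the two orbit losses coincide, so the orbit average here equals the plain empirical risk; for nonzero targets the two generally differ, which is exactly the gap between ERM and this orbit-averaged variant.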