Approximation of Information Divergences for Statistical Learning with Applications

Publisher
de Gruyter
Copyright
© 2018 Mathematical Institute Slovak Academy of Sciences
ISSN
0139-9918
eISSN
1337-2211
DOI
10.1515/ms-2017-0177

Abstract

In this paper we give a partial response to one of the most important statistical questions: what are optimal statistical decisions, and how are they related to (statistical) information theory? We illustrate why it is necessary to understand the structure of information divergences and their approximations, which may in particular be understood through deconvolution. Deconvolution of information divergences is illustrated in the exponential family of distributions, leading to optimal tests in the Bahadur sense. We provide a new approximation of I-divergences using the Fourier transform, the saddle point approximation, and uniform convergence of Euler polygons. Uniform approximation of the deconvoluted parts of I-divergences is also discussed. Our approach is illustrated on a real data example.

Journal

Mathematica Slovaca (de Gruyter)

Published: Oct 25, 2018

MSC 2010: 62E17; 62F03; 65L20; 33E30

Keywords: deconvolution; information divergence; likelihood; change in intensity of Poisson process
