Prediction of time series by statistical learning: general losses and fast rates


Dependence Modeling, Volume 1 (2013): 29 – Jan 1, 2013

Publisher
de Gruyter
Copyright
©2013 Olivier Wintenberger et al.
ISSN
2300-2298
eISSN
2300-2298
DOI
10.2478/demo-2013-0004

Abstract

We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates, pointing out references to lower bounds when they are available. In particular, these results bring a generalization of the results of [29] on sparse regression estimation to some autoregression.
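
For readers unfamiliar with the Gibbs estimator referred to in the abstract, the sketch below is a minimal illustration, not the authors' implementation: it aggregates a finite grid of AR(1)-type predictors with exponential weights under the pinball (quantile) loss, one example of a convex Lipschitz loss of the kind covered by the paper. The function names, the coefficient grid, the inverse-temperature parameter lam, and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def pinball_loss(y, y_hat, tau=0.5):
    """Quantile (pinball) loss -- a convex, Lipschitz loss function."""
    u = y - y_hat
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def gibbs_aggregate(series, thetas, lam=10.0, tau=0.5):
    """Exponentially weighted (Gibbs) aggregation over a grid of predictors.

    Each candidate theta predicts X_t by theta * X_{t-1}; its weight decays
    exponentially with its cumulative empirical risk on the observed series.
    """
    x_past, x_next = series[:-1], series[1:]
    risks = np.array([pinball_loss(x_next, th * x_past, tau) for th in thetas])
    weights = np.exp(-lam * len(x_next) * risks)
    weights /= weights.sum()
    return float(np.dot(weights, thetas)), weights

# Toy usage on a synthetic AR(1) series (illustration only).
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.normal()
theta_hat, w = gibbs_aggregate(x, thetas=np.linspace(-0.9, 0.9, 37), tau=0.5)
print("aggregated AR coefficient:", round(theta_hat, 3))
```

The inverse temperature lam controls how sharply the aggregate concentrates on low-risk predictors; the paper's rates of convergence concern the risk of such aggregates in general, not this particular toy grid.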

Journal

Dependence Modeling, de Gruyter

Published: Jan 1, 2013
