Bayesian measures of model complexity and fit

Publisher: Wiley
Copyright: © 2002 Wiley Subscription Services
ISSN: 1369-7412
eISSN: 1467-9868
DOI: 10.1111/1467-9868.00353

Abstract

We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. Using an information theoretic argument we derive a measure pD for the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest. In general pD approximately corresponds to the trace of the product of Fisher's information and the posterior covariance, which in normal models is the trace of the ‘hat’ matrix projecting observations onto fitted values. Its properties in exponential families are explored. The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. Adding pD to the posterior mean deviance gives a deviance information criterion for comparing models, which is related to other information criteria and has an approximate decision theoretic justification. The procedure is illustrated in some examples, and comparisons are drawn with alternative Bayesian and classical proposals. Throughout it is emphasized that the quantities required are trivial to compute in a Markov chain Monte Carlo analysis.
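In symbols, writing $D(\theta) = -2 \log p(y \mid \theta)$ for the deviance (up to a standardizing term), the two quantities described in the abstract are

$$ p_D = \overline{D(\theta)} - D(\bar\theta), \qquad \mathrm{DIC} = \overline{D(\theta)} + p_D = D(\bar\theta) + 2\,p_D, $$

where $\overline{D(\theta)}$ is the posterior mean of the deviance and $\bar\theta$ the posterior mean of the parameters. Both need only the deviance evaluated along the sampler's path, which is why they are trivial to obtain from a Markov chain Monte Carlo analysis. The sketch below illustrates the computation for a hypothetical one-parameter normal model with known variance; all names (y, theta_draws, deviance) are illustrative choices of ours, not from the paper.

```python
import numpy as np

# Minimal sketch: computing pD and DIC from posterior draws, here for a
# one-parameter normal model with known variance sigma2. Illustrative only.
rng = np.random.default_rng(0)
n, sigma2 = 50, 1.0
y = rng.normal(loc=1.0, scale=np.sqrt(sigma2), size=n)  # simulated data

# Stand-in for MCMC output: exact draws from the conjugate posterior of
# theta under a N(0, 1) prior; in practice these come from a sampler.
post_var = 1.0 / (n / sigma2 + 1.0)
post_mean = post_var * (y.sum() / sigma2)
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)

def deviance(theta):
    """D(theta) = -2 log p(y | theta) for the N(theta, sigma2) likelihood."""
    return np.sum((y - theta) ** 2) / sigma2 + n * np.log(2 * np.pi * sigma2)

D_bar = np.mean([deviance(t) for t in theta_draws])  # posterior mean deviance
D_hat = deviance(theta_draws.mean())                 # deviance at posterior mean

p_D = D_bar - D_hat   # effective number of parameters
DIC = D_bar + p_D     # equivalently D_hat + 2 * p_D

print(f"pD  = {p_D:.2f}")   # close to 1 for this one-parameter model
print(f"DIC = {DIC:.2f}")   # lower values favour a model in comparisons
```

As a check on the sketch, the abstract's trace approximation (Fisher's information times the posterior covariance) gives n/(n + 1) ≈ 0.98 for this model, matching the pD the script reports.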

Journal

Journal of the Royal Statistical Society: Series B (Statistical Methodology), Wiley

Published: Jan 1, 2002
