Using EM to Obtain Asymptotic Variance-Covariance Matrices: The SEM Algorithm


Journal of the American Statistical Association, Volume 86, Issue 416 (Dec 1, 1991), 11 pages

Abstract

Abstract The expectation maximization (EM) algorithm is a popular, and often remarkably simple, method for maximum likelihood estimation in incomplete-data problems. One criticism of EM in practice is that asymptotic variance–covariance matrices for parameters (e.g., standard errors) are not automatic byproducts, as they are when using some other methods, such as Newton–Raphson. In this article we define and illustrate a procedure that obtains numerically stable asymptotic variance–covariance matrices using only the code for computing the complete-data variance–covariance matrix, the code for EM itself, and code for standard matrix operations. The basic idea is to use the fact that the rate of convergence of EM is governed by the fractions of missing information to find the increased variability due to missing information to add to the complete-data variance–covariance matrix. We call this supplemented EM algorithm the SEM algorithm. Theory and particular examples reinforce the conclusion that the SEM algorithm can be a practically important supplement to EM in many problems. SEM is especially useful in multiparameter problems where only a subset of the parameters are affected by missing information and in parallel computing environments. SEM can also be used as a tool for monitoring whether EM has converged to a (local) maximum.
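The supplementation idea in the abstract can be made concrete in the scalar case: the asymptotic variance is the complete-data variance divided by one minus EM's linear rate of convergence, V = I_oc⁻¹ / (1 − DM), where DM (the rate) equals the fraction of missing information. The sketch below, which is an illustration rather than the article's full multiparameter algorithm, estimates DM numerically from a single perturbed EM step on a classic multinomial toy problem often used to demonstrate EM (four observed cells with probabilities (1/2 + t/4, (1−t)/4, (1−t)/4, t/4), the first pooling two latent cells):

```python
import numpy as np

# Observed cell counts for the toy multinomial problem.  The first cell
# pools two latent "complete-data" cells with probabilities 1/2 and t/4.
y = np.array([125.0, 18.0, 20.0, 34.0])

def em_step(t):
    """One EM iteration: E-step splits y[0]; M-step maximizes the complete-data likelihood."""
    x2 = y[0] * (t / 4.0) / (0.5 + t / 4.0)         # E-step: expected count in the t/4 cell
    return (x2 + y[3]) / (x2 + y[1] + y[2] + y[3])  # M-step: binomial-type MLE

# Run EM to convergence (the rate is ~0.13, so 200 iterations is ample).
t_hat = 0.5
for _ in range(200):
    t_hat = em_step(t_hat)

# Complete-data observed information I_oc at the MLE, using the imputed count.
x2_hat = y[0] * (t_hat / 4.0) / (0.5 + t_hat / 4.0)
I_oc = (x2_hat + y[3]) / t_hat**2 + (y[1] + y[2]) / (1.0 - t_hat)**2

# SEM step: estimate the rate of convergence DM numerically by applying one
# EM step to a point perturbed slightly away from the MLE.  In this scalar
# problem DM is the fraction of missing information.
delta = 1e-4
DM = (em_step(t_hat + delta) - t_hat) / delta

# Supplement the complete-data variance for the missing information:
# the scalar case of V = I_oc^{-1} (I - DM)^{-1}.
V_sem = (1.0 / I_oc) / (1.0 - DM)

print(f"theta_hat={t_hat:.4f}  DM={DM:.3f}  V_sem={V_sem:.6f}")
```

As the abstract notes, the only ingredients beyond EM itself are the complete-data information (here a one-line formula) and elementary matrix operations; in the multiparameter case DM becomes a matrix estimated one row at a time by the same perturb-and-iterate device.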


Publisher
Taylor & Francis
Copyright
Copyright Taylor & Francis Group, LLC
ISSN
0162-1459
eISSN
1537-274X
DOI
10.1080/01621459.1991.10475130

Journal

Journal of the American Statistical Association (Taylor & Francis)

Published: Dec 1, 1991

Keywords: Bayesian inference; Convergence rate; EM algorithm; Incomplete data; Maximum likelihood estimation; Observed information
