Introduction

In fitting regression models, measurement error in predictors may result in biased estimates of coefficients and other incorrect inferences. This problem has received a tremendous amount of attention (see, e.g., Buonaccorsi; Carroll et al.; Gustafson; Fuller for general treatments), and a plethora of correction techniques have been proposed for both linear and non-linear models. Inference can be challenging for a variety of reasons. While analytical standard errors are available for some methods, these are approximate in nature and usually involve underlying assumptions. Relatedly, Wald-type confidence intervals based on these standard errors rely on approximate normality and unbiasedness of the estimator involved. An additional concern is that the corrected estimators are not unbiased; rather, most are either consistent, or approximately consistent, under suitable conditions. A data-driven way of assessing potential bias in either the corrected estimators or the naive estimators, which ignore the measurement error, is desirable.

One obvious tool for attacking these problems is the bootstrap, which has received limited attention in the measurement error context. The majority of applications of the bootstrap in measurement error problems have used simple with-replacement resampling of observations. This is used in STATA, one of the few
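As a minimal sketch of the with-replacement resampling mentioned above, the following simulates a hypothetical dataset (the data, error variances, and coefficients are illustrative assumptions, not from the article), fits the naive OLS slope that ignores measurement error in the predictor, and bootstraps (W, Y) pairs to obtain a standard error and percentile interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: true predictor X, error-prone
# observation W = X + U, and response Y (illustration only).
n = 200
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.5, n)           # additive measurement error
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

def naive_slope(w, y):
    """Naive OLS slope, treating the error-prone W as the true predictor."""
    return np.polyfit(w, y, 1)[0]

# Simple with-replacement bootstrap of the (W, Y) observations
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)           # resample observation indices
    boot[b] = naive_slope(w[idx], y[idx])

est = naive_slope(w, y)
se = boot.std(ddof=1)                     # bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])     # percentile confidence interval
print(f"naive slope {est:.3f}, bootstrap SE {se:.3f}, "
      f"95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Note that in this setup the naive slope is attenuated toward zero relative to the true value of 2, while the bootstrap interval only captures sampling variability around the naive estimate, not that bias, which illustrates why a separate data-driven bias assessment is desirable.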
Biometrics – Wiley
Published: Jan 1, 2018