TY - JOUR
AU - Levy, Jonathan
AB - In the world of targeted learning, the cross-validated targeted maximum likelihood estimator, CV-TMLE [Zheng:2010aa], has a distinct advantage over TMLE [Laan:2006aa]: one fewer condition is required of CV-TMLE in order to achieve asymptotic efficiency in nonparametric or semiparametric settings. CV-TMLE as originally formulated consists of averaging usually 10 (for 10-fold cross-validation) parameter estimates, each evaluated on a validation set separate from the data on which the initial fit was trained. The targeting step is usually performed as a pooled regression over all validation folds, but within each fold any means as well as the parameter estimate are evaluated separately. One nice property of CV-TMLE is that averaging 10 plug-in estimates preserves the plug-in quality of respecting the natural parameter bounds. Our adjustment of this procedure also preserves the plug-in characteristic and avoids the Donsker condition. The advantage of our procedure is that, once all the validation-set initial predictions have been formed, the targeting step is implemented exactly as in a regular TMLE. In short, we stack the validation-set predictions and proceed as if we had a regular TMLE; the result is not necessarily a plug-in estimator on each fold, but overall it performs the same asymptotically and might have a slight advantage, a subject for future research. In the case of the average treatment effect, the treatment-specific mean, and the mean outcome under a stochastic intervention, the procedure coincides exactly with the originally formulated CV-TMLE with a pooled targeting regression.
TI - An Easy Implementation of CV-TMLE
JF - Statistics
DA - 2018-11-12
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/an-easy-implementation-of-cv-tmle-kyORv4H9UQ
VL - 2018
IS - 1811
DP - DeepDyve
ER -
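
A minimal sketch of the stacked targeting step described in the abstract, for the treatment-specific mean E[Y^1] with a binary outcome. This is not code from the paper: the data are simulated, all names (QbarA, Qbar1, g1) are assumptions for illustration, the initial fits use plain logistic regressions rather than the data-adaptive learners one would use in practice, and 10-fold cross-validation is assumed.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm

def logit(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

# Simulated data: covariates W, binary treatment A, binary outcome Y.
rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1]))))
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * A + 0.5 * W[:, 0] - 0.2 * W[:, 1]))))

V = 10
QbarA = np.empty(n)  # out-of-fold initial predictions of E[Y | A, W] at observed A
Qbar1 = np.empty(n)  # out-of-fold initial predictions of E[Y | A = 1, W]
g1 = np.empty(n)     # out-of-fold propensity scores P(A = 1 | W)

# Step 1: fit the initial estimators on each training set and stack the
# validation-set predictions, so every observation gets an out-of-fold prediction.
XA = np.column_stack([A, W])
for train_idx, val_idx in KFold(n_splits=V, shuffle=True, random_state=0).split(W):
    q_fit = LogisticRegression().fit(XA[train_idx], Y[train_idx])
    g_fit = LogisticRegression().fit(W[train_idx], A[train_idx])
    QbarA[val_idx] = q_fit.predict_proba(XA[val_idx])[:, 1]
    Qbar1[val_idx] = q_fit.predict_proba(
        np.column_stack([np.ones(len(val_idx)), W[val_idx]]))[:, 1]
    g1[val_idx] = g_fit.predict_proba(W[val_idx])[:, 1]

# Step 2: a single targeting step on the stacked predictions, exactly as in a
# regular TMLE: logistic fluctuation with clever covariate H = A / g1 and
# offset logit(QbarA).
g1 = np.clip(g1, 0.01, 1.0)   # bound the propensity away from 0
H = (A / g1).reshape(-1, 1)
fluct = sm.GLM(Y, H, family=sm.families.Binomial(), offset=logit(QbarA)).fit()
epsilon = fluct.params[0]

# Step 3: update the stacked predictions at A = 1 and take the plug-in mean,
# which respects the natural parameter bounds.
Qbar1_star = 1 / (1 + np.exp(-(logit(Qbar1) + epsilon / g1)))
print("stacked CV-TMLE estimate of E[Y^1]:", Qbar1_star.mean())

For the treatment-specific mean shown here (and likewise for the average treatment effect), the abstract notes that this stacked targeting coincides with the original CV-TMLE formulation based on a pooled targeting regression over the validation folds.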