Instrumental variables estimation under a structural Cox model

Summary

Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposures and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data.

1. Introduction

In health studies, the scientific question is often of a causal nature, for example “Does the exposure increase the risk of cancer?”. The interest is not whether the prevalence of cancer differs between exposed and unexposed individuals, but rather whether exposed individuals would likely have remained cancer-free for longer had they been unexposed. Randomized trials create comparable or exchangeable groups of exposed and unexposed individuals, and thereby deliver a unique source of data for addressing such causal questions. However, because such trials are often infeasible or even impossible for ethical, financial, or practical reasons, the epidemiologist is commonly left with observational data, where the exchangeability of exposed and unexposed individuals cannot be guaranteed.

Instrumental variables (IV) analyses attempt to re-establish the balance or exchangeability brought about by randomization, by making use of IVs. These are variables that are independent of any unmeasured confounders of treatment and the outcome, conditional on any observed confounders (assumption i), but are associated with the exposure of interest (assumption ii), so that one can indirectly learn about the exposure’s effect from the effect of the IV. Such indirect learning is possible when the IV affects the outcome only through the level of the exposure (assumption iii, labeled the “exclusion restriction”), and certain modeling assumptions are satisfied (see later). An example of an IV could be a genetic variant that alters an exposure of interest (e.g., smoking behavior), as considered in so-called Mendelian randomization (MR) studies. Here, the random inheritance of genes adds to the plausibility of (i), although the credibility of the exclusion restriction (iii) needs careful attention in applications. The rapid increase in the number of published papers using MR emphasizes the need for properly developed IV methods for various types of outcome.

For continuous exposures and outcomes that lend themselves to linear modeling, the IV methodology has been well developed. For survival outcomes, this methodology has been extended to accelerated failure time models (Robins and Tsiatis, 1991) and additive Aalen models (Tchetgen and others, 2015; Martinussen and others, 2017). Non-collapsibility of the hazard ratio has, however, turned out to be a major stumbling block in extending these methodologies to the more popular Cox model. The majority of IV analyses of time-to-event endpoints have therefore become dominated by heuristic approaches.
This is a concern because naive application of procedures known from the simple IV models, such as the two-stage residual inclusion (2SRI) and two-stage predictor substitution (2SPS) estimators under a Cox model, has been shown to lead to bias (Wan and others, 2015). More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context. For instance, in the context of non-compliance in randomized trials, Loeys and Goetghebeur (2003) and Loeys and others (2005) considered a setting with a dichotomous instrument, a categorical exposure, and exposure only being accessible in the experimental arm (i.e., at one level of the IV). MacKenzie and others (2014) studied IV estimation in the Cox model under the biologically implausible assumption that confounding operates additively on the hazard (Tchetgen and others, 2015). Other methods are based on principal stratification; see Frangakis and Rubin (1999), Baker (1998), Nie and others (2011) and MacKenzie and others (2016). However, using principal stratification implies a different estimand than the one pursued in this paper.

In view of the above limitations, the aim of this article is to reconsider IV estimation under the structural Cox model for arbitrary IVs and exposures. We propose a novel class of estimators in Section 2 and derive their asymptotic properties, enabling the construction of confidence intervals. In Section 3.1, we use our proposal to analyze data from a randomized trial with non-compliance. In Section 3.2, we consider a MR study to evaluate the effect of vitamin D status on all-cause mortality. The exposure in that study is continuous, a case that is also covered by our proposed method. The properties of the proposed estimator are evaluated using simulated data in Section 4. Finally, in Section 5, the strengths and weaknesses of the method are discussed, as well as pointers to further developments.

2. IV estimation of the conditional causal hazard ratio

2.1. Structural model

Throughout, our interest lies in the effect of some exposure or treatment $$X$$ on a time-to-event endpoint $$T$$. We will allow for the possibility that differently exposed subgroups may fail to be exchangeable, as would be the case in most observational studies, but will assume that data is available for each individual on an IV $$G$$. For instance, we may be interested in the effect of received treatment $$X$$ on time-to-death $$T$$ in a randomized trial where, as often happens, not all individuals comply with the assigned treatment. Here, in spite of randomization, individuals who do and do not receive treatment may lack exchangeability because of well-known phenomena such as the healthy user effect, or the sick stopper effect. However, randomization $$G$$ delivers an IV, so long as it does not affect time-to-death other than by influencing the received treatment (as would be plausible in many double-blind experiments).

To define the exposure effect, let $$T^x$$ be the potential survival time had the treatment been manipulated or set to $$x$$ by some intervention (where e.g. $$x$$ and $$X$$ may be coded as 1 or 0 to denote treatment or no treatment, respectively). Throughout, we make the so-called consistency assumption that the potential survival time $$T^x$$ equals the observed survival time for individuals with exposure $$x$$. The structural Cox model, which we will study in this article, states that \begin{align} \frac{\lambda_{T^x}(t|X=x,G)}{\lambda_{T^0}(t|X=x,G)}=\exp{(\psi x)} \quad \textrm{ for all } x. \end{align} (2.1)
Here, $$\lambda_{T^0}(t|X=x,G)$$ encodes the hazard at time $$t$$ for individuals with $$X=x$$ and $$G$$, had they received some reference level $$x=0$$ (which will usually refer to “no exposure”). Note that model (2.1) explicitly contrasts the same group—individuals with $$X=x$$ and $$G$$—twice, once with the observed exposure, and once with zero exposure. It thereby delivers a measure of the causal effect of treatment among the treated, unlike the hazard ratio in a more standard Cox model, \[ \frac{\lambda_{T^x}(t|X=x,G)}{\lambda_{T^0}(t|X=0,G)}=\exp{(\tilde{\psi} x)} \quad \textrm{ for all} \ x, \] which contrasts differently exposed subgroups and therefore lacks a causal interpretation. In particular, $$\tilde{\psi}$$ may therefore differ from $$\psi$$. This can also be seen from \begin{align} \exp{(\tilde{\psi} x)} =\frac{\lambda_{T^0}(t|X=x,G)}{\lambda_{T^0}(t|X=0,G)}\exp{(\psi x)}, \end{align} (2.2) where the factor $$\lambda_{T^0}(t|X=x,G)/\lambda_{T^0}(t|X=0,G)$$ expresses a selection mechanism: the fact that individuals with $$X=x$$ and $$X=0$$ might experience different hazards, even if both were unexposed, because these individuals might not be comparable.

Note that the denominator in (2.1) is defined using the counterfactual hazard for $$T^0$$. The interpretation of $$\psi$$ thus relates to an intervention where the exposure is set to the reference level $$0$$. Any level of exposure could be used here, and the reference level should be chosen to yield a meaningful comparison. Note also that the structural model does not allow the causal effect to be modified by $$G$$. This No Effect Modification (NEM) assumption is not guaranteed to hold and is partly untestable. This is so even in the noncompliance example where $$G$$ refers to randomization itself, because individuals with a given exposure may fail to be exchangeable between different randomized arms. The NEM assumption can be relaxed by formulating a structural model as in (2.1) where $$G$$ is included in a term in the exponential on the right-hand side. However, this requires that the instrument $$G$$ has more than two levels, or that the structural model includes baseline covariates that do not interact with the exposure or instrument in their effect on the outcome.

2.2. Estimation

Suppose that the study design provides i.i.d. realizations of $$(G_i, X_i, T_i^*, \Delta_i)_{i=1, \ldots, n}$$. Here, $$G$$ is a possibly multivariate IV, $$C$$ is the censoring time, $$T^*=\min (T, C)$$ is the observed time and $$\Delta = I(T\leq T^*)$$ is the event indicator. Throughout, we will assume that $$C {\perp\!\!\!\!\perp} T|G, X$$. Let $$N(t)=I(T^*\le t,\Delta=1)$$ and $$Y(t)=I(t\le T^*)$$ be the observed counting process and at-risk indicator, respectively. Let $$\nu$$ be some finite time point such that we observe $$N(t)$$ and $$Y(t)$$ in the time period $$[0,\nu]$$. Let the conditional survival probabilities of $$T$$ and $$T^0$$ given the exposure and instrument be denoted $$P\left(T>t\vert X, G\right)$$ and $$P\left(T^0>t\vert X, G\right)$$ for all $$t>0$$. The structural Cox model can be equivalently rephrased in terms of these probabilities as $$ f\left\{P\left(T^x>t|X=x,G\right)\right\}=f\left\{P\left (T^0>t|X=x,G\right)\right\}+\psi x $$ for all $$t>0$$, with $$f(v)=\log\{-\log(v)\}$$. This gives the relationship \begin{align} P\left(T^0>t|X=x,G\right)=P\left(T^x>t|X=x,G\right)^{\exp{(-\psi x)}}, \end{align} (2.3) for all $$t>0$$.
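To make the step from the rephrased model to (2.3) explicit, note that $$f$$ maps a survival probability to the log cumulative hazard, so the rephrased model states that the log cumulative hazards differ by $$\psi x$$; exponentiating gives
\begin{align*}
-\log P\left(T^0>t|X=x,G\right) &= \exp{(-\psi x)}\left\{-\log P\left(T^x>t|X=x,G\right)\right\},
\end{align*}
and exponentiating once more, i.e. raising $$P\left(T^x>t|X=x,G\right)$$ to the power $$\exp{(-\psi x)}$$, is exactly (2.3).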
In what follows, we will show how this identity can be used to enable estimation of $$\psi$$. The IV assumptions (i) and (iii) imply that $$T^0$$ and $$G$$ must be independent. By this key independence, we must have at each time $$t$$ that \begin{align*} E\left\{h(G)P\left(T^0>t \vert X, G\right)\right\} &= P\left(T^0>t\right)E\left\{h(G)\right\} =0, \end{align*} where $$h$$ is a function of $$G$$ such that $$E\left\{h(G)\right\} = 0$$, for example $$h(G)=G-E\left (G\right )$$. Together with the identity (2.3), this implies \begin{align*} E\left \{h(G)P\left(T>t \vert X, G\right)^{\exp(-\psi X)} \right\} =0. \end{align*} The parameter of interest can now be found by choosing it so that the independence of $$T^0$$ and $$G$$, or more precisely the above equality, holds empirically in the sample. That is, for each fixed time $$t>0$$, one can estimate $$\psi$$ as the solution to \begin{equation} 0 = \sum_{i=1 }^n \left[ h(G_i)- E\left\{h(G)\right\}\right] P(T_i>t\vert X_i, G_i)^{\exp{(-\psi X_i)}}. \end{equation} (2.4) The solution to this equation delivers an infeasible estimator because the population expectations $$E\left\{h(G)\right\}$$ and $$P(T>t\vert X, G)$$ are unknown. In practice, we will therefore substitute $$E\left\{h(G)\right\}$$ by the corresponding sample average. In Sections 2.3 and 2.5, we discuss how to model the survival probabilities $$P(T>t\vert X, G)$$ and how to choose the time point $$t$$, respectively.

2.3. Association model

The quantities $$P(T_i>t\vert X_i, G_i)$$ in (2.4) can either be estimated by modeling the survival function (or the survival probability at time $$t$$) directly, or via additive or multiplicative models for the hazard function. The resulting model will be referred to as the association model. For instance, we could use the Cox proportional hazards model, where the hazard for $$T$$ given $$X=x, G=g$$ is modeled as \begin{align} \lambda(t\vert X=x, G=g)=\lambda_0(t)\exp{\{m_a(x, g, \beta_a)\}} \end{align} (2.5) for some baseline hazard $$\lambda_0(t)$$ and a known function $$m_a$$. A simple interaction model is $$m_a(x, g, \beta_a) = \beta_Xx+\beta_Gg+\beta_{XG}xg$$ for $$\beta_a=(\beta_X, \beta_G, \beta_{XG})$$, which corresponds to the survival function \begin{align*} S_\beta(t\vert X=x, G=g) &= \exp\left [ -\Lambda_0(t)\exp{\{m_a(x, g, \beta_a)\}} \right], \end{align*} where $$\Lambda_0(t) = \int_0^t\lambda_0(s){\rm d}s$$. In what follows, we will refer to the parameters of the association model as $$\beta_t$$, e.g. $$\beta_t=\{\beta_a,\Lambda_0(t)\}^T$$, where the index $$t$$ refers to the considered time point. We will furthermore use $$S_\beta(t\vert X, G)$$ to denote the survival function for $$T$$ at time $$t$$ conditional on $$X$$ and $$G$$. Upon substituting $$P(T_i>t\vert X_i, G_i)$$ in (2.4) by $$S_\beta(t\vert X_i, G_i)$$ with $$\beta_t$$ substituted by a consistent estimator $$\hat{\beta}_t$$, and solving for $$\psi$$, we obtain an estimator which we will denote $$\hat{\psi}_t$$.
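To fix ideas, the following R sketch implements this two-step recipe for a scalar instrument with $$h(G)=G$$: fit a working Cox association model, predict the survival probabilities at a chosen time, and solve (2.4) for $$\psi$$. This is a minimal illustration, not the authors' released code; the data frame and column names are hypothetical, and the simulated data below contain no confounding, so they serve only to show the mechanics.

```r
library(survival)

# Solve the estimating equation (2.4) at a fixed time t0, using a Cox
# association model with exposure-instrument interaction as in (2.5).
estimate_psi <- function(data, t0) {
  fit <- coxph(Surv(time, status) ~ X * G, data = data)
  sf <- survfit(fit, newdata = data)
  # Predicted S_beta(t0 | X_i, G_i), one value per subject
  S_t0 <- as.numeric(summary(sf, times = t0, extend = TRUE)$surv)
  # h(G_i) centred by the sample average, as described in the text
  hG <- data$G - mean(data$G)
  U <- function(psi) sum(hG * S_t0^exp(-psi * data$X))
  uniroot(U, interval = c(-5, 5), extendInt = "yes")$root
}

# Hypothetical usage with simulated (unconfounded) data:
set.seed(1)
n <- 500
G <- rbinom(n, 1, 0.5)
X <- rbinom(n, 1, plogis(-0.5 + 1.5 * G))
Tlat <- rexp(n, rate = 0.05 * exp(0.25 * X))
C <- runif(n, 0, 60)
dat <- data.frame(time = pmin(Tlat, C), status = as.numeric(Tlat <= C),
                  X = X, G = G)
estimate_psi(dat, t0 = 40)   # should be close to the true psi = 0.25
```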
In practice, one does not have complete freedom to choose the association model. The reason is that situations may arise in which no data generating mechanism exists where $$G$$ is a valid IV and, at the same time, both the association model and the structural model hold (see Vansteelandt and others, 2011; Robins and Rotnitzky, 2004). This partly explains why our proposed estimating procedure merely makes use of the model restrictions at a single time $$t$$, rather than at all times, and why we recommend choosing a sufficiently flexible association model as a general guideline. To understand this problem better, note that for dichotomous $$G$$ and $$X$$, where we define $$p_{kl}= P(X=k\vert G=l)$$, the proportional hazards model (2.5) for all $$t$$ is incongenial with the IV assumptions and the structural model at all $$t$$, since there are no parameters $$(\lambda_0(t), \beta_X, \beta_G, \beta_{XG})$$ such that the following two expressions are equal \begin{align*} P(T^0>t|G=0)=\; & p_{10}\exp\left\{-\Lambda_0(t)\exp{(\beta_X-\psi)}\right\} +p_{00}\exp\left\{-\Lambda_0(t)\right\} \\ P(T^0>t|G=1)=\; & p_{11}\exp\left\{-\Lambda_0(t)\exp{(\beta_X-\psi+\beta_G+\beta_{XG})}\right\} + p_{01}\exp\left\{-\Lambda_0(t)\exp{(\beta_G)}\right\} \end{align*} for all times $$t$$ (besides the trivial case where $$\beta_X=\psi$$ and $$\beta_G=\beta_{XG}=0$$). In spite of this, the IV assumptions imply that both probabilities must be equal at each fixed time $$t$$. It is worth noting, however, that in this case with $$G$$ and $$X$$ both being dichotomous, we need not assume any association model, as we can estimate $$P(T>t| X, G)$$ using a fully non-parametric approach; for instance, by applying the four Kaplan–Meier estimators (one for each of the four combinations of $$G$$ and $$X$$), or using the non-parametric Aalen additive hazards model with $$G$$, $$X$$ and $$GX$$ as covariates.

2.4. Large sample results

Let $$\theta_t=[E\{h(G)\}, \beta_t]^T$$ denote the vector of nuisance parameters. Assume that we have a RAL estimator $$\hat\theta_t$$ of $$\theta_t$$ (Tsiatis, 2006), so that we have the representation \begin{equation} \sqrt{n} \left(\hat\theta_t-\theta_t\right)=\frac{1}{\sqrt n}\sum_{i=1}^n\epsilon_i^{\theta_t}+o_p(1), \end{equation} (2.6) where the terms $$\epsilon_i^{\theta_t}$$, $$i=1,\ldots,n$$, are i.i.d. with zero mean. Here, $$\epsilon_i^{\theta_t}$$ is called the influence function of $$\hat\theta_t$$, and (2.6) is referred to as the i.i.d. decomposition of the estimator $$\hat\theta_t$$. It then follows from standard theory for M-estimators that \begin{align*} \sqrt{n}\left(\hat\psi_t - \psi\right)&=-\left\{\frac{1}{ n}\sum_{i=1}^n D_{\psi}u_i\left(t, \psi, \theta_t\right)\right\}^{-1}\frac{1}{\sqrt n}\sum_{i=1}^nU_i\left(t, \psi, \theta_t\right)+ o_p(1), \end{align*} where $$D_{\psi}$$ refers to the gradient w.r.t. $$\psi$$. Here, \[ U_i\left(t, \psi, \theta_t\right)= u_i(t, \psi, \theta_t) + E\left\{D_\theta u(t, \psi, \theta_t)\right\} \epsilon_i^{\theta_t}(t) \] where \[u(t, \psi, \theta_t)=\left[h(G)- E\left\{h(G)\right\}\right] S_{\beta}(t|X,G)^{\exp{(-\psi X)}}\] and with $$D_{\theta}$$ referring to the gradient w.r.t. $$\theta_t$$. The asymptotic variance of $$\hat{\psi}_t$$ is then given as 1 over $$n$$ times the empirical variance of \[\left\{\frac{1}{ n}\sum_{i=1}^n D_{\psi}u_i\left(t, \psi, \theta_t\right)\right\}^{-1}U_i\left(t, \psi, \theta_t\right)\!,\] with $$\psi$$ substituted by $$\hat{\psi}_t$$ and $$\theta_t$$ by $$\hat{\theta}_t$$.
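The analytic variance above requires the influence functions of the nuisance estimators, which are spelled out below for the Cox association model. As a pragmatic alternative for a quick analysis, the sampling variability of $$\hat\psi_t$$ can also be approximated by a nonparametric bootstrap over individuals; the sketch below does this by reusing the hypothetical estimate_psi() function from above, and is not the variance estimator derived in this article.

```r
# Bootstrap approximation to Var(psi_hat_t): resample subjects with
# replacement and re-run the full two-step procedure each time.
bootstrap_var <- function(data, t0, B = 500) {
  psi_b <- replicate(B, {
    idx <- sample(nrow(data), replace = TRUE)
    tryCatch(estimate_psi(data[idx, ], t0), error = function(e) NA_real_)
  })
  var(psi_b, na.rm = TRUE)
}

# Wald-type 95% confidence interval at t0 = 40 (hypothetical data `dat`):
psi_hat <- estimate_psi(dat, 40)
se <- sqrt(bootstrap_var(dat, 40, B = 200))
psi_hat + c(-1, 1) * 1.96 * se
```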
When $$h(\cdot)$$ is the identity function, then $$ \epsilon_i^{\theta_t} (t) = \begin{pmatrix} G_i-\mu_G\\ \epsilon_i^{\beta_t} (t) \end{pmatrix}\!, $$ where $$\epsilon^{\beta_t}(t)$$ is the i.i.d. representation of the estimator of the parameters defining the association model. For instance, consider the Cox model presented in (2.5) for $$\beta_a=(\beta_X, \beta_G,\beta_{XG})^T$$. In this case, we can split $$\epsilon^{\beta_t}_i(t)$$ into $$\epsilon^{\beta_t}_i(t)=\left(\epsilon_i^{\Lambda_0}(t), \epsilon_i^{\beta_a}\right)^T$$. Specifically, \begin{align*} \epsilon_i^{\beta_a}&=-I(\beta_a)^{-1}\int_0^{\nu}\{Z_i-e(t,\beta_a)\}{\rm d}M_i(t)\\ \epsilon_i^{\Lambda_0}(t)&=\int_0^{t}s_0(u,\beta_a)^{-1}{\rm d}M_i(u)-\left\{\int_0^{t}e(u,\beta_a){\rm d}\Lambda_0(u)\right\} \epsilon_i^{\beta_a}. \end{align*} In the latter display, $$Z_i=(X_i,G_i,X_iG_i)$$, and $$e(t,\beta_a)$$ and $$s_0(t,\beta_a)$$ are the limits in probability of $$ E(t,\beta_a)=\frac{S_1(t,\beta_a)}{S_0(t,\beta_a)} $$ and $$S_0(t,\beta_a)$$, respectively, with $$ S_0(t,\beta_a)=n^{-1}\sum_{i=1}^nY_i(t)\exp{(\beta_a^TZ_i)}, \quad S_1(t,\beta_a)=n^{-1}\sum_{i=1}^n Y_i(t)Z_i^T\exp{(\beta_a^TZ_i)}. $$ Further, $$-I(\beta_a)$$ is the limit in probability of minus the derivative of the Cox score (for the association model) evaluated at $$\beta_a$$, and $$M_i(t)$$ is the counting process martingale coming from the Cox association model: $$ M_i(t)=N_i(t)-\int_0^tY_i(s)\exp{(\beta_a^TZ_i)}{\rm d}\Lambda_0(s). $$ It is worth noting that $$\epsilon^{\beta_t}_i(t)$$ for the Cox association model can easily be extracted in R using the cox.aalen function from the R-package timereg. The explicit expression for $$\epsilon^{\beta_t}_i(t)$$ depends on the assumed association model and on how the parameters of the model are estimated. In the case where both $$G$$ and $$X$$ are dichotomous, we can estimate $$P(T>t| X, G)$$ using a fully non-parametric Aalen additive hazards model with $$G$$, $$X$$ and $$GX$$ as covariates. We give the explicit expression for $$\epsilon^{\beta_t}_i(t)$$ for that case in Appendix A, using Aalen's least squares estimator.

2.5. Optimal time for estimation

A drawback of the procedure proposed so far is that it may deliver a different estimator $$\hat{\psi}_t$$ for each considered time $$t$$. To avoid cherry-picking, we will show how an objective choice of $$t$$, one that delivers reasonable precision, can be made. Ideally, we would evaluate the estimator $$\hat{\psi}_t$$ at the time \begin{align*} \tau=\textrm{argmin}_{t>0}\textrm{Var}\left\{\hat\psi_t\right\}\!, \end{align*} which delivers an estimator with minimal variance. Unfortunately, the value of $$\tau$$ depends on the data-generating distribution, and is therefore unknown. Interestingly, however, estimating it as the minimizer of the empirical variance, i.e. \begin{align*} \hat\tau = \textrm{argmin}_{t>0}\widehat{\textrm{Var}}\left\{\hat\psi_t\right\}\!, \end{align*} is justified in the sense that $$\hat\psi_{\hat{\tau}}$$ attains the same asymptotic variance as $$\hat\psi_{\tau}$$. This follows from the fact that $$\hat{\psi}_t$$ consistently estimates $$\psi$$ for all considered $$t>0$$, provided that $$\hat\tau$$ converges to $$\tau$$ at a rate faster than $$n^{1/4}$$ (Newey and McFadden, 1994).
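A simple way to operationalize this choice is to evaluate $$\hat\psi_t$$ and its estimated variance over a grid of candidate times and select the minimizer. The hedged sketch below reuses the hypothetical estimate_psi() and bootstrap_var() functions from above; note that the article uses the asymptotic variance estimator of Section 2.4, for which the bootstrap variance is merely a stand-in, and the grid endpoints are illustrative choices.

```r
# Scan a grid of times, estimate Var(psi_hat_t) at each, and return the
# estimate evaluated at the variance-minimising time tau_hat.
select_optimal_time <- function(data, grid, B = 200) {
  vars <- sapply(grid, function(t0) bootstrap_var(data, t0, B = B))
  tau_hat <- grid[which.min(vars)]
  list(tau = tau_hat,
       psi = estimate_psi(data, tau_hat),
       var = min(vars))
}

# e.g. candidate times between the 20% and 95% quantiles of event times:
grid <- quantile(dat$time[dat$status == 1], probs = seq(0.2, 0.95, 0.05))
select_optimal_time(dat, grid)
```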
3. Applications

We consider two real data sets to illustrate our method. First, we re-analyze the LIVER data also considered by Loeys and Goetghebeur (2003), where there is non-compliance. In a second application, we investigate whether vitamin D deficiency may be an important risk factor for all-cause mortality. In this latter application, we deal with a continuous exposure variable. We used the filaggrin genotype as a proxy for vitamin D status, following the principles of MR (Davey-Smith and Ebrahim, 2003).

3.1. LIVER data

This section compares an analysis of the LIVER data from ECOG-trial E9288 using our proposed method with the results presented in Loeys and Goetghebeur (2003). The data are based on 109 individuals with colorectal cancer, who were scheduled for surgical resection due to liver metastases between August 1990 and January 1997. The purpose of the study was to evaluate a new treatment whereby the patient, while undergoing surgery, was implanted with an arterial device (AD) to administer chemotherapy. By presurgical randomization, 56 patients were assigned to surgical resection alone (standard treatment), and 53 patients were assigned to the new intervention, thus also receiving chemotherapy. While there was perfect compliance in the standard group, because the new treatment was only available to those randomized to it, there was non-compliance in the intervention arm: here, 10 patients did not receive the AD, for unreported reasons presumably related to their prognosis. In the standard group, 30 patients did not survive; in the intervention group, 24 did not survive after receiving the AD, and 9 did not survive after receiving the standard treatment.

To analyze the data, we estimated the cumulative hazards for the three groups defined by randomization and compliance status using a fully non-parametric Aalen additive hazards model. This means that there is no problem of incongeniality for this specific application. The corresponding survival curves are shown in Figure 1. The estimation of the causal parameter was carried out using the predicted survival probabilities at $$40$$ months, and the hazard ratio $$\exp{(\psi)}$$ is estimated to be $$1.28\, (0.61, 2.71)$$. The optimal time is estimated to be $$42.2$$ months, leading to a causal hazard ratio of $$1.38\, (0.67, 2.84)$$. This is comparable with the estimated hazard ratio of $$1.43\, (0.7, 2.96)$$ reported by Loeys and Goetghebeur. That the causal effect is not statistically significant is in accordance with the instrument not being significant in an intention-to-treat (ITT) analysis ($$p=0.57$$). It is worth noticing that the above analysis does not invoke the NEM assumption, since the experimental AD-treatment was only available in one randomization group.

Fig. 1. The estimated survival curves for the LIVER data.
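Because the association model here is fully non-parametric, the same analysis can also be sketched with stratified Kaplan–Meier curves in place of a fitted hazard model. The following hedged R sketch mirrors the procedure on a hypothetical data frame `liver` with columns time, status, G (randomization) and X (treatment received); it is a schematic, not the actual ECOG E9288 analysis, and the stratum labels assume survfit's "G=0, X=0" naming convention.

```r
library(survival)

estimate_psi_np <- function(data, t0) {
  # Stratified Kaplan-Meier fits act as a fully non-parametric
  # association model, so no congeniality issue can arise.
  km <- survfit(Surv(time, status) ~ G + X, data = data)
  sm <- summary(km, times = t0, extend = TRUE)
  S_by_stratum <- setNames(sm$surv, as.character(sm$strata))
  # Look up each subject's predicted S(t0 | G_i, X_i) by stratum label
  S_t0 <- S_by_stratum[paste0("G=", data$G, ", X=", data$X)]
  hG <- data$G - mean(data$G)
  U <- function(psi) sum(hG * S_t0^exp(-psi * data$X))
  exp(uniroot(U, c(-5, 5), extendInt = "yes")$root)  # causal hazard ratio
}
# estimate_psi_np(liver, t0 = 40)
```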
3.2. Vitamin D data

Vitamin D is a fat-soluble vitamin retained from the diet and dietary supplements and produced in sun-exposed skin. Vitamin D deficiency is common and has been linked with a number of common diseases such as cardiovascular disease, diabetes, and cancer. Here, we will be interested in evaluating the effect on overall survival. Unfortunately, vitamin D status is also associated with several behavioral and environmental factors, potentially giving rise to biased estimators when using standard statistical analyses. Recently, mutations in the filaggrin gene have been shown to be associated with a higher vitamin D status, supposedly through an increased UV sensitivity of keratinocytes (see Skaaby and others, 2013, and references therein). Filaggrin is an important component of the skin barrier function. The two mutations, 2282del4 and R501X, are by far the most frequent in white Caucasians, but the R2447X mutation also occurs.

To assess the potential effect of vitamin D status on overall survival, we used filaggrin genotype as a proxy measure for vitamin D status according to the principles of MR (Davey-Smith and Ebrahim, 2003). The data source is the population-based study Monica10, and vitamin D is measured as serum 25-OH-D (nmol/L). The Monica I study was conducted in 1982–1984 and included examinations of 3785 individuals of Danish origin, recruited from the Danish Central Personal Register as a random sample of the population. The 10-year follow-up study, Monica10, included 2656 individuals aged 40–71 years and was carried out in 1993–1994; for further details, see Skaaby and others (2013). The data we analyze consist of 2571 individuals, as a few individuals in the original data have missing information on filaggrin and/or vitamin D status. Our instrument $$G$$ is the indicator of filaggrin mutation, and we used vitamin D (nmol/L) as a continuous variable centered at 20 and scaled by 20, hence $$X=$$ (VitD $$-$$ 20)/20. A vitamin D level less than 30 nmol/L indicates vitamin D deficiency. As age is clearly an important predictor of mortality, we applied our method extended to the case with additional covariates, here age in four categories: (39,45), (45,55), (55,65), (65,72). The instrument is significantly associated with the exposure: in a linear model where we adjust for the age variable, the corresponding P-value is 0.006. However, the instrument is not strong, as the associated F-test statistic is only 7.44. The associational Cox model was rejected for the last two age categories using the Lin and others (1993) goodness-of-fit procedure. Hence, we used a Cox association model with separate baseline hazards for these two age categories, and proportional effects of the second age category, the instrument, and the exposure. The optimal time is estimated to be $$16$$ years, leading to a causal hazard ratio of $$2.01$$ with corresponding 95% confidence interval $$(0.95, 4.26)$$. Hence, lowering the vitamin D level from 40 to 20 roughly doubles the hazard. This is not significant, however, as indicated by the 95% confidence interval. This is in line with the simple ITT analysis, where the P-value of the instrument in a Cox model, adjusting for the age variable as explained above, is 0.09.

We also carried out a naive application of the procedures known from the simple IV models, namely the 2SPS and 2SRI estimation procedures. In the first stage, we obtained predicted values $$M$$ of $$X$$, using the instrument and the categorical age variable as predictors; the residuals $$Re$$ from this analysis were also stored. In the second stage, a Cox regression analysis was applied with $$M$$ and the categorical age variable; this is the 2SPS method. We also ran a Cox regression analysis with $$X$$, $$Re$$, and the age variable, corresponding to the 2SRI method, which is also called the control function method. Estimates of the exposure effect were obtained, and 95% confidence intervals were computed using the nonparametric bootstrap. The 2SPS resulted in an estimated hazard ratio of $$2.83$$ with corresponding 95% confidence interval $$(0.83, 285)$$, and the 2SRI resulted in an estimated hazard ratio of $$3.04$$ with corresponding 95% confidence interval $$(0.89, 300)$$, thus pointing in the same direction as our structural Cox model analysis, but with much wider confidence intervals. However, the properties of both the 2SPS and the 2SRI in this setting are unclear.
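For concreteness, the two naive two-stage procedures just described can be sketched as follows; the variable names (vitd_x, filaggrin, agecat) are hypothetical stand-ins for the Monica10 variables, and, as noted above and in Wan and others (2015), neither estimator is generally consistent for the causal hazard ratio under a Cox model.

```r
library(survival)

two_stage <- function(data) {
  # Stage 1: regress the exposure on the instrument and covariates
  s1 <- lm(vitd_x ~ filaggrin + agecat, data = data)
  data$M  <- fitted(s1)   # predicted exposure, used by 2SPS
  data$Re <- resid(s1)    # first-stage residual, used by 2SRI
  # Stage 2: Cox regressions
  fit_2sps <- coxph(Surv(time, status) ~ M + agecat, data = data)
  fit_2sri <- coxph(Surv(time, status) ~ vitd_x + Re + agecat, data = data)
  c(HR_2SPS = unname(exp(coef(fit_2sps)["M"])),
    HR_2SRI = unname(exp(coef(fit_2sri)["vitd_x"])))
}
# Confidence intervals would be obtained by bootstrapping two_stage().
```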
4. Simulations

Three simulation studies are presented. A first study, reported in Section 4.1, corresponds to the design of the LIVER data, with three groups defined by randomization and treatment. Two additional simulation studies are reported in the Supplementary material available at Biostatistics online: the first of these is based on the non-compliance setup with four groups, and the second considers a setup where the exposure variable $$X$$ is taken to be continuous. In all cases, 2000 simulation experiments were run, and analyses were performed both using a fixed time $$t$$ for estimation, yielding $$\hat\psi_t$$, and using the estimated optimal time $$\hat\tau$$, yielding $$\hat\psi$$.

4.1. Simulation based on the LIVER data

Based on the LIVER data and the estimated survival curves in the three randomization-treatment groups, we set up a simulation study. Survival curves for the three groups were constructed such that the IV independence truly holds, as described in the following. Based on smoothed versions of the cumulative hazards for $$T$$ in the two groups where $$G=1, X=0$$ and where $$G=1, X=1$$ (see the bottom row in Figure 2), the survival function of $$T$$ in the group where $$G=0, X=0$$ is given by \begin{align*} \exp \left\{ -\Lambda_T(t\vert X=0, G=0)\right\} &= P(X=1\vert G=1) \exp \left\{ -\Lambda_T(t\vert X=1, G=1) \exp{(-\psi)}\right\}\\ &\quad + P(X=0\vert G=1) \exp \left\{ -\Lambda_T(t\vert X=0, G=1)\right\}, \end{align*} ensuring that $$G$$ is independent of $$T^0$$ for all time points. The estimated effect, $$\hat\psi=0.25$$, from the application is used as the true causal effect. The top row in Figure 2 shows the resulting survival function for the survival times $$T$$ in the group randomized to control, and, finally, the three smooth survival curves plotted together.

The results from this simulation study are shown in Table 1 for estimation at the fixed time $$t=40$$. For each simulation run, there was a unique solution to the estimating equation. A similar table summarizing the properties of the estimator based on the optimal time is given in Table 2. It is seen from Tables 1 and 2 that both $$\hat \psi_{40}$$ and $$\hat \psi$$ are virtually unbiased. It is also seen that the variability of the estimator $$\hat\psi$$ is indeed smaller than that of $$\hat\psi_{40}$$. The coverage of the asymptotic confidence intervals of $$\hat\psi_{40}$$ and $$\hat\psi$$ both agree with the theoretical $$95\%$$ level.

To be able to compare with Loeys and Goetghebeur (2003), we also used the exact same simulation setup as the one reported in their Table 1, scenarios 1 to 5. These results are reported in Table 3. We computed both $$\hat\psi$$ and $$\hat \psi_t$$, where $$t$$ was taken to be the 95% quantile of the observed event times. These two estimators perform similarly, although $$\hat\psi$$ exhibits some bias when $$n=150$$; this bias diminishes as the sample size grows (results not shown). We see that the estimated s.d. agrees quite well with the empirical variation of the estimators. Comparing with the results in Loeys and Goetghebeur (2003), it is seen that their estimator is slightly more efficient, which is not surprising as our method is much more general. We impose no restrictions on the type of the exposure or the instrument, while Loeys and Goetghebeur (2003), on the contrary, require both to be binary, and further that the control group has no access to the treatment (no contamination).
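A brief sketch of this construction: given smooth cumulative hazard functions for the two $$G=1$$ groups and the compliance probability, the survival curve of the untreated $$G=0$$ group is computed from the mixture identity above, and survival times can then be drawn by numerical inversion. The function names and inputs below are hypothetical.

```r
# Survival curve S(t | X = 0, G = 0) implied by the mixture identity,
# given cumulative hazard functions Lambda11(t) for (G=1, X=1) and
# Lambda10(t) for (G=1, X=0), p1 = P(X = 1 | G = 1), and the true psi.
surv00 <- function(t, Lambda11, Lambda10, p1, psi) {
  p1 * exp(-Lambda11(t) * exp(-psi)) + (1 - p1) * exp(-Lambda10(t))
}

# Draw one survival time for the G = 0 group by inversion: solve
# surv00(t) = U for U ~ Unif(0, 1); surv00 is decreasing in t.
draw_T00 <- function(Lambda11, Lambda10, p1, psi, tmax = 100) {
  U <- runif(1)
  uniroot(function(t) surv00(t, Lambda11, Lambda10, p1, psi) - U,
          interval = c(1e-8, tmax), extendInt = "downX")$root
}
```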
Fig. 2. Survival curves for $$T$$ in the three groups defined by treatment assignment ($$G$$) and treatment actually taken ($$X$$). The step functions are the estimates based on the LIVER data, and the smooth curves satisfy the restriction that $$T^0$$ and $$G$$ are independent for all time points. In the three individual plots, the estimated survival curves from the LIVER data and pointwise confidence intervals are included.

Table 1. Simulation study based on the LIVER data, estimation at the fixed time $$t=40$$. True value of $$\psi$$ is 0.25.

  Sample size $$n$$   Estimated time $$t$$   Frequency of single solution (%)   Mean of $$\hat\psi_t$$   s.d. of $$\hat\psi_t$$   Mean of as. s.d.   Coverage of as. CI (%)
  109                 40                     100.00                             0.228                    0.411                    0.395              95.3
  200                 40                     100.00                             0.239                    0.293                    0.288              95.1
  1000                40                     100.00                             0.250                    0.131                    0.128              93.9
  5000                40                     100.00                             0.250                    0.057                    0.057              95.2

Table 2. Simulation study based on the LIVER data, estimation at the estimated optimal time $$\hat\tau$$. True value of $$\psi$$ is 0.25.

  Sample size $$n$$   Optimal time $$\hat\tau$$   Frequency of finding $$\hat\tau$$ (%)   Mean of $$\hat\psi$$   s.d. of $$\hat\psi$$   Mean of as. s.d.   Coverage of as. CI (%)
  109                 45.2                        99.95                                   0.183                  0.401                  0.400              95.9
  200                 52.4                        100.00                                  0.215                  0.267                  0.269              95.2
  1000                57.6                        100.00                                  0.250                  0.120                  0.119              95.0
  5000                57.9                        100.00                                  0.248                  0.054                  0.053              94.5

Table 3. Simulation study with setup corresponding to Table 1 of Loeys and Goetghebeur (2003).

                           Properties of $$\hat\psi_t$$        Properties of $$\hat\psi$$
  $$\psi$$     $$n$$      Mean      s.d.    as. s.d.           Mean      s.d.    as. s.d.
   0           150       -0.008    0.45    0.45              -0.040    0.44    0.45
   0.182       150        0.184    0.45    0.44               0.152    0.42    0.44
  -0.223       150       -0.226    0.48    0.47              -0.276    0.45    0.48
   0.405       150        0.405    0.46    0.44               0.348    0.42    0.43
  -0.693       150       -0.688    0.52    0.50              -0.750    0.47    0.49

5. Discussion

In this article, we have proposed flexible strategies for estimating the effect of an arbitrary exposure on a time-to-event endpoint on the hazard ratio scale, on the basis of an IV. Our proposal readily extends to allow for the incorporation of covariates $$C$$, and for relaxation of the IV assumption to hold only after conditioning on those covariates. This can be done by substituting the term $$h(G)$$ in the estimating equations by some (arbitrary) function $$h(G,C)$$ and replacing the corresponding expectation $$E\left[h(G)\right]$$ by the conditional expectation $$E\left[h(G,C)|C\right]$$ under some model. We have chosen to work towards a proposal that is easy to implement using standard statistical software, e.g. using the R-package timereg (Martinussen and Scheike, 2006), as we did for the results presented in this article.

We have chosen to base estimation on a single, well-chosen time point in view of simplicity, as well as in view of concerns that the working association model on which the analysis relies might otherwise be more likely to be incongenial with the structural model and the IV assumptions. A further concern about model incongeniality is that it may in principle invalidate tests of the null hypothesis of no causal effect. This can be remedied via a locally robust procedure, as in Vansteelandt and others (2011).
Such a procedure is readily achieved upon substituting the association model by a complementary log–log model for the at-risk indicator at the considered time $$t$$. This model should minimally include an intercept and a main effect of the IV, and the fitting procedure should involve inverse-probability-of-censoring weighting in the presence of censoring.

As in many other IV analyses, identification of the causal effect in our proposal relies on the NEM assumption. The fact that subjects at different levels of the IV are exchangeable does not suffice to justify this assumption. This is because subjects with a given exposure level but different levels of the IV may fail to be exchangeable, and may therefore experience different effects of the considered exposure level. In future work, we will instead study identifiability under specific (homogeneity) assumptions on the selection mechanism, as in Tchetgen and Vansteelandt (2013). We foresee this will better enable sensitivity analyses towards violation of these assumptions, simplify the construction of congenial parameterizations, and make the estimation of population-averaged effects (instead of treatment effects in the treated) possible.

An alternative to basing estimation on a single time point is to use all time points simultaneously. Since \begin{align*} 0= E[\{G-E(G)\}S_{\beta}(t|X,G)^{\exp{(-\psi X)}}{\exp{(-\psi X)}}\lambda(t|X,G)], \end{align*} this leads to the estimating equation \begin{equation} \sum_i\int_0^{\nu}(G_i-\overline{G})\exp\left[-\hat\Lambda_0(t)\exp{\{(\hat\beta-\psi) X_i+\hat\gamma G_i\}}\right] \exp{\{(\hat\beta-\psi) X_i+\hat\gamma G_i\}}{\rm d}\hat\Lambda_0(t)=0, \end{equation} (5.1) where $$\hat\Lambda_0(t)$$ is the Breslow estimator based on the Cox association model (2.5) with $$\beta\,{=}\,\beta_X$$, $$\gamma\,{=}\,\beta_G\ {\rm and}\ \beta_{XG}= 0$$. This approach uses the whole time span, but in preliminary simulation studies we found no gain in efficiency compared to the simpler estimator suggested in this article. Future work may investigate how to weight the contributions relating to different times, and more specifically whether some optimal weights exist.

Finally, we briefly describe a possible goodness-of-fit procedure for the structural Cox model, basing the estimation on a time point $$\omega$$. The suggested procedure mimics that of Lin and others (1993) for the Cox (associational) model. Write the estimating equation as $$U(\psi,\hat \theta_{\omega})=0$$, where \begin{equation} U(\psi,\theta_{\omega})=\sum_i\{G_i-E(G)\}\exp\left[-\Lambda_0(\omega)\exp{\{(\beta-\psi) X_i+\gamma G_i\}}\right] \end{equation} (5.2) and where $$\theta_{t}=\{\phi,\Lambda_0(t),\beta,\gamma\}$$ with $$\phi=E(G)$$. Write (5.2), with $$\theta_{\omega}$$ replaced by $$\hat\theta_{\omega}$$, as $$U(\omega,\psi)$$, and define the goodness-of-fit process by $$U(t,\hat\psi)$$, $$t\le \omega$$. The idea is then to find the influence function of this process. The re-sampling approach of Lin and others (1993) can then be used to generate processes from the limit distribution, and in this way assess whether the observed process deviates from what is expected under the model. A detailed study of this approach will be communicated elsewhere.

6. Software

A simulation script with data is available at https://github.com/TorbenMart/IV_Str_Cox_2017.

Supplementary material

Supplementary material is available online at http://biostatistics.oxfordjournals.org.

Acknowledgments

Torben Martinussen and Ditte Nørbo Sørensen acknowledge support from the Dynamical Systems Interdisciplinary Network, University of Copenhagen. Conflict of Interest: None declared.
Appendix A

In the case of a binary instrument and exposure, a non-parametric specification of the association model, which is used to predict $$P(T>t| G,X)$$, is the Aalen additive hazards model $$ \lambda(t\vert X, G)=\beta_0(t)+ \beta_X(t)X+ \beta_G(t)G+ \beta_{XG}(t)XG, $$ where the above regression functions are left unspecified. The conditional cumulative hazard is \begin{align*} \Lambda(t\vert X, G) &= B_0(t)+ B_X(t)X+ B_G(t)G+ B_{XG}(t)XG, \end{align*} where $$B_0(t)=\int_0^t\beta_0(s)\, {\rm d}s$$ and similarly for the other regression coefficients. The parameter $$\beta_t$$ is here given by the integrated regression coefficients, $$\beta_t=B(t)$$, where $$B(t)=\left\{B_0(t), B_X(t), B_G(t), B_{XG}(t)\right\}^T$$, and we use Aalen's least squares estimator (Martinussen and Scheike, 2006) to estimate $$B(t)$$. The i.i.d. decomposition corresponding to the parameter $$\beta_t$$ using Aalen's least squares estimator is \begin{align*} \epsilon_i^{\beta_t}(t)=\int_0^t A^{-1}(s)X(s)_{(i,.)}{\rm d}M_i(s), \end{align*} where $$X(t)$$ is the $$n\times 4$$ matrix with $$i$$th row $$ X(t)_{(i,.)}= \left\{X(t)_{(i,1)}, \ldots ,X(t)_{(i,4)}\right\}= \left\{Y_i(t), Y_i(t)X_i, Y_i(t)G_i, Y_i(t)X_iG_i\right\}\!, $$ $$A(t)$$ is the limit in probability of $$\frac{1}{n}X(t)^TX(t)$$, and $$M_i(t) = N_i(t)-\int_0^tX(s)_{(i,.)}^T{\rm d}B(s)$$. This i.i.d. decomposition can also be extracted in R using the aalen() function in the R-package timereg.
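For instance, assuming the resample.iid interface of timereg (an assumption about the package's API that is worth verifying against its documentation), the decomposition can be obtained along the following lines on a hypothetical data frame:

```r
library(timereg)
library(survival)

# Hypothetical data with binary instrument G and exposure X
set.seed(2)
n <- 400
G <- rbinom(n, 1, 0.5)
X <- rbinom(n, 1, 0.3 + 0.4 * G)
Tlat <- rexp(n, rate = 0.10 * (1 + 0.5 * X))
C <- runif(n, 0, 15)
dat <- data.frame(time = pmin(Tlat, C), status = as.numeric(Tlat <= C),
                  X = X, G = G, XG = X * G)

# Nonparametric Aalen model with G, X and XG as covariates;
# resample.iid = 1 requests the i.i.d. decomposition of the estimated
# cumulative regression coefficients B(t)
fit <- aalen(Surv(time, status) ~ X + G + XG, data = dat, resample.iid = 1)
# fit$B.iid[[i]] then holds subject i's contribution epsilon_i^{beta_t}(t)
# over the jump times stored in fit$cum[, 1] (names per the timereg docs).
```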
References

Baker S. G. (1998). Analysis of survival data from a randomized trial with all-or-none compliance: estimating the cost-effectiveness of a cancer screening program. Journal of the American Statistical Association 93, 929–934.

Davey-Smith G. and Ebrahim S. (2003). Mendelian randomization: can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology 32, 1–22.

Frangakis C. E. and Rubin D. B. (1999). Addressing complications of intention-to-treat analysis in the combined presence of all-or-none treatment-noncompliance and subsequent missing outcomes. Biometrika 86, 365.

Lin D. Y., Wei L. J. and Ying Z. (1993). Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika 80, 557–572.

Loeys T. and Goetghebeur E. (2003). A causal proportional hazards estimator for the effect of treatment actually received in a randomized trial with all-or-nothing compliance. Biometrics 59, 100–105.

Loeys T., Goetghebeur E. and Vandebosch A. (2005). Causal proportional hazards models and time-constant exposure in randomized clinical trials. Lifetime Data Analysis 11, 435–449.

MacKenzie T. A., Løberg M. and O'Malley A. J. (2016). Patient centered hazard ratio estimation using principal stratification weights: application to the NORCCAP randomized trial of colorectal cancer screening. Observational Studies 2, 29–50.

MacKenzie T. A., Tosteson T. D., Morden N. E., Stukel T. A. and O'Malley A. J. (2014). Using instrumental variables to estimate a Cox's proportional hazards regression subject to additive confounding. Health Services and Outcomes Research Methodology 14, 54–68.

Martinussen T. and Scheike T. H. (2006). Dynamic Regression Models for Survival Data. New York: Springer.

Martinussen T., Vansteelandt S., Tchetgen T. E. J. and Zucker D. M. (2017). Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models. Biometrics, https://doi.org/10.1111/biom.12699.

Newey W. K. and McFadden D. (1994). Large sample estimation and hypothesis testing. In: Engle R. and McFadden D. (editors), Handbook of Econometrics, Volume 4. Amsterdam: Elsevier.

Nie H., Cheng J. and Small D. S. (2011). Inference for the effect of treatment on survival probability in randomized trials with noncompliance and administrative censoring. Biometrics 67, 1397–1405.

Robins J. and Rotnitzky A. (2004). Estimation of treatment effects in randomised trials with non-compliance and a dichotomous outcome using structural mean models. Biometrika 91, 763–783.

Robins J. and Tsiatis A. A. (1991). Correcting for non-compliance in randomized trials using rank preserving structural failure time models. Communications in Statistics—Theory and Methods 20, 2609–2631.

Skaaby T., Husemoen L. L. N., Martinussen T., Thyssen J. P., Melgaard M., Heinsbæk B., Thuesen C. P., Jørgensen T., Johansen J. D., Menné T. and others. (2013). Vitamin D status, filaggrin genotype and cardiovascular risk factors: a Mendelian randomisation approach. PLoS One 8(2), e57647.

Tchetgen T. E. J. and Vansteelandt S. (2013). Alternative identification and inference for the effect of treatment on the treated with an instrumental variable. Harvard University Biostatistics Working Paper Series, Working Paper 166.

Tchetgen T. E. J., Walter S., Vansteelandt S., Martinussen T. and Glymour M. (2015). Instrumental variable estimation in a survival context. Epidemiology 26, 402–410.

Tsiatis A. A. (2006). Semiparametric Theory and Missing Data. New York: Springer.

Vansteelandt S., Bowden J., Babanezhad M. and Goetghebeur E. (2011). On instrumental variables estimation of causal odds ratios. Statistical Science 26, 403–422.

Wan F., Small D., Bekelman J. E. and Mitra N. (2015). Bias in estimating the causal hazard ratio when using two-stage instrumental variable methods. Statistics in Medicine 34, 2235–2265.

© The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

Instrumental variables estimation under a structural Cox model

Loading next page...
 
/lp/ou_press/instrumental-variables-estimation-under-a-structural-cox-model-KqvYOBNJCq
Publisher
Oxford University Press
Copyright
© The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ISSN
1465-4644
eISSN
1468-4357
D.O.I.
10.1093/biostatistics/kxx057
Publisher site
See Article on Publisher Site

Abstract

Summary Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposure and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data. 1. Introduction In health studies, the scientific question is often of a causal nature. For example “Does the exposure increase the risk of cancer?”. The interest is not whether the prevalence of cancer differs between exposed and unexposed individuals, but rather whether exposed individuals would more likely have been cancer-free for a shorter time had they been unexposed. Randomized trials create comparable or exchangeable groups of exposed and unexposed individuals, and thereby deliver a unique source of data for addressing such causal questions. However, because such trials are often infeasible or even impossible for ethical, financial, or practical reasons, the epidemiologist is commonly left with observational data where the exchangeability of exposed and unexposed individuals cannot be guaranteed. Instrumental variables (IV) analyses attempt to re-establish the balance or exchangeability brought about by randomization, by making use of IV. These are variables that are independent of any unmeasured confounders conditional on any observed confounders of treatment and the outcome (assumption i), but are associated with the exposure of interest (assumption ii) so that one can indirectly learn about the exposure’s effect from the effect of the IV. Such indirect learning is possible when the IV affects the outcome only through the level of the exposure (assumption iii, labeled the “exclusion restriction”), and certain modeling assumptions are satisfied (see later). An example of an IV could be a certain genetic variant that alters an exposure of interest (e.g., smoking behavior), as considered in so-called Mendelian randomization (MR) studies. Here, the random inheritance of genes adds to the plausibility of (i), although the credibility of the exclusion restriction (iii) needs careful attention in applications. The rapid increase in the number of published papers using MR emphasizes the need for properly developed IV methods for various types of outcome. For continuous exposure and outcome that lend themselves to linear modeling, the IV methodology has been well developed. For survival outcomes, this methodology has been extended to accelerated failure time models (Robins and Tsiatis, 1991) and additive Aalen models (Tchetgen and others, 2015; Martinussen and others, 2017). Non-collapsibility of the hazard ratio has however turned out to be a major stumble block in extending these methodologies to the more popular Cox model. The majority of IV analyses of time-to-event endpoints have therefore become dominated by heuristic approaches. 
This is a concern because naive application of procedures known from the simple IV models, such as two-stage residual inclusion (2SRI) and two-stage predictor substitution (2SPS) estimators under a Cox model, have been shown to lead to bias (Wan and others, 2015). More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context. For instance, in a context of non-compliance in randomized trials for binary instrument, Loeys and Goetghebeur (2003) and Loeys and others (2005) considered a setting with a dichotomous instrument, categorical exposure, and exposure only being accessible in the experimental arm (i.e., at one level of the IV). MacKenzie and others (2014) studied IV estimation in the Cox model under the biologically implausible assumption that confounding operates additively on the hazard (Tchetgen and others, 2015). Other methods are based on principal stratification, see Frangakis and Rubin (1999), Baker (1998), Nie and others (2011) and MacKenzie and others (2016). However, using principal stratification implies a different estimand than the one pursued in this paper. In view of the above limitations, the aim of this article is to reconsider IV estimation under the structural Cox model for arbitrary IVs and exposures. We propose a novel class of estimators in Section 2 and derive their asymptotic properties enabling the construction of confidence intervals. In Section 3.1, we use our proposal to analyze data from a randomized trial with non-compliance. In Section 3.2, we consider a MR study to evaluate the importance of Vitamin D on all-cause mortality. The exposure in that study is continuous which is also covered by our proposed method. The properties of the proposed estimator are evaluated using simulated data in Section 4. Finally in Section 5, the strengths and weaknesses of the method are discussed as well as pointers to further developments. 2. IV estimation of the conditional causal hazard ratio 2.1. Structural model Throughout, our interest lies in the effect of some exposure or treatment $$X$$ on a time-to-event endpoint $$T$$. We will allow for the possibility that differently exposed subgroups may fail to be exchangeable, as would be the case in most observational studies, but will assume that data is available for each individual on an IV $$G$$. For instance, we may be interested in the effect of received treatment $$X$$ on time-to-death $$T$$ in a randomized trial where, as often happens, not all individuals comply with the assigned treatment. Here, in spite of randomization, individuals who do and do not receive treatment may lack exchangeability because of well-known phenomena such as the healthy user effect, or the sick stopper effect. However, randomization $$G$$ delivers an IV, so long as it does not affect time-to-death other than by influencing the received treatment (as would be plausible in many double-blind experiments). To define the exposure effect, let $$T^x$$ be the potential survival time had the treatment been manipulated or set to $$x$$ by some intervention (where e.g. $$x$$ and $$X$$ may be coded as 1 or 0 to denote treatment or no treatment, respectively). Throughout, we make the so-called consistency assumption that the potential survival time $$T^x$$ equals the observed survival time for individuals with exposure $$x$$. The structural Cox model, which we will study in this article, states that \begin{align} \frac{\lambda_{T^x}(t|X=x,G)}{\lambda_{T^0}(t|X=x,G)}=\exp{(\psi x)} \quad \textrm{ for all } x. 
\end{align} (2.1) Here, $$\lambda_{T^0}(t|X=x,G)$$ encodes the hazard at time $$t$$ for individuals with $$X=x$$ and $$G$$, had they received some reference level $$x=0$$ (which will usually refer to “no exposure”). Note that model (2.1) explicitly contrasts the same group—individuals with $$X=x$$ and $$G$$—twice, once with the observed exposure, and once with zero exposure. It thereby delivers a measure of the causal effect of treatment among the treated, unlike the hazard ratio in a more standard Cox model, \[ \frac{\lambda_{T^x}(t|X=x,G)}{\lambda_{T^0}(t|X=0,G)}=\exp{(\tilde{\psi} x)} \quad \textrm{ for all} \ x, \] which contrasts differently exposed subgroups and therefore lacks a causal interpretation. In particular, $$\tilde{\psi}$$ may therefore differ from $$\psi$$. This can also been seen as \begin{align} \exp{(\tilde{\psi} x)} =\frac{\lambda_{T^0}(t|X=x,G)}{\lambda_{T^0}(t|X=0,G)}\exp{(\psi x)}, \end{align} (2.2) where the factor $$\lambda_{T^0}(t|X=x,G)/\lambda_{T^0}(t|X=0,G)$$ expresses a selection mechanism: the fact that individuals with $$X=x$$ and $$X=0$$ might experience different hazards, even if both were unexposed, because these individuals might not be comparable. Note that the denominator in (2.1) is defined using the counterfactual hazard for $$T^0$$. The interpretation of $$\psi$$ thus relates to an intervention where the exposure is set to the reference level $$0$$. Any level of exposure could be used here, and the reference level should be chosen to yield a meaningful comparison. Note also that the structural model does not allow the causal effect to be modified by $$G$$. This No Effect Modification (NEM) assumption is not guaranteed to hold and is partly untestable. This is even so in the noncompliance example where G refers to randomization itself, because individuals with a given exposure may fail to be exchangeable between different randomized arms. The NEM assumption can be relaxed by formulating a structural model as in (2.1) where $$G$$ is included in a term in the exponential on the right-hand side. However, this requires that the instrument $$G$$ has more than two levels, or that the structural model includes baseline covariates that do not interact with the exposure or instrument in their effect on the outcome. 2.2. Estimation Suppose that the study design provides i.i.d. realizations of $$(G_i, X_i, T_i^*, \Delta_i)_{ i=1, \ldots , n}$$. Here, $$G$$ is a possibly multivariate IV, $$C$$ is the censoring time, $$T^*=\min (T, C)$$ is the observed time and $$\Delta = I(T\leq T^*)$$ is the censoring indicator. Throughout, we will assume that $$C {\perp\!\!\!\!\perp} T|G, X$$. Let $$N(t)=I(T^*\le t,\Delta=1)$$ and $$Y(t)=I(t\le T^*)$$ be the observed counting process and at risk indicator, respectively. Let $$\nu$$ be some finite time point such that we observe $$N(t)$$ and $$Y(t)$$ in the time period $$[0,\nu]$$. Let the conditional survival probabilities of $$T$$ and $$T^0$$ given the exposure and instrument be denoted $$P\left(T>t\vert X, G\right)$$ and $$P\left(T^0>t\vert X, G\right)$$ for all $$t>0$$. The structural Cox model can be equivalently rephrased in terms of these probabilities as $$ f\left\{P\left(T^x>t|X=x,G\right)\right\}=f\left\{P\left (T^0>t|X=x,G\right)\right\}+\psi x $$ for all $$t>0$$, with $$f(v)=\log\{-\log(v)\}$$. This gives the relationship \begin{align} P\left(T^0>t|X=x,G\right)=P\left(T^x>t|X=x,G\right)^{\exp{(-\psi x)}}, \end{align} (2.3) for all $$t>0$$. 
In what follows, we will show how this identity can be used to enable estimation of $$\psi$$. The IV assumptions (i) and (iii) imply that $$T^0$$ and $$G$$ must be independent. By this key independence, we must have at each time $$t$$ that
\begin{align*}
E\left\{h(G)P\left(T^0>t \vert X, G\right)\right\} &= P\left(T^0>t\right)E\left\{h(G)\right\} =0,
\end{align*}
where $$h$$ is a function of $$G$$ such that $$E\left\{h(G)\right\} = 0$$, for example $$h(G)=G-E\left(G\right)$$. Together with the identity (2.3), this implies
\begin{align*}
E\left\{h(G)P\left(T>t \vert X, G\right)^{\exp(-\psi X)} \right\} =0.
\end{align*}
The parameter of interest can now be found by choosing it so as to make the independence of $$T^0$$ and $$G$$, or more precisely the above equality, hold in the sample. That is, for each fixed time $$t>0$$, one can estimate $$\psi$$ as the solution to
\begin{equation}
0 = \sum_{i=1}^n \left[ h(G_i)- E\left\{h(G)\right\}\right] P(T_i>t\vert X_i, G_i)^{\exp{(-\psi X_i)}}.
\end{equation}
(2.4)
The solution to this equation delivers an infeasible estimator because the population expectations $$E\left\{h(G)\right\}$$ and $$P(T>t\vert X, G)$$ are unknown. In practice, we will therefore substitute $$E\left\{h(G)\right\}$$ by the corresponding sample average. In the next two sections, we discuss how to model the survival probabilities $$P(T>t\vert X, G)$$ and how to choose the time point $$t$$, respectively.

2.3. Association model

The quantities $$P(T_i>t\vert X_i, G_i)$$ in (2.4) can be estimated either by modeling the survival function (or the survival probability at time $$t$$) directly, or via additive or multiplicative models for the hazard function. The resulting model will be referred to as the association model. For instance, we could use the Cox proportional hazards model, where the hazard for $$T$$ given $$X=x, G=g$$ is modeled as
\begin{align}
\lambda(t\vert X=x, G=g)=\lambda_0(t)\exp{\{m_a(x, g, \beta_a)\}}
\end{align}
(2.5)
for some baseline hazard $$\lambda_0(t)$$ and a known function $$m_a$$. A simple interaction model is $$m_a(x, g, \beta_a) = \beta_Xx+\beta_Gg+\beta_{XG}xg$$ for $$\beta_a=(\beta_X, \beta_G, \beta_{XG})$$, which corresponds to the survival function
\begin{align*}
S_\beta(t\vert X=x, G=g) &= \exp\left[ -\Lambda_0(t)\exp{\{m_a(x, g, \beta_a)\}} \right],
\end{align*}
where $$\Lambda_0(t) = \int_0^t\lambda_0(s){\rm d}s$$. In what follows, we will refer to the parameters of the association model as $$\beta_t$$, e.g. $$\beta_t=\{\beta_a,\Lambda_0(t)\}^T$$, where the index $$t$$ refers to the considered time point. We will furthermore use $$S_\beta(t\vert X, G)$$ to denote the survival function for $$T$$ at time $$t$$ conditional on $$X$$ and $$G$$. Upon substituting $$P(T_i>t\vert X_i, G_i)$$ in (2.4) by $$S_\beta(t\vert X_i, G_i)$$, with $$\beta_t$$ substituted by a consistent estimator $$\hat{\beta}_t$$, and solving for $$\psi$$, we obtain an estimator which we will denote $$\hat{\psi}_t$$.

In practice, one does not have complete freedom to choose the association model. The reason is that situations may arise in which no data generating mechanism exists where $$G$$ is a valid IV and, at the same time, both the association model and the structural model hold (see Vansteelandt and others, 2011; Robins and Rotnitzky, 2004). This partly explains why our proposed estimating procedure merely makes use of the model restrictions at a single time $$t$$, rather than at all times, and why we recommend choosing a sufficiently flexible association model as a general guideline.
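Given a fitted association model, solving (2.4) at a fixed time $$t$$ reduces to one-dimensional root-finding. The following R sketch illustrates this, taking $$h$$ to be the identity with $$E\{h(G)\}$$ replaced by the sample mean; the vector S_hat of estimated survival probabilities is assumed to come from whatever association model is used, and the search interval must be wide enough for uniroot to bracket a sign change. This is not the authors' code, only a minimal illustration.

```r
## Sketch: solve the estimating equation (2.4) for psi at a fixed time t.
## G, X: length-n vectors; S_hat: estimated P(T_i > t | X_i, G_i) from the
## association model.
estimate_psi <- function(G, X, S_hat, interval = c(-5, 5)) {
  h <- G - mean(G)                        # h(G_i) minus its sample average
  ee <- function(psi) sum(h * S_hat^exp(-psi * X))
  uniroot(ee, interval = interval)$root   # psi making the sample moment zero
}
```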
To understand the congeniality problem better, note that for dichotomous $$G$$ and $$X$$, where we define $$p_{kl}= P(X=k\vert G=l)$$, the proportional hazards model (2.5) imposed at all $$t$$ is incongenial with the IV assumptions and the structural model at all $$t$$, since there are no parameters $$\{\lambda_0(t), \beta_X, \beta_G, \beta_{XG}\}$$ such that the following two expressions are equal
\begin{align*}
P(T^0>t|G=0)=\; & p_{10}\exp\left\{-\Lambda_0(t)\exp{(\beta_X-\psi)}\right\} +p_{00}\exp\left\{-\Lambda_0(t)\right\} \\
P(T^0>t|G=1)=\; & p_{11}\exp\left\{-\Lambda_0(t)\exp{(\beta_X-\psi+\beta_G+\beta_{XG})}\right\} + p_{01}\exp\left\{-\Lambda_0(t)\exp{(\beta_G)}\right\}
\end{align*}
for all times $$t$$ (besides the trivial case where $$\beta_X=\psi$$ and $$\beta_G=\beta_{XG}=0$$), yet the IV assumptions imply that both probabilities must be equal at each fixed time $$t$$. It is worth noting, however, that in this case, with $$G$$ and $$X$$ both dichotomous, we need not assume any association model, as we can estimate $$P(T>t| X, G)$$ using a fully non-parametric approach: for instance, by applying the four Kaplan–Meier estimators (one for each of the four combinations of $$G$$ and $$X$$), or by using the non-parametric Aalen additive hazards model with $$G$$, $$X$$ and $$GX$$ as covariates; a sketch of the Kaplan–Meier approach is given below.
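For the dichotomous case, the cell-wise Kaplan–Meier estimates can be plugged straight into the estimating equation; a minimal sketch follows, in which the data frame dat (columns time, status, G, X) is hypothetical and empty $$(G,X)$$ cells are simply skipped.

```r
## Sketch: fully non-parametric survival estimates for binary G and X via
## one Kaplan-Meier fit per observed (G, X) cell. `dat` is a hypothetical
## data frame with columns time, status, G, X.
library(survival)

km_surv_at <- function(dat, t0) {
  S_hat <- numeric(nrow(dat))
  cells <- split(seq_len(nrow(dat)), interaction(dat$G, dat$X, drop = TRUE))
  for (idx in cells) {
    fit <- survfit(Surv(time, status) ~ 1, data = dat[idx, ])
    ## Kaplan-Meier estimate of P(T > t0 | X, G) within this cell
    S_hat[idx] <- summary(fit, times = t0, extend = TRUE)$surv
  }
  S_hat
}

## These estimates can be passed to estimate_psi() from the sketch above.
```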
2.4. Large sample results

Let $$\theta_t=[E\{h(G)\}, \beta_t]^T$$ refer to the vector of nuisance parameters. Assume that we have a regular asymptotically linear (RAL) estimator $$\hat\theta_t$$ of $$\theta_t$$ (Tsiatis, 2006), so that we have the representation
\begin{equation}
\sqrt{n} \left(\hat\theta_t-\theta_t\right)=\frac{1}{\sqrt n}\sum_{i=1}^n\epsilon_i^{\theta_t}+o_p(1),
\end{equation}
(2.6)
where the terms $$\epsilon_i^{\theta_t}$$, $$i=1,\ldots ,n$$, are i.i.d. with zero mean. Here, $$\epsilon_i^{\theta_t}$$ is called the influence function of $$\hat\theta_t$$, and (2.6) is referred to as the i.i.d. decomposition of the estimator $$\hat\theta_t$$. It then follows from standard theory for M-estimators that
\begin{align*}
\sqrt{n}\left(\hat\psi_t - \psi\right)&=-\left\{\frac{1}{ n}\sum_{i=1}^n D_{\psi}u_i\left(t, \psi, \theta_t\right)\right\}^{-1}\frac{1}{\sqrt n}\sum_{i=1}^nU_i\left(t, \psi, \theta_t\right)+ o_p(1),
\end{align*}
where $$D_{\psi}$$ refers to the gradient w.r.t. $$\psi$$. Here,
\[
U_i\left(t, \psi, \theta_t\right)= u_i(t, \psi, \theta_t) + E\left\{D_\theta u(t, \psi, \theta_t)\right\} \epsilon_i^{\theta_t}(t),
\]
where
\[
u(t, \psi, \theta_t)=\left[h(G)- E\left\{h(G)\right\}\right] S_{\beta}(t|X,G)^{\exp{(-\psi X)}}
\]
and $$D_{\theta}$$ refers to the gradient w.r.t. $$\theta_t$$. The asymptotic variance of $$\hat{\psi}_t$$ is then given as $$n^{-1}$$ times the empirical variance of
\[
\left\{\frac{1}{ n}\sum_{i=1}^n D_{\psi}u_i\left(t, \psi, \theta_t\right)\right\}^{-1}U_i\left(t, \psi, \theta_t\right)\!,
\]
with $$\psi$$ substituted by $$\hat{\psi}_t$$ and $$\theta_t$$ by $$\hat{\theta}_t$$. When $$h(\cdot)$$ is the identity function, then
$$ \epsilon_i^{\theta_t} (t) = \begin{pmatrix} G_i-\mu_G\\ \epsilon_i^{\beta_t} (t) \end{pmatrix}\!, $$
where $$\mu_G=E(G)$$ and $$\epsilon^{\beta_t}(t)$$ is the i.i.d. representation of the estimator of the parameters defining the association model. For instance, consider the Cox model presented in (2.5) with $$\beta_a=(\beta_X, \beta_G,\beta_{XG})^T$$. In this case, we can split $$\epsilon^{\beta_t}_i(t)$$ into $$\epsilon^{\beta_t}_i(t)=\left(\epsilon_i^{\Lambda_0}(t), \epsilon_i^{\beta_a}\right)^T$$. Specifically,
\begin{align*}
\epsilon_i^{\beta_a}&=-I(\beta_a)^{-1}\int_0^{\nu}\{Z_i-e(t,\beta_a)\}{\rm d}M_i(t)\\
\epsilon_i^{\Lambda_0}(t)&=\int_0^{t}s_0(u,\beta_a)^{-1}{\rm d}M_i(u)-\left\{\int_0^{t}e(u,\beta_a){\rm d}\Lambda_0(u)\right\} \epsilon_i^{\beta_a}.
\end{align*}
In the latter display, $$Z_i=(X_i,G_i,X_iG_i)$$, and $$e(t,\beta_a)$$ and $$s_0(t,\beta_a)$$ are the limits in probability of
$$ E(t,\beta_a)=\frac{S_1(t,\beta_a)}{S_0(t,\beta_a)} $$
and $$S_0(t,\beta_a)$$, respectively, with
$$ S_0(t,\beta_a)=n^{-1}\sum_{i=1}^nY_i(t)\exp{(\beta_a^TZ_i)}, \quad S_1(t,\beta_a)=n^{-1}\sum_{i=1}^n Y_i(t)Z_i^T\exp{(\beta_a^TZ_i)}. $$
Further, $$I(\beta_a)$$ is the limit in probability of the derivative of the Cox score (for the association model) evaluated at $$\beta_a$$, and $$M_i(t)$$ is the counting process martingale from the Cox association model:
$$ M_i(t)=N_i(t)-\int_0^tY_i(s)\exp{(\beta_a^TZ_i)}{\rm d}\Lambda_0(s). $$
It is worth noting that $$\epsilon^{\beta_t}_i(t)$$ for the Cox association model can easily be extracted in R using the cox.aalen function from the package timereg, as sketched below. The explicit expression for $$\epsilon^{\beta_t}_i(t)$$ depends on the assumed association model and on how the parameters of the model are estimated. In the case where both $$G$$ and $$X$$ are dichotomous, we can estimate $$P(T>t| X, G)$$ using a fully non-parametric Aalen additive hazards model with $$G$$, $$X$$ and $$GX$$ as covariates. We give the explicit expression for $$\epsilon^{\beta_t}_i(t)$$ for that case in Appendix A using Aalen's least squares estimator.
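The extraction via timereg might look as follows; this is only a sketch, and the argument resample.iid and the output components gamma.iid and B.iid are assumptions that should be checked against the package documentation.

```r
## Sketch: i.i.d. decomposition of the Cox association-model estimates via
## timereg. The names resample.iid, gamma.iid and B.iid are assumed here
## and should be verified against the timereg documentation.
library(timereg)

dat$XG <- dat$X * dat$G                  # interaction term entered explicitly
fit <- cox.aalen(Surv(time, status) ~ prop(X) + prop(G) + prop(XG),
                 data = dat, resample.iid = 1)

eps_beta_a  <- fit$gamma.iid   # epsilon_i^{beta_a}: one row per subject
eps_Lambda0 <- fit$B.iid       # epsilon_i^{Lambda_0}(t): one element per subject
```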
2.5. Optimal time for estimation

A drawback of the procedure proposed so far is that it may deliver a different estimator $$\hat{\psi}_t$$ for each considered time $$t$$. To avoid cherry-picking, we will show how an objective choice of $$t$$, delivering reasonable precision, can be made. Ideally, we would evaluate the estimator $$\hat{\psi}_t$$ at the time
\begin{align*}
\tau=\textrm{argmin}_{t>0}\textrm{Var}\left\{\hat\psi_t\right\}\!,
\end{align*}
which delivers the estimator with minimal variance. Unfortunately, the value of $$\tau$$ depends on the data-generating distribution, and is therefore unknown. Interestingly, however, estimating it as the minimizer of the empirical variance, i.e.
\begin{align*}
\hat\tau = \textrm{argmin}_{t>0}\widehat{\textrm{Var}}\left\{\hat\psi_t\right\}\!,
\end{align*}
is justified in the sense that $$\textrm{Var}\left\{\hat\psi_{\tau}\right\} =\textrm{Var}\left\{\hat\psi_{\hat{\tau}}\right\}$$ asymptotically. This follows from the fact that $$\hat{\psi}_t$$ consistently estimates $$\psi$$ for all considered $$t>0$$, provided that $$\hat\tau$$ converges to $$\tau$$ faster than at $$n^{1/4}$$-rate (Newey and McFadden, 1994).

3. Applications

We consider two real data sets to illustrate our method. First, we re-analyze the LIVER data also considered by Loeys and Goetghebeur (2003), where there is non-compliance. In a second application, we investigate whether vitamin D deficiency may be an important risk factor for all-cause mortality; in this latter application, we deal with a continuous exposure variable, using the filaggrin genotype as a proxy for vitamin D status following the principles of MR (Davey-Smith and Ebrahim, 2003).

3.1. LIVER data

This section compares an analysis of the LIVER data from ECOG trial E9288 using our proposed method with the results presented in Loeys and Goetghebeur (2003). The data comprise 109 individuals with colorectal cancer, who were scheduled for surgical resection of liver metastases between August 1990 and January 1997. The purpose of the study was to evaluate a new treatment whereby the patient, while undergoing surgery, was implanted with an arterial device (AD) to administer chemotherapy. By presurgical randomization, 56 patients were assigned to surgical resection alone (standard treatment), and 53 patients were assigned to the new treatment, thus also receiving chemotherapy. While there was perfect compliance in the standard group, because the new treatment was only available to those randomized to it, there was non-compliance in the intervention arm: 10 patients in this arm did not receive the AD, for unreported reasons presumably related to their prognosis. In the standard group, 30 patients did not survive; in the intervention group, 24 did not survive after receiving the AD, and 9 did not survive after receiving the standard treatment. To analyze the data, we estimated the cumulative hazards for the three groups defined by randomization and compliance status using a fully non-parametric Aalen additive hazards model; hence there is no problem of incongeniality for this specific application. The corresponding survival curves are shown in Figure 1. The estimation of the causal parameter was carried out using the predicted survival probabilities after $$40$$ months, and the hazard ratio $$\exp{(\psi)}$$ is estimated to be $$1.28\, (0.61, 2.71)$$. The optimal time is estimated to be $$42.2$$ months, leading to a causal hazard ratio of $$1.38\, (0.67, 2.84)$$. This is comparable with the estimated hazard ratio of $$1.43\, (0.7, 2.96)$$ reported by Loeys and Goetghebeur. That the causal effect is not statistically significant is in accordance with the instrument not being significant in an intention-to-treat (ITT) analysis ($$p=0.57$$). It is worth noting that the above analysis does not invoke the NEM assumption, since the experimental AD treatment was only available in one randomization group.

Fig. 1. The estimated survival curves for the LIVER data.
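To indicate how such numbers can be computed, the following R sketch combines the estimating-equation solver and the Kaplan–Meier fits sketched in Section 2 with a grid search for the optimal time. The data frame liver (columns time, status, G, X) is hypothetical, and the nonparametric bootstrap is used here as a simple stand-in for the asymptotic variance of Section 2.4.

```r
## Sketch: point estimate at t = 40 months, plus a grid search for the
## minimum-variance time. `liver` is a hypothetical data frame; estimate_psi()
## and km_surv_at() are the sketches from Section 2.
psi_40 <- estimate_psi(liver$G, liver$X, km_surv_at(liver, 40))
exp(psi_40)   # estimated causal hazard ratio at t = 40

boot_var <- function(t0, B = 200) {
  est <- replicate(B, {
    b <- liver[sample(nrow(liver), replace = TRUE), ]
    estimate_psi(b$G, b$X, km_surv_at(b, t0))
  })
  var(est)    # bootstrap variance of psi-hat at time t0
}
tgrid <- seq(20, 60, by = 2)
t_opt <- tgrid[which.min(sapply(tgrid, boot_var))]   # empirical analogue of tau-hat
```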
3.2. Vitamin D data

Vitamin D is a fat-soluble vitamin obtained from the diet and dietary supplements and produced in sun-exposed skin. Vitamin D deficiency is common and has been linked with a number of common diseases such as cardiovascular disease, diabetes, and cancer. Here, we are interested in evaluating its effect on overall survival. Unfortunately, vitamin D status is also associated with several behavioral and environmental factors, potentially giving rise to biased estimators when using standard statistical analyses. Recently, mutations in the filaggrin gene have been shown to be associated with a higher vitamin D status, supposedly through an increased UV sensitivity of keratinocytes (see Skaaby and others, 2013, and references therein). Filaggrin is an important component of the skin barrier function. The two mutations 2282del4 and R501X are by far the most frequent in white Caucasians, but the R2447X mutation also occurs.

To assess the potential effect of vitamin D status on overall survival, we used filaggrin genotype as a proxy measure for vitamin D status according to the principles of MR (Davey-Smith and Ebrahim, 2003). The data source is the population-based study Monica10, in which vitamin D was measured as serum 25-OH-D (nmol/L). The Monica I study was conducted in 1982–1984 and included examinations of 3785 individuals of Danish origin, recruited from the Danish Central Personal Register as a random sample of the population. The 10-year follow-up study, Monica10, included 2656 individuals aged 40–71 years and was carried out in 1993–1994; for further details, see Skaaby and others (2013). The data we analyze consist of 2571 individuals, as a few individuals in the original data have missing information on filaggrin and/or on vitamin D status. Our instrument $$G$$ is the indicator of filaggrin mutation, and we used vitamin D (nmol/L) as a continuous variable centered at 20 and scaled by 20, hence $$X=$$ (VitD $$-$$ 20)/20. A vitamin D level below 30 nmol/L indicates vitamin D deficiency. As age is clearly an important predictor of mortality, we applied our method extended to the case with additional covariates, here age in four categories: (39,45), (45,55), (55,65), (65,72). The instrument is significantly associated with the exposure: in a linear model adjusting for the age variable, the corresponding P-value is 0.006. However, the instrument is not strong, as the associated F-statistic is only 7.44. The associational Cox model was rejected for the last two age categories using the goodness-of-fit procedure of Lin and others (1993). Hence, we used a Cox associational model with separate baseline hazards for these two age categories, and proportional effects of the second age category, the instrument, and the exposure. The optimal time is estimated to be $$16$$ years, leading to a causal hazard ratio of $$2.01$$ with corresponding 95% confidence interval $$(0.95, 4.26)$$. Hence, lowering the vitamin D level from 40 to 20 roughly doubles the hazard, although the effect is not significant, as indicated by the 95% confidence interval. This is in line with the simple ITT analysis, where the P-value of the instrument in a Cox model, adjusting for the age variable as explained above, is 0.09. We also carried out a naive application of the procedures known from the simple IV models, namely the 2SPS and 2SRI estimation procedures. In the first stage, we obtained predicted values $$M$$ of $$X$$, using the instrument and the categorical age variable as predictors; the residuals $$Re$$ from this analysis were also stored. In the second stage, a Cox regression analysis was applied with $$M$$ and the categorical age variable as covariates (the 2SPS method). We also ran a Cox regression analysis with $$X$$, $$Re$$, and the age variable, corresponding to the 2SRI method, also called the control function method. Estimates of the exposure effect were obtained, with 95% confidence intervals computed using the nonparametric bootstrap. The 2SPS resulted in an estimated hazard ratio of $$2.83$$ with corresponding 95% confidence interval $$(0.83, 285)$$, and the 2SRI resulted in an estimated hazard ratio of $$3.04$$ with corresponding 95% confidence interval $$(0.89, 300)$$, thus pointing in the same direction as our structural Cox model analysis, but with much wider confidence intervals. However, the properties of both the 2SPS and the 2SRI in this setting are unclear.
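The two naive two-stage fits described above are straightforward to sketch in R; the data frame monica (columns time, status, G, X, and the age factor agegrp) is hypothetical, and the bootstrap step is omitted for brevity.

```r
## Sketch: naive 2SPS and 2SRI analyses as described above. `monica` is a
## hypothetical data frame with columns time, status, G (filaggrin
## indicator), X = (VitD - 20)/20 and agegrp (factor with four levels).
library(survival)

stage1 <- lm(X ~ G + agegrp, data = monica)
monica$M  <- fitted(stage1)   # first-stage predicted exposure
monica$Re <- resid(stage1)    # first-stage residuals

## 2SPS: replace the exposure by its prediction.
fit_2sps <- coxph(Surv(time, status) ~ M + agegrp, data = monica)

## 2SRI (control function): keep the exposure, add the residual.
fit_2sri <- coxph(Surv(time, status) ~ X + Re + agegrp, data = monica)

exp(coef(fit_2sps)["M"])   # 2SPS hazard-ratio estimate
exp(coef(fit_2sri)["X"])   # 2SRI hazard-ratio estimate
```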
4. Simulations

Three simulation studies are presented. A first study, reported in Section 4.1, corresponds to the design of the LIVER data, with three groups defined by randomization and treatment. Two additional simulation studies are reported in the Supplementary material available at Biostatistics online: the first of these is based on the non-compliance setup with four groups, and the second considers a setup where the exposure variable $$X$$ is continuous. In all cases, 2000 simulation experiments were run, and analyses were performed both using a fixed time $$t$$ for estimation, giving $$\hat\psi_t$$, and based on the estimated optimal time $$\hat\tau$$, giving $$\hat\psi$$.

4.1. Simulation based on the LIVER data

Based on the LIVER data and the estimated survival curves in the three randomization-treatment groups, we set up a simulation study. Survival curves for the three groups were constructed such that the IV independence truly holds, as described in the following. Based on smoothed versions of the cumulative hazards for $$T$$ in the two groups with $$G=1, X=0$$ and with $$G=1, X=1$$ (see the bottom row of Figure 2), the survival function for $$T$$ in the group with $$G=0, X=0$$ is determined by
\begin{align*}
\exp \left\{ -\Lambda_T(t\vert X=0, G=0)\right\} &= P(X=1\vert G=1) \exp \left\{ -\Lambda_T(t\vert X=1, G=1) \exp{(-\psi)}\right\}\\
&\quad + P(X=0\vert G=1) \exp \left\{ -\Lambda_T(t\vert X=0, G=1)\right\},
\end{align*}
ensuring that $$G$$ is independent of $$T^0$$ at all time points. The estimated effect from the application, $$\hat\psi=0.25$$, is used as the true causal effect. The top row of Figure 2 shows the resulting survival function for the survival times $$T$$ in the group randomized to control, and, finally, the three smooth survival curves plotted together. The results from this simulation study are shown in Table 1 for estimation at the fixed time $$t=40$$. For each simulation run, there was a unique solution to the estimating equation. A similar table summarizing the properties of the estimator based on the optimal time is given in Table 2. It is seen from Tables 1 and 2 that both $$\hat \psi_{40}$$ and $$\hat \psi$$ are virtually unbiased. It is also seen that the variability of the estimator $$\hat\psi$$ is indeed smaller than that of $$\hat\psi_{40}$$. The coverage of the asymptotic confidence intervals of $$\hat\psi_{40}$$ and $$\hat\psi$$ both agree with the theoretical $$95\%$$ level. To be able to compare with Loeys and Goetghebeur (2003), we also used the exact same simulation setup as the one reported in their Table 1, scenarios 1 to 5. These results are reported in Table 3. We computed both $$\hat\psi$$ and $$\hat \psi_t$$, where $$t$$ was taken to be the 95% quantile of the sorted event times. These two estimators perform similarly, although $$\hat\psi$$ exhibits some bias when $$n=150$$; this bias diminishes as the sample size increases (results not shown). We see that the estimated s.d. agrees quite well with the empirical variation of the estimators. Comparing with the results in Loeys and Goetghebeur (2003), it is seen that their estimator is slightly more efficient, which is not surprising, as our method is much more general: we impose no restrictions on the nature of the exposure or the instrument, whereas Loeys and Goetghebeur (2003) require both to be binary and, further, that the control group has no access to the treatment (no contamination).
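The mixture identity above translates directly into code; a sketch follows, in which the smoothed cumulative hazard functions Lam11 and Lam01, the compliance probability p1, and the upper search bound are all assumptions supplied by the user.

```r
## Sketch: construct the (G = 0, X = 0) survival function so that G is
## independent of T^0 at all times. Lam11, Lam01: assumed smoothed cumulative
## hazard functions for the (G=1,X=1) and (G=1,X=0) groups; p1 = P(X=1|G=1).
S00 <- function(t, p1, psi, Lam11, Lam01) {
  p1 * exp(-Lam11(t) * exp(-psi)) + (1 - p1) * exp(-Lam01(t))
}

## Draw control-arm survival times by inverse transform: solve S00(t) = U
## for U ~ Uniform(0,1); `upper` must exceed the time range of interest.
r_T00 <- function(n, p1, psi, Lam11, Lam01, upper = 200) {
  sapply(runif(n), function(u)
    uniroot(function(t) S00(t, p1, psi, Lam11, Lam01) - u,
            lower = 0, upper = upper)$root)
}
```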
Fig. 2. Survival curves for $$T$$ in the three groups defined by treatment assignment ($$G$$) and treatment actually taken ($$X$$). The step functions are the estimates based on the LIVER data, and the smooth curves satisfy the restriction that $$T^0$$ and $$G$$ are independent at all time points. In the three individual plots, the estimated survival curves from the LIVER data and pointwise confidence intervals are included.

Table 1. Simulation study based on the LIVER data; true value of $$\psi$$ is 0.25. Mean and s.d. are the empirical properties of $$\hat\psi_t$$.

  Sample size n   Time t   Single solution (%)   Mean    s.d.    Mean of as. s.d.   Coverage of as. CI (%)
  109             40       100.00                0.228   0.411   0.395              95.3
  200             40       100.00                0.239   0.293   0.288              95.1
  1000            40       100.00                0.250   0.131   0.128              93.9
  5000            40       100.00                0.250   0.057   0.057              95.2

Table 2. Simulation study based on the LIVER data; true value of $$\psi$$ is 0.25. Mean and s.d. are the empirical properties of $$\hat\psi$$.

  Sample size n   Optimal time   Finding optimal time (%)   Mean    s.d.    Mean of as. s.d.   Coverage of as. CI (%)
  109             45.2           99.95                      0.183   0.401   0.400              95.9
  200             52.4           100.00                     0.215   0.267   0.269              95.2
  1000            57.6           100.00                     0.250   0.120   0.119              95.0
  5000            57.9           100.00                     0.248   0.054   0.053              94.5

Table 3. Simulation study with setup corresponding to Table 1 of Loeys and Goetghebeur (2003). The first three result columns give the properties of $$\hat\psi_t$$; the last three give those of $$\hat\psi$$.

  True psi   n     Mean     s.d.   as. s.d.   Mean     s.d.   as. s.d.
  0          150   -0.008   0.45   0.45       -0.040   0.44   0.45
  0.182      150    0.184   0.45   0.44        0.152   0.42   0.44
  -0.223     150   -0.226   0.48   0.47       -0.276   0.45   0.48
  0.405      150    0.405   0.46   0.44        0.348   0.42   0.43
  -0.693     150   -0.688   0.52   0.50       -0.750   0.47   0.49

5. Discussion

In this article, we have proposed flexible strategies for estimating the effect of an arbitrary exposure on a time-to-event endpoint on the hazard ratio scale, on the basis of an IV. Our proposal readily extends to allow for the incorporation of covariates $$C$$, and for relaxation of the IV assumption to hold only after conditioning on those covariates. This can be done by substituting the term $$h(G)$$ in the estimating equations by some (arbitrary) function $$h(G,C)$$, and replacing the corresponding expectation $$E\left\{h(G)\right\}$$ by the conditional expectation $$E\left\{h(G,C)|C\right\}$$ under some model. We have chosen to work towards a proposal that is easy to implement using standard statistical software, e.g. using the R package timereg (Martinussen and Scheike, 2006), as we did for the results presented in this article. We have moreover chosen to base estimation on a single, well-chosen time point, in view of simplicity as well as of concerns that the working association model on which the analysis relies might otherwise be more likely incongenial with the structural model and the IV assumptions. A further concern about model incongeniality is that it may in principle invalidate tests of the null hypothesis of no causal effect. This can be remedied via a locally robust procedure, as in Vansteelandt and others (2011).
Such a procedure is readily achieved upon substituting the association model by a complementary log-log model for the at-risk indicator at the considered time $$t$$. This model should minimally include an intercept and a main effect of the IV, and the fitting procedure should involve inverse-probability-of-censoring weighting in the presence of censoring. As in many other IV analyses, identification of the causal effect in our proposal relies on the NEM assumption. The fact that subjects at different levels of the IV are exchangeable does not suffice to justify this assumption, because subjects with a given exposure level but different levels of the IV may fail to be exchangeable, and may therefore experience different effects of the considered exposure level. In future work, we will instead study identifiability under specific (homogeneity) assumptions on the selection mechanism, as in Tchetgen and Vansteelandt (2013). We foresee that this will better enable sensitivity analyses towards violation of these assumptions, simplify the construction of congenial parameterizations, and make the estimation of population-averaged effects (instead of treatment effects in the treated) possible.

Since the moment condition underlying (2.4) holds at all times, differentiating it with respect to $$t$$ gives
\begin{align*}
0= E[\{G-E(G)\}S_{\beta}(t|X,G)^{\exp{(-\psi X)}}{\exp{(-\psi X)}}\lambda(t|X,G)],
\end{align*}
which leads to the estimating equation
\begin{equation}
\sum_i\int_0^{\nu}(G_i-\overline{G})\exp\left[-\hat\Lambda_0(t)\exp{\{(\hat\beta-\psi) X_i+\hat\gamma G_i\}}\right] \exp{\{(\hat\beta-\psi) X_i+\hat\gamma G_i\}}{\rm d}\hat\Lambda_0(t)=0,
\end{equation}
(5.1)
where $$\hat\Lambda_0(t)$$ is the Breslow estimator based on the associational Cox model (2.5) with $$\beta=\beta_X$$, $$\gamma=\beta_G$$ and $$\beta_{XG}= 0$$. This approach uses the whole time span, but in preliminary simulation studies we found no gain in efficiency compared with the simpler estimator suggested in this article. Future work may investigate how to weight the contributions relating to different times, and more specifically whether some optimal weights exist.

Finally, we briefly describe a possible goodness-of-fit procedure for the structural Cox model, basing the estimation on a time point $$\omega$$. The suggested procedure mimics that of Lin and others (1993) for the (associational) Cox model. Write the estimating equation as $$U(\psi,\hat \theta_{\omega})=0$$, where
\begin{equation}
U(\psi,\theta_{\omega})=\sum_i\{G_i-E(G)\}\exp\left[-\Lambda_0(\omega)\exp{\{(\beta-\psi) X_i+\gamma G_i\}}\right]
\end{equation}
(5.2)
and where $$\theta_{t}=\{\phi,\Lambda_0(t),\beta,\gamma\}$$ with $$\phi=E(G)$$. Write (5.2), with $$\theta_{\omega}$$ replaced by $$\hat\theta_{\omega}$$, as $$U(\omega,\psi)$$, and define the goodness-of-fit process by $$U(t,\hat\psi)$$, $$t\le \omega$$. The idea is then to find the influence function of this process, after which the re-sampling approach of Lin and others (1993) can be used to generate processes from the limit distribution, and thereby assess whether the observed process deviates from what the model predicts. A detailed study of this approach will be communicated elsewhere.
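To fix ideas, the observed process $$U(t,\hat\psi)$$ is easy to compute once the association-model quantities are available; a sketch follows, in which Lam0 denotes an estimated cumulative baseline hazard function (e.g., a step function built from the Breslow estimator) and all inputs are assumed given.

```r
## Sketch: observed goodness-of-fit process U(t, psi_hat) from (5.2),
## evaluated over a grid of times. Lam0 is an assumed function returning the
## estimated cumulative baseline hazard; beta, gamma are the association-model
## estimates and psi_hat the structural estimate.
U_process <- function(tgrid, psi_hat, G, X, Lam0, beta, gamma) {
  sapply(tgrid, function(t)
    sum((G - mean(G)) *
        exp(-Lam0(t) * exp((beta - psi_hat) * X + gamma * G))))
}
```

Resampling from the limit distribution, as in Lin and others (1993), would then be needed to judge whether the observed process is extreme.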
6. Software

A simulation script with data is available at https://github.com/TorbenMart/IV_Str_Cox_2017.

Supplementary material

Supplementary material is available online at http://biostatistics.oxfordjournals.org.

Acknowledgments

Torben Martinussen and Ditte Nørbo Sørensen acknowledge support from the Dynamical Systems Interdisciplinary Network, University of Copenhagen. Conflict of Interest: None declared.

Appendix

In the case of a binary instrument and exposure, a non-parametric specification of the association model, used to predict $$P(T>t| G,X)$$, is the Aalen additive hazards model
$$ \lambda(t\vert X, G)=\beta_0(t)+ \beta_X(t)X+ \beta_G(t)G+ \beta_{XG}(t)XG, $$
where the regression functions are left unspecified. The conditional cumulative hazard is
\begin{align*}
\Lambda(t\vert X, G) &= B_0(t)+ B_X(t)X+ B_G(t)G+ B_{XG}(t)XG,
\end{align*}
where $$B_0(t)=\int_0^t\beta_0(s)\, {\rm d}s$$, and similarly for the other regression coefficients. The parameter $$\beta_t$$ is here given by the integrated regression coefficients, $$\beta_t=B(t)$$ with $$B(t)=\left\{B_0(t), B_X(t), B_G(t), B_{XG}(t)\right\}^T$$, and we use Aalen's least squares estimator (Martinussen and Scheike, 2006) to estimate $$B(t)$$. The i.i.d. decomposition corresponding to the parameter $$\beta_t$$ using Aalen's least squares estimator is
\begin{align*}
\epsilon_i^{\beta_t}(t)=\int_0^t A^{-1}(s)X(s)_{(i,.)}^T{\rm d}M_i(s),
\end{align*}
where $$X(t)$$ is the $$n\times 4$$ matrix with $$i$$th row
$$ X(t)_{(i,.)}= \left\{X(t)_{(i,1)}, \ldots ,X(t)_{(i,4)}\right\}= \left\{Y_i(t), Y_i(t)X_i, Y_i(t)G_i, Y_i(t)X_iG_i\right\}\!, $$
$$A(t)$$ is the limit in probability of $$\frac{1}{n}X(t)^TX(t)$$, and $$M_i(t) = N_i(t)-\int_0^tX(s)_{(i,.)}{\rm d}B(s)$$. This i.i.d. decomposition can also be extracted in R using the aalen() function from the package timereg, as sketched below.
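In code, this might look as follows; again, the argument resample.iid and the component B.iid are assumptions to be checked against the timereg documentation.

```r
## Sketch: non-parametric Aalen additive hazards fit with its i.i.d.
## decomposition via timereg; resample.iid and B.iid are assumed names.
library(timereg)

dat$XG <- dat$X * dat$G
fit <- aalen(Surv(time, status) ~ X + G + XG, data = dat, resample.iid = 1)

## Assumed: B.iid is a list with one matrix per subject, holding the i.i.d.
## decomposition of B(t) = {B_0(t), B_X(t), B_G(t), B_XG(t)} at the jump times.
eps_B <- fit$B.iid
```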
References

Baker, S. G. (1998). Analysis of survival data from a randomized trial with all-or-none compliance: estimating the cost-effectiveness of a cancer screening program. Journal of the American Statistical Association 93, 929–934.

Davey-Smith, G. and Ebrahim, S. (2003). Mendelian randomization: can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology 32, 1–22.

Frangakis, C. E. and Rubin, D. B. (1999). Addressing complications of intention-to-treat analysis in the combined presence of all-or-none treatment-noncompliance and subsequent missing outcomes. Biometrika 86, 365–379.

Lin, D. Y., Wei, L. J. and Ying, Z. (1993). Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika 80, 557–572.

Loeys, T. and Goetghebeur, E. (2003). A causal proportional hazards estimator for the effect of treatment actually received in a randomized trial with all-or-nothing compliance. Biometrics 59, 100–105.

Loeys, T., Goetghebeur, E. and Vandebosch, A. (2005). Causal proportional hazards models and time-constant exposure in randomized clinical trials. Lifetime Data Analysis 11, 435–449.

MacKenzie, T. A., Løberg, M. and O'Malley, A. J. (2016). Patient centered hazard ratio estimation using principal stratification weights: application to the NORCCAP randomized trial of colorectal cancer screening. Observational Studies 2, 29–50.

MacKenzie, T. A., Tosteson, T. D., Morden, N. E., Stukel, T. A. and O'Malley, A. J. (2014). Using instrumental variables to estimate a Cox's proportional hazards regression subject to additive confounding. Health Services and Outcomes Research Methodology 14, 54–68.

Martinussen, T. and Scheike, T. H. (2006). Dynamic Regression Models for Survival Data. New York: Springer.

Martinussen, T., Vansteelandt, S., Tchetgen Tchetgen, E. J. and Zucker, D. M. (2017). Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models. Biometrics, https://doi.org/10.1111/biom.12699.

Newey, W. K. and McFadden, D. (1994). Large sample estimation and hypothesis testing. In: Engle, R. and McFadden, D. (editors), Handbook of Econometrics, Volume 4. Amsterdam: North-Holland.

Nie, H., Cheng, J. and Small, D. S. (2011). Inference for the effect of treatment on survival probability in randomized trials with noncompliance and administrative censoring. Biometrics 67, 1397–1405.

Robins, J. and Rotnitzky, A. (2004). Estimation of treatment effects in randomised trials with non-compliance and a dichotomous outcome using structural mean models. Biometrika 91, 763–783.

Robins, J. and Tsiatis, A. A. (1991). Correcting for non-compliance in randomized trials using rank preserving structural failure time models. Communications in Statistics - Theory and Methods 20, 2609–2631.

Skaaby, T., Husemoen, L. L. N., Martinussen, T., Thyssen, J. P., Meldgaard, M., Heinsbæk Thuesen, B., Pisinger, C., Jørgensen, T., Johansen, J. D., Menné, T. and others. (2013). Vitamin D status, filaggrin genotype and cardiovascular risk factors: a Mendelian randomisation approach. PLoS One 8(2), e57647.

Tchetgen Tchetgen, E. J. and Vansteelandt, S. (2013). Alternative identification and inference for the effect of treatment on the treated with an instrumental variable. Harvard University Biostatistics Working Paper Series, Working Paper 166.

Tchetgen Tchetgen, E. J., Walter, S., Vansteelandt, S., Martinussen, T. and Glymour, M. (2015). Instrumental variable estimation in a survival context. Epidemiology 26, 402–410.

Tsiatis, A. A. (2006). Semiparametric Theory and Missing Data. New York: Springer.

Vansteelandt, S., Bowden, J., Babanezhad, M. and Goetghebeur, E. (2011). On instrumental variables estimation of causal odds ratios. Statistical Science 26, 403–422.

Wan, F., Small, D., Bekelman, J. E. and Mitra, N. (2015). Bias in estimating the causal hazard ratio when using two-stage instrumental variable methods. Statistics in Medicine 34, 2235–2265.

© The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
