On edge correction of conditional and intrinsic autoregressions

SUMMARY

This paper discusses edge correction for a large class of conditional and intrinsic autoregressions on two-dimensional finite regular arrays. The proposed method includes a novel reparameterization, retains the simple neighbourhood structure, ensures the nonnegative definiteness of the precision matrix, and enables scalable matrix-free statistical computation. The edge correction provides new insight into how higher-order differencing enters into the precision matrix of a conditional autoregression.

1. Introduction

This paper is devoted principally to the following topic: how to resolve the boundary condition or edge correction needed for fast computability and complexity reduction of conditional or intrinsic autoregressions on rectangular lattices. Following Künsch (1987) and Besag & Kooperberg (1995), constructions of various elaborate forms of higher neighbourhood-order conditional and intrinsic autoregressions have come to the forefront of spatial statistics, but their computation remains a challenge. The paper provides new perspectives on these models and outlines a way to resolve their boundary conditions.

Several authors have discussed how neglecting edge corrections can negatively impact the statistical analysis of spatial data; see, e.g., Künsch (1983), Dahlhaus & Künsch (1987), Buckley (1994), Besag & Kooperberg (1995), Aykroyd (1998), Dunmur & Titterington (1998), Ng et al. (1999), Dryden et al. (2002) and Huang & Ogata (2002). While many authors have found toroidal edge correction effective, others have resorted to data tapering, approximations of maximum likelihood equations, use of fixed and reflective boundary conditions and sophisticated methods such as Dempster’s algorithm to reduce edge effects. Nevertheless, edge correction for the Besag–Kooperberg locally quadratic intrinsic autoregression presents new challenges. Toroidal boundary conditions are not effective here, and although data tapering works for parameter estimation, it can lead to significant loss of high-frequency properties if the objective is to use this autoregression as a latent process in a hierarchical spatial model. Dempster’s algorithm can be computationally intensive, since it requires solving a system of linear equations of size equal to the number of boundary lattice points, and may require numerical integration for certain covariance terms. Simply put, Dempster’s and other high-rank spatial precision matrix completion algorithms are generally not scalable; see, e.g., Dahl et al. (2008) and Candès & Recht (2012).

In this paper, we derive a reparameterization that provides edge correction and enables scalable matrix-free computation. The proposed edge correction is linked with reflective boundary conditions explored in Buckley (1994) for the discretized thin-plate spline. Its advantages are retention of simple neighbourhood structures and preservation of nonnegative definiteness of the precision matrix. In addition, it gives rise to an elegant spectral decomposition of the precision matrix and allows use of various scalable and matrix-free conjugate-gradient numerical algorithms, expanding the scope of statistical computations in Dutta & Mondal (2015, 2016) to higher neighbourhood-order conditional and intrinsic autoregressions and related lattice models.
2. Conditional and intrinsic autoregressions on $$\mathcal{Z}^2$$

Let $$\{X_{u,v} : u,v \in \mathcal{Z} = 0, \pm 1, \ldots\}$$ be a Gaussian conditional autoregression (Besag & Kooperberg, 1995) defined on the two-dimensional integer lattice $$\mathcal{Z}^2$$ via the conditional mean \begin{equation} E \{ X_{u,v} \mid X_{u',\,v'}, (u',v') \ne (u,v) \} = \sum_k \sum_l \beta_{k,\,l} X_{u-k, v-l} \end{equation} (1) and variance $${\rm var\,} \{X_{u,v} \mid X_{u',\,v'}, (u',v') \ne (u,v) \} = \tau^{-1}$$, where the real coefficients $$\beta_{k,\,l}$$ are nonzero only on a finite set $${\cal N}$$, and $$\beta_{0,0} =0$$, $$\beta_{k,\,l} = \beta_{-k,-l}= \beta_{-k,l}=\beta_{k,-l}$$. The set $${\cal N}$$ defines the neighbours of the lattice point $$(0,0)$$, and two lattice points $$(u,v)$$ and $$(k,l)$$ are said to be neighbours of each other if $$(u-k,v-l)$$ belongs to $${\cal N}$$. Let $$W$$ denote a nonnegative definite matrix, with rows and columns indexed by lattice points $$h= (u,v)$$ and $$h'=(u',v')$$ on $$\mathcal{Z}^2$$, representing the inverse covariance matrix, or precision matrix, of $$X_{u,v}$$. Then the matrix $$W$$ is sparse with entries $$W_{h,h} =\tau$$ and $$W_{h,h'} =- \tau \beta_{u-u', v-v'}$$.

Following Künsch (1987), we call $$X_{u,v}$$ a stationary autoregression if $${\rm var\,} (X_{u,v})$$ is finite. Furthermore, we call $$X_{u,v}$$ an intrinsic autoregression of integer order $$r \geqslant 0$$ if any $$r$$th-order increments of $$X_{u,v}$$, namely $$\sum_u \sum_v c_{u,v} X_{u,v}$$ with $$\sum_u \sum_v c_{u,v} u^i v^j =0$$, for $$i$$ and $$j$$ nonnegative integers such that $$0 \leqslant i+j \leqslant r$$, have a well-defined probability distribution. For a stationary autoregression, the positive definiteness of $$W$$ is equivalent to the condition that $$\sum_k \sum_l \beta_{k,\,l} \cos (\omega k +\eta l) < 1$$ for $$-\pi < \omega, \eta \leqslant \pi$$. However, if $$X_{u,v}$$ is an intrinsic autoregression of order $$r$$, the matrix $$W$$ is singular with rank deficiency $$(r+1)(r+2)/2$$. The positive semidefiniteness of $$W$$ now translates to the conditions that (i) $$\sum_k \sum_l \beta_{k,\,l} \cos (\omega k +\eta l) < 1$$ for $$(\omega, \eta) \ne (0,0)$$, $$-\pi < \omega, \eta \leqslant \pi$$; (ii) $$\sum_k \sum_l \beta_{k,\,l} =1$$; and (iii) for $$r>0$$, all partial derivatives of $$\sum_k\sum_l \beta_{k,\,l} \cos (k\omega +l \eta)$$ up to order $$r$$ vanish at $$(\omega, \eta) =(0,0)$$.

Stationary or not, the spectral density function for this autoregression takes the form \begin{equation} f(\omega,\eta)\; =\; \tau^{-1}\, \left\{ 1- \sum_k \sum_l \beta_{k,\,l} \cos (\omega k +\eta l) \right\}^{-1}, \quad \omega, \eta \in (-\pi,\pi]\text{.} \end{equation} (2) Thus, there are several ways to think about a conditional autoregression on $$\mathcal{Z}^2$$, such as the conditional mean and variance structure in (1), the spectral density $$f$$ in (2) whose reciprocal is a finite cosine series with coefficients $$\beta_{k,\,l}$$, or the precision matrix $$W$$.
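For a stationary model, the positivity condition on the cosine series and the spectral density (2) can be checked numerically on a frequency grid. The following sketch is illustrative only: the 4-neighbour coefficients, the grid resolution and the variable names are hypothetical choices, not part of the development above.

```python
import numpy as np

# Hypothetical 4-neighbour coefficients on the nonnegative quadrant of N,
# chosen so that the cosine sum stays below one (stationary case).
beta = {(1, 0): 0.2, (0, 1): 0.2}
tau = 1.0

omega, eta = np.meshgrid(np.linspace(-np.pi, np.pi, 201),
                         np.linspace(-np.pi, np.pi, 201))

# Sum over all symmetric copies (k,l), (-k,-l), (-k,l), (k,-l) of each neighbour.
cosine_sum = np.zeros_like(omega)
for (k, l), b in beta.items():
    mult = 2.0 if k * l == 0 else 4.0
    cosine_sum += mult * b * np.cos(omega * k) * np.cos(eta * l)

assert np.all(cosine_sum < 1.0)          # condition for positive definiteness of W
f = (1.0 - cosine_sum) ** (-1) / tau     # spectral density (2) on the grid
```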
In this paper, we focus on the expansion of the cosine series in the reciprocal of $$f$$ using the identity \begin{equation} 1 - \sum_k \sum_l \beta_{k,\,l} \cos(\omega k +\eta l) = \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} (1-\cos \omega)^k (1-\cos \eta)^l, \end{equation} (3) where \begin{gather*} \theta_{0,0} =1+ \sum_{k \geqslant 0} \sum_{l \geqslant 0} \beta_{k,\,l} c_{k,0} c_{l,0} d_{k,\,l}, \quad \theta_{i,\,j} = \sum_{k\geqslant i} \sum_{l \geqslant j} \beta_{k,\,l} c_{k,\,i} c_{l,\,j} d_{k,\,l} \quad (i, j \geqslant0, i+j>0), \end{gather*} with \begin{gather*} c_{0,0}=1, \quad c_{k,i} = k (-2)^{i+1} \frac{ (k+i-1)! }{(k-i)! (2i)!} \quad (i=0, 1, \ldots, k) (k=1, 2, \ldots) \end{gather*} and $$d_{k,\,l}=1$$ if $$kl=0$$ and $$-1$$ otherwise; see the Appendix for details. Thus, $$\theta_{k,\,l}$$ reparameterize the coefficients $$\beta_{k,\,l}$$ and $$\sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} (2k)! (2l)! / \{ (k!\, l!)^2 \} = 1$$.

For stationary autoregressions, the positive definiteness of $$W$$ implies that the polynomial $$P(z)=\sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} z_1^k z_2^l$$, where $$z=(z_1,z_2)^{{\tiny{\mbox{T}}}}$$, $$0 \leqslant z_1, z_2 \leqslant 2$$, is strictly positive and necessarily $$\theta_{0,0} >0$$. The positive definiteness holds if $$\theta_{k,\,l}$$ are positive. However, some $$\theta_{k,\,l}$$ can be negative as long as $$P(z) >0$$ for $$0 \leqslant z_1, z_2 \leqslant 2$$. Finding the boundary of the set of possible values of $$\theta_{k,\,l}$$ can be quite difficult even though this set is convex. However, for given values of $$\theta_{k,\,l}$$, one can check the positivity of $$P(z)$$ by applying the results of Schmüdgen (1991) and Putinar (1993) and solving a semidefinite optimization problem. We refer to Lasserre (2009) for further details. For an intrinsic autoregression of order $$r \geqslant 0$$, the $$r$$th-order increments of $$X_{u,v}$$ have a well-defined distribution and the spectral density $$f$$ has a pole of order $$2 r +2$$ at the origin. This implies that we must have $$\theta_{k,\,l} =0$$ for $$k,l \geqslant 0$$ with $$k+l \leqslant r$$ and $$P(z) > 0$$ for $$(z_1,z_2) \ne (0,0)$$ with $$0 \leqslant z_1,z_2 \leqslant 2$$. For given values of $$\theta_{k,\,l}$$, this condition on $$P(z)$$ can again be checked by solving a semidefinite optimization problem. For a $$0$$th-order intrinsic autoregression, any contrasts $$X_{u,v}-X_{u',v'}$$ have a well-defined distribution and the parameters must satisfy the conditions that $$\theta_{0,0}=0$$ and $$\theta_{1,0}, \theta_{0, 1} >0$$.

Stationary or not, equation (3) allows us to express a quadratic form of $$W$$ as linear combinations of squared increments of the variables. Specifically, we can write \begin{equation} \tau^{-1} x^{{\tiny{\mbox{T}}}} W x = \sum_u \sum_v \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \Bigl\{ (I-B_1)^k (I-B_2)^l x_{u,v} \Bigr\}^2, \end{equation} (4) where $$I$$ is the identity operator, and $$B_1$$ and $$B_2$$ are the backward shift operators in the $$x$$- and $$y$$-directions: $$I x_{u,v} = x_{u,v}$$, $$B_1^k x_{u,v} = x_{u-k,v}$$ and $$B_2^l x_{u,v} = x_{u,v-l}$$; see the Appendix for details.
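A small numerical sketch may make the reparameterization concrete. Assuming the $$c_{k,\,i}$$ and $$d_{k,\,l}$$ formulas above, the helper below (function and variable names are hypothetical) maps a set of $$\beta_{k,\,l}$$ on the nonnegative quadrant of $${\cal N}$$ to $$\theta_{k,\,l}$$, checks the normalization, and verifies identity (3) on a frequency grid; the 12-neighbour coefficients used are illustrative values satisfying the locally quadratic constraints discussed in §4.

```python
import numpy as np
from math import comb, factorial

def c(k, i):
    # Coefficients c_{k,i} defined above; c_{0,0} = 1.
    if k == 0 and i == 0:
        return 1.0
    return k * (-2.0) ** (i + 1) * factorial(k + i - 1) / (factorial(k - i) * factorial(2 * i))

def theta_from_beta(beta):
    # Map beta_{k,l} (nonnegative quadrant of N only) to theta_{k,l} via (3).
    kmax = max(k for k, _ in beta)
    lmax = max(l for _, l in beta)
    theta = np.zeros((kmax + 1, lmax + 1))
    for (k, l), bkl in beta.items():
        d = 1.0 if k * l == 0 else -1.0
        for i in range(k + 1):
            for j in range(l + 1):
                theta[i, j] += bkl * c(k, i) * c(l, j) * d
    theta[0, 0] += 1.0
    return theta

# Hypothetical 12-neighbour coefficients satisfying the locally quadratic constraints.
beta = {(1, 0): 1/4, (0, 1): 1/4, (1, 1): 1/8, (2, 0): -1/8, (0, 2): -1/8}
theta = theta_from_beta(beta)   # yields theta_{2,0} = theta_{0,2} = 1/2, theta_{1,1} = -1/2

# Normalization: sum_k sum_l theta_{k,l} 2^{-(k+l)} (2k)!(2l)!/{(k! l!)^2} = 1.
norm = sum(theta[i, j] * 2.0 ** (-(i + j)) * comb(2 * i, i) * comb(2 * j, j)
           for i in range(theta.shape[0]) for j in range(theta.shape[1]))
assert np.isclose(norm, 1.0)

# Check identity (3) on a grid, summing over all symmetric copies of each (k,l).
w, e = np.meshgrid(np.linspace(-np.pi, np.pi, 101), np.linspace(-np.pi, np.pi, 101))
lhs = 1.0 - sum((2.0 if k * l == 0 else 4.0) * bkl * np.cos(w * k) * np.cos(e * l)
                for (k, l), bkl in beta.items())
rhs = sum(theta[i, j] * (1 - np.cos(w)) ** i * (1 - np.cos(e)) ** j
          for i in range(theta.shape[0]) for j in range(theta.shape[1]))
assert np.allclose(lhs, rhs)
```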
3. Conditional and intrinsic autoregressions on a finite torus lattice

Both stationary and intrinsic autoregressions have analogues on finite torus lattices. In particular, when restricted to a finite torus lattice $${\cal T}$$ of size $$m \times n$$ with $$m, n$$ positive integers and $$mn >1$$, the density of $$X_{u,v}$$ is proportional to $$ \tau^{(m n)/2} \exp(- x^{{\tiny{\mbox{T}}}} W_T x/2), $$ where \begin{equation} \tau^{-1} x^{{\tiny{\mbox{T}}}} W_T x = \sum_{(u,v) \in {\cal T} } \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \Bigl\{ (I-B_1)^k (I-B_2)^l x_{u,v} \Bigr\}^2\text{.} \end{equation} (5) The precision matrix $$W_T$$ is positive definite in the stationary case. However, for any intrinsic autoregression on $${\cal T}$$, $$W_T$$ is a singular matrix with rank $$nm-1$$. Thus, even if $$X_{u,v}$$ on $${\cal Z}^2$$ is intrinsic of order $$r \geqslant 0$$ with $$\theta_{k,\,l} =0$$ for $$k,l \geqslant 0$$ such that $$k+l \leqslant r$$, its analogue on $${\cal T}$$ is always intrinsic of order zero irrespective of the value of $$r$$. Specifically, all contrasts of $$X_{u,v}$$ on $${\cal T}$$ have a proper distribution and the zero eigenvalue of $$W_T$$ corresponds to the eigenvector $$1$$.

4. Edge corrections on finite regular arrays

In practice, our interest is often directed at a finite regular lattice grid $${\cal L}=\{ (k,l): 0 \leqslant k \leqslant p-1, 0 \leqslant l \leqslant q-1\}$$ of size $$p \times q$$. However, the exact marginal distribution of the infinite lattice process $$X_{u,v}$$ on $${\cal L}$$ is not very tractable. For example, exact covariance computation for suitable contrasts of $$X_{u,v}$$ via Fourier inversion of (2) is extremely difficult in most instances and we need alternative computational methods. Furthermore, the restriction of the formula (4) or (5) on $${\cal L}$$ can fail to yield nonnegative definiteness of $$W$$ or $$W_T$$, particularly when some $$\theta_{k,\,l}$$ are negative. Similarly, a direct marginalization of $$X_{u,v}$$ from a large embedded $$m \times n$$ torus $${\cal T}$$ to a $$p \times q$$ regular grid $${\cal L}$$ destroys the sparse neighbourhood structure by causing the boundary lattice points to form a single clique, and this significantly increases the computational burden. The objective here is to find a natural approximation of the precision matrix involving boundary corrections on $${\cal L}$$.

Let $$\nabla_s$$ be the first-difference matrix of size $$(s-1) \times s$$ whose $$(i,j)$$ entry is given by $$\nabla_s(i,i)= 1, \nabla_s(i, i+1) = -1$$ for $$i = 1, \ldots, s-1$$, and $$\nabla_s(i,j)= 0$$ otherwise. Let $$V_{s} = \nabla_s^{{\tiny{\mbox{T}}}} \nabla_s$$. Suppose $$V_s = A_s \Lambda_s A_s^{{\tiny{\mbox{T}}}}$$ denotes the spectral representation of $$V_s$$. The matrix $$A_s$$ corresponds to the discrete cosine transform (Ahmed et al., 1974; Rao & Yip, 2014) with entries \[ a_{s,1,j} = s^{-1/2},\quad a_{s,i,j} = (2/s)^{1/2}\cos \{\pi(i-1)(j-{\small{\frac{1}{2}}})/s \} \quad (i=2,\ldots,s) (j=1,\ldots,s)\text{.} \] The $$u$$th eigenvalue of $$V_s$$ is $$\lambda_{u,s} = 2 [ 1- \cos \{ \pi (u-1)/s \} ]$$.

Next, consider restriction of the formulae (4) and (5) on $${\cal L}$$, but approximate $$\sum_u \sum_v \{ (I-B_1)^k (I-B_2)^l x_{u,v} \}^2 $$ by $$x^{{\tiny{\mbox{T}}}} ( V_q^l \otimes V_p^k ) x$$ for any nonnegative $$k$$ and $$l$$. Let $$W_L$$ denote the precision matrix under this approximation. Then $$\tau^{-1} W_L$$ is \begin{equation} \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} V_q^l \otimes V_p^k = A_q \otimes A_p \left( \sum_{k\geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \Lambda_q^l \otimes \Lambda_p^k\right) A_q^{{\tiny{\mbox{T}}}} \otimes A_p^{{\tiny{\mbox{T}}}}\text{.} \end{equation} (6) The edge-corrected precision matrix $$W_L$$ in (6) retains the sparse structure and differs from its torus lattice analogue only in entries near the boundary. The $$(u,v)$$th eigenvalue of $$W_L$$, namely \begin{equation} \rho_{u,v}(W_L) = \tau \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \lambda_{u,p}^k \lambda_{v,q}^l, \end{equation} (7) is always nonnegative because of (3). Thus, we have the following theorem.

Theorem 1. The matrix $$W_L$$ defined in (6) is nonnegative definite.

Thus, equations (3) and (4) provide representations of the reciprocal of the spectral density and the precision matrix of $$X_{u,v}$$ on the infinite lattice $${\cal Z}^2$$, whereas equations (5)–(7) and Theorem 1 provide corresponding approximations on a finite region. On an infinite lattice $${\cal Z}^2$$, the parameters $$\theta_{k,\,l}$$ through the representations (3) and (4) provide new insight into how higher-order differencing enters into $$W$$. On a finite region, corresponding higher-order differencing can be seen in equations (5)–(7). The matrix $$W_L$$ in equation (6) is not just a restriction of (4) or (5) on $${\cal L}$$, as it also contains modification of entries corresponding to the boundary grid points. The matrix $$W_T$$ in equation (5) has a block-circulant structure that allows various scalable and efficient matrix-free statistical computations on $${\cal T}$$; see, e.g., Besag & Moran (1975). Similarly, the matrix $$W_L$$ in equation (6) retains the simple neighbourhood structure and Theorem 1 ensures its nonnegative definiteness. The sparsity of $$W_L$$ and its spectral representation in terms of the discrete cosine transform also allow us to perform various scalable and efficient matrix-free statistical computations on $${\cal L}$$.
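Before turning to examples, a brief numerical sketch of (6) and (7) may be useful. It builds $$V_s$$ from first differences, confirms the discrete cosine eigenvalues, assembles $$W_L$$ for the Besag–Kooperberg values $$\theta_{2,0}=\theta_{0,2}=1/2$$, $$\theta_{1,1}=-1/2$$ used in Example 2 below, and checks (7) and Theorem 1. The grid dimensions and the helper names are illustrative assumptions.

```python
import numpy as np

def diff_matrix(s):
    """First-difference matrix nabla_s of size (s-1) x s."""
    D = np.zeros((s - 1, s))
    i = np.arange(s - 1)
    D[i, i], D[i, i + 1] = 1.0, -1.0
    return D

p, q = 8, 7                                   # illustrative grid dimensions
Vp = diff_matrix(p).T @ diff_matrix(p)
Vq = diff_matrix(q).T @ diff_matrix(q)

# One-dimensional spectra: the eigenvalues of V_s are 2[1 - cos{pi(u-1)/s}].
lam_p = 2 * (1 - np.cos(np.pi * np.arange(p) / p))
lam_q = 2 * (1 - np.cos(np.pi * np.arange(q) / q))
assert np.allclose(np.sort(np.linalg.eigvalsh(Vp)), np.sort(lam_p))

# Edge-corrected precision (6) for the Besag-Kooperberg values of Example 2, tau = 1.
theta = {(2, 0): 0.5, (0, 2): 0.5, (1, 1): -0.5}
mp = np.linalg.matrix_power
WL = sum(t * 2.0 ** (-(k + l)) * np.kron(mp(Vq, l), mp(Vp, k))
         for (k, l), t in theta.items())

# Eigenvalues (7) and Theorem 1: every rho_{u,v} is nonnegative.
rho = sum(t * 2.0 ** (-(k + l)) * np.outer(lam_p ** k, lam_q ** l)
          for (k, l), t in theta.items())
assert np.allclose(np.sort(np.linalg.eigvalsh(WL)), np.sort(rho.ravel()))
assert np.all(rho >= -1e-10)
```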
To better understand formulae (3)–(6) and the edge correction, we now consider the following examples.

Example 1. Consider a $$4$$-neighbour intrinsic autoregression (Besag & Mondal, 2005) on $$\mathcal{Z}^2$$ for which \[ {\cal N} = \{ (0,\pm 1), (\pm 1,0) \}, \quad \beta_{1,0} =\beta_{-1,0}>0, \quad \beta_{0,1} =\beta_{0,-1} >0, \quad \beta_{1,0} + \beta_{0,1} ={\small{\frac{1}{2}}}\text{.} \] Here $$r=0$$ and all contrasts of $$X_{u,v}$$ have proper distributions. Equation (3) gives $$\theta_{1,0} =\beta_{1,0}$$ and $$\theta_{0,1} =\beta_{0,1}$$. Applying edge correction on $${\cal L}$$, we get $$2 \tau^{-1}W_L= \theta_{1,0} V_q \otimes I_p + \theta_{0,1} I_q \otimes V_p$$, and $$2 (x^{{\tiny{\mbox{T}}}} W_L x)/\tau$$ expands to \[ \theta_{1,0} \sum_{u=1}^{p-1} \sum_{v=0}^{q-1} ( x_{u,v} -x_{u-1,v} )^2+ \theta_{0,1} \sum_{u=0}^{p-1} \sum_{v=1}^{q-1} ( x_{u,v} -x_{u,v-1})^2\text{.} \] The edge correction is the same as the restriction of equations (4) and (5) on a finite regular lattice.

Example 2.
Let $$X_{u,v}$$ be a $$12$$-neighbour intrinsic autoregression on $$\mathcal{Z}^2$$ (Besag & Kooperberg, 1995) with the conditional mean $$E \{ X_{u,v} \mid X_{u',v'}, (u',v') \ne (u,v) \}$$ equal to \begin{eqnarray*} && \beta_{1,0} ( X_{u-1,v} + X_{u+1,v} ) + \beta_{0,1}( X_{u,v-1} + X_{u,v+1}) + \beta_{1,1}( X_{u-1,v-1} + X_{u+1,v-1} \nonumber \\ &&\quad + X_{u-1,v+1} + X_{u+1,v+1}) + \beta_{2,0} ( X_{u-2,v} + X_{u+2,v} ) + \beta_{0,2} (X_{u,v-2} + X_{u,v+2}), \end{eqnarray*} and the conditional variance \[ {\rm var\,} \{X_{u,v} \mid X_{u',\,v'}, (u',v') \ne (u,v) \} = \tau^{-1}\text{.} \] Assume that \begin{gather*} \beta_{1,0} + \beta_{0,1} + 2 \beta_{1,1} +\beta_{2,0} +\beta_{0,2} ={\small{\frac{1}{2}}}, \hspace{.1in} \beta_{1,0} + 2 \beta_{1,1} + 4 \beta_{2,0} =0, \hspace{.1in} \beta_{0,1} + 2 \beta_{1,1} + 4 \beta_{0,2} =0, \\ \beta_{2,0}, \hspace{.05in} \beta_{0,2} < 0, \quad \beta_{1,1} < 2 (\beta_{2,0} \beta_{0,2})^{1/2}\text{.} \end{gather*} These restricted coefficients $$\beta_{k,\,l}$$ in the full conditional expectation formula of $$X_{u,v}$$ correspond to a weighted least squares fit of a quadratic surface at $$(0,0)$$ from the values at lattice points in $${\cal N}$$. It follows that the reciprocal of $$f$$ is \[ \tau \, \{ \theta_{2,0} (1- \cos \omega)^2 + \theta_{0,2} (1- \cos \eta)^{2} + \theta_{1,1} (1 - \cos \omega)(1- \cos \eta) \}, \quad \omega, \eta \in (-\pi,\pi], \] where $$\theta_{2,0} = - 4 \beta_{2,0} >0$$, $$\theta_{0,2} =-4 \beta_{0,2} >0$$, and $$\theta_{1,1} = -4 \beta_{1,1}$$. Necessarily $$\theta_{1,1} > -2 (\theta_{2,0} \theta_{0,2})^{1/2}$$. Here $$r=1$$. Therefore, the function $$f(\omega, \eta)$$ has a pole of order four, and the distribution of $$X_{u,v}$$ on the infinite lattice is invariant under the addition of a planar surface. This also implies that only genuine first-order increments of $$X_{u,v}$$, rather than $$X_{u,v}$$ themselves, have a proper distribution. The Besag–Kooperberg $$12$$-neighbour locally quadratic intrinsic autoregression corresponds to $$\theta_{2,0} =\theta_{0,2} = 1/2$$, $$\theta_{1,1}=-1/2$$ and all other $$\theta_{k,\,l}=0$$, whereas Buckley’s discretized thin-plate spline is tied in with $$\theta_{2,0} =\theta_{0,2} = 1/5$$, $$\theta_{1,1}=2/5$$ and all other $$\theta_{k,\,l}=0$$. Applying edge correction on a finite lattice grid $${\cal L}$$, we get $$4 \tau^{-1} W_L = \theta_{2,0} V_q^2 \otimes I_p + \theta_{0,2} I_q \otimes V_p^2 + \theta_{1,1} V_q \otimes V_p$$. The quadratic form $$4(x^{{\tiny{\mbox{T}}}} W_L x)/\tau$$ expands to \begin{eqnarray*} && \theta_{2,0} \sum_{u=2}^{p-1} \sum_{v=0}^{q-1} ( x_{u,v} -2 x_{u-1,v} +x_{u-2,v})^2 + \theta_{0,2} \sum_{u=0}^{p-1} \sum_{v=2}^{q-1} ( x_{u,v} -2 x_{u,v-1} +x_{u,v-2})^2 \nonumber\\ &&\quad + \theta_{1,1} \sum_{u=1}^{p-1}\sum_{v=1}^{q-1} (x_{u,v} -x_{u-1,v} - x_{u,v-1} +x_{u-1,v-1})^2 + \theta_{2,0} \sum_{u=0}^{p-1} (x_{u,0} - x_{u,1})^2 \nonumber\\ &&\quad + \theta_{2,0} \sum_{u=0}^{p-1} (x_{u,q-2} - x_{u,q-1})^2 + \theta_{0,2} \sum_{v=0}^{q-1} \{ (x_{0,v} - x_{1,v})^2 + (x_{p-2,v} - x_{p-1,v})^2\}\text{.} \end{eqnarray*} The first three terms in the expansion of $$4(x^{{\tiny{\mbox{T}}}} W_L x)/\tau$$ are the restriction of equations (4) and (5) on $${\cal L}$$ and the last three terms provide edge correction. 
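This decomposition into interior and boundary terms can be verified numerically. The following sketch, with an arbitrary small grid and a random $$x$$, compares the quadratic form assembled from the Kronecker display with the sum of squared increments above; it uses the Besag–Kooperberg values $$\theta_{2,0}=\theta_{0,2}=1/2$$, $$\theta_{1,1}=-1/2$$, for which the two lattice directions enter symmetrically. The grid size and helper names are illustrative.

```python
import numpy as np

def diff_matrix(s):
    D = np.zeros((s - 1, s))
    i = np.arange(s - 1)
    D[i, i], D[i, i + 1] = 1.0, -1.0
    return D

p, q = 6, 5
x = np.random.default_rng(0).standard_normal((p, q))   # x_{u,v}, u = 0..p-1, v = 0..q-1
Vp = diff_matrix(p).T @ diff_matrix(p)
Vq = diff_matrix(q).T @ diff_matrix(q)
t20 = t02 = 0.5
t11 = -0.5

# 4 x^T W_L x / tau assembled from the Kronecker display of Example 2 (tau = 1).
WL4 = (t20 * np.kron(Vq @ Vq, np.eye(p)) + t02 * np.kron(np.eye(q), Vp @ Vp)
       + t11 * np.kron(Vq, Vp))
xv = x.ravel(order="F")
quad_kron = xv @ WL4 @ xv

# The same quantity from the displayed sum of squared increments.
interior = (t20 * np.sum((x[2:, :] - 2 * x[1:-1, :] + x[:-2, :]) ** 2)
            + t02 * np.sum((x[:, 2:] - 2 * x[:, 1:-1] + x[:, :-2]) ** 2)
            + t11 * np.sum((x[1:, 1:] - x[:-1, 1:] - x[1:, :-1] + x[:-1, :-1]) ** 2))
boundary = (t20 * np.sum((x[:, 0] - x[:, 1]) ** 2) + t20 * np.sum((x[:, -2] - x[:, -1]) ** 2)
            + t02 * np.sum((x[0, :] - x[1, :]) ** 2) + t02 * np.sum((x[-2, :] - x[-1, :]) ** 2))
assert np.isclose(quad_kron, interior + boundary)
```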
Specifically, these last three terms comprise additional sums of squared increments of $$X_{u,v}$$ taken along the two layers of boundary lattice points and are linked with reflective boundary conditions introduced by Buckley (1994) for the discretization of the thin-plate spline on rectangular arrays.

Example 3. Consider a $$24$$-neighbour intrinsic autoregression on $$\mathcal{Z}^2$$ (Rue & Held, 2005, p. 117) with \[ \beta_{1,0} =\beta_{0,1} = \frac{144}{468},\quad \beta_{1,1} = \frac{8}{468}, \quad\beta_{2,0} =\beta_{0,2} = \frac{-18}{468}, \quad \beta_{2, 1} =\beta_{1,2} = \frac{-8}{468}, \quad \beta_{2,2} = \frac{-1}{468}\text{.} \] It follows that \[ \theta_{0,0}=\theta_{1,0}=\theta_{0,1}=0,\quad \theta_{2,0}=\theta_{0,2} = \frac{144}{468}, \quad \theta_{1,1} = \frac{288}{468}, \quad \theta_{2,1}=\theta_{1,2} = \frac{-96}{468}, \quad\theta_{2,2} = \frac{16}{468}\text{.} \] Applying edge correction, we get $$468 \tau^{-1} W_L$$ equal to \[ 36(V_q^2 \otimes I_p + I_q \otimes V_p^2) +72 V_q \otimes V_p - 12 (V_q^2 \otimes V_p + V_q \otimes V_p^2) + V_q^2 \otimes V_p^2\text{.} \] One can check that the expansion of $$(x^{{\tiny{\mbox{T}}}} W_L x)/\tau$$ contains not just the restriction of equations (4) and (5) on $${\cal L}$$ but also a number of additional sums of squared terms involving zero-, first- and second-order increments of $$x_{u,v}$$ taken along the two layers of boundary lattice points.

For an intrinsic autoregression of order $$r$$, only one eigenvalue of $$W_L$$ is zero. This coincides with the torus lattice case. However, the exact precision matrix of the infinite lattice autoregression $$X_{u,v}$$ restricted on $${\cal L}$$ has $$(r+1)(r+2)/2$$ rank deficiencies, as one can always add a polynomial of degree $$r$$ or less to the values of $$X_{u,v}$$ without changing the distributional properties. Thus, depending on the problem in hand, it may become necessary to use ordinary least squares to remove a polynomial surface of degree $$r$$ from the spatial data before applying edge correction.

5. Discussion

For practical considerations of fast computability and complexity reduction, some boundary conditions are necessary when we restrict an infinite lattice process to a finite rectangular lattice. The edge correction in §4 is derived on the basis of this principle. However, Besag & Higdon (1999) noted that equation (4) produces more variability for contrasts near the boundary than for those in the interior of $${\cal L}$$. Thus, in applied settings, the edge correction proposed here will be more effective when used with additional layers of boundary lattice points on $${\cal L}$$.

The proposed edge correction expands the scope of the scalable, statistically efficient matrix-free computation of Dutta & Mondal (2015, 2016) to a wider class of higher neighbourhood-order stationary and intrinsic autoregressions and related lattice-based spatial models. Let $$y = T \gamma + F x + \epsilon$$ be a mixed linear model where $$y$$ is the vector of spatial observations, $$T$$ is the matrix of covariate information, $$\gamma$$ is the vector of covariate effects, $$F$$ is an averaging or incidence matrix, $$x$$ is the vector of spatial effects on a fine embedded $$p \times q$$ lattice grid $${\cal L}$$, and $$\epsilon$$ is the vector of Gaussian residual errors. Assume that $$x$$ follows a higher-neighbourhood-order Gaussian conditional autoregression with an edge-corrected precision matrix $$W_L$$ of the form (6). The structure of (6) and the parameterization in terms of $$\tau \theta_{i,\,j}$$ then allow us to advance the matrix-free scalable $$h$$-likelihood computations detailed in Dutta & Mondal (2015, 2016).
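One way such a matrix-free computation might be organized is sketched below: the eigenvalue array (7) is combined with the two-dimensional discrete cosine transform so that $$W_L$$ is applied without ever being formed, and a linear system is then solved by conjugate gradients. The ridge term standing in for the data-precision contribution of the mixed model, the grid size and the scipy-based implementation are illustrative assumptions, not the computations of Dutta & Mondal (2015, 2016).

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.sparse.linalg import LinearOperator, cg

p, q = 256, 256                                # illustrative grid size
tau = 1.0
theta = {(2, 0): 0.5, (0, 2): 0.5, (1, 1): -0.5}

lam_p = 2 * (1 - np.cos(np.pi * np.arange(p) / p))
lam_q = 2 * (1 - np.cos(np.pi * np.arange(q) / q))
rho = tau * sum(t * 2.0 ** (-(k + l)) * np.outer(lam_p ** k, lam_q ** l)
                for (k, l), t in theta.items())   # eigenvalues (7), a p x q array

def WL_matvec(xvec):
    """Apply W_L to vec(x) via the discrete cosine transform, without forming W_L."""
    X = xvec.reshape(p, q)
    Y = idctn(rho * dctn(X, type=2, norm="ortho"), type=2, norm="ortho")
    return Y.ravel()

# Solve (W_L + c I) x = b by conjugate gradients; the ridge c is an illustrative
# stand-in for the data-precision contribution of a surrounding mixed model.
c = 0.1
A = LinearOperator((p * q, p * q), matvec=lambda v: WL_matvec(v) + c * v, dtype=np.float64)
b = np.random.default_rng(1).standard_normal(p * q)
x_hat, info = cg(A, b)
assert info == 0
```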
In one dimension, factorizations of positive polynomials on an interval are explicitly known; see, e.g., Karlin & Shapley (1953) and Powers & Reznick (2000). These factorizations can lead to further understanding of the structure of the precision matrix of conditional autoregressions on lattice points on a line.

Acknowledgement

I am grateful to Julian Besag for helpful discussions. The editor, the associate editor and a reviewer provided constructive suggestions. This work was supported by the U.S. National Science Foundation.

Appendix

Details of equations (3) and (4)

The conditions $$\beta_{0,0} =0$$ and $$\beta_{k,\,l} = \beta_{-k,-l}= \beta_{-k,l}=\beta_{k,-l}$$ imply that \[ \tau^{-1} f^{-1}(\omega, \eta) = 1- 2 \sum_{k > 0} \beta_{k,0} \cos (\omega k) -2 \sum_{l> 0} \beta_{0,\,l} \cos (\eta l) -4 \sum_{k >0} \sum_{l >0} \beta_{k,\,l} \cos (\omega k) \cos(\eta l)\text{.} \] Applying the Chebyshev polynomial identity (Abramowitz & Stegun, 1964, Ch. 22) \[ \cos ( n \omega) = n \sum_{i=0}^n (-2)^i \frac{ (n+i-1)! }{ (n-i)! (2i)! } (1- \cos \omega)^i, \quad n>0, \] we then obtain equation (3). The reverse identity \[ (1-\cos \omega )^n = b_{n,0} + b_{n,1} \cos( \omega ) + b_{n,2} \cos( 2 \omega ) +\cdots + b_{n,n} \cos( n \omega ), \hspace{.1in} n \geqslant 0, \] where \[ b_{n,0} = 2^{- n} { 2n \choose n}, \hspace{0.2in} b_{n,i} = 2^{1- n} (-1)^{i} { 2n \choose n -i} \hspace{.1in} (i=1, \ldots, n), \] allows us to write \begin{gather*} \sum_{k \geqslant 0} \sum_{l\geqslant 0} \theta_{k,\,l} b_{k, 0} b_{l,0} =1, \qquad \beta_{i,\,j} = - \frac{1}{4} \sum_{k \geqslant i} \sum_{l\geqslant j} \theta_{k,\,l} b_{k,\,i} b_{l,\,j} \quad (i,\,j >0), \\ \beta_{i,\,0} = - \frac{1}{2} \sum_{k \geqslant i} \sum_{l\geqslant 0} \theta_{k,\,l} b_{k,\,i} b_{l,\,0} \quad (i >0), \qquad \beta_{0,\,j} = - \frac{1}{2} \sum_{k \geqslant 0} \sum_{l\geqslant j} \theta_{k,\,l} b_{k,\,0} b_{l,\,j} \quad (j>0)\text{.} \end{gather*}

Next, expanding the right-hand side of (4) in terms of $$x_{0,0}$$, we get \begin{eqnarray*} &&\sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \sum_{i=0}^k \sum_{j=0}^l {k \choose i}^2 {l \choose j}^2 x_{0,0}^2 \\ &&\quad + \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} \sum_{s=-k}^k \sum_{t=-l}^l \sum_{i=0}^k \sum_{j=0}^l (-1)^{s+t} {k \choose i} {k \choose i +s } {l \choose j} {l \choose j +t} 2 x_{s,t} x_{0,0} + \cdots\text{.} \end{eqnarray*} However, \[ \sum_{i=0}^{n-s} {n \choose i} {n \choose i +s } = {2n\choose n-s} \hspace{.1in} (0 \leqslant s \leqslant n)\text{.} \] Thus, the coefficient of $$x_{0,0}^2$$ in equation (4) is \[ \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} 2^{-(k+l)} {2k \choose k} {2l \choose l} = \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} b_{k,\,0} b_{l,\,0} =1\text{.} \] Similarly, for $$s, t >0$$, the coefficient of $$2 x_{s,t} x_{0,0}$$ in equation (4) is \[ \sum_{k \geqslant 0} \sum_{l \geqslant 0} (-1)^{s+t} \theta_{k,\,l} 2^{-(k+l)} {2k \choose k-s} {2l \choose l-t} = \frac{1}{4} \sum_{k \geqslant 0} \sum_{l \geqslant 0} \theta_{k,\,l} b_{k,|s|} b_{l, |t|} = -\beta_{|s|,|t|}, \] and so on. Hence, (4) gives the same conditional mean and variance structures as (1).
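As a check on the reverse identity, the sketch below (helper names hypothetical) recovers the $$\beta_{k,\,l}$$ from the $$\theta_{k,\,l}$$ using the $$b_{n,\,i}$$ formula above, with the Besag–Kooperberg values of Example 2 as a test case.

```python
import numpy as np
from math import comb

def b(n, i):
    """Coefficients b_{n,i} in the reverse identity for (1 - cos w)^n."""
    if i == 0:
        return 2.0 ** (-n) * comb(2 * n, n)
    return 2.0 ** (1 - n) * (-1) ** i * comb(2 * n, n - i)

# Besag-Kooperberg theta values (Example 2); recover the beta coefficients.
theta = {(2, 0): 0.5, (0, 2): 0.5, (1, 1): -0.5}

def beta_from_theta(theta, i, j):
    s = sum(t * b(k, i) * b(l, j) for (k, l), t in theta.items() if k >= i and l >= j)
    if i > 0 and j > 0:
        return -s / 4.0
    if i + j > 0:
        return -s / 2.0
    return s            # i = j = 0: the normalization sum, which should equal 1

assert np.isclose(beta_from_theta(theta, 0, 0), 1.0)
for (i, j), target in {(1, 0): 1/4, (0, 1): 1/4, (1, 1): 1/8,
                       (2, 0): -1/8, (0, 2): -1/8}.items():
    assert np.isclose(beta_from_theta(theta, i, j), target)
```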
References

Abramowitz, M. & Stegun, I. A. (1964). Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables. Washington, DC: Courier Corporation.

Ahmed, N., Natarajan, T. & Rao, K. R. (1974). Discrete cosine transform. IEEE Trans. Comp. 100, 90–3.

Aykroyd, R. G. (1998). Bayesian estimation for homogeneous and inhomogeneous Gaussian random fields. IEEE Trans. Pat. Anal. Mach. Intel. 20, 533–9.

Besag, J. E. & Higdon, D. (1999). Bayesian analysis of agricultural field experiments (with Discussion). J. R. Statist. Soc. B 61, 691–746.

Besag, J. E. & Kooperberg, C. (1995). On conditional and intrinsic autoregressions. Biometrika 82, 733–46.

Besag, J. E. & Mondal, D. (2005). First-order intrinsic autoregressions and the de Wijs process. Biometrika 92, 909–20.

Besag, J. E. & Moran, P. A. P. (1975). On the estimation and testing of spatial interaction in Gaussian lattice processes. Biometrika 62, 555–62.

Buckley, M. J. (1994). Fast computation of a discretized thin-plate smoothing spline for image data. Biometrika 81, 247–58.

Candès, E. & Recht, B. (2012). Exact matrix completion via convex optimization. Commun. ACM 55, 111–9.

Dahl, J., Vandenberghe, L. & Roychowdhury, V. (2008). Covariance selection for nonchordal graphs via chordal embedding. Optimiz. Meth. Software 23, 501–20.

Dahlhaus, R. & Künsch, H. R. (1987). Edge effects and efficient parameter estimation for stationary random fields. Biometrika 74, 877–82.

Dryden, I., Ippoliti, L. & Romagnoli, L. (2002). Adjusted maximum likelihood and pseudo-likelihood estimation for noisy Gaussian Markov random fields. J. Comp. Graph. Statist. 11, 370–88.

Dunmur, A. P. & Titterington, D. M. (1998). Mean fields and two-dimensional Markov random fields in image analysis. Pat. Anal. Applic. 1, 248–60.

Dutta, S. & Mondal, D. (2015). An $$h$$-likelihood method for spatial mixed linear models based on intrinsic autoregressions. J. R. Statist. Soc. B 77, 699–726.

Dutta, S. & Mondal, D. (2016). Residual maximum likelihood analysis for spatial mixed linear models based on intrinsic Matérn dependence with nugget effects. Electron. J. Statist. 10, 2856–93.

Huang, F. & Ogata, Y. (2002). Generalized pseudo-likelihood estimates for Markov random fields on lattice. Ann. Inst. Statist. Math. 54, 1–18.

Karlin, S. & Shapley, L. S. (1953). Geometry of Moment Spaces. Providence, Rhode Island: American Mathematical Society.

Künsch, H. R. (1983). Approximations to the maximum likelihood equations for some Gaussian random fields. Scand. J. Statist. 10, 239–46.

Künsch, H. R. (1987). Intrinsic autoregressions and related models on the two-dimensional lattice. Biometrika 74, 517–24.

Lasserre, J. B. (2009). Moments, Positive Polynomials and their Applications. Singapore: World Scientific.

Ng, M. K., Chan, R. H. & Tang, W. C. (1999). A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comp. 21, 851–66.
Powers, V. & Reznick, B. (2000). Polynomials that are positive on an interval. Trans. Am. Math. Soc. 352, 4677–92.

Putinar, M. (1993). Positive polynomials on compact semi-algebraic sets. Indiana Univ. Math. J. 42, 969–84.

Rao, K. R. & Yip, P. (2014). Discrete Cosine Transform: Algorithms, Advantages, Applications. San Diego, California: Academic Press.

Rue, H. & Held, L. (2005). Gaussian Markov Random Fields. Boca Raton, Florida: Chapman and Hall.

Schmüdgen, K. (1991). The $$k$$-moment problem for compact semi-algebraic sets. Math. Ann. 289, 203–6.

© 2018 Biometrika Trust. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
