IMA Journal of Numerical Analysis, Volume Advance Article – Dec 28, 2017

25 pages


- Publisher: Oxford University Press
- Copyright: © The Author(s) 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
- ISSN: 0272-4979
- eISSN: 1464-3642
- DOI: 10.1093/imanum/drx076

Abstract

This paper proves that the approximation of pointwise derivatives of order s of functions in Sobolev space $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ by linear combinations of function values cannot have a convergence rate better than m − s − d/2, no matter how many nodes are used for approximation and where they are placed. These convergence rates are attained by scalable approximations that are exact on polynomials of order at least ⌊m − d/2⌋ + 1, proving that the rates are optimal for given m, s and d. And, for a fixed node set $$X\subset {\mathbb {R}}^{d}$$, the convergence rate in any Sobolev space $${{W_{2}^{m}}}(\Omega )$$ cannot be better than q − s where q is the maximal possible order of polynomial exactness of approximations based on X, no matter how large m is. In particular, scalable stencil constructions via polyharmonic kernels are shown to realize the optimal convergence rates, and good approximations of their error in Sobolev space can be calculated via their error in Beppo–Levi spaces. This allows us to construct near-optimal stencils in Sobolev spaces stably and efficiently, for use in meshless methods to solve partial differential equations via generalized finite differences. Numerical examples are included for illustration.

1. Introduction

We consider discretizations of continuous linear functionals $$\lambda \;:\;U\to {\mathbb {R}}$$ on some normed linear space U of real-valued functions on some bounded domain $$\Omega \subset {\mathbb {R}}^{d}$$. The discretizations are nodal, i.e., they work with values u(xj) of functions u ∈ U on a set X = {x1, …, xM} ⊂ Ω of nodes by \begin{align} \lambda(u)\approx \lambda_{a,X}(u):= \displaystyle{\sum_{j=1}^{M}a_{j}u(x_{j}) }\qquad \text{for all}\ u\in U.
\end{align} (1.1) The background is that most operator equations can be written as infinitely many linear equations \begin{align*} \lambda(u)=f_{\lambda}\ \text{for all}\ \lambda\in \Lambda\subset U^{\ast}, \end{align*}where the functionals evaluate weak or strong derivatives or differential operators like the Laplacian or take boundary values. This means that the classical approach of meshless methods is taken, namely to write the approximations entirely in terms of nodes (Belytschko et al., 1996). Our concern is to find optimal approximations in Sobolev space $${{W_{2}^{m}}}(\Omega )$$ for domains $$\Omega \subset {\mathbb {R}}^{d}$$. Their calculation is computationally costly and very unstable, but we shall prove that there are suboptimal approximations that can be calculated cheaply and stably, namely via scalable approximations that have a certain exactness on polynomials (Section 4) and may be constructed via polyharmonic kernels (Section 5). In particular, we shall show that they can have the same convergence rate as the optimal approximations, and we present the minimal assumptions on the node sets to reach that optimal rate. The application for all of this is that error bounds and convergence rates for nodal approximations to linear functionals enter into the consistency part of the error analysis (Schaback, 2017) of nodal meshless methods. These occur in many papers in science and engineering, e.g., Aboiyar et al. (2010); Agarwal & Basu (2012); Bayona et al. (2012); Chandhini & Sanyasiraju (2007); Flyer et al. (2012); Flyer et al. (2016); Gerace et al. (2009); Hoang-Trieu et al. (2012); Hosseini & Hashemi (2011); Iske (2013); Šarler (2007); Shankar et al. (2015); Shu et al. (2003, 2005); Stevens et al. (2011); Thai-Quang et al. (2012); Tolstykh (2000); Vertnik & Šarler (2011); Yao et al. (2011, 2012), and several authors have analyzed the construction of nodal approximations mathematically, e.g., Davydov & Oanh (2011a,b); Davydov et al. 
(2017); Iske (1995, 2003); Larsson et al. (2013); Wright & Fornberg (2006), but without considering optimal convergence rates. To get started, we present a suitable notion of scalability in Section 2 that allows us to define error functionals εh ∈ U* based on the scaled point set hX for small h > 0 and to prove convergence rates k in the sense that error bounds of the form $$\|\varepsilon _{h}\|_{U^{\ast }}\leqslant Ch^{k}$$ hold for h → 0. The standard derivative order |α| of a pointwise multivariate derivative functional λ(u) := Dαu(0) will reappear as a scaling order s(λ) that governs how the approximations of a functional λ scale for h → 0. Of course, optimal error bounds will crucially depend on the space U and the node set X. If U contains all real-valued polynomials, the achievable convergence rate of an approximation of a functional λ based on a node set X is limited by the maximal convergence rate on the subspace of polynomials. Section 3 will prove that the upper limit of the convergence rate on polynomials is qmax(λ, X) − s(λ), where qmax is the maximal order of polynomials on which the approximation is exact, and that this rate can be reached by scalable approximations constructed via exactness on polynomials. But even if the node set X is large enough to let approximations be exact on high-order polynomials, the convergence rate may be restricted by limited smoothness of the functions in U. In Section 4, in Sobolev spaces $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ or $${{W_{2}^{m}}}(\Omega )$$ with $$\Omega \subset {\mathbb {R}}^{d}$$ the achievable rate for arbitrarily large node sets X turns out to be bounded above by m − d/2 − s(λ), so that \begin{align} \min\left(m-d/2-s(\lambda),q_{\max}(\lambda,X)-s(\lambda)\right) \end{align} (1.2)is a general formula for an upper bound on the convergence rate in Sobolev space $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$, and this is confirmed by numerical experiments in Section 8. 
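The bound (1.2) is easy to evaluate for concrete cases. The following sketch (the function name is ours, not the paper's) computes it for given smoothness m, dimension d, scaling order s and maximal polynomial exactness order q_max:

```python
def sobolev_rate_bound(m, d, s, q_max):
    """Upper bound (1.2) on the convergence rate in W_2^m(R^d)
    for a nodal approximation of a functional of scaling order s
    whose node set admits polynomial exactness order q_max."""
    return min(m - d/2 - s, q_max - s)

# Five-point star for the Laplacian in d = 2: s = 2, q_max = 4.
# Smoothness m = 5 allows the full rate 2, while m = 3 gives
# rate 0, i.e., smoothness is then the limiting factor.
print(sobolev_rate_bound(5, 2, 2, 4))
print(sobolev_rate_bound(3, 2, 2, 4))
```

The two branches of the minimum correspond to the smoothness limit of Section 4 and the exactness limit of Section 3, respectively.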
Then Sections 3, 4 and 5 prove that the convergence rate (1.2) is optimal, and it can be achieved by scalable stencils based solely on exactness on polynomials. Furthermore, Section 7 gives a sufficient condition for the convergence of optimal stencils to scalable stencils. A particularly interesting case is the best compromise case where the two constraints on the convergence rate are equal, i.e., \begin{align} q_{\max}(\lambda,X)=\lceil m-d/2\rceil. \end{align} (1.3) For a given smoothness m it yields the sparsest approximation that has the optimal convergence rate (or comes arbitrarily close to it if m − d/2 is an integer), and for a given sparsity via X it provides the minimal smoothness that is required to realize the maximal possible rate of convergence using that node set. The numerical examples are collected in Section 8, while the final Section 9 summarizes our results and points out a few open problems for further research.

2. Scalability

We now study the behaviour of functionals and their approximations under scaling.

Definition 2.1 A domain $$\Omega \subset {\mathbb {R}}^{d}$$ is scalable if it contains the origin as an interior point and satisfies hΩ ⊆ Ω for all $$0\leqslant h\leqslant 1$$, i.e., if Ω is star shaped with respect to the origin. A space U of functions on a scalable domain Ω is scalable if u(h⋅) is in U for all $$0<h\leqslant 1$$ and all u ∈ U. A functional λ ∈ U* on a scalable space U has scaling order or homogeneity order s if \begin{align*} \lambda(u(h\cdot)) =h^{s}\lambda(u)\quad \text{for all}\ u\in U. \end{align*} Of course, this means that the functional λ must be local in or near the origin.
For example, the standard strong functionals are modelled by multivariate derivatives \begin{align*} \lambda_{\alpha}(u)=\dfrac{\partial^{\alpha} u}{\partial x^{\alpha}}(0) \end{align*}at zero, with the scaling behaviour \begin{align*} \lambda_{\alpha}(u(h\cdot)) =h^{|\alpha|}\lambda_{\alpha}(u) \end{align*}showing that the scaling order coincides with the order of differentiation here. This generalizes to all linear homogeneous differential operators, e.g., the Laplacian. Having dealt with scalability of λ, we now turn to scalability of the nodal approximation λa, X of (1.1). To match the scalability order s of λ, we should assume the same h power for λa, X, and consider \begin{align*} h^{-s}\lambda_{a,X}(u(h\,\cdot))=\displaystyle{\sum_{j=1}^{M}a_{j}h^{-s}u(hx_{j})} = \lambda_{ah^{-s},hX}(u) \end{align*}for all u ∈ U and $$0<h\leqslant 1$$. This is the right notion of scalability for the approximation, but now we need the h dependence and refrain from setting this equal to λa, X(u) like in Definition 2.1. Definition 2.2 An approximation (1.1) to a scalable functional λ of scaling order s is scalable of the same order if the error functional is scalable of order s, i.e., \begin{align} \varepsilon_{h}(u):=\lambda(u)-\lambda_{ah^{-s},hX}(u) =h^{-s}(\lambda-\lambda_{a,X})(u(h\,\cdot))=h^{-s}\varepsilon_{1}(u(h\cdot)) \end{align} (2.1)for all $$u\in U,\;0<h\leqslant 1$$. A scalable approximation (2.1) will be called a stencil. If an approximation (1.1) is given for h = 1, and if the functional λ has scaling order s, the transition to (2.1) by using weights ajh−s in the scaled case will be called enforced scaling. A standard example is the five-point star approximation \begin{align*} -\Delta u(0,0)\approx \dfrac{1}{h^{2}}(4u(0,0)-u(0,h)-u(0,-h)-u(h,0)-u(-h,0)) \end{align*}to the Laplacian in 2 dimensions, and all other notions of generalized divided differences that apply to scaled node sets hX. 
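The five-point star illustrates both exactness and enforced scaling at once. The sketch below (our own illustration, not code from the paper) implements the scaled stencil with s = 2 and checks that its error vanishes on low-order monomials, for every scale h:

```python
def five_point_star(u, h):
    """Scaled five-point approximation to -Laplacian(u)(0,0):
    weights a = (4,-1,-1,-1,-1) on X = {(0,0),(0,1),(0,-1),(1,0),(-1,0)},
    moved to hX with enforced scaling a * h^{-s}, s = 2."""
    return (4*u(0.0, 0.0) - u(0.0, h) - u(0.0, -h)
            - u(h, 0.0) - u(-h, 0.0)) / h**2

# Exactness of order 4: zero error on all monomials x^a y^b with a+b <= 3.
h = 0.37  # arbitrary scale; exactness is scale independent
assert abs(five_point_star(lambda x, y: x*x, h) - (-2.0)) < 1e-12  # -Δ(x²) = -2
assert abs(five_point_star(lambda x, y: x*y, h)) < 1e-12
assert abs(five_point_star(lambda x, y: x**3, h)) < 1e-12
```

Because the weights at scale h are exactly h^{-2} times the weights at scale 1, the stencil is scalable in the sense of Definition 2.2.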
The scaled form in (2.1) allows the very simple error bound \begin{align*} |\varepsilon_{h}(u)|\leqslant h^{-s}\|\lambda- \lambda_{a,X}\|_{U^{\ast}}\|u(h\cdot)\|_{U}\quad \text{for all}\ u\in U \end{align*}that is useful if ∥u(h⋅)∥U is accessible and behaves nicely for h → 0. Weights of scalable approximations can be calculated at large scales and then scaled down by multiplication. This bypasses instabilities for small h and saves a lot of computational work, in particular if applications work on multiple scales or if meshless methods use the same geometric pattern of nodes repeatedly, e.g., in meshless local Petrov–Galerkin (Atluri, 2005) techniques. However, optimal approximations in Sobolev spaces will not be scalable. This is why the rest of the paper studies how close scalable approximations come to the optimal ones analyzed in Davydov & Schaback (2016a).

3. Optimal convergence on polynomials

We first relate the approximation error of nodal approximations to exactness on polynomials and assume that a scalable functional λ of scaling order s is given that is applicable to all d-variate polynomials. This will be true, for instance, in all Sobolev spaces $${{W_{2}^{m}}}(\Omega )$$ for bounded scalable domains $$\Omega \subset {\mathbb {R}}^{d}$$. The space of all real-valued d-variate polynomials up to order q will be denoted by $${{\mathscr {P}}_{q}^{d}}$$, and for a given node set $$X\subset {\mathbb {R}}^{d}$$ and a functional λ we define \begin{align*} q_{\max}(\lambda,X)=\max\left\{q\;:\; \lambda- \lambda_{a,X}=0\ \text{on}\ {{\mathscr{P}}_{q}^{d}}\ \text{for some}\ a \in{\mathbb{R}}^{|X|}\right\} \end{align*}to be the maximal possible polynomial exactness order (PEO) of a nodal approximation (1.1) to λ based on X.

Theorem 3.1 Consider a fixed set $$X\subset {\mathbb {R}}^{d}$$ and a functional λ.
If a sequence of general nodal approximations λa(h), hX converges to λ on a space spanned by finitely many monomials, then X admits an approximation to λ that is exact on these monomials. Proof. Due to \begin{align} \lambda(x^{\alpha})- \lambda_{a(h),hX}(x^{\alpha}) = \lambda(x^{\alpha})- \lambda_{a(h)h^{|\alpha|},\,X}(x^{\alpha}), \end{align} (3.1)convergence of functionals λa(h), hX to λ on a set of monomials implies that the error of the best approximation to λ by functionals λa, X, restricted to the space spanned by those monomials, is zero. We now know an upper bound for the maximal order of polynomials for which approximations can be convergent, if X and λ are fixed. This order can be achieved for scalable stencils: Theorem 3.2 If all polynomials are in U, the convergence rate of a scalable stencil of scaling order s based on a point set X on all polynomials is exactly qmax(λ, X) − s if the stencil is exact on $${{\mathscr {P}}_{q}^{d}}$$ for q = qmax(λ, X). The convergence rate on all of U is bounded above by qmax(λ, X) − s. Proof. We apply (3.1) in the scalable situation and get \begin{align*} h^{-s}\lambda((hx)^{\alpha})- h^{-s} \lambda_{a,X}((hx)^{\alpha}) = h^{-s+|\alpha|}\left(\lambda(x^{\alpha})- \lambda_{a,X}(x^{\alpha})\right), \end{align*}proving the assertion. Consequently, if a node set X = {x1, …, xM} is given, if the application allows all polynomials, and if one wants a scalable stencil, the best one can do is to take a stencil with maximal order qmax(λ, X) of polynomial exactness. It will lead to a scalable stencil with the optimal convergence rate among all approximations. Additional tricks cannot improve that rate, but it can be smaller due to restricted smoothness of functions in U. This will be the topic of Section 4. 
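The rate q_max(λ, X) − s of Theorem 3.2 can be observed numerically on monomials: for the five-point star (s = 2), the error on a monomial x^α scales exactly like h^{|α|−s}. A small sketch (ours, not from the paper) estimates the rate from two scales:

```python
from math import log2

def five_point_err(u, lam_u0, h):
    """Error of the scaled five-point star for -Laplacian(u) at (0,0)."""
    approx = (4*u(0.0, 0.0) - u(0.0, h) - u(0.0, -h)
              - u(h, 0.0) - u(-h, 0.0)) / h**2
    return approx - lam_u0

def observed_rate(u, lam_u0, h=0.1):
    """Estimate the convergence rate from the errors at h and h/2."""
    e1 = abs(five_point_err(u, lam_u0, h))
    e2 = abs(five_point_err(u, lam_u0, h/2))
    return log2(e1 / e2)

# u = x^4: the error is exactly -2h^2, so the rate is |alpha| - s = 2;
# u = x^6: the error is exactly -2h^4, so the rate is 4.
print(observed_rate(lambda x, y: x**4, 0.0))  # approximately 2
print(observed_rate(lambda x, y: x**6, 0.0))  # approximately 4
```

On functions of limited smoothness, these monomial rates are not attained; that is the topic of Section 4.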
If exactness of order q is required in applications, one takes a basis p1, …, pQ of the space $${\mathscr {P}}_{q}^{d}$$ of d-variate polynomials of order q with $$Q=\dim {\mathscr {P}}_{q}^{d}={\binom{q+d-1}{d}}$$ and has to find a solution of the linear system \begin{align} \lambda(p_{k})=\displaystyle{\sum_{j=1}^{M}a_{j}p_{k}(x_{j}),\qquad 1\leqslant k\leqslant Q }. \end{align} (3.2) This may exist even in the case M < Q, the simplest example being the five-point star in 2 dimensions for λ(u) = Δu(0), which is exact of order 4, while M = 5 < Q = 10. For general point sets, there is no way around setting up and solving the above linear system. If the system has a solution, we get a stencil by enforced scaling and with error \begin{align*} h^{-s}\lambda(u(h\cdot))-h^{-s}\displaystyle{\sum_{j=1}^{M}a_{j}u(hx_{j})}, \end{align*}which then is polynomially exact of order q and has convergence rate k = q − s, but only on polynomials. If U contains functions of limited smoothness, this convergence rate will not be attained for all functions in U. We shall prove in Section 4 that the convergence rate in $${{W_{2}^{m}}}(\Omega )$$ for $$\Omega \subseteq {\mathbb {R}}^{d}$$ is limited by m − s − d/2, no matter how large the order q of polynomial exactness on X is. To make this construction partially independent of the functionals, we add the following definition. Definition 3.3 A finite point set $$X=\{x_{1},\ldots ,x_{M}\}\subset {\mathbb {R}}^{d}$$ has polynomial reproduction of order q, if all polynomials in $${\mathscr {P}}_{q}^{d}$$ can be recovered from their values on X. Theorem 3.4 If the set X allows polynomial reproduction of order q, then all admissible linear functionals of scaling order $$s\leqslant q$$ have a stencil that is exact at least of order q, by applying λ to a Lagrange basis of $${\mathscr {P}}_{q}^{d}$$. This stencil has convergence rate at least q − s on polynomials. Proof. Let the set X allow polynomial reproduction of order q. 
Then, for $$Q=\dim {\mathscr {P}}_{q}^{d}$$, there are polynomials p1, …, pQ and a subset Y = {y1, …, yQ} ⊆ X such that the representation \begin{align*} p(x)=\displaystyle{\sum_{j=1}^{Q}p(y_{j})p_{j}(x)\qquad \text{for all}\ p\in{\mathscr{P}}_{q}^{d} } \end{align*}holds, and the matrix of values $$p_{k}(y_{j}),\;1\leqslant j,k\leqslant Q$$ is the identity. This implies $$Q\leqslant M$$, and the stencil satisfying \begin{align*} \lambda(p)=\displaystyle{\sum_{j=1}^{Q}p(y_{j})\lambda(p_{j})\qquad \text{for all}\ p\in{\mathscr{P}}_{q}^{d} } \end{align*}with weights aj := λ(pj) is exact on $${\mathscr {P}}_{q}^{d}$$. The rest follows as above. But note that the five-point star is an example of an approximation on a set that has polynomial reproduction only of order 2, while it has a scalable stencil for the Laplacian that is exact on polynomials of order up to 4 and convergent of rate 2. The application of Theorem 3.4 would require polynomial reproduction of order 4 for the same convergence rate. In general, one can use the M given nodes to get exactness on polynomials of maximal order, and then there can be additional degrees of freedom because the Q × M linear system (3.2) may be nonuniquely solvable. Davydov & Schaback (2016b) deal with various techniques to use the additional degrees of freedom, e.g., for minimizing the ℓ1 norm of the weights. In all cases the result is scalable and then this paper applies as well. On the other hand, Davydov & Schaback (2016a) focus on nonscalable approximations induced by kernels. Both papers perform their convergence analysis mainly for single approximations. While this paper focuses on convergence rates in Sobolev spaces, Davydov & Schaback (2016a) consider Hölder spaces and Sobolev spaces $$W_{\infty }^{r}$$. A third way to use additional degrees of freedom is to take optimal stencils for polyharmonic kernels in Beppo–Levi spaces; see Section 5.
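For the five-point star, the system (3.2) can be set up and solved directly. The following sketch (our illustration, not the authors' code) recovers the classical weights for λ(u) = Δu(0) with exactness of order 4, i.e., Q = 10 equations for M = 5 weights:

```python
import numpy as np

# Nodes of the five-point star
X = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)

# Monomial basis of P_4^2: all x^a y^b with a + b <= 3, so Q = 10
exps = [(a, b) for a in range(4) for b in range(4) if a + b <= 3]

def lap_at_zero(a, b):
    """lambda(p) = Laplacian of x^a y^b at 0: nonzero only for x^2 and y^2."""
    return 2.0 if (a, b) in [(2, 0), (0, 2)] else 0.0

# Collocation matrix p_k(x_j) and right-hand side lambda(p_k) of (3.2)
P = np.array([[x**a * y**b for (x, y) in X] for (a, b) in exps])
rhs = np.array([lap_at_zero(a, b) for (a, b) in exps])

# The 10 x 5 system is overdetermined but consistent; least squares finds
# its (here unique) solution with zero residual.
w, *_ = np.linalg.lstsq(P, rhs, rcond=None)
print(np.round(w, 10))  # weights (-4, 1, 1, 1, 1)
```

The zero residual of the least-squares solve confirms that the system has an exact solution even though M < Q, as noted above.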
But before we go over from polynomials to these spaces, we remark that many application papers use meshless methods to solve problems that have true solutions u* with rapidly convergent power series representations (see, e.g., Kansa, 2015 for a recent example with $$u^{\ast }(x,y)=\exp (ax+by)$$). In such cases, a high order of polynomial exactness pays off, but as soon as the problem is treated in Sobolev space, this advantage is gone. A truly worst-case analysis of nodal meshless methods is in Schaback (2017). This discussion showed that on polynomials one can get stencils of arbitrarily high convergence rates, provided that there are enough nodes to ensure exactness on high-degree polynomials. For working on spaces of functions with limited smoothness, the latter will limit the convergence rate of the stencil, and we want to show how.

4. Optimal convergence in Sobolev spaces

Our goal is to reach the optimal convergence rates in Sobolev spaces via cheap, scalable and stable stencils, and for this we need to know those rates. But before that, we want to eliminate the difference between local and global Sobolev spaces, as far as convergence rates are concerned. Local Sobolev functionals are global ones due to $${{W_{2}^{m}}}(\Omega )^{\ast }\subset {{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$, which follows from $${{W_{2}^{m}}}(\Omega )\supset {{W_{2}^{m}}}({\mathbb {R}}^{d})$$ for Lipschitz domains. This implies that we can evaluate the norm of each functional $$\lambda \in {{W_{2}^{m}}}(\Omega )^{\ast }$$ in $${{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ via the kernel, up to a fixed multiplicative constant.
For the other way round and in the scalable case, we consider the subspace LΩ of all point-based functionals $$ \lambda _{a,X} \in {{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ with sets X ⊂ Ω and $$a\in {\mathbb {R}}^{|X|}$$ for a scalable domain $$\Omega \subset {\mathbb {R}}^{d}$$ and form its closure $${\mathscr {L}}_{\Omega }$$ under the kernel-based $${{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ norm. These functionals are exactly those that we study here. Since the spaces $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ and $${{W_{2}^{m}}}(\Omega )$$ are norm equivalent, the limit process is the same in $${{W_{2}^{m}}}(\Omega )$$, and therefore we have that $${\mathscr {L}}_{\Omega }\subset {{W_{2}^{m}}}(\Omega )^{\ast }$$. Theorem 4.1 The functionals considered here are always in the space $${\mathscr {L}}_{\Omega }\subset {{W_{2}^{m}}}(\Omega )^{\ast }$$, and their norms can be evaluated in $${{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ up to a space- and domain-dependent constant. The convergence rates in $${{W_{2}^{m}}}(\Omega )^{\ast }$$ and $${{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ are the same. □ In Section 5 we shall extend this argument to Beppo–Levi spaces. Theorem 4.2 The convergence rate of any nodal approximation to a scalable functional λ of scalability order s on $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ with m > d/2 is at most m − s − d/2. Proof. We need at least m > d/2 to let the nodal approximations λa, X of (1.1) be well defined. Then we take a ‘bump’ function $$v\in {{W_{2}^{m}}}({\mathbb {R}}^{d})$$ that vanishes on X and has λ(v) ≠ 0. Now we scale and consider λa(h), hX as an approximation on hX with error functional \begin{align*} \varepsilon_{h}=\lambda-\lambda_{a(h),hX}. 
\end{align*} Then \begin{align*} \varepsilon_{h}(v(\cdot/h)) &= \lambda(v(\cdot/h))-\lambda_{a(h),hX}(v(\cdot/h))\\ &= h^{-s}\lambda(v)-0 \end{align*}and \begin{align*} \|v(\cdot/h)\|^{2}_{{{W_{2}^{m}}}({\mathbb{R}}^{d})} &= \displaystyle{\sum_{|\alpha|\leqslant m}\int_{{\mathbb{R}}^{d}}|D^{\alpha}(v(\cdot/h))|^{2}}\nonumber\\[-4pt] &= \displaystyle{\sum_{|\alpha|\leqslant m}h^{-2|\alpha|}\int_{{\mathbb{R}}^{d}}|D^{\alpha}(v)(x/h)|^{2}\ \text{d}x}\nonumber\\[-4pt] &= \displaystyle{h^{d}\sum_{|\alpha|\leqslant m}h^{-2|\alpha|}\int_{{\mathbb{R}}^{d}}|D^{\alpha}(v)(y)|^{2}\ \text{d}y}\nonumber\\[-2pt] &\leqslant \displaystyle{h^{d-2m}\|v\|^{2}_{{{W_{2}^{m}}}({\mathbb{R}}^{d})}},\nonumber \end{align*}leading to \begin{align*} \|\varepsilon_{h}\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})^{\ast}} &= \displaystyle{ \sup_{u\in {{W_{2}^{m}}}({\mathbb{R}}^{d})\setminus \{0\}} \dfrac{|\varepsilon_{h}(u)|}{\|u\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})}}}\\[-2pt] &\geqslant \displaystyle{\frac{|\varepsilon_{h}(v(\cdot/h))|}{\|v(\cdot/h)\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})}}}\\[-2pt] &\geqslant h^{-s} \displaystyle{\frac{|\lambda(v)|}{\|v(\cdot/h)\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})}}}\\[-2pt] &\geqslant h^{m-s-d/2} \displaystyle{\frac{|\lambda(v)|}{\|v\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})}}}. \end{align*} This holds for all weights, including the nonscalable optimal ones, and for all nodal point sets X. Our next goal is to show that this rate is attainable for scalable stencils with sufficient polynomial exactness, in particular for optimal stencils calculated via polyharmonic kernels. Theorem 4.3 Let λ be a functional of scaling order s that is continuous on $$W_{2}^{\mu }(\Omega )$$ for some μ > d/2, and let X allow a polynomially exact approximation to λ of some order q ⩾ μ > d/2. Then any scalable stencil for approximation of λ on X with that exactness has the optimal convergence rate m − s − d/2 in $${{W_{2}^{m}}}(\Omega )$$ for all m with $$\mu \leqslant m < q+d/2$$. 
In the case m = q + d/2, the rate is at least m − s − d/2 − ε = q − s − ε for arbitrarily small ε > 0. Proof. We first treat the case $$m\leqslant q$$. By the Bramble–Hilbert lemma (Bramble & Hilbert, 1970), the error functional defined by \begin{align*} \varepsilon(u)=\lambda(u)- \lambda_{a,X}(u) \end{align*}is continuous on $${{W_{2}^{m}}}(\Omega )$$ and vanishes on $${\mathscr {P}}_{m}^{d}$$. Then it has an error bound \begin{align*} |\varepsilon(u)| \leqslant \|\varepsilon\|_{{{W_{2}^{m}}}(\Omega)^{\ast}}|u|_{{{W_{2}^{m}}}(\Omega)}\qquad \text{for all}\ u\in {{W_{2}^{m}}}(\Omega). \end{align*} This leads to \begin{align*} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &= h^{-s}|\varepsilon(u(h\cdot))|\\ &\leqslant h^{-s}\|\varepsilon\|_{{{W_{2}^{m}}}(\Omega)^{\ast}}|u(h\cdot)|_{{{W_{2}^{m}}}(\Omega)}\\ &= h^{-s}\|\varepsilon\|_{{{W_{2}^{m}}}(\Omega)^{\ast}}h^{m-d/2}|u|_{{{W_{2}^{m}}}(h\Omega)}\\ &\leqslant h^{-s}\|\varepsilon\|_{{{W_{2}^{m}}}(\Omega)^{\ast}}h^{m-d/2}|u|_{{{W_{2}^{m}}}(\Omega)}, \end{align*}where we used \begin{align} |u(h\cdot)|^{2}_{{{W_{2}^{m}}}(\Omega)} &= \displaystyle{\sum_{|\alpha|=m}\int_{\Omega} \left|D^{\alpha} (u(h\cdot))(x) \right|{}^{2}\ \text{d}x}\nonumber\\ &= \displaystyle{h^{2m}\sum_{|\alpha|=m}\int_{\Omega} \left|D^{\alpha} (u)(hx) \right|{}^{2}\ \text{d}x}\nonumber\\ &= \displaystyle{h^{2m-d}\sum_{|\alpha|=m}\int_{h\Omega} \left|D^{\alpha} (u)(y) \right|{}^{2}\ \text{d}y}\nonumber\\ &= h^{2m-d}|u|^{2}_{{{W_{2}^{m}}}(h\Omega)}. \end{align} (4.1) For the case $$q\leqslant m <q+d/2$$ we repeat the argument, but now in $${{W_{p}^{q}}}(\Omega )\supseteq {{W_{2}^{m}}}(\Omega )$$ for $$p\in [2,\infty )$$ with q − d/p = m − d/2. Because q ≥ μ we also have $${{W_{p}^{q}}}(\Omega )\subseteq W_{2}^{\mu }(\Omega )$$, guaranteeing continuity on $${{W_{p}^{q}}}(\Omega )$$. 
The corresponding proof steps are \begin{align*} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &\leqslant h^{-s}\|\varepsilon\|_{{{W_{p}^{q}}}(\Omega)^{\ast}}h^{q-d/p}|u|_{{{W_{p}^{q}}}(\Omega)},\\ |u(h\cdot)|^{p}_{{{W_{p}^{q}}}(\Omega)} &= h^{pq-d}|u|^{p}_{{{W_{p}^{q}}}(\Omega)}. \end{align*} For m = q + d/2, the space $${{W_{2}^{m}}}(\Omega )$$ is embedded in $${{W_{p}^{q}}}(\Omega )$$ for arbitrary $$p\in [2,\infty )$$, and on that space we get the rate q − s − d/p = m − s − d/2 − d/p. Theorem 4.3 proves optimality of the convergence rate (1.2), and it shows that the optimal rate is attained by scalable stencils whose point sets allow polynomial exactness of some order larger than m − d/2. In view of the best compromise situation, one can ask for the minimal PEO q that allows the optimal convergence rate for fixed m and d. If m − d/2 is not an integer, this is q := ⌈m − d/2⌉ as in (1.3). In the exceptional case $$m-d/2\in {\mathbb {N}}$$, the order m − d/2 + 1 is sufficient for the optimal rate, but order m − d/2 can come arbitrarily close to it. We shall deal with this situation in Sections 5 and 8. Consequently, large orders of polynomial exactness will not pay off if smoothness is the limiting factor. If the size of the point set X is the limiting factor, we get the following corollary. Corollary 4.4 Let λ be a functional of scaling order s that is continuous on $$W_{2}^{\mu }(\Omega )$$ with integer μ > d/2, and let X allow a polynomially exact approximation to λ of some order q ≥ μ. Then any scalable stencil for approximation of λ on X with that exactness has convergence rate at least q − s in $${{W_{2}^{m}}}(\Omega )$$ for all m > q + d/2. Proof. We repeat the proof of Theorem 4.3, but now on $${{W_{2}^{q}}}(\Omega )$$, and get \begin{align*} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &= h^{-s}|\varepsilon(u(h\cdot))|\\ &\leqslant h^{-s}\|\varepsilon\|_{{{W_{2}^{q}}}(\Omega)^{\ast}}|u(h\cdot)|_{{{W_{2}^{q}}}(\Omega)}. 
\end{align*} Then we use (4.1) replacing m by q there, but insert functions $$u\in {{W_{2}^{m}}}(\Omega )$$ for m > q + d/2. Then the qth derivatives in (4.1) will be continuous, proving \begin{align*} |u(h\cdot)|^{2}_{{{W_{2}^{q}}}(\Omega)} &= \displaystyle{h^{2q}\sum_{|\alpha|=q}\int_{\Omega} \left|D^{\alpha} (u)(hx) \right|{}^{2}\ \text{d}x}\\ &\leqslant C h^{2q}\|u\|^{2}_{C^{q}(\Omega)}. \end{align*} Thus the convergence rate in $${{W_{2}^{m}}}(\Omega )$$ is at least q − s. This argument used continuity of higher derivatives to bound local integrals, as in Davydov & Schaback (2016a). Note that Corollary 4.4 produces only integer or half-integer convergence rates while Theorem 4.3 allows general noninteger rates. We shall give examples in Section 8. To summarize, we get convergence rates for scalable stencils as in Table 1. For the case in the second row, the optimal convergence behaviour is not reached for order q, but for order q + 1 by applying the first row. For given m and d, a scalable stencil with PEO ⌊m − d/2⌋ + 1 is sufficient for optimal convergence in $${{W_{2}^{m}}}(\Omega ),\;\Omega \subset {\mathbb {R}}^{d}.$$ By solving the system (3.2), such stencils are easy to calculate, but if the system is underdetermined, one should make good use of the additional degrees of freedom. This topic is treated in Davydov & Schaback (2016b) by applying optimization techniques, while the next sections will focus on unique stencils obtained by polyharmonic kernels. Because the latter come close to the kernels reproducing Sobolev spaces, they should provide good approximations to the nonscalable optimal approximations in Sobolev spaces.
Table 1 Convergence rates in $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ for scalable stencils defined on $$W_{2}^{\mu }({\mathbb {R}}^{d})$$ with polynomial exactness q ⩾ μ > d/2

m and q | Minimal rate | Optimal rate
m < q + d/2 | m − s − d/2 | yes
m = q + d/2 | m − s − d/2 − ε, ε > 0 | no, m − s − d/2 = q − s
m > q + d/2 | q − s | yes for q = qmax(λ, X)

5. Polyharmonic kernels

For m − d/2 > 0 real, we define the polyharmonic kernel \begin{align} H_{m,d}(r):=(-1)^{\lfloor m-d/2 \rfloor +1} \left\{ \begin{array}{ll} r^{2m-d}\log r, &2m-d\ \text{even integer}\ \\ r^{2m-d},&\ \text{else} \end{array} \right\} \end{align} (5.1) up to a positive scalar multiple. This kernel is conditionally positive definite of order \begin{align*} q(m-d/2):=\lfloor m-d/2 \rfloor +1. \end{align*} For comparison, the Whittle–Matérn kernel generating Sobolev space $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ is, up to a positive constant, \begin{align*} S_{m,d}(r):= K_{m-d/2}(r)r^{m-d/2} \end{align*}with the modified Bessel function of the second kind. The generalized d-variate Fourier transforms then are \begin{align*} \hat{H}_{m,d}(\omega)&=\|\omega\|_{2}^{-2m},\\ \hat{S}_{m,d}(\omega)&=(1+\|\omega\|_{2}^{2})^{-m}, \end{align*}up to positive constants, showing a similarity that we will not explore further at this point.
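Definition (5.1) translates directly into code. This sketch (ours) evaluates H_{m,d}(r), including the sign and the logarithmic case, together with the order q(m − d/2):

```python
from math import floor, log

def q_order(m, d):
    """Order of conditional positive definiteness q(m - d/2)."""
    return floor(m - d/2) + 1

def polyharmonic(m, d, r):
    """Polyharmonic kernel H_{m,d}(r) of (5.1), up to a positive factor."""
    sign = (-1.0)**(floor(m - d/2) + 1)
    k = 2*m - d
    if k == int(k) and int(k) % 2 == 0:  # 2m - d an even integer
        return sign * r**k * log(r) if r > 0 else 0.0
    return sign * r**k

# Thin-plate spline: m = 2, d = 2 gives +r^2 log r, CPD of order 2;
# m = 3, d = 2 gives -r^4 log r, CPD of order 3.
assert q_order(2, 2) == 2
assert q_order(3, 2) == 3
assert abs(polyharmonic(2, 2, 2.0) - 4.0*log(2.0)) < 1e-12
```

The continuation by 0 at r = 0 in the logarithmic branch matches the limit of r^{2m−d} log r for 2m − d > 0.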
While Sm, d reproduces $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$, the polyharmonic kernel Hm, d reproduces the Beppo–Levi space BLm, d. This has a long history (see, e.g., Iske, 1995, 2003, 2005; Schaback, 1997; Wendland, 2005; Beatson et al., 2005), but we take a shortcut here and refer the reader to the background literature. From Iske (2003) we take the very useful fact that optimal approximations in Beppo–Levi spaces using polyharmonic kernels are always scalable and can be stably and efficiently calculated. We shall investigate the optimal convergence rate in Sobolev and Beppo–Levi space here, while Iske (2003) contains convergence rates in Cm(Ω). A typical scale-invariance property of Beppo–Levi spaces is \begin{align} \|u(h\cdot)\|_{{\rm BL}_{m,d}}=h^{m-d/2}\|u\|_{{\rm BL}_{m,d}}\qquad \text{for all}\ u \in {\rm BL}_{m,d}. \end{align} (5.2) Note the similarity between the above formula and (4.1) used in the proof of Theorem 4.3, because the classical $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ seminorm coincides with the norm in BLm, d.

Theorem 5.1 Let a scalable approximation (1.1) of scaling order s be exact on the polynomials of some order q ⩾ q(m − d/2) = ⌊m − d/2⌋ + 1 and assume that λ − λa, X is in $${\rm BL}_{m,d}^{\ast }$$. Then this stencil has the exact convergence rate m − s − d/2 in BLm, d.

Proof. We evaluate the norm of the error functional after scaling via \begin{align*} \|\lambda-h^{-s} \lambda_{a,hX}\|_{{\rm BL}_{m,d}^{\ast}} &=\displaystyle{ \sup_{\|u\|_{{\rm BL}_{m,d}}\leqslant 1}|\lambda(u)-h^{-s} \lambda_{a,hX}(u)|}\\ &=h^{-s}\displaystyle{ \sup_{\|u\|_{{\rm BL}_{m,d}}\leqslant 1}|\lambda(u(h\cdot))- \lambda_{a,X}(u(h\cdot))|}\\ &=h^{-s+m-d/2}\displaystyle{ \sup_{\|u(h\cdot)\|_{{\rm BL}_{m,d}}\leqslant 1}|\lambda(u(h\cdot))- \lambda_{a,X}(u(h\cdot))|}\\ &=h^{-s+m-d/2}\|\lambda- \lambda_{a,X}\|_{{\rm BL}_{m,d}^{\ast}} \end{align*}using that (5.2) implies that the unit balls of all u and all u(h⋅) are the same up to a factor.
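As a concrete instance of this construction, the sketch below (our own illustration, not the authors' code) computes the Beppo–Levi-optimal scalable stencil for λ(u) = Δu(0) on a 3×3 grid, using H_{3,2}(r) = −r⁴ log r with exactness on quadratics enforced through the usual kernel saddle-point system; the node set, tolerances and variable names are our choices:

```python
import numpy as np

# Nodes: 3x3 grid around the origin; Q = dim P_3^2 = 6 <= M = 9
X = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
M = len(X)

def H(r):
    """Polyharmonic kernel H_{3,2}(r) = -r^4 log r (continued by 0 at r = 0)."""
    return -r**4 * np.log(r) if r > 0 else 0.0

def lap_H(r):
    """2D Laplacian of H_{3,2} as a radial function: -(16 r^2 log r + 8 r^2)."""
    return -(16*r**2*np.log(r) + 8*r**2) if r > 0 else 0.0

# Quadratic monomials x^a y^b, a + b <= 2, and lambda(p) = (Delta p)(0)
exps = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
lam_p = np.array([0, 0, 0, 2, 0, 2], dtype=float)

# Saddle-point system [A P; P^T 0][a; b] = [lambda^x H(|x - x_j|); lambda(p)]
A = np.array([[H(np.linalg.norm(xi - xj)) for xj in X] for xi in X])
P = np.array([[x**p * y**q for (p, q) in exps] for (x, y) in X])
rhs = np.concatenate([[lap_H(np.linalg.norm(xj)) for xj in X], lam_p])
K = np.block([[A, P], [P.T, np.zeros((6, 6))]])
a = np.linalg.solve(K, rhs)[:M]  # stencil weights

# Exactness on quadratics is enforced by the constraint block:
for (ai, bi), val in zip(exps, lam_p):
    assert abs(sum(wj * x**ai * y**bi for wj, (x, y) in zip(a, X)) - val) < 1e-8
```

Since H_{3,2} is conditionally positive definite of order 3 and the grid is unisolvent for quadratics, the 15×15 system is nonsingular, and the resulting weights scale like h^{-2} under the enforced scaling of Definition 2.2.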
Corollary 5.2 Polynomial exactness of more than order ⌊m − d/2⌋ + 1 does not pay off in a higher convergence rate in Beppo–Levi space BLm, d. □ Corollary 5.3 Let a point set $$X=\{x_{1},\ldots ,x_{M}\}\subset \Omega \subset {\mathbb {R}}^{d}$$ be given such that there is some approximation (1.1) that is exact on polynomials of order ⌊m − d/2⌋ + 1 and that has $$\lambda - \lambda _{a,X}\in {\rm BL}_{m,d}^{\ast }$$. Then there is a weight vector $$a^{\ast }\in {\mathbb {R}}^{M}$$ that minimizes $$\|\lambda - \lambda _{a,X}\|_{{\rm BL}_{m,d}^{\ast }}$$ under all competing approximations, and the resulting stencil is BLm, d-optimal under all stencils of at least that polynomial exactness. □ By applying Theorem 4.3, we get the following corollary. Corollary 5.4 One can use optimal scalable stencils obtained via polyharmonic kernels Hm, d to get optimal convergence rates in $${{W_{2}^{m}}}(\Omega )$$ for $$\Omega \subset {\mathbb {R}}^{d}$$, provided that the underlying sets allow exactness on polynomials of order q(m − d/2) = ⌊m − d/2⌋ + 1. □ If m − d/2 is not an integer, the above order is the smallest possible for optimal convergence. For m − d/2 integer, we have \begin{align*} q(m-d/2)=\lfloor m-d/2\rfloor+1=m-d/2+1, \end{align*} and Theorem 4.3 suggests that we could come arbitrarily close to the optimal convergence rate if we use order q = m − d/2. But then we cannot use the polyharmonic kernel Hm, d. However, there is a workaround. We construct a scalable stencil via the polyharmonic kernel $$H_{m^{\prime },d}$$ for $$m-1\leqslant m^{\prime }<m$$ using polynomial exactness of order q(m′− d/2) = q. By Theorem 4.3 this yields a convergence rate of at least m − s − d/2 − ε for all ε > 0, no matter how m′ was chosen. Corollary 5.5 For the special situation m = q + d/2 in Table 1 there is a scalable stencil with PEO q, based on a polyharmonic kernel, that has convergence rate of at least m − s − d/2 − ε for all ε > 0. □ 6.
Stable error evaluation In the most interesting cases, the leading term of the error of a scalable stencil in Sobolev space can be stably calculated via polyharmonic kernels. To prove this, we now show that the polyharmonic kernels Hm, d arise naturally as part of the kernels Sm, d reproducing Sobolev space $$H^{m}({\mathbb {R}}^{d})$$. The latter have expansions as series in r, beginning with a finite number of even powers with alternating signs. Such even powers, when written as $$r^{2k}=\|x-y\|_{2}^{2k}$$, are polynomials in x and y. After these even powers, the next term is a polyharmonic kernel: Theorem 6.1 The first noneven term in the expansion of $$\sqrt {\frac {2}{\pi }} K_{n+1/2}(r)r^{n+1/2}$$ into powers of r for integer n ⩾ 0 is the polyharmonic kernel \begin{align*} r^{2n+1}\frac{(-1)^{n+1}}{(2n+1)(2n-1)(2n-3)\cdots 1}= r^{2n+1}\frac{(-1)^{n+1}2^{n}\,n!}{(2n+1)!}. \end{align*} The first noneven term in the expansion of Kn(r)rn for integer n ⩾ 0 is the polyharmonic kernel $$(-1)^{n+1}r^{2n}\log (r)\frac {2^{-n}}{n!}$$. Proof. NIST (2015, equation (10.39.2)) gives the case n = 0 of \begin{align*} \sqrt{\frac{2}{\pi}} K_{n+1/2}(r)r^{n+1/2}=q_{n}(r)=e^{-r}p_{n}(r) \end{align*} with a polynomial pn of degree at most n, p0(r) = 1, q0(r) = e−r. Using the derivative formula $$\frac{{\rm d}}{{\rm d}r}\left(K_{\nu}(r)r^{\nu}\right)=-K_{\nu-1}(r)r^{\nu}$$, it can easily be shown that $$rp_{n-1}(r)+p_{n}^{\prime }(r)=p_{n}(r)$$ holds, and similarly one gets \begin{align*} -rq_{n-1}(r)= q_{n}^{\prime}(r) \end{align*} from that derivative formula. If we make it explicit by \begin{align*} q_{n}(r)=:\displaystyle{\sum_{j=0}^{\infty} q_{j,n}r^{j} }, \end{align*} we get \begin{align*} -q_{k-1,n-1} &= q_{k+1,n}(k+1),\qquad k,n\geqslant 1,\\ 0&=q_{1,n},\qquad\;n\geqslant 1. \end{align*} The assertion q2k−1, n = 0 for $$1\leqslant k\leqslant n$$ is true for k = 1 and all $$n\geqslant 1$$. Assume it to be true for k and all $$n\geqslant k$$.
Then for all $$n\geqslant k\geqslant 1$$, \begin{align*} 0=-q_{2k-1,n} &= q_{2k+1,n+1}(2k+1) \end{align*} proves the assertion. The first odd term of the kernel expansion is q2n+1, nr2n+1, and its coefficient has the recursion \begin{align*} -q_{2n-1,n-1} &= q_{2n+1,n}(2n+1),\qquad n\geqslant 1. \end{align*} For the other case we use NIST (2015, equation (10.31.1)) in shortened form as \begin{align*} K_{n}(z)z^{n}=p_{n}(z^{2})+(-1)^{n+1}z^{n}\log(z/2)I_{n}(z) \end{align*} with an even power series pn(z2), and due to NIST (2015, equation (10.25.2)) we have In(z) = znqn(z2) with an even power series qn(z2) with $$q_{n}(0)=\frac {2^{-n}}{n!}$$. Thus \begin{align*} K_{n}(z)z^{n}=p_{n}(z^{2})+(-1)^{n+1}z^{2n}\log(z/2)q_{n}(z^{2}), \end{align*} and the first noneven term of the expansion of Kn(r)rn is the polyharmonic kernel \begin{align*} (-1)^{n+1}r^{2n}\log(r)q_{n}(0)=(-1)^{n+1}r^{2n}\log(r)\frac{2^{-n}}{n!}. \end{align*} We are now ready to show that a good approximation of the error in Sobolev space can be calculated stably via the error in Beppo–Levi space, i.e., via polyharmonic kernels: Theorem 6.2 Assume a scalable stencil of scaling order s on a set $$X\subset {\mathbb {R}}^{d}$$ to be given with polynomial exactness q. For all integer m with $$\lfloor m-d/2\rfloor +1\leqslant q$$, its error norm can be evaluated on all Beppo–Levi spaces BLm, d and on Sobolev space $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$. The convergence rate in both cases then is m − s − d/2, and the quotient of errors converges to 1 for h → 0 if the scalar factors in the Sobolev and polyharmonic kernel are aligned properly, namely as given in Theorem 6.1. Proof.
The squared norm of the stencil’s error functional can be evaluated on Sobolev space $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ by \begin{align*} \varepsilon(h)^{x}\varepsilon(h)^{y}K(x,y) =&\ \displaystyle{ h^{-2s}\left(\lambda^{x}\lambda^{y}K(hx,hy) -2\sum_{j=1}^{M}a_{j}\lambda^{y}K(hx_{j},hy) \right.}\\ &+\displaystyle{ \left. \sum_{j,k=1}^{M}a_{j}a_{k}K(hx_{j},hx_{k}) \right)\!\!,} \end{align*} where we use K(x, y) as a shortcut for $$K_{m-d/2}(\|x-y\|_{2})\|x-y\|_{2}^{m-d/2}$$ and ignore scalar multiples. Now we insert the series expansions of Theorem 6.1. For odd d and m − d/2 = n + 1/2 we have, up to constant factors, \begin{align*} K_{m-d/2}(r)r^{m-d/2}=\displaystyle{\sum_{j=0}^{m-d/2-1/2}f_{2j}r^{2j}} +f_{2m-d}r^{2m-d} +\sum_{k>2m-d}f_{k}r^{k} \end{align*} and \begin{align*} K_{m-d/2}(hr)(hr)^{m-d/2}=\displaystyle{\sum_{j=0}^{m-d/2-1/2}f_{2j}h^{2j}r^{2j}} +f_{2m-d}h^{2m-d}r^{2m-d} +\sum_{k>2m-d}f_{k}h^{k}r^{k}. \end{align*} If we hit this twice with ε(h), i.e., forming \begin{align*} \|\varepsilon(h)\|^{2}_{H^{m}({\mathbb{R}}^{d})}= \varepsilon(h)^{x}\varepsilon(h)^{y}K(x,y), \end{align*} all even terms with exponents 2j < 2q = 2p + 2s > 2m − d go away (Schaback, 2005), and we are left with the polyharmonic part and higher-order terms. The odd ones are all polyharmonic, and the even ones remain only from exponent 2q = 2p + 2s > 2m − d on, i.e., they behave like h2m−d+1 or higher-order terms. The polyharmonic terms f2m−d+2kh2m−d+2kr2m−d+2k representing BLm+k, d require polynomial exactness of order m − d/2 + 1/2 + k, which is satisfied for $$0\leqslant k<q-m+d/2$$, and double action of the error functional on these terms has a scaling law of h2m+2k−2s−d. This means that the dominating term is the one with k = 0, and the squared error norm behaves like h2m−d−2s as in the BLm, d case.
Now we treat even dimensions, and use the expansion \begin{align*} K_{m-d/2}(r)r^{m-d/2}=\displaystyle{\sum_{j=0}^{\infty}f_{2j}r^{2j}} +g_{2m-d}\log(r)r^{2m-d}+ \log(r)\displaystyle{\sum_{2k>2m-d}g_{2k}r^{2k}} \end{align*} up to constant factors. With scaling, it reads as \begin{align*} K_{m-d/2}&(hr)(hr)^{m-d/2}\\ &=\displaystyle{\sum_{j=0}^{\infty}f_{2j}h^{2j}r^{2j}} +g_{2m-d}\log(hr)h^{2m-d}r^{2m-d} + \log(hr)\displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}r^{2k}}\\ &= \displaystyle{\sum_{j=0}^{\infty}f_{2j}h^{2j}r^{2j}} +g_{2m-d}h^{2m-d}\log(r)r^{2m-d}+g_{2m-d}\log(h)h^{2m-d}r^{2m-d}\\ &\quad + \displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}r^{2k}\log(r)} +\displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}\log(h)r^{2k}.} \end{align*} We now have 2q = 2p + 2s ⩾ 2m − d + 2, and hitting the scaled kernel twice will annihilate all even powers with exponents 2j < 2q, i.e., the remaining even powers scale like $$h^{2m-d+2}\log (h)$$ or higher. The rest is a sum of polyharmonic kernels Hm+k, d for k ⩾ 0, and we know their scaling laws if the stencil has enough polynomial exactness. Again, the term with k = 0 is the worst case, leading to a summand of type h2m−d−2s in the squared norm of the error that cannot be cancelled by the other terms of higher order. 7. Stencil convergence Here, we prove that the renormalized weights of the optimal nonscalable approximations in Sobolev space converge to the weights of a scalable stencil. Theorem 7.1 Consider the $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$-optimal approximation weights a*(h) on a set $$X\subset {\mathbb {R}}^{d}$$ for a functional of scaling order s. Assume that X allows a unique scalable stencil with weights $$\hat a$$ that is exact on polynomials of order q. Then \begin{align*} \|a^{\ast}(h)h^{s}-\hat a\|_{\infty} \leqslant Ch^{m-q+1-d/2} \end{align*} if m − d/2 < q, and \begin{align*} \|a^{\ast}(h)h^{s}-\hat a\|_{\infty} \leqslant Ch \end{align*} if m − d/2 ⩾ q. Proof.
We consider the uniquely solvable system of polynomial exactness as \begin{align*} \sum_{j=1}^{M}\hat a_{j}x_{j}^{\alpha}=\lambda(x^{\alpha}),\qquad\;0\leqslant |\alpha|<q \end{align*} and in scaled form as \begin{align*} \sum_{j=1}^{M}h^{-s}\hat a_{j}(hx_{j})^{\alpha}=\lambda(x^{\alpha}) ,\qquad\;0\leqslant |\alpha|<q, \end{align*} which is the unscaled system where the equation for xα is multiplied by h|α|−s, namely \begin{align*} \sum_{j=1}^{M}h^{-s}\hat a_{j}(hx_{j})^{\alpha}=h^{|\alpha|-s}\lambda(x^{\alpha})=\lambda(x^{\alpha}) ,\qquad\;0\leqslant |\alpha|<q, \end{align*} which is no contradiction because scaling order s implies λ(xα) = 0 for |α| ≠ s. Then we insert the rescaled optimal Sobolev weights into the unscaled system to get \begin{align} h^{s}\sum\nolimits_{j=1}^{M}a_{j}^{\ast}(h)x_{j}^{\alpha}=&\ h^{s-|\alpha|}\sum\nolimits_{j=1}^{M}a_{j}^{\ast}(h)(hx_{j})^{\alpha}\nonumber\\ =&\ h^{s-|\alpha|}\lambda_{a^{\ast}(h),hX}(x^{\alpha})\nonumber\\ =&\ h^{s-|\alpha|}(\lambda_{a^{\ast}(h),hX}(x^{\alpha})-\lambda(x^{\alpha}))+h^{s-|\alpha|} \lambda(x^{\alpha})\nonumber\\ =&\ h^{s-|\alpha|}(\lambda_{a^{\ast}(h),hX}(x^{\alpha})-\lambda(x^{\alpha}))+ \lambda(x^{\alpha}) \end{align} (7.1) and \begin{align*} \sum\nolimits_{j=1}^{M}(h^{s}a_{j}^{\ast}(h)-\hat a_{j})x_{j}^{\alpha} \ \ =\ \ h^{s-|\alpha|}(\lambda_{a^{\ast}(h),hX}(x^{\alpha})-\lambda(x^{\alpha})). \end{align*} If we insert the convergence rate m − s − d/2 for the optimal Sobolev approximation in the case m − s − d/2 < q − s, i.e., m − d/2 < q, the right-hand side of this system converges to zero with rate m −|α|− d/2 ⩾ m − (q − 1) − d/2, and this implies \begin{align} h^{s}a_{j}^{\ast}(h)-\hat a_{j} =\mathcal {O}(h^{m-(q-1)-d/2})\qquad \text{for}\ h\to 0. \end{align} (7.2) If we have $$m-d/2\geqslant q$$, we insert the rate q − s and get the rate $$q-|\alpha |\geqslant 1$$ for the right-hand side. 8.
Examples First, we demonstrate numerically that the convergence rate \begin{align*} \min(m-d/2-s,q_{\max}(\lambda,X)-s) \end{align*} for approximations in $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ to functionals $$\lambda \in {{W_{2}^{m}}}({\mathbb {R}}^{d})^{\ast }$$ with scaling order s is optimal, even among unscaled approximations. This was verified in many cases including dimensions 2 and 3 using MAPLE with extended precision. The number of decimal digits had to be beyond 100 in extreme situations. All the loglog plots of $$\|\varepsilon (h)\|_{{{W_{2}^{m}}}({\mathbb {R}}^{d})}$$ versus h show the standard linear behaviour for h → 0 if enough decimal digits are used and if started with small h values. Therefore, they are suppressed here. Instead, we present convergence rate estimates by plotting \begin{align*} \frac{\log(\|\varepsilon_{h_{i+1}}\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})})-\log(\|\varepsilon_{h_{i}}\|_{{{W_{2}^{m}}}({\mathbb{R}}^{d})})} {\log(h_{i+1})-\log(h_{i})} \end{align*} against hi. Fig. 1. Convergence rate estimates of the optimal εh in $$W_{2}^{3.75}({\mathbb {R}}^{2})$$ and $$W_{2}^{6.25}({\mathbb {R}}^{2})$$ approximating the Laplacian on 18 general points as a function of h. Fig. 2. Convergence rate estimates of the optimal εh in $$W_{2}^{4.5}({\mathbb {R}}^{3})$$ approximating the Laplacian on 10 general points as a function of h. For a specific case, we take M = 18 random points in 2 dimensions and approximate the Laplacian.
Then s = 2 and qmax(λ, X) = 5, leading to the expected convergence rate $$\min (m-3,3)$$ as a function of smoothness. Figure 1 shows the cases m = 3.75 and m = 6.25 with the expected rates 0.75 and 2, respectively. These correspond to situations where either the smoothness m or the size of X restricts the convergence rate. For illustration of the optimal compromise situation in (1.3), Fig. 2 shows the convergence rate 1 for approximation of the Laplacian in 3 dimensions on only 10 points in general position assuming smoothness m = 4.5. By Table 1 we expect a convergence rate between m − s − d/2 − ε = 1 − ε and 1 for all ε > 0 when using PEO q = m − d/2 = 3, but the true optimal convergence could be like $$h\log (h)$$. The issue cannot be visually decided. Test runs with the scalable approximations based on polynomial exactness show exactly the same behaviour, since they have the same convergence rate. To illustrate the ratio between the errors of scalable polyharmonic stencils and unscaled optimal approximations, Fig. 3 shows the error ratio in the 2-dimensional equilibrium case with 10 points and m = q = 4, tending to 1 for h → 0. The same remark as for the m = 4.5, d = 3 case applies here. Fig. 3. Quotient between errors of polyharmonic and optimal Sobolev approximations as functions of h. Fig. 4. Convergence rate estimate for the error norm of εh in $${{W_{2}^{4}}}({\mathbb {R}}^{2})$$ approximating the Laplacian on 6 general points by a stencil of polynomial exactness of order 3.
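The rate estimation just described can be reproduced in a tiny closed-form model case (our own sketch, not one of the paper's experiments): for λ(u) = u′(0) in d = 1 and m = 2, the kernel of $$W_{2}^{2}({\mathbb {R}})$$ is, up to a constant, Φ(r) = e^{−|r|}(1 + |r|), and the dual-norm error of the scaled central-difference stencil can be evaluated exactly; the estimated rate tends to m − s − d/2 = 1/2.

```python
import math

def Phi(r):
    """Kernel of W_2^2(R), up to a constant: K_{3/2}(r) r^{3/2} ~ e^{-|r|}(1 + |r|)."""
    return math.exp(-abs(r)) * (1.0 + abs(r))

def dPhi(r):
    """First derivative: Phi'(r) = -r e^{-|r|}."""
    return -r * math.exp(-abs(r))

# lambda(u) = u'(0), approximated by the scalable central-difference stencil
# on X = {-1, 0, 1} with scaling order s = 1
X = [-1.0, 0.0, 1.0]
a = [-0.5, 0.0, 0.5]

def sq_error_norm(h):
    """Squared dual norm of eps(h) = lambda - h^{-1} lambda_{a,hX}, via the kernel."""
    A = 1.0  # lambda^x lambda^y Phi(x - y) = -Phi''(0) = 1 for this kernel
    cross = sum(aj * dPhi(-h * xj) for aj, xj in zip(a, X)) / h
    quad = sum(aj * ak * Phi(h * (xj - xk))
               for aj, xj in zip(a, X) for ak, xk in zip(a, X)) / h ** 2
    return A - 2.0 * cross + quad

def rate_estimate(h1, h2):
    """Slope of log ||eps(h)|| versus log h, as plotted in the figures."""
    e1, e2 = math.sqrt(sq_error_norm(h1)), math.sqrt(sq_error_norm(h2))
    return (math.log(e1) - math.log(e2)) / (math.log(h1) - math.log(h2))
```

A short calculation gives ‖ε(h)‖² = 1 − 2e^{−h} + (1 − e^{−2h}(1 + 2h))/(2h²) = (2/3)h + O(h³), so the estimated rate between h = 10⁻² and h = 10⁻³ is 0.5 to several digits, matching m − s − d/2.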
To deal with the special situation of m − d/2 being an integer in Corollary 5.5 via polyharmonic kernels, we take 6 points in $${\mathbb {R}}^{2}$$ with q = qmax = 3 for the Laplacian with optimal convergence rate m − 2 − d/2 = 1 for m = 4. Working in BL4, 2 would need 10 points. A unique scalable stencil is obtained from $${\rm BL}_{m^{\prime },2}$$ with PEO q(m′, 2) = 3 for all $$3\leqslant m^{\prime }<4$$ and the convergence rate is at least m − s − d/2 − ε = 1 − ε for all ε > 0 by Table 1. The corresponding convergence rate estimate for m′ = 3.5 is in Fig. 4, and there is no visible $$\log (h)$$ factor. To see whether a $$\log (h)$$ term can be present in the situation of integer q = m − d/2, we take m = d = 2, q = 1, s = 0, i.e., interpolation. We need just a single point $$x\in {\mathbb {R}}^{2}$$ with ∥x∥2 = 1 for exactness on constants. The kernel is $$\phi (r)=rK_{1}(r)=1+\frac {1}{2}r^{2}\log r +{\mathcal O}(r^{2})$$ with ϕ(0) = 1. The optimal recovery for λ(u) = u(0) from u(hx) is the kernel interpolant, i.e., u(hx)ϕ(∥⋅−hx∥2), and the approximation error is \begin{align*} u(0)-u(hx)\phi(\|hx\|_{2})=u(0)-u(hx)\phi(h). \end{align*} In the dual of $${{W_{2}^{2}}}({\mathbb {R}}^{2})$$ the square of the norm of the error functional is \begin{align*} \|\delta_{0}-\phi(h)\delta_{hx}\|^{2}_{{{{W_{2}^{2}}}}^{\ast}({\mathbb{R}}^{2})} &= \phi(0)-\phi(h)^{2}\\[-3pt] &= -h^{2}\log(h)+{\mathcal O}(h^{2}), \end{align*}due to MAPLE. Since the standard error bound \begin{align*} |u(0)-u(hx)\phi(h)|\leqslant \|\delta_{0}-\phi(h)\delta_{hx}\|_{{{{W_{2}^{2}}}}^{\ast}({\mathbb{R}}^{2})} \|u\|_{{{W_{2}^{2}}}({\mathbb{R}}^{2})} \end{align*}is sharp, and since we constructed the optimal recovery, we have that the convergence for q = 1 is only $$h|\log (h)|^{1/2}$$ and not like the optimal behaviour hm−0−d/2 = h in Sobolev space $${{W_{2}^{2}}}({\mathbb {R}}^{2})$$. To reach the optimal rate, we need a PEO q ⩾ 2 by Table 1, i.e., at least 3 noncollinear points. 
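The borderline behaviour just derived is easy to check numerically (a sketch of ours assuming scipy is available; not part of the paper):

```python
import math
from scipy.special import kv  # modified Bessel function of the second kind

def phi(r):
    """phi(r) = r K_1(r), with phi(0) = 1 and phi(r) = 1 + (r^2/2) log r + O(r^2)."""
    return r * kv(1, r)

h = 1e-3
# leading non-smooth term of the expansion near 0
assert abs(phi(h) - 1.0 - 0.5 * h ** 2 * math.log(h)) < 0.5 * h ** 2

# squared dual norm of the error functional: phi(0) - phi(h)^2 = -h^2 log(h) + O(h^2),
# so the error norm behaves like h |log h|^{1/2} rather than h
err = math.sqrt(1.0 - phi(h) ** 2)
print(err / (h * math.sqrt(-math.log(h))))   # close to 1 for small h
```

The printed quotient drifts towards 1 only at a logarithmic speed, which is consistent with the O(h²) remainder inside the squared norm.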
As a curiosity, note that the above analysis works for all even dimensions, provided that the smoothness m = 1 + d/2 varies accordingly. Fig. 5. Gaussian native space convergence rate estimates for the error norms of the optimal and a polynomially exact stencil of order 7, approximating the Laplacian on 30 general points, as a function of h.

Table 2 Singular values for three point sets, for polynomial reproduction of order 7

Set | > 0.002 | ∈ [2.0e − 8, 3.6e − 7] | < 5.0e − 14
X1 | 28 | 0 | 0
X2 | 25 | 3 | 0
X3 | 18 | 9 | 1

Fig. 6. Absolute and Sobolev-relative error norms in $${{W_{2}^{8}}}({\mathbb {R}}^{2})$$ for approximations with PEO 7 on set X2. The suboptimal nearest-neighbor interpolation by constants has \begin{align*} \|\delta_{0}-\delta_{hx}\|^{2}_{{{{W_{2}^{2}}}}^{\ast}({\mathbb{R}}^{2})} &= 2-2\phi(h)\\ &= -h^{2}\log(h)+{\mathcal O}(h^{2}) \end{align*} and a more exact expansion via MAPLE shows that this is larger than the squared error for optimal one-point interpolation in $$W_{2}^{1+d/2}({\mathbb {R}}^{d})$$ by $${\mathcal O}(\log ^{2}(h)h^{4})$$. Fig. 7. Absolute and Sobolev-relative error norms in $${{W_{2}^{8}}}({\mathbb {R}}^{2})$$ for approximations with PEO 6 on set X2. In several numerical examples we verified the stencil convergence proven in Theorem 7.1, but the observed convergence rates turned out to be better than the proven ones. In particular, choosing 15 points in general position in $${\mathbb {R}}^{2}$$ with q = 5 led to a convergence rate $$\min (2,2m-10)$$ for m ⩾ 5 instead of $$\min (1,m-5)$$ in Theorem 7.1. This seems to be a consequence of superconvergence (Schaback, 1999, 2016), but needs further work. We now check approximation of the Laplacian in the native space of the Gaussian in Fig. 5. This should behave like $$m=\infty $$ in (1.2) and thus show a convergence rate qmax(λ, X) − s. We used 256 decimal digits for that example and took a set of 30 random points in 2 dimensions. Then qmax(Δ, X) = 7 and the observed convergence rate is indeed qmax − s = 5. Furthermore, this rate is attained already for a scalable stencil that is polynomially exact of order 7 on these points. We chose the optimal scalable polyharmonic stencil in BL7, 2 for this, and the ratio of the error norms was about 5. See Larsson et al. (2013) for a sophisticated way to circumvent the instability of calculating optimal nonscalable stencils for Gaussian kernels, but this paper suggests using scalable stencils calculated via polyharmonic kernels instead. We finally compare with approximations that optimize weights under the constraint of a fixed polynomial exactness (Davydov & Schaback, 2016b). The three point sets X1, X2 and X3 of Davydov & Schaback (2016b) have 32 points in [−1, +1]2 each, and the maximal possible order of polynomial reproduction in 2 dimensions is 7 if the geometry of the point set allows it.
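The optimal scalable polyharmonic stencils used in these experiments come from a small saddle-point system: the polyharmonic kernel matrix augmented by polynomial constraints. The following sketch (our illustration, with numpy; the 3×3 node set and m = 3.5 are chosen for brevity and are not the point sets of this section) builds such a stencil for the Laplacian with polynomial exactness of order 3 via the kernel r⁵:

```python
import numpy as np
from itertools import product

# 3x3 node set; polyharmonic kernel r^5 = H_{m,d} for m = 3.5, d = 2
# (the sign in (5.1) does not change the resulting stencil weights)
X = np.array(list(product((-1.0, 0.0, 1.0), repeat=2)))
M = len(X)

# polynomial basis of order 3 (total degree <= 2) and lambda = Laplacian at the origin
polys = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
lam_p = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])      # Laplacian of x^i y^j at 0

# saddle-point system [Phi P; P^T 0][a; c] = [lambda^y H(||x_j - y||); lambda(p)]
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
Phi = dist ** 5
P = np.array([[x ** i * y ** j for (i, j) in polys] for (x, y) in X])
rhs_H = 25.0 * np.linalg.norm(X, axis=1) ** 3         # Laplacian_y of ||x - y||^5 at y = 0
A = np.block([[Phi, P], [P.T, np.zeros((6, 6))]])
a = np.linalg.solve(A, np.concatenate([rhs_H, lam_p]))[:M]

# polynomial exactness of order 3 holds by construction of the constraint block
assert np.allclose(P.T @ a, lam_p, atol=1e-9)

# the stencil is scalable: apply it as h^{-2} sum_j a_j u(h x_j)
u = lambda x, y: np.exp(x + y)                        # Laplacian of u at 0 equals 2
h = 1e-2
approx = sum(aj * u(*(h * xj)) for aj, xj in zip(a, X)) / h ** 2
assert abs(approx - 2.0) < 1e-2
```

The constraint rows enforce the polynomial exactness, and by Iske (2003) the resulting weights can be reused unchanged on all scaled node sets hX.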
If everything works fine, this would result in convergence of optimal order 5 for the approximation of the Laplacian in Sobolev spaces of order m ⩾ 8, while the optimal rate for smaller m is m − 3. A simple singular value decomposition of the 28×32 value matrix of polynomials of order 7 on these points reveals that the small singular values in the three cases are as in Table 2. This means that only X1 permits working for exactness order 7 without problems, while X2 suggests order 6 and X3 should still work with order 5. If users require higher PEO, there is a risk of numerical instabilities. To demonstrate this effect, Fig. 6 shows what happens if both the polyharmonic and the minimal-weight approximations are kept at order 7 for the set X2. As Fig. 8 will show, the optimal Sobolev approximation stays at rate 4 for larger h and needs rather small h to show its optimal rate 5. In Fig. 6, both the polyharmonic and the minimal-weight approximations perform considerably worse than the optimum. If we go to PEO 6, we get Fig. 7, and now both approximations are close to what the Sobolev approximation does, though the latter is not at its optimal rate yet. In Fig. 8, the polyharmonic approximation is forced to stay at exactness order 7, while the weight-minimal approximation is taken at order 6 to allow more leeway for weight optimization. Now, in the same range as before, the weight-optimal approximation clearly outperforms the polyharmonic approximation. The same situation occurs on the set X3 under these circumstances; see Fig. 9. Thus, for problematic point sets, the polyharmonic approximation should get as much leeway as the minimal-weight approximation. Fig. 8. Absolute and Sobolev-relative error norms in $${{W_{2}^{8}}}({\mathbb {R}}^{2})$$ for polyharmonic approximation of order 7 and minimal approximation of order 6 on set X2. Fig. 9. Absolute and Sobolev-relative error norms in $${{W_{2}^{8}}}({\mathbb {R}}^{2})$$ for polyharmonic approximation of order 7 and minimal approximation of order 6 on set X3. The most sensible choice on X3 is to fix the exactness orders to 5, and the results are in Fig. 10. Neither approximation can compete with the convergence rate 4 that the Sobolev approximation shows in this range of h. The latter is calculated using 128 digits and can still use the point set as one that allows polynomial reproduction of order 6. The other two approximations are calculated at 32 decimal digits and see the set X3 as one that allows reproduction of order 5 only. To get back to a stable situation, we should lower the Sobolev smoothness to m = 6 to get Fig. 11. We are then back to a convergence rate like h3 in all cases. Fig. 10. Absolute and Sobolev-relative error norms in $${{W_{2}^{8}}}({\mathbb {R}}^{2})$$ for polyharmonic and minimal approximation of order 5 on set X3. Fig. 11. Absolute and Sobolev-relative error norms in $${{W_{2}^{6}}}({\mathbb {R}}^{2})$$ for polyharmonic and minimal approximation of order 5 on set X3. 9.
Summary and outlook We established the optimal convergence rate (1.2) of nodal approximations in Sobolev spaces and proved that it can be attained for scalable approximations with sufficient polynomial exactness. But we did not investigate the factors in front of the rates. For highly irregular nodes, it might be reasonable to go for a smaller convergence rate if the factor is much smaller than the one for the highest possible rate for that node configuration. This requires an analysis of how to use the additional degrees of freedom, and various possibilities for this are in Davydov & Schaback (2016b). On point sets that are badly distributed, it pays to avoid the highest possible order of polynomial exactness, and to use the additional degrees of freedom for minimization of weights along the lines of Davydov & Schaback (2016b) or to use optimal approximations by polyharmonic kernels at a smaller order of polynomial exactness. The kernels reproducing Sobolev spaces $${{W_{2}^{m}}}({\mathbb {R}}^{d})$$ have expansions into power series in r = ∥x − y∥2 that start with even powers of r until the polyharmonic kernel Hm, d occurs. This shows that error evaluation in Sobolev spaces can be replaced asymptotically by evaluation in Beppo–Levi spaces, and it suggests that the errors of optimal kernel-based approximations should be close to the errors of optimal scalable stencils based on polyharmonic kernels. This occurred in various experiments (see Fig. 3), but a more thorough investigation is needed. Finally, the exceptional case $$m-d/2\in {\mathbb {N}}$$ of the second row of Table 1 needs more attention. Approximating a functional with scaling order s by scalable stencils with the minimal PEO q = m − d/2 leads to unknown convergence behaviour between rates m − s − d/2 − ε and the optimal rate m − s − d/2 that is guaranteed for order q + 1 = m − d/2 + 1. 
The convergence could be like $${\mathcal O}(h^{m-s-d/2}|\log (h)|^{p})$$, for instance, and we presented an example with p = 1/2 for m = d = 2, s = 0.

References

Aboiyar, T., Georgoulis, E. & Iske, A. (2010) Adaptive ADER methods using kernel-based polyharmonic spline WENO reconstruction. SIAM J. Sci. Comput., 32, 3251–3277.
Agarwal, D. & Basu, P. (2012) Development of a meshless local RBF-DQ solver and its applications in computational fluid dynamics. Int. J. Numer. Methods Appl., 7, 41–55.
Atluri, S. N. (2005) The Meshless Method (MLPG) for Domain and BIE Discretizations. Encino, CA: Tech Science Press.
Bayona, V., Moscoso, M. & Kindelan, M. (2012) Gaussian RBF-FD weights and its corresponding local truncation errors. Eng. Anal. Bound. Elem., 36, 1361–1369.
Beatson, R., Bui, H. & Levesley, J. (2005) Embeddings of Beppo-Levi spaces in Hölder-Zygmund spaces and a new method for radial basis function interpolation error estimates. J. Approximation Theory, 137, 166–178.
Belytschko, T., Krongauz, Y., Organ, D., Fleming, M. & Krysl, P. (1996) Meshless methods: an overview and recent developments. Comput. Methods Appl. Mechanics Eng., 139, 3–47.
Bramble, J. & Hilbert, S. (1970) Estimation of linear functionals on Sobolev spaces with application to Fourier transforms and spline interpolation. SIAM J. Numer. Anal., 7, 112–124.
Chandhini, G. & Sanyasiraju, Y. V. S. S. (2007) Local RBF-FD solutions for steady convection-diffusion problems. Internat. J. Numer. Methods Engrg., 72, 352–378.
Davydov, O. & Oanh, D. (2011a) Adaptive meshless centres and RBF stencils for Poisson equation. J. Comput. Phys., 230, 287–304.
Davydov, O. & Oanh, D.
(2011b) On the optimal shape parameter for Gaussian radial basis function finite difference approximation of the Poisson equation. Comput. Math. Appl., 62, 2143–2161.
Davydov, O., Oanh, D. & Phu, H. (2017) Adaptive RBF-FD method for elliptic problems with point singularities in 2D. Appl. Math. Comput., 313, 474–497.
Davydov, O. & Schaback, R. (2016a) Error bounds for kernel-based numerical differentiation. Numer. Math., 132, 243–269.
Davydov, O. & Schaback, R. (2016b) Minimal numerical differentiation formulas. Preprint, in revision for Numer. Math., available at http://arxiv.org/abs/1611.05001.
Flyer, N., Fornberg, B., Bayona, V. & Barnett, G. (2016) On the role of polynomials in RBF-FD approximations: I. Interpolation and accuracy. J. Comput. Phys., 321, 21–38.
Flyer, N., Lehto, E., Blaise, S., Wright, G. & St.-Cyr, A. (2012) A guide to RBF-generated finite differences for nonlinear transport: shallow water simulations on a sphere. J. Comput. Phys., 231, 4078–4095.
Gerace, S., Erhart, K., Divo, E. & Kassab, A. (2009) Local and virtual RBF meshless method for high-speed flows. Mesh Reduction Methods—BEM/MRM XXXI. WIT Trans. Model. Simul., vol. 49. Southampton: WIT Press, pp. 83–94.
Hoang-Trieu, T.-T., Mai-Duy, N. & Tran-Cong, T. (2012) Several compact local stencils based on integrated RBFs for fourth-order ODEs and PDEs. CMES Comput. Model. Eng. Sci., 84, 171–203.
Hosseini, B. & Hashemi, R. (2011) Solution of Burgers’ equation using a local-RBF meshless method. Int. J. Comput. Methods Eng. Sci. Mech., 12, 44–58.
Iske, A. (1995) Reconstruction of functions from generalized Hermite-Birkhoff data. Approximation Theory VIII, Vol. 1 (C. Chui & L. Schumaker eds). Singapore: World Scientific, pp. 257–264.
Iske, A.
(2003) On the approximation order and numerical stability of local Lagrange interpolation by polyharmonic splines. Modern Developments in Multivariate Approximation. Basel: Birkhäuser, pp. 153–165.
Iske, A. (2011) On the stability of polyharmonic spline reconstruction. Conference Proceedings of Sampling Theory and Applications, SampTA2011. https://pdfs.semanticscholar.org/4bf6/fb9a393853d706dcac33ae802eedaf6e1b2e.pdf.
Iske, A. (2013) On the construction of kernel-based adaptive particle methods in numerical flow simulation. Recent Developments in the Numerics of Nonlinear Hyperbolic Conservation Laws (R. Ansorge, H. Bijl, A. Meister & T. Sonar eds). Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM). Berlin: Springer, pp. 197–221.
Kansa, E. (2015) Radial basis functions: achievements and challenges. Boundary Elements and Other Mesh Reduction Methods XXXVII (A. Cheng & C. Brebbia eds), vol. 61. Southampton, UK: WIT Press, pp. 3–22.
Larsson, E., Lehto, E., Heryudono, A. & Fornberg, B. (2013) Stable computation of differentiation matrices and scattered node stencils based on Gaussian radial basis functions. SIAM J. Sci. Comput., 35, A2096–A2119.
NIST (2015) Digital Library of Mathematical Functions. Technical Report. National Institute of Standards and Technology, USA. Available at http://dlmf.nist.gov/.
Šarler, B. (2007) From global to local radial basis function collocation method for transport phenomena. Advances in Meshfree Techniques. Comput. Methods Appl. Sci., vol. 5. Dordrecht: Springer, pp. 257–282.
Schaback, R. (1997) Reconstruction of multivariate functions from scattered data. Manuscript. Available at http://www.num.math.uni-goettingen.de/schaback/research/group.html.
Schaback, R. (1999) Improved error bounds for scattered data interpolation by radial basis functions. Math. Comput., 68, 201–216.
Schaback, R.
( 2005) Multivariate interpolation by polynomials and radial basis functions. Constructive Approximation , 21, 293-- 317. Google Scholar CrossRef Search ADS Schaback, R. ( 2016) Superconvergence of kernel-based interpolation. Preprint, available at https://arxiv.org/abs/1607.04219. Schaback, R. ( 2017) Error analysis of nodal meshless methods. Meshfree Methods for Partial Differential Equations VIII (M. Griebel & M. Schweitzer eds). Lecture Notes in Computational Science and Engineering, vol. 115. Cham, Switzerland: Springer International Publishing, pp. 117–143. Shankar, V., Wright, G., Kirby, R. & Fogelson, A. ( 2015) A radial basis function (RBF)-finite difference (FD) method for diffusion and reaction-diffusion equations on surfaces. J. Sci. Comput. , 63, 745-- 768. Google Scholar CrossRef Search ADS Shu, C., Ding, H. & Yeo, K. ( 2003) Local radial basis function-based differential quadrature method and its application to solve two-dimensional incompressible Navier-Stokes equations. Comput. Methods Appl. Mech. Eng. , 192, 941-- 954. Google Scholar CrossRef Search ADS Shu, C., Ding, H. & Yeo, K. ( 2005) Computation of incompressible Navier-Stokes equations by local RBF-based differential quadrature method. CMES Comput. Model. Eng. Sci. , 7, 195-- 205. Stevens, D., Power, H., Lees, M. & Morvan, H. ( 2011) A local Hermitian RBF meshless numerical method for the solution of multi-zone problems. Numer. Methods Partial Differential Equations , 27, 1201-- 1230. Google Scholar CrossRef Search ADS Thai-Quang, N., Le-Cao, K., Mai-Duy, N. & Tran-Cong, T. ( 2012) A high-order compact local integrated-RBF scheme for steady-state incompressible viscous flows in the primitive variables. CMES Comput. Model. Eng. Sci. , 84, 528-- 557. Tolstykh, A. ( 2000) On using RBF-based differencing formulas for unstructured and mixed structured-unstructured grid calculations. Proceedings of the 16th IMACS World Congress 228. ISBN 3-9522075-1-9, CD-ROM, pp. 4606–4624. Vertnik, R. 
& Šarler, B. ( 2011) Local collocation approach for solving turbulent combined forced and natural convection problems. Adv. Appl. Math. Mech. , 3, 259-- 279. Google Scholar CrossRef Search ADS Wendland, H. ( 2005) Scattered Data Approximation . Cambridge, UK: Cambridge University Press, p. 394. Wright, G. & Fornberg, B. ( 2006) Scattered node compact finite difference-type formulas generated from radial basis functions. J. Comput. Phys. , 212, 99-- 123. Google Scholar CrossRef Search ADS Yao, G., Šarler, B. & Chen, C. S. ( 2011) A comparison of three explicit local meshless methods using radial basis functions. Eng. Anal. Bound. Elem. , 35, 600-- 609. Google Scholar CrossRef Search ADS Yao, G., ul Islam, S. & Šarler, B. ( 2012) Assessment of global and local meshless methods based on collocation with radial basis functions for parabolic partial differential equations in three dimensions. Eng. Anal. Bound. Elem. , 36, 1640-- 1648. Google Scholar CrossRef Search ADS © The Author(s) 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

