Two-scale method for the Monge–Ampère equation: pointwise error estimates

Abstract

In this paper we continue the analysis of the two-scale method for the Monge–Ampère equation in dimension d ≥ 2 introduced in the study by Nochetto et al. (2017, Two-scale method for the Monge–Ampère equation: convergence to the viscosity solution. Math. Comput., in press). We prove continuous dependence of discrete solutions on data, which in turn hinges on a discrete version of the Alexandroff estimate. Both results are instrumental in proving pointwise error estimates for classical solutions with Hölder and Sobolev regularity. We also derive convergence rates for viscosity solutions with bounded Hessians, which may be piecewise smooth or degenerate.

1. Introduction

We consider the Monge–Ampère equation with Dirichlet boundary condition \begin{equation} \begin{cases} \det{D^2u}=f\quad \textrm{in}\ \varOmega \subset \mathbb{R}^d, \\ \qquad\ \ u =g\quad \textrm{on}\ \partial \varOmega, \end{cases} \end{equation} (1.1) where f ≥ 0 is uniformly continuous, $$\varOmega $$ is a uniformly convex domain and g is a continuous function. We seek a convex solution u of (1.1); convexity is critical for (1.1) to be elliptic and to admit a unique viscosity solution (Gutiérrez, 2001). The Monge–Ampère equation has a wide spectrum of applications, which has led to increasing interest in the development of efficient numerical methods. Several methods for the Monge–Ampère equation already exist, as described in the study by Nochetto et al. (2017). Error estimates in $$H^1(\varOmega )$$ are established in the studies by Brenner et al. (2011) and Brenner & Neilan (2012) for solutions with $$H^3(\varOmega )$$ regularity or more.
Awanou (2016) also proved a linear rate of convergence of the wide-stencil method for classical solutions when applied to a perturbed Monge–Ampère equation with an extra lower-order term $$\delta u$$; the parameter $$\delta > 0$$ is independent of the mesh and appears in reciprocal form in the rate. On the other hand, Nochetto and Zhang followed an approach based on the discrete Alexandroff estimate developed in Nochetto & Zhang (2014) and established pointwise error estimates in Nochetto & Zhang (2016) for the method of Oliker & Prussner (1988). In this paper we follow a similar approach and derive pointwise rates of convergence for classical solutions of (1.1) with Hölder or Sobolev regularity and for viscosity solutions with bounded Hessians, which may be piecewise smooth or degenerate. It is worth mentioning a rather strong connection between the semi-Lagrangian method of Feng & Jensen (2016) and the two-scale approach introduced in the study by Nochetto et al. (2017). In fact, for an appropriate choice of discretization of symmetric positive semidefinite (SPSD) matrices with trace 1, discussed along with implementation aspects in the study by Feng & Jensen (2016), one can show that the discrete solutions of both methods coincide. Therefore, the error estimates in this paper extend to the fully discrete method of Feng & Jensen (2016). This rather surprising equivalence property is fully derived in a forthcoming paper, along with optimal error estimates in special cases via enhanced techniques for pointwise error analysis.

1.1. Our contribution

The two-scale method was introduced in the study by Nochetto et al.
(2017) and hinges on the following formula for the determinant of the positive semidefinite Hessian $$D^2w$$ of a smooth convex function w, first suggested by Froese & Oberman (2012): \begin{equation} \det{D^2w}(x) = \min\limits_{(v_1,\ldots,v_d) \in \mathbb{S}^{\perp}} \prod_{j=1}^d v_j^TD^2w(x) \ v_j, \end{equation} (1.2) where $$\mathbb{S}^{\perp }$$ is the set of all d-orthonormal bases in $$\mathbb{R}^d$$. To discretize this expression, we impose that our discrete solutions lie in a space of continuous piecewise linear functions over an unstructured quasi-uniform mesh $$\mathcal{T}_h$$ of size h; this defines the fine scale. The mesh also defines the computational domain $$\varOmega _h$$, which we describe in more detail in Section 2. The coarser scale $$\delta $$ is the length of the directions used to approximate the directional derivatives that appear in (1.2), namely, \begin{equation*} \nabla^2_{\delta} w (x;v) := \frac{ w(x+\delta v) -2w(x) +w(x-\delta v) }{ \delta^2} \quad \textrm{and} \quad |v| = 1, \end{equation*} for any $$w\in C^0(\overline{\varOmega })$$. To render the method practical, we introduce a discretization $$\mathbb{S}^{\perp }_{\theta }$$ of the set $$\mathbb{S}^{\perp }$$ governed by the parameter $$\theta $$ and denote our discrete solution by $$u_{\varepsilon }$$, where $$\varepsilon = (h,\delta , \theta )$$ collects the two scales and the angular parameter $$\theta $$. We define the discrete Monge–Ampère operator to be \begin{equation*} T_{\varepsilon}[u_{\varepsilon}](x_i):=\min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \left( \prod_{j=1}^d \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) - \sum_{j=1}^d \nabla^{2,-}_{\delta} u_{\varepsilon} (x_i;v_j) \right), \end{equation*} where $$\nabla _\delta ^{2,\pm }$$ are the positive and negative parts of $$\nabla _\delta ^ 2$$. In Section 2 we briefly review the role of each term in the operator $$T_{\varepsilon }$$ and recall some of its key properties.
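The following is a minimal sketch of the operator $$T_{\varepsilon }$$ in dimension d = 2, with orthonormal bases sampled by a single rotation angle; for illustration we evaluate w directly as a callable rather than through its piecewise linear interpolant on a mesh, so the fine scale h plays no role here, and all names are ours rather than the paper's.

```python
import math

def second_diff(w, x, v, delta):
    # centered second difference of w at x in direction v (formula above)
    xp = (x[0] + delta * v[0], x[1] + delta * v[1])
    xm = (x[0] - delta * v[0], x[1] - delta * v[1])
    return (w(xp) - 2.0 * w(x) + w(xm)) / delta**2

def T_eps(w, x, delta, n_angles=64):
    # min over sampled orthonormal bases of: product of positive parts
    # minus sum of negative parts of the second differences
    best = math.inf
    for k in range(n_angles):
        phi = math.pi * k / (2 * n_angles)      # bases repeat after pi/2
        basis = [(math.cos(phi), math.sin(phi)),
                 (-math.sin(phi), math.cos(phi))]
        d2 = [second_diff(w, x, v, delta) for v in basis]
        pos = 1.0
        for s in d2:
            pos *= max(s, 0.0)
        neg = sum(-min(s, 0.0) for s in d2)
        best = min(best, pos - neg)
    return best

convex = lambda x: 0.5 * (x[0]**2 + x[1]**2)    # det D^2 = 1 everywhere
saddle = lambda x: x[0]**2 - x[1]**2            # not convex

t_convex = T_eps(convex, (0.2, -0.3), 0.1)
t_saddle = T_eps(saddle, (0.0, 0.0), 0.1)
```

For the convex quadratic the centered second differences are exact and $$T_\varepsilon$$ returns $$\det D^2w = 1$$, while for the saddle the negative parts make $$T_\varepsilon$$ negative; this is how the operator penalizes lack of discrete convexity.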
The merit of this definition of $$T_{\varepsilon }$$ is that it leads to a clear separation of scales, which is a key theoretical advantage over the original wide-stencil method of Froese & Oberman (2012). It also yields continuous dependence of discrete solutions on data, namely Proposition 4.8, which allows us to prove rates of convergence in $$L^{\infty }(\varOmega )$$ for our method that depend on the regularity of u; such a result is not available for the wide-stencil method of Froese & Oberman (2012). Moreover, the two-scale method is formulated over unstructured meshes $$\mathcal{T}_h$$, which adds flexibility to partition arbitrary uniformly convex domains $$\varOmega $$. This comes at the expense of the points $$x_i\pm \delta v_j$$ no longer being nodes of $$\mathcal{T}_h$$, which is responsible for an additional interpolation error in the consistency estimate of $$T_\varepsilon $$. To locate such points and evaluate $$\nabla ^2_{\delta } u_{\varepsilon }(x_i;v_j)$$, we resort to the fast search techniques of Walker (2018) and thus render the two-scale method practical. Compared with the error analysis of the Oliker–Prussner method (Nochetto & Zhang, 2016), we do not require $$\mathcal{T}_h$$ to be Cartesian. In the study by Nochetto et al. (2017) we prove existence and uniqueness of a discrete solution for our method and convergence to the viscosity solution of (1.1), without regularity beyond uniform continuity of f and g. This entails dealing with the $$L^\infty $$-norm and using the discrete comparison principle for piecewise linear functions (monotonicity). Within this $$L^\infty $$ framework and under the regularity requirement $$u \in W^2_\infty (\varOmega )$$, we now prove rates of convergence for classical solutions with either Hölder or Sobolev regularity and for a special class of viscosity solutions.
Therefore, our two-scale method (Nochetto et al., 2017) and the Oliker–Prussner method (Nochetto & Zhang, 2016; Oliker & Prussner, 1988) are the only schemes known to us that converge to the viscosity solution and have provable rates of convergence. The first important tool for proving pointwise rates of convergence is the discrete Alexandroff estimate introduced in the study by Nochetto & Zhang (2014): if $$w_h$$ is an arbitrary continuous piecewise linear function, $$w_h\ge 0$$ on $$\partial \varOmega _h$$, and $$\varGamma w_h$$ stands for its convex envelope, then \begin{equation*} \max\limits_{ x_i \in \mathcal{N}_h } w_h^-(x_i) \leq C \left( \sum\limits_{x_i \in C_{-}(w_h)} \left| \partial \varGamma w_h (x_i) \right| \right)^{1/d}, \end{equation*} where $$\partial \varGamma w_h$$ is the subdifferential of $$\varGamma w_h$$ and $$C_{-}(w_h)$$ represents the lower contact set of $$w_h$$, i.e. the set of interior nodes $$x_i\in \mathcal{N}_h^0$$ such that $$\varGamma w_h(x_i)=w_h(x_i)$$; hereafter we write $$w_h^-(x_i) := - \min \{w_h(x_i),0\}$$. To control the measure of the subdifferential at each node, we show the estimate \begin{equation*} |\partial w_h(x_i)| \leq \delta^d \min\limits_{(v_1,\ldots,v_d)\in \mathbb{S}^{\perp}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j) \end{equation*} for all $$x_i\in \mathcal{N}_h^0$$ such that the ball centered at $$x_i$$ and of radius $$\delta $$ is contained in $$\varOmega _h$$. Combining both estimates, we derive the continuous dependence estimate \begin{equation*} \max_{\varOmega_h} \, (u_h-w_h)^- \leq C \delta \left( \sum\limits_{x_i \in C_{-}(u_h-w_h)} \left( T_\varepsilon[u_h](x_i)^{1/d}-T_\varepsilon[w_h](x_i)^{1/d} \right)^d \right)^{1/d} \end{equation*} for all continuous piecewise linear functions $$u_h$$ and $$w_h$$ such that $$T_\varepsilon [u_h](x_i)\ge 0$$ and $$T_\varepsilon [w_h](x_i)\ge 0$$ for all $$x_i\in \mathcal{N}_h^0$$.
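The subdifferential bound above can be checked by hand in a simple case. For $$w(x)=|x_1|+|x_2|$$ on all of $$\mathbb{R}^2$$ (ignoring the boundary restriction), $$\partial w(0)=[-1,1]^2$$ has measure 4, and $$\delta^d\min_{\mathbf v}\prod_j \nabla^2_\delta w(0;v_j)$$ equals exactly 4, attained at the coordinate basis. A sketch of this computation (names ours):

```python
import math

def w(x):                       # convex with a kink at the origin
    return abs(x[0]) + abs(x[1])

def second_diff(f, x, v, delta):
    xp = (x[0] + delta * v[0], x[1] + delta * v[1])
    xm = (x[0] - delta * v[0], x[1] - delta * v[1])
    return (f(xp) - 2.0 * f(x) + f(xm)) / delta**2

delta = 0.25
best = math.inf
for k in range(360):                         # bases repeat after pi/2
    phi = math.pi * k / 720
    basis = [(math.cos(phi), math.sin(phi)), (-math.sin(phi), math.cos(phi))]
    prod = 1.0
    for v in basis:
        prod *= second_diff(w, (0.0, 0.0), v, delta)
    best = min(best, prod)

bound = delta**2 * best        # delta^d * min over bases of the product
subdiff_measure = 4.0          # |partial w(0)| = |[-1,1]^2|, computed by hand
```

Here $$\nabla^2_\delta w(0;v) = 2(|v_1|+|v_2|)/\delta$$, so $$\delta^2\prod_j \nabla^2_\delta w(0;v_j) = 4(|v_1|+|v_2|)^2 \ge 4$$ with equality for the coordinate directions; the estimate is sharp at the kink.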
This result is instrumental and, combined with operator consistency and a discrete barrier argument close to the boundary, eventually leads to the pointwise error estimates \begin{equation*} \|u_{\varepsilon}-u\|_{L^\infty(\varOmega_h)} \leq C(d, \varOmega,f,u) \ h^{\frac{\alpha+k}{\alpha+k+2}} \end{equation*} provided $$u\in C^{2+k,\alpha }(\overline{\varOmega })$$ with $$0<\alpha \le 1$$ and k = 0, 1, as well as \begin{equation*} \|u_{\varepsilon}-u\|_{L^\infty(\varOmega_h)} \leq C(d, \varOmega,f,u) \ h^{1-\frac{2}{s}} \end{equation*} provided $$u\in W^s_p(\varOmega )$$ with $$2+d/p<s\le 4$$ and p > d, and $$\delta $$ is suitably chosen in terms of h; see Theorems 5.3 and 5.4. We also consider a special case of viscosity solutions with bounded but discontinuous Hessians and manage to prove a rate of convergence (see Theorem 5.5). Since these theorems are proven under the nondegeneracy assumption f > 0, we examine in Theorem 5.6 the effect of the degeneracy f ≥ 0. In the study by Nochetto et al. (2017) we explore numerically both classical and $$W^2_\infty $$ viscosity solutions and observe linear rates with respect to h in both cases, which are better than predicted by this theory.

1.2. Outline

We start by briefly presenting the operator $$T_\varepsilon $$ in Section 2 and recalling some important results from Nochetto et al. (2017). In Section 3 we state the discrete Alexandroff estimate and combine it in Section 4 with some geometric estimates to obtain the continuous dependence of the discrete solution on data. This is much stronger than stability and is critical to prove rates of convergence for fully nonlinear partial differential equations. Lastly, in Section 5 we combine this result with operator consistency and a discrete barrier argument close to the boundary to derive rates of convergence upon making judicious choices of $$\delta $$ and $$\theta $$ in terms of h.

2. Key properties of the discrete operator

We briefly recall some of the key properties of the operator $$T_\varepsilon $$, as proven in the study by Nochetto et al. (2017).

2.1 Definition of $$T_{\varepsilon}$$

Let $$\mathcal{T}_h$$ be a shape-regular and quasi-uniform triangulation with mesh size h. The computational domain $$\varOmega _h$$ is the union of the elements of $$\mathcal{T}_h$$ and in general $$\varOmega _h\ne \varOmega $$. If $$\mathcal{N}_h$$ denotes the nodes of $$\mathcal{T}_h$$, then $$\mathcal{N}_h^b := \{x_i \in \mathcal{N}_h: x_i \in \partial \varOmega _h\}$$ are the boundary nodes and $$\mathcal{N}_h^0 := \mathcal{N}_h \setminus \mathcal{N}_h^b$$ are the interior nodes. We require that $$\mathcal{N}_h^b \subset \partial \varOmega $$, which in view of the convexity of $$\varOmega $$ implies that $$\varOmega _h$$ is also convex and $$\varOmega _h \subset \varOmega $$. We denote by $$\mathbb{V}_h$$ the space of continuous piecewise linear functions over $$\mathcal{T}_h$$. We let $$\mathbb{S}^{\perp }$$ be the collection of all d-tuples of orthonormal vectors forming bases of $$\mathbb{R}^d$$ and $$\mathbf{v} := (v_1,\ldots ,v_d) \in \mathbb{S}^{\perp }$$ be a generic element, whence each component $$v_i$$ belongs to the unit sphere $$\mathbb{S}$$ of $$\mathbb{R}^d$$. We next introduce a finite subset $$\mathbb{S}_\theta $$ of $$\mathbb{S}$$ governed by the angular parameter $$\theta>0$$: given $$v \in \mathbb S$$, there exists $$v^{\theta }\in \mathbb{S}_\theta $$ such that $$ |v-v^{\theta}| \leq \theta. $$ Likewise, we let $$\mathbb{S}^{\perp }_{\theta }\subset \mathbb{S}^{\perp }$$ be a finite approximation of $$\mathbb{S}^{\perp }$$: for any $$\mathbf v = (v_j)_{j=1}^d \in \mathbb{S}^{\perp }$$ there exists $$\mathbf v^{\theta } = \big(v_j^{\theta }\big)_{j=1}^d \in \mathbb{S}^{\perp }_{\theta }$$ such that $$v_j^\theta \in \mathbb S_{\theta }$$ and $$\big |v_j - v_j^{\theta }\big| \leq \theta $$ for all 1 ≤ j ≤ d, and conversely.
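In dimension d = 2 one concrete realization of $$\mathbb{S}_\theta$$ is the set of directions at equispaced angles, each paired with its rotation by $$\pi/2$$ to form $$\mathbb{S}^{\perp}_{\theta}$$. A sketch assuming this equispaced construction (the paper does not prescribe a particular one):

```python
import math

def build_S_theta(theta):
    """Equispaced unit directions on the circle with approximation quality
    at most theta, plus the induced orthonormal bases (v, v_perp)."""
    n = max(4, math.ceil(2.0 * math.pi / theta))
    dirs = [(math.cos(2.0 * math.pi * k / n), math.sin(2.0 * math.pi * k / n))
            for k in range(n)]
    bases = [(v, (-v[1], v[0])) for v in dirs]   # rotate each v by pi/2
    return dirs, bases

def approximation_gap(dirs, v):
    # Euclidean distance from v to its best approximation in dirs
    return min(math.dist(v, u) for u in dirs)

theta = 0.1
dirs, bases = build_S_theta(theta)
worst = max(approximation_gap(dirs, (math.cos(t), math.sin(t)))
            for t in [0.01 * i for i in range(629)])
```

Since consecutive directions are at most $$2\pi/n \le \theta$$ apart in angle, every unit vector is within chordal distance roughly $$\theta/2$$ of $$\mathbb{S}_\theta$$. The cardinality grows like $$\theta^{-1}$$ in two dimensions (and like $$\theta^{-(d-1)}$$ per direction in general), which is why the consistency error in $$\theta$$ must later be balanced against h and $$\delta$$.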
For $$x_i\in \mathcal{N}_h^0$$, we use centered second differences with a coarse scale $$\delta $$, \begin{equation} \nabla^2_{\delta} w(x_i;v_j) := \frac{ w\left(x_i+ \hat\delta v_j\right) -2 w(x_i) + w\left(x_i- \hat\delta v_j\right) }{ \hat\delta^2}, \end{equation} (2.1) where $$\hat \delta :=\rho \delta $$ and $$0 < \rho \le 1$$ is the largest number such that the ball centered at $$x_i$$ of radius $$\hat \delta $$ is contained in $$\varOmega _h$$; we stress that $$\rho $$ need not be computed exactly. This is well defined for any $$w \in C^0(\overline{\varOmega })$$, in particular for $$w\in \mathbb{V}_h$$. We set $$\varepsilon := (h,\delta , \theta )$$ and seek $$u_{\varepsilon } \in \mathbb{V}_h$$ such that $$u_{\varepsilon }(x_i)=g(x_i)$$ for $$x_i \in \mathcal{N}_h^b$$ and, for $$x_i \in \mathcal{N}_h^0$$, \begin{equation} T_{\varepsilon}[u_{\varepsilon}](x_i):=\min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \left( \prod_{j=1}^d \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) - \sum_{j=1}^d \nabla^{2,-}_{\delta} u_{\varepsilon} (x_i;v_j) \right) = f(x_i), \end{equation} (2.2) where we use the notation \begin{equation*} \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) = \max{\left(\nabla^2_{\delta} u_{\varepsilon}(x_i;v_j),0\right)}, \quad \nabla^{2,-}_{\delta} u_{\varepsilon}(x_i;v_j) = -\min{\left(\nabla^2_{\delta} u_{\varepsilon}(x_i;v_j),0\right)} \end{equation*} to indicate the positive and negative parts of the centered second differences.

2.2 Key properties of $$T_{\varepsilon}$$

One of the critical properties of the Monge–Ampère equation is the convexity of the solution u. The following notion mimics this at the discrete level. Definition 2.1 (Discrete convexity). We say that $$w_h \in \mathbb V_h$$ is discretely convex if \begin{equation*} \nabla^2_{\delta} w_h(x_i;v_j) \geq 0 \qquad \forall\, x_i \in \mathcal{N}_h^0, \quad \forall\, v_j \in \mathbb S_{\theta}.
\end{equation*} The following lemma guarantees the discrete convexity of subsolutions of (2.2) (Nochetto et al., 2017, Lemma 2.2). Lemma 2.2 (Discrete convexity). If $$w_h \in \mathbb{V}_h$$ satisfies \begin{equation} T_{\varepsilon}[w_h](x_i) \geq 0 \quad \forall\, x_i \in \mathcal{N}_h^0, \end{equation} (2.3) then $$w_h$$ is discretely convex and as a consequence \begin{equation} T_{\varepsilon} [w_h](x_i)= \min_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j), \end{equation} (2.4) namely, \begin{equation*} \nabla^{2,+}_{\delta} w_h(x_i;v_j) = \nabla^2_{\delta} w_h(x_i;v_j), \quad \nabla^{2,-}_{\delta} w_h(x_i;v_j) =0 \quad\forall\, x_i\in \mathcal{N}_h^0,\quad\forall\, v_j\in \mathbb S_{\theta}. \end{equation*} Conversely, if $$w_h$$ is discretely convex, then (2.3) is valid. Another important property of operator $$T_\varepsilon $$ that relies on its monotonicity is the following discrete comparison principle (Nochetto et al., 2017, Lemma 2.4). Lemma 2.3 (Discrete comparison principle). Let $$u_h,w_h \in \mathbb{V}_h$$ with $$u_h \leq w_h$$ on the discrete boundary $$\partial \varOmega _h$$ be such that \begin{equation} T_{\varepsilon}[u_h](x_i) \geq T_{\varepsilon}[w_h](x_i) \geq 0 \ \ \ \forall\, x_i \in \mathcal{N}_h^0. \end{equation} (2.5) Then, $$u_h \leq w_h$$ everywhere. We now state a consistency estimate, proved in (Nochetto et al., 2017, Lemma 4.1), that leads to pointwise rates of convergence. To this end, given a node $$x_i \in \mathcal{N}_h^0$$, we denote by \begin{equation} B_i := \cup \left\{\overline{T}: T\in\mathcal{T}_h, \, \textrm{dist} (x_i,T) \le \hat\delta\right\}, \end{equation} (2.6) where $$\hat \delta $$ is defined in (2.1). 
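Before stating the consistency estimate, it helps to see formula (2.4) at work on a quadratic, for which the centered second differences are exact: for $$u(x) = x_1^2 + \tfrac12 x_2^2$$ we have $$\nabla^2_\delta u(x;v) = v^TD^2u\,v = 2v_1^2+v_2^2$$, and the product over an orthonormal basis at angle $$\phi$$ is $$(1+\cos^2\phi)(1+\sin^2\phi) = 2+\cos^2\phi\sin^2\phi \ge 2 = \det D^2u$$, with equality at the eigenbasis. A sketch under the same illustrative assumptions as before (d = 2, direct evaluation of u, bases sampled by a single angle):

```python
import math

def u(x):                       # convex quadratic, det D^2 u = 2 everywhere
    return x[0]**2 + 0.5 * x[1]**2

def second_diff(f, x, v, delta):
    xp = (x[0] + delta * v[0], x[1] + delta * v[1])
    xm = (x[0] - delta * v[0], x[1] - delta * v[1])
    return (f(xp) - 2.0 * f(x) + f(xm)) / delta**2

def min_product(f, x, delta, n_angles=256):
    # formula (2.4): min over sampled orthonormal bases of the product
    best, arg = math.inf, 0.0
    for k in range(n_angles):
        phi = math.pi * k / (2 * n_angles)
        basis = [(math.cos(phi), math.sin(phi)), (-math.sin(phi), math.cos(phi))]
        prod = 1.0
        for v in basis:
            prod *= second_diff(f, x, v, delta)
        if prod < best:
            best, arg = prod, phi
    return best, arg

val, phi = min_product(u, (0.3, -0.1), 0.05)
```

The minimum equals 2 and is attained at $$\phi = 0$$, i.e. at the coordinate (eigen)basis; a misaligned basis overestimates the determinant, never underestimates it.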
We also define the $$\delta $$-interior region \begin{equation} \varOmega_{h,\delta} = \cup \left\{ \overline{T}: T \in \mathcal{T}_h, \ \textrm{dist} (x, \partial \varOmega_h ) \geq \delta \ \forall\, x \in T \right\}, \end{equation} (2.7) and the $$\delta $$-boundary region \begin{equation*} \omega_{h,\delta} = \varOmega \setminus \varOmega_{h,\delta}. \end{equation*} Lemma 2.4 (Consistency of $$T_{\varepsilon } [\mathcal{I}_h u]$$). Let $$x_i \in \mathcal{N}_h^0 \,\cap\, \varOmega _{h,\delta }$$ and $$B_i$$ be defined as in (2.6). If $$u \in C^{2+k,\alpha }(B_i)$$ with $$0<\alpha \leq 1$$ and k = 0, 1 is convex, and $$\mathcal I_h u $$ is its piecewise linear interpolant, then \begin{equation} \left| \det D^2u(x_i) - T_{\varepsilon}[\mathcal I_h u] (x_i) \right| \leq C_1(d,\varOmega,u) \delta^{k+\alpha} + C_2(d,\varOmega,u) \left( \frac{h^2}{\delta^2} + \theta^2 \right), \end{equation} (2.8) where \begin{equation} C_1(d,\varOmega,u)= C |u|_{C^{2+k,\alpha}(B_i)} |u|_{W^2_{\infty}(B_i)}^{d-1}, \quad C_2(d,\varOmega,u) = C |u|_{W^2_{\infty}(B_i)}^d. \end{equation} (2.9) If $$x_i\in \mathcal{N}_h^0$$ and $$u \in W^2_{\infty }(B_i)$$, then (2.8) remains valid with $$\alpha =k=0$$ and $$C^{2+k,\alpha }(B_i)$$ replaced by $$W^2_{\infty }(B_i)$$.

3. Discrete Alexandroff estimate

In this section we review several concepts related to convexity as well as the discrete Alexandroff estimate of Nochetto & Zhang (2014). We first recall several definitions. Definition 3.1 (Subdifferential). (i) The subdifferential of a function w at a point $$x_0\in \varOmega _h$$ is the set \begin{equation*} \partial w(x_0) := \left\{ p \in \mathbb{R}^d: \ w(x) \geq w(x_0) + p \cdot (x-x_0) \ \ \forall\, x \in \varOmega_h \right\}. \end{equation*} (ii) The subdifferential of a function w on a set $$E \subset \varOmega _h$$ is $$\partial w(E) := \cup _{x \in E} \partial w(x)$$. Definition 3.2 (Convex envelope and discrete lower contact set).
(i) The convex envelope $$\varGamma w$$ of a function w is defined to be \begin{equation*} \varGamma w (x) := \sup_{L } \{ L(x): \ L\ \textrm{is affine and}\ L(y) \leq w(y)\quad \forall\, y \in \varOmega_h \}. \end{equation*} (ii) The discrete lower contact set $$\mathcal{C}_-(w_h)$$ of a function $$w_h\in \mathbb{V}_h$$ is the set of nodes where the function coincides with its convex envelope, i.e. \begin{equation*} \mathcal{C}_- (w_h) := \left\{x_i \in \mathcal{N}_h^0: \varGamma w_h(x_i) = w_h(x_i) \right\}. \end{equation*} Remark 3.3 ($$w_h$$ dominates $$\varGamma w_h$$). Since $$w_h\ge \varGamma w_h$$, at a contact node $$x_i\in \mathcal{C}_-(w_h)$$ we have $$ \nabla^2_{\delta} \varGamma w_h(x_i;v_j) \leq \nabla^2_{\delta} w_h(x_i;v_j)\qquad \forall\, v_j \in \mathbb S_{\theta}. $$ Remark 3.4 (Minima of $$w_h$$ and $$\varGamma w_h$$). A consequence of Definition 3.2 (convex envelope and discrete lower contact set) is that the minima of $$w_h\in \mathbb{V}_h$$ and $$\varGamma w_h$$ are attained at the same contact nodes and are equal. We can now present the discrete Alexandroff estimate from Nochetto & Zhang (2014), which states that the minimum of a discrete function is controlled by the measure of the subdifferential of its convex envelope on the discrete contact set. Proposition 3.5 (Discrete Alexandroff estimate, Nochetto & Zhang, 2014). Let $$v_h$$ be a continuous piecewise linear function that satisfies $$v_h \geq 0$$ on $$\partial \varOmega _h$$. Then \begin{equation*} \max \limits_{ x_i \in \mathcal{N}_h^0} v_h(x_i)^- \leq C \left( \sum\limits_{x_i \in \mathcal{C}_{-}(v_h)} | \partial \varGamma v_h (x_i) | \right)^{1/d}, \end{equation*} where $$C=C(d,\varOmega )$$ depends only on the dimension d and the domain $$\varOmega $$.

4. Continuous dependence on data

We derive the continuous dependence of the discrete solution on data in Section 4.3, which is essential to prove rates of convergence.
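In one space dimension the convex envelope of $$w_h\in\mathbb{V}_h$$ is the lower convex hull of the nodal values, and the contact set consists of the hull nodes. A sketch of this computation (the helper lower_convex_hull is ours, not part of the paper):

```python
def lower_convex_hull(xs, ws):
    """Indices of nodes where the piecewise linear function with values ws
    at sorted nodes xs touches its convex envelope (1-d lower hull)."""
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies strictly above the chord from i0 to i
            cross = (xs[i1] - xs[i0]) * (ws[i] - ws[i0]) \
                  - (ws[i1] - ws[i0]) * (xs[i] - xs[i0])
            if cross < 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ws = [0.0, -1.0, 0.5, -2.0, 0.0]        # w_h >= 0 at the two boundary nodes
contact = lower_convex_hull(xs, ws)     # -> [0, 1, 3, 4]
```

Here the interior contact nodes are $$x_1$$ and $$x_3$$; the node at $$x_2$$ lies strictly above the envelope and hence carries no subdifferential mass in Proposition 3.5.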
To this end, we first prove a stability estimate in the max norm in Section 4.1 and the concavity of the discrete operator in Section 4.2.

4.1. Stability of the two-scale method

We start with some geometric estimates. The first two lemmas connect the discrete Alexandroff estimate with the two-scale method. They allow us to estimate the measure of the subdifferential of a discrete function $$w_h$$ in terms of our discrete operator $$T_\varepsilon [w_h]$$, defined in (2.2). Lemma 4.1 (Subdifferential vs. hyper-rectangle). Let $$w \in C^0(\overline{\varOmega }_h)$$ be convex and $$x_i\in \mathcal{N}_h^0$$ be such that $$x_i\pm \hat \delta v \in \overline{\varOmega }_h$$ for all $$v\in \mathbb S_{\theta }$$ with $$\hat \delta \le \delta $$. If $$\mathbf{v}=(v_j)_{j=1}^d\in \mathbb{S}^{\perp }_{\theta }$$ and \begin{equation*} \alpha_{i,j}^\pm := \frac{w\left(x_i\pm \hat\delta v_j\right) - w(x_i)}{\hat\delta} \quad\forall \, 1\le j\le d, \end{equation*} then \begin{equation*} \partial w(x_i) \subset \left\{ p \in \mathbb{R}^d: \ -\alpha_{i,j}^- \le p\cdot v_j \le \alpha_{i,j}^+ \quad \forall\, 1\le j \le d \right\}. \end{equation*} Proof. Take $$p \in \partial w(x_i)$$ and write \begin{equation*} w(x) \geq w(x_i) + p \cdot (x-x_i) \quad \forall\, x \in \overline{\varOmega}_h. \end{equation*} Consequently, for any 1 ≤ j ≤ d we infer that \begin{equation*} w\left(x_i + \hat\delta v_j\right) \geq w(x_i) + \hat\delta \ p \cdot v_j, \quad w\left(x_i - \hat\delta v_j\right) \geq w(x_i) - \hat\delta \ p \cdot v_j, \end{equation*} or equivalently \begin{equation*} -\alpha_{i,j}^- = \frac{w(x_i)-w\left(x_i-\hat\delta v_j\right)}{\hat\delta } \leq p \cdot v_j \leq \frac{w\left(x_i+\hat\delta v_j\right)-w(x_i)}{\hat\delta} = \alpha_{i,j}^+. \end{equation*} This shows that p belongs to the asserted set. Lemma 4.2 (Hyper-rectangle volume).
For a d-tuple $$\mathbf{v} = (v_j)_{j=1}^d \in \mathbb{S}^{\perp }_{\theta }$$, the volume of the set \begin{equation*} K= \left\{ p \in \mathbb{R}^d: \ a_j \leq p \cdot v_j \leq b_j, \ \ j=1,\ldots,d \right\} \end{equation*} is given by \begin{equation*} |K|=\prod_{j=1}^d (b_j-a_j). \end{equation*} Proof. Let $$V=[v_1, \ldots ,v_d]\in \mathbb{R}^{d\times d}$$ be the orthogonal matrix whose columns are the components of $$\mathbf{v}$$; hence, $$v_j=V e_j$$, where $$\left \{ e_j \right \}_{j=1}^d $$ is the canonical basis of $$\mathbb{R}^d$$. We now seek a more convenient representation of K, \begin{align*} K &= \left\{ p \in \mathbb{R}^d: \ a_j \leq p \cdot (Ve_j) \leq b_j, \ \ j=1,\ldots,d \right\} \\ &= V^{-T} \left\{ x \in \mathbb{R}^d: \ a_j \leq x \cdot e_j \leq b_j, \ \ j=1,\ldots,d \right\} = V^{-T} \widetilde{K}, \end{align*} whence \begin{equation*} |K| = |\det{V^{-T}} | \ |\widetilde{K}|=|\widetilde{K}| = \prod_{j=1}^d (b_j-a_j) \end{equation*} because $$\widetilde{K}$$ is an axis-aligned hyper-rectangle. Combining Lemmas 4.1 and 4.2, we obtain the following corollary. Corollary 4.3 (Subdifferential vs. discrete operator). For every $$x_i \in \mathcal{N}_h^0 \cap \varOmega _{h,\delta }$$ and every convex function w, we have \begin{equation*} | \partial w(x_i)| \leq \left( \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w(x_i;v_j) \right) \delta^d . \end{equation*} Remark 4.4 (Artificial factor $$\frac{\delta }{h}$$). The above estimate is critical in deriving Proposition 4.8 (continuous dependence on data) and subsequently the rates of convergence in Section 5. We thus wish to provide here some intuition for the reduced rates of convergence of Theorem 5.3 (rates of convergence for classical solutions) relative to the numerical experiments in the study by Nochetto et al. (2017). To this end, we let $$w \in C^2(\overline \varOmega )$$ be a convex function.
Then, Corollary 4.3 implies that $$ T_\varepsilon[\mathcal I_h w] (x_i) \geq \frac{1}{\delta^d} | \partial \mathcal I_h w(x_i)|. $$ However, it was shown in Nochetto & Zhang (2016, Proposition 5.4) that $$ | \partial \mathcal I_h w(x_i)| \geq C h^d \det{\left(D^2w(x_i)\right)}, $$ provided the mesh $$\mathcal{T}_h$$ is translation invariant, whence $$ \det{\left(D^2w(x_i)\right)} \leq C \frac{\delta^d}{h^d} T_\varepsilon[\mathcal I_h w] (x_i). $$ We can now see that using this estimate introduces an extra factor $$\frac{\delta }{h} \gg 1$$, which could explain the suboptimal rate proved in Theorem 5.3 (rates of convergence for classical solutions). Lemma 4.5 (Stability). If $$w_h\in \mathbb{V}_h$$ satisfies $$w_h \geq 0$$ on $$\partial \varOmega _h$$, then \begin{equation*} \max_{x_i \in \mathcal{N}_h^0} w_h(x_i)^- \leq C \delta \left( \sum_{x_i \in \mathcal{C}_-(w_h)} T_{\varepsilon}[w_h](x_i) \right)^{1/d} . \end{equation*} Proof. Since $$w_h\ge 0$$ on $$\partial \varOmega _h$$, we invoke Proposition 3.5 (discrete Alexandroff estimate) for $$w_h$$ to obtain \begin{equation*} \max \limits_{ x_i \in \mathcal{N}_h^0} w_h(x_i)^- \leq C \left( \sum\limits_{x_i \in \mathcal{C}_-(w_h)} \left| \partial \varGamma w_h (x_i) \right| \right)^{1/d}. \end{equation*} Applying Corollary 4.3 (subdifferential vs. discrete operator) to the convex function $$\varGamma w_h$$ at a contact point $$x_i \in \mathcal{C}_-(w_h)$$ and recalling Remark 3.3, we have \begin{equation*} \left| \partial \varGamma w_h (x_i) \right| \leq \delta^d \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} \varGamma w_h(x_i;v_j) \leq \delta^d \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j) = \delta^d T_{\varepsilon}[w_h](x_i), \end{equation*} where the last equality follows from Lemma 2.2 (discrete convexity).

4.2. Concavity of the discrete operator

We recall concavity properties of $$(\det A)^{1/d}$$ for SPSD matrices A and extend them to $$T_{\varepsilon }$$. The results can be traced back to Lions (1983) and Krylov (1987), but we present them here for completeness. Lemma 4.6 (Concavity of determinant). The following two statements are valid. (i) For every SPSD matrix A we have \begin{equation*} (\det{A})^{1/d} = \frac{1}{d} \inf{\left\{ \textrm{tr}(AB) \ \Big| \ B \ \textrm{is SPD and}\ \det{B}=1 \right\}}. \end{equation*} (ii) The function $$A \mapsto (\det{A})^{1/d}$$ is concave on SPSD matrices. Proof. We proceed in three steps. Step 1: Proof of (i) for A invertible. Let B be symmetric positive definite (SPD) with $$\det B = 1$$. Then $$B^{1/2}$$ is well defined, $$\det (B^{1/2})=1$$ and we obtain \begin{equation*} \det A = \det(B^{1/2} A B^{1/2}). \end{equation*} Let P be an orthogonal matrix that converts $$B^{1/2} A B^{1/2}$$ into a diagonal matrix D, namely $$D = PB^{1/2} A B^{1/2}P^T$$. Applying the arithmetic–geometric mean inequality yields \begin{equation*} \det(B^{1/2} A B^{1/2})^{1/d} = (\det D)^{1/d} \le \frac{1}{d} \operatorname{tr} D = \frac{1}{d} \textrm{tr}(B^{1/2} A B^{1/2}) = \frac{1}{d} \textrm{tr}(AB), \end{equation*} where we have used the invariance of the trace under cyclic permutations of the factors to write the last two equalities. This shows that \begin{equation*} (\det{A})^{1/d} \leq \frac{1}{d} \inf{\left\{ \textrm{tr}(AB) \ \Big| \ B \ \textrm{is SPD and} \ \det{B}=1 \right\} }. \end{equation*} This inequality is actually an equality provided A is invertible. In fact, we can take $$B = (\det A)^{1/d} A^{-1}$$, which is SPD with $$\det B = 1$$. This proves (i) for A nonsingular. Step 2: Proof of (i) for A singular.
Given the singular value decomposition of A, \begin{equation*} A = \sum_{i=1}^d \lambda_i v_i \otimes v_i, \quad \lambda_1 \ge \cdots \ge \lambda_k> \lambda_{k+1} = \cdots = \lambda_d = 0, \end{equation*} with orthogonal vectors $$(v_i)_{i=1}^d$$, we can assume that k > 0 for otherwise A = 0 and the assertion is trivial. Given a parameter $$\sigma>0$$, let B be defined by \begin{equation*} B := \sum_{i=1}^k \sigma v_i \otimes v_i + \sum_{i=k+1}^d \sigma^{-\beta} v_i \otimes v_i \end{equation*} and $$\beta =k/(d-k)$$ because then $$\det B = \sigma ^k\sigma ^{-\beta (d-k)}=1$$. Therefore, \begin{equation*} AB = \sigma \sum_{i=1}^k \lambda_i v_i \otimes v_i \quad\Rightarrow\quad \textrm{tr}(AB) = \sigma \sum_{i=1}^k \lambda_i \to 0 \quad \textrm{as}\ \sigma \to 0, \end{equation*} which proves (i) for A singular since B is SPD. Step 3: Proof of (ii). Let A and B be SPSD matrices and $$0\le \lambda \le 1$$. Then $$\lambda A + (1-\lambda ) B$$ is also SPSD and we can apply (i) to \begin{align*} (\det{[ \lambda A + (1- \lambda) B ]})^{1/d} &= \frac{1}{d} \inf{ \big\{ \textrm{tr}[( \lambda A + (1- \lambda) B)C] \big| \ C\ \textrm{is SPD and} \det{C}=1 \big\} } \\ &\geq \frac{\lambda}{d} \inf{ \big\{ \textrm{tr}(AC) \ \big| \ C\ \textrm{is SPD and} \ \det{C}=1 \big\} } \\ &\quad+ \frac{1- \lambda}{d} \inf{ \big\{ \textrm{tr}(BC) \ \big| \ C\ \textrm{is SPD and} \ \det{C}=1 \big\} } \\ &= \lambda (\det{A})^{1/d} + (1-\lambda) (\det{B})^{1/d}. \end{align*} This completes the proof. Upon relabeling $$\widehat{A}=\lambda A$$ and $$\widehat{B}=(1-\lambda )B$$, which are still SPSD, we can write Lemma 4.6(ii) as \begin{equation} (\det \widehat{A})^{1/d} + (\det \widehat{B})^{1/d} \le \left(\det(\widehat{A}+\widehat{B})\right)^{1/d}. \end{equation} (4.1) We now show that our discrete operator $$T_\varepsilon [\cdot ]$$ possesses a similar property. Corollary 4.7 (Concavity of discrete operator). 
Given two functions $$u_h,w_h\in \mathbb{V}_h$$, we have \begin{equation*} \left( T_{\epsilon} [u_h] (x_i) \right)^{1/d} + \left( T_{\epsilon} [w_h](x_i) \right)^{1/d} \le \left( T_{\epsilon}[u_h+w_h](x_i) \right)^{1/d} \end{equation*} for all nodes $$x_i \in \mathcal{N}_h^0$$ such that $$\nabla ^2_{\delta } u_h(x_i;v_j)\ge 0, \ \nabla ^2_{\delta } w_h(x_i;v_j)\geq 0$$ for all $$v_j \in \mathbb S_{\theta }$$. Proof. We argue in two steps. Step 1. For $$a=(a_j)_{j=1}^d \in \mathbb{R}^d$$ with $$a_j \geq 0, \ j=1,\ldots ,d$$ we consider the function \begin{equation*} f(a) := \left( \prod_{j=1}^d a_j \right)^{1/d}, \end{equation*} which can be conceived as the determinant of a diagonal (and thus symmetric) positive semidefinite matrix with diagonal elements $$(a_j)_{j=1}^d$$, i.e. \begin{equation*} f(a) = \left(\det \textrm{diag} {\left\{a_1,\ldots,a_d\right\}} \right)^{1/d}. \end{equation*} Applying (4.1) to $$\widehat{A}=\ \textrm{diag} {\left \{a_1,\ldots ,a_d\right \}}, \widehat{B}=\ \textrm{diag} {\left \{b_1,\ldots ,b_d\right \}} $$ with $$a=(a_j)_{j=1}^d, b=(b_j)_{j=1}^d\ge 0$$ component wise, we deduce \begin{equation*} f(a) + f(b) \le f(a+b). \end{equation*} Step 2. We now apply this formula to the discrete operator. Since both $$u_h,w_h$$ are discretely convex at $$x_i\in \mathcal{N}_h^0$$, so is $$u_h+w_h$$, and we can apply Lemma 2.2 (discrete convexity) to write \begin{equation*} T_\varepsilon[u_h+w_h](x_i) = \prod_{j=1}^d \nabla^2_{\delta}[u_h+w_h](x_i;v_j) \end{equation*} for a suitable $$\mathbf{v}=(v_j)_{j=1}^d\in \mathbb{S}^{\perp }_{\theta }$$. 
Making use again of (2.4), this time for $$u_h$$ and $$w_h$$ and for the specific set of directions v just found, we obtain \begin{align*} \left( T_\varepsilon [u_h](x_i) \right)^{\frac{1}{d}} + \left( T_\varepsilon [w_h](x_i) \right)^{\frac{1}{d}} &\le \left(\prod_{j=1}^d \nabla^2_{\delta} u_h(x_i;v_j)\right)^{\frac{1}{d}} + \left(\prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j)\right)^{\frac{1}{d}} \\ & \le \left(\prod_{j=1}^d \nabla^2_{\delta} u_h(x_i;v_j) + \nabla^2_{\delta} w_h(x_i;v_j) \right)^{\frac{1}{d}} = \left(T_\varepsilon[u_h+w_h](x_i)\right)^{\frac{1}{d}}, \end{align*} where the second inequality is given by Step 1 for $$a = \left ( \nabla ^2_{\delta } u_h(x_i;v_j) \right )_{j=1}^d$$ and $$b = \left ( \nabla ^2_{\delta } w_h(x_i;v_j) \right )_{j=1}^d$$. This is the asserted estimate. 4.3. Continuous dependence of the two-scale method on data We are now ready to prove the continuous dependence of discrete solutions on data. This will be instrumental later for deriving rates of convergence for the two-scale method. Proposition 4.8 (Continuous dependence on data). Given two functions $$u_h,w_h\in \mathbb{V}_h$$ such that $$u_h \geq w_h$$ on $$\partial \varOmega _h$$ and \begin{equation*} T_{\varepsilon}[u_h](x_i) =f_1(x_i) \geq 0 \ \ \textrm{and} \ \ T_{\varepsilon}[w_h](x_i) = f_2(x_i) \geq 0 \end{equation*} at all interior nodes $$x_i\in \mathcal{N}_h^0$$, we have \begin{equation*} \max \limits_{ \varOmega_h} (u_h-w_h)^- \leq C \ \delta \ \left( \sum\limits_{x_i \in \mathcal{C}_{-}(u_h-w_h)} \left( f_1(x_i)^{1/d}- f_2(x_i)^{1/d} \right)^d \right)^{1/d}. \end{equation*} Proof. Since $$u_h-w_h\in \mathbb{V}_h$$ and $$u_h -w_h \geq 0$$ on $$\partial \varOmega _h$$, Lemma 4.5 (stability) yields \begin{equation*} \max_{x_i\in\mathcal{N}_h^0} (u_h - w_h) (x_i)^- \leq C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} T_{\epsilon} [u_h - w_h] (x_i) \right)^{1/d} . 
\end{equation*} For $$x_i \in \mathcal{C}_-(u_h-w_h)$$ we have $$\nabla ^2_{\delta } (u_h-w_h)(x_i;v_j)\geq 0$$, whence \begin{equation*} \nabla^2_{\delta} u_h(x_i;v_j)\geq \nabla^2_{\delta} w_h(x_i;v_j) \ge 0 \quad\forall\, v_j\in\mathbb S_{\theta}, \end{equation*} where we have made use of Lemma 2.2 (discrete convexity). Invoking Corollary 4.7 (concavity of discrete operator) for $$u_h-w_h$$ and $$w_h$$, we deduce \begin{equation*} \left( T_{\varepsilon} [u_h-w_h] (x_i) \right)^{1/d} \le \left( T_{\varepsilon} [u_h](x_i) \right)^{1/d} - \left( T_{\varepsilon} [w_h](x_i) \right)^{1/d}, \end{equation*} whence \begin{align*} \max_{x_i\in\mathcal{N}_h^0} (u_h - w_h) (x_i)^- \leq &\; C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} \left( T_{\varepsilon} [u_h](x_i)^{1/d} - T_{\varepsilon} [w_h] (x_i)^{1/d} \right)^d \right)^{1/d} \\ = &\; C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} \left( f_1(x_i)^{1/d} - f_2(x_i)^{1/d} \right)^d \right)^{1/d} . \end{align*} This completes the proof. 5. Rates of convergence We now combine the preceding estimates to prove pointwise convergence rates for solutions with varying degrees of regularity. We first present in Theorem 5.3 the case of a classical solution with Hölder regularity. This allows us to introduce the main techniques employed for deriving the rates of convergence. We then build on these techniques and prove error estimates for three more cases of increasing generality. In Theorem 5.4 we assume a classical solution with Sobolev regularity, which requires the use of embedding estimates and accumulating the truncation error in $$l^d$$, rather than $$l^\infty $$. We next deal with a nonclassical solution that is globally in $$W^{2}_{\infty }(\varOmega )$$ but whose Hessian is discontinuous across a (d − 1)-dimensional Lipschitz surface. To prove rates for this case we need to take advantage of the small volume affected by this discontinuity and combine it with the techniques used in Theorems 5.3 and 5.4.
Lastly, we remove the nondegeneracy assumption $$f \geq f_0>0$$ used in the previous three cases to obtain rates of convergence for a piecewise smooth viscosity solution with degenerate right-hand side f. This corresponds to one of the numerical experiments performed in the study by Nochetto et al. (2017). Our estimates do not require h small and are stated over the computational domain $$\varOmega _h\subset \varOmega $$. 5.1. Barrier function We recall here the two discrete barrier functions introduced in Nochetto et al. (2017, Lemmas 5.1, 5.2). The first one is critical in order to control the behavior of $$u_{\varepsilon }$$ close to the boundary of $$\varOmega _h$$ and to prove the convergence to the unique viscosity solution u of (1.1). We now use the same barrier function to control the pointwise error of $$u_{\varepsilon }$$ and u close to the boundary. The second barrier allows us to treat the degenerate case f ≥ 0, using techniques similar to the case f > 0. Lemma 5.1 (Discrete boundary barrier). Let $$\varOmega $$ be uniformly convex and E > 0 be arbitrary. For each node $$z \in \mathcal{N}_h^0$$ with $$\ \textrm{dist} (z,\partial \varOmega _h) \leq \delta $$, there exists a function $$p_h\in \mathbb{V}_h$$ such that $$T_{\varepsilon }[p_h](x_i) \geq E$$ for all $$x_i \in \mathcal{N}_h^0$$, $$p_h \leq 0$$ on $$\partial \varOmega _h$$ and \begin{equation*} |p_h(z)| \leq CE^{1/d} \delta \end{equation*} with C depending on $$\varOmega $$. Lemma 5.2 (Discrete interior barrier). Let $$\varOmega $$ be contained in the ball $$B(x_0,R)$$ of center $$x_0$$ and radius R. If $$q(x):= \frac{1}{2}\left ( |x-x_0|^2 - R^2 \right )$$, then its interpolant $$q_h:=\mathcal I_h q\in \mathbb{V}_h$$ satisfies \begin{equation*} T_\varepsilon[q_h](x_i) \ge 1\quad\forall\, x_i\in\mathcal{N}_h^0, \qquad q_h(x_i) \le 0 \quad\forall\, x_i\in\mathcal{N}_h^b. \end{equation*} 5.2. 
Error estimates for solutions with Hölder regularity We now deal with classical solutions u of (1.1) of class $$C^{2+k,\alpha }(\overline{\varOmega })$$, with k = 0, 1 and $$0<\alpha \leq 1$$, and derive pointwise error estimates. We proceed as follows. We first use Lemma 5.1 (discrete boundary barrier) to control $$u_{\varepsilon } -\mathcal I_h u$$ in the $$\delta $$-neighborhood $$\omega _{h,\delta }$$ of $$\partial \varOmega _h$$, where the consistency error of $$T_\varepsilon [\mathcal I_h u]$$ is of order 1 according to Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$). In the $$\delta $$-interior region $$\varOmega _{h,\delta }$$ we combine the interior consistency error of $$T_\varepsilon [\mathcal I_h u]$$ from Lemma 2.4 and Proposition 4.8 (continuous dependence on data). Judicious choices of $$\delta $$ and $$\theta $$ in terms of h conclude the argument. Theorem 5.3 (Rates of convergence for classical solutions). Let $$f(x) \geq f_0>0$$ for all $$x \in \varOmega $$. Let u be the classical solution of (1.1) and $$u_{\varepsilon }$$ be the discrete solution of (2.2). If $$u \in C^{2,\alpha }(\overline{\varOmega })$$ for $$0<\alpha \leq 1$$ and \begin{equation*} \delta = R_0(u) \ h^{\frac{2}{2+\alpha}}, \quad \theta = R_0(u)^{-1} \ h^{\frac{\alpha}{2+\alpha}}, \end{equation*} with $$R_0(u) = |u|_{W^2_{\infty }(\varOmega )}^{\frac{1}{2+\alpha }} \ |u|_{C^{2,\alpha }(\overline{\varOmega })}^{-{\frac{1}{2+\alpha }}}$$, then \begin{equation*} \|u-u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(\varOmega,d,f_0) \left( |u|_{C^{2,\alpha}(\overline{\varOmega})}^{\frac{1}{2+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+\alpha}} + \left(1+ R_0(u)\right)|u|_{W^2_{\infty}(\varOmega)} \right) \ h^{\frac{\alpha}{2+\alpha}}. 
\end{equation*} Otherwise, if $$u \in C^{3,\alpha }(\overline{\varOmega })$$ for $$0 < \alpha \leq 1$$ and \begin{equation*} \delta = R_1(u) \ h^{\frac{2}{3+\alpha}}, \quad \theta = R_1(u)^{-1} \ h^{\frac{1+\alpha}{3+\alpha}}, \end{equation*} with $$R_1(u) := |u|_{W^2_{\infty }(\varOmega )}^\frac{1}{3+\alpha }|u|_{C^{3,\alpha }(\overline{\varOmega })}^{-\frac{1}{3+\alpha }}$$, then \begin{equation*} \|u-u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(\varOmega,d,f_0) \left( |u|_{C^{3,\alpha}(\overline{\varOmega})}^{\frac{1}{3+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{3+\alpha}} + \left(1 + R_1(u) \right)\!|u|_{W^2_{\infty}(\varOmega)} \right) \ h^{\frac{1+\alpha}{3+\alpha}}. \end{equation*} Proof. If $$R_k(u) := |u|_{W^2_{\infty }(\varOmega )}^\frac{1}{2+k+\alpha }|u|_{C^{2+k,\alpha }(\overline{\varOmega })}^{-\frac{1}{2+k+\alpha }}$$, k = 0, 1, we prove below the estimate \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim \left( \left(1+R_k(u)\right)|u|_{W^2_{\infty}(\varOmega)} + |u|_{C^{2+k,\alpha}(\overline{\varOmega})}^{\frac{1}{2+k+\alpha}}\! |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+k+\alpha}} \right) \ h^{\frac{k+\alpha}{2+k+\alpha}} \end{equation*} with a hidden constant depending on $$\varOmega , d, f_0$$. We proceed in three steps; the estimates for $$\max _{\varOmega _h} \, (\mathcal I_h u - u_{\varepsilon })$$ are similar and thus omitted. Adding the interpolation error $$\|u-\mathcal I_h u\|_{L^{\infty }(\varOmega _h)} \leq Ch^2 |u|_{W^2_{\infty }(\varOmega )}$$ (Brenner & Scott, 2008) readily gives the asserted estimates because $$\frac{k+\alpha }{2+k+\alpha } \le \frac{1}{2}$$ for k = 0, 1 and $$0<\alpha \le 1$$. Step 1: Boundary estimate. We show that for $$z \in \mathcal{N}_h^0$$ such that $$\textrm{dist}(z,\partial \varOmega _h) \leq \delta $$, \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta.
\end{equation*} Given the function $$p_h$$ of Lemma 5.1 (discrete boundary barrier), for z fixed, we examine the behavior of $$u_{\varepsilon } + p_h$$. For any interior node $$x_i \in \mathcal{N}_h^0$$, we have \begin{equation*} \begin{aligned} \prod_{j=1}^d \nabla^2_{\delta} (u_{\varepsilon} +p_h) (x_i;v_j) &= \prod_{j=1}^d \left(\nabla^2_{\delta} u_{\varepsilon} (x_i;v_j) + \nabla^2_{\delta} p_h(x_i;v_j) \right)\\ &\geq \prod_{j=1}^d \nabla^2_{\delta} u_{\varepsilon} (x_i;v_j) + \prod_{j=1}^d \nabla^2_{\delta} p_h (x_i;v_j) \quad \forall\, \mathbf{v }=(v_j)_{j=1}^d \in \mathbb{S}^{\perp}_{\theta} \end{aligned} \end{equation*} because $$\nabla ^2_{\delta } u_{\varepsilon } (x_i;v_j) \geq 0$$ and $$\nabla ^2_{\delta } p_h(x_i;v_j)\geq 0$$. We apply Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u])$$ to obtain \begin{equation*} \begin{aligned} T_{\varepsilon} [u_{\varepsilon} + p_h] (x_i) & \geq T_{\varepsilon}[u_{\varepsilon}](x_i) + T_{\varepsilon}[p_h](x_i) \\ & \geq f(x_i) + E \geq T_{\varepsilon}[\mathcal{I}_h u](x_i) - C|u|_{W^2_{\infty}(\varOmega)}^d + E \geq T_{\varepsilon} [\mathcal I_h u](x_i), \end{aligned} \end{equation*} provided $$E \geq C|u|_{W^2_{\infty }(\varOmega )}^d$$. Since $$\mathcal I_h u = u_{\varepsilon }$$ and $$p_h \leq 0 $$ on $$\partial \varOmega _h$$, we deduce from Lemma 2.3 (discrete comparison principle) that \begin{equation*} u_{\varepsilon}(z) + p_h(z) \leq \mathcal I_h u (z), \end{equation*} whence \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta. \end{equation*} Step 2: Interior estimate. 
We show that for all $$x_i \in \mathcal{N}_h^0$$ such that $$\textrm{dist} (x_i,\partial \varOmega _h) \geq \delta $$, \begin{equation*} T_{\varepsilon} [u_{\varepsilon}] (x_i)- T_{\varepsilon} [\mathcal I_h u](x_i) \leq C_1(u) \delta^{\alpha +k} + C_2(u) \left( \frac{h^2}{\delta^2}+\theta^2 \right) \end{equation*} with k = 0, 1 and \begin{equation*} C_1(u) = C |u|_{C^{2+k,\alpha}(\overline{\varOmega})} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}, \quad C_2(u) = C |u|_{W^2_{\infty}(\varOmega)}^d \end{equation*} dictated by Lemma 2.4. Step 1 guarantees that \begin{equation*} u_{\varepsilon} - \mathcal I_h u \leq C|u|_{W^2_{\infty}(\varOmega)} \delta \quad \textrm{on } \partial \varOmega_{h,\delta}, \end{equation*} where $$\varOmega _{h,\delta }$$ is defined in (2.7). Let $$d_{\varepsilon }:= \mathcal I_h u-u_{\varepsilon } +C|u|_{W^2_{\infty }(\varOmega )}\delta $$ and note that $$d_{\varepsilon } \geq 0 $$ on $$\partial \varOmega _{h,\delta }$$. We then apply Proposition 4.8 (continuous dependence on data) to $$d_{\varepsilon }$$ in $$\varOmega _{h,\delta }$$, in conjunction with Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u]$$), to obtain \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \ \delta \left( \sum_{x_i \in \mathcal{C}_-(d_{\varepsilon})} \left( (f(x_i) +e)^{1/d} - f(x_i)^{1/d} \right)^d \right)^{1/d} \end{equation*} with $$e:= C_1(u) \delta ^{\alpha +k} + C_2(u) \left ( \frac{h^2}{\delta ^2} + \theta ^2 \right )$$. We now use that the function $$t \mapsto t^{1/d}$$ is concave with derivative $$\frac{1}{d}t^{1/d-1}$$ and $$f(x_i)\geq f_0>0$$ to estimate \begin{equation*} (f(x_i)+e)^{1/d}-f(x_i)^{1/d} \leq \frac{e}{d f_0^{\frac{d-1}{d}}}, \end{equation*} whence \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \delta \left( \sum_{x_i \in \mathcal{C}_-(d_{\varepsilon})} \left(C_1(u) \delta^{\alpha +k}+C_2(u)\left(\frac{h^2}{\delta^2}+\theta^2\right) \right)^d \right)^{1/d}.
\end{equation*} Since the cardinality of $$\mathcal{C}_-(d_{\varepsilon })$$ is bounded by that of $$\mathcal N_h$$, which in turn is bounded by $$Ch^{-d}$$ with C depending on shape regularity, we end up with \begin{equation} \max_{\varOmega_h} \, (u_{\varepsilon} -\mathcal I_h u) \lesssim |u|_{W^2_{\infty}(\varOmega)} \delta + \frac{\delta}{h} \left( C_1(u) \delta^{\alpha+k} + C_2(u) \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation} (5.1) Step 3: Choice of $$\delta $$ and $$\theta $$. To find an optimal choice of $$\delta $$ and $$\theta $$ in terms of h, we minimize the right-hand side of the preceding estimate. We first set $$\theta ^2 = \frac{h^2}{\delta ^2}$$ and equate the last two terms: \begin{equation*} C_1(u) \delta^{k+\alpha} = C_2(u) \frac{h^2}{\delta^2} \quad \Longrightarrow \quad \delta = R_k(u) h^{\frac{2}{2+k+\alpha}}. \end{equation*} Again writing $$C_1(u)$$ and $$C_2(u)$$ in terms of $$|u|_{C^{2+k,\alpha }(\overline{\varOmega })}$$ and $$|u|_{W^2_{\infty }(\varOmega )}$$, we thus obtain \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim R_k(u) |u|_{W^2_{\infty}(\varOmega)} \ h^{\frac{2}{2+k+\alpha}} + |u|_{C^{2+k,\alpha}(\overline{\varOmega})}^{\frac{1}{2+k+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+k+\alpha}} \ h^{\frac{k+\alpha}{2+k+\alpha}}. \end{equation*} Finally, the desired estimate follows immediately because $$k+\alpha \leq 2$$. We observe that according to Theorem 5.3 the rate of convergence is of order $$h^{1/2}$$ whenever $$u\in C^{3,1}(\overline{\varOmega })$$. However, our numerical experiments in Nochetto et al. (2017) indicate linear rates of convergence, which correspond to Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$). This mismatch may be attributed to the factor $$\frac{\delta }{h} \gg 1$$ in (5.1), which relates to Remark 4.4 (artificial factor $$\frac{\delta }{h}$$). This issue will be tackled in a forthcoming paper. 5.3.
Error estimates for solutions with Sobolev regularity We now derive error estimates for solutions $$u \in W^s_p(\varOmega )$$ with $$s>2+\frac{d}{p}$$ so that $$W^s_p(\varOmega )\subset C^2(\overline{\varOmega })$$. We exploit the structure of the estimate of Proposition 4.8 (continuous dependence on data), which shows that its right-hand side accumulates in $$l^d$$ rather than $$l^{\infty }$$. Theorem 5.4 (Convergence rate for $$W_p^s$$ solutions). Let $$f \geq f_0>0$$ in $$\varOmega $$ and let the viscosity solution u of (1.1) be of class $$W_p^s(\varOmega )$$ with $$\frac{d}{p} < s-2-k\le 1, \ k=0,1$$. If $$u_{\varepsilon }$$ is the discrete solution of (2.2) and \begin{equation*} \delta = R(u) \ h^{\frac{2}{s}}, \quad \theta = R(u)^{-1} \ h^{1-\frac{2}{s}}, \end{equation*} with $$R(u) := |u|_{W^2_{\infty }(\varOmega )}^{\frac{1}{s}} |u|_{W_p^s(\varOmega )}^{-\frac{1}{s}}$$, then \begin{equation*} \| u - u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega,f_0) \left( |u|_{W_p^s(\varOmega)}^{\frac{1}{s}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{s}} + \left(1 + R(u) \right) |u|_{W^2_\infty(\varOmega)} \right) h^{1-\frac{2}{s}} . \end{equation*} Proof. We proceed as in Theorem 5.3 to show an upper bound for $$u_{\varepsilon } - \mathcal I_h u$$. The boundary estimate of Step 1 remains intact, namely \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C \ |u|_{W^2_{\infty}(\varOmega)} \ \delta \end{equation*} for all $$z \in \mathcal{N}_h^0$$ such that $$\textrm{dist} (z,\partial \varOmega _h) \leq \delta $$. 
On the other hand, Step 2 yields \begin{equation*} \max_{\varOmega_{h,\delta}}(u_{\varepsilon} - \mathcal I_h u) \lesssim \delta |u|_{W^2_{\infty}(\varOmega)} + \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \delta^{(k+\alpha)d} + C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{1/d}, \end{equation*} where $$C_1(u)$$ and $$C_2(u)$$ are defined in Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) and $$0 < \alpha = s-2-k-\frac{d}{p} \leq 1$$ corresponds to the Sobolev embedding $$W_p^s(B_i) \subset C^{2+k,\alpha }(B_i)$$. In the following calculations we resort to the Sobolev inequality (Giusti, 2003, Theorem 2.9) \begin{equation*} |u|_{C^{2+k,\alpha}(B_i)} \le C |u|_{W_p^s(B_i)}, \end{equation*} involving only seminorms. We stress that C > 0 depends on the Lipschitz constant of $$B_i$$ but not on its size. The latter is due to the fact that the Sobolev numbers of $$W^{s-2-k}_p(B_i)$$ and $$C^{0,\alpha }(B_i)$$ coincide: $$0< s-k-2-d/p=\alpha \le 1$$. We refer to Giusti (2003, Theorem 2.9) for a proof for 0 < s < 1. We now use the Hölder inequality with exponent $$\frac{p}{d}> 1$$ to obtain \begin{equation*} \begin{aligned} \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \right)^{\frac{1}{d}} &\lesssim \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W_p^s(B_i)}^d |u|_{W^2_{\infty}(B_i)}^{d(d-1)} \right)^{\frac{1}{d}} \\ &\lesssim \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W_p^s(B_i)}^{d\frac{p}{d}} \right)^{\frac{1}{d}\frac{d}{p}} \ \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W^2_{\infty}(B_i)}^{d(d-1)\frac{p}{p-d}} \right)^{\frac{1}{d}\frac{p-d}{p}}. 
\end{aligned} \end{equation*} Since the cardinality of the set of balls $$B_i$$ containing an arbitrarily given $$x \in \varOmega $$ is proportional to $$\left ( \frac{\delta }{h} \right )^d$$, while the cardinality of $$\mathcal{N}_h^0$$ is proportional to $$h^{-d}$$, we get \begin{align*} \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \right)^{\frac{1}{d}} & \lesssim \left( \frac{\delta}{h} \right)^{\frac{d}{p}} |u|_{W_p^s(\varOmega)} \ \left( h^{-d} |u|_{W^2_{\infty}(\varOmega)}^{\frac{d(d-1)p}{p-d}} \right)^{\frac{p-d}{pd}} \\ & \lesssim \frac{\delta^{\frac{d}{p}}}{h} \ |u|_{W_p^s(\varOmega)} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}. \end{align*} Exploiting that $$\alpha + k + \frac{d}{p}+1=s-1$$, we readily arrive at \begin{equation*} \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \ \delta^{(k+\alpha)d} \right)^{\frac{1}{d}} \lesssim \frac{\delta^{s-1}}{h} |u|_{W_p^s(\varOmega)} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}. \end{equation*} In addition, we have \begin{equation*} \left( \sum_{x_i \in \mathcal{N}_h^0} C_2(u)^d \right)^{\frac{1}{d}} \lesssim |u|_{W^2_{\infty}(\varOmega)}^d \ \frac{1}{h}, \end{equation*} whence \begin{equation*} \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{\frac{1}{d}} \lesssim |u|_{W^2_{\infty}(\varOmega)}^d \ \frac{\delta}{h} \left( \frac{h^2}{\delta^2} + \theta^2 \right). \end{equation*} Collecting the previous estimates, we end up with \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim \delta |u|_{W^2_{\infty}(\varOmega)} + |u|_{W^2_{\infty}(\varOmega)}^{d-1} \frac{\delta}{h} \left( |u|_{W_p^s(\varOmega)} \delta^{s-2} + |u|_{W^2_{\infty}(\varOmega)} \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). 
\end{equation*} To find an optimal relation among $$h,\delta $$ and $$\theta $$, we first choose $$\theta ^2 = \frac{h^2}{\delta ^2}$$ and next equate the two terms in the second summand to obtain \begin{equation*} \delta = R(u) \ h^{\frac{2}{s}}, \quad \theta = R(u)^{-1} \ h^{1-\frac{2}{s}}, \end{equation*} whence \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim R(u) |u|_{W^2_{\infty}(\varOmega)} h^{\frac{2}{s}} + |u|_{W_p^s(\varOmega)}^{\frac{1}{s}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{s}} h^{1-\frac{2}{s}}. \end{equation*} Adding the interpolation error estimate $$\|u-\mathcal I_h u\|_{L^\infty (\varOmega )} \lesssim h^2 |u|_{W^2_{\infty }(\varOmega )}$$, and using that $$2> \frac{2}{s} \ge 1 - \frac{2}{s}$$ for 2 < s ≤ 4, leads to the asserted estimate. The error estimate of Theorem 5.4 (convergence rate for $$W_p^s$$-solutions) is of order $$\frac{1}{2}$$ for s = 4 and $$u \in W_p^4(\varOmega )$$ with p > d. This rate requires much weaker regularity than the corresponding error estimate in Theorem 5.3, namely $$u \in C^{3,1}(\overline{\varOmega }) = W^4_{\infty }(\varOmega )$$. In both cases, the relation between $$\delta $$ and h is $$\delta \approx h^{\frac{1}{2}}$$. 5.4. Error estimates for piecewise smooth solutions We now derive pointwise rates of convergence for a larger class of solutions than in Section 5.3. These are viscosity solutions that are piecewise $$W_p^s$$ but have discontinuous Hessians across a Lipschitz (d − 1)-dimensional manifold $$\mathcal{S}$$; we refer to the second numerical example in Nochetto et al. (2017). Since $$T_\varepsilon [\mathcal I_h u]$$ has a consistency error of order 1 in a $$\delta $$-region around $$\mathcal{S}$$, due to the discontinuity of $$D^2u$$, we exploit the fact that the measure of this region is proportional to $$\delta |\mathcal{S}|$$. 
We are thus able to adapt the argument of Theorem 5.4 (convergence rate for $$W^s_p$$ solutions) and accumulate such consistency error in $$l^d$$ at the expense of an extra additive term of order $$h^{-1}\delta ^{1+\frac{1}{d}}$$. This term is responsible for a reduced convergence rate when $$u \in W^s_p(\varOmega \setminus \mathcal{S})$$, $$s> 2+ \frac{1}{d}$$. Theorem 5.5 (Convergence rate for piecewise smooth solutions). Let $$\mathcal{S}$$ denote a (d − 1)-dimensional Lipschitz manifold that divides $$\varOmega $$ into two disjoint subdomains $$\varOmega _1, \varOmega _2$$ so that $$\mathcal{S} = \overline{\varOmega }_1 \cap \overline{\varOmega }_2$$. Let $$f \geq f_0>0$$ in $$\varOmega $$ and let $$u \in W_p^s(\varOmega _i) \cap W^2_{\infty }(\varOmega )$$, for i = 1, 2 and $$\frac{d}{p} < s-2-k \le 1, k=0,1$$, be the viscosity solution of (1.1). If $$u_{\varepsilon }$$ denotes the discrete solution of (2.2), then for $$\beta = \min \big \{s,2+\frac{1}{d}\big\}$$ we have \begin{equation*} \| u -u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega,f_0) \left( R(u)^{-1} |u|_{W^2_{\infty}(\varOmega)}^d + (1+ R(u) ) |u|_{W^2_{\infty}(\varOmega)} \right) h^{1-\frac{2}{\beta}}, \end{equation*} with $$R(u) = \big(\frac{|u|_{W^2_{\infty }(\varOmega )}}{|u|_{W_p^s(\varOmega \setminus \mathcal{S})}+{|u|_{W^2_{\infty }(\varOmega )}}}\big)^{\frac{1}{\beta }}$$ and $$|u|_{W_p^s(\varOmega \setminus \mathcal{S})}:= \max _i |u|_{W_p^s(\varOmega _i)}$$, provided \begin{equation*} \delta = R(u) \ h^{\frac{2}{\beta}}, \quad \theta= R(u)^{-1} \ h^{1-\frac{2}{\beta}}. \end{equation*} Proof. We proceed as in Theorems 5.3 and 5.4. The boundary layer estimate relies on the regularity $$u \in W^2_{\infty }(\varOmega )$$, which is still valid, whence for all $$x \in \varOmega _h$$ such that $$\textrm{dist}(x,\partial \varOmega _h) \leq \delta $$ we obtain \begin{equation*} u_{\varepsilon}(x) - \mathcal I_h u(x) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta.
\end{equation*} Consider now the internal layer \begin{equation*} \mathcal{S}^{\delta}_h : = \left\{ x \in \varOmega_h : \ \textrm{dist} (x,\mathcal{S}) \leq \delta \right\}, \end{equation*} which is the region affected by the discontinuity of the Hessian $$D^2u$$. Recall the auxiliary function $$d_{\varepsilon } = \mathcal I_h u - u_{\varepsilon } + C |u|_{W^2_{\infty }(\varOmega )} \delta $$ of Theorem 5.3 (rates of convergence for classical solutions) and split the contact set $$\mathcal{C}_-^\delta (d_{\varepsilon }):= \mathcal{C}_-(d_{\varepsilon }) \cap \varOmega _{h,\delta }$$ as \begin{equation*} \mathcal{S}_{h,1}^{\delta} := \mathcal{C}_-^\delta(d_{\varepsilon}) \cap \mathcal{S}^{\delta}_h, \quad \mathcal{S}_{h,2}^{\delta}:= \mathcal{C}_-^\delta(d_{\varepsilon}) \setminus \mathcal{S}^{\delta}_h. \end{equation*} An argument similar to Step 2 (interior estimate) of the proof of Theorem 5.3, based on combining Proposition 4.8 (continuous dependence on data) and Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) with the assumption $$f\ge f_0>0$$, yields \begin{equation*} \begin{aligned} \max_{\varOmega_{h,\delta}} \ d_{\varepsilon}^- \lesssim&\, \delta \left( \sum_{x_i \in \mathcal{S}_{h,1}^{\delta}} C_2(u)^d \right)^{1/d} \\ &+ \delta \ \left( \sum_{x_i \in \mathcal{S}_{h,2}^{\delta}} C_1(u)^d \delta^{(k+\alpha)d} + C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{1/d} =: I_1 + I_2, \end{aligned} \end{equation*} because the consistency error in $$\mathcal{S}_{h,1}^{\delta }$$ is bounded by $$C_2(u) = C |u|_{W^2_\infty (B_i)}^d$$. As in Theorem 5.4 (convergence rate for $$W^s_p$$ solutions), $$C_1(u)$$ satisfies \begin{equation*} C_1(u) \lesssim |u|_{W^s_p(B_i)} |u|_{W^2_\infty(B_i)}^{d-1}. 
\end{equation*} Since the number of nodes $$x_i \in \mathcal{S}_{h,1}^{\delta }$$ is bounded by $$C|\mathcal{S}|\delta h^{-d}$$, we deduce \begin{equation*} I_1 \lesssim \delta \left( \sum_{x_i \in \mathcal{S}_{h,1}^{\delta}} C_2(u)^d \right)^{1/d} \lesssim \ |u|_{W^2_{\infty}(\varOmega)}^d \frac{\delta^{1+\frac{1}{d}}}{h}. \end{equation*} For $$I_2$$ we distinguish whether $$x_i$$ belongs to $$\varOmega _1$$ or $$\varOmega _2$$ and accumulate $$C_1(u)$$ in $$\ell ^p$$, exactly as in Theorem 5.4, to obtain \begin{equation*} I_2 \lesssim \ |u|_{W^2_{\infty}(\varOmega)}^{d-1} \left( |u|_{W_p^s(\varOmega \setminus \mathcal{S} ) }\frac{\delta^{s-1}}{h} + |u|_{W^2_{\infty}(\varOmega)} \ \frac{\delta}{h} \ \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation*} Collecting the previous estimates and using the definition of $$\beta $$ yields \begin{align*} \max_{\varOmega_h} (u_{\varepsilon} - \mathcal I_h u) \lesssim&\, |u|_{W^2_{\infty}(\varOmega)} \delta \\ &+ \ |u|_{W^2_{\infty}(\varOmega)}^{d-1} \frac{\delta}{h} \left( \left(|u|_{W_p^s(\varOmega \setminus \mathcal{S} )} + |u|_{W^2_{\infty}(\varOmega)}\right)\delta^{\beta-2} + |u|_{W^2_{\infty}(\varOmega)} \ \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{align*} We finally realize that this estimate is similar to that in the proof of Theorem 5.4 except for the middle term on the right-hand side. Therefore, we proceed as in Theorem 5.4 to find the relation between $$\delta , \theta $$ and h, add the estimate $$\|u-\mathcal I_h u\|_{L^\infty (\varOmega )}\lesssim h^2 |u|_{W^2_{\infty }(\varOmega )}$$ and eventually derive the asserted error estimate. 5.5. Error estimates for piecewise smooth solutions with degenerate f We observe that in all three preceding theorems we assume that $$f \geq f_0>0$$. 
This is an important assumption in the proofs, since it allows us to use the concavity of $$t\mapsto t^{1/d}$$ and Proposition 4.8 (continuous dependence on data) to obtain \begin{equation} (f(x_i)+e)^{1/d} -f(x_i)^{1/d} \leq \frac{e}{df_0^{\frac{d-1}{d}}}, \end{equation} (5.2) where e is related to the consistency of the operator in Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u]$$). We see that this is only possible if $$f_0>0$$. If we allow f to touch zero, then (5.2) reduces to \begin{equation} (f(x_i)+e)^{1/d} -f(x_i)^{1/d} \leq e^{1/d}, \end{equation} (5.3) with equality for $$f(x_i)=0$$. This leads to a rate of order $$\left (\frac{\delta }{h}\right )^{1-\frac{2}{d}}\ge 1$$ for d ≥ 2. To circumvent this obstruction, we use Lemma 5.2 (discrete interior barrier), which allows us to introduce an extra parameter $$\sigma>0$$ that compensates for the lack of lower bound $$f_0>0$$ and yields pointwise error estimates of reduced order. Theorem 5.6 (Degenerate forcing f ≥ 0). Let $$\mathcal{S}$$ denote a (d − 1)-dimensional Lipschitz manifold that divides $$\varOmega $$ into two disjoint subdomains $$\varOmega _1, \varOmega _2$$ such that $$\mathcal{S} = \overline{\varOmega }_1 \cap \overline{\varOmega }_2$$. Let f ≥ 0 in $$\varOmega $$ and let $$u \in W_p^s(\varOmega _i) \cap W^2_{\infty }(\varOmega )$$, for i = 1, 2 and $$\frac{d}{p} < s-2-k \le 1, k=0,1$$ be the viscosity solution of (1.1). 
If $$u_{\varepsilon }$$ denotes the discrete solution of (2.2), then for $$\beta = \min \big\{ s, 2+\frac{1}{d}\big\}$$ we have \begin{equation*} \| u -u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega) |u|_{W^2_{\infty}(\varOmega)} \left( 1+ R(u) + R(u)^{-\frac{1}{d}} \right) \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)} \end{equation*} with $$R(u) = \big(\frac{|u|_{W^2_{\infty }(\varOmega )}}{|u|_{W_p^s(\varOmega \setminus \mathcal{S})}+|u|_{W^2_{\infty }(\varOmega )}}\big)^{\frac{1}{\beta }}$$ and $$|u|_{W_p^s(\varOmega \setminus \mathcal{S})}:= \max _i |u|_{W_p^s(\varOmega _i)}$$, provided \begin{equation*} \delta = R(u) \ h^{\frac{2}{\beta}}, \quad \theta= R(u)^{-1} \ h^{1-\frac{2}{\beta}}. \end{equation*} Proof. We employ the interior barrier function $$q_h$$ of Lemma 5.2 scaled by a parameter $$\sigma>0$$ to control $$u_{\varepsilon }-\mathcal I_h u$$ and $$\mathcal I_h u -u_{\varepsilon }$$ in two steps. The parameter $$\sigma $$ allows us to mimic the calculation in (5.2). In the third step we choose $$\sigma $$ optimally with respect to the scales of our scheme. Step 1: Upper bound for $$u_{\varepsilon }-\mathcal I_h u$$. We let $$w_h:= u_{\varepsilon } + \sigma q_h$$ and $$v_h:= \mathcal I_h u +C|u|_{W^2_{\infty }(\varOmega )} \delta $$, observe that $$T_\varepsilon [w_h](x_i) \ge f(x_i) + \sigma ^d$$ and proceed as in Step 1 of the proof of Theorem 5.3 to show $$ w_h(z) \leq v_h(z) $$ for all $$z\in \mathcal{N}_h^0$$ such that $$\textrm{dist} (z,\partial \varOmega _h) \le \delta $$. We now focus on $$\varOmega _{h,\delta }$$ and define the auxiliary function $$d_{\varepsilon }:= v_h -w_h$$ and contact set $$\mathcal{C}_-^\delta (d_{\varepsilon }) := \mathcal{C}_-(d_{\varepsilon }) \cap \varOmega _{h,\delta }$$. 
Since the previous argument guarantees that $$d_{\varepsilon } \geq 0$$ on $$\partial \varOmega _{h,\delta }$$, Proposition 4.8 (continuous dependence on data) gives \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \ \delta \left( \sum_{x_i \in \mathcal{C}_-^\delta(d_{\varepsilon})} \left( \left(T_\varepsilon[v_h](x_i)\right)^{1/d}- \left(T_\varepsilon[w_h](x_i)\right)^{1/d} \right)^d \right)^{1/d} . \end{equation*} If $$e_i$$ is the local consistency error given in Lemma 2.4, we further note that \begin{equation*} T_\varepsilon[v_h](x_i) \le f(x_i) + e_i, \quad T_\varepsilon[w_h](x_i) \geq T_\varepsilon[u_{\varepsilon}](x_i) + T_\varepsilon[\sigma q_h](x_i) \geq f(x_i)+ \sigma^d \end{equation*} for all $$x_i\in \mathcal{N}_h^0$$, whence \begin{equation*} \begin{aligned} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- &\lesssim \delta \left( \sum_{x_i \in \mathcal{C}_-^\delta(d_{\varepsilon})} \left( (f(x_i)+e_i)^{1/d}- \left(f(x_i)+\sigma^d\right)^{1/d} \right)^d \right)^{1/d}. \end{aligned} \end{equation*} We now observe that $$e_i \geq \sigma ^d$$ for all $$x_i \in \mathcal{C}_-^\delta (d_{\varepsilon })$$ because all terms in the above sum are non-negative. If there is no such $$x_i$$, then the above bound implies that $$d_{\varepsilon }^-=0$$ and $$w_h \leq v_h$$, whence $$u_{\varepsilon } - \mathcal I_h u \lesssim \sigma + |u|_{W^2_{\infty }(\varOmega )} \delta $$. Otherwise, this observation combined with (5.2), now applied with $$f_0$$ replaced by $$\sigma^d$$, and $$f(x_i)\ge 0$$ implies \begin{equation*} \begin{aligned} (f(x_i)+e_i )^{1/d} - \left(f(x_i)+\sigma^d \right)^{1/d} &= \left(f(x_i)+\sigma^d+\left(e_i-\sigma^d\right)\right)^{1/d}- \left(f(x_i)+\sigma^d\right)^{1/d} \\ &\leq \frac{e_i-\sigma^d}{d \sigma^{d-1}} \leq d^{-1} \sigma^{1-d} e_i. \end{aligned} \end{equation*} We next proceed exactly as in Theorem 5.5 (convergence rate for piecewise smooth solutions) to derive an upper bound for $$d_{\varepsilon }^-$$, but with the additional factor $$\sigma ^{1-d}$$.
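The shifted concavity bound just derived admits a quick numerical sanity check under the stated constraints $$f(x_i) \ge 0$$ and $$e_i \ge \sigma^d$$; the following sketch is purely illustrative and the names are ours:

```python
import random

def shifted_concavity_gap(f, e, sigma, d):
    """Left-hand side of the key estimate: (f+e)^(1/d) - (f+sigma^d)^(1/d)."""
    return (f + e) ** (1.0 / d) - (f + sigma ** d) ** (1.0 / d)

random.seed(1)
for _ in range(1000):
    d = random.randint(2, 5)
    f = random.uniform(0.0, 10.0)      # degenerate right-hand side: f >= 0 allowed
    sigma = random.uniform(0.01, 1.0)  # barrier scaling sigma > 0
    e = sigma ** d + random.uniform(0.0, 5.0)  # consistency error with e >= sigma^d
    # Claimed bound: gap <= e * sigma^(1-d) / d
    assert shifted_concavity_gap(f, e, sigma, d) <= e * sigma ** (1 - d) / d + 1e-12
```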
Employing the definition of $$d_{\varepsilon }$$, we thereby obtain \begin{equation*} u_{\varepsilon} - \mathcal I_h u \lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} \delta + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right), \end{equation*} where $$C_1(u) = C |u|_{W_p^s(\varOmega \setminus \mathcal{S})} |u|_{W^2_{\infty }(\varOmega )}^{d-1}$$ and $$C_2(u)=C|u|_{W^2_{\infty }(\varOmega )}^d$$. Step 2: Lower bound for $$u_{\varepsilon }-\mathcal I_h u$$. To prove the reverse inequality, we proceed as in Step 1, except that this time we define $$v_h:= u_{\varepsilon } + C |u|_{W^2_{\infty }(\varOmega )} \delta $$ and $$w_h:= \mathcal I_h u + \sigma q_h$$. An argument similar to Step 1 yields $$w_h \leq v_h$$ in $$\omega _{h,\delta }$$. Moreover, recalling Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) we have for all $$x_i\in \mathcal{N}_h^0$$, \begin{equation*} T_\varepsilon[v_h](x_i) = f(x_i) \le T_\varepsilon[\mathcal I_h u](x_i) + e_i, \quad T_\varepsilon[w_h](x_i) \ge T_\varepsilon[\mathcal I_h u](x_i) + \sigma^d, \end{equation*} where $$e_i$$ is a local bound for the consistency error. Combining this with Proposition 4.8 (continuous dependence on data) in $$\varOmega _{h,\delta }$$ gives \begin{equation*} \max_{\varOmega_{h,\delta}} d_\varepsilon^- \lesssim \delta \left( \sum_{x_i\in\mathcal{C}_-^\delta(d_\varepsilon)} \left( \left(T_\varepsilon[\mathcal I_h u](x_i) + e_i\right)^{\frac{1}{d}} - \left( T_\varepsilon[\mathcal I_h u](x_i) + \sigma^d \right)^{\frac{1}{d}} \right)^d\right)^{\frac{1}{d}}.
\end{equation*} Since $$\mathcal I_h u$$ is discretely convex, we apply Lemma 2.2 (discrete convexity) to deduce $$T_\varepsilon [\mathcal I_h u](x_i)\ge 0$$ and next argue as in Step 1 to obtain \begin{equation*} \mathcal I_h u - u_{\varepsilon} \lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} \delta + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation*} Step 3: Choice of $$\delta , \theta $$ and $$\sigma $$. Since $$\|u-\mathcal I_h u\|_{L^{\infty }(\varOmega _h)}\leq C|u|_{W^2_{\infty }(\varOmega )} h^2$$, combining Steps 1 and 2 yields \begin{align*} \|u_{\varepsilon} - u\|_{L^\infty(\varOmega_h)} &\lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} (\delta + h^2) \\ &\quad + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right) . \end{align*} We now minimize the right-hand side upon choosing $$\delta , \theta $$ and $$\sigma $$ suitably with respect to h. We first recall the definition of $$\beta $$ and choose $$\delta $$ and $$\theta $$ as in Theorem 5.5. At this stage it remains only to find $$\sigma $$ upon solving \begin{equation*} \sigma = C_2(u) \sigma^{1-d} \frac{h}{\delta} = C \sigma^{1-d} |u|_{W^2_{\infty}(\varOmega)}^d R(u)^{-1} \ h^{1-\frac{2}{\beta}}, \end{equation*} which leads to \begin{equation*} \sigma = |u|_{W^2_{\infty}(\varOmega)} R(u)^{-\frac{1}{d}} \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)}. \end{equation*} Since $$\beta>2$$ we get $$h^2 + \delta \le \left (1 + R(u)^{-\frac{1}{\beta }}\right ) h^{\frac{2}{\beta }}$$ and \begin{equation*} \|u_{\varepsilon} - u\|_{L^\infty(\varOmega_h)} \lesssim |u|_{W^2_{\infty}(\varOmega)} (1 + R(u)) h^{\frac{2}{\beta}} + |u|_{W^2_{\infty}(\varOmega)} R(u)^{-\frac{1}{d}} \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)}. \end{equation*} This yields the asserted estimate and finishes the proof. 
Theorem 5.6 is an extension of Theorem 5.5 to the degenerate case f ≥ 0, but the same techniques and estimates extend as well to Theorems 5.3 and 5.4. We stress that Theorems 5.5 and 5.6 correspond to nonclassical viscosity solutions that are of class $$W^{2}_{\infty }(\varOmega )$$. In order to deal with discontinuous Hessians and degenerate right-hand sides, we rely on techniques that give rise to reduced rates. For Theorem 5.5 we obtain rates that depend on the space dimension, whereas for Theorem 5.6 we resort to a regularization procedure that leads to further reduction of the rates. Although the derived estimates are suboptimal with respect to the computational rates observed in the study by Nochetto et al. (2017), we wish to emphasize that Theorem 5.6 is, to our knowledge, the only error estimate available in the literature that deals with degenerate right-hand sides. 6. Conclusions In this paper we extend the analysis of the two-scale method introduced in the study by Nochetto et al. (2017). We derive continuous dependence of discrete solutions on data and use it to prove rates of convergence in the $$L^{\infty }$$-norm in the computational domain $$\varOmega _h$$ for four different cases. We first prove rates of order up to $$h^{1/2}$$ for smooth classical solutions with Hölder regularity. We then exploit the structure of the continuous dependence estimate of discrete solutions on data to derive error estimates for classical solutions with Sobolev regularity, thereby achieving the same rates under weaker regularity assumptions. In a more general scenario, we derive error estimates for viscosity solutions with discontinuous Hessian across a surface with appropriate smoothness, but otherwise possessing piecewise Sobolev regularity. Lastly, we use an interior barrier function that allows us to remove the nondegeneracy assumption f > 0 at the cost of a reduced rate that depends on dimension. 
Our theoretical predictions are suboptimal with respect to the linear rates observed experimentally in the study by Nochetto et al. (2017) for a smooth classical solution and a piecewise smooth viscosity solution with degenerate right-hand side f ≥ 0. This can be attributed to the fact that the continuous dependence estimate of discrete solutions on data introduces a factor $$\frac{\delta }{h} \gg 1$$ in the error estimates. This feature is similar to the discrete Alexandroff–Bakelman–Pucci (ABP) estimate developed in Kuo & Trudinger (2000) and is the result of using sets of measure $$\approx \delta ^d$$ instead of $$\approx h^d$$ to approximate subdifferentials. In a forthcoming paper we will tackle this issue and connect our two-scale method with that of Feng & Jensen (2016). Funding National Science Foundation Grant DMS-1411808 (to R.H.N., D.N. and W.Z.); the Institut Henri Poincaré (Paris) (to R.H.N.); the Hausdorff Institute (Bonn) (to R.H.N.); the 2016–2017 Patrick and Marguerite Sung Fellowship of the University of Maryland (to D.N.); the Brin Postdoctoral Fellowship of the University of Maryland (to W.Z.). References Awanou, G. (2016) Convergence rate of a stable, monotone and consistent scheme for the Monge–Ampère equation. Symmetry, 8, 1–7. Brenner, S. C., Gudi, T., Neilan, M. & Sung, L.-Y. (2011) $$C^0$$ penalty methods for the fully nonlinear Monge–Ampère equation. Math. Comput., 80, 1979–1995. Brenner, S. C. & Neilan, M. (2012) Finite element approximations of the three dimensional Monge–Ampère equation. ESAIM Math. Model. Numer. Anal., 46, 979–1001. Brenner, S. C. & Scott, R. (2008) The Mathematical Theory of Finite Element Methods. New York: Springer. Feng, X. & Jensen, M.
(2016) Convergent semi-Lagrangian methods for the Monge–Ampère equation on unstructured grids. arXiv:1602.04758v2. Froese, B. & Oberman, A. (2012) Convergent finite difference solvers for viscosity solutions of the elliptic Monge–Ampère equation in dimensions two and higher. SIAM J. Numer. Anal., 49, 1692–1714. Giusti, E. (2003) Direct Methods in the Calculus of Variations. Singapore: World Scientific. Gutiérrez, C. (2001) The Monge–Ampère Equation. Basel: Birkhäuser. Krylov, N. V. (1987) Nonlinear Elliptic and Parabolic Equations of the Second Order. Netherlands: Springer. Kuo, H.-J. & Trudinger, N. S. (2000) Special issue of the Proceedings of the 1999 International Conference on Nonlinear Analysis (Taipei). Taiwanese J. Math., 4, 1–8. Lions, P. L. (1983) Hamilton–Jacobi–Bellman equations and the optimal control of stochastic systems. Proc. Int. Congress of Math., Warsaw, 2, 1403–1417. Nochetto, R. H., Ntogkas, D. & Zhang, W. (2017) Two-scale method for the Monge–Ampère equation: convergence to the viscosity solution. Math. Comput. (in press), available at arXiv:1706.06193. Nochetto, R. H. & Zhang, W. (2017) Discrete ABP estimate and convergence rates for linear elliptic equations in non-divergence form. Found. Comput. Math. Available at https://doi.org/10.1007/s10208-017-9347-y. Nochetto, R. H. & Zhang, W. (2016) Pointwise rates of convergence for the Oliker–Prussner method for the Monge–Ampère equation. arXiv:1611.02786. Oliker, V. I. & Prussner, L. D. (1988) On the numerical solution of the equation $$(\partial ^2 z/\partial x^2)(\partial ^2z/\partial y^2)-(\partial ^2z/\partial x \partial y)^2 = f$$ and its discretizations I. Numer. Math., 54, 271–293. Walker, S. W. (2018) FELICITY: a MATLAB/C++ toolbox for developing finite element methods and simulation modeling. SIAM J. Sci. Comput., 40, C234–C257. Walker, S. W. (2018) FELICITY: Finite ELement Implementation and Computational Interface Tool for You. Available at http://www.mathworks.com/matlabcentral/fileexchange/31141-felicity. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications 2018. This work is written by (a) US Government employee(s) and is in the public domain in the US. IMA Journal of Numerical Analysis, Oxford University Press.

ISSN 0272-4979 · eISSN 1464-3642 · DOI 10.1093/imanum/dry026
Abstract In this paper we continue the analysis of the two-scale method for the Monge–Ampère equation for dimension d ≥ 2 introduced in the study by Nochetto et al. (2017, Two-scale method for the Monge–Ampère equation: convergence to the viscosity solution. Math. Comput., in press). We prove continuous dependence of discrete solutions on data that in turn hinges on a discrete version of the Alexandroff estimate. They are both instrumental to prove pointwise error estimates for classical solutions with Hölder and Sobolev regularity. We also derive convergence rates for viscosity solutions with bounded Hessians which may be piecewise smooth or degenerate. 1. Introduction We consider the Monge–Ampère equation with Dirichlet boundary condition \begin{equation} \begin{cases} \det{D^2u}=f\quad \textrm{in}\ \varOmega \subset \mathbb{R}^d, \\ \qquad\ \ u =g\quad \textrm{on}\ \partial \varOmega, \end{cases} \end{equation} (1.1) where f ≥ 0 is uniformly continuous, $$\varOmega $$ is a uniformly convex domain and g is a continuous function. We seek a convex solution u of (1.1), which is critical for (1.1) to be elliptic and have a unique viscosity solution (Gutiérrez, 2001). The Monge–Ampère equation has a wide spectrum of applications, which has led to increasing interest in the investigation of efficient numerical methods. There are several existing methods for the Monge–Ampère equation, as described in the study by Nochetto et al. (2017). Error estimates in $$H^1(\varOmega )$$ are established in the studies by Brenner et al. (2011) and Brenner & Neilan (2012) for solutions with $$H^3(\varOmega )$$ regularity or more. Awanou (2016) also proved a linear rate of convergence for classical solutions for the wide-stencil method when applied to a perturbed Monge–Ampère equation with an extra lower-order term $$\delta u$$; the parameter $$\delta> 0$$ is independent of the mesh and appears in reciprocal form in the rate. 
On the other hand, Nochetto and Zhang followed an approach based on the discrete Alexandroff estimate developed in Nochetto & Zhang (2014) and established pointwise error estimates in Nochetto & Zhang (2016) for the method of Oliker & Prussner (1988). In this paper we follow a similar approach and derive pointwise rates of convergence for classical solutions of (1.1) that have Hölder or Sobolev regularity and for viscosity solutions with bounded Hessians which may be piecewise smooth or degenerate. It is worth mentioning a rather strong connection between the semi-Lagrangian method of Feng & Jensen (2016) and our two-scale approach introduced in the study by Nochetto et al. (2017). In fact, for an appropriate choice of discretization of symmetric positive semidefinite (SPSD) matrices with trace 1, discussed in the study by Feng & Jensen (2016) along with the implementation, one can show that the discrete solutions of both methods coincide. Therefore, the error estimates in this paper extend to the fully discrete method of Feng & Jensen (2016). This rather surprising equivalence property is fully derived in a forthcoming paper, along with optimal error estimates in special cases via enhanced techniques for pointwise error analysis. 1.1. Our contribution The two-scale method was introduced in the study by Nochetto et al. (2017) and hinges on the following formula for the determinant of the semipositive Hessian $$D^2w$$ of a smooth function w, first suggested by Froese & Oberman (2012): \begin{equation} \det{D^2w}(x) = \min\limits_{(v_1,\ldots,v_d) \in \mathbb{S}^{\perp}} \prod_{j=1}^d v_j^TD^2w(x) \ v_j, \end{equation} (1.2) where $$\mathbb{S}^{\perp }$$ is the set of all d-orthonormal bases in $$\mathbb{R}^d$$. To discretize this expression, we impose that our discrete solutions lie in a space of continuous piecewise linear functions over an unstructured quasi-uniform mesh $$\mathcal{T}_h$$ of size h; this defines the fine scale. 
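Formula (1.2) can be checked numerically: for an SPSD matrix, the product $$\prod_{j} v_j^T A v_j$$ over any orthonormal basis dominates $$\det A$$, with equality at the eigenbasis. A minimal sketch in two dimensions, where the matrix A is a hypothetical stand-in for $$D^2w(x)$$:

```python
import numpy as np

# hypothetical SPSD matrix standing in for the Hessian D^2 w(x)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def basis_product(A, V):
    """prod_j v_j^T A v_j over the columns v_j of the orthonormal matrix V."""
    return np.prod([V[:, j] @ A @ V[:, j] for j in range(A.shape[0])])

# scan 2D orthonormal bases (columns of a rotation by angle phi)
vals = []
for phi in np.linspace(0.0, np.pi, 1801):
    c, s = np.cos(phi), np.sin(phi)
    vals.append(basis_product(A, np.array([[c, -s], [s, c]])))

# every basis gives a value >= det A; the minimum over bases recovers det A
print(min(vals), np.linalg.det(A))
```

The minimum is attained (up to the angular resolution of the scan) at the eigenbasis of A, which is precisely the content of (1.2).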
The mesh also defines the computational domain $$\varOmega _h$$, which we describe in more detail in Section 2. The coarser scale $$\delta $$ corresponds to the length of directions used to approximate the directional derivatives that appear in (1.2), namely, \begin{equation*} \nabla^2_{\delta} w (x;v) := \frac{ w(x+\delta v) -2w(x) +w(x-\delta v) }{ \delta^2} \quad \textrm{and} \quad |v| = 1, \end{equation*} for any $$w\in C^0(\overline{\varOmega })$$; to render the method practical, we introduce a discretization $$\mathbb{S}^{\perp }_{\theta }$$ of the set $$\mathbb{S}^{\perp }$$ governed by the parameter $$\theta $$ and denote our discrete solution by $$u_{\varepsilon }$$, where $$\varepsilon = (h,\delta , \theta )$$ represents the scales of the method and the parameter $$\theta $$. We define the discrete Monge–Ampère operator to be \begin{equation*} T_{\varepsilon}[u_{\varepsilon}](x_i):=\min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \left( \prod_{j=1}^d \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) - \sum_{j=1}^d \nabla^{2,-}_{\delta} u_{\varepsilon} (x_i;v_j) \right), \end{equation*} where $$\nabla _\delta ^{2,\pm }$$ are the positive and negative parts of $$\nabla _\delta ^ 2$$. In Section 2 we review briefly the role of each term in the operator $$T_{\varepsilon }$$ and recall some key properties of $$T_{\varepsilon }$$. The merit of this definition of $$T_{\varepsilon }$$ is that it leads to a clear separation of scales, which is a key theoretical advantage over the original wide-stencil method of Froese & Oberman (2012). This also yields continuous dependence of discrete solutions on data, namely Proposition 4.8, which allows us to prove rates of convergence in $$L^{\infty }(\varOmega )$$ for our method, depending on the regularity of u; this is not clear for the wide-stencil method of Froese & Oberman (2012). 
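To make the definition of $$T_\varepsilon $$ concrete, here is a minimal two-dimensional sketch; the quadratic test function, the evaluation point and the scales are hypothetical choices, the set $$\mathbb{S}^{\perp }_{\theta }$$ is realized by sampling rotation angles, and w is evaluated directly rather than through a finite element interpolant (the boundary adjustment of $$\delta $$ is also omitted):

```python
import numpy as np

def second_diff(w, x, v, delta):
    # centered second difference: (w(x + delta v) - 2 w(x) + w(x - delta v)) / delta^2
    return (w(x + delta * v) - 2.0 * w(x) + w(x - delta * v)) / delta**2

def T_eps(w, x, delta, n_angles=90):
    # min over sampled orthonormal bases of prod_j (d2)^+ - sum_j (d2)^-
    best = np.inf
    for phi in np.linspace(0.0, np.pi / 2, n_angles):
        c, s = np.cos(phi), np.sin(phi)
        prod_pos, sum_neg = 1.0, 0.0
        for v in (np.array([c, s]), np.array([-s, c])):
            d2 = second_diff(w, x, v, delta)
            prod_pos *= max(d2, 0.0)   # positive part
            sum_neg += max(-d2, 0.0)   # negative part penalizes nonconvexity
        best = min(best, prod_pos - sum_neg)
    return best

# hypothetical convex test function w(x) = x.Ax/2, so det D^2 w = det A
A = np.diag([2.0, 1.0])
w = lambda x: 0.5 * x @ A @ x
print(T_eps(w, np.zeros(2), delta=0.1))  # ≈ det A = 2: second differences of a quadratic are exact
```

For a discretely convex function all negative parts vanish and $$T_\varepsilon $$ reduces to the minimum of products, as in Lemma 2.2 below.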
Moreover, the two-scale method is formulated over unstructured meshes $$\mathcal{T}_h$$, which adds flexibility to partition arbitrary uniformly convex domains $$\varOmega $$. This is achieved at the expense of points $$x_i\pm \delta v_j$$ no longer being nodes of $$\mathcal{T}_h$$, which is responsible for an additional interpolation error in the consistency estimate of $$T_\varepsilon $$. To locate such points and evaluate $$\nabla ^2_{\delta } u_{\varepsilon }(x_i;v_j)$$, we resort to the fast search techniques of Walker (2018) and thus render the two-scale method practical. Compared with the error analysis of the Oliker–Prussner method (Nochetto & Zhang, 2014), we do not require $$\mathcal{T}_h$$ to be Cartesian. In the study by Nochetto et al. (2017) we prove existence and uniqueness of a discrete solution for our method and convergence to the viscosity solution of (1.1), without regularity beyond uniform continuity of f and g. This entails dealing with the $$L^\infty $$-norm and using the discrete comparison principle for piecewise linear functions (monotonicity). Within this $$L^\infty $$ framework and under the regularity requirement $$u \in W^2_\infty (\varOmega )$$, we now prove rates of convergence for classical solutions with either Hölder or Sobolev regularity and for a special class of viscosity solutions. Therefore, our two-scale method (Nochetto et al., 2017) and the Oliker–Prussner method (Nochetto & Zhang, 2016; Oliker & Prussner, 1988) are the only schemes known to us to converge to the viscosity solution and have provable rates of convergence.
The first important tool for proving pointwise rates of convergence is the discrete Alexandroff estimate introduced in the study by Nochetto & Zhang (2014): if $$w_h$$ is an arbitrary continuous piecewise linear function, $$w_h\ge 0$$ on $$\partial \varOmega _h$$, and $$\varGamma w_h$$ stands for its convex envelope, then \begin{equation*} \max\limits_{ x_i \in \mathcal{N}_h } w_h^-(x_i) \leq C \left( \sum\limits_{x_i \in C_{-}(w_h)} \left| \partial \varGamma w_h (x_i) \right| \right)^{1/d}, \end{equation*} where $$\partial \varGamma w_h$$ is the subdifferential of $$\varGamma w_h$$ and $$C_{-}(w_h)$$ represents the lower contact set of $$w_h$$, i.e. the set of interior nodes $$x_i\in \mathcal{N}_h^0$$ such that $$\varGamma w_h(x_i)=w_h(x_i)$$; hereafter we write $$w_h^-(x_i) := - \min \{w_h(x_i),0\}$$. To control the measure of the subdifferential at each node, we show the estimate \begin{equation*} |\partial w_h(x_i)| \leq \delta^d \min\limits_{(v_1,\ldots,v_d)\in \mathbb{S}^{\perp}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j) \quad\forall \, x_i\in\mathcal{N}_h^0, \end{equation*} such that the ball centered at $$x_i$$ and of radius $$\delta $$ is contained in $$\varOmega _h$$. Combining both estimates, we derive the continuous dependence estimate \begin{equation*} \max_{\varOmega_h} \, (u_h-w_h)^- \leq C \delta \left( \sum\limits_{x_i \in C_{-}(u_h-w_h)} \left( T_\varepsilon[u_h](x_i)^{1/d}-T_\varepsilon[w_h](x_i)^{1/d} \right)^d \right)^{1/d} \end{equation*} for all continuous piecewise linear functions $$u_h$$ and $$w_h$$ such that $$T_\varepsilon [u_h](x_i)\ge 0$$ and $$T_\varepsilon [w_h](x_i)\ge 0$$ for all $$x_i\in \mathcal{N}_h^0$$. 
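In one dimension the convex envelope $$\varGamma w_h$$ and the lower contact set entering these estimates reduce to the lower convex hull of the points $$(x_i, w_h(x_i))$$, which can be computed by a monotone-chain scan. A minimal sketch on hypothetical nodal data:

```python
import numpy as np

def lower_envelope_nodes(x, w):
    """Indices of the lower convex hull of the points (x_i, w_i), x sorted:
    the discrete lower contact set; the convex envelope is the piecewise
    linear interpolant through these points."""
    hull = []
    for i in range(len(x)):
        # pop the last hull point while it lies on or above the chord hull[-2] -> i
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (w[b] - w[a]) * (x[i] - x[a]) >= (w[i] - w[a]) * (x[b] - x[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

# hypothetical nodal values of a piecewise linear w_h
x = np.linspace(0.0, 1.0, 6)
w = np.array([0.0, 0.4, -0.2, 0.1, -0.1, 0.3])
contact = lower_envelope_nodes(x, w)
gamma = np.interp(x, x[contact], w[contact])   # convex envelope at the nodes
print(contact)                      # contact nodes, where the envelope equals w_h
print(np.all(gamma <= w + 1e-12))   # w_h dominates its convex envelope
```

At the contact nodes the envelope coincides with $$w_h$$, and everywhere else it lies strictly below, mirroring the definitions used above.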
This result is instrumental and, combined with operator consistency and a discrete barrier argument close to the boundary, eventually leads to the pointwise error estimates \begin{equation*} \|u_{\varepsilon}-u\|_{L^\infty(\varOmega_h)} \leq C(d, \varOmega,f,u) \ h^{\frac{\alpha+k}{\alpha+k+2}} \end{equation*} provided $$u\in C^{2+k,\alpha }(\overline{\varOmega })$$ with $$0<\alpha \le 1$$ and k = 0, 1, as well as \begin{equation*} \|u_{\varepsilon}-u\|_{L^\infty(\varOmega_h)} \leq C(d, \varOmega,f,u) \ h^{1-\frac{2}{s}} \end{equation*} provided $$u\in W^s_p(\varOmega )$$ with $$2+d/p<s\le 4$$ and p > d, and $$\delta $$ is suitably chosen in terms of h; see Theorems 5.3 and 5.4. We also consider a special case of viscosity solutions with bounded but discontinuous Hessians and manage to prove a rate of convergence (see Theorem 5.5). Since these theorems are proven under the nondegeneracy assumption f > 0, we examine in Theorem 5.6 the effect of degeneracy f ≥ 0. In the study by Nochetto et al. (2017) we explore numerically both classical and $$W^2_\infty $$ viscosity solutions and observe linear rates with respect to h for both cases, which are better than predicted by this theory. 1.2. Outline We start by briefly presenting the operator $$T_\varepsilon $$ in Section 2 and recalling some important results from Nochetto et al. (2017). In Section 3 we mention the discrete Alexandroff estimate and combine it in Section 4 with some geometric estimates to obtain the continuous dependence of the discrete solution on data. This is much stronger than stability and is critical to prove rates of convergence for fully nonlinear partial differential equations. Lastly, in Section 5 we combine this result with operator consistency and a discrete barrier argument close to the boundary to derive rates of convergence upon making judicious choices of $$\delta $$ and $$\theta $$ in terms of h. 2. 
Key properties of the discrete operator We recall briefly some of the key properties of operator $$T_\varepsilon $$, as proven in the study by Nochetto et al. (2017). 2.1 Definition of $$T_{\varepsilon}$$ Let $$\mathcal{T}_h$$ be a shape-regular and quasi-uniform triangulation with mesh size h. The computational domain $$\varOmega _h$$ is the union of elements of $$\mathcal{T}_h$$ and $$\varOmega _h\ne \varOmega $$. If $$\mathcal{N}_h$$ denotes the nodes of $$\mathcal{T}_h$$, then $$\mathcal{N}_h^b := \{x_i \in \mathcal{N}_h: x_i \in \partial \varOmega _h\}$$ are the boundary nodes and $$\mathcal{N}_h^0 := \mathcal{N}_h \setminus \mathcal{N}_h^b$$ are the interior nodes. We require that $$\mathcal{N}_h^b \subset \partial \varOmega $$, which in view of the convexity of $$\varOmega $$ implies that $$\varOmega _h$$ is also convex and $$\varOmega _h \subset \varOmega $$. We denote by $$\mathbb{V}_h$$ the space of continuous piecewise linear functions over $$\mathcal{T}_h$$. We let $$\mathbb{S}^{\perp }$$ be the collection of all d-tuples of orthonormal bases and $$\mathbf{v} := (v_1,\ldots ,v_d) \in \mathbb{S}^{\perp }$$ be a generic element, whence each component $$v_i\in \mathbb{S}$$, the unit sphere $$\mathbb{S}$$ of $$\mathbb{R}^d$$. We next introduce a finite subset $$\mathbb{S}_\theta $$ of $$\mathbb{S}$$ governed by the angular parameter $$\theta>0$$; given $$v \in \mathbb S$$, there exists $$v^{\theta }\in \mathbb{S}_\theta $$ such that $$ |v-v^{\theta}| \leq \theta. $$ Likewise, we let $$\mathbb{S}^{\perp }_{\theta }\subset \mathbb{S}^{\perp }$$ be a finite approximation of $$\mathbb{S}^{\perp }$$: for any $$\mathbf v = (v_j)_{j=1}^d \in \mathbb{S}^{\perp }$$ there exists $$\mathbf v^{\theta } = \big(v_j^{\theta }\big)_{j=1}^d \in \mathbb{S}^{\perp }_{\theta }$$ such that $$v_j^\theta \in \mathbb S_{\theta }$$ and $$\big |v_j - v_j^{\theta }\big| \leq \theta $$ for all 1 ≤ j ≤ d and conversely. 
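The set $$\mathbb{S}_\theta $$ admits a simple realization in two dimensions: equally spaced directions at angular resolution $$\theta $$, for which the chord distance from any unit vector to the nearest sample is below $$\theta $$. A minimal sketch (the value of $$\theta $$ is a hypothetical choice):

```python
import numpy as np

def sphere_samples_2d(theta):
    # finite subset S_theta of the unit circle: equally spaced directions
    # whose angular spacing is at most theta
    n = int(np.ceil(2.0 * np.pi / theta))
    angles = 2.0 * np.pi * np.arange(n) / n
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

theta = 0.05
S = sphere_samples_2d(theta)

# check: every unit vector has a sample within distance theta (indeed theta/2)
rng = np.random.default_rng(1)
phis = rng.uniform(0.0, 2.0 * np.pi, 5000)
vs = np.stack([np.cos(phis), np.sin(phis)], axis=1)
gaps = np.min(np.linalg.norm(vs[:, None, :] - S[None, :, :], axis=2), axis=1)
print(gaps.max())  # worst gap: at most theta/2, since the chord is below the arc
```

In two dimensions an element of $$\mathbb{S}^{\perp }_{\theta }$$ is then simply a pair $$(v, v^{\perp })$$ with $$v \in \mathbb{S}_\theta $$.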
For $$x_i\in \mathcal{N}_h^0$$, we use centered second differences with a coarse scale $$\delta $$, \begin{equation} \nabla^2_{\delta} w(x_i;v_j) := \frac{ w\left(x_i+ \hat\delta v_j\right) -2 w(x_i) + w\left(x_i- \hat\delta v_j\right) }{ \hat\delta^2}, \end{equation} (2.1) where $$\hat \delta :=\rho \delta $$ and $$0 < \rho \le 1$$ is the largest number such that the ball centered at $$x_i$$ of radius $$\hat \delta $$ is contained in $$\varOmega _h$$; we stress that $$\rho $$ need not be computed exactly. This is well defined for any $$w \in C^0(\overline{\varOmega })$$, in particular for $$w\in \mathbb{V}_h$$. We define $$\varepsilon := (h,\delta , \theta )$$ and seek $$u_{\varepsilon } \in \mathbb{V}_h$$ such that $$u_{\varepsilon }(x_i)=g(x_i)$$ for $$x_i \in \mathcal{N}_h^b$$ and, for $$x_i \in \mathcal{N}_h^0$$, \begin{equation} T_{\varepsilon}[u_{\varepsilon}](x_i):=\min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \left( \prod_{j=1}^d \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) - \sum_{j=1}^d \nabla^{2,-}_{\delta} u_{\varepsilon} (x_i;v_j) \right) = f(x_i), \end{equation} (2.2) where we use the notation \begin{equation*} \nabla^{2,+}_{\delta} u_{\varepsilon}(x_i;v_j) = \max{\left(\nabla^2_{\delta} u_{\varepsilon}(x_i;v_j),0\right)}, \quad \nabla^{2,-}_{\delta} u_{\varepsilon}(x_i;v_j) = -\min{\left(\nabla^2_{\delta} u_{\varepsilon}(x_i;v_j),0\right)} \end{equation*} to indicate the positive and negative parts of the centered second differences. 2.2 Key properties of $$T_{\varepsilon}$$ One of the critical properties of the Monge–Ampère equation is the convexity of the solution u. The following notion mimics this at the discrete level. Definition 2.1 (Discrete convexity). We say that $$w_h \in \mathbb V_h$$ is discretely convex if \begin{equation*} \nabla^2_{\delta} w_h(x_i;v_j) \geq 0 \qquad \forall\, x_i \in \mathcal{N}_h^0, \quad \forall\, v_j \in \mathbb S_{\theta}.
\end{equation*} The following lemma guarantees the discrete convexity of subsolutions of (2.2) (Nochetto et al., 2017, Lemma 2.2). Lemma 2.2 (Discrete convexity). If $$w_h \in \mathbb{V}_h$$ satisfies \begin{equation} T_{\varepsilon}[w_h](x_i) \geq 0 \quad \forall\, x_i \in \mathcal{N}_h^0, \end{equation} (2.3) then $$w_h$$ is discretely convex and as a consequence \begin{equation} T_{\varepsilon} [w_h](x_i)= \min_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j), \end{equation} (2.4) namely, \begin{equation*} \nabla^{2,+}_{\delta} w_h(x_i;v_j) = \nabla^2_{\delta} w_h(x_i;v_j), \quad \nabla^{2,-}_{\delta} w_h(x_i;v_j) =0 \quad\forall\, x_i\in \mathcal{N}_h^0,\quad\forall\, v_j\in \mathbb S_{\theta}. \end{equation*} Conversely, if $$w_h$$ is discretely convex, then (2.3) is valid. Another important property of operator $$T_\varepsilon $$ that relies on its monotonicity is the following discrete comparison principle (Nochetto et al., 2017, Lemma 2.4). Lemma 2.3 (Discrete comparison principle). Let $$u_h,w_h \in \mathbb{V}_h$$ with $$u_h \leq w_h$$ on the discrete boundary $$\partial \varOmega _h$$ be such that \begin{equation} T_{\varepsilon}[u_h](x_i) \geq T_{\varepsilon}[w_h](x_i) \geq 0 \ \ \ \forall\, x_i \in \mathcal{N}_h^0. \end{equation} (2.5) Then, $$u_h \leq w_h$$ everywhere. We now state a consistency estimate, proved in (Nochetto et al., 2017, Lemma 4.1), that leads to pointwise rates of convergence. To this end, given a node $$x_i \in \mathcal{N}_h^0$$, we denote by \begin{equation} B_i := \cup \left\{\overline{T}: T\in\mathcal{T}_h, \, \textrm{dist} (x_i,T) \le \hat\delta\right\}, \end{equation} (2.6) where $$\hat \delta $$ is defined in (2.1). 
We also define the $$\delta $$-interior region \begin{equation} \varOmega_{h,\delta} = \{ T \in \mathcal{T}_h \ : \ \textrm{dist} (x, \partial \varOmega_h ) \geq \delta \ \forall\, x \in T \}, \end{equation} (2.7) and the $$\delta $$-boundary region \begin{equation*} \omega_{h,\delta} = \varOmega \setminus \varOmega_{h,\delta}. \end{equation*} Lemma 2.4 (Consistency of $$T_{\varepsilon } [\mathcal{I}_h u]$$). Let $$x_i \in \mathcal{N}_h^0 \,\cap\, \varOmega _{h,\delta }$$ and $$B_i$$ be defined as in (2.6). If $$u \in C^{2+k,\alpha }(B_i)$$ with $$0<\alpha \leq 1$$ and k = 0, 1 is convex, and $$\mathcal I_h u $$ is its piecewise linear interpolant, then \begin{equation} \left| \det D^2u(x_i) - T_{\varepsilon}[\mathcal I_h u] (x_i) \right| \leq C_1(d,\varOmega,u) \delta^{k+\alpha} + C_2(d,\varOmega,u) \left( \frac{h^2}{\delta^2} + \theta^2 \right), \end{equation} (2.8) where \begin{equation} C_1(d,\varOmega,u)= C |u|_{C^{2+k,\alpha}(B_i)} |u|_{W^2_{\infty}(B_i)}^{d-1}, \quad C_2(d,\varOmega,u) = C |u|_{W^2_{\infty}(B_i)}^d. \end{equation} (2.9) If $$x_i\in \mathcal{N}_h^0$$ and $$u \in W^2_{\infty }(B_i)$$, then (2.8) remains valid with $$\alpha =k=0$$ and $$C^{2+k,\alpha }(B_i)$$ replaced by $$W^2_{\infty }(B_i)$$. 3. Discrete Alexandroff estimate In this section, we review several concepts related to convexity as well as the discrete Alexandroff estimate of Nochetto & Zhang (2014). We first recall several definitions. Definition 3.1 (Subdifferential). (i) The subdifferential of a function w at a point $$x_0\in \varOmega _h$$ is the set \begin{equation*} \partial w(x_0) := \left\{ p \in \mathbb{R}^d: \ w(x) \geq w(x_0) + p \cdot (x-x_0) \ \ \forall\, x \in \varOmega_h \right\}. \end{equation*} (ii) The subdifferential of a function w on a set $$E \subset \varOmega _h$$ is $$\partial w(E) := \cup _{x \in E} \partial w(x)$$. Definition 3.2 (Convex envelope and discrete lower contact set).
(i) The convex envelope $$\varGamma w$$ of a function w is defined to be \begin{equation*} \varGamma w (x) := \sup_{L} \left\{ L(x): \ L\ \textrm{is affine and}\ L(y) \leq w(y) \quad \forall\, y \in \varOmega_h \right\}. \end{equation*} (ii) The discrete lower contact set $$\mathcal{C}_-(w_h)$$ of a function $$w_h\in \mathbb{V}_h$$ is the set of nodes where the function coincides with its convex envelope, i.e. \begin{equation*} \mathcal{C}_- (w_h) := \left\{x_i \in \mathcal{N}_h^0: \varGamma w_h(x_i) = w_h(x_i) \right\}. \end{equation*} Remark 3.3 ($$w_h$$ dominates $$\varGamma w_h$$). Since $$w_h\ge \varGamma w_h$$, at a contact node $$x_i\in \mathcal{C}_-(w_h)$$ we have $$ \nabla^2_{\delta} \varGamma w_h(x_i;v_j) \leq \nabla^2_{\delta} w_h(x_i;v_j)\qquad \forall\, v_j \in \mathbb S_{\theta}. $$ Remark 3.4 (Minima of $$w_h$$ and $$\varGamma w_h$$). A consequence of Definition 3.2 (convex envelope and discrete lower contact set) is that the minima of $$w_h\in \mathbb{V}_h$$ and $$\varGamma w_h$$ are attained at the same contact nodes and are equal. We can now present the discrete Alexandroff estimate from Nochetto & Zhang (2014), which states that the minimum of a discrete function is controlled by the measure of the subdifferential of its convex envelope in the discrete contact set. Proposition 3.5 (Discrete Alexandroff estimate, Nochetto & Zhang, 2014). Let $$v_h$$ be a continuous piecewise linear function that satisfies $$v_h \geq 0$$ on $$\partial \varOmega _h$$. Then \begin{equation*} \max \limits_{ x_i \in \mathcal{N}_h^0} v_h(x_i)^- \leq C \left( \sum\limits_{x_i \in \mathcal{C}_{-}(v_h)} | \partial \varGamma v_h (x_i) | \right)^{1/d}, \end{equation*} where $$C=C(d,\varOmega )$$ depends only on the dimension d and the domain $$\varOmega $$. 4. Continuous dependence on data We derive the continuous dependence of the discrete solution on data in Section 4.3, which is essential to prove rates of convergence.
To this end, we first prove a stability estimate in the max norm in Section 4.1 and the concavity of the discrete operator in Section 4.2. 4.1. Stability of the two-scale method We start with some geometric estimates. The first and second lemmas connect the discrete Alexandroff estimate with the two-scale method. They allow us to estimate the measure of the subdifferential of a discrete function $$w_h$$ in terms of our discrete operator $$T_\varepsilon [w_h]$$, defined in (2.2). Lemma 4.1 (Subdifferential vs. hyper-rectangle). Let $$w \in C^0(\overline{\varOmega }_h)$$ be convex and $$x_i\in \mathcal{N}_h^0$$ be such that $$x_i\pm \hat \delta v \in \overline{\varOmega }_h$$ for all $$v\in \mathbb S_{\theta }$$ with $$\hat \delta \le \delta $$. If $$\mathbf{v}=(v_j)_{j=1}^d\in \mathbb{S}^{\perp }_{\theta }$$ and \begin{equation*} \alpha_{i,j}^\pm := \frac{w\left(x_i\pm \hat\delta v_j\right) - w(x_i)}{\hat\delta} \quad\forall \, 1\le j\le d, \end{equation*} then \begin{equation*} \partial w(x_i) \subset \left\{ p \in \mathbb{R}^d: \ -\alpha_{i,j}^- \le p\cdot v_j \le \alpha_{i,j}^+, \quad 1\le j \le d \right\}. \end{equation*} Proof. Take $$p \in \partial w(x_i)$$ and write \begin{equation*} w(x) \geq w(x_i) + p \cdot (x-x_i) \quad \forall\, x \in \overline{\varOmega}_h. \end{equation*} Consequently, for any 1 ≤ j ≤ d we infer that \begin{equation*} w\left(x_i + \hat\delta v_j\right) \geq w(x_i) + \hat\delta \ p \cdot v_j, \quad w\left(x_i - \hat\delta v_j\right) \geq w(x_i) - \hat\delta \ p \cdot v_j, \end{equation*} or equivalently \begin{equation*} \frac{w(x_i)-w\left(x_i-\hat\delta v_j\right)}{\hat\delta } \leq p \cdot v_j \leq \frac{w\left(x_i+\hat\delta v_j\right)-w(x_i)}{\hat\delta}. \end{equation*} This implies that p belongs to the desired set. Lemma 4.2 (Hyper-rectangle volume).
For a d-tuple $$\mathbf{v} = (v_j)_{j=1}^d \in \mathbb{S}^{\perp }_{\theta }$$, the volume of the set \begin{equation*} K= \left\{ p \in \mathbb{R}^d: \ a_j \leq p \cdot v_j \leq b_j, \ \ j=1,\ldots,d \right\} \end{equation*} is given by \begin{equation*} |K|=\prod_{j=1}^d (b_j-a_j). \end{equation*} Proof. Let $$V=[v_1, \ldots ,v_d]\in \mathbb{R}^{d\times d}$$ be the orthogonal matrix whose columns are the elements of v; hence, $$v_j=V e_j$$ where $$\left \{ e_j \right \}_{j=1}^d $$ is the canonical basis in $$\mathbb{R}^d$$. We now seek a more convenient representation of K, \begin{align*} K &= \left\{ p \in \mathbb{R}^d: \ a_j \leq p \cdot (Ve_j) \leq b_j, \ \ j=1,\ldots,d \right\} \\ &= V^{-T} \left\{ x \in \mathbb{R}^d: \ a_j \leq x \cdot e_j \leq b_j, \ \ j=1,\ldots,d \right\} = V^{-T} \widetilde{K}, \end{align*} whence \begin{equation*} |K| = |\det{V^{-T}} | \ |\widetilde{K}|=|\widetilde{K}| = \prod_{j=1}^d (b_j-a_j) \end{equation*} because $$\widetilde{K}$$ is an orthogonal hyper-rectangle. Combining Lemmas 4.1 and 4.2 we get the following corollary. Corollary 4.3 (Subdifferential vs. discrete operator). For every $$x_i \in \mathcal{N}_h^0 \cap \varOmega _{h,\delta }$$ and every convex function w we have \begin{equation*} | \partial w(x_i)| \leq \left( \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w(x_i;v_j) \right) \delta^d . \end{equation*} Remark 4.4 (Artificial factor $$\frac{\delta }{h}$$). The above estimate is critical in deriving Proposition 4.8 (continuous dependence on data) and subsequently rates of convergence in Section 5. We thus wish to provide here intuition about the reduced rates of convergence of Theorem 5.3 (rates of convergence for classical solutions) relative to the numerical experiments in the study by Nochetto et al. (2017). To this end, we let $$w \in C^2(\overline \varOmega )$$ be a convex function.
Then, Corollary 4.3 implies that $$ T_\varepsilon[\mathcal I_h w] (x_i) \geq \frac{1}{\delta^d} | \partial \mathcal I_h w(x_i)|. $$ However, it was shown in Nochetto & Zhang (2016, Proposition 5.4) that $$ | \partial \mathcal I_h w(x_i)| \geq C h^d \det{\left(D^2w(x_i)\right)}, $$ provided the mesh $$\mathcal{T}_h$$ is translation invariant, whence $$ \det{\left(D^2w(x_i)\right)} \leq C \frac{\delta^d}{h^d} T_\varepsilon[\mathcal I_h w] (x_i). $$ We can now see that using this estimate introduces an extra factor $$\frac{\delta }{h} \gg 1$$, which could possibly explain the suboptimal rate proved in Theorem 5.3 (rates of convergence for classical solutions). Lemma 4.5 (Stability). If $$w_h\in \mathbb{V}_h$$ satisfies $$w_h \geq 0$$ on $$\partial \varOmega _h$$, then \begin{equation*} \max_{x_i \in \mathcal{N}_h^0} w_h(x_i)^- \leq C \delta \left( \sum_{x_i \in \mathcal{C}_-(w_h)} T_{\varepsilon}[w_h](x_i) \right)^{1/d} . \end{equation*} Proof. Since $$w_h\ge 0$$ on $$\partial \varOmega _h$$, we invoke Proposition 3.5 (discrete Alexandroff estimate) for $$w_h$$ to obtain \begin{equation*} \max \limits_{ x_i \in \mathcal{N}_h^0} w_h(x_i)^- \leq C \left( \sum\limits_{x_i \in \mathcal{C}_-(w_h)} \left| \partial \varGamma w_h (x_i) \right| \right)^{1/d}. \end{equation*} Applying Corollary 4.3 (subdifferential vs. discrete operator) to the convex function $$\varGamma w_h$$ at a contact point $$x_i \in \mathcal{C}_-(w_h)$$ and recalling Remark 3.3, we have \begin{equation*} \left| \partial \varGamma w_h (x_i) \right| \leq \delta^d \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} \varGamma w_h(x_i;v_j) \leq \delta^d \min\limits_{\mathbf{v} \in \mathbb{S}^{\perp}_{\theta}} \prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j) = \delta^d T_{\varepsilon}[w_h](x_i), \end{equation*} where the last equality follows from Lemma 2.2 (discrete convexity). 4.2.
Concavity of the discrete operator We recall concavity properties of $$(\det A)^{1/d}$$ for SPSD matrices A and extend them to $$T_{\varepsilon }$$. The results can be traced back to Lions (1983) and Krylov (1987), but we present them here for completeness. Lemma 4.6 (Concavity of determinant). The following two statements are valid. (i) For every SPSD matrix A we have \begin{equation*} (\det{A})^{1/d} = \frac{1}{d} \inf{\left\{ \textrm{tr}(AB) \ \Big| \ B \ \textrm{is SPD and}\ \det{B}=1 \right\}}. \end{equation*} (ii) The function $$A \mapsto (\det{A})^{1/d}$$ is concave on SPSD matrices. Proof. We proceed in three steps. Step 1: Proof of (i) for A invertible. Let B be symmetric positive definite (SPD) with $$\det B = 1$$. Then $$B^{1/2}$$ is well defined, $$\det (B^{1/2})=1$$ and we obtain \begin{equation*} \det A = \det(B^{1/2} A B^{1/2}). \end{equation*} Let P be an orthogonal matrix that converts $$B^{1/2} A B^{1/2}$$ into a diagonal matrix D, namely $$D = PB^{1/2} A B^{1/2}P^T$$. Applying the arithmetic-geometric mean inequality yields \begin{equation*} \det(B^{1/2} A B^{1/2})^{1/d} = (\det D)^{1/d} \le \frac{1}{d} \textrm{tr}\, D = \frac{1}{d} \textrm{tr}(B^{1/2} A B^{1/2}) = \frac{1}{d} \textrm{tr}(AB), \end{equation*} where we have used the invariance of the trace under cyclic permutations of the factors to write the last two equalities. This shows that \begin{equation*} (\det{A})^{1/d} \leq \frac{1}{d} \inf{\left\{ \textrm{tr}(AB) \ \Big| \ B \ \textrm{is SPD and} \ \det{B}=1 \right\} }. \end{equation*} This inequality is actually an equality provided A is invertible. In fact, we can take $$B = (\det A)^{1/d} A^{-1}$$, which is SPD and $$\det B = 1$$. This proves (i) for A nonsingular. Step 2: Proof of (i) for A singular.
Given the singular value decomposition of A, \begin{equation*} A = \sum_{i=1}^d \lambda_i v_i \otimes v_i, \quad \lambda_1 \ge \cdots \ge \lambda_k> \lambda_{k+1} = \cdots = \lambda_d = 0, \end{equation*} with orthonormal vectors $$(v_i)_{i=1}^d$$, we can assume that k > 0 for otherwise A = 0 and the assertion is trivial. Given a parameter $$\sigma>0$$, let B be defined by \begin{equation*} B := \sum_{i=1}^k \sigma v_i \otimes v_i + \sum_{i=k+1}^d \sigma^{-\beta} v_i \otimes v_i \end{equation*} with $$\beta =k/(d-k)$$, so that $$\det B = \sigma ^k\sigma ^{-\beta (d-k)}=1$$. Therefore, \begin{equation*} AB = \sigma \sum_{i=1}^k \lambda_i v_i \otimes v_i \quad\Rightarrow\quad \textrm{tr}(AB) = \sigma \sum_{i=1}^k \lambda_i \to 0 \quad \textrm{as}\ \sigma \to 0, \end{equation*} which proves (i) for A singular because B is SPD and $$(\det{A})^{1/d}=0$$. Step 3: Proof of (ii). Let A and B be SPSD matrices and $$0\le \lambda \le 1$$. Then $$\lambda A + (1-\lambda ) B$$ is also SPSD and we can apply (i) to \begin{align*} (\det{[ \lambda A + (1- \lambda) B ]})^{1/d} &= \frac{1}{d} \inf{ \big\{ \textrm{tr}[( \lambda A + (1- \lambda) B)C] \big| \ C\ \textrm{is SPD and} \det{C}=1 \big\} } \\ &\geq \frac{\lambda}{d} \inf{ \big\{ \textrm{tr}(AC) \ \big| \ C\ \textrm{is SPD and} \ \det{C}=1 \big\} } \\ &\quad+ \frac{1- \lambda}{d} \inf{ \big\{ \textrm{tr}(BC) \ \big| \ C\ \textrm{is SPD and} \ \det{C}=1 \big\} } \\ &= \lambda (\det{A})^{1/d} + (1-\lambda) (\det{B})^{1/d}. \end{align*} This completes the proof. Upon relabeling $$\widehat{A}=\lambda A$$ and $$\widehat{B}=(1-\lambda )B$$, which are still SPSD, we can write Lemma 4.6(ii) as \begin{equation} (\det \widehat{A})^{1/d} + (\det \widehat{B})^{1/d} \le \left(\det(\widehat{A}+\widehat{B})\right)^{1/d}. \end{equation} (4.1) We now show that our discrete operator $$T_\varepsilon [\cdot ]$$ possesses a similar property. Corollary 4.7 (Concavity of discrete operator).
Given two functions $$u_h,w_h\in \mathbb{V}_h$$, we have \begin{equation*} \left( T_{\varepsilon} [u_h] (x_i) \right)^{1/d} + \left( T_{\varepsilon} [w_h](x_i) \right)^{1/d} \le \left( T_{\varepsilon}[u_h+w_h](x_i) \right)^{1/d} \end{equation*} for all nodes $$x_i \in \mathcal{N}_h^0$$ such that $$\nabla ^2_{\delta } u_h(x_i;v_j)\ge 0, \ \nabla ^2_{\delta } w_h(x_i;v_j)\geq 0$$ for all $$v_j \in \mathbb S_{\theta }$$. Proof. We argue in two steps. Step 1. For $$a=(a_j)_{j=1}^d \in \mathbb{R}^d$$ with $$a_j \geq 0, \ j=1,\ldots ,d$$ we consider the function \begin{equation*} f(a) := \left( \prod_{j=1}^d a_j \right)^{1/d}, \end{equation*} which is the $$d$$th root of the determinant of a diagonal (and thus symmetric) positive semidefinite matrix with diagonal elements $$(a_j)_{j=1}^d$$, i.e., \begin{equation*} f(a) = \left(\det \textrm{diag} {\left\{a_1,\ldots,a_d\right\}} \right)^{1/d}. \end{equation*} Applying (4.1) to $$\widehat{A}= \textrm{diag} {\left \{a_1,\ldots ,a_d\right \}}, \ \widehat{B}= \textrm{diag} {\left \{b_1,\ldots ,b_d\right \}} $$ with $$a=(a_j)_{j=1}^d, b=(b_j)_{j=1}^d\ge 0$$ componentwise, we deduce \begin{equation*} f(a) + f(b) \le f(a+b). \end{equation*} Step 2. We now apply this formula to the discrete operator. Since both $$u_h,w_h$$ are discretely convex at $$x_i\in \mathcal{N}_h^0$$, so is $$u_h+w_h$$, and we can apply Lemma 2.2 (discrete convexity) to write \begin{equation*} T_\varepsilon[u_h+w_h](x_i) = \prod_{j=1}^d \nabla^2_{\delta}[u_h+w_h](x_i;v_j) \end{equation*} for a suitable $$\mathbf{v}=(v_j)_{j=1}^d\in \mathbb{S}^{\perp }_{\theta }$$.
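As an aside, the scalar inequality $$f(a)+f(b)\le f(a+b)$$ established in Step 1 is easy to verify numerically; the sketch below (illustrative only, not part of the argument) tests it on random nonnegative d-tuples.

```python
import random

def f(a):
    """d-th root of the product of the entries of a nonnegative d-tuple."""
    d = len(a)
    p = 1.0
    for x in a:
        p *= x
    return p ** (1.0 / d)

random.seed(1)
for _ in range(10000):
    d = random.randint(2, 6)
    a = [random.uniform(0.0, 10.0) for _ in range(d)]
    b = [random.uniform(0.0, 10.0) for _ in range(d)]
    # superadditivity f(a) + f(b) <= f(a + b), up to a small tolerance
    assert f(a) + f(b) <= f([x + y for x, y in zip(a, b)]) + 1e-9
```

Equality occurs when a and b are proportional, e.g. $$f(1,1)+f(4,4)=f(5,5)$$, mirroring the equality case of the determinant inequality (4.1).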
Making use again of (2.4), this time for $$u_h$$ and $$w_h$$ and for the specific set of directions v just found, we obtain \begin{align*} \left( T_\varepsilon [u_h](x_i) \right)^{\frac{1}{d}} + \left( T_\varepsilon [w_h](x_i) \right)^{\frac{1}{d}} &\le \left(\prod_{j=1}^d \nabla^2_{\delta} u_h(x_i;v_j)\right)^{\frac{1}{d}} + \left(\prod_{j=1}^d \nabla^2_{\delta} w_h(x_i;v_j)\right)^{\frac{1}{d}} \\ & \le \left(\prod_{j=1}^d \nabla^2_{\delta} u_h(x_i;v_j) + \nabla^2_{\delta} w_h(x_i;v_j) \right)^{\frac{1}{d}} = \left(T_\varepsilon[u_h+w_h](x_i)\right)^{\frac{1}{d}}, \end{align*} where the second inequality is given by Step 1 for $$a = \left ( \nabla ^2_{\delta } u_h(x_i;v_j) \right )_{j=1}^d$$ and $$b = \left ( \nabla ^2_{\delta } w_h(x_i;v_j) \right )_{j=1}^d$$. This is the asserted estimate. 4.3. Continuous dependence of the two-scale method on data We are now ready to prove the continuous dependence of discrete solutions on data. This will be instrumental later for deriving rates of convergence for the two-scale method. Proposition 4.8 (Continuous dependence on data). Given two functions $$u_h,w_h\in \mathbb{V}_h$$ such that $$u_h \geq w_h$$ on $$\partial \varOmega _h$$ and \begin{equation*} T_{\varepsilon}[u_h](x_i) =f_1(x_i) \geq 0 \ \ \textrm{and} \ \ T_{\varepsilon}[w_h](x_i) = f_2(x_i) \geq 0 \end{equation*} at all interior nodes $$x_i\in \mathcal{N}_h^0$$, we have \begin{equation*} \max \limits_{ \varOmega_h} (u_h-w_h)^- \leq C \ \delta \ \left( \sum\limits_{x_i \in \mathcal{C}_{-}(u_h-w_h)} \left( f_1(x_i)^{1/d}- f_2(x_i)^{1/d} \right)^d \right)^{1/d}. \end{equation*} Proof. Since $$u_h-w_h\in \mathbb{V}_h$$ and $$u_h -w_h \geq 0$$ on $$\partial \varOmega _h$$, Lemma 4.5 (stability) yields \begin{equation*} \max_{x_i\in\mathcal{N}_h^0} (u_h - w_h) (x_i)^- \leq C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} T_{\epsilon} [u_h - w_h] (x_i) \right)^{1/d} . 
\end{equation*} For $$x_i \in \mathcal{C}_-(u_h-w_h)$$ we have $$\nabla ^2_{\delta } (u_h-w_h)(x_i;v_j)\geq 0$$, whence \begin{equation*} \nabla^2_{\delta} u_h(x_i;v_j)\geq \nabla^2_{\delta} w_h(x_i;v_j) \ge 0 \quad\forall\, v_j\in\mathbb S_{\theta}, \end{equation*} where we have made use of Lemma 2.2 (discrete convexity). Invoking Corollary 4.7 (concavity of discrete operator) for $$u_h-w_h$$ and $$w_h$$, we deduce \begin{equation*} \left( T_{\varepsilon} [u_h-w_h] (x_i) \right)^{1/d} \le \left( T_{\varepsilon} [u_h](x_i) \right)^{1/d} - \left( T_{\varepsilon} [w_h](x_i) \right)^{1/d}, \end{equation*} whence \begin{align*} \max_{x_i\in\mathcal{N}_h^0} (u_h - w_h) (x_i)^- \leq &\; C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} \left( T_{\varepsilon} [u_h](x_i)^{1/d} - T_{\varepsilon} [w_h] (x_i)^{1/d} \right)^d \right)^{1/d} \\ = &\; C \delta \left( \sum_{x_i \in \mathcal{C}_-(u_h - w_h)} \left( f_1(x_i)^{1/d} - f_2(x_i)^{1/d} \right)^d \right)^{1/d} . \end{align*} This completes the proof. 5. Rates of convergence We now combine the preceding estimates to prove pointwise convergence rates for solutions with varying degrees of regularity. We first present in Theorem 5.3 the case of a classical solution with Hölder regularity. This allows us to introduce the main techniques employed for deriving the rates of convergence. We then build on these techniques and prove error estimates for three more cases of increasing generality. In Theorem 5.4 we assume a classical solution with Sobolev regularity, which requires the use of embedding estimates and accumulating the truncation error in $$l^d$$ rather than in $$l^\infty $$. We next deal with a nonclassical solution that is globally in $$W^{2}_{\infty }(\varOmega )$$ but whose Hessian is discontinuous across a (d − 1)-dimensional Lipschitz surface. To prove rates for this case we need to take advantage of the small volume affected by this discontinuity and combine it with the techniques used in Theorems 5.3 and 5.4.
Lastly, we remove the nondegeneracy assumption $$f \geq f_0>0$$ used in the previous three cases to obtain rates of convergence for a piecewise smooth viscosity solution with degenerate right-hand side f. This corresponds to one of the numerical experiments performed in the study by Nochetto et al. (2017). Our estimates do not require h to be small and are stated over the computational domain $$\varOmega _h\subset \varOmega $$. 5.1. Barrier function We recall here the two discrete barrier functions introduced in Nochetto et al. (2017, Lemmas 5.1, 5.2). The first one is critical to control the behavior of $$u_{\varepsilon }$$ close to the boundary of $$\varOmega _h$$ and to prove the convergence to the unique viscosity solution u of (1.1). We now use the same barrier function to control the pointwise error between $$u_{\varepsilon }$$ and u close to the boundary. The second barrier allows us to treat the degenerate case f ≥ 0, using techniques similar to the case f > 0. Lemma 5.1 (Discrete boundary barrier). Let $$\varOmega $$ be uniformly convex and E > 0 be arbitrary. For each node $$z \in \mathcal{N}_h^0$$ with $$\ \textrm{dist} (z,\partial \varOmega _h) \leq \delta $$, there exists a function $$p_h\in \mathbb{V}_h$$ such that $$T_{\varepsilon }[p_h](x_i) \geq E$$ for all $$x_i \in \mathcal{N}_h^0$$, $$p_h \leq 0$$ on $$\partial \varOmega _h$$ and \begin{equation*} |p_h(z)| \leq CE^{1/d} \delta \end{equation*} with C depending on $$\varOmega $$. Lemma 5.2 (Discrete interior barrier). Let $$\varOmega $$ be contained in the ball $$B(x_0,R)$$ of center $$x_0$$ and radius R. If $$q(x):= \frac{1}{2}\left ( |x-x_0|^2 - R^2 \right )$$, then its interpolant $$q_h:=\mathcal I_h q\in \mathbb{V}_h$$ satisfies \begin{equation*} T_\varepsilon[q_h](x_i) \ge 1\quad\forall\, x_i\in\mathcal{N}_h^0, \qquad q_h(x_i) \le 0 \quad\forall\, x_i\in\mathcal{N}_h^b. \end{equation*} 5.2.
Error estimates for solutions with Hölder regularity We now deal with classical solutions u of (1.1) of class $$C^{2+k,\alpha }(\overline{\varOmega })$$, with k = 0, 1 and $$0<\alpha \leq 1$$, and derive pointwise error estimates. We proceed as follows. We first use Lemma 5.1 (discrete boundary barrier) to control $$u_{\varepsilon } -\mathcal I_h u$$ in the $$\delta $$-neighborhood $$\omega _{h,\delta }$$ of $$\partial \varOmega _h$$, where the consistency error of $$T_\varepsilon [\mathcal I_h u]$$ is of order 1 according to Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$). In the $$\delta $$-interior region $$\varOmega _{h,\delta }$$ we combine the interior consistency error of $$T_\varepsilon [\mathcal I_h u]$$ from Lemma 2.4 and Proposition 4.8 (continuous dependence on data). Judicious choices of $$\delta $$ and $$\theta $$ in terms of h conclude the argument. Theorem 5.3 (Rates of convergence for classical solutions). Let $$f(x) \geq f_0>0$$ for all $$x \in \varOmega $$. Let u be the classical solution of (1.1) and $$u_{\varepsilon }$$ be the discrete solution of (2.2). If $$u \in C^{2,\alpha }(\overline{\varOmega })$$ for $$0<\alpha \leq 1$$ and \begin{equation*} \delta = R_0(u) \ h^{\frac{2}{2+\alpha}}, \quad \theta = R_0(u)^{-1} \ h^{\frac{\alpha}{2+\alpha}}, \end{equation*} with $$R_0(u) = |u|_{W^2_{\infty }(\varOmega )}^{\frac{1}{2+\alpha }} \ |u|_{C^{2,\alpha }(\overline{\varOmega })}^{-{\frac{1}{2+\alpha }}}$$, then \begin{equation*} \|u-u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(\varOmega,d,f_0) \left( |u|_{C^{2,\alpha}(\overline{\varOmega})}^{\frac{1}{2+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+\alpha}} + \left(1+ R_0(u)\right)|u|_{W^2_{\infty}(\varOmega)} \right) \ h^{\frac{\alpha}{2+\alpha}}. 
\end{equation*} If, instead, $$u \in C^{3,\alpha }(\overline{\varOmega })$$ for $$0 < \alpha \leq 1$$ and \begin{equation*} \delta = R_1(u) \ h^{\frac{2}{3+\alpha}}, \quad \theta = R_1(u)^{-1} \ h^{\frac{1+\alpha}{3+\alpha}}, \end{equation*} with $$R_1(u) := |u|_{W^2_{\infty }(\varOmega )}^\frac{1}{3+\alpha }|u|_{C^{3,\alpha }(\overline{\varOmega })}^{-\frac{1}{3+\alpha }}$$, then \begin{equation*} \|u-u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(\varOmega,d,f_0) \left( |u|_{C^{3,\alpha}(\overline{\varOmega})}^{\frac{1}{3+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{3+\alpha}} + \left(1 + R_1(u) \right)\!|u|_{W^2_{\infty}(\varOmega)} \right) \ h^{\frac{1+\alpha}{3+\alpha}}. \end{equation*} Proof. If $$R_k(u) := |u|_{W^2_{\infty }(\varOmega )}^\frac{1}{2+k+\alpha }|u|_{C^{2+k,\alpha }(\overline{\varOmega })}^{-\frac{1}{2+k+\alpha }}$$, k = 0, 1, we prove below the estimate \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim \left( \left(1+R_k(u)\right)|u|_{W^2_{\infty}(\varOmega)}+ |u|_{C^{2+k,\alpha}(\overline{\varOmega})}^{\frac{1}{2+k+\alpha}}\! |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+k+\alpha}} \right) \ h^{\frac{k+\alpha}{2+k+\alpha}} \end{equation*} with a hidden constant depending on $$\varOmega , d, f_0$$. We proceed in three steps. The estimates for $$\max _{\varOmega _h} \, (\mathcal I_h u - u_{\varepsilon })$$ are similar and thus omitted. Adding the interpolation error $$\|u-\mathcal I_h u\|_{L^{\infty }(\varOmega _h)} \leq Ch^2 |u|_{W^2_{\infty }(\varOmega )}$$ (Brenner & Scott, 2008) readily gives the asserted estimates because $$\frac{k+\alpha }{2+k+\alpha } \le \frac{1}{2}$$ for k = 0, 1 and $$0<\alpha \le 1$$. Step 1: Boundary estimate. We show that for $$z \in \mathcal{N}_h^0$$ such that $$\textrm{dist}(z,\partial \varOmega _h) \leq \delta $$, \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta.
\end{equation*} Given the function $$p_h$$ of Lemma 5.1 (discrete boundary barrier), for z fixed, we examine the behavior of $$u_{\varepsilon } + p_h$$. For any interior node $$x_i \in \mathcal{N}_h^0$$, we have \begin{equation*} \begin{aligned} \prod_{j=1}^d \nabla^2_{\delta} (u_{\varepsilon} +p_h) (x_i;v_j) &= \prod_{j=1}^d \left(\nabla^2_{\delta} u_{\varepsilon} (x_i;v_j) + \nabla^2_{\delta} p_h(x_i;v_j) \right)\\ &\geq \prod_{j=1}^d \nabla^2_{\delta} u_{\varepsilon} (x_i;v_j) + \prod_{j=1}^d \nabla^2_{\delta} p_h (x_i;v_j) \quad \forall\, \mathbf{v }=(v_j)_{j=1}^d \in \mathbb{S}^{\perp}_{\theta} \end{aligned} \end{equation*} because $$\nabla ^2_{\delta } u_{\varepsilon } (x_i;v_j) \geq 0$$ and $$\nabla ^2_{\delta } p_h(x_i;v_j)\geq 0$$. We apply Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u])$$ to obtain \begin{equation*} \begin{aligned} T_{\varepsilon} [u_{\varepsilon} + p_h] (x_i) & \geq T_{\varepsilon}[u_{\varepsilon}](x_i) + T_{\varepsilon}[p_h](x_i) \\ & \geq f(x_i) + E \geq T_{\varepsilon}[\mathcal{I}_h u](x_i) - C|u|_{W^2_{\infty}(\varOmega)}^d + E \geq T_{\varepsilon} [\mathcal I_h u](x_i), \end{aligned} \end{equation*} provided $$E \geq C|u|_{W^2_{\infty }(\varOmega )}^d$$. Since $$\mathcal I_h u = u_{\varepsilon }$$ and $$p_h \leq 0 $$ on $$\partial \varOmega _h$$, we deduce from Lemma 2.3 (discrete comparison principle) that \begin{equation*} u_{\varepsilon}(z) + p_h(z) \leq \mathcal I_h u (z), \end{equation*} whence \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta. \end{equation*} Step 2: Interior estimate. 
We show that for all $$x_i \in \mathcal{N}_h^0$$, so that $$\textrm{dist} (x_i,\partial \varOmega _h) \geq \delta $$, \begin{equation*} T_{\varepsilon} [u_{\varepsilon}] (x_i)- T_{\varepsilon} [\mathcal I_h u](x_i) \leq C_1(u) \delta^{\alpha +k} + C_2(u) \left( \frac{h^2}{\delta^2}+\theta^2 \right) \end{equation*} with k = 0, 1 and \begin{equation*} C_1(u) = C |u|_{C^{2+k,\alpha}(\overline{\varOmega})} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}, \quad C_2(u) = C |u|_{W^2_{\infty}(\varOmega)}^d \end{equation*} dictated by Lemma 2.4. Step 1 guarantees that \begin{equation*} u_{\varepsilon} - \mathcal I_h u \leq C|u|_{W^2_{\infty}(\varOmega)} \delta \quad \textrm{on } \partial \varOmega_{h,\delta}, \end{equation*} where $$\varOmega _{h,\delta }$$ is defined in (2.7). Let $$d_{\varepsilon }:= \mathcal I_h u-u_{\varepsilon } +C|u|_{W^2_{\infty }(\varOmega )}\delta $$ and note that $$d_{\varepsilon } \geq 0 $$ on $$\partial \varOmega _{h,\delta }$$. We then apply Proposition 4.8 (continuous dependence on data) to $$d_{\varepsilon }$$ in $$\varOmega _{h,\delta }$$, in conjunction with Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u]$$), to obtain \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \ \delta \left( \sum_{x_i \in \mathcal{C}_-(d_{\varepsilon})} \left( (f(x_i) +e)^{1/d} - f(x_i)^{1/d} \right)^d \right)^{1/d} \end{equation*} with $$e:= C_1(u) \delta ^{\alpha +k} + C_2(u) \left ( \frac{h^2}{\delta ^2} + \theta ^2 \right )$$. We now use that the function $$t \mapsto t^{1/d}$$ is concave with derivative $$\frac{1}{d}t^{1/d-1}$$ and $$f(x_i)\geq f_0>0$$ to estimate \begin{equation*} (f(x_i)+e)^{1/d}-f(x_i)^{1/d} \leq \frac{e}{d f_0^{\frac{d-1}{d}}}, \end{equation*} whence \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \delta \left( \sum_{x_i \in \mathcal{C}_-(d_{\varepsilon})} \left(C_1(u) \delta^{\alpha +k}+C_2(u)\left(\frac{h^2}{\delta^2}+\theta^2\right) \right)^d \right)^{1/d}. 
\end{equation*} Since the cardinality of $$\mathcal{C}_-(d_{\varepsilon })$$ is bounded by that of $$\mathcal N_h$$, which in turn is bounded by $$Ch^{-d}$$ with C depending on shape regularity, we end up with \begin{equation} \max_{\varOmega_h} \, (u_{\varepsilon} -\mathcal I_h u) \lesssim |u|_{W^2_{\infty}(\varOmega)} \delta + \frac{\delta}{h} \left( C_1(u) \delta^{\alpha+k} + C_2(u) \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation} (5.1) Step 3: Choice of $$\delta $$ and $$\theta $$. To find an optimal choice of $$\delta $$ and $$\theta $$ in terms of h, we minimize the right-hand side of the preceding estimate. We first set $$\theta ^2 = \frac{h^2}{\delta ^2}$$ and equate the last two terms: \begin{equation*} C_1(u) \delta^{k+\alpha} = C_2(u) \frac{h^2}{\delta^2} \quad \Longrightarrow \quad \delta = R_k(u) h^{\frac{2}{2+k+\alpha}}. \end{equation*} Again writing $$C_1(u)$$ and $$C_2(u)$$ in terms of $$|u|_{C^{2+k,\alpha }(\overline{\varOmega })}$$ and $$|u|_{W^2_{\infty }(\varOmega )}$$, we thus obtain \begin{equation} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim R_k(u) |u|_{W^2_{\infty}(\varOmega)} \ h^{\frac{2}{2+k+\alpha}} + |u|_{C^{2+k,\alpha}(\overline{\varOmega})}^{\frac{1}{2+k+\alpha}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{2+k+\alpha}} \ h^{\frac{k+\alpha}{2+k+\alpha}}.\nonumber \end{equation} Finally, the desired estimate follows immediately because $$k+\alpha \leq 2$$. We observe that according to Theorem 5.3 the rate of convergence is of order $$h^{1/2}$$ whenever $$u\in C^{3,1}(\overline{\varOmega })$$. However, our numerical experiments in Nochetto et al. (2017) indicate linear rates of convergence, which correspond to the consistency order in Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$). This mismatch may be attributed to the factor $$\frac{\delta }{h} \gg 1$$ in (5.1), which relates to Remark 4.4 (artificial factor $$\frac{\delta }{h}$$). This issue will be tackled in a forthcoming paper. 5.3.
Error estimates for solutions with Sobolev regularity We now derive error estimates for solutions $$u \in W^s_p(\varOmega )$$ with $$s>2+\frac{d}{p}$$ so that $$W^s_p(\varOmega )\subset C^2(\overline{\varOmega })$$. We exploit the structure of the estimate of Proposition 4.8 (continuous dependence on data), which shows that its right-hand side accumulates in $$l^d$$ rather than $$l^{\infty }$$. Theorem 5.4 (Convergence rate for $$W_p^s$$ solutions). Let $$f \geq f_0>0$$ in $$\varOmega $$ and let the viscosity solution u of (1.1) be of class $$W_p^s(\varOmega )$$ with $$\frac{d}{p} < s-2-k\le 1, \ k=0,1$$. If $$u_{\varepsilon }$$ is the discrete solution of (2.2) and \begin{equation*} \delta = R(u) \ h^{\frac{2}{s}}, \quad \theta = R(u)^{-1} \ h^{1-\frac{2}{s}}, \end{equation*} with $$R(u) := |u|_{W^2_{\infty }(\varOmega )}^{\frac{1}{s}} |u|_{W_p^s(\varOmega )}^{-\frac{1}{s}}$$, then \begin{equation*} \| u - u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega,f_0) \left( |u|_{W_p^s(\varOmega)}^{\frac{1}{s}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{s}} + \left(1 + R(u) \right) |u|_{W^2_\infty(\varOmega)} \right) h^{1-\frac{2}{s}} . \end{equation*} Proof. We proceed as in Theorem 5.3 to show an upper bound for $$u_{\varepsilon } - \mathcal I_h u$$. The boundary estimate of Step 1 remains intact, namely \begin{equation*} u_{\varepsilon}(z) - \mathcal I_h u(z) \leq C \ |u|_{W^2_{\infty}(\varOmega)} \ \delta \end{equation*} for all $$z \in \mathcal{N}_h^0$$ such that $$\textrm{dist} (z,\partial \varOmega _h) \leq \delta $$. 
On the other hand, Step 2 yields \begin{equation*} \max_{\varOmega_{h,\delta}}(u_{\varepsilon} - \mathcal I_h u) \lesssim \delta |u|_{W^2_{\infty}(\varOmega)} + \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \delta^{(k+\alpha)d} + C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{1/d}, \end{equation*} where $$C_1(u)$$ and $$C_2(u)$$ are defined in Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) and $$0 < \alpha = s-2-k-\frac{d}{p} \leq 1$$ corresponds to the Sobolev embedding $$W_p^s(B_i) \subset C^{2+k,\alpha }(B_i)$$. In the following calculations we resort to the Sobolev inequality (Giusti, 2003, Theorem 2.9) \begin{equation*} |u|_{C^{2+k,\alpha}(B_i)} \le C |u|_{W_p^s(B_i)}, \end{equation*} involving only seminorms. We stress that C > 0 depends on the Lipschitz constant of $$B_i$$ but not on its size. The latter is due to the fact that the Sobolev numbers of $$W^{s-2-k}_p(B_i)$$ and $$C^{0,\alpha }(B_i)$$ coincide: $$0< s-k-2-d/p=\alpha \le 1$$. We refer to Giusti (2003, Theorem 2.9) for a proof for 0 < s < 1. We now use the Hölder inequality with exponent $$\frac{p}{d}> 1$$ to obtain \begin{equation*} \begin{aligned} \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \right)^{\frac{1}{d}} &\lesssim \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W_p^s(B_i)}^d |u|_{W^2_{\infty}(B_i)}^{d(d-1)} \right)^{\frac{1}{d}} \\ &\lesssim \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W_p^s(B_i)}^{d\frac{p}{d}} \right)^{\frac{1}{d}\frac{d}{p}} \ \left( \sum_{x_i \in \mathcal{N}_h^0} |u|_{W^2_{\infty}(B_i)}^{d(d-1)\frac{p}{p-d}} \right)^{\frac{1}{d}\frac{p-d}{p}}. 
\end{aligned} \end{equation*} Since the cardinality of the set of balls $$B_i$$ containing an arbitrarily given $$x \in \varOmega $$ is proportional to $$\left ( \frac{\delta }{h} \right )^d$$, while the cardinality of $$\mathcal{N}_h^0$$ is proportional to $$h^{-d}$$, we get \begin{align*} \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \right)^{\frac{1}{d}} & \lesssim \left( \frac{\delta}{h} \right)^{\frac{d}{p}} |u|_{W_p^s(\varOmega)} \ \left( h^{-d} |u|_{W^2_{\infty}(\varOmega)}^{\frac{d(d-1)p}{p-d}} \right)^{\frac{p-d}{pd}} \\ & \lesssim \frac{\delta^{\frac{d}{p}}}{h} \ |u|_{W_p^s(\varOmega)} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}. \end{align*} Exploiting that $$\alpha + k + \frac{d}{p}+1=s-1$$, we readily arrive at \begin{equation*} \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_1(u)^d \ \delta^{(k+\alpha)d} \right)^{\frac{1}{d}} \lesssim \frac{\delta^{s-1}}{h} |u|_{W_p^s(\varOmega)} \ |u|_{W^2_{\infty}(\varOmega)}^{d-1}. \end{equation*} In addition, we have \begin{equation*} \left( \sum_{x_i \in \mathcal{N}_h^0} C_2(u)^d \right)^{\frac{1}{d}} \lesssim |u|_{W^2_{\infty}(\varOmega)}^d \ \frac{1}{h}, \end{equation*} whence \begin{equation*} \delta \left( \sum_{x_i \in \mathcal{N}_h^0} C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{\frac{1}{d}} \lesssim |u|_{W^2_{\infty}(\varOmega)}^d \ \frac{\delta}{h} \left( \frac{h^2}{\delta^2} + \theta^2 \right). \end{equation*} Collecting the previous estimates, we end up with \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim \delta |u|_{W^2_{\infty}(\varOmega)} + |u|_{W^2_{\infty}(\varOmega)}^{d-1} \frac{\delta}{h} \left( |u|_{W_p^s(\varOmega)} \delta^{s-2} + |u|_{W^2_{\infty}(\varOmega)} \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). 
\end{equation*} To find an optimal relation among $$h,\delta $$ and $$\theta $$, we first choose $$\theta ^2 = \frac{h^2}{\delta ^2}$$ and next equate the two terms in the second summand to obtain \begin{equation*} \delta = R(u) \ h^{\frac{2}{s}}, \quad \theta = R(u)^{-1} \ h^{1-\frac{2}{s}}, \end{equation*} whence \begin{equation*} \max_{\varOmega_h} \, (u_{\varepsilon} - \mathcal I_h u) \lesssim R(u) |u|_{W^2_{\infty}(\varOmega)} h^{\frac{2}{s}} + |u|_{W_p^s(\varOmega)}^{\frac{1}{s}} \ |u|_{W^2_{\infty}(\varOmega)}^{d-\frac{1}{s}} h^{1-\frac{2}{s}}. \end{equation*} Adding the interpolation error estimate $$\|u-\mathcal I_h u\|_{L^\infty (\varOmega )} \lesssim h^2 |u|_{W^2_{\infty }(\varOmega )}$$, and using that $$2> \frac{2}{s} \ge 1 - \frac{2}{s}$$ for 2 < s ≤ 4, leads to the asserted estimate. The error estimate of Theorem 5.4 (convergence rate for $$W_p^s$$-solutions) is of order $$\frac{1}{2}$$ for s = 4 and $$u \in W_p^4(\varOmega )$$ with p > d. This rate requires much weaker regularity than the corresponding error estimate in Theorem 5.3, namely $$u \in C^{3,1}(\overline{\varOmega }) = W^4_{\infty }(\varOmega )$$. In both cases, the relation between $$\delta $$ and h is $$\delta \approx h^{\frac{1}{2}}$$. 5.4. Error estimates for piecewise smooth solutions We now derive pointwise rates of convergence for a larger class of solutions than in Section 5.3. These are viscosity solutions that are piecewise $$W_p^s$$ but have discontinuous Hessians across a Lipschitz (d − 1)-dimensional manifold $$\mathcal{S}$$; we refer to the second numerical example in Nochetto et al. (2017). Since $$T_\varepsilon [\mathcal I_h u]$$ has a consistency error of order 1 in a $$\delta $$-region around $$\mathcal{S}$$, due to the discontinuity of $$D^2u$$, we exploit the fact that the measure of this region is proportional to $$\delta |\mathcal{S}|$$. 
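As an aside, the parameter scalings of Theorem 5.4, which we adapt below, can be made concrete with a short sketch; the normalization R(u) = 1 is hypothetical and serves only to display the powers of h.

```python
def scalings(h, s, R=1.0):
    """Parameters of Theorem 5.4: delta = R h^(2/s), theta = R^(-1) h^(1-2/s);
    the predicted pointwise error is of order h^(1-2/s)."""
    delta = R * h ** (2.0 / s)
    theta = (1.0 / R) * h ** (1.0 - 2.0 / s)
    rate = h ** (1.0 - 2.0 / s)
    return delta, theta, rate

# For s = 4 (u in W^4_p, p > d) this gives delta = h^(1/2) and rate h^(1/2).
for h in (1e-1, 1e-2, 1e-3):
    delta, theta, rate = scalings(h, s=4.0)
    print(f"h={h:.0e}  delta={delta:.3e}  theta={theta:.3e}  rate={rate:.3e}")
```

The tabulated values confirm the relation $$\delta \approx h^{1/2}$$ noted after Theorem 5.4.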
We are thus able to adapt the argument of Theorem 5.4 (convergence rate for $$W^s_p$$ solutions) and accumulate such consistency error in $$l^d$$ at the expense of an extra additive term of order $$h^{-1}\delta ^{1+\frac{1}{d}}$$. This term is responsible for a reduced convergence rate when $$u \in W^s_p(\varOmega \setminus \mathcal{S})$$, $$s> 2+ \frac{1}{d}$$. Theorem 5.5 (Convergence rate for piecewise smooth solutions). Let $$\mathcal{S}$$ denote a (d − 1)-dimensional Lipschitz manifold that divides $$\varOmega $$ into two disjoint subdomains $$\varOmega _1, \varOmega _2$$ so that $$\mathcal{S} = \overline{\varOmega }_1 \cap \overline{\varOmega }_2$$. Let $$f \geq f_0>0$$ in $$\varOmega $$ and let $$u \in W_p^s(\varOmega _i) \cap W^2_{\infty }(\varOmega )$$, for i = 1, 2 and $$\frac{d}{p} < s-2-k \le 1, k=0,1$$, be the viscosity solution of (1.1). If $$u_{\varepsilon }$$ denotes the discrete solution of (2.2), then for $$\beta = \min \big \{s,2+\frac{1}{d}\big\}$$ we have \begin{equation*} \| u -u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega,f_0) \left( R(u)^{-1} |u|_{W^2_{\infty}(\varOmega)}^d + (1+ R(u) ) |u|_{W^2_{\infty}(\varOmega)} \right) h^{1-\frac{2}{\beta}}, \end{equation*} with $$R(u) = \big(\frac{|u|_{W^2_{\infty }(\varOmega )}}{|u|_{W_p^s(\varOmega \setminus \mathcal{S})}+{|u|_{W^2_{\infty }(\varOmega )}}}\big)^{\frac{1}{\beta }}$$ and $$|u|_{W_p^s(\varOmega \setminus \mathcal{S})}:= \max _i |u|_{W_p^s(\varOmega _i)}$$, provided \begin{equation*} \delta = R(u) \ h^{\frac{2}{\beta}}, \quad \theta= R(u)^{-1} \ h^{1-\frac{2}{\beta}}. \end{equation*} Proof. We proceed as in Theorems 5.3 and 5.4. The boundary layer estimate relies on the regularity $$u \in W^2_{\infty }(\varOmega )$$, which is still valid, whence for all $$x \in \varOmega _h$$ such that $$\textrm{dist}(x,\partial \varOmega _h) \leq \delta $$ we obtain \begin{equation*} u_{\varepsilon}(x) - \mathcal I_h u(x) \leq C |u|_{W^2_{\infty}(\varOmega)} \delta.
\end{equation*} Consider now the internal layer \begin{equation*} \mathcal{S}^{\delta}_h : = \left\{ x \in \varOmega_h : \ \textrm{dist} (x,\mathcal{S}) \leq \delta \right\}, \end{equation*} which is the region affected by the discontinuity of the Hessian $$D^2u$$. Recall the auxiliary function $$d_{\varepsilon } = \mathcal I_h u - u_{\varepsilon } + C |u|_{W^2_{\infty }(\varOmega )} \delta $$ of Theorem 5.3 (rates of convergence for classical solutions) and split the contact set $$\mathcal{C}_-^\delta (d_{\varepsilon }):= \mathcal{C}_-(d_{\varepsilon }) \cap \varOmega _{h,\delta }$$ as \begin{equation*} \mathcal{S}_{h,1}^{\delta} := \mathcal{C}_-^\delta(d_{\varepsilon}) \cap \mathcal{S}^{\delta}_h, \quad \mathcal{S}_{h,2}^{\delta}:= \mathcal{C}_-^\delta(d_{\varepsilon}) \setminus \mathcal{S}^{\delta}_h. \end{equation*} An argument similar to Step 2 (interior estimate) of the proof of Theorem 5.3, based on combining Proposition 4.8 (continuous dependence on data) and Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) with the assumption $$f\ge f_0>0$$, yields \begin{equation*} \begin{aligned} \max_{\varOmega_{h,\delta}} \ d_{\varepsilon}^- \lesssim&\, \delta \left( \sum_{x_i \in \mathcal{S}_{h,1}^{\delta}} C_2(u)^d \right)^{1/d} \\ &+ \delta \ \left( \sum_{x_i \in \mathcal{S}_{h,2}^{\delta}} C_1(u)^d \delta^{(k+\alpha)d} + C_2(u)^d \left( \frac{h^2}{\delta^2} + \theta^2 \right)^d \right)^{1/d} =: I_1 + I_2, \end{aligned} \end{equation*} because the consistency error in $$\mathcal{S}_{h,1}^{\delta }$$ is bounded by $$C_2(u) = C |u|_{W^2_\infty (B_i)}^d$$. As in Theorem 5.4 (convergence rate for $$W^s_p$$ solutions), $$C_1(u)$$ satisfies \begin{equation*} C_1(u) \lesssim |u|_{W^s_p(B_i)} |u|_{W^2_\infty(B_i)}^{d-1}. 
\end{equation*} Since the number of nodes $$x_i \in \mathcal{S}_{h,1}^{\delta }$$ is bounded by $$C|\mathcal{S}|\delta h^{-d}$$, we deduce \begin{equation*} I_1 \lesssim \delta \left( \sum_{x_i \in \mathcal{S}_{h,1}^{\delta}} C_2(u)^d \right)^{1/d} \lesssim \ |u|_{W^2_{\infty}(\varOmega)}^d \frac{\delta^{1+\frac{1}{d}}}{h}. \end{equation*} For $$I_2$$ we distinguish whether $$x_i$$ belongs to $$\varOmega _1$$ or $$\varOmega _2$$ and accumulate $$C_1(u)$$ in $$\ell ^p$$, exactly as in Theorem 5.4, to obtain \begin{equation*} I_2 \lesssim \ |u|_{W^2_{\infty}(\varOmega)}^{d-1} \left( |u|_{W_p^s(\varOmega \setminus \mathcal{S} ) }\frac{\delta^{s-1}}{h} + |u|_{W^2_{\infty}(\varOmega)} \ \frac{\delta}{h} \ \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation*} Collecting the previous estimates and using the definition of $$\beta $$ yields \begin{align*} \max_{\varOmega_h} (u_{\varepsilon} - \mathcal I_h u) \lesssim&\, |u|_{W^2_{\infty}(\varOmega)} \delta \\ &+ \ |u|_{W^2_{\infty}(\varOmega)}^{d-1} \frac{\delta}{h} \left( \left(|u|_{W_p^s(\varOmega \setminus \mathcal{S} )} + |u|_{W^2_{\infty}(\varOmega)}\right)\delta^{\beta-2} + |u|_{W^2_{\infty}(\varOmega)} \ \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{align*} We finally realize that this estimate is similar to that in the proof of Theorem 5.4 except for the middle term on the right-hand side. Therefore, we proceed as in Theorem 5.4 to find the relation between $$\delta , \theta $$ and h, add the estimate $$\|u-\mathcal I_h u\|_{L^\infty (\varOmega )}\lesssim h^2 |u|_{W^2_{\infty }(\varOmega )}$$ and eventually derive the asserted error estimate. 5.5. Error estimates for piecewise smooth solutions with degenerate f We observe that in all three preceding theorems we assume that $$f \geq f_0>0$$. 
This is an important assumption in the proofs, since it allows us to use the concavity of $$t\mapsto t^{1/d}$$ and Proposition 4.8 (continuous dependence on data) to obtain \begin{equation} (f(x_i)+e)^{1/d} -f(x_i)^{1/d} \leq \frac{e}{df_0^{\frac{d-1}{d}}}, \end{equation} (5.2) where e is related to the consistency of the operator in Lemma 2.4 (consistency of $$T_{\varepsilon }[\mathcal I_h u]$$). We see that this is only possible if $$f_0>0$$. If we allow f to touch zero, then (5.2) reduces to \begin{equation} (f(x_i)+e)^{1/d} -f(x_i)^{1/d} \leq e^{1/d}, \end{equation} (5.3) with equality for $$f(x_i)=0$$. This leads to a bound of order $$\left (\frac{\delta }{h}\right )^{1-\frac{2}{d}}$$, which is $$\ge 1$$ for d ≥ 2 and therefore does not vanish as $$h \to 0$$. To circumvent this obstruction, we use Lemma 5.2 (discrete interior barrier), which allows us to introduce an extra parameter $$\sigma>0$$ that compensates for the lack of lower bound $$f_0>0$$ and yields pointwise error estimates of reduced order. Theorem 5.6 (Degenerate forcing f ≥ 0). Let $$\mathcal{S}$$ denote a (d − 1)-dimensional Lipschitz manifold that divides $$\varOmega $$ into two disjoint subdomains $$\varOmega _1, \varOmega _2$$ such that $$\mathcal{S} = \overline{\varOmega }_1 \cap \overline{\varOmega }_2$$. Let f ≥ 0 in $$\varOmega $$ and let $$u \in W_p^s(\varOmega _i) \cap W^2_{\infty }(\varOmega )$$, for i = 1, 2 and $$\frac{d}{p} < s-2-k \le 1, k=0,1$$ be the viscosity solution of (1.1).
If $$u_{\varepsilon }$$ denotes the discrete solution of (2.2), then for $$\beta = \min \big\{ s, 2+\frac{1}{d}\big\}$$ we have \begin{equation*} \| u -u_{\varepsilon}\|_{L^{\infty}(\varOmega_h)} \leq C(d,\varOmega) |u|_{W^2_{\infty}(\varOmega)} \left( 1+ R(u) + R(u)^{-\frac{1}{d}} \right) \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)} \end{equation*} with $$R(u) = \big(\frac{|u|_{W^2_{\infty }(\varOmega )}}{|u|_{W_p^s(\varOmega \setminus \mathcal{S})}+|u|_{W^2_{\infty }(\varOmega )}}\big)^{\frac{1}{\beta }}$$ and $$|u|_{W_p^s(\varOmega \setminus \mathcal{S})}:= \max _i |u|_{W_p^s(\varOmega _i)}$$, provided \begin{equation*} \delta = R(u) \ h^{\frac{2}{\beta}}, \quad \theta= R(u)^{-1} \ h^{1-\frac{2}{\beta}}. \end{equation*} Proof. We employ the interior barrier function $$q_h$$ of Lemma 5.2 scaled by a parameter $$\sigma>0$$ to control $$u_{\varepsilon }-\mathcal I_h u$$ and $$\mathcal I_h u -u_{\varepsilon }$$ in two steps. The parameter $$\sigma $$ allows us to mimic the calculation in (5.2). In the third step we choose $$\sigma $$ optimally with respect to the scales of our scheme. Step 1: Upper bound for $$u_{\varepsilon }-\mathcal I_h u$$. We let $$w_h:= u_{\varepsilon } + \sigma q_h$$ and $$v_h:= \mathcal I_h u +C|u|_{W^2_{\infty }(\varOmega )} \delta $$, observe that $$T_\varepsilon [w_h](x_i) \ge f(x_i) + \sigma ^d$$ and proceed as in Step 1 of the proof of Theorem 5.3 to show $$ w_h(z) \leq v_h(z) $$ for all $$z\in \mathcal{N}_h^0$$ such that $$\textrm{dist} (z,\partial \varOmega _h) \le \delta $$. We now focus on $$\varOmega _{h,\delta }$$ and define the auxiliary function $$d_{\varepsilon }:= v_h -w_h$$ and contact set $$\mathcal{C}_-^\delta (d_{\varepsilon }) := \mathcal{C}_-(d_{\varepsilon }) \cap \varOmega _{h,\delta }$$. 
Since the previous argument guarantees that $$d_{\varepsilon } \geq 0$$ on $$\partial \varOmega _{h,\delta }$$, Proposition 4.8 (continuous dependence on data) gives \begin{equation*} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- \lesssim \ \delta \left( \sum_{x_i \in \mathcal{C}_-^\delta(d_{\varepsilon})} \left( \left(T_\varepsilon[v_h](x_i)\right)^{1/d}- \left(T_\varepsilon[w_h](x_i)\right)^{1/d} \right)^d \right)^{1/d} . \end{equation*} If $$e_i$$ is the local consistency error given in Lemma 2.4, we further note that \begin{equation*} T_\varepsilon[v_h](x_i) \le f(x_i) + e_i, \quad T_\varepsilon[w_h](x_i) \geq T_\varepsilon[u_{\varepsilon}](x_i) + T_\varepsilon[\sigma q_h](x_i) \geq f(x_i)+ \sigma^d \end{equation*} for all $$x_i\in \mathcal{N}_h^0$$, whence \begin{equation*} \begin{aligned} \max_{\varOmega_{h,\delta}} d_{\varepsilon}^- &\lesssim \delta \left( \sum_{x_i \in \mathcal{C}_-^\delta(d_{\varepsilon})} \left( (f(x_i)+e_i)^{1/d}- \left(f(x_i)+\sigma^d\right)^{1/d} \right)^d \right)^{1/d}. \end{aligned} \end{equation*} We now observe that $$e_i \geq \sigma ^d$$ for all $$x_i \in \mathcal{C}_-^\delta (d_{\varepsilon })$$ because all terms in the above sum are non-negative. If there is no such $$x_i$$, then the above bound implies that $$d_{\varepsilon }^-=0$$ and $$w_h \leq v_h$$, whence $$u_{\varepsilon } - \mathcal I_h u \lesssim \sigma + |u|_{W^2_{\infty }(\varOmega )} \delta $$. Otherwise, the above observation combined with (5.2) and $$f(x_i)\ge 0$$ implies \begin{equation*} \begin{aligned} (f(x_i)+e_i )^{1/d} - \left(f(x_i)+\sigma^d \right)^{1/d} &= \left(f(x_i)+\sigma^d+\left(e_i-\sigma^d\right)\right)^{1/d}- \left(f(x_i)+\sigma^d\right)^{1/d} \\ &\leq \frac{e_i-\sigma^d}{d \sigma^{d\frac{d-1}{d}}} \leq d^{-1} \sigma^{1-d} e_i. \end{aligned} \end{equation*} We next proceed exactly as in Theorem 5.5 (convergence rate for piecewise smooth solutions) to derive an upper bound for $$d_{\varepsilon }^-$$, but with the additional factor $$\sigma ^{1-d}$$. 
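The chain of inequalities above can be spot-checked numerically. The helper below is our own illustration, not part of the proof; the function name and sample values are ours, and the check is valid precisely in the regime used here, namely $$f \ge 0$$ and $$e \ge \sigma^d$$:

```python
def concavity_gap_bounded(f, e, sigma, d, tol=1e-12):
    """Check (f+e)^(1/d) - (f+sigma^d)^(1/d) <= e * sigma^(1-d) / d,
    the bound derived via concavity of t -> t^(1/d); it requires
    f >= 0 and e >= sigma^d (the regime on the contact set)."""
    lhs = (f + e) ** (1.0 / d) - (f + sigma ** d) ** (1.0 / d)
    rhs = e * sigma ** (1 - d) / d
    return lhs <= rhs + tol

# sample values with f touching zero (degenerate case) and e >= sigma^d
checks = [
    concavity_gap_bounded(f, e, sigma, d)
    for d in (2, 3)
    for sigma in (0.1, 0.3)
    for f in (0.0, 0.5, 2.0)
    for e in (sigma ** d, 2 * sigma ** d, 1.0)
]
print(all(checks))  # -> True
```

The point of the shift by $$\sigma^d$$ is visible in the code: the denominator $$\sigma^{d-1}$$ replaces the unavailable $$f_0^{\frac{d-1}{d}}$$ of (5.2).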
Employing the definition of $$d_{\varepsilon }$$, we thereby obtain \begin{equation*} u_{\varepsilon} - \mathcal I_h u \lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} \delta + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right), \end{equation*} where $$C_1(u) = C |u|_{W_p^s(\varOmega \setminus \mathcal{S})} |u|_{W^2_{\infty }(\varOmega )}^{d-1}$$ and $$C_2(u)=C|u|_{W^2_{\infty }(\varOmega )}^d$$. Step 2: Lower bound for $$u_{\varepsilon }-\mathcal I_h u$$. To prove the reverse inequality, we proceed as in Step 1, except that this time we define $$v_h:= u_{\varepsilon } + C |u|_{W^2_{\infty }(\varOmega )} \delta $$ and $$w_h:= \mathcal I_h u + \sigma q_h$$. An argument similar to Step 1 yields $$w_h(z) \leq v_h(z)$$ for all $$z\in \mathcal{N}_h^0$$ such that $$\textrm{dist} (z,\partial \varOmega _h) \le \delta $$. Moreover, recalling Lemma 2.4 (consistency of $$T_\varepsilon [\mathcal I_h u]$$) we have for all $$x_i\in \mathcal{N}_h^0$$, \begin{equation*} T_\varepsilon[v_h](x_i) = f(x_i) \le T_\varepsilon[\mathcal I_h u](x_i) + e_i, \quad T_\varepsilon[w_h](x_i) \ge T_\varepsilon[\mathcal I_h u](x_i) + \sigma^d, \end{equation*} where $$e_i$$ is a local bound for the consistency error. Combining this with Proposition 4.8 (continuous dependence on data) in $$\varOmega _{h,\delta }$$ gives \begin{equation*} \max_{\varOmega_{h,\delta}} d_\varepsilon^- \lesssim \delta \left( \sum_{x_i\in\mathcal{C}_-^\delta(d_\varepsilon)} \left( \left(T_\varepsilon[\mathcal I_h u](x_i) + e_i\right)^{\frac{1}{d}} - \left( T_\varepsilon[\mathcal I_h u](x_i) + \sigma^d \right)^{\frac{1}{d}} \right)^d\right)^{\frac{1}{d}}.
\end{equation*} Since $$\mathcal I_h u$$ is discretely convex, we apply Lemma 2.2 (discrete convexity) to deduce $$T_\varepsilon [\mathcal I_h u](x_i)\ge 0$$ and next argue as in Step 1 to obtain \begin{equation*} \mathcal I_h u - u_{\varepsilon} \lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} \delta + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right). \end{equation*} Step 3: Choice of $$\delta , \theta $$ and $$\sigma $$. Since $$\|u-\mathcal I_h u\|_{L^{\infty }(\varOmega _h)}\leq C|u|_{W^2_{\infty }(\varOmega )} h^2$$, combining Steps 1 and 2 yields \begin{align*} \|u_{\varepsilon} - u\|_{L^\infty(\varOmega_h)} &\lesssim \sigma + |u|_{W^2_{\infty}(\varOmega)} (\delta + h^2) \\ &\quad + \sigma^{1-d} \ \frac{\delta}{h} \left( C_1(u) \delta^{s-2} + C_2(u) \ \left( \delta^{1/d} + \frac{h^2}{\delta^2} + \theta^2 \right) \right) . \end{align*} We now minimize the right-hand side upon choosing $$\delta , \theta $$ and $$\sigma $$ suitably with respect to h. We first recall the definition of $$\beta $$ and choose $$\delta $$ and $$\theta $$ as in Theorem 5.5. At this stage it remains only to find $$\sigma $$ upon solving \begin{equation*} \sigma = C_2(u) \sigma^{1-d} \frac{h}{\delta} = C \sigma^{1-d} |u|_{W^2_{\infty}(\varOmega)}^d R(u)^{-1} \ h^{1-\frac{2}{\beta}}, \end{equation*} which leads to \begin{equation*} \sigma = |u|_{W^2_{\infty}(\varOmega)} R(u)^{-\frac{1}{d}} \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)}. \end{equation*} Since $$\beta>2$$ and $$h \le 1$$ we get $$h^2 + \delta \le \left (1 + R(u)\right ) h^{\frac{2}{\beta }}$$ and \begin{equation*} \|u_{\varepsilon} - u\|_{L^\infty(\varOmega_h)} \lesssim |u|_{W^2_{\infty}(\varOmega)} (1 + R(u)) h^{\frac{2}{\beta}} + |u|_{W^2_{\infty}(\varOmega)} R(u)^{-\frac{1}{d}} \ h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)}. \end{equation*} This yields the asserted estimate and finishes the proof.
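As a side remark (ours, not part of the original argument), one can check directly that the two contributions in the final bound are ordered: since $$\beta \le 2+\frac{1}{d} = \frac{2d+1}{d}$$,

```latex
\frac{1}{d}\left(1-\frac{2}{\beta}\right)
\;\le\; \frac{1}{d}\cdot\frac{1}{2d+1}
\;<\; \frac{2d}{2d+1}
\;\le\; \frac{2}{\beta},
```

so for $$h \le 1$$ the barrier term $$R(u)^{-\frac{1}{d}} h^{\frac{1}{d}\left(1-\frac{2}{\beta}\right)}$$ dominates $$h^{\frac{2}{\beta}}$$ as $$h \to 0$$ and dictates the overall rate in Theorem 5.6.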
Theorem 5.6 is an extension of Theorem 5.5 to the degenerate case f ≥ 0, but the same techniques and estimates extend as well to Theorems 5.3 and 5.4. We stress that Theorems 5.5 and 5.6 correspond to nonclassical viscosity solutions that are of class $$W^{2}_{\infty }(\varOmega )$$. In order to deal with discontinuous Hessians and degenerate right-hand sides, we rely on techniques that give rise to reduced rates. For Theorem 5.5 we obtain rates that depend on the space dimension, whereas for Theorem 5.6 we resort to a regularization procedure that leads to further reduction of the rates. Although the derived estimates are suboptimal with respect to the computational rates observed in the study by Nochetto et al. (2017), we wish to emphasize that Theorem 5.6 is, to our knowledge, the only error estimate available in the literature that deals with degenerate right-hand sides. 6. Conclusions In this paper we extend the analysis of the two-scale method introduced in the study by Nochetto et al. (2017). We derive continuous dependence of discrete solutions on data and use it to prove rates of convergence in the $$L^{\infty }$$-norm in the computational domain $$\varOmega _h$$ for four different cases. We first prove rates of order up to $$h^{1/2}$$ for smooth classical solutions with Hölder regularity. We then exploit the structure of the continuous dependence estimate of discrete solutions on data to derive error estimates for classical solutions with Sobolev regularity, thereby achieving the same rates under weaker regularity assumptions. In a more general scenario, we derive error estimates for viscosity solutions with discontinuous Hessian across a surface with appropriate smoothness, but otherwise possessing piecewise Sobolev regularity. Lastly, we use an interior barrier function that allows us to remove the nondegeneracy assumption f > 0 at the cost of a reduced rate that depends on dimension. 
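To make these rate reductions concrete, the following small computation (our illustration, not from the paper) evaluates the exponents $$1-\frac{2}{\beta}$$ of Theorem 5.5 and $$\frac{1}{d}\big(1-\frac{2}{\beta}\big)$$ of Theorem 5.6, with $$\beta=\min\big\{s,2+\frac{1}{d}\big\}$$, in small dimensions; the function names are ours:

```python
from fractions import Fraction

def beta(s, d):
    # beta = min{s, 2 + 1/d} from Theorems 5.5 and 5.6
    return min(Fraction(s), 2 + Fraction(1, d))

def rate_piecewise(s, d):
    # Theorem 5.5 (f >= f_0 > 0): exponent 1 - 2/beta
    return 1 - 2 / beta(s, d)

def rate_degenerate(s, d):
    # Theorem 5.6 (f >= 0): exponent (1/d) * (1 - 2/beta)
    return Fraction(1, d) * rate_piecewise(s, d)

# for s >= 2 + 1/d the exponents saturate at beta = 2 + 1/d
for d in (2, 3):
    print(d, rate_piecewise(3, d), rate_degenerate(3, d))
# d = 2: exponents 1/5 and 1/10;  d = 3: exponents 1/7 and 1/21
```

For d = 2 this predicts only $$h^{1/10}$$ in the degenerate case, which quantifies the gap with the linear rates observed computationally in the study by Nochetto et al. (2017).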
Our theoretical predictions are suboptimal with respect to the linear rates observed experimentally in the study by Nochetto et al. (2017) for a smooth classical solution and a piecewise smooth viscosity solution with degenerate right-hand side f ≥ 0. This can be attributed to the fact that the continuous dependence estimate of discrete solutions on data introduces a factor $$\frac{\delta }{h} \gg 1$$ in the error estimates. This feature is similar to the discrete Alexandroff–Bakelman–Pucci (ABP) estimate developed in Kuo & Trudinger (2000) and is the result of using sets of measure $$\approx \delta ^d$$ instead of $$\approx h^d$$ to approximate subdifferentials. In a forthcoming paper we will tackle this issue and connect our two-scale method with that of Feng & Jensen (2016).

Funding

National Science Foundation grant DMS-1411808 (to R.H.N., D.N. and W.Z.); the Institut Henri Poincaré (Paris) (to R.H.N.); the Hausdorff Institute (Bonn) (to R.H.N.); the 2016–2017 Patrick and Marguerite Sung Fellowship of the University of Maryland (to D.N.); the Brin Postdoctoral Fellowship of the University of Maryland (to W.Z.).

References

Awanou, G. (2016) Convergence rate of a stable, monotone and consistent scheme for the Monge–Ampère equation. Symmetry, 8, 1–7.
Brenner, S. C., Gudi, T., Neilan, M. & Sung, L.-Y. (2011) $$C^0$$ penalty methods for the fully nonlinear Monge–Ampère equation. Math. Comput., 80, 1979–1995.
Brenner, S. C. & Neilan, M. (2012) Finite element approximations of the three dimensional Monge–Ampère equation. ESAIM Math. Model. Numer. Anal., 46, 979–1001.
Brenner, S. C. & Scott, R. (2008) The Mathematical Theory of Finite Element Methods. New York: Springer.
Feng, X. & Jensen, M. (2016) Convergent semi-Lagrangian methods for the Monge–Ampère equation on unstructured grids. arXiv:1602.04758v2.
Froese, B. & Oberman, A. (2012) Convergent finite difference solvers for viscosity solutions of the elliptic Monge–Ampère equation in dimensions two and higher. SIAM J. Numer. Anal., 49, 1692–1714.
Giusti, E. (2003) Direct Methods in the Calculus of Variations. Singapore: World Scientific.
Gutiérrez, C. (2001) The Monge–Ampère Equation. Basel: Birkhäuser.
Krylov, N. V. (1987) Nonlinear Elliptic and Parabolic Equations of the Second Order. Netherlands: Springer.
Kuo, H.-J. & Trudinger, N. S. (2000) Special issue of the Proceedings of the 1999 International Conference on Nonlinear Analysis (Taipei). Taiwanese J. Math., 4, 1–8.
Lions, P. L. (1983) Hamilton–Jacobi–Bellman equations and the optimal control of stochastic systems. Proc. Int. Congress of Math., Warsaw, 2, 1403–1417.
Nochetto, R. H., Ntogkas, D. & Zhang, W. (2017) Two-scale method for the Monge–Ampère equation: convergence to the viscosity solution. Math. Comput. (in press), arXiv:1706.06193.
Nochetto, R. H. & Zhang, W. (2017) Discrete ABP estimate and convergence rates for linear elliptic equations in non-divergence form. Found. Comput. Math., https://doi.org/10.1007/s10208-017-9347-y.
Nochetto, R. H. & Zhang, W. (2016) Pointwise rates of convergence for the Oliker–Prussner method for the Monge–Ampère equation. arXiv:1611.02786.
Oliker, V. I. & Prussner, L. D. (1988) On the numerical solution of the equation $$(\partial ^2 z/\partial x^2)(\partial ^2z/\partial y^2)-(\partial ^2z/\partial x \partial y)^2 = f$$ and its discretizations I. Numer. Math., 54, 271–293.
Walker, S. W. (2018) FELICITY: A MATLAB/C++ toolbox for developing finite element methods and simulation modeling. SIAM J. Sci. Comput., 40, C234–C257.
Walker, S. W. (2018) FELICITY: Finite ELement Implementation and Computational Interface Tool for You. Available at http://www.mathworks.com/matlabcentral/fileexchange/31141-felicity.

Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications 2018. This work is written by (a) US Government employee(s) and is in the public domain in the US.

IMA Journal of Numerical Analysis, Oxford University Press. Published: May 8, 2018.