Uniform convergence of a multigrid method for elliptic equations with anisotropic coefficients

Abstract

We prove the uniform convergence of the V-cycle multigrid method with line Gauss–Seidel iteration as its smoother for anisotropic elliptic equations. We define a rectified projection operator, based on piecewise energy norm projection, to decompose functions into different levels. Using the Xu–Zikatanov identity, we show that the convergence rate is independent of the mesh size, the number of levels and the coefficients of the equation. The main improvement of this paper is that we remove the restriction from previous works that the domain must be convex, and we prove the uniform convergence of the multigrid method for problems defined on domains composed of finitely many rectangles, such as L-shape domains.

1. Introduction

The multigrid method has been proposed as an efficient way to solve elliptic equations (see Bank & Dupont, 1981). When the equation is anisotropic, line Gauss–Seidel iteration has been selected as the smoother (see Arms & Zondek, 1956; Cuthill & Varga, 1959). In this article, we are concerned with the convergence analysis of this multigrid method for anisotropic problems.

An early framework used to analyze the convergence of general multigrid methods was proposed by Braess and Hackbusch (see, e.g., Braess & Hackbusch, 1983; Hackbusch, 1985), and it was later extended by Bramble and Pasciak in Bramble & Pasciak (1987). This approach relies on the ‘regularity and approximation’ assumption. Within this framework, for anisotropic elliptic equations, Stevenson proved the uniform convergence of the multigrid methods with the ‘W’-cycle and ‘V’-cycle (see Stevenson, 1993). Then, Bramble and Zhang extended this result to problems with variable coefficients defined on rectangular domains (see Bramble & Zhang, 2001). Neuss, Bramble and Zhang proved the uniform convergence of the ‘V’-cycle multigrid method for anisotropic elliptic equations (see Neuss, 1998; Bramble & Zhang, 2001). However, all the proofs mentioned above depend on the $$H^{2}$$ regularity of the solution.

Another framework for general multigrid methods was proposed by Bramble, Pasciak, Wang and Xu in Bramble et al. (1991). In 2002, Xu and Zikatanov proposed the XZ identity (see Xu & Zikatanov, 2002), which provides a new framework to prove the convergence of general multigrid methods. In line with this approach, for anisotropic elliptic equations, Wu et al. (2012) analyzed multigrid methods based on two mesh coarsening strategies: uniform coarsening and semicoarsening. For the case of semicoarsening, they proved the uniform convergence of the algorithm; for uniform coarsening, however, the convergence rate they obtained depends on the number of mesh levels.

In this paper, we consider the multigrid method for anisotropic problems. We give an example of the L-shape domain (without $$H^{2}$$ regularity) to show the uniform convergence for uniform coarsening. Note that in general, $$H^{2}$$ regularity cannot be reached for problems defined on concave domains (see Mikhail & Sungwon, 2007); thus, we must obtain a proof that does not depend on the global $$H^{2}$$ regularity of the problem. Such is the case with our method in this article, and indeed, our proof extends easily to other domains that are assembled from finitely many rectangles. We apply the XZ identity and prove that the value K in the XZ identity is uniformly bounded.
The framework of the estimate of K is roughly similar to that in the studies by Xu (1992) and Lee et al. (2008), but many details must be modified to adapt to the anisotropic problem. A key aspect is the choice of the projection operator. The projection operator is used to decompose a function into different levels, and it plays an important role in our proof. In the studies by Bramble et al. (1991) and Lee et al. (2008), the $$L^{2}$$ projection operator is chosen because the lack of $$H^{2}$$ regularity can thereby be overcome. However, it is difficult for the $$L^{2}$$ projection to overcome the anisotropy, while the (global) energy norm projection cannot overcome the lack of $$H^{2}$$ regularity. On each rectangular subdomain, though, the $$H^{2}$$ regularity is guaranteed. This fact inspired the idea of divide and conquer. In this paper, we propose a new method to construct the projection operator. The new projection operator is based on the local energy norm projection on each rectangular subdomain; we then piece the local projections together in a suitable way. We use this new rectified energy projection operator to decompose functions into different levels and prove the uniform convergence of the multigrid method.

The rest of this paper is organized as follows. In Section 2, we introduce the model problem that we shall consider and outline the line Gauss–Seidel multigrid method. In Section 3, we introduce the rectified energy norm projection operator and use the XZ identity to deduce the proof of uniform convergence.

2. Model problem and multigrid settings

2.1 Model problem

Elliptic partial differential equations appear in many important practical problems, such as equilibrium problems in elasticity, irrotational motion of nonviscous fluids, potential problems, temperature distribution problems and diffusion problems (see Iorio & Magalhes, 2001). In many actual problems, the physical features are anisotropic. In the corresponding partial differential equations, the anisotropy usually leads to different coefficients in different directions. In this paper, we consider the following model problem:   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega,\\ u|_{\partial \varOmega} = 0, \end{cases} \end{align} (2.1)where $$\varepsilon \in (0,\ 1)$$ is a constant, $$\varOmega $$ is a bounded domain in $$\mathbb{R}^{2}$$ and $$f \in L^{2}(\varOmega )$$. To illustrate our algorithm and theoretical analysis, in this paper, we focus on the case where $$\varOmega $$ is an L-shape domain.

The solution of equation (2.1) exists uniquely, and we can solve it by the finite difference or finite element method. In actual computation, the condition number of the corresponding linear system increases quickly when the mesh is refined, which makes the problem difficult to solve. To overcome this disadvantage, multigrid methods are widely used. For isotropic problems, ordinary iterations such as the Gauss–Seidel, Jacobi and SOR iterations are used as smoothers in multigrid methods, and all these methods work well, namely, converge uniformly. However, when the anisotropic problem (2.1) is considered, none of the multigrid methods mentioned above leads to a convergence rate that is independent of the coefficient $$\varepsilon $$. To avoid the influence of $$\varepsilon $$ on the convergence rate, line Gauss–Seidel iteration has been used (see, e.g., Arms & Zondek, 1956; Cuthill & Varga, 1959).
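To make the line smoother concrete before we fix notation, here is a minimal finite-difference sketch of one sweep (our own illustration, with a five-point stencil standing in for the bilinear element discretization introduced in Section 2.2): for each vertical grid line, all unknowns on that line are updated simultaneously by solving one tridiagonal system.

```python
import numpy as np

def solve_tridiag(lo, d, up, rhs):
    """Thomas algorithm for the constant-coefficient tridiagonal system
    lo*x[j-1] + d*x[j] + up*x[j+1] = rhs[j]."""
    n = rhs.size
    c = np.empty(n)
    x = np.empty(n)
    c[0] = up / d
    x[0] = rhs[0] / d
    for j in range(1, n):
        m = d - lo * c[j - 1]
        c[j] = up / m
        x[j] = (rhs[j] - lo * x[j - 1]) / m
    for j in range(n - 2, -1, -1):
        x[j] -= c[j] * x[j + 1]
    return x

def line_gs_sweep(u, f, eps, hx, hy, reverse=False):
    """One line Gauss-Seidel sweep for the five-point discretization of
    -eps*u_xx - u_yy = f with zero Dirichlet boundary values.
    u[j, i] is the unknown at the j-th node of the i-th vertical line;
    each column update is an exact solve on that line, i.e. the subspace
    correction A_{k,i}^{-1} Q_{k,i} r in the notation of Section 2.2."""
    ny, nx = u.shape
    cx, cy = eps / hx**2, 1.0 / hy**2
    cols = range(nx - 1, -1, -1) if reverse else range(nx)
    for i in cols:
        left = u[:, i - 1] if i > 0 else 0.0
        right = u[:, i + 1] if i < nx - 1 else 0.0
        rhs = f[:, i] + cx * (left + right)
        u[:, i] = solve_tridiag(-cy, 2*cx + 2*cy, -cy, rhs)
    return u
```

For small $$\varepsilon $$, the strong coupling in the y direction is inverted exactly by each tridiagonal solve; this is why the sweep remains an effective smoother uniformly in $$\varepsilon $$, whereas pointwise Gauss–Seidel does not.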
It should be noted that the term ‘uniform convergence’ in the rest of the paper means that the convergence rate is independent of the mesh size, the number of levels and the coefficient $$\varepsilon $$.

2.2 Bilinear element and subspace settings

We analyze problem (2.1) by its variational form: find $$u\in{H_{0}^{1}}(\varOmega )$$ such that   \begin{align} \varepsilon \left(\frac{\partial u}{\partial x},\frac{\partial v}{\partial x}\right)+ \left(\frac{\partial u}{\partial y},\frac{\partial v}{\partial y}\right)= (\,f,v),\quad \forall v\in{H_{0}^{1}}(\varOmega), \end{align} (2.2)where (⋅, ⋅) denotes the $$L^{2}$$ inner product on $$\varOmega $$. We also introduce the energy inner product a(⋅, ⋅) as   $$ a(u,v)=\varepsilon \left(\frac{\partial u}{\partial x},\frac{\partial v}{\partial x}\right)+ \left(\frac{\partial u}{\partial y},\frac{\partial v}{\partial y}\right), $$and then problem (2.2) can be briefly written as: find $$u\in{H_{0}^{1}}(\varOmega )$$ such that   $$ a(u,v)= (\,f,v),\quad \forall v\in{H_{0}^{1}}(\varOmega). $$

The energy inner product a can also be defined on other domains. In general, we denote our energy inner product on a domain $$\omega $$ by $$(\cdot ,\cdot )_{a,\omega }$$. Note that when $$\omega =\varOmega $$, we have $$(\cdot ,\cdot )_{a,\varOmega }=a(\cdot ,\cdot )$$. Moreover, we denote the energy norm induced by this energy inner product by $$\Vert \cdot \Vert _{a,\omega }$$, i.e., $$\Vert v \Vert _{a,\omega }^{2}=(v,v)_{a,\omega }$$. When the domain $$\omega =\varOmega $$, we omit $$\varOmega $$ in $$\Vert \cdot \Vert _{a,\varOmega }$$ and use the notation $$\Vert \cdot \Vert _{a}$$. Similarly, we denote by $$\Vert \cdot \Vert _{0,\omega }$$ and $$\Vert \cdot \Vert _{0}$$ the $$L^{2}$$ norm on $$\omega $$ and $$\varOmega $$, respectively.

We use bilinear finite elements on rectangular meshes to discretize equation (2.2). Suppose that there is a sequence of nested rectangular partitions $${\mathscr T}_{k}\,(0\leqslant k\leqslant J)$$ of $$\varOmega $$, where the edges of the elements are all parallel to the coordinate axes. We denote the side lengths parallel to the x and y directions of each element in $${\mathscr T}_{k}$$ by $${h_{k}^{x}}$$ and $${h_{k}^{y}}$$, respectively. In this article, we consider the situation where the meshes are refined uniformly, i.e.,   $$ \frac{{h_{k}^{x}}}{h_{k-1}^{x}}=\frac{{h_{k}^{y}}}{h_{k-1}^{y}}=\gamma^{2}\in(0,1). $$

In the kth level, suppose there are $$N_{k}+2$$ grid lines parallel to the y axis, and we denote them from left to right by $$L_{k,0},L_{k,1},\ldots ,L_{k,N_{k}+1}$$. The number of nodes on $$L_{k,i}$$ is denoted by $$N_{k,i}$$, and these nodes are labeled from bottom to top. Then, we denote by $$\phi _{k,i}^{j}$$ the nodal basis function corresponding to the jth node on $$L_{k,i}$$. With the above settings, the ranges of the indices are k = 0, 1, … , J, $$i=0,1,\ldots ,N_{k}+1$$ and $$j=1,2,\ldots ,N_{k,i}$$. Now, we introduce the finite element space $$V_{k}$$ according to the rectangular partition $${\mathscr T}_{k}$$ as   $$ V_{k}=\{v\in C(\overline{\varOmega}):v|_{T}\in Q_{1}(T),\,\forall T \in{\mathscr T}_{k};\, v(P)=0\ \textrm{if}\ P \in \partial\varOmega\}, $$and we then obviously have $$V_{0}\subset V_{1}\subset \ldots \subset V_{J}$$. Moreover, we have $$V_{k}\subset{H_{0}^{1}}(\varOmega )$$ (see Shi & Wang, 2016). To perform line Gauss–Seidel iterations, we introduce the subspaces and subdomains related to the ‘lines’.
Define the subspace $$V_{k,i}=V_{k}\cap\textrm{span}\left \{\phi _{k,i}^{1},\ldots,\phi _{k,i}^{N_{k,i}}\right \}$$ and the subdomain   $$\varOmega _{k,i}=\left \{(x,y)\in \varOmega :(i-1){h_{k}^{x}}<x<(i+1){h_{k}^{x}}\right \},$$where $$i=1,2,\ldots,N_{k}$$. Then we have $$\textbf{supp}\, v_{k,i}\subset \varOmega _{k,i}$$, $$\forall v_{k,i}\in V_{k,i}$$. Notice that in our algorithm, the ‘lines’ are vertical.

We then introduce $$P_{k}\,:\, V_{J}\rightarrow V_{k}$$, $$Q_{k}\,:\, V_{J}\rightarrow V_{k}$$ and an auxiliary operator $$A_{k}\,:\, V_{k}\rightarrow V_{k}$$ as follows:   \begin{align} (P_{k}v,\varphi)_{a,\varOmega} =(v,\varphi)_{a,\varOmega}, \quad\forall\varphi\in V_{k}, \end{align} (2.3)  \begin{align} (Q_{k}v,\varphi) =(v,\varphi), \quad \forall\varphi\in V_{k}, \end{align} (2.4)  \begin{align} (A_{k}v,\varphi) =(v,\varphi)_{a,\varOmega},\quad \forall\varphi\in V_{k}. \end{align} (2.5)$$P_{k}$$ and $$Q_{k}$$ are the projections with respect to the energy norm and the $$L^{2}$$ norm, respectively. Similarly, on the subspace $$V_{k,i}$$ with $$k=0,1,\ldots,J$$ and $$i=1,2,\ldots,N_{k}$$, we introduce the corresponding projections $$P_{k,i}\,:\, V_{J}\rightarrow V_{k,i}$$, $$Q_{k,i}\,:\, V_{J}\rightarrow V_{k,i}$$ and $$A_{k,i}\,:\, V_{k,i}\rightarrow V_{k,i}$$ as follows:   \begin{align} (P_{k,i}v,\varphi)_{a,\varOmega} =(v,\varphi)_{a,\varOmega},\quad \forall\varphi\in V_{k,i}, \end{align} (2.6)  \begin{align} (Q_{k,i}v,\varphi) =(v,\varphi), \quad\quad\; \forall\varphi\in V_{k,i}, \end{align} (2.7)  \begin{align} (A_{k,i}v,\varphi) =(v,\varphi)_{a,\varOmega}, \quad \forall\varphi\in V_{k,i}. \end{align} (2.8)

According to the definitions of these projection operators and the inclusion relationships between the subspaces, the following equations hold if $$0 \leqslant k \leqslant l \leqslant J $$:   \begin{align} P_{k}P_{l}=P_{l}P_{k}=P_{k},\quad P_{k,i}P_{l}=P_{l}P_{k,i}=P_{k,i}, \end{align} (2.9)  \begin{align}\;\, Q_{k}Q_{l}=Q_{l}Q_{k}=Q_{k},\quad Q_{k,i}Q_{l}=Q_{l}Q_{k,i}=Q_{k,i}. \end{align} (2.10)Moreover, $$\forall u\in V_{J},\; v_{k}\in V_{k},\; v_{k,i}\in V_{k,i}$$, we have   $$ \left(A_{k}P_{k}u,v_{k}\right)=(P_{k}u,v_{k})_{a,\varOmega}=(u,v_{k})_{a,\varOmega}=(A_{J}u,v_{k})= (Q_{k}A_{J}u,v_{k}), $$  $$ (A_{k,i}P_{k,i}u,v_{k,i})=(P_{k,i}u,v_{k,i})_{a,\varOmega}=(u,v_{k,i})_{a,\varOmega}=(A_{J}u,v_{k,i})=(Q_{k,i}A_{J}u,v_{k,i}). $$Thus, we obtain the following two important relationships between these operators:   \begin{align} A_{k}P_{k}=Q_{k}A_{J},\quad A_{k,i}P_{k,i}=Q_{k,i}A_{J},\quad \forall 0\leqslant k\leqslant J,\,1\leqslant i\leqslant N_{k}. \end{align} (2.11)

2.3 Multigrid method based on line Gauss–Seidel iteration

The Galerkin problem that we need to solve on the finite element space can be described as follows: find $$u\in V_{J}$$ such that   $$ a(u,v)=(\,f,v),\quad\forall v\in V_{J}. $$By the auxiliary operator $$A_{J}$$ defined in Section 2.2 (equation (2.5)), we have   $$ (A_{J}u,v)=a(u,v)=(\,f,v). $$Since the above equation holds for any $$v\in V_{J}$$, we obtain   \begin{align} A_{J}u=Qf. \end{align} (2.12)This operator equation corresponds to our finite element problem, where $$Q\,:\, L^{2}(\varOmega )\rightarrow V_{J}$$ is the $$L^{2}$$ projection from $$L^{2}(\varOmega )$$ to $$V_{J}$$.
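As a sanity check on the operator relationships (2.11), one can verify them numerically in matrix form. The following sketch does so in a simplified 1D setting with linear elements (our own simplification of the 2D bilinear setting; $$A$$, $$M$$ and $$E$$ below denote the standard stiffness, mass and inclusion matrices, and the coarse matrices are the Galerkin products):

```python
import numpy as np

def p1_matrices(n, h):
    """Stiffness and mass matrices for 1D linear elements, n interior nodes."""
    I, S = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
    return (2*I - S) / h, (4*I + S) * h / 6

nc = 7                        # coarse interior nodes
nf = 2*nc + 1                 # fine interior nodes after one refinement
Af, Mf = p1_matrices(nf, 1.0/(nf + 1))
E = np.zeros((nf, nc))        # coarse nodal basis written in the fine basis
for j in range(nc):
    E[2*j, j], E[2*j + 1, j], E[2*j + 2, j] = 0.5, 1.0, 0.5
Ac, Mc = E.T @ Af @ E, E.T @ Mf @ E      # Galerkin coarse-level matrices

c = np.random.rand(nf)                   # coefficients of some u in V_J
pk = np.linalg.solve(Ac, E.T @ (Af @ c))     # P_k u (energy projection, (2.3))
lhs = np.linalg.solve(Mc, Ac @ pk)           # coefficients of A_k P_k u
rhs = np.linalg.solve(Mc, E.T @ (Af @ c))    # Q_k A_J u: the fine mass matrix cancels
assert np.allclose(lhs, rhs)                 # (2.11): A_k P_k = Q_k A_J
```

Here the identity is, of course, algebraically immediate ($$M_{c}^{-1}A_{c}A_{c}^{-1}E^{\mathsf{T}}A_{f}=M_{c}^{-1}E^{\mathsf{T}}A_{f}$$); the check merely confirms the bookkeeping behind (2.3)–(2.5) and (2.11).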
Generally, if the initial guess $$u^{0}$$ has been chosen, the iteration method used to solve the above operator equation can be described as follows:   \begin{align} u^{l+1}=u^{l}+B_{J}\big(Qf-A_{J}u^{l}\big), \end{align} (2.13)where $$B_{J}$$ is an approximation of $$A_{J}^{-1}$$ determined by the particular iterative algorithm. Algorithm 2.1 defines $$B_{J}$$ for the multigrid method whose smoother is the line Gauss–Seidel iteration. In practice, Algorithm 2.1 is usually used recursively. On each level, the iteration contains three steps: pre-smoothing, correction and post-smoothing. The advantage of the multigrid method is its effectiveness on error components of almost all frequencies, which ensures rapid convergence. In the next section, we analyze our algorithm theoretically. Our aim is to prove the uniform convergence of Algorithm 2.1.

3. Convergence analysis

In this section, we first introduce and analyze the error transfer operator; the uniform convergence is then proved through this operator. Several preliminary lemmas are established along the way. In the following, we use the symbol ‘$$\lesssim $$’: $$x\lesssim y$$ means $$x \leqslant Cy $$, where C is a constant independent of the domain size, the mesh width h and the anisotropic coefficient $$\varepsilon $$.

Without loss of generality, we assume $$h_{k}={h_{k}^{x}}={h_{k}^{y}}$$. If $${h_{k}^{x}}\neq{h_{k}^{y}}$$, we can dilate the x-coordinate so that $$h_{k}^{\hat{x}}={h_{k}^{y}}$$. Although this dilation changes the size of the domain $$\varOmega $$ and the anisotropic coefficient $$\varepsilon $$, it does not affect our method of proof.

3.1 Error transfer operator

Assume $$u\in V_{J}$$ is the solution of $$A_{J}u=Qf$$; by (2.13) we have   $$u^{l+1}=u^{l}+B_{J}\big(Qf-A_{J}u^{l}\big)=u^{l}+B_{J}A_{J}\big(u-u^{l}\big). $$Subtracting the above equation from u, we obtain   \begin{align} u-u^{l+1}=u-u^{l}-B_{J}A_{J}\big(u-u^{l}\big)=(I-B_{J}A_{J})\big(u-u^{l}\big). \end{align} (3.1)

We denote by $$E_{J}\triangleq I-B_{J}A_{J}$$ the error transfer operator. From (3.1), we see that the convergence of our algorithm is governed by the norm of this error transfer operator; therefore, our aim is to estimate the norm of $$E_{J}$$. Now, we derive the specific formula of $$E_{J}$$ for our concrete algorithm. For every $$w\in V_{J}$$, let $$r=A_{k}P_{k}w\in V_{k}$$, and substitute it into Algorithm 2.1; we have   $$w^{i}=w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}\left(A_{k}P_{k}w-A_{k}w^{i-1}\right),\qquad 1\leqslant i\leqslant N_{k}. $$Since $$w^{i}\in V_{k}$$ and $$P_{k}w^{i}=w^{i}\, (1\leqslant i\leqslant N_{k})$$, the above equation can be transformed into   \begin{align*} w^{i} & =w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}\left(A_{k}P_{k}w-A_{k}P_{k}w^{i-1}\right)\\ & =w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}A_{k}P_{k}\left(w-w^{i-1}\right). \end{align*}Recalling equation (2.11), we obtain   $$A_{k,i}^{-1}Q_{k,i}A_{k}P_{k}=A_{k,i}^{-1}Q_{k,i}Q_{k}A_{J}=A_{k,i}^{-1}Q_{k,i}A_{J}= A_{k,i}^{-1}A_{k,i}P_{k,i}=P_{k,i}; $$then,   \begin{align} w-w^{i}=\big(I-P_{k,N_{k}+1-i}\big)\big(w-w^{i-1}\big),\qquad 1\leqslant i\leqslant{N_{k}}. \end{align} (3.2)Similarly, we can deduce   \begin{align} w-w^{N_{k}+1} =(I-B_{k-1}A_{k-1}P_{k-1})\big(w-w^{N_{k}}\big), \end{align} (3.3)  \begin{align}\qquad\; w-w^{N_{k}+i+1} =(I-P_{k,i})\big(w-w^{N_{k}+i}\big),\qquad 1\leqslant i\leqslant N_{k}. \end{align} (3.4)
By (3.2), (3.3) and (3.4), we have   \begin{align*} w-B_{k}r =&\,w-w^{2N_{k}+1}\\ = &\,(I-P_{k,N_{k}})\ldots (I-P_{k,1})(I-B_{k-1}A_{k-1}P_{k-1})(I-P_{k,1})\ldots (I-P_{k,N_{k}})w. \end{align*}Now, we define $$T_{k}\triangleq (I-P_{k,N_{k}})\ldots (I-P_{k,1})$$, where k = 1, 2, … , J. Noting that $$r=A_{k}P_{k}w$$, with the notation $$T_{k}$$ the above equation can be transformed into   \begin{align} (I-B_{k}A_{k}P_{k})w=T_{k}(I-B_{k-1}A_{k-1}P_{k-1})T_{k}^{*}w,\qquad \forall w\in V_{J}. \end{align} (3.5)Finally, using the above formula recursively, we have   \begin{align} \begin{aligned} E_{J} & =I-B_{J}A_{J}=I-B_{J}A_{J}P_{J}\\ & =T_{J}T_{J-1}\ldots T_{1}(I-B_{0}A_{0}P_{0})T_{1}^{*}\ldots T_{J-1}^{*}T_{J}^{*}\\ & =T_{J}T_{J-1}\ldots T_{1}(I-P_{0})T_{1}^{*}\ldots T_{J-1}^{*}T_{J}^{*}. \end{aligned} \end{align} (3.6)Furthermore, let $${E_{J}^{N}}\triangleq T_{J}T_{J-1}\ldots T_{1}(I-P_{0})$$, and note that $$I-P_{0}=(I-P_{0})\left (I-P_{0}^{*}\right )$$; we obtain   $$E_{J}={E_{J}^{N}}\left({E_{J}^{N}}\right)^{*}. $$

‘/’ and ‘∖’ types of multigrid algorithm: sometimes, we may perform Algorithm 2.1 incompletely. If we omit the pre-smoothing step (the ‘/’ type of multigrid algorithm), then $${E_{J}^{N}}$$ is the error transfer operator; if we omit the post-smoothing step (the ‘∖’ type), $$\left ({E_{J}^{N}}\right )^{*}$$ is the error transfer operator. By the properties of adjoint operators (see Yosida, 1964), we have   $$\left\Vert{E_{J}^{N}}\right\Vert{}_{a}=\left\Vert \left({E_{J}^{N}}\right)^{\ast}\right\Vert{}_{a}=\left\Vert{E_{J}^{N}}\left({E_{J}^{N}}\right)^{\ast}\right\Vert{}_{a}^{1/2}=\left\Vert E_{J}\right\Vert{}_{a}^{1/2}.$$Thus, we just need to estimate $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$, regardless of which variant (‘/’-cycle, ‘∖’-cycle or ‘V’-cycle) is used.

3.2 The XZ identity and the framework of the proof

In order to estimate $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$, we introduce the XZ identity proposed by Xu and Zikatanov (see, e.g., Xu & Zikatanov, 2002; Lee et al., 2008; Chen, 2011).

Theorem 3.1 (XZ identity). Let V be a Hilbert space with inner product $$(\cdot ,\cdot )_{a}$$, and let $$V_{i}\subset V\,(i=1,2,\ldots ,J)$$ be closed subspaces of V satisfying $$V=\sum _{i=1}^{J}V_{i}$$. Let $$P_{i}\,:\,V\mapsto V_{i}$$ be the orthogonal projection with respect to $$(\cdot ,\cdot )_{a}$$. Then, the following identity holds:   \begin{align} \left\Vert (I-P_{J})\ldots (I-P_{1})\right\Vert{}_{a}^{2}=1-\frac{1}{K}, \end{align} (3.7)where   \begin{align} K=\sup_{v\in V}\inf_{\sum_{i=1}^{J} v_{i}=v} \frac{\sum_{i=1}^{J}\left\Vert P_{i}\sum_{j=i}^{J}v_{j}\right\Vert{}_{a}^{2}}{\Vert v\Vert_{a}^{2}}. \end{align} (3.8)

When we apply the subspace decomposition   $$V=V_{0}+\sum_{k=1}^{J}\sum_{i=1}^{N_{k}}V_{k,i}$$in Theorem 3.1, $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$ satisfies   \begin{align} \left\Vert{E_{J}^{N}}\right\Vert{}_{a}^{2}=1-\frac{1}{K}, \end{align} (3.9)where   \begin{align} K=\sup_{v\in V}\inf_{v=v_{0}+\sum_{k=1}^{J}\sum_{i=1}^{N_{k}}v_{k,i}}\frac{\|P_{0} v\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}. \end{align} (3.10)In the above equation, $$v_{0}\in V_{0}$$, $$v_{k,i}\in V_{k,i}$$ and the notation (l, j) ⩾ (k, i) means   \begin{align} \sum_{(l,j)\geqslant(k,i)} v_{l,j} = \sum_{j = i}^{N_{k}} v_{k,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}} v_{l,j} . \end{align} (3.11)
Due to (3.9), our algorithm converges uniformly if K is bounded (in particular, independently of the number of levels J and the anisotropic coefficient $$\varepsilon $$). To show K is bounded, from (3.10), we need to prove the following proposition: $$\forall v\in V_{J}$$, there exists a decomposition of v on $$V_{k,i}$$, i.e., $$v=v_{0}+\sum _{k=1}^{J}\sum _{i=1}^{N_{k}}v_{k,i}$$, such that   \begin{align} \frac{\|P_{0} v\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}\leqslant C, \end{align} (3.12)where C is independent of J and $$\varepsilon $$.

To prove this proposition, we need to construct a decomposition of v. There are two steps. The first step is to decompose v into different levels as $$v=\sum _{k=0}^{J}v_{k}$$, and the second step is to decompose each $$v_{k}$$ within one level as $$v_{k}=\sum _{i=1}^{N_{k}}v_{k,i}$$. There are two usual ways to construct the decomposition between different levels: one is based on the $$L^{2}$$ projection $$Q_{k}$$, i.e., $$v_{k}=(Q_{k}-Q_{k-1})v$$, while the other is based on the energy projection $$P_{k}$$, i.e., $$v_{k}=(P_{k}-P_{k-1})v$$. For anisotropic elliptic equations, our analysis of the convergence of the algorithm relies mainly on the following inequalities:   \begin{align} \sum_{k=0}^{J} \|v_{k}\|_{a}^{2} \lesssim \|v\|_{a}^{2}, \end{align} (3.13)  \begin{align} \frac{\varepsilon}{{h_{k}^{2}}}\|v_{k}\|_{0}^{2} \lesssim \|v_{k}\|_{a}^{2}. \end{align} (3.14)Since $$Q_{k}$$ is the $$L^{2}$$ projection, it does not contain any information about the diffusion directions. Therefore, for an anisotropic elliptic equation, the decomposition $$v_{k}=\left (Q_{k}-Q_{k-1}\right )v$$ presents an essential difficulty under the a-norm that is hard to overcome. If the decomposition $$v_{k}=(P_{k}-P_{k-1})v$$ is used, we have $$\sum _{k=0}^{J} \|v_{k}\|_{a}^{2}=\|v\|_{a}^{2}$$ directly by the definition of $$P_{k}$$, so the inequality (3.13) holds naturally. To ensure the inequality (3.14), unfortunately, we need the solution to belong to $$H^{2}(\varOmega )$$ in this case. However, the $$H^{2}$$ regularity cannot be reached if the domain is nonconvex.

To overcome this difficulty, we divide a concave domain into several subdomains and consider the related problems defined on these subdomains. Of course, we need the $$H^{2}$$ regularity of the solutions on these subdomains. By analyzing the solutions on common boundaries, we can put the local results together. In practice, we first define the projection operators $${P_{k}^{m}}$$ on each subdomain. Second, by piecing them together, we define the global operator $$\tilde{P}_{k}$$ and then decompose v as $$v_{k}=(\tilde{P}_{k}-\tilde{P}_{k-1})v$$. Finally, we deduce the inequality (3.12) and thus prove the uniform convergence of the multigrid method.

3.3 Rectified energy projection

Without loss of generality, we illustrate the rectified energy projection on the L-shape domain; other domains composed of finitely many rectangles can be dealt with similarly. We choose the L-shape domain as our example because it is a classic nonconvex domain and consists of only two rectangles. For the domain $$\varOmega =(0,X) \times (0,Y)\backslash [x_{0},X) \times [y_{0},Y)$$, we define the rectified energy projection $$\tilde{P}_{k}$$ in what follows.
First, we divide the domain $$\varOmega $$ into two subdomains, $$\varOmega _{1}=(0, x_{0})\times (0, Y)$$ and $$\varOmega _{2}=(x_{0}, X)\times (0, y_{0})$$. The boundaries are defined as $$\varGamma _{1} = \{(x_{0},y)\,|\,0<y<Y\},\ \varGamma _{2} = \{(x_{0},y)\,|\,0<y<y_{0}\}$$ and $$\varGamma _{m}^{\prime} = \partial \varOmega _{m} \backslash \varGamma _{m}\;(m = 1, 2)$$. As shown in Fig. 1, $$\varGamma _{1},\ \varGamma _{2}$$ are the common edges of the subdomains, while $$\varGamma _{1}^{\prime},\ \varGamma _{2}^{\prime}$$ are not.

Fig. 1. Diagram of domain division.

We introduce auxiliary problems on the two subdomains as   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega_{m},\\ \partial_{\nu} u|_{\varGamma_{m}} = 0,\quad u|_{\varGamma_{m}^{\prime}} = 0, \end{cases} \qquad m = 1,2. \end{align} (3.15) The corresponding variational form of the above problems is to find $$u^{m}\in V^{m}$$ such that   $$(u^{m},v^{m})_{a,\varOmega_{m}} = (f,v^{m}),\quad \forall v^{m} \in V^{m}, $$where $$V^{m} = \{u\in H^{1}(\varOmega _{m}):\,u|_{\varGamma _{m}^{\prime }}=0 \}$$.

Now, we define $$\mathscr{T}_{k}^{m} = \mathscr{T}_{k} \cap \overline{\varOmega }_{m}$$ and $${V_{k}^{m}} = \left \{v\in C(\overline{\varOmega }_{m}):\,v|_{T}\in Q_{1}(T),\, \forall T \in \mathscr{T}_{k}^{m};\ v=0\ \textrm{on}\ \varGamma _{m}^{\prime }\right \}$$. Then, we have $${V_{k}^{m}}\subset H^{1}(\varOmega _{m})$$. However, the functions in $${V_{k}^{m}}$$ can take arbitrary values at the nodes on the edge $$\varGamma _{m}$$; thus, $${V_{k}^{m}}\not \subset{H_{0}^{1}}(\varOmega _{m})$$. According to the space $${V_{k}^{m}}$$, we introduce the projection operator $${P_{k}^{m}}\,:\,{V_{J}^{m}}\mapsto{V_{k}^{m}}$$ as   \begin{align} \left({P_{k}^{m}} u^{m}, {v_{k}^{m}}\right)_{a,\varOmega_{m}} = \left(u^{m}, {v_{k}^{m}}\right)_{a,\varOmega_{m}}\!,\quad \forall{v_{k}^{m}} \in{V_{k}^{m}}. \end{align} (3.16)

Comparing the definitions of $$V_{k}$$ and $${V_{k}^{m}}$$, we see that $$v_{k}|_{\varOmega _{m}}\in{V_{k}^{m}},\ \forall v_{k}\in V_{k}$$; that is, when $$V_{k}$$ is restricted to $$\varOmega _{m}$$, it is a subspace of $${V_{k}^{m}}$$. Moreover, by the definition of $${P_{k}^{m}}$$, we obviously have   \begin{align} {P_{k}^{m}}(v_{k}|_{\varOmega_{m}})=v_{k}|_{\varOmega_{m}}. \end{align} (3.17)

In the kth level, we denote by $$L_{k,n_{k}}$$ the grid line that coincides with the common edge $$\varGamma _{1}$$, and we suppose the turning point $$(x_{0},y_{0})$$ is the $$n_{k,n_{k}}$$th node on $$L_{k,n_{k}}$$ from bottom to top. By using the bilinear basis functions $$\phi _{k,i}^{j}$$, for each $$u\in V_{J}$$, $${P_{k}^{m}}(u|_{\varOmega _{m}})$$ can be represented as follows:   \begin{align} {P_{k}^{1}} (u|_{\varOmega_{1}}) = \sum_{i = 1}^{n_{k} - 1} \sum_{j = 1}^{N_{k,i}} u_{k,i}^{1,\, j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{N_{k,n_{k}}} u_{k,n_{k}}^{1,\,j} \phi_{k,n_{k}}^{1,\,j}, \end{align} (3.18)  \begin{align}\quad\;\; {P_{k}^{2}} (u|_{\varOmega_{2}}) = \sum_{i = n_{k} + 1}^{N_{k}} \sum_{j = 1}^{N_{k,i}} u_{k,i}^{2,\,j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{n_{k,n_{k}}-1} u_{k,n_{k}}^{2,\,j} \phi_{k,n_{k}}^{2,\,j}, \end{align} (3.19)where $$\phi _{k,n_{k}}^{m,j}=\phi _{k,n_{k}}^{j}|_{\varOmega _{m}}\;(m=1,2)$$.
For simplicity, for all $$u\in V_{J}$$ we write $${P_{k}^{m}} u$$ for $${P_{k}^{m}}(u|_{\varOmega _{m}})$$ in the rest of the article; this causes no ambiguity. Now, we can piece $${P_{k}^{1}}$$ and $${P_{k}^{2}}$$ together and define the rectified energy projection $$\tilde{P}_{k}\,:\,V_{J}\mapsto V_{k}$$ as   \begin{align} \tilde{P}_{k} u = \sum_{i = 1}^{n_{k} - 1}\ \ \sum_{j = 1}^{N_{k,i}} u_{k,i}^{1,\,j} \phi_{k,i}^{\,j} + \sum_{i = n_{k} + 1}^{N_{k}} \sum_{j = 1}^{N_{k,i}} u_{k,\,i}^{2,\,j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{n_{k,n_{k}}-1} u_{k,n_{k}}^{2,\,j} \phi_{k,n_{k}}^{\,j}, \end{align} (3.20)where $$u_{k,i}^{1,\,j},\ u_{k,n_{k}}^{2,\,j}$$ are given by (3.18) and (3.19).

To better understand the operator $$\tilde{P}_{k}$$, we further explain the motivation that leads to it. As mentioned above, if we derive a decomposition of v by $$v_{k}=(P_{k}-P_{k-1})v$$ on the L-shape domain $$\varOmega $$, this decomposition does not yield the required estimate $$\varepsilon \Vert v_{k} \Vert _{0}^{2}\lesssim{h_{k}^{2}}\Vert v_{k}\Vert _{a}^{2}$$. However, when we define $${v_{k}^{1}}=\left ({P_{k}^{1}}-P_{k-1}^{1}\right )v$$ and $${v_{k}^{2}}=\left ({P_{k}^{2}}-P_{k-1}^{2}\right )v$$ on the two subdomains $$\varOmega _{1}$$ and $$\varOmega _{2}$$, respectively, $$\varepsilon \left \Vert{v_{k}^{1}} \right \Vert_{0}^{2}\lesssim{h_{k}^{2}}\left \Vert{v_{k}^{1}}\right \Vert_{a}^{2}$$ and $$\varepsilon \left \Vert{v_{k}^{2}} \right \Vert_{0}^{2}\lesssim{h_{k}^{2}}\left \Vert{v_{k}^{2}}\right \Vert_{a}^{2}$$ hold on each subdomain. Therefore, we wish to define a new global operator from $${P_{k}^{1}}$$ and $${P_{k}^{2}}$$ that inherits their advantages while retaining the global a-norm stability enjoyed by $$P_{k}$$. A simple idea is to restrict v to the subdomains $$\varOmega _{1}$$ and $$\varOmega _{2}$$, perform the P-projection locally and take the results of these local projections as the outcome of a new operator. For the nodes on the common edge, we may assign the values from either $$\varOmega _{1}$$ or $$\varOmega _{2}$$; in (3.20), the values on $$\varGamma _{2}$$ are taken from $${P_{k}^{2}}u$$. The operator $$\tilde{P}_{k}$$ is defined completely in this way.

Now, we construct the decomposition of v in (3.12) by the operator $$\tilde{P}_{k}$$ as follows. Let $$v_{k}=\big (\tilde{P}_{k}-\tilde{P}_{k-1}\big )v\;(k=0,1,\dots ,J)$$, where $$\tilde{P}_{-1}:=0$$. Then, we have   \begin{align} \sum_{k = 0}^{J} v_{k} = \tilde{P}_{J} v = v,\quad \forall v\in V_{J}. \end{align} (3.21)On each level, decomposing $$v_{k}\in V_{k}\;(k=1,2,\ldots,J)$$ into the subspaces $$V_{k,i}$$, we have   \begin{align} v_{k} = \sum_{i = 1}^{N_{k}} v_{k,i}\,,\qquad v_{k,i}\in V_{k,i}. \end{align} (3.22) In the next subsection, we prove that the inequality (3.12) holds for the above decomposition.

Here, we provide further explanation of the within-level decomposition (3.22). As we know, $$V_{k,i}$$ is the subspace spanned by the basis functions corresponding to the ith grid line parallel to the y axis. Recalling the anisotropic equation, we can see that (3.22) is a decomposition along the weak spreading direction (the x direction), while the component along the strong spreading direction (the y direction) is kept unchanged. In Lemma 3.5 below, we will see an advantage of this decomposition: it yields the a-norm stability within one level.
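For concreteness, the following index-level sketch shows how the nodal values of $$\tilde{P}_{k} u$$ in (3.20) are assembled from the two local projections. It is purely schematic: the containers and names (u1, u2, n_k, n_kk) are ours, and the local projections are assumed to have been computed elsewhere.

```python
def rectify(u1, u2, n_k, n_kk):
    """Schematic assembly of the rectified projection (3.20).

    u1[i]: nodal values of P_k^1 u on grid line i of Omega_1 (i = 1..n_k),
    u2[i]: nodal values of P_k^2 u on grid line i of Omega_2 (i = n_k..N_k),
    n_k:   index of the grid line lying on the common edge Gamma_1,
    n_kk:  index (from the bottom) of the turning point (x0, y0) on that line.
    """
    out = {}
    # lines strictly left of the interface: values from the Omega_1 projection
    for i, vals in u1.items():
        if i < n_k:
            out[i] = list(vals)
    # lines strictly right of the interface: values from the Omega_2 projection
    for i, vals in u2.items():
        if i > n_k:
            out[i] = list(vals)
    # interface line L_{k,n_k}: only the nodes strictly below the turning point
    # are interior to Omega, and (3.20) assigns them values from the Omega_2 side
    out[n_k] = list(u2[n_k][: n_kk - 1])
    return out
```

The values of $${P_{k}^{1}}u$$ on the interface line are simply discarded. This asymmetric choice is exactly what makes $$\tilde{P}_{k}$$ differ from $$P_{k}$$ and necessitates the extra estimate (Theorem 3.8) in Section 3.4.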
3.4 Proof of convergence

As mentioned in Section 3.3, when setting $${v_{k}^{m}}=\left ({P_{k}^{m}}-P_{k-1}^{m}\right )v$$, we have   \begin{align} \varepsilon\left\|{v_{k}^{m}}\right\|_{0}^{2} \lesssim{h_{k}^{2}}\left\|{v_{k}^{m}}\right\|_{a}^{2}\!, \quad m= 1,2, \quad k = 1,2,\dots,J. \end{align} (3.23)We will now prove this inequality. First, we give two auxiliary lemmas.

Lemma 3.2 Let T be a rectangle, let $$h_{x},\ h_{y}$$ be the side lengths of T and let $$\Pi _{h}$$ be the bilinear interpolation operator. Then, for all $$v\in H^{2}(T)$$, we have   \begin{align} \begin{split} \|\partial_{x}(v - \Pi_{h} v) \|_{0,T}^{2} \lesssim{h_{x}^{2}}\| \partial_{xx} v\|_{0,T}^{2} + {h_{y}^{2}}\| \partial_{xy} v\|_{0,T}^{2}, \\ \|\partial_{y}(v - \Pi_{h} v) \|_{0,T}^{2} \lesssim{h_{x}^{2}}\| \partial_{yx} v\|_{0,T}^{2} + {h_{y}^{2}}\| \partial_{yy} v\|_{0,T}^{2} . \end{split} \end{align} (3.24)

This lemma follows from a simple standard scaling argument, and we omit the proof. More generally, if we consider the bilinear interpolation defined on a rectangular grid of a bounded domain $$\varOmega $$, the inequalities in Lemma 3.2 also hold. Lemma 3.2 shows that the a-norm of the interpolation error can be bounded by the second derivatives of the original function. To estimate the right-hand side in Lemma 3.2 further, we need the $$H^{2}$$ regularity of the solutions of the auxiliary problems and an estimate of the second derivatives of these solutions. Therefore, we introduce the following lemma.

Lemma 3.3 Let $$\varOmega =(0,\ d)\times (0,\ l\,),\ \varGamma =\{(0,\ y)\,|\,0<y<l\},\ \varGamma ^{\prime }=\partial \varOmega \backslash \varGamma $$ and $$\varepsilon \leqslant 1$$. Then, $$\forall f\in L^{2}(\varOmega )$$, the weak solution u to the problem   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega\!,\\ \partial_{\nu} u|_{\varGamma} = 0,\quad u|_{\varGamma^{\prime}} = 0 \end{cases} \end{align} (3.25)belongs to $$H^{2}(\varOmega )$$ and satisfies the inequalities   \begin{align} \varepsilon \|u_{xx}\|_{0,{\varOmega}} \lesssim \|f\|_{0,{\varOmega}}, \end{align} (3.26)  \begin{align} \sqrt{\varepsilon} \|u_{xy}\|_{0,{\varOmega}} \lesssim \|f\|_{0,{\varOmega}},\quad \end{align} (3.27)  \begin{align}\; \|u_{yy}\|_{0,{\varOmega}} \lesssim \|f\|_{0,{\varOmega}}. \end{align} (3.28)

The proof of this lemma is also a standard reflection and scaling argument: we transform the domain into $$\tilde{\varOmega }=(-d,\ d)\times (0,\ l)$$ by an even reflection across $$\varGamma $$ and change variables to $$\hat{x}=x/\sqrt{\varepsilon },\ \hat{y}=y$$; the lemma is then a standard conclusion of the $$H^{2}$$ regularity theory on (the transformed) $$\tilde{\varOmega }$$. We therefore omit the details of the proof.

Lemma 3.3 establishes the $$H^{2}$$ regularity of the solutions to the auxiliary problems. Moreover, it gives an estimate of the second-order derivatives that tracks the anisotropic coefficient $$\varepsilon $$. Note that the above estimate does not depend on the domain (the only requirement is the convexity of $$\tilde{\varOmega }$$). By Lemmas 3.2 and 3.3, we obtain an estimate of the interpolation error of the solution with respect to the a-norm. In the following, we deduce the inequality (3.23) by the Aubin–Nitsche technique, which relates the $$L^{2}$$ norm and the energy norm of an energy projection error.
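Before proceeding, we record for the reader's convenience the scaling bookkeeping behind Lemma 3.3 (a sketch only; it uses nothing beyond the standard, domain-independent $$H^{2}$$ estimate for the Dirichlet problem on convex domains mentioned above). With $$\hat{x}=x/\sqrt{\varepsilon }$$ and $$\hat{u}(\hat{x},y)=u(\sqrt{\varepsilon }\,\hat{x},y)$$, the function $$\hat{u}$$ solves $$-\hat{u}_{\hat{x}\hat{x}}-\hat{u}_{yy}=\hat{f}$$ with $$\hat{f}(\hat{x},y)=f(\sqrt{\varepsilon }\,\hat{x},y)$$, and   \begin{align*} \|\hat{u}_{\hat{x}\hat{x}}\|_{0}=\varepsilon^{3/4}\|u_{xx}\|_{0},\qquad \|\hat{u}_{\hat{x}y}\|_{0}=\varepsilon^{1/4}\|u_{xy}\|_{0},\qquad \|\hat{u}_{yy}\|_{0}=\varepsilon^{-1/4}\|u_{yy}\|_{0},\qquad \|\hat{f}\|_{0}=\varepsilon^{-1/4}\|f\|_{0}. \end{align*}After the even reflection across $$\varGamma $$, which removes the Neumann edge, the convex-domain estimate $$\|\hat{u}_{\hat{x}\hat{x}}\|_{0}+\|\hat{u}_{\hat{x}y}\|_{0}+\|\hat{u}_{yy}\|_{0}\lesssim \|\hat{f}\|_{0}$$ applies; substituting the scalings above and cancelling the common factor $$\varepsilon ^{-1/4}$$ yields exactly (3.26)–(3.28).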
Lemma 3.4 Given $$v^{m} \in H^{1}(\varOmega _{m})\;(m=1,2)$$, let $${v_{k}^{m}}:={P_{k}^{m}} v^{m}$$ be the Ritz projection of $$v^{m}$$ on $${V_{k}^{m}}$$; then,   \begin{align} \frac{\varepsilon}{(h_{k})^{2}}\left\|v^{m} - {v_{k}^{m}}\right\|_{0,\varOmega_{m}}^{2} \lesssim \left\|v^{m} - {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2}\!\!. \end{align} (3.29)

Proof. Define $$g=v^{m}-{v_{k}^{m}}$$; then   \begin{align} \begin{split} \left(g, {u_{k}^{m}}\right)_{a,\varOmega_{m}} =& \left(\left(I - {P_{k}^{m}}\right) v^{m}, {u_{k}^{m}}\right)_{a, \varOmega_{m}} \\ =& \left(v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}} - \left({P_{k}^{m}} v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}}\\ =& \left(v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}} - \left(v^{m}, {u_{k}^{m}}\right)_{a, \varOmega_{m}} = 0, \quad \forall{u_{k}^{m}} \in{V_{k}^{m}}. \end{split} \end{align} (3.30)Let $$\phi \in V^{m}$$ be the solution of the following variational equation:   \begin{align} a(\phi,u) = (g,u),\qquad \forall u \in V^{m}. \end{align} (3.31)By equations (3.30) and (3.31), for all $${u_{k}^{m}} \in{V_{k}^{m}}$$, we have   $$\|g\|_{0,\varOmega_{m}}^{2} = (\phi, g)_{a, \varOmega_{m}} = \left(\phi - {u_{k}^{m}},g\right)_{a,\varOmega_{m}} \leqslant \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}} \|g\|_{a,\varOmega_{m}}. $$Hence,   \begin{align} \|g\|_{0,\varOmega_{m}}^{2} \leqslant \left(\inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}}\right) \|g\|_{a,\varOmega_{m}}. \end{align} (3.32)Note that $$\phi $$ is the solution to the problem   \begin{align} \begin{cases} -\varepsilon \partial_{xx} \phi - \partial_{yy} \phi = g,\qquad (x,y) \in \varOmega_{m},\,\\ \partial_{\nu} \phi|_{\varGamma_{m}} = 0, \quad \phi|_{\varGamma^{\prime}_{m}} = 0. \end{cases} \end{align} (3.33)By Lemma 3.3, we have $$\phi \in H^{2}(\varOmega _{m})$$ and   \begin{align*} &\varepsilon \left(\|\partial_{xx} \phi\|_{0,\varOmega_{m}}^{2} + \|\partial_{xy} \phi\|_{0,\varOmega_{m}}^{2}\right) + \left(\|\partial_{yx} \phi\|_{0,\varOmega_{m}}^{2} + \|\partial_{yy} \phi\|_{0,\varOmega_{m}}^{2}\right) \\ \lesssim &\, \varepsilon\left(\frac{\|g\|_{0,\varOmega_{m}}^{2}}{\varepsilon^{2}}+ \frac{\|g\|_{0,\varOmega_{m}}^{2}}{{\varepsilon}}\right)+ \left(\frac{\|g\|_{0,\varOmega_{m}}^{2}}{{\varepsilon}}+\|g\|_{0,\varOmega_{m}}^{2}\right)\\ \lesssim & \frac{\|g\|_{0,\varOmega_{m}}^{2}}{\varepsilon}. \end{align*}Therefore, by Lemma 3.2, we obtain   \begin{align*} \inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}} &\leqslant \left\|\phi - \Pi_{h_{k}} \phi\right\|_{a,\varOmega_{m}}\\ &= \left(\varepsilon\|\partial_{x}(\phi - \Pi_{h_{k}} \phi)\|_{0,\varOmega_{m}}^{2} + \|\partial_{y}(\phi - \Pi_{h_{k}} \phi) \|_{0,\varOmega_{m}}^{2} \right)^{1/2}\\ &\lesssim h_{k} \left[\varepsilon \left(\|\partial_{xx} \phi\|_{0,\varOmega_{m}}^{2} + \|\partial_{xy} \phi\|_{0,\varOmega_{m}}^{2}\right) + \left(\|\partial_{yx} \phi\|_{0,\varOmega_{m}}^{2} + \|\partial_{yy} \phi\|_{0,\varOmega_{m}}^{2}\right)\right]^{1/2} \\ &\lesssim h_{k} \|g\|_{0,\varOmega_{m}}/\sqrt{\varepsilon}. \end{align*}Substituting the above estimate into the inequality (3.32), we get   \begin{align*} \|g\|_{0,\varOmega_{m}}^{2} &\leqslant \left(\inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}}\right) \|g\|_{a,\varOmega_{m}}\\ &\lesssim \frac{h_{k}}{\sqrt{\varepsilon}} \|g\|_{0,\varOmega_{m}} \|g\|_{a,\varOmega_{m}} . \end{align*}
That is to say,   \begin{align} \frac{\varepsilon}{(h_{k})^{2}}\|g\|_{0,\varOmega_{m}}^{2} \lesssim \|g\|_{a,\varOmega_{m}}^{2}, \end{align} (3.34)and the lemma is proved.

We now give a little more detail on the relationship between Lemma 3.4 and the inequality (3.23). Since $$P_{k-1}^{m}\left ({P_{k}^{m}} - P_{k-1}^{m}\right ) = P_{k-1}^{m} - P_{k-1}^{m} = 0$$, we have   $${v_{k}^{m}} = \left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} = \left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} - P_{k-1}^{m}\left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} = {v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}. $$Since $${v_{k}^{m}} \in H^{1}(\varOmega _{m})$$, Lemma 3.4 yields   $$\frac{\varepsilon}{h_{k-1}^{2}}\left\|{v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}\right\|_{0,\varOmega_{m}}^{2} \lesssim \left\| {v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2},$$which can be rewritten as   $$\varepsilon\left\|{v_{k}^{m}}\right\|_{0,\varOmega_{m}}^{2} \lesssim h_{k-1}^{2}\left\|{v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2}.$$Noting that $$h_{k} = \gamma ^{2} h_{k-1}$$ with $$\gamma $$ a constant, we obtain the inequality (3.23). That is, Lemma 3.4 implies the inequality (3.23).

Observing the inequality (3.12) carefully, we find that the stability of the decomposition with respect to the a-norm is needed; that is, for a decomposition $$v = v_{0} + \sum _{k=1}^{J} \sum _{i = 1}^{N_{k}} v_{k,i}$$ to guarantee the inequality (3.12), we require   \begin{align} \|v_{0}\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \sum_{i = 1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{align} (3.35)In fact, inequality (3.35) will be used in the proof of inequality (3.12). Therefore, we need to show that the inequality (3.35) holds for the decomposition defined by (3.21) and (3.22). First, we consider the situation on one level and introduce the following lemma.

Lemma 3.5 Let $$v_{k} \in V_{k}\ (k=1,2,\ldots,J)$$. If $$v_{k} = \sum _{i=1}^{N_{k}} v_{k,i}$$ with $$v_{k,i} \in V_{k,i}$$, then   \begin{align} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v_{k}\|_{a,\varOmega}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} . \end{align} (3.36)

To prove Lemma 3.5, we first prove the following proposition.

Proposition 3.6 If $$v_{k} \in V_{k}$$ and $$v_{k} = \sum _{i=1}^{N_{k}} v_{k,i}$$ with $$v_{k,i} \in V_{k,i}$$, then we have   \begin{align} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \leqslant 2\|v_{k}\|_{0,\varOmega}^{2}, \end{align} (3.37)  \begin{align} \sum_{i=1}^{N_{k}} \|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \leqslant 2\|\partial_{y} v_{k}\|_{0,\varOmega}^{2}. \end{align} (3.38)

Proof. Since $$V_{k},\ V_{k,i}$$ are shape function spaces based on bilinear elements, the relationship between $$v_{k,i}$$ and $$v_{k}$$ can be expressed as follows:   \begin{align} v_{k,i}(x,y) = v_{k}(x_{k,i},y)\varPhi_{k,i}(x), \end{align} (3.39)where   $$\varPhi_{k,i}(x) = \begin{cases} (x - x_{k,i - 1}) / (x_{k,i} - x_{k,i-1}),\qquad &x \in [x_{k,i-1},x_{k,i}], \\ (x_{k,i + 1} - x) / (x_{k,i+1} - x_{k,i}),\qquad &x \in [x_{k,i},x_{k,i+1}]. \end{cases}$$Then, by a simple computation, we have   \begin{align} \int_{x_{k,i-1}}^{x_{k,i+1}} (\varPhi_{k,i}(x))^{2}\, \mathrm{d}x = \frac{2}{3} h_{k},\qquad \int_{x_{k,i}}^{x_{k,i+1}} \varPhi_{k,i}(x) \varPhi_{k,i+1}(x)\, \mathrm{d}x = \frac{1}{6} h_{k}. \end{align} (3.40)
Hence,   \begin{align*} \|v_{k}\|_{0,\varOmega}^{2} =&\, \left\|\sum_{i=1}^{N_{k}} v_{k,i}\right\|_{0,\varOmega}^{2} \\ =&\, \sum_{i=1}^{N_{k}} \int_{\varOmega_{k,i}} v_{k,i}^{2} \, \mathrm{d}x\,\mathrm{d}y + 2 \sum_{i = 1}^{N_{k}-1} \int_{\varOmega_{k,i}\cap\varOmega_{k,i+1}} v_{k,i} v_{k,i+1}\ \mathrm{d}x\,\mathrm{d}y\\ =&\, \sum_{i=1}^{N_{k}} \int_{0}^{l_{y}} \int_{x_{k,i-1}}^{x_{k,i+1}} (v_{k}(x_{k,i},y)\varPhi_{k,i}(x))^{2} \ \mathrm{d}x\,\mathrm{d}y \\ &+ 2 \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} \int_{x_{k,i}}^{x_{k,i+1}} v_{k}(x_{k,i},y) v_{k}(x_{k,i+1},y) \varPhi_{k,i}(x)\varPhi_{k,i+1}(x)\ \mathrm{d}x\,\mathrm{d}y\\ =&\, \sum_{i=1}^{N_{k}} \frac{2h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y + \frac{h_{k}}{3} \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} v_{k}(x_{k,i},y)v_{k}(x_{k,i+1},y) \ \mathrm{d}y \\ \geqslant&\, \sum_{i=1}^{N_{k}} \frac{h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y + \frac{h_{k}}{6} \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} (v_{k}(x_{k,i},y)+v_{k}(x_{k,i+1},y))^{2} \ \mathrm{d}y \\ \geqslant&\, \sum_{i=1}^{N_{k}} \frac{h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y = \frac{1}{2} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} . \end{align*}Similarly, we can prove the inequality   \begin{align} \|\partial_{y} v_{k}\|_{0,\varOmega}^{2} \geqslant \frac{1}{2} \sum_{i=1}^{N_{k}}\|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2}. \end{align} (3.41)

Now we prove Lemma 3.5.

Proof. By the inverse inequality (see Shi & Wang, 2016) and Proposition 3.6, we have   \begin{align} \sum_{i=1}^{N_{k}} \|\partial_{x} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \lesssim (h_{k})^{-2} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \lesssim (h_{k})^{-2} \|v_{k}\|_{0,\varOmega}^{2} . \end{align} (3.42)Combining the inequalities (3.41) and (3.42), we obtain   \begin{align} \begin{split} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} =& \ \varepsilon \sum_{i=1}^{N_{k}} \|\partial_{x} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} + \sum_{i=1}^{N_{k}}\|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \\ \lesssim& \frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} + \|v_{k}\|_{a,\varOmega}^{2}. \end{split} \end{align} (3.43)

Lemma 3.5 gives an estimate for the within-level decomposition defined by (3.22). As the proof shows, this estimate depends on the choice of the subspaces $$V_{k,i}$$. If the subspaces were instead spanned by the basis functions on the grid lines parallel to the x axis, the right-hand side of inequality (3.36) would become $$\|v_{k}\|_{a,\varOmega }^{2} + \|v_{k}\|_{0,\varOmega }^{2}/(h_{k})^{2}$$; furthermore, if each subspace contained only one basis function, it would become $$\|v_{k}\|_{0,\varOmega }^{2}/(h_{k})^{2}$$. Compared with the inequality (3.23), neither of these two choices gives a stable decomposition; in fact, the smoothers corresponding to these choices are inefficient. In contrast, our method can be summarized as keeping the function components along the strong spreading direction unchanged while decomposing the function along the weak spreading direction. For the anisotropic equation, this offers the great advantage of making the decomposition stable within one level with respect to the a-norm.

In the following, we consider the problem across levels. In fact, we can show the multi-level a-norm stability by Lemmas 3.4 and 3.5.

Theorem 3.7 (Stable decomposition).
For all $$v \in V_{J}$$, let $$v =\sum _{k=0}^{J} v_{k} = v_{0} + \sum _{k=1}^{J} \sum _{i = 1}^{N_{k}} v_{k,i}$$ be the decomposition defined by equations (3.21) and (3.22); then,   \begin{align} \sum_{k=0}^{J} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2}, \end{align} (3.44)  \begin{align} \|v_{0}\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \sum_{i = 1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v\|_{a,\varOmega}^{2}. \end{align} (3.45)

Proof. Let $${v_{k}^{m}} = \left ({P_{k}^{m}} - P_{k-1}^{m}\right )v$$ ($$k = 0,1,\dots ,J$$), where m = 1, 2 and $$P_{-1}^{m} = 0$$. Decompose $${v_{k}^{m}}$$ as   \begin{align*} {v_{k}^{1}} &= \sum_{i=1}^{n_{k}} v_{k,i}^{1}, \quad v_{k,i}^{1}\in V_{k,i}^{1},\\{v_{k}^{2}} &= \sum_{i=n_{k}}^{N_{k}} v_{k,i}^{2}, \quad v_{k,i}^{2}\in V_{k,i}^{2}, \end{align*}where   $$\begin{cases} V_{k,i}^{1} \ \,= V_{k,i},\qquad i = 1,2,\dots,n_{k} -1, \\ V_{k,n_{k}}^{1} = V_{k} \cap \textrm{span}\left\{{\phi}_{k,n_{k}}^{1,1},\dots,{\phi}_{k,n_{k}}^{1,N_{k,n_{k}}}\right\}, \\ V_{k,i}^{2} \ \,= V_{k,i},\qquad i = n_{k} + 1,n_{k} + 2,\dots,N_{k}, \\ V_{k,n_{k}}^{2} = V_{k} \cap \textrm{span}\left\{{\phi}_{k,n_{k}}^{2,1},\dots,{\phi}_{k,n_{k}}^{2,n_{k,n_{k}}-1}\right\}. \end{cases}$$Similar to Lemma 3.5, we have   \begin{align*} \sum_{i=1}^{n_{k}} \left\|v_{k,i}^{1}\right\|_{a,\varOmega_{1}}^{2} &\lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2}\!,\\ \sum_{i=n_{k}}^{N_{k}} \left\|v_{k,i}^{2}\right\|_{a,\varOmega_{2}}^{2} &\lesssim \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\!\!. \end{align*}In particular,   \begin{align} \left\|v_{k,n_{k}}^{1}\right\|_{a,\varOmega_{1}}^{2} \lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2}\!\!, \end{align} (3.46)  \begin{align} \left\|v_{k,n_{k}}^{2}\right\|_{a,\varOmega_{2}}^{2} \lesssim \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\!\!. \end{align} (3.47)Following the definitions of $$v_{k}$$ and $$\tilde{P}_{k}$$ (equations (3.21) and (3.20)), $$v_{k}$$ can be expressed as follows:   \begin{align} \begin{cases} v_{k}|_{\varOmega_{1}} = {v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2} ,\\ v_{k}|_{\varOmega_{2}} = {v_{k}^{2}} , \end{cases} \end{align} (3.48)where $$\tilde{v}_{k,n_{k}}^{2}$$ is the reflection of $$v_{k,n_{k}}^{2}$$ across the edge $$\varGamma _{2}$$. Then, we have $$ \|\tilde{v}_{k,n_{k}}^{2} \|_{a,\varOmega _{1}} = \|v_{k,n_{k}}^{2} \|_{a,\varOmega _{2}}$$ and $$ \|\tilde{v}_{k,n_{k}}^{2} \|_{0,\varOmega _{1}} = \|v_{k,n_{k}}^{2} \|_{0,\varOmega _{2}}$$.
Thereby,   \begin{align} \begin{split} \|v_{k}\|_{a,\varOmega}^{2} =& \|v_{k}\|_{a,\varOmega_{1}}^{2}+\|v_{k}\|_{a,\varOmega_{2}}^{2} \\ =& \left\|{v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|v_{k,n_{k}}^{1}\right\|_{a,\varOmega_{1}}^{2} + \left\|\tilde{v}_{k,n_{k}}^{2}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left(\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\right). \end{split} \end{align} (3.49)By Lemma 3.4, $$\varepsilon \|{v_{k}^{m}} \|_{0,\varOmega_{m}}^{2}\lesssim{h_{k}^{2}} \|{v_{k}^{m}} \|_{a,\varOmega_{m}}^{2}\ (m=1,2)$$. Thus, we have   \begin{align} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\!\!,\quad 0\leqslant k\leqslant J. \end{align} (3.50)Moreover, since $$ ({v_{k}^{m}},\ {v_{l}^{m}} )_{a,\varOmega _{m}} = ( ({P_{k}^{m}} - P_{k-1}^{m} ) v,\ ({P_{l}^{m}} - P_{l-1}^{m} ) v)_{a,\varOmega _{m}} = 0$$ when $$0\leqslant k,l\leqslant J,\ k\neq l$$, we deduce that   \begin{align} \sum_{k = 0}^{J} \left\|{v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2} = \left\|\sum_{k=0}^{J} {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2} = \left\|{P_{J}^{m}} (v|_{\varOmega_{m}})\right\|_{a,\varOmega_{m}}^{2} = \| v|_{\varOmega_{m}}\|_{a,\varOmega_{m}}^{2},\ m=1,2. \end{align} (3.51)Hence,   $$\sum_{k=0}^{J} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \sum_{k=0}^{J} \left(\left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} \right) = \| v|_{\varOmega_{1}}\|_{a,\varOmega_{1}}^{2}+\| v|_{\varOmega_{2}}\|_{a,\varOmega_{2}}^{2} = \|v\|_{a,\varOmega}^{2} ,$$which completes the proof of the inequality (3.44).

The proof of the inequality (3.45) is similar. First, by the equality (3.48), we have   \begin{align} \begin{split} \|v_{k}\|_{0,\varOmega}^{2} =& \|v_{k}\|_{0,\varOmega_{1}}^{2} + \|v_{k}\|_{0,\varOmega_{2}}^{2} \\ =& \left\|{v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}} \right\|_{0,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|v_{k,n_{k}}^{1}\right\|_{0,\varOmega_{1}}^{2} + \left\|\tilde{v}_{k,n_{k}}^{2}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\!\!,\qquad 0\leqslant k\leqslant J. \end{split} \end{align} (3.52)
Then, recalling Lemmas 3.4 and 3.5, we derive   \begin{align} \begin{split} \|v_{0}\|_{a}^{2} + \sum_{k=1}^{J} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim& \ \|v_{0}\|_{a}^{2} + \sum_{k=1}^{J} \left(\frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} + \|v_{k}\|_{a,\varOmega}^{2}\right) \\ \lesssim&\ \sum_{k=0}^{J} \left(\frac{\varepsilon}{(h_{k})^{2}}\left(\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\right) + \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\right)\\ \lesssim&\ \sum_{k=0}^{J} \left(\left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} \right) = \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.53)Thus, we have completed the proof.

Now, we give a preliminary analysis of the inequality (3.12). By the equations (3.11), (3.21) and (3.22), we have   $$\sum_{(l,j)\geqslant(k,i)} v_{l,\,j} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}} v_{l,\,j} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \sum_{l=k+1}^{J} v_{l} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \big(v - \tilde{P}_{k} v\big). $$Thereby,   \begin{align} \begin{split} \left\|P_{k,i}\sum_{(l,\,j)\geqslant(k,i)} v_{l,\,j}\right\|_{a}^{2} =&\left\|P_{k,i}\left(\sum_{j = i}^{N_{k}}v_{k,\,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}}v_{l,\,j}\right)\right\|_{a}^{2}\\ =&\left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,\,j} + P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a}^{2} . \end{split} \end{align} (3.54)

If $$P_{k}$$ were used instead of $$\tilde{P}_{k}$$, we would have $$P_{k,i}P_{k} = P_{k,i}$$ by the properties of the projection operators. Thus,   $$P_{k,i}(v - P_{k} v) = P_{k,i}v - P_{k,i} v = 0,$$and in that case we would just need to estimate   \begin{align} \sum_{k = 1}^{J}\sum_{i = 1}^{N_{k}} \left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} . \end{align} (3.55)Indeed, as we will see in Theorem 3.10, (3.55) can be estimated by Theorem 3.7. However, for the rectified operator $$\tilde{P}_{k}$$ used in this article, $$P_{k,i}\tilde{P}_{k} \neq P_{k,i}$$. Therefore, compared with the case where $$P_{k}$$ is used, an extra estimate is needed to complete the proof of the inequality (3.12). Similar to the proofs in which the $$L^{2}$$ projection operator $$Q_{k}$$ is chosen to construct the decomposition between different levels (see Xu, 1992; Bramble & Zhang, 2000), we introduce a strengthened Cauchy–Schwarz inequality.

Theorem 3.8 (Strengthened Cauchy–Schwarz inequality). Let $$v \in V_{J}$$, $$u_{k} \in V_{k}$$ and $$v_{l} = \tilde{P}_{l} v - \tilde{P}_{l-1} v$$. If l > k, then   \begin{align} \left(u_{k},v_{l}\right)_{a,\varOmega} \lesssim \gamma^{l - k - 1} \|u_{k}\|_{a,\varOmega} \|v_{l}\|_{a,\varOmega} . \end{align} (3.56)

The difference between the strengthened Cauchy–Schwarz inequality and the general form is the coefficient $$\gamma ^{l-k-1}$$ in (3.56), where $$\gamma $$ is the refinement ratio between two adjacent levels, satisfying $$\gamma ^{2}=h_{k}/h_{k-1}<1$$. The strengthened Cauchy–Schwarz inequality quantifies how distinct the subspaces $$\big (\tilde{P}_{k}-\tilde{P}_{k-1}\big )V$$ of the decomposition are: $$\gamma ^{l-k-1}$$ bounds the cosine of the angle between two such subspaces, so the smaller $$\gamma $$ is, the closer to orthogonal the subspaces are. The proof of Theorem 3.8 requires the following lemma.
Lemma 3.9 Assume that $$\varOmega _{H}=[0, H]\times [0, l\,],\ \varOmega _{h}=[0, h]\times [0, l\,]$$ with h ∈ (0, H), and assume u is a bilinear function on $$\varOmega _{H}$$; then,   \begin{align} \left\|u\right\|_{a,\varOmega_{h}}^{2} \lesssim \frac{h}{H}\left\|u\right\|_{a,\varOmega_{H}}^{2}\!. \end{align} (3.57)

Proof. Since u is a bilinear function on $$\varOmega _{H}$$, both $$\partial _{x} u$$ and $$\partial _{y} u$$ are linear functions of a single variable (of y and of x, respectively). Therefore,   $$\int_{\varOmega_{h}} (\partial_{x} u)^{2}\, \mathrm{d}x\,\mathrm{d}y = h {\int_{0}^{l}} (\partial_{x} u)^{2}\, \mathrm{d}y = \frac{h}{H} \int_{\varOmega_{H}} (\partial_{x} u)^{2}\, \mathrm{d}x\,\mathrm{d}y.$$Without loss of generality, we set $$\partial _{y} u=b_{1}x+b_{2}$$; then,   \begin{align*} \int_{\varOmega_{h}} (\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y &= l {\int_{0}^{h}} (b_{1} x + b_{2})^{2}\, \mathrm{d}x \\ &= lh\left(\frac{{b_{1}^{2}} h^{2}}{3} +{b_{1} b_{2} h} + {b_{2}^{2}}\right) \\ &= lh\left(\frac{1}{3}\left(b_{1} h + \frac{3}{2}b_{2}\right)^{2} + \frac{1}{4}{b_{2}^{2}}\right)\\ &\triangleq lh\cdot g(h). \end{align*}Note that g(h) is a convex quadratic function of h, so its maximum over a closed interval is attained at an endpoint, i.e.,   $$\max_{h\in[0,H]} g(h) \leqslant \max\{g(0),g(H)\} .$$Thus, by   $$g(0) = {b_{2}^{2}} \leqslant \frac{4}{3}\left(b_{1} H + \frac{3}{2}b_{2}\right)^{2} + {b_{2}^{2}} = 4\left(\frac{1}{3}\left(b_{1} H + \frac{3}{2}b_{2}\right)^{2} + \frac{1}{4}{b_{2}^{2}}\right) = 4 g(H),$$we have   $$\int_{\varOmega_{h}} (\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y = lh\cdot g(h) \leqslant 4 lh\cdot g(H) = 4\frac{h}{H} \int_{\varOmega_{H}}(\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y.$$Summarizing the above results, we have   $$\left\|u\right\|_{a,\varOmega_{h}}^{2} \leqslant 4\frac{h}{H} \left\|u\right\|_{a,\varOmega_{H}}^{2}. $$

Lemma 3.9 estimates, for a bilinear function defined on a coarser element, the a-norm restricted to a narrow band. We now use this estimate to prove Theorem 3.8.

Proof of Theorem 3.8. By the definition of $$\tilde{P}_{l}$$, we have $$\tilde{P}_{l} v={P_{l}^{1}} v$$ on $$\varOmega _{1} \backslash \varOmega _{l, n_{l}}$$ and $$\tilde{P}_{l} v={P_{l}^{2}} v$$ on $$\varOmega _{2}$$. When l ⩾ k, the restriction of $$V_{k}$$ to $$\varOmega _{m}$$ is a subspace of $${V_{l}^{m}}\ (m=1,\ 2)$$. Thereby, when l > k, we have   \begin{align*} \left(u_{k}, v_{l}\right)_{a,\varOmega \setminus (\varOmega_{l-1,n_{l-1}}\cap\varOmega_{1})} &= \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{2}}\\ &= \left(u_{k}, {P_{l}^{1}} v - P_{l-1}^{1} v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, {P_{l}^{2}} v - P_{l-1}^{2} v\right)_{a,\varOmega_{2}}\\ &= \left(u_{k},v - v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k},v - v\right)_{a,\varOmega_{2}} = 0. \end{align*}
Proof of Theorem 3.8. Due to the definition of $$\tilde{P}_{l}$$, we have $$\tilde{P}_{l} v={P_{l}^{1}} v$$ on $$\varOmega _{1} \backslash \varOmega _{l, n_{l}}$$ and $$\tilde{P}_{l} v={P_{l}^{2}} v$$ on $$\varOmega _{2}$$. When l ⩾ k, the restriction of $$V_{k}$$ to $$\varOmega _{m}$$ is a subspace of $${V_{l}^{m}}\ (m=1,\ 2)$$. Therefore, when l > k, we have   \begin{align*} \left(u_{k}, v_{l}\right)_{a,\varOmega \setminus (\varOmega_{l-1,n_{l-1}}\cap\varOmega_{1})} &= \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{2}}\\ &= \left(u_{k}, {P_{l}^{1}} v - P_{l-1}^{1} v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, {P_{l}^{2}} v - P_{l-1}^{2} v\right)_{a,\varOmega_{2}}\\ &= \left(u_{k},v - v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k},v - v\right)_{a,\varOmega_{2}} = 0. \end{align*}Since $$u_{k}$$ is a bilinear function on each element of the kth level, applying Lemma 3.9 elementwise on the band $$\varOmega _{1}\cap \varOmega _{l-1,n_{l-1}}$$ (with $$h=h_{l-1}$$ and $$H=h_{k}$$), we have   \begin{align*} \left(u_{k},v_{l}\right)_{a,\varOmega} &= \left(u_{k},v_{l}\right)_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}} \\ &\leqslant \|u_{k}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}} \|v_{l}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}}\\ &\lesssim \sqrt{\frac{h_{l-1}}{h_{k}}}\|u_{k}\|_{a,\varOmega_{1}\cap\varOmega_{k,n_{k}}} \|v_{l}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}}\\ &\leqslant \gamma^{l - k - 1} \|u_{k}\|_{a,\varOmega} \|v_{l}\|_{a,\varOmega} . \end{align*} Now, we can prove the inequality (3.12) for the decomposition defined by (3.21) and (3.22). Theorem 3.10 For every $$v \in V_{J}$$, let $$v=v_{0}+\sum _{k=1}^{J} \sum _{i=1}^{N_{k}} v_{k,i}$$ be the decomposition defined by the equations (3.21) and (3.22). Then, we have   $$\frac{\left\|P_{0} v\right\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,\,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}\leqslant C, $$where C is a constant independent of $$\varepsilon $$ and $$h_{J}$$. Proof. By (3.54), we have   \begin{align} \left\|P_{k,i}\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right\|_{a}^{2} =\left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,j} + P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a}^{2} . \end{align} (3.58)Note that $$\forall u \in V_{J}$$, $$P_{k,i}u$$ is the a-norm projection of u on the subspace $$V_{k,i}$$; hence, we have   $$\|P_{k,i}u\|_{a,\varOmega}\leqslant\|u\|_{a,\varOmega_{k,i}} . $$Thus, we obtain   \begin{align} \begin{split} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} \leqslant& \left(\sum_{j=i}^{N_{k}} \|P_{k,i} v_{k,j}\|_{a,\varOmega}\right)^{2} \\ \leqslant& \left(\sum_{j=i}^{N_{k}} \|v_{k,\,j}\|_{a,\varOmega_{k,i}}\right)^{2} = \left(\|v_{k,i}\|_{a,\varOmega_{k,i}} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}\right)^{2}\!\!, \end{split} \end{align} (3.59)where the last equality holds because $$\textbf{supp}\, v_{k,\,j} \subset \varOmega _{k,\,j}$$, which intersects $$\varOmega _{k,i}$$ only when j = i or j = i + 1. Then, by Theorem 3.7, we derive   \begin{align} \begin{split} \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} \leqslant& \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \left( \|v_{k,i}\|_{a,\varOmega_{k,i}} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}\right)^{2} \\ \leqslant& \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} 2\left( \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}^{2}\right) \\ \leqslant& \;4 \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \\ \lesssim&\; \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.60)On the other hand, considering the second part of the right-hand side of (3.58), since each point of $$\varOmega $$ lies in at most two of the strips $$\varOmega _{k,i}$$, we obtain   \begin{align} \begin{split} \sum_{i=1}^{N_{k}} \left\|P_{k,i}\big(v-\tilde{P}_{k} v\big)\right\|_{a}^{2} =& \sum_{i=1}^{N_{k}} \left\|P_{k,i}\big(P_{k} v - \tilde{P}_{k} v\big)\right\|_{a}^{2}\\ \leqslant& \sum_{i=1}^{N_{k}} \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega_{k,i}}^{2}\\ \lesssim& \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}\!\!.
\end{split} \end{align} (3.61)Hence, by $$P_{k} \tilde{P}_{k} v=\tilde{P}_{k} v$$ and Theorem 3.8, we have   \begin{align*} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} &= \left(P_{k} v-\tilde{P}_{k} v, P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &=\left(v-\tilde{P}_{k} v, P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &=\left(\sum_{l = k}^{J - 1} \tilde{P}_{l + 1} v - \tilde{P}_{l} v, P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &\lesssim \sum_{l = k}^{J - 1} \gamma^{l - k} \left\|\tilde{P}_{l + 1} v - \tilde{P}_{l} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\!\!. \end{align*}Therefore,   \begin{align*} \sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} \lesssim\ & \sum_{k = 1}^{J - 1} \sum_{l = k}^{J - 1} \gamma^{l - k} \left\|\tilde{P}_{l + 1} v - \tilde{P}_{l} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ =\ & \sum_{k = 1}^{J - 1} \sum_{s = 0}^{J - k - 1} \gamma^{s} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ =\ & \sum_{s = 0}^{J - 2} \sum_{k = 1}^{J - s - 1} \gamma^{s} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ \leqslant\ & \sum_{s = 0}^{J - 2} \gamma^{s} \left(\sum_{k = 1}^{J - s - 1} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}^{2}\right)^{1/2} \left(\sum_{k = 1}^{J - s - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}\\ \leqslant\ & \sum_{s = 0}^{J - 2} \gamma^{s} \left(\sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2} \left(\sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}\\ \leqslant\ & \frac{1}{1 - \gamma} \left(\sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2} \left(\sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}, \end{align*}and, after dividing both sides by $$\big (\sum _{k=1}^{J-1} \| P_{k} v-\tilde{P}_{k} v\| _{a,\varOmega }^{2}\big )^{1/2}$$ and squaring, this leads to   \begin{align} \begin{split} \sum_{k = 1}^{J} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} =& \sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2}\\ \lesssim& \frac{1}{(1 - \gamma)^{2}} \sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}\\ \lesssim& \sum_{k=1}^{J - 1} \|v_{k+1}\|_{a,\varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.62)Note that the first equality holds because $$P_{J} v=\tilde{P}_{J} v=v$$, and the last inequality in the above formula follows from Theorem 3.7. Now, we obtain   \begin{align} \sum_{k = 1}^{J} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{align} (3.63)Finally, by (3.58), (3.60), (3.61) and (3.63), we have   \begin{align*} &\|P_{0} v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left\|P_{k,i} \sum_{(l,j)\geqslant(k,i)} v_{l,j}\right\|_{a,\varOmega}^{2} \\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left(\left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j} \right\|_{a,\varOmega}^{2} + \left\|P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a,\varOmega}^{2} \right)\\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j} \right\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}\\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} . \end{align*}Thus, the theorem is proved.
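To spell out the bookkeeping behind the final convergence result (a remark we add for the reader's convenience): the decomposition constructed in Theorem 3.10 bounds the infimum in (3.10), so the constant K in the XZ identity satisfies $$K \leqslant C$$; together with (3.9) and $$\|E_{J}\|_{a}=\left \|{E_{J}^{N}}\right \|_{a}^{2}$$, this gives   $$\|E_{J}\|_{a} = \left\|{E_{J}^{N}}\right\|_{a}^{2} = 1 - \frac{1}{K} \leqslant 1 - \frac{1}{C} .$$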
By Theorem 3.10 and the XZ identity, we directly derive an estimate of the a-norm of the error transfer operator $${E_{J}^{N}}$$ and hence of the operator $$E_{J}$$. Theorem 3.11 Let $$E_{J}$$ be the error transfer operator corresponding to Algorithm 2.1; then, we have   $$\|E_{J}\|_{a} \leqslant 1 - \frac{1}{C}, $$where C is a constant independent of $$\varepsilon $$ and $$h_{J}$$. Now, we have proved the uniform convergence of the multigrid method. The rectified energy projection $$\tilde{P}_{k}$$, which we introduced in this paper, can be applied not only to the L-shape domain but also to other domains assembled from finitely many rectangles, for example, homocentric squares. On such domains, we can derive the uniform convergence similarly by means of the rectified energy projection and the XZ identity. Finally, we address the more general case where $$\varOmega $$ is an arbitrary polygonal domain. In the first step, we consider the regularity of the solutions defined on the subdomains. Namely, we consider the regularity of auxiliary problems subject to the following mixed boundary conditions:   $$u|_{\varGamma_{1}} = 0,\ \partial_{\nu} u|_{\varGamma_{2}} = 0,$$where $$\varGamma _{1}$$ and $$\varGamma _{2}$$ intersect at some vertices in the domain. Because of the locality of the regularity, we only need to consider the solution around the reentrant corner. Let $$\varGamma _{1},\ \varGamma _{2}$$ be two intersecting straight lines, and let their intersection point be the origin. In polar coordinates, we assume $$\varGamma _{1},\ \varGamma _{2}$$ can be represented as:   $$\varGamma_{1} : \theta = 0, \qquad \varGamma_{2} : \theta = \theta_{0}. $$ Consider the following mixed boundary value problem on a domain surrounding the origin:   \begin{align} \begin{cases} -\varDelta u =\ 0, \qquad &0\leqslant \theta \leqslant \theta_{0},\\ u|_{\theta = 0} =\ 0, \quad &\partial_{\nu} u |_{\theta = \theta_{0}} = 0 . \end{cases} \end{align} (3.64)It can be verified that $$v = r^{\alpha}\sin(\alpha \theta )$$ with $$\alpha = \pi /(2\theta _{0})$$ is a solution of the above problem (a short verification is sketched at the end of this section). When $$\theta _{0}> \pi /2$$, we have $$\alpha < 1$$ and hence $$v \notin H^{2}$$. In fact, if the types of boundary conditions on two adjacent edges are not the same, the angle between these edges must be less than or equal to $$\pi /2$$ for the solution to belong to $$H^{2}$$. Conversely, suppose the boundary condition types on two adjacent edges are not the same and the angle between these two edges is less than or equal to $$\pi /2$$. Reflect the problem along $$\varGamma _{2}$$; we then obtain a Dirichlet problem defined on a convex domain. This problem has a unique solution $$\tilde{u} \in H^{2}$$, which is symmetric with respect to $$\varGamma _{2}$$, so $$\partial _{\nu } \tilde{u}|_{\varGamma _{2}} = 0$$. Let $$u = \tilde{u}|_{\varOmega }$$; then, $$u \in H^{2}$$ is the solution of the original mixed boundary value problem. Now, we can give guidance on how to divide the whole domain $$\varOmega $$: on each subdomain, the auxiliary mixed boundary value problem must be such that every angle between two adjacent edges carrying different types of boundary conditions is less than or equal to $$\pi /2$$. When the above condition holds, we can analyze the problem on the whole domain $$\varOmega $$ by an approach similar to the one applied to the L-shape domain and obtain the same conclusion of uniform convergence.
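Here is the verification promised above, a routine computation that we include for the reader's convenience. In polar coordinates, $$\varDelta = \partial _{rr} + r^{-1}\partial _{r} + r^{-2}\partial _{\theta \theta }$$, so for $$v = r^{\alpha }\sin (\alpha \theta )$$,   $$\varDelta v = \big(\alpha(\alpha - 1) + \alpha - \alpha^{2}\big)\, r^{\alpha - 2}\sin(\alpha\theta) = 0,$$and $$v|_{\theta =0}=0$$, while $$\partial _{\nu } v|_{\theta =\theta _{0}} = r^{-1}\partial _{\theta } v|_{\theta =\theta _{0}} = \alpha r^{\alpha -1}\cos (\alpha \theta _{0})$$ vanishes precisely when $$\alpha \theta _{0}=\pi /2$$. Moreover, the second derivatives of v behave like $$r^{\alpha -2}$$ near the origin, and   $$\int_{0}^{1} \big(r^{\alpha - 2}\big)^{2}\, r\,\mathrm{d}r = \int_{0}^{1} r^{2\alpha - 3}\,\mathrm{d}r < \infty \quad \Longleftrightarrow \quad \alpha> 1,$$which confirms that $$v \notin H^{2}$$ whenever $$\theta _{0}> \pi /2$$.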
4. Conclusion In this article, we analyze the convergence of the multigrid method for solving the anisotropic problem. On domains assembled from finitely many rectangles, such as the L-shape domains and homocentric squares, we prove the uniform convergence of the algorithm and show that the convergence rate is independent of the mesh size, the number of levels and the anisotropic coefficient. In the second section, we introduce the ‘V’-cycle multigrid algorithm based on the line Gauss–Seidel iteration. In the third section, we deduce the error transfer operator and introduce the analysis framework based on the XZ identity, by which we convert the problem of analyzing the error transfer operator into that of constructing a decomposition of the piecewise bilinear functions. Then, we define a rectified energy projection based on the domain decomposition and use it to decompose the function. Finally, we derive the uniform convergence. In fact, the proof given in this article embodies the idea of divide-and-conquer. That is to say, when the problem lacks global regularity, we can divide it into sub-problems and analyze each sub-problem locally. Finally, we can combine the local information and derive the desired global result. In this article, the rectified energy projection provides the bridge between the global problem and the local problems. Funding The first, second and third authors were partially supported by NSFC 11421101; the third and fourth authors were partially supported by NSFC 91130011. References Arms, R. J. & Zondek, B. (1956) A method of block iteration. SIAM J. Appl. Math., 4, 220–229. Bank, R. E. & Dupont, T. (1981) An optimal order process for solving finite element equations. Math. Comput., 36, 35–51. Braess, D. & Hackbusch, W. (1983) A new convergence proof for the multigrid method including the V-cycle. SIAM J. Numer. Anal., 20, 967–975. Bramble, J. H. & Pasciak, J. E. (1987) New convergence estimates for multigrid algorithms. Math. Comput., 49, 311–329. Bramble, J. H., Pasciak, J. E., Wang, J. & Xu, J. (1991) Convergence estimates for multigrid algorithms without regularity assumptions. Math. Comput., 57, 23–45. Bramble, J. H. & Zhang, X. (2000) The analysis of multigrid methods. Handbook of Numerical Analysis, vol. 7. Amsterdam: Elsevier, pp. 173–415. Bramble, J. H. & Zhang, X. (2001) Uniform convergence of the multigrid V-cycle for an anisotropic problem. Math. Comput., 70, 453–470. Chen, L. (2011) Deriving the X-Z identity from auxiliary space method. In: Huang, Y., Kornhuber, R., Widlund, O. & Xu, J. (eds) Domain Decomposition Methods in Science and Engineering XIX. Lecture Notes in Computational Science and Engineering, vol. 78. Berlin, Heidelberg: Springer. Cuthill, E. H. & Varga, R. S. (1959) A method of normalized block iteration. J. ACM, 6, 236–244. Hackbusch, W. (1985) Multi-Grid Methods and Applications. Berlin: Springer. Iorio, R. & Magalhes, V. D. (2001) Fourier Analysis and Partial Differential Equations. Cambridge: Cambridge University Press. Lee, Y. J., Wu, J., Xu, J. & Zikatanov, L.
(2008) A sharp convergence estimate for the method of subspace corrections for singular systems of equations. Math. Comput., 77, 831–850. Mikhail, S. & Sungwon, C. (2007) Hölder regularity of solutions to second-order elliptic equations in nonsmooth domains. Bound. Value Probl., 2007. Neuss, N. (1998) V-cycle convergence with unsymmetric smoothers and application to an anisotropic model problem. SIAM J. Numer. Anal., 35, 1201–1212. Shi, Z. & Wang, M. (2016) Finite Element Methods. Beijing: Science Press. Stevenson, R. (1993) New estimates of the contraction number of V-cycle multi-grid with applications to anisotropic equations. Incomplete Decompositions, Proceedings of the Eighth GAMM Seminar. Notes on Numerical Fluid Mechanics, 41, 159–167. Wu, Y., Chen, L., Xie, X. & Xu, J. (2012) Convergence analysis of V-cycle multigrid methods for anisotropic elliptic equations. IMA J. Numer. Anal., 32, 573–598. Xu, J. (1992) Iterative methods by space decomposition and subspace correction. SIAM Review, 34, 581–613. Xu, J. & Zikatanov, L. (2002) The method of alternating projections and the method of subspace corrections in Hilbert space. J. Amer. Math. Soc., 15, 573–598. Yosida, K. (1964) Functional Analysis. Berlin: Springer. © The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
Abstract We prove the uniform convergence of the V-cycle multigrid method with line Gauss–Seidel iteration as its smoother for anisotropic elliptic equations. We define a rectified projection operator to decompose the functions into different levels. This operator is based on piecewise energy norm projection. Using the Xu–Zikatanov identity, we can show that the convergence rate is independent of the mesh size, the number of levels and the coefficients of equations. The main improvement of this paper is that we lose the restriction from previous works whereby the domain must be convex, and we prove the uniform convergence of the multigrid method for the problems defined on the domains which are constructed by finite rectangles, such as the L-shape domains. 1. Introduction The multigrid method has been proposed as an efficient way to solve elliptic equations (see Bank & Dupont, 1981). When the equation is anisotropic, line Gauss–Seidel iteration was selected as the smoother (see Arms & Zondek, 1956; Cuthill & Varga, 1959). In this article, we are concerned with the convergence analysis of this multigrid method for anisotropic problems. An early framework used to analyze the convergence of general multigrid methods was proposed by Braess and Hackbusch (see, e.g., Braess & Hackbusch, 1983; Hackbusch, 1985), and it later was extended by Bramble and Pasciak in Bramble & Pasciak (1987). This approach relies on the ‘regularity and approximation’ assumption. By this framework, for anisotropic elliptic equations, Stevenson proved the uniform convergence of the multigrid methods with the ‘W’-cycle and ‘V’-cycle (see Stevenson, 1993). Then, Bramble and Zhang extended this result to problems with variable coefficients defined on rectangular domains (see Bramble & Zhang, 2001). Neuss, Bramble and Zhang proved the uniform convergence of the ‘V’-cycle multigrid method for anisotropic elliptic equations (see Neuss, 1998; Bramble & Zhang, 2001). However, all the proofs mentioned above depend on the $$H^{2}$$ regularity of the solution. Another framework for general multigrid methods was proposed by Bramble, Pasciak, Wang and Xu in Bramble et al. (1991). In 2002, Xu and Zikatanov proposed the XZ identity (see Xu & Zikatanov, 2002), which provides a new framework to prove the convergence of general multigrid methods. In line with this approach, for anisotropic elliptic equations, Wu et al. (2012) analyzed multigrid methods based on two mesh coarsening strategies: uniform coarsening and semicoarsening. For the case of semicoarsening, they proved the uniform convergence of the algorithm. But for uniform coarsening, the convergence rate depends on the number of mesh levels. In this paper, we consider the multigrid method for anisotropic problems. We give an example of the L-shape domain (without $$H^{2}$$ regularity) to show the uniform convergence for uniform coarsening. Note that in general, $$H^{2}$$ regularity cannot be reached for problems defined on concave domains (see Mikhail & Sungwon, 2007); thus, we must obtain a proof that does not depend on the global $$H^{2}$$ regularity of the problem. In fact, such is the case with our method in this article, and indeed, our proof could easily be extended to other domains that are assembled by finite rectangles. We apply the XZ identity and intend to prove that the value K in the XZ identity is uniformly bounded. The framework of the estimate of K is roughly similar to the framework posed in the studies by Xu (1992) and Lee et al. 
(2008), but many details must be rectified to adapt the anisotropic problem. A kernel aspect is the choice of the projection operator. The projection operator is used to decompose a function into different levels, and this plays an important role in our proof. In the studies by Bramble et al. (1991) and Lee et al. (2008), the $$L^{2}$$ projection operator is chosen because the lack of $$H^{2}$$ regularity can be overcomed. It is difficult for the $$L^{2}$$ projection to overcome anisotropy; the (global) energy norm projection cannot overcome the lack of $$H^{2}$$ regularity. However, on each rectangle subdomain, the $$H^{2}$$ regularity is guaranteed. This fact inspired us to propose the idea of divide-and-conquer. In this paper, we proposed a new method to construct the projection operator. The new projection operator is based on the local energy norm projection on each rectangle subdomain, and then we gather them together in some way. We use this new rectified energy projection operator to decompose functions into different levels and prove the uniform convergence of the multigrid method. The rest of this paper is organized as follows. In Section 2, we introduce the model problem that we shall consider and outline the line Gauss–Seidel multigrid method. In Section 3, we introduce the rectified energy norm projection operator and use the XZ identity to deduce the proof of uniform convergence. 2. Model problem and multigrid settings 2.1 Model problem Elliptic partial differential equations appear in many important practical problems, such as equilibrium problems in elasticity, irrotational motion of nonviscous fluid, potential problems, temperature distribution problems and diffusion problems (see Iorio & Magalhes, 2001). In many actual problems, the physical features are anisotropic. In the corresponding partial differential equations, the anisotropy usually leads to different coefficients in different directions. In this paper, we consider the following model problem:   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega,\\ u|_{\partial \varOmega} = 0, \end{cases} \end{align} (2.1)where $$\varepsilon \in (0,\ 1)$$ is a constant, $$\varOmega $$ is a bounded domain in $$\mathbb{R}^{2}$$ and $$f \in L^{2}(\varOmega )$$. To illustrate our algorithm and theoretical analysis, in this paper, we focus on the case where $$\varOmega $$ is an L-shape domain. The solution of equation (2.1) exists uniquely, and we can solve it by the finite difference or finite element method. In actual computation, the condition number of the corresponding linear system increases quickly when the mesh is refined. This phenomenon causes the difficulty in solving our problem. To overcome this disadvantage, multigrid methods are widely used. For isotropic problems, ordinary iterations such as the Gauss–Seidel, Jacobi and SOR iteration are used as smoothers in multigrid methods, and all these methods work well, namely, converge uniformly. However, when the anisotropic problem (2.1) is considered, all the multigrid methods mentioned above cannot lead to a convergence rate that is independent of the coefficient $$\varepsilon $$. To avoid the influence of $$\epsilon $$ in the convergence rate, line Gauss–Seidel iteration has been used (see, e.g., Arms & Zondek, 1956; Cuthill & Varga, 1959). 
It should be noted that the term ‘uniform converge’ that occurs in the rest of the paper means that the convergence rate is independent of the mesh size, the number of levels and the coefficient $$\varepsilon $$. 2.2 Bilinear element and subspace settings We analyze problem (2.1) by its variational form: find $$u\in{H_{0}^{1}}(\varOmega )$$ such that   \begin{align} \varepsilon \left(\frac{\partial u}{\partial x},\frac{\partial v}{\partial x}\right)+ \left(\frac{\partial u}{\partial y},\frac{\partial v}{\partial y}\right)= (\,f,v),\quad \forall v\in{H_{0}^{1}}(\varOmega), \end{align} (2.2)where (⋅, ⋅) denotes the $$L^{2}$$ inner product on $$\varOmega $$. We also introduce the energy inner product a(⋅, ⋅) as   $$ a(u,v)=\varepsilon \left(\frac{\partial u}{\partial x},\frac{\partial v}{\partial x}\right)+ \left(\frac{\partial u}{\partial y},\frac{\partial v}{\partial y}\right), $$and then problem (2.2) can be briefly written as: find $$u\in{H_{0}^{1}}(\varOmega )$$ such that   $$ a(u,v)= (\,f,v),\quad \forall v\in{H_{0}^{1}}(\varOmega). $$The energy inner product a can also be defined on other domains. In general, we denote our energy inner product on a domain $$\omega $$ by $$(\cdot ,\cdot )_{a,\omega }$$. Note that when $$\omega =\varOmega $$, we have $$(\cdot ,\cdot )_{a,\varOmega }=a(\cdot ,\cdot )$$. Moreover, we denote the energy norm according to this energy inner product by $$\Vert \cdot \Vert _{a,\omega }$$, as $$\Vert \cdot \Vert _{a,\omega }^{2}=(\cdot ,\cdot )_{a,\omega }$$. When the domain $$\omega =\varOmega $$, we can omit $$\varOmega $$ in $$\Vert \cdot \Vert _{a,\varOmega }$$ and use the notation $$\Vert \cdot \Vert _{a}$$. Similarly, we denote by $$\Vert \cdot \Vert _{0,\omega }$$ and $$\Vert \cdot \Vert _{0}$$ the $$L^{2}$$ norm on $$\omega $$ and $$\varOmega $$, respectively. We use bilinear finite elements on rectangular meshes to discretize the equation (2.2). Suppose that there is a sequence of nested rectangular partitions $${\mathscr T}_{k}(0\leqslant k\leqslant J)$$ of $$\varOmega $$, where the edges of the elements are all parallel to the coordinate axes. We denote the side lengths parallel to the x and y directions of each element in $${\mathscr T}_{k}$$ by $${h_{k}^{x}}$$ and $${h_{k}^{y}}$$, respectively. In this article, we consider the situation where the meshes are refined uniformly, i.e.,   $$ \frac{{h_{k}^{x}}}{h_{k-1}^{x}}=\frac{{h_{k}^{y}}}{h_{k-1}^{y}}=\gamma^{2}\in(0,1). $$ In the kth level, suppose there are $$N_{k}+2$$ grid lines parallel to the y axis, and we denote them from left to right by $$L_{k,0},L_{k,1},\ldots ,L_{k,N_{k}+1}$$, respectively. The number of nodes on $$L_{k,i}$$ is denoted by $$N_{k,i}$$, and these nodes are labeled from bottom to top. Then, we denote by $$\phi _{k,i}^{j}$$ the nodal basis function corresponding to the jth node on $$L_{k,i}$$. With the above settings, note that the ranges of the indexes here are k = 0, 1, … , J, $$i=0,1,\ldots ,N_{k}+1$$ and $$j=1,2 \ldots ,N_{k,i}$$. Now, we introduce the finite element space $$V_{k}$$ according to the rectangular partition $${\mathscr T}_{k}$$ as   $$ V_{k}=\{v\in C(\overline{\varOmega}):v|_{T}\in Q_{1}(T),\,\forall T \in{\mathscr T}_{k};\, v(P)=0\ \textrm{if}\ P \in \partial\varOmega\}, $$and we then obviously have $$V_{0}\subset V_{1}\subset \ldots \subset V_{J}$$. Moreover, we have $$V_{k}\subset{H_{0}^{1}}(\varOmega )$$ (see Shi & Wang, 2016). To preform line Gauss–Seidel iterations, we introduce the subspaces and subdomains related to the ‘lines’. 
Define subspace $$V_{k,i}=V_{k}\cap\textrm{span}\left \{\phi _{k,i}^{1},\ldots,\phi _{k,i}^{N_{k,i}}\right \}$$ and subdomain   $$\varOmega _{k,i}=\left \{(x,y)\in \varOmega :(i-1){h_{k}^{x}}<x<(i+1){h_{k}^{x}}\right \},$$where $$i=1,2,\ldots,N_{k}$$. Then we have $$\textbf{supp}\, v_{k,i}\subset \varOmega _{k,i}$$, $$\forall v_{k,i}\in V_{k,i}$$. Notice that in our algorithm, the ‘lines’ are vertical. We then introduce $$P_{k}\,:\, V_{J}\rightarrow V_{k}$$, $$Q_{k}\,:\, V_{J}\rightarrow V_{k}$$ and an auxiliary operator $$A_{k}\,:\, V_{k}\rightarrow V_{k}$$ as follows:   \begin{align} (P_{k}v,\varphi)_{a,\varOmega} =(v,\varphi)_{a,\varOmega}, \quad\forall\varphi\in V_{k}, \end{align} (2.3)  \begin{align} (Q_{k}v,\varphi) =(v,\varphi), \quad \forall\varphi\in V_{k}, \end{align} (2.4)  \begin{align} (A_{k}v,\varphi) =(v,\varphi)_{a,\varOmega},\quad \forall\varphi\in V_{k}. \end{align} (2.5)$$P_{k}$$ and $$Q_{k}$$ can be seen as the projection with respect to the energy norm and the $$L^{2}$$ norm, respectively. Similarly, on subspace $$V_{k,i}$$ with $$k=0,1,\ldots,J$$ and $$i=1,2,\ldots,N_{k}$$, we also introduce corresponding projections $$P_{k,i}\,:\, V_{J}\rightarrow V_{k,i}$$, $$Q_{k,i}\,:\, V_{J}\rightarrow V_{k,i}$$ and $$A_{k,i}\,:\, V_{k,i}\rightarrow V_{k,i}$$ as follows:   \begin{align} (P_{k,i}v,\varphi)_{a,\varOmega} =(v,\varphi)_{a,\varOmega},\quad \forall\varphi\in V_{k,i}, \end{align} (2.6)  \begin{align} (Q_{k,i}v,\varphi) =(v,\varphi), \quad\quad\; \forall\varphi\in V_{k,i}, \end{align} (2.7)  \begin{align} (A_{k,i}v,\varphi) =(v,\varphi)_{a,\varOmega}, \quad \forall\varphi\in V_{k,i}. \end{align} (2.8) According to the definitions of these projection operators and the inclusion relationships between subspaces, the following equations hold if $$0 \leqslant k \leqslant l \leqslant J $$:   \begin{align} P_{k}P_{l}=P_{l}P_{k}=P_{k},\quad P_{k,i}P_{l}=P_{l}P_{k,i}=P_{k,i}, \end{align} (2.9)  \begin{align}\;\, Q_{k}Q_{l}=Q_{l}Q_{k}=Q_{k},\quad Q_{k,i}Q_{l}=Q_{l}Q_{k,i}=Q_{k,i}. \end{align} (2.10)Moreover, $$\forall u\in V_{J},\; v_{k}\in V_{k},\; v_{k,i}\in V_{k,i}$$, we have   $$ \left(A_{k}P_{k}u,v_{k}\right)=(P_{k}u,v_{k})_{a,\varOmega}=(u,v_{k})_{a,\varOmega}=(A_{J}u,v_{k})= (Q_{k}A_{J}u,v_{k}), $$  $$ (A_{k,i}P_{k,i}u,v_{k,i})=(P_{k,i}u,v_{k,i})_{a,\varOmega}=(u,v_{k,i})_{a,\varOmega}=(A_{J}u,v_{k,i})=(Q_{k,i}A_{J}u,v_{k,i}). $$Thus, we obtain the following two important relationships between these operators:   \begin{align} A_{k}P_{k}=Q_{k}A_{J},\quad A_{k,i}P_{k,i}=Q_{k,i}A_{J},\quad \forall 0\leqslant k\leqslant J,\,1\leqslant i\leqslant N_{k}. \end{align} (2.11) 2.3 Multigrid method based on line Gauss–Seidel iteration The Galerkin format that we need to solve on the finite element space can be described as follows: find $$u\in V_{J}$$ such that   $$ a(u,v)=(\,f,v),\quad\forall v\in V_{J}. $$ By the auxiliary operator $$A_{J}$$, which we have defined in Section 2.2 (Equation (2.5)), we have   $$ (A_{J}u,v)=a(u,v)=(\,f,v). $$Since the above equation holds for any $$v\in V_{J}$$, we obtain   \begin{align} A_{J}u=Qf. \end{align} (2.12)This operator equation corresponds to our finite element problem, where $$Q\,:\, L^{2}(\varOmega )\rightarrow V_{J}$$ is the $$L^{2}$$ projection from $$L^{2}(\varOmega )$$ to $$V_{J}$$. 
Generally, if the initial guess $$u^{0}$$ has been chosen, the iteration method used to solve the above operator equation can be described as follows:   \begin{align} u^{l+1}=u^{l}+B_{J}\big(Qf-A_{J}u^{l}\big), \end{align} (2.13)where $$B_{J}$$ is an approximation of $$A_{J}^{-1}$$, and it is given by the iteration algorithm individually. Algorithm 2.1 defines $$B_{J}$$ for the multigrid method of which the smoother was given by the line Gauss–Seidel iteration. In practice, Algorithm 2.1 is usually used recursively. On each level, the iteration contains three steps: pre-smoothing, correction and post-smoothing. The advantage of the multigrid method is the validity for almost all error components with different frequencies, which ensures rapid convergence. In the next section, we analyze our algorithm theoretically. Our aim is to prove the uniform convergence of Algorithm 2.1. 3. Convergence analysis In this section, we first introduce and analyze the error transfer operator. Then, we can prove the uniform convergence through this error transfer operator. Some preliminary lemmas are proposed in the process of the proof. In the following, we use the symbol ‘$$\lesssim $$’. Here, $$x\lesssim y$$ means $$x \leqslant Cy $$, where C is a constant independent of the domain size, the mesh width h and the anisotropic coefficient $$\varepsilon $$. Without loss of generality, we assume $$h_{k}={h_{k}^{x}}={h_{k}^{y}}$$. As for $${h_{k}^{x}}\neq{h_{k}^{y}}$$, we can dilate the x-coordinate to make $$h_{k}^{\hat{x}}={h_{k}^{y}}$$. Although this dilation transformation could change the size of domain $$\varOmega $$ and anisotropic coefficient $$\varepsilon $$, it has no influence on our proving method. 3.1 Error transfer operator Assume $$u\in V_{J}$$ is the solution of $$A_{J}u=Qf$$; by (2.13) we have   $$u^{l+1}=u^{l}+B_{J}\big(Qf-A_{J}u^{l}\big)=u^{l}+B_{J}A_{J}\big(u-u^{l}\big). $$Subtracting u from the above equation and changing the sign, we obtain   \begin{align} u-u^{l+1}=u-u^{l}-B_{J}A_{J}\big(u-u^{l}\big)=(I-B_{J}A_{J})\big(u-u^{l}\big). \end{align} (3.1) Then, we denote by $$E_{J}\triangleq I-B_{J}A_{J}$$ the error transfer operator. From (3.1), we see that the convergence of our algorithm relates to the norm of this error transfer operator. Therefore, our aim is to estimate the norm of $$E_{J}$$. Now, we consider the specific formula of $$E_{J}$$ according to our concrete algorithm. For every $$w\in V_{J}$$, let $$r=A_{k}P_{k}w\in V_{k}$$, and substitute it into Algorithm 2.1; we have   $$w^{i}=w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}\left(A_{k}P_{k}w-A_{k}w^{i-1}\right),\qquad 1\leqslant i\leqslant N_{k}. $$Due to $$w^{i}\in V_{k}$$, $$P_{k}w^{i}=w^{i}\, (1\leqslant i\leqslant N_{k})$$, the above equation can be transformed into   \begin{align*} w^{i} & =w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}\left(A_{k}P_{k}w-A_{k}P_{k}w^{i-1}\right)\\ & =w^{i-1}+A_{k,N_{k}+1-i}^{-1}Q_{k,N_{k}+1-i}A_{k}P_{k}\left(w-w^{i-1}\right). \end{align*}Recalling equation (2.11), we obtain   $$A_{k,i}^{-1}Q_{k,i}A_{k}P_{k}=A_{k,i}^{-1}Q_{k,i}Q_{k}A_{J}=A_{k,i}^{-1}Q_{k,i}A_{J}= A_{k,i}^{-1}A_{k,i}P_{k,i}=P_{k,i}, $$then,   \begin{align} w-w^{i}=\big(I-P_{k,N_{k}+1-i}\big)\big(w-w^{i-1}\big),\qquad 1\leqslant i\leqslant{N_{k}}. \end{align} (3.2)Similarly, we can deduce   \begin{align} w-w^{N_{k}+1} =(I-B_{k-1}A_{k-1}P_{k-1})\big(w-w^{N_{k}}\big), \end{align} (3.3)  \begin{align}\qquad\; w-w^{N_{k}+i+1} =(I-P_{k,i})\big(w-w^{N_{k}+i}\big),\qquad 1\leqslant i\leqslant N_{k}. 
\end{align} (3.4)By (3.2), (3.3) and (3.4), we have   \begin{align*} w-B_{k}r =&\,w-w^{2N_{k}+1}\\ = &\,(I-P_{k,N_{k}})\ldots (I-P_{k,1})(I-B_{k-1}A_{k-1}P_{k-1})(I-P_{k,1})\ldots (I-P_{k,N_{k}})w. \end{align*}Now, we define $$T_{k}\triangleq (I-P_{k,N_{k}})\ldots (I-P_{k,1})$$, where k = 1, 2, … , J. Note $$r=A_{k}P_{k}w$$; then, by the notation $$T_{k}$$, the above equation can be transformed into   \begin{align} (I-B_{k}A_{k}P_{k})w=T_{k}(I-B_{k-1}A_{k-1}P_{k-1})T_{k}^{*}w,\qquad \forall w\in V_{J}. \end{align} (3.5)Finally, using the above formula recursively, we have   \begin{align} \begin{aligned} E_{J} & =I-B_{J}A_{J}=I-B_{J}A_{J}P_{J}\\ & =T_{J}T_{J-1}\ldots T_{1}(I-B_{0}A_{0}P_{0})T_{1}^{*}\ldots T_{J-1}^{*}T_{J}^{*}\\ & =T_{J}T_{J-1}\ldots T_{1}(I-P_{0})T_{1}^{*}\ldots T_{J-1}^{*}T_{J}^{*}. \end{aligned} \end{align} (3.6)Furthermore, let $${E_{J}^{N}}\triangleq T_{J}T_{J-1}\ldots T_{1}(I-P_{0})$$, and note that $$I-P_{0}=(I-P_{0})\left (I-P_{0}^{*}\right )$$; we obtain   $$E_{J}={E_{J}^{N}}\left({E_{J}^{N}}\right)^{*}. $$‘/’ and ‘∖’ type of multigrid algorithm: Sometimes, we may perform Algorithm 2.1 incompletely. If we cut the step of pre-smoothing (‘/’ type of multigrid algorithm), then $${E_{J}^{N}}$$ will be the error transfer operator; and if we cut off the post-smoothing (‘∖’ type of multigrid algorithm), $$\left ({E_{J}^{N}}\right )^{*}$$ will be the error transfer operator. Following the property of adjoint operator (see Yosida, 1964), we have   $$\left\Vert{E_{J}^{N}}\right\Vert{}_{a}=\left\Vert \left({E_{J}^{N}}\right)^{\ast}\right\Vert{}_{a}=\left\Vert{E_{J}^{N}}\left({E_{J}^{N}}\right)^{\ast}\right\Vert{}_{a}^{1/2}=\left\Vert E_{J}\right\Vert{}_{a}^{1/2}.$$Thus, we just need to estimate $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$, in spite of the algorithm (‘/’-cycle, ‘∖’-cycle or ‘V’-cycle). 3.2 The XZ identity and frame of proving In order to estimate $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$, we introduce the XZ identity proposed by Xu and Zikatanov (see, e.g., Xu & Zikatanov, 2002; Lee et al., 2008; Chen, 2011). Theorem 3.1 (XZ identity). Let V be a Hilbert space with inner-product $$(\cdot ,\cdot )_{a}$$, and $$V_{i}\subset V(i=1,2,\ldots ,J)$$ be closed subspaces of V satisfying $$V=\sum _{i=1}^{J}V_{i}$$. Given $$P_{i}\,:\,V\mapsto V_{i}$$, the orthogonal projection with respect to $$(\cdot ,\cdot )_{a}$$. Then, the following identity holds:   \begin{align} \left\Vert (I-P_{J})\ldots (I-P_{1})\right\Vert{}_{a}^{2}=1-\frac{1}{K}, \end{align} (3.7)where   \begin{align} K=\sup_{v\in V}\inf_{\sum_{i=1}^{J} v_{i}=v} \frac{\sum_{i=1}^{J}\left\Vert P_{i}\sum_{j=i}^{J}v_{j}\right\Vert{}_{a}^{2}}{\Vert v\Vert_{{a}^{2}}}. \end{align} (3.8) When we apply the subspace decomposition   $$V=V_{0}+\sum_{k=1}^{J}\sum_{i=1}^{N_{k}}V_{k,i}$$in Theorem 3.1, $$\left \Vert{E_{J}^{N}}\right \Vert_{a}$$ is estimated as   \begin{align} \left\Vert{E_{J}^{N}}\right\Vert{}_{{a}^{2}}=1-\frac{1}{K}, \end{align} (3.9)where   \begin{align} K=\sup_{v\in V}\inf_{v=v_{0}+\sum_{k=1}^{J}\sum_{i=1}^{N_{k}}v_{k,i}}\frac{\|P_{0} v\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}. \end{align} (3.10)In the above equation, $$v_{0}\in V_{0}$$, $$v_{k,i}\in V_{k,i}$$ and the notation (l, j) ⩾ (k, i) means   \begin{align} \sum_{(l,j)\geqslant(k,i)} v_{l,j} = \sum_{j = i}^{N_{k}} v_{k,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}} v_{l,j} . 
\end{align} (3.11) Due to (3.9), our algorithm is convergent uniformly if K is bounded (especially independent of the levels J and the anisotropic coefficient $$\varepsilon $$). To show K is bounded, from (3.10), we need to prove the following proposition: $$\forall v\in V_{J}$$, there exists a decomposition of v on $$V_{k,i}$$, i.e., $$v=v_{0}+\sum _{k=1}^{J}\sum _{i=1}^{N_{k}}v_{k,i}$$, such that   \begin{align} \frac{\|P_{0} v\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}\leqslant C, \end{align} (3.12)where C is independent of J and $$\varepsilon $$. To prove this proposition, we need to construct a decomposition of v. There are two steps we need to follow. The first step is to decompose v into different levels as $$v=\sum _{k=0}^{J}v_{k}$$, and the second step is to decompose each $$v_{k}$$ within one level as $$v_{k}=\sum _{i=1}^{N_{k}}v_{k,i}$$. There are two usual ways to construct the decomposition between different levels; one is based on the $$L^{2}$$ projection $$Q_{k}$$, i.e., $$v_{k}=(Q_{k}-Q_{k-1})v$$, while the other is based on the energy projection $$P_{k}$$, i.e., $$v_{k}=(P_{k}-P_{k-1})v$$. For anisotropic elliptic equations, our analysis of the convergence of the algorithm mainly relies on the following inequalities:   \begin{align} \sum_{k=0}^{J} \|v_{k}\|_{a}^{2} \lesssim \|v\|_{a}^{2}, \end{align} (3.13)  \begin{align} \frac{\varepsilon}{{h_{k}^{2}}}\|v_{k}\|_{0}^{2} \lesssim \|v_{k}\|_{a}^{2}. \end{align} (3.14)Since $$Q_{k}$$ is the $$L^{2}$$ projection, it does not contain any information about the diffusion directions. Therefore, in an anisotropic elliptic equation, we are faced with the essential difficulty that is hard to overcome when considering the decomposition $$v_{k}=\left (Q_{k}-Q_{k-1}\right )v$$ under the a-norm. If the decomposition $$v_{k}=(P_{k}-P_{k-1})v$$ is used, we have $$\sum _{k=0}^{J} \|v_{k}\|_{a}^{2}=\|v\|_{a}^{2}$$ directly by the definition of $$P_{k}$$. Hence, the inequality (3.13) holds naturally. To ensure the inequality (3.14), unfortunately, we need the solution that belongs to $$H^{2}(\varOmega )$$ in this case. However, the $$H^{2}$$ regularity cannot be reached if the domain is nonconvex. To overcome this difficulty, we divide a concave domain into several subdomains and consider the related problems defined on these subdomains. Of course, we need the $$H^{2}$$ regularity of the solution in these subdomains. By analyzing the solutions on common boundaries, we can put the local results together. In practice, we first define the projection operators $${P_{k}^{m}}$$ on each subdomain. Secondly, using the method of piecing together, we define the global operator $$\tilde{P}_{k}$$ and then decompose v as $$v_{k}=(\tilde{P}_{k}-\tilde{P}_{k-1})v$$. Finally, we deduce the inequality (3.12) and thus prove the uniform convergence of the multigrid method. 3.3 Rectified energy projection Without loss of generality, we illustrate the rectified energy projection on the L-shape domain, and other domains that are constructed by several rectangles can be dealt with similarly. We choose the L-shape domain as our example because it is a classic nonconvex domain and contains only two rectangles. For the domain $$\varOmega =(0,X) \times (0,Y)\backslash [x_{0},X) \times [y_{0},Y)$$, we define the rectified energy projection $$\tilde{P}_{k}$$ in what follows. 
First, we divide the domain $$\varOmega $$ into two subdomains, $$\varOmega _{1}=(0, x_{0})\times (0, Y)$$ and $$\varOmega _{2}=(x_{0}, X)\times (0, y_{0})$$. The boundaries are defined as $$\varGamma _{1} = \{(x_{0},y)\,|\,0<y<Y\},\ \varGamma _{2} = \{(x_{0},y)\,|\,0<y<y_{0}\}$$ and $$\varGamma _{m}^{\prime} = \partial \varOmega _{m} \backslash \varGamma _{m}\;(m = 1, 2)$$. As shown in Fig. 1, $$\varGamma _{1},\ \varGamma _{2}$$ are the common edges of the subdomains, while $$\varGamma _{1}^{\prime},\ \varGamma _{2}^{\prime}$$ are not common edges. Fig. 1. View largeDownload slide Diagram of domain division. Fig. 1. View largeDownload slide Diagram of domain division. We introduce auxiliary problems on the two subdomains as   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega_{m},\\ \partial_{\nu} u|_{\varGamma_{m}} = 0,\quad u|_{\varGamma_{m}^{\prime}} = 0. \end{cases} \qquad m = 1,2. \end{align} (3.15) Then, the corresponding variational form of the above problems is to find $$u^{m}\in V^{m}$$ such that   $$(u^{m},v^{m})_{a,\varOmega_{m}} = (f,v^{m}),\quad \forall v^{m} \in V^{m}, $$where $$V^{m} = \{u\in H^{1}(\varOmega _{m}):\,u|_{\varGamma _{m}^{\prime }}=0 \}$$. Now, we define $$\mathscr{T}_{k}^{m} = \mathscr{T}_{k} \cap \overline{\varOmega }_{m}$$ and $${V_{k}^{m}} = \left \{v\in C(\overline{\varOmega }_{m}):\,v|_{T}\in Q_{1}(T),\, \forall T \in \mathscr{T}_{k}^{m};\ v=0\ \textrm{on}\ \varGamma _{m}^{\prime }\right \}$$. Then, we have $${V_{k}^{m}}\subset H^{1}(\varOmega _{m})$$. However, the functions in $${V_{k}^{m}}$$ can arbitrarily take values for the nodes on the edge $$\varGamma _{m}$$; thus, $${V_{k}^{m}}\not \subset{H_{0}^{1}}(\varOmega _{m})$$. According to the space $${V_{k}^{m}}$$, we introduce the projection operator $${P_{k}^{m}}\,:\,{V_{J}^{m}}\mapsto{V_{k}^{m}}$$ as   \begin{align} \left({P_{k}^{m}} u^{m}, {v_{k}^{m}}\right)_{a,\varOmega_{m}} = \left(u^{m}, {v_{k}^{m}}\right)_{a,\varOmega_{m}}\!,\quad \forall{v_{k}^{m}} \in{V_{k}^{m}}. \end{align} (3.16) Comparing the definitions of $$V_{k}$$ and $${V_{k}^{m}}$$, we know $$v_{k}|_{\varOmega _{m}}\in{V_{k}^{m}},\ \forall v_{k}\in V_{k}$$. That is to say, when $$V_{k}$$ is restricted on $$\varOmega _{m}$$, it is a subspace of $${V_{k}^{m}}$$. Moreover, by the definition of $${P_{k}^{m}}$$, obviously we have   \begin{align} {P_{k}^{m}}(v_{k}|_{\varOmega_{m}})=v_{k}|_{\varOmega_{m}}. \end{align} (3.17) In the kth level, we denote by $$L_{k,n_{k}}$$ the grid line, which coincides with the common edge $$\varGamma _{1}$$. And we suppose the turning point $$(x_{0},y_{0})$$ on $$L_{k,n_{k}}$$ is the $$n_{k,n_{k}}$$th node from bottom to top. By using bilinear basis functions $$\phi _{k,i}^{j}$$, for each $$u\in V_{J}$$, $${P_{k}^{m}}(u|_{\varOmega _{m}})$$ can be represented as follows:   \begin{align} {P_{k}^{1}} (u|_{\varOmega_{1}}) = \sum_{i = 1}^{n_{k} - 1} \sum_{j = 1}^{N_{k,i}} u_{k,i}^{1,\, j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{N_{k,n_{k}}} u_{k,n_{k}}^{1,\,j} \phi_{k,n_{k}}^{1,\,j}, \end{align} (3.18)  \begin{align}\quad\;\; {P_{k}^{2}} (u|_{\varOmega_{2}}) = \sum_{i = n_{k} + 1}^{N_{k}} \sum_{j = 1}^{N_{k,i}} u_{k,i}^{2,\,j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{n_{k,n_{k}}-1} u_{k,n_{k}}^{2,\,j} \phi_{k,n_{k}}^{2,\,j}, \end{align} (3.19)where $$\phi _{k,n_{k}}^{m,j}=\phi _{k,n_{k}}^{j}|_{\varOmega _{m}}\;(m=1,2)$$. 
Expressed simply, for all $$u\in V_{J}$$, we use $${P_{k}^{m}} u$$ to indicate $${P_{k}^{m}}(u|_{\varOmega _{m}})$$ later in the article without any ambiguity. Now, we can piece $${P_{k}^{1}}$$ and $${P_{k}^{2}}$$ together and define the rectified energy projection $$\tilde{P}_{k}\,:\,V_{J}\mapsto V_{k}$$ as   \begin{align} \tilde{P}_{k} u = \sum_{i = 1}^{n_{k} - 1}\ \ \sum_{j = 1}^{N_{k,i}} u_{k,i}^{1,\,j} \phi_{k,i}^{\,j} + \sum_{i = n_{k} + 1}^{N_{k}} \sum_{j = 1}^{N_{k,i}} u_{k,\,i}^{2,\,j} \phi_{k,i}^{\,j} + \sum_{j = 1}^{n_{k,n_{k}}-1} u_{k,n_{k}}^{2,\,j} \phi_{k,n_{k}}^{\,j}, \end{align} (3.20)where $$u_{k,i}^{1,\,j},\ u_{k,n_{k}}^{2,\,j}$$ are given by (3.18) and (3.19). To better understand the operator $$\tilde{P}_{k}$$, we further explain the motivation that leads us to this operator. As mentioned above, if we derive a decomposition of v by $$v_{k}=(P_{k}-P_{k-1})v$$ on the L-shape domain $$\varOmega $$, this decomposition cannot deduce the needed result $$\varepsilon \Vert v_{k} \Vert _{0}^{2}\lesssim{h_{k}^{2}}\Vert v_{k}\Vert _{a}^{2}$$. However, when we define $${v_{k}^{1}}=\left ({P_{k}^{1}}-P_{k-1}^{1}\right )v$$ and $${v_{k}^{2}}=\left ({P_{k}^{2}}-P_{k-1}^{2}\right )v$$ on the two subdomains $$\varOmega _{1}$$ and $$\varOmega _{2}$$, respectively, $$\varepsilon \left \Vert{v_{k}^{1}} \right \Vert_{0}^{2}\lesssim{h_{k}^{2}}\left \Vert{v_{k}^{1}}\right \Vert_{a}^{2}$$ and $$\varepsilon \left \Vert{v_{k}^{2}} \right \Vert_{0}^{2}\lesssim{h_{k}^{2}}\left \Vert{v_{k}^{2}}\right \Vert_{a}^{2}$$ hold on each subdomain. Therefore, we expect to define a new global operator by $${P_{k}^{1}}$$ and $${P_{k}^{2}}$$ so that their advantages can be inherited and the global a-norm stability still holds as $$P_{k}$$. A simple idea is to restrict v on the subdomains $$\varOmega _{1}$$ and $$\varOmega _{2}$$ and then perform P-projection partially, and set the results of this projection to be the outcome of a new operator. As for the nodes on the common edge, we can choose $$\varOmega _{1}$$ or $$\varOmega _{2}$$ as the region whose values are assigned to these nodes. Note that (3.20) is an example of this idea, where the values on $$\varGamma _{2}$$ are given by $${P_{k}^{2}}u$$. Then, the operator $$\tilde{P}_{k}$$ is defined completely in this way. Now, we can construct the decomposition of v in (3.12) by the operator $$\tilde{P}_{k}$$ in what follows. Let $$v_{k}=\big (\tilde{P}_{k}-\tilde{P}_{k-1}\big )v\;(k=0,1,\dots ,J)$$, where $$\tilde{P}_{-1}:=0$$. Then, we have   \begin{align} \sum_{k = 0}^{J} v_{k} = \tilde{P}_{J} v = v,\quad \forall v\in V_{J}. \end{align} (3.21)On each level, decomposing $$v_{k}\in V_{k}\;(k=1,2,\ldots,J)$$ into subspaces $$V_{k,i}$$, we have   \begin{align} v_{k} = \sum_{i = 1}^{N_{k}} v_{k,i}\,,\qquad v_{k,i}\in V_{k,i}. \end{align} (3.22) In the next subsection, we prove that the inequality (3.12) holds for the above decomposition. Here, we provide further explanation of the decomposition between different levels (Equation (3.22)). As we know, $$V_{k,i}$$ is a subspace expanded by the basis functions corresponding to the ith grid line parallel to the y axis. Recalling the anisotropic equation, we can see that (3.22) indicates a decomposition along the weak spreading direction (x direction), while holding the component along the strong spreading direction (y direction) unchanged. In Lemma 3.5 below, we can see an advantage of this decomposition, which will deduce the a-norm stability. 
3.4 Proof of convergence As mentioned in Section 3.3, when setting $${v_{k}^{m}}=\left ({P_{k}^{m}}-P_{k-1}^{m}\right )v$$, we have   \begin{align} \varepsilon\left\|{v_{k}^{m}}\right\|_{0}^{2} \lesssim{h_{k}^{2}}\left\|{v_{k}^{m}}\right\|_{a}^{2}\!, \quad m= 1,2, \quad k = 1,2,\dots,J. \end{align} (3.23)Now, we will prove the above inequality. First, we give two auxiliary lemmas. Lemma 3.2 Let T be a rectangle, $$h_{x},\ h_{y}$$ be side lengths of T and $$\Pi _{h}$$ be the bilinear interpolation operator. Then, for all $$v\in H^{2}(T)$$, we have   \begin{align} \begin{split} \|\partial_{x}(v - \Pi_{h} v) \|_{0,T}^{2} \lesssim{h_{x}^{2}}\| \partial_{xx} v\|_{0,T}^{2} + {h_{y}^{2}}\| \partial_{xy} v\|_{0,T}^{2}, \\ \|\partial_{y}(v - \Pi_{h} v) \|_{0,T}^{2} \lesssim{h_{x}^{2}}\| \partial_{yx} v\|_{0,T}^{2} + {h_{y}^{2}}\| \partial_{yy} v\|_{0,T}^{2} . \end{split} \end{align} (3.24) This lemma comes from a simple standard scaling argument, and we omit the proof. Generally, if we consider the bilinear interpolation defined on a rectangular grid of a bounded domain $$\varOmega $$, the inequalities in Lemma 3.2 also hold. From Lemma 3.2, it can be seen that the a-norm of the interpolation error could be bounded by the second derivatives of the original function. To estimate the right-hand side in Lemma 3.2 further, we need the $$H^{2}$$ regularity of the solutions in the auxiliary problems and the estimation on the second derivatives of these solutions. Therefore, we introduce the following lemma. Lemma 3.3 Let $$\varOmega =(0,\ d)\times (0,\ l\,),\ \varGamma =\{(0,\ y)\,|\,0<y<l\},\ \varGamma ^{\prime }=\partial \varOmega \backslash \varGamma $$ and $$\varepsilon \leqslant 1$$. Then, $$\forall f\in L^{2}(\varOmega )$$, the weak solution u to the following problem   \begin{align} \begin{cases} -\varepsilon \partial_{xx} u - \partial_{yy} u = f,\qquad (x,y) \in \varOmega\!,\\ \partial_{\nu} u|_{\varGamma} = 0,\quad u|_{\varGamma^{\prime}} = 0 \end{cases} \end{align} (3.25)belongs to $$H^{2}(\varOmega )$$ and satisfies the inequalities   \begin{align} \varepsilon \|u_{xx}\|_{0,{\varOmega}} \lesssim \|\,{f}\|_{0,{\varOmega}}, \end{align} (3.26)  \begin{align} \sqrt{\varepsilon} \|u_{xy}\|_{0,{\varOmega}} \lesssim \|\,{f}\|_{0,{\varOmega}},\quad \end{align} (3.27)  \begin{align}\; \|u_{yy}\|_{0,{\varOmega}} \lesssim \|\,{f}\|_{0,{\varOmega}}. \end{align} (3.28) The proof of this lemma is also a standard reflection and scaling method. We can transform the domain into $$\tilde{\varOmega }=(-d,\ d)\times (0,\ l)$$ by a reflection and transform x, y into $$\hat{x}=x/\sqrt{\varepsilon },\ \hat{y}=y$$. Then, this lemma is a standard conclusion of the $$H^{2}$$ regularity on $$\tilde{\varOmega }$$. Thus, we also omit the details of the proof. Lemma 3.3 shows the $$H^{2}$$ regularity of the solutions to the auxiliary problems. Moreover, it also gives an estimation on the second-order derivatives, which contains the anisotropic coefficient $$\varepsilon $$. Note that the above estimation has nothing to do with the domain (the only requirement is the convexity of $$\tilde{\varOmega }$$). By Lemmas 3.2 and 3.3, we obtain an estimation for the interpolation error of the solution with respect to the a-norm. In the following, we deduce the inequality (3.23) mentioned above by the Aubin–Nistche technique, which relates the $$L^{2}$$ norm and the energy norm of an energy projection error. 
Lemma 3.4 Given $$v^{m} \in H^{1}(\varOmega _{m})\;(m=1,2)$$, let $${v_{k}^{m}}:={P_{k}^{m}} v^{m}$$ be the Ritz projection of $$v^{m}$$ on $${V_{k}^{m}}$$; then,   \begin{align} \frac{\varepsilon}{(h_{k})^{2}}\left\|v^{m} - {v_{k}^{m}}\right\|_{0,\varOmega_{m}}^{2} \lesssim \left\|v^{m} - {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2}\!\!. \end{align} (3.29) Proof. Define $$g=v^{m}-{v_{k}^{m}}$$, then   \begin{align} \begin{split} \left(g, {u_{k}^{m}}\right)_{a,\varOmega_{m}} =& \left(\left(I - {P_{k}^{m}}\right) v^{m}, {u_{k}^{m}}\right)_{a, \varOmega_{m}} \\ =& \left(v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}} - \left({P_{k}^{m}} v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}}\\ =& \left(v^{m}, {u_{k}^{m}}\right)_{a,\varOmega_{m}} - \left(v^{m}, {u_{k}^{m}}\right)_{a, \varOmega_{m}} = 0, \quad \forall{u_{k}^{m}} \in{V_{k}^{m}}. \end{split} \end{align} (3.30)Let $$\phi \in V^{m}$$ be the solution satisfying the following variational equation:   \begin{align} a(\phi,u) = (g,u),\qquad \forall u \in V^{m}. \end{align} (3.31)By the equations (3.30) and (3.31), for all $${u_{k}^{m}} \in{V_{k}^{m}}$$, we have   $$\|g\|_{0,\varOmega_{m}}^{2} = (\phi, g)_{a, \varOmega_{m}} = \left(\phi - {u_{k}^{m}},g\right)_{a,\varOmega_{m}} \leqslant \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}} \|g\|_{a,\varOmega_{m}}. $$Hence,   \begin{align} \|g\|_{0,\varOmega_{m}}^{2} \leqslant \left(\inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}}\right) \|g\|_{a,\varOmega_{m}}. \end{align} (3.32)Note $$\phi $$ is the solution to the problem   \begin{align} \begin{cases} -\varepsilon \partial_{xx} \phi - \partial_{yy} \phi = g,\qquad (x,y) \in \varOmega_{m},\,\\ \partial_{\nu} \phi|_{\varGamma_{m}} = 0, \quad \phi|_{\varGamma^{\prime}_{m}} = 0. \end{cases} \end{align} (3.33)By Lemma 3.3, we have $$\phi \in H^{2}(\varOmega _{m})$$ and   \begin{align*} &\varepsilon \left(\|\partial_{xx} \phi\|_{0,\varOmega}^{2} + \|\partial_{xy} \phi\|_{0,\varOmega}^{2}\right) + \left(\|\partial_{yx} \phi\|_{0,\varOmega}^{2} + \|\partial_{yy} \phi\|_{0,\varOmega}^{2}\right) \\ \lesssim &\, \varepsilon\left(\frac{\|g\|_{0,\varOmega_{m}}^{2}}{\varepsilon^{2}}+ \frac{\|g\|_{0,\varOmega_{m}}^{2}}{{\varepsilon}}\right)+ \left(\frac{\|g\|_{0,\varOmega_{m}}^{2}}{{\varepsilon}}+\|g\|_{0,\varOmega_{m}}^{2}\right)\\ \lesssim & \frac{\|g\|_{0,\varOmega_{m}}^{2}}{\varepsilon}. \end{align*}Therefore, by Lemma 3.2, we obtain   \begin{align*} \inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}} &\leqslant \left\|\phi - \Pi_{h_{k}} \phi\right\|_{a,\varOmega_{m}}\\ &= \left(\varepsilon\|\partial_{x}(\phi - \Pi_{h_{k}} \phi)\|_{0,\varOmega_{m}}^{2} + \|\partial_{y}(\phi - \Pi_{h_{k}} \phi) \|_{0,\varOmega_{m}}^{2} \right)^{1/2}\\ &\lesssim h_{k} \left[\varepsilon \left(\|\partial_{xx} \phi\|_{0,\varOmega}^{2} + \|\partial_{xy} \phi\|_{0,\varOmega}^{2}\right) + \left(\|\partial_{yx} \phi\|_{0,\Omega}^{2} + \|\partial_{yy} \phi\|_{0,\varOmega}^{2}\right)\right]^{1/2} \\ &\lesssim h_{k} \|g\|_{0,\varOmega_{m}}/\sqrt{\varepsilon}. \end{align*}Substituting the above formula into the inequality (3.32), then   \begin{align*} \|g\|_{0,\varOmega_{m}}^{2} &\leqslant \left(\inf_{{u_{k}^{m}} \in{V_{k}^{m}}} \left\|\phi - {u_{k}^{m}}\right\|_{a,\varOmega_{m}}\right) \|g\|_{a,\varOmega_{m}}\\ &\lesssim \frac{h_{k}}{\sqrt{\varepsilon}} \|g\|_{0,\varOmega_{m}} \|g\|_{a,\varOmega_{m}} . 
\end{align*}That is to say,   \begin{align} \frac{\varepsilon}{(h_{k})^{2}}\|g\|_{0,\varOmega_{m}}^{2} \lesssim \|g\|_{a,\varOmega_{m}}^{2}, \end{align} (3.34)thus, we have proved this lemma. Here, we show a little more detail about the relationship between Lemma 3.4 and the inequality (3.23). Actually, since $$P_{k-1}^{m}\left ({P_{k}^{m}} - P_{k-1}^{m}\right ) = P_{k-1}^{m} - P_{k-1}^{m} = 0$$, we have   $${v_{k}^{m}} = \left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} = \left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} - P_{k-1}^{m}\left({P_{k}^{m}} - P_{k-1}^{m}\right) v^{m} = {v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}. $$By Lemma 3.4, since $${v_{k}^{m}} \in H^{1}(\varOmega _{m})$$, the following inequality   $$\frac{\varepsilon}{h_{k-1}^{2}}\left\|{v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}\right\|_{0,\varOmega_{m}}^{2} \lesssim \left\| {v_{k}^{m}} - P_{k-1}^{m} {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2}$$holds and it can be represented as   $$\varepsilon\left\|{v_{k}^{m}}\right\|_{0}^{2} \lesssim h_{k-1}^{2}\left\|{v_{k}^{m}}\right\|_{a}^{2}.$$Note that $$h_{k} = \gamma ^{2} h_{k-1}$$ and $$\gamma $$ is a constant; we have obtained the inequality (3.23). That is, Lemma 3.4 contains the inequality (3.23). Observing inequality (3.12) carefully, we find that the stability of the decomposition with respect to a-norm is needed, i.e., if $$v = v_{0} + \sum _{k=1}^{J} \sum _{i = 1}^{N_{k}} v_{k,i}$$ guarantees the inequality (3.12); then,   \begin{align} \|v_{0}\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \sum_{i = 1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{align} (3.35)In fact, inequality (3.35) will be used for the proof of inequality (3.12). Therefore, we need to show that the inequality (3.35) holds for the decomposition defined by (3.21) and (3.22). First, we consider the situation on one level and introduce the following lemma. Lemma 3.5 Let $$v_{k} \in V_{k}\ (k=1,2,\ldots,J)$$. If $$v_{k} = \sum _{i=1}^{N_{k}} v_{k,i}$$ with $$v_{k,i} \in V_{k,i}$$, then   \begin{align} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v_{k}\|_{a,\varOmega}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} . \end{align} (3.36) To prove Lemma 3.5, we first prove the following proposition. Proposition 3.6 If $$v_{k} \in V_{k}$$ and $$v_{k} = \sum _{i=1}^{N_{k}} v_{k,i}$$ with $$v_{k,i} \in V_{k,i}$$, then we have   \begin{align} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \leqslant 2\|v_{k}\|_{0,\varOmega}^{2}, \end{align} (3.37)  \begin{align} \sum_{i=1}^{N_{k}} \|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \leqslant 2\|\partial_{y} v_{k}\|_{0,\varOmega}^{2}. \end{align} (3.38) Proof. Since $$V_{k},\ V_{k,i}$$ are the shape function spaces based on the bilinear elements, the relationship between $$v_{k,i}$$ and $$v_{k}$$ can be expressed as below:   \begin{align} v_{k,i}(x,y) = v_{k}(x_{k,i},y)\varPhi_{k,i}(x), \end{align} (3.39)where   $$\varPhi_{k,i}(x) = \begin{cases} (x - x_{k,i - 1}) / (x_{k,i} - x_{k,i-1}),\qquad &x \in [x_{k,i-1},x_{k,i}], \\ (x_{k,i + 1} - x) / (x_{k,i+1} - x_{k,i}),\qquad &x \in [x_{k,i},x_{k,i+1}]. \end{cases}$$Then, by simple computation, we have   \begin{align} \int_{x_{k,i-1}}^{x_{k,i+1}} (\varPhi_{k,i}(x))^{2}\, \mathrm{d}x = \frac{2}{3} h_{k},\qquad \int_{x_{k,i}}^{x_{k,i+1}} \varPhi_{k,i}(x) \varPhi_{k,i+1}(x)\, \mathrm{d}x = \frac{1}{6} h_{k}. 
\end{align} (3.40)Hence,   \begin{align*} \|v_{k}\|_{0,\varOmega}^{2} =&\, \left\|\sum_{i=1}^{N_{k}} v_{k,i}\right\|_{0,\varOmega}^{2} \\ =&\, \sum_{i=1}^{N_{k}} \int_{\varOmega_{k,i}} v_{k,i}^{2} \, \mathrm{d}x\,\mathrm{d}y + 2 \sum_{i = 1}^{N_{k}-1} \int_{\varOmega_{k,i}\cap\varOmega_{k,i+1}} v_{k,i} v_{k,i+1}\ \mathrm{d}x\,\mathrm{d}y\\ =&\, \sum_{i=1}^{N_{k}} \int_{0}^{l_{y}} \int_{x_{k,i-1}}^{x_{k,i+1}} (v_{k}(x_{k,i},y)\varPhi_{k,i}(x))^{2} \ \mathrm{d}x\,\mathrm{d}y \\ &+ 2 \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} \int_{x_{k,i}}^{x_{k,i+1}} v_{k}(x_{k,i},y) v_{k}(x_{k,i+1},y) \varPhi_{k,i}(x)\varPhi_{k,i+1}(x)\ \mathrm{d}x\,\mathrm{d}y\\ =&\, \sum_{i=1}^{N_{k}} \frac{2h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y + \frac{h_{k}}{3} \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} v_{k}(x_{k,i},y)v_{k}(x_{k,i+1},y) \ \mathrm{d}y \\ \geqslant&\, \sum_{i=1}^{N_{k}} \frac{h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y + \frac{h_{k}}{6} \sum_{i = 1}^{N_{k}-1} \int_{0}^{l_{y}} (v_{k}(x_{k,i},y)+v_{k}(x_{k,i+1},y))^{2} \ \mathrm{d}y \\ \geqslant&\, \sum_{i=1}^{N_{k}} \frac{h_{k}}{3}\int_{0}^{l_{y}} (v_{k}(x_{k,i},y))^{2} \ \mathrm{d}y = \frac{1}{2} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} . \end{align*}Similarly, we can prove the inequality   \begin{align} \|\partial_{y} v_{k}\|_{0,\varOmega}^{2} \geqslant \frac{1}{2} \sum_{i=1}^{N_{k}}\|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2}. \end{align} (3.41) Now we prove Lemma 3.5. Proof. By the inverse inequality (see Shi & Wang, 2016) and Proposition 3.6, we have   \begin{align} \sum_{i=1}^{N_{k}} \|\partial_{x} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \lesssim (h_{k})^{-2} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \lesssim (h_{k})^{-2} \|v_{k}\|_{0,\varOmega}^{2} . \end{align} (3.42)Combining the inequalities (3.41) and (3.42), we obtain   \begin{align} \begin{split} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} =& \ \varepsilon \sum_{i=1}^{N_{k}} \|\partial_{x} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} + \sum_{i=1}^{N_{k}}\|\partial_{y} v_{k,i}\|_{0,\varOmega_{k,i}}^{2} \\ \lesssim& \frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} + \|v_{k}\|_{a,\varOmega}^{2}. \end{split} \end{align} (3.43) Lemma 3.5 gives an estimate for the one-level decomposition defined by (3.22). As the proof shows, this estimate depends on the choice of the subspaces $$V_{k,i}$$. When the subspaces are spanned by the basis functions associated with the grid lines parallel to the x axis, the right-hand side of the inequality (3.36) changes to $$\|v_{k}\|_{a,\varOmega }^{2} + \|v_{k}\|_{0,\varOmega }^{2}/(h_{k})^{2}$$. Furthermore, if each subspace contains only one basis function, the right-hand side becomes $$\|v_{k}\|_{0,\varOmega }^{2}/(h_{k})^{2}$$. Compared with the inequality (3.23), neither of these two choices gives a stable decomposition. In fact, the smoothers corresponding to these choices are both inefficient. In contrast, our method can be summarized as keeping the function components along the strong diffusion direction unchanged while decomposing the function along the weak diffusion direction. For the anisotropic equation, this offers the great advantage of making the decomposition stable within one level with respect to the a-norm.
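To make the one-level stability concrete, the following short Python sketch (our own illustrative check, not code from the paper; the grid size and anisotropy ratio are assumed test values) decomposes a random bilinear finite element function on the unit square into its line contributions (3.39) and compares the quantities in (3.36)–(3.38); note that the entries of the one-dimensional mass matrix below are exactly the hat-function integrals in (3.40).

import numpy as np

# Bilinear elements on a uniform n x n grid with zero Dirichlet data; a
# function is represented by its interior nodal values V[i, j] ~ v(x_i, y_j).
n, eps = 32, 1e-4                        # assumed mesh and anisotropy values
h = 1.0 / n
rng = np.random.default_rng(0)

# 1D mass and stiffness matrices on interior nodes; M[i, i] = 2h/3 and
# M[i, i+1] = h/6 are the hat-function integrals in (3.40), K[i, i] = 2/h.
I = np.eye(n - 1)
E = np.eye(n - 1, k=1) + np.eye(n - 1, k=-1)
M = h / 6.0 * (4.0 * I + E)
K = 1.0 / h * (2.0 * I - E)

V = rng.standard_normal((n - 1, n - 1))  # random nodal values

L2  = np.trace(M @ V @ M @ V.T)          # ||v||_0^2
dx2 = np.trace(K @ V @ M @ V.T)          # ||d v/d x||_0^2
dy2 = np.trace(M @ V @ K @ V.T)          # ||d v/d y||_0^2
a2  = eps * dx2 + dy2                    # ||v||_a^2

# Line contributions v_i(x, y) = v(x_i, y) * Phi_i(x): only row i of the
# nodal matrix is nonzero, so their norms use M[i, i] = 2h/3 and K[i, i] = 2/h.
rows_L2 = (2 * h / 3) * np.einsum('ij,jk,ik->i', V, M, V)
rows_dy = (2 * h / 3) * np.einsum('ij,jk,ik->i', V, K, V)
rows_dx = (2 / h) * np.einsum('ij,jk,ik->i', V, M, V)
rows_a  = eps * rows_dx + rows_dy

print('(3.37):', rows_L2.sum(), '<= 2 *', L2)    # sum ||v_i||_0^2 <= 2||v||_0^2
print('(3.38):', rows_dy.sum(), '<= 2 *', dy2)   # same for the y-derivative
print('(3.36):', rows_a.sum(), 'vs', a2 + eps / h**2 * L2)  # up to a constant

The first two printed inequalities hold with the explicit constant 2 from Proposition 3.6; the third comparison illustrates (3.36) up to the hidden constant in $$\lesssim $$.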
In the following, we consider the problem on multiple levels. In fact, we can show the multi-level a-norm stability by Lemmas 3.4 and 3.5. Theorem 3.7 (Stable decomposition). For all $$v \in V_{J}$$, let $$v =\sum _{k=0}^{J} v_{k} = v_{0} + \sum _{k=1}^{J} \sum _{i = 1}^{N_{k}} v_{k,i}$$ be the decomposition defined by the equations (3.21) and (3.22); then,   \begin{align} \sum_{k=0}^{J} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2}, \end{align} (3.44)  \begin{align} \|v_{0}\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \sum_{i = 1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim \|v\|_{a,\varOmega}^{2}. \end{align} (3.45) Proof. Let $${v_{k}^{m}} = \left ({P_{k}^{m}} - P_{k-1}^{m}\right )v$$ ($$k = 0,1,\dots ,J$$), where m = 1, 2 and $$P_{-1}^{m} = 0$$. Decompose $${v_{k}^{m}}$$ into   \begin{align*} {v_{k}^{1}} &= \sum_{i=1}^{n_{k}} v_{k,i}^{1}, \quad v_{k,i}^{1}\in V_{k,i}^{1},\\{v_{k}^{2}} &= \sum_{i=n_{k}}^{N_{k}} v_{k,i}^{2}, \quad v_{k,i}^{2}\in V_{k,i}^{2}, \end{align*}where   $$\begin{cases} V_{k,i}^{1} \ \,= V_{k,i},\qquad i = 1,2,\dots,n_{k} -1, \\ V_{k,n_{k}}^{1} = V_{k} \cap \textrm{span}\left\{{\phi}_{k,n_{k}}^{1,1},\dots,{\phi}_{k,n_{k}}^{1,N_{k,n_{k}}}\right\}, \\ V_{k,i}^{2} \ \,= V_{k,i},\qquad i = n_{k} + 1,n_{k} + 2,\dots,N_{k}, \\ V_{k,n_{k}}^{2} = V_{k} \cap \textrm{span}\left\{{\phi}_{k,n_{k}}^{2,1},\dots,{\phi}_{k,n_{k}}^{2,n_{k,n_{k}}-1}\right\}. \end{cases}$$As in the proof of Lemma 3.5, we have   \begin{align*} \sum_{i=1}^{n_{k}} \left\|v_{k,i}^{1}\right\|_{a,\varOmega_{1}}^{2} &\lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2},\\ \sum_{i=n_{k}}^{N_{k}} \left\|v_{k,i}^{2}\right\|_{a,\varOmega_{2}}^{2} &\lesssim \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}. \end{align*}Specifically,   \begin{align} \left\|v_{k,n_{k}}^{1}\right\|_{a,\varOmega_{1}}^{2} \lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2}, \end{align} (3.46)  \begin{align} \left\|v_{k,n_{k}}^{2}\right\|_{a,\varOmega_{2}}^{2} \lesssim \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}. \end{align} (3.47)Following the definitions of $$v_{k}$$ and $$\tilde{P}_{k}$$ (the equations (3.20) and (3.22)), $$v_{k}$$ can be expressed as follows:   \begin{align} \begin{cases} v_{k}|_{\varOmega_{1}} = {v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2} ,\\ v_{k}|_{\varOmega_{2}} = {v_{k}^{2}} , \end{cases} \end{align} (3.48)where $$\tilde{v}_{k,n_{k}}^{2}$$ is the reflection of $$v_{k,n_{k}}^{2}$$ across the edge $$\varGamma _{2}$$. Then, we have $$ \|\tilde{v}_{k,n_{k}}^{2} \|_{a,\varOmega _{1}} = \|v_{k,n_{k}}^{2} \|_{a,\varOmega _{2}}$$ and $$ \|\tilde{v}_{k,n_{k}}^{2} \|_{0,\varOmega _{1}} = \|v_{k,n_{k}}^{2} \|_{0,\varOmega _{2}}$$. 
Thereby,   \begin{align} \begin{split} \|v_{k}\|_{a,\varOmega}^{2} =& \|v_{k}\|_{a,\varOmega_{1}}^{2}+\|v_{k}\|_{a,\varOmega_{2}}^{2} \\ =& \left\|{v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|v_{k,n_{k}}^{1}\right\|_{a,\varOmega_{1}}^{2} + \left\|\tilde{v}_{k,n_{k}}^{2}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} + \frac{\varepsilon}{(h_{k})^{2}}\left(\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\right). \end{split} \end{align} (3.49)By Lemma 3.4, $$\varepsilon \left\|{v_{k}^{m}} \right\|_{0,\varOmega_{m}}^{2}\lesssim{h_{k}^{2}} \left\|{v_{k}^{m}} \right\|_{a,\varOmega_{m}}^{2}\ (m=1,2)$$. Thus, we have   \begin{align} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2},\quad 0\leqslant k\leqslant J. \end{align} (3.50)And because $$\left({v_{k}^{m}},\ {v_{l}^{m}}\right)_{a,\varOmega _{m}} = \left(\left({P_{k}^{m}} - P_{k-1}^{m}\right) v,\ \left({P_{l}^{m}} - P_{l-1}^{m}\right) v\right)_{a,\varOmega _{m}} = 0$$ when $$0\leqslant k,l\leqslant J,\ k\neq l$$, we deduce that   \begin{align} \sum_{k = 0}^{J} \left\|{v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2} = \left\|\sum_{k=0}^{J} {v_{k}^{m}}\right\|_{a,\varOmega_{m}}^{2} = \left\|{P_{J}^{m}} (v|_{\varOmega_{m}})\right\|_{a,\varOmega_{m}}^{2} = \| v|_{\varOmega_{m}}\|_{a,\varOmega_{m}}^{2},\ m=1,2. \end{align} (3.51)Hence,   $$\sum_{k=0}^{J} \|v_{k}\|_{a,\varOmega}^{2} \lesssim \sum_{k=0}^{J} \left(\left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} \right) = \| v|_{\varOmega_{1}}\|_{a,\varOmega_{1}}^{2}+\| v|_{\varOmega_{2}}\|_{a,\varOmega_{2}}^{2} = \|v\|_{a,\varOmega}^{2} .$$Thus, we complete the proof of the inequality (3.44). The proof of the inequality (3.45) is similar. First, by the equality (3.48), we have   \begin{align} \begin{split} \|v_{k}\|_{0,\varOmega}^{2} =& \|v_{k}\|_{0,\varOmega_{1}}^{2} + \|v_{k}\|_{0,\varOmega_{2}}^{2} \\ =& \left\|{v_{k}^{1}} - v_{k,n_{k}}^{1} + \tilde{v}_{k,n_{k}}^{2}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}} \right\|_{0,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|v_{k,n_{k}}^{1}\right\|_{0,\varOmega_{1}}^{2} + \left\|\tilde{v}_{k,n_{k}}^{2}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\\ \lesssim& \left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2},\qquad 0\leqslant k\leqslant J. 
\end{split} \end{align} (3.52)Then, recalling Lemmas 3.4 and 3.5, we derive   \begin{align} \begin{split} \|v_{0}\|_{a}^{2} + \sum_{k=1}^{J} \sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \lesssim& \ \|v_{0}\|_{a}^{2} + \sum_{k=1}^{J} \left(\frac{\varepsilon}{(h_{k})^{2}}\|v_{k}\|_{0,\varOmega}^{2} + \|v_{k}\|_{a,\varOmega}^{2}\right) \\ \lesssim&\ \sum_{k=0}^{J} \left(\frac{\varepsilon}{(h_{k})^{2}}\left(\left\|{v_{k}^{1}}\right\|_{0,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{0,\varOmega_{2}}^{2}\right) + \left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2}\right)\\ \lesssim&\ \sum_{k=0}^{J} \left(\left\|{v_{k}^{1}}\right\|_{a,\varOmega_{1}}^{2} + \left\|{v_{k}^{2}}\right\|_{a,\varOmega_{2}}^{2} \right) = \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.53)Thus, we have completed the proof. Now, we provide a preliminary analysis of the inequality (3.12). By the equations (3.11), (3.21) and (3.22), we have   $$\sum_{(l,j)\geqslant(k,i)} v_{l,\,j} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}} v_{l,\,j} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \sum_{l=k+1}^{J} v_{l} = \sum_{j = i}^{N_{k}} v_{k,\,j} + \big(v - \tilde{P}_{k} v\big). $$Thereby,   \begin{align} \begin{split} \left\|P_{k,i}\sum_{(l,\,j)\geqslant(k,i)} v_{l,\,j}\right\|_{a}^{2} =&\left\|P_{k,i}\left(\sum_{j = i}^{N_{k}}v_{k,\,j} + \sum_{l=k+1}^{J}\sum_{j=1}^{N_{l}}v_{l,\,j}\right)\right\|_{a}^{2}\\ =&\left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,\,j} + P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a}^{2} . \end{split} \end{align} (3.54) If $$P_{k}$$ were used instead of $$\tilde{P}_{k}$$, we would have $$P_{k,i}P_{k} = P_{k,i}$$ by the property of the projection operators. Thus,   $$P_{k,i}(v - P_{k} v) = P_{k,i}v - P_{k,i} v = 0,$$and in this case we would just need to estimate   \begin{align} \sum_{k = 1}^{J}\sum_{i = 1}^{N_{k}} \left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} . \end{align} (3.55)Actually, in the following Theorem 3.10, we will see that (3.55) can be estimated by Theorem 3.7. However, the rectified operator $$\tilde{P}_{k}$$ that we use in this article makes $$P_{k,i}\tilde{P}_{k} \neq P_{k,i}$$. Therefore, compared with the case where $$P_{k}$$ is used, an extra estimate is needed to complete the proof of the inequality (3.12). As in the proofs where the $$L^{2}$$ projection operator $$Q_{k}$$ is chosen to construct the decomposition between different levels (see Xu, 1992; Bramble & Zhang, 2000), we introduce a strengthened Cauchy–Schwarz inequality to assist the proof. Theorem 3.8 (Strengthened Cauchy–Schwarz inequality). Let $$v \in V_{J}$$, $$u_{k} \in V_{k}$$ and $$v_{l} = \tilde{P}_{l} v - \tilde{P}_{l-1} v$$. If l > k, then   \begin{align} \left(u_{k},v_{l}\right)_{a,\varOmega} \lesssim \gamma^{l - k - 1} \|u_{k}\|_{a,\varOmega} \|v_{l}\|_{a,\varOmega} . \end{align} (3.56) The difference between the strengthened Cauchy–Schwarz inequality and the usual one is the factor $$\gamma ^{l-k-1}$$ in (3.56), where $$\gamma $$ is the mesh ratio between two adjacent levels, satisfying $$\gamma ^{2}=h_{k}/h_{k-1}<1$$. The strengthened Cauchy–Schwarz inequality quantifies the near-orthogonality of the subspaces $$\big (\tilde{P}_{k}-\tilde{P}_{k-1}\big )V$$ appearing in the decomposition: $$\gamma ^{l-k-1}$$ bounds the cosine of the angle between two such subspaces, so the smaller $$\gamma $$ is, the closer the subspaces are to being mutually orthogonal.
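As a concrete worked example (ours, for orientation only): with uniform coarsening, $$h_{k} = h_{k-1}/2$$, so $$\gamma = 1/\sqrt{2} \approx 0.707$$ and the factor $$\gamma ^{l-k-1}$$ decays like $$2^{-(l-k-1)/2}$$; consequently, the geometric-series constants $$1/(1-\gamma ) \approx 3.41$$ and $$1/(1-\gamma )^{2} \approx 11.7$$ that enter the estimate (3.62) below are absolute constants, independent of $$\varepsilon $$, $$h_{J}$$ and the number of levels.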
The proof of Theorem 3.8 requires the following lemma. Lemma 3.9 Assume that $$\varOmega _{H}=[0, H]\times [0, l\,],\ \varOmega _{h}=[0, h]\times [0, l\,]$$ with h ∈ (0, H), and assume u is a bilinear function on $$\varOmega _{H}$$; then,   \begin{align} \left\|u\right\|_{a,\varOmega_{h}}^{2} \lesssim \frac{h}{H}\left\|u\right\|_{a,\varOmega_{H}}^{2}. \end{align} (3.57) Proof. Since u is a bilinear function on $$\varOmega _{H}$$, we know $$\partial _{x} u$$ and $$\partial _{y} u$$ are both linear functions of a single variable. Therefore,   $$\int_{\varOmega_{h}} (\partial_{x} u)^{2}\, \mathrm{d}x\,\mathrm{d}y = h {\int_{0}^{l}} (\partial_{x} u)^{2}\, \mathrm{d}y = \frac{h}{H} \int_{\varOmega_{H}} (\partial_{x} u)^{2}\, \mathrm{d}x\,\mathrm{d}y.$$Without loss of generality, we set $$\partial _{y} u=b_{1}x+b_{2}$$; then,   \begin{align*} \int_{\varOmega_{h}} (\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y &= l {\int_{0}^{h}} (b_{1} x + b_{2})^{2}\, \mathrm{d}x \\ &= lh\left(\frac{{b_{1}^{2}} h^{2}}{3} +{b_{1} b_{2} h} + {b_{2}^{2}}\right) \\ &= lh\left(\frac{1}{3}\left(b_{1} h + \frac{3}{2}b_{2}\right)^{2} + \frac{1}{4}{b_{2}^{2}}\right)\\ &\triangleq lh\cdot g(h). \end{align*}Note that g(h) is a convex quadratic function of h, so its maximum over a closed interval is attained at an endpoint, i.e.,   $$\max_{h\in[0,H]} g(h) \leqslant \max\{g(0),g(H)\} .$$Thus, by   $$g(0) = {b_{2}^{2}} \leqslant \frac{4}{3}\left(b_{1} H + \frac{3}{2}b_{2}\right)^{2} + {b_{2}^{2}} = 4\left(\frac{1}{3}\left(b_{1} H + \frac{3}{2}b_{2}\right)^{2} + \frac{1}{4}{b_{2}^{2}}\right) = 4 g(H),$$we have   $$\int_{\varOmega_{h}} (\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y = lh\cdot g(h) \leqslant 4 lh\cdot g(H) = 4\frac{h}{H} \int_{\varOmega_{H}}(\partial_{y} u)^{2}\, \mathrm{d}x\,\mathrm{d}y.$$Summarizing the above results, we have   $$\left\|u\right\|_{a,\varOmega_{h}}^{2} \leqslant 4\frac{h}{H} \left\|u\right\|_{a,\varOmega_{H}}^{2}. $$
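The explicit constant 4 in the last inequality is easy to test numerically. The following short Python sketch (our own check, with assumed strip geometry and anisotropy values; not code from the paper) evaluates both a-norms exactly for random bilinear functions and confirms that the ratio never exceeds 4h/H.

import numpy as np

# For a bilinear u(x,y) = a + b*x + c*y + d*x*y on Omega_H = [0,H]x[0,l],
# du/dx = b + d*y and du/dy = c + d*x, so the a-norm integrates in closed form.
def a_norm_sq(w, l, eps, b, c, d):
    ix = w * (b * b * l + b * d * l**2 + d * d * l**3 / 3.0)  # ||du/dx||^2 on [0,w]x[0,l]
    iy = l * (c * c * w + c * d * w**2 + d * d * w**3 / 3.0)  # ||du/dy||^2 on [0,w]x[0,l]
    return eps * ix + iy

rng = np.random.default_rng(1)
H, l, eps = 1.0, 1.0, 1e-3               # assumed geometry and anisotropy
worst = 0.0
for _ in range(10000):
    b, c, d = rng.standard_normal(3)
    h = H * rng.uniform(1e-3, 1.0)
    ratio = a_norm_sq(h, l, eps, b, c, d) / a_norm_sq(H, l, eps, b, c, d)
    worst = max(worst, ratio / (h / H))
print('max of ratio/(h/H) =', worst, ' (Lemma 3.9 predicts <= 4)')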
Lemma 3.9 gives an estimate of the a-norm of a bilinear function on a narrow band, where the bilinear function is defined on a coarser element. In the following, we use this estimate to prove Theorem 3.8. Proof of Theorem 3.8. Due to the definition of $$\tilde{P}_{l}$$, we have $$\tilde{P}_{l} v={P_{l}^{1}} v$$ on $$\varOmega _{1} \backslash \varOmega _{l, n_{l}}$$ and $$\tilde{P}_{l} v={P_{l}^{2}} v$$ on $$\varOmega _{2}$$. When $$l \geqslant k$$, the restriction of any function in $$V_{k}$$ to $$\varOmega _{m}$$ belongs to $${V_{l}^{m}}\ (m=1,2)$$. Thereby, when l > k, we have   \begin{align*} \left(u_{k}, v_{l}\right)_{a,\varOmega \setminus (\varOmega_{l-1,n_{l-1}}\cap\varOmega_{1})} &= \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, \tilde{P}_{l} v - \tilde{P}_{l-1} v \right)_{a,\varOmega_{2}}\\ &= \left(u_{k}, {P_{l}^{1}} v - P_{l-1}^{1} v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k}, {P_{l}^{2}} v - P_{l-1}^{2} v\right)_{a,\varOmega_{2}}\\ &= \left(u_{k},v - v\right)_{a,\varOmega_{1} \setminus \varOmega_{l-1,n_{l-1}}} + \left(u_{k},v - v\right)_{a,\varOmega_{2}} = 0. \end{align*}And since $$u_{k}$$ is a bilinear function on each element of the kth level, by Lemma 3.9, we have   \begin{align*} \left(u_{k},v_{l}\right)_{a,\varOmega} &= \left(u_{k},v_{l}\right)_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}} \\ &\leqslant \|u_{k}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}} \|v_{l}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}}\\ &\lesssim \sqrt{\frac{h_{l-1}}{h_{k}}}\|u_{k}\|_{a,\varOmega_{1}\cap\varOmega_{k,n_{k}}} \|v_{l}\|_{a,\varOmega_{1}\cap\varOmega_{l-1,n_{l-1}}}\\ &\leqslant \gamma^{l - k - 1} \|u_{k}\|_{a,\varOmega} \|v_{l}\|_{a,\varOmega} . \end{align*} Now, we can prove the inequality (3.12) for the decomposition defined by (3.21) and (3.22). Theorem 3.10 For every $$v \in V_{J}$$, let $$v=v_{0}+\sum _{k=1}^{J} \sum _{i=1}^{N_{k}} v_{k,i}$$ be the decomposition defined by the equations (3.21) and (3.22). Then, we have   $$\frac{\left\|P_{0} v\right\|_{a}^{2} + \sum_{k = 1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i}\left(\sum_{(l,j)\geqslant(k,i)} v_{l,\,j}\right)\right\|_{a}^{2}}{\|v\|_{a}^{2}}\leqslant C, $$where C is a constant independent of $$\varepsilon $$ and $$h_{J}$$. Proof. By (3.54), we have   \begin{align} \left\|P_{k,i}\sum_{(l,j)\geqslant(k,i)} v_{l,j}\right\|_{a}^{2} =\left\|P_{k,i}\sum_{j = i}^{N_{k}} v_{k,j} + P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a}^{2} . \end{align} (3.58)Note that for all $$u \in V_{J}$$, $$P_{k,i}u$$ is the a-norm projection of u on the subspace $$V_{k,i}$$; hence, we have   $$\|P_{k,i}u\|_{a,\varOmega}\leqslant\|u\|_{a,\varOmega_{k,i}} . $$Thereby, we obtain   \begin{align} \begin{split} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} \leqslant& \left(\sum_{j=i}^{N_{k}} \|P_{k,i} v_{k,j}\|_{a,\varOmega}\right)^{2} \\ \leqslant& \left(\sum_{j=i}^{N_{k}} \|v_{k,\,j}\|_{a,\varOmega_{k,i}}\right)^{2} = \left(\|v_{k,i}\|_{a,\varOmega_{k,i}} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}\right)^{2}, \end{split} \end{align} (3.59)where the last equality holds because $$v_{k,\,j}$$ vanishes on $$\varOmega _{k,i}$$ when j > i + 1. Then, by Theorem 3.7, we derive   \begin{align} \begin{split} \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j}\right\|_{a,\varOmega}^{2} \leqslant& \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \left( \|v_{k,i}\|_{a,\varOmega_{k,i}} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}\right)^{2} \\ \leqslant& \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} 2\left( \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} + \|v_{k,i + 1}\|_{a,\varOmega_{k,i}}^{2}\right) \\ \leqslant& \;4 \sum_{k=1}^{J}\sum_{i=1}^{N_{k}} \|v_{k,i}\|_{a,\varOmega_{k,i}}^{2} \\ \lesssim&\; \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.60)On the other hand, considering the second part of the right-hand side of (3.58), since each point of $$\varOmega $$ lies in at most two of the subdomains $$\varOmega _{k,i}$$, we obtain   \begin{align} \begin{split} \sum_{i=1}^{N_{k}} \left\|P_{k,i}\big(v-\tilde{P}_{k} v\big)\right\|_{a}^{2} =& \sum_{i=1}^{N_{k}} \left\|P_{k,i}\big(P_{k} v - \tilde{P}_{k} v\big)\right\|_{a}^{2}\\ \leqslant& \sum_{i=1}^{N_{k}} \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega_{k,i}}^{2}\\ \lesssim& \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}. 
\end{split} \end{align} (3.61)Hence, by $$P_{k} \tilde{P}_{k} v=\tilde{P}_{k} v$$ and Theorem 3.8, we have   \begin{align*} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} &= \left(P_{k} v-\tilde{P}_{k} v, P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &=\left(v-\tilde{P}_{k} v, P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &=\left(\sum_{l = k}^{J - 1} \big(\tilde{P}_{l + 1} v - \tilde{P}_{l} v\big), P_{k} v-\tilde{P}_{k} v\right)_{a, \varOmega} \\ &\lesssim \sum_{l = k}^{J - 1} \gamma^{l - k} \left\|\tilde{P}_{l + 1} v - \tilde{P}_{l} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}. \end{align*}Therefore,   \begin{align*} \sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} \lesssim\ & \sum_{k = 1}^{J - 1} \sum_{l = k}^{J - 1} \gamma^{l - k} \left\|\tilde{P}_{l + 1} v - \tilde{P}_{l} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ =\ & \sum_{k = 1}^{J - 1} \sum_{s = 0}^{J - k - 1} \gamma^{s} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ =\ & \sum_{s = 0}^{J - 2} \sum_{k = 1}^{J - s - 1} \gamma^{s} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}\left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}\\ \leqslant\ & \sum_{s = 0}^{J - 2} \gamma^{s} \left(\sum_{k = 1}^{J - s - 1} \left\|\tilde{P}_{s + k + 1} v - \tilde{P}_{s + k} v\right\|_{a,\varOmega}^{2}\right)^{1/2} \left(\sum_{k = 1}^{J - s - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}\\ \leqslant\ & \sum_{s = 0}^{J - 2} \gamma^{s} \left(\sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2} \left(\sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}\\ \leqslant\ & \frac{1}{1 - \gamma} \left(\sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2} \left(\sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a,\varOmega}^{2} \right)^{1/2}, \end{align*}and this leads to   \begin{align} \begin{split} \sum_{k = 1}^{J} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} =& \sum_{k = 1}^{J - 1} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2}\\ \lesssim& \frac{1}{(1 - \gamma)^{2}} \sum_{k = 1}^{J - 1} \left\|\tilde{P}_{k + 1} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}\\ \lesssim& \sum_{k=1}^{J - 1} \|v_{k+1}\|_{a,\varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{split} \end{align} (3.62)Note that the last inequality in the above formula follows from Theorem 3.7. Now, we obtain   \begin{align} \sum_{k = 1}^{J} \left\|P_{k} v-\tilde{P}_{k} v\right\|_{a, \varOmega}^{2} \lesssim \|v\|_{a,\varOmega}^{2} . \end{align} (3.63)Finally, by (3.58), (3.60), (3.61) and (3.63), we have   \begin{align*} &\|P_{0} v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left\|P_{k,i} \sum_{(l,j)\geqslant(k,i)} v_{l,j}\right\|_{a,\varOmega}^{2} \\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left(\left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j} \right\|_{a,\varOmega}^{2} + \left\|P_{k,i}\big(v - \tilde{P}_{k} v\big)\right\|_{a,\varOmega}^{2} \right)\\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} + \sum_{k = 1}^{J} \sum_{i = 1}^{N_{k}} \left\|P_{k,i} \sum_{j = i}^{N_{k}} v_{k,\,j} \right\|_{a,\varOmega}^{2} + \sum_{k=1}^{J} \left\|P_{k} v - \tilde{P}_{k} v\right\|_{a,\varOmega}^{2}\\ &\quad\lesssim \|v\|_{a,\varOmega}^{2} . \end{align*}Thus, we have deduced the theorem.
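The uniform bound of Theorem 3.10 can also be observed in practice. The following self-contained Python sketch (our own experiment, not code from the paper: it uses a 5-point finite difference discretization on the unit square rather than bilinear elements on the L-shape domain, and the mesh size and values of $$\varepsilon $$ are assumptions) runs a V-cycle with a line Gauss–Seidel smoother along the strongly coupled y-direction and full coarsening, and reports the observed energy-norm contraction factor, which should remain bounded away from 1 as $$\varepsilon $$ varies.

import numpy as np
from scipy.linalg import solve_banded

def line_gs(u, f, eps, h):
    # One sweep: for each vertical line x = x_i, solve a tridiagonal system in y.
    m = u.shape[0]
    band = np.zeros((3, m))
    band[0, 1:] = -1.0 / h**2                  # superdiagonal (y-coupling)
    band[1, :] = (2.0 * eps + 2.0) / h**2      # diagonal
    band[2, :-1] = -1.0 / h**2                 # subdiagonal
    for i in range(m):
        rhs = f[i].copy()
        if i > 0:
            rhs += eps / h**2 * u[i - 1]       # updated left neighbour
        if i < m - 1:
            rhs += eps / h**2 * u[i + 1]       # old right neighbour
        u[i] = solve_banded((1, 1), band, rhs)

def apply_A(u, eps, h):                        # 5-point operator, zero BC
    au = (2.0 * eps + 2.0) / h**2 * u
    au[1:] -= eps / h**2 * u[:-1]
    au[:-1] -= eps / h**2 * u[1:]
    au[:, 1:] -= u[:, :-1] / h**2
    au[:, :-1] -= u[:, 1:] / h**2
    return au

def restrict(r):                               # full weighting
    return (4 * r[1::2, 1::2]
            + 2 * (r[:-2:2, 1::2] + r[2::2, 1::2] + r[1::2, :-2:2] + r[1::2, 2::2])
            + r[:-2:2, :-2:2] + r[:-2:2, 2::2] + r[2::2, :-2:2] + r[2::2, 2::2]) / 16.0

def prolong(c):                                # bilinear interpolation
    mc = c.shape[0]
    cp = np.zeros((mc + 2, mc + 2))
    cp[1:-1, 1:-1] = c                         # pad with zero boundary values
    F = np.zeros((2 * mc + 3, 2 * mc + 3))
    F[::2, ::2] = cp
    F[1::2, ::2] = 0.5 * (cp[:-1, :] + cp[1:, :])
    F[::2, 1::2] = 0.5 * (cp[:, :-1] + cp[:, 1:])
    F[1::2, 1::2] = 0.25 * (cp[:-1, :-1] + cp[1:, :-1] + cp[:-1, 1:] + cp[1:, 1:])
    return F[1:-1, 1:-1]

def vcycle(u, f, eps, h):
    if u.shape[0] <= 3:                        # coarsest level: solve exactly
        m = u.shape[0]
        T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        A = (eps * np.kron(T, np.eye(m)) + np.kron(np.eye(m), T)) / h**2
        return np.linalg.solve(A, f.ravel()).reshape(m, m)
    line_gs(u, f, eps, h)                      # pre-smoothing
    ec = vcycle(np.zeros((u.shape[0] // 2, u.shape[0] // 2)),
                restrict(f - apply_A(u, eps, h)), eps, 2 * h)
    u = u + prolong(ec)                        # coarse-grid correction
    line_gs(u, f, eps, h)                      # post-smoothing
    return u

for eps in (1.0, 1e-2, 1e-4, 1e-6):
    n = 64
    h = 1.0 / n
    u = np.random.default_rng(3).standard_normal((n - 1, n - 1))
    f = np.zeros_like(u)
    rho = 0.0
    for _ in range(8):                         # iterate on A u = 0
        e0 = np.sum(u * apply_A(u, eps, h))
        u = vcycle(u, f, eps, h)
        e1 = np.sum(u * apply_A(u, eps, h))
        rho = np.sqrt(e1 / e0) if e0 > 0 else 0.0
        if e1 > 0:
            u /= np.sqrt(e1)                   # renormalize to avoid underflow
    print(f'eps = {eps:.0e}: energy-norm contraction per cycle ~ {rho:.3f}')

On the L-shape domain the same behaviour is predicted by the analysis above; the square is used here only to keep the sketch short.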
By Theorem 3.10 and the XZ identity, we directly derive an estimate of the a-norm of the error transfer operator $${E_{J}^{N}}$$ and thus of the operator $$E_{J}$$. Theorem 3.11 Let $$E_{J}$$ be the error transfer operator corresponding to Algorithm 2.1; then, we have   $$\|E_{J}\|_{a} \leqslant 1 - \frac{1}{C}, $$where C is a constant independent of $$\varepsilon $$ and $$h_{J}$$. Now, we have proved the uniform convergence of the multigrid method. The rectified energy projection $$\tilde{P}_{k}$$ introduced in this paper applies not only to the L-shape domain but also to other domains assembled from finitely many rectangles, for example, homocentric squares. On these domains, uniform convergence can be derived similarly from the rectified energy projection and the XZ identity. Finally, we address the more general case where $$\varOmega $$ is an arbitrary polygonal domain. In the first step, we consider the regularity of the solutions defined on the subdomains. Namely, we consider the regularity of auxiliary problems with the following mixed boundary conditions:   $$u|_{\varGamma_{1}} = 0,\ \partial_{\nu} u|_{\varGamma_{2}} = 0,$$where $$\varGamma _{1}$$ and $$\varGamma _{2}$$ intersect at some vertices in the domain. Because of the locality of the regularity, we only need to consider the solution around the corner where the boundary condition type changes. Let $$\varGamma _{1}$$ and $$\varGamma _{2}$$ be two intersecting straight lines, and let their intersection point be the origin. In polar coordinates, we assume $$\varGamma _{1},\ \varGamma _{2}$$ can be represented as:   $$\varGamma_{1} : \theta = 0, \qquad \varGamma_{2} : \theta = \theta_{0}. $$ Consider the following mixed boundary value problem on a domain surrounding the origin:   \begin{align} \begin{cases} -\varDelta u =\ 0, \qquad &0\leqslant \theta \leqslant \theta_{0},\\ u|_{\theta = 0} =\ 0, \quad &\partial_{\nu} u |_{\theta = \theta_{0}} = 0 . \end{cases} \end{align} (3.64)It can be verified that $$v = r^{\alpha}\sin(\alpha \theta )$$ with $$\alpha = \pi /(2\theta _{0})$$ is a solution of the above problem. When $$\theta _{0}> \pi /2$$, we have $$\alpha < 1$$ and hence $$v \notin H^{2}$$. In fact, if the types of boundary conditions on two adjacent edges are not the same, the angle between these edges must be less than or equal to $$\pi /2$$ for the solution to belong to $$H^{2}$$. On the other hand, suppose the boundary condition types on two adjacent edges are not the same and the angle between these two edges is less than or equal to $$\pi /2$$. Reflecting the problem across $$\varGamma _{2}$$, we obtain a Dirichlet problem defined on a convex domain. This problem has a unique solution $$\tilde{u} \in H^{2}$$, which is symmetric with respect to $$\varGamma _{2}$$, so $$\partial _{\nu } \tilde{u}|_{\varGamma _{2}} = 0$$. Let $$u = \tilde{u}|_{\varOmega }$$; then, $$u \in H^{2}$$ is the solution of the original mixed boundary problem. Now, we can give guidance on how to divide the whole domain $$\varOmega $$. On each subdomain, the auxiliary mixed boundary value problem must be such that every angle between two adjacent edges carrying different types of boundary conditions is less than or equal to $$\pi /2$$. When this condition holds, we can analyze the problem on the whole domain $$\varOmega $$ by an approach similar to the one applied to the L-shape domain and obtain the same conclusion of uniform convergence.
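The corner-singularity claim above can be checked symbolically. The following short Python sketch (our own verification, for illustration only) confirms that $$v=r^{\alpha}\sin(\alpha\theta)$$ with $$\alpha = \pi/(2\theta_{0})$$ is harmonic and satisfies the mixed boundary conditions in (3.64); note that on $$\theta = \theta_{0}$$ the normal derivative is $$r^{-1}\partial_{\theta} v$$, so it vanishes exactly when $$\partial_{\theta} v$$ does.

import sympy as sp

r, theta, theta0 = sp.symbols('r theta theta0', positive=True)
alpha = sp.pi / (2 * theta0)
v = r**alpha * sp.sin(alpha * theta)

# Laplacian in polar coordinates: v_rr + v_r / r + v_theta_theta / r^2
lap = sp.diff(v, r, 2) + sp.diff(v, r) / r + sp.diff(v, theta, 2) / r**2
print(sp.simplify(lap))                       # 0: v is harmonic
print(v.subs(theta, 0))                       # 0: Dirichlet condition on theta = 0
print(sp.diff(v, theta).subs(theta, theta0))  # 0: Neumann condition on theta = theta0

For instance, $$\theta_{0}=\pi$$ gives $$\alpha = 1/2$$, so already a straight slit with mixed boundary conditions destroys $$H^{2}$$ regularity.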
4. Conclusion

In this article, we analyze the convergence of the multigrid method for solving the anisotropic problem. On domains assembled from finitely many rectangles, such as L-shape domains and homocentric squares, we prove the uniform convergence of the algorithm and show that the convergence rate is independent of the mesh size and the anisotropic coefficient. In the second section, we introduce the ‘V’-cycle multigrid algorithm based on the line Gauss–Seidel iteration. In the third section, we deduce the error transfer operator and introduce the analysis framework based on the XZ identity, by which we convert the problem of analyzing the error transfer operator into that of constructing a decomposition of the piecewise bilinear functions. Then, we define a rectified energy projection based on the domain decomposition and use it to decompose the function. Finally, we derive the uniform convergence. In fact, the proof given in this article embodies the idea of divide-and-conquer. That is to say, when the problem does not have global regularity, we can divide it into sub-problems and analyze each sub-problem locally; combining the local information then yields the desired global result. In this article, the rectified energy projection provides the bridge between the global problem and the local problems.

Funding

The first, second and third authors were partially supported by NSFC 11421101; the third and fourth authors were partially supported by NSFC 91130011.

References

Arms, R. J. & Zondek, B. (1956) A method of block iteration. SIAM J. Appl. Math., 4, 220–229.
Bank, R. E. & Dupont, T. (1981) An optimal order process for solving finite element equations. Math. Comput., 36, 35–51.
Braess, D. & Hackbusch, W. (1983) A new convergence proof for the multigrid method including the V-cycle. SIAM J. Numer. Anal., 20, 967–975.
Bramble, J. H. & Pasciak, J. E. (1987) New convergence estimates for multigrid algorithms. Math. Comput., 49, 311–329.
Bramble, J. H., Pasciak, J. E., Wang, J. & Xu, J. (1991) Convergence estimates for multigrid algorithms without regularity assumptions. Math. Comput., 57, 23–45.
Bramble, J. H. & Zhang, X. (2000) The analysis of multigrid methods. Handbook of Numerical Analysis, vol. 7. Amsterdam: Elsevier, pp. 173–415.
Bramble, J. H. & Zhang, X. (2001) Uniform convergence of the multigrid V-cycle for an anisotropic problem. Math. Comput., 70, 453–470.
Chen, L. (2011) Deriving the X-Z identity from auxiliary space method. In: Huang, Y., Kornhuber, R., Widlund, O. & Xu, J. (eds) Domain Decomposition Methods in Science and Engineering XIX. Lecture Notes in Computational Science and Engineering, vol. 78. Berlin, Heidelberg: Springer.
Cuthill, E. H. & Varga, R. S. (1959) A method of normalized block iteration. J. ACM, 6, 236–244.
Hackbusch, W. (1985) Multi-Grid Methods and Applications. Berlin: Springer.
Iorio, R. & Magalhães, V. d. (2001) Fourier Analysis and Partial Differential Equations. Cambridge: Cambridge University Press.
Lee, Y. J., Wu, J., Xu, J. & Zikatanov, L. (2008) A sharp convergence estimate for the method of subspace corrections for singular systems of equations. Math. Comput., 77, 831–850.
Mikhail, S. & Sungwon, C. (2007) Hölder regularity of solutions to second-order elliptic equations in nonsmooth domains. Bound. Value Probl., 2007.
Neuss, N. (1998) V-cycle convergence with unsymmetric smoothers and application to an anisotropic model problem. SIAM J. Numer. Anal., 35, 1201–1212.
Shi, Z. & Wang, M. (2016) Finite Element Methods. Beijing: Science Press.
Stevenson, R. (1993) New estimates of the contraction number of V-cycle multi-grid with applications to anisotropic equations. Incomplete Decompositions, Proceedings of the Eighth GAMM Seminar. Notes on Numerical Fluid Mechanics, 41, 159–167.
Wu, Y., Chen, L., Xie, X. & Xu, J. (2012) Convergence analysis of V-cycle multigrid methods for anisotropic elliptic equations. IMA J. Numer. Anal., 32, 573–598.
Xu, J. (1992) Iterative methods by space decomposition and subspace correction. SIAM Review, 34, 581–613.
Xu, J. & Zikatanov, L. (2002) The method of alternating projections and the method of subspace corrections in Hilbert space. J. Amer. Math. Soc., 15, 573–598.
Yosida, K. (1964) Functional Analysis. Berlin: Springer.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
