Error estimates of a finite difference method for the Klein–Gordon–Zakharov system in the subsonic limit regime

Abstract We establish the error estimates of a conservative finite difference method (CFDM) for the Klein–Gordon–Zakharov (KGZ) system with a dimensionless parameter $${\varepsilon} \in (0,1]$$, which is inversely proportional to the speed of sound. When $${\varepsilon} \rightarrow 0^+$$, the KGZ system collapses to the Klein–Gordon (KG) equation. In the subsonic limit regime, i.e., $$0<{\varepsilon}\ll1$$, the solution propagates waves with wavelength $$O({\varepsilon})$$ and $$O(1)$$ in time and space, respectively, with amplitude $$O({\varepsilon}^{\alpha^\dagger})$$, where $$\alpha^\dagger=\min\{\alpha,1+\beta,2\}$$, and $$\alpha\ge 0$$ and $$\beta\ge-1$$ are two parameters describing the incompatibility of the initial data of the KGZ system with respect to the limiting KG equation. This oscillation in time brings difficulties in designing numerical methods and establishing their error estimates in the subsonic limit regime. In this article, we propose a CFDM and analyse its error bounds. By applying the energy method and the limiting KG equation, we obtain two independent error bounds at $$O(h^2/{\varepsilon}^{1-\alpha^*}+\tau^2/{\varepsilon}^{3-\alpha^\dagger})$$ and $$O(h^2+\tau^2+{\varepsilon}^{\alpha^*})$$ with mesh size $$h$$, time step $$\tau$$ and $$\alpha^*=\min\{1,\alpha,1+\beta\}$$. Hence, we obtain uniform error bounds at $$O(h^2+\tau^{2/(4-\alpha^\dagger)})$$ when $$\alpha\ge1$$ and $$\beta\ge 0$$. For $$\alpha=0$$ or $$\beta=-1$$, the results suggest that the meshing strategy requirement of the CFDM is $$h=O({\varepsilon}^{1/2})$$ and $$\tau=O({\varepsilon}^{3/2})$$ for $$0<{\varepsilon}\ll1$$. 1. Introduction The Klein–Gordon–Zakharov (KGZ) system describes the interaction between Langmuir waves and ion acoustic waves in a plasma, and reads as follows under a proper nondimensionalization (see Masmoudi & Nakanishi, 2005): \begin{equation}\label{KGZ} \left\{ \begin{aligned} &{\gamma}^2{\partial}_{tt}E({\bf x},t)-{\it{\Delta}} E({\bf x},t)+\frac{1}{{\gamma}^2}E({\bf x},t)+N({\bf x},t)E({\bf x},t)=0,\\ &{\varepsilon}^2 {\partial}_{tt} N({\bf x},t)-{\it{\Delta}} N({\bf x},t)-{\it{\Delta}}|E|^2({\bf x},t)=0, \quad {\bf x} \in \mathbb{R}^d, \quad t>0, \end{aligned} \right. \end{equation} (1.1) with initial conditions \begin{equation}\label{init} E({\bf x},0)=E_0({\bf x}),\quad {\partial}_t E({\bf x},0)=E_1({\bf x}),\quad N({\bf x},0)=N_0({\bf x}),\quad {\partial}_t N ({\bf x},0)=N_1({\bf x}). \end{equation} (1.2) Here, $$t$$ is time, $${\bf x}$$ is the spatial variable, $$E({\bf x},t)$$ and $$N({\bf x},t)$$ are real-valued functions representing the fast time scale component of the electric field raised by electrons and the deviation of the ion density from its equilibrium, respectively, and $$\gamma, \varepsilon \in (0,1]$$ are two dimensionless parameters that are inversely proportional to the plasma frequency and the speed of sound, respectively. The KGZ system can be derived from the Euler equations for the electrons and ions, coupled with the Maxwell equation for the electric field, by neglecting the magnetic effect and assuming that the ions move much slower than the electrons (refer to Zakharov (1972), Sulem & Sulem (1979), Dendy (1990) and Bergé et al. (1996) for physical and formal derivations, and Texier (2005) for mathematical justifications).
As is known, (1.1) is time symmetric or time reversible and conserves the total energy (see, e.g., Masmoudi & Nakanishi, 2005, 2008) \begin{equation}\label{energ} \mathcal {H}(t):=\int_{\mathbb{R}^d}\bigg[{\gamma}^2|{\partial}_tE|^2+|\nabla E|^2+\frac{1}{{\gamma}^2}E^2+\frac{{\varepsilon}^2}{2} |\nabla \varphi|^2+\frac{1}{2}N^2+NE^2\bigg]\,{\rm{d}}{\bf x}\equiv \mathcal {H}(0),\quad t\ge 0, \end{equation} (1.3) where $$\varphi$$ is defined by $${\it{\Delta}}\varphi={\partial}_t N$$ with $$\lim\limits_{|{\bf x}|\rightarrow \infty}{\varphi}=0$$. For the KGZ system (1.1) with $${\gamma}={\gamma}_0$$ and $${\varepsilon}={\varepsilon}_0$$, i.e., in the $$O(1)$$-plasma-frequency and $$O(1)$$-acoustic-speed regime, there are extensive analytical and numerical results in the literature. For the local well-posedness of the Cauchy problem in the energy space $$H^1\times L^2$$, we refer to Ozawa & Tsutsumi (1992) and references therein. On the numerical side, we refer to Wang et al. (2007) for a finite difference method and Bao et al. (2013) for an exponential-wave-integrator Fourier pseudospectral method. Various singular limits have been considered for the KGZ system, and a number of approximation results have been established (see, e.g., Masmoudi & Nakanishi, 2005, 2008; Daub et al., 2016). Based on the results in Masmoudi & Nakanishi (2005), Masmoudi & Nakanishi (2008) and Daub et al. (2016), in the high-plasma-frequency limit, i.e., $${\gamma} \rightarrow 0^+$$, the KGZ system converges to the Zakharov system; and in the subsonic limit, i.e., $${\varepsilon} \rightarrow 0^+$$, the KGZ system converges to the Klein–Gordon (KG) equation (see, e.g., Daub et al., 2016). Specifically, Daub et al. (2016) rigorously proved the convergence of the KGZ system to a modified KG equation defined on a $$d$$-dimensional ($$d\le 3$$) torus or to the KG equation on the whole space $$\mathbb{R}^d$$ with $$d\ge3$$ in the subsonic limit. We now consider $$\gamma=1$$ and $${\varepsilon} \rightarrow 0^+$$, in which case we denote the solution of (1.1) with (1.2) as $$E^{\varepsilon}({\bf x},t)$$, $$N^{\varepsilon}({\bf x},t)$$, and the initial data in (1.2) are denoted by $$\left(E_0({\bf x}), E_1({\bf x}), N_0^{\varepsilon}({\bf x}), N_1^{\varepsilon}({\bf x})\right)$$. When $${\varepsilon}\rightarrow 0^+$$, i.e., in the subsonic limit regime, formally we have $$E^{\varepsilon}({\bf x},t)\rightarrow E({\bf x},t)$$, $$N^{\varepsilon}({\bf x},t)\rightarrow N({\bf x},t)=-E({\bf x},t)^2$$, where $$E:=E({\bf x},t)$$ is the solution of the KG equation (see Daub et al., 2016) \begin{equation}\label{KG1} \left\{ \begin{aligned} &{\partial}_{tt}E({\bf x},t)-{\it{\Delta}} E({\bf x},t)+E({\bf x},t)-E({\bf x},t)^3=0,\quad {\bf x}\in \mathbb{R}^d, \quad t>0,\\ & E({\bf x},0)=E_0({\bf x}),\quad {\partial}_t E({\bf x},0)=E_1({\bf x}),\quad {\bf x} \in \mathbb{R}^d.\\ \end{aligned} \right. \end{equation} (1.4) The convergence rate depends on the compatibility of the initial data ($$E_0,E_1,N_0^{\varepsilon},N_1^{\varepsilon}$$) of (1.1) with respect to (1.4), which can be characterized as \begin{equation}\label{inc} N_0^{\varepsilon}({\bf x})=-E_0({\bf x})^2+{\varepsilon}^\alpha {\omega}_0({\bf x}),\quad N_1^{\varepsilon}({\bf x})=-2E_0({\bf x})E_1({\bf x})+{\varepsilon}^\beta {\omega}_1({\bf x}), \end{equation} (1.5) where $$\alpha\ge 0$$ and $$\beta\ge -1$$ are parameters describing the incompatibility of the initial data, and $${\omega}_0({\bf x})$$, $${\omega}_1({\bf x})$$ are given functions independent of $${\varepsilon}$$.
Following the results in the literature (Ozawa & Tsutsumi, 1992; Masmoudi & Nakanishi, 2008; Bao & Su, 2017a), when $$0<{\varepsilon}\ll1$$, the solution of the KGZ system (1.1) propagates waves with wavelength $$O({\varepsilon})$$ and $$O(1)$$ in time and space, respectively (cf. Fig. 1). Furthermore, the amplitude of the initial layer is $$O({\varepsilon}^{\alpha^\dagger})$$, where $$\alpha^\dagger:=\min\{\alpha,1+\beta,2\}$$. Specifically, when $$\alpha\ge 2$$ and $$\beta\ge 1$$, the leading oscillation comes from the `$${\varepsilon}^2{\partial}_{tt}$$' term and otherwise from the incompatibility of the initial data. To illustrate the oscillation further, Fig. 1 shows the solutions $$N^{\varepsilon}(x,1)$$ and $$N^{\varepsilon}(1,t)$$ of the KGZ system (1.1) for $$d=1$$, $$E_0(x)=e^{-x^2/2}$$, $$E_1(x)=e^{-x^2/3}\sin x$$, $${\omega}_0(x)=e^{\frac{1}{x^2-18^2}}\cos (2x)\chi_{(-18,18)}$$ with $$\chi$$ the characteristic function, $$\alpha=\beta=0$$ and $${\omega}_1(x)\equiv0$$ in (1.5) for different $${\varepsilon}$$, which were obtained numerically on a bounded computational interval $$[-200,200]$$ with the homogeneous Dirichlet boundary condition (see, e.g., Bao et al., 2013). Fig. 1. The temporal oscillation of the KGZ system (1.1) and the solution of the KG equation (1.4) for $$d=1$$. The highly oscillatory nature in time of the solution of the KGZ system brings significant numerical difficulties, especially in the subsonic limit regime, i.e., $$0 < {\varepsilon}\ll 1$$. Recently, different numerical methods were proposed and analysed for the efficient computation of the KG equation in the nonrelativistic limit regime (see, e.g., Bao et al., 2014; Faou & Schratz, 2014) and the Zakharov system in the subsonic limit regime (see, e.g., Bao & Su, 2017b; Cai & Yuan, 2017). To our knowledge, there are so far few results on the numerics of the KGZ system in the subsonic limit regime. The main aim of this article is to carry out an error analysis of a conservative finite difference method (CFDM) for the KGZ system in the subsonic limit regime, i.e., $${\varepsilon} \rightarrow 0^+$$. In particular, we pay attention to how the error bounds depend explicitly on the small parameter $${\varepsilon}$$, the mesh size $$h$$ and the time step $$\tau$$. The rest of the article is organized as follows. In Section 2, we introduce a CFDM and our main results. Section 3 is devoted to the details of the error analysis. Numerical results are reported in Section 4. Some conclusions are drawn in Section 5. Throughout the article, we adopt the standard Sobolev spaces and write $$A\lesssim B$$ to mean that there exists a generic constant $$C$$ independent of $${\varepsilon}$$, $$\tau$$, $$h$$, such that $$|A|\le C\,B$$. 2. The numerical methods and main results For simplicity of notation, we will only present the numerical methods for the KGZ system in one space dimension, and extensions to higher dimensions are straightforward.
Practically, (1.1) is truncated on a bounded interval $${\it{\Omega}}=(a,b)$$, with zero Dirichlet boundary condition \begin{equation} \label{KGZ1d} \left\{ \begin{aligned} & {\partial}_{tt}E^{\varepsilon}(x,t)-{\partial}_{xx} E^{\varepsilon}(x,t)+E^{\varepsilon}(x,t)+N^{\varepsilon}(x,t)E^{\varepsilon}(x,t)=0,\quad x \in {\it{\Omega}}, \quad t>0,\\ &{\varepsilon}^2 {\partial}_{tt} N^{\varepsilon}(x,t)-{\partial}_{xx} N^{\varepsilon}(x,t)-{\partial}_{xx}|E^{\varepsilon}|^2(x,t)=0, \quad x \in {\it{\Omega}}, \quad t>0,\\ &E^{\varepsilon}(x,0)=E_0(x),\quad {\partial}_t E^{\varepsilon}(x,0)=E_1(x),\quad N^{\varepsilon}(x,0)=N^{\varepsilon}_0(x),\quad {\partial}_t N^{\varepsilon} (x,0)=N^{\varepsilon}_1(x),\quad x \in \overline{{\it{\Omega}}}, \\ &E^{\varepsilon}(a,t)=E^{\varepsilon}(b,t)=0,\quad N^{\varepsilon}(a,t)=N^{\varepsilon}(b,t)=0, \quad t> 0. \end{aligned} \right . \end{equation} (2.1) As $${\varepsilon}\rightarrow 0^+$$, the KGZ (2.1) collapses to the KG equation (see, e.g., Daub et al., 2016) \begin{equation}\label{KG} \left\{ \begin{aligned} &{\partial}_{tt}E(x,t)-{\partial}_{xx}E(x,t)+E(x,t)-E(x,t)^3=0,\quad x\in {\it{\Omega}}, \quad t>0,\\ & E(x,0)=E_0(x),\quad {\partial}_t E(x,0)=E_1(x),\quad x \in \overline{{\it{\Omega}}};\qquad E(a,t)=E(b,t)=0,\quad t>0. \end{aligned} \right. \end{equation} (2.2) Here, we assume that the initial data ($$N^{\varepsilon}_0, N^{\varepsilon}_1$$) satisfies the condition \begin{equation} \label{inic} N^{\varepsilon}_0(x)=-E_0(x)^2+{\varepsilon}^{\alpha} {\omega}_0(x), \quad N^{\varepsilon}_1(x)=-2E_0(x)E_1(x)+{\varepsilon}^{\beta} {\omega}_1(x), \end{equation} (2.3) where $${\alpha}\ge 0$$ and $${\beta}\ge -1$$ are parameters describing the consistency of the initial data of the KGZ system (2.1) with respect to (2.2), and $${\omega}_0(x)$$, $${\omega}_1(x)$$ are independent of $${\varepsilon}$$. 2.1 The finite difference method Choose a mesh size $$h:={\it{\Delta}} x=(b-a)/M$$ with $$M$$ being a positive integer and a time step $$\tau:={\it{\Delta}} t>0$$, and denote grid points and time steps as $$x_j:=a+jh,\quad j=0,1,\ldots,M;\quad t_k:=k\tau,\quad k=0,1,2,\ldots.$$ Define the index sets $$\mathcal {T}_M=\{\,j \ | \ j=1,2,\ldots,M-1\}$$, $$\mathcal{T}_M^0=\{j\ |\ j=0,1,\ldots,M\}$$. Let $$E^{{\varepsilon},k}_j$$, $$N^{{\varepsilon},k}_j$$ and $$E^k_j$$ be the approximations of $$E^{{\varepsilon}}(x_j,t_k)$$, $$N^{\varepsilon}(x_j,t_k)$$ and $$E(x_j,t_k)$$, respectively, and $$E^{{\varepsilon},k}$$, $$N^{{\varepsilon},k}$$, $$E^k \in \mathbb{R}^{(M+1)}$$ be the numerical solution vectors at $$t=t_k$$. The finite difference operators are the standard notations as follows: \begin{align*} &{\delta}_x^+E_j^k=\frac{E_{j+1}^k-E_j^k}{h},\quad {\delta}_t^+E_j^k=\frac{E_j^{k+1}-E_j^k}{\tau},\quad {\delta}_t^c E_j^k=\frac{E_j^{k+1}-E_j^{k-1}}{2\tau},\\ &{\delta}_x^2E_j^k=\frac{E_{j+1}^k-2E_j^k+E_{j-1}^k}{h^2}, \quad {\delta}_t^2E_j^k=\frac{E_j^{k+1}-2E_j^k+E_j^{k-1}}{\tau^2}. \end{align*} In this article, we consider the conservative finite difference discretization of (2.1) as follows \begin{gather}\label{Dpsi1} {\delta}_t^2E_j^{{\varepsilon},k}=\big({\delta}_x^2-1-N_j^{{\varepsilon},k}\big) \frac{E_j^{{\varepsilon},k+1}+E_j^{{\varepsilon},k-1}}{2},\\ \end{gather} (2.4a) \begin{gather}\label{Dpsi2} {\varepsilon}^2{\delta}_t^2N_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2\left(N_j^{{\varepsilon},k+1}+N_j^{{\varepsilon},k-1}\right)+ {\delta}_x^2|E_j^{{\varepsilon},k}|^2, \quad j\in \mathcal{T}_M, \quad k\ge 1. 
\end{gather} (2.4b) This is a semi-implicit finite difference scheme and only a linear system needs to be solved at every time step. Thus, it is more efficient than the CFDM in Wang et al. (2007), where a fully nonlinear coupled system must be solved. Furthermore, the initial and boundary conditions are discretized as \[E_j^{{\varepsilon},0}=E_0(x_j),\quad N_j^{{\varepsilon},0}=-E_0(x_j)^2+{\varepsilon}^{\alpha}{\omega}_0(x_j);\quad E_0^{{\varepsilon},k}=E_M^{{\varepsilon},k}=N_0^{{\varepsilon},k}=N_M^{{\varepsilon},k}=0,\quad k\ge 0.\] Next we consider the first-step values $$E_j^{{\varepsilon},1}$$ and $$N_j^{{\varepsilon},1}$$. By Taylor expansion, we get \begin{align} E_j^{{\varepsilon},1}&=E_0(x_j)+\tau E_1(x_j)+ \frac{\tau^2}{2}\left[E_0''(x_j)-E_j^{{\varepsilon},0}-N_j^{{\varepsilon},0}E_j^{{\varepsilon},0}\right]\!,\label{psi1}\\ \end{align} (2.5) \begin{align} N_j^{{\varepsilon},1}&= N_j^{{\varepsilon},0}+\tau\left[-2E_j^{{\varepsilon},0}E_1(x_j)+{\varepsilon}^{\beta}{\omega}_1(x_j)\right] +\frac{\tau^2}{2}{\varepsilon}^{{\alpha}-2}{\omega}_0''(x_j).\label{phi0} \end{align} (2.6) When $$0\le \alpha<2$$ or $$-1\le \beta<0$$, the above approximation is not appropriate if $${\varepsilon}\ll 1$$, in which case $$\tau$$ has to be very small to bound $$N_j^{{\varepsilon},1}$$. To obtain a first-step value $$N_j^{{\varepsilon},1}$$ that is uniformly bounded for $${\varepsilon} \in (0,1]$$, similar to Cai & Yuan (2017), we replace (2.6) by the modified version \begin{equation} N_j^{{\varepsilon},1}=N_j^{{\varepsilon},0}-2\tau E_j^{{\varepsilon},0}E_1(x_j)+{\varepsilon}^{1+{\beta}}\sin\left(\frac{\tau}{{\varepsilon}}\right){\omega}_1(x_j) +2\sin^2{\left(\frac{\tau}{2{\varepsilon}}\right)}{\varepsilon}^{\alpha}{\omega}_0''(x_j).\label{phi1} \end{equation} (2.7)
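To make the cost per time level concrete, the following minimal Python sketch advances (2.4) by one step, assuming the first-step values (2.5) and (2.7) are already available. It uses dense linear algebra and our own function and variable names purely for illustration; in an actual implementation the two symmetric tridiagonal systems would be solved in $$O(M)$$ operations.

```python
import numpy as np

# Minimal sketch (not the authors' code): one time level of the semi-implicit
# scheme (2.4) on the interior grid points x_1, ..., x_{M-1} with homogeneous
# Dirichlet boundary conditions. All arrays hold interior values only.

def laplacian_1d(M, h):
    """Dense matrix of the operator delta_x^2 acting on interior values."""
    return (np.diag(-2.0 * np.ones(M - 1))
            + np.diag(np.ones(M - 2), 1)
            + np.diag(np.ones(M - 2), -1)) / h**2

def step_kgz(E_old, E_cur, N_old, N_cur, tau, h, eps):
    """Advance (E, N) from time levels k-1, k to level k+1 via (2.4a)-(2.4b)."""
    M = E_cur.size + 1
    L = laplacian_1d(M, h)
    I = np.eye(M - 1)

    # (2.4a): [I/tau^2 - A/2] E^{k+1} = (2/tau^2) E^k + [-I/tau^2 + A/2] E^{k-1},
    # with A = delta_x^2 - 1 - diag(N^k).
    A = L - I - np.diag(N_cur)
    E_new = np.linalg.solve(I / tau**2 - 0.5 * A,
                            (2.0 / tau**2) * E_cur + (-I / tau**2 + 0.5 * A) @ E_old)

    # (2.4b): [(eps/tau)^2 I - L/2] N^{k+1}
    #         = 2 (eps/tau)^2 N^k + [-(eps/tau)^2 I + L/2] N^{k-1} + L (E^k)^2.
    N_new = np.linalg.solve((eps / tau)**2 * I - 0.5 * L,
                            2.0 * (eps / tau)**2 * N_cur
                            + (-(eps / tau)**2 * I + 0.5 * L) @ N_old
                            + L @ E_cur**2)
    return E_new, N_new
```

The coefficient matrix of the second solve is symmetric positive definite for all $$\tau$$, $$h$$ and $${\varepsilon}$$, and the first one is as well once $$\tau$$ is small enough relative to $$\|N^{{\varepsilon},k}\|_\infty$$, so both linear systems are uniquely solvable for sufficiently small time steps.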
2.2 Main results For simplicity of notation, denote $$\alpha^*=\min \{1,\alpha,1+\beta\},\quad \alpha^\dagger=\min\{\alpha,1+\beta,2\}.$$ Let $$T^*>0$$ be the maximum common existence time for the solution of the KGZ system (2.1) and the solution of the KG equation (2.2). Then, for $$0<T<T^*$$, according to the known results in Masmoudi & Nakanishi (2008), Ozawa & Tsutsumi (1992), Schochet & Weinstein (1986), Cai & Yuan (2017) and Bao & Su (2017b), we can assume that the exact solution $$(E^{\varepsilon}(x,t), N^{\varepsilon}(x,t))$$ of the KGZ system (1.1) and the solution $$E(x,t)$$ of the KG equation (2.2) are smooth enough and satisfy \begin{equation*} {\rm (A)}\hskip3cm \begin{split} &\|N^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+\|E^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+\|E_t^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+ \|E_{tt}^{\varepsilon}\|_{W^{2,\infty}(\it{\Omega})}+\|E\|_{W^{3,\infty}([0,T];W^{2,\infty}(\it{\Omega}))} \lesssim 1,\\ &\|{\partial}_t^3E^{\varepsilon}\|_{W^{1,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{1-\alpha^*}},\quad \|{\partial}_t^4E^{\varepsilon}\|_{L^\infty(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{2-\alpha^\dagger}},\quad \|{\partial}_t N^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{1-\alpha^*}},\\ &\|{\partial}_t^mN^{\varepsilon}\|_{W^{2,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{m-\alpha^\dagger}}, \quad m=2,\,3; \qquad\|{\partial}_t^m N^{\varepsilon}\|_{L^\infty(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{m-\alpha^\dagger}},\quad m=4,\, 5, \end{split} \end{equation*} with the convergence (see, e.g., Daub et al., 2016) \begin{equation*} {\rm (B)}\hskip3cm \|E^{\varepsilon}-E\|_{L^\infty([0,T];H^1(\it{\Omega}))}\lesssim {\varepsilon}, \qquad \|N^{\varepsilon}-N\|_{L^\infty([0,T];L^\infty(\it{\Omega}))}\lesssim {\varepsilon}^{\alpha^*}.\hskip7cm \end{equation*} Furthermore, we assume that the initial data satisfy \begin{equation*} {\rm (C)}\hskip3cm \|E_0\|_{W^{2,\infty}(\it{\Omega})}+\|E_1\|_{L^\infty(\it{\Omega})}+\|{\omega}_0\|_{W^{2,\infty}(\it{\Omega})}+ \|{\omega}_1\|_{L^\infty(\it{\Omega})}\lesssim 1.\hskip6cm \end{equation*} To measure the error between the exact solution and the numerical solution of the KGZ system, we introduce some notation. Denote $$X_M=\{v=(v_j)_{j\in\mathcal{T}_M^0}\ |\ v_0=v_M=0\} \subseteq \mathbb{R}^{M+1}.$$ The norms and inner products over $$X_M$$ are defined as \begin{align*} (u,v)=h\sum\limits_{j=1}^{M-1}u_j v_j, \quad \langle u,v\rangle=h\sum\limits_{j=0}^{M-1}u_j v_j,\quad \|u\|^2=(u,u), \quad \|{\delta}_x^+ u\|^2=\langle {\delta}_x^+u,{\delta}_x^+u\rangle, \quad \|u\|_\infty=\sup\limits_{j\in \mathcal{T}_M^0}|u_j|. \end{align*} Then, for sequences in $$X_M$$, we have the summation-by-parts identities \begin{equation}\label{part-f} (-{\delta}_x^2 u,v)=\langle{\delta}_x^+ u,{\delta}_x^+ v\rangle,\quad ((-{\delta}_x^2)^{-1}u,v)=(u,(-{\delta}_x^2)^{-1}v). \end{equation} (2.8) Define the error functions $$e^{{\varepsilon},k}$$, $$n^{{\varepsilon},k}$$ as $$e^{{\varepsilon},k}_j=E^{\varepsilon}(x_j,t_k)-E_j^{{\varepsilon},k},\quad n_j^{{\varepsilon},k}=N^{\varepsilon}(x_j,t_k)-N_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 0.$$ Then we have the following error estimates.
Theorem 2.1 Under assumptions (A)–(C), there exist $$h_0>0$$ and $$\tau_0>0$$ sufficiently small such that, when $$0<h\le h_0$$ and $$0<\tau\le \tau_0$$, the scheme (2.4) with (2.5)–(2.7) satisfies the error estimates \begin{align} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|&\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2} {{\varepsilon}^{3-\alpha^\dagger}},\quad 0\le k\le \frac{T}{\tau},\quad 0<{\varepsilon}\le1,\label{esti1}\\ \end{align} (2.9) \begin{align} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|&\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\quad 0\le k\le \frac{T}{\tau},\quad 0<{\varepsilon}\le1.\label{esti2} \end{align} (2.10) Hence, for $$\alpha>0$$ and $$\beta>-1$$, we have the uniform, $${\varepsilon}$$-independent convergence rate \begin{equation}\label{esti3} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|\lesssim \left\{ \begin{aligned} & h^2+\tau^{\frac{2}{4-\alpha^\dagger}},\quad \,\,\,\,\,\alpha\ge 1 \quad\mathrm{and}\quad \beta\ge 0,\\ & (h^2+\tau^2)^{\frac{1}{3}\alpha^*},\quad 0<\alpha<1\quad\mathrm{or}\quad -1<\beta<0, \end{aligned}\right. \end{equation} (2.11) and for initial data leading to an $$O(1)$$-amplitude initial layer ($$\alpha=0$$ or $$\beta=-1$$), (2.9) yields the meshing strategy $$h=O({\varepsilon}^{1/2})$$, $$\tau=O({\varepsilon}^{3/2})$$. 3. Energy conservation and error estimates 3.1 Energy conservation Proposition 3.1 For the scheme (2.4), the following quantities are conserved: \begin{align*} \mathcal{H}^k&=\|{\delta}_t^+E^{{\varepsilon},k}\|^2+\frac{1}{2}\left(\|{\delta}_x^+E^{{\varepsilon},k}\|^2+\|{\delta}_x^+ E^{{\varepsilon},k+1}\|^2+\|E^{{\varepsilon},k}\|^2+\|E^{{\varepsilon},k+1}\|^2\right) +\frac{{\varepsilon}^2}{2}\|{\delta}_x^+\varphi^{{\varepsilon},k+\frac{1}{2}}\|^2\\ &\quad+\frac{1}{4}\left(\|N^{{\varepsilon},k}\|^2 +\|N^{{\varepsilon},k+1}\|^2\right)+\frac{1}{2}\big[(N^{{\varepsilon},k},|E^{{\varepsilon},k+1}|^2) +\left(N^{{\varepsilon},k+1},|E^{{\varepsilon},k}|^2\right)\big]\equiv \mathcal{H}^0, \end{align*} where $$\varphi^{{\varepsilon},k+1/2}\in X_M$$ is the numerical approximation of $$\varphi^{\varepsilon}(\cdot,t_{k+1/2})$$, which is the solution of \[{\delta}_x^2\varphi_j^{{\varepsilon},k+1/2}={\delta}_t^+N_j^{{\varepsilon},k},\quad j\in\mathcal{T}_M,\quad k\ge 0.\] In particular, we have the following a priori bound for $$E^{{\varepsilon},k}$$ in one dimension: \begin{equation}\label{bound} \|E^{{\varepsilon},k}\|_\infty\lesssim \sqrt{\|E^{{\varepsilon},k}\|\|{\delta}_x^+E^{{\varepsilon},k}\|} \le C_e,\quad k\ge 0. \end{equation} (3.1) Proof. Multiplying (2.4a) by $$E_{j}^{{\varepsilon},k+1}-E_{j}^{{\varepsilon},k-1}$$ and summing over $$j \in \mathcal{T}_M$$, one has \begin{equation}\label{first} \begin{split} &\|{\delta}_t^+E^{{\varepsilon},k}\|^2-\|{\delta}_t^+E^{{\varepsilon},k-1}\|^2+\frac{1}{2} (\|E^{{\varepsilon},k+1}\|^2-\|E^{{\varepsilon},k-1}\|^2)\\ &\quad+\frac{1}{2} \left(\|{\delta}_x^+E^{{\varepsilon},k+1}\|^2-\|{\delta}_x^+E^{{\varepsilon},k-1}\|^2\right)+\frac{1}{2}\left(N^{{\varepsilon},k}, |E^{{\varepsilon},k+1}|^2-|E^{{\varepsilon},k-1}|^2\right)=0.
\end{split} \end{equation} (3.2) Similarly, multiplying (2.4b) by $$\tau\big({\varphi}_j^{{\varepsilon},k-1/2}+{\varphi}_j^{{\varepsilon},k+1/2}\big)$$ and summing over $$j\in \mathcal{T}_M$$, we obtain \begin{equation}\label{sec} {\varepsilon}^2\left(\|{\delta}_x^+{\varphi}^{{\varepsilon},k+\frac{1}{2}}\|^2-\|{\delta}_x^+{\varphi}^{{\varepsilon},k-\frac{1}{2}}\|^2\right) +\frac{1}{2}\left(\|N^{{\varepsilon},k+1}\|^2-\|N^{{\varepsilon},k-1}\|^2\right) +\left(|E^{{\varepsilon},k}|^2,2\tau{\delta}_t^c N^{{\varepsilon},k}\right)=0. \end{equation} (3.3) Adding (3.2) to $$\frac{1}{2}\times$$(3.3), we get that \begin{align*} &\|{\delta}_t^+E^{{\varepsilon},k}\|^2+\frac{1}{2} \|E^{{\varepsilon},k+1}\|^2+\frac{1}{2}\|{\delta}_x^+E^{{\varepsilon},k+1}\|^2+ \frac{{\varepsilon}^2}{2}\|{\delta}_x^+{\varphi}^{{\varepsilon},k+\frac{1}{2}}\|^2 +\frac{1}{4}\|N^{{\varepsilon},k+1}\|^2\\ &\quad=\|{\delta}_t^+E^{{\varepsilon},k-1}\|^2+\frac{1}{2} \|E^{{\varepsilon},k-1}\|^2+\frac{1}{2}\|{\delta}_x^+E^{{\varepsilon},k-1}\|^2+ \frac{{\varepsilon}^2}{2}\|{\delta}_x^+{\varphi}^{{\varepsilon},k-\frac{1}{2}}\|^2 +\frac{1}{4}\|N^{{\varepsilon},k-1}\|^2\\ &\qquad+\frac{1}{2}\big[(N^{{\varepsilon},k-1},|E^{{\varepsilon},k}|^2)+(N^{{\varepsilon},k},|E^{{\varepsilon},k-1}|^2)\big] -\frac{1}{2}\big[(N^{{\varepsilon},k},|E^{{\varepsilon},k+1}|^2)+(N^{{\varepsilon},k+1},|E^{{\varepsilon},k}|^2)\big]. \end{align*} The conclusion then follows by summing the above identity over the time steps from $$1$$ to $$k$$. It follows from (2.5) and (2.7) that \begin{align*} &\|{\delta}_t^+E^{{\varepsilon},0}\|\lesssim \|E_1\|_{L^\infty}+\tau\left(\|E_0''\|_{L^\infty}+ \|E_0\|_{L^\infty}+\|E_0\|^3_{L^\infty}\right)+ {\varepsilon}^{\alpha}\tau\|E_0\|_{L^\infty}\|{\omega}_0\|_{L^\infty},\\ &\|{\delta}_x^+{\varphi}^{{\varepsilon},1/2}\|\lesssim \|{\delta}_t^+N^{{\varepsilon},0}\| \lesssim \|E_0E_1\|_{L^\infty}+{\varepsilon}^\beta\|{\omega}_1\|_{L^\infty}+{\varepsilon}^{\alpha-1}\|{\omega}_0''\|_{L^\infty}, \end{align*} which implies that $$\mathcal{H}^0$$ is uniformly bounded for $${\varepsilon} \in (0,1]$$ by assumption (C). Thus, the a priori bounds for $$E^{{\varepsilon},k}$$ can be obtained by a standard argument (see, e.g., Chang et al., 1995). □ 3.2 Error analysis It can be concluded from Proposition 3.1 that $$\|E^{{\varepsilon},k}\|_\infty$$ is uniformly bounded; however, similar bounds cannot be obtained for higher dimensions or for nonconservative schemes. In Thomée (1997) and Cai & Yuan (2017), this difficulty was overcome by truncating the nonlinearity to a globally Lipschitz function with compact support in $$d$$ dimensions. The idea is as follows. Choose a function $$\rho(s)\in C^\infty(\mathbb{R})$$ such that $$\rho(s)= \left\{ \begin{aligned} &1, \quad &|s|\le 1,\\ &\in[0,1],\quad &1\le|s|\le 2,\\ &0,\quad &|s|\ge 2. \\ \end{aligned} \right. $$ Set $$M_0'=\max \{\|E(x,t)\|_{L^\infty(\it{\Omega}_T)}, \sup\limits_{{\varepsilon}\in(0,1]} \|E^{\varepsilon}(x,t)\|_{L^\infty(\it{\Omega}_T)}\}, \qquad \it{\Omega}_T=\it{\Omega}\times [0,T],$$ which is well defined by assumption (A). Denote $$M_0=\max\{M_0',C_e\}$$ for $$d=1$$ and $$M_0=M_0'$$ for $$d=2, 3$$. For $$s\ge 0$$, $$y_1,y_2\in\mathbb{R}$$, define \[\rho_B(s)=s\rho(s/B),\quad B=(M_0+1)^2,\quad g(y_1,y_2)=\frac{y_1+y_2}{2}\int_0^1 \rho_B'(sy_1^2+(1-s)y_2^2)\,{\rm{d}}s.\] Then $$\rho_B(s)$$ is globally Lipschitz and there exists $$C_B>0$$ such that \begin{equation}\label{f_B} |\rho_B(s_1)-\rho_B(s_2)|\le \sqrt{C_B}|\sqrt{s_1}-\sqrt{s_2}|\quad \forall s_1, s_2 \ge 0.
\end{equation} (3.4) Set $$\widehat{E}^{{\varepsilon},0}=E^{{\varepsilon},0}$$, $$\widehat{N}^{{\varepsilon},0}=N^{{\varepsilon},0}$$, $$\widehat{E}^{{\varepsilon},1}=E^{{\varepsilon},1}$$, $$\widehat{N}^{{\varepsilon},1}=N^{{\varepsilon},1}$$ and determine $$\widehat{E}^{{\varepsilon},k}$$, $$\widehat{N}^{{\varepsilon},k}\in X_M$$ as follows \begin{equation}\label{scheme} \begin{aligned} &{\delta}_t^2\widehat{E}^{{\varepsilon},k}_j=({\delta}_x^2-1)\frac{\widehat{E}_j^{{\varepsilon},k+1}+\widehat{E}_j^{{\varepsilon},k-1}}{2} -\widehat{N}_j^{{\varepsilon},k}g\left(\widehat{E}_j^{{\varepsilon},k+1}, \widehat{E}_j^{{\varepsilon},k-1}\right)\!,\\ &{\varepsilon}^2{\delta}_t^2\widehat{N}_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2\left(\widehat{N}_j^{{\varepsilon},k+1}+ \widehat{N}_j^{{\varepsilon},k-1}\right)+{\delta}_x^2\rho_B\left(|\widehat{E}_j^{{\varepsilon},k}|^2\right)\!,\quad k\ge 1. \end{aligned} \end{equation} (3.5) Here, $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$ can be viewed as another approximation of $$(E^{\varepsilon}(x_j,t_k),N^{\varepsilon}(x_j,t_k))$$. Next we will prove two types of estimates for $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$ in Theorem 2.1. Define the error function $$\widehat{e}^{\,{\varepsilon},k}$$, $$\widehat{n}^{{\varepsilon},k}\in X_M$$ as \[\widehat{e}_j^{{\varepsilon},k}=E^{\varepsilon}(x_j,t_k)-\widehat{E}^{{\varepsilon},k}_j,\quad \widehat{n}_j^{{\varepsilon},k}=N^{{\varepsilon}}(x_j,t_k)-\widehat{N}^{{\varepsilon},k}_j, \quad j\in \mathcal{T}_M.\] Regarding the error bounds on $$(\widehat{e}^{\,{\varepsilon},k}, \widehat{n}^{{\varepsilon},k})$$, we have the following estimates. Theorem 3.2 Under assumptions (A) and (C), there exists $$\tau_1>0$$ independent of $${\varepsilon}$$ such that, when $$0<\tau\le \tau_1$$, we have the following error estimate for the scheme (3.5): \begin{equation}\label{mes1} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\quad 0\le k\le\frac{T}{\tau},\quad 0<{\varepsilon}\le1. \end{equation} (3.6) Introduce the local truncation error $$\widehat{\xi}^{{\varepsilon},k}$$, $$\widehat{\eta}^{{\varepsilon},k}\in X_M$$ for $$k\ge 1$$ as \begin{equation}\label{local2} \begin{split} \widehat{\xi}_j^{{\varepsilon},k}&={\delta}_t^2E^{\varepsilon}(x_j,t_k)+\big[-{\delta}_x^2+1+N^{\varepsilon}(x_j,t_k)\big] \frac{E^{\varepsilon}(x_j,t_{k+1})+E^{\varepsilon}(x_j,t_{k-1})}{2},\\ \widehat{\eta}_j^{{\varepsilon},k}&={\varepsilon}^2{\delta}_t^2N^{\varepsilon}(x_j,t_k)-\frac{1}{2}{\delta}_x^2\big[N^{\varepsilon}(x_j,t_{k+1}) +N^{\varepsilon}(x_j,t_{k-1})\big]-{\delta}_x^2|E^{\varepsilon}(x_j,t_k)|^2. \end{split} \end{equation} (3.7) Then, we have the local truncation error as follows. Lemma 3.3 Under the assumption (A), we have $$\|\widehat{\xi}^{{\varepsilon},k}\|+ \|\widehat{\eta}^{{\varepsilon},k}\|\lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}},\quad 1\le k\le \frac{T}{\tau}-1;\quad \|{\delta}_t^c\widehat{\eta}^{{\varepsilon},k}\|\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\quad 2\le k\le \frac{T}{\tau}-2.$$ Proof. 
Applying Taylor expansion, we have \begin{align*} \widehat{\xi}_j^{{\varepsilon},k}=\,&\frac{\tau^2}{6}\int_{-1}^1(1-|s|)^3{\partial}_t^4E^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s -\frac{h^2}{12}\int_{-1}^1(1-|s|)^3\sum\limits_{m=\pm1}{\partial}_x^4E^{\varepsilon}(x_j+sh,t_k+m \tau)\,{\rm{d}}s\\ &+\frac{\tau^2}{2}\int_{-1}^1(1-|s|)\big[ \left(1+N^{\varepsilon}(x_j,t_k)\right)E_{tt}^{\varepsilon}(x_j,t_k+s\tau)-{\partial}_x^2E_{tt}^{\varepsilon}(x_j,t_k+s\tau) \big]\,{\rm{d}}s. \end{align*} Hence, \[ |\widehat{\xi}_j^{{\varepsilon},k}|\lesssim h^2\|{\partial}_x^4E^{\varepsilon}\|_{L^\infty}+\tau^2 \left((1+\|N^{\varepsilon}\|_{L^\infty})\|E^{\varepsilon}_{tt}\|_{L^\infty} +\|{\partial}_t^4E^{\varepsilon}\|_{L^\infty}+\|{\partial}_x^2E^{\varepsilon}_{tt}\|_{L^\infty}\right) \lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}.\] Applying the same approach, we derive that \begin{align*} \widehat{\eta}_j^{{\varepsilon},k}&=\frac{{\varepsilon}^2\tau^2}{6}\int_{-1}^1(1-|s|)^3 {\partial}_t^4N^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s-\frac{\tau^2}{2}\int_{-1}^1(1-|s|){\partial}_x^2N_{tt}^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s\\ &\quad-\frac{h^2}{12}\int_{-1}^1(1-|s|)^3\Big[{\partial}_x^4|E^{\varepsilon}|^2(x_j+sh,t_k)+\sum\limits_{m=\pm1} {\partial}_x^4N^{\varepsilon}(x_j+sh,t_k+m\tau)\Big]\,{\rm{d}}s, \end{align*} which implies that \[ |\widehat{\eta}_j^{{\varepsilon},k}|\lesssim h^2(\|{\partial}_x^4N^{\varepsilon}\|_{L^\infty}+\|{\partial}_x^4|E^{\varepsilon}|^2\|_{L^\infty}) +\tau^2(\|{\partial}_x^2N_{tt}^{\varepsilon}\|_{L^\infty}+{\varepsilon}^2\|{\partial}_t^4N^{\varepsilon}\|_{L^\infty}) \lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}.\] Applying $${\delta}_t^c$$ to $$\widehat{\eta}_j^{{\varepsilon},k}$$ for $$2\le k\le \frac{T}{\tau}-2$$, one can deduce that \[ |{\delta}_t^c\widehat{\eta}_j^{{\varepsilon},k}|\lesssim h^2(\|{\partial}_x^4N^{\varepsilon}_t\|_{L^\infty}+\|{\partial}_x^4|E^{\varepsilon}|_t^2\|_{L^\infty}) +\tau^2(\|{\partial}_t^3N_{xx}^{\varepsilon}\|_{L^\infty}+{\varepsilon}^2\|{\partial}_t^5N^{\varepsilon}\|_{L^\infty})\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\] which completes the proof. □ For the initial step, we have the following estimates. Lemma 3.4 Under assumptions (A) and (C), the error of the first step (2.5) and (2.7) satisfies \[\widehat{e}_j^{{\varepsilon},0}=\widehat{n}_j^{{\varepsilon},0}=0,\quad |\widehat{e}_j^{{\varepsilon},1}|+|{\delta}_x^+\widehat{e}_j^{{\varepsilon},1}|\lesssim \frac{\tau^3}{{\varepsilon}^{1-\alpha^*}},\quad |{\delta}_t^+\widehat{e}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{1-\alpha^*}},\quad |\widehat{n}_j^{{\varepsilon},1}|\lesssim \frac{\tau^3}{{\varepsilon}^{3-\alpha^\dagger}},\quad |{\delta}_t^+\widehat{n}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}.\] Proof. By the definition of $$\widehat{E}^{{\varepsilon},1}_j$$, we have \[ |\widehat{e}_j^{{\varepsilon},1}|= \left|\frac{\tau^3}{2}\int_0^1(1-s)^2E_{ttt}^{\varepsilon}(x_j,s\tau)\,{\rm{d}}s\right| \lesssim\tau^3\|E_{ttt}^{\varepsilon}\|_{L^\infty}\lesssim \frac{\tau^3}{{\varepsilon}^{1-\alpha^*}},\] which immediately gives $$|{\delta}_t^+\widehat{e}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{1-\alpha^*}}$$ and $$|{\delta}_x^+\widehat{e}_j^{{\varepsilon},1}|\lesssim \frac{\tau^3}{{\varepsilon}^{1-\alpha^*}}$$.
Similarly, \begin{align*} |\widehat{n}_j^{{\varepsilon},1}|&=\left|\frac{\tau^3}{2}\int_0^1(1-s)^2N_{ttt}^{\varepsilon}(x_j,s\tau)\,{\rm{d}}s +{\varepsilon}^{\beta}\left[\tau-{\varepsilon}\sin\left(\frac{\tau}{{\varepsilon}}\right)\right]{\omega}_1(x_j)+ {\varepsilon}^\alpha\left[\frac{\tau^2}{2{\varepsilon}^2}-2\sin^2\left(\frac{\tau}{2{\varepsilon}}\right)\right]{\omega}_0''(x_j)\right|\\ &\lesssim \tau^3\|N_{ttt}^{\varepsilon}\|_{L^\infty}+\tau^3{\varepsilon}^{{\beta}-2}\|{\omega}_1\|_{L^\infty} +\tau^3{\varepsilon}^{\alpha-3}\|{\omega}_0''\|_{L^\infty}\lesssim\frac{\tau^3}{{\varepsilon}^{3-\alpha^\dagger}}, \end{align*} which implies that $$|{\delta}_t^+\widehat{n}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}$$. □ Proof of Theorem 3.2. Subtracting (3.5) from (3.7), we obtain the error equations \begin{align} &{\delta}_t^2\widehat{e}^{\,{\varepsilon},k}_j=\frac{{\delta}_x^2-1}{2} (\widehat{e}^{{\varepsilon},k+1}_j+\widehat{e}^{{\varepsilon},k-1}_j)-\widehat{r}_j^{{\varepsilon},k}+\widehat{\xi}_j^{{\varepsilon},k},\label{e_eq}\\ \end{align} (3.8) \begin{align} &{\varepsilon}^2{\delta}_t^2\widehat{n}_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2(\widehat{n}_j^{{\varepsilon},k+1}+\widehat{n}_j^{{\varepsilon},k-1}) +{\delta}_x^2\widehat{p}_j^{{\varepsilon},k}+\widehat{\eta}_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 1,\label{n_eq} \end{align} (3.9) where \begin{align*} \widehat{r}_j^{{\varepsilon},k}&=N^{\varepsilon}(x_j,t_k)g\left(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1})\right) -\widehat{N}_j^{{\varepsilon},k}g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}),\\ \widehat{p}_j^{{\varepsilon},k}&=\rho_B(E^{\varepsilon}(x_j,t_k)^2)-\rho_B(|\widehat{E}_j^{{\varepsilon},k}|^2). \end{align*} Recalling the definition of $$\rho_B$$ (cf. (3.4)), one can easily get that \begin{equation}\label{p_k} |\widehat{p}_j^{{\varepsilon},k}|\le \sqrt{C_B} |\widehat{e}_j^{{\varepsilon},k}|, \quad j \in \mathcal{T}_M,\quad k\ge 0. \end{equation} (3.10) As for the properties about $$g$$, similar to Cai & Yuan (2017), we have $$|g(\widehat{E}^{{\varepsilon},k+1}_j,\widehat{E}_j^{{\varepsilon},k-1})|\lesssim 1$$, and \begin{equation}\label{g_pro} |g(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1}))-g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1})| \lesssim |\widehat{e}_j^{{\varepsilon},k+1}|+|\widehat{e}_j^{{\varepsilon},k-1}|. \end{equation} (3.11) By the definition of $$\widehat{r}^{{\varepsilon},k}$$, similar to Cai & Yuan (2017), one can derive that \begin{equation}\label{r_bound} |\widehat{r}_j^{{\varepsilon},k}|\lesssim |\widehat{n}_j^{{\varepsilon},k}|+|\widehat{e}^{{\varepsilon},k+1}_j|+|\widehat{e}^{{\varepsilon},k-1}_j|. \end{equation} (3.12) Multiplying both sides of (3.8) by $$\widehat{e}_j^{{\varepsilon},k+1}-\widehat{e}_j^{{\varepsilon},k-1}$$, summing together for $$j\in \mathcal{T}_M$$, we obtain \begin{equation}\label{e_eq2} \begin{aligned}[b] &\|{\delta}_t^+\widehat{e}^{\,{\varepsilon},k}\|^2+\frac{1}{2}\|{\delta}_x^+\widehat{e}^{{\varepsilon},k+1}\|^2+\frac{1}{2} \|\widehat{e}^{{\varepsilon},k+1}\|^2\\ &\quad=\|{\delta}_t^+\widehat{e}^{{\varepsilon},k-1}\|^2+ \frac{1}{2}\|{\delta}_x^+\widehat{e}^{{\varepsilon},k-1}\|^2+\frac{1}{2} \|\widehat{e}^{{\varepsilon},k-1}\|^2+(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k}, \widehat{e}^{{\varepsilon},k+1}-\widehat{e}^{{\varepsilon},k-1}). 
\end{aligned} \end{equation} (3.13) To estimate $$\|\widehat{n}^{{\varepsilon},k}\|$$, introduce $$\widehat{u}^{{\varepsilon},k+1/2}\in X_M$$ satisfying $${\delta}_x^2\widehat{u}_j^{{\varepsilon},k+1/2}={\delta}_t^+\widehat{n}_j^{{\varepsilon},k}$$ for $$k\ge 0$$. Multiplying both sides of (3.9) by $$\tau(\widehat{u}_j^{{\varepsilon},k+1/2}+ \widehat{u}_j^{{\varepsilon},k-1/2})$$ and summing over $$j\in \mathcal{T}_M$$, we have \begin{equation}\label{n_eq2} \begin{aligned}[b] &{\varepsilon}^2(\|{\delta}_x^+\widehat{u}^{{\varepsilon},k+1/2}\|^2-\|{\delta}_x^+\widehat{u}^{{\varepsilon},k-1/2}\|^2) +\frac{1}{2}(\|\widehat{n}^{{\varepsilon},k+1}\|^2-\|\widehat{n}^{{\varepsilon},k-1}\|^2)\\ &\quad=-(\widehat{p}^{{\varepsilon},k}, \widehat{n}^{{\varepsilon},k+1}-\widehat{n}^{{\varepsilon},k-1}) -\tau(\widehat{\eta}^{{\varepsilon},k},\widehat{u}^{{\varepsilon},k+1/2}+\widehat{u}^{{\varepsilon},k-1/2}),\quad 1\le k\le \frac T\tau-1. \end{aligned} \end{equation} (3.14) Introduce a discrete ‘energy’ by \begin{equation}\label{energy_d} \begin{split} \widehat{\mathcal{A}}^{{\varepsilon},k}&=6C_B\|{\delta}_t^+\widehat{e}^{\,{\varepsilon},k}\|^2+3C_B (\|\widehat{e}^{\,{\varepsilon},k}\|^2+ \|\widehat{e}^{{\varepsilon},k+1}\|^2+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|^2+\|{\delta}_x^+ \widehat{e}^{{\varepsilon},k+1}\|^2)\\ &\quad +{\varepsilon}^2\|{\delta}_x^+\widehat{u}^{{\varepsilon},k+1/2}\|^2 +\frac{1}{2}(\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2),\quad 1\le k\le \frac T\tau-1. \end{split} \end{equation} (3.15) Combining $$6C_B\times$$(3.13) and (3.14), we get for $$1\le k\le \frac T\tau-1$$ \begin{equation}\label{sp} \widehat{\mathcal{A}}^{{\varepsilon},k}-\widehat{\mathcal{A}}^{{\varepsilon},k-1}=6C_B(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k}, 2\tau{\delta}_t^c\widehat{e}^{\,{\varepsilon},k}) -(\widehat{p}^{{\varepsilon},k},2\tau{\delta}_t^c\widehat{n}^{{\varepsilon},k})- \tau(\widehat{\eta}^{{\varepsilon},k},\widehat{u}^{{\varepsilon},k+1/2}+\widehat{u}^{{\varepsilon},k-1/2}). \end{equation} (3.16) We now estimate the terms in (3.16). First, by the Cauchy inequality and (3.12), one has \begin{equation}\label{xi_b} \big|\!\left(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k},2\tau{\delta}_t^c\widehat{e}^{\,{\varepsilon},k}\right)\!\big| =\tau\big|(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k},{\delta}_t^+(\widehat{e}^{\,{\varepsilon},k}+\widehat{e}^{{\varepsilon},k-1}))\big| \lesssim \tau\big(\|\widehat{\xi}^{{\varepsilon},k}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},k}+\widehat{\mathcal{A}}^{{\varepsilon},k-1}\big).
\end{equation} (3.17) On the other hand, \[ \sum\limits_{i=1}^k(\widehat{p}^{{\varepsilon},i},2\tau{\delta}_t^c\widehat{n}^{{\varepsilon},i})= (\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})-(\widehat{p}^{{\varepsilon},1},\widehat{n}^{{\varepsilon},0})- (\widehat{p}^{{\varepsilon},0},\widehat{n}^{{\varepsilon},1})-\sum\limits_{i=1}^{k}(\widehat{n}^{{\varepsilon},i},2\tau{\delta}_t^c \widehat{p}^{{\varepsilon},i}).\] It follows easily from (3.11) and assumption (A) that \begin{align*} 2\tau {\delta}_t^c \widehat{p}_j^{{\varepsilon},k}&=4\tau g\left(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1})\right){\delta}_t^c E^{\varepsilon}(x_j,t_k)-4\tau g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}){\delta}_t^c\widehat{E}_j^{{\varepsilon},k}\\ &=4\tau g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}){\delta}_t^c \widehat{e}_j^{{\varepsilon},k} +4\tau\big[g(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1}))-g(\widehat{E}_j^{{\varepsilon},k+1}, \widehat{E}_j^{{\varepsilon},k-1})\big]{\delta}_t^c E^{\varepsilon}(x_j,t_{k})\\ &\lesssim\tau \|E_t^{\varepsilon}\|_{L^\infty}\big(|\widehat{e}_j^{{\varepsilon},k+1}|+|\widehat{e}_j^{{\varepsilon},k-1}|\big) +\tau\big(|{\delta}_t^+\widehat{e}_j^{{\varepsilon},k}|+|{\delta}_t^+\widehat{e}_j^{{\varepsilon},k-1}|\big), \end{align*} which yields for $$1\le k\le\frac T\tau-1$$ \begin{equation}\label{est-p} (\widehat{n}^{{\varepsilon},k},2\tau{\delta}_t^c \widehat{p}^{{\varepsilon},k})\lesssim \tau\left[\|\widehat{n}^{{\varepsilon},k}\|^2+\sum\limits_{l=k-1}^k\|{\delta}_t^+\widehat{e}^{{\varepsilon},l}\|^2+ \sum\limits_{l=k\pm1}\|\widehat{e}^{{\varepsilon},l}\|^2\right]\lesssim \tau(\widehat{\mathcal{A}}^{{\varepsilon},k}+\widehat{\mathcal{A}}^{{\varepsilon},k-1}). \end{equation} (3.18) Similar to Bao & Su (2017b), it can be established from the Sobolev inequality and the Cauchy inequality that \begin{align} \tau\left|\sum\limits_{i=1}^k\left(\widehat{\eta}^{{\varepsilon},i},\widehat{u}^{{\varepsilon},i+1/2}+ \widehat{u}^{{\varepsilon},i-1/2}\right)\right|&\le \frac{\widehat{\mathcal{A}}^{{\varepsilon},k}}{4}+C\left[ \widehat{\mathcal{A}}^{{\varepsilon},0}+ \sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \sum\limits_{i=k-1}^k\|\widehat{\eta}^{{\varepsilon},i}\|^2\nonumber\right.\\ &\left.\quad +\tau\sum\limits_{i=2}^{k-1} (\widehat{\mathcal{A}}^{{\varepsilon},i}+\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2)\right].\label{u_bound} \end{align} (3.19) Summing (3.16) over the time steps $$1,2,\ldots,k$$ with $$k<\frac{T}{\tau}$$ and applying (3.17)–(3.19), we obtain \begin{equation}\label{sp2} \begin{split} &\frac{3}{4}\widehat{\mathcal{A}}^{{\varepsilon},k}+(\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})\\ &\quad\lesssim \widehat{\mathcal{A}}^{{\varepsilon},0}+\tau\sum\limits_{i=1}^{k}\left(\|\widehat{\xi}^{{\varepsilon},i}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},i}\right) +\sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2.
\end{split} \end{equation} (3.20) Note that by Cauchy inequality and (3.10), we have \begin{align*} &|(\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})|\le \frac{\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2}{4}+\|\widehat{p}^{{\varepsilon},k}\|^2+\|\widehat{p}^{{\varepsilon},k+1}\|^2\\ &\quad\le\frac{\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2}{4}+C_B\left(\|\widehat{e}^{\,{\varepsilon},k}\|^2 +\|\widehat{e}^{{\varepsilon},k+1}\|^2\right)\le \frac{\widehat{\mathcal{A}}^{{\varepsilon},k}}{2}, \end{align*} which together with (3.20) gives that $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \widehat{\mathcal{A}}^{{\varepsilon},0}+\tau\sum\limits_{i=1}^{k}\big(\|\widehat{\xi}^{{\varepsilon},i}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},i}\big) +\sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2.$$ Applying the discrete Sobolev inequality and Lemma 3.4, we derive \begin{equation}\label{u-i} {\varepsilon}\|{\delta}_x^+\widehat{u}^{{\varepsilon},1/2}\|\lesssim {\varepsilon} \|{\delta}_t^+\widehat{n}^{{\varepsilon},0}\|\lesssim \frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}, \end{equation} (3.21) which together with Lemma 3.4 yields that $$\widehat{\mathcal{A}}^{{\varepsilon},0}\lesssim \big(\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\big)^2$$. Applying Lemma 3.3, we get that $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \tau\sum\limits_{i=1}^k\widehat{\mathcal{A}}^{{\varepsilon},i}+ \Big(\frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\Big)^2.$$ Hence, by discrete Gronwall inequality, there exists $$\tau_1>0$$ such that when $$0<\tau\le \tau_1$$, we have $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \big(\frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\big)^2$$, which gives (3.6) directly. □ Next, we prove the second type error estimate for the numerical approximation $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$. Theorem 3.5 Under assumptions (A)–(C), there exists $$\tau_2>0$$ sufficiently small and independent of $${\varepsilon}$$ such that, when $$0<\tau\le \tau_2$$, we have the following error estimate for (3.5): \begin{equation}\label{mes2} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\quad 0\le k\le\frac{T}{\tau},\quad 0<{\varepsilon}\le1. \end{equation} (3.22) Define another error function $$\widetilde{e}^{{\varepsilon},k}_j=E(x_j,t_k)-\widehat{E}_j^{{\varepsilon},k},\quad \widetilde{n}_j^{{\varepsilon},k}=N(x_j,t_k)-\widehat{N}_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 0,$$ where $$E(x,t)$$ is the solution to the limit KG (2.2) and $$N(x,t)=-E(x,t)^2$$. Similarly, denote $$\widetilde{u}^{{\varepsilon},k+\frac{1}{2}}=({\delta}_x^2)^{-1}({\delta}_t^+\widetilde{n}^{{\varepsilon},k})\in X_M (k\ge 0)$$. 
The local truncation error $$\widetilde{\xi}^{{\varepsilon},k}$$, $$\widetilde{\eta}^{{\varepsilon},k}$$ is defined as \begin{equation}\label{2local2} \begin{split} &\widetilde{\xi}_j^{{\varepsilon},k}={\delta}_t^2E(x_j,t_k)-\big[{\delta}_x^2-1-N(x_j,t_k)\big] \frac{E(x_j,t_{k+1})+E(x_j,t_{k-1})}{2}\\ &\widetilde{\eta}_j^{{\varepsilon},k}={\varepsilon}^2{\delta}_t^2N(x_j,t_k)-\frac{1}{2}{\delta}_x^2\big[N(x_j,t_{k+1}) +N(x_j,t_{k-1})\big]-{\delta}_x^2|E(x_j,t_k)|^2. \end{split} \end{equation} (3.23) Under assumption (A), we can obtain the following error bounds \begin{equation}\label{2local3} \|\widetilde{\xi}^{{\varepsilon},k}\|\lesssim h^2+\tau^2,\quad \|\widetilde{\eta}^{{\varepsilon},k}\|+\|{\delta}_t^c\widetilde{\eta}^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^2,\quad k\ge1. \end{equation} (3.24) Similar to Lemma 3.4, we have the error bounds for $$\widetilde{e}^{{\varepsilon},k}$$, $$\widetilde{n}^{{\varepsilon},k}$$ at the first step. Lemma 3.6 Under assumptions (A)–(C), the error of the first step (2.5) and (2.7) satisfies \begin{align*} &\widetilde{e}_j^{{\varepsilon},0}=0,\quad |\widetilde{e}_j^{{\varepsilon},1}|+|{\delta}_x^+\widetilde{e}_j^{{\varepsilon},1}|\lesssim \tau^3+{\varepsilon}^{\alpha}\tau^2,\quad |{\delta}_t^+\widetilde{e}_j^{{\varepsilon},0}|\lesssim \tau^2+{\varepsilon}^{\alpha}\tau,\\ &|\widetilde{n}_j^{{\varepsilon},0}|\lesssim {\varepsilon}^\alpha, \quad |\widetilde{n}_j^{{\varepsilon},1}|\lesssim \tau^2+{\varepsilon}^\alpha+{\varepsilon}^{1+\beta},\quad |{\delta}_t^+\widetilde{n}_j^{{\varepsilon},0}|\lesssim \tau+{\varepsilon}^{\alpha-1}+{\varepsilon}^{\beta}. \end{align*} Proof of Theorem 3.5. Define another discrete ‘energy’ \begin{equation}\label{energy_d2} \begin{split} \widetilde{\mathcal {A}}^{{\varepsilon},k}&=6C_B\|{\delta}_t^+\widetilde{e}^{{\varepsilon},k}\|^2+3C_B (\|\widetilde{e}^{{\varepsilon},k}\|^2+ \|\widetilde{e}^{{\varepsilon},k+1}\|^2+\|{\delta}_x^+\widetilde{e}^{{\varepsilon},k}\|^2+\|{\delta}_x^+ \widetilde{e}^{{\varepsilon},k+1}\|^2)\\ &\quad +{\varepsilon}^2\|{\delta}_x^+\widetilde{u}^{{\varepsilon},k+1/2}\|^2 +\frac{1}{2}(\|\widetilde{n}^{{\varepsilon},k}\|^2+\|\widetilde{n}^{{\varepsilon},k+1}\|^2),\quad 1\le k\le \frac T\tau-1. \end{split} \end{equation} (3.25) Similar approach as in the proof of Theorem 3.2 will yield that $$\widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \widetilde{\mathcal{A}}^{{\varepsilon},0}+\sum\limits_{i=1}^2\|\widetilde{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widetilde{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widetilde{\eta}^{{\varepsilon},i}\|^2 +\tau\sum\limits_{i=1}^{k}\big(\|\widetilde{\xi}^{{\varepsilon},i}\|^2+ \widetilde{\mathcal{A}}^{{\varepsilon},i}\big).$$ By Lemma 3.6 and the discrete Sobolev inequality, we derive that $${\varepsilon}\|{\delta}_x^+\widetilde{u}^{{\varepsilon},1/2}\|\lesssim \tau^2+{\varepsilon}^{\alpha^\dagger}$$, which together with Lemma 3.6 gives $$\widetilde{\mathcal{A}}^{{\varepsilon},0}\lesssim \big(\tau^2+{\varepsilon}^{\alpha^\dagger}\big)^2$$. Applying (3.24), there exists $$\tau_2>0$$ sufficiently small such that when $$0<\tau\le\tau_2$$, $$ \widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \tau\sum\limits_{i=1}^{k-1} \widetilde{\mathcal{A}}^{{\varepsilon},i}+\left(h^2+\tau^2+{\varepsilon}^{\alpha^\dagger}\right)^2. $$ By discrete Gronwall inequality, for $$\tau<\tau_2$$, one has $$\widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \left(h^2+\tau^2+{\varepsilon}^{\alpha^\dagger}\right)^2$$. 
Using assumption (B) and the triangle inequality, we derive that \begin{align*} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|&\le \|\widetilde{e}^{{\varepsilon},k}\|+\|{\delta}_x^+\widetilde{e}^{{\varepsilon},k}\|+\|E^{\varepsilon}(\cdot,t_k)-E(\cdot,t_k)\|_{H^1} \lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\\ \|\widehat{n}^{{\varepsilon},k}\|&\le\|\widetilde{n}^{{\varepsilon},k}\|+\|N^{{\varepsilon}}(\cdot,t_k)-N(\cdot,t_k)\|_{L^2} \lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*}, \end{align*} which completes the proof of Theorem 3.5. □ Proof of Theorem 2.1. Set $$\tau_0=\min\{\tau_1,\tau_2\}$$. Combining the two estimates (3.6) and (3.22), we derive for $$\tau\le \tau_0$$ that \begin{equation}\label{mes3} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim \left\{ \begin{aligned} &h^2+\tau^{\frac{2}{4-\alpha^\dagger}},\quad &\alpha^*=1,\\ &(h^2+\tau^2)^{\frac{1}{3}\alpha^*},\quad &0<\alpha^*<1. \end{aligned} \right. \end{equation} (3.26) To establish the error estimates for ($$E^{{\varepsilon},k},N^{{\varepsilon},k}$$), by the definition of $$\rho_B$$, it suffices to show that $$\|\widehat{E}^{{\varepsilon},k}\|_\infty\le M_0+1, \quad k\ge 0,$$ which implies that (3.5) collapses to (2.4), i.e., ($$\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k}$$) are identical to ($$E^{{\varepsilon},k},N^{{\varepsilon},k}$$). For $$d=1$$, this is obvious from the definition of $$M_0$$ and (3.1). For $$d=2, 3$$, $$\|\widehat{E}^{{\varepsilon},k}\|_\infty$$ is controlled by $$\|\widehat{E}^{{\varepsilon},k}\|_\infty\le\|E^{\varepsilon}(\cdot,t_k)\|_{L^\infty}+ \|\widehat{e}^{\,{\varepsilon},k}\|_\infty$$, where $$\|\widehat{e}^{\,{\varepsilon},k}\|_\infty\lesssim \frac{\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|}{C_d(h)}, \qquad C_d(h)\sim \left\{ \begin{aligned} &\frac{1}{|\mathrm{ln}\, h|},\quad &d=2,\\ &h^{1/2},\quad &d=3. \\ \end{aligned} \right. $$ Hence, by adding the auxiliary conditions that $$\tau=o\left(C_d(h)^{2-\frac{\alpha^\dagger}{2}}\right)$$ for $$\alpha^*=1$$, while $$h=o\left({\varepsilon}^{\frac{2}{3}(1-\alpha^*)}\right)$$ and $$\tau=o\left({\varepsilon}^{\frac{3-\alpha^*}{2}}C_d(h)^{1/2}\right)$$ for $$\alpha^*<1$$, one can obtain the error bounds. □ 4. Numerical results We present numerical results for the KGZ system (2.1) obtained by the conservative finite difference discretization (2.4) with (2.5) and (2.7). The initial data are set as \begin{align*} &E_0(x)=e^{-x^2}\sin x,\qquad E_1(x)=\mathrm{sech}\left(x^2/2\right)\cos x+\mathrm{sech}\left(x^2/3\right)\sin{(2x)},\\ &{\omega}_0(x)=\mathrm{sech}(x^2)\cos{(3x)},\qquad {\omega}_1(x)=\mathrm{sech}(x^2)\sin{(4x)}, \end{align*} and the parameters $$\alpha$$ and $$\beta$$ are chosen as Case I: $$\alpha=2$$ and $$\beta=1$$; Case II: $$\alpha=1$$ and $$\beta=0$$; Case III: $$\alpha=0$$ and $$\beta=-1$$. In our computation, we truncate the problem on $$\it{\Omega}_\varepsilon=\left[-30-\frac{1}{\varepsilon}, 30+\frac{1}{\varepsilon}\right]$$, which is large enough that the homogeneous Dirichlet boundary condition does not introduce significant errors. The bounded computational domain $$\it{\Omega}_\varepsilon$$ has to be chosen $$\varepsilon$$-dependent because the outgoing waves travel at speed $$O\left(\frac{1}{\varepsilon}\right)$$ and the homogeneous Dirichlet boundary condition is imposed at the boundary.
The $${\varepsilon}$$-dependent computational domain can be chosen $${\varepsilon}$$-independent if one applies appropriate boundary conditions instead of the simple homogeneous Dirichlet boundary condition (see, e.g., Bao & Su, 2017b). To quantify the error, we introduce the error functions $$e_{\varepsilon}^{\tau,h}(t_k):=\|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|,\quad n_{\varepsilon}^{\tau,h}(t_k):=\|n^{{\varepsilon},k}\|,$$ where $$e^{{\varepsilon},k}=E^{\varepsilon}(\cdot,t_k)-E^{{\varepsilon},k}$$, $$n^{{\varepsilon},k}=N^{\varepsilon}(\cdot,t_k)-N^{{\varepsilon},k}$$. The ‘exact’ solution is obtained by the EWI-SP method (see, e.g., Bao et al., 2013) with a very small mesh size $$h=1/32$$ and time step $$\tau=10^{-6}$$. The errors are displayed at $$t=1$$. For the spatial error analysis, we set the time step to $$\tau=10^{-5}$$ so that the temporal error can be neglected; for the temporal error analysis, the mesh size is set to $$h=10^{-4}$$ so that the spatial error can be ignored. Figures 2 and 3 display the spatial errors for Cases II and III, respectively. The results for Case I are similar to those for Case II and are omitted here for brevity. Similarly, the temporal discretization errors are shown in Fig. 4 and Tables 1–3 for $$E^{\varepsilon}$$ and $$N^{\varepsilon}$$, respectively. From these tables, we can draw the following conclusions for the proposed CFDM in the subsonic limit regime: Fig. 2. Spatial errors at time $$t=1$$ for Case II, i.e., $$\alpha=1$$ and $$\beta=0$$. Fig. 3. Spatial errors at time $$t=1$$ for Case III, i.e., $$\alpha=0$$ and $$\beta=-1$$. Fig. 4. Temporal errors at time $$t=1$$ for $$E^{\varepsilon}$$.
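For reference, the discrete error norms just introduced and the ‘rate’ rows in Tables 1–3, which are obtained by successively halving the time step, can be computed as in the following minimal Python sketch; the helper names are ours and the snippet is only illustrative, not the code used for the reported experiments.

```python
import numpy as np

# Minimal sketch (our own helpers, not from the paper): the discrete norms
# behind e_eps^{tau,h}(t_k) and n_eps^{tau,h}(t_k), and the observed rate when
# the time step is halved, as reported in the "rate" rows of Tables 1-3.

def discrete_norms(err, h):
    """Return ||err|| and ||delta_x^+ err|| for interior values err_1..err_{M-1},
    padded with the homogeneous Dirichlet values err_0 = err_M = 0."""
    full = np.concatenate(([0.0], err, [0.0]))
    l2 = np.sqrt(h * np.sum(err**2))
    h1_semi = np.sqrt(h * np.sum((np.diff(full) / h)**2))
    return l2, h1_semi

def observed_rate(err_coarse, err_fine):
    """Observed order r with err(tau) ~ C tau^r when tau is halved."""
    return np.log2(err_coarse / err_fine)

# Example: the first two entries of Table 1 for eps = 1 give a rate of about 1.91.
print(observed_rate(1.23e-2, 3.27e-3))
```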
Table 1 Temporal error analysis at time $$t=1$$ for Case I, i.e., $$\alpha=2$$ and $$\beta=1$$: $$n_{\varepsilon}^{\tau,h}(1)$$ at time steps $$\tau_0=0.05$$, $$\tau_0/2$$, $$\tau_0/2^2$$, $$\tau_0/2^3$$, $$\tau_0/2^4$$, $$\tau_0/2^5$$, $$\tau_0/2^6$$, together with the corresponding convergence rates.
$${\varepsilon}=1$$: 1.23E-2, 3.27E-3, 8.37E-4, 2.11E-4, 5.30E-5, 1.33E-5, 3.32E-6 (rate: —, 1.91, 1.97, 1.99, 1.99, 1.99, 2.00)
$${\varepsilon}=1/2$$: 3.77E-2, 9.97E-3, 2.54E-3, 6.39E-4, 1.60E-4, 4.02E-5, 1.01E-5 (rate: —, 1.92, 1.97, 1.99, 2.00, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 5.12E-2, 1.69E-2, 4.59E-3, 1.17E-3, 2.94E-4, 7.38E-5, 1.85E-5 (rate: —, 1.60, 1.88, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 2.57E-2, 1.75E-2, 7.60E-3, 2.21E-3, 5.69E-4, 1.43E-4, 3.58E-5 (rate: —, 0.55, 1.20, 1.78, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 1.75E-2, 7.01E-3, 4.70E-3, 3.21E-3, 1.06E-3, 2.82E-4, 7.10E-5 (rate: —, 1.32, 0.58, 0.55, 1.60, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 1.65E-2, 5.18E-3, 2.07E-3, 1.26E-3, 1.05E-3, 4.84E-4, 1.39E-4 (rate: —, 1.67, 1.32, 0.72, 0.26, 1.12, 1.80)
$${\varepsilon}=1/2^6$$: 1.43E-2, 4.35E-3, 1.54E-3, 6.61E-4, 3.64E-4, 2.82E-4, 2.02E-4 (rate: —, 1.72, 1.50, 1.22, 0.86, 0.37, 0.48)

Table 2 Temporal error analysis at time $$t=1$$ for Case II, i.e., $$\alpha=1$$ and $$\beta=0$$ (same layout as Table 1).
$${\varepsilon}=1$$: 1.23E-2, 3.27E-3, 8.37E-4, 2.11E-4, 5.30E-5, 1.33E-5, 3.32E-6 (rate: —, 1.91, 1.97, 1.99, 1.99, 1.99, 2.00)
$${\varepsilon}=1/2$$: 6.55E-2, 1.76E-2, 4.50E-3, 1.13E-3, 2.85E-4, 7.14E-5, 1.79E-5 (rate: —, 1.90, 1.97, 1.99, 1.99, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 1.82E-1, 6.24E-2, 1.71E-2, 4.36E-3, 1.10E-3, 2.75E-4, 6.89E-5 (rate: —, 1.55, 1.87, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 1.42E-1, 1.27E-1, 5.78E-2, 1.68E-2, 4.32E-3, 1.09E-3, 2.72E-4 (rate: —, 0.16, 1.14, 1.78, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 1.29E-1, 6.87E-2, 6.62E-2, 4.88E-2, 1.62E-2, 4.30E-3, 1.08E-3 (rate: —, 0.91, 0.05, 0.44, 1.59, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 2.73E-1, 6.89E-2, 3.41E-2, 3.22E-2, 3.16E-2, 1.49E-2, 4.24E-3 (rate: —, 1.99, 1.02, 0.08, 0.03, 1.08, 1.81)
$${\varepsilon}=1/2^6$$: 1.12E-1, 1.30E-1, 3.46E-2, 1.70E-2, 1.58E-2, 1.63E-2, 1.24E-2 (rate: —, -0.22, 1.91, 1.02, 0.10, -0.04, 0.40)

Table 3 Temporal error analysis at time $$t=1$$ for Case III, i.e., $$\alpha=0$$ and $$\beta=-1$$ (same layout as Table 1).
$${\varepsilon}=1/2$$: 1.24E-1, 3.36E-2, 8.61E-3, 2.17E-3, 5.45E-4, 1.37E-4, 3.43E-5 (rate: —, 1.89, 1.96, 1.99, 1.99, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 7.15E-1, 2.46E-1, 6.75E-2, 1.73E-2, 4.34E-3, 1.09E-3, 2.73E-4 (rate: —, 1.54, 1.87, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 1.13, 1.01, 4.61E-1, 1.34E-1, 3.44E-2, 8.65E-3, 2.17E-3 (rate: —, 0.16, 1.13, 1.79, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 2.07, 1.10, 1.06, 7.79E-1, 2.58E-1, 6.86E-2, 1.73E-2 (rate: —, 0.91, 0.05, 0.44, 1.59, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 8.67, 2.19, 1.09, 1.03, 1.01, 4.77E-1, 1.36E-1 (rate: —, 1.98, 1.01, 0.08, 0.03, 1.08, 1.81)
$${\varepsilon}=1/2^6$$: 7.13, 8.34, 2.21, 1.09, 1.01, 1.04, 7.93E-1 (rate: —, -0.23, 1.91, 1.02, 0.10, -0.04, 0.40)

From these tables and figures, we can draw the following conclusions for the proposed CFDM in the subsonic limit regime:
(i) For the spatial discretization, the method is uniformly second-order accurate w.r.t. $${\varepsilon}\in(0,1]$$ for $$E^{\varepsilon}$$ with any initial data (cf. Figs 2(a) and 3(a)). For $$N^{\varepsilon}$$, it converges uniformly at a quadratic rate in space for Cases I and II (cf. Fig. 2(b)), while the error bound depends on $${\varepsilon}$$ as $$O(h^2/{\varepsilon})$$ for Case III (cf. Fig. 3(b)). This suggests that the spatial part of the error estimate (2.9) is optimal for $$N^{\varepsilon}$$.
(ii) For the temporal discretization, the method is uniformly second-order accurate for $$E^{\varepsilon}$$ when $$\alpha\ge1$$ and $$\beta\ge 0$$ (cf. Fig. 4(a)).
While for $$N^{\varepsilon}$$, the upper triangular part of Table 1 shows an error of order $$O(\tau^2/{\varepsilon})$$ and the lower triangular part an error of order $$O(\tau^2+{\varepsilon}^2)$$ for Case I, i.e., $$\alpha=2$$ and $$\beta=1$$. For Case II, the upper triangular part of Table 2 indicates a convergence order of $$O(\tau^2/{\varepsilon}^2)$$, and the lower triangular part a convergence order of $$O(\tau^2+{\varepsilon})$$. The uniform convergence rate is attained when the two types of estimates are compatible, which is confirmed by the degeneracy of the error for $$n_{\varepsilon}^{\tau,h}$$ listed in Table 4 along the coupled refinement $$\tau^2\sim{\varepsilon}^3$$. For Case III, i.e., $$\alpha=0$$ and $$\beta=-1$$, the upper triangular part of Table 3 shows a convergence order of $$O(\tau^2/{\varepsilon}^3)$$ for $$n_{\varepsilon}^{\tau,h}$$.
Table 4 Degeneracy of the temporal error at time $$t=1$$ for $$N^{\varepsilon}$$ along the coupled refinement $$(\tau,{\varepsilon})=(\tau_0,{\varepsilon}_0)$$, $$(\tau_0/2^3,{\varepsilon}_0/2^2)$$, $$(\tau_0/2^6,{\varepsilon}_0/2^4)$$, $$(\tau_0/2^9,{\varepsilon}_0/2^6)$$ with $$\tau_0=0.2$$ and $${\varepsilon}_0=1$$ (the convergence orders are calculated with respect to the time step $$\tau$$).
$$\alpha=2$$, $$\beta=1$$: $$n_{\varepsilon}^{\tau,h}(1)=$$ 2.61E-1, 1.69E-2, 1.06E-3, 6.65E-5 (order: —, 3.95/3, 3.99/3, 3.99/3)
$$\alpha=1$$, $$\beta=0$$: $$n_{\varepsilon}^{\tau,h}(1)=$$ 2.61E-1, 6.24E-2, 1.62E-2, 4.08E-3 (order: —, 2.06/3, 1.94/3, 1.99/3)
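As a quick consistency check (a sketch, with the spatial error neglected, not spelled out in this form in the tables), note that $$\alpha^\dagger=\min\{\alpha,1+\beta,2\}=2$$ for Case I and $$\alpha^\dagger=1$$ for Case II, while Table 4 keeps $$\tau^2\sim{\varepsilon}^3$$. Along this path the temporal part of the bound (2.9) scales as
$$\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\sim{\varepsilon}^{\alpha^\dagger}\sim\tau^{2\alpha^\dagger/3},$$
i.e., an observed order of $$4/3$$ with respect to $$\tau$$ for Case I and $$2/3$$ for Case II, in agreement with the tabulated values $$3.95/3\approx4/3$$ and $$2.06/3\approx2/3$$.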
It can be observed from the numerical results that the temporal errors are much better than the error estimates predicted in Theorem 2.1. In fact, the numerical results suggest that the first type of error estimate (2.9) is sharp for $$N^{{\varepsilon},k}$$, whereas the second type of error bound (2.10) can be improved to $$\|n^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^\dagger},$$ which yields the uniform $${\varepsilon}$$-independent convergence rate $$\|n^{{\varepsilon},k}\|\lesssim h^2+\tau^{2\alpha^\dagger/3}$$ for $$\alpha\ge 1$$ and $$\beta\ge 0$$. This numerical observation coincides with the one for the Zakharov system (cf. Cai & Yuan, 2017). The reason why the second type of estimate (2.10) is not sharp is that (2.10) is obtained via the limiting KG equation, which may not be the best approximating equation for the KGZ system (1.1) in the subsonic limit. Identifying the sharp asymptotic equation of the KGZ system (1.1) in the subsonic limit regime is a more delicate issue (see, e.g., Daub et al., 2016) than for the Zakharov system, which collapses directly to the cubic nonlinear Schrödinger (NLS) equation since the NLS equation naturally preserves the $$L^2$$ norm. Daub et al. (2016) proved rigorously the convergence of the KGZ system to a modified KG equation defined on a $$d$$-dimensional torus or to the KG equation on the whole space $$\mathbb{R}^d$$ with $$d\ge 3$$ in the subsonic limit. In short, the question of the best asymptotic model for the KGZ system in the subsonic limit is not yet fully settled and is left for future work. Moreover, Fig. 5 displays the errors between the solutions of the KGZ system and of the KG equation for the initial data of Cases I–III, where $$\eta_e^{\varepsilon}(t)=\|E^{\varepsilon}(\cdot,t)-E(\cdot,t)\|_{H^1},\quad \eta_n^{\varepsilon}(t)=\|N^{\varepsilon}(\cdot,t)-N(\cdot,t)\|_{L^2}.$$ One can see that the KGZ system reduces to the KG equation at order $$O({\varepsilon})$$ for $$E^{\varepsilon}$$ and $$O({\varepsilon}^{\alpha^*})$$ for $$N^{\varepsilon}$$, which coincides with assumption (B).
Fig. 5. Errors between the KGZ and KG at time $$t=1$$ for initial data of Cases I–III.
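The two uniform rates quoted above can be traced to a simple crossover argument (a sketch, with the spatial error neglected): for fixed $$\tau$$, the bound (2.9) deteriorates as $${\varepsilon}\rightarrow0^+$$ while the second type of bound improves, so the worst case over $${\varepsilon}\in(0,1]$$ occurs where the two balance,
\begin{align*}
\min\left\{\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\ \tau^2+{\varepsilon}^{\alpha^\dagger}\right\}\lesssim \tau^{2\alpha^\dagger/3}\quad &(\text{balance at } {\varepsilon}\sim\tau^{2/3}),\\
\min\left\{\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\ \tau^2+{\varepsilon}^{\alpha^*}\right\}\lesssim \tau^{2/(4-\alpha^\dagger)}\quad &(\text{balance at } {\varepsilon}\sim\tau^{2/(4-\alpha^\dagger)}).
\end{align*}
The first line, based on the numerically suggested bound $$h^2+\tau^2+{\varepsilon}^{\alpha^\dagger}$$, recovers the improved rate $$\tau^{2\alpha^\dagger/3}$$; the second, based on the proven bound (2.10) with $$\alpha^*=1$$ for $$\alpha\ge1$$ and $$\beta\ge0$$, recovers the rate $$\tau^{2/(4-\alpha^\dagger)}$$ stated in the conclusion. For ill-prepared data ($$\alpha=0$$ or $$\beta=-1$$), no such balance is available and (2.9) alone leads to the meshing strategy $$h=O({\varepsilon}^{1/2})$$, $$\tau=O({\varepsilon}^{3/2})$$ discussed in Remark 4.1 below.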
Remark 4.1 In view of the results presented in this article, the classical CFDM may not be a good choice for solving the KGZ system in the subsonic limit regime, especially for ill-prepared initial data, which require the harsh meshing strategy $$h=O({\varepsilon}^{1/2})$$ and $$\tau=O({\varepsilon}^{3/2})$$. Inspired by the analysis technique applied here, by introducing an asymptotically consistent formulation, we proposed a finite difference method (FDM) that is uniformly accurate at $$O(h^2+\tau)$$ for any initial data (see Bao & Su, 2017a). Similar to the classical FDM, the scheme in Bao & Su (2017a) is semi-implicit, and only a linear system needs to be solved at each time step. The key points behind this improvement are the following: (1) by writing out explicitly the highly oscillatory wave of amplitude $$O(1)$$, which solves a linear wave equation and can be computed separately, the oscillatory unknown $$N^{\varepsilon}$$ of amplitude $$O(1)$$ is replaced by a new unknown $$F^{\varepsilon}$$ of amplitude $$O({\varepsilon})$$; and (2) instead of the KG equation, which plays a crucial role in establishing the error estimate (2.10), a KG equation with an oscillatory potential is identified, which approximates the reformulated KGZ system linearly in the subsonic limit for any initial data. For the details, we refer to Bao & Su (2017a).

5. Conclusion

We analysed a CFDM for discretizing the KGZ system in the subsonic limit regime in $$d$$ ($$d=1, 2, 3$$) dimensions. When $$0<{\varepsilon}\ll1$$, i.e., in the subsonic limit regime, the solution of the KGZ system propagates highly oscillatory waves with wavelength $$O({\varepsilon})$$ in time or rapidly outspreading waves at speed $$O(1/{\varepsilon})$$ in space. For the CFDM, we obtained uniform convergence at the order $$O\left(h^2+\tau^{\frac{2}{4-\alpha^\dagger}}\right)$$ for $$\alpha\ge1$$ and $$\beta\ge 0$$, which, as suggested by the numerical results, can be improved to $$O\left(h^2+\tau^{\frac{2}{3}\alpha^\dagger}\right)$$. For $$\alpha=0$$ or $$\beta=-1$$, the error bound is $$O(h^2/{\varepsilon}+\tau^2/{\varepsilon}^3)$$, which implies that the $${\varepsilon}$$-scalability of the CFDM for the KGZ system is $$h=O({\varepsilon}^{1/2})$$ and $$\tau=O({\varepsilon}^{3/2})$$ for $$0<{\varepsilon}\ll1$$.

Acknowledgements

The author thanks Professors Weizhu Bao and Yongyong Cai for their valuable discussions and suggestions.

References

Bao, W., Cai, Y. & Zhao, X. (2014) A uniformly accurate multiscale time integrator pseudospectral method for the Klein–Gordon equation in the nonrelativistic limit regime. SIAM J. Numer. Anal., 52, 2488–2511.
Bao, W., Dong, X. & Zhao, X. (2013) An exponential wave integrator sine pseudospectral method for the Klein–Gordon–Zakharov system. SIAM J. Sci. Comput., 35, 2903–2927.
Bao, W. & Su, C. (2017a) Uniform error bounds of a finite difference method for the Klein–Gordon–Zakharov system in the subsonic limit regime. Math. Comput. (in press).
Bao, W. & Su, C. (2017b) Uniform error bounds of a finite difference method for the Zakharov system in the subsonic limit regime via an asymptotic consistent formulation. SIAM Multiscale Model. Simul., 15, 977–1002.
Bergé, L., Bidégaray, B. & Colin, T. (1996) A perturbative analysis of the time-envelope approximation in strong Langmuir turbulence. Phys. D, 95, 351–379.
Cai, Y. & Yuan, Y. (2017) Uniform error estimates of the conservative finite difference method for the Zakharov system in the subsonic limit regime. Math. Comput. (in press).
Chang, Q., Guo, B. & Jiang, H. (1995) Finite difference method for generalized Zakharov equations. Math. Comput., 64, 537–553.
Daub, M., Schneider, G. & Schratz, K. (2016) From the Klein–Gordon–Zakharov system to the Klein–Gordon equation. Math. Methods Appl. Sci., 39, 5371–5380.
Dendy, R. O. (1990) Plasma Dynamics. Chicago: Clarendon Press.
Faou, E. & Schratz, K. (2014) Asymptotic preserving schemes for the Klein–Gordon equation in the non-relativistic limit regime. Numer. Math., 126, 441–469.
Masmoudi, N. & Nakanishi, K. (2005) From the Klein–Gordon–Zakharov system to the nonlinear Schrödinger equation. J. Hyperbol. Differ. Equ., 2, 975–1008.
Masmoudi, N. & Nakanishi, K. (2008) Energy convergence for singular limits of Zakharov type systems. Invent. Math., 172, 535–583.
Ozawa, T. & Tsutsumi, Y. (1992) The nonlinear Schrödinger limit and the initial layer of the Zakharov equations. Differ. Integral Equ., 5, 721–745.
Schochet, S. H. & Weinstein, M. I. (1986) The nonlinear Schrödinger limit of the Zakharov equations governing Langmuir turbulence. Commun. Math. Phys., 106, 569–580.
Sulem, C. & Sulem, P. L. (1979) Regularity properties for the equations of Langmuir turbulence. C. R. Acad. Sci. Paris Sér. A Math., 289, 173–176.
Texier, B. (2005) WKB asymptotics for the Euler–Maxwell equations. Asymptot. Anal., 42, 211–250.
Thomée, V. (1997) Galerkin Finite Element Methods for Parabolic Problems. Berlin: Springer.
Wang, T., Chen, J. & Zhang, L. (2007) Conservative difference methods for the Klein–Gordon–Zakharov equations. J. Comput. Appl. Math., 205, 430–452.
Zakharov, V. E. (1972) Collapse of Langmuir waves. Soviet Phys., 35, 908–914.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.


Following the results in the literature (Ozawa & Tsutsumi, 1992; Masmoudi & Nakanishi, 2008; Bao & Su, 2017a), when $$0<{\varepsilon}\ll1$$, the solution of the KGZ system (1.1) propagates waves with wavelength $$O({\varepsilon})$$ and $$O(1)$$ in time and space, respectively (cf. Fig. 1). Furthermore, the amplitude of the initial layer is $$O({\varepsilon}^{\alpha^\dagger})$$, where $$\alpha^\dagger:=\min\{\alpha,1+\beta,2\}$$. Specifically, when $$\alpha\ge 2$$ and $$\beta\ge 1$$, the leading oscillation comes from the `$${\varepsilon}^2{\partial}_{tt}$$' term and otherwise from the incompatibility of the initial data. To illustrate the oscillation further, Fig. 1 shows the solutions $$N^{\varepsilon}(x,1)$$ and $$N^{\varepsilon}(1,t)$$ of the KGZ (1.1) for $$d=1$$, $$E_0(x)=e^{-x^2/2}$$, $$E_1(x)=e^{-x^2/3}\sin x$$, $${\omega}_0(x)=e^{\frac{1}{x^2-18^2}}\cos (2x)\chi_{(-18,18)}$$ with $$\chi$$ the characteristic function, $$\alpha=\beta=0$$ and $${\omega}_1(x)\equiv0$$ in (1.5) for different $${\varepsilon}$$, which was obtained numerically on a bounded computational interval $$[-200,200]$$ with the homogenous Dirichlet boundary condition (see, e.g., Bao et al., 2013). Fig. 1. View largeDownload slide The temporal oscillation of the KGZ system (1.1) and the solution of the KG equation (1.4) for $$d=1$$. Fig. 1. View largeDownload slide The temporal oscillation of the KGZ system (1.1) and the solution of the KG equation (1.4) for $$d=1$$. The highly temporal oscillatory nature in the solution of the KGZ system brings significant numerical difficulties, especially in the subsonic limit regime, i.e., $$0 < {\varepsilon}\ll 1$$. Recently, different numerical methods were proposed and analysed for the efficient computation of the KG equation in the nonrelativistic limit regime (see, e.g., Bao et al., 2014; Faou & Schratz, 2014) and the Zakharov system in the subsonic limit regime (see, e.g., Bao & Su, 2017b; Cai & Yuan, 2017). To our knowledge, so far there are few results on the numerics of the KGZ system in the subsonic limit regime. The main aim of this article is to carry out error analysis for a conservative finite difference method (CFDM) of KGZ system in the subsonic limit regime, i.e., $${\varepsilon} \rightarrow 0^+$$. Particularly, we pay attention to how the error bounds depend explicitly on the small parameter $${\varepsilon}$$, the mesh size $$h$$ and time step $$\tau$$. The rest of the article is organized as follows. In Section 2, we introduce a CFDM and our main results. Section 3 is devoted to the details of the error analysis. Numerical results are reported in Section 4. Some conclusions are drawn in Section 5. Throughout the article, we adopt the standard Sobolev spaces and write $$A\lesssim B$$ to mean that there exists a generic constant $$C$$ independent of $${\varepsilon}$$, $$\tau$$, $$h$$, such that $$|A|\le C\,B$$. 2. The numerical methods and main results For simplicity of notation, we will only present the numerical methods for the KGZ system in one space dimension, and extensions to higher dimensions are straightforward. 
Practically, (1.1) is truncated on a bounded interval $${\it{\Omega}}=(a,b)$$, with zero Dirichlet boundary condition \begin{equation} \label{KGZ1d} \left\{ \begin{aligned} & {\partial}_{tt}E^{\varepsilon}(x,t)-{\partial}_{xx} E^{\varepsilon}(x,t)+E^{\varepsilon}(x,t)+N^{\varepsilon}(x,t)E^{\varepsilon}(x,t)=0,\quad x \in {\it{\Omega}}, \quad t>0,\\ &{\varepsilon}^2 {\partial}_{tt} N^{\varepsilon}(x,t)-{\partial}_{xx} N^{\varepsilon}(x,t)-{\partial}_{xx}|E^{\varepsilon}|^2(x,t)=0, \quad x \in {\it{\Omega}}, \quad t>0,\\ &E^{\varepsilon}(x,0)=E_0(x),\quad {\partial}_t E^{\varepsilon}(x,0)=E_1(x),\quad N^{\varepsilon}(x,0)=N^{\varepsilon}_0(x),\quad {\partial}_t N^{\varepsilon} (x,0)=N^{\varepsilon}_1(x),\quad x \in \overline{{\it{\Omega}}}, \\ &E^{\varepsilon}(a,t)=E^{\varepsilon}(b,t)=0,\quad N^{\varepsilon}(a,t)=N^{\varepsilon}(b,t)=0, \quad t> 0. \end{aligned} \right . \end{equation} (2.1) As $${\varepsilon}\rightarrow 0^+$$, the KGZ (2.1) collapses to the KG equation (see, e.g., Daub et al., 2016) \begin{equation}\label{KG} \left\{ \begin{aligned} &{\partial}_{tt}E(x,t)-{\partial}_{xx}E(x,t)+E(x,t)-E(x,t)^3=0,\quad x\in {\it{\Omega}}, \quad t>0,\\ & E(x,0)=E_0(x),\quad {\partial}_t E(x,0)=E_1(x),\quad x \in \overline{{\it{\Omega}}};\qquad E(a,t)=E(b,t)=0,\quad t>0. \end{aligned} \right. \end{equation} (2.2) Here, we assume that the initial data ($$N^{\varepsilon}_0, N^{\varepsilon}_1$$) satisfies the condition \begin{equation} \label{inic} N^{\varepsilon}_0(x)=-E_0(x)^2+{\varepsilon}^{\alpha} {\omega}_0(x), \quad N^{\varepsilon}_1(x)=-2E_0(x)E_1(x)+{\varepsilon}^{\beta} {\omega}_1(x), \end{equation} (2.3) where $${\alpha}\ge 0$$ and $${\beta}\ge -1$$ are parameters describing the consistency of the initial data of the KGZ system (2.1) with respect to (2.2), and $${\omega}_0(x)$$, $${\omega}_1(x)$$ are independent of $${\varepsilon}$$. 2.1 The finite difference method Choose a mesh size $$h:={\it{\Delta}} x=(b-a)/M$$ with $$M$$ being a positive integer and a time step $$\tau:={\it{\Delta}} t>0$$, and denote grid points and time steps as $$x_j:=a+jh,\quad j=0,1,\ldots,M;\quad t_k:=k\tau,\quad k=0,1,2,\ldots.$$ Define the index sets $$\mathcal {T}_M=\{\,j \ | \ j=1,2,\ldots,M-1\}$$, $$\mathcal{T}_M^0=\{j\ |\ j=0,1,\ldots,M\}$$. Let $$E^{{\varepsilon},k}_j$$, $$N^{{\varepsilon},k}_j$$ and $$E^k_j$$ be the approximations of $$E^{{\varepsilon}}(x_j,t_k)$$, $$N^{\varepsilon}(x_j,t_k)$$ and $$E(x_j,t_k)$$, respectively, and $$E^{{\varepsilon},k}$$, $$N^{{\varepsilon},k}$$, $$E^k \in \mathbb{R}^{(M+1)}$$ be the numerical solution vectors at $$t=t_k$$. The finite difference operators are the standard notations as follows: \begin{align*} &{\delta}_x^+E_j^k=\frac{E_{j+1}^k-E_j^k}{h},\quad {\delta}_t^+E_j^k=\frac{E_j^{k+1}-E_j^k}{\tau},\quad {\delta}_t^c E_j^k=\frac{E_j^{k+1}-E_j^{k-1}}{2\tau},\\ &{\delta}_x^2E_j^k=\frac{E_{j+1}^k-2E_j^k+E_{j-1}^k}{h^2}, \quad {\delta}_t^2E_j^k=\frac{E_j^{k+1}-2E_j^k+E_j^{k-1}}{\tau^2}. \end{align*} In this article, we consider the conservative finite difference discretization of (2.1) as follows \begin{gather}\label{Dpsi1} {\delta}_t^2E_j^{{\varepsilon},k}=\big({\delta}_x^2-1-N_j^{{\varepsilon},k}\big) \frac{E_j^{{\varepsilon},k+1}+E_j^{{\varepsilon},k-1}}{2},\\ \end{gather} (2.4a) \begin{gather}\label{Dpsi2} {\varepsilon}^2{\delta}_t^2N_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2\left(N_j^{{\varepsilon},k+1}+N_j^{{\varepsilon},k-1}\right)+ {\delta}_x^2|E_j^{{\varepsilon},k}|^2, \quad j\in \mathcal{T}_M, \quad k\ge 1. 
\end{gather} (2.4b) This is a semiimplicit finite difference scheme and only a linear system needs to be solved at every time step. Thus, it is more efficient than the CFDM in Wang et al. (2007), where a fully nonlinear coupled system must be solved. Furthermore, the initial and boundary conditions are discretized as \[E_j^{{\varepsilon},0}=E_0(x_j),\quad N_j^{{\varepsilon},0}=-E_0(x_j)^2+{\varepsilon}^{\alpha}{\omega}_0(x_j);\quad E_0^{{\varepsilon},k}=E_M^{{\varepsilon},k}=N_0^{{\varepsilon},k}=N_M^{{\varepsilon},k}=0,\quad k\ge 0.\] Next we consider the value of the first step $$E_j^{{\varepsilon},1}$$ and $$N_j^{{\varepsilon},1}$$. By Taylor expansion, we get \begin{align} E_j^{{\varepsilon},1}&=E_0(x_j)+\tau E_1(x_j)+ \frac{\tau^2}{2}\left[E_0''(x_j)-E_j^{{\varepsilon},0}-N_j^{{\varepsilon},0}E_j^{{\varepsilon},0}\right]\!,\label{psi1}\\ \end{align} (2.5) \begin{align} N_j^{{\varepsilon},1}&= N_j^{{\varepsilon},0}+\tau\left[-2E_j^{{\varepsilon},0}E_1(x_j)+{\varepsilon}^{\beta}{\omega}_1(x_j)\right] +\frac{\tau^2}{2}{\varepsilon}^{{\alpha}-2}{\omega}_0''(x_j).\label{phi0} \end{align} (2.6) When $$0\le \alpha<2$$ or $$-1\le \beta<0$$, the above approximation is not appropriate if $${\varepsilon}\ll 1$$, in which case $$\tau$$ has to be very small to bound $$N_j^{{\varepsilon},1}$$. To get the first step value $$N_j^{{\varepsilon},1}$$ uniformly bounded for $${\varepsilon} \in (0,1]$$, similar to Cai & Yuan (2017), we replace (2.6) by a modified version \begin{equation} N_j^{{\varepsilon},1}=N_j^{{\varepsilon},0}-2\tau E_j^{{\varepsilon},0}E_1(x_j)+{\varepsilon}^{1+{\beta}}\sin\left(\frac{\tau}{{\varepsilon}}\right){\omega}_1(x_j) +2\sin^2{\left(\frac{\tau}{2{\varepsilon}}\right)}{\varepsilon}^{\alpha}{\omega}_0''(x_j).\label{phi1} \end{equation} (2.7) 2.2 Main results For simplicity of notation, denote $$\alpha^*=\min \{1,\alpha,1+\beta\},\quad \alpha^\dagger=\min\{\alpha,1+\beta,2\}.$$ Let $$T^*>0$$ be the maximum common existence time for the solution to the KGZ (2.1) and the solution to the KG (2.2). 
Then for $$0<T<T^*$$, according to the known results in Masmoudi & Nakanishi (2008), Ozawa & Tsutsumi (1992), Schochet & Weinstein (1986), Cai & Yuan (2017) and Bao & Su (2017b), we can assume the exact solution $$(E^{\varepsilon}(x,t), N^{\varepsilon}(x,t))$$ of KGZ (1.1) and the solution $$E(x,t)$$ of KG (2.2) are smooth enough and satisfy \begin{equation*} {\rm (A)}\hskip3cm \begin{split} &\|N^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+\|E^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+\|E_t^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}+ \|E_{tt}^{\varepsilon}\|_{W^{2,\infty}(\it{\Omega})}+\|E\|_{W^{3,\infty}([0,T];W^{2,\infty}(\it{\Omega}))} \lesssim 1,\\ &\|{\partial}_t^3E^{\varepsilon}\|_{W^{1,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{1-\alpha^*}},\quad \|{\partial}_t^4E^{\varepsilon}\|_{L^\infty(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{2-\alpha^\dagger}},\quad \|{\partial}_t N^{\varepsilon}\|_{W^{4,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{1-\alpha^*}},\\ &\|{\partial}_t^mN^{\varepsilon}\|_{W^{2,\infty}(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{m-\alpha^\dagger}}, \quad m=2,\,3; \qquad\|{\partial}_t^m N^{\varepsilon}\|_{L^\infty(\it{\Omega})}\lesssim \frac{1}{{\varepsilon}^{m-\alpha^\dagger}},\quad m=4,\, 5, \end{split} \end{equation*} with the convergence (see, e.g., Daub et al., 2016) \begin{equation*} {\rm (B)}\hskip3cm \|E^{\varepsilon}-E\|_{L^\infty([0,T];H^1(\it{\Omega}))}\lesssim {\varepsilon}, \qquad \|N^{\varepsilon}-N\|_{L^\infty([0,T];L^\infty(\it{\Omega}))}\lesssim {\varepsilon}^{\alpha^*}.\hskip7cm \end{equation*} Furthermore, we assume that the initial data satisfy \begin{equation*} {\rm (C)}\hskip3cm \|E_0\|_{W^{2,\infty}(\it{\Omega})}+\|E_1\|_{L^\infty(\it{\Omega})}+\|{\omega}_0\|_{W^{2,\infty}(\it{\Omega})}+ \|{\omega}_1\|_{L^\infty(\it{\Omega})}\lesssim 1.\hskip6cm \end{equation*} To measure the error between the exact solution and the numerical solution of the KGZ system, we introduce some notations. Denote $$X_M=\{v=(v_j)_{j\in\mathcal{T}_M^0} | \,v_0=v_M=0\} \subseteq \mathbb{R}^{M+1}.$$ The norms and inner products over $$X_M$$ are defined as \begin{align*} (u,v)=h\sum\limits_{j=1}^{M-1}u_j v_j, \quad \langle u,v\rangle=h\sum\limits_{j=0}^{M-1}u_j v_j,\quad \|u\|^2=(u,u), \quad \|{\delta}_x^+ u\|^2=\langle u,u\rangle, \quad \|u\|_\infty=\sup\limits_{j\in \mathcal{T}_M^0}|u_j|. \end{align*} Then, for sequences in $$X_M$$, we have the summation by parts \begin{equation}\label{part-f} (-{\delta}_x^2 u,v)=\langle{\delta}_x^+ u,{\delta}_x^+ v\rangle,\quad ((-{\delta}_x^2)^{-1}u,v)=(u,(-{\delta}_x^2)^{-1}v). \end{equation} (2.8) Define the error functions $$e^{{\varepsilon},k}$$, $$n^{{\varepsilon},k}$$ as $$e^{{\varepsilon},k}_j=E^{\varepsilon}(x_j,t_k)-E_j^{{\varepsilon},k},\quad n_j^{{\varepsilon},k}=N^{\varepsilon}(x_j,t_k)-N_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 0.$$ Then we have the following error estimates. 
Theorem 2.1 Under assumptions (A)–(C), there exist $$h_0>0$$ and $$\tau_0>0$$ sufficiently small, when $$0<h\le h_0$$ and $$0<\tau\le \tau_0$$, the scheme (2.4) with (2.5)–(2.7) satisfies the error estimates \begin{align} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|&\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2} {{\varepsilon}^{3-\alpha^\dagger}},\quad 0\le k\le \frac{T}{\tau},\quad 0<{\varepsilon}\le1,\label{esti1}\\ \end{align} (2.9) \begin{align} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|&\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\quad 0\le k\le \frac{T}{\tau},\quad 0<{\varepsilon}\le1.\label{esti2} \end{align} (2.10) Hence, for $$\alpha>0$$ and $$\beta>-1$$, we have the uniform $${\varepsilon}$$-independent convergence rate as \begin{equation}\label{esti3} \|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|+\|n^{{\varepsilon},k}\|\lesssim \left\{ \begin{aligned} & h^2+\tau^{\frac{2}{4-\alpha^\dagger}},\quad \,\,\,\,\,\alpha\ge 1, \quad\mathrm{and}\quad \beta\ge 0,\\ & (h^2+\tau^2)^{\frac{1}{3}\alpha^*},\quad 0<\alpha<1,\quad\mathrm{or}\quad -1<\beta<0, \end{aligned}\right. \end{equation} (2.11) and for initial data leading to $$O(1)$$-amplitude initial layer ($$\alpha=0$$ or $$\beta=-1$$), we can get the meshing strategy from (2.9) that $$h=O({\varepsilon}^{1/2})$$, $$\tau=O({\varepsilon}^{3/2})$$. 3. Energy conservation and error estimates 3.1 Energy conservation Proposition 3.1 For the scheme (2.4), the following quantities are conserved, \begin{align*} \mathcal{H}^k&=\|{\delta}_t^+E^{{\varepsilon},k}\|^2+\frac{1}{2}\left(\|{\delta}_x^+E^{{\varepsilon},k}\|^2+\|{\delta}_x^+ E^{{\varepsilon},k+1}\|^2+\|E^{{\varepsilon},k}\|^2+\|E^{{\varepsilon},k+1}\|^2\right) +\frac{{\varepsilon}^2}{2}\|{\delta}_x^+\varphi^{{\varepsilon},k+\frac{1}{2}}\|^2\\ &\quad+\frac{1}{4}\left(\|N^{{\varepsilon},k}\|^2 +\|N^{{\varepsilon},k+1}\|^2\right)+\frac{1}{2}\big[(N^{{\varepsilon},k},|E^{{\varepsilon},k+1}|^2) +\left(N^{{\varepsilon},k+1},|E^{{\varepsilon},k}|^2\right)\big]\equiv \mathcal{H}^0, \end{align*} where $$\varphi^{{\varepsilon},k+1/2}\in X_M$$ is the numerical approximation of $$\varphi^{\varepsilon}(\cdot,t_{k+1/2})$$, which is the solution of \[{\delta}_x^2\varphi_j^{{\varepsilon},k+1/2}={\delta}_t^+N_j^{{\varepsilon},k},\quad j\in\mathcal{T}_M,\quad k\ge 0.\] In particular, we have the a priori bound for $$E^{{\varepsilon},k}$$ in one-dimension as follows \begin{equation}\label{bound} \|E^{{\varepsilon},k}\|_\infty\lesssim \sqrt{\|E^{{\varepsilon},k}\|\|{\delta}_x^+E^{{\varepsilon},k}\|} \le C_e,\quad k\ge 0. \end{equation} (3.1) Proof. Multiplying (2.4a) by $$E_{j}^{{\varepsilon},k+1}-E_{j}^{{\varepsilon},k-1}$$, summing all together for $$j \in \mathcal{T}_M$$, one has \begin{equation}\label{first} \begin{split} &\|{\delta}_t^+E^{{\varepsilon},k}\|^2-\|{\delta}_t^+E^{{\varepsilon},k-1}\|^2+\frac{1}{2} (\|E^{{\varepsilon},k+1}\|^2-\|E^{{\varepsilon},k-1}\|^2)\\ &\quad+\frac{1}{2} \left(\|{\delta}_x^+E^{{\varepsilon},k+1}\|^2-\|{\delta}_x^+E^{{\varepsilon},k-1}\|^2\right)+\frac{1}{2}\left(N^{{\varepsilon},k}, |E^{{\varepsilon},k+1}|^2-|E^{{\varepsilon},k-1}|^2\right)=0. 
\end{split} \end{equation} (3.2) Similarly, multiplying (2.4b) by $$\tau*\big({\varphi}_j^{{\varepsilon},k-1/2}+{\varphi}_j^{{\varepsilon},k+1/2}\big)$$, summing for all $$j\in \mathcal{T}_M$$, we obtain \begin{equation}\label{sec} {\varepsilon}^2\left(\|{\delta}_x^+{\varphi}^{{\varepsilon},k+\frac{1}{2}}\|^2-\|{\delta}_x^+{\varphi}^{{\varepsilon},k-\frac{1}{2}}\|^2\right) +\frac{1}{2}\left(\|N^{{\varepsilon},k+1}\|^2-\|N^{{\varepsilon},k-1}\|^2\right) +\left(|E^{{\varepsilon},k}|^2,2\tau{\delta}_t^c N^{{\varepsilon},k}\right)=0. \end{equation} (3.3) Summing (3.2) and $$\frac{1}{2}$$ (3.3), we get that \begin{align*} &\|{\delta}_t^+E^{{\varepsilon},k}\|^2+\frac{1}{2} \|E^{{\varepsilon},k+1}\|^2+\frac{1}{2}\|{\delta}_x^+E^{{\varepsilon},k+1}\|^2+ \frac{{\varepsilon}^2}{2}\|{\delta}_x^+{\varphi}^{{\varepsilon},k+\frac{1}{2}}\|^2 +\frac{1}{4}\|N^{{\varepsilon},k+1}\|^2\\ &\quad=\|{\delta}_t^+E^{{\varepsilon},k-1}\|^2+\frac{1}{2} \|E^{{\varepsilon},k-1}\|^2+\frac{1}{2}\|{\delta}_x^+E^{{\varepsilon},k-1}\|^2+ \frac{{\varepsilon}^2}{2}\|{\delta}_x^+{\varphi}^{{\varepsilon},k-\frac{1}{2}}\|^2 +\frac{1}{4}\|N^{{\varepsilon},k-1}\|^2\\ &\qquad+\frac{1}{2}\big[(N^{{\varepsilon},k-1},|E^{{\varepsilon},k}|^2)+(N^{{\varepsilon},k},|E^{{\varepsilon},k-1}|^2)\big] -\frac{1}{2}\big[(N^{{\varepsilon},k},|E^{{\varepsilon},k+1}|^2)+(N^{{\varepsilon},k+1},|E^{{\varepsilon},k}|^2)\big]. \end{align*} Then, the conclusion is provided by summing the above equation from $$1$$ to $$k$$. It follows from (2.5) and (2.7) that \begin{align*} &\|{\delta}_t^+E^{{\varepsilon},0}\|\lesssim \|E_1\|_{L^\infty}+\tau\left(\|E_0''\|_{L^\infty}+ \|E_0\|_{L^\infty}+\|E_0\|^3_{L^\infty}\right)+ {\varepsilon}^{\alpha}\tau\|E_0\|_{L^\infty}\|{\omega}_0\|_{L^\infty},\\ &\|{\delta}_x^+{\varphi}^{{\varepsilon},1/2}\|\lesssim \|{\delta}_t^+N^{{\varepsilon},0}\| \lesssim \|E_0E_1\|_{L^\infty}+{\varepsilon}^\beta\|{\omega}_1\|_{L^\infty}+{\varepsilon}^{\alpha-1}\|{\omega}_0''\|_{L^\infty}, \end{align*} which implies that $$\mathcal{H}^0$$ is uniformly bounded for $${\varepsilon} \in (0,1]$$ by assumption (C). Thus, the a priori bounds for $$E^{{\varepsilon},k}$$ can be obtained by a standard argument (see, e.g., Chang et al., 1995). □ 3.2 Error analysis It can be concluded from Proposition 3.1 that $$\|E^{{\varepsilon},k}\|_\infty$$ is uniformly bounded; however, similar bounds cannot be obtained for higher dimensions or nonconservative schemes. In Thomée (1997) and Cai & Yuan (2017), this difficulty was overcome by truncating the nonlinearity to a global Lipschitz function with compact support in $$d$$-dimensions. The idea is as follows. Choose a function $$\rho(s)\in C^\infty(\mathbb{R})$$ such that $$\rho(s)= \left\{ \begin{aligned} &1, \quad &|s|\le 1,\\ &\in[0,1],\quad &|s|\le 2,\\ &0,\quad &|s|\ge 2. \\ \end{aligned} \right. $$ Set $$M_0'=\max \{\|E(x,t)\|_{L^\infty(\it{\Omega}_T)}, \sup\limits_{{\varepsilon}\in(0,1]} \|E^{\varepsilon}(x,t)\|_{L^\infty(\it{\Omega}_T)}\}, \qquad \it{\Omega}_T=\it{\Omega}\times [0,T],$$ which is well defined by assumption (A). Denote $$M_0=\max\{M_0',C_e\}$$ for $$d=1$$ and $$M_0=M_0'$$ for $$d=2, 3$$. For $$s\ge 0$$, $$y_1,y_2\in\mathbb{R}$$, define \[\rho_B(s)=s\rho(s/B),\quad B=(M_0+1)^2,\quad g(y_1,y_2)=\frac{y_1+y_2}{2}\int_0^1 \rho_B'(sy_1^2+(1-s)y_2^2)\,{\rm{d}}s.\] Then $$\rho_B(s)$$ is global Lipschitz and there exists $$C_B>0$$, such that \begin{equation}\label{f_B} |\rho_B(s_1)-\rho_B(s_2)|\le \sqrt{C_B}|\sqrt{s_1}-\sqrt{s_2}|\quad \forall s_1, s_2 \ge 0. 
\end{equation} (3.4) Set $$\widehat{E}^{{\varepsilon},0}=E^{{\varepsilon},0}$$, $$\widehat{N}^{{\varepsilon},0}=N^{{\varepsilon},0}$$, $$\widehat{E}^{{\varepsilon},1}=E^{{\varepsilon},1}$$, $$\widehat{N}^{{\varepsilon},1}=N^{{\varepsilon},1}$$ and determine $$\widehat{E}^{{\varepsilon},k}$$, $$\widehat{N}^{{\varepsilon},k}\in X_M$$ as follows \begin{equation}\label{scheme} \begin{aligned} &{\delta}_t^2\widehat{E}^{{\varepsilon},k}_j=({\delta}_x^2-1)\frac{\widehat{E}_j^{{\varepsilon},k+1}+\widehat{E}_j^{{\varepsilon},k-1}}{2} -\widehat{N}_j^{{\varepsilon},k}g\left(\widehat{E}_j^{{\varepsilon},k+1}, \widehat{E}_j^{{\varepsilon},k-1}\right)\!,\\ &{\varepsilon}^2{\delta}_t^2\widehat{N}_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2\left(\widehat{N}_j^{{\varepsilon},k+1}+ \widehat{N}_j^{{\varepsilon},k-1}\right)+{\delta}_x^2\rho_B\left(|\widehat{E}_j^{{\varepsilon},k}|^2\right)\!,\quad k\ge 1. \end{aligned} \end{equation} (3.5) Here, $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$ can be viewed as another approximation of $$(E^{\varepsilon}(x_j,t_k),N^{\varepsilon}(x_j,t_k))$$. Next we will prove two types of estimates for $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$ in Theorem 2.1. Define the error function $$\widehat{e}^{\,{\varepsilon},k}$$, $$\widehat{n}^{{\varepsilon},k}\in X_M$$ as \[\widehat{e}_j^{{\varepsilon},k}=E^{\varepsilon}(x_j,t_k)-\widehat{E}^{{\varepsilon},k}_j,\quad \widehat{n}_j^{{\varepsilon},k}=N^{{\varepsilon}}(x_j,t_k)-\widehat{N}^{{\varepsilon},k}_j, \quad j\in \mathcal{T}_M.\] Regarding the error bounds on $$(\widehat{e}^{\,{\varepsilon},k}, \widehat{n}^{{\varepsilon},k})$$, we have the following estimates. Theorem 3.2 Under assumptions (A) and (C), there exists $$\tau_1>0$$ independent of $${\varepsilon}$$ such that, when $$0<\tau\le \tau_1$$, we have the following error estimate for the scheme (3.5): \begin{equation}\label{mes1} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\quad 0\le k\le\frac{T}{\tau},\quad 0<{\varepsilon}\le1. \end{equation} (3.6) Introduce the local truncation error $$\widehat{\xi}^{{\varepsilon},k}$$, $$\widehat{\eta}^{{\varepsilon},k}\in X_M$$ for $$k\ge 1$$ as \begin{equation}\label{local2} \begin{split} \widehat{\xi}_j^{{\varepsilon},k}&={\delta}_t^2E^{\varepsilon}(x_j,t_k)+\big[-{\delta}_x^2+1+N^{\varepsilon}(x_j,t_k)\big] \frac{E^{\varepsilon}(x_j,t_{k+1})+E^{\varepsilon}(x_j,t_{k-1})}{2},\\ \widehat{\eta}_j^{{\varepsilon},k}&={\varepsilon}^2{\delta}_t^2N^{\varepsilon}(x_j,t_k)-\frac{1}{2}{\delta}_x^2\big[N^{\varepsilon}(x_j,t_{k+1}) +N^{\varepsilon}(x_j,t_{k-1})\big]-{\delta}_x^2|E^{\varepsilon}(x_j,t_k)|^2. \end{split} \end{equation} (3.7) Then, we have the local truncation error as follows. Lemma 3.3 Under the assumption (A), we have $$\|\widehat{\xi}^{{\varepsilon},k}\|+ \|\widehat{\eta}^{{\varepsilon},k}\|\lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}},\quad 1\le k\le \frac{T}{\tau}-1;\quad \|{\delta}_t^c\widehat{\eta}^{{\varepsilon},k}\|\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\quad 2\le k\le \frac{T}{\tau}-2.$$ Proof. 
Applying Taylor expansion, we have \begin{align*} \widehat{\xi}_j^{{\varepsilon},k}=\,&\frac{\tau^2}{6}\int_{-1}^1(1-|s|)^3{\partial}_t^4E^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s -\frac{h^2}{12}\int_{-1}^1(1-|s|)^3\sum\limits_{m=\pm1}{\partial}_x^4E^{\varepsilon}(x_j+sh,t_k+m \tau)\,{\rm{d}}s\\ &+\frac{\tau^2}{2}\int_{-1}^1(1-|s|)\big[ \left(1+N^{\varepsilon}(x_j,t_k)\right)E_{tt}^{\varepsilon}(x_j,t_k+s\tau)-{\partial}_x^2E_{tt}^{\varepsilon}(x_j,t_k+s\tau) \big]\,{\rm{d}}s. \end{align*} Hence, \[ |\widehat{\xi}_j^{{\varepsilon},k}|\lesssim h^2\|{\partial}_x^4E^{\varepsilon}\|_{L^\infty}+\tau^2 \left((1+\|N^{\varepsilon}\|_{L^\infty})\|E^{\varepsilon}_{tt}\|_{L^\infty} +\|{\partial}_t^4E^{\varepsilon}\|_{L^\infty}+\|{\partial}_x^2E^{\varepsilon}_{tt}\|_{L^\infty}\right) \lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}.\] Applying the same approach, we derive that \begin{align*} \widehat{\eta}_j^{{\varepsilon},k}&=\frac{{\varepsilon}^2\tau^2}{6}\int_{-1}^1(1-|s|)^3 {\partial}_t^4N^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s-\frac{\tau^2}{2}\int_{-1}^1(1-|s|){\partial}_x^2N_{tt}^{\varepsilon}(x_j,t_k+s\tau)\,{\rm{d}}s\\ &\quad-\frac{h^2}{12}\int_{-1}^1(1-|s|)^3\Big[{\partial}_x^4|E^{\varepsilon}|^2(x_j+sh,t_k)+\sum\limits_{m=\pm1} {\partial}_x^4N^{\varepsilon}(x_j+sh,t_k+m\tau)\Big]\,{\rm{d}}s, \end{align*} which implies that \[ |\widehat{\eta}_j^{{\varepsilon},k}|\lesssim h^2(\|{\partial}_x^4N^{\varepsilon}\|_{L^\infty}+\|{\partial}_x^4|E^{\varepsilon}|^2\|_{L^\infty}) +\tau^2(\|{\partial}_x^2N_{tt}^{\varepsilon}\|_{L^\infty}+{\varepsilon}^2\|{\partial}_t^4N^{\varepsilon}\|_{L^\infty}) \lesssim h^2+\frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}.\] Applying $${\delta}_t^c$$ to $$\widehat{\eta}_j^{{\varepsilon},k}$$ for $$2\le k\le \frac{T}{\tau}-1$$, one can deduce that \[ |{\delta}_t^c\widehat{\eta}_j^{{\varepsilon},k}|\lesssim h^2(\|{\partial}_x^4N^{\varepsilon}_t\|_{L^\infty}+\|{\partial}_x^4|E^{\varepsilon}|_t^2\|_{L^\infty}) +\tau^2(\|{\partial}_t^3N_{xx}^{\varepsilon}\|_{L^\infty}+{\varepsilon}^2\|{\partial}_t^5N^{\varepsilon}\|_{L^\infty})\lesssim \frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}},\] which completes the proof. □ For the initial step, we have the following estimates. Lemma 3.4 Under assumptions (A) and (C), the error of the first step (2.5) and (2.7) satisfies \[\widehat{e}_j^{{\varepsilon},0}=\widehat{n}_j^{{\varepsilon},0}=0,\quad |\widehat{e}_j^{{\varepsilon},1}|+|{\delta}_x^+\widehat{e}_j^{{\varepsilon},1}|\lesssim \frac{\tau^3}{{\varepsilon}^{1-\alpha^*}},\quad |{\delta}_t^+\widehat{e}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{1-\alpha^*}},\quad |\widehat{n}_j^{{\varepsilon},1}|\lesssim \frac{\tau^3}{{\varepsilon}^{3-\alpha^\dagger}},\quad |{\delta}_t^+\widehat{n}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}.\] Proof. By the definition of $$\widehat{E}^{{\varepsilon},1}_j$$, we have \[ |\widehat{e}_j^{{\varepsilon},1}|= \left|\frac{\tau^3}{2}\int_0^1(1-s)^2E_{ttt}^{\varepsilon}(x_j,s\tau)\,{\rm{d}}s\right| \lesssim\tau^3\|E_{ttt}^{\varepsilon}\|_{L^\infty}\lesssim \frac{\tau^3}{{\varepsilon}^{1-\alpha^*}},\] which immediately gives $$|{\delta}_t^+\widehat{e}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{1-\alpha^*}}$$ and $$|{\delta}_x^+e_j^{{\varepsilon},1}|\lesssim \frac{ \tau^3}{{\varepsilon}^{1-\alpha^*}}$$. 
Similarly, \begin{align*} |\widehat{n}_j^{{\varepsilon},1}|&=\left|\frac{\tau^3}{2}\int_0^1(1-s)^2N_{ttt}^{\varepsilon}(x_j,s\tau)\,{\rm{d}}s +{\varepsilon}^{\beta}\left[\tau-{\varepsilon}\sin\left(\frac{\tau}{{\varepsilon}}\right)\right]{\omega}_1(x_j)+ {\varepsilon}^\alpha\left[\frac{\tau^2}{2{\varepsilon}^2}-2\sin^2\left(\frac{\tau}{2{\varepsilon}}\right)\right]{\omega}_0''(x_j)\right|\\ &\lesssim \tau^3\|N_{ttt}^{\varepsilon}\|_{L^\infty}+\tau^3{\varepsilon}^{{\beta}-2}\|{\omega}_1\|_{L^\infty} +\tau^3{\varepsilon}^{\alpha-3}\|{\omega}_0''\|_{L^\infty}\lesssim\frac{\tau^3}{{\varepsilon}^{3-\alpha^\dagger}}, \end{align*} which implies that $$|{\delta}_t^+\widehat{n}_j^{{\varepsilon},0}|\lesssim \frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}$$. □ Proof of Theorem 3.2. Subtracting (3.5) from (3.7), we obtain the error equations \begin{align} &{\delta}_t^2\widehat{e}^{\,{\varepsilon},k}_j=\frac{{\delta}_x^2-1}{2} (\widehat{e}^{{\varepsilon},k+1}_j+\widehat{e}^{{\varepsilon},k-1}_j)-\widehat{r}_j^{{\varepsilon},k}+\widehat{\xi}_j^{{\varepsilon},k},\label{e_eq}\\ \end{align} (3.8) \begin{align} &{\varepsilon}^2{\delta}_t^2\widehat{n}_j^{{\varepsilon},k}=\frac{1}{2}{\delta}_x^2(\widehat{n}_j^{{\varepsilon},k+1}+\widehat{n}_j^{{\varepsilon},k-1}) +{\delta}_x^2\widehat{p}_j^{{\varepsilon},k}+\widehat{\eta}_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 1,\label{n_eq} \end{align} (3.9) where \begin{align*} \widehat{r}_j^{{\varepsilon},k}&=N^{\varepsilon}(x_j,t_k)g\left(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1})\right) -\widehat{N}_j^{{\varepsilon},k}g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}),\\ \widehat{p}_j^{{\varepsilon},k}&=\rho_B(E^{\varepsilon}(x_j,t_k)^2)-\rho_B(|\widehat{E}_j^{{\varepsilon},k}|^2). \end{align*} Recalling the definition of $$\rho_B$$ (cf. (3.4)), one can easily get that \begin{equation}\label{p_k} |\widehat{p}_j^{{\varepsilon},k}|\le \sqrt{C_B} |\widehat{e}_j^{{\varepsilon},k}|, \quad j \in \mathcal{T}_M,\quad k\ge 0. \end{equation} (3.10) As for the properties about $$g$$, similar to Cai & Yuan (2017), we have $$|g(\widehat{E}^{{\varepsilon},k+1}_j,\widehat{E}_j^{{\varepsilon},k-1})|\lesssim 1$$, and \begin{equation}\label{g_pro} |g(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1}))-g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1})| \lesssim |\widehat{e}_j^{{\varepsilon},k+1}|+|\widehat{e}_j^{{\varepsilon},k-1}|. \end{equation} (3.11) By the definition of $$\widehat{r}^{{\varepsilon},k}$$, similar to Cai & Yuan (2017), one can derive that \begin{equation}\label{r_bound} |\widehat{r}_j^{{\varepsilon},k}|\lesssim |\widehat{n}_j^{{\varepsilon},k}|+|\widehat{e}^{{\varepsilon},k+1}_j|+|\widehat{e}^{{\varepsilon},k-1}_j|. \end{equation} (3.12) Multiplying both sides of (3.8) by $$\widehat{e}_j^{{\varepsilon},k+1}-\widehat{e}_j^{{\varepsilon},k-1}$$, summing together for $$j\in \mathcal{T}_M$$, we obtain \begin{equation}\label{e_eq2} \begin{aligned}[b] &\|{\delta}_t^+\widehat{e}^{\,{\varepsilon},k}\|^2+\frac{1}{2}\|{\delta}_x^+\widehat{e}^{{\varepsilon},k+1}\|^2+\frac{1}{2} \|\widehat{e}^{{\varepsilon},k+1}\|^2\\ &\quad=\|{\delta}_t^+\widehat{e}^{{\varepsilon},k-1}\|^2+ \frac{1}{2}\|{\delta}_x^+\widehat{e}^{{\varepsilon},k-1}\|^2+\frac{1}{2} \|\widehat{e}^{{\varepsilon},k-1}\|^2+(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k}, \widehat{e}^{{\varepsilon},k+1}-\widehat{e}^{{\varepsilon},k-1}). 
\end{aligned} \end{equation} (3.13) To estimate $$\|\widehat{n}^{{\varepsilon},k}\|$$, introduce $$\widehat{u}^{{\varepsilon},k+1/2}\in X_M$$ satisfying $${\delta}_x^2\widehat{u}_j^{{\varepsilon},k+1/2}={\delta}_t^+\widehat{n}_j^{{\varepsilon},k} (k\ge 0)$$. Multiplying both sides of (3.9) by $$\tau(\widehat{u}_j^{{\varepsilon},k+1/2}+ \widehat{u}_j^{{\varepsilon},k-1/2})$$ and summing for $$j\in \mathcal{T}_M$$, we have \begin{equation}\label{n_eq2} \begin{aligned}[b] &{\varepsilon}^2(\|{\delta}_x^+\widehat{u}^{{\varepsilon},k+1/2}\|^2-\|{\delta}_x^+\widehat{u}^{{\varepsilon},k-1/2}\|^2) +\frac{1}{2}(\|\widehat{n}^{{\varepsilon},k+1}\|^2-\|\widehat{n}^{{\varepsilon},k-1}\|^2)\\ &\quad=-(\widehat{p}^{{\varepsilon},k}, \widehat{n}^{{\varepsilon},k+1}-\widehat{n}^{{\varepsilon},k-1}) -\tau(\widehat{\eta}^{{\varepsilon},k},\widehat{u}^{{\varepsilon},k+1/2}+\widehat{u}^{{\varepsilon},k-1/2}),\quad 1\le k\le \frac T\tau-1. \end{aligned} \end{equation} (3.14) Introduce a discrete ‘energy’ by \begin{equation}\label{energy_d} \begin{split} \widehat{\mathcal{A}}^{{\varepsilon},k}&=6C_B\|{\delta}_t^+\widehat{e}^{\,{\varepsilon},k}\|^2+3C_B (\|\widehat{e}^{\,{\varepsilon},k}\|^2+ \|\widehat{e}^{{\varepsilon},k+1}\|^2+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|^2+\|{\delta}_x^+ \widehat{e}^{{\varepsilon},k+1}\|^2)\\ &\quad +{\varepsilon}^2\|{\delta}_x^+\widehat{u}^{{\varepsilon},k+1/2}\|^2 +\frac{1}{2}(\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2),\quad 1\le k\le \frac T\tau-1. \end{split} \end{equation} (3.15) Combining $$6C_B\times$$(3.13)$$\,+\,$$(3.14), we get for $$1\le k\le \frac T\tau-1$$ \begin{equation}\label{sp} \widehat{\mathcal{A}}^{{\varepsilon},k}-\widehat{\mathcal{A}}^{{\varepsilon},k-1}=6C_B(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k}, 2\tau{\delta}_t^c\widehat{e}^{\,{\varepsilon},k}) -(\widehat{p}^{{\varepsilon},k},2\tau{\delta}_t^c\widehat{n}^{{\varepsilon},k})- \tau(\widehat{\eta}^{{\varepsilon},k},\widehat{u}^{{\varepsilon},k+1/2}+\widehat{u}^{{\varepsilon},k-1/2}). \end{equation} (3.16) We estimate the terms in (3.16). First, by the Cauchy inequality and (3.12), one has \begin{equation}\label{xi_b} \big|\!\left(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k},2\tau{\delta}_t^c\widehat{e}^{\,{\varepsilon},k}\right)\!\big| =\tau\big|(\widehat{\xi}^{{\varepsilon},k}-\widehat{r}^{{\varepsilon},k},{\delta}_t^+(\widehat{e}^{\,{\varepsilon},k}+\widehat{e}^{{\varepsilon},k-1}))\big| \lesssim \tau\big(\|\widehat{\xi}^{{\varepsilon},k}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},k}+\widehat{\mathcal{A}}^{{\varepsilon},k-1}\big).
\end{equation} (3.17) On the other hand, \[ \sum\limits_{i=1}^k(\widehat{p}^{{\varepsilon},i},2\tau{\delta}_t^c\widehat{n}^{{\varepsilon},i})= (\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})-(\widehat{p}^{{\varepsilon},1},\widehat{n}^{{\varepsilon},0})- (\widehat{p}^{{\varepsilon},0},\widehat{n}^{{\varepsilon},1})-\sum\limits_{i=1}^{k}(\widehat{n}^{{\varepsilon},i},2\tau{\delta}_t^c \widehat{p}^{{\varepsilon},i}).\] It follows easily from (3.11) and assumption (A) that \begin{align*} 2\tau {\delta}_t^c \widehat{p}_j^{{\varepsilon},k}&=4\tau g\left(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1})\right){\delta}_t^c E^{\varepsilon}(x_j,t_k)-4\tau g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}){\delta}_t^c\widehat{E}_j^{{\varepsilon},k}\\ &=4\tau g(\widehat{E}_j^{{\varepsilon},k+1},\widehat{E}_j^{{\varepsilon},k-1}){\delta}_t^c \widehat{e}_j^{{\varepsilon},k} +4\tau\big[g(E^{\varepsilon}(x_j,t_{k+1}),E^{\varepsilon}(x_j,t_{k-1}))-g(\widehat{E}_j^{{\varepsilon},k+1}, \widehat{E}_j^{{\varepsilon},k-1})\big]{\delta}_t^c E^{\varepsilon}(x_j,t_{k})\\ &\lesssim\tau \|E_t^{\varepsilon}\|_{L^\infty}\big(|\widehat{e}_j^{{\varepsilon},k+1}|+|\widehat{e}_j^{{\varepsilon},k-1}|\big) +\tau\big(|{\delta}_t^+\widehat{e}_j^{{\varepsilon},k}|+|{\delta}_t^+\widehat{e}_j^{{\varepsilon},k-1}|\big), \end{align*} which yields for $$1\le k\le\frac T\tau-1$$ \begin{equation}\label{est-p} (\widehat{n}^{{\varepsilon},k},2\tau{\delta}_t^c \widehat{p}^{{\varepsilon},k})\lesssim \tau\left[\|\widehat{n}^{{\varepsilon},k}\|^2+\sum\limits_{l=k-1}^k\|{\delta}_t^+\widehat{e}^{{\varepsilon},l}\|^2+ \sum\limits_{l=k\pm1}\|\widehat{e}^{{\varepsilon},l}\|^2\right]\lesssim \tau(\widehat{\mathcal{A}}^{{\varepsilon},k}+\widehat{\mathcal{A}}^{{\varepsilon},k-1}). \end{equation} (3.18) Similar to Bao & Su (2017b), it can be established from the Sobolev and Cauchy inequalities that \begin{align} \tau\left|\sum\limits_{i=1}^k\left(\widehat{\eta}^{{\varepsilon},i},\widehat{u}^{{\varepsilon},i+1/2}+ \widehat{u}^{{\varepsilon},i-1/2}\right)\right|&\le \frac{\widehat{\mathcal{A}}^{{\varepsilon},k}}{4}+C\left[ \widehat{\mathcal{A}}^{{\varepsilon},0}+ \sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \sum\limits_{i=k-1}^k\|\widehat{\eta}^{{\varepsilon},i}\|^2\nonumber\right.\\ &\left.\quad +\tau\sum\limits_{i=2}^{k-1} (\widehat{\mathcal{A}}^{{\varepsilon},i}+\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2)\right].\label{u_bound} \end{align} (3.19) Summing (3.16) over the time steps $$1,2,\cdots,k<\frac{T}{\tau}$$ and applying (3.17)–(3.19), we obtain \begin{equation}\label{sp2} \begin{split} &\frac{3}{4}\widehat{\mathcal{A}}^{{\varepsilon},k}+(\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})\\ &\quad\lesssim \widehat{\mathcal{A}}^{{\varepsilon},0}+\tau\sum\limits_{i=1}^{k}\left(\|\widehat{\xi}^{{\varepsilon},i}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},i}\right) +\sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2.
\end{split} \end{equation} (3.20) Note that by Cauchy inequality and (3.10), we have \begin{align*} &|(\widehat{p}^{{\varepsilon},k},\widehat{n}^{{\varepsilon},k+1})+(\widehat{p}^{{\varepsilon},k+1},\widehat{n}^{{\varepsilon},k})|\le \frac{\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2}{4}+\|\widehat{p}^{{\varepsilon},k}\|^2+\|\widehat{p}^{{\varepsilon},k+1}\|^2\\ &\quad\le\frac{\|\widehat{n}^{{\varepsilon},k}\|^2+\|\widehat{n}^{{\varepsilon},k+1}\|^2}{4}+C_B\left(\|\widehat{e}^{\,{\varepsilon},k}\|^2 +\|\widehat{e}^{{\varepsilon},k+1}\|^2\right)\le \frac{\widehat{\mathcal{A}}^{{\varepsilon},k}}{2}, \end{align*} which together with (3.20) gives that $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \widehat{\mathcal{A}}^{{\varepsilon},0}+\tau\sum\limits_{i=1}^{k}\big(\|\widehat{\xi}^{{\varepsilon},i}\|^2+\widehat{\mathcal{A}}^{{\varepsilon},i}\big) +\sum\limits_{i=1}^2\|\widehat{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widehat{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widehat{\eta}^{{\varepsilon},i}\|^2.$$ Applying the discrete Sobolev inequality and Lemma 3.4, we derive \begin{equation}\label{u-i} {\varepsilon}\|{\delta}_x^+\widehat{u}^{{\varepsilon},1/2}\|\lesssim {\varepsilon} \|{\delta}_t^+\widehat{n}^{{\varepsilon},0}\|\lesssim \frac{\tau^2}{{\varepsilon}^{2-\alpha^\dagger}}, \end{equation} (3.21) which together with Lemma 3.4 yields that $$\widehat{\mathcal{A}}^{{\varepsilon},0}\lesssim \big(\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\big)^2$$. Applying Lemma 3.3, we get that $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \tau\sum\limits_{i=1}^k\widehat{\mathcal{A}}^{{\varepsilon},i}+ \Big(\frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\Big)^2.$$ Hence, by discrete Gronwall inequality, there exists $$\tau_1>0$$ such that when $$0<\tau\le \tau_1$$, we have $$\widehat{\mathcal{A}}^{{\varepsilon},k}\lesssim \big(\frac{h^2}{{\varepsilon}^{1-\alpha^*}}+\frac{\tau^2}{{\varepsilon}^{3-\alpha^\dagger}}\big)^2$$, which gives (3.6) directly. □ Next, we prove the second type error estimate for the numerical approximation $$(\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k})$$. Theorem 3.5 Under assumptions (A)–(C), there exists $$\tau_2>0$$ sufficiently small and independent of $${\varepsilon}$$ such that, when $$0<\tau\le \tau_2$$, we have the following error estimate for (3.5): \begin{equation}\label{mes2} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\quad 0\le k\le\frac{T}{\tau},\quad 0<{\varepsilon}\le1. \end{equation} (3.22) Define another error function $$\widetilde{e}^{{\varepsilon},k}_j=E(x_j,t_k)-\widehat{E}_j^{{\varepsilon},k},\quad \widetilde{n}_j^{{\varepsilon},k}=N(x_j,t_k)-\widehat{N}_j^{{\varepsilon},k},\quad j\in \mathcal{T}_M,\quad k\ge 0,$$ where $$E(x,t)$$ is the solution to the limit KG (2.2) and $$N(x,t)=-E(x,t)^2$$. Similarly, denote $$\widetilde{u}^{{\varepsilon},k+\frac{1}{2}}=({\delta}_x^2)^{-1}({\delta}_t^+\widetilde{n}^{{\varepsilon},k})\in X_M (k\ge 0)$$. 
The local truncation errors $$\widetilde{\xi}^{{\varepsilon},k}$$ and $$\widetilde{\eta}^{{\varepsilon},k}$$ are defined as \begin{equation}\label{2local2} \begin{split} &\widetilde{\xi}_j^{{\varepsilon},k}={\delta}_t^2E(x_j,t_k)-\big[{\delta}_x^2-1-N(x_j,t_k)\big] \frac{E(x_j,t_{k+1})+E(x_j,t_{k-1})}{2},\\ &\widetilde{\eta}_j^{{\varepsilon},k}={\varepsilon}^2{\delta}_t^2N(x_j,t_k)-\frac{1}{2}{\delta}_x^2\big[N(x_j,t_{k+1}) +N(x_j,t_{k-1})\big]-{\delta}_x^2|E(x_j,t_k)|^2. \end{split} \end{equation} (3.23) Under assumption (A), we can obtain the following error bounds: \begin{equation}\label{2local3} \|\widetilde{\xi}^{{\varepsilon},k}\|\lesssim h^2+\tau^2,\quad \|\widetilde{\eta}^{{\varepsilon},k}\|+\|{\delta}_t^c\widetilde{\eta}^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^2,\quad k\ge1. \end{equation} (3.24) Similar to Lemma 3.4, we have the following error bounds for $$\widetilde{e}^{{\varepsilon},k}$$, $$\widetilde{n}^{{\varepsilon},k}$$ at the first step. Lemma 3.6 Under assumptions (A)–(C), the error of the first step (2.5) and (2.7) satisfies \begin{align*} &\widetilde{e}_j^{{\varepsilon},0}=0,\quad |\widetilde{e}_j^{{\varepsilon},1}|+|{\delta}_x^+\widetilde{e}_j^{{\varepsilon},1}|\lesssim \tau^3+{\varepsilon}^{\alpha}\tau^2,\quad |{\delta}_t^+\widetilde{e}_j^{{\varepsilon},0}|\lesssim \tau^2+{\varepsilon}^{\alpha}\tau,\\ &|\widetilde{n}_j^{{\varepsilon},0}|\lesssim {\varepsilon}^\alpha, \quad |\widetilde{n}_j^{{\varepsilon},1}|\lesssim \tau^2+{\varepsilon}^\alpha+{\varepsilon}^{1+\beta},\quad |{\delta}_t^+\widetilde{n}_j^{{\varepsilon},0}|\lesssim \tau+{\varepsilon}^{\alpha-1}+{\varepsilon}^{\beta}. \end{align*} Proof of Theorem 3.5. Define another discrete ‘energy’ \begin{equation}\label{energy_d2} \begin{split} \widetilde{\mathcal {A}}^{{\varepsilon},k}&=6C_B\|{\delta}_t^+\widetilde{e}^{{\varepsilon},k}\|^2+3C_B (\|\widetilde{e}^{{\varepsilon},k}\|^2+ \|\widetilde{e}^{{\varepsilon},k+1}\|^2+\|{\delta}_x^+\widetilde{e}^{{\varepsilon},k}\|^2+\|{\delta}_x^+ \widetilde{e}^{{\varepsilon},k+1}\|^2)\\ &\quad +{\varepsilon}^2\|{\delta}_x^+\widetilde{u}^{{\varepsilon},k+1/2}\|^2 +\frac{1}{2}(\|\widetilde{n}^{{\varepsilon},k}\|^2+\|\widetilde{n}^{{\varepsilon},k+1}\|^2),\quad 1\le k\le \frac T\tau-1. \end{split} \end{equation} (3.25) A similar approach to that in the proof of Theorem 3.2 yields $$\widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \widetilde{\mathcal{A}}^{{\varepsilon},0}+\sum\limits_{i=1}^2\|\widetilde{\eta}^{{\varepsilon},i}\|^2 +\sum\limits_{i=k-1}^{k}\|\widetilde{\eta}^{{\varepsilon},i}\|^2+ \tau\sum\limits_{i=2}^{k-1}\|{\delta}_t^c\widetilde{\eta}^{{\varepsilon},i}\|^2 +\tau\sum\limits_{i=1}^{k}\big(\|\widetilde{\xi}^{{\varepsilon},i}\|^2+ \widetilde{\mathcal{A}}^{{\varepsilon},i}\big).$$ By Lemma 3.6 and the discrete Sobolev inequality, we derive that $${\varepsilon}\|{\delta}_x^+\widetilde{u}^{{\varepsilon},1/2}\|\lesssim \tau^2+{\varepsilon}^{\alpha^\dagger}$$, which together with Lemma 3.6 gives $$\widetilde{\mathcal{A}}^{{\varepsilon},0}\lesssim \big(\tau^2+{\varepsilon}^{\alpha^\dagger}\big)^2$$. Applying (3.24), there exists $$\tau_2>0$$ sufficiently small such that when $$0<\tau\le\tau_2$$, $$ \widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \tau\sum\limits_{i=1}^{k-1} \widetilde{\mathcal{A}}^{{\varepsilon},i}+\left(h^2+\tau^2+{\varepsilon}^{\alpha^\dagger}\right)^2. $$ By the discrete Gronwall inequality, for $$\tau\le\tau_2$$, one has $$\widetilde{\mathcal{A}}^{{\varepsilon},k}\lesssim \left(h^2+\tau^2+{\varepsilon}^{\alpha^\dagger}\right)^2$$.
Using assumption (B) and the triangle inequality, we derive that \begin{align*} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|&\le \|\widetilde{e}^{{\varepsilon},k}\|+\|{\delta}_x^+\widetilde{e}^{{\varepsilon},k}\|+\|E^{\varepsilon}(\cdot,t_k)-E(\cdot,t_k)\|_{H^1} \lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*},\\ \|\widehat{n}^{{\varepsilon},k}\|&\le\|\widetilde{n}^{{\varepsilon},k}\|+\|N^{{\varepsilon}}(\cdot,t_k)-N(\cdot,t_k)\|_{L^2} \lesssim h^2+\tau^2+{\varepsilon}^{\alpha^*}, \end{align*} which completes the proof of Theorem 3.5. □

Proof of Theorem 2.1. Set $$\tau_0=\min\{\tau_1,\tau_2\}$$; combining the two estimates (3.6) and (3.22), we derive for $$\tau\le \tau_0$$ that \begin{equation}\label{mes3} \|\widehat{e}^{\,{\varepsilon},k}\|+\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|+\|\widehat{n}^{{\varepsilon},k}\|\lesssim \left\{ \begin{aligned} &h^2+\tau^{\frac{2}{4-\alpha^\dagger}},\quad &\alpha^*=1,\\ &(h^2+\tau^2)^{\frac{1}{3}\alpha^*},\quad &0<\alpha^*<1. \end{aligned} \right. \end{equation} (3.26) To establish the error estimates for ($$E^{{\varepsilon},k},N^{{\varepsilon},k}$$), by the definition of $$\rho_B$$, it suffices to show that $$\|\widehat{E}^{{\varepsilon},k}\|_\infty\le M_0+1, \quad k\ge 0,$$ which implies that (3.5) collapses to (2.4), i.e., ($$\widehat{E}^{{\varepsilon},k},\widehat{N}^{{\varepsilon},k}$$) are identical to ($$E^{{\varepsilon},k},N^{{\varepsilon},k}$$). For $$d=1$$, this follows directly from the definition of $$M_0$$ and (3.1). For $$d=2, 3$$, $$\|\widehat{E}^{{\varepsilon},k}\|_\infty$$ is controlled by $$\|\widehat{E}^{{\varepsilon},k}\|_\infty\le\|E^{\varepsilon}(\cdot,t_k)\|_{L^\infty}+ \|\widehat{e}^{\,{\varepsilon},k}\|_\infty$$, where $$\|\widehat{e}^{\,{\varepsilon},k}\|_\infty\lesssim \frac{\|{\delta}_x^+\widehat{e}^{\,{\varepsilon},k}\|}{C_d(h)}, \qquad C_d(h)\sim \left\{ \begin{aligned} &\frac{1}{|\mathrm{ln}\, h|},\quad &d=2,\\ &h^{1/2},\quad &d=3. \\ \end{aligned} \right. $$ Hence, by adding the auxiliary conditions that $$\tau=o\left(C_d(h)^{2-\frac{\alpha^\dagger}{2}}\right)$$ for $$\alpha^*=1$$, while $$h=o\left({\varepsilon}^{\frac{2}{3}(1-\alpha^*)}\right)$$ and $$\tau=o\left({\varepsilon}^{\frac{3-\alpha^*}{2}}C_d(h)^{1/2}\right)$$ for $$\alpha^*<1$$, one obtains the error bounds. □

4. Numerical results

We present numerical results for the KGZ system (2.1) discretized by the conservative finite difference scheme (2.4) with (2.5) and (2.7). The initial condition is set as \begin{align*} &E_0(x)=e^{-x^2}\sin x,\qquad E_1(x)=\mathrm{sech}\left(x^2/2\right)\cos x+\mathrm{sech}\left(x^2/3\right)\sin{(2x)},\\ &{\omega}_0(x)=\mathrm{sech}(x^2)\cos{(3x)},\qquad {\omega}_1(x)=\mathrm{sech}(x^2)\sin{(4x)}, \end{align*} and the parameters $$\alpha$$ and $$\beta$$ are chosen as: Case I: $$\alpha=2$$, $$\beta=1$$; Case II: $$\alpha=1$$, $$\beta=0$$; Case III: $$\alpha=0$$, $$\beta=-1$$. In our computation, we truncate the problem on $$\it{\Omega}_\varepsilon=\left[-30-\frac{1}{\varepsilon}, 30+\frac{1}{\varepsilon}\right]$$, which is large enough that the homogeneous Dirichlet boundary condition does not introduce significant errors. The bounded computational domain $$\it{\Omega}_\varepsilon$$ has to be chosen $$\varepsilon$$-dependent because the outspreading waves travel at speed $$O\left(\frac{1}{\varepsilon}\right)$$ while a homogeneous Dirichlet boundary condition is imposed at the boundary. The computational domain could be chosen $${\varepsilon}$$-independent if appropriate boundary conditions were applied instead of the simple homogeneous Dirichlet condition (see, e.g., Bao & Su, 2017b).
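To make the setting concrete, the following is a minimal Python sketch of one time step of a semi-implicit scheme of the type analysed above, for readers who wish to reproduce the experiments. It assumes the structure suggested by the error equations (3.8)–(3.9), with $$g(u,v)=(u+v)/2$$, homogeneous Dirichlet boundary conditions on the truncated interval and the cut-off $$\rho_B$$ omitted (the cut-off is only needed for the analysis); the precise scheme (2.4) with (2.5) and (2.7), given in Section 2, may differ in details.

```python
import numpy as np

def kgz_cfdm_step(E_prev, E_curr, N_prev, N_curr, h, tau, eps):
    """One time step of a semi-implicit finite difference scheme for the KGZ system
    (a sketch under the assumptions stated above, not the exact scheme (2.4)).

    Arrays hold the interior grid values only; homogeneous Dirichlet values at the
    two end points of the truncated interval are used implicitly.
    """
    M1 = E_curr.size                                   # number of interior grid points
    # standard second-order difference matrix delta_x^2 with Dirichlet boundary conditions
    D2 = (np.diag(-2.0 * np.ones(M1))
          + np.diag(np.ones(M1 - 1), 1)
          + np.diag(np.ones(M1 - 1), -1)) / h**2
    I = np.eye(M1)

    # E-update (cf. (3.8)):
    #   (E^{k+1}-2E^k+E^{k-1})/tau^2 = (D2 - I - diag(N^k)) (E^{k+1}+E^{k-1})/2
    L = D2 - I - np.diag(N_curr)
    A = I / tau**2 - 0.5 * L
    b = (2.0 * E_curr - E_prev) / tau**2 + 0.5 * (L @ E_prev)
    E_next = np.linalg.solve(A, b)

    # N-update (cf. (3.9)):
    #   eps^2 (N^{k+1}-2N^k+N^{k-1})/tau^2 = 0.5 D2 (N^{k+1}+N^{k-1}) + D2 (E^k)^2
    B = (eps**2 / tau**2) * I - 0.5 * D2
    c = (eps**2 / tau**2) * (2.0 * N_curr - N_prev) + 0.5 * (D2 @ N_prev) + D2 @ (E_curr**2)
    N_next = np.linalg.solve(B, c)

    return E_next, N_next
```

Each step only requires solving two linear systems (tridiagonal apart from the diagonal term $$N^k$$ in the $$E$$-update), consistent with the semi-implicit character of the scheme noted in Remark 4.1 below.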
To quantify the error, we introduce the following error functions $$e_{\varepsilon}^{\tau,h}(t_k):=\|e^{{\varepsilon},k}\|+\|{\delta}_x^+e^{{\varepsilon},k}\|,\quad n_{\varepsilon}^{\tau,h}(t_k):=\|n^{{\varepsilon},k}\|,$$ where $$e^{{\varepsilon},k}=E^{\varepsilon}(\cdot,t_k)-E^{{\varepsilon},k}$$, $$n^{{\varepsilon},k}=N^{\varepsilon}(\cdot,t_k)-N^{{\varepsilon},k}$$. The ‘exact’ solution is obtained by the EWI-SP method (see, e.g., Bao et al., 2013) with a very small mesh size $$h=1/32$$ and time step $$\tau=10^{-6}$$. The errors are displayed at $$t=1$$. For the spatial error analysis, we take the time step $$\tau=10^{-5}$$ so that the temporal error can be neglected; for the temporal error analysis, the mesh size is set as $$h=10^{-4}$$ so that the spatial error can be ignored.
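As an illustration of how the error functions above and the convergence rates reported in Tables 1–4 below are evaluated, here is a short sketch with hypothetical array names; the discrete norms are taken as the standard grid-scaled $$\ell^2$$ norms.

```python
import numpy as np

def discrete_norms(err, h):
    """Discrete L2 norm ||err|| and the discrete norm of the forward difference delta_x^+ err."""
    l2 = np.sqrt(h * np.sum(err**2))
    dplus = (err[1:] - err[:-1]) / h          # forward difference delta_x^+
    h1_semi = np.sqrt(h * np.sum(dplus**2))
    return l2, h1_semi

def error_functions(E_num, N_num, E_ref, N_ref, h):
    """e_eps^{tau,h} = ||e|| + ||delta_x^+ e|| and n_eps^{tau,h} = ||n|| against a reference solution."""
    e_l2, e_h1 = discrete_norms(E_ref - E_num, h)
    n_l2, _ = discrete_norms(N_ref - N_num, h)
    return e_l2 + e_h1, n_l2

def convergence_rates(errors):
    """Observed orders log2(err(tau)/err(tau/2)) for errors computed at tau, tau/2, tau/4, ..."""
    errors = np.asarray(errors, dtype=float)
    return np.log2(errors[:-1] / errors[1:])
```

For instance, feeding in the first row of Table 1, convergence_rates([1.23e-2, 3.27e-3, 8.37e-4, 2.11e-4]) returns approximately [1.91, 1.97, 1.99], the rates listed there.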
Figures 2 and 3 display the spatial errors for Cases II and III, respectively. The results for Case I are similar to those for Case II and are omitted here for brevity. Similarly, the temporal discretization errors are shown in Fig. 4 and Tables 1–3 for $$E^{\varepsilon}$$ and $$N^{\varepsilon}$$, respectively.

Fig. 2. Spatial errors at time $$t=1$$ for Case II, i.e., $$\alpha=1$$ and $$\beta=0$$.
Fig. 3. Spatial errors at time $$t=1$$ for Case III, i.e., $$\alpha=0$$ and $$\beta=-1$$.
Fig. 4. Temporal errors at time $$t=1$$ for $$E^{\varepsilon}$$.

Table 1 Temporal error analysis at time $$t=1$$ for Case I, i.e., $$\alpha=2$$ and $$\beta=1$$. Each row lists $$n_{\varepsilon}^{\tau,h}(1)$$ for $$\tau_0=0.05$$, $$\tau_0/2$$, $$\tau_0/2^2$$, $$\tau_0/2^3$$, $$\tau_0/2^4$$, $$\tau_0/2^5$$, $$\tau_0/2^6$$ (left to right), with the observed rates in parentheses.
$${\varepsilon}=1$$: 1.23E-2, 3.27E-3, 8.37E-4, 2.11E-4, 5.30E-5, 1.33E-5, 3.32E-6 (rates: —, 1.91, 1.97, 1.99, 1.99, 1.99, 2.00)
$${\varepsilon}=1/2$$: 3.77E-2, 9.97E-3, 2.54E-3, 6.39E-4, 1.60E-4, 4.02E-5, 1.01E-5 (rates: —, 1.92, 1.97, 1.99, 2.00, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 5.12E-2, 1.69E-2, 4.59E-3, 1.17E-3, 2.94E-4, 7.38E-5, 1.85E-5 (rates: —, 1.60, 1.88, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 2.57E-2, 1.75E-2, 7.60E-3, 2.21E-3, 5.69E-4, 1.43E-4, 3.58E-5 (rates: —, 0.55, 1.20, 1.78, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 1.75E-2, 7.01E-3, 4.70E-3, 3.21E-3, 1.06E-3, 2.82E-4, 7.10E-5 (rates: —, 1.32, 0.58, 0.55, 1.60, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 1.65E-2, 5.18E-3, 2.07E-3, 1.26E-3, 1.05E-3, 4.84E-4, 1.39E-4 (rates: —, 1.67, 1.32, 0.72, 0.26, 1.12, 1.80)
$${\varepsilon}=1/2^6$$: 1.43E-2, 4.35E-3, 1.54E-3, 6.61E-4, 3.64E-4, 2.82E-4, 2.02E-4 (rates: —, 1.72, 1.50, 1.22, 0.86, 0.37, 0.48)

Table 2 Temporal error analysis at time $$t=1$$ for Case II, i.e., $$\alpha=1$$ and $$\beta=0$$. Same layout as Table 1.
$${\varepsilon}=1$$: 1.23E-2, 3.27E-3, 8.37E-4, 2.11E-4, 5.30E-5, 1.33E-5, 3.32E-6 (rates: —, 1.91, 1.97, 1.99, 1.99, 1.99, 2.00)
$${\varepsilon}=1/2$$: 6.55E-2, 1.76E-2, 4.50E-3, 1.13E-3, 2.85E-4, 7.14E-5, 1.79E-5 (rates: —, 1.90, 1.97, 1.99, 1.99, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 1.82E-1, 6.24E-2, 1.71E-2, 4.36E-3, 1.10E-3, 2.75E-4, 6.89E-5 (rates: —, 1.55, 1.87, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 1.42E-1, 1.27E-1, 5.78E-2, 1.68E-2, 4.32E-3, 1.09E-3, 2.72E-4 (rates: —, 0.16, 1.14, 1.78, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 1.29E-1, 6.87E-2, 6.62E-2, 4.88E-2, 1.62E-2, 4.30E-3, 1.08E-3 (rates: —, 0.91, 0.05, 0.44, 1.59, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 2.73E-1, 6.89E-2, 3.41E-2, 3.22E-2, 3.16E-2, 1.49E-2, 4.24E-3 (rates: —, 1.99, 1.02, 0.08, 0.03, 1.08, 1.81)
$${\varepsilon}=1/2^6$$: 1.12E-1, 1.30E-1, 3.46E-2, 1.70E-2, 1.58E-2, 1.63E-2, 1.24E-2 (rates: —, -0.22, 1.91, 1.02, 0.10, -0.04, 0.40)

Table 3 Temporal error analysis at time $$t=1$$ for Case III, i.e., $$\alpha=0$$ and $$\beta=-1$$. Same layout as Table 1.
$${\varepsilon}=1/2$$: 1.24E-1, 3.36E-2, 8.61E-3, 2.17E-3, 5.45E-4, 1.37E-4, 3.43E-5 (rates: —, 1.89, 1.96, 1.99, 1.99, 2.00, 1.99)
$${\varepsilon}=1/2^2$$: 7.15E-1, 2.46E-1, 6.75E-2, 1.73E-2, 4.34E-3, 1.09E-3, 2.73E-4 (rates: —, 1.54, 1.87, 1.97, 1.99, 2.00, 2.00)
$${\varepsilon}=1/2^3$$: 1.13, 1.01, 4.61E-1, 1.34E-1, 3.44E-2, 8.65E-3, 2.17E-3 (rates: —, 0.16, 1.13, 1.79, 1.96, 1.99, 2.00)
$${\varepsilon}=1/2^4$$: 2.07, 1.10, 1.06, 7.79E-1, 2.58E-1, 6.86E-2, 1.73E-2 (rates: —, 0.91, 0.05, 0.44, 1.59, 1.91, 1.99)
$${\varepsilon}=1/2^5$$: 8.67, 2.19, 1.09, 1.03, 1.01, 4.77E-1, 1.36E-1 (rates: —, 1.98, 1.01, 0.08, 0.03, 1.08, 1.81)
$${\varepsilon}=1/2^6$$: 7.13, 8.34, 2.21, 1.09, 1.01, 1.04, 7.93E-1 (rates: —, -0.23, 1.91, 1.02, 0.10, -0.04, 0.40)

From these figures and tables, we can draw the following conclusions for the proposed CFDM in the subsonic limit regime:
(i) For the spatial discretization, the method is uniformly second-order accurate w.r.t. $${\varepsilon} \in(0,1]$$ for $$E^{\varepsilon}$$ with any initial data (cf. Figs 2(a), 3(a)). For $$N^{\varepsilon}$$, it is uniformly convergent with quadratic rate in space for Cases I and II (cf. Fig. 2(b)), while the error bound depends on $${\varepsilon}$$ as $$O(h^2/{\varepsilon})$$ for Case III (cf. Fig. 3(b)). This suggests that the error estimate (2.9) in space is optimal for $$N^{\varepsilon}$$.
(ii) For the temporal discretization, the method is uniformly second-order accurate for $$E^{\varepsilon}$$ when $$\alpha\ge1$$ and $$\beta\ge 0$$ (cf. Fig. 4(a)).
For $$N^{\varepsilon}$$, on the other hand, the upper triangular part of Table 1 shows an error of order $$O(\tau^2/{\varepsilon})$$ and the lower triangular part an error of order $$O(\tau^2+{\varepsilon}^2)$$ for $$\alpha=2$$ and $$\beta=1$$. For Case II, the upper triangular part of Table 2 indicates a convergence order of $$O(\tau^2/{\varepsilon}^2)$$, and the lower triangular part an order of $$O(\tau^2+{\varepsilon})$$. The uniform convergence rate is attained where the two types of estimates are compatible, which is confirmed by the degeneracy of the error for $$n_{\varepsilon}^{\tau,h}$$ along $$\tau^2\sim {\varepsilon}^3$$ listed in Table 4. For $$\alpha=0$$ and $$\beta=-1$$, the upper triangular part of Table 3 shows a convergence order of $$O(\tau^2/{\varepsilon}^3)$$ for $$n_{\varepsilon}^{\tau,h}$$.

Table 4 Degeneracy of the temporal error at time $$t=1$$ for $$N^{\varepsilon}$$ (the convergence orders are calculated with respect to the time step $$\tau$$). Each row lists $$n_{\varepsilon}^{\tau,h}(1)$$ for $$(\tau,{\varepsilon})=(\tau_0,{\varepsilon}_0)=(0.2,1)$$, $$(\tau_0/2^3,{\varepsilon}_0/2^2)$$, $$(\tau_0/2^6,{\varepsilon}_0/2^4)$$, $$(\tau_0/2^9,{\varepsilon}_0/2^6)$$ (left to right).
$$\alpha=2,\ \beta=1$$: 2.61E-1, 1.69E-2, 1.06E-3, 6.65E-5 (orders: —, 3.95/3, 3.99/3, 3.99/3)
$$\alpha=1,\ \beta=0$$: 2.61E-1, 6.24E-2, 1.62E-2, 4.08E-3 (orders: —, 2.06/3, 1.94/3, 1.99/3)

It can be observed from the numerical results that the temporal errors behave considerably better than the estimates predicted in Theorem 2.1.
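The orders recorded in Table 4 can be understood by a simple balancing argument (a heuristic consistency check, not part of the proof). Along the refinement path of Table 4 one has $$\tau\to\tau/2^3$$ and $${\varepsilon}\to{\varepsilon}/2^2$$, i.e., $$\tau\sim{\varepsilon}^{3/2}$$, which is exactly where the two regimes reported above cross over: $$\tau^2/{\varepsilon}\sim{\varepsilon}^2$$ for Case I and $$\tau^2/{\varepsilon}^2\sim{\varepsilon}$$ for Case II both give $$\tau^2\sim{\varepsilon}^3$$. Along this path,
\[
\frac{\tau^2}{{\varepsilon}}\sim{\varepsilon}^2\sim\tau^{4/3}\quad\text{(Case I)},\qquad
\frac{\tau^2}{{\varepsilon}^2}\sim{\varepsilon}\sim\tau^{2/3}\quad\text{(Case II)},
\]
which matches the observed orders $$\approx 4/3$$ (i.e., $$3.99/3$$) for Case I and $$\approx 2/3$$ (i.e., $$1.99/3$$) for Case II with respect to $$\tau$$.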
In fact, the numerical results suggest that the first type of error estimate (2.9) is sharp for $$N^{{\varepsilon},k}$$, whereas the second type of error bound (2.10) can be improved to $$\|n^{{\varepsilon},k}\|\lesssim h^2+\tau^2+{\varepsilon}^{\alpha^\dagger},$$ which yields the uniform, $${\varepsilon}$$-independent convergence rate $$\|n^{{\varepsilon},k}\|\lesssim h^2+\tau^{2\alpha^\dagger/3}$$ for $$\alpha\ge 1$$ and $$\beta\ge 0$$. This numerical observation coincides with that for the Zakharov system (cf. Cai & Yuan, 2017). The second type of estimate (2.10) is not sharp because it is obtained via the limiting KG equation, which may not be the best approximation of the KGZ system in the subsonic limit. Identifying the asymptotic equation of the KGZ system in the subsonic limit regime is a more delicate issue (see, e.g., Daub et al., 2016) than for the Zakharov system, which collapses directly to the cubic nonlinear Schrödinger (NLS) equation since the NLS equation naturally preserves the $$L^2$$ norm. Daub et al. (2016) proved rigorously the convergence of the KGZ system to a modified KG equation on a $$d$$-dimensional torus, or to the KG equation on the whole space $$\mathbb{R}^d$$ with $$d\ge 3$$, in the subsonic limit. In short, the question of the proper asymptotic equation for the KGZ system in the subsonic limit is not yet fully settled and is left for future work. Moreover, Fig. 5 displays the errors between the KGZ system and the KG equation for the initial data of Cases I–III, where $$\eta_e^{\varepsilon}(t)=\|E^{\varepsilon}(\cdot,t)-E(\cdot,t)\|_{H^1},\quad \eta_n^{\varepsilon}(t)=\|N^{\varepsilon}(\cdot,t)-N(\cdot,t)\|_{L^2}.$$ One can see that the KGZ system reduces to the KG equation at order $$O({\varepsilon})$$ for $$E^{\varepsilon}$$ and $$O({\varepsilon}^{\alpha^*})$$ for $$N^{\varepsilon}$$, which coincides with assumption (B).

Fig. 5. Errors between the KGZ and KG at time $$t=1$$ for initial data of Cases I–III.

Remark 4.1 In view of the results presented in this article, the classical CFDM may not be a good choice for solving the KGZ system in the subsonic limit regime, especially for ill-prepared initial data, which require the harsh meshing strategy $$h=O({\varepsilon}^{1/2})$$ and $$\tau=O({\varepsilon}^{3/2})$$. Inspired by the analysis technique applied here, and by introducing an asymptotic consistent formulation, we proposed a finite difference method (FDM) which is uniformly accurate at $$O(h^2+\tau)$$ for any initial data (see Bao & Su, 2017a). Similar to the classical FDM, the scheme in Bao & Su (2017a) is semi-implicit, and only a linear system needs to be solved at each time step.
The key points behind the substantial improvement of the new formulation are: (1) by writing out explicitly the highly oscillatory wave with $$O(1)$$ amplitude, which solves a linear wave equation and can be computed separately, the oscillatory unknown $$N^{\varepsilon}$$ with amplitude $$O(1)$$ is replaced by a new function $$F^{\varepsilon}$$ with amplitude $$O({\varepsilon})$$; and (2) instead of using the KG equation as the limiting equation for approximating the KGZ system, which plays a crucial role in establishing the error estimate (2.10), a KG equation with an oscillatory potential is used, which approximates the reformulated KGZ system linearly in the subsonic limit for any initial data. For the details, we refer to Bao & Su (2017a).

5. Conclusion

We analysed a CFDM for discretizing the KGZ system in the subsonic limit regime in $$d$$ ($$d=1, 2, 3$$) dimensions. When $$0<{\varepsilon}\ll1$$, i.e., in the subsonic limit regime, the solution of the KGZ system propagates highly oscillatory waves with wavelength $$O({\varepsilon})$$ in time and rapidly outspreading waves at speed $$O(1/{\varepsilon})$$ in space. For the CFDM, we obtained uniform convergence at the order $$O\left(h^2+\tau^{\frac{2}{4-\alpha^\dagger}}\right)$$ for $$\alpha\ge1$$ and $$\beta\ge 0$$, which, as suggested by the numerical results, can be improved to $$O\left(h^2+\tau^{\frac{2}{3}\alpha^\dagger}\right)$$. For $$\alpha=0$$ or $$\beta=-1$$, the error bound obtained is $$O(h^2/{\varepsilon}+\tau^2/{\varepsilon}^3)$$, which implies that the $${\varepsilon}$$-scalability of the CFDM for the KGZ system is $$h=O({\varepsilon}^{1/2})$$ and $$\tau=O({\varepsilon}^{3/2})$$ for $$0<{\varepsilon}\ll1$$.

Acknowledgements

The author thanks Professors Weizhu Bao and Yongyong Cai for their valuable discussions and suggestions.

References

Bao, W., Cai, Y. & Zhao, X. (2014) A uniformly accurate multiscale time integrator pseudospectral method for the Klein–Gordon equation in the nonrelativistic limit regime. SIAM J. Numer. Anal., 52, 2488–2511.
Bao, W., Dong, X. & Zhao, X. (2013) An exponential wave integrator sine pseudospectral method for the Klein–Gordon–Zakharov system. SIAM J. Sci. Comput., 35, 2903–2927.
Bao, W. & Su, C. (2017a) Uniform error bounds of a finite difference method for the Klein–Gordon–Zakharov system in the subsonic limit regime. Math. Comput. (in press).
Bao, W. & Su, C. (2017b) Uniform error bounds of a finite difference method for the Zakharov system in the subsonic limit regime via an asymptotic consistent formulation. SIAM Multiscale Model. Simul., 15, 977–1002.
Bergé, L., Bidégaray, B. & Colin, T. (1996) A perturbative analysis of the time-envelope approximation in strong Langmuir turbulence. Phys. D, 95, 351–379.
Cai, Y. & Yuan, Y. (2017) Uniform error estimates of the conservative finite difference method for the Zakharov system in the subsonic limit regime. Math. Comput. (in press).
Chang, Q., Guo, B. & Jiang, H. (1995) Finite difference method for generalized Zakharov equations. Math. Comput., 64, 537–553.
Daub, M., Schneider, G. & Schratz, K. (2016) From the Klein–Gordon–Zakharov system to the Klein–Gordon equation. Math. Methods Appl. Sci., 39, 5371–5380.
Dendy, R. O. (1990) Plasma Dynamics. Chicago: Clarendon Press.
Faou, E. & Schratz, K. (2014) Asymptotic preserving schemes for the Klein–Gordon equation in the non-relativistic limit regime. Numer. Math., 126, 441–469.
Masmoudi, N. & Nakanishi, K. (2005) From the Klein–Gordon–Zakharov system to the nonlinear Schrödinger equation. J. Hyperbol. Differ. Equ., 2, 975–1008.
Masmoudi, N. & Nakanishi, K. (2008) Energy convergence for singular limits of Zakharov type systems. Invent. Math., 172, 535–583.
Ozawa, T. & Tsutsumi, Y. (1992) The nonlinear Schrödinger limit and the initial layer of the Zakharov equations. Differ. Integral Equ., 5, 721–745.
Schochet, S. H. & Weinstein, M. I. (1986) The nonlinear Schrödinger limit of the Zakharov equations governing Langmuir turbulence. Commun. Math. Phys., 106, 569–580.
Sulem, C. & Sulem, P. L. (1979) Regularity properties for the equations of Langmuir turbulence. C. R. Acad. Sci. Paris Sér. A Math., 289, 173–176.
Texier, B. (2005) WKB asymptotics for the Euler–Maxwell equations. Asymptot. Anal., 42, 211–250.
Thomée, V. (1997) Galerkin Finite Element Methods for Parabolic Problems. Berlin: Springer.
Wang, T., Chen, J. & Zhang, L. (2007) Conservative difference methods for the Klein–Gordon–Zakharov equations. J. Comput. Appl. Math., 205, 430–452.
Zakharov, V. E. (1972) Collapse of Langmuir waves. Soviet Phys., 35, 908–914.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
