Abstract This article studies optimized Schwarz methods for the Stokes–Darcy problem. Robin transmission conditions are introduced, and the coupled problem is reduced to a suitable interface system that can be solved using Krylov methods. Practical strategies to compute optimal Robin coefficients are proposed, which take into account both the physical parameters of the problem and the mesh size. Numerical results show the effectiveness of our approach. 1. Introduction The Stokes–Darcy problem has received growing attention from the mathematical community over the last decade since the seminal works by Discacciati et al. (2002) and Layton et al. (2003). The interest in this problem is not only due to its many possible applications but also due to its mathematical nature. Indeed, it is a good example of a multi-physics problem, where two different boundary value problems are coupled into a global heterogeneous one. To compute the approximate solution of this problem, one could solve it in a monolithic way using either a direct or a suitably preconditioned iterative method. However, its multi-physics nature makes it suitable for splitting methods typical of domain decomposition techniques. These methods allow the solution of the global problem to be recovered by iteratively solving each subproblem separately and they thus permit software, specifically developed to deal with either incompressible or porous media flows, to be reused. The difficulty of this approach is guaranteeing effective convergence and robustness of the iterations. Classical Dirichlet–Neumann-type methods (see Quarteroni & Valli, 1999) for the Stokes–Darcy problem were studied in Discacciati et al. (2002) and Discacciati (2004), showing that they may exhibit slow convergence for small values of the viscosity of the fluid and the permeability of the porous medium. A Robin–Robin method was then proposed as a possible alternative in Discacciati (2004) and Discacciati et al. (2007). Analogous substructuring methods based on Robin interface conditions were subsequently studied in Cao et al. (2010a,b), Chen et al. (2011) and, more recently, in Caiazzo et al. (2014), where a comparison of these different methods was carried out. All these works show that the Robin–Robin method is more robust than the Dirichlet–Neumann one, but it is still unclear how to choose the Robin coefficients in an optimal way taking into account both the main physical parameters of the problem and the mesh size. Indeed, apparently contradictory results can be found in the literature regarding the relative magnitude of such coefficients and their dependence on the physical and computational quantities that characterize the problem and its discretization. In this work, we focus on the effective solution of the heterogeneous Stokes–Darcy problem by means of a Robin-type interface coupling between the subdomains and we optimize the convergence properties of the coupling algorithm in the framework of optimized Schwarz methods. Differently from the classical Schwarz algorithm (see, e.g., Smith et al., 1996; Quarteroni & Valli, 1999; Toselli & Widlund, 2005), based on Dirichlet transmission conditions (rather slow and very much dependent on the size of the overlap), optimized Schwarz algorithms are based on more effective transmission conditions and show significant improvement in terms of both robustness and computational cost (see Gander et al., 2002; Gander, 2006; Dolean & Nataf, 2007; Dolean et al., 2009).
In addition, while in general the classical Schwarz method is not convergent in the absence of overlap, optimized Schwarz algorithms do not suffer from such a drawback and ensure convergence also for decompositions into nonoverlapping subdomains. Optimized Schwarz methods are thus a natural framework to deal with spatial decompositions of the computational domain that are driven by a multi-physics problem such as the one at hand. Originally, Lions (1990) proposed Robin conditions to obtain convergence without overlap, whereas in a short note on nonlinear problems, Hagstrom et al. (1988) suggested nonlocal operators for best performance. In Charton et al. (1991), these optimal, nonlocal transmission conditions were developed for advection–diffusion problems, with local approximations for small viscosity, and low-order frequency approximations were proposed in Nataf & Rogier (1995) and Deng (1997). Optimized transmission conditions for the best performance in a given class of local transmission conditions were introduced for advection–diffusion problems in Japhet et al. (2001), for the Helmholtz equation in Gander et al. (2002), for Laplace’s equation in Engquist & Zhao (1998) and for Maxwell’s equations in Alonso-Rodriguez & Gerardo-Giorda (2006). Complete results and attainable performance for symmetric, positive-definite problems are reported in Gander (2006). The optimized Schwarz methods were also extended to systems of partial differential equations, such as the compressible Euler equations (Dolean & Nataf, 2007) and the full Maxwell system (see Dolean et al., 2009). Recently, optimized Schwarz strategies have been proposed for the coupling of heterogeneous models, such as the fluid–structure interaction problem (Gerardo-Giorda et al., 2010) and the coupling of bidomain and monodomain models in electrocardiology (Gerardo-Giorda et al., 2011). The article is organized as follows. After introducing the Stokes–Darcy problem in Section 2, in Section 3 a Robin–Robin iterative method is studied. Its convergence properties are investigated by means of Fourier analysis in the framework of optimized Schwarz methods, and optimal Robin parameters for the interface conditions are devised. In Section 4 the algebraic interpretation of the method is provided, and the numerical tests illustrate the convergence properties of the method and its robustness with respect to both the mesh size and the problem coefficients. 2. Problem setting and discretization We consider a computational domain formed by two subregions: one occupied by a fluid and the other formed by a porous medium. More precisely, let $${\it {\Omega}} \subset \mathbb{R}^\mathrm{D}$$ ($$\mathrm{D}=2,3$$) be a bounded domain, partitioned into two nonintersecting subdomains $${\it {\Omega}}_{f}$$ and $${\it {\Omega}}_{p}$$ separated by an interface $$\Gamma$$, i.e., $$\overline{{\it {\Omega}}} = \overline{{\it {\Omega}}}_{ f} \cup \overline{{\it {\Omega}}}_{p}$$, $${\it {\Omega}}_{f} \cap {\it {\Omega}}_p= \emptyset$$ and $$\overline{{\it {\Omega}}}_f \cap \overline{{\it {\Omega}}}_p = \Gamma$$. We suppose the boundaries $$\partial {\it {\Omega}}_f$$ and $$\partial {\it {\Omega}}_p$$ to be Lipschitz continuous. From the physical point of view, $$\Gamma$$ is a surface separating the domain $${\it {\Omega}}_f$$ filled by a fluid from the domain $${\it {\Omega}}_p$$ formed by a porous medium. We assume that $${\it {\Omega}}_f$$ has a fixed surface, i.e., we neglect the case of free surface flows. In Fig.
1, we show a schematic representation of the computational domain. In the following, $$\mathbf{n}_p$$ and $$\mathbf{n}_f$$ denote the unit outward normal vectors to $$\partial {\it {\Omega}}_p$$ and $$\partial {\it {\Omega}}_f$$, respectively, and we have $$\mathbf{n}_f=-\mathbf{n}_p$$ on $$\Gamma$$. We suppose $$\mathbf{n}_f$$ and $$\mathbf{n}_p$$ to be regular enough, and we indicate $$\mathbf{n} = \mathbf{n}_f$$ for simplicity of notation. Fig. 1. Schematic representation of a two-dimensional section of the computational domain. The fluid in $${\it {\Omega}}_{f}$$ is incompressible with constant viscosity and density, and it can be described by the dimensionless steady Stokes equations: find the fluid velocity $$\mathbf{u}_f$$ and pressure $$p_f$$, such that \begin{equation}\label{eq:stokes}- \boldsymbol{\nabla} \cdot \left( 2\mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} \right) = \mathbf{f}_f \quad \mbox{and} \quad \nabla \cdot \mathbf{u}_f = 0 \quad \textrm{in } {\it {\Omega}}_f , \end{equation} (2.1) where $$\boldsymbol{I}$$ and $$\nabla^{\mathrm{s}} \mathbf{u}_f = \frac{1}{2} \big(\nabla \mathbf{u}_f + \big(\nabla \mathbf{u}_f\big)^{\rm T}\big)$$ are the identity and the strain rate tensor; $$\mu_f = \left(\mathrm{Re} \, \mathrm{Eu}\right)^{-1} >0$$ with $$\mathrm{Re}$$ and $$\mathrm{Eu}$$ being the Reynolds number and the Euler number respectively; $$\mathbf{f}_f$$ is a given external force ($$\nabla$$ and $$\nabla \cdot$$ denote the dimensionless gradient and divergence operator, respectively, with respect to the space coordinates). The motion of the fluid through the porous medium can be described by the following dimensionless elliptic problem: find the pressure $$p_{p}$$ such that \begin{equation}\label{eq:darcy_fcalar}- \nabla \cdot \left(\boldsymbol{\eta}_p \nabla p_p\right) = - \nabla \cdot \mathbf{g}_p \quad \mbox{in } {\it {\Omega}}_p, \end{equation} (2.2) where $$\boldsymbol{\eta}_p = \left( \mathrm{Re} \, \mathrm{Eu} \, \mathrm{Da} \right)\boldsymbol{K}$$, $$\boldsymbol{K}$$ being the diagonal dimensionless intrinsic permeability tensor and $$\mathrm{Da}$$ the Darcy number (Bear & Bachmat, 1991), whereas $$\mathbf{g}_p$$ is a given external force that accounts for gravity. The fluid velocity in $${\it {\Omega}}_p$$ can be obtained using Darcy’s law (see, e.g., Bear, 1979): $$\mathbf{u}_p = - \boldsymbol{\eta}_p \nabla p_p + \mathbf{g}_p$$. Suitable continuity conditions must be imposed across the interface $${\it {\Gamma}}$$ to describe filtration phenomena. As a consequence of the incompressibility of the fluid, we prescribe the continuity of the normal velocity across $${\it {\Gamma}}$$: \begin{equation}\label{eq:interfsimple}\mathbf{u}_f \cdot \mathbf{n} = - \left( \boldsymbol{\eta}_p \nabla p_p\right) \cdot \mathbf{n} + \mathbf{g}_p \cdot \mathbf{n} \quad \mbox{on } {\it {\Gamma}}. \end{equation} (2.3) Moreover, we impose the following condition relating the normal stresses across $${\it {\Gamma}}$$ (see, e.g., Layton et al., 2003; Discacciati, 2004; Girault & Rivière, 2009): \begin{equation}\label{eq:interf2}- \mathbf{n} \cdot \left( 2\mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} \right) \cdot \mathbf{n} = p_p \quad \textrm{on } {\it {\Gamma}}.
\end{equation} (2.4) Finally, we introduce the so-called Beavers–Joseph–Saffman condition (see, e.g., Beavers & Joseph, 1967; Saffman, 1971; Jäger & Mikelić, 1996; Discacciati et al., 2002; Layton et al., 2003): \begin{equation}\label{eq:interf3}- \left( \left( 2\mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} \right) \cdot \mathbf{n}\right)_\tau = \xi_f \left(\mathbf{u}_f\right)_\tau \quad \mbox{on } {\it {\Gamma}}, \end{equation} (2.5) where $$\xi_f = \alpha_{\rm BJ} \big( \mathrm{Re} \, \mathrm{Eu} \, \sqrt{\mathrm{Da}} \, \sqrt{\boldsymbol{\tau}\cdot \boldsymbol{K} \cdot \boldsymbol{\tau}} \big)^{-1}$$ and $$\alpha_{\rm BJ}$$ is a dimensionless constant that depends only on the geometric structure of the porous medium. We indicate by $$(\mathbf{v})_{\tau}$$ the tangential component of any vector $$\mathbf{v}$$: $$(\mathbf{v})_\tau = \mathbf{v} - ( \mathbf{v} \cdot \mathbf{n} ) \mathbf{n}$$ on $${\it {\Gamma}}$$. As concerns the boundary conditions, several choices can be made (see, e.g., Discacciati & Quarteroni, 2009). For simplicity, we consider here homogeneous boundary data and, with reference to Fig. 1 for the notation, for the Darcy problem we impose $$p_p = 0$$ on $${\it {\Gamma}}_p^D$$ and $$\boldsymbol{\eta}_p \nabla p_p \cdot\mathbf{n}_p= \mathbf{g}_p \cdot \mathbf{n}_p$$ on $${\it {\Gamma}}_p^N$$, whereas for the Stokes problem, we set $$\mathbf{u}_f=\mathbf{0}$$ on $$\partial{\it {\Omega}}_ f \setminus {\it {\Gamma}}$$. 3. Optimized Robin–Robin method 3.1 Formulation of the Robin–Robin method Let $$\alpha_f$$ and $$\alpha_p$$ be two positive parameters: $$\alpha_f,\alpha_p>0$$. By combining (2.3) and (2.4) linearly with coefficients $$(-\alpha_f,1)$$ and $$(\alpha_p,1)$$, we obtain two Robin interface conditions on $${\it {\Gamma}}$$: \begin{equation}\label{int1_2}-\mathbf{n} \cdot \left( 2 \mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} \right) \cdot \mathbf{n} -\alpha_f \mathbf{u}_f \cdot \mathbf{n}= {p_p} + \alpha_f \, \left( \left(\boldsymbol{\eta}_p\,{\nabla} p_p\right) \cdot \mathbf{n} - \mathbf{g}_p \cdot \mathbf{n} \right) \end{equation} (3.1) and \begin{equation}\label{int2_2}{p_p} - \alpha_p \, \left( \left(\boldsymbol{\eta}_p\,{\nabla} p_p\right) \cdot \mathbf{n} - \mathbf{g}_p \cdot \mathbf{n} \right) = -\mathbf{n} \cdot \left( 2 \mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - {p_f} \boldsymbol{I} \right) \cdot \mathbf{n} +\alpha_p \mathbf{u}_f \cdot \mathbf{n}. \end{equation} (3.2) A Robin–Robin-type algorithm amounts to setting up a fixed-point problem that solves iteratively the fluid problem with boundary condition (3.1) and the porous medium problem with boundary condition (3.2). More precisely, the algorithm reads as follows. Given the Darcy pressure $$p_p^0$$ in $${\it {\Omega}}_p$$, for $$m \geq 1$$ until convergence, find the fluid velocity $${\mathbf{u}_f^m}$$, the fluid pressure $${p_f^m}$$ in $${\it {\Omega}}_f$$ and the pressure $${p_p^m}$$ in $${\it {\Omega}}_p$$, such that the following problems are satisfied: 1. 
Stokes problem \begin{equation}\label{fluid_0}\begin{array}{rl}-\boldsymbol{\nabla} \cdot \left( 2 {\mu_f} \nabla^{\mathrm{s}} \mathbf{u}_f^{m} - p_f^m \boldsymbol{I} \right) = \mathbf{f}_f \quad \mbox{and} \quad \nabla \cdot \mathbf{u}_f^{m} = 0 & \mbox{in } {\it {\Omega}}_f,\\ \mathbf{u}_f^{m} = \mathbf{0} & \mbox{on } \partial{\it {\Omega}}_f \setminus {\it {\Gamma}},\\ -\left(\mathbf{n} \cdot \left( 2 \mu_f \nabla^{\mathrm{s}} \mathbf{u}_f^{m} - p_f^m \boldsymbol{I} \right)\right)_\tau = \xi_f \, \left(\mathbf{u}_f^{m}\right)_\tau &\mbox{on } {\it {\Gamma}},\\ -\mathbf{n} \cdot \left(2 \mu_f \nabla^{\mathrm{s}} \mathbf{u}_f^m - p_f^m \boldsymbol{I} \right) \cdot \mathbf{n} - \alpha_f \mathbf{u}_f^m \cdot \mathbf{n} = p_p^{m-1} + \alpha_f \, \left( \left(\boldsymbol{\eta}_p\,\nabla p_p^{m-1}\right) \cdot \mathbf{n} - \mathbf{g}_p \cdot \mathbf{n}\right) & \mbox{on } {\it {\Gamma}}, \\ \end{array}\end{equation} (3.3) 2. Darcy problem \begin{equation}\label{porous_0}\begin{array}{rl} -{\nabla} \cdot \left( \boldsymbol{\eta}_p \, \nabla {p_p^m} \right) = - \nabla \cdot \mathbf{g}_p &\mbox{in } {\it {\Omega}}_p,\\ p_p^m = 0 & \mbox{on } {\it {\Gamma}}_p^D,\\ - \left( \boldsymbol{\eta}_p \nabla p_p^m\right) \cdot \mathbf{n}_p + \mathbf{g}_p \cdot \mathbf{n}_p = 0 & \mbox{on }{\it {\Gamma}}_p^N,\\ {p_p^m} - \alpha_p \, \left( \left(\boldsymbol{\eta}_p\,\nabla p_p^m\right) \cdot \mathbf{n} - \mathbf{g}_p \cdot \mathbf{n} \right) = -\mathbf{n} \cdot \left( 2 {\mu_f} \nabla^{\mathrm{s}} \mathbf{u}_f^{m} - {p_f^m} \boldsymbol{I} \right) \cdot \mathbf{n} +\alpha_p \mathbf{u}_f^{m} \cdot \mathbf{n} &\mbox{on } {\it {\Gamma}}. \end{array}\end{equation} (3.4) We aim now to optimize the Robin–Robin algorithm (3.3)–(3.4) in the framework of optimized Schwarz methods. Such methods, based on the interface continuity requirements on traces and fluxes of Robin type, are a generalization of the nonoverlapping algorithm proposed for elliptic problems in Lions (1990), which ensures convergence also without relaxation. Since optimized Schwarz methods do not require overlap to converge, they have become quite popular in the last decade and are a natural framework for dealing with the spatial decomposition of a domain driven by a multi-physics problem (see Gerardo-Giorda et al., 2010). Although optimized Schwarz methods based on one-sided interface conditions ($$\alpha_f=\alpha_p$$) have been extensively used over the years (see, e.g., Lions, 1990; Collino et al., 1997; Japhet et al., 2001; Gander, 2006), the use of two-sided interface conditions ($$\alpha_f\neq\alpha_p$$) has recently become increasingly popular due to better convergence properties of the associated algorithms (see Alonso-Rodriguez & Gerardo-Giorda, 2006; Dubois, 2007; Gander et al., 2007; Dolean et al., 2009; Gerardo-Giorda & Perego, 2013). Since such parameters are in general obtained by suitable approximations of the symbols in the Fourier space of the Steklov–Poincaré operator (or Dirichlet-to-Neumann mapping) associated with the problem within the subdomain (Gander, 2006), the two-sided interface conditions are a natural choice in the presence of multi-physics problems where different problems have to be solved in different regions of the computational domain (Gerardo-Giorda et al., 2010, 2011).
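Before analysing the iteration, a minimal sketch of one sweep of the Robin–Robin algorithm (3.3)–(3.4) may help fix ideas. The two subdomain solvers below, solve_stokes_robin and solve_darcy_robin, are hypothetical placeholders for any Stokes or Darcy code able to accept a Robin datum on the interface; the sketch only mirrors how the interface data are exchanged.

```python
import numpy as np

# Sketch of the Robin-Robin fixed-point iteration (3.3)-(3.4); the subdomain
# solvers are hypothetical callbacks, not part of the paper.
def robin_robin(lambda_p, solve_stokes_robin, solve_darcy_robin,
                alpha_f, alpha_p, tol=1e-9, max_it=200):
    """lambda_p: initial Robin datum p_p + alpha_f*(eta_p grad(p_p).n - g_p.n) on Gamma."""
    for m in range(1, max_it + 1):
        # Stokes solve with Robin condition (3.3)_4; returns the normal stress
        # sigma_f = -n.(2 mu_f grad^s u_f - p_f I).n and un_f = u_f.n on Gamma.
        sigma_f, un_f = solve_stokes_robin(alpha_f, lambda_p)
        # Robin datum for the Darcy problem, right-hand side of (3.4)_4.
        lambda_f = sigma_f + alpha_p * un_f
        # Darcy solve with Robin condition (3.4)_4; returns p_p and the flux
        # qn_p = (eta_p grad(p_p)).n - g_p.n on Gamma.
        pp, qn_p = solve_darcy_robin(alpha_p, lambda_f)
        # New Robin datum for the Stokes problem, right-hand side of (3.3)_4.
        lambda_p_new = pp + alpha_f * qn_p
        if np.linalg.norm(lambda_p_new - lambda_p) < tol * max(np.linalg.norm(lambda_p), 1.0):
            return lambda_p_new, lambda_f
        lambda_p = lambda_p_new
    return lambda_p, lambda_f
```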
In the rest of the section we study, by means of Fourier analysis, the convergence properties of the Robin–Robin algorithm (3.3)–(3.4) in simplified settings and its optimization in the two-sided interface conditions framework. 3.2 A simplified problem We introduce suitable simplifying hypotheses and subproblems. The fluid domain is the half-plane $${\it {\Omega}}_f = \left\{(x,y)\in\mathbb{R}^2:\, x<0\right\}$$, the porous medium is the complementary half-plane $${\it {\Omega}}_p= \left\{(x,y)\in\mathbb{R}^2:\, x>0\right\}$$, whereas the interface is given by $${\it {\Gamma}} = \left\{(x,y)\in\mathbb{R}^2:\, x=0\right\}$$. Thus, $$\mathbf{n}=(1,0)$$ and $$\boldsymbol{\tau}=(0,1)$$. We assume $$\mu_f$$ to be constant, $$\boldsymbol{\eta}_p = \mbox{diag}(\eta_1,\eta_2)$$ to be constant and anisotropic (i.e., $$\eta_1\neq\eta_2$$) and we denote $$\mathbf{u}_f(x,y)=\left[ u_1(x,y), u_2(x,y) \right]^{\rm{T}}$$ and $$\mathbf{g}_p = (g_1,g_2)$$. In this simplified setting, the Robin–Robin algorithm reads as follows: given $$\mathbf{u}_f^0$$, $$p_f^0$$ and $$p_p^0$$, solve for $$m> 0$$ until convergence, 1. the fluid problem \begin{equation}\label{fluid}\begin{aligned}-\mu_f \left( \begin{array}{l} \left(\partial_{xx} + \partial_{yy} \right) u_1^{m} \\ \left( \partial_{xx} + \partial_{yy}\right) u_2^{m}\end{array} \right) + \left( \begin{array}{l}\partial_x p_f^{m} \\ \partial_y p_f^{m}\end{array} \right) &= \mathbf{f}_f & \mbox{in } (-\infty, 0)\times \mathbb{R}, \\ \partial_x u_1^{m} + \partial_y u_2^{m} &= 0 & \mbox{in } (-\infty, 0)\times \mathbb{R}, \\ - \mu_f \left(\partial_x u_2^{m} + \partial_y u_1^{m}\right) &= \xi_f \,u_2^{m} &\mbox{on } \{0\}\times \mathbb{R}, \\ \left(-2 \mu_f \partial_x u_1^{m} + p_f^{m}\right) - \alpha_f \,u_1^{m} & = p_p^{m-1} - \alpha_f \,\left(-\eta_1\, \partial_x p_p^{m-1} + g_1\right) &\mbox{on } \{0\}\times \mathbb{R}\end{aligned}\end{equation} (3.5) (in the momentum equation, we have used (2.1) to obtain $$-\boldsymbol{\nabla}\cdot \left(2 \nabla^{\mathrm{s}} \mathbf{u}_f\right) = - \Delta \mathbf{u}_f$$); 2. the porous-medium problem \begin{equation}\label{porous}\begin{aligned}-\left(\partial_x \left(\eta_1 \partial_x\right) + \partial_y \left(\eta_2 \partial_y\right)\right)\, p_p^{m} &= - \left(\partial_x g_1 + \partial_y g_2\right) &\mbox{in } (0,\infty)\times \mathbb{R},\\ p_p^m + \alpha_p\,\left( - \eta_1\, \partial_x p_p^m + g_1\right) &= \left(-2 \mu_f \partial_x u_1^m + p_f^m\right) + \alpha_p \,u_1^m &\mbox{on } \{0\}\times \mathbb{R}. \end{aligned}\end{equation} (3.6) 3.3 Convergence analysis We will base our convergence analysis on a Fourier transform in the direction tangential to the interface (corresponding to the $$y$$-variable in the case at hand), which is defined, for $$w(x,y)\in L^2(\mathbb{R}^2)$$, as $$\mathcal{F} : w(x,y) \mapsto \widehat{w}(x,k) = \int_{\mathbb R}e^{-iky}w(x,y) \,{\rm{d}}y, $$ where $$k$$ is the frequency variable. We will then be able to quantify, in the frequency space, the error between the normal component of the velocity at the $$m$$th iteration, $$\widehat{u}^m_1(x,k)$$, and the exact value $$\widehat{u}_1(x,k)$$. As a consequence, on the interface $${\it {\Gamma}}$$, we can introduce a reduction factor at iteration $$m$$, for each frequency $$k$$, as $$\rho^m(k) := \frac{\left|\widehat{u}^{m}_1(0,k)-\widehat{u}_1(0,k)\right|}{\left|\widehat{u}^{m-2}_1(0,k)-\widehat{u}_1(0,k)\right|}.
$$ The Robin–Robin algorithm converges if, at each iteration $$m$$, we have $$\rho^m(k)<1$$ for all the relevant frequencies of the problem, namely for $$k_{{\rm{min}}}\leq k\leq k_{\rm{max}}$$, where $$k_{{\rm{min}}}>0$$ is the smallest frequency relevant to the problem and $$k_{{\rm{max}}}$$ is the largest frequency supported by the numerical grid, which is of order $$\pi/h$$, with $$h$$ being the mesh size (see Gander et al., 2002; Gander, 2006). The ultimate goal is then to minimize, at each iteration step, the reduction factor $$\rho^m(k)$$ over all the Fourier modes. Note that the asymptotic requirements for the Fourier transformability of the solutions entail their boundedness at infinity. Because the problems are linear, we can study the convergence directly on the error equation, namely the convergence to the zero solution when the forcing terms vanish, i.e., $$\mathbf{f}_f=\mathbf{0}$$ and $$\mathbf{g}_p=\mathbf{0}$$. First, we characterize the reduction factor of the algorithm. Proposition 3.1 Let $$\eta_p = \sqrt{\eta_1\eta_2}$$. Given $$\mathbf{u}_f^0,p_f^0,p_p^0$$, the reduction factor of the algorithm (3.5)–(3.6) does not depend on the iteration, and it is given by \begin{equation}\label{rho}\rho\left(\alpha_f,\alpha_p,k\right) = \left| g(\alpha_f,\alpha_p,k) \right|, \quad \mbox{where} \quad g\left(\alpha_f,\alpha_p,k\right) = \left( \frac{2 \mu_f \,|k| - \alpha_p}{2 \mu_f \,|k| + \alpha_f} \right) \cdot \left( \frac{1 - \alpha_f \, \eta_p\, |k| }{1 + \alpha_p \, \eta_p \, |k|} \right)\!. \end{equation} (3.7) Proof. Taking the divergence of (3.5)$$_1$$ and using (3.5)$$_2$$, the fluid problem can be rewritten in the unknown pressure: $$-{\it {\Delta}} p_f^{m+1} = 0 \qquad \mbox{in } {\it {\Omega}}_f. $$ Applying the Fourier transform in the $$y$$-direction, the equation for the pressure above becomes, for all $$k$$, an ordinary differential equation $$- \partial_{xx} \hat{p}_f^{m+1} + k^2 \hat{p}_f^{m+1} = 0 \qquad \mbox{in } (-\infty,0), $$ whose solution is given by $$\hat{p}_f^{m+1} (x,k) = P^{m+1}(k)\, e^{|k|x} + Q^{m+1}(k)\, e^{-|k|x}$$. The boundedness assumption on the solution entails $$Q^{m+1}(k)=0$$, thus \begin{equation}\label{p_fol}\hat{p}_f^{m+1}(x,k) = P^{m+1}(k) \,e^{|k|x}, \end{equation} (3.8) and the value of $$P^{m+1}(k)$$ is determined uniquely by the interface condition (3.5)$$_4$$, \begin{equation}\label{p_interface}-2 \mu_f\, \partial_x \hat{u}_1^{m+1} + \hat{p}_f^{m+1} - \alpha_f \,\hat{u}_1^{m+1}= \hat{p}_p^{m} + \alpha_f \,\eta_1\, \partial_x \hat{p}_p^{m}. \end{equation} (3.9) Similarly, the equation for the Darcy pressure reads, for all $$k$$, $$-\eta_1\, \partial_{xx} \hat{p}_p^{m+1} + \eta_2\, k^2 \hat{p}_p^{m+1} = 0 \qquad \mbox{in } (0,+\infty), $$ whose solution, due to the boundedness assumption, is given by \begin{equation}\label{phi_fol}\hat{p}_p^{m+1}(x,k) = {\it {\Phi}}^{m+1}(k) \,e^{- \sqrt{{\eta_2}/{\eta_1}}|k|x}, \end{equation} (3.10) where the value of $$\Phi^{m+1}(k)$$ is determined uniquely by the interface condition (3.6)$$_2$$, \begin{equation}\label{phi_interface}\hat{p}_p^{m+1} - \alpha_p \,\eta_1\, \partial_x \hat{p}_p^{m+1} = -2 \mu_f\, \partial_x \hat{u}_1^{m+1} + \hat{p}_f^{m+1} + \alpha_p \,\hat{u}_1^{m+1}. \end{equation} (3.11) To write the fluid component of the Robin interface conditions (3.5)$$_4$$ and (3.6)$$_2$$ in terms of the sole pressure, we need to express the interface velocity $$\hat{u}_1$$ as a function of $$\hat{p}_f$$.
From (3.5), the first equation of the fluid problem in the $$x$$-direction, after applying the Fourier transform in the $$y$$-direction, reads \begin{equation}\label{prob-u}\partial_{xx} \hat{u}_1^{m+1} - k^2 \hat{u}_1^{m+1} = \frac{|k|}{\mu_f} P^{m+1}(k) \,e^{|k|x}, \end{equation} (3.12) having noticed that $$\partial_x \hat{p}_f^{m+1} = |k| P^{m+1}(k)\, e^{|k|x}$$. Due to the boundedness assumption, the homogeneous solution of this equation is $$\hat{u}^{m+1}_{1,{\rm{hom}}}(x,k) = A^{m+1}(k) \, e^{|k|x}$$ for suitable $$A^{m+1}(k)$$. As the right-hand side of (3.12) is a solution to the homogeneous equation, the solution to the complete equation is given by \begin{equation}\label{u1_fol}\hat{u}^{m+1}_{1}(x,k) = \left( A^{m+1}(k) +\dfrac{x}{2\mu_f}\,P^{m+1}(k) \right)\, e^{|k|x}. \end{equation} (3.13) Inserting (3.8), (3.10) and (3.13) into (3.9) and (3.11), and using the fact that $$\partial_x \hat{p}_p^{m+1} = -|k| \sqrt{\frac{\eta_2}{\eta_1}}\Phi^{m+1}(k)\, e^{-\sqrt{{\eta_2}/{\eta_1}}\,|k|x}$$, we get $$\begin{aligned}- \left(\alpha_f + 2 \mu_f \,|k| \right) \,A^{m+1}(k) &= \left(1 - \alpha_f \, \eta_p\, |k| \right)\,\Phi^{m}(k) \qquad & \mbox{in }x=0,\ \\ \left(1 + \alpha_p \, \eta_p\, |k|\right)\,\Phi^{m}(k) &= \left(\alpha_p -2 \mu_f \,|k| \right) \,A^{m-1}(k) & \mbox{in }x=0. \end{aligned}$$ As a consequence, we have $$| A^{m+1}(k) | = \rho(\alpha_f,\alpha_p,k) | A^{m-1}(k) |$$, and in general $$| A^{2m}(k) | = \rho^m(\alpha_f,\alpha_p,k) | A^{0} (k)|$$, where $$\rho(\alpha_f,\alpha_p,k)$$ is given by (3.7).□ We want now to characterize optimal parameters $$\alpha_f,\alpha_p>0$$ that ensure the convergence of the algorithm for all relevant frequencies. In particular, such parameters must ensure that $$\rho(\alpha_f,\alpha_p,k)<1$$ for all $$k \in [k_{{\rm{min}}},k_{{\rm{max}}}]$$. (Notice that the special choice $$\alpha_p=\alpha_f$$ always guarantees the convergence of the algorithm.)
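A quick numerical sanity check of (3.7) can be useful at this point: with the frequency-dependent choices $$\alpha_f = 1/(\eta_p k)$$ and $$\alpha_p = 2\mu_f k$$ the factor vanishes, while any one-sided choice $$\alpha_f=\alpha_p>0$$ gives a factor strictly below one. The sketch below uses arbitrary sample values of $$\mu_f$$, $$\eta_p$$ and $$k$$, chosen only for illustration.

```python
import numpy as np

# Evaluate the convergence factor g(alpha_f, alpha_p, k) of (3.7) and check the
# two claims above; mu_f, eta_p and the frequencies are arbitrary sample values.
def g(alpha_f, alpha_p, k, mu_f, eta_p):
    return ((2*mu_f*np.abs(k) - alpha_p) / (2*mu_f*np.abs(k) + alpha_f)
            * (1 - alpha_f*eta_p*np.abs(k)) / (1 + alpha_p*eta_p*np.abs(k)))

mu_f, eta_p = 1.0, 1e-2
for k in [np.pi, 10*np.pi, 32*np.pi]:
    exact = g(1.0/(eta_p*k), 2*mu_f*k, k, mu_f, eta_p)   # vanishes identically
    one_sided = abs(g(1.0, 1.0, k, mu_f, eta_p))         # strictly below one
    print(f"k = {k:8.3f}: exact choice -> {exact:+.1e}, one-sided -> {one_sided:.3f}")
```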
3.4 Optimization of the Robin parameters $$\alpha_p$$ and $$\alpha_f$$ We focus here on the choice of the parameters $$\alpha_p$$ and $$\alpha_f$$ and their optimization. We are interested in a range of frequencies $$0 < k_{{\rm{min}}} \leq |k| \leq k_{{\rm{max}}}$$. Considering the symmetry of $$g(\alpha_f,\alpha_p,k)$$ as a function of $$k$$, in the following, we restrict ourselves to the case $$k>0$$ without loss of generality. Ideally, the optimal parameters force the reduction factor $$\rho(\alpha_p,\alpha_f,k)$$ to be identically zero for all $$k$$, so that convergence is attained in a number of iterations equal to the number of subdomains (2, in the case at hand). The optimal parameters can be easily devised from (3.7): \begin{equation}\label{alpha_exact}\alpha_p^{\rm{{\rm{exact}}}}(k)= 2 \mu_f \, k, \qquad \qquad \alpha_f^{{\rm{exact}}}(k) = \dfrac{1}{\eta_p\, k}, \end{equation} (3.14) but they are unfortunately not viable. In fact, they both depend on the frequency $$k$$, and their back transforms in the physical space either introduce an imaginary coefficient, which multiplies a first-order tangential derivative $$(\alpha_p^{{\rm{exact}}}(k))$$ or result in a nonlocal operator $$(\alpha_f^{{\rm{exact}}}(k))$$. 3.4.1 Low-order Taylor approximation The first possible approach resides in using approximations based on low-order Taylor expansions of the optimal values (3.14), a choice that proved very effective when applied to the coupling of heterogeneous problems (see, e.g., Gerardo-Giorda et al., 2011). Expanding the exact values, one around $$k=k_{\rm{min}}$$ and the other around $$k=k_{{\rm{max}}}$$, namely \begin{equation}\label{eq:lot}\alpha_p^1= \alpha_p^{{\rm{exact}}}(k_{{\rm{min}}})= 2 \mu_f \, k_{{\rm{min}}},\qquad \qquad \alpha_f^1=\alpha_f^{{\rm{exact}}}(k_{{\rm{max}}})= \frac{1}{\eta_p\, k_{{\rm{max}}}}\end{equation} (3.15) or $$\alpha_p^2 =\alpha_p^{{\rm{exact}}}(k_{{\rm{max}}})= 2 \mu_f \, k_{{\rm{max}}},\qquad \qquad \alpha_f^2=\alpha_f^{{\rm{exact}}}(k_{{\rm{min}}})= \frac{1}{\eta_p\, k_{{\rm{min}}}}, $$ guarantees exact convergence of the algorithm for the minimal and maximal frequency. Notice that if the minimal frequency is $$k_{{\rm{min}}}=0$$, only the first combination is viable and corresponds to a Neumann–Robin iterative algorithm. Moreover, whenever $$k_{{\rm{min}}}>0$$, a little algebraic manipulation shows that $$\rho(\alpha_f^1,\alpha_p^1,k)=\rho(\alpha_f^2,\alpha_p^2,k)$$ for all $$k\in [ k_{{\rm{min}}}, k_{{\rm{max}}}]$$. Although the minimal and maximal frequencies are treated exactly, a Taylor expansion offers no control on the effective convergence rate of the algorithm, which is given by the maximum over all the relevant frequencies. As a function of $$k\geq 0$$, $$g(\alpha_f,\alpha_p,k)$$ is continuous, it has two positive roots $$k_1 = (\alpha_f \eta_p)^{-1}$$ and $$k_2 = \alpha_p/(2\mu_f)$$, and a local maximum in $$k_* = \frac{\alpha_p-\alpha_f}{2\mu_f + \alpha_f\alpha_p\eta_p} + \sqrt{\left( \frac{\alpha_p-\alpha_f}{2\mu_f + \alpha_f\alpha_p\eta_p} \right)^2 + \frac{1}{2\mu_f\eta_p}}. $$ Rolle’s theorem allows us to conclude that $$k_*$$ lies between the zeros $$k_1$$, $$k_2$$ and that $$g(\alpha_f,\alpha_p,k_*)>0$$. Finally, it can easily be shown that $$0 < g(\alpha_f^1,\alpha_p^1,k_*) < 1$$, so that the Taylor parameters still guarantee convergence for all the relevant frequencies. 3.4.2 Optimization via equioscillation A second approach consists in minimizing the reduction factor over the whole range of relevant frequencies. The following lemma shows that, for each frequency, the candidate minimizers with respect to $$(\alpha_f,\alpha_p)$$ lie on the hyperbola \begin{equation}\label{eq:hyperbola}\alpha_f \, \alpha_p = \dfrac{2\mu_f}{\eta_p}. \end{equation} (3.18) Lemma 3.2 For any given $$k>0$$, the point $$(\alpha_f,\alpha_p) = \big(\frac{1}{\eta_p\, k}\, ,\,2\mu_f\,k\big)$$, which belongs to the hyperbola (3.18), is the absolute minimum of $$\rho(\alpha_f,\alpha_p,k)$$ over $$\alpha_f,\alpha_p>0$$. Proof. A direct computation of the partial derivatives of $$\rho(\alpha_f,\alpha_p,k)$$ with respect to $$\alpha_f$$ and $$\alpha_p$$ shows that: 1. for $$\alpha_f < \dfrac{1}{\eta_p\, k}$$, $$\alpha_p < 2\mu_f\,k$$, $$\qquad$$$$\partial_{\alpha_f}\,\rho(\alpha_f,\alpha_p) < 0$$$$\quad$$ and $$\quad$$$$\partial_{\alpha_p}\,\rho(\alpha_f,\alpha_p) < 0$$; 2. for $$\alpha_f < \dfrac{1}{\eta_p\, k}$$, $$\alpha_p > 2\mu_f\,k$$, $$\qquad$$$$\partial_{\alpha_f}\,\rho(\alpha_f,\alpha_p) < 0$$$$\quad$$ and $$\quad$$$$\partial_{\alpha_p}\,\rho(\alpha_f,\alpha_p) > 0;$$ 3. for $$\alpha_f > \dfrac{1}{\eta_p\, k}$$, $$\alpha_p > 2\mu_f\,k$$, $$\qquad$$$$\partial_{\alpha_f}\,\rho(\alpha_f,\alpha_p) > 0$$$$\quad$$ and $$\quad$$$$\partial_{\alpha_p}\,\rho(\alpha_f,\alpha_p) > 0$$; 4. for $$\alpha_f > \dfrac{1}{\eta_p\, k}$$, $$\alpha_p < 2\mu_f\,k$$, $$\qquad$$$$\partial_{\alpha_f}\,\rho(\alpha_f,\alpha_p) > 0$$$$\quad$$ and $$\quad$$$$\partial_{\alpha_p}\,\rho(\alpha_f,\alpha_p) < 0$$. Thus, the point $$(\alpha_f,\alpha_p) = \big(\frac{1}{\eta_p\, k}\, ,\,2\mu_f\,k\big)$$ is a minimum for $$\rho(\alpha_f,\alpha_p,k)$$. In addition, since $$\rho(\alpha_f,\alpha_p,k) \geq 0$$ and $$\rho \big(\frac{1}{\eta_p\, k}\, ,\,2\mu_f\,k,\,k\big)=0$$, the minimum is absolute.□ Lemma 3.2 guarantees that, for any given $$k$$, the minimum of the convergence rate with respect to $$(\alpha_f,\alpha_p)$$ lies on the hyperbola (3.18). The following proposition provides the solution of the optimization procedure along it.
Proposition 3.3 The solution of the min–max problem \begin{equation}\label{minmax_2}\min_{\alpha_f \alpha_p =\frac{2\mu_f}{\eta_p}} \, \max_{k\in \left[k_{{\rm{min}}},k_{{\rm{max}}}\right]}\,\rho(\alpha_f,\alpha_p,k) \end{equation} (3.19) is given by the pair \begin{equation}\label{equioscillation1}\begin{array}{l}\displaystyle\alpha_f^* = \phantom{-} \dfrac{1 - 2\mu_f\eta_p\, k_{{\rm{min}}} k_{{\rm{max}}}}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})} + \sqrt{ \left( \dfrac{1 - 2\mu_f\eta_p\, k_{{\rm{min}}} k_{{\rm{max}}}}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})} \right)^2 + \dfrac{2\mu_f}{\eta_p}},\\ \displaystyle\alpha_p^* = -\dfrac{1 - 2\mu_f\eta_p\, k_{{\rm{min}}} k_{{\rm{max}}}}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})} + \sqrt{ \left( \dfrac{1 - 2\mu_f\eta_p\, k_{{\rm{min}}} k_{{\rm{max}}}}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})} \right)^2 + \dfrac{2\mu_f}{\eta_p}}. \end{array}\end{equation} (3.20) Moreover, $$\rho(\alpha_f^*,\alpha_p^*,k)<1$$ for all $$k\in [k_{{\rm{min}}},k_{{\rm{max}}}]$$. Proof. From Lemma 3.2 we know that, regardless of where the maximum with respect to $$k$$ is, the minimum with respect to $$(\alpha_f,\alpha_p)$$ is along the hyperbola (3.18). A simple algebra shows that the convergence rate of the optimized Schwarz method along (3.18) reads \begin{equation}\label{rho_2mu/eta}\rho(\alpha_f,k) = \dfrac{2\mu_f}{\eta_p} \, \left( \dfrac{\eta_p \alpha_f\,k -1}{2\mu_f k +\alpha_f}\right)^2. \end{equation} (3.21) The function in (3.21) is always positive and has a minimum in $$k = \frac{1}{\eta_p \alpha_f}$$, where it vanishes. Since it is continuous, its maximum is attained at one end of the interval $$[k_{{\rm{min}}},k_{{\rm{max}}}]$$: \begin{equation}\label{eq:dim1}\max_{k \in [k_{{\rm{min}}},k_{{\rm{max}}}]}\,\rho(\alpha_f,k) \, = \, \max \left\{ \rho\left(\alpha_f,k_{{\rm{min}}}\right)\, , \,\rho\left(\alpha_f,k_{{\rm{max}}}\right) \right\}. \end{equation} (3.22) Moreover, since $$\partial_{\alpha_f} \rho = \frac{4\mu_f}{\eta_p} \, \frac{2\mu_f \eta_p k^2 + 1}{(2\mu_f k + \alpha_f)^3} \, (\eta_p \alpha_f k - 1), $$ it is immediate to observe that for all $$k\in [k_{{\rm{min}}},k_{{\rm{max}}}]$$, $$\rho(\alpha_f,k)$$ is decreasing for $$\alpha_f < \frac{1}{\eta_p k}$$ and increasing for $$\alpha_f > \frac{1}{\eta_p k}$$. In particular, we have $$\begin{aligned}& \rho(0,k_{{\rm{min}}}) >\rho(0,k_{{\rm{max}}}), \qquad & \rho\left(\dfrac{1}{\eta_p k_{{\rm{max}}}},k_{{\rm{min}}}\right) > \rho\left(\dfrac{1}{\eta_p k_{{\rm{max}}}},k_{{\rm{max}}}\right)=0,\\ &\lim_{\alpha_f\to\infty}\dfrac{\rho(\alpha_f,k_{{\rm{min}}})}{\rho(\alpha_f,k_{{\rm{max}}})}<1, &\rho\left(\dfrac{1}{\eta_p k_{{\rm{min}}}},k_{{\rm{max}}}\right) > \rho \left(\dfrac{1}{\eta_p k_{{\rm{min}}}},k_{{\rm{min}}}\right) = 0, \end{aligned}$$ and we can observe that $$ \max \left\{ \rho(\alpha_f,k_{{\rm{min}}})\, , \,\rho(\alpha_f,k_{{\rm{max}}}) \right\} = \left\{ \begin{array}{ll} \rho(\alpha_f,k_{{\rm{min}}}) & \mbox{for } \alpha_f < \alpha_f^*,\\ \rho(\alpha_f,k_{{\rm{max}}}) & \mbox{for } \alpha_f \geq \alpha_f^*, \end{array} \right. $$ where $$\alpha_f^*>0$$ is the value at which the convergence rate exhibits equioscillation between the minimal and maximal frequencies, i.e., \begin{equation}\label{eq:dim2}\rho(\alpha_f^*,k_{{\rm{min}}}) = \rho(\alpha_f^*,k_{{\rm{max}}}). 
\end{equation} (3.23) Simple algebraic manipulations show that finding the optimal value of $$\alpha_f$$ that satisfies (3.23) is equivalent to solving the algebraic equation \begin{equation}\label{eq:af2}\alpha_f^2 + 2\alpha_f \frac{2\mu_f\eta_p k_{{\rm{min}}}k_{{\rm{max}}} -1}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})} - \frac{2\mu_f}{\eta_p} = 0, \end{equation} (3.24) whose positive solution $$\alpha_f^*$$ is given in (3.20). The expression for $$\alpha_p^*$$ in (3.20) is obtained by replacing $$\alpha_f^*$$ in (3.18). To guarantee that $$\rho(\alpha_f^*,k)<1$$ for all $$k \in [k_{{\rm{min}}},k_{{\rm{max}}}]$$, since both (3.22) and (3.23) hold, we just have to prove that either $$\rho(\alpha_f^*,k_{{\rm{min}}}) < 1$$ or $$\rho(\alpha_f^*,k_{{\rm{max}}}) < 1$$. First, notice that $$\rho(\alpha_f^*,k)<1$$ if and only if \[ \left( \sqrt{\frac{2\mu_f}{\eta_p}} \frac{1-\eta_p\alpha_f^* k}{2\mu_fk + \alpha_f^*} -1 \right) \, \left( \sqrt{\frac{2\mu_f}{\eta_p}} \frac{1-\eta_p\alpha_f^* k}{2\mu_fk + \alpha_f^*} +1 \right) < 0. \] This inequality can be equivalently rewritten as \begin{equation*}\label{eq:step1} 2\mu_f (\eta_p^2 (\alpha_f^*)^2 k^2 + 1) - \eta_p (4\mu_f^2 k^2 + (\alpha_f^*)^2 + 8\mu_f\alpha_f^* k) < 0. \end{equation*} Using the expression of $$(\alpha_f^*)^2$$ from (3.24) and making a few simplifications, we obtain \begin{equation*}\label{eq:step2}-(1-2\mu_f\eta_p k_{{\rm{min}}}k_{{\rm{max}}}) (1 -2\mu_f \eta_p k^2) - 4\mu_f \eta_p k (k_{{\rm{min}}} + k_{{\rm{max}}}) < 0. \end{equation*} It is straightforward to see that if we set, e.g., $$k=k_{{\rm{min}}}$$ we find \[ - 1 -4\mu_f^2 \eta_p^2 k_{{\rm{min}}}^3 k_{{\rm{max}}} - 2\mu_f \eta_p k_{{\rm{min}}} (k_{{\rm{min}}} + k_{{\rm{max}}}) < 0, \] which is obviously true.□ Remark 3.4 (i) The relative magnitude of $$\alpha_{f}^*$$ and $$\alpha_{p}^*$$ depends only on the physical coefficients $$\mu_f$$, $$\eta_p$$ and on the mesh size $$h$$. In fact, from equation (3.20), we observe that $$\alpha_{f}^*-\alpha_{p}^* = 2\,\dfrac{1 - 2\mu_f\eta_p\, k_{{\rm{min}}} k_{{\rm{max}}}}{\eta_p(k_{{\rm{min}}}+k_{{\rm{max}}})}, $$ whose sign is ruled by the sign of the numerator. In particular, in the case $$k_{{\rm{min}}}=\pi/L$$ ($$L$$ being the length of the interface $${\it {\Gamma}}$$) and $$k_{{\rm{max}}}=\pi/h$$, we have \begin{equation}\label{eq:remarkh}\alpha_{f}^* < \alpha_{p}^* \quad\mbox{if} \quad\displaystyle h < \frac{2\mu_f\eta_p\,\pi^2}{L} \quad \mbox{and} \quad \alpha_{f}^* > \alpha_{p}^* \quad\mbox{if} \quad\displaystyle h > \frac{2\mu_f\eta_p\,\pi^2}{L}. \end{equation} (3.25) (ii) In the limit $$h\to 0$$, the convergence rate becomes $$\rho(\alpha_{f}^0,\alpha_{p}^0,k)$$, where \begin{equation}\label{eq:aFaP_asymptotic}\alpha_{f}^0 = - \dfrac{2\mu_f\pi}{L}+ \sqrt{\left(\dfrac{2\mu_f\pi}{L}\right)^2 +\dfrac{2\mu_f}{\eta_p} }\, , \qquad \alpha_{p}^0 = \dfrac{2\mu_f\pi}{L}+ \sqrt{\left(\dfrac{2\mu_f\pi}{L}\right)^2 +\dfrac{2\mu_f}{\eta_p} }, \end{equation} (3.26) entailing $$\alpha_{f}^0 < \alpha_{p}^0$$. This is not surprising: in fact, when $$h\to 0$$, $$k_{{\rm{max}}} \to \infty$$ and $$\lim_{k\to \infty} \rho(\alpha_f,\alpha_p,k) = {\alpha_f}/{\alpha_p}$$. Thus, the larger the maximal frequency supported by the numerical grid, the larger $$\alpha_{p}$$ with respect to $$\alpha_{f}$$ to guarantee that the reduction factor is below 1.
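For concreteness, the closed-form parameters (3.20) are easy to evaluate. The short sketch below uses the sample values $$\mu_f=1$$, $$\eta_p=10^{-2}$$, $$h=2^{-5}$$, $$L=1$$ (the setting of Tables 1 and 2 below), and checks the hyperbola constraint (3.18) and the equioscillation property (3.23).

```python
import numpy as np

def rho(alpha_f, alpha_p, k, mu_f, eta_p):
    """Reduction factor (3.7)."""
    return abs((2*mu_f*k - alpha_p)/(2*mu_f*k + alpha_f)
               * (1 - alpha_f*eta_p*k)/(1 + alpha_p*eta_p*k))

def equioscillation_parameters(mu_f, eta_p, k_min, k_max):
    """Two-sided Robin parameters alpha_f*, alpha_p* of (3.20)."""
    b = (1 - 2*mu_f*eta_p*k_min*k_max) / (eta_p*(k_min + k_max))
    r = np.sqrt(b**2 + 2*mu_f/eta_p)
    return b + r, -b + r

mu_f, eta_p, h, L = 1.0, 1e-2, 2.0**-5, 1.0
k_min, k_max = np.pi/L, np.pi/h
af, ap = equioscillation_parameters(mu_f, eta_p, k_min, k_max)
print(f"alpha_f* = {af:.4f}, alpha_p* = {ap:.4f}")   # compare with the E rows of Table 2
print("on the hyperbola (3.18):", np.isclose(af*ap, 2*mu_f/eta_p))
print("equioscillation (3.23): ",
      np.isclose(rho(af, ap, k_min, mu_f, eta_p), rho(af, ap, k_max, mu_f, eta_p)))
```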
3.4.3 Minimization of the mean convergence rate Both the Taylor expansion and the equioscillation approach ensure that the optimized Schwarz algorithm is convergent in its iterative form. However, when the optimized Schwarz method is used as a preconditioner for a Krylov method to solve the interface problem, these two choices do not necessarily guarantee the fastest convergence. A common feature of the Taylor expansion and the equioscillation approach is that the number of frequencies showing a not-so-small convergence rate is not negligible (see Fig. 2). In this section, we present an alternative approach: by relaxing the constraint on the effective convergence rate, we look for parameters that ensure better convergence for a larger number of frequencies in the error. Fig. 2. Convergence rates, as a function of $$k$$, for the parameters obtained through zero-order Taylor expansion (T, dotted line), the solution of the min–max problem via equioscillation (E, dashed line) and $$\alpha_f^{\rm{opt}}$$ (M, solid line). Left: $$\mu_f=1$$, $$\eta_p=1$$e$$-$$2, $$h=2^{-5}$$. Right: $$\mu_f=1$$e$$-$$1, $$\eta_p=1$$, $$h=2^{-5}$$. We still look for $$\alpha_f$$ and $$\alpha_p$$ along the curve (3.18) and we restrict ourselves to the set \begin{equation}\label{A_f}\mathcal{A}_f = \left\{ \alpha_f > 0 \, : \, \rho(\alpha_f,k)\leq1 \quad \forall\, k \in [k_{{\rm{min}}},k_{{\rm{max}}}] \right\}\!. \end{equation} (3.27) Notice that convergence of the Robin–Robin method in its iterative form would be ensured only in the case that the inequality in the definition of $$\mathcal{A}_f $$ is strict. However, from the previous section, we know that there can be at most one frequency whose corresponding convergence rate equals 1, either in $$k_{{\rm{min}}}$$ or in $$k_{{\rm{max}}}$$. When the optimized Schwarz method is used as a preconditioner for a Krylov method, the latter can handle isolated problems in the spectrum. This last approach is actually the most popular in the literature (see, e.g., Gander et al., 2002; Dolean et al., 2009; Gerardo-Giorda & Perego, 2013). Lemma 3.5 The set $$\mathcal{A}_f$$ is one of the following intervals: 1. If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 > 0$$, \begin{equation}\label{casoA}\mathcal{A}_f = \left(0 , \sqrt{\frac{2\mu_f}{\eta_p}} \min \left(\frac{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} + 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1} , \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} + 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1} \right) \right]. \end{equation} (3.28) 2. If $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 < 0$$, \begin{equation}\label{casoB}\mathcal{A}_f = \left[\sqrt{\frac{2\mu_f}{\eta_p}} \max \left(- \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} + 1} , - \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} + 1} \right) , +\infty \right). \end{equation} (3.29) 3. If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 < 0$$ and $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 > 0$$, \begin{equation}\label{casoC}\mathcal{A}_f = \left( 0 , \sqrt{\frac{2\mu_f}{\eta_p}} \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} + 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1} \right] \cap \left[ - \sqrt{\frac{2\mu_f}{\eta_p}} \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} + 1}, +\infty \right).
\end{equation} (3.30) [4.] If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 = 0$$, \begin{equation}\label{casoD}\mathcal{A}_f = \left( 0, \sqrt{\frac{2\mu_f}{\eta_p}} \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} + 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1} \right]. \end{equation} (3.31) [5.] If $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 = 0$$, \begin{equation}\label{casoE}\mathcal{A}_f = \left[ - \sqrt{\frac{2\mu_f}{\eta_p}} \frac{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1}{\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} + 1} , +\infty \right) \, . \end{equation} (3.32) Proof. The condition $$\rho(\alpha_f,k) \leq 1$$ can be equivalently reformulated as \begin{eqnarray*}&& \left(\alpha_f \sqrt{\eta_p} \left(\sqrt{2\mu_f\eta_p} k + 1\right) + \sqrt{2\mu_f} \left( \sqrt{2\mu_f\eta_p} k - 1\right) \right) \\ && \qquad \left(\alpha_f \sqrt{\eta_p} \left(\sqrt{2\mu_f\eta_p} k - 1\right) - \sqrt{2\mu_f} \left( \sqrt{2\mu_f\eta_p} k + 1\right)\right) \leq 0 \, . \end{eqnarray*} The term $$\sqrt{2\mu_f\eta_p} k + 1$$ is always positive, whereas $$\sqrt{2\mu_f\eta_p} k - 1$$ may change its sign, so we must discuss different cases. As a result of (3.22) we have ensured that $$\rho(\alpha_f,k)\leq 1$$ for all $$k \in [k_{{\rm{min}}},k_{{\rm{max}}}],$$ provided the inequality holds in both $$k_{{\rm{min}}}$$ and $$k_{{\rm{max}}}$$; thus we consider only $$k=k_{{\rm{min}}}$$ and $$k=k_{{\rm{max}}}$$. (i.) If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 > 0$$, then also $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 > 0$$, so that $$\mathcal{A}_f$$ is the set (3.28). (ii.) If $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 < 0$$, also $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 < 0$$, and $$\mathcal{A}_f$$ is the set (3.29). (iii.) If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 < 0$$, then either $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 < 0$$, in which case we obtain (3.29) again, or $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 > 0$$ and $$\mathcal{A}_f$$ is the set (3.30). (iv.) If $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 > 0$$, either $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 > 0$$, in which case we obtain (3.28), or $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 < 0$$ and we get (3.30). (v.) If $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 = 0$$, $$k_{{\rm{max}}} > k_{{\rm{min}}} = {1}/{\sqrt{2\mu_f\eta_p}}$$, so that $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 > 0$$. Thus, $$\mathcal{A}_f$$ is characterized as (3.31). (vi.) Finally, if $$\sqrt{2\mu_f\eta_p} k_{{\rm{max}}} - 1 = 0$$, $$k_{{\rm{min}}} < k_{{\rm{max}}} = \frac{1}{\sqrt{2\mu_f\eta_p}}$$, so that $$\sqrt{2\mu_f\eta_p} k_{{\rm{min}}} - 1 < 0$$. Thus, $$\mathcal{A}_f$$ is the set (3.32).□ To improve the overall convergence for a Krylov method, we minimize, on the set $$\mathcal{A}_f$$, the expected value of $$\rho(\alpha_f,k)$$ in the interval $$[k_{{\rm{min}}},k_{{\rm{max}}}]$$: \begin{equation}\label{eq:expectedvalue}E(\alpha_f):=\mathbb{E}[\rho(\alpha_f,k)] = \frac{1}{k_{{\rm{max}}}-k_{{\rm{min}}}} \int_{k_{\rm{min}}}^{k_{{\rm{max}}}} \rho(\alpha_f,k) \, {\rm{d}}k. 
\end{equation} (3.33) As a result of (3.21), the expected value of $$\rho(\alpha_f,k)$$ in $$[k_{{\rm{min}}},k_{{\rm{max}}}]$$ can be explicitly computed with the help of a little calculus, and we have \[E(\alpha_f) = \frac{1}{2\mu_f} \left(\alpha_f^2 \eta_p + \frac{\left(\alpha_f^2\eta_p + 2\mu_f\right)^2}{\eta_p (2\mu_f k_{{\rm{max}}} + \alpha_f)(2\mu_f k_{{\rm{min}}} + \alpha_f)} - \frac{\alpha_f (\alpha_f^2 \eta_p + 2\mu_f)}{\mu_f (k_{{\rm{max}}}-k_{{\rm{min}}})} \log \left(\frac{2\mu_f k_{{\rm{max}}}+\alpha_f}{2\mu_f k_{{\rm{min}}} + \alpha_f} \right)\right)\!. \] The function $$E(\alpha_f)$$ is continuous, positive (being the integral of a non-negative function), $$E(0) = 1/(2\mu_f\eta_p k_{{\rm{min}}}k_{{\rm{max}}})$$, $$\lim_{\alpha_f \to +\infty} E(\alpha_f) = +\infty$$ and $$\partial_{\alpha_f} E(0) < 0$$. Thus, $$E(\alpha_f)$$ has at least one (local) minimum $$\alpha_f^{\rm opt} < +\infty$$ in $$\mathcal{A}_f$$ that may coincide with one of the extrema of $$\mathcal{A}_f$$ if the latter is a bounded set. In Table 1 we report the optimization interval $$\mathcal{A}_f$$ and the resulting optimized parameter $$\alpha_f^{\rm opt}$$ for different values of the problem coefficients $$\mu_f$$ and $$\eta_p$$. In addition, in Fig. 2 we plot the convergence rates, as a function of $$k$$, for the zero-order Taylor expansion (T), the solution (3.20) of the min–max problem via equioscillation (E) and $$\alpha_f^{\rm opt}$$ (M), for two sets of coefficients $$\mu_f$$ and $$\eta_p$$.

Table 1 Optimization interval $$\mathcal{A}_f$$ and optimized parameter $$\alpha_f^{\rm{opt}}$$ for different values of the coefficients $$\mu_f$$ and $$\eta_p$$ and $$h=2^{-5}$$. The column $$(m,M)$$ reports the signs of $$(\sqrt{2\mu_f\eta_p}\,k_{{\rm{min}}}-1)$$ and $$(\sqrt{2\mu_f\eta_p}\,k_{{\rm{max}}}-1)$$, respectively, where $$k_{{\rm{min}}}=\pi$$ and $$k_{{\rm{max}}}=\pi/h$$.

$$\mu_f$$   $$\eta_p$$   $$(m,M)$$   $$\mathcal{A}_f$$        $$\alpha_f^{{\rm{opt}}}$$
1           1            (+,+)       (0, 1.4342]              0.0357
1           1e-2         (-,+)       [5.4414, 16.2821]        5.4414
1           1e-4         (-,+)       [129.3895, 812.1057]     217.3489
1e-1        1            (+,+)       (0, 0.4676]              0.0364
1e-2        1            (-,+)       [0.0544, 0.1628]         0.0544
1e-1        1e-2         (-,+)       [3.3703, 7.0307]         3.3703
1e-1        1e-3         (-,+)       [12.9390, 81.2106]       21.7349
1e-1        1e-4         (-,-)       [43.4821, +∞)            195.9084
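A brief sketch of how such entries can be obtained follows; the admissible interval is assembled from the cases of Lemma 3.5, while the mean rate (3.33) is minimized here by a plain grid search over $$\mathcal{A}_f$$ (the specific optimizer used by the authors is not stated). Sample values $$\mu_f=1$$, $$\eta_p=10^{-2}$$, $$h=2^{-5}$$, to be compared with the corresponding row of Table 1.

```python
import numpy as np

def rho_hyperbola(alpha_f, k, mu_f, eta_p):
    """Reduction factor (3.21) along the hyperbola alpha_f*alpha_p = 2*mu_f/eta_p."""
    return 2*mu_f/eta_p * ((eta_p*alpha_f*k - 1)/(2*mu_f*k + alpha_f))**2

def admissible_interval(mu_f, eta_p, k_min, k_max):
    """Endpoints of A_f from Lemma 3.5; +inf marks an unbounded interval."""
    s = np.sqrt(2*mu_f*eta_p)
    c = np.sqrt(2*mu_f/eta_p)
    lo, hi = 0.0, np.inf
    if s*k_min - 1 < 0:                       # lower bound active (cases 2, 3, 5)
        lo = -c*(s*k_min - 1)/(s*k_min + 1)
    if s*k_max - 1 > 0:                       # upper bound from k_max (cases 1, 3, 4)
        hi = c*(s*k_max + 1)/(s*k_max - 1)
    if s*k_min - 1 > 0:                       # upper bound from k_min (case 1)
        hi = min(hi, c*(s*k_min + 1)/(s*k_min - 1))
    return lo, hi

mu_f, eta_p, h = 1.0, 1e-2, 2.0**-5
k_min, k_max = np.pi, np.pi/h
lo, hi = admissible_interval(mu_f, eta_p, k_min, k_max)
alphas = np.linspace(lo if lo > 0 else 1e-6, min(hi, 1e3), 2001)
ks = np.linspace(k_min, k_max, 4001)
# E(alpha_f): mean of rho over [k_min, k_max], approximated on a uniform k-grid.
E = np.array([rho_hyperbola(a, ks, mu_f, eta_p).mean() for a in alphas])
print(f"A_f ~ [{lo:.4f}, {hi:.4f}],  alpha_f_opt ~ {alphas[np.argmin(E)]:.4f}")
```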
4. Numerical results In this section we present some numerical tests to assess the performance of the optimized Schwarz method. In particular, we focus on the effectiveness and robustness of the method with respect to both the mesh size $$h$$ and the physical parameters $$\mu_f$$ and $$\eta_p$$. 4.1 Finite element discretization and algebraic form We consider a finite element discretization based on Taylor–Hood elements for the Stokes problem (see, e.g., Boffi et al., 2013) and on quadratic Lagrangian elements for the scalar elliptic form of the Darcy equation. Let the indices $$I_f$$, $$I_p$$ and $${\it{\Gamma}}$$ denote the internal degrees of freedom in $${\it{\Omega}}_f$$, $${\it{\Omega}}_p$$ and on the interface. Let $$\boldsymbol{\lambda}_p$$ and $$\boldsymbol{\lambda}_f$$ be the vectors of components $$\int_{\it{\Gamma}} \lambda_p \psi_i$$ and $$\int_{\it{\Gamma}} \lambda_f \psi_i$$, where $$\lambda_p$$ and $$\lambda_f$$ are the interface variables $$\lambda_p = p_p + \alpha_f (\boldsymbol{\eta}_p \nabla p_p \cdot \mathbf{n} - \mathbf{g}_p \cdot \mathbf{n})$$ and $$\lambda_f = - \mathbf{n} \cdot ( 2 \mu_f \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} ) \cdot \mathbf{n} + \alpha_p \mathbf{u}_f \cdot \mathbf{n}$$ on $${\it{\Gamma}}$$, and $$\psi_i$$ is a suitable finite element basis function on $${\it{\Gamma}}$$. Letting $$M_{{\it{\Gamma}}{\it{\Gamma}}}$$ be a mass matrix on $${\it{\Gamma}}$$, we can write the algebraic form of the algorithm (3.3)–(3.4) as follows: given $$\boldsymbol{\lambda}_p^0$$, for $$m \geq 1$$ until convergence, 1. solve the Stokes problem \begin{equation}\label{eq:stokes_alg11}\begin{pmatrix}\left(A_f^{\mu_f}\right)_{I_f I_f} & \left(A_f^{\mu_f}\right)_{I_f {\it{\Gamma}}} & (G_f)_{I_f} \\ \left(A_f^{\mu_f}\right)_{{\it{\Gamma}} I_f} & \left(A_f^{\mu_f}\right)_{{\it{\Gamma}} {\it{\Gamma}}} + \alpha_f M_{{\it{\Gamma}} {\it{\Gamma}}} & (G_f)_{{\it{\Gamma}}} \\ \left(G_f\right)_{I_f}^T & (G_f)_{{\it{\Gamma}}}^T & 0 \end{pmatrix}\begin{pmatrix}\mathbf{u}_{s,I_f}^m \\ \mathbf{u}_{s,{\it{\Gamma}}}^m \\ \mathbf{p}_f^m \end{pmatrix}= \begin{pmatrix}\mathbf{f}_{s,I_f} \\ \mathbf{f}_{s,{\it{\Gamma}}} \\ \mathbf{0}\end{pmatrix}- \begin{pmatrix}\mathbf{0} \\ \boldsymbol{\lambda}_p^{m-1} \\ \mathbf{0}\end{pmatrix}, \end{equation} (4.1) where $$\mathbf{u}_{s,{\it{\Gamma}}}$$ is the vector of the degrees of freedom of the normal velocity on $${\it{\Gamma}}$$; 2.
compute \begin{equation} \boldsymbol{\lambda}_f^m = \boldsymbol{\lambda}_p^{m-1} + (\alpha_f + \alpha_p) M_{{\it{\Gamma}}{\it{\Gamma}}} \mathbf{u}_{s,{\it{\Gamma}}}^m; \end{equation} (4.2) 3. solve the Darcy problem \begin{equation}\label{eq:darcy_alg11}\begin{pmatrix}\left(A_p^{\eta_p}\right)_{I_p I_p} & \left(A_p^{\eta_p}\right)_{I_p {\it{\Gamma}}} \\ \left(A_p^{\eta_p}\right)_{{\it{\Gamma}} I_p} & \left(A_p^{\eta_p}\right)_{{\it{\Gamma}} {\it{\Gamma}}} + \alpha_p^{-1} M_{{\it{\Gamma}} {\it{\Gamma}}}\end{pmatrix}\begin{pmatrix}\mathbf{p}_{p,I_p}^m \\ \mathbf{p}_{p,{\it{\Gamma}}}^m \end{pmatrix}= \begin{pmatrix}\mathbf{g}_{p,I_p} \\ \mathbf{g}_{p,{\it{\Gamma}}}\end{pmatrix}+ \alpha_p^{-1}\begin{pmatrix}\mathbf{0} \\ \boldsymbol{\lambda}_f^m \end{pmatrix}, \end{equation} (4.3) where $$\mathbf{p}_{p,{\it{\Gamma}}}$$ is the vector of the degrees of freedom of the pressure on $${\it{\Gamma}}$$; 4. compute \begin{equation}\label{eq:lambdad_alg} \boldsymbol{\lambda}_p^m = \left( 1 + \frac{\alpha_f}{\alpha_p} \right) M_{{\it{\Gamma}}{\it{\Gamma}}} \mathbf{p}_{p,{\it{\Gamma}}}^m - \frac{\alpha_f}{\alpha_p} \boldsymbol{\lambda}_f^m . \end{equation} (4.4) Let $$R_{f,{\it{\Gamma}}}$$ be the algebraic restriction operator that to the Stokes velocity and pressure in $${\it{\Omega}}_f$$ associates the Stokes normal velocity on the interface $${\it{\Gamma}}$$. Moreover, let $$R_{p,{\it{\Gamma}}}$$ be the algebraic restriction operator that to the Darcy pressure in $${\it{\Omega}}_p$$ associates the Darcy pressure on the interface $${\it{\Gamma}}$$. Then, we can introduce the discrete Robin-to-Dirichlet operators \[\begin{array}{rcl}S_f &=& R_{f,{\it{\Gamma}}}\begin{pmatrix}\left(A_f^{\mu_f}\right)_{I_f I_f} & \left(A_f^{\mu_f}\right)_{I_f {\it{\Gamma}}} & (G_f)_{I_f} \\ \left(A_f^{\mu_f}\right)_{{\it{\Gamma}} I_f} & \left(A_f^{\mu_f}\right)_{{\it{\Gamma}} {\it{\Gamma}}} + \alpha_f M_{{\it{\Gamma}} {\it{\Gamma}}} & (G_f)_{{\it{\Gamma}}} \\ (G_f)_{I_f}^T & (G_f)_{{\it{\Gamma}}}^T & 0 \end{pmatrix}^{-1}R_{f,{\it{\Gamma}}}^T,\\ S_p &=& \alpha_p^{-1} R_{p,{\it{\Gamma}}}\begin{pmatrix}\left(A_p^{\eta_p}\right)_{I_p I_p} & \left(A_p^{\eta_p}\right)_{I_p {\it{\Gamma}}} \\ \left(A_p^{\eta_p}\right)_{{\it{\Gamma}} I_p} & \left(A_p^{\eta_p}\right)_{{\it{\Gamma}} {\it{\Gamma}}} + \alpha_p^{-1} M_{{\it{\Gamma}} {\it{\Gamma}}}\end{pmatrix}^{-1}R_{p,{\it{\Gamma}}}^T. \end{array}\] Simple algebraic computations allow (4.1)–(4.4) to be reinterpreted as a Gauss–Seidel iteration to solve the interface linear system \begin{equation}\label{eq:rr_lambda}A_{\rm RR} \, \begin{pmatrix} \boldsymbol{\lambda}_f \\ \boldsymbol{\lambda}_p \end{pmatrix} = \begin{pmatrix} - (\alpha_f + \alpha_p) M_{{\it{\Gamma}}{\it{\Gamma}}} \mathbf{f}_{\it{\Gamma}} \\ \left( 1 + \frac{\alpha_f}{\alpha_p} \right) M_{{\it{\Gamma}}{\it{\Gamma}}} \mathbf{g}_{\it{\Gamma}} \end{pmatrix}, \end{equation} (4.5) where \begin{equation}\label{eq:rr_matrix1}A_{\rm RR} = \begin{pmatrix}-I & I - (\alpha_f + \alpha_p) M_{{\it{\Gamma}}{\it{\Gamma}}} S_f \\ \frac{\alpha_f}{\alpha_p} I - \left( 1 + \frac{\alpha_f}{\alpha_p} \right) M_{{\it{\Gamma}}{\it{\Gamma}}} S_p & I \end{pmatrix}\end{equation} (4.6) $$\mathbf{f}_{\it{\Gamma}}$$ and $$\mathbf{g}_{\it{\Gamma}}$$ are vectors depending on the data of the problem. The matrix $$A_{\rm RR}$$ is nonsymmetric and indefinite. 
In fact, if $$N_{\it{\Gamma}}$$ is the number of rows of $$\boldsymbol{\lambda}_f$$ (or, equivalently, $$\boldsymbol{\lambda}_p$$) and $$(\mathbf{x}, \, \mathbf{y})$$ is an arbitrary nonnull vector in $$\mathbb{R}^{2N_{\it{\Gamma}}}$$, then denoting by $$(\cdot,\cdot)_2$$ and $$\|\cdot\|_2$$ the Euclidean scalar product and norm, we obtain \begin{equation*}\left(\mathbf{x}^{T}, \, \mathbf{y}^{T}\right) A_{\rm RR}\begin{pmatrix} \mathbf{x}\\ \mathbf{y} \end{pmatrix} = \|\mathbf{y}\|_2^2 - \|\mathbf{x}\|_2^2 + \left( 1 + \frac{\alpha_f}{\alpha_p} \right) \left( (\mathbf{x},\mathbf{y})_2 - \alpha_p \mathbf{x}^{T} M_{{\it{\Gamma}}{\it{\Gamma}}} S_f \mathbf{y} - \mathbf{y}^{T} M_{{\it{\Gamma}}{\it{\Gamma}}} S_p \mathbf{x} \right)\!, \end{equation*} whose sign may be either positive or negative.
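A schematic, matrix-free realization of the interface solve may clarify how (4.5)–(4.6) are used in practice. In the sketch below, apply_Sf and apply_Sp are hypothetical callbacks that apply the discrete Robin-to-Dirichlet operators $$S_f$$ and $$S_p$$ (i.e., perform one Stokes or one Darcy subdomain solve), M is the interface mass matrix $$M_{{\it{\Gamma}}{\it{\Gamma}}}$$, and f_gamma, g_gamma are the data-dependent interface vectors; none of these names come from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_A_RR(apply_Sf, apply_Sp, M, alpha_f, alpha_p, n_gamma):
    """LinearOperator realising the block matrix A_RR of (4.6), one subdomain
    solve per application of S_f or S_p."""
    def matvec(x):
        lam_f, lam_p = x[:n_gamma], x[n_gamma:]
        row_f = -lam_f + lam_p - (alpha_f + alpha_p) * (M @ apply_Sf(lam_p))
        row_p = (alpha_f/alpha_p) * lam_f \
                - (1 + alpha_f/alpha_p) * (M @ apply_Sp(lam_f)) + lam_p
        return np.concatenate([row_f, row_p])
    return LinearOperator((2*n_gamma, 2*n_gamma), matvec=matvec)

def solve_interface(apply_Sf, apply_Sp, M, alpha_f, alpha_p, f_gamma, g_gamma):
    """Solve the interface system (4.5) with GMRES (A_RR is nonsymmetric)."""
    n_gamma = f_gamma.size
    rhs = np.concatenate([-(alpha_f + alpha_p) * (M @ f_gamma),
                          (1 + alpha_f/alpha_p) * (M @ g_gamma)])
    A = make_A_RR(apply_Sf, apply_Sp, M, alpha_f, alpha_p, n_gamma)
    lam, info = gmres(A, rhs)
    assert info == 0, "GMRES did not converge"
    return lam[:n_gamma], lam[n_gamma:]   # lambda_f, lambda_p
```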
4.2 Test $$1$$ We assess the effectiveness of the optimized Schwarz method on a model problem with known analytic solution. The computational domains are $${\it{\Omega}}_f = (0,1)\times (1,2)$$ and $${\it{\Omega}}_p = (0,1) \times (0,1)$$ separated by the interface $${\it{\Gamma}} = (0,1) \times \{ 1 \}$$. The computational grids are uniform, structured, made of triangles and characterized by mesh size $$h = 2^{-(s+2)}$$ with $$s=1,\ldots,6$$. We set $$\eta_p$$ constant (assuming $$\eta_1 = \eta_2$$), $$\alpha_{\rm BJ} = 1$$, $$k_{{\rm{min}}} = \pi$$ and $$k_{{\rm{max}}} = \pi/h$$. The boundary conditions and the forcing terms are such that the exact solution is $$\mathbf{u}_f = \left( \sqrt{\mu_f \eta_p} , \, \alpha_{\rm BJ} x \right)$$, $$p_f = 2\mu_f (x+y-1) + (3\eta_p)^{-1}$$, $$p_p = \left( -\alpha_{\rm BJ} x\left(y-1\right)+ y^3/3 -y^2 +y \right)/\eta_p + 2 \mu_f x$$. We solve the interface system (4.5) using generalized minimal residual (GMRES) (Saad & Schultz, 1986) with tolerance 1e$$-$$9 on the relative residual, starting the iterations from $$(\boldsymbol{\lambda}_f, \, \boldsymbol{\lambda}_p)^{T} = \mathbf{0}$$. In Figs 3–4, we plot the values of $$\alpha_{f}$$ and $$\alpha_{p}$$, computed using the three approaches studied in Sections 3.4.1–3.4.3, and the corresponding number of GMRES iterations for different values of $$\mu_f$$, $$\eta_p$$ and $$h$$. The method based on low-order Taylor approximation is not very effective, because the number of iterations grows significantly in some cases, especially for small values of the physical parameters. The coefficients computed both with equioscillation and with the mean minimization criterion seem to guarantee robustness with respect to $$h$$, i.e., the iteration counts appear to stabilize as the mesh size becomes reasonable in terms of accuracy of the solution, and this behaviour remains evident as $$h\to 0$$. However, there is still dependence on the value of the physical parameters. Finally, notice that the parameters obtained by equioscillation obey the inequalities (3.25), i.e., $$\alpha_f$$ is larger than $$\alpha_p$$ if the mesh size $$h$$ is large enough compared with the physical parameters of the problem, whereas $$\alpha_f$$ becomes smaller than $$\alpha_p$$ if $$h$$ is taken small enough. Although an analytic expression is not available for $$\alpha_f$$ and $$\alpha_p$$ in the case of mean minimization, we can infer from the graphs that they behave in an analogous way as concerns their mutual magnitude. Fig. 3. Left: parameters $$\alpha_{f}$$ (circles) and $$\alpha_{p}$$ (diamonds) versus $$h$$ for different values of $$\mu_f$$ and $$\eta_p$$. Right: corresponding number of iterations versus $$h$$. Dotted lines: low-order Taylor expansion; dashed lines: equioscillation; solid lines: mean minimization. Fig. 4. Left: parameters $$\alpha_{f}$$ (circles) and $$\alpha_{p}$$ (diamonds) versus $$h$$ for different values of $$\mu_f$$ and $$\eta_p$$. Right: corresponding number of iterations versus $$h$$. Dotted lines: low-order Taylor expansion; dashed lines: equioscillation; solid lines: mean minimization. In Table 2, we show the effective convergence rate $$\rho_{{\rm{max}}} = \max_{k \in [k_{{\rm{min}}},k_{{\rm{max}}}]} \rho(\alpha_f,\alpha_p,k)$$, the mean convergence rate $$E(\alpha_f,\alpha_p) = (k_{{\rm{max}}}-k_{{\rm{min}}})^{-1} \int_{k_{{\rm{min}}}}^{k_{{\rm{max}}}} \rho(\alpha_f,\alpha_p,k) \, {\rm{d}}k$$ and the iteration count for different values of $$\mu_f$$ and $$\eta_p$$, $$h=2^{-5}$$ and the three choices of the optimal parameters: low-order Taylor expansion (T) (3.15), equioscillation (E) (3.20) and mean minimization (M). We can observe that minimizing the effective convergence rate is not necessarily a winning strategy. In general, minimizing the mean convergence rate provides better results when an iterative method is used to solve the interface problem, even in the case when the effective convergence rate equals 1. Table 2 Effective convergence rate $$\rho_{{\rm{max}}}$$, mean convergence rate $$E(\alpha_f,\alpha_p)$$ and iteration count for different values of $$\mu_f$$ and $$\eta_p$$, $$h=2^{-5}$$ and the three choices of $$\alpha_f$$, $$\alpha_p$$: low-order Taylor expansion $$({\rm T})$$, equioscillation $$({\rm E})$$ and mean minimization $$({\rm M})$$.
Table 2. Effective convergence rate $$\rho_{{\rm{max}}}$$, mean convergence rate $$E(\alpha_f,\alpha_p)$$ and iteration count for different values of $$\mu_f$$ and $$\eta_p$$, $$h=2^{-5}$$ and the three choices of $$\alpha_f$$, $$\alpha_p$$: low-order Taylor expansion (T), equioscillation (E) and mean minimization (M).

μ_f    η_p    Method   α_f        α_p       ρ_max    E(α_f,α_p)   Iter
1      1      T        0.0099     6.2832    0.0116   0.0026       8
              E        0.1622     12.3285   0.0116   0.0089       8
              M        0.0357     56.0435   0.0395   0.0009       8
1      1e-2   T        0.9947     6.2832    0.3613   0.1363       22
              E        9.9150     20.1714   0.3613   0.2320       18
              M        5.4414     36.7552   1.0000   0.0729       14
1      1e-4   T        99.4718    6.2832    0.2414   0.1581       46
              E        258.1914   77.4619   0.2414   0.0853       30
              M        217.3489   92.0180   0.3472   0.0775       26
1e-1   1      T        0.0099     0.6283    0.0945   0.0239       12
              E        0.1484     1.3477    0.0945   0.0706       12
              M        0.0364     5.4896    0.3549   0.0089       10
1e-2   1      T        0.0099     0.0628    0.3613   0.1363       22
              E        0.0992     0.2017    0.3613   0.2320       18
              M        0.0544     0.3676    1.0000   0.0729       14
1e-1   1e-2   T        0.9947     0.6283    0.4806   0.2740       38
              E        4.8415     4.1309    0.4806   0.2249       24
              M        3.3703     5.9342    1.0000   0.1313       20
1e-1   1e-3   T        9.9472     0.6283    0.2414   0.1581       46
              E        25.8191    7.7462    0.2414   0.0853       30
              M        21.7349    9.2018    0.3472   0.0775       26
1e-1   1e-4   T        99.4718    0.6283    0.0429   0.0286       32
              E        201.6164   9.9198    0.0429   0.0143       32
              M        195.9084   10.2089   0.0456   0.0143       32
Finally, in Fig. 5, we consider four possible combinations of $$\mu_f$$ and $$\eta_p$$, with $$h=2^{-5}$$, and we show the number of iterations for a range of values of $$\alpha_f$$ and $$\alpha_p$$. In all cases, the optimal coefficients devised by minimizing the convergence rate either fall in the regions with the minimum number of iterations or lie in the regions closest to them.

Fig. 5. Number of iterations for $$h=2^{-5}$$ and different values of $$\alpha_f$$ and $$\alpha_p$$. The dotted line represents the curve $$\alpha_f\alpha_p = 2\mu_f/\eta_p$$ (3.18). Squares correspond to $$(\alpha_f,\alpha_p)$$ computed using the low-order Taylor expansion (3.15), circles to the case of equioscillation (3.20) and diamonds to the mean minimization.
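The dotted curve in Fig. 5 also suggests a simple way to narrow a brute-force search for good Robin coefficients: candidate pairs can be generated directly on the curve $$\alpha_f\alpha_p = 2\mu_f/\eta_p$$ of (3.18) and ranked by observed iteration count. The helper below is only a sketch of this idea, which is not a procedure prescribed in the text; `count_gmres_iterations` is a hypothetical wrapper that runs the coupled solve for a given pair and returns the number of GMRES iterations.

```python
# Sketch: scan Robin pairs along the curve alpha_f * alpha_p = 2*mu_f/eta_p
# of (3.18) and keep the pair with the smallest observed iteration count.
import numpy as np

def pairs_on_curve(mu_f, eta_p, alpha_f_min, alpha_f_max, n=20):
    alpha_f = np.geomspace(alpha_f_min, alpha_f_max, n)
    alpha_p = 2.0 * mu_f / (eta_p * alpha_f)      # enforce the constraint (3.18)
    return list(zip(alpha_f, alpha_p))

def best_pair_on_curve(count_gmres_iterations, mu_f, eta_p, alpha_f_min, alpha_f_max):
    pairs = pairs_on_curve(mu_f, eta_p, alpha_f_min, alpha_f_max)
    iters, (af, ap) = min((count_gmres_iterations(af, ap), (af, ap)) for af, ap in pairs)
    return af, ap, iters
```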
4.3 Test 2

We simulate a two-dimensional cross-flow membrane filtration problem similar to Hanspal et al. (2009). The fluid domain is $${\it{\Omega}}_f = (0,0.015) \times (0.0025,0.0075)$$ m, the porous medium domain is $${\it{\Omega}}_p = (0.0035,0.0105) \times (0,0.0025)$$ m and the interface is $${\it{\Gamma}} = (0.0035,0.0105) \times \{0.0025\}$$ m. The boundary conditions are set as follows: on $${\it{\Gamma}}_f^{\rm{in}}=\{0\}\times (0.0025,0.0075)$$ we impose the parabolic inflow velocity profile $$\mathbf{u}_f = (-16000 y^2 + 160y - 0.3,\,0)$$ m/s; on $${\it{\Gamma}}_f^{{\rm{out}}}=\{0.015\}\times(0.00625,0.0075)$$, $$( 2\mu \nabla^{\mathrm{s}} \mathbf{u}_f - p_f \boldsymbol{I} ) \cdot \mathbf{n} = \mathbf{0}$$ kg/(m$$\cdot$$s$$^2$$); on $$\partial{\it{\Omega}}_f \setminus ({\it{\Gamma}}_f^{{\rm{in}}} \cup {\it{\Gamma}}_f^{{\rm{out}}} \cup {\it{\Gamma}})$$, $$\mathbf{u}_f = \mathbf{0}$$ m/s; on $${\it{\Gamma}}_p^b=(0.0035,0.0105) \times \{ 0 \}$$, $$p_p = 0$$ kg/(m$$\cdot$$s$$^2$$); on $$\partial{\it{\Omega}}_p \setminus ({\it{\Gamma}}_p^b \cup {\it{\Gamma}})$$, $$\mathbf{u}_p\cdot\mathbf{n} = 0$$ m/s. Since gravitational effects are neglected, both $$\mathbf{f}_f$$ and $$\mathbf{g}_p$$ are null. The fluid has density $$1000$$ kg/m$$^3$$ and dynamic viscosity $$0.001$$ kg/(m$$\cdot$$s). The permeability is either $$\boldsymbol{K}_1 = 1$$e$$-$$6 $${\rm{diag}} (1,1)$$ m$$^2$$ or $$\boldsymbol{K}_2 = 1$$e$$-$$12 $${\rm{diag}} (1,1)$$ m$$^2$$. Finally, $$\alpha_{\rm BJ}=1$$. Using $$X_f = 0.005$$ m and $$U_f = 0.1$$ m/s for the adimensionalization, we obtain $$\mu_f = 0.002$$, $$\eta_p=20$$ for $$\boldsymbol{K}_1$$ and $$\eta_p=2$$e$$-$$5 for $$\boldsymbol{K}_2$$.

Table 3 reports the coefficients $$\alpha_f$$ and $$\alpha_p$$ together with the number of GMRES iterations required to converge to the tolerance 1e$$-$$9 on the relative residual for the two different values of the permeability. We can see that the mean minimization approach guarantees convergence in a number of iterations almost independent of the computational grid, and that it performs better than both the low-order Taylor and the equioscillation methods, at least when $$\eta_p$$ is large. The computed solutions are shown in Figs 6 and 7.

Fig. 6. Stokes velocity for $$\boldsymbol{K}_1 = 1$$e$$-$$6 $${\rm{diag}} (1,1)$$ m$$^2$$ (left) and $$\boldsymbol{K}_2 = 1$$e$$-$$12 $${\rm{diag}} (1,1)$$ m$$^2$$ (right).

Fig. 7. Darcy pressure for $$\boldsymbol{K}_1 = 1$$e$$-$$6 $${\rm{diag}} (1,1)$$ m$$^2$$ (left) and $$\boldsymbol{K}_2 = 1$$e$$-$$12 $${\rm{diag}} (1,1)$$ m$$^2$$ (right).
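As a consistency check on the nondimensional values quoted above, the short script below recomputes them. The scalings used here, $$\mu_f = \mu/(\rho U_f X_f)$$ and $$\eta_p = K U_f/(\nu X_f)$$ with $$\nu = \mu/\rho$$, are our own assumption: they are not stated in this section, but they reproduce the reported values $$\mu_f = 0.002$$ and $$\eta_p = 20$$ (respectively $$2$$e$$-$$5) for the two permeabilities; the actual definitions of the coefficients are those of Section 2.

```python
# Sketch: nondimensionalization check for Test 2.  The scalings below are an
# assumption chosen to reproduce the values quoted in the text; the paper's
# own definitions are given in Section 2.
rho, mu = 1000.0, 0.001          # density [kg/m^3], dynamic viscosity [kg/(m s)]
X_f, U_f = 0.005, 0.1            # reference length [m] and velocity [m/s]
nu = mu / rho                    # kinematic viscosity [m^2/s]

mu_f = mu / (rho * U_f * X_f)    # assumed scaling -> 0.002
for K in (1e-6, 1e-12):          # permeabilities K_1 and K_2 [m^2]
    eta_p = K * U_f / (nu * X_f) # assumed scaling -> 20 and 2e-5
    print(f"K = {K:g}: mu_f = {mu_f:g}, eta_p = {eta_p:g}")
```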
Table 3. Parameters $$\alpha_{f}$$, $$\alpha_{p}$$ and GMRES iterations for the cases of low-order Taylor expansion, equioscillation and mean minimization. Meshes have $$h = 2^{-(2+s)}$$, and dofs is the number of interface unknowns. Top: $$\mu_f = 0.002$$, $$\eta_p = 20$$. Bottom: $$\mu_f = 0.002$$, $$\eta_p = 2{\rm e}{-}5$$.

                 Low-order Taylor            Equioscillation             Mean minimization
s   dofs    α_f        α_p        Iter   α_f        α_p        Iter   α_f        α_p        Iter
Top: μ_f = 0.002, η_p = 20
1   50      1.99e-03   8.98e-03   21     9.11e-03   2.19e-02   18     5.18e-03   3.86e-02   13
2   98      9.95e-04   8.98e-03   21     8.43e-03   2.37e-02   17     3.34e-03   5.99e-02   13
3   194     4.97e-04   8.98e-03   21     8.10e-03   2.47e-02   17     3.16e-03   6.33e-02   13
4   386     2.49e-04   8.98e-03   21     7.94e-03   2.52e-02   17     3.16e-03   6.33e-02   13
Bottom: μ_f = 0.002, η_p = 2e-5
1   50      1.99e+03   8.98e-03   10     3.65e+03   5.48e-01   10     1.99e+03   1.01e-01   10
2   98      9.95e+02   8.98e-03   10     1.90e+03   1.05e-01   10     9.95e+02   2.01e-01   10
3   194     4.97e+02   8.98e-03   12     9.73e+02   2.06e-01   12     4.97e+02   4.02e-01   12
4   386     2.49e+02   8.98e-03   14     4.92e+02   4.06e-01   14     2.49e+02   8.04e-01   14
5. Conclusions

In this article, an optimized Schwarz method for the Stokes–Darcy problem has been studied. Different strategies have been provided to compute, in practice, optimal parameters for the Robin interface conditions that guarantee the convergence of the method. The proposed strategies take into account both the physical parameters typical of this coupled problem (i.e., the fluid viscosity and the permeability of the porous medium) and the size $$h$$ of the computational grid used for the simulations. Previous results (Discacciati et al., 2007) showed that, for a fixed computational mesh and for fluid viscosity and permeability tending to zero, convergence of the Robin–Robin method was guaranteed if $$\alpha_f > \alpha_p$$. However, Chen et al. (2011) proved that geometric convergence could be obtained in some cases for an appropriate choice of $$\alpha_f < \alpha_p$$. The analysis carried out in this article helps to clarify the issue of the relative size of the parameters by highlighting that their values may change significantly depending not only on the physical parameters of the problem but also on the computational grid used for the finite element approximation. In particular, (3.25) shows that, if the product of the physical parameters is small enough compared with $$h$$, then $$\alpha_f > \alpha_p$$ guarantees optimal convergence of the Robin–Robin algorithm, while for fine enough meshes $$\alpha_p$$ may be taken larger than $$\alpha_f$$. Finally, in this article the Robin–Robin method has been reinterpreted as an iterative method for solving a suitable interface linear system (4.5), to which Krylov methods can be applied to enhance convergence.
Funding

European Union Seventh Framework Programme (FP7/2007-2013; grant 294229) to M.D.; Basque Government through the BERC 2014-2017 programme and Spanish Ministry of Economics and Competitiveness (MINECO) through the BCAM Severo Ochoa Excellence Accreditation (SEV-2013-0323) to L.G.G.

References

Alonso-Rodriguez, A. & Gerardo-Giorda, L. (2006) New non-overlapping domain decomposition methods for the time-harmonic Maxwell system. SIAM J. Sci. Comput., 28, 102–122.
Bear, J. (1979) Hydraulics of Groundwater. New York: McGraw-Hill.
Bear, J. & Bachmat, Y. (1991) Introduction to Modeling of Transport Phenomena in Porous Media. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Beavers, G. & Joseph, D. (1967) Boundary conditions at a naturally permeable wall. J. Fluid Mech., 30, 197–207.
Boffi, D., Brezzi, F. & Fortin, M. (2013) Mixed Finite Element Methods and Applications. Berlin: Springer.
Caiazzo, A., John, V. & Wilbrandt, U. (2014) On classical iterative subdomain methods for the Stokes–Darcy problem. Comput. Geosci., 18, 711–728.
Cao, Y., Gunzburger, M., Hu, X., Hua, F., Wang, X. & Zhao, W. (2010b) Finite element approximations for Stokes–Darcy flow with Beavers–Joseph interface conditions. SIAM J. Numer. Anal., 47, 4239–4256.
Cao, Y., Gunzburger, M., Hua, F. & Wang, X. (2010a) Coupled Stokes–Darcy model with Beavers–Joseph interface boundary conditions. Commun. Math. Sci., 8, 1–25.
Charton, P., Nataf, F. & Rogier, F. (1991) Méthode de décomposition de domaine pour l'équation d'advection-diffusion. C. R. Acad. Sci., 313, 623–626.
Chen, W., Gunzburger, M., Hua, F. & Wang, X. (2011) A parallel Robin–Robin domain decomposition method for the Stokes–Darcy system. SIAM J. Numer. Anal., 49, 1064–1084.
Collino, P., Delbue, G., Joly, P. & Piacentini, A. (1997) A new interface condition in the non-overlapping domain decomposition for the Maxwell equations. Comput. Methods Appl. Mech. Engrg., 148, 195–207.
Deng, Q. (1997) An analysis for a nonoverlapping domain decomposition iterative procedure. SIAM J. Sci. Comput., 18, 1517–1525.
Discacciati, M. (2004) Domain decomposition methods for the coupling of surface and groundwater flows. Ph.D. Thesis, École Polytechnique Fédérale de Lausanne, Switzerland.
Discacciati, M., Miglio, E. & Quarteroni, A. (2002) Mathematical and numerical models for coupling surface and groundwater flows. Appl. Numer. Math., 43, 57–74.
Discacciati, M. & Quarteroni, A. (2009) Navier–Stokes/Darcy coupling: modeling, analysis, and numerical approximation. Rev. Mat. Complut., 22, 315–426.
Discacciati, M., Quarteroni, A. & Valli, A. (2007) Robin–Robin domain decomposition methods for the Stokes–Darcy coupling. SIAM J. Numer. Anal., 45, 1246–1268.
Dolean, V., Gander, M. & Gerardo-Giorda, L. (2009) Optimized Schwarz methods for Maxwell's equations. SIAM J. Sci. Comput., 31, 2193–2213.
Dolean, V. & Nataf, F. (2007) An optimized Schwarz algorithm for the compressible Euler equations. Domain Decomposition Methods in Science and Engineering XVI (Widlund, O. & Keyes, D. eds). Berlin: Springer, pp. 173–180.
Dubois, O. (2007) Optimized Schwarz methods with Robin conditions for the advection-diffusion equation. Domain Decomposition Methods in Science and Engineering XVI (Widlund, O. & Keyes, D. eds). Berlin: Springer, pp. 181–188.
Engquist, B. & Zhao, H.-K. (1998) Absorbing boundary conditions for domain decomposition. Appl. Numer. Math., 27, 341–365.
Gander, M. (2006) Optimized Schwarz methods. SIAM J. Numer. Anal., 44, 699–731.
Gander, M., Halpern, L. & Magoulès, F. (2007) An optimized Schwarz method with two-sided Robin transmission conditions for the Helmholtz equation. Int. J. Numer. Methods Fluids, 55, 163–175.
Gander, M., Magoulès, F. & Nataf, F. (2002) Optimized Schwarz methods without overlap for the Helmholtz equation. SIAM J. Sci. Comput., 21, 38–60.
Gerardo-Giorda, L., Nobile, F. & Vergara, C. (2010) Analysis and optimization of Robin–Robin partitioned procedures in fluid-structure interaction problems. SIAM J. Numer. Anal., 48, 2091–2116.
Gerardo-Giorda, L. & Perego, M. (2013) Optimized Schwarz methods for the bidomain system in electrocardiology. ESAIM Math. Model. Numer. Anal., 75, 583–608.
Gerardo-Giorda, L., Perego, M. & Veneziani, A. (2011) Optimized Schwarz coupling of bidomain and monodomain models in electrocardiology. ESAIM Math. Model. Numer. Anal., 45, 309–334.
Girault, V. & Rivière, B. (2009) DG approximation of coupled Navier–Stokes and Darcy equations by Beaver–Joseph–Saffman interface condition. SIAM J. Numer. Anal., 47, 2052–2089.
Hagstrom, T., Tewarson, R. & Jazcilevich, A. (1988) Numerical experiments on a domain decomposition algorithm for nonlinear elliptic boundary value problems. Appl. Math. Lett., 1, 299–302.
Hanspal, N., Waghode, A., Nassehi, V. & Wakeman, R. (2009) Development of a predictive mathematical model for coupled Stokes/Darcy flows in cross-flow membrane filtration. Chem. Eng. J., 149, 132–142.
Jäger, W. & Mikelić, A. (1996) On the boundary conditions at the contact interface between a porous medium and a free fluid. Ann. Sc. Norm. Super. Pisa Cl. Sci., 23, 403–465.
Japhet, C., Nataf, F. & Rogier, F. (2001) The optimized order 2 method. Application to convection-diffusion problems. Future Gener. Comput. Syst., 18, 17–30.
Layton, W., Schieweck, F. & Yotov, I. (2003) Coupling fluid flow with porous media flow. SIAM J. Numer. Anal., 40, 2195–2218.
Lions, P. (1990) On the Schwarz alternating method III: a variant for non-overlapping subdomains. Third International Symposium on Domain Decomposition Methods for Partial Differential Equations (Chan, T., Glowinski, R., Périaux, J. & Widlund, O. eds). Philadelphia: SIAM, pp. 202–231.
Nataf, F. & Rogier, F. (1995) Factorization of the convection-diffusion operator and the Schwarz algorithm. Math. Models Methods Appl. Sci., 5, 67–93.
Quarteroni, A. & Valli, A. (1999) Domain Decomposition Methods for Partial Differential Equations. New York: Clarendon Press.
Saad, Y. & Schultz, M. (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 7, 856–869.
Saffman, P. (1971) On the boundary condition at the interface of a porous medium. Stud. Appl. Math., 1, 93–101.
Smith, B., Bjørstad, P. & Gropp, W. (1996) Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge: Cambridge University Press.
Toselli, A. & Widlund, O. (2005) Domain Decomposition Methods – Algorithms and Theory. Springer Series in Computational Mathematics, vol. 34. Berlin: Springer.