Marchenko inversion in a strong scattering regime including surface-related multiples

Abstract

Marchenko redatuming is a powerful technique which uses all orders of scattering to construct an image of the subsurface. The algorithm is of great value when strong internal scattering is present, as it mitigates unwanted overburden artefacts when imaging a deeper subsurface target. The solution to the Marchenko equation, on which Marchenko methods are based, is often written in terms of an iterative substitution expansion (Neumann series) and conceptualized as a process of adding and removing seismic events in an attempt to construct a focusing function. In this work we show that in some cases one may have to look beyond series expansion techniques to determine the correct solution. We prove that in the presence of surface-related multiples the originally proposed Neumann series type algorithm may fail to converge, whereas in the absence of such multiples convergence is guaranteed. We study the convergence properties of a class of such algorithms, which correspond to different choices of pre-conditioning of the Marchenko equation. We also propose and study the effectiveness of two other means of solving the Marchenko equation: LSQR (standard least squares) and a modified Levinson-type algorithm inspired by Merchant-Parks inversion, and discuss some of the challenges they pose.

Key words: Interferometry, Inverse theory, Computational seismology, Theoretical seismology, Wave scattering and diffraction

1 INTRODUCTION

Commonly used imaging algorithms are based on a single-scattering assumption, and multiple reflections, whether surface-related or internal, result in spurious reflectors in the reconstructed image. At the same time, contemporary hydrocarbon exploration is forced to investigate increasingly greater depths and more complex geological structures, which makes the problem of multiple reflections increasingly important. For this reason, more attention is devoted to removing multiple reflections from data prior to imaging. Marchenko redatuming is a powerful technique which uses all orders of scattering to construct an image of the subsurface; it consists of a two-step scheme comprising the solution of a Marchenko equation followed by multidimensional deconvolution (MDD; Wapenaar et al. 2013; Broggini et al. 2014; da Costa Filho et al. 2014; Slob et al. 2014; Wapenaar 2014; Wapenaar & Slob 2014; Wapenaar et al. 2014; da Costa Filho et al. 2015; van der Neut et al. 2015). It has been used, among other things, to eliminate internal multiples from pre-stack data (Meles et al. 2015; da Costa Filho et al. 2017), to construct pre-stack primaries without multiple elimination (Meles et al. 2016), and for overburden elimination, that is, to produce a reflection response from a redatuming level sitting below some scattering medium (van der Neut et al. 2016, 2017). The key elements for a successful application of Marchenko redatuming are: an estimate of the one-way traveltime between any point at the acquisition plane (which must coincide with the free surface) and any point at the redatuming level; and a noise-free reflection response, deconvolved for the source wavelet and free of intrinsic attenuation, with sources and receivers on a (sufficiently dense) grid at the acquisition surface. In the presence of dissipation, additional data sets (often unavailable in seismic applications) are required (Slob 2016). Initially, Marchenko redatuming was proposed to deal with internal multiples only (Wapenaar et al.
2013, 2014), where it relies on perfect surface multiple removal beforehand (as in e.g. Ravasi et al. 2016). Typically, such pre-processing leaves remnants of surface multiples comparable in strength to the (often weaker) internal multiples and these remnants can lead to calculation of incorrect focusing fields. Soon after, Singh et al. (2015) extended the Marchenko method to include free-surface multiples. In this paper we point out a fundamental issue with the recursive substitution solver for the Marchenko equation used in Wapenaar et al. (2014), Singh et al. (2015) or Slob (2016). The problem is manifest in the strong internal scattering regime, that is, in the presence of numerous layers with large impedance contrasts. The consequences of using the recursive substitution scheme in such a setting are threefold. For one, in the absence of surface-related multiples the scheme suffers from sluggish convergence, even though, as we prove, it is always convergent. In this case an approximation to the solution using only a few iterations might be insufficient for practical applications. Second, in the presence of surface-related multiples we show that the recursive substitution solver from Singh et al. (2015) can be seen as one of a family of distinct Marchenko equation decompositions. The choice of decomposition corresponds to a choice of pre-conditioner which affects the convergence rate of the solver. In contrast to the situation without a free surface, the scheme may now diverge, even though, as we demonstrate, a unique solution to the Marchenko equation always exists. Third, we show that the scheme from Slob (2016) for internal multiple removal in the presence of dissipation may also diverge in the presence of strong reflection and strong attenuation. None of these effects is a consequence of limited numerical accuracy, and we analytically derive conditions (scattering strength, focusing depth choice, among others) under which the schemes diverge. Where possible we provide a rigorous proof in 1-D, which can also be shown to hold for constant-velocity 2-D and 3-D media. Lastly, we propose two other means to solve the Marchenko equation: LSQR and a Merchant–Parks–Levinson algorithm, which do not suffer from divergence problems. We study their convergence properties and relative computational cost. These solvers provide a solution to the Marchenko inversion even in cases where the Neumann series diverges; however, they do so at an increased computational cost. This work is structured as follows. In the following section we briefly outline the Marchenko equation and its derivation (readers unfamiliar with the topic are referred to Wapenaar et al. (2014) and Singh et al. (2015) for further details), and we recast it in a form suitable to our analysis. In the third section we look at several different decompositions of the main operator. In Section 4, we write the 1-D Marchenko equation in a convenient matrix representation, and then in Section 5 we study the eigenvalue spectra of the relevant operators. In Section 6 we study the convergence properties of the recursive substitution schemes under the different decompositions from Section 3. We show the breaking point of the scheme in the presence of free-surface effects and investigate the performance of non-Neumann series solvers in Section 7.
2 MATRIX-OPERATOR REPRESENTATION OF THE MARCHENKO EQUATION

The Marchenko autofocusing scheme relies on up- and downgoing wave field decomposition, and amounts to solving a set of coupled Marchenko equations for unknown focusing operators, which are generalizations of migration operators that include high-order scattering. These solutions are directly related to up- and down-propagating Green's functions through the representation theorems of convolution and correlation type (Wapenaar et al. 2014), modified to include the free-surface effects (Singh et al. 2015) \begin{eqnarray} G^-_{\zeta }&=&R_{\zeta } \left( f_{1}^+-{\zeta } f_{1}^- \right)-f_{1}^{-}, \end{eqnarray} (1) \begin{eqnarray} G^{+}_{\zeta }&=&f_{1}^{+*}-R_{\zeta } \left( f_{1}^{-*}-{\zeta } f_{1}^{+*} \right) . \end{eqnarray} (2) Here, $$G_{\zeta }^{\pm }$$ represents the up- (−) and downward (+) propagating Green’s function due to a source at the (acquisition) surface, recorded at the redatuming level; $$f_{1}^{\pm }$$ represents the up- (−) and downward (+) propagating wavefields measured at the surface; ζ is the free-surface reflection coefficient; and Rζ is an operator which convolves the field it acts on with the wavelet-deconvolved surface reflection response (due to a dipole source and a monopole receiver) in the time domain and sums over source (or receiver) positions. For $$G_{\zeta }^{\pm }$$ and Rζ, the subscript ζ denotes the presence (ζ ≠ 0) or absence (ζ = 0) of surface-related multiples. Focusing functions $$\left( f_{1}^{\pm } \right)$$ are implicitly defined in the ζ = 0 medium, and hence (unlike the Green’s functions and the reflection response) will not be appended with a ζ subscript. Later in the paper we will consider the extreme cases of ζ = 0, ±1. We use * to denote time reversal (frequency-domain complex conjugation). The wavefields in the above equations are represented in the frequency domain, where time-domain convolutions correspond to a simple multiplication; for 1-D, the spatial summation over source (or receiver) positions is absent. We refer to Wapenaar et al. (2014) and Singh et al. (2015) for further details. We may rewrite the relations above in terms of a block matrix operator equation \begin{eqnarray} \vec{G}_{\zeta }&=&\mathcal {A}_{\zeta }\vec{f}_{1} , \end{eqnarray} (3) where $$\vec{f}_1= \left[ -f_{1}^-,f_{1}^{+*} \right]^T$$, $$\vec{G}_{\zeta }= \left[ G^-_{\zeta },G^{+}_{\zeta } \right]^T$$ and \begin{eqnarray} \mathcal {A}_{\zeta }=\left(\begin{array}{c{@}{\quad}c}1+{\zeta } R_{\zeta } & R_{\zeta }T \\ R_{\zeta }T & 1+{\zeta } R_{\zeta }\end{array}\right)\equiv I_2-\mathcal {B}_{\zeta } . \end{eqnarray} (4) Here T stands for the time reversal operator, that is, $$Tf_1^+=f_1^{+*}$$; the superscript T refers to transposition; and I2 is the 2 × 2 identity matrix. We use the fact that $$G^{\pm }_{\zeta }$$ and $$f_1^{\pm }$$ are temporally separated to define a projector P, such that $$P G_{\zeta }^{\pm }=0$$ and $$P f_1^{-}=f_1^{-}$$. In the Marchenko literature, P is known as the windowing operator; it separates causal from acausal events in G by maintaining all events before the first arrival (acausal events) and suppressing all other (causal) events. The downward propagating focusing field $$f_1^{+}=T_d^{\text{inv}}+M_+$$ is composed of the ‘primary’ focusing field $$T_d^{\text{inv}}$$ (approximated by a time-reversed direct arrival field from a point on the redatuming level recorded at the surface) and the coda wavefield M+, respectively.
They satisfy $$P M_+^*=M_+^*$$ and $$P T_d^{\text{inv}*}=0$$. The action of the projector P on eq. (3) yields the set of coupled Marchenko equations $$\hat{P} \mathcal {A}_{\zeta }\vec{f}_1=0$$, where $$\hat{P}=I_2 \otimes P$$, which can be expressed as \begin{eqnarray} [ I_2-\sigma _z \hat{P} \mathcal {B}_{\zeta }\sigma _z]\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right), \end{eqnarray} (5) where σz = diag[1, −1] is one of the Pauli sigma-matrices. At this point, Wapenaar et al. (2014) and Singh et al. (2015) recognized a nested integral equation \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right)+\sigma _z \hat{P}\mathcal {B}_{\zeta }\sigma _z\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right), \end{eqnarray} (6) which can be solved by a recursive substitution method (Neumann series). This interpretation is very much in the spirit of Rose’s first observation of iteratively collecting the reflected response, reversing it in time and sending it back into the medium (Rose 2002a,b). However, in order to understand the effect of the free surface and all the consequences that go with it, we need to take a different path. We interpret equation (5) as an inversion. In the presence of the free surface the operator \begin{eqnarray} I_2-\sigma _z \hat{P}\mathcal {B}_{\zeta }\sigma _z= L-U \end{eqnarray} (7) can be decomposed in terms of operators L and U, of which we will study three interesting cases: the trivial case L = I2 (BD for block-diagonal, as we explain below), the Gauss-Seidel (GS) case, and the modified Gauss-Seidel (MGS) case (Gauss 1903; Hazewinkel 2001). The introduced operators L and U are meant to allude to lower- and upper-triangular matrices, but do not necessarily need to be such. For any L such that L−1 exists, we can rewrite eq. (5) by substituting the decomposition (7) and acting with L−1, as \begin{eqnarray} ( I_2-L^{-1}U)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right), \end{eqnarray} (8) where we see that L−1 acts as a pre-conditioner. Note that eqs (5) and (8) yield the same solutions; however, the procedure for solving them may differ depending on the choice of L−1. The statement that we will prove in this paper is that, in the presence of a free surface (ζ ≠ 0), eq. (5) cannot always be solved by a Neumann series expansion (Singh et al. 2015), which only converges if the spectral radius (maximum-magnitude eigenvalue) of the operator $$\sigma _z\hat{P}\mathcal {B}_{\zeta }\sigma _z$$ is bounded by unity, and diverges otherwise (Fredholm 1903; Mathews & Walker 1970; Lax 2002). Previously, no claims about convergence were made (e.g. see Singh et al. 2015). We show that there are several pre-conditioners (both trivial and non-trivial), leading to different inversions with different convergence properties of the series expansion (or in some cases divergence). In particular, we will show that the spectral radius ρ of L−1U depends on the decomposition choice and the scattering strength given by Rζ.
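To make the role of the splitting and its pre-conditioner concrete, the following minimal sketch (written in Python/NumPy; the matrix A and its entries are generic placeholders rather than Marchenko operators) illustrates the convergence criterion used throughout this paper: the fixed-point (Neumann) iteration associated with a splitting A = L − U converges only when the spectral radius of L−1U is below one, and different splittings of the same A generally give different spectral radii.

import numpy as np

# Generic sketch (placeholder matrix, not Marchenko data): the same operator
# A = L - U admits different splittings; the fixed-point/Neumann iteration
# x_{k+1} = L^{-1}(U x_k + b) converges only if rho(L^{-1} U) < 1.
rng = np.random.default_rng(1)
n = 30
A = np.eye(n) + 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)

splittings = {
    "diagonal (Jacobi-like, cf. BD)": np.diag(np.diag(A)),
    "lower-triangular (Gauss-Seidel-like, cf. GS)": np.tril(A),
}
for name, L in splittings.items():
    U = L - A                                   # so that A = L - U
    rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L, U))))
    print(f"{name:45s} rho(L^-1 U) = {rho:.3f}")

The corresponding splittings of the Marchenko operator in eq. (7) are the subject of Section 3.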
We will show that this spectral radius is bounded by unity only in some cases, in which case this allows for a solution of the form \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)&=& \left( I_2-L^{-1}U \right)^{-1}L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right) \nonumber \\ &=&\sum \limits _{k=0}^{\infty } \left( L^{-1}U \right)^k L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right). \end{eqnarray} (9) This will be the topic of Sections 3 and 5. It is important to note that the problem can be solved by other (iterative non-Neumann type) inversion schemes, which, together with their convergence properties, we will address in Section 7.

3 L − U DECOMPOSITION OF THE SET OF COUPLED MARCHENKO EQUATIONS

In this section we will expand on the L − U decomposition idea, and later show that in the presence of a free surface, for the same data set and operator Rζ, one decomposition will result in a divergent Neumann expansion, while a different decomposition might still lead to a series converging to the correct result. Some of the technical details are presented in Appendix A.

3.1 The trivial case (BD)

As a reference point we will first address the trivial case of L = I2 (2 × 2 identity matrix).

3.1.1 Block-diagonalization

Eq. (5) can be block-diagonalized by defining two new variables x± such that \begin{eqnarray} \left(\begin{array}{c}x_+ \\ x_-\end{array}\right)=\left(\begin{array}{c{@}{\quad}c}1 & 1 \\ 1 & -1\end{array}\right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right) , \end{eqnarray} (10) which decouples the two Marchenko equations into \begin{eqnarray} \left( 1+PR_{\zeta } \left( \zeta \pm T \right) \right)x_{\mp }= PR_{\zeta }T_d^{\text{inv}} . \end{eqnarray} (11) The inverse operator (1 + PRζ(ζ ± T))−1 can be expressed in terms of a Neumann series \begin{eqnarray} \left( 1+ PR_{\zeta } \left( \zeta \pm T \right) \right)^{-1}=\sum \limits _{k=0}^{\infty } \left( -PR_{\zeta } \left( \zeta \pm T \right) \right)^{k} \end{eqnarray} (12) under the condition that the spectral radius of PRζ(ζ ± T) is bounded from above by one. If this condition is satisfied, the problem can be solved recursively using \begin{eqnarray} x_{\pm ,k+1} &=& f_{1,0}^{-}\mp PR_{\zeta }x_{\pm ,k}^{*}-{\zeta } PR_{\zeta }x_{\pm ,k} \end{eqnarray} (13) starting with k = 0 and $$x_{\pm ,0}=PR_{\zeta }T_d^{\text{inv}}$$. We will refer to the scheme above as the BD Neumann series. Note that this is not the scheme proposed by Wapenaar et al. (2014) or Singh et al. (2015). We shall return to this matter in Section 6.2. A simple numerical experiment shows that this scheme diverges for a single reflector with a reflection coefficient of one-third or more for sufficiently large focusing depth, caused by the fact that the spectral radius of PRζ(ζ ± T), for ζ ≠ 0, is no longer bounded by one. The problem worsens significantly for geometries with a larger number of reflectors, something that will be discussed in Sections 3.3, 5 and 6.5.

3.1.2 Surface multiple removal—the obvious pre-conditioner

The BD form of the equation also helps demonstrate another property of the problem. By studying the properties of the operators P, Rζ and T, and by virtue of (1 + ζRζ)R0 = Rζ, or the fact that eq. (3) transforms from the ζ = 0 problem to a ζ ≠ 0 problem by applying (1 + ζRζ) to both sides, one can show that \begin{eqnarray} 1+PR_{\zeta } \left( \zeta \pm T \right)&=& \left( 1+{\zeta } PR_{\zeta } \right) \left( 1\pm PR_0T \right) . \end{eqnarray} (14)
Taking the inverse of both sides (which exists, as we shall show), we get \begin{eqnarray} \left( 1+PR_{\zeta } \left( \zeta \pm T \right) \right)^{-1}&=& \left( 1\pm PR_0T \right)^{-1} \left( 1+{\zeta } PR_{\zeta } \right)^{-1} . \end{eqnarray} (15) The solution to the problem then reads \begin{eqnarray} \left( f_1^-\mp M_+^* \right)&=& \left( 1\pm PR_0T \right)^{-1} \left( 1+{\zeta } PR_{\zeta } \right)^{-1} PR_{\zeta }T_d^{\text{inv}}\nonumber \\ &=& \left[ \sum \limits _{p=0}^{\infty } \left( \mp PR_0T \right)^{p} \right] PR_{0}T_d^{\text{inv}}, \end{eqnarray} (16) where we have used that (1 + ζPRζ)−1PRζ = PR0, which amounts to first removing the surface-related multiples from the reflection response Rζ, followed by an inversion procedure as if the free surface were not present. In this way we can potentially decompose a single (possibly divergent) Neumann series into a product of two other (possibly convergent) Neumann series, yielding the solutions \begin{eqnarray} f_1^-&=& \left[ \sum \limits _{p=0}^{\infty } \left( PR_0T \right)^{2p+1} \right] T_d^{\text{inv}*},\nonumber \\ M_+^*&=& \left[ \sum \limits _{p=1}^{\infty } \left( PR_0T \right)^{2p} \right] T_d^{\text{inv}*}. \end{eqnarray} (17) The ζPRζ operator can be represented by a product of a diagonal matrix P and a lower-triangular Toeplitz matrix Rζ with zero diagonal elements (no recorded reflections at t = 0). As a consequence, all eigenvalues of PRζ are equal to zero, which means that the inverse (1 + ζPRζ)−1 can be expressed as a finite Neumann series, hence the inverse exists. In Section 5 we will show that the spectral radius of PR0T is bounded from above by 1, which implies that (1 ± PR0T)−1 always exists. These two facts, combined with the existence of the decomposition (14), imply that (1 + ζPRζ)(1 ± PR0T) will have an inverse too, hence 1 + PRζ(ζ ± T) will be invertible as well, showing that, upon an appropriate parametrization of $$f_1^+$$ and the right projection, the problem is always invertible. The problem with this approach in practice, however, is that it necessitates removal of surface-related multiples first, and our inability to perform this step perfectly on real data makes this scheme less desirable. For this reason we would like to investigate different pre-conditioners.

3.2 GS case

The GS method amounts to defining matrices L and U such that \begin{eqnarray} L &=&\left(\begin{array}{c{@}{\quad}c}1+{\zeta } PR_{\zeta } & 0 \\ -PR_{\zeta }T & 1+{\zeta } PR_{\zeta }\end{array}\right),\quad U=\left(\begin{array}{c{@}{\quad}c}0 & PR_{\zeta }T \\ 0 & 0\end{array}\right), \end{eqnarray} (18) where L is now indeed lower-triangular, and U strictly upper-triangular. From eq. (8), the solution can be parametrized as \begin{equation*}\left(\begin{array}{c{@}{\quad}c}1 & -PR_{0}T \\ 0 & 1- \left( PR_{0}T \right)^2 \end{array}\right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{0}T_d^{\text{inv}} \\ PR_{0}TPR_{0}T_d^{\text{inv}}\end{array}\right) \end{equation*} where we have used the result from above, (1 + ζPRζ)−1PRζ = PR0. See Appendix A for details. This means that the GS scheme also, undesirably, requires surface multiple removal prior to the iterative scheme, which reduces the problem down to the ζ = 0 scheme, with the solution given by eq. (17).

3.3 MGS case

The two methods above required a surface multiple removal step, due to the presence of 1 + ζPRζ in L, of which we subsequently took the inverse.
One modification we can apply to the GS-type scheme is to use a different decomposition of the matrix into L − U, such that L−1 exists, \begin{eqnarray} L&=&\left(\begin{array}{c{@}{\quad}c}1 & 0 \\ -PR_{\zeta }T & 1\end{array}\right)\qquad U=\left(\begin{array}{c{@}{\quad}c}-{\zeta } PR_{\zeta } & PR_{\zeta }T \\ 0 & -{\zeta } PR_{\zeta }\end{array}\right). \end{eqnarray} (19) This results in the equation \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\Omega \sigma _x\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right)+\Omega \left( \sigma _x\Omega \sigma _x \right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right), \end{eqnarray} (20) where ϕ ≡ Ωσx \begin{eqnarray} \phi &=& \left(\begin{array}{c{@}{\quad}c}0 & 1 \\ -{\zeta } PR_{\zeta } & PR_{\zeta }T\end{array}\right)\nonumber \\ &\equiv &\underbrace{\left(\begin{array}{c{@}{\quad}c}1 & 0 \\ PR_{\zeta }T & -{\zeta } PR_{\zeta }\end{array}\right)}_{\Omega } \underbrace{\left(\begin{array}{c{@}{\quad}c}0 & 1 \\ 1 & 0\end{array}\right)}_{\sigma _x}. \end{eqnarray} (21) See Appendix A for further details. Eq. (20) yields another recursive, two-step method \begin{eqnarray} M_{+,k}^* &=& PR_{\zeta }f_{1,k-1}^{-*}-{\zeta } PR_{\zeta }M_{+,k-1}^*\nonumber \\ f_{1,k} &=&f_{1,k-1} \nonumber \\ M_{+,k+1}^* &=&M_{+,k}^* \nonumber \\ f_{1,k+1}^- &=& f_{1,0}^{-}+PR_{\zeta }M_{+,k}-{\zeta } PR_{\zeta }f_{1,k}^- . \end{eqnarray} (22) We can simplify the steps to yield \begin{eqnarray} M_{+,k}^* &=& PR_{\zeta }f_{1,k}^{-*}-{\zeta } PR_{\zeta }M_{+,k-1}^*\nonumber \\ f_{1,k+1}^- &=& f_{1,0}^{-}+PR_{\zeta }M_{+,k}-{\zeta } PR_{\zeta }f_{1,k}^- . \end{eqnarray} (23) This is exactly the scheme first published by Wapenaar et al. (2014) (ζ = 0) and Singh et al. (2015) (ζ ≠ 0). The difference between the MGS equation (23) and the BD equation (13) is that the former splits the update step in two, and the second half of the iterative step uses the already updated form of one field to calculate the other. Note that it is the presence of ζ ≠ 0 that gives rise to a set of different decompositions, and that for ζ = 0 the MGS scheme reduces to the GS scheme. Moreover, one can also decompose L − U using a free parameter (e.g. a successive-over-relaxation approach from Young 1971); however, early investigations have shown that these parametrized decompositions do not seem to improve convergence without a priori removing free-surface effects.

4 A CONVENIENT 1-D MARCHENKO EQUATION REPRESENTATION

The scheme as proposed by Wapenaar et al. (2013, 2014) is composed of three building blocks: time reversal T; reflection response convolutions and summing over source/receiver locations (in 2-D or 3-D) Rζ; and projection P. In the 1-D case spatial index contractions reduce to scalar multiplication, and thus can be skipped. In order to study the convergence of the scheme, we would like to represent the (series of the) three operations above in terms of a single matrix operation acting on some representation of the primary focusing field $$T_d^{\text{inv}}$$.
We choose to work in the time domain and represent convolutions using Toeplitz forms, which in a discretized setting amounts to Toeplitz matrices $${\rm Toe} \left[ \vec{a} \right]\equiv \left[ a \right]_{i-j}$$ \begin{eqnarray} \left[ a \right]_{i-j}= \left( \begin{array}{c{@}{\quad}c{@}{\quad}c{@}{\quad}c{@}{\quad}c{@}{\quad}c} a_0 & a_{-1} & \cdots & \cdots & a_{1-N} & a_{-N} \\ a_1 & a_0 & \ddots & \ddots & \ddots & a_{1-N}\\ a_2 & a_1 & \ddots &\ddots & \ddots &\vdots \\ \vdots & \ddots & \ddots & & \ddots &\vdots \\ \vdots & \ddots & \ddots & a_1 & a_0 & a_{-1} \\ a_{N} & \cdots & \cdots & a_2 & a_1 &a_0 \end{array} \right) \end{eqnarray} (24) where N = tmax/dt + 1, with tmax the total seismic trace length and dt denoting the time bin width. Note that ak, for k > 0 (k < 0) corresponds to the positive (negative) times. Before we address the form of the projection P and time reversal T operators, we will first discuss the exact form of the full bandwidth seismic trace $$\mathcal {R}_{\zeta } \left( \omega \right)$$. Suppose we have a 1-D model, with M reflectors characterized by (real) reflection coefficients ri, \begin{eqnarray} \left|r_i\right| < 1, \qquad i=1,\ldots ,M, \end{eqnarray} (25) and two-way traveltimes τi \begin{eqnarray} 0 < \tau _1 < \tau _2 < \cdots < \tau _M, \end{eqnarray} (26) and the following shorthand notation in terms of complex variables, \begin{eqnarray} z_i \left( \omega \right) = r_i e^{i\omega \tau _i} . \end{eqnarray} (27) One can obtain a causal seismic trace $$\vec{\mathfrak {r}}_{\zeta }$$, with zero entries at negative times, that is, $$\mathfrak {r}_{k\le 0,\zeta }=0$$, and the kth entry is given by \begin{eqnarray} \mathfrak {r}_{k,\zeta }=\frac{1}{2\pi }\int \limits _{-\pi }^{\pi }e^{ik\omega } \mathcal {R}_{\zeta } \left( \omega \right) {\rm d}\omega ,\quad k=0,1,\ldots ,N-1 , \end{eqnarray} (28) where $$\mathcal {R}_{\zeta } \left( \omega \right)=\mathcal {R}_0 \left( \omega \right)/ \left( 1-{\zeta } \mathcal {R}_0 \left( \omega \right) \right)$$, and where $$\mathcal {R}_0(\omega )$$ is the frequency domain reflection response containing only primaries and internal multiples, so no free-surface multiples (Slob et al. 2014) \begin{eqnarray} \mathcal {R}_0(\omega ) = \frac{N^{ \left( 0 \right)}}{D^{ \left( 0 \right)}}. \end{eqnarray} (29) The numerator and the denominator above can be obtained by an iterative procedure using the formula \begin{eqnarray} \begin{array}{ll} \! N^{ \left( k-1 \right)}=z_k \left( \omega \right)D^{ \left( k \right)}+N^{ \left( k \right)},\\ \! D^{ \left( k-1 \right)}=z_k \left( \omega \right)^*N^{ \left( k \right)}+D^{ \left( k \right)},\end{array} \end{eqnarray} (30) starting with \begin{eqnarray} N^{ \left( M \right)}&=&0 \quad {\rm and}\quad D^{ \left( M \right)}=1. \end{eqnarray} (31) Using formula (29), the surface reflection response R0(ω) (for 1, 2 or 3 reflectors respectively) is given by \begin{eqnarray} \mathcal {R}_0(\omega ) & = & z_1 , \nonumber \\ \mathcal {R}_0(\omega ) & = & \frac{z_1+z_2}{1+z_2z_1^*} , \nonumber \\ \mathcal {R}_0(\omega ) & = & \frac{z_1+z_2+z_3+ z_3z_2^*z_1}{1+z_3z_2^* + z_3z_1^* + z_2z_1^*} . \end{eqnarray} (32) The reflection response convolution operator $$R_{\zeta }={\rm Toe} \left[ \vec{\mathfrak {r}}_{\zeta } \right]$$ is a lower-triangular Toeplitz matrix constructed from the time domain seismic trace $$\mathfrak {r}_{\zeta }$$. 
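As an illustration of eqs (25)–(32), the following sketch (an illustrative Python/NumPy transcription with placeholder model parameters, not the authors' code) evaluates the recursion (30), applies the free-surface scaling $$\mathcal {R}_{\zeta }=\mathcal {R}_0/ \left( 1-{\zeta } \mathcal {R}_0 \right)$$, maps the result to a causal time-domain trace as in eq. (28), and assembles the lower-triangular Toeplitz convolution operator Rζ.

import numpy as np
from scipy.linalg import toeplitz

# Sketch: 1-D reflection response via the recursion of eqs (29)-(31), the
# free-surface scaling R_zeta = R_0/(1 - zeta*R_0), an inverse DFT to a causal
# trace (eq. 28), and the Toeplitz convolution operator R_zeta.
r = np.array([0.3, -0.25, 0.2])      # reflection coefficients (placeholders)
tau = np.array([40, 90, 130])        # two-way traveltimes in samples (placeholders)
zeta = -1.0                          # free-surface reflection coefficient
nfft = 1024

omega = 2.0 * np.pi * np.fft.fftfreq(nfft)                    # omega per time sample
z = r[:, None] * np.exp(1j * omega[None, :] * tau[:, None])   # z_i(omega), eq. (27)

N, D = np.zeros(nfft, complex), np.ones(nfft, complex)        # N^(M) = 0, D^(M) = 1
for k in range(len(r) - 1, -1, -1):                           # recursion (30)
    N, D = z[k] * D + N, np.conj(z[k]) * N + D

R0 = N / D                                   # eq. (29)
Rz = R0 / (1.0 - zeta * R0)                  # free-surface reflection response

trace = np.real(np.fft.ifft(Rz))             # causal time-domain trace, eq. (28)
Rmat = toeplitz(trace, np.zeros(nfft))       # lower-triangular Toeplitz operator

print("max |R0(omega)| =", np.max(np.abs(R0)))

The printed maximum of |R0(ω)| stays below one, in line with the bound proved in Section 6.1.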
We can choose a matrix-vector representation where $$T_d^{\text{inv}}$$ is given by a vector of length 2N + 1 (N samples each for positive and negative times, plus a t = 0 sample), given by \begin{eqnarray} T_d^{\text{inv}}=\delta _{d,i} , \end{eqnarray} (33) where δij is the Kronecker delta, and d = N − td/dt is the time sample corresponding to the time t = −td (negative direct arrival time). In this case the operators P and T take the form of (2N + 1) × (2N + 1) matrices \begin{eqnarray} P_k&=&\left(\begin{array}{c{@}{\quad}c}I_{ (N+d)\times (N+d)} & 0_{ (N+d)\times (N+1-d)} \\ 0_{ (N+1-d)\times (N+d)} & 0_{ (N+1-d)\times (N+1-d)}\end{array}\right) , \nonumber \\ T_{ij}&=&\sum \limits _{i,j=1}^{2N+1}\delta _{i,2N+2-j} , \end{eqnarray} (34) where Ik × k represents a k × k identity matrix and 0j × k is a j × k matrix of zeros. It is easy to see that Tij is an anti-diagonal matrix of 1’s. Moreover, RζT is a Hankel matrix (an operator carrying out correlation on vectors) and the matrices above satisfy the property $$TR_{\zeta }T=R_{\zeta }^{T}$$, where T denotes matrix transposition. It is important to note that in the matrix-vector representation, a (coupled) Marchenko equation becomes a matrix equation such that $$( 1-\sigma _z \hat{P} \mathcal {B}_{\zeta }\sigma _z)$$ can be represented explicitly as a matrix, and hence its inverse can be found. By virtue of the projector P choice, this matrix and its inverse will also depend on seismic trace elements which correspond to reflections below the focal point; however, the condition $$P T_d^{\text{inv}*}=0$$ guarantees that the solutions $$f_1^{\pm }$$ will not depend on them. In Section 5 we use the latter representation to study different iterative solution methods and their convergence properties. We would like to stress that regardless of the representation used, or whether the analysis is done in 1-D or 2-D, we have qualitatively found the same results. Of the representations considered, it is the 1-D vector representation that makes the discussion most transparent.

5 1-D MODEL SPECTRAL ANALYSIS: CHARACTERISTIC POLYNOMIALS

In this section we will study the spectral radii of the BD, MGS and dissipative Marchenko Neumann series schemes in 1-D. To this effect we will derive the characteristic polynomials of the operators which build up the series. The maximum-magnitude root of each of these polynomials, the so-called spectral radius, determines the convergence properties of the different algorithms and will be further studied in Section 6. For mathematical details please consult Appendix B.

5.1 BD formulation

To establish whether the inverse of diag(PRζ(ζ − T), PRζ(ζ + T)) can be found by Neumann expansion, we determine its characteristic polynomial χζ(μ), which is the product of the characteristic polynomials of PRζ(ζ − T) and PRζ(ζ + T). Numerical investigation has shown that the convergence issue only becomes a problem for some geometries, past some threshold focusing depth. For example, in the free-surface model presented in Singh et al. (2015), where focusing functions were successfully obtained for a shallower depth, it can be shown that this will no longer be possible for much greater depths using the same iterative substitution algorithm.
Since we are interested in whether a particular set of reflectors might lead to series divergence for any depth, we will pick very large depths (and hence first arrival times td → ∞, that is, focusing far below the deepest reflector), meaning P ≈ 1, or where the strength of the (multiple) reflections has diminished enough, that is, $$PR\vec{v}\approx R\vec{v}$$, $$\forall \vec{v}$$. For consistency of the reciprocity theorems, this means that G− = 0 and that the first event in G+ is pushed infinitely far into the future, which for traces of finite length means that events in G+ will be absent. Additionally, this assumption offers us some mathematical freedom which we will be making use of later. The characteristic polynomial of a BD matrix diag(Rζ(ζ − T), Rζ(ζ + T)) is given by \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ \left( R_{\zeta } \left( \zeta -T \right)-\mu \right) \left( R_{\zeta } \left( \zeta +T \right)-\mu \right) \right], \end{eqnarray} (35) which in the special cases ζ ∈ {0, ±1} amounts to \begin{eqnarray} \chi _{{\zeta =0} } \left( \mu \right)&=& \det \left[ \mu ^2-R_{0}R_{0}^T \right] , \end{eqnarray} (36) \begin{eqnarray} \chi _{\zeta =\pm 1} \left( \mu \right)&=&\mu \det \left[ \mu - \zeta \left( R_{\zeta }+R_{\zeta }^T \right) \right] , \end{eqnarray} (37) where we see that the form of the expression for the spectrum is different for ζ = 0 or ζ = ±1, that is, it is determined by the existence or lack of surface-related multiples. Interestingly, the cases ζ = 0, ±1 correspond to spectra of real symmetric matrices, and in the presence of a reflecting surface, exactly half of the eigenvalues will be equal to zero on account of μ pre-multiplying the determinant in eq. (37). It is easy to see that $$TR_{\zeta } \left( t \right)T=R_{\zeta }^T$$ corresponds to a Toeplitz matrix constructed from a time-reversed trace. Therefore, the transposition of the operator Rζ amounts to frequency domain complex conjugation of the trace that was used to construct it.

5.2 MGS formulation

The characteristic polynomial of matrix ϕ, defined in eq. (21), is given by \begin{equation*} {\rm det} \left[ \phi -\mu \right]= {\rm det}\left(\begin{array}{c{@}{\quad}c}-\mu & 1\\ -{\zeta } R_{\zeta } & R_{\zeta }T-\mu \end{array}\right) , \end{equation*} which in the special cases considered, for ζ = 0 amounts to \begin{eqnarray} {\rm det} \left[ \mu ^2-R_0R_0^T \right]=0 , \end{eqnarray} (38) yielding the same result as (36). For a reflecting surface with ζ = ±1 we have \begin{eqnarray} {\rm det} \left[ \nu ^2-\nu \left( R_{\zeta }R_{\zeta }^T\pm \left( R_{\zeta }+R_{\zeta }^T \right) \right)+R_{\zeta }R_{\zeta }^T \right]=0 , \end{eqnarray} (39) with ν = μ², yielding a different expression than eq. (37) on account of additional $$R_{\zeta }R_{\zeta }^T$$-proportional terms that now no longer drop out.

5.3 Dissipative formulation

Marchenko redatuming in the presence of dissipation was formulated by Slob (2016), where the presented autofocusing formulation amounts to inverting (using the large-td, i.e. P = 1, approximation) \begin{eqnarray} \left(\begin{array}{c{@}{\quad}c}1 & -RT\\ -\bar{R}T & 1\end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) , \end{eqnarray} (40) with the characteristic polynomial defined by \begin{eqnarray} {\rm det} \left[ \mu ^2-R\bar{R}^T \right]=0 . \end{eqnarray} (41) We will show later that, unlike for the dissipationless case, |μ| is now no longer bounded by 1.
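As a numerical counterpart to the characteristic polynomial (37), the sketch below (illustrative Python/NumPy with placeholder single-reflector parameters, not the authors' code) builds the real symmetric Toeplitz matrix ζ(Rζ + Rζ^T) from a free-surface trace and computes its spectral radius, anticipating the convergence analysis of Section 6.

import numpy as np
from scipy.linalg import toeplitz

# Sketch: for zeta = +/-1 the nonzero part of the BD spectrum (eq. 37) is the
# spectrum of the real symmetric Toeplitz matrix zeta*(R_zeta + R_zeta^T).
# Single reflector below a free surface; parameter values are placeholders.
r1, tau, zeta, n = 0.4, 25, -1.0, 2000   # reflection coefficient, two-way time
                                         # in samples, free-surface coefficient

# Causal free-surface trace: spikes of amplitude zeta^(k-1) * r1^k at t = k*tau,
# i.e. the geometric series of R_zeta = R_0/(1 - zeta*R_0) with R_0 = r1 e^{i w tau}.
trace = np.zeros(n)
k = 1
while k * tau < n:
    trace[k * tau] = zeta ** (k - 1) * r1 ** k
    k += 1

R = toeplitz(trace, np.zeros(n))             # lower-triangular convolution operator
S = zeta * (R + R.T)                         # symmetric Toeplitz matrix of eq. (37)
rho = np.max(np.abs(np.linalg.eigvalsh(S)))  # spectral radius

# 2|r|/(1-|r|) is the asymptotic value derived for this geometry in Section 6.2.
print(f"rho = {rho:.4f},  2|r|/(1-|r|) = {2*abs(r1)/(1-abs(r1)):.4f}")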
6 CONVERGENCE PROPERTIES OF MARCHENKO AUTOFOCUSING NEUMANN SERIES ALGORITHMS

In this section we will build on the results derived above. We will first consider the case of interbeds only, followed by the case of surface multiples. We show how the convergence changes as a function of depth, and how it behaves in two dimensions. We end this section with a short note on the convergence of Marchenko inversion in the presence of dissipation.

6.1 Convergence in the absence of surface-related multiples

In this section we will show that Marchenko inversion in the absence of free-surface effects will always converge. To do so, we work in the frequency domain. Let us consider the eigenvectors ψ(ω) of R0T (where T is time-reversal, or in the frequency domain, complex conjugation), \begin{eqnarray} \mathcal {R}_0(\omega ) T \psi (\omega ) =\mathcal {R}_0(\omega ) \psi (\omega )^* = \lambda \psi (\omega ) , \end{eqnarray} (42) implying \begin{eqnarray} \mathcal {R}_0(\omega )\mathcal {R}_0(\omega )^* \psi (\omega ) = \left| \lambda \right|^2 \psi (\omega ) . \end{eqnarray} (43) Clearly we have eigenvectors \begin{eqnarray} \psi (\omega ) = \delta (\omega -\omega _0) + \delta (\omega +\omega _0) , \end{eqnarray} (44) whose eigenvalues satisfy \begin{eqnarray} \left| \lambda (\omega _0) \right| = \left| \mathcal {R}_0(\omega _0) \right| . \end{eqnarray} (45) For a fixed model {ri, τi} with 1, 2 or 3 reflectors, numerical experiments show that these eigenvalues obey the bounds \begin{eqnarray} \left| \mathcal {R}_0(\omega )\right| & \le & \left| r_{1} \right| \nonumber \\ \left| \mathcal {R}_0(\omega )\right| & \le & \frac{\left| r_{1} \right| + \left| r_{2} \right|}{1+\left| r_{2} \right|\left| r_{1} \right|}\nonumber \\ \left| \mathcal {R}_0(\omega )\right| & \le & \frac{\left| r_{1} \right| + \left| r_{2} \right| + \left| r_{3} \right| + \left| r_{3} \right|\left| r_{2} \right|\left| r_{1} \right|}{1+\left| r_{3} \right|\left| r_{2} \right| + \left| r_{3} \right|\left| r_{1} \right| + \left| r_{2} \right|\left| r_{1} \right|} . \end{eqnarray} (46) These bounds are stronger than the one we attempt to prove here, namely, \begin{eqnarray} \left| \mathcal {R}_0(\omega ) \right| < 1 . \end{eqnarray} (47) Using the polynomials N and D, we have \begin{equation*} \mathcal {R}_0(\omega ) \mathcal {R}_0(\omega )^* = \frac{ N^{ \left( 0 \right)} N^{ \left( 0 \right)*}}{D^{ \left( 0 \right)}D^{ \left( 0 \right)*}} \end{equation*} and from the recurrence relations (30) we derive \begin{eqnarray} D^{ \left( 0 \right)} D^{ \left( 0 \right)*} - N^{ \left( 0 \right)}N^{ \left( 0 \right)*} & = & (1-z_1 z_1^*) \left[ D^{ \left( 1 \right)} D^{ \left( 1 \right)*} - N^{ \left( 1 \right)}N^{ \left( 1 \right)*} \right] \nonumber \\ & = & \prod _i (1- \left| z_i \right|^2) \nonumber \\ & = & \prod _i (1-r_i^2) \in \left(0,1\right] . \end{eqnarray} (48) Therefore, we have in general \begin{eqnarray} \mathcal {R}_0(\omega ) \mathcal {R}_0(\omega )^*&=& \left| \mathcal {R}_0(\omega ) \right|^2 = \frac{x(\omega )^2}{c+x(\omega )^2} < 1\nonumber \\ &\Rightarrow & \left| \mathcal {R}_0(\omega ) \right|<1 \end{eqnarray} (49) since $$c = \prod _i (1-r_i^2) \in \left(0,1\right]$$ and x(ω)² = N^(0)N^(0)* ≥ 0. For a proof of the stronger bounds (46), see Appendix C. This means that the spectral radius for Marchenko inversion in the absence of surface-related multiples will always be bounded by 1, so the Neumann series expansion of the Marchenko inversion step is always justified and will always converge.
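The identity (48) and the resulting bound (49) are straightforward to verify numerically; the snippet below (an illustrative Python check with placeholder reflection coefficients and traveltimes, not part of the original proof) evaluates $$\mathcal {R}_0(\omega )$$ through the recursion (30) and confirms that |R0(ω)| stays below one even for strong reflectors.

import numpy as np

# Sketch: numerical check of eqs (48)-(49), i.e. D D* - N N* = prod(1 - r_i^2)
# and hence |R_0(omega)| < 1, for an arbitrary lossless 1-D reflector series.
r = np.array([0.6, -0.7, 0.5, 0.65])      # strong reflectors (placeholders)
tau = np.array([31, 57, 88, 140])         # two-way times in samples (placeholders)
omega = 2 * np.pi * np.fft.fftfreq(4096)  # frequency axis, radians per sample

z = r[:, None] * np.exp(1j * omega[None, :] * tau[:, None])      # z_i(omega)
N = np.zeros_like(omega, dtype=complex)                           # N^(M) = 0
D = np.ones_like(omega, dtype=complex)                            # D^(M) = 1
for k in range(len(r) - 1, -1, -1):                               # recursion (30)
    N, D = z[k] * D + N, np.conj(z[k]) * N + D

R0 = N / D
identity = np.abs(D) ** 2 - np.abs(N) ** 2       # should equal prod(1 - r_i^2)
print("max |R0(omega)|            =", np.max(np.abs(R0)))
print("max deviation from eq.(48) =", np.max(np.abs(identity - np.prod(1 - r ** 2))))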
A word of caution is in order here. The proof above holds for the case of all-order scattering in an infinitely long trace. Should a trace of finite length (tmax) be used, that is, $$\tilde{\mathcal {R}}_0 \left( t,t_{\text{max}} \right)\equiv \Theta \left( t_{\text{max}}-t \right)\mathcal {R}_0 \left( t \right)$$, where Θ is the Heaviside Theta function, it is possible that in the strong internal scattering regime $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|>1$$ for some ω. For example, consider a two-reflector model, with geologically unrealistic reflection coefficients of r1 = r2 = 0.75. In this case $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|>1$$ will hold if the trace terminates before the emergence of the second-order internal multiple, and $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|<1$$ will hold if the trace contains said event and all higher order multiples. In Marchenko inversion, however, the projector acts only on the result of the convolution, rather than truncating the trace that is being convolved; it appears that this may be the reason why the point above does not lead to divergence.

6.2 Convergence in the presence of surface-related multiples

In order to understand the convergence properties of the Neumann-type methods in the presence of surface-related multiples, that is, when ζ = ±1, we need to study the spectrum of a real symmetric Toeplitz (RST) matrix $$R_{\zeta }+R_{\zeta }^T$$ (see eq. 37) generated by a function Rζ(ω) such that the matrix entries satisfy \begin{eqnarray} t_{r}&=&\frac{1}{2\pi }\int \limits _{-\pi }^{\pi } \exp \left( i\omega r \right) \left( \mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right) \right) {\rm d}\omega \nonumber \\ &=&\frac{1}{\pi }\int \limits _{0}^{\pi } \cos \left( \omega r \right) \left( \mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right) \right) {\rm d}\omega . \end{eqnarray} (50) The result from Grenander & Szegö (1958, p. 64) dictates that the set of eigenvalues of an RST matrix is bounded from below by the absolute minimum and from above by the absolute maximum of $$\mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right)$$. Thus, considering a single reflector geometry, with $$\mathcal {R}_0 \left( \omega \right)=r_1 e^{i\omega \tau _1}$$, where τ1 is the two-way time between the source and the reflector, we get \begin{eqnarray} \mathcal {R}_{\zeta } \left( \omega \right)&=&\mathcal {R}_0 \left( \omega \right)\sum \limits _{i=0}^{\infty } \left( {\zeta } r_1 e^{i\omega \tau _1} \right)^i=\frac{r_1 e^{i\omega \tau _1}}{1-{\zeta } r_1 e^{i\omega \tau _1}} \end{eqnarray} (51) such that \begin{eqnarray} \mathcal {R}_{\zeta } \left( \omega \right)&+&\mathcal {R}_{\zeta } \left( -\omega \right)=\frac{2r_1 \left( \cos {\omega \tau _1}-{\zeta } r_1 \right)}{1+r_1^2-2r_1{\zeta } \cos \omega \tau _1} . \end{eqnarray} (52) It is easy to verify that the right-hand side in eq. (52) has an extremum at ωτ1 = kπ, with an integer k (where k is even for ζ = 1 and odd for ζ = −1). The convergence criterion then implies that \begin{eqnarray} \rho \left( \sigma _z\mathcal {B}_{\zeta }\sigma _z \right)=\frac{2 |r|}{1- |r|}<1&\rightarrow |r|<\displaystyle\frac{1}{3} . \end{eqnarray} (53)
This implies that already for a single reflector and deep enough focusing points (requiring long enough traces), the algorithm implementation given by the BD equation (13) diverges if the reflection coefficient exceeds one-third, which can be easily verified numerically.

6.3 MGS convergence properties

The result above does not hold for the MGS scheme outlined in Section 3.3. Numerical experiments show that for a single reflector geometry the MGS scheme from eq. (23) will only start diverging if the reflection coefficient exceeds approximately 0.5. This has to do with the fact that the Neumann series is given in terms of the operator ϕ (given by eq. 21), which does not solve the divergence problem, but reformulates the problem in such a way that it allows for a convergent solution for reflector geometries where the BD algorithm (13) already diverges. In order to illustrate the process, the following numerical experiment was designed. A set of 1-D overburden geometries with a free surface (ζ = −1) was created, where the number of reflectors nr, the two-way times $$t_{i,n_r}$$ and the reflection coefficients $$r_{i,n_r}$$ were all randomly generated. The two-way times were randomly selected from a uniform distribution on the interval $$0<t_{n_r,n_r}<2t_d=2$$ s sampled at 2dt = 8 ms. Only unique values of $$t_{n_r,n_r}$$ were kept to ensure that there is only one scatterer at a given position in time. The reflection coefficients were normally distributed with zero mean, and a standard deviation selected randomly (uniform probability distribution) from the interval [0.15, 0.35] to introduce some more variability in the scattering strengths between different geometries. All reflectors satisfying $$0.4< \left| r_{i,n_r} \right|$$ were removed from the overburden model, which characterizes a strong internal scattering regime that is still assumed realistic enough for geophysical applications. Subsequently, a 4.0 s (dt = 4 ms) long 1-D synthetic full-bandwidth (with a Nyquist frequency of 125 Hz) seismic trace was computed analytically (a series of Kronecker delta spikes) and used to generate a convolution operator Rζ with ζ = −1. For every random geometry the eigenvalue spectra of the operators $$\sigma _z P\mathcal {B}_{\zeta }\sigma _z$$, PRζ(ζ ± T) and ϕ² were calculated with Matlab’s function eig (using the P and T representation given in eq. 34), and hence (by selecting the eigenvalue with the maximum absolute value) the spectral radius was found. We have grouped the results in Fig. 1, where we have plotted the spectral radii of $$\sigma _z P\mathcal {B}_{\zeta }\sigma _z$$ (representing the BD case) versus those of ϕ² (representing the MGS case). Note that the spectral radii of $$\sigma _z P \mathcal {B}_{\zeta }\sigma _z$$, PRζ(ζ + T) and PRζ(ζ − T) turn out to be identical.

Figure 1. A scatter plot demonstrating the relationship between the spectral radii $$\rho \left( \sigma _zP \mathcal {B}\sigma _z \right)$$ (BD, left vertical axis) and ρ(ϕ²) (MGS, horizontal axis). The plots have been created by generating 1000 random 1-D overburdens with a random number (2 < nr ≤ 30, uniformly distributed) of randomly localized (uniformly distributed) scatterers with random reflection coefficients ri (normally distributed with mean zero and standard deviation 0.1), with extreme outliers, that is, |ri| > 0.4, removed. The overburdens have been classified by the number of overburden reflectors nr (regardless of their strength): nr ≤ 5 (blue dot, 164), 6 ≤ nr ≤ 10 (red triangle, 249), 11 ≤ nr ≤ 15 (green diamond, 271), and 16 ≤ nr ≤ 30 (black square, 316), where the parentheses contain the representing colour, symbol and total number of overburdens in a given class. The left panel shows a wide range of the results (although it does not display the cases of very large spectral radii), while the right panel presents a close-up describing the spectral radii behaviour in the parameter area where either of the series is convergent, or just becomes divergent. On both plots, probability density functions (PDFs) of the given classes (blue, red, green or black) as a function of the spectral radius ρ(ϕ²) are plotted (corresponding y-axis shown on the right; note the difference between the left (spectral radius) and right (PDF) vertical axes, also between the two plots). The grey line is given by $$2\rho \left( \phi ^2 \right)=\rho \left( \sigma _z \mathcal {B} \sigma _z \right)$$, where it is important to note that this relationship no longer holds for spectral radii ρ(ϕ²) far outside the parameter domain where the series is convergent. In either case where the spectral radius exceeds 1, the Neumann series expansion diverges.

It is noteworthy that for ζ = ±1 in a certain parameter domain, the spectral radius of ϕ² is half that of $$\sigma _z P\mathcal {B}\sigma _z$$, that is, $$2\rho \left( \phi ^2 \right)\approx \rho \left( \sigma _z P\mathcal {B}\sigma _z \right)$$ (see Fig. 1), which means that in the presence of free-surface effects the MGS scheme (23) still converges, while the BD Neumann series (13) already diverges.
If we translate this result to our earlier analysis, the relation $$2\rho \left( \phi ^2 \right)\approx \rho \left( \sigma _z P \mathcal {B}_{\zeta }\sigma _z \right)$$ translates to \begin{eqnarray} \rho (\phi ^2)=\frac{ |r|}{1- |r|}\lesssim 1 &\rightarrow |r|\lesssim \displaystyle\frac{1}{2} . \end{eqnarray} (54) Lastly, we want to point out that in the absence of surface-related multiples the MGS scheme reduces to the (BD) Neumann series expansion, and it should not be surprising that $$\rho \left( \phi ^2 \right)=\rho \left( \sigma _z P\mathcal {B}\sigma _z \right)$$, that is, the MGS scheme offers no benefit over the other schemes outlined above.

6.4 Empirical spectral analysis in the presence of projector P

In the section above we have considered the asymptotic case of td → ∞, which allowed us to work analytically. The analysis showed that single reflector geometries can lead to divergent Neumann series expansions. We showed that this is only true when the first-arrival time td is much larger than the onset of the first surface-related multiple. This means that some of the Neumann series solutions to the Marchenko equation might still converge when td is small, as exemplified by a numerical example in Singh et al. (2015), where, assuming that the model is homogeneous when extended below, divergence sets in beyond a depth of 12 km (≈3 s). In this section we will illustrate the onset of divergence in the Neumann series expansion as a function of depth. To achieve this, we cannot simply replace the traces $$\mathcal {R}_{\zeta } \left( t \right)$$ with their truncated equivalents $$\tilde{\mathcal {R}}_{\zeta } \left( t,2t_d \right)$$ in the characteristic polynomials χ (see Appendix B1), as this is obviously not equivalent. In the absence of transparent analytical results we aid ourselves with numerical experiments. We may either choose a large number of weak reflectors, or a couple of (slightly less realistic) strong ones to illustrate our point. For greater transparency, we choose the latter, and pick a two-reflector model with r = 0.4 (for which either Neumann series expansion scheme should diverge for sufficiently large depths). The two-way traveltimes from the surface to each reflector are t1 = 0.4 s and t2 = 0.48 s. We analytically generate a trace along the same lines as in Section 6.3. We perform Marchenko inversion by Neumann series expansion for a range of depths with corresponding one-way traveltimes 0.248 s ≤ td ≤ 2 s, calculate the spectral radii of the operators involved in the Neumann series for every depth, and plot the spectral radii as a function of focusing depth in Fig. 2. Two things become immediately apparent: (1) at depths for which the MGS scheme still converges, the BD scheme may already be divergent, and the ratio of their spectral radii is a function of depth; and (2) the spectral radius function ρ(td) is a step function, with discontinuities hallmarked by an extra event (Kronecker delta) entering the 0 < t < 2td window of the seismic trace, that is, the minimum length of the seismic trace that is used to construct the solution to the Marchenko equation.

Figure 2. (a) Seismic trace used, with surface-related multiples ζ = −1 (note that the horizontal scale is double the one in plot (b), two-way versus one-way time). (b) Spectral radii $$\rho (\sigma _z P\mathcal {B}\sigma _z)$$ (BD, dashed red), ρ(ϕ) (black) and ρ(ϕ²) (MGS, dash-dotted blue) as a function of depth (one-way time) for a model of two reflectors with reflectivity of 0.4 located at two-way times of t1 = 0.4 s and t2 = 0.48 s. Additionally we plot a grey horizontal dashed line at 1 indicating the boundary of absolute convergence of the algorithm, as well as a thick green line given by the ratio $$\rho \left( \sigma _z P\mathcal {B}\sigma _z \right)/\rho \left( \phi ^2 \right)$$. The minimum focusing depth corresponds to the depth (in one-way time) below the deeper reflector.

We can see that the BD matrix scheme (13) stops converging (spectral radius exceeds unity) past the point where the combined second-order surface multiple due to r1 and r2, appended with an additional internal reverberation between r1 and r2, enters the 0 < t < 2td window (see the vertical dashed line at a focusing depth of 0.44 s in Fig. 2). For comparison, the MGS scheme crosses the ρ = 1 line immediately after the 0 < t < 2td window contains the fourth-order surface multiple which scattered three times off r2 and once off r1 (depth of 0.92 s). It is noteworthy that the spectral radius is a piece-wise constant function (see Fig. 2): there are small increments every 0.04 s, which correspond to the one-way time interval between the reflectors (an extra interbed present inside the −td < t < td window), and larger increments every 0.2 and 0.24 s, corresponding to the one-way traveltimes between the surface and the reflectors, hallmarking an extra surface multiple being present inside the −td < t < td window (some main events are indicated with a dashed vertical line). Clearly the two algorithms are limited in their ability to deliver solutions by means of a convergent Neumann series expansion, and the MGS (blue) algorithm converges for deeper focusing depths. Interestingly enough, the ratio $$\rho \left( \sigma _z P\mathcal {B}\sigma _z \right) /\rho \left( \phi ^2 \right)\approx 2$$ is also a function of depth and converges asymptotically to some value (about 1.85 here), indicating that the relationship between the spectral radii of the two methods is substantially more complicated than given by a simple ‘empirical’ expression. Before closing this subsection we would like to address two points. For one, the step-like behaviour of the spectral radius ρ(td) function will become increasingly continuous as the reflection coefficients ri become weaker and their number grows to infinity.
Second, we would like to emphasize that it might take many iterations to see divergence occur, especially for ρ = 1 + ε with 0 < ε ≪ 1, since in such cases only a very small number of eigenvalues (out of 2tmax/dt, where tmax is the trace length) might be greater than one. In other words, the boundary ρ = 1 between the convergent and divergent regimes is very sharp, but seeing it manifest itself might take many iterations; that is, an apparently non-divergent result does not guarantee an accurate solution.

6.5 Convergence and divergence in 2-D

The analysis presented so far only covers one-dimensional Marchenko inversion. One could ask whether the divergence described above will also occur in the presence of geometrical spreading, which suppresses the perceived amplitudes of the data used. This subsection is dedicated to showing that divergence is as much of an issue in 2-D or 3-D as in 1-D. Therefore we extend the 1-D model from Section 6.4 to 2-D and perform analogous analyses. In the previous subsection we noticed two things. First, for any given reflector geometry, the BD (13) or MGS (23) schemes, if divergent for asymptotic focusing depths, might converge for shallower depths. Second, while in 1-D we can afford to iterate the scheme for hundreds of iterations, such an experiment is not affordable in 2-D. For this reason, in order to show that 2-D suffers to the same extent as 1-D, we choose a 2-D version of a previously studied 1-D example where (1) the focusing depths are 100 m below the depth at which the 1-D scheme diverged, and (2) the scheme diverged after a handful of iterations, and we show that the scheme will diverge past a certain depth for both the 1-D and the 2-D data sets based on the same reflector geometry. The model contains a free surface ζ = −1, is laterally invariant and free of refractions. The associated density contrasts might be unrealistic for geophysical applications, but we chose the parameters simply to illustrate the phenomenon, which becomes much less transparent if a (geologically realistic) large number of weaker reflectors is considered, where the schemes may diverge in the strong internal scattering regime, as evidenced in the previous section. We use the two-reflector model from Section 6.4, with reflection coefficients r1 = r2 = 0.4, and two-way traveltimes t1 = 0.40 s and t2 = 0.48 s, which for a background velocity of 2000 ms−1 corresponds to two reflectors at 200 and 240 m depth (apart from the free surface, ζ = −1). We analytically generate a synthetic fixed-spread (broadband) data set, with a maximum offset of 4000 m, with 401 sources and as many co-located receivers spaced 10 m apart. In order to avoid aliasing noise we apply a (60 Hz) high-cut filter to the data, and we dress the inverse first arrival (primary focusing) field $$T_d^{\text{inv}}$$ with a 30 Hz Ricker wavelet. To accommodate the finite wavelet width, the projector is shifted by ε = 32 ms. Since the coda is expected to have contributions at −td + 80 ms, this additional ε shift will leave it unaffected, hence short-period interbed artefacts will not be introduced in the focusing fields (Slob et al. 2014; van der Neut et al. 2015). A simple calculation from Section 6.4, supported by a 1-D test, suggests that the BD and MGS schemes should no longer converge for depths exceeding 800 and 1800 m, respectively (see Fig. 2). We present results for the more robust MGS scheme.
We have picked three focusing depths: 1800, 2200 and 3000 m, corresponding to one-way zero-offset first-arrival times of 0.9, 1.1 and 1.5 s. The results are shown in Fig. 3. The MGS scheme converged for the focusing depth of 1800 m in 2-D, and diverged for depths of 2200 and 3000 m in exactly the same way as its 1-D counterpart. For example, at 3000 m depth the 1-D algorithm has an MGS spectral radius $$\rho (\phi ^2)$$ of about 1.4, which should show significant signs of divergence after about 20 iterations (as $$1.4^{20}\approx 10^{3}$$). Figure 3. Shot panels of the focusing function $$f_1^-$$, calculated using the MGS Neumann series for synthetic data generated using a model with a free surface (ζ = −1) at x0,3 = 0 m, and two reflectors with reflectivity of r1 = r2 = 0.4 at 200 and 240 m depth. The focal point is given by $$\vec{x}_i= \left( x_{i,1},x_{i,3} \right)$$, where the lateral focusing coordinate is xi,1 = 2000 m, and the three focusing depths are xi,3 = 1800 m (top row), 2200 m (middle row) and 3000 m (bottom row). The result is shown after 5 (first column), 10 (second column), 20 (third column) and 30 (fourth column) iterations of the MGS Neumann scheme, eq. (23). The plot below the shot panels depicts the behaviour of the ℓ2-norm (on a log10 scale for visualization purposes) of the shot panels above as a function of the number of iterations. In the results presented in Fig. 3, it is important to notice that the norm of the solution shows non-monotonic growth for the divergent (xi,3 = 3000 m, blue dot-dashed line) and marginally divergent (xi,3 = 2200 m, red dashed line) cases, whereas it stays almost constant for the convergent case (xi,3 = 1800 m, black continuous line). Note that in 2-D divergence occurred past the same depths as it did in 1-D. Similar experiments can be repeated for other reflector geometries, and qualitatively the divergence boundary should not be affected by the dimensionality of the overburden model used. This is because one can always map a constant-velocity-background, (horizontally) translationally invariant 2-D or 3-D model onto a 1-D one, perform the analysis in one dimension and map the result back onto two or three dimensions; that is, there is an isomorphism between convolutions in 1-D and convolutions plus stacking (with no aperture limitations) in 2-D or 3-D. 6.6 Convergence in the presence of dissipation We illustrate the failure of the Neumann series to converge in the presence of dissipation with a simple example, which serves as a proof that the iterative substitution algorithm does not converge in general.
Consider a three-layer (two-reflector) model, with the dissipation upon passing through the ith layer and back given by $$e^{-\lambda _i}<1$$, in which case the reflection response is given by \begin{eqnarray} R_0(\omega ) &=& \frac{z_1 e^{-\lambda _1}+z_2e^{-\lambda _1-\lambda _2}}{1+z_2z_1^* e^{-\lambda _2}} \nonumber \\ \bar{R}_0(\omega ) &=& \frac{z_1 e^{\lambda _1}+z_2e^{\lambda _1+\lambda _2}}{1+z_2z_1^* e^{\lambda _2}} . \end{eqnarray} (55) The operator $$RT\bar{R}T$$ expressed in the frequency domain then takes the form \begin{eqnarray} RT\bar{R}T= R_0(\omega )\bar{R}_0 \left( -\omega \right) . \end{eqnarray} (56) It is straightforward to verify that by substituting large values for |ri| < 1 and λi, representing strong scattering and strong dissipation, the spectral radius $$\left| \mu _{\rm max} \right|= \left| R_0(\omega )\bar{R}_0 \left( -\omega \right) \right|$$ will exceed unity for some frequency ω, and as a result the Neumann series solution suggested in Slob (2016) will diverge and fail to yield a solution. This is not due to numerical precision issues when calculating $$\bar{R}$$, which may have very large amplitude, but rather it is a fundamental failure of the Neumann series, which no longer accurately represents the inverse of the Marchenko equation operator. For the trivial case of a single reflector, where $$R_0(\omega )\bar{R}_0 \left( -\omega \right)=r_1^2$$, the series should, in theory, always converge.
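As a concrete check of this statement, the frequencies at which $$\left| R_0(\omega )\bar{R}_0 \left( -\omega \right) \right|$$ exceeds unity can be scanned numerically from eqs (55) and (56). The sketch below assumes the phase convention $$z_i = r_i\,e^{\mathrm{i}\omega t_i}$$, with ti the two-way traveltime to the ith reflector; all input numbers are purely illustrative.

import numpy as np

def dissipative_spectral_radius(r1, r2, lam1, lam2, t1, t2, wmax=500.0, nw=5001):
    # eq. (55) for the two-reflector dissipative model, assuming z_i = r_i * exp(1j*w*t_i)
    w = np.linspace(-wmax, wmax, nw)          # symmetric axis, so w[::-1] = -w
    z1 = r1 * np.exp(1j * w * t1)
    z2 = r2 * np.exp(1j * w * t2)
    R0   = (z1 * np.exp(-lam1) + z2 * np.exp(-lam1 - lam2)) / (1.0 + z2 * np.conj(z1) * np.exp(-lam2))
    Rbar = (z1 * np.exp( lam1) + z2 * np.exp( lam1 + lam2)) / (1.0 + z2 * np.conj(z1) * np.exp( lam2))
    # eq. (56): the Neumann series diverges if |R0(w) * Rbar0(-w)| exceeds 1 for some frequency w
    return np.max(np.abs(R0 * Rbar[::-1]))

# scanning (r_i, lambda_i) with this function maps out where the series stops converging, e.g.
# dissipative_spectral_radius(0.8, 0.8, 1.5, 1.5, 0.40, 0.48)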
7 NON-NEUMANN TYPE SOLVERS The Marchenko equation can be thought of as an inversion problem, and a series expansion scheme, such as BD (13) or MGS (23), is just one of the ways in which (under restricted conditions) it can be solved. We present two alternative methods which can be applied to solving the Marchenko equation: LSQR (Paige & Saunders 1982a,b), outlined below, and a Merchant and Parks (Merchant & Parks 1982) take on a Levinson-type algorithm (Levinson 1947), which we outline in Appendix D. A least-squares solver such as LSQR will, for a consistent set of equations, converge to a solution with a monotonically decreasing residual (Paige & Saunders 1982a); thus, in the presence of surface-related multiples, LSQR, in contrast to the Neumann-expansion MGS or BD schemes, will always converge. In Fig. 4 we show an example of LSQR applied to a 1-D problem. Figure 4. The true redatumed response (top horizontal panel) in the model of 35 reflectors (right vertical panel) with a focusing time of td = 1 s. The remaining four horizontal panels show the standard error in the redatumed response due to unconverged Marchenko inversion for: the MGS Neumann series without free surface (1st, green), LSQR without free surface (2nd, red), LSQR with a free surface (3rd, blue) and the ‘Cross’-LSQR (bottom, blue). Inside individual panels, the errors after $$10^{2}$$, $$10^{3}$$, $$5\times 10^{3}$$ and $$10^{4}$$ iterations are shown from top to bottom, with the lower vertical scales identical to that of the top trace (with shifted origin). The vertical scale factor of $$10^{-6}$$ only applies to the second panel (MGS), which illustrates the superior convergence of this method. In this subsection we aim to analyse the effectiveness of the LSQR solver in tackling Marchenko inversion with and without surface-related multiples, and to compare it to the rate of convergence of the Neumann series MGS scheme (23) without a free surface. The BD and MGS schemes can be regarded as two different LSQR pre-conditioning schemes; we tested both and observed no significant difference in convergence. We continue working in the same representation as before, that is, the vector-matrix representation from Section 4. This is for several reasons. First, we have found that band-limited data and a $$T_d^{\text{inv}}$$ dressed with a wavelet result in slower convergence. Second, for sufficiently complicated overburdens we found that a large number of iterations is needed to appreciably suppress artefacts, which makes a 1-D implementation less computationally costly (while exhibiting the same phenomenon, which only becomes worse in 2-D or 3-D). This is particularly important if our numerical experiments are to be repeated for various strong internal scattering configurations. Additionally, working with pure Kronecker delta spikes allows us to avoid short-period interbed multiple problems (a known limitation of Marchenko inversion; Slob et al. 2014). Lastly, the Green’s function deconvolution step in 1-D leaves no residual (as opposed to the MDD in 2-D), which allows us to study in isolation the effect of imperfect convergence on the obtained redatumed response. In general, the approximate least-squares solution to a problem Ax = b obtained after n iterations, xn, will be off by an error εn, such that \begin{eqnarray} x_n=x-A^{-1}r\equiv x - \epsilon _n . \end{eqnarray} (57) In the case of inversion of eq. (5), where $$A^{-1}$$ exists, this means that \begin{eqnarray} \vec{f}_{1,n}=\vec{f}_1+ \vec{\epsilon }_n, \end{eqnarray} (58) where εn denotes the extra contribution to the focusing functions, which can take nonzero values for all −∞ < t < ∞, as opposed to −td < t < td for $$\vec{f}_1$$ and td < t for $$\vec{G}_{\zeta }$$. As a result, $$\vec{G}_{n,\zeta }$$ can be expressed in terms of the true solutions by \begin{eqnarray} \vec{G}_{n,\zeta }&=& \left( P \mathcal {A}_{\zeta } + \left( 1-P \right)\mathcal {A}_{\zeta } \right)\vec{f}_{1,n}\nonumber \\ &=& \left( P \mathcal {A}_{\zeta } + \left( 1-P \right)\mathcal {A}_{\zeta } \right) (\vec{f}_1+ \vec{\epsilon }_n)\nonumber \\ &=& P \mathcal {A}_{\zeta }\vec{\epsilon }_n + \left( 1-P \right)\mathcal {A}_{\zeta }\vec{\epsilon }_n+ \left( 1-P \right)\mathcal {A}_{\zeta }\vec{f}_1\nonumber \\ &\equiv & \vec{\epsilon }_{n,P}+ \vec{\epsilon }_{n,1-P}+\vec{G}_{\zeta } \end{eqnarray} (59) where $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ denote the erroneous contributions to $$\vec{G}_{n,\zeta }$$ confined to the time intervals −td < t < td and td < t, respectively. The former error can be removed by applying (1 − P) to the solution, but the latter will lead to extra contributions which might affect the quality of the redatumed response. Since finding the redatumed response requires a deconvolution step, analytical evaluation of the impact of $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ on the redatumed response Rred is not straightforward. We show $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ in Fig. 6.
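Before quantifying these errors, it may help to fix ideas with a minimal sketch of what applying LSQR to the 1-D Marchenko system amounts to in the matrix representation used here. The system assembled below corresponds to the one appearing in eq. (61) further on; the delta-spike $$T_d^{\text{inv}}$$ mirrors the simplifications listed above, and the discretization choices are assumptions made for illustration only.

import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

def marchenko_lsqr(trace, dt, td, zeta=-1.0, iters=1000):
    # trace: sampled reflection response R_zeta; the truncated window itself plays the role of P
    n = int(round(2.0 * td / dt))
    r = np.zeros(n)
    r[:min(n, len(trace))] = trace[:n]
    R = np.tril(toeplitz(r))                      # causal convolution with R_zeta
    J = np.fliplr(np.eye(n))                      # time reversal on the window
    I = np.eye(n)
    A = np.block([[I + zeta * R, -R @ J],
                  [-R @ J, I + zeta * R]])        # operator of eq. (61)
    Td_inv = np.zeros(n)
    Td_inv[0] = 1.0                               # on-sample delta spike for the primary focusing field
    b = np.concatenate([R @ Td_inv, np.zeros(n)])
    sol = lsqr(A, b, iter_lim=iters)[0]
    return sol[:n], sol[n:]                       # approximations of f_1^- and M_+^*

For 2-D or 3-D data the dense matrix A would be replaced by an operator that applies the convolutions, source/receiver summations and windowing on the fly (e.g. a scipy LinearOperator), but the structure of the solve is unchanged.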
In order to assess the impact of unconverged Marchenko inversion we make use of the following scheme. For any reflector geometry (with reflectors placed on-sample) we analytically calculate a reflection response due to a 1-D delta source. Next, we find the exact (true) solution to the Marchenko equation by forming a matrix representation of (1 + PRζ(ζ ± T)) for the given seismic trace and directly inverting it in Matlab (which performs an LU decomposition). Alternatively, we can use a model-driven, wave-equation-based method like the one given in Resnick et al. (1986) to find the focusing fields. Either way we obtain the true focusing and Green’s functions, as well as the true redatumed response (the direct inversion and the model-based scheme yielded solutions identical up to numerical noise). To quantify convergence we define the normalized residual norm (NRN) \begin{eqnarray} \text{NRN}_n=\Vert r_n\Vert _2/\Vert PR_{\zeta }T_d^{\text{inv}}\Vert _2 \end{eqnarray} (60) where the normalization is required since the right-hand side of the system differs between the free-surface and non-free-surface schemes, due to the presence of surface-related multiples. In the expression above we used \begin{eqnarray} r_n=\left(\begin{array}{c@{\quad}c}1+{\zeta } R & -RT\\ -RT & 1+{\zeta } R\end{array}\right)\left(\begin{array}{c}f_{1,n}^-\\ M_{+,n}^*\end{array}\right)-\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right). \end{eqnarray} (61) We also define the (standard) error and the normalized standard error norm (NSEN) \begin{eqnarray} {\rm error}_{i,n} \left( h \right)&=&h_{i,n}-h_{\rm true}, \nonumber \\ {\rm NSEN}_{i,n} \left( h \right)&=&\frac{\Vert {\rm error}_{i,n} \left( h \right)\Vert _2}{\Vert h_{\rm true}\Vert _2}. \end{eqnarray} (62) Here i = 1, 2, 3 refers to the scheme being used: the Neumann series with ζ = 0 (green), LSQR with ζ = 0 (red) and LSQR with ζ = −1 (blue), respectively, where the colour in parentheses is used throughout Figs 4–6. Moreover, n stands for the number of iterations applied, and h is the object being evaluated (e.g. a focusing function or the redatumed response). Note that the residual norm and the NSEN are not the same. LSQR inversion attempts to minimize a residual; it does not guarantee that it will obtain a solution that is closer to the true one, and hence a lower NRNn does not necessarily imply a lower NSENn(h), nor that both will decrease at the same rate. Multiple tests based on random configurations of reflector locations and strengths have shown that NRNn is well described by two piecewise-defined power-law decays (more so in the case of the LSQR solvers, methods (2) and (3), and less so for the MGS Neumann expansion, method (1)) \begin{eqnarray} \log {\rm NRN}_n &=&\Theta \left( n_c-n \right) k_1 \log \left( n_c-n \right)\nonumber \\ &&+\,\Theta \left( n-n_c \right) k_2 \log \left( n-n_c \right) , \end{eqnarray} (63) where k1 and k2 are the power-decay rates, with k2 < k1 < 0.
Here nc stands for a critical number of iterations, that is, one at which the decay rate changes to an accelerated one. Note that nc is not a well-defined number but rather an approximate region, and that depending on the complexity of the overburden nc can range from $$10^{1}$$ to $$10^{3}$$. This behaviour seems to suggest that for n < nc the algorithm is in a sluggish convergence regime, characterized by an obtained focusing function that differs substantially from the correct answer yet still has an appreciably low residual (see Fig. 5). Every iteration at this stage of the algorithm appears to attempt to put more and more events into $$f_1^{\pm }$$ where they do not seem to belong. This is the reason why adaptive subtraction schemes, which assume kinematically correct events, no longer work here. Only once the algorithm ‘locks in’ on the almost correct configuration can it accelerate its convergence. It is important to note that for simple geometries (i.e. in a weak internal scattering regime) nc can be reached very early, and hence the piecewise power-law decay might not be observed; in that case it is very likely that internal multiples are not the first-order imaging problem. Figure 5. Convergence of schemes (1), (2) and (3): left, middle and right panel, respectively. The continuous line shows the residual norm, the dashed line the normalized standard error norm (NSEN) of the focusing function $$f_1^+$$, and the dotted line the NSEN of the redatumed response, all as a function of the number of iterations. Both the residuals and the iteration number are shown on a log10 scale. On the right panel, the normalized error residual due to the ‘Cross’-LSQR scheme is additionally shown (thin continuous line). We illustrate these considerations with a numerical experiment using a 35-reflector geometry with the focusing depth below the 30th reflector (hence forming a 30-reflector overburden and a 5-reflector target zone) with td = 1 s; see the vertical panel in Fig. 4. We analytically compute synthetic traces due to a delta source, without and with surface-related multiples, which are the inputs to methods (1) MGS Neumann series with ζ = 0, (2) LSQR with ζ = 0 and (3) LSQR with ζ = −1, respectively. We let each of these schemes run for over 10 000 iterations, and at 1, 5 and every multiple of 10 iterations we record the solutions $$\vec{f}_{1,n,i}$$, $$\vec{G}_{1,n,i}$$ and Rred,n,i due to methods i = 1, 2 and 3, the normalized residual NRNn, $${\rm error}_{i,n} \left( f_1^+ \right)$$ and errori,n(Rred), and their corresponding NSENs. We can see in Fig. 5 that the MGS Neumann series method (green) shows dramatically fast convergence, with a residual (continuous line) of $$10^{-5}$$ reached in approximately 100 iterations, a feat for which the LSQR scheme with the same data input needs about an order of magnitude more iterations, and the LSQR scheme with surface-related multiples a factor of 5 more still.
Moreover, we see a manifestation of the double power-law decay (63) in all three methods, most clearly for methods (2) and (3). In addition, this experiment shows that $${\rm NSEN} \left( f_1^+ \right)$$ (dashed line) as well as NSEN(Rred) (dotted line) decay at a much lower rate with the number of iterations than the residual does; this holds for all methods but is most pronounced for the LSQR solvers. We also observe that at some point the NSEN(Rred) calculated using the focusing functions from the LSQR ζ = −1 scheme plateaus at the considerably high value of $$10^{-1.5}$$, suggesting that surface-related multiples present in the Green’s functions, in combination with the poor convergence of the LSQR ζ = −1 scheme, hinder the deconvolution step. In order to investigate this last point further we perform one more experiment, which we call ‘Cross’-LSQR. We convolve $$\vec{f}_{1,3,n}$$ with Rζ = 0 to get $$\vec{G}_{0,n}\ne \vec{G}_{{\zeta =-1} ,3,n}$$, that is, a Green’s function without free-surface effects but containing the residual error of the LSQR ζ = −1 method, which did include the free surface. In practice we would not be able to do this, since we lack the reflectivity model, unless we performed a perfect surface-multiple removal. We observe that when surface-multiple-free data are used to produce the Green’s functions, the convergence of the redatumed response no longer plateaus (thin blue line in the third panel of Fig. 5), suggesting that the presence of surface-related multiples in the Green’s functions might indeed pose a problem. The (rather significant) errors in the redatumed response due to an insufficient number of iterations are presented in Fig. 4. Note that for the purpose of this example we placed several strong reflectors in the target zone; target reflectors with reflection coefficients of the order of 0.1 can easily be overwhelmed by the residual after 100, or even (prohibitively expensive) 1000, LSQR iterations. Figure 6. Top panel: true downward propagating function G+. The lower four panels show the error in the estimated G+ for methods (1)-(3) and the ‘Cross’-LSQR method, respectively. Inside every panel the two traces shown correspond to errors after $$10^{2}$$ (top) and $$10^{3}$$ (bottom) iterations, with the lower vertical scale identical to that of the top trace in that panel (with shifted origin). The $$\epsilon ^+_{n,P}$$ contribution can be seen at times t < td, while that of $$\epsilon ^+_{n,1-P}$$ at times td ≤ t. The third panel shows the difference between $$G_{{\zeta =-1} ,n,3}^{+}$$ and the true $$G_{{\zeta =-1} }^{+}$$ (not shown in this figure). The vertical scale factor of $$10^{-6}$$ only applies to the second panel (MGS), which illustrates the superior convergence of this method.
8 CONCLUSIONS In this work we have studied the effect of a free surface on Marchenko inversion and redatuming, where the Marchenko equation as given by Wapenaar et al. (2014) is appended with extra terms given by Singh et al. (2015). We have shown that in the strong internal scattering regime, in the presence of surface-related multiples, where Marchenko redatuming could make a significant impact, the proposed iterative Neumann series solvers do not always converge. We have shown that there is a class of pre-conditioners which modify the convergence behaviour of the Marchenko inversion and extend the range of application of series solvers in the presence of surface-related multiples. Additionally, we have shown analytically, as well as numerically, the parameter range where the surface-multiple schemes converge, and we have shown not only that Marchenko inversion yields a unique solution, but also that the internal-multiple-only implementation of the scheme always has a convergent Neumann series. As a solution to the problem of diverging Neumann series in the presence of a free surface, we have proposed two alternative solvers: one which is exact and only applicable in 1-D, and another which is asymptotically exact and applicable in 1-D, 2-D and 3-D. For the latter, we have observed that there is a risk of obtaining an approximate solution which, while minimizing the residual, can have substantial artefacts in the reconstructed Green’s functions and consequently in the redatumed response. Lastly, we note that the presence of noise may also cause the series expansion to diverge, even in the absence of surface-related multiples, a topic requiring further investigation. Acknowledgements The authors wish to thank Fons ten Kroode, Chris Willacy and Zofia Czyczula-Rudjord for fruitful discussions and for reviewing the manuscript, and the reviewers for their constructive comments. REFERENCES Broggini F., Wapenaar K., van der Neut J., Snieder R., 2014. Data-driven Green’s function retrieval and application to imaging with multidimensional deconvolution, J. geophys. Res., 119, 425–441. da Costa Filho C.A., Ravasi M., Curtis A., Meles G.A., 2014. Elastodynamic Green’s function retrieval through single-sided Marchenko inverse scattering, Phys. Rev. E, 90, 063201-1–063201-9. da Costa Filho C.A., Ravasi M., Curtis A., 2015. Elastic P- and S-wave autofocus imaging with primaries and internal multiples, Geophysics, 80, S187–S202. da Costa Filho C.A., Meles G.A., Curtis A., 2017. Elastic internal multiple analysis and attenuation using Marchenko and interferometric methods, Geophysics, 82, Q1–Q12. Fredholm E.I., 1903. Sur une classe d’equations fonctionnelles, Acta Math., 27, 365–390. Gauss C.F., 1903. Werke, Königlichen Gesellschaft der Wissenschaften zu Göttingen, 9. Grenander U., Szegö G., 1958. Toeplitz Forms and Their Applications, AMS Chelsea Publishing Series, University of California Press. Hazewinkel M., 2001. Encyclopedia of Mathematics, Springer. Lax P.B., 2002. Functional Analysis, Wiley-Interscience. Levinson N., 1947. The Wiener RMS error criterion in filter design and prediction, J. Math. Phys., 25, 261–278.
Mathews J., Walker R.L., 1970. Mathematical Methods of Physics, 2nd ed., W. A. Benjamin. Meles G.A., Löer K., Ravasi M., Curtis A., da Costa Filho C.A., 2015. Internal multiple prediction and removal using Marchenko autofocusing and seismic interferometry, Geophysics, 80, A7–A11. Meles G.A., Wapenaar K., Curtis A., 2016. Reconstructing the primary reflections in seismic data by Marchenko redatuming and convolutional interferometry, Geophysics, 81, Q15–Q26. Merchant G., Parks T.W., 1982. Efficient solution of a Toeplitz-plus-Hankel coefficient matrix system of equations, IEEE Trans. Acoust. Speech Signal Process., 30, 40–44. Paige C.C., Saunders M.A., 1982a. LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8(1), 43–71. Paige C.C., Saunders M.A., 1982b. LSQR: sparse linear equations and least-squares problems, ACM Trans. Math. Softw., 8(2), 195–209. Ravasi M., Vasconcelos I., Kritski A., Curtis A., da Costa Filho C.A., Meles G.A., 2016. Target-oriented Marchenko imaging of a North Sea field, Geophys. J. Int., 205(1), 99–104. Resnick J.R., Lerche I., Shuey R.T., 1986. Reflection, transmission, and the generalized primary wave, Geophys. J. Int., 87, 349–377. Rose J.H., 2002a. “Single-sided” autofocusing of sound in layered materials, Inverse Probl., 18, 1923–1934. Rose J.H., 2002b. Time reversal, focusing and exact inverse scattering: imaging of complex media with acoustic and seismic waves, in Topics in Applied Physics, 84, pp. 97–106, eds Fink M., Kuperman W.A., Montagner J.P., Tourin A., Springer. Singh S., Snieder R., Behura J., van der Neut J., Wapenaar K., Slob E., 2015. Marchenko imaging: imaging with primaries, internal multiples, and free-surface multiples, Geophysics, 80, S165–S174. Slob E., 2016. Green’s function retrieval and Marchenko imaging in a dissipative acoustic medium, Phys. Rev. Lett., 116, 164301-1–164301-6. Slob E., Wapenaar K., Broggini F., Snieder R., 2014. Seismic reflector imaging using internal multiples with Marchenko-type equations, Geophysics, 79, S63–S76. van der Neut J., Wapenaar K., 2016. Adaptive overburden elimination with the multidimensional Marchenko equation, Geophysics, 81, T265–T284. van der Neut J., Wapenaar K., Thorbecke J., Slob E., Vasconcelos I., 2015. An illustration of adaptive Marchenko imaging, Leading Edge, 34(7), 818–822. van der Neut J., Ravasi M., Liu Y., Vasconcelos I., 2017. Target-enclosed seismic imaging, Geophysics, 82, Q53–Q66. Wapenaar K., 2014. Single-sided Marchenko focusing of compressional and shear waves, Phys. Rev. E, 90(6), 063202, doi:10.1103/PhysRevE.90.063202. Wapenaar K., Slob E., 2014. On the Marchenko equation for multicomponent single-sided reflection data, Geophys. J. Int., 199, 1367–1371.
Wapenaar K., Broggini F., Slob E., Snieder R., 2013. Three-dimensional single-sided Marchenko inverse scattering, data-driven focusing, Green’s function retrieval, and their mutual relations, Phys. Rev. Lett., 110, 084301, doi:10.1103/PhysRevLett.110.084301. Wapenaar K., Thorbecke J., van der Neut J., Broggini F., Slob E., Snieder R., 2014. Marchenko imaging, Geophysics, 79, WA39–WA57. Young D.M. Jr, 1971. Iterative Solution of Large Linear Systems, Academic Press. APPENDIX A: OBTAINING PRECONDITIONED EQUATIONS In this section we present the technical details of the GS and MGS parametrizations of the surface-multiple Marchenko equation. A1 GS case For the GS decomposition the inverse of L reads \begin{eqnarray} L^{-1}&=&\left(\begin{array}{c@{\quad}c}\mathcal {Y} & 0\\ \mathcal {Y}PR_{\zeta }T\mathcal {Y} & \mathcal {Y} \end{array}\right) \end{eqnarray} (A1) where we defined $$\mathcal {Y}= \left( 1+{\zeta } PR_{\zeta } \right)^{-1}$$, which exists for all physical Rζ. We also get \begin{eqnarray} L^{-1}U &=&\left(\begin{array}{c@{\quad}c}0 & \mathcal {Y}PR_{\zeta }T\\ 0 & \left( \mathcal {Y}PR_{\zeta }T \right)^2 \end{array}\right) \end{eqnarray} (A2) and \begin{eqnarray} L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right) &=&\left(\begin{array}{c}\mathcal {Y}PR_{\zeta }T_d^{\text{inv}}\\ \mathcal {Y}PR_{\zeta }T\mathcal {Y}PR_{\zeta }T_d^{\text{inv}}\end{array}\right). \end{eqnarray} (A3) From eq. (8), the solution can be parametrized as \begin{eqnarray} \left(\begin{array}{c@{\quad}c}1 & -PR_{0}T\\ 0 & 1- \left( PR_{0}T \right)^2 \end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{0}T_d^{\text{inv}}\\ PR_{0}TPR_{0}T_d^{\text{inv}}\end{array}\right). \end{eqnarray} (A4) A2 MGS case For the MGS decomposition the inverse of L reads \begin{eqnarray} L ^{-1}=\left(\begin{array}{c@{\quad}c}1 & 0\\ PR_{\zeta }T & 1\end{array}\right) \end{eqnarray} (A5) leading to \begin{eqnarray} L^{-1}U & = & \left(\begin{array}{c@{\quad}c}-{\zeta } PR_{\zeta } & PR_{\zeta }T\\ -{\zeta } PR_{\zeta }TPR_{\zeta } & \left( PR_{\zeta }T \right)^2-{\zeta } PR_{\zeta }\end{array}\right) \end{eqnarray} (A6) and \begin{eqnarray} L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right) & = & \left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ PR_{\zeta }TPR_{\zeta }T_d^{\text{inv}}\end{array}\right) . \end{eqnarray} (A7) The matrix $$L^{-1}U$$ is actually the square of another matrix ϕ ≡ Ωσx, \begin{eqnarray} \phi &=& \left(\begin{array}{c@{\quad}c}0 & 1\\ -{\zeta } PR_{\zeta } & PR_{\zeta }T\end{array}\right)\nonumber \\ &\equiv &\underbrace{\left(\begin{array}{c@{\quad}c}1 & 0\\ PR_{\zeta }T & -{\zeta } PR_{\zeta }\end{array}\right)}_{\Omega }\underbrace{\left(\begin{array}{c@{\quad}c}0 & 1\\ 1 & 0\end{array}\right)}_{\sigma _x} , \end{eqnarray} (A8) such that $$L^{-1}U$$ can be rewritten as \begin{eqnarray} L^{-1}U=\phi ^2&=& \Omega \sigma _x \Omega \sigma _x= \Omega \times \left( \sigma _x \Omega \sigma _x \right)\nonumber \\ &=&\underbrace{\left(\begin{array}{c@{\quad}c}1 & 0\\ PR_{\zeta }T & -{\zeta } P R_{\zeta }\end{array}\right)}_{\Omega }\times \underbrace{\left(\begin{array}{c@{\quad}c}-{\zeta } P R_{\zeta } & PR_{\zeta }T\\ 0 & 1\end{array}\right)}_{\sigma _x \Omega \sigma _x} .
\nonumber\\ \end{eqnarray} (A9) APPENDIX B: CHARACTERISTIC POLYNOMIALS In this section we give the details of the characteristic polynomials from Section 5. B1 BD formulation The characteristic polynomial of the BD matrix $$\mathcal {M}_{\zeta } = {\rm diag} \left( R_{\zeta } \left( \zeta -T \right),R_{\zeta } \left( \zeta +T \right) \right)$$ is given by \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ \left( R_{\zeta } \left( \zeta -T \right)-\mu \right) \left( R_{\zeta } \left( \zeta +T \right)-\mu \right) \right]\nonumber \\ &=&\det \left[ R_{\zeta } \left( \zeta -T \right)-\mu \right]\det \left[ R_{\zeta } \left( \zeta +T \right)-\mu \right], \end{eqnarray} (B1) where μ is an eigenvalue of $$\mathcal {M}_{\zeta }$$ if it satisfies χζ(μ) = 0. Using the fact that the spectrum of eigenvalues is invariant under transposition we can transpose the second determinant, and also using $$\det A \det B=\det AB$$ we get \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ R_{\zeta } \left( \zeta -T \right)-\mu \right]\det \left[ \left( \zeta +T \right)R_{\zeta }^T-\mu \right]\nonumber \\ &=&\det \left[ R_{\zeta } \left( \zeta ^2-T^2 \right)R_{\zeta }^T -\zeta \mu \left( R_{\zeta }+TR_{\zeta }T \right)+\mu ^2 \right] , \end{eqnarray} (B2) where in the last step we have used that $$R_{\zeta }^T=TR_{\zeta }T$$. B2 MGS formulation We proceed similarly to the derivation above to find the characteristic polynomial of the matrix $$\phi ^2$$: \begin{eqnarray} {\rm det} \left[ \phi ^2-\mu ^2 \right]&=&{\rm det} \left[ \phi -\mu \right]{\rm det} \left[ \phi +\mu \right]\nonumber \\ &=&{\rm det} \left[ \phi -\mu \right]{\rm det} \left[ \phi ^T+\mu \right]. \end{eqnarray} (B3) We then use the identity \begin{eqnarray} {\rm det}\left(\begin{array}{c@{\quad}c}A & B\\ C & D\end{array}\right) &=& {\rm det}\, {A}\, {\rm det} \left[ D-CA^{-1}B \right] , \end{eqnarray} (B4) which holds provided that detA ≠ 0, and using the approximation $$PR \vec{x}\approx R\vec{x}$$ (for any $$\vec{x}$$), we get \begin{eqnarray} {\rm det} \left[ \phi -\mu \right]&=& {\rm det}\left(\begin{array}{c@{\quad}c}-\mu & 1\\ -{\zeta } R_{\zeta } & R_{\zeta }T-\mu \end{array}\right)\nonumber \\ &=& {\rm det} \left[ -\mu \right]{\rm det} \left[ \left( R_{\zeta }T-\mu \right)-{\zeta } R_{\zeta }\mu ^{-1} \right]\nonumber \\ &=&{\rm det} \left[ \mu ^2 -\mu R_{\zeta }T+{\zeta } R_{\zeta } \right]. \end{eqnarray} (B5) Similarly we obtain \begin{eqnarray} {\rm det} \left[ \phi ^T+\mu \right]&=& {\rm det} \left[ \mu ^2 + \mu R_{\zeta }T+{\zeta } R_{\zeta }^T \right], \end{eqnarray} (B6) where the relation $$\left( R_{\zeta }T \right)^T=R_{\zeta }T$$ was used in the process. Putting both results together yields \begin{eqnarray} {\rm det} \left[ \phi ^2-\mu ^2 \right] & = & {\rm det} \left[ \mu ^4-\mu ^2 \left( \zeta \left( R_{\zeta }+R_{\zeta }^T \right)+R_{\zeta }R_{\zeta }^T \right)\right. \nonumber\\ &&\left.+\,\zeta ^2R_{\zeta }R_{\zeta }^T \right]. \end{eqnarray} (B7) B3 Dissipative formulation Marchenko redatuming in the presence of dissipation was formulated in Slob (2016), where the presented autofocusing formulation amounts to inverting (using the large-td, i.e. P = 1, approximation) \begin{eqnarray} \left(\begin{array}{c@{\quad}c}1 & -RT\\ -\bar{R}T & 1\end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) .
\end{eqnarray} (B8) Therefore, \begin{eqnarray} \left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)&=&\left(\begin{array}{c@{\quad}c}1 & -RT\\ -\bar{R}T & 1\end{array}\right)^{-1}\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right)\nonumber \\ &\stackrel{?}{=}&\sum \limits _{k=0}^{\infty }\left(\begin{array}{c@{\quad}c}0 & RT\\ \bar{R}T & 0\end{array}\right)^k\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) . \end{eqnarray} (B9) The series expansion above is only a true inverse if it is convergent. Using the same trick as in the previous section, and again using (B4), we find that the maximum-magnitude roots of the characteristic polynomial \begin{eqnarray} {\rm det}\left(\begin{array}{c@{\quad}c}-\mu & RT\\ \bar{R}T & -\mu \end{array}\right)=0 \end{eqnarray} (B10) are determined by \begin{eqnarray} {\rm det} \left[ \mu ^2-R\bar{R}^T \right]=0 . \end{eqnarray} (B11) APPENDIX C: STRONGER BOUND ON THE MAXIMUM OF $$\left| \mathcal {R}_0 \left( \omega \right) \right|$$ In this section we return to the proof from Section 6.1. Since $$\left| \mathcal {R}_0(\omega ) \right|$$ in eq. (49) is strictly increasing as a function of x = |N(0)| ≥ 0, we can get an upper bound for $$\left| \mathcal {R}_0(\omega ) \right|$$ by finding an upper bound on x. In particular, using the triangle inequality |z1 + z2| ≤ |z1| + |z2| we have that \begin{eqnarray} x(\omega ) = \left| N{(0)} \right| \le \sum _{k \text{ odd}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|) , \end{eqnarray} (C1) where εk is the elementary symmetric polynomial of order k in M variables (recall that M is the number of reflectors). This can be used to show that \begin{eqnarray} \left| \mathcal {R}_0(\omega ) \right| \le \frac{\sum _{k \text{ odd}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)}{\sum _{k \text{ even}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)} , \end{eqnarray} (C2) by using the identity \begin{eqnarray} \sum _k (-1)^k \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)& = &\prod _i (1- \left| r_i \right|^2) \nonumber \\ &=& \prod _i (1-r_i^2) . \end{eqnarray} (C3) This means that |R0(ω)| satisfies not only the weaker bound (49), but also the stronger one (46). APPENDIX D: LEAN MERCHANT–PARKS–LEVINSON (LMPL) In Section 5 we have shown that the inversion problem with surface multiples and interbeds amounts to finding the inverse of a Toeplitz-plus-Hankel (T+H) matrix (1 + PR(ζ ± T)). In a more general context, this problem was studied by Merchant & Parks (1982), who, with a clever transformation, were able to map a T+H problem onto a mosaic Toeplitz matrix inversion, which can be handled by means of a Levinson algorithm (hence we call it the Merchant–Parks–Levinson inversion; for details see Merchant & Parks 1982). Unfortunately, the standard Levinson algorithm suffers from weak numerical instability. However, the symmetry of our particular T+H problem, that is, the fact that the entries of the Toeplitz matrix can be mapped onto those of the Hankel matrix, allows us to simplify it and derive an effective algorithm (termed the LMPL scheme) which does not suffer from this weak instability. We outline the algorithm below. For a given focusing depth zd and the corresponding velocity-model-dependent time td, we have a time-domain seismic trace given by a vector $$\vec{R}_{\zeta }= \left[ R_1,R_2,\ldots ,R_{N} \right]^T$$ truncated at N = ⌈2td/dt⌉ + 1 samples.
Let $$\vec{y}_k$$ denote an arbitrary vector $$\vec{y}_{k}= \left[ y_{1,k}, y_{2,k},\ldots ,y_{k,k} \right]^T$$, where yi,k is the ith element of a vector of length k ≤ N. We also denote an arbitrary vector of length N by $$\vec{y}= \left[ y_{1}, y_{2},\ldots ,y_{N} \right]^T$$. Also, define a vector reversal operator Tk of dimension k × k such that $$T_{k} \vec{y}_{k}= \left[ y_{k,k}, y_{k-1,k},\ldots , y_{1,k} \right]^T$$, which is the specific manifestation of the time-reversal operator in this vector representation, for vectors of length k. The algorithm below has to be run twice, once for each value of the parameter α = ±1. Initialization stage. Define $$\vec{p}=\vec{0}_{N\times 1}$$ and $$\vec{q}=\vec{0}_{N\times 1}$$ and let $$x_{0}^{ \left( 1 \right)}=1, x_{0}^{ \left( 2 \right)}=0, v=0$$. Set p1 = R1, q1 = RN. Throughout the algorithm we will use vectors $$\vec{p}_k, \vec{q}_k, \vec{x}_k^{ \left( 1 \right)}$$ and $$\vec{x}_k^{ \left( 2 \right)}$$, whose lengths increase by 1 at every iteration step. Note that $$x_{0}^{ \left( i \right)}$$ is not an element of the vector $$\vec{x}_{k}^{ \left( i \right)}$$; however, it is involved in calculating the scalar εx. Iterative stage. Let i = 1. Calculate the scalars \begin{eqnarray} \epsilon _x&=&\sum \limits _{j=0}^{i-1}R_{i-j} \left( x_{j,i-1}^{ \left( 1 \right)}+x_{j,i-1}^{ \left( 2 \right)} \right) , \nonumber \\ \epsilon _p&=&\sum \limits _{j=1}^{i}R_{i-j+1} \left( p_{j,i}+\alpha q_{j,i} \right) , \nonumber \\ b_x&=& \left( 1+v \right)^{-1}\epsilon _x , \end{eqnarray} (D1) where in the initial implementation of the MPL algorithm bx and εx were vectors and v was a matrix, and the computation of $$\left( 1+v \right)^{-1}$$ could be weakly numerically unstable. In our simplified LMPL algorithm the objects above are scalars, and the instability is removed. Next, update the entries of $$\vec{x}_{i-1}^{ \left( 1 \right)}$$ and $$\vec{x}_{i-1}^{ \left( 2 \right)}$$ and extend each of them with a new element, \begin{eqnarray} \left(\begin{array}{c}\vec{x}_{i-1}^{ \left( 1 \right)}\\ \vec{x}_{i-1}^{ \left( 2 \right)}\end{array}\right)&\leftarrow & \left(\begin{array}{c}\vec{x}_{i-1}^{ \left( 1 \right)}\\ \vec{x}_{i-1}^{ \left( 2 \right)}\end{array}\right)-b_x \left(\begin{array}{c}T_{i-1}\vec{x}_{i-1}^{ \left( 2 \right)}\\ T_{i-1}\vec{x}_{i-1}^{ \left( 1 \right)}\end{array}\right) , \nonumber \\ x_{i,i}^{ \left( 1 \right)} = -b_x; \quad x_{i,i}^{ \left( 2 \right)}=0; \end{eqnarray} (D2) update/calculate the scalars \begin{eqnarray} v&\leftarrow & v-\epsilon _x b_x ,\nonumber \\ g&=& \left( 1+v \right)^{-1} \left( R_{i}+\alpha R_{N-i-1}-\epsilon _p \right) , \end{eqnarray} (D3) and update the vectors $$\vec{p}_i$$ and $$\vec{q}_i$$, \begin{eqnarray} \left(\begin{array}{c}\vec{p}_i\\ \vec{q}_i\end{array}\right) &\leftarrow & \left(\begin{array}{c}\vec{p}_i\\ \vec{q}_i\end{array}\right)+g \left(\begin{array}{c}T_i \vec{x}_{i}^{ \left( 2 \right)}\\ T_i\vec{x}_{i}^{ \left( 1 \right)}\end{array}\right) , \end{eqnarray} (D4) and extend them with a new element \begin{eqnarray} p_{i+1}&=& \left( 1+v \right)^{-1} \left( R_{i+1}-\alpha v R_{N-1-i}-\epsilon _p \right) , \nonumber \\ q_{i+1}&=& R_{N-i} . \end{eqnarray} (D5) Increment i ← i + 1, unless i = N − 1 has been reached. Upon completing the i- and α-loops, the solutions are \begin{eqnarray} f_1^-&=&\frac{1}{2} \left( \vec{p}_{\alpha =1}+\vec{p}_{\alpha =-1} \right) , \nonumber \\ M_+ &=&\frac{1}{2}T_N \left( \vec{p}_{\alpha =1}-\vec{p}_{\alpha =-1} \right) .
\end{eqnarray} (D6) The LMPL method outlined above is exact and requires carrying out all N steps of the algorithm (stopping at i < N might not yield a good approximation of $$\vec{p}$$ and $$\vec{q}$$, because they would not even have the correct length). The method, however, scales favourably compared with direct inversion or conjugate-gradient methods (Merchant & Parks 1982).
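For readers who want a reference solution to test the recursion against, the following brute-force sketch assembles and solves the same Toeplitz-plus-Hankel systems directly (at O(N^3) cost rather than the O(N^2)-type scaling of Levinson recursions). The construction of the trace, the on-sample delta spike for $$T_d^{\text{inv}}$$ and the window are illustrative assumptions, and the sign bookkeeping follows eqs (10)-(11) rather than the α-loop above.

import numpy as np
from scipy.linalg import toeplitz, hankel

def toeplitz_plus_hankel_solve(trace, zeta=-1.0):
    # trace: the N-sample vector [R_1, ..., R_N] of Appendix D
    r = np.asarray(trace, dtype=float)
    n = len(r)
    T = np.tril(toeplitz(r))                         # Toeplitz part: causal convolution with R_zeta
    H = hankel(np.r_[np.zeros(n - 1), r[0]], r)      # Hankel part: equals T @ J, with J the exchange matrix
    rhs = r.copy()                                   # P R_zeta T_d^inv for an on-sample delta T_d^inv
    x = {}
    for sign in (+1.0, -1.0):                        # the two systems (1 + P R_zeta (zeta +/- T)) x = rhs
        x[sign] = np.linalg.solve(np.eye(n) + zeta * T + sign * H, rhs)
    f1_minus = 0.5 * (x[-1.0] + x[+1.0])             # cf. eq. (10): the two solutions combine into f_1^- ...
    M_plus = 0.5 * (x[-1.0] - x[+1.0])[::-1]         # ... and, after time reversal, into M_+, cf. eq. (D6)
    return f1_minus, M_plus

In 1-D this direct solve, the LU-based inversion mentioned in Section 7 and the LMPL recursion should agree up to numerical noise.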

© The Authors 2017. Published by Oxford University Press on behalf of The Royal Astronomical Society.
ISSN
0956-540X
eISSN
1365-246X
D.O.I.
10.1093/gji/ggx434
Publisher site
See Article on Publisher Site

Abstract

Abstract Marchenko redatuming is a powerful technique which uses all orders of scattering to construct an image of the subsurface. The algorithm is of great value when strong internal scattering is present, as it mitigates unwanted overburden artefacts when imaging a deeper subsurface target. The solution to the Marchenko equation, on which Marchenko methods are based, is often written in terms of an iterative substitution expansion (Neumann series) and conceptualized as a process of adding and removing seismic events, in an attempt to construct a focusing function. In this work we show that in some cases one may have to look beyond series expansion techniques to determine the correct solution. We prove that in the presence of surface-related multiples, the originally proposed Neumann series type algorithm may fail to converge, whereas in the absence of such multiples convergence is guaranteed. We study convergence properties of a class of such algorithms, which correspond to different choices of pre-conditioning of the Marchenko equation. We also propose and study the effectiveness of two other means of solving the Marchenko equation: LSQR (standard least squares) and a modified Levinson-type algorithm inspired by Merchant-Parks inversion, and discuss some of the challenges they pose. Interferometry, Inverse theory, Computational seismology, Theoretical seismology, Wave scattering and diffraction 1 INTRODUCTION Commonly used imaging algorithms are based on a single-scattering assumption, and multiple reflections, whether surface-related or internal, result in spurious reflectors in the reconstructed image. At the same time, contemporary hydrocarbon exploration is forced to investigate increasingly greater depths and more complex geological structures, which causes the problem of multiple reflections to become increasingly important. For this reason, more attention is devoted to removing multiple reflections from data prior to imaging. Marchenko redatuming is a powerful technique which uses all orders of scattering to construct an image of the subsurface; it consists of a two-step scheme comprising of solving a Marchenko equation and a multidimensional deconvolution (MDD; Wapenaar et al. 2013; Broggini et al. 2014; da Costa Filho et al. 2014; Slob et al. 2014; Wapenaar 2014; Wapenaar & Slob 2014; Wapenaar et al. 2014; da Costa Filho et al. 2015; van der Neut et al. 2015). It has been used, among other things, to eliminate internal multiples from pre-stack data (Meles et al. 2015; da Costa Filho et al. 2017), to construct pre-stack primaries without multiple elimination (Meles et al. 2016), and overburden elimination, that is, to produce a reflection response from a redatuming level sitting below some scattering medium (van der Neut et al. 2016, 2017). The key elements to a successful application of Marchenko redatuming are: an estimate of the one-way traveltime between any point at the acquisition plane (which must coincide with the free-surface) and any point at the redatuming level; a noise-free reflection response with sources and receivers at a (sufficiently dense) grid on the acquisition surface, deconvolved for the source wavelet and no intrinsic attenuation. In the presence of dissipation, additional data sets (often unavailable in seismic applications) are required (Slob 2016). Initially Marchenko redatuming has been proposed to deal with internal multiples only (Wapenaar et al. 2013, 2014), where it relies on perfect surface multiple removal beforehand (as in e.g. Ravasi et al. 
2016). Typically, such pre-processing leaves remnants of surface multiples comparable in strength to the (often weaker) internal multiples and these remnants can lead to calculation of incorrect focusing fields. Soon after, Singh et al. (2015) extended the Marchenko method to include free-surface multiples. In this paper we point out a fundamental issue of the Marchenko equation recursive substitution solver used in Wapenaar et al. (2014), Singh et al. (2015) or Slob (2016). The problem is manifest in the strong internal scattering regime, that is, in the presence of numerous layers with large impedance contrasts. The consequences of using the recursive substitution scheme in such setting are threefold. For one, in the absence of surface-related multiples the scheme suffers from sluggish convergence, even though, as we prove, the scheme is always convergent. In this case an approximation to the solution using only a few iterations might be insufficient for proper applications. Second, in the presence of surface-related multiples we show that the recursive substitution solver from Singh et al. (2015) can be seen as one of a family of distinct Marchenko equation decompositions. The choice of decomposition corresponds to a choice of pre-conditioner which affects the convergence rate of the solver. In contrast to the situation without a free surface, the scheme may now diverge, even though, as we demonstrate, a unique solution to the Marchenko equation always exists. Thirdly, we show that the scheme from Slob (2016) for internal demultiple in the presence of dissipation may also diverge in the presence of strong reflection and strong attenuation. None of these effects are a consequence of numerical accuracy, and we analytically derive conditions (scattering strength, focusing depth choice, among others) under which the schemes diverge. Where possible we provide a rigorous proof in 1-D, which can also be shown to hold for constant velocity 2-D and 3-D media. Lastly, we propose two other means to solve the Marchenko equation: LSQR and Merchant–Parks–Levinson algorithm, which do not suffer from divergence problems. We study their convergence properties and relative computational cost. These solvers do provide a solution to the Marchenko inversion also in cases where the Neumann series diverges; however they do so at an increased computational cost. This work is structured as follows. In the following section we briefly outline the Marchenko equation and its derivation (readers unfamiliar with the topic are referred to Wapenaar et al. (2014) and Singh et al. (2015) for further details), and we recast it in a form suitable to our analysis. In the third section we look at several different decompositions of the main operator. In Section 4, we write the 1-D Marchenko equation in a convenient matrix representation, and then in Section 5 we study the eigenvalue spectra of the relevant operators. Moreover, in Section 6 we study convergence properties of the recursive substitution schemes under different decompositions from Section 3. We show the breaking point of the scheme in the case of free-surface effects and investigate performance of non-Neumann series solvers in Section 7. 2 MATRIX-OPERATOR REPRESENTATION OF THE MARCHENKO EQUATION The Marchenko autofocusing scheme relies on up- and downgoing wave field decomposition, and amounts to solving a set of coupled Marchenko equations for unknown focusing operators, which are generalizations of migration operators which include high-order scattering. 
These solutions are directly related to up and down propagating Green’s functions through the representation theorems of convolution and correlation type (Wapenaar et al. 2014), modified to include the free-surface effects (Singh et al. 2015) \begin{eqnarray} G^-_{\zeta }&=&R_{\zeta } \left( f_{1}^+-{\zeta } f_{1}^- \right)-f_{1}^{-}, \end{eqnarray} (1) \begin{eqnarray} G^{+}_{\zeta }&=&f_{1}^{+*}-R_{\zeta } \left( f_{1}^{-*}-{\zeta } f_{1}^{+*} \right) . \end{eqnarray} (2) Here, $$G_{\zeta }^{\pm }$$ represents the up- (−) and downward (+) propagating Green’s function due to a source at the (acquisition) surface, recorded at the redatuming level; $$f_{1}^{\pm }$$ represents the up- (−) and downwards (+) propagating wavefields measured at the surface ζ is the free-surface reflection coefficient; and Rζ is an operator which carries out time-domain convolutions and sums over source (or receiver) positions with a wavelet deconvolved surface reflection response (due to a dipole source and a monopole receiver) with a field that it is acting on. For $$G_{\zeta }^{\pm }$$ and Rζ, the subscript ζ denotes the presence (ζ ≠ 0) or absence (ζ = 0) of surface-related multiples. Focusing functions $$\left( f_{1}^{\pm } \right)$$, are implicitly defined in the ζ = 0 medium, and hence (unlike the Green’s functions and the reflection response) will not be appended with a ζ subscript. Later in the paper we will consider the extreme cases of ζ = 0, ±1. We use * to denote time reversal (frequency-domain complex conjugation). The wavefields in the above equations are represented in the frequency-domain, where time-domain convolutions correspond to a simple multiplication; for 1-D, the spatial summation over source (or receiver) positions is absent. We refer to Wapenaar et al. (2014) and Singh et al. (2015) for further details. We may rewrite the relations above in terms of a block matrix operator equation \begin{eqnarray} \vec{G}_{\zeta }&=&\mathcal {A}_{\zeta }\vec{f}_{1} , \end{eqnarray} (3) where $$\vec{f}_1= \left[ -f_{1}^-,f_{1}^{+*} \right]^T$$, $$\vec{G}_{\zeta }= \left[ G^-_{\zeta },G^{+}_{\zeta } \right]^T$$ and \begin{eqnarray} \mathcal {A}_{\zeta }=\left(\begin{array}{c{@}{\quad}c}1+{\zeta } R_{\zeta } & R_{\zeta }T \\ R_{\zeta }T & 1+{\zeta } R_{\zeta }\end{array}\right)\equiv I_2-\mathcal {B}_{\zeta } . \end{eqnarray} (4) Here T stands for a time reversal operator, that is, $$Tf_1^+=f_1^{+*}$$, T refers to transposition, and I2 is the 2 × 2 identity matrix. We use the fact that $$G^{\pm }_{\zeta }$$ and $$f_1^{\pm }$$ are temporally separated to define a projector P, such that $$P G_{\zeta }^{\pm }=0$$, and $$P f_1^{-}=f_1^{-}$$. In Marchenko literature, P is known as the windowing operator; it separates causal from acausal events in G by maintaining all events before the first arrival (acausal events) and suppressing all other (causal) events. The downward propagating focusing field $$f_1^{+}=T_d^{\text{inv}}+M_+$$ is composed of the ‘primary’ focusing field $$T_d^{\text{inv}}$$ (approximated by a time-reversed direct arrival field from a point on the redatuming level recorded at the surface) and the coda wavefield M+ respectively. They satisfy $$P M_+^*=M_+^*$$ and $$P T_d^{\text{inv}*}=0$$. The action of projector P on eq. 
(3) yields the set of coupled Marchenko equations $$\hat{P} \mathcal {A}_{\zeta }\vec{f}_1=0$$, where $$\hat{P}=I_2 \otimes P$$, which can be expressed as \begin{eqnarray} [ I_2-\sigma _z \hat{P} \mathcal {B}_{\zeta }\sigma _z]\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right), \end{eqnarray} (5) where σz = diag[1, −1] is one of the Pauli sigma-matrices. At this point, Wapenaar et al. (2014) and Singh et al. (2015) recognized a nested integral equation \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right)+\sigma _z \hat{P}\mathcal {B}_{\zeta }\sigma _z\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right), \end{eqnarray} (6) which can be solved by a recursive substitution method (Neumann series). This interpretation is very much in the spirit of Rose’s first observation of iteratively collecting the reflected response, reversing it in time and sending it back into the medium (Rose 2002a,b). However, in order to understand the effect of the free surface and all the consequences that go with it, we need to take a different path. We interpret equation (5) as an inversion. In the presence of the free surface the operator \begin{eqnarray} I_2-\sigma _z \hat{P}\mathcal {B}_{\zeta }\sigma _z= L-U \end{eqnarray} (7) can be decomposed in terms of operators L and U, of which we will study three interesting cases: the trivial case L = I2 (BD for block-diagonal, as we explain below), the Gauss-Seidel (GS) case, and the modified Gauss-Seidel (MGS) case (Gauss 1903; Hazewinkel 2001). The introduced operators L and U are meant to allude to lower and upper-triangular matrices, but do not necessarily need to be such. Considering such L that L−1 exists, we can rewrite eq. (5) by substituting the decomposition (7) and acting with L−1, as \begin{eqnarray} ( I_2-L^{-1}U)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right), \end{eqnarray} (8) where we see that L−1 acts as a pre-conditioner. Note that eqs (5) and (8) yield the same solutions, however the procedure of solving the two might be different on account of appropriately chosen L−1. The statement that we will prove in this paper is that eq. (5) in the presence of a free surface (ζ ≠ 0), cannot be always solved by a Neumann series expansion (Singh et al. 2015), which only converges if the spectral radius (maximum magnitude eigenvalue) of the operator $$\sigma _z\hat{P}\mathcal {B}_{\zeta }\sigma _z$$ is bounded by unity, and diverges otherwise (Fredholm 1903; Mathews & Walker 1970; Lax 2002). Previously, no claims about convergence were made (e.g. see Singh et al. 2015). We show that there are several pre-conditioners (both trivial and non-trivial), leading to different inversions with different convergence properties of the series expansion (or in some cases divergence). In particular, we will show that the spectral radius ρ of L−1U depends on the decomposition choice and the scattering strength given by Rζ. 
We will show that only in some cases it is bounded by unity, in which case this allows for a solution of the form \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)&=& \left( I_2-L^{-1}U \right)^{-1}L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right) \nonumber \\ &=&\sum \limits _{k=0}^{\infty } \left( L^{-1}U \right)^k L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right). \end{eqnarray} (9) This will be the topic of Section 3 and 5. It is important to note that the problem can be solved by other (iterative non-Neumann type) inversion schemes, which, together with their convergence properties, we will address in Section 7. 3 L − U DECOMPOSITION OF THE SET OF COUPLED MARCHENKO EQUATIONS In this section we will expand on the L − U decomposition idea, and later show that in the presence of a free surface, for the same data set and operator Rζ, one decomposition will result in divergent Neumann expansion, while a different decomposition might still lead to a series converging to a correct result. Some of the technical details are presented in Appendix A. 3.1 The trivial case (BD) As a reference point we will first address the trivial case of L = I2 (2 × 2 identity matrix). 3.1.1 Block-diagonalization Eq. (5) can be block-diagonalized, by defining two new variables x± such that \begin{eqnarray} \left(\begin{array}{c}x_+ \\ x_-\end{array}\right)=\left(\begin{array}{c{@}{\quad}c}1 & 1 \\ 1 & -1\end{array}\right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right) , \end{eqnarray} (10) which decouples the two Marchenko equations into \begin{eqnarray} \left( 1+PR_{\zeta } \left( \zeta \pm T \right) \right)x_{\mp }= PR_{\zeta }T_d^{\text{inv}} . \end{eqnarray} (11) The inverse operator (1 + PRζ(ζ ± T))−1 can be expressed in terms of a Neumann series \begin{eqnarray} \left( 1+ PR_{\zeta } \left( \zeta \pm T \right) \right)^{-1}=\sum \limits _{k=0}^{\infty } \left( -PR_{\zeta } \left( \zeta \pm T \right) \right)^{k} \end{eqnarray} (12) under the condition that the spectral radius of PR(ζ ± T) is bounded from above by one. If this condition is satisfied, the problem can be solved recursively using \begin{eqnarray} x_{\pm ,k+1} &=& f_{1,0}^{-}\mp PR_{\zeta }x_{\pm ,k}^{*}-{\zeta } PR_{\zeta }x_{\pm ,k} \end{eqnarray} (13) starting with k = 0 and $$x_{\pm ,0}=PR_{\zeta }T_d^{\text{inv}}$$. We will refer to the scheme above as the BD Neumann series. Note that this is not the scheme proposed by Wapenaar et al. (2014) or Singh et al. (2015). We shall return to this matter in Section 6.2. A simple numerical experiment shows that this scheme diverges for a single reflector with a reflection coefficient of one-third or more for sufficiently large focusing depth, caused by the fact that the spectral radius of PRζ(ζ ± T), for ζ ≠ 0, is no longer bounded by one. The problem worsens significantly for geometries with a larger number of reflectors, something that will be discussed in Sections 3.3, 5 and 6.5. 3.1.2 Surface multiple removal—the obvious pre-conditioner The BD form of the equation also helps demonstrate another property of the problem. By studying the properties of operators P, Rζ and T, and by virtue of (1 + ζRζ)R0 = Rζ, or the fact that eq. (3) transforms from ζ = 0 problem to a ζ ≠ 0 problem by applying (1 + ζRζ) to both sides, one can show that \begin{eqnarray} 1+PR_{\zeta } \left( \zeta \pm T \right)&=& \left( 1+{\zeta } PR_{\zeta } \right) \left( 1\pm PR_0T \right) . 
\end{eqnarray} (14) Taking the inverse on both sides (which does exist as we shall show), we get \begin{eqnarray} \left( 1+PR_{\zeta } \left( \zeta \pm T \right) \right)^{-1}&=& \left( 1\pm PR_0T \right)^{-1} \left( 1+{\zeta } PR_{\zeta } \right)^{-1} . \end{eqnarray} (15) The solution to the problem then reads \begin{eqnarray} \left( f_1^-\mp M_+^* \right)&=& \left( 1\pm PR_0T \right)^{-1} \left( 1+{\zeta } PR_{\zeta } \right)^{-1} PR_{\zeta }T_d^{\text{inv}}\nonumber \\ &=& \left[ \sum \limits _{p=0}^{\infty } \left( \mp PR_0T \right)^{p} \right] PR_{0}T_d^{\text{inv}}, \end{eqnarray} (16) where we have used that (1 + ζPRζ)−1PRζ = PR0, which amounts to first removing the surface-related multiples from the reflection response Rζ, followed by an inversion procedure as if the free-surface was not present. In this way we can potentially decompose a single (possibly divergent) Neumann series into a product of two other (possibly convergent) Neumann series, yielding solutions \begin{eqnarray} f_1^-&=& \left[ \sum \limits _{p=0}^{\infty } \left( PR_0T \right)^{2p+1} \right] T_d^{\text{inv}*},\nonumber \\ M_+^*&=& \left[ \sum \limits _{p=1}^{\infty } \left( PR_0T \right)^{2p} \right] T_d^{\text{inv}*}. \end{eqnarray} (17) The ζPRζ operator can be represented by a product of a diagonal matrix P and a lower triangular Toeplitz matrix Rζ with zero diagonal elements (no recorded reflections at t = 0). As a consequence, all eigenvalues of PRζ are equal to zero, which means that the inverse (1 + ζPRζ)−1 can be expressed as a finite Neumann series, hence the inverse exists. In Section 5 we will show that the spectral radius of PR0T is bounded from above by 1, which implies that (1 ± PR0T)−1 always exists. These two facts combined with the existence of the decomposition (14) imply that (1 + ζPRζ)(1 ± PR0T) too will have an inverse, hence 1 + PRζ(ζ ± T) will be invertible too, thus showing that upon an appropriate parametrization of $$f_1^+$$ and the right projection, the problem is always invertible. The problem with this approach in practice however, is that it necessitates removal of surface-related multiples first, and our inability to perform this step perfectly on real data makes this scheme less desirable. For this reason we would like to investigate different pre-conditioners. 3.2 GS case The GS method amounts to defining matrix L and U such that \begin{eqnarray} L &=&\left(\begin{array}{c{@}{\quad}c}1+{\zeta } PR_{\zeta } & 0 \\ -PR_{\zeta }T & 1+{\zeta } PR_{\zeta }\end{array}\right),\quad U=\left(\begin{array}{c{@}{\quad}c}0 & PR_{\zeta }T \\ 0 & 0\end{array}\right), \end{eqnarray} (18) where L is now indeed lower-triangular, and U strictly upper-triangular. From eq. (8), the solution can be parametrized as \begin{equation*}\left(\begin{array}{c{@}{\quad}c}1 & -PR_{0}T \\ 0 & 1- \left( PR_{0}T \right)^2 \end{array}\right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{0}T_d^{\text{inv}} \\ PR_{0}TPR_{0}T_d^{\text{inv}}\end{array}\right) \end{equation*} where we have used the result from above (1 + ζPRζ)−1PRζ = PR0. See Appendix A for details. This means that the GS scheme also, undesirably, requires surface multiple removal prior to the iterative scheme, which reduces the problem down to the ζ = 0 scheme, with the solution given by eq. (17). 3.3 MGS case The two methods above required a surface multiple removal step, due to the presence of 1 + ζPRζ in L which we subsequently took an inverse of. 
One modification we can apply to the GS type scheme is to use a different decomposition of the matrix L − U such that L−1 exists, \begin{eqnarray} L&=&\left(\begin{array}{c{@}{\quad}c}1 & 0 \\ -PR_{\zeta }T & 1\end{array}\right)\qquad U=\left(\begin{array}{c{@}{\quad}c}-{\zeta } PR_{\zeta } & PR_{\zeta }T \\ 0 & -{\zeta } PR_{\zeta }\end{array}\right). \end{eqnarray} (19) This results in the equation \begin{eqnarray} \left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right)=\Omega \sigma _x\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}} \\ 0\end{array}\right)+\Omega \left( \sigma _x\Omega \sigma _x \right)\left(\begin{array}{c}f_1^- \\ M_+^*\end{array}\right), \end{eqnarray} (20) where ϕ ≡ Ωσx \begin{eqnarray} \phi &=& \left(\begin{array}{c{@}{\quad}c}0 & 1 \\ -{\zeta } PR_{\zeta } & PR_{\zeta }T\end{array}\right)\nonumber \\ &\equiv &\underbrace{\left(\begin{array}{c{@}{\quad}c}1 & 0 \\ PR_{\zeta }T & -{\zeta } PR_{\zeta }\end{array}\right)}_{\Omega } \underbrace{\left(\begin{array}{c{@}{\quad}c}0 & 1 \\ 1 & 0\end{array}\right)}_{\sigma _x}. \end{eqnarray} (21) See Appendix A for further details. Eq. (20) yields another recursive, two-step method \begin{eqnarray} M_{+,k}^* &=& PR_{\zeta }f_{1,k-1}^{-*}-{\zeta } PR_{\zeta }M_{+,k-1}^*\nonumber \\ f_{1,k} &=&f_{1,k-1} \nonumber \\ M_{+,k+1}^* &=&M_{+,k}^* \nonumber \\ f_{1,k+1}^- &=& f_{1,0}^{-}+PR_{\zeta }M_{+,k}-{\zeta } PR_{\zeta }f_{1,k}^- . \end{eqnarray} (22) We can simplify the steps to yield \begin{eqnarray} M_{+,k}^* &=& PR_{\zeta }f_{1,k}^{-*}-{\zeta } PR_{\zeta }M_{+,k-1}^*\nonumber \\ f_{1,k+1}^- &=& f_{1,0}^{-}+PR_{\zeta }M_{+,k}-{\zeta } PR_{\zeta }f_{1,k}^- . \end{eqnarray} (23) This is exactly the scheme first published by Wapenaar et al. (2014) (ζ = 0) and Singh et al. (2015) (ζ ≠ 0). The difference between the MGS equation (23) and the BD equation (13) is that the former splits the update step in half, and the second half of the iterative step uses an already updated form of one field to calculate the other. Note that it is the presence of ζ ≠ 0 that gives rise to a set of different decompositions, and that for ζ = 0 the MGS scheme reduces to the GS scheme. Moreover, one can also decompose L − U using a free parameter (e.g. a successive-over-relaxation approach from Young 1971); however, early investigation has shown that these parametrized decompositions do not seem to improve convergence without a priori removing free-surface effects. 4 A CONVENIENT 1-D MARCHENKO EQUATION REPRESENTATION The scheme as proposed by Wapenaar et al. (2013, 2014) is composed of three building blocks: time reversal T; reflection response convolutions and summing over source/receiver locations (in 2-D or 3-D) Rζ; and projection P. In the 1-D case spatial index contractions reduce to scalar multiplication, and thus can be skipped. In order to study the convergence of the scheme, we would like to represent the (series of the) three operations above in terms of a single matrix operation acting on some representation of the primary focusing field $$T_d^{\text{inv}}$$. 
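Before constructing that representation explicitly, it is perhaps helpful to see how the two-step recursion (23) would be iterated in practice. The following sketch is ours and purely illustrative (the names mgs_marchenko, R, P, T and Td_inv are not from the original work): it assumes the dense matrix-vector representation introduced in the next section, with convolution by the reflection response written as a matrix product and frequency-domain complex conjugation implemented as time reversal of a real trace.

import numpy as np

def mgs_marchenko(R, P, T, Td_inv, zeta=-1.0, n_iter=50):
    # R      : (n, n) Toeplitz convolution matrix built from the trace r_zeta
    # P      : (n, n) projector onto times before the direct arrival (eq. 34)
    # T      : (n, n) time-reversal (anti-diagonal) matrix
    # Td_inv : (n,) vector representing the inverse of the direct arrival
    # zeta   : 0 (no free surface) or +/-1 (free surface)
    f0 = P @ R @ Td_inv            # f_{1,0}^-
    f_minus = f0.copy()            # current estimate of f_1^-
    m_star = np.zeros_like(f0)     # current estimate of M_+^*
    for _ in range(n_iter):
        # first half-step: update M_+^* with the already updated f_1^-
        m_star = P @ R @ (T @ f_minus) - zeta * (P @ R @ m_star)
        # second half-step: update f_1^- with the new M_+^*
        f_minus = f0 + P @ R @ (T @ m_star) - zeta * (P @ R @ f_minus)
    return f_minus, m_star

Setting zeta = 0 recovers the internal-multiple-only iteration; whether the loop converges at all is precisely the spectral-radius question studied below.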
We choose to work in the time domain and represent convolutions using Toeplitz forms, which in a discretized setting amounts to Toeplitz matrices $${\rm Toe} \left[ \vec{a} \right]\equiv \left[ a \right]_{i-j}$$ \begin{eqnarray} \left[ a \right]_{i-j}= \left( \begin{array}{c{@}{\quad}c{@}{\quad}c{@}{\quad}c{@}{\quad}c{@}{\quad}c} a_0 & a_{-1} & \cdots & \cdots & a_{1-N} & a_{-N} \\ a_1 & a_0 & \ddots & \ddots & \ddots & a_{1-N}\\ a_2 & a_1 & \ddots &\ddots & \ddots &\vdots \\ \vdots & \ddots & \ddots & & \ddots &\vdots \\ \vdots & \ddots & \ddots & a_1 & a_0 & a_{-1} \\ a_{N} & \cdots & \cdots & a_2 & a_1 &a_0 \end{array} \right) \end{eqnarray} (24) where N = tmax/dt + 1, with tmax the total seismic trace length and dt denoting the time bin width. Note that ak, for k > 0 (k < 0) corresponds to the positive (negative) times. Before we address the form of the projection P and time reversal T operators, we will first discuss the exact form of the full bandwidth seismic trace $$\mathcal {R}_{\zeta } \left( \omega \right)$$. Suppose we have a 1-D model, with M reflectors characterized by (real) reflection coefficients ri, \begin{eqnarray} \left|r_i\right| < 1, \qquad i=1,\ldots ,M, \end{eqnarray} (25) and two-way traveltimes τi \begin{eqnarray} 0 < \tau _1 < \tau _2 < \cdots < \tau _M, \end{eqnarray} (26) and the following shorthand notation in terms of complex variables, \begin{eqnarray} z_i \left( \omega \right) = r_i e^{i\omega \tau _i} . \end{eqnarray} (27) One can obtain a causal seismic trace $$\vec{\mathfrak {r}}_{\zeta }$$, with zero entries at negative times, that is, $$\mathfrak {r}_{k\le 0,\zeta }=0$$, and the kth entry is given by \begin{eqnarray} \mathfrak {r}_{k,\zeta }=\frac{1}{2\pi }\int \limits _{-\pi }^{\pi }e^{ik\omega } \mathcal {R}_{\zeta } \left( \omega \right) {\rm d}\omega ,\quad k=0,1,\ldots ,N-1 , \end{eqnarray} (28) where $$\mathcal {R}_{\zeta } \left( \omega \right)=\mathcal {R}_0 \left( \omega \right)/ \left( 1-{\zeta } \mathcal {R}_0 \left( \omega \right) \right)$$, and where $$\mathcal {R}_0(\omega )$$ is the frequency domain reflection response containing only primaries and internal multiples, so no free-surface multiples (Slob et al. 2014) \begin{eqnarray} \mathcal {R}_0(\omega ) = \frac{N^{ \left( 0 \right)}}{D^{ \left( 0 \right)}}. \end{eqnarray} (29) The numerator and the denominator above can be obtained by an iterative procedure using the formula \begin{eqnarray} \begin{array}{ll} \! N^{ \left( k-1 \right)}=z_k \left( \omega \right)D^{ \left( k \right)}+N^{ \left( k \right)},\\ \! D^{ \left( k-1 \right)}=z_k \left( \omega \right)^*N^{ \left( k \right)}+D^{ \left( k \right)},\end{array} \end{eqnarray} (30) starting with \begin{eqnarray} N^{ \left( M \right)}&=&0 \quad {\rm and}\quad D^{ \left( M \right)}=1. \end{eqnarray} (31) Using formula (29), the surface reflection response R0(ω) (for 1, 2 or 3 reflectors respectively) is given by \begin{eqnarray} \mathcal {R}_0(\omega ) & = & z_1 , \nonumber \\ \mathcal {R}_0(\omega ) & = & \frac{z_1+z_2}{1+z_2z_1^*} , \nonumber \\ \mathcal {R}_0(\omega ) & = & \frac{z_1+z_2+z_3+ z_3z_2^*z_1}{1+z_3z_2^* + z_3z_1^* + z_2z_1^*} . \end{eqnarray} (32) The reflection response convolution operator $$R_{\zeta }={\rm Toe} \left[ \vec{\mathfrak {r}}_{\zeta } \right]$$ is a lower-triangular Toeplitz matrix constructed from the time domain seismic trace $$\mathfrak {r}_{\zeta }$$. 
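As a concrete illustration, a trace of this form and its convolution operator can be assembled in a few lines. The sketch below is ours (the names reflection_trace and convolution_operator are hypothetical); it assumes the reflectors sit exactly on time samples and that the trace is long enough to carry all multiples of interest.

import numpy as np
from scipy.linalg import toeplitz

def reflection_trace(r, tau, zeta, dt=0.004, t_max=4.0):
    # Build the full-bandwidth 1-D trace r_zeta from reflection coefficients
    # r_i and two-way times tau_i via the recursion (30)-(31), adding
    # free-surface multiples through R_zeta = R_0 / (1 - zeta R_0).
    n = int(t_max / dt) + 1
    w = 2.0 * np.pi * np.fft.fftfreq(n)        # angular frequency per sample
    N, D = np.zeros(n, complex), np.ones(n, complex)
    for rk, tk in zip(reversed(r), reversed(tau)):
        z = rk * np.exp(1j * w * tk / dt)      # z_k(omega), delay in samples
        N, D = z * D + N, np.conj(z) * N + D   # recursion (30)
    Rz = (N / D) / (1.0 - zeta * N / D)
    return np.real(np.fft.ifft(Rz))            # causal samples r_k (eq. 28)

def convolution_operator(trace):
    # Lower-triangular Toeplitz matrix Toe[r] acting as a convolution.
    return toeplitz(trace, np.zeros_like(trace))

# two reflectors with r = 0.4 at two-way times 0.4 s and 0.48 s, free surface
R = convolution_operator(reflection_trace([0.4, 0.4], [0.4, 0.48], zeta=-1.0))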
We can choose a matrix-vector representation where $$T_d^{\text{inv}}$$ is given by a vector of length 2N + 1 (N samples corresponding to positive (negative) times each and a t = 0 sample), given by \begin{eqnarray} \left( T_d^{\text{inv}} \right)_i=\delta _{d,i} , \end{eqnarray} (33) where δd,i is the Kronecker delta, and d = N − td/dt is the time sample corresponding to the time t = −td (negative direct arrival time). In this case the operators P and T take the form of (2N + 1) × (2N + 1) matrices \begin{eqnarray} P_k&=&\left(\begin{array}{c{@}{\quad}c}I_{ (N+d)\times (N+d)} & 0_{ (N+d)\times (N+1-d)} \\ 0_{ (N+1-d)\times (N+d)} & 0_{ (N+1-d)\times (N+1-d)}\end{array}\right) , \nonumber \\ T_{ij}&=&\delta _{i,2N+2-j} , \qquad i,j=1,\ldots ,2N+1 , \end{eqnarray} (34) where Ik × k represents a k × k dimensional identity matrix, and 0j × k is a j × k matrix of zeros. It is easy to see that Tij is an anti-diagonal matrix of 1’s. Moreover, it is easy to see that RζT is a Hankel matrix (an operator carrying out correlation on vectors) and that the matrices above satisfy the property $$TR_{\zeta }T=R_{\zeta }^{T}$$ where T denotes matrix transposition. It is important to note that in the matrix-vector representation, a (coupled) Marchenko equation becomes a matrix equation such that $$( 1-\sigma _z \hat{P} \mathcal {B}_{\zeta }\sigma _z)$$ can be represented explicitly as a matrix, and hence its inverse can be found. By virtue of the projector P choice, this matrix and its inverse will also depend on seismic trace elements which correspond to reflections below the focal point; however, the condition $$P T_d^{\text{inv}*}=0$$ guarantees that the solutions $$f_1^{\pm }$$ will not depend on them. In Section 5 we use the latter representation to study different iterative solution methods and their convergence properties. We would like to stress that regardless of the representation used, or whether the analysis is done in 1-D or 2-D, we have qualitatively found the same results. Of the representations considered, it is the 1-D-vector representation that makes the discussion most transparent. 5 1-D MODEL SPECTRAL ANALYSIS: CHARACTERISTIC POLYNOMIALS In this section we will study the spectral radii of the BD, MGS and dissipative Marchenko Neumann series schemes in 1-D. To this effect we will derive the characteristic polynomials of the operators which build up the series. The maximum-magnitude root of each of these polynomials, the so-called spectral radius, determines the convergence properties of the corresponding algorithm and will be further studied in Section 6. For mathematical details please consult Appendix B. 5.1 BD formulation To establish whether the inverse of diag(PRζ(ζ − T), PRζ(ζ + T)) can be found by Neumann expansion, we determine its characteristic polynomial χζ(μ), which is a product of the characteristic polynomials of PRζ(ζ − T) and PRζ(ζ + T). Numerical investigation has shown that the convergence issue only becomes a problem for some geometries, past some threshold focusing depth. For example, in the free-surface model presented in Singh et al. (2015), where focusing functions were successfully obtained for a shallower depth, it can be shown that this will no longer be possible for much greater depths using the same iterative substitution algorithm. 
Since we are interested in whether a particular set of reflectors might lead to series divergence for any depth, we will pick very large depths (and hence first arrival times td → ∞, that is, focusing far below the deepest reflector), meaning P ≈ 1, or where the strength of the (multiple) reflections has diminished enough, that is, $$PR\vec{v}\approx R\vec{v}$$, $$\forall \vec{v}$$. For consistency of the reciprocity theorems, this means that G− = 0 and that the first event in G+ is pushed infinitely far into the future, which for traces of finite length means that events in G+ will be absent. Additionally, this assumption offers us some mathematical freedom which we will be making use of later. The characteristic polynomial of a BD matrix diag(Rζ(ζ − T), Rζ(ζ + T)) is given by \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ \left( R_{\zeta } \left( \zeta -T \right)-\mu \right) \left( R_{\zeta } \left( \zeta +T \right)-\mu \right) \right], \end{eqnarray} (35) which in the special cases ζ ∈ {0, ±1} amounts to \begin{eqnarray} \chi _{{\zeta =0} } \left( \mu \right)&=& \det \left[ \mu ^2-R_{0}R_{0}^T \right] , \end{eqnarray} (36) \begin{eqnarray} \chi _{\zeta =\pm 1} \left( \mu \right)&=&\mu \det \left[ \mu - \zeta \left( R_{\zeta }+R_{\zeta }^T \right) \right] , \end{eqnarray} (37) where we see that the form of the expression for the spectrum is different for ζ = 0 or ζ = ±1, that is, it is determined by the existence or lack of surface-related multiples. Interestingly, the cases ζ = 0, ±1 correspond to spectra of real symmetric matrices, and in the presence of a reflecting surface, exactly half of the eigenvalues will be equal to zero on account of μ pre-multiplying the determinant in eq. (37). It is easy to see that $$TR_{\zeta } \left( t \right)T=R_{\zeta }^T$$ corresponds to a Toeplitz matrix constructed from a time reversed trace. Therefore, the transposition of the operator Rζ amounts to frequency domain complex conjugation of the trace that was used to construct it. 5.2 MGS formulation The characteristic polynomial of matrix ϕ, defined in eq. (21), is given by \begin{equation*} {\rm det} \left[ \phi -\mu \right]= {\rm det}\left(\begin{array}{c{@}{\quad}c}-\mu & 1\\ -{\zeta } R_{\zeta } & R_{\zeta }T-\mu \end{array}\right) , \end{equation*} which in the special cases considered, for ζ = 0 amounts to \begin{eqnarray} {\rm det} \left[ \mu ^2-R_0R_0^T \right]=0 , \end{eqnarray} (38) yielding the same result as (36). For a reflecting surface with ζ = ±1 we have \begin{eqnarray} {\rm det} \left[ \nu ^2-\nu \left( R_{\zeta }R_{\zeta }^T\pm \left( R_{\zeta }+R_{\zeta }^T \right) \right)+R_{\zeta }R_{\zeta }^T \right]=0 , \end{eqnarray} (39) with ν = μ2, yielding a different expression than eq. (37) on account of additional $$R_{\zeta }R_{\zeta }^T$$ proportional terms that now no longer drop out. 5.3 Dissipative formulation Marchenko redatuming in the presence of dissipation was formulated by Slob (2016), where the presented autofocusing formulation amounts to inverting (using the large td, i.e. P = 1 approximation) \begin{eqnarray} \left(\begin{array}{c{@}{\quad}c}1 & -RT\\ -\bar{R}T & 1\end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) , \end{eqnarray} (40) with the characteristic polynomial defined by \begin{eqnarray} {\rm det} \left[ \mu ^2-R\bar{R}^T \right]=0 . \end{eqnarray} (41) We will show later that, unlike for the dissipationless case, |μ| is now no longer bounded by 1. 
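The spectral radii that appear in these characteristic polynomials can also be evaluated numerically by brute force. The sketch below is ours (spectral_radii is a hypothetical helper, not code from the original work); it works in the large-td limit used in this section (P ≈ 1) and expects the Toeplitz convolution matrix R built from the free-surface trace, for example via the convolution_operator sketch above.

import numpy as np

def spectral_radii(R, zeta=-1.0):
    # Spectral radii of the BD operators R_zeta(zeta -+ T) and of the MGS
    # operator phi of eq. (21), with P approximated by the identity.
    n = R.shape[0]
    I = np.eye(n)
    T = np.fliplr(I)                                         # time reversal
    rho_bd = max(np.max(np.abs(np.linalg.eigvals(R @ (zeta * I - T)))),
                 np.max(np.abs(np.linalg.eigvals(R @ (zeta * I + T)))))
    phi = np.block([[np.zeros((n, n)), I],
                    [-zeta * R,        R @ T]])
    rho_mgs = np.max(np.abs(np.linalg.eigvals(phi))) ** 2    # rho(phi^2)
    return rho_bd, rho_mgs

A Neumann series built on a given operator is then expected to converge only when the corresponding radius stays below one, which is the criterion examined in the next section.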
6 CONVERGENCE PROPERTIES OF MARCHENKO AUTOFOCUSING NEUMANN SERIES ALGORITHMS In this section we will build on the results derived above. We will first consider the case of interbeds only, followed by the case of surface multiples. We show how the convergence changes as a function of depth, and how it behaves in two dimensions. We end this section with a short note on the convergence of Marchenko inversion in the presence of dissipation. 6.1 Convergence in the absence of surface-related multiples In this section we will show that Marchenko inversion in the absence of free-surface effects will always converge. To do so, we work in the frequency domain. Let us consider the eigenvectors ψ(ω) of R0T (where T is time-reversal, or in the frequency domain, complex conjugation), \begin{eqnarray} \mathcal {R}_0(\omega ) T \psi (\omega ) =\mathcal {R}_0(\omega ) \psi (\omega )^* = \lambda \psi (\omega ) , \end{eqnarray} (42) implying \begin{eqnarray} \mathcal {R}_0(\omega )\mathcal {R}_0(\omega )^* \psi (\omega ) = \left| \lambda \right|^2 \psi (\omega ) . \end{eqnarray} (43) Clearly we have eigenvectors \begin{eqnarray} \psi (\omega ) = \delta (\omega -\omega _0) + \delta (\omega +\omega _0) , \end{eqnarray} (44) whose eigenvalues satisfy \begin{eqnarray} \left| \lambda (\omega _0) \right| = \left| \mathcal {R}_0(\omega _0) \right|, \end{eqnarray} (45) which, for a fixed model {ri, τi} with 1, 2 or 3 reflectors, numerical experiments show obey the bounds \begin{eqnarray} \left| \mathcal {R}_0(\omega )\right| & \le & \left| r_{1} \right| \nonumber \\ \left| \mathcal {R}_0(\omega )\right| & \le & \frac{\left| r_{1} \right| + \left| r_{2} \right|}{1+\left| r_{2} \right|\left| r_{1} \right|}\nonumber \\ \left| \mathcal {R}_0(\omega )\right| & \le & \frac{\left| r_{1} \right| + \left| r_{2} \right| + \left| r_{3} \right| + \left| r_{3} \right|\left| r_{2} \right|\left| r_{1} \right|}{1+\left| r_{3} \right|\left| r_{2} \right| + \left| r_{3} \right|\left| r_{1} \right| + \left| r_{2} \right|\left| r_{1} \right|} . \end{eqnarray} (46) These bounds are stronger than the one we attempt to prove here, namely, \begin{eqnarray} \left| \mathcal {R}_0(\omega ) \right| < 1 . \end{eqnarray} (47) Using the polynomials N and D, we have \begin{equation*} \mathcal {R}_0(\omega ) \mathcal {R}_0(\omega )^* = \frac{ N^{ \left( 0 \right)} N^{ \left( 0 \right)*}}{D^{ \left( 0 \right)}D^{ \left( 0 \right)*}} \end{equation*} and from the recurrence relations (30) we derive \begin{eqnarray} D^{ \left( 0 \right)} D^{ \left( 0 \right)*} - N^{ \left( 0 \right)}N^{ \left( 0 \right)*} & = & (1-z_1 z_1^*) \left[ D^{ \left( 1 \right)} D^{ \left( 1 \right)*} - N^{ \left( 1 \right)}N^{ \left( 1 \right)*} \right] \nonumber \\ & = & \prod _i (1- \left| z_i \right|^2) \nonumber \\ & = & \prod _i (1-r_i^2) \in \left(0,1\right] . \end{eqnarray} (48) Therefore, we have in general \begin{eqnarray} \mathcal {R}_0(\omega ) \mathcal {R}_0(\omega )^*&=& \left| \mathcal {R}_0(\omega ) \right|^2 = \frac{x(\omega )^2}{c+x(\omega )^2} < 1\nonumber \\ &\Rightarrow & \left| \mathcal {R}_0(\omega ) \right|<1 \end{eqnarray} (49) since $$c = \prod _i (1-r_i^2) \in \left(0,1\right]$$ and x(ω)2 = N(0)N(0)* ≥ 0. For a proof of the stronger bounds (46), see Appendix C. This means that the spectral radius for Marchenko inversion in the absence of surface-related multiples will always be bounded by 1, meaning that the Neumann series expansion of the Marchenko inversion step is always justified and will always converge. 
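This bound is also easy to confirm numerically for an arbitrary stack of reflectors. A minimal sketch (ours; check_bound is a hypothetical name) evaluates R0(ω) through the recursion (30) on a dense frequency grid and verifies both the identity (48) and the bound (49):

import numpy as np

def check_bound(r, tau, n_omega=4096):
    w = np.linspace(-np.pi, np.pi, n_omega)
    N, D = np.zeros(n_omega, complex), np.ones(n_omega, complex)
    for rk, tk in zip(reversed(r), reversed(tau)):
        z = rk * np.exp(1j * w * tk)
        N, D = z * D + N, np.conj(z) * N + D     # recursion (30)
    c = np.prod(1.0 - np.asarray(r) ** 2)        # eq. (48)
    assert np.allclose(np.abs(D) ** 2 - np.abs(N) ** 2, c)
    return np.max(np.abs(N / D))                 # max |R_0(omega)| on the grid

rng = np.random.default_rng(0)
print(check_bound(rng.uniform(-0.9, 0.9, 8), np.sort(rng.uniform(0.1, 2.0, 8))))
# stays below 1 for any admissible |r_i| < 1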
A word of caution is in order here. The proof above holds for the case of all orders of scattering in an infinitely long trace. Should a trace finite in length (tmax) be used, that is, $$\tilde{\mathcal {R}}_0 \left( t,t_{\text{max}} \right)\equiv \Theta \left( t_{\text{max}}-t \right)\mathcal {R}_0 \left( t \right)$$, where Θ is the Heaviside Theta function, it is possible that in the strong internal scattering regime $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|>1$$ for some ω. For example, consider a two-reflector model, with geologically unrealistic reflection coefficients of r1 = r2 = 0.75. In this case $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|>1$$ will hold if the trace terminates before the emergence of the second order internal multiple, and $$\left| \tilde{\mathcal {R}}_0(\omega ,t_{\text{max}}) \right|<1$$ will hold if the trace contains said event and all higher order multiples. In Marchenko inversion, however, the projector only acts on the result of the convolution, rather than projecting the trace that is later being convolved. It appears that this may be the reason why the point above does not lead to divergence. 6.2 Convergence in the presence of surface-related multiples In order to understand the convergence properties of the Neumann type methods in the presence of surface-related multiples, that is, when ζ = ±1, we need to study the spectrum of a real symmetric Toeplitz (RST) matrix R + RT (see eq. 37) generated by a function Rζ(ω) such that the matrix entries satisfy \begin{eqnarray} t_{r}&=&\frac{1}{2\pi }\int \limits _{-\pi }^{\pi } \exp \left( i\omega r \right) \left( \mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right) \right) {\rm d}\omega \nonumber \\ &=&\frac{1}{\pi }\int \limits _{0}^{\pi } \cos \left( \omega r \right) \left( \mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right) \right) {\rm d}\omega . \end{eqnarray} (50) The result from Grenander & Szegö (1958, p. 64) dictates that the set of eigenvalues of an RST matrix is bounded from below by the absolute minimum and from above by the absolute maximum of $$\mathcal {R}_{\zeta } \left( \omega \right)+\mathcal {R}_{\zeta } \left( -\omega \right)$$. Thus considering a single reflector geometry, with $$\mathcal {R}_0 \left( \omega \right)=r_1 e^{i\omega \tau _1}$$, where τ1 is the two-way time between the source and the reflector, we get \begin{eqnarray} \mathcal {R}_{\zeta } \left( \omega \right)&=&\mathcal {R}_0 \left( \omega \right)\sum \limits _{i=0}^{\infty } \left( {\zeta } r_1 e^{i\omega \tau _1} \right)^i=\frac{r_1 e^{i\omega \tau _1}}{1-{\zeta } r_1 e^{i\omega \tau _1}} \end{eqnarray} (51) such that \begin{eqnarray} \mathcal {R}_{\zeta } \left( \omega \right)&+&\mathcal {R}_{\zeta } \left( -\omega \right)=\frac{2r_1 \left( \cos {\omega \tau _1}-{\zeta } r_1 \right)}{1+r_1^2-2r_1{\zeta } \cos \omega \tau _1} . \end{eqnarray} (52) It is easy to verify that the right hand side in eq. (52) has an extremum at ωτ1 = kπ, with an integer k (where k is even for ζ = 1 and odd for ζ = −1). The convergence criterion then implies that \begin{eqnarray} \rho \left( \sigma _z\mathcal {B}_{\zeta }\sigma _z \right)=\frac{2 |r|}{1- |r|}<1&\rightarrow |r|<\displaystyle\frac{1}{3} . \end{eqnarray} (53) This implies that already for a single reflector and deep enough focusing points (requiring long enough traces), the algorithm implementation given by the BD equation (13) diverges if the reflection coefficient exceeds one-third, which can be very easily verified numerically. 
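Such a check requires only a few lines. The sketch below is ours (the function name is hypothetical); it evaluates the symbol of eq. (52) on a dense grid and compares its maximum magnitude, which bounds the eigenvalues of the real symmetric Toeplitz matrix, against 2|r|/(1 − |r|).

import numpy as np

def single_reflector_radius(r1, zeta=-1.0, n=8192):
    wt = np.linspace(-np.pi, np.pi, n)                       # omega * tau_1
    Rz = r1 * np.exp(1j * wt) / (1.0 - zeta * r1 * np.exp(1j * wt))
    return np.max(np.abs(Rz + np.conj(Rz)))                  # symbol of eq. (52)

for r in (0.2, 1.0 / 3.0, 0.4):
    print(r, single_reflector_radius(r), 2 * r / (1 - r))
# the two numbers agree, and exceed unity precisely when r > 1/3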
6.3 MGS convergence properties The result above does not hold for the MGS scheme outlined in Section 3.3. Numerical experiments show that for a single reflector geometry the MGS scheme from eq. (23) will only start diverging if the reflection coefficient exceeds approximately 0.5. This has to do with the fact that the Neumann series is given in terms of operator ϕ (given by eq. 21), which does not solve the divergence problem, but reformulates the problem in such a way that it allows for a convergent solution for reflector geometries where the BD algorithm (13) already diverges. In order to illustrate the process, the following numerical experiment was designed. A set of 1-D overburden geometries with a free surface (ζ = −1) was created, where both the number of reflectors nr, the two-way times $$t_{i,n_r}$$ and the reflection coefficients $$r_{i,n_r}$$ were randomly generated. The two-way times were randomly selected from a uniform distribution on the interval $$0<t_{i,n_r}<2t_d=2$$ s sampled at 2dt = 8 ms. Only unique values of $$t_{i,n_r}$$ were kept to ensure that there is only one scatterer at a given position in time. The reflection coefficients were normally distributed with zero mean, and a standard deviation selected randomly (uniform probability distribution) from the interval [0.15, 0.35] to introduce some more variability in the scattering strengths between different geometries. All reflectors satisfying $$0.4< \left| r_{i,n_r} \right|$$ were removed from the overburden model, characterizing a strong internal scattering regime, yet assumed realistic enough for geophysical applications. Subsequently, a 4.0 s (dt = 4 ms) long 1-D synthetic full bandwidth (with Nyquist frequency of 125 Hz) seismic trace was computed analytically (a series of Kronecker delta spikes) and used to generate a convolution operator $$R_{\zeta =-1}$$. For every random geometry the eigenvalue spectra of the operators $$\sigma _z P\mathcal {B}_{\zeta }\sigma _z$$, PRζ(ζ ± T) and ϕ2 were calculated with Matlab’s function eig (using the P and T representation given in eq. 34), and hence (by selecting the eigenvalue with the maximum absolute value) the spectral radius was found. We have grouped the results in Fig. 1, where we have plotted the spectral radii of $$\sigma _z P\mathcal {B}_{\zeta }\sigma _z$$ (representing the BD case) versus those of ϕ2 (representing the MGS case). Note that the spectral radii of $$\sigma _z P \mathcal {B}_{\zeta }\sigma _z$$, PRζ(ζ + T) and PRζ(ζ − T) turn out to be identical. 
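For reference, one way of drawing such a random ensemble (our sketch, with hypothetical names, following the description above) is:

import numpy as np

def random_overburden(rng, t_d=1.0, dt=0.004, r_max=0.4):
    n_r = rng.integers(3, 31)                        # 2 < n_r <= 30
    grid = np.arange(2 * dt, 2 * t_d, 2 * dt)        # unique on-grid two-way times
    tau = np.sort(rng.choice(grid, size=n_r, replace=False))
    r = rng.normal(0.0, rng.uniform(0.15, 0.35), size=n_r)
    keep = np.abs(r) < r_max                         # remove extreme outliers
    return r[keep], tau[keep]

rng = np.random.default_rng(1)
models = [random_overburden(rng) for _ in range(1000)]
# each (r, tau) pair can be passed through the trace, operator and
# spectral-radius sketches above to reproduce a scatter plot like Fig. 1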
Figure 1. A scatter plot demonstrating the relationship between the spectral radii $$\rho \left( \sigma _zP \mathcal {B}\sigma _z \right)$$ (BD, left vertical axis) and ρ(ϕ2) (MGS, horizontal axis). Plots have been created by generating 1000 random 1-D overburdens with a random number (2 < nr ≤ 30, uniformly distributed) of randomly localized (uniformly distributed) scatterers with random reflection coefficients ri (normally distributed with mean zero, and standard deviation 0.1) with extreme outliers, that is, |ri| > 0.4, removed. The overburdens have been classified by the number of overburden reflectors nr (regardless of their strength): nr ≤ 5 (blue dot, 164), 6 ≤ nr ≤ 10 (red triangle, 249), 11 ≤ nr ≤ 15 (green diamond, 271), and 16 ≤ nr ≤ 30 (black square, 316), where the parentheses contain the representing colour, symbol, and total number of overburdens in a given class. The left panel shows a wide range of the results (although it does not display the cases of very large spectral radii), while the right panel presents a close-up describing the spectral radii behaviour in the parameter area where either of the series is convergent, or just becomes divergent. On both plots, probability density functions (PDFs) of the given classes (blue, red, green or black) as a function of the spectral radius ρ(ϕ2) are plotted (corresponding y-axis shown on the right; note the difference between the left (spectral radius) and right (PDF) vertical axes, also between the two plots). The grey line is given by $$2\rho \left( \phi ^2 \right)=\rho \left( \sigma _z \mathcal {B} \sigma _z \right)$$; it is important to note that this relationship no longer holds for spectral radii ρ(ϕ2) far outside the parameter domain where the series is convergent. In either case where the spectral radius exceeds 1, the Neumann series expansion diverges. It is noteworthy that for ζ = ±1 in a certain parameter domain, the spectral radius of ϕ2 is half that of $$\sigma _z P\mathcal {B}\sigma _z$$, that is, $$2\rho \left( \phi ^2 \right)\approx \rho \left( \sigma _z P\mathcal {B}\sigma _z \right)$$ (see Fig. 1), which means that in the presence of free-surface effects the MGS scheme (23) still converges, while the BD Neumann series (13) already diverges. 
If we translate this result to our earlier analysis, the relation $$2\rho \left( \phi ^2 \right)\approx \rho \left( \sigma _z P \mathcal {B}_{\zeta }\sigma _z \right)$$ implies \begin{eqnarray} \rho (\phi ^2)=\frac{ |r|}{1- |r|}\lesssim 1 &\rightarrow |r|\lesssim \displaystyle\frac{1}{2} . \end{eqnarray} (54) Lastly, we want to point out that in the absence of surface-related multiples the MGS scheme reduces to the (BD) Neumann series expansion, and it should not be surprising that $$\rho \left( \phi ^2 \right)=\rho \left( \sigma _z P\mathcal {B}\sigma _z \right)$$, that is, the MGS scheme offers no benefit over the other schemes outlined above. 6.4 Empirical spectral analysis in the presence of projector P In the section above we have considered the asymptotic case of td → ∞, allowing us to work analytically. The analysis showed that single reflector geometries can lead to divergent Neumann series expansions. We showed that this is only true when the first arrival time td is much larger than the onset of the first surface-related multiple. This means that some of the Neumann series solutions to the Marchenko equation might still converge when td is small, as exemplified by a numerical example in Singh et al. (2015), where, assuming that the model is homogeneous when extended below, divergence sets in beyond a depth of 12 km (≈3 s). In this section we will illustrate the onset of divergence in the Neumann series expansion as a function of depth. To achieve this we cannot simply replace the traces $$\mathcal {R}_{\zeta } \left( t \right)$$ with their truncated equivalents $$\tilde{\mathcal {R}}_{\zeta } \left( t,2t_d \right)$$ in the characteristic polynomials χ (see Appendix B1), as this is obviously not equivalent. In the absence of transparent analytical results we aid ourselves with numerical experiments. We may either choose a large number of weak reflectors, or a couple of (slightly less realistic) strong ones to illustrate our point. For greater transparency, we choose the latter, and pick a two-reflector model with r = 0.4 (for which either Neumann series expansion scheme should diverge for sufficiently large depths). The two-way traveltimes from the surface to each reflector are t1 = 0.4 s and t2 = 0.48 s. We analytically generate a trace along the same lines as in Section 6.3. We perform Marchenko inversion by Neumann series expansion for a range of depths with corresponding one-way traveltimes 0.248 s ≤ td ≤ 2 s, we calculate the spectral radii of the operators involved in the Neumann series for every depth, and plot the spectral radii as a function of focusing depth in Fig. 2. Two things become immediately apparent: (1) while the MGS scheme continues to converge, the BD scheme is already divergent for that depth, however the ratio of their spectral radii is a function of depth; and (2) the spectral radius function ρ(td) is a step function with a discontinuity hallmarked by an extra event (Kronecker delta) being present in the 0 < t < 2td window in the seismic trace, that is, the minimum length of the seismic trace that is used to construct the solution to the Marchenko equation. 
Figure 2. (a) Seismic trace used, with surface-related multiples ζ = −1 (note that the horizontal scale is double the one in plot (b), two-way versus one-way time). (b) Spectral radii $$\rho (\sigma _z P\mathcal {B}\sigma _z)$$ (BD, dashed red), ρ(ϕ) (black) and ρ(ϕ2) (MGS, dash-dotted blue) as a function of depth (one-way time) for a model of two reflectors with reflectivity of 0.4 located at two-way times of t1 = 0.4 and t2 = 0.48. Additionally we plot a grey horizontal dashed line at 1 indicating the boundary of absolute convergence of the algorithm, as well as a thick green line which is given by the ratio $$\rho \left( \sigma _z P\mathcal {B}\sigma _z \right)/\rho \left( \phi ^2 \right)$$. The minimum focusing depth corresponds to the depth (in one-way time) below the deeper reflector. We can see that the BD matrix scheme (13) stops converging (spectral radius exceeds unity) past the point where the combined second-order surface multiple due to r1 and r2, appended with an additional internal reverberation between r1 and r2, enters the 0 < t < 2td window (see the vertical dashed line at a focusing depth of 0.44 s in Fig. 2). For comparison, the MGS scheme crosses the ρ = 1 line immediately after the 0 < t < 2td window contains the fourth-order surface multiple which scattered three times off r2 and once off r1 (depth of 0.92 s). It is noteworthy that the spectral radius is a piece-wise constant function (see Fig. 2): there are small increments every 0.04 s, which correspond to the one-way time interval between the reflectors (an extra interbed multiple present inside the −td < t < td window), and larger increments every 0.2 and 0.24 s corresponding to one-way traveltimes between the surface and the reflectors, hallmarking an extra surface multiple being present inside the −td < t < td window (some main events are indicated with a dashed vertical line). Clearly the two algorithms are limited in their ability to deliver solutions by means of a convergent Neumann series expansion, and the MGS (blue) algorithm converges for deeper focusing depths. Interestingly enough, the ratio $$\rho \left( \sigma _z P\mathcal {B}\sigma _z \right) /\rho \left( \phi ^2 \right)\approx 2$$ is also a function of depth and converges to some value (about 1.85 here) asymptotically, indicating that the relationship between the spectral radii of the two methods is substantially more complicated than given by a simple ‘empirical’ expression. Before closing this subsection we would like to address two points. For one, the step-like behaviour of the spectral radius ρ(td) function will become increasingly continuous as the reflection coefficients ri become weaker and their number grows to infinity. 
Second, we would like to emphasize that it might take many iterations to see divergence occur, especially for ρ = 1 + ε with 0 < ε ≪ 1, since in such cases only a very small number of eigenvalues (out of 2tmax /dt, where tmax  is the trace length) might be greater than one. In other words, the boundary ρ = 1 between the convergent and divergent regime is very sharp, but seeing it manifest itself might take many iterations; that is, an apparently non-divergent result does not guarantee an accurate solution. 6.5 Convergence and divergence in 2-D The analysis presented so far only covers one-dimensional Marchenko inversion. One could ask whether the divergence described above will also occur in the presence of geometrical spreading, which suppresses the perceived amplitudes of the data used. This subsection will be dedicated to showing that divergence is as much of an issue in 2-D or 3-D as it is in 1-D. Therefore we extend the 1-D model from Section 6.4 to 2-D and perform analogous analyses. In the previous subsection we have noticed two things. First, for any given reflector geometry, schemes BD (13) or MGS (23), if divergent for asymptotic focusing depths, might converge for shallower depths. Second, whereas in 1-D we can afford to iterate the scheme for hundreds of iterations, such an experiment is not affordable in 2-D. For this reason, in order to show that 2-D suffers to the same extent as 1-D, we choose a 2-D version of a previously studied 1-D example where (1) the focusing depths are 100 m below the depth at which the 1-D scheme diverged, and (2) the scheme diverged after a handful of iterations; we then show that the scheme will diverge past a certain depth for both the 1-D and the 2-D data sets based on the same reflector geometry. The model contains a free surface ζ = −1, is laterally invariant and free of refractions. The associated density contrasts might be unrealistic for geophysical applications, but we chose the parameters simply to illustrate the phenomenon, which becomes much less transparent if a (geologically realistic) large number of weaker reflectors is considered, where the schemes may diverge in the strong internal scattering regime, as evidenced in the previous section. We use the two-reflector model from Section 6.4, with reflection coefficients r1 = r2 = 0.4, and two-way traveltimes t1 = 0.40 s and t2 = 0.48 s, which for a background velocity of 2000 m s−1 corresponds to two reflectors at 200 and 240 m depth (apart from the free surface, ζ = −1). We analytically generate a synthetic fixed-spread (broadband) data set, with a maximum offset of 4000 m, 401 sources and as many co-located receivers spaced 10 m apart. In order to avoid aliasing noise we apply a (60 Hz) high-cut filter to the data, and we dress the inverse first arrival (primary focusing) field $$T_d^{\text{inv}}$$ with a 30 Hz Ricker wavelet. In order to accommodate the finite wavelet width, the projector is shifted by ε = 32 ms. Since the coda is expected to have contributions at −td + 80 ms, this additional ε shift will leave it unaffected; hence short-period interbed artefacts will not be introduced in the focusing fields (Slob et al. 2014; van der Neut et al. 2015). A simple calculation from Section 6.4, supported by a 1-D test, suggests that the BD and MGS schemes should no longer converge for depths exceeding 800 and 1800 m, respectively (see Fig. 2). We present results for the more robust MGS scheme. 
We have picked three focusing depths: 1800, 2200 and 3000 m, corresponding to one-way zero-offset first arrival times of 0.9, 1.1, and 1.5 s. The results are shown in Fig. 3. The MGS scheme converged for the focusing depth of 1800 m in 2-D, and diverged for depths of 2200 and 3000 m in exactly the same way as its 1-D counterpart. For example, at a 3000 m depth, the 1-D algorithm would have an MGS spectral radius ρ(ϕ2) of about 1.4, which should show significant signs of divergence after about 20 iterations (as $$1.4^{20}\approx 10^3$$). Figure 3. Shot panels of the focusing function $$f_1^-$$, calculated using the MGS Neumann series for synthetic data generated using a model with a free surface (ζ = −1) at $$x_{0,3}=0$$ m, and two reflectors with reflectivity of r1 = r2 = 0.4 at 200 m and 240 m depth. The focal point is given by $$\vec{x}_i= \left( x_{i,1},x_{i,3} \right)$$, where the lateral focusing coordinate is $$x_{i,1}=2000$$ m, and three focusing depths of $$x_{i,3}=1800$$ m (top row), 2200 m (middle row), and 3000 m (bottom row) are considered. The result is shown after 5 (first column), 10 (second column), 20 (third column) and 30 (fourth column) iterations of the MGS Neumann scheme, eq. (23). The plot below the shot panels depicts the behaviour of the ℓ2-norm (for visualization purposes a log10 scale is used) of the shot panels above as a function of the number of iterations. In the results presented in Fig. 3, it is important to notice that the norm of the solution shows a non-monotonic growth for the divergent (xi,3 = 3000 m—blue dot-dashed line) and marginally divergent (xi,3 = 2200 m—red dashed line) cases, however it stays almost constant for the convergent case (xi,3 = 1800 m—black continuous line). Note that in 2-D the divergence occurred past the same depths as it did in 1-D. Similar experiments can be repeated for other reflector geometries, and the divergence boundary should qualitatively not be affected by the dimensionality of the overburden model used. This is because one can always map a 2-D or 3-D constant-velocity-background, (horizontally) translationally invariant model onto one in 1-D, perform the analysis in one dimension and use the reverse mapping back onto two or three dimensions; that is, there is an isomorphism between convolutions in 1-D and convolutions and stacking (with no aperture limitations) in 2-D or 3-D. 6.6 Convergence in the presence of dissipation We illustrate the failure of the Neumann series to converge in the presence of dissipation with a simple example, which aims to act as a proof that the iterative substitution algorithm does not converge in general. 
Consider a three-layer (two-reflector) model, with the dissipation upon passing through the ith layer and back given by $$e^{-\lambda _i}<1$$, in which case the reflection response is given by \begin{eqnarray} R_0(\omega ) &=& \frac{z_1 e^{-\lambda _1}+z_2e^{-\lambda _1-\lambda _2}}{1+z_2z_1^* e^{-\lambda _2}} \nonumber \\ \bar{R}_0(\omega ) &=& \frac{z_1 e^{\lambda _1}+z_2e^{\lambda _1+\lambda _2}}{1+z_2z_1^* e^{\lambda _2}} . \end{eqnarray} (55) In this case the operator $$RT\bar{R}T$$ expressed in the frequency domain takes the form \begin{eqnarray} RT\bar{R}T= R_0(\omega )\bar{R} \left( -\omega \right) . \end{eqnarray} (56) It is straightforward to verify that by substituting large values for |ri| < 1 and λi, representing strong scattering and strong dissipation, the spectral radius $$\left| \mu _{\rm max} \right|= \left| R_0(\omega )\bar{R} \left( -\omega \right) \right|$$ will exceed unity for some frequency ω, and as a result the Neumann series solution suggested in Slob (2016) will diverge and fail to yield a solution. This is not due to numerical precision issues when calculating $$\bar{R}$$, which may have very large amplitude, but rather it is a fundamental failure of the Neumann series, which no longer accurately represents the inverse of the Marchenko equation operator. For the trivial case of a single reflector, where $$R_0(\omega )\bar{R} \left( -\omega \right)=r_1^2$$, the series should, in theory, always converge. 7 NON-NEUMANN TYPE SOLVERS The Marchenko equation can be thought of as an inversion problem, and a series expansion scheme, such as BD (13) or MGS (23), is just one of the ways that (under restricted conditions) it can be solved. We present two alternative methods which can be applied to solving the Marchenko equation: LSQR (Paige & Saunders 1982a,b), outlined below, and a Merchant and Parks (Merchant & Parks 1982) take on a Levinson-type algorithm (Levinson 1947), which we outline in Appendix D. A least-squares solver such as LSQR will, for a consistent set of equations, converge to a solution with a monotonically decreasing residual (Paige & Saunders 1982a); thus, in the presence of surface-related multiples, LSQR, in contrast to the Neumann-expansion MGS or BD scheme, will always converge. In Fig. 4, we show an example of LSQR applied to a 1-D example. Figure 4. The true redatumed response (top horizontal panel) in the model of 35 reflectors (right vertical panel) with a focusing time of td = 1 s. The remaining four horizontal panels show the standard error in the redatumed response due to unconverged Marchenko inversion for: the MGS Neumann series without free surface (1st, green), LSQR without free surface (2nd, red), LSQR with a free surface (3rd, blue), and the ‘Cross’-LSQR (bottom, blue). Inside individual panels, the errors after $$10^2$$, $$10^3$$, $$5\times 10^3$$ and $$10^4$$ iterations are shown from top to bottom, with the lower vertical scales identical to the one of the top trace (with shifted origin). The vertical scale factor of $$10^{-6}$$ only applies to the second panel (MGS), which illustrates the superior convergence of this method. 
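Setting up such a least-squares solve is straightforward in the matrix-vector representation of Section 4. The sketch below is ours (marchenko_lsqr is a hypothetical name) and simply hands the coupled system (5) to SciPy's LSQR routine; it is an illustration of the idea rather than the exact implementation used for the figures.

import numpy as np
from scipy.sparse.linalg import lsqr

def marchenko_lsqr(R, P, Td_inv, zeta=-1.0, n_iter=1000):
    # R, P    : convolution and projection matrices of Section 4
    # Td_inv  : vector representing the inverse of the direct arrival
    n = R.shape[0]
    T = np.fliplr(np.eye(n))
    A = np.block([[np.eye(n) + zeta * P @ R, -P @ R @ T],
                  [-P @ R @ T,               np.eye(n) + zeta * P @ R]])
    b = np.concatenate([P @ R @ Td_inv, np.zeros(n)])
    sol = lsqr(A, b, iter_lim=n_iter)[0]
    return sol[:n], sol[n:]          # estimates of f_1^- and M_+^*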
In this subsection we aim to analyse the effectiveness of the LSQR solver in tackling Marchenko inversion with and without surface-related multiples, and compare it to the rate of convergence of the Neumann series MGS scheme (23) without a free surface. The BD or MGS schemes can be regarded as two different LSQR pre-conditioning schemes. We tested both, and observed no significant difference in convergence. We continue working in the same representation as before, that is, the vector-matrix representation from Section 4. There are several reasons for this. First, we have found that band-limited data and a $$T_d^{\text{inv}}$$ dressed with a wavelet result in slower convergence. Second, for sufficiently complicated overburdens we found that a large number of iterations is needed to appreciably suppress any artefacts, thus making a 1-D implementation less computationally costly (yet presenting the same phenomenon, which only becomes worse in 2-D or 3-D). This is particularly important if our numerical experiments are to be repeated for various strong internal scattering configurations. Additionally, working with pure Kronecker delta spikes allows us to avoid short-period interbed multiple problems (a known limitation of Marchenko inversion; Slob et al. 2014). Lastly, the Green’s function deconvolution step in 1-D leaves no residual (as opposed to the MDD in 2-D), which allows us to study the effects of imperfect convergence on the obtained redatumed response in isolation. In general the approximate least-squares solution to a problem Ax = b obtained after n iterations, xn, will be off by an error εn, such that \begin{eqnarray} x_n=x-A^{-1}r\equiv x - \epsilon _n . \end{eqnarray} (57) In the case of the inversion of eq. (5), where A−1 exists, this means that \begin{eqnarray} \vec{f}_{1,n}=\vec{f}_1+ \vec{\epsilon }_n, \end{eqnarray} (58) where εn denotes the extra contribution to the focusing functions, which can take nonzero values for all −∞ < t < ∞, as opposed to −td < t < td for $$\vec{f}_1$$ and td < t for $$\vec{G}_{\zeta }$$. As a result $$\vec{G}_{n,\zeta }$$ can be expressed in terms of the true solutions by \begin{eqnarray} \vec{G}_{n,\zeta }&=& \left( P \mathcal {A}_{\zeta } + \left( 1-P \right)\mathcal {A}_{\zeta } \right)\vec{f}_{1,n}\nonumber \\ &=& \left( P \mathcal {A}_{\zeta } + \left( 1-P \right)\mathcal {A}_{\zeta } \right) (\vec{f}_1+ \vec{\epsilon }_n)\nonumber \\ &=& P \mathcal {A}_{\zeta }\vec{\epsilon }_n + \left( 1-P \right)\mathcal {A}_{\zeta }\vec{\epsilon }_n+ \left( 1-P \right)\mathcal {A}_{\zeta }\vec{f}_1\nonumber \\ &\equiv & \vec{\epsilon }_{n,P}+ \vec{\epsilon }_{n,1-P}+\vec{G}_{\zeta } \end{eqnarray} (59) where $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ denote the erroneous contributions to $$\vec{G}_{n,\zeta }$$ confined to the time intervals −td < t < td and td < t, respectively. 
The former error can be removed by applying (1 − P) to the solution, but the latter will lead to extra contributions which might affect the quality of the redatumed response. Since finding the redatumed response requires a deconvolution step, analytical evaluation of the impact of $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ on the redatumed response Rred is not straightforward. We show $$\vec{\epsilon }_{n,P}$$ and $$\vec{\epsilon }_{n,1-P}$$ in Fig. 6. In order to assess the impact of unconverged Marchenko inversion we can make use of the following scheme. For any reflector geometry (with reflectors placed on-sample) we analytically calculate a reflection response due to a 1-D delta-source. Next, we find the exact (true) solution to the Marchenko equation by finding a matrix representation of (1 + PRζ(ζ ± T)), given some seismic trace, and directly inverting it in Matlab (which performs an LU decomposition). Alternatively we can use a model-driven and wave equation based method like the one given in Resnick et al. (1986) to find the focusing fields; thus we find the true focusing and Green’s functions, as well as the true redatumed response (the direct inversion and the model-based scheme yielded solutions identical up to numerical noise). To quantify convergence we define the normalized residual norm (NRN) \begin{eqnarray} {}\text{NRN}_n=\Vert r_n\Vert _2/\Vert PR_{\zeta }T_d^{\text{inv}}\Vert _2 \end{eqnarray} (60) where it is important to note that the normalization is required since the residual scale differs between the free-surface scheme and the non-free-surface schemes, due to the presence of surface-related multiples. In the expression above we used \begin{eqnarray} r_n=\left(\begin{array}{c{@}{\quad}c}1+{\zeta } R & -RT\\ -RT & 1+{\zeta } R\end{array}\right)\left(\begin{array}{c}f_{1,n}^-\\ M_{+,n}^*\end{array}\right)-\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right). \end{eqnarray} (61) We also define the (standard) error and the normalized-standard-error norm (NSEN) \begin{eqnarray} {\rm error}_{i,n} \left( h \right)&=&h_{i,n}-h_{\rm true}, \nonumber \\ {\rm NSEN}_{i,n} \left( h \right)&=&\frac{\Vert {\rm error}_{i,n} \left( h \right)\Vert _2}{\Vert h_{\rm true}\Vert _2}. \end{eqnarray} (62) Here i = 1, 2, 3 refers to the scheme being used: Neumann-series ζ = 0 (green), LSQR ζ = 0 (red), and LSQR ζ = −1 (blue), respectively, where the colour in parentheses is used throughout Figs 4–6. Moreover, n stands for the number of iterations applied, and h is the object being evaluated (e.g. a focusing function or the redatumed response). Note that the residual norm and the NSEN are not the same. LSQR inversion attempts to minimize a residual; however, it does not guarantee a solution closer to the true one, and hence a lower NRNn does not necessarily imply a lower NSENn(h), nor that both decrease at the same rate. Multiple tests based on random configurations of reflector locations and strengths have shown that NRNn is a function which is well described by two piecewise-defined power-law decays (more so in the case of the LSQR solvers, methods (2) and (3), and less so for the MGS Neumann expansion, method (1)) \begin{eqnarray} \log {\rm NRN}_n &=&\Theta \left( n_c-n \right) k_1 \log \left( n_c-n \right)\nonumber \\ &&+\,\Theta \left( n-n_c \right) k_2 \log \left( n-n_c \right) , \end{eqnarray} (63) where k1 and k2 are the power-decay rates, with k2 < k1 < 0. 
Here nc stands for a critical number of iterations, that is, one at which the decay rate changes to an accelerated one. Note that nc is not a well-defined number but an approximate region, and that depending on the complexity of the overburden nc can range from $$10^1$$ to $$10^3$$. This behaviour seems to suggest that for n < nc the algorithm is in a sluggish convergence regime, which is characterized by the obtained focusing function substantially differing from the correct answer, yet still having an appreciably low residual (see Fig. 5). Every iteration at this stage of the algorithm appears to attempt to put more and more events into $$f_1^{\pm }$$ where they do not seem to belong. This is the reason why adaptive subtraction schemes, which assume kinematically correct events, no longer work here. Only once the algorithm ‘locks in’ on the almost correct configuration can it accelerate its convergence. It is important to note that for simple geometries (i.e. a weak internal scattering regime) the sluggish convergence regime (n < nc) can be left very early, and hence the piecewise power-law decay might not be observed. In this case it is very likely that internal multiples are not the first-order imaging problem. Figure 5. Convergence of schemes (1), (2) and (3): left, middle and right panel, respectively. The continuous line shows the residual norm, the dashed line is the normalized standard error norm (NSEN) of the focusing function $$f_1^+$$, and the dotted line shows the NSEN of the redatumed response as a function of the number of iterations. Both the residuals as well as the iteration number are shown on a log10 scale. On the right panel, in addition, the normalized error residual due to the ‘Cross’-LSQR scheme is shown (thin continuous line). We illustrate these considerations with a numerical experiment using a 35-reflector geometry with a focusing depth below the 30th reflector (hence forming a 30-reflector overburden and a 5-reflector target zone) with td = 1 s. See the vertical panel in Fig. 4. We analytically find a synthetic trace due to a delta-source without and with surface-related multiples, which are the inputs to methods (1) MGS Neumann series ζ = 0, (2) LSQR ζ = 0 and (3) LSQR ζ = −1, respectively. We let each of these schemes run for over 10,000 iterations, and at 1, 5 and every multiple of 10 iterations we find: the solutions $$\vec{f}_{1,n,i}$$, $$\vec{G}_{1,n,i}$$ and Rred,n,i due to methods i = 1, 2 and 3, the normalized residual NRNn, $${\rm error}_{i,n} \left( f_1^+ \right)$$, errori, n(Rred) and their corresponding NSENs. We can see in Fig. 5 that the MGS Neumann series method (green) shows dramatically fast convergence, with a residual (continuous line) of $$10^{-5}$$ reached in approximately 100 iterations, a feat for which the LSQR scheme with the same data input needs about an order of magnitude more iterations, and the LSQR with surface-related multiples needs a factor of 5 more. 
Moreover, we see a manifestation of the double power-law decay (63) in all three methods, most clearly for methods (2) and (3). In addition, this experiment shows that $${\rm NSEN} \left( f_1^+ \right)$$ (dashed line) as well as NSEN(Rred) (dotted line) decay at a much smaller rate with the number of iterations relative to the residual; this holds for all methods but is most manifest for the LSQR solver. We also observe that at some point the NSEN(Rred) calculated using the focusing functions from the LSQR ζ = −1 scheme plateaus at the considerably high value of $$10^{-1.5}$$, suggesting that surface-related multiples present in the Green’s functions, in combination with the poor convergence of the LSQR ζ = −1 scheme, hinder the deconvolution step. In order to investigate this last point further we perform one more experiment, which we call ‘Cross’-LSQR. We convolve $$\vec{f}_{1,n,3}$$ with Rζ = 0 to get $$\vec{G}_{0,n}\ne \vec{G}_{{\zeta =-1},n,3}$$, that is, a Green’s function without free-surface effects, yet containing the residual error due to the LSQR ζ = −1 method, which did have the free surface. In practice we would not be able to do this, since we lack the reflectivity model, unless we performed a perfect surface-multiple removal. We observe that when surface-multiple-free data are used to produce the Green’s functions, the convergence of the redatumed response no longer plateaus (thin blue line in the right panel of Fig. 5), suggesting that the presence of surface-related multiples in the Green’s functions might pose a problem. The (rather significant) errors in the redatumed response due to an insufficient number of iterations are presented in Fig. 4. Note that for the purpose of this example we placed several strong reflectors in the target zone; however, target reflectors with reflection coefficients of the order of 0.1 can easily be overwhelmed by the residual after 100, or even after a prohibitively expensive 1000, LSQR iterations.

Figure 6. Top panel: true downward-propagating function G+. The lower four panels show the error in the estimated G+ for methods (1)–(3) and the ‘Cross’-LSQR method, respectively. Inside every panel the two traces shown correspond to the errors after $$10^2$$ (top) and $$10^3$$ (bottom) iterations, with the lower vertical scale identical to that of the top trace in that panel (with shifted origin). The $$\epsilon ^+_{n,P}$$ contribution can be seen at times t < td, while that of $$\epsilon ^+_{n,1-P}$$ at times td ≤ t. The third panel shows the difference between $$G_{{\zeta =-1},n,3}^{+}$$ and the true $$G_{{\zeta =-1}}^{+}$$ (not shown in this figure). The vertical scale factor of $$10^{-6}$$ applies only to the second panel (MGS), which illustrates the superior convergence of this method.
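The break point nc and the decay rates k1 and k2 of eq. (63) can, for instance, be estimated from a recorded NRN curve by scanning candidate break points and fitting a straight line on either side in log–log coordinates. The sketch below is one possible way of doing so (an assumption on our part, not the authors' fitting procedure) and presumes that NRNn has been recorded, with positive values, at every iteration.

import numpy as np

def fit_double_power_law(nrn_history, margin=5):
    # Fit eq. (63): log NRN_n ~ k1 log(nc - n) for n < nc and k2 log(n - nc) for n > nc,
    # scanning candidate break points nc and keeping the one with the smallest misfit.
    n = np.arange(1, len(nrn_history) + 1)
    log_r = np.log10(np.asarray(nrn_history))
    best = None
    for nc in range(margin + 1, len(nrn_history) - margin):
        left, right = n < nc, n > nc
        xl, xr = np.log10(nc - n[left]), np.log10(n[right] - nc)
        k1, c1 = np.polyfit(xl, log_r[left], 1)
        k2, c2 = np.polyfit(xr, log_r[right], 1)
        misfit = (np.sum((k1 * xl + c1 - log_r[left]) ** 2)
                  + np.sum((k2 * xr + c2 - log_r[right]) ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, nc, k1, k2)
    return best[1:]   # (nc, k1, k2)

Applied to residual curves such as those of Fig. 5, a fit of this kind returns an approximate location of nc together with the two decay rates.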
8 CONCLUSIONS

In this work we have studied the effect of a free surface on Marchenko inversion and redatuming, in which the Marchenko equation as given by Wapenaar et al. (2014) is extended with the extra terms given by Singh et al. (2015). We have shown that in the strong internal scattering regime, in the presence of surface-related multiples, where Marchenko redatuming could make a significant impact, the proposed iterative Neumann series solvers do not always converge. We have shown that there is a class of pre-conditioners which modify the convergence behaviour of the Marchenko inversion and extend the range of application of series solvers in the presence of surface-related multiples. Additionally, we have determined, analytically as well as numerically, the parameter range in which the surface-multiple schemes converge, and we have shown not only that Marchenko inversion yields a unique solution, but also that the internal-multiple-only implementation of the scheme always has a convergent Neumann series. As a solution to the problem of a diverging Neumann series in the presence of a free surface, we have proposed two alternative solvers: one which is exact and only applicable in 1-D, and another which is asymptotically exact and applicable in 1-D, 2-D and 3-D. For the latter, we have observed a risk of obtaining an approximate solution which, while minimizing the residual, can have substantial artefacts in the reconstructed Green’s functions and consequently in the redatumed response. Lastly, we note that the presence of noise may also cause the series expansion to diverge, even in the absence of surface-related multiples, a topic requiring further investigation.

Acknowledgements

The authors wish to thank Fons ten Kroode, Chris Willacy and Zofia Czyczula-Rudjord for fruitful discussions and for reviewing the manuscript, and the reviewers for their constructive comments.

REFERENCES

Broggini F., Wapenaar K., van der Neut J., Snieder R., 2014. Data-driven Green’s function retrieval and application to imaging with multidimensional deconvolution, J. geophys. Res., 119, 425–441.
da Costa Filho C.A., Ravasi M., Curtis A., Meles G.A., 2014. Elastodynamic Green’s function retrieval through single-sided Marchenko inverse scattering, Phys. Rev. E, 90, 063201.
da Costa Filho C.A., Ravasi M., Curtis A., 2015. Elastic P- and S-wave autofocus imaging with primaries and internal multiples, Geophysics, 80, S187–S202.
da Costa Filho C.A., Meles G.A., Curtis A., 2017. Elastic internal multiple analysis and attenuation using Marchenko and interferometric methods, Geophysics, 82, Q1–Q12.
Fredholm E.I., 1903. Sur une classe d’équations fonctionnelles, Acta Math., 27, 365–390.
Gauss C.F., 1903. Werke, Königlichen Gesellschaft der Wissenschaften zu Göttingen, 9.
Grenander U., Szegö G., 1958. Toeplitz Forms and Their Applications, AMS Chelsea Publishing Series, University of California Press.
Hazewinkel M., 2001. Encyclopedia of Mathematics, Springer.
Lax P.D., 2002. Functional Analysis, Wiley-Interscience.
Levinson N., 1947. The Wiener RMS error criterion in filter design and prediction, J. Math. Phys., 25, 261–278.
Mathews J., Walker R.L., 1970. Mathematical Methods of Physics, 2nd edn, W. A. Benjamin.
Meles G.A., Löer K., Ravasi M., Curtis A., da Costa Filho C.A., 2015. Internal multiple prediction and removal using Marchenko autofocusing and seismic interferometry, Geophysics, 80, A7–A11.
Meles G.A., Wapenaar K., Curtis A., 2016. Reconstructing the primary reflections in seismic data by Marchenko redatuming and convolutional interferometry, Geophysics, 81, Q15–Q26.
Merchant G., Parks T.W., 1982. Efficient solution of a Toeplitz-plus-Hankel coefficient matrix system of equations, IEEE Trans. Acoust. Speech Signal Process., 30, 40–44.
Paige C.C., Saunders M.A., 1982a. LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8(1), 43–71.
Paige C.C., Saunders M.A., 1982b. LSQR: sparse linear equations and least-squares problems, ACM Trans. Math. Softw., 8(2), 195–209.
Ravasi M., Vasconcelos I., Kritski A., Curtis A., da Costa Filho C.A., Meles G.A., 2016. Target-oriented Marchenko imaging of a North Sea field, Geophys. J. Int., 205(1), 99–104.
Resnick J.R., Lerche I., Shuey R.T., 1986. Reflection, transmission, and the generalized primary wave, Geophys. J. Int., 87, 349–377.
Rose J.H., 2002a. “Single-sided” autofocusing of sound in layered materials, Inverse Probl., 18, 1923–1934.
Rose J.H., 2002b. Time reversal, focusing and exact inverse scattering: imaging of complex media with acoustic and seismic waves, in Topics in Applied Physics, 84, pp. 97–106, eds Fink M., Kuperman W.A., Montagner J.-P., Tourin A., Springer.
Singh S., Snieder R., Behura J., van der Neut J., Wapenaar K., Slob E., 2015. Marchenko imaging: imaging with primaries, internal multiples, and free-surface multiples, Geophysics, 80, S165–S174.
Slob E., 2016. Green’s function retrieval and Marchenko imaging in a dissipative acoustic medium, Phys. Rev. Lett., 116, 164301.
Slob E., Wapenaar K., Broggini F., Snieder R., 2014. Seismic reflector imaging using internal multiples with Marchenko-type equations, Geophysics, 79, S63–S76.
van der Neut J., Wapenaar K., 2016. Adaptive overburden elimination with the multidimensional Marchenko equation, Geophysics, 81, T265–T284.
van der Neut J., Wapenaar K., Thorbecke J., Slob E., Vasconcelos I., 2015. An illustration of adaptive Marchenko imaging, Leading Edge, 34(7), 818–822.
van der Neut J., Ravasi M., Liu Y., Vasconcelos I., 2017. Target-enclosed seismic imaging, Geophysics, 82, Q53–Q66.
Wapenaar K., 2014. Single-sided Marchenko focusing of compressional and shear waves, Phys. Rev. E, 90(6), 063202, doi:10.1103/PhysRevE.90.063202.
Wapenaar K., Slob E., 2014. On the Marchenko equation for multicomponent single-sided reflection data, Geophys. J. Int., 199, 1367–1371.
Wapenaar K., Broggini F., Slob E., Snieder R., 2013. Three-dimensional single-sided Marchenko inverse scattering, data-driven focusing, Green’s function retrieval, and their mutual relations, Phys. Rev. Lett., 110, 084301, doi:10.1103/PhysRevLett.110.084301.
Wapenaar K., Thorbecke J., van der Neut J., Broggini F., Slob E., Snieder R., 2014. Marchenko imaging, Geophysics, 79, WA39–WA57.
Young D.M. Jr, 1971. Iterative Solution of Large Linear Systems, Academic Press.

APPENDIX A: OBTAINING PRECONDITIONED EQUATIONS

In this section we present the technical details of the GS and MGS parametrizations of the surface-multiple Marchenko equation.

A1 GS case

For the GS decomposition the inverse of L reads \begin{eqnarray} L^{-1}&=&\left(\begin{array}{cc}\mathcal {Y} & 0\\ \mathcal {Y}PR_{\zeta }T\mathcal {Y} & \mathcal {Y} \end{array}\right) \end{eqnarray} (A1) where we defined $$\mathcal {Y}= \left( 1+{\zeta } PR_{\zeta } \right)^{-1}$$, which exists for all physical Rζ. We also get \begin{eqnarray} L^{-1}U &=&\left(\begin{array}{cc}0 & \mathcal {Y}PR_{\zeta }T\\ 0 & \left( \mathcal {Y}PR_{\zeta }T \right)^2 \end{array}\right) \end{eqnarray} (A2) and \begin{eqnarray} L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right) &=&\left(\begin{array}{c}\mathcal {Y}PR_{\zeta }T_d^{\text{inv}}\\ \mathcal {Y}PR_{\zeta }T\mathcal {Y}PR_{\zeta }T_d^{\text{inv}}\end{array}\right). \end{eqnarray} (A3) From eq. (8), the solution can be parametrized as \begin{eqnarray} \left(\begin{array}{cc}1 & -PR_{0}T\\ 0 & 1- \left( PR_{0}T \right)^2 \end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}PR_{0}T_d^{\text{inv}}\\ PR_{0}TPR_{0}T_d^{\text{inv}}\end{array}\right). \end{eqnarray} (A4)

A2 MGS case

For the MGS decomposition the inverse of L reads \begin{eqnarray} L^{-1}=\left(\begin{array}{cc}1 & 0\\ PR_{\zeta }T & 1\end{array}\right) \end{eqnarray} (A5) leading to \begin{eqnarray} L^{-1}U & = & \left(\begin{array}{cc}-{\zeta } PR_{\zeta } & PR_{\zeta }T\\ -{\zeta } PR_{\zeta }TPR_{\zeta } & \left( PR_{\zeta }T \right)^2-{\zeta } PR_{\zeta }\end{array}\right) \end{eqnarray} (A6) and \begin{eqnarray} L^{-1}\left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ 0\end{array}\right) & = & \left(\begin{array}{c}PR_{\zeta }T_d^{\text{inv}}\\ PR_{\zeta }TPR_{\zeta }T_d^{\text{inv}}\end{array}\right). \end{eqnarray} (A7) The matrix L−1U is in fact the square of another matrix ϕ ≡ Ωσx, \begin{eqnarray} \phi &=& \left(\begin{array}{cc}0 & 1\\ -{\zeta } PR_{\zeta } & PR_{\zeta }T\end{array}\right)\nonumber \\ &\equiv &\underbrace{\left(\begin{array}{cc}1 & 0\\ PR_{\zeta }T & -{\zeta } PR_{\zeta }\end{array}\right)}_{\Omega }\underbrace{\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right)}_{\sigma _x} , \end{eqnarray} (A8) such that L−1U can be rewritten as \begin{eqnarray} L^{-1}U=\phi ^2&=& \Omega \sigma _x \Omega \sigma _x= \Omega \times \left( \sigma _x \Omega \sigma _x \right)\nonumber \\ &=&\underbrace{\left(\begin{array}{cc}1 & 0\\ PR_{\zeta }T & -{\zeta } P R_{\zeta }\end{array}\right)}_{\Omega }\times \underbrace{\left(\begin{array}{cc}-{\zeta } P R_{\zeta } & PR_{\zeta }T\\ 0 & 1\end{array}\right)}_{\sigma _x \Omega \sigma _x} . \end{eqnarray} (A9)
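The block-matrix identity L−1U = ϕ2 of eqs (A6)–(A9) holds for arbitrary operators, so it can be checked numerically with small random matrices standing in for PRζ and T. The following sketch is such a sanity check only; the matrices involved are random stand-ins, not seismic operators.

import numpy as np

rng = np.random.default_rng(1)
n, zeta = 6, -1.0
PR = rng.standard_normal((n, n))      # random stand-in for P R_zeta
T = rng.standard_normal((n, n))       # random stand-in for the time-reversal operator
I, Z = np.eye(n), np.zeros((n, n))

# L^{-1} U as written in eq. (A6).
LinvU = np.block([[-zeta * PR,            PR @ T],
                  [-zeta * PR @ T @ PR,   PR @ T @ PR @ T - zeta * PR]])

# phi = Omega sigma_x as written in eq. (A8).
phi = np.block([[Z,           I],
                [-zeta * PR,  PR @ T]])

# eq. (A9): L^{-1} U = phi^2, which should hold to machine precision.
print(np.allclose(LinvU, phi @ phi))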
APPENDIX B: CHARACTERISTIC POLYNOMIALS

In this section we show the details of the characteristic polynomials from Section 5.

B1 BD formulation

The characteristic polynomial of a BD matrix $$\mathcal {M}_{\zeta } = {\rm diag} \left( R_{\zeta } \left( \zeta -T \right),R_{\zeta } \left( \zeta +T \right) \right)$$ is given by \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ \left( R_{\zeta } \left( \zeta -T \right)-\mu \right) \left( R_{\zeta } \left( \zeta +T \right)-\mu \right) \right]\nonumber \\ &=&\det \left[ R_{\zeta } \left( \zeta -T \right)-\mu \right]\det \left[ R_{\zeta } \left( \zeta +T \right)-\mu \right], \end{eqnarray} (B1) where μ is an eigenvalue of $$\mathcal {M}_{\zeta }$$ if it satisfies χζ(μ) = 0. Using the fact that the spectrum of eigenvalues is invariant under transposition, we can transpose the second determinant, and using $$\det A \det B=\det AB$$ we get \begin{eqnarray} \chi _{\zeta } \left( \mu \right)&=&\det \left[ R_{\zeta } \left( \zeta -T \right)-\mu \right]\det \left[ \left( \zeta +T \right)R_{\zeta }^T-\mu \right]\nonumber \\ &=&\det \left[ R_{\zeta } \left( \zeta ^2-T^2 \right)R_{\zeta }^T -\zeta \mu \left( R_{\zeta }+TR_{\zeta }T \right)+\mu ^2 \right] , \end{eqnarray} (B2) where in the last step we have used that $$R_{\zeta }^T=TR_{\zeta }T$$.

B2 GS formulation

We proceed similarly to the derivation above to find the characteristic polynomial of the matrix ϕ2: \begin{eqnarray} {\rm det} \left[ \phi ^2-\mu ^2 \right]&=&{\rm det} \left[ \phi -\mu \right]{\rm det} \left[ \phi +\mu \right]\nonumber \\ &=&{\rm det} \left[ \phi -\mu \right]{\rm det} \left[ \phi ^T+\mu \right]. \end{eqnarray} (B3) We then use the identity \begin{eqnarray} {\rm det}\left(\begin{array}{cc}A & B\\ C & D\end{array}\right) &=& {\rm det}\, {A}\, {\rm det} \left[ D-CA^{-1}B \right] , \end{eqnarray} (B4) which holds provided that detA ≠ 0, and, using the approximation $$PR \vec{x}\approx R\vec{x}$$ (for any $$\vec{x}$$), we get \begin{eqnarray} {\rm det} \left[ \phi -\mu \right]&=& {\rm det}\left(\begin{array}{cc}-\mu & 1\\ -{\zeta } R_{\zeta } & R_{\zeta }T-\mu \end{array}\right)\nonumber \\ &=& {\rm det} \left[ -\mu \right]{\rm det} \left[ \left( R_{\zeta }T-\mu \right)-{\zeta } R_{\zeta }\mu ^{-1} \right]\nonumber \\ &=&{\rm det} \left[ \mu ^2 -\mu R_{\zeta }T+{\zeta } R_{\zeta } \right]. \end{eqnarray} (B5) Similarly we obtain \begin{eqnarray} {\rm det} \left[ \phi ^T+\mu \right]&=& {\rm det} \left[ \mu ^2 + \mu R_{\zeta }T+{\zeta } R_{\zeta }^T \right], \end{eqnarray} (B6) where the relation (RζT)T = RζT was used in the process. Putting both results together yields \begin{eqnarray} {\rm det} \left[ \phi ^2-\mu ^2 \right] & = & {\rm det} \left[ \mu ^4-\mu ^2 \left( \zeta \left( R_{\zeta }+R_{\zeta }^T \right)+R_{\zeta }R_{\zeta }^T \right)\right. \nonumber\\ &&\left.+\,\zeta ^2R_{\zeta }R_{\zeta }^T \right]. \end{eqnarray} (B7)

B3 Dissipative formulation

Marchenko redatuming in the presence of dissipation was formulated in Slob (2016), where the presented autofocusing formulation amounts to inverting (using the large td, i.e. P = 1, approximation) \begin{eqnarray} \left(\begin{array}{cc}1 & -RT\\ -\bar{R}T & 1\end{array}\right)\left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)=\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) . \end{eqnarray} (B8)
Therefore, \begin{eqnarray} \left(\begin{array}{c}f_1^-\\ M_+^*\end{array}\right)&=&\left(\begin{array}{cc}1 & -RT\\ -\bar{R}T & 1\end{array}\right)^{-1}\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right)\nonumber \\ &\stackrel{?}{=}&\sum \limits _{k=0}^{\infty }\left(\begin{array}{cc}0 & RT\\ \bar{R}T & 0\end{array}\right)^k\left(\begin{array}{c}f_{1,0}^-\\ 0\end{array}\right) . \end{eqnarray} (B9) The series expansion above is a true inverse only if it is convergent. Using the same trick as in the previous section, we find that the maximum-magnitude roots of the characteristic polynomial \begin{eqnarray} {\rm det}\left(\begin{array}{cc}-\mu & RT\\ \bar{R}T & -\mu \end{array}\right)=0 \end{eqnarray} (B10) follow, by again using (B4), from \begin{eqnarray} {\rm det} \left[ \mu ^2-R\bar{R}^T \right]=0 . \end{eqnarray} (B11)

APPENDIX C: STRONGER BOUND ON THE MAXIMUM OF $$\left| \mathcal {R}_0 \left( \omega \right) \right|$$

In this section we return to the proof from Section 6.1. Since $$\left| \mathcal {R}_0(\omega ) \right|$$ in eq. (49) is strictly increasing as a function of x = |N(0)| ≥ 0, we can get an upper bound for $$\left| \mathcal {R}_0(\omega ) \right|$$ by finding an upper bound on x. In particular, using the triangle inequality |z1 + z2| ≤ |z1| + |z2| we have \begin{eqnarray} x(\omega ) = \left| N{(0)} \right| \le \sum _{k \text{ odd}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|) , \end{eqnarray} (C1) where εk is the elementary symmetric polynomial of order k in M variables (recall that M is the number of reflectors). This can be used to show that \begin{eqnarray} \left| \mathcal {R}_0(\omega ) \right| \le \frac{\sum _{k \text{ odd}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)}{\sum _{k \text{ even}} \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)} , \end{eqnarray} (C2) by using the identity \begin{eqnarray} \sum _k (-1)^k \epsilon _k( \left| r_1 \right|, \ldots , \left| r_M \right|)& = &\prod _i (1- \left| r_i \right|^2) \nonumber \\ &=& \prod _i (1-r_i^2) . \end{eqnarray} (C3) This means that |R0(ω)| satisfies not only the weaker bound (49), but also the stronger one (46).

APPENDIX D: LEAN MERCHANT–PARKS–LEVINSON (LMPL)

In Section 5 we have shown that the inversion problem with surface multiples and interbeds amounts to finding the inverse of a Toeplitz-plus-Hankel (T+H) matrix (1 + PR(ζ ± T)). In a more general context, this problem was studied by Merchant & Parks (1982), who, with a clever transformation, were able to map a T+H problem onto a mosaic Toeplitz matrix inversion, which can be handled by means of a Levinson algorithm (hence we call it the Merchant–Parks–Levinson inversion; for details see Merchant & Parks 1982). Unfortunately, the standard Levinson algorithm suffers from weak numerical instability. However, the symmetry of our particular T+H problem, that is, the fact that the entries of the Toeplitz matrix can be mapped onto those of the Hankel matrix, allows us to simplify it and derive an effective algorithm (termed the LMPL scheme), which does not suffer from the weak instability. We outline the algorithm below. For a given focusing depth zd and a corresponding velocity-model-dependent time td, we have a time-domain seismic trace given by a vector $$\vec{R}_{\zeta }= \left[ R_1,R_2,\ldots ,R_{N} \right]^T$$ truncated at N = ⌈2td/dt⌉ + 1 samples.
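As a concrete illustration of the system that the LMPL recursion (described next) is designed to invert, the sketch below assembles (1 + R(ζ ± T)) from a truncated trace, assuming that R acts by causal discrete convolution with the trace, that T is time reversal over the truncation window, and that the large-td (P = 1) approximation applies; ζR then supplies the Toeplitz part and RT the Hankel part. A direct (LU) solve of this small system can serve as a reference against which a fast Levinson-type recursion is checked. The trace and right-hand side are random stand-ins, not field data.

import numpy as np
from scipy.linalg import toeplitz, solve

def t_plus_h_system(trace, zeta, alpha):
    # Build (1 + R(zeta + alpha*T)) for a truncated trace R_1..R_N (alpha = +/-1):
    # R is the causal convolution (lower-triangular Toeplitz) matrix of the trace,
    # T is the time-reversal matrix, so zeta*R is the Toeplitz part and alpha*R@T the Hankel part.
    trace = np.asarray(trace, dtype=float)
    N = len(trace)
    R = toeplitz(trace, np.r_[trace[0], np.zeros(N - 1)])
    Trev = np.fliplr(np.eye(N))
    return np.eye(N) + R @ (zeta * np.eye(N) + alpha * Trev)

rng = np.random.default_rng(2)
trace = 0.1 * rng.standard_normal(64)     # toy trace standing in for R_zeta
rhs = rng.standard_normal(64)             # stands in for the right-hand side, e.g. P R_zeta T_d^inv

A = t_plus_h_system(trace, zeta=-1.0, alpha=+1)
x_ref = solve(A, rhs)                     # direct (LU) reference solution
print(np.linalg.norm(A @ x_ref - rhs))    # ~ machine precision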
Let $$\vec{y}_k$$ denote an arbitrary vector $$\vec{y}_{k}= \left[ y_{1,k}, y_{2,k},\ldots ,y_{k,k} \right]^T$$, where yi, k is the ith element of a vector of length k ≤ N. We also denote an arbitrary vector $$\vec{y}= \left[ y_{1}, y_{2},\ldots ,y_{N} \right]^T$$ of length N. Further, define a vector-reversal operator Tk of dimension k × k such that $$T_{k} \vec{y}_{k}= \left[ y_{k,k}, y_{k-1,k},\ldots , y_{1,k} \right]^T$$, which is the vector representation of the time-reversal operator for vectors of length k. The algorithm below has to be repeated twice, for the parameter α = ±1.

Initialization stage. Define $$\vec{p}=\vec{0}_{N\times 1}$$ and $$\vec{q}=\vec{0}_{N\times 1}$$ and let $$x_{0}^{ \left( 1 \right)}=1, x_{0}^{ \left( 2 \right)}=0, v=0$$. Set p1 = R1, q1 = RN. Throughout the algorithm we use the vectors $$\vec{p}_k, \vec{q}_k, \vec{x}_k^{ \left( 1 \right)}$$ and $$\vec{x}_k^{ \left( 2 \right)}$$, whose lengths increase by 1 at every iteration step. Note that $$x_{0}^{ \left( i \right)}$$ is not an element of the vector $$\vec{x}_{k}^{ \left( i \right)}$$; however, these values enter the calculation of the scalar εx.

Iterative stage. Let i = 1. Calculate the scalars \begin{eqnarray} \epsilon _x&=&\sum \limits _{j=0}^{i-1}R_{i-j} \left( x_{j,i-1}^{ \left( 1 \right)}+x_{j,i-1}^{ \left( 2 \right)} \right) , \nonumber \\ \epsilon _p&=&\sum \limits _{j=1}^{i}R_{i-j+1} \left( p_{j,i}+\alpha q_{j,i} \right) , \nonumber \\ b_x&=& \left( 1+v \right)^{-1}\epsilon _x , \end{eqnarray} (D1) where in the original implementation of the MPL algorithm bx and εx were vectors and v was a matrix, and the computation of (1 + v)−1 could be weakly numerically unstable; in our simplified LMPL algorithm these objects are scalars and the instability is removed. Next, update the entries of $$\vec{x}_{k}^{ \left( i \right)}$$ and extend their length by a new element \begin{eqnarray} \left(\begin{array}{c}\vec{x}_{i-1}^{ \left( 1 \right)}\\ \vec{x}_{i-1}^{ \left( 2 \right)}\end{array}\right)&\leftarrow & \left(\begin{array}{c}\vec{x}_{i-1}^{ \left( 1 \right)}\\ \vec{x}_{i-1}^{ \left( 2 \right)}\end{array}\right)-b_x \left(\begin{array}{c}T_{i-1}\vec{x}_{i-1}^{ \left( 2 \right)}\\ T_{i-1}\vec{x}_{i-1}^{ \left( 1 \right)}\end{array}\right) , \nonumber \\ x_{i,1} = -b_x; \,x_{i,2}=0; \end{eqnarray} (D2) update/calculate the scalars \begin{eqnarray} v&\leftarrow & v-\epsilon _x b_x ,\nonumber \\ g&=& \left( 1+v \right)^{-1} \left( R_{i}+\alpha R_{N-i-1}-\epsilon _p \right) , \end{eqnarray} (D3) update the vectors $$\vec{p}_k$$ and $$\vec{q}_k$$ \begin{eqnarray} \left(\begin{array}{c}\vec{p}_i\\ \vec{q}_i\end{array}\right) &\leftarrow & \left(\begin{array}{c}\vec{p}_i\\ \vec{q}_i\end{array}\right)+g \left(\begin{array}{c}T_i \vec{x}_{i}^{ \left( 2 \right)}\\ T_i\vec{x}_{i}^{ \left( 1 \right)}\end{array}\right) , \end{eqnarray} (D4) and extend them with a new element \begin{eqnarray} p_{i+1}&=& \left( 1+v \right)^{-1} \left( R_{i+1}-\alpha v r_{n-1-i}-\epsilon _p \right) , \nonumber \\ q_{i+1}&=& R_{N-i} . \end{eqnarray} (D5) Increment i ← i + 1, unless i = N − 1 has been reached. Upon completing the i- and the α-loops, the solutions are \begin{eqnarray} f_1^-&=&\frac{1}{2} \left( \vec{p}_{\alpha =1}+\vec{p}_{\alpha =-1} \right) , \nonumber \\ M_+ &=&\frac{1}{2}T_N \left( \vec{p}_{\alpha =1}-\vec{p}_{\alpha =-1} \right) . \end{eqnarray} (D6)
The LMPL method outlined above is exact and requires carrying out all N steps of the algorithm (stopping at i < N might not yield a good approximation of $$\vec{p}$$ and $$\vec{q}$$, because they are not even of the correct length). The method, however, scales favourably compared with direct inversion or conjugate-gradient methods (Merchant & Parks 1982).

© The Authors 2017. Published by Oxford University Press on behalf of The Royal Astronomical Society.
