# Seismic noise attenuation using an online subspace tracking algorithm

## Summary

We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, it outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.

**Key words:** Image processing, Inverse theory, Time-series analysis

## INTRODUCTION

Seismic data are inevitably corrupted by different types of noise. The noise in pre-stack seismic data masks useful signals, distorts amplitude information, and thus causes unreliable inversion results. Even in post-stack seismic data, noise affects the interpretation results that directly link the seismic images with subsurface oil and gas reservoirs (Wang et al. 2011; Chen 2015b, 2016; Gan et al. 2016e; Ren & Tian 2016; Zhang et al. 2016e).
A successful separation between useful reflection signals and unwanted noise is a long standing problem in the area of seismic data processing and greatly affects the fidelity of subsequent seismic imaging and geophysical inversion, like amplitude-variation-with-offset (AVO) inversion, full waveform inversion, attenuation estimation and geological interpretation (He & Wu 2015; Zhang & Chen 2015; Li et al. 2016a; Lines et al. 2016; Liu et al. 2016a; Mousavi & Langston 2016a,b; Mousavi et al. 2016; Qu & Verschuur 2016; Qu et al. 2016a; Zhang et al. 2016f; Jin et al. 2017; Li et al. 2017; Ebrahimi et al. 2017; Mousavi & Langston 2017). Effectively rejecting the cross-talk interference caused in the simultaneous-source seismic acquisition also plays a significant role in tremendously saving the data recording cost (Berkhout 2008; Qu et al. 2014, 2015, 2016b; Abma et al. 2015; Chen 2015a; Chen et al. 2015a,b, 2017e; Zu et al. 2015, 2017a,b,c; Zhou 2017). Over the past few decades, a large number of denoising methods for random noise have been developed and widely applied in practice. The simplest denoising method is by stacking the seismic data along the offset direction (Mayne 1962; Liu et al. 2009a; Yang et al. 2015). By stacking the useful signals from multiple traces and multiple directions (e.g. offset and midpoint), the signal is enhanced while influence of noise is mitigated. Prediction based methods utilize the predictable property of useful signals to construct prediction filters and then enhance signals and reject noise, for example, t–x predictive filtering (Abma & Claerbout 1995), f–x deconvolution (Canales 1984; Naghizadeh & Sacchi 2012), the forward–backward prediction approach (Wang 1999), the polynomial fitting based approach (Liu et al. 2011) and non-stationary predictive filtering (Liu et al. 2012; Liu & Chen 2013). Sparse transform based approaches first transform seismic data to a sparse domain (Sacchi & Ulrych 1995; Wang et al. 
2015b; Chen 2017; Huang et al. 2017b), then apply soft thresholding to the coefficients, finally transform the sparse coefficients back to the time–space domain. Widely used sparse transforms are Fourier transform (Pratt et al. 1998; Naghizadeh & Sacchi 2011; Zhong et al. 2014; Wang et al. 2015a; Li et al. 2016d; Shen et al. 2016), curvelet transform (Candès et al. 2006; Hermann et al. 2007; Neelamani et al. 2008; Liu et al. 2016e; Zu et al. 2016), seislet transform (Fomel & Liu 2010; Liu & Fomel 2010; Chen et al. 2014; Chen & Fomel 2015a; Gan et al. 2015b,c; Liu et al. 2015; Gan et al. 2016c; Liu et al. 2016f), seislet frame transform (Fomel & Liu 2010; Gan et al. 2015a, 2016b), shearlet transform (Kong et al. 2016; Liu et al. 2016a), Radon transform (Sacchi & Ulrych 1995; Xue et al. 2014; Sun & Wang 2016; Xue et al. 2016b, 2017), different types of wavelet transforms (e.g. physical, synchrosqueezing, empirical wavelet transforms, etc.) (Donoho & Johnstone 1994; Zhang & Ulrych 2003; Gao et al. 2006; Xie et al. 2015a; Liu et al. 2016b,g), and dictionary learning based sparse transform (Elad & Aharon 2006; Mairal et al. 2008; Yu et al. 2015; Chen et al. 2016c; Siahsar et al. 2017a,b). Decomposition based approaches decompose the noisy seismic data into different components and then select the principal components to represent the useful signals. Empirical mode decomposition and its variations (Chen & Ma 2014; Chen et al. 2015c, 2016a,b, 2017a,b,d; Gan et al. 2016a), variational mode decomposition (Liu et al. 2016c,d, 2017), singular value decomposition (SVD) based approaches (Bekara & van der Baan 2007; Huang et al. 2016b; Wang et al. 2017), morphological decomposition (Li et al. 2016b,c; Huang et al. 2017e,f,g), empirical wavelet transform (Gilles 2013; Verma & Gurve 2015), and regularized non-stationary decomposition based approaches (Fomel 2013; Wu et al. 
2017) are frequently used to extract the useful components of multidimensional seismic data in the literature. Rank-reduction based approaches assume the seismic data to be low-rank after some data rearrangement steps. Such methods include the Cadzow filtering (Trickett 2008), singular spectrum analysis (SSA; Vautard et al. 1992; Oropeza & Sacchi 2011; Gao et al. 2013; Cheng & Sacchi 2015), damped SSA (Huang et al. 2015, 2016a; Chen et al. 2016d,e; Siahsar et al. 2017c), multistep SSA (Zhang et al. 2016c,2016d), sparsity-regularized SSA (Siahsar et al. 2016; Zhang et al. 2016a,b; Anvari et al. 2017; Zhang et al. 2017), randomized-order SSA (Huang et al. 2017d), structural low-rank approximation (Zhou et al. 2017), empirical low-rank approximation (Chen et al. 2017f). Mean (or stacking) and median filters utilize the statistical difference between signal and noise to reject the Gaussian white noise or impulsive noise (Liu et al. 2009b; Liu 2013; Xie et al. 2015b; Yang et al. 2015; Gan et al. 2016d; Chen et al. 2017c; Xie et al. 2017). Instead of developing a standalone denoising strategy, Chen & Fomel (2015b) proposed a two-step denoising approach that tries to solve a long-existing problem in almost all denoising approaches: the signal leakage problem. By initiating a new concept called local orthogonalization, Chen & Fomel (2015b) successfully retrieved the coherent signals from the removed noise section to guarantee no signal leakage in any denoising algorithms. Huang et al. (2017c) used a similar algorithm to extract useful signals from the highly noise-corrupted microseismic data using an orthogonalized morphological decomposition method. In addition to these classic noise attenuation methods, some advanced denoising methods have been proposed in recent years. Time-frequency peak filtering (Kahoo & Siahkoohi 2009; Lin et al. 
2013) based approaches utilize the high-resolution property of time-frequency transforms to distinguish between useful signals and random noise. Xue et al. (2016a) used the rank-increasing property of noise when iteratively estimating the useful signals from simultaneous-source data. In this paper, we follow the general set-up of the rank-reduction based methods and propose a new algorithm based on a completely different low-rank decomposition engine. We substitute the SVD used in the traditional rank-reduction method with an efficient online subspace tracking (OSST) method. The subspace tracking algorithm requires only basic linear algebraic manipulations, and each subspace update can be performed in time linear in the dimension of the subspace. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces (Balzano et al. 2010). The subspace tracking method was initially proposed for matrix completion. Here, we carefully study its application to seismic noise attenuation within the rank-reduction framework. After the seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly for low-rank decomposition. Due to its online nature, the subspace tracking algorithm reconstructs less random noise during low-rank approximation than the traditional SVD based algorithm. Here, 'online' simply means that the columns of a low-rank matrix are inserted one by one into the algorithm for low-rank approximation. The traditional SSA can be replaced by the proposed algorithm to obtain significantly better performance and higher computational efficiency. The paper is organized as follows: we first introduce the low-rank approximation theory that is widely used in data processing applications, like matrix completion, seismic noise attenuation, and data restoration.
We then introduce the OSST algorithm in detail, providing the mathematical background since it is relatively new in the seismic community. We also introduce the application of the subspace tracking method to seismic noise attenuation as a low-rank decomposition engine. Next, we use synthetic and field data examples to test and confirm the effectiveness of the proposed algorithm. We investigate the proposed algorithm from the perspective of removing different types of noise in the seismic data, including random noise, spiky noise, blending noise, and coherent noise. We draw some key conclusions at the end of the paper.

## THEORY

### Low-rank approximation

Given a low-rank matrix D, it can be approximated by tracking the K-dimensional subspace S such that the following objective function is minimized:

$$\min _{S} J(S) = \min _{\mathbf {U},\mathbf {V}} \| \mathbf {U}\mathbf {V} - \mathbf {D}\| _F^2,$$ (1)

where U is a matrix whose columns span S, V is the corresponding weighting matrix, $$J=\| \mathbf {U}\mathbf {V} - \mathbf {D} \| _F^2$$ is the objective function, and ∥·∥F denotes the Frobenius norm of an input matrix. Note that $$\mathbf {U}\in \mathbb {R}^{M\times K}$$, $$\mathbf {V}\in \mathbb {R}^{K\times N}$$, and $$\mathbf {D}\in \mathbb {R}^{M\times N}$$. Many different algorithms have been proposed to solve eq. (1). Keshavan et al. (2010) used a gradient descent algorithm to jointly solve for U and V, while Dai & Milenkovic (2010) minimized this cost function by first solving for V and then taking a gradient step with respect to U. Most algorithms are based on the batch subspace tracking strategy (Balzano et al. 2010), where the full information in D is treated as a whole to solve for U and V. These methods often require eigenvalue decomposition or SVD (Golub & Van Loan 1996; Sarwar et al. 2002; Cai et al. 2010b). Some accelerated decomposition algorithms like Lanczos (Gao et al.
2013), QR decomposition (Cheng & Sacchi 2014), and randomized SVD methods (Oropeza & Sacchi 2011) can also be used to solve eq. (1) with higher efficiency. We first introduce the most widely used SVD method for solving eq. (1) (Golub & Van Loan 1996; Cai et al. 2010a). SVD is a matrix factorization technique commonly used for producing low-rank approximations. The SVD of the data matrix D can be expressed as

$$\mathbf {D}=\mathbf {U\Sigma V}^T.$$ (2)

Here, U is composed of the eigenvectors of DDT, V is composed of the eigenvectors of DTD, and Σ is a diagonal matrix composed of the decreasing singular values. The SVD method is a batch, or offline, subspace tracking algorithm (Balzano et al. 2010): the K-dimensional subspace S is tracked provided that the whole matrix D is given. The opposite of an offline subspace tracking algorithm is an OSST algorithm, where the low-rank approximation is done step by step, with consecutively inserted columns of D used to update the matrix U. The OSST strategy has two main advantages. First, it is highly time and memory efficient when the number of columns in D is extremely large. Second, it better decomposes the data into the signal subspace, with less residual noise. Later in the paper we will use numerical examples to demonstrate the better subspace tracking performance of the online method. A theoretical proof that the traditional truncated singular value decomposition (TSVD)-based SSA method decomposes the data into a mixed signal-and-noise subspace is given in Appendix B. In the next subsection, we introduce in detail an OSST algorithm called Grassmannian rank-one update subspace estimation (GROUSE), which was initially proposed by Balzano et al. (2010).

### Grassmannian rank-one update subspace estimation

Balzano et al. (2010) considered optimizing the objective function shown in eq. (1) one column at a time.
They showed that by using the online algorithm, where each measurement dn corresponds to a random column of the matrix D, it is possible to obtain a good approximation of the given matrix D. The set of all K-dimensional subspaces of $$\mathbb {R}^M$$ is denoted $$\mathcal {G}(M,K)$$ and is called the Grassmannian. The Grassmannian is a compact Riemannian manifold, and its geodesics can be explicitly computed (Edelman et al. 1988). An element $$S\in \mathcal {G}(M,K)$$ can be represented by any M × K matrix U whose columns form an orthonormal basis for S. The GROUSE algorithm is derived from an application of incremental gradient descent on the Grassmannian manifold. We first compute a gradient of the cost function J, and then follow this gradient along a short geodesic curve in the Grassmannian. To compute the gradient of J on the Grassmannian manifold, we first need to compute the partial derivatives of J with respect to the components of U. For a generic subspace, the matrix UTU has full rank. We can rewrite the objective function as

$$J(S,n)=\| \mathbf {U}\mathbf {v}-\mathbf {d}_n \| _2^2.$$ (3)

Here, n denotes the nth column in D and v denotes a column vector in V. It follows that the derivative of J with respect to the elements of U is

\begin{eqnarray} \frac{\partial J}{\partial \mathbf {U}} &=& 2(\mathbf {Uv}-\mathbf {d}_n)\mathbf {v}^T \nonumber\\ &=& -2(\mathbf {d}_n-\mathbf {Uv})\mathbf {v}^T. \end{eqnarray} (4)

v is chosen as the least-squares solution w of

$$\min _{\mathbf {v}} \| \mathbf {Uv}-\mathbf {d}_n \| _2^2,$$ (5)

that is,

$$\mathbf {w}=(\mathbf {U}^T\mathbf {U})^{-1}\mathbf {U}^T\mathbf {d}_n.$$ (6)

Defining the residual vector r = dn − Uw, eq. (4) can be formulated as

$$\frac{\partial J}{\partial \mathbf {U}} = -2\mathbf {r}\mathbf {w}^T.$$ (7)

It can be further derived that (Edelman et al.
1988)

\begin{eqnarray} \nabla J &=& (\mathbf {I}-\mathbf {UU}^T)\frac{\partial J}{\partial \mathbf {U}} \nonumber\\ &=& -2(\mathbf {I}-\mathbf {UU}^T)\mathbf {rw}^T \nonumber\\ &=& -2\mathbf {rw}^T, \end{eqnarray} (8)

where the last equality holds because the residual r is orthogonal to the columns of U, so UTr = 0. A gradient step along the geodesic with tangent vector −∇J is given in Edelman et al. (1988), and is a function of the singular values and vectors of ∇J. It is trivial to compute the SVD of ∇J, as it is rank one. The sole non-zero singular value is σ = 2‖r‖‖w‖, and the corresponding left and right singular vectors are $$-\frac{\mathbf {r}}{\| \mathbf {r} \| }$$ and $$\frac{\mathbf {w}}{\| \mathbf {w} \| }$$, respectively. Let x2, …, xM be an orthonormal set orthogonal to r and y2, …, yK be an orthonormal set orthogonal to w. Then

\begin{eqnarray} \nabla J = -2\mathbf {rw}^T &=& \left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}-\frac{\mathbf {r}}{\| \mathbf {r} \| } & \mathbf {x}_2 & \cdots & \mathbf {x}_M \end{array} \right] \times \text{diag}(\sigma ,0,\ldots ,0)\nonumber\\ &&\times \, \left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\frac{\mathbf {w}}{\| \mathbf {w} \| } & \mathbf {y}_2 & \cdots & \mathbf {y}_K \end{array} \right]^T \end{eqnarray} (9)

forms an SVD of the gradient. It can be derived that for η > 0, a step of length η in the direction −∇J is given by

\begin{eqnarray} \mathbf {U}(\eta ) & = &\mathbf {U}+\frac{(\cos (\sigma \eta )-1)}{\| \mathbf {w} \| ^2} \mathbf {Uww}^T+\sin (\sigma \eta ) \frac{\mathbf {r}}{\| \mathbf {r} \| } \frac{\mathbf {w}^T}{\| \mathbf {w} \| } \nonumber\\ &=&\mathbf {U}+\left( \sin (\sigma \eta )\frac{\mathbf {r}}{\| \mathbf {r} \| } + (\cos (\sigma \eta )-1)\frac{\mathbf {p}}{\| \mathbf {p} \| } \right) \frac{\mathbf {w}^T}{\| \mathbf {w} \| }, \end{eqnarray} (10)

where p = Uw is the predicted projection of the vector dn onto S. Appendix A provides a detailed derivation of the step ΔU = Un + 1 − Un. The whole GROUSE algorithm is detailed in Algorithm 1 (Balzano et al. 2010).
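The update of eqs (5)–(10) is compact enough to sketch in a few lines of NumPy. The following is a minimal illustration of a single GROUSE step for a fully observed column, not the authors' implementation; the step size `eta` and the toy dimensions are arbitrary choices for demonstration.

```python
import numpy as np

def grouse_step(U, d, eta):
    """One GROUSE update (eqs 5-10): an incremental gradient step on the
    Grassmannian for a single fully observed column d. U has (approximately)
    orthonormal columns spanning the current subspace estimate."""
    w, *_ = np.linalg.lstsq(U, d, rcond=None)       # eq. (5): least-squares weights
    p = U @ w                                       # projection of d onto span(U)
    r = d - p                                       # residual, orthogonal to span(U)
    nr, nw = np.linalg.norm(r), np.linalg.norm(w)
    if nr < 1e-12 or nw < 1e-12:                    # gradient is zero: nothing to do
        return U
    sigma = 2.0 * nr * nw                           # sole singular value of grad J
    # eq. (10): geodesic step of length eta along -grad J
    step = (np.sin(sigma * eta) * r / nr
            + (np.cos(sigma * eta) - 1.0) * p / np.linalg.norm(p))
    return U + np.outer(step, w / nw)

# toy check: a small step should reduce the residual of d w.r.t. the subspace
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((10, 3)))
d = rng.standard_normal(10)
U1 = grouse_step(U0, d, eta=0.01)
res_before = np.linalg.norm(d - U0 @ np.linalg.lstsq(U0, d, rcond=None)[0])
res_after = np.linalg.norm(d - U1 @ np.linalg.lstsq(U1, d, rcond=None)[0])
```

The geodesic update rotates the subspace in the plane spanned by p and r toward the new measurement, so for a small step the fitting error of `d` strictly decreases.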
Algorithm 1: Grassmannian Rank-One Update Subspace Estimation.

### Low-rank matrix construction and algorithms

We follow Oropeza & Sacchi (2011) to construct the low-rank matrix. A Hankel matrix for each frequency slice in the f–x domain can be constructed to represent the useful signal components in a low-rank manner. The Hankel matrix for each frequency slice D(x, w) is constructed as

$$\mathbf {H}=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}D(1,w) & D(2,w) & \cdots &D(p,w) \\ D(2,w) & D(3,w) &\cdots &D(p+1,w) \\ \vdots & \vdots &\ddots &\vdots \\ D(L,w)&D(L+1,w) &\cdots &D(N_x,w) \end{array} \right),$$ (11)

where $$L=\lfloor \frac{N_x}{2}\rfloor +1$$ and p = Nx − L + 1, ⌊ · ⌋ denotes the integer part of its argument, and w denotes frequency. As a bridge connecting the state-of-the-art SSA method to the proposed method, we first introduce the implementation of the TSVD-based SSA method. In the TSVD-based SSA method, after constructing the Hankel matrix, we apply TSVD to the matrix H:

$$\hat{\mathbf {H}}=\text{TSVD}\left(\mathbf {H}\right).$$ (12)

Then an averaging operator is applied along the antidiagonals of the low-rank approximated Hankel matrix $$\hat{\mathbf {H}}$$. The whole procedure is summarized in Algorithm 2, where $$\hat{\mathcal {H}}$$ denotes the Hankelization operator and $$\mathcal {A}$$ denotes the averaging operator.

Algorithm 2: Denoising seismic data via the TSVD-based SSA algorithm.

The proposed method simply substitutes the SVD-based low-rank decomposition operator with the GROUSE-based low-rank decomposition operator. It is also worth mentioning that the GROUSE method is just one choice of OSST algorithm; other subspace tracking algorithms such as incremental SVD (Sarwar et al. 2002) can also be used for the purpose of low-rank decomposition. We will use 'GROUSE-based SSA' to briefly refer to the proposed algorithm throughout the paper.
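The Hankelization of eq. (11), the rank-K truncation of eq. (12), and the antidiagonal averaging of Algorithm 2 can be sketched for a single frequency slice as follows. This is a minimal illustration, not the authors' code; note that a single linear event produces a monochromatic frequency slice whose Hankel matrix is exactly rank one, so the K = 1 filter passes it unchanged.

```python
import numpy as np

def hankelize(x):
    """Build the Hankel matrix of eq. (11) from one frequency slice x (length Nx)."""
    Nx = len(x)
    L = Nx // 2 + 1                       # number of rows
    p = Nx - L + 1                        # number of columns
    return np.array([x[i:i + p] for i in range(L)])

def dehankelize(H):
    """Average along antidiagonals (the operator A) back to a length-Nx slice."""
    L, p = H.shape
    x = np.zeros(L + p - 1, dtype=H.dtype)
    cnt = np.zeros(L + p - 1)
    for i in range(L):                    # antidiagonal i + j collects H[i, j]
        x[i:i + p] += H[i]
        cnt[i:i + p] += 1
    return x / cnt

def tsvd(H, K):
    """Rank-K truncation of eq. (12) via the SVD of eq. (2)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]

def ssa_slice(x, K):
    """Core of Algorithm 2 for one frequency slice: Hankelize, truncate, average."""
    return dehankelize(tsvd(hankelize(x), K))

# a single linear event: one complex exponential across Nx = 8 traces
event = np.exp(1j * 0.3 * np.arange(8))
```

In a full implementation these three steps are repeated for every frequency slice (and every local window), with the inverse FFT applied afterwards.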
The detailed GROUSE-based SSA algorithm, structured in a similar way as Algorithm 2, is shown in Algorithm 3. For a better understanding, Fig. 1 shows the main steps of the algorithm; the low-rank approximation step is chosen as the GROUSE method in the proposed approach.

Figure 1. Flowchart of the algorithm.

Algorithm 3: Denoising seismic data via the GROUSE-based SSA algorithm.

## EXAMPLES

### Evaluation of denoising performance

For synthetic examples, since we have the clean data, we can use the following signal-to-noise ratio (SNR) measurement to evaluate denoising performance (Liu et al. 2009a):

$$\text{SNR}=10\log _{10}\frac{\Vert \mathbf {s} \Vert _2^2}{\Vert \mathbf {s} -\hat{\mathbf {s}}\Vert _2^2},$$ (13)

where s and $$\hat{\mathbf {s}}$$ are the clean signal and estimated signal, respectively. For field data examples, we do not know the true signal. In order to numerically measure the denoising performance, we utilize the local similarity measurement (Chen & Fomel 2015b). Measuring denoising performance using local similarity is based on the criterion that the denoised signal and the removed noise should be orthogonal to each other. Thus, a high local similarity between the denoised signal and the removed noise indicates a mixture of signal and noise subspaces, that is, some useful signals remain in the removed noise section. For a successful denoising result, the local similarity map should be close to zero everywhere. It is worth mentioning that local similarity is effective in detecting the degree of signal damage caused by a denoising method but is less effective in evaluating its noise removal ability. To effectively apply the SSA filter, the denoised data are constrained to be low-rank in Hankel matrices extracted from small temporal-spatial windows (or patches).
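As a quick illustration, the SNR measure of eq. (13) can be computed as below; this is a minimal sketch with an arbitrary test signal, not code from the paper.

```python
import numpy as np

def snr_db(s, s_hat):
    """SNR of eq. (13) in decibels: clean signal s vs. estimate s_hat."""
    return 10.0 * np.log10(np.sum(s**2) / np.sum((s - s_hat)**2))

# a constant bias of 0.1 on a unit signal gives an energy ratio of 100, i.e. 20 dB
s = np.ones(100)
val = snr_db(s, s + 0.1)
```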
The local-window strategy is important because the SSA method is a valid denoising and reconstruction technique for a superposition of plane waves. In a small window, the data can be approximated by a limited number of dips plus incoherent noise. The size of the local windows depends on the complexity of the input seismic data. For relatively more complex seismic data, for example, pre-stack data with large curvature in some areas, the window size should be small enough that in each window the events are roughly linear. For structurally simpler data, for example, most post-stack data, the selection of window size can be more flexible, since both small and large windows can obtain similar results. For most data presented in this paper, the local curvature of the geological structures is not very large, so we choose relatively small window sizes. Neighbouring windows have 50 per cent overlap and a cosine taper is applied in both time and space. The window size is a relative term: 'large' or 'small' should be evaluated based on the ratio between the window size and the complete data size. We specify the window sizes for each example when discussing the denoising performance.

### Synthetic examples

The first example is a 2-D example. It is relatively simple, containing only three linear events, as shown in Fig. 2. Fig. 2(a) shows the clean data, or the ground truth, for this example. We test the performance of the proposed algorithm in rejecting three types of noise, namely, random noise, spiky noise, and blending noise, which arises from simultaneous source acquisition (Zu et al. 2016). We also compare the OSST based method with the traditional SSA based method. The noisy data containing band-limited random noise is shown in Fig. 2(b).

Figure 2. 2-D synthetic example. (a) Clean data. (b) Noisy data.

Fig.
3 shows the denoising performance of both the TSVD-based SSA and GROUSE-based SSA methods. Fig. 3(a) shows the denoised data using the TSVD-based SSA method, while Fig. 3(b) shows the denoised data using OSST. Comparing Figs 3(a) and (b), the denoised data using the proposed method is cleaner than that of the traditional TSVD-based SSA method. In addition to comparing the noise rejection ability, we are also interested in comparing the signal preservation ability. The best way to see if any useful energy is damaged is to check the removed noise section, which is the difference between the noisy data and the denoised data. If there is obvious coherent energy in the noise section, it indicates damage to the useful signals. Figs 3(c) and (d) show the removed noise sections using the TSVD-based SSA method and the proposed method, respectively. Neither removed noise section shows obvious coherent events, demonstrating that both the TSVD-based SSA and GROUSE-based SSA methods cause no significant damage to the useful signals. To quantitatively compare the performance of the two methods, we calculate the SNRs of the different data based on eq. (13). The SNR of the noisy data is −5.12 dB. The SNRs of the TSVD-based SSA and GROUSE-based SSA results are 5.35 and 6.72 dB, respectively. In this example, both methods use the same predefined rank: K = 3.

Figure 3. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise using SSA. (d) Removed noise using OSST.

In order to compare the performance of the two methods at different noise levels, we increase the noise variance from 0.1 to 1, calculate the SNRs of the denoised data for both methods, and show them in Table 1. To see the varying SNRs more vividly, we plot the data from Table 1 in Fig. 4.
The black line shows the input SNRs varying with noise variance. The red line shows the SNRs corresponding to the TSVD-based SSA method. The green line shows the SNRs of the proposed method. It is obvious that both methods obtain a large SNR improvement at all noise levels and that the SNRs of the proposed method are always higher than those of the TSVD-based SSA method. We can also observe clearly that the difference between the TSVD-based SSA and GROUSE-based SSA methods first decreases, reaching a minimum around noise variance σ2 = 0.3, then increases as the noise variance becomes larger, which indicates that the proposed method outperforms the TSVD-based SSA method by a larger margin as the seismic data become noisier.

Figure 4. Output SNRs varying with input noise level. The noise level is measured by the noise variance.

Table 1. Comparison of SNRs for different input noise levels. The diagrams corresponding to this table are shown in Fig. 4.

| Noise variance | Input SNR | SNR of TSVD-based SSA | SNR of GROUSE-based SSA |
| --- | --- | --- | --- |
| 0.1 | 4.42 | 15.16 | 19.78 |
| 0.2 | −1.59 | 8.94 | 10.96 |
| 0.3 | −5.12 | 5.34 | 6.72 |
| 0.4 | −7.62 | 2.83 | 4.18 |
| 0.5 | −9.55 | 0.82 | 2.46 |
| 0.6 | −11.14 | −0.81 | 1.35 |
| 0.7 | −12.47 | −2.25 | 0.61 |
| 0.8 | −13.64 | −3.50 | 0.17 |
| 0.9 | −14.66 | −4.61 | −0.13 |
| 1.0 | −15.57 | −5.57 | −0.39 |

In order to test the robustness of the proposed algorithm to different types of noise, we then apply it to reject spiky noise and blending noise (Chen et al. 2014).
The noisy data containing spiky noise are shown in Fig. 5(a). Figs 5(b) and (c) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively. The performance shows that both methods can effectively reject spiky noise, and the proposed GROUSE-based SSA method obtains a slightly smoother result with less residual noise. Figs 5(d) and (e) show the corresponding noise sections, which confirm the effective signal preservation abilities of the two methods. The SNRs of the noisy data, the denoised data using the TSVD-based SSA method, and the denoised data using the GROUSE-based SSA method are 2.24, 16.73, and 18.97 dB, respectively, which quantitatively verifies the superior performance of the proposed GROUSE-based SSA method. Next, we apply the proposed method to remove blending noise caused by marine simultaneous source acquisition (Berkhout 2008). Rejecting the blending noise is significant in achieving large economic savings in modern marine acquisition. The noisy data containing blending noise are shown in Fig. 6(a). Figs 6(b) and (c) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively. Figs 6(d) and (e) show the corresponding noise sections. Although both methods leave some residual crosstalk noise in the data, the denoised data are both much cleaner than the noisy data. Considering the strong amplitude of the blending noise, it is usually rejected in an iterative way; Fig. 6 shows the performance of only one filtering step, and subsequent filtering would compensate for the imperfect performance of both methods. Nevertheless, the proposed method obtains a slightly cleaner result than the traditional TSVD-based SSA method. Since this example contains only linear events, we do not apply local windows to process the data.

Figure 5. Spiky noise test. (a) Noisy data. (b) Denoised data using SSA. (c) Denoised data using OSST. (d) Removed noise using SSA. (e) Removed noise using OSST. SNRs of (a)–(c) are 2.24, 16.73, and 18.97 dB, respectively.

Figure 6. Blending noise test. (a) Noisy data. (b) Denoised data using SSA. (c) Denoised data using OSST. (d) Removed noise using SSA. (e) Removed noise using OSST. SNRs of (a)–(c) are 0.85, 12.23, and 15.15 dB, respectively.

We further investigate the reason why the traditional TSVD-based SSA method leaves more residual noise than the GROUSE-based SSA method. The strategy we use is to test the influence of noise on the low-rank approximation of both the TSVD-based SSA and GROUSE-based SSA methods. We apply TSVD-based SSA and GROUSE-based SSA with different predefined ranks to pure random noise (which is band-limited here to simulate the real random noise in seismic data), and quantitatively measure how much noise each low-rank approximation method reconstructs. Ideally, a low-rank approximation algorithm should approximate none (or a very small amount) of the noise, since there are no significant spatially coherent signals in the pure noise. The pure noise is shown in Fig. 7(a). Figs 7(b) and (c) show the results with predefined rank K = 1 from the TSVD-based SSA and GROUSE-based SSA methods. Figs 7(d) and (e) show the results with predefined rank K = 3 from the TSVD-based SSA and GROUSE-based SSA methods, respectively. It is clear from Fig.
7 that, for both K = 1 and K = 3, the TSVD-based SSA method reconstructs much stronger noise than the GROUSE-based SSA method, which indicates that for the same predefined rank, the TSVD-based SSA method approximates more noise than the GROUSE-based SSA method. In other words, the proposed method, which approximates the low-rank components using the OSST algorithm, is less sensitive to noise than the TSVD-based SSA method, which uses an offline algorithm. Fig. 8 shows the reconstructed noise energy varying with the predefined rank. The noise energy here is defined as

$$E_n = \| \mathbf {n} \| _2^2,$$ (14)

where En denotes the energy of the reconstructed noise, n denotes the vectorized reconstructed noise, and ∥ · ∥2 denotes the L2 norm. In Fig. 8, the black line denotes the constant input noise energy. The red line denotes the reconstructed noise energy of the TSVD-based SSA method varying with rank. The green line denotes the reconstructed noise energy of the proposed method varying with rank. It is quite reasonable that, as the rank increases, both methods reconstruct more and more noise. It is obvious that the GROUSE-based SSA method always reconstructs less noise than the TSVD-based SSA method for the same predefined rank. As the rank increases, the difference in noise energy between the two methods also increases, which indicates that the superiority of the proposed method over the TSVD-based SSA method becomes more obvious when the predefined rank is larger. This observation is very useful in practice, since in processing field data the rank is usually chosen to be relatively large, where the proposed method is more appropriate as a substitute for the traditional TSVD-based SSA method.

Figure 7. Test of applying the TSVD-based SSA and GROUSE-based SSA methods to pure random noise. (a) Pure random noise. (b) Result from TSVD-based SSA with K = 1. (c) Result from GROUSE-based SSA with K = 1.
(d) Result from TSVD-based SSA with K = 3. (e) Result from GROUSE-based SSA with K = 3. Note that the TSVD-based SSA method approximates stronger noise than the GROUSE-based SSA method for the same predefined rank K. Figure 8. Noise approximation test. Note that the TSVD-based SSA method approximates stronger noise than the GROUSE-based SSA method for the same predefined rank K. This noise influence test supports the advantage of OSST algorithms over offline subspace tracking algorithms such as the traditional TSVD-based SSA method. In fact, the TSVD-based SSA method can only decompose the input noisy data into a mixed signal-and-noise subspace, which explains why a large amount of residual noise is left in the denoised data, even for the input pure-noise data shown in Fig. 7(a). Appendix B gives a detailed proof that the traditional TSVD method decomposes the data into a mixed signal-and-noise subspace. Another insightful explanation of why the TSVD-based SSA method still causes residual noise is provided in Aharchaou et al. (2017), where the authors explained that thresholding the singular spectrum alone (given a value for the rank) is not enough to reject all the noise; shrinking the remaining (inflated) singular values, to counteract the effect of the noise, is also necessary.
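The TSVD branch of this noise-influence test (Figs 7 and 8) can be reproduced in a few lines. The sketch below is our own illustration rather than the authors' code: it applies a rank-K truncated SVD directly to band-limited noise and evaluates eq. (14); the data size and the smoothing kernel used to band-limit the noise are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tsvd_approx(M, K):
    """Rank-K approximation of M via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :K] * s[:K]) @ Vt[:K, :]

# Band-limited pure noise: white noise smoothed along the time axis to
# mimic the band-limited random noise of Figs 7-8 (an assumption here).
noise = rng.standard_normal((200, 60))
kernel = np.ones(5) / 5.0
noise = np.apply_along_axis(
    lambda t: np.convolve(t, kernel, mode="same"), 0, noise)

E_in = np.sum(noise ** 2)  # input noise energy (black line in Fig. 8)
for K in (1, 3, 5):
    # eq. (14): E_n = ||n||_2^2 of the reconstructed noise
    E_n = np.sum(tsvd_approx(noise, K) ** 2)
    print(K, E_n / E_in)   # fraction of the noise energy reconstructed
```

As in Fig. 8, the reconstructed fraction grows monotonically with the rank K, since each added singular component absorbs additional noise energy.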
In addition, in the GROUSE-based method, a spatial randomization operator is applied to the Hankel matrix before the OSST. The randomization operator helps the low-dimensional subspace better represent the full-dimensional measurement. It also makes locally coherent random noise incoherent through a global column rearrangement, which is another reason why the GROUSE-based method is less sensitive to noise and achieves better denoising performance than the TSVD-based method. Fig. 9 provides a demonstration of such randomization. It can be seen from Fig. 9(a) that, in a low-rank signal matrix, some noise may appear to be locally coherent although globally random. By applying a spatial randomization operator, the noise becomes highly random both locally and globally. This procedure helps the GROUSE method extract only the useful signals during subspace tracking. Figure 9. Demonstration of the influence of the randomization operator on the randomness of the unwanted noise. We encourage readers to consider in depth the noise influence test conducted in this paper (Figs 7 and 8), because it is important to understand the difference in noise sensitivity between the two methods. Before starting this project, we repeatedly asked ourselves whether a new method could really substitute for the TSVD-based SSA method, and why; we kept this question in mind when testing the GROUSE-based method. We clearly see that the TSVD-based SSA method extracts many more components of the noise than the GROUSE-based SSA method does. To explain the reason, we theoretically analyse the TSVD-based method.
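Before turning to that analysis, the spatial randomization operator discussed above can be illustrated concretely. The minimal sketch below (our own, not the authors' implementation) realizes it as a global random column permutation of the matrix together with its inverse; since a permutation only reorders columns, it preserves the matrix rank and the signal energy while dispersing locally coherent noise patterns.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomize_columns(M, rng):
    """Apply a global random column rearrangement. Returns the permuted
    matrix and the permutation, so the operator can be undone after
    subspace tracking."""
    perm = rng.permutation(M.shape[1])
    return M[:, perm], perm

def derandomize_columns(M_perm, perm):
    """Invert the column rearrangement."""
    inv = np.argsort(perm)
    return M_perm[:, inv]

M = rng.standard_normal((100, 40))
M_perm, perm = randomize_columns(M, rng)
M_back = derandomize_columns(M_perm, perm)
assert np.allclose(M_back, M)  # the operator is exactly invertible
```

Because the operator is exactly invertible, it can be applied before the low-rank approximation and undone afterwards without any loss of signal.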
We show that the TSVD-based SSA method decomposes the data into a mixed signal-and-noise subspace, which explains why the denoised data using the TSVD-based method contain much more residual noise and theoretically supports the noise influence test. Since the GROUSE-based method has a different algorithmic setup from the TSVD-based method, we cannot formulate the two algorithms in the same system, for example, in the linear-algebra setup shown in Appendix B. It is difficult to derive from the GROUSE-based method a formula similar to that of the TSVD-based method from which we could intuitively understand why one is superior to the other. What we can do, however, is investigate which method is more sensitive to noise: the more sensitive the method, the more residual noise remains in the final result. From this perspective, we found that the GROUSE-based method is less sensitive. Nowadays, it has become even more important to remove various forms of coherent noise than to remove pure random noise. The proposed algorithm can also be straightforwardly applied to some coherent noise attenuation problems. Following Huang et al. (2017d), when the trajectory of a coherent noise component is successfully tracked (or the trajectory of the useful signal component is obtained), the coherent noise can be randomized to facilitate the application of a common random noise attenuation algorithm to the randomized coherent noise. To aid understanding, we give a brief introduction to the randomization process here. The randomization process is implemented to eliminate the correlation of the coherent noise while preserving the useful signals. The trajectory of the useful signal is regarded as the shape of the window for the randomization operator. Then, the order of the data in this window can be randomly rearranged.
After energy balancing, the useful signals with the same shape in the window have approximately equal amplitude at each point, whereas the coherent noise and the random noise do not. After the rearrangement, the useful signals are almost unchanged, assuming waveforms on consecutive traces are the same, whereas the energies of the coherent and random noise are dispersed randomly. In other words, the correlation of the coherent noise is eliminated. This process transforms coherent noise into random noise to some extent. Fig. 10 provides an example of attenuating coherent noise, where we treat the first hyperbolic event with the steepest dip as the coherent noise. Fig. 10(a) shows the noisy data, where the noise is indicated by the labelled arrow. After randomizing the data along the structural direction of the useful reflections, the shuffled data are shown in Fig. 10(d). Fig. 10(b) shows the denoised data with both coherent and random noise successfully removed using the proposed method. Fig. 10(c) shows the denoised result using the TSVD-based SSA method. Both methods successfully remove the random and coherent noise, but the result of the proposed method is cleaner than that of the TSVD-based SSA method, which causes significant residual noise due to the strong reconstructed noise during low-rank approximation. This is consistent with the detailed analysis of noise influence above. Figs 10(e) and (f) show the removed noise sections of the two methods; neither contains any useful energy. The data size of this example is 501 × 150. The window size we use is 50 × 50, that is, 50 time samples by 50 space traces. Figure 10. Coherent noise test. (a) Noisy data containing coherent noise. (b) Denoised data using the TSVD-based SSA method. (c) Denoised data using the proposed method. (d) Randomized data using the method proposed in Huang et al. (2017d). (e) Removed noise including both random and coherent noise using the TSVD-based SSA method.
(f) Removed noise including both random and coherent noise using the proposed method. The next synthetic example is a model that contains flat events, flat events with an amplitude-versus-offset (AVO) phenomenon, curved events, and discontinuities. Although the model contains only a few events, as shown in Fig. 11(a), the reflectors represent very typical geological structures such as sedimentary layers, salt domes, pinch-outs, and faults. Fig. 11(b) shows the noisy data obtained by adding band-limited random noise. Since the structure of this example is more complicated, we apply both the TSVD-based SSA and GROUSE-based SSA methods in local windows. The data size of this example is 512 × 50. The window size we use is 50 × 20. In each local window, we use a predefined rank K = 2. Figs 12(a) and (b) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively, where the GROUSE-based SSA result is slightly cleaner. Figs 12(c) and (d) show the corresponding noise sections, in which no observable signals exist. For this example, to obtain a better comparison between the two methods, we calculate the FK spectra of the different data. Fig. 13 shows a comparison of the FK spectra of the clean data (Fig. 13a), noisy data (Fig. 13b), denoised data using the TSVD-based SSA method (Fig. 13c), and denoised data using the GROUSE-based SSA method (Fig. 13d). From Fig.
13, it is more obvious that the FK spectrum of the GROUSE-based SSA result is cleaner than that of the TSVD-based SSA result, and is much more similar to that of the clean data. In this example, the SNRs of the noisy data and the denoised results from the TSVD-based SSA and GROUSE-based SSA methods are 0.73, 5.10 and 7.22 dB, respectively. This example confirms the effectiveness of the proposed method in dealing with curved and more complex events. Figure 11. 2-D synthetic example. (a) Clean data. (b) Noisy data. SNR of (b) is 0.73 dB. Figure 12. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise corresponding to (a). (d) Removed noise corresponding to (b). SNRs of (a) and (b) are 5.10 dB and 7.22 dB, respectively. Figure 13. FK spectrum comparison. (a) FK spectrum of clean data. (b) FK spectrum of noisy data. (c) FK spectrum of TSVD-based SSA processed data. (d) FK spectrum of GROUSE-based SSA processed data. We then use a 3-D synthetic example to further compare the performance of the different methods. The clean data, shown in Fig. 14(a), contain three plane-wave components. Fig. 14(b) shows the noisy data with band-limited random noise. For 3-D data, the construction of the low-rank matrix is slightly different.
Appendix C describes how to construct the low-rank matrix in the case of 3-D seismic data. In this example, in addition to the TSVD-based SSA method, we also compare the proposed method with the FK thresholding method and an accelerated TSVD-based SSA method based on the randomized singular value decomposition (RSVD). The results from the four aforementioned methods are shown in Fig. 15. It is obvious that the FK method causes the most residual noise, followed by the TSVD-based SSA method. Although the accelerated TSVD-based SSA result is much cleaner than the FK and TSVD-based SSA results, the amplitudes of the useful events are significantly damaged, which can be verified from the removed noise cube shown in Fig. 16(c). Fig. 16 shows the removed noise cubes corresponding to the four methods. None of them except Fig. 16(c) contains observable spatially coherent energy. Fig. 16(c) contains significant useful energy, which indicates the poorer signal preservation of the RSVD-accelerated TSVD-based SSA method. The proposed method obtains the cleanest result without damaging useful energy. Fig. 17 shows a comparison of 2-D sections extracted from the 3-D data cubes, from which the conclusions above can be drawn even more clearly. The SNR of the noisy data in this example is −8.39 dB. The SNRs of the FK method, TSVD-based SSA method, RSVD method, and the proposed method are 4.30, 4.60, 3.94 and 6.81 dB, respectively. For this example, we also measure the computational time of the different methods. The data size for this example is 300 × 20 × 20. The time cost of the FK method, TSVD-based SSA method, RSVD method, and the proposed method is 0.05, 11.63, 1.7, and 5.96 s, respectively. We conclude from this example that while the RSVD algorithm accelerates TSVD-based SSA considerably (by a factor of seven), it sacrifices the denoising performance to some extent.
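For reference, the SNR values quoted throughout appear consistent with the standard definition SNR = 10 log10(‖s‖²/‖s − ŝ‖²), where s is the clean data and ŝ the estimate; the helper below is our own sketch under that assumption, not code from the paper.

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    num = np.sum(np.asarray(clean, float) ** 2)
    den = np.sum((np.asarray(clean, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(num / den)

s = np.ones(100)
print(snr_db(s, s + 0.1))  # -> 20.0 dB for a constant 0.1 error on unit data
```

With this definition, a perfect estimate gives an infinite SNR, and an estimate whose error energy equals the signal energy gives 0 dB, matching the sign convention of the values reported above.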
The proposed method obtains the best denoising performance and is also faster than the traditional TSVD-based SSA method (by a factor of two). Since this example only contains planar events, we do not use local windows to process the data. Figure 14. 3-D synthetic example. (a) Clean data. (b) Noisy data. Figure 15. Denoising comparison. (a) Denoised data using the FK method. (b) Denoised data using the TSVD-based SSA method with SVD for low-rank decomposition. (c) Denoised data using the TSVD-based SSA method with RSVD for low-rank decomposition. (d) Denoised data using the GROUSE-based SSA method via GROUSE for low-rank decomposition. Figure 16. Noise comparison. (a) Removed noise using the FK method. (b) Removed noise using the TSVD-based SSA method with SVD for low-rank decomposition. (c) Removed noise using the TSVD-based SSA method with RSVD for low-rank decomposition. (d) Removed noise using the GROUSE-based SSA method via GROUSE for low-rank decomposition. Figure 17. 2-D section comparison (5th Xline section). (a) Clean data. (b) Denoised data using FK. (c) Denoised data using SSA. (d) Noisy data.
(e) Denoised data using RSVD. (f) Denoised data using OSST. Real data examples The first field data example is a 2-D time-migrated seismic profile, as shown in Fig. 18. This is a very noisy data set, in which strong random noise seriously masks the shallow reflection signals and some useful signals can hardly be seen. Fig. 19 shows a comparison between the TSVD-based SSA method and the proposed method, both of which significantly improve the seismic image. The shallow reflection signals are much clearer than in the raw data. It is not easy to judge the performance solely from the denoised data. The removed noise sections shown in Fig. 20 offer more clues for judging the denoising performance of the two methods. It is very clear from Fig. 20 that significant damage to the useful reflection signals is caused by the TSVD-based SSA method, as indicated by the arrows in Fig. 20(a). The removed noise from the proposed method contains almost no useful signals, which indicates a successful preservation of the useful signals. An enlarged comparison between the two removed noise sections is shown in Fig. 21, where we can see even more clearly that the TSVD-based SSA method causes some damage to the useful signals. As introduced at the beginning of this section, we use local similarity to quantitatively compare the denoising performance on real data. Fig. 22 shows a comparison of the local similarity between the denoised data and the removed noise for the two methods. It is seen that the local similarity successfully detects the areas where significant useful energy is lost. The abnormal values in Fig. 22(a) (highlighted by the arrows) point out the areas of signal damage, which is consistent with the observed spatially coherent signals in Fig. 20(a).
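The idea behind the local similarity attribute can be conveyed with a simplified sketch. The attribute used in the paper is typically computed with smooth-division (shaping) regularization; the sliding-window normalized correlation below is our own crude stand-in, meant only to illustrate why similarity between denoised data and removed noise flags signal leakage.

```python
import numpy as np

def local_similarity(a, b, win=20):
    """Simplified local similarity: normalized cross-correlation of two
    signals in sliding windows. The published attribute uses shaping
    regularization instead; this version only illustrates the idea."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    out = np.zeros_like(a)
    half = win // 2
    for i in range(len(a)):
        lo, hi = max(0, i - half), min(len(a), i + half + 1)
        aw, bw = a[lo:hi], b[lo:hi]
        denom = np.linalg.norm(aw) * np.linalg.norm(bw)
        out[i] = np.dot(aw, bw) / denom if denom > 0 else 0.0
    return out

t = np.linspace(0, 1, 200)
sig = np.sin(2 * np.pi * 5 * t)
# A trace is perfectly similar to itself everywhere ...
assert np.allclose(local_similarity(sig, sig), 1.0)
# ... whereas similarity with unrelated noise stays low on average.
```

High local similarity between the removed noise and the denoised data therefore marks windows where coherent signal leaked into the noise estimate, which is exactly what the abnormal values in Fig. 22(a) indicate.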
The data size of this example is 1001 × 201. The window size we use is 200 × 50. Figure 18. 2-D field data example. Figure 19. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. Figure 20. Denoising comparison. (a) Removed noise using SSA. (b) Removed noise using OSST. The zoomed sections of the frameboxes are shown in Fig. 21. Figure 21. Zoomed denoising comparison. (a) Removed noise using SSA. (b) Removed noise using OSST. Figure 22. Denoising comparison in terms of local similarity between denoised data and removed noise. (a) Local similarity using SSA. (b) Local similarity using OSST. The second field data example is a 3-D stacked seismic profile. The raw data is shown in Fig. 23. Figs 24(a) and (b) show the denoised data using the TSVD-based SSA method and the proposed method, respectively. Both denoising methods obtain much cleaner data. Figs 24(c) and (d) show the corresponding noise sections. Fig. 25 shows a comparison of the local similarity between the denoised data and the removed noise for the two methods. The two local similarity cubes do not contain any abnormal values, which indicates that both methods preserve the useful signals well.
We extract one Xline section from each figure in Figs 23 and 24 and compare them in Fig. 26. As highlighted by the green frameboxes, both methods successfully remove a large amount of random noise, and the proposed method obtains more spatially coherent reflection signals. The frameboxes are zoomed in Fig. 27, where the comparison between the two methods is much clearer. The TSVD-based SSA method causes a small amount of residual noise, whereas the proposed method makes the seismic events very smooth and spatially coherent. The data size of this example is 751 × 100 × 10. The window size we use is 100 × 50 × 10. Figure 23. 3-D field data example. Figure 24. Denoising comparison in 3-D volume. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise corresponding to (a). (d) Removed noise corresponding to (b). Figure 25. Denoising comparison in terms of local similarity between denoised data and removed noise. (a) Local similarity using TSVD-based SSA method. (b) Local similarity using GROUSE-based SSA method. Figure 26. Denoising comparison in 2-D section. (a) Raw seismic data. (b) Denoised data using SSA. (c) Denoised data using OSST. (d) Removed noise corresponding to (a). (e) Removed noise corresponding to (b).
Figure 27. Zoomed comparison. (a) Raw seismic data. (b) Denoised data using SSA. (c) Denoised data using OSST. The last field data example is an earthquake data stack profile. Fig. 28 shows Professor Peter Shearer's stacks over many earthquakes at constant epicentral distance (offset angle) (Shearer 1991a,b). The profile in Fig. 28 has already been improved considerably over the raw records by stacking the data of many earthquakes. Different seismic phases can be seen in Fig. 28, as highlighted by the labelled arrows. However, significant random and erratic noise still exists in the stack profile. By applying the proposed method to the raw stack data, we obtain a much improved result, shown in Fig. 29. The different phases appear much more clearly in the denoised data. To check whether we have removed any useful signals, we plot the difference section in Fig. 30, in which we see no obvious spatially coherent signals. The noise section confirms that we only remove unwanted components while leaving the spatially coherent signals clearer. The data size of this example is 1200 × 360. The window size we use is 200 × 90. Figure 28. Raw earthquake stack data. Figure 29. Denoised stack data. Figure 30. Removed noise from the raw earthquake stack data. CONCLUSIONS We have introduced a new OSST based low-rank approximation algorithm to attenuate different types of noise existing in multidimensional seismic data.
Each subspace update can be performed in time linear in the dimension of the subspace and is thus efficient. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. We comprehensively investigate its application to different noise attenuation problems through a series of synthetic and real data examples, which demonstrate the robustness of the algorithm to different types of noise including random noise, spiky noise, blending noise, and even coherent noise. We specifically compare the proposed method with the traditional SSA based method, which is currently widely used in many fields. The numerical examples show that the proposed method obtains significantly better performance, leaving less residual noise in the denoised data. The superiority of the proposed method over the TSVD-based SSA method is even more obvious when dealing with extremely noisy data. The numerical examples also show that the TSVD-based SSA method approximates more noise than the GROUSE-based SSA method for the same predefined rank, which explains why the TSVD-based SSA method causes more residual noise. The proposed method is also faster, by a factor of two, than the typical TSVD-based SSA method that uses the standard SVD. ACKNOWLEDGEMENTS This work was supported by the National Natural Science Foundation of China (No. 61401307), the Hebei Province Foundation for Returned Overseas Scholars (CL201707), the Hebei Province Project of Science and Technology R & D (11213565), and the Hebei Province Natural Science Foundation (No. E2016202341). We appreciate editor Jean Virieux, reviewer Mehdi Aharchaou and two anonymous reviewers for constructive suggestions that greatly improved the manuscript. REFERENCES
Abma R., Claerbout J., 1995. Lateral prediction for noise attenuation by t−x and f−x techniques, Geophysics, 60, 1887–1896.
Abma R., Howe D., Foster M., Ahmed I., Tanis M., Zhang Q., Arogunmati A., Alexander G., 2015. Independent simultaneous source acquisition and processing, Geophysics, 80(6), WD37–WD44.
Aharchaou M., Anderson J., Hughes S., Bingham J., 2017. Singular-spectrum analysis via optimal shrinkage of singular values, in SEG Technical Program Expanded Abstracts 2017, pp. 4950–4954, Society of Exploration Geophysicists.
Anvari R., Siahsar M.A.N., Gholtashi S., Kahoo A.R., Mohammadi M., 2017. Seismic random noise attenuation using synchrosqueezed wavelet transform and low-rank signal matrix approximation, IEEE Trans. Geosci. Remote Sens., doi:10.1109/TGRS.2017.2730228.
Balzano L., Nowak R., Recht B., 2010. Online identification and tracking of subspaces from highly incomplete information, in 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, pp. 704–711.
Bekara M., van der Baan M., 2007. Local singular value decomposition for signal enhancement of seismic data, Geophysics, 72(2), V59–V65.
Berkhout A.J., 2008. Changing the mindset in seismic data acquisition, Leading Edge, 27, 924–938.
Cai J., Candès E.J., Shen Z., 2010a. A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20, 1956–1982.
Cai J.F., Candès E.J., Shen Z., 2010b. A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20(4), 1956–1982.
Canales L., 1984. Random noise reduction, in 54th Annual International Meeting, SEG, Expanded Abstracts, pp. 525–527.
Candès E.J., Demanet L., Donoho D.L., Ying L., 2006. Fast discrete curvelet transforms, SIAM Multiscale Model. Simul., 5, 861–899.
Chen W., Chen Y., Liu W., 2016a.
Ground roll attenuation using improved complete ensemble empirical mode decomposition, J. Seismic Explor., 25, 485–495.
Chen W., Chen Y., Liu W., Cheng Z., 2016b. Nonstationary signal decomposition via improved complete ensemble empirical mode decomposition and its application in ground roll noise attenuation, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 4139–4144.
Chen W., Chen Y., Cheng Z., 2017a. Seismic time-frequency analysis using an improved empirical mode decomposition algorithm, J. Seismic Explor., 26(4), 367–380.
Chen W., Xie J., Zu S., Gan S., Chen Y., 2017b. Multiple reflections noise attenuation using adaptive randomized-order empirical mode decomposition, IEEE Geosci. Remote Sens. Lett., 14(1), 18–22.
Chen W., Yuan J., Chen Y., Gan S., 2017c. Preparing the initial model for iterative deblending by median filtering, J. Seismic Explor., 26(1), 25–47.
Chen W., Zhang D., Chen Y., 2017d. Random noise reduction using a hybrid method based on ensemble empirical mode decomposition, J. Seismic Explor., 26, 227–249.
Chen Y., 2015a. Iterative deblending with multiple constraints based on shaping regularization, IEEE Geosci. Remote Sens. Lett., 12, 2247–2251.
Chen Y., 2015b. Noise attenuation of seismic data from simultaneous-source acquisition, PhD thesis, The University of Texas at Austin.
Chen Y., 2016. Dip-separated structural filtering using seislet thresholding and adaptive empirical mode decomposition based dip filter, Geophys. J. Int., 206(1), 457–469.
Chen Y., 2017. Fast dictionary learning for noise attenuation of multidimensional seismic data, Geophys. J. Int., 209, 21–31.
Chen Y., Fomel S., 2015a. EMD-seislet transform, in 85th Annual International Meeting, SEG, Expanded Abstracts, pp. 4775–4778.
Chen Y., Fomel S., 2015b.
Random noise attenuation using local signal-and-noise orthogonalization, Geophysics, 80(6), WD1–WD9.
Chen Y., Ma J., 2014. Random noise attenuation by f-x empirical mode decomposition predictive filtering, Geophysics, 79, V81–V91.
Chen Y., Fomel S., Hu J., 2014. Iterative deblending of simultaneous-source seismic data using seislet-domain shaping regularization, Geophysics, 79(5), V179–V189.
Chen Y., Jin Z., Gan S., Yang W., Xiang K., Bai M., Huang W., 2015a. Iterative deblending using shaping regularization with a combined PNMO-MF-FK coherency filter, J. Appl. Geophys., 122, 18–27.
Chen Y., Yuan J., Zu S., Qu S., Gan S., 2015b. Seismic imaging of simultaneous-source data using constrained least-squares reverse time migration, J. Appl. Geophys., 114, 32–35.
Chen Y., Zhang G., Gan S., Zhang C., 2015c. Enhancing seismic reflections using empirical mode decomposition in the flattened domain, J. Appl. Geophys., 119, 99–105.
Chen Y., Ma J., Fomel S., 2016c. Double-sparsity dictionary for seismic noise attenuation, Geophysics, 81(2), V17–V30.
Chen Y., Zhang D., Huang W., Chen W., 2016d. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction, Comput. Geosci., 95, 59–66.
Chen Y., Zhang D., Jin Z., Chen X., Zu S., Huang W., Gan S., 2016e. Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method, Geophys. J. Int., 206, 1695–1717.
Chen Y., Chen H., Xiang K., Chen X., 2017e. Preserving the discontinuities in least-squares reverse time migration of simultaneous-source data, Geophysics, 82(3), S185–S196.
Chen Y., Zhou Y., Chen W., Zu S., Huang W., Zhang D., 2017f. Empirical low rank decomposition for seismic noise attenuation, IEEE Trans. Geosci. Remote Sens., 55(8), 4696–4711.
Cheng J., Sacchi M., 2014. Fast dual-domain reduced-rank algorithm for 3D deblending via randomized QR decomposition, Geophysics, 81, V89–V101.
Cheng J., Sacchi M., 2015. Separation and reconstruction of simultaneous source data via iterative rank reduction, Geophysics, 80, V57–V66.
Dai W., Milenkovic O., 2010. SET: an algorithm for consistent matrix completion, in Proceedings of ICASSP, Dallas, TX, pp. 3646–3649.
Donoho D.L., Johnstone I.M., 1994. Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81, 425–455.
Ebrahimi S., Kahoo A.R., Chen Y., Porsani M.J., 2017. A high-resolution weighted AB semblance for dealing with amplitude-variation-with-offset phenomenon, Geophysics, 82(2), V85–V93.
Edelman A., Arias T.A., Smith S.T., 1998. The geometry of algorithms with orthogonality constraints, SIAM J. Matrix Anal. Appl., 20(2), 303–353.
Elad M., Aharon M., 2006. Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15(12), 3736–3745.
Fomel S., 2013. Seismic data decomposition into spectral components using regularized nonstationary autoregression, Geophysics, 78, O69–O76.
Fomel S., Liu Y., 2010. Seislet transform and seislet frame, Geophysics, 75, V25–V38.
Gan S., Wang S., Chen Y., Chen X., 2015a.
Deblending of distance separated simultaneous-source data using seislet frames in the shot domain, in 85th Annual International Meeting, SEG Expanded Abstracts, pp. 65–70. Gan S., Wang S., Chen Y., Chen X., 2015b. Seismic data reconstruction via fast projection onto convex sets in the seislet transform domain, in 85th Annual International Meeting, SEG Expanded Abstracts, pp. 3814–3819. Gan S., Wang S., Chen Y., Zhang Y., Jin Z., 2015c. Dealiased seismic data interpolation using seislet transform with low-frequency constraint, IEEE Geosci. Remote Sens. Lett., 12, 2150–2154. Gan S., Wang S., Chen Y., Chen J., Zhong W., Zhang C., 2016a. Improved random noise attenuation using f-x empirical mode decomposition and local similarity, Appl. Geophys., 13, 127–134. Gan S., Wang S., Chen Y., Chen X., 2016b. Simultaneous-source separation using iterative seislet-frame thresholding, IEEE Geosci. Remote Sens. Lett., 13(2), 197–201. Gan S., Wang S., Chen Y., Chen X., Huang W., Chen H., 2016c. Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform, J. Appl. Geophys., 130, 194–208. Gan S., Wang S., Chen Y., Chen X., Xiang K., 2016d. Separation of simultaneous sources using a structural-oriented median filter in the flattened dimension, Comput. Geosci., 86, 46–54. Gan S., Wang S., Chen Y., Qu S., Zu S., 2016e. Velocity analysis of simultaneous-source data using high-resolution semblance - coping with the strong noise, Geophys. J. Int., 204, 768–779. Gao J., Mao J., Chen W., Zheng Q., 2006. On the denoising method of prestack seismic data in wavelet domain, Chin. J. Geophys., 49(4), 1155–1163. Gao J., Sacchi M.D., Chen X., 2013. 
A fast reduced-rank interpolation method for prestack seismic volumes that depend on four spatial dimensions, Geophysics, 78, V21–V30. Gilles J., 2013. Empirical wavelet transform, IEEE Trans. Signal Process., 61, 3999–4010. Golub G.H., Van Loan C.F., 1996. Matrix Computations, 3rd edn, Johns Hopkins Univ. Press. He B., Wu G., 2015. A slim approximate Hessian for perturbation velocity inversion with incomplete reflection data, J. Seismic Explor., 24(3), 281–304. Hermann F.J., Boniger U., Verschuur D.J., 2007. Non-linear primary-multiple separation with directional curvelet frames, Geophys. J. Int., 170, 781–799. Huang W., Wang R., Zhang M., Chen Y., Yu J., 2015. Random noise attenuation for 3D seismic data by modified multichannel singular spectrum analysis, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201412830. Huang W., Wang R., Chen Y., Li H., Gan S., 2016a. Damped multichannel singular spectrum analysis for 3D random noise attenuation, Geophysics, 81(4), V261–V270. Huang W., Wang R., Zhou Y., Chen Y., Yang R., 2016b. Improved principal component analysis for 3D seismic data simultaneous reconstruction and denoising, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 4777–4781. Huang W., Wang R., Chen X., Chen Y., 2017a. Double least squares projections method for signal estimation, IEEE Trans. Geosci. Remote Sens., 55(7), 4111–4129. Huang W., Wang R., Chen X., Zu S., Chen Y., 2017b. Damped sparse representation for seismic random noise attenuation, in 87th Annual International Meeting, SEG, Expanded Abstracts, pp. 5079–5084. Huang W., Wang R., Li H., Chen Y., 2017c. Unveiling the signals from extremely noisy microseismic data for high-resolution hydraulic fracturing monitoring, Sci. 
Rep., 7, 11996, doi:10.1038/s41598-017-09711-2. Huang W., Wang R., Yuan Y., Gan S., Chen Y., 2017d. Signal extraction using randomized-order multichannel singular spectrum analysis, Geophysics, 82, V59–V74. Huang W., Wang R., Zhang D., Zhou Y., Yang W., Chen Y., 2017e. Mathematical morphological filtering for linear noise attenuation of seismic data, Geophysics, 82(6), V369–V384. Huang W., Wang R., Zhou Y., Chen X., 2017f. Simultaneous coherent and random noise attenuation by morphological filtering with dual-directional structuring element, IEEE Geosci. Remote Sens. Lett., doi:10.1109/LGRS.2017.2730849. Huang W., Wang R., Zu S., Chen Y., 2017g. Low-frequency noise attenuation in seismic and microseismic data using mathematical morphological filtering, Geophys. J. Int., 211, 1296–1318. Jin Z., Chapman M., Wu X., Papageorgiou G., 2017. Estimating gas saturation in a thin layer by using frequency-dependent amplitude versus offset modelling, Geophys. Prospect., 65(3), 747–765. Kahoo A., Siahkoohi T.R., 2009. Random noise suppression from seismic data using time frequency peak filtering, in 71st Annual International Conference and Exhibition, EAGE, Extended Abstracts. Keshavan R.H., Montanari A., Oh S., 2010. Matrix completion from a few entries, IEEE Trans. Inf. Theory, 56(6), 2980–2998. Kong D., Peng Z., He Y., Fan H., 2016. Seismic random noise attenuation using directional total variation in the shearlet domain, J. Seismic Explor., 25(4), 321–338. Li F., Zhou H., Zhao T., Marfurt K., 2016a. Unconventional reservoir characterization based on spectrally corrected seismic attenuation estimation, J. Seismic Explor., 25(5), 447–461. Li H., Wang R., Cao S., Chen Y., Huang W., 2016b. 
A method for low-frequency noise suppression based on mathematical morphology in microseismic monitoring, Geophysics, 81(3), V159–V167. Li H., Wang R., Cao S., Chen Y., Tian N., Chen X., 2016c. Weak signal detection using multiscale morphology in microseismic monitoring, J. Appl. Geophys., 133, 39–49. Li J., Wang S., Yang D., Tang G., Chen Y., 2017. Subsurface attenuation estimation using a novel hybrid method based on FWE function and power spectrum, Explor. Geophys., doi:10.1071/EG16022. Li Y., Li Z., Zhang K., Lin Y., 2016d. Frequency-domain full waveform inversion with rugged free surface based on variable grid finite-difference method, J. Seismic Explor., 25(6), 543–543. Lin H., Li Y., Yang B., Ma T., 2013. Random denoising and signal nonlinearity approach by time-frequency peak filtering using weighted frequency reassignment, Geophysics, 78, V229–V237. Lines L., Daley P., Embleton J., Gray D., 2016. PSSP waves - their presence and possible utilization in seismic inversion, J. Seismic Explor., 25(6), 497–512. Liu Y., 2013. Noise reduction by vector median filtering, Geophysics, 78, V79–V87. Liu Y., Fomel S., 2010. High-order seislet transform and its application of random noise attenuation, Geophysics, 75, WB235–WB245. Liu G., Chen X., 2013. Noncausal f-x-y regularized nonstationary prediction filtering for random noise attenuation on 3D seismic data, J. Appl. Geophys., 93, 60–66. Liu G., Fomel S., Jin L., Chen X., 2009a. Stacking seismic data using local correlation, Geophysics, 74, V43–V48. Liu Y., Liu C., Wang D., 2009b. A 1D time-varying median filter for seismic random, spike-like noise elimination, Geophysics, 74, V17–V24. 
Liu G., Chen X., Du J., Song J., 2011. Seismic noise attenuation using nonstationary polynomial fitting, Appl. Geophys., 8(1), 18–26. Liu G., Chen X., Du J., Wu K., 2012. Random noise attenuation using f-x regularized nonstationary autoregression, Geophysics, 77, V61–V69. Liu Y., Fomel S., Liu C., 2015. Signal and noise separation in prestack seismic data using velocity-dependent seislet transform, Geophysics, 80(6), WD117–WD128. Liu C., Wang D., Hu B., Wang T., 2016a. Seismic deconvolution with shearlet sparsity constrained inversion, J. Seismic Explor., 25(5), 433–445. Liu W., Cao S., Chen Y., 2016b. Seismic time-frequency analysis via empirical wavelet transform, IEEE Geosci. Remote Sens. Lett., 13, 28–32. Liu W., Cao S., Chen Y., 2016c. Application of variational mode decomposition in random noise attenuation and time-frequency analysis of seismic data, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201601249. Liu W., Cao S., Chen Y., 2016d. Applications of variational mode decomposition in seismic time-frequency analysis, Geophysics, 81, V365–V378. Liu W., Cao S., Chen Y., Zu S., 2016e. An effective approach to attenuate random noise based on compressive sensing and curvelet transform, J. Geophys. Eng., 13, 135–145. Liu W., Cao S., Gan S., Chen Y., Zu S., Jin Z., 2016f. One-step slope estimation for dealiased seismic data reconstruction via iterative seislet thresholding, IEEE Geosci. Remote Sens. Lett., 13, 1462–1466. Liu W., Cao S., Liu Y., Chen Y., 2016g. Synchrosqueezing transform and its applications in seismic data analysis, J. Seismic Explor., 25, 27–44. 
Liu W., Cao S., Wang Z., Kong X., Chen Y., 2017. Spectral decomposition for hydrocarbon detection based on VMD and Teager-Kaiser energy, IEEE Geosci. Remote Sens. Lett., 14(4), 539–543. Mairal J., Sapiro G., Elad M., 2008. Learning multiscale sparse representations for image and video restoration, SIAM Multiscale Model. Simul., 7(1), 214–241. Mayne W.H., 1962. Common reflection point horizontal data stacking techniques, Geophysics, 27, 927–938. Mousavi S.M., Langston C.A., 2016a. Hybrid seismic denoising using higher-order statistics and improved wavelet block thresholding, Bull. seism. Soc. Am., 106(4), 1380–1393. Mousavi S.M., Langston C.A., 2016b. Adaptive noise estimation and suppression for improving microseismic event detection, J. Appl. Geophys., 132, 116–124. Mousavi S.M., Langston C.A., 2017. Automatic noise-removal/signal-removal based on general cross-validation thresholding in synchrosqueezed domain and its application on earthquake data, Geophysics, 82(4), V211–V227. Mousavi S.M., Langston C.A., Horton S.P., 2016. Automatic microseismic denoising and onset detection using the synchrosqueezed continuous wavelet transform, Geophysics, 81(4), V341–V355. Naghizadeh M., Sacchi M.D., 2011. Seismic data interpolation and denoising in the frequency-wavenumber domain, Geophysics, 77(2), V71–V80. Naghizadeh M., Sacchi M.D., 2012. Multicomponent seismic random noise attenuation via vector autoregressive operators, Geophysics, 78(2), V91–V99. Neelamani R., Baumstein A.I., Gillard D.G., Hadidi M.T., Soroka W.L., 2008. Coherent and random noise attenuation using the curvelet transform, Leading Edge, 27, 240–248. 
Oropeza V., Sacchi M., 2011. Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis, Geophysics, 76, V25–V32. Pratt G., Shin C., Hick G., 1998. Gauss-Newton and full Newton methods in frequency-space seismic waveform inversion, Geophys. J. Int., 133, 341–362. Qu S., Verschuur D.J., 2016. Getting accurate time-lapse information using geology-constrained simultaneous joint migration-inversion, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 5451–5456. Qu S., Zhou H., Chen H., Zu S., Zhi L., 2014. Separation of simultaneous vibroseis data, in 84th Annual International Meeting, SEG, Expanded Abstracts, pp. 4340–4344. Qu S., Zhou H., Chen Y., Yu S., Zhang H., Yuan J., Yang Y., Qin M., 2015. An effective method for reducing harmonic distortion in correlated vibroseis data, J. Appl. Geophys., 115, 120–128. Qu S., Verschuur D., Chen Y., 2016a. Full waveform inversion using an automatic directional total variation constraint, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201701340. Qu S., Zhou H., Liu R., Chen Y., Zu S., Yu S., Yuan J., Yang Y., 2016b. Deblending of simultaneous-source seismic data using fast iterative shrinkage-thresholding algorithm with firm-thresholding, Acta Geophys., 64, 1064–1092. Ren C., Tian X., 2016. Prestack migration based on asymmetric wave-equation extrapolation, J. Seismic Explor., 25(4), 375–397. Sacchi M., Ulrych T., 1995. High-resolution velocity gathers and offset space reconstruction, Geophysics, 60, 1169–1177. Sarwar B., Karypis G., Konstan J., Riedl J., 2002. 
Incremental singular value decomposition algorithms for highly scalable recommender systems, in Fifth International Conference on Computer and Information Science, pp. 27–28. Shearer P.M., 1991a. Imaging global body wave phases by stacking long-period seismograms, J. geophys. Res., 96(B12), 20 535–20 324. Shearer P.M., 1991b. Constraints on upper mantle discontinuities from observations of long period reflected and converted phases, J. geophys. Res., 96(B11), 18 147–18 182. Shen H., Yan Y., Chen C., Zhang B., 2016. Multiple-transient surface wave phase velocity analysis in expanded f-k domain and its application, J. Seismic Explor., 25(4), 299–319. Siahsar M.A.N., Gholtashi S., Kahoo A.R., Marvi H., Ahmadifard A., 2016. Sparse time-frequency representation for seismic noise reduction using low-rank and sparse decomposition, Geophysics, 81(2), V117–V124. Siahsar M.A.N., Abolghasemi V., Chen Y., 2017a. Simultaneous denoising and interpolation of 2D seismic data using data-driven non-negative dictionary learning, Signal Process., 141, 309–321. Siahsar M.A.N., Gholtashi S., Kahoo A.R., Chen W., Chen Y., 2017b. Data-driven multi-task sparse dictionary learning for noise attenuation of 3D seismic data, Geophysics, 82(6), doi:10.1190/geo2017-0084.1. Siahsar M.A.N., Gholtashi S., Olyaei E., Chen W., Chen Y., 2017c. Simultaneous denoising and interpolation of 3D seismic data via damped data-driven optimal singular value shrinkage, IEEE Geosci. Remote Sens. Lett., 14(7), 1086–1090. Sun W., Wang H., 2016. Water-layer-related demultiple method using constraints in the sparse, local tau-p domain, J. Seismic Explor., 25(5), 463–483. Trickett S., 2008. F-xy Cadzow noise suppression, CSPG CSEG CWLS Convention, pp. 303–306. Vautard R., Yiou P., Ghil M., 1992. 
Singular-spectrum analysis: a toolkit for short, noisy chaotic signals, Physica D: Nonlinear Phenom., 58(1), 95–126. Verma A.K., Gurve D., 2015. Image decomposition using adaptive empirical wavelet transform, IOSR J. VLSI Signal Process., 5(1), 36–44. Wang Y., 1999. Random noise attenuation using forward-backward linear prediction, J. Seismic Explor., 8, 133–142. Wang B., Li J., Chen X., 2015a. A novel method for simultaneous seismic data interpolation and noise removal based on the l0 constraint, J. Seismic Explor., 24(3), 187–204. Wang B., Wu R., Chen X., Li J., 2015b. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform, Geophys. J. Int., 201, 1180–1192. Wang Y., Cao J., Yang C., 2011. Recovery of seismic wavefields based on compressive sensing by an l1-norm constrained trust region method and the piecewise random subsampling, Geophys. J. Int., 187, 199–213. Wang Y., Zhou H., Zu S., Mao W., Chen Y., 2017. Three-operator proximal splitting scheme for 3D seismic data reconstruction, IEEE Geosci. Remote Sens. Lett., 14(10), 1830–1834. Wu G., Fomel S., Chen Y., 2017. Data-driven time-frequency analysis of seismic data using non-stationary Prony method, Geophys. Prospect., doi:10.1111/1365-2478.12530. Xie Q., Chen Y., Zhang G., Gan S., Wang E., 2015a. Seismic data analysis using synchrosqueezing wavelet transform - a case study applied to Boonsville field, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201412752. Xie Q., Di B., Chen Y., Gan S., Jiao S., 2015b. Simultaneous structure enhancing filtering and fault detection using plane wave prediction, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201412614. Xie J., Chen W., Zhang D., Zu S., Chen Y., 2017. 
Application of principal component analysis in weighted stacking of seismic data, IEEE Geosci. Remote Sens. Lett., 14(8), 1213–1217. Xue Y., Ma J., Chen X., 2014. High-order sparse Radon transform for AVO-preserving data reconstruction, Geophysics, 79, V13–V22. Xue Y., Chang F., Zhang D., Chen Y., 2016a. Simultaneous sources separation via an iterative rank-increasing method, IEEE Geosci. Remote Sens. Lett., 13(12), 1915–1919. Xue Y., Yang J., Ma J., Chen Y., 2016b. Amplitude-preserving nonlinear adaptive multiple attenuation using the high-order sparse Radon transform, J. Geophys. Eng., 13, 207–219. Xue Y., Man M., Zu S., Chang F., Chen Y., 2017. Amplitude-preserving iterative deblending of simultaneous source seismic data using high-order Radon transform, J. Appl. Geophys., 139, 79–90. Yang W., Wang R., Wu J., Chen Y., Gan S., Zhong W., 2015. An efficient and effective common reflection surface stacking approach using local similarity and plane-wave flattening, J. Appl. Geophys., 117, 67–72. Yu S., Ma J., Zhang X., Sacchi M., 2015. Denoising and interpolation of high-dimensional seismic data by learning tight frame, Geophysics, 80(5), V119–V132. Zhang C., Chen L., 2015. A symplectic partitioned Runge-Kutta method using the eighth-order NAD operator for solving the 2D elastic wave equation, J. Seismic Explor., (3), 205–230. Zhang D., Chen Y., Gan S., 2016a. Iterative reconstruction of 3D seismic data via multiple constraints, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201601241. Zhang D., Chen Y., Gan S., 2016b. 
Multidimensional seismic data reconstruction with multiple constraints, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 4801–4806. Zhang D., Chen Y., Huang W., Gan S., 2016c. Multi-step damped multichannel singular spectrum analysis for simultaneous reconstruction and denoising of 3D seismic data, J. Geophys. Eng., 13, 704–720. Zhang D., Chen Y., Huang W., Gan S., 2016d. Multi-step reconstruction of 3D seismic data via an improved MSSA algorithm, in CPS/SEG Beijing 2016 International Geophysical Conference & Exposition, SEG, Expanded Abstracts, pp. 745–749. Zhang D., Zhou Y., Chen H., Chen W., Zu S., Chen Y., 2017. Hybrid rank-sparsity constraint model for simultaneous reconstruction and denoising of 3D seismic data, Geophysics, 82(5), V351–V367. Zhang H., Xu J., Liu Q., Zhang J., 2016e. Imaging 2D rugged topography seismic data: a topography PSTM approach integrated with residual static correction, J. Seismic Explor., 25(4), 339–358. Zhang Q., Chen Y., Guan H., Wen J., 2016f. Well-log constrained inversion for lithology characterization: a case study at the JZ25-1 oil field, China, J. Seismic Explor., 25(2), 121–129. Zhang R., Ulrych T., 2003. Physical wavelet frame denoising, Geophysics, 68, 225–231. Zhong W., Chen Y., Gan S., Yuan J., 2014. L1/2 norm regularization for 3D seismic data interpolation, J. Seismic Explor., 25, 257–268. Zhou Y., 2017. A POCS method for iterative deblending constrained by a blending mask, J. Appl. Geophys., 138, 245–254. Zhou Y., Shi C., Chen H., Xie J., Wu G., Chen Y., 2017. Spike-like blending noise attenuation using structural low-rank decomposition, IEEE Geosci. Remote Sens. Lett., 14(9), 1633–1637. Zu S., Zhou H., Chen Y., Liu Y., Qu S., 2015. 
A periodically variational dithering code for improving deblending, in 85th Annual International Meeting, SEG Expanded Abstracts, pp. 38–42. Zu S., Zhou H., Chen Y., Qu S., Zou X., Chen H., Liu R., 2016. A periodically varying code for improving deblending of simultaneous sources in marine acquisition, Geophysics, 81(3), V213–V225. Zu S., Zhou H., Chen H., Zheng H., Chen Y., 2017a. Two field trials for deblending of simultaneous source surveys: why we failed and why we succeeded?, J. Appl. Geophys., 143, 182–194. Zu S., Zhou H., Li Q., Chen H., Zhang Q., Mao W., Chen Y., 2017b. Shot-domain deblending using least-squares inversion, Geophysics, 82(4), V241–V256. Zu S., Zhou H., Mao W., Zhang D., Li C., Pan X., Chen Y., 2017c. Iterative deblending of simultaneous-source data using a coherency-pass shaping operator, Geophys. J. Int., 211(1), 541–557.

APPENDIX A: DERIVATION OF THE STEP SIZE

A gradient step along the geodesic with tangent vector −∇J can be expressed as (Edelman et al. 1988):   \begin{eqnarray} \mathbf {U}_{n+1} &=& \left[\begin{array}{c@{\quad}c}\mathbf {U}_{n}\mathcal {V} & \mathcal {U} \end{array}\right] \left[\begin{array}{c}\cos (\Sigma \eta ) \\ \sin (\Sigma \eta ) \end{array}\right]\mathcal {V}^T \nonumber\\ &=&\mathbf {U}_{n}\mathcal {V}\cos (\Sigma \eta )\mathcal {V}^T + \mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T, \end{eqnarray} (A1) where η is the step size and $$\mathcal {U}\Sigma \mathcal {V}^T$$ is the compact SVD of −∇J. Eq. (A1) can be further derived as   \begin{eqnarray} \Delta U &=& \mathbf {U}_{n+1} - \mathbf {U}_{n} \nonumber\\ &=&\mathbf {U}_{n}\mathcal {V}\cos (\Sigma \eta )\mathcal {V}^T-\mathbf {U}_{n} + \mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T. 
\end{eqnarray} (A2) Since $$\mathcal {V}\mathcal {V}^T=I$$,   \begin{eqnarray} \Delta U &=& \mathbf {U}_{n+1} - \mathbf {U}_{n} \nonumber\\ &=&\mathbf {U}_{n}\mathcal {V}\cos (\Sigma \eta )\mathcal {V}^T-\mathbf {U}_{n} + \mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T \nonumber\\ &=&\mathbf {U}_{n}\left(\mathcal {V}\cos (\Sigma \eta )\mathcal {V}^T -\mathcal {V}\mathcal {V}^T\right) +\mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T \nonumber\\ &=& \mathbf {U}_{n}\mathcal {V}\left(\cos (\Sigma \eta )-I\right)\mathcal {V}^T +\mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T. \end{eqnarray} (A3) Let   $$-\nabla J = 2\mathbf {r}\mathbf {w}^T = \mathcal {U}\Sigma \mathcal {V}^T,$$ (A4) where   $$\mathcal {U}=\left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\frac{\mathbf {r}}{\| \mathbf {r} \| } & \mathbf {x}_2 & \cdots & \mathbf {x}_K \end{array} \right],$$ (A5)  $$\Sigma =\text{diag}(\sigma ,0,\ldots ,0),$$ (A6)  $$\mathcal {V}=\left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\frac{\mathbf {w}}{\| \mathbf {w} \| } & \mathbf {y}_2 & \cdots & \mathbf {y}_K \end{array} \right].$$ (A7) In eqs (A5)–(A7), σ = 2‖r‖‖w‖, [x2, …, xK] is an orthonormal set orthogonal to r, and [y2, …, yK] is an orthonormal set orthogonal to w. Inserting eqs (A5)–(A7) into eq. (A3), the update step becomes   \begin{eqnarray} \Delta U &=&\mathbf {U}_{n}\mathcal {V}\left(\cos (\Sigma \eta )-I\right)\mathcal {V}^T +\mathcal {U}\sin (\Sigma \eta )\mathcal {V}^T \nonumber\\ &=&\mathbf {U}_{n} \frac{\mathbf {w}}{\| \mathbf {w} \| }(\cos (\sigma \eta )-1)\frac{\mathbf {w}^T}{\| \mathbf {w} \| } + \frac{\mathbf {r}}{\| \mathbf {r} \| }\sin (\sigma \eta ) \frac{\mathbf {w}^T}{\| \mathbf {w} \| }\nonumber\\ &=&\mathbf {U}_{n}(\cos (\sigma \eta )-1)\frac{\mathbf {w}\mathbf {w}^T}{\| \mathbf {w} \| ^2}+\sin (\sigma \eta )\frac{\mathbf {r}}{\| \mathbf {r} \| }\frac{\mathbf {w}^T}{\| \mathbf {w} \| }. 
\end{eqnarray} (A8) Letting p = Unw, and noting that ‖p‖ = ‖w‖ because Un has orthonormal columns,   $$\Delta \mathbf {U} = \left((\cos (\sigma \eta )-1)\frac{\mathbf {p}}{\| \mathbf {p} \| } + \sin (\sigma \eta )\frac{\mathbf {r}}{\| \mathbf {r} \| }\right) \frac{\mathbf {w}^T}{\| \mathbf {w} \| }.$$ (A9)

APPENDIX B: MIXED SIGNAL AND NOISE SUBSPACE USING TSVD-BASED SSA

Let us reformulate the Hankel matrix H expressed in eq. (11) as follows:   $$\mathbf {H}=\mathbf {S}+\mathbf {N},$$ (B1) where S denotes the signal component in H and N denotes the pre-arranged noise in the Hankel matrix. The SVD of S can be represented as:   $$\mathbf {S} = \big[\mathcal {U}_1^{S}\quad \mathcal {U}_2^{S}\big]\left[\begin{array}{c@{\quad}c}\Sigma _1^{S} & \mathbf {0}\\ \mathbf {0} & \Sigma _2^{S} \end{array} \right]\left[\begin{array}{c}(\mathcal {V}_1^{S})^T\\ (\mathcal {V}_2^{S})^T \end{array} \right].$$ (B2) Because S is rank-deficient, it can be written as   $$\mathbf {S}=\mathcal {U}_1^S\Sigma _1^S(\mathcal {V}_1^S)^T.$$ (B3) Because eq. (B2) is an SVD of the signal matrix S, the left matrix in eq. 
(B2) is a unitary matrix:   $$\mathbf {I}=\mathcal {U}^S(\mathcal {U}^S)^T=[\mathcal {U}_1^S\quad \mathcal {U}_2^S]\left[\begin{array}{c}(\mathcal {U}_1^S)^T \\ (\mathcal {U}_2^S)^T \end{array} \right].$$ (B4) Combining eqs (B1), (B3) and (B4), we can derive:   \begin{eqnarray} \mathbf {H}&=& \mathbf {S}+\mathbf {N}\nonumber\\ &=&\mathbf {S}+\mathbf {I}\mathbf {N}\nonumber\\ &=& \mathbf {S}+\big[\mathcal {U}_1^S\quad \mathcal {U}_2^S\big]\left[\begin{array}{c}(\mathcal {U}_1^S)^T \\ (\mathcal {U}_2^S)^T \end{array} \right]\mathbf {N} \nonumber\\ &=&\mathcal {U}_1^S\Sigma _1^S(\mathcal {V}_1^S)^T + \left( \mathcal {U}_1^S(\mathcal {U}_1^S)^T+\mathcal {U}_2^S(\mathcal {U}_2^S)^T \right)\mathbf {N}\nonumber\\ &=&\mathcal {U}_1^S \left( (\mathcal {U}_1^S)^T\mathbf {N}+\Sigma _1^S(\mathcal {V}_1^S)^T \right)+\mathcal {U}_2^S(\mathcal {U}_2^S)^T\mathbf {N}\nonumber\\ &=&\mathcal {U}_1^S\left( \mathbf {N}^T\mathcal {U}_1^S+\mathcal {V}_1^S\Sigma _1^S \right)^T + \mathcal {U}_2^S(\mathbf {N}^T\mathcal {U}_2^S)^T\nonumber\\ &=& [\mathcal {U}_1^S \quad \mathcal {U}_2^S]\left[\begin{array}{c@{\quad}c}\mathbf {I} & \mathbf {0}\\ \mathbf {0} & \mathbf {I} \end{array} \right]\left[\begin{array}{c}(\mathbf {N}^T\mathcal {U}_1^S+\mathcal {V}_1^S\Sigma _1^S)^T\\ (\mathbf {N}^T\mathcal {U}_2^S)^T \end{array} \right]. \end{eqnarray} (B5) We can further factorize H from eq. (B5) as follows:   $$\mathbf {H} = \big[\mathcal {U}_1^S \quad \mathcal {U}_2^S\big]\left[\begin{array}{c@{\quad}c}\Sigma _{1} & \mathbf {0}\\ \mathbf {0} & \Sigma _{2} \end{array} \right]\left[\begin{array}{c}(\Sigma _{1})^{-1}(\mathbf {N}^T\mathcal {U}_1^S+\mathcal {V}_1^S\Sigma _1^S)^T\\ (\Sigma _{2})^{-1}(\mathbf {N}^T\mathcal {U}_2^S)^T \end{array} \right],$$ (B6) where Σ1 and Σ2 denote diagonal and positive definite matrices. Note that H is constructed such that it is close to a square matrix (Oropeza & Sacchi 2011), and thus Σ1 and Σ2 are assumed to be square matrices for derivation convenience. 
We observe that the left matrix has orthonormal columns and the middle matrix is diagonal. It can be shown that the right matrix has orthonormal rows. Thus, eq. (B6) is an SVD of H. According to the TSVD method, we set Σ2 to 0, and then the following equation holds:   \begin{eqnarray} \tilde{\mathbf {H}} &=& \mathcal {U}_1^S(\mathcal {U}_1^S)^T\mathbf {N} + \mathcal {U}_1^S\Sigma _1^S(\mathcal {V}_1^S)^T \nonumber\\ &=&\mathbf {S} + \mathcal {U}_1^S(\mathcal {U}_1^S)^T\mathbf {N}. \end{eqnarray} (B7) From eq. (B7), it is straightforward to see that the TSVD result $$\tilde{\mathbf {H}}$$ still contains significant random noise $$\mathcal {U}_1^S(\mathcal {U}_1^S)^T\mathbf {N}$$; in other words, the traditional TSVD can only decompose the data space into a mixture of signal and noise subspaces (Huang et al. 2017a).

APPENDIX C: DERIVATION OF THE 3-D LOW-RANK MATRIX CONSTRUCTION

Instead of constructing a Hankel matrix from a 1-D spatial trace, in the 3-D case we construct a block Hankel matrix from a 2-D spatial matrix (X and Y) in the frequency domain (Oropeza & Sacchi 2011). Consider a block of 3-D data $$\mathbf {D}_{\text{time}}(x,y,t)$$ of Nx by Ny by Nt samples. First, we transform $$\mathbf {D}_{\text{time}}(x,y,t)$$ into $$\mathbf {D}_{\text{freq}}(x,y,w)$$ (w = 1, …, Nw) with complex values in the frequency domain. Each frequency slice of the data, at a given frequency w0, can be represented by the following matrix:   $$\mathbf {D}(w_0)=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}D(1,1) & D(1,2) & \cdots &D(1,N_x) \\ D(2,1) & D(2,2) &\cdots &D(2,N_x) \\ \vdots & \vdots &\ddots &\vdots \\ D(N_y,1)&D(N_y,2) &\cdots &D(N_y,N_x) \end{array} \right).$$ (C1) In the following, w0 is omitted for notational convenience. Then, we construct a Hankel matrix for each row of D. 
The Hankel matrix Ri for row i of D is as follows:   $$\mathbf {R}_i=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} D(i,1) & D(i,2) & \cdots &D(i,p) \\ D(i,2) & D(i,3) &\cdots &D(i,p+1) \\ \vdots & \vdots &\ddots &\vdots \\ D(i,N_x-p+1)&D(i,N_x-p+2) &\cdots &D(i,N_x) \end{array}\right),$$ (C2) which has exactly the same form as eq. (11). The block Hankel matrix is then constructed by treating each Hankel matrix Ri as an element and arranging these elements into   $$\mathbf {H}=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\mathbf {R}_1 &\mathbf {R}_2 & \cdots &\mathbf {R}_q \\ \mathbf {R}_2 &\mathbf {R}_3 &\cdots &\mathbf {R}_{q+1} \\ \vdots & \vdots &\ddots &\vdots \\ \mathbf {R}_{N_y-q+1}&\mathbf {R}_{N_y-q+2} &\cdots &\mathbf {R}_{N_y} \end{array} \right).$$ (C3) The size of H is I × J, with I = (Nx − p + 1)(Ny − q + 1) and J = pq. Here, p and q are predefined integers chosen such that the Hankel matrices Ri and the block Hankel matrix H are close to square, for example, $$p=N_x-\lfloor \frac{N_x}{2}\rfloor$$ and $$q=N_y-\lfloor \frac{N_y}{2}\rfloor$$, where ⌊ · ⌋ denotes the integer part of the argument. The subsequent low-rank decomposition steps, using either SVD or GROUSE, are exactly the same as those in the 2-D case introduced in detail in the main text. © The Author(s) 2017. Published by Oxford University Press on behalf of The Royal Astronomical Society.
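To make the appendix constructions concrete, the sketch below builds the block Hankel matrix of eq. (C3) from one complex frequency slice and applies a rank-one subspace update in the spirit of eqs (A8)–(A9). This is a minimal illustration, not the authors' implementation: the names `block_hankel` and `grouse_step` are invented here, the update assumes a fully observed data column, and the step size η is an arbitrary small constant.

```python
import numpy as np

def block_hankel(D):
    """Block Hankel matrix H (eq. C3) from one Ny-by-Nx frequency slice D."""
    Ny, Nx = D.shape
    p = Nx - Nx // 2  # choices that keep R_i and H close to square
    q = Ny - Ny // 2
    # Hankel matrix R_i (eq. C2) built from row i of D
    R = [np.array([D[i, j:j + p] for j in range(Nx - p + 1)]) for i in range(Ny)]
    # arrange the R_i into the (Ny - q + 1)-by-q block Hankel structure
    return np.block([[R[i + j] for j in range(q)] for i in range(Ny - q + 1)])

def grouse_step(U, v, eta):
    """One gradient step on the Grassmannian for an orthonormal basis U,
    driven by a single fully observed data column v (cf. eqs A8-A9)."""
    w = U.T @ v                      # weights of v in the current subspace
    p_vec = U @ w                    # projection onto span(U); ||p|| = ||w||
    r = v - p_vec                    # residual, orthogonal to span(U)
    sigma = 2.0 * np.linalg.norm(r) * np.linalg.norm(w)
    if sigma == 0.0:                 # v already lies in the subspace
        return U
    step = ((np.cos(sigma * eta) - 1.0) * p_vec / np.linalg.norm(p_vec)
            + np.sin(sigma * eta) * r / np.linalg.norm(r))
    return U + np.outer(step, w / np.linalg.norm(w))
```

Because the step moves along a geodesic, the updated basis stays orthonormal up to round-off, which is what keeps the online update cheap compared with recomputing a TSVD for every new observation.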

# Seismic noise attenuation using an online subspace tracking algorithm

Geophysical Journal International, Volume 212(2), 1 February 2018, 26 pages

Publisher: The Royal Astronomical Society
ISSN: 0956-540X
eISSN: 1365-246X
DOI: 10.1093/gji/ggx422

A successful separation between useful reflection signals and unwanted noise is a long-standing problem in seismic data processing and greatly affects the fidelity of subsequent seismic imaging and geophysical inversion, such as amplitude-variation-with-offset (AVO) inversion, full waveform inversion, attenuation estimation and geological interpretation (He & Wu 2015; Zhang & Chen 2015; Li et al. 2016a; Lines et al. 2016; Liu et al. 2016a; Mousavi & Langston 2016a,b; Mousavi et al. 2016; Qu & Verschuur 2016; Qu et al. 2016a; Zhang et al. 2016f; Jin et al. 2017; Li et al. 2017; Ebrahimi et al. 2017; Mousavi & Langston 2017). Effectively rejecting the cross-talk interference caused in simultaneous-source seismic acquisition also plays a significant role in reducing the data recording cost (Berkhout 2008; Qu et al. 2014, 2015, 2016b; Abma et al. 2015; Chen 2015a; Chen et al. 2015a,b, 2017e; Zu et al. 2015, 2017a,b,c; Zhou 2017). Over the past few decades, a large number of denoising methods for random noise have been developed and widely applied in practice. The simplest denoising method is stacking the seismic data along the offset direction (Mayne 1962; Liu et al. 2009a; Yang et al. 2015). By stacking the useful signals from multiple traces and multiple directions (e.g. offset and midpoint), the signal is enhanced while the influence of noise is mitigated. Prediction based methods utilize the predictable property of useful signals to construct prediction filters that enhance signals and reject noise, for example, t–x predictive filtering (Abma & Claerbout 1995), f–x deconvolution (Canales 1984; Naghizadeh & Sacchi 2012), the forward–backward prediction approach (Wang 1999), the polynomial fitting based approach (Liu et al. 2011) and non-stationary predictive filtering (Liu et al. 2012; Liu & Chen 2013). Sparse transform based approaches first transform seismic data to a sparse domain (Sacchi & Ulrych 1995; Wang et al.
2015b; Chen 2017; Huang et al. 2017b), then apply soft thresholding to the coefficients, finally transform the sparse coefficients back to the time–space domain. Widely used sparse transforms are Fourier transform (Pratt et al. 1998; Naghizadeh & Sacchi 2011; Zhong et al. 2014; Wang et al. 2015a; Li et al. 2016d; Shen et al. 2016), curvelet transform (Candès et al. 2006; Hermann et al. 2007; Neelamani et al. 2008; Liu et al. 2016e; Zu et al. 2016), seislet transform (Fomel & Liu 2010; Liu & Fomel 2010; Chen et al. 2014; Chen & Fomel 2015a; Gan et al. 2015b,c; Liu et al. 2015; Gan et al. 2016c; Liu et al. 2016f), seislet frame transform (Fomel & Liu 2010; Gan et al. 2015a, 2016b), shearlet transform (Kong et al. 2016; Liu et al. 2016a), Radon transform (Sacchi & Ulrych 1995; Xue et al. 2014; Sun & Wang 2016; Xue et al. 2016b, 2017), different types of wavelet transforms (e.g. physical, synchrosqueezing, empirical wavelet transforms, etc.) (Donoho & Johnstone 1994; Zhang & Ulrych 2003; Gao et al. 2006; Xie et al. 2015a; Liu et al. 2016b,g), and dictionary learning based sparse transform (Elad & Aharon 2006; Mairal et al. 2008; Yu et al. 2015; Chen et al. 2016c; Siahsar et al. 2017a,b). Decomposition based approaches decompose the noisy seismic data into different components and then select the principal components to represent the useful signals. Empirical mode decomposition and its variations (Chen & Ma 2014; Chen et al. 2015c, 2016a,b, 2017a,b,d; Gan et al. 2016a), variational mode decomposition (Liu et al. 2016c,d, 2017), singular value decomposition (SVD) based approaches (Bekara & van der Baan 2007; Huang et al. 2016b; Wang et al. 2017), morphological decomposition (Li et al. 2016b,c; Huang et al. 2017e,f,g), empirical wavelet transform (Gilles 2013; Verma & Gurve 2015), and regularized non-stationary decomposition based approaches (Fomel 2013; Wu et al. 
2017) are frequently used in the literature to extract the useful components of multidimensional seismic data. Rank-reduction based approaches assume the seismic data to be low-rank after some data rearrangement steps. Such methods include Cadzow filtering (Trickett 2008), singular spectrum analysis (SSA; Vautard et al. 1992; Oropeza & Sacchi 2011; Gao et al. 2013; Cheng & Sacchi 2015), damped SSA (Huang et al. 2015, 2016a; Chen et al. 2016d,e; Siahsar et al. 2017c), multistep SSA (Zhang et al. 2016c,2016d), sparsity-regularized SSA (Siahsar et al. 2016; Zhang et al. 2016a,b; Anvari et al. 2017; Zhang et al. 2017), randomized-order SSA (Huang et al. 2017d), structural low-rank approximation (Zhou et al. 2017), and empirical low-rank approximation (Chen et al. 2017f). Mean (or stacking) and median filters utilize the statistical difference between signal and noise to reject Gaussian white noise or impulsive noise (Liu et al. 2009b; Liu 2013; Xie et al. 2015b; Yang et al. 2015; Gan et al. 2016d; Chen et al. 2017c; Xie et al. 2017). Instead of developing a standalone denoising strategy, Chen & Fomel (2015b) proposed a two-step denoising approach that addresses a long-existing problem in almost all denoising approaches: signal leakage. By introducing a new concept called local orthogonalization, Chen & Fomel (2015b) successfully retrieved the coherent signals from the removed noise section to guarantee no signal leakage in any denoising algorithm. Huang et al. (2017c) used a similar algorithm to extract useful signals from highly noise-corrupted microseismic data using an orthogonalized morphological decomposition method. In addition to these classic noise attenuation methods, some advanced denoising methods have been proposed in recent years. Time-frequency peak filtering (Kahoo & Siahkoohi 2009; Lin et al.
2013) based approaches utilize the high-resolution property of time-frequency transforms to distinguish between useful signals and random noise. Xue et al. (2016a) used the rank-increasing property of noise when iteratively estimating the useful signals from simultaneous-source data. In this paper, we follow the general set-up of the rank-reduction based methods and propose a new algorithm based on a completely different low-rank decomposition engine. We substitute the SVD used in the traditional rank-reduction method with an efficient online subspace tracking (OSST) method. The subspace tracking algorithm requires only basic linear algebraic manipulations. Each subspace update can be performed in linear time in the dimension of the subspace. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces (Balzano et al. 2010). The subspace tracking method was initially proposed for matrix completion. Here, we carefully study the application of the subspace tracking method in seismic noise attenuation following the rank-reduction framework. After the seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly for low-rank decomposition. Due to the online nature of the subspace tracking algorithm, it reconstructs less random noise than the traditional SVD based subspace tracking algorithm during low-rank approximation. Here, 'online' simply means that each column of a low-rank matrix is inserted one-by-one into the algorithm for low-rank approximation. The traditional SSA can be replaced by the proposed algorithm to obtain significantly better performance and higher computational efficiency. The paper is organized as follows: we first introduce the low-rank approximation theory that is widely used in data processing applications, such as matrix completion, seismic noise attenuation, and data restoration.
We then introduce the OSST algorithm, providing detailed mathematical implications since it is relatively new in the seismic community. We also introduce the application of the subspace tracking method in seismic noise attenuation as a low-rank decomposition engine. Next, we use synthetic and field data examples to test and confirm the effectiveness of the proposed algorithm. We investigate the proposed algorithm from the perspective of removing different types of noise in seismic data, including random noise, spiky noise, blending noise, and coherent noise. We draw some key conclusions at the end of the paper.

### Theory

#### Low-rank approximation

Given a low-rank matrix $\mathbf{D}$, it can be approximated by tracking the $K$-dimensional subspace $S$ such that the following objective function is minimized:
$$\min_{S} J(S) = \min_{\mathbf{U},\mathbf{V}} \| \mathbf{U}\mathbf{V} - \mathbf{D}\|_F^2, \qquad (1)$$
where $\mathbf{U}$ is a matrix whose columns span $S$, $\mathbf{V}$ is the corresponding weighting matrix, $J=\|\mathbf{U}\mathbf{V} - \mathbf{D}\|_F^2$ is the objective function, and $\|\cdot\|_F$ denotes the Frobenius norm of an input matrix. Note that $\mathbf{U}\in\mathbb{R}^{M\times K}$, $\mathbf{V}\in\mathbb{R}^{K\times N}$, and $\mathbf{D}\in\mathbb{R}^{M\times N}$. Many different algorithms have been proposed to solve eq. (1). Keshavan et al. (2010) used a gradient descent algorithm to jointly minimize over $\mathbf{U}$ and $\mathbf{V}$, while Dai & Milenkovic (2010) minimized this cost function by first solving for $\mathbf{V}$ and then taking a gradient step with respect to $\mathbf{U}$. Most algorithms are based on the batch subspace tracking strategy (Balzano et al. 2010), where the full information in $\mathbf{D}$ is treated as a whole to solve for $\mathbf{U}$ and $\mathbf{V}$. These methods often require eigenvalue decomposition or SVD (Golub & Van Loan 1996; Sarwar et al. 2002; Cai et al. 2010b). Some accelerated decomposition algorithms like Lanczos (Gao et al.
2013), QR decomposition (Cheng & Sacchi 2014), and randomized SVD methods (Oropeza & Sacchi 2011) can also be used to solve eq. (1) with higher efficiency. We first introduce the most widely used SVD method for solving eq. (1) (Golub & Van Loan 1996; Cai et al. 2010a). SVD is a matrix factorization technique commonly used for producing low-rank approximations. The SVD of the data matrix $\mathbf{D}$ can be expressed as
$$\mathbf{D}=\mathbf{U\Sigma V}^T, \qquad (2)$$
where $\mathbf{U}$ is composed of the eigenvectors of $\mathbf{D}\mathbf{D}^T$, $\mathbf{V}$ is composed of the eigenvectors of $\mathbf{D}^T\mathbf{D}$, and $\mathbf{\Sigma}$ is a diagonal matrix whose diagonal contains the singular values in decreasing order. The SVD method is a batch, or offline, subspace tracking method (Balzano et al. 2010): the $K$-dimensional subspace $S$ is tracked provided that the whole matrix $\mathbf{D}$ is given. The counterpart of the offline subspace tracking algorithm is the OSST algorithm, where the low-rank approximation is done step by step, with consecutively inserted columns from $\mathbf{D}$ used to update the matrix $\mathbf{U}$. The OSST strategy has two main advantages. First, it is highly time- and memory-efficient when the number of columns in $\mathbf{D}$ is extremely large. Second, it better decomposes the data into the signal subspace, with less residual noise. Later in the paper, we use numerical examples to demonstrate the better subspace tracking performance of the online method. A theoretical proof of the mixed signal-and-noise subspace decomposed from the traditional truncated singular value decomposition (TSVD)-based SSA method is given in Appendix B. In the next subsection, we introduce in detail an OSST algorithm called Grassmannian rank-one update subspace estimation (GROUSE), initially proposed by Balzano et al. (2010).

#### Grassmannian rank-one update subspace estimation

Balzano et al. (2010) considered optimizing the objective function shown in eq. (1) one column at a time.
They showed that by using the online algorithm, where each measurement $\mathbf{d}_n$ corresponds to a random column of the matrix $\mathbf{D}$, it is possible to obtain a good approximation of the given matrix $\mathbf{D}$. The set of all $K$-dimensional subspaces of $\mathbb{R}^M$ is denoted $\mathcal{G}(M,K)$ and is called the Grassmannian. The Grassmannian is a compact Riemannian manifold, and its geodesics can be explicitly computed according to Edelman et al. (1988). An element $S\in\mathcal{G}(M,K)$ can be represented by any $M \times K$ matrix $\mathbf{U}$ whose columns form an orthonormal basis for $S$. The GROUSE algorithm derives from an application of incremental gradient descent on the Grassmannian manifold. We first compute a gradient of the cost function $J$, and then follow this gradient along a short geodesic curve in the Grassmannian. To compute the gradient of $J$ on the Grassmannian manifold, we first need the partial derivatives of $J$ with respect to the components of $\mathbf{U}$. For a generic subspace, the matrix $\mathbf{U}^T\mathbf{U}$ has full rank. We can rewrite the objective function as
$$J(S,n)=\| \mathbf{U}\mathbf{v}-\mathbf{d}_n \|_2^2, \qquad (3)$$
where $n$ denotes the $n$th column in $\mathbf{D}$ and $\mathbf{v}$ denotes a column vector in $\mathbf{V}$. It follows that the derivative of $J$ with respect to the elements of $\mathbf{U}$ is
$$\frac{\partial J}{\partial \mathbf{U}} = 2(\mathbf{U}\mathbf{v}-\mathbf{d}_n)\mathbf{v}^T = -2(\mathbf{d}_n-\mathbf{U}\mathbf{v})\mathbf{v}^T. \qquad (4)$$
Here, $\mathbf{v}$ is chosen as the least-squares solution $\mathbf{w}$ of
$$\min_{\mathbf{v}} \| \mathbf{U}\mathbf{v}-\mathbf{d}_n \|_2^2, \qquad (5)$$
that is,
$$\mathbf{w}=(\mathbf{U}^T\mathbf{U})^{-1}\mathbf{U}^T\mathbf{d}_n. \qquad (6)$$
With $\mathbf{r} = \mathbf{d}_n - \mathbf{U}\mathbf{w}$ denoting the residual vector, eq. (4) can be written as
$$\frac{\partial J}{\partial \mathbf{U}} = -2\mathbf{r}\mathbf{w}^T. \qquad (7)$$
It can be further derived that (Edelman et al.
1988)
$$\nabla J = (\mathbf{I}-\mathbf{U}\mathbf{U}^T)\frac{\partial J}{\partial \mathbf{U}} = -2(\mathbf{I}-\mathbf{U}\mathbf{U}^T)\mathbf{r}\mathbf{w}^T = -2\mathbf{r}\mathbf{w}^T, \qquad (8)$$
where the last equality holds because the residual $\mathbf{r}$ is orthogonal to the columns of $\mathbf{U}$. A gradient step along the geodesic with tangent vector $-\nabla J$ is given in Edelman et al. (1988), and is a function of the singular values and vectors of $\nabla J$. It is trivial to compute the SVD of $\nabla J$, as it is rank one. The sole non-zero singular value is $\sigma = 2\|\mathbf{r}\|\|\mathbf{w}\|$, and the corresponding left and right singular vectors are $\frac{\mathbf{r}}{\|\mathbf{r}\|}$ and $\frac{\mathbf{w}}{\|\mathbf{w}\|}$, respectively. Let $\mathbf{x}_2, \ldots, \mathbf{x}_K$ be an orthonormal set orthogonal to $\mathbf{r}$ and $\mathbf{y}_2, \ldots, \mathbf{y}_K$ be an orthonormal set orthogonal to $\mathbf{w}$. Then
$$-2\mathbf{r}\mathbf{w}^T = \begin{bmatrix} -\frac{\mathbf{r}}{\|\mathbf{r}\|} & \mathbf{x}_2 & \cdots & \mathbf{x}_K \end{bmatrix} \, \mathrm{diag}(\sigma,0,\ldots,0) \, \begin{bmatrix} \frac{\mathbf{w}}{\|\mathbf{w}\|} & \mathbf{y}_2 & \cdots & \mathbf{y}_K \end{bmatrix}^T \qquad (9)$$
forms an SVD of the gradient. It can be derived that, for $\eta > 0$, a step of length $\eta$ in the direction $-\nabla J$ is given by
$$\mathbf{U}(\eta) = \mathbf{U}+\frac{\cos(\sigma\eta)-1}{\|\mathbf{w}\|^2}\mathbf{U}\mathbf{w}\mathbf{w}^T+\sin(\sigma\eta)\frac{\mathbf{r}}{\|\mathbf{r}\|}\frac{\mathbf{w}^T}{\|\mathbf{w}\|} = \mathbf{U}+\left(\sin(\sigma\eta)\frac{\mathbf{r}}{\|\mathbf{r}\|} + (\cos(\sigma\eta)-1)\frac{\mathbf{p}}{\|\mathbf{p}\|}\right)\frac{\mathbf{w}^T}{\|\mathbf{w}\|}, \qquad (10)$$
where $\mathbf{p} = \mathbf{U}\mathbf{w}$ is the predicted value of the projection of the vector $\mathbf{d}_n$ onto $S$. Appendix A provides a detailed derivation of the step $\Delta\mathbf{U} = \mathbf{U}_{n+1} - \mathbf{U}_n$. The whole GROUSE algorithm is detailed in Algorithm 1 (Balzano et al. 2010).
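As a concrete illustration, the update of eqs (5)-(10) can be sketched in a few lines of NumPy for a fully observed column. This is our own minimal sketch, not the paper's implementation; in particular, the "greedy" step size below, chosen so that the updated subspace contains $\mathbf{d}_n$ exactly, is an assumption for illustration, since the text leaves the choice of $\eta$ open.

```python
import numpy as np

def grouse_step(U, d, eta):
    """One GROUSE rank-one update (eqs 5-10) for a fully observed column d.

    U   : M x K matrix with orthonormal columns (current subspace estimate)
    d   : length-M data vector (one column of D)
    eta : step length along the geodesic
    """
    w = np.linalg.lstsq(U, d, rcond=None)[0]      # eq. (6): least-squares weights
    p = U @ w                                     # projection of d onto the subspace
    r = d - p                                     # residual vector
    sigma = 2.0 * np.linalg.norm(r) * np.linalg.norm(w)  # sole singular value of the gradient
    if sigma == 0.0:                              # d already lies in (or orthogonal to) S
        return U
    # eq. (10): move a step of length eta along the geodesic
    step = (np.sin(sigma * eta) * r / np.linalg.norm(r)
            + (np.cos(sigma * eta) - 1.0) * p / np.linalg.norm(p))
    return U + np.outer(step, w / np.linalg.norm(w))

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((30, 2)))[0]  # current subspace estimate (orthonormal)
d = rng.standard_normal(30)                        # incoming column

# A "greedy" step size (our assumption): with sigma*eta = arctan(||r||/||w||),
# the updated subspace contains d exactly.
w = np.linalg.lstsq(U, d, rcond=None)[0]
r = d - U @ w
sigma = 2.0 * np.linalg.norm(r) * np.linalg.norm(w)
eta = np.arctan2(np.linalg.norm(r), np.linalg.norm(w)) / sigma
U2 = grouse_step(U, d, eta)
```

Note that the geodesic update preserves the orthonormality of the columns of U exactly, so no re-orthogonalization step is needed between updates.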
Algorithm 1: Grassmannian rank-one update subspace estimation.

#### Low-rank matrix construction and algorithms

We follow Oropeza & Sacchi (2011) to construct the low-rank matrix. A Hankel matrix for each frequency slice in the f–x domain can be constructed to represent the useful signal components in a low-rank manner. The Hankel matrix for each frequency slice $D(x, w)$ is constructed as
$$\mathbf{H}=\begin{pmatrix} D(1,w) & D(2,w) & \cdots & D(p,w) \\ D(2,w) & D(3,w) & \cdots & D(p+1,w) \\ \vdots & \vdots & \ddots & \vdots \\ D(L,w) & D(L+1,w) & \cdots & D(N_x,w) \end{pmatrix}, \qquad (11)$$
where $L=\lfloor \frac{N_x}{2}\rfloor +1$ and $p = N_x - L + 1$, $\lfloor\cdot\rfloor$ denotes the integer part of its argument, and $w$ denotes frequency. As a bridge connecting the state-of-the-art SSA method to the proposed method, we first introduce the implementation of the TSVD-based SSA method. In the TSVD-based SSA method, after constructing the Hankel matrix, we apply TSVD to the matrix $\mathbf{H}$:
$$\hat{\mathbf{H}}=\mathrm{TSVD}(\mathbf{H}). \qquad (12)$$
Then an averaging operator is applied along the antidiagonals of the low-rank approximated Hankel matrix $\hat{\mathbf{H}}$. The whole algorithm is shown in Algorithm 2, where $\hat{\mathcal{H}}$ denotes the Hankelization operator and $\mathcal{A}$ denotes the averaging operator.

Algorithm 2: Denoising seismic data via the TSVD-based SSA algorithm.

The proposed method simply substitutes the GROUSE-based low-rank decomposition operator for the SVD-based one. It is also worth mentioning that the GROUSE method is just one choice of OSST algorithm; other subspace tracking algorithms, such as incremental SVD (Sarwar et al. 2002), can also be used for the purpose of low-rank decomposition. We will use 'GROUSE-based SSA' to refer to the proposed algorithm throughout the paper.
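The Hankelization operator $\hat{\mathcal{H}}$ of eq. (11) and the antidiagonal averaging operator $\mathcal{A}$ used in Algorithm 2 can be sketched as follows. This is a minimal NumPy sketch; the function names are ours, not from the paper.

```python
import numpy as np

def hankelize(d):
    """Build the L x p Hankel matrix of eq. (11) from one frequency slice d (length Nx)."""
    Nx = len(d)
    L = Nx // 2 + 1          # L = floor(Nx/2) + 1
    p = Nx - L + 1
    # Row i holds samples d[i], d[i+1], ..., d[i+p-1], so antidiagonals are constant.
    return np.array([d[i:i + p] for i in range(L)])

def dehankelize(H):
    """Averaging operator A: average each antidiagonal back to a 1-D slice."""
    L, p = H.shape
    Nx = L + p - 1
    out = np.zeros(Nx, dtype=H.dtype)
    cnt = np.zeros(Nx)
    for i in range(L):
        for j in range(p):
            out[i + j] += H[i, j]  # entries on antidiagonal i+j map to sample i+j
            cnt[i + j] += 1
    return out / cnt

d = np.arange(7, dtype=float)    # a toy frequency slice with Nx = 7
H = hankelize(d)                 # 4 x 4 Hankel matrix
rec = dehankelize(H)             # averaging a true Hankel matrix recovers d exactly
```

In the SSA workflow, the low-rank approximation (TSVD or GROUSE) is applied between these two steps, and the averaging then maps the filtered Hankel matrix back to a frequency slice.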
The detailed GROUSE-based SSA algorithm, presented in a similar way as Algorithm 2, is shown in Algorithm 3. For a better understanding of the algorithm, Fig. 1 shows its main steps. The low-rank approximation step in the proposed method uses GROUSE.

Figure 1. Flowchart of the algorithm.

Algorithm 3: Denoising seismic data via the GROUSE-based SSA algorithm.

### Examples

#### Evaluation of denoising performance

For synthetic examples, since we have the clean data, we can use the following signal-to-noise ratio (SNR) measurement to evaluate denoising performance (Liu et al. 2009a):
$$\mathrm{SNR}=10\log_{10}\frac{\Vert \mathbf{s} \Vert_2^2}{\Vert \mathbf{s} - \hat{\mathbf{s}}\Vert_2^2}, \qquad (13)$$
where $\mathbf{s}$ and $\hat{\mathbf{s}}$ are the clean signal and estimated signal, respectively. For field data examples, we do not know the correct signal. In order to numerically measure the denoising performance, we utilize the local similarity measurement (Chen & Fomel 2015b). Measuring denoising performance using local similarity is based on the criterion that the denoised signal and the removed noise should be orthogonal to each other. Thus, a high local similarity between denoised signal and noise indicates a mixture of signal and noise subspaces, that is, some useful signals remain in the removed noise section. For a successful denoising performance, the local similarity map should be zero everywhere. It is worth mentioning that local similarity is effective in detecting the degree of signal damage caused by a denoising method, but is less effective in evaluating its noise removal ability. To effectively apply the SSA filter, the denoised data are constrained to be low-rank in Hankel matrices extracted from small temporal-spatial windows (or patches).
The local-window strategy is important because the SSA method is a valid denoising and reconstruction technique for a superposition of plane waves. In a small window, the data can be approximated via a limited number of dips plus incoherent noise. The size of the local windows depends on the complexity of the input seismic data. For relatively complex seismic data, for example, pre-stack data with large curvature in some areas, the window size should be small enough that in each window the events are roughly linear. For structurally simpler data, for example, most post-stack data, the selection of window size can be more flexible, since both small and large windows obtain similar results. For most data presented in this paper, the local curvature of the geological structures is not very large, so we choose relatively small window sizes. Neighbouring windows have 50 per cent overlap, and a cosine taper is applied in both time and space. Window size is a relative term: 'large' or 'small' should be evaluated based on the ratio between the window size and the complete data size. We will specify the window sizes for each example when discussing the denoising performance.

#### Synthetic examples

The first example is a 2-D example. It is relatively simple, containing only three linear events, as shown in Fig. 2. Fig. 2(a) shows the clean data, or the ground truth, for this example. We test the performance of the proposed algorithm in rejecting three types of noise, namely, random noise, spiky noise, and blending noise, which arises from simultaneous source acquisition (Zu et al. 2016). We also compare the OSST based method with the traditional SSA based method. The noisy data containing band-limited random noise are shown in Fig. 2(b).

Figure 2. 2-D synthetic example. (a) Clean data. (b) Noisy data.

Fig.
3 shows the denoising performance of both the TSVD-based SSA and GROUSE-based SSA methods. Fig. 3(a) shows the denoised data using the TSVD-based SSA method, while Fig. 3(b) shows the denoised data using OSST. Comparing Figs 3(a) and (b), the denoised data using the proposed method are cleaner than those using the traditional TSVD-based SSA method. In addition to comparing noise rejection ability, we are also interested in comparing signal preservation ability. The best way to see whether any useful energy is damaged is to check the removed noise section, that is, the difference between the noisy data and the denoised data. If there is obvious coherent energy in the noise section, it indicates damage to the useful signals. Figs 3(c) and (d) show the removed noise sections using the TSVD-based SSA method and the proposed method, respectively. Neither removed noise section shows obvious coherent events, demonstrating that neither the TSVD-based SSA nor the GROUSE-based SSA method causes significant damage to useful signals. To quantitatively compare the performance of the two methods, we calculate the SNRs of the different data based on eq. (13). The SNR of the noisy data is −5.12 dB. The SNRs of the TSVD-based SSA and GROUSE-based SSA methods are 5.35 and 6.72 dB, respectively. In this example, both methods use the same predefined rank, K = 3.

Figure 3. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise using SSA. (d) Removed noise using OSST.

To compare the performance of the two methods at different noise levels, we increase the noise variance from 0.1 to 1 and calculate the SNRs of the denoised data for both methods, shown in Table 1. To see the varying SNRs more vividly, we plot the data from Table 1 in Fig. 4.
The black line shows the input SNRs varying with noise variance. The red line shows the SNRs of the TSVD-based SSA method, and the green line shows the SNRs of the proposed method. Both methods obtain a large SNR improvement at all noise levels, and the SNRs of the proposed method are always higher than those of the TSVD-based SSA method. We can also observe that the difference between the TSVD-based SSA and GROUSE-based SSA methods first decreases, reaching a minimum around noise variance σ² = 0.3, and then increases as the noise variance becomes larger, which indicates that the proposed method outperforms the TSVD-based SSA method by a larger margin as the seismic data become noisier.

Figure 4. Output SNRs varying with input noise level. The noise level is measured by the noise variance.

Table 1. Comparison of SNRs (in dB) for different input noise levels. The diagrams corresponding to this table are shown in Fig. 4.

| Noise variance | Input SNR | SNR of TSVD-based SSA | SNR of GROUSE-based SSA |
| --- | --- | --- | --- |
| 0.1 | 4.42 | 15.16 | 19.78 |
| 0.2 | −1.59 | 8.94 | 10.96 |
| 0.3 | −5.12 | 5.34 | 6.72 |
| 0.4 | −7.62 | 2.83 | 4.18 |
| 0.5 | −9.553 | 0.82 | 2.46 |
| 0.6 | −11.14 | −0.81 | 1.35 |
| 0.7 | −12.47 | −2.25 | 0.61 |
| 0.8 | −13.64 | −3.50 | 0.17 |
| 0.9 | −14.66 | −4.61 | −0.13 |
| 1.0 | −15.57 | −5.57 | −0.39 |

In order to test the robustness of the proposed algorithm to different types of noise, we then apply it to reject spiky noise and blending noise (Chen et al. 2014).
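The SNRs reported above follow eq. (13). A minimal NumPy sketch of that measurement, with a hypothetical test signal of our own choosing:

```python
import numpy as np

def snr_db(s, s_hat):
    """Eq. (13): SNR in dB between clean signal s and estimate s_hat."""
    return 10.0 * np.log10(np.sum(s**2) / np.sum((s - s_hat)**2))

# Hypothetical example: a constant error of 0.1 on a unit-amplitude signal,
# so ||s||^2 / ||s - s_hat||^2 = 1 / 0.01 = 100, i.e. 20 dB.
s = np.ones(100)
val = snr_db(s, s + 0.1)
```

In the experiments, s is the vectorized clean section and s_hat the vectorized denoised section, so a higher value means the estimate is closer to the ground truth.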
The noisy data containing spiky noise are shown in Fig. 5(a). Figs 5(b) and (c) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively. The performance shows that both methods can effectively reject spiky noise, and the proposed GROUSE-based SSA method obtains a slightly smoother result with less residual noise. Figs 5(d) and (e) show the corresponding noise sections, which confirm the effective signal preservation abilities of the two methods. The SNRs of the noisy data, the denoised data using the TSVD-based SSA method, and the denoised data using the GROUSE-based SSA method are 2.24, 16.73, and 18.97 dB, respectively, which quantitatively verifies the superior performance of the proposed GROUSE-based SSA method. Next, we apply the proposed method to remove blending noise caused by marine simultaneous source acquisition (Berkhout 2008). Rejecting blending noise is significant for the economics of modern marine acquisition. The noisy data containing blending noise are shown in Fig. 6(a). Figs 6(b) and (c) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively, and Figs 6(d) and (e) show the corresponding noise sections. Although both methods leave some residual crosstalk noise in the data, the denoised data are in both cases much cleaner than the noisy data. Considering the strong amplitude of the blending noise, it is usually rejected iteratively; Fig. 6 shows the performance of only one filtering step, and subsequent filtering will compensate for the imperfect performance of both methods. Nevertheless, the proposed method obtains a slightly cleaner result than the traditional TSVD-based SSA method. Since this example contains only linear events, we do not apply local windows to process the data.

Figure 5. Spiky noise test. (a) Noisy data. (b) Denoised data using TSVD-based SSA. (c) Denoised data using GROUSE-based SSA. (d) Removed noise using TSVD-based SSA. (e) Removed noise using GROUSE-based SSA. SNRs of (a)–(c) are 2.24, 16.73, and 18.97 dB, respectively.

Figure 6. Blending noise test. (a) Noisy data. (b) Denoised data using TSVD-based SSA. (c) Denoised data using GROUSE-based SSA. (d) Removed noise using TSVD-based SSA. (e) Removed noise using GROUSE-based SSA. SNRs of (a)–(c) are 0.85, 12.23, and 15.15 dB, respectively.

We further investigate the reason why the traditional TSVD-based SSA method leaves more residual noise than the GROUSE-based SSA method. Our strategy is to test the noise influence in the low-rank approximation of both the TSVD-based SSA and GROUSE-based SSA methods. We apply TSVD-based SSA and GROUSE-based SSA with different predefined ranks to pure random noise (band-limited here to simulate real random noise in seismic data), and quantitatively measure how much of the noise each low-rank approximation method reconstructs. Ideally, a low-rank approximation algorithm should reconstruct none (or only a very small amount) of the noise, since there are no spatially coherent signals in the pure noise. The pure noise is shown in Fig. 7(a). Figs 7(b) and (c) show the results with predefined rank K = 1 from the TSVD-based SSA and GROUSE-based SSA methods, and Figs 7(d) and (e) show the results with predefined rank K = 3, respectively. It is evident from Fig.
7 that, whether K = 1 or K = 3, the TSVD-based SSA method reconstructs much stronger noise than the GROUSE-based SSA method, which indicates that, for the same predefined rank, the TSVD-based SSA method will approximate more of the noise. In other words, when approximating the low-rank components, the proposed method, using the online OSST algorithm, is less sensitive to noise than the TSVD-based SSA method, which uses an offline algorithm. Fig. 8 shows the diagrams of reconstructed noise energy varying with rank. The noise energy is defined as
$$E_n = \| \mathbf{n} \|_2^2, \qquad (14)$$
where $E_n$ denotes the energy of the reconstructed noise, $\mathbf{n}$ denotes the vectorized noise, and $\|\cdot\|_2$ denotes the $L_2$ norm. In Fig. 8, the black line denotes the constant input noise energy, the red line denotes the reconstructed noise energy of the TSVD-based SSA method varying with rank, and the green line denotes that of the proposed method. As expected, both methods reconstruct more and more noise as the rank increases. The GROUSE-based SSA method always reconstructs less noise than the TSVD-based SSA method for the same predefined rank. As the rank increases, the difference in noise energy between the two methods also increases, which indicates that the superiority of the proposed method over the TSVD-based SSA method becomes more obvious when the predefined rank is larger. This observation is very useful in practice: when processing field data, the rank is usually chosen relatively large, making the proposed method a more appropriate substitute for the traditional TSVD-based SSA method.

Figure 7. Test of applying the TSVD-based SSA and GROUSE-based SSA methods to pure random noise. (a) Pure random noise. (b) Result from TSVD-based SSA with K = 1. (c) Result from GROUSE-based SSA with K = 1. (d) Result from TSVD-based SSA with K = 3. (e) Result from GROUSE-based SSA with K = 3. Note that the TSVD-based SSA method reconstructs stronger noise than the GROUSE-based SSA method for the same predefined rank K.

Figure 8. Noise approximation test. Note that the TSVD-based SSA method reconstructs stronger noise than the GROUSE-based SSA method for the same predefined rank K.

This noise influence test supports the advantage of OSST algorithms over offline subspace tracking algorithms such as the traditional TSVD-based SSA method. In fact, the TSVD-based SSA method can only decompose the input noisy data into a mixed signal-and-noise subspace, which explains why a large amount of residual noise is left in the denoised data, even for the pure-noise input shown in Fig. 7(a). Appendix B gives a detailed proof of the mixed signal-and-noise subspace decomposed from the traditional TSVD method. Another insightful explanation of why the TSVD-based SSA method still leaves residual noise is provided by Aharchaou et al. (2017), who explained that thresholding the singular spectrum alone (given a value for the rank) is not enough to reject all noise; shrinking the inflated singular values (to counteract the effect of noise) is also necessary.
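The TSVD side of this noise influence test, eq. (14) evaluated on a rank-K approximation of pure random noise, can be reproduced with a short NumPy sketch. The matrix size and the set of ranks below are our own assumptions, and this sketch omits the GROUSE side of the comparison.

```python
import numpy as np

def rank_k_energy(D, K):
    """Energy (eq. 14) of the rank-K TSVD approximation of D."""
    u, s, vt = np.linalg.svd(D, full_matrices=False)
    approx = u[:, :K] @ np.diag(s[:K]) @ vt[:K]   # truncate to the top K singular triplets
    return float(np.sum(approx**2))

rng = np.random.default_rng(0)
N = rng.standard_normal((100, 100))               # pure random noise: no coherent signal
energies = [rank_k_energy(N, K) for K in (1, 3, 5, 10)]
# The reconstructed noise energy grows with the predefined rank, mirroring the
# red curve in Fig. 8, yet stays below the full input energy sum(N**2).
```

Because the rank-K energy equals the sum of the top K squared singular values, the curve is necessarily monotone in K, which is the behaviour observed in Fig. 8.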
In addition, in the GROUSE-based method, a spatial randomization operator is applied to the Hankel matrix before the OSST. The randomization operator helps the low-dimensional subspace better represent the full-dimensional measurement. It also makes locally coherent random noise incoherent through a global column rearrangement, which is another reason why the GROUSE-based method is less sensitive to noise and achieves better denoising performance than the TSVD-based method. Fig. 9 demonstrates this randomization. It can be seen from Fig. 9(a) that in a low-rank signal matrix, some noise may appear locally coherent although it is globally random. After applying the spatial randomization operator, the noise becomes highly random both locally and globally. This procedure helps the GROUSE method extract only the useful signals during subspace tracking. Figure 9. Demonstration of the influence of the randomization operator on the randomness of the unwanted noise. We encourage readers to consider in depth the noise influence test conducted in this paper (Figs 7 and 8), since it is very important for understanding the difference in noise sensitivity between the two methods. Before starting this project, we kept asking ourselves whether a new method could really substitute for the TSVD-based SSA method, and why; we kept this question in mind when testing the GROUSE-based method. We clearly see that the TSVD-based SSA method tends to extract far more of the noise components than the GROUSE-based SSA method. To explain why, we theoretically analyse the TSVD-based method.
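For concreteness, the geodesic update at the heart of GROUSE (Balzano et al. 2010), on which the OSST algorithm builds, can be sketched as follows for fully observed columns. The step size `eta` and the toy tracking loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of the GROUSE geodesic update (Balzano et al. 2010) for fully
# observed columns; eta and the toy loop below are illustrative assumptions.
def grouse_step(U, v, eta=0.1):
    """One subspace update; U has orthonormal columns, v is one data column."""
    w = U.T @ v                           # least-squares weights
    p = U @ w                             # projection onto the current subspace
    r = v - p                             # residual
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:
        return U                          # v already lies in the subspace
    theta = eta * sigma
    step = ((np.cos(theta) - 1.0) * p / np.linalg.norm(p)
            + np.sin(theta) * r / np.linalg.norm(r))
    return U + np.outer(step, w / np.linalg.norm(w))

# Toy test: track a 1-D subspace from noisy scalar multiples of a fixed direction.
rng = np.random.default_rng(2)
true_dir = np.ones(20) / np.sqrt(20.0)
U = np.linalg.qr(rng.standard_normal((20, 1)))[0]
for _ in range(300):
    v = rng.standard_normal() * true_dir + 0.01 * rng.standard_normal(20)
    U = grouse_step(U, v)
alignment = abs(float(true_dir @ U[:, 0])) / np.linalg.norm(U[:, 0])
```

Each update touches the basis only through matrix-vector products, which is why the per-column cost is linear in the dimension of the subspace.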
We show that the TSVD-based SSA method decomposes the data into a mixed signal-and-noise subspace, which explains why the data denoised with the TSVD-based method contain far more residual noise and theoretically supports the noise influence test. Since the GROUSE-based method has a different algorithmic setup from the TSVD-based method, we cannot formulate the two algorithms in the same framework, for example, in the linear-algebra setup shown in Appendix B. It is difficult to derive from the GROUSE-based method a formula similar to that of the TSVD-based method that would let us intuitively understand why one is superior to the other. What we can do, however, is investigate which method is more sensitive to noise: the more sensitive the method, the more residual noise remains in the final result. From this perspective, we found that the GROUSE-based method is less sensitive. Nowadays, it has become even more important to remove various forms of coherent noise than to remove purely random noise. The proposed algorithm can also be straightforwardly applied to some coherent noise attenuation problems. Following Huang et al. (2017d), once the trajectory of a coherent noise component is successfully tracked (or the trajectory of the useful signal component is obtained), the coherent noise can be randomized so that a common random noise attenuation algorithm can attenuate it. To better explain the randomization process, we give a brief introduction here. The randomization is implemented to eliminate the correlation of the coherent noise while preserving the useful signals. The trajectory of the useful signal is taken as the shape of the window for the randomization operator. Then, the order of the data in this window is randomly rearranged.
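A minimal sketch of this window-based rearrangement, assuming the data have already been flattened along the signal trajectory so that every trace carries the same waveform (an assumed toy implementation after Huang et al. 2017d, not the paper's code):

```python
import numpy as np

# Toy sketch of trajectory-guided randomization: in the flattened window every
# trace carries the same signal waveform, so a random trace permutation leaves
# the signal untouched while destroying the trace-to-trace correlation of any
# dipping (coherent) event.
rng = np.random.default_rng(3)
nt, nx = 100, 30
signal = np.tile(np.sin(np.linspace(0.0, 4.0 * np.pi, nt))[:, None], (1, nx))
coherent = np.zeros((nt, nx))
for ix in range(nx):                      # a linear dipping spike = coherent noise
    coherent[10 + 2 * ix, ix] = 1.0

data = signal + coherent
perm = rng.permutation(nx)                # randomization operator for the window
shuffled = data[:, perm]                  # trace order rearranged in the window
# The flattened signal is invariant under the permutation, while the coherent
# event's samples are scattered to random trace positions.
```

Because the permutation is stored, it can be inverted exactly after denoising to restore the original trace order.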
After energy balancing, the useful signals with the same shape in the window have approximately equal amplitude at each point, whereas the coherent noise and the random noise do not. After the rearrangement, the useful signals are almost unchanged, assuming the waveforms on consecutive traces are the same, whereas the energies of the coherent and random noise are dispersed randomly. In other words, the correlation of the coherent noise is eliminated: this process transforms coherent noise into random noise to some extent. Fig. 10 provides an example of attenuating coherent noise, where we treat the first hyperbolic event with the steepest dip as the coherent noise. Fig. 10(a) shows the noisy data, where the noise is indicated by the labelled arrow. After randomizing the data along the structural direction of the useful reflections, the shuffled data is shown in Fig. 10(d). Fig. 10(b) shows the denoised data with both coherent and random noise successfully removed using the proposed method. Fig. 10(c) shows the denoised result using the TSVD-based SSA method. Both methods remove the random and coherent noise, but the result of the proposed method is cleaner than that of the TSVD-based SSA method, which leaves significant residual noise because of the strong noise reconstructed during the low-rank approximation. This is consistent with the detailed noise influence analysis above. Figs 10(e) and (f) show the removed noise sections of the two methods, which do not contain any useful energy. The data size of this example is 501 × 150. The window size we use is 50 × 50, that is, 50 time samples by 50 spatial traces. Figure 10. Coherent noise test. (a) Noisy data containing coherent noise. (b) Denoised data using the proposed method. (c) Denoised data using the TSVD-based SSA method. (d) Randomized data using the method proposed in Huang et al. (2017d). (e) Removed noise including both random and coherent noise using the TSVD-based SSA method. (f) Removed noise including both random and coherent noise using the proposed method. The next synthetic example is a model that contains flat events, flat events with an amplitude-versus-offset (AVO) phenomenon, curved events, and discontinuities. Although it contains only a few events, as shown in Fig. 11(a), the reflectors stand for very typical geological structures such as sedimentary layers, salt domes, pinch-outs, and faults. Fig. 11(b) shows the noisy data obtained by adding band-limited random noise. Since the structure of this example is more complicated, we apply both the TSVD-based SSA and GROUSE-based SSA methods in local windows. The data size of this example is 512 × 50. The window size we use is 50 × 20. In each local window, we use a predefined rank K = 2. Figs 12(a) and (b) show the denoised data using the TSVD-based SSA and GROUSE-based SSA methods, respectively, where the GROUSE-based SSA result is slightly cleaner. Figs 12(c) and (d) show the corresponding noise sections, in which no observable signals exist. For this example, to better compare the two methods, we calculate the FK spectra of the different data. Fig. 13 shows a comparison of the FK spectra of the clean data (Fig. 13a), noisy data (Fig. 13b), denoised data using the TSVD-based SSA method (Fig. 13c), and denoised data using the GROUSE-based SSA method (Fig. 13d). From Fig.
13, it is more obvious that the FK spectrum of the GROUSE-based SSA result is cleaner than that of the TSVD-based SSA result and is much more similar to that of the clean data. In this example, the SNRs of the noisy data and the denoised results from the TSVD-based SSA and GROUSE-based SSA methods are 0.73, 5.10 and 7.22 dB, respectively. This example confirms the effectiveness of the proposed method in dealing with curved and more complex events. Figure 11. 2-D synthetic example. (a) Clean data. (b) Noisy data. SNR of (b) is 0.73 dB. Figure 12. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise corresponding to (a). (d) Removed noise corresponding to (b). SNRs of (a) and (b) are 5.10 dB and 7.22 dB, respectively. Figure 13. FK spectrum comparison. (a) FK spectrum of clean data. (b) FK spectrum of noisy data. (c) FK spectrum of TSVD-based SSA processed data. (d) FK spectrum of GROUSE-based SSA processed data. We then use a 3-D synthetic example to further compare the performance of the different methods. The clean data, shown in Fig. 14(a), contains three plane-wave components. Fig. 14(b) shows the noisy data with band-limited random noise. For 3-D data, the construction of the low-rank matrix is slightly different.
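For the 2-D case, the Hankelization that underlies SSA-type low-rank methods can be sketched per frequency slice as follows (a simplified illustration; the paper's Appendix C gives the block-Hankel extension for 3-D data). A slice containing m plane waves maps to a Hankel matrix of rank m, which is what makes rank reduction meaningful.

```python
import numpy as np

# Sketch of the Hankelization of one frequency slice (simplified; the paper's
# Appendix C covers the block-Hankel construction for 3-D data).
def hankelize(x, L=None):
    """Embed a length-nx vector into an L x (nx - L + 1) Hankel matrix."""
    nx = len(x)
    if L is None:
        L = nx // 2 + 1                   # a common embedding choice
    K = nx - L + 1
    return np.array([x[j:j + L] for j in range(K)]).T   # H[i, j] = x[i + j]

# A single complex plane wave gives an exactly rank-1 Hankel matrix.
x = np.exp(1j * 0.4 * np.arange(20))
H = hankelize(x)
rank = np.linalg.matrix_rank(H)
```

Denoising then amounts to low-rank approximation of `H` (by TSVD or by the subspace tracker) followed by anti-diagonal averaging back to a trace.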
Appendix C describes how to construct the low-rank matrix in the case of 3-D seismic data. In this example, in addition to the TSVD-based SSA method, we also compare the proposed method with the FK thresholding method and an accelerated TSVD-based SSA method based on the randomized singular value decomposition (RSVD). The results of the four methods are shown in Fig. 15. The FK method clearly causes the most residual noise, followed by the TSVD-based SSA method. Although the accelerated TSVD-based SSA result is much cleaner than the FK and TSVD-based SSA results, the amplitude of the useful events is significantly damaged, which can be verified from its removed noise cube shown in Fig. 16(c). Fig. 16 shows the removed noise cubes corresponding to the four methods. All panels except Fig. 16(c) contain no observable spatially coherent energy. Fig. 16(c) contains significant useful energy, which indicates the poorer signal preservation of the RSVD-accelerated SSA method. The proposed method obtains the cleanest result without damaging useful energy. Fig. 17 shows a comparison of 2-D sections extracted from the 3-D data cubes, from which the conclusions above can be drawn even more clearly. The SNR of the noisy data in this example is −8.39 dB. The SNRs of the FK method, TSVD-based SSA method, RSVD method, and the proposed method are 4.30, 4.60, 3.94 and 6.81 dB, respectively. For this example, we also measure the computational time of the different methods. The data size is 300 × 20 × 20. The time cost of the FK method, TSVD-based SSA method, RSVD method, and the proposed method is 0.05, 11.63, 1.7, and 5.96 s, respectively. We conclude from this example that although the RSVD algorithm greatly accelerates the TSVD-based SSA method (by a factor of seven), it sacrifices the denoising performance to some extent.
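The SNR values quoted in these comparisons follow the usual definition against the known clean data, SNR = 10 log10(||s||² / ||s − ŝ||²) in dB, which can be sketched as:

```python
import numpy as np

# Standard SNR definition against the known clean data, in dB.
def snr_db(clean, estimate):
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

clean = np.sin(np.linspace(0.0, 10.0, 1000))
noisy = clean + 0.3 * np.random.default_rng(4).standard_normal(1000)
# Halving the residual noise (here simulated by averaging the noisy data with
# the clean data) raises the SNR by about 6 dB.
```

A 10 per cent relative error corresponds to exactly 20 dB under this definition, which is a convenient sanity check for an implementation.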
The proposed method obtains the best denoising performance and is also faster than the traditional TSVD-based SSA method (by a factor of two). Since this example contains only planar events, we do not use local windows to process the data. Figure 14. 3-D synthetic example. (a) Clean data. (b) Noisy data. Figure 15. Denoising comparison. (a) Denoised data using the FK method. (b) Denoised data using the TSVD-based SSA method with SVD for low-rank decomposition. (c) Denoised data using the TSVD-based SSA method with RSVD for low-rank decomposition. (d) Denoised data using the GROUSE-based SSA method via GROUSE for low-rank decomposition. Figure 16. Noise comparison. (a) Removed noise using the FK method. (b) Removed noise using the TSVD-based SSA method with SVD for low-rank decomposition. (c) Removed noise using the TSVD-based SSA method with RSVD for low-rank decomposition. (d) Removed noise using the GROUSE-based SSA method via GROUSE for low-rank decomposition. Figure 17. 2-D section comparison (5th Xline section). (a) Clean data. (b) Denoised data using FK. (c) Denoised data using SSA. (d) Noisy data.
(e) Denoised data using RSVD. (f) Denoised data using OSST.
Real data examples
The first field data example is a 2-D time-migrated seismic profile, shown in Fig. 18. This is a very noisy data set, in which strong random noise seriously masks the shallow reflection signals, and some useful signals can hardly be seen. Fig. 19 shows a comparison between the TSVD-based SSA method and the proposed method, both of which significantly improve the seismic image: the reflection signals in the shallow part are much clearer than in the raw data. It is not easy to judge the performance solely from the denoised data. The removed noise sections shown in Fig. 20 offer more clues for judging the denoising performance of the two methods. It is very clear from Fig. 20 that the TSVD-based SSA method causes significant damage to the useful reflection signals, as indicated by the arrows in Fig. 20(a). The noise removed by the proposed method contains almost no useful signals, which indicates successful preservation of the useful signals. An enlarged comparison between the two removed noise sections is shown in Fig. 21, where we can see even more clearly that the TSVD-based SSA method causes some damage to the useful signals. As introduced at the beginning of this section, we use local similarity to quantitatively compare the denoising performance on real data. Fig. 22 shows a comparison of the local similarity between the denoised data and the removed noise for the two methods. The local similarity successfully detects the areas where significant useful energy is lost. The abnormal values in Fig. 22(a) (highlighted by the arrows) point out the areas of signal damage, which is consistent with the spatially coherent signals observed in Fig. 20(a).
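As a rough stand-in for the local-similarity attribute (the actual computation uses shaping-regularized local similarity; this simplified windowed correlation only conveys the idea), signal leakage can be flagged by correlating the removed noise with the signal in sliding windows:

```python
import numpy as np

# Toy stand-in for local similarity: normalized correlation in sliding windows.
# High values where the removed "noise" still correlates with the signal flag
# signal leakage, i.e. damage to useful energy.
def local_similarity(a, b, win=50):
    out = np.zeros(len(a))
    for start in range(0, len(a), win):
        sa, sb = a[start:start + win], b[start:start + win]
        denom = np.linalg.norm(sa) * np.linalg.norm(sb)
        out[start:start + win] = np.dot(sa, sb) / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0.0, 20.0, 500))
pure_noise = rng.standard_normal(500)
leaked = pure_noise + 0.8 * signal        # a noise section contaminated by signal
sim_clean = np.mean(np.abs(local_similarity(signal, pure_noise)))
sim_leak = np.mean(np.abs(local_similarity(signal, leaked)))
```

A noise section with leaked signal yields markedly higher similarity values, which is exactly what the abnormal values in Fig. 22(a) indicate.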
The data size of this example is 1001 × 201. The window size we use is 200 × 50. Figure 18. 2-D field data example. Figure 19. Denoising comparison. (a) Denoised data using SSA. (b) Denoised data using OSST. Figure 20. Denoising comparison. (a) Removed noise using SSA. (b) Removed noise using OSST. The zoomed sections of the frameboxes are shown in Fig. 21. Figure 21. Zoomed denoising comparison. (a) Removed noise using SSA. (b) Removed noise using OSST. Figure 22. Denoising comparison in terms of local similarity between denoised data and removed noise. (a) Local similarity using SSA. (b) Local similarity using OSST. The second field data example is a 3-D stacked seismic profile. The raw data is shown in Fig. 23. Figs 24(a) and (b) show the denoised data using the TSVD-based SSA method and the proposed method, respectively. Both denoising methods obtain much cleaner data. Figs 24(c) and (d) show their corresponding noise sections. Fig. 25 shows a comparison of the local similarity between the denoised data and the removed noise for the two methods. The two local similarity cubes do not contain any abnormal values, which indicates that both methods preserve the useful signals well.
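The local-window processing used throughout these examples can be sketched as follows, with a rank-K TSVD standing in for the per-window denoiser (an assumed illustration, not the paper's code; the window sizes mirror those quoted above):

```python
import numpy as np

# Sketch of local-window processing: split the section into non-overlapping
# patches, denoise each patch (here a rank-k TSVD stands in for the per-window
# denoiser), and reassemble the output.
def denoise_patch(patch, k=2):
    u, s, vt = np.linalg.svd(patch, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def window_process(data, wt, wx, k=2):
    out = np.zeros_like(data)
    nt, nx = data.shape
    for t0 in range(0, nt, wt):
        for x0 in range(0, nx, wx):
            out[t0:t0 + wt, x0:x0 + wx] = denoise_patch(
                data[t0:t0 + wt, x0:x0 + wx], k)
    return out

rng = np.random.default_rng(6)
data = rng.standard_normal((512, 50))     # same size as the 2-D synthetic example
result = window_process(data, 50, 20, k=2)
```

Production implementations usually overlap and taper the windows to avoid edge artefacts; the non-overlapping version above is kept deliberately minimal.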
We extract one Xline section from each figure in Figs 23 and 24 and compare them in Fig. 26. As highlighted by the green frameboxes, both methods successfully remove a huge amount of random noise, and the proposed method obtains more spatially coherent reflection signals. The frameboxes are zoomed in Fig. 27, where the comparison between the two methods is much clearer: the TSVD-based SSA method causes a small amount of residual noise, while the proposed method makes the seismic events very smooth and spatially coherent. The data size of this example is 751 × 100 × 10. The window size we use is 100 × 50 × 10. Figure 23. 3-D field data example. Figure 24. Denoising comparison in 3-D volume. (a) Denoised data using SSA. (b) Denoised data using OSST. (c) Removed noise corresponding to (a). (d) Removed noise corresponding to (b). Figure 25. Denoising comparison in terms of local similarity between denoised data and removed noise. (a) Local similarity using the TSVD-based SSA method. (b) Local similarity using the GROUSE-based SSA method. Figure 26. Denoising comparison in 2-D section. (a) Raw seismic data. (b) Denoised data using SSA. (c) Denoised data using OSST. (d) Removed noise corresponding to (b). (e) Removed noise corresponding to (c). Figure 27. Zoomed comparison. (a) Raw seismic data. (b) Denoised data using SSA. (c) Denoised data using OSST. The last field data example is an earthquake data stack profile. Fig. 28 shows Professor Peter Shearer's stacks over many earthquakes at constant epicentral distance (offset angle) (Shearer 1991a,b). Fig. 28 is already much improved relative to the raw data by stacking different earthquake records. Different seismic phases can be seen in Fig. 28, as highlighted by the labelled arrows. However, significant random and erratic noise still exists in the stack profile. By applying the proposed method to the raw stack data, we obtain a much improved result, shown in Fig. 29. The different phases are much clearer in the denoised data. To check whether we have removed any useful signals, we plot the difference section in Fig. 30, in which we see no obvious spatially coherent signals. The noise section confirms that we remove only unwanted components while leaving the spatially coherent signals clearer. The data size of this example is 1200 × 360. The window size we use is 200 × 90. Figure 28. Raw earthquake stack data. Figure 29. Denoised stack data. Figure 30. Removed noise from the raw earthquake stack data.
CONCLUSIONS
We have introduced a new OSST-based low-rank approximation algorithm to attenuate different types of noise in multidimensional seismic data.
Each subspace update can be performed in time linear in the dimension of the subspace and is therefore efficient. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. We comprehensively investigate its application to different noise attenuation problems through a number of synthetic and real data examples, which demonstrate the robustness of the algorithm to different types of noise, including random noise, spiky noise, blending noise, and even coherent noise. We specifically compare the proposed method with the traditional SSA-based method, which is currently widely used in many fields. The numerical examples show that the proposed method obtains significantly better performance, causing less residual noise in the denoised data. The superiority of the proposed method over the TSVD-based SSA method is even more obvious when dealing with extremely noisy data. The numerical examples also show that the TSVD-based SSA method approximates more noise than the GROUSE-based SSA method for the same predefined rank, which explains why the TSVD-based SSA method causes more residual noise. The proposed method is also faster than the typical TSVD-based SSA method, which uses the standard SVD, by a factor of two.
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (No. 61401307), the Hebei Province Foundation for Returned Overseas Scholars (CL201707), the Hebei Province Project of Science and Technology R & D (11213565), and the Hebei Province Natural Science Foundation (No. E2016202341). We appreciate editor Jean Virieux, reviewer Mehdi Aharchaou and two anonymous reviewers for constructive suggestions that greatly improved the manuscript.
REFERENCES
Abma R., Claerbout J., 1995. Lateral prediction for noise attenuation by t−x and f−x techniques, Geophysics, 60, 1887–1896.
Abma R., Howe D., Foster M., Ahmed I., Tanis M., Zhang Q., Arogunmati A., Alexander G., 2015. Independent simultaneous source acquisition and processing, Geophysics, 80(6), WD37–WD44.
Aharchaou M., Anderson J., Hughes S., Bingham J., 2017. Singular-spectrum analysis via optimal shrinkage of singular values, in SEG Technical Program Expanded Abstracts 2017, pp. 4950–4954, Society of Exploration Geophysicists.
Anvari R., Siahsar M.A.N., Gholtashi S., Kahoo A.R., Mohammadi M., 2017. Seismic random noise attenuation using synchrosqueezed wavelet transform and low-rank signal matrix approximation, IEEE Trans. Geosci. Remote Sens., doi:10.1109/TGRS.2017.2730228.
Balzano L., Nowak R., Recht B., 2010. Online identification and tracking of subspaces from highly incomplete information, in 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, pp. 704–711.
Bekara M., van der Baan M., 2007. Local singular value decomposition for signal enhancement of seismic data, Geophysics, 72(2), V59–V65.
Berkhout A.J., 2008. Changing the mindset in seismic data acquisition, Leading Edge, 27, 924–938.
Cai J.F., Candès E.J., Shen Z., 2010a. A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20(4), 1956–1982.
Cai J.F., Candès E.J., Shen Z., 2010b. A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20(4), 1956–1982.
Canales L., 1984. Random noise reduction, in 54th Annual International Meeting, SEG, Expanded Abstracts, pp. 525–527.
Candès E.J., Demanet L., Donoho D.L., Ying L., 2006. Fast discrete curvelet transforms, SIAM Multiscale Model. Simul., 5, 861–899.
Chen W., Chen Y., Liu W., 2016a. Ground roll attenuation using improved complete ensemble empirical mode decomposition, J. Seismic Explor., 25, 485–495.
Chen W., Chen Y., Liu W., Cheng Z., 2016b. Nonstationary signal decomposition via improved complete ensemble empirical mode decomposition and its application in ground roll noise attenuation, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 4139–4144.
Chen W., Chen Y., Cheng Z., 2017a. Seismic time-frequency analysis using an improved empirical mode decomposition algorithm, J. Seismic Explor., 26(4), 367–380.
Chen W., Xie J., Zu S., Gan S., Chen Y., 2017b. Multiple reflections noise attenuation using adaptive randomized-order empirical mode decomposition, IEEE Geosci. Remote Sens. Lett., 14(1), 18–22.
Chen W., Yuan J., Chen Y., Gan S., 2017c. Preparing the initial model for iterative deblending by median filtering, J. Seismic Explor., 26(1), 25–47.
Chen W., Zhang D., Chen Y., 2017d. Random noise reduction using a hybrid method based on ensemble empirical mode decomposition, J. Seismic Explor., 26, 227–249.
Chen Y., 2015a. Iterative deblending with multiple constraints based on shaping regularization, IEEE Geosci. Remote Sens. Lett., 12, 2247–2251.
Chen Y., 2015b. Noise attenuation of seismic data from simultaneous-source acquisition, PhD thesis, The University of Texas at Austin.
Chen Y., 2016. Dip-separated structural filtering using seislet thresholding and adaptive empirical mode decomposition based dip filter, Geophys. J. Int., 206(1), 457–469.
Chen Y., 2017. Fast dictionary learning for noise attenuation of multidimensional seismic data, Geophys. J. Int., 209, 21–31.
Chen Y., Fomel S., 2015a. EMD-seislet transform, in 85th Annual International Meeting, SEG, Expanded Abstracts, pp. 4775–4778.
Chen Y., Fomel S., 2015b. Random noise attenuation using local signal-and-noise orthogonalization, Geophysics, 80(6), WD1–WD9.
Chen Y., Ma J., 2014. Random noise attenuation by f-x empirical mode decomposition predictive filtering, Geophysics, 79, V81–V91.
Chen Y., Fomel S., Hu J., 2014. Iterative deblending of simultaneous-source seismic data using seislet-domain shaping regularization, Geophysics, 79(5), V179–V189.
Chen Y., Jin Z., Gan S., Yang W., Xiang K., Bai M., Huang W., 2015a. Iterative deblending using shaping regularization with a combined PNMO-MF-FK coherency filter, J. Appl. Geophys., 122, 18–27.
Chen Y., Yuan J., Zu S., Qu S., Gan S., 2015b. Seismic imaging of simultaneous-source data using constrained least-squares reverse time migration, J. Appl. Geophys., 114, 32–35.
Chen Y., Zhang G., Gan S., Zhang C., 2015c. Enhancing seismic reflections using empirical mode decomposition in the flattened domain, J. Appl. Geophys., 119, 99–105.
Chen Y., Ma J., Fomel S., 2016c. Double-sparsity dictionary for seismic noise attenuation, Geophysics, 81(2), V17–V30.
Chen Y., Zhang D., Huang W., Chen W., 2016d. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction, Comput. Geosci., 95, 59–66.
Chen Y., Zhang D., Jin Z., Chen X., Zu S., Huang W., Gan S., 2016e. Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method, Geophys. J. Int., 206, 1695–1717.
Chen Y., Chen H., Xiang K., Chen X., 2017e. Preserving the discontinuities in least-squares reverse time migration of simultaneous-source data, Geophysics, 82(3), S185–S196.
Chen Y., Zhou Y., Chen W., Zu S., Huang W., Zhang D., 2017f. Empirical low rank decomposition for seismic noise attenuation, IEEE Trans. Geosci. Remote Sens., 55(8), 4696–4711.
Cheng J., Sacchi M., 2014. Fast dual-domain reduced-rank algorithm for 3D deblending via randomized QR decomposition, Geophysics, 81, V89–V101.
Cheng J., Sacchi M., 2015. Separation and reconstruction of simultaneous source data via iterative rank reduction, Geophysics, 80, V57–V66.
Dai W., Milenkovic O., 2010. SET: an algorithm for consistent matrix completion, in Proceedings of ICASSP, Dallas, TX, pp. 3646–3649.
Donoho D.L., Johnstone I.M., 1994. Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81, 425–455.
Ebrahimi S., Kahoo A.R., Chen Y., Porsani M.J., 2017. A high-resolution weighted AB semblance for dealing with amplitude-variation-with-offset phenomenon, Geophysics, 82(2), V85–V93.
Edelman A., Arias T.A., Smith S.T., 1998. The geometry of algorithms with orthogonality constraints, SIAM J. Matrix Anal. Appl., 20(2), 303–353.
Elad M., Aharon M., 2006. Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15(12), 3736–3745.
Fomel S., 2013. Seismic data decomposition into spectral components using regularized nonstationary autoregression, Geophysics, 78, O69–O76.
Fomel S., Liu Y., 2010. Seislet transform and seislet frame, Geophysics, 75, V25–V38.
Gan S., Wang S., Chen Y., Chen X., 2015a. Deblending of distance separated simultaneous-source data using seislet frames in the shot domain, in 85th Annual International Meeting, SEG, Expanded Abstracts, pp. 65–70.
Gan S., Wang S., Chen Y., Chen X., 2015b. Seismic data reconstruction via fast projection onto convex sets in the seislet transform domain, in 85th Annual International Meeting, SEG, Expanded Abstracts, pp. 3814–3819.
Gan S., Wang S., Chen Y., Zhang Y., Jin Z., 2015c. Dealiased seismic data interpolation using seislet transform with low-frequency constraint, IEEE Geosci. Remote Sens. Lett., 12, 2150–2154.
Gan S., Wang S., Chen Y., Chen J., Zhong W., Zhang C., 2016a. Improved random noise attenuation using f-x empirical mode decomposition and local similarity, Appl. Geophys., 13, 127–134.
Gan S., Wang S., Chen Y., Chen X., 2016b. Simultaneous-source separation using iterative seislet-frame thresholding, IEEE Geosci. Remote Sens. Lett., 13(2), 197–201.
Gan S., Wang S., Chen Y., Chen X., Huang W., Chen H., 2016c. Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform, J. Appl. Geophys., 130, 194–208.
Gan S., Wang S., Chen Y., Chen X., Xiang K., 2016d. Separation of simultaneous sources using a structural-oriented median filter in the flattened dimension, Comput. Geosci., 86, 46–54.
Gan S., Wang S., Chen Y., Qu S., Zu S., 2016e. Velocity analysis of simultaneous-source data using high-resolution semblance-coping with the strong noise, Geophys. J. Int., 204, 768–779.
Gao J., Mao J., Chen W., Zheng Q., 2006. On the denoising method of prestack seismic data in wavelet domain, Chin. J. Geophys., 49(4), 1155–1163.
Gao J., Sacchi M.D., Chen X., 2013. A fast reduced-rank interpolation method for prestack seismic volumes that depend on four spatial dimensions, Geophysics, 78, V21–V30.
Gilles J., 2013. Empirical wavelet transform, IEEE Trans. Signal Process., 61, 3999–4010.
Golub G.H., Van Loan C.F., 1996. Matrix Computations, 3rd edn, Johns Hopkins Univ. Press.
He B., Wu G., 2015. A slim approximate Hessian for perturbation velocity inversion with incomplete reflection data, J. Seismic Explor., 24(3), 281–304.
Hermann F.J., Boniger U., Verschuur D.J., 2007. Non-linear primary-multiple separation with directional curvelet frames, Geophys. J. Int., 170, 781–799.
Huang W., Wang R., Zhang M., Chen Y., Yu J., 2015. Random noise attenuation for 3D seismic data by modified multichannel singular spectrum analysis, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts, doi:10.3997/2214-4609.201412830.
Huang W., Wang R., Chen Y., Li H., Gan S., 2016a. Damped multichannel singular spectrum analysis for 3D random noise attenuation, Geophysics, 81(4), V261–V270.
Huang W., Wang R., Zhou Y., Chen Y., Yang R., 2016b. Improved principal component analysis for 3D seismic data simultaneous reconstruction and denoising, in 86th Annual International Meeting, SEG, Expanded Abstracts, pp. 4777–4781.
Huang W., Wang R., Chen X., Chen Y., 2017a. Double least squares projections method for signal estimation, IEEE Trans. Geosci. Remote Sens., 55(7), 4111–4129.
Huang W., Wang R., Chen X., Zu S., Chen Y., 2017b. Damped sparse representation for seismic random noise attenuation, in 87th Annual International Meeting, SEG, Expanded Abstracts, pp. 5079–5084.
Huang W., Wang R., Li H., Chen Y., 2017c. Unveiling the signals from extremely noisy microseismic data for high-resolution hydraulic fracturing monitoring, Sci.
Rep. , 7, 11996, doi:10.1038/s41598-017-09711-2. Huang W., Wang R., Yuan Y., Gan S., Chen Y., 2017d. Signal extraction using randomized-order multichannel singular spectrum analysis, Geophysics , 82, V59– V74. Google Scholar CrossRef Search ADS   Huang W., Wang R., Zhang D., Zhou Y., Yang W., Chen Y., 2017e. Mathematical morphological filtering for linear noise attenuation of seismic data, Geophysics , 82( 6), V369– V384. Google Scholar CrossRef Search ADS   Huang W., Wang R., Zhou Y., Chen X., 2017f. Simultaneous coherent and random noise attenuation by morphological filtering with dual-directional structuring element, IEEE Geosci. Remote Sens. Lett. , doi:10.1109/LGRS.2017.2730849. Huang W., Wang R., Zu S., Chen Y., 2017g. Low-frequency noise attenuation in seismic and microseismic data using mathematical morphological filtering, Geophys. J. Int. , 211, 1296– 1318. Google Scholar CrossRef Search ADS   Jin Z., Chapman M., Wu X., Papageorgiou G., 2017. Estimating gas saturation in a thin layer by using frequency-dependent amplitude versus offset modelling, Geophys. Prospect. , 65(3), 747– 765. Google Scholar CrossRef Search ADS   Kahoo A., Siahkoohi T.R., 2009. Random noise suppression from seismic data using time frequency peak filtering, in 71st Annual International Conference and Exhibition, EAGE, Extended Abstracts . Keshavan R.H., Montanari A., Oh S., 2010. Matrix completion from a few entries, IEEE Trans. Inf. Theory , 56( 6), 2980– 2998. Google Scholar CrossRef Search ADS   Kong D., Peng Z., He Y., Fan H., 2016. Seismic random noise attenuation using directional total variation in the shearlet domain, J. Seismic Explor. , 25( 4), 321– 338. Li F., Zhou H., Zhao T., Marfurt K., 2016a. Unconventional reservoir characterization based on spectrally corrected seismic attenuation estimation, J. Seismic Explor. , 25( 5), 447– 461. Li H., Wang R., Cao S., Chen Y., Huang W., 2016b. 
A method for low-frequency noise suppression based on mathematical morphology in microseismic monitoring, Geophysics , 81( 3), V159– V167. Google Scholar CrossRef Search ADS   Li H., Wang R., Cao S., Chen Y., Tian N., Chen X., 2016c. Weak signal detection using multiscale morphology in microseismic monitoring, J. Appl. Geophys. , 133, 39– 49. Google Scholar CrossRef Search ADS   Li J., Wang S., Yang D., Tang G., Chen Y., 2017. Subsurface attenuation estimation using a novel hybrid method based on fwe function and power spectrum, Explor. Geophys. , doi:10.1071/EG16022. Li Y., Li Z., Zhang K., Lin Y., 2016d. Frequency-domain full waveform inversion with rugged free surface based on variable grid finite-difference method, J. Seism. Explor. , 25( 6), 543– 543. Lin H., Li Y., Yang B., Ma T., 2013. Random denoising and signal nonlinearity approach by time-frequency peak filtering using weighted frequency reassignment, Geophysics , 78, V229– V237. Google Scholar CrossRef Search ADS   Lines L., Daley P., Embleton J., Gray D., 2016. Pssp waves - their presence and possible utilization in seismic inversion, J. Seismic Explor. , 25( 6), 497– 512. Liu Y., 2013. Noise reduction by vector median filtering, Geophysics , 78, V79– V87. Google Scholar CrossRef Search ADS   Liu Y., Fomel S., 2010. High-order seislet transform and its application of random noise attenuation, Geophysics , 75, WB235– WB245. Google Scholar CrossRef Search ADS   Liu G., Chen X., 2013. Noncausal f-x-y regularized nonstationary prediction filtering for random noise attenuation on 3D seismic data, J. Appl. Geophys. , 93, 60– 66. Google Scholar CrossRef Search ADS   Liu G., Fomel S., Jin L., Chen X., 2009a. Stacking seismic data using local correlation, Geophysics , 74, V43– V48. Google Scholar CrossRef Search ADS   Liu Y., Liu C., Wang D., 2009b. A 1d time-varying median filter for seismic random, spike-like noise elimination, Geophysics , 74, V17– V24. 
Google Scholar CrossRef Search ADS   Liu G., Chen X., Du J., Song J., 2011. Seismic noise attenuation using nonstationary polynomial fitting, Appl. Geophys. , 8( 1), 18– 26. Google Scholar CrossRef Search ADS   Liu G., Chen X., Du J., Wu K., 2012. Random noise attenuation using f-x regularized nonstationary autoregression, Geophysics , 77, V61– V69. Google Scholar CrossRef Search ADS   Liu Y., Fomel S., Liu C., 2015. Signal and noise separation in prestack seismic data using velocity-dependent seislet transform, Geophysics , 80( 6), WD117– WD128. Google Scholar CrossRef Search ADS   Liu C., Wang D., Hu B., Wang T., 2016a. Seismic deconvolution with shearlet sparsity constrained inversion, J. Seismic Explor. , 25( 5), 433– 445. Liu W., Cao S., Chen Y., 2016b. Seismic time-frequency analysis via empirical wavelet transform, IEEE Geosci. Remote Sens. Lett. , 13, 28– 32. Google Scholar CrossRef Search ADS   Liu W., Cao S., Chen Y., 2016c. Application of variational mode decomposition in random noise attenuation and time-frequency analysis of seismic data, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts , doi:10.3997/2214–4609.201601249. Liu W., Cao S., Chen Y., 2016d. Applications of variational mode decomposition in seismic time-frequency analysis, Geophysics , 81, V365– V378. Google Scholar CrossRef Search ADS   Liu W., Cao S., Chen Y., Zu S., 2016e. An effective approach to attenuate random noise based on compressive sensing and curvelet transform, J. Geophys. Eng. , 13, 135– 145. Google Scholar CrossRef Search ADS   Liu W., Cao S., Gan S., Chen Y., Zu S., Jin Z., 2016f. One-step slope estimation for dealiased seismic data reconstruction via iterative seislet thresholding, IEEE Geosci. Remote Sens. Lett. , 13, 1462– 1466. Google Scholar CrossRef Search ADS   Liu W., Cao S., Liu Y., Chen Y., 2016g. Synchrosqueezing transform and its applications in seismic data analysis, J. Seismic Explor. , 25, 27– 44. 
Liu W., Cao S., Wang Z., Kong X., Chen Y., 2017. Spectral decomposition for hydrocarbon detection based on vmd and teager-kaiser energy, IEEE Geosci. Remote Sens. Lett. , 14( 4), 539– 543. Google Scholar CrossRef Search ADS   Mairal J., Sapiro G., Elad M., 2008. Learning multiscale sparse representations for image and video restoration, SIAM Multiscale Model. Simul. , 7( 1), 214– 241. Google Scholar CrossRef Search ADS   Mayne W.H., 1962. Common reflection point horizontal data stacking techniques, Geophysics , 27, 927– 938. Google Scholar CrossRef Search ADS   Mousavi S.M., Langston C.A., 2016a. Hybrid seismic denoising using higher-order statistics and improved wavelet block thresholding, Bull. seism. Soc. Am. , 106( 4), 1380– 1393. Google Scholar CrossRef Search ADS   Mousavi S.M., Langston C.A., 2016b. Adaptive noise estimation and suppression for improving microseismic event detection, J. Appl. Geophys. , 132, 116– 124. Google Scholar CrossRef Search ADS   Mousavi S.M., Langston C.A., 2017. Automatic noise-removal/signal-removal based on general cross-validation thresholding in synchrosqueezed domain and its application on earthquake data, Geophysics , 82( 4), V211– V227. Google Scholar CrossRef Search ADS   Mousavi S.M., Langston C.A., Horton S.P., 2016. Automatic microseismic denoising and onset detection using the synchrosqueezed continuous wavelet transform, Geophysics , 81( 4), V341– V355. Google Scholar CrossRef Search ADS   Naghizadeh M., Sacchi M.D., 2011. Seismic data interpolation and denoising in the frequency-wavenumber domain, Geophysics , 77( 2), V71– V80. Google Scholar CrossRef Search ADS   Naghizadeh M., Sacchi M.D., 2012. Multicomponent seismic random noise attenuation via vector autoregressive operators, Geophysics , 78( 2), V91– V99. Google Scholar CrossRef Search ADS   Neelamani R., Baumstein A.I., Gillard D.G., Hadidi M.T., Soroka W.L., 2008. Coherent and random noise attenuation using the curvelet transform, Leading Edge , 27, 240– 248. 
Google Scholar CrossRef Search ADS   Oropeza V., Sacchi M., 2011. Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis, Geophysics , 76, V25– V32. Google Scholar CrossRef Search ADS   Pratt G., Shin C., Hick G., 1998. Gauss–newton and full newton methods in frequency–space seismic waveform inversion, Geophys. J. Int. , 133, 341– 362. Google Scholar CrossRef Search ADS   Qu S., Verschuur D.J., 2016. Getting accurate time-lapse information using geology-constrained simultaneous joint migration-inversion, in 86th Annual International Meeting, SEG, Expanded Abstracts , pp. 5451– 5456. Qu S., Zhou H., Chen H., Zu S., Zhi L., 2014. Separation of simultaneous vibroseis data, in 84th Annual International Meeting, SEG, Expanded Abstracts , pp. 4340– 4344. Qu S., Zhou H., Chen Y., Yu S., Zhang H., Yuan J., Yang Y., Qin M., 2015. An effective method for reducing harmonic distortion in correlated vibroseis data, J. Appl. Geophys. , 115, 120– 128. Google Scholar CrossRef Search ADS   Qu S., Verschuur D., Chen Y., 2016a. Full waveform inversion using an automatic directional total variation constraint, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts , doi:10.3997/2214–4609.201701340. Qu S., Zhou H., Liu R., Chen Y., Zu S., Yu S., Yuan J., Yang Y., 2016b. Deblending of simultaneous-source seismic data using fast iterative shrinkage-thresholding algorithm with firm-thresholding, Acta Geophys. , 64, 1064– 1092. Google Scholar CrossRef Search ADS   Ren C., Tian X., 2016. Prestack migration based on asymmetric wave-equation extrapolation, J. Seismic Explor. , 25( 4), 375– 397. Sacchi M., Ulrych T., 1995. High-resolution velocity gathers and offset space reconstruction, Geophysics , 60, 1169– 1177. Google Scholar CrossRef Search ADS   Sarwar B., Karypis G., Konstan J., Riedl J., 2002. 
Incremental singular value decomposition algorithms for highly scalable recommender systems, in Fifth International Conference on Computer and Information Science, pp. 27– 28. Shearer P.M., 1991a. Imaging global body wave phases by stacking long-period seismograms, J. geophys. Res. , 96( B12), 20 535– 20 324. Google Scholar CrossRef Search ADS   Shearer P.M., 1991b. Constraints on upper mantle discontinuities from observations of long period reflected and converted phases, J. geophys. Res. , 96( B11), 18 147– 18 182. Google Scholar CrossRef Search ADS   Shen H., Yan Y., Chen C., Zhang B., 2016. Multiple-transient surface wave phase velocity analysis in expanded f-k domain and its application, J. Seismic Explor. , 25( 4), 299– 319. Siahsar M.A.N., Gholtashi S., Kahoo A.R., Marvi H., Ahmadifard A., 2016. Sparse time-frequency representation for seismic noise reduction using low-rank and sparse decomposition, Geophysics , 81( 2), V117– V124. Google Scholar CrossRef Search ADS   Siahsar M.A.N., Abolghasemi V., Chen Y., 2017a. Simultaneous denoising and interpolation of 2D seismic data using data-driven non-negative dictionary learning, Signal Process. , 141, 309– 321. Google Scholar CrossRef Search ADS   Siahsar M.A.N., Gholtashi S., Kahoo A.R., Chen W., Chen Y., 2017b. Data-driven multi-task sparse dictionary learning for noise attenuation of 3D seismic data, Geophysics , 82( 6), doi:10.1190/geo2017–0084.1. Siahsar M.A.N., Gholtashi S., Olyaei E., Chen W., Chen Y., 2017c. Simultaneous denoising and interpolation of 3D seismic data via damped data-driven optimal singular value shrinkage, IEEE Geosci. Remote Sens. Lett. , 14( 7), 1086– 1090. Google Scholar CrossRef Search ADS   Sun W., Wang H., 2016. Water-layer-related demultiple method using constraints in the sparse, local tau-p domain, J. Seismic Explor. , 25( 5), 463– 483. Trickett S., 2008. F-xy cadzow noise suppression, CSPG CSEG CWLS Convention , pp. 303– 306. Vautard R., Yiou P., Ghil M., 1992. 
Singular-spectrum analysis: A toolkit for short, noisy chaotic signals, Physica D: Nonlinear Phenom. , 58( 1), 95– 126. Google Scholar CrossRef Search ADS   Verma A.K., Gurve D., 2015. Image decomposition using adaptive empirical wavelet transform, IOSR J. VLSI Signal Process. , 5( 1), 36– 44. Wang Y., 1999. Random noise attenuation using forward-backward linear prediction, J. Seismic Explor. , 8, 133– 142. Wang B., Li J., Chen X., 2015a. A novel method for simultaneous seismic data interpolation and noise removal based on the l0 constraint, J. Seismic Explor. , 24( 3), 187– 204. Wang B., Wu R., Chen X., Li J., 2015b. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform, Geophys. J. Int. , 201, 1180– 1192. Wang Y., Cao J., Yang C., 2011. Recovery of seismic wavefields based on compressive sensing by an l1-norm constrained trust region method and the piecewise random subsampling, Geophys. J. Int. , 187, 199– 213. Google Scholar CrossRef Search ADS   Wang Y., Zhou H., Zu S., Mao W., Chen Y., 2017. Three-operator proximal splitting scheme for 3D seismic data reconstruction, IEEE Geosci. Remote Sens. Lett. , 14( 10), 1830– 1834. Google Scholar CrossRef Search ADS   Wu G., Fomel S., Chen Y., 2017. Data-driven time-frequency analysis of seismic data using non-stationary prony method, Geophys. Prospect. , doi:10.1111/1365–2478.12530. Xie Q., Chen Y., Zhang G., Gan S., Wang E., 2015a. Seismic data analysis using synchrosqueezeing wavelet transform - a case study applied to boonsville field, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts , doi:10.3997/2214–4609.201412752. Xie Q., Di B., Chen Y., Gan S., Jiao S., 2015b. Simultaneous structure enhancing filtering and fault detection using plane wave prediction, in 77th Annual International Conference and Exhibition, EAGE, Extended Abstracts , doi:10.3997/2214–4609.201412614. Xie J., Chen W., Zhang D., Zu S., Chen Y., 2017. 
Application of principal component analysis in weighted stacking of seismic data, IEEE Geosci. Remote Sens. Lett. , 14( 8), 1213– 1217. Google Scholar CrossRef Search ADS   Xue Y., Ma J., Chen X., 2014. High-order sparse radon transform for avo-preserving data reconstruction, Geophysics , 79, V13– V22. Google Scholar CrossRef Search ADS   Xue Y., Chang F., Zhang D., Chen Y., 2016a. Simultaneous sources separation via an iterative rank-increasing method, IEEE Geosci. Remote Sens. Lett. , 13( 12), 1915– 1919. Google Scholar CrossRef Search ADS   Xue Y., Yang J., Ma J., Chen Y., 2016b. Amplitude-preserving nonlinear adaptive multiple attenuation using the high-order sparse radon transform, J. Geophys. Eng. , 13, 207– 219. Google Scholar CrossRef Search ADS   Xue Y., Man M., Zu S., Chang F., Chen Y., 2017. Amplitude-preserving iterative deblending of simultaneous source seismic data using high-order radon transform, J. Appl. Geophys. , 139, 79– 90. Google Scholar CrossRef Search ADS   Yang W., Wang R., Wu J., Chen Y., Gan S., Zhong W., 2015. An efficient and effective common reflection surface stacking approach using local similarity and plane-wave flattening, J. Appl. Geophys. , 117, 67– 72. Google Scholar CrossRef Search ADS   Yu S., Ma J., Zhang X., Sacchi M., 2015. Denoising and interpolation of high-dimensional seismic data by learning tight frame, Geophysics , 80( 5), V119– V132. Google Scholar CrossRef Search ADS   Zhang C., Chen L., 2015. A symplectic partitioned runge-kutta method using the eighth-order nad operator for solving the 2D elastic wave equation, J. Seismic Explor. , ( 3), 205– 230. Zhang D., Chen Y., Gan S., 2016a. Iterative reconstruction of 3D seismic data via multiple constraints, in 78th Annual International Conference and Exhibition, EAGE, Extended Abstracts , doi:10.3997/2214–4609.201601241. Zhang D., Chen Y., Gan S., 2016b. 
Multidimensional seismic data reconstruction with multiple constraints, in 86th Annual International Meeting, SEG, Expanded Abstracts , pp. 4801– 4806. Zhang D., Chen Y., Huang W., Gan S., 2016c. Multi-step damped multichannel singular spectrum analysis for simultaneous reconstruction and denoising of 3D seismic data, J. Geophys. Eng. , 13, 704– 720. Google Scholar CrossRef Search ADS   Zhang D., Chen Y., Huang W., Gan S., 2016d. Multi-step reconstruction of 3D seismic data via an improved MSSA algorithm, in CPS/SEG Beijing 2016 International Geophysical Conference & Exposition, SEG, Expanded Abstracts , pp. 745– 749. Zhang D., Zhou Y., Chen H., Chen W., Zu S., Chen Y., 2017. Hybrid rank-sparsity constraint model for simultaneous reconstruction and denoising of 3D seismic data, Geophysics , 82( 5), V351– V367. Google Scholar CrossRef Search ADS   Zhang H., Xu J., Liu Q., Zhang J., 2016e. Imaging 2D rugged topography seismic data: a topography PSTM approach integrated with residual static correction, J. Seismic Explor. , 25( 4), 339– 358. Zhang Q., Chen Y., Guan H., Wen J., 2016f. Well-log constrained inversion for lithology characterization: a case study at the JZ25-1 oil field, China, J. Seismic Explor. , 25( 2), 121– 129. Zhang R., Ulrych T., 2003. Physical wavelet frame denoising, Geophysics , 68, 225– 231. Google Scholar CrossRef Search ADS   Zhong W., Chen Y., Gan S., Yuan J., 2014. L1/2 norm regularization for 3D seismic data interpolation, J. Seismic Explor. , 25, 257– 268. Zhou Y., 2017. A POCS method for iterative deblending constrained by a blending mask, J. Appl. Geophys. , 138, 245– 254. Google Scholar CrossRef Search ADS   Zhou Y., Shi C., Chen H., Xie J., Wu G., Chen Y., 2017. Spike-like blending noise attenuation using structural low-rank decomposition, IEEE Geosci. Remote Sens. Lett. , 14( 9), 1633– 1637. Google Scholar CrossRef Search ADS   Zu S., Zhou H., Chen Y., Liu Y., Qu S., 2015. 
A periodically variational dithering code for improving deblending, in 85th Annual International Meeting, SEG, Expanded Abstracts, pp. 38–42.
Zu S., Zhou H., Chen Y., Qu S., Zou X., Chen H., Liu R., 2016. A periodically varying code for improving deblending of simultaneous sources in marine acquisition, Geophysics, 81(3), V213–V225.
Zu S., Zhou H., Chen H., Zheng H., Chen Y., 2017a. Two field trials for deblending of simultaneous source surveys: why we failed and why we succeeded?, J. Appl. Geophys., 143, 182–194.
Zu S., Zhou H., Li Q., Chen H., Zhang Q., Mao W., Chen Y., 2017b. Shot-domain deblending using least-squares inversion, Geophysics, 82(4), V241–V256.
Zu S., Zhou H., Mao W., Zhang D., Li C., Pan X., Chen Y., 2017c. Iterative deblending of simultaneous-source data using a coherency-pass shaping operator, Geophys. J. Int., 211(1), 541–557.

APPENDIX A: DERIVATION OF THE STEP SIZE

A gradient step along the geodesic with tangent vector −∇J can be expressed as (Edelman et al. 1998):

\begin{eqnarray} \mathbf{U}_{n+1} &=& \left[\begin{array}{c@{\quad}c}\mathbf{U}_{n}\mathcal{V} & \mathcal{U} \end{array}\right] \left[\begin{array}{c}\cos(\Sigma\eta) \\ \sin(\Sigma\eta) \end{array}\right]\mathcal{V}^T \nonumber\\ &=& \mathbf{U}_{n}\mathcal{V}\cos(\Sigma\eta)\mathcal{V}^T + \mathcal{U}\sin(\Sigma\eta)\mathcal{V}^T, \end{eqnarray} (A1)

where η is the step size and $$\mathcal{U}\Sigma\mathcal{V}^T$$ is the compact SVD of −∇J. From eq. (A1), the update step is

\begin{eqnarray} \Delta\mathbf{U} &=& \mathbf{U}_{n+1} - \mathbf{U}_{n} \nonumber\\ &=& \mathbf{U}_{n}\mathcal{V}\cos(\Sigma\eta)\mathcal{V}^T - \mathbf{U}_{n} + \mathcal{U}\sin(\Sigma\eta)\mathcal{V}^T. \end{eqnarray} (A2)

Since $$\mathcal{V}\mathcal{V}^T = \mathbf{I}$$,

\begin{eqnarray} \Delta\mathbf{U} &=& \mathbf{U}_{n}\left(\mathcal{V}\cos(\Sigma\eta)\mathcal{V}^T - \mathcal{V}\mathcal{V}^T\right) + \mathcal{U}\sin(\Sigma\eta)\mathcal{V}^T \nonumber\\ &=& \mathbf{U}_{n}\mathcal{V}\left(\cos(\Sigma\eta) - \mathbf{I}\right)\mathcal{V}^T + \mathcal{U}\sin(\Sigma\eta)\mathcal{V}^T. \end{eqnarray} (A3)

Let

$$-\nabla J = 2\mathbf{r}\mathbf{w}^T = \mathcal{U}\Sigma\mathcal{V}^T,$$ (A4)

where

$$\mathcal{U} = \left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\frac{\mathbf{r}}{\|\mathbf{r}\|} & \mathbf{x}_2 & \cdots & \mathbf{x}_K \end{array}\right],$$ (A5)

$$\Sigma = \text{diag}(\sigma, 0, \ldots, 0),$$ (A6)

$$\mathcal{V} = \left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\frac{\mathbf{w}}{\|\mathbf{w}\|} & \mathbf{y}_2 & \cdots & \mathbf{y}_K \end{array}\right].$$ (A7)

In eqs (A5)–(A7), σ = 2‖r‖‖w‖, [x2, …, xK] is an orthonormal set orthogonal to r, and [y2, …, yK] is an orthonormal set orthogonal to w. Inserting eqs (A5)–(A7) into eq. (A3), and noting that only the first singular value σ is non-zero, the updating step becomes

\begin{eqnarray} \Delta\mathbf{U} &=& \mathbf{U}_{n}\mathcal{V}\left(\cos(\Sigma\eta) - \mathbf{I}\right)\mathcal{V}^T + \mathcal{U}\sin(\Sigma\eta)\mathcal{V}^T \nonumber\\ &=& \mathbf{U}_{n}\frac{\mathbf{w}}{\|\mathbf{w}\|}(\cos(\sigma\eta) - 1)\frac{\mathbf{w}^T}{\|\mathbf{w}\|} + \frac{\mathbf{r}}{\|\mathbf{r}\|}\sin(\sigma\eta)\frac{\mathbf{w}^T}{\|\mathbf{w}\|} \nonumber\\ &=& \mathbf{U}_{n}(\cos(\sigma\eta) - 1)\frac{\mathbf{w}\mathbf{w}^T}{\|\mathbf{w}\|^2} + \sin(\sigma\eta)\frac{\mathbf{r}}{\|\mathbf{r}\|}\frac{\mathbf{w}^T}{\|\mathbf{w}\|}. \end{eqnarray} (A8)

Let p = Unw; since Un has orthonormal columns, ‖p‖ = ‖w‖ and

$$\Delta\mathbf{U} = \left((\cos(\sigma\eta) - 1)\frac{\mathbf{p}}{\|\mathbf{p}\|} + \sin(\sigma\eta)\frac{\mathbf{r}}{\|\mathbf{r}\|}\right)\frac{\mathbf{w}^T}{\|\mathbf{w}\|}.$$ (A9)

APPENDIX B: MIXED SIGNAL AND NOISE SUBSPACE USING TSVD-BASED SSA

Let us reformulate the Hankel matrix H expressed in eq. (11) as

$$\mathbf{H} = \mathbf{S} + \mathbf{N},$$ (B1)

where S denotes the signal component in H and N denotes the pre-arranged noise in the Hankel matrix. The SVD of S can be represented as

$$\mathbf{S} = \big[\mathcal{U}_1^{S}\quad \mathcal{U}_2^{S}\big]\left[\begin{array}{c@{\quad}c}\Sigma_1^{S} & \mathbf{0}\\ \mathbf{0} & \Sigma_2^{S} \end{array}\right]\left[\begin{array}{c}(\mathcal{V}_1^{S})^T\\ (\mathcal{V}_2^{S})^T \end{array}\right].$$ (B2)

Because S is rank deficient, $$\Sigma_2^{S}$$ vanishes and the matrix S can be written as

$$\mathbf{S} = \mathcal{U}_1^S\Sigma_1^S(\mathcal{V}_1^S)^T.$$ (B3)

Because eq. (B2) is an SVD of the signal matrix S, the left matrix in eq. (B2) is a unitary matrix:

$$\mathbf{I} = \mathcal{U}^S(\mathcal{U}^S)^T = \big[\mathcal{U}_1^S\quad \mathcal{U}_2^S\big]\left[\begin{array}{c}(\mathcal{U}_1^S)^T \\ (\mathcal{U}_2^S)^T \end{array}\right].$$ (B4)

Combining eqs (B1), (B3) and (B4), we can derive

\begin{eqnarray} \mathbf{H} &=& \mathbf{S} + \mathbf{N} = \mathbf{S} + \mathbf{I}\mathbf{N} \nonumber\\ &=& \mathbf{S} + \big[\mathcal{U}_1^S\quad \mathcal{U}_2^S\big]\left[\begin{array}{c}(\mathcal{U}_1^S)^T \\ (\mathcal{U}_2^S)^T \end{array}\right]\mathbf{N} \nonumber\\ &=& \mathcal{U}_1^S\Sigma_1^S(\mathcal{V}_1^S)^T + \left(\mathcal{U}_1^S(\mathcal{U}_1^S)^T + \mathcal{U}_2^S(\mathcal{U}_2^S)^T\right)\mathbf{N} \nonumber\\ &=& \mathcal{U}_1^S\left((\mathcal{U}_1^S)^T\mathbf{N} + \Sigma_1^S(\mathcal{V}_1^S)^T\right) + \mathcal{U}_2^S(\mathcal{U}_2^S)^T\mathbf{N} \nonumber\\ &=& \mathcal{U}_1^S\left(\mathbf{N}^T\mathcal{U}_1^S + \mathcal{V}_1^S\Sigma_1^S\right)^T + \mathcal{U}_2^S(\mathbf{N}^T\mathcal{U}_2^S)^T \nonumber\\ &=& \big[\mathcal{U}_1^S\quad \mathcal{U}_2^S\big]\left[\begin{array}{c@{\quad}c}\mathbf{I} & \mathbf{0}\\ \mathbf{0} & \mathbf{I} \end{array}\right]\left[\begin{array}{c}(\mathbf{N}^T\mathcal{U}_1^S + \mathcal{V}_1^S\Sigma_1^S)^T\\ (\mathbf{N}^T\mathcal{U}_2^S)^T \end{array}\right], \end{eqnarray} (B5)

and further factorize H as

$$\mathbf{H} = \big[\mathcal{U}_1^S\quad \mathcal{U}_2^S\big]\left[\begin{array}{c@{\quad}c}\Sigma_{1} & \mathbf{0}\\ \mathbf{0} & \Sigma_{2} \end{array}\right]\left[\begin{array}{c}(\Sigma_{1})^{-1}(\mathbf{N}^T\mathcal{U}_1^S + \mathcal{V}_1^S\Sigma_1^S)^T\\ (\Sigma_{2})^{-1}(\mathbf{N}^T\mathcal{U}_2^S)^T \end{array}\right],$$ (B6)

where Σ1 and Σ2 denote diagonal, positive definite matrices. Note that H is constructed such that it is close to a square matrix (Oropeza & Sacchi 2011), and thus Σ1 and Σ2 are assumed to be square matrices for derivation convenience.
We observe that the left matrix has orthonormal columns and the middle matrix is diagonal. It can be shown that the right matrix also has orthonormal columns. Thus, eq. (B6) is an SVD of H. According to the TSVD method, we let Σ2 be 0, and then the following equation holds:

\begin{eqnarray} \tilde{\mathbf{H}} &=& \mathcal{U}_1^S(\mathcal{U}_1^S)^T\mathbf{N} + \mathcal{U}_1^S\Sigma_1^S(\mathcal{V}_1^S)^T \nonumber\\ &=& \mathbf{S} + \mathcal{U}_1^S(\mathcal{U}_1^S)^T\mathbf{N}. \end{eqnarray} (B7)

From eq. (B7), it is straightforward to see that the TSVD result $$\tilde{\mathbf{H}}$$ still contains significant random noise $$\mathcal{U}_1^S(\mathcal{U}_1^S)^T\mathbf{N}$$; in other words, the traditional TSVD can only decompose the data space into a mixture of signal and noise subspaces (Huang et al. 2017a).

APPENDIX C: DERIVATION OF THE 3-D LOW-RANK MATRIX CONSTRUCTION

Instead of constructing a Hankel matrix from a 1-D spatial trace, in the 3-D case we construct a block Hankel matrix from a 2-D spatial matrix (X and Y) in the frequency domain (Oropeza & Sacchi 2011). Consider a block of 3-D data $$\mathbf{D}_{\text{time}}(x,y,t)$$ of Nx by Ny by Nt samples. First, we transform $$\mathbf{D}_{\text{time}}(x,y,t)$$ into $$\mathbf{D}_{\text{freq}}(x,y,w)$$ (w = 1, …, Nw) with complex values in the frequency domain. Each frequency slice of the data, at a given frequency w0, can be represented by the following matrix:

$$\mathbf{D}(w_0) = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}D(1,1) & D(1,2) & \cdots & D(1,N_x) \\ D(2,1) & D(2,2) & \cdots & D(2,N_x) \\ \vdots & \vdots & \ddots & \vdots \\ D(N_y,1) & D(N_y,2) & \cdots & D(N_y,N_x) \end{array}\right).$$ (C1)

In the following, w0 is omitted for notational convenience. Then, we construct a Hankel matrix for each row of D.
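This row-wise Hankel embedding can be sketched in a few lines of numpy (a minimal illustration; `hankelize_row` is a hypothetical helper name, not code from the paper):

```python
import numpy as np

def hankelize_row(row, p):
    """Form the (Nx - p + 1) x p Hankel matrix of one row of the
    frequency slice D (hypothetical helper, not the authors' code)."""
    n = len(row)
    return np.array([row[i:i + p] for i in range(n - p + 1)])

# One complex-valued row with Nx = 6 samples and p = Nx - Nx//2 = 3
row = np.arange(6) + 1j * np.arange(6)
R = hankelize_row(row, p=3)
print(R.shape)  # (4, 3)
```

Each antidiagonal of the resulting matrix repeats a single sample, which is exactly the Hankel structure that the subsequent rank-reduction step exploits.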
The Hankel matrix Ri for row i of D is

$$\mathbf{R}_i = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} D(i,1) & D(i,2) & \cdots & D(i,p) \\ D(i,2) & D(i,3) & \cdots & D(i,p+1) \\ \vdots & \vdots & \ddots & \vdots \\ D(i,N_x-p+1) & D(i,N_x-p+2) & \cdots & D(i,N_x) \end{array}\right),$$ (C2)

which is exactly the same as eq. (11). The block Hankel matrix is constructed by treating each Hankel matrix Ri as an element and arranging the elements into a block matrix:

$$\mathbf{H} = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\mathbf{R}_1 & \mathbf{R}_2 & \cdots & \mathbf{R}_q \\ \mathbf{R}_2 & \mathbf{R}_3 & \cdots & \mathbf{R}_{q+1} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{R}_{N_y-q+1} & \mathbf{R}_{N_y-q+2} & \cdots & \mathbf{R}_{N_y} \end{array}\right).$$ (C3)

The size of H is I × J, where I = (Nx − p + 1)(Ny − q + 1) and J = pq. Here p and q are predefined integers chosen such that the Hankel matrices Ri and the block Hankel matrix H are close to square, for example, $$p = N_x - \lfloor \frac{N_x}{2} \rfloor$$ and $$q = N_y - \lfloor \frac{N_y}{2} \rfloor$$, where ⌊·⌋ denotes the integer part of the argument. The subsequent low-rank decomposition, using either SVD or GROUSE, proceeds exactly as in the 2-D case described in detail in the main text. © The Author(s) 2017. Published by Oxford University Press on behalf of The Royal Astronomical Society.
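To make the block Hankel construction of eqs (C2) and (C3) concrete, the following numpy sketch builds H for one frequency slice and checks that a single plane wave maps to a rank-1 matrix (an illustration with hypothetical helper names and the paper's suggested p = Nx − ⌊Nx/2⌋, q = Ny − ⌊Ny/2⌋; not the authors' implementation):

```python
import numpy as np

def hankelize_row(row, p):
    # Hankel matrix of one row of the frequency slice, as in eq. (C2)
    return np.array([row[i:i + p] for i in range(len(row) - p + 1)])

def block_hankel(D, p, q):
    """Block Hankel matrix H of eq. (C3) from an Ny x Nx frequency
    slice D (illustrative sketch, not the authors' code)."""
    Ny = D.shape[0]
    R = [hankelize_row(D[i], p) for i in range(Ny)]            # R_1 ... R_Ny
    rows = [np.hstack(R[i:i + q]) for i in range(Ny - q + 1)]  # block rows
    return np.vstack(rows)

# A single plane wave D(y, x) = z**(x + y) yields a rank-1 block
# Hankel matrix, which is why linear events are low rank.
Nx = Ny = 6
p, q = Nx - Nx // 2, Ny - Ny // 2                # p = q = 3
z = np.exp(1j * 0.5)
D = z ** np.add.outer(np.arange(Ny), np.arange(Nx))
H = block_hankel(D, p, q)
print(H.shape)                   # (16, 9): I = (Nx-p+1)(Ny-q+1), J = pq
print(np.linalg.matrix_rank(H))  # 1
```

A superposition of a few plane waves gives a correspondingly small rank, which is the low-rank property that the SVD or GROUSE step exploits.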

### Journal

Geophysical Journal International, Oxford University Press

Published: Feb 1, 2018
