Abbass and Kim, EURASIP Journal on Image and Video Processing (2018) 2018:38

Abstract

Signal and image separation is an important processing step for accurate image reconstruction, and it is increasingly applied in medical imaging and communication systems. Most conventional separation approaches operate in the frequency or time domain; they are sensitive to noise and therefore often produce undesirable results. In this paper, we propose a novel image separation method. It combines pyramid components extracted from the image with a finite ridgelet transform (FRT) to obtain a precise analysis of the images, so that the images can be separated correctly even in a highly noisy environment. We obtain multiple components of the target images by employing pyramid processing, which operates across the various transform domains and decomposes each image into multiple components. In addition, the pyramid decomposition eliminates information redundancy in the target image and can therefore substantially enhance the quality of image separation. Extensive simulations demonstrate that the proposed pyramid structure with the FRT outperforms conventional methods based on the time domain and on trigonometric transforms.

Keywords: Pyramid technique, Finite ridgelet transform (FRT), ICA, Blind source separation (BSS)

1 Introduction

Blind source separation (BSS) has been one of the major research areas for over a decade and is receiving growing attention due to its applications in image and signal processing. It aims at extracting a set of source signals from an observed mixture of signals with little or no information about the mixing environment, the mixing process, or the sources. The applications of BSS range from medical engineering and neuroscience to telecommunications and financial time series analysis. For example, its recent applications include astronomical imaging, remote sensing, medical imaging, biological data analysis, and image and speech signal processing [1–3].

Independent component analysis (ICA) has often been regarded as an attractive solution to the BSS problem. Its process is based on non-Gaussianity, so it can exploit the statistical independence of the sources to calculate the de-mixing matrix and extract the source signals up to a scaling factor and a permutation [4, 5].

In biomedical applications, ICA has been applied to functional magnetic resonance imaging (fMRI) data analysis. For example, in [6], temporal dynamics and their spatial sources were successfully recognized by real-valued ICA. ICA has also been applied in [7] to electroencephalography (EEG) classification with a two-state output (fatigue state vs. alert state). ICA has further been used in gait activity analysis, which usually relies on multiple sensors such as pressure, gyroscope, and accelerometer sensors; these sensors often suffer from a crosstalk problem in which each sensor interferes with the others. ICA has been exploited in [8] to enhance an automated classification technique that distinguishes toe walking gait from normal gait in idiopathic toe walking (ITW) children.
The simplest BSS model assumes the existence of N independent sources $s_1, s_2, \ldots, s_N$ and the same number of linear and instantaneous mixtures of these sources, $x_1, x_2, \ldots, x_N$, that is,

$$x_j = a_{j1} s_1 + a_{j2} s_2 + \cdots + a_{jN} s_N, \quad 1 \le j \le N \tag{1}$$

In vector-matrix notation, the above mixing model in the presence of noise can be expressed as

$$\mathbf{x} = \mathbf{A}\mathbf{s} + \mathbf{n} \tag{2}$$

Here, A is an N × N square mixing matrix. Equation (2) can be written out in matrix form as follows:

$$\begin{pmatrix} x_1^{T}(k) \\ \vdots \\ x_N^{T}(k) \end{pmatrix} = \begin{pmatrix} A_{11} & \cdots & A_{1N} \\ \vdots & \ddots & \vdots \\ A_{N1} & \cdots & A_{NN} \end{pmatrix} \begin{pmatrix} s_1^{T}(k) \\ \vdots \\ s_N^{T}(k) \end{pmatrix} + \begin{pmatrix} n_1^{T}(k) \\ \vdots \\ n_N^{T}(k) \end{pmatrix} \tag{3}$$

The model described above is represented in Fig. 1. The de-mixing process [9–11] consists of calculating the separating matrix W, which is the inverse of the mixing matrix A, and computing the independent sources, which are obtained by

$$\mathbf{s} = \mathbf{W}\mathbf{x} \tag{4}$$

Fig. 1 The mixing and de-mixing models
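As a concrete illustration of Eqs. (1)-(4), the following sketch (not taken from the paper; the data and the 2 x 2 setup are hypothetical) mixes two flattened sources with a random matrix A and recovers them with W equal to the inverse of A. In the blind setting considered in this paper, A is unknown and W must instead be estimated, for example by ICA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "sources" standing in for flattened grayscale images (hypothetical data).
s1 = rng.random(64 * 64)
s2 = rng.random(64 * 64)
S = np.vstack([s1, s2])                 # source matrix, shape (2, 4096)

A = rng.normal(size=(2, 2))             # random square mixing matrix (Eq. 2)
n = 0.01 * rng.normal(size=S.shape)     # additive noise term
X = A @ S + n                           # observed mixtures, x = As + n

# If A were known, the separating matrix would simply be its inverse (Eq. 4);
# in blind separation, W has to be estimated, e.g., with ICA.
W = np.linalg.inv(A)
S_hat = W @ X                           # estimated sources, s ~ Wx

print(np.abs(S_hat - S).max())          # small residual due only to the injected noise
```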
In this paper, we introduce a novel image separation algorithm that separates mixed images by extracting the components of the images using a pyramid technique. In this way, the image structure can be decomposed into multiple images of different scales. The proposed method creates different levels of scaled-down images in a pyramid structure, and the separation process is then conducted on each level of the pyramid. The lowest-scale image at the top of the pyramid retains the main features of the original image at the bottom of the pyramid but carries less redundancy. Conducting the separation on the scaled images of the pyramid therefore leads to better separation performance and to lower redundancy in the resulting separated images.

Our method has two advantages over most ICA methods. The first is high performance under noisy conditions. Most ICA techniques consider only noiseless data and hence often produce poor results in the presence of noise [12, 13]. In contrast, our method can separate mixed images under noisy conditions and still provide a high peak signal-to-noise ratio (PSNR). The second advantage is fast processing combined with accurate separation: because the method removes the redundancy in the image information, it can obtain the estimated image sources faster and more accurately than the ICA methods.

The key contribution of the proposed algorithm is that it extracts the scaled-down images of the pyramid in a way that maintains the important information of the original image while reducing the redundant information.

The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 describes the techniques used, including the principle of the pyramid image, the finite ridgelet transform, and the trigonometric transforms. In Section 4, the proposed image separation approach is presented. To demonstrate the effectiveness of the proposed technique, an extensive set of simulation experiments and performance comparisons is reported in Section 5. Finally, Section 6 presents the concluding remarks.

2 Related work

In the literature, several approaches to the source image separation problem have been published. The method in [14] considers a nonlinear real-life mixture of document images that occurs when a page of a document is scanned and the back page shows through. It uses a separation method based on the fact that the high-frequency components of the images are sparse and are stronger on one side of the paper than on the other. Astrophysical image separation was addressed with a blind source separation method in [15]. In the work of [16], feedback sparse component analysis of image mixtures was developed to extract the image sources by utilizing a feedback mechanism and sparse component analysis (SCA). In [17], a wavelet packet transform method was proposed in combination with a geometric de-mixing algorithm: it decomposes the mixed images by a wavelet transform (WT) and then uses the most relevant component as the input to its geometric de-mixing algorithm.

In the article of [18], columns or rows of mixed images were concatenated to arrange them into 1-D mixed signals. A frequency-time source separation approach with mutual diagonalization was then applied to enhance these 1-D signals and resolve their components, and the two-dimensional (2-D) astrophysical image components were finally obtained by segmenting the separated 1-D signals and rearranging the segments as columns or rows.

Recently, researchers have applied sparse component analysis (SCA) to improve blind image separation [19, 20]. These approaches can separate image mixtures accurately using linear clustering, provided that the image sources are sparse enough; linear clustering also has a lower run time than super-plane clustering techniques [21].

The work of [22] applied the discrete cosine transform (DCT) to obtain information in the frequency domain. It uses a block-segmented DCT reorganization to gather the information in the segmented blocks and selects the sparsest block by comparing the linear strength in each block. Moreover, the authors of [22] used the geometric characteristics of the sparse blocks to identify the linear orientations that match the columns of the mixing matrix.
3 Methods of the proposed scheme

3.1 Principle of pyramid image enhancement

An image can be decomposed and analyzed in the form of a pyramid with a few levels of scaled-down images. The pyramid places the original image at the first level and adds scaled-down images at the higher levels [23], as illustrated in Fig. 2.

Fig. 2 Pyramid technique effect

The pyramid scales down an image using a low-pass filter with the Gaussian mask expressed by Eq. (5):

$$H = \frac{1}{256}\begin{pmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{pmatrix} \tag{5}$$

The motivation behind our pyramid technique is that the pixels within a local area often have similar characteristics and are therefore highly correlated with each other. To estimate the inverse matrix from Eq. (4) directly, the mixed image is converted from a 2-D signal to a 1-D signal. It is inefficient to apply an ICA algorithm directly to these pixel values, since most of the information around neighboring pixels is redundant and the entropy of the pixels in the same area is low. Therefore, as a more efficient way to calculate the inverse matrix, we propose a technique that removes this redundancy without affecting the information and that increases the entropy while maintaining the features. The pyramid technique proposed in this paper scales the image down while preserving the main (salient) features and removing the redundant information.

In the present work, we use three pyramid levels, as shown in Fig. 2. Level 1 is the input image. Level 2 is the output obtained by applying the filter of Eq. (5), followed by a downsampling step. These steps are then repeated to produce level 3. The ratio between the image outputs of two consecutive levels determines the scale of the pyramid levels with respect to the original image, and these scales are used for the subsequent ICA-based separation.
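The three-level construction of Section 3.1 can be sketched as follows. This is not the authors' implementation; it is a minimal NumPy illustration that convolves with the mask H of Eq. (5) using zero padding at the borders (a border-handling assumption the paper does not specify) and then downsamples by a factor of 2 per level.

```python
import numpy as np

H = np.array([[1, 4, 6, 4, 1],
              [4, 16, 24, 16, 4],
              [6, 24, 36, 24, 6],
              [4, 16, 24, 16, 4],
              [1, 4, 6, 4, 1]], dtype=float) / 256.0   # Eq. (5)

def gaussian_filter(img):
    """Convolve img with H, using zero padding (border handling is an assumption)."""
    pad = np.pad(img, 2)
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(pad[r:r + 5, c:c + 5] * H)
    return out

def build_pyramid(img, levels=3):
    """Level 1 is the input; each further level is filtered with H and downsampled by 2."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1])
        pyramid.append(smoothed[::2, ::2])     # downsampling step
    return pyramid

levels = build_pyramid(np.random.rand(64, 64))
print([p.shape for p in levels])               # (64, 64), (32, 32), (16, 16)
```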
3.2 Transform techniques

3.2.1 Ridgelet transform

The ridgelet transform (RT) is a highly effective approximation approach for representing image objects, as described by Candes and Donoho [24, 25]. A ridgelet has a discontinuity across a line, and the curvelet, which is adopted in their papers as a type of RT, is an effective transform for objects with discontinuities across curves. The approximation quality of the RT is very close to the ideal Lagrangian condition and is in general better than that of other transforms such as the Fourier transform (FT) and the wavelet transform (WT). Owing to this advantage, the RT is widely used in image analysis, for example in watermarking, image enhancement, image de-noising, and texture classification [26, 27].

Suppose that there is a function $\psi: \mathbb{R} \to \mathbb{R}$ satisfying the admissibility condition

$$\int \frac{|\hat{\psi}(\xi)|^2}{|\xi|^2}\, d\xi < \infty \tag{6}$$

where $\hat{\psi}$ stands for the Fourier transform of the function $\psi$. For each $a > 0$, $b \in \mathbb{R}$, and $\theta \in [0, 2\pi]$, a bivariate ridgelet $\psi_{a,b,\theta}: \mathbb{R}^2 \to \mathbb{R}$ is defined as

$$\psi_{a,b,\theta}(x_1, x_2) = a^{-1/2}\, \psi\!\left(\frac{x_1\cos\theta + x_2\sin\theta - b}{a}\right) \tag{7}$$

For a fixed $\theta$, $\psi_{a,b,\theta}(x_1, x_2)$ is constant along the lines $x_1\cos\theta + x_2\sin\theta = \text{constant}$.

Given an integrable signal $f(x_1, x_2)$, the RT is defined as

$$RT_f(a, b, \theta) = \int_{\mathbb{R}^2} f(x_1, x_2)\, \psi_{a,b,\theta}(x_1, x_2)\, dx_1\, dx_2 \tag{8}$$

It follows that

$$\psi_{a,b,\theta}(x_1, x_2) = \int \psi_{a,b}(t)\, \delta(x_1\cos\theta + x_2\sin\theta - t)\, dt \tag{9}$$

where $\psi_{a,b}(t) = a^{-1/2}\, \psi((t - b)/a)$. Then the RT can be expressed as

$$RT_f(a, b, \theta) = \int_{\mathbb{R}^2}\!\!\int \psi_{a,b}(t)\, f(x_1, x_2)\, \delta(x_1\cos\theta + x_2\sin\theta - t)\, dt\, dx_1\, dx_2 = \int \psi_{a,b}(t)\, R_f(\theta, t)\, dt \tag{10}$$

As a result, the Radon transform appearing above is given by

$$R_f(\theta, t) = \int_{\mathbb{R}^2} f(x_1, x_2)\, \delta(x_1\cos\theta + x_2\sin\theta - t)\, dx_1\, dx_2 \tag{11}$$

From the signal $f(x_1, x_2)$ we can thus calculate the Radon transform (RAT), and the RT can be expressed as the application of a 1-D wavelet transform to the slices of the Radon space.

It is known that an approximate RAT of an image can be computed efficiently with the fast Fourier transform (FFT). This approach is summarized by [28–30]:

(a) 2-D FFT step: calculate the 2-D FFT of the image.
(b) Cartesian-to-polar conversion step: obtain samples on the rectopolar grid, as shown in Fig. 3.
(c) 1-D inverse FFT step: calculate the 1-D inverse FFT on each angular line.

Fig. 3 Ridgelet transform calculation using FFT in rectopolar coordinates

For the Cartesian-to-polar conversion, we use a rectopolar coordinate plane. The geometry of this plane is presented in Fig. 3, where the data points are marked with circles. For an image of size n × n, there are 2n radial lines in the frequency plane, selected by connecting the origin to the vertices lying on the boundary of the array. The grid points of the rectopolar plane are the intersections between this set of radial lines and the set of Cartesian lines parallel to the axes. Thus, there are 2n × n points (marked with circles) on the rectopolar grid lines, and the corresponding data structure is a rectangular array with n × 2n elements.

To complete the RT, we perform a 1-D wavelet transform along the radial variable in Radon space. Figure 4 displays the flow graph of the RT.

Fig. 4 Ridgelet transform flowchart

Do and Vetterli proposed a different realization of the ridgelet transform called the finite ridgelet transform (FRT). It has numerical exactness comparable to the RT with little computational complication. As noted above, a discrete RT can be realized via a Radon transform and a 1-D discrete wavelet transform (DWT), as presented in Fig. 4.

The finite Radon transform (FRAT) is simply a summation of image pixels over a certain set of lines. Those lines are defined in a finite geometry in a manner similar to the lines of the continuous Radon transform (RAT) in Euclidean geometry [12–16]. We denote $Z_p = \{0, 1, 2, \ldots, p - 1\}$, where $p$ is a prime number; note that $Z_p$ is a finite field under modulo-$p$ operations. The FRAT of a real function $f$ on the finite grid $Z_p^2$ is then given by

$$r_k[l] = FRAT_f(k, l) = \frac{1}{\sqrt{p}} \sum_{(i,j)\in L_{k,l}} f(i, j) \tag{12}$$

Here, $L_{k,l}$ denotes the set of points that make up a line on the lattice $Z_p^2$:

$$L_{k,l} = \begin{cases} \{(i, j) : j = ki + l \ (\mathrm{mod}\ p),\ i \in Z_p\}, & 0 \le k \le p - 1 \\ \{(l, j) : j \in Z_p\}, & k = p \end{cases} \tag{13}$$

The lines of the FRAT exhibit a wrap-around effect; that is, the FRAT treats the input image as one period of a periodic picture. In the FRAT domain, the energy is best compacted if the mean is removed from the image $f(i, j)$ prior to the transform. In Eq. (12), the factor $1/\sqrt{p}$ is included to normalize the norm between the input and the output of the FRAT. With an invertible FRAT and by using Eq. (13), we obtain an invertible discrete finite ridgelet transform (FRT) by taking a discrete wavelet transform on each FRAT projection sequence $(r_k[0], r_k[1], \ldots, r_k[p-1])$, where the direction $k$ is held constant. The overall result is known as the FRT, as shown in Fig. 4.
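The FRAT of Eqs. (12)-(13) can be transcribed directly, as in the sketch below (an illustrative, unoptimized implementation, not the authors' code). The full FRT would additionally apply a 1-D DWT to each projection r_k.

```python
import numpy as np

def frat(f):
    """Finite Radon transform of a p x p array (p prime), following Eqs. (12)-(13)."""
    p = f.shape[0]
    assert f.shape == (p, p)
    out = np.zeros((p + 1, p))          # projections k = 0..p-1 plus the k = p set
    for k in range(p):                  # lines j = k*i + l (mod p), i in Z_p
        for l in range(p):
            s = 0.0
            for i in range(p):
                s += f[i, (k * i + l) % p]
            out[k, l] = s / np.sqrt(p)
    for l in range(p):                  # k = p: the lines {(l, j) : j in Z_p}
        out[p, l] = f[l, :].sum() / np.sqrt(p)
    return out

img = np.random.rand(7, 7)              # 7 is prime
r = frat(img - img.mean())              # mean removal improves energy compaction (Section 3.2.1)
print(r.shape)                          # (8, 7): p + 1 projections of length p
```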
3.2.2 Discrete wavelet transform

Wavelets have become an efficient tool in several signal processing areas, such as signal de-noising, image fusion, and signal restoration and compression. The conventional discrete wavelet transform (DWT) may be regarded as the result of filtering the input signal with a bank of band-pass filters whose impulse responses are all approximately scaled versions of a mother wavelet. The scaling factor between adjacent filters is usually 2:1, which leads to octave bandwidths and center frequencies that are one octave apart, as illustrated in Fig. 5 [31]. The filter outputs are usually maximally decimated so that the number of DWT output samples equals the number of input samples and the transform is invertible, as shown in Fig. 5.

Fig. 5 Reconstruction and decomposition based on wavelets. a Full decomposition-reconstruction of a two-band filter bank. b Decomposition tree of the wavelet packet. c Reconstruction tree of the wavelet packet

The art of calculating a good wavelet lies in the design of appropriate filters $H_0$, $H_1$, $G_0$, and $G_1$ that realize various trade-offs between frequency and spatial characteristics while satisfying the condition of perfect reconstruction (PR) introduced in [31]. In Fig. 5, the cascade of decimation and interpolation by 2:1 at the outputs of $H_0$ and $H_1$ sets all odd-indexed samples of these signals to zero. For the low-pass branch, this is equivalent to multiplying $x_0(n)$ by $\frac{1}{2}\left(1 + (-1)^n\right)$; hence, $X_0(z)$ is converted to $\frac{1}{2}\{X_0(z) + X_0(-z)\}$. Similarly, $X_1(z)$ is converted to $\frac{1}{2}\{X_1(z) + X_1(-z)\}$. Thus, the expression for $Y(z)$ is given by [31]:

$$Y(z) = \frac{1}{2}\{X_0(z) + X_0(-z)\}\, G_0(z) + \frac{1}{2}\{X_1(z) + X_1(-z)\}\, G_1(z) = \frac{X(z)}{2}\{H_0(z)G_0(z) + H_1(z)G_1(z)\} + \frac{X(-z)}{2}\{H_0(-z)G_0(z) + H_1(-z)G_1(z)\} \tag{14}$$

The first PR condition requires aliasing cancellation and forces the term in $X(-z)$ above to be zero [31]; hence, $\{H_0(-z)G_0(z) + H_1(-z)G_1(z)\} = 0$, which can be achieved if

$$H_1(z) = z^{-k} G_0(-z) \quad \text{and} \quad G_1(z) = z^{k} H_0(-z) \tag{15}$$

Here, $k$ must be odd (usually $k = \pm 1$). The second PR condition requires the transfer function from $X(z)$ to $Y(z)$ to be unity:

$$H_0(z)G_0(z) + H_1(z)G_1(z) = 2 \tag{16}$$
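Equations (14)-(16) can be checked numerically for a concrete filter set. The sketch below assumes a Haar-type pair H0(z) = (1 + z^-1)/sqrt(2) and G0(z) = (z + 1)/sqrt(2) (the paper does not commit to specific filters), derives H1 and G1 from Eq. (15) with k = 1, and verifies that the alias term vanishes and that the distortion term of Eq. (16) equals 2.

```python
import numpy as np

# Laurent polynomials in z are represented as {exponent: coefficient}.
def pmul(a, b):
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0.0) + ca * cb
    return out

def padd(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0.0) + c
    return {e: c for e, c in out.items() if abs(c) > 1e-12}

def flip(a):
    """Substitute z -> -z."""
    return {e: c * ((-1) ** e) for e, c in a.items()}

s = 1 / np.sqrt(2)
H0 = {0: s, -1: s}                 # H0(z) = (1 + z^-1)/sqrt(2)  (assumed Haar-type analysis filter)
G0 = {1: s, 0: s}                  # G0(z) = (z + 1)/sqrt(2)     (assumed synthesis filter)
H1 = pmul({-1: 1.0}, flip(G0))     # Eq. (15) with k = 1: H1(z) = z^-1 G0(-z)
G1 = pmul({1: 1.0}, flip(H0))      #                      G1(z) = z    H0(-z)

alias = padd(pmul(flip(H0), G0), pmul(flip(H1), G1))   # must vanish (first PR condition)
dist = padd(pmul(H0, G0), pmul(H1, G1))                # must equal 2 (Eq. 16)
print(alias)    # {}  -> the X(-z) term cancels
print(dist)     # {0: 2.0} up to rounding
```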
3.2.3 Trigonometric transform

The two primary trigonometric transforms are the discrete cosine transform (DCT) and the discrete sine transform (DST). Trigonometric transforms have an energy compaction property. The two transforms are described below.

3.2.3.1 DCT The DCT is a 1-D transform with the capability of energy compaction. For a 1-D signal $x(k)$ of length $K$, the DCT is given by [32]:

$$X(m) = \omega(m) \sum_{k=0}^{K-1} x(k)\, \cos\!\left(\frac{\pi (2k + 1) m}{2K}\right), \quad m = 0, 1, \ldots, K-1 \tag{17}$$

where

$$\omega(m) = \begin{cases} \sqrt{1/K}, & m = 0 \\ \sqrt{2/K}, & m = 1, \ldots, K-1 \end{cases} \tag{18}$$

3.2.3.2 DST The DST is the other trigonometric transform and is defined analogously to Eq. (17); application examples of the DST can be found in [31]:

$$X(m) = \omega(m) \sum_{k=1}^{K} x(k)\, \sin\!\left(\frac{\pi m k}{K + 1}\right), \quad m = 1, \ldots, K \tag{19}$$
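For reference, both transforms can be evaluated by direct summation as below (an illustrative, quadratic-time sketch; practical implementations use fast algorithms). The DST scaling factor is an assumption, since Eq. (18) defines the weights only for the DCT.

```python
import numpy as np

def dct(x):
    """DCT of Eqs. (17)-(18): X(m) = w(m) * sum_k x(k) cos(pi (2k+1) m / (2K))."""
    K = len(x)
    k = np.arange(K)
    X = np.empty(K)
    for m in range(K):
        w = np.sqrt(1.0 / K) if m == 0 else np.sqrt(2.0 / K)
        X[m] = w * np.sum(x * np.cos(np.pi * (2 * k + 1) * m / (2 * K)))
    return X

def dst(x):
    """DST of Eq. (19): X(m) = w * sum_k x(k) sin(pi m k / (K+1)), 1-based k and m."""
    K = len(x)
    k = np.arange(1, K + 1)
    w = np.sqrt(2.0 / (K + 1))      # orthonormal DST-I scale (assumption; Eq. (18) covers the DCT only)
    return np.array([w * np.sum(x * np.sin(np.pi * m * k / (K + 1)))
                     for m in range(1, K + 1)])

x = np.random.rand(8)
print(np.round(dct(x), 4))
print(np.round(dst(x), 4))
```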
4 The proposed image separation approach

As described in the previous sections, we merge the benefits of the pyramid technique and the FRT. First, the mixed images are decomposed into frequency bands using different transforms. Each frequency band is then handled separately, using the pyramid technique to extract its details. A flow diagram of the proposed method is depicted in Fig. 6, and the method proceeds by the following steps:

Fig. 6 Block diagram of the proposed image separation algorithm

Step 1: Decompose the mixed image with the different transforms: the finite ridgelet transform (FRT), the wavelet transform (WT), the discrete sine transform (DST), and the discrete cosine transform (DCT).

Step 2: Apply the pyramid construction to each transform output to obtain the different scale components of each transform at each pyramid level. We use three pyramid levels in the present work, although the construction can be extended to a larger number of levels.

Step 3: Conduct the separation operation on all the pyramid levels of each transform and calculate the inverse (un-mixing) matrix. The operation proceeds from level 3 (the smallest scale) towards level 1 (the largest scale). At level 3, we start from a random matrix and estimate the inverse matrix. The estimate obtained at level 3 is used as the initial matrix for updating the inverse matrix at level 2, and the updated estimate from level 2 is in turn used as the initial matrix at level 1 to calculate the final values of the inverse matrix. The final estimated inverse matrix is applied to the original mixed image to extract accurately separated images in Step 4.

Step 4: Calculate an estimate of the separated images by applying the calculated inverse matrix to the mixed images.

For the per-level separation we use ICA, which has been regarded as one of the most efficient approaches reported in different fields; it provides fast convergence and a straightforward implementation. Figure 7 illustrates a flowchart of the fast independent component analysis (FastICA) approach [32].

Fig. 7 Flowchart of the FastICA algorithm
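The coarse-to-fine procedure of Steps 1-4 is sketched below. This is not the authors' MATLAB implementation: the transform front end (Step 1) is omitted so the separation runs directly on pixel values, the Eq. (5) filtering is replaced by plain decimation, and the per-level separation uses a compact FastICA-style fixed-point update. What the sketch does preserve is the warm start of Step 3, in which the un-mixing matrix estimated at the coarsest level initializes the next level, and the final matrix is applied to the full-resolution mixtures (Step 4).

```python
import numpy as np

def whiten(X):
    """Center and whiten the mixtures (rows = channels)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return V @ Xc, V

def fastica(X, W0, iters=200):
    """Symmetric FastICA-style fixed point with a tanh nonlinearity, started from W0 (sketch)."""
    Z, V = whiten(X)
    W = W0.copy()
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt                          # symmetric decorrelation
    return W @ V                            # un-mixing matrix for the unwhitened mixtures

def downsample(img):
    return img[::2, ::2]                    # stand-in for the Eq. (5) filter plus decimation

def separate(mixed_images, levels=3):
    """Steps 2-4: pyramid per mixture, coarse-to-fine warm start, final W applied to full mixtures."""
    n = len(mixed_images)
    pyramids = [[img.astype(float)] for img in mixed_images]
    for _ in range(levels - 1):
        for p in pyramids:
            p.append(downsample(p[-1]))
    W = np.eye(n)                           # stands in for the random start at the coarsest level
    for lvl in reversed(range(levels)):     # level 3 -> level 2 -> level 1
        X = np.vstack([p[lvl].ravel() for p in pyramids])
        W = fastica(X, W)
    X_full = np.vstack([img.ravel() for img in mixed_images])
    return (W @ X_full).reshape(n, *mixed_images[0].shape)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s1, s2 = rng.random((64, 64)), rng.random((64, 64))
    A = rng.normal(size=(2, 2))
    m1 = A[0, 0] * s1 + A[0, 1] * s2
    m2 = A[1, 0] * s1 + A[1, 1] * s2
    est = separate([m1, m2])
    print(est.shape)                        # (2, 64, 64): source estimates up to scale and permutation
```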
Next, we introduce the performance metrics used to evaluate the image separation results of each method; they are also described in [33–35]. Here, the image size is M × N, $f(x, y)$ is the original image, and $\hat{f}(x, y)$ is an estimated image.

(i) Signal-to-noise ratio (SNR):

$$SNR = 10 \log_{10}\!\left(\frac{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x, y)^2}{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \left[f(x, y) - \hat{f}(x, y)\right]^2}\right) \ \mathrm{dB} \tag{20}$$

(ii) Root mean square error (RMSE):

$$RMSE = \sqrt{\frac{1}{MN} \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \left[f(x, y) - \hat{f}(x, y)\right]^2} \tag{21}$$

The RMSE measures the squared error between two images; it treats image degradation as a perceived variation in information.

(iii) Peak signal-to-noise ratio (PSNR):

$$PSNR = 20 \log_{10}\!\left(\frac{f_{\max}}{RMSE}\right) \ \mathrm{dB} \tag{22}$$

where $f_{\max}$ is the peak pixel value. The PSNR is one of the most widely used metrics for evaluating the quality of an estimated image: the higher the PSNR value, the higher the quality of the estimated output.

(iv) Normalized cross-correlation (NCC):

$$NCC = \frac{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x, y)\, \hat{f}(x, y)}{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x, y)^2} \tag{23}$$

The NCC is another common performance metric that is useful for comparing estimation results across different source images.
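Assuming f and f_hat are arrays of the same shape, the four metrics of Eqs. (20)-(23) can be computed as in the following sketch (the peak value used for the PSNR is an assumption; 255 corresponds to 8-bit images).

```python
import numpy as np

def snr(f, f_hat):
    """Eq. (20): 10 log10( sum f^2 / sum (f - f_hat)^2 ) in dB."""
    return 10 * np.log10(np.sum(f ** 2) / np.sum((f - f_hat) ** 2))

def rmse(f, f_hat):
    """Eq. (21): root mean square error."""
    return np.sqrt(np.mean((f - f_hat) ** 2))

def psnr(f, f_hat, peak=255.0):
    """Eq. (22): 20 log10(peak / RMSE) in dB; peak value assumed to be 255."""
    return 20 * np.log10(peak / rmse(f, f_hat))

def ncc(f, f_hat):
    """Eq. (23): normalized cross-correlation."""
    return np.sum(f * f_hat) / np.sum(f ** 2)

f = np.random.rand(64, 64) * 255
f_hat = f + np.random.normal(scale=5.0, size=f.shape)
print(snr(f, f_hat), rmse(f, f_hat), psnr(f, f_hat), ncc(f, f_hat))
```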
5 Experiment result and discussion

This section presents a computer simulation to evaluate the performance of the proposed approach on mixed images. In all experiments, the test images were extracted from a standard image database. We assume that the mixed images are corrupted by additive white Gaussian noise (AWGN) with zero mean and unit variance. To illustrate the visual aspect of the various mixed images, Fig. 9 shows one noisy mixture at each of several noise levels.

Fig. 8 Original images; from left, Cameraman and Baboon
Fig. 9 Mixing results at several noise levels. a 4 dB. b −5 dB. c −10 dB. d −15 dB

We conducted experiments on the images of Fig. 8 and obtained better image separation results with the proposed method than with the other separation methods. Owing to the space restrictions of the paper, we summarize the detailed experimental results for the Cameraman and Baboon images. The test images are created by a convolutional mixing process using a set of mixing matrices generated randomly by MATLAB, with normally distributed random entries. Figure 9 shows the result of the mixing process at different noise levels, and Fig. 10 shows the separation results obtained with the proposed method and with the various other methods.

Fig. 10 Estimated results of the separated images at noise level 4 dB. a Proposed method (FRT with pyramid). b FRT without pyramid. c DWT with pyramid. d Time domain with pyramid. e DWT without pyramid. f Time domain without pyramid. g DCT with pyramid. h DST with pyramid. i DCT without pyramid. j DST without pyramid

The numerical results of these experiments at a noise level of 4 dB are given in Tables 1, 2, 3, and 4. These tables compare the image quality achieved by the separation algorithms and reveal the superiority of the proposed algorithm at this noise level. We use SNR, PSNR, RMSE, and NCC to evaluate the estimation quality of the separated images.

Table 1 SNR (dB) of overall separation performance on image mixtures

Algorithm | Cameraman | Baboon
Proposed method (FRT with pyramid) | 7.2776 | 10.0862
FRT without pyramid | 6.0783 | 8.2862
DWT with pyramid | 2.3037 | 1.8058
DWT without pyramid | −0.8739 | −1.1538
Time domain with pyramid | 2.9164 | 1.7148
Time domain without pyramid | 1.1403 | −1.8537
DCT with pyramid | 0.9739 | −0.0567
DCT without pyramid | 0.7575 | −0.1537
DST with pyramid | −0.2865 | 0.7814
DST without pyramid | −0.1981 | 0.7665

Table 2 PSNR (dB) of overall separation performance on image mixtures

Algorithm | Cameraman | Baboon
Proposed method (FRT with pyramid) | 12.8952 | 15.4340
FRT without pyramid | 11.6959 | 13.6340
DWT with pyramid | 7.7732 | 7.1535
DWT without pyramid | 7.9213 | 4.5781
Time domain with pyramid | 7.7579 | 7.6330
Time domain without pyramid | 7.5340 | 7.1717
DCT with pyramid | 6.5915 | 5.1941
DCT without pyramid | 6.3751 | 4.77
DST with pyramid | 5.3311 | 6.1143
DST without pyramid | 5.8157 | 5.1292

Table 3 RMSE of overall separation performance on image mixtures

Algorithm | Cameraman | Baboon
Proposed method (FRT with pyramid) | 0.2266 | 0.1692
FRT without pyramid | 0.2601 | 0.2081
DWT with pyramid | 0.4086 | 0.9618
DWT without pyramid | 0.5133 | 0.4389
Time domain with pyramid | 0.4094 | 0.8057
Time domain without pyramid | 0.2676 | 0.4379
DCT with pyramid | 0.4681 | 0.0701
DCT without pyramid | 0.4800 | 0.5499
DST with pyramid | 0.5615 | 0.2186
DST without pyramid | 0.5119 | 0.4946

Table 4 NCC of overall separation performance on image mixtures

Algorithm | Cameraman | Baboon
Proposed method (FRT with pyramid) | 0.9548 | 0.7777
FRT without pyramid | 0.9234 | 0.7510
DWT with pyramid | 0.4001 | 0.3191
DWT without pyramid | −0.2206 | −0.1886
Time domain with pyramid | 0.4991 | 0.1902
Time domain without pyramid | 0.2192 | −0.4878
DCT with pyramid | 0.1109 | −0.0611
DCT without pyramid | −0.0842 | 0.0141
DST with pyramid | −0.0896 | 1.1536
DST without pyramid | 0.0264 | 0.0848

As illustrated by Fig. 10 and Tables 1, 2, 3, and 4, the proposed approach separates the images better than the conventional separation methods based on the time domain, the wavelet transform, and the trigonometric transforms. The images of Fig. 10a, separated by the proposed FRT-with-pyramid method, have better quality than the images of Fig. 10b–j separated by the other methods.

From Tables 1, 2, 3, and 4, it can be observed that the proposed RT with pyramid operation produces higher quality and efficiency than all the other approaches tested in our experiments. Tables 1 and 2 show that the SNR and PSNR of the images separated by the proposed method are higher than those of all the other methods. Table 3 indicates that the RMSE of the images separated by the pyramid operation with FRT is the lowest among all methods considered. In addition, as illustrated in Table 4, the NCC of the proposed technique is closer to 1 than that of any other method; an NCC of 1 is the best possible result. The experimental results of Tables 1, 2, 3, and 4 also show that the separation quality of the time domain-based method is relatively low. This result follows from the fact that the sources must satisfy statistical independence for FastICA-based methods to achieve high-quality separation.

Fig. 11 Output SNR vs. input SNR for the Cameraman image (overall separation performance)
Fig. 12 Output SNR vs. input SNR for the Baboon image (overall separation performance)
Fig. 13 Output PSNR vs. input SNR for the Cameraman image (overall separation performance)
Fig. 14 Output PSNR vs. input SNR for the Baboon image (overall separation performance)
Fig. 15 Output RMSE vs. input SNR for the Cameraman image (overall separation performance)
Fig. 16 Output RMSE vs. input SNR for the Baboon image (overall separation performance)
Fig. 17 Output NCC vs. input SNR for the Cameraman image (overall separation performance)
Fig. 18 Output NCC vs. input SNR for the Baboon image (overall separation performance)

Figures 11, 12, 13, 14, 15, 16, 17, and 18 plot an extensive set of simulation results measured over a wide range of noise levels. These results compare the separation quality of the tested images using the evaluation metrics SNR, PSNR, RMSE, and NCC. Figures 11 and 12 present the output SNR of the separated images for Cameraman and Baboon, respectively; they compare the proposed BSS method with the other methods at different input noise levels and show that the proposed method provides the highest performance. For example, the proposed FRT-with-pyramid method achieves an SNR higher than 7 dB for the Baboon image, whereas the DCT-without-pyramid method gives an SNR as low as 0.5 dB. Figures 13 and 14 illustrate the PSNR of the separated images: the proposed method obtains PSNR values of 15 dB or higher, whereas the other methods provide much poorer PSNR values in the range of 4~14 dB. Figures 15 and 16 show the RMSE results, where the lowest curve indicates the best result; the proposed method produces an RMSE of 0.25 or lower, which is 0.2~0.6 lower than the other methods. Figures 17 and 18 illustrate the NCC results: the proposed method produces an NCC value very close to 1, while the other methods provide much lower NCC values, in the range of about −0.1~0.4.
6 Conclusions

This paper addressed the blind image separation problem by introducing a new separation technique based on a novel combination of pyramid processing and the ridgelet transform. The proposed approach first uses FRT-domain coefficients to obtain the frequency components. It then applies pyramid processing to estimate the mixing matrix by constructing the different scale levels, which extract more detailed information and remove redundant information. We conducted an extended set of simulation experiments using image separation methods that employ the proposed FRT as well as other methods based on the DWT, the time domain, the DCT, and the DST, each with and without the pyramid operation. The experimental results demonstrate that the proposed method outperforms all the other methods tested. In summary, it achieves PSNR values of 12~16 dB over a wide range of noise conditions, while all the other methods provide much poorer PSNR values of 4~10 dB under the same conditions. The proposed method therefore appears to be an efficient approach for separating mixed images even under noisy conditions.

Abbreviations
BSS: Blind source separation; DCT: Discrete cosine transform; DST: Discrete sine transform; DWT: Discrete wavelet transform; EEG: Electroencephalography; FastICA: Fast independent component analysis; fMRI: Functional magnetic resonance imaging; FRAT: Finite Radon transform; FRT: Finite ridgelet transform; ICA: Independent component analysis; ITW: Idiopathic toe walking; NCC: Normalized cross-correlation; PR: Perfect reconstruction; PSNR: Peak signal-to-noise ratio; RAT: Radon transform; RMSE: Root mean square error; RT: Ridgelet transform; SNR: Signal-to-noise ratio; WT: Wavelet transform

Funding
This work was supported by an IITP grant through the Korean Government (development of wide area driving environment awareness and cooperative driving technology based on V2X wireless communication, grant R7117-19-0164), and by the Center for Integrated Smart Sensors funded by the Ministry of Science, ICT and Future Planning of the Korean Government as a Global Frontier Project (CISS-2016).

Authors' contributions
MYA and HWK designed the proposed algorithm together. MYA implemented it with MATLAB. Both authors wrote and approved the final manuscript. The corresponding author is HWK ([email protected]).

Competing interests
The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details
Department of Electronic Engineering, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju City, South Korea. Engineering Department, Nuclear Research Center, Atomic Energy Authority, Cairo City, Egypt.

Received: 12 December 2017. Accepted: 14 May 2018
References
1. A Cichocki, S Amari, Adaptive blind signal and image processing: learning algorithms and applications (Wiley, New York, 2005)
2. YW Wei, Y Wang, Dynamic blind source separation based on source-direction prediction. Neurocomputing 185, 73–81 (2016)
3. S Ali, NA Khan, M Haneef, et al., Blind source separation schemes for mono-sensor and multi-sensor systems with application to signal detection. Circuits Syst. Signal Process. 36(11), 4615–4636 (2017)
4. LT Duarte, JMT Romano, C Jutten, KY Chumbimuni-Torres, LT Kubota, Application of blind source separation methods to ion-selective electrode arrays in flow-injection analysis. IEEE Sensors J. 14(7), 2228–2229 (2014)
5. XL Li, T Adali, Independent component analysis by entropy bound minimization. IEEE Trans. Signal Process. 58(10), 5151–5164 (2010)
6. T Adali, VD Calhoun, Complex ICA of brain imaging data. IEEE Signal Process. Mag. 24(5), 136–139 (2007)
7. R Chai, GR Naik, TN Nguyen, SH Ling, Y Tran, A Craig, HT Nguyen, Driver fatigue classification with independent component by entropy rate bound minimization analysis in an EEG-based system. IEEE J. Biomed. Health Inform. 21(3), 715–724 (2016)
8. G Pendharkar, GR Naik, HT Nguyen, Using blind source separation on accelerometry data to analyze and distinguish the toe walking gait from normal gait in ITW children. Biomed. Signal Process. Control 13, 41–49 (2014)
9. A Hyvärinen, Survey on independent component analysis. Neural Computing Surveys 2, 94–128 (1999)
10. A Hyvärinen, E Oja, Independent component analysis: algorithms and applications. Neural Netw. 13(4–5), 411–430 (2000)
11. JF Cardoso, B Laheld, Equivariant adaptive source separation. IEEE Trans. Signal Process. 44, 3017–3030 (1996)
12. E Oja, A Hyvärinen, J Karhunen, Independent component analysis (Wiley, 2001)
13. X He, F He, A He, Super-Gaussian BSS using fast-ICA with Chebyshev–Pade approximant. Circuits Syst. Signal Process. 37(1), 305–341 (2018)
14. MSC Almeida, LB Almeida, Wavelet-based separation of nonlinear show-through and bleed-through image mixtures. Neurocomputing 72(1–3), 57–70 (2008)
15. MT Ozgen, EE Kuruoglu, D Herranz, Astrophysical image separation by blind time-frequency source separation methods. Digit. Signal Process. 360–369 (2009)
16. JD Xu, D Hu, HH Xing, A new blind image source separation algorithm based on feedback sparse component analysis. Signal Process. 93, 288–296 (2013)
17. S Belaid, J Hattay, W Naanaa, et al., A new multi-scale framework for convolutive blind source separation. Signal Image Video Process. 10, 1203 (2016)
18. S Kim, CD Yoo, Underdetermined blind source separation based on subspace representation. IEEE Trans. Signal Process. 57(7), 2604–2614 (2009)
19. N Besic, G Vasile, J Chanussot, S Stankovic, Polarimetric incoherent target decomposition by means of independent component analysis. IEEE Trans. Geosci. Remote Sens. 53(3), 1236–1247 (2015)
20. C Hu, Z Xu, Y Liu, L Mei, L Chen, X Luo, Semantic link network based model for organizing multimedia big data. IEEE Trans. Emerg. Top. Comput. 2(3), 376–387 (2014)
21. XC Yu, JD Xu, D Hu, HH Xing, A new blind image source separation algorithm based on feedback sparse component analysis. Signal Process. 93(1), 288–296 (2013)
22. Y Zhang, D Yang, R Qi, Z Gong, Blind image separation based on reorganization of block DCT. Multimedia Tools and Applications (2016)
23. PJ Burt, EH Adelson, The Laplacian pyramid as a compact image code. IEEE Trans. Commun. COM-31(4), 532–540 (1983)
24. L Xiao, C Li, Z Wu, T Wang, An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering. Neurocomputing 195, 56–64 (2016)
25. EJ Candes, Ridgelets: theory and applications. Ph.D. thesis, Department of Statistics, Stanford University (1998)
26. EJ Candes, DL Donoho, Curvelets. Tech. report, Department of Statistics, Stanford University (1999)
27. EJ Candes, DL Donoho, Curvelets: a surprisingly effective nonadaptive representation for objects with edges. Tech. report, Department of Statistics, Stanford University (2000)
28. J-L Starck, EJ Candès, DL Donoho, The curvelet transform for image denoising. IEEE Trans. Image Process. 11(6), 670–684 (2002)
29. Q Huang, B Hao, S Chang, Adaptive digital ridgelet transform and its application in image denoising. Digital Signal Processing 52, 45–54 (2016)
30. EJ Candes, DL Donoho, Ridgelets: a key to higher dimensional intermittency? Philos. Trans. R. Soc. Lond. A 357, 2459–2509 (1999)
31. JS Walker, A primer on wavelets and their scientific applications (CRC Press, Boca Raton, 1999)
32. KR Rao, P Yip, Discrete cosine transform (Academic Press, New York, 1990)
33. A Hyvärinen, E Oja, A fast fixed-point algorithm for independent component analysis. Neural Comput. 9(7), 1483–1492 (1997)
34. H Hammam, AA Elazm, ME Elhalawany, et al., Blind separation of audio signals using trigonometric transforms and wavelet denoising. Int. J. Speech Technol. 13(1) (2010)
35. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)