Extracting Filaments Based on Morphology Components Analysis from Radio Astronomical Images

Hindawi Advances in Astronomy, Volume 2019, Article ID 2397536, 11 pages. https://doi.org/10.1155/2019/2397536

Research Article

M. Zhu,¹ W. Liu,¹ B. Y. Wang,¹ M. F. Zhang,² W. W. Tian,³ X. C. Yu,¹ T. H. Liang,¹ D. Wu,² D. Hu,¹ and F. Q. Duan¹

¹ College of Information Science and Technology, Beijing Normal University, Beijing, China
² Key Laboratory of Optical Astronomy, National Astronomical Observatory of China, Beijing, China
³ University of Chinese Academy of Sciences, Beijing, China

Correspondence should be addressed to X. C. Yu; yuxianchuan@163.com

Received 17 October 2018; Accepted 3 March 2019; Published 2 June 2019. Guest Editor: Junhui Fan

Copyright © 2019 M. Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. Filaments are a widely occurring type of astronomical structure. Separating filaments from radio astronomical images is a challenge because their radiation is usually weak. Moreover, filaments often mix with bright objects, e.g., stars, which makes them difficult to separate. In order to extract filaments, A. Men'shchikov proposed the method "getfilaments" to find filaments automatically. However, that algorithm removes tiny structures simply by counting the number of connected pixels. Because filaments in radio astronomical images are usually weak, removing tiny structures based on such local information may also remove parts of the filaments. To solve this problem, we apply morphological component analysis (MCA) to each single spatial-scale image and propose a filament extraction algorithm based on MCA. MCA decomposes images using a dictionary whose elements can be wavelet, curvelet, or ridgelet basis functions; different selections of dictionary elements yield different morphological components of the spatial-scale image. Using MCA, we can obtain line structures, Gaussian sources, and other structures in the spatial-scale images and exclude the components that are not related to filaments. Experimental results show that our proposed method based on MCA is effective in extracting filaments from real radio astronomical images, and images processed by our method have a higher peak signal-to-noise ratio (PSNR).

1. Introduction

A substantial part of the interstellar medium exists in the form of a fascinating web of omnipresent filamentary structures [1], called filaments. Astronomical filaments were first discovered in the Milky Way, and with the development of telescopes various filaments have come into sight. Among them, the filaments in star-forming regions are the most fascinating: many magnetohydrodynamic (MHD) simulations have shown that giant molecular clouds (GMCs) primarily evolve into filaments before they collapse to form stars [2, 3], and recent observations confirm these simulations [4, 5]. Since the formation of massive stellar objects is still unclear, further research on filaments is essential. Filaments in Galactic and cosmological fields are also important. Studies have argued that low-mass galaxies acquired their gas through "cold accretion", which is often directed along filaments [6, 7]. Some researchers have paid more attention to the large-scale filaments of the universe [8], which may give clues to better understand the slightly nonuniform cosmic microwave background (CMB) and the birth of the first generation of stars. In addition, filaments have been observed in other objects, such as supernova remnants (SNRs) [9] and protoplanetary disks [10].
The fact that many filaments are fuzzy in images makes it difficult to distinguish them from the background and from surrounding objects. Schneider et al. [11] investigated the spatial and density structure of the Rosette molecular cloud by applying a curvelet analysis, a filament-tracing algorithm (DisPerSE), and probability density functions (PDFs) to Herschel column density maps. Hennebelle et al. [12] presented a method based on adaptive mesh refinement magnetohydrodynamic simulations that treats cooling and self-gravity self-consistently. Tugay [13] proposed a layer smoothing method, which describes the cellular large-scale structure of the universe (LSS) as a grid of clusters with density larger than a limiting value, to detect extragalactic filaments. Men'shchikov [14] proposed a multiscale filament extraction method named getfilaments, which decomposes an astronomical image containing filaments into spatial images at different scales to prevent structures of different spatial scales from influencing one another. getfilaments works well on simulated images and has been used to identify filaments in real astronomical images, e.g., the far-infrared images of the Musca cloud observed with Herschel [15]. However, getfilaments might exclude some tiny structures of filaments in astronomical images, because it removes tiny structures simply by counting the number of connected pixels, and filaments in astronomical images are usually weak.

In this paper, we develop an improved method based on morphological component analysis (MCA) and getfilaments. MCA is able to decompose an image into morphological components based on different features from the perspective of mathematical morphology and is often used in image restoration, separation, and decomposition [16-19]. The basic idea of the MCA decomposition algorithm is to choose two dictionaries, a smooth dictionary and a texture dictionary, to represent the morphological components [20]. Different dictionaries can be designed to represent different sparse components of the image. The smooth dictionary produces the decomposed smooth component, which carries the geometric and piecewise-smooth information of the image, and the texture dictionary produces the decomposed texture component, which carries the marginal and edge information.

The paper is structured as follows. The improved method, named the filament extraction algorithm based on MCA, is described in Section 2. Section 3 is devoted to discussing the experimental results of our method and comparing it with the getfilaments method using data from GALFA-HI of Arecibo.

2. Filament Extraction Algorithm Based on MCA

2.1. MCA Model. MCA was proposed by Starck et al. [17]. It is a decomposition algorithm based on signal sparsity and morphological diversity. MCA assumes that signals are linear combinations of several morphological components and that each morphological component can be sparsely represented on its own dictionary.

We assume that the image x comprises M different morphological components, x = x_1 + x_2 + ... + x_M. We design different dictionaries D_i for the different morphological components x_i and assume that all components mix linearly. The image x, as a one-dimensional vector of length M, can then be represented as

x = D\alpha,   (1)

where the matrix D = [D_1, ..., D_P] \in R^{M \times P} (typically M \ll P) is a dictionary and \alpha \in R^P is the vector of sparse coefficients. The equivalent constrained optimization problem is

\{\alpha_1^{opt}, \alpha_2^{opt}, ..., \alpha_P^{opt}\} = \arg\min_{\{\alpha_1,...,\alpha_P\}} \sum_{k=1}^{P} \|\alpha_k\|_1, \quad subject to \quad x = \sum_{k=1}^{P} D_k \alpha_k.   (2)

However, this model does not take into account factors that may cause the image decomposition to fail, such as noise. When noise exists in the image x, the vector \{\alpha_1^{opt}, \alpha_2^{opt}, ..., \alpha_P^{opt}\} might not be sparse, since noise cannot be sparsely represented. For this kind of noise, we put the noise into the error term to achieve a sparse decomposition of the image x. The constraint in (2) is modified as follows:

\{\alpha_1^{opt}, \alpha_2^{opt}, ..., \alpha_P^{opt}\} = \arg\min_{\{\alpha_1,...,\alpha_P\}} \sum_{k=1}^{P} \|\alpha_k\|_1, \quad subject to \quad \| x - \sum_{k=1}^{P} D_k \alpha_k \|_2 \le \epsilon,   (3)

where ε represents the noise level in the image x.
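The practical consequence of moving noise into the error term in (3) is that transform coefficients below a threshold tied to the noise level ε are simply discarded. The following minimal sketch (NumPy and SciPy assumed) illustrates this soft-thresholding idea with the orthonormal 2-D DCT as a stand-in dictionary; the factor k = 3 anticipates the initialization λ = k·ε used in the decomposition algorithm of Section 2.3, and the function names are ours, not from the authors' code.

```python
import numpy as np
from scipy.fft import dctn, idctn  # orthonormal 2-D DCT as a stand-in dictionary


def soft_threshold(coeffs, delta):
    """Shrink coefficients toward zero; entries with |c| <= delta become 0."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - delta, 0.0)


def denoised_approximation(image, noise_level, k=3.0):
    """Keep only coefficients that rise above k * noise_level (illustrative choice)."""
    alpha = dctn(image, norm="ortho")           # analysis: x -> alpha
    alpha_hat = soft_threshold(alpha, k * noise_level)
    return idctn(alpha_hat, norm="ortho")       # synthesis: alpha -> x


# Example: structure in a noisy image survives thresholding while most noise does not.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + 0.05 * rng.standard_normal((64, 64))
recovered = denoised_approximation(noisy, noise_level=0.05)
```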
2.2. MCA Decomposition Algorithm. In this paper, we focus on decomposing the image into two components: a cartoon layer and a texture layer. The cartoon layer contains cartoon and piecewise-smooth information, and the texture layer may contain other texture information, marginal information, and noise [21, 22]. Studies [23, 24] have shown that noise exists in both the cartoon and texture layers. In other words, the smooth part not only contains the majority of the useful information but also a small part of the noise. If we set the same noise-variance threshold for the whole image, rather than calculating a threshold for each part of the image, some useful information might be removed. We therefore introduce the MCA decomposition algorithm to decompose an image into a smooth (cartoon) layer and a texture layer.

We assume that the matrix D_t is the dictionary of the texture layer and that D_c is the dictionary of the cartoon layer. A solution for the decomposition can be obtained by relaxing the constraint in (3) to an approximate one:

\{\alpha_t^{opt}, \alpha_c^{opt}\} = \arg\min_{\{\alpha_t, \alpha_c\}} \|\alpha_t\|_1 + \|\alpha_c\|_1 + \lambda \| x - D_t \alpha_t - D_c \alpha_c \|_2^2,   (4)

where λ is a Lagrange multiplier. Define x_t = D_t α_t and x_c = D_c α_c. Given x_t, we can recover α_t as α_t = D_t^+ x_t, where D_t^+ is the Moore-Penrose pseudoinverse of D_t. In order to obtain a piecewise-smooth component, a TV (Total Variation) penalty [25] is added to fit the smooth layer; TV damps ringing artifacts near edges and oscillations. Putting these back into (4), we obtain

\{x_t^{opt}, x_c^{opt}\} = \arg\min_{\{x_t, x_c\}} \|D_t^{+} x_t\|_1 + \|D_c^{+} x_c\|_1 + \lambda \| x - x_t - x_c \|_2^2 + \gamma\, TV(x_c),   (5)

where γ is the TV regularization parameter; in a multiscale method, a lower γ is able to remove the artefacts caused by the curvelet. TV(x_c) is a measure of the amount of oscillation in the cartoon layer, and penalizing it pushes the cartoon layer closer to a piecewise-smooth image. However, TV suffers from the so-called staircase effect, which degrades the quality of the reconstructed image. Adaptive TV [26] and higher-order derivatives [27] are solutions that reduce the staircase effect.

We then discuss the choice of dictionaries for the cartoon and texture layers. Appropriate dictionaries are very important for sparse representations of the image. Generally, the choice of dictionaries depends on experience: the more closely the structures in a dictionary match the image, the easier it is to form a sparse representation. Commonly used MCA dictionaries include the wavelet transform, ridgelet transform, curvelet transform, and discrete cosine transform (DCT). The two dictionaries used in this paper are described as follows.

First, we choose the curvelet as the dictionary for the cartoon layer. The curvelet transform, based on the multiscale ridgelet transform, was proposed by Candès & Donoho [28]. It first decomposes the image into a set of wavelet bands and then analyzes each band with the ridgelet transform at different scale levels. The curvelet transform performs well at detecting anisotropic structures, smooth curves, and edges of different lengths [29].

We next choose the local DCT as the dictionary for the texture layer. The DCT is a variant of the discrete Fourier transform (DFT). It uses a symmetric signal extension to replace complex analysis with real arithmetic and is appropriate for the sparse representation of the texture and periodic parts of the image.
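To make the roles of λ and γ in (5) concrete, the sketch below (NumPy assumed) evaluates the penalized objective for a candidate cartoon/texture split. TV is approximated here by the ℓ1 norm of forward differences, and the two analysis operators are passed in as callables standing in for the curvelet and local-DCT dictionaries; names such as analyze_c and analyze_t are ours and purely illustrative.

```python
import numpy as np


def total_variation(img):
    """Anisotropic TV: sum of absolute forward differences along both axes."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()


def mca_objective(x, x_c, x_t, analyze_c, analyze_t, lam, gamma):
    """Value of the penalty in Eq. (5) for a candidate split x ~ x_c + x_t."""
    sparsity = np.abs(analyze_c(x_c)).sum() + np.abs(analyze_t(x_t)).sum()
    fidelity = lam * np.sum((x - x_c - x_t) ** 2)
    smoothness = gamma * total_variation(x_c)
    return sparsity + fidelity + smoothness
```

Lowering gamma relaxes the piecewise-smoothness demand on the cartoon layer, which is why the text above notes that a lower γ also removes curvelet artefacts in a multiscale setting.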
2.3. Filament Extraction Algorithm Based on MCA. The shape of the filaments can be obtained by applying the above filament extraction algorithm to radio astronomical images. We finally use the watershed algorithm to highlight the filaments.

The watershed algorithm [30, 31] is based on mathematical morphology. It segments an image into nonoverlapping regions and produces a one-pixel-wide, continuous boundary for the purpose of extracting and identifying a specific area [32, 33]. A grayscale image can be viewed as a topographic surface: a high grayscale value denotes a peak or hill, while a low grayscale value denotes a valley. Each local minimum and its affected region are called a catchment basin, and the boundaries of the catchment basins form the watershed. By filling each isolated valley (local minimum) with differently colored water (labels), the region of influence of each local minimum gradually expands outwards; the adjacent regions then converge, and the boundaries that form the watershed appear.

The whole filament extraction algorithm (https://github.com/MiWBY/MCA) can be roughly divided into four steps (as shown in Figure 1).

Figure 1: Flow of the filament extraction algorithm. Convolution: convolve the image into layered images. Decomposition: decompose each layered image using MCA; denoise and enhance the cartoon layer and texture layer. Denoise: merge the cartoon layer and texture layer. Combination and extraction: merge the layered images to get the filaments; highlight contours using the watershed algorithm.

(1) Convolution. First, using a Gaussian filter, we convolve the original image into a series of layered images. Different full widths at half maximum (FWHM) can be set for the different image layers:

X_j = G_{j-1} * X - G_j * X   (j = 1, 2, ..., N_s),   (6)

where X is the original image, X_j is the jth subimage after convolution, G_{j-1} and G_j are different Gaussian beams for different spatial components, * is the convolution operation, and N_s is the number of layers.

In this process, structures at different scales in the astronomical image are separated into different layers (subimages), and each layer contains similar scales, which makes the input sources simpler in the later denoising and extraction steps.
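One possible reading of (6) as code is the following sketch (SciPy assumed): each layer is the difference between the image smoothed with two successive Gaussian beams, so layer j carries structure between those two scales. The geometric ladder of beam widths (sigma0, ratio) and the choice of the identity for G_0 are our assumptions for illustration; the paper does not specify how the FWHM sequence is chosen.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def convolve_into_layers(image, n_layers, sigma0=1.0, ratio=1.2):
    """Single-scale decomposition X_j = G_{j-1}*X - G_j*X (Eq. 6).

    sigma0 and ratio define a hypothetical geometric ladder of Gaussian widths;
    G_0 is taken as the identity (no smoothing), so layer 1 holds the finest scales.
    """
    image = image.astype(float)
    layers = []
    previous = image                 # G_0 * X (identity)
    sigma = sigma0
    for _ in range(n_layers):
        smoothed = gaussian_filter(image, sigma)   # G_j * X
        layers.append(previous - smoothed)         # X_j
        previous = smoothed
        sigma *= ratio
    return layers
    # The layers plus the final smoothed background sum back to the original image.
```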
(2) Decomposition. We apply the MCA algorithm to each layered image so that each is decomposed into a cartoon layer and a texture layer. Here we use the curvelet as the dictionary for the cartoon layer and the local DCT as the dictionary for the texture layer, as described in Section 2.2. The cartoon layer contains most of the filaments and the low-frequency noise, and the texture layer contains sources, high-frequency noise, and a small part of the filaments.

Starck et al. [17] proposed the MCA decomposition algorithm based on the BCR (Block Coordinate Relaxation) algorithm. The algorithm is given as follows.

Input: the subimage X_j after convolution, described as the input image x here; the dictionary D_c of the cartoon layer; the dictionary D_t of the texture layer; the number of iterations L_max; and the threshold δ = λ·L_max.

Output: the cartoon layer x_c and the texture layer x_t.

(1) Initialize L_max and λ = k·ε (typically k = 3), where ε is the noise level. Then the threshold is δ = λ·L_max.

(2) For j = 1 : L_max, for k = 1 : P:

(i) Update x_c assuming x_t is fixed:
  (a) Calculate the residual r = x - x_c - x_t.
  (b) Calculate α_c = D_c^+ (x_c + r).
  (c) Soft-threshold the coefficients α_c with the threshold δ to obtain α̂_c.
  (d) Reconstruct x_c by x_c = D_c α̂_c.

(ii) Update x_t in the same way.

Apply the TV correction by x_c = x_c - μγ ∂TV(x_c)/∂x_c, where the step size μ is chosen either by a line search that decreases the overall penalty function or as a fixed step size of moderate value that guarantees convergence.

(3) Update the threshold by δ = δ - λ. If δ > λ, return to step (2); otherwise, finish.
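The listing above can be read as the following sketch (NumPy assumed). The dictionaries are passed in as analysis/synthesis callable pairs, which in the paper's setting would be the curvelet (cartoon) and local-DCT (texture) transforms; the soft thresholding, TV correction step, and linearly decreasing threshold follow the steps above, but function names, the anisotropic-TV subgradient, and the default values of gamma and mu are our simplified stand-ins, not the authors' released code.

```python
import numpy as np


def soft_threshold(coeffs, delta):
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - delta, 0.0)


def tv_subgradient(u):
    """Subgradient of anisotropic TV(u); boundary differences are treated as zero."""
    g = np.zeros_like(u)
    dy = np.sign(np.diff(u, axis=0))   # sign(u[i+1, j] - u[i, j])
    dx = np.sign(np.diff(u, axis=1))   # sign(u[i, j+1] - u[i, j])
    g[1:, :] += dy
    g[:-1, :] -= dy
    g[:, 1:] += dx
    g[:, :-1] -= dx
    return g


def mca_decompose(x, analyze_c, synth_c, analyze_t, synth_t,
                  noise_level, n_iter=30, k=3.0, gamma=0.1, mu=1.0):
    """Alternating (BCR-style) split of x into a cartoon layer x_c and a texture layer x_t."""
    x = x.astype(float)
    x_c = np.zeros_like(x)
    x_t = np.zeros_like(x)
    lam = k * noise_level
    delta = lam * n_iter                  # initial threshold, lowered every pass
    for _ in range(n_iter):
        # (i) update the cartoon layer with the texture layer held fixed
        residual = x - x_c - x_t
        x_c = synth_c(soft_threshold(analyze_c(x_c + residual), delta))
        # (ii) update the texture layer in the same way
        residual = x - x_c - x_t
        x_t = synth_t(soft_threshold(analyze_t(x_t + residual), delta))
        # TV correction pushes the cartoon layer toward a piecewise-smooth image
        x_c = x_c - mu * gamma * tv_subgradient(x_c)
        delta = max(delta - lam, lam)     # linearly decreasing threshold
    return x_c, x_t


# Illustrative instantiation: the orthonormal DCT is used for both layers only to
# demonstrate the mechanics; the paper pairs a curvelet dictionary with a local DCT.
from scipy.fft import dctn, idctn
dct = lambda a: dctn(a, norm="ortho")
idct = lambda a: idctn(a, norm="ortho")
```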
(3) Denoise. First, we denoise each layer using the iterative cleaning algorithm proposed by Men'shchikov et al. [34]. The cleaning algorithm employs a global intensity threshold for single-scale images, as the larger-scale background has been effectively filtered out by the spatial decomposition. This iterative algorithm automatically finds a cut-off level that separates the signal of important sources from the noise and background at each scale. Next, we enhance the details of both the cartoon layer and the texture layer.

(4) Combination and Extraction. To get the extracted filaments, we first merge the cartoon layer and the texture layer for each layered image. Because filaments are irregular and their structures exist in both the cartoon and texture layers, it is not appropriate to use just one component to represent them; for example, if we used only the cartoon layer, the filaments might lose some texture. We therefore merge the cartoon and texture layers to represent the filaments better. Next, the layered subimages are added together to produce the filaments. Finally, we apply the watershed algorithm to highlight the contours of the filaments.

By applying MCA to decompose a real image, new features (components) can be obtained. This leads to better image separability. Furthermore, the smooth components have a better signal-to-noise ratio than the original image.
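The iterative cleaning step belongs to Men'shchikov et al. [34] and its details are not reproduced in this paper. As a rough stand-in, the sketch below (NumPy assumed) estimates the noise of one single-scale image by iterative sigma clipping and then zeroes pixels below a cut-off a few standard deviations above the mean. It only illustrates the idea of an automatically found global cut-off per scale; the function names and the 3-sigma choice are our assumptions, not the actual getsources/getfilaments implementation.

```python
import numpy as np


def iterative_cutoff(layer, n_sigma=3.0, max_iter=20, tol=1e-3):
    """Estimate a global cut-off for one single-scale image by sigma clipping."""
    data = layer.ravel().astype(float)
    mean, std = data.mean(), data.std()
    for _ in range(max_iter):
        clipped = data[np.abs(data - mean) < n_sigma * std]
        new_mean, new_std = clipped.mean(), clipped.std()
        converged = abs(new_std - std) < tol * std
        mean, std = new_mean, new_std
        if converged:
            break
    return mean + n_sigma * std


def clean_layer(layer, n_sigma=3.0):
    """Suppress pixels below the automatically found cut-off level."""
    cutoff = iterative_cutoff(layer, n_sigma)
    cleaned = layer.copy()
    cleaned[cleaned < cutoff] = 0.0
    return cleaned
```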
3. Extraction Results

3.1. Results for a Simulated Image. Before applying our method to real radio astronomical images, we simulated an image composed of a straight filament with a FWHM of 37″, a string of sources with a FWHM of 24″, a simple background with a FWHM of 4000″, and moderate noise (noise level = 1.05) to test the improved algorithm (Figure 2(a)). The simulation method is the same as that of Men'shchikov et al. [14]. The simulated image contains only one spatial component, while our method assumes there are many spatial components, as in real astronomical images; in other words, even if there is only one spatial component, our method still treats the image as containing many components.

We first extract filaments using the MCA method without convolution and denoising (Figure 2). In Figure 2(c), the texture layer (especially the area marked by the red box) still contains part of the filament structure, which means that using only the cartoon layer to represent filaments is insufficient, so it is necessary to combine the cartoon and texture layers. However, noise and sources also exist in the texture layer; if the two layers are combined directly, the reconstructed filaments contain noise and sources (Figure 2(d)), so denoising is necessary before combination.

Figure 2: Results for the simulated image obtained using MCA without convolution and denoising. (a) Original simulated image. (b) Cartoon layer. (c) Texture layer. (d) Reconstructed filaments without denoising. (e) Residuals.

Next, the extraction results obtained using our improved method are shown in Figure 3. Compared to Figure 2(d), the reconstructed filaments in Figure 3(e) contain less noise. The edge of the filament is unrealistic as a result of the decomposition.

Figure 3: Extraction results for the simulated image obtained using our method. (a) Original simulated image. (b) The 40th subimage after convolution. (c) Cartoon layer of the 40th subimage after decomposition and denoising. (d) Texture layer of the 40th subimage after decomposition and denoising. (e) Reconstructed filaments. (f) Residuals.

3.2. Results for Astronomical Images

3.2.1. Decomposition and Denoising Results. For real radio astronomical images, we compare the extraction results of our method with those of the getfilaments method. We employ data from GALFA-HI of Arecibo as example images. The equatorial coordinates of the objects are (12.00h, +10.35°), and the object name is 'GALFA-HI RA+DEC Tile 004.00+02.35'. The data cube contains 2048 images at different velocities (with respect to the local standard of rest). Here we select the 715th of the 2048 original images as the experimental image (Figure 4). In the experimental images, filaments appear as significantly elongated structures. After convolution of the 715th image, we obtain 99 layers (subimages) at different scales (Figure 5) and select the 40th subimage as a comparison example (Figure 5(b)). In order to display the image properly and improve the visual contrast between getfilaments and our method, we color the image according to the intensity (unit: MJy/sr).

Figure 4: The 715th original image from GALFA-HI. (a) Original image used for the experiments. (b) Colored image for better visual contrast.

Figure 5: Images at different scales after convolution of the 715th image; the 40th subimage is chosen as the comparison example. (a) The 1st subimage. (b) The 40th subimage. (c) The 60th subimage. (d) The 80th subimage.

First, we apply the MCA algorithm to the image layers before applying the iterative cleaning algorithm. As described in Section 2.2, we choose the curvelet as the cartoon-layer dictionary and the local DCT as the texture-layer dictionary for MCA. We decompose each layered image into a cartoon layer and a texture layer (Figure 6). The cartoon layer contains the smooth parts of the image and retains most of the low-frequency information of the filament in the layered image. The frequency content of the texture layer is higher: it contains edge information that is difficult to distinguish visually in the layered image, and it also contains part of the filament. The texture layer shows some artefacts that might be caused by the DCT; these artefacts can be removed after denoising.

Figure 6: Decomposition results of the 40th subimage obtained using MCA. (a) The cartoon layer. (b) The texture layer.

We then set different reasonable thresholds for the cartoon layer and the texture layer, respectively, and apply the iterative cleaning algorithm to each layer to remove noise (Figure 7). Compared with Figure 6(b), Figure 7(b) contains almost no artefacts.

Figure 7: Denoising results for the cartoon layer and texture layer obtained using the iterative cleaning algorithm. (a) Denoising result for the cartoon layer of the 40th subimage. (b) Denoising result for the texture layer of the 40th subimage.

Next, the cartoon layer and texture layer are fused according to an intensity ratio (e.g., the information of the texture layer is expanded by a factor of 5). Small structures can then be retained while interference information is removed. The processing results after fusion are shown in Figure 8(c). To allow comparison with our method, we also apply the iterative cleaning algorithm directly to the 40th subimage; the results are shown in Figure 8(b).

As seen in Figure 8(b), most of the noise in the 40th subimage is removed by cleaning. However, the getfilaments method sets only one noise threshold, and values below that threshold are cleared. This might clear weak parts of the filament; as shown in the red box of Figure 8(b), the weak part of the filament is removed entirely. Figure 8(c) is the denoised image after MCA decomposition and fusion, and the structures of the filament are more complete than those in Figure 8(b). Setting different noise-variance thresholds for the two parts avoids the removal of useful information, especially in the area marked by the red box. Our method not only removes noise from the cartoon and texture layers but also strengthens details during image synthesis. Applying MCA to extract filaments therefore retains much of the structural information of the filament.

Figure 8: Denoising results for the 40th subimage. (a) The original 40th subimage. (b) The 40th subimage processed with the iterative cleaning algorithm directly. (c) The denoised 40th subimage after MCA decomposition and fusion.
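A sketch of the combination step under the choices quoted above: the texture layer is weighted by the illustrative factor of 5, the fused subimages are summed across scales, and the contours are then highlighted with a watershed segmentation. scikit-image's watershed and boundary utilities are used here as convenient stand-ins for the watershed step described in Section 2.3; the function names and the use of the inverted map are our assumptions.

```python
import numpy as np
from skimage.segmentation import watershed, find_boundaries


def combine_layers(cartoon_layers, texture_layers, texture_weight=5.0):
    """Fuse cartoon and texture per scale, then sum the scales into one filament map."""
    merged = [c + texture_weight * t for c, t in zip(cartoon_layers, texture_layers)]
    return np.sum(merged, axis=0)


def highlight_contours(filament_map):
    """Watershed on the inverted map (filament crests become basins); boundaries returned."""
    labels = watershed(-filament_map)            # markers default to local minima
    return find_boundaries(labels, mode="outer")
```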
This might clear weak information of the la fi ment. As shown in the red box of Figure 8(b), the weak part of the lfi ament is directly removed. Figure 8(c) is .. Peak Signal-to-Noise Ratio. We compare the peak signal- the denoised image after MCA decomposition. Structures of to-noise ratio (PSNR) of images processed by the getfilaments 8 Advances in Astronomy 02468 10 12 (a) 02468 10 12 (b) 02468 10 12 (c) Figure 8: Denoising results of the 40th subimage. (a) eTh original 40th s ubimage.(b)The40thsubimageobtainedusing theiterativecleaning algorithm directly. (c) eTh denoised 40th subimage aeft r MCA decomposition and fusion. Table 1: PSNR comparison of images processed using different algorithm with that of our method. The PSNR is the objective methods. criterion most widely used to evaluate image quality. An image has less noise when the PSNR is higher. The PSNR is Images processed by different methods Noise intensity defined as follows: Original Getfilaments Our method 0.1 18.2545 18.3443 18.5836 (2 −1) 0.15 15.4864 16.5203 16.8379 (7) 𝑅𝑁 = 10 ∗ log . 0.2 14.9552 15.3325 15.7445 0.3 13.2314 13.7914 13.8200 where𝐸 is the mean square error (i.e., difference) between 0.5 11.2925 11.3277 11.8972 theoriginal imageand the imageaeft r noise is superimposed and n is 8 since pixels are represented using 8 bits per sample. For different intensities of salt-and-pepper noise, we our improved method is always higher than that of getfila- analyze the PSNRs of images processed by different methods. ments method, which means that the images processed by Table 1 shows that the PSNR of the images processed using MCA have less noise. 𝑀𝑆 𝑀𝑆 𝑃𝑆 Advances in Astronomy 9 (a) 0 2 4 6 8 1012141618 (b) (c) -15 -12.5 -10.5 -7.5 -5 -2.5 0 2.5 5 7.5 (d) Figure 9: Extraction results using the getfilaments algorithm. (a) Extracted filament. Part of information is removed, including noises and structures. (b) Extracted filament with colors for contrast. Structures of the filament are cleaned in three marked places. (c) Contour extraction of the filament. (d) Residuals aeft r the subtraction of the extracted filament from the original input image. Data Availability Disclosure The data suffixed with tfi s used to support the findings of this F. Q. Duan present address is College of Information Science studywere suppliedbyNational Astronomical Observatory and Technology, Beijing Normal University, Beijing, China. of China under license and so cannot be made freely avail- The authors presented this work in 2016 Astronomical Data able. Analysis Systems and Software Conference. 10 Advances in Astronomy (a) 02468 10 12 (b) (c) 0.1 0.2 0.3 0.4 0.5 0.6 (d) Figure 10: Extraction results obtained using our method. (a) Un-colored extracted filament. Noises are removed, and filament’s structures are retained. (b) Colored extracted filament. Compared to the filament in Figure 9(b), filament’s structures are more complete, especially in three marked places. (c) Contour extraction. (d) Residuals aeft r the subtraction of the extracted filament from the original image. Conflicts of Interest Acknowledgments The authors declare that they have no conflicts of inter- The study is supported by the National Natural Science est. Foundation of China (no. 41272359), Ministry of Land and Advances in Astronomy 11 Resources for the Public Welfare Industry Research Projects [17] J.L. Starck,M.Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational (201511079-02), and Ph.D. 
Data Availability. The data (files with suffix .fits) used to support the findings of this study were supplied by the National Astronomical Observatory of China under license and so cannot be made freely available.

Disclosure. F. Q. Duan's present address is College of Information Science and Technology, Beijing Normal University, Beijing, China. The authors presented this work at the 2016 Astronomical Data Analysis Software and Systems Conference.

Conflicts of Interest. The authors declare that they have no conflicts of interest.

Acknowledgments. The study is supported by the National Natural Science Foundation of China (no. 41272359), the Ministry of Land and Resources Public Welfare Industry Research Project (201511079-02), and the Ph.D. Programs Foundation of the Ministry of Education of China (20120003110032).

References

[1] A. Men'shchikov, P. André, P. Didelon et al., "Filamentary structures and compact objects in the Aquila and Polaris clouds observed by Herschel," Astronomy & Astrophysics, vol. 518, article L103, 7 pages, 2010.
[2] G. C. Gómez and E. Vázquez-Semadeni, "Filaments in simulations of molecular cloud formation," The Astrophysical Journal, vol. 791, no. 2, 2014.
[3] P. Padoan and Å. Nordlund, "The stellar initial mass function from turbulent fragmentation," The Astrophysical Journal, vol. 576, no. 2, pp. 870–879, 2002.
[4] P. André, A. Men'shchikov, S. Bontemps et al., "From filamentary clouds to prestellar cores to the stellar IMF: initial highlights from the Herschel Gould Belt survey," Astronomy & Astrophysics, vol. 518, article L102, 7 pages, 2010.
[5] S. Molinari, B. Swinyard, J. Bally et al., "Clouds, filaments, and protostars: the Herschel Hi-GAL Milky Way," Astronomy & Astrophysics, vol. 518, article L100, 5 pages, 2010.
[6] D. Kereš, N. Katz, D. H. Weinberg, and R. Davé, "How do galaxies get their gas?" Monthly Notices of the Royal Astronomical Society, vol. 363, no. 1, pp. 2–28, 2005.
[7] D. Kereš, N. Katz, M. Fardal, R. Davé, and D. H. Weinberg, "Galaxies in a simulated ΛCDM universe – I. Cold mode and hot cores," Monthly Notices of the Royal Astronomical Society, vol. 395, no. 1, pp. 160–179, 2009.
[8] S. F. Shandarin and Y. B. Zeldovich, "The large-scale structure of the universe: turbulence, intermittency, structures in a self-gravitating medium," Reviews of Modern Physics, vol. 61, no. 2, pp. 185–220, 1989.
[9] C. F. McKee and L. L. Cowie, "The interaction between the blast wave of a supernova remnant and interstellar clouds," The Astrophysical Journal, vol. 195, pp. 715–725, 1975.
[10] S. Casassus, G. van der Plas, M. S. Perez et al., "Flows of gas through a protoplanetary gap," Nature, vol. 493, pp. 191–194, 2013.
[11] N. Schneider, T. Csengeri, M. Hennemann et al., "Cluster-formation in the Rosette molecular cloud at the junctions of filaments (Corrigendum)," Astronomy & Astrophysics, vol. 551, article C1, 1 page, 2013.
[12] P. Hennebelle, R. Banerjee, E. Vázquez-Semadeni, R. S. Klessen, and E. Audit, "From the warm magnetized atomic medium to molecular clouds," Astronomy & Astrophysics, vol. 486, no. 3, pp. L43–L46, 2008.
[13] A. V. Tugay, "Extragalactic filament detection with a layer smoothing method," 2014, https://arxiv.org/abs/1410.2971.
[14] A. Men'shchikov, "A multi-scale filament extraction method: getfilaments," Astronomy & Astrophysics, vol. 560, article A63, 15 pages, 2013.
[15] N. L. J. Cox, D. Arzoumanian, Ph. André et al., "Filamentary structure and magnetic field orientation in Musca," Astronomy & Astrophysics, vol. 590, article A110, 8 pages, 2016.
[16] J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, "Sparsity and morphological diversity in blind source separation," IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2662–2674, 2007.
[17] J.-L. Starck, M. Elad, and D. L. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005.
[18] S. Velasco-Forero and J. Angulo, "Classification of hyperspectral images by tensor modeling and additive morphological decomposition," Pattern Recognition, vol. 46, no. 2, pp. 566–577, 2013.
[19] L. Yan, X. Xiaohua, and L. Jian-Huang, "Face hallucination based on morphological component analysis," Signal Processing, vol. 93, no. 2, pp. 445–458, 2013.
[20] Z. Xue, J. Li, L. Cheng, and P. Du, "Spectral–spatial classification of hyperspectral data via morphological component analysis-based image separation," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 1, pp. 70–84, 2015.
[21] D. Szolgay and T. Sziranyi, "Adaptive image decomposition into cartoon and texture parts optimized by the orthogonality criterion," IEEE Transactions on Image Processing, pp. 3405–3415, 2012.
[22] A. Buades, T. Le, J.-M. Morel, and L. Vese, "Fast cartoon + texture image filters," IEEE Transactions on Image Processing, vol. 19, no. 8, 2010.
[23] C. Yu, Q. Qiu, Y. Zhao, and X. Chen, "Satellite image classification using morphological component analysis of texture and cartoon layers," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1109–1113, 2013.
[24] X. Xu, J. Li, X. Huang, M. Dalla Mura, and A. Plaza, "Multiple morphological component analysis based decomposition for remote sensing image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 3083–3102, 2016.
[25] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
[26] L. L. Jiang, H. Q. Yin, and X. C. Feng, "Adaptive variational models for image decomposition combining staircase reduction and texture extraction," Journal of Systems Engineering and Electronics, vol. 20, no. 2, pp. 254–259, 2009.
[27] T. F. Chan, S. Esedoglu, and F. E. Park, "Image decomposition combining staircase reduction and texture extraction," Journal of Visual Communication and Image Representation, vol. 18, no. 6, pp. 464–486, 2007.
[28] E. J. Candès and D. L. Donoho, "Curvelets, multiresolution representation, and scaling laws," 2000.
[29] J.-L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
[30] L. Vincent and P. Soille, "Morphological segmentation of binary patterns," Pattern Recognition Letters, 1991.
[31] L. Najman and M. Schmitt, "Geodesic saliency of watershed contours and hierarchical segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1163–1173, 1996.
[32] F. Meyer, "Watersheds on weighted graphs," Pattern Recognition Letters, vol. 47, pp. 72–79, 2014.
[33] F. Malmberg and C. L. Luengo Hendriks, "An efficient algorithm for exact evaluation of stochastic watersheds," Pattern Recognition Letters, vol. 47, pp. 80–84, 2014.
[34] A. Men'shchikov, P. André, P. Didelon et al., "A multi-scale, multi-wavelength source extraction method: getsources," Astronomy & Astrophysics, vol. 542, article A81, 31 pages, 2012.

Hindawi Advances in Astronomy Volume 2019, Article ID 2397536, 11 pages https://doi.org/10.1155/2019/2397536 Research Article Extracting Filaments Based on Morphology Components Analysis from Radio Astronomical Images 1 1 1 2 3 1 1 M. Zhu, W. Liu, B. Y. Wang, M. F. Zhang, W. W. Tian, X. C. Yu , T. H. Liang, 2 1 1 D. Wu, D. Hu, and F.Q.Duan College of Information Science and Technology, Beijing Normal University, Beijing, China Key Laboratory of Optical Astronomy, National Astronomical Observatory of China, Beijing, China e University of Chinese Academy of Sciences, Beijing, China Correspondence should be addressed to X. C. Yu; yuxianchuan@163.com Received 17 October 2018; Accepted 3 March 2019; Published 2 June 2019 Guest Editor: Junhui Fan Copyright © 2019 M. Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Filaments are a type of wide-existing astronomical structure. It is a challenge to separate filaments from radio astronomical images, because their radiation is usually weak. What is more, filaments oen ft mix with bright objects, e.g., stars, which makes it difficult to separate them. In order to extract filaments, A. Men’shchikov proposed a method “getfilaments” to find filaments automatically. However, the algorithm removed tiny structures by counting connected pixels number simply. Removing tiny structures based on local information might remove some part of the filaments because filaments in radio astronomical image are usually weak. In order to solve this problem, we applied morphology components analysis (MCA) to process each singe spatial scale image and proposed a filaments extraction algorithm based on MCA. MCA uses a dictionary whose elements can be wavelet translation function, curvelet translation function, or ridgelet translation function to decompose images. Different selection of elements in the dictionary can get different morphology components of the spatial scale image. By using MCA, we can get line structure, gauss sources, and other structures in spatial scale images and exclude the components that are not related to filaments. Experimental results showed that our proposed method based on MCA is eeff ctive in extracting filaments from real radio astronomical images, and images processed by our method have higher peak signal-to-noise ratio (PSNR). 1. Introduction Some researchers paid more attention to the large-scale lfi aments of theuniverse[8], which may giveclues to better A substantial part of interstellar medium exists in the form understand the slightly nonuniform cosmic microwave back- of a fascinating web of omnipresent filamentary structures ground (CMB) and the birth of the rfi st generation of stars. [1], called la fi ments. The astronomical filament is rs fi t dis- In addition, la fi ments have been observed in other objects, covered in the Milky Way. Along with the development of such as supernova remnants (SNR) [9] and protoplanetary telescopes, various filaments come into sight. Among them disk [10]. the la fi ments in star-forming regions are the most fascinat- The fact that many filaments are fuzzy in images causes ing, many magnetohydrodynamic (MHD) simulations have difficulty to distinguish them from background and sur- shown that giant molecular clouds (GMCs) primarily evolve rounding objects. Schneider et al. 
[11] investigated spatial and into filaments before they collapse to form stars [2, 3]. Recent density structure of the Rosette molecular cloud, by applying observations also conrfi m these simulations [4, 5]. Since the a curvelet analysis, a filament-tracing algorithm (DisPerSE), formation of massive stellar objects is still unclear, further and probability density functions (PDFs) on Herschel col- research on lfi aments is essential. Filaments in Galactic umn density maps. Hennebelle et al. [12] showed a method and cosmological efi lds are also important. Studies have based on adaptive mesh refinement magneto hydrodynamic argued that low mass galaxies got their gas through “cold simulations, which treat self-consistently cooling and self- accretion”, which is oen ft directed along la fi ments [6, 7]. gravity. Tugay [13] proposed a layer smoothing method, 2 Advances in Astronomy which described cellular large-scale structure of the universe The equivalent constrained optimization problem is as fol- (LSS) as a grid of clusters with density larger than a limited lows: value, to detect extragalactic la fi ments. Men’shchikov [14] opt opt opt {𝛼 , 𝛼 ,..., 𝛼 } proposed a multi-scale filaments extraction method named 1 2 P getfilaments, which decomposed a simulated astronomical image containing la fi ments into spatial images at different 󵄩 󵄩 󵄩 󵄩 = arg min∑ 󵄩 𝛼 󵄩 , 󵄩 󵄩 1 scales to prevent interaction influence of different spatial (2) {𝛼 ,...,𝛼 } 1 P k=1 scale structures. The getfilaments works well in simulated images and has been used to identify filaments for real subject to : x = ∑D 𝛼 . astronomical images, e.g., the far-infrared images of Musca k k k=1 cloud observed with Herschel [15]. However, getfilaments might exclude some tiny structure of la fi ments in astronomy However, this model does not take into account factors images, because it removes tiny structures just by counting that may lead to the failure of the image decomposition, connected pixels number, and filaments in astronomy images such as noise. When noise exists in the image x,the vector are usually weak. opt opt opt {𝛼 , 𝛼 ,..., 𝛼 } might be not sparse since noise cannot be 1 2 P In this paper, we develop an improved method based on sparsely represented. For this kind of noise, we put the noise morphology components analysis (MCA) and getfilaments. in the error item to achieve the sparse decomposition of the MCA is ableto decomposetheimage into morphological image x. The constraint in (2) is modified as follows: components based on different features from the perspective of mathematical morphology and is often used in image opt opt opt {𝛼 , 𝛼 ,..., 𝛼 } 1 2 P restoration, separation, and decomposition [16–19]. The basic idea of MCA decomposition algorithm is to choose two 󵄩 󵄩 󵄩 󵄩 dictionaries: smooth dictionary and texture dictionary, to = arg min∑ 󵄩 𝛼 󵄩 , 󵄩 󵄩 1 (3) {𝛼 ,...,𝛼 } represent morphology components [20]. We can design dif- 1 P k=1 ferent dictionaries to represent different sparse components 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 in the image. Smooth dictionary produces the decomposed 󵄩 󵄩 subject to : 󵄩 x − ∑D 𝛼 󵄩 ≤𝜖. k k 󵄩 󵄩 󵄩 󵄩 smooth component which carries the geometric and piece- 󵄩 󵄩 k=1 󵄩 󵄩 wise smooth information of the image, and texture dictionary produces the decomposed texture component which carries where 𝜖 represents the noise level in the image x. the marginal and edge information. The paperisstructured asfollows. Theimproved method .. MCA Decompostion Algorithm. 
In this paper, we focus named la fi ment extraction algorithm based on MCA is on the image decomposition into two components: cartoon described in Section 2. Section 3 is devoted to discussing layer and texture layer. Cartoon layer contains cartoon experimental results of our method and comparing our and piecewise smooth information, and texture layer may method with the getfilaments method by employing data contain other texture information, marginal information, from GALFA-HI of Arecibo. and noises [21, 22]. Studies [23, 24] have shown that noises exist in both cartoon and texture layer. In other words, the smooth part not only contains the majority of the useful 2. Filament Extraction Algorithm information, but also contains a small part of the noise. If Based on MCA we set the same threshold of noise variance for the whole image, rather than calculating the threshold for each part .. MCA Model. MCA was proposed by Starck et al. [17]. of the image, some useful information might be removed. MCA is a kind of decomposition algorithm based on signal We therefore introduce the MCA decomposition algorithm sparsity and morphological diversity. MCA assumes that to process an image into smooth (cartoon) layer and texture signals are linear combinations of several morphological layer. components, and each morphological component can be We assume that matrix D is the dictionary matrix of sparsely represented on its own dictionary. the texture layer and that D is the dictionary matrix of We assume that image x comprises 𝑀 different morpho- the cartoon layer. A solution for the decomposition could logical components: x = x +x +⋅⋅⋅+x . We design different 1 2 M be obtained by relaxing the constraint in (3) to become an dictionaries D for dieff rent morphological components x i i approximate one: and assume all components mix together linearly. The image opt opt 󵄩 󵄩 󵄩 󵄩 x as an one-dimensional vector of length M can then be 󵄩 󵄩 󵄩 󵄩 {𝛼 , 𝛼 }= arg min 󵄩 𝛼 󵄩 + 󵄩 𝛼 󵄩 t c 󵄩 t󵄩 1 󵄩 c󵄩 1 represented as follows: {𝛼 ,𝛼 } t c (4) 󵄩 󵄩 󵄩 󵄩 +𝜆 󵄩 x − D 𝛼 − D 𝛼 󵄩 , t t c c x = D𝛼 , (1) 󵄩 󵄩 2 where 𝜆 is a Lagrange operators. Den fi e x = D 𝛼 and x = t t t c M×P + + where the matrix D =[𝐷 ,...,𝐷 ]∈ 𝑅 (typically, M ≪ D 𝛼 .Given x ,we can recover 𝛼 as 𝛼 = D x ,where D 1 P c c t t t t t t P) is a dictionary. 𝛼 ∈𝑅 is the vector of sparse coefficients. is the Moore-Penrose pseudoinverse of D .In order to get t Advances in Astronomy 3 Convolution Convolve the image into layered images Decomposition Decompose each layered image using MCA Denoise and enhance cartoon layer and texture layer Denoise Merge cartoon layer and texture layer Combination and Merge layered images to get filaments extraction Highlight contours using the watershed algorithm Figure 1: Flow of the filament extraction algorithm. piecewise smooth component, add a TV (Total Variation) complex analysis with real numbers and is appropriate for the penalty [25] to tfi the smooth layer. TV is used to damp sparse representation of the texture and periodic part of the ringing artifacts near edges and oscillating. Put these back image. into (4), and, thus, we obtain the following: .. Filament Extraction Algorithm Based on MCA. The shape opt opt 󵄩 + 󵄩 󵄩 + 󵄩 󵄩 󵄩 󵄩 󵄩 {x , x }= arg min 󵄩 D x 󵄩 + 󵄩 D x 󵄩 of the la fi ments can be obtained by applying the above t t c c 󵄩 t 󵄩 1 󵄩 c 󵄩 1 {x,x} t c lfi ament extraction algorithm to radio astronomical images. (5) We finally use the watershed algorithm to highlight filaments. 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 +𝜆 󵄩 x − x − x 󵄩 +𝛾 󵄩 𝑇𝑉 ( x )󵄩 . 
t c c 󵄩 󵄩 2 󵄩 󵄩 The watershed algorithm [30, 31] is based on mathe- matical morphology. The watershed algorithm segments an 𝛾 is the TV regularization parameter, in multiscale method, image into nonoverlapping regions and gets a pixel width alower 𝛾 is able to remove the artefacts caused by curvelet. and continuous boundary for the purpose of extracting and 𝑇𝑉( x ) is a measure of the amount of oscillations in the identifying a specific area [32, 33]. A grayscale image can be cartoon layer. Penalizing with TV, the cartoon layer is closer viewed as a topographic surface. A high grayscale value of a to the piecewise smooth image. However, TV suffers from the pixel denotes a peak or hill while a low grayscale denotes a so-called staircase effect that impacts the quality of images valley. Each local minimum of pixels and the aec ff ted region reconstruction. The adaptive TV [26] and the higher order are called a catchment basin, and the boundary of catchment derivative [27] are solutions to reduce the staircase effect. basins forms the watershed. By filling each isolated valley Then we discuss the choice of dictionaries for the car- (local minimum) with differently colored water (labels), toon and the texture layer. Appropriate dictionaries are the region of influence of each local minimum gradually very important for sparse representations over the image. expands outwards. The adjacent regions then converge, and Generally, the choice of dictionaries depends on experiences. the boundaries that form the watershed appear. The structure in the dictionary is more matched with the The whole la fi ment extraction algorithm (https://github image easier to form a sparse representation. The commonly .com/MiWBY/MCA) can be roughly divided into four steps used dictionary of MCA includes wavelet transform, ridgelet (as shown in Figure 1). transform, curvelet transform, discrete cosine transform (DCT), and so on. The two dictionaries used in this paper () Convolution. First, using a Gaussian filter, we convolve the are described as follows. original images into a series of layered images. Different full First, we choose curvelet as the dictionary for cartoon widths at half maximum can be set for dieff rent image layers: layer. The curvelet transform based on the multiscale ridgelet transform was proposed by Cand & Donoho [28]. It rfi st 𝑋 =𝐺 ∗𝑋−𝐺 ∗ 𝑋 (𝑗 = 1,2,...,𝑁 ). (6) j j−1 j s decomposes the image into a set of wavelet bands and then analyzes each band with the ridgelet transform at different where 𝑋 is the original image, 𝑋 is the jth subimage after scale levels. The curvelet transform performs well at the convolution, 𝐺 and 𝐺 are dieff rent Gaussian beams for j−1 j detection of anisotropic structures, smooth curves, and edges dieff rent spatial components, ∗ is the convolution operation, of dieff rent lengths [29]. and 𝑁 is the number of the layers. We next choose the local DCT as the dictionary for In this process, structures at different scales in the texture layer. DCT is a variant of the discrete Fourier trans- astronomical images can be separated into different layers form (DFT). It uses a symmetric signal extension to replace (subimages), and each layer contains similar scales, which 4 Advances in Astronomy make the input sources become simpler in the later denoising represent the filaments. For example, if we just use cartoon and extraction process. layer to represent lfi aments, lfi aments may lose some texture. u Th s, we merge the cartoon layer and textual layer to () Decompostion. 
We apply the MCA algorithm to each represent la fi ments better. Next, the layered subimages are layered image so that each layered image is decomposed added together to produce the la fi ments. Finally, we apply the into a cartoon layer and a texture layer. Here we use the watershed algorithm to highlight the contours of filaments. curvelet as the dictionary for the cartoon layer and local By applying MCA to decompose a real image, new DCT as the dictionary for the texture layer as described in features (components) can be obtained. This leads to better Section 2.2. The cartoon layer contains most of lfi aments and image separability. Furthermore, the smooth components low-frequency noise, and the texture layer contains sources, have a better signal-to-noise ratio than the original image. high-frequency noise, and small part of filaments. Starck et al. [17] proposed the MCA decomposition 3. Extraction Results algorithm based on BCR algorithm (Block Coordinate Relax- ation). The algorithm is given as follows. Input: The subimage .. Results for a Simulated Image. Before applying our 𝑋 aer ft convolution, which is described as the input image method to real radio astronomical images, we simulated an x here, dictionary D of the cartoon layer, dictionary D of c t image that is composed of a straight filament with 37 size of the texture layer, number of iterations 𝐿 , and the threshold max FWHM, a string of sources with 24 size of FWHM, a simple 𝛿= 𝜆 ⋅𝐿 . max background with 4000 size of FWHM, and a moderate-level Output: Cartoon layer x and texture layer x . c t noise with noise level=1.05 to test the improved algorithm (Figure 2(a)). The simulation method is the same as that (1) Initialize 𝐿 ,and𝜆= 𝑘∗𝜖 ( typically, k =3),where max mentioned in Men’shchikov et al. [14]. In the simulated 𝜖 is the value of noise level. Then the threshold 𝛿= image, there is only one spatial component, while our method 𝜆⋅𝐿 . max assumes there are many spatial components, which is similar (2) For 𝑗=1:𝐿 max to real astronomical images. In other words, even if there is For 𝑘=1:𝑃 only one spatial component, our method will also treat it as many components. (i) Update x assuming x is fixed: c t We first extract lfi aments using MCA method without convolution and denoising (Figure 2). In Figure 2(c), texture (a) Caculate the residual r = x − x − x . c t layer (especially in the area marked by red box) still contains (b) Calculate 𝛼 = D (x + r). c c part of lfi aments structures, which means just using cartoon (c) Soft thresholding the coecffi ient 𝛼 with the layer to represent filaments is insufficient. So it is necessary 𝛿 threshold and obtain 𝛼̂ . to combine cartoon layer and texture layer. However, noises (d) Reconstruct x by x = D 𝛼̂ . c c c c and sources also exist in texture layer. If the two layers (ii) Update x with the above method t are combined directly, the reconstructed filaments contains noises and sources (Figure 2(d)), so denoising is necessary Apply the TV correction by x = x −𝜇𝛾(𝑇𝑉( x )/x ), c c c c before combination. where 𝜇 is the minimum parameter, and is chosen Next, extracted results obtained using our improved by a line-decreasing the overall penalty function, or method are shown in Figure 3. Compared to Figure 2(d), the as a xfi ed step-size of moderate value that guarantees reconstructed la fi ments in Figure 3(e) contain less noise. The convergence. edge of the la fi ment is unreal as the result of decomposition. (3) Update the threshold by 𝛾=𝛾−𝜆 . If𝛾>𝜆 ,return step 2.Else, nfi ish. .. 
Results for Astronomical Images ... Decomposition and Denosing Results. Aiming at real () Denoise. First, we denoise each layer using the iterative radio astronomical images, we compare the extraction results cleaning algorithm proposed by Men’shchikov et al. [34]. of our method with those of the getfilaments method. The cleaning algorithm employs a global intensity threshold We employ data from GALFA-HI of Arecibo as example for single-scale images, as the larger-scale background has images. The equatorial coordinates of the objects are (12.00h, been effectively filtered out by the spatial decomposition. +10.35 ), and the object name is ’GALFA-HI RA+DEC Tile This iterative algorithm automatically finds a cut-off level that 004.00+02.35’. The data cube contains 2048 images at dieff r- separates the signal of important sources from the noise and ent velocity (with respect to the local standard stationary sys- background at each scale. Next, we enhance details for both tem). Here we select the 715th image from the 2048 original the cartoon layer and texture layer. images as the experimental image (Figure 4). In the exper- () Combination and Extraction. To get the extracted fila- imental images, filaments describe significantly elongated ments, we first merge the cartoon layer and textual layer structures. After convolution of the 715th image, we obtain for each layered image. Because la fi ments are irregular, and 99 layers (subimages) at different scales (Figure 5) and select structures of filaments exist in both cartoon layer and texture the 40th subimage as a comparison example (Figure 5(b)). layer, it is not appropriate to use just one component to In order to display the image properly and improve visual 󸀠󸀠 󸀠󸀠 󸀠󸀠 Advances in Astronomy 5 0 60 180 0 80 140 -10 10 70 0 60 180 -0.8 01 (a) (b) (c) (d) (e) Figure 2: Results of the simulated images obtained using MCA without convolution and denoising. (a) Original simulated image. (b) Cartoon layer. (c) Texture layer. (d) Reconstructed filaments without denoising. (e) Residuals. 0 60 180 0 60 180 0 80 140 -5 -5 50 03 60 0 10 160 (a) (b) (c) (d) (e) (f) Figure 3: Extraction results for the simulated image obtained usi ng our method. (a) Original simulated image. (b) eTh 40th subimage aer ft convolution. (c) Cartoon layer of the 40th subimage aer ft decomposition and denoising. (d) Texture layer of 40th subimage aeft r decomposition and denoising. (e) Reconstructed filaments. (f) Residuals. contrast between getfilaments and our method, we mark the layer dictionary and LDCT as the texture layer dictionary for image with different colors according to the intensity (Unit: MCA. We decompose each layered image to get the cartoon MJy/sr) in the image. layer and texture layer (Figure 6). The cartoon layer contains First, we apply the MCA algorithm to process image smooth parts of the image and retains most of the low- layers before applying the iterative cleaning algorithm. As frequency information of the la fi ment in the layered image. described in Section 2.2, we choose curvelet as the cartoon The frequency of the texture layer is higher. The texture 6 Advances in Astronomy (a) 0 2468 10 12 (b) Figure 4: eTh 715th original image from GALFA-HI. (a) Original image for experiments. (b) Colored image for better visual contrast. 0 2468 10 12 0 2 4 6 8 10 12 (a) (b) 0 2468 10 12 0 2 4 6 8 10 12 (c) (d) Figure 5: Images at dier ff ent scales aer ft convolution of the 715th image. Choose the 40th subimage as comparison example. (a) eTh 1st subimage. 
3. Extraction Results

3.1. Results for a Simulated Image. Before applying our method to real radio astronomical images, we simulated an image composed of a straight filament with an FWHM of 37″, a string of sources with an FWHM of 24″, a simple background with an FWHM of 4000″, and moderate noise with a noise level of 1.05 to test the improved algorithm (Figure 2(a)). The simulation method is the same as that mentioned in Men'shchikov et al. [14]. In the simulated image there is only one spatial component, while our method assumes there are many spatial components, which is similar to real astronomical images. In other words, even if there is only one spatial component, our method will still treat it as many components.

We first extract filaments using the MCA method without convolution and denoising (Figure 2). In Figure 2(c), the texture layer (especially in the area marked by the red box) still contains part of the filament structures, which means that using only the cartoon layer to represent filaments is insufficient. So it is necessary to combine the cartoon layer and the texture layer. However, noise and sources also exist in the texture layer. If the two layers are combined directly, the reconstructed filaments contain noise and sources (Figure 2(d)), so denoising is necessary before combination.

Next, the extraction results obtained using our improved method are shown in Figure 3. Compared to Figure 2(d), the reconstructed filaments in Figure 3(e) contain less noise. The edge of the filament is unreal as a result of the decomposition.

Figure 2: Results of the simulated image obtained using MCA without convolution and denoising. (a) Original simulated image. (b) Cartoon layer. (c) Texture layer. (d) Reconstructed filaments without denoising. (e) Residuals.

Figure 3: Extraction results for the simulated image obtained using our method. (a) Original simulated image. (b) The 40th subimage after convolution. (c) Cartoon layer of the 40th subimage after decomposition and denoising. (d) Texture layer of the 40th subimage after decomposition and denoising. (e) Reconstructed filaments. (f) Residuals.
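For readers who want a comparable test case, a toy version of such a composite image can be generated as below. This is only a sketch of the composition described above (filament + sources + background + noise), not the simulation of Men'shchikov et al. [14]; the image size, pixel scale, amplitudes, and the helper name make_test_image are arbitrary assumptions.

```python
import numpy as np

def gaussian_fwhm_to_sigma(fwhm):
    """Convert a Gaussian FWHM to the corresponding standard deviation."""
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def make_test_image(size=512, pix=2.0, noise_level=1.05, seed=0):
    """Toy test image: straight filament (37" FWHM profile), a string of
    sources (24" FWHM), a broad background (4000" FWHM), plus noise.
    `pix` is an assumed pixel scale in arcsec per pixel."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:size, :size].astype(float)
    cy = cx = size / 2.0

    # Straight horizontal filament: Gaussian profile across the y direction.
    sig_fil = gaussian_fwhm_to_sigma(37.0 / pix)
    filament = 1.0 * np.exp(-((y - cy) ** 2) / (2.0 * sig_fil ** 2))

    # String of point-like sources along the filament crest.
    sig_src = gaussian_fwhm_to_sigma(24.0 / pix)
    sources = np.zeros_like(filament)
    for xc in np.linspace(0.1 * size, 0.9 * size, 8):
        sources += 3.0 * np.exp(-(((x - xc) ** 2 + (y - cy) ** 2) / (2.0 * sig_src ** 2)))

    # Very broad, smooth background component.
    sig_bg = gaussian_fwhm_to_sigma(4000.0 / pix)
    background = 0.5 * np.exp(-(((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sig_bg ** 2)))

    noise = noise_level * 0.1 * rng.standard_normal((size, size))
    return filament + sources + background + noise
```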
3.2. Results for Astronomical Images

3.2.1. Decomposition and Denoising Results. Aiming at real radio astronomical images, we compare the extraction results of our method with those of the getfilaments method. We employ data from GALFA-HI of Arecibo as example images. The equatorial coordinates of the object are (12.00h, +10.35°), and the object name is 'GALFA-HI RA+DEC Tile 004.00+02.35'. The data cube contains 2048 images at different velocities (with respect to the local standard of rest). Here we select the 715th image from the 2048 original images as the experimental image (Figure 4). In the experimental images, filaments appear as significantly elongated structures. After convolution of the 715th image, we obtain 99 layers (subimages) at different scales (Figure 5) and select the 40th subimage as a comparison example (Figure 5(b)). In order to display the image properly and to improve the visual contrast between getfilaments and our method, we mark the image with different colors according to the intensity (unit: MJy/sr) in the image.

Figure 4: The 715th original image from GALFA-HI. (a) Original image for experiments. (b) Colored image for better visual contrast.

Figure 5: Images at different scales after convolution of the 715th image; the 40th subimage is chosen as the comparison example. (a) The 1st subimage. (b) The 40th subimage. (c) The 60th subimage. (d) The 80th subimage.

First, we apply the MCA algorithm to process the image layers before applying the iterative cleaning algorithm. As described in Section 2.2, we choose the curvelet as the cartoon layer dictionary and the LDCT as the texture layer dictionary for MCA. We decompose each layered image to get the cartoon layer and the texture layer (Figure 6). The cartoon layer contains the smooth parts of the image and retains most of the low-frequency information of the filament in the layered image. The frequency of the texture layer is higher. The texture layer contains more edge information that is difficult to distinguish visually in the layered image, and it is also part of the filament. The texture layer shows some artefacts that might be caused by the DCT; these artefacts can be removed after denoising.

Figure 6: Decomposition results of the 40th subimage obtained using MCA. (a) The cartoon layer obtained using the MCA algorithm. (b) The texture layer obtained using the MCA algorithm.

We then set different reasonable thresholds for the cartoon layer and the texture layer, respectively, and apply the iterative cleaning algorithm to each of them to remove noise (as shown in Figure 7). Compared to Figure 6(b), Figure 7(b) contains almost no artefacts. Next, the cartoon layer and the texture layer are fused according to the intensity ratio (e.g., the information of the texture layer is expanded by a factor of 5). Small structures can then be retained and interference information is removed. Processing results after fusion are shown in Figure 8(c). To allow comparison with our method, we also use the iterative cleaning algorithm directly to process the 40th subimage; the results are shown in Figure 8(b).

Figure 7: Denoising results for the cartoon layer and texture layer obtained using the iterative cleaning algorithm. (a) Denoising results for the cartoon layer of the 40th subimage. (b) Denoising results for the texture layer of the 40th subimage.

As seen in Figure 8(b), most of the noise in the 40th subimage is removed by cleaning. However, the getfilaments method sets only one noise threshold, and values below this threshold are cleared. This might clear weak information of the filament; as shown in the red box of Figure 8(b), the weak part of the filament is directly removed. Figure 8(c) is the denoised image after MCA decomposition. Structures of the filament are more complete than those in panel (b). Setting a different noise-variance threshold for each of the two parts can avoid the removal of useful information, especially in the area marked by the red box. Our method not only removes noise from the cartoon layer and the texture layer but also strengthens details in the image synthesis process. Applying MCA to extract filaments can retain much of the structural information of the filament.

Figure 8: Denoising results of the 40th subimage. (a) The original 40th subimage. (b) The 40th subimage obtained using the iterative cleaning algorithm directly. (c) The denoised 40th subimage after MCA decomposition and fusion.

3.2.2. Extraction Results. Finally, the layered subimages are added together to produce the filament. Figure 9 shows the extraction results obtained using the getfilaments algorithm. Figure 9(a) is the filament extracted by getfilaments. Compared to the input filament in Figure 3(a), most of the noise is cleaned, but structures of the filament are also removed (marked in red boxes in Figures 9(b) and 9(c)). The extraction results obtained using our method are shown in Figure 10. Compared with the getfilaments method, our method obtains a more complete filament, excludes noise, and retains more structural information, especially in the three areas marked by red boxes (Figures 10(b) and 10(c)).

Figure 9: Extraction results using the getfilaments algorithm. (a) Extracted filament; part of the information is removed, including noise and structures. (b) Extracted filament with colors for contrast; structures of the filament are cleaned in three marked places. (c) Contour extraction of the filament. (d) Residuals after the subtraction of the extracted filament from the original input image.

Figure 10: Extraction results obtained using our method. (a) Uncolored extracted filament; noise is removed, and the filament's structures are retained. (b) Colored extracted filament; compared to the filament in Figure 9(b), the filament's structures are more complete, especially in three marked places. (c) Contour extraction. (d) Residuals after the subtraction of the extracted filament from the original image.

3.3. Peak Signal-to-Noise Ratio. We compare the peak signal-to-noise ratio (PSNR) of images processed by the getfilaments algorithm with that of our method. The PSNR is the objective criterion most widely used to evaluate image quality; an image has less noise when the PSNR is higher. The PSNR is defined as follows:

PSNR = 10 · log₁₀((2ⁿ − 1)² / MSE),    (7)

where MSE is the mean square error (i.e., the difference) between the original image and the image after noise is superimposed, and n is 8 since pixels are represented using 8 bits per sample.

For different intensities of salt-and-pepper noise, we analyze the PSNRs of images processed by different methods. Table 1 shows that the PSNR of the images processed using our improved method is always higher than that of the getfilaments method, which means that the images processed by MCA have less noise.

Table 1: PSNR comparison of images processed using different methods.

                  PSNR of images processed by different methods
Noise intensity   Original      Getfilaments      Our method
0.1               18.2545       18.3443           18.5836
0.15              15.4864       16.5203           16.8379
0.2               14.9552       15.3325           15.7445
0.3               13.2314       13.7914           13.8200
0.5               11.2925       11.3277           11.8972
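Equation (7) and the salt-and-pepper test can be reproduced with a few lines of NumPy. This is a generic sketch rather than the authors' evaluation code; the function names psnr and add_salt_and_pepper are assumptions.

```python
import numpy as np

def psnr(original, degraded, n_bits=8):
    """Peak signal-to-noise ratio as defined in equation (7)."""
    mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)

def add_salt_and_pepper(image, amount=0.1, seed=0):
    """Corrupt a fraction `amount` of pixels with salt (max value) or pepper (0) noise."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape) < amount
    salt = rng.random(image.shape) < 0.5
    noisy[mask & salt] = image.max()
    noisy[mask & ~salt] = 0
    return noisy
```

Presumably the "Original" column of Table 1 is the PSNR of the noisy image itself, while the other two columns are the PSNRs of the getfilaments and MCA-based outputs at the same noise intensities.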
Data Availability

The data (suffixed with .fits) used to support the findings of this study were supplied by the National Astronomical Observatory of China under license and so cannot be made freely available.

Disclosure

F. Q. Duan's present address is the College of Information Science and Technology, Beijing Normal University, Beijing, China. The authors presented this work at the 2016 Astronomical Data Analysis Systems and Software Conference.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The study is supported by the National Natural Science Foundation of China (no. 41272359), the Ministry of Land and Resources for the Public Welfare Industry Research Projects (201511079-02), and the Ph.D. Programs Foundation of the Ministry of Education of China (20120003110032).

References

[1] A. Men'shchikov, P. André, P. Didelon et al., "Filamentary structures and compact objects in the Aquila and Polaris clouds observed by Herschel," Astronomy and Astrophysics, vol. 518, article L103, 7 pages, 2010.
[2] G. C. Gómez and E. Vázquez-Semadeni, "Filaments in simulations of molecular cloud formation," The Astrophysical Journal, vol. 791, no. 2, article 6298, 2014.
[3] P. Padoan and A. Nordlund, "The stellar initial mass function from turbulent fragmentation," The Astrophysical Journal, vol. 576, no. 2, pp. 870–879, 2002.
[4] P. André, A. Men'shchikov, S. Bontemps et al., "From filamentary clouds to prestellar cores to the stellar IMF: initial highlights from the Herschel Gould Belt survey," Astronomy & Astrophysics, vol. 518, article L102, 7 pages, 2010.
[5] S. Molinari, B. Swinyard, J. Bally et al., "Clouds, filaments, and protostars: the Herschel Hi-GAL Milky Way," Astronomy and Astrophysics, vol. 518, article L100, 5 pages, 2010.
[6] D. Kereš, N. Katz, D. H. Weinberg, and R. Davé, "How do galaxies get their gas?" Monthly Notices of the Royal Astronomical Society, vol. 363, no. 1, pp. 2–28, 2005.
[7] D. Kereš, N. Katz, M. Fardal, R. Davé, and D. H. Weinberg, "Galaxies in a simulated ΛCDM universe - I. Cold mode and hot cores," Monthly Notices of the Royal Astronomical Society, vol. 395, no. 1, pp. 160–179, 2009.
[8] S. F. Shandarin and Y. B. Zeldovich, "The large-scale structure of the universe: turbulence, intermittency, structures in a self-gravitating medium," Reviews of Modern Physics, vol. 61, no. 2, pp. 185–220, 1989.
[9] C. F. McKee and L. L. Cowie, "The interaction between the blast wave of a supernova remnant and interstellar clouds," The Astrophysical Journal, vol. 195, pp. 715–725, 1975.
[10] S. Casassus, G. van der Plas, M. Sebastián Pérez et al., "Flows of gas through a protoplanetary gap," Nature, vol. 493, pp. 191–194, 2013.
[11] N. Schneider, T. Csengeri, M. Hennemann et al., "Cluster-formation in the Rosette molecular cloud at the junctions of filaments (Corrigendum)," Astronomy and Astrophysics, vol. 551, article C1, 1 page, 2013.
[12] P. Hennebelle, R. Banerjee, E. Vázquez-Semadeni, R. S. Klessen, and E. Audit, "From the warm magnetized atomic medium to molecular clouds," Astronomy & Astrophysics, vol. 486, no. 3, pp. L43–L46, 2008.
[13] A. V. Tugay, "Extragalactic filament detection with a layer smoothing method," Physics, vol. 1, 2014, https://arxiv.org/abs/1410.2971.
[14] A. Men'shchikov, "A multi-scale filament extraction method: getfilaments," Astronomy and Astrophysics, vol. 560, article A63, 15 pages, 2013.
[15] N. L. J. Cox, D. Arzoumanian, Ph. André et al., "Filamentary structure and magnetic field orientation in Musca," Astronomy & Astrophysics, vol. 590, article A110, 8 pages, 2016.
[16] J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, "Sparsity and morphological diversity in blind source separation," IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2662–2674, 2007.
[17] J.-L. Starck, M. Elad, and D. L. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005.
[18] S. Velasco-Forero and J. Angulo, "Classification of hyperspectral images by tensor modeling and additive morphological decomposition," Pattern Recognition, vol. 46, no. 2, pp. 566–577, 2013.
[19] L. Yan, X. Xiaohua, and L. Jian-Huang, "Face hallucination based on morphological component analysis," Signal Processing, vol. 93, no. 2, pp. 445–458, 2013.
[20] Z. Xue, J. Li, L. Cheng, and P. Du, "Spectral–spatial classification of hyperspectral data via morphological component analysis-based image separation," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 1, pp. 70–84, 2015.
[21] D. Szolgay and T. Sziranyi, "Adaptive image decomposition into cartoon and texture parts optimized by the orthogonality criterion," IEEE Transactions on Image Processing, pp. 3405–3415, 2012.
[22] A. Buades, T. Le, J.-M. Morel, and L. Vese, "Fast Cartoon + Texture Image Filters," IEEE Transactions on Geoscience and Remote Sensing, vol. 19, no. 8, 2010.
[23] C. Yu, Q. Qiu, Y. Zhao, and X. Chen, "Satellite image classification using morphological component analysis of texture and cartoon layers," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1109–1113, 2013.
[24] X. Xiang, L. Jun, H. Xin, M. Dalla Mura, and P. Antonio, "Multiple morphological component analysis based decomposition for remote sensing image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 3083–3102, 2016.
[25] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
[26] L. L. Jiang, H. Q. Yin, and X. C. Feng, "Adaptive variational models for image decomposition combining staircase reduction and texture extraction," Journal of Systems Engineering and Electronics, vol. 20, no. 2, pp. 254–259, 2009.
[27] T. F. Chan, S. Esedoglu, and F. E. Park, "Image decomposition combining staircase reduction and texture extraction," Journal of Visual Communication and Image Representation, vol. 18, no. 6, pp. 464–486, 2007.
[28] E. J. Candès and D. L. Donoho, "Curvelets, multiresolution representation, and scaling laws," 2000.
[29] J.-L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
[30] L. Vincent and P. Soille, "Morphological segmentation of binary patterns," Pattern Recognition Letters, 1991.
[31] N. Laurent and S. Michel, "Geodesic saliency of watershed contours and hierarchical segmentation," vol. 18, 1163, 1996.
[32] F. Meyer, "Watersheds on weighted graphs," Pattern Recognition Letters, vol. 47, pp. 72–79, 2014.
[33] F. Malmberg and C. L. L. Hendriks, "An efficient algorithm for exact evaluation of stochastic watersheds," Pattern Recognition Letters, vol. 47, pp. 80–84, 2014.
[34] A. Men'shchikov, P. André, P. Didelon et al., "A multi-scale, multi-wavelength source extraction method: getsources," Astronomy and Astrophysics, vol. 542, article A81, 31 pages, 2012.
