A CME Automatic Detection Method Based on Adaptive Background Learning Technology

Hindawi Advances in Astronomy, Volume 2019, Article ID 6582104, 14 pages. https://doi.org/10.1155/2019/6582104

Research Article

Zhenping Qiang (1,2), Xianyong Bai (2), Qinghui Zhang (1), and Hong Lin (1)

(1) College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
(2) CAS Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China

Correspondence should be addressed to Zhenping Qiang; qzplucky@163.com

Received 14 March 2019; Revised 22 August 2019; Accepted 8 October 2019; Published 7 November 2019

Guest Editor: Junhui Fan

Copyright © 2019 Zhenping Qiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we describe a technique that uses an adaptive background learning method to detect CMEs (coronal mass ejections) automatically in SOHO/LASCO C2 image sequences. The method consists of several modules: an adaptive background module, a candidate CME area detection module, and a CME detection module. The core of the method is adaptive background learning, in which a CME is treated as a foreground object moving outward in the running-difference time series. Modeling the corona observation scene with both static and dynamic features describes the complex background more accurately. Moreover, the method can detect subtle changes in the corona sequences while effectively filtering noise. We applied this method to a month of continuous corona images, compared the results with the CDAW, CACTus, SEEDS, and CORIMP catalogs, and found a good detection rate among the automatic methods.
It detected about 73% of the CMEs listed in the CDAW CME catalog, which is compiled by human visual inspection. Currently, the derived parameters are the position angle, angular width, linear velocity, minimum velocity, and maximum velocity of CMEs. Other parameters could easily be added if needed.

1. Introduction

A coronal mass ejection (CME) is a significant release of plasma and accompanying magnetic field from the solar corona. It often follows solar flares and is normally present during a solar prominence eruption. The plasma is released into the solar wind and can be observed in coronagraph imagery [1-3]. CMEs are among the most energetic and important forms of solar activity and are a significant driver of space weather in the near-Earth environment and throughout the heliosphere. When an ejection is directed towards the Earth and reaches it as an interplanetary CME (ICME), it can cause geomagnetic storms that may disrupt Earth's magnetosphere, potentially damage satellites, induce ground currents, and increase the radiation risk for astronauts [4]. Thus, CME detection is an active area of research.

The first observed CME coincided with the first-observed solar flare on 1 September 1859, and CMEs have been studied extensively since they were first reported [5] more than four decades ago. Along with the continuous progress of space observations of the corona, a series of satellites with coronal imaging capability, such as OSO-7, P78-1, Skylab, SMM, and SOHO, were launched; in particular, over the past 24 years, coronal mass ejections have been detected routinely by visual inspection of each image from the Large Angle Spectrometric Coronagraph (LASCO) onboard SOHO [6]. To further understand CMEs, especially their three-dimensional properties, the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) [7] flew aboard NASA's Solar Terrestrial Relations Observatory (STEREO).

Given this huge amount of CME observations, the identification and cataloging of CMEs are important tasks that provide the basic knowledge for further scientific studies. There are two main categories of methods used to detect CMEs. One category is manual detection from LASCO coronagraph images. Currently, there exists a manual catalog, the Coordinated Data Analysis Workshop Data Center (CDAW) catalog [8], which records observed CMEs. This catalog is compiled by observers who look through sequences of LASCO coronagraph images, but this human-based process is tedious and subject to observer bias. The other category, introduced to promote the detection of CMEs, is automatic detection, which detects and characterizes CMEs in coronagraph images.

The Computer Aided CME Tracking software package was the first automatic detection method, introduced in 2004 [9]; it utilizes the Hough transform to identify CMEs. In 2005, Boursier et al. proposed the Automatic Recognition of Transient Events and Marseille Inventory from Synoptic maps (ARTEMIS) [10], which utilizes LASCO C2 synoptic maps and is based on adaptive filtering and segmentation. In [11], Olmedo et al. presented the Solar Eruptive Event Detection System (SEEDS), which uses image segmentation techniques to detect CMEs. In [12], Young and Gallagher described and demonstrated a multiscale edge detection technique that addresses CME detection and tracking and could serve as one part of an automated CME detection system. In 2009, Goussies et al. developed an algorithm based on level set and region competition methods to characterize CME texture; by using the texture information in the region competition motion equations to evolve the curve, segmentation of the leading edge of CMEs is performed on individual frames [13]. In the same year, Byrne et al. adopted a multiscale decomposition to extract the structure of the processed images and used an ellipse parameterization of the front to extract the kinematic (height, velocity, and acceleration) and morphological (width and orientation) changes to detect CMEs [14]. In [15], Gallagher et al. developed an image processing technique that defines the evolution of CMEs by texture and used a supervised segmentation algorithm to isolate a particular region of interest based on its similarity to a prespecified model, in order to automatically track CMEs. In 2012, Zhao-Xian et al. [16] presented a method to detect CMEs by analyzing sudden changes of the frequency spectrum in the coronagraph. In 2014, Bemporad et al. [17] described the onboard CME detection algorithm for the Solar Orbiter-METIS coronagraph; the algorithm is based on running differences between consecutive images to find significant changes and to provide the CME first-detection time. In 2017, Zhang et al. [18] proposed a suspected-CME region detection algorithm using the extreme learning machine (ELM) method, which takes into account grayscale and texture features. In 2018, Patel et al. [19] proposed a CME detection algorithm for the Visible Emission Line Coronagraph on ADITYA-L1, based on intensity thresholding followed by area thresholding in successive difference images spatially rebinned to improve the signal-to-noise ratio. Recently, machine learning has been used in solar physics: Dhuri et al. [20] used machine learning to classify vector magnetic field observations from flaring active regions, Huang et al. [21] applied a deep learning method to flare forecasting, and very recently, Wang et al. [22] proposed an automatic tool for CME detection and tracking based on machine learning techniques.

The automatic CME detection methods mentioned above are mainly based on three kinds of strategies: (i) enhance the coronagraph images, describe the kinematic and morphological features (edge, luminance, shape, etc.) of the processed images, and use these features to determine the occurrence of a CME; (ii) establish CME evolution models from the dynamic evolution characteristics of historical CMEs, extract the same dynamic evolution characteristics from the processed sequences, and compare the extracted characteristics with the models to determine the occurrence of a CME; and (iii) treat CME detection as a supervised classification problem in machine learning.

The coronagraph data can be considered a three- or four-dimensional dataset with two or three spatial dimensions and one temporal dimension. The key to automatic detection is how to distinguish CME regions from the other parts of the image. The methods above do not utilize the time dimension adequately. To make full use of time-domain information, we can apply video processing technology to CME detection: a coronagraph image sequence can be considered a video, and CMEs can be regarded as abnormal events in that video. CME detection can then use video surveillance technology, which includes change detection, background modeling, foreground detection, and object tracking. Furthermore, since the coronagraph image sequence is a dynamic scene and the CME itself is a dynamic process, the detection method must adapt to scene change. Inspired by these ideas, in this paper we attempt to detect CMEs based on adaptive background learning technology. The method consists of three main modules:

(1) Adaptive background module: maintains the background model of the coronagraph image sequence
(2) Candidate CME area detection module: detects the foreground areas of the coronagraph images
(3) CME detection module: identifies CME events based on the candidate areas

The remainder of the paper is organized as follows. In Section 2, we give a specification of the adaptive background module. In Section 3, we formulate the background/foreground classification problem and propose a method of candidate CME area detection. Section 4 describes an algorithm for CME detection based on the candidate CME area detection module. The experimental results and validation on LASCO C2 data are presented in Section 5. The paper is concluded in Section 6.

2. Adaptive Background Module

In a coronagraph image sequence, the background environment always changes; for example, small moving objects such as stars and cosmic rays can make the background change. So the background representation model must be robust and adaptive, and the background module must be continuously updated to represent the change of the scene. To handle strong chaotic interference in the background, several methods have been proposed to adapt to a variety of background situations. Among them, the mixture of Gaussians (MoG) [23] is considered a promising method. In video monitoring, because of the high frame rate, MoG can achieve good results in gradually changing scenes; but for CME detection the interference changes significantly, so a better method is needed to model the dynamic scene. Li et al. proposed a statistical modeling method [24] that uses the co-occurrence of color characteristics of two consecutive frames to model the dynamic scene. This statistical model can represent nonstatic background objects, so it is robust to periodic dynamic background interference. Such statistical modeling is well suited to CME detection, since CMEs are often associated with other forms of solar activity. We apply this method to model the background, employing the color feature to describe the static background and the co-occurrence color features to describe the moving background, and then use a Bayes decision rule to classify background and foreground.

2.1. Formulation of the Classification Rule Based on Bayes. In the method of automated detection of CMEs based on an adaptive background module, each pixel in the coronagraph image is divided into two categories: background pixels and foreground pixels (candidate CME area pixels). Therefore, using the Bayes rule, the feature vector distribution probability of each pixel satisfies

P_s(v) = P_s(v | b) P_s(b) + P_s(v | f) P_s(f),   (1)

where s = (x, y) indicates the pixel position, v is the statistical feature vector, P_s(v | b) is the probability of the feature vector v being observed as background at s, P_s(b) is the prior probability of the pixel s belonging to the background, and P_s(v) is the prior probability of the feature vector v being observed at position s. Similarly, f denotes the foreground (or candidate CME area). By the Bayes decision rule, the pixel can be classified as background if the feature vector satisfies

P_s(b | v) > P_s(f | v).   (2)

Using the Bayesian conditional posterior probability,

P_s(C | v) = P_s(v | C) P_s(C) / P_s(v),   C = b or f,   (3)

and substituting (1) and (3) into (2), it becomes

2 P_s(v | b) P_s(b) > P_s(v),   (4)

that is, if we have obtained the prior probabilities P_s(b) and P_s(v) and the conditional probability P_s(v | b) at the moment t, the pixel s with feature vector v can be classified as background or foreground by formula (4).

2.2. Description of the Feature Vector. In formula (4), the probability functions P_s(v) and P_s(v | b) are all associated with the feature vector v. For coronagraph images, the most prominent feature is luminance; to take the dynamic disturbance into account, we must extend the feature vector to characterize the dynamic properties. In this paper, we adopt luminance features and co-occurrence luminance features to model the background.

The coronal image's luminance level is high, so calculating and recording the probabilities of all luminance feature vectors is unrealistic. Fortunately, at a given location of the coronagraph image, the luminance change is not very large, so for each pixel it is enough to record a small subspace of feature vectors as the background model. An example of the principal feature representation with luminance and co-occurrence luminance in LASCO C2 pseudocolor coronagraph images of 2014 is shown in Figure 1. The left image (a) shows the position of the selected pixel, and images (b) and (c) are the histograms of the statistics for the most significant color and co-occurrence color. The histograms show that the first thirty color distributions account for 68.38% of the whole color feature space, and the first thirty co-occurrence color distributions account for 79.51% of the whole co-occurrence color feature space. Therefore, as shown in Figure 1, we can represent P_s(v) and P_s(v | b) well by selecting a small number of feature vectors. In the experiments of this paper, the color feature vector is quantized to 128 levels and the first 25 feature vectors are recorded; the co-occurrence color feature vector is quantized to 64 levels and the first 40 feature vectors are recorded.

Figure 1: One example of learned principal features of LASCO C2 pseudocolor coronagraph images in the year of 2014. (a) The position of the selected pixel, (b) significant color histogram, and (c) significant co-occurrence color histogram.

2.3. Background Model and Parameters. In this paper, we focus on an effective detection method for CMEs, and the data processed are pseudocolor coronagraph images. So we use statistical features of the pseudocolor coronagraph images to model the background, in particular the prior probabilities of feature vectors belonging to the background and the color and co-occurrence color feature vector statistics lists. Suppose that at time t, at pixel point s, the color is c_t = [r_t, g_t, b_t]^T and the previous frame's color is c_{t-1} = [r_{t-1}, g_{t-1}, b_{t-1}]^T; the co-occurrence color feature vector can then be defined as cc_t = [r_{t-1}, g_{t-1}, b_{t-1}, r_t, g_t, b_t]^T. For each pixel, the background model includes the following:

(1) The prior probabilities p_{b,c}^{s,t} and p_{b,cc}^{s,t}: p_{b,c}^{s,t} indicates that the color feature vector belongs to the background at time t at pixel point s, and p_{b,cc}^{s,t} indicates that the co-occurrence color feature vector belongs to the background at time t at pixel point s.

(2) The color feature vector statistics list at time t at pixel point s, S_v^{c,s,t,i}, i = 1, ..., Nc:

S_v^{c,s,t,i} = { p_{v_c}^{s,t,i} = P(v_c^{t,i} | s),  p_{v_c,b}^{s,t,i} = P(v_c^{t,i} | b, s),  v_c^{t,i} = [r_t^i, g_t^i, b_t^i]^T },   (5)

where Nc is the number of recorded statistical color feature vectors, p_{v_c}^{s,t,i} is the statistical probability of the ith color feature vector v_c at position s up to time t, and p_{v_c,b}^{s,t,i} is the probability of the ith color feature vector v_c at position s having been judged as background.

(3) The co-occurrence color feature vector statistics list at time t at pixel point s, S_v^{cc,s,t,i}, i = 1, ..., Ncc:

S_v^{cc,s,t,i} = { p_{v_cc}^{s,t,i} = P(v_cc^{t,i} | s),  p_{v_cc,b}^{s,t,i} = P(v_cc^{t,i} | b, s),  v_cc^{t,i} = [r_{t-1}^i, g_{t-1}^i, b_{t-1}^i, r_t^i, g_t^i, b_t^i]^T },   (6)

where Ncc is the number of recorded statistical co-occurrence color feature vectors, p_{v_cc}^{s,t,i} is the statistical probability of the ith co-occurrence color feature vector v_cc at position s up to time t, and p_{v_cc,b}^{s,t,i} is the probability of the ith co-occurrence color feature vector v_cc at position s having been judged as background.

According to the distribution of the feature vectors, as shown in Figure 1, the first N elements of the list are enough to cover the major part of the feature vectors from the background. Therefore, when p_{v_c}^{s,t,i} ≈ p_{v_c,b}^{s,t,i} or p_{v_cc}^{s,t,i} ≈ p_{v_cc,b}^{s,t,i}, the feature vector can be used to represent the background.
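The per-pixel decision of formula (4), combined with a matching function over the recorded statistics list, can be sketched as follows. This is a simplified illustration, not the authors' code: the list layout (tuples of learned P(v_i | s), P(v_i | b, s), and the quantized vector) and the function names are assumptions.

```python
import numpy as np

DELTA = 2  # component-wise matching tolerance, as in formula (7)

def match(v1, v2, delta=DELTA):
    """Formula (7): two vectors match when every component differs by at most delta."""
    return bool(np.all(np.abs(np.asarray(v1) - np.asarray(v2)) <= delta))

def classify_pixel(v, stats, p_b):
    """Bayes rule of formula (4): background iff 2 * P(v|b) * P(b) > P(v).

    stats -- one pixel's statistics list: tuples (p_v, p_v_given_b, vec),
             i.e. the learned P(v_i | s) and P(v_i | b, s) for each recorded vector.
    p_b   -- the maintained prior probability P_s(b) of the pixel being background.
    Returns True for background, False for foreground (candidate CME area).
    """
    matched = [(p, pb) for p, pb, vec in stats if match(v, vec)]
    p_v = sum(p for p, _ in matched)           # approximates P_s(v)
    p_v_given_b = sum(pb for _, pb in matched)  # approximates P_s(v | b)
    if p_v == 0.0:
        # no recorded vector matches: probabilities set to 0, treated as foreground
        return False
    return 2.0 * p_v_given_b * p_b > p_v
```

A frequently seen background color is accepted as background, while a vector absent from the list (e.g. a bright CME front) falls through to the foreground branch.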
Otherwise, when p_{v_c}^{s,t,i} >> p_{v_c,b}^{s,t,i} or p_{v_cc}^{s,t,i} >> p_{v_cc,b}^{s,t,i}, the feature vector corresponds to the foreground. This is the foundation we use to detect CMEs.

3. Candidate CME Area Detection Module

The candidate CME area detection module is based on the background model established in Section 2.3 and the background/foreground classification formulated in Section 2.1. It consists of three parts: change detection and classification, pixel classification, and candidate CME area segmentation. In the first step, nonchange pixels are filtered out by using the background difference and the frame difference, which improves the computing speed; in the meantime, the detected change pixels are separated into pixels belonging to the stationary and moving scene according to interframe changes. In the second step, based on the learned statistics of the color feature vectors and co-occurrence color feature vectors, the pixels associated with the stationary or moving scene are further classified as background or candidate CME area by the Bayes decision rule. In the third step, candidate CME areas are segmented by morphological processing based on the classification results. The process follows the framework of [25], and the block diagram of the candidate CME area detection is shown in Figure 2.

Figure 2: The block diagram of the candidate CME area detection.

3.1. Change Classification. Candidate CME area detection is based on two classes of background features (color and co-occurrence color), so first of all, each coronagraph image's changes must be classified into two types. As shown in Figure 2, change classification works through temporal differences and background differences. The temporal difference binary image is denoted by Ftd(s, t), and the background difference binary image by Fbd(s, t). If Ftd(s, t) = 1 is detected (whatever Fbd(s, t) is), the pixel s is classified as a change pixel. If Ftd(s, t) = 0 and Fbd(s, t) = 1 are detected, the pixel s is classified as a stationary pixel. They are then further classified as background or candidate CME area separately: change pixels are classified by the co-occurrence color features, and stationary pixels are classified by the color features.

3.2. Pixel's Classification. For each pixel point s of the currently processed coronagraph image, we first extract the feature vector v^t (the color feature vector or the co-occurrence color feature vector, according to the pixel change classification) and match it with each vector in the pixel's feature vector statistics list using formula (7); the prior probabilities and conditional probabilities of all matched (M = 1) entries in the statistics list are then summed to obtain the prior probability P_s(v^t) and the conditional probability P_s(v^t | b) of the pixel's vector v^t. Meanwhile, the prior probability P_s(b) maintained in the background model is retrieved. Finally, by substituting P_s(v^t), P_s(v^t | b), and P_s(b) into formula (4), the pixel point s is classified as background or candidate CME area. The matching function is

M(v1, v2) = { 1, if for all i, |v1(i) − v2(i)| ≤ δ;  0, otherwise },   (7)

where δ = 2 is chosen so that, if similar features are quantized into neighboring vectors, the statistics can still be retrieved. If no element in the pixel's feature vector statistics list is matched, P_s(v^t) and P_s(v^t | b) are set to 0.

3.3. Candidate CME Area Segmentation. After the pixel classification, only a small percentage of the background pixels are wrongly classified as candidate CME ones, but there are many isolated points. A morphological operation (a pair of open and close) is therefore applied to remove the scattered error points and connect the candidate CME area points. Finally, the candidate CME area detection module outputs a binary image O(s, t).

3.4. Adaptive Background Learning. The coronagraph image sequence is a gradually changing scene, so the background model must be maintained to adapt to the various changes over time. In practice, the background model's probability information and a reference background image must be updated.

3.4.1. Updating the Background Model's Probability Information. Based on the previously obtained binary image O(s, t), the pixel s with feature vector v is classified as candidate CME area or background. The prior probability and the conditional probability associated with the color feature are gradually updated by formula (8); the updating of the prior and conditional probabilities associated with the co-occurrence color feature is similar:

p_{b,c}^{s,t+1} = (1 − α1) p_{b,c}^{s,t} + α1 M_{b,c}^{s,t},
p_{v_c}^{s,t+1,i} = (1 − α1) p_{v_c}^{s,t,i} + α1 M_{v_c}^{s,t},
p_{v_c,b}^{s,t+1,i} = (1 − α1) p_{v_c,b}^{s,t,i} + α1 (M_{b,c}^{s,t} ∧ M_{v_c}^{s,t}),   (8)

for i = 1, ..., Nc, where α1 is a learning rate that controls the speed of feature learning; in the experiment, we set α1 = 0.005. M_{b,c}^{s,t} = 1 when s is labeled as background at time t in O(s, t); otherwise, M_{b,c}^{s,t} = 0. M_{v_c}^{s,t} = 1 for the element v_c^{t,i} of the color feature vector statistics list S_v^{c,s,t,i} in formula (5) that best matches v_c^t, and M_{v_c}^{s,t} = 0 for the others. In more detail, the above updating can be stated as follows:

(a) If the pixel s is labeled as a background point at time t by the color feature, p_{b,c}^{s,t+1} is slightly increased from p_{b,c}^{s,t} due to M_{b,c}^{s,t} = 1. Meanwhile, the probability of the matched feature is also increased due to M_{v_c}^{s,t} = 1; if M_{v_c}^{s,t} = 0, the statistics for the unmatched features are gradually decreased. If no element of the feature vector recording list S_v^{c,s,t,i} matches v_c^t, the Nc-th element in the list is replaced by a new feature vector according to formula (9); if the number of elements is smaller than Nc, a new feature vector is added by formula (9):

p_{v_c}^{s,t+1,Nc} = α1,
p_{v_c,b}^{s,t+1,Nc} = α1,
v_c^{t,Nc} = v_c^t.   (9)

(b) If the pixel s is labeled as a foreground point at time t by the color feature, p_{b,c}^{s,t+1} and p_{v_c,b}^{s,t+1,i} are slightly decreased due to M_{b,c}^{s,t} = 0. However, the probability of the matched feature is increased.

To ensure that the element replaced is the one with the lowest probability, the updated elements in the feature vector statistics list S_v^{c,s,t,i} are re-sorted into descending order according to p_{v_c}^{s,t+1,i}.

3.4.2. Updating the Reference Background Image. In the candidate CME area detection process, the background difference is needed to classify changes, so a reference background image representing the most recent appearance of the scene must be maintained at each time step. An infinite impulse response (IIR) filter is used to track gradual changes of the stationary background scene. If the pixel s is classified as a change point in the change classification step and the candidate CME area segmentation result is O(s, t) = 1, the reference background image is updated as

B_c(s, t + 1) = (1 − α2) B_c(s, t) + α2 I_c(s, t),   (10)

where α2 is a parameter of the IIR filter and c ∈ {r, g, b} is the color channel of the processed point. A small positive α2 is selected to smooth out the disturbances caused by image noise; in the experiment, we set α2 = 0.1.

If Fbd(s, t) = 1 and Ftd(s, t) = 1 but O(s, t) = 0, there is a significant change that was ultimately not classified as candidate CME area, which indicates that a background change has been detected. The processed pixel s's color information should then replace the reference background, that is, B_c(s, t + 1) = I_c(s, t). Through this operation, the reference background image is a good representation of coronal scene change.

4. CME Detection Module

Based on the candidate CME areas, we can detect CMEs according to the morphological and dynamic characteristics of the candidate CME area. For example, to identify a newly emerging CME, it must be seen to move outward in at least two running-difference images. This condition was set by Robbrecht and Berghmans [9] and Olmedo [26] to define a newly emerging CME.

The CME detection method we propose is based on continuous-frame processing, so after detection of the candidate CME area, we set two conditions as the criterion of a CME event: (1) the CME candidate regions detected in two consecutive frames must extend outward from the heliocenter; (2) from the start of CME candidate region detection, the region must enlarge gradually.

Besides, considering the angular range of CMEs, we set a minimum angle threshold to filter noise. The features of interest are intrinsically in polar coordinates owing to the spherical structure of the Sun. A polar transformation is applied to each candidate CME area image: the [x, y] field of view (FOV), starting from the north of the Sun and going counterclockwise, becomes a [θ, r] FOV, with θ the poloidal angle around the Sun and r the radial distance measured from the limb. This kind of transformation has been used in other CME detection algorithms [9, 11]. While transforming, we also rebin, from 1024 × 1024 pixels for the [x, y] FOV to 360 × 360 pixels for the [θ, r] FOV. Through appropriate r-range selection, the dark occulter and corner regions can easily be avoided. The radial FOV in the polar coordinate image corresponds to 360 discrete points between 2.2 and 6.2 solar radii. We set the minimum detection angle parameter d (referring to the CME list in CDAW in 2014, the minimum angle is 5 degrees, and we set d = 4).

5. Results and Validation

In this section, visual examples and comparisons on LASCO C2 pseudocolor coronagraph images are described.

5.1. Results. We present the results obtained by running the detection algorithm based on adaptive background learning technology. The experimental process of candidate CME region extraction is based on the scene modeling; we use data from the LASCO C2 pseudocolor coronagraph images, and 1024 × 1024 image sequences are processed. Figure 3 shows a CME candidate area segmentation process comprising six frames (22:12:05, 22:24:05, 22:36:05, 23:12:10, 23:24:05, and 23:36:06 on 2014/03/04): column (a) is the LASCO C2 pseudocolor coronagraph images; column (b) is the reference background images; column (c) is the difference images between the two sequential frames; column (d) is the difference images between the current image and the reference background; column (e) is the final candidate CME area images; column (f) is the changing-region images of the candidate CME areas.

Figure 3: An experimental process graph of CME candidate area segmentation based on scene modeling. (a) LASCO C2 pseudocolor coronagraph images; (b) the reference background images; (c) the difference images between the two sequential frames; (d) the difference images between the current image and reference background; (e) the final candidate CME area images; (f) the changing region images of the candidate CME areas.

Figure 4 shows an example of the CME detection process with two frames (14:12 and 14:24 on 2014/01/01): column (a) is the original coronal images; column (b) is the candidate CME area images; column (c) is the polar images of the candidate CME areas; column (d) is the increasing-region images of the candidate CME areas; column (e) is the polar images of the increasing regions. The red box area in the last image is the detected CME area.

Figure 4: An example of CME detection process. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the polar images of (b); (d) the increasing region images of the candidate CME areas; (e) the polar images of (d).

Table 1: Comparison of the results of different CME detection methods.

Methods    | Detected CME number | Detected CME number in the CDAW catalog | Accuracy rate (%) | False-negative rate (%) | Undetected CME number in the CDAW catalog
CDAW       | 259                 | -                                       | -                 | -                       | -
CORIMP     | 132                 | 47                                      | 18.15             | 32.82                   | 212
SEEDS      | 410                 | 117                                     | 45.17             | 113.12                  | 142
CACTus     | 188                 | 85                                      | 32.82             | 39.77                   | 174
Our method | 283                 | 189                                     | 72.97             | 36.29                   | 70

Figure 5: A very poor CME event (appearance date-time (UT): 2014/06/01 02:24:05) detected by the adaptive background learning method. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the increasing region images of the candidate CME areas.
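The event-matching criterion and accuracy rate used in the validation (Section 5.2) can be sketched as follows. This is an illustrative reconstruction, not the authors' evaluation code: the event representation (closed time and position-angle intervals) and the function names are assumptions, and position-angle wrap-around at 360° is ignored for simplicity.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True when two closed intervals intersect."""
    return a_start <= b_end and b_start <= a_end

def matches_cdaw(event, cdaw_event):
    """An automated detection counts as a hit when it falls within the CDAW
    event's time range and angular range, as described in the text.
    Events are dicts with t0/t1 (time) and pa0/pa1 (position angle, degrees)."""
    return (overlaps(event["t0"], event["t1"], cdaw_event["t0"], cdaw_event["t1"])
            and overlaps(event["pa0"], event["pa1"], cdaw_event["pa0"], cdaw_event["pa1"]))

def accuracy_rate(detected, cdaw):
    """Fraction of CDAW catalog events matched by at least one automated detection."""
    hits = sum(1 for c in cdaw if any(matches_cdaw(d, c) for d in detected))
    return hits / len(cdaw)
```

With the June 2014 test month, this kind of per-event matching yields the "Detected CME number in the CDAW catalog" and "Accuracy rate" columns of Table 1.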
/e we have chosen a full month of pseudocolor coronagraph comparison of the detection results by the adaptive back- image sequences observed by LASCO C2 in June 2014 as a ground detection algorithm we propose with the other test dataset for comparison. /e manual CDAW list is used automated algorithms is shown in Table 1. as a reference, and we compared the results of the adaptive For the processing datasets, as shown in Table 1, the background learning method with CORIMP, CACTus, and method we propose has a higher accuracy rate than other SEEDS catalog to verify the effectiveness of our proposed methods, and the false-negative rate is only higher than algorithm. /e main comparisons include accuracy rate, CORIMP method and lower than the SEEDS and CACTus 10 Advances in Astronomy (a) (b) (c) Figure 6: A poor CME event (appearance date-time (UT): 2014/06/04 19 : 00 : 05) detected by the adaptive background learning method. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the increasing region images of the candidate CME areas. methods. For the total detected CME number, our proposed learning can represent the dynamic scene very well, which is suitable for the event detection in dynamic scenes. method is higher than CORIMP and CACTus and is only lower than the SEEDS method. In terms of the undetected Figure 5 is an example of the very weak CME event de- CME events in CDAW catalog number comparison, our tection by our method, which occurred in the Helmet method is the lowest. streamer area. /is event was not detected by CORIMP, In recent years, the CDAW catalog CME events are SEEDS, and CACTus methods and only recorded in CDAW more finely recorded; especially, the very weak events in the catalog. Figure 5 shows two continuous coronagraph im- Helmet streamers are recorded, and the number of the ages, the candidate CME area images, the candidate CME event recorded is also more and more. 
For example, CME change area images, and the very weak CME event area event number recorded in 1996 is 206 and in 2014 it is 2477. located in the red box. In the experiment, for the very weak Such changes make the automatic CME detection very CME events, our detection algorithm can only detect parts difficult, so the novel detection method must detect the of the CME event, and this is the main reason to cause the subtle changes in the coronal images. /e automatically misdetection. Figure 6 is a weak CME event detection detecting CME method based on adaptive background process images, the weak CME event area is also located in Advances in Astronomy 11 Figure 7: Morphological change description graph of a CME event (appearance date-time (UT): 2014/06/24 05 : 36 : 05) by using the intermediate images of our propose method. 12 Advances in Astronomy Table 2: CME information comparison table on the event shown in Figure 7. Comparison items Methods Central PA (deg) Angular width (deg) Linear (median) speed (km/s) Min speed (km/s) Max speed (km/s) CDAW 158 177 633 CORIMP 168 77 442 755 SEEDS 158 94 511 CACTus 167 96 473 403 600 Our method 157 95 425 316 507 5:48 6:00 6:12 6:24 6:36 6:48 7:00 7:12 7:24 7:36 7:48 8:00 Time Speed (km/s) (a) (b) Figure 8: Speed calculation sketch map and speed change curve chart. the red box, the first column is the coronagraph images, the extreme point of the CME area in each frame to calculate the second column is the foreground detected by our method, speed, but use the average value of the frontier sample points to and the third column is the change area of the foreground. calculate the speed. If the speed is calculated according to the /is event was also not detected by CORIMP, SEEDS, and extreme points of the frontier extreme point, the average speed CACTus methods. of this CME event calculated by our method is 490 km/s, which is similar to the other automatic detection methods. 5.3. Computation of Information on CME. /e information 6. 
Discussion and Conclusion on CME events can be calculated conveniently by using the processed images. Figure 7 is a sequence of processed images In this paper, we have developed a new method that is of CME events. Our method detected the event’s first C2 capable of detecting, tracking, and calculating the in- appearance date-time (UT): 2014/06/24 05 : 35 : 05, and the formation of CMEs in SOHO/LASCO C2 pseudocolor co- duration of this CME event is 2.6 hours, including 14 frames. ronagraph images. /e basic algorithm includes the In Figure 7, we show nine processed results of these frames. following: (i) establishing and maintaining the background /e first column is the coronal images; the second column is model of the coronal image sequences, (ii) detecting the the detected candidate CME regions; the third column is the candidate regions of CME based on Bayesian theorem, (iii) outline of the candidate CME regions which were marked by identifying the CME events, and (iv) calculating the in- the blue curve; the fourth column is the changing areas of the formation of CME events. candidate CME regions’ images; the fifth column is the /is novel method is based on adaptive background contour of the changing areas which were marked by the learning technology, and through the static and dynamic purple curve. We use the location information of the time- characteristics to model the background, this method can stamp to filter the noise caused by the timestamp, so we can describe the complex background well, especially the dy- extract a more accurate CME area and ensure that the final namic changes in the background. So by using the pro- calculated CME feature information is more accurate. /e posed method to detect the CME in the superposition area comparison of extracted information on this CME with other with the Helmet streamers has more obvious advantage. At methods is shown in Table 2. 
During the calculation of our the same time, due to the background modeling learning, in method, the speed of each frame can be calculated according this method, the information of multiframe images is to the change of each frame, the calculating schematic dia- counted. In this way, the influence on the results caused by gram, and the speed change curve shown in Figure 8. the noise in the single-frame image can be suppressed and In Table 2, the speed calculated by our method is the can enhance the robustness to CME detection. Our CME lowest; this is mainly due to the method did not use the frontier event identification method is based on the candidate CME Advances in Astronomy 13 area. It uses the fact that the CME region always enlarged Program. /e CORIMP CME catalog has been provided by gradually; on the one hand, it can avoid the effect of the Institute for Astronomy University of Hawaii. noise, and on the other hand, it can effectively track a complete CME event. Finally, through the detected region References information on each frame, it is convenient and effective to extract the morphological and motion information of the [1] E. R. Christian, Coronal Mass Ejections, NASA/Goddard CME event. Space Flight Center, Maryland, USA, 2012. Automated methods such as CACTus, SEEDS, and [2] D. H. Hathaway, Coronal Mass Ejections, NASA/Marshall CORIMP have a low detection rate of CMEs compared to Space Flight Center, Maryland, USA, 2014. [3] B. C. Low, “Coronal mass ejections,” Reviews of Geophysics, CDAW catalogs made by human observers. /is is mainly vol. 25, no. 3, pp. 663–675, 2016. because the method of manual labeling in recent years has [4] F. A. Cucinotta, “Space radiation risks for astronauts on marked poor CME events, especially the poor events in the multiple international space station missions,” PLoS One, Helmet streamers. So new approaches are needed to detect vol. 9, no. 4, Article ID e96099, 2014. 
subtle changes in the dynamic scenes, and the method we [5] R. Tousey, “/e solar corona,” in Space Research XIII, Aka- proposed has good performance in this aspect. demic-Verlag, Berlin, Germany, 1973. Similar to other automated methods, the biggest prob- [6] G. E. Brueckner, R. A. Howard, M. J. Koomen et al., “/e large lem in the adaptive background learning method is the angle spectroscopic coronagraph (LASCO),” Solar Physics, estimation of the various parameters and thresholds, such as vol. 162, no. 1-2, pp. 357–402, 1995. quantized levels of the pixel information, learning update [7] R. A. Howard, J. D. Moses, and D. G. Socker, “Sun-earth rate, and foreground detection threshold. For example, a connection coronal and heliospheric investigation (SEC- CHI),” in Proceedings of the International Symposium on small foreground detection threshold can reduce not only Optical Science and Technology, San Diego, CA, USA, May the false-negative rate, but also the accuracy rate. So the selection of these empirical values has a certain effect on the [8] N. Gopalswamy, S. Yashiro, G. Michalek et al., “/e SOHO/ algorithm, and further investigation will be carried out in LASCO CME catalog,” Earth, Moon, and Planets, vol. 104, these areas. We are also planning to apply the method to no. 1–4, pp. 295–313, 2009. corona images acquired by other devices. [9] E. Robbrecht and D. Berghmans, “Automated recognition of coronal mass ejections (CMEs) in near-real-time data,” As- Data Availability tronomy & Astrophysics, vol. 425, no. 3, pp. 1097–1106, 2004. [10] Y. Boursier, A. Llebaria, F. Goudail et al., “Automatic de- /e SOHO/LASCO data used to support the findings of this tection of coronal mass ejections on LASCO-C2 synoptic study are available from the SOHO/LASCO Instrument maps,” Proceedings of SPIE—>e International Society for Homepage (http://lasco-www.nrl.navy.mil/). Optical Engineering, vol. 5901, 2005. [11] O. Olmedo, J. Zhang, K. Poland, and K. 
Borne, “Automatic detection and tracking of coronal mass ejections in co- Conflicts of Interest ronagraph time series,” Solar Physics, vol. 248, no. 2, pp. 485–499, 2008. /e authors declare that there are no conflicts of interest [12] C. A. Young and P. T. Gallagher, “Multiscale edge detection in regarding the publication of this paper. the corona,” Solar Physics, vol. 248, no. 2, pp. 457–469, 2008. [13] N. A. Goussies, M. E. Mejail, J. Jacobo, and G. Stenborg, Acknowledgments “Detection and tracking of coronal mass ejections based on supervised segmentation and level set,” Pattern Recognition /is work was supported by the National Natural Science Letters, vol. 31, no. 6, pp. 496–501, 2010. Foundation of China (Grant nos. 11603016 and 11873062), [14] J. P. Byrne, P. T. Gallagher, R. T. J. McAteer, and C. A. Young, Key Scientific Research Foundation Project of Southwest “/e kinematics of coronal mass ejections using multiscale Forestry University (Grant no. 111827), and Open Research methods,” Astronomy & Astrophysics, vol. 495, no. 1, Program of CAS Key Laboratory of Solar Activity, National pp. 325–334, 2009. Astronomical Observatories (KLSA201909). SOHO is a [15] P. T. Gallagher, C. A. Young, J. P. Byrne, and R. T. J. McAteer, “Coronal mass ejection detection using wavelets, curvelets project of international cooperation between ESA and and ridgelets: applications for space weather monitoring,” NASA. /e SOHO/LASCO data used here are produced by a Advances in Space Research, vol. 47, no. 12, pp. 2118–2126, consortium of the Naval Research Laboratory (USA), Max- Planck-Institute fur ¨ Aeronomie (Germany), Laboratoire [16] Z. Zhao-xian, W. Ya-li, and L. Jin-sheng, “A method to au- d’Astronomie Spatiale (France), and the University of Bir- tomatic detecting coronal mass ejections in coronagraph mingham (UK). 
/e authors acknowledge the use of the based on frequency spectrum analysis,” in Proceedings of the CME catalog generated and maintained at the CDAW Data 2012 International Conference of Modern Computer Science Center by NASA and the Catholic University of America in and Applications, pp. 223–227, Springer, Berlin, Heidelberg, cooperation with the Naval Research Laboratory. /e June 2013. CACTus CME catalog is generated and maintained by the [17] A. Bemporad, V. Andretta, M. Pancrazzi et al., “On-board SIDC at the Royal Observatory of Belgium. /e SEEDS CME CME detection algorithm for the solar orbiter-METIS co- catalog has been supported by NASA Living with a Star ronagraph,” Proceedings of SPIE—>e International Society Program and NASA Applied Information Systems Research for Optical Engineering, vol. 9152, p. 91520K, 2014. 14 Advances in Astronomy [18] L. Zhang, J. Yin, J. Lin et al., “Detection of coronal mass ejections using multiple features and space-time continuity,” Solar Physics, vol. 292, no. 7, p. 91, 2017. [19] R. Patel, K. Amareswari, V. Pant et al., “Onboard automated CME detection algorithm for the visible emission line co- ronagraph on ADITYA-L1,” Solar Physics, vol. 293, no. 7, pp. 1–25, 2018. [20] D. B. Dhuri, S. M. Hanasoge, and M. C. M. Cheung, “Machine learning reveals systematic accumulation of electric current in lead-up to solar flares,” Proceedings of the National Academy of Sciences, vol. 116, no. 23, pp. 11141–11146, 2019. [21] X. Huang, H. Wang, L. Xu, J. Liu, R. Li, and X. Dai, “Deep learning based solar flare forecasting model. I: results for line- of-sight magnetograms,” >e Astrophysical Journal, vol. 856, no. 1, p. 7, 2018. [22] P. Wang, Y. Zhang, L. Feng et al., “A new automatic tool for CME detection and tracking with machine learning tech- niques,” 2019, https://arxiv.org/abs/1907.08798. [23] C. Stauffer and W. E. L. 
Grimson, “Learning patterns of activity using real-time tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, 2000. [24] L. Li, W. Huang, I. Y. H. Gu et al., “Foreground object de- tection from videos containing complex background,” in Proceedings of the Eleventh ACM International Conference on Multimedia, January 2003. [25] Q. Mo, F. Dai, D. Liu, J. Qin, Z. Xie, and T. Li, “Development of private processes: a refinement approach,” IEEE Access, vol. 7, pp. 31517–31534, 2019. [26] O. Olmedo, A study of the initiation process of coronal mass ejections and the tool for their auto-detection, Ph.D. /esis, College of Science, pp. 137–149, 2011. Journal of International Journal of The Scientific Advances in Applied Bionics Engineering Geophysics Chemistry World Journal and Biomechanics Hindawi Hindawi Hindawi Publishing Corporation Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 http://www www.hindawi.com .hindawi.com V Volume 2018 olume 2013 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Active and Passive Shock and Vibration Electronic Components Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Submit your manuscripts at www.hindawi.com Advances in Advances in Mathematical Physics Astronomy Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 International Journal of Rotating Machinery Advances in Optical Advances in Technologies OptoElectronics Advances in Advances in Physical Chemistry Condensed Matter Physics Hindawi Hindawi Hindawi Hindawi Volume 2018 www.hindawi.com Hindawi Volume 2018 Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com www.hindawi.com International Journal of Journal of International Journal of Advances in Antennas and Advances in Chemistry Propagation High Energy Physics Acoustics and Vibration Optics Hindawi Hindawi Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com 

A CME Automatic Detection Method Based on Adaptive Background Learning Technology

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2019 Zhenping Qiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN
1687-7969
eISSN
1687-7977
DOI
10.1155/2019/6582104

Abstract

Hindawi Advances in Astronomy, Volume 2019, Article ID 6582104, 14 pages. https://doi.org/10.1155/2019/6582104

Research Article

A CME Automatic Detection Method Based on Adaptive Background Learning Technology

Zhenping Qiang (1,2), Xianyong Bai (2), Qinghui Zhang (1), and Hong Lin (1)

(1) College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
(2) CAS Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China

Correspondence should be addressed to Zhenping Qiang; qzplucky@163.com

Received 14 March 2019; Revised 22 August 2019; Accepted 8 October 2019; Published 7 November 2019

Guest Editor: Junhui Fan

Copyright © 2019 Zhenping Qiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we describe a technique which uses an adaptive background learning method to detect CMEs (coronal mass ejections) automatically from SOHO/LASCO C2 image sequences. The method consists of several modules: an adaptive background module, a candidate CME area detection module, and a CME detection module. The core of the method is adaptive background learning, where CMEs are assumed to be foreground objects moving outward as observed in running-difference time series. Using both static and dynamic features to model the corona observation scene can describe the complex background more accurately. Moreover, the method can detect subtle changes in the corona sequences while filtering their noise effectively. We applied this method to a month of continuous corona images, compared the results with the CDAW, CACTus, SEEDS, and CORIMP catalogs, and found a good detection rate among the automatic methods. It detected about 73% of the CMEs listed in the CDAW CME catalog, which is compiled by human visual inspection.
Currently, the derived parameters are position angle, angular width, linear velocity, minimum velocity, and maximum velocity of CMES. Other parameters could also easily be added if needed. decades ago. Along with the continuous progress of space 1. Introduction observations of the corona, a series of satellites with coronal A coronal mass ejection (CME) is a significant release of imaging observation ability such as OSO-7, P78-1, Skylab, plasma and accompanying magnetic field from the solar SMM, and SOHO were launched; especially over the past 24 corona. It often follows solar flares and is normally present years, coronal mass ejections have been detected routinely by during a solar prominence eruption. /e plasma is released visual inspection of each image from the Large Angle into the solar wind and can be observed in coronagraph Spectrometric Coronagraph (LASCO) onboard SOHO [6]. imagery [1–3]. CME is the most energetic and important To further understand CME, especially its three-dimensional solar activity and is a significant driver of space weather in properties which can be observed by the Sun-Earth Con- nection Coronal and Heliospheric Investigation (SECCHI) the near-Earth environment and throughout the helio- sphere. When the ejection is directed towards and reaches [7], SECCHI flew aboard NASA’s recently launched Solar the Earth as an interplanetary CME (ICME), ICME can Terrestrial Relations Observatory (STEREO). cause geomagnetic storms that may disrupt Earth’s mag- In contrast to this huge amount of observations of netosphere, damage satellites potentially, induce ground CMEs, the identification and cataloging of CMEs are im- currents, and increase the radiation risk for astronauts [4]. portant tasks that provide the basic knowledge for further /us, CME detection is an active area of research. scientific studies. /ere are two main categories of methods CME was first observed coincided with the first-observed used to detect the CMEs. 
One category is the manual de- solar flare on 1 September 1859, and it has been studied tection method, with the LASCO instrument coronagraphs. extensively since it was first reported [5] more than four Currently, there exists a manual catalog which is the 2 Advances in Astronomy /ese CME automatic detection methods that men- Coordinated Data Analysis Workshop Data Center (CDAW) catalog [8] to catalog observed CMEs. /is catalog tioned above are mainly based on three kinds of strategies: (i) enhance the coronagraph images and describe the ki- is compiled by observers who look through the sequences of LASCO coronagraph images. But this human-based process nematics and morphology features (edge, luminance, shape, is tedious and subjected to observers’ biases. To promote the etc) of the processing images and then use these features to detection of CMEs, another category is the automatic de- determine the occurrence of CME; (ii) establish the CME tection method, which detects and characterizes CMEs in evolution models according to the historical CMEs’ dynamic coronagraph images. evolution characteristics, use the same feature extraction /e Computer Aided CME Tracking software package is method to extract dynamic evolution characteristics of the processing sequences, and then compare the extracted the first automatic detection method introduced in 2004 [9]. It utilizes the Hough transform to identify CMEs. In 2005, characteristics to the model to determine the occurrence of CME; and (iii) apply supervised classification problem in Boursier et al. proposed a method named the Automatic Recognition of Transient Events and Marseille Inventory machine learning to detect CME. /e coronagraph data can be considered as a three/four- from Synoptic maps (ARTEMIS) [10]; it utilizes LASCO C2 synoptic maps and based on an adaptive filtering and seg- dimensional dataset with two/three spatial and one temporal mentation to detect CMEs. In [11], Olmedo et al. presented dimension. 
/e key to automatic detection methods is how the Solar Eruptive Event Detection System (SEEDS) which to distinguish CME regions from other parts of the image. used image segmentation techniques to detect CMEs. In /ese methods do not utilize time dimension information [12], Young and Gallagher described and demonstrated a adequately. To make full use of time-domain information, multiscale edge detection technique that addresses the CME we can use video processing technology for CME detection. detection and tracking, which could serve as one part of an In fact, we can consider a coronagraph image sequence as a automated CME detection system. In 2009, Goussies et al. video and regard CMEs as abnormal events in the video. /e developed an algorithm based on level set and region detection process of CME can use the video surveillance competition methods to characterize the CME texture, and technology, which includes change detection, background by using the texture information in the region competition model, foreground detection, and object tracking. Further, motion equations to evolve the curve, to this end, seg- considering the coronagraph image sequence itself is a mentation of the leading edge of CMEs is performed on dynamic scene, and the CME is also a dynamic process, so individual frames [13]. In the same year, Byrne et al. adopted the CME detection methods must adapt to the scene change. a multiscale decomposition technology to extract structure Inspired by these ideas, in this paper, we attempt to detect of the processing image and used an ellipse parameterization CMEs based on adaptive background learning technology. of the front to extract the kinematics (height, velocity, and /e method consists of three main modules described below: acceleration) and morphology (width and orientation) (1) Adaptive background module: this module is mainly change to detect the CMEs [14]. In [15], Gallagher et al. 
implemented to maintain the background model of developed an image processing technique to define the the coronagraph image sequence evolution of CMEs by texture and used a supervised seg- mentation algorithm to isolate a particular region of interest (2) Candidate CME area detection module: this module based upon its similarity with a prespecified model to au- is used to detect the foreground areas of the co- tomatically track the CMEs. In 2012, Zhao-Xian et al. [16] ronagraph images presented a method to detect CMEs by analyzing the sudden (3) CME detection module: this module is based on the change of frequency spectrum in the coronagraph. In 2014, candidate areas to identify the CME event Bemporad et al. [17] described the onboard CME detection /e remaining of the paper is organized as follows. In algorithm for the Solar Orbiter-METIS coronagraph. /e Section 2, we first give a specification about the adaptive algorithm is based on the running differences between background module. /en in Section 3, we will formulate the consecutive images to get significant changes and to provide background and foreground classification problem and the CME first detection time. In 2017, Zhang et al. [18] propose a method of candidate CME area detection. Section proposed a suspected CME region detection algorithm by 4 describes an algorithm for CME detection based on using the extreme learning machine (ELM) method which candidate CME area detection module. /e experimental takes into account the features of the grayscale and the results and validation on LASCO C2 data are presented in texture. In 2018, based on the intensity thresholding followed Section 5. /e paper is concluded in Section 6. by the area thresholding in successive difference images spatially rebinned to improve signal-to-noise ratio, Patel et al. [19] proposed a CME detection algorithm for the Visible 2. Adaptive Background Module Emission Line Coronagraph on ADITYA-L1. 
Recently, machine learning has been used in solar physics. Dhuri et al. In coronagraph image sequence, the background environ- [20] used machine learning to classify vector magnetic field ments always change; for example, small moving objects observations from flaring ARs. Huang et al. [21] applied a such as stars and cosmic rays can make the background deep learning method to flare forecasting. Very recently, change. So, the background representation model must be Wang et al. [22] even proposed an automatic tool for CME more robust and adaptive, and the background module must detection and tracking with machine learning techniques. be continuously updated to represent the change of the Advances in Astronomy 3 scene. To solve the strong chaotic interference in the with the feature vector v. For the coronagraph images, the background, several methods have been proposed to adapt most prominent feature is the luminance characteristics and to variety of background situations. Among them, mixture takes into account the dynamic disturbance; we must increase of Gaussians (MoG) [23] is considered as a promising the feature vector to characterize the dynamic properties. In method. In the video monitoring, because of the high frame this paper, we adopt the luminance features and co-occur- rate, the MoG can achieve good results in the gradual change rence luminance features to model the background. scene, but for CME detection, the interference changes /e coronal image’s luminance level is high, if calculating significantly, so it needs better method to model the dynamic and recording all the luminance feature vectors’ probability is scene. Li et al. proposed a statistical modeling [24] that used unrealistic. Fortunately, at the same location of the co- the co-occurrence of color characteristics of two consecutive ronagraph image, the luminance change is not very big. So for frames to model the dynamic scene. 
By using this statistical each pixel, it will be enough to record a small subspace feature modeling, this method can represent nonstatic background vectors as the background model. An example of the principal objects, so it has good robustness for the existence of dy- feature representation with luminance and co-occurrence namic background periodic interference. /e statistical luminance in LASCO C2 pseudocolor coronagraph images in modeling is very suitable for CME detection which is often the year of 2014 is shown in Figure 1. /e left image (a) shows associated with other forms of solar activities. We apply this the position of the selected pixel, and the right image (b) and method to model the background, namely, employing the image (c) are the histograms of the statistics for the most color feature to describe the static background and co-oc- significant color and co-occurrence color. /e histogram of currence color features to describe the moving background, the color features shows that only the first thirty color dis- and then use a Bayes decision rule for classification of tributions account for 68.38% of all color feature space, and background and foreground. the first thirty co-occurrence color distributions account for 79.51% of all co-occurrence color feature spaces. /erefore, as shown in Figure 1, we can represent P (v) and 2.1. Formulation of the Classification Rule Based on Bayes. P (v | b) well by selecting a small number of feature vectors. In In the method of automated detection of CMEs based on an the experiments of this paper, the color feature vector is adaptive background module, each pixel in the coronagraph quantized for 128 levels and recorded the first 25 feature image is divided into two categories: background pixels and vectors and the co-occurrence color feature vector is quan- foreground pixels (candidate CME area pixels). /erefore, tized for 64 levels and recorded the first 40 feature vectors. 
Using the Bayes rule, the feature vector distribution probability of each pixel satisfies the following equation:

    P_s(v) = P_s(v | b) P_s(b) + P_s(v | f) P_s(f),    (1)

where s = (x, y) indicates the pixel position, v is the statistical feature vector, P_s(v | b) is the probability of the feature vector v being observed as background at s, P_s(b) is the prior probability of the pixel s belonging to the background, and P_s(v) is the prior probability of the feature vector v being observed at the position s. Similarly, f denotes the foreground (or candidate CME area). By using the Bayes decision rule, the pixel can be classified as background if the feature vector satisfies the following equation:

    P_s(b | v) > P_s(f | v),    (2)

and, by using the Bayesian conditional posterior probability,

    P_s(C | v) = P_s(v | C) P_s(C) / P_s(v),    C = b or f.    (3)

Substituting (1) and (3) into (2), the decision becomes

    2 P_s(v | b) P_s(b) > P_s(v),    (4)

that is, if we obtain the prior probabilities P_s(b) and P_s(v) and the conditional probability P_s(v | b) at the moment t, the pixel s with the feature vector v can be classified as background or foreground based on formula (4).

2.2. Description of the Feature Vector. In formula (4), the probability functions P_s(v) and P_s(v | b) are all associated with the statistical feature vector v. Suppose that at the time t, at pixel point s, the color is c_t = [r_t g_t b_t]^T and the previous frame's color is c_{t-1} = [r_{t-1} g_{t-1} b_{t-1}]^T; the co-occurrence color feature vector can then be defined as cc_t = [r_{t-1} g_{t-1} b_{t-1} r_t g_t b_t]^T.

2.3. Background Model and Parameters. In this paper, we focus on an effective detection method for CMEs, and the data processed are pseudocolor coronagraph images. We therefore use statistical features of the pseudocolor coronagraph images to model the background, in particular the prior probabilities of the feature vectors belonging to the background and the statistics lists of the color and co-occurrence color feature vectors. For each pixel, the background model includes the following:

(1) The prior probabilities p_{b,c}^{s,t} and p_{b,cc}^{s,t}: p_{b,c}^{s,t} indicates that the color feature vector belongs to the background at the time t at pixel point s, and p_{b,cc}^{s,t} indicates that the co-occurrence color feature vector belongs to the background at the time t at pixel point s.

(2) The color feature vector statistics list of the time t at pixel point s, S_c^{s,t,i}, i = 1, ..., Nc:

    S_c^{s,t,i} = { p_{v_c}^{s,t,i} = P(v_c^{t,i} | s),
                    p_{v_c,b}^{s,t,i} = P(v_c^{t,i} | b, s),    (5)
                    v_c^{t,i} = [r_t^i g_t^i b_t^i]^T },

where Nc is the number of recorded statistical color feature vectors, p_{v_c}^{s,t,i} is the statistical probability of the i-th color feature vector v_c at position s until the time t, and p_{v_c,b}^{s,t,i} is the probability of the i-th color feature vector v_c at position s being judged as the background.

(3) The co-occurrence color feature vector statistics list of the time t at pixel point s, S_cc^{s,t,i}, i = 1, ..., Ncc:

    S_cc^{s,t,i} = { p_{v_cc}^{s,t,i} = P(v_cc^{t,i} | s),
                     p_{v_cc,b}^{s,t,i} = P(v_cc^{t,i} | b, s),    (6)
                     v_cc^{t,i} = [r_{t-1}^i g_{t-1}^i b_{t-1}^i r_t^i g_t^i b_t^i]^T },

where Ncc is the number of recorded statistical co-occurrence color feature vectors, p_{v_cc}^{s,t,i} is the statistical probability of the i-th co-occurrence color feature vector v_cc at position s until the time t, and p_{v_cc,b}^{s,t,i} is the probability of the i-th co-occurrence color feature vector v_cc at position s being judged as the background.

Figure 1: One example of learned principal features of LASCO C2 pseudocolor coronagraph images in the year 2014. (a) The position of the selected pixel (310, 360); (b) significant color histogram (the first thirty); (c) significant co-occurrence color histogram (the first thirty).

According to the distribution of the feature vectors, as shown in Figure 1, the first N elements of the list are enough to cover the major part of the feature vectors from the background. Therefore, in the cases p_{v_c}^{s,t,i} ≈ p_{v_c,b}^{s,t,i} or p_{v_cc}^{s,t,i} ≈ p_{v_cc,b}^{s,t,i}, p_{v_c}^{s,t,i} and p_{v_cc}^{s,t,i} can be used to represent the background.
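The per-pixel Bayes decision of formula (4), together with the feature-vector matching of formula (7) used to look up the statistics lists, can be sketched in a few lines of Python. This is a minimal illustration of the decision rule, not the authors' implementation; the list layout and the example numbers are assumptions:

```python
import numpy as np

DELTA = 2  # quantization tolerance delta used when matching feature vectors

def match(v1, v2, delta=DELTA):
    # Two feature vectors match when every component differs by at most delta.
    return bool(np.all(np.abs(np.asarray(v1) - np.asarray(v2)) <= delta))

def classify_pixel(v, stats, p_b):
    """Bayes decision of formula (4): background iff 2*P(v|b)*P(b) > P(v).

    v     : feature vector observed at the pixel
    stats : list of (vector, p_v, p_v_given_b) entries kept for this pixel,
            where p_v approximates P_s(v) and p_v_given_b approximates P_s(v|b)
    p_b   : maintained prior P_s(b) of the pixel being background
    Returns True for background, False for foreground (candidate CME area).
    """
    p_v = p_v_b = 0.0
    for vec, pv, pvb in stats:  # sum the statistics over all matched entries
        if match(v, vec):
            p_v += pv
            p_v_b += pvb
    # If nothing matched, both sums stay 0 and the pixel falls to foreground,
    # consistent with setting P_s(v^t) and P_s(v^t | b) to 0 in the text.
    return 2.0 * p_v_b * p_b > p_v
```

For instance, with a single stored vector [100, 100, 100] carrying p_v = 0.4 and p_v_given_b = 0.5, and a prior p_b = 0.9, any observed vector within the δ = 2 tolerance is classified as background (since 2 × 0.5 × 0.9 = 0.9 > 0.4), while an unmatched vector is classified as foreground.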
Otherwise, when p_{v_c}^{s,t,i} >> p_{v_c,b}^{s,t,i} or p_{v_cc}^{s,t,i} >> p_{v_cc,b}^{s,t,i}, the feature vector corresponds to the foreground. This is the foundation that we use to detect the CMEs. To decide whether two feature vectors represent the same feature, a matching function is used:

    M(v_1, v_2) = { 1, if |v_1(i) - v_2(i)| <= δ for all i,    (7)
                    0, otherwise,

where δ = 2 is chosen so that, if similar features are quantized into neighboring vectors, the statistics can still be retrieved.

3. Candidate CME Area Detection Module

The candidate CME area detection module is based on the background model established in Section 2.3 and the background/foreground classification formulated in Section 2.1. It consists of three parts: change detection, change classification, and candidate CME area segmentation. In the first step, nonchange pixels are filtered out by using the background difference and the frame difference, which improves the computing speed; in the meantime, the detected change pixels are separated into pixels belonging to the stationary or the moving scene according to the interframe changes. In the second step, based on the learned statistics of the color and co-occurrence color feature vectors, the pixels associated with the stationary or moving scene are further classified as background or candidate CME area by using the Bayes decision rule. In the third step, candidate CME areas are segmented by morphological processing based on the classification results. This process is the basis of the algorithm [25], and the block diagram of the candidate CME area detection is shown in Figure 2.

Figure 2: The block diagram of the candidate CME area detection.

3.1. Change Classification. Candidate CME area detection is based on the two classes of background features (color and co-occurrence color), so, first of all, the changes in each coronagraph image must be classified into two types. As shown in Figure 2, the change classification works through temporal differences and background differences. The temporal difference binary image is denoted by Ftd(s, t), and the background difference binary image is denoted by Fbd(s, t). If Ftd(s, t) = 1 (no matter what Fbd(s, t) is), the pixel s is classified as a change pixel. If Fbd(s, t) = 1 and Ftd(s, t) = 0, the pixel s is classified as a stationary pixel. They are further classified as background or candidate CME area separately: a change pixel is classified by the co-occurrence color features, and a stationary pixel is classified by the color features.

3.2. Pixel's Classification. For each pixel point s of the coronagraph image being processed, the feature vector v (a color feature vector or a co-occurrence color feature vector, according to the pixel change classification) is first extracted and matched against each vector in the pixel's feature vector statistics list by using formula (7). The sums of the prior probabilities and of the conditional probabilities of all matched (M = 1) entries in the statistics list give the prior probability P_s(v^t) and the conditional probability P_s(v^t | b) of the pixel's vector v. Meanwhile, the prior probability P_s(b) maintained in the background model is obtained. If no element in the pixel's feature vector statistics list is matched, P_s(v^t) and P_s(v^t | b) are set to 0. Finally, by substituting P_s(v^t), P_s(v^t | b), and P_s(b) into formula (4), the pixel point s can be classified as background or candidate CME area.

3.3. Candidate CME Area Segmentation. After the pixel classification, only a small percentage of the background pixels are wrongly classified as candidate CME ones, but there are many isolated points, so a morphological operation (a pair of open and close) is applied to remove the scattered error points and connect the candidate CME area points. Finally, the candidate CME area detection module outputs a binary image O(s, t).

3.4. Adaptive Background Learning. The coronagraph image sequence is a gradually changing scene, so the background model must be maintained to adapt to the various changes over time. In practice, both the background model's probability information and a reference background image must be updated.

3.4.1. Updating the Background Model's Probability Information. Based on the previously obtained binary image O(s, t), the pixel s with the feature vector v is classified as candidate CME area or background. The prior probability and the conditional probabilities associated with the color feature are gradually updated by formula (8); the updating of the probabilities associated with the co-occurrence color feature is similar:

    p_{b,c}^{s,t+1}   = (1 - α_1) p_{b,c}^{s,t}   + α_1 M_{b,c}^{s,t},
    p_{v_c}^{s,t+1,i} = (1 - α_1) p_{v_c}^{s,t,i} + α_1 M_{v_c}^{s,t},    (8)
    p_{v_c,b}^{s,t+1,i} = (1 - α_1) p_{v_c,b}^{s,t,i} + α_1 (M_{b,c}^{s,t} ∧ M_{v_c}^{s,t}),

for i = 1, ..., Nc, where α_1 is a learning rate that controls the speed of feature learning (in the experiment, we set α_1 = 0.005); M_{b,c}^{s,t} = 1 when s is labeled as the background at the time t in O(s, t), and otherwise M_{b,c}^{s,t} = 0; M_{v_c}^{s,t} = 1 for the element v_c^{t,i} of the color feature vector statistics list S_c^{s,t,i} in formula (5) that matches v_c^t best, and M_{v_c}^{s,t} = 0 for the others. In more detail, the above updating can be stated as follows:

(a) If the pixel s is labeled as a background point at the time t by the color feature, p_{b,c}^{s,t+1} is slightly increased from p_{b,c}^{s,t} because M_{b,c}^{s,t} = 1. Meanwhile, the probability of the matched feature is also increased because M_{v_c}^{s,t} = 1, while the statistics of the unmatched features (M_{v_c}^{s,t} = 0) are gradually decreased. If there is no matched feature between v_c^t and the elements of the feature vector recording list S_c^{s,t,i}, the Nc-th element in the list is replaced by a new feature vector given by formula (9); if the number of elements is smaller than Nc, the new feature vector of formula (9) is added:

    p_{v_c}^{s,t+1,Nc}   = α_1,
    p_{v_c,b}^{s,t+1,Nc} = α_1,    (9)
    v_c^{t,Nc} = v_c^t.

(b) If the pixel s is labeled as a foreground point at the time t by the color feature, p_{b,c}^{s,t+1} and p_{v_c,b}^{s,t+1,i} are slightly decreased because M_{b,c}^{s,t} = 0; however, the prior probability of the matched feature is still increased.

To ensure that the element replaced is the one with the lowest probability, the updated elements in the feature vector statistics list S_c^{s,t,i} are re-sorted into descending order of p_{v_c}^{s,t+1,i}.

3.4.2. Updating the Reference Background Image. The candidate CME area detection process uses the background difference to classify the changes, so a reference background image that represents the most recent appearance of the scene must be maintained at each time step. An infinite impulse response (IIR) filter is used to follow the gradual changes of the stationary background scene. If the pixel s is classified as a change point in the change classification step and the candidate CME area segmentation result is O(s, t) = 1, the reference background image is updated as

    B_c(s, t + 1) = (1 - α_2) B_c(s, t) + α_2 I_c(s, t),    (10)

where α_2 is a parameter of the IIR filter and c ∈ {r, g, b} indexes the color channels of the processed point. A small positive value of α_2 is selected to smooth out the disturbances caused by image noise; in the experiment, we set α_2 = 0.1.

If Fbd(s, t) = 1 and Ftd(s, t) = 1 but O(s, t) = 0, a significant change was detected that was ultimately not classified as candidate CME area, which indicates that a background change has occurred. The color information of the processed pixel s should then replace the reference background, that is, B_c(s, t + 1) = I_c(s, t). Through this operation, the reference background image remains a good representation of the changing coronal scene.

4. CME Detection Module

Based on the candidate CME areas, we can detect the CMEs according to the morphological and dynamic characteristics of the candidate CME area. For example, to be identified as a newly emerging CME, a feature must be seen to move outward in at least two running-difference images. This condition was set by Robbrecht and Berghmans [9] and Olmedo [26] to define a newly emerging CME.

The CME detection method we propose is based on a continuous frame processing approach, so, after the detection of the candidate CME area, we set two conditions as the criterion of a CME event: (1) the CME candidate regions detected in two consecutive frames must extend outward from the heliocenter; (2) after the CME candidate region is first detected, the region must enlarge gradually.

Besides, considering the angular range of the CMEs, we set a minimum angle threshold to filter noise. The features of interest are intrinsically in polar coordinates owing to the spherical structure of the Sun, so a polar transformation is applied to each candidate CME area image: the [x, y] field of view (FOV), starting from the North of the Sun and going counterclockwise, becomes a [θ, r] FOV, with θ the poloidal angle around the Sun and r the radial distance measured from the limb. This kind of transformation has been used in other CME detection algorithms [9, 11]. While transforming, we also rebin, from 1024 × 1024 pixels for the [x, y] FOV to 360 × 360 pixels for the [θ, r] FOV. Through an appropriate r-range selection, the dark occulter and the corner regions can easily be avoided. The radial FOV in the polar coordinate image corresponds to 360 discrete points between 2.2 and 6.2 solar radii. We set the minimum detection angle parameter d by referring to the CME list in CDAW in 2014: the minimum angle there is 5 degrees, and we set d = 4.

5. Results and Validation

In this section, visual examples and a comparison on LASCO C2 pseudocolor coronagraph images are described.

5.1. Results. We present the results obtained by running the detection algorithm based on adaptive background learning technology. The experiments on candidate CME region extraction based on scene modeling use data from the LASCO C2 pseudocolor coronagraph images, and 1024 × 1024 image sequences are processed. Figure 3 shows a CME candidate area segmentation process, which includes the processing results of six frames (22:12:05, 22:24:05, 22:36:05, 23:12:10, 23:24:05, and 23:36:06 on 2014/03/04): column (a) is the LASCO C2 pseudocolor coronagraph images; column (b) is the reference background images; column (c) is the difference images between two sequential frames; column (d) is the difference images between the current image and the reference background; column (e) is the final candidate CME area images; column (f) is the changing-region images of the candidate CME areas.

Figure 3: An experimental process graph of CME candidate area segmentation based on scene modeling. (a) LASCO C2 pseudocolor coronagraph images; (b) the reference background images; (c) the difference images between the two sequential frames; (d) the difference images between the current image and the reference background; (e) the final candidate CME area images; (f) the changing-region images of the candidate CME areas.

Figure 4: An example of the CME detection process. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the polar images of (b); (d) the increasing-region images of the candidate CME areas; (e) the polar images of (d).
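The Cartesian-to-polar resampling used in the detection module can be sketched as follows. This is a simplified nearest-neighbour version written for illustration (the function name, the pixel-unit radial range, and the orientation conventions are our assumptions, not the paper's code):

```python
import numpy as np

def to_polar(img, center, r_min, r_max, n_theta=360, n_r=360):
    """Resample an [x, y] image onto a [theta, r] grid.

    theta runs from solar North, counterclockwise; r is sampled at n_r
    points between r_min and r_max (here measured in pixels from the
    disk centre). Nearest-neighbour sampling; samples falling outside
    the frame are set to 0.
    """
    h, w = img.shape[:2]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r = np.linspace(r_min, r_max, n_r)
    rr, tt = np.meshgrid(r, theta, indexing="ij")  # (n_r, n_theta) grids
    # Image y grows downward, so North (up) is -y; counterclockwise on sky.
    x = np.rint(center[0] - rr * np.sin(tt)).astype(int)
    y = np.rint(center[1] - rr * np.cos(tt)).astype(int)
    inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    out[inside] = img[y[inside], x[inside]]
    return out
```

In the paper's setup, the radial axis would cover 2.2–6.2 solar radii (converted to pixels via the plate scale) and both axes have 360 samples, so a candidate region spanning d = 4 angular columns corresponds to the 4-degree minimum-width threshold.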
Figure 4 is an example of the CME detection process, which includes the processing results of two frames (14:12 and 14:24 on 2014/01/01): column (a) is the original coronal images; column (b) is the candidate CME area images; column (c) is the polar images of the candidate CME areas; column (d) is the increasing-region images of the candidate CME areas; column (e) is the polar images of the increasing regions. The red box in the last image marks the detected CME area.

5.2. Validation and Comparison. Without loss of generality, we have chosen a full month of pseudocolor coronagraph image sequences observed by LASCO C2 in June 2014 as the test dataset for comparison. The manual CDAW list is used as a reference, and we compared the results of the adaptive background learning method with the CORIMP, CACTus, and SEEDS catalogs to verify the effectiveness of our proposed algorithm. The main comparisons include the accuracy rate, the false-negative rate, and the number of undetected CME events. The accuracy rate is the ratio of the number of CME events that were both detected by the automated method and recorded in the CDAW list to the total number of CME events in the CDAW catalog. The false-negative rate is the ratio of the number of CME events detected by the automated method that do not match any event recorded in the CDAW catalog to the total number of CME events in the CDAW catalog.

In the comparison experiments, for each CME event recorded in the CDAW catalog, if an automated method detected a CME within the time range and the angular range of this event, the automated method is considered to have detected this CDAW CME event. The comparison of the detection results of the proposed adaptive background detection algorithm with the other automated algorithms is shown in Table 1.

Table 1: Comparison of the results of different CME detection methods.

Methods    | Detected CME number | Detected CME number in the CDAW catalog | Accuracy rate (%) | False-negative rate (%) | Undetected CME number in the CDAW catalog
CDAW       | 259                 |                                         |                   |                         |
CORIMP     | 132                 | 47                                      | 18.15             | 32.82                   | 212
SEEDS      | 410                 | 117                                     | 45.17             | 113.12                  | 142
CACTus     | 188                 | 85                                      | 32.82             | 39.77                   | 174
Our method | 283                 | 189                                     | 72.97             | 36.29                   | 70

For the processed dataset, as shown in Table 1, the method we propose has a higher accuracy rate than the other methods, and its false-negative rate is higher only than that of the CORIMP method and lower than those of the SEEDS and CACTus methods. For the total number of detected CMEs, our proposed method is higher than CORIMP and CACTus and lower only than the SEEDS method. In terms of the number of undetected CME events in the CDAW catalog, our method is the lowest.

In recent years, the CME events in the CDAW catalog have been recorded more finely; in particular, the very weak events in the helmet streamers are now recorded, and the number of recorded events keeps growing.

Figure 5: A very poor CME event (appearance date-time (UT): 2014/06/01 02:24:05) detected by the adaptive background learning method. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the increasing-region images of the candidate CME areas.

Figure 6: A poor CME event (appearance date-time (UT): 2014/06/04 19:00:05) detected by the adaptive background learning method. (a) LASCO C2 pseudocolor coronagraph images; (b) the candidate CME area images; (c) the increasing-region images of the candidate CME areas.
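The two rates in Table 1 can be reproduced directly from the counts in its first two columns. Below is a small Python check; note that, as the listed numbers imply, the "false-negative rate" column counts the detections of each method that match no CDAW event:

```python
# Reproduce the Table 1 rates from the raw counts. `detected` is a method's
# total number of detections, `in_cdaw` the subset matching a CDAW event,
# and 259 the number of CDAW events in the June 2014 test month.
N_CDAW = 259

def rates(detected, in_cdaw, n_cdaw=N_CDAW):
    accuracy = 100.0 * in_cdaw / n_cdaw                      # matched / CDAW total
    false_negative = 100.0 * (detected - in_cdaw) / n_cdaw   # unmatched / CDAW total
    return round(accuracy, 2), round(false_negative, 2)

print(rates(283, 189))  # our method: (72.97, 36.29)
```

The CORIMP, CACTus, and "our method" rows of Table 1 follow exactly from this formula; the SEEDS false-negative entry (113.12) differs from it by 0.01, apparently a truncation in the published table.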
For example, the number of recorded CME events is 206 in 1996 and 2477 in 2014. Such changes make automatic CME detection very difficult, so a novel detection method must detect the subtle changes in the coronal images. The automatic CME detection method based on adaptive background learning can represent the dynamic scene very well, which makes it suitable for event detection in dynamic scenes.

Figure 5 is an example of the detection of a very weak CME event by our method; the event occurred in the helmet streamer area. This event was not detected by the CORIMP, SEEDS, and CACTus methods and is recorded only in the CDAW catalog. Figure 5 shows two continuous coronagraph images, the candidate CME area images, the candidate CME change area images, and the very weak CME event area located in the red box. In the experiment, for the very weak CME events, our detection algorithm can detect only parts of each event, and this is the main cause of misdetection. Figure 6 shows the detection process of a weak CME event; the weak CME event area is again located in the red box. The first column is the coronagraph images, the second column is the foreground detected by our method, and the third column is the change area of the foreground. This event was also not detected by the CORIMP, SEEDS, and CACTus methods.

5.3. Computation of Information on CME. The information on CME events can be calculated conveniently from the processed images. Figure 7 is a sequence of processed images of a CME event. Our method detected the event's first C2 appearance date-time (UT), 2014/06/24 05:35:05, and the duration of this CME event is 2.6 hours, including 14 frames. In Figure 7, we show nine processed results of these frames. The first column is the coronal images; the second column is the detected candidate CME regions; the third column is the outline of the candidate CME regions, marked by the blue curve; the fourth column is the changing areas of the candidate CME regions; the fifth column is the contour of the changing areas, marked by the purple curve. We use the location information of the timestamp to filter the noise caused by the timestamp, so we can extract a more accurate CME area and ensure that the finally calculated CME feature information is more accurate. The comparison of the extracted information on this CME with the other methods is shown in Table 2.

Figure 7: Morphological change description graph of a CME event (appearance date-time (UT): 2014/06/24 05:36:05) using the intermediate images of our proposed method.

Table 2: CME information comparison on the event shown in Figure 7.

Methods    | Central PA (deg) | Angular width (deg) | Linear (median) speed (km/s) | Min speed (km/s) | Max speed (km/s)
CDAW       | 158              | 177                 | 633                          |                  |
CORIMP     | 168              | 77                  | 442                          |                  | 755
SEEDS      | 158              | 94                  | 511                          |                  |
CACTus     | 167              | 96                  | 473                          | 403              | 600
Our method | 157              | 95                  | 425                          | 316              | 507

During the calculation by our method, the speed at each frame can be calculated from the frame-to-frame change of the CME area; the calculating schematic diagram and the speed change curve are shown in Figure 8.

Figure 8: Speed calculation sketch map and speed change curve chart (speed in km/s, from 5:48 to 8:00 UT).

In Table 2, the speed calculated by our method is the lowest; this is mainly because our method does not use the frontier extreme point of the CME area in each frame to calculate the speed but uses the average value of the frontier sample points. If the speed is calculated according to the frontier extreme points, the average speed of this CME event calculated by our method is 490 km/s, which is similar to those of the other automatic detection methods.
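The frame-to-frame speed estimate described above can be sketched as follows. This is an assumed implementation for illustration (the function name and the data layout are ours, not the paper's): each frame contributes the radial positions of sample points along the CME front, and the speed uses their average rather than the extreme point, which damps the effect of outliers:

```python
import numpy as np

def frame_speeds(front_samples_km, cadence_s):
    """Speeds (km/s) between consecutive frames.

    front_samples_km : list of 1-D arrays, one per frame, holding the radial
                       positions (km) of the frontier sample points
    cadence_s        : time between frames in seconds
    """
    # Average the frontier sample points of each frame, then differentiate.
    means = np.array([np.mean(f) for f in front_samples_km])
    return np.diff(means) / cadence_s
```

With the nominal 12-minute LASCO C2 cadence (720 s), a front whose mean radial position advances by 3.6 × 10^5 km between two frames yields 500 km/s.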
6. Discussion and Conclusion

In this paper, we have developed a new method that is capable of detecting, tracking, and calculating the information of CMEs in SOHO/LASCO C2 pseudocolor coronagraph images. The basic algorithm includes the following: (i) establishing and maintaining the background model of the coronal image sequences, (ii) detecting the candidate regions of CMEs based on the Bayes theorem, (iii) identifying the CME events, and (iv) calculating the information of the CME events.

This novel method is based on adaptive background learning technology; by modeling the background through its static and dynamic characteristics, the method can describe the complex background well, especially the dynamic changes in the background. Using the proposed method to detect CMEs in the area of superposition with the helmet streamers therefore has an obvious advantage. At the same time, because the background model learning accumulates information over multiframe images, the influence of the noise in a single-frame image on the results can be suppressed, which enhances the robustness of the CME detection. Our CME event identification method is based on the candidate CME area. It uses the fact that the CME region always enlarges gradually; on the one hand, this avoids the effect of noise, and, on the other hand, it can effectively track a complete CME event. Finally, through the detected region information in each frame, it is convenient and effective to extract the morphological and motion information of the CME event.

Automated methods such as CACTus, SEEDS, and CORIMP have a low detection rate of CMEs compared with the CDAW catalog made by human observers. This is mainly because manual labeling in recent years has recorded poor CME events, especially the poor events in the helmet streamers. New approaches are therefore needed to detect subtle changes in the dynamic scenes, and the method we propose performs well in this respect.

Similar to other automated methods, the biggest problem of the adaptive background learning method is the estimation of the various parameters and thresholds, such as the quantization levels of the pixel information, the learning update rate, and the foreground detection threshold. For example, a small foreground detection threshold can reduce not only the false-negative rate but also the accuracy rate. The selection of these empirical values therefore has a certain effect on the algorithm, and further investigation will be carried out in these areas. We are also planning to apply the method to corona images acquired by other devices.

Data Availability

The SOHO/LASCO data used to support the findings of this study are available from the SOHO/LASCO Instrument Homepage (http://lasco-www.nrl.navy.mil/).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 11603016 and 11873062), the Key Scientific Research Foundation Project of Southwest Forestry University (Grant no. 111827), and the Open Research Program of the CAS Key Laboratory of Solar Activity, National Astronomical Observatories (KLSA201909). SOHO is a project of international cooperation between ESA and NASA. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Aeronomie (Germany), Laboratoire d'Astronomie Spatiale (France), and the University of Birmingham (UK). The authors acknowledge the use of the CME catalog generated and maintained at the CDAW Data Center by NASA and the Catholic University of America in cooperation with the Naval Research Laboratory. The CACTus CME catalog is generated and maintained by the SIDC at the Royal Observatory of Belgium. The SEEDS CME catalog has been supported by the NASA Living with a Star Program and the NASA Applied Information Systems Research Program. The CORIMP CME catalog has been provided by the Institute for Astronomy, University of Hawaii.

References

[1] E. R. Christian, Coronal Mass Ejections, NASA/Goddard Space Flight Center, Maryland, USA, 2012.
[2] D. H. Hathaway, Coronal Mass Ejections, NASA/Marshall Space Flight Center, Maryland, USA, 2014.
[3] B. C. Low, "Coronal mass ejections," Reviews of Geophysics, vol. 25, no. 3, pp. 663–675, 2016.
[4] F. A. Cucinotta, "Space radiation risks for astronauts on multiple international space station missions," PLoS One, vol. 9, no. 4, Article ID e96099, 2014.
[5] R. Tousey, "The solar corona," in Space Research XIII, Akademie-Verlag, Berlin, Germany, 1973.
[6] G. E. Brueckner, R. A. Howard, M. J. Koomen et al., "The large angle spectroscopic coronagraph (LASCO)," Solar Physics, vol. 162, no. 1-2, pp. 357–402, 1995.
[7] R. A. Howard, J. D. Moses, and D. G. Socker, "Sun-earth connection coronal and heliospheric investigation (SECCHI)," in Proceedings of the International Symposium on Optical Science and Technology, San Diego, CA, USA, May.
[8] N. Gopalswamy, S. Yashiro, G. Michalek et al., "The SOHO/LASCO CME catalog," Earth, Moon, and Planets, vol. 104, no. 1–4, pp. 295–313, 2009.
[9] E. Robbrecht and D. Berghmans, "Automated recognition of coronal mass ejections (CMEs) in near-real-time data," Astronomy & Astrophysics, vol. 425, no. 3, pp. 1097–1106, 2004.
[10] Y. Boursier, A. Llebaria, F. Goudail et al., "Automatic detection of coronal mass ejections on LASCO-C2 synoptic maps," Proceedings of SPIE—The International Society for Optical Engineering, vol. 5901, 2005.
[11] O. Olmedo, J. Zhang, K. Poland, and K. Borne, "Automatic detection and tracking of coronal mass ejections in coronagraph time series," Solar Physics, vol. 248, no. 2, pp. 485–499, 2008.
[12] C. A. Young and P. T. Gallagher, "Multiscale edge detection in the corona," Solar Physics, vol. 248, no. 2, pp. 457–469, 2008.
[13] N. A. Goussies, M. E. Mejail, J. Jacobo, and G. Stenborg, "Detection and tracking of coronal mass ejections based on supervised segmentation and level set," Pattern Recognition Letters, vol. 31, no. 6, pp. 496–501, 2010.
[14] J. P. Byrne, P. T. Gallagher, R. T. J. McAteer, and C. A. Young, "The kinematics of coronal mass ejections using multiscale methods," Astronomy & Astrophysics, vol. 495, no. 1, pp. 325–334, 2009.
[15] P. T. Gallagher, C. A. Young, J. P. Byrne, and R. T. J. McAteer, "Coronal mass ejection detection using wavelets, curvelets and ridgelets: applications for space weather monitoring," Advances in Space Research, vol. 47, no. 12, pp. 2118–2126.
[16] Z. Zhao-xian, W. Ya-li, and L. Jin-sheng, "A method to automatic detecting coronal mass ejections in coronagraph based on frequency spectrum analysis," in Proceedings of the 2012 International Conference of Modern Computer Science and Applications, pp. 223–227, Springer, Berlin, Heidelberg, June 2013.
[17] A. Bemporad, V. Andretta, M. Pancrazzi et al., "On-board CME detection algorithm for the solar orbiter-METIS coronagraph," Proceedings of SPIE—The International Society for Optical Engineering, vol. 9152, p. 91520K, 2014.
[18] L. Zhang, J. Yin, J. Lin et al., "Detection of coronal mass ejections using multiple features and space-time continuity," Solar Physics, vol. 292, no. 7, p. 91, 2017.
[19] R. Patel, K. Amareswari, V. Pant et al., "Onboard automated CME detection algorithm for the visible emission line coronagraph on ADITYA-L1," Solar Physics, vol. 293, no. 7, pp. 1–25, 2018.
[20] D. B. Dhuri, S. M. Hanasoge, and M. C. M. Cheung, "Machine learning reveals systematic accumulation of electric current in lead-up to solar flares," Proceedings of the National Academy of Sciences, vol. 116, no. 23, pp. 11141–11146, 2019.
[21] X. Huang, H. Wang, L. Xu, J. Liu, R. Li, and X. Dai, "Deep learning based solar flare forecasting model. I: results for line-of-sight magnetograms," The Astrophysical Journal, vol. 856, no. 1, p. 7, 2018.
[22] P. Wang, Y. Zhang, L. Feng et al., "A new automatic tool for CME detection and tracking with machine learning techniques," 2019, https://arxiv.org/abs/1907.08798.
[23] C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, 2000.
[24] L. Li, W. Huang, I. Y. H. Gu et al., "Foreground object detection from videos containing complex background," in Proceedings of the Eleventh ACM International Conference on Multimedia, January 2003.
[25] Q. Mo, F. Dai, D. Liu, J. Qin, Z. Xie, and T. Li, "Development of private processes: a refinement approach," IEEE Access, vol. 7, pp. 31517–31534, 2019.
[26] O. Olmedo, A study of the initiation process of coronal mass ejections and the tool for their auto-detection, Ph.D. thesis, College of Science, pp. 137–149, 2011.
Published: Nov 7, 2019