Conformable Fractional Models of the Stellar Helium Burning via Artificial Neural Networks

Hindawi Advances in Astronomy, Volume 2021, Article ID 6662217, 18 pages. https://doi.org/10.1155/2021/6662217

Research Article

Emad A.-B. Abdel-Salam (1), Mohamed I. Nouh (2), Yosry A. Azzam (2), and M. S. Jazmati (3)

(1) Department of Mathematics, Faculty of Science, New Valley University, El-Kharja 72511, Egypt
(2) Astronomy Department, National Research Institute of Astronomy and Geophysics (NRIAG), 11421 Helwan, Cairo, Egypt
(3) Department of Mathematics, College of Science, Qassim University, Buraydah 51452, P. O. Box 6644, Saudi Arabia

Correspondence should be addressed to Yosry A. Azzam; y.azzam@nriag.sci.eg

Received 15 December 2020; Revised 20 January 2021; Accepted 11 February 2021; Published 16 March 2021

Academic Editor: Kwing Lam Chan

Copyright © 2021 Emad A.-B. Abdel-Salam et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. The helium burning phase represents the second stage in which a star consumes nuclear fuel in its interior. In this stage, the three elements carbon, oxygen, and neon are synthesized. The present paper is twofold: firstly, it develops an analytical solution to the system of conformable fractional differential equations of the helium burning network, using the series expansion method to obtain recurrence relations for the product abundances, that is, helium, carbon, oxygen, and neon. Using four different initial abundances, we calculated 44 gas models covering the range of the fractional parameter α = 0.5–1 with step Δα = 0.05. We found that the effects of the fractional parameter on the product abundances are small, which coincides with the results obtained by a previous study. Secondly, we introduce the mathematical model of the neural network (NN) and develop a neural network algorithm to simulate the helium burning network using a feed-forward process. A comparison between the NN and the analytical models revealed very good agreement for all gas models. We found that the NN can be considered a powerful tool to solve and model nuclear burning networks and could be applied to other nuclear stellar burning networks.

1. Introduction

Nowadays, applications of fractional calculus in physics, astrophysics, and related sciences are widespread [1, 2]. Examples of recent applications of fractional calculus in physics are found in [3], in which the author introduced a generalized fractional scale factor and a time-dependent Hubble parameter obeying an "Ornstein–Uhlenbeck-like fractional differential equation" which serves to describe the accelerated expansion of a nonsingular universe; in [4], in which the author extended the idea of fractional spin based on a two-order fractional derivative operator; and in [5], in which the author generalized the fractional action integral by using the Saigo–Maeda fractional operators defined in terms of the Appell hypergeometric function.

In astrophysics, many problems have been handled using fractional models. Examples of these studies are [6], where the author introduced an analytical solution to the fractional white dwarf equation; [7], in which the authors analyzed fractional incompressible gas spheres; and [8, 9], in which the authors introduced analytical solutions to the first and second types of the Lane–Emden equation in the sense of the modified Riemann–Liouville fractional derivative. Nouh [10] solved the fractional helium burning network using a series expansion method. Abdel-Salam and Nouh [11] and Yousif et al. [12] introduced analytical solutions to the conformable polytropic and isothermal gas spheres.

Simulation of ordinary (ODE) and partial differential equations (PDE) using an artificial neural network (ANN) gives very good accuracy when compared with both numerical and analytical methods. Many authors have dealt with this issue and developed neural algorithms to solve ODEs and PDEs. Dissanayake and Phan-Thien [13] first introduced the concept of approximating the solutions of differential equations with neural networks, where training was carried out by minimizing losses based on the satisfaction of the boundary conditions and of the differential equations themselves. Lagaris et al. [14] demonstrated that the network shape can be selected by construction to satisfy boundary conditions and that automatic differentiation can be used to determine the derivatives that appear in the loss function. This approach has been extended to irregular boundary systems [15, 16], applied to the resolution of PDEs occurring in fluid mechanics [17], and software packages have been developed to facilitate these applications [18–20]. Nouh et al. [21] and Azzam et al. [22] developed a neural network algorithm to solve the first and second types of Lane–Emden equations arising in astrophysics.

The helium burning stage (also known as the triple-alpha process) represents the second stage in which stars transfer nuclear energy from the interior to their surface. In this stage, nuclear energy is almost entirely converted to light when passing through the stellar atmosphere. Helium burning (HB) releases energy per unit fuel of about 6 × 10^23 MeV/g ≈ 10^18 erg/g. The reaction equations that govern the HB network may be written as follows [10]:

  3 He^4 → C^12 + γ + 7.281 MeV,
  C^12 + He^4 → O^16 + γ + 7.150 MeV,        (1)
  O^16 + He^4 → Ne^20 + γ + 4.750 MeV,

where the conversion process from helium to carbon requires a temperature of about 10^8 K.

The fractional kinetic equation (such as the helium burning network) has been solved by many authors. In terms of H-functions, [27] presented a solution to the fractional generalized kinetic equation. The generalized fractional kinetic equations have been solved by [28]. Chaurasia and Pandey [29] solved the fractional kinetic equations in a series form of the Lorenzo–Hartley function.

In the present article, we develop a neural network algorithm to solve the fractional system of differential equations describing the helium burning network. We use the principles of the conformable fractional derivative for the mathematical modeling of the ANN. The architecture used is a feed-forward network with three layers, trained with the backpropagation (BP) algorithm based on the gradient descent delta rule.

The analytical solution is developed using the series expansion method, and a comparison between the ANN and the analytical models is performed to demonstrate the efficiency and applicability of the ANN for solving the conformable helium burning network. The paper is organized as follows: Section 2 introduces the details of the analytical solution of the conformable helium burning model using the series expansion method. Section 3 deals with the mathematical modeling of the neural network technique with its gradient computations and backpropagation training algorithm. Section 4 discusses the results obtained and the comparison between the ANN and analytical models. Section 5 gives the conclusion.
2. Analytical Solution to the Conformable Helium Burning Model

Clayton [23] set up a model for the helium burning process by taking into account the above reactions. If the number of atoms per unit mass of stellar material for helium, carbon, oxygen, and neon is represented by x, y, z, and r, respectively, then the following four equations (also called the kinetic equations) control the time-dependent change in abundance:

  dx/dt = −3a x^3 − b x y − c x z,
  dy/dt = a x^3 − b x y,
  dz/dt = b x y − c x z,        (2)
  dr/dt = c x z,

where a, b, and c are the reaction rates. The system of equations (2) represents the integer version of the helium burning network and is solved simultaneously by computational or analytical methods [23–26]. Appendix A clarifies the derivation of the set of equations (2).

Although techniques of numerical integration may provide very accurate models, it is surely worthwhile to obtain models of the desired precision from complete analytical formulas. Besides, such analytical formulas usually provide much deeper insight into the essence of a model than numerical integration. The power series solution, on the other hand, may serve as the analytical representation of the solution in the absence of a closed-form solution of a particular differential equation. The fractional form of equations (2) is given by [10]

  D^α x = −3a x^3 − b x y − c x z,
  D^α y = a x^3 − b x y,
  D^α z = b x y − c x z,        (3)
  D^α r = c x z.
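For orientation, the integer-order system (2) (the α = 1 limit of (3)) can be integrated numerically and used as a baseline against which the series solution below can be checked. The following is a minimal sketch, not the paper's code; the reaction rates a, b, c and the time span are placeholder values chosen only to exercise the equations.

```python
# Minimal sketch: integrate the integer-order helium burning network, eq. (2).
# The rates a, b, c are assumed placeholder values, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0e-3, 5.0e-4, 2.0e-4        # assumed reaction rates (arbitrary units)

def rhs(t, u):
    x, y, z, r = u                      # He4, C12, O16, Ne20 abundances
    dx = -3.0 * a * x**3 - b * x * y - c * x * z
    dy = a * x**3 - b * x * y
    dz = b * x * y - c * x * z
    dr = c * x * z
    return [dx, dy, dz, dr]

u0 = [1.0, 0.0, 0.0, 0.0]               # pure helium model: x0 = 1, y0 = z0 = r0 = 0
sol = solve_ivp(rhs, (0.0, 2100.0), u0, dense_output=True, rtol=1e-8)
print(sol.y[:, -1])                     # final abundances at t = 2100 s
```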
If T = t^α, then x, y, z, and r can be represented by

  x = Σ_{m=0}^{∞} X_m T^m = X_0 + X_1 t^α + X_2 t^{2α} + X_3 t^{3α} + ⋯,
  y = Σ_{m=0}^{∞} Y_m T^m = Y_0 + Y_1 t^α + Y_2 t^{2α} + Y_3 t^{3α} + ⋯,
  z = Σ_{m=0}^{∞} Z_m T^m = Z_0 + Z_1 t^α + Z_2 t^{2α} + Z_3 t^{3α} + ⋯,        (4)
  r = Σ_{m=0}^{∞} R_m T^m = R_0 + R_1 t^α + R_2 t^{2α} + R_3 t^{3α} + ⋯,

where X_m, Y_m, Z_m, and R_m are constants to be determined.

In equations (3), the abundance of helium (x) appears raised to the power 3. To obtain the fractional derivative of u^n, we apply the fractional derivative of the product of two functions and, using the series expansion method, obtain a recurrence relation for the series coefficients of the term x^3 as follows. Let G = u^n with u = Σ_{m=0}^{∞} A_m x^{αm} and G = Σ_{m=0}^{∞} Q_m x^{αm}; then

  n u^{n−1} D_x^α u = D_x^α G,   or equivalently   n G D_x^α u = u D_x^α G.        (5)

Performing the fractional derivative of equation (5) k times and expanding the products, we get

  n Σ_{j=0}^{k} C(k,j) D^{(j+1)α} u D^{(k−j)α} G = Σ_{j=0}^{k} C(k,j) D^{(j+1)α} G D^{(k−j)α} u,        (6)

where C(k,j) denotes the binomial coefficient, and putting x = 0, we get

  n Σ_{j=0}^{k} C(k,j) D^{(j+1)α} u(0) D^{(k−j)α} G(0) = Σ_{j=0}^{k} C(k,j) D^{(j+1)α} G(0) D^{(k−j)α} u(0).        (7)

Since

  D^{(j+1)α} u(0) = α^{j+1} (j+1)! A_{j+1},   D^{(k−j)α} G(0) = α^{k−j} (k−j)! Q_{k−j},
  D^{(j+1)α} G(0) = α^{j+1} (j+1)! Q_{j+1},   D^{(k−j)α} u(0) = α^{k−j} (k−j)! A_{k−j},        (8)

we have

  n Σ_{j=0}^{k} C(k,j) (j+1)! (k−j)! α^{k+1} A_{j+1} Q_{k−j} = Σ_{j=0}^{k} C(k,j) (j+1)! (k−j)! α^{k+1} Q_{j+1} A_{k−j}.        (9)

After some manipulations, we get

  k! (k+1) A_0 Q_{k+1} = n Σ_{j=0}^{k} k! (j+1) A_{j+1} Q_{k−j} − Σ_{j=0}^{k−1} k! (j+1) A_{k−j} Q_{j+1},        (10)

and putting i = j + 1 and i = k − j in equation (10), we have

  k! (k+1) A_0 Q_{k+1} = n Σ_{i=1}^{k+1} k! i A_i Q_{k+1−i} − Σ_{i=1}^{k} k! (k+1−i) A_i Q_{k+1−i}.        (11)

If m = k + 1, then

  m! A_0 Q_m = n Σ_{i=1}^{m} (m−1)! i A_i Q_{m−i} − Σ_{i=1}^{m−1} (m−1)! (m−i) A_i Q_{m−i}.        (12)

Adding the zero value −(m−1)!(m−m) A_m Q_0 to the second summation of the last equation, we get

  m! A_0 Q_m = n Σ_{i=1}^{m} (m−1)! i A_i Q_{m−i} − Σ_{i=1}^{m} (m−1)! (m−i) A_i Q_{m−i}.        (13)

From the last equation, we can write the coefficients Q_m as

  Q_m = (1 / (m! A_0)) Σ_{i=1}^{m} (m−1)! (i n − m + i) A_i Q_{m−i},   for all m ≥ 1.        (14)

Putting n = 3 in equation (14), we have

  Q_m = (1 / (m! X_0)) Σ_{i=1}^{m} (m−1)! (4i − m) X_i Q_{m−i},   for all m ≥ 1,        (15)

where

  X_0 = A_0,   X_i = A_i,   Q_0 = X_0^3,   Q_1 = 3 X_1 Q_0 / X_0.        (16)

Taking the fractional α-derivative of equations (4), we get

  D_t^α x = Σ_{n=1}^{∞} α n X_n T^{n−1},   D_t^α y = Σ_{n=1}^{∞} α n Y_n T^{n−1},
  D_t^α z = Σ_{n=1}^{∞} α n Z_n T^{n−1},   D_t^α r = Σ_{n=1}^{∞} α n R_n T^{n−1},        (17)

and inserting equations (4) and (17) into equations (3), the series coefficients X_{n+1}, Y_{n+1}, Z_{n+1}, and R_{n+1} can be obtained from

  X_{n+1} = −(1 / (α(n+1))) [3a Q_n + b Σ_{j=0}^{n} X_j Y_{n−j} + c Σ_{j=0}^{n} X_j Z_{n−j}],
  Y_{n+1} = (1 / (α(n+1))) [a Q_n − b Σ_{j=0}^{n} X_j Y_{n−j}],
  Z_{n+1} = (1 / (α(n+1))) [b Σ_{j=0}^{n} X_j Y_{n−j} − c Σ_{j=0}^{n} X_j Z_{n−j}],        (18)
  R_{n+1} = (c / (α(n+1))) Σ_{j=0}^{n} X_j Z_{n−j}.

The recurrence relations corresponding to the integer model are obtained by putting α = 1 in the four formulas of equation (18) [26].
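The recurrences (15) and (18) translate directly into code. The sketch below is our illustration rather than the authors' implementation; the rates a, b, c, the composition, and the truncation order are assumed values, and x0 must be nonzero (as it is for the pure and helium-rich models used later).

```python
# Sketch of the series coefficients of eq. (18) with the Q_m recurrence of eq. (15).
# Rates a, b, c and the truncation order are assumed for illustration only.
from math import factorial

def series_coefficients(alpha, x0, y0, z0, r0, a, b, c, order=10):
    X, Y, Z, R = [x0], [y0], [z0], [r0]
    Q = [x0**3]                                   # Q_0 = X_0^3
    for n in range(order):
        sxy = sum(X[j] * Y[n - j] for j in range(n + 1))
        sxz = sum(X[j] * Z[n - j] for j in range(n + 1))
        X.append(-(3.0 * a * Q[n] + b * sxy + c * sxz) / (alpha * (n + 1)))
        Y.append((a * Q[n] - b * sxy) / (alpha * (n + 1)))
        Z.append((b * sxy - c * sxz) / (alpha * (n + 1)))
        R.append(c * sxz / (alpha * (n + 1)))
        m = n + 1                                 # Q_m from eq. (15), m >= 1
        Q.append(sum(factorial(m - 1) * (4 * i - m) * X[i] * Q[m - i]
                     for i in range(1, m + 1)) / (factorial(m) * X[0]))
    return X, Y, Z, R

# Example: coefficients for alpha = 0.9 and a pure helium model, then the
# truncated series x(t) = sum_m X_m t^(alpha*m) evaluated at t = 10 s.
alpha = 0.9
X, Y, Z, R = series_coefficients(alpha, 1.0, 0.0, 0.0, 0.0, 1e-3, 5e-4, 2e-4)
x_of_t = lambda t: sum(Xm * t**(alpha * m) for m, Xm in enumerate(X))
print(X[:4], x_of_t(10.0))
```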
At n = 0 and with the initial values of the chemical composition X_0 = x_0, Y_0 = y_0, Z_0 = z_0, R_0 = r_0, where x_0, y_0, z_0, and r_0 are arbitrary constants, we get

  X_1 = −(1/α) [3a Q_0 + b X_0 Y_0 + c X_0 Z_0] = −(1/α) [3a x_0^3 + b x_0 y_0 + c x_0 z_0],
  Y_1 = (1/α) [a Q_0 − b X_0 Y_0] = (1/α) [a x_0^3 − b x_0 y_0],
  Z_1 = (1/α) [b X_0 Y_0 − c X_0 Z_0] = (1/α) [b x_0 y_0 − c x_0 z_0],        (19)
  R_1 = (c/α) X_0 Z_0 = (c/α) x_0 z_0,
  Q_0 = X_0^3 = x_0^3,   Q_1 = 3 X_1 Q_0 / X_0 = 3 x_0^2 X_1.

At n = 1, equation (18) gives

  X_2 = −(1/(2α)) [3a Q_1 + b (X_0 Y_1 + X_1 Y_0) + c (X_0 Z_1 + X_1 Z_0)],
  Y_2 = (1/(2α)) [a Q_1 − b (X_0 Y_1 + X_1 Y_0)],
  Z_2 = (1/(2α)) [b (X_0 Y_1 + X_1 Y_0) − c (X_0 Z_1 + X_1 Z_0)],        (20)
  R_2 = (c/(2α)) (X_0 Z_1 + X_1 Z_0),

and, by applying the same scheme, we can determine the rest of the series terms. So the product abundances can be represented by the series solution of equations (3) as

  x = Σ_{m=0}^{∞} X_m t^{αm} = x_0 + X_1 t^α + X_2 t^{2α} + X_3 t^{3α} + ⋯,
  y = Σ_{m=0}^{∞} Y_m t^{αm} = y_0 + Y_1 t^α + Y_2 t^{2α} + Y_3 t^{3α} + ⋯,
  z = Σ_{m=0}^{∞} Z_m t^{αm} = z_0 + Z_1 t^α + Z_2 t^{2α} + Z_3 t^{3α} + ⋯,        (21)
  r = Σ_{m=0}^{∞} R_m t^{αm} = r_0 + R_1 t^α + R_2 t^{2α} + R_3 t^{3α} + ⋯,

with the coefficients given by equations (19) and (20) and, in general, by the recurrences (15) and (18). It is important to mention that x_0, y_0, z_0, and r_0 are arbitrary initial values that enable us to compute gas models with different chemical compositions, that is, pure helium or helium-rich models.
3. Neural Network Algorithm

3.1. Mathematical Modeling of the Problem. To simulate the conformable fractional helium burning network represented by equations (3), we use the neural network architecture shown in Figure 1. Considering the initial conditions X_0 = x_0, Y_0 = y_0, Z_0 = z_0, R_0 = r_0, the neural network can be obtained following the next steps [30].

The neural approximate solution of equations (3) has two terms: the first represents the initial values and the second represents the feed-forward neural network, where x is the input vector and p is the corresponding vector of adjustable weight parameters. The trial solutions are written as

  X_t(x, p) = A_1(x) + f_1(x, N_1(x, p)),
  Y_t(y, p) = A_2(y) + f_2(y, N_2(y, p)),
  Z_t(z, p) = A_3(z) + f_3(z, N_3(z, p)),        (22)
  R_t(r, p) = A_4(r) + f_4(r, N_4(r, p)).

Figure 1: ANN architecture proposed to simulate the conformable fractional helium burning network (input layer, hidden layer with a fixed-input dummy neuron, and output layer producing the four product abundances).

The output of the neural network N_ℓ(x_ℓ, p) is given by

  N_ℓ(x_ℓ, p) = Σ_{i=1}^{H} v_{ℓi} σ(z_i),   ℓ = 1, 2, 3, 4,        (23)

where z_i = Σ_{j=1}^{n} w_{ij} x_j + β_i, w_{ij} is the weight from the input unit j to the hidden unit i, v_{ℓi} is the weight from the hidden unit i to the output, β_i represents the bias of the i-th hidden unit, and σ is the sigmoid activation function of the form σ(x) = 1/(1 + e^{−x}), σ(y) = 1/(1 + e^{−y}), σ(z) = 1/(1 + e^{−z}), and σ(r) = 1/(1 + e^{−r}). By differentiating the network output N with respect to the input vector x, we get

  D_x^α N(x, p) = D_x^α [Σ_{i=1}^{H} v_i σ(z_i)] = Σ_{i=1}^{H} v_i w_i σ^{(α)},   σ^{(α)} = D_x^α σ(x).        (24)

Differentiating equation (24) n times gives

  D_x^{α,…,α} N(x, p) = Σ_i v_i P_i σ_i^{(nα)},   P_i = Π_{k=1}^{n} w_{ik},   σ_i = σ(z_i).        (25)

As a result, the solution of the helium burning network is given as

  X_t(x, p) = x_0 + x N_1(x, p),
  Y_t(y, p) = y_0 + y N_2(y, p),
  Z_t(z, p) = z_0 + z N_3(z, p),        (26)
  R_t(r, p) = r_0 + r N_4(r, p),

which fulfills the initial conditions, since

  X_t(0, p) = x_0 + 0 · N_1(0, p) = x_0,   Y_t(0, p) = y_0 + 0 · N_2(0, p) = y_0,
  Z_t(0, p) = z_0 + 0 · N_3(0, p) = z_0,   R_t(0, p) = r_0 + 0 · N_4(0, p) = r_0.

The conformable derivatives of the trial solutions are

  D^α X_t(x, p) = x^{1−α} N_1(x, p) + x D_x^α N_1(x, p),
  D^α Y_t(y, p) = y^{1−α} N_2(y, p) + y D_y^α N_2(y, p),
  D^α Z_t(z, p) = z^{1−α} N_3(z, p) + z D_z^α N_3(z, p),        (27)
  D^α R_t(r, p) = r^{1−α} N_4(r, p) + r D_r^α N_4(r, p).

3.2. Gradient Computations and Parameter Updating. Using equation (27) to update the network parameters and computing the gradient, the error quantity to be minimized is given by

  E(x) = Σ_i {D^α X_t(x_i, p) + 3a X_t^3(x_i, p) + b X_t(x_i, p) Y_t(y_i, p) + c X_t(x_i, p) Z_t(z_i, p)}^2
       + Σ_i {D^α Y_t(y_i, p) − a X_t^3(x_i, p) + b X_t(x_i, p) Y_t(y_i, p)}^2
       + Σ_i {D^α Z_t(z_i, p) − b X_t(x_i, p) Y_t(y_i, p) + c X_t(x_i, p) Z_t(z_i, p)}^2        (28)
       + Σ_i {D^α R_t(r_i, p) − c X_t(x_i, p) Z_t(z_i, p)}^2,

where the conformable derivatives of the trial solutions are given by equation (27) and D^α N(x, p) is given by equation (25). So the problem is converted into an unconstrained optimization problem. To update the network parameters, we train the neural network for the optimized parameter values; after the training process, we obtain the network parameters and compute the trial solutions (26).

Now, N with one hidden layer is analogous to the conformable fractional derivative. By replacing the hidden unit transfer function with the n-th order fractional derivative, the gradients of the fractional N with respect to v_i, β_i, and w_{ij} can be written as

  D_{v_i}^α N = P_i σ^{(nα)},
  D_{β_i}^α N = v_i P_i σ^{((n+1)α)},        (31)
  D_{w_{ij}}^α N = x_j v_i P_i σ^{((n+1)α)} + α v_i w_{ij}^{1−α} (Π_{k=1, k≠j}^{n} w_{ik}) σ^{(nα)}.

The network parameter updating rule can be given as

  v_i(x+1) = v_i(x) + a D_{v_i}^α N,
  β_i(x+1) = β_i(x) + b D_{β_i}^α N,        (32)
  w_{ij}(x+1) = w_{ij}(x) + c D_{w_{ij}}^α N,

where a, b, and c here are learning rates (not to be confused with the reaction rates of equation (2)), i = 1, 2, …, n and j = 1, 2, …, h.

In the stellar helium burning model based on the ANN, the neuron is the fundamental processing unit; it possesses a local memory and carries out localised information processing. At each neuron, the net input (z) is calculated by weighting and aggregating the received inputs and adding a bias (β). The net input (z) is then passed through a nonlinear activation function, which results in the neuron output u (as seen in Figure 1) [31].
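The following is a minimal sketch of the construction of Sections 3.1 and 3.2: a small sigmoid network, the trial solutions of equation (26), their conformable derivatives as in equation (27), and the residual loss of equation (28). It is our illustration, not the authors' C++ implementation; for brevity the network input here is only the time variable, the weights are random placeholders, the rates a, b, c are assumed values, and the parameter updates of equation (32) are not shown.

```python
# Sketch (not the authors' code): trial solutions (26), conformable derivatives
# (27), and the residual loss (28) for the conformable system (3).
import numpy as np

rng = np.random.default_rng(0)
H = 20                                    # hidden units (the paper's 4-20-4 uses 20)
W = rng.normal(scale=0.01, size=(4, H))   # input->hidden weights, one net per output
B = rng.normal(scale=0.01, size=(4, H))   # hidden biases
V = rng.normal(scale=0.01, size=(4, H))   # hidden->output weights
a, b, c = 1e-3, 5e-4, 2e-4                # assumed reaction rates
x0, y0, z0, r0 = 1.0, 0.0, 0.0, 0.0       # pure helium initial abundances

sig = lambda s: 1.0 / (1.0 + np.exp(-s))

def nets(t):
    """N_l(t, p) and dN_l/dt for l = 1..4 at scalar time t."""
    s = W * t + B                                     # shape (4, H)
    N = np.sum(V * sig(s), axis=1)
    dN = np.sum(V * W * sig(s) * (1.0 - sig(s)), axis=1)
    return N, dN

def trial_and_Dalpha(t, alpha):
    """Trial solutions (26) and their conformable derivatives (27)."""
    N, dN = nets(t)
    u = np.array([x0, y0, z0, r0]) + t * N            # X_t, Y_t, Z_t, R_t
    Du = t**(1.0 - alpha) * (N + t * dN)              # D^alpha of t * N_l(t)
    return u, Du

def loss(ts, alpha):
    """Sum of squared residuals of the conformable system (3), as in eq. (28)."""
    E = 0.0
    for t in ts:
        (X, Y, Z, R), (DX, DY, DZ, DR) = trial_and_Dalpha(t, alpha)
        E += (DX + 3 * a * X**3 + b * X * Y + c * X * Z) ** 2
        E += (DY - a * X**3 + b * X * Y) ** 2
        E += (DZ - b * X * Y + c * X * Z) ** 2
        E += (DR - c * X * Z) ** 2
    return E

print(loss(np.linspace(3.0, 2100.0, 50), alpha=0.9))
```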
3.3. Training of the BP Algorithm. The backpropagation (BP) training algorithm is a gradient algorithm that aims to minimize the average square error between the desired output and the actual output of a feed-forward network. It requires a continuously differentiable nonlinearity. Figure 2 displays a flow chart of an offline backpropagation learning algorithm [32].

Figure 2: Flowchart of an offline backpropagation training algorithm (initialize weights and biases, present the input and target output, compute the hidden and output activations, adjust the weights, and iterate until the RMS error criterion is met).

The algorithm is recursive: it starts at the output units and works back to the first hidden layer. A comparison of the outputs X_j, Y_j, Z_j, R_j at the output layer with the desired outputs tx, ty, tz, tr is performed using an error term of the following form:

  δ_j = X_j (tx_j − X_j)(1 − X_j) + Y_j (ty_j − Y_j)(1 − Y_j) + Z_j (tz_j − Z_j)(1 − Z_j) + R_j (tr_j − R_j)(1 − R_j).        (33)

For the hidden layer, the error term takes the form

  δ_j = {X_j (1 − X_j) + Y_j (1 − Y_j) + Z_j (1 − Z_j) + R_j (1 − R_j)} Σ_k δ_k w_k,        (34)

where δ_j is the error term of the output layer and w_k is the weight between the output and hidden layers. The update of the weight of each connection is implemented by propagating the error in a backward direction from the output layer to the input layer as follows:

  w_{ji}(t+1) = w_{ji}(t) + η δ_j u_j + γ (w_{ji}(t) − w_{ji}(t−1)).        (35)

The value of the learning rate η is chosen such that it is neither too large, which leads to overshooting, nor too small, which leads to a slow convergence rate. The momentum term in the last part of equation (35), affixed with a constant γ (momentum), is used to accelerate the error convergence of the backpropagation learning algorithm and to help push the changes of the energy function over local increases, boosting the weights in the direction of the overall downhill [33]. This term adds a fraction of the most recent weight change to the current weight values. Both η and γ are set at the start of the training phase and determine the network speed and stability [31, 34].

The process is repeated for each input pattern until the output error of the network decreases to a prespecified threshold value. The final weight values are then frozen and utilized to obtain the product abundances during the test session. The quality and success of training of the ANN are assessed by calculating the error for the whole batch of training patterns using the normalized RMS error, defined as

  E_rms = sqrt{ (1/(PJ)) Σ_{p=1}^{P} Σ_{j=1}^{J} [ (tx_{pj} − X_{pj})^2 + (ty_{pj} − Y_{pj})^2 + (tz_{pj} − Z_{pj})^2 + (tr_{pj} − R_{pj})^2 ] },        (36)

where J is the number of output units, P is the number of training patterns, tx_{pj}, ty_{pj}, tz_{pj}, and tr_{pj} are the desired outputs at unit j, and X_{pj}, Y_{pj}, Z_{pj}, and R_{pj} are the actual outputs at the same unit. A zero error denotes that all the output patterns computed by the stellar helium burning model match the expected values perfectly and that the model is fully trained. Similarly, internal unit thresholds are adjusted by treating them as connection weights on links from an input with an auxiliary constant value. The above algorithm was programmed in the C++ programming language running on Windows 7 on a CORE i7 PC.
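As a small sketch of the training diagnostics just described (our illustration, with toy data rather than the paper's models), the momentum update of equation (35) and the normalized RMS error of equation (36) can be written as follows.

```python
# Sketch of eq. (35) (momentum update) and eq. (36) (normalized RMS error).
import numpy as np

def update_with_momentum(w, w_prev, delta, u, eta=0.035, gamma=0.5):
    """w_ji(t+1) = w_ji(t) + eta*delta_j*u_j + gamma*(w_ji(t) - w_ji(t-1))."""
    return w + eta * delta * u + gamma * (w - w_prev)

def rms_error(targets, outputs):
    """Normalized RMS error of eq. (36); rows are training patterns and
    columns are the J output units (here the four abundances X, Y, Z, R)."""
    P, J = targets.shape
    return np.sqrt(np.sum((targets - outputs) ** 2) / (P * J))

# Toy usage with random placeholder data (P = 700 patterns, J = 4 outputs):
t = np.random.rand(700, 4)
o = t + 0.01 * np.random.randn(*t.shape)
print(rms_error(t, o))
```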
4. Results and Discussion

4.1. Data Preparation. Based on the recurrence relations (equation (18)), we computed one pure helium gas model, X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and three helium-rich gas models, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0; X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0; and X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. The fractional parameter covers the range α = 0.5–1 with a step of 0.05. The calculations are performed for a time T = 2100 s. Consequently, we have a total of 44 fractional helium burning models.

Figure 3 plots the product abundances from gas models calculated at α = 0.95, where the solid lines are for the pure helium model with initial abundances X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and the dashed lines are for the helium-rich model with initial abundances X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0. The effects of changing the composition of the gas are remarkable, especially for carbon C^12.

Figure 3: The product abundances (C^12, O^16, and Ne^20) computed analytically at α = 0.95 for two different initial abundances. The solid lines represent the helium network with X_0 = 1, Y_0 = 0, Z_0 = 0, and R_0 = 0, and the dashed lines represent the helium network with X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, and R_0 = 0.

In Figure 4, we illustrate the effects of changing the fractional parameter on the product abundances calculated for a gas model with initial abundances X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. It is clear that the effects of the change of the fractional parameter on the behavior of the product abundances are small. This result is similar to the results obtained by [10] for the models computed in the sense of the modified Riemann–Liouville fractional derivative.

Figure 4: The product abundances (C^12, O^16, and Ne^20) computed analytically at X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, and R_0 = 0 for two different fractional parameters. The solid lines represent the helium network with α = 0.9 and the dashed lines the helium network with α = 0.5.

4.2. ANN Training. For the training of the ANN used to simulate the helium burning network, we used part of the data calculated in the previous subsection. The data used for training of the ANN are shown in the second column of Table 1.

Table 1: Training and testing data for the helium burning network.
  α: training phase 0.5, 0.6, 0.7, 0.8, 0.9, 1; testing phase 0.55, 0.65, 0.75, 0.85, 0.95.
  Time: training phase 0–2100 sec (Δt = 3); testing phase 0–2100 sec (Δt = 3).
  Initial abundances of the HB (both phases): X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0; X_0 = 0.90, Y_0 = 0.1, Z_0 = 0, R_0 = 0; X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0; X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0.

The neural network (NN) architecture used in this paper for the helium burning network has three layers, as shown in Figure 1: the input layer, the hidden layer, and the output layer. Different configurations of 10, 20, and 40 hidden neurons were tested, and we concluded that 20 neurons in a single hidden layer give the best model of the network to simulate the helium burning network. This number of hidden neurons was found to give the minimum RMS error of 0.000005 in an almost similar number of training iterations. As a result, the configuration of the NN we used was 4-20-4, where the input layer has four inputs: the fractional parameter α, the time t (t takes values from 3 to 2100 in steps of 3 seconds), and two of the initial abundances, helium (X_0) and carbon (Y_0). We excluded the other two initial abundances (Z_0 and R_0) because their values are always zero, as indicated in Table 1. The output layer has 4 nodes, which are the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R).

During the training of the NN, we used a learning rate η = 0.035 and a momentum γ = 0.5. These values proved to quicken the convergence of the backpropagation training algorithm without overshooting the solution. To demonstrate the convergence and stability of the computed weight parameters of the network layers, the convergence behaviors of the input layer weights, bias, and output layer weights (w_i, β_i, and v_i) for the helium burning network are displayed in Figure 5. As the figure shows, the weight values are initialized to random values and, after a considerable number of iterations, converge to stable values.

Figure 5: Convergence of the weights of the input, bias, and output layers during the training of the NN used to simulate the helium burning network. (a) Convergence of the input layer weights (w_i). (b) Convergence of the bias (β_i). (c) Convergence of the output layer weights (v_i).
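To make the data layout of Table 1 and the 4-20-4 configuration concrete, the sketch below (our illustration, not the authors' code) enumerates the input grid of (α, t, X_0, Y_0) tuples for the training and testing phases; the corresponding target abundances, which would come from the analytical series solution of Section 2, are not constructed here.

```python
# Sketch of the training/testing input grid implied by Table 1.
import numpy as np

alphas_train = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
alphas_test = [0.55, 0.65, 0.75, 0.85, 0.95]
times = np.arange(3.0, 2100.0 + 1e-9, 3.0)                     # 3 s steps up to 2100 s
compositions = [(1.00, 0.00), (0.95, 0.05), (0.90, 0.10), (0.85, 0.15)]  # (X0, Y0)

def build_grid(alphas):
    rows = []
    for alpha in alphas:
        for X0, Y0 in compositions:
            for t in times:
                rows.append((alpha, t, X0, Y0))                # the 4 network inputs
    return np.array(rows)

train_inputs = build_grid(alphas_train)
test_inputs = build_grid(alphas_test)
print(train_inputs.shape, test_inputs.shape)
```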
4.3. Comparison between the NN Model and the Analytical Model. After the end of the training phase of the NN, we used the final frozen weight values in the test phase to predict the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R). In this test phase, we used values of the fractional parameter α not used in the training phase to predict the helium burning network model. These values are shown in the third column of Table 1. The predicted values show very good agreement with the analytical values for the different helium models. A comparison between the predicted NN model values and the analytical model for two values of the fractional parameter (α = 0.55 and α = 0.95), along with the different helium models shown in Table 1, is displayed in Figures 6–9 for the pure helium gas model, X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and the three helium-rich gas models, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0; X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0; and X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. In all these figures, the very good agreement between the NN model and the analytical model is clear, which recommends the NN as a powerful tool to solve and model nuclear burning networks that could be applied to other nuclear stellar burning networks.

From the performed calculations, one can examine the effect of changing the fractional parameter on the four elements. Figures 6–9 illustrate the fractional product abundances of He^4, C^12, O^16, and Ne^20 as functions of time, from which some features can be obtained. For all gas models, the difference between the abundances of He^4 computed for the two values of the fractional parameter (α = 0.55, 0.95) is very small when t ≤ 200 seconds; after that, the difference becomes larger.
Also, it is noticed clearly that the abundance of C^12 has the same behavior. The behaviors of the fractional product abundances of O^16 and Ne^20 are different from those of He^4 and C^12. The differences between the fractional product abundances of O^16 are large just after the beginning of the ignition, whereas the differences between the fractional product abundances of Ne^20 are very small for t ≤ 100 seconds and increase after that time.

Figure 6: The distribution of the product abundances with time for the helium-rich burning network, X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, and R_0 = 0, for α = 0.55 and α = 0.95 (analytical versus ANN).

Figure 7: The distribution of the product abundances with time for the helium-rich burning network, X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, and R_0 = 0, for α = 0.55 and α = 0.95 (analytical versus ANN).

Figure 8: The distribution of the product abundances with time for the helium-rich burning network, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, and R_0 = 0, for α = 0.55 and α = 0.95 (analytical versus ANN).
Figure 9: The distribution of the product abundances with time for the pure helium burning network, X_0 = 1, Y_0 = 0, Z_0 = 0, and R_0 = 0, for α = 0.55 and α = 0.95 (analytical versus ANN).

5. Conclusion

In the current research, we introduced an analytical solution to the conformable fractional helium burning network via a series expansion method, where we obtained the product abundances of the synthesized elements as functions of time. The calculations are performed for four different initial abundances: (X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0), (X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0), (X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0), and (X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0). The results of the analytical solution revealed that the conformable models have the same behaviors as the fractional models computed using the modified Riemann–Liouville fractional derivative. Second, we used the NN in its feed-forward type to simulate the system of differential equations of the HB. To do that, we performed the mathematical modeling of a NN to simulate the conformable helium burning network. We trained the NN using the backpropagation delta rule algorithm and used training data for models with the fractional parameter range α = 0.5–1 with step Δα = 0.1. We then predicted the fractional models for the range α = 0.55–0.95 with step Δα = 0.1. The comparison with the analytical solutions gives very good agreement for most cases, with a small difference obtained for the model with fractional parameter α = 0.55. The results obtained in this research prove that modeling of nuclear burning networks using a NN gives very good results and validates the NN as an accurate, robust, and trustworthy method to solve and model similar networks; it could be applied to other nuclear stellar burning networks comprised of conformable fractional differential equations.

Appendix

A. The Helium Burning Network

Kinetic equations governing the change in the number density N_i of species i over time describe the nucleosynthesis of the elements in stars [35]:

  dN_i/dt = − Σ_j N_i N_j ⟨σv⟩_{ij} + Σ_{k,l≠i} N_l N_k ⟨σv⟩_{kl},        (A.1)

where ⟨σv⟩_{mn}, for the interaction involving species m and n, constitutes the reaction cross section, and all the reactions producing or destroying species i shall be summed up.
The number density N_i of species i is expressed by its abundance X_i through the relation

  N_i = ρ N_A X_i / A_i,        (A.2)

where N_A is Avogadro's number and A_i is the mass of i in mass units. The reaction rate is given by

  r_{ij} = ρ^2 N_A^2 ⟨σv⟩_{ij} X_i X_j / (A_i A_j).        (A.3)

For the helium burning reactions, the rates of the three reactions in units of s^{−1} can be written as [23]

  r_{3α} = ⟨σv⟩_{3α} (He^4)^3,
  r_{α12} = ⟨σv⟩_{α12} He^4 C^12,        (A.4)
  r_{α16} = ⟨σv⟩_{α16} He^4 O^16.

Now, by putting x = He^4, y = C^12, z = O^16, and r = Ne^20 for the helium, carbon, oxygen, and neon abundances in number density (in units of cm^{−3}), respectively, and implementing equations (A.2)–(A.4), the abundance differential equations (equation (A.1)) can be written as

  dN_1/dt = (ρ N_A / A) dx/dt = −⟨σv⟩_{3α} x^3 − ⟨σv⟩_{α12} x y − ⟨σv⟩_{α16} x z,
  dN_2/dt = (ρ N_A / A) dy/dt = ⟨σv⟩_{3α} x^3 − ⟨σv⟩_{α12} x y,
  dN_3/dt = (ρ N_A / A) dz/dt = ⟨σv⟩_{α12} x y − ⟨σv⟩_{α16} x z,        (A.5)
  dN_4/dt = (ρ N_A / A) dr/dt = ⟨σv⟩_{α16} x z.

Using equations (A.4), equations (A.5) can be written as

  dx/dt = −r_{3α} x − r_{α12} x y − r_{α16} x z,
  dy/dt = r_{3α} x − r_{α12} x y,
  dz/dt = r_{α12} x y − r_{α16} x z,        (A.6)
  dr/dt = r_{α16} x z,

where the abundances x, y, z, and r are expressed in number instead of number density. By replacing the reaction rates r_{3α}, r_{α12}, and r_{α16} in equations (A.6) by a, b, and c, respectively, we obtain equations (2).

Data Availability

The Excel data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge the Academy of Scientific Research and Technology (ASRT), Egypt (Grant no. 6413), under the project Science Up. ASRT is the second affiliation of this research.

References

[1] T. M. Michelitsch, B. A. Collet, A. P. Riascos, A. F. Nowakowski, and F. C. G. A. Nicolleau, "A fractional generalization of the classical lattice dynamics approach," Chaos, Solitons and Fractals, vol. 92, pp. 1339–1351, 2016.
[2] R. Hilfer (ed.), Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000.
[3] R. A. El-Nabulsi, "Implications of the Ornstein-Uhlenbeck-like fractional differential equation in cosmology," Revista Mexicana de Fisica, vol. 62, pp. 240–250, 2016.
[4] R. A. El-Nabulsi, "On generalized fractional spin, fractional angular momentum, fractional momentum operators in quantum mechanics," Few-Body Systems, vol. 61, p. 25, 2020.
[5] R. A. El-Nabulsi, "Saigo-Maeda operators involving the Appell function, real spectra from symmetric quantum Hamiltonians and violation of the second law of thermodynamics for quantum damped oscillators," International Journal of Theoretical Physics, vol. 59, no. 12, pp. 3721–3736, 2020.
[6] R. A. El-Nabulsi, "The fractional white dwarf hydrodynamical nonlinear differential equation and emergence of quark stars," Applied Mathematics and Computation, vol. 218, no. 6, p. 2837, 2011.
[7] S. S. Bayin and J. P. Krisch, "Fractional incompressible stars," Astrophysics and Space Science, vol. 359, no. 2, 2015.
[8] E. A.-B. Abdel-Salam and M. I. Nouh, "Approximate solution to the fractional second-type Lane-Emden equation," Astrophysics, vol. 59, no. 3, p. 398, 2016.
[9] M. I. Nouh and E. A.-B. Abdel-Salam, "Analytical solution to the fractional polytropic gas spheres," European Physical Journal Plus, vol. 133, p. 149, 2018.
[10] M. I. Nouh, "Computational method for a fractional model of the helium burning network," New Astronomy, vol. 66, p. 40, 2019.
[11] E. A.-B. Abdel-Salam and M. I. Nouh, "Conformable fractional polytropic gas spheres," New Astronomy, vol. 76, p. 101322, 2020.
[12] E. A. Yousif, A. M. A. Adam, A. A. Hassaballa, and M. I. Nouh, "Conformable fractional isothermal gas spheres," New Astronomy, vol. 84, p. 101511, 2021.
[13] M. W. M. G. Dissanayake and N. Phan-Thien, "Neural-network-based approximations for solving partial differential equations," Communications in Numerical Methods in Engineering, vol. 10, no. 3, pp. 195–201, 1994.
[14] I. E. Lagaris, A. Likas, and D. I. Fotiadis, "Artificial neural networks for solving ordinary and partial differential equations," IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–1000, 1998.
[15] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041–1049, 2000.
[16] K. S. McFall and J. R. Mahan, "Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions," IEEE Transactions on Neural Networks, vol. 20, no. 8, pp. 1221–1233, 2009.
[17] M. Baymani, A. Kerayechian, and S. Effati, "Artificial neural networks approach for solving Stokes problem," Applied Mathematics, vol. 1, no. 4, pp. 288–292, 2010.
[18] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, "DeepXDE: a deep learning library for solving differential equations," 2020, http://arxiv.org/abs/1907.04502.
[19] A. Koryagin, R. Khudorozkov, and S. Tsimfer, "PyDEns: a Python framework for solving differential equations with neural networks," 2019, http://arxiv.org/abs/1909.11544.
[20] F. Chen, D. Sondak, P. Protopapas et al., "NeuroDiffEq: a Python package for solving differential equations with neural networks," Journal of Open Source Software, vol. 5, no. 46, p. 1931, 2020.
[21] M. I. Nouh, Y. A. Azzam, and E. A.-B. Abdel-Salam, "Modeling fractional polytropic gas spheres using artificial neural network," Neural Computing and Applications, 2020.
[22] Y. A. Azzam, E. A.-B. Abdel-Salam, and M. I. Nouh, "Artificial neural network modeling of the conformable fractional isothermal gas spheres," Revista Mexicana de Astronomía y Astrofísica, vol. 57, no. 1, 2021.
[23] D. D. Clayton, Principles of Stellar Evolution and Nucleosynthesis, University of Chicago Press, Chicago, IL, USA, 1983.
[24] H. L. Duorah and R. S. Kushwaha, "Helium-burning reaction products and the rate of energy generation," The Astrophysical Journal, vol. 137, p. 566, 1963.
[25] W. R. Hix and F.-K. Thielemann, "Computational methods for nucleosynthesis and nuclear energy generation," Journal of Computational and Applied Mathematics, vol. 109, no. 1-2, p. 321, 1999.
[26] M. I. Nouh, M. A. Sharaf, and A. S. Saad, "Symbolic analytical solutions for the abundances differential equations of the helium burning phase," Astronomische Nachrichten, vol. 324, no. 5, p. 432, 2003.
[27] H. J. Haubold and A. M. Mathai, "The fractional kinetic equation and thermonuclear functions," Astrophysics and Space Science, vol. 273, no. 1/4, pp. 53–63, 2000.
[28] R. K. Saxena, A. M. Mathai, and H. J. Haubold, "On fractional kinetic equations," Astrophysics and Space Science, vol. 282, no. 1, pp. 281–287, 2002.
[29] V. B. L. Chaurasia and S. C. Pandey, "Computable extensions of generalized fractional kinetic equations in astrophysics," Research in Astronomy and Astrophysics, vol. 10, no. 1, p. 22, 2010.
[30] N. Yadav, A. Yadav, and M. Kumar, An Introduction to Neural Network Methods for Differential Equations, Springer, Berlin, Germany, 2015.
[31] H. K. Elminir, Y. A. Azzam, and F. I. Younes, "Prediction of hourly and daily diffuse fraction using neural network, as compared to linear regression models," Energy, vol. 32, no. 8, pp. 1513–1523, 2007.
[32] T. Fukuda, Y. Hasegawa, K. Sekiyama, and T. Aoyama, Multi-Locomotion Robotic Systems: New Concepts of Bio-Inspired Robotics, Springer, Berlin, Germany, 2012.
[33] C. Denz, Optical Neural Networks, Springer, Berlin, Germany, 1998.
[34] I. A. Basheer and M. Hajmeer, "Artificial neural networks: fundamentals, computing, design, and application," Journal of Microbiological Methods, vol. 43, no. 1, pp. 3–31, 2000.
[35] V. Kourganoff, Introduction to the Physics of Stellar Interiors, D. Reidel Publishing Company, Dordrecht, The Netherlands, 1973.

Hindawi Advances in Astronomy Volume 2021, Article ID 6662217, 18 pages https://doi.org/10.1155/2021/6662217 Research Article Conformable Fractional Models of the Stellar Helium Burning via Artificial Neural Networks 1 2 2 Emad A.-B. Abdel-Salam , Mohamed I. Nouh , Yosry A. Azzam , and M. S. Jazmati Department of Mathematics, Faculty of Science, New Valley University, El-Kharja 72511, Egypt Astronomy Department, National Research Institute of Astronomy and Geophysics (NRIAG), 11421 Helwan, Cairo, Egypt Department of Mathematics, College of Science, Qassim University, Buraydah 51452, P. O. Box 6644, Saudi Arabia Correspondence should be addressed to Yosry A. Azzam; y.azzam@nriag.sci.eg Received 15 December 2020; Revised 20 January 2021; Accepted 11 February 2021; Published 16 March 2021 Academic Editor: Kwing Lam Chan Copyright © 2021 Emad A.-B. Abdel-Salam et al. ,is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ,e helium burning phase represents the second stage that the star used to consume nuclear fuel in its interior. In this stage, the three elements, carbon, oxygen, and neon, are synthesized. ,e present paper is twofold: firstly, it develops an analytical solution to the system of the conformable fractional differential equations of the helium burning network, where we used, for this purpose, the series expansion method and obtained recurrence relations for the product abundances, that is, helium, carbon, oxygen, and neon. Using four different initial abundances, we calculated 44 gas models covering the range of the fractional parameter α � 0.5 − 1 with step Δα � 0.05. We found that the effects of the fractional parameter on the product abundances are small which coincides with the results obtained by a previous study. Secondly, we introduced the mathematical model of the neural network (NN) and developed a neural network algorithm to simulate the helium burning network using a feed-forward process. A comparison between the NN and the analytical models revealed very good agreement for all gas models. We found that NN could be considered as a powerful tool to solve and model nuclear burning networks and could be applied to the other nuclear stellar burning networks. Saigo–Maeda fractional operators defined in terms of the 1. Introduction Appell hypergeometric function. Nowadays, applications of fractional calculus in physics, In astrophysics, many problems have been handled using astrophysics, and related science are widely used [1, 2]. fractional models. Examples of these studies are in [6], where Examples of the recent applications of the fractional calculus the author introduced an analytical solution to the fractional in physics are found in [3] in which the author has intro- white dwarf equation, in [7] in which the authors analyzed duced a generalized fractional scale factor and a time-de- the fractional incompressible gas spheres and in [8, 9] in pendent Hubble parameter obeying an which the authors introduced an analytical solution to the “Ornstein–Uhlenbeck-like fractional differential equation” first and second types of Lane–Emden equation in the sense which serves to describe the accelerated expansion of a of modified Riemann–Liouville fractional derivative. 
Nouh nonsingular universe, in [4] in which the author extended in [10] solved the fractional helium burning network using a the idea of fractional spin based on two-order fractional series expansion method. Abdel-Salam and Nouh [11] and derivative operator and in [5] in which the author has Yousif et al. [12] introduced analytical solutions to the generalized the fractional action integral by using the conformable polytropic and isothermal gas spheres. 2 Advances in Astronomy Simulation of ordinary (ODE) and partial differential where a, b, and c are the reaction rates. equations (PDE) using an artificial neural network (ANN) ,e system of equation (2) represents the integer version gives very good accuracy when compared with both the of the helium burning network and solved simultaneously by numerical and analytical methods. Many authors dealt with computational or analytical methods [23–26]. Appendix A this issue and developed many neural algorithms to solve includes clarification of the derivation of the set of equation ODE and PDE. Dissanayake and Phan-,ien [13] first in- (2). ,e fractional kinetic equation (like helium burning troduced the concept of approximating the solutions of network) has been solved by many authors. In terms of differential equations with neural networks, where training H-functions, [27] presented a solution to the fractional was carried out by minimizing losses based on the satis- generalized kinetic equation. ,e generalized fractional faction of the network with the boundary conditions and the kinetic equations have been solved by [28]. Chaurasia and differential equations themselves. Lagaris et al. [14] dem- Pandey [29] solved the fractional kinetic equations in a series onstrated that the network shape could be selected by form of the Lorenzo–Hartley function. construction to satisfy boundary conditions and that au- In the present article, we developed a neural network al- tomatic differentiation could be used to determine the de- gorithm to solve the fractional system of differential equations rivatives that appear in the loss function. ,is approach has describing the helium burning network. We use the principles been extended to irregular boundary systems [15, 16], ap- of the conformable fractional derivative for the mathematical plied to the resolution of PDEs occurring in fluid mechanics modeling of the ANN. We used in this research an architecture [17], and software packages have been developed to facilitate of ANN which is the feed-forward network having three layers their applications [18–20]. Nouh et al. [21] and Azzam et al. and trained using the algorithm of backpropagation (BP) based [22] developed a neural network algorithm to solve the first on the gradient descent delta rule. and second types of Lane–Emden equations arising in ,e analytical solution is developed using the series astrophysics. expansion method and a comparison between the ANN ,e helium burning stage (also known as the triple- and analytical models is performed to declare the effi- alpha process) represents the second stage where the stars ciency and applicability of the ANN for solving the undergo the transfer of nuclear energy from the interior to conformable helium burning network. ,e paper is or- their surface. In this stage, nuclear energy is almost ganized as follows: Section 2 introduces the details of the converted to light when passing through the stellar at- analytical solution of the conformable helium burning mosphere. 
Helium burning (HB) releases energy per unit model using the series expansion method. Section 3 deals 23 18 fuel of about 6 ×10 MeV/g ≈ 10 erg/g. ,e reaction with the mathematical modeling of the neural network equations that govern the HB network may be written as technique with its gradient computations and back- follows [10]: propagation training algorithm. Section 4 discusses the results obtained and the comparison between the ANN 4 12 3He ⟶ C + c + 7.281 Mev, and analytical models. Section 5 gives the details of the 12 4 16 conclusion. (1) C + He ⟶ O + c + 7.150 Mev, 16 4 20 O + He ⟶ Ne + c + 4.750 Mev, 2. Analytical Solution to the Conformable where the conversion process from helium to carbon needs Helium Burning Model 10 K. By being certainly valid, the techniques of numerical integration Clayton [23] set up a model for the helium burning process may provide very accurate models. However, it is surely by taking into account the above reactions. If the number of worthwhile to obtain modeling with the desired precision if atoms per unit of stellar material mass for helium, carbon, complete analytical formulas are created. Besides, these ana- oxygen, and neon is represented by x, y, z, and r, respectively, lytical formulas usually provide much more deep insight into the then the next four equations (also maybe called the kinetic essence of a model than numerical integration. ,e power series equations) control the time-dependent change in abundance: solution, on the other hand, may serve as the analytical rep- dx resentation of the solution in the absence of a closed analysis � − 3ax − bxy − cxz, dt solution for a particular differential equation. ,e fractional form of equation (2) is given by [10] dy � ax − bxy, α 3 dt D x � − 3ax − bxy − cxz, (2) α 3 dz D y � ax − bxy, (3) � bxy − cxz, D z � bxy − cxz, dt D r � cxz. dr � cxz. dt If T � t , then x, y, z, r could be represented by Advances in Astronomy 3 ∞ n− 1 α α nu D u � D G, m 2 3 x x x � 􏽘 X T � X + X T + X T + X T + · · · m 0 1 2 3 (5) n α α m�0 or nu D u � uD G. x x α 2α 3α � X + X t + X t + X t + · · · , Performing the fractional derivative to equation (5) k 0 1 2 3 times, we get m 2 3 k times k times y � 􏽘 Y T � Y + Y T + Y T + Y T + . . . ︷α,...,α α ︷α,...,α α m 0 1 2 3 D 􏼂nGD u􏼃 � D uD G􏼁 , x x m�0 k k j+1 times k− j times j+1 times k− j times k k ︷α,...,α ︷α,...,α ︷α,...,α ︷α,...,α α 2α 3α � Y + Y t + Y t + Y t + · · · , or n 􏽘􏼠 􏼡D uD G � 􏽘􏼠 􏼡D GD u, 0 1 2 3 j j j�0 j�0 (4) m 2 3 (6) z � 􏽘 Z T � Z + Z T + Z T + Z T + · · · m 0 1 2 3 m�0 and putting X � 0, we get α 2α 3α j+1 times k− j times � Z + Z t + Z t + Z t + · · · , k 0 1 2 3 ︷α,...,α ︷α,...,α n 􏽘􏼠 􏼡D u(0)D G(0) j�0 m 2 3 (7) r � 􏽘 R T � R + R T + R T + R T + · · · m 0 1 2 3 j+1 times k− j times m�0 ︷α,...,α ︷α,...,α � 􏽘 D G(0)D u(0). 􏼠 􏼡 α 2α 3α j�0 � R + R t + R t + R t + · · · , 0 1 2 3 Since where X , Y , Z , R are constants to be determined. m m m m j+1 times k− j times In equation (2), the left side of the system depicts the ︷α,...,α (j+1) ︷α,...,α D u(0) � α A , D G(0) j+1 abundances of those elements in which the abundance of helium n j+1 times (x) is raised to power 3. To obtain the fractional derivative of u , (k− j) ︷α,...,α (j+1) � α Q , D G(0) � α Q , k− j j+1 we apply the fractional derivative of the product of the two k− j times functions. Using the series expansion method, we obtained the ︷α,...,α (k− j) 3 D u(0) � α Q , j+1 recurrence relation of the term x by the following. ∞ ∞ α n α αm αm Let D u � D G, with u � 􏽐 A x , C � 􏽐 Q x . 
(8) x x m�0 m m�0 m ,at is, we have (j+1) (k− j) ⎛ ⎜ ⎞ ⎟ ⎝ ⎠ n 􏽘 (j + 1)!α A (k − j)!α Q j+1 k− j j�0 (j+1) (k− j) ⎛ ⎜ ⎞ ⎟ ⎝ ⎠ � 􏽘 (j + 1)!α Q (k − j)!α A , (9) j+1 k− j j�0 k k k k (k+1) (k+1) ⎛ ⎜ ⎞ ⎟ ⎛ ⎜ ⎞ ⎟ ⎝ ⎠ ⎝ ⎠ ⇒n 􏽘 (j + 1)!(k − j)!α A Q � 􏽘 (j + 1)!(k − j)!α Q A . j+1 k− j j+1 k− j j�0 j�0 j j 4 Advances in Astronomy After some manipulations, we get taking fractional differentiation α-derivatives to equation k k− 1 (4), we get k!(k + 1)A Q � n 􏽘 k!(j + 1)A Q − 􏽘 k!(j + 1)A Q , 0 k+1 j+1 k− j k− j j+1 j�0 j�0 ∞ α n− 1 D x � 􏽘 αnX T , (10) t n n�1 and putting i � j + 1 and i � k − j in equation (10), we have α n− 1 D y � 􏽘 αnY T , k+1 k t n n�1 k!(k + 1 − i)A Q . k!(k + 1)A Q � n 􏽘 k!(i)A Q − 􏽘 0 k+1 i k+1− i i k+1− i i�1 i�1 (17) (11) α n− 1 D z � 􏽘 αnZ T , t n n�1 If m � k + 1, then m− 1 m!A Q � n 􏽘 (m − 1)!(i)A Q − 􏽘 (m − 1)!(m − i)A Q . 0 m i m− i i m− i α n− 1 D r � 􏽘 αnR T , i�1 i�1 t n n�1 (12) Adding the zero value − (m − 1)!(m − m)A Q to the 􏼈 􏼉 and inserting equations (4) and (17) into equation (3), the m 0 second summation of the last equation, we get series coefficients X , Y , Z , and R could be ob- n+1 n+1 n+1 n+1 m m− 1 tained from m!A Q � n 􏽘(m − 1)!(i)A Q − 􏽘 (m − 1) 0 m i m− i i�1 i�1 n n ⎡ ⎢ ⎤ ⎥ ⎢ ⎥ !(m − i)A Q − (m − 1)!(m − m)A Q , ⎣ ⎦ i m− i m 0 X � − 3aQ + b 􏽘 X Y + c 􏽘 X Z , n+1 n j n− j j n− j α(n + 1) m m j�0 j�0 m!A Q � n 􏽘(m − 1)!(i)A Q − 􏽘(m − 1)!(m − i)A Q . 0 m i m− i i m− i i�1 i�1 (13) ⎢ ⎥ ⎡ ⎢ ⎤ ⎥ ⎣ ⎦ Y � aQ − b 􏽘 X Y , n+1 n j n− j α(n + 1) From the last equation, we can write the coefficients Q as m j�0 Q � 􏽘 (m − 1)!(in − m + i)A Q , ∀m≥ 1. m i m− i m!A n n i�1 ⎡ ⎢ ⎤ ⎥ ⎢ ⎥ ⎣ ⎦ Z � b 􏽘 X Y − c 􏽘 X Z , n+1 j n− j j n− j (14) α(n + 1) j�0 j�0 Putting n � 3 in equation (14), we have Q � 􏽘 (m − 1)!(4i − m)X Q , ∀m≥ 1, (15) m R � 􏽘 X Z . i m− i n+1 j n− j m!X 0 α(n + 1) i�1 j�0 (18) where X � A , 0 0 ,e recurrence relations corresponding to the integer X � A , i i model could be obtained by putting α � 1 in the last four formulas of equation (18) [26]. 3 (16) Q � X , 0 0 At n � 0 and with the initial values of the chemical composition X � x , Y � y , Z � z , R � r , where 3X Q 0 0 0 0 0 0 0 0 1 0 Q � , 1 x , y , z , r are arbitrary constants, we get 0 0 0 0 0 Advances in Astronomy 5 X � − 􏼂3aQ + bX Y + cX Z 􏼃 1 0 0 0 0 0 � − 3ax + bx y + cx y , 􏽨 􏽩 0 0 0 0 0 1 1 Y � 􏼂aQ − bX Y 􏼃 � 􏽨ax − bx y 􏽩, 1 0 0 0 0 0 α α (19) 1 1 Z � bX Y − cX Z � bx y − cx z , 􏼂 􏼃 􏼂 􏼃 1 0 0 0 0 0 0 0 0 α α c c R � X Z � x z , 1 0 0 0 0 α α 3X Q 3x 3 3 1 0 0 3 Q � X � x , Q � � − 3ax + bx y + cx y , 􏽨 􏽩 0 0 0 1 0 0 0 0 0 X α at n � 1, we get X � − 􏼂3aQ + b X Y + X Y 􏼁 + c X Z + X Z 􏼁􏼃 2 1 0 1 1 0 0 1 1 0 2α 1 9ax x y 3 0 3 0 3 � − 􏼢− 􏽨3ax + bx y + cx y 􏽩 + b􏼒 􏽨ax − bx y 􏽩 − 􏽨3ax + bx y + cx y 􏽩􏼓 0 0 0 0 0 0 0 0 0 0 0 0 0 2α α α α x z 0 0 3 + c bx y − cx z − 3ax + bx y + cx y 􏼒 􏼂 􏼃 􏽨 􏽩􏼓􏼕 0 0 0 0 0 0 0 0 0 α α Y � 􏼂aQ − b X Y + X Y 􏼁􏼃 2 1 0 1 1 0 2α 1 9ax x y 0 3 0 3 0 3 � − 3ax + bx y + cx y + b ax − bx y − 3ax + bx y + cx y , 􏼢 􏽨 􏽩 􏼒 􏽨 􏽩 􏽨 􏽩􏼓􏼣 0 0 0 0 0 0 0 0 0 0 0 0 0 2α α α α Z � 􏼂b X Y + X Y 􏼁 − c X Z + X Z 􏼁􏼃 2 0 1 1 0 0 1 1 0 2α 1 x y x z 0 3 0 3 0 0 3 � 􏼔b􏼒 􏽨ax − bx y 􏽩 − 􏽨3ax + bx y + cx y 􏽩􏼓 − c􏼒 􏼂bx y − cx z 􏼃 − 􏽨3ax + bx y + cx y 􏽩􏼓􏼕, 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2α α α α α c c x z 0 0 R � X Z + X Z 􏼁 � 􏼒 􏼂bx y − cx z 􏼃 − 􏽨3ax + bx y + cx y 􏽩􏼓, 2 0 1 1 0 0 0 0 0 0 0 0 0 0 2α 2α α α (20) 6 Advances in Astronomy and by applying the same scheme, we can determine the rest of the series terms. 
Collecting these coefficients, the product abundances can be represented by the series solution of equation (3) as
\[
x = \sum_{m=0}^{\infty} X_{m}t^{\alpha m}
  = x_{0} - \frac{1}{\alpha}\left[3ax_{0}^{3} + bx_{0}y_{0} + cx_{0}z_{0}\right]t^{\alpha} + X_{2}t^{2\alpha} + \cdots,\qquad
y = \sum_{m=0}^{\infty} Y_{m}t^{\alpha m}
  = y_{0} + \frac{1}{\alpha}\left[ax_{0}^{3} - bx_{0}y_{0}\right]t^{\alpha} + Y_{2}t^{2\alpha} + \cdots,
\]
\[
z = \sum_{m=0}^{\infty} Z_{m}t^{\alpha m}
  = z_{0} + \frac{1}{\alpha}\left[bx_{0}y_{0} - cx_{0}z_{0}\right]t^{\alpha} + Z_{2}t^{2\alpha} + \cdots,\qquad
r = \sum_{m=0}^{\infty} R_{m}t^{\alpha m}
  = r_{0} + \frac{c}{\alpha}x_{0}z_{0}\,t^{\alpha} + R_{2}t^{2\alpha} + \cdots,
\tag{21}
\]
where the second-order coefficients X_2, Y_2, Z_2, and R_2 are given by equation (20). It is important to mention that x_0, y_0, z_0, and r_0 are arbitrary initial values, which enables us to compute gas models with different chemical compositions, that is, pure helium or rich helium models.

3. Neural Network Algorithm

3.1. Mathematical Modeling of the Problem. To simulate the conformable fractional helium burning network represented by equation (3), we use the neural network architecture shown in Figure 1. Considering the initial conditions X_0 = x_0, Y_0 = y_0, Z_0 = z_0, and R_0 = r_0, the neural network can be constructed following the next steps [30].

Figure 1: ANN architecture proposed to simulate the conformable fractional helium burning network: an input layer, one hidden layer of sigmoid units σ(z_i) with weights w_{ij} and biases β_i supplied by a dummy neuron with fixed input −1, and an output layer with weights v_i producing the four outputs X, Y, Z, and R.

The neural approximate solution of equation (3) has two terms: the first represents the initial values and the second represents the feed-forward neural network, where x is the input vector and p is the corresponding vector of adjustable weight parameters. The trial solutions are written as
\[
X_{t}(x,p) = A_{1}(x) + f\!\left(x, N_{1}(x,p)\right),\qquad
Y_{t}(y,p) = A_{2}(y) + f\!\left(y, N_{2}(y,p)\right),\qquad
Z_{t}(z,p) = A_{3}(z) + f\!\left(z, N_{3}(z,p)\right),\qquad
R_{t}(r,p) = A_{4}(r) + f\!\left(r, N_{4}(r,p)\right).
\tag{22}
\]
The output of each feed-forward network N_ℓ(x_ℓ, p) is given by
\[
N_{\ell}\!\left(x_{\ell},p\right) = \sum_{i=1}^{H} v_{i}\,\sigma\!\left(z_{i}\right),\qquad \ell = 1,2,3,4,
\tag{23}
\]
where z_i = \sum_{j=1}^{n} w_{ij}x_{j} + \beta_{i}, w_{ij} is the weight from input unit j to hidden unit i, v_i is the weight from hidden unit i to the output, β_i is the bias of the i-th hidden unit, and σ is the sigmoid activation function σ(s) = 1/(1 + e^{−s}) applied to each net input. By differentiating the network output N with respect to the input vector, we get
\[
D_{x}^{\alpha}N(x,p)
= D_{x}^{\alpha}\!\left[\sum_{i=1}^{H} v_{i}\,\sigma\!\left(z_{i}\right)\right]
= \sum_{i=1}^{H} v_{i}w_{ij}\,\sigma^{(\alpha)}_{i},\qquad
\sigma^{(\alpha)} = D_{x}^{\alpha}\sigma(x),\qquad
z_{i} = \sum_{j=1}^{n} w_{ij}x_{j} + \beta_{i}.
\tag{24}
\]
Differentiating equation (24) n times gives
\[
\overbrace{D^{\alpha,\ldots,\alpha}}^{n\ \text{times}} N(x,p)
= \sum_{i} v_{i}P_{i}\,\sigma^{(n\alpha)}_{i},\qquad
P_{i} = \prod_{k=1}^{n} w_{ik},\qquad
\sigma_{i} = \sigma\!\left(z_{i}\right).
\tag{25}
\]
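As a concrete illustration of equations (23)–(24), the following Python sketch (illustrative only; the hidden-layer size and the random weights are assumptions, not values taken from the paper) evaluates a single-hidden-layer sigmoid network for a scalar input and its conformable derivative through the identity D^α f(x) = x^{1−α} f′(x), valid for 0 < α ≤ 1.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def network(x, w, beta, v):
    """Single-hidden-layer output N(x, p) = sum_i v_i * sigmoid(w_i * x + beta_i),
    equation (23) for a scalar input x."""
    return np.dot(v, sigmoid(w * x + beta))

def conformable_dN(x, w, beta, v, alpha):
    """D^alpha_x N(x, p) = x^(1-alpha) * dN/dx, cf. equation (24);
    dN/dx = sum_i v_i * w_i * sigmoid'(z_i)."""
    s = sigmoid(w * x + beta)
    dN_dx = np.dot(v * w, s * (1.0 - s))
    return x ** (1.0 - alpha) * dN_dx

rng = np.random.default_rng(0)
H = 20                                   # hidden units (the paper later uses 20)
w, beta, v = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)

x, alpha = 0.7, 0.95
print(network(x, w, beta, v), conformable_dN(x, w, beta, v, alpha))
```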
As a result, the solution of the helium burning network is given as
\[
X_{t}(x,p) = x_{0} + xN_{1}(x,p),\qquad
Y_{t}(y,p) = y_{0} + yN_{2}(y,p),\qquad
Z_{t}(z,p) = z_{0} + zN_{3}(z,p),\qquad
R_{t}(r,p) = r_{0} + rN_{4}(r,p),
\tag{26}
\]
which fulfills the initial conditions, since X_t(0, p) = x_0 + 0 · N_1(0, p) = x_0, Y_t(0, p) = y_0, Z_t(0, p) = z_0, and R_t(0, p) = r_0. The conformable derivatives of the trial solutions are
\[
D^{\alpha}X_{t}(x,p) = x^{1-\alpha}N_{1}(x,p) + xD_{x}^{\alpha}N_{1}(x,p),\qquad
D^{\alpha}Y_{t}(y,p) = y^{1-\alpha}N_{2}(y,p) + yD_{y}^{\alpha}N_{2}(y,p),
\]
\[
D^{\alpha}Z_{t}(z,p) = z^{1-\alpha}N_{3}(z,p) + zD_{z}^{\alpha}N_{3}(z,p),\qquad
D^{\alpha}R_{t}(r,p) = r^{1-\alpha}N_{4}(r,p) + rD_{r}^{\alpha}N_{4}(r,p).
\tag{27}
\]

3.2. Gradient Computations and Parameter Updating. Using equations (26) and (27), the error quantity that has to be minimized in order to fix the network parameters is
\[
E(p) = \sum_{i}\left\{ D^{\alpha}X_{t}\!\left(x_{i},p\right) + 3aX_{t}^{3}\!\left(x_{i},p\right) + bX_{t}\!\left(x_{i},p\right)Y_{t}\!\left(y_{i},p\right) + cX_{t}\!\left(x_{i},p\right)Z_{t}\!\left(z_{i},p\right) \right\}^{2}
+ \sum_{i}\left\{ D^{\alpha}Y_{t}\!\left(y_{i},p\right) - aX_{t}^{3}\!\left(x_{i},p\right) + bX_{t}\!\left(x_{i},p\right)Y_{t}\!\left(y_{i},p\right) \right\}^{2}
\]
\[
+ \sum_{i}\left\{ D^{\alpha}Z_{t}\!\left(z_{i},p\right) - bX_{t}\!\left(x_{i},p\right)Y_{t}\!\left(y_{i},p\right) + cX_{t}\!\left(x_{i},p\right)Z_{t}\!\left(z_{i},p\right) \right\}^{2}
+ \sum_{i}\left\{ D^{\alpha}R_{t}\!\left(r_{i},p\right) - cX_{t}\!\left(x_{i},p\right)Z_{t}\!\left(z_{i},p\right) \right\}^{2},
\tag{28}
\]
where, as in equation (27),
\[
D^{\alpha}X_{t}(x,p) = x^{1-\alpha}N_{1}(x,p) + xD_{x}^{\alpha}N_{1}(x,p),\quad\ldots,\quad
D^{\alpha}R_{t}(r,p) = r^{1-\alpha}N_{4}(r,p) + rD_{r}^{\alpha}N_{4}(r,p),
\tag{29}
\]
and D^α N(x, p) is given by equation (25). So, the problem is converted into an unconstrained optimization problem. To update the network parameters, we train the neural network for the optimized parameter values; after the training process, we obtain the network parameters and compute
\[
X_{t}(x,p) = x_{0} + xN_{1}(x,p),\qquad
Y_{t}(y,p) = y_{0} + yN_{2}(y,p),\qquad
Z_{t}(z,p) = z_{0} + zN_{3}(z,p),\qquad
R_{t}(r,p) = r_{0} + rN_{4}(r,p).
\tag{30}
\]
Now, N with one hidden layer is analogous to the conformable fractional derivative. By replacing the hidden-unit transfer function with its n-th order fractional derivative, the fractional gradients of N with respect to v_i, β_i, and w_{ij} can be written as
\[
D_{v_{i}}^{\alpha}N = P_{i}\,\sigma^{(n\alpha)}_{i},\qquad
D_{\beta_{i}}^{\alpha}N = v_{i}P_{i}\,\sigma^{((n+1)\alpha)}_{i},\qquad
D_{w_{ij}}^{\alpha}N = x_{j}v_{i}P_{i}\,\sigma^{((n+1)\alpha)}_{i}
+ \alpha v_{i} w_{ij}^{1-\alpha}\!\left(\prod_{k=1,\,k\neq j}^{n} w_{ik}\right)\sigma^{(n\alpha)}_{i}.
\tag{31}
\]
The network parameter updating rule can then be given as
\[
v_{i}(k+1) = v_{i}(k) + a\,D_{v_{i}}^{\alpha}N,\qquad
\beta_{i}(k+1) = \beta_{i}(k) + b\,D_{\beta_{i}}^{\alpha}N,\qquad
w_{ij}(k+1) = w_{ij}(k) + c\,D_{w_{ij}}^{\alpha}N,
\tag{32}
\]
where a, b, and c are learning rates, i = 1, 2, …, n, and j = 1, 2, …, h.

In the stellar helium burning model based on the ANN, the neuron is the fundamental processing unit; it possesses a local memory and carries out localized processing. At each neuron, the net input z is calculated by summing the weighted inputs and adding a bias β. The net input is then passed through a nonlinear activation function, which produces the neuron output u (as seen in Figure 1) [31].
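A minimal numerical sketch of the resulting unconstrained optimization is given below, assuming one small network per abundance, a scaled collocation grid, illustrative reaction rates, and finite-difference gradients in place of the analytic fractional gradients of equations (31)–(32); it is not the authors' C++ implementation, only an outline of how the residual of equation (28) can be driven toward zero.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def trial_and_derivative(t, p, u0, alpha):
    """Trial solution u_t(t) = u0 + t*N(t,p) and its conformable derivative
    D^alpha u_t = t^(1-alpha) * (N + t*dN/dt), cf. equations (26)-(27).
    p = (w, beta, v) for one single-hidden-layer network."""
    w, beta, v = p
    s = sigmoid(np.outer(t, w) + beta)           # shape (len(t), H)
    N = s @ v
    dN_dt = (s * (1.0 - s)) @ (v * w)
    return u0 + t * N, t ** (1.0 - alpha) * (N + t * dN_dt)

def loss(params, t, rates, u0s, alpha):
    """Sum-of-squares residual of the fractional HB system, equation (28)."""
    a, b, c = rates                               # reaction rates, not learning rates
    trials = [trial_and_derivative(t, p, u0, alpha) for p, u0 in zip(params, u0s)]
    (Xt, dX), (Yt, dY), (Zt, dZ), (Rt, dR) = trials
    res = [dX + 3*a*Xt**3 + b*Xt*Yt + c*Xt*Zt,
           dY - a*Xt**3 + b*Xt*Yt,
           dZ - b*Xt*Yt + c*Xt*Zt,
           dR - c*Xt*Zt]
    return sum(np.sum(r**2) for r in res)

# illustrative setup: 4 networks of 10 hidden units, crude gradient descent
rng = np.random.default_rng(1)
H, lr = 10, 1e-3
params = [[rng.normal(scale=0.1, size=H) for _ in range(3)] for _ in range(4)]
t_grid = np.linspace(0.01, 1.0, 25)               # scaled collocation points (assumption)
rates, u0s, alpha = (1e-2, 5e-3, 2e-3), (1.0, 0.0, 0.0, 0.0), 0.95

for step in range(200):                           # finite-difference gradient descent
    for net in params:
        for arr in net:
            for k in range(H):
                eps = 1e-6
                arr[k] += eps; f1 = loss(params, t_grid, rates, u0s, alpha)
                arr[k] -= 2*eps; f0 = loss(params, t_grid, rates, u0s, alpha)
                arr[k] += eps
                arr[k] -= lr * (f1 - f0) / (2*eps)

print("final residual loss:", loss(params, t_grid, rates, u0s, alpha))
```

In practice the analytic gradients of equation (31), or automatic differentiation, would replace the finite differences used here for brevity.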
3.3. Training of the BP Algorithm. The backpropagation (BP) training algorithm is a gradient algorithm that aims to minimize the average square error between the desired output and the actual output of a feed-forward network. It requires a continuously differentiable nonlinearity. Figure 2 displays a flow chart of an offline backpropagation learning algorithm [32]. The algorithm is recursive: it starts at the output units and works back to the first hidden layer. The outputs X_j, Y_j, Z_j, and R_j at the output layer are compared with the desired outputs tx_j, ty_j, tz_j, and tr_j using an error term of the form
\[
\delta_{j} = X_{j}\!\left(tx_{j} - X_{j}\right)\!\left(1 - X_{j}\right)
+ Y_{j}\!\left(ty_{j} - Y_{j}\right)\!\left(1 - Y_{j}\right)
+ Z_{j}\!\left(tz_{j} - Z_{j}\right)\!\left(1 - Z_{j}\right)
+ R_{j}\!\left(tr_{j} - R_{j}\right)\!\left(1 - R_{j}\right).
\tag{33}
\]
For the hidden layer, the error term takes the form
\[
\delta_{j} = \left\{ X_{j}\!\left(1 - X_{j}\right) + Y_{j}\!\left(1 - Y_{j}\right) + Z_{j}\!\left(1 - Z_{j}\right) + R_{j}\!\left(1 - R_{j}\right) \right\} \sum_{k}\delta_{k}w_{k},
\tag{34}
\]
where δ_j is the error term of the output layer and w_k is the weight between the output and hidden layers. The update of the weight of each connection is implemented by propagating the error backward from the output layer to the input layer as follows:
\[
w_{ji}(t+1) = w_{ji}(t) + \eta\,\delta_{j}u_{j} + \gamma\!\left(w_{ji}(t) - w_{ji}(t-1)\right).
\tag{35}
\]

Figure 2: Flowchart of an offline backpropagation training algorithm (initialize biases and weights; present the input and target output; compute the actual outputs of the hidden and output neurons; adjust the weights with the update rule of equation (35); change the learning pattern and repeat until the iteration limit is reached or the RMS error falls below the threshold).

The value of the learning rate η is chosen such that it is neither too large, which would lead to overshooting, nor too small, which would lead to a slow convergence rate. The momentum term in the last part of equation (35), weighted by the constant γ, is used to accelerate the error convergence of the backpropagation learning algorithm; it helps push the changes of the energy function past local increases and boosts the weights in the overall downhill direction [33]. This term adds a fraction of the most recent weight change to the current weight update. Both η and γ are set at the start of the training phase and determine the speed and stability of the network [31, 34].

The process is repeated for each input pattern until the output error of the network decreases below a prespecified threshold value. The final weight values are then frozen and utilized to obtain the predicted product abundances during the test session. The quality and success of the ANN training are assessed by calculating the error for the whole batch of training patterns using the normalized RMS error, defined as
\[
E_{\mathrm{rms}} = \sqrt{\frac{1}{PJ}\sum_{p=1}^{P}\sum_{j=1}^{J}\left\{
\left(tx_{pj} - X_{pj}\right)^{2} + \left(ty_{pj} - Y_{pj}\right)^{2}
+ \left(tz_{pj} - Z_{pj}\right)^{2} + \left(tr_{pj} - R_{pj}\right)^{2}\right\}},
\tag{36}
\]
where J is the number of output units, P is the number of training patterns, tx_pj, ty_pj, tz_pj, and tr_pj are the desired outputs at unit j, and X_pj, Y_pj, Z_pj, and R_pj are the actual outputs at the same unit. A zero error denotes that all output patterns computed by the stellar helium burning model match the expected values perfectly and that the model is fully trained. Similarly, internal unit thresholds are adjusted by treating them as connection weights on links from an auxiliary input with a constant value. The above algorithm has been programmed in the C++ programming language, running under Windows 7 on a Core i7 PC.
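For completeness, a compact Python sketch of the supervised update rules (33)–(36) is shown below. The original implementation was written in C++; here the layer sizes follow the 4-20-4 layout described later in Section 4, while the training patterns are random placeholders standing in for the analytical abundances.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 4, 20, 4            # 4-20-4 layout (see Section 4)
eta, gamma = 0.035, 0.5                  # learning rate and momentum (Section 4)

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # input  -> hidden weights
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output weights
b2 = np.zeros(n_out)
dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)

# placeholder training patterns: inputs (alpha, t, X0, Y0) -> targets (X, Y, Z, R)
P = 50
inputs  = rng.uniform(0.0, 1.0, size=(P, n_in))
targets = rng.uniform(0.0, 1.0, size=(P, n_out))

for epoch in range(1000):
    for x, tgt in zip(inputs, targets):
        u_hid = sigmoid(W1 @ x + b1)                 # hidden activations
        u_out = sigmoid(W2 @ u_hid + b2)             # network outputs
        # output-layer deltas, cf. equation (33) (sigmoid-derivative factor)
        delta_out = u_out * (1.0 - u_out) * (tgt - u_out)
        # hidden-layer deltas, cf. equation (34)
        delta_hid = u_hid * (1.0 - u_hid) * (W2.T @ delta_out)
        # weight updates with momentum, equation (35)
        dW2 = eta * np.outer(delta_out, u_hid) + gamma * dW2_prev
        dW1 = eta * np.outer(delta_hid, x)     + gamma * dW1_prev
        W2 += dW2; b2 += eta * delta_out
        W1 += dW1; b1 += eta * delta_hid
        dW2_prev, dW1_prev = dW2, dW1

# normalized RMS error over the whole batch, cf. equation (36)
out = sigmoid(W2 @ sigmoid(W1 @ inputs.T + b1[:, None]) + b2[:, None]).T
print("E_rms =", np.sqrt(np.mean((targets - out) ** 2)))
```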
4. Results and Discussion

4.1. Data Preparation. Based on the recurrence relations of equation (18), we computed one pure helium gas model, X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and three rich helium gas models, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0; X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0; and X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. The fractional parameter covers the range α = 0.5–1 with a step of 0.05, and the calculations are performed for a time T = 2100 s. Consequently, we have a total of 44 fractional helium burning models.

Figure 3 plots the product abundances of two gas models calculated at α = 0.95, where the solid lines are for the pure helium model with initial abundances X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and the dashed lines are for the rich helium model with initial abundances X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0. The effects of changing the composition of the gas are remarkable, especially for carbon C¹².

In Figure 4, we illustrate the effect of changing the fractional parameter on the product abundances calculated for a gas model with initial abundances X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. It is clear that the effect of changing the fractional parameter on the behavior of the product abundances is small. This result is similar to the results obtained by [10] for the models computed in the sense of the modified Riemann–Liouville fractional derivative.
Figure 3: The product abundances (C¹², O¹⁶, and Ne²⁰) computed analytically at α = 0.95 for two different initial compositions, plotted against the remaining He⁴ abundance. The solid lines represent the helium network with X_0 = 1, Y_0 = 0, Z_0 = 0, and R_0 = 0, and the dashed lines represent the helium network with X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, and R_0 = 0.

Figure 4: The product abundances (C¹², O¹⁶, and Ne²⁰) computed analytically at X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, and R_0 = 0 for two different fractional parameters. The solid lines represent the helium network with α = 0.9 and the dashed lines the network with α = 0.5.

Table 1: Training and testing data for the helium burning network.

                                  Training phase                              Testing phase
α                                 0.5, 0.6, 0.7, 0.8, 0.9, 1                  0.55, 0.65, 0.75, 0.85, 0.95
Time                              0–2100 s (Δt = 3 s)                         0–2100 s (Δt = 3 s)
Initial abundances of the HB      X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0    X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0
                                  X_0 = 0.90, Y_0 = 0.10, Z_0 = 0, R_0 = 0    X_0 = 0.90, Y_0 = 0.10, Z_0 = 0, R_0 = 0
                                  X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0    X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0
                                  X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0          X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0

4.2. ANN Training. For the training of the ANN used to simulate the helium burning network, we used part of the data calculated in the previous subsection; the data used for training are shown in the second column of Table 1. The neural network (NN) architecture used in this paper for the helium burning network has three layers, as shown in Figure 1: the input layer, the hidden layer, and the output layer. Different configurations of 10, 20, and 40 hidden neurons were tested, and we concluded that 20 neurons in a single hidden layer give the best model of the network for simulating the helium burning network; this number of hidden neurons was found to give the minimum RMS error of 0.000005 in an almost similar number of training iterations. As a result, the configuration of the NN we used was 4-20-4, where the input layer has four inputs: the fractional parameter α, the time t (taking values from 3 to 2100 in steps of 3 seconds), and two of the initial abundances, helium (X_0) and carbon (Y_0). We excluded the other two initial abundances (Z_0 and R_0) because their values are always zero, as indicated in Table 1. The output layer has 4 nodes, which are the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R).

During the training of the NN, we used a learning rate η = 0.035 and a momentum γ = 0.5; these values proved to quicken the convergence of the backpropagation training algorithm without overshooting the solution. To demonstrate the convergence and stability of the computed weight parameters of the network layers, the convergence behaviors of the input layer weights, biases, and output layer weights (w_i, β_i, and v_i) for the helium burning network are displayed in Figure 5. As the figure shows, the weight values are initialized to random values and, after a considerable number of iterations, they converge to stable values.

Figure 5: Convergence of the weights of the input, bias, and output layers during the training of the NN used to simulate the helium burning network: (a) convergence of the input layer weights w_i; (b) convergence of the bias β_i; (c) convergence of the output layer weights v_i.
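The training/testing split of Table 1 maps directly onto input and target arrays for the 4-20-4 network. The Python sketch below is illustrative only: the array layout is an assumption, and the helper that would supply the analytical abundances from the Section 2 series is replaced by a placeholder returning the initial composition.

```python
import numpy as np

def analytical_abundances(alpha, x0, y0, t):
    """Placeholder for the Section 2 series solution (equations (18)-(21));
    here it simply returns the initial composition unchanged."""
    return np.array([x0, y0, 0.0, 0.0])

def build_dataset(alphas, compositions, times):
    """Inputs are (alpha, t, X0, Y0); targets are (X, Y, Z, R) at time t."""
    inputs, targets = [], []
    for alpha in alphas:
        for (x0, y0) in compositions:
            for t in times:
                inputs.append([alpha, t, x0, y0])
                targets.append(analytical_abundances(alpha, x0, y0, t))
    return np.array(inputs), np.array(targets)

compositions = [(1.00, 0.00), (0.95, 0.05), (0.90, 0.10), (0.85, 0.15)]
times = np.arange(3.0, 2100.1, 3.0)                    # 3 s ... 2100 s, step 3 s

train_alphas = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]          # Table 1, training phase
test_alphas  = [0.55, 0.65, 0.75, 0.85, 0.95]          # Table 1, testing phase

X_train, T_train = build_dataset(train_alphas, compositions, times)
X_test,  T_test  = build_dataset(test_alphas,  compositions, times)
print(X_train.shape, X_test.shape)    # (6*4*700, 4) and (5*4*700, 4)
```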
4.3. Comparison between the NN Model and the Analytical Model. After the end of the training phase of the NN, we used the final frozen weight values in the test phase to predict the time-dependent product abundances of helium (X), carbon (Y), oxygen (Z), and neon (R). In this test phase, we used values of the fractional parameter α that were not used in the training phase to predict the helium burning network model; these values are shown in the third column of Table 1. The predicted values show very good agreement with the analytical values for the different helium models. A comparison between the predicted NN model and the analytical model for two values of the fractional parameter (α = 0.55 and α = 0.95), along with the different helium models listed in Table 1, is displayed in Figures 6–9 for the pure helium gas model, X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0, and the three rich helium gas models, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0; X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0; and X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0. In all these figures, the very good agreement between the NN model and the analytical model is clear, which qualifies the NN as a powerful tool to solve and model nuclear burning networks, one that could be applied to other stellar nuclear burning networks.

From the performed calculations, one can examine the effect of changing the fractional parameter on the four elements. Figures 6–9 illustrate the fractional product abundances of He⁴, C¹², O¹⁶, and Ne²⁰ as functions of time, from which some features can be read off. For all gas models, the difference between the abundances of He⁴ computed for the two values of the fractional parameter (α = 0.55, 0.95) is very small for times t ≤ 200 seconds; after that, the difference becomes larger. It is also clearly noticed that the abundance of C¹² has the same behavior. The behaviors of the fractional product abundances of O¹⁶ and Ne²⁰ are different from those of He⁴ and C¹²: the differences between the fractional product abundances of O¹⁶ are large just after the beginning of the ignition, whereas the differences between the fractional product abundances of Ne²⁰ are very small for t ≤ 100 seconds and increase after that time.

Figure 6: The distribution of the product abundances with time for the rich helium burning network, X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, and R_0 = 0; panels (a)–(d) show He⁴, C¹², O¹⁶, and Ne²⁰, with analytical and ANN curves for α = 0.55 and α = 0.95.

Figure 7: The distribution of the product abundances with time for the rich helium burning network, X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, and R_0 = 0; panels as in Figure 6.
Figure 8: The distribution of the product abundances with time for the rich helium burning network, X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, and R_0 = 0; panels as in Figure 6.

Figure 9: The distribution of the product abundances with time for the pure helium burning network, X_0 = 1, Y_0 = 0, Z_0 = 0, and R_0 = 0; panels as in Figure 6.

5. Conclusion

In the current research, we first introduced an analytical solution to the conformable fractional helium burning network via a series expansion method, obtaining the product abundances of the synthesized elements as functions of time. The calculations were performed for four different initial abundances: (X_0 = 1, Y_0 = 0, Z_0 = 0, R_0 = 0), (X_0 = 0.95, Y_0 = 0.05, Z_0 = 0, R_0 = 0), (X_0 = 0.9, Y_0 = 0.1, Z_0 = 0, R_0 = 0), and (X_0 = 0.85, Y_0 = 0.15, Z_0 = 0, R_0 = 0). The results of the analytical solution revealed that the conformable models have the same behaviors as the fractional models computed using the modified Riemann–Liouville fractional derivative. Second, we used a feed-forward NN to simulate the system of differential equations of the HB: we performed the mathematical modeling of a NN for the conformable helium burning network, trained the NN using the backpropagation delta rule algorithm on training data for models in the fractional parameter range α = 0.5–1 with step Δα = 0.1, and predicted the fractional models for the range α = 0.55–0.95 with step Δα = 0.1. The comparison with the analytical solutions gives very good agreement for most cases; a small difference is obtained for the model with fractional parameter α = 0.55. The results obtained in this research show that modeling nuclear burning networks with NNs gives very good results and validates the NN as an accurate, robust, and trustworthy method to solve and model similar networks; it could be applied to other stellar nuclear burning networks comprised of conformable fractional differential equations.

Appendix

A. The Helium Burning Network

The kinetic equations governing the change of the number density N_i of species i with time describe the nucleosynthesis of the elements in stars [35]:
\[
\frac{dN_{i}}{dt} = -\sum_{j} N_{i}N_{j}\langle\sigma v\rangle_{ij} + \sum_{k,\,l\neq i} N_{l}N_{k}\langle\sigma v\rangle_{kl},
\tag{A.1}
\]
where ⟨σv⟩_{mn} is the reaction cross section (rate factor) for the interaction involving species m and n, and the sums run over all reactions producing or destroying species i. The number density N_i of species i is expressed in terms of its abundance X_i by the relation
\[
N_{i} = \frac{\rho N_{A}X_{i}}{A_{i}},
\tag{A.2}
\]
where N_A is Avogadro's number and A_i is the mass of species i in mass units. The reaction rate is given by
\[
r_{ij} = \rho^{2}N_{A}^{2}\,\frac{X_{i}X_{j}}{A_{i}A_{j}}\,\langle\sigma v\rangle_{ij}.
\tag{A.3}
\]
For the helium burning reactions, the rates of the three reactions, in units of s⁻¹, can be written as [23]
\[
r_{3\alpha} = \langle\sigma v\rangle_{3\alpha}\left(\mathrm{He}^{4}\right)^{2},\qquad
r_{\alpha 12} = \langle\sigma v\rangle_{\alpha 12}\,\mathrm{He}^{4}\,\mathrm{C}^{12},\qquad
r_{\alpha 16} = \langle\sigma v\rangle_{\alpha 16}\,\mathrm{He}^{4}\,\mathrm{O}^{16}.
\tag{A.4}
\]
Now, putting x = He⁴, y = C¹², z = O¹⁶, and r = Ne²⁰ for the helium, carbon, oxygen, and neon abundances in number density (in units of cm⁻³), respectively, and implementing equations (A.2)–(A.4), the abundance differential equation (A.1) can be written as
\[
\frac{dN_{1}}{dt} = \frac{\rho N_{A}}{A}\frac{dx}{dt} = -\langle\sigma v\rangle_{3\alpha}x^{3} - \langle\sigma v\rangle_{\alpha 12}xy - \langle\sigma v\rangle_{\alpha 16}xz,\qquad
\frac{dN_{2}}{dt} = \frac{\rho N_{A}}{A}\frac{dy}{dt} = \langle\sigma v\rangle_{3\alpha}x^{3} - \langle\sigma v\rangle_{\alpha 12}xy,
\]
\[
\frac{dN_{3}}{dt} = \frac{\rho N_{A}}{A}\frac{dz}{dt} = \langle\sigma v\rangle_{\alpha 12}xy - \langle\sigma v\rangle_{\alpha 16}xz,\qquad
\frac{dN_{4}}{dt} = \frac{\rho N_{A}}{A}\frac{dr}{dt} = \langle\sigma v\rangle_{\alpha 16}xz.
\tag{A.5}
\]
Using equation (A.4), equation (A.5) can be written as
\[
\frac{dx}{dt} = -r_{3\alpha}x^{3} - r_{\alpha 12}xy - r_{\alpha 16}xz,\qquad
\frac{dy}{dt} = r_{3\alpha}x^{3} - r_{\alpha 12}xy,\qquad
\frac{dz}{dt} = r_{\alpha 12}xy - r_{\alpha 16}xz,\qquad
\frac{dr}{dt} = r_{\alpha 16}xz,
\tag{A.6}
\]
where the abundances x, y, z, and r are now expressed in number instead of number density. Replacing the reaction rates r_{3α}, r_{α12}, and r_{α16} in equation (A.6) by a, b, and c, respectively, we obtain equation (2).
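As a numerical cross-check of the kinetic system (A.6) in its integer form (α = 1), the network can also be integrated directly. The Python sketch below uses SciPy's solve_ivp with constant illustrative rates a, b, and c (assumed values, not the temperature-dependent rates of [23]).

```python
import numpy as np
from scipy.integrate import solve_ivp

def hb_rhs(t, u, a, b, c):
    """Right-hand side of equation (A.6) (equivalently equation (2)) with
    u = (x, y, z, r) = (He4, C12, O16, Ne20) abundances."""
    x, y, z, r = u
    return [-3*a*x**3 - b*x*y - c*x*z,
             a*x**3 - b*x*y,
             b*x*y - c*x*z,
             c*x*z]

a, b, c = 1e-4, 5e-5, 2e-5            # illustrative constant rates (assumption)
u0 = [1.0, 0.0, 0.0, 0.0]             # pure helium model
sol = solve_ivp(hb_rhs, (0.0, 2100.0), u0, args=(a, b, c),
                dense_output=True, rtol=1e-8, atol=1e-10)

for t in (300, 900, 2100):
    x, y, z, r = sol.sol(t)
    print(f"t = {t:5d} s:  He4 = {x:.4f}  C12 = {y:.4f}  O16 = {z:.4f}  Ne20 = {r:.4f}")
```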
Data Availability

The Excel data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge the Academy of Scientific Research and Technology (ASRT), Egypt (Grant no. 6413), under the project Science Up. ASRT is the second affiliation of this research.

References

[1] T. M. Michelitsch, B. A. Collet, A. P. Riascos, A. F. Nowakowski, and F. C. G. A. Nicolleau, "A fractional generalization of the classical lattice dynamics approach," Chaos, Solitons and Fractals, vol. 92, pp. 1339–1351, 2016.
[2] R. Hilfer (ed.), Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000.
[3] R. A. El-Nabulsi, "Implications of the Ornstein-Uhlenbeck-like fractional differential equation in cosmology," Revista Mexicana de Fisica, vol. 62, pp. 240–250, 2016.
[4] R. A. El-Nabulsi, "On generalized fractional spin, fractional angular momentum, fractional momentum operators in quantum mechanics," Few-Body Systems, vol. 61, p. 25, 2020.
[5] R. A. El-Nabulsi, "Saigo-Maeda operators involving the Appell function, real spectra from symmetric quantum Hamiltonians and violation of the second law of thermodynamics for quantum damped oscillators," International Journal of Theoretical Physics, vol. 59, no. 12, pp. 3721–3736, 2020.
[6] R. A. El-Nabulsi, "The fractional white dwarf hydrodynamical nonlinear differential equation and emergence of quark stars," Applied Mathematics and Computation, vol. 218, no. 6, p. 2837, 2011.
[7] S. S. Bayin and J. P. Krisch, "Fractional incompressible stars," Astrophysics and Space Science, vol. 359, no. 2, 2015.
[8] E. A.-B. Abdel-Salam and M. I. Nouh, "Approximate solution to the fractional second-type Lane-Emden equation," Astrophysics, vol. 59, no. 3, p. 398, 2016.
[9] M. I. Nouh and E. A.-B. Abdel-Salam, "Analytical solution to the fractional polytropic gas spheres," European Physical Journal Plus, vol. 133, p. 149, 2018.
[10] M. I. Nouh, "Computational method for a fractional model of the helium burning network," New Astronomy, vol. 66, p. 40, 2019.
[11] E. A.-B. Abdel-Salam and M. I. Nouh, "Conformable fractional polytropic gas spheres," New Astronomy, vol. 76, p. 101322, 2020.
[12] E. A. Yousif, A. M. A. Adam, A. A. Hassaballa, and M. I. Nouh, "Conformable fractional isothermal gas spheres," New Astronomy, vol. 84, p. 101511, 2021.
[13] M. W. M. G. Dissanayake and N. Phan-Thien, "Neural-network-based approximations for solving partial differential equations," Communications in Numerical Methods in Engineering, vol. 10, no. 3, pp. 195–201, 1994.
[14] I. E. Lagaris, A. Likas, and D. I. Fotiadis, "Artificial neural networks for solving ordinary and partial differential equations," IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–1000, 1998.
[15] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041–1049, 2000.
[16] K. S. McFall and J. R. Mahan, "Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions," IEEE Transactions on Neural Networks, vol. 20, no. 8, pp. 1221–1233, 2009.
[17] M. Baymani, A. Kerayechian, and S. Effati, "Artificial neural networks approach for solving Stokes problem," Applied Mathematics, vol. 1, no. 4, pp. 288–292, 2010.
[18] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, "DeepXDE: a deep learning library for solving differential equations," 2020, http://arxiv.org/abs/1907.04502.
[19] A. Koryagin, R. Khudorozkov, and S. Tsimfer, "PyDEns: a Python framework for solving differential equations with neural networks," 2019, http://arxiv.org/abs/1909.11544.
[20] F. Chen, D. Sondak, P. Protopapas et al., "NeuroDiffEq: a Python package for solving differential equations with neural networks," Journal of Open Source Software, vol. 5, no. 46, p. 1931, 2020.
[21] M. I. Nouh, Y. A. Azzam, and E. A.-B. Abdel-Salam, "Modeling fractional polytropic gas spheres using artificial neural network," Neural Computing and Applications, 2020.
[22] Y. A. Azzam, E. A.-B. Abdel-Salam, and M. I. Nouh, "Artificial neural network modeling of the conformable fractional isothermal gas spheres," Revista Mexicana de Astronomía y Astrofísica, vol. 57, no. 1, 2021.
[23] D. D. Clayton, Principles of Stellar Evolution and Nucleosynthesis, University of Chicago Press, Chicago, IL, USA, 1983.
[24] H. L. Duorah and R. S. Kushwaha, "Helium-burning reaction products and the rate of energy generation," The Astrophysical Journal, vol. 137, p. 566, 1963.
[25] W. R. Hix and F.-K. Thielemann, "Computational methods for nucleosynthesis and nuclear energy generation," Journal of Computational and Applied Mathematics, vol. 109, no. 1-2, p. 321, 1999.
[26] M. I. Nouh, M. A. Sharaf, and A. S. Saad, "Symbolic analytical solutions for the abundances differential equations of the helium burning phase," Astronomische Nachrichten, vol. 324, no. 5, p. 432, 2003.
[27] H. J. Haubold and A. M. Mathai, "The fractional kinetic equation and thermonuclear functions," Astrophysics and Space Science, vol. 273, no. 1/4, pp. 53–63, 2000.
[28] R. K. Saxena, A. M. Mathai, and H. J. Haubold, "On fractional kinetic equations," Astrophysics and Space Science, vol. 282, no. 1, pp. 281–287, 2002.
[29] V. B. L. Chaurasia and S. C. Pandey, "Computable extensions of generalized fractional kinetic equations in astrophysics," Research in Astronomy and Astrophysics, vol. 10, no. 1, p. 22, 2010.
[30] N. Yadav, A. Yadav, and M. Kumar, An Introduction to Neural Network Methods for Differential Equations, Springer, Berlin, Germany, 2015.
[31] H. K. Elminir, Y. A. Azzam, and F. I. Younes, "Prediction of hourly and daily diffuse fraction using neural network, as compared to linear regression models," Energy, vol. 32, no. 8, pp. 1513–1523, 2007.
[32] T. Fukuda, Y. Hasegawa, K. Sekiyama, and A. Tadayoshi, Multi-Locomotion Robotic Systems: New Concepts of Bio-Inspired Robotics, Springer, Berlin, Germany, 2012.
[33] C. Denz, Optical Neural Networks, Springer, Berlin, Germany, 1998.
[34] I. A. Basheer and M. Hajmeer, "Artificial neural networks: fundamentals, computing, design, and application," Journal of Microbiological Methods, vol. 43, no. 1, pp. 3–31, 2000.
[35] V. Kourganoff, Introduction to the Physics of Stellar Interiors, D. Reidel Publishing Company, Dordrecht, The Netherlands, 1973.
