Abstract

This article presents a novel iterative eigenvalue decomposition approach for solving the optimal control problem of time-delay systems. In this approach, the original time-delay optimal control problem is transformed into a sequence of decoupled linear first-order initial value problems without delay and advance terms. Solving the derived sequence in a recursive manner leads to the optimal control law and the optimal trajectory in the form of uniformly convergent series. This scheme has a parallel processing capability which improves its computational efficiency. The convergence analysis of the proposed approach is provided. Finally, some comparative results are included to illustrate the effectiveness of the new iterative method.

1. Introduction

Time-delay systems are a very important class of systems whose control and optimization have been of interest to many researchers (see Malek-Zavarei & Jamshidi, 1987; Mohan & Kar, 2010; Ganjefar et al., 2011; Fridman, 2014). The presence of delay makes the analysis and control of such systems much more complicated. In fact, the application of Pontryagin's maximum principle to optimal control problems (OCPs) with time-delay results in a system of coupled two-point boundary value problems (TPBVPs) involving both delay and advance terms (see Kharatishvili, 1961). This problem cannot be solved analytically, or even numerically, except in very special cases. Therefore, the main objective of all methodologies for solving time-delay optimal control problems (TDOCPs) has been to avoid solving the above-mentioned TPBVP. The TDOCPs have been widely studied by many authors during the past several decades, and many effective methods have been designed for solving these problems (see Eller et al., 1969; Inoue et al., 1971; Jamshidi & Malek-Zavarei, 1972). Eller et al. (1969) have proposed a method that involves the solution of a set of successively coupled partial differential equations.
The sensitivity approach (see Inoue et al., 1971) is another method, which is only applicable to systems with a small delay in the state. The method presented by Jamshidi & Malek-Zavarei (1972) is also a sensitivity approach, in which the original system is embedded in a class of non-delay systems using an appropriate parameter. Wong et al. (1985) have presented a computational scheme using the technique of control parametrization, in which a quasi-Newton method is applied for solving the resulting optimization problem. Lee (1993) has used auxiliary state variables and a Padé approximation to transform the problem into one without a time-delay argument. The transformed problem has then been solved by a well-developed optimization algorithm. Dadebo & Luus (1992) used iterative dynamic programming (IDP) for the solution of TDOCPs. To avoid data interpolation, Chen et al. (2000) used the Taylor expansion to estimate time-delay state variables under the framework of IDP. The TDOCPs have also been solved using orthogonal expansion approaches such as Walsh functions (see Pananisamy & Rao, 1983), block-pulse functions (see Hwang & Shih, 1985), shifted Legendre polynomials (see Perng, 1986), Chebyshev polynomials (see Horng & Chou, 1985), linear Legendre multi-wavelets (see Khellat, 2009) and hybrid functions (see Haddadi et al., 2012; Dadkhah et al., 2015; Nazemi & Shabani, 2015). These approaches are based on converting the underlying differential equation into an integral equation through integration, approximating the various signals involved in the equation by truncated orthogonal series, and using the operational matrices of integration, delay and product to reduce the problem to solving a system of algebraic equations. In 2015, a variational iteration method was used for solving the OCP of linear time-delay systems (see Mirhosseini-Alizamini et al., 2015).
Recently, a recursive shooting method has been provided by the authors (see Jajarmi & Hajipour, 2016) for solving the OCP of linear time-varying systems with state time-delay. The optimal control law is obtained by applying the shooting method in a recursive manner to a sequence of linear TPBVPs derived from the maximum principle. However, these techniques may pose numerical difficulty and computational complexity for nonlinear TDOCPs and for problems with several state and control variables. The aim of this article is to design a new computationally efficient iterative method based on the eigenvalue decomposition (EVD) approach for solving the OCP of linear time-delay systems. The proposed method, denoted by the 'iterative EVD', requires solving only a sequence of decoupled linear first-order initial value problems (IVPs) without delay and advance terms. Solving the latter sequence in a recursive manner leads to the optimal control law and the optimal trajectory in the form of uniformly convergent series. Although each iteration of this approach needs the solution of a system of linear non-delay IVPs, this system is decomposed into first-order decoupled subproblems which are solved in parallel. Therefore, the suggested technique has a parallel processing capability, which is the main feature of the new iterative approach. This capability effectively reduces the computational effort. The convergence analysis of the novel method is also provided. Consequently, the proposed procedure has practical sense and facilitates the application of parallel processing, which is very useful in overcoming the complexity of computations. The rest of the article is organized as follows. Section 2 formulates a class of TDOCPs. In Section 3, an efficient iterative approach is proposed for solving the TDOCPs. In this section, the convergence analysis of the proposed method is also given.
The effectiveness of the suggested technique is verified by solving three numerical examples in Section 4. Finally, conclusions and future works are given in the last section.

2. Problem formulation

Consider a linear time-delay system described by

$$\begin{cases}\dot{x}(t)={\boldsymbol{A}}x(t)+{\boldsymbol{A}}_1x(t-\tau)+{\boldsymbol{B}}u(t), & t_0\leq t\leq t_f,\\ x(t)=\varphi(t), & t_0-\tau\leq t\leq t_0,\end{cases}\quad(2.1)$$

where $$x \in PC^1\left(\left[t_0-\tau,t_f\right],\mathbb{R}^n\right)$$ and $$u \in PC\left(\left[t_0,t_f\right],\mathbb{R}^m\right)$$ are the state and control vectors, respectively. Note that the class of piecewise continuous functions from $$\left[t_0,t_1\right]$$ into $$\mathbb{R}^n$$ is denoted by $$PC\left(\left[t_0,t_1\right],\mathbb{R}^n\right)$$. Moreover, $$PC^1\left(\left[t_0,t_1\right],\mathbb{R}^n\right)$$ denotes the class of continuous functions having piecewise continuous first-order derivatives. The parameter $$\tau>0$$ is the constant time-delay and $$\varphi \in C\left(\left[t_0-\tau,t_0\right],\mathbb{R}^n\right)$$ is the continuous initial state function, where $$C\left(\left[t_0,t_1\right],\mathbb{R}^n\right)$$ denotes the space of continuous functions from $$\left[t_0,t_1\right]$$ into $$\mathbb{R}^n$$. Also, $${\boldsymbol{A}}$$, $${\boldsymbol{A}}_1$$ and $${\boldsymbol{B}}$$ are real constant matrices of appropriate dimensions, and the pair $$\left({\boldsymbol{A}},{\boldsymbol{B}}\right)$$ is assumed to be completely controllable (see Hewer, 1972). The objective is to find the optimal control law $$u^*(t)$$ for $$t\in\left[t_0,t_f\right]$$ which minimizes the following quadratic cost functional

$$J=\frac{1}{2}x^T(t_f){\boldsymbol{Q}}_{\boldsymbol{f}}x(t_f)+\frac{1}{2}\int_{t_0}^{t_f}\left(x^T(t){\boldsymbol{Q}}x(t)+u^T(t){\boldsymbol{R}}u(t)\right)dt,\quad(2.2)$$

subject to the system (2.1), where $${\boldsymbol{Q}}\in\mathbb{R}^{n\times n}$$ and $${\boldsymbol{Q}}_{\boldsymbol{f}}\in\mathbb{R}^{n\times n}$$ are positive semi-definite matrices and $${\boldsymbol{R}}\in\mathbb{R}^{m\times m}$$ is a positive definite matrix.
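Before proceeding, it may help to see how a system of the form (2.1) is integrated numerically. The sketch below is an illustrative, simplified fixed-step Euler scheme (the matrices, step size and zero control are hypothetical choices, not taken from the article); the delayed state $$x(t-\tau)$$ is read from the stored grid history, falling back to the initial function $$\varphi$$ for $$t-\tau<t_0$$:

```python
import numpy as np

def simulate_delay_system(A, A1, B, u, phi, t0, tf, tau, h):
    """Fixed-step Euler integration of x'(t) = A x(t) + A1 x(t - tau) + B u(t).

    The delayed state is read from the stored grid history; for t - tau < t0
    it comes from the initial function phi.  Choose h so that tau/h is an
    integer, so delayed samples fall exactly on grid points.
    """
    n_steps = int(round((tf - t0) / h))
    d = int(round(tau / h))                 # delay measured in grid steps
    x = np.zeros((n_steps + 1, A.shape[0]))
    x[0] = phi(t0)
    for k in range(n_steps):
        t = t0 + k * h
        x_del = phi(t - tau) if k < d else x[k - d]
        x[k + 1] = x[k] + h * (A @ x[k] + A1 @ x_del + B @ u(t))
    return x

# scalar illustration in the spirit of Example 4.1 below (with u = 0, h hypothetical):
# on [0, 1] the dynamics reduce to x' = x + 1, whose exact solution is 2 e^t - 1
A = np.array([[1.0]]); A1 = np.array([[1.0]]); B = np.array([[1.0]])
x = simulate_delay_system(A, A1, B, u=lambda t: np.zeros(1),
                          phi=lambda t: np.ones(1), t0=0.0, tf=2.0, tau=1.0, h=1e-3)
```

Choosing $$h$$ so that $$\tau/h$$ is an integer keeps every delayed sample on a grid point and avoids interpolation.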
For the problem (2.1)–(2.2), Pontryagin's maximum principle for OCPs with time-delay provides the following necessary conditions for optimality (see Kharatishvili, 1961)

$$\begin{cases}\dot{x}(t)={\boldsymbol{A}}x(t)+{\boldsymbol{A}}_1x(t-\tau)-{\boldsymbol{B}}{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda(t), & t_0\leq t\leq t_f,\\ \dot{\lambda}(t)=\begin{cases}-{\boldsymbol{Q}}x(t)-{\boldsymbol{A}}^T\lambda(t)-{\boldsymbol{A}}_1^T\lambda(t+\tau), & t_0\leq t<t_f-\tau,\\ -{\boldsymbol{Q}}x(t)-{\boldsymbol{A}}^T\lambda(t), & t_f-\tau\leq t\leq t_f,\end{cases}\\ x(t)=\varphi(t),\quad t_0-\tau\leq t\leq t_0,\qquad \lambda(t_f)={\boldsymbol{Q}}_{\boldsymbol{f}}x(t_f),\end{cases}\quad(2.3)$$

where $$\lambda(t)\in PC^1\left(\left[t_0,t_f\right],\mathbb{R}^n\right)$$ is the co-state vector. Also, the optimal control law is

$$u^*(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda(t),\quad t_0\leq t\leq t_f.\quad(2.4)$$

In fact, the application of Pontryagin's maximum principle to OCPs with time-delay results in a system of coupled TPBVPs involving both delay and advance terms (see Kharatishvili, 1961). Traditional techniques cannot solve this problem analytically or even numerically except in very special cases. To overcome this difficulty, an efficient iterative approach will be presented in the next section.

3. Optimal control design strategy and convergence analysis

In this section, we first present an iterative technique in order to solve the coupled TPBVP (2.3). Then we prove the convergence of the proposed approach. Consider the following IVP corresponding to the TPBVP (2.3)

$$\begin{cases}\dot{x}(t)={\boldsymbol{A}}x(t)+{\boldsymbol{A}}_1x(t-\tau)-{\boldsymbol{B}}{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda(t), & t_0\leq t\leq t_f,\\ \dot{\lambda}(t)=\begin{cases}-{\boldsymbol{Q}}x(t)-{\boldsymbol{A}}^T\lambda(t)-{\boldsymbol{A}}_1^T\lambda(t+\tau), & t_0\leq t<t_f-\tau,\\ -{\boldsymbol{Q}}x(t)-{\boldsymbol{A}}^T\lambda(t), & t_f-\tau\leq t\leq t_f,\end{cases}\\ x(t)=\varphi(t),\quad t_0-\tau\leq t\leq t_0,\qquad \lambda(t_0)=\lambda_0,\end{cases}\quad(3.1)$$

where $$\lambda_0$$ is an unknown parameter. This parameter will be identified using the boundary condition $$\lambda(t_f)={\boldsymbol{Q}}_{\boldsymbol{f}}x(t_f)$$. The problem given by Equation (3.1) is reformulated in the following form

$$\begin{cases}\dot{v}(t)={\boldsymbol{M}}v(t)+{\boldsymbol{N}}v(t-\tau)+{\boldsymbol{P}}v(t+\tau), & t_0\leq t\leq t_f,\\ v(t)=\psi(t), & t_0-\tau\leq t\leq t_0,\end{cases}\quad(3.2)$$

where $$v(t)\triangleq\left(x(t),\lambda(t)\right)$$ and

$${\boldsymbol{M}}=\begin{bmatrix}{\boldsymbol{A}} & -{\boldsymbol{B}}{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\\ -{\boldsymbol{Q}} & -{\boldsymbol{A}}^T\end{bmatrix},\qquad {\boldsymbol{N}}=\begin{bmatrix}{\boldsymbol{A}}_1 & {\boldsymbol{0}}\\ {\boldsymbol{0}} & {\boldsymbol{0}}\end{bmatrix},\quad(3.3)$$

$${\boldsymbol{P}}=\begin{bmatrix}{\boldsymbol{0}} & {\boldsymbol{0}}\\ {\boldsymbol{0}} & {\boldsymbol{P}}_{22}\end{bmatrix},\qquad {\boldsymbol{P}}_{22}=\begin{cases}-{\boldsymbol{A}}_1^T, & t_0\leq t<t_f-\tau,\\ {\boldsymbol{0}}, & t_f-\tau\leq t\leq t_f,\end{cases}\quad(3.4)$$

and $$\psi(t)=\left(\varphi(t),\lambda_0\right)$$. Assume that the matrix $${\boldsymbol{M}}$$ has $$2n$$ distinct eigenvalues $$m_j$$, $$j=1,2,\ldots,2n$$.
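To make the construction (3.3)–(3.4) concrete, the following sketch assembles $${\boldsymbol{M}}$$ and $${\boldsymbol{N}}$$ for the scalar data of Example 4.1 of Section 4, rewritten in the standard form (2.2). The rewriting (taking $$Q_f=3$$, $$Q=0$$, $$R=2$$ so that $$J=\tfrac{3}{2}x^2(2)+\int u^2\,dt$$) is our own reading, so treat the numbers as illustrative:

```python
import numpy as np

def build_augmented_matrices(A, A1, B, Q, R):
    """Assemble M and N of Eqs. (3.3)-(3.4) for the augmented vector v = (x, lambda)."""
    n = A.shape[0]
    S = B @ np.linalg.inv(R) @ B.T          # the term B R^{-1} B^T
    M = np.block([[A, -S], [-Q, -A.T]])     # 2n x 2n
    N = np.block([[A1, np.zeros((n, n))],
                  [np.zeros((n, n)), np.zeros((n, n))]])
    return M, N

# Example 4.1 in the standard form (2.2), per our reading: Q_f = 3, Q = 0, R = 2
A = np.array([[1.0]]); A1 = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[0.0]]); R = np.array([[2.0]])
M, N = build_augmented_matrices(A, A1, B, Q, R)   # here M = [[1, -0.5], [0, -1]]
eigvals = np.linalg.eigvals(M)
```

For these data the eigenvalues of $${\boldsymbol{M}}$$ are $$1$$ and $$-1$$, which are distinct, as the method assumes.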
Let $${\boldsymbol{U}}$$ denote the matrix of the right eigenvectors of $${\boldsymbol{M}}$$; then the transformation $$v(t)={\boldsymbol{U}}y(t)$$ yields the following equivalent IVP for Equation (3.2)

$$\begin{cases}\dot{y}_j(t)=m_jy_j(t)+\tilde{n}_jy(t-\tau)+\tilde{p}_jy(t+\tau), & t_0\leq t\leq t_f,\\ y_j(t)=\zeta_j(t), & t_0-\tau\leq t\leq t_0,\end{cases}\qquad j=1,2,\ldots,2n,\quad(3.5)$$

where $$y_j(t)$$ and $$\zeta_j(t)$$ are the $$j$$th elements of the vectors $$y(t)$$ and $$\zeta(t)={\boldsymbol{U}}^{-1}\psi(t)$$, respectively. Also, the vectors $$\tilde{n}_j$$ and $$\tilde{p}_j$$ are the $$j$$th rows of the matrices $$\tilde{{\boldsymbol{N}}}={\boldsymbol{U}}^{-1}{\boldsymbol{N}}{\boldsymbol{U}}$$ and $$\tilde{{\boldsymbol{P}}}={\boldsymbol{U}}^{-1}{\boldsymbol{P}}{\boldsymbol{U}}$$, respectively. If $$y_j(t)\in Y$$ is the solution of IVP (3.5), where $$Y=PC^1\left(\left[t_0-\tau,t_f\right],\mathbb{R}\right)$$, then a homotopy $$\tilde{y}_j(.,\rho):Y\times[0,1]\rightarrow Y$$ is defined which satisfies

$$(1-\rho)\left[\dot{\tilde{y}}_j(t,\rho)-m_j\tilde{y}_j(t,\rho)\right]+\rho\left[\dot{\tilde{y}}_j(t,\rho)-m_j\tilde{y}_j(t,\rho)-\tilde{n}_j\tilde{y}(t-\tau,\rho)-\tilde{p}_j\tilde{y}(t+\tau,\rho)\right]=0,\quad(3.6)$$

where $$\tilde{y}(.,\rho)=\left(\tilde{y}_1(.,\rho),\ldots,\tilde{y}_{2n}(.,\rho)\right)$$ and the embedding parameter $$\rho\in[0,1]$$ is called the homotopy parameter (see He, 1998). Setting $$\rho=0$$ and $$\rho=1$$ yields

$$\dot{\tilde{y}}_j(t,0)-m_j\tilde{y}_j(t,0)=0,\quad(3.7)$$

$$\dot{\tilde{y}}_j(t,1)-m_j\tilde{y}_j(t,1)-\tilde{n}_j\tilde{y}(t-\tau,1)-\tilde{p}_j\tilde{y}(t+\tau,1)=0.\quad(3.8)$$

Thus, as $$\rho$$ increases from zero to one, the trivial problem (3.7) is continuously deformed into problem (3.8). Moreover, the problem (3.8) is exactly the same as the original IVP (3.5), which shows that $$\tilde{y}_j(t,1)=y_j(t)$$. Therefore, the changing process of $$\rho$$ from zero to one is just that of $$\tilde{y}_j(t,\rho)$$ from $$\tilde{y}_j(t,0)$$ to $$y_j(t)$$. This process is called deformation in topology. Assume that the homotopy $$\tilde{y}_j(t,\rho)$$ has an expansion in a power series with respect to the parameter $$\rho$$

$$\tilde{y}_j(t,\rho)=\tilde{y}_j^{(0)}(t)+\rho\,\tilde{y}_j^{(1)}(t)+\rho^2\tilde{y}_j^{(2)}(t)+\cdots,\quad(3.9)$$

where $$\tilde{y}_j^{(i)}(t)\in Y$$.
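The diagonalizing transformation used above is a routine numerical eigendecomposition. A minimal sketch (reusing the illustrative $$2\times 2$$ matrix $${\boldsymbol{M}}$$ obtained from Example 4.1 in standard form; the matrices are illustrative, not the article's data) computes $${\boldsymbol{U}}$$, the modal eigenvalues $$m_j$$ and the transformed coupling matrix $$\tilde{{\boldsymbol{N}}}={\boldsymbol{U}}^{-1}{\boldsymbol{N}}{\boldsymbol{U}}$$:

```python
import numpy as np

# illustrative M and N (Example 4.1 written in the standard form of Section 2)
M = np.array([[1.0, -0.5], [0.0, -1.0]])
N = np.array([[1.0, 0.0], [0.0, 0.0]])

m, U = np.linalg.eig(M)        # distinct eigenvalues m_j and right eigenvectors U
U_inv = np.linalg.inv(U)       # invertible because the eigenvalues are distinct
N_tilde = U_inv @ N @ U        # coupling matrix in the decoupled coordinates
D = U_inv @ M @ U              # diagonal: diag(m_1, ..., m_{2n})
```

Distinct eigenvalues guarantee a full set of independent eigenvectors, so $${\boldsymbol{U}}$$ is invertible and $$U^{-1}MU$$ is exactly diagonal up to rounding.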
Setting $$\rho=1$$ in the expansion (3.9) results in the solution of IVP (3.5) as

$$y_j(t)=\lim_{\rho\rightarrow 1}\tilde{y}_j(t,\rho)=\tilde{y}_j^{(0)}(t)+\tilde{y}_j^{(1)}(t)+\tilde{y}_j^{(2)}(t)+\cdots.\quad(3.10)$$

Substituting the expression (3.9) into Equation (3.6) and into the initial condition of problem (3.5), rearranging with respect to the order of $$\rho$$ and then equating the coefficients of the same power of $$\rho$$ on each side, we obtain

$$\rho^0:\;\begin{cases}\dot{\tilde{y}}_j^{(0)}(t)=m_j\tilde{y}_j^{(0)}(t), & t_0\leq t\leq t_f,\\ \tilde{y}_j^{(0)}(t)=\zeta_j(t), & t_0-\tau\leq t\leq t_0,\end{cases}\qquad j=1,2,\ldots,2n,\quad(3.11)$$

and for $$i\geq 1$$

$$\rho^i:\;\begin{cases}\dot{\tilde{y}}_j^{(i)}(t)=m_j\tilde{y}_j^{(i)}(t)+\tilde{n}_j\tilde{y}^{(i-1)}(t-\tau)+\tilde{p}_j\tilde{y}^{(i-1)}(t+\tau), & t_0\leq t\leq t_f,\\ \tilde{y}_j^{(i)}(t)=0, & t_0-\tau\leq t\leq t_0,\end{cases}\qquad j=1,2,\ldots,2n,\quad(3.12)$$

where $$\tilde{y}^{(i)}(t)=\left(\tilde{y}_1^{(i)}(t),\ldots,\tilde{y}_{2n}^{(i)}(t)\right)$$. The problem (3.11) is a system of homogeneous linear time-invariant IVPs without delay and advance terms which is decomposed into $$2n$$ decoupled first-order subproblems; thus they can be solved in parallel. If $$\tilde{y}_j^{(0)}(t)$$ is the solution of problem (3.11), then for $$i\geq 1$$ the problem (3.12) is a system of nonhomogeneous linear time-invariant IVPs whose delay and advance terms are no longer considered as unknowns. In addition, the linear subproblems in Equation (3.12) are completely decoupled, so they can also be solved in parallel. For $$i\geq 1$$, the solution $$\tilde{y}_j^{(i)}(t)$$ is obtained in the $$i$$th step from Equation (3.12) by recursively solving only a set of decoupled nonhomogeneous linear time-invariant IVPs which have no delay and advance terms. Therefore, the suggested technique has a parallel processing capability which improves its computational efficiency. Solving Equations (3.11)–(3.12), $$\tilde{y}_j^{(i)}(t)$$ is formulated as

$$\tilde{y}_j^{(0)}(t)=\begin{cases}\zeta_j(t), & t_0-\tau\leq t\leq t_0,\\ e^{m_j(t-t_0)}\zeta_j(t_0), & t_0\leq t\leq t_f,\end{cases}\quad(3.13)$$

and for $$i\geq 1$$

$$\tilde{y}_j^{(i)}(t)=\int_{t_0}^{t}e^{m_j(t-\theta)}\left[\tilde{n}_j\tilde{y}^{(i-1)}(\theta-\tau)+\tilde{p}_j\tilde{y}^{(i-1)}(\theta+\tau)\right]d\theta,\quad t_0\leq t\leq t_f,\quad(3.14)$$

and $$\tilde{y}_j^{(i)}(t)=0$$ for $$t_0-\tau\leq t\leq t_0$$.
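On a uniform grid with $$\tau$$ an integer multiple of the step size, the recursion (3.13)–(3.14) can be evaluated mode by mode, accumulating the convolution integral panel by panel with the trapezoidal rule. The sketch below is a simplified illustration (real eigenvalues assumed; advance values past $$t_f$$ contribute nothing, consistent with $${\boldsymbol{P}}_{22}={\boldsymbol{0}}$$ on $$[t_f-\tau,t_f]$$; all grid parameters are hypothetical):

```python
import numpy as np

def evd_terms(m, n_til, p_til, zeta, t0, tf, tau, h, n_terms):
    """Series terms (3.13)-(3.14) on a uniform grid (tau/h must be an integer).

    m      : (2n,) eigenvalues of M (assumed real here for simplicity)
    n_til  : (2n, 2n) matrix N~; row j multiplies y(t - tau)
    p_til  : (2n, 2n) matrix P~; its contribution is dropped for t >= tf - tau
    zeta   : function returning zeta(s) as a (2n,) array on [t0 - tau, t0]
    Returns the grid t and a list of arrays y[i], each of shape (2n, K).
    """
    n_til, p_til = np.asarray(n_til), np.asarray(p_til)
    K = int(round((tf - t0) / h)) + 1
    d = int(round(tau / h))
    t = t0 + h * np.arange(K)
    # zeroth term (3.13): y~(0)(t) = exp(m (t - t0)) zeta(t0) on [t0, tf]
    y = [np.exp(np.outer(m, t - t0)) * zeta(t0)[:, None]]
    hist0 = np.stack([zeta(t0 + h * k) for k in range(-d, 0)], axis=1)
    g = np.exp(m * h)                   # one-step propagator of each mode
    for i in range(1, n_terms):
        prev = y[-1]
        # delayed samples: zeta-history for i = 1, zero afterwards, cf. (3.12)
        hist = hist0 if i == 1 else np.zeros_like(hist0)
        delayed = np.concatenate([hist, prev[:, :K - d]], axis=1)
        advanced = np.zeros_like(prev)  # zero where P22 vanishes (t >= tf - tau)
        advanced[:, :K - 1 - d] = prev[:, d:K - 1]
        f = n_til @ delayed + p_til @ advanced
        # (3.14): y~(i)(t) = int_{t0}^{t} e^{m (t - s)} f(s) ds, trapezoid panels
        yi = np.zeros_like(prev)
        for k in range(K - 1):
            yi[:, k + 1] = g * yi[:, k] + (h / 2) * (g * f[:, k] + f[:, k + 1])
        y.append(yi)
    return t, y

# sanity check in a trivial scalar setting (m = 0, no advance coupling), where
# y~(1)(t) = t - t0 and y~(2)(t) = max(t - t0 - tau, 0)^2 / 2 follow by hand
t, y = evd_terms(np.array([0.0]), [[1.0]], [[0.0]],
                 lambda s: np.array([1.0]), 0.0, 1.0, 0.5, 0.01, 3)
```

Since each term is an independent scalar quadrature per mode, the inner loop over modes is exactly the part that the article proposes to run in parallel.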
Following this procedure and using the inverse transformation, the solution of IVP (3.2) is determined as

$$v(t)=\sum_{i=0}^{\infty}\tilde{v}^{(i)}(t),\quad(3.15)$$

where $$\tilde{v}^{(i)}(t)={\boldsymbol{U}}\tilde{y}^{(i)}(t)$$, $$i\geq 0$$. Let $${\boldsymbol{U}}_x$$ and $${\boldsymbol{U}}_\lambda$$ be the $$n\times 2n$$ sub-matrices of $${\boldsymbol{U}}$$ such that $${\boldsymbol{U}}=\left[{\boldsymbol{U}}_x^T,{\boldsymbol{U}}_{\lambda}^T\right]^T$$. Then the optimal state trajectory $$x^*(t)$$ and the optimal control $$u^*(t)$$ are formulated in the following forms

$$x^*(t)=\sum_{i=0}^{\infty}\tilde{x}^{(i)}(t),\quad t_0\leq t\leq t_f,\quad(3.16)$$

$$u^*(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\sum_{i=0}^{\infty}\tilde{\lambda}^{(i)}(t),\quad t_0\leq t\leq t_f,\quad(3.17)$$

where $$\tilde{x}^{(i)}(t)={\boldsymbol{U}}_x\tilde{y}^{(i)}(t)$$ and $$\tilde{\lambda}^{(i)}(t)={\boldsymbol{U}}_{\lambda}\tilde{y}^{(i)}(t)$$, $$i\geq 0$$.

3.1. Convergence analysis

In this section, we investigate the convergence of the proposed method for solving the linear TDOCP (2.1)–(2.2). Consider the function sequence $$\left\{y_j^{(k)}(t)\right\}_{k=0}^{\infty}$$ defined by

$$y_j^{(k)}(t)\triangleq\sum_{i=0}^{k}\tilde{y}_j^{(i)}(t),\quad(3.18)$$

where $$\tilde{y}_j^{(i)}$$ is given by Equations (3.13)–(3.14). By substituting Equations (3.13)–(3.14) into (3.18), we have

$$y_j^{(0)}(t)=\begin{cases}\zeta_j(t), & t\in T_1,\\ e^{m_j(t-t_0)}\zeta_j(t_0), & t\in T,\end{cases}\quad(3.19)$$

and for $$k\geq 1$$

$$y_j^{(k)}(t)=e^{m_j(t-t_0)}\zeta_j(t_0)+\int_{t_0}^{t}e^{m_j(t-\theta)}\left[\tilde{n}_jy^{(k-1)}(\theta-\tau)+\tilde{p}_jy^{(k-1)}(\theta+\tau)\right]d\theta,\quad t\in T,\quad(3.20)$$

and $$y_j^{(k)}(t)=0$$ for $$t\in T_1$$, where $$T_1\triangleq[t_0-\tau,t_0]$$ and $$T\triangleq[t_0,t_f]$$. Let $$\left|\cdot\right|_\infty$$ denote the infinity norm of vectors and $$\left\|\cdot\right\|_{T}$$ denote the sup-norm of functions in $$C(T,\mathbb{R}^1)$$. Moreover, let $$[\cdot]_{\sup,T}$$ denote the vector whose entries are the component-wise sup-norms of a vector function, i.e.

$$\left[y(t)\right]_{\sup,T}\triangleq\left(\left\|y_1(t)\right\|_T,\ldots,\left\|y_n(t)\right\|_T\right),\quad y(t)\in C(T,\mathbb{R}^n).\quad(3.21)$$

Define

$$\left\|y(t)\right\|_T\triangleq\left|\left[y(t)\right]_{\sup,T}\right|_{\infty},\quad y(t)\in C(T,\mathbb{R}^n).$$
(3.22)

These notations are used in the proof of the next theorem, which shows the uniform convergence of the solution sequence given by Equations (3.19)–(3.20) to the solution of IVP (3.5).

Theorem 3.1 The function sequence $$\left\{y_j^{(k)}(t)\right\}_{k=0}^\infty$$ given by Equation (3.18) converges uniformly to the solution of IVP (3.5).

Proof. From Equations (3.19)–(3.20), we have

$$y_j^{(1)}(t)-y_j^{(0)}(t)=\int_{t_0}^{t}e^{m_j(t-\theta)}\left[\tilde{n}_jy^{(0)}(\theta-\tau)+\tilde{p}_jy^{(0)}(\theta+\tau)\right]d\theta,\quad t\in T.\quad(3.23)$$

Define $$\alpha_j\triangleq\left\|e^{m_j(t-t_0)}\right\|_{T}$$, $$\beta_j\triangleq\max\left(\left|\tilde{n}_j\right|_\infty,\left|\tilde{p}_j\right|_\infty\right)$$, $$\gamma_j\triangleq\left\|\zeta_j(t)\right\|_{T_1}$$ and $$\eta_j\triangleq\max\left(\alpha_j\gamma_j,\gamma_j\right)$$, where $$\alpha_j$$, $$\beta_j$$, $$\gamma_j$$ and $$\eta_j$$ are some positive constants. Let $$\alpha=\max_{j}(\alpha_j)$$, $$\beta=\max_{j}(\beta_j)$$, $$\gamma=\max_{j}(\gamma_j)$$ and $$\eta=\max_{j}(\eta_j)$$. Then from Equation (3.23) we obtain

$$\left\|y_j^{(1)}(t)-y_j^{(0)}(t)\right\|_T\leq\alpha_j\beta_j\int_{t_0}^{t}\left(\left\|y^{(0)}(\theta-\tau)\right\|_T+\left\|y^{(0)}(\theta+\tau)\right\|_T\right)d\theta\leq 2\alpha_j\beta_j\eta_j\int_{t_0}^{t}d\theta=2\alpha_j\beta_j\eta_j(t-t_0)\leq 2\alpha\beta\eta\,(t-t_0),\quad t\in T.\quad(3.24)$$

Also, from Equation (3.20), for $$t\in T$$ we obtain

$$y_j^{(2)}(t)-y_j^{(1)}(t)=\int_{t_0}^{t}e^{m_j(t-\theta)}\left[\tilde{n}_j\left(y^{(1)}(\theta-\tau)-y^{(0)}(\theta-\tau)\right)+\tilde{p}_j\left(y^{(1)}(\theta+\tau)-y^{(0)}(\theta+\tau)\right)\right]d\theta,\quad(3.25)$$

and then, with the aid of Equation (3.24), we have

$$\left\|y_j^{(2)}(t)-y_j^{(1)}(t)\right\|_T\leq\alpha_j\beta_j\int_{t_0}^{t}\left(\left\|y^{(1)}(\theta-\tau)-y^{(0)}(\theta-\tau)\right\|_T+\left\|y^{(1)}(\theta+\tau)-y^{(0)}(\theta+\tau)\right\|_T\right)d\theta\leq 2^2\alpha_j\beta_j\,\alpha\beta\eta\int_{t_0}^{t}(\theta-t_0)\,d\theta\leq\frac{1}{2!}(2\alpha\beta)^2\eta\,(t-t_0)^2,\quad t\in T.\quad(3.26)$$

Continuing in this manner, by mathematical induction we obtain

$$\left\|y_j^{(k)}(t)-y_j^{(k-1)}(t)\right\|_T\leq\frac{1}{k!}(2\alpha\beta)^k\eta\,(t-t_0)^k,\quad k=1,2,\ldots,\quad t\in T.\quad(3.27)$$

Then, according to the triangle inequality, for any $$k=0,1,\ldots$$ and $$q=1,2,\ldots$$, we have

$$\left\|y_j^{(k+q)}(t)-y_j^{(k)}(t)\right\|_T\leq\sum_{r=k+1}^{k+q}\frac{1}{r!}(2\alpha\beta)^r\eta\,(t-t_0)^r=\frac{1}{(k+1)!}(2\alpha\beta)^{k+1}\eta\,(t-t_0)^{k+1}\left(1+\sum_{r=1}^{q-1}\frac{(2\alpha\beta)^r(t-t_0)^r}{\prod_{i=1}^{r}(k+i+1)}\right).\quad(3.28)$$

Note that $$\prod_{i=1}^{r}(k+i+1)\geq\prod_{i=1}^{r}(i+1)=(r+1)!\geq r!$$.
Therefore, for any $$k=0,1,\ldots$$ and $$q=1,2,\ldots$$, Equation (3.28) implies that

$$\left\|y_j^{(k+q)}(t)-y_j^{(k)}(t)\right\|_T\leq\frac{1}{(k+1)!}(2\alpha\beta)^{k+1}\eta\,(t-t_0)^{k+1}\sum_{r=0}^{q-1}\frac{1}{r!}(2\alpha\beta)^r(t-t_0)^r\leq\frac{1}{(k+1)!}(2\alpha\beta)^{k+1}\eta\,(t-t_0)^{k+1}\underbrace{\sum_{r=0}^{\infty}\frac{1}{r!}(2\alpha\beta)^r(t-t_0)^r}_{e^{2\alpha\beta(t-t_0)}}=\frac{1}{(k+1)!}(2\alpha\beta)^{k+1}\eta\,(t-t_0)^{k+1}e^{2\alpha\beta(t-t_0)},\quad t\in T.\quad(3.29)$$

Thus $$\left\{y_j^{(k)}(t)\right\}_{k=0}^\infty$$ is a Cauchy sequence in $$PC^1(T,\mathbb{R})$$. Note that $$C(T,\mathbb{R})$$ is a Banach space with the norm $$\left\|\cdot\right\|_T$$, which includes $$PC^1(T,\mathbb{R})$$ as a (non-closed) subspace. Consequently, the sequence $$\left\{y_j^{(k)}(t)\right\}_{k=0}^\infty$$ is uniformly convergent (see Taylor & Lay, 1980). Thus, from Equation (3.10),

$$\lim_{k\rightarrow\infty}y_j^{(k)}(t)=\sum_{i=0}^{\infty}\tilde{y}_j^{(i)}(t)=y_j(t),\quad(3.30)$$

is the solution of IVP (3.5). □

Theorem 3.2 The sequences $$\left\{x^{(k)}(t)\right\}_{k=0}^\infty$$ and $$\left\{u^{(k)}(t)\right\}_{k=0}^\infty$$ converge uniformly to the optimal solutions of the TDOCP (2.1)–(2.2), where

$$x^{(k)}(t)\triangleq\sum_{i=0}^{k}\tilde{x}^{(i)}(t),\qquad u^{(k)}(t)\triangleq-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda^{(k)}(t),\quad(3.31)$$

and $$\lambda^{(k)}(t)\triangleq\sum_{i=0}^{k}\tilde{\lambda}^{(i)}(t)$$.

Proof. Let $$x^*(t)$$ and $$u^*(t)$$ be the optimal solutions of the TDOCP (2.1)–(2.2). Since $$\tilde{x}^{(i)}(t)={\boldsymbol{U}}_x\tilde{y}^{(i)}(t)$$ and $$\tilde{\lambda}^{(i)}(t)={\boldsymbol{U}}_{\lambda}\tilde{y}^{(i)}(t)$$, with the aid of Equation (3.31) we obtain $$x^{(k)}(t)={\boldsymbol{U}}_x y^{(k)}(t)$$ and $$\lambda^{(k)}(t)={\boldsymbol{U}}_{\lambda}y^{(k)}(t)$$. Then, according to Theorem 3.1, the solution sequences $$\left\{x^{(k)}(t)\right\}_{k=0}^\infty$$ and $$\left\{\lambda^{(k)}(t)\right\}_{k=0}^\infty$$ converge uniformly to $$x(t)={\boldsymbol{U}}_xy(t)$$ and $$\lambda(t)={\boldsymbol{U}}_{\lambda}y(t)$$, respectively. Therefore, the limit of the sequence $$\left\{x^{(k)}(t)\right\}_{k=0}^\infty$$ is the optimal trajectory $$x^*(t)$$ of the TDOCP (2.1)–(2.2).
The control sequence $$\left\{u^{(k)}(t)\right\}_{k=0}^\infty$$ depends only on the co-state vector sequence $$\left\{\lambda^{(k)}(t)\right\}_{k=0}^\infty$$ through a linear operator. Therefore, we have

$$u^*(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\sum_{i=0}^{\infty}\tilde{\lambda}^{(i)}(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\left(\lim_{k\rightarrow\infty}\sum_{i=0}^{k}\tilde{\lambda}^{(i)}(t)\right)=\lim_{k\rightarrow\infty}\left(-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\sum_{i=0}^{k}\tilde{\lambda}^{(i)}(t)\right)=\lim_{k\rightarrow\infty}\left(-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\lambda^{(k)}(t)\right)=\lim_{k\rightarrow\infty}u^{(k)}(t).\quad(3.32)$$

That is, the control sequence $$\left\{u^{(k)}(t)\right\}_{k=0}^\infty$$ converges uniformly to the optimal control $$u^*(t)$$, and the proof is complete. □

The main idea of the proposed iterative method can be summarized as follows. Pontryagin's maximum principle for the TDOCP (2.1)–(2.2) provides the TPBVP given by Equation (2.3). To solve the derived problem, we design an iterative technique, denoted by the 'iterative EVD', using a series expansion approach and the eigenvalue decomposition policy. Applying the iterative EVD, the TPBVP (2.3) is transformed into a sequence of decoupled linear first-order IVPs without delay and advance terms, as given by Equations (3.13)–(3.14); thus they can be solved in parallel. Solving the derived sequence in a recursive manner leads to the optimal control law and the optimal trajectory in the form of uniformly convergent series as in Equations (3.16)–(3.17). Consequently, the proposed procedure has practical sense and facilitates the application of parallel processing, which is very useful in overcoming the complexity of computations.

3.2. Suboptimal control design

It is almost impossible to obtain the optimal trajectory-control pair as in Equations (3.16)–(3.17), since these equations contain infinite series. Therefore, in practical applications, the series solution is approximated by replacing $$\infty$$ with a finite positive integer $$M$$.
Then the $$M$$th-order suboptimal trajectory-control pair, denoted by $$\left(x^{(M)}(t),u^{(M)}(t)\right)$$, becomes

$$x^{(M)}(t)=\sum_{i=0}^{M}\tilde{x}^{(i)}(t),\quad t_0\leq t\leq t_f,\quad(3.33)$$

$$u^{(M)}(t)=-{\boldsymbol{R}}^{-1}{\boldsymbol{B}}^T\sum_{i=0}^{M}\tilde{\lambda}^{(i)}(t),\quad t_0\leq t\leq t_f,\quad(3.34)$$

where $$\tilde{x}^{(i)}(t)={\boldsymbol{U}}_x\tilde{y}^{(i)}(t)$$ and $$\tilde{\lambda}^{(i)}(t)={\boldsymbol{U}}_{\lambda}\tilde{y}^{(i)}(t)$$. The integer $$M$$ is generally determined according to a concrete control precision. For example, the $$M$$th-order suboptimal trajectory-control pair in expressions (3.33)–(3.34) has the desired accuracy if, for a given small enough constant $$\varepsilon>0$$, the following condition holds

$$\left|\frac{J^{(M)}-J^{(M-1)}}{J^{(M)}}\right|<\varepsilon,\quad(3.35)$$

where

$$J^{(M)}=\frac{1}{2}\left(x^{(M)}(t_f)\right)^T{\boldsymbol{Q}}_{\boldsymbol{f}}x^{(M)}(t_f)+\frac{1}{2}\int_{t_0}^{t_f}\left(\left(x^{(M)}(t)\right)^T{\boldsymbol{Q}}x^{(M)}(t)+\left(u^{(M)}(t)\right)^T{\boldsymbol{R}}u^{(M)}(t)\right)dt.\quad(3.36)$$

Remark 3.1 (Approximating the co-state initial value) As the co-state initial value $$\lambda_0$$ is an unknown parameter, the function sequence $$\tilde{y}_j^{(i)}(t)$$ given by Equations (3.13)–(3.14) depends on both $$t$$ and $$\lambda_0$$, i.e. we have $$\tilde{y}_j^{(i)}(t,\lambda_0)$$. In order to identify $$\lambda_0$$, by considering the boundary condition $$\lambda(t_f)={\boldsymbol{Q}}_{\boldsymbol{f}}x(t_f)$$, after sufficient iterations of the proposed approach we have $$\sum_{i=0}^{M}\tilde{\lambda}^{(i)}(t_f,\lambda_0)={\boldsymbol{Q}}_{\boldsymbol{f}}\sum_{i=0}^{M}\tilde{x}^{(i)}(t_f,\lambda_0)$$. Therefore, $$\lambda_0$$ should be the real root of the following system of linear algebraic equations

$$\sum_{i=0}^{M}{\boldsymbol{U}}_{\lambda}\tilde{y}^{(i)}(t_f,\lambda_0)-{\boldsymbol{Q}}_{\boldsymbol{f}}\sum_{i=0}^{M}{\boldsymbol{U}}_x\tilde{y}^{(i)}(t_f,\lambda_0)=0.\quad(3.37)$$

Once $$\lambda_0$$ is known, the $$M$$th-order suboptimal trajectory-control pair is obtained from Equations (3.33)–(3.34) and its accuracy can be evaluated using Equation (3.35).

4. Numerical examples

In this section, three numerical examples are given to demonstrate the effectiveness of the proposed approach. For these examples, the results of our suggested technique are compared with the existing results.
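Because each $$\tilde{y}^{(i)}(t_f,\lambda_0)$$ depends affinely on $$\lambda_0$$ (the recursion (3.13)–(3.14) is linear in the initial data), the system (3.37) of Remark 3.1 is linear and can be solved with a single factorization: probe the residual at $$\lambda_0=0$$ and at the unit vectors to assemble its Jacobian. A minimal sketch, in which `terminal_values` is a hypothetical user-supplied routine returning $$\left(x^{(M)}(t_f),\lambda^{(M)}(t_f)\right)$$ for a given $$\lambda_0$$:

```python
import numpy as np

def solve_costate_initial_value(terminal_values, Qf, n):
    """Solve Eq. (3.37) for lambda_0, exploiting that the residual
    lambda_M(tf) - Qf x_M(tf) is an affine function of lambda_0."""
    def residual(lam0):
        xf, lf = terminal_values(lam0)
        return lf - Qf @ xf                 # left-hand side of Eq. (3.37)
    r0 = residual(np.zeros(n))
    # columns of the constant Jacobian, probed with unit vectors
    J = np.stack([residual(np.eye(n)[:, k]) - r0 for k in range(n)], axis=1)
    return np.linalg.solve(J, -r0)

# toy affine stand-in (hypothetical) just to exercise the call:
# x(tf) = 1 + 0.5 lam0 and lam(tf) = 2 lam0 with Qf = 1 give lam0 = 2/3
tv = lambda l0: (np.array([1.0 + 0.5 * l0[0]]), np.array([2.0 * l0[0]]))
lam0 = solve_costate_initial_value(tv, np.array([[1.0]]), 1)
```

This costs $$n+1$$ evaluations of the truncated series at $$t_f$$, which is cheap compared with iterating a nonlinear root-finder.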
The iterative method presented in this article, based on the EVD policy for the TDOCPs, is denoted by 'iterative EVD'.

Example 4.1 Consider a linear time-delay system described by

$$\begin{cases}\dot{x}(t)=x(t)+x(t-1)+u(t), & 0\leq t\leq 2,\\ x(t)=1, & -1\leq t\leq 0,\end{cases}\quad(4.1)$$

where the quadratic cost functional to be minimized is in the form

$$J=\frac{3}{2}x^2(2)+\int_{0}^{2}u^2(t)\,dt.\quad(4.2)$$

The problem is to find the optimal control $$u^*(t)$$ which minimizes expression (4.2) subject to the time-delay system (4.1). Using the maximum principle for the TDOCPs, the optimal control $$u^*(t)$$ for this problem is given by

$$u^*(t)=\begin{cases}\delta\left(e^{2-t}+(1-t)e^{1-t}\right), & 0\leq t\leq 1,\\ \delta e^{2-t}, & 1\leq t\leq 2,\end{cases}\quad(4.3)$$

where $$\delta\simeq -0.3931594896$$. This problem was solved using an averaging approximations method (AAM) (see Banks & Burns, 1987), single-term Walsh series (SWS) (see Pananisamy & Rao, 1983), the variational iteration method (VIM) (see Mirhosseini-Alizamini et al., 2015) and a recursive shooting method (RSM) (see Jajarmi & Hajipour, 2016). Here, we applied the proposed procedure described in Section 3 to solve this problem. For this purpose, we select $$M=8$$. In Table 1, a comparison is made between the exact solution $$u^*(t)$$ and the approximate values of $$u(t)$$ computed by the iterative EVD (with $$M=8$$), the AAM and the SWS. Table 2 compares the minimum value of $$J$$ obtained by the proposed method with the exact solution and those achieved by the AAM, SWS, VIM and RSM. As shown in Tables 1 and 2, the approximate solution of the new iterative EVD is more accurate than those obtained by the other existing methods. In order to demonstrate the validity of the numerical findings reported in Tables 1 and 2, we have determined the relative error given by Equation (3.35) for $$M=8$$, which is $$\left|\frac{J^{(8)}-J^{(7)}}{J^{(8)}}\right|=8.5013\times 10^{-6}$$. Moreover, the eighth-order suboptimal trajectory-control pair $$\left(x^{(8)}(t),u^{(8)}(t)\right)$$ together with the optimal solutions are shown in Fig. 1.
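Since the closed-form control (4.3) is elementary, the 'Exact' column of Table 1 can be reproduced directly; the short check below evaluates Eq. (4.3) with the reported value of $$\delta$$:

```python
import math

DELTA = -0.3931594896            # value of delta reported below Eq. (4.3)

def u_exact(t):
    """Exact optimal control of Example 4.1, Eq. (4.3)."""
    if t <= 1.0:
        return DELTA * (math.exp(2 - t) + (1 - t) * math.exp(1 - t))
    return DELTA * math.exp(2 - t)

u0 = u_exact(0.0)   # compare with the t = 0.00 row of Table 1 (about -3.9737958)
```

Note that the two branches agree at $$t=1$$ (both equal $$\delta e$$), so the control is continuous there.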
This figure shows that the iterative EVD is sufficiently accurate. Figure 2 provides a comparison of the computing time between the iterative EVD, VIM and RSM. Note that a comparison of the central processing unit (CPU) time with the AAM and SWS cannot be made, because the elapsed CPU times for these methods have not been reported. In addition, the new technique in this article provides the solution in an iterative manner, so comparing the CPU time with iterative schemes is more reasonable. The comparison results in Fig. 2 reveal the computational efficiency of the new iterative method.

Table 1 Approximate and exact values of $$u(t)$$ for Example 4.1

$$t$$    AAM (Banks & Burns, 1987)    SWS (Pananisamy & Rao, 1983)    Iterative EVD with $$M=8$$    Exact
0.00    -3.9734    -3.9730    -3.9737950    -3.9737958
0.19    -3.1201    -3.1178    -3.1182474    -3.1182473
0.39    -2.3973    -2.4079    -2.4082882    -2.4082883
0.59    -1.8376    -1.8528    -1.8532558    -1.8532558
0.80    -1.4082    -1.4008    -1.4013768    -1.4013766
1.00    -1.0884    -1.0680    -1.0687181    -1.0687182
1.20    -0.8604    -0.8747    -0.8749922    -0.8749925
1.41    -0.6961    -0.7093    -0.7092551    -0.7092551
1.61    -0.5678    -0.5810    -0.5806892    -0.5806890
1.81    -0.4635    -0.4759    -0.4754277    -0.4754279
2.00    -0.3842    -0.3937    -0.3931598    -0.3931594

Table 2 Comparison results of $$J$$ for Example 4.1

Method    Cost functional value $$J$$
AAM (Banks & Burns, 1987)    3.0833
SWS (Pananisamy & Rao, 1983)    3.0879
VIM (Mirhosseini-Alizamini et al., 2015)    3.1091
RSM (Jajarmi & Hajipour, 2016)    3.1017
Iterative EVD $$(M=8)$$    3.10165735
Exact    3.10165726

Fig. 1. Simulation curves of the state and control variables for Example 4.1.

Fig. 2. Comparison results of the elapsed CPU time for Example 4.1.

Example 4.2 Consider the optimal control of a harmonic oscillator with retarded damping. It involves the minimization of the cost functional

$$J=5x_1^2(2)+\frac{1}{2}\int_{0}^{2}u^2(t)\,dt,\quad(4.4)$$

subject to the second-order time-delay system

$$\begin{cases}\dot{x}_1(t)=x_2(t), & 0\leq t\leq 2,\\ \dot{x}_2(t)=-x_1(t)-x_2(t-1)+u(t), & 0\leq t\leq 2,\end{cases}\quad(4.5)$$

where the initial state is

$$\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix}=\begin{bmatrix}10\\ 0\end{bmatrix},\quad -1\leq t\leq 0.\quad(4.6)$$

The problem is to find the optimal control $$u^*(t)$$ which minimizes expression (4.4) subject to the time-delay system (4.5)–(4.6). Applying the maximum principle for time-delay systems, the optimal control $$u^*(t)$$ for this problem is given by

$$u^*(t)=\begin{cases}\delta\sin(2-t)+\frac{\delta}{2}(1-t)\sin(t-1), & 0\leq t<1,\\ \delta\sin(2-t), & 1\leq t\leq 2,\end{cases}\quad(4.7)$$

where $$\delta\simeq 2.5599383203$$.
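As in the previous example, the closed-form control (4.7) can be evaluated directly with the reported value of $$\delta$$, which reproduces the 'Exact' column of Table 3 below:

```python
import math

DELTA = 2.5599383203             # value of delta reported below Eq. (4.7)

def u_exact(t):
    """Exact optimal control of Example 4.2, Eq. (4.7)."""
    if t < 1.0:
        return DELTA * math.sin(2 - t) + (DELTA / 2) * (1 - t) * math.sin(t - 1)
    return DELTA * math.sin(2 - t)

u0 = u_exact(0.0)   # compare with the t = 0.00 row of Table 3 (about 1.2506884)
```

At the final time the control vanishes, $$u^*(2)=\delta\sin(0)=0$$, in agreement with the last row of Table 3.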
The optimal control law for this problem was approximated by the AAM (see Banks & Burns, 1987), a combined parameter and function optimization algorithm (PFOA) (see Lee, 1993), linear Legendre multi-wavelets (LMW) (see Khellat, 2009), a hybrid functions approximation approach (HFAA) (see Haddadi et al., 2012), the VIM (see Mirhosseini-Alizamini et al., 2015) and the RSM (see Jajarmi & Hajipour, 2016). In Table 3, the optimal solution $$u^*(t)$$ and the approximate values of $$u(t)$$ obtained by the iterative EVD with $$M=12$$ and the AAM are compared. Table 4 compares the minimum value of $$J$$ computed by the iterative EVD with the exact solution and those achieved by the AAM, PFOA, LMW, HFAA, VIM and RSM. In order to demonstrate the validity of the numerical findings reported in Tables 3 and 4, we have determined the relative error given by Equation (3.35) for $$M=12$$, which is $$\left|\frac{J^{(12)}-J^{(11)}}{J^{(12)}}\right|=2.0152\times 10^{-6}$$. The approximate solutions of $$x_1(t)$$, $$x_2(t)$$ and $$u(t)$$ for $$M=12$$, together with the optimal solutions, are shown in Fig. 3. The numerical results reveal the validity and high accuracy of the suggested technique. The elapsed CPU times of the iterative EVD, VIM and RSM for different iteration counts are compared in Fig. 4, which verifies the computational efficiency of the new iterative scheme.
Table 3 Approximate and exact values of $$u(t)$$ for Example 4.2

$$t$$    AAM (Banks & Burns, 1987)    Iterative EVD with $$M=12$$    Exact
0.00    1.2295    1.2506884    1.2506884
0.20    1.7261    1.7584348    1.7584349
0.41    2.0828    2.1393127    2.1393125
0.61    2.2694    2.3284277    2.3284279
0.81    2.2684    2.3306378    2.3306379
1.02    2.0889    2.1260220    2.1260220
1.22    1.7746    1.8003518    1.8003519
1.42    1.3785    1.4029075    1.4029074
1.62    0.9443    0.9495337    0.9495335
1.82    0.4536    0.4583043    0.4583046
2.00    0.0000    0.0000000    0.0000000

Table 4 Comparison results of $$J$$ for Example 4.2

Method    Cost functional value $$J$$
AAM (Banks & Burns, 1987)    3.2587
PFOA (Lee, 1993)    3.4827
LMW (Khellat, 2009)    3.4325
HFAA (Haddadi et al., 2012)    3.2166
VIM (Mirhosseini-Alizamini et al., 2015)    3.3991
RSM (Jajarmi & Hajipour, 2016)    3.39912210
Iterative EVD $$(M=12)$$    3.39911810
Exact    3.39911806

Fig. 3. Simulation curves of the state and control variables for Example 4.2.

Fig. 4. Comparison results of the elapsed CPU time for Example 4.2.

Example 4.3 Consider the following nonlinear time-delay system
$$\begin{cases} \dot{x}(t)=x(t)\sin(x(t))+x(t-1)+u(t), & 0\le t\le 2,\\ x(t)=10, & -1\le t\le 0. \end{cases} \tag{4.8}$$
The quadratic cost functional to be minimized is given by
$$J=\frac{1}{2}x^{2}(2)+\frac{1}{2}\int_{0}^{2}\left(x^{2}(t)+u^{2}(t)\right)\mathrm{d}t. \tag{4.9}$$
This problem has been solved by a semigroup approximation method (SAM) (see Banks, 1979), the PFOA (see Lee, 1993), a modified quasi-Newton method (MQNM) (see Wong et al., 1985) and a hybrid functions expansion technique (HFET) (see Nazemi & Shabani, 2015).
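The cost functional (4.9) is evaluated numerically once a candidate trajectory and control are available on a grid. A short sketch, using a hand-rolled composite trapezoidal rule (the function name and grid size are illustrative choices, not taken from the paper):

```python
import numpy as np

def cost(t, x, u):
    """Evaluate the quadratic cost (4.9),
    J = (1/2) x(2)^2 + (1/2) * integral_0^2 (x^2 + u^2) dt,
    approximating the integral by the composite trapezoidal rule."""
    integrand = x**2 + u**2
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2.0
    return 0.5 * x[-1]**2 + 0.5 * integral

# Sanity check on the constant pair x(t) = 1, u(t) = 0 on [0, 2]:
# J = 0.5 * 1 + 0.5 * 2 = 1.5 (the trapezoidal rule is exact here).
t = np.linspace(0.0, 2.0, 101)
J_const = cost(t, np.ones_like(t), np.zeros_like(t))
```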
Pontryagin's necessary conditions of optimality for the nonlinear TDOCP given by Equations (4.8)–(4.9) are
$$\begin{cases} \dot{x}(t)=x(t)\sin(x(t))+x(t-1)-\lambda(t), & 0\le t\le 2,\\ \dot{\lambda}(t)=\begin{cases} -x(t)-\lambda(t)\left(\sin(x(t))+x(t)\cos(x(t))\right)-\lambda(t+1), & 0\le t<1,\\ -x(t)-\lambda(t)\left(\sin(x(t))+x(t)\cos(x(t))\right), & 1\le t\le 2, \end{cases}\\ x(t)=10,\quad -1\le t\le 0,\qquad \lambda(2)=x(2). \end{cases} \tag{4.10}$$
If we implement the new iterative EVD to solve this problem, the nonlinear terms, together with the delay and advance terms in Equation (4.10), are no longer treated as unknowns, since they are obtained from the previous iteration. Simulation results, including the cost functional value, relative error and elapsed CPU time for different iterations, are listed in Table 5. From Table 5 it is observed that a minimum value of $$J^{(10)}=161.911429$$ is achieved after the tenth step of the iterative process. As shown in Table 6, this value of $$J^{(10)}$$ compares very well with those computed by the SAM, PFOA, MQNM, HFET and RSM. In addition, the approximate solution of the RSM is achieved within about $$1.670046$$ s of CPU time, while the minimum value of $$J$$ for the new iterative scheme with $$M=10$$ is obtained within $$0.906732$$ s of CPU time. Simulation curves of $$x^{(10)}(t)$$ and $$u^{(10)}(t)$$, which are very close to the optimal trajectory and control, respectively, are depicted in Fig. 5.
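The key structural idea, freezing the nonlinear, delay and advance terms of (4.10) at the previous iterate so that each sweep only integrates linear non-delay IVPs, can be sketched as follows. This is NOT the paper's eigenvalue-decomposition scheme: the plain Euler stepping, the grid size and the number of sweeps are illustrative assumptions chosen for brevity, and no convergence claim is attached to this sketch.

```python
import numpy as np

# Successive-approximation sketch for the conditions (4.10) of Example 4.3:
# with the coupling terms moved to the right-hand side (frozen at the previous
# iterate), the state is integrated forward and the costate backward.

T, tau, N = 2.0, 1.0, 400        # horizon, delay and number of grid intervals
h = T / N
t = np.linspace(0.0, T, N + 1)
d = int(round(tau / h))          # grid shift equivalent to the delay tau

x = np.full(N + 1, 10.0)         # initial guess: the constant initial function
lam = np.zeros(N + 1)

for _ in range(3):               # a few sweeps, purely for illustration
    x_prev, lam_prev = x.copy(), lam.copy()

    # Forward sweep for the state; x(t - tau) = 10 on the initial interval.
    x = np.empty(N + 1)
    x[0] = 10.0
    for k in range(N):
        x_delayed = x_prev[k - d] if k >= d else 10.0
        f = x_prev[k] * np.sin(x_prev[k]) + x_delayed - lam_prev[k]
        x[k + 1] = x[k] + h * f

    # Backward sweep for the costate; the advance term vanishes for t >= 1.
    lam = np.empty(N + 1)
    lam[N] = x[N]                # transversality condition lambda(2) = x(2)
    for k in range(N, 0, -1):
        g = -x_prev[k] - lam_prev[k] * (np.sin(x_prev[k])
                                        + x_prev[k] * np.cos(x_prev[k]))
        if t[k] < tau:
            g -= lam_prev[k + d]
        lam[k - 1] = lam[k] - h * g

u = -lam                         # the stationarity condition gives u = -lambda
```

Each sweep only ever solves initial value problems without delay or advance arguments, which is what makes the forward and backward integrations independent of each other within an iteration and hence amenable to parallel processing.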
Table 5 Simulation results of the iterative EVD for Example 4.3

$$i$$    $$J^{(i)}$$    $$\left|\frac{J^{(i)}-J^{(i-1)}}{J^{(i)}}\right|$$    CPU time (s)
0    30.549578    —    0.261805
1    125.982628    0.757509    0.340543
2    171.152178    0.263914    0.369901
3    160.081549    0.069156    0.787469
4    161.888611    0.011162    0.804968
5    161.968026    0.000490    0.822451
6    161.894233    0.000455    0.839542
7    161.910492    0.000100    0.856647
8    161.913037    0.000015    0.873192
9    161.911239    0.000011    0.889925
10    161.911429    0.000001    0.906732

Table 6 Comparison results of $$J$$ for Example 4.3

Method    Cost functional value $$J$$
SAM (Banks, 1979)    162.019
MQNM (Wong et al., 1985)    162.104
PFOA (Lee, 1993)    162.001
HFET (Nazemi & Shabani, 2015)    161.94
RSM (Jajarmi & Hajipour, 2016)    161.924538
Iterative EVD $$(M=10)$$    161.911429

Fig. 5. Simulation curves of the state and control variables for Example 4.3.

5. Conclusion This article presents a novel approach, denoted the ‘iterative EVD’, to solve the OCP of linear time-delay systems. Applying the iterative EVD, the optimal control law and the optimal trajectory are determined in the form of uniformly convergent series by recursively solving only a sequence of decoupled linear first-order non-delay IVPs. The proposed procedure also facilitates parallel processing, which improves its computational efficiency. The iterative EVD is therefore of practical value in reducing the complexity of the computations. The presented results demonstrate the efficiency, simplicity and high accuracy of the suggested technique. Future work can focus on extending the iterative EVD to the OCP of nonlinear time-delay systems. Moreover, it may be possible to extend the current framework to the switching case, as systematically investigated by Kang et al.
(2014) and (2016).

References

Banks, H. T. (1979) Approximation of nonlinear functional differential equation control systems. J. Optim. Theory Appl., 29, 383–408.
Banks, H. T. & Burns, J. A. (1987) Hereditary control problems: numerical methods based on averaging approximations. SIAM J. Control Optim., 16, 169–208.
Chen, C., Sun, D. Y. & Teo, C. Y. (2000) Numerical solution of time-delayed optimal control problems by iterative dynamic programming. Optim. Control Appl. Meth., 21, 91–105.
Dadebo, S. A. & Luus, R. (1992) Optimal control of time-delay systems by dynamic programming. Optim. Control Appl. Meth., 13, 29–41.
Dadkhah, M., Farahi, M. H. & Heydari, A. (2015) Optimal control of a class of non-linear time-delay systems via hybrid functions. IMA J. Math. Control Info., https://doi.org/10.1093/imamci/dnv044.
Eller, D. H., Aggarwal, J. K. & Banks, H. T. (1969) Optimal control of linear time delay systems. IEEE Trans. Autom. Control, 14, 678–687.
Fridman, E. (2014) Introduction to Time-Delay Systems: Analysis and Control. Basel: Birkhäuser.
Ganjefar, S., Najibi, S. & Momeni, A. (2011) A novel structure for the optimal control of bilateral teleoperation systems with variable time delay. J. Frankl. Inst., 348, 1537–1555.
Haddadi, N., Ordokhani, Y. & Razzaghi, M. (2012) Optimal control of delay systems by using a hybrid functions approximation. J. Optim. Theory Appl., 153, 338–356.
He, J. H. (1998) An approximate solution technique depending upon an artificial parameter. Commun. Nonlinear Sci., 3, 92–97.
Hewer, G. A. (1972) A note on controllability of linear systems with time delay. IEEE Trans. Autom. Control, 17, 733–734.
Horng, I. & Chou, J. (1985) Analysis, parameter estimation and optimal control of time-delay systems via Chebyshev series. Int. J. Control, 41, 1221–1234.
Hwang, C. & Shih, Y. P. (1985) Optimal control of delay systems via block pulse functions. J. Optim. Theory Appl., 45, 101–112.
Inoue, K., Akashi, H. & Ogino, K. (1971) Sensitivity approaches to optimization of linear systems with time-delay. Automatica, 17, 671–676.
Jajarmi, A. & Hajipour, M. (2016) An efficient recursive shooting method for the optimal control of time-varying systems with state time-delay. Appl. Math. Model., 40, 2756–2769.
Jamshidi, M. & Malek-Zavarei, M. (1972) Suboptimal design of linear control systems with time delay. Proc. IEE, 119, 1743–1746.
Kang, Y., Zhai, D. H., Liu, G. P., Zhao, Y. B. & Zhao, P. (2014) Stability analysis of a class of hybrid stochastic retarded systems under asynchronous switching. IEEE Trans. Autom. Control, 59, 1511–1523.
Kang, Y., Zhai, D. H., Liu, G. P. & Zhao, Y. B. (2016) On input-to-state stability of switched stochastic nonlinear systems under extended asynchronous switching. IEEE Trans. Cybern., 46, 1092–1105.
Kharatishvili, G. L. (1961) The maximum principle in the theory of optimal processes with time-lags. Dokl. Akad. Nauk SSSR, 136, 39–42.
Khellat, F. (2009) Optimal control of linear time-delayed systems by linear Legendre multiwavelets. J. Optim. Theory Appl., 143, 107–121.
Lee, A. Y. (1993) Numerical solution of time-delayed optimal control problems with terminal inequality constraints. Optim. Control Appl. Meth., 14, 203–210.
Malek-Zavarei, M. & Jamshidi, M. (1987) Time Delay Systems: Analysis, Optimization and Applications. Amsterdam: North-Holland.
Mirhosseini-Alizamini, S. M., Effati, S. & Heydari, A. (2015) An iterative method for suboptimal control of linear time-delayed systems. Syst. Control Lett., 82, 40–50.
Mohan, B. M. & Kar, S. K. (2010) Orthogonal functions approach to optimal control of delay systems with reverse time terms. J. Frankl. Inst., 347, 1723–1739.
Nazemi, A. & Shabani, M. M. (2015) Numerical solution of the time-delayed optimal control problems with hybrid functions. IMA J. Math. Control Info., 32, 623–638.
Pananisamy, K. R. & Rao, G. P. (1983) Optimal control of linear systems with delays in state and control via Walsh functions. IEE Proc., 130, 300–312.
Perng, M. H. (1986) Direct approach for the optimal control of linear time-delay systems via shifted Legendre polynomials. Int. J. Control, 43, 1897–1904.
Taylor, A. E. & Lay, D. C. (1980) Introduction to Functional Analysis. New York: Wiley.
Wong, K. H., Clements, D. J. & Teo, K. L. (1985) Optimal control computation for nonlinear time-lag systems. J. Optim. Theory Appl., 47, 91–107.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
IMA Journal of Mathematical Control and Information – Oxford University Press
Published: Sep 21, 2018