# Chebyshev inclusion functions based symplectic algorithm for solving non-linear optimal control problem with interval uncertainty

## Abstract

This article proposes a non-probabilistic interval analysis algorithm to obtain the influence domain of a non-linear optimal control problem with interval uncertain parameters. The proposed algorithm is constructed mainly from the symplectic algorithm and the Chebyshev interval method. First, the interval model is used to describe the uncertainties arising in the optimal control problem. These uncertainties are uncertain-but-bounded initial conditions or parameter variables; they are expressed through the lower and upper bounds of the variables, without probability distributions. Then the Chebyshev inclusion function is employed to represent the relation between the input uncertain parameters and the responses of the optimal control evaluation. In this way, the optimal control problem with interval variables is transformed into a new set of optimal control problems with deterministic variables, and the symplectic algorithm can be applied directly to solve these deterministic problems. Finally, with the solutions of the deterministic optimal control problems, interval operations are used to obtain the influence domain, i.e., the bounds of the responses of the states and control inputs. The effectiveness of the proposed method is verified on a simple uncertain optimal control problem and on an optimal orbital rendezvous problem with uncertainties in astrodynamics.

## 1. Introduction

Optimal control theory of dynamic systems is extensively involved in astrodynamics (Peng et al., 2014b). Theories and algorithms for solving the open-loop optimal control problem have experienced rapid growth over the past decades (Peng et al., 2014a). However, spacecraft dynamical systems are too complex to be treated as deterministic systems.
On the one hand, uncertainties are inevitable in data collection, dynamics modelling and analytical methods. These uncertainties may significantly change the results of the open-loop optimal control problem, and hence influence spacecraft guidance and trajectory design in astrodynamics. On the other hand, considerations such as collision avoidance and thrust estimation are serious topics for spacecraft missions. A strong consensus has developed that uncertainty quantification (UQ) must play a prominent role in the future in understanding the physics of non-linear dynamic systems and certifying their stability (Pettit, 2004). UQ is an important tool that enables quantitative risk analysis, the goal of which is to provide rational guidance for design and certification decisions (Pettit, 2004). Many tools are available to account for the above uncertainties in the optimal control problem. The basic strategy for dealing with uncertainties in optimal control is closed-loop feedback (Dorf & Bishop, 2011). For example, robust control methods have been developed extensively to reduce sensitivity to uncertain parameters (Pritchard & Townley, 1991; Büskens & Maurer, 2001; Diehl et al., 2005; Sakthivel & Ito, 2007; Ibrir, 2008), from which a deterministic control law is obtained. However, the influence domain of uncertainties in an open-loop optimal control problem is also important in some non-critical prior design tasks (Loxton et al., 2011), such as trajectory design (Golnaraghi & Kuo, 2010) and the reservation of safety factors, and it has not received enough attention. It should be noted that the solutions obtained from feedback control, such as robust control, say nothing about the propagation and influence domain of an uncertain system.
A comprehensive review of the uncertainty propagation of ordinary differential equations (ODEs) using interval analysis, along with many instances of orbital uncertainty propagation in aerocapture manoeuvres, is given in the report by Zazzera et al. (2006). The propagation and influence domain of uncertainty from the dynamical model, the initial conditions and the model parameters (Zazzera et al., 2006) are likewise essential research topics for the open-loop optimal control problem in astrodynamics. This article focuses on estimating the propagation and influence domain of uncertainty in an open-loop non-linear optimal control problem. So far, many methodologies have been developed to incorporate various uncertainties into the optimal control problem. Almost all of these methods are based on probabilistic techniques, including stochastic optimal control, probabilistic approximation methods and polynomial chaos approximation methods. Stochastic optimal control has received the most attention among the statistical methods. It assumes that the state of the controlled system over time is a stochastic process; this process may be impossible to measure exactly, and the measurement itself is usually subject to noise. The principles most commonly used to solve the stochastic optimal control problem are Pontryagin's maximum principle and Bellman's dynamic programming (Yong & Zhou, 1999). These two approaches lead to two different classes of methodologies that were developed separately and independently, but they are in fact equivalent. Since the theoretical basis of the stochastic maximum principle was established by Peng (1990), it can be applied to optimal control systems with stochastic inputs.
Although significant progress has been made in methodologies for solving stochastic optimal control, they all rest on the assumption that the distribution of the uncertain input parameters is known, and the objective is to minimize the expectation of the terminal and integral costs (Stengel, 1986). Many probabilistic approximation methods founded on the Monte Carlo method have also been proposed; they likewise assume that the distribution of the input uncertain parameters is known. The essence of these approximation methods is to model the problem by statistical sampling, while the expected value of the cost function is approximated in different ways. Cao et al. (2003) presented a Monte Carlo method with a variance reduction technique, employing the sensitivity derivative of the state variables with respect to the uncertain parameters. Kleywegt et al. (2002) regarded the uncertain optimal control problem as a stochastic discrete optimization problem and applied a Monte Carlo-based sample average approximation to help solve it. Dorociak & Gausemeier (2014) transformed the uncertain optimal control problem into a multi-objective optimization problem, in which two objectives based on a straightforward approximation are employed to determine the Pareto optimal control sequences. Phelps et al. (2016) provided further analysis and proofs extending consistency results for the deterministic and uncertain optimal control problems, in a computational framework also based on the approximation of sample averages by the Monte Carlo method. Polynomial chaos (PC) approximation methods are popular non-statistical probability methods, because their computational cost is far lower than that of Monte Carlo simulation.
Since Xiu & Karniadakis (2002) presented the generalized polynomial chaos (gPC) framework, which employs the Wiener–Askey (Askey & Wilson, 1985) orthogonal polynomial chaos expansion to represent various stochastic processes, the PC method has become applicable to the approximation of various distributions. Hover (2008) employed the PC approach to replace random variables with free variables in trajectory optimization. Fisher & Bhattacharya (2011) and Huschto & Sager (2013) used the framework of polynomial chaos expansion to transform the stochastic optimal control problem into deterministic optimal control problems; this methodology was then applied, respectively, to the trajectory generation problem of constrained mechanical systems and to preserving the feedback character of optimal Markov decision rules. In all the above methods, the uncertainties in the controlled model are assumed to be aleatory. However, the optimal control problems encountered in astrodynamics usually involve imperfect, unknown information. These problems are often subject to uncertainties such as atmospheric influences (Wilkins & Alfriend, 2000), wind, the gravitational constant, solar radiation, etc. (MaCdonald & Badescu, 2014). Probabilistic methods are effective when complete statistical information about the uncertain parameters is known a priori, but they are no longer applicable when only the upper and lower bounds of the uncertain parameters are known (Wu et al., 2013a). Many non-probabilistic methods are available to deal with such uncertainty, including the evidential reasoning (ER) approach (Dempster et al., 1977), fuzzy set theory (Zadeh, 1965), evidence theory (Shafer, 1976), convex models (Ben-Haim & Elishakoff, 1990; Jiang et al., 2011) and interval analysis (Moore, 1966; Hansen, 1969).
Among these methods, interval analysis has recently gained popularity in various fields for its explicit arithmetic and its ability to verify or enclose solutions (Alefeld & Mayer, 2000). Essentially, the interval model is a convex model that encloses the possible values of an uncertain parameter (Jiang et al., 2007b), requiring only the lower and upper bounds of the uncertain parameters. It is therefore more rational to describe such uncertainties and their influence using the interval model. The direct application of interval arithmetic leads to a conservative result with large overestimation, and it often suffers from a high computational load (Ishizaki et al., 2016). Indirect methods include general approaches, such as optimization, and some specialized methods. For example, the IP-GA optimization approach can be employed to obtain the output interval of an interval model (Jiang et al., 2007a), and interval quadratic programming (Ishizaki et al., 2016) was designed for a special class of interval models consisting of the minimizer of a parametric quadratic program. However, interval analysis has rarely been applied to handle the uncertainties in optimal control problems. The interval inclusion function is a suitable tool for obtaining the propagation and influence domain of an uncertain optimal control problem. In this article, an interval analysis-based method is employed for the first time to solve the uncertain optimal control problem with uncertain parameters of unknown distribution. A non-probabilistic Chebyshev interval method combined with the symplectic algorithm is then proposed to estimate the interval bounds of the state, costate and control variables. In contrast to the aforementioned probabilistic methods, the Chebyshev interval method seeks the bounds of the uncertain output based on the interval model, rather than solving for an output expectation in problems whose uncertain parameters have known probability distributions.
Hence, given an interval for the uncertain output, the uncertain optimal control can deliver solutions that meet the system requirements in all potential cases. The proposed method may be regarded as a worst-case analysis method, which can improve the safety of optimal trajectory design in astrodynamics.

## 2. Deterministic non-linear optimal control problem

### 2.1. Modelling of the deterministic optimal control problem

For the deterministic optimal control problem, the controlled dynamic system is expressed as $$\label{eq1} \dot{{\bf x}}\left(t \right)=f\left({{\bf x}}\left(t \right),{\bf u}\left(t \right),t \right)$$ (2.1) and the goal is to find a $${{\bf u}}$$ which minimizes the following cost function \begin{align} \label{eq2} J=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t \right)} \,{\rm d}t \end{align} (2.2) in which $$t$$ denotes time, the dot represents the derivative with respect to time, $${{\bf x}}$$ is the state vector, $${{\bf u}}$$ is the control input vector, and $$J$$ is the cost function. When the goal is attained, an optimal control $${{\bf u}}^{\ast}$$ is found and the corresponding state vector $${{\bf x}}^{\ast}$$ is also determined. In this article, we focus mainly on the optimal control problem with fixed terminal time, i.e., $$t_{f}$$ is given. Many excellent methods can be used to solve the optimal control problem (Loxton et al., 2011); among them, the calculus of variations is one of the most effective for solving the deterministic problem defined by Equations (2.1) and (2.2). In the calculus of variations, the optimal control problem is transformed into a non-linear two-point boundary value problem (TPBVP).
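This transformation can be summarized by the standard first-order necessary conditions below (a textbook summary, stated for an unconstrained control and a free terminal state):

```latex
% Hamiltonian and first-order necessary conditions for (2.1)-(2.2)
H\left(\mathbf{x},\mathbf{u},\boldsymbol{\lambda},t\right)
  = {\it\Phi}\left(\mathbf{x},\mathbf{u},t\right)
  + \boldsymbol{\lambda}^{\rm T} f\left(\mathbf{x},\mathbf{u},t\right)

\dot{\mathbf{x}} = \frac{\partial H}{\partial \boldsymbol{\lambda}}, \qquad
\dot{\boldsymbol{\lambda}} = -\frac{\partial H}{\partial \mathbf{x}}, \qquad
\frac{\partial H}{\partial \mathbf{u}} = \mathbf{0}

\mathbf{x}\left(t_{0}\right) = \mathbf{x}_{0}, \qquad
\boldsymbol{\lambda}\left(t_{f}\right) = \mathbf{0}
```

Eliminating $${{\bf u}}$$ from the stationarity condition $$\partial H/\partial {{\bf u}}={{\bf 0}}$$ gives $${{\bf u}}=g\left({{\bf x}},{{\boldsymbol\lambda}}\right)$$, which is why the Hamiltonian used by the symplectic algorithm below depends on the state and costate variables only.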
Many methods have been proposed for solving the TPBVP, including numerical iteration employing an implicit Lagrange function and derivative-based Newton shooting (Majji et al., 2009), an alternative approach (Lizia et al., 2008), a sixth-order scheme (Armellin & Topputo, 2006), and the symplectic algorithm (Peng et al., 2011). In this study, the symplectic algorithm is adopted to solve the deterministic optimal control problem, as it has proved successful in trajectory optimization in astrodynamics (Peng et al., 2011). A brief review of the symplectic algorithm follows; more details can be found in Peng et al. (2011).

### 2.2. Symplectic algorithm for solving the deterministic non-linear optimal control problem

To construct a symplectic algorithm, a modified action $$U$$ in a time interval $$\left({0,\eta} \right)$$ is defined as follows \begin{align}\label{eq2.3} U={{\boldsymbol\lambda}}_{\eta}^{{\rm T}} {{\bf x}}_{\eta} -\int\limits_0^\eta {\left({{{\boldsymbol\lambda}}^{{\rm T}} \dot{{\bf x}}-H} \right){\rm d}t}, \end{align} (2.3) where $${{\bf x}}_{\eta}$$ and $${{\boldsymbol\lambda}}_{\eta}$$ are the state and costate variables at time $$\eta$$, respectively. The Hamiltonian function $$H\left({{\bf x}},{{\boldsymbol\lambda}} \right)$$ depends only on the state $${{\bf x}}$$ and the costate $${{\boldsymbol\lambda}}$$, i.e., \begin{align}\label{eq2.4} H\left({{{\bf x}},{{\boldsymbol\lambda}}} \right)={\it {\Phi}} \left( {{{\bf x}}\left(t \right),g\left({{{\bf x}},{{\boldsymbol\lambda }}} \right),t} \right)+{{\boldsymbol\lambda}}^{{\rm T}}f\left( {{{\bf x}}\left(t \right),g\left({{{\bf x}},{{\boldsymbol\lambda }}} \right),t} \right).
\end{align} (2.4) Differentiating both sides of Equation (2.3) yields \begin{align}\label{eq2.5} {\rm d}U={{\boldsymbol\lambda}}_{0}^{{\rm T}} {\rm d}{{\bf x}}_{0} +{{\bf x}}_{\eta}^{{\rm T}} {\rm d}{{\boldsymbol\lambda}}_{\eta}, \end{align} (2.5) where $${{\bf x}}_{0}$$ and $${{\boldsymbol\lambda}}_{0}$$ are the state and costate variables at the initial time, respectively. Equation (2.5) implies that if the Hamiltonian canonical equations are satisfied within the time interval $$\left({0,\eta} \right)$$, and the state variable $${{\bf x}}_{0}$$ and costate variable $${{\boldsymbol\lambda}}_{\eta}$$ at the two ends of the time interval are taken as the independent variables, then $$U$$ is a function of $${{\bf x}}_{0}$$ and $${{\boldsymbol\lambda}}_{\eta}$$ only. To formulate the numerical algorithm, the whole time domain $$\left({t_{0}, t_{f}} \right)$$ is divided into a series of equal time intervals; the interval length is $$\eta$$ and the number of time intervals is $${\it {\Omega}}$$.
Then the state variable $${{\bf x}}\left(t \right)$$ and costate variable $${{\boldsymbol\lambda}}\left(t \right)$$ within the $$j$$th subinterval are approximated using Lagrange polynomials of order $$m -1$$ and $$n - 1$$, respectively, i.e., \begin{gather} \label{eq3} {{\bf x}}\left(t \right)=\left(M_{1} \otimes {{\bf I}} \right){{\bf x}}_{j-1} +\left(\tilde{{{\bf M}}}\otimes {{\bf I}} \right){\overline{{{\bf x}}}}_{j}\\ \end{gather} (2.6) \begin{gather} \label{eq4} {{\boldsymbol\lambda}}\left(t \right)=\left( \underset{{\sim}}{\bf N} \otimes {{\bf I}} \right) \overline{{\boldsymbol\lambda}}_{j} +\left({N_{n} \otimes {{\bf I}}} \right){{\boldsymbol\lambda}}_{j} \end{gather} (2.7) in which the vector $${{\bf x}}_{j-1}$$ denotes the state variable at the left end of the $$j$$th subinterval, $${{\boldsymbol\lambda}}_{j}$$ denotes the costate variable at the right end of the $$j$$th subinterval, the vector $$\overline{{\bf x}}_{j}$$ consists of the state variables at the interpolation points within the $$j$$th subinterval, i.e., $$\overline{{\bf x}}_{j} =\left\{\overline{{\bf x}}_{j}^{2}, \overline{{\bf x}}_{j}^{3}, \cdots, \overline{{\bf x}}_{j}^{m} \right\}^{{\rm T}}$$, and $$\overline{\boldsymbol{\lambda}}_{j}$$ consists of the dependent costate variables at the interpolation points within the $$j$$th subinterval, i.e., $$\overline{\boldsymbol{\lambda}}_{j} =\left\{\overline{\boldsymbol{\lambda}}_{j}^{1}, \overline{\boldsymbol{\lambda}}_{j}^{2}, \cdots, \overline{\boldsymbol{\lambda}}_{j}^{n-1} \right\}^{{\rm T}}$$. The symbol $$\otimes$$ in Equations (2.6) and (2.7) denotes the Kronecker product of two matrices.
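As a small illustration, the Lagrange basis polynomials entering the approximations (2.6) and (2.7) can be evaluated in a few lines of code (our own sketch, assuming the equispaced nodes $$t_{j} =\left({j-1} \right)\eta /\left({m-1} \right)$$ used in the definitions below; the function name is ours):

```python
import numpy as np

def lagrange_basis(i, t, m, eta):
    """Lagrange basis M_i(t) on m equispaced nodes t_j = (j-1)*eta/(m-1), i = 1..m."""
    nodes = np.array([(j - 1) * eta / (m - 1) for j in range(1, m + 1)])
    val = 1.0
    for j in range(m):
        if j != i - 1:
            val *= (t - nodes[j]) / (nodes[i - 1] - nodes[j])
    return val

eta, m = 0.1, 4
nodes = [(j - 1) * eta / (m - 1) for j in range(1, m + 1)]
print(lagrange_basis(2, nodes[1], m, eta))  # 1.0 at its own node
print(lagrange_basis(2, nodes[0], m, eta))  # 0.0 at the other nodes
```

The interpolation property (each basis is 1 at its own node and 0 at the others) is what makes the end-point values $${{\bf x}}_{j-1}$$ and $${{\boldsymbol\lambda}}_{j}$$ appear directly as coefficients in Equations (2.6) and (2.7).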
The other symbols in Equations (2.6) and (2.7) are defined as \begin{gather} \tilde{{\bf M}}=\left[{M_{2}, M_{3}, \cdots, M_{m} } \right]\\ \end{gather} (2.8) \begin{gather} \underset{{\sim}}{\bf N}=\left[{N_{1}, N_{2} ,\cdots, N_{n-1}} \right]\\ \end{gather} (2.9) \begin{gather} M_{i} =\prod\limits_{j=1,j\ne i}^m {\frac{t-{\left( {j-1} \right)\eta} /{\left({m-1} \right)}}{{\left({i-j} \right)\eta} /{\left({m-1} \right)}}} \end{gather} (2.10) \begin{gather} N_{i} =\prod\limits_{j=1,j\ne i}^n {\frac{t-{\left( {j-1} \right)\eta} /{\left({n-1} \right)}}{{\left({i-j} \right)\eta} /{\left({n-1} \right)}}} \end{gather} (2.11) Substituting Equations (2.6) and (2.7) into Equation (2.3), the approximate action in the $$j$$th subinterval can be expressed as \begin{align}\label{2.12} U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda}}_{j} ,\overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda}}}}_{j}} \right)={{\boldsymbol\lambda}}_{j}^{{\rm T}} \overline{{\bf x}}_{j} -\int\limits_0^\eta {\left({{{\boldsymbol\lambda}}^{{\rm T}} \dot{{\bf x}}-H\left({{{\bf x}},{{\boldsymbol\lambda}}} \right)} \right)_{j} {\rm d}t} \quad j=1,2,\ldots, {\it {\Omega}}.
\end{align} (2.12) According to Equation (2.5), the approximate action $$U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda}}}}_{j}} \right)$$ in the $$j$$th subinterval must be a function only of the state variable at the left end and the costate variable at the right end of the $$j$$th subinterval, i.e., the following equations must be satisfied \begin{gather} \label{eq9} \frac{\partial U_{j} \left({{{\bf x}}_{j-1} ,{{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j} ,{\overline{{\boldsymbol{\lambda}}}}_{j}} \right)}{\partial \overline{{\bf x}}_{j}}={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}\\ \end{gather} (2.13) \begin{gather} \frac{\partial U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda }}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda }}}}_{j}} \right)}{\partial {\overline{{\boldsymbol{\lambda }}}}_{j}}={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}.\label{eq2.14} \end{gather} (2.14) According to Equation (2.5), we also have \begin{gather} \label{eq10} \frac{\partial U_{j} \left({{{\bf x}}_{j-1} ,{{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j} ,{\overline{{\boldsymbol{\lambda}}}}_{j}} \right)}{\partial {{\bf x}}_{j-1}}-{{\boldsymbol\lambda}}_{j-1} ={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}\\ \end{gather} (2.15) \begin{gather} \frac{\partial U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda }}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda }}}}_{j}} \right)}{\partial {{\boldsymbol\lambda}}_{j}}-{{\bf x}}_{j} ={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}.\label{eq2.16} \end{gather} (2.16) Accordingly, based on Equation (2.5), the deterministic non-linear optimal control problem is transformed into the set of non-linear equations (2.13)–(2.16). Using the Newton–Raphson iteration or another non-linear equation solver, the state and costate variables can be obtained, and the deterministic non-linear optimal control problem is thereby solved.
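To make the overall flow concrete, the sketch below (our own minimal illustration, not the full scheme of Equations (2.13)–(2.16)) treats the scalar LQ problem $$\dot{x}=u$$, $$J=\tfrac{1}{2}\int_{0}^{t_{f}} \left({x^{2}+u^{2}} \right){\rm d}t$$, for which $$u^{\ast}=-\lambda$$ and the canonical equations reduce to $$\dot{x}=-\lambda$$, $$\dot{\lambda}=-x$$. The TPBVP is discretized by the implicit midpoint rule, a simple symplectic scheme, and the resulting non-linear equations are solved with a Newton-type solver:

```python
# Minimal sketch: discretize the canonical equations of a scalar LQ problem
# (xdot = -lambda, lambdadot = -x, x(0) = x0, lambda(tf) = 0) with the
# implicit midpoint rule and solve the resulting equations by a Newton-type
# solver, mirroring the "TPBVP -> nonlinear equations" structure of the text.
import numpy as np
from scipy.optimize import fsolve

x0, tf, N = 1.0, 1.0, 50                    # initial state, final time, intervals
h = tf / N

def residual(w):
    x = np.concatenate(([x0], w[:N]))       # x_0 fixed, x_1..x_N unknown
    lam = np.concatenate((w[N:], [0.0]))    # lam_0..lam_{N-1} unknown, lam_N = 0
    xm = 0.5 * (x[:-1] + x[1:])             # midpoint values
    lm = 0.5 * (lam[:-1] + lam[1:])
    r_x = x[1:] - x[:-1] - h * (-lm)        # xdot = -lambda at midpoints
    r_l = lam[1:] - lam[:-1] - h * (-xm)    # lambdadot = -x at midpoints
    return np.concatenate((r_x, r_l))

w = fsolve(residual, np.zeros(2 * N))
x_num = np.concatenate(([x0], w[:N]))
t = np.linspace(0.0, tf, N + 1)
x_exact = x0 * np.cosh(t - tf) / np.cosh(tf)   # closed-form optimal state
print(np.max(np.abs(x_num - x_exact)))          # small discretization error
```

In the symplectic algorithm proper, the same role is played by Equations (2.13)–(2.16), whose solution directly yields the state and costate trajectories on every subinterval.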
The flowchart of the symplectic algorithm is shown in Fig. 1.

Fig. 1. The symplectic algorithm for solving the TPBVP.

To achieve the best precision and efficiency in the symplectic algorithm, the interpolation parameters for the state and costate variables should be combined as $$m=n$$. Numerical tests show that with this combination the order of the proposed symplectic algorithm is $$2\left({m-1} \right)+1$$. The main computational cost of the symplectic algorithm lies in solving the non-linear equations (2.13)–(2.16), which is strongly problem dependent. Note that in the above algorithm the numbers of interpolation parameters for the state and costate variables in the Lagrange polynomials are $$m$$ and $$n$$, respectively. The dimensions of the equations in Equations (2.13) and (2.14) are $$m-1$$ and $$n-1$$; adding the two equations in Equations (2.15) and (2.16), the dimension in one time interval is $$m+n$$. Since the problem is divided into $${\it {\Omega}}$$ time intervals, the dimension of the non-linear equation system is $${\it {\Omega}} \times \left({m+n} \right)$$.

## 3. Interval uncertain method for the non-linear optimal control problem with uncertainties

### 3.1. Interval uncertain modelling of the optimal control problem with uncertainties

Consider an optimal control problem with uncertainties. The controlled dynamical system, now associated with the uncertain parameter vector $${\boldsymbol{\xi}}=\left[{\xi_{1}, \xi_{2}, \ldots, \xi_{n}} \right]$$, can be rewritten as follows: $$\label{eq11} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t,{\boldsymbol{\xi}} \right)$$ (3.1) and the new objective $$\tilde{{J}}$$ is still to minimize the cost function defined in Equation (3.2): $$\label{eq12} \tilde{{J}}=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t} \right)} \,{\rm d}t$$ (3.2) which is related to the uncertain parameter $${\boldsymbol{\xi}}$$ indirectly, through the state $${{\bf x}}$$ and control $${{\bf u}}$$ appearing in Equation (3.1). The uncertain parameter $${\boldsymbol{\xi}}$$ is associated with uncertainties in the system parameters, such as initial errors, process errors, etc. We assume that the complete statistical information for $${\boldsymbol{\xi}}$$ is unknown and only its bounds are known. The model of the uncertain optimal control problem defined by Equations (3.1) and (3.2) differs significantly from that of stochastic optimal control. Firstly, the uncertain parameter $${\boldsymbol{\xi}}$$ in Equation (3.1) is introduced without any assumption of a probability distribution. In contrast, the stochastic optimal control system can be described as follows: $$\label{eq13} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf x}}\left( t \right),{{\bf u}}\left(t \right),t,{{\bf v}}(t)} \right)$$ (3.3) in which the controlled system is driven by a stochastic variable $${{\bf v}}(t)$$, a Gaussian white noise vector.
Moreover, the performance index of stochastic optimal control is the expectation of the cost function (Stengel, 1986), i.e., \begin{align}\label{eq3.4} E\left\{J \right\}=E\left\{{\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left( {{{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t} \right)} \,{\rm d}t} \right\}\!. \end{align} (3.4) Therefore, the modelling of uncertain optimal control in this article should be distinguished from stochastic optimal control. All the uncertainties in the theory of stochastic optimal control, such as measurement errors, process errors, etc., are introduced through known probability distributions. However, many uncertainties, such as non-exact relationships and incomplete theories, cannot be described by probability distributions. In many practical situations, the bounds of an uncertain parameter can be obtained much more easily than its exact probability distribution (Wang & Qiu, 2009; Römgens et al., 2013; Wu et al., 2013b). It is therefore necessary to establish an interval uncertain model for such uncertain optimal control problems. If only the bounds of the uncertain parameter $${\boldsymbol{\xi}}$$ are given, it can be defined by the interval vector $$\left[{{\boldsymbol{\xi}}} \right]\in \mathbb{I}\mathbb{R}^{n}$$, i.e., \begin{align}\label{eq3.5} \left[{{\boldsymbol{\xi}}} \right]=\left[{{{{\underline{\boldsymbol{\xi} }}}},{\overline{{\boldsymbol{\xi}}}}} \right]^{n}=\left\{{\xi_{r} :\underline{\xi}_{r} \leqslant \xi_{r} \leqslant \overline{{\xi }}_{r}, r=1,2,\ldots, n} \right\}\!, \end{align} (3.5) where the symbols $${{{\underline{\boldsymbol{\xi}}}}}=\left[{{\underline{\xi}}_{1}, {\underline{\xi}}_{2}, \ldots, {\underline{\xi}}_{n}} \right]^{{\rm T}}$$ and $${\overline{{\boldsymbol{\xi}}}}=\left[{\overline{{\xi}}_{1}, \overline{{\xi}}_{2}, \ldots, \overline{{\xi}}_{n}} \right]^{{\rm T}}$$ denote the lower and upper bounds of the uncertain parameter $${\boldsymbol{\xi}}$$, respectively.
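In code, an interval vector as in Equation (3.5) is naturally stored as a pair of bound arrays. The small sketch below (our own naming and an assumption, not part of the formulation above) also includes the affine map to the standard box $$\left[{-1,1} \right]^{n}$$, which is convenient when interval parameters are later fed to Chebyshev-type expansions defined on $$\left[{-1,1} \right]$$:

```python
import numpy as np

# interval vector [xi]: lower and upper bound arrays (illustrative values)
lo = np.array([0.9, -0.1])
hi = np.array([1.1,  0.1])
mid, rad = 0.5 * (lo + hi), 0.5 * (hi - lo)   # midpoint and radius

def to_unit(xi):
    """Affine map from [lo, hi] onto the standard box [-1, 1]^n."""
    return (xi - mid) / rad

def from_unit(q):
    """Inverse map from [-1, 1]^n back to [lo, hi]."""
    return mid + rad * q

print(to_unit(lo), to_unit(hi))   # the bounds map to -1 and +1 componentwise
```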
To avoid repetition and to simplify the formulas, an underlined or overlined value in this article denotes a lower or upper bound, respectively. For simplicity, the symbol $${{\bf z}}\left(t \right)=\left[{{{\bf u}}\left(t \right),\;{{\bf x}}\left(t \right)} \right]^{{\rm T}}$$ is used to express the solution vector of the optimal control problem. Then Equation (3.1) can be described as a controlled dynamic system with uncertain parameter $${\boldsymbol{\xi}}$$, i.e., $$\label{eq14} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left( t \right),t,{\boldsymbol{\xi}}} \right)$$ (3.6) and the cost function in Equation (3.2) can be rewritten as \begin{align}\label{eq3.7} \tilde{{J}}\left({{{\bf z}}} \right)=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{{\bf z}}\left(t \right),t} \right)} \,{\rm d}t. \end{align} (3.7) If the symbol $${{\bf z}}^{\ast}$$ denotes the real solution of the deterministic optimal control problem defined in Equations (2.1) and (2.2), then the real solution of the optimal control problem with uncertain parameters defined in Equations (3.6) and (3.7) (or Equations (3.1) and (3.2)) can be enclosed in an interval vector $$\left[{{{\bf z}}^{\ast}} \right]=\left[{{{\bf {\underline{z}}}}^{\ast},\overline{{\bf z}}^{\ast}} \right]$$.
In this interval vector, the lower bound $${{\bf {\underline{z}}}}^{\ast}$$ and upper bound $$\overline{{\bf z}}^{\ast}$$ of the solution $${{\bf z}}^{\ast}$$ are defined as follows: \begin{gather} \label{eq15} {\underline{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\inf \left\{{z_{\chi}^{\ast} \left({\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right),\;{\boldsymbol{\xi}}\in \left[{{\boldsymbol{\xi}}} \right]} \right\}\\ \end{gather} (3.8) \begin{gather} \overline{{z}}_{\;\chi}^{\ast} \left({\tilde{{t}}} \right)=\sup \left\{{z_{\chi}^{\ast} \left({\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right),\;{\boldsymbol{\xi}}\in \left[{{\boldsymbol{\xi}}} \right]} \right\}\!.\label{eq3.9} \end{gather} (3.9) in which $$\chi =1,2,\cdots, Z$$ is the component index of the variable $${{\bf z}}$$. Note that $$z_{\;\chi}^{\ast} \left({\tilde{{t}}} \right)$$ is a component of the output at a given time $$\tilde{{t}}$$, i.e., a scalar quantity.

### 3.2. Interval analysis of the interval uncertain model

The numerical method described in Section 2 can be applied to solve the deterministic optimal control problem whenever the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed.
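For intuition, the bounds in Equations (3.8) and (3.9) can be estimated by brute force: solve the deterministic problem on a grid of $${\boldsymbol{\xi}}$$ values and take the extrema of each output component at the query time. The toy sketch below assumes a scalar LQ problem $$\dot{x}=u$$, $$J=\tfrac{1}{2}\int_{0}^{t_{f}} \left({x^{2}+u^{2}} \right){\rm d}t$$ with uncertain initial state $$x_{0} \in \left[{0.9,1.1} \right]$$, whose optimal state has the closed form $$x^{\ast}\left(t \right)=x_{0} \cosh \left({t-t_{f}} \right)/\cosh \left({t_{f}} \right)$$ (used only for illustration):

```python
import numpy as np

tf, t_query = 1.0, 0.5                         # fixed final time, query time t~
xi = np.linspace(0.9, 1.1, 101)                # grid over the interval [xi]
z = xi * np.cosh(t_query - tf) / np.cosh(tf)   # optimal state at t~ for each xi
print(z.min(), z.max())                        # sampled estimate of the bounds
```

Such sampling gives only an inner estimate of the true interval and becomes prohibitively expensive when several uncertain parameters are present, so a more systematic tool is needed.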
The precise bounds of the uncertain optimal control solution $${{\bf z}}^{\ast}$$ can then be obtained by solving the following optimization problems: \begin{gather} \label{eq16} {\underline{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\mathop{\min\limits_{{\boldsymbol{\xi}}\in \left[ {{\boldsymbol{\xi}}} \right]}} \left\{{z_{\chi}^{\ast} \left( {\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right)} \right\}\\ \end{gather} (3.10) \begin{gather} \overline{{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\mathop{\max\limits_{{\boldsymbol{\xi}}\in \left[ {{\boldsymbol{\xi}}} \right]}} \left\{{z_{\chi}^{\ast} \left( {\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right)} \right\}. \label{eq3.11} \end{gather} (3.11) However, the optimization problems in Equations (3.10) and (3.11) cannot be solved easily: gradient algorithms often fail to reach the global optimum for strongly non-linear problems, and intelligent (heuristic) algorithms incur a high computational cost. Fortunately, interval analysis is an efficient way to estimate the range of the output of the uncertain optimal control problem; it combines interval operations with an appropriate inclusion function. Interval arithmetic and the interval function are the two basic tools of interval analysis, and interval analysis uses interval arithmetic to obtain the solution of the interval function. Interval arithmetic defines the operations on real intervals and ensures that all possible results of the corresponding real operations are enclosed in the resulting intervals.
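A minimal sketch of such interval arithmetic (our own `Interval` class, covering only addition and multiplication of the four operations defined next) also shows the dependency-induced overestimation discussed later in this section:

```python
# Sketch of Moore-style interval arithmetic: each operation returns the
# tightest interval enclosing all real results of the operands.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        o = Interval(o, o) if isinstance(o, (int, float)) else o
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__

    def __mul__(self, o):
        o = Interval(o, o) if isinstance(o, (int, float)) else o
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x, y = Interval(1, 2), Interval(-2, -1)
print(2 * x + x * y)   # [-2, 3]  -- overestimates, since [x] appears twice
print(x * (2 + y))     # [0, 2]   -- exact range of 2x + xy on these intervals
```

Division would additionally require a check that the divisor interval excludes zero; it is omitted here for brevity.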
The four arithmetic operations on the real intervals $$\left[x \right]$$ and $$\left[y \right]$$ are defined as follows (Moore, 1966): \begin{align}\label{eq3.12} \left[z \right]=\left[x \right]\circ \left[y \right]=\left\{ {z=x\circ y:x\in \left[x \right],y\in \left[y \right],\circ \in \left\{{+,-,\times, \div} \right\}} \right\}\!. \end{align} (3.12) Similarly, the arithmetic operations between a real number $$a$$ and an interval $$\left[x \right]$$ are given by \begin{align}\label{eq3.13} \left[z \right]=a\circ \left[x \right]=\left\{z=a\circ x:x\in \left[x \right],\circ \in \left\{{+,-,\times, \div} \right\} \right\}\!. \end{align} (3.13) For an arbitrary function (or mapping) $${\boldsymbol{f}}:\mathbb{R}^{n}\to \mathbb{R}^{m}$$, we define the interval function $$[\,{\boldsymbol{f}}]$$ by \begin{align}\label{eq3.14} \left[{{{\bf z}}} \right]=[\,{\boldsymbol{f}}]\left({\left[{{{\bf x}}} \right]} \right)\!, \end{align} (3.14) where $${{\bf x}}\in \mathbb{R}^{n}$$ is the input vector and $${{\bf z}}\in \mathbb{R}^{m}$$ is the output vector of the function $${\boldsymbol f}$$, and $$\left[{{{\bf x}}} \right]\in \mathbb{I}\mathbb{R}^{n}$$ and $$\left[{{{\bf z}}} \right]\in \mathbb{I}\mathbb{R}^{m}$$ are the corresponding interval expressions of their ranges. The interval function can readily describe the solutions of the optimal control problem with uncertain parameter $${\boldsymbol{\xi}}$$. If the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed, the solution of the deterministic optimal control problem can be treated as a function of $${\boldsymbol{\xi}}$$, i.e., \begin{align}\label{eq3.15} {{\bf z}}^{\ast}={\boldsymbol{g}}\left({{\boldsymbol{\xi}}} \right)\!.
\end{align} (3.15) Then the interval function for the optimal control problem with interval uncertain parameter $$\left[{{\boldsymbol{\xi}}} \right]$$ is expressed as follows: \begin{align}\label{eq3.16} \left[{{{\bf z}}^{\ast}} \right]=\left[{\boldsymbol{g}} \right]\left({\left[ {{\boldsymbol{\xi}}} \right]} \right)\!. \end{align} (3.16) Unfortunately, interval analysis can only be applied to functions composed of the basic arithmetic operations, so it cannot be used directly to estimate the range of the interval function of the uncertain optimal control problem. However, the uncertain optimal control problem can be represented by approximate functions on which the interval arithmetic operations can easily be performed. The Taylor series expansion is commonly used to construct the inclusion function in many fields, but it may introduce larger overestimation and thus yield wider, less sharp bounds. An algorithm based on the Chebyshev series can largely reduce the overestimation in interval analysis for non-linear problems, and it has been proved effective for the interval analysis of ODEs (Wu et al., 2013b) and DAEs (Wu et al., 2013a). A single operation in Equations (3.12) and (3.13) yields an exact result with no overestimation. However, if an interval function in Equation (3.16) involves many interval operations defined by Equations (3.12) and (3.13), it can produce intrinsic overestimation through the so-called ‘wrapping effect’. This overestimation may be even more severe and complex in multidimensional problems (Nedialkov, 1999). It should be emphasized that this ‘wrapping effect’ is intrinsic and cannot be removed. The phenomenon is mainly caused by the bounds being attained asynchronously in related interval operations. In many cases, the asynchrony is caused by multiplication with a negative number or with an interval containing a negative bound.
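The interval operations of Equations (3.12) and (3.13), and the dependency-induced overestimation just described, can be sketched in a few lines of code. This is a minimal illustration only; the `Interval` class and helper names are not from the paper.

```python
# Minimal sketch of the interval arithmetic of Equations (3.12)-(3.13).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = _as_interval(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    __radd__ = __add__
    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def _as_interval(v):
    return v if isinstance(v, Interval) else Interval(v, v)

x, y = Interval(1, 2), Interval(-2, -1)
# Two algebraically equivalent forms give different interval widths,
# because [x] enters the first form twice (the 'wrapping effect'):
print(2 * x + x * y)   # [-2, 3]: overestimated
print(x * (2 + y))     # [0, 2]: exact
```

The two printed results correspond exactly to the worked example of Equations (3.17) and (3.18) below: each single operation is exact, yet the repeated occurrence of $[x]$ widens the combined result.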
For example, given the intervals $$\left[x \right]=\left[{1,2} \right]$$ and $$\left[y \right]=\left[{-2,-1} \right]$$, the interval inclusion function: \begin{align}\label{eq3.17} \begin{split} \left[z \right]=\left[{f_{1}} \right]\left({\left[x \right],\left[y \right]} \right)&=2\times \left[x \right]+\left[x \right]\times \left[y \right]=2\times \left[{1,2} \right]+\left[{1,2} \right]\times \left[ {-2,-1} \right] \\ &=\left[{2,4} \right]+\left[{-4,-1} \right]=\left[{-2,3} \right] \end{split}. \end{align} (3.17) Actually, the exact interval of $$\left[z \right]$$ is $$\left[{0,2} \right]$$, and if we rewrite $$\left[{f_{1}} \right]$$ in the equivalent form \begin{align} \label{eq17} \begin{split} \left[z \right]=\left[{f_{1}} \right]\left({\left[x \right],\left[y \right]} \right)&=\left[x \right]\times \left({2+\left[y \right]} \right)=\left[{1,2} \right]\times \left({2+\left[{-2,-1} \right]} \right)\\ &=\left[{1,2} \right]\times \left[{0,1} \right]=\left[{0,2} \right] \end{split} \end{align} (3.18) the exact result is obtained. To reduce the error caused by the ‘wrapping effect’, the interval inclusion function should be designed in an appropriate form. An approximation function based on trigonometric functions is a highly preferred form: not only can the interval estimation of trigonometric functions be obtained from their periodicity (Jaulin et al., 2001; Wu et al., 2015) (the maximum interval is $$\left[{-1,1} \right])$$, but the bounds of the interval $$\left[{-1,1} \right]$$ are also attained synchronously when the interval is multiplied by a negative number or by itself. In the following sections, the Chebyshev interval method based symplectic algorithm is proposed to estimate the solution of the optimal control problem with interval uncertainty. 4. Chebyshev interval method based symplectic algorithm 4.1.
Chebyshev polynomial approximation and the interval analysis The Chebyshev polynomial approximation of a one-dimensional function $$f(x)$$ defined on $$\left[{{\underline{x}}, \overline{{x}}} \right]\in \mathbf{\mathbb{R}}$$ can be written as $$\label{eq18} f\left(x \right)\approx p_{d} \left(x \right)=\frac{1}{2}f_{0} +\sum\limits_{i=1}^d {f_{i} \cdot \cos \left({i\alpha} \right)}$$ (4.1) in which $$d$$ is the truncated degree of the Chebyshev series and $$\alpha$$ is the substitution variable defined as \begin{align}\label{eq4.2} \alpha =\arccos \left({\frac{2x-(\overline{{x}}+ {\underline{x}} )}{\overline{{x}}- {\underline{x}}}} \right)\in \left[{0,\pi} \right]. \end{align} (4.2) For an $$n$$-dimensional function $$f\left({{{\bf x}}} \right)$$ with an input variable $${{\bf x}}\in \left[{{\underline{{\bf x}}}},\overline{{\bf x}} \right]\in \mathbf{\mathbb{I}\mathbb{R}}^{n}$$, considering $$\cos \left({0_{r} \alpha_{r}} \right)=1$$, the $$d$$-degree approximation is given using the tensor product: $$\label{eq19} f\left({{{\bf x}}} \right)\approx p_{d} \left({{{\bf x}}} \right)=\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left({\frac{1}{2}} \right)^{\beta}f_{i_{1}, \ldots,i_{r}, \ldots,i_{n}} \cdot \cos \left({i_{1} \alpha_{1}} \right)\ldots\cos \left({i_{r} \alpha_{r}} \right)\ldots\cos \left({i_{n} \alpha_{n}} \right)}$$ (4.3) where $$\beta$$ is the number of zero-degree ($$i_{r} =0)$$ Chebyshev polynomials, and the subscript $$r$$ denotes the component index of the input variable $${{\bf x}}=\left\{{x_{1}, \ldots, x_{r}, \ldots, x_{n}} \right\}$$ and of the substitution variable $${\boldsymbol{\alpha}}=\left\{{\alpha_{1}, \ldots, \alpha _{r}, \ldots, \alpha_{n}} \right\}$$. Each component $$\alpha_{r}$$ of the substitution variable vector can be calculated by Equation (4.2) with the corresponding component of $${{\bf x}}$$.
It should be noted that, compared with Equation (4.1), the summations in Equation (4.3) start from 0 after the coefficient $$\left({1/2} \right)^{\beta}$$ is introduced into the terms containing zero-degree Chebyshev polynomials. In the architecture of the multi-dimensional Chebyshev polynomial interpolant in Equation (4.3), the basis functions are trigonometric functions. Because the range of a trigonometric function can be obtained accurately from its periodicity and monotonicity, interval analysis can be applied to the approximation function. The fitting coefficients $$f_{i_{1}, \ldots,i_{s}, \ldots,i_{n}}$$ are calculated by the Mehler integral based on the orthogonality of Chebyshev polynomials (Wu et al., 2013b): $$\label{eq20} f_{i_{1}, \ldots, i_{s}, \ldots,i_{n}} \approx \left( {\frac{2}{\tilde{{d}}}} \right)^{n}\sum\limits_{q_{1} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{s} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{n} =1}^{\tilde{{d}}} {f\left({x_{q_{1}} ,\ldots,x_{q_{s}}, \ldots,x_{q_{n}}} \right)\cos \left({i_{1} \theta_{q_{1}}} \right)\ldots\cos \left({i_{s} \theta_{q_{s}}} \right)\ldots\cos \left({i_{n} \theta_{q_{n}}} \right)}$$ (4.4) in which, for each dimension $$s\left({s=1,2,\ldots, n} \right)$$ of the input variable, the interpolation points are $$\theta_{q_{s}} =\frac{2q_{s} -1}{\tilde{{d}}}\frac{\pi}{2}$$, where $$q_{s} =1,2,\ldots, \tilde{{d}}$$, $$\tilde{{d}}$$ is the calculation degree for a $$d$$-degree truncated Chebyshev series, $$x_{q_{s}}$$ is calculated by \begin{align}\label{eq4.5} x_{q_{s}} =\frac{\overline{{x}}_{q_{s}} + {\underline{x}}_{q_{s}} }{2}+\frac{\overline{{x}}_{q_{s}}-{\underline{x}}_{q_{s}}}{2}\cos \left({\theta_{q_{s}}} \right)\!, \end{align} (4.5) and $$\left[{x_{q_{1}}, \ldots,x_{q_{s}}, \ldots,x_{q_{n}}} \right]^{{\rm T}}$$ are the positions at which the original function is evaluated.
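As a concrete illustration, the one-dimensional case of Equations (4.1), (4.2), (4.4) and (4.5) can be sketched as follows. This is a hedged sketch, not the paper's implementation; the function names are illustrative, `numpy` is assumed, and `f` must accept array arguments.

```python
import numpy as np

def chebyshev_coeffs(f, lo, hi, d, d_tilde=None):
    """Coefficients f_i of p_d(x) = f_0/2 + sum_i f_i cos(i*alpha), Eq (4.4) in 1D."""
    if d_tilde is None:
        d_tilde = d + 1                               # calculation degree >= d + 1
    q = np.arange(1, d_tilde + 1)
    theta = (2 * q - 1) * np.pi / (2 * d_tilde)       # interpolation angles
    x = 0.5 * (hi + lo) + 0.5 * (hi - lo) * np.cos(theta)  # evaluation points, Eq (4.5)
    fx = f(x)
    return np.array([(2.0 / d_tilde) * np.sum(fx * np.cos(i * theta))
                     for i in range(d + 1)])

def chebyshev_eval(coeffs, x, lo, hi):
    """Evaluate p_d(x) via the substitution variable alpha of Eq (4.2)."""
    alpha = np.arccos((2 * x - (hi + lo)) / (hi - lo))
    p = 0.5 * coeffs[0]
    for i in range(1, len(coeffs)):
        p += coeffs[i] * np.cos(i * alpha)
    return p

c = chebyshev_coeffs(np.exp, -1.0, 1.0, d=10)
print(abs(chebyshev_eval(c, 0.3, -1.0, 1.0) - np.exp(0.3)))  # tiny, < 1e-10
```

The rapid decay of the error for smooth functions is consistent with the truncation bound of Equation (4.8) below.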
It should be noted that the fitting coefficients in Equation (4.4) are calculated as a linear combination of evaluations of the original function. Usually, a higher-degree truncation gives better integral accuracy, so the calculation degree $$\tilde{{d}}$$ is usually not less than $$d+1$$. Subsequently, because the bounds of trigonometric functions can be obtained directly from their periodicity and boundedness (in Equation (4.1), $$\cos \left({i\left[\alpha \right]} \right)=[-1,1]$$ when $$i\ne 0$$ and $$\alpha \in \left[{0,\pi} \right])$$, interval analysis can easily be applied to Equation (4.1). The Chebyshev inclusion function for a one-dimensional problem can be written as: \begin{align}\label{eq4.6} \left[f \right]\left({\left[x \right]} \right)=\frac{1}{2}f_{0} +\sum\limits_{i=1}^d {f_{i} \cdot \cos \left({i\left[\alpha \right]} \right)} =\frac{1}{2}f_{0} +[-1,1]\sum\limits_{i=1}^d {\left| {f_{i}} \right|}. \end{align} (4.6) Considering the $$n$$-dimensional approximation in Equation (4.3), the multi-dimensional Chebyshev inclusion function of $$f$$ can be obtained: \begin{align}\label{eq4.7} \left[f \right]\left({\left[{{{\bf x}}} \right]} \right)=\left( {\frac{1}{2}} \right)^{n}f_{i_{1} =0,\ldots,i_{r} =0,\ldots,i_{n} =0} +\left[{-1,1} \right]\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left| {\left({\frac{1}{2}} \right)^{\beta}f_{i_{1}, \ldots,i_{r}, \ldots,i_{n} \left({\exists i_{r} \ne 0} \right)}} \right|}. \end{align} (4.7) Note that the result of an interval arithmetic operation defined in Equations (3.12) and (3.13) is a real interval, while the results of arithmetic operations between real values are still real values. Equation (4.7) consists of a real-number term and interval terms.
The real-number term is the product of the zero-degree Chebyshev polynomials $$\frac{1}{2}f_{0_{r}}$$ in each dimension ($$r=1,2,\ldots, n)$$, while the other terms, containing non-zero-degree Chebyshev polynomials ($$\exists i_{r} \ne 0)$$, are interval results. It can be concluded that the Chebyshev interval method is a non-intrusive method requiring no derivative information; even if the expression of the function $$f$$ is complex or implicit, the Chebyshev interval method can still be implemented. The error of the interval estimation is related to the accuracy of the approximated interval inclusion function. The truncation error for a one-dimensional Chebyshev series is given by Fox & Parker (1968): \begin{align}\label{eq4.8} e_{d} \left(x \right)=\left| {f\left(x \right)-p_{d} \left(x \right)} \right|\leqslant \frac{2^{-d}}{\left({d+1} \right)!}\left\| {f^{\left({d+1} \right)}} \right\|_{\infty}. \end{align} (4.8) It can be found that $$e_{d} \left(x \right)$$ can be neglected when $$d$$ is large enough (Wu et al., 2013b). For multi-dimensional problems, it can also be proved that if $$d$$ is large enough, $$e_{d} \left({f\left({{{\bf x}}} \right)} \right)$$ can be neglected. 4.2. Implementation of the Chebyshev interval method based symplectic algorithm Considering the uncertain interval model of non-linear optimal control established in Section 3, the direct solving of the interval function defined in Equation (3.16) can be replaced by the Chebyshev interval method of Section 4.1. As mentioned, the process of solving the deterministic optimal control problem in Section 2 can be regarded as a function $${{\bf g}}$$ when the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed. The function $${{\bf g}}$$ can be approximated by a $$d$$-degree Chebyshev polynomial approximation. In this situation, the symplectic algorithm for solving the optimal control problem is unchanged; the only additional requirement is to evaluate it at the combination vectors of integral points of the Mehler integral.
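The bounding step of Section 4.1 can be sketched for the one-dimensional case of Equation (4.6): since $$\cos(i[\alpha])=[-1,1]$$ for $$i\ne 0$$, the interval estimate is $$f_0/2 \pm \sum_i |f_i|$$. The function name below is illustrative and the coefficients are hand-set for demonstration.

```python
import numpy as np

def chebyshev_inclusion(coeffs):
    """Interval bound of a 1D Chebyshev series, Eq (4.6):
    center f_0/2, radius sum_i |f_i| for i >= 1."""
    coeffs = np.asarray(coeffs, dtype=float)
    center = 0.5 * coeffs[0]
    radius = np.sum(np.abs(coeffs[1:]))
    return center - radius, center + radius

# Example: p(x) = 1 + 0.5*cos(alpha) - 0.25*cos(2*alpha), so f = (2.0, 0.5, -0.25)
lo, hi = chebyshev_inclusion([2.0, 0.5, -0.25])
print(lo, hi)  # 0.25 1.75
```

Because the bounds of $[-1,1]$ are attained synchronously, this form avoids the wrapping effect discussed in Section 3, at the cost of the (small) truncation error of Equation (4.8).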
Finally, the interval bounds of $${{\bf z}}^{\ast}$$ can be estimated via the Chebyshev inclusion function. The details can be summarized as follows: 1. Confirming the uncertain parameter vector and its range $$\left[{\boldsymbol{\xi}}\right]=\left[{{\underline{\boldsymbol{\xi}}}},{\overline{{\boldsymbol{\xi}}}} \right]\in {\mathbb{IR}}^{n}$$. 2. Calculating the integral points $$\xi_{q_{s}}$$ of the Mehler integral by $$\label{eq21} \xi_{q_{s}} =\frac{\overline{{\xi}}_{q_{s}} +{\underline{\xi}} _{q_{s}}}{2}+\frac{\overline{{\xi}}_{q_{s}} - {\underline{\xi}} _{q_{s}}}{2}\cos \left({\theta_{q_{s}}} \right)$$ (4.9) in which $$\theta_{q_{s}} =\frac{2q_{s} -1}{\tilde{{d}}}\frac{\pi}{2}$$, $$q_{s} =1,2,\ldots, \tilde{{d}}$$ and $$s=1,2,\ldots, n$$. 3. Constructing the optimal control problems with the fixed parameters $${\boldsymbol{\xi}}_{q-c} =\left[{\xi_{q_{1}}, \ldots,\xi_{q_{s}}, \ldots,\xi_{q_{n}}} \right]^{{\rm T}}$$, which are the combinations of $$\xi_{q_{s}} \left({q_{s} =1,2,\ldots, \tilde{{d}}} \right)$$. 4. For each deterministic optimal control problem $$\boldsymbol{g}\left({{\boldsymbol{\xi}}_{q-c}} \right)$$, employing the symplectic algorithm to transform the TPBVP into the non-linear equations (2.13)–(2.16), which are then solved by the Newton-Raphson iteration method. 5. Calculating the coefficients of the Chebyshev polynomial for each $$z_{\chi}^{\ast} \left({\tilde{{t}}} \right)$$ using \begin{align}\label{eq4.10} \boldsymbol{g}_{i_{1}, \ldots,i_{s}, \ldots,i_{n}} \approx \left({\frac{2}{\tilde{{d}}}} \right)^{n}\sum\limits_{q_{1} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{s} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{n} =1}^{\tilde{{d}}} {\boldsymbol{g}\left( {{\boldsymbol{\xi}}_{q-c}} \right)\cos \left({i_{1} \theta_{q_{1} }} \right)\ldots\cos \left({i_{s} \theta_{q_{s}}} \right)\ldots\cos \left({i_{n} \theta_{q_{n}}} \right)}. \end{align} (4.10) 6.
Performing interval analysis based on the Chebyshev polynomial approximation for each $$z_{\chi}^{\ast} \left({\tilde{{t}}} \right)$$ using \begin{gather} \label{eq22} \boldsymbol{g}\left({{\boldsymbol{\xi}}} \right)\approx \sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left({\frac{1}{2}} \right)^{\beta }\boldsymbol{g}_{i_{1}, \ldots,i_{r}, \ldots,i_{n}} \cos \left({i_{1} \alpha_{1} } \right)\ldots\cos \left({i_{r} \alpha_{r}} \right)\ldots\cos \left({i_{n} \alpha_{n}} \right)}\\ \end{gather} (4.11) \begin{gather} \left[{{{\bf z}}^{\ast}} \right]=\left[\boldsymbol{g} \right]\left({\left[ {{\boldsymbol{\xi}}} \right]} \right)=\left({\frac{1}{2}} \right)^{n}\boldsymbol{g}_{i_{1} =0,\ldots,i_{r} =0,\ldots,i_{n} =0} +\left[ {-1,1} \right]\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left| {\left( {\frac{1}{2}} \right)^{\beta}\boldsymbol{g}_{i_{1}, \ldots,i_{r}, \ldots,i_{n} \left({\exists i_{r} \ne 0} \right)}} \right|}. \label{eq4.12} \end{gather} (4.12) From the above interval uncertain method, it should be noted that the optimal control problem with uncertainties is transformed into a series of deterministic optimal control problems, each of which can be solved by the symplectic algorithm. The implementation process of the symplectic algorithm is unchanged, so the Chebyshev interval method can also be combined with other optimal control algorithms to solve the uncertain optimal control problem. Unlike probabilistic methods, no a priori assumptions are introduced into the present algorithm. The proposed interval-analysis-based method can provide a more reasonable estimation for the uncertain optimal control problem when the known information is scarce or the probabilistic distribution of the uncertain parameter is unknown.
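For a single uncertain parameter, steps 1–6 above can be sketched end to end as follows. Here `solve_ocp` is a hypothetical placeholder standing in for the symplectic solver of one deterministic problem, returning $$z^{\ast}(\tilde{t};\xi)$$ on a fixed time grid; the toy response at the bottom is purely illustrative.

```python
import numpy as np

def interval_bounds(solve_ocp, xi_lo, xi_hi, d=5):
    """Chebyshev interval estimate of the response bounds, steps 1-6 (1D parameter)."""
    d_tilde = d + 1
    q = np.arange(1, d_tilde + 1)
    theta = (2 * q - 1) * np.pi / (2 * d_tilde)
    # Step 2: Mehler integral points, Eq (4.9)
    xi = 0.5 * (xi_hi + xi_lo) + 0.5 * (xi_hi - xi_lo) * np.cos(theta)
    # Steps 3-4: one deterministic solve per sample point
    Z = np.array([solve_ocp(x) for x in xi])          # shape (d_tilde, n_t)
    # Step 5: Chebyshev coefficients per time instant, Eq (4.10)
    g = np.array([(2.0 / d_tilde) * np.sum(Z * np.cos(i * theta)[:, None], axis=0)
                  for i in range(d + 1)])
    # Step 6: inclusion function, Eq (4.12)
    center = 0.5 * g[0]
    radius = np.sum(np.abs(g[1:]), axis=0)
    return center - radius, center + radius

# Toy stand-in: pretend the "solution" is z*(t; xi) = xi * t**2 on a time grid.
t = np.linspace(0.0, 1.0, 11)
lo, hi = interval_bounds(lambda x: x * t**2, 0.9, 1.1)
print(lo[-1], hi[-1])  # approximately 0.9 and 1.1
```

Because the toy response is linear in $$\xi$$, the recovered bounds at $$t=1$$ coincide with the true range $$[0.9, 1.1]$$; for genuinely non-linear responses the estimate carries only the small truncation error of Equation (4.8).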
When only the upper and lower bounds of the uncertain parameter are known, an erroneous or insufficiently supported assumption on its probabilistic properties may cause the estimation for the optimal control problem to fail. When the Chebyshev interval method is employed with the symplectic algorithm, the Chebyshev approximation is constructed as the tensor product of the Chebyshev polynomial approximations for each dimension of the uncertain parameter vector. The coefficients of the terms after the tensor product are calculated by the Mehler integral. It should also be noted that, as mentioned in Section 4.1, the degree of the Mehler integral $$\tilde{{d}}$$ should not be less than $$d+1$$, so at least $$\left({d+1} \right)^{n}$$ (i.e., $$\tilde{{d}}^{n}$$) samples are required in the algorithm process. For each sample, the computational cost comes from the symplectic algorithm mentioned in Section 2.2. 5. Numerical simulations The following carefully chosen applications demonstrate the performance of the proposed interval analysis method for the uncertain optimal control problem. Firstly, a simple example is given to test the uncertain influence from both the initial conditions and the dynamics system. Then, the proposed interval analysis algorithm is applied to investigate the uncertain orbital rendezvous of spacecraft. In order to assess the accuracy of the interval estimation of the state and control variables, the real bounds of the output in the following applications are calculated by the scanning method.
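For reference, the scanning method used below for the real bounds amounts to a brute-force sweep over a dense parameter grid. This is a hedged sketch; `solve_ocp` is again a hypothetical placeholder for the deterministic solver, and the toy response is illustrative only.

```python
import numpy as np

def scanning_bounds(solve_ocp, xi_lo, xi_hi, n_samples=200):
    """Reference bounds by scanning: sample the parameter interval symmetrically
    and take elementwise min/max of the responses."""
    grid = np.linspace(xi_lo, xi_hi, n_samples)
    Z = np.array([solve_ocp(x) for x in grid])
    return Z.min(axis=0), Z.max(axis=0)

# Toy stand-in, monotone in xi, so the sweep recovers the exact bounds.
t = np.linspace(0.0, 1.0, 11)
lo, hi = scanning_bounds(lambda x: x * t**2, 0.9, 1.1)
print(lo[-1], hi[-1])  # approximately 0.9 and 1.1
```

The cost grows as the number of grid points raised to the number of uncertain parameters, which is why the Chebyshev interval method, needing only $$\tilde{d}^{n}$$ deterministic solves, is much cheaper in the comparisons below.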
To clarify the algorithm accuracy, the bounds overestimation of an output variable $${\bf z}$$ is calculated by \begin{gather} \label{eq23} {\bf z}^{{\rm lower\_overestimation}}={\bf z}^{{\rm lower\_bound\_scanning}}-{\bf z}^{{\rm lower\_bound\_Chebyshev}}\\ \end{gather} (5.1) \begin{gather} \label{eq24} {\bf z}^{{\rm upper\_overestimation}}={\bf z}^{{\rm upper\_bound\_Chebyshev}}-{\bf z}^{{\rm upper\_bound\_scanning}} \end{gather} (5.2) in which $${\bf z}^{{\rm lower\_overestimation}}$$ and $${\bf z}^{{\rm upper\_overestimation}}$$ denote the redundant interval estimation. 5.1. A simple example for numerical tests In this example, two types of uncertainties in a non-linear optimal control problem are considered to demonstrate the wide applicability of the presented method. The problem itself is a non-linear optimal control problem with a free terminal state (Kameswaran & Biegler, 2007). The cost function is given by $$\label{eq25} J=x_{2} \left({t_{f} =1} \right)$$ (5.3) and the differential equations and the initial conditions are given by $$\label{eq26} \left\{{\begin{array}{@{}ll} \dot{{x}}_{1} =\omega_{1} x_{1} +u,\;\quad& x_{1} \left(0 \right)=x_{1}^{\rm initial} \\ \dot{{x}}_{2} =x_{1}^{2} +\omega_{2} u^{2},\quad& x_{2} \left(0 \right)=x_{2}^{\rm initial} \end{array}} \right.$$ (5.4) The system uncertainty and the initial state uncertainty are considered in case A and case B, respectively. In case A, the system parameters $$\omega_{1}$$ and $$\omega_{2}$$ are uncertain and treated as interval parameters, while in case B, the initial state parameters $$x_{1}^{\rm initial}$$ and $$x_{2}^{\rm initial}$$ are the interval parameters. The interval uncertain parameters and the deterministic parameters are shown in Tables 1 and 2, respectively.
Table 1 Parameters for the test example in case A: mean values $$\omega_{1} =0.5$$, $$\omega_{2} =0.5$$, $$x_{1}^{\rm initial} =1$$, $$x_{2}^{\rm initial} =0$$; ranges $$\omega_{1} \in [0.49, 0.51]$$, $$\omega_{2} \in [0.49, 0.51]$$. Table 2 Parameters for the test example in case B: mean values $$\omega_{1} =0.5$$, $$\omega_{2} =0.5$$, $$x_{1}^{\rm initial} =1$$, $$x_{2}^{\rm initial} =0$$; ranges $$x_{1}^{\rm initial} \in [0.99, 1.01]$$, $$x_{2}^{\rm initial} \in [-0.01, 0.01]$$. The 10-degree Chebyshev interval method ($$d=10$$ and $$\tilde{{d}}=15)$$ combined with the symplectic algorithm is used to estimate the interval results of the optimal control problem. In order to obtain high accuracy with the scanning method, 200 sampling points are distributed symmetrically in the interval of each uncertain parameter. The interval estimations of case A are shown in Figs 2–4, which show that the interval results obtained by the Chebyshev interval method entirely enclose the interval results from the scanning method. The maximum interval for the state variables $$x_{1}$$ and $$x_{2}$$ arises at the time $$t=1$$.
It can also be found that most of the overestimation is extremely small. The maximum overestimation is not more than $$10^{-4}$$, and it is located in the lower bound estimation of $$x_{2}$$ in Fig. 5. Fig. 2. Interval estimation of $$x_{1}$$ for the case A. Fig. 3. Interval estimation of $$x_{2}$$ for the case A. Fig. 4. Interval estimation of $$u$$ for the case A. Fig. 5. Overestimation of state variables and control variables for the case A. In the case B, the interval estimations are shown in Figs 6–8. The interval results obtained by the Chebyshev interval method again entirely enclose the interval results obtained by the scanning method, while the uncertain interval results in case B are narrower than those in case A. This illustrates that the uncertain influence from the initial state variables is smaller than that from the system parameters. In case B, the maximum overestimation is not more than $$3\times 10^{-4}$$, and it is located in the lower bound estimation of $$x_{2}$$ in Fig. 9. Fig. 6. Interval estimation of $$x_1$$ for the case B. Fig. 7. Interval estimation of $$x_2$$ for the case B. Fig. 8. Interval estimation of $$u$$ for the case B. Fig. 9.
Overestimation of state variables and control variables for the case B. 5.2. Optimal orbital rendezvous problem with uncertain initial conditions The accuracy of spacecraft relative state information has a great influence on spacecraft rendezvous (Gao et al., 2009). Meanwhile, navigation errors, complex perturbation factors and other sources of uncertainty are inevitable in rendezvous trajectory design. In this example, the initial state variables of an optimal spacecraft rendezvous problem are considered as uncertain parameters to verify the performance of the proposed method. The trajectory design problem of spacecraft rendezvous with continuous thrust can be taken as a non-linear optimal control problem, i.e., the transferring spacecraft is controlled from one state to another given state. In order to establish the equations of motion, we introduce the coordinate frame which rotates along a circular orbit at a constant angular velocity. From Newton's laws, we obtain the following equations of motion in the rotating frame (Park et al., 2006). \begin{align} \label{eq27} \left\{{\begin{array}{@{}l} \ddot{{x}}-2\omega \dot{{y}}-\omega^{2}\left({R+x} \right)=-\dfrac{\mu }{r^{3}}\left({R+x} \right)+u_{x} \\[2ex] \ddot{{y}}+2\omega \dot{{x}}-\omega^{2}y=-\dfrac{\mu}{r^{3}}y+u_{y} \\[2ex] \ddot{{z}}=-\dfrac{\mu}{r^{3}}z+u_{z} \\ \end{array}} \right. \end{align} (5.5) where $$r=\sqrt {\left({x+R} \right)^{2}+y^{2}+z^{2}}$$; $$x$$, $$y$$ and $$z$$ are the position variables of the spacecraft, representing the radial, tangential and normal displacements from the origin of the rotating frame, respectively; $$R$$ is the distance of the origin of the rotating frame from the origin of the inertial frame; and $$\omega$$ is the mean angular velocity of the rotating frame.
$$\mu$$ is the gravitational parameter of the central body, and $$u_{x}$$, $$u_{y}$$ and $$u_{z}$$ are the control accelerations of the spacecraft in the $$x$$, $$y$$ and $$z$$ directions, respectively. For convenience of analysis, the non-dimensionalized equations of motion with reference length $$R$$ and reference time $$1/\omega$$ can be given as follows: \begin{align} \label{eq28} \left\{{\begin{array}{@{}l} \ddot{{x}}-2\dot{{y}}+\left({\dfrac{1}{r^{3}}-1} \right)\left({1+x} \right)=u_{x} \\ \ddot{{y}}+2\dot{{x}}+\left({\dfrac{1}{r^{3}}-1} \right)y=u_{y} \\ \ddot{{z}}+\dfrac{1}{r^{3}}z=u_{z} \\ \end{array}} \right. \end{align} (5.6) where $$r=\sqrt {\left({x+1} \right)^{2}+y^{2}+z^{2}}$$. For the above non-dimensionalized equations of motion, the goal of the non-linear optimal control is to minimize the quadratic cost, i.e., \begin{align} J=\frac{1}{2}\int\limits_0^1 {\left({u_{x}^{2} +u_{y}^{2} +u_{z}^{2}} \right){\rm d}t}. \label{eq4.19} \end{align} (5.7) In state space, we define the state vector composed of position variables and velocity variables as $${{\bf x}}=\left\{{x,y,z,\dot{{x}},\dot{{y}},\dot{{z}}} \right\}^{{\rm T}}$$. The problem is to minimize the consumption of energy in the fixed flying time $$t_{f} =1$$ for transferring the spacecraft from the initial state $$\label{eq29} {{\bf x}}_{t=0} =0.1\left\{{x_{0}, y_{0}, z_{0} ,0,0,0} \right\}^{{\rm T}}$$ (5.8) to the desired state \begin{align} {{\bf x}}_{t_{f} =1} =\left\{{0,0,0,0,0,0} \right\}^{{\rm T}}. \label{eq4.21} \end{align} (5.9) The initial state describes the relative position at the beginning of the maneuver. The main uncertain influence comes from the position, so only the position variables are considered as the uncertain parameters.
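The non-dimensionalized dynamics of Equation (5.6) can be sketched in first-order form for the state $$\left\{{x,y,z,\dot{x},\dot{y},\dot{z}}\right\}$$; the function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def rendezvous_dynamics(s, u):
    """Right-hand side of Eq (5.6) in first-order form.
    s = [x, y, z, xdot, ydot, zdot], u = [ux, uy, uz]."""
    x, y, z, xd, yd, zd = s
    ux, uy, uz = u
    r = np.sqrt((x + 1.0)**2 + y**2 + z**2)
    c = 1.0 / r**3 - 1.0
    xdd = 2.0 * yd - c * (1.0 + x) + ux      # xddot - 2*ydot + c*(1+x) = ux
    ydd = -2.0 * xd - c * y + uy             # yddot + 2*xdot + c*y = uy
    zdd = -z / r**3 + uz                     # zddot + z/r**3 = uz
    return np.array([xd, yd, zd, xdd, ydd, zdd])

# At the origin of the rotating frame with no control, the relative
# acceleration vanishes: the reference circular orbit is an equilibrium.
print(rendezvous_dynamics(np.zeros(6), np.zeros(3)))  # [0. 0. 0. 0. 0. 0.]
```

In the solution procedure, this right-hand side would be combined with the costate equations of the TPBVP; here it only serves to make the non-dimensional model of Equation (5.6) concrete.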
The intervals of these variables are expressed as $$\label{eq30} \left\{{\begin{array}{@{}l} \left[x_{0}\right] =[0.99\sin \theta \cos \phi, \;1.01\sin \theta \cos \phi] \\ \left[y_{0}\right] =\left[{0.99\cos \theta \sin \phi, \;1.01\cos \theta \sin \phi} \right] \\ \left[z_{0}\right] =\left[{0.99\cos \theta, \;1.01\cos \theta} \right] \\ \end{array}} \right.$$ (5.10) in which $$\phi =\pi/6$$ and $$\theta =\pi/{72}$$. The 5-degree Chebyshev interval method ($$d=5$$ and $$\tilde{{d}}=6)$$ combined with the symplectic algorithm is applied to solve the optimal control problem. For the scanning method, 10 sampling points are distributed symmetrically in the interval of each uncertain parameter. It should be noted that the number of optimal control problems solved is equal to the number of TPBVPs solved as in Section 2.2. The interval estimations of the state variables for the optimal orbital rendezvous are shown in Fig. 10, and the interval results obtained by the Chebyshev interval method again entirely enclose the interval results obtained by the scanning method. The maximum interval length is $$2.618\times 10^{-3}$$, located in the estimation of the variable $$\dot{{z}}$$ at the time $$t=0.5077$$. This also indicates that the possible values are entirely enclosed in the interval obtained by the Chebyshev interval method. Besides, the Chebyshev interval method leads to extremely small overestimation: all of the overestimation is less than $$2.5\times 10^{-8}$$ (as shown in Figs 11 and 12). Furthermore, the number of deterministic optimal control problems solved by the symplectic algorithm is listed in Table 3, which shows that the computational cost of the Chebyshev interval method is much lower than that of the scanning method. As shown in Fig. 13, the maximum length of the estimated interval of the control variables is $$1.9660\times 10^{-2}$$, located in the estimation of $$u_{x}$$ at the time $$t=0.43077$$. The accuracy of the interval results (as shown in Fig.
14) in the optimal orbital rendezvous problem is also ensured by the Chebyshev interval method. Fig. 10. Interval estimation of state variables for optimal orbital rendezvous. Fig. 11. Overestimation of position state variables for optimal orbital rendezvous. Fig. 12. Overestimation of velocity state variables for optimal orbital rendezvous. Fig. 13. Interval estimation of control variables for optimal orbital rendezvous. Fig. 14. Overestimation of control variables for optimal orbital rendezvous. Table 3 Calculation times of deterministic optimal control for the uncertain parameters: Chebyshev interval method, 216 optimal control problems solved; scanning method, 1000. Though the actual control value is ultimately determined by feedback and many other factors, this estimation can give a forecast for the initial spacecraft rendezvous design.
The enclosures of the state variables $${{\bf x}}$$ and $$\dot{{\bf x}}$$ are the foundation for collision avoidance in the predesign of the spacecraft mission. The enclosures of $${{\bf u}}$$ can give guidance for the thruster selection or the trajectory design under the constraint of a specific thruster. 6. Conclusions This article proposes an uncertain influence domain estimation method for the uncertain optimal control problem using the symplectic algorithm and the Chebyshev interval method. In order to obtain an accurate description of uncertainties without known statistical information, interval uncertain modelling is first adopted. Then the combination of the symplectic algorithm and the Chebyshev interval method is applied to estimate the influence domain of the uncertain optimal control problem. The Chebyshev interval method employs the Chebyshev inclusion function to approximate the original uncertain optimal control problem. The approximation function transforms the uncertain optimal control problem into a series of deterministic ones, so the symplectic algorithm can be employed. Each deterministic optimal control problem is further transformed into a set of non-linear algebraic equations subsequently solved by Newton-Raphson iteration. In the numerical simulations, a simple test and an uncertain optimal orbital rendezvous are employed to show the effectiveness of the presented method. The results demonstrate that the proposed method has high computational precision and efficiency compared with the scanning method. In particular, the proposed method can obtain a desirable numerical accuracy for the uncertain estimation of the optimal trajectory in orbital rendezvous. Our future research will explore the practical application of interval-based methods to spacecraft swarm reconfiguration and other related fields.
A more detailed theoretical study of the symplectic algorithm and the Chebyshev interval method is beyond the scope of this article.

Funding
National Natural Science Foundation of China (11472069, 11372064, 91648204); National 111 Project of China (B14013); and National Key Research and Development Program (2016YFB0201602).

References
Alefeld, G. & Mayer, G. (2000) Interval analysis: theory and applications. J. Comput. Appl. Math., 121, 421–464.
Armellin, R. & Topputo, F. (2006) A sixth-order accurate scheme for solving two-point boundary value problems in astrodynamics. Celestial Mech. Dynam. Astronom., 96, 289–309.
Askey, R. & Wilson, J. (1985) Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Mem. Amer. Math. Soc., 54, 1–55.
Büskens, C. & Maurer, H. (2001) Sensitivity analysis and real-time control of parametric optimal control problems using nonlinear programming methods. Online Optimization of Large Scale Systems (Grötschel, M., Krumke, S. O. & Rambau, J. eds). Berlin, Heidelberg: Springer, pp. 57–68.
Ben-Haim, Y. & Elishakoff, I. (1990) Convex Models of Uncertainty in Applied Mechanics. Amsterdam: Elsevier.
Cao, Y., Hussaini, M. Y. & Zang, T. A. (2003) An efficient Monte Carlo method for optimal control problems with uncertainty. Comput. Optim. Appl., 26, 219–230.
Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977) Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Methodol., 39, 1–38.
Diehl, M., Bock, H. G. & Kostina, E. (2005) An approximation technique for robust nonlinear optimization. Math. Program., 107, 213–230.
Dorf, R. C. & Bishop, R. H. (2011) Modern Control Systems. New Jersey: Pearson Education, pp. 234–236.
Dorociak, R. & Gausemeier, J. (2014) Methods of improving the dependability of self-optimizing systems. Dependability of Self-Optimizing Mechatronic Systems (Gausemeier, J., Rammig, F. J., Schäfer, W. & Sextro, W. eds). Berlin, Heidelberg: Springer.
Fisher, J. & Bhattacharya, R. (2011) Optimal trajectory generation with probabilistic system uncertainty using polynomial chaos. J. Dyn. Syst. Measurement Control, 133, 014501.
Fox, L. & Parker, I. B. (1968) Chebyshev Polynomials in Numerical Analysis. London: Oxford University Press.
Gao, H., Yang, X. & Shi, P. (2009) Multi-objective robust H$$_{\infty}$$ control of spacecraft rendezvous. IEEE Trans. Control Syst. Technol., 17, 794–802.
Golnaraghi, F. & Kuo, B. C. (2010) Automatic Control Systems. Quebecor: John Wiley & Sons, p. 7.
Hansen, E. (1969) Topics in Interval Analysis. Oxford: Clarendon Press.
Hover, F. S. (2008) Gradient dynamic optimization with Legendre chaos. Automatica, 44, 135–140.
Huschto, T. & Sager, S. (2013) Stochastic Optimal Control in the Perspective of the Wiener Chaos. Zürich: IEEE.
Ibrir, S. (2008) Design of static and dynamic output feedback controllers through Euler approximate models: uncertain systems with norm-bounded uncertainties. IMA J. Math. Control Inf., 25, 281–296.
Ishizaki, T., Koike, M., Ramdani, N., Ueda, Y., Masuta, T., Oozeki, T., Sadamoto, T. & Imura, J.-I. (2016) Interval quadratic programming for day-ahead dispatch of uncertain predicted demand. Automatica, 64, 163–173.
Jaulin, L., Kieffer, M., Didrit, O. & Walter, É. (2001) Applied Interval Analysis: With Examples in Parameter and State Estimation, Robust Control and Robotics. New York: Springer.
Jiang, C., Han, X., Guan, F. J. & Li, Y. H. (2007a) An uncertain structural optimization method based on nonlinear interval number programming and interval analysis method. Eng. Struct., 29, 3168–3177.
Jiang, C., Han, X. & Liu, G. R. (2007b) Optimization of structures with uncertain constraints based on convex model and satisfaction degree of interval. Comput. Meth. Appl. Mech. Eng., 196, 4791–4800.
Jiang, C., Han, X., Lu, G. Y., Liu, J., Zhang, Z. & Bai, Y. C. (2011) Correlation analysis of non-probabilistic convex model and corresponding structural reliability technique. Comput. Meth. Appl. Mech. Eng., 200, 2528–2546.
Kameswaran, S. & Biegler, L. T. (2007) Convergence rates for direct transcription of optimal control problems using collocation at Radau points. Comput. Optim. Appl., 41, 81–126.
Kleywegt, A. J., Shapiro, A. & Homem-de-Mello, T. (2002) The sample average approximation method for stochastic discrete optimization. SIAM J. Optim., 12, 479–502.
Lizia, P. D., Armellin, R. & Lavagna, M. (2008) Application of high order expansions of two-point boundary value problems to astrodynamics. Celestial Mech. Dynam. Astronom., 102, 355–375.
Loxton, R., Teo, K. L. & Rehbock, V. (2011) Robust suboptimal control of nonlinear systems. Appl. Math. Comput., 217, 6566–6576.
Macdonald, M. & Badescu, V. (2014) The International Handbook of Space Technology. Berlin: Springer.
Majji, M., Turner, J. D. & Junkins, J. L. (2009) Solution of two-point boundary-value problems using Lagrange implicit function theorem. J. Guidance Control Dyn., 32, 1684–1687.
Moore, R. E. (1966) Interval Analysis. Englewood Cliffs, NJ: Prentice-Hall.
Nedialkov, N. (1999) Validated solutions of initial value problems for ordinary differential equations. Appl. Math. Comput., 105, 21–68.
Park, C., Guibout, V. & Scheeres, D. J. (2006) Solving optimal continuous thrust rendezvous problems with generating functions. J. Guidance Control Dyn., 29, 321–331.
Peng, H., Chen, B. & Wu, Z. (2014a) Multi-objective transfer to libration-point orbits via the mixed low-thrust and invariant-manifold approach. Nonlinear Dynam., 77, 321–338.
Peng, H., Jiang, X. & Chen, B. (2014b) Optimal nonlinear feedback control of spacecraft rendezvous with finite low thrust between libration orbits. Nonlinear Dynam., 76, 1611–1632.
Peng, H. J., Gao, Q., Wu, Z. G. & Zhong, W. X. (2011) Symplectic adaptive algorithm for solving nonlinear two-point boundary value problems in astrodynamics. Celestial Mech. Dynam. Astronom., 110, 319–342.
Peng, S. (1990) A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim., 28, 966–979.
Pettit, C. L. (2004) Uncertainty quantification in aeroelasticity: recent results and research challenges. J. Aircraft, 41, 1217–1229.
Phelps, C., Royset, J. O. & Gong, Q. (2016) Optimal control of uncertain systems using sample average approximations. SIAM J. Control Optim., 54, 1–29.
Pritchard, A. J. & Townley, S. (1991) Robustness optimization for uncertain infinite-dimensional systems with unbounded inputs. IMA J. Math. Control Inf., 8, 121–133.
Römgens, B., Mooij, E. & Naeije, M. (2013) Satellite collision avoidance prediction using verified interval orbit propagation. J. Guidance Control Dyn., 36, 821–832.
Sakthivel, R. & Ito, H. (2007) Non-linear robust boundary control of the Kuramoto–Sivashinsky equation. IMA J. Math. Control Inf., 24, 47–55.
Shafer, G. (1976) A Mathematical Theory of Evidence. Princeton: Princeton University Press.
Stengel, R. F. (1986) Stochastic Optimal Control: Theory and Application. Chichester: John Wiley & Sons, p. 421.
Wang, X. & Qiu, Z. (2009) Nonprobabilistic interval reliability analysis of wing flutter. AIAA J., 47, 743–748.
Wilkins, M. & Alfriend, K. (2000) Characterizing orbit uncertainty due to atmospheric uncertainty. Proceedings of the Astrodynamics Specialist Conference, pp. 19–29.
Wu, J., Luo, Z., Zhang, N. & Zhang, Y. (2015) A new interval uncertain optimization method for structures using Chebyshev surrogate models. Comput. Structures, 146, 185–196.
Wu, J., Luo, Z., Zhang, Y., Zhang, N. & Chen, L. (2013a) Interval uncertain method for multibody mechanical systems using Chebyshev inclusion functions. Int. J. Numer. Methods Eng., 95, 608–630.
Wu, J., Zhang, Y., Chen, L. & Luo, Z. (2013b) A Chebyshev interval method for nonlinear dynamic systems under uncertainty. Appl. Math. Model., 37, 4578–4591.
Xiu, D. & Karniadakis, G. E. (2002) The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24, 619–644.
Yong, J. & Zhou, X. Y. (1999) Stochastic Controls: Hamiltonian Systems and HJB Equations. New York: Springer.
Zadeh, L. A. (1965) Fuzzy sets. Inf. Control, 8, 338–353.
Zazzera, F. B., Vasile, M., Massari, M. & Lizia, P. D. (2006) Assessing the Accuracy of Interval Arithmetic Estimates in Space Flight Mechanics. Technical Report 18851/05. Milan, Italy: European Space Agency.

© The authors 2017.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Mathematical Control and Information, Advance Article (29 July 2017), 27 pages. Publisher: Oxford University Press. ISSN 0265-0754; eISSN 1471-6887. DOI: 10.1093/imamci/dnx032.
On the one hand, the uncertainties are inevitable in the data collection, dynamics modelling and analytical method.
These uncertainties may significantly change the results of an open-loop optimal control problem, and thereby influence spacecraft guidance and trajectory design in astrodynamics. On the other hand, considerations such as collision avoidance and thrust estimation for a spacecraft mission are serious topics in astrodynamics. A strong consensus has developed that uncertainty quantification (UQ) must play a prominent role in the future in understanding the physics of non-linear dynamic systems and certifying their stability (Pettit, 2004). UQ is an important tool that enables quantitative risk analysis, the goal of which is to provide rational guidance for design and certification decisions (Pettit, 2004). Rich tools are available for treating the above uncertainties in optimal control problems. The basic idea for dealing with uncertainties in optimal control is the strategy of closed-loop feedback (Dorf & Bishop, 2011). For example, robust control methods have been extensively developed to reduce the sensitivity to uncertain parameters (Pritchard & Townley, 1991; Büskens & Maurer, 2001; Diehl et al., 2005; Sakthivel & Ito, 2007; Ibrir, 2008), whereby a deterministic control law is obtained. However, the influence domain of uncertainties in open-loop optimal control is also important in some non-critical prior design (Loxton et al., 2011), such as trajectory design (Golnaraghi & Kuo, 2010) and the reservation of safety factors, and it has not received enough attention. It should be noted that the solutions obtained from feedback control, such as robust control, are irrelevant to the propagation and influence domain of an uncertain system. A comprehensive review of the uncertainty propagation of ordinary differential equations (ODEs) using interval analysis, together with many instances of uncertainty propagation on orbit in aerocapture manoeuvres, is given in the report by Zazzera et al. (2006).
Also, the propagation and influence domain of uncertainty from the dynamical model, initial conditions and dynamical model parameters (Zazzera et al., 2006) are essential research topics in open-loop optimal control in astrodynamics. This article focuses on the propagation and influence domain estimation of uncertainty in an open-loop non-linear optimal control problem. So far, many methodologies have been developed to incorporate various uncertainties into the optimal control problem. Almost all of these methods are based on probabilistic techniques, including stochastic optimal control, probabilistic approximation methods and polynomial chaos approximation methods. Stochastic optimal control has received the most attention among the statistical methods. It assumes that the state of the controlled system over time is a stochastic process; this stochastic process may be impossible to measure, and the measurement itself is usually noisy and subject to errors. The most commonly used principles for solving the stochastic optimal control problem are Pontryagin's maximum principle and Bellman's dynamic programming (Yong & Zhou, 1999). These two approaches lead to two different classes of methodologies that developed separately and independently, but they are in fact equivalent. Since the theoretical basis of the stochastic maximum principle was established by Peng (1990), it can be used for optimal control systems with stochastic input. Although significant progress has been made in methodologies for solving stochastic optimal control problems, they all rest on the assumption that the distribution of the uncertain input parameters is known, and the optimal objective is to minimize the expectation of the terminal and integral costs (Stengel, 1986). Many probabilistic approximation methods founded on the Monte Carlo method have also been proposed. They are based on the assumption that the distribution of the input uncertain parameters is known.
The essence of these approximation methods is that they use statistical modelling and sampling, while the expected value of the cost function is approximated in different ways. Cao et al. (2003) presented a Monte Carlo method with a variance reduction technique, in which the sensitivity derivative of the state variable with respect to the uncertain parameters is employed. Kleywegt et al. (2002) regarded the uncertain optimal control problem as a stochastic discrete optimization problem and applied a Monte Carlo-based sample average approximation to help solve it. Dorociak & Gausemeier (2014) transformed the uncertain optimal control problem into a multi-objective optimization problem, in which two objectives based on a straightforward approximation are employed to determine the Pareto optimal control sequences. Phelps et al. (2016) provided further analysis and proofs extending consistency results from the deterministic optimal control problem to the uncertain one, with a computational framework also based on the approximation of sample averages using the Monte Carlo method. Polynomial chaos (PC) approximation methods are popular non-statistical probabilistic methods, because their computational cost is far lower than that of Monte Carlo simulations. Since Xiu & Karniadakis (2002) presented the generalized polynomial chaos (gPC) framework, which employs the Wiener–Askey (Askey & Wilson, 1985) orthogonal polynomial chaos expansion to represent various stochastic processes, the PC method has become applicable to approximating various distributions. Hover (2008) employed the PC approach to replace random variables with free variables in trajectory optimization. Fisher & Bhattacharya (2011) and Huschto & Sager (2013) used the framework of polynomial chaos expansion to transform the stochastic optimal control problem into deterministic optimal control problems.
This methodology was then applied to the trajectory generation problem of constrained mechanical systems and to preserving the feedback character of optimal Markov decision rules, respectively. All the above methods rest on the basic assumption that the uncertainties in the controlled model are aleatory. However, the optimal control problems encountered in astrodynamics usually involve imperfect, unknown information. These problems are often subject to uncertainties such as atmospheric influences (Wilkins & Alfriend, 2000), the influence of wind, the gravitational constant, solar radiation, etc. (Macdonald & Badescu, 2014). Though the probabilistic methods are effective when the complete statistical information of the uncertain parameters is known a priori, they are no longer applicable when only the upper and lower bounds of the uncertain parameters are known (Wu et al., 2013a). Many non-probabilistic methods are available to deal with such uncertainty, including the evidential reasoning (ER) approach (Dempster et al., 1977), fuzzy set theory (Zadeh, 1965), evidence theory (Shafer, 1976), convex models (Ben-Haim & Elishakoff, 1990; Jiang et al., 2011) and interval analysis (Moore, 1966; Hansen, 1969). Among these methods, interval analysis has recently gained popularity in various fields for its explicit arithmetic and its ability to verify or enclose solutions (Alefeld & Mayer, 2000). Essentially, the interval model is a convex model enclosing the possible values of an uncertain parameter (Jiang et al., 2007b), in which only the lower and upper bounds of the uncertain parameters are required. It is therefore more rational to describe such uncertainties and their influence using an interval model. The direct application of interval arithmetic leads to a conservative result with large overestimation, and it often suffers from a high computational load (Ishizaki et al., 2016). Indirect methods include general approaches, such as optimization, and some specialized methods.
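To make the overestimation of direct interval arithmetic concrete: natural interval evaluation treats each occurrence of an interval variable as independent (the so-called dependency problem). A minimal Python sketch, purely illustrative and not taken from the paper:

```python
# Minimal interval arithmetic illustrating the dependency problem that
# makes direct (natural) interval evaluation conservative.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)

# The true range of x - x is the single point {0}, but the two
# occurrences of x are treated as independent, giving [-1, 1]:
d = x - x
print(d.lo, d.hi)  # -1.0 1.0

# Exact range of x*(3 - x) on [1, 2] is [2, 2.25]; naive interval
# evaluation returns the much wider [1, 4]:
r = x * (Interval(3.0, 3.0) - x)
print(r.lo, r.hi)  # 1.0 4.0
```

This widening compounds over repeated operations, which is one motivation for inclusion functions with tighter enclosures, such as the Chebyshev inclusion function used in this article.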
For example, the IP-GA optimization approach can be employed to obtain the output interval of an interval model (Jiang et al., 2007a), and interval quadratic programming (Ishizaki et al., 2016) was designed for a special class of interval models consisting of the minimizer of a parametric quadratic program. However, interval analysis has rarely been used to handle the uncertainties in optimal control problems. The interval inclusion function is a suitable tool for obtaining the propagation and influence domain of an uncertain optimal control problem. In this article, an interval-analysis-based method is employed for the first time to solve the uncertain optimal control problem with uncertain parameters of unknown distributions. A non-probabilistic Chebyshev interval method combined with the symplectic algorithm is then proposed to estimate the interval bounds of the state, costate and control variables. In contrast to the aforementioned probabilistic methods, the Chebyshev interval method seeks the bounds of the uncertain output based on the interval model, rather than solving for an output expectation in problems whose uncertain parameters have known probability distributions. Hence, given an interval for the uncertain output, the uncertain optimal control can deliver solutions that meet the system requirements in all potential cases. The proposed method may be regarded as a worst-case analysis method, which can improve the security of optimal trajectory design in astrodynamics. 2. Deterministic non-linear optimal control problem 2.1.
Modelling of deterministic optimal control problem For the deterministic optimal control problem, the controlled dynamic system is expressed as $$\label{eq1} \dot{{\bf x}}\left(t \right)=f\left({{\bf x}}\left(t \right),{\bf u}\left(t \right),t \right)$$ (2.1) and the goal is to find a $${{\bf u}}$$ which minimizes the following cost function \begin{align} \label{eq2} J=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t \right)}\,{\rm d}t \end{align} (2.2) in which $$t$$ denotes time, the dot represents the derivative with respect to time, $${{\bf x}}$$ is the state vector, $${{\bf u}}$$ is the control input vector, and $$J$$ is the cost function. When the goal is attained, an optimal control $${{\bf u}}^{\ast}$$ is found and the corresponding state vector $${{\bf x}}^{\ast}$$ is also determined. In this article, we mainly consider the optimal control problem with fixed terminal time, i.e. $$t_{f}$$ is given. Many excellent methods can be used to solve optimal control problems (Loxton et al., 2011); among them, the calculus of variations is an effective method for solving the deterministic optimal control problem defined by Equations (2.1) and (2.2). In the calculus of variations, the optimal control problem is transformed into a non-linear two-point boundary value problem (TPBVP). In turn, many methods have been proposed for solving TPBVPs, including numerical iteration employing the Lagrange implicit function theorem and Newton's shooting (Majji et al., 2009), high-order expansions (Lizia et al., 2008), a sixth-order scheme (Armellin & Topputo, 2006) and the symplectic algorithm (Peng et al., 2011). In this study, the symplectic algorithm is adopted to solve the deterministic optimal control problem, as it has proved successful in trajectory optimization in astrodynamics (Peng et al., 2011).
A brief review of the symplectic algorithm is given in the following section; more details can be found in Peng et al. (2011). 2.2. Symplectic algorithm for solving deterministic nonlinear optimal control problem To construct a symplectic algorithm, a modified action $$U$$ in a time interval $$\left({0,\eta} \right)$$ is defined as follows \begin{align}\label{eq2.3} U={{\boldsymbol\lambda}}_{\eta}^{{\rm T}} {{\bf x}}_{\eta} -\int\limits_0^\eta {\left({{{\boldsymbol\lambda}}^{{\rm T}} \dot{{\bf x}}-H} \right){\rm d}t}, \end{align} (2.3) where $${{\bf x}}_{\eta}$$ and $${{\boldsymbol\lambda}}_{\eta}$$ are the state and costate variables at time $$\eta$$, respectively. The Hamiltonian function $$H\left({{\bf x}},{{\boldsymbol\lambda}} \right)$$ depends only on the state variables $${{\bf x}}$$ and costate variables $${{\boldsymbol\lambda}}$$, i.e., \begin{align}\label{eq2.4} H\left({{{\bf x}},{{\boldsymbol\lambda}}} \right)={\it {\Phi}} \left( {{{\bf x}}\left(t \right),g\left({{{\bf x}},{{\boldsymbol\lambda }}} \right),t} \right)+{{\boldsymbol\lambda}}^{{\rm T}}f\left( {{{\bf x}}\left(t \right),g\left({{{\bf x}},{{\boldsymbol\lambda }}} \right),t} \right), \end{align} (2.4) where $$g\left({{{\bf x}},{{\boldsymbol\lambda}}} \right)$$ denotes the control expressed in terms of the state and costate through the optimality condition. Differentiating Equation (2.3) on both sides gives \begin{align}\label{eq2.5} {\rm d}U={{\boldsymbol\lambda}}_{0}^{{\rm T}} {\rm d}{{\bf x}}_{0} +{{\bf x}}_{\eta}^{{\rm T}} {\rm d}{{\boldsymbol\lambda}}_{\eta}, \end{align} (2.5) where $${{\bf x}}_{0}$$ and $${{\boldsymbol\lambda}}_{0}$$ are the state and costate variables at the initial time, respectively. Equation (2.5) implies that, if the Hamiltonian canonical equations are satisfied within the time interval $$\left({0,\eta} \right)$$ and the state variable $${{\bf x}}_{0}$$ and costate variable $${{\boldsymbol\lambda}}_{\eta}$$ at the two ends of the time interval are taken as the independent variables, then $$U$$ is a function of $${{\bf x}}_{0}$$ and $${{\boldsymbol\lambda}}_{\eta}$$ only.
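As a concrete illustration of the elimination $$u=g\left({{\bf x}},{{\boldsymbol\lambda}}\right)$$ in Equation (2.4), take a hypothetical scalar example (not one from the paper): for $$f(x,u)=ax+bu$$ and $${\it\Phi}=u^{2}/2$$, the stationarity condition $$\partial H/\partial u=0$$ gives $$g(x,\lambda)=-b\lambda$$ and $$H(x,\lambda)=\lambda ax-b^{2}\lambda^{2}/2$$. A minimal numerical check:

```python
# Hypothetical scalar example (not from the paper): eliminate the control
# from H = Phi + lambda*f via the stationarity condition dH/du = 0.
a, b = 1.3, 0.7                          # assumed model parameters
Phi = lambda x, u: 0.5 * u * u           # cost integrand
f = lambda x, u: a * x + b * u           # controlled dynamics
H_full = lambda x, lam, u: Phi(x, u) + lam * f(x, u)

# Stationarity: dH/du = u + lam*b = 0  =>  u = g(x, lam) = -b*lam,
# and the reduced Hamiltonian is H(x, lam) = lam*a*x - 0.5*b**2*lam**2.
g = lambda x, lam: -b * lam
H = lambda x, lam: lam * a * x - 0.5 * b**2 * lam**2

x0, lam0 = 0.8, -1.1
u_star = g(x0, lam0)

# The reduced H agrees with the full H evaluated at the stationary control:
assert abs(H(x0, lam0) - H_full(x0, lam0, u_star)) < 1e-12

# Finite-difference check that u_star is indeed a stationary point of H_full:
eps = 1e-6
dHdu = (H_full(x0, lam0, u_star + eps) - H_full(x0, lam0, u_star - eps)) / (2 * eps)
assert abs(dHdu) < 1e-6
print("u* =", u_star)
```

The same elimination is what reduces $$H$$ to a function of $${{\bf x}}$$ and $${{\boldsymbol\lambda}}$$ alone in Equation (2.4).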
In order to formulate the numerical algorithm, the whole time domain $$\left({t_{0}, t_{f}} \right)$$ is divided into a series of equal time intervals; the interval length is $$\eta$$ and the number of time intervals is $${\it {\Omega}}$$. Then the state variable $${{\bf x}}\left(t \right)$$ and costate variable $${{\boldsymbol\lambda}}\left(t \right)$$ within the $$j$$th subinterval are approximated using Lagrange polynomials of order $$m-1$$ and $$n-1$$, respectively, i.e., \begin{gather} \label{eq3} {{\bf x}}\left(t \right)=\left(M_{1} \otimes {{\bf I}} \right){{\bf x}}_{j-1} +\left(\tilde{{{\bf M}}}\otimes {{\bf I}} \right){\overline{{{\bf x}}}}_{j}\\ \end{gather} (2.6) \begin{gather} \label{eq4} {{\boldsymbol\lambda}}\left(t \right)=\left( \underset{{\sim}}{\bf N} \otimes {{\bf I}} \right) \overline{{\boldsymbol\lambda}}_{j} +\left({N_{n} \otimes {{\bf I}}} \right){{\boldsymbol\lambda}}_{j} \end{gather} (2.7) in which the vector $${{\bf x}}_{j-1}$$ denotes the state variable at the left end of the $$j$$th subinterval, $${{\boldsymbol\lambda}}_{j}$$ denotes the costate variable at the right end of the $$j$$th subinterval, the vector $$\overline{{\bf x}}_{j}$$ is composed of all the state variables at the interpolation points within the $$j$$th subinterval, i.e., $$\overline{{\bf x}}_{j} =\left\{\overline{{\bf x}}_{j}^{2}, \overline{{\bf x}}_{j}^{3}, \cdots, \overline{{\bf x}}_{j}^{m} \right\}^{{\rm T}}$$, and $$\overline{\boldsymbol{\lambda}}_{j}$$ is composed of the dependent costate variables at the interpolation points within the $$j$$th subinterval, i.e., $$\overline{\boldsymbol{\lambda}}_{j} =\left\{\overline{\boldsymbol{\lambda}}_{j}^{1}, \overline{\boldsymbol{\lambda}}_{j}^{2}, \cdots, \overline{\boldsymbol{\lambda}}_{j}^{n-1} \right\}^{{\rm T}}$$. The symbol $$\otimes$$ in Equations (2.6) and (2.7) denotes the Kronecker product of two matrices.
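The basis polynomials $$M_{i}$$ in Equation (2.10) are ordinary Lagrange polynomials over the equally spaced nodes $$t_{j}=\left({j-1}\right)\eta/\left({m-1}\right)$$, so they satisfy $$M_{i}\left({t_{k}}\right)=\delta_{ik}$$ and sum to one everywhere in the subinterval. A quick numerical check of these two properties (an illustrative sketch, not the authors' code):

```python
def lagrange_basis(i, t, eta, m):
    """M_i(t) from Equation (2.10): the product over j != i of
    (t - t_j) / (t_i - t_j), with nodes t_j = (j-1)*eta/(m-1)."""
    node = lambda j: (j - 1) * eta / (m - 1)
    val = 1.0
    for j in range(1, m + 1):
        if j != i:
            val *= (t - node(j)) / (node(i) - node(j))
    return val

eta, m = 0.5, 4   # subinterval length and number of nodes (order m-1)
nodes = [(k - 1) * eta / (m - 1) for k in range(1, m + 1)]

# Kronecker-delta property at the nodes: M_i(t_k) = 1 if i == k else 0.
for i in range(1, m + 1):
    for k, tk in enumerate(nodes, start=1):
        expected = 1.0 if i == k else 0.0
        assert abs(lagrange_basis(i, tk, eta, m) - expected) < 1e-12

# Partition of unity: sum_i M_i(t) = 1 for any t in the subinterval.
t = 0.123
assert abs(sum(lagrange_basis(i, t, eta, m) for i in range(1, m + 1)) - 1.0) < 1e-12
print("Lagrange basis checks passed")
```

The $$N_{i}$$ in Equation (2.11) are built identically with $$n$$ in place of $$m$$.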
Other symbols in Equations (2.6) and (2.7) are defined as \begin{gather} \tilde{{\bf M}}=\left[{M_{2}, M_{3}, \cdots, M_{m} } \right]\\ \end{gather} (2.8) \begin{gather} \underset{{\sim}}{\bf N}=\left[{N_{1}, N_{2} ,\cdots, N_{n-1}} \right]\\ \end{gather} (2.9) \begin{gather} M_{i} =\prod\limits_{j=1,j\ne i}^m {\frac{t-{\left( {j-1} \right)\eta} /{\left({m-1} \right)}}{{\left({i-j} \right)\eta} /{\left({m-1} \right)}}} \end{gather} (2.10) \begin{gather} N_{i} =\prod\limits_{j=1,j\ne i}^n {\frac{t-{\left( {j-1} \right)\eta} /{\left({n-1} \right)}}{{\left({i-j} \right)\eta} /{\left({n-1} \right)}}} \end{gather} (2.11) Substituting Equations (2.6) and (2.7) into Equation (2.3), the approximate function in the $$j$$th subinterval can be expressed as follows \begin{align}\label{2.12} U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda}}_{j} ,\overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda}}}}_{j}} \right)={{\boldsymbol\lambda}}_{j}^{{\rm T}} \overline{{\bf x}}_{j} -\int\limits_0^\eta {\left({{{\boldsymbol\lambda}}^{{\rm T}} \dot{{\bf x}}-H\left({{{\bf x}},{{\boldsymbol\lambda}}} \right)} \right)_{j} {\rm d}t} \quad j=1,2,\ldots, {\it {\Omega}}. 
\end{align} (2.12) According to Equation (2.5), the approximate action $$U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda}}}}_{j}} \right)$$ in the $$j$$th subinterval must be a function of only the state variable at the left end and the costate variable at the right end of the $$j$$th subinterval, i.e., the following equations must be satisfied \begin{gather} \label{eq9} \frac{\partial U_{j} \left({{{\bf x}}_{j-1} ,{{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j} ,{\overline{{\boldsymbol{\lambda}}}}_{j}} \right)}{\partial \overline{{\bf x}}_{j}}={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}\\ \end{gather} (2.13) \begin{gather} \frac{\partial U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda }}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda }}}}_{j}} \right)}{\partial {\overline{{\boldsymbol{\lambda }}}}_{j}}={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}.\label{eq2.14} \end{gather} (2.14) According to Equation (2.5), we also have \begin{gather} \label{eq10} \frac{\partial U_{j} \left({{{\bf x}}_{j-1} ,{{\boldsymbol\lambda}}_{j}, \overline{{\bf x}}_{j} ,{\overline{{\boldsymbol{\lambda}}}}_{j}} \right)}{\partial {{\bf x}}_{j-1}}-{{\boldsymbol\lambda}}_{j-1} ={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}\\ \end{gather} (2.15) \begin{gather} \frac{\partial U_{j} \left({{{\bf x}}_{j-1}, {{\boldsymbol\lambda }}_{j}, \overline{{\bf x}}_{j}, {\overline{{\boldsymbol{\lambda }}}}_{j}} \right)}{\partial {{\boldsymbol\lambda}}_{j}}-{{\bf x}}_{j} ={{\bf 0}},\quad j=1,2,\ldots, {\it {\Omega}}.\label{eq2.16} \end{gather} (2.16) Accordingly, based on Equation (2.5), the deterministic non-linear optimal control problem is transformed into the set of non-linear equations (2.13)–(2.16). Using the Newton–Raphson iteration method or another non-linear equation solver, the state and costate variables can be obtained, and the deterministic non-linear optimal control problem is thereby solved.
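Equations (2.13)–(2.16) form a square system of non-linear algebraic equations in the nodal states and costates. The Newton–Raphson step can be sketched generically as below, here on a toy 2×2 system with a finite-difference Jacobian; this is an illustrative solver, not the authors' implementation:

```python
def newton_raphson(F, y0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve F(y) = 0 for a small dense system by Newton-Raphson,
    approximating the Jacobian by forward differences and solving
    each linear step by Gaussian elimination with partial pivoting."""
    y = list(y0)
    n = len(y)
    for _ in range(max_iter):
        Fy = F(y)
        if max(abs(v) for v in Fy) < tol:
            return y
        # Forward-difference Jacobian J[r][c] = dF_r / dy_c
        J = [[0.0] * n for _ in range(n)]
        for c in range(n):
            yp = list(y)
            yp[c] += eps
            Fp = F(yp)
            for r in range(n):
                J[r][c] = (Fp[r] - Fy[r]) / eps
        # Solve J * dy = -Fy (augmented matrix, forward elimination)
        A = [row[:] + [-Fy[r]] for r, row in enumerate(J)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            for r in range(col + 1, n):
                fac = A[r][col] / A[col][col]
                for c in range(col, n + 1):
                    A[r][c] -= fac * A[col][c]
        dy = [0.0] * n
        for r in range(n - 1, -1, -1):
            s = A[r][n] - sum(A[r][c] * dy[c] for c in range(r + 1, n))
            dy[r] = s / A[r][r]
        y = [yi + di for yi, di in zip(y, dy)]
    return y

# Toy system: y0^2 + y1 - 3 = 0 and y0 + y1^2 - 5 = 0; one root is (1, 2).
F = lambda y: [y[0] ** 2 + y[1] - 3.0, y[0] + y[1] ** 2 - 5.0]
sol = newton_raphson(F, [1.0, 1.0])
print(sol)
```

In the actual discretization the unknowns are all nodal states and costates, so the system dimension grows with the number of subintervals, and a sparse linear solver would replace the dense elimination used here.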
The flowchart of the symplectic algorithm is shown in Fig. 1 (Fig. 1: The symplectic algorithm for solving TPBVP). To achieve the best precision and efficiency in the symplectic algorithm, the interpolation parameters for the state and costate variables are combined as $$m=n$$. Numerical tests show that, with the combination $$m=n$$, the order of the proposed symplectic algorithm is $$2\left({m-1} \right)+1$$. The main computational cost of the symplectic algorithm lies in solving the nonlinear equations (2.13)–(2.16), which are strongly dependent on the problem. It should be noted that, in the above algorithm, the numbers of interpolation parameters for the state and costate variables in the Lagrange polynomials are $$m$$ and $$n$$, respectively. The dimensions of the equations in Equations (2.13) and (2.14) are $$m-1$$ and $$n-1$$; adding the two equations in Equations (2.15) and (2.16), the dimension over one time interval is $$m+n$$. Since the problem is divided into $${\it {\Omega}}$$ time intervals, the dimension of the nonlinear equations is $${\it {\Omega}} \times \left({m+n} \right)$$. 3. Interval uncertain method for non-linear optimal control problem with uncertainties 3.1.
Interval uncertain modelling of optimal control problem with uncertainties Considering an optimal control problem with uncertainties, the controlled dynamical system of Equation (2.1), now associated with an uncertain parameter vector $${\boldsymbol{\xi}}=\left[{\xi_{1}, \xi_{2}, \ldots, \xi_{n}} \right]$$, can be rewritten as follows, i.e., $$\label{eq11} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t,{\boldsymbol{\xi}} \right)$$ (3.1) and the new objective $$\tilde{{J}}$$ is still to minimize the cost function defined in Equation (3.2): $$\label{eq12} \tilde{{J}}=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t} \right)}\,{\rm d}t$$ (3.2) which is indirectly related to the uncertain parameter $${\boldsymbol{\xi}}$$ through the state $${{\bf x}}$$ and control $${{\bf u}}$$ arising in Equation (3.1). The uncertain parameter $${\boldsymbol{\xi}}$$ is associated with the uncertainties from system parameters, such as initial errors, process errors, etc. We assume that the complete statistical information for the uncertain parameter $${\boldsymbol{\xi}}$$ is unknown and that only its bounds are known. The model of the uncertain optimal control problem defined by Equations (3.1) and (3.2) is significantly different from that defined in stochastic optimal control. Firstly, it should be noted that the uncertain parameter $${\boldsymbol{\xi}}$$ in Equation (3.1) is introduced without any assumption on its probability distribution. Furthermore, the stochastic optimal control system can be described as follows: $$\label{eq13} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf x}}\left( t \right),{{\bf u}}\left(t \right),t,{{\bf v}}(t)} \right)$$ (3.3) in which the controlled system involves a stochastic variable $${{\bf v}}(t)$$, which is a Gaussian white noise vector.
Moreover, the performance index of stochastic optimal control is the expectation of the cost function (Stengel, 1986), i.e., \begin{align}\label{eq3.4} E\left\{J \right\}=E\left\{{\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left( {{{\bf x}}\left(t \right),{{\bf u}}\left(t \right),t} \right)}\,{\rm d}t} \right\}\!. \end{align} (3.4) Therefore, the modelling of uncertain optimal control in this article should be distinguished from stochastic optimal control. All the uncertainties, such as measurement errors, process errors, etc., in the theory of stochastic optimal control are introduced through known probability distributions. However, many uncertainties, such as non-exact relationships and incomplete theories, cannot be described by probability functions. In many practical situations, the bounds of an uncertain parameter can be obtained more easily than its exact probability distribution (Wang & Qiu, 2009; Römgens et al., 2013; Wu et al., 2013b). So it is necessary to establish the interval uncertain model for those uncertain optimal control problems. If only the bounds of the uncertain parameter $${\boldsymbol{\xi}}$$ are given, it can be defined using the interval vector $$\left[{{\boldsymbol{\xi}}} \right]\in \mathbb{I}\mathbb{R}^{n}$$, i.e., \begin{align}\label{eq3.5} \left[{{\boldsymbol{\xi}}} \right]=\left[{{{{\underline{\boldsymbol{\xi} }}}},{\overline{{\boldsymbol{\xi}}}}} \right]^{n}=\left\{{\xi_{r} :\underline{\xi}_{r} \leqslant \xi_{r} \leqslant \overline{{\xi }}_{r}, r=1,2,\ldots, n} \right\}\!, \end{align} (3.5) where the symbols $${{{\underline{\boldsymbol{\xi}}}}}=\left[{{\underline{\xi}}_{1}, {\underline{\xi}}_{2}, \ldots, {\underline{\xi}}_{n}} \right]^{{\rm T}}$$ and $${\overline{{\boldsymbol{\xi}}}}=\left[{\overline{{\xi}}_{1}, \overline{{\xi}}_{2}, \ldots, \overline{{\xi}}_{n}} \right]^{{\rm T}}$$ denote the lower and upper bounds of the uncertain parameter $${\boldsymbol{\xi}}$$.
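The Chebyshev inclusion function that the article applies to such interval parameters (following Wu et al., 2013b) can be sketched for a scalar parameter: expand the response in Chebyshev polynomials over $$\left[{{\underline{\xi}},\overline{{\xi}}}\right]$$ and replace each $$T_{i}$$ ($$i\geqslant 1$$) by its range $$[-1,1]$$. The following is an illustrative reimplementation under our own simplifications, not the authors' code:

```python
import math

def chebyshev_inclusion(f, lo, hi, n=8):
    """Estimate an interval enclosing f over [lo, hi] via a truncated
    Chebyshev expansion (a scalar Chebyshev inclusion function).
    Coefficients are computed by Gauss-Chebyshev quadrature at n nodes;
    each T_i with i >= 1 is then replaced by its range [-1, 1]."""
    theta = [(2 * k - 1) * math.pi / (2 * n) for k in range(1, n + 1)]
    # Map the Chebyshev nodes from [-1, 1] onto [lo, hi] and sample f.
    fk = [f(0.5 * (hi + lo) + 0.5 * (hi - lo) * math.cos(t)) for t in theta]
    c = [(2.0 / n) * sum(fk[k] * math.cos(i * theta[k]) for k in range(n))
         for i in range(n)]
    mid = 0.5 * c[0]                    # constant (T_0) contribution
    rad = sum(abs(ci) for ci in c[1:])  # |c_i| times the range of T_i
    return mid - rad, mid + rad

lo_b, hi_b = chebyshev_inclusion(math.exp, 0.0, 1.0)
print(lo_b, hi_b)   # an approximate enclosure of the true range [1, e]
```

Unlike rigorous interval arithmetic, the truncated expansion only approximates the response, so the estimated bounds can be slightly narrower than the exact range; in exchange, the large overestimation of direct interval evaluation is avoided.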
In order to avoid repetition and to simplify the formulas, an underlined or overlined quantity in this article indicates its lower or upper bound, respectively. For simplicity, the symbol $${{\bf z}}\left(t \right)=\left[{{{\bf u}}\left(t \right),\;{{\bf x}}\left(t \right)} \right]^{{\rm T}}$$ is used to express the solution vector obtained from the optimal control problem. Then Equation (3.1) can be described as a controlled dynamic system with uncertain parameter $${\boldsymbol{\xi}}$$, i.e., $$\label{eq14} \dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left( t \right),t,{\boldsymbol{\xi}}} \right)$$ (3.6) and the cost function in Equation (3.2) can also be rewritten as: \begin{align}\label{eq3.7} \tilde{{J}}\left({{{\bf z}}} \right)=\int_{t_{0}}^{t_{f}} {{\it {\Phi}} \left({{{\bf z}}\left(t \right),t} \right)}\,{\rm d}t. \end{align} (3.7) If we employ the symbol $${{\bf z}}^{\ast}$$ to represent the real solution of the deterministic optimal control problem defined in Equations (2.1) and (2.2), then the real solution of the optimal control problem with the uncertain parameter defined in Equations (3.6) and (3.7) (or Equations (3.1) and (3.2)) is enclosed in an interval vector $$\left[{{{\bf z}}^{\ast}} \right]=\left[{{{\bf {\underline{z}}}}^{\ast},\overline{{\bf z}}^{\ast}} \right]$$.
In the above interval vector expression, the lower bound $${{\bf {\underline{z}}}}^{\ast}$$ and upper bound $$\overline{{\bf z}}^{\ast}$$ of the solution $${{\bf z}}^{\ast}$$ can also be defined as follows: \begin{gather} \label{eq15} {\underline{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\inf \left\{{z_{\chi}^{\ast} \left({\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right),\;{\boldsymbol{\xi}}\in \left[{{\boldsymbol{\xi}}} \right]} \right\}\\ \end{gather} (3.8) \begin{gather} \overline{{z}}_{\;\chi}^{\ast} \left({\tilde{{t}}} \right)=\sup \left\{{z_{\chi}^{\ast} \left({\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right),\;{\boldsymbol{\xi}}\in \left[{{\boldsymbol{\xi}}} \right]} \right\}\!.\label{eq3.9} \end{gather} (3.9) in which $$\chi =1,2,\ldots, Z$$ is the component index of the variable $${{\bf z}}$$. It should be noted that $$z_{\chi}^{\ast} \left({\tilde{{t}}} \right)$$ is a component of the output at a given time $$\tilde{{t}}$$, which is a scalar quantity. 3.2. Interval analysis of the interval uncertain model The numerical method described in Section 2 can be applied to solve the deterministic optimal control problem when the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed.
Then, the precise bounds of the uncertain optimal control solution $${{\bf z}}^{\ast}$$ can be obtained by solving the following optimization problems, i.e., \begin{gather} \label{eq16} {\underline{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\mathop{\min\limits_{{\boldsymbol{\xi}}\in \left[ {{\boldsymbol{\xi}}} \right]}} \left\{{z_{\chi}^{\ast} \left( {\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right)} \right\}\\ \end{gather} (3.10) \begin{gather} \overline{{z}}_{\chi}^{\ast} \left({\tilde{{t}}} \right)=\mathop{\max\limits_{{\boldsymbol{\xi}}\in \left[ {{\boldsymbol{\xi}}} \right]}} \left\{{z_{\chi}^{\ast} \left( {\tilde{{t}}} \right):\min \tilde{{J}}\left({{{\bf z}}\left(t \right)} \right),{\rm subject}\;{\rm to}\;\dot{{\bf x}}\left(t \right)=\boldsymbol{f}\left({{{\bf z}}\left(t \right),t,{\boldsymbol{\xi}}} \right)} \right\}. \label{eq3.11} \end{gather} (3.11) However, the optimization problems in Equations (3.10) and (3.11) cannot be solved easily by optimization algorithms: gradient-based algorithms often fail to reach the global optimum for strongly non-linear problems, and intelligent (heuristic) algorithms have a high computational cost. Fortunately, interval analysis is an efficient way to estimate the range of the output of an uncertain optimal control problem. In interval analysis, interval operations combined with an appropriate inclusion function are used to estimate the range of the output. Interval arithmetic and interval functions are the two basic tools of interval analysis, and interval analysis uses interval arithmetic to obtain the solution of an interval function. Interval arithmetic defines the operations on real intervals and ensures that all the possible results of the corresponding real operations are enclosed in the resulting intervals.
Four arithmetic operations for the real intervals $$\left[x \right]$$ and $$\left[y \right]$$ are defined as follows (Moore, 1966): \begin{align}\label{eq3.12} \left[z \right]=\left[x \right]\circ \left[y \right]=\left\{ {z=x\circ y:x\in \left[x \right],y\in \left[y \right],\circ \in \left\{{+,-,\times, \div} \right\}} \right\}\!. \end{align} (3.12) Also, the arithmetic operations defined on a real number $$a$$ and an interval $$\left[x \right]$$ are given by: \begin{align}\label{eq3.13} \left[z \right]=a\circ \left[x \right]=\left\{z=a\circ x:x\in \left[x \right],\circ \in \left\{{+,-,\times, \div} \right\} \right\}\!. \end{align} (3.13) For an arbitrary function (or mapping) $${\boldsymbol{f}}:\mathbb{R}^{n}\to \mathbb{R}^{m}$$, we define $$[\,{\boldsymbol{f}}]$$ as the interval function, given by: \begin{align}\label{eq3.14} \left[{{{\bf z}}} \right]=[\,{\boldsymbol{f}}]\left({\left[{{{\bf x}}} \right]} \right)\!, \end{align} (3.14) where $${{\bf x}}\in \mathbb{R}^{n}$$ is the input vector and $${{\bf z}}\in \mathbb{R}^{m}$$ is the output vector of the function $${\boldsymbol f}$$, and $$\left[{{{\bf x}}} \right]\in \mathbb{I}\mathbb{R}^{n}$$ and $$\left[{{{\bf z}}} \right]\in \mathbb{I}\mathbb{R}^{m}$$ are the corresponding interval expressions of their ranges. The interval function can easily be applied to describe the solutions of the optimal control problem with uncertain parameter $${\boldsymbol{\xi}}$$. If the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed, the solution of the deterministic optimal control problem can be treated as a function of $${\boldsymbol{\xi}}$$, i.e., \begin{align}\label{eq3.15} {{\bf z}}^{\ast}={\boldsymbol{g}}\left({{\boldsymbol{\xi}}} \right)\!.
\end{align} (3.15) Then the interval function for the optimal control problem with interval uncertain parameter $$\left[{{\boldsymbol{\xi}}} \right]$$ is expressed as follows: \begin{align}\label{eq3.16} \left[{{{\bf z}}^{\ast}} \right]=\left[{\boldsymbol{g}} \right]\left({\left[ {{\boldsymbol{\xi}}} \right]} \right)\!. \end{align} (3.16) Unfortunately, interval analysis can only be applied to functions that consist of the basic arithmetic operations. Thus interval analysis cannot be used directly to estimate the range of the interval function of the uncertain optimal control problem. However, the uncertain optimal control problem can be represented by approximate functions on which the interval arithmetic operations can easily be performed. The Taylor series expansion is commonly used to construct the inclusion function in many fields, but it may introduce more overestimation and produce wider, less sharp results. The algorithm based on the Chebyshev series can largely reduce the overestimation in interval analysis for non-linear problems, and it has been proved to be effective for the interval analysis of ODEs (Wu et al., 2013b) and DAEs (Wu et al., 2013a). It can be found that a single operation in Equations (3.12) and (3.13) gives a precise result with no overestimation. However, if an interval function in Equation (3.16) involves many interval operations defined by Equations (3.12) and (3.13), it can produce intrinsic overestimation through the so-called ‘wrapping effect’. This overestimation may be even more severe and complex in multidimensional problems (Nedialkov, 1999). It should be emphasized that this ‘wrapping effect’ is intrinsic and cannot be removed. The ‘wrapping effect’ is mainly caused by the bounds being attained asynchronously in related interval operations. In many cases, the asynchrony is caused by multiplication with a negative number or with an interval containing a negative bound.
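The interval operations of Equations (3.12) and (3.13), and the asynchronous-bounds mechanism behind the 'wrapping effect', can be reproduced in a few lines of code. This is a minimal illustration, not part of the original method; the `Interval` class and its operator names are our own.

```python
class Interval:
    """Closed real interval [lo, hi]; operators follow Equations (3.12)-(3.13)."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = _as_interval(other)
        # The exact product range is spanned by the four endpoint products.
        p = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(p), max(p))

    __radd__, __rmul__ = __add__, __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"


def _as_interval(v):
    # A real number a acts as the degenerate interval [a, a] (Equation (3.13)).
    return v if isinstance(v, Interval) else Interval(v, v)


x, y = Interval(1, 2), Interval(-2, -1)
# Each single operation is exact, but chaining them lets the two occurrences
# of [x] take their bounds asynchronously -- the wrapping effect.
wide = 2 * x + x * y      # [-2, 3]: overestimates the true range [0, 2]
tight = x * (2 + y)       # [0, 2]: the equivalent single-occurrence form
print(wide, tight)
```

The two expressions are algebraically identical, yet only the form in which $[x]$ occurs once returns the exact range.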
For example, given the intervals $$\left[x \right]=\left[{1,2} \right]$$ and $$\left[y \right]=\left[{-2,-1} \right]$$, consider the interval inclusion function: \begin{align}\label{eq3.17} \begin{split} \left[z \right]=\left[{f_{1}} \right]\left({\left[x \right],\left[y \right]} \right)&=2\times \left[x \right]+\left[x \right]\times \left[y \right]=2\times \left[{1,2} \right]+\left[{1,2} \right]\times \left[ {-2,-1} \right] \\ &=\left[{2,4} \right]+\left[{-4,-1} \right]=\left[{-2,3} \right] \end{split}. \end{align} (3.17) Actually, the exact interval for $$\left[z \right]$$ is $$\left[{0,2} \right]$$, and if we replace $$\left[{f_{1}} \right]$$ with the equivalent form $$\left[{f_{2}} \right]$$: \begin{align} \label{eq17} \begin{split} \left[z \right]=\left[{f_{2}} \right]\left({\left[x \right],\left[y \right]} \right)&=\left[x \right]\times \left({2+\left[y \right]} \right)=\left[{1,2} \right]\times \left({2+\left[{-2,-1} \right]} \right)\\ &=\left[{1,2} \right]\times \left[{0,1} \right]=\left[{0,2} \right] \end{split} \end{align} (3.18) the exact result is obtained. To reduce the error caused by the ‘wrapping effect’, the interval inclusion function should be designed in an appropriate form. An approximation function built from trigonometric functions is a highly preferred form, because not only can the interval estimation of trigonometric functions be obtained from their periodicity (Jaulin et al., 2001; Wu et al., 2015) (the maximum interval is $$\left[{-1,1} \right]$$), but also the bounds of the interval $$\left[{-1,1} \right]$$ are obtained synchronously when the interval is multiplied by a negative number or by itself. In the following sections, the Chebyshev interval method based on the symplectic algorithm is proposed to estimate the solution of the optimal control problem with interval uncertainty. 4. Chebyshev interval method based symplectic algorithm 4.1.
Chebyshev polynomial approximation and the interval analysis The Chebyshev polynomial approximation for the one-dimensional function $$f(x)$$ defined on $$\left[{{\underline{x}}, \overline{{x}}} \right]\in \mathbf{\mathbb{R}}$$ can be written as $$\label{eq18} f\left(x \right)\approx p_{d} \left(x \right)=\frac{1}{2}f_{0} +\sum\limits_{i=1}^d {f_{i} \cdot \cos \left({i\alpha} \right)}$$ (4.1) in which $$d$$ is the truncated degree of the Chebyshev series and $$\alpha$$ is the substitution variable defined as \begin{align}\label{eq4.2} \alpha =\arccos \left({\frac{2x-(\overline{{x}}+ {\underline{x}} )}{\overline{{x}}- {\underline{x}}}} \right)\in \left[{0,\pi} \right]. \end{align} (4.2) For an $$n$$-dimensional function $$f\left({{{\bf x}}} \right)$$ with an input variable $${{\bf x}}\in \left[{{\underline{{\bf x}}}},\overline{{\bf x}} \right]\in \mathbf{\mathbb{I}\mathbb{R}}^{n}$$, considering $$\cos \left({0_{r} \alpha_{r}} \right)=1$$, the $$d$$-degree approximation is given using the tensor product: $$\label{eq19} f\left({{{\bf x}}} \right)\approx p_{d} \left({{{\bf x}}} \right)=\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left({\frac{1}{2}} \right)^{\beta}f_{i_{1}, \ldots,i_{r} \ldots,i_{n}} \cdot \cos \left({i_{1} \alpha_{1}} \right)\ldots\cos \left({i_{r} \alpha_{r}} \right)\ldots\cos \left({i_{n} \alpha_{n}} \right)}$$ (4.3) where $$\beta$$ is the number of zero-degree ($$i_{r} =0$$) Chebyshev polynomials, and the subscript $$r$$ denotes the component index of the input variable $${{\bf x}}=\left\{{x_{1}, \ldots, x_{r}, \ldots, x_{n}} \right\}$$ and of the substitution variable vector $${\boldsymbol{\alpha}}=\left\{{\alpha_{1}, \ldots, \alpha _{r}, \ldots, \alpha_{n}} \right\}$$. Each component $$\alpha_{r}$$ of the substitution variable vector can be calculated by Equation (4.2) with the corresponding component of $${{\bf x}}$$.
It should be noted that, compared with Equation (4.1), the summation in Equation (4.3) starts from 0 after introducing the coefficient $$\left({1/2} \right)^{\beta}$$ to the terms that contain zero-degree Chebyshev polynomials. In the architecture of the multi-dimensional Chebyshev polynomial interpolant in Equation (4.3), the basis functions are trigonometric functions. Since the range of a trigonometric function can be obtained accurately from its periodicity and monotonicity, interval analysis can be applied to the approximation function. The fitting coefficients $$f_{i_{1}, \ldots,i_{s}, \ldots,i_{n}}$$ are calculated by the Mehler integral, based on the orthogonality of the Chebyshev polynomials (Wu et al., 2013b): $$\label{eq20} f_{i_{1}, \ldots, i_{s}, \ldots,i_{n}} \approx \left( {\frac{2}{\tilde{{d}}}} \right)^{n}\sum\limits_{q_{1} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{s} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{n} =1}^{\tilde{{d}}} {f\left({x_{q_{1}} ,\ldots,x_{q_{s}}, \ldots,x_{q_{n}}} \right)\cos \left({i_{1} \theta_{q_{1}}} \right)\ldots\cos \left({i_{s} \theta_{q_{s}}} \right)\ldots\cos \left({i_{n} \theta_{q_{n}}} \right)}$$ (4.4) in which, for each dimension $$s\left({s=1,2,\ldots, n} \right)$$ of the input variable, the interpolation points are $$\theta_{q_{s}} =\frac{2q_{s} -1}{\tilde{{d}}}\frac{\pi}{2}$$, where $$q_{s} =1,2,\ldots, \tilde{{d}}$$, $$\tilde{{d}}$$ is the calculation degree for a $$d$$-degree truncated Chebyshev series, $$x_{q_{s}}$$ is calculated by \begin{align}\label{eq4.5} x_{q_{s}} =\frac{\overline{{x}}_{q_{s}} + {\underline{x}}_{q_{s}} }{2}+\frac{\overline{{x}}_{q_{s}}-{\underline{x}}_{q_{s}}}{2}\cos \left({\theta_{q_{s}}} \right)\!, \end{align} (4.5) and $$\left[{x_{q_{1}}, \ldots,x_{q_{s}}, \ldots,x_{q_{n}}} \right]^{{\rm T}}$$ are the positions for evaluating the original function.
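As a one-dimensional sketch of Equations (4.1), (4.2) and (4.4): the coefficients are computed at the Mehler (Gauss-Chebyshev) points and the truncated series is then evaluated through the substitution variable. The function names and the test function $e^{x}$ are illustrative, not taken from the paper.

```python
import math

def chebyshev_coeffs(f, x_lo, x_hi, d, d_tilde):
    # Interpolation angles theta_q = (2q - 1)/d_tilde * pi/2, q = 1..d_tilde.
    thetas = [(2 * q - 1) / d_tilde * math.pi / 2 for q in range(1, d_tilde + 1)]
    # Evaluation points inside [x_lo, x_hi] (Equation (4.5), one dimension).
    xs = [(x_hi + x_lo) / 2 + (x_hi - x_lo) / 2 * math.cos(t) for t in thetas]
    # Coefficients f_i by the discrete Mehler integral (Equation (4.4)).
    return [2 / d_tilde * sum(f(x) * math.cos(i * t) for x, t in zip(xs, thetas))
            for i in range(d + 1)]

def chebyshev_eval(coeffs, x_lo, x_hi, x):
    # Substitution variable alpha of Equation (4.2).
    alpha = math.acos((2 * x - (x_hi + x_lo)) / (x_hi - x_lo))
    # Truncated series of Equation (4.1): f_0/2 + sum_i f_i cos(i alpha).
    return coeffs[0] / 2 + sum(c * math.cos(i * alpha)
                               for i, c in enumerate(coeffs) if i > 0)

coeffs = chebyshev_coeffs(math.exp, 0.0, 1.0, d=10, d_tilde=15)
err = max(abs(math.exp(k / 100) - chebyshev_eval(coeffs, 0.0, 1.0, k / 100))
          for k in range(1, 100))
print(err)   # negligible for a degree-10 truncation, cf. Equation (4.8)
```

Note that $\tilde{d}=15 \geqslant d+1$, matching the degrees used in the numerical examples of Section 5.1.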
It should be noted that the fitting coefficients are calculated by a linear combination of evaluations of the original function in Equation (4.4). Usually, a higher-degree quadrature gives better integral accuracy, so the quadrature degree $$\tilde{{d}}$$ is usually not less than $$d+1$$. Subsequently, because the bounds of a trigonometric function can be obtained directly from its periodicity and boundedness (in Equation (4.1), $$\cos \left({i\left[\alpha \right]} \right)=[-1,1]$$ when $$i\ne 0$$ and $$\alpha \in \left[{0,\pi} \right]$$), interval analysis can easily be applied to Equation (4.1). The Chebyshev inclusion function for one-dimensional problems can be written as: \begin{align}\label{eq4.6} \left[f \right]\left({\left[x \right]} \right)=\frac{1}{2}f_{0} +\sum\limits_{i=1}^d {f_{i} \cdot \cos \left({i\left[\alpha \right]} \right)} =\frac{1}{2}f_{0} +[-1,1]\sum\limits_{i=1}^d {\left| {f_{i}} \right|}. \end{align} (4.6) Considering the $$n$$-dimensional approximation in Equation (4.3), the multi-dimensional Chebyshev inclusion function of $$f$$ can be obtained: \begin{align}\label{eq4.7} \left[f \right]\left({\left[{{{\bf x}}} \right]} \right)=\left( {\frac{1}{2}} \right)^{n}f_{i_{1} =0,\ldots,i_{r} =0,\ldots,i_{n} =0} +\left[{-1,1} \right]\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left| {\left({\frac{1}{2}} \right)^{\beta}f_{i_{1}, \ldots,i_{r}, \ldots,i_{n} \left({\exists i_{r} \ne 0} \right)}} \right|}. \end{align} (4.7) Note that the result of an interval arithmetic operation defined in Equations (3.12) and (3.13) is a real interval, while the results of arithmetic operations between real values are still real values. Equation (4.7) consists of a real number item and other interval items.
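A one-dimensional sketch of the inclusion function of Equation (4.6): since $\cos(i[\alpha])=[-1,1]$ for $i\ne 0$, the enclosure is $f_0/2 \pm \sum_i |f_i|$. The coefficients are computed inline by the Mehler integral of Equation (4.4); the test function is illustrative.

```python
import math

def chebyshev_inclusion(f, x_lo, x_hi, d=10, d_tilde=15):
    thetas = [(2 * q - 1) / d_tilde * math.pi / 2 for q in range(1, d_tilde + 1)]
    xs = [(x_hi + x_lo) / 2 + (x_hi - x_lo) / 2 * math.cos(t) for t in thetas]
    coeffs = [2 / d_tilde * sum(f(x) * math.cos(i * t) for x, t in zip(xs, thetas))
              for i in range(d + 1)]
    # cos(i*[alpha]) = [-1, 1] for i != 0, so the enclosure is f_0/2 +/- sum|f_i|.
    radius = sum(abs(c) for c in coeffs[1:])
    return coeffs[0] / 2 - radius, coeffs[0] / 2 + radius

f = lambda x: x * math.sin(3 * x)
lo, hi = chebyshev_inclusion(f, 0.0, 2.0)
# The enclosure contains the sampled range of f on [0, 2], with some
# overestimation since the [-1, 1] bound is attained per term, not jointly.
samples = [f(2 * k / 500) for k in range(501)]
print(lo <= min(samples) and max(samples) <= hi)   # True
```

The residual overestimation of this enclosure is exactly the per-term slack discussed above, not the wrapping of chained interval operations.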
The real number item is the product of the zero-degree Chebyshev polynomials $$\frac{1}{2}f_{0_{r}}$$ in each dimension ($$r=1,2,\ldots, n$$), while the other items, containing non-zero degree Chebyshev polynomials ($$\exists i_{r} \ne 0$$), are interval results. It can be concluded that the Chebyshev interval method is a non-intrusive method requiring no derivative information. If the expression of the function $$f$$ is complex or even implicit, the Chebyshev interval method can still be implemented. The error of the interval estimation is related to the accuracy of the approximated interval inclusion function. The truncation error for a one-dimensional Chebyshev series is given by Fox & Parker (1968): \begin{align}\label{eq4.8} e_{d} \left(x \right)=\left| {f\left(x \right)-p_{d} \left(x \right)} \right|\leqslant \frac{2^{-d}}{\left({d+1} \right)!}\left\| {f^{\left({d+1} \right)}} \right\|_{\infty}. \end{align} (4.8) It can be found that $$e_{d} \left(x \right)$$ can be neglected when $$d$$ is large enough (Wu et al., 2013b). For multi-dimensional problems, it can also be proved that the truncation error can be neglected if $$d$$ is large enough. 4.2. Implementation of Chebyshev interval method based symplectic algorithm Considering the uncertain interval model of non-linear optimal control established in Section 3, the direct solving of the interval function defined in Equation (3.16) can be replaced by the Chebyshev interval method of Section 4.1. As mentioned, the process of solving the deterministic optimal control problem in Section 2 can be regarded as a function $${{\bf g}}$$ when the uncertain parameter $${\boldsymbol{\xi}}$$ is fixed. The function $${{\bf g}}$$ can be approximated by a $$d$$-degree Chebyshev polynomial approximation. In this situation, the symplectic algorithm for solving the optimal control problem is not changed; the only additional requirement is to evaluate it at the combination vectors of the integration points of the Mehler integral.
Finally, the interval bounds of $${{\bf z}}^{\ast}$$ can be estimated via the Chebyshev inclusion function. The details can be summarized as follows: 1. Confirming the uncertain parameter vector and its range $$\left[{{\boldsymbol{\xi}}} \right]=\left[{{{\underline{\boldsymbol{\xi}}}},{\overline{{\boldsymbol{\xi}}}}} \right]\in {\mathbb{IR}}^{n}$$. 2. Calculating the integral points $$\xi_{q_{s}}$$ of the Mehler integral by $$\label{eq21} \xi_{q_{s}} =\frac{\overline{{\xi}}_{q_{s}} +{\underline{\xi}} _{q_{s}}}{2}+\frac{\overline{{\xi}}_{q_{s}} - {\underline{\xi}} _{q_{s}}}{2}\cos \left({\theta_{q_{s}}} \right)$$ (4.9) in which $$\theta_{q_{s}} =\frac{2q_{s} -1}{\tilde{{d}}}\frac{\pi}{2}$$, $$q_{s} =1,2,\ldots, \tilde{{d}}$$ and $$s=1,2,\ldots, n$$. 3. Constructing the optimal control problems with the fixed parameters $${\boldsymbol{\xi}}_{q-c} =\left[{\xi_{q_{1}}, \ldots,\xi_{q_{s}}, \ldots,\xi_{q_{n}}} \right]^{{\rm T}}$$, which are the combinations of $$\xi_{q_{s}} \left({q_{s} =1,2,\ldots, \tilde{{d}}} \right)$$. 4. For each deterministic optimal control problem $$\boldsymbol{g}\left({{\boldsymbol{\xi}}_{q-c}} \right)$$, employing the symplectic algorithm to transform the TPBVP into the non-linear equations (2.13)–(2.16), which are then solved by the Newton-Raphson iteration method. 5. Calculating the coefficients of the Chebyshev polynomial for each $$z_{\chi}^{\ast} \left({\tilde{{t}}} \right)$$ using \begin{align}\label{eq4.10} \boldsymbol{g}_{i_{1}, \ldots,i_{s}, \ldots,i_{n}} \approx \left({\frac{2}{\tilde{{d}}}} \right)^{n}\sum\limits_{q_{1} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{s} =1}^{\tilde{{d}}} {\ldots} \sum\limits_{q_{n} =1}^{\tilde{{d}}} {\boldsymbol{g}\left( {{\boldsymbol{\xi}}_{q-c}} \right)\cos \left({i_{1} \theta_{q_{1} }} \right)\ldots\cos \left({i_{s} \theta_{q_{s}}} \right)\ldots\cos \left({i_{n} \theta_{q_{n}}} \right)}. \end{align} (4.10) 6.
Performing interval analysis based on the Chebyshev polynomial approximation for each $$z_{\chi}^{\ast} \left({\tilde{{t}}} \right)$$ using \begin{gather} \label{eq22} \boldsymbol{g}\left({{\boldsymbol{\xi}}} \right)\approx \sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left({\frac{1}{2}} \right)^{\beta }\boldsymbol{g}_{i_{1}, \ldots,i_{r}, \ldots,i_{n}} \cos \left({i_{1} \alpha_{1} } \right)\ldots\cos \left({i_{r} \alpha_{r}} \right)\ldots\cos \left({i_{n} \alpha_{n}} \right)}\\ \end{gather} (4.11) \begin{gather} \left[{{{\bf z}}^{\ast}} \right]=\left[\boldsymbol{g} \right]\left({\left[ {{\boldsymbol{\xi}}} \right]} \right)=\left({\frac{1}{2}} \right)^{n}\boldsymbol{g}_{i_{1} =0,\ldots,i_{r} =0,\ldots,i_{n} =0} +\left[ {-1,1} \right]\sum\limits_{i_{1} =0}^d {\ldots} \sum\limits_{i_{r} =0}^d {\ldots} \sum\limits_{i_{n} =0}^d {\left| {\left( {\frac{1}{2}} \right)^{\beta}\boldsymbol{g}_{i_{1}, \ldots,i_{r}, \ldots,i_{n} \left({\exists i_{r} \ne 0} \right)}} \right|}. \label{eq4.12} \end{gather} (4.12) From the above interval uncertain method, it should be noted that the optimal control problem with uncertainties is transformed into a series of deterministic optimal control problems, which can be solved by the symplectic algorithm. The implementation process of the symplectic algorithm is not changed, so the Chebyshev interval method can also be combined with other optimal control algorithms to solve the uncertain optimal control problem. Unlike probabilistic methods, no a priori assumptions are introduced in the present algorithm procedures. The proposed interval-analysis-based method can provide a more reasonable estimation for the uncertain optimal control problem when the known information is scarce or the probability distribution of the uncertain parameter is unknown.
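Steps 1-6 can be sketched end-to-end for a single uncertain parameter. The deterministic symplectic solve of step 4 is replaced here by an assumed analytic stand-in map `g` for one solution component $z_{\chi}^{\ast}(\tilde{t})$, since the focus is the interval pipeline rather than the optimal control solver itself.

```python
import math

def bounds_via_chebyshev(g, xi_lo, xi_hi, d=5, d_tilde=6):
    # Steps 1-2: Mehler integration points inside [xi_lo, xi_hi] (Equation (4.9)).
    thetas = [(2 * q - 1) / d_tilde * math.pi / 2 for q in range(1, d_tilde + 1)]
    xis = [(xi_hi + xi_lo) / 2 + (xi_hi - xi_lo) / 2 * math.cos(t) for t in thetas]
    # Steps 3-4: one deterministic solve per sample point.
    gs = [g(xi) for xi in xis]
    # Step 5: Chebyshev coefficients by the Mehler integral (Equation (4.10)).
    coeffs = [2 / d_tilde * sum(gv * math.cos(i * t) for gv, t in zip(gs, thetas))
              for i in range(d + 1)]
    # Step 6: bounds from the inclusion function (Equation (4.12)).
    radius = sum(abs(c) for c in coeffs[1:])
    return coeffs[0] / 2 - radius, coeffs[0] / 2 + radius

# Stand-in for one solution component as a smooth function of xi (assumption).
g = lambda xi: math.exp(-xi) * math.cos(2 * xi)
lo, hi = bounds_via_chebyshev(g, 0.49, 0.51)
print(lo, hi)
```

With $n$ uncertain parameters, step 3 enumerates all $\tilde{d}^{n}$ combinations of the one-dimensional points, and `g` is evaluated once per combination.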
When only the upper and lower bounds of the uncertain parameter are known, an erroneous or insufficient assumption about its probabilistic properties may lead to a failed estimation for the optimal control problem. In the application of the Chebyshev interval method to the symplectic algorithm, the Chebyshev approximation is constructed by the tensor product of the Chebyshev polynomial approximations for each dimension of the uncertain parameter vector. The coefficients of the terms after the tensor product are calculated by the Mehler integral. It should also be noted that, as mentioned in Section 4.1, the degree of the Mehler integral $$\tilde{{d}}$$ should be not less than the degree $$d$$ of the Chebyshev polynomial approximation plus one. So at least $$\left({d+1} \right)^{n}$$ samples (i.e., $$\tilde{{d}}^{n}$$ deterministic solves) are required in the algorithm process. For each sample, the computational cost is that of the symplectic algorithm mentioned in Section 2.2. 5. Numerical simulations The following carefully chosen applications demonstrate the performance of the proposed interval analysis method for the uncertain optimal control problem. Firstly, a simple example is given to test the uncertain influence from both the initial conditions and the dynamics system. Then, the proposed interval analysis algorithm is applied to investigate the uncertain orbital rendezvous of spacecraft. In order to investigate the accuracy of the interval estimation of the state variables and control variables, the real bounds of the output in the following applications are calculated by the scanning method.
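The scanning (reference) method can be sketched as follows: sample each uncertain interval at m symmetrically distributed points, evaluate the solution map at every grid combination, and take the componentwise minimum and maximum. The map `g` is an assumed stand-in; the real method solves one optimal control problem per grid point, which is why its cost grows as $m^{n}$.

```python
import itertools

def scanning_bounds(g, intervals, m=10):
    # m equally spaced points per uncertain interval, endpoints included.
    grids = [[lo + (hi - lo) * k / (m - 1) for k in range(m)]
             for lo, hi in intervals]
    # Evaluate the solution map on the full m^n tensor grid.
    values = [g(*combo) for combo in itertools.product(*grids)]
    return min(values), max(values)

g = lambda a, b: a * a + 0.5 * b      # illustrative two-parameter map
lo, hi = scanning_bounds(g, [(0.49, 0.51), (0.49, 0.51)], m=10)
print(lo, hi)
```

These scanned bounds are inner estimates of the true range, which is why the overestimation measures below subtract them from the Chebyshev enclosure.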
To clarify the algorithm accuracy, the bound overestimations of the output variable $${\bf z}$$ are calculated by \begin{gather} \label{eq23} {\bf z}^{{\rm lower\_overestimation}}={\bf z}^{{\rm lower\_bound\_scanning}}-{\bf z}^{{\rm lower\_bound\_Chebyshev}}\\ \end{gather} (5.1) \begin{gather} \label{eq24} {\bf z}^{{\rm upper\_overestimation}}={\bf z}^{{\rm upper\_bound\_Chebyshev}}-{\bf z}^{{\rm upper\_bound\_scanning}} \end{gather} (5.2) in which $${\bf z}^{{\rm lower\_overestimation}}$$ and $${\bf z}^{{\rm upper\_overestimation}}$$ indicate the redundant interval estimation. 5.1. A simple example for numerical tests In this example, two types of uncertainties for a non-linear optimal control problem are considered to demonstrate the wide applicability of the presented method. The problem itself is a non-linear optimal control problem with a free terminal state (Kameswaran & Biegler, 2007). The cost function is given by $$\label{eq25} J=x_{2} \left({t_{f} =1} \right)$$ (5.3) and the differential equations and the initial conditions are given by $$\label{eq26} \left\{{\begin{array}{@{}ll} \dot{{x}}_{1} =\omega_{1} x_{1} +u,\;\quad& x_{1} \left(0 \right)=x_{1}^{\rm initial} \\ \dot{{x}}_{2} =x_{1}^{2} +\omega_{2} u^{2},\quad& x_{2} \left(0 \right)=x_{2}^{\rm initial} \end{array}} \right.$$ (5.4) The system uncertainty and the initial state uncertainty are considered in case A and case B, respectively. In case A, the system parameters $$\omega_{1}$$ and $$\omega_{2}$$ are uncertain and considered as interval parameters, while in case B, the initial state parameters $$x_{1}^{\rm initial}$$ and $$x_{2}^{\rm initial}$$ are the interval parameters. The interval uncertain parameters and the deterministic parameters are shown in Tables 1 and 2, respectively.
Table 1 Parameters for the test example in case A

            $$\omega_{1}$$   $$\omega_{2}$$   $$x_{1}^{\rm initial}$$   $$x_{2}^{\rm initial}$$
Mean value  0.5              0.5              1                         0
Range       [0.49, 0.51]     [0.49, 0.51]     —                         —

Table 2 Parameters for the test example in case B

            $$\omega_{1}$$   $$\omega_{2}$$   $$x_{1}^{\rm initial}$$   $$x_{2}^{\rm initial}$$
Mean value  0.5              0.5              1                         0
Range       —                —                [0.99, 1.01]              [$$-$$0.01, 0.01]

The 10-degree Chebyshev interval method ($$d=10$$ and $$\tilde{{d}}=15$$) combined with the symplectic algorithm is used to estimate the interval results of the optimal control problem. In order to obtain high accuracy with the scanning method, 200 sampling points are distributed symmetrically in each interval of the uncertain parameters. The interval estimations for case A are shown in Figs 2–4, which show that the interval results obtained by the Chebyshev interval method entirely enclose the interval results from the scanning method. The maximum interval for the state variables $$x_{1}$$ and $$x_{2}$$ arises at the time $$t=1$$.
Also, it can be found that most of the overestimation is extremely small. The maximum overestimation is not more than $$10^{-4}$$, and it is located in the lower-bound estimation of $$x_{2}$$ in Fig. 5. Fig. 2. Interval estimation of $$x_{1}$$ for case A. Fig. 3. Interval estimation of $$x_{2}$$ for case A. Fig. 4. Interval estimation of $$u$$ for case A. Fig. 5. Overestimation of state variables and control variables for case A. In case B, the interval estimations are shown in Figs 6–8. It can be found that the interval results obtained by the Chebyshev interval method again entirely enclose the interval results obtained by the scanning method, while the uncertain interval results in case B are narrower than those in case A. This illustrates that the uncertain influence from the initial state variables is smaller than that from the system parameters. In case B, the maximum overestimation is not more than $$3\times 10^{-4}$$, and it is located in the lower-bound estimation of $$x_{2}$$ in Fig. 9. Fig. 6. Interval estimation of $$x_1$$ for case B. Fig. 7. Interval estimation of $$x_2$$ for case B. Fig. 8. Interval estimation of $$u$$ for case B. Fig. 9.
Overestimation of state variables and control variables for case B. 5.2. Optimal orbital rendezvous problem with uncertain initial conditions The accuracy of the spacecraft relative state information has much influence on spacecraft rendezvous (Gao et al., 2009). Meanwhile, navigation errors, complex perturbation factors and other sources of uncertainty are inevitable in rendezvous trajectory design. In this example, the initial state variables of an optimal spacecraft rendezvous problem are considered as uncertain parameters to verify the performance of the proposed method. The trajectory design problem of spacecraft rendezvous with continuous thrust can be taken as a non-linear optimal control problem, i.e., the transferring spacecraft is controlled from one state to another given state. In order to establish the equations of motion, we introduce a coordinate frame which rotates along a circular orbit at a constant angular velocity. From Newton's laws, we obtain the following equations of motion in the rotating frame (Park et al., 2006). \begin{align} \label{eq27} \left\{{\begin{array}{@{}l} \ddot{{x}}-2\omega \dot{{y}}-\omega^{2}\left({R+x} \right)=-\dfrac{\mu }{r^{3}}\left({R+x} \right)+u_{x} \\[2ex] \ddot{{y}}+2\omega \dot{{x}}-\omega^{2}y=-\dfrac{\mu}{r^{3}}y+u_{y} \\[2ex] \ddot{{z}}=-\dfrac{\mu}{r^{3}}z+u_{z} \\ \end{array}} \right. \end{align} (5.5) where $$r=\sqrt {\left({x+R} \right)^{2}+y^{2}+z^{2}}$$; $$x$$, $$y$$ and $$z$$ are the position variables of the spacecraft, representing the radial, tangential and normal displacements from the origin of the rotating frame, respectively; $$R$$ is the position of the origin of the rotating frame relative to the origin of the inertial frame; and $$\omega$$ is the mean angular velocity of the rotating frame.
$$\mu$$ is the gravitational parameter of the central body. $$u_{x}$$, $$u_{y}$$ and $$u_{z}$$ are the control accelerations in the $$x$$, $$y$$ and $$z$$ directions of the spacecraft, respectively. For convenience of analysis, the non-dimensionalized equations of motion with reference length $$R$$ and reference time $$1/\omega$$ can be given as follows \begin{align} \label{eq28} \left\{{\begin{array}{@{}l} \ddot{{x}}-2\dot{{y}}+\left({\dfrac{1}{r^{3}}-1} \right)\left({1+x} \right)=u_{x} \\ \ddot{{y}}+2\dot{{x}}+\left({\dfrac{1}{r^{3}}-1} \right)y=u_{y} \\ \ddot{{z}}+\dfrac{1}{r^{3}}z=u_{z} \\ \end{array}} \right. \end{align} (5.6) where $$r=\sqrt {\left({x+1} \right)^{2}+y^{2}+z^{2}}$$. For the above non-dimensionalized equations of motion, the goal of the non-linear optimal control is to minimize the quadratic cost, i.e., \begin{align} J=\frac{1}{2}\int\limits_0^1 {\left({u_{x}^{2} +u_{y}^{2} +u_{z}^{2}} \right){\rm d}t}. \label{eq4.19} \end{align} (5.7) In state space, we define the state vector composed of position variables and velocity variables as $${{\bf x}}=\left\{{x,y,z,\dot{{x}},\dot{{y}},\dot{{z}}} \right\}^{{\rm T}}$$; the problem is then to minimize the consumption of energy in the fixed flight time $$t_{f} =1$$ while transferring the spacecraft from the initial state $$\label{eq29} {{\bf x}}_{t=0} =0.1\left\{{x_{0}, y_{0}, z_{0} ,0,0,0} \right\}^{{\rm T}}$$ (5.8) to the desired state \begin{align} {{\bf x}}_{t_{f} =1} =\left\{{0,0,0,0,0,0} \right\}^{{\rm T}}. \label{eq4.21} \end{align} (5.9) The initial state describes the relative position at the beginning of the transfer. The main uncertain influence comes from the position, so only the position variables are considered as the uncertain parameters.
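The non-dimensionalized dynamics of Equation (5.6) can be written as a first-order right-hand side for numerical propagation. The state ordering s = [x, y, z, xdot, ydot, zdot] and the function name are our assumptions, not from the paper.

```python
def rendezvous_rhs(s, u):
    # State s = [x, y, z, xdot, ydot, zdot], control u = [ux, uy, uz].
    x, y, z, xd, yd, zd = s
    r3 = ((x + 1.0) ** 2 + y ** 2 + z ** 2) ** 1.5   # r^3 with r of Eq. (5.6)
    return [xd, yd, zd,
            2.0 * yd - (1.0 / r3 - 1.0) * (1.0 + x) + u[0],
            -2.0 * xd - (1.0 / r3 - 1.0) * y + u[1],
            -z / r3 + u[2]]

# Sanity check: the origin with zero control is an equilibrium, i.e., the
# chaser coincides with the reference circular orbit.
print(rendezvous_rhs([0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```

At the origin, $r=1$ and the $(1/r^{3}-1)$ terms vanish, so the right-hand side is identically zero, matching the target state of Equation (5.9).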
The intervals of these variables are expressed as \begin{align} \left\{{\begin{array}{@{}l} \left[x_{0}\right] =[0.99\sin \theta \cos \phi, \;1.01\sin \theta \cos \phi] \\ \left[y_{0}\right] =\left[{0.99\cos \theta \sin \phi, \;1.01\cos \theta \sin \phi} \right] \\ \left[z_{0}\right] =\left[{0.99\cos \theta, \;1.01\cos \theta} \right] \\ \end{array}} \right. \label{eq30} \end{align} (5.10) where $$\phi =\pi/6$$ and $$\theta =\pi/{72}$$. The 5-degree Chebyshev interval method ($$d=5$$ and $$\tilde{{d}}=6$$) combined with the symplectic algorithm is applied to solve the optimal control problem. For the scanning method, 10 sampling points are distributed symmetrically in the interval of each uncertain parameter. It should be noted that the number of optimal control problems to be solved equals the number of TPBVPs solved in Section 2.2. The interval estimations of the state variables for optimal orbital rendezvous are shown in Fig. 10, and the interval results obtained by the Chebyshev interval method again entirely enclose those obtained by the scanning method. The maximum interval length is 2.618e-3, located in the estimation of variable $$\dot{{z}}$$ at time $$t=0.5077$$. This indicates that the possible values are entirely enclosed in the interval obtained by the Chebyshev interval method. Moreover, the Chebyshev interval method leads to extremely small overestimation: all of the overestimation is less than 2.5e-8 (as shown in Figs 11 and 12). Furthermore, the number of deterministic optimal control problems solved by the symplectic algorithm is listed in Table 3; the computational cost of the Chebyshev interval method is much lower than that of the scanning method. As shown in Fig. 13, the maximum length of the estimated interval of the control variables is 1.9660e-2, located in the estimation of $$u_{x}$$ at time $$t=0.43077$$. The accuracy of the interval results (as shown in Fig. 14) in the optimal orbital rendezvous problem is thus also ensured by the Chebyshev interval method. Fig. 10. Interval estimation of state variables for optimal orbital rendezvous. Fig. 11. Overestimation of position state variables for optimal orbital rendezvous. Fig. 12. Overestimation of velocity state variables for optimal orbital rendezvous. Fig. 13. Interval estimation of control variables for optimal orbital rendezvous. Fig. 14. Overestimation of control variables for optimal orbital rendezvous. Table 3 Calculation times of deterministic optimal control for the uncertain parameters. Chebyshev interval method: 216 optimal control problems solved; scanning method: 1000 optimal control problems solved. (For the three uncertain parameters, the Chebyshev interval method requires $$\tilde{d}^{3}=6^{3}=216$$ solves, whereas the scanning method requires $$10^{3}=1000$$.) Though the real control value is determined in flight by feedback and many other factors, this estimation provides a forecast for the initial spacecraft rendezvous design.
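The core bounding step of the Chebyshev interval method can be illustrated for a scalar function of a single interval parameter. The sketch below is a simplified one-dimensional illustration, not the authors' multidimensional implementation: it samples the function at Chebyshev-Gauss nodes, forms the interpolation coefficients, and uses $$|T_j(x)|\le 1$$ to enclose the range.

```python
import numpy as np

def chebyshev_bounds(f, a, b, n=6):
    """Bound f over [a, b] with a degree-(n-1) Chebyshev inclusion function.

    The enclosure [c0 - sum|c_j|, c0 + sum|c_j|] follows from |T_j| <= 1;
    it is guaranteed only up to the truncation error of the interpolant.
    """
    k = np.arange(n)
    t = np.cos((2 * k + 1) * np.pi / (2 * n))   # Chebyshev-Gauss nodes on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)       # nodes mapped to [a, b]
    y = np.array([f(xi) for xi in x])
    # discrete Chebyshev coefficients c_j = (2/n) sum_k y_k T_j(t_k), halved for j = 0
    c = np.array([2.0 / n * np.sum(y * np.cos(j * np.arccos(t)))
                  for j in range(n)])
    c[0] *= 0.5
    radius = np.sum(np.abs(c[1:]))
    return c[0] - radius, c[0] + radius
```

For example, bounding $$f(x)=x^{2}$$ on $$[-1,1]$$ yields the exact range $$[0,1]$$, since the interpolant reproduces the polynomial exactly; for general non-linear responses the bounds carry a small overestimation, as observed in Figs 11, 12 and 14.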
The enclosures of the state variables $${{\bf x}}$$ and $$\dot{{\bf x}}$$ provide a foundation for collision avoidance in the predesign of a spacecraft mission. The enclosures of the control $${{\bf u}}$$ can guide thruster selection or trajectory design under the constraint of a specific thruster. 6. Conclusions This article proposes an uncertain influence domain estimation method for uncertain optimal control problems using the symplectic algorithm and the Chebyshev interval method. To obtain an accurate description of uncertainties without known statistical information, interval uncertain modelling is first adopted. Then the combination of the symplectic algorithm and the Chebyshev interval method is applied to estimate the influence domain of the uncertain optimal control problem. The Chebyshev interval method employs the Chebyshev inclusion function to approximate the original uncertain optimal control problem. This approximation transforms the uncertain optimal control problem into a series of deterministic ones, so the symplectic algorithm can be employed. Each deterministic optimal control problem is further transformed into a set of non-linear algebraic equations, subsequently solved by Newton-Raphson iteration. In the numerical simulations, a simple test problem and an uncertain optimal orbital rendezvous problem are employed to show the effectiveness of the presented method. The results demonstrate that the proposed method offers high computational precision and efficiency compared with the scanning method. In particular, the proposed method achieves desirable numerical accuracy for the uncertain estimation of the optimal trajectory in orbital rendezvous. Our future research will explore the practical application of interval-based methods to spacecraft swarm reconfiguration and other related fields.
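The final algebraic stage mentioned above, solving each deterministic problem's non-linear equations by Newton-Raphson iteration, can be sketched generically. The residual F and Jacobian J produced by the symplectic discretization are problem-specific and are represented here by placeholder arguments; this is a minimal sketch, not the authors' solver.

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for F(x) = 0.

    F maps R^n -> R^n and J(x) is its n x n Jacobian; both would be
    assembled from the symplectic discretization, not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        # Newton step: solve J(x) dx = F(x), then update x <- x - dx
        x = x - np.linalg.solve(J(x), fx)
    return x
```

As a toy usage, solving $$x^{2}-2=0$$ from the initial guess $$x=1$$ converges to $$\sqrt{2}$$ in a few iterations, illustrating the quadratic convergence that makes the iteration attractive inside the symplectic algorithm.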
A more detailed theoretical study of the symplectic algorithm and the Chebyshev interval method from other perspectives is beyond the scope of this study. Funding National Natural Science Foundation of China (11472069, 11372064, 91648204); National 111 Project of China (B14013); and National Key Research and Development Program (2016YFB0201602). References Alefeld, G. & Mayer, G. (2000) Interval analysis: theory and applications. J. Comput. Appl. Math., 121, 421-464. Armellin, R. & Topputo, F. (2006) A sixth-order accurate scheme for solving two-point boundary value problems in astrodynamics. Celestial Mech. Dynam. Astronom., 96, 289-309. Askey, R. & Wilson, J. (1985) Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Mem. Amer. Math. Soc., 54, 1-55. Büskens, C. & Maurer, H. (2001) Sensitivity analysis and real-time control of parametric optimal control problems using nonlinear programming methods. Online Optimization of Large Scale Systems (M. Grötschel, S. O. Krumke & J. Rambau eds). Berlin Heidelberg: Springer, pp. 57-68. Ben-Haim, Y. & Elishakoff, I. (1990) Convex Models of Uncertainty in Applied Mechanics. Amsterdam: Elsevier. Cao, Y., Hussaini, M. Y. & Zang, T. A. (2003) An efficient Monte Carlo method for optimal control problems with uncertainty. Comput. Optim. Appl., 26, 219-230. Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977) Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Methodol., 39, 1-38. Diehl, M., Bock, H. G. & Kostina, E. (2005) An approximation technique for robust nonlinear optimization. Math. Program., 107, 213-230. Dorf, R. C. & Bishop, R. H. (2011) Modern Control Systems. New Jersey: Pearson Education, pp. 234-236. Dorociak, R. & Gausemeier, J. (2014) Methods of improving the dependability of self-optimizing systems. Dependability of Self-Optimizing Mechatronic Systems (J. Gausemeier, F. J. Rammig, W. Schäfer & W. Sextro eds). Berlin Heidelberg: Springer. Fisher, J. & Bhattacharya, R. (2011) Optimal trajectory generation with probabilistic system uncertainty using polynomial chaos. J. Dyn. Syst. Measurement Control, 133, 014501. Fox, L. & Parker, I. B. (1968) Chebyshev Polynomials in Numerical Analysis. London: Oxford University Press. Gao, H., Yang, X. & Shi, P. (2009) Multi-objective robust H$$_{\infty}$$ control of spacecraft rendezvous. IEEE Trans. Control Syst. Technol., 17, 794-802. Golnaraghi, F. & Kuo, B. C. (2010) Automatic Control Systems. Quebec: John Wiley & Sons, p. 7. Hansen, E. (1969) Topics in Interval Analysis. Oxford: Clarendon Press. Hover, F. S. (2008) Gradient dynamic optimization with Legendre chaos. Automatica, 44, 135-140. Huschto, T. & Sager, S. (2013) Stochastic Optimal Control in the Perspective of the Wiener Chaos. Zürich: IEEE. Ibrir, S. (2008) Design of static and dynamic output feedback controllers through Euler approximate models: uncertain systems with norm-bounded uncertainties. IMA J. Math. Control Inf., 25, 281-296. Ishizaki, T., Koike, M., Ramdani, N., Ueda, Y., Masuta, T., Oozeki, T., Sadamoto, T. & Imura, J.-I. (2016) Interval quadratic programming for day-ahead dispatch of uncertain predicted demand. Automatica, 64, 163-173. Jaulin, L., Kieffer, M., Didrit, O. & Walter, É. (2001) Applied Interval Analysis: With Examples in Parameter and State Estimation, Robust Control and Robotics. New York: Springer. Jiang, C., Han, X., Guan, F. J. & Li, Y. H. (2007a) An uncertain structural optimization method based on nonlinear interval number programming and interval analysis method. Eng. Struct., 29, 3168-3177. Jiang, C., Han, X. & Liu, G. R. (2007b) Optimization of structures with uncertain constraints based on convex model and satisfaction degree of interval. Comput. Meth. Appl. Mech. Eng., 196, 4791-4800. Jiang, C., Han, X., Lu, G. Y., Liu, J., Zhang, Z. & Bai, Y. C. (2011) Correlation analysis of non-probabilistic convex model and corresponding structural reliability technique. Comput. Meth. Appl. Mech. Eng., 200, 2528-2546. Kameswaran, S. & Biegler, L. T. (2007) Convergence rates for direct transcription of optimal control problems using collocation at Radau points. Comput. Optim. Appl., 41, 81-126. Kleywegt, A. J., Shapiro, A. & Homem-de-Mello, T. (2002) The sample average approximation method for stochastic discrete optimization. SIAM J. Optim., 12, 479-502. Lizia, P. D., Armellin, R. & Lavagna, M. (2008) Application of high order expansions of two-point boundary value problems to astrodynamics. Celestial Mech. Dynam. Astronom., 102, 355-375. Loxton, R., Teo, K. L. & Rehbock, V. (2011) Robust suboptimal control of nonlinear systems. Appl. Math. Comput., 217, 6566-6576. MaCdonald, M. & Badescu, V. (2014) The International Handbook of Space Technology. Berlin: Springer. Majji, M., Turner, J. D. & Junkins, J. L. (2009) Solution of two-point boundary-value problems using Lagrange implicit function theorem. J. Guidance Control Dyn., 32, 1684-1687. Moore, R. E. (1966) Interval Analysis. Englewood Cliffs, NJ: Prentice-Hall. Nedialkov, N. (1999) Validated solutions of initial value problems for ordinary differential equations. Appl. Math. Comput., 105, 21-68. Park, C., Guibout, V. & Scheeres, D. J. (2006) Solving optimal continuous thrust rendezvous problems with generating functions. J. Guidance Control Dyn., 29, 321-331. Peng, H., Chen, B. & Wu, Z. (2014a) Multi-objective transfer to libration-point orbits via the mixed low-thrust and invariant-manifold approach. Nonlinear Dynam., 77, 321-338. Peng, H., Jiang, X. & Chen, B. (2014b) Optimal nonlinear feedback control of spacecraft rendezvous with finite low thrust between libration orbits. Nonlinear Dynam., 76, 1611-1632. Peng, H. J., Gao, Q., Wu, Z. G. & Zhong, W. X. (2011) Symplectic adaptive algorithm for solving nonlinear two-point boundary value problems in astrodynamics. Celestial Mech. Dynam. Astronom., 110, 319-342. Peng, S. (1990) A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim., 28, 966-979. Pettit, C. L. (2004) Uncertainty quantification in aeroelasticity: recent results and research challenges. J. Aircraft, 41, 1217-1229. Phelps, C., Royset, J. O. & Gong, Q. (2016) Optimal control of uncertain systems using sample average approximations. SIAM J. Control Optim., 54, 1-29. Pritchard, A. J. & Townley, S. (1991) Robustness optimization for uncertain infinite-dimensional systems with unbounded inputs. IMA J. Math. Control Inf., 8, 121-133. Römgens, B., Mooij, E. & Naeije, M. (2013) Satellite collision avoidance prediction using verified interval orbit propagation. J. Guidance Control Dyn., 36, 821-832. Sakthivel, R. & Ito, H. (2007) Non-linear robust boundary control of the Kuramoto-Sivashinsky equation. IMA J. Math. Control Inf., 24, 47-55. Shafer, G. (1976) A Mathematical Theory of Evidence. Princeton: Princeton University Press. Stengel, R. F. (1986) Stochastic Optimal Control: Theory and Application. Chichester: John Wiley & Sons, p. 421. Wang, X. & Qiu, Z. (2009) Nonprobabilistic interval reliability analysis of wing flutter. AIAA J., 47, 743-748. Wilkins, M. & Alfriend, K. (2000) Characterizing orbit uncertainty due to atmospheric uncertainty. Proceedings of the Astrodynamics Specialist Conference, pp. 19-29. Wu, J., Luo, Z., Zhang, N. & Zhang, Y. (2015) A new interval uncertain optimization method for structures using Chebyshev surrogate models. Comput. Structures, 146, 185-196. Wu, J., Luo, Z., Zhang, Y., Zhang, N. & Chen, L. (2013a) Interval uncertain method for multibody mechanical systems using Chebyshev inclusion functions. Int. J. Numer. Methods Eng., 95, 608-630. Wu, J., Zhang, Y., Chen, L. & Luo, Z. (2013b) A Chebyshev interval method for nonlinear dynamic systems under uncertainty. Appl. Math. Model., 37, 4578-4591. Xiu, D. & Karniadakis, G. E. (2002) The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24, 619-644. Yong, J. & Zhou, X. Y. (1999) Stochastic Controls: Hamiltonian Systems and HJB Equations. New York: Springer. Zadeh, L. A. (1965) Fuzzy sets. Inf. Control, 8, 338-353. Zazzera, F. B., Vasile, M., Massari, M. & Lizia, P. D. (2006) Assessing the Accuracy of Interval Arithmetic Estimates in Space Flight Mechanics. Technical Report 18851/05. Milan, Italy: European Space Agency.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

### Journal

IMA Journal of Mathematical Control and Information, Oxford University Press

Published: Jul 29, 2017
