A numerical method for the approximation of reachable sets of linear control systems

Abstract

In this article, a numerical method for the approximation of reachable sets of linear control systems is discussed. First, a continuous system is transformed into a discrete one with Runge–Kutta methods. Then, based on Benson's outer approximation algorithm for solving multiobjective optimization problems, we propose a variant of Benson's algorithm to sandwich the reachable set of the discrete system between an inner approximation and an outer approximation. By specifying an approximation error, the quality of the approximations, measured in Hausdorff distance, can be directly controlled. Furthermore, we use an illustrative example to demonstrate how the algorithm works. Finally, computational experiments illustrate the superior performance of our proposed algorithm compared to a recent algorithm in the literature.

1. Introduction

In this article, we consider the problem of finding the reachable set of the following linear control system:
\begin{align} ({\rm P}_c)\quad &\dot{x}(t) = A(t)x(t)+B(t)u(t), \tag{P$_c$(a)}\\ &x(t_0) = x_0, \tag{P$_c$(b)}\\ &u(t) \in U, \tag{P$_c$(c)} \end{align}
where $$x(t)\in \mathbb{R}^{p}$$ is the state, $$u(t)\in \mathbb{R}^{m'}$$ is the control input and is a (Lebesgue) measurable function of time $$t$$, $$A(t)\in \mathbb{R}^{p\times p}$$ and $$B(t)\in \mathbb{R}^{p\times m'}$$ are known continuous and bounded matrices, $$x_0 \in \mathbb{R}^{p}$$ is the initial state, and $$U \neq \emptyset$$ is assumed to be a convex and compact subset of $$\mathbb{R}^{m'}$$.

Definition 1.1 Let $$t_0 < T$$ be given. The reachable set of system $$({\rm P}_c)$$ for the time $$t = T$$ is defined as \begin{align*} {\mathcal R}(T,t_0,x_0): = \{ y\in \mathbb{R}^{p} : \exists x(\cdot) \text{ and } u(\cdot)\text{ of } ({\rm P}_c) \text{ with } x(T) = y\}. \end{align*}

There has been increasing interest in reachable sets of linear control systems in recent years. Reachable sets can be used in various applications, including calculation of the manoeuvre range of spacecraft (Li et al., 2011), evaluation of the interception domain of intercontinental ballistic missiles (Vinh et al., 1995), collision avoidance for vehicles (Mitchell & Tomlin, 2003), mechanical structure optimization (Suchaitanawanit & Cole, 2015), gene expression system control (Parise et al., 2014) and so on. It is known that reachable sets of linear control systems are convex, compact and non-empty (see Sonneborn & Van Vleck, 1964). A variety of methods for the approximation of reachable sets of linear control systems have been suggested. Gayek & Fisher (1987) construct an $$n$$-dimensional polyhedron to approximate the reachable set of an $$n$$-dimensional linear system. Varaiya (2000) proposes an approach based on the Pontryagin maximum principle of optimal control theory to approximate the reachable set of a linear system. Kurzhanski & Varaiya (2001) introduce a method based on level sets and Hamilton–Jacobi equations to estimate reachable sets. Kurzhanski & Vályi (1991), Kurzhanski & Varaiya (2000), Kurzhanski & Varaiya (2002) and Botchkarev & Tripakis (2000) use ellipsoidal methods to achieve inner and outer approximations of reachable sets, while Girard (2005) and Girard et al. (2006) construct reachable sets by set operations using zonotopes. Kostousova (2001) introduces families of external and internal parallelotopic estimates for reachable sets.
Moreover, Kostousova (1997, 2008, 2009) studies properties of outer polyhedral estimates for reachable sets of linear systems. It has been suggested that the reachable set of a continuous control system can be approximated by the reachable set of its corresponding discrete system (see Vinnikov, 2015; Dontchev & Farkhi, 1989). Thus, using the convexity of reachable sets of linear systems, Baier et al. (2007) propose a direct solution method based on optimization techniques to approximate the discrete reachable set. The method is easy to implement. However, it requires choosing a finite number of normal directions in advance, and the approximation error cannot be directly controlled. In this article, we pursue the idea of using the discrete reachable set to approximate its corresponding continuous reachable set. Following the work on Benson's outer approximation algorithm (Benson, 1998) and its variants (see e.g., Shao & Ehrgott, 2008; Ehrgott et al., 2011; Löhne et al., 2014) for solving multiobjective optimization problems, we propose a variant of Benson's algorithm with outer and inner approximations to sandwich the discrete reachable set of a linear control system. The main contribution of our work is that, by specifying an approximation error, the approximation quality measured in Hausdorff distance between the outer and inner approximation sets can be directly controlled.

This article is organized as follows. In Section 2, we introduce discrete linear control systems and provide a theoretical basis for using the discrete reachable set to approximate its continuous counterpart. Then, we propose a variant of Benson's algorithm for solving discrete reachable sets and present an illustrative example in Section 3. Numerical results are given in Section 4. Finally, we draw some conclusions in Section 5.

2. Discrete reachable sets

A discrete counterpart of linear control system $$({\rm P}_c)$$, using a one-step discrete scheme, e.g. an explicit Runge–Kutta scheme, on equidistant time grid points $$t_i = t_0 +ih$$, $$i = 0,1,\ldots, N$$, with step size $$h = (T -t_0)/N$$, is constructed as follows:
\begin{align} ({\rm P}_d)\quad & x_h(t_{i+1}) = x_h(t_i) + h {\it{\Phi}}(x_h(t_i), u_h(t_i),t_i,h), \quad i =0,1,\ldots, N-1, \tag{P$_d$(a)}\\ & x_h(t_0) = x_0, \tag{P$_d$(b)}\\ & u_h \in U_h. \tag{P$_d$(c)} \end{align}
It needs to be noted that the constraint $$u_h \in U_h$$ could be non-linear. For instance, if $$U_h = \{x\in \mathbb{R}^{m^\prime}: \|x\|_2^2 \leq K\}$$, where $$K$$ is a positive constant, then the corresponding constraint $${u_h}_1^2+{u_h}_2^2+\cdots+{u_h}_{m^\prime}^2 - K \leq 0$$ is non-linear. The discretized system depends on the choice of the parametrization $$u_h(\cdot)$$ of the control on the grid, e.g. a piecewise constant function, and on the incremental function $${\it{\Phi}}$$. In the simplest case we have $${\it{\Phi}}(x(t), u(t), t, h) = A(t)x(t) + B(t)u(t)$$, that is, the Euler method.

Definition 2.1 The reachable set of the discretized control system $$({\rm P}_d)$$ for the time $$t_N$$ is defined as \begin{align*} {\mathcal R}_h(t_N, t_0, x_0) := \{y\in \mathbb{R}^{p} : \exists u_{h}(\cdot) \text{ and } x_h(\cdot) \text{ of } ({\rm P}_d) \text{ with } x_h(t_N)=y\}. \end{align*} We also call $${\mathcal R}_h(t_N, t_0, x_0)$$ the discrete reachable set. The reachable set $${\mathcal R}_h(t_N, t_0, x_0)$$ of linear system $$({\rm P}_d)$$ is convex, compact and non-empty (Baier et al., 2007).
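To fix ideas, the following minimal Python sketch simulates $$({\rm P}_d)$$ with the Euler increment function and samples points of the discrete reachable set by drawing random admissible piecewise constant controls. The system data anticipate Example 4.1 below; the sampled cloud lies in $${\mathcal R}_h$$ but, unlike the algorithm proposed in Section 3, comes with no approximation guarantee.

```python
import numpy as np

# Euler discretization of (Pc): x_{i+1} = x_i + h*(A(t_i) x_i + B(t_i) u_i).
def euler_endpoint(A, B, x0, controls, t0, h):
    x = np.asarray(x0, dtype=float)
    for i, u in enumerate(controls):
        t = t0 + i * h
        x = x + h * (A(t) @ x + B(t) @ u)
    return x  # x_h(t_N), one point of the discrete reachable set

# Double integrator of Example 4.1 below, with U = [0, 1].
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
N, h = 100, 0.01  # t_0 = 0, T = 1

rng = np.random.default_rng(0)
cloud = np.array([
    euler_endpoint(A, B, [0.0, 0.0], rng.uniform(0.0, 1.0, size=(N, 1)), 0.0, h)
    for _ in range(500)  # 500 sampled endpoints inside R_h(t_N, t_0, x_0)
])
```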
Definition 2.2 Let $$\cal C$$ and $$\cal D$$ be two non-empty convex compact sets in $$\mathbb{R}^p$$. Then the Hausdorff distance between $$\cal C$$ and $$\cal D$$ is defined as \begin{equation} H(\mathcal{C},\mathcal{D}):=\max \{d(\mathcal{C},\mathcal{D}),d(\mathcal{D},\mathcal{C})\}, \tag{2.2} \end{equation} where $$d(\mathcal{C}, \mathcal{D}) = \max \limits_{c \in \mathcal{C}} \min \limits_{d \in \mathcal{D}} \|c-d\|_2$$.

Theorem 2.1 (Dontchev & Farkhi, 1989) Let $$N\in \mathbb N$$, $$h = (T-t_0)/N$$ be the step size of problem $$({\rm P}_d)$$ and $$t_i = t_0 +ih$$, $$i = 0,1,\ldots, N$$. Let $${\mathcal R}(T,t_0,x_0)$$ and $${\mathcal R}_h(t_N,t_0,x_0)$$, respectively, be the reachable set of system $$({\rm P}_c)$$ and the reachable set of system $$({\rm P}_d)$$ with the Euler scheme. Then there exists a constant $$M>0$$ such that $$H(\mathcal R(T,t_0,x_0), \mathcal R_h(t_N,t_0,x_0)) \leq Mh$$ holds.

Theorem 2.1 shows that when the Euler scheme is used for discretization, the Hausdorff distance between the discrete reachable set and its continuous counterpart is bounded above by $$Mh$$. Hence using $$\mathcal R_h$$ to approximate $$\mathcal R$$ is reasonable. It needs to be noted that different discretization methods lead to different bounds for the Hausdorff distance between the discrete reachable set and its continuous counterpart (see e.g., Veliov, 1992; Ferretti, 1997; Zivanovic & Collins, 2012). If a suitable discretization scheme is used, a small Hausdorff distance $$H(\mathcal R_h, \mathcal R)$$ can be expected. In this article, we use the Euler scheme for discretization.

3. A variant of Benson's algorithm for finding discrete reachable sets

In this section, we propose a variant of Benson's algorithm to compute the reachable set of the discretized control system $$({\rm P}_d)$$ (obtained by discretizing $$({\rm P}_c)$$ with the Euler method). Let us first rewrite the constraints $$({\rm P}_d(a))$$, $$({\rm P}_d(b))$$ and $$({\rm P}_d(c))$$ in the generalized form $$g(x)\leq 0$$. The problem of finding the discrete reachable set $${\mathcal R}_h(t_N, t_0, x_0)$$ can then be written as follows: \begin{eqnarray*} \begin{array}{l r c} ({\rm P}_\mathcal{Y}) & \mbox{Find} & \mathcal{Y} = \{Px: x \in \mathcal{X}\}\\ & \mbox{s.t.} & \mathcal{X} = \{x\in \mathbb{R}^n : g(x)\leq 0\}, \end{array} \end{eqnarray*} where $$x = (u_h(t_0)^{\top},\ldots, u_h(t_{N-1})^{\top}, x_h(t_0)^{\top}, \ldots, x_h(t_{N-1})^{\top},x_h(t_{N})^{\top})^{\top}$$ is the vector of variables and $$P=[{\textbf{O}}_{\it{p\times(n-p)}}, {\textbf{I}}_{\it{p\times p}}]$$ is the matrix whose last $$p$$ columns form the identity matrix and whose remaining entries are zero. Here we assume that $$\exists x$$ such that $$g(x) < 0$$. Problem $$({\rm P}_\mathcal{Y})$$ can also be seen as finding the feasible set in objective space of the following multiple objective optimization problem \begin{eqnarray*} \begin{array}{l l l} ({\rm MOP}) & \min & Px\\ & \mbox{s.t.} & x\in \mathcal{X}=\{x\in \mathbb{R}^n: g(x) \leq 0\}, \end{array} \end{eqnarray*} where $$\mathcal{X}$$ is the feasible set in decision space $$\mathbb{R}^n$$ and \begin{equation*} \mathcal{Y} := \{Px: x\in \mathcal{X}\} \end{equation*} is called the feasible set in objective space $$\mathbb{R}^p$$.
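The structure of $$({\rm P}_\mathcal{Y})$$ is easy to assemble explicitly. The sketch below, under the assumption of the Euler scheme with a scalar piecewise constant control, builds the selection matrix $$P$$ and the (linear) dynamic constraints of $$({\rm P}_d)$$ as an equality block, and verifies on a simulated trajectory that $$Px = x_h(t_N)$$; the matrices of Example 4.1 below serve as placeholder data.

```python
import numpy as np

# Stacked variable of (P_Y): z = (u_0, ..., u_{N-1}, x_0, ..., x_N),
# here with scalar controls (m' = 1) and state dimension p = 2.
p, m, N, h = 2, 1, 100, 0.01
n = N * m + (N + 1) * p                            # dimension of the stacked variable
P = np.hstack([np.zeros((p, n - p)), np.eye(p)])   # P = [O_{p x (n-p)}, I_{p x p}]

A = np.array([[0.0, 1.0], [0.0, 0.0]])             # Example 4.1 data (placeholder)
B = np.array([[0.0], [1.0]])

# Equality block E z = 0 encoding x_0 = 0 and x_{i+1} - (I + hA) x_i - hB u_i = 0;
# each equality fits the form g(z) <= 0 as two opposite inequalities.
E = np.zeros(((N + 1) * p, n))
E[0:p, N * m:N * m + p] = np.eye(p)                # initial condition rows
for i in range(N):
    r = (i + 1) * p                                # rows for the step i -> i+1
    E[r:r + p, N * m + i * p:N * m + (i + 1) * p] = -(np.eye(p) + h * A)
    E[r:r + p, N * m + (i + 1) * p:N * m + (i + 2) * p] = np.eye(p)
    E[r:r + p, i * m:(i + 1) * m] = -h * B

# Consistency check: a simulated trajectory satisfies E z = 0 and P z = x_h(t_N).
u = np.ones((N, m))
xs = [np.zeros(p)]
for i in range(N):
    xs.append(xs[-1] + h * (A @ xs[-1] + B @ u[i]))
z = np.concatenate([u.ravel(), np.concatenate(xs)])
print(np.allclose(E @ z, 0.0), P @ z)              # True, and P z equals x_h(t_N)
```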
In multi-objective optimization, several objective functions have to be minimized. The objectives are usually conflicting, so a feasible solution optimizing all objectives simultaneously does not exist in most cases. Therefore, the purpose of multi-objective optimization is to obtain non-dominated points rather than optimal values. Researchers have developed various MOP algorithms which work in objective space to obtain the non-dominated set. For example, Benson (1998) proposed an outer approximation method to find the feasible set in objective space of a multiobjective linear programming (MOLP) problem. Ehrgott et al. (2012) developed a dual variant of Benson's algorithm to solve MOLP. Moreover, extensions of Benson's outer approximation algorithm for solving convex multiobjective programming problems can be found in Ehrgott et al. (2011) and Löhne et al. (2014). Since the goal of multiobjective optimization is to obtain non-dominated sets, the extended versions of Benson's algorithm of Ehrgott et al. (2011) and Löhne et al. (2014) cannot be directly used to solve problem $$({\rm P}_\mathcal{Y})$$. Therefore, in this section, we propose a variant of Benson's algorithm to solve problem $$({\rm P}_\mathcal{Y})$$.

The following notation is used. For a set $$\mathcal{A} \subseteq \mathbb{R}^p$$, we denote the interior, boundary and convex hull of $$\mathcal{A}$$ by $${\rm int \,} \mathcal{A}$$, $${\rm bd \,} \mathcal{A}$$ and $${\rm conv \,} \mathcal{A}$$, respectively. If $$\mathcal{A}$$ is a non-empty polyhedral convex compact set, we denote the set of all vertices of $$\mathcal{A}$$ by $${\rm vert\,} \mathcal{A}$$. A subset $$\mathcal{F}$$ of a convex set $$\mathcal{A}$$ is called an exposed face of $$\mathcal{A}$$ if there exists a supporting hyperplane $$\cal H$$ of $$\mathcal{A}$$ with $$\mathcal{F} = \mathcal{A} \cap \cal H$$.

The variant of Benson's algorithm to be proposed solves a generalized form of $$({\rm P}_\mathcal{Y})$$ with $$Px$$ being linear functions and $$g(x)$$ being convex functions. It is obvious that both $$\mathcal{X}$$ and $$\mathcal{Y}$$ are then convex sets. The idea of the algorithm can be summarized as follows. Let $$\epsilon > 0$$ be an approximation error. Our algorithm starts with a polyhedron $$\mathcal{S}$$ covering the set $$\mathcal{Y}$$. Then a relative interior point $$\hat p$$ of $$\mathcal{Y}$$ is found. In each iteration, we check the vertices $$s$$ of $$\mathcal{S}$$ to see whether they are contained in $$\mathcal{Y}$$ or not. If not, we calculate the corresponding boundary point $$y$$ of $$\mathcal{Y}$$ on the line segment between $$s$$ and $$\hat p$$, and then measure the distance $$d(s,y)$$ between $$s$$ and $$y$$. If the vertex $$s \in \mathcal{Y}$$ or if $$d(s,y) \leq \epsilon$$, we add $$s$$ to the outer approximation $$\mathcal{O}$$ and its corresponding boundary point $$y$$ to the inner approximation $$\mathcal{I}$$ (note: if $$s \in \mathcal{Y}$$, its corresponding boundary point is $$s$$ itself). On the other hand, if a vertex of the polyhedron $$\mathcal{S}$$ is found at a distance greater than $$\epsilon$$ from its corresponding boundary point, a hyperplane separating it from $$\mathcal{Y}$$ is added to the description of $$\mathcal{S}$$, and the procedure is repeated until all the vertices of $$\mathcal{S}$$ are either contained in $$\mathcal{Y}$$ or at a distance of at most $$\epsilon$$ from their corresponding boundary points. In order to construct the hyperplane separating a point $$s \notin \mathcal{Y}$$ from $$\mathcal{Y}$$, the following procedure is used: the point $$s$$ and $$\hat p$$ are connected by a line segment. This line segment intersects the boundary of $$\mathcal{Y}$$ at a point $$y$$.
The idea is to construct a hyperplane which supports $$\mathcal{Y}$$ at $$y$$ and hence separates $$s$$ and $$\mathcal{Y}$$. The following primal–dual pair of convex programmes $$({\rm P}_1(\hat{y}))$$ and $$({\rm D}_1(\hat{y}))$$, depending on $$\hat{y} \in \mathbb R^p$$, is needed for that purpose:
$$({\rm P}_1(\hat{y}))\qquad \min \; z \quad \mbox{s.t.} \quad g(x) \leq 0, \quad Px-\hat{y}-z\bar n = 0,$$
where $$\bar{n}= \hat p-\hat{y}$$ and $$\hat p$$ is an interior point of $$\mathcal{Y}$$. The Lagrangian $$L:(\mathbb{R}^n\times\mathbb{R})\times(\mathbb{R}^m\times \mathbb{R}^p)\rightarrow\mathbb{R}$$ associated with problem $$({\rm P}_1(\hat{y}))$$ can be defined as $$L(x,z,u,\lambda):=z+u^{\top}g(x)+\lambda^{\top}(Px-\hat{y}-z\bar n),$$ which yields the dual problem
$$({\rm D}_1(\hat{y}))\qquad \max \; \inf_{x\in\mathbb{R}^n}\left\{u^{\top}g(x)+\lambda ^{\top}Px\right\}-\lambda^{\top}\hat{y} \quad \mbox{s.t.} \quad u \geq 0, \quad \lambda^{\top}\bar n = 1.$$
Note that the objective value of $$({\rm P}_1(\hat{y}))$$ is strictly greater than zero if $$\hat{y}\notin \mathcal{Y}$$, equal to zero if $$\hat{y} \in {\rm bd \,} \mathcal{Y}$$ and strictly less than zero if $$\hat{y} \in {\rm int \,} \mathcal{Y}$$. In our problem, the parameter vector $$\hat{y} \in \mathbb{R}^p$$ typically does not belong to $$\mathcal{Y}$$. Problem $$({\rm P}_1(\hat{y}))$$ has a unique optimal solution due to the convexity of $$\mathcal{Y}$$ and the intermediate value theorem, since $$Px$$ is continuous. We denote the optimal solution of $$({\rm P}_1(\hat{y}))$$ by $$(\bar x,\bar z)$$; then $$\bar y =P\bar x$$ is a boundary point of $$\mathcal{Y}$$. The following Proposition 3.1 and Theorem 3.1 provide the theoretical basis for constructing a supporting hyperplane of $$\mathcal{Y}$$.

Proposition 3.1 Let $$\hat{y} \in {\mathbb R}^p$$, let $$(x^*,z^*)$$ be an optimal solution of $${\rm P_1}(\hat y)$$ and let $$(u^*,\lambda^*)$$ be an optimal solution of $${\rm D_1}(\hat y)$$. Then the optimal objective values of $${\rm P_1}(\hat y)$$ and $${\rm D_1}(\hat y)$$ coincide, i.e., $$\inf\limits_{x\in\mathbb{R}^n}\{{u^*}^{\top}g(x)+{\lambda^*}^{\top}Px\}-{\lambda^*}^{\top}\hat{y} = z^*.$$

Proof. Since for problem $$({\rm P}_\mathcal{Y})$$ we assume that $$\exists x$$ such that $$g(x) < 0$$, i.e., Slater's condition is satisfied, convex programming duality implies the existence of a solution $$(u,\lambda)$$ of $$({\rm D}_1(\hat{y}))$$ and the coincidence of the optimal values. □

Theorem 3.1 Let $$\hat{y} \in {\mathbb R}^p$$, let $$(x^*,z^*)$$ be an optimal solution of $${\rm P_1}(\hat y)$$ and let $$(u^*,\lambda^*)$$ be an optimal solution of $${\rm D_1}(\hat y)$$. Then $${\lambda^*}^{\top}y \geq {\lambda^*}^{\top}Px^*$$ for all $$y \in \mathcal{Y}$$, i.e., the hyperplane $$H:=\{y \in {\mathbb R}^p: {\lambda^*}^{\top}y = {\lambda^*}^{\top}Px^* \}$$ is a supporting hyperplane of $$\mathcal{Y}$$ at $$Px^*=\hat{y}+z^*\bar{n}$$.

Proof. Let $$p$$ be a point in $$\mathcal{Y}$$; then the optimal value of $$({\rm P}_1(p))$$ must be less than or equal to zero. Let $$\mathcal{D} = \{(u,\lambda): u \geq 0, \lambda^{\top}\bar n = 1\}$$.
Due to weak duality between $$({\rm P}_1(p))$$ and $$({\rm D}_1(p))$$, we have $$\sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}p) \leq 0.$$ Furthermore, since \begin{eqnarray*} & & \sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}p)\\ &= &\sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}\hat{y}+\lambda^{\top}\hat{y}-\lambda^{\top}p)\\ &=&\sup\limits_{(u,\lambda) \in \mathcal{D}}\{\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}\hat{y})+\lambda^{\top}\hat{y}-\lambda^{\top}p\}\\ &\geq &\inf\limits_{x\in\mathbb{R}^n}({u^*}^{\top}g(x)+{\lambda^*}^{\top} Px-{\lambda^*}^{\top}\hat{y})+{\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p\\ &=&z^*+{\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p, \end{eqnarray*} we have $$z^*+ {\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p \leq 0.$$ Moreover, because $$(x^*,z^*)$$ is an optimal solution of $${\rm P_1}(\hat y)$$, we have $$Px^* = \hat y + z^* \bar n$$. Therefore, $$ {\lambda^*}^{\top} Px^* = {\lambda^*}^{\top} (\hat y + z^* \bar n) = {\lambda^*}^{\top} \hat y + z^* \underbrace{(\lambda^*)^{\top} \bar n}_{=1} = {\lambda^*}^{\top} \hat y + z^*. $$ Substituting $$z^* = {\lambda^*}^{\top} Px^* - {\lambda^*}^{\top} \hat y$$ into $$z^*+ {\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p \leq 0$$ gives $${\lambda^*}^{\top} Px^* -{\lambda^*}^{\top}p \leq 0,$$ i.e., $${\lambda^*}^{\top} p \geq {\lambda^*}^{\top}Px^*$$ holds for every $$p \in \mathcal{Y}$$, which completes the proof. □

According to Theorem 3.1, for a given point $$\hat y$$, an optimal solution of $$({\rm D}_1(\hat{y}))$$ can be used to obtain the required supporting hyperplane of $$\mathcal{Y}$$. We now state the algorithm in its general form.

Algorithm 3.1 (A variant of Benson's algorithm for solving $$({\rm P}_\mathcal{Y})$$)

Initialization.
(i1) Construct a polytope $$\mathcal{S}^0 \supseteq \mathcal{Y}$$ and find $$\hat p \in {\rm int \,} \mathcal{Y}$$.
(i2) Let $$\mathcal{O}:=\emptyset$$ and $${\cal I}:=\{ {\tilde{y}}^1,\ldots, {\tilde{y}}^p, y'\}$$ (see the construction of $$\mathcal{S}^0$$ below for the calculation of $${\tilde{y}}^i$$, $$i=1,\ldots, p$$, and $$y'$$). Set $$\epsilon > 0$$, $$k=0$$ and go to iteration $$k$$.

Iteration $$k$$.
(k1) If $$s \in \mathcal{Y} \cup \mathcal{O}$$ is satisfied for each $$s\in {\rm vert\,} \mathcal{S}^k$$, then go to step (k6). Otherwise, choose any $$s^k \in {\rm vert\,} \mathcal{S}^k \setminus ({\mathcal{O}} \cup \mathcal{Y})$$ and continue.
(k2) Compute the unique boundary point $$y^k$$ of $$\mathcal{Y}$$ by solving $${\rm P_1}(s^k)$$.
(k3) If the distance $$d(s^k,y^k)$$ from $$s^k$$ to $$y^k$$ is at most $$\epsilon$$, then add $$s^k$$ to $$\mathcal{O}$$ and add $$y^k$$ to $$\cal I$$. Go to (k1).
(k4) If the distance $$d(s^k,y^k) > \epsilon$$, then determine a hyperplane $$H:=\{y \in {\mathbb R}^p: y^{\top} \lambda^k = {y^k}^{\top}\lambda^k \}$$ separating $$s^k$$ from $$\mathcal{Y}$$ and set $$\mathcal{S}^{k+1} = \mathcal{S}^k\cap\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k\}$$.
(k5) Determine $${\rm vert\,} \mathcal{S}^{k+1}$$. Set $$k = k+1$$ and go to (k1).
(k6) Define the vertex set of the outer approximation $$V_{o}(\mathcal{S}^k) = {\rm vert\,} \mathcal{S}^k$$ and define the vertex set of the inner approximation $$V_{i}(\mathcal{S}^k) =({\rm vert\,} \mathcal{S}^k \setminus \mathcal{O})\cup {\cal I}$$.
Results.
(r1) Let $$\mathcal{Y}^{i}= {\rm conv \,} V_{i}(\mathcal{S}^k)$$. $$\mathcal{Y}^i$$ represents the inner approximation of $$\mathcal{Y}$$ ($$\mathcal{Y}^i \subseteq \mathcal{Y}$$).
(r2) Let $$\mathcal{Y}^{o} = \mathcal{S}^k$$. $$\mathcal{Y}^o$$ represents the outer approximation of $$\mathcal{Y}$$ ($$\mathcal{Y} \subseteq \mathcal{Y}^o$$).
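To make the control flow of steps (k1)–(k6) concrete, the following Python sketch implements the sandwich loop for the two-dimensional case. It is an illustrative reading of the algorithm, not the authors' Matlab implementation: the boundary point of (k2), the supporting normal of (k4) and the membership test are passed in as oracle callables (in general they come from solving $$({\rm P}_1)$$ and $$({\rm D}_1)$$), and scipy's HalfspaceIntersection stands in for the on-line vertex enumeration of step (k5). The sign convention here uses an outward normal, so each cut keeps the side $$\lambda^{\top}y \leq \lambda^{\top}y^k$$ containing $$\mathcal{Y}$$.

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

def sandwich(halfspaces, p_hat, in_Y, boundary_point, supporting_normal, eps):
    """Sketch of Algorithm 3.1. `halfspaces` is the H-representation of S^0
    (rows [a, b] meaning a^T y + b <= 0); `p_hat` is an interior point of Y."""
    accepted, inner = [], []                    # the sets O and I
    while True:
        # (k5)/(k1): vertices of the current outer polytope S^k
        verts = HalfspaceIntersection(halfspaces, p_hat).intersections
        todo = [s for s in verts
                if not in_Y(s) and not any(np.allclose(s, t) for t in accepted)]
        if not todo:                            # every vertex is in Y or in O
            break                               # -> (k6)
        s = np.array(todo[0])                   # a vertex s^k outside Y and O
        y = boundary_point(s, p_hat)            # (k2): boundary point via P1(s^k)
        if np.linalg.norm(s - y) <= eps:        # (k3): accept s^k, record y^k
            accepted.append(s)
            inner.append(y)
        else:                                   # (k4): add a separating cut at y^k
            lam = supporting_normal(y)          # outward normal (from D1 in general)
            halfspaces = np.vstack([halfspaces, np.append(lam, -lam @ y)])
    # (k6), (r1), (r2): V_o = vert S^k; V_i = vertices inside Y plus I
    V_o = HalfspaceIntersection(halfspaces, p_hat).intersections
    V_i = np.array(list(inner) + [list(v) for v in V_o if in_Y(v)])
    return V_o, V_i
```

Concrete oracles for the disk example of Section 3 are given after Table 1 below.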
In Algorithm 3.1, a polyhedral convex set $$\mathcal{S}^0$$ needs to be constructed at initialization. Following Benson (1998), the initial outer approximation $$\mathcal{S}^0$$ can be constructed as follows. Let \begin{equation} \hat \beta = \min \{e^{\top} y: y\in \mathcal{Y} \}, \tag{3.1} \end{equation} where $$e \in {\mathbb R}^p$$ is the vector whose entries are all 1 and $$\mathcal{Y}$$ is described in $$({\rm P}_\mathcal{Y})$$. We denote the optimal solution of (3.1) by $$y'$$. Compute an optimal solution $${\tilde{y}}^{i}$$ of \begin{equation} \max \{y_i: y\in \mathcal{Y} \}, \quad i=1,\ldots, p, \tag{3.2} \end{equation} and let $$y^{AI} = ({\tilde{y}}^1_1, {\tilde{y}}^2_2,\ldots, {\tilde{y}}^p_p)^{\top}$$. Define $$p+1$$ points $$v^k \in {\mathbb R}^p$$, $$k = 0, 1, \dots, p$$.
Let $$v^0 = y^{AI}$$ and, for $$k = 1,2,\dots,p$$, let \begin{eqnarray} v_l^k = \left \{ \begin{array}{@{}l l@{}} y_l^{AI}, & \textrm{if}~ l \neq{k}, \\ \hat \beta + {y}^{AI}_k - e^{\top} v^0, & \textrm{if}~ l = k, \end{array}\right. \qquad l = 1, 2, \ldots, p. \tag{3.3} \end{eqnarray} Then $$\mathcal{S}^0$$ can be constructed as the convex hull of $$\{v^k:k =0,1,\dots,p\}$$. Obviously $$\mathcal{S}^0$$ is a $$p$$-dimensional simplex and it contains $$\mathcal{Y}$$. Moreover, at initialization of Algorithm 3.1, an interior point $$\hat p$$ of $$\mathcal{Y}$$ needs to be found. We refer to Algorithm 3 of Löhne (2011) for the calculation of an interior point.

Remark 3.1 In Algorithm 3.1, for each $$k\geq 0$$, the inequality cut $$y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k$$ is constructed so that $$\mathcal{S}^{k+1}$$ 'cuts off' a portion of $$\mathcal{S}^k$$ containing $$s^k$$, and at the same time the point $$y^k$$ is added to $$V_i(\mathcal{S}^k)$$. Thus Algorithm 3.1 constructs a sequence of polyhedra $$\mathcal{S}^{k}$$ with $$\mathcal{S}^0\supset \cdots \supset \mathcal{S}^{k}\supset \mathcal{Y}$$ approximating $$\mathcal{Y}$$ from the outside. It also implicitly constructs a sequence of polyhedra $${\rm conv \,} V_i(\mathcal{S}^{k})$$ with $${\rm conv \,} V_i(\mathcal{S}^{0}) \subseteq \cdots \subseteq {\rm conv \,} V_i(\mathcal{S}^k)\subseteq \mathcal{Y}$$ approximating $$\mathcal{Y}$$ from the inside.

Proposition 3.2 Let $$\bar y^i$$ denote the boundary point of $$\mathcal{Y}$$ obtained by solving $${\rm P_1}(s^i)$$ for $$s^i \in {\rm vert\,} \mathcal{S}^k$$. Then $$H(\mathcal{S}^k, \mathcal{Y})\leq \max \limits_{s^i\in {\rm vert\,} \mathcal{S}^k} \| \bar y^i-s^i\|$$.

Proof. Since $$\mathcal{Y} \subseteq \mathcal{S}^k$$ and $$d(s^i, \mathcal{Y})$$ is a convex function of the variable $$s^i$$, one has $$H(\mathcal{S}^k, \mathcal{Y}) = \max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k} d(s^i, \mathcal{Y}).$$ Since $$\bar y^i \in \mathcal{Y}$$ for every $$s^i \in {\rm vert\,} \mathcal{S}^k$$, we obtain \[\max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k} d(s^i, \mathcal{Y})\leq \max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k}\| \bar y^i-s^i\|.\] □

Theorem 3.2 When the algorithm terminates, we have $$\mathcal{Y}^i\subseteq \mathcal{Y} \subseteq \mathcal{Y}^o$$ and $$H(\mathcal{Y}^i, \mathcal{Y}^o) \leq \epsilon$$.

Proof. When the algorithm terminates, for each $$s^k \in {\rm vert\,} \mathcal{S}^k$$ there exists a boundary point $$y^k \in {\rm conv \,} V_i(\mathcal{S}^k)$$, obtained by solving $${\rm P_1}(s^k)$$, such that $$d(s^k,y^k)\leq \epsilon$$. Hence $$H(\mathcal{S}^k, {\rm conv \,} V_i(\mathcal{S}^k)) \leq\epsilon$$ is satisfied. □

Remark 3.2 In the case of $$\mathcal{Y}$$ being a polytope, the sandwich algorithm is finite even if $$\epsilon$$ is set to $$0$$. This is because at each iteration the algorithm finds an exposed face of $$\mathcal{Y}$$ on the supporting hyperplane $$y^{\top}{\lambda^k}= {y^k}^{\top}{\lambda^k}$$. Since the number of exposed faces of the polytope $$\mathcal{Y}$$ is finite, after a finite number of iterations $$\mathcal{S}^k=\mathcal{Y}$$.

Theorem 3.3 gives the total approximation error bound for using the discrete reachable set to approximate its continuous counterpart.

Theorem 3.3 Let $$\mathcal R_h = \mathcal R_h(t_N, t_0, x_0)$$ be the reachable set of the discrete control system $$({\rm P}_d)$$ at time $$t_N$$ and $$\mathcal R = \mathcal R(T, t_0, x_0)$$ be the reachable set of the continuous control system $$({\rm P}_c)$$ at time $$T$$.
If $$H(\mathcal R,\mathcal R_h) \leq Mh$$, then we have $$H({\mathcal R},\mathcal{Y}^i) \leq Mh+\epsilon$$ and $$H({\mathcal R}, \mathcal{Y}^o) \leq Mh+\epsilon,$$ where $$\epsilon$$ is the specified approximation error.

Proof. Since the reachable sets involved are compact and non-empty, we simply apply the triangle inequality for the Hausdorff distance, combining the convergence assumption with the results of Theorem 3.2. □

According to Theorem 3.3, one can set $$\epsilon \ll Mh$$; then the Hausdorff distance between $${\mathcal R}$$ and the inner approximation $$\mathcal{Y}^i$$ or the outer approximation $$\mathcal{Y}^o$$ depends mainly on the discretization. We now give an example to illustrate Algorithm 3.1.

Example 3.4 Consider the following problem: \begin{array}{l l l} \mbox{Find} & \mathcal{Y} =\{y \in \mathbb{R}^2: y= x\} \\ \mbox{s.t.}&(x_1-2)^2+(x_2-2)^2&\leq 4\\ & 0&\leq x_1\\ & 0&\leq x_2.\\ \end{array} We use Algorithm 3.1 to solve the problem. The approximation error $$\epsilon$$ is set to $$0.4$$. Figure 1(a) shows $$\mathcal{Y}$$, $$\mathcal{S}^0$$, $$V_i(\mathcal{S}^0)$$ (symbol '+') and the interior point $$\hat p=(2.1953,2.1953)$$. Figure 1(b–f) shows the first to the fifth cut. In the figure, empty circles represent vertices being cut off, the symbol '+' represents the boundary points, and dotted line segments between the vertices and the interior point show the vertices being checked at iteration $$k-1$$. As can be seen, $$\mathcal{S}^k$$ and $$V_i(\mathcal{S}^k)$$ change with the iteration $$k$$.
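For this example the subproblem $$({\rm P}_1(\hat y))$$ can be solved directly as a small NLP. The sketch below, assuming scipy's SLSQP solver, computes the boundary point for the first vertex checked by the algorithm (so $$\hat y = s^0$$ and $$\bar n = \hat p - s^0$$); it is an illustration of the subproblem, not the authors' MOSEK-based implementation.

```python
import numpy as np
from scipy.optimize import minimize

p_hat = np.array([2.1953, 2.1953])       # interior point of Y
s = np.array([-2.8284, 4.0])             # vertex s^0 to be separated (k = 0)
nbar = p_hat - s                         # direction n = p_hat - y_hat, with y_hat = s

# (P1(s)): min z  s.t.  g(x) <= 0 and x - s - z*nbar = 0, variables v = (x1, x2, z).
cons = [
    {"type": "ineq", "fun": lambda v: 4 - (v[0] - 2)**2 - (v[1] - 2)**2},
    {"type": "ineq", "fun": lambda v: v[0]},
    {"type": "ineq", "fun": lambda v: v[1]},
    {"type": "eq",   "fun": lambda v: v[0:2] - s - v[2] * nbar},
]
res = minimize(lambda v: v[2], x0=np.array([*p_hat, 1.0]),
               constraints=cons, method="SLSQP")
x_bar, z_bar = res.x[:2], res.x[2]
print(x_bar, z_bar)   # boundary point approx (0.2170, 2.9060), z* approx 0.606 > 0
```

That $$z^* > 0$$ confirms $$s^0 \notin \mathcal{Y}$$, in line with the sign discussion after $$({\rm D}_1(\hat y))$$.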
The vertex $$s^k$$ being checked, its corresponding boundary point $$y^k$$, the distance $$d(s^k,y^k)$$ between $$s^k$$ and $$y^k$$ and the cutting plane for each iteration are listed in Table 1.

Table 1 $$s^k$$, $$y^k$$, $$d(s^k,y^k)$$ and the cutting planes in the first six iterations of the algorithm

k | $$s^k$$ | $$y^k$$ | $$d(s^k,y^k)$$ | Cut
0 | ($$-$$2.8284, 4) | (0.2170, 2.9060) | 3.2359 | 0.1683$$y_1$$ $$-$$ 0.0855$$y_2$$ = $$-$$0.2120
1 | (4, $$-$$2.8284) | (2.9060, 0.2170) | 3.2359 | $$-$$0.0855$$y_1$$ + 0.1683$$y_2$$ = $$-$$0.2120
2 | (4, 4) | (3.4142, 3.4142) | 0.8284 | $$-$$0.2770$$y_1$$ $$-$$ 0.2770$$y_2$$ = $$-$$1.8918
3 | ($$-$$0.4405, 1.6120) | (0.0205, 1.7141) | 0.4722 | 0.3676$$y_1$$ + 0.0531$$y_2$$ = 0.0986
  | (0.7728, 4) | (0.9890, 3.7257) | 0.3493 | –
4 | (4, 0.7728) | (3.7257, 0.9890) | 0.3493 | –
  | (1.6120, $$-$$0.4405) | (1.7141, 0.0205) | 0.4722 | 0.0531$$y_1$$ + 0.3676$$y_2$$ = 0.0986
  | (2.8284, 4) | (2.7738, 3.8442) | 0.1651 | –
  | (4, 2.8284) | (3.8442, 2.7738) | 0.1651 | –
5 | ($$-$$0.0700, 2.3411) | (0.0282, 2.3348) | 0.0985 | –
  | (0.1156, 1.0560) | (0.2101, 1.1077) | 0.1077 | –
  | (2.3411, $$-$$0.0700) | (2.3348, 0.0282) | 0.0985 | –
  | (1.0560, 0.1156) | (1.1077, 0.2101) | 0.1077 | –
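Under the stated assumptions, the iteration data in Table 1 can be reproduced with simple geometric oracles: since here $$\mathcal{Y} = \mathcal{X}$$, the boundary point of $$({\rm P}_1)$$ can be found by bisection along the segment from $$\hat p$$ to $$s^k$$, and a supporting normal is the gradient of the active constraint (playing the role of $$\lambda^k$$ from $$({\rm D}_1)$$, up to scaling). These oracles plug into the sandwich sketch given after Algorithm 3.1; both are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

# Example 3.4: Y = {y >= 0 : (y1 - 2)^2 + (y2 - 2)^2 <= 4}.
p_hat = np.array([2.1953, 2.1953])          # interior point, as in Fig. 1(a)

def in_Y(y, tol=1e-9):
    return (y[0] >= -tol and y[1] >= -tol
            and (y[0] - 2)**2 + (y[1] - 2)**2 <= 4 + tol)

def boundary_point(s, p_hat, tol=1e-12):
    # Bisection along y(t) = p_hat + t (s - p_hat); stands in for (P1(s)).
    lo, hi = 0.0, 1.0                       # y(0) = p_hat in Y, y(1) = s not in Y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if in_Y(p_hat + mid * (s - p_hat)) else (lo, mid)
    return p_hat + lo * (s - p_hat)

def supporting_normal(y, tol=1e-6):
    # Gradient of the constraint active at y: a valid outward normal.
    if abs((y[0] - 2)**2 + (y[1] - 2)**2 - 4) < tol:
        return np.array([y[0] - 2, y[1] - 2])
    return np.array([-1.0, 0.0]) if y[0] < tol else np.array([0.0, -1.0])

# Reproduce the first row of Table 1 (iteration k = 0):
s0 = np.array([-2.8284, 4.0])
y0 = boundary_point(s0, p_hat)              # approx (0.2170, 2.9060)
print(np.linalg.norm(s0 - y0))              # distance approx 3.2359
g = supporting_normal(y0)
print(g / g[0] * 0.1683)                    # approx (0.1683, -0.0855), the cut normal
```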
After five cuts, there are eight points outside $$\mathcal{Y}$$. Among them, the two points (0.7728, 4) and (4, 0.7728) have already been checked at $$k =4$$; the distances between them and their corresponding boundary points are less than the approximation error 0.4, so they belong to $$\mathcal{Y} \cup \mathcal{O}$$. For the remaining six infeasible points, the distances between them and their corresponding boundary points are also less than the approximation error 0.4, so we accept these six infeasible points for the outer approximation, see Fig. 1(g). When the algorithm terminates, the total number of iterations $$k$$ is equal to $$6$$ and the total number of linear programmes (LPs) solved is 16. In Fig. 1(h), we show the outer approximation $$\mathcal{Y}^o$$ and the inner approximation $$\mathcal{Y}^i$$ of $$\mathcal{Y}$$ and their corresponding sets of points.

Fig. 1. The evolution of $$\mathcal{S}^i$$ with the iteration $$i$$ (from (a) to (g)) and the inner and outer approximations $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ (see (h)). (a) $$\mathcal{Y}$$, $$\mathcal{S}^0$$, $$V_i(\mathcal{S}^0)$$ and $$\hat p$$. (b) $$\mathcal{S}^1$$ and $$V_i(\mathcal{S}^1)$$. (c) $$\mathcal{S}^2$$ and $$V_i(\mathcal{S}^2)$$. (d) $$\mathcal{S}^3$$ and $$V_i(\mathcal{S}^3)$$. (e) $$\mathcal{S}^4$$ and $$V_i(\mathcal{S}^4)$$. (f) $$\mathcal{S}^5$$ and $$V_i(\mathcal{S}^5)$$. (g) $$\mathcal{S}^6$$ and $$V_i(\mathcal{S}^6)$$. (h) $$\mathcal{Y}^o$$ and $$\mathcal{Y}^i$$.

4. Numerical examples

In Section 1, we briefly outlined the literature on algorithms for computing reachable sets. In this section, we compare the algorithm presented in Section 3 with the algorithm of Baier et al. (2007). The algorithm of Baier et al. (2007), like ours, is based on outer and inner approximations, but it uses a different strategy: a set of normal directions is used to construct supporting hyperplanes of $$\mathcal{Y}$$; then the intersection of the halfspaces is taken as the outer approximation and the convex hull of the supporting points is taken as the inner approximation. However, in Baier et al. (2007)'s algorithm, the Hausdorff distance between the outer and inner approximations cannot be calculated because a vertex representation of the outer approximation is lacking. To compare our algorithm with Baier et al. (2007)'s algorithm, we have implemented both algorithms in Matlab 7.10 using MOSEK (http://www.mosek.com) as the linear and non-linear programming solver. Tests are run on a PC with a dual-processor 2.67 GHz CPU and 2 GB RAM, and the codes, including Matlab and MOSEK, are run in single-thread mode. At step (k5) of our algorithm, the method of Chen et al. (1991) for on-line vertex enumeration by adjacency lists is used to calculate a vertex representation from the inequality representation of $$\mathcal{S}^k$$. We also adapt Baier et al.
(2007)'s algorithm to use the on-line vertex enumeration method, so that the vertex representation of its outer approximation, and hence the Hausdorff distance between the outer and inner approximations, can be calculated. Notice that Baier et al. (2007)'s algorithm needs to choose a set of normal directions, and each direction is used to calculate a supporting hyperplane of the discrete reachable set by solving an LP or a non-linear programme (NLP). Therefore, to make the two algorithms comparable, for a given approximation error in our algorithm, we use trial and error to find the number of normal directions in Baier et al. (2007)'s algorithm which produces a result with a Hausdorff distance similar to ours. Moreover, we also compare the results of the two algorithms when solving the same number of LPs or NLPs, i.e., using the same number of supporting hyperplanes. We first apply the code to three two-dimensional linear control system problems and one three-dimensional linear control system problem. For all four examples, we use the Euler method with piecewise constant control for discretization, and the step size $$h$$ is set to $$0.01$$.

Example 4.1 Consider the following linear control system example with $$x_0=(0,0)^{\top}$$, $$t_0 = 0$$, $$T = 1$$, $$U=[0,1]$$ and \begin{equation} A=\left[ \begin{array}{l r} 0 & 1 \\ 0 & 0 \end{array} \right]\!, \quad B=\left[ \begin{array}{r} 0 \\ 1 \end{array} \right]\!. \end{equation} The discrete reachable set problem $$({\rm P}_\mathcal{Y})$$ has only linear constraints. Therefore, for both algorithms, only LPs need to be solved. The problem is solved for $$\epsilon = 0.1$$ and $$\epsilon = 0.01$$ by Algorithm 3.1. It is also solved by Baier et al. (2007)'s algorithm with $$15$$, $$19$$, $$31$$ and $$55$$ normal directions (the normal directions are calculated according to Baier et al. (2007); they are uniformly distributed on the unit ball). The results are shown in Table 2. Furthermore, the outer approximations $$\mathcal{Y}^o$$ and inner approximations $$\mathcal{Y}^i$$ obtained by our algorithm with $$\epsilon = 0.1$$ and $$\epsilon = 0.01$$ are shown in Fig. 2(a and b), respectively. In the figure, outer approximations are drawn with solid lines while dot-dashed lines show inner approximations. For $$\epsilon = 0.1$$ ($$0.01$$), our algorithm needs to solve 15 (31) LPs. For comparison, we show the results obtained by Baier et al. (2007)'s algorithm with $$15$$ and $$31$$ normal directions (i.e., solving 15 and 31 LPs) in Fig. 2(c and d), respectively. From the figure, we can see that when solving the same number of LPs, Algorithm 3.1 obtains a smaller distance $$H(\mathcal{Y}^o,\mathcal{Y}^i)$$ than Baier et al. (2007)'s algorithm.

Fig. 2. Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)'s algorithm (bottom) with solving the same number of LPs (15 LPs for (a) and (c); 31 LPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)'s algorithm with 15 LPs. (d) Results of Baier et al. (2007)'s algorithm with 31 LPs.
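For Example 4.1 the subproblem underlying both algorithms, computing a supporting point of the discrete reachable set in a direction $$\lambda$$, even has a closed form: with Euler steps, $$x_h(t_N) = \sum_{i=0}^{N-1} M^{N-1-i} hB\,u_i$$ where $$M = I + hA$$, so maximizing $$\lambda^{\top}x_h(t_N)$$ over $$u_i \in [0,1]$$ picks $$u_i = 1$$ exactly when $$\lambda^{\top} M^{N-1-i} hB > 0$$. The sketch below exploits this bang-bang structure in place of the LP; it is an illustration, not the authors' code.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # Example 4.1
B = np.array([[0.0], [1.0]])
N, h = 100, 0.01                          # t_0 = 0, T = 1
M = np.eye(2) + h * A                     # one Euler step: x_{i+1} = M x_i + hB u_i

def support_point(lam):
    """Maximizer of lam^T x_h(t_N) over controls u_i in [0, 1] (bang-bang)."""
    x = np.zeros(2)
    for i in range(N):
        c = (np.linalg.matrix_power(M, N - 1 - i) @ (h * B)).ravel()
        x = x + c * (1.0 if lam @ c > 0 else 0.0)
    return x

# Supporting points of R_h in a few directions, as used to build inner and
# outer approximations from supporting hyperplanes:
for ang in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    lam = np.array([np.cos(ang), np.sin(ang)])
    print(lam, support_point(lam))
```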
Table 2 The Hausdorff distance, running time and the number of LPs/NLPs for Examples 4.1–4.4 solved by the proposed Algorithm 3.1 and by Baier et al. (2007)'s algorithm

Example | $$\epsilon$$ | Algorithm | LPs/NLPs | $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ | Time (s)
4.1 | — | Baier et al. (2007) | 15 | 0.0720 | 0.1592
    | 0.1 | Proposed | 15 | 0.0369 | 0.1167
    | — | Baier et al. (2007) | 19 | 0.0473 | 0.1943
    | — | Baier et al. (2007) | 31 | 0.0199 | 0.3415
    | 0.01 | Proposed | 31 | 0.0068 | 0.2570
    | — | Baier et al. (2007) | 55 | 0.0070 | 0.5763
4.2 | — | Baier et al. (2007) | 19 | 0.1482 | 0.4166
    | 0.1 | Proposed | 19 | 0.0876 | 0.3857
    | — | Baier et al. (2007) | 26 | 0.0878 | 0.4965
    | — | Baier et al. (2007) | 65 | 0.0238 | 1.3364
    | 0.01 | Proposed | 65 | 0.0061 | 1.3021
    | — | Baier et al. (2007) | 130 | 0.0065 | 2.2372
4.3 | — | Baier et al. (2007) | 13 | 0.1108 | 0.1514
    | 0.1 | Proposed | 13 | 0.0759 | 0.1366
    | — | Baier et al. (2007) | 15 | 0.0843 | 0.1737
    | — | Baier et al. (2007) | 35 | 0.0300 | 0.4108
    | 0.01 | Proposed | 35 | 0.0080 | 0.4161
    | — | Baier et al. (2007) | 105 | 0.0092 | 1.2856
4.4 | — | Baier et al. (2007) | 114 | 0.2095 | 2.0425
    | 0.1 | Proposed | 114 | 0.0725 | 1.3903
    | — | Baier et al. (2007) | 420 | 0.0788 | 10.2006
    | — | Baier et al. (2007) | 919 | 0.0380 | 24.4215
    | 0.01 | Proposed | 919 | 0.0097 | 9.9183
    | — | Baier et al. (2007) | 3600 | 0.0114 | 289.7547
Example 4.2 Consider the linear control system with $$U=\{x \in \mathbb{R}^2: \|x\|_2 \leq 1\}$$, $$x_0=(0,0)^{\top}$$, $$t_0 = 0$$, $$T=2$$ and \begin{equation} A=\left[ \begin{array}{r r} 0 & 1 \\[3pt] -2 & -3 \end{array} \right]\!, \quad B=\left[ \begin{array}{r r } 1 & 0 \\[3pt] 0 & 1 \end{array} \right]\!. \end{equation} The discrete reachable set problem $$({\rm P}_\mathcal{Y})$$ has non-linear constraints. Thus, NLPs need to be solved for both Algorithm 3.1 and Baier et al. (2007)'s algorithm. The problem is solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$. It is also solved by Baier et al. (2007)'s algorithm with 19, 26, 65 and 130 normal directions (these normal directions are chosen to be uniformly distributed on the unit ball according to Baier et al., 2007). The results are listed in Table 2. The results with $$\epsilon =0.1$$ and $$\epsilon =0.01$$ obtained by Algorithm 3.1 are shown in Fig. 3(a and b). For $$\epsilon = 0.1$$ ($$0.01$$), our algorithm needs to solve 19 (65) NLPs. For comparison, we also show the results obtained by Baier et al. (2007)'s algorithm solving $$19$$ and $$65$$ NLPs in Fig. 3(c and d), respectively.

Fig. 3. Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)'s algorithm (bottom) with solving the same number of NLPs (19 NLPs for (a) and (c); 65 NLPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)'s algorithm with 19 NLPs. (d) Results of Baier et al. (2007)'s algorithm with 65 NLPs.

Example 4.3 Consider a time-variant linear system with $$U =\{x \in {\mathbb{R}}: |x| \leq 1\}$$, $$x_0=(0,0)^{\top}$$, $$t_0=0$$, $$T=4$$ and \begin{equation} A(t)=\left[\begin{array}{cc} \textrm{e}^{-t}\sin t-2 & t \textrm{e}^{-2t}\\ -\textrm{e}^{-t} & 2\textrm{e}^{-2t}\cos t-1 \end{array}\right]\!, \quad B(t)=\left[\begin{array}{c} 1\\ \sin t \end{array}\right]\!. \end{equation} The discrete reachable set problem $$({\rm P}_\mathcal{Y})$$ is solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$.
It is also solved by Baier et al. (2007)'s algorithm with 13, 15, 35 and 105 normal directions (uniformly distributed points on the unit ball are taken as the normal directions). The results are listed in Table 2. The results of Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$ are shown in Fig. 4(a and b), respectively. For comparison, we also show the results obtained by Baier et al. (2007)'s algorithm using $$13$$ and $$35$$ LPs in Fig. 4(c and d), respectively.

Fig. 4. Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)'s algorithm (bottom) with solving the same number of LPs (13 LPs for (a) and (c); 35 LPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)'s algorithm with 13 LPs. (d) Results of Baier et al. (2007)'s algorithm with 35 LPs.

Example 4.4 This linear control system example has $$x_0=(0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U=\{x \in \mathbb{R}^3: \|x\|_2 \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r} -1 & 1 & 0\\ 0 & -1 & 0\\ 0 & 0 & -2 \end{array} \right]\!, \quad B=\left[ \begin{array}{r r r } 0 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{array} \right]\!. \end{equation} The corresponding discrete reachable set problem $$({\rm P}_{\mathcal{Y}})$$ has non-linear constraints due to $$U=\{x \in \mathbb{R}^3: \|x\|_2 \leq 1\}$$. We solve the problem using Algorithm 3.1 with $$\epsilon =0.1$$ and $$\epsilon =0.01$$. For comparison, for each result of our algorithm with a specified $$\epsilon$$, we also solve the problem by Baier et al. (2007)'s algorithm with two different numbers of normal directions, i.e., for each result of our algorithm, we present not only the result of Baier et al. (2007)'s algorithm with a similar Hausdorff distance but also the result obtained by solving the same number of NLPs. The results are listed in Table 2. Figures 5 and 6 show the inner approximation $$\mathcal{Y}^i$$, the outer approximation $$\mathcal{Y}^o$$ and both approximations together, solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$\epsilon =0.01$$, respectively. The number of NLPs computed for $$\epsilon =0.1$$ and $$\epsilon =0.01$$ is 114 and 919, respectively. For comparison, we show the results of Baier et al. (2007)'s algorithm solving the same number of NLPs as ours, i.e., 114 and 919, respectively, in Figs 7 and 8.

Fig. 5. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Algorithm 3.1 with approximation error $$0.1$$. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.
Fig. 6. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Algorithm 3.1 with approximation error $$0.01$$. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Fig. 7. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Baier et al. (2007)'s algorithm with 114 NLPs. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Fig. 8. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Baier et al. (2007)'s algorithm with 919 NLPs. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Table 2 summarizes the Hausdorff distance between the outer and inner approximation sets, the computation time (seconds) and the number of LPs/NLPs solved by both algorithms, as indicated in Examples 4.1–4.4. For our algorithm, we also list the approximation error $$\epsilon$$. For each row of results of our algorithm with a given $$\epsilon$$ (see column 2), we show two corresponding results of Baier et al. (2007)'s algorithm: one solving the same number of LPs/NLPs as ours (one row before) and one with a similar Hausdorff distance to ours (one row after). From Table 2, we can see that, for a similar Hausdorff distance, the algorithm proposed in this article uses less computation time and solves fewer LPs/NLPs. Moreover, when solving the same number of LPs/NLPs, our algorithm obtains a smaller Hausdorff distance between the inner and outer approximations. As the approximation error decreases, the number of LPs/NLPs increases and, consequently, the computation time increases.

Next, we solve a four-dimensional and a five-dimensional control system example to show the performance of our algorithm in handling higher-dimensional problems. Again, we use the Euler method with piecewise constant control for discretization, and the step size $$h$$ is set to $$0.01$$. For comparison, for a given approximation error of our algorithm, we have also tried to use trial and error to find a suitable number of normal directions for Baier et al. (2007)'s algorithm so that both algorithms produce a similar Hausdorff distance. However, we could not get results within 2 h. Therefore, for these two problems, we only compare the results of the two algorithms when solving the same number of optimization problems.

Example 4.5 This four-dimensional linear control system example has $$x_0=(0,0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U =\{x\in \mathbb{R}: |x| \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r r} -1 & 0 & 1 & 0\\ 0 & -2 & 0 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & -3 \end{array} \right]\!, \quad B=\left[ \begin{array}{r} 0\\ 1\\ 1\\ 2 \end{array} \right]\!. \end{equation} We solve the problem using Algorithm 3.1 with $$\epsilon =0.2$$ and $$\epsilon =0.1$$.
Next, we solve a four-dimensional and a five-dimensional control system example to show the performance of our algorithm on higher-dimensional problems. Again, we use the Euler method with piecewise constant control for discretization, and the step size $$h$$ is set to $$0.01$$. For comparison, for a given approximation error of our algorithm, we also tried to find, by trial and error, a number of normal directions for Baier et al. (2007)’s algorithm such that both algorithms produce a similar Hausdorff distance; however, we could not obtain results within 2 h. Therefore, for these two problems, we only compare the results of the two algorithms when solving the same number of optimization problems.

Example 4.5 This four-dimensional linear control system example has $$x_0=(0,0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U =\{x\in \mathbb{R}: |x| \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r r} -1 & 0 & 1 & 0\\ 0 & -2 & 0 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & -3 \end{array} \right]\!, B=\left[ \begin{array}{r} 0\\ 1\\ 1\\ 2 \end{array} \right]\!. \end{equation} We solve the problem using Algorithm 3.1 with $$\epsilon =0.2$$ and $$\epsilon =0.1$$. The number of LPs solved for $$\epsilon =0.2$$ and $$\epsilon =0.1$$ is $$323$$ and $$674$$, respectively. For comparison, we also solve the problem by Baier et al. (2007)’s algorithm using $$323$$ and $$674$$ normal directions (i.e., solving $$323$$ and $$674$$ LPs), respectively. The results are listed in Table 3.

Table 3 The Hausdorff distance, running time and the number of LPs for Examples 4.5–4.6 solved by the proposed Algorithm 3.1 and Baier et al. (2007)’s algorithm

Example | $$\epsilon$$ | Algorithm | LPs | $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ | Time (s)
4.5 | 0.2 | Proposed | 323 | 0.0648 | 4.032
4.5 | — | Baier et al. (2007) | 323 | 0.1715 | 12.5635
4.5 | 0.1 | Proposed | 674 | 0.0542 | 7.0072
4.5 | — | Baier et al. (2007) | 674 | 0.1372 | 30.5755
4.6 | 0.2 | Proposed | 4380 | 0.1247 | 90.7213
4.6 | — | Baier et al. (2007) | 4380 | 0.2104 | 422.3247
4.6 | 0.1 | Proposed | 14858 | 0.0542 | 875.2668
4.6 | — | Baier et al. (2007) | 14858 | 0.1475 | 2238.5415

Example 4.6 This five-dimensional linear control system example has $$x_0=(0,0,0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U =\{x\in \mathbb{R}: |x| \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r r r} -1 & 0 & 1 & 0 & 0\\ 0 & -2 & 0 & 0 & 0\\ 0 & 1 & -1 & 0 & -1\\ 0 & 0 & 0 & -3 & 1\\ 0 & 0 & 0 & 0 & -1 \end{array} \right]\!, B=\left[ \begin{array}{r} 0\\ 1\\ 1\\ 2\\ 3 \end{array} \right]\!. \end{equation} We solve the problem using Algorithm 3.1 with $$\epsilon =0.2$$ and $$\epsilon =0.1$$. The number of LPs solved for $$\epsilon =0.2$$ and $$\epsilon =0.1$$ is $$4380$$ and $$14858$$, respectively. For comparison, we also solve the problem by Baier et al. (2007)’s algorithm with $$4380$$ and $$14858$$ normal directions (i.e., solving $$4380$$ and $$14858$$ LPs), respectively. The results are listed in Table 3.
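For context on the direction counts above: in Baier et al. (2007)’s algorithm, each normal direction determines one supporting hyperplane of $$\mathcal{Y}$$ via a support-function LP, the intersection of the resulting halfspaces giving the outer approximation and the convex hull of the support points the inner one. A minimal sketch of this per-direction step, assuming for illustration that the lifted constraints have been assembled in linear form $$Gx \leq g$$ (as in Examples 4.5 and 4.6, where $$U$$ is an interval) and using names of our own, is:

```python
import numpy as np
from scipy.optimize import linprog

def supporting_halfspace(l, G, g, P):
    """One LP per normal direction l: maximize l^T (P x) over {x : G x <= g}.
    Returns the support value b and the support point y, so that l^T y' <= b
    for all y' in Y and the hyperplane l^T y = b supports Y at y."""
    res = linprog(c=-(P.T @ l), A_ub=G, b_ub=g,
                  bounds=[(None, None)] * G.shape[1])
    y = P @ res.x
    return float(l @ y), y
```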
Tables 2 and 3 suggest that the most critical factor influencing the computation time is the dimension $$p$$ of the system. To show how the computation time grows with increasing system dimension $$p$$, we plot computation time versus $$p$$ for Examples 4.3–4.6 solved by our algorithm with approximation error $$\epsilon=0.1$$ in Fig. 9. As $$p$$ increases the computation time increases, and far more dramatically for larger values of $$p$$. For example, as $$p$$ increases from $$4$$ to $$5$$, the number of LPs solved increases from $$674$$ to $$14858$$ and the computation time increases from $$7.0072$$ to $$875.2668$$ s.

Fig. 9. Computation time for two-dimensional to five-dimensional linear control systems.

It should be noted that parallel computation can be used in Algorithm 3.1 to speed up the computation for high-dimensional cases. Instead of checking the vertices $$s\in {\rm vert\,} \mathcal{S}^k$$ one by one to see whether $$s \in \mathcal{Y} \cup \mathcal{O}$$, several vertices of $${\rm vert\,} \mathcal{S}^k$$ can be checked at the same time by solving the problems $$({\rm P_1}(s))$$ in parallel; the supporting hyperplanes of $$\mathcal{Y}$$ are then constructed using (D$$_1({s})$$) and the outer approximation $$\mathcal{S}$$ is updated, as sketched below.
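A minimal sketch of this idea follows; solve_P1 is a hypothetical stand-in for the LP/NLP solver call of steps (k2)–(k4), and the worker count is illustrative.

```python
from multiprocessing import Pool

def check_vertex(s):
    # One independent optimization problem (P1(s)) per vertex; solve_P1 is a
    # placeholder for the solver call that returns the boundary point y and
    # the dual multiplier lambda of (P1(s))/(D1(s)).
    y, lam = solve_P1(s)
    return s, y, lam

def check_vertices_parallel(vertices, workers=4):
    """Solve (P1(s)) for a batch of unchecked vertices concurrently; the
    resulting cuts lambda^T y >= lambda^T y_k can then be added to S and the
    vertex enumeration updated once per batch."""
    with Pool(processes=workers) as pool:
        return pool.map(check_vertex, vertices)
```

Since the problems $$({\rm P_1}(s))$$ for different vertices are independent, the attainable speed-up is limited mainly by the sequential vertex-enumeration update.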
According to Theorem 3.2, the quality of the approximation produced by Algorithm 3.1 is guaranteed. In all six examples, Baier et al. (2007)’s method delivers a worse approximation. This indicates that the normal directions chosen in that method are not appropriate for finding an outer approximation $$\mathcal{Y}^o$$ and an inner approximation $$\mathcal{Y}^i$$ of $$\mathcal{Y}$$ with $$H(\mathcal{Y}^o,\mathcal{Y}^i)<\epsilon$$. Hence the method needs to be extended by mechanisms that control the normal directions in such a way that the quality of the approximation can be guaranteed.

5. Conclusion In this article, we have proposed a variant of Benson’s algorithm to find the reachable set of a discrete linear control system. We have proved that the algorithm is guaranteed to find an inner approximation and an outer approximation of the reachable set within a specified approximation error. To the best of our knowledge, this is the first algorithm that can guarantee the approximation quality in terms of Hausdorff distance. The algorithm can be applied not only to linear control systems but also to non-linear control systems with convex reachable sets. Numerical results show that our algorithm outperforms the algorithm of Baier et al. (2007) on all tested instances. Further research is needed to develop an algorithm with guaranteed approximation quality in terms of Hausdorff distance for non-linear control systems with non-convex reachable sets. Moreover, it is worth investigating the trade-off between approximation quality and computational effort, in particular concerning different discretization methods and the growth of computation time as $$p$$ increases.

Acknowledgements We would like to thank Professor Andreas Löhne from Friedrich-Schiller-Universität Jena and the anonymous reviewers for their insightful and detailed comments, which have greatly improved the quality of our article.

Funding This work was supported in part by the Beijing Natural Science Foundation (4152034) and the National Natural Science Foundation of China (11371053).

References

Baier, R., Büskens, C. & Chahma, I. A. (2007) Approximation of reachable sets by direct solution methods for optimal control problems. Optim. Method Softw., 22, 433–452.
Benson, H. (1998) An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J. Global Optim., 13, 1–24.
Botchkarev, O. & Tripakis, S. (2000) Verification of hybrid systems with linear differential inclusions using ellipsoidal approximations. Hybrid Systems: Computation and Control (Lynch, N. & Krogh, B. H. eds). Berlin, Heidelberg: Springer, pp. 73–88.
Chen, P. C., Hansen, P. & Jaumard, B. (1991) On-line and off-line vertex enumeration by adjacency lists. Oper. Res. Lett., 10, 403–409.
Dontchev, A. & Farkhi, E. (1989) Error estimates for discretized differential inclusions. Computing, 41, 349–358.
Ehrgott, M., Löhne, A. & Shao, L. (2012) A dual variant of Benson’s “outer approximation algorithm” for multiple objective linear programming. J. Global Optim., 52, 757–778.
Ehrgott, M., Shao, L. & Schöbel, A. (2011) An approximation algorithm for convex multi-objective programming problems. J. Global Optim., 50, 397–416.
Ferretti, R. (1997) High-order approximations of linear control systems via Runge–Kutta schemes. Computing, 58, 351–364.
Gayek, J. E. & Fisher, M. E. (1987) Approximating reachable sets for n-dimensional linear discrete systems. IMA J. Math. Control Inf., 4, 149–160.
Girard, A. (2005) Reachability of uncertain linear systems using zonotopes. Hybrid Systems: Computation and Control (Morari, M. & Thiele, L. eds). Berlin, Heidelberg: Springer, pp. 291–305.
Girard, A., Le Guernic, C. & Maler, O. (2006) Efficient computation of reachable sets of linear time-invariant systems with inputs. Hybrid Systems: Computation and Control (Hespanha, J. P. & Tiwari, A. eds), Vol. 3927. Berlin, Heidelberg: Springer, pp. 257–271.
Kostousova, E. K. (1997) On polyhedral estimation of the attainability domains of linear multistep systems. Autom. Remote Control, 58, 374–382.
Kostousova, E. K. (2001) Control synthesis via parallelotopes: optimization and parallel computations. Optim. Method Softw., 14, 267–310.
Kostousova, E. K. (2008) On the boundedness of outer polyhedral estimates for reachable sets of linear systems. Comput. Math. Math. Phys., 48, 918–932.
Kostousova, E. K. (2009) External polyhedral estimates for reachable sets of linear discrete-time systems with integral bounds on controls. Int. J. Pure Appl. Math., 50, 187–194.
Kurzhanski, A. & Vályi, I. (1991) Ellipsoidal techniques for dynamic systems: the problem of control synthesis. Dynam. Contr., 1, 357–378.
Kurzhanski, A. B. & Varaiya, P. (2000) Ellipsoidal techniques for reachability analysis: internal approximation. Syst. Control Lett., 41, 201–211.
Kurzhanski, A. B. & Varaiya, P. (2001) Dynamic optimization for reachability problems. J. Optimiz. Theory App., 108, 227–251.
Kurzhanski, A. B. & Varaiya, P. (2002) On ellipsoidal techniques for reachability analysis. Part I: external approximations. Optim. Method Softw., 17, 177–206.
Li, X., He, X., Zhong, Q. & Song, M. (2011) Reachable domain for satellite with two kinds of thrust. Acta Astronaut., 68, 1860–1864.
Löhne, A. (2011) Vector Optimization with Infimum and Supremum. Berlin, Heidelberg: Springer.
Löhne, A., Rudloff, B. & Ulus, F. (2014) Primal and dual approximation algorithms for convex vector optimization problems. J. Global Optim., 60, 713–736.
Mitchell, I. M. & Tomlin, C. J. (2003) Overapproximating reachable sets by Hamilton–Jacobi projections. J. Sci. Comput., 19, 323–346.
Parise, F., Valcher, M. E. & Lygeros, J. (2014) On the reachable set of the controlled gene expression system. 2014 IEEE 53rd Annual Conference on Decision and Control (CDC), Los Angeles, CA, USA, pp. 4597–4604.
Shao, L. & Ehrgott, M. (2008) Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Math. Methods Oper. Res., 68, 257–276.
Sonneborn, L. & Van Vleck, F. (1964) The bang-bang principle for linear control systems. J. Soc. Ind. Appl. Math. A, 2, 151–159.
Suchaitanawanit, B. & Cole, M. O. (2015) Mechanical structure optimization in minimum-time motion control of flexible bodies. Automatica, 62, 213–221.
Varaiya, P. (2000) Reach set computation using optimal control. Verification of Digital and Hybrid Systems (Inan, M. K. & Kurshan, R. P. eds). Berlin, Heidelberg: Springer, pp. 323–331.
Veliov, V. (1992) Second-order discrete approximation to linear differential inclusions. SIAM J. Control Optim., 29, 439–451.
Vinh, N. X., Gilbert, E. G., Howe, R. M., Sheu, D. & Lu, P. (1995) Reachable domain for interception at hyperbolic speeds. Acta Astronaut., 35, 1–8.
Vinnikov, E. (2015) Construction of attainable sets and integral funnels of nonlinear controlled systems in the Matlab environment. Comput. Math. Model., 26, 107–119.
Zivanovic, S. G. & Collins, P. (2012) Higher order methods for differential inclusions. Preprint arXiv:1206.6563.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
It has been suggested that the reachable set of a continuous control system can be approximated by the reachable set of its corresponding discrete system (see Vinnikov, 2015; Dontchev & Farkhi, 1989). Thus, using the convex property of reachable sets of linear systems, Baier et al. (2007) propose a direct solution method based on optimization techniques to approximate the discrete reachable set. The method is easy to implement. However, it needs to choose a finite number of normal directions in advance and the approximation error cannot be directly controlled. In this article, we pursue the idea of using the discrete reachable set to approximate its corresponding continuous reachable set. Followed the work of Benson’s outer approximation algorithm (Benson, 1998) and its variants (see e.g., Shao & Ehrgott, 2008; Ehrgott et al., 2011; Löhne et al., 2014) for solving multiobjective optimization problems, we propose a variant of Benson’s algorithm with outer and inner approximations to sandwich the discrete reachable set of a linear control system. The main contribution of our work is that by specifying an approximation error the approximation quality measured in Hausdorff distance between the outer and inner approximation sets can be directly controlled. This article is organized as follows. In Section 2, we introduce discrete linear control systems and provide a theoretical basis for using the discrete reachable set to approximate its continuous counterpart. Then, we propose a variant of Benson’s algorithm for solving discrete reachable sets and present an illustrative example in Section 3. Numerical results are given in Section 4. Finally, we draw some conclusions in Section 5. 2 Discrete reachable sets A discrete counterpart of linear control system ($${\rm P}_c$$), using a one-step discrete scheme, e.g. explicit Runge–Kutta scheme, on equidistant time grid points $$t_i = t_0 +ih$$, $$i = 0,1,\ldots, N$$, and step size $$h = (T -t_0)/N$$, is constructed as follows: \begin{align} ({\rm P}_d)\quad & x_h(t_{i+1}) = x_h(t_i) + h {\it{\Phi}}(x_h(t_i), u_h(t_i),t_i,h), i =0,1\ldots, N-1,\\ \end{align} (Pd(a)) \begin{align} & x_h(t_0) = x_0,\\ \end{align} (Pd(b)) \begin{align} & u_h \in U_h. \end{align} (Pd(c)) It needs to be noted that constraint $$u_h \in U_h$$ could be non-linear. For instance, if $$U_h = \{x\in \mathbb{R}^{m^\prime}: \|x\|_2^2 \leq K\}$$, where $$K$$ is a constant positive number, then the corresponding constraint $${u_h}_1^2+{u_h}_2^2+\ldots+{u_h}_{m^\prime}^2 - K \leq 0$$ is non-linear. The discretized system depends on the choice of parametrization $$u_h(\cdot)$$ of the control on the grid, e.g. a piecewise constant function, and on the incremental function $${\it{\Phi}}$$. In the simplest case we have $${\it{\Phi}}(x(t), u(t), t, h) = A(t)x(t) + B(t)u(t)$$, that is Euler method. Definition 2.1 The reachable set of discretized control system $$(\rm P_d)$$ for the time $$t_N$$ is defined as \begin{array}{l l l} {\mathcal R}_h(t_N, t_0, x_0) := &\{y\in \mathbb{R}^{p} : & \exists u_{h}(\cdot) \text{ and } x_h(\cdot) \text{ of } ({\rm P}_d) \text{ with } x_h(t_N)=y\}. \end{array} We also call $${\mathcal R}_h(t_N, t_0, x_0)$$ the discrete reachable set. Reachable set $${\mathcal R}_h(t_N, t_0, x_0)$$ of linear system ($${\rm P}_d$$) is convex compact and non-empty (Baier et al., 2007). Definition 2.2 Let $$\cal C$$ and $$\cal D$$ be two non-empty convex compact sets in $$\mathbb{R}^p$$. 
Then the Hausdorff distance between $$\cal C$$ and $$\cal D$$ is defined as \begin{equation} H(\mathcal{C},\mathcal{D}):=\max \{d(\mathcal{C},\mathcal{D}),d(\mathcal{D},\mathcal{C})\}, \end{equation} (2.2) where $$d(\mathcal{C}, \mathcal{D}) = \max \limits_{c \in \mathcal{C}} \min \limits_{d \in \mathcal{D}} \|c-d\|_2$$. Theorem 2.1 (Dontchev & Farkhi, 1989) Let $$N\in \mathbb N$$, $$h = (T-t_0)/N$$ be the step size of problem $$({\rm P}_d)$$, $$t_i = t_0 +ih$$, $$i = 0,1,\ldots, N$$. Let $${\mathcal R}(T,t_0,x_0)$$ and $${\mathcal R}_h(t_N,t_0,x_0)$$, respectively, be the reachable set of system $$({\rm P}_c)$$ and the reachable set of system $$({\rm P}_d)$$ with Euler scheme. Then there exists a constant $$M>0$$, such that $$H(\mathcal R(T,t_0,x_0), \mathcal R_h(t_N,t_0,x_0)) \leq Mh $$ holds. Theorem 2.1 shows that in the case of the Euler scheme is used for discretization, then the Hausdorff distance between the discrete reachable set and its continuous counterpart is bounded by an upper bound. Hence using $$\mathcal R_h$$ to approximate $$\mathcal R$$ is reasonable. It needs to be noted that different discretization methods will lead to different numerical bounds for the Hausdorff distance between the discrete reachable set and its continuous counterpart (see e.g., Veliov, 1992; Ferretti, 1997; Zivanovic & Collins, 2012). If a suitable discretization scheme is used, a small Hausdorff distance $$H(\mathcal R_h, \mathcal R)$$ can be expected. In this article, we use Euler scheme for discretization. 3. A variant of Benson’s algorithm for finding discrete reachable sets In this section, we are going to propose a variant of Benson’s algorithm to solve the reachable set of discretized control system $$(P_d)$$ ($$(P_d)$$ is obtained by the discretization of $$(P_c)$$ with Euler method). Let us first rewrite the constraints $$({\rm P}_d(a))$$, $$({\rm P}_d(b))$$ and $$({\rm P}_d(c))$$ into the generalized form of $$g(x)\leq 0$$. The discrete reachable set $${\mathcal R}_h(t_N, t_0, x_0)$$ problem can be written as follows. \begin{eqnarray*} \begin{array}{l r c} (\rm P_\mathcal{Y}) & \mbox{Find} & \mathcal{Y} = \{Px: x \in \mathcal{X}\}\\ & \mbox{s.t.} & \mathcal{X} = \{x\in \mathbb{R}^n : g(x)\leq 0\}, \end{array} \end{eqnarray*} where $$x = (u_h(t_0)^{\top},\ldots, u_h(t_{N-1})^{\top}, x_h(t_0)^{\top}, \ldots, x_h(t_{N-1})^{\top},x_h(t_{N})^{\top})^{\top}$$ are variables, $$P=[{\textbf{O}}_{\it{p\times(n-p)}}, {\textbf{I}}_{\it{p\times p}}]$$ is a matrix with the last $$p$$ columns being identity matrix while the rest being zero matrix. Here we assume that $$\exists x$$ such that $$g(x) < 0$$. Problem $$(\rm P_\mathcal{Y})$$ can also be seen as finding the feasible set in objective space of the following multiple objective optimization problem \begin{eqnarray*} \begin{array}{l l l} ({\rm MOP}) & \min & Px\\ & \mbox{s.t.} & x\in \mathcal{X}=\{x\in \mathbb{R}^n: g(x) \leq 0\}, \end{array} \end{eqnarray*} where $$\mathcal{X}$$ is the feasible set in decision space $$\mathbb{R}^n$$ and \begin{equation*} \mathcal{Y} := \{Px: x\in \mathcal{X}\} \end{equation*} is called the feasible set in objective space $$\mathbb{R}^p$$. In multi-objective optimization, several objective functions have to be minimized. The objectives are usually conflicting so that a feasible solution optimizing all objectives simultaneously does not exist in most cases. Therefore, the purpose of multi-objective optimization is to obtain non-dominated points rather than optimal values. 
Researchers developed various MOP algorithms which work in objective space to obtain the non-dominated set. For example, Benson (1998) has proposed an outer approximation method to find the feasible set in objective space of an multiobjective linear programming problem (MOLP). Ehrgott et al. (2012) developed a dual variant of Benson’s algorithm to solve MOLP. Moreover, extensions of Benson’s outer approximation algorithm for solving convex multiobjective programming problems can also be found in Ehrgott et al. (2011) and Löhne et al. (2014). Since the goal of multiobjective optimization is to obtain non-dominated sets, extended versions of Benson’s algorithms of Ehrgott et al. (2011) and Löhne et al. (2014) cannot be directly used to solve problem $$(\rm P_\mathcal{Y})$$. Therefore in this section, we are going to propose a variant of Benson’s algorithm to solve problem $$(\rm P_\mathcal{Y})$$. The following notations are used. For a set $$\mathcal{A} \subseteq \mathbb{R}^p$$, we denote the interior, boundary and convex hull of $$\mathcal{A}$$ by $${\rm int \,} \mathcal{A}$$, $${\rm bd \,} \mathcal{A}$$ and $${\rm conv \,} \mathcal{A}$$, respectively. Suppose $$\mathcal{A}$$ is a non-empty polyhedral convex compact set, we denote all vertices of $$\mathcal{A}$$ by $${\rm vert\,} \mathcal{A}$$. A subset $$\mathcal{F}$$ of a convex set $$\mathcal{A}$$ is called an exposed face of $$\mathcal{A}$$ if there exists a supporting hyperplane $$\cal H$$ to $$\mathcal{A}$$, with $$\mathcal{F} = \mathcal{A} \cap \cal H.$$ The variant of Benson’s algorithm to be proposed considers solving a generalized form of ($$\rm P_\mathcal{Y}$$) with $$Px$$ being linear functions and $$g(x)$$ being convex functions. It is obvious both $$\mathcal{X}$$ and $$\mathcal{Y}$$ are convex sets. The idea of the algorithm can be summarized as follows. Let $$\epsilon > 0$$ be an approximation error. Our algorithm starts with a polyhedron $$\mathcal{S}$$, covering the set $$\mathcal{Y}$$. Then a relative interior point $$\hat p$$ of $$\mathcal{Y}$$ is found. In each iteration, we check vertices $$s$$ of $$\mathcal{S}$$ to see if they are contained in $$\mathcal{Y}$$ or not. If not, we calculate the corresponding boundary point $$y$$ of $$\mathcal{Y}$$ on the line segment between $$s$$ and $$\hat p$$, and then measure the distance $$d(s,y)$$ between $$s$$ and $$y$$. If the vertex $$s \in \mathcal{Y}$$ or if $$d(s,y) \leq \epsilon$$, we add $$s$$ to the outer approximation $$\mathcal{O}$$ and its corresponding boundary point $$y$$ to the inner approximation $$\mathcal{I}$$ (note: if $$s \in \mathcal{Y}$$, its corresponding boundary point is $$s$$ itself). On the other hand, if a vertex of the polyhedron $$\mathcal{S}$$ is found at a distance greater than $$\epsilon$$ from its corresponding boundary point, a hyperplane separating it from $$\mathcal{Y}$$ is added to the description of $$\mathcal{S}$$ and the procedure is repeated until all the vertices of $$\mathcal{S}$$ are either contained in $$\mathcal{Y}$$ or at a distance less than $$\epsilon$$ from their corresponding boundary points. In order to construct the hyperplane separating a point $$s \notin \mathcal{Y}$$ from $$\mathcal{Y}$$, the following procedure is used: the point $$s$$ and $$\hat p$$ are connected by a line segment. This line segment intersects the boundary of $$\mathcal{Y}$$ at a point $$y$$. The idea is to construct a hyperplane which supports $$\mathcal{Y}$$ in $$y$$ and hence separates $$s$$ and $$\mathcal{Y}$$. 
The following primal dual convex programming pair (P$$_1(\hat{y})$$) and (D$$_1(\hat{y})$$) depending on $$\hat{y} \in \mathbb R^p$$ is needed for that purpose. $$(P_1(\hat{y}))\hspace{50mm} \displaystyle{\min : z \quad \quad g(x) \leq 0, Px-\hat{y}-z\bar n = 0}$$ where $$\bar{n}= \hat p-\hat{y}$$, $$\hat p$$ is an interior point of $$\mathcal{Y}$$. The Lagrangian $$L:(\mathbb{R}^n\times\mathbb{R})\times(\mathbb{R}^m\times \mathbb{R}^p)\rightarrow\mathbb{R}$$ associated with the problem (P$$_1(\hat{y})$$) can be defined as $$L(x,z,u,\lambda):=z+u^{\top}g(x)+\lambda^{\top}(Px-\hat{y}-z\bar n),$$ which yields the dual problem $$(D_1(\hat{y})) \hspace{50mm} \displaystyle{\max : \inf\left\{u^{\top}g(x)+\lambda ^{\top}Px\right\}-\lambda^{\top}\hat{y} \quad \quad u \geq 0, \lambda^{\top}\bar n = 1}$$ Note that the objective value of (P$$_1(\hat{y})$$) is strictly greater than zero if $$\hat{y}\notin \mathcal{Y}$$, is equal to zero if $$\hat{y} \in {\rm bd \,} \mathcal{Y}$$ and is strictly less than zero if $$\hat{y} \in {\rm int \,} \mathcal{Y}$$. In our problem, the parameter vector $$\hat{y} \in \mathbb{R}^q$$ typically does not belong to $$\mathcal{Y}$$. Problem (P$$_1(\hat{y})$$) has a unique optimal solution due to the convexity of $$\mathcal{Y}$$ and the intermediate value theorem since $$Px$$ are continuous. We denote the optimal solution of (P$$_1(\hat{y})$$) by $$(\bar x,\bar z)$$, then $$\bar y =P\bar x$$ is a boundary point of $$\mathcal{Y}$$. The following Proposition 3.1 and Theorem 3.1 provide theoretical basis for constructing a supporting hyperplane of $$\mathcal{Y}$$. Proposition 3.1 Let $$\hat{y} \in {\mathbb R}^p$$ and let $$(x^*,z^*)$$ be an optimal solution of $${\rm P_1}(\hat y)$$ and $$(u^*,\lambda^*)$$ be an optimal solution of $${\rm D_1}(\hat y)$$. Then the optimal objective values of $${\rm P_1}(\hat y)$$ and $${\rm D_1}(\hat y)$$ coincide, i.e., $$\inf\limits_{x\in\mathbb{R}^n}\{{u^*}^{\top}g(x)+{\lambda^*}^{\top}Px\}-{\lambda^*}^{\top}\hat{y} = z^*.$$ Proof. Since for problem $$(\rm P_\mathcal{Y})$$, we assume that $$\exists x$$ such that $$g(x) < 0$$, i.e., slater condition is satisfied, convex programming duality implies the existence of a solution $$(u,\lambda)$$ of (D$$_1(\hat{y})$$) and coincidence of the optimal values. □ Theorem 3.1 Let $$\hat{y} \in {\mathbb R}^p$$ and let $$(x^*,z^*)$$ be an optimal solution of $${\rm P_1}(\hat y)$$ and $$(u^*,\lambda^*)$$ be an optimal solution of $${\rm D_1}(\hat y)$$. Then $${\lambda^*}^{\top}y \geq {\lambda^*}^{\top}Px^* \mbox{ for all } y \in \mathcal{Y}$$, i.e., the hyperplane $$H:=\{y \in {\mathbb R}^p: {\lambda^*}^{\top}y = {\lambda^*}^{\top}Px^* \}$$ is a supporting hyperplane of $$\mathcal{Y}$$ at $$Px^*=\hat{y}+z^*\bar{n}$$. Proof. Let $$p$$ be a point in $$\mathcal{Y}$$, then the optimal value of (P$$_1(p)$$) must be less than or equal to zero. Let $$\mathcal{D} = \{(u,\lambda): u \geq 0, \lambda^{\top}\bar n = 1\}$$. 
Due to the weak duality between (P$$_1(p)$$) and (D$$_1(p)$$), we have $$\sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}p) \leq 0.$$ Furthermore, \begin{eqnarray*} & & \sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}p)\\ &= &\sup\limits_{(u,\lambda) \in \mathcal{D}}\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}\hat{y}+\lambda^{\top}\hat{y}-\lambda^{\top}p)\\ &=&\sup\limits_{(u,\lambda) \in \mathcal{D}}\{\inf\limits_{x\in\mathbb{R}^n}(u^{\top}g(x)+\lambda^{\top} Px-\lambda^{\top}\hat{y})+\lambda^{\top}\hat{y}-\lambda^{\top}p\}\\ &\geq &\inf\limits_{x\in\mathbb{R}^n}({u^*}^{\top}g(x)+{\lambda^*}^{\top} Px-{\lambda^*}^{\top}\hat{y})+{\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p\\ &=&z^*+{\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p, \end{eqnarray*} we have $$z^*+ {\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p \leq 0.$$ Moreover, because $$(x^*,z^*)$$ is an optimal solution of $${\rm P_1}(\hat y)$$, we have $$Px^* = (\hat y + z^* \bar n)$$. Therefore, $$ {\lambda^*}^{\top} Px^* = {\lambda^*}^{\top} (\hat y + z^* \bar n) = {\lambda^*}^{\top} \hat y + z^* \underbrace{(\lambda^*)^{\top} \bar n}_{=1} = {\lambda^*}^{\top} \hat y + z^*. $$ Now substitute $$z^* = {\lambda^*}^{\top} Px^* - {\lambda^*}^{\top} \hat y$$ into $$z^*+ {\lambda^*}^{\top}\hat{y}-{\lambda^*}^{\top}p \leq 0$$, we have $${\lambda^*}^{\top} Px^* -{\lambda^*}^{\top}p \leq 0,$$ i.e., for every $$p \in \mathcal{Y}$$, that $${\lambda^*}^{\top} p \geq {\lambda^*}^{\top}Px^*$$ holds, thus completes the proof. □ According to Theorem 3.1, for a given point $$\hat y$$, an optimal solution of (D$$_1(\hat{y})$$) can be used to obtain the required supporting hyperplane of $$\mathcal{Y}$$. We now state the algorithm in its general form. Algorithm 3.1 (A variant of Benson’s algorithm for solving $$(\rm P_\mathcal{Y})$$) Initialization. (i1) Construct a polytope $$\mathcal{S}^0 \supseteq \mathcal{Y}$$ and find $$\hat p \in {\rm int \,} \mathcal{Y}$$. (i2) Let $$\mathcal{O}:=\emptyset$$, $${\cal I}:=\{ {\tilde{y}}^1,\ldots, {\tilde{y}}^p, y'\}$$ (see the construction of $$\mathcal{S}^0$$ below for the calculation of $${\tilde{y}}^i$$, $$i=1,\ldots, p$$ and $$y'$$). Set $$\epsilon > 0$$, $$k=0$$ and go to iteration $$k$$. Iteration $$k$$. (k1) If, for each $$s\in {\rm vert\,} \mathcal{S}^k, s \in \mathcal{Y} \cup \mathcal{O}$$ is satisfied, then go to step (k6). Otherwise, choose any $$s^k \in {\rm vert\,} \mathcal{S}^k \setminus ({\mathcal{O}} \cup \mathcal{Y})$$ and continue. (k2) Compute the unique boundary point $$y^k$$ of $$\mathcal{Y}$$ by solving $${\rm P_1}(s^k)$$. (k3) If the distance $$d(s^k,y^k)$$ from $$s^k$$ to $$y^k$$ is at most $$\epsilon$$, then add $$s^k$$ to $$\mathcal{O}$$ and add $$y^k$$ to $$\cal I$$. Go to (k1). (k4) If the distance $$d(s^k,y^k) > \epsilon$$, then determine a hyperplane $$H:=\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k \}$$ separating $$s^k$$ from $$\mathcal{Y}$$ and set $$\mathcal{S}^{k+1} = \mathcal{S}^k\cap\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k\}$$. (k5) Determine $${\rm vert\,} \mathcal{S}^{k+1}$$. Set $$k = k+1$$ and go to (k1). (k6) Define the vertex set of the outer approximation $$V_{o}(\mathcal{S}^k) = {\rm vert\,} \mathcal{S}^k$$ and define the vertex set of the inner approximation $$V_{i}(\mathcal{S}^k) =({\rm vert\,} \mathcal{S}^k \setminus \mathcal{O})\cal \cup \cal I$$. 
Results. (r1) Let $$\mathcal{Y}^{i}= {\rm conv \,} V_{i}(\mathcal{S}^k)$$. $$\mathcal{Y}^i$$ represents the inner approximation of $$\mathcal{Y}$$ ($$\mathcal{Y}^i \subseteq \mathcal{Y}$$). (r2) Let $$\mathcal{Y}^{o} = \mathcal{S}^k$$. $$\mathcal{Y}^o$$ represents the outer approximation of $$\mathcal{Y}$$ ($$\mathcal{Y} \subseteq \mathcal{Y}^o$$). Initialization. (i1) Construct a polytope $$\mathcal{S}^0 \supseteq \mathcal{Y}$$ and find $$\hat p \in {\rm int \,} \mathcal{Y}$$. (i2) Let $$\mathcal{O}:=\emptyset$$, $${\cal I}:=\{ {\tilde{y}}^1,\ldots, {\tilde{y}}^p, y'\}$$ (see the construction of $$\mathcal{S}^0$$ below for the calculation of $${\tilde{y}}^i$$, $$i=1,\ldots, p$$ and $$y'$$). Set $$\epsilon > 0$$, $$k=0$$ and go to iteration $$k$$. Iteration $$k$$. (k1) If, for each $$s\in {\rm vert\,} \mathcal{S}^k, s \in \mathcal{Y} \cup \mathcal{O}$$ is satisfied, then go to step (k6). Otherwise, choose any $$s^k \in {\rm vert\,} \mathcal{S}^k \setminus ({\mathcal{O}} \cup \mathcal{Y})$$ and continue. (k2) Compute the unique boundary point $$y^k$$ of $$\mathcal{Y}$$ by solving $${\rm P_1}(s^k)$$. (k3) If the distance $$d(s^k,y^k)$$ from $$s^k$$ to $$y^k$$ is at most $$\epsilon$$, then add $$s^k$$ to $$\mathcal{O}$$ and add $$y^k$$ to $$\cal I$$. Go to (k1). (k4) If the distance $$d(s^k,y^k) > \epsilon$$, then determine a hyperplane $$H:=\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k \}$$ separating $$s^k$$ from $$\mathcal{Y}$$ and set $$\mathcal{S}^{k+1} = \mathcal{S}^k\cap\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k\}$$. (k5) Determine $${\rm vert\,} \mathcal{S}^{k+1}$$. Set $$k = k+1$$ and go to (k1). (k6) Define the vertex set of the outer approximation $$V_{o}(\mathcal{S}^k) = {\rm vert\,} \mathcal{S}^k$$ and define the vertex set of the inner approximation $$V_{i}(\mathcal{S}^k) =({\rm vert\,} \mathcal{S}^k \setminus \mathcal{O})\cal \cup \cal I$$. Results. (r1) Let $$\mathcal{Y}^{i}= {\rm conv \,} V_{i}(\mathcal{S}^k)$$. $$\mathcal{Y}^i$$ represents the inner approximation of $$\mathcal{Y}$$ ($$\mathcal{Y}^i \subseteq \mathcal{Y}$$). (r2) Let $$\mathcal{Y}^{o} = \mathcal{S}^k$$. $$\mathcal{Y}^o$$ represents the outer approximation of $$\mathcal{Y}$$ ($$\mathcal{Y} \subseteq \mathcal{Y}^o$$). Initialization. (i1) Construct a polytope $$\mathcal{S}^0 \supseteq \mathcal{Y}$$ and find $$\hat p \in {\rm int \,} \mathcal{Y}$$. (i2) Let $$\mathcal{O}:=\emptyset$$, $${\cal I}:=\{ {\tilde{y}}^1,\ldots, {\tilde{y}}^p, y'\}$$ (see the construction of $$\mathcal{S}^0$$ below for the calculation of $${\tilde{y}}^i$$, $$i=1,\ldots, p$$ and $$y'$$). Set $$\epsilon > 0$$, $$k=0$$ and go to iteration $$k$$. Iteration $$k$$. (k1) If, for each $$s\in {\rm vert\,} \mathcal{S}^k, s \in \mathcal{Y} \cup \mathcal{O}$$ is satisfied, then go to step (k6). Otherwise, choose any $$s^k \in {\rm vert\,} \mathcal{S}^k \setminus ({\mathcal{O}} \cup \mathcal{Y})$$ and continue. (k2) Compute the unique boundary point $$y^k$$ of $$\mathcal{Y}$$ by solving $${\rm P_1}(s^k)$$. (k3) If the distance $$d(s^k,y^k)$$ from $$s^k$$ to $$y^k$$ is at most $$\epsilon$$, then add $$s^k$$ to $$\mathcal{O}$$ and add $$y^k$$ to $$\cal I$$. Go to (k1). (k4) If the distance $$d(s^k,y^k) > \epsilon$$, then determine a hyperplane $$H:=\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k \}$$ separating $$s^k$$ from $$\mathcal{Y}$$ and set $$\mathcal{S}^{k+1} = \mathcal{S}^k\cap\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k\}$$. 
(k5) Determine $${\rm vert\,} \mathcal{S}^{k+1}$$. Set $$k = k+1$$ and go to (k1). (k6) Define the vertex set of the outer approximation $$V_{o}(\mathcal{S}^k) = {\rm vert\,} \mathcal{S}^k$$ and define the vertex set of the inner approximation $$V_{i}(\mathcal{S}^k) =({\rm vert\,} \mathcal{S}^k \setminus \mathcal{O})\cal \cup \cal I$$. Results. (r1) Let $$\mathcal{Y}^{i}= {\rm conv \,} V_{i}(\mathcal{S}^k)$$. $$\mathcal{Y}^i$$ represents the inner approximation of $$\mathcal{Y}$$ ($$\mathcal{Y}^i \subseteq \mathcal{Y}$$). (r2) Let $$\mathcal{Y}^{o} = \mathcal{S}^k$$. $$\mathcal{Y}^o$$ represents the outer approximation of $$\mathcal{Y}$$ ($$\mathcal{Y} \subseteq \mathcal{Y}^o$$). Initialization. (i1) Construct a polytope $$\mathcal{S}^0 \supseteq \mathcal{Y}$$ and find $$\hat p \in {\rm int \,} \mathcal{Y}$$. (i2) Let $$\mathcal{O}:=\emptyset$$, $${\cal I}:=\{ {\tilde{y}}^1,\ldots, {\tilde{y}}^p, y'\}$$ (see the construction of $$\mathcal{S}^0$$ below for the calculation of $${\tilde{y}}^i$$, $$i=1,\ldots, p$$ and $$y'$$). Set $$\epsilon > 0$$, $$k=0$$ and go to iteration $$k$$. Iteration $$k$$. (k1) If, for each $$s\in {\rm vert\,} \mathcal{S}^k, s \in \mathcal{Y} \cup \mathcal{O}$$ is satisfied, then go to step (k6). Otherwise, choose any $$s^k \in {\rm vert\,} \mathcal{S}^k \setminus ({\mathcal{O}} \cup \mathcal{Y})$$ and continue. (k2) Compute the unique boundary point $$y^k$$ of $$\mathcal{Y}$$ by solving $${\rm P_1}(s^k)$$. (k3) If the distance $$d(s^k,y^k)$$ from $$s^k$$ to $$y^k$$ is at most $$\epsilon$$, then add $$s^k$$ to $$\mathcal{O}$$ and add $$y^k$$ to $$\cal I$$. Go to (k1). (k4) If the distance $$d(s^k,y^k) > \epsilon$$, then determine a hyperplane $$H:=\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k \}$$ separating $$s^k$$ from $$\mathcal{Y}$$ and set $$\mathcal{S}^{k+1} = \mathcal{S}^k\cap\{y \in {\mathbb R}^p: y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k\}$$. (k5) Determine $${\rm vert\,} \mathcal{S}^{k+1}$$. Set $$k = k+1$$ and go to (k1). (k6) Define the vertex set of the outer approximation $$V_{o}(\mathcal{S}^k) = {\rm vert\,} \mathcal{S}^k$$ and define the vertex set of the inner approximation $$V_{i}(\mathcal{S}^k) =({\rm vert\,} \mathcal{S}^k \setminus \mathcal{O})\cal \cup \cal I$$. Results. (r1) Let $$\mathcal{Y}^{i}= {\rm conv \,} V_{i}(\mathcal{S}^k)$$. $$\mathcal{Y}^i$$ represents the inner approximation of $$\mathcal{Y}$$ ($$\mathcal{Y}^i \subseteq \mathcal{Y}$$). (r2) Let $$\mathcal{Y}^{o} = \mathcal{S}^k$$. $$\mathcal{Y}^o$$ represents the outer approximation of $$\mathcal{Y}$$ ($$\mathcal{Y} \subseteq \mathcal{Y}^o$$). In Algorithm 3.1, at initialization a polyhedral convex set $$\mathcal{S}^0$$ need to be constructed. According to Benson (1998), the initial outer approximation $$\mathcal{S}^0$$ can be constructed as follows. Let \begin{equation} \hat \beta = \min \{e^{\top} y: y\in \mathcal{Y} \}, \end{equation} (3.1) where $$e \in {\mathbb R}^p$$ is a vector in which each entry is 1 and $$\mathcal{Y}$$ is described in $$(\rm P_\mathcal{Y})$$. We denote the optimal solution of (3.1) by $$y'$$. Compute an optimal solution $${\tilde{y}}^{i}$$ of \begin{equation} \max \{y_i: y\in \mathcal{Y} \}, i=1,\ldots, p, \end{equation} (3.2) and let $$y^{AI} = ({\tilde{y}}^1_1, {\tilde{y}}^2_2,\ldots, {\tilde{y}}^p_p)^{\top}$$. Define $$p+1$$ points $$v^k \in {\mathbb R}^p$$, $$k = 0, 1, \dots, p$$. 
Let $$v^0 = y^{AI}$$ and, for $$k = 1,2,\dots,p$$, let \begin{eqnarray} v_l^k = \left \{ \begin{array}{@{}l l@{}} y_l^{AI}, & \textrm{if}~ l \neq{k}, \\ \hat \beta + {y}^{AI}_k - e^{\top} v^0, & \textrm{if}~ l = k, \end{array}\right. \end{eqnarray} (3.3)$$l = 1, 2, \ldots, p$$. Then $$\mathcal{S}^0$$ can be constructed as the convex hull of $$\{v^k:k =0,1,\dots,p\}$$. Obviously $$\mathcal{S}^0$$ is a $$p$$-dimensional simplex and it contains $$\mathcal{Y}$$. Moreover, in Algorithm 3.1 at initialization, an interior point $$\hat p$$ of $$\mathcal{Y}$$ need to be found. We refer Algorithm 3 of Löhne (2011) to calculate an interior point. Remark 3.1 In Algorithm 3.1, for each $$k\geq 0$$, the inequality cut $$y^{\top} \lambda^k \geq {y^k}^{\top}\lambda^k$$ is constructed so that $$\mathcal{S}^{k+1}$$ ‘cuts off’ a portion of $$\mathcal{S}^k$$ containing $$y^k$$ and at the same time point $$y^k$$ is added to $$V_i(\mathcal{S}^k)$$. Thus Algorithm 3.1 constructs a sequence of polyhedra $$\mathcal{S}^{k}$$ with $$\mathcal{S}^0\supset \ldots \mathcal{S}^{k}\supset \mathcal{Y}$$ approximating $$\mathcal{Y}$$ from the outside. It also implicitly constructs a sequence of polyhedra $${\rm conv \,} V_i(\mathcal{S}^{k})$$ with $${\rm conv \,} V_i(\mathcal{S}^{0}) \subseteq \cdots \subseteq {\rm conv \,} V_i(\mathcal{S}^k)\subseteq \mathcal{Y}$$ approximating $$\mathcal{Y}$$ from the inside. Proposition 3.2 Let $$\bar y^i$$ denote the boundary point of $$\mathcal{Y}$$ obtained by solving $${\rm P_1}(s^i)$$ for $$s^i \in {\rm vert\,} \mathcal{S}^k$$. Then $$H(\mathcal{S}^k, \mathcal{Y})\leq \max \limits_{s^i\in {\rm vert\,} \mathcal{S}^k} \| \bar y^i-s^i\|$$. Proof. Since $$\mathcal{Y} \subseteq \mathcal{S}^k$$ and $$d(s^i, \mathcal{Y})$$ is a convex function of variable $$s^i$$, one has $$H(\mathcal{S}^k, \mathcal{Y}) = \max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k} d(s^i, \mathcal{Y}).$$ According to the fact that $$\bar y^i \in \mathcal{Y}$$ for every $$s^i \in {\rm vert\,} \mathcal{S}^k$$, we obtain \[\max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k} d(s^i, \mathcal{Y})\leq \max \limits_{s^i \in {\rm vert\,} \mathcal{S}^k}\| \bar y^i-s^i\|.\] □ Theorem 3.2 When the algorithm terminates, we have $$\mathcal{Y}^i\subseteq \mathcal{Y} \subseteq \mathcal{Y}^o$$ and $$H(\mathcal{Y}^i, \mathcal{Y}^o) \leq \epsilon$$. Proof. Since $$\epsilon >0$$, when the algorithm terminates, we have for each $$s^k \in {\rm vert\,} \mathcal{S}^k$$, $$\exists y^k$$ be a solution of $$P_1(\hat y)$$, such that $$d(s^k,y^k)< \epsilon$$, i.e., $$H(\mathcal{S}_k, {\rm conv \,} (V_i(\mathcal{S}^k))) <\epsilon$$ is satisfied. □ Remark 3.2 In the case of $$\mathcal{Y}$$ being a polytope, even if $$\epsilon$$ is set to be $$0$$, the sandwich algorithm is finite. This is because at each iteration, the algorithm finds an exposed face on the supporting hyperplane $$y^{\top}{\lambda^k}= {y^k}^{\top}{\lambda^k}$$ of $$\mathcal{Y}$$. Since the number of exposed faces of polytope $$\mathcal{Y}$$ is limited, thus after finite number of iterations, $$\mathcal{S}^k=\mathcal{Y}$$. Theorem 3.3 gives the total approximation error bound for using the discrete reachable set to approximate its continuous counterpart. Theorem 3.3 Let $$\mathcal R_h = \mathcal R_h(t_N, t_0, X_0)$$ be the reachable set of discrete control system $$(\rm P_d)$$ at time $$t_N$$, $$\mathcal R = \mathcal R(T, t_0, X_0)$$ be the reachable set of continuous control system $$(\rm P_c)$$ at time $$T$$. 
If $$H(\mathcal R,\mathcal R_h) \leq Mh$$, then we have $$H({\mathcal R},\mathcal{Y}^i) \leq Mh+\epsilon$$ and $$H({\mathcal R}, \mathcal{Y}^o) \leq Mh+\epsilon,$$ where $$\epsilon$$ is the specified approximation error. Proof. We just apply the triangle inequality for the Hausdorff distance and combine the convergence assumption with the results of Theorem 3.2, since the reachable sets $${\mathcal R}_h$$ are bounded and the assumption yields compactness and non-emptiness. □ According to Theorem 3.3, one can set $$\epsilon \ll Mh$$, then the Hausdorff distance between $${\mathcal R}$$ and the inner approximation $$\mathcal{Y}^i$$ or the outer approximation $$\mathcal{Y}^o$$ mainly depends on the discretization. Now we give an example to illustrate Algorithm 3.1. Example 3.4 Consider the following problem \begin{array}{l l l} \mbox{Find} & \mathcal{Y} =\{y \in \mathbb{R}^2: y= x\} \\ \mbox{s.t.}&(x_1-2)^2+(x_2-2)^2&\leq 4\\ & 0&\leq x_1\\ & 0&\leq x_2.\\ \end{array} We use Algorithm 3.1 to solve the problem. The approximation error $$\epsilon$$ is set to be $$0.4$$. Figure 1(a) shows $$\mathcal{Y}, \mathcal{S}^0$$, $$V_i(\mathcal{S}^0)$$ (symbol ‘+’) and the interior point $$\hat p=(2.1953,2.1953)$$. Figure 1(b–f) show the first to the fifth cut. In the figure, empty circles represent vertices being cut off, the symbol ‘+’ represents the boundary points, dotted line segments between the vertices and the interior point show the vertices being checked at iteration $$k-1$$. As it can be seen, $$\mathcal{S}^k$$ and $$V_i(\mathcal{S}^k)$$ change with iteration $$k$$. The vertex $$s^k$$ being checked, its corresponding boundary point $$y^k$$, the distance $$d(s^k,y^k)$$ between $$s^k$$ and $$y^k$$ and the cutting plane for each iteration are listed in Table 1. 
Table 1 $$s^k$$, $$y^k$$, $$d(s^k,y^k)$$ and the cutting planes in the first six iterations of the algorithm k $$s^k$$ $$y^k$$ $$d(s^k,y^k)$$ Cut 0 ($$-$$2.8284,4) (0.2170,2.9060) 3.2359 0.1683$$y_1$$ $$-$$ 0.0855$$y_2$$ = $$-$$0.2120 1 (4,$$-$$2.8284) (2.9060,0.2170) 3.2359 $$-$$0.0855$$y_1$$ + 0.1683$$y_2$$ = $$-$$0.2120 2 (4,4) (3.4142,3.4142) 0.8284 $$-$$0.2770 $$y_1$$ $$-$$ 0.2770$$y_2$$ = $$-$$1.8918 3 ($$-$$0.4405,1.6120) (0.0205,1.7141) 0.4722 0.3676 $$y_1$$ + 0.0531$$y_2$$ = 0.0986 (0.7728,4) (0.9890,3.7257) 0.3493 $$-$$ 4 (4,0.7728) (3.7257,0.9890) 0.3493 $$-$$ (1.6120,$$-$$0.4405) (1.7141,0.0205) 0.4722 0.0531 $$y_1$$ + 0.3676$$y_2$$ = 0.0986 (0.8284,4) (2.7738,3.8442) 0.1651 $$-$$ (4,2.8284) (3.8442,2.7738) 0.1651 $$-$$ 5 ($$-$$0.0700,2.3411) (0.0282,2.3348) 0.0985 $$-$$ (0.1156,1.0560) (0.2101,1.1077) 0.1077 $$-$$ (2.3411,$$-$$0.0700) (2.3348,0.0282) 0.0985 $$-$$ (1.0560,0.1156) (1.1077,0.2101) 0.1077 $$-$$ k $$s^k$$ $$y^k$$ $$d(s^k,y^k)$$ Cut 0 ($$-$$2.8284,4) (0.2170,2.9060) 3.2359 0.1683$$y_1$$ $$-$$ 0.0855$$y_2$$ = $$-$$0.2120 1 (4,$$-$$2.8284) (2.9060,0.2170) 3.2359 $$-$$0.0855$$y_1$$ + 0.1683$$y_2$$ = $$-$$0.2120 2 (4,4) (3.4142,3.4142) 0.8284 $$-$$0.2770 $$y_1$$ $$-$$ 0.2770$$y_2$$ = $$-$$1.8918 3 ($$-$$0.4405,1.6120) (0.0205,1.7141) 0.4722 0.3676 $$y_1$$ + 0.0531$$y_2$$ = 0.0986 (0.7728,4) (0.9890,3.7257) 0.3493 $$-$$ 4 (4,0.7728) (3.7257,0.9890) 0.3493 $$-$$ (1.6120,$$-$$0.4405) (1.7141,0.0205) 0.4722 0.0531 $$y_1$$ + 0.3676$$y_2$$ = 0.0986 (0.8284,4) (2.7738,3.8442) 0.1651 $$-$$ (4,2.8284) (3.8442,2.7738) 0.1651 $$-$$ 5 ($$-$$0.0700,2.3411) (0.0282,2.3348) 0.0985 $$-$$ (0.1156,1.0560) (0.2101,1.1077) 0.1077 $$-$$ (2.3411,$$-$$0.0700) (2.3348,0.0282) 0.0985 $$-$$ (1.0560,0.1156) (1.1077,0.2101) 0.1077 $$-$$ Table 1 $$s^k$$, $$y^k$$, $$d(s^k,y^k)$$ and the cutting planes in the first six iterations of the algorithm k $$s^k$$ $$y^k$$ $$d(s^k,y^k)$$ Cut 0 ($$-$$2.8284,4) (0.2170,2.9060) 3.2359 0.1683$$y_1$$ $$-$$ 0.0855$$y_2$$ = $$-$$0.2120 1 (4,$$-$$2.8284) (2.9060,0.2170) 3.2359 $$-$$0.0855$$y_1$$ + 0.1683$$y_2$$ = $$-$$0.2120 2 (4,4) (3.4142,3.4142) 0.8284 $$-$$0.2770 $$y_1$$ $$-$$ 0.2770$$y_2$$ = $$-$$1.8918 3 ($$-$$0.4405,1.6120) (0.0205,1.7141) 0.4722 0.3676 $$y_1$$ + 0.0531$$y_2$$ = 0.0986 (0.7728,4) (0.9890,3.7257) 0.3493 $$-$$ 4 (4,0.7728) (3.7257,0.9890) 0.3493 $$-$$ (1.6120,$$-$$0.4405) (1.7141,0.0205) 0.4722 0.0531 $$y_1$$ + 0.3676$$y_2$$ = 0.0986 (0.8284,4) (2.7738,3.8442) 0.1651 $$-$$ (4,2.8284) (3.8442,2.7738) 0.1651 $$-$$ 5 ($$-$$0.0700,2.3411) (0.0282,2.3348) 0.0985 $$-$$ (0.1156,1.0560) (0.2101,1.1077) 0.1077 $$-$$ (2.3411,$$-$$0.0700) (2.3348,0.0282) 0.0985 $$-$$ (1.0560,0.1156) (1.1077,0.2101) 0.1077 $$-$$ k $$s^k$$ $$y^k$$ $$d(s^k,y^k)$$ Cut 0 ($$-$$2.8284,4) (0.2170,2.9060) 3.2359 0.1683$$y_1$$ $$-$$ 0.0855$$y_2$$ = $$-$$0.2120 1 (4,$$-$$2.8284) (2.9060,0.2170) 3.2359 $$-$$0.0855$$y_1$$ + 0.1683$$y_2$$ = $$-$$0.2120 2 (4,4) (3.4142,3.4142) 0.8284 $$-$$0.2770 $$y_1$$ $$-$$ 0.2770$$y_2$$ = $$-$$1.8918 3 ($$-$$0.4405,1.6120) (0.0205,1.7141) 0.4722 0.3676 $$y_1$$ + 0.0531$$y_2$$ = 0.0986 (0.7728,4) (0.9890,3.7257) 0.3493 $$-$$ 4 (4,0.7728) (3.7257,0.9890) 0.3493 $$-$$ (1.6120,$$-$$0.4405) (1.7141,0.0205) 0.4722 0.0531 $$y_1$$ + 0.3676$$y_2$$ = 0.0986 (0.8284,4) (2.7738,3.8442) 0.1651 $$-$$ (4,2.8284) (3.8442,2.7738) 0.1651 $$-$$ 5 ($$-$$0.0700,2.3411) (0.0282,2.3348) 0.0985 $$-$$ (0.1156,1.0560) (0.2101,1.1077) 0.1077 $$-$$ (2.3411,$$-$$0.0700) (2.3348,0.0282) 0.0985 $$-$$ (1.0560,0.1156) (1.1077,0.2101) 0.1077 $$-$$ After five cuts, there are eight 
points outside $$\mathcal{Y}$$. Among them two points (0.7728,4) and (4,0.7728) have been checked at $$k =4$$, the distances between them and their corresponding boundary points are less than the approximation error 0.4, thus they belong to $$\mathcal{Y} \cup \mathcal{O}$$. For the rest of the six infeasible points, the distances between them and their corresponding boundary points are less than the approximation error 0.4, so we accept these six infeasible points for the outer approximation, see Fig. 1(g). When the algorithm terminates, the total number of iterations $$k$$ is equal to $$6$$, The total number of linear programmes (LPs) solved is 16. In Fig. 1(h), we show the outer approximation $$\mathcal{Y}^o$$ and the inner approximation $$\mathcal{Y}^i$$ of $$\mathcal{Y}$$ and their corresponding sets of points. Fig. 1. View largeDownload slide The changing of $$\mathcal{S}^i$$ with iteration $$i$$ (from (a) to (g)) and the inner and outer approximations $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ (see (h)). (a) $$\mathcal{Y}$$, $$\mathcal{S}^0$$, $$V_i(\mathcal{S}^0)$$ and $$\hat p$$. (b) $$\mathcal{S}^1$$ and $$V_i(\mathcal{S}^1)$$. (c) $$\mathcal{S}^2$$ and $$V_i(\mathcal{S}^2)$$. (d) $$\mathcal{S}^3$$ and $$V_i(\mathcal{S}^3)$$. (e) $$\mathcal{S}^4$$ and $$V_i(\mathcal{S}^4)$$. (f) $$\mathcal{S}^5$$ and $$V_i(\mathcal{S}^5)$$. (g) $$\mathcal{S}^6$$ and $$V_i(\mathcal{S}^6)$$. (h) $$\mathcal{Y}^o$$ and $$\mathcal{Y}^i$$. Fig. 1. View largeDownload slide The changing of $$\mathcal{S}^i$$ with iteration $$i$$ (from (a) to (g)) and the inner and outer approximations $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ (see (h)). (a) $$\mathcal{Y}$$, $$\mathcal{S}^0$$, $$V_i(\mathcal{S}^0)$$ and $$\hat p$$. (b) $$\mathcal{S}^1$$ and $$V_i(\mathcal{S}^1)$$. (c) $$\mathcal{S}^2$$ and $$V_i(\mathcal{S}^2)$$. (d) $$\mathcal{S}^3$$ and $$V_i(\mathcal{S}^3)$$. (e) $$\mathcal{S}^4$$ and $$V_i(\mathcal{S}^4)$$. (f) $$\mathcal{S}^5$$ and $$V_i(\mathcal{S}^5)$$. (g) $$\mathcal{S}^6$$ and $$V_i(\mathcal{S}^6)$$. (h) $$\mathcal{Y}^o$$ and $$\mathcal{Y}^i$$. 4 Numerical examples In Section 1, we have briefly outlined the literature on algorithms to solve reachable sets. In this section, we compare our algorithm presented in Section 3 with the algorithm of Baier et al. (2007). The algorithm of Baier et al. (2007), like ours, is based on outer and inner approximations, but they use a different strategy. Baier et al. (2007)’s algorithm uses a set of normal directions to construct supporting hyperplanes of $$\mathcal{Y}$$. Then the intersection of the halfspaces is taken as the outer approximation and the convex hull of the supporting points is taken as the inner approximation. However, in Baier et al. (2007)’s algorithm, the Hausdorff distance between the outer and inner approximation can not be calculated due to lack of vertex representation of the outer approximation. To compare our algorithm with Baier et al. (2007)’s algorithm, we have implemented the two algorithms in Matlab 7.10 using MOSEK (http://www.mosek.com) as non-linear programming and linear programming solver. Tests are run on a PC with dual processor CPU, 2.67 GHz and 2GB RAM and the codes including Matlab and MOSEK are run in a single thread mode. At step (k5) of our algorithm, the method of Chen et al. (1991) for on-line vertex enumeration by adjacency lists was used to calculate a vertex representation from the inequality representation of $$\mathcal{S}^k$$. We also adapt Baier et al. 
(2007)’s algorithm with on-line vertex enumeration method so that the vertex representation of an outer approximation can be calculated, thus the Hausdorff distance between the outer and inner approximation can be calculated. Notice that Baier et al. (2007)’s algorithm needs to choose a set of normal directions and each direction is used to calculate a supporting hyperplane of the discrete reachable set by solving a LP or non-linear programme (NLP). Therefore, to make the two algorithms comparable, for a given approximation error in our algorithm, we use trial and error to find the number of normal directions in Baier et al. (2007)’s algorithm which produces the result with similar Hausdorff distance to the result of ours. Moreover, we also compare the results of the two algorithms with solving the same number of LPs or NLPs, i.e., using the same number of supporting hyperplanes. We first apply this code to 3 two-dimensional linear control system problems and 1 three-dimensional linear control system problem. For all the four examples, we all use Euler method with piecewise constant control for discretization and the step size $$h$$ is set to be $$0.01$$. Example 4.1 Consider the following linear control system example with $$x_0=(0,0)^{\top}$$, $$t_0 = 0$$, $$T = 1$$, $$U=[0,1]$$ and \begin{equation} A=\left[ \begin{array}{l r} 0 & 1 \\ 0 & 0 \end{array} \right]\!, B=\left[ \begin{array}{r} 0 \\ 1 \end{array} \right]\!. \end{equation} The discrete reachable set problem $$({\rm P}_\mathcal{Y})$$ only have linear constraints. Therefore, for both algorithms, only LPs need to be solved. The problem is solved for $$\epsilon = 0.1$$ and $$\epsilon = 0.01$$ by Algorithm 3.1. It is also solved by Baier et al. (2007)’s algorithm with $$15$$,$$19$$, $$31$$, $$55$$ normal directions (the normal directions are calculated according to Baier et al. (2007), they are uniformly distributed on the unit ball). The results are shown in Table 2. Furthermore, outer approximations $$\mathcal{Y}^o$$ and inner approximations $$\mathcal{Y}^i$$ obtained by our algorithm with $$\epsilon = 0.1$$ and $$\epsilon = 0.01$$ are shown in Fig. 2(a and b), respectively. In the figure, outer approximations are drawn with solid lines while dot-dashed lines show inner approximations. For $$\epsilon = 0.1$$ ($$0.01$$), our algorithm needs to solve 15 (31) LPs. For comparison, we show the results obtained by Baier et al. (2007)’s algorithm with $$15$$ and $$31$$ normal directions (i.e., solving 15 and 31 LPs) in Fig. 2(c and d), respectively. From the figure, we can see that under the circumstance of solving the same number of LPs, Algorithm 3.1 obtains smaller $$H(\mathcal{Y}^o,\mathcal{Y}^i)$$ distance than Baier et al. (2007)’s algorithm. Fig. 2. View largeDownload slide Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)’s algorithm (bottom) with solving the same number of LPs (15 LPs for (a) and (c); 31 LPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)’s algorithm with 15 LPs. (d) Results of Baier et al. (2007)’s algorithm with 31 LPs. Fig. 2. View largeDownload slide Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)’s algorithm (bottom) with solving the same number of LPs (15 LPs for (a) and (c); 31 LPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. 
(b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)’s algorithm with 15 LPs. (d) Results of Baier et al. (2007)’s algorithm with 31 LPs. Table 2 The Hausdorff distance, running time and the number of LPs/NLPs for Examples 4.1–4.4 solved by the proposed Algorithm 3.1 andBaier et al. (2007)’s algorithm Example $$\epsilon$$ Algorithm LPs/NLPs $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ Time (s) — Baier et al. (2007) 15 0.0720 0.1592 0.1 Proposed 15 0.0369 0.1167 — Baier et al. (2007) 19 0.0473 0.1943 4.1 — Baier et al. (2007) 31 0.0199 0.3415 0.01 Proposed 31 0.0068 0.2570 — Baier et al. (2007) 55 0.0070 0.5763 — Baier et al. (2007) 19 0.1482 0.4166 0.1 Proposed 19 0.0876 0.3857 — Baier et al. (2007) 26 0.0878 0.4965 4.2 — Baier et al. (2007) 65 0.0238 1.3364 0.01 Proposed 65 0.0061 1.3021 — Baier et al. (2007) 130 0.0065 2.2372 — Baier et al. (2007) 13 0.1108 0.1514 0.1 Proposed 13 0.0759 0.1366 — Baier et al. (2007) 15 0.0843 0.1737 4.3 — Baier et al. (2007) 35 0.0300 0.4108 0.01 Proposed 35 0.0080 0.4161 — Baier et al. (2007) 105 0.0092 1.2856 — Baier et al. (2007) 114 0.2095 2.0425 0.1 Proposed 114 0.0725 1.3903 — Baier et al. (2007) 420 0.0788 10.2006 4.4 — Baier et al. (2007) 919 0.0380 24.4215 0.01 Proposed 919 0.0097 9.9183 — Baier et al. (2007) 3600 0.0114 289.7547 Example $$\epsilon$$ Algorithm LPs/NLPs $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ Time (s) — Baier et al. (2007) 15 0.0720 0.1592 0.1 Proposed 15 0.0369 0.1167 — Baier et al. (2007) 19 0.0473 0.1943 4.1 — Baier et al. (2007) 31 0.0199 0.3415 0.01 Proposed 31 0.0068 0.2570 — Baier et al. (2007) 55 0.0070 0.5763 — Baier et al. (2007) 19 0.1482 0.4166 0.1 Proposed 19 0.0876 0.3857 — Baier et al. (2007) 26 0.0878 0.4965 4.2 — Baier et al. (2007) 65 0.0238 1.3364 0.01 Proposed 65 0.0061 1.3021 — Baier et al. (2007) 130 0.0065 2.2372 — Baier et al. (2007) 13 0.1108 0.1514 0.1 Proposed 13 0.0759 0.1366 — Baier et al. (2007) 15 0.0843 0.1737 4.3 — Baier et al. (2007) 35 0.0300 0.4108 0.01 Proposed 35 0.0080 0.4161 — Baier et al. (2007) 105 0.0092 1.2856 — Baier et al. (2007) 114 0.2095 2.0425 0.1 Proposed 114 0.0725 1.3903 — Baier et al. (2007) 420 0.0788 10.2006 4.4 — Baier et al. (2007) 919 0.0380 24.4215 0.01 Proposed 919 0.0097 9.9183 — Baier et al. (2007) 3600 0.0114 289.7547 Table 2 The Hausdorff distance, running time and the number of LPs/NLPs for Examples 4.1–4.4 solved by the proposed Algorithm 3.1 andBaier et al. (2007)’s algorithm Example $$\epsilon$$ Algorithm LPs/NLPs $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ Time (s) — Baier et al. (2007) 15 0.0720 0.1592 0.1 Proposed 15 0.0369 0.1167 — Baier et al. (2007) 19 0.0473 0.1943 4.1 — Baier et al. (2007) 31 0.0199 0.3415 0.01 Proposed 31 0.0068 0.2570 — Baier et al. (2007) 55 0.0070 0.5763 — Baier et al. (2007) 19 0.1482 0.4166 0.1 Proposed 19 0.0876 0.3857 — Baier et al. (2007) 26 0.0878 0.4965 4.2 — Baier et al. (2007) 65 0.0238 1.3364 0.01 Proposed 65 0.0061 1.3021 — Baier et al. (2007) 130 0.0065 2.2372 — Baier et al. (2007) 13 0.1108 0.1514 0.1 Proposed 13 0.0759 0.1366 — Baier et al. (2007) 15 0.0843 0.1737 4.3 — Baier et al. (2007) 35 0.0300 0.4108 0.01 Proposed 35 0.0080 0.4161 — Baier et al. (2007) 105 0.0092 1.2856 — Baier et al. (2007) 114 0.2095 2.0425 0.1 Proposed 114 0.0725 1.3903 — Baier et al. (2007) 420 0.0788 10.2006 4.4 — Baier et al. (2007) 919 0.0380 24.4215 0.01 Proposed 919 0.0097 9.9183 — Baier et al. (2007) 3600 0.0114 289.7547 Example $$\epsilon$$ Algorithm LPs/NLPs $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$ Time (s) — Baier et al. 
Example 4.2 Consider the linear control system with $$U=\{x \in \mathbb{R}^2: \|x\|_2 \leq 1\}$$, $$x_0=(0,0)^{\top}$$, $$t_0 = 0$$, $$T=2$$ and \begin{equation} A=\left[ \begin{array}{r r} 0 & 1 \\[3pt] -2 & -3 \end{array} \right]\!, \quad B=\left[ \begin{array}{r r } 1 & 0 \\[3pt] 0 & 1 \end{array} \right]\!. \end{equation} The discrete reachable set problem $$(\rm P_\mathcal{Y})$$ has non-linear constraints. Thus, NLPs need to be solved for both Algorithm 3.1 and Baier et al. (2007)’s algorithm. The problem is solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$. It is also solved by Baier et al. (2007)’s algorithm with 19, 26, 65 and 130 normal directions (these normal directions are chosen to be uniformly distributed on the unit ball according to Baier et al., 2007). The results are listed in Table 2. The results with $$\epsilon =0.1$$ and $$\epsilon =0.01$$ solved by Algorithm 3.1 are shown in Fig. 3(a and b). For $$\epsilon = 0.1$$ ($$0.01$$), our algorithm needs to solve 19 (65) NLPs. For comparison, we also show the results obtained by Baier et al. (2007)’s algorithm solving $$19$$ and $$65$$ NLPs in Fig. 3(c and d), respectively.

Fig. 3. Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)’s algorithm (bottom) with solving the same number of NLPs (19 NLPs for (a) and (c); 65 NLPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)’s algorithm with 19 NLPs. (d) Results of Baier et al. (2007)’s algorithm with 65 NLPs.
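For a Euclidean-ball control set such as the one in Example 4.2, the per-step subproblem $$\max_{\|u\|_2 \leq 1} \langle c, u\rangle$$ has the closed-form maximizer $$u^* = c/\|c\|_2$$. The following hypothetical Python routine (same Euler scheme and notation as the sketch after Example 4.1, and again not the reported implementation) uses this to produce a supporting point without a numerical NLP solver:

```python
import numpy as np

def support_point_ball(w, A, B, h, N):
    """Supporting point of the Euler-discretized reachable set in
    direction w when U is the Euclidean unit ball (x_0 = 0 assumed).

    Each step's subproblem max_{||u|| <= 1} <c, u> is solved in closed
    form by u* = c / ||c||, so no numerical NLP solver is needed here.
    """
    M = np.eye(A.shape[0]) + h * A        # Euler step matrix I + hA
    coeffs, P = [], h * B
    for _ in range(N):                    # coefficients M^(N-1-k) h B
        coeffs.append(P)
        P = M @ P
    x = np.zeros(A.shape[0])
    for Pk in coeffs:
        c = Pk.T @ w                      # gradient of <w, x_N> in u_k
        norm = np.linalg.norm(c)
        if norm > 0.0:
            x += Pk @ (c / norm)          # optimal u_k on the sphere
    return x

# Example 4.2 data: h = 0.01 and T = 2 give N = 200 steps.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(support_point_ball(np.array([1.0, 0.0]), A, np.eye(2), 0.01, 200))
```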
Example 4.3 Consider a time-variant linear system with $$U =\{x \in {\mathbb{R}}: |x| \leq 1\}$$, $$x_0=(0,0)^{\top}$$, $$t_0=0$$, $$T=4$$ and \begin{equation} A(t)=\left[\begin{array}{cc} \textrm{e}^{-t}\sin t-2 & t\,\textrm{e}^{-2t}\\ -\textrm{e}^{-t} & 2\textrm{e}^{-2t}\cos t-1 \end{array}\right]\!, \quad B(t)=\left[\begin{array}{c} 1\\ \sin t \end{array}\right]\!. \end{equation} The discrete reachable set problem $$(\rm P_\mathcal{Y})$$ is solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$. It is also solved by Baier et al. (2007)’s algorithm with 13, 15, 35 and 105 normal directions (uniformly distributed points on the unit ball are taken as the normal directions). The results are listed in Table 2. The results of Algorithm 3.1 with $$\epsilon =0.1$$ and $$0.01$$ are shown in Fig. 4(a and b), respectively. For comparison, we also show the results obtained by Baier et al. (2007)’s algorithm using $$13$$ and $$35$$ LPs in Fig. 4(c and d), respectively.

Fig. 4. Comparisons of $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ obtained by Algorithm 3.1 (top) and Baier et al. (2007)’s algorithm (bottom) with solving the same number of LPs (13 LPs for (a) and (c); 35 LPs for (b) and (d)). (a) Results of Algorithm 3.1 with $$\epsilon = 0.1$$. (b) Results of Algorithm 3.1 with $$\epsilon = 0.01$$. (c) Results of Baier et al. (2007)’s algorithm with 13 LPs. (d) Results of Baier et al. (2007)’s algorithm with 35 LPs.

Example 4.4 This linear control system example has $$x_0=(0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U=\{x \in \mathbb{R}^3: \|x\|_2 \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r} -1 & 1 & 0\\ 0 & -1 & 0\\ 0 & 0 & -2 \end{array} \right]\!, \quad B=\left[ \begin{array}{r r r } 0 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{array} \right]\!. \end{equation} The corresponding discrete reachable set problem $$(\rm P_{\mathcal{Y}})$$ has non-linear constraints due to $$U=\{x \in \mathbb{R}^3: \|x\|_2 \leq 1\}$$. We solve the problem using Algorithm 3.1 with $$\epsilon =0.1$$ and $$\epsilon =0.01$$. For comparison, for each result of our algorithm with a specified $$\epsilon$$, we also solve the problem by Baier et al. (2007)’s algorithm with two different numbers of normal directions, i.e., for each result of our algorithm, we present not only the result of Baier et al. (2007)’s algorithm with a similar Hausdorff distance but also the result obtained by solving the same number of NLPs. The results are listed in Table 2. Figures 5 and 6 show the inner approximation $$\mathcal{Y}^i$$, the outer approximation $$\mathcal{Y}^o$$ and both approximations together, solved by Algorithm 3.1 with $$\epsilon =0.1$$ and $$\epsilon =0.01$$, respectively. The number of NLPs computed for $$\epsilon =0.1$$ and $$\epsilon =0.01$$ is 114 and 919, respectively. For comparison, we show the results of Baier et al. (2007)’s algorithm solving the same numbers of NLPs, i.e., 114 and 919, in Figs 7 and 8, respectively.

Fig. 5. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Algorithm 3.1 with approximation error $$0.1$$. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.
Fig. 6. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Algorithm 3.1 with approximation error $$0.01$$. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Fig. 7. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Baier et al. (2007)’s algorithm with 114 NLPs. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Fig. 8. $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$ solved by Baier et al. (2007)’s algorithm with 919 NLPs. (a) $$\mathcal{Y}^i$$. (b) $$\mathcal{Y}^o$$. (c) $$\mathcal{Y}^i$$ and $$\mathcal{Y}^o$$.

Table 2 summarizes the Hausdorff distance between the outer and inner approximations, the computation time (in seconds) and the number of LPs/NLPs solved by both algorithms for Examples 4.1–4.4. For our algorithm, we also list the approximation error $$\epsilon$$. For each row of results of our algorithm with a given $$\epsilon$$ (see column 2), we show two corresponding results of Baier et al. (2007)’s algorithm: one obtained by solving the same number of LPs/NLPs as ours (one row before) and one with a Hausdorff distance similar to ours (one row after). From Table 2, we can see that to reach a similar Hausdorff distance, the algorithm proposed in this article uses less computational time and solves fewer LPs/NLPs. Moreover, when solving the same number of LPs/NLPs, our algorithm obtains a smaller Hausdorff distance between the inner and outer approximations. As the approximation error decreases, the number of LPs/NLPs increases and consequently the computation time increases.

Next, we solve a four-dimensional and a five-dimensional control system example to show the performance of our algorithm on higher-dimensional problems. Again, we use the Euler method with piecewise constant control for discretization, and the step size $$h$$ is set to $$0.01$$. For comparison, for a given approximation error of our algorithm, we also tried to find, by trial and error, a number of normal directions for Baier et al. (2007)’s algorithm such that both algorithms produce a similar Hausdorff distance. However, we could not obtain results within 2 h. Therefore, for these two problems, we only compare the results of the two algorithms when they solve the same number of optimization problems.

Example 4.5 This four-dimensional linear control system example has $$x_0=(0,0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U =\{x\in \mathbb{R}: |x| \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r r} -1 & 0 & 1 & 0\\ 0 & -2 & 0 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & -3 \end{array} \right]\!, \quad B=\left[ \begin{array}{r} 0\\ 1\\ 1\\ 2 \end{array} \right]\!. \end{equation} We solve the problem using Algorithm 3.1 with $$\epsilon =0.2$$ and $$\epsilon =0.1$$.
The number of LPs computed for $$\epsilon =0.2$$ and $$\epsilon =0.1$$ is $$323$$ and $$674$$, respectively. For comparison, we also solve the problem by Baier et al. (2007)’s algorithm using $$323$$ and $$674$$ normal directions (i.e., solving $$323$$ and $$674$$ LPs), respectively. The results are listed in Table 3.

Table 3 The Hausdorff distance, running time and the number of LPs for Examples 4.5 and 4.6 solved by the proposed Algorithm 3.1 and Baier et al. (2007)’s algorithm

Example   $$\epsilon$$   Algorithm             LPs     $$H(\mathcal{Y}^o, \mathcal{Y}^i)$$   Time (s)
4.5       0.2            Proposed              323     0.0648     4.032
          —              Baier et al. (2007)   323     0.1715     12.5635
          0.1            Proposed              674     0.0542     7.0072
          —              Baier et al. (2007)   674     0.1372     30.5755
4.6       0.2            Proposed              4380    0.1247     90.7213
          —              Baier et al. (2007)   4380    0.2104     422.3247
          0.1            Proposed              14858   0.0542     875.2668
          —              Baier et al. (2007)   14858   0.1475     2238.5415

Example 4.6 This five-dimensional linear control system example has $$x_0=(0,0,0,0,0)^{\top}$$, $$t_0 = 0$$, $$T=1$$, $$U =\{x\in \mathbb{R}: |x| \leq 1\}$$ and \begin{equation} A=\left[ \begin{array}{r r r r r} -1 & 0 & 1 & 0 & 0\\ 0 & -2 & 0 & 0 & 0\\ 0 & 1 & -1 & 0 & -1\\ 0 & 0 & 0 & -3 & 1\\ 0 & 0 & 0 & 0 & -1 \end{array} \right]\!, \quad B=\left[ \begin{array}{r} 0\\ 1\\ 1\\ 2\\ 3 \end{array} \right]\!. \end{equation} We solve the problem using Algorithm 3.1 with $$\epsilon =0.2$$ and $$\epsilon =0.1$$. The number of LPs computed for $$\epsilon =0.2$$ and $$\epsilon =0.1$$ is $$4380$$ and $$14858$$, respectively. For comparison, we also solve the problem by Baier et al. (2007)’s algorithm with $$4380$$ and $$14858$$ normal directions (i.e., solving $$4380$$ and $$14858$$ LPs), respectively. The results are listed in Table 3.

Tables 2 and 3 suggest that the most critical factor influencing computation time is the dimension $$p$$ of the system. To show further how the computation time changes as the system dimension $$p$$ increases, we plot computational time versus $$p$$ for Examples 4.3–4.6 solved by our algorithm with approximation error $$\epsilon=0.1$$ in Fig. 9. As $$p$$ increases the computation time increases, and more dramatically so for higher values of $$p$$ than for smaller ones.
For example, as $$p$$ increases from $$4$$ to $$5$$, the number of LPs solved increases from $$674$$ to $$14858$$ and the computation time increases from $$7.0072$$ to $$875.2668$$ s.

Fig. 9. Computational time for two-dimensional to five-dimensional linear control systems.

It should be noted that parallel computation can be used in Algorithm 3.1 to speed up the computation for high-dimensional cases. Instead of checking the vertices $$s\in {\rm vert\,} \mathcal{S}^k$$ one by one to see whether $$s \in \mathcal{Y} \cup \mathcal{O}$$ or not, several vertices of $${\rm vert\,} \mathcal{S}^k$$ can be checked at the same time by solving $$({\rm P_1}(s))$$ in parallel; supporting hyperplanes of $$\mathcal{Y}$$ are then constructed using (D$$_1({s})$$) and the outer approximation $$\mathcal{S}$$ is updated.
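A minimal sketch of this parallelization, assuming a user-supplied routine solve_P1 that solves $$({\rm P_1}(s))$$ for one vertex $$s$$ and returns the data needed to construct a cut via (D$$_1(s)$$) (both the routine and the fields of its result are hypothetical placeholders, not part of the reported implementation):

```python
from concurrent.futures import ProcessPoolExecutor

def check_vertices_parallel(vertices, solve_P1, eps, workers=4):
    """Solve (P_1(s)) for several vertices of the current outer
    approximation at once instead of one by one.

    `solve_P1` and the `distance`/`cut` fields of its result stand in
    for the subproblem solver of Algorithm 3.1; for a process pool the
    routine must be picklable (e.g. a module-level function).
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(solve_P1, vertices))
    # vertices farther than eps from Y each yield a supporting hyperplane
    # (via (D_1(s))) with which the outer approximation S is then updated
    return [(s, r.cut) for s, r in zip(vertices, results) if r.distance > eps]
```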
According to Theorem 3.2, the quality of the approximation produced by Algorithm 3.1 is guaranteed. In all six examples, Baier et al. (2007)’s method delivers a worse approximation. This indicates that the normal directions chosen in that method are not well suited to finding an outer approximation $$\mathcal{Y}^o$$ and an inner approximation $$\mathcal{Y}^i$$ of $$\mathcal{Y}$$ with $$H(\mathcal{Y}^o,\mathcal{Y}^i)<\epsilon$$. Hence the method needs to be extended by mechanisms that control the normal directions in such a way that the quality of the approximation can be guaranteed.

5. Conclusion

In this article, we have proposed a variant of Benson’s algorithm to find the reachable set of a discrete linear control system. We have proved that the algorithm is guaranteed to find an inner approximation and an outer approximation of the reachable set within a specified approximation error. To the best of our knowledge, this is the first algorithm that can guarantee the approximation quality in terms of Hausdorff distance. The algorithm can be used not only for linear control systems but also for non-linear control systems with convex reachable sets. Numerical results show that our algorithm performs better than the algorithm of Baier et al. (2007) on all the tested instances. Further research is needed to develop an algorithm with guaranteed approximation quality in terms of Hausdorff distance for non-linear control systems with non-convex reachable sets. Moreover, it is worth investigating the trade-off between approximation quality and computational effort, in particular concerning different discretization methods and the growth of computation time as $$p$$ increases.

Acknowledgements

We would like to thank Professor Andreas Löhne from Friedrich-Schiller-Universität Jena and the anonymous reviewers for their insightful and detailed comments, which have greatly improved the quality of our article.

Funding

This work was supported in part by the Beijing Natural Science Foundation (4152034) and the National Natural Science Foundation of China (11371053).

References

Baier, R., Büskens, C. & Chahma, I. A. (2007) Approximation of reachable sets by direct solution methods for optimal control problems. Optim. Method Softw., 22, 433–452.
Benson, H. (1998) An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J. Global Optim., 13, 1–24.
Botchkarev, O. & Tripakis, S. (2000) Verification of hybrid systems with linear differential inclusions using ellipsoidal approximations. Hybrid Systems: Computation and Control (Lynch, N. & Krogh, B. H. eds). Berlin, Heidelberg: Springer, pp. 73–88.
Chen, P. C., Hansen, P. & Jaumard, B. (1991) On-line and off-line vertex enumeration by adjacency lists. Oper. Res. Lett., 10, 403–409.
Dontchev, A. & Farkhi, E. (1989) Error estimates for discretized differential inclusions. Computing, 41, 349–358.
Ehrgott, M., Löhne, A. & Shao, L. (2012) A dual variant of Benson’s “outer approximation algorithm” for multiple objective linear programming. J. Global Optim., 52, 757–778.
Ehrgott, M., Shao, L. & Schöbel, A. (2011) An approximation algorithm for convex multi-objective programming problems. J. Global Optim., 50, 397–416.
Ferretti, R. (1997) High-order approximations of linear control systems via Runge–Kutta schemes. Computing, 58, 351–364.
Gayek, J. E. & Fisher, M. E. (1987) Approximating reachable sets for n-dimensional linear discrete systems. IMA J. Math. Control Inf., 4, 149–160.
Girard, A. (2005) Reachability of uncertain linear systems using zonotopes. Hybrid Systems: Computation and Control (Morari, M. & Thiele, L. eds). Berlin, Heidelberg: Springer, pp. 291–305.
Girard, A., Le Guernic, C. & Maler, O. (2006) Efficient computation of reachable sets of linear time-invariant systems with inputs. Hybrid Systems: Computation and Control (Hespanha, J. P. & Tiwari, A. eds), Vol. 3927. Berlin, Heidelberg: Springer, pp. 257–271.
Kostousova, E. K. (1997) On polyhedral estimation of the attainability domains of linear multistep systems. Autom. Remote Control, 58, 374–382.
Kostousova, E. K. (2001) Control synthesis via parallelotopes: optimization and parallel computations. Optim. Method Softw., 14, 267–310.
Kostousova, E. K. (2008) On the boundedness of outer polyhedral estimates for reachable sets of linear systems. Comput. Math. Math. Phys., 48, 918–932.
Kostousova, E. K. (2009) External polyhedral estimates for reachable sets of linear discrete-time systems with integral bounds on controls. Int. J. Pure Appl. Math., 50, 187–194.
Kurzhanski, A. & Vályi, I. (1991) Ellipsoidal techniques for dynamic systems: the problem of control synthesis. Dynam. Contr., 1, 357–378.
Kurzhanski, A. B. & Varaiya, P. (2000) Ellipsoidal techniques for reachability analysis: internal approximation. Syst. Control Lett., 41, 201–211.
Kurzhanski, A. B. & Varaiya, P. (2001) Dynamic optimization for reachability problems. J. Optimiz. Theory App., 108, 227–251.
Kurzhanski, A. B. & Varaiya, P. (2002) On ellipsoidal techniques for reachability analysis. Part I: external approximations. Optim. Method Softw., 17, 177–206.
Li, X., He, X., Zhong, Q. & Song, M. (2011) Reachable domain for satellite with two kinds of thrust. Acta Astronaut., 68, 1860–1864.
Löhne, A. (2011) Vector Optimization with Infimum and Supremum. Berlin, Heidelberg: Springer.
Löhne, A., Rudloff, B. & Ulus, F. (2014) Primal and dual approximation algorithms for convex vector optimization problems. J. Global Optim., 60, 713–736.
Mitchell, I. M. & Tomlin, C. J. (2003) Overapproximating reachable sets by Hamilton–Jacobi projections. J. Sci. Comput., 19, 323–346.
Parise, F., Valcher, M. E. & Lygeros, J. (2014) On the reachable set of the controlled gene expression system. 2014 IEEE 53rd Annual Conference on Decision and Control (CDC), Los Angeles, CA, USA, pp. 4597–4604.
Shao, L. & Ehrgott, M. (2008) Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Math. Methods Oper. Res., 68, 257–276.
Sonneborn, L. & Van Vleck, F. (1964) The bang-bang principle for linear control systems. J. Soc. Ind. Appl. Math. A, 2, 151–159.
Suchaitanawanit, B. & Cole, M. O. (2015) Mechanical structure optimization in minimum-time motion control of flexible bodies. Automatica, 62, 213–221.
Varaiya, P. (2000) Reach set computation using optimal control. Verification of Digital and Hybrid Systems (Inan, M. K. & Kurshan, R. P. eds). Berlin, Heidelberg: Springer, pp. 323–331.
Veliov, V. (1992) Second-order discrete approximation to linear differential inclusions. SIAM J. Control Optim., 29, 439–451.
Vinh, N. X., Gilbert, E. G., Howe, R. M., Sheu, D. & Lu, P. (1995) Reachable domain for interception at hyperbolic speeds. Acta Astronaut., 35, 1–8.
Vinnikov, E. (2015) Construction of attainable sets and integral funnels of nonlinear controlled systems in the MATLAB environment. Comput. Math. Model., 26, 107–119.
Zivanovic, S. G. & Collins, P. (2012) Higher order methods for differential inclusions. Preprint arXiv:1206.6563.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
