Controllability of semilinear impulsive control systems with multiple time delays in control

Abstract

In this article, we study the controllability of finite-dimensional dynamical control systems modelled by semilinear impulsive ordinary differential equations with multiple constant time delays in the control function. Initially, we recall a necessary and sufficient condition for the controllability of the corresponding linear system without impulses, with multiple constant time delays in the control function, in terms of a matrix rank condition. Then, under some sufficient conditions, we show that the actual system is also controllable for certain classes of non-linearities and impulse functions. We employ the Schauder fixed-point theorem and the Banach contraction mapping principle to establish the results. The results obtained are applicable to both autonomous and non-autonomous systems. An example is given to illustrate the theoretical results.

1. Introduction

The study of impulsive systems has become more important in recent years, as many of the evolution processes that occur in physics, chemistry, biology, population dynamics, engineering, information science, etc. are characterized by the fact that, at certain moments of time, the state function experiences a sudden change, that is, an impulse. There has been significant development in impulsive theory in the past three decades. For a detailed study of impulsive differential equations, see the monograph by Lakshmikantham et al. (1989) and the references therein. The study of controllability of impulsive systems began in 1993 with the work of Leela et al. (1993). In short, a system is said to be controllable over some space V on some finite time interval if it is possible to steer that system from an arbitrary initial state to an arbitrary desired final state in V by using the set of admissible control functions. The controllability of various types of linear impulsive systems in a finite-dimensional space is well known and various references are available on this in the literature; see, for example, the works by Liu (1995), Guan et al. (2002), Zhao & Sun (2010), Han et al. (2012) and the references therein. Unlike the linear systems, the bibliography is not very broad when it comes to semilinear impulsive systems in a finite-dimensional space (the general form of a first-order semilinear system is given by $$\texttt{L}(\mathbf{x}(t))=F(t, \mathbf{x}(t)),$$ where $$\texttt{L}$$ is a first-order linear differential operator and F(⋅, ⋅) is a non-linear function of the state vector x(t)). However, we can mention the work by George et al. (2000), in which the authors obtained controllability conditions for such systems by employing the Banach contraction principle. Some other references available in the literature in this respect are the works by Nieto & Tisdell (2010), Zhu & Lin (2012), etc., in which the authors adopted Schaefer’s fixed-point theorem to obtain the controllability results. But in all these papers, the non-linear term and the impulse functions involved in the considered system depend on time and the state function, but not on the control parameter u. Leiva (2014) studied this case by assuming that the system’s non-linear term and impulse functions also depend on the control parameter u and obtained controllability conditions by employing a Rothe-type fixed-point theorem. An extension of this result appeared in Leiva & Rojas (2016) for systems with non-local conditions. 
As we know, there are some chemical process systems, hydraulically actuated systems, combustion systems, population dynamics, harmonic oscillators, etc., in which the present rate of change of the control function depends upon its past values (see Erneux, 2009 and the references therein). Such processes are modelled by delay differential equations having time delays in the control function. For example, an equation of a harmonic oscillator is represented by $$\ddot{x}(t)+k^{2} x(t)=u(t)+u(t-h),\ k^{2}>0;$$ an equation arising in population dynamics is given by $$\dot{x}(t)=x(t)+\int _{0}^{\infty }e^{-\sigma }u(t-\sigma )\,\mathrm{d}\sigma,$$ etc. For many other practical examples where time delays are involved in the control, one can see the works by Artstein (1982), Sikora (2005), etc. Several mathematicians have contributed to the development of the controllability theory of linear systems in a finite-dimensional space involving time delays in control, for example, Olbrot (1972), Klamka (1976), Klamka (1977), Khambadkone (1982) and the references therein. For non-linear systems, one can refer to the works by Dacka (1982), Balachandran & Somasundaram (1985), Klamka (2008), Klamka (2009), etc. If an impulsive system in a finite-dimensional space involves time delays in control, establishing the controllability of such a system becomes much more complex because of the coexistence of impulses and delays. However, for the linear case of this scenario, Liu & Zhao (2012) obtained controllability results in terms of a matrix rank condition, which makes it easy to check whether the system is controllable or not. But we know that most of the problems occurring in the real world are not linear in nature. To the best of our knowledge, there is no work available in the literature on the controllability of non-linear (in particular, semilinear) impulsive systems with multiple constant time delays in the control. This motivates our current study. Firstly, we recall the controllability condition for the corresponding linear delay system without impulses in terms of a matrix rank condition. Then, for certain classes of the non-linear part of the system and impulse functions, we establish sufficient conditions under which the original system is also controllable. We use the Schauder fixed-point theorem and the Banach contraction mapping principle to prove the results. The organization of this paper is as follows: in Section 2, we formulate the controllability problem of a finite-dimensional semilinear impulsive system with multiple constant time delays in the control function. In Section 3, we recall a necessary and sufficient condition for the controllability of the corresponding linear system with multiple constant time delays in the control and without impulses, in terms of a matrix rank condition involving N matrices, where N is the number of time delays in the control function. In Section 4, we prove that, under some sufficient conditions, the actual system is also controllable for certain classes of non-linearities and impulse functions. For this, we use the Schauder fixed-point theorem and the Banach contraction mapping principle. Finally, in Section 5, we give a numerical example for a non-autonomous system to show the effectiveness of the results obtained in this paper.

2. Preliminaries and system description

We begin this section with the functional settings required to establish the results of this paper. 
The natural space to work on the solvability of the semilinear impulsive control delay system (see (2.1) below) is the real Banach space given by   \begin{align*} \mathcal{B}_{1}:=&\bigg\{\mathbf{x}(\cdot)\Big| \mathbf{x}(\cdot):[t_{0}, T]\rightarrow \mathbb{R}^{n}, \mathbf{x}(\cdot)\ \textrm{is a continuous function on}\ [t_{0}, T]\setminus\{t_{k}:k=1, 2,\ldots,M\}\\ &\quad\textrm{and differentiable a.e. on }[t_{0}, T]\ \textrm{such that}\, \exists \,\textrm{a left limit}\, \mathbf{x}\big(t^{-}_{k}\big):=\lim \limits_{t\uparrow t_{k}} \mathbf{x}(t)\ \textrm{and a right}\\&\quad\textrm{limit}\ \mathbf{x}\big(t^{+}_{k}\big):=\lim \limits_{t \downarrow t_{k}} \mathbf{x}(t)\ \textrm{with}\ \mathbf{x}\big(t^{-}_{k}\big)=\mathbf{x}\big(t_{k}\big)\ \textrm{and}\ \mathbf{x}(t_{0})=\lim \limits_{t \downarrow t_{0}}\mathbf{x}(t) \bigg\}, \end{align*}endowed with the norm   $$ \left\|\mathbf{x}(\cdot)\right\|_{\mathcal{B}_{1}}:=\sup \limits_{t\in [t_{0}, T]}\left\|\mathbf{x}(t)\right\|_{\mathbb{R}^{n}}.$$Here a.e. stands for ‘almost everywhere’, which we define as follows: a property $$\mathcal{P}$$ is said to hold a.e. on $$[t_{0}, T]$$ if the following conditions are satisfied: (i) the property $$\mathcal{P}$$ holds on a subset S of $$[t_{0}, T]$$; (ii) the Lebesgue measure of the set $$[t_{0}, T]\setminus \mathrm{S},$$ on which the property $$\mathcal{P}$$ may fail to hold, is zero. We also need the following real Banach spaces:   $$ \mathcal{B}_{2}:=\big\{\mathbf{u}(\cdot) \big| \mathbf{u}(\cdot):[t_{0}, T]\rightarrow \mathbb{R}^{m}, \mathbf{u}(\cdot)\textrm{ is continuous a.e. and bounded on }[t_{0}, T]\big\}, $$endowed with the norm   $$ \left\|\mathbf{u}(\cdot)\right\|_{\mathcal{B}_{2}}:=\sup \limits_{t\in [t_{0}, T]}\left\|\mathbf{u}(t)\right\|_{\mathbb{R}^{m}},$$$$\mathbb{R}^{n} \times \mathbb{R}^{m}:=\{(\mathbf{v}, \mathbf{w}) | \mathbf{v}\in \mathbb{R}^{n}, \mathbf{w}\in \mathbb{R}^{m} \},$$ endowed with the norm   $$ \left\|(\mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n} \times \mathbb{R}^{m}}:=\|\mathbf{v}\|_{\mathbb{R}^{n}}+\|\mathbf{w}\|_{\mathbb{R}^{m}},$$and $$[t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}:=\{(t, \mathbf{v}, \mathbf{w}) | t\in [t_{0}, T], \mathbf{v}\in \mathbb{R}^{n}, \mathbf{w}\in \mathbb{R}^{m} \},$$ endowed with the norm   $$ \left\|(t, \mathbf{v}, \mathbf{w})\right\|_{[t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}}:=|t|+\|\mathbf{v}\|_{\mathbb{R}^{n}}+\|\mathbf{w}\|_{\mathbb{R}^{m}}.$$Here $$\|\cdot \|_{\mathbb{R}^{n}}$$ and $$\|\cdot \|_{\mathbb{R}^{m}}$$ are the usual Euclidean norms on the real Banach spaces $$\mathbb{R}^{n}$$ and $$\mathbb{R}^{m},$$ respectively. Throughout this paper, for any operator $$\mathcal{T}$$, the Hermitian adjoint is denoted by $$\mathcal{T}^{*}$$ and, for any matrix $$\mathbf{A}=(a_{ij}),$$ we define the Frobenius norm $$\|\mathbf{A}\|:=\sqrt{\sum_{i, j}|a_{ij}|^{2}}.$$ Further, C(A;B) denotes the set of all continuous functions from the set A to the set B. 
We consider the dynamical control system modelled by the following semilinear impulsive ordinary differential equation, whose state vector is in $$\mathbb{R}^{n},$$ with multiple constant time delays in the control function,   \begin{equation}\!\!\!\!\!\left.\begin{array}{rl} \dot{\mathbf{x}}(t)=&\!\!\mathbf{A}(t)\mathbf{x}(t)+\sum\limits_{i=1}^{N}\mathbf{B}_{i}(t)\mathbf{u}(t-h_{i})+\mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t)),t\in[t_{0}, T]\setminus\{t_{k}:k=1, 2,\ldots, M\}, \\ \mathbf{x}(t_{0})=&\!\!\mathbf{x}_{0},\\[2pt] \Delta(\mathbf{x}(t_{k})):=&\!\!\mathbf{x}\left(t^{+}_{k}\right)-\mathbf{x}(t_{k})=\mathbf{g}_{k}(\mathbf{x}(t_{k}), \mathbf{u}(t_{k})),\\[2pt] \mathbf{u}(t)=&\!\!\mathbf{u}_{0}(t), t\in [t_{0}-h_{N}, t_{0}). \end{array} \right\} \end{equation} (2.1)Since, in system (2.1), the coefficient of the first-order derivative of x(t) depends on neither x(t) nor the derivative of x(t), this system is semilinear, as it can be written in the form $$\texttt{L}(\mathbf{x}(t))=F(t, \mathbf{x}(t)),$$ where $$\texttt{L}$$ is a first-order linear differential operator. We make the following assumptions on the components of this system: (i) the state function $$\mathbf{x}(\cdot )\in \mathcal{B}_{1}$$ with a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$; (ii) the control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}$$; (iii) $$\mathbf{A}(\cdot ):[t_{0}, T]\rightarrow \mathbb{R}^{n \times n}$$ and $$\mathbf{B}_{i}(\cdot ):[t_{0}, T]\rightarrow \mathbb{R}^{n \times m}$$ are given matrix-valued continuous functions on $$[t_{0}, T]$$; (iv) $$t_{0}\leq t_{1} \leq t_{2}\leq \cdots \leq t_{M}< T,$$ where the $$t_{k}$$ are the fixed times at which the state function x(⋅) experiences impulses and are state independent; (v) $$0\leq h_{1}\leq h_{2}\leq \cdots \leq h_{N}\leq \min \, \{(t_{1}-t_{0}), (t_{2}-t_{1}),\ldots ,(t_{M}-t_{M-1}), (T-t_{M}) \},$$ where the $$h_{i}$$ are the known time delays in the control function u(⋅); (vi) $$\Delta (\mathbf{x}(t_{k}))$$ is the jump in the state function x(⋅) at the time $$t_{k}$$; (vii) $$\mathbf{u}_{0}(\cdot ):[t_{0}-h_{N}, t_{0})\rightarrow \mathbb{R}^{m}$$ denotes the known initial control function (assumed to be bounded and continuous on its domain) applied to the system (2.1); (viii) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C}\left ([t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}; \mathbb{R}^{n}\right )$$ is non-linear in its second argument and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}\left (\mathbb{R}^{n} \times \mathbb{R}^{m}; \mathbb{R}^{n}\right );$$ here the subscripts range over i = 1, …, N and k = 1, …, M. Before proceeding to establish the controllability of the system (2.1), we will make sure that this system is solvable for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}.$$ Note that the solvability of (2.1) is similar to the solvability of the following initial value problem (2.2) having no impulses:   \begin{equation} \left. \begin{array}{rl} \dot{\mathbf{x}}(t)&\!\!\!=\mathbf{f}(t, \mathbf{x}(t)),\,\, t\in [t_{0}, T],\\ [2pt]\mathbf{x}(t_{0})&\!\!\!=\mathbf{x}_{0}. \end{array} \right\} \end{equation} (2.2)This is because, when (2.2) possesses a unique solution, its corresponding impulsive system with finitely many impulses also admits a unique solution, but with jumps in the state function at the impulse times. 
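To fix ideas, the following minimal Python sketch integrates a system of the form (2.1) piecewise between the impulse times and applies the jumps $$\mathbf{g}_{k}$$ at each $$t_{k}$$. All data here (the matrices, the delay, the impulse times, the functions f and g) are purely hypothetical and chosen only for illustration; they are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical data (not from the paper): n = 2, m = 1, one delay h (N = 1),
# two impulse times (M = 2), a bounded non-linearity f and bounded jumps g_k.
t0, T = 0.0, 3.0
h = 0.2                                    # delay; h <= min gap between impulse times
impulse_times = [1.0, 2.0]                 # t_1, t_2
A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # A(t) taken constant here
B = np.array([[0.0], [1.0]])               # B_1(t) taken constant here

def u(t):
    # control; the initial control u_0(t) on [t0 - h, t0) is taken to be zero
    return np.zeros(1) if t < t0 else np.array([np.sin(3.0 * t)])

def f(t, x, v):
    # bounded non-linearity in the state and the control
    return 0.1 * np.tanh(x) + 0.05 * np.tanh(v)

def g(x, v):
    # bounded jump function g_k(x(t_k), u(t_k))
    return np.array([0.1 * np.sin(x[0]), -0.1 * np.cos(v[0])])

def rhs(t, x):
    return A @ x + (B @ u(t - h)).ravel() + f(t, x, u(t))

# integrate piecewise between impulse times and apply the jumps at each t_k
x, t_left = np.array([1.0, 0.0]), t0
for tk in impulse_times + [T]:
    seg = solve_ivp(rhs, (t_left, tk), x, max_step=1e-2)
    x = seg.y[:, -1]
    if tk < T:
        x = x + g(x, u(tk))                # x(t_k^+) = x(t_k) + g_k(x(t_k), u(t_k))
    t_left = tk

print("state at time T:", x)
```

The piecewise integration mirrors the convention used throughout: the state is left-continuous at each $$t_{k}$$ and restarts from $$\mathbf{x}(t^{+}_{k})=\mathbf{x}(t_{k})+\mathbf{g}_{k}(\mathbf{x}(t_{k}), \mathbf{u}(t_{k})).$$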
First, let us discuss the solvability of (2.2) on $$[t_{0}, T].$$ In (2.2), if we assume that f(⋅, ⋅) is a continuous function on $$[t_{0}, T]\times \mathbb{R}^{n}$$ and satisfies there a Lipschitz condition with respect to the second argument x(t), then (2.2) has a unique solution on the interval $$[t_{0}, T]$$ (refer to Coddington, 1989, Theorem 4, p. 252). But note that these are only sufficient conditions, not necessary ones, for the existence of a unique solution to (2.2) on $$[t_{0}, T]$$. In other words, (2.2) may still admit a unique solution on $$[t_{0}, T]$$ without f(⋅, ⋅) being a continuous function on its domain and without f(⋅, ⋅) satisfying a Lipschitz condition with respect to the second argument on its domain. Now, for the system (2.1), if we assume that f(⋅, ⋅, ⋅) satisfies a Lipschitz condition with respect to the second argument on its domain $$[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}$$ (then the right-hand side of $$\dot{ \mathbf{x}}(t)$$ in (2.1) also satisfies a Lipschitz condition), then (2.1) has a unique solution on $$[t_{0}, T]$$ for a given u(⋅). Note that a Lipschitz function may be either bounded or unbounded on its domain. However, there exist some bounded continuous functions f(⋅, ⋅, ⋅) that do not satisfy a Lipschitz condition (then, of course, the right-hand side of $$\dot{ \mathbf{x}}(t)$$ also does not satisfy a Lipschitz condition), but the system (2.1) still admits a unique solution on $$[t_{0}, T]$$ for a given u(⋅). Similarly, there exist some continuous functions f(⋅, ⋅, ⋅) of linear growth that are unbounded and do not satisfy a Lipschitz condition, but (2.1) still has a unique solution on $$[t_{0}, T]$$ for a given u(⋅). In this work, we consider all three cases and assume that our system (2.1) admits a unique solution on $$[t_{0}, T]$$ for any given u(⋅). The following definition of the controllability of the system (2.1) is adopted in this paper. Definition 2.1 (Controllability) The system (2.1) is said to be controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$ if, for every pair of vectors $$(\mathbf{x}_{0}, \mathbf{x}_{_{T}})\in \mathbb{R}^{n}\times \mathbb{R}^{n}$$ and for every continuous and bounded function $$\mathbf{u}_{0}(\cdot ):[t_{0}-h_{N}, t_{0})\rightarrow \mathbb{R}^{m},$$ there exists at least one control function $$\mathbf{u}(\cdot ) \in \mathcal{B}_{2}$$ such that, with this control function on $$[t_{0}, T],$$ the corresponding solution to the system (2.1) with $$\mathbf{x}(t_{0})=\mathbf{x}_{0}$$ and $$\mathbf{u}(t)=\mathbf{u}_{0}(t),$$ $$t \in [t_{0}-h_{N}, t_{0})$$, satisfies the condition $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ Remark 2.1 In the above definition, if $$\mathbf{x}_{_{T}}=\mathbf{0},$$ then the system (2.1) is said to be null controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ We use the following lemmas in establishing the controllability of the system (2.1). Lemma 2.1 (Strong version of the Schauder fixed-point theorem (Morris & Noussair, 1975)) Let $$\mathcal{X}$$ be a Banach space and let $$\mathcal{B}\subset \mathcal{X}$$ be a non-empty, closed and convex subset. 
If $$\mathcal{K}(\cdot )$$ is a continuous mapping of $$\mathcal{B}$$ into a compact subset of $$\mathcal{B},$$ then $$\mathcal{K}(\cdot )$$ has at least one fixed point in $$\mathcal{B}.$$ Lemma 2.2 (Banach contraction mapping principle (Farmakis & Moskowitz, 2013)) If $$\mathcal{X}$$ is a complete metric space and the mapping $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ is a contraction, then $$\mathcal{K}(\cdot )$$ has a unique fixed point in $$\mathcal{X}.$$

3. Controllability of the linear system without impulses and with multiple constant time delays in the control function

In 2012, Liu and Zhao obtained a necessary and sufficient condition (see Liu & Zhao, 2012, Theorem 3.8, p. 564) for the controllability of the linear impulsive system with multiple constant time delays in control. Further, this controllability condition was specialized to the system without impulses in Corollary 3.9, p. 566 of the same paper. In this section, we recall this necessary and sufficient condition for the controllability of the corresponding linear system (see (3.1) below) without impulses and with multiple constant time delays in the control function; we present it here for completeness, and the outcome is useful in establishing the controllability results given in Section 4 below. The associated linear system of (2.1) without impulses is given by   \begin{equation} \left. \begin{array}{rl} \dot{\mathbf{x}}(t)&\!\!\!=\mathbf{A}(t)\mathbf{x}(t)+\sum\limits_{i=1}^{N}\mathbf{B}_{i}(t)\mathbf{u}(t-h_{i}), \ t \in [t_{0}, T],\\ \mathbf{x}(t_{0})&\!\!\!=\mathbf{x}_{0}, \\[2pt] \mathbf{u}(t)&\!\!\!=\mathbf{u}_{0}(t),\quad t\in [t_{0}-h_{N}, t_{0}). \end{array} \right \} \end{equation} (3.1) Let $$\boldsymbol{\Phi }(t, t_{0})$$ be the state-transition matrix; then $$\mathbf{x}(t)=\boldsymbol{\Phi }(t, t_{0})\mathbf{x}_{0}$$ is the unique solution to the homogeneous system $$\dot{\mathbf{x}}(t)=\mathbf{A}(t)\mathbf{x}(t)$$ with $$\mathbf{x}(t_{0})=\mathbf{x}_{0}.$$ Hence, the solution to the linear system (3.1) at any time $$t\in [t_{0}, T]$$ is given by   \begin{align*}\mathbf{x}(t)&=\boldsymbol{\Phi}(t, t_{0})\mathbf{x}_{0}+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\mathbf{u}(s-h_{i})\,\mathrm{d}s\\ &=\boldsymbol{\Phi}(t, t_{0})\mathbf{x}_{0}+\boldsymbol{\Phi}(t, t_{0})\sum_{i=1}^{N}\int_{t_{0}-h_{i}}^{t_{0}}\boldsymbol{\Phi}(t_{0}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}_{0}(s)\,\mathrm{d}s\\ &\quad+\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Let us denote   \begin{align} \sum_{i=1}^{N}\int_{t_{0}-h_{i}}^{t_{0}}\boldsymbol{\Phi}(t_{0}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}_{0}(s)\,\mathrm{d}s=\mathbf{a}_{0}\in \mathbb{R}^{n}; \end{align} (3.2)therefore, we have   \begin{equation} \mathbf{x}(t)=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{equation} (3.3)Now let us simplify the summation given in equation (3.3) as   \begin{align} &\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\nonumber\\ &\quad=\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. 
\end{align} (3.4)Using (3.4) in (3.3), the solution to the system (3.1) can be written as   \begin{align}\mathbf{x}(t)&=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\nonumber\\ &\quad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{align} (3.5)Let us now define   \begin{align} \mathbf{W}_{l}:=\mathbf{W}_{l}(T)&=\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s,\nonumber\\[10pt] \mathbf{W}_{N}:=\mathbf{W}_{N}(T)&=\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s, \end{align} (3.6)where l = 1, 2, …, (N − 1). Lemma 3.1 Each $$\mathbf{W}_{i}$$ given in (3.6) is a positive semidefinite symmetric n × n matrix and $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdot \cdot \cdot |\mathbf{W}_{N})= \textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot \cdot \cdot +\mathbf{W}_{N}).$$ Proof. Let P(s) be any n × m matrix with each of its entry to be a real valued continuous function of s. Then for each fixed $$s\in [t_{0}, T],$$ for all $$\mathbf{v}\in \mathbb{R}^{n} $$ and under the usual inner product on $$\mathbb{R}^{n},$$ we have   $$ \big\langle \mathbf{P}(s)\mathbf{P}^{*}(s)\mathbf{v}, \mathbf{v}\big\rangle_{\mathbb{R}^{n}}=\big\langle \mathbf{P}^{*}(s)\mathbf{v}, \mathbf{P}^{*}(s)\mathbf{v}\big\rangle_{\mathbb{R}^{m}}=\left\|\mathbf{P}^{*}(s)\mathbf{v}\right\|^{2}_{\mathbb{R}^{m}}\geq 0,$$which shows that $$\mathbf{P}(s)\mathbf{P}^{*}(s)$$ is positive semidefinite symmetric n × n matrix for each $$s\in [t_{0}, T]$$. Now, for $$\alpha <\beta ,$$ let us consider   $$ \Bigg\langle \int_{\alpha}^{\beta}\mathbf{P}(s)\mathbf{P}^{*}(s)\,\mathrm{d}s\mathbf{v}, \mathbf{v}\Bigg \rangle_{\mathbb{R}^{n}}=\int_{\alpha}^{\beta}\big(\mathbf{P}^{*}(s)\mathbf{v}\big)^{*}\big(\mathbf{P}^{*}(s)\mathbf{v}\big)\,\mathrm{d}s=\int_{\alpha}^{\beta}\left\|\mathbf{P}^{*}(s)\mathbf{v}\right\|^{2}_{\mathbb{R}^{m}} \,\mathrm{d}s\geq 0, $$which easily shows that $$\int _{\alpha }^{\beta }\mathbf{P}(s)\mathbf{P}^{*}(s)\,\mathrm{d}s$$ is positive semidefinite symmetric n × n matrix. Therefore each $$\mathbf{W}_{i}$$ given in (3.6) is a positive semidefinite symmetric n × n matrix. Also, we know that   $$ \big\langle(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot\cdot\cdot+\mathbf{W}_{N})\mathbf{v}, \mathbf{v}\big\rangle_{\mathbb{R}^{n}}=\langle\mathbf{W}_{1}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}+\langle\mathbf{W}_{2}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}+\cdot\cdot\cdot+\langle\mathbf{W}_{N}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}\geq 0,$$for all $$\mathbf{v}\in \mathbb{R}^{n},$$ which shows that $$(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot \cdot \cdot +\mathbf{W}_{N})$$ is a positive semidefinite symmetric n × n matrix. 
Now, it remains to prove that $$\textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N})=\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}).$$ This follows from the following estimation:   \begin{align*} \mathbf{v}\in \textrm{ker}\big[ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]&\Longleftrightarrow(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}(\mathbf{v})=\mathbf{0}\\ &\Longleftrightarrow \mathbf{W}_{i}^{*}(\mathbf{v})=\mathbf{0}, \quad \textrm{ for all }i, \textrm{ as each } \mathbf{W}_{i} \textrm{ is a positive}\\ &\qquad\qquad\textrm{semidefinite matrix}\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \big(\mathbf{W}_{i}^{*}\big), \quad \textrm{ for all }i\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \begin{bmatrix} \mathbf{W}_{1}^{*} \\ \mathbf{W}_{2}^{*} \\ \vdots \\ \mathbf{W}_{N}^{*} \end{bmatrix}_{Nn\times n}\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \left(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N}\right)^{*}. \end{align*}Therefore, by using the rank-nullity theorem, we have   \begin{align*} &\qquad \textrm{ker}\big[ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]=\,\textrm{ker}\big[(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N})^{*}\big]\\ &\Longrightarrow n-\textrm{rank}\big[(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]=\,n-\textrm{rank}\big[(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N})^{*}\big]\\ &\Longrightarrow \textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})=\,\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N}), \end{align*}which completes the proof. For a necessary and sufficient condition for the controllability of the system (3.1), see the following theorem. Theorem 3.1 The system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$ if and only if $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ Proof. In order to show the sufficiency, let us assume that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ Therefore, from Lemma 3.1, it is clear that the matrix $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is positive definite. Let us define a control function as follows:   \begin{align} \mathbf{u}(t):= \begin{cases} \left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\left[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\right], & t\in [t_{0}, T-h_{N}],\\[10pt] \left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\left[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\right], & t\in (T-h_{l+1}, T-h_{l}],\\[10pt] \mathbf{0}, & t\in (T-h_{1}, T], \end{cases} \end{align} (3.7) where l = 1, 2, …, (N − 1). The state x(t) given in (3.5) at t = T becomes   \begin{align*} \mathbf{x}(T)=&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. 
\end{align*} Substituting u(t) from (3.7) in the above expression, we get   \begin{align*} \mathbf{x}(T) =& \, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &+\left\{\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\\ &\qquad\left.+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right\}\\ &\qquad\times \mathbf{W}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]\\ =& \, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\{\mathbf{W}_{N}+\cdots+\mathbf{W}_{1}\}\mathbf{W}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]\\ =&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\mathbf{WW}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]=\mathbf{x}_{_{T}}. \end{align*} Hence, the system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ The converse can be proved by contradiction. Let the system (3.1) be controllable on $$[t_{0}, T],$$ but assume that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}) < n.$$ Then from Lemma 3.1, we know that $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is a singular matrix. Thus, there exists at least one non-zero vector, say $$\mathbf{v}\in \mathbb{R}^{n}$$ such that Wv = 0, i.e.,   $$ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})\mathbf{v}=\mathbf{0}\Longrightarrow \mathbf{W}_{1} \mathbf{v}+\mathbf{W}_{2} \mathbf{v}+\cdots+\mathbf{W}_{N} \mathbf{v}=\mathbf{0}.$$ Hence, $$\mathbf{W}_{i}\mathbf{v}=\mathbf{0}$$ for all i (since each $$\mathbf{W}_{i}$$ is positive semidefinite matrix). This shows that each $$\mathbf{W}_{i}$$ is a singular matrix and $$\big \langle \mathbf{W}_{i}\mathbf{v}, \mathbf{v}\big \rangle _{\mathbb{R}^{n}}=0,$$ for all i, i.e.,   \begin{multline*} \begin{cases} \left\langle\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s \mathbf{v}, \mathbf{v}\right\rangle_{\mathbb{R}^{n}}=0,\\[15pt] \left\langle\int_{t_{0}}^{T-h_{N}}\left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s \mathbf{v}, \mathbf{v}\right\rangle_{\mathbb{R}^{n}}=0. \end{cases} \\ \Longrightarrow \begin{cases} \int_{T-h_{l+1}}^{T-h_{l}}\left\|\,\sum\limits_{i=1}^{l}\mathbf{B}^{*}_{i}(s+h_{i})\boldsymbol{\Phi}^{*}(T, s+h_{i})\mathbf{v}\right\|_{\mathbb{R}^{m}}^{2}\,\mathrm{d}s=0,\quad \textrm{ for all } l=1,2,\ldots,(N-1),\\[10pt] \int_{t_{0}}^{T-h_{N}}\left\|\,\sum\limits_{i=1}^{N}\mathbf{B}^{*}_{i}(s+h_{i})\boldsymbol{\Phi}^{*}(T, s+h_{i})\mathbf{v}\right\|_{\mathbb{R}^{m}}^{2}\,\mathrm{d}s=0. 
\end{cases} \end{multline*} Since each $$\mathbf{B}_{i}^{*}(\cdot )$$ and $$\boldsymbol{\Phi }^{*}(\cdot , \cdot )$$ is a continuous function, the above integrals imply that   \begin{align} \mathbf{v}^{*}\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]=\mathbf{0},\quad \textrm{ for all } l=1,2,\ldots,N\textrm{ and some } \mathbf{v}\neq\mathbf{0}\in\mathbb{R}^{n}. \end{align} (3.8)We assumed that the system (3.1) is controllable on $$[t_{0}, T];$$ in particular, the system is null controllable. Now, let us choose an initial state $$\mathbf{x}_{0}=-\mathbf{a}_{0}+\boldsymbol{\Phi }^{-1}(T, t_{0})\mathbf{v}$$ and a final state x(T) = 0. Then with some control u(⋅), the state of the system (3.1) given in (3.5) satisfies x(T) = 0. That is,   \begin{align*} \mathbf{0}=\mathbf{x}(T)=&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ =&\,\boldsymbol{\Phi}(T, t_{0})\boldsymbol{\Phi}^{-1}(T, t_{0})\mathbf{v}+\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Therefore, we have   \begin{align*} \mathbf{v}\!=\!-\!\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s-\!\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\!\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Premultiplying the above equation by $$\mathbf{v}^{*}$$, we get   \begin{align*} \mathbf{v}^{*}\mathbf{v} =&\,-\int_{t_{0}}^{T-h_{N}}\mathbf{v}^{*}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &-\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\mathbf{v}^{*}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Now using (3.8) in the above equation, we get $$\mathbf{v}^{*}\mathbf{v}=0 \Longrightarrow \|\mathbf{v}\|_{\mathbb{R}^{n}}^{2}=0 \Longrightarrow \mathbf{v}=\mathbf{0}$$, which is a contradiction. Hence, our assumption that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}) < n$$ is false. Thus, we finally have $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ 
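As a numerical illustration of Theorem 3.1, the following Python sketch assembles the matrices $$\mathbf{W}_{1}, \mathbf{W}_{2}$$ of (3.6) by quadrature, checks the rank condition and verifies that the control (3.7) steers an instance of the linear delay system (3.1) from $$\mathbf{x}_{0}$$ to a prescribed $$\mathbf{x}_{_{T}}.$$ The data (n = 2, m = 1, N = 2 delays, constant A and $$\mathbf{B}_{i}$$) are hypothetical and serve only as an illustration; they are not an example from the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical data (not from the paper): n = 2, m = 1, N = 2 delays, no impulses,
# i.e. an instance of the linear delay system (3.1) with constant A and B_i.
t0, T = 0.0, 2.0
h = [0.1, 0.3]                                       # h_1 <= h_2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
Bs = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.5]])]

Phi = lambda t, s: expm(A * (t - s))                 # state-transition matrix

def V(s, l):
    # sum_{i=1}^{l} Phi(T, s + h_i) B_i(s + h_i); the B_i are constant here
    return sum(Phi(T, s + h[i]) @ Bs[i] for i in range(l))

def gram(a, b, l, npts=400):
    # W-block of (3.6) over [a, b]: integral of V V^* (trapezoidal rule)
    ss = np.linspace(a, b, npts)
    vals = np.array([V(s, l) @ V(s, l).T for s in ss])
    ds = (b - a) / (npts - 1)
    return ds * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

W1 = gram(T - h[1], T - h[0], 1)                     # l = 1 block of (3.6)
W2 = gram(t0, T - h[1], 2)                           # l = N = 2 block of (3.6)
W = W1 + W2
print("rank(W1|W2) =", np.linalg.matrix_rank(np.hstack([W1, W2])))   # expect n = 2

# steering control (3.7); u_0 = 0 on [t0 - h_2, t0), so a_0 = 0 in (3.2)
x0, xT = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c = np.linalg.solve(W, xT - Phi(T, t0) @ x0)

def u(t):
    if t < t0 or t > T - h[0]:
        return np.zeros(1)                           # zero on (T - h_1, T] and before t0
    l = 2 if t <= T - h[1] else 1
    return V(t, l).T @ c

def rhs(t, x):
    return A @ x + sum(Bs[i] @ u(t - h[i]) for i in range(2)).ravel()

sol = solve_ivp(rhs, (t0, T), x0, max_step=1e-3)
print("x(T) =", sol.y[:, -1], "target:", xT)         # should be close to xT
```

Up to the quadrature and integration errors, the final state reproduces $$\mathbf{x}_{_{T}},$$ in line with the sufficiency part of the proof above.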
For an autonomous system, we can obtain the following controllability condition (similar results are available in the literature, for example, see Theorem 3.1 of Banks et al., 1971). Theorem 3.2 If the system (3.1) is autonomous, i.e., time-invariant, then the necessary and sufficient condition for the controllability of (3.1) is given by   $$ \textrm{rank}\left(\mathbf{B}_{1}|\mathbf{AB}_{1}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{1}|\mathbf{B}_{2}|\mathbf{AB}_{2}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{2}|\cdots|\mathbf{B}_{N}|\mathbf{AB}_{N}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{N}\right)=n.$$ Remark 3.1 In the system (3.1), if delays are absent in the control, i.e., $$h_{i}=0, \textrm{for all }i,$$ then $$\mathbf{W}_{1}=\mathbf{W}_{2}=\cdots =\mathbf{W}_{N-1}=\mathbf{O}$$ and $$\mathbf{W}_{N}=\int _{t_{0}}^{T}\big[\boldsymbol{\Phi }(T, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\big]\big[\boldsymbol{\Phi }(T, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\big]^{*}\, \mathrm{d}s.$$ Then we have $$\mathbf{W}=\sum_{i=1}^{N}\mathbf{W}_{i}=\mathbf{W}_{N}$$ and this matrix is called the controllability Grammian of the linear system (3.1) with no delays; such a system is controllable on $$[t_{0}, T]$$ if and only if $$ \textrm{rank}(\mathbf{W})= \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N-1}|\mathbf{W}_{N})= \textrm{rank}(\mathbf{W}_{N})=n.$$ That is, the controllability Grammian W is positive definite. Remark 3.2 It can easily be shown that our necessary and sufficient condition given in Theorem 3.1 for the controllability of the system (3.1) is equivalent to the condition given in Corollary 3.9 of the work by Liu & Zhao (2012), in which the authors provide the controllability condition $$ \textrm{rank}(\mathscr{C}_{1})=n,$$ where   \begin{align*}\mathscr{C}_{1}=&\Bigg(\int_{t_{0}}^{t_{1}-h_{N}}\mathbf{VV}^{*} \,\mathrm{d}s \,\Big|\, \int_{t_{1}-h_{N}}^{t_{1}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s \,\Big|\cdots\Big|\,\int_{t_{1}-h_{2}}^{t_{1}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\, \int_{t_{1}-h_{1}}^{t_{2}-h_{N}}\mathbf{VV}^{*}\,\mathrm{d}s \\&\quad \Big|\,\int_{t_{2}-h_{N}}^{t_{2}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\cdots\Big|\,\int_{t_{2}-h_{2}}^{t_{2}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\cdots\Big|\, \int_{t_{M-1}-h_{1}}^{t_{M}-h_{N}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\,\int_{t_{M}-h_{N}}^{t_{M}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\cdots\\&\quad\Big|\,\int_{t_{M}-h_{2}}^{t_{M}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\, \int_{t_{M}-h_{1}}^{T-h_{N}} \mathbf{VV}^{*}\,\mathrm{d}s \,\Big|\, \mathbf{W}_{N-1}\,\Big|\, \mathbf{W}_{N-2} \,\Big|\cdots\Big|\, \mathbf{W}_{1} \Bigg), \end{align*}where $$\mathbf{V}=\sum _{i=1}^{N}\boldsymbol{\Phi }(T, s+h_{i})\mathbf{B}_{i}(s+h_{i}).$$ As we are dealing with the linear delay system without impulses, the matrix $$\mathscr{C}_{1}$$ reduces to $$(\mathbf{W}_{N}|\mathbf{W}_{N-1}|\cdots |\mathbf{W}_{1}).$$ Remark 3.3 It is always true that, if at least one of the $$\mathbf{W}_{i}$$ is of full rank n, then $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\!\cdots\! |\mathbf{W}_{N})\!=n$$. Thus, $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is a positive definite matrix and, by Theorem 3.1, system (3.1) is controllable on $$[t_{0}, T].$$ But if $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n,$$ then it may be possible that none of the $$\mathbf{W}_{i}$$ has full rank n. For example, let $$\mathbf{W}_{1}=\left(\begin{array}{@{}cc@{}} 1 & 0 \\ 0 & 0 \end{array}\right),$$ $$\mathbf{W}_{2}=\left(\begin{array}{@{}cc@{}} 0 & 0 \\ 0 & 2 \end{array}\right);$$ both have rank = 1, but the augmented matrix $$(\mathbf{W}_{1}|\mathbf{W}_{2})=\left(\begin{array}{@{}cc|cc@{}} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 \end{array}\right)$$ has full rank = 2. 
Therefore, we conclude that even if none of $$\mathbf{W}_{i}$$ are positive definite matrices, the system (3.1) is still controllable on $$[t_{0}, T].$$ 4. Main results In this section, we obtain the controllability results of the system (2.1) for certain classes of non-linearities f(⋅, ⋅, ⋅) and impulse functions $$\mathbf{g}_{k}(\cdot ,\cdot )$$. Throughout this section, we assume that the semilinear impulsive delay system (2.1) admits only one solution for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ and the linear delay system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ Here, for a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}$$ and for a given u(⋅), the solution to (2.1) satisfies the following equation:   \begin{align} \mathbf{x}(t)=\left\{ \begin{aligned} &\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s, \quad \textrm{ for all } t\in [t_{0}, t_{1}],\\ &\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\int_{t_{0}}^{t}\Phi(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &+\sum_{j=1}^{k} \boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})), \quad \textrm{ for all }t\in(t_{k}, t_{k+1}], \end{aligned} \right. \end{align} (4.1)where k = 1, 2, …, M and $$t_{M+1}=T.$$ Before proceeding for the establishment of the controllability results of the system (2.1), first we consider the real Banach space $$\mathcal{X}:=\mathcal{B}_{1}\times \mathcal{B}_{2}=\{(\mathbf{x}, \mathbf{u}):\mathbf{x}\in \mathcal{B}_{1}, \mathbf{u}\in \mathcal{B}_{2} \},$$ endowed with the norm   $$ \left\|(\mathbf{x}, \mathbf{u})\right\|_{\mathcal{X}}:=\|\mathbf{x}\|_{\mathcal{B}_{1}}+\|\mathbf{u}\|_{\mathcal{B}_{2}}.$$Next we define an operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ by the formula   \begin{align} \mathcal{K}(\mathbf{x}, \mathbf{u}):=\big(\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\big)=( \boldsymbol{y}, \boldsymbol{v}), \end{align} (4.2)where $$\mathcal{K}_{1}(\cdot ):\mathcal{X}\rightarrow \mathcal{B}_{1}$$ is defined by   \begin{align} &\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)= \boldsymbol{y}(t)\nonumber\\ &\quad:=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &\qquad+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\mathrm{d}s\right.\nonumber\\ &\qquad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\!\times\! 
\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\mathrm{d}s\right\}\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &\qquad+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\textrm{d}s, \quad \textrm{ for all }t \in [t_{0}, t_{1}], \end{align} (4.3)  \begin{align}&\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)= \boldsymbol{y}(t) \nonumber\\ &\quad:=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &\qquad+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\mathrm{d}s\nonumber\right.\\ &\qquad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]\!\times\! \left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\!\,\mathrm{d}s\right\}\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &\qquad+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})),\quad \textrm{for all }t \in (t_{k}, t_{k+1}], \end{align} (4.4)and $$\mathcal{K}_{2}(\cdot ):\mathcal{X}\rightarrow \mathcal{B}_{2}$$ is defined by   \begin{align} \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(t)= \textbf{v}(t):=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0}, T-h_{N}],\\ \left[\,\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1}, T-h_{l}],\\ \mathbf{0}, & t\in (T-h_{1}, T], \end{cases} \end{align} (4.5)where l = 1, 2, …, (N − 1) and $$\mathcal{L}(\cdot ):\mathcal{X}\rightarrow \mathbb{R}^{n}$$ is defined by   \begin{align} \mathcal{L}(\mathbf{x}, \mathbf{u}):= \mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})-\int_{t_{0}}^{T}\boldsymbol{\Phi}(T, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s-\sum_{j=1}^{M}\boldsymbol{\Phi}(T, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})). \end{align} (4.6)The following theorem is useful in establishing the controllability results of the system (2.1). Theorem 4.1 The system (2.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$ if and only if for every initial state $$\mathbf{x}_{0}$$ and a final state $$\mathbf{x}_{_{T}},$$ the operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ given in equations (4.2–4.6) has a fixed point, that is, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ for some $$(\mathbf{x}, \mathbf{u})\in \mathcal{X}.$$ Proof. 
Let the system (2.1) be controllable on $$[t_{0}, T],$$ then there exists a control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}$$, which steers the state of the system given in (4.1) from $$\mathbf{x}_{0}$$ to $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ That is,   \begin{align*} \mathbf{x}_{_{T}}=&\, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\int_{t_{0}}^{T}\Phi(T, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &+\sum_{j=1}^{M} \boldsymbol{\Phi}(T, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})). \end{align*}Combining the above equation with (4.6), we get   \begin{align} \begin{aligned} \mathcal{L}(\mathbf{x}, \mathbf{u})=&\, \int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{aligned} \end{align} (4.7)We choose a function u(⋅) satisfying (4.7) as   \begin{align} \begin{aligned} \mathbf{u}(t)&=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0},T-h_{N}],\\[8pt] \left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1},T-h_{l}],\\[4pt] \mathbf{0}, & t\in (T-h_{1},T]. \end{cases} \end{aligned} \end{align} (4.8)Now, if we compare (4.8) with (4.5), it can be easily seen that $$\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})=\mathbf{u}.$$ Furthermore, with this control function, the corresponding solution (4.1) reduces to (4.3) and (4.4). Hence, we have $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})=\mathbf{x}.$$ Therefore, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ that is, $$\mathcal{K}(\cdot )$$ has a fixed point. For the converse, let us assume that the operator $$\mathcal{K}(\cdot )$$ has a fixed point, that is, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ for some $$(\mathbf{x}, \mathbf{u})\in \mathcal{X}.$$ Our purpose is to show that there exists some control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}$$ such that $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ Since $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ from (4.4) and (4.5), we obtain the following equations:   \begin{align}\mathbf{x}(t)=&\,\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\nonumber\\ &\quad\ \left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\!\right]\!\times\! 
\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\mathrm{d}s\right\}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})), \quad \textrm{ for all }t \in (t_{k}, t_{k+1}], \end{align} (4.9)and   \begin{align} \mathbf{u}(t)=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0}, T-h_{N}],\\ \left[\,\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1}, T-h_{l}],\\ \mathbf{0}, & t\in (T-h_{1}, T]. \end{cases} \end{align} (4.10)In order to get $$\mathbf{x}(T)=\mathbf{x}_{_{T}},$$ let us put t = T in (4.9) and use (4.6) to obtain   \begin{align*} \mathbf{x}(T)=&\,\mathbf{x}_{_{T}}-\mathcal{L}(\mathbf{x}, \mathbf{u})\\ &+\left\{\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\nonumber\\ &+\left.\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\!\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s \!+\! h_{i})\right]\!\times\! \left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\!\,\mathrm{d}s\right\}\!\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ =&\,\mathbf{x}_{_{T}}-\mathcal{L}(\mathbf{x}, \mathbf{u})+\mathbf{W}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ =&\,\mathbf{x}_{_{T}}. \end{align*}Hence, the system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.1 There are several methods to show the existence of a fixed point of an operator. In this paper, we use Lemmas 2.1 and 2.2 to show the existence of a fixed point of the operator $$\mathcal{K}(\cdot ).$$ Now we introduce the following notations for our convenience, which we use in the proofs of the next theorems:   \begin{align*} M_{1}:=&\,\sup \limits_{t_{0}\leq s\leq t\leq T } \left\|\boldsymbol{\Phi}(t, s)\right\|,\\ M_{2}:=&\,\left\|\mathbf{x}_{0}+\mathbf{a}_{0}\right\|_{\mathbb{R}^{n}},\\ M_{3}:=&\,\sup\limits_{t\in [t_{0}, T]}\left\|\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\\ &\qquad\ \quad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right\|,\\ M_{4}:=&\,\max_{l=1,\ldots,(N-1)}\left\{\sup_{[t_{0}, T-h_{N}]}\left\|\left[\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\right\|,\right.\\ &\qquad\qquad\qquad\left.\sup_{(T-h_{l+1}, T-h_{l}]}\left\|\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\right\|\right\}.\end{align*} 
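To illustrate how the fixed-point characterization of Theorem 4.1 can be exploited numerically, the following Python sketch runs a simplified successive-approximation scheme in the spirit of the operator $$\mathcal{K}$$ of (4.2)–(4.6): at each step the control is updated according to (4.5) and the state is then re-integrated from (2.1) with the updated control (rather than taken from (4.3)–(4.4)). All data (a single delay, one impulse, a bounded non-linearity) are hypothetical and are not an example from the paper; convergence is expected for such mildly non-linear data but is not claimed in general.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical data (not from the paper): n = 2, m = 1, one delay h (N = 1),
# one impulse time t1 (M = 1), bounded f and g; u_0 = 0 on [t0 - h, t0), so a_0 = 0.
t0, T, h, t1 = 0.0, 2.0, 0.2, 1.0
A = np.array([[0.0, 1.0], [-1.0, -0.3]])
B = np.array([[0.0], [1.0]])
x0, xT = np.array([1.0, 0.0]), np.array([0.0, 0.5])

Phi = lambda t, s: expm(A * (t - s))
f = lambda t, x, v: 0.2 * np.tanh(x)                  # bounded non-linearity
g = lambda x, v: np.array([0.1 * np.sin(x[0]), 0.0])  # bounded jump at t1

def trap(vals, a, b):
    # trapezoidal rule for an array of sampled integrand values on [a, b]
    d = (b - a) / (len(vals) - 1)
    return d * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

# W = W_N of (3.6) for N = 1 (the only Gramian-type block)
ss = np.linspace(t0, T - h, 600)
W = trap(np.array([(Phi(T, s + h) @ B) @ (Phi(T, s + h) @ B).T for s in ss]), t0, T - h)

def solve_state(u):
    # integrate (2.1) piecewise and apply the jump g at t1
    rhs = lambda t, x: A @ x + (B @ u(t - h)).ravel() + f(t, x, u(t))
    xa = solve_ivp(rhs, (t0, t1), x0, max_step=1e-2, dense_output=True)
    xb = solve_ivp(rhs, (t1, T), xa.y[:, -1] + g(xa.y[:, -1], u(t1)),
                   max_step=1e-2, dense_output=True)
    return xa, xb

def L_value(xa, xb, u):
    # L(x, u) of (4.6), the integral being split at the impulse time t1
    def piece(sol, a, b):
        s = np.linspace(a, b, 300)
        return trap(np.array([Phi(T, r) @ f(r, sol.sol(r), u(r)) for r in s]), a, b)
    return (xT - Phi(T, t0) @ x0 - piece(xa, t0, t1) - piece(xb, t1, T)
            - Phi(T, t1) @ g(xa.y[:, -1], u(t1)))

# successive approximations: update the control as in (4.5), then re-solve the state
u = lambda t: np.zeros(1)
for _ in range(6):                                    # a few iterations suffice here
    xa, xb = solve_state(u)
    c = np.linalg.solve(W, L_value(xa, xb, u))
    u = lambda t, c=c: (Phi(T, t + h) @ B).T @ c if t0 <= t <= T - h else np.zeros(1)

xa, xb = solve_state(u)
print("x(T) =", xb.y[:, -1], "target:", xT)           # should approach xT
```

When the iterates settle, the limiting pair (x, u) satisfies the fixed-point relation of Theorem 4.1 approximately, and the terminal state matches the prescribed target up to discretization error.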
The following three verifiable classes for the continuous functions f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ are considered in this paper, for which we will obtain the controllability results of the system (2.1): (i) The class of bounded functions: $$\mathfrak{B}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} \ \textrm{is a bounded function}\}$$ and $$\mathfrak{B}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n}\ \textrm{is a bounded function}\}$$; (ii) The class of Lipschitz functions: $$\mathcal{L}\mathcal{C}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a Lipschitz condition with respect to the second and third arguments$$\}$$ and $$\mathcal{L}\mathcal{C}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a Lipschitz condition$$\};$$ (iii) The class of linear growth functions: $$\mathcal{G}\mathcal{C}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a linear growth condition with respect to the second and third arguments$$\}$$ and $$\mathcal{G}\mathcal{C}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a linear growth condition$$\}.$$

4.1. Controllability of the system (2.1) for a class of bounded non-linearities

In this subsection, we establish controllability results of the system (2.1) for the first class given above. Theorem 4.2 In system (2.1), let us assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ with bound K ≥ 0, that is,   $$ \left\|\mathbf{f}(t, \mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n}}\leq K, \quad \textrm{for all }(t, \mathbf{v}, \mathbf{w})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$ with bound $$\vartheta _{k}\geq 0,$$ that is,   $$ \left\|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n}}\leq \vartheta_{k}, \quad \textrm{for all }(\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m}.$$ Furthermore, setting $$\vartheta =\sum \limits _{k=1}^{M} \vartheta _{k}\geq 0,$$ we have $$\sum \limits _{k=1}^{M}\|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq \vartheta$$ $$\textrm{ for all } (\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m}.$$ Then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. For $$r_{0}>0,$$ let $$\mathcal{B}=\{(\mathbf{x}, \mathbf{u})\in \mathcal{X}:0\leq \|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}$$ be a non-empty, closed and convex subset of $$\mathcal{X}.$$ In order to prove this theorem, we apply the Schauder fixed-point theorem (see Lemma 2.1) to show that $$\mathcal{K}(\cdot )$$ is a continuous operator from $$\mathcal{B}$$ into a compact subset of $$\mathcal{B}.$$ Then the rest of the proof follows from Theorem 4.1. We divide this proof into the following three steps: Step 1:$$\mathcal{K}(\cdot )$$ is a continuous operator on $$\mathcal{B}$$. 
First we show that $$\mathcal{K}_{1}(\cdot )$$ and $$\mathcal{K}_{2}(\cdot )$$ are continuous operators on $$\mathcal{B}.$$ For this, let $$(\mathbf{x}_{1}, \mathbf{u}_{1}),$$$$(\mathbf{x}_{2}, \mathbf{u}_{2}) \in \mathcal{B}$$ be such that $$\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Our aim is to establish that $$\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0$$ and $$\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Note that f(⋅, ⋅, ⋅) is a continuous function on its domain, so in particular it is continuous with respect to the second and third arguments, and hence we have   $$ \sup \limits_{t\in [t_{0}, T]}\|\mathbf{f}(t, \mathbf{x}_{1}(t), \mathbf{u}_{1}(t))-\mathbf{f}(t, \mathbf{x}_{2}(t), \mathbf{u}_{2}(t))\|_{\mathbb{R}^{n}}\rightarrow 0,$$as $$\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Similarly, we have   $$ \sup \limits_{t\in [t_{0}, T]}\|\mathbf{g}_{k}(\mathbf{x}_{1}(t), \mathbf{u}_{1}(t))-\mathbf{g}_{k}(\mathbf{x}_{2}(t), \mathbf{u}_{2}(t))\|_{\mathbb{R}^{n}}\rightarrow 0,\quad \textrm{for all }k.$$The continuity of $$\mathcal{K}_{1}(\cdot )$$ follows from the following estimation:   \begin{align*}&\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathcal{B}_{1}}\\ &\quad=\sup \limits_{t \in [t_{0}, T]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}}\\ &\quad= \max \limits_{k}\bigg\{\sup_{t\in[t_{0}, t_{1}]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}},\sup_{t\in (t_{k}, t_{k+1}]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}}\bigg\}\\ &\quad=\max \limits_{k}\Bigg\{\sup_{t\in[t_{0}, t_{1}]}\left\|\left(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\right.\\ &\qquad\qquad\qquad\quad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right)\\ &\qquad\quad\qquad\left.\times \mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\left[\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\right]\,\mathrm{d}s\right\|_{\mathbb{R}^{n}},\\ &\qquad\sup_{t\in(t_{k}, t_{k+1}]}\bigg\|\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &\quad\qquad\qquad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\qquad\qquad\quad\times \mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, 
\mathbf{u}_{2})\big)+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\left[\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\right]\,\mathrm{d}s\\ &\qquad\qquad\quad+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})[\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))]\bigg\|_{\mathbb{R}^{n}}\Bigg\}\\ &\quad\leq M_{3}\|\mathbf{W}^{-1}\|\big\|\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathbb{R}^{n}}\\ &\qquad+\sup_{t_{0}\leq s\leq t\leq T}\big\|\boldsymbol{\Phi}(t, s)\big\| T\sup_{s\in[t_{0}, T]}\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\\ &\qquad+\sup_{t_{0}\leq s\leq t\leq T}\big\|\boldsymbol{\Phi}(t, s)\big\| \sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\\ &\quad\leq M_{1}\Big(1+M_{3}\|\mathbf{W}^{-1}\|\Big)\\ &\qquad\left(T\!\sup_{s\in[t_{0}, T]}\!\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))\!-\!\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\!+\!\!\sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))\!-\!\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\right). \end{align*}The continuity of $$\mathcal{K}_{2}(\cdot )$$ follows from the following estimation:   \begin{align*}&\big\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathcal{B}_{2}}\\[10pt] &\quad=\sup_{t\in[t_{0}, T]}\big\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{m}}\\[10pt] &\quad=\max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left[\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\right\|_{\mathbb{R}^{m}},\right.\\[10pt] &\qquad\qquad\quad\left.\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\right\|_{\mathbb{R}^{m}}\right\}\\[10pt] &\quad\leq M_{4}\big\|\mathbf{W}^{-1}\big\|\big\|\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathbb{R}^{n}}\\[10pt] &\quad\leq M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\\[10pt] &\qquad\left(T\!\sup_{s\in[t_{0}, T]}\!\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))\!-\!\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\!+\!\!\sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))\!-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\right)\!. 
\end{align*}Finally, the continuity of $$\mathcal{K}(\cdot )$$ follows from the following estimate:   \begin{align*} \|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}&=\|\big(\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1}), \mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})\big)-\big(\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2}), \mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\|_{\mathcal{X}}\\[4pt] &=\|\big(\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2}), \mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\|_{\mathcal{X}}\\[4pt] &=\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{2}}\!. \end{align*}Step 2:$$\mathcal{K(B)}$$ is a compact set. In order to prove this, we first claim that $$\mathcal{K}_{1}(\mathcal{B})=\{\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}): \|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}\subset \mathcal{B}_{1}$$ is an equicontinuous set on each of the subintervals $$[t_{0}, t_{1}],$$$$(t_{k}, t_{k+1}], k=1, 2,\ldots ,M, $$ and uniformly bounded on $$[t_{0}, T].$$ Similarly, $$\mathcal{K}_{2}(\mathcal{B})=\{\mathcal{K}_{2}(\mathbf{x}, \mathbf{u}):\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}\subset \mathcal{B}_{2} $$ is an equicontinuous set on each of the subintervals $$[t_{0}, T-h_{N}], (T-h_{N}, T-h_{N-1}],\ldots ,(T-h_{1}, T],$$ and uniformly bounded on $$[t_{0}, T].$$ Before proceeding, we need the following estimate, which is obtained from (4.6):   \begin{align} \begin{aligned} \|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+M_{1}(T K +\vartheta). \end{aligned} \end{align} (4.11) Now, let $$s_{1} < s_{2}, $$ where $$s_{1}, s_{2}\in [t_{0}, t_{1}]$$ or $$s_{1}, s_{2}\in (t_{k}, t_{k+1}],$$ and let $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B})$$ be arbitrary. 
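Before estimating the difference $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2}),$$ it may be helpful to indicate how (4.11) arises. The following is only a sketch, under the assumption that, as in (4.6), $$\mathcal{L}(\mathbf{x}, \mathbf{u})$$ is the difference between the target state $$\mathbf{x}_{_{T}}$$ and the free-plus-forced response of the system at time T; the precise definition is the one given in (4.6).   \begin{align*} % sketch only: the exact form of L(x, u) is the one defined in (4.6) \|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}&\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+\|\boldsymbol{\Phi}(T, t_{0})\|\|\mathbf{x}_{0}+\mathbf{a}_{0}\|_{\mathbb{R}^{n}}+\int_{t_{0}}^{T}\|\boldsymbol{\Phi}(T, s)\|\|\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\|_{\mathbb{R}^{n}}\,\mathrm{d}s\\ &\quad+\sum_{j=1}^{M}\|\boldsymbol{\Phi}(T, t_{j})\|\|\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\|_{\mathbb{R}^{n}}\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1}M_{2}+M_{1}(TK+\vartheta), \end{align*}where K and $$\vartheta $$ are the bounds already used above for f(⋅, ⋅, ⋅) and for the impulse terms, respectively.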
Now consider the following estimation:   \begin{align*} &\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\\ &=\left\|[\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})](\mathbf{x}_{0}+\mathbf{a}_{0})\vphantom{\sum_{j=1}^{k}\big[\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big]}\right.\\ &+\left[ \left\{\int_{t_{0}}^{s_{1}-h_{N}} \left(\sum_{i=1}^{N} \boldsymbol{\Phi}(s_{1}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\right.\right.\\ &\left.-\int_{t_{0}}^{s_{2}-h_{N}} \left(\sum_{i=1}^{N} \boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\right\}+\cdots\\ &+\left\{\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\boldsymbol{\Phi}(s_{1}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\right.\\ &\left.\left.-\int_{s_{2}-h_{2}}^{s_{2}-h_{1}}\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\right\}\right]\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ &+\int_{t_{0}}^{s_{1}}\boldsymbol{\Phi}(s_{1}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\mathrm{d}s-\int_{t_{0}}^{s_{2}}\boldsymbol{\Phi}(s_{2}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &\left.+\sum_{j=1}^{k}\big[\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big]\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\right\|_{\mathbb{R}^{n}} \end{align*}  \begin{align*} &\leq\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\|\mathbf{x}_{0}+\mathbf{a}_{0}\|_{\mathbb{R}^{n}}\\ &\quad+\Bigg\{\Bigg\|\int_{t_{0}}^{s_{1}-h_{N}}\left(\sum_{i=1}^{N}\big[\boldsymbol{\Phi}(s_{1}, s+h_{i})-\boldsymbol{\Phi}(s_{2}, s+h_{i})\big]\mathbf{B}_{i}(s+h_{i})\right)\times\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{N}}^{s_{2}-h_{N}}\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\Big\|+\cdots\\ &\quad+\Big\|\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\big(\boldsymbol{\Phi}(s_{1}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{2}-h_{2}}^{s_{1}-h_{2}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{1}}^{s_{2}-h_{1}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\Big\|\Bigg\}\big\|\mathbf{W}^{-1}\big\|\\ &\quad\times\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}+\Big\|\int_{t_{0}}^{s_{1}}\big[\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\big]\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s-\int_{s_{1}}^{s_{2}}\boldsymbol{\Phi}(s_{2}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\Big\|_{\mathbb{R}^{n}}\\ &\quad+\sum_{j=1}^{k}\big\|\boldsymbol{\Phi}(s_{1}, 
t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big\|\big\|\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\big\|_{\mathbb{R}^{n}}\\ \end{align*}  \begin{align*} &\leq M_{2}\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\\ &\quad+\!\Bigg\{\!(s_{1}\!-\!h_{N})\sup\limits_{s}\!\left\|\sum_{i=1}^{N}\!\big(\boldsymbol{\Phi}(s_{1}, s+h_{i})-\!\boldsymbol{\Phi}(s_{2}, s+\!h_{i})\big)\mathbf{B}_{i}(s+h_{i})\right\|\!\times\!\sup\limits_{s}\left\|\!\Big(\!\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s\!+h_{i})\mathbf{B}_{i}(s+\!h_{i})\Big)^{*}\right\|\\ &\quad+(s_{2}-s_{1})\sup\limits_{s}\left\|\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right\|\sup\limits_{s}\left\|\Big(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\Big)^{*}\right\|+\cdots\\ &\quad+(h_{2}\!-\!h_{1})\sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s+h_{1})-\boldsymbol{\Phi}(s_{2}, s\!+h_{1})\|\sup\limits_{s}\|\mathbf{B}_{1}(s+h_{1})\|\!\times\!\sup\limits_{s}\big\|\big(\boldsymbol{\Phi}(T, s+\!h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\big\|\\ &\quad+2(s_{2}-s_{1})\sup\limits_{s}\|\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\|\Bigg\}\\ &\quad\times \|\mathbf{W}^{-1}\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1} (TK +\vartheta)\Big)\\ &\quad+s_{1}\sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\|K+(s_{2}-s_{1})M_{1} K+\vartheta\sum_{j=1}^{k}\|\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\|. \end{align*}Observe that, the right-hand side of the above inequality is independent of the choice of x and u. Also, if we take $$s_{1}\rightarrow s_{2},$$ then we see that $$\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\rightarrow 0,$$ for all $$ \mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B}).$$ Therefore, $$\mathcal{K}_{1}(\mathcal{B})$$ is equicontinuous set on $$[t_{0}, t_{1}],$$$$(t_{k}, t_{k+1}].$$ For uniform boundedness of $$\mathcal{K}_{1}(\mathcal{B}),$$ we consider the following estimation:   \begin{align*} \|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}} &=\sup_{t\in[t_{0}, T]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}=\max \limits_{k}\left\{\sup_{t\in[t_{0}, t_{1}]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}, \sup_{t\in(t_{k}, t_{k+1}]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}\right\}\\ &=\max \limits_{k}\Bigg\{\sup_{t\in[t_{0}, t_{1}]}\Big\|\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &\quad+\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N} \boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &\quad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\quad\times \mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})+ \int_{t_{0}}^{t} \boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\Big\|_{\mathbb{R}^{n}},\end{align*}  \begin{align*} &\sup_{t\in(t_{k}, t_{k+1}]}\Big\|\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &+\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N} \boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, 
s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\times \mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})+ \int_{t_{0}}^{t} \boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k} \boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\Big\|_{\mathbb{R}^{n}}\Bigg\}\\ &\leq \sup_{t\in[t_{0}, T]}\|\boldsymbol{\Phi}(t, t_{0})\|\|\mathbf{x}_{0}+\mathbf{a}_{0}\|_{\mathbb{R}^{n}}+M_{3}\|\mathbf{W}^{-1}\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\\ &+\sup_{t_{0}\leq s\leq t\leq T}\|\boldsymbol{\Phi}(t, s)\|T\!\sup_{s\in [t_{0}, T]}\|\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\|_{\mathbb{R}^{n}}+\!\sup_{t, s\in[t_{0}, T]}\|\boldsymbol{\Phi}(t, s)\|\!\sum_{j=1}^{M}\|\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\|_{\mathbb{R}^{n}}\\ &\leq M_{1}M_{2}+M_{3}\|\mathbf{W}^{-1}\|\big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\big)+M_{1}TK+M_{1}\vartheta, \end{align*}that is,   \begin{align} \left\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\right\|_{\mathcal{B}_{1}}\leq \big(1+M_{3}\|\mathbf{W}^{-1}\|\big)\big(M_{1}M_{2}+M_{1}(TK+\vartheta)\big)+M_{3}\|\mathbf{W}^{-1}\|\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}. \end{align} (4.12)Since the right-hand side of the above inequality is independent of the choice of x and u, so the above inequality holds for any $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B}).$$ Thus, the set $$\mathcal{K}_{1}(\mathcal{B})$$ is uniformly bounded on $$[t_{0}, T].$$ To show the equicontinuity of $$\mathcal{K}_{2}(\mathcal{B}),$$ choose the elements $$s_{1}, s_{2}\in [t_{0}, T-h_{N}]$$ or $$s_{1}, s_{2}\in (T-h_{i}, T-h_{i-1}]$$ or $$s_{1}, s_{2}\in (T-h_{1}, T] $$ with $$s_{1} < s_{2},$$ and for any $$ \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{2}(\mathcal{B}),$$ consider the following estimation:   \begin{align*} &\left\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{2})\right\|_{\mathbb{R}^{m}}\\ &\quad\leq \sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\| \big\|\mathbf{W}^{-1}\big\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\\ &\quad\leq \sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\|\\ &\qquad\times \|\mathbf{W}^{-1}\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\Big). 
\end{align*}For the uniform boundedness of $$\mathcal{K}_{2}(\mathcal{B}),$$ consider the following estimate:   \begin{align*} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}&=\sup_{t\in[t_{0}, T]}\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{m}} \\ &=\max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left(\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\right\|_{\mathbb{R}^{m}},\right.\\ &\left.\qquad\qquad\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left(\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\right\|_{\mathbb{R}^{m}}\right\}\\ &\leq \max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left(\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\right\|,\right.\\ &\left.\qquad\qquad\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left(\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\right\|\right\} \big\|\mathbf{W}^{-1}\big\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}, \end{align*}that is,   \begin{align} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\leq M_{4}\big\|\mathbf{W}^{-1}\big\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\Big). \end{align} (4.13)Now $$\mathcal{K}(\mathcal{B})=\big \{\big (\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\big ):\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\big \}\subset \mathcal{K}_{1}(\mathcal{B})\times \mathcal{K}_{2}(\mathcal{B})$$ is an equicontinuous set on $$[t_{0}, t_{1}],(t_{1}, t_{2}],\ldots ,(t_{M-1}, t_{M}],(t_{M}, T-h_{N}],(T-h_{N}, T-h_{N-1}],\ldots ,(T-h_{1}, T]$$ and uniformly bounded on $$[t_{0}, T].$$ Consequently, if we take a sequence $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, \mathbf{u}))\}$$ in $$\mathcal{K}(\mathcal{B}),$$ this sequence is uniformly bounded and equicontinuous on each interval, in particular on $$[t_{0}, t_{1}],$$ so by the Arzelà–Ascoli theorem, there exists a subsequence $$\big\{\big(\mathcal{K}_{1}^{n_{1}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{1}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ of $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, \mathbf{u}))\}$$, which is uniformly convergent on $$[t_{0}, t_{1}].$$ This subsequence is again equicontinuous and uniformly bounded on each interval, in particular on $$(t_{1}, t_{2}],$$ so, for the same reason, there exists a further subsequence $$\big\{\big(\mathcal{K}_{1}^{n_{2}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{2}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ of $$\big\{\big(\mathcal{K}_{1}^{n_{1}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{1}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ which is uniformly convergent on $$[t_{0}, t_{2}].$$ Continuing this process for the intervals $$(t_{2}, t_{3}],\ldots ,(t_{M-1}, t_{M}],\ (t_{M}, T-h_{N}],\ (T-h_{N}, T-h_{N-1}],\ldots ,\ (T-h_{1}, T],$$ we see that the sequence $$\big \{\big (\mathcal{K}_{1}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u})\big )\big \}$$ is uniformly convergent on $$[t_{0}, T].$$ 
Thus, $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, \mathbf{u}))\}$$, being an arbitrary sequence in $$\mathcal{K(B)},$$ has a convergent subsequence $$\big \{\big (\mathcal{K}_{1}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u})\big )\big \}$$ on $$[t_{0}, T].$$ Hence, $$\mathcal{K(B)}$$ is a compact set in $$\mathcal{X}.$$ Step 3:$$\mathcal{K}(\mathcal{B})\subset \mathcal{B}.$$ Let $$\mathcal{K}(\mathbf{x}, \mathbf{u})\in \mathcal{K}(\mathcal{B})$$ be any element. We use the estimates (4.12) and (4.13) to get   \begin{align*} \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}&=\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\\ &\leq \Big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\Big)\big(M_{1} M_{2}+M_{1}(TK +\vartheta)\big)+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|. \end{align*}The right-hand side of this inequality is a constant, say $$C_{0},$$ that does not depend on (x, u); in particular, $$\lim \limits _{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\rightarrow \infty }\frac{\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}=0.$$ Therefore, for a fixed $$\varepsilon \in (0, 1),$$ we may choose the radius $$r_{0}>0$$ of $$\mathcal{B}$$ large enough that $$C_{0}\leq \varepsilon r_{0},$$ and then, for every $$(\mathbf{x}, \mathbf{u})\in \mathcal{B},$$ we obtain $$\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq C_{0}\leq \varepsilon r_{0} < r_{0}.$$ Therefore, we have $$\mathcal{K(B)}\subset \mathcal{B}.$$ Now, Lemma 2.1 ensures the existence of a fixed point of the operator $$\mathcal{K}(\cdot )$$ in $$\mathcal{B}\subset \mathcal{X}$$ and hence, by Theorem 4.1, the system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.2 It is clear from Theorem 4.2 that if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ are bounded on their domains, then the system (2.1) is also controllable on $$[t_{0}, T].$$ 4.2. Controllability of the system (2.1) for a class of Lipschitz non-linearities In this subsection, we prove controllability results for the system (2.1) for a class of Lipschitz non-linearities. 
Theorem 4.3 In system (2.1), if we assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ with Lipschitz constants $$\alpha _{0},\beta _{0}\geq 0,$$ that is,   $$ \|\mathbf{f}(t, \mathbf{v}_{1}, \mathbf{w}_{1})-\mathbf{f}(t, \mathbf{v}_{2}, \mathbf{w}_{2})\|_{\mathbb{R}^{n}}\leq \alpha_{0}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{\mathbb{R}^{n}}+\beta_{0}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{\mathbb{R}^{m}},$$for all $$(t, \mathbf{v}_{1}, \mathbf{w}_{1}), (t, \mathbf{v}_{2}, \mathbf{w}_{2})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2}$$ with Lipschitz constants $$\alpha _{k}, \beta _{k}\geq 0,$$ that is,   $$ \|\mathbf{g}_{k}(\mathbf{v}_{1}, \mathbf{w}_{1})-\mathbf{g}_{k}(\mathbf{v}_{2}, \mathbf{w}_{2})\|_{\mathbb{R}^{n}}\leq \alpha_{k}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{\mathbb{R}^{n}}+\beta_{k}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{\mathbb{R}^{m}},$$for all $$ (\mathbf{v}_{1}, \mathbf{w}_{1}),\ (\mathbf{v}_{2}, \mathbf{w}_{2})\in \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (iii) $$\delta := M_{1}\big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\| \big)\gamma <1,$$ where   \begin{align} \gamma:=\max\left\{\left(T\alpha_{0}+\sum \limits_{k=1}^{M}\alpha_{k}\right), \left(T\beta_{0}+\sum \limits_{k=1}^{M}\beta_{k}\right)\right\}, \end{align} (4.14)then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. We apply the Banach contraction mapping principle (see Lemma 2.2) to show that the operator $$\mathcal{K}(\cdot )$$ is a contraction, and then the proof follows from Theorem 4.1. Let us begin by considering   $$ \left\|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{X}}=\left\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{B}_{1}}+\left\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{B}_{2}}. $$Now, we use the estimates for $$\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{1}}$$ and $$\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{2}}$$, which we obtained in Step 1 of the proof of Theorem 4.2, to get   \begin{align*} \|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}&\leq M_{1}\Big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\Big)\\ &\quad\times\left\{T\sup \limits_{s\in [t_{0}, T]}\|\mathbf{f}(s,\mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\|_{\mathbb{R}^{n}}\right.\\ &\left.\quad\qquad+\sum \limits_{k=1}^{M}\|\mathbf{g}_{k}(\mathbf{x}_{1}(t_{k}), \mathbf{u}_{1}(t_{k}))-\mathbf{g}_{k}(\mathbf{x}_{2}(t_{k}), \mathbf{u}_{2}(t_{k}))\|_{\mathbb{R}^{n}}\right\}\\ &\leq M_{1}\left(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\right)\gamma\left(\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{\mathcal{B}_{1}}+\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathcal{B}_{2}}\right)\\ &\leq \delta \left\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{X}}. 
\end{align*}Since $$\delta <1,$$ the operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ is a contraction, and hence, by Lemma 2.2, $$\mathcal{K}(\cdot )$$ has a unique fixed point in $$\mathcal{X}.$$ Then, by Theorem 4.1, the system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.3 Theorem 4.3 shows that, if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ satisfy a Lipschitz condition as defined above, then the system (2.1) is also controllable on $$[t_{0}, T],$$ provided   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\gamma<1,$$where $$\gamma $$ is given in (4.14). 4.3. Controllability of the system (2.1) for a class of non-linearities satisfying the linear growth condition The controllability results for the system (2.1) for a class of non-linearities satisfying the linear growth condition are established in this subsection using Theorem 4.1. Theorem 4.4 In system (2.1), if we assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ with growth constants $$a_{0}, b_{0}, c_{0}\geq 0,$$ that is,   $$ \|\mathbf{f}(t, \mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq a_{0}\|\mathbf{v}\|_{\mathbb{R}^{n}}+b_{0}\|\mathbf{w}\|_{\mathbb{R}^{m}}+c_{0},\textrm{ for all }(t, \mathbf{v}, \mathbf{w})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m},$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ with growth constants $$a_{k}, b_{k}\geq 0,$$ that is,   $$ \|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq a_{k}\|\mathbf{v}\|_{\mathbb{R}^{n}}+b_{k}\|\mathbf{w}\|_{\mathbb{R}^{m}},\quad \textrm{for all }(\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m},$$ (iii) $$M_{1}\big (1+(M_{3}+M_{4})\big \|\mathbf{W}^{-1}\big \|\big )\bigg (T(a_{0}+b_{0})+\sum \limits _{k=1}^{M}(a_{k}+b_{k})\bigg )<1,$$then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. Let $$\mathcal{B}$$ be a non-empty, closed and convex subset of $$\mathcal{X}$$ as defined earlier. The proof is similar to the proof of Theorem 4.2, that is, we show that $$\mathcal{K}(\cdot )$$ is a continuous operator from $$\mathcal{B}$$ into a compact subset of $$\mathcal{B},$$ so that the existence of the fixed point for $$\mathcal{K}(\cdot )$$ is guaranteed by Lemma 2.1. Then the proof follows from Theorem 4.1. The continuity of $$\mathcal{K}(\cdot )$$ on $$\mathcal{B}$$ is already established in the proof of Theorem 4.2. Now, the only thing left to prove is that $$\mathcal{K(B)}$$ is a compact subset of $$\mathcal{B}.$$ For this, we first need the following estimate, which we obtain from (4.6):   \begin{align} \begin{aligned} \|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}&\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1}M_{2}+TM_{1}c_{0}+M_{1}\left\{\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum\limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right\}. 
\end{aligned} \end{align} (4.15)Now, the equicontinuity of the set $$\mathcal{K}_{1}(\mathcal{B})$$ on $$[t_{0}, t_{1}], (t_{k}, t_{k+1}]$$ and its uniform boundedness on $$[t_{0}, T]$$ follows from the following estimations:   \begin{align*} &\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\\ &\leq M_{2}\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\\ &\quad+\left\{(s_{1}-h_{N})\sup\limits_{s}\left\|\sum_{i=1}^{N}\big(\boldsymbol{\Phi}(s_{1}, s+h_{i})-\boldsymbol{\Phi}(s_{2}, s+h_{i})\big)\mathbf{B}_{i}(s+h_{i})\right\|\right.\\ &\quad\times\sup\limits_{s}\left\|\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\right\|\\ &\quad+(s_{2}-s_{1})\sup\limits_{s}\left\|\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right\|\sup\limits_{s}\left\|\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\right\|+\cdots\\ &\quad+(h_{2}-h_{1})\sup\limits_{s}\big\|\boldsymbol{\Phi}(s_{1}, s+h_{1})-\boldsymbol{\Phi}(s_{2}, s+h_{1})\big\|\big\|\mathbf{B}_{1}(s+h_{1})\big\|\sup\limits_{s}\big\|\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\big\|\\ &\left.\quad+\, 2(s_{2}-s_{1})\sup\limits_{s}\left\|\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\right\|\vphantom{\sum_{i=1}^{N}}\right\}\\ &\quad\times \big\|\mathbf{W}^{-1}\big\|\left\{\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+T M_{1}c_{0}+M_{1}\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}\right\}\\ &\quad+\left\{s_{1} \sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\|+M_{1}(s_{2}-s_{1})\right\}\big((a_{0}+b_{0})r_{0}+c_{0}\big)\\ &\quad+\sum_{j=1}^{k}\|\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\|(a_{j}+b_{j})r_{0}, \end{align*}and   \begin{align} \begin{aligned} \|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}&\leq M_{1}M_{2}+TM_{1}c_{0}+M_{3}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\Big(1+M_{3}\big\|\mathbf{W}^{-1}\big\|\Big)\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+ \left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+M_{3}\|\mathbf{W}^{-1}\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\big(1+M_{3}\|\mathbf{W}^{-1}\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}. 
\end{aligned} \end{align} (4.16)Similarly, the equicontinuity of the set $$\mathcal{K}_{2}(\mathcal{B})$$ on $$[t_{0}, T-h_{N}], (T-h_{N}, T-h_{N-1}],\ldots ,(T-h_{1}, T]$$ and its uniform boundedness on $$[t_{0}, T]$$ are guaranteed by the following estimates:   \begin{align*} \|\mathcal{K}_{2}&(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{m}}\\ \leq &\sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\|\\ &\times \big\|\mathbf{W}^{-1}\big\|\left\{\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1}M_{2}+TM_{1}c_{0}+M_{1}\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}\right\}, \end{align*}and   \begin{align} \begin{aligned} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}&\leq M_{4}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &+M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{4}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &+M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}. \end{aligned} \end{align} (4.17)Now, by the same argument as given in the proof of Theorem 4.2, we conclude that $$\mathcal{K(B)}$$ is a compact set. Finally, to show $$\mathcal{K(B)}\subset \mathcal{B},$$ take an element $$\mathcal{K}(\mathbf{x}, \mathbf{u})\in \mathcal{K}(\mathcal{B})$$ and use the estimates (4.16) and (4.17) to obtain   \begin{align*} \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}&=\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+\big\|\mathbf{W}^{-1}\big\|(M_{3}+M_{4})(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\Big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\Big)\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+\big\|\mathbf{W}^{-1}\big\|(M_{3}+M_{4})(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)\big(\|\mathbf{x}\|_{\mathcal{B}_{1}}+\|\mathbf{u}\|_{\mathcal{B}_{2}}\big). 
\end{align*}Thus, we have   $$ \lim\limits_{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\rightarrow \infty}\frac{\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}\leq M_{1}\big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)<1.$$Hence, for some $$\varepsilon \in (0, 1)$$ with $$M_{1}(1+(M_{3}+M_{4}) \|\mathbf{W}^{-1} \| ) (T(a_{0}+b_{0})+\sum\nolimits _{k=1}^{M}(a_{k}+b_{k}) )<\varepsilon ,$$ we have $$\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq \varepsilon \|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}},$$ for sufficiently large values of $$\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}},$$ say $$r_{0}>0.$$ Hence, $$ \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq \varepsilon r_{0} < r_{0}.$$ Thus, we finally have $$\mathcal{K(B)}\subset \mathcal{B}.$$ Remark 4.4 According to the Theorem 4.4, if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ satisfies the linear growth condition as defined above, then the system (2.1) is also controllable on $$[t_{0}, T],$$ provided   $$ M_{1}\left(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\right)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)<1.$$ 5. Numerical example We consider the following two-dimensional non-autonomous impulsive semilinear system with two delays in the control and two impulses in the state,   \begin{align} \left. \begin{aligned} \begin{bmatrix} \dot{x_{1}}(t)\\ \dot{x_{2}}(t) \end{bmatrix}&=\begin{bmatrix} -2 & t\\ 0 & -1 \end{bmatrix}\begin{bmatrix} x_{1}(t)\\ x_{2}(t) \end{bmatrix}+\begin{bmatrix} 1\\ 0 \end{bmatrix}\mathbf{u}(t-0.05)+\begin{bmatrix} 1\\ -1.6 \end{bmatrix}\mathbf{u}(t-0.1)\\&\quad+ t \sin(\mathbf{u}^{2}(t))\begin{bmatrix} \sin({x_{1}^{2}}(t))\\ \cos({x_{2}^{2}}(t)) \end{bmatrix}\!,\ t\in [0, 3]\setminus\{1, 2\},\\ \Delta(\mathbf{x}(1))&=\begin{bmatrix} \sin({x_{1}^{2}}(1)\mathbf{u}(1))\\ \cos({x_{2}^{2}}(1)\mathbf{u}(1)) \end{bmatrix}\!, \Delta(\mathbf{x}(2))=\begin{bmatrix} \cos({x_{1}^{2}}(2)\mathbf{u}(2))\\ \sin({x_{2}^{2}}(2)\mathbf{u}(2)) \end{bmatrix}\!,\\ \begin{bmatrix} x_{1}(0)\\ x_{2}(0) \end{bmatrix}&=\begin{bmatrix} 2\\1 \end{bmatrix}\!,\\ \mathbf{u}(t)&=t^{3}, t\in[-0.1, 0). \end{aligned} \right\} \end{align} (5.1)Comparing this equation with (2.1), we get   \begin{align*} &\mathbf{A}(t)=\begin{bmatrix} -2 & t\\ 0 & -1 \end{bmatrix}, \mathbf{B}_{1}(t)=\begin{bmatrix} 1\\ 0 \end{bmatrix}, \mathbf{B}_{2}(t)=\begin{bmatrix} 1\\ -1.6 \end{bmatrix}\!,\\ & h_{1}=0.05, h_{2}=0.1, \ t_{0}=0, \ t_{1}=1, \ t_{2}=2, \ T=3,\\ & \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))=t\sin(\mathbf{u}^{2}(t))\begin{bmatrix} \sin({x_{1}^{2}}(t))\\ \cos({x_{2}^{2}}(t)) \end{bmatrix}, \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))=\begin{bmatrix}\nonumber \sin({x_{1}^{2}}(1)\mathbf{u}(1))\\ \cos({x_{2}^{2}}(1)\mathbf{u}(1)) \end{bmatrix}\!,\\ &\mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))=\begin{bmatrix} \nonumber \cos({x_{1}^{2}}(2)\mathbf{u}(2))\\ \sin({x_{2}^{2}}(2)\mathbf{u}(2)) \end{bmatrix}\!. 
\end{align*}We calculate the associated state-transition matrix as   \begin{align*} \boldsymbol{\Phi}(t, s)&=\begin{bmatrix} e^{-2(t-s)} & e^{-(t-s)}(t-1)-e^{-2(t-s)}(s-1)\\ 0 & e^{-(t-s)} \end{bmatrix}\!,\\ \mathbf{W}_{1}&=\int_{2.9}^{2.95}\boldsymbol{\Phi}(3, s+0.05)\mathbf{B}_{1} \big(\boldsymbol{\Phi}(3, s+0.05)\mathbf{B}_{1}\big)^{*}\,\mathrm{d}s=\begin{bmatrix} 0.0453 & 0\\ 0 & 0 \end{bmatrix}\!,\\ \mathbf{W}_{2}&=\int_{0}^{2.9}\sum \limits_{i=1}^{2}\boldsymbol{\Phi}(3, s+h_{i})\mathbf{B}_{i}\Big(\sum \limits_{i=1}^{2}\boldsymbol{\Phi}(3, s+h_{i})\mathbf{B}_{i}\Big)^{*}\,\mathrm{d}s=\begin{bmatrix} 0.914 & 0\\ 0 & 1.265 \end{bmatrix}\!. \end{align*}Clearly $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2})=2,$$ and it follows from Theorem 3.1 that the linear part of system (5.1) without impulses is controllable on [0, 3]. Furthermore, we calculate   $$ \big\|\mathbf{W}^{-1}\big\|=\big\|(\mathbf{W}_{1}+\mathbf{W}_{2})^{-1}\big\|\approx 1.308,\quad M_{1}=\sup \limits_{0\leq s\leq t\leq 3}\|\boldsymbol{\Phi}(t, s)\|=\sqrt{2},\quad M_{3}\approx 1.442,\quad M_{4}\approx 2.487. $$Since $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C}([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}(\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ are bounded functions on their domains, that is, $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$ for k = 1, 2, it follows from Theorem 4.2 that the semilinear system (5.1) is also controllable on [0, 3]. In (5.1), if we choose   \begin{align*} \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))&=\frac{1}{100}\begin{bmatrix} x_{1}(t)+\sin\big(x_{1}(t)\big)\\ x_{2}(t)+\sin\big(x_{2}(t)\big) \end{bmatrix}\!,\\ \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))&=\frac{1}{100}\begin{bmatrix} x_{1}(1)\\ \cos\big(x_{2}(1)\big) \end{bmatrix}\!,\\ \mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))&=\frac{1}{100}\begin{bmatrix} \cos\big(x_{1}(2)\big)\\ x_{2}(2) \end{bmatrix}\!, \end{align*}then we see that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C} ([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C} (\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2}),$$ for k = 1, 2, are unbounded on their domain, so we cannot apply Theorem 4.2 to check the controllability of (5.1). However, we see that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2},$$ for k = 1, 2, with $$\alpha _{0}=\frac{2\sqrt{2}}{100},$$$$\beta _{0}=0,$$$$\alpha _{1}=\frac{\sqrt{2}}{100},$$$$\beta _{1}=0,$$$$\alpha _{2}=\frac{\sqrt{2}}{100}$$ and $$\beta _{2}=0.$$ Hence,   $$ \gamma:=\max\!\bigg\{\left(T\alpha_{0}+\sum \limits_{k=1}^{2}\alpha_{k}\right), \left(T\beta_{0}+\sum \limits_{k=1}^{2}\beta_{k}\right)\bigg\}=\max\!\bigg\{\left(3\times \frac{2\sqrt{2}}{100}+\frac{\sqrt{2}}{100}+\frac{\sqrt{2}}{100}\right), 0\bigg\}\approx 0.11314.$$A direct calculation now shows that   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\gamma<1$$and hence, by Theorem 4.3, the system (5.1) is controllable on [0, 3]. 
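The numerical values quoted above can be checked independently. The following is a minimal computational sketch and not part of the original development: it uses the Frobenius norm of Section 2 for all matrix norms, approximates $$\mathbf{W}_{1}$$ and $$\mathbf{W}_{2}$$ by composite midpoint quadrature, and simply reuses the reported values $$M_{3}\approx 1.442$$ and $$M_{4}\approx 2.487$$ rather than recomputing them; the function and variable names are ours, and small discrepancies with the rounded entries quoted for $$\mathbf{W}_{2}$$ are to be expected.

import numpy as np

def Phi(t, s):
    # closed-form state-transition matrix of A(t) = [[-2, t], [0, -1]] given above
    return np.array([[np.exp(-2 * (t - s)),
                      np.exp(-(t - s)) * (t - 1) - np.exp(-2 * (t - s)) * (s - 1)],
                     [0.0, np.exp(-(t - s))]])

# quick consistency check: d/dt Phi(t, s) = A(t) Phi(t, s) (central finite difference)
A = lambda t: np.array([[-2.0, t], [0.0, -1.0]])
t, s, eps = 1.7, 0.4, 1e-6
assert np.allclose((Phi(t + eps, s) - Phi(t - eps, s)) / (2 * eps), A(t) @ Phi(t, s), atol=1e-4)

B1, B2 = np.array([[1.0], [0.0]]), np.array([[1.0], [-1.6]])
h1, h2, T = 0.05, 0.1, 3.0

def gramian(lower, upper, channels, n=4000):
    # composite midpoint rule for the integral of C(s) C(s)^T, with C(s) = sum_i Phi(T, s + h_i) B_i
    ds = (upper - lower) / n
    acc = np.zeros((2, 2))
    for k in range(n):
        sk = lower + (k + 0.5) * ds
        C = sum(Phi(T, sk + h) @ B for h, B in channels)
        acc += (C @ C.T) * ds
    return acc

W1 = gramian(2.9, 2.95, [(h1, B1)])            # only the h1-channel contributes on this interval
W2 = gramian(0.0, 2.9, [(h1, B1), (h2, B2)])   # both delayed channels contribute here
W = W1 + W2

print(np.round(W1, 4))                              # approx [[0.0453, 0], [0, 0]]
print(np.round(W2, 4))                              # compare with the entries quoted above
print(np.linalg.matrix_rank(np.hstack([W1, W2])))   # expected rank: 2
norm_W_inv = np.linalg.norm(np.linalg.inv(W))       # Frobenius norm, approx 1.308
M1 = np.sqrt(2.0)                                   # sup of ||Phi(t, s)||_F, attained at t = s
M3, M4 = 1.442, 2.487                               # values reported in the text (assumed, not recomputed)
gamma = 3 * 2 * np.sqrt(2) / 100 + 2 * np.sqrt(2) / 100   # = 8*sqrt(2)/100, approx 0.11314
print(M1 * (1 + (M3 + M4) * norm_W_inv) * gamma)    # contraction constant of Theorem 4.3

Running this sketch reproduces the rank condition and yields a contraction constant close to, but below, one, in agreement with the conclusion drawn from Theorem 4.3.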
Finally, to apply Theorem 4.4, choose the functions f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ in the system (5.1) as   \begin{align*} \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))&=c_{0}\begin{bmatrix} x_{1}(t)\sin\Big(\frac{1}{{x_{1}^{2}}(t)+1}\Big)+t\mathbf{u}(t)\\ x_{2}(t)\sin\Big(\frac{1}{{x_{2}^{2}}(t)+1}\Big) \end{bmatrix}\!,\\ \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))&=c_{1}\begin{bmatrix} x_{1}(1)+\mathbf{u}(1)\sin({x_{2}^{2}}(1))\\x_{2}(1) \end{bmatrix}\!,\\ \mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))&=c_{2}\begin{bmatrix} x_{1}(2)\\x_{2}(2)+\mathbf{u}(2)\cos({x_{1}^{2}}(2)) \end{bmatrix}\!, \end{align*}where $$c_{0}, c_{1}, c_{2}$$ are positive constants. Note that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C} ([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}(\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ are unbounded and, further, $$\mathbf{f}(\cdot , \cdot , \cdot )\notin \mathcal{L}\mathcal{C}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\notin \mathcal{L}\mathcal{C}_{2},$$ for k = 1, 2. So, neither Theorem 4.2 nor Theorem 4.3 is applicable here to check the controllability of (5.1). However, we observe that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ for k = 1, 2, with $$a_{0}=c_{0}, b_{0}=3c_{0}, a_{1}=b_{1}=c_{1}$$ and $$a_{2}=b_{2}=c_{2}.$$ Then, for suitable choices of $$c_{0}, c_{1}$$ and $$c_{2},$$ we obtain   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\left(T(a_{0}+b_{0})+\sum\limits_{k=1}^{2}(a_{k}+b_{k})\right)=8.6956 \big(12c_{0}+2c_{1}+2c_{2}\big)<1$$and hence the system (5.1) is controllable on [0, 3] by Theorem 4.4. 6. Conclusion In this paper, we have considered an n-dimensional semilinear impulsive dynamical control system with multiple constant time delays in control and derived sufficient conditions to guarantee that this system (2.1) is controllable on $$[t_{0}, T]$$ for certain classes of non-linearities f(⋅, ⋅, ⋅) and impulse functions $$\mathbf{g}_{k}(\cdot , \cdot ).$$ The results are obtained by employing the Schauder fixed-point theorem and the Banach contraction mapping principle. By assuming that, for a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for a given $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the semilinear impulsive delay system (2.1) admits a unique solution on $$[t_{0}, T]$$ and the linear delay system (3.1) is controllable on $$[t_{0}, T],$$ we have established that the semilinear impulsive delay system (2.1) is also controllable on $$[t_{0}, T]$$ under each of the following assumptions: (i) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$; (ii) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2}$$ with   $$ M_{1}\big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\big)\gamma<1,$$where $$\gamma $$ is given in (4.14); (iii) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ with   $$ M_{1}\left(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\right)\Big(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\Big)<1.$$A numerical example is provided to demonstrate our theoretical results. 
Furthermore, since every bounded function satisfies the linear growth condition, we have $$\mathfrak{B}_{1}\subset \mathcal{L}\mathcal{G}_{1}$$ and $$\mathfrak{B}_{2}\subset \mathcal{L}\mathcal{G}_{2}.$$ Also, since every Lipschitz function satisfies the linear growth condition, $$\mathcal{L}\mathcal{C}_{1}\subset \mathcal{L}\mathcal{G}_{1}$$ and $$\mathcal{L}\mathcal{C}_{2}\subset \mathcal{L}\mathcal{G}_{2}.$$ On the other hand, functions satisfying a Lipschitz condition need not be bounded, for example, $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})= t+\mathbf{v}+\sin\!\mathbf{v}+\sin\!\mathbf{w}$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Similarly, bounded functions may not satisfy a Lipschitz condition, for example, $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})=t + \sin (\mathbf{v}^2 )+ \cos (\mathbf{w}^{2} )$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Therefore, $$\mathfrak{B}_{1}$$ and $$\mathcal{L}\mathcal{C}_{1}$$ are not comparable. Similarly, $$\mathfrak{B}_{2}$$ and $$\mathcal{L}\mathcal{C}_{2} $$ are not comparable. Further, a function of linear growth may be unbounded and need not satisfy a Lipschitz condition, for example, $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})=\mathbf{v} \sin (\mathbf{v}^2 )+\mathbf{w}\cos (\mathbf{w}^{2} )$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Thus, from this work, we conclude that Theorem 4.4 gives controllability conditions for the system (2.1) under much weaker assumptions on f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ than those of Theorems 4.2 and 4.3. Acknowledgements The first author is grateful to the Department of Mathematics, Indian Institute of Space Science and Technology, India, for providing the required support to carry out this research work. Also, he would like to thank Dr. Manil T. Mohan, visiting scientist, Statistics and Mathematics unit, Indian Statistical Institute, Bangalore Centre, India, for useful discussions. Both the authors are thankful to the reviewers and the associate editor for their constructive comments and suggestions to improve the quality of this manuscript.
References
Artstein, Z. (1982) Linear systems with delayed controls: a reduction. IEEE Trans. Automat. Control, 27, 869–879.
Balachandran, K. & Somasundaram, D. (1985) Relative controllability of nonlinear systems with time varying delays in control. Kybernetika, 21, 65–72.
Banks, H. T., Jacobs, M. Q. & Latina, M. R. (1971) The synthesis of optimal controls for linear, time-optimal problems with retarded controls. J. Optim. Theory Appl., 8, 319–366.
Coddington, E. (1989) An Introduction to Ordinary Differential Equations. New York, USA: Dover Publications.
Dacka, C. (1982) Relative controllability of perturbed nonlinear systems with delay in control. IEEE Trans. Automat. Control, 27, 268–270.
Erneux, T. (2009) Applied Delay Differential Equations. New York, USA: Springer.
Farmakis, I. & Moskowitz, M. (2013) Fixed Point Theorems and Their Applications. Singapore: World Scientific.
George, R. K., Nandakumaran, A. K. & Arapostathis, A. (2000) A note on controllability of impulsive systems. J. Math. Anal. Appl., 241, 276–283.
Guan, Z. H., Qian, T. H. & Yu, X. (2002) On controllability and observability for a class of impulsive systems. Syst. Control Lett., 47, 247–257.
Han, J., Liu, Y., Zhao, S. & Yang, R. (2012) A note on the controllability and observability for piecewise linear time-varying impulsive systems. Asian J. Control, 15, 1867–1870.
Khambadkone, M. (1982) Euclidean null-controllability of linear systems with distributed delays in control. IEEE Trans. Automat. Control, 27, 210–211.
Klamka, J. (1976) Controllability of linear systems with time-variable delays in control. Int. J. Control, 24, 869–878.
Klamka, J. (1977) Absolute controllability of linear systems with time-variable delays in control. Int. J. Control, 26, 57–63.
Klamka, J. (2008) Constrained controllability of semilinear systems with delayed controls. Bull. Pol. Ac. Tech., 56, 333–337.
Klamka, J. (2009) Constrained controllability of semilinear systems with delays. Nonlin. Dyn., 56, 169–177.
Lakshmikantham, V., Bainov, D. D. & Simeonov, P. S. (1989) Theory of Impulsive Differential Equations. World Scientific.
Leela, S., McRae, F. A. & Sivasundaram, S. (1993) Controllability of impulsive differential equations. J. Math. Anal. Appl., 177, 24–30.
Leiva, H. (2014) Controllability of semilinear impulsive nonautonomous systems. Int. J. Control, 88, 585–592.
Leiva, H. & Rojas, R. A. (2016) Controllability of semilinear nonautonomous systems with impulses and nonlocal conditions. J. Nat. Sci., 1, 23–38.
Liu, X. (1995) Impulsive control and optimization. Appl. Math. Comput., 73, 77–98.
Liu, Y. & Zhao, S. (2012) Controllability analysis of linear time-varying systems with multiple time delays and impulsive effects. Nonlin. Anal. RWA., 13, 558–568.
Morris, S. A. & Noussair, E. S. (1975) The Schauder-Tychonoff fixed point theorem and applications. Matematicky casopis, 25, 165–172.
Nieto, J. J. & Tisdell, C. C. (2010) On exact controllability of first-order impulsive differential equations. Adv. Difference Equations, 2010, 1–9.
Olbrot, A. W. (1972) On controllability of linear systems with time delays in control. IEEE Trans. Automat. Control, 17, 664–666.
Sikora, B. (2005) On constrained controllability of dynamical systems with multiple delays in control. Appl. Math. Warsaw, 32, 87–101.
Zhao, S. & Sun, J. (2010) Controllability and observability for impulsive systems in complex fields. Nonlin. Anal. RWA., 11, 1513–1521.
Zhu, Z. Q. & Lin, Q. W. (2012) Exact controllability of semilinear systems with impulses. Bull. Math. Anal. Appl., 4, 157–167.
© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices). For permissions, please e-mail: journals.permissions@oup.com

As we know, there are some chemical process systems, hydraulically actuated systems, combustion systems, population dynamics, harmonic oscillator etc., in which the present rate of change of control function depends upon the past values of it (see Erneux, 2009 and the references therein). Such processes are modelled by delay differential equations having time delays in the control function. For example, an equation of harmonic oscillator is represented by $$\,\ddot{\!x}(t)+k^{2} x(t)=u(t)+u(t-h),k^{2}>0;$$ an equation arising in population dynamics is given by $$\,\dot{\!x}(t)=x(t)+\int _{0}^{\infty }e^{-\sigma }u(t-\sigma )\mathrm{d}\sigma $$ etc. For many other practical examples where time delays are involved in control, one can see the works by Artstein (1982), Sikora (2005) etc. Several mathematicians contributed in the development of controllability of the linear systems in a finite-dimensional space involving time delays in control, for example, Olbrot (1972), Klamka (1976), Klamka (1977), Khambadkone (1982) and references therein. For the non-linear systems, one can refer to the works by Dacka (1982), Balachandran & Somasundaram (1985), Klamka (2008), Klamka (2009) etc. If an impulsive system in a finite-dimensional space involves time delays in control, the establishment of the controllability of such system becomes much more complex because of the coexistence of impulses and delays. However, for the linear case of this scenario, Liu & Zhao (2012) obtained the controllability results in terms of a matrix rank condition, which is easy to check whether the system is controllable or not. But we know that most of the problems occurring in real world are not linear in nature. To the best of our knowledge, there is no work available in the literature on the controllability of the non-linear (in particular semilinear) impulsive system with multiple constant time delays in the control. This motivates our current study. Firstly, we recall the controllability condition for the corresponding linear delay system without impulses in terms of a matrix rank condition. Then, for certain classes of the non-linear part of the system and impulse functions, we establish sufficient conditions under which the original system is also controllable. We use Schauder fixed-point theorem and Banach contraction mapping principle to prove the results. The organization of this paper is as follows: in Section 2, we formulate the controllability problem of finite-dimensional semilinear impulsive system with multiple constant time delays in the control function. In Section 3, we recall a necessary and sufficient condition for the controllability of the corresponding linear system with multiple constant time delays in the control and without impulses, in terms of a matrix rank condition by using N number of matrices, where N is the number of time delays in the control function. In Section 4, we prove that, under some sufficient conditions, the actual system is also controllable for certain classes of non-linearities and impulse functions. For this, we use Schauder fixed-point theorem and Banach contraction mapping principle. Finally, in Section 5, we give a numerical example for non-autonomous system to show the effectiveness of the results obtained in this paper. 2. Preliminaries and system description We begin this section with functional settings required to establish the results of this paper. 
The natural space to work on the solvability of semilinear impulsive control delay system (see (2.1) below) is the real Banach space given by   \begin{align*} \mathcal{B}_{1}:=&\bigg\{\mathbf{x}(\cdot)\Big| \mathbf{x}(\cdot):[t_{0}, T]\rightarrow \mathbb{R}^{n}, \mathbf{x}(\cdot)\ \textrm{is a continuous function on}\ [t_{0}, T]\setminus\{t_{k}:k=1, 2,\ldots,M\}\\ &\quad\textrm{and differentiable a.e. on }[t_{0}, T]\ \textrm{such that}\, \exists \,\textrm{a left limit}\, \mathbf{x}\big(t^{-}_{k}\big):=\lim \limits_{t\uparrow t_{k}} \mathbf{x}(t)\ \textrm{and a right}\\&\quad\textrm{limit}\ \mathbf{x}\big(t^{+}_{k}\big):=\lim \limits_{t \downarrow t_{k}} \mathbf{x}(t)\ \textrm{with}\ \mathbf{x}\big(t^{-}_{k}\big)=\mathbf{x}\big(t_{k}\big)\ \textrm{and}\ \mathbf{x}(t_{0})=\lim \limits_{t \downarrow t_{0}}\mathbf{x}(t) \bigg\}, \end{align*}endowed with the norm   $$ \left\|\mathbf{x}(\cdot)\right\|_{\mathcal{B}_{1}}:=\sup \limits_{t\in [t_{0}, T]}\left\|\mathbf{x}(t)\right\|_{\mathbb{R}^{n}}.$$Here a.e. stands for ‘almost everywhere’, which we define as follows: a property $$\mathcal{P}$$ is said to hold a.e. on $$[t_{0}, T],$$ if the following conditions are satisfied: (i) The property $$\mathcal{P}$$ holds on a subset S of $$[t_{0}, T]$$; (ii) If the property $$\mathcal{P}$$ fails to satisfy on $$[t_{0}, T]\setminus \mathrm{S},$$ then the Lebesgue measure of the set $$[t_{0}, T]\setminus \mathrm{S} $$ is zero. We also need the following real Banach spaces:   $$ \mathcal{B}_{2}:=\big\{\mathbf{u}(\cdot) \big| \mathbf{u}(\cdot):[t_{0}, T]\rightarrow \mathbb{R}^{m}, \mathbf{u}(\cdot)\textrm{ is continuous a.e. and bounded on }[t_{0}, T]\big\}, $$endowed with the norm   $$ \left\|\mathbf{u}(\cdot)\right\|_{\mathcal{B}_{2}}:=\sup \limits_{t\in [t_{0}, T]}\left\|\mathbf{u}(t)\right\|_{\mathbb{R}^{m}},$$$$\mathbb{R}^{n} \times \mathbb{R}^{m}:=\{(\mathbf{v}, \mathbf{w}) | \mathbf{v}\in \mathbb{R}^{n}, \mathbf{w}\in \mathbb{R}^{m} \},$$ endowed with the norm   $$ \left\|(\mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n} \times \mathbb{R}^{m}}:=\|\mathbf{v}\|_{\mathbb{R}^{n}}+\|\mathbf{w}\|_{\mathbb{R}^{m}},$$and $$[t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}:=\{(t, \mathbf{v}, \mathbf{w}) | t\in [t_{0}, T], \mathbf{v}\in \mathbb{R}^{n}, \mathbf{w}\in \mathbb{R}^{m} \},$$ endowed with the norm   $$ \left\|(t, \mathbf{v}, \mathbf{w})\right\|_{[t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}}:=|t|+\|\mathbf{v}\|_{\mathbb{R}^{n}}+\|\mathbf{w}\|_{\mathbb{R}^{m}}.$$Here $$\|\cdot \|_{\mathbb{R}^{n}}$$ and $$\|\cdot \|_{\mathbb{R}^{m}}$$ are the usual Euclidean norms on the real Banach spaces $$\mathbb{R}^{n}$$ and $$\mathbb{R}^{m},$$ respectively. Throughout this paper, for any operator $$\mathcal{T}$$, the Hermitian adjoint is denoted by $$\mathcal{T}^{*}$$ and for any matrix $$\mathbf{A}=(a_{ij}),$$ we define the Frobenius norm $$\|\mathbf{A}\|:=\sqrt{\sum_{i, j=1}|a_{ij}|^{2}}.$$ Further, C(A;B) denotes the set of all continuous functions from set A to set B. 
We consider the dynamical control system modelled by the following semilinear impulsive ordinary differential equation whose state vector is in $$\mathbb{R}^{n},$$ with multiple constant time delays in the control function,   \begin{equation}\!\!\!\!\!\left.\begin{array}{rl} \dot{\mathbf{x}}(t)=&\!\!\mathbf{A}(t)\mathbf{x}(t)+\sum\limits_{i=1}^{N}\mathbf{B}_{i}(t)\mathbf{u}(t-h_{i})+\mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t)),t\in[t_{0}, T]\setminus\{t_{k}:k=1, 2,\ldots, M\}, \\ \mathbf{x}(t_{0})=&\!\!\mathbf{x}_{0},\\[2pt] \Delta(\mathbf{x}(t_{k})):=&\!\!\mathbf{x}\left(t^{+}_{k}\right)-\mathbf{x}(t_{k})=\mathbf{g}_{k}(\mathbf{x}(t_{k}), \mathbf{u}(t_{k})),\\[2pt] \mathbf{u}(t)=&\!\!\mathbf{u}_{0}(t), t\in [t_{0}-h_{N}, t_{0}). \end{array} \right\} \end{equation} (2.1) Since, in system (2.1), the coefficient of the first-order derivative of x(t) depends neither on x(t) nor on its derivative, the system is semilinear, as it can be written in the form $$\texttt{L}(\mathbf{x}(t))=F(t, \mathbf{x}(t)),$$ where $$\texttt{L}$$ is the first-order linear differential operator. We make the following assumptions on the components of this system: (i) the state function $$\mathbf{x}(\cdot )\in \mathcal{B}_{1}$$ with a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$, (ii) the control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ (iii) $$\mathbf{A}(\cdot ):[t_{0}, T]\rightarrow \mathbb{R}^{n \times n}$$ and $$\mathbf{B}_{i}(\cdot ):[t_{0}, T]\rightarrow \mathbb{R}^{n \times m}$$ are given matrix-valued continuous functions on $$[t_{0}, T],$$ (iv) $$t_{0}\leq t_{1} \leq t_{2}\leq \cdots \leq t_{M}< T,$$ the $$t_{k}$$’s are the fixed (state-independent) times at which the state function x(⋅) experiences impulses, (v) $$0\leq h_{1}\leq h_{2}\leq \cdots \leq h_{N}\leq \min \, \{(t_{1}-t_{0}), (t_{2}-t_{1}),\ldots ,(t_{M}-t_{M-1}), (T-t_{M}) \},$$ the $$h_{i}$$’s are the known time delays in the control function u(⋅), (vi) $$\Delta (\mathbf{x}(t_{k}))$$ is the jump in the state function x(⋅) at the time $$t_{k},$$ (vii) $$\mathbf{u}_{0}(\cdot ):[t_{0}-h_{N}, t_{0})\rightarrow \mathbb{R}^{m}$$ denotes the known initial control function (assumed to be bounded and continuous on its domain) applied to the system (2.1), (viii) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C}\left ([t_{0}, T]\times \mathbb{R}^{n} \times \mathbb{R}^{m}; \mathbb{R}^{n}\right )$$ is nonlinear in its second argument and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}\left (\mathbb{R}^{n} \times \mathbb{R}^{m}; \mathbb{R}^{n}\right );$$ here the subscripts range over i = 1, …, N and k = 1, …, M. Before proceeding to establish the controllability of the system (2.1), we first make sure that this system is solvable for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}.$$ Note that the solvability of (2.1) is similar to the solvability of the following initial value problem (2.2), which has no impulses:   \begin{equation} \left. \begin{array}{rl} \dot{\mathbf{x}}(t)&\!\!\!=\mathbf{f}(t, \mathbf{x}(t)),\,\, t\in [t_{0}, T],\\ [2pt]\mathbf{x}(t_{0})&\!\!\!=\mathbf{x}_{0}. \end{array} \right\} \end{equation} (2.2) This is because, when (2.2) possesses a unique solution, the corresponding impulsive system with finitely many impulses also admits a unique solution, with jumps in the state function at the impulse times. 
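As a quick illustration of the structure of (2.1) (and not of any result of this paper), the following Python sketch integrates a hypothetical scalar instance with n = m = N = M = 1 by the forward Euler method; the data A(t) ≡ a, B_1(t) ≡ b, the non-linearity f, the impulse map g_1, the delay h_1, the impulse time t_1 and the initial control u_0 below are all our own illustrative choices. The delayed control is taken from the prescribed u_0(⋅) whenever t − h_1 < t_0, and the jump g_1 is applied once the impulse time t_1 is crossed.

```python
import numpy as np

# Illustrative scalar instance of a semilinear impulsive system with one
# constant delay in the control (hypothetical data, n = m = N = M = 1).
t0, T, h, t1 = 0.0, 2.0, 0.3, 1.0           # horizon, delay h_1, impulse time t_1
a, b = -1.0, 1.0                             # A(t) = a, B_1(t) = b (constant)
f  = lambda t, x, u: 0.1 * np.sin(x)         # bounded non-linearity f(t, x, u)
g1 = lambda x, u: 0.5                        # impulse map g_1(x, u)
u0 = lambda t: 0.0                           # initial control on [t0 - h_1, t0)
u  = lambda t: np.cos(t)                     # control applied on [t0, T]

dt = 1e-3
ts = np.arange(t0, T + dt, dt)
x = np.empty_like(ts)
x[0] = 1.0                                   # initial state x_0
impulse_done = False

for k in range(1, len(ts)):
    t = ts[k - 1]
    u_delayed = u0(t - h) if t - h < t0 else u(t - h)   # delayed control u(t - h_1)
    x[k] = x[k - 1] + dt * (a * x[k - 1] + b * u_delayed + f(t, x[k - 1], u(t)))
    if (not impulse_done) and ts[k] >= t1:              # jump at the impulse time
        x[k] += g1(x[k], u(ts[k]))
        impulse_done = True

print(f"x(T) ~ {x[-1]:.4f}")
```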
First, let us discuss the solvability of (2.2) on $$[t_{0}, T].$$ In (2.2), if we assume that f(⋅, ⋅) is a continuous function on $$[t_{0}, T]\times \mathbb{R}^{n}$$ and satisfies a Lipschitz condition there with respect to the second argument x(t), then (2.2) has a unique solution on the interval $$[t_{0}, T]$$ (refer to Coddington, 1989, Theorem 4, p. 252). But note that these are just sufficient conditions, not necessary for the existence of a unique solution to (2.2) on $$[t_{0}, T]$$. In other words, (2.2) can still admit a unique solution on $$[t_{0}, T]$$ without f(⋅, ⋅) being a continuous function on its domain and without f(⋅, ⋅) satisfying a Lipschitz condition with respect to the second argument on its domain. Now, for the system (2.1), if we assume that f(⋅, ⋅, ⋅) satisfies a Lipschitz condition with respect to the second argument on its domain $$[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}$$ (then the right-hand side of $$\dot{ \mathbf{x}}(t)$$ in (2.1) also satisfies a Lipschitz condition), then (2.1) has a unique solution on $$[t_{0}, T]$$ for a given u(⋅). Note that a Lipschitz function may be either bounded or unbounded on its domain. However, there exist some bounded continuous functions f(⋅, ⋅, ⋅) that do not satisfy a Lipschitz condition (and then, of course, the right-hand side of $$\dot{ \mathbf{x}}(t)$$ also does not satisfy a Lipschitz condition), but the system (2.1) still admits a unique solution on $$[t_{0}, T]$$ for a given u(⋅). Similarly, there exist some continuous functions f(⋅, ⋅, ⋅) of linear growth that are unbounded and do not satisfy a Lipschitz condition, but (2.1) still has a unique solution on $$[t_{0}, T]$$ for a given u(⋅). In this work, we consider all three cases and assume that our system (2.1) admits a unique solution on $$[t_{0}, T]$$ for any given u(⋅). The following definition for the controllability of the system (2.1) is adopted in this paper. Definition 2.1 (Controllability) The system (2.1) is said to be controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$, if for every pair of vectors $$(\mathbf{x}_{0}, \mathbf{x}_{_{T}})\in \mathbb{R}^{n}\times \mathbb{R}^{n}$$ and for every continuous and bounded function $$\mathbf{u}_{0}(\cdot ):[t_{0}-h_{N}, t_{0})\rightarrow \mathbb{R}^{m},$$ there exists at least one control function $$\mathbf{u}(\cdot ) \in \mathcal{B}_{2}$$ such that, with this control function on $$[t_{0}, T],$$ the corresponding solution to the system (2.1) with $$\mathbf{x}(t_{0})=\mathbf{x}_{0}$$ and $$\mathbf{u}(t)=\mathbf{u}_{0}(t),$$ $$t \in [t_{0}-h_{N}, t_{0})$$, satisfies the condition $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ Remark 2.1 In the above definition, if $$\mathbf{x}_{_{T}}=\mathbf{0},$$ then the system (2.1) is said to be null controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ We use the following lemmas in establishing the controllability of the system (2.1). Lemma 2.1 (Strong version of Schauder fixed-point theorem (Morris & Noussair, 1975)) Let $$\mathcal{X}$$ be a Banach space and let $$\mathcal{B}\subset \mathcal{X}$$ be a non-empty, closed and convex subset. 
If $$\mathcal{K}(\cdot )$$ is a continuous mapping of $$\mathcal{B}$$ into a compact subset of $$\mathcal{B},$$ then $$\mathcal{K}(\cdot )$$ has at least one fixed point in $$\mathcal{B}.$$ Lemma 2.2 (Banach contraction mapping principle (Farmakis & Moskowitz, 2013)) If $$\mathcal{X}$$ is a complete metric space and the mapping $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ is a contraction, then $$\mathcal{K}(\cdot )$$ has a unique fixed point in $$\mathcal{X}.$$ 3. Controllability of the linear system without impulses and with multiple constant time delays in the control function In 2012, Liu and Zhao obtained a necessary and sufficient condition (see Liu & Zhao, 2012, Theorem 3.8, p. 564) for the controllability of the linear impulsive system with multiple constant time delays in the control. Further, this controllability condition was specialized to the system without impulses in Corollary 3.9, p. 566 of the same paper. In this section, we recall this necessary and sufficient condition for the controllability of the corresponding linear system (see (3.1) below) without impulses and with multiple constant time delays in the control function. We present it again here to keep the paper self-contained, and the outcome is useful in establishing the controllability results given in Section 4 below. The associated linear system of (2.1) without impulses is given by   \begin{equation} \left. \begin{array}{rl} \dot{\mathbf{x}}(t)&\!\!\!=\mathbf{A}(t)\mathbf{x}(t)+\sum\limits_{i=1}^{N}\mathbf{B}_{i}(t)\mathbf{u}(t-h_{i}), \ t \in [t_{0}, T],\\ \mathbf{x}(t_{0})&\!\!\!=\mathbf{x}_{0}, \\[2pt] \mathbf{u}(t)&\!\!\!=\mathbf{u}_{0}(t),\quad t\in [t_{0}-h_{N}, t_{0}). \end{array} \right \} \end{equation} (3.1) Let $$\boldsymbol{\Phi }(t, t_{0})$$ be the state-transition matrix; then $$\mathbf{x}(t)=\boldsymbol{\Phi }(t, t_{0})\mathbf{x}_{0}$$ is the unique solution to the homogeneous system $$\dot{\mathbf{x}}(t)=\mathbf{A}(t)\mathbf{x}(t)$$ with $$\mathbf{x}(t_{0})=\mathbf{x}_{0}.$$ Hence, the solution to the linear system (3.1) at any time $$t\in [t_{0}, T]$$ is given by   \begin{align*}\mathbf{x}(t)&=\boldsymbol{\Phi}(t, t_{0})\mathbf{x}_{0}+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\mathbf{u}(s-h_{i})\,\mathrm{d}s\\ &=\boldsymbol{\Phi}(t, t_{0})\mathbf{x}_{0}+\boldsymbol{\Phi}(t, t_{0})\sum_{i=1}^{N}\int_{t_{0}-h_{i}}^{t_{0}}\boldsymbol{\Phi}(t_{0}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}_{0}(s)\,\mathrm{d}s\\ &\quad+\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Let us denote   \begin{align} \sum_{i=1}^{N}\int_{t_{0}-h_{i}}^{t_{0}}\boldsymbol{\Phi}(t_{0}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}_{0}(s)\,\mathrm{d}s=\mathbf{a}_{0}\in \mathbb{R}^{n}, \end{align} (3.2)therefore, we have   \begin{equation} \mathbf{x}(t)=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{equation} (3.3)Now let us simplify the summation given in equation (3.3) as   \begin{align} &\sum_{i=1}^{N}\int_{t_{0}}^{t-h_{i}}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\nonumber\\ &\quad=\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. 
\end{align} (3.4)Using (3.4) in (3.3), the solution to the system (3.1) can be written as   \begin{align}\mathbf{x}(t)&=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\nonumber\\ &\quad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{align} (3.5)Let us now define   \begin{align} \mathbf{W}_{l}:=\mathbf{W}_{l}(T)&=\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s,\nonumber\\[10pt] \mathbf{W}_{N}:=\mathbf{W}_{N}(T)&=\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s, \end{align} (3.6)where l = 1, 2, …, (N − 1). Lemma 3.1 Each $$\mathbf{W}_{i}$$ given in (3.6) is a positive semidefinite symmetric n × n matrix and $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdot \cdot \cdot |\mathbf{W}_{N})= \textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot \cdot \cdot +\mathbf{W}_{N}).$$ Proof. Let P(s) be any n × m matrix with each of its entry to be a real valued continuous function of s. Then for each fixed $$s\in [t_{0}, T],$$ for all $$\mathbf{v}\in \mathbb{R}^{n} $$ and under the usual inner product on $$\mathbb{R}^{n},$$ we have   $$ \big\langle \mathbf{P}(s)\mathbf{P}^{*}(s)\mathbf{v}, \mathbf{v}\big\rangle_{\mathbb{R}^{n}}=\big\langle \mathbf{P}^{*}(s)\mathbf{v}, \mathbf{P}^{*}(s)\mathbf{v}\big\rangle_{\mathbb{R}^{m}}=\left\|\mathbf{P}^{*}(s)\mathbf{v}\right\|^{2}_{\mathbb{R}^{m}}\geq 0,$$which shows that $$\mathbf{P}(s)\mathbf{P}^{*}(s)$$ is positive semidefinite symmetric n × n matrix for each $$s\in [t_{0}, T]$$. Now, for $$\alpha <\beta ,$$ let us consider   $$ \Bigg\langle \int_{\alpha}^{\beta}\mathbf{P}(s)\mathbf{P}^{*}(s)\,\mathrm{d}s\mathbf{v}, \mathbf{v}\Bigg \rangle_{\mathbb{R}^{n}}=\int_{\alpha}^{\beta}\big(\mathbf{P}^{*}(s)\mathbf{v}\big)^{*}\big(\mathbf{P}^{*}(s)\mathbf{v}\big)\,\mathrm{d}s=\int_{\alpha}^{\beta}\left\|\mathbf{P}^{*}(s)\mathbf{v}\right\|^{2}_{\mathbb{R}^{m}} \,\mathrm{d}s\geq 0, $$which easily shows that $$\int _{\alpha }^{\beta }\mathbf{P}(s)\mathbf{P}^{*}(s)\,\mathrm{d}s$$ is positive semidefinite symmetric n × n matrix. Therefore each $$\mathbf{W}_{i}$$ given in (3.6) is a positive semidefinite symmetric n × n matrix. Also, we know that   $$ \big\langle(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot\cdot\cdot+\mathbf{W}_{N})\mathbf{v}, \mathbf{v}\big\rangle_{\mathbb{R}^{n}}=\langle\mathbf{W}_{1}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}+\langle\mathbf{W}_{2}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}+\cdot\cdot\cdot+\langle\mathbf{W}_{N}\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^{n}}\geq 0,$$for all $$\mathbf{v}\in \mathbb{R}^{n},$$ which shows that $$(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdot \cdot \cdot +\mathbf{W}_{N})$$ is a positive semidefinite symmetric n × n matrix. 
Now, it remains to prove that $$\textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N})=\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}).$$ This follows from the following chain of equivalences:   \begin{align*} \mathbf{v}\in \textrm{ker}\big[ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]&\Longleftrightarrow(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}(\mathbf{v})=\mathbf{0}\\ &\Longleftrightarrow \mathbf{W}_{i}^{*}(\mathbf{v})=\mathbf{0}, \quad \textrm{ for all }i, \textrm{ as each } \mathbf{W}_{i} \textrm{ is a positive}\\ &\qquad\qquad\textrm{semidefinite matrix}\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \big(\mathbf{W}_{i}^{*}\big), \quad \textrm{ for all }i\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \begin{bmatrix} \mathbf{W}_{1}^{*} \\ \mathbf{W}_{2}^{*} \\ \vdots \\ \mathbf{W}_{N}^{*} \end{bmatrix}_{Nn\times n}\\ &\Longleftrightarrow \mathbf{v}\in \textrm{ker} \left(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N}\right)^{*}. \end{align*}Therefore, by using the rank-nullity theorem, we have   \begin{align*} &\qquad \textrm{ker}\big[ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]=\,\textrm{ker}\big[(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N})^{*}\big]\\ &\Longrightarrow n-\textrm{rank}\big[(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})^{*}\big]=\,n-\textrm{rank}\big[(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N})^{*}\big]\\ &\Longrightarrow \textrm{rank}(\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})=\,\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots|\mathbf{W}_{N}), \end{align*}which completes the proof. For a necessary and sufficient condition for the controllability of the system (3.1), see the following theorem. Theorem 3.1 The system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$ if and only if $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ Proof. In order to show the sufficiency, let us assume that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ Then, from Lemma 3.1, it is clear that the matrix $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is positive definite. Let us define a control function as follows:   \begin{align} \mathbf{u}(t):= \begin{cases} \left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\left[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\right], & t\in [t_{0}, T-h_{N}],\\[10pt] \left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\left[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\right], & t\in (T-h_{l+1}, T-h_{l}],\\[10pt] \mathbf{0}, & t\in (T-h_{1}, T], \end{cases} \end{align} (3.7) where l = 1, 2, …, (N − 1). The state x(t) given in (3.5) at t = T becomes   \begin{align*} \mathbf{x}(T)=&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. 
\end{align*} Substituting u(t) from (3.7) in the above expression, we get   \begin{align*} \mathbf{x}(T) =& \, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &+\left\{\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\\ &\qquad\left.+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right\}\\ &\qquad\times \mathbf{W}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]\\ =& \, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\{\mathbf{W}_{N}+\cdots+\mathbf{W}_{1}\}\mathbf{W}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]\\ =&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\mathbf{WW}^{-1}\big[\mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\big]=\mathbf{x}_{_{T}}. \end{align*} Hence, the system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ The converse can be proved by contradiction. Let the system (3.1) be controllable on $$[t_{0}, T],$$ but assume that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}) < n.$$ Then from Lemma 3.1, we know that $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is a singular matrix. Thus, there exists at least one non-zero vector, say $$\mathbf{v}\in \mathbb{R}^{n}$$ such that Wv = 0, i.e.,   $$ (\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots+\mathbf{W}_{N})\mathbf{v}=\mathbf{0}\Longrightarrow \mathbf{W}_{1} \mathbf{v}+\mathbf{W}_{2} \mathbf{v}+\cdots+\mathbf{W}_{N} \mathbf{v}=\mathbf{0}.$$ Hence, $$\mathbf{W}_{i}\mathbf{v}=\mathbf{0}$$ for all i (since each $$\mathbf{W}_{i}$$ is positive semidefinite matrix). This shows that each $$\mathbf{W}_{i}$$ is a singular matrix and $$\big \langle \mathbf{W}_{i}\mathbf{v}, \mathbf{v}\big \rangle _{\mathbb{R}^{n}}=0,$$ for all i, i.e.,   \begin{multline*} \begin{cases} \left\langle\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s \mathbf{v}, \mathbf{v}\right\rangle_{\mathbb{R}^{n}}=0,\\[15pt] \left\langle\int_{t_{0}}^{T-h_{N}}\left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\,\sum\limits_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s \mathbf{v}, \mathbf{v}\right\rangle_{\mathbb{R}^{n}}=0. \end{cases} \\ \Longrightarrow \begin{cases} \int_{T-h_{l+1}}^{T-h_{l}}\left\|\,\sum\limits_{i=1}^{l}\mathbf{B}^{*}_{i}(s+h_{i})\boldsymbol{\Phi}^{*}(T, s+h_{i})\mathbf{v}\right\|_{\mathbb{R}^{m}}^{2}\,\mathrm{d}s=0,\quad \textrm{ for all } l=1,2,\ldots,(N-1),\\[10pt] \int_{t_{0}}^{T-h_{N}}\left\|\,\sum\limits_{i=1}^{N}\mathbf{B}^{*}_{i}(s+h_{i})\boldsymbol{\Phi}^{*}(T, s+h_{i})\mathbf{v}\right\|_{\mathbb{R}^{m}}^{2}\,\mathrm{d}s=0. 
\end{cases} \end{multline*} Since each $$\mathbf{B}_{i}^{*}(\cdot )$$ and $$\boldsymbol{\Phi }^{*}(\cdot , \cdot )$$ are continuous functions, the above integrals imply that   \begin{align} \mathbf{v}^{*}\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]=\mathbf{0},\quad \textrm{ for all } l=1,2,\ldots,N \textrm{ and for all } s \textrm{ in the corresponding intervals of integration}, \end{align} (3.8) where $$\mathbf{v}\neq \mathbf{0}\in \mathbb{R}^{n}$$ is the vector chosen above. We assumed that the system (3.1) is controllable on $$[t_{0}, T];$$ in particular, the system is null controllable. Now, let us choose an initial state $$\mathbf{x}_{0}=-\mathbf{a}_{0}+\boldsymbol{\Phi }^{-1}(T, t_{0})\mathbf{v}$$ and a final state x(T) = 0. Then with some control u(⋅), the state of the system (3.1) given in (3.5) satisfies x(T) = 0. That is,   \begin{align*} \mathbf{0}=\mathbf{x}(T)=&\,\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ =&\,\boldsymbol{\Phi}(T, t_{0})\boldsymbol{\Phi}^{-1}(T, t_{0})\mathbf{v}+\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Therefore, we have   \begin{align*} \mathbf{v}\!=\!-\!\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s-\!\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\!\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Premultiplying the above equation by $$\mathbf{v}^{*}$$, we get   \begin{align*} \mathbf{v}^{*}\mathbf{v} =&\,-\int_{t_{0}}^{T-h_{N}}\mathbf{v}^{*}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &-\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\mathbf{v}^{*}\left[\,\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s. \end{align*}Now using (3.8) in the above equation, we get $$\mathbf{v}^{*}\mathbf{v}=0 \Longrightarrow \|\mathbf{v}\|_{\mathbb{R}^{n}}^{2}=0 \Longrightarrow \mathbf{v}=\mathbf{0}$$, which is a contradiction. Hence, our assumption that $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N}) < n$$ is false. Thus, we finally have $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n.$$ For an autonomous system, we can obtain the following controllability condition (similar results are available in the literature, for example, see Theorem 3.1 of Banks et al., 1971). 
Theorem 3.2 If the system (3.1) is autonomous, i.e., time-invariant, then a necessary and sufficient condition for the controllability of (3.1) is given by   $$ \textrm{rank}\left(\mathbf{B}_{1}|\mathbf{AB}_{1}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{1}|\mathbf{B}_{2}|\mathbf{AB}_{2}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{2}|\cdots|\mathbf{B}_{N}|\mathbf{AB}_{N}|\cdots|\mathbf{A}^{n-1}\mathbf{B}_{N}\right)=n.$$ Remark 3.1 In the system (3.1), if delays are absent in the control, i.e., $$h_{i}=0, \textrm{for all }i,$$ then $$\mathbf{W}_{1}=\mathbf{W}_{2}=\cdots =\mathbf{W}_{N-1}=\mathbf{O}$$ and $$\mathbf{W}_{N}=\int _{t_{0}}^{T}\big[\boldsymbol{\Phi }(T, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\big]\big[\boldsymbol{\Phi }(T, s)\sum_{i=1}^{N}\mathbf{B}_{i}(s)\big]^{*}\, \mathrm{d}s.$$ Then we have $$\mathbf{W}=\sum_{i=1}^{N}\mathbf{W}_{i}=\mathbf{W}_{N};$$ this matrix is called the controllability Grammian of the linear system (3.1) with no delays, and such a system is controllable on $$[t_{0}, T]$$ if and only if $$ \textrm{rank}(\mathbf{W})= \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N-1}|\mathbf{W}_{N})= \textrm{rank}(\mathbf{W}_{N})=n,$$ that is, if and only if the controllability Grammian W is positive definite. Remark 3.2 It can be easily shown that our necessary and sufficient condition given in Theorem 3.1 for the controllability of the system (3.1) is equivalent to the condition given in Corollary 3.9 of the work by Liu & Zhao (2012), in which the authors provide the controllability condition $$ \textrm{rank}(\mathscr{C}_{1})=n,$$ where   \begin{align*}\mathscr{C}_{1}=&\Bigg(\int_{t_{0}}^{t_{1}-h_{N}}\mathbf{VV}^{*} \,\mathrm{d}s \,\Big|\, \int_{t_{1}-h_{N}}^{t_{1}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s \,\Big|\,\cdots\,\Big|\,\int_{t_{1}-h_{2}}^{t_{1}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\, \int_{t_{1}-h_{1}}^{t_{2}-h_{N}}\mathbf{VV}^{*}\,\mathrm{d}s \\&\quad \,\Big|\,\int_{t_{2}-h_{N}}^{t_{2}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\,\cdots\,\Big|\,\int_{t_{2}-h_{2}}^{t_{2}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\,\cdots\,\Big|\, \int_{t_{M-1}-h_{1}}^{t_{M}-h_{N}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\,\int_{t_{M}-h_{N}}^{t_{M}-h_{N-1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\,\cdots\\&\quad\,\Big|\,\int_{t_{M}-h_{2}}^{t_{M}-h_{1}}\mathbf{VV}^{*}\,\mathrm{d}s\,\Big|\, \int_{t_{M}-h_{1}}^{T-h_{N}} \mathbf{VV}^{*}\,\mathrm{d}s \,\Big|\, \mathbf{W}_{N-1}\,\Big|\, \mathbf{W}_{N-2} \,\Big|\,\cdots\,\Big|\, \mathbf{W}_{1} \Bigg), \end{align*}where $$\mathbf{V}=\sum _{i=1}^{N}\boldsymbol{\Phi }(T, s+h_{i})\mathbf{B}_{i}(s+h_{i}).$$ As we are dealing with the linear delay system without impulses, the matrix $$\mathscr{C}_{1}$$ reduces to $$(\mathbf{W}_{N}|\mathbf{W}_{N-1}|\cdots |\mathbf{W}_{1}).$$ Remark 3.3 It is always true that if at least one of the $$\mathbf{W}_{i}$$ has full rank n, then $$ \textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n$$. Thus, $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}+\cdots +\mathbf{W}_{N}$$ is a positive definite matrix and, by Theorem 3.1, the system (3.1) is controllable on $$[t_{0}, T].$$ But if $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2}|\cdots |\mathbf{W}_{N})=n,$$ then it may be possible that none of the $$\mathbf{W}_{i}$$ has full rank n. For example, let $$\mathbf{W}_{1}=\left(\begin{array}{@{}cc@{}} 1 & 0 \\ 0 & 0 \end{array}\right),$$ $$\mathbf{W}_{2}=\left(\begin{array}{@{}cc@{}} 0 & 0 \\ 0 & 2 \end{array}\right);$$ both have rank = 1, but the augmented matrix $$(\mathbf{W}_{1}|\mathbf{W}_{2})=\left(\begin{array}{@{}cc|cc@{}} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 \end{array}\right)$$ has full rank = 2. 
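The rank computations in this example are easy to confirm numerically. The following Python sketch (purely illustrative, using numpy) checks that $$\mathbf{W}_{1}$$ and $$\mathbf{W}_{2}$$ each have rank 1, that the augmented matrix $$(\mathbf{W}_{1}|\mathbf{W}_{2})$$ has full rank 2 and that $$\mathbf{W}=\mathbf{W}_{1}+\mathbf{W}_{2}$$ is positive definite.

```python
import numpy as np

# The two positive semidefinite matrices from the example above (n = 2, N = 2).
W1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
W2 = np.array([[0.0, 0.0],
               [0.0, 2.0]])

print(np.linalg.matrix_rank(W1))                    # 1
print(np.linalg.matrix_rank(W2))                    # 1
print(np.linalg.matrix_rank(np.hstack([W1, W2])))   # 2 = n, i.e. rank(W1|W2) is full
print(np.linalg.eigvalsh(W1 + W2))                  # [1. 2.] > 0, so W = W1 + W2 is positive definite
```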
Therefore, we conclude that even if none of $$\mathbf{W}_{i}$$ are positive definite matrices, the system (3.1) is still controllable on $$[t_{0}, T].$$ 4. Main results In this section, we obtain the controllability results of the system (2.1) for certain classes of non-linearities f(⋅, ⋅, ⋅) and impulse functions $$\mathbf{g}_{k}(\cdot ,\cdot )$$. Throughout this section, we assume that the semilinear impulsive delay system (2.1) admits only one solution for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ and the linear delay system (3.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T].$$ Here, for a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}$$ and for a given u(⋅), the solution to (2.1) satisfies the following equation:   \begin{align} \mathbf{x}(t)=\left\{ \begin{aligned} &\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\mathbf{u}(s)\,\mathrm{d}s\\ &+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s, \quad \textrm{ for all } t\in [t_{0}, t_{1}],\\ &\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\int_{t_{0}}^{t}\Phi(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &+\sum_{j=1}^{k} \boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})), \quad \textrm{ for all }t\in(t_{k}, t_{k+1}], \end{aligned} \right. \end{align} (4.1)where k = 1, 2, …, M and $$t_{M+1}=T.$$ Before proceeding for the establishment of the controllability results of the system (2.1), first we consider the real Banach space $$\mathcal{X}:=\mathcal{B}_{1}\times \mathcal{B}_{2}=\{(\mathbf{x}, \mathbf{u}):\mathbf{x}\in \mathcal{B}_{1}, \mathbf{u}\in \mathcal{B}_{2} \},$$ endowed with the norm   $$ \left\|(\mathbf{x}, \mathbf{u})\right\|_{\mathcal{X}}:=\|\mathbf{x}\|_{\mathcal{B}_{1}}+\|\mathbf{u}\|_{\mathcal{B}_{2}}.$$Next we define an operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ by the formula   \begin{align} \mathcal{K}(\mathbf{x}, \mathbf{u}):=\big(\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\big)=( \boldsymbol{y}, \boldsymbol{v}), \end{align} (4.2)where $$\mathcal{K}_{1}(\cdot ):\mathcal{X}\rightarrow \mathcal{B}_{1}$$ is defined by   \begin{align} &\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)= \boldsymbol{y}(t)\nonumber\\ &\quad:=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &\qquad+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\mathrm{d}s\right.\nonumber\\ &\qquad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\!\times\! 
\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\mathrm{d}s\right\}\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &\qquad+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\textrm{d}s, \quad \textrm{ for all }t \in [t_{0}, t_{1}], \end{align} (4.3)  \begin{align}&\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)= \boldsymbol{y}(t) \nonumber\\ &\quad:=\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &\qquad+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\mathrm{d}s\nonumber\right.\\ &\qquad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]\!\times\! \left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\!\,\mathrm{d}s\right\}\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &\qquad+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})),\quad \textrm{for all }t \in (t_{k}, t_{k+1}], \end{align} (4.4)and $$\mathcal{K}_{2}(\cdot ):\mathcal{X}\rightarrow \mathcal{B}_{2}$$ is defined by   \begin{align} \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(t)= \textbf{v}(t):=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0}, T-h_{N}],\\ \left[\,\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1}, T-h_{l}],\\ \mathbf{0}, & t\in (T-h_{1}, T], \end{cases} \end{align} (4.5)where l = 1, 2, …, (N − 1) and $$\mathcal{L}(\cdot ):\mathcal{X}\rightarrow \mathbb{R}^{n}$$ is defined by   \begin{align} \mathcal{L}(\mathbf{x}, \mathbf{u}):= \mathbf{x}_{_{T}}-\boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})-\int_{t_{0}}^{T}\boldsymbol{\Phi}(T, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s-\sum_{j=1}^{M}\boldsymbol{\Phi}(T, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})). \end{align} (4.6)The following theorem is useful in establishing the controllability results of the system (2.1). Theorem 4.1 The system (2.1) is controllable over $$\mathbb{R}^{n}$$ on $$[t_{0}, T]$$ if and only if for every initial state $$\mathbf{x}_{0}$$ and a final state $$\mathbf{x}_{_{T}},$$ the operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ given in equations (4.2–4.6) has a fixed point, that is, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ for some $$(\mathbf{x}, \mathbf{u})\in \mathcal{X}.$$ Proof. 
Let the system (2.1) be controllable on $$[t_{0}, T],$$ then there exists a control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}$$, which steers the state of the system given in (4.1) from $$\mathbf{x}_{0}$$ to $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ That is,   \begin{align*} \mathbf{x}_{_{T}}=&\, \boldsymbol{\Phi}(T, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})+\int_{t_{0}}^{T-h_{N}}\left[\,\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s+\int_{t_{0}}^{T}\Phi(T, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &+\sum_{j=1}^{M} \boldsymbol{\Phi}(T, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})). \end{align*}Combining the above equation with (4.6), we get   \begin{align} \begin{aligned} \mathcal{L}(\mathbf{x}, \mathbf{u})=&\, \int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\mathbf{u}(s)\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\mathbf{u}(s)\,\mathrm{d}s. \end{aligned} \end{align} (4.7)We choose a function u(⋅) satisfying (4.7) as   \begin{align} \begin{aligned} \mathbf{u}(t)&=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0},T-h_{N}],\\[8pt] \left[\,\sum\limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1},T-h_{l}],\\[4pt] \mathbf{0}, & t\in (T-h_{1},T]. \end{cases} \end{aligned} \end{align} (4.8)Now, if we compare (4.8) with (4.5), it can be easily seen that $$\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})=\mathbf{u}.$$ Furthermore, with this control function, the corresponding solution (4.1) reduces to (4.3) and (4.4). Hence, we have $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})=\mathbf{x}.$$ Therefore, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ that is, $$\mathcal{K}(\cdot )$$ has a fixed point. For the converse, let us assume that the operator $$\mathcal{K}(\cdot )$$ has a fixed point, that is, $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ for some $$(\mathbf{x}, \mathbf{u})\in \mathcal{X}.$$ Our purpose is to show that there exists some control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2}$$ such that $$\mathbf{x}(T)=\mathbf{x}_{_{T}}.$$ Since $$\mathcal{K}(\mathbf{x}, \mathbf{u})=(\mathbf{x}, \mathbf{u}),$$ from (4.4) and (4.5), we obtain the following equations:   \begin{align}\mathbf{x}(t)=&\,\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\nonumber\\ &+\left\{\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\nonumber\\ &\quad\ \left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s + h_{i})\!\right]\!\times\! 
\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\mathrm{d}s\right\}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\nonumber\\ &+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j})), \quad \textrm{ for all }t \in (t_{k}, t_{k+1}], \end{align} (4.9)and   \begin{align} \mathbf{u}(t)=\begin{cases} \left[\,\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in [t_{0}, T-h_{N}],\\ \left[\,\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u}), & t\in (T-h_{l+1}, T-h_{l}],\\ 0, & t\in (T-h_{1}, T]. \end{cases} \end{align} (4.10)In order to get $$\mathbf{x}(T)=\mathbf{x}_{_{T}},$$ let us put t = T in (4.9) and use (4.6) to obtain   \begin{align*} \mathbf{x}(T)=&\,\mathbf{x}_{_{T}}-\mathcal{L}(\mathbf{x}, \mathbf{u})\\ &+\left\{\int_{t_{0}}^{T-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\times\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\nonumber\\ &+\left.\sum_{l=1}^{N-1}\int_{T-h_{l+1}}^{T-h_{l}}\!\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s \!+\! h_{i})\right]\!\times\! \left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s + h_{i})\right]^{*}\!\,\mathrm{d}s\right\}\!\!\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ =&\,\mathbf{x}_{_{T}}-\mathcal{L}(\mathbf{x}, \mathbf{u})+\mathbf{W}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ =&\,\mathbf{x}_{_{T}}. \end{align*}Hence, the system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.1 There are several methods to show the existence of fixed point of an operator. In this paper, we use Lemmas 2.1 and 2.2 to show the existence of fixed point of an operator $$\mathcal{K}(\cdot ).$$ Now we introduce the following notations for our convenience, which we use in the proof of next theorems:   \begin{align*} M_{1}:=&\,\sup \limits_{t_{0}\leq s\leq t\leq T } \left\|\boldsymbol{\Phi}(t, s)\right\|,\\ M_{2}:=&\,\left\|\mathbf{x}_{0}+\mathbf{a}_{0}\right\|_{\mathbb{R}^{n}},\\ M_{3}:=&\,\sup\limits_{t\in [t_{0}, T]}\left\|\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\\ &\qquad\ \quad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right\|,\\ M_{4}:=&\,\max_{l=1,\ldots,(N-1)}\left\{\sup_{[t_{0}, T-h_{N}]}\left\|\left[\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\right\|,\right.\\ &\qquad\qquad\qquad\left.\sup_{(T-h_{l+1}, T-h_{l}]}\left\|\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\right\|\right\}.\end{align*} The following three verifiable classes for the continuous function f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ are considered in this paper, for which we will obtain the controllability results of the system (2.1). 
(i) The class of bounded functions: $$\mathfrak{B}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} \ \textrm{is a bounded function}\}$$ and $$\mathfrak{B}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n}\ \textrm{is a bounded function}\}$$; (ii) The class of Lipschitz functions: $$\mathcal{L}\mathcal{C}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a Lipschitz condition with respect to the second and third arguments$$\}$$ and $$\mathcal{L}\mathcal{C}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a Lipschitz condition$$\};$$ (iii) The class of linear growth functions: $$\mathcal{G}\mathcal{C}_{1}=\{\mathbf{f}(\cdot , \cdot , \cdot ) | \mathbf{f}(\cdot , \cdot , \cdot ):[t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a linear growth condition with respect to the second and third arguments$$\}$$ and $$\mathcal{G}\mathcal{C}_{2}=\{\mathbf{g}(\cdot , \cdot ) | \mathbf{g}(\cdot , \cdot ):\mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{n} $$ satisfying a linear growth condition$$\}.$$ 4.1. Controllability of the system (2.1) for a class of bounded non-linearities In this subsection, we establish controllability results of the system (2.1) for the first class given above. Theorem 4.2 In system (2.1), let us assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ with bound K ≥ 0, that is,   $$ \left\|\mathbf{f}(t, \mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n}}\leq K, \quad \textrm{for all }(t, \mathbf{v}, \mathbf{w})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$ with bound $$\vartheta _{k}\geq 0,$$ that is,   $$ \left\|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\right\|_{\mathbb{R}^{n}}\leq \vartheta_{k}, \quad \textrm{for all }(\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m}.$$ Furthermore, set $$\vartheta :=\sum \limits _{k=1}^{M} \vartheta _{k}\geq 0,$$ so that $$\sum \limits _{k=1}^{M}\|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq \vartheta$$ $$\textrm{ for all } (\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m}.$$ Then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. For $$r_{0}>0,$$ let $$\mathcal{B}=\{(\mathbf{x}, \mathbf{u})\in \mathcal{X}:0\leq \|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}$$ be a non-empty, closed and convex subset of $$\mathcal{X}.$$ In order to prove this theorem, we apply the Schauder fixed-point theorem (see Lemma 2.1) to show that $$\mathcal{K}(\cdot )$$ is a continuous operator from $$\mathcal{B}$$ into a compact subset of $$\mathcal{B}.$$ Then, the rest of the proof follows from Theorem 4.1. We divide this proof into the following three steps: Step 1: $$\mathcal{K}(\cdot )$$ is a continuous operator on $$\mathcal{B}$$. 
First we show that $$\mathcal{K}_{1}(\cdot )$$ and $$\mathcal{K}_{2}(\cdot )$$ are continuous operators on $$\mathcal{B}.$$ For this, let $$(\mathbf{x}_{1}, \mathbf{u}_{1}),$$$$(\mathbf{x}_{2}, \mathbf{u}_{2}) \in \mathcal{B}$$ be such that $$\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Our aim is to establish that $$\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0$$ and $$\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Note that f(⋅, ⋅, ⋅) is a continuous function on its domain, so in particular it is continuous with respect to the second and third arguments, and hence we have   $$ \sup \limits_{t\in [t_{0}, T]}\|\mathbf{f}(t, \mathbf{x}_{1}(t), \mathbf{u}_{1}(t))-\mathbf{f}(t, \mathbf{x}_{2}(t), \mathbf{u}_{2}(t))\|_{\mathbb{R}^{n}}\rightarrow 0,$$as $$\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}\rightarrow 0.$$ Similarly, we have   $$ \sup \limits_{t\in [t_{0}, T]}\|\mathbf{g}_{k}(\mathbf{x}_{1}(t), \mathbf{u}_{1}(t))-\mathbf{g}_{k}(\mathbf{x}_{2}(t), \mathbf{u}_{2}(t))\|_{\mathbb{R}^{n}}\rightarrow 0,\quad \textrm{for all }k.$$The continuity of $$\mathcal{K}_{1}(\cdot )$$ follows from the following estimation:   \begin{align*}&\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathcal{B}_{1}}\\ &\quad=\sup \limits_{t \in [t_{0}, T]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}}\\ &\quad= \max \limits_{k}\bigg\{\sup_{t\in[t_{0}, t_{1}]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}},\sup_{t\in (t_{k}, t_{k+1}]}\big\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{n}}\bigg\}\\ &\quad=\max \limits_{k}\Bigg\{\sup_{t\in[t_{0}, t_{1}]}\left\|\left(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right.\right.\\ &\qquad\qquad\qquad\quad\qquad\left.+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\right)\\ &\qquad\quad\qquad\left.\times \mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\left[\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\right]\,\mathrm{d}s\right\|_{\mathbb{R}^{n}},\\ &\qquad\sup_{t\in(t_{k}, t_{k+1}]}\bigg\|\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &\quad\qquad\qquad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\qquad\qquad\quad\times \mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, 
\mathbf{u}_{2})\big)+\int_{t_{0}}^{t}\boldsymbol{\Phi}(t, s)\left[\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\right]\,\mathrm{d}s\\ &\qquad\qquad\quad+\sum_{j=1}^{k}\boldsymbol{\Phi}(t, t_{j})[\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))]\bigg\|_{\mathbb{R}^{n}}\Bigg\}\\ &\quad\leq M_{3}\|\mathbf{W}^{-1}\|\big\|\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathbb{R}^{n}}\\ &\qquad+\sup_{t_{0}\leq s\leq t\leq T}\big\|\boldsymbol{\Phi}(t, s)\big\| T\sup_{s\in[t_{0}, T]}\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\\ &\qquad+\sup_{t_{0}\leq s\leq t\leq T}\big\|\boldsymbol{\Phi}(t, s)\big\| \sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\\ &\quad\leq M_{1}\Big(1+M_{3}\|\mathbf{W}^{-1}\|\Big)\\ &\qquad\left(T\!\sup_{s\in[t_{0}, T]}\!\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))\!-\!\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\!+\!\!\sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))\!-\!\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\right). \end{align*}The continuity of $$\mathcal{K}_{2}(\cdot )$$ follows from the following estimation:   \begin{align*}&\big\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathcal{B}_{2}}\\[10pt] &\quad=\sup_{t\in[t_{0}, T]}\big\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})(t)-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})(t)\big\|_{\mathbb{R}^{m}}\\[10pt] &\quad=\max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left[\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\right\|_{\mathbb{R}^{m}},\right.\\[10pt] &\qquad\qquad\quad\left.\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left[\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right]^{*}\mathbf{W}^{-1}\big(\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\right\|_{\mathbb{R}^{m}}\right\}\\[10pt] &\quad\leq M_{4}\big\|\mathbf{W}^{-1}\big\|\big\|\mathcal{L}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{L}(\mathbf{x}_{2}, \mathbf{u}_{2})\big\|_{\mathbb{R}^{n}}\\[10pt] &\quad\leq M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\\[10pt] &\qquad\left(T\!\sup_{s\in[t_{0}, T]}\!\big\|\mathbf{f}(s, \mathbf{x}_{1}(s), \mathbf{u}_{1}(s))\!-\!\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\big\|_{\mathbb{R}^{n}}\!+\!\!\sum_{j=1}^{M}\big\|\mathbf{g}_{j}(\mathbf{x}_{1}(t_{j}), \mathbf{u}_{1}(t_{j}))\!-\mathbf{g}_{j}(\mathbf{x}_{2}(t_{j}), \mathbf{u}_{2}(t_{j}))\big\|_{\mathbb{R}^{n}}\right)\!. 
\end{align*}Finally, the continuity of $$\mathcal{K}(\cdot )$$ follows from the following estimate:   \begin{align*} \|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}&=\|\big(\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1}), \mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})\big)-\big(\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2}), \mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\|_{\mathcal{X}}\\[4pt] &=\|\big(\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2}), \mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\big)\|_{\mathcal{X}}\\[4pt] &=\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{2}}\!. \end{align*}Step 2:$$\mathcal{K(B)}$$ is a compact set. In order to prove this, we first claim that $$\mathcal{K}_{1}(\mathcal{B})=\{\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}): \|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}\subset \mathcal{B}_{1}$$ is equicontinuous set on each subinterval $$[t_{0}, t_{1}],$$$$(t_{k}, t_{k+1}], k=1, 2,\ldots ,M $$ and uniformly bounded on $$[t_{0}, T].$$ Similarly, $$\mathcal{K}_{2}(\mathcal{B})=\{\mathcal{K}_{2}(\mathbf{x}, \mathbf{u}):\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\}\subset \mathcal{B}_{2} $$ is equicontinuous set on $$[t_{0}, T-h_{N}],$$$$(T-h_{i}, T-h_{i-1}],\ldots ,(T-h_{1}, T],$$i = 1, …, N and uniformly bounded on $$[t_{0}, T].$$ Before proceeding, we need the following estimation that is obtained from (4.6)   \begin{align} \begin{aligned} \|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+M_{1}(T K +\vartheta). \end{aligned} \end{align} (4.11) Now, let $$s_{1} < s_{2}, $$ where $$s_{1}, s_{2}\in [t_{0}, t_{1}]$$ or $$s_{1}, s_{2}\in (t_{k}, t_{k+1}]$$ and $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B})$$ be any elements. 
Now consider the following estimation:   \begin{align*} &\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\\ &=\left\|[\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})](\mathbf{x}_{0}+\mathbf{a}_{0})\vphantom{\sum_{j=1}^{k}\big[\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big]}\right.\\ &+\left[ \left\{\int_{t_{0}}^{s_{1}-h_{N}} \left(\sum_{i=1}^{N} \boldsymbol{\Phi}(s_{1}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\right.\right.\\ &\left.-\int_{t_{0}}^{s_{2}-h_{N}} \left(\sum_{i=1}^{N} \boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\right\}+\cdots\\ &+\left\{\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\boldsymbol{\Phi}(s_{1}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\right.\\ &\left.\left.-\int_{s_{2}-h_{2}}^{s_{2}-h_{1}}\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\right\}\right]\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\\ &+\int_{t_{0}}^{s_{1}}\boldsymbol{\Phi}(s_{1}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\mathrm{d}s-\int_{t_{0}}^{s_{2}}\boldsymbol{\Phi}(s_{2}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\\ &\left.+\sum_{j=1}^{k}\big[\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big]\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\right\|_{\mathbb{R}^{n}} \end{align*}  \begin{align*} &\leq\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\|\mathbf{x}_{0}+\mathbf{a}_{0}\|_{\mathbb{R}^{n}}\\ &\quad+\Bigg\{\Bigg\|\int_{t_{0}}^{s_{1}-h_{N}}\left(\sum_{i=1}^{N}\big[\boldsymbol{\Phi}(s_{1}, s+h_{i})-\boldsymbol{\Phi}(s_{2}, s+h_{i})\big]\mathbf{B}_{i}(s+h_{i})\right)\times\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{N}}^{s_{2}-h_{N}}\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\,\mathrm{d}s\Big\|+\cdots\\ &\quad+\Big\|\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\big(\boldsymbol{\Phi}(s_{1}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{2}-h_{2}}^{s_{1}-h_{2}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{2}}^{s_{1}-h_{1}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\\ &\quad-\int_{s_{1}-h_{1}}^{s_{2}-h_{1}}\big(\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\,\mathrm{d}s\Big\|\Bigg\}\big\|\mathbf{W}^{-1}\big\|\\ &\quad\times\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}+\Big\|\int_{t_{0}}^{s_{1}}\big[\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\big]\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s-\int_{s_{1}}^{s_{2}}\boldsymbol{\Phi}(s_{2}, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\Big\|_{\mathbb{R}^{n}}\\ &\quad+\sum_{j=1}^{k}\big\|\boldsymbol{\Phi}(s_{1}, 
t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\big\|\big\|\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\big\|_{\mathbb{R}^{n}}\\ \end{align*}  \begin{align*} &\leq M_{2}\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\\ &\quad+\!\Bigg\{\!(s_{1}\!-\!h_{N})\sup\limits_{s}\!\left\|\sum_{i=1}^{N}\!\big(\boldsymbol{\Phi}(s_{1}, s+h_{i})-\!\boldsymbol{\Phi}(s_{2}, s+\!h_{i})\big)\mathbf{B}_{i}(s+h_{i})\right\|\!\times\!\sup\limits_{s}\left\|\!\Big(\!\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s\!+h_{i})\mathbf{B}_{i}(s+\!h_{i})\Big)^{*}\right\|\\ &\quad+(s_{2}-s_{1})\sup\limits_{s}\left\|\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right\|\sup\limits_{s}\left\|\Big(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\Big)^{*}\right\|+\cdots\\ &\quad+(h_{2}\!-\!h_{1})\sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s+h_{1})-\boldsymbol{\Phi}(s_{2}, s\!+h_{1})\|\sup\limits_{s}\|\mathbf{B}_{1}(s+h_{1})\|\!\times\!\sup\limits_{s}\big\|\big(\boldsymbol{\Phi}(T, s+\!h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\big\|\\ &\quad+2(s_{2}-s_{1})\sup\limits_{s}\|\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\|\Bigg\}\\ &\quad\times \|\mathbf{W}^{-1}\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1} (TK +\vartheta)\Big)\\ &\quad+s_{1}\sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\|K+(s_{2}-s_{1})M_{1} K+\vartheta\sum_{j=1}^{k}\|\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\|. \end{align*}Observe that, the right-hand side of the above inequality is independent of the choice of x and u. Also, if we take $$s_{1}\rightarrow s_{2},$$ then we see that $$\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\rightarrow 0,$$ for all $$ \mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B}).$$ Therefore, $$\mathcal{K}_{1}(\mathcal{B})$$ is equicontinuous set on $$[t_{0}, t_{1}],$$$$(t_{k}, t_{k+1}].$$ For uniform boundedness of $$\mathcal{K}_{1}(\mathcal{B}),$$ we consider the following estimation:   \begin{align*} \|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}} &=\sup_{t\in[t_{0}, T]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}=\max \limits_{k}\left\{\sup_{t\in[t_{0}, t_{1}]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}, \sup_{t\in(t_{k}, t_{k+1}]}\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{n}}\right\}\\ &=\max \limits_{k}\Bigg\{\sup_{t\in[t_{0}, t_{1}]}\Big\|\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &\quad+\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N} \boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &\quad+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\quad\times \mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})+ \int_{t_{0}}^{t} \boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s\Big\|_{\mathbb{R}^{n}},\end{align*}  \begin{align*} &\sup_{t\in(t_{k}, t_{k+1}]}\Big\|\boldsymbol{\Phi}(t, t_{0})(\mathbf{x}_{0}+\mathbf{a}_{0})\\ &+\bigg(\int_{t_{0}}^{t-h_{N}}\left[\sum_{i=1}^{N} \boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{N}\boldsymbol{\Phi}(T, 
s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\\ &+\sum_{l=1}^{N-1}\int_{t-h_{l+1}}^{t-h_{l}}\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(t, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]\left[\sum_{i=1}^{l}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right]^{*}\,\mathrm{d}s\bigg)\\ &\times \mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})+ \int_{t_{0}}^{t} \boldsymbol{\Phi}(t, s)\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\,\mathrm{d}s+\sum_{j=1}^{k} \boldsymbol{\Phi}(t, t_{j})\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\Big\|_{\mathbb{R}^{n}}\Bigg\}\\ &\leq \sup_{t\in[t_{0}, T]}\|\boldsymbol{\Phi}(t, t_{0})\|\|\mathbf{x}_{0}+\mathbf{a}_{0}\|_{\mathbb{R}^{n}}+M_{3}\|\mathbf{W}^{-1}\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\\ &+\sup_{t_{0}\leq s\leq t\leq T}\|\boldsymbol{\Phi}(t, s)\|T\!\sup_{s\in [t_{0}, T]}\|\mathbf{f}(s, \mathbf{x}(s), \mathbf{u}(s))\|_{\mathbb{R}^{n}}+\!\sup_{t, s\in[t_{0}, T]}\|\boldsymbol{\Phi}(t, s)\|\!\sum_{j=1}^{M}\|\mathbf{g}_{j}(\mathbf{x}(t_{j}), \mathbf{u}(t_{j}))\|_{\mathbb{R}^{n}}\\ &\leq M_{1}M_{2}+M_{3}\|\mathbf{W}^{-1}\|\big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\big)+M_{1}TK+M_{1}\vartheta, \end{align*}that is,   \begin{align} \left\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\right\|_{\mathcal{B}_{1}}\leq \big(1+M_{3}\|\mathbf{W}^{-1}\|\big)\big(M_{1}M_{2}+M_{1}(TK+\vartheta)\big)+M_{3}\|\mathbf{W}^{-1}\|\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}. \end{align} (4.12) Since the right-hand side of the above inequality is independent of the choice of x and u, the inequality holds for any $$\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{1}(\mathcal{B}).$$ Thus, the set $$\mathcal{K}_{1}(\mathcal{B})$$ is uniformly bounded on $$[t_{0}, T].$$ To show the equicontinuity of $$\mathcal{K}_{2}(\mathcal{B}),$$ choose elements $$s_{1}, s_{2}\in [t_{0}, T-h_{N}]$$ or $$s_{1}, s_{2}\in (T-h_{i}, T-h_{i-1}]$$ or $$s_{1}, s_{2}\in (T-h_{1}, T] $$ with $$s_{1} < s_{2},$$ and for any $$ \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\in \mathcal{K}_{2}(\mathcal{B}),$$ consider the following estimation:   \begin{align*} &\left\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{2})\right\|_{\mathbb{R}^{m}}\\ &\quad\leq \sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\| \big\|\mathbf{W}^{-1}\big\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}\\ &\quad\leq \sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\|\\ &\qquad\times \|\mathbf{W}^{-1}\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\Big). 
\end{align*}For the uniform boundedness of $$\mathcal{K}_{2}(\mathcal{B}),$$ consider the following estimation:   \begin{align*} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}&=\sup_{t\in[t_{0}, T]}\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(t)\|_{\mathbb{R}^{m}} \\ &=\max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left(\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\right\|_{\mathbb{R}^{m}},\right.\\ &\left.\qquad\qquad\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left(\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\mathbf{W}^{-1}\mathcal{L}(\mathbf{x}, \mathbf{u})\right\|_{\mathbb{R}^{m}}\right\}\\ &\leq \max_{l}\left\{\sup_{t\in[t_{0}, T-h_{N}]}\left\|\left(\sum \limits_{i=1}^{N}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\right\|,\right.\\ &\left.\qquad\qquad\sup_{t\in(T-h_{l+1}, T-h_{l}]}\left\|\left(\sum \limits_{i=1}^{l}\boldsymbol{\Phi}(T, t+h_{i})\mathbf{B}_{i}(t+h_{i})\right)^{*}\right\|\right\} \big\|\mathbf{W}^{-1}\big\|\|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}, \end{align*}that is,   \begin{align} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\leq M_{4}\big\|\mathbf{W}^{-1}\big\|\Big(\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+ M_{1}(T K +\vartheta)\Big). \end{align} (4.13) Now $$\mathcal{K}(\mathcal{B})=\mathcal{K}_{1}(\mathcal{B})\times \mathcal{K}_{2}(\mathcal{B})=\big \{\big (\mathcal{K}_{1}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\big ):\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq r_{0}\big \}$$ is an equicontinuous set on $$[t_{0}, t_{1}],(t_{1}, t_{2}],\ldots ,(t_{M-1}, t_{M}],(t_{M}, T-h_{N}],(T-h_{N}, T-h_{N-1}],\ldots ,(T-h_{1}, T]$$ and uniformly bounded on $$[t_{0}, T].$$ Consequently, if we take a sequence $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, \mathbf{u}))\}$$ in $$\mathcal{K}(\mathcal{B}),$$ this sequence is uniformly bounded and equicontinuous on each interval and in particular on $$[t_{0}, t_{1}],$$ so by the Arzela–Ascoli theorem, there exists a subsequence $$\big\{\big(\mathcal{K}_{1}^{n_{1}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{1}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ of $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, \mathbf{u}))\}$$, which is uniformly convergent on $$[t_{0}, t_{1}].$$ Consider the sequence $$\big\{\big(\mathcal{K}_{1}^{n_{1}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{1}}(\mathbf{x}, \mathbf{u})\big)\big\}$$, which is equicontinuous and uniformly bounded on each interval, in particular on $$(t_{1}, t_{2}],$$ and, for the same reason, there exists a subsequence $$\big\{\big(\mathcal{K}_{1}^{n_{2}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{2}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ of $$\big\{\big(\mathcal{K}_{1}^{n_{1}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{1}}(\mathbf{x}, \mathbf{u})\big)\big\}$$ which is uniformly convergent on $$[t_{0}, t_{2}].$$ Continuing this process for the intervals $$(t_{2}, t_{3}],\ldots ,(t_{M-1}, t_{M}],\ (t_{M}, T-h_{N}],\ (T-h_{N}, T-h_{N-1}],\ldots ,\ (T-h_{1}, T],$$ we see that the sequence $$\big \{\big (\mathcal{K}_{1}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u})\big )\big \}$$ is uniformly convergent on $$[t_{0}, T].$$ Thus, $$\{(\mathcal{K}_{1}^{n}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n}(\mathbf{x}, 
\mathbf{u}))\}$$, being an arbitrary sequence in $$\mathcal{K(B)},$$ has a convergent subsequence $$\big \{\big (\mathcal{K}_{1}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u}), \mathcal{K}_{2}^{n_{(M+N+1)}}(\mathbf{x}, \mathbf{u})\big )\big \}$$ on $$[t_{0}, T].$$ Hence, $$\mathcal{K(B)}$$ is a compact set in $$\mathcal{X}.$$ Step 3: $$\mathcal{K}(\mathcal{B})\subset \mathcal{B}.$$ Let $$\mathcal{K}(\mathbf{x}, \mathbf{u})\in \mathcal{K}(\mathcal{B})$$ be any element. We use the estimations (4.12) and (4.13) to get   \begin{align*} \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}&=\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\\ &\leq \Big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\Big)\big(M_{1} M_{2}+M_{1}(TK +\vartheta)\big)+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|. \end{align*}Since the right-hand side of the above estimate is a constant independent of $$(\mathbf{x}, \mathbf{u}),$$ we see that $$\lim \limits _{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\rightarrow \infty }\frac{\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}=0.$$ Therefore, for a fixed $$\varepsilon \in (0, 1),$$ we can choose the radius $$r_{0}>0$$ of $$\mathcal{B}$$ sufficiently large so that $$\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq \varepsilon r_{0} < r_{0}$$ for every $$(\mathbf{x}, \mathbf{u})\in \mathcal{B}.$$ Therefore, we have $$\mathcal{K(B)}\subset \mathcal{B}.$$ Now, Lemma 2.1 ensures the existence of a fixed point for the operator $$\mathcal{K}(\cdot )$$ in $$\mathcal{B}\subset \mathcal{X}$$ and hence, by Theorem 4.1, system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.2 It is clear from Theorem 4.2 that if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ are bounded on their domains, then the system (2.1) is also controllable on $$[t_{0}, T].$$ 4.2. Controllability of the system (2.1) for a class of Lipschitz non-linearities In this subsection, we prove controllability results of the system (2.1) for a class of Lipschitz non-linearities. 
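Before stating the result, it is perhaps worth noting, purely as an illustration and not as part of the original argument, that the contraction principle invoked below is constructive: once the operator $$\mathcal{K}(\cdot )$$ of Theorem 4.1 has been discretized in time, the successive approximations $$(\mathbf{x}_{n+1}, \mathbf{u}_{n+1})=\mathcal{K}(\mathbf{x}_{n}, \mathbf{u}_{n})$$ converge geometrically whenever the contraction constant is smaller than one. A minimal, generic sketch of such an iteration is given below; the operator K is assumed to be supplied by the user, and the map in the last line is only a toy contraction used to exercise the routine.

import numpy as np

def fixed_point(K, z0, tol=1e-10, max_iter=10000):
    # Successive approximation z_{n+1} = K(z_n); by the Banach contraction
    # mapping principle (Lemma 2.2) this converges geometrically whenever K
    # is a contraction with constant delta < 1.
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        z_new = K(z)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Toy contraction with delta = 0.5; its unique fixed point is z = 2.
print(fixed_point(lambda z: 0.5*z + 1.0, np.zeros(1)))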
Theorem 4.3 In system (2.1), if we assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ with Lipschitz constants $$\alpha _{0},\beta _{0}\geq 0,$$ that is,   $$ \|\mathbf{f}(t, \mathbf{v}_{1}, \mathbf{w}_{1})-\mathbf{f}(t, \mathbf{v}_{2}, \mathbf{w}_{2})\|_{\mathbb{R}^{n}}\leq \alpha_{0}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{\mathbb{R}^{n}}+\beta_{0}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{\mathbb{R}^{m}},$$for all $$(t, \mathbf{v}_{1}, \mathbf{w}_{1}), (t, \mathbf{v}_{2}, \mathbf{w}_{2})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2}$$ with Lipschitz constants $$\alpha _{k}, \beta _{k}\geq 0,$$ that is,   $$ \|\mathbf{g}_{k}(\mathbf{v}_{1}, \mathbf{w}_{1})-\mathbf{g}_{k}(\mathbf{v}_{2}, \mathbf{w}_{2})\|_{\mathbb{R}^{n}}\leq \alpha_{k}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{\mathbb{R}^{n}}+\beta_{k}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{\mathbb{R}^{m}},$$for all $$ (\mathbf{v}_{1}, \mathbf{w}_{1}),\ (\mathbf{v}_{2}, \mathbf{w}_{2})\in \mathbb{R}^{n}\times \mathbb{R}^{m};$$ (iii) $$\delta = [M_{1}(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\| )\gamma ]<1,$$ where   \begin{align} \gamma:=\max\left\{\left(T\alpha_{0}+\sum \limits_{k=1}^{M}\alpha_{k}\right), \left(T\beta_{0}+\sum \limits_{k=1}^{M}\beta_{k}\right)\right\}, \end{align} (4.14) then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. We apply the Banach contraction mapping principle (see Lemma 2.2) to show that the operator $$\mathcal{K}(\cdot )$$ is a contraction, and then the proof follows from Theorem 4.1. Let us begin by considering   $$ \left\|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{X}}=\left\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{B}_{1}}+\left\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{B}_{2}}. $$Now, we use the estimations for $$\|\mathcal{K}_{1}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{1}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{1}}$$ and $$\|\mathcal{K}_{2}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}_{2}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{B}_{2}}$$, which we obtained in Step 1 in the proof of Theorem 4.2, to get   \begin{align*} \|\mathcal{K}(\mathbf{x}_{1}, \mathbf{u}_{1})-\mathcal{K}(\mathbf{x}_{2}, \mathbf{u}_{2})\|_{\mathcal{X}}&\leq M_{1}\Big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\Big)\\ &\quad\times\left\{T\sup \limits_{s\in [t_{0}, T]}\|\mathbf{f}(s,\mathbf{x}_{1}(s), \mathbf{u}_{1}(s))-\mathbf{f}(s, \mathbf{x}_{2}(s), \mathbf{u}_{2}(s))\|_{\mathbb{R}^{n}}\right.\\ &\left.\quad\qquad+\sum \limits_{k=1}^{M}\|\mathbf{g}_{k}(\mathbf{x}_{1}(t_{k}), \mathbf{u}_{1}(t_{k}))-\mathbf{g}_{k}(\mathbf{x}_{2}(t_{k}), \mathbf{u}_{2}(t_{k}))\|_{\mathbb{R}^{n}}\right\}\\ &\leq M_{1}\left(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\right)\gamma\left(\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{\mathcal{B}_{1}}+\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathcal{B}_{2}}\right)\\ &\leq \delta \left\|(\mathbf{x}_{1}, \mathbf{u}_{1})-(\mathbf{x}_{2}, \mathbf{u}_{2})\right\|_{\mathcal{X}}. 
\end{align*}Since $$\delta <1,$$ the operator $$\mathcal{K}(\cdot ):\mathcal{X}\rightarrow \mathcal{X}$$ is a contraction and hence, by Lemma 2.2, $$\mathcal{K}(\cdot )$$ has a unique fixed point in $$\mathcal{X}.$$ Then by Theorem 4.1, the system (2.1) is controllable on $$[t_{0}, T].$$ Remark 4.3 Theorem 4.3 shows that if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ satisfy a Lipschitz condition as defined above, then the system (2.1) is also controllable on $$[t_{0}, T],$$ provided   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\gamma<1,$$where $$\gamma $$ is given in (4.14). 4.3. Controllability of the system (2.1) for a class of non-linearities satisfying the linear growth condition The controllability results for the system (2.1), for a class of non-linearities satisfying the linear growth condition, are established in this subsection using Theorem 4.1. Theorem 4.4 In system (2.1), if we assume that (i) the function $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ with growth constants $$a_{0}, b_{0}, c_{0}\geq 0,$$ that is,   $$ \|\mathbf{f}(t, \mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq a_{0}\|\mathbf{v}\|_{\mathbb{R}^{n}}+b_{0}\|\mathbf{w}\|_{\mathbb{R}^{m}}+c_{0},\textrm{ for all }(t, \mathbf{v}, \mathbf{w})\in [t_{0}, T]\times \mathbb{R}^{n}\times \mathbb{R}^{m},$$ (ii) each function $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ with growth constants $$a_{k}, b_{k}\geq 0,$$ that is,   $$ \|\mathbf{g}_{k}(\mathbf{v}, \mathbf{w})\|_{\mathbb{R}^{n}}\leq a_{k}\|\mathbf{v}\|_{\mathbb{R}^{n}}+b_{k}\|\mathbf{w}\|_{\mathbb{R}^{m}},\quad \textrm{for all }(\mathbf{v}, \mathbf{w})\in \mathbb{R}^{n}\times \mathbb{R}^{m},$$ (iii) $$M_{1}\big (1+(M_{3}+M_{4})\big \|\mathbf{W}^{-1}\big \|\big )\bigg (T(a_{0}+b_{0})+\sum \limits _{k=1}^{M}(a_{k}+b_{k})\bigg )<1,$$then the semilinear impulsive delay system (2.1) is controllable on $$[t_{0}, T].$$ Proof. Let $$\mathcal{B}$$ be a non-empty, closed and convex subset of $$\mathcal{X}$$ as defined earlier. The proof is similar to the proof of Theorem 4.2, that is, we show that $$\mathcal{K}(\cdot )$$ is a continuous operator from $$\mathcal{B}$$ into a compact subset of $$\mathcal{B},$$ so that the existence of a fixed point for $$\mathcal{K}(\cdot )$$ is guaranteed by Lemma 2.1. Then the proof follows from Theorem 4.1. The continuity of $$\mathcal{K}(\cdot )$$ on $$\mathcal{B}$$ is already established in the proof of Theorem 4.2. It now remains to show that $$\mathcal{K(B)}$$ is compact and that $$\mathcal{K(B)}\subset \mathcal{B}.$$ For this, we first need the following estimation, which we obtain from (4.6):   \begin{align} \begin{aligned} \|\mathcal{L}(\mathbf{x}, \mathbf{u})\|_{\mathbb{R}^{n}}&\leq \|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1}M_{2}+TM_{1}c_{0}+M_{1}\left\{\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum\limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right\}. 
\end{aligned} \end{align} (4.15) Now, the equicontinuity of the set $$\mathcal{K}_{1}(\mathcal{B})$$ on $$[t_{0}, t_{1}], (t_{k}, t_{k+1}]$$ and its uniform boundedness on $$[t_{0}, T]$$ follow from the following estimations:   \begin{align*} &\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{n}}\\ &\leq M_{2}\|\boldsymbol{\Phi}(s_{1}, t_{0})-\boldsymbol{\Phi}(s_{2}, t_{0})\|\\ &\quad+\left\{(s_{1}-h_{N})\sup\limits_{s}\left\|\sum_{i=1}^{N}\big(\boldsymbol{\Phi}(s_{1}, s+h_{i})-\boldsymbol{\Phi}(s_{2}, s+h_{i})\big)\mathbf{B}_{i}(s+h_{i})\right\|\right.\\ &\quad\times\sup\limits_{s}\left\|\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\right\|\\ &\quad+(s_{2}-s_{1})\sup\limits_{s}\left\|\sum_{i=1}^{N}\boldsymbol{\Phi}(s_{2}, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right\|\sup\limits_{s}\left\|\left(\sum_{i=1}^{N}\boldsymbol{\Phi}(T, s+h_{i})\mathbf{B}_{i}(s+h_{i})\right)^{*}\right\|+\cdots\\ &\quad+(h_{2}-h_{1})\sup\limits_{s}\big\|\boldsymbol{\Phi}(s_{1}, s+h_{1})-\boldsymbol{\Phi}(s_{2}, s+h_{1})\big\|\big\|\mathbf{B}_{1}(s+h_{1})\big\|\sup\limits_{s}\big\|\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\big\|\\ &\left.\quad+\, 2(s_{2}-s_{1})\sup\limits_{s}\left\|\boldsymbol{\Phi}(s_{2}, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big(\boldsymbol{\Phi}(T, s+h_{1})\mathbf{B}_{1}(s+h_{1})\big)^{*}\right\|\vphantom{\sum_{i=1}^{N}}\right\}\\ &\quad\times \big\|\mathbf{W}^{-1}\big\|\left\{\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1} M_{2}+T M_{1}c_{0}+M_{1}\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}\right\}\\ &\quad+\left\{s_{1} \sup\limits_{s}\|\boldsymbol{\Phi}(s_{1}, s)-\boldsymbol{\Phi}(s_{2}, s)\|+M_{1}(s_{2}-s_{1})\right\}\big((a_{0}+b_{0})r_{0}+c_{0}\big)\\ &\quad+\sum_{j=1}^{k}\|\boldsymbol{\Phi}(s_{1}, t_{j})-\boldsymbol{\Phi}(s_{2}, t_{j})\|(a_{j}+b_{j})r_{0}, \end{align*}and   \begin{align} \begin{aligned} \|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}&\leq M_{1}M_{2}+TM_{1}c_{0}+M_{3}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\Big(1+M_{3}\big\|\mathbf{W}^{-1}\big\|\Big)\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+ \left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+M_{3}\|\mathbf{W}^{-1}\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\big(1+M_{3}\|\mathbf{W}^{-1}\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}. 
\end{aligned} \end{align} (4.16) Similarly, the equicontinuity of the set $$\mathcal{K}_{2}(\mathcal{B})$$ on $$[t_{0}, T-h_{N}], (T-h_{i}, T-h_{i-1}]$$ and $$(T-h_{1}, T]$$ and its uniform boundedness on $$[t_{0}, T]$$ are guaranteed by the following estimations:   \begin{align*} \|\mathcal{K}_{2}&(\mathbf{x}, \mathbf{u})(s_{1})-\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})(s_{2})\|_{\mathbb{R}^{m}}\\ \leq &\sum_{i}\Big\|\Big(\boldsymbol{\Phi}(T, s_{1}+h_{i})\mathbf{B}_{i}(s_{1}+h_{i})\Big)^{*}-\Big(\boldsymbol{\Phi}(T, s_{2}+h_{i})\mathbf{B}_{i}(s_{2}+h_{i})\Big)^{*}\Big\|\\ &\times \big\|\mathbf{W}^{-1}\big\|\left\{\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+M_{1}M_{2}+TM_{1}c_{0}+M_{1}\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}\right\}, \end{align*}and   \begin{align} \begin{aligned} \|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}&\leq M_{4}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &+M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{4}\big\|\mathbf{W}^{-1}\big\|(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &+M_{1}M_{4}\big\|\mathbf{W}^{-1}\big\|\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)r_{0}. \end{aligned} \end{align} (4.17) Now, by the same argument as in the proof of Theorem 4.2, we conclude that $$\mathcal{K(B)}$$ is a compact set. Finally, to show $$\mathcal{K(B)}\subset \mathcal{B},$$ take an element $$\mathcal{K}(\mathbf{x}, \mathbf{u})\in \mathcal{K}(\mathcal{B})$$ and use the estimations (4.16) and (4.17) to obtain   \begin{align*} \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}&=\|\mathcal{K}_{1}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{1}}+\|\mathcal{K}_{2}(\mathbf{x}, \mathbf{u})\|_{\mathcal{B}_{2}}\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+\big\|\mathbf{W}^{-1}\big\|(M_{3}+M_{4})(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\Big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\Big)\left[\left(Ta_{0}+\sum \limits_{k=1}^{M}a_{k}\right)\|\mathbf{x}\|_{\mathcal{B}_{1}}+\left(Tb_{0}+\sum \limits_{k=1}^{M}b_{k}\right)\|\mathbf{u}\|_{\mathcal{B}_{2}}\right]\\ &\leq M_{1}M_{2}+TM_{1}c_{0}+\big\|\mathbf{W}^{-1}\big\|(M_{3}+M_{4})(M_{1}M_{2}+\|\mathbf{x}_{_{T}}\|_{\mathbb{R}^{n}}+TM_{1}c_{0})\\ &\quad+M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)\big(\|\mathbf{x}\|_{\mathcal{B}_{1}}+\|\mathbf{u}\|_{\mathcal{B}_{2}}\big). 
\end{align*}Thus, we have   $$ \lim\limits_{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\rightarrow \infty}\frac{\|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}{\|(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}}\leq M_{1}\big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\big)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)<1.$$ Hence, for some $$\varepsilon \in (0, 1)$$ with $$M_{1}(1+(M_{3}+M_{4}) \|\mathbf{W}^{-1} \| ) (T(a_{0}+b_{0})+\sum\nolimits _{k=1}^{M}(a_{k}+b_{k}) )<\varepsilon ,$$ we can choose the radius $$r_{0}>0$$ of $$\mathcal{B}$$ sufficiently large so that $$ \|\mathcal{K}(\mathbf{x}, \mathbf{u})\|_{\mathcal{X}}\leq \varepsilon r_{0} < r_{0}$$ for every $$(\mathbf{x}, \mathbf{u})\in \mathcal{B}.$$ Thus, we finally have $$\mathcal{K(B)}\subset \mathcal{B}.$$ Remark 4.4 According to Theorem 4.4, if the semilinear impulsive delay system (2.1) possesses a unique solution on $$[t_{0}, T]$$ for any initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for any control function $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the linear delay system (3.1) is controllable on $$[t_{0}, T]$$ and the continuous functions f(⋅, ⋅, ⋅) and each $$\mathbf{g}_{k}(\cdot , \cdot )$$ satisfy the linear growth condition as defined above, then the system (2.1) is also controllable on $$[t_{0}, T],$$ provided   $$ M_{1}\left(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\right)\left(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\right)<1.$$ 5. Numerical example We consider the following two-dimensional non-autonomous impulsive semilinear system with two delays in the control and two impulses in the state,   \begin{align} \left. \begin{aligned} \begin{bmatrix} \dot{x_{1}}(t)\\ \dot{x_{2}}(t) \end{bmatrix}&=\begin{bmatrix} -2 & t\\ 0 & -1 \end{bmatrix}\begin{bmatrix} x_{1}(t)\\ x_{2}(t) \end{bmatrix}+\begin{bmatrix} 1\\ 0 \end{bmatrix}\mathbf{u}(t-0.05)+\begin{bmatrix} 1\\ -1.6 \end{bmatrix}\mathbf{u}(t-0.1)\\&\quad+ t \sin(\mathbf{u}^{2}(t))\begin{bmatrix} \sin({x_{1}^{2}}(t))\\ \cos({x_{2}^{2}}(t)) \end{bmatrix}\!,\ t\in [0, 3]\setminus\{1, 2\},\\ \Delta(\mathbf{x}(1))&=\begin{bmatrix} \sin({x_{1}^{2}}(1)\mathbf{u}(1))\\ \cos({x_{2}^{2}}(1)\mathbf{u}(1)) \end{bmatrix}\!, \Delta(\mathbf{x}(2))=\begin{bmatrix} \cos({x_{1}^{2}}(2)\mathbf{u}(2))\\ \sin({x_{2}^{2}}(2)\mathbf{u}(2)) \end{bmatrix}\!,\\ \begin{bmatrix} x_{1}(0)\\ x_{2}(0) \end{bmatrix}&=\begin{bmatrix} 2\\1 \end{bmatrix}\!,\\ \mathbf{u}(t)&=t^{3}, t\in[-0.1, 0). \end{aligned} \right\} \end{align} (5.1) Comparing this equation with (2.1), we get   \begin{align*} &\mathbf{A}(t)=\begin{bmatrix} -2 & t\\ 0 & -1 \end{bmatrix}, \mathbf{B}_{1}(t)=\begin{bmatrix} 1\\ 0 \end{bmatrix}, \mathbf{B}_{2}(t)=\begin{bmatrix} 1\\ -1.6 \end{bmatrix}\!,\\ & h_{1}=0.05, h_{2}=0.1, \ t_{0}=0, \ t_{1}=1, \ t_{2}=2, \ T=3,\\ & \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))=t\sin(\mathbf{u}^{2}(t))\begin{bmatrix} \sin({x_{1}^{2}}(t))\\ \cos({x_{2}^{2}}(t)) \end{bmatrix}, \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))=\begin{bmatrix} \sin({x_{1}^{2}}(1)\mathbf{u}(1))\\ \cos({x_{2}^{2}}(1)\mathbf{u}(1)) \end{bmatrix}\!,\\ &\mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))=\begin{bmatrix} \cos({x_{1}^{2}}(2)\mathbf{u}(2))\\ \sin({x_{2}^{2}}(2)\mathbf{u}(2)) \end{bmatrix}\!. 
\end{align*}We calculate the associated state-transition matrix as   \begin{align*} \boldsymbol{\Phi}(t, s)&=\begin{bmatrix} e^{-2(t-s)} & e^{-(t-s)}(t-1)-e^{-2(t-s)}(s-1)\\ 0 & e^{-(t-s)} \end{bmatrix}\!,\\ \mathbf{W}_{1}&=\int_{2.9}^{2.95}\boldsymbol{\Phi}(3, s+0.05)\mathbf{B}_{1} \big(\boldsymbol{\Phi}(3, s+0.05)\mathbf{B}_{1}\big)^{*}\,\mathrm{d}s=\begin{bmatrix} 0.0453 & 0\\ 0 & 0 \end{bmatrix}\!,\\ \mathbf{W}_{2}&=\int_{0}^{2.9}\sum \limits_{i=1}^{2}\boldsymbol{\Phi}(3, s+h_{i})\mathbf{B}_{i}\Big(\sum \limits_{i=1}^{2}\boldsymbol{\Phi}(3, s+h_{i})\mathbf{B}_{i}\Big)^{*}\,\mathrm{d}s=\begin{bmatrix} 0.914 & 0\\ 0 & 1.265 \end{bmatrix}\!. \end{align*}Clearly $$\textrm{rank}(\mathbf{W}_{1}|\mathbf{W}_{2})=2,$$ and it follows from Theorem 3.1 that the linear part of system (5.1) without impulses is controllable on [0, 3]. Furthermore, we calculate   $$ \big\|\mathbf{W}^{-1}\big\|=\big\|(\mathbf{W}_{1}+\mathbf{W}_{2})^{-1}\big\|\approx 1.308,\quad M_{1}=\sup \limits_{0\leq s\leq t\leq 3}\|\boldsymbol{\Phi}(t, s)\|=\sqrt{2},\quad M_{3}\approx 1.442,\quad M_{4}\approx 2.487. $$Since $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C}([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}(\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ are bounded functions on their domains, that is, $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$ for k = 1, 2, it follows from Theorem 4.2 that the semilinear system (5.1) is also controllable on [0, 3]. In (5.1), if we choose   \begin{align*} \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))&=\frac{1}{100}\begin{bmatrix} x_{1}(t)+\sin\big(x_{1}(t)\big)\\ x_{2}(t)+\sin\big(x_{2}(t)\big) \end{bmatrix}\!,\\ \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))&=\frac{1}{100}\begin{bmatrix} x_{1}(1)\\ \cos\big(x_{2}(1)\big) \end{bmatrix}\!,\\ \mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))&=\frac{1}{100}\begin{bmatrix} \cos\big(x_{1}(2)\big)\\ x_{2}(2) \end{bmatrix}\!, \end{align*}then we see that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C} ([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C} (\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2}),$$ for k = 1, 2, are unbounded on their domains, so we cannot apply Theorem 4.2 to check the controllability of (5.1). However, we see that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2},$$ for k = 1, 2, with $$\alpha _{0}=\frac{2\sqrt{2}}{100},$$$$\beta _{0}=0,$$$$\alpha _{1}=\frac{\sqrt{2}}{100},$$$$\beta _{1}=0,$$$$\alpha _{2}=\frac{\sqrt{2}}{100}$$ and $$\beta _{2}=0.$$ Hence,   $$ \gamma:=\max\!\bigg\{\left(T\alpha_{0}+\sum \limits_{k=1}^{2}\alpha_{k}\right), \left(T\beta_{0}+\sum \limits_{k=1}^{2}\beta_{k}\right)\bigg\}=\max\!\bigg\{\left(3\times \frac{2\sqrt{2}}{100}+\frac{\sqrt{2}}{100}+\frac{\sqrt{2}}{100}\right), 0\bigg\}\approx 0.11314.$$A direct calculation then shows that   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\gamma<1$$ and hence, by Theorem 4.3, the system (5.1) is controllable on [0, 3]. 
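The quantities computed above can also be checked numerically. The following minimal sketch (ours, not part of the original article) assumes NumPy and SciPy, works with the Frobenius norm throughout, evaluates $$\mathbf{W}_{1}$$ and $$\mathbf{W}_{2}$$ by entrywise quadrature from the closed-form $$\boldsymbol{\Phi}(t, s)$$ given above, and simply takes $$M_{3}$$ and $$M_{4}$$ at their reported values, since their defining suprema are introduced earlier in the paper.

import numpy as np
from scipy.integrate import quad

T, h1, h2 = 3.0, 0.05, 0.1
B1 = np.array([1.0, 0.0])
B2 = np.array([1.0, -1.6])

def Phi(t, s):
    # state-transition matrix of A(t) = [[-2, t], [0, -1]]
    return np.array([[np.exp(-2.0*(t - s)),
                      (t - 1.0)*np.exp(-(t - s)) - (s - 1.0)*np.exp(-2.0*(t - s))],
                     [0.0, np.exp(-(t - s))]])

def gram(v, a, b):
    # entrywise quadrature of the 2x2 integrand v(s) v(s)^T over [a, b]
    return np.array([[quad(lambda s: v(s)[i]*v(s)[j], a, b)[0] for j in range(2)]
                     for i in range(2)])

W1 = gram(lambda s: Phi(T, s + h1) @ B1, T - h2, T - h1)
W2 = gram(lambda s: Phi(T, s + h1) @ B1 + Phi(T, s + h2) @ B2, 0.0, T - h2)

print(np.linalg.matrix_rank(np.hstack([W1, W2])))     # 2: the linear part is controllable
Winv = np.linalg.norm(np.linalg.inv(W1 + W2), 'fro')  # roughly 1.3, cf. the value 1.308 above
M1 = np.sqrt(2.0)                                     # sup of the norm of Phi(t, s), attained at t = s
M3, M4 = 1.442, 2.487                                 # reported values
gamma = 8.0*np.sqrt(2.0)/100.0                        # approx 0.11314, as computed above
print(M1*(1.0 + (M3 + M4)*Winv)*gamma)                # comes out below one, so Theorem 4.3 applies

Small differences between the matrices printed by this sketch and those displayed above are attributable to rounding; what matters for the argument is only that the rank is two and that the final constant stays below one.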
Finally, to apply Theorem 4.4, choose the functions f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ in the system (5.1) as   \begin{align*} \mathbf{f}(t, \mathbf{x}(t), \mathbf{u}(t))&=c_{0}\begin{bmatrix} x_{1}(t)\sin\Big(\frac{1}{{x_{1}^{2}}(t)+1}\Big)+t\mathbf{u}(t)\\ x_{2}(t)\sin\Big(\frac{1}{{x_{2}^{2}}(t)+1}\Big) \end{bmatrix}\!,\\ \mathbf{g}_{1}(\mathbf{x}(t_{1}), \mathbf{u}(t_{1}))&=c_{1}\begin{bmatrix} x_{1}(1)+\mathbf{u}(1)\sin({x_{2}^{2}}(1))\\x_{2}(1) \end{bmatrix}\!,\\ \mathbf{g}_{2}(\mathbf{x}(t_{2}), \mathbf{u}(t_{2}))&=c_{2}\begin{bmatrix} x_{1}(2)\\x_{2}(2)+\mathbf{u}(2)\cos({x_{1}^{2}}(2)) \end{bmatrix}\!, \end{align*}where $$c_{0}, c_{1}, c_{2}$$ are positive constants. Note that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathrm{C} ([0, 3]\times \mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathrm{C}(\mathbb{R}^{2}\times \mathbb{R};\mathbb{R}^{2} )$$ are unbounded and, further, $$\mathbf{f}(\cdot , \cdot , \cdot )\notin \mathcal{L}\mathcal{C}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\notin \mathcal{L}\mathcal{C}_{2},$$ for k = 1, 2. So, neither Theorem 4.2 nor Theorem 4.3 is applicable here to check the controllability of (5.1). However, we observe that $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ and $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ for k = 1, 2, with $$a_{0}=c_{0}, b_{0}=3c_{0}, a_{1}=b_{1}=c_{1}$$ and $$a_{2}=b_{2}=c_{2}.$$ Then for suitable choices of $$c_{0}, c_{1}$$ and $$c_{2},$$ we can get   $$ M_{1}\big(1+(M_{3}+M_{4})\big\|\mathbf{W}^{-1}\big\|\big)\left(T(a_{0}+b_{0})+\sum\limits_{k=1}^{2}(a_{k}+b_{k})\right)=8.6956 \big(12c_{0}+2c_{1}+2c_{2}\big)<1$$ (for instance, $$c_{0}=c_{1}=c_{2}=0.005$$ gives $$8.6956\times 0.08\approx 0.696<1$$) and hence the system (5.1) is controllable on [0, 3] by Theorem 4.4. 6. Conclusion In this paper, we have considered an n-dimensional semilinear impulsive dynamical control system with multiple constant time delays in control and derived sufficient conditions guaranteeing that this system (2.1) is controllable on $$[t_{0}, T]$$ for certain classes of non-linearities f(⋅, ⋅, ⋅) and impulse functions $$\mathbf{g}_{k}(\cdot , \cdot ).$$ The results are obtained by employing the Schauder fixed-point theorem and the Banach contraction mapping principle. By assuming that, for a given initial state $$\mathbf{x}(t_{0})=\mathbf{x}_{0}\in \mathbb{R}^{n}$$ and for a given $$\mathbf{u}(\cdot )\in \mathcal{B}_{2},$$ the semilinear impulsive delay system (2.1) admits a unique solution on $$[t_{0}, T]$$ and the linear delay system (3.1) is controllable on $$[t_{0}, T],$$ we have established that the semilinear impulsive delay system (2.1) is also controllable on $$[t_{0}, T]$$ under each of the following assumptions: (i) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathfrak{B}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathfrak{B}_{2}$$; (ii) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{C}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{C}_{2}$$ with   $$ M_{1}\big(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\big)\gamma<1,$$where $$\gamma $$ is given in (4.14); (iii) $$\mathbf{f}(\cdot , \cdot , \cdot )\in \mathcal{L}\mathcal{G}_{1}$$ and each $$\mathbf{g}_{k}(\cdot , \cdot )\in \mathcal{L}\mathcal{G}_{2}$$ with   $$ M_{1}\left(1+(M_{3}+M_{4})\|\mathbf{W}^{-1}\|\right)\Big(T(a_{0}+b_{0})+\sum \limits_{k=1}^{M}(a_{k}+b_{k})\Big)<1.$$A numerical example is provided to demonstrate the theoretical results. 
Furthermore, as every bounded function satisfies the linear growth condition, we have $$\mathfrak{B}_{1}\subset \mathcal{L}\mathcal{G}_{1}$$ and $$\mathfrak{B}_{2}\subset \mathcal{L}\mathcal{G}_{2}.$$ Also, every Lipschitz function satisfies the linear growth condition, so $$\mathcal{L}\mathcal{C}_{1}\subset \mathcal{L}\mathcal{G}_{1}$$ and $$\mathcal{L}\mathcal{C}_{2}\subset \mathcal{L}\mathcal{G}_{2}.$$ On the other hand, functions satisfying a Lipschitz condition need not be bounded, for example, $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})= t+\mathbf{v}+\sin\!\mathbf{v}+\sin\!\mathbf{w}$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Similarly, bounded functions need not satisfy a Lipschitz condition, for example $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})=t + \sin (\mathbf{v}^2 )+ \cos (\mathbf{w}^{2} )$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Therefore, $$\mathfrak{B}_{1}$$ and $$\mathcal{L}\mathcal{C}_{1}$$ are not comparable. Similarly, $$\mathfrak{B}_{2}$$ and $$\mathcal{L}\mathcal{C}_{2} $$ are not comparable. Further, a function satisfying the linear growth condition need be neither bounded nor Lipschitz, for example, $$\mathbf{f}(t, \mathbf{v}, \mathbf{w})=\mathbf{v} \sin (\mathbf{v}^2 )+\mathbf{w}\cos (\mathbf{w}^{2} )$$ defined on $$[0, 1]\times \mathbb{R}\times \mathbb{R}.$$ Thus, from this work, we conclude that Theorem 4.4 gives controllability conditions for the system (2.1) for a much broader class of functions f(⋅, ⋅, ⋅) and $$\mathbf{g}_{k}(\cdot , \cdot )$$ than Theorems 4.2 and 4.3. Acknowledgements The first author is grateful to the Department of Mathematics, Indian Institute of Space Science and Technology, India, for providing the required support to carry out this research work. Also, he would like to thank Dr. Manil T. Mohan, visiting scientist, Statistics and Mathematics unit, Indian Statistical Institute, Bangalore Centre, India, for useful discussions. Both authors are thankful to the reviewers and the associate editor for their constructive comments and suggestions to improve the quality of this manuscript.