On the descriptor variable observation of rectangular implicit representations, in the presence of column minimal indices blocks

Abstract

Recently, it has been shown that implicit rectangular descriptions can be successfully used for modelling and controlling broad classes of linear systems, including systems with internal switches (i.e., variable structure linear systems where the variation is driven by switching signals). This technique consists of first finding the degree-of-freedom characterizing the internal variable structure, and then making it unobservable by means of an ideal proportional and derivative descriptor variable feedback. When the proportional and derivative feedback is approximated by a suitable proper controller, the degree-of-freedom is only attenuated to order $$\varepsilon$$ (the degree of approximation). In this article, we propose two different ways of observing the descriptor variable of implicit rectangular systems in the presence of column minimal indices blocks. The first is a descriptor variable observer based on fault detection: an apparent failure signal characterizes the variation of structure, whose observation is required to support the synthesis of a standard state observer (this approach is restricted to minimum phase systems, with respect to the transfer between the degree-of-freedom and the output). The second is a descriptor variable observer based on precise finite-time adaptive structure detection.

1. Introduction

Rosenbrock (1970) introduced implicit representations as a generalization of proper linear systems (see, e.g., Vardulakis, 1991; Lewis, 1992).
An implicit representation, $${\mathfrak{R}^{imp}}(E, A, B, C)$$, is a set of differential and algebraic equations of the following form:

$$E\,{\mathrm{d}{{x}}}/{\mathrm{d}{{t}}} = Ax + Bu \ \ \text{and} \ \ y = Cx, \quad t \geq 0, \qquad(1.1)$$

where $${E}:\mathscr{X}_{d}\rightarrow \underline{\mathscr{X}}_{eq}$$, $${A}:\mathscr{X}_{d}\rightarrow \underline{\mathscr{X}}_{eq}$$, $${B}:\mathscr{U}\rightarrow \underline{\mathscr{X}}_{eq}$$ and $${C}:\mathscr{X}_{d}\rightarrow \mathscr{Y}$$ are linear maps. The spaces $$\mathscr{X}_{d} \approx {\Bbb{R}}^{n}$$, $$\underline{\mathscr{X}}_{eq} \approx {\Bbb{R}}^{\bar{n}}$$, $$\mathscr{U} \approx {\Bbb{R}}^{{m}}$$ and $$\mathscr{Y} \approx {\Bbb{R}}^{{p}}$$ are called the descriptor, the equation, the input and the output spaces, respectively. In Bonilla & Malabre (1991), it was shown that when $$\dim (\underline{\mathscr{X}}_{eq}) \leq \dim (\mathscr{X}_{d})$$ (where $$\dim ({\mathscr{V}})$$ stands for the dimension of the space $${\mathscr{V}}$$), it is possible to describe linear systems with an internal variable structure via an implicit representation. Indeed, when $$\dim (\underline{\mathscr{X}}_{eq}) < \dim (\mathscr{X}_{d})$$ and the system is solvable (i.e., possesses at least one solution), solutions are generally non-unique. In some sense there is a degree-of-freedom in (1.1), which can be used, for instance, to take into account, in an implicit way, a possible structure variation. In Bonilla & Malabre (2003), it is shown how non-square implicit descriptions can be used for modelling broad classes of linear systems, including systems with internal switches. Necessary and sufficient conditions, expressed in terms of the overall implicit model, exist for controlling it so that it has a unique behaviour (whatever the internal structure variations may be). The objective is to single out from these conditions the parts which are due to the common internal dynamic equation and, respectively, to the algebraic constraints which are 'controlled' (in a hidden way) by the degree-of-freedom.
It is shown how to embed the variable internal structure present in non-square implicit descriptions inside an $$(A, E, B)$$-invariant subspace contained in the kernel of the output map. Thanks to this embedding, the variable internal structure is made unobservable, and in this way a proper closed loop system with a controllable, pre-specified structure is obtained. In Bonilla et al. (2015a), the authors have taken advantage of the results obtained in Bonilla & Malabre (2003) to model a certain class of time-dependent, autonomous switched systems (Liberzon, 2003). In Bonilla & Malabre (2003), the authors have proposed a variable structure decoupling control strategy based on an ideal proportional and derivative (PD) feedback. In Bonilla et al. (2015b), the authors have proposed a proper practical approximation of such an ideal PD feedback, which rejects the variable structure; the stability of both implicit control strategies is also studied there. In this article, we now tackle the descriptor variable observation process for the class of time-dependent, autonomous switched systems considered in Bonilla et al. (2015a). We propose two different ways of observing the descriptor variable: a descriptor variable observer based on a fault detection scheme and a descriptor variable observer based on an adaptive structure detection approach. The article is organized as follows: in Section 2, we briefly recall how to model and to control, via implicit techniques, a class of linear switched systems (see Bonilla et al., 2015a, 2015b for details). In Section 3, we propose two descriptor variable observation procedures: (i) linear descriptor observers, based on fault detection techniques, and (ii) indirect variable descriptor observers, based on finite-time structure detection techniques. In Section 4, we present two illustrative examples, together with the corresponding numerical simulations. We conclude with some final remarks in Section 5.

2. Preliminaries

In Bonilla et al. (2013), it has been shown that any reachable implicit description, $${\mathfrak{R}^{imp}}(\overline{E}, \overline{A}, \overline{B})$$, can be decomposed as:

$$\underbrace{\begin{bmatrix}\mathrm{I}&0\\0&0\end{bmatrix}}_{\overline{E}}\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}x_{c}\\x_{\ell}\end{bmatrix}=\underbrace{\begin{bmatrix}\overline{A}_{1,1}&\overline{A}_{1,2}\\\overline{A}_{2,1}&\overline{A}_{2,2}\end{bmatrix}}_{\overline{A}}\begin{bmatrix}x_{c}\\x_{\ell}\end{bmatrix}+\underbrace{\begin{bmatrix}\overline{B}_{1}&0\\0&\mathrm{I}\end{bmatrix}}_{\overline{B}}\begin{bmatrix}u_{c}\\u_{a}\end{bmatrix},$$

where the pair $$\left(\overline{A}_{1,1}, [\, \overline{A}_{1,2}\ \ \overline{B}_{1} \,]\right)$$ is state reachable (in the classical sense). The component $${x}_{c}$$ is the part of the descriptor variable, associated with the integrator chains, which needs a control law to reach the desired goal; $${x}_{\ell}$$ is the free part of the descriptor variable, associated with the algebraic constraints, which acts as some kind of internal input variable, together with the component $${u}_{c}$$, which is the effective external control input variable; the component $${u}_{a}$$ of the external control variable has to satisfy the algebraic equation; this part of the input corresponds to algebraic relationships linked with purely derivative actions. Bonilla et al. (2015a) have taken advantage of the free descriptor variable $${x}_{\ell}$$ for describing systems having unexpected changes of structure, inside a known set of models; see for example the so-called ladder systems illustrated in the second example of Bonilla et al. (2015a). For this, the description of the system is split into two parts: (i) an implicit rectangular representation, $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$: $$E{\mathrm{d}{{x}}}/{\mathrm{d}{{t}}} = Ax+Bu$$ and $$y = Cx$$, which characterizes the fixed structure present in all the switching process, and (ii) the algebraic constraints, $$\mathfrak{R}^{alc}(0, D_{q_{{i}}}, 0)$$: $$0 = D_{{q_{{i}}}}x$$, which characterize a switching process responsible for the change of structure; this changing mechanism is here called the internal structure variation.
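A minimal simulation sketch of such a switching process (our own illustrative run: two locations of the parametrized family used later in (2.17), zero input, and the state kept continuous at the switching time) integrates the active model on each interval and checks the result against the interval-wise matrix-exponential solution:

```python
import numpy as np
from scipy.linalg import expm

# Two locations of the form A_q = [[q1, q2+1], [q1+1, q2]] (cf. (2.17)), u = 0.
def A(q):
    q1, q2 = q
    return np.array([[q1, q2 + 1.0], [q1 + 1.0, q2]])

schedule = [((-1.0, -1.0), 1.0), ((-1.0, 0.0), 1.0)]  # (location, dwell time)
x0 = np.array([1.0, -1.0])

# Euler integration across the switching times (state kept continuous).
dt = 1e-4
x = x0.copy()
for q, T in schedule:
    Aq = A(q)
    for _ in range(int(round(T / dt))):
        x = x + dt * (Aq @ x)

# Closed-form piecewise solution: x(T1+T2) = e^{A_qb T2} e^{A_qa T1} x0.
x_exact = (expm(A(schedule[1][0]) * schedule[1][1])
           @ expm(A(schedule[0][0]) * schedule[0][1]) @ x0)
err = np.linalg.norm(x - x_exact)
```

The interval-wise exponential is exactly the structure of the behaviour (2.1) below, with the initial condition of each interval supplied by the preceding one.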
An interesting class of systems which can be studied within this implicit rectangular representation framework is the linear switched system (LSS) (see van der Schaft & Schumacher, 2000; Liberzon, 2003) with the behaviour (see, e.g., Polderman & Willems, 1998):

$$\mathfrak{B}_{sw} := \Big\{ (q_{i}, u(\cdot), y(\cdot)) \in \mathscr{Q}\times\mathscr{U}\times\mathscr{Y} \ \Big|\ \exists\, c_{i}\in{\Bbb{R}}^{\bar{n}},\ \mathfrak{s}:\mathcal{I}\to\mathscr{Q},\ \text{s.t.:}\ \mathfrak{s}(\mathscr{I}_{i})=q_{i},$$
$$y(t)=\sum_{i\in{\Bbb{N}}}\mathbf{1}_{\mathscr{I}_{i}}(t)\,C_{q_{i}}\Big(\exp\big(A_{q_{i}}(t-T_{i-1})\big)c_{i}+\int_{0}^{t-T_{i-1}}\exp\big(A_{q_{i}}(t-T_{i-1}-\tau)\big)Bu(T_{i-1}+\tau)\,\mathrm{d}\tau\Big),\ t\in{\Bbb{R}}^{+},\ \mathscr{I}_{i}\in\mathcal{I}\Big\}, \qquad(2.1)$$

where $${\mathbf{1}}_{\mathscr{I}_{i}}({\cdot})$$ is the characteristic function of the interval $$\mathscr{I}_i = [{T}_{i-1},\, {T}_{i})$$ and $$0 = T_0 < T_1 < \cdots < T_{i-1} < T_i < \cdots$$, $$i \in {\Bbb{N}}$$, is some time partition, with $$\lim_{i\to\infty}T_i = \infty$$. The $${q_{{i}}}$$ are elements of a finite set of indexes (locations) $$\mathscr{Q}$$; the system remains in location $$q_{i} \in \mathscr{Q}$$ for all time instants $$t \in \left[T_{i-1},T_i\right), \ i \in {\Bbb{N}}$$, according to the assignation rule: $$\mathfrak{s}:\, \mathcal{I}\to\mathscr{Q}$$, $$\mathfrak{s}(\mathscr{I}_{i}) = q_{i}$$, where: $$\mathcal{I} := \left\{\mathscr{I}_{i}= [{T}_{i-1},\, {T}_{i})\subset {\Bbb{R}}^{+}\right\}$$. See Bonilla et al. (2015a) for the technical details of this particular description of LSS. Such an LSS can be described by the following state space representations, $$\mathfrak{R}^{ss}(A_{q_{{i}}}, B, C_{q_{{i}}})$$:

$$\frac{\mathrm{d}}{\mathrm{d}t}\bar{x}=A_{q_{i}}\bar{x}+Bu \ \ \text{and} \ \ y=C_{q_{i}}\bar{x};\quad t\in(T_{i-1},T_{i}),\ \ \lim_{t\to T_{i-1}^{+}}\bar{x}(t)=c_{i}, \qquad(2.2)$$

where $$A_{q_{{i}}} \in {\Bbb{R}}^{\bar{n}\times\bar{n}}$$, $$B \in {\Bbb{R}}^{\bar{n}\times{m}}$$ and $$C_{q_{{i}}} \in {\Bbb{R}}^{{p}\times{\bar{n}}}$$, and $$c_{i} \in {\Bbb{R}}^{\bar{n}}$$ are the initial conditions at each switching time $$T_{i-1}$$, $$i \in {\Bbb{N}}$$. We assume that the system matrices $$A_{q_{{i}}}$$ and $$C_{q_{{i}}}$$ have the specific structure (cf.
Narendra & Balakrishnan, 1994; Shorten & Narendra, 2002):

$$A_{q_{i}}=\overline{A}_{0}+\overline{A}_{1}\overline{D}(q_{i}) \ \ \text{and} \ \ C_{q_{i}}=\overline{C}_{0}+\overline{C}_{1}\overline{D}(q_{i}), \qquad(2.3)$$

where $$\overline{A}_{0} \in {\Bbb{R}}^{\bar{n}\times \bar{n}}$$, $$\overline{C}_{0} \in {\Bbb{R}}^{p\times \bar{n}}$$, $$\overline{A}_{1} \in {\Bbb{R}}^{\bar{n}\times \hat{n}}$$, $$\overline{C}_{1} \in {\Bbb{R}}^{p\times \hat{n}}$$ and $$\overline{D}({q_{{i}}}) \in {\Bbb{R}}^{\hat{n}\times\bar{n}}$$. We also assume that:

H1 $${\mathrm{rank}\,}{B} = {m}$$, $${\mathrm{rank}\,} C_{q_{i}} = {p}$$ and $${\mathrm{rank}\,}\overline{C}_{1} = {p}$$,

H2 $$\overline{D}({q_{{i}}})$$ varies linearly with respect to $${q_{{i}}}$$, namely: given two fixed locations $${{q_{{a}}}}, {{q_{{b}}}} \in \mathscr{Q}$$ and a scalar $$\lambda \in {\Bbb{R}}$$, then: $$\overline{D}({{q_{{a}}}} + \lambda{{q_{{b}}}}) = \overline{D}({{q_{{a}}}}) + \lambda\overline{D}({{q_{{b}}}})$$.

Let us now recall the implicit representation technique, as well as the implicit control law approach (which decouples the internal variable structure). We need this preliminary material in order to simplify the presentation of our approach to the descriptor variable observation process.

2.1. Implicit representation

In Bonilla et al. (2015a), the authors showed that the state space representation $$\mathfrak{R}^{ss}(A_{q_{{i}}},$$ $$B, C_{q_{{i}}})$$, (2.2) and (2.3), is externally equivalent1 to the following implicit global representation $${\mathfrak{R}^{ig}}({\mathbf{E}}, {\mathbf{A}}_{q_{{i}}}, {\mathbf{B}}, C)$$:

$$\mathbf{E}\frac{\mathrm{d}}{\mathrm{d}t}x=\mathbf{A}_{q_{i}}x+\mathbf{B}u \ \ \text{and} \ \ y=Cx,\quad t\in(T_{i-1},T_{i}),\qquad \mathbf{E}=\begin{bmatrix}E\\0\end{bmatrix},\ \mathbf{A}_{q_{i}}=\begin{bmatrix}A\\D_{q_{i}}\end{bmatrix},\ \mathbf{B}=\begin{bmatrix}B\\0\end{bmatrix}, \qquad(2.4)$$

where $$q_{{i}} \in \mathscr{Q}$$, $$\left[T_{i-1},T_i\right) \in \mathfrak{s}^{-1}(q_{{i}})$$ and the maps $${E}:{{\Bbb{R}}^{n}}\to {{\Bbb{R}}^{\bar{n}}}$$, $${A}:{{\Bbb{R}}^{n}}\to {{\Bbb{R}}^{\bar{n}}}$$, $${C}:{{\Bbb{R}}^{n}}\to {{\Bbb{R}}^{{p}}}$$ are defined as follows:

$$E=[\,\mathrm{I}\ \ 0\,],\quad A=[\,\overline{A}_{0}\ \ -\overline{A}_{1}\,],\quad C=[\,\overline{C}_{0}\ \ -\overline{C}_{1}\,],\quad D_{q_{i}}=[\,\overline{D}(q_{i})\ \ \mathrm{I}\,].$$
(2.5) Note that: $$\mathscr{X}_{d} \approx {\Bbb{R}}^{n}$$, $$\underline{\mathscr{X}}_{eq} \approx {\Bbb{R}}^{\bar{n}}$$, $$\mathscr{U} \approx {\Bbb{R}}^{{m}}$$ and $$\mathscr{Y} \approx {\Bbb{R}}^{{p}}$$; also note that: $${n} = {\bar{n} + \hat{n}}$$. The initial conditions at each interval $$\left[T_{i-1},T_i\right)$$ satisfy: $$\lim_{t\to{T_{i-1}^{+}}}x(t) = x(T_{i-1}) = E_{{q_{{i}}}}^{r} c_i$$, where the matrix $$E_{{q_{{i}}}}^{r}$$ is defined in Bonilla et al. (2015a). The behaviour associated with (2.4) and (2.5) is:

$$\mathfrak{B}_{ig}=\Big\{(q_{i},u(\cdot),y(\cdot))\in\mathscr{Q}\times\mathscr{U}\times\mathscr{Y} \ \Big|\ \exists\, c_{i}\in{\Bbb{R}}^{\bar{n}},\ \mathfrak{s}:\mathcal{I}\to\mathscr{Q},\ \text{s.t.:}$$
$$y(t)=\sum_{i\in{\Bbb{N}}}\mathbf{1}_{\mathscr{I}_{i}}(t)\,C\Big(\exp\big(E_{q_{i}}^{r}A(t-T_{i-1})\big)E_{q_{i}}^{r}c_{i}+\int_{0}^{t-T_{i-1}}\exp\big(E_{q_{i}}^{r}A(t-T_{i-1}-\tau)\big)E_{q_{i}}^{r}Bu(T_{i-1}+\tau)\,\mathrm{d}\tau\Big),$$
$$q_{i}\equiv\mathfrak{s}(\mathscr{I}_{i}),\ \ t\in{\Bbb{R}}^{+},\ \ \mathscr{I}_{i}\in\mathcal{I}\Big\}. \qquad(2.6)$$

From (2.4), we extract the implicit rectangular representation, $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$:

$$E\frac{\mathrm{d}}{\mathrm{d}t}x=Ax+Bu \ \ \text{and} \ \ y=Cx;\quad t\in{\Bbb{R}}^{+}\setminus\{T_{i}\}, \qquad(2.7)$$

where $$\lim_{t\to T_{i-1}^{+}}{x}(t) = E_{{q_{{i}}}}^{r} c_i$$.

Remark 2.1 This implicit rectangular representation characterizes the fixed structure present in all the switching processes.

Remark 2.2 The implicit rectangular representation (2.7) has a physical sense if it has at least one solution. Bonilla et al. (2015a) have shown that the hypothesis:

H3 $${\mathrm{Im}\,}{E} + {\mathrm{Im}\,}{B} = {\Bbb{R}}^{\bar{n}},$$

implies that for any initial condition $$\lim_{t\to0^+}x(t) = x_0 \in {\Bbb{R}}^{n}$$, there exists at least one trajectory $$(u, x) \in \mathcal{C}^{\infty}({\Bbb{R}}^{+}, {\Bbb{R}}^{m}\times{\Bbb{R}}^{n})$$ solution of (2.7); see mainly Geerts (1993) and Aubin & Frankowska (1991), and also the Section Discussion about existence of solution of Bonilla et al. (2013).

Remark 2.3 From (2.4), we extract the algebraic constraints, $$\mathfrak{R}^{alc}(0, D_{q_{{i}}}, 0)$$:

$$0=D_{q_{i}}x,\quad t\in[T_{i-1},T_{i}),\ i\in{\Bbb{N}},\ q_{i}\in\mathscr{Q}. \qquad(2.8)$$

This set of algebraic constraints characterizes the switching process.

2.2. Implicit control law

In Bonilla & Malabre (2003), the authors considered the following problem.
Problem 2.1 (Internal variable structure decoupling problem (IVSDeP); Bonilla & Malabre, 2003) Consider a proper system described by the implicit global representation $${\mathfrak{R}^{ig}}({\mathbf{E}}, {\mathbf{A}}_{q_{{i}}}, {\mathbf{B}}, C)$$ (2.4) such that the geometric conditions:

H4 $${\mathrm{Im}\,} A + {\mathrm{Im}\,} B \subset {\mathrm{Im}\,} E,$$

H5 $${{\Bbb{R}}^{\bar{n} + \hat{n}}} = ({\mathrm{Im}\,}{E} + {\mathrm{Im}\,}{A} + {\mathrm{Im}\,}{B})\oplus {\mathrm{Im}\,}{D_{q_{{i}}}}, \ \forall q_{{i}}\,\in\,\mathscr{Q},$$

H6 $${{\Bbb{R}}^{n}} = {{{\mathrm{ker}}}_{D_{q_{{i}}}}\,} \oplus {{{\mathrm{ker}}}_{E}\,}, \ \forall q_{{i}}\,\in\,\mathscr{Q}$$

are satisfied. Under which conditions does there exist a PD feedback control law $$u = F^{*}_{p}x + F^{*}_{d}{\mathrm{d}{{x}}}/{\mathrm{d}{{t}}}$$ such that the internal structure variation of the closed loop system is made unobservable?

Remark 2.4 Assumption H4, together with Assumptions H1 and H3, implies that there exists a proper solution for any initial condition. Indeed, for any initial condition $$\lim_{t\to0^+}x(t) = x_0 \in {\Bbb{R}}^{n}$$ and any admissible input $$u \in \Bbb{U}$$, one solution of the implicit rectangular representation (2.7) is:

$$x(t)=\exp(E^{r}At)\,x_{0}+\int_{0}^{t}\exp\big(E^{r}A(t-\tau)\big)E^{r}Bu(\tau)\,\mathrm{d}\tau,$$

where $${E}^{r}$$ is a full column rank matrix such that $$({\Pi_{E}}{E}){E}^{r} = {\mathrm{I}}$$, and $${\Pi_{E}}$$ is any natural projection on $${\mathrm{Im}\,}{E}$$ (see the discussion of Definition 2 in Bonilla et al. (2015a)).

Remark 2.5 Assumption H5 guarantees that all the equations of the implicit global representation $${\mathfrak{R}^{ig}}({\mathbf{E}}, {\mathbf{A}}_{q_{{i}}}, {\mathbf{B}}, C)$$ are linearly independent, namely: $${\mathrm{Im}\,}{E} + {\mathrm{Im}\,}{A} + {\mathrm{Im}\,}{B} = {\Bbb{R}}^{\bar{n}}$$ and $${\mathrm{Im}\,}{D_{q_{{i}}}} = {\Bbb{R}}^{\hat{n}}$$.
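The solution formula of Remark 2.4 can be verified numerically; the sketch below (our own check) uses the rectangular pencil of the running example of Section 2.3 together with an obvious right inverse $$E^{r}$$, and confirms that $$x(t) = \exp(E^{r}At)x_{0}$$ satisfies $$E\,\mathrm{d}x/\mathrm{d}t = Ax$$ (with $$u = 0$$):

```python
import numpy as np
from scipy.linalg import expm

# Rectangular pencil of the running example (Section 2.3), d-independent part.
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
A = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0]])
Er = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])          # one right inverse: E Er = I

ok_right_inverse = np.allclose(E @ Er, np.eye(2))

# x(t) = exp(Er A t) x0 solves E dx/dt = A x exactly, since
# E (d/dt) exp(Er A t) x0 = (E Er) A exp(Er A t) x0 = A x(t).
x0 = np.array([1.0, 2.0, 3.0])
residuals = []
for t in np.linspace(0.0, 2.0, 21):
    x = expm(Er @ A * t) @ x0
    residuals.append(np.linalg.norm(E @ (Er @ A) @ x - A @ x))
max_residual = max(residuals)
```

The residual vanishes to machine precision because $$E E^{r} = \mathrm{I}$$ turns the implicit equation into the ordinary one generated by $$E^{r}A$$.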
Moreover, Assumptions H4 and H5 imply that: (i) $${\mathrm{Im}\,}{E} = {\Bbb{R}}^{\bar{n}}$$ and $${\mathrm{Im}\,}{D_{q_{{i}}}} = {\Bbb{R}}^{\hat{n}}$$, and that: (ii) Assumption H3 is satisfied.

Remark 2.6 Assumption H6, together with H4 and H5, guarantees that all the internal structures are proper, namely there exist bases in $${\Bbb{R}}^{n}$$ and in $${\Bbb{R}}^{\bar{n}}\times{\Bbb{R}}^{\hat{n}}$$ such that (2.4) takes the form:

$$\begin{bmatrix}\mathrm{I}&0\\0&0\end{bmatrix}\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\bar{x}\\\hat{x}\end{bmatrix}=\begin{bmatrix}\overline{A}&\widehat{A}\\0&\mathrm{I}\end{bmatrix}\begin{bmatrix}\bar{x}\\\hat{x}\end{bmatrix}+\begin{bmatrix}\overline{B}\\0\end{bmatrix}u,$$

that is to say, the LSS (2.2) only commutes between proper systems (see Theorem 1 in Bonilla et al. (2015a)). The solution of this problem was given by Theorems 14 and 15 of Bonilla & Malabre (2003), using the supremal $$(A, E, B)$$-invariant subspace contained in $${{{\mathrm{ker}}}_{C}\,}$$, $${\mathscr{V}}^{*} = \sup\{ {\mathscr{V}} \subset {{{\mathrm{ker}}}_{C}\,}\ \vert \ A{\mathscr{V}} \subset E{\mathscr{V}} + {\mathrm{Im}\,} B \}$$, which is the limit of the following non-increasing geometric algorithm:

$$\mathscr{V}^{0}={\Bbb{R}}^{n};\quad \mathscr{V}^{\mu+1}={{{\mathrm{ker}}}_{C}\,}\cap A^{-1}\big(E\mathscr{V}^{\mu}+{\mathrm{Im}\,}B\big),\ \mu\geq0. \qquad(2.9)$$

$${\mathscr{V}}^{*}$$ characterizes the maximal part of the implicit representation, $${\mathfrak{R}^{ir}}(E, A, B, C)$$, which can be made unobservable with a suitable proportional and derivative descriptor variable feedback (see Verghese, 1981; Özçaldiran, 1985). The set of pairs $$(F_{p},F_{d})$$ such that $$(A + BF_{p}){{\mathscr{V}}}^{*} \subset (E-BF_{d}){{\mathscr{V}}}^{*}$$ is denoted $$\mathbf{F}({{\mathscr{V}}^{*}})$$. The implicit control law proposed in Bonilla & Malabre (2003) is synthesized in the following three steps (see also Bonilla et al., 2015b): (i) locate the supremal $$(A, E, B)$$-invariant subspace contained in $${{{\mathrm{ker}}}_{C}\,}$$, $${\mathscr{V}}^{*}$$; (ii) to make the internal structure variation unobservable, find a derivative feedback such that:

$$\mathscr{V}^{*}\supset\ker(E-BF_{d}^{*}); \qquad(2.10)$$

and (iii) to decouple the variation of the internal structure at the output, find a proportional feedback such that:

$$(A+BF_{p}^{*})\mathscr{V}^{*}\subset(E-BF_{d}^{*})\mathscr{V}^{*}.$$
(2.11) In order to synthesize the PD feedback,

$$u^{*}=F_{p}^{*}x+F_{d}^{*}\,\mathrm{d}x/\mathrm{d}t+r, \qquad(2.12)$$

the following proper approximation is proposed in Bonilla et al. (2015b):

$$\mathrm{d}\bar{x}/\mathrm{d}t=-(1/\varepsilon)\bar{x}+(1/\varepsilon)F_{d}^{*}x,\qquad u=-(1/\varepsilon)\bar{x}+\big((1/\varepsilon)F_{d}^{*}+F_{p}^{*}\big)x+r. \qquad(2.13)$$

Also, it is proved that there exists $${\varepsilon}^{*} > 0$$ such that:

$$|y(t)-y^{*}(t)|\leq\delta\quad\forall\,\varepsilon\in(0,\varepsilon^{*}],\ \forall\,t\geq t^{*}(\delta), \qquad(2.14)$$

where $$t^{*}(\delta)$$ is a fixed transient time, which depends on the chosen $$\delta$$, $$y^{*}$$ is the output when applying the control law (2.12), $$y$$ is the output when applying the proper approximation (2.13), and the closed loop system is BIBO-stable.

Remark 2.7 As can be shown, the considered proper approximation of the ideal PD control law decouples the variation of the internal structure at the output (i.e., makes it unobservable), and ensures BIBO stability. We shall now proceed to illustrate the internal variable structure decoupling control approach.

2.3. Illustrative example: part 1

In this article, we consider, as case studies, two state space representations, $$\mathfrak{R}^{ss}(A_{q_{{i}}}, B, C_{q_{{i}}})$$, (2.2) and (2.3), where the matrices $$\overline{A}_{0}$$, $$\overline{A}_{1}$$, $$B$$, $$\overline{C}_{0}$$, $$\overline{C}_{1}$$ and $$\overline{D}({q_{{i}}})$$ are given as follows:

$$\overline{A}_{0}=\begin{bmatrix}0&1\\1&0\end{bmatrix},\ \overline{A}_{1}=\begin{bmatrix}1\\1\end{bmatrix},\ B=\begin{bmatrix}0\\1\end{bmatrix},\ \overline{C}_{0}=[\,0\ \ (1-d)\,],\ \overline{C}_{1}=[\,-d\,],\ \overline{D}(q_{i})=[\,q_{1}\ \ q_{2}\,], \qquad(2.15)$$

where

$$q=(q_{1},q_{2})\in\mathscr{Q}=\{q_{a},\,q_{b},\,q_{c}\},\quad q_{a}=(-1,-1),\ q_{b}=(-1,0),\ q_{c}=(-1,-2), \qquad(2.16)$$

and the parameter $$d$$ will take the values $$-1$$ and $$1$$. From (2.3) and (2.15), the matrices $$A_{q_{{i}}}$$, $$B$$ and $$C_{q_{{i}}}$$ take the following form:

$$A_{q_{i}}=\begin{bmatrix}q_{1}&q_{2}+1\\q_{1}+1&q_{2}\end{bmatrix},\quad B=\begin{bmatrix}0\\1\end{bmatrix}\quad\text{and}\quad C_{q_{i}}=[\,-dq_{1}\ \ -(dq_{2}-(1-d))\,]. \qquad(2.17)$$

For a given pair, $$q_{{i}} \in \mathscr{Q}$$, we have the transfer function (cf. (2.5) of Bonilla et al. (2015a)):

$$F_{q_{i}}(\mathrm{s})=C_{q_{i}}(\mathrm{s}\mathrm{I}-A_{q_{i}})^{-1}B=\frac{-(dq_{2}-(1-d))\mathrm{s}-q_{1}}{(\mathrm{s}+1)(\mathrm{s}-(1+q_{1}+q_{2}))}=\begin{cases}\dfrac{(q_{2}+2)\mathrm{s}-q_{1}}{(\mathrm{s}+1)(\mathrm{s}-(1+q_{1}+q_{2}))}, & \text{if } d=-1,\\[2ex]\dfrac{-(q_{2}\mathrm{s}+q_{1})}{(\mathrm{s}+1)(\mathrm{s}-(1+q_{1}+q_{2}))}, & \text{if } d=1.\end{cases}$$
(2.18) For the three possible variants of the index $$q_{{i}} \in \mathscr{Q}$$, we have, for a given $$d \in \{1,\,-1\}$$, the following three possible behaviours:

$$\mathfrak{B}_{q_{a}}^{\infty}=\Big\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{i},{\Bbb{R}}^{2})\cap\ker\big[\,-1\ \ \big(\tfrac{\mathrm{d}}{\mathrm{d}t}+1\big)\,\big]\Big\},$$
$$\mathfrak{B}_{q_{b}}^{\infty}=\Big\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{j},{\Bbb{R}}^{2})\cap\ker\big[\,-\big((1-d)\tfrac{\mathrm{d}}{\mathrm{d}t}+1\big)\ \ \big(\tfrac{\mathrm{d}}{\mathrm{d}t}+1\big)\big(\tfrac{\mathrm{d}}{\mathrm{d}t}\big)\,\big]\Big\},$$
$$\mathfrak{B}_{q_{c}}^{\infty}=\Big\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{k},{\Bbb{R}}^{2})\cap\ker\big[\,-\big((1+d)\tfrac{\mathrm{d}}{\mathrm{d}t}+1\big)\ \ \big(\tfrac{\mathrm{d}}{\mathrm{d}t}+1\big)\big(\tfrac{\mathrm{d}}{\mathrm{d}t}+2\big)\,\big]\Big\} \qquad(2.19)$$

for some disjoint intervals $$\mathscr{I}_{i}=[T_{i-1},\,T_{i})$$, $$\mathscr{I}_{j}=[T_{j-1},\,T_{j})$$, $$\mathscr{I}_{k}=[T_{k-1},\,T_{k})$$, $$i,\,j,\,k \in {\Bbb{N}}$$.

Implicit representation. The time-dependent, autonomous switched system, described by (2.2), (2.16) and (2.17), is also described by the implicit global representation (2.4) with (cf. (4.6) of Bonilla et al. (2015a)):

$$\mathbf{E}=\left[\begin{array}{ccc}1&0&0\\0&1&0\\\hline 0&0&0\end{array}\right],\quad \mathbf{A}_{q_{i}}=\left[\begin{array}{ccc}0&1&-1\\1&0&-1\\\hline q_{1}&q_{2}&1\end{array}\right],\quad \mathbf{B}=\left[\begin{array}{c}0\\1\\\hline 0\end{array}\right],\quad C=[\,0\ \ (1-d)\ \ d\,]. \qquad(2.20)$$

Above the solid line of the differential and algebraic equation set of the implicit global representation (2.4) and (2.20), there is the implicit rectangular representation (2.7) with (cf. (4.7) of Bonilla et al. (2015a)):

$$E=\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix},\quad A=\begin{bmatrix}0&1&-1\\1&0&-1\end{bmatrix},\quad B=\begin{bmatrix}0\\1\end{bmatrix},\quad C=[\,0\ \ (1-d)\ \ d\,]. \qquad(2.21)$$

Below the solid line of the differential and algebraic equation set of the implicit global representation (2.4) and (2.20), there is the algebraic constraint (2.8) with (cf. (4.8) of Bonilla et al. (2015a)):

$$D_{q_{i}}=[\,q_{1}\ \ q_{2}\ \ 1\,]. \qquad(2.22)$$

When splitting the implicit global representation (2.4) and (2.20) into the rectangular implicit description (2.7) and (2.21) and into the algebraic constraints (2.8) and (2.22), we get the fixed active structure of the system2, represented by (2.7) and (2.21).

Internal variable structure. In order to have a better understanding of the way that the implicit rectangular representation $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$ takes into account the so-called internal structure variation, in Bonilla et al.
(2015a), the Kronecker normal forms (see Gantmacher, 1977) of the pencils associated with the implicit global representation $${\mathfrak{R}^{ig}}({\mathbf{E}}, {\mathbf{A}}_{q_{{i}}}, {\mathbf{B}}, C)$$, (2.4) and (2.20), are compared with the Kronecker normal forms of the pencils associated with the implicit rectangular representation $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$, (2.7) and (2.21). It is shown there, for our illustrative example, that3 (ied: infinite elementary divisor; fed: finite elementary divisor; cmi: column minimal index): The associated Kronecker blocks of $$[\,{\mathrm{s}}{\mathbf{E}}-{\mathbf{A}}_{q_{{i}}}\,]$$ are as follows (cf. (2.16) and (2.18)): (a) for $${{q_{{2}}}} = -1$$, there are one ied, $$[\, 1\,]$$, and two fed, $$[\, {\mathrm{s}}-{{q_{{1}}}}\,]$$ and $$[\, {\mathrm{s}}+1\,]$$; (b) for $${{q_{{2}}}} \neq -1$$ and $${{q_{{1}}}} + {{q_{{2}}}} = -2$$, there are one ied, $$[\, 1\,]$$, and one fed, $$[\, ({\mathrm{s}}+1)^{2}\,]$$; (c) for $${{q_{{2}}}} \neq -1$$ and $${{q_{{1}}}} + {{q_{{2}}}} \neq -2$$, there are one ied, $$[\, 1\,]$$, and two fed, $$[\, {\mathrm{s}}-1-{{q_{{1}}}}-{{q_{{2}}}}\,]$$ and $$[\, {\mathrm{s}}+1\,]$$. The associated Kronecker blocks of $$[\,{\mathrm{s}}{E}-{A}\,]$$ are one cmi of degree $$1$$ and one fed, $$[\, {\mathrm{s}}+1\,]$$. From this, we realize that the internal variable structure of $${\mathfrak{R}^{ig}}({\mathbf{E}}, {\mathbf{A}}_{q_{{i}}}, {\mathbf{B}}, C)$$, (2.4) and (2.20), is mainly characterized by the invariant column minimal index block in $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$, (2.7) and (2.21); see the discussion of Example 4 in Bonilla et al. (2015a).

Implicit control law. To obtain the ideal PD feedback (2.12), we need $${\mathscr{V}}^{*}$$. From (2.9) and (2.21), we get:

$$\mathscr{V}^{*}={{{\mathrm{ker}}}_{C}\,}=\mathrm{span}\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\ \begin{bmatrix}0\\d\\-(1-d)\end{bmatrix}\right\}.$$

In order to satisfy (2.10), the derivative part of the control law has to contain the term $$F_{d}^{*}=\big[\,0\ \ d\ \ -d\,\big]$$. Indeed:

$$E-BF_{d}^{*}=\begin{bmatrix}1&0&0\\0&1-d&d\end{bmatrix}\ \Rightarrow\ \ker(E-BF_{d}^{*})=\mathrm{span}\left\{\begin{bmatrix}0\\d\\-(1-d)\end{bmatrix}\right\}\subset\mathscr{V}^{*}.$$

In order to satisfy (2.11), the proportional part of the control law has to contain the term $$F_{p}^{*}=[\,-1\ \ -(1-d)\tau\ \ (1-d\tau)\,]$$, where $$\tau$$ is a positive real number.
Indeed: $$(A+BF_{p}^{*}){\mathscr{V}}^{*}=\mathrm{span}\{(1,\,0)^{\top}\}=(E-BF_{d}^{*}){\mathscr{V}}^{*}$$. Thus, the proportional and derivative feedback is (cf. (4.5) of Bonilla et al. (2015b)):

$$u^{*}=[\,-1\ \ -(1-d)\tau\ \ (1-d\tau)\,]\,x+[\,0\ \ d\ \ -d\,]\frac{\mathrm{d}}{\mathrm{d}t}x+\frac{1}{\tau}\,r. \qquad(2.23)$$

Applying the proportional and derivative feedback (2.23) to (2.4) and (2.20), we get the closed loop system described by the implicit global representation (2.24) (cf. (4.6) of Bonilla et al. (2015b)). The proper approximation (2.13) is:

$$\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon}x_{4}+\frac{1}{\varepsilon}[\,0\ \ d\ \ -d\,]x,\qquad u=-\frac{1}{\varepsilon}x_{4}+\Big[\,-1\ \ \Big(\frac{d}{\varepsilon}-(1-d)\tau\Big)\ \ \Big(1-d\tau-\frac{d}{\varepsilon}\Big)\Big]x+\frac{1}{\tau}\,r, \qquad(2.25)$$

where $$\varepsilon$$ is a given, sufficiently small positive constant. See Theorem 3 of Bonilla et al. (2015b) for stability details. It is now time to deal with the descriptor variable observation of rectangular implicit representations.

3. Descriptor variable observation

When one synthesizes controllers for systems represented by means of implicit descriptions, a natural question is: if the descriptor variable $$x$$ is not accessible, how can we estimate it? The case of regular descriptions (where there are only finite and infinite elementary divisors blocks; see Gantmacher, 1977) has been widely studied (see, e.g., Dai, 1989; Duan, 2010). Recently, Berger & Reis (2015) (see also Trentelman et al., 2001; Berger & Reis, 2013; Berger et al., 2016, and the references cited in those articles) have tackled the observation problem for general implicit descriptions. With respect to observer synthesis for implicit descriptions having column minimal indices blocks, there are some structural aspects which have to be taken into account; these kinds of blocks introduce into the implicit representation a non-uniqueness of solutions (see, e.g., Gantmacher, 1977; Lebret & Loiseau, 1994). This non-uniqueness reflects a degree-of-freedom, which enables one to describe variable structure systems (see Bonilla et al., 2015a, 2015b). But, as concerns the realization of observers, some particular structural phenomena appear which have to be carefully studied.
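Geometric recursions like (2.9) (and its observation counterpart (3.6) below) are easy to prototype with standard linear algebra; the following sketch (our own implementation, with subspaces represented by orthonormal basis matrices and a hard-coded numerical tolerance) runs the recursion (2.9) on the example data of Section 2.3 with $$d=-1$$:

```python
import numpy as np
from scipy.linalg import null_space

def image(M, tol=1e-9):
    """Orthonormal basis of the column space of M."""
    if M.size == 0:
        return np.zeros((M.shape[0], 0))
    U, s, _ = np.linalg.svd(M)
    return U[:, : int((s > tol).sum())]

def intersect(X, Y):
    """Basis of Im X ∩ Im Y (null-space trick on [X, -Y])."""
    if X.shape[1] == 0 or Y.shape[1] == 0:
        return np.zeros((X.shape[0], 0))
    N = null_space(np.hstack([X, -Y]))
    return image(X @ N[: X.shape[1], :])

def preimage(A, S, tol=1e-9):
    """Basis of A^{-1}(Im S) = {x : A x in Im S}."""
    M = (np.eye(A.shape[0]) - S @ S.T) @ A
    M[np.abs(M) < tol] = 0.0   # suppress round-off before extracting the null space
    return null_space(M)

def v_star(E, A, B, C):
    """Limit of (2.9): V^0 = R^n, V^{mu+1} = ker C ∩ A^{-1}(E V^mu + Im B)."""
    V = np.eye(A.shape[1])
    while True:
        Vn = intersect(null_space(C), preimage(A, image(np.hstack([E @ V, B]))))
        if Vn.shape[1] == V.shape[1]:
            return Vn
        V = Vn

# Example data of Section 2.3 with d = -1.
E = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A = np.array([[0.0, 1.0, -1.0], [1.0, 0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 2.0, -1.0]])

V = v_star(E, A, B, C)
dim_v_star = V.shape[1]
in_ker_C = np.allclose(C @ V, 0.0, atol=1e-8)
```

For this data the recursion stabilizes at a two-dimensional subspace contained in $${\ker}\,C$$, consistent with the $${\mathscr{V}}^{*}$$ used in the control synthesis of Section 2.3.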
In this section, based on a structural study, we propose two different procedures for observing the descriptor variable in the presence of column minimal indices blocks, namely: (i) linear descriptor observers based on fault detection techniques and (ii) indirect variable descriptor observers based on finite-time structure detection techniques.

3.1. Linear descriptor observer

In order to motivate this discussion, let us consider the following illustrative example.

Illustrative example, part 2: Let us continue the illustrative example of Section 2.3. Let us consider the implicit rectangular representation (2.7) and (2.21), $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$:

$$\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}x_{c}\\x_{\ell}\end{bmatrix}=\begin{bmatrix}0&1&-1\\1&0&-1\end{bmatrix}\begin{bmatrix}x_{c}\\x_{\ell}\end{bmatrix}+\begin{bmatrix}0\\1\end{bmatrix}u,\qquad y=[\,0\ \ (1-d)\ \ d\,]\begin{bmatrix}x_{c}\\x_{\ell}\end{bmatrix},$$

which is also equivalent to the state space representation $$\mathfrak{R}^{ss}({A}_{r}, [{{\it \Phi}}_{r} \,|\, B_{r}], C_{r}, [{D}_{r} \,|\, 0])$$:

$$\mathrm{d}x_{c}/\mathrm{d}t=\begin{bmatrix}0&1\\1&0\end{bmatrix}x_{c}+\begin{bmatrix}-1&0\\-1&1\end{bmatrix}\begin{bmatrix}x_{\ell}\\u\end{bmatrix},\qquad y=[\,0\ \ (1-d)\,]x_{c}+[\,d\ \ 0\,]\begin{bmatrix}x_{\ell}\\u\end{bmatrix} \qquad(3.1)$$

with the transfer functions:

$$T_{u}^{y}=\frac{(1-d)\mathrm{s}}{(\mathrm{s}+1)(\mathrm{s}-1)}\quad\text{and}\quad T_{x_{\ell}}^{y}=\frac{(d\mathrm{s}-1)(\mathrm{s}+1)}{(\mathrm{s}+1)(\mathrm{s}-1)}. \qquad(3.2)$$

Doing the change of variable:

$$\begin{bmatrix}x_{c_{\bar\circ}}\\x_{\ell_{\bar\circ}}\end{bmatrix}=\widetilde{T}^{-1}x,\quad\text{where (recall that } d \neq 0\text{):}\quad \widetilde{T}=\begin{bmatrix}1&0&0\\0&1&0\\0&(d-1)/d&1\end{bmatrix},$$

we get the implicit rectangular representation $$\mathfrak{R}^{ir}({E{\widetilde{T}}}, {A{\widetilde{T}}}, {B}, C{\widetilde{T}})$$:

$$\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}x_{c_{\bar\circ}}\\x_{\ell_{\bar\circ}}\end{bmatrix}=\begin{bmatrix}0&1/d&-1\\1&(1/d-1)&-1\end{bmatrix}\begin{bmatrix}x_{c_{\bar\circ}}\\x_{\ell_{\bar\circ}}\end{bmatrix}+\begin{bmatrix}0\\1\end{bmatrix}u,\qquad y=[\,0\ \ 0\ \ d\,]\begin{bmatrix}x_{c_{\bar\circ}}\\x_{\ell_{\bar\circ}}\end{bmatrix},$$

which is also equivalent to the state space representation $$\mathfrak{R}^{ss}({A}_{r_{\bar\circ}}, [{{\it \Phi}}_{r_{\bar\circ}} \,|\, {B}_{r_{\bar\circ}}], {C}_{r_{\bar\circ}}, [{D}_{r_{\bar\circ}} \,|\, 0])$$:

$$\mathrm{d}x_{c_{\bar\circ}}/\mathrm{d}t=\begin{bmatrix}0&1/d\\1&(1/d-1)\end{bmatrix}x_{c_{\bar\circ}}+\begin{bmatrix}-1&0\\-1&1\end{bmatrix}\begin{bmatrix}x_{\ell_{\bar\circ}}\\u\end{bmatrix},\qquad y=[\,0\ \ 0\,]x_{c_{\bar\circ}}+[\,d\ \ 0\,]\begin{bmatrix}x_{\ell_{\bar\circ}}\\u\end{bmatrix} \qquad(3.3)$$

with the transfer functions:

$$\widetilde{T}_{u}^{y}=0\quad\text{and}\quad\widetilde{T}_{x_{\ell_{\bar\circ}}}^{y}=d. \qquad(3.4)$$

Now, let us note that:

$$\det\begin{bmatrix}-\mathrm{s}E+A\\C\end{bmatrix}=\det\begin{bmatrix}-\mathrm{s}E\widetilde{T}+A\widetilde{T}\\C\widetilde{T}\end{bmatrix}=\det\begin{bmatrix}-\mathrm{s}&1/d&-1\\1&-(\mathrm{s}-1/d+1)&-1\\0&0&d\end{bmatrix}=d(\mathrm{s}+1)(\mathrm{s}-1/d).$$
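These computations can be spot-checked numerically; the sketch below (our own sanity check, with $$d=-1$$ and arbitrary test frequencies) evaluates the transfer functions of (3.1) and the pencil determinant above:

```python
import numpy as np

d = -1.0

# State-space data of (3.1): dx_c/dt = Ar x_c + [Phi_r | Br][x_l; u],
#                            y = Cr x_c + [Dr | 0][x_l; u].
Ar = np.array([[0.0, 1.0], [1.0, 0.0]])
Phir = np.array([[-1.0], [-1.0]])
Br = np.array([[0.0], [1.0]])
Cr = np.array([[0.0, 1.0 - d]])
Dr = d

def T_uy(s):
    return (Cr @ np.linalg.inv(s * np.eye(2) - Ar) @ Br)[0, 0]

def T_xly(s):
    return (Cr @ np.linalg.inv(s * np.eye(2) - Ar) @ Phir)[0, 0] + Dr

# Rectangular pencil data (2.21) for the determinant identity.
E = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A = np.array([[0.0, 1.0, -1.0], [1.0, 0.0, -1.0]])
C = np.array([[0.0, 1.0 - d, d]])

checks = []
for s in (2.0, -3.0, 0.5):   # test points away from the poles at +1 and -1
    checks.append(np.isclose(T_uy(s), (1.0 - d) * s / ((s + 1.0) * (s - 1.0))))
    checks.append(np.isclose(T_xly(s), (d * s - 1.0) / (s - 1.0)))
    M = np.vstack([-s * E + A, C])
    checks.append(np.isclose(np.linalg.det(M), d * (s + 1.0) * (s - 1.0 / d)))
all_ok = all(checks)
```

The second transfer function is evaluated in its reduced form $$(d\mathrm{s}-1)/(\mathrm{s}-1)$$, after cancelling the common factor $$(\mathrm{s}+1)$$ of (3.2).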
We then have from Theorem 3.5 of Berger & Reis (2015) that for the implicit rectangular representation $$\mathfrak{R}^{ir}({E}, {A}, {B}, C)$$ (equivalently, for $$\mathfrak{R}^{ir}({E\widetilde{T}}, {A\widetilde{T}}, {B}, C\widetilde{T})$$): there exists an observer, there exists an asymptotic observer (when $$d<0$$) and there does not exist an exact observer. Now the question is: how to synthesize such an observer? If we tried to synthesize the observer with the state space representation (3.3) (see also (3.4)), we certainly could not estimate the descriptor variable component $$x_{c_{\bar\circ}}$$. And if we tried to synthesize the observer with the state space representation (3.1), we could get $${x}_{\ell}$$ by means of the inverse (only if $$d<0$$): $$\displaystyle T_{y}^{{x}_{\ell}} = \frac{({\mathrm{s}}+1)({\mathrm{s}}-1)}{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}$$, and then $${x}_{c}$$ can be obtained with a Luenberger observer. From this illustrative example, we realize that for implicit rectangular representations, it is important to select an adequate descriptor variable basis when trying to synthesize descriptor variable observers. Hereafter, we show how to perform a descriptor variable space decomposition which enables the synthesis of descriptor variable observers.

3.1.1 Observable decomposition

Let us assume that the implicit representation (1.1) satisfies Assumptions H4, H5, and:

H7 $${\ker}\,{C}\cap{\ker}\,{E} = \{ 0 \}$$,

H8 There exists a complementary subspace $$\mathscr{X}_{c}$$ of $${\ker}\,{E}$$ such that its supremal $$(A, E)$$-invariant subspace contained in $$\mathscr{X}_{c}\cap{{\ker}\,{C}}$$,

$$\mathscr{N}_{\mathscr{X}_{c}}^{*}=\sup\{\mathscr{N}\subset\mathscr{X}_{c}\cap\ker C \ \vert\ A\mathscr{N}\subset E\mathscr{N}\}, \qquad(3.5)$$

is null, where $$\mathscr{N}_{\mathscr{X}_{c}}^{*}$$ is the limit of the following non-increasing algorithm (see Verghese, 1981; Malabre, 1987; Lewis, 1992):

$$\mathscr{V}_{0}^{0}=\mathscr{X}_{c};\quad \mathscr{V}_{0}^{\mu+1}=(\mathscr{X}_{c}\cap\ker C)\cap A^{-1}\big(E\mathscr{V}_{0}^{\mu}\big),\ \mu\geq0.$$
(3.6) H9 The pencil $$\left[\begin{smallmatrix}\mathrm{s}E-A\\C\end{smallmatrix}\right]$$ has constant rank for all $${\mathrm{s}} \in \Bbb{C}_{bad}$$, where $$ \Bbb{C}_{bad} = \{ {\mathrm{s}} \in \Bbb{C} : \ {\Re}e \,{\mathrm{s}} \geq 0 \} $$.

Remark 3.1 Assumptions H4 and H5 imply that there exists a complementary subspace of $${\ker}\,{E}$$ such that (1.1) takes the following form (cf. Remarks 2.4 and 2.5):

$$[\,\mathrm{I}\ \ 0\,]\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\bar{x}_{c}\\\bar{x}_{\ell}\end{bmatrix}=[\,\overline{A}_{r}\ \ \overline{{\it\Phi}}_{r}\,]\begin{bmatrix}\bar{x}_{c}\\\bar{x}_{\ell}\end{bmatrix}+\overline{B}_{r}u,\qquad y=[\,\overline{C}_{r}\ \ \overline{D}_{r}\,]\begin{bmatrix}\bar{x}_{c}\\\bar{x}_{\ell}\end{bmatrix}. \qquad(3.7)$$

This implicit rectangular representation has precisely the form of the class of implicit rectangular representations which we want to observe, (2.7) and (2.5).

Remark 3.2 Hypothesis H7 states that the variation of structure, characterized by $${\ker}\,{E}$$, is observable. This assumption is also known as observability at infinity (see, for example, Kuijper & Schumacher, 1992; Bonilla & Malabre, 1995; Berger et al., 2016 (their Proposition 6.1)).

Remark 3.3 Hypothesis H8 states that there exists a complementary observable subspace $$\mathscr{X}_{c}$$ of $${\ker}\,{E}$$, namely: $$\exists\, \mathscr{X}_{c} \subset \mathscr{X}_{d}:$$ $$\mathscr{X}_{d} = \mathscr{X}_{c} \oplus {\ker}\,{E}$$ and $$\mathscr{N}_{\mathscr{X}_{c}}^{*} = \{ 0 \}$$.

Remark 3.4 Hypothesis H9 states that the output decoupling zeros of (1.1) are Hurwitz (see Verghese, 1981; Aling & Schumacher, 1984). This hypothesis also guarantees the existence of an asymptotic observer (cf. Theorem 3.5 of Berger & Reis (2015)).

Let us first find an observable decomposition for the implicit representation (1.1).

Theorem 3.1 Under Assumptions H4, H5, H7–H9, there exists a complementary observable subspace $$\mathscr{X}_{c}$$ of $${\ker}\,{E}$$, such that under the geometric decomposition $$\mathscr{X}_{d} = \mathscr{X}_{c}\oplus{\ker}\,{E}$$, (1.1) takes the form (3.7), or equivalently, $$\mathfrak{R}^{ss}(\overline{A}_{r},\,{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$:

$$\mathrm{d}\bar{x}_{c}/\mathrm{d}t=\overline{A}_{r}\bar{x}_{c}+\overline{B}_{r}u+\overline{{\it\Phi}}_{r}\bar{x}_{\ell},\qquad y=\overline{C}_{r}\bar{x}_{c}+\overline{D}_{r}\bar{x}_{\ell}.$$
(3.8) Moreover,

$$\ker\overline{D}_{r}=\{0\}, \qquad(3.9)$$

$$\text{the pair } (\overline{C}_{r},\,\overline{A}_{r}) \text{ is observable}, \qquad(3.10)$$

$$\begin{bmatrix}(\mathrm{s}\mathrm{I}-\overline{A}_{r})&-\overline{{\it\Phi}}_{r}\\-\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}\ \text{has constant rank for all } \mathrm{s}\in\Bbb{C}_{bad}. \qquad(3.11)$$

Remark 3.5 Theorem 3.1 is important since it enables us to find an observable state space representation (3.8). The free descriptor variable $${\bar{x}_{\ell}}$$ is handled as an apparent failure signal, which will be reconstructed by means of fault detection techniques. Let us also note that condition (3.11) states the minimum phase nature of the transfer between the free descriptor variable, $$\bar{x}_{\ell}$$, and the output, $$y$$.

Remark 3.6 Let us note that the condition $${\ker}\,{\overline{D}_{r}} = \{ 0 \}$$ implies the existence of a left inverse, $$\overline{D}_{r}^{\ell}$$, of $$\overline{D}_{r}$$; thus the description $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$ takes the form:

$$\mathrm{d}\bar{x}_{c}/\mathrm{d}t=\big(\overline{A}_{r}-\overline{{\it\Phi}}_{r}(\overline{D}_{r}^{\ell}\overline{C}_{r})\big)\bar{x}_{c}+[\,\overline{B}_{r}\ \ \overline{{\it\Phi}}_{r}\overline{D}_{r}^{\ell}\,]\begin{bmatrix}u\\y\end{bmatrix},\qquad \bar{x}_{\ell}=-(\overline{D}_{r}^{\ell}\overline{C}_{r})\bar{x}_{c}+[\,0\ \ \overline{D}_{r}^{\ell}\,]\begin{bmatrix}u\\y\end{bmatrix}.$$

Let us assume that the pair $$\left(\left(\overline{A}_{r} - \overline{{\it \Phi}}_{r}\,(\overline{D}_{r}^{\ell}\overline{C}_{r}) \right),\,(\overline{D}_{r}^{\ell}\overline{C}_{r})\right)$$ is observable; then one could synthesize a standard observer for $$\bar{x}_{c}$$ under the knowledge of the signals $$u$$, $$y$$ and $$\bar{x}_{\ell}$$, but one wants to observe the whole descriptor variable $$x$$. Of course, the knowledge of either component, $$\bar{x}_{c}$$ or $$\bar{x}_{\ell}$$, determines the knowledge of the other. In fact, in Section 3.1.2, we first proceed to estimate $$\bar{x}_{\ell}$$, and in Section 3.1.3 we then observe $$\bar{x}_{c}$$.

In order to prove Theorem 3.1, we need the following two technical lemmas, proved in the appendix.

Lemma 3.1 Let $$\mathscr{X}_{o}$$ be any complementary subspace of $${\ker}\,{E}$$, and let $$V: \mathscr{X}_{o} \to \mathscr{X}_{d}$$ be its insertion map.
Then the supremal $$(A, E)$$-invariant subspace contained in $$\mathscr{X}_{o}\cap{{\ker}\,{C}}$$, $$\mathscr{N}_{\mathscr{X}_{o}}^{*}$$, is also geometrically characterized by:

$$\mathscr{N}_{\mathscr{X}_{o}}^{*}=\sup\{\overline{\mathscr{N}}\subset\ker\overline{C}\ \vert\ \overline{A}\,\overline{\mathscr{N}}\subset\overline{\mathscr{N}}\}, \qquad(3.12)$$

where $$\overline{C} = C\, V$$ and $$\overline{A} = A\, V$$.

Lemma 3.2 The following three statements are equivalent:

$$\exists\,\mathscr{X}_{c}\subset\mathscr{X}_{d}:\ \mathscr{X}_{d}=\mathscr{X}_{c}\oplus\ker E\ \ \text{and}\ \ \mathscr{N}_{\mathscr{X}_{c}}^{*}=\{0\}, \qquad(3.13)$$

$$\exists\, E^{r}\in{\Bbb{R}}^{n\times\bar{n}}:\ EE^{r}=\mathrm{I}\ \ \text{and}\ \ \ker\begin{bmatrix}\mathrm{s}E-A\\-C\end{bmatrix}E^{r}=\{0\}\ \ \forall\,\mathrm{s}\in\Bbb{C}, \qquad(3.14)$$

$$\exists\, K\in{\Bbb{R}}^{(n-\bar{n})\times n}:\ \ker\begin{bmatrix}E\\K\end{bmatrix}=\{0\}\ \ \text{and}\ \ \ker\begin{bmatrix}\overline{Y}_{0}\\\overline{Y}_{1}\\\vdots\\\overline{Y}_{k}\end{bmatrix}=\{0\}, \qquad(3.15)$$

where $${E}^{r}$$ is a right inverse (in the given basis) of the insertion map $$V$$ of Lemma 3.1, $$K$$ is any matrix such that $$\left[\begin{smallmatrix}E\\K\end{smallmatrix}\right]$$ is invertible, and the matrices $$\overline{Y}_{0}$$, $$\overline{Y}_{1}$$, $$\dots$$, $$\overline{Y}_{k}$$ are obtained with the Lewis structure algorithm (LSA) (see Lewis & Beauchamp, 1987 and Fig. 3 of Bonilla & Malabre, 1997) (3.16), where $$\mathbf{T}_{k}^{o}$$ is a maximal row compression, i.e., an invertible matrix for getting the surjective matrix $$\mathbf{X}_{k+1}$$. Moreover,

$$\mathscr{X}_{c}={\mathrm{Im}\,}V=\ker K. \qquad(3.17)$$

Proof of Theorem 3.1. Assumption H7 implies (3.9). From H8 and Lemma 3.2, we deduce (3.10). Statement (3.11) directly follows from (3.7) and H9. □

Illustrative example, part 3: Let us continue the illustrative example of Section 2.3. From (2.21), we deduce that:

$$\begin{bmatrix}E\\K\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\a&b&1\end{bmatrix},$$

where $$a$$ and $$b$$ are two real numbers to be found, such that (3.15) is satisfied. Applying the LSA (3.16) to (2.21), with the maximal row compression

$$\mathbf{T}_{1}^{o}=\begin{bmatrix}1&0&0&0\\0&1&0&0\\-a&-b&0&1\\ad&((1+b)d-1)&1&-d\end{bmatrix},$$

we get:

$$\det V_{1}=\det\begin{bmatrix}\overline{Y}_{0}\\\overline{Y}_{1}\end{bmatrix}=\begin{vmatrix}0&(1-d)&d\\a&b&1\\((1+b)d-1)&ad&(1-(1+a+b)d)\end{vmatrix}=((1+a+b)d-1)(a-(1+b)d+1).$$

Condition (3.15) is then satisfied when: $$(1 + a + b)\,d \neq 1$$ and $$a - (1 + b)\,d \neq -1$$. In order to get $$\det{V}_{1} = -1$$, let us choose $$(a,\,b) = (0,\,-1)$$; then (recall (3.17) and (2.21)):

$$K=[\,0\ \ -1\ \ 1\,]\ \Rightarrow\ \mathscr{X}_{c}=\ker K=\mathrm{span}\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\1\end{bmatrix}\right\},\qquad \ker E=\mathrm{span}\left\{\begin{bmatrix}0\\0\\1\end{bmatrix}\right\}. \qquad(3.18)$$

Based on (3.18), let us do the following change of variable:

$$\begin{bmatrix}\bar{x}_{c}\\\bar{x}_{\ell}\end{bmatrix}=\overline{T}^{-1}x,\quad\text{where:}\quad\overline{T}=\begin{bmatrix}1&0&0\\0&1&0\\0&1&1\end{bmatrix}.$$
(3.19) We then get (recall (2.7) and (2.21), and compare with (3.7)):
$$E\overline{T}=\left[\begin{array}{cc}{\mathrm{I}}&0\end{array}\right],\quad A\overline{T}=\left[\begin{array}{cc}\overline{A}_{r}&\overline{{\it \Phi}}_{r}\end{array}\right]=\left[\begin{array}{ccc} 0&0&-1\\ 1&-1&-1\end{array}\right],\quad \overline{B}_{r}=B=\left[\begin{array}{c} 0\\ 1\end{array}\right],\quad C\overline{T}=\left[\begin{array}{cc}\overline{C}_{r}&\overline{D}_{r}\end{array}\right]=\left[\begin{array}{ccc} 0&1&d\end{array}\right].$$ (3.20)
Let us note that:
$$\overline{D}_{r}^{\ell}=\overline{D}_{r}^{-1}=1/d,\quad \det\left[\begin{array}{c}\overline{C}_{r}\\ \overline{C}_{r}\overline{A}_{r}\end{array}\right]=\det\left[\begin{array}{cc} 0&1\\ 1&-1\end{array}\right]=-1,\quad \det\left[\begin{array}{cc} s{\mathrm{I}}-\overline{A}_{r}&-\overline{{\it \Phi}}_{r}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right]=\det\left[\begin{array}{ccc} s&0&1\\ -1&s+1&1\\ 0&-1&-d\end{array}\right]=-(d{\mathrm{s}}-1)({\mathrm{s}}+1).$$ (3.21)
Also, the transfer functions related to the state space realization $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,\overline{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$ are:
$$T_{u}^{y}=\frac{{\mathrm{s}}}{{\mathrm{s}}({\mathrm{s}}+1)}\quad\text{and}\quad T_{\bar{x}_{\ell}}^{y}=\frac{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}{{\mathrm{s}}({\mathrm{s}}+1)}.$$ (3.22)
Compare with (3.2) and (3.4). From (3.21), we note that: (i) the state space realization $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,\overline{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$, (3.8) and (3.20), is observable and (ii) it has a non-Hurwitz zero when $$d > 0$$. From (3.22), we see that, in the case $$d<0$$, $$\bar{x}_{\ell}$$ can be obtained by means of the inverse: $$\displaystyle T_{y}^{\bar{x}_{\ell}} = \frac{{\mathrm{s}}\,({\mathrm{s}}+1)}{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}$$, and $$\bar{x}_{c}$$ is then obtained with a Luenberger observer. Remark 3.7 Let us note that Hypothesis H8 is neither always satisfied nor implied by the other assumptions. Indeed, let us consider the following example:
$$\left[\begin{array}{ccc} 1&0&0\\ 0&1&0\end{array}\right]\frac{\mathrm{d}}{\mathrm{d}t}\left[\begin{array}{c} x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{array}\right]=\left[\begin{array}{ccc} p_{1}&0&\beta\\ 0&p_{2}&\beta\end{array}\right]\left[\begin{array}{c} x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{array}\right]+\left[\begin{array}{c} 1\\ 1\end{array}\right]u,\qquad y=\left[\begin{array}{ccc} 1&0&\alpha\end{array}\right]\left[\begin{array}{c} x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{array}\right].$$
In order to satisfy Hypothesis H7, $${\ker}\,{C}\cap{\ker}\,{E} = \{ 0 \}$$, we need $${\alpha} \neq 0$$. For testing (3.15), let us take: and let us apply the Lewis structure algorithm (3.16):
$$\det{V}_{1}=\det\left[\begin{array}{c}\overline{Y}_{0}\\ \overline{Y}_{1}\end{array}\right]=\left|\begin{array}{ccc} 1&0&\alpha\\ a&b&1\\ -p_{1}(a-\alpha^{-1})&-b\,p_{2}&-\beta(b+a-\alpha^{-1})\end{array}\right|=\big(\beta-\alpha(p_{1}-p_{2})-(a+b)\,\alpha\beta+\alpha^{2}a\,(p_{1}-p_{2})\big)\,(b/\alpha),$$
thus $$b \neq 0$$; let us choose $$b = 1$$. Let us distinguish two cases:
$$\left\{\begin{array}{ll}\beta=0: & \det{V}_{1}=\alpha\,(a\alpha-1)(p_{1}-p_{2}),\\ \beta\neq0: & \det{V}_{1}=-\alpha\beta\neq0,\ \ \text{with:}\ a=\alpha^{-1}.\end{array}\right.$$
We then conclude that, for the case $${\beta} = 0$$, there is a change of basis making the proper part observable only if $$p_{1} \neq p_{2}$$. 3.1.2.
Free descriptor variable observation Since the state representation (3.8) has the standard form used in fault detection, we have the following Beard–Jones filter for (3.8) (cf. Willsky, 1976; Isermann, 1984; Saberi et al., 2000):
$$\begin{array}{rl}{\mathrm{d}}w/{\mathrm{d}t}&=(\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})\,w-\overline{K}_{r}\,y+\overline{B}_{r}\,u,\\ \hat{y}&=\overline{C}_{r}\,w,\end{array}$$ (3.23)
where $$\overline{K}_{r}$$ is an output injection to be computed. From (3.8) and (3.23), we get the following remainder generator:
$$\begin{array}{rl}{\mathrm{d}}{e_r}/{\mathrm{d}t}&=\overline{A}_{\overline{K}_{r}}\,e_{r}-\overline{{\it \Phi}}_{\overline{K}_{r}}\,\bar{x}_{\ell},\\ r&=\overline{C}_{r}\,e_{r}-\overline{D}_{r}\,\bar{x}_{\ell},\end{array}$$ (3.24)
where $${e_r} = w - {\bar{x}_{c}}$$, $$r = \hat{y} - y$$, and:
$$\overline{A}_{\overline{K}_{r}}=\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r}\quad\text{and}\quad\overline{{\it \Phi}}_{\overline{K}_{r}}=\overline{{\it \Phi}}_{r}+\overline{K}_{r}\overline{D}_{r}.$$ (3.25)
Let us note that: The observability of the pair $$(\overline{C}_{r},\, \overline{A}_{r})$$ (cf. (3.10)) implies the observability of the pair $$(\overline{C}_{r},\, \overline{A}_{\overline{K}_{r}})$$ (see Kailath, 1980; Wonham, 1985; Polderman & Willems, 1998). The minimum phase condition (3.11), on the output decoupling zeros of the implicit representation (3.7), implies that the transmission zeros of the state space representation (3.24) are also Hurwitz (see Verghese, 1981; Aling & Schumacher, 1984). The monic condition (3.9) implies that the relative degree of (3.24) is zero. It also implies the existence of a left inverse $$\overline{D}_{r}^{\ell}$$ of $$\overline{D}_{r}$$, namely: $$\overline{D}_{r}^{\ell}\, \overline{D}_{r} = {\mathrm{I}}$$. Theorem 3.2 If
$$\overline{K}_{r}=-\overline{{\it \Phi}}_{r}\,\overline{D}_{r}^{\ell},$$ (3.26)
then
$$\mathrm{rank}\left[\begin{array}{cc}(s{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right]=\mathrm{rank}\,(s{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})+(n-\bar{n}).$$ (3.27)
Moreover,
$$\|\hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\|\leq k_{\ell}\,{\mathrm{e}}^{-\alpha_{\ell}t},$$ (3.28)
where
$$\hat{x}_{\ell}(t)=\overline{D}_{r}^{\ell}\,\big(y(t)-\hat{y}(t)\big),$$ (3.29)
$$\alpha_{\ell}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24), and $${k}_{\ell}$$ is a positive constant depending on the initial conditions. Proof of Theorem 3.2. From (3.25) and (3.26), we get:
$$\left[\begin{array}{cc}(s{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right]=\left[\begin{array}{cc}\big(s{\mathrm{I}}-(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r})\big)&0\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right],$$
which implies (3.27).
That is to say, the output injection $$\overline{K}_{r} = - \overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}$$ places all the poles of the remainder generator (3.24) at its Hurwitz transmission zeros. We then get (3.28). □ Theorem 3.2 states that the remainder generator (3.24) is self-inverted by means of a Hurwitz pole-zero cancellation. 3.1.3. State variable observation Once the free part of the descriptor variable, $$\bar{x}_{\ell}$$, which characterizes the variation of structure, has been observed, we only need to synthesize a standard state observer for the auxiliary state representation (recall (3.8)):
$$\begin{array}{rl}{\mathrm{d}}\bar{x}_{c}/{\mathrm{d}t}&=\overline{A}_{r}\,\bar{x}_{c}+\left[\begin{array}{cc}\overline{B}_{r}&\overline{{\it \Phi}}_{r}\end{array}\right]\left[\begin{array}{c}u\\ \hat{x}_{\ell}\end{array}\right],\\ (y-\overline{D}_{r}\hat{x}_{\ell})&=\overline{C}_{r}\,\bar{x}_{c},\end{array}$$
namely (see, e.g., Kailath, 1980; Wonham, 1985; Polderman & Willems, 1998):
$${\mathrm{d}}\hat{x}_{c}/{\mathrm{d}t}=(\overline{A}_{r}+\overline{K}_{o}\overline{C}_{r})\,\hat{x}_{c}-\overline{K}_{o}\,(y-\overline{D}_{r}\hat{x}_{\ell})+\left[\begin{array}{cc}\overline{B}_{r}&\overline{{\it \Phi}}_{r}\end{array}\right]\left[\begin{array}{c}u\\ \hat{x}_{\ell}\end{array}\right],$$ (3.30)
where $$\overline{K}_{o}$$ is an output injection such that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix. 3.1.4. Descriptor variable observation Theorem 3.3 Let us consider the implicit description (3.7), such that (3.9)–(3.11) are satisfied. Let us consider the descriptor variable observer:
$$\begin{array}{rl}{\mathrm{d}}\hat{x}_{c}/{\mathrm{d}t}&=(\overline{A}_{r}+\overline{K}_{o}\overline{C}_{r})\,\hat{x}_{c}-\overline{K}_{o}\,y+\overline{B}_{r}\,u+(\overline{{\it \Phi}}_{r}+\overline{K}_{o}\overline{D}_{r})\,\hat{x}_{\ell},\\ {\mathrm{d}}w/{\mathrm{d}t}&=(\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})\,w-\overline{K}_{r}\,y+\overline{B}_{r}\,u,\\ \hat{x}_{\ell}(t)&=-\overline{D}_{r}^{\ell}\,\overline{C}_{r}\,w+\overline{D}_{r}^{\ell}\,y(t),\end{array}$$ (3.31)
where $$\overline{K}_{r}$$ is defined by (3.26), and $$\overline{K}_{o}$$ is an output injection such that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix. Then
$$\|\hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\|\leq k_{\ell}\,{\mathrm{e}}^{-\alpha_{\ell}t}\quad\text{and}\quad\|\hat{x}_{c}(t)-\bar{x}_{c}(t)\|\leq k_{c}\,{\mathrm{e}}^{-\alpha_{c}t},$$ (3.32)
where $$\alpha_{\ell}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24), $$\alpha_{c}$$ is a positive lower bound of the set of absolute values of the eigenvalues of the Hurwitz matrix $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$, and $${k}_{\ell}$$ and $${k}_{c}$$ are positive constants depending on the initial conditions. Proof of Theorem 3.3. Equation (3.31) directly follows from (3.23), (3.29) and (3.30).
Equation (3.32) follows from (3.28) and from the fact that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix. □ Corollary 3.1 Under the same conditions as Theorem 3.3, if in addition $$\overline{K}_{o} = \overline{K}_{r} = - \overline{{\it \Phi}}_{r}\,\overline{D}_{r}^{\ell}$$, then the descriptor variable observer (3.31) takes the following form:
$$\begin{array}{rl}{\mathrm{d}}\hat{x}_{c}/{\mathrm{d}t}&=(\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})\,\hat{x}_{c}-\overline{K}_{r}\,y+\overline{B}_{r}\,u,\\ \hat{x}_{\ell}&=-\overline{D}_{r}^{\ell}\,\overline{C}_{r}\,\hat{x}_{c}+\overline{D}_{r}^{\ell}\,y.\end{array}$$ (3.33)
Moreover,
$$\left\|\left[\begin{array}{c}\hat{x}_{c}(t)-\bar{x}_{c}(t)\\ \hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\end{array}\right]\right\|\leq\bar{k}\,{\mathrm{e}}^{-\bar{\alpha}t},$$ (3.34)
where $$\bar{\alpha}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24) and $$\bar{k}$$ is a positive constant depending on the initial conditions. Proof of Corollary 3.1. From (3.33) and (3.26), we get:
$$\begin{array}{rl}{\mathrm{d}}\tilde{x}_{c}/{\mathrm{d}t}&=(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r})\,\tilde{x}_{c},\\ \tilde{x}_{\ell}&=-\overline{D}_{r}^{\ell}\,\overline{C}_{r}\,\tilde{x}_{c},\end{array}$$
where $$\tilde{x}_{c} = \hat{x}_{c} - \bar{x}_{c}$$ and $$\tilde{x}_{\ell} = \hat{x}_{\ell} - \bar{x}_{\ell}$$. On the other hand, we get from (3.25) and (3.26):
$$\left[\begin{array}{cc}{\mathrm{I}}&\overline{K}_{r}\\ 0&{\mathrm{I}}\end{array}\right]\left[\begin{array}{cc}(s{\mathrm{I}}-\overline{A}_{r})&-\overline{{\it \Phi}}_{r}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right]=\left[\begin{array}{cc}(s{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right]=\left[\begin{array}{cc}\big(s{\mathrm{I}}-(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r})\big)&0\\ -\overline{C}_{r}&-\overline{D}_{r}\end{array}\right].$$
We then conclude from (3.27) and (3.11). □ Illustrative Example, Part 4: Let us continue the illustrative example of Section 2.3, and let us now consider the alternative observation procedure. The descriptor variable observer (3.33) for the example (3.7) and (3.20), for the case study $$d = -1$$, is (see (3.20), (3.21) and (3.26)):
$$\begin{array}{rl}{\mathrm{d}}\hat{x}_{c}/{\mathrm{d}t}&=\left[\begin{array}{cc}0&-1\\ 1&-2\end{array}\right]\hat{x}_{c}+\left[\begin{array}{c}1\\ 1\end{array}\right]y+\left[\begin{array}{c}0\\ 1\end{array}\right]u,\\ \hat{x}_{\ell}&=\left[\begin{array}{cc}0&1\end{array}\right]\hat{x}_{c}+\left[-1\right]y.\end{array}$$ (3.35)
3.2. Indirect variable descriptor observer The linear descriptor observer studied in Section 3.1 only works when the transfer between the free component of the descriptor variable, $${x}_{\ell}$$, and the output, $$y$$, is of minimum phase. If this is not the case, one can try standard state observers with the additional help of an internal structure detection procedure. Namely, if we are able to know which internal structure takes place, we can use a standard state observer for the internal structure which is actually acting.
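Before turning to the structure detection route, the direct observer of the preceding example can be checked numerically. The following minimal NumPy sketch (illustrative only, not part of the original development) uses the matrices of (3.20) and (3.35), with $$d = -1$$, and verifies that the error dynamics $$\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r}$$ of Corollary 3.1 has both of its poles at the Hurwitz transmission zeros of (3.24), here $$\{-1,\,-1\}$$:

```python
import numpy as np

# Matrices of the illustrative example (3.20), case d = -1.
d = -1.0
A_r   = np.array([[0.0, 0.0], [1.0, -1.0]])   # \bar{A}_r
Phi_r = np.array([[-1.0], [-1.0]])            # \bar{\Phi}_r
C_r   = np.array([[0.0, 1.0]])                # \bar{C}_r
D_r_l = 1.0 / d                               # \bar{D}_r^\ell = 1/d

# Output injection (3.26) and resulting error dynamics of Corollary 3.1.
K_r = -Phi_r * D_r_l                          # \bar{K}_r = -\bar{\Phi}_r \bar{D}_r^\ell
A_err = A_r - Phi_r @ (D_r_l * C_r)           # equals \bar{A}_r + \bar{K}_r \bar{C}_r

print(A_err)                     # the state matrix of (3.35)
print(np.linalg.eigvals(A_err))  # both eigenvalues at -1, the Hurwitz zeros of (3.24)
```

The pole-zero cancellation of Theorem 3.2 is visible here: the observer poles coincide with the zeros of $$(d{\mathrm{s}}-1)({\mathrm{s}}+1)$$ for $$d=-1$$.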
But, as shown hereafter, this is not a straightforward procedure, and some considerations have to be taken into account. Let us consider the illustrative example of Section 2.3, with $$d=1$$, and let us suppose that we are able to synthesize the ideal observers5 for each $$ {q} = ({q_{1}}, {q_{2}}) \in \big\{ {q_{a}},\,{q_{b}},\,{q_{c}} \big\}$$, namely (see (2.4) and (2.20)):
$$x_{3}=y,\qquad q_{1}^{2}\neq q_{2}^{2}:\ \left[\begin{array}{cc}q_{1}&q_{2}\\ q_{2}&q_{1}\end{array}\right]\left[\begin{array}{c}x_{1}\\ x_{2}\end{array}\right]=\left[\begin{array}{c}-1\\ (q_{1}+q_{2})-{\mathrm{d}}/{\mathrm{d}t}\end{array}\right]y-\left[\begin{array}{c}0\\ q_{2}\end{array}\right]u,\qquad q_{1}^{2}=q_{2}^{2}:\ \left[\begin{array}{cc}q_{1}&q_{2}\end{array}\right]\left[\begin{array}{c}x_{1}\\ x_{2}\end{array}\right]=-y.$$
Thus, for each $${q} \in \big\{{q_{a}},\,{q_{b}},\,{q_{c}}\big\}$$, we have the following particular estimated values of $$x$$ (see6 (2.16)):
$$(x_{1},\,x_{2},\,x_{3})=\left\{\begin{array}{ll}(0,\ y,\ y)&\text{for}\ q=q_{a},\\ \big(y,\ ({\mathrm{d}}/{\mathrm{d}t}+1)\,y,\ y\big)&\text{for}\ q=q_{b},\\ \big(\tfrac13(2\,{\mathrm{d}}/{\mathrm{d}t}+5)\,y-\tfrac43\,u,\ -\tfrac13({\mathrm{d}}/{\mathrm{d}t}+1)\,y+\tfrac23\,u,\ y\big)&\text{for}\ q=q_{c}.\end{array}\right.$$ (3.36)
Taking into account the ideal observers (3.36) in the proportional and derivative descriptor variable feedback (2.25), we get the following particular control laws:
$$\text{For}\ q=q_{a}:\ \left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4},\\ u_{1}=\big(1-\frac{1}{\tau}\big)\,y-\frac{1}{\varepsilon_{c}}\,x_{4}(0)\,{\mathrm{e}}^{-\frac{t}{\varepsilon_{c}}}+\frac{1}{\tau}\,r,\end{array}\right.\qquad \text{For}\ q=q_{b}:\ \left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4}+\frac{1}{\varepsilon_{c}}\frac{\mathrm{d}}{\mathrm{d}t}y,\\ u_{2}=-\big(\frac{1}{\tau}-\frac{1}{\varepsilon_{c}}\frac{\mathrm{d}}{\mathrm{d}t}\big)\,y-\frac{1}{\varepsilon_{c}}\,x_{4}+\frac{1}{\tau}\,r,\end{array}\right.\qquad \text{For}\ q=q_{c}:\ \left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4}-\frac{1}{3\varepsilon_{c}}\big(4+\frac{\mathrm{d}}{\mathrm{d}t}\big)\,y+\frac{2}{3\varepsilon_{c}}\,u_{3},\\ u_{3}=\Big(\frac{\varepsilon_{c}(2+3\tau)+4}{\varepsilon_{c}+2}+\frac{2\varepsilon_{c}+1}{\varepsilon_{c}+2}\frac{\mathrm{d}}{\mathrm{d}t}\Big)\,y+\frac{3}{\varepsilon_{c}+2}\,x_{4}-\frac{3\,\tau\,\varepsilon_{c}}{\varepsilon_{c}+2}\,r.\end{array}\right.$$ (3.37)
Let us now note that, if a given particular control law is applied to a wrongly detected structure, a temporarily unstable system can be obtained; see Table 1. Table 1. Cross closed-loop transfer functions obtained with the particular control laws of (3.37).
The denominators are computed with $$\varepsilon_{c} = 0.25$$ and $$\tau = 4$$ (unstable factors in bold):

| | $$q_{a}$$ | $$q_{b}$$ | $$q_{c}$$ |
| --- | --- | --- | --- |
| $$\frac{y(\mathrm{s})}{u_{1}(\mathrm{s})}$$ | $$\frac{1}{{\tau}\mathrm{s} + 1}$$ | $$\frac{1/{\tau}}{{\bf\left(\mathrm{s}-\frac12\right)}\left(\mathrm{s}+\frac32\right)}$$ | $$\frac{(1/{\tau})(2{\mathrm{s}}+1)}{{\mathrm{s}}^{2}+(2/{\tau}+1){\mathrm{s}}+(1/{\tau}+1)}$$ |
| $$\frac{y(\mathrm{s})}{u_{2}(\mathrm{s})}$$ | $$\frac{-(1/{\tau})({\varepsilon_{c}}\mathrm{s}+1)/(1-{\varepsilon_{c}})}{{\bf\left(\mathrm{s}-\frac1{0.41}\right)}\left(\mathrm{s}+\frac1{1.46}\right)}$$ | $$\frac{{\varepsilon_{c}}{\mathrm{s}}+1}{{\varepsilon_{c}}{\tau}\left({\mathrm{s}}^{2}+{\mathrm{s}}+\frac1{\tau}\right){\mathrm{s}}+({\tau}{\mathrm{s}}+1)}$$ | $$\frac{-({\varepsilon_{c}}/{\tau})(2{\mathrm{s}}+1)({\mathrm{s}}+1/{\varepsilon_{c}})/(2-{\varepsilon_{c}})}{{\bf\left(\mathrm{s}-\frac1{0.50}\right)}\left(\left(\mathrm{s}+\frac1{1.34}\right)^{2}+(0.30)^{2}\right)}$$ |
| $$\frac{y(\mathrm{s})}{u_{3}(\mathrm{s})}$$ | $$\frac{-(3/{\tau})({\varepsilon_{c}}{\mathrm{s}}+1)/(1-{\varepsilon_{c}})}{{\bf\left(\mathrm{s}-\frac1{0.20}\right)}\left(\mathrm{s}+\frac1{2.16}\right)}$$ | $$\frac{-(3/{\tau})({\varepsilon_{c}}{\mathrm{s}}+1)/(2+{\varepsilon_{c}})}{{\bf\left(\mathrm{s}-\frac1{0.67}\right)}\left(\mathrm{s}+\frac1{0.55}\right)\left(\mathrm{s}+\frac1{2.19}\right)}$$ | $$\frac{{\varepsilon_{c}}{\mathrm{s}}+1}{{\varepsilon_{c}}\frac{\left({\tau}{\mathrm{s}}^{2}+(2+{\tau}){\mathrm{s}}+1\right){\mathrm{s}}}{2{\mathrm{s}}+1}+({\tau}{\mathrm{s}}+1)}$$ |

From Table 1, we can assert that the internal structure detection has to be correctly achieved in finite time. Moreover, the finite detection time should be no larger than the fastest expected unstable time constant; in our case, the fastest unstable time constant is 0.20 s (cf. Table 1). In Bonilla et al. (2009), a finite time adaptive structure detector is proposed, which can be used for synthesizing indirect variable descriptor observers. We shall now proceed to illustrate our variable descriptor observation schemes. 4. Illustrative example: part 5 Let us continue the illustrative example of Section 2.3, for the case $$d = 1$$. For this non minimum phase case, we use an indirect variable descriptor observer based on the finite time adaptive structure detection proposed in Bonilla et al. (2009).
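The 0.20 s figure quoted above can be recovered directly from the factored denominators of Table 1. A small NumPy sketch (illustrative only) expands the worst cross pair, the control law $$u_{3}$$ applied to the structure $$q_{a}$$:

```python
import numpy as np

# Denominator of y/u3 under q_a (Table 1): (s - 1/0.20)(s + 1/2.16).
den = np.polymul([1.0, -1.0 / 0.20], [1.0, 1.0 / 2.16])
poles = np.roots(den)

# Fastest unstable pole and the corresponding time constant.
unstable = max(p.real for p in poles)
print(poles)             # one pole at +5, one at about -0.46
print(1.0 / unstable)    # time constant of the unstable mode: 0.20 s
```

Any detection delay comparable with this 0.20 s time constant would let the mismatched loop diverge noticeably before the correct law is switched in.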
For this, we first need to synthesize a standard state observer for each state space representation, $$\mathfrak{R}^{ss}(A_{q}, B, C_{q})$$, $$q\in\mathscr{Q}$$, namely (see (2.2), (2.3) and (2.17); cf. (3.36)): Observer for $$q = {q_{a}}$$:
$$\tilde{x}_{1,1}=0;\quad \tilde{x}_{2,1}=y;\quad \tilde{x}_{3,1}=y.$$ (4.1)
Observer for $$q = q_{b}$$:
$$\frac{\mathrm{d}}{\mathrm{d}t}\left[\begin{array}{c}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{array}\right]=\left[\begin{array}{cc}-1&1\\ 0&0\end{array}\right]\left[\begin{array}{c}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{array}\right]+\left[\begin{array}{c}0\\ 1\end{array}\right]u+\left[\begin{array}{c}-1\\ -1\end{array}\right]\left(\left[\begin{array}{cc}1&0\end{array}\right]\left[\begin{array}{c}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{array}\right]-y\right),\qquad \tilde{x}_{1,2}=y;\quad \tilde{x}_{2,2}=\tilde{\tilde{x}}_{2};\quad \tilde{x}_{3,2}=y.$$ (4.2)
Observer for $$q = q_{c}$$:
$$\frac{\mathrm{d}}{\mathrm{d}t}\left[\begin{array}{c}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{array}\right]=\left[\begin{array}{cc}-1&-1\\ 0&-2\end{array}\right]\left[\begin{array}{c}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{array}\right]+\left[\begin{array}{c}0\\ 1\end{array}\right]u+\left[\begin{array}{c}1/3\\ 1/3\end{array}\right]\left(\left[\begin{array}{cc}1&2\end{array}\right]\left[\begin{array}{c}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{array}\right]-y\right),\qquad \tilde{x}_{3,3}=y.$$ (4.3)
Finite Time Adaptive Structure Detector. The finite time adaptive structure detector proposed in Bonilla et al. (2009) is composed of three stages: (i) discriminating filters, (ii) $$z$$-filters and (iii) an adaptive structure detector. (i) Discriminating filters: The aim of this first stage is to synthesize a bank of filters whose outputs remain null when the active internal structure is detected. From (2), we deduce (cf. (2) of Bonilla et al. (2009)):
$$\big({\mathrm{d}}/{\mathrm{d}t}+\beta_{d}\big)\left[\begin{array}{c}\chi_{1}(t)\\ \chi_{2}(t)\\ \chi_{3}(t)\end{array}\right]=-\left[\begin{array}{c}\varepsilon_{d}^{2}\,y_{1}(t)\\ \varepsilon_{d}^{3}\,y_{2}(t)\\ \varepsilon_{d}^{3}\,y_{3}(t)\end{array}\right],$$ (4.4)
$$\big(\varepsilon_{d}\,{\mathrm{d}}/{\mathrm{d}t}+1\big)\,y_{1}(t)=\chi_{1}(t)+\big(-u(t)+({\mathrm{d}}/{\mathrm{d}t}+1)\,y(t)\big),$$ (4.5)
$$\begin{array}{rl}\Big(\big(\varepsilon_{d}\,{\mathrm{d}}/{\mathrm{d}t}+\tfrac{\sqrt{2}}{2}\big)^{2}+\tfrac12\Big)\,y_{2}(t)&=\chi_{2}(t)+\big(-u(t)+({\mathrm{d}}/{\mathrm{d}t}+1)({\mathrm{d}}/{\mathrm{d}t})\,y(t)\big),\\ \Big(\big(\varepsilon_{d}\,{\mathrm{d}}/{\mathrm{d}t}+\tfrac{\sqrt{2}}{2}\big)^{2}+\tfrac12\Big)\,y_{3}(t)&=\chi_{3}(t)+\big(-(2\,{\mathrm{d}}/{\mathrm{d}t}+1)\,u(t)+({\mathrm{d}}/{\mathrm{d}t}+1)({\mathrm{d}}/{\mathrm{d}t}+2)\,y(t)\big).\end{array}$$ (4.6)
(ii) $$z$$-Filters: The aim of this second stage is to provide a set of functions which memorize the time evolution of the absolute value of the outputs of the previous stage; these filters also act as a forgetting factor (cf. (7) of Bonilla et al. (2009)):
$$\tau_{z}\,{\mathrm{d}}z_{k}/{\mathrm{d}t}\,(t)+z_{k}(t)=|y_{k}(t)|,\quad k\in\{1,2,3\}.$$ (4.7)
(iii) Adaptive structure detector: This third stage is an adaptive structure detector based on a projected gradient method, whose aim is to produce a vector whose components act as logic variables: only one component remains different from zero, and it corresponds to the active internal structure (cf. (9) of Bonilla et al.
(2009))7:
$${\mathrm{d}}\hat{q}_{k}(t)/{\mathrm{d}t}=-\rho_{1}\,\hat{q}_{k}(t)\,\frac{z_{k}^{2}(t)}{\alpha_{k}^{2}+z_{k}^{2}(t)}+\rho_{2}\,\gamma(\hat{q})\,{\mathrm{e}}^{|g(\hat{q})|},$$ (4.8)
where $$k\in\{1, 2, 3\}$$, and $${g}(\hat{q}) = \rho_{0}^2 - \sum\nolimits_{k=1}^{3}\hat{q}_{k}^2$$ defines a hyper-sphere. The constant terms $$\rho_1$$ and $$\rho_2$$ are the positive gains of the algorithm (attraction to the hyper-sphere) and of its projection term (repulsion from the hyper-sphere), respectively, and the $${\alpha}_{k}$$ are positive scale factors. $$\gamma(\cdot)$$ is a hysteresis switch which switches on at the point $$\delta^{2}$$ and switches off at the point $$0$$; when on, its output is $$1$$, and when off, its output is $$0$$. From the fifth section of Bonilla et al. (2009), we have the following synthesis procedure: (4.9) Descriptor variable observer. For synthesizing the descriptor variable observer, it is only necessary to select the right standard state observer with the help of the vector $$\hat{q}$$. For this, let: $$Sw_{k} = \gamma_{Sw}(\hat{q}_{k})$$, $$k\in \{1, 2, 3\}$$, where $$\gamma_{Sw}(\cdot)$$ is a hysteresis switch with $$\sqrt{\rho_{0}^{2}-{\beta}^{\ast}\delta^{2}}$$ as the switch-on point and $$\rho_{0}/\sqrt{n}$$ as the switch-off point; its output is $$1$$ when on and $$0$$ when off. The descriptor observer is then:
$$\hat{x}_{k}=\sum_{j=1}^{3}\hat{x}_{k,j}\,Sw_{j},\quad k\in\{1,2,3\}.$$ (4.10)
4.1. Numerical simulation Two MATLAB® numerical simulations were performed, for $$d = -1$$ and for $$d = 1$$, with the solver settings: The behaviours, $$\mathfrak{B}_{q_{i}}^{\infty}$$, take place as follows (recall (2.19)): $$\mathfrak{B}_{q_{a}}^{\infty}$$ in the time interval $$[0, 50]$$, $$\mathfrak{B}_{q_{b}}^{\infty}$$ in $$[50, 100]$$ and $$\mathfrak{B}_{q_{c}}^{\infty}$$ in $$[100, 150]$$. We apply the P.D. feedback (2.25), with the choice $$\tau = 4$$ and $${\varepsilon}_{c} = 0.25$$, using the observed descriptor variable $$\hat{x}$$ instead of the descriptor variable $${x}$$.
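A discrete-time sketch of the detector's last two stages may help fix ideas. The following Python fragment is an illustrative approximation (not the MATLAB code actually used): it implements the $$z$$-filter (4.7) by forward Euler and a hysteresis switch of the kind used for $$\gamma_{Sw}$$ in (4.10), with the switch-on and switch-off points computed from the parameter values $$\rho_{0}=2$$, $${\beta}^{\ast}=2$$, $$\delta=0.5$$ and $$n=3$$ given below:

```python
import numpy as np

def z_filter(y, tau_z, dt):
    """Forward-Euler discretization of tau_z dz/dt + z = |y|, cf. (4.7)."""
    z = np.zeros(len(y))
    for i in range(1, len(y)):
        z[i] = z[i - 1] + (dt / tau_z) * (abs(y[i - 1]) - z[i - 1])
    return z

class Hysteresis:
    """Hysteresis switch: output becomes 1 once the input rises above `on`,
    and returns to 0 once it falls below `off` (cf. gamma_Sw in (4.10))."""
    def __init__(self, on, off):
        self.on, self.off, self.out = on, off, 0
    def __call__(self, v):
        if self.out == 0 and v >= self.on:
            self.out = 1
        elif self.out == 1 and v <= self.off:
            self.out = 0
        return self.out

# The z-filter memorizes |y_k| with forgetting time constant tau_z:
z = z_filter(np.ones(2000), tau_z=0.1, dt=0.001)       # step input |y_k| = 1
# Switch-on sqrt(rho0^2 - beta* delta^2), switch-off rho0/sqrt(n):
sw = Hysteresis(on=np.sqrt(2.0**2 - 2 * 0.5**2), off=2.0 / np.sqrt(3))
```

The hysteresis band prevents chattering of the selected observer while a component of $$\hat{q}$$ travels between the detection thresholds.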
The reference $$r$$ has been chosen as follows (see Definition 2.4.5 of Polderman & Willems (1998)):
$$\phi(t)=\left\{\begin{array}{ll}-{\mathrm{e}}^{\frac{-1}{1-(t')^{2}}},& t\in A=\big(\tfrac16,\,\tfrac26\big),\ t'=12t-3,\\ -{\mathrm{e}}^{\frac{-1}{1-(t'')^{2}}},& t\in B=\big(\tfrac46,\,\tfrac56\big),\ t''=-12t+9,\\ 0,& t\in{\Bbb R}\setminus(A\cup B),\end{array}\right.\qquad r(t)=\int_{0}^{t}\Big(\sum_{i=0}^{3}(-1)^{i}\,\phi\big(\tfrac{2}{75}\,\sigma-i\big)\Big)\,{\mathrm{d}}\sigma,\quad t\in[0,150].$$
The model matching error is computed as follows:
$$|y(t)-y^{*}(t)|=\Big|y(t)-\int_{0}^{t}{\mathrm{e}}^{-\frac{1}{\tau}(t-\sigma)}\,r(\sigma)\,{\mathrm{d}}\sigma\Big|.$$
4.1.1. Observers based on fault detection For the case $$d = -1$$, we apply the descriptor variable observer based on fault detection techniques (3.33). The observed descriptor variable is:
$$\hat{x}=T\left[\begin{array}{c}\hat{z}_{1}\\ \hat{z}_{2}\\ \hat{\varphi}\end{array}\right].$$ (4.11)
In Fig. 1, we show the numerical simulations for this minimum phase case. In order to appreciate the performance of the remainder generator (3.24), in this simulation we have set the initial condition: Fig. 1. Numerical simulations for the minimum phase case: $$d=-1$$. (a) Output, $$y$$. (b, c) Model matching error, $$\left| y(t)-{{y}^{*}}(t) \right|$$. (d, e) Control input, $$u$$. (f, g) Observation error, $${{\left\| \hat{x}-x \right\|}_{2}}$$. 4.1.2. Observers based on adaptive structure detection For the case $$d = 1$$, we apply the descriptor variable observer based on adaptive structure detection, (4.1)–(4.8) and (4.10). We have chosen: $$\varepsilon_{d} = 0.01$$, $$\tau_{z} = 0.1$$ and $$t^{\ast} = 0.1$$ (recall the unstable modes of Table 1). Following the procedure proposed in Bonilla et al. (2009), we set (recall (4.9)): $$n = 3$$, $$\rho_{0} = 2$$, $${\beta}^{\ast} = 2$$, $$\delta = 0.5$$, $${\alpha}_{k} = 1$$ ($$k = 1,\,2,\,3$$) and $$a = 0.09$$; we thus obtain: $$\rho_{1} = 1592.44$$ and $$\rho_{2} = 3092.98$$. In Figs 2 and 3, we show the numerical simulations for this non minimum phase case.
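The $$C^{\infty}$$ reference signal defined above can be reproduced in a few lines; the following sketch is illustrative only (it uses a simple Riemann sum rather than the simulation's solver):

```python
import numpy as np

def phi(t):
    """C-infinity bump of the reference definition: support (1/6, 2/6) and (4/6, 5/6)."""
    if 1/6 < t < 2/6:
        tp = 12 * t - 3            # maps (1/6, 2/6) onto (-1, 1)
        return -np.exp(-1.0 / (1.0 - tp**2))
    if 4/6 < t < 5/6:
        tpp = -12 * t + 9          # maps (4/6, 5/6) onto (1, -1)
        return -np.exp(-1.0 / (1.0 - tpp**2))
    return 0.0

def r(t, dt=0.01):
    """r(t): integral of sum_{i=0}^{3} (-1)^i phi((2/75) sigma - i), Riemann sum."""
    s = np.arange(0.0, t, dt)
    integrand = [sum((-1)**i * phi(2/75 * sk - i) for i in range(4)) for sk in s]
    return np.sum(integrand) * dt

print(phi(0.25))   # midpoint of the first bump: -exp(-1)
```

Since each bump vanishes with all its derivatives at the boundary of its support, the resulting $$r$$ is infinitely differentiable, as required for the exact derivative actions of the P.D. feedback.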
We shall now proceed to conclude our exposition. Fig. 2. Numerical simulations for the non minimum phase case: $$d=1$$. (a) Output, $$y$$. (b) Model matching error, $$\left| y(t)-{{y}^{*}}(t) \right|$$. (c) Control input, $$u$$. (d–f) Observation error, $${{\left\| \hat{x}-x \right\|}_{2}}$$. Fig. 3. Numerical simulations for the non minimum phase case: $$d=1$$. (a, b) Switch detecting $${q_{a}}$$. (c, d) Switch detecting $${q_{b}}$$. (e, f) Switch detecting $${q_{c}}$$. 5. Conclusion In this article, we have tackled the descriptor variable observation problem for implicit descriptions having column minimal indices blocks. We are mainly concerned with the synthesis procedure and not with theoretical aspects, as in Berger & Reis (2015). We have considered two procedures: (i) linear descriptor observers, based on fault detection techniques, and (ii) indirect variable descriptor observers, based on finite time structure detection techniques. The first proposition is based on fault detection techniques. This observer is composed of a Beard–Jones filter, whose aim is to observe the existing degree-of-freedom in rectangular implicit representations. Notice that, after the initial transient, this observer remains insensitive to the switching events (see Fig. 1(f) and (g)); this is the case because the observer is based on the fault detection of the continuous linear system (3.8).
Since this observation is accomplished by a pole-zero cancellation, this technique is reserved to minimum phase systems, with respect to the output—degree-of-freedom transfer, namely to implicit rectangular representations having Hurwitz output decoupling zeros. The second proposition is based on adaptive structure detection. The adaptive structure detection is guaranteed in finite time (see Figs 2(e) and (f) and 3), avoiding possible stability troubles due to temporarily unstable closed loops during the detection procedure. We are mainly working with implicit descriptions having column minimal indices blocks, which implies the existence of more unknown variables than equations; when a solution exists, it is not unique. The non-uniqueness of solutions introduces some interesting pathologies into control theory. For example, in Bonilla et al. (2013) we have shown that, in systems represented by column minimal indices, it is possible to have reachable systems without any control. This phenomenon is possible because of the existence of the free variable $$\bar{x}_{\ell}$$, which acts as an internal control signal (see Section 4.3 of Bonilla et al. (2013)). Since unobservability is a concept dual to reachability, pathologies also appear in the presence of column minimal indices blocks. This pathology is the loss of observability in the change of basis procedure, since the free variable $$\bar{x}_{\ell}$$ acts as an internal state feedback; so one must be careful with the change of basis. This is pointed out in Theorem 3.1, and we show in Lemma 3.2 how to find complementary observable subspaces. We must point out that our descriptor variable observation proposal is intended for implicit descriptions, and therefore differs from the well-known observer design method proposed in Tanwani et al. (2013) (see also Biehn et al., 2001).
In that case, the observer is focused on the estimation of the state vector of switched linear systems with state jumps (corresponding to the acting switching signals). In the observation approach that we followed here (related to an implicit description of the concerned system), we estimate the descriptor variable affected by the system's structure variation. Our observer design methods (i.e., the approach based on linear descriptor observers, and the one based on indirect variable descriptor observation) can then be applied to a broad class of linear systems, including switched systems. References Aling H. & Schumacher M. (1984) A nine-fold canonical decomposition for linear systems. Int. J. Control, 39, 779–805. Aubin J. P. & Frankowska H. (1991) Viability kernels of control systems. Nonlinear Synthesis (Byrnes C. I. & Kurzhanski A. eds). Progress in Systems and Control Theory, vol. 9. Boston: Birkhäuser, pp. 12–33. Berger T. & Reis T. (2013) Controllability of linear differential-algebraic systems—a survey. Surveys in Differential-Algebraic Equations I (Ilchmann A. & Reis T. eds). Differential-Algebraic Equations Forum. Berlin-Heidelberg: Springer, pp. 1–61. Berger T. & Reis T. (2015) Observers and dynamic controllers for linear differential-algebraic systems. Submitted, preprint available from the website of the authors: www.math.unihamburg.de/home/reis/BergReis15.pdf. Berger T., Reis T. & Trenn S. (2016) Observability of linear differential-algebraic systems: a survey. Surveys in Differential-Algebraic Equations IV (Ilchmann A. & Reis T. eds). Differential-Algebraic Equations Forum. Berlin-Heidelberg: Springer, pp. 161–219. Biehn N., Campbell S. L., Nikoukhah R. & Delebecque F. (2001) Numerically constructible observers for linear time-varying descriptor systems.
Automatica, 37, 445–452. Bonilla M., Lebret G., Loiseau J. J. & Malabre M. (2013) Simultaneous state and input reachability for linear time invariant systems. Linear Algebra Appl., 439, 1425–1440. Bonilla M. & Malabre M. (1991) Variable structure systems via implicit descriptions. First European Control Conference, Grenoble, July. Paris: Hermès, vol. 1, pp. 403–408. Bonilla M. & Malabre M. (1994) Geometric characterization of Lewis structure algorithm. Circuits Syst. Signal Process., 13, 255–272. Bonilla M. & Malabre M. (1995) Geometric minimization under external equivalence for implicit descriptions. Automatica, 31, 897–901. Bonilla M. & Malabre M. (1997) Structural matrix minimization algorithm for implicit descriptions. Automatica, 33, 705–710. Bonilla M. & Malabre M. (2003) On the control of linear systems having internal variations. Automatica, 39, 1989–1996. Bonilla M., Malabre M. & Azhmyakov V. (2015a) An implicit systems characterization of a class of impulsive linear switched control processes. Part 1: modeling. Nonlinear Anal. Hybrid Syst., 15, 157–170. Bonilla M., Malabre M. & Azhmyakov V. (2015b) An implicit systems characterization of a class of impulsive linear switched control processes. Part 2: control. Nonlinear Anal. Hybrid Syst., 18, 15–32. Bonilla M., Martínez J. C., Pacheco J. & Malabre M. (2009) Matching a system behavior within a known set of models: a quadratic optimization based adaptive solution. Int. J. Adapt. Control Signal Process., 23, 882–906. Dai L. (1989) Singular Control Systems. Lecture Notes in Control and Information Sciences, vol. 118. New York: Springer.
Duan G. R. (2010) Analysis and Design of Descriptor Linear Systems. New York: Springer. Gantmacher F. R. (1977) The Theory of Matrices, vol. II. New York: Chelsea. Geerts T. (1993) Solvability conditions, consistency, and weak consistency for linear differential-algebraic equations and time-invariant singular systems: the general case. Linear Algebra Appl., 181, 111–130. Isermann R. (1984) Process fault detection based on modeling and estimation methods—a survey. Automatica, 20, 387–404. Kailath T. (1980) Linear Systems. Englewood Cliffs, NJ: Prentice-Hall. Kuijper M. & Schumacher J. M. (1992) Minimality of descriptor representations under external equivalence. Automatica, 27, 985–995. Lebret G. & Loiseau J. J. (1994) Proportional and proportional-derivative canonical forms for descriptor systems with outputs. Automatica, 30, 847–864. Lewis F. L. (1992) A tutorial on the geometric analysis of linear time-invariant implicit systems. Automatica, 28, 119–137. Lewis F. L. & Beauchamp G. (1987) Computation of subspaces for singular systems. Proceedings of MTNS'87, Phoenix, AZ. Liberzon D. (2003) Switching in Systems and Control. Systems and Control: Foundations & Applications. Boston, MA: Birkhäuser. Liret F. & Martinais D. (1997) Algèbre 1re année. Paris, France: Dunod. Malabre M. (1987) More geometry about singular systems. Proceedings of the 26th IEEE Conference on Decision and Control, Los Angeles, 9–11 December 1987, pp. 1138–1139. Narendra K. S. & Balakrishnan J. (1994) A common Lyapunov function for stable LTI systems with commuting $$A$$-matrices. IEEE Trans. Autom. Control, 39, 2469–2471. Özçaldiran K. (1985) Control of descriptor systems. Ph.D.
Thesis, Georgia Institute of Technology. Polderman J. W. & Willems J. C. (1998) Introduction to Mathematical Systems Theory: A Behavioral Approach. New York: Springer. Rosenbrock H. H. (1970) State-Space and Multivariable Theory. London: Nelson. Saberi A., Stoorvogel A. A., Sannuti P. & Niemann H. H. (2000) Fundamental problems in fault detection and identification. Int. J. Robust Nonlinear Control, 10, 1209–1236. van der Schaft A. J. & Schumacher H. (2000) An Introduction to Hybrid Dynamical Systems. Lecture Notes in Control and Information Sciences, vol. 251. New York: Springer. Shorten R. N. & Narendra K. S. (2002) Necessary and sufficient conditions for the existence of a common quadratic Lyapunov function for a finite number of stable second order linear time-invariant systems. Int. J. Adapt. Control Signal Process., 16, 709–728. Tanwani A., Shim H. & Liberzon D. (2013) Observability for switched linear systems: characterization and observer design. IEEE Trans. Autom. Control, 58, 891–904. Trentelman H. L., Stoorvogel A. A. & Hautus M. L. J. (2001) Control Theory for Linear Systems. Communications and Control Engineering. London: Springer. Vardulakis A. I. G. (1991) Linear Multivariable Control: Algebraic Analysis and Synthesis Methods. Chichester, UK: John Wiley & Sons. Verghese G. C. (1981) Further notes on singular descriptions. Proceedings of the Joint American Control Conference, Paper TA4, Charlottesville. Willems J. C. (1983) Input–output and state space representations of finite-dimensional linear time-invariant systems. Linear Algebra Appl., 50, 581–608. Willsky A. S. (1976) A survey of design methods for failure detection in dynamic systems. Automatica, 12, 601–611. Wonham W. M.
(1985) Linear Multivariable Control: A Geometric Approach, 3rd edn. New York: Springer.

Appendix A.

Proof of Lemma 3.1 From (3.5), we have:
$$\begin{array}{rl} \mathscr{N}_{\mathscr{X}_{o}}^{*} & = \sup\{\mathscr{N}\subset\mathscr{X}_{o}\cap\ker C\ \vert\ A\mathscr{N}\subset E\mathscr{N}\} = \sup\{\mathscr{N}\subset \mathrm{Im}\,V\cap\ker C\ \vert\ A(\mathrm{Im}\,V\cap\mathscr{N})\subset E(\mathrm{Im}\,V\cap\mathscr{N})\},\\ & = \sup\{\mathscr{N}\subset V V^{-1}\ker C\ \vert\ AV(V^{-1}\mathscr{N})\subset EV(V^{-1}\mathscr{N})\} = \sup\{\mathscr{N}\subset V\ker CV\ \vert\ AV(V^{-1}\mathscr{N})\subset EV(V^{-1}\mathscr{N})\},\\ & = \sup\{(V^{-1}\mathscr{N})\subset\ker CV\ \vert\ AV(V^{-1}\mathscr{N})\subset EV(V^{-1}\mathscr{N})\}. \end{array}$$
Now, since $\ker EV = V^{-1}\ker E = \mathscr{X}_{o} \cap \ker E = \{0\}$ and $\mathrm{Im}\,EV = E\mathscr{X}_{o} = E(\mathscr{X}_{o}\oplus\ker E) = \mathrm{Im}\,E$, we get (3.12). □

Proof of Lemma 3.2 From Lemma 3.1, (3.13) is equivalent to the observability of the pair $(CV,\,AV)$ (see Wonham, 1985). From the PBH rank test (see, e.g., Kailath, 1980; Vardulakis, 1991; Polderman & Willems, 1998), we get the equivalence between (3.13) and (3.14). In Bonilla & Malabre (1997), we have used the geometric interpretation of the LSA, given in Bonilla & Malabre (1994), to compute matricially the unobservable subspace of a triple $(E, A, C)$, geometrically computed by (3.6) (with $\mathscr{X}_{c} = \mathscr{X}_{d}$). For our case, we only have to replace $\ker C$ by $\mathscr{X}_{c}\cap\ker C$. Now, it is a usual practice to represent subspaces by means of kernels of some given matrices (see, e.g., Liret & Martinais, 1997). We then get the equivalence between (3.13) and (3.15); the condition $\ker\left[\begin{array}{c} E\\ K\end{array}\right] = \{0\}$ is implied by the complementary condition $\mathscr{X}_{d} = \mathscr{X}_{c} \oplus \ker E$. Equation (3.17) is straightforward. □

1 See Willems (1983) and Polderman & Willems (1998) for the external equivalence definition.
2 The parameter $d$ is not part of the variable structure, which is characterized by the pair $(q_{1}, q_{2})$; we have only introduced it in order to consider two cases of study.
3 ied: infinite elementary divisor; fed: finite elementary divisor; cmi: column minimal index.
4 We have applied this change of basis to put in evidence the parameter $d$.
In the fourth example of Bonilla et al. (2015b), $d = 1$, so $\xi = x$.
5 Based on an exact reconstruction of their derivative actions.
6 For the case $q_{a} = (-1,\,-1)$, we have from (2.4) and (2.20): $\mathrm{d}x_{1}/\mathrm{d}t = x_{2} - y$ and $x_{1} + x_{2} = y$; thus $x_{1} = {\mathrm{e}}^{-t}x_{1}(0)$, $x_{2} = y - {\mathrm{e}}^{-t}x_{1}(0)$ and $x_{3} = y$.
7 In Bonilla et al. (2009), a dead-zone function is also attached, whose aim is to take into account band-limited high-frequency noise components.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
IMA Journal of Mathematical Control and Information, Oxford University Press.

ISSN 0265-0754; eISSN 1471-6887. DOI: 10.1093/imamci/dnx020.
It is shown how to embed the variable internal structure present in square implicit descriptions inside an $(A, E, B)$-invariant subspace contained in the kernel of the output map. Thanks to this embedding, the variable internal structure is made unobservable, and a proper closed loop system with a controllable pre-specified structure is then obtained. In Bonilla et al.
(2015a), the authors have taken advantage of the results obtained in Bonilla & Malabre (2003) to model a certain class of time-dependent, autonomous switched systems (Liberzon, 2003). In Bonilla & Malabre (2003), the authors have proposed a variable structure decoupling control strategy based on an ideal proportional and derivative (PD) feedback. In Bonilla et al. (2015b), the authors have proposed a proper practical approximation of such an ideal PD feedback, which rejects the variable structure; the authors also study the stability of both implicit control strategies. In this article, we now tackle the descriptor variable observation process for the class of time-dependent, autonomous switched systems considered in Bonilla et al. (2015a). We then propose two different ways for observing the descriptor variable: a descriptor variable observer based on a fault detection scheme and a descriptor variable observer based on an adaptive structure detection approach. The article is organized as follows: in Section 2, we briefly recall how to model and to control, via implicit techniques, a class of linear switched systems (see Bonilla et al., 2015a, 2015b for details). In Section 3, we propose two descriptor variable observation procedures: (i) linear descriptor observers, based on fault detection techniques, and (ii) indirect variable descriptor observers, based on finite time structure detection techniques. In Section 4, we present two illustrative examples, together with the corresponding numerical simulations. We conclude with some final remarks in Section 5.

2. Preliminaries

In Bonilla et al. (2013), it has been shown that any reachable implicit description, $\mathfrak{R}^{imp}(\overline{E}, \overline{A}, \overline{B})$, can be decomposed as:
$$\underbrace{\left[\begin{array}{cc} \mathrm{I} & 0\\ 0 & 0\end{array}\right]}_{\overline{E}} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\left[\begin{array}{c} x_{c}\\ x_{\ell}\end{array}\right] = \underbrace{\left[\begin{array}{cc} \overline{A}_{1,1} & \overline{A}_{1,2}\\ \overline{A}_{2,1} & \overline{A}_{2,2}\end{array}\right]}_{\overline{A}} \left[\begin{array}{c} x_{c}\\ x_{\ell}\end{array}\right] + \underbrace{\left[\begin{array}{cc} \overline{B}_{1} & 0\\ 0 & \mathrm{I}\end{array}\right]}_{\overline{B}} \left[\begin{array}{c} u_{c}\\ u_{a}\end{array}\right],$$
where the pair $\left(\overline{A}_{1,1},\ [\,\overline{A}_{1,2}\ \ \overline{B}_{1}\,]\right)$ is state reachable (in the classical sense).
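The role of the free component $x_{\ell}$ in this decomposition can be made concrete with a small numerical sketch (the matrices $\overline{A}_{1,1}$, $\overline{A}_{1,2}$, $\overline{B}_{1}$ below are hypothetical values chosen for illustration only): integrating the $x_{c}$ sub-dynamics for two different choices of the free variable $x_{\ell}(\cdot)$ yields two different trajectories for the same control input, which is exactly the non-uniqueness discussed in the sequel.

```python
import numpy as np

# Toy instance (hypothetical matrices) of the decomposed implicit form:
#   d/dt x_c = A11 x_c + A12 x_l + B1 u_c,
# where x_l is the free part of the descriptor variable.
A11 = np.array([[0.0, 1.0], [-1.0, -1.0]])
A12 = np.array([[0.0], [1.0]])
B1 = np.array([[0.0], [1.0]])

def simulate(x_l, u_c=lambda t: 1.0, T=1.0, steps=1000):
    """Explicit-Euler integration of the x_c sub-dynamics."""
    x = np.zeros((2, 1))
    dt = T / steps
    for k in range(steps):
        t = k * dt
        x = x + dt * (A11 @ x + A12 * x_l(t) + B1 * u_c(t))
    return x

# Two admissible choices of the free variable x_l give two different
# trajectories for the same input u_c: the implicit description is
# solvable, but its solution is not unique.
xa = simulate(lambda t: 0.0)
xb = simulate(lambda t: np.sin(5 * t))
print(np.linalg.norm(xa - xb))  # strictly positive
```

The gap between the two trajectories is entirely driven by $x_{\ell}$, which is why the paper treats it as an internal input.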
The component $x_{c}$ is the part of the descriptor variable, associated with the integrator chains, which needs a control law to reach the desired goal; $x_{\ell}$ is the free part of the descriptor variable, associated with the algebraic constraints, which acts as some kind of internal input variable, together with the component $u_{c}$, which is the effective external control input variable. The component $u_{a}$ of the external control variable has to satisfy the algebraic equation; this part of the input corresponds to algebraic relationships linked with purely derivative actions. Bonilla et al. (2015a) have taken advantage of the free descriptor variable $x_{\ell}$ for describing systems having unexpected changes of structure, inside a known set of models; see for example the so-called ladder systems illustrated in the second example of Bonilla et al. (2015a). For this, the description of the system is split into two parts: (i) an implicit rectangular representation, $\mathfrak{R}^{ir}(E, A, B, C)$: $E\,{\mathrm{d}}x/{\mathrm{d}}t = Ax+Bu$ and $y = Cx$, which characterizes the fixed structure present in all the switching process, and (ii) the algebraic constraints, $\mathfrak{R}^{alc}(0, D_{q_{i}}, 0)$: $0 = D_{q_{i}}x$, which characterize the switching process responsible for the change of structure; this changing mechanism is called here the internal structure variation.
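A quick numerical sanity check of this two-part splitting, using randomly generated matrices with the block structure introduced below in (2.3)-(2.5) (the values are arbitrary; only the structure matters): eliminating the algebraic constraint from the rectangular part reproduces the location dynamics of the switched system.

```python
import numpy as np

rng = np.random.default_rng(0)
nbar, nhat, p = 3, 2, 1

# Random instances of the structured matrices of (2.3)-(2.5)
# (hypothetical values; only the block structure is meaningful here).
A0 = rng.standard_normal((nbar, nbar))
A1 = rng.standard_normal((nbar, nhat))
C0 = rng.standard_normal((p, nbar))
C1 = rng.standard_normal((p, nhat))
Dq = rng.standard_normal((nhat, nbar))  # D-bar(q_i) for one location

# Rectangular part (fixed structure): [I 0] d/dt x = [A0 -A1] x, y = [C0 -C1] x;
# algebraic constraint (switching part): 0 = [Dq  I] x, i.e. x_hat = -Dq x_bar.
xbar = rng.standard_normal((nbar, 1))
xhat = -Dq @ xbar                       # enforced by the constraint
x = np.vstack([xbar, xhat])

A = np.hstack([A0, -A1])
C = np.hstack([C0, -C1])

# Eliminating x_hat recovers the location dynamics and output of (2.2)-(2.3):
assert np.allclose(A @ x, (A0 + A1 @ Dq) @ xbar)   # A_q = A0 + A1 Dq
assert np.allclose(C @ x, (C0 + C1 @ Dq) @ xbar)   # C_q = C0 + C1 Dq
print("constraint elimination reproduces the switched dynamics")
```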
An interesting class of systems which can be studied within this implicit rectangular representation framework is the linear switched system (LSS) (see van der Schaft & Schumacher, 2000; Liberzon, 2003) with the behaviour (see, e.g., Polderman & Willems, 1998):
$$\mathfrak{B}_{sw} := \Big\{ (q_{i},\,u(\cdot),\,y(\cdot)) \in \mathscr{Q}\times\Bbb{U}\times\Bbb{Y}\ \Big\vert\ \exists\, c_{i}\in{\Bbb{R}}^{\bar{n}},\ \mathfrak{s}:\mathcal{I}\to\mathscr{Q},\ \hbox{s.t.:}\ \mathfrak{s}(\mathscr{I}_{i}) = q_{i},$$
$$y(t) = \sum_{i\in{\Bbb{N}}} {\mathbf{1}}_{\mathscr{I}_{i}}(t)\, C_{q_{i}}\Big( \exp\big(A_{q_{i}}(t-T_{i-1})\big)c_{i} + \int_{0}^{t-T_{i-1}} \exp\big(A_{q_{i}}(t-T_{i-1}-\tau)\big)\, B\, u(T_{i-1}+\tau)\,{\mathrm{d}}\tau \Big),\ t\in{\Bbb{R}}^{+},\ \mathscr{I}_{i}\in\mathcal{I} \Big\},\quad (2.1)$$
where ${\mathbf{1}}_{\mathscr{I}_{i}}(\cdot)$ is the characteristic function of the interval $\mathscr{I}_i = [T_{i-1},\, T_{i})$ and $0 = T_0 < T_1 < \cdots < T_{i-1} < T_i$, $\dots$, $i \in {\Bbb{N}}$, is some time partition, with $\lim_{i\to\infty}T_i = \infty$. The $q_{i}$ are elements of a finite set of indexes (locations) $\mathscr{Q}$; the system remains in location $q_{i} \in \mathscr{Q}$ for all time instants $t \in \left[T_{i-1},T_i\right)$, $i \in {\Bbb{N}}$, according to the assignation rule $\mathfrak{s}:\, \mathcal{I}\to\mathscr{Q}$, $\mathfrak{s}(\mathscr{I}_{i}) = q_{i}$, where $\mathcal{I} := \left\{\mathscr{I}_{i}= [T_{i-1},\, T_{i})\subset {\Bbb{R}}^{+}\right\}$. See Bonilla et al. (2015a) for the technical details of this particular description of LSS. Such an LSS can be described by the following state space representations, $\mathfrak{R}^{ss}(A_{q_{i}}, B, C_{q_{i}})$:
$$\frac{{\mathrm{d}}}{{\mathrm{d}}t}\bar{x} = A_{q_{i}}\bar{x} + Bu\ \ \hbox{and}\ \ y = C_{q_{i}}\bar{x};\quad t\in(T_{i-1},\,T_{i}),\ \ \lim_{t\to T_{i-1}^{+}}\bar{x}(t) = c_{i},\quad (2.2)$$
where $A_{q_{i}} \in {\Bbb{R}}^{\bar{n}\times\bar{n}}$, $B \in {\Bbb{R}}^{\bar{n}\times m}$ and $C_{q_{i}} \in {\Bbb{R}}^{p\times\bar{n}}$, and $c_{i} \in {\Bbb{R}}^{\bar{n}}$ are the initial conditions at each switching time $T_{i-1}$, $i \in {\Bbb{N}}$. We assume that the system matrices $A_{q_{i}}$ and $C_{q_{i}}$ have the specific structure (cf.
Narendra & Balakrishnan, 1994; Shorten & Narendra, 2002):
$$A_{q_{i}} = \overline{A}_{0} + \overline{A}_{1}\overline{D}(q_{i})\ \ \hbox{and}\ \ C_{q_{i}} = \overline{C}_{0} + \overline{C}_{1}\overline{D}(q_{i}),\quad (2.3)$$
where $\overline{A}_{0} \in {\Bbb{R}}^{\bar{n}\times \bar{n}}$, $\overline{C}_{0} \in {\Bbb{R}}^{p\times \bar{n}}$, $\overline{A}_{1} \in {\Bbb{R}}^{\bar{n}\times \hat{n}}$, $\overline{C}_{1} \in {\Bbb{R}}^{p\times \hat{n}}$ and $\overline{D}(q_{i}) \in {\Bbb{R}}^{\hat{n}\times\bar{n}}$. We also assume that:
H1 $\mathrm{rank}\,B = m$, $\mathrm{rank}\,C_{q_{i}} = p$ and $\mathrm{rank}\,\overline{C}_{1} = p$;
H2 $\overline{D}(q_{i})$ varies linearly with respect to $q_{i}$, namely: given two fixed locations $q_{a}, q_{b} \in \mathscr{Q}$ and a scalar $m \in {\Bbb{R}}$, then $\overline{D}(q_{a} + m q_{b}) = \overline{D}(q_{a}) + m\overline{D}(q_{b})$.
Let us now proceed to recall the implicit representation technique, as well as the implicit control law approach (which decouples the internal variable structure). We need this preliminary knowledge in order to simplify the presentation of our approach to the descriptor variable observation process.

2.1. Implicit representation

In Bonilla et al. (2015a), the authors showed that the state space representation $\mathfrak{R}^{ss}(A_{q_{i}}, B, C_{q_{i}})$, (2.2) and (2.3), is externally equivalent1 to the following implicit global representation $\mathfrak{R}^{ig}(\mathbf{E}, \mathbf{A}_{q_{i}}, \mathbf{B}, C)$:
$$\mathbf{E}\frac{{\mathrm{d}}}{{\mathrm{d}}t}x = \mathbf{A}_{q_{i}}x + \mathbf{B}u\ \ \hbox{and}\ \ y = Cx,\quad t\in(T_{i-1},\,T_{i}),\qquad \mathbf{E} = \left[\begin{array}{c} E\\ 0\end{array}\right],\ \mathbf{A}_{q_{i}} = \left[\begin{array}{c} A\\ D_{q_{i}}\end{array}\right],\ \mathbf{B} = \left[\begin{array}{c} B\\ 0\end{array}\right],\quad (2.4)$$
where $q_{i} \in \mathscr{Q}$, $\left[T_{i-1},T_i\right) \in \mathfrak{s}^{-1}(q_{i})$ and the maps $E:{\Bbb{R}}^{n}\to{\Bbb{R}}^{\bar{n}}$, $A:{\Bbb{R}}^{n}\to{\Bbb{R}}^{\bar{n}}$, $C:{\Bbb{R}}^{n}\to{\Bbb{R}}^{p}$ are defined as follows:
$$E = [\,\mathrm{I}\ \ 0\,],\quad A = [\,\overline{A}_{0}\ \ -\overline{A}_{1}\,],\quad C = [\,\overline{C}_{0}\ \ -\overline{C}_{1}\,],\quad D_{q_{i}} = [\,\overline{D}(q_{i})\ \ \mathrm{I}\,].$$
(2.5)
Note that $\mathscr{X}_{d} \approx {\Bbb{R}}^{n}$, $\underline{\mathscr{X}}_{eq} \approx {\Bbb{R}}^{\bar{n}}$, $\mathscr{U} \approx {\Bbb{R}}^{m}$ and $\mathscr{Y} \approx {\Bbb{R}}^{p}$; also note that $n = \bar{n} + \hat{n}$. The initial conditions at each interval $\left[T_{i-1},T_i\right)$ satisfy: $\lim_{t\to{T_{i-1}^{+}}}x(t) = x(T_{i-1}) = E_{q_{i}}^{r} c_i$. The behaviour associated with (2.4) and (2.5) is:
$$\mathfrak{B}_{ig} = \Big\{ (q_{i},\,u(\cdot),\,y(\cdot)) \in \mathscr{Q}\times\Bbb{U}\times\Bbb{Y}\ \Big\vert\ \exists\, c_{i}\in{\Bbb{R}}^{\bar{n}},\ \mathfrak{s}:\mathcal{I}\to\mathscr{Q},\ \hbox{s.t.:}$$
$$y(t) = \sum_{i\in{\Bbb{N}}} {\mathbf{1}}_{\mathscr{I}_{i}}(t)\, C\Big( \exp\big(E_{q_{i}}^{r}A(t-T_{i-1})\big)E_{q_{i}}^{r}c_{i} + \int_{0}^{t-T_{i-1}} \exp\big(E_{q_{i}}^{r}A(t-T_{i-1}-\tau)\big)\,E_{q_{i}}^{r}B\, u(T_{i-1}+\tau)\,{\mathrm{d}}\tau\Big),\ q_{i}\equiv\mathfrak{s}(\mathscr{I}_{i}),\ t\in{\Bbb{R}}^{+},\ \mathscr{I}_{i}\in\mathcal{I} \Big\}.\quad (2.6)$$
From (2.4), we extract the implicit rectangular representation, $\mathfrak{R}^{ir}(E, A, B, C)$:
$$E\frac{{\mathrm{d}}}{{\mathrm{d}}t}x = Ax+Bu\ \ \hbox{and}\ \ y = Cx;\quad t\in{\Bbb{R}}^{+}\setminus\{T_{i}\},\quad (2.7)$$
where $\lim_{t\to T_{i-1}^{+}}x(t) = E_{q_{i}}^{r} c_i$.

Remark 2.1 This implicit rectangular representation characterizes the fixed structure present in all the switching processes.

Remark 2.2 The implicit rectangular representation (2.7) has a physical sense if it has at least one solution. Bonilla et al. (2015a) have shown that the hypothesis:
H3 $\mathrm{Im}\,E + \mathrm{Im}\,B = {\Bbb{R}}^{\bar{n}}$
implies that for any initial condition $\lim_{t\to0^+}x(t) = x_0 \in {\Bbb{R}}^{n}$, there exists at least one trajectory $(u, x) \in \mathcal{C}^{\infty}({\Bbb{R}}^{+}, {\Bbb{R}}^{m}\times{\Bbb{R}}^{n})$ solution of (2.7); see mainly Geerts (1993) and Aubin & Frankowska (1991), and also the section Discussion about existence of solution of Bonilla et al. (2013).

Remark 2.3 From (2.4), we extract the algebraic constraints, $\mathfrak{R}^{alc}(0, D_{q_{i}}, 0)$,
$$0 = D_{q_{i}}x,\quad t\in[T_{i-1},\,T_{i}),\ i\in{\Bbb{N}},\ q_{i}\in\mathscr{Q}.\quad (2.8)$$
This set of algebraic constraints characterizes the switching process.

2.2. Implicit control law

In Bonilla & Malabre (2003), the authors considered the following problem.
Problem 2.1 (Internal variable structure decoupling problem, IVSDeP; Bonilla & Malabre, 2003) Consider a proper system described by the implicit global representation $\mathfrak{R}^{ig}(\mathbf{E}, \mathbf{A}_{q_{i}}, \mathbf{B}, C)$ (2.4) such that the geometric conditions:
H4 $\mathrm{Im}\,A + \mathrm{Im}\,B \subset \mathrm{Im}\,E$;
H5 ${\Bbb{R}}^{\bar{n} + \hat{n}} = (\mathrm{Im}\,E + \mathrm{Im}\,A + \mathrm{Im}\,B)\oplus \mathrm{Im}\,D_{q_{i}}$, $\forall\, q_{i}\in\mathscr{Q}$;
H6 ${\Bbb{R}}^{n} = \ker D_{q_{i}} \oplus \ker E$, $\forall\, q_{i}\in\mathscr{Q}$
are satisfied. Under which conditions does there exist a PD feedback control law $u = F^{*}_{p}x + F^{*}_{d}\,{\mathrm{d}}x/{\mathrm{d}}t$ such that the internal structure variation of the closed loop system is made unobservable?

Remark 2.4 Assumption H4, together with Assumptions H1 and H3, implies that there exists a proper solution for any initial condition. Indeed, for any initial condition $\lim_{t\to0^+}x(t) = x_0 \in {\Bbb{R}}^{n}$ and any admissible input $u \in \Bbb{U}$, one solution of the implicit rectangular representation (2.7) is:
$$x(t) = \exp\big(E^{r}At\big)x_{0} + \int_{0}^{t} \exp\big(E^{r}A(t-\tau)\big)\,E^{r}B\,u(\tau)\,{\mathrm{d}}\tau,$$
where $E^{r}$ is a full column rank matrix such that $({\Pi_{E}}E)E^{r} = \mathrm{I}$, and $\Pi_{E}$ is any natural projection on $\mathrm{Im}\,E$ (see the discussion of Definition 2 in Bonilla et al. (2015a)).

Remark 2.5 Assumption H5 guarantees that all the equations of the implicit global representation $\mathfrak{R}^{ig}(\mathbf{E}, \mathbf{A}_{q_{i}}, \mathbf{B}, C)$ are linearly independent, namely: $\mathrm{Im}\,E + \mathrm{Im}\,A + \mathrm{Im}\,B = {\Bbb{R}}^{\bar{n}}$ and $\mathrm{Im}\,D_{q_{i}} = {\Bbb{R}}^{\hat{n}}$.
Moreover, Assumptions H5 and H4 imply that: (i) $\mathrm{Im}\,E = {\Bbb{R}}^{\bar{n}}$ and $\mathrm{Im}\,D_{q_{i}} = {\Bbb{R}}^{\hat{n}}$, and that (ii) Assumption H3 is satisfied.

Remark 2.6 Assumption H6, together with H5 and H4, guarantees that all the internal structures are proper, namely there exist bases in ${\Bbb{R}}^{n}$ and in ${\Bbb{R}}^{\bar{n}}\times{\Bbb{R}}^{\hat{n}}$ such that (2.4) takes the form:
$$\left[\begin{array}{cc} \mathrm{I} & 0\\ 0 & 0\end{array}\right]\frac{{\mathrm{d}}}{{\mathrm{d}}t}\left[\begin{array}{c} \bar{x}\\ \hat{x}\end{array}\right] = \left[\begin{array}{cc} \overline{A} & \widehat{A}\\ 0 & \mathrm{I}\end{array}\right]\left[\begin{array}{c} \bar{x}\\ \hat{x}\end{array}\right] + \left[\begin{array}{c} \overline{B}\\ 0\end{array}\right]u,$$
that is to say, the LSS (2.2) only switches between proper systems (see Theorem 1 in Bonilla et al. (2015a)).

The solution of this problem was given by Theorems 14 and 15 of Bonilla & Malabre (2003), using the supremal $(A, E, B)$-invariant subspace contained in $\ker C$, $\mathscr{V}^{*} = \sup\{ \mathscr{V} \subset \ker C\ \vert\ A\mathscr{V} \subset E\mathscr{V} + \mathrm{Im}\,B \}$, which is the limit of the following non-increasing geometric algorithm:
$$\mathscr{V}^{0} = {\Bbb{R}}^{n};\qquad \mathscr{V}^{\mu+1} = \ker C \cap A^{-1}\big(E\mathscr{V}^{\mu} + \mathrm{Im}\,B\big),\quad \mu\geq 0.\quad (2.9)$$
$\mathscr{V}^{*}$ characterizes the maximal part of the implicit representation, $\mathfrak{R}^{ir}(E, A, B, C)$, which can be made unobservable with a suitable proportional and derivative descriptor variable feedback (see Verghese, 1981; Özçaldiran, 1985). The set of pairs $(F_{p},F_{d})$ such that $(A + BF_{p})\mathscr{V}^{*} \subset (E-BF_{d})\mathscr{V}^{*}$ is denoted $\mathbf{F}(\mathscr{V}^{*})$. The implicit control law proposed in Bonilla & Malabre (2003) is synthesized by the following three steps (see also Bonilla et al., 2015b): (i) locate the supremal $(A, E, B)$-invariant subspace contained in $\ker C$, $\mathscr{V}^{*}$; (ii) for making the internal structure variation unobservable, find a derivative feedback such that:
$$\mathscr{V}^{*} \supset \ker\big(E - BF_{d}^{*}\big);\quad (2.10)$$
and (iii) to decouple the variation of the internal structure at the output, find a proportional feedback such that:
$$(A+BF_{p}^{*})\,\mathscr{V}^{*} \subset (E-BF_{d}^{*})\,\mathscr{V}^{*}.$$
(2.11)
In order to synthesize the PD feedback,
$$u^{*} = F_{p}^{*}x + F_{d}^{*}\,{\mathrm{d}}x/{\mathrm{d}}t + r,\quad (2.12)$$
the following proper approximation is proposed in Bonilla et al. (2015b):
$$\begin{array}{rl} {\mathrm{d}}\bar{x}/{\mathrm{d}}t & = -(1/\varepsilon)\bar{x} + (1/\varepsilon)F_{d}^{*}x,\\ u & = -(1/\varepsilon)\bar{x} + \big((1/\varepsilon)F_{d}^{*} + F_{p}^{*}\big)x + r. \end{array}\quad (2.13)$$
Also, it is proved that there exists $\varepsilon^{*} > 0$ such that:
$$|y(t) - y^{*}(t)| \leq \delta\quad \forall\ \varepsilon\in(0,\,\varepsilon^{*}],\ \forall\ t\geq t^{*}(\delta),\quad (2.14)$$
where $t^{*}(\delta)$ is a fixed transient time, which depends on the chosen $\delta$, $y^{*}$ is the output when applying the control law (2.12), $y$ is the output when applying the proper approximation (2.13), and the closed loop system is BIBO-stable.

Remark 2.7 As can be shown, the considered proper approximation of the ideal PD control law decouples the variation of the internal structure at the output (i.e., makes it unobservable), and ensures BIBO stability. We shall now proceed to illustrate the internal variable structure decoupling control approach.

2.3. Illustrative example: part 1

In this article, we consider, as cases of study, two state space representations, $\mathfrak{R}^{ss}(A_{q_{i}}, B, C_{q_{i}})$, (2.2) and (2.3), where the matrices $\overline{A}_{0}$, $\overline{A}_{1}$, $B$, $\overline{C}_{0}$, $\overline{C}_{1}$ and $\overline{D}(q)$ are given as follows:
$$\overline{A}_{0} = \left[\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right],\ \overline{A}_{1} = \left[\begin{array}{c} 1\\ 1\end{array}\right],\ B = \left[\begin{array}{c} 0\\ 1\end{array}\right],\ \overline{C}_{0} = [\,0\ \ (1-d)\,],\ \overline{C}_{1} = [\,-d\,],\ \overline{D}(q_{i}) = [\,q_{1}\ \ q_{2}\,],\quad (2.15)$$
where
$$q = (q_{1},\,q_{2}) \in \mathscr{Q} = \{q_{a},\,q_{b},\,q_{c}\},\qquad q_{a} = (-1,\,-1),\ q_{b} = (-1,\,0),\ q_{c} = (-1,\,-2),\quad (2.16)$$
and the parameter $d$ will take the values $-1$ and $1$. From (2.3) and (2.15), the matrices $A_{q_{i}}$, $B$ and $C_{q_{i}}$ take the following form:
$$A_{q_{i}} = \left[\begin{array}{cc} q_{1} & q_{2}+1\\ q_{1}+1 & q_{2}\end{array}\right],\quad B = \left[\begin{array}{c} 0\\ 1\end{array}\right]\quad\hbox{and}\quad C_{q_{i}} = [\,-dq_{1}\ \ -(dq_{2}-(1-d))\,].\quad (2.17)$$
For a given pair, $q_{i} \in \mathscr{Q}$, we have the transfer function (cf. (2.5) of Bonilla et al. (2015a)):
$$F_{q_{i}}({\mathrm{s}}) = C_{q_{i}}({\mathrm{s}}\mathrm{I}-A_{q_{i}})^{-1}B = \frac{-\big((dq_{2}-(1-d))\,{\mathrm{s}}+q_{1}\big)}{({\mathrm{s}}+1)\big({\mathrm{s}}-(1+q_{1}+q_{2})\big)} = \left\{\begin{array}{ll} \dfrac{(q_{2}+2)\,{\mathrm{s}}-q_{1}}{({\mathrm{s}}+1)({\mathrm{s}}-(1+q_{1}+q_{2}))}, & \hbox{if } d=-1,\\[2mm] \dfrac{-(q_{2}{\mathrm{s}}+q_{1})}{({\mathrm{s}}+1)({\mathrm{s}}-(1+q_{1}+q_{2}))}, & \hbox{if } d=1. \end{array}\right.$$
(2.18)
For the three possible variants of the index $q_{i} \in \mathscr{Q}$, we have, for a given $d \in \{1,\,-1\}$, the following three possible behaviours:
$$\begin{array}{l} \mathfrak{B}_{q_{a}}^{\infty} = \left\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{i},{\Bbb{R}}^{2})\cap\ker\left[\,-1\ \ \big(\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+1\big)\,\right]\right\},\\[1mm] \mathfrak{B}_{q_{b}}^{\infty} = \left\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{j},{\Bbb{R}}^{2})\cap\ker\left[\,-\big((1-d)\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+1\big)\ \ \big(\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+1\big)\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}\,\right]\right\},\\[1mm] \mathfrak{B}_{q_{c}}^{\infty} = \left\{(u,y)\in\mathcal{C}^{\infty}(\mathscr{I}_{k},{\Bbb{R}}^{2})\cap\ker\left[\,-\big((1+d)\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+1\big)\ \ \big(\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+1\big)\big(\tfrac{{\mathrm{d}}}{{\mathrm{d}}t}+2\big)\,\right]\right\} \end{array}\quad (2.19)$$
for some disjoint intervals $\mathscr{I}_{i} = [T_{i-1},\, T_{i})$, $\mathscr{I}_{j} = [T_{j-1},\, T_{j})$, $\mathscr{I}_{k} = [T_{k-1},\, T_{k})$, $\mathscr{I}_{\ell} = [T_{\ell-1},\, T_{\ell})$, $i,\,j,\,k,\,\ell \in {\Bbb{N}}$.

Implicit representation. The time-dependent, autonomous switched system, described by (2.2), (2.16) and (2.17), is also described by the implicit global representation (2.4) with (cf. (4.6) of Bonilla et al. (2015a)): (2.20) Above the solid line of the differential and algebraic equation set of the implicit global representation (2.4) and (2.20), there is the implicit rectangular representation (2.7) with (cf. (4.7) of Bonilla et al. (2015a)): (2.21) Below the solid line of the differential and algebraic equation set of the implicit global representation (2.4) and (2.20), there is the algebraic constraint (2.8) with (cf. (4.8) of Bonilla et al. (2015a)):
$$D_{q_{i}} = [\,q_{1}\ \ q_{2}\ \ 1\,].\quad (2.22)$$
When splitting the implicit global representation (2.4) and (2.20) into the rectangular implicit description (2.7) and (2.21) and into the algebraic constraints (2.8) and (2.22), we get the fixed active structure of the system2, represented by (2.7) and (2.21).

Internal variable structure. In order to have a better understanding of the way that the implicit rectangular representation $\mathfrak{R}^{ir}(E, A, B, C)$ takes into account the so-called internal structure variation, in Bonilla et al.
(2015a) the Kronecker normal forms (see Gantmacher, 1977) of the pencils associated with the implicit global representation $\mathfrak{R}^{ig}(\mathbf{E}, \mathbf{A}_{q_{i}}, \mathbf{B}, C)$, (2.4) and (2.20), are compared with the Kronecker normal forms of the pencils associated with the implicit rectangular representation $\mathfrak{R}^{ir}(E, A, B, C)$, (2.7) and (2.21). It is shown there for our illustrative example that3:

The associated Kronecker blocks of $[\,{\mathrm{s}}\mathbf{E}-\mathbf{A}_{q_{i}}\,]$ are as follows (cf. (2.16) and (2.18)): (a) for $q_{2} = -1$, there are one ied, $[\,1\,]$, and two fed, $[\,{\mathrm{s}}-q_{1}\,]$ and $[\,{\mathrm{s}}+1\,]$; (b) for $q_{2} \neq -1$ and $q_{1} + q_{2} = -2$, there are one ied, $[\,1\,]$, and one fed, $[\,({\mathrm{s}}+1)^{2}\,]$; (c) for $q_{2} \neq -1$ and $q_{1} + q_{2} \neq -2$, there are one ied, $[\,1\,]$, and two fed, $[\,{\mathrm{s}}-1-q_{1}-q_{2}\,]$ and $[\,{\mathrm{s}}+1\,]$.

The associated Kronecker blocks of $[\,{\mathrm{s}}E-A\,]$ are one cmi, of index $1$, and one fed, $[\,{\mathrm{s}}+1\,]$.

From that, we realize that the internal variable structure of $\mathfrak{R}^{ig}(\mathbf{E}, \mathbf{A}_{q_{i}}, \mathbf{B}, C)$, (2.4) and (2.20), is mainly characterized by the invariant column minimal index block in $\mathfrak{R}^{ir}(E, A, B, C)$, (2.7) and (2.21). See the discussion of Example 4 in Bonilla et al. (2015a).

Implicit control law. To obtain the ideal PD feedback (2.12), we need $\mathscr{V}^{*}$. From (2.9) and (2.21), we get $\mathscr{V}^{*} = \ker C$. In order to satisfy (2.10), the derivative part of the control law has to contain the term $[\,0\ \ d\ \ -d\,]$. Indeed: $\ker\big(E - B\,[\,0\ \ d\ \ -d\,]\big) \subset \mathscr{V}^{*}$. In order to satisfy (2.11), the proportional part of the control law has to contain the term $[\,-1\ \ -(1-d)\tau\ \ (1-d\tau)\,]$, where $\tau$ is a positive real number.
Indeed: $(A+BF_{p}^{*})\,\mathscr{V}^{*} = (E-BF_{d}^{*})\,\mathscr{V}^{*}$. Thus, the proportional and derivative feedback is (cf. (4.5) of Bonilla et al. (2015b)):
$$u^{*} = [\,-1\ \ -(1-d)\tau\ \ (1-d\tau)\,]\,x + [\,0\ \ d\ \ -d\,]\frac{{\mathrm{d}}}{{\mathrm{d}}t}x + \big[\tfrac{1}{\tau}\big]r.\quad (2.23)$$
Applying the proportional and derivative feedback (2.23) to (2.4) and (2.20), we get the closed loop system described by the implicit global representation (cf. (4.6) of Bonilla et al. (2015b)): (2.24) where4 The proper approximation (2.13) is:
$$\begin{array}{rl} \frac{{\mathrm{d}}}{{\mathrm{d}}t}x_{4} & = -\big(\tfrac{1}{\varepsilon}\big)x_{4} + \big(\tfrac{1}{\varepsilon}\big)[\,0\ \ d\ \ -d\,]\,x,\\[1mm] u & = -\big(\tfrac{1}{\varepsilon}\big)x_{4} + \big[\,-1\ \ \big(\tfrac{d}{\varepsilon}-(1-d)\tau\big)\ \ \big(1-d\tau-\tfrac{d}{\varepsilon}\big)\,\big]\,x + \big[\tfrac{1}{\tau}\big]r,\end{array}\quad (2.25)$$
where $\varepsilon$ is a given sufficiently small positive constant. See Theorem 3 of Bonilla et al. (2015b) for stability details. It is now time to deal with the descriptor variable observation of rectangular implicit representations.

3. Descriptor variable observation

When one is synthesizing controllers for systems represented by means of implicit descriptions, a natural question is: if the descriptor variable $x$ is not accessible, how can we estimate it? The case of regular descriptions (where there are only finite and infinite elementary divisor blocks; see Gantmacher, 1977) has been widely studied (see, e.g., Dai, 1989; Duan, 2010). Recently, Berger & Reis (2015) (see also Trentelman et al., 2001; Berger & Reis, 2013; Berger et al., 2016, and the references cited in those articles) have tackled the observation problem for general implicit descriptions. With respect to the observer synthesis for implicit descriptions having column minimal indices blocks, there are some structural aspects which have to be taken into account; these kinds of blocks add to the implicit representation a non-uniqueness of solutions (see, e.g., Gantmacher, 1977; Lebret & Loiseau, 1994). This non-uniqueness reflects a degree-of-freedom, which enables one to describe variable structure systems (see Bonilla et al., 2015a, 2015b). But, concerning the realization of observers, some particular structural phenomena appear which have to be carefully studied.
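A minimal numerical sketch of the structural phenomenon at stake (a toy pencil, not the paper's example): a single column minimal index block makes the representation solvable but never uniquely solvable, since the pencil keeps full row rank for every frequency while retaining a nontrivial, frequency-dependent right kernel.

```python
import numpy as np

# A single column-minimal-index (cmi) block: E = [1 0], A = [0 1], i.e.
#   d/dt x1 = x2,
# one equation for two descriptor variables: x2 is completely free, so the
# representation is solvable but its solution is never unique.
E = np.array([[1.0, 0.0]])
A = np.array([[0.0, 1.0]])

# For every s, [sE - A] = [s, -1] keeps full row rank but has a nontrivial
# kernel spanned by (1, s): the pencil carries no finite or infinite
# elementary divisors, only one cmi block.
for s in (0.0, 1.0, -2.0, 3.5):
    pencil = s * E - A
    assert np.linalg.matrix_rank(pencil) == 1
    ker = np.array([[1.0], [s]])
    assert np.allclose(pencil @ ker, 0.0)
print("constant full row rank, s-dependent kernel: one cmi block")
```

This degree of freedom is exactly what the observer schemes of this section must reconstruct or circumvent.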
In this section, based on a structural study, we propose two different procedures for observing the descriptor variable in the presence of column minimal indices blocks, namely: (i) linear descriptor observers based on fault detection techniques, and (ii) indirect variable descriptor observers based on finite time structure detection techniques.

3.1. Linear descriptor observer

In order to motivate this discussion, let us consider the following illustrative example.

Illustrative example, part 2: Let us continue the illustrative example of Section 2.3. Let us consider the implicit rectangular representation (2.7) and (2.21), $\mathfrak{R}^{ir}(E, A, B, C)$:
$$\left[\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\end{array}\right]\frac{{\mathrm{d}}}{{\mathrm{d}}t}\left[\begin{array}{c} x_{c}\\ x_{\ell}\end{array}\right] = \left[\begin{array}{ccc} 0 & 1 & -1\\ 1 & 0 & -1\end{array}\right]\left[\begin{array}{c} x_{c}\\ x_{\ell}\end{array}\right] + \left[\begin{array}{c} 0\\ 1\end{array}\right]u,\qquad y = [\,0\ \ (1-d)\ \ d\,]\left[\begin{array}{c} x_{c}\\ x_{\ell}\end{array}\right],$$
which is also equivalent to the state space representation $\mathfrak{R}^{ss}(A_{r}, [{\it \Phi}_{r} \,\vert\, B_{r}], C_{r}, [D_{r} \,\vert\, 0])$:
$$\frac{{\mathrm{d}}x_{c}}{{\mathrm{d}}t} = \left[\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right]x_{c} + \left[\begin{array}{cc} -1 & 0\\ -1 & 1\end{array}\right]\left[\begin{array}{c} x_{\ell}\\ u\end{array}\right],\qquad y = [\,0\ \ (1-d)\,]\,x_{c} + [\,d\ \ 0\,]\left[\begin{array}{c} x_{\ell}\\ u\end{array}\right]\quad (3.1)$$
with the transfer functions:
$$T_{u}^{y} = \frac{(1-d)\,{\mathrm{s}}}{({\mathrm{s}}+1)({\mathrm{s}}-1)}\qquad\hbox{and}\qquad T_{x_{\ell}}^{y} = \frac{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}{({\mathrm{s}}+1)({\mathrm{s}}-1)}.\quad (3.2)$$
Doing the change of variable defined by the invertible map $\widetilde{T}$ (recall that $d \neq 0$), we get the implicit rectangular representation $\mathfrak{R}^{ir}(E\widetilde{T}, A\widetilde{T}, B, C\widetilde{T})$:
$$\left[\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\end{array}\right]\frac{{\mathrm{d}}}{{\mathrm{d}}t}\left[\begin{array}{c} x_{c_{\bar\circ}}\\ x_{\ell_{\bar\circ}}\end{array}\right] = \left[\begin{array}{ccc} 0 & 1/d & -1\\ 1 & (1/d-1) & -1\end{array}\right]\left[\begin{array}{c} x_{c_{\bar\circ}}\\ x_{\ell_{\bar\circ}}\end{array}\right] + \left[\begin{array}{c} 0\\ 1\end{array}\right]u,\qquad y = [\,0\ \ 0\ \ d\,]\left[\begin{array}{c} x_{c_{\bar\circ}}\\ x_{\ell_{\bar\circ}}\end{array}\right],$$
which is also equivalent to the state space representation $\mathfrak{R}^{ss}(A_{r_{\bar\circ}}, [{\it \Phi}_{r_{\bar\circ}} \,\vert\, B_{r_{\bar\circ}}], C_{r_{\bar\circ}}, [D_{r_{\bar\circ}} \,\vert\, 0])$:
$$\frac{{\mathrm{d}}x_{c_{\bar\circ}}}{{\mathrm{d}}t} = \left[\begin{array}{cc} 0 & 1/d\\ 1 & (1/d-1)\end{array}\right]x_{c_{\bar\circ}} + \left[\begin{array}{cc} -1 & 0\\ -1 & 1\end{array}\right]\left[\begin{array}{c} x_{\ell_{\bar\circ}}\\ u\end{array}\right],\qquad y = [\,0\ \ 0\,]\,x_{c_{\bar\circ}} + [\,d\ \ 0\,]\left[\begin{array}{c} x_{\ell_{\bar\circ}}\\ u\end{array}\right]\quad (3.3)$$
with the transfer functions:
$$\widetilde{T}_{u}^{y} = 0\qquad\hbox{and}\qquad \widetilde{T}_{x_{\ell_{\bar\circ}}}^{y} = d.\quad (3.4)$$
Now, let us note that:
$$\det\left[\begin{array}{c} -{\mathrm{s}}E+A\\ C\end{array}\right] = \det\left[\begin{array}{c} -{\mathrm{s}}E\widetilde{T}+A\widetilde{T}\\ C\widetilde{T}\end{array}\right] = \det\left[\begin{array}{ccc} -{\mathrm{s}} & 1/d & -1\\ 1 & -({\mathrm{s}}-1/d+1) & -1\\ 0 & 0 & d\end{array}\right] = d\,({\mathrm{s}}+1)\big({\mathrm{s}}-\tfrac{1}{d}\big).$$
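The closed-form expressions of this example can be checked numerically; the following sketch (for the case $d = -1$; the sample frequencies are arbitrary) evaluates the transfer functions (3.2) and the determinant identity pointwise.

```python
import numpy as np

d = -1.0  # the illustrative example with d = -1

# Matrices of the implicit rectangular representation (2.21) / (3.1).
E = np.array([[1.0, 0, 0], [0, 1, 0]])
A = np.array([[0.0, 1, -1], [1, 0, -1]])
C = np.array([[0.0, 1 - d, d]])
Ar = np.array([[0.0, 1], [1, 0]])
Br = np.array([[-1.0, 0], [-1, 1]])   # input columns: [x_l | u]
Cr = np.array([[0.0, 1 - d]])
Dr = np.array([[d, 0.0]])

for s in (0.7, 2.0, -3.0, 5.5):       # away from the poles s = +/-1
    # Transfer matrix of (3.1), evaluated at s.
    T = Cr @ np.linalg.inv(s * np.eye(2) - Ar) @ Br + Dr
    assert np.isclose(T[0, 1], (1 - d) * s / ((s + 1) * (s - 1)))            # u   -> y
    assert np.isclose(T[0, 0], (d * s - 1) * (s + 1) / ((s + 1) * (s - 1)))  # x_l -> y
    # Determinant identity: det [-sE+A; C] = d (s+1)(s - 1/d).
    M = np.vstack([-s * E + A, C])
    assert np.isclose(np.linalg.det(M), d * (s + 1) * (s - 1 / d))
print("transfer functions (3.2) and the determinant identity check out")
```

For $d = -1$ the determinant reduces to $-({\mathrm{s}}+1)^{2}$, whose Hurwitz roots are what allows the asymptotic observer below.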
We then have from Theorem 3.5 of Berger & Reis (2015) that for the implicit rectangular representation $\mathfrak{R}^{ir}(E, A, B, C)$ (even for $\mathfrak{R}^{ir}(E\widetilde{T}, A\widetilde{T}, B, C\widetilde{T})$): there exists an observer, there exists an asymptotic observer (when $d<0$), and there does not exist an exact observer. Now the question is: how to synthesize such an observer? If we tried to synthesize the observer with the state space representation (3.3) (see also (3.4)), we certainly could not estimate the descriptor variable component $\tilde{x}_{c}$. And if we tried to synthesize the observer with the state space representation (3.1), we could get $x_{\ell}$ by means of the inverse (only if $d<0$):
$$T_{y}^{x_{\ell}} = \frac{({\mathrm{s}}+1)({\mathrm{s}}-1)}{(d{\mathrm{s}}-1)({\mathrm{s}}+1)},$$
and then $x_{c}$ could be obtained with a Luenberger observer. From this illustrative example, we realize that for implicit rectangular representations it is important to select an accurate descriptor variable basis when trying to synthesize descriptor variable observers. Hereafter, we show how to perform a descriptor variable space decomposition which enables the synthesis of descriptor variable observers.

3.1.1 Observable decomposition

Let us assume that the implicit representation (1.1) satisfies Assumptions H4, H5, and:
H7 $\ker C\cap\ker E = \{0\}$;
H8 there exists a complementary subspace $\mathscr{X}_{c}$ of $\ker E$ such that its supremal $(A, E)$-invariant subspace contained in $\mathscr{X}_{c}\cap\ker C$,
$$\mathscr{N}_{\mathscr{X}_{c}}^{*} = \sup\{\mathscr{N}\subset\mathscr{X}_{c}\cap\ker C\ \vert\ A\mathscr{N}\subset E\mathscr{N}\},\quad (3.5)$$
is null, where $\mathscr{N}_{\mathscr{X}_{c}}^{*}$ is the limit of the following non-increasing algorithm (see Verghese, 1981; Malabre, 1987; Lewis, 1992):
$$\mathscr{V}_{0}^{0} = \mathscr{X}_{c};\qquad \mathscr{V}_{0}^{\mu+1} = (\mathscr{X}_{c}\cap\ker C)\cap A^{-1}\big(E\mathscr{V}_{0}^{\mu}\big),\quad \mu\geq 0.$$
(3.6)
H9 the pencil $\left[\begin{array}{c} {\mathrm{s}}E-A\\ C\end{array}\right]$ has constant rank for all ${\mathrm{s}} \in \Bbb{C}_{bad}$, where $\Bbb{C}_{bad} = \{ {\mathrm{s}} \in \Bbb{C} :\ \Re e\,{\mathrm{s}} \geq 0 \}$.

Remark 3.1 Assumptions H4 and H5 imply that there exists a complementary subspace of $\ker E$ such that (1.1) takes the following form (cf. Remarks 2.4 and 2.5):
$$[\,\mathrm{I}\ \ 0\,]\frac{{\mathrm{d}}}{{\mathrm{d}}t}\left[\begin{array}{c} \bar{x}_{c}\\ \bar{x}_{\ell}\end{array}\right] = [\,\overline{A}_{r}\ \ \overline{{\it \Phi}}_{r}\,]\left[\begin{array}{c} \bar{x}_{c}\\ \bar{x}_{\ell}\end{array}\right] + \overline{B}_{r}u,\qquad y = [\,\overline{C}_{r}\ \ \overline{D}_{r}\,]\left[\begin{array}{c} \bar{x}_{c}\\ \bar{x}_{\ell}\end{array}\right].\quad (3.7)$$
This implicit rectangular representation has precisely the form of the class of implicit rectangular representations which we want to observe, (2.7) and (2.5).

Remark 3.2 Hypothesis H7 states that the variation of structure, characterized by $\ker E$, is observable. This assumption is also known as observability at infinity (see, for example, Kuijper & Schumacher, 1992; Bonilla & Malabre, 1995; Berger et al., 2016 (their Proposition 6.1)).

Remark 3.3 Hypothesis H8 states that there exists a complementary observable subspace $\mathscr{X}_{c}$ of $\ker E$, namely: $\exists\, \mathscr{X}_{c} \subset \mathscr{X}_{d}:$ $\mathscr{X}_{d} = \mathscr{X}_{c} \oplus \ker E$ and $\mathscr{N}_{\mathscr{X}_{c}}^{*} = \{0\}$.

Remark 3.4 Hypothesis H9 states that the output decoupling zeros of (1.1) are Hurwitz (see Verghese, 1981; Aling & Schumacher, 1984). This hypothesis also guarantees the existence of an asymptotic observer (cf. Theorem 3.5 of Berger & Reis (2015)).

Let us first find an observable decomposition for the implicit representation (1.1).

Theorem 3.1 Under Assumptions H4, H5, H7–H9, there exists a complementary observable subspace $\mathscr{X}_{c}$ of $\ker E$, such that under the geometric decomposition $\mathscr{X}_{d} = \mathscr{X}_{c}\oplus\ker E$, (1.1) takes the form (3.7), or equivalently, $\mathfrak{R}^{ss}(\overline{A}_{r},\,{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$:
$$\frac{{\mathrm{d}}\bar{x}_{c}}{{\mathrm{d}}t} = \overline{A}_{r}\bar{x}_{c} + \overline{B}_{r}u + \overline{{\it \Phi}}_{r}\bar{x}_{\ell},\qquad y = \overline{C}_{r}\bar{x}_{c} + \overline{D}_{r}\bar{x}_{\ell}.$$
(3.8) Moreover, kerD¯r={0}, (3.9) The pair (C¯r,A¯r) is observable, (3.10) [(sI −A¯r)−Φ¯r−C¯r−D¯r] has constant rank for all s∈Cbad. (3.11) Remark 3.5 Theorem 3.1 is important since it enables us to find an observable state space representation (3.8). The free descriptor variable $${\bar{x}_{\ell}}$$ is handled as an apparent failure signal, which will be reconstructed by means of failure techniques. Let us also note that condition (3.11) states the minimum phase nature between the transfer of the free descriptor variable, $$\bar{x}_{\ell}$$, and the output, $$y$$. Remark 3.6 Let us note that condition $${\ker}\,{\overline{D}_{r}} = \{ 0 \}$$ implies the existence of a left inverse, $$\overline{D}_{r}^{\ell}$$, of $$\overline{D}_{r}$$, thus the description $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$ takes the form: dx¯c/dt =(A¯r−Φ¯r(D¯rℓC¯r))x¯c+[B¯rΦ¯rD¯rℓ][uy],x¯ℓ =−(D¯rℓC¯r)x¯c+[0D¯rℓ][uy]. Let us assume that the pair $$\left(\left(\overline{A}_{r} - \overline{{\it \Phi}}_{r}\,(\overline{D}_{r}^{\ell}\overline{C}_{r}) \right),\,(\overline{D}_{r}^{\ell}\overline{C}_{r})\right)$$ is observable, then one could synthesize a standard observer for $$\bar{x}_{c}$$ under the knowledge of the signals and $$\bar{x}_{\ell}$$, but one wants to observe the whole descriptor variable . Of course, the knowledge of any component, $$\bar{x}_{c}$$ or $$\bar{x}_{\ell}$$, determines the knowledge of the other. In fact, in Section 3.1.2, we first proceed to estimate $$\bar{x}_{\ell}$$, and in Section 3.1.3 we then observe $$\bar{x}_{c}$$. In order to prove Theorem 3.1, we need the following two technical lemmas proved in the appendix. Lemma 3.1 Let $$\mathscr{X}_{o}$$ be any complementary subspace of $${\ker}\,{E}$$, and let $$V: \mathscr{X}_{o} \to \mathscr{X}_{d}$$ be its insertion map. 
Then the supremal $$(A, E)$$-invariant subspace contained in $$\mathscr{X}_{o}\cap{{\ker}\,{C}}$$, $$\mathscr{N}_{\mathscr{X}_{o}}^{*}$$, is also geometrically characterized by: $$\mathscr{N}_{\mathscr{X}_{o}}^{*}=\sup\big\{\overline{\mathscr{N}}\subset\ker\overline{C}\ \big|\ \overline{A}\,\overline{\mathscr{N}}\subset\overline{\mathscr{N}}\big\},\quad(3.12)$$ where $$\overline{C} = C\, V$$ and $$\overline{A} = A\, V$$.

Lemma 3.2 The following three statements are equivalent: $$\exists\,\mathscr{X}_{c}\subset\mathscr{X}_{d}:\ \mathscr{X}_{d}=\mathscr{X}_{c}\oplus\ker E\ \text{ and }\ \mathscr{N}_{\mathscr{X}_{c}}^{*}=\{0\},\quad(3.13)$$ $$\exists\,{E}^{r}\in{\Bbb R}^{n\times\bar{n}}:\ E\,{E}^{r}={\mathrm{I}}\ \text{ and }\ \ker\begin{bmatrix}{\mathrm{s}}E-A\\ -C\end{bmatrix}{E}^{r}=\{0\}\ \ \forall\,{\mathrm{s}}\in\Bbb{C},\quad(3.14)$$ $$\exists\,K\in{\Bbb R}^{(n-\bar{n})\times n}:\ \ker\begin{bmatrix}E\\ K\end{bmatrix}=\{0\}\ \text{ and }\ \ker\begin{bmatrix}\overline{Y}_{0}\\ \overline{Y}_{1}\\ \vdots\\ \overline{Y}_{k}\end{bmatrix}=\{0\},\quad(3.15)$$ where $${E}^{r}$$ is a right inverse (in the given basis) of the insertion map $$V$$ of Lemma 3.1, $$K$$ is any matrix satisfying the stated kernel conditions, and the matrices $$\overline{Y}_{0}$$, $$\overline{Y}_{1}$$, $$\dots$$, $$\overline{Y}_{k}$$ are obtained with the Lewis structure algorithm (LSA) (see Lewis & Beauchamp, 1987 and Fig. 3 of Bonilla & Malabre, 1997): (3.16) where $$\mathbf{T}_{k}^{o}$$ is a maximal row compression, i.e., an invertible matrix for getting the surjective matrix $$\mathbf{X}_{k+1}$$. Moreover, $$\mathscr{X}_{c}={\mathrm{Im}}\,V=\ker K.\quad(3.17)$$

Proof of Theorem 3.1. Assumption H7 implies (3.9). From H8 and Lemma 3.2, we deduce (3.10). Statement (3.11) directly follows from (3.7) and H9. □

Illustrative example, part 3: Let us continue the illustrative example of Section 2.3. From (2.21), we deduce that: $$\begin{bmatrix}E\\ K\end{bmatrix}=\begin{bmatrix}1&0&0\\ 0&1&0\\ a&b&1\end{bmatrix},$$ where $$a$$ and $$b$$ are two real numbers to be found such that (3.15) is satisfied. Applying the LSA (3.16) to (2.21), with the maximal row compression $$\mathbf{T}_{1}^{o}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ -a&-b&0&1\\ ad&((1+b)d-1)&1&-d\end{bmatrix},$$ we get: $$\det V_{1}=\det\begin{bmatrix}\overline{Y}_{0}\\ \overline{Y}_{1}\end{bmatrix}=\begin{vmatrix}0&(1-d)&d\\ a&b&1\\ ((1+b)d-1)&ad&(1-(1+a+b)d)\end{vmatrix}=((1+a+b)d-1)\,(a-(1+b)d+1).$$ Condition (3.15) is then satisfied when $$(1 + a + b)\,d \neq 1$$ and $$a - (1 + b)\,d \neq -1$$. In order to get $$\det{V}_{1} = -1$$, let us choose $$(a,\,b) = (0,\,-1)$$; then (recall (3.17) and (2.21)): $$K=\begin{bmatrix}0&-1&1\end{bmatrix}\ \Rightarrow\ \mathscr{X}_{c}=\ker K={\mathrm{span}}\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\1\end{bmatrix}\right\},\quad \ker E={\mathrm{span}}\left\{\begin{bmatrix}0\\0\\1\end{bmatrix}\right\}.\quad(3.18)$$ Based on (3.18), let us perform the following change of variable: $$\begin{bmatrix}\bar{x}_{c}\\ \bar{x}_{\ell}\end{bmatrix}=\overline{T}^{-1}x,\quad\text{where }\ \overline{T}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&1&1\end{bmatrix}.\quad(3.19)$$ We then get (recall (2.7) and (2.21), and compare with (3.7)): $$E\overline{T}=\begin{bmatrix}{\mathrm{I}}&0\end{bmatrix},\quad A\overline{T}=\begin{bmatrix}\overline{A}_{r}&\overline{{\it \Phi}}_{r}\end{bmatrix}=\begin{bmatrix}0&0&-1\\ 1&-1&-1\end{bmatrix},\quad \overline{B}_{r}=B=\begin{bmatrix}0\\ 1\end{bmatrix},\quad C\overline{T}=\begin{bmatrix}\overline{C}_{r}&\overline{D}_{r}\end{bmatrix}=\begin{bmatrix}0&1&d\end{bmatrix}.\quad(3.20)$$ Let us note that: $$\overline{D}_{r}^{\ell}=\overline{D}_{r}^{-1}=1/d,\quad \det\begin{bmatrix}\overline{C}_{r}\\ \overline{C}_{r}\overline{A}_{r}\end{bmatrix}=\det\begin{bmatrix}0&1\\ 1&-1\end{bmatrix}=-1,\quad \det\begin{bmatrix}({\mathrm{s}}{\mathrm{I}}-\overline{A}_{r})&-\overline{{\it \Phi}}_{r}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}=\det\begin{bmatrix}{\mathrm{s}}&0&1\\ -1&{\mathrm{s}}+1&1\\ 0&-1&-d\end{bmatrix}=-(d{\mathrm{s}}-1)({\mathrm{s}}+1).\quad(3.21)$$ Also, the transfer functions related to the state space realization $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,\overline{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$ are: $$T_{u}^{y}=\frac{{\mathrm{s}}}{{\mathrm{s}}({\mathrm{s}}+1)}\quad\text{and}\quad T_{\bar{x}_{\ell}}^{y}=\frac{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}{{\mathrm{s}}({\mathrm{s}}+1)}.\quad(3.22)$$ Compare with (3.2) and (3.4). From (3.21), we note that: (i) the state space realization $${\mathfrak{R}^{ss}}(\overline{A}_{r},\,\overline{B}_{r},\,\overline{C}_{r},\,\overline{D}_{r})$$, (3.8) and (3.20), is observable and (ii) it has a non-Hurwitz zero when $$d > 0$$. From (3.22), we see that, in the case $$d<0$$, $$\bar{x}_{\ell}$$ can be obtained by means of the inverse $$\displaystyle T_{y}^{\bar{x}_{\ell}} = \frac{{\mathrm{s}}\,({\mathrm{s}}+1)}{(d{\mathrm{s}}-1)({\mathrm{s}}+1)}$$, and $$\bar{x}_{c}$$ is then obtained with a Luenberger observer.

Remark 3.7 Let us note that Hypothesis H8 is neither always satisfied nor implied by the other assumptions. Indeed, let us consider the following example: $$\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix}\frac{\mathrm{d}}{\mathrm{d}t}\!\begin{bmatrix}x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{bmatrix} = \begin{bmatrix}p_{1}&0&\beta\\ 0&p_{2}&\beta\end{bmatrix}\!\begin{bmatrix}x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{bmatrix}+\begin{bmatrix}1\\ 1\end{bmatrix}u,\quad y=\begin{bmatrix}1&0&\alpha\end{bmatrix}\!\begin{bmatrix}x_{c,1}\\ x_{c,2}\\ x_{\ell}\end{bmatrix}.$$ In order to satisfy Hypothesis H7, $${\ker}\,{C}\cap{\ker}\,{E} = \{ 0 \}$$, we need $${\alpha} \neq 0$$. For testing (3.15), let us take $$K=\begin{bmatrix}a&b&1\end{bmatrix}$$ and apply the Lewis structure algorithm (3.16): $$\det V_{1}=\det\begin{bmatrix}\overline{Y}_{0}\\ \overline{Y}_{1}\end{bmatrix}=\begin{vmatrix}1&0&\alpha\\ a&b&1\\ -p_{1}(a-\alpha^{-1})&-b\,p_{2}&-\beta(b+a-\alpha^{-1})\end{vmatrix}=\big(\beta-\alpha(p_{1}-p_{2})-(a+b)\alpha\beta+\alpha^{2}a(p_{1}-p_{2})\big)(b/\alpha);$$ thus $$b \neq 0$$; let us choose $$b = 1$$. Let us distinguish two cases: $$\begin{cases}\beta=0:&\det V_{1}=\alpha(a\alpha-1)(p_{1}-p_{2}),\\ \beta\neq0:&\det V_{1}=-\alpha\beta\neq0,\ \text{with }a=\alpha^{-1}.\end{cases}$$ We then conclude that in the case $${\beta} = 0$$, there is a change of basis making the proper part observable only if $$p_{1} \neq p_{2}$$.

3.1.2. Free descriptor variable observation Since the state representation (3.8) has the standard form used in fault detection, we have the following Beard-Jones filter for (3.8) (cf. Willsky, 1976; Isermann, 1984; Saberi et al., 2000): $$\mathrm{d}w/\mathrm{d}t = (\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})w-\overline{K}_{r}y+\overline{B}_{r}u,\quad \hat{y} = \overline{C}_{r}w,\quad(3.23)$$ where $$\overline{K}_{r}$$ is an output injection to be computed. From (3.8) and (3.23), we get the following remainder generator: $$\mathrm{d}e_{r}/\mathrm{d}t = \overline{A}_{\overline{K}_{r}}e_{r}-\overline{{\it \Phi}}_{\overline{K}_{r}}\bar{x}_{\ell},\quad r = \overline{C}_{r}e_{r}-\overline{D}_{r}\bar{x}_{\ell},\quad(3.24)$$ where $${e_r} = w - {\bar{x}_{c}}$$, $$r = \hat{y} - y$$, and: $$\overline{A}_{\overline{K}_{r}}=\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r}\quad\text{and}\quad \overline{{\it \Phi}}_{\overline{K}_{r}}=\overline{{\it \Phi}}_{r}+\overline{K}_{r}\overline{D}_{r}.\quad(3.25)$$ Let us note that: The observability of the pair $$(\overline{C}_{r},\, \overline{A}_{r})$$ (cf. (3.10)) implies the observability of the pair $$(\overline{C}_{r},\, \overline{A}_{\overline{K}_{r}})$$ (see Kailath, 1980; Wonham, 1985; Polderman & Willems, 1998). The minimum phase condition (3.11), on the output decoupling zeros of the implicit representation (3.7), implies that the transmission zeros of the state space representation (3.24) are also Hurwitz (see Verghese, 1981; Aling & Schumacher, 1984). The monic condition (3.9) implies that the relative degree of (3.24) is zero; it also implies the existence of a left inverse $$\overline{D}_{r}^{\ell}$$ of $$\overline{D}_{r}$$, namely: $$\overline{D}_{r}^{\ell}\, \overline{D}_{r} = {\mathrm{I}}$$.

Theorem 3.2 If $$\overline{K}_{r}=-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell},\quad(3.26)$$ then $${\mathrm{rank}}\begin{bmatrix}({\mathrm{s}}{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}={\mathrm{rank}}\,({\mathrm{s}}{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})+(n-\bar{n}).\quad(3.27)$$ Moreover, $$\|\hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\|\leq k_{\ell}\,e^{-\alpha_{\ell}t},\quad(3.28)$$ where $$\hat{x}_{\ell}(t)=\overline{D}_{r}^{\ell}\big(y(t)-\hat{y}(t)\big),\quad(3.29)$$ $$\alpha_{\ell}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24), and $${k}_{\ell}$$ is a positive constant depending on the initial conditions.

Proof of Theorem 3.2. From (3.25) and (3.26), we get: $$\begin{bmatrix}({\mathrm{s}}{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}=\begin{bmatrix}\big({\mathrm{s}}{\mathrm{I}}-(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r})\big)&0\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix},$$ which implies (3.27).
That is to say, the output injection $$\overline{K}_{r} = - \overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}$$ places all the poles of the remainder generator (3.24) at its Hurwitz transmission zeros. We then get (3.28). □ Theorem 3.2 states that the remainder generator (3.24) is self-inverted by means of a Hurwitz pole-zero cancelation.

3.1.3. State variable observation Once the free part of the descriptor variable, $$\bar{x}_{\ell}$$, which characterizes the variation of structure, has been observed, we only need to synthesize a standard state observer for the auxiliary state representation (recall (3.8)): $$\mathrm{d}\bar{x}_{c}/\mathrm{d}t = \overline{A}_{r}\bar{x}_{c}+\begin{bmatrix}\overline{B}_{r}&\overline{{\it \Phi}}_{r}\end{bmatrix}\!\begin{bmatrix}u\\ \hat{x}_{\ell}\end{bmatrix},\quad \big(y-\overline{D}_{r}\hat{x}_{\ell}\big) = \overline{C}_{r}\bar{x}_{c},$$ namely (see, e.g., Kailath, 1980; Wonham, 1985; Polderman & Willems, 1998): $$\mathrm{d}\hat{x}_{c}/\mathrm{d}t=(\overline{A}_{r}+\overline{K}_{o}\overline{C}_{r})\hat{x}_{c}-\overline{K}_{o}\big(y-\overline{D}_{r}\hat{x}_{\ell}\big)+\begin{bmatrix}\overline{B}_{r}&\overline{{\it \Phi}}_{r}\end{bmatrix}\!\begin{bmatrix}u\\ \hat{x}_{\ell}\end{bmatrix},\quad(3.30)$$ where $$\overline{K}_{o}$$ is an output injection such that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix.

3.1.4. Descriptor variable observation Theorem 3.3 Let us consider the implicit description (3.7), such that (3.9)–(3.11) are satisfied, together with the descriptor variable observer: $$\mathrm{d}\hat{x}_{c}/\mathrm{d}t = (\overline{A}_{r}+\overline{K}_{o}\overline{C}_{r})\hat{x}_{c}-\overline{K}_{o}y+\overline{B}_{r}u+(\overline{{\it \Phi}}_{r}+\overline{K}_{o}\overline{D}_{r})\hat{x}_{\ell},\quad \mathrm{d}w/\mathrm{d}t = (\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})w-\overline{K}_{r}y+\overline{B}_{r}u,\quad \hat{x}_{\ell}(t) = -\overline{D}_{r}^{\ell}\overline{C}_{r}w+\overline{D}_{r}^{\ell}y(t),\quad(3.31)$$ where $$\overline{K}_{r}$$ is defined by (3.26), and $$\overline{K}_{o}$$ is an output injection such that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix. Then $$\|\hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\|\leq k_{\ell}\,e^{-\alpha_{\ell}t}\quad\text{and}\quad \|\hat{x}_{c}(t)-\bar{x}_{c}(t)\|\leq k_{c}\,e^{-\alpha_{c}t},\quad(3.32)$$ where $$\alpha_{\ell}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24), $$\alpha_{c}$$ is a positive lower bound of the set of absolute values of the eigenvalues of the Hurwitz matrix $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$, and $${k}_{\ell}$$ and $${k}_{c}$$ are positive constants depending on the initial conditions.

Proof of Theorem 3.3. Equation (3.31) directly follows from (3.23), (3.29) and (3.30).
Equation (3.32) follows from (3.28) and from the fact that $$\overline{A}_{r}+\overline{K}_{o}\,{\overline{C}_{r}}$$ is a Hurwitz matrix. □

Corollary 3.1 Under the same conditions as in Theorem 3.3, if in addition $$\overline{K}_{o} = \overline{K}_{r} = - \overline{{\it \Phi}}_{r}\,\overline{D}_{r}^{\ell}$$, then the descriptor variable observer (3.31) takes the following form: $$\mathrm{d}\hat{x}_{c}/\mathrm{d}t = (\overline{A}_{r}+\overline{K}_{r}\overline{C}_{r})\hat{x}_{c}-\overline{K}_{r}y+\overline{B}_{r}u,\quad \hat{x}_{\ell} = -\overline{D}_{r}^{\ell}\overline{C}_{r}\hat{x}_{c}+\overline{D}_{r}^{\ell}y.\quad(3.33)$$ Moreover, $$\left\|\begin{bmatrix}\hat{x}_{c}(t)-\bar{x}_{c}(t)\\ \hat{x}_{\ell}(t)-\bar{x}_{\ell}(t)\end{bmatrix}\right\|\leq\bar{k}\,e^{-\bar{\alpha}t},\quad(3.34)$$ where $$\bar{\alpha}$$ is a positive lower bound of the set of absolute values of the transmission zeros of (3.24) and $$\bar{k}$$ is a positive constant depending on the initial conditions.

Proof of Corollary 3.1. From (3.33) and (3.26), we get: $$\mathrm{d}\tilde{x}_{c}/\mathrm{d}t = \big(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r}\big)\tilde{x}_{c},\quad \tilde{x}_{\ell} = -\overline{D}_{r}^{\ell}\overline{C}_{r}\tilde{x}_{c},$$ where $$\tilde{x}_{c} = \hat{x}_{c} - \bar{x}_{c}$$ and $$\tilde{x}_{\ell} = \hat{x}_{\ell} - \bar{x}_{\ell}$$. On the other hand, we get from (3.25) and (3.26): $$\begin{bmatrix}{\mathrm{I}}&\overline{K}_{r}\\ 0&{\mathrm{I}}\end{bmatrix}\begin{bmatrix}({\mathrm{s}}{\mathrm{I}}-\overline{A}_{r})&-\overline{{\it \Phi}}_{r}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}=\begin{bmatrix}({\mathrm{s}}{\mathrm{I}}-\overline{A}_{\overline{K}_{r}})&-\overline{{\it \Phi}}_{\overline{K}_{r}}\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}=\begin{bmatrix}\big({\mathrm{s}}{\mathrm{I}}-(\overline{A}_{r}-\overline{{\it \Phi}}_{r}\overline{D}_{r}^{\ell}\overline{C}_{r})\big)&0\\ -\overline{C}_{r}&-\overline{D}_{r}\end{bmatrix}.$$ We then conclude from (3.27) and (3.11). □

Illustrative example, part 4: Let us continue the illustrative example of Section 2.3, and consider the alternative observation procedure. The descriptor variable observer (3.33) for the example (3.7) and (3.20), in the case study $$d = -1$$, is (see (3.20), (3.21) and (3.26)): $$\mathrm{d}\hat{x}_{c}/\mathrm{d}t = \begin{bmatrix}0&-1\\ 1&-2\end{bmatrix}\hat{x}_{c}+\begin{bmatrix}1\\ 1\end{bmatrix}y+\begin{bmatrix}0\\ 1\end{bmatrix}u,\quad \hat{x}_{\ell} = \begin{bmatrix}0&1\end{bmatrix}\hat{x}_{c}+\begin{bmatrix}-1\end{bmatrix}y.\quad(3.35)$$

3.2. Indirect variable descriptor observer The linear descriptor observer studied in Section 3.1 only works when the transfer between the free component of the descriptor variable, $${x}_{\ell}$$, and the output, $$y$$, has a minimum phase nature. If this is not the case, one can try standard state observers with the additional help of an internal structure detection procedure. Namely, if we are able to know which internal structure takes place, we can use a standard state observer for each internal structure which is actually acting.
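Before developing the indirect scheme, the direct (fault detection based) design of Section 3.1 can be checked numerically on the example: the following sketch, using numpy, verifies the identities (3.21), the decoupling property of the gain (3.26) and the observer matrices of (3.35) for the minimum phase case $$d = -1$$ (the matrix names are ours, not the article's).

```python
import numpy as np

# Example matrices from the decomposition (3.20), minimum phase case d = -1.
d = -1.0
Ar   = np.array([[0., 0.], [1., -1.]])   # \bar{A}_r
Phir = np.array([[-1.], [-1.]])          # \bar{\Phi}_r
Cr   = np.array([[0., 1.]])              # \bar{C}_r
Dr   = np.array([[d]])                   # \bar{D}_r

# (3.21): observability of the pair (Cr, Ar).
assert np.isclose(np.linalg.det(np.vstack([Cr, Cr @ Ar])), -1.0)

# (3.21): det [sI - Ar, -Phir; -Cr, -Dr] = -(d s - 1)(s + 1), spot-checked.
for s in (0.0, 1.0, -2.0, 0.5):
    P = np.block([[s * np.eye(2) - Ar, -Phir], [-Cr, -Dr]])
    assert np.isclose(np.linalg.det(P), -(d * s - 1.0) * (s + 1.0))

# (3.26): Kr = -Phir Dr^l makes Phi_{Kr} = 0, decoupling the remainder
# generator (3.24) from the free variable and self-inverting it.
Drl = np.linalg.inv(Dr)                  # left inverse (Dr is 1x1 here)
Kr  = -Phir @ Drl
assert np.allclose(Phir + Kr @ Dr, 0.0)
AKr = Ar + Kr @ Cr
assert np.allclose(Kr,  [[-1.], [-1.]])          # gains appearing in (3.35)
assert np.allclose(AKr, [[0., -1.], [1., -2.]])  # observer matrix in (3.35)
# Characteristic polynomial s^2 + 2s + 1 = (s + 1)^2: both observer poles at
# the Hurwitz zero s = -1 of -(d s - 1)(s + 1) for d = -1 (Theorem 3.2).
assert np.isclose(np.trace(AKr), -2.0) and np.isclose(np.linalg.det(AKr), 1.0)
```

All assertions pass, confirming the Hurwitz pole-zero cancelation behind (3.33) and (3.35) for this example.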
But as shown hereafter, this is not a straightforward procedure, and some considerations have to be taken into account. Let us consider the illustrative example of Section 2.3, with $$d=1$$, and let us suppose that we are able to synthesize the ideal observers5 for each $$ {q_{}} = ({q_{{1}}}, {q_{2}}) \in \big\{ {q_{a}},\,{q_{b}},\,{q_{c}} \big\}$$, namely (see (2.4) and (2.20)): $$x_{3}=y;\quad q_{1}^{2}\neq q_{2}^{2}:\ \begin{bmatrix}q_{1}&q_{2}\\ q_{2}&q_{1}\end{bmatrix}\!\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=\begin{bmatrix}-1\\ (q_{1}+q_{2})-\mathrm{d}/\mathrm{d}t\end{bmatrix}y-\begin{bmatrix}0\\ q_{2}\end{bmatrix}u;\quad q_{1}^{2}=q_{2}^{2}:\ \begin{bmatrix}q_{1}&q_{2}\end{bmatrix}\!\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}=\begin{bmatrix}-1\end{bmatrix}y.$$ Thus, for each $${q_{}} \in \big\{{q_{a}},\,{q_{b}},\,{q_{c}}\big\}$$, we have the following particular estimated values of $$x$$ (see6 (2.16)): $$({x}_{1},\,{x}_{2},\,{x}_{3})=\begin{cases}(0,\ y,\ y)&\text{for }q=q_{a},\\ \big(y,\ (\mathrm{d}/\mathrm{d}t+1)y,\ y\big)&\text{for }q=q_{b},\\ \big(\tfrac{1}{3}(2\,\mathrm{d}/\mathrm{d}t+5)y-\tfrac{4}{3}u,\ -\tfrac{1}{3}(\mathrm{d}/\mathrm{d}t+1)y+\tfrac{2}{3}u,\ y\big)&\text{for }q=q_{c}.\end{cases}\quad(3.36)$$ Substituting the ideal observers (3.36) into the proportional and derivative descriptor variable feedback (2.25), we get the following particular control laws: $$\text{For }q=q_{a}:\ \begin{cases}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4},\\ u_{1}=\big(1-\frac{1}{\tau}\big)y-\frac{1}{\varepsilon_{c}}x_{4}(0)\,e^{-t/\varepsilon_{c}}+\frac{1}{\tau}r;\end{cases}\quad \text{for }q=q_{b}:\ \begin{cases}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4}+\frac{1}{\varepsilon_{c}}\frac{\mathrm{d}}{\mathrm{d}t}y,\\ u_{2}=-\big(\frac{1}{\tau}-\frac{1}{\varepsilon_{c}}\frac{\mathrm{d}}{\mathrm{d}t}\big)y-\frac{1}{\varepsilon_{c}}x_{4}+\frac{1}{\tau}r;\end{cases}\quad \text{for }q=q_{c}:\ \begin{cases}\frac{\mathrm{d}}{\mathrm{d}t}x_{4}=-\frac{1}{\varepsilon_{c}}x_{4}-\frac{1}{3\varepsilon_{c}}\big(4+\frac{\mathrm{d}}{\mathrm{d}t}\big)y+\frac{2}{3\varepsilon_{c}}u_{3},\\ u_{3}=\Big(\frac{\varepsilon_{c}(2+3\tau)+4}{\varepsilon_{c}+2}+\frac{2\varepsilon_{c}+1}{\varepsilon_{c}+2}\frac{\mathrm{d}}{\mathrm{d}t}\Big)y+\frac{3}{\varepsilon_{c}+2}x_{4}-\frac{3\tau\varepsilon_{c}}{\varepsilon_{c}+2}\,r.\end{cases}\quad(3.37)$$ Let us now note that if a given particular control law is applied to a wrongly detected structure, a temporarily unstable system can be obtained; see Table 1. Table 1 Cross closed-loop transfer functions obtained with the particular control laws of (3.37).
The denominators are computed with $$\varepsilon_{c} = 0.25$$ and $$\tau = 4$$.

$$y(\mathrm{s})/u_{1}(\mathrm{s})$$ — for $$q_{a}$$: $$\frac{1}{{\tau}\mathrm{s} + 1}$$; for $$q_{b}$$: $$\frac{1/{\tau}} {{\left(\mathrm{s}-\frac12\right)} \left(\mathrm{s}+\frac32\right)}$$; for $$q_{c}$$: $$\frac{(1/{\tau})(2{\mathrm{s}}+1)} {{\mathrm{s}}^{2}+(2/{\tau}+1){\mathrm{s}}+(1/{\tau}+1)}$$.

$$y(\mathrm{s})/u_{2}(\mathrm{s})$$ — for $$q_{a}$$: $$\frac{-(1/{\tau})({\varepsilon_{c}}\mathrm{s}+1)/(1-{\varepsilon_{c}})} {{\left(\mathrm{s}-\frac1{0.41}\right)} \left(\mathrm{s}+\frac1{1.46}\right)}$$; for $$q_{b}$$: $$\frac{({\varepsilon_{c}}{\mathrm{s}}+1)} {{\varepsilon_{c}}{\tau}\left({\mathrm{s}}^{2} + {\mathrm{s}} + \frac1{\tau} \right){\mathrm{s}} + ({\tau}{\mathrm{s}} + 1)}$$; for $$q_{c}$$: $$\frac{-({\varepsilon_{c}}/{\tau})(2{\mathrm{s}}+1)({\mathrm{s}}+1/{\varepsilon_{c}})/(2-{\varepsilon_{c}})} {{\left(\mathrm{s}-\frac1{0.50}\right)} \left(\left(\mathrm{s}+\frac1{1.34}\right)^2 +(0.30)^2 \right)}$$.

$$y(\mathrm{s})/u_{3}(\mathrm{s})$$ — for $$q_{a}$$: $$\frac{-(3/{\tau})({\varepsilon_{c}}{\mathrm{s}} + 1)/(1-{\varepsilon_{c}})} {{\left(\mathrm{s}-\frac1{0.20}\right)} \left(\mathrm{s}+\frac1{2.16}\right)}$$; for $$q_{b}$$: $$\frac{-(3/{\tau})({\varepsilon_{c}}{\mathrm{s}}+1)/(2+{\varepsilon_{c}})}{{\left(\mathrm{s}-\frac1{0.67}\right)} \left(\mathrm{s}+\frac1{0.55}\right) \left(\mathrm{s}+\frac1{2.19}\right)}$$; for $$q_{c}$$: $$\frac{\left({\varepsilon_{c}}{\mathrm{s}}+1 \right)}{{\varepsilon_{c}} \frac{\left({\tau}{\mathrm{s}}^{2}+(2+{\tau}){\mathrm{s}}+1\right){\mathrm{s}}} {(2{\mathrm{s}}+1)} + ({\tau}{\mathrm{s}} + 1)}$$.

From Table 1, we can assert that the internal structure detection has to be correctly achieved in finite time. Moreover, the finite detection time should be no larger than the fastest expected unstable time constant; in our case, the fastest unstable time constant is 0.20 s (cf. Table 1). In Bonilla et al. (2009), a finite-time adaptive structure detector is proposed, which can be used for synthesizing indirect variable descriptor observers. We shall now proceed to illustrate our variable descriptor observation schemes.

4. Illustrative example: part 5 Let us continue the illustrative example of Section 2.3, for the case $$d = 1$$. For this non-minimum phase case, we use an indirect variable descriptor observer based on the finite-time adaptive structure detection proposed in Bonilla et al. (2009).
For this, we first need to synthesize a standard state observer for each state space representation, $$\mathfrak{R}^{ss}(A_{q_{}}, B, C_{q_{}})$$, $$q_{}\in\mathscr{Q}$$, namely (see (2.2), (2.3) and (2.17); cf. (3.36)): Observer for $$q_{} = {q_{a}}$$: $$\tilde{x}_{1,1}=0;\quad \tilde{x}_{2,1}=y;\quad \tilde{x}_{3,1}=y.\quad(4.1)$$ Observer for $$q_{} = q_{b}$$: $$\frac{\mathrm{d}}{\mathrm{d}t}\!\begin{bmatrix}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{bmatrix}=\begin{bmatrix}-1&1\\ 0&0\end{bmatrix}\!\begin{bmatrix}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}u+\begin{bmatrix}-1\\ -1\end{bmatrix}\!\left(\begin{bmatrix}1&0\end{bmatrix}\!\begin{bmatrix}\tilde{\tilde{x}}_{1}\\ \tilde{\tilde{x}}_{2}\end{bmatrix}-y\right),\quad \tilde{x}_{1,2}=y;\ \tilde{x}_{2,2}=\tilde{\tilde{x}}_{2};\ \tilde{x}_{3,2}=y.\quad(4.2)$$ Observer for $$q_{} = q_{c}$$: $$\frac{\mathrm{d}}{\mathrm{d}t}\!\begin{bmatrix}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{bmatrix}=\begin{bmatrix}-1&-1\\ 0&-2\end{bmatrix}\!\begin{bmatrix}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}u+\begin{bmatrix}1/3\\ 1/3\end{bmatrix}\!\left(\begin{bmatrix}1&2\end{bmatrix}\!\begin{bmatrix}\tilde{x}_{1,3}\\ \tilde{x}_{2,3}\end{bmatrix}-y\right),\quad \tilde{x}_{3,3}=y.\quad(4.3)$$

Finite-time adaptive structure detector. The finite-time adaptive structure detector proposed in Bonilla et al. (2009) is composed of three stages: (i) discriminating filters, (ii) $$z$$-filters and (iii) an adaptive structure detector.

(i) Discriminating filters: The aim of this first stage is to synthesize a bank of filters whose outputs remain null when the active internal structure is detected. From (2), we deduce (cf. (2) of Bonilla et al. (2009)): $$(\mathrm{d}/\mathrm{d}t+\beta_{d})\!\begin{bmatrix}\chi_{1}(t)\\ \chi_{2}(t)\\ \chi_{3}(t)\end{bmatrix}=-\begin{bmatrix}\varepsilon_{d}^{2}\,y_{1}(t)\\ \varepsilon_{d}^{3}\,y_{2}(t)\\ \varepsilon_{d}^{3}\,y_{3}(t)\end{bmatrix},\quad(4.4)$$ $$(\varepsilon_{d}\,\mathrm{d}/\mathrm{d}t+1)\,y_{1}(t)=\chi_{1}(t)+\big(-u(t)+(\mathrm{d}/\mathrm{d}t+1)y(t)\big),\quad(4.5)$$ $$\big((\varepsilon_{d}\,\mathrm{d}/\mathrm{d}t+\sqrt{2}/2)^{2}+1/2\big)\,y_{2}(t)=\chi_{2}(t)+\big(-u(t)+(\mathrm{d}/\mathrm{d}t+1)(\mathrm{d}/\mathrm{d}t)y(t)\big),\quad \big((\varepsilon_{d}\,\mathrm{d}/\mathrm{d}t+\sqrt{2}/2)^{2}+1/2\big)\,y_{3}(t)=\chi_{3}(t)+\big(-(2\,\mathrm{d}/\mathrm{d}t+1)u(t)+(\mathrm{d}/\mathrm{d}t+1)(\mathrm{d}/\mathrm{d}t+2)y(t)\big).\quad(4.6)$$

(ii) $$z$$-Filters: The aim of this second stage is to provide a set of functions which memorize the time evolution of the absolute values of the outputs of the previous stage; these filters also act as a forgetting factor (cf. (7) of Bonilla et al. (2009)): $$\tau_{z}\,\mathrm{d}z_{k}/\mathrm{d}t(t)+z_{k}(t)=|y_{k}(t)|,\ k\in\{1,2,3\}.\quad(4.7)$$

(iii) Adaptive structure detector: This third stage is an adaptive structure detector based on a projected gradient method, whose aim is to produce a vector whose components act as logic variables, namely only one component remains different from zero, and that component corresponds to the active internal structure (cf. (9) of Bonilla et al. (2009))7: $$\mathrm{d}\hat{q}_{k}(t)/\mathrm{d}t=-\rho_{1}\,\hat{q}_{k}(t)\,\frac{z_{k}^{2}(t)}{\alpha_{k}^{2}+z_{k}^{2}(t)}+\rho_{2}\,\gamma(\hat{q})\,e_{|g(\hat{q})|},\quad(4.8)$$ where $$k\in\{1, 2, 3\}$$, and $${\mathit g}(\hat{q}) = \rho_{0}^2 - \sum\nolimits_{k=1}^{3}\hat{q}_{k}^2$$ defines a hyper-sphere. The constants $$\rho_1$$ and $$\rho_2$$ are the positive gains of the algorithm (attraction to the hyper-sphere) and of its projection term (repulsion to the hyper-sphere), respectively, and the $${\alpha}_{k}$$ are positive scale factors. $$\gamma(\cdot)$$ is a hysteresis switch which switches on at the point $$\delta^{2}$$ and switches off at the point $$0$$; when on, its output is $$1$$, and when off, its output is $$0$$. From the fifth section of Bonilla et al. (2009), we have the following synthesis procedure: (4.9)

Descriptor variable observer. For synthesizing the descriptor variable observer, it is only necessary to select the correct standard state observer with the help of the vector $$\hat{q}_{}$$. For this, let $$Sw_{k} = \gamma_{Sw}(\hat{q}_{k})$$, $$k\in \{1, 2, 3\}$$, where $$\gamma_{Sw}(\cdot)$$ is a hysteresis switch with $$\sqrt{\rho_{0}^{2}-{\beta}^{\ast}\delta^{2}}$$ as the switch-on point and $$\rho_{0}/\sqrt{n}$$ as the switch-off point; its output is $$1$$ when on and $$0$$ when off. The descriptor observer is then: $$\hat{x}_{k}=\sum_{j=1}^{3}\hat{x}_{k,j}\,Sw_{j},\quad k\in\{1,2,3\}.\quad(4.10)$$

4.1. Numerical simulation Two MATLAB® numerical simulations were performed, for $$d = -1$$ and for $$d = 1$$. The behaviours $$\mathfrak{B}_{q_{i}}^{\infty}$$ take place as follows (recall (2.19)): in the time interval $$[0, 50]$$, $$\mathfrak{B}_{q_{a}}^{\infty}$$ takes place; in the time interval $$[50, 100]$$, $$\mathfrak{B}_{q_{b}}^{\infty}$$; and in the time interval $$[100, 150]$$, $$\mathfrak{B}_{q_{c}}^{\infty}$$. We apply the P.D. feedback (2.25), with the choice $$\tau = 4$$ and $${\varepsilon}_{c} = 0.25$$, using the observed descriptor variable $$\hat{x}$$ instead of the descriptor variable $${x}$$.
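The $$z$$-filters (4.7) and the hysteresis switches $$\gamma(\cdot)$$, $$\gamma_{Sw}(\cdot)$$ described above have straightforward realizations. A minimal sketch follows; the residual trace, the structure-change time at $$t = 1$$ s and the threshold values are illustrative assumptions for the demonstration, not data from the article's simulations.

```python
import numpy as np

class Hysteresis:
    """Hysteresis switch: output 1 once the input rises to the switch-on
    level, back to 0 once it falls to the switch-off level."""
    def __init__(self, on_level, off_level):
        self.on_level, self.off_level, self.state = on_level, off_level, 0

    def __call__(self, v):
        if self.state == 0 and v >= self.on_level:
            self.state = 1
        elif self.state == 1 and v <= self.off_level:
            self.state = 0
        return self.state

def z_filter(y, tau_z, dt):
    """Forward-Euler integration of tau_z dz/dt + z = |y_k(t)|, z(0) = 0."""
    z, out = 0.0, []
    for v in y:
        z += dt * (abs(v) - z) / tau_z
        out.append(z)
    return np.array(out)

# Hypothetical discriminating-filter residual: null while the tested
# structure is active, equal to 1 after a structure change at t = 1 s.
dt, tau_z = 1e-3, 0.1
t  = np.arange(0.0, 2.0, dt)
yk = np.where(t < 1.0, 0.0, 1.0)
zk = z_filter(yk, tau_z, dt)

gamma = Hysteresis(on_level=0.25, off_level=0.0)  # delta^2 = 0.25 (delta = 0.5)
sw = np.array([gamma(z) for z in zk])

assert sw[0] == 0 and sw[-1] == 1
# z crosses the threshold about tau_z * ln(4/3) ~ 0.029 s after the change,
# illustrating the fast, memory-based detection of the mismatch.
t_detect = t[np.argmax(sw)]
assert 1.02 < t_detect < 1.04
```

The forgetting-factor role of (4.7) is visible here: $$z_k$$ integrates the absolute residual with time constant $$\tau_z$$, so short transients do not trip the switch while a sustained mismatch does.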
The reference $$r$$ has been chosen as follows (see Definition 2.4.5 of Polderman & Willems (1998)): $$\phi(t)=\begin{cases}-\,e^{-\frac{1}{1-(t')^{2}}},&t\in A=\left(\tfrac{1}{6},\tfrac{2}{6}\right),\ t'=12t-3,\\[2pt] -\,e^{-\frac{1}{1-(t'')^{2}}},&t\in B=\left(\tfrac{4}{6},\tfrac{5}{6}\right),\ t''=-12t+9,\\[2pt] 0,&t\in{\Bbb R}\setminus(A\cup B),\end{cases}\qquad r(t)=\int_{0}^{t}\Big(\sum_{i=0}^{3}(-1)^{i}\,\phi\big(\tfrac{2}{75}\sigma-i\big)\Big)\,\mathrm{d}\sigma,\quad t\in[0,150].$$ The model matching error is computed as follows: $$|y(t)-y^{*}(t)|=\Big|\,y(t)-\int_{0}^{t}\tfrac{1}{\tau}\,e^{-\frac{1}{\tau}(t-\sigma)}\,r(\sigma)\,\mathrm{d}\sigma\,\Big|.$$

4.1.1. Observers based on fault detection For the case $$d = -1$$, we apply the descriptor variable observer based on fault detection techniques, (3.33). The observed descriptor variable is: $$\hat{x}=T\begin{bmatrix}\hat{z}_{1}\\ \hat{z}_{2}\\ \hat{\varphi}\end{bmatrix}.\quad(4.11)$$ In Fig. 1, we show the numerical simulations for this minimum phase case. In order to appreciate the performance of the remainder generator (3.24), in this simulation we have set a non-zero initial condition.

Fig. 1. Numerical simulations for the minimum phase case $$d=-1$$. (a) Output, $$y$$. (b, c) Model matching error, $$\left| y(t)-{{y}^{*}}(t) \right|$$. (d, e) Control input, $$u$$. (f, g) Observation error, $${{\left\| \hat{x}-x \right\|}_{2}}$$.

4.1.2. Observers based on adaptive structure detection For the case $$d = 1$$, we apply the descriptor variable observer based on adaptive structure detection, (4.1)–(4.8) and (4.10). We have chosen $$\varepsilon_{d} = 0.01$$, $$\tau_{z} = 0.1$$ and $$t^{\ast} = 0.1$$ (recall the unstable modes of Table 1). Following the procedure proposed in Bonilla et al. (2009), we set (recall (4.9)): $$n = 3$$, $$\rho_{0} = 2$$, $${\beta}^{\ast} = 2$$, $$\delta = 0.5$$, $${\alpha}_{k} = 1$$ ($$k = 1,\,2,\,3$$) and $$a = 0.09$$; we thus obtain $$\rho_{1} = 1592.44$$ and $$\rho_{2} = 3092.98$$. In Figs 2 and 3, we show the numerical simulations for this non-minimum phase case.
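The smooth reference $$r$$ used in both simulations is easy to reproduce; the sketch below builds the $$C^{\infty}$$ bump $$\phi$$ and integrates it numerically, assuming the interval reading $$A=(1/6,\,2/6)$$, $$B=(4/6,\,5/6)$$ and the time scaling $$(2/75)\sigma-i$$ stated above.

```python
import numpy as np

def phi(t):
    """C-infinity bump phi(t): two mollifier lobes supported on
    A = (1/6, 2/6) and B = (4/6, 5/6), zero elsewhere."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    a = (t > 1/6) & (t < 2/6)
    tp = 12.0 * t[a] - 3.0                 # t' maps A onto (-1, 1)
    out[a] = -np.exp(-1.0 / (1.0 - tp**2))
    b = (t > 4/6) & (t < 5/6)
    ts = -12.0 * t[b] + 9.0                # t'' maps B onto (-1, 1)
    out[b] = -np.exp(-1.0 / (1.0 - ts**2))
    return out

# r(t) = int_0^t sum_{i=0}^{3} (-1)^i phi((2/75) sigma - i) d sigma,
# approximated with the trapezoidal rule on [0, 150].
dt = 0.01
s = np.arange(0.0, 150.0, dt)
integrand = sum((-1)**i * phi((2.0/75.0) * s - i) for i in range(4))
r = np.concatenate([[0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1])) * dt])

assert np.isclose(phi(0.25)[0], -np.exp(-1.0))   # mid-bump value on A
assert phi(0.0)[0] == 0.0 and phi(0.5)[0] == 0.0  # compact support
assert r[0] == 0.0 and np.all(np.isfinite(r)) and r.min() < 0.0
```

With the alternating signs $$(-1)^{i}$$, each 50 s interval of $$[0,150]$$ receives one pair of smooth transitions, which matches the switching instants of the behaviours at $$t=50$$ and $$t=100$$.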
We shall now proceed to conclude our exposition.

Fig. 2. Numerical simulations for the non-minimum phase case $$d=1$$. (a) Output, $$y$$. (b) Model matching error, $$\left| y(t)-{{y}^{*}}(t) \right|$$. (c) Control input, $$u$$. (d–f) Observation error, $${{\left\| \hat{x}-x \right\|}_{2}}$$.

Fig. 3. Numerical simulations for the non-minimum phase case $$d=1$$. (a, b) Switch detecting $${q_{a}}$$. (c, d) Switch detecting $${q_{b}}$$. (e, f) Switch detecting $${q_{c}}$$.

5. Conclusion In this article, we have tackled the descriptor variable observation problem for implicit descriptions having column minimal indices blocks. We are mainly concerned with the synthesis procedure and not with the theoretical aspects, as in Berger & Reis (2015). We have considered two procedures: (i) linear descriptor observers, based on fault detection techniques, and (ii) indirect variable descriptor observers, based on finite-time structure detection techniques. The first proposition is based on fault detection techniques. This observer is composed of a Beard-Jones filter, whose aim is to observe the existing degree-of-freedom in rectangular implicit representations. Notice that after the initial transient, this observer remains insensitive to the switching events (see Fig. 1(f) and (g)); this is the case because the observer is based on the fault detection of the continuous linear system (3.8).
Since this observation is accomplished by a pole-zero cancelation, this technique is reserved for minimum phase systems, with respect to the output—degree-of-freedom transfer, namely for implicit rectangular representations having Hurwitz output decoupling zeros. The second proposition is based on adaptive structure detection. The adaptive structure detection is guaranteed in finite time (see Figs 2(e) and (f) and 3), avoiding possible stability troubles due to temporarily unstable closed loop systems during the detection procedure. We are mainly working with implicit descriptions having column minimal indices blocks, which implies the existence of more unknown variables than equations; when a solution exists, it is not unique. The non-uniqueness of solutions introduces some interesting pathologies into control theory. For example, in Bonilla et al. (2013), we have shown that in systems represented by column minimal indices, it is possible to have reachable systems without any control. This phenomenon is possible because of the existence of the free variable $$\bar{x}_{\ell}$$, which acts as an internal control signal (see Section 4.3 of Bonilla et al. (2013)). Since unobservability is the dual concept of reachability, pathologies will then also appear in the presence of column minimal indices blocks. This pathology is the loss of observability in the change-of-basis procedure, since the free variable $$\bar{x}_{\ell}$$ acts as an internal state feedback there; so, one must be careful when changing bases. This is pointed out in Theorem 3.1, and we show in Lemma 3.2 how to find complementary observable subspaces. We must point out that our descriptor variable observation proposal is intended for implicit descriptions, and therefore differs from the well-known observer design method proposed in Tanwani et al. (2013) (see also Biehn et al., 2001).
In that case, the observer is focused on the estimation of the state vector of switched linear systems with state jumps (corresponding to the acting switching signals). In the observation approach that we followed here (related to an implicit description of the concerned system), we estimate the descriptor variable affected by the system's structure variation. Our observer design methods (i.e., the approach based on linear descriptor observers, and the one based on indirect variable descriptor observation) can then be applied to a broad class of linear systems, including switched systems.

References

Aling H. & Schumacher J. M. (1984) A nine-fold canonical decomposition for linear systems. Int. J. Control, 39, 779–805.
Aubin J. P. & Frankowska H. (1991) Viability kernels of control systems. Nonlinear Synthesis (Byrnes C. I. & Kurzhanski A. eds). Progress in Systems and Control Theory, vol. 9. Boston: Birkhäuser, pp. 12–33.
Berger T. & Reis T. (2013) Controllability of linear differential-algebraic systems—a survey. Surveys in Differential-Algebraic Equations I (Ilchmann A. & Reis T. eds). Differential-Algebraic Equations Forum. Berlin-Heidelberg: Springer, pp. 1–61.
Berger T. & Reis T. (2015) Observers and dynamic controllers for linear differential-algebraic systems. Submitted; preprint available from the website of the authors: www.math.unihamburg.de/home/reis/BergReis15.pdf.
Berger T., Reis T. & Trenn S. (2016) Observability of linear differential-algebraic systems: a survey. Surveys in Differential-Algebraic Equations IV (Ilchmann A. & Reis T. eds). Differential-Algebraic Equations Forum. Berlin-Heidelberg: Springer, pp. 161–219.
Biehn N., Campbell S. L., Nikoukhah R. & Delebecque F. (2001) Numerically constructible observers for linear time-varying descriptor systems. Automatica, 37, 445–452.
Bonilla M., Lebret G., Loiseau J. J. & Malabre M. (2013) Simultaneous state and input reachability for linear time invariant systems. Linear Algebra Appl., 439, 1425–1440.
Bonilla M. & Malabre M. (1991) Variable structure systems via implicit descriptions. First European Control Conference, Grenoble, Juillet, Hermès, Paris, 1, 403–408.
Bonilla M. & Malabre M. (1994) Geometric characterization of Lewis structure algorithm. Circuits Syst. Signal Process., 13, 255–272.
Bonilla M. & Malabre M. (1995) Geometric minimization under external equivalence for implicit descriptions. Automatica, 31, 897–901.
Bonilla M. & Malabre M. (1997) Structural matrix minimization algorithm for implicit descriptions. Automatica, 33, 705–710.
Bonilla M. & Malabre M. (2003) On the control of linear systems having internal variations. Automatica, 39, 1989–1996.
Bonilla M., Malabre M. & Azhmyakov V. (2015a) An implicit systems characterization of a class of impulsive linear switched control processes. Part 1: modeling. Nonlinear Anal. Hybrid Syst., 15, 157–170.
Bonilla M., Malabre M. & Azhmyakov V. (2015b) An implicit systems characterization of a class of impulsive linear switched control processes. Part 2: control. Nonlinear Anal. Hybrid Syst., 18, 15–32.
Bonilla M., Martínez J. C., Pacheco J. & Malabre M. (2009) Matching a system behavior within a known set of models: a quadratic optimization based adaptive solution. Int. J. Adapt. Control Signal Process., 23, 882–906.
Dai L. (1989) Singular Control Systems. Lecture Notes in Control and Information Sciences, vol. 118. New York: Springer.
Duan G.R. (2010) Analysis and Design of Descriptor Linear Systems . Springer : New York . Gantmacher F.R. (1977) The Theory of Matrices , vol. II . Chelsea : New York . Geerts T. (1993) Solvability conditions, consistency, and weak consistency for linear differential-algebraic equations and time-invariant singular systems: the general case. Linear Algebra Appl. , 181 , 111 – 130 . Google Scholar CrossRef Search ADS Isermann R. (1984) Process fault detection based on modeling and estimation methods—a survey. Automatica , 20 , 387 – 404 . Google Scholar CrossRef Search ADS Kailath T. (1980) Linear Systems . Englewoods Cliffs , NJ: Prentice-Hall . Kuijper M. & Schumacher J. M. (1992) . Minimality of descriptor representations under external equivalence. Automatica , 27 , 985 – 995 . Google Scholar CrossRef Search ADS Lebret G. & Loiseau J. J. (1994) Proportional and proportional-derivative canonical forms for descriptor systems with outputs. Automatica , 30 , 847 – 864 . Google Scholar CrossRef Search ADS Lewis F. L. (1992) A tutorial on the geometric analysis of linear time-invariant implicit systems. Automatica , 28 , 119 – 137 . Google Scholar CrossRef Search ADS Lewis F. L. & Beauchamp G. (1987) Computation of subspaces for singular systems. Proceedings of MTNS’87 , Phoenix, AZ . Liberzon D. (2003) . Switching in Systems and Control . Systems and Control: Foundations & Applications . Boston, MA : Birkhauser . Liret F. & Martinais D. (1997) . Algèbre 1re anníe . Paris, France : Dunod . Malabre M. (1987) More geometry about singular systems. Proceedings of 26th IEEE Conference on Decision and Control , Los Angeles , 9 – 11 December 1987, Library of Congress, Catalog Card Number: 79-640961, IEEE, Catalog Number: 87 CH 2505-6 , pp. 1138 – 1139 . Narendra K. S. & Balakrishnan J. (1994) A common Lyapunov function for stable LTI systems with commuting $$A$$-matrices. IEEE-TAC , 39 , 2469 – 2471 . Özçaldiran K. (1985) Control of descriptor systems. Ph.D. 
Thesis , Georgia Institute of Technology . Polderman J. W. & Willems J. C. (1998) Introduction to Mathematical Systems Theory: A Behavioral Approach . New York : Springer . Rosenbrock H.H. (1970) State–Space and Multivariable Theory . London : Nelson . Saberi A. , Stoorvogel A. A. , Sannuti P. & Niemann H. H. (2000) Fundamental problems in fault detection and identification. Int. J. Robust Nonlinear Control , 10 , 1209 – 1236 . Google Scholar CrossRef Search ADS Schaft A. J. van der & Schumacher H. (2000) An Introduction to Hybrid Dynamical Systems . Lecture Notes in Control and Information Sciences , vol. 251 . New York : Springer . Shorten R. N. & Narendra K. S. (2002) Necessary and sufficient conditions for the existence of a common quadratic Lyapunov function for a finite number of stable second order linear time-invariant systems. Int. J. Adapt. Control Signal Process. , 16 , 709 – 728 . Google Scholar CrossRef Search ADS Tanwani A. , Shim H. & Liberzon D. (2013) Observability for switched linear systems: characterization and observer design. IEEE Trans. Autom. Control , 58 , 891 – 904 . Google Scholar CrossRef Search ADS Trentelman H.L. , Stoorvogel A.A. , and Hautus M.L.J. (2001) Control Theory for Linear Systems. Communications and Control Engineering . London : Springer . Vardulakis A. I. G. (1991) Linear Multivariable Control: Algebraic Analysis and Synthesis Methods . Chichester, UK : John Wiley & Sons Ltd . Verghese G. C. (1981) Further notes on singular descriptions. Proceedings of the Joint American Control Conference , Paper TA4, Charlottesville . Willems J. C. (1983) Input–output and state space representations of finite-dimensional linear time-invariant systems. Linear Algebra Appl. , 50 , 81 – 608 . Google Scholar CrossRef Search ADS Willsky A. S. (1976) A survey of design methods for failure detection in dynamic systems. Automatica , 12 , 601 – 611 . Google Scholar CrossRef Search ADS Wonham W. M. 
(1985) Linear Multivariable Control: A Geometric Approach, 3rd edn. New York: Springer.

Appendix A

Proof of Lemma 3.1. From (3.5), we have:

$$\begin{array}{rl} \mathscr{N}_{\mathscr{X}_{o}}^{\ast} = & \sup \{ \mathscr{N} \subset \mathscr{X}_{o} \cap {\ker}\,{C} \ | \ A\mathscr{N} \subset E\mathscr{N} \} \\ = & \sup \{ \mathscr{N} \subset \mathrm{Im}\,{V} \cap {\ker}\,{C} \ | \ A(\mathrm{Im}\,{V} \cap \mathscr{N}) \subset E(\mathrm{Im}\,{V} \cap \mathscr{N}) \} \\ = & \sup \{ \mathscr{N} \subset V V^{-1}\,{\ker}\,{C} \ | \ AV(V^{-1}\mathscr{N}) \subset EV(V^{-1}\mathscr{N}) \} \\ = & \sup \{ \mathscr{N} \subset V\,{\ker}\,{CV} \ | \ AV(V^{-1}\mathscr{N}) \subset EV(V^{-1}\mathscr{N}) \} \\ = & \sup \{ (V^{-1}\mathscr{N}) \subset {\ker}\,{CV} \ | \ AV(V^{-1}\mathscr{N}) \subset EV(V^{-1}\mathscr{N}) \}. \end{array}$$

Now, since $$\mathrm{Ker}\,{EV} = V^{-1}\,\mathrm{Ker}\,{E} = \mathscr{X}_{o} \cap \mathrm{Ker}\,{E} = \{ 0 \}$$ and $$\mathrm{Im}\,{EV} = E \mathscr{X}_{o} = E(\mathscr{X}_{o}\oplus{{\ker}\,{E}}) = \mathrm{Im}\,{E}$$, we get (3.12). □

Proof of Lemma 3.2. From Lemma 3.1, (3.13) is equivalent to the observability of the pair $$(CV,\,AV)$$ (see Wonham, 1985). From the PBH rank test (see, e.g., Kailath, 1980; Vardulakis, 1991; Polderman & Willems, 1998), we get the equivalence between (3.13) and (3.14). In Bonilla & Malabre (1997), we used the geometric interpretation of the LSA, given in Bonilla & Malabre (1994), to compute matricially the unobservable subspace of a triple $$(E, A, C)$$, geometrically computed by (3.6) (with $$\mathscr{X}_{c} = \mathscr{X}_{d}$$). For our case, we only have to replace $${\ker}\,{C}$$ by $$\mathscr{X}_{c}\cap{\ker}\,{C}$$. Now, it is a usual practice to represent subspaces by means of kernels of some given matrices (see, e.g., Liret & Martinais, 1997). We then get the equivalence between (3.13) and (3.15); the remaining condition is implied by the complementary condition $$\mathscr{X}_{d} = \mathscr{X}_{c} \oplus {\ker}\,{E}$$. Equation (3.17) is straightforward. □

1. See Willems (1983) and Polderman & Willems (1998) for the external equivalence definition.
2. The parameter $$d$$ is not part of the variable structure, characterized by the pair $$({{{q_{{1}}}}}, {{{q_{{2}}}}})$$; we have only introduced it to consider two cases of study.
3. ied: infinite elementary divisor; fed: finite elementary divisor; cmi: column minimal index.
4. We have applied this change of basis to highlight the parameter $$d$$.
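The PBH rank argument invoked in the proof of Lemma 3.2 lends itself to a direct numerical check. The following sketch (the helper `pbh_observable` and the example matrices are ours, purely for illustration, not from the paper) tests observability of a pair $$(C, A)$$, standing in for $$(CV, AV)$$, by verifying that the stacked matrix $$[sI - A;\, C]$$ has full column rank at every eigenvalue $$s$$ of $$A$$.

```python
# Illustrative sketch (not from the paper): the PBH rank test used in the
# proof of Lemma 3.2.  A pair (C, A) is observable iff, for every eigenvalue
# s of A, the stacked matrix [sI - A; C] has full column rank n.
import numpy as np

def pbh_observable(A, C, tol=1e-9):
    """Return True iff rank([sI - A; C]) = n for every eigenvalue s of A."""
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        pbh = np.vstack([s * np.eye(n) - A, C])
        if np.linalg.matrix_rank(pbh, tol=tol) < n:
            return False          # rank drops at s: unobservable mode
    return True

# Hypothetical example: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C_pos = np.array([[1.0, 0.0]])    # measuring position: observable
C_vel = np.array([[0.0, 1.0]])    # measuring only velocity: not observable

print(pbh_observable(A, C_pos))   # True
print(pbh_observable(A, C_vel))   # False
```

Checking the rank only at the eigenvalues of $$A$$ suffices, since $$sI - A$$ alone already has full rank at every other complex $$s$$.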
In the fourth example of Bonilla et al. (2015b), $$d = 1$$, so $${\xi} = {x}$$.
5. Based on an exact reconstruction of their derivative actions.
6. For the case $${q_{a}} = (-1,\,-1)$$, we have from (2.4) and (2.20): $${\mathrm{d}{{x_1}}}/{\mathrm{d}{{t}}} = x_2 - y$$ and $$x_1 + x_2 = y$$; thus $$x_1 = {{\text e}}^{-t}x_1(0)$$, $$x_2 = y - {{\text e}}^{-t}x_1(0)$$ and $$x_3 = y$$.
7. In Bonilla et al. (2009), a dead zone function is also attached, whose aim is to take into account band-limited high-frequency noise components.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
IMA Journal of Mathematical Control and Information, Oxford University Press. Published: 26 April 2017.
