Polynomial matrix equivalences: system transformations and structural invariants

The present article is a survey of linear multivariable system equivalences. We attempt a review of the most significant forms of system equivalence, taking as a starting point matrix transformations that preserve certain aspects of spectral structure. From a system theoretic point of view, the need for a variety of forms of polynomial matrix equivalence arises from the fact that different types of spectral invariants give rise to different types of dynamics of the underlying linear system. A historical perspective of the key results and their contributors is also given.

1. Introduction

In this survey work, we consider linear time invariant systems and the respective equivalence relations amongst them. The class of linear multivariable systems has been studied through a variety of models. In what follows $$\mathbb{R},\mathbb{C}$$ denote the fields of real and complex numbers, respectively, $$\mathbb{R}[s]$$ the ring of polynomials with real coefficients, $$\mathbb{R}(s)$$ the field of real rational functions and $$\mathbb{R}_{pr}(s)$$ the ring of real proper rational functions. The classical time domain approach uses state space representations of the form

$$\rho x(t)=Ax(t)+Bu(t),\quad y(t)=Cx(t)+Du(t)\qquad(1)$$

where $$A\in\mathbb{R}^{p\times p}$$, $$B\in\mathbb{R}^{p\times l}$$, $$C\in\mathbb{R}^{m\times p}$$, $$D\in\mathbb{R}^{m\times l}$$, $$x(t)\in\mathbb{R}^{p}$$ is the state vector and $$u(t)\in\mathbb{R}^{l}$$, $$y(t)\in\mathbb{R}^{m}$$ are respectively the input and output vectors. The operator $$\rho$$ is either the differential operator $$d/dt$$ or the forward shift operator $$\rho x(t)=x(t+1)$$, in the continuous and discrete time case respectively. In view of a more general perspective, (1) is a special case of the generalized state space or descriptor models

$$\rho Ex(t)=Ax(t)+Bu(t),\quad y(t)=Cx(t)+Du(t).\qquad(2)$$

Notice that the matrix $$E$$ may be singular, in which case the above descriptor model cannot be reduced to a state space representation by premultiplying both sides of the first equation in (2) by $$E^{-1}$$. From a frequency domain point of view, one usually studies the input/output description of the linear system expressed by a transfer function

$$y(s)=G(s)u(s).\qquad(3)$$

In the multivariable case, transfer functions are essentially rational matrices. In the multiple input-multiple output (MIMO) case, the factorization of a given transfer function $$G(s)\in\mathbb{R}(s)^{m\times l}$$ as a fraction of polynomial matrices leads to the introduction of two distinct fractional representations, i.e.

$$G(s)=N_{R}(s)D_{R}(s)^{-1}=D_{L}(s)^{-1}N_{L}(s).\qquad(4)$$

The left matrix fraction description (MFD) of $$G(s)$$ can be seen as an input-output relation of the form

$$D_{L}(\rho)y(t)=N_{L}(\rho)u(t)\qquad(5)$$

where $$D_{L}(s)\in\mathbb{R}[s]^{m\times m}$$, $$N_{L}(s)\in\mathbb{R}[s]^{m\times l}$$, whereas the right matrix fraction description can be considered as a model of the form

$$D_{R}(\rho)\xi(t)=u(t),\quad y(t)=N_{R}(\rho)\xi(t)\qquad(6)$$

where $$D_{R}(s)\in\mathbb{R}[s]^{l\times l}$$, $$N_{R}(s)\in\mathbb{R}[s]^{m\times l}$$ and $$\xi(t)\in\mathbb{R}^{l}$$ is the pseudostate vector. All the above representations can be considered as special cases of polynomial matrix descriptions (PMDs)

$$A(\rho)\xi(t)=B(\rho)u(t),\quad y(t)=C(\rho)\xi(t)+D(\rho)u(t)\qquad(7)$$

where $$A(s)\in\mathbb{R}[s]^{r\times r}$$, $$B(s)\in\mathbb{R}[s]^{r\times l}$$, $$C(s)\in\mathbb{R}[s]^{m\times r}$$, $$D(s)\in\mathbb{R}[s]^{m\times l}$$. In the continuous time case, $$\rho$$ is to be interpreted as the differential operator in the distributional sense (see Hautus (1976); Cobb (1982); Hautus & Silverman (1983); Cobb (1984)).
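As a quick illustration of the two fractional representations in (4), the following sympy sketch builds a left MFD and a right MFD of the same transfer function and checks that they agree; the particular matrices are hypothetical examples, not taken from the text.

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical 2x2 left MFD: D_L(s) y = N_L(s) u
D_L = sp.Matrix([[s + 1, 0], [0, s + 2]])
N_L = sp.Matrix([[1, 0], [1, 1]])

# A right MFD of the same transfer function, G = N_R * D_R^{-1}
D_R = sp.Matrix([[(s + 1)*(s + 2), 0], [0, s + 2]])
N_R = sp.Matrix([[s + 2, 0], [s + 1, 1]])

G_left = sp.simplify(D_L.inv() * N_L)    # G = D_L^{-1} N_L
G_right = sp.simplify(N_R * D_R.inv())   # G = N_R D_R^{-1}

# both factorizations realize the same rational transfer matrix
assert sp.simplify(G_left - G_right) == sp.zeros(2, 2)
```

Note that the denominators $$D_{L}(s)$$ and $$D_{R}(s)$$ here have different determinants, which is why the finer equivalence notions discussed below are needed to compare such descriptions.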
According to this approach, certain classes of linear systems may exhibit impulsive solutions along with the smooth functional ones (see Verghese et al., 1981; Hautus & Silverman, 1983; Cobb, 1984; Vardulakis & Fragulis, 1989; Geerts, 1993, 1996; Vardulakis, 1991, p. 176–223). For a detailed presentation of the framework of impulsive-smooth distributions and related results, we encourage the reader to consult the articles cited above. It is well known that the finite frequency modes of the above systems are associated with the structure of the finite zeros of the matrix $$A(s)$$ in the case of (7), or of the corresponding matrices in the rest of the models. On the other hand, impulsive modes or infinite frequency behaviour are closely related to the presence of zeros at $$s=\infty$$ in the corresponding matrix (see for instance Vardulakis, 1991, p. 176–223; Bourlès & Marinescu, 1999; Karampetakis, 2013). When discrete time systems are under consideration, $$\rho$$ is to be interpreted as the forward shift operator $$\rho x(t)=x(t+1)$$. While the regular discrete time case has no significant differences compared to the continuous one, a radically different approach is required when singularity comes into consideration. Singular discrete time systems are in general non-causal and hence not physically realizable. However, there are situations where singular models arise in a natural manner (see for instance Luenberger, 1977), or systems where the independent variable $$t$$ is spatial rather than temporal. The framework proposed for such problems (see Luenberger, 1977; Lewis, 1984, 1986; Antoniou et al., 1998; Karampetakis et al., 2001; Karampetakis, 2004) is to use a finite time interval and consider solutions propagating forward and backward in time, given a set of admissible boundary conditions and exogenous inputs. The finite and infinite elementary divisors of the polynomial matrices involved have been shown to play a central role in the analysis of such models.
Notably, the finite elementary divisor structure of a matrix is completely determined by its finite zero structure and corresponds to the forward solution space, while the infinite elementary divisor structure depends on both the poles and the zeros at infinity of the polynomial matrix associated with the system; the latter gives rise to impulsive behavior for incompatible initial conditions at $$t=0$$ in the continuous time case, and to the backward solution of the system in the discrete time case. In either case, the finite and infinite zero structure (in continuous time systems), or the corresponding elementary divisor structure (in the discrete time case), plays a central role in the determination of the behavior of a linear system. In this respect, depending on the type of the model under investigation and the type of behaviour to be preserved, it is natural to seek transformations between system representations leaving invariant the corresponding spectral characteristics of the matrices involved. Following this approach, we present matrix transformations preserving certain types of spectral structure, accompanied by the induced system transformations preserving the corresponding type of dynamics.

2. Structure of polynomial matrices

We first review some basic facts related to the finite zero structure of polynomial matrices. A square polynomial matrix $$A(s)$$ is termed regular if its determinant is nonzero for almost every $$s\in\mathbb{C}$$. In such a case, if $$\det A(s_{0})=0$$ for some $$s_{0}\in\mathbb{C}$$, then $$s_{0}$$ is a finite eigenvalue or zero of $$A(s)$$. Finite zeros can be defined in the case of non-square or square non-regular matrices using a slightly more general framework. A square polynomial matrix $$U(s)$$ satisfying $$\det U(s)\neq0$$ for every $$s\in\mathbb{C}$$ is called unimodular. Unimodular matrices are the invertible elements (units) of the ring of polynomial matrices.
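Both notions are easy to test in a computer algebra system: regularity and finite zeros via the determinant, unimodularity via a constant nonzero determinant. A minimal sympy sketch, with hypothetical example matrices:

```python
import sympy as sp

s = sp.symbols('s')

# A regular polynomial matrix: det A(s) is not identically zero
A = sp.Matrix([[s, 1], [0, s - 1]])
d = sp.factor(A.det())                 # finite zeros at 0 and 1
assert set(sp.solve(d, s)) == {0, 1}

# A unimodular matrix: det U(s) is a nonzero constant,
# so U(s)^{-1} is again a polynomial matrix
U = sp.Matrix([[1, s**2], [0, 1]])
assert U.det() == 1
assert sp.simplify(U.inv()) == sp.Matrix([[1, -s**2], [0, 1]])
```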
If $$s_{i},i=1,2,\ldots,\mu$$ are the distinct finite zeros of $$A(s)=\sum_{i=0}^{q}A_{i}s^{i}\in\mathbb{R}[s]^{m\times n}$$, then there exist unimodular matrices $$U_{L}(s),U_{R}(s)$$ of appropriate dimensions such that

$$S_{A(s)}^{\mathbb{C}}(s)=U_{L}(s)A(s)U_{R}(s),\qquad(8)$$

where $$S_{A(s)}^{\mathbb{C}}(s)=\mathrm{diag}\{f_{1}(s),f_{2}(s),\ldots,f_{r}(s),0_{m-r,n-r}\}$$ is the Smith canonical form of $$A(s)$$ in $$\mathbb{C}$$, $$f_{j}(s)=\prod_{i=1}^{\mu}(s-s_{i})^{\sigma_{ij}}$$ are the invariant polynomials of $$A(s)$$, the partial multiplicities of $$s_{i}$$ satisfy $$\sigma_{ij}\leq\sigma_{i,j+1}$$, hence $$f_{j}(s)\mid f_{j+1}(s)$$ (Vardulakis, 1991, p. 9–14), and $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$ is the normal rank of $$A(s)$$. Additionally, the factors $$(s-s_{i})^{\sigma_{ij}}$$ are the finite elementary divisors of $$A(s)$$.

We now turn our attention to the infinite structure of a polynomial or rational matrix. A rational function with numerator degree less than or equal to (respectively, less than) the denominator degree is called proper (respectively, strictly proper). The sets of proper and strictly proper rational functions are rings denoted by $$\mathbb{R}_{pr}(s)$$ and $$\mathbb{R}_{sp}(s)$$ respectively. Clearly, $$r(s)\in\mathbb{R}_{pr}(s)$$ iff

$$\lim_{s\rightarrow\infty}r(s)=c\in\mathbb{R}.$$

Moreover, if $$c=0$$, $$r(s)$$ is strictly proper, while in case $$c\neq0$$ the function $$r(s)$$ is called biproper. The sets of $$m\times n$$ matrices with elements in $$\mathbb{R}(s),\mathbb{R}_{pr}(s)$$ and $$\mathbb{R}_{sp}(s)$$ will be denoted by $$\mathbb{R}(s)^{m\times n}$$, $$\mathbb{R}_{pr}(s)^{m\times n}$$ and $$\mathbb{R}_{sp}(s)^{m\times n}$$. A square proper rational matrix $$R(s)$$ is called biproper iff

$$\lim_{s\rightarrow\infty}\det R(s)=c\neq0.$$

Biproper matrices serve as units of the ring of square proper rational matrices, in the sense that their inverses are proper rational matrices themselves. The Smith–McMillan form at $$s=\infty$$ of $$A(s)\in\mathbb{R}[s]^{m\times n}$$ (Vardulakis, 1991, p. 100) is

$$S_{A(s)}^{\infty}(s)=\mathrm{diag}\left\{s^{q_{1}},s^{q_{2}},\ldots,s^{q_{k}},\frac{1}{s^{\widehat{q}_{k+1}}},\ldots,\frac{1}{s^{\widehat{q}_{r}}},0_{m-r,n-r}\right\}\qquad(9)$$

where $$q_{1}\geq q_{2}\geq\ldots\geq q_{k}>0$$, $$\widehat{q}_{r}\geq\widehat{q}_{r-1}\geq\ldots\geq\widehat{q}_{k+1}\geq0$$ are respectively the orders of the poles and zeros of $$A(s)$$ at $$s=\infty$$ and $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$. Furthermore, it was shown in Vardulakis, 1991, Proposition 3.52, p. 119 that $$q_{1}=q$$. Consider the dual or reverse matrix of $$A(s)$$

$$\mathrm{rev}A(s)=s^{q}A(s^{-1})=\sum_{i=0}^{q}A_{i}s^{q-i}.\qquad(10)$$

The finite Jordan pair (Gohberg et al., 1982, Chapter 7) of $$\mathrm{rev}A(s)$$ corresponding to the zero structure at $$s=0$$ is defined as the infinity Jordan pair $$(X_{\infty},J_{\infty})$$ of $$A(s)$$. As a result, the infinity Jordan pair of $$A(s)$$ satisfies the following properties

$$\sum_{i=0}^{q}A_{i}X_{\infty}J_{\infty}^{q-i}=0,\qquad\mathrm{rank}\begin{bmatrix}X_{\infty}\\ X_{\infty}J_{\infty}\\ \vdots\\ X_{\infty}J_{\infty}^{\mu-1}\end{bmatrix}=\mu.\qquad(11)$$

Furthermore, the structure of the infinity Jordan pair of $$A(s)$$ is closely related (Vardulakis, 1991, Section 4.2.3) to its Smith–McMillan form at $$s=\infty$$ given in (9). In particular,

$$J_{\infty}=\underset{i=1,\ldots,r}{\mathrm{blockdiag}}\{J_{i\infty}\}\in\mathbb{R}^{\mu\times\mu},$$

where

$$J_{i\infty}=\begin{bmatrix}0 & 1 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ \vdots & & \ddots & 1\\ 0 & \cdots & \cdots & 0\end{bmatrix}\in\mathbb{R}^{\mu_{i}\times\mu_{i}},$$

and

$$\mu_{i}=\begin{cases}q_{1}-q_{i}, & i=2,\ldots,k\\ q_{1}+\widehat{q}_{i}, & i=k+1,\ldots,r.\end{cases}$$

The columns of the matrix $$X_{\infty}$$ can be recovered (Vardulakis, 1991, Section 4.2.2) from the biproper matrix which multiplies $$A(s)$$ on the right to obtain (9).

We now turn our attention to the study of the structural invariants of non-regular polynomial matrices. Let $$A(s)\in\mathbb{R}[s]^{m\times n}$$ be a polynomial matrix with $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$. The right null space of $$A(s)$$ consists of all rational vectors $$\xi(s)\in\mathbb{R}(s)^{n\times1}$$ satisfying

$$A(s)\xi(s)=0.$$

It can be easily shown that the right null space of $$A(s)$$ is a vector space over the field $$\mathbb{R}(s)$$ of dimension equal to $$(n-r)$$.
Thus, there exist $$(n-r)$$ linearly independent (over $$\mathbb{R}(s)$$) rational column vectors $$\bar{f}_{i}(s)\in\mathbb{R}(s)^{n\times1}$$, $$i=1,2,\ldots,n-r$$, spanning the right null space of $$A(s)$$. Setting $$\overline{F}(s)=\left[\overline{f}_{1}(s),\overline{f}_{2}(s),\ldots,\overline{f}_{n-r}(s)\right]$$, it can be shown that

$$A(s)\overline{F}(s)=0$$

and $$\textrm{rank}_{\mathbb{R}(s)}\overline{F}(s)=n-r$$. However, as noted in Forney (1975); Kailath, 1980, the structure of the right null space can be captured by the notion of a minimal polynomial basis. Using a more elaborate procedure, one can compute a basis for the right null space consisting only of polynomial vectors of minimal column degree complexity. Before we present the characterization of a minimal polynomial basis, we introduce some terminology regarding the column degrees of a polynomial matrix. Let

$$F(s)=[f_{1}(s),f_{2}(s),\ldots,f_{n-r}(s)]\in\mathbb{R}[s]^{n\times(n-r)}\qquad(12)$$

be a polynomial matrix and $$f_{i}(s)\in\mathbb{R}[s]^{n\times1}$$, $$i=1,2,\ldots,n-r$$, be its column vectors. Let also $$v_{i}=\deg f_{i}(s)$$, $$i=1,2,\ldots,n-r$$, be the column degrees of $$F(s)$$ and write

$$F(s)=F_{hc}\,\mathrm{diag}\{s^{v_{1}},s^{v_{2}},\ldots,s^{v_{n-r}}\}+F_{l}(s)$$

where $$F_{l}(s)$$ is a polynomial matrix with column degrees strictly lower than the corresponding ones of $$F(s)$$ and $$F_{hc}$$ is the highest column degree coefficient matrix of $$F(s)$$. The matrix $$F(s)$$ is called column reduced or column proper if $$F_{hc}$$ has full column rank. Notably, it has been shown that every polynomial matrix with full normal column rank can be reduced to a column proper one by post-multiplication by an appropriate unimodular matrix (see Vardulakis, 1991, Section 1.2.3).
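The highest column degree coefficient matrix is straightforward to extract symbolically. A small sympy sketch, where the matrix $$F(s)$$ is a hypothetical example with column degrees $$v_{1}=1$$, $$v_{2}=2$$:

```python
import sympy as sp

s = sp.symbols('s')

def F_hc(F):
    """Highest column degree coefficient matrix of a polynomial matrix F(s)."""
    out = sp.zeros(F.rows, F.cols)
    for j in range(F.cols):
        # column degree v_j = max degree over the entries of column j
        v = max(sp.degree(F[i, j], s) for i in range(F.rows))
        for i in range(F.rows):
            out[i, j] = sp.expand(F[i, j]).coeff(s, v)
    return out

F = sp.Matrix([[1, s**2 + 1],
               [s, s],
               [s + 1, 1]])

assert F_hc(F) == sp.Matrix([[0, 1], [1, 0], [1, 0]])
assert F_hc(F).rank() == 2   # full column rank: F(s) is column reduced
```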
The columns of a polynomial matrix of the form (12) form a minimal polynomial basis for the right null space of $$A(s)$$ (see Forney, 1975; Kailath, 1980, Section 6.5.4 or Vardulakis, 1991, Section 1.8), if the following properties are satisfied:

$$A(s)F(s)=0,\qquad(13)$$

that is, the columns of $$F(s)$$ belong to the right null space of $$A(s)$$;

$$\mathrm{rank}\,F(s)=n-r,\quad\forall s\in\mathbb{C},\qquad(14)$$

which essentially states that $$F(s)$$ has no finite zeros; and

$$\mathrm{rank}\,F_{hc}=n-r\qquad(15)$$

where $$F_{hc}$$ is the highest column degree coefficient matrix of $$F(s)$$, which implies that the latter is column reduced. Notice also that this last condition implies the absence of zeros at $$s=\infty$$ in $$F(s)$$ (see Vardulakis, 1991, Corollary 3.100). It can be shown (Forney, 1975; Kailath, 1980, Section 6.5.4) that if $$F(s)$$ satisfies the above conditions and $$v_{1}\leq v_{2}\leq\ldots\leq v_{n-r}$$ are its column degrees in ascending order, then the column degrees $$v_{1}^{\prime}\leq v_{2}^{\prime}\leq\ldots\leq v_{n-r}^{\prime}$$ of any other basis of the right null space of $$A(s)$$ will satisfy $$v_{i}\leq v_{i}^{\prime}$$ for every $$i=1,2,\ldots,n-r$$. The integers $$0\leq v_{1}\leq v_{2}\leq\ldots\leq v_{n-r}$$ are known as the right minimal indices of $$A(s)$$. The left null space of a given polynomial matrix $$A(s)$$ is structured accordingly. The row degrees of a left minimal basis are the left minimal indices of $$A(s)$$, denoted $$0\leq\eta_{1}\leq\eta_{2}\leq\ldots\leq\eta_{m-r}$$. In the special case of matrix pencils, that is, first order polynomial matrices, Weierstrass (1868) and Kronecker (1890) introduced the Weierstrass and Kronecker canonical forms of a regular or non-regular matrix pencil respectively (see Gantmacher, 1959, Vol. 2, Theorem 3, p. 28 and Gantmacher, 1959, Vol. 2, Theorem 5, p. 40, for a more easily accessible source).
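Conditions (13)-(15) can be verified directly for a toy matrix. A sympy sketch, where $$A(s)$$ and $$F(s)$$ are hypothetical illustrations and condition (14) is only spot checked at a few finite points:

```python
import sympy as sp

s = sp.symbols('s')

A = sp.Matrix([[s, -1]])      # normal rank r = 1, n = 2, so n - r = 1
F = sp.Matrix([[1], [s]])     # candidate minimal polynomial basis, degree 1

# (13): the column of F lies in the right null space of A
assert A * F == sp.zeros(1, 1)
# (14): spot check that F keeps full column rank for finite s
#       (its first entry is the constant 1, which never vanishes)
assert all(F.subs(s, c).rank() == 1 for c in [0, 1, -3])
# (15): the highest column degree coefficient matrix [0, 1]^T has full
#       column rank, so F is column reduced; hence the right minimal
#       index of A(s) is v1 = 1
assert sp.Matrix([[0], [1]]).rank() == 1
```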
The well known Kronecker canonical form of a possibly non-regular matrix pencil $$sE-A$$ with $$E,A\in\mathbb{R}^{p\times q}$$ is a matrix pencil of the form

$$K(s)=\mathrm{diag}\{sI_{n}-J_{\mathbb{C}},\,sJ_{\infty}-I_{\mu},\,L_{\varepsilon}(s),\,L_{\eta}(s)\}\qquad(16)$$

where the block $$sI_{n}-J_{\mathbb{C}}$$ corresponds to the finite zeros (finite elementary divisors) of the pencil, with $$J_{\mathbb{C}}$$ being in Jordan canonical form. The block $$sJ_{\infty}-I_{\mu}$$ corresponds to the infinite elementary divisors of the pencil, where $$J_{\infty}$$ is in Jordan form with all its diagonal elements equal to zero. Finally, the block $$L_{\varepsilon}(s)$$ (respectively $$L_{\eta}(s)$$) is a block diagonal matrix comprised of non-square blocks $$L_{\varepsilon_{i}}(s),~i=1,\ldots,r$$ (respectively $$L_{\eta_{i}}(s),~i=1,\ldots,l$$) of the form

$$L_{\varepsilon_{i}}(s)=sM_{\varepsilon_{i}}-N_{\varepsilon_{i}}\in\mathbb{R}[s]^{\varepsilon_{i}\times(\varepsilon_{i}+1)},\qquad L_{\eta_{i}}(s)=sM_{\eta_{i}}^{\top}-N_{\eta_{i}}^{\top}\in\mathbb{R}[s]^{(\eta_{i}+1)\times\eta_{i}}$$

where

$$M_{\varepsilon_{i}}=\begin{bmatrix}1 & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 1 & 0\end{bmatrix}\in\mathbb{R}^{\varepsilon_{i}\times(\varepsilon_{i}+1)},\qquad N_{\eta_{i}}=\begin{bmatrix}0 & 1 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & 1\end{bmatrix}\in\mathbb{R}^{\eta_{i}\times(\eta_{i}+1)},$$

with $$N_{\varepsilon_{i}}$$ and $$M_{\eta_{i}}$$ defined accordingly. The block $$L_{\varepsilon_{i}}(s)$$ ($$L_{\eta_{i}}(s)$$) is called the right (left) Kronecker block and the integers $$\varepsilon_{i}$$ ($$\eta_{i}$$) are the right (left) Kronecker indices of the original pencil. Moreover, if we set $$\varepsilon=\sum_{i=1}^{r}\varepsilon_{i}$$ and $$\eta=\sum_{i=1}^{l}\eta_{i}$$, then $$p=n+\mu+\varepsilon+\eta+l$$ and $$q=n+\mu+\varepsilon+\eta+r$$.

3. Transformations preserving the finite structure

3.1. Matrix similarity

The most common form of matrix equivalence related to linear systems theory is matrix similarity. Matrix similarity relates constant square matrices of the same dimension. Similar matrices represent the same linear transformation expressed in two different bases.

Definition 1 [Similarity] (Gantmacher, 1959, Vol. 1, Definition 10, p. 67) Two constant matrices $$A_{1},A_{2}\in\mathbb{R}^{n\times n}$$ are similar if there exists an invertible matrix $$M$$ such that

$$A_{1}=MA_{2}M^{-1}.\qquad(17)$$

Notice that (17) can also be written in the form

$$(sI_{n}-A_{1})=M(sI_{n}-A_{2})M^{-1}.\qquad(18)$$

Similarity, as a relation on the set of $$n\times n$$ constant matrices, is easily shown to be an equivalence relation. The eigenstructures of two similar matrices, that is, the eigenvalues along with their algebraic and geometric multiplicities, are identical. In particular, given a matrix $$A$$ with $$v$$ distinct eigenvalues $$\lambda_{1},\lambda_{2},\ldots,\lambda_{v}\in\mathbb{C}$$, there exists an invertible matrix $$M$$ (see for instance Gantmacher, 1959, Vol. 1, p. 152), such that $$A=MJM^{-1}$$, where

$$J=\underset{\substack{i=1,\ldots,v\\ j=1,\ldots,v_{i}}}{\mathrm{diag}}\{J_{ij}\}$$

is the Jordan canonical form of $$A$$, $$v_{i}$$ is the geometric multiplicity of $$\lambda_{i}$$,

$$J_{ij}=\begin{bmatrix}\lambda_{i} & 1 & \cdots & 0\\ 0 & \lambda_{i} & \ddots & \vdots\\ \vdots & & \ddots & 1\\ 0 & \cdots & 0 & \lambda_{i}\end{bmatrix}\in\mathbb{C}^{v_{ij}\times v_{ij}}$$

and $$\sum_{j=1}^{v_{i}}v_{ij}$$ is the algebraic multiplicity of $$\lambda_{i}$$. In general, even if $$A$$ is a real matrix, the above decomposition may result in complex matrices $$M,J$$. However, a real alternative of $$J$$ and $$M$$ is also possible. Given a linear system described by a set of state space equations of the form (1), the eigenstructure of the matrix $$A\in\mathbb{R}^{p\times p}$$ plays a crucial role in the dynamics of the system. A question naturally arising in the study of such systems is how equation (1) changes when a transformation of the state vector takes place, i.e. when

$$x(t)=M\hat{x}(t).\qquad(19)$$

It is easy to see that the above change of coordinates results in a new description of the form

$$\rho\hat{x}(t)=\hat{A}\hat{x}(t)+\hat{B}u(t),\quad y(t)=\hat{C}\hat{x}(t)+Du(t)$$

where $$\hat{A}=M^{-1}AM$$, $$\hat{B}=M^{-1}B$$ and $$\hat{C}=CM$$.
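Numerically, the invariance of the eigenstructure under the change of coordinates (19) is easy to observe. A small numpy sketch with a hypothetical state matrix and a hypothetical invertible $$M$$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # hypothetical state matrix, eigenvalues -1, -2
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])        # an invertible change of coordinates

A_hat = np.linalg.inv(M) @ A @ M  # \hat{A} = M^{-1} A M

# similar matrices share their eigenvalues (and their whole Jordan structure)
ev = np.sort(np.linalg.eigvals(A))
ev_hat = np.sort(np.linalg.eigvals(A_hat))
assert np.allclose(ev, ev_hat)
assert np.allclose(ev, [-2.0, -1.0])
```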
Equivalently, two state space systems of the form (1) will be termed system similar if there exists an invertible matrix $$M$$ such that

$$\begin{bmatrix}sI-\hat{A} & \hat{B}\\ -\hat{C} & D\end{bmatrix}=\begin{bmatrix}M^{-1} & 0\\ 0 & I\end{bmatrix}\begin{bmatrix}sI-A & B\\ -C & D\end{bmatrix}\begin{bmatrix}M & 0\\ 0 & I\end{bmatrix}.$$

This type of equivalence between two state space representations is known in the control theory literature as system similarity.

3.2. Unimodular equivalence

Unimodular equivalence provides a transformation between rational matrices of the same dimensions. In what follows, we focus on the polynomial case, which is widely used in linear systems theory. Recall that unimodular matrices are polynomial matrices whose inverse is also polynomial, that is, their determinant is a non-zero constant. Pre- or post-multiplication of a polynomial matrix by a unimodular one corresponds to performing row or column elementary operations on the matrix. This naturally leads to the following definition.

Definition 2 [Unimodular equivalence] (Gantmacher, 1959, Vol. 1, Definition 2', p. 133), Rosenbrock (1970) Two polynomial matrices $$A_{i}(s)\in\mathbb{R}[s]^{p\times q}$$, $$i=1,2$$, are unimodular equivalent if there exist unimodular matrices $$U(s)\in\mathbb{R}[s]^{p\times p}$$, $$V(s)\in\mathbb{R}[s]^{q\times q}$$, such that

$$A_{1}(s)=U(s)A_{2}(s)V(s).$$

Unimodular equivalence is an equivalence relation, having as a canonical form the common Smith form of the equivalent polynomial matrices. The finite elementary divisor structure, and therefore the finite zeros and their multiplicities, form a complete set of invariants for unimodular equivalence. In the special case where $$A_{i}(s)=sI_{p}-A_{i}$$, $$i=1,2$$, unimodular equivalence can be reduced to matrix similarity of the $$A_{i}$$'s (see Gantmacher, 1959, Vol. 1, Theorem 7, p. 147). From a system theoretic point of view, unimodular equivalence relates matrix fraction descriptions (MFDs) of a system with a given transfer function $$G(s)\in\mathbb{R}(s)^{m\times l}$$.
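A quick sympy illustration of Definition 2, using hypothetical unimodular factors with determinant 1: the transformed matrix has the same determinant up to a nonzero constant, and hence the same finite zeros.

```python
import sympy as sp

s = sp.symbols('s')

A1 = sp.Matrix([[s, 0],
                [0, s - 1]])           # finite zeros at 0 and 1
U = sp.Matrix([[1, s], [0, 1]])        # unimodular: det U = 1
V = sp.Matrix([[1, 0], [s + 2, 1]])    # unimodular: det V = 1

A2 = sp.expand(U * A1 * V)

# det A2 = det U * det A1 * det V, so the finite zero structure survives
assert sp.expand(A2.det() - A1.det()) == 0
assert set(sp.solve(A2.det(), s)) == {0, 1}
```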
In particular, given right and left MFDs of the form (4), it is easy to see that, choosing $$V(s),U(s)$$ unimodular of appropriate dimensions, the pairs $$\hat{N}_{R}(s)=N_{R}(s)V(s)$$, $$\hat{D}_{R}(s)=D_{R}(s)V(s)$$ and $$\hat{N}_{L}(s)=U(s)N_{L}(s)$$, $$\hat{D}_{L}(s)=U(s)D_{L}(s)$$ give rise to the same transfer function $$G(s)$$, preserving the input (respectively output) decoupling zeros of the MFDs, i.e. the zeros of

$$\begin{bmatrix}D_{L}(s) & N_{L}(s)\end{bmatrix},\quad\left(\text{respectively}\ \begin{bmatrix}N_{R}(s)\\ -D_{R}(s)\end{bmatrix}\right).\qquad(20)$$

On the other hand, it is natural to consider a left and a right MFD of the form (4) giving rise to the same transfer function as equivalent if they also share identical input and output decoupling zero structures. Since left MFDs have no output decoupling zeros and right MFDs have no input decoupling zeros, we can deduce that a left and a right MFD of the form (4) can be equivalent if and only if there are no input and output decoupling zeros in either description. Notice that (4) can be written as

$$D_{L}(s)N_{R}(s)=N_{L}(s)D_{R}(s)\qquad(21)$$

or

$$\begin{bmatrix}D_{L}(s) & N_{L}(s)\end{bmatrix}\begin{bmatrix}N_{R}(s)\\ -D_{R}(s)\end{bmatrix}=0,\qquad(22)$$

while the absence of input and output decoupling zeros can be expressed by the absence of zeros in the respective compound matrices in (20). Notice that the coprimeness requirement on the pairs in the compound matrices in (20), in conjunction with equation (21), essentially implies that the pairs of numerators and denominators share common finite zero structures, despite the fact that $$D_{L}(s)$$ and $$D_{R}(s)$$ may be of different dimensions. In order to overcome the difficulty of relating systems with different numbers of pseudostates but the same numbers of inputs and outputs, the class of polynomial matrices $$\mathcal{P}(m,l)$$ with dimensions $$(r+m)\times(r+l)$$, for all $$r=1,2,\ldots$$, was introduced and the following definition was proposed.

Definition 3 [Extended unimodular equivalence] (Pugh & Shelton, 1978) Two polynomial matrices $$A_{1}(s)$$, $$A_{2}(s)\in\mathcal{P}(m,l)$$ are extended unimodular equivalent (e.u.e.) if there exist polynomial matrices of appropriate dimensions $$M(s)$$, $$N(s)$$ such that

$$M(s)A_{1}(s)=A_{2}(s)N(s)\qquad(23)$$

where the composite matrices

$$\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\ -N(s)\end{bmatrix}\qquad(24)$$

have full rank and no zeros in $$\mathbb{C}$$. Proving that e.u.e. is an equivalence relation is not a trivial task (Pugh & Shelton, 1978; Smith, 1981). It has been shown in Pugh & Shelton (1978) that e.u.e. preserves the finite zero structure as well as the normal rank defect of the matrices involved. Additionally, it should be mentioned that conditions (23) and (24) are a well known necessary and sufficient condition for two polynomial matrices to be left-similar (Cohn, 1985, Proposition 6.5, p. 28), while at the same time they provide necessary and sufficient conditions for the corresponding behaviors to be isomorphic (Oberst, 1990, Theorem 54, p. 33). A more general form of equivalence relation between systems expressed as polynomial matrix descriptions (PMDs) was introduced in Rosenbrock (1970), known as strict system equivalence (s.s.e.), while in Wolovich (1974) a different approach using state space reduction was followed to establish the same result. Additionally, an alternative formulation of s.s.e. was proposed in Fuhrmann (1977) and was proven equivalent to the original form of s.s.e. in Levy et al. (1977). In what follows we shall focus on the definition of s.s.e. given in Fuhrmann (1977). Let

$$P_{i}(s)=\begin{bmatrix}A_{i}(s) & -B_{i}(s)\\ C_{i}(s) & D_{i}(s)\end{bmatrix},\quad i=1,2\qquad(25)$$

be the system matrices of two PMDs $${\it\Sigma}_{i}$$, where $$A_{i}(s)\in\mathbb{R}[s]^{r_{i}\times r_{i}}$$, $$B_{i}(s)\in\mathbb{R}[s]^{r_{i}\times l}$$, $$C_{i}(s)\in\mathbb{R}[s]^{m\times r_{i}}$$, $$D_{i}(s)\in\mathbb{R}[s]^{m\times l}$$.
Definition 4 [Fuhrmann's s.s.e.] (Fuhrmann, 1977) $${\it\Sigma}_{1}$$ and $${\it\Sigma}_{2}$$ are Fuhrmann strictly system equivalent if there exist polynomial matrices $$M_{1}(s)$$, $$M_{2}(s)$$, $$X_{1}(s)$$, $$X_{2}(s)$$ such that

$$\begin{bmatrix}M_{1}(s) & 0\\ X_{1}(s) & I_{m}\end{bmatrix}\begin{bmatrix}A_{1}(s) & -B_{1}(s)\\ C_{1}(s) & D_{1}(s)\end{bmatrix}=\begin{bmatrix}A_{2}(s) & -B_{2}(s)\\ C_{2}(s) & D_{2}(s)\end{bmatrix}\begin{bmatrix}M_{2}(s) & X_{2}(s)\\ 0 & I_{l}\end{bmatrix}\qquad(26)$$

where

$$\begin{bmatrix}M_{1}(s) & A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\ -M_{2}(s)\end{bmatrix}$$

have full rank and no zeros in $$\mathbb{C}$$.

A dynamic interpretation of strict system equivalence was given in Pernebo (1977) in terms of the existence of a bijective differential map between the solution sets of the systems. In Pernebo (1977) it has been shown that two system matrices are s.s.e. if and only if there exists a bijective mapping between the sets of solutions of $${\it\Sigma}_{1}$$ and $${\it\Sigma}_{2}$$ of the form

$$\xi_{2}(t)=N(\rho)\xi_{1}(t)+Y(\rho)u(t),$$

where $$N(\rho)$$ and $$Y(\rho)$$ are polynomial matrices, $$\xi_{i}(t)$$, $$i=1,2$$, are the pseudostate responses and $$u(t)$$ is the common input of the two systems, producing equal outputs $$y_{i}(t)$$, $$i=1,2$$. The bijective mapping must be such that the internal variables of the one system are linear combinations of the internal variables, the inputs and their derivatives of arbitrary order, of the other system. From a system theoretic point of view, if two PMDs with system matrices as in (25) are s.s.e., then it can be shown (Pugh & Shelton, 1978) that each of the following pairs of matrices

$$A_{i}(s),\quad\begin{bmatrix}A_{i}(s) & -B_{i}(s)\end{bmatrix},\quad\begin{bmatrix}A_{i}(s)\\ C_{i}(s)\end{bmatrix},\quad P_{i}(s),\quad i=1,2,\qquad(27)$$

satisfies an e.u.e. relation. Hence, s.s.e. preserves the poles, the input and output decoupling zeros and the system zeros of $${\it\Sigma}_{i}$$, that is, the zeros of the respective matrices in (27). Additionally, Fuhrmann strict system equivalence and its behavioral interpretation have been generalized in Pugh et al. (1998); Zerz (2000) to the case of multidimensional systems.

4. Transformations preserving the structure at infinity

The class of singular linear systems has received special attention by many authors (see Rosenbrock, 1974; Verghese et al., 1981; Cobb, 1982; Hautus & Silverman, 1983; Vardulakis, 1991). A distinguishing feature of this class of linear systems is their infinite frequency behavior, which is directly related to the so called pole/zero structure at infinity of the polynomial matrices involved. As noted in (Kailath, 1980, p. 398), while unimodular equivalence operations preserve the finite zero structure of polynomial matrices, they will in general destroy the corresponding structure at infinity. A partial solution to this difficulty was given with the introduction of biproper equivalence of rational matrices.

Definition 5 [Equivalence at infinity] (Vardulakis et al., 1982) Two rational matrices $$A_{i}(s)\in\mathbb{R}(s)^{p\times q}$$, $$i=1,2$$, are equivalent at $$s=\infty$$ if there exist biproper rational matrices $$U(s)\in\mathbb{R}_{pr}(s)^{p\times p},V(s)\in\mathbb{R}_{pr}(s)^{q\times q}$$, such that

$$A_{1}(s)=U(s)A_{2}(s)V(s).$$

Equivalence at $$s=\infty$$ preserves the pole/zero structure at $$s=\infty$$, which is exposed by the Smith–McMillan form at $$\infty$$ (see (9)). To overcome the difficulty of comparing the structure at $$s=\infty$$ of matrices of not necessarily equal dimensions, the following definition was given in Walker (1988).

Definition 6 [Extended causal equivalence] (Walker, 1988) Two polynomial matrices $$A_{1}(s)$$, $$A_{2}(s)\in\mathcal{P}(m,l)$$ are extended causal equivalent (e.c.e.) if there exist proper matrices of appropriate dimensions $$M(s)$$, $$N(s)$$ such that

$$M(s)A_{1}(s)=A_{2}(s)N(s)\qquad(28)$$

where the composite matrices

$$\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\ -N(s)\end{bmatrix}\qquad(29)$$

have full rank for all $$s\in\mathbb{C}$$ and no zeros at $$s=\infty$$.

5. Transformations preserving the structure at $$\mathbb{C}\cup\{\infty\}$$

5.1. Strict equivalence

The simplest form of polynomial matrix with an interesting finite and infinite structure is the first order one, termed in the literature a matrix pencil. The study of equivalences between matrix pencils dates back to Weierstrass (1868), who considered regular pencils, introduced the concept of strict equivalence and discovered the canonical form that is named after him. An extension of strict equivalence to the non-regular case was given by Kronecker in Kronecker (1890), where the Kronecker canonical form was established. Both results have significant system theoretic applications and/or interpretations, summarized in Gantmacher, 1959, Vol. 2, Chapter XII.

Definition 7 [Strict equivalence] (Gantmacher, 1959, Vol. 2, Definition 1, p. 24) Two matrix pencils $$sE_{i}-A_{i}$$, $$i=1,2$$, with $$E_{i},A_{i}\in\mathbb{R}^{p\times q}$$, are strictly equivalent if there exist non-singular matrices $$M\in\mathbb{R}^{p\times p}$$, $$N\in\mathbb{R}^{q\times q}$$, such that

$$sE_{1}-A_{1}=M(sE_{2}-A_{2})N.$$

Note that strict equivalence corresponds to unimodular equivalence and equivalence at $$\infty$$ at the same time. Also, strictly equivalent matrix pencils share identical finite and infinite elementary divisor structure and left/right minimal indices (Gantmacher, 1959, Vol. 2, p. 39), rendered via the well known Kronecker canonical form. Every matrix pencil $$sE-A\in\mathbb{R}[s]^{p\times q}$$ is strictly equivalent to a pencil of the form (16). As a result, strict equivalence of matrix pencils serves as a tool to identify descriptor representations of the form (2) corresponding to the same underlying linear system. Given a descriptor system of the form (2), one may apply a change of coordinates on the descriptor vector similar to that in (19), that is $$x(t)=M\hat{x}(t)$$, along with a premultiplication of the first descriptor equation by a square invertible matrix $$N$$.
This gives rise to a new descriptor system of the form

$$\rho\hat{E}\hat{x}(t)=\hat{A}\hat{x}(t)+\hat{B}u(t),\quad y(t)=\hat{C}\hat{x}(t)+Du(t)$$

where $$\hat{E}=NEM$$, $$\hat{A}=NAM$$, $$\hat{B}=NB$$, $$\hat{C}=CM$$. It is easy to verify that the matrix pencils $$sE-A$$ and $$s\hat{E}-\hat{A}$$ are strictly equivalent in the sense of Definition 7. Notably, the system matrices of descriptor systems related as described above are connected via

$$\begin{bmatrix}s\hat{E}-\hat{A} & -\hat{B}\\ \hat{C} & D\end{bmatrix}=\begin{bmatrix}N & 0\\ 0 & I_{m}\end{bmatrix}\begin{bmatrix}sE-A & -B\\ C & D\end{bmatrix}\begin{bmatrix}M & 0\\ 0 & I_{l}\end{bmatrix}$$

which essentially states that the system matrix pencils are also strictly equivalent. Similar relations hold for the pairs $$\begin{bmatrix}s\hat{E}-\hat{A} & -\hat{B}\end{bmatrix}$$, $$\begin{bmatrix}sE-A & -B\end{bmatrix}$$ and $$\begin{bmatrix}s\hat{E}-\hat{A}\\ \hat{C}\end{bmatrix}$$, $$\begin{bmatrix}sE-A\\ C\end{bmatrix}$$.

5.2. Complete equivalence

In an attempt to obtain a system equivalence that preserves the structural invariants of descriptor systems of the same input/output but possibly different pseudostate dimensions, both in $$\mathbb{C}$$ and at $$s=\infty$$, and hence their finite and infinite frequency behavior, strong system equivalence was introduced in Verghese et al. (1981). The definition of strong system equivalence in Verghese et al. (1981) can be seen as a collection of allowable operations on the descriptor system matrices. In Anderson et al. (1985) strong equivalence received a more compact formulation as constant system equivalence, and took on a closed form description in Pugh et al. (1987) as complete system equivalence. In the present note, we will focus only on the presentation of complete system equivalence. Complete equivalence can be seen as a particular case of extended unimodular equivalence and extended causal equivalence.

Definition 8 [Complete equivalence] (Pugh et al., 1987) Two matrix pencils $$sE_{i}-A_{i}$$, $$i=1,2$$, with $$sE_{i}-A_{i}\in\mathcal{P}(m,l)$$, are completely equivalent if there exist constant matrices $$M,N$$ of appropriate dimensions such that

$$M(sE_{1}-A_{1})=(sE_{2}-A_{2})N$$

where the composite matrices

$$\begin{bmatrix}M & sE_{2}-A_{2}\end{bmatrix},\qquad\begin{bmatrix}sE_{1}-A_{1}\\ -N\end{bmatrix}\qquad(30)$$

have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$.

As mentioned earlier, complete system equivalence can be seen as a generalization of strict equivalence of matrix pencils, with the extra feature of being able to compare matrix pencils of different dimensions. It has been shown in Pugh et al. (1987) that two regular pencils are completely equivalent if they possess the same finite and non-trivial infinite elementary divisors, and thus zeros. Furthermore, complete equivalence can be seen as a particular case of e.u.e. (Definition 3) and e.c.e. (Definition 6). From a system theoretic point of view, complete equivalence takes the form described below. Given two descriptor systems of the form

$$\rho E_{i}x_{i}(t)=A_{i}x_{i}(t)+B_{i}u(t),\quad y(t)=C_{i}x_{i}(t)+D_{i}u(t)\qquad(31)$$

for $$i=1,2$$, where $$E_{i},A_{i}\in\mathbb{R}^{p_{i}\times p_{i}}$$, $$B_{i}\in\mathbb{R}^{p_{i}\times l}$$, $$C_{i}\in\mathbb{R}^{m\times p_{i}}$$, $$D_{i}\in\mathbb{R}^{m\times l}$$ and $$x_{i}(t)\in\mathbb{R}^{p_{i}}$$, we have the following definition.

Definition 9 [Complete system equivalence] (Pugh et al., 1987) Two descriptor systems of the form (31) are completely system equivalent if there exist constant matrices $$M,N,X,Y$$ of appropriate dimensions such that

$$\begin{bmatrix}M & 0\\ X & I_{m}\end{bmatrix}\begin{bmatrix}sE_{1}-A_{1} & -B_{1}\\ C_{1} & D_{1}\end{bmatrix}=\begin{bmatrix}sE_{2}-A_{2} & -B_{2}\\ C_{2} & D_{2}\end{bmatrix}\begin{bmatrix}N & Y\\ 0 & I_{l}\end{bmatrix}\qquad(32)$$

where the composite matrices

$$\begin{bmatrix}M & sE_{2}-A_{2}\end{bmatrix},\qquad\begin{bmatrix}sE_{1}-A_{1}\\ -N\end{bmatrix}\qquad(33)$$

have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$. Notice that the above definition does not require the two descriptor vectors $$x_{i}(t)$$ to be of the same dimension. It was shown in Pugh et al. (1987) that two descriptor systems are completely system equivalent iff they are strongly system equivalent.
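To make the pencil case concrete, the following sympy sketch applies constant invertible factors $$M,N$$ to a hypothetical singular pencil, as in Definitions 7 to 9, and checks that the finite zeros survive; the matrices are illustrative examples only.

```python
import sympy as sp

s = sp.symbols('s')

E = sp.Matrix([[1, 0], [0, 0]])     # singular E: a genuine descriptor pencil
A = sp.Matrix([[2, 0], [0, 1]])
P1 = s*E - A                        # det = -(s - 2): finite zero at s = 2

M = sp.Matrix([[1, 1], [0, 1]])     # constant, det M = 1
N = sp.Matrix([[1, 0], [2, 1]])     # constant, det N = 1
P2 = sp.expand(M * P1 * N)

# strictly equivalent pencils share their finite elementary divisors ...
assert sp.expand(P1.det() - P2.det()) == 0
assert sp.solve(P1.det(), s) == [2]
# ... and the rank of E is preserved, so the transformed pencil keeps
# its singular (infinite frequency) structure as well
assert (M * E * N).det() == 0
```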
The corresponding relation between strong and constant system equivalence was established in Anderson et al. (1985). Furthermore, in Hayton et al. (1986) it was shown that two descriptor systems are completely equivalent iff there exist bijective maps that leave the input/output behaviour and the essential dynamics unchanged. This form of equivalence is termed fundamental equivalence in Hayton et al. (1986). 5.3. Full equivalence In Anderson et al. (1985) a generalization of strong and complete system equivalence to polynomial matrix descriptions of arbitrary degree was proposed, and the dynamic interpretation of the corresponding invariants was studied in Coppel & Cullen (1985). However, in its original formulation, strong system equivalence of PMDs suffered from a serious technical drawback: two PMDs were termed strongly system equivalent if they are both polynomially system equivalent and system equivalent at infinity (Anderson et al., 1985). In order to overcome the difficulty of checking two separate conditions, a more compact form of matrix equivalence, known as full equivalence, was proposed in Pugh et al. (1992). Definition 10 [Full equivalence] (Pugh et al., 1992) Let $$A_{1}(s),A_{2}(s)\in\mathcal{P}(m,l)$$. Then $$A_{1}(s),A_{2}(s)$$ are fully equivalent (FE) if there exist polynomial matrices $$M(s),N(s)$$ of appropriate dimensions, such that
$$M(s)A_{1}(s)=A_{2}(s)N(s)$$ (34)
is satisfied and the composite matrices
$$\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\ -N(s)\end{bmatrix}$$ (35)
(i) have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$; (ii) satisfy $\delta_{M}\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix}=\delta_{M}A_{2}(s)$ and $\delta_{M}\begin{bmatrix}A_{1}(s)\\ -N(s)\end{bmatrix}=\delta_{M}A_{1}(s)$. Notice that $$\delta_{M}(\cdot)$$ denotes the McMillan degree, that is, the total number of poles of the rational matrix involved. Some of the invariants of FE are (see Pugh et al.
(1994)): the McMillan degree of $$A_{i}(s)$$, and the finite and infinite zero structures of $$A_{i}(s)$$. The dynamic interpretation of the conditions appearing in Definition 10 is given below. A generalized version of the fundamental equivalence introduced in Hayton et al. (1986) for descriptor systems was proposed in Pugh et al. (1994), as a more complete form of the equivalence proposed in Pernebo (1977). Given two PMDs as in (25), we form the normalized system matrices
$$\mathcal{P}_{i}(s)=\begin{bmatrix}A_{i}(s) & -B_{i}(s) & 0 & 0\\ C_{i}(s) & D_{i}(s) & -I_{m} & 0\\ 0 & I_{l} & 0 & -I_{l}\\ 0 & 0 & I_{m} & 0\end{bmatrix}=\begin{bmatrix}T_{i}(s) & -U_{i}\\ V_{i} & 0\end{bmatrix}$$ (36)
which have the advantage over (25) of allowing a uniform treatment of the finite and infinite frequency characteristics. With the above setup, the following system equivalence was proposed. Definition 11 [Normal full system equivalence] (Pugh et al., 1994) The normalized system matrices $$\mathcal{P}_{i}(s)$$, $$i=1,2$$, are normal full system equivalent if there exist polynomial matrices $$M(s)$$, $$N(s)$$, $$X(s)$$, $$Y(s)$$ such that the relation
$$\begin{bmatrix}M(s) & 0\\ X(s) & I\end{bmatrix}\begin{bmatrix}T_{1}(s) & -U_{1}\\ V_{1} & 0\end{bmatrix}=\begin{bmatrix}T_{2}(s) & -U_{2}\\ V_{2} & 0\end{bmatrix}\begin{bmatrix}N(s) & Y(s)\\ 0 & I_{l}\end{bmatrix}$$ (37)
is a full equivalence relation. In Karampetakis & Vardulakis (1992) full unimodular equivalence of right and left MFDs was proposed. Definition 12 [Full unimodular (system) equivalence] (Karampetakis & Vardulakis, 1992) Let $$P_{i}(s)$$, $$i=1,2$$, be the system matrices of two right MFDs as in (6), i.e. $P_{i}(s)=\begin{bmatrix}D_{R}^{i}(s) & I_{l}\\ -N_{R}^{i}(s) & 0 \end{bmatrix}$. Then $$P_{1}(s)$$ and $$P_{2}(s)$$ are called fully unimodular equivalent if and only if there exists a unimodular polynomial matrix $$T(s)\in\mathbb{R}[s]^{l\times l}$$ such that
$$\begin{bmatrix}D_{R}^{2}(s) & I_{l}\\ -N_{R}^{2}(s) & 0\end{bmatrix}=\begin{bmatrix}D_{R}^{1}(s) & I_{l}\\ -N_{R}^{1}(s) & 0\end{bmatrix}\begin{bmatrix}T(s) & 0\\ 0 & I_{l}\end{bmatrix}$$
and the following hold: (i) the composite matrix $\begin{bmatrix}D_{R}^{2}(s)\\ N_{R}^{2}(s)\\ T(s) \end{bmatrix}$ has no zeros at infinity; (ii) $\delta_{M}\begin{bmatrix}D_{R}^{2}(s)\\ N_{R}^{2}(s)\\ T(s) \end{bmatrix}=\delta_{M}\begin{bmatrix}D_{R}^{2}(s)\\ N_{R}^{2}(s) \end{bmatrix}$.
The corresponding definition for left MFDs is omitted. The dynamic interpretation of normal full system equivalence was demonstrated in Pugh et al. (1994) through fundamental equivalence of PMDs. Moreover, full equivalence and its behavioural interpretation have been generalized in (Bourlès & Marinescu, 2011, Section 7.3) to the case of linear time-varying systems. Definition 13 [Fundamental equivalence of PMDs] (Pugh et al., 1994) Let $$\mathcal{P}_{i}(s)$$, $$i=1,2$$, be the normalized forms of two PMDs described by (25). The PMDs are said to be fundamentally equivalent if there exists a bijective polynomial differential map of the form
$$\xi_{2}(t)=N(\rho)\xi_{1}(t)+Y(\rho)u(t)$$
and they have the same output $$y(t)$$. It has been shown in Pugh et al. (1994) that two PMDs are fundamentally equivalent if and only if they are normal full system equivalent. Furthermore, since there exists a bijection between the solution/input pairs and the initial condition/output pairs, properties like controllability and observability remain invariant. It is interesting to notice that the equivalences described so far provide the theoretical tools to identify the relation between two polynomial matrices/system descriptions leaving invariant certain aspects of their algebraic structures/behaviours. In practice, it is very difficult to verify the validity of such a relation, since this would require the computation of the transformation matrices involved. However, in certain cases, such as the construction of realizations of PMD models using state space or descriptor systems, closed form equivalences provide the means to verify the soundness of the method. Indicative examples of such constructions, where the transformation matrices are computed in a systematic way, include Verghese (1978); Bosgra & Van Der Weiden (1981); Anderson et al. (1985); Vafiadis & Karcanias (1995).
Proceeding a step further in the dynamic interpretation of full equivalence, fundamental equivalence of PMDs has been extended to fit the behavioural framework for higher order implicit (autoregressive, AR) systems introduced in Geerts (1996). Given two AR representations
$$A_{i}(\rho)\xi_{i}=0$$ (38)
where $$A_{i}(s)\in\mathbb{R}^{p_{i}\times q_{i}}[s]$$, $$i=1,2$$, are of full normal rank, their behaviours are
$$\mathcal{B}_{i}=\left\{\xi_{i}\in\ell_{imp}^{q_{i}}:A_{i}(\rho)\xi_{i}=0\right\}$$ (39)
where $$\ell_{imp}$$ is the space of impulsive-smooth distributions. For a more detailed presentation of the distributional behavioural framework, we refer the reader to Hautus & Silverman (1983); Geerts (1996) and the references therein. In view of the above setting, the following definition of equivalence of AR representations was introduced. Definition 14 [Fundamental equivalence of AR representations] Pugh et al. (2007) The systems described by (38) are fundamentally equivalent if there exists a bijective polynomial differential map $$N(\rho):\mathcal{B}_{1}\rightarrow\mathcal{B}_{2}$$. The connection between fundamental system equivalence and full equivalence is given by the following.
Theorem 15 (Pugh et al., 2007) The systems described by the AR representations (38) are fundamentally equivalent iff there exists a polynomial differential operator $$N(s)\in\mathbb{R}^{q_{2}\times q_{1}}[s]$$ satisfying the following conditions: (i) $$\exists M(s)\in\mathbb{R}^{p_{2}\times p_{1}}[s]:M(s)A_{1}(s)=A_{2}(s)N(s)$$; (ii) $\delta_{M}\begin{bmatrix}A_{1}(s)\\ N(s)\end{bmatrix}=\delta_{M}(A_{1}(s))$ and $\delta_{M}\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix}=\delta_{M}(A_{2}(s))$; (iii) $\begin{bmatrix}A_{1}(s)\\ N(s)\end{bmatrix}$, $\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix}$ have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$; (iv) $$q_{1}-p_{1}=q_{2}-p_{2}$$. Notice that conditions (i)–(iii) in the above theorem coincide with the requirements of full equivalence, while condition (iv) is essentially an alternative formulation of the assumption $$A_{1}(s),A_{2}(s)\in\mathcal{P}(m,l)$$ in Definition 10. Hence, the polynomial matrices $$A_{1}(s),A_{2}(s)$$ involved in two fundamentally equivalent AR representations are fully equivalent. An open question is whether fundamental equivalence preserves not only the zero structure but also the left and right minimal indices. 5.4. Equivalence transformations of discrete time systems In this subsection, the behaviour of singular discrete time systems is considered over a finite time interval, following the approach used in Lewis (1984); Antoniou et al. (1998); Karampetakis et al. (2001). Consider two AR representations
$$A_{i}(\sigma)\xi_{i}[k]=0$$ (40)
where $$A_{i}(\sigma)\in\mathbb{R}^{r_{i}\times r_{i}}[\sigma]$$, $$i=1,2$$, are of full normal rank and $$\sigma$$ is the forward shift operator, i.e. $$\sigma\xi[k]=\xi[k+1]$$. Following Lewis (1984); Antoniou et al. (1998); Karampetakis et al. (2001), we study the behaviours of (40) over a finite time interval $$k=0,1,2,\ldots,N$$, given appropriate initial and final conditions.
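A minimal computational illustration of this finite-interval behaviour stacks the difference equations into one banded linear system and counts its solutions. The AR representation below is a hypothetical toy example of ours (degree one, one finite and one infinite elementary divisor); for it, the dimension of the solution space splits into one forward direction and one backward direction:

```python
import sympy as sp

# Hypothetical regular AR representation A(sigma) xi[k] = 0 of degree q = 1:
#   row 1: xi1[k+1] - 2 xi1[k] = 0   (finite e.d. sigma - 2: forward mode)
#   row 2: xi2[k] = 0 for k <= N-1   (infinite e.d.: xi2[N] is free and
#                                     propagates backward from the end)
A0 = sp.Matrix([[-2, 0], [0, 1]])
A1 = sp.Matrix([[1, 0], [0, 0]])
r, q, N = 2, 1, 4

# Stack A0 xi[k] + A1 xi[k+1] = 0 for k = 0..N-1, acting on the whole
# trajectory (xi[0], ..., xi[N]) seen as one vector in R^(r(N+1)).
T = sp.zeros(r*N, r*(N + 1))
for k in range(N):
    T[r*k:r*k + r, r*k:r*k + r] = A0
    T[r*k:r*k + r, r*(k + 1):r*(k + 1) + r] = A1

# For this example the solution space has dimension 2: one forward
# direction (parametrized by xi1[0]) plus one backward one (by xi2[N]).
assert len(T.nullspace()) == 2
```

The count matches the total degree of the finite and infinite elementary divisors of this particular example (1 + 1), in line with the spectral characterization cited above.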
In particular, we define
$$\mathcal{B}_{A_{i}(\sigma)}^{N}=\left\{\xi_{i}[k]:A_{i}(\sigma)\xi_{i}[k]=0,\;k=0,1,2,\ldots,N-q_{i}\right\}$$ (41)
where $$q_{i}=\deg A_{i}(\sigma)$$. According to this approach, the finite and infinite elementary divisors give rise to trajectories that can be seen as the forward or backward propagation of the initial and final conditions of the pseudostate, respectively. Thus, in order to preserve this type of behaviour, a new kind of matrix equivalence has to be introduced, one that preserves the finite and infinite elementary divisors of a polynomial matrix and provides a closed form relation between the relevant polynomial matrices. Definition 16 [Divisor equivalence] Karampetakis et al. (2004) Two regular matrices $$A_{i}(s)\in\mathbb{R}[s]^{r_{i}\times r_{i}}$$, $$i=1,2$$, with $$r_{1}\deg A_{1}(s)=r_{2}\deg A_{2}(s)$$, are said to be divisor equivalent if there exist polynomial matrices $$M(s),N(s)$$ of appropriate dimensions, such that
$$M(s)A_{1}(s)=A_{2}(s)N(s)$$ (42)
is satisfied and the composite matrices
$$\begin{bmatrix}M(s) & A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\ -N(s)\end{bmatrix}$$
have no finite and infinite elementary divisors. The key property of divisor equivalence is that if $$A_{1}(s)$$, $$A_{2}(s)$$ are divisor equivalent, then they share an identical finite and infinite elementary divisor structure. In the special case where the $$A_{i}(s)$$ are matrix pencils, it has been shown that the $$A_{i}(s)$$ are strictly equivalent (see Definition 7) if and only if they are divisor equivalent. From a system theoretic point of view, one may define a form of equivalence between two systems, seen as an isomorphism between their behaviours.
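The invariants preserved by divisor equivalence can be computed symbolically: the finite elementary divisors follow from the gcds of the minors of $$A(s)$$ (the classical determinantal-divisor construction of the Smith form), and the infinite ones from the zero structure at $$s=0$$ of the dual matrix $$s^{q}A(1/s)$$. A sketch in Python/sympy, with a toy matrix and helper names of our own choosing:

```python
import sympy as sp
from functools import reduce
from itertools import combinations

s = sp.symbols('s')

def determinantal_divisors(A):
    """d_k = monic gcd of all k x k minors of A, for k = 1, 2, ..."""
    m, n = A.shape
    divs = []
    for k in range(1, min(m, n) + 1):
        minors = [A[list(ri), list(ci)].det()
                  for ri in combinations(range(m), k)
                  for ci in combinations(range(n), k)]
        g = reduce(sp.gcd, minors)
        if g == 0:
            break
        divs.append(sp.Poly(g, s).monic())
    return divs

def invariant_polynomials(A):
    """f_k = d_k / d_(k-1): the invariant polynomials of the Smith form."""
    d = determinantal_divisors(A)
    return [d[0]] + [d[k].quo(d[k - 1]) for k in range(1, len(d))]

# Toy regular matrix with finite elementary divisors s^2 and (s - 1).
A = sp.Matrix([[s**2, 1], [0, s - 1]])
q = 2

# Dual matrix rev A(s) = s^q A(1/s); only the factors s^j in its
# elementary divisors encode the infinite structure of A(s).
revA = (s**q * A.subs(s, 1/s)).applyfunc(sp.cancel)

fed = invariant_polynomials(A)
ied = invariant_polynomials(revA)
assert fed[-1].as_expr() == s**3 - s**2   # = s^2 (s - 1)
assert ied[-1].as_expr() == s**2 - s      # = s (s - 1): the factor s is
                                          # the infinite e.d. of A(s)
```

The same routines applied to two divisor equivalent matrices would, by the key property above, return identical elementary divisor structures.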
Definition 17 [Fundamental equivalence] (Vardulakis & Antoniou, 2003) Two AR representations
$$A_{i}(\sigma)\xi_{i}[k]=0,$$ (43)
where $$A_{i}(\sigma)\in\mathbb{R}[\sigma]^{r_{i}\times r_{i}}$$, $$i=1,2$$, with $$r_{1}\deg A_{1}(\sigma)=r_{2}\deg A_{2}(\sigma)$$, will be called fundamentally equivalent over the finite time interval $$k=0,1,2,\ldots,N$$ if there exists a bijective polynomial map $$N(\sigma):\mathcal{B}_{A_{1}(\sigma)}^{N}\longrightarrow\mathcal{B}_{A_{2}(\sigma)}^{N}$$. It has been shown in Karampetakis et al. (2004) that if $$A_{1}(\sigma)$$, $$A_{2}(\sigma)$$ are divisor equivalent, then the AR representations (43) are fundamentally equivalent. 6. Conclusions Equivalence transformations of polynomial matrices and their system theoretic counterparts are without doubt key concepts in the theory of linear multivariable systems developed during the last four decades. In the present paper we have attempted a review of the main results from both a technical and a historical point of view. However, it is virtually impossible to cover every aspect of the subject. For instance, a very important topic closely related to equivalences of matrices and systems is the problem of linearization/realization of a polynomial matrix/model, which was not touched upon here. Additionally, important work has been done in the study of n-D polynomial matrix equivalences and the related systems (Levy, 1981; Johnson, 1993; Galkowski, 1996; Pugh et al., 1998; Pugh et al., 2005; Boudellioua, 2012, 2013). We hope that the present work will serve as a reference point for further studies. Acknowledgements The authors would like to thank the anonymous referees for their constructive comments and recommendations.
Funding This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARCHIMEDES III: Investing in knowledge society through the European Social Fund. References Anderson B. D. O., Coppel W. A. & Cullen D. J. (1985) Strong system equivalence (I). The ANZIAM J., 27, 194-222. Antoniou E. N., Vardulakis A. I. G. & Karampetakis N. P. (1998) A spectral characterization of the behavior of discrete time AR-representations over a finite time interval. Kybernetika, 34, 555-564. Bosgra O. H. & Van Der Weiden A. J. J. (1981) Realizations in generalized state-space form for polynomial system matrices and the definitions of poles, zeros and decoupling zeros at infinity. Int. J. Control, 33, 393-411. Boudellioua M. S. (2012) Strict system equivalence of 2D linear discrete state space models. J. Control Sci. Eng., https://doi.org/10.1155/2012/609276. Boudellioua M. S. (2013) Further results on the equivalence to Smith form of multivariate polynomial matrices. Control Cybern., 42, 543-551. Bourlès H. & Marinescu B. (1999) Poles and zeros at infinity of linear time-varying systems. IEEE Trans. Autom. Control, 44, 1981-1985. Bourlès H. & Marinescu B. (2011) Linear Time-Varying Systems, vol. 410 of Lecture Notes in Control and Information Sciences. Berlin, Heidelberg: Springer. Cobb D. (1984) Controllability, observability, and duality in singular systems. IEEE Trans. Autom. Control, 29, 1076-1082. Cobb D. (1982) On the solutions of linear differential equations with singular coefficients. J. Diff. Equations, 46, 310-323. Cohn P. M. (1971) Free Rings and Their Relations. LMS Monographs.
Academic Press Inc., London. Coppel W. A. & Cullen D. J. (1985) Strong system equivalence (II). The ANZIAM J., 27, 223-237. Forney G. D. (1975) Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM J. Control, 13, 493-520. Fuhrmann P. A. (1977) On strict system equivalence and similarity. Int. J. Control, 25, 5-10. Galkowski K. (1996) Elementary operations and equivalence of two-dimensional systems. Int. J. Control, 63, 1129-1148. Gantmacher F. R. (1959) The Theory of Matrices. New York: Chelsea Publishing Company. Geerts T. (1993) Solvability conditions, consistency, and weak consistency for linear differential-algebraic equations and time-invariant singular systems: the general case. Linear Algebra Appl., 181, 111-130. Geerts T. (1996) Higher-order continuous-time implicit systems: consistency and weak consistency, impulse controllability, geometric concepts, and invertibility properties. Linear Algebra Appl., 244, 203-253. Gohberg I., Lancaster P. & Rodman L. (1982) Matrix Polynomials. Computer Science and Applied Mathematics. New York: Academic Press Inc. [Harcourt Brace Jovanovich Publishers]. Hautus M. L. J. (1976) The formal Laplace transform for smooth linear systems. Lect. Notes Econ. Math. Syst., 131, 29-47. Hautus M. L. J. & Silverman L. M. (1983) System structure and singular control. Linear Algebra Appl., 50, 369-402. Hayton G., Fretwell P. & Pugh A. (1986) Fundamental equivalence of generalized state-space systems. IEEE Trans. Autom. Control, 31, 431-439. Johnson D. S. (1993) Coprimeness in Multidimensional System Theory and Symbolic Computation. Ph.D. Thesis, Loughborough University of Technology, U.K. Kailath T.
(1980) Linear Systems, 1st edn. Englewood Cliffs, NJ: Prentice-Hall Inc. Karampetakis N. P. (2004) On the solution space of discrete time AR-representations over a finite time horizon. Linear Algebra Appl., 382, 83-116. Karampetakis N. P. (2013) Construction of algebraic-differential equations with given smooth and impulsive behaviour. IMA J. Math. Control Inf., 32, 195-224. Karampetakis N. P., Jones J. & Antoniou E. N. (2001) Forward, backward, and symmetric solutions of discrete ARMA representations. Circ. Syst. Signal Processing, 20, 89-109. Karampetakis N. P. & Vardulakis A. I. (1992) Matrix fractions and full system equivalence. IMA J. Math. Control Inf., 9, 147-160. Karampetakis N. P. & Vardulakis A. I. (1993) Generalized state-space system matrix equivalents of a Rosenbrock system matrix. IMA J. Math. Control Inf., 10, 323-344. Karampetakis N. P., Vologiannidis S. & Vardulakis A. I. G. (2004) A new notion of equivalence for discrete time AR representations. Int. J. Control, 77, 584-597. Kronecker L. (1890) Algebraische Reduktion der Scharen bilinearer Formen. Sitzungsber. Akad. Berlin, 763-776. Levy B. C. (1981) 2-D Polynomial and Rational Matrices and Their Applications for the Modelling of 2-D Dynamical Systems. Ph.D. Thesis, Stanford University, U.S.A. Levy B., Kung S.-Y., Morf M. & Kailath T. (1977) A unification of system equivalence definitions. Proceedings of the 16th IEEE Conference on Decision and Control, New Orleans, USA: IEEE Press, 795-800. Lewis F. (1984) Descriptor systems: decomposition into forward and backward subsystems. IEEE Trans. Autom. Control, 29, 167-170. Lewis F. L. (1986) A survey of linear singular systems. Circ. Syst. Signal Processing, 5, 3-36.
Luenberger D. (1977) Dynamic equations in descriptor form. IEEE Trans. Autom. Control, 22, 312-321. Oberst U. (1990) Multidimensional constant linear systems. Acta Appl. Math., 20, 1-175. Pernebo L. (1977) Notes on strict system equivalence. Int. J. Control, 25, 21-38. Pugh A. C., Antoniou E. N. & Karampetakis N. P. (2007) Equivalence of AR-representations in the light of the impulsive-smooth behaviour. Int. J. Robust Nonlinear Control, 17, 769-785. Pugh A. C., Hayton G. E. & Fretwell P. (1987) Transformations of matrix pencils and implications in linear systems theory. Int. J. Control, 45, 529-548. Pugh A. C., Johnson D. S. & Hayton G. E. (1992) On conditions guaranteeing two polynomial matrices possess identical zero structures. IEEE Trans. Autom. Control, 37, 1383-1386. Pugh A. C., Karampetakis N. P., Vardulakis A. I. G. & Hayton G. E. (1994) A fundamental notion of equivalence for linear multivariable systems. IEEE Trans. Autom. Control, 39, 1141-1145. Pugh A. C., McInerney S. J., Boudellioua M. S., Johnson D. S. & Hayton G. E. (1998) A transformation for 2-D linear systems and a generalization of a theorem of Rosenbrock. Int. J. Control, 71, 491-503. Pugh A. C., McInerney S. J. & El-Nabrawy E. M. O. (2005) Zero structures of n-D systems. Int. J. Control, 78, 277-285. Pugh A. C. & Shelton A. K. (1978) On a new definition of strict system equivalence. Int. J. Control, 27, 657-672. Rosenbrock H. H. (1970) State-Space and Multivariable Theory, 1st edn. London, UK: Nelson. Rosenbrock H. H. (1974) Structural properties of linear dynamical systems. Int.
J. Control, 20, 191-202. Smith M. C. (1981) Matrix fractions and strict system equivalence. Int. J. Control, 34, 869-883. Vafiadis D. & Karcanias N. (1995) Generalized state-space realizations from matrix fraction descriptions. IEEE Trans. Autom. Control, 40, 1134-1137. Vardulakis A. I. G. (1991) Linear Multivariable Control: Algebraic Analysis and Synthesis Methods, 1st edn. Chichester, UK: John Wiley & Sons Ltd. Vardulakis A. I. G. & Antoniou E. (2003) Fundamental equivalence of discrete-time AR representations. Int. J. Control, 76, 1078-1088. Vardulakis A. I. G. & Fragulis G. (1989) Infinite elementary divisors of polynomial matrices and impulsive solutions of linear homogeneous matrix differential equations. Circ. Syst. Signal Processing, 8, 357-373. Vardulakis A. I. G., Limebeer D. N. J. & Karcanias N. (1982) Structure and Smith-MacMillan form of a rational matrix at infinity. Int. J. Control, 35, 701-725. Verghese G. (1978) Infinite frequency behavior in dynamical systems. Ph.D. Thesis, Department of Electrical Engineering, Stanford University. Verghese G., Levy B. & Kailath T. (1981) A generalized state-space for singular systems. IEEE Trans. Autom. Control, 26, 811-831. Walker A. B. (1988) Equivalence Transformations for Linear Systems. Ph.D. Thesis, University of Hull. Weierstrass K. T. W. (1868) Zur Theorie der bilinearen und quadratischen Formen. Monatsberichte der Akademie der Wissenschaften zu Berlin, (Werke, II, 19-44): 310-338. Wolovich W. A. (1974) Linear Multivariable Systems. New York: Springer-Verlag. Zerz E. (2000) On strict system equivalence for multidimensional systems. Int. J. Control, 73, 495-504. © The authors 2016.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. IMA Journal of Mathematical Control and Information. ISSN 0265-0754, eISSN 1471-6887. DOI: 10.1093/imamci/dnw065.
According to this approach, certain classes of linear systems may exhibit impulsive solutions along with the smooth functional ones (see Verghese et al., 1981; Hautus & Silverman, 1983; Cobb, 1984; Vardulakis & Fragulis, 1989; Geerts, 1993, 1996; Vardulakis, 1991, p. 176-223). For a detailed presentation of the framework of impulsive-smooth distributions and related results, we encourage the reader to consult the articles cited above. It is well known that the finite frequency modes of the above systems are associated with the structure of the finite zeros of the matrix $$A(s)$$ in the case of (7), or with that of the corresponding matrices in the other models. On the other hand, impulsive modes or infinite frequency behaviour are closely related to the presence of zeros at $$s=\infty$$ in the corresponding matrix (see for instance Vardulakis, 1991, p. 176-223; Bourlès & Marinescu, 1999; Karampetakis, 2013). When discrete time systems are under consideration, $$\rho$$ is to be interpreted as the forward shift operator $$\rho x(t)=x(t+1)$$. While the regular discrete time case has no significant differences compared to the continuous one, a radically different approach is required when singularity comes into consideration. Singular discrete time systems are in general non-causal and hence not physically realizable. However, there are situations where singular models arise in a natural manner (see for instance Luenberger, 1977), or systems where the independent variable $$t$$ is spatial rather than temporal. The framework proposed for such problems (see Luenberger, 1977; Lewis, 1984, 1986; Antoniou et al., 1998; Karampetakis et al., 2001; Karampetakis, 2004) is to use a finite time interval and consider solutions propagating forward and backward in time, given a set of admissible boundary conditions and exogenous inputs. The finite and infinite elementary divisors of the polynomial matrices involved in the analysis of such models have been shown to play a central role.
Notably, the finite elementary divisor structure of a matrix is completely determined by its finite zero structure and corresponds to the forward solution space, while the infinite elementary divisor structure depends on both the poles and the zeros at infinity of the polynomial matrix associated with the system, giving rise to impulsive behaviour for incompatible initial conditions at $$t=0$$ in the continuous time case, and to the backward solution of the system in the discrete time case. In either case, the finite and infinite zero structure in continuous time systems, or the corresponding elementary divisor structure in the discrete time case, plays a central role in the determination of the behaviour of a linear system. In this respect, depending on the type of the model under investigation and the type of behaviour to be preserved, it is natural to seek transformations between system representations leaving invariant the corresponding spectral characteristics of the matrices involved. Following this approach, we present matrix transformations preserving certain types of spectral structure, accompanied by the induced system transformations preserving the corresponding type of dynamics. 2. Structure of polynomial matrices We first review some basic facts related to the finite zero structure of polynomial matrices. A square polynomial matrix $$A(s)$$ is termed regular if its determinant is nonzero for almost every $$s\in\mathbb{C}$$. In such a case, if $$\det A(s_{0})=0$$ for some $$s_{0}\in\mathbb{C}$$, then $$s_{0}$$ is a finite eigenvalue or zero of $$A(s)$$. Finite zeros can be defined in the case of non-square or square non-regular matrices using a slightly more general framework. A square polynomial matrix $$U(s)$$ satisfying $$\det U(s)\neq0$$ for every $$s\in\mathbb{C}$$ is called unimodular. Unimodular matrices are the invertible elements (units) of the ring of polynomial matrices.
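For a regular matrix, the finite zeros introduced above are simply the roots of the determinant, which is easy to check symbolically; the following short Python/sympy sketch uses a toy matrix of our own choosing:

```python
import sympy as sp

s = sp.symbols('s')

# Toy regular polynomial matrix: det A(s) is not identically zero.
A = sp.Matrix([[s**2 + 1, s], [0, s - 2]])
d = sp.factor(A.det())

# The finite zeros are the roots of det A(s), with multiplicities.
zeros = sp.roots(sp.Poly(d, s))
assert zeros == {2: 1, sp.I: 1, -sp.I: 1}

# A unimodular matrix has a nonzero constant determinant, hence no
# finite zeros at all.
U = sp.Matrix([[1, s], [0, 1]])
assert U.det() == 1
```

Note that, in agreement with the remark above, the toy matrix has complex zeros even though its coefficients are real.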
If $$s_{i},i=1,2,\ldots,\mu$$ are the distinct finite zeros of $$A(s)=\sum_{i=0}^{q}A_{i}s^{i}\in\mathbb{R}[s]^{m\times n}$$, then there exist unimodular matrices $$U_{L}(s),U_{R}(s)$$ of appropriate dimensions such that
$$S_{A(s)}^{\mathbb{C}}(s)=U_{L}(s)A(s)U_{R}(s),$$ (8)
where $$S_{A(s)}^{\mathbb{C}}(s)=\mathrm{diag}\{f_{1}(s),f_{2}(s),\ldots,f_{r}(s),0_{m-r,n-r}\}$$ is the Smith canonical form of $$A(s)$$ in $$\mathbb{C}$$, $$f_{j}(s)=\prod_{i=1}^{\mu}(s-s_{i})^{\sigma_{ij}}$$ are the invariant polynomials of $$A(s)$$, the partial multiplicities of $$s_{i}$$ satisfy $$\sigma_{ij}\leq\sigma_{i,j+1}$$, hence $$f_{j}(s)\mid f_{j+1}(s)$$ (Vardulakis, 1991, p. 9-14), and $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$ is the normal rank of $$A(s)$$. Additionally, the factors $$(s-s_{i})^{\sigma_{ij}}$$ are the finite elementary divisors of $$A(s)$$. We now turn our attention to the infinite structure of a polynomial or rational matrix. A rational function with numerator degree less than or equal to (respectively, less than) the denominator degree is called proper (respectively, strictly proper). The sets of proper and strictly proper rational functions are rings denoted by $$\mathbb{R}_{pr}(s),\mathbb{R}_{sp}(s)$$ respectively. Clearly, $$r(s)\in\mathbb{R}_{pr}(s)$$ iff
$$\lim_{s\rightarrow\infty}r(s)=c\in\mathbb{R}.$$
Moreover, if $$c=0$$, $$r(s)$$ is strictly proper, while in case $$c\neq0$$ the function $$r(s)$$ is called biproper. The sets of $$m\times n$$ matrices with elements in $$\mathbb{R}(s),\mathbb{R}_{pr}(s)$$ and $$\mathbb{R}_{sp}(s)$$ will be denoted by $$\mathbb{R}(s)^{m\times n}$$, $$\mathbb{R}_{pr}(s)^{m\times n}$$ and $$\mathbb{R}_{sp}(s)^{m\times n}$$. A square proper rational matrix $$R(s)$$ is called biproper iff
$$\lim_{s\rightarrow\infty}\det R(s)=c\neq0.$$
Biproper matrices serve as the units of the ring of square proper rational matrices, in the sense that their inverses are proper rational matrices themselves. The Smith-McMillan form at $$s=\infty$$ of $$A(s)\in\mathbb{R}[s]^{m\times n}$$ (Vardulakis, 1991, p.
100), is $$S_{A(s)}^{\infty}(s)=\mathrm{diag}\left\{s^{q_{1}},s^{q_{2}},\ldots,s^{q_{k}},\frac{1}{s^{\hat{q}_{k+1}}},\ldots,\frac{1}{s^{\hat{q}_{r}}},0_{m-r,n-r}\right\} \qquad (9)$$ where $$q_{1}\geq q_{2}\geq\ldots\geq q_{k}>0$$, $$\hat{q}_{r}\geq\hat{q}_{r-1}\geq\ldots\geq\hat{q}_{k+1}\geq0$$ are respectively the orders of the poles and zeros of $$A(s)$$ at $$s=\infty$$ and $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$. Furthermore, it was shown in Vardulakis, 1991, Proposition 3.52, p. 119 that $$q_{1}=q$$. Considering the dual or reverse matrix of $$A(s)$$ $$\mathrm{rev}A(s)=s^{q}A(s^{-1})=\sum_{i=0}^{q}A_{i}s^{q-i}, \qquad (10)$$ the finite Jordan pair (Gohberg et al., 1982, Chapter 7) of $$\mathrm{rev}A(s)$$ corresponding to the zero structure at $$s=0$$ is defined as the infinite Jordan pair $$(X_{\infty},J_{\infty})$$ of $$A(s)$$. As a result, the infinite Jordan pair of $$A(s)$$ satisfies the following properties $$\sum_{i=0}^{q}A_{i}X_{\infty}J_{\infty}^{q-i}=0,\qquad \mathrm{rank}\begin{bmatrix}X_{\infty}\\X_{\infty}J_{\infty}\\\vdots\\X_{\infty}J_{\infty}^{\mu-1}\end{bmatrix}=\mu. \qquad (11)$$ Furthermore, the structure of the infinite Jordan pair of $$A(s)$$ is closely related (Vardulakis, 1991, Section 4.2.3) to its Smith–McMillan form at $$s=\infty$$ given in (9). In particular, $$J_{\infty}=\mathrm{blockdiag}_{i=1,\ldots,r}\{J_{i\infty}\}\in\mathbb{R}^{\mu\times\mu},$$ where $$J_{i\infty}=\begin{bmatrix}0&1&\cdots&0\\0&0&\ddots&\vdots\\\vdots&\ddots&\ddots&1\\0&\cdots&0&0\end{bmatrix}\in\mathbb{R}^{\mu_{i}\times\mu_{i}},$$ and $$\mu_{i}=\begin{cases}q_{1}-q_{i},&i=2,\ldots,k\\q_{1}+\hat{q}_{i},&i=k+1,\ldots,r.\end{cases}$$ The columns of the matrix $$X_{\infty}$$ can be recovered (Vardulakis, 1991, Section 4.2.2) from the biproper matrix which multiplies $$A(s)$$ on the right to obtain (9). We now turn our attention to the study of the structural invariants of non-regular polynomial matrices. Let $$A(s)\in\mathbb{R}[s]^{m\times n}$$ be a polynomial matrix with $$r=\textrm{rank}_{\mathbb{R}(s)}A(s)$$. The right null space of $$A(s)$$ consists of all rational vectors $$\xi(s)\in\mathbb{R}(s)^{n\times1}$$ satisfying $$A(s)\xi(s)=0.$$ It can easily be shown that the right null space of $$A(s)$$ is a vector space over the field $$\mathbb{R}(s)$$ of dimension $$(n-r)$$.
Thus, there exist $$(n-r)$$ linearly independent (over $$\mathbb{R}(s)$$) rational column vectors $$\bar{f}_{i}(s)\in\mathbb{R}(s)^{n\times1}$$, $$i=1,2,\ldots,n-r$$ spanning the right null space of $$A(s)$$. Setting $$\overline{F}(s)=\left[\overline{f}_{1}(s),\overline{f}_{2}(s),\ldots,\overline{f}_{n-r}(s)\right]$$, it can be shown that $$A(s)\overline{F}(s)=0$$ and $$\textrm{rank}_{\mathbb{R}(s)}\overline{F}(s)=n-r$$. As noted in Forney (1975); Kailath, 1980, the structure of the right null space can be captured by the notion of a minimal polynomial basis: using a more elaborate procedure, one can compute a basis for the right null space consisting only of polynomial vectors of minimal column degree complexity. Before we present the characterization of a minimal polynomial basis, we introduce some terminology regarding the column degrees of a polynomial matrix. Let $$F(s)=[f_{1}(s),f_{2}(s),\ldots,f_{n-r}(s)]\in\mathbb{R}[s]^{n\times(n-r)} \qquad (12)$$ be a polynomial matrix and $$f_{i}(s)\in\mathbb{R}[s]^{n\times1}$$, $$i=1,2,\ldots,n-r$$ be its column vectors. Let also $$v_{i}=\deg f_{i}(s)$$, $$i=1,2,\ldots,n-r$$, be the column degrees of $$F(s)$$ and write $$F(s)=F_{hc}\,\mathrm{diag}\{s^{v_{1}},s^{v_{2}},\ldots,s^{v_{n-r}}\}+F_{l}(s)$$ where $$F_{l}(s)$$ is a polynomial matrix with column degrees strictly lower than the corresponding ones of $$F(s)$$ and $$F_{hc}$$ is the highest column degree coefficient matrix of $$F(s)$$. The matrix $$F(s)$$ is called column reduced or column proper if $$F_{hc}$$ has full column rank. Notably, it has been shown that every polynomial matrix with full normal column rank can be reduced to a column proper one by post-multiplication by an appropriate unimodular matrix (see Vardulakis, 1991, Section 1.2.3).
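The computation of a rational null-space basis and its reduction to polynomial vectors by clearing denominators can be sketched as follows; the 1×3 matrix is a hypothetical example, and here the resulting polynomial vectors happen to be of minimal degree:

```python
import sympy as sp
from functools import reduce

s = sp.symbols('s')

# Illustrative 1x3 polynomial matrix of normal rank r = 1,
# so its right null space has dimension n - r = 2.
A = sp.Matrix([[s, s**2, 1]])

# sympy returns a rational basis of the right null space over R(s)...
rational_basis = A.nullspace()

# ...which we clear of denominators to obtain polynomial null-space vectors.
cols = []
for v in rational_basis:
    lcm_den = reduce(sp.lcm, [sp.denom(e) for e in v])
    cols.append(v * lcm_den)
F = sp.Matrix.hstack(*cols)

# F(s) is a polynomial basis: A(s) F(s) = 0 with n - r = 2 columns.
assert sp.simplify(A * F) == sp.zeros(1, 2)
print(F)
```

Each resulting column has degree one, so the right minimal indices of this particular $$A(s)$$ are $$v_{1}=v_{2}=1$$.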
The columns of a polynomial matrix of the form (12) form a minimal polynomial basis for the right null space of $$A(s)$$ (see Forney, 1975; Kailath, 1980, Section 6.5.4 or Vardulakis, 1991, Section 1.8), if the following properties are satisfied: $$A(s)F(s)=0, \qquad (13)$$ that is, the columns of $$F(s)$$ belong to the right null space of $$A(s)$$; $$\mathrm{rank}\,F(s)=n-r,\quad\forall s\in\mathbb{C}, \qquad (14)$$ which essentially states that $$F(s)$$ has no finite zeros; and $$\mathrm{rank}\,F_{hc}=n-r, \qquad (15)$$ where $$F_{hc}$$ is the highest column degree coefficient matrix of $$F(s)$$, which implies that the latter is column reduced. Notice also that this last condition implies the absence of zeros at $$s=\infty$$ in $$F(s)$$ (see Vardulakis, 1991, Corollary 3.100). It can be shown (Forney, 1975; Kailath, 1980, Section 6.5.4) that if $$F(s)$$ satisfies the above conditions and $$v_{1}\leq v_{2}\leq\ldots\leq v_{n-r}$$ are its column degrees in ascending order, then the column degrees $$v_{1}^{\prime}\leq v_{2}^{\prime}\leq\ldots\leq v_{n-r}^{\prime}$$ of any other basis of the right null space of $$A(s)$$ will satisfy $$v_{i}\leq v_{i}^{\prime}$$ for every $$i=1,2,\ldots,n-r$$. The integers $$0\leq v_{1}\leq v_{2}\leq\ldots\leq v_{n-r}$$ are known as the right minimal indices of $$A(s)$$. The left null space of a given polynomial matrix $$A(s)$$ is structured accordingly. The row degrees of a left minimal basis are the left minimal indices of $$A(s)$$ and will be denoted $$0\leq\eta_{1}\leq\eta_{2}\leq\ldots\leq\eta_{m-r}$$. In the special case of matrix pencils, that is, first order polynomial matrices, Weierstrass (1868) and Kronecker (1890) introduced the Weierstrass and Kronecker canonical forms of a regular or non-regular matrix pencil respectively (see Gantmacher, 1959, Vol. 2, Theorem 3, p. 28 and Gantmacher, 1959, Vol. 2, Theorem 5, p. 40, for a more easily accessible source).
The well known Kronecker canonical form of a possibly non-regular matrix pencil $$sE-A$$ with $$E,A\in\mathbb{R}^{p\times q}$$ is a matrix pencil of the form $$K(s)=\mathrm{diag}\{sI_{n}-J_{\mathbb{C}},\,sJ_{\infty}-I_{\mu},\,L_{\varepsilon}(s),\,L_{\eta}(s)\} \qquad (16)$$ where the block $$sI_{n}-J_{\mathbb{C}}$$ corresponds to the finite zeros (finite elementary divisors) of the pencil, with $$J_{\mathbb{C}}$$ being in Jordan canonical form. The block $$sJ_{\infty}-I_{\mu}$$ corresponds to the infinite elementary divisors of the pencil, where $$J_{\infty}$$ is in Jordan form with all its diagonal elements equal to zero. Finally, the block $$L_{\varepsilon}(s)$$ (respectively $$L_{\eta}(s)$$) is a block diagonal matrix, composed of non-square blocks $$L_{\varepsilon_{i}}(s),~i=1,\ldots,r$$ (respectively $$L_{\eta_{i}}(s),~i=1,\ldots,l$$) of the form $$L_{\varepsilon_{i}}(s)=sM_{\varepsilon_{i}}-N_{\varepsilon_{i}}\in\mathbb{R}[s]^{\varepsilon_{i}\times(\varepsilon_{i}+1)},\qquad L_{\eta_{i}}(s)=sM_{\eta_{i}}^{\top}-N_{\eta_{i}}^{\top}\in\mathbb{R}[s]^{(\eta_{i}+1)\times\eta_{i}}$$ where $$M_{\varepsilon_{i}}=\begin{bmatrix}1&0&\cdots&0\\\vdots&\ddots&\ddots&\vdots\\0&\cdots&1&0\end{bmatrix}\in\mathbb{R}^{\varepsilon_{i}\times(\varepsilon_{i}+1)},\qquad N_{\eta_{i}}=\begin{bmatrix}0&1&\cdots&0\\\vdots&\ddots&\ddots&\vdots\\0&\cdots&0&1\end{bmatrix}\in\mathbb{R}^{\eta_{i}\times(\eta_{i}+1)}.$$ The block $$L_{\varepsilon_{i}}(s)$$ ($$L_{\eta_{i}}(s)$$) is called the right (left) Kronecker block and the integers $$\varepsilon_{i}$$ ($$\eta_{i}$$) are the right (left) Kronecker indices of the original pencil. Moreover, if we set $$\varepsilon=\sum_{i=1}^{r}\varepsilon_{i}$$ and $$\eta=\sum_{i=1}^{l}\eta_{i}$$, then $$p=n+\mu+\varepsilon+\eta+l$$ and $$q=n+\mu+\varepsilon+\eta+r$$. 3. Transformations preserving the finite structure 3.1. Matrix similarity The most common form of matrix equivalence related to linear systems theory is matrix similarity. Matrix similarity relates constant square matrices of the same dimension. Similar matrices represent the same linear transformation expressed in two different bases. Definition 1 [Similarity] (Gantmacher, 1959, Vol. 1, Definition 10, p.
67) Two constant matrices $$A_{1},A_{2}\in\mathbb{R}^{n\times n}$$ are similar if there exists an invertible matrix $$M$$ such that $$A_{1}=MA_{2}M^{-1}. \qquad (17)$$ Notice that (17) can also be written in the form $$(sI_{n}-A_{1})=M(sI_{n}-A_{2})M^{-1}. \qquad (18)$$ Similarity, as a relation on the set of $$n\times n$$ constant matrices, is easily shown to be an equivalence relation. The eigenstructures of two similar matrices, that is, the eigenvalues along with their algebraic and geometric multiplicities, are identical. In particular, given a matrix $$A$$ with $$v$$ distinct eigenvalues $$\lambda_{1},\lambda_{2},\ldots,\lambda_{v}\in\mathbb{C}$$, there exists an invertible matrix $$M$$ (see for instance Gantmacher, 1959, Vol. 1, p. 152), such that $$A=MJM^{-1}$$, where $$J=\mathrm{diag}_{i=1,\ldots,v;\,j=1,\ldots,v_{i}}\{J_{ij}\}$$ is the Jordan canonical form of $$A$$, $$v_{i}$$ is the geometric multiplicity of $$\lambda_{i}$$, $$J_{ij}=\begin{bmatrix}\lambda_{i}&1&\cdots&0\\0&\lambda_{i}&\ddots&\vdots\\\vdots&&\ddots&1\\0&\cdots&0&\lambda_{i}\end{bmatrix}\in\mathbb{C}^{v_{ij}\times v_{ij}}$$ and $$\sum_{j=1}^{v_{i}}v_{ij}$$ is the algebraic multiplicity of $$\lambda_{i}$$. In general, the above decomposition, even if $$A$$ is a real matrix, may result in complex matrices $$M,J$$. However, a real alternative of $$J$$ and $$M$$ is also possible. Given a linear system described by a set of state space equations of the form (1), the eigenstructure of the matrix $$A\in\mathbb{R}^{p\times p}$$ plays a crucial role in the dynamics of the system. A question naturally arising in the study of such systems is how equation (1) changes when a transformation of the state vector takes place, i.e. when $$x(t)=M\hat{x}(t). \qquad (19)$$ It is easy to see that the above change of coordinates results in a new description of the form $$\rho\hat{x}(t)=\hat{A}\hat{x}(t)+\hat{B}u(t),\qquad y(t)=\hat{C}\hat{x}(t)+Du(t)$$ where $$\hat{A}=M^{-1}AM$$, $$\hat{B}=M^{-1}B$$ and $$\hat{C}=CM$$.
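A quick numerical check of this invariance (a sketch with an arbitrarily chosen state matrix and change of basis, not from the original text) confirms that a change of coordinates leaves the eigenvalues untouched:

```python
import numpy as np

# Illustrative state matrix with eigenvalues -1 and -2, and a change of basis M.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # invertible (det = 1)

A_hat = np.linalg.inv(M) @ A @ M    # A_hat = M^{-1} A M, as in the text

# Similarity preserves the eigenvalues (indeed the whole eigenstructure).
eig_A = np.sort(np.linalg.eigvals(A))
eig_A_hat = np.sort(np.linalg.eigvals(A_hat))
assert np.allclose(eig_A, eig_A_hat)
print(eig_A)                        # [-2. -1.]
```

The same check applied to the pencil form (18) would compare the roots of $$\det(sI-A)$$ and $$\det(sI-\hat{A})$$, which are equal for the same reason.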
Equivalently, two state space systems of the form (1) will be termed system similar if there exists an invertible matrix $$M$$ such that $$\begin{bmatrix}sI-\hat{A}&\hat{B}\\-\hat{C}&D\end{bmatrix}=\begin{bmatrix}M^{-1}&0\\0&I\end{bmatrix}\begin{bmatrix}sI-A&B\\-C&D\end{bmatrix}\begin{bmatrix}M&0\\0&I\end{bmatrix}.$$ This type of equivalence between two state space representations is known in the control theory literature as system similarity. 3.2. Unimodular equivalence Unimodular equivalence provides a transformation between rational matrices of the same dimensions. In what follows, we focus on the polynomial case, which is widely used in linear systems theory. Recall that unimodular matrices are polynomial matrices whose inverse is also polynomial, that is, their determinant is a non-zero constant. Pre- or post-multiplication of a polynomial matrix by a unimodular one corresponds to performing row or column elementary operations on the matrix. This naturally leads to the following definition. Definition 2 [Unimodular equivalence] (Gantmacher, 1959, Vol. 1, Definition 2', p. 133), Rosenbrock (1970) Two polynomial matrices $$A_{i}(s)\in\mathbb{R}[s]^{p\times q}$$, $$i=1,2$$ are unimodular equivalent, if there exist unimodular matrices $$U(s)\in\mathbb{R}[s]^{p\times p}$$, $$V(s)\in\mathbb{R}[s]^{q\times q}$$, such that $$A_{1}(s)=U(s)A_{2}(s)V(s).$$ Unimodular equivalence is an equivalence relation, having as a canonical form the common Smith form of the equivalent polynomial matrices. The finite elementary divisor structure, and therefore the finite zeros and their multiplicities, form a complete set of invariants for unimodular equivalence. In the special case where $$A_{i}(s)=sI_{p}-A_{i}$$, $$i=1,2$$, unimodular equivalence reduces to matrix similarity of the $$A_{i}$$'s (see Gantmacher, 1959, Vol. 1, Theorem 7, p. 147). From a system theoretic point of view, unimodular equivalence relates matrix fraction descriptions (MFDs) of a system with a given transfer function $$G(s)\in\mathbb{R}(s)^{m\times l}$$.
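A small symbolic check (with a hypothetical right MFD and unimodular $$V(s)$$, chosen only for illustration) shows that post-multiplying $$N_{R},D_{R}$$ by a unimodular matrix leaves the transfer function unchanged:

```python
import sympy as sp

s = sp.symbols('s')

# Illustrative right MFD G(s) = N_R(s) * D_R(s)^(-1) with l = 2 inputs, m = 1 output.
D_R = sp.Matrix([[s + 1, 0],
                 [0, s + 2]])
N_R = sp.Matrix([[1, 1]])
G = sp.simplify(N_R * D_R.inv())

# A unimodular V(s): its determinant is a non-zero constant (here 1).
V = sp.Matrix([[1, s],
               [0, 1]])
assert sp.det(V) == 1

# Post-multiplication yields another right MFD of the very same transfer function.
D_hat, N_hat = D_R * V, N_R * V
G_hat = sp.simplify(N_hat * D_hat.inv())
assert sp.simplify(G_hat - G) == sp.zeros(1, 2)
```

The same computation with $$V(s)$$ merely invertible over $$\mathbb{R}(s)$$ would also preserve $$G(s)$$, but only a unimodular $$V(s)$$ keeps $$\hat{D}_{R},\hat{N}_{R}$$ polynomial and preserves the decoupling zero structure discussed next.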
In particular, given right and left MFDs of the form (4), it is easy to see that choosing $$V(s),U(s)$$ unimodular of appropriate dimensions, the pairs $$\hat{N}_{R}(s)=N_{R}(s)V(s)$$, $$\hat{D}_{R}(s)=D_{R}(s)V(s)$$ and $$\hat{N}_{L}(s)=U(s)N_{L}(s)$$, $$\hat{D}_{L}(s)=U(s)D_{L}(s)$$ give rise to the same transfer function $$G(s)$$, preserving the input (respectively output) decoupling zeros of the MFDs, i.e. the zeros of $$\begin{bmatrix}D_{L}(s)&N_{L}(s)\end{bmatrix}\quad\left(\text{respectively}\ \begin{bmatrix}N_{R}(s)\\-D_{R}(s)\end{bmatrix}\right). \qquad (20)$$ On the other hand, it is natural to consider a left and a right MFD of the form (4) giving rise to the same transfer function as equivalent, if they also share identical input (output) decoupling zero structure. Since left MFDs have no output decoupling zeros and right MFDs have no input decoupling zeros, we can deduce that a left and a right MFD of the form (4) can be equivalent if and only if there are no input and output decoupling zeros in either description. Notice that (4) can be written as $$D_{L}(s)N_{R}(s)=N_{L}(s)D_{R}(s) \qquad (21)$$ or $$\begin{bmatrix}D_{L}(s)&N_{L}(s)\end{bmatrix}\begin{bmatrix}N_{R}(s)\\-D_{R}(s)\end{bmatrix}=0, \qquad (22)$$ while the absence of input and output decoupling zeros can be expressed as the absence of zeros in the respective compound matrices in (20). Notice that the coprimeness requirement on the pairs in the compound matrices in (20), in conjunction with equation (21), essentially implies that the pairs of numerators and denominators share common finite zero structures, despite the fact that $$D_{L}(s)$$ and $$D_{R}(s)$$ may be of different dimensions. In order to overcome the difficulty of relating systems with a different number of pseudostates but the same number of inputs and outputs, the class of polynomial matrices $$\mathcal{P}(m,l)$$ with dimensions $$(r+m)\times(r+l)$$, for all $$r=1,2,\ldots$$, was introduced and the following definition was proposed. Definition 3 [Extended unimodular equivalence] (Pugh & Shelton, 1978) Two polynomial matrices $$A_{1}(s)$$, $$A_{2}(s)\in\mathcal{P}(m,l)$$ are extended unimodular equivalent (e.u.e.)
if there exist polynomial matrices of appropriate dimensions $$M(s)$$, $$N(s)$$ such that $$M(s)A_{1}(s)=A_{2}(s)N(s) \qquad (23)$$ where the composite matrices $$\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\-N(s)\end{bmatrix} \qquad (24)$$ have full rank and no zeros in $$\mathbb{C}$$. Proving that e.u.e. is an equivalence relation is not a trivial task (Pugh & Shelton, 1978; Smith, 1981). It has been shown in Pugh & Shelton (1978) that e.u.e. preserves the finite zero structure as well as the normal rank defect of the matrices involved. Additionally, it should be mentioned that conditions (23) and (24) are a well known necessary and sufficient condition for two polynomial matrices to be left-similar (Cohn, 1985, Proposition 6.5, p. 28), while at the same time they provide necessary and sufficient conditions for the corresponding behaviors to be isomorphic (Oberst, 1990, Theorem 54, p. 33). A more general form of equivalence relation between systems expressed as polynomial matrix descriptions (PMDs), known as strict system equivalence (s.s.e.), was introduced in Rosenbrock (1970), while in Wolovich (1974) a different approach using state space reduction was followed to establish the same result. Additionally, an alternative formulation of s.s.e. was proposed in Fuhrmann (1977) and was proven equivalent to the original form of s.s.e. in Levy et al. (1977). In what follows we shall focus on the definition of s.s.e. given in Fuhrmann (1977). Let $$P_{i}(s)=\begin{bmatrix}A_{i}(s)&-B_{i}(s)\\C_{i}(s)&D_{i}(s)\end{bmatrix},\quad i=1,2 \qquad (25)$$ be the system matrices of two PMDs $${\it\Sigma}_{i}$$, where $$A_{i}(s)\in\mathbb{R}[s]^{r_{i}\times r_{i}}$$, $$B_{i}(s)\in\mathbb{R}[s]^{r_{i}\times l}$$, $$C_{i}(s)\in\mathbb{R}[s]^{m\times r_{i}}$$, $$D_{i}(s)\in\mathbb{R}[s]^{m\times l}$$.
Definition 4 [Fuhrmann's s.s.e.] (Fuhrmann, 1977) $${\it\Sigma}_{1}$$ and $${\it\Sigma}_{2}$$ are Fuhrmann strictly system equivalent if there exist polynomial matrices $$M_{1}(s)$$, $$M_{2}(s)$$, $$X_{1}(s)$$, $$X_{2}(s)$$ such that $$\begin{bmatrix}M_{1}(s)&0\\X_{1}(s)&I_{m}\end{bmatrix}\begin{bmatrix}A_{1}(s)&-B_{1}(s)\\C_{1}(s)&D_{1}(s)\end{bmatrix}=\begin{bmatrix}A_{2}(s)&-B_{2}(s)\\C_{2}(s)&D_{2}(s)\end{bmatrix}\begin{bmatrix}M_{2}(s)&X_{2}(s)\\0&I_{l}\end{bmatrix} \qquad (26)$$ where $$\begin{bmatrix}M_{1}(s)&A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\-M_{2}(s)\end{bmatrix}$$ have full rank and no zeros in $$\mathbb{C}$$. A dynamic interpretation of strict system equivalence was given in Pernebo (1977), in terms of the existence of a bijective differential map between the solution sets of the systems: two system matrices are s.s.e. if and only if there exists a bijective mapping between the sets of solutions of $${\it\Sigma}_{1}$$ and $${\it\Sigma}_{2}$$ of the form $$\xi_{2}(t)=N(\rho)\xi_{1}(t)+Y(\rho)u(t),$$ where $$N(\rho)$$ and $$Y(\rho)$$ are polynomial matrices, $$\xi_{i}(t)$$, $$i=1,2$$ are the pseudostate responses and $$u(t)$$ is the common input of the two systems, producing equal outputs $$y_{i}(t)$$, $$i=1,2$$. The bijective mapping must be such that the internal variables of the one system are linear combinations of the internal variables, the inputs and their derivatives of arbitrary order, of the other system. From a system theoretic point of view, if two PMDs with system matrices as in (25) are s.s.e., then it can be shown (Pugh & Shelton, 1978) that the following matrix pairs, for $$i=1,2$$, $$A_{i}(s),\quad\begin{bmatrix}A_{i}(s)&-B_{i}(s)\end{bmatrix},\quad\begin{bmatrix}A_{i}(s)\\C_{i}(s)\end{bmatrix},\quad P_{i}(s), \qquad (27)$$ satisfy an e.u.e. relation. Hence, s.s.e. preserves the poles, the input and output decoupling zeros and the system zeros of $${\it\Sigma}_{i}$$, that is, the zeros of the respective matrices in (27). Additionally, Fuhrmann strict system equivalence and its behavioral interpretation have been generalized in Pugh et al. (1998); Zerz (2000) to the case of multidimensional systems. 4.
Transformations preserving the structure at infinity The class of singular linear systems has received special attention by many authors (see Rosenbrock, 1974; Verghese et al., 1981; Cobb, 1982; Hautus & Silverman, 1983; Vardulakis, 1991). A distinguishing feature of this class of linear systems is their infinite frequency behavior, which is directly related to the so-called pole/zero structure at infinity of the polynomial matrices involved. As noted in (Kailath, 1980, p. 398), while unimodular equivalence operations preserve the finite zero structure of polynomial matrices, they will in general destroy the corresponding structure at infinity. A partial solution to this difficulty was given with the introduction of biproper equivalence of rational matrices. Definition 5 [Equivalence at infinity] (Vardulakis et al., 1982) Two rational matrices $$A_{i}(s)\in\mathbb{R}(s)^{p\times q}$$, $$i=1,2$$, are equivalent at $$s=\infty$$ if there exist biproper rational matrices $$U(s)\in\mathbb{R}_{pr}(s)^{p\times p},V(s)\in\mathbb{R}_{pr}(s)^{q\times q}$$, such that $$A_{1}(s)=U(s)A_{2}(s)V(s).$$ Equivalence at $$s=\infty$$ preserves the pole/zero structure at $$s=\infty$$, which is exposed by the Smith–McMillan form at $$\infty$$ (see (9)). To overcome the difficulty of comparing the structure at $$s=\infty$$ of matrices of not necessarily equal dimensions, the following definition was given in Walker (1988). Definition 6 [Extended causal equivalence] Walker (1988) Two polynomial matrices $$A_{1}(s)$$, $$A_{2}(s)\in\mathcal{P}(m,l)$$ are extended causal equivalent (e.c.e.) if there exist proper matrices of appropriate dimensions $$M(s)$$, $$N(s)$$ such that $$M(s)A_{1}(s)=A_{2}(s)N(s) \qquad (28)$$ where the composite matrices $$\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\-N(s)\end{bmatrix} \qquad (29)$$ have full rank for all $$s\in\mathbb{C}$$ and no zeros at $$s=\infty$$. 5. Transformations preserving the structure at $$\mathbb{C}\cup\{\infty\}$$ 5.1.
Strict equivalence The simplest form of polynomial matrices having an interesting finite and infinite structure is the first order one, termed in the literature matrix pencils. The study of equivalences between matrix pencils dates back to Weierstrass (1868), who considered regular pencils, introduced the concept of strict equivalence and discovered the canonical form that is named after him. An extension of strict equivalence to the non-regular case was given by Kronecker in Kronecker (1890), where the Kronecker canonical form was established. Both results have significant system theoretic applications and/or interpretations, summarized in Gantmacher, 1959, Vol. 2, Chapter XII. Definition 7 [Strict equivalence] (Gantmacher, 1959, Vol. 2, Definition 1, p. 24) Two matrix pencils $$sE_{i}-A_{i}$$, $$i=1,2$$, with $$E_{i},A_{i}\in\mathbb{R}^{p\times q}$$, are strictly equivalent if there exist non-singular matrices $$M\in\mathbb{R}^{p\times p}$$, $$N\in\mathbb{R}^{q\times q}$$, such that $$sE_{1}-A_{1}=M(sE_{2}-A_{2})N.$$ Note that strict equivalence corresponds to unimodular equivalence and equivalence at $$\infty$$ at the same time. Also, strictly equivalent matrix pencils share identical finite and infinite elementary divisor structure and left/right minimal indices (Gantmacher, 1959, Vol. 2, p. 39), rendered via the well known Kronecker canonical form. Every matrix pencil $$sE-A\in\mathbb{R}[s]^{p\times q}$$ is strictly equivalent to a pencil of the form (16). As a result, strict equivalence of matrix pencils serves as a tool to identify descriptor representations of the form (2) corresponding to the same underlying linear system. Given a descriptor system of the form (2), one may apply a change of coordinates on the descriptor vector similar to that in (19), that is $$x(t)=M\hat{x}(t)$$, along with a premultiplication of the first descriptor equation by a square invertible matrix $$N$$.
This gives rise to a new descriptor system of the form $$\rho\hat{E}\hat{x}(t)=\hat{A}\hat{x}(t)+\hat{B}u(t),\qquad y(t)=\hat{C}\hat{x}(t)+Du(t)$$ where $$\hat{E}=NEM$$, $$\hat{A}=NAM$$, $$\hat{B}=NB$$, $$\hat{C}=CM$$. It is easy to verify that the matrix pencils $$sE-A$$ and $$s\hat{E}-\hat{A}$$ are strictly equivalent in the sense of Definition 7. Notably, the system matrices of the descriptor systems related as described above are connected via $$\begin{bmatrix}s\hat{E}-\hat{A}&-\hat{B}\\\hat{C}&D\end{bmatrix}=\begin{bmatrix}N&0\\0&I_{m}\end{bmatrix}\begin{bmatrix}sE-A&-B\\C&D\end{bmatrix}\begin{bmatrix}M&0\\0&I_{l}\end{bmatrix}$$ which essentially states that the system matrix pencils are also strictly equivalent. Similar relations hold for the pairs $\begin{bmatrix}s\hat{E}-\hat{A} & -\hat{B}\end{bmatrix}$, $\begin{bmatrix}sE-A & -B\end{bmatrix}$ and $\begin{bmatrix}s\hat{E}-\hat{A}\\ \hat{C} \end{bmatrix}$, $\begin{bmatrix}sE-A\\ C \end{bmatrix}$. 5.2. Complete equivalence In an attempt to obtain a system equivalence that preserves the structural invariants of descriptor systems of the same input/output but possibly different pseudostate dimensions, both in $$\mathbb{C}$$ and at $$s=\infty$$, and hence their finite and infinite frequency behavior, strong system equivalence was introduced in Verghese et al. (1981). The definition of strong system equivalence in Verghese et al. (1981) can be seen as a collection of allowable operations on the descriptor system matrices. In Anderson et al. (1985) strong equivalence received a more compact formulation as constant system equivalence, and took on a closed form description in Pugh et al. (1987) as complete system equivalence. In the present note, we will focus only on the presentation of complete system equivalence. Complete equivalence can be seen as a particular case of extended unimodular equivalence and causal equivalence. Definition 8 [Complete equivalence] Pugh et al.
(1987) Two matrix pencils $$sE_{i}-A_{i}$$, $$i=1,2$$, with $$sE_{i}-A_{i}\in\mathcal{P}(m,l)$$, are completely equivalent if there exist constant matrices $$M,N$$ of appropriate dimensions such that $$M(sE_{1}-A_{1})=(sE_{2}-A_{2})N$$ where the composite matrices $$\begin{bmatrix}M&sE_{2}-A_{2}\end{bmatrix},\qquad\begin{bmatrix}sE_{1}-A_{1}\\-N\end{bmatrix} \qquad (30)$$ have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$. As mentioned earlier, complete system equivalence can be seen as a generalization of strict equivalence of matrix pencils, with the extra feature of being able to compare matrix pencils of different dimensions. It has been shown in Pugh et al. (1987) that two regular pencils are completely equivalent if they possess the same finite and non-trivial infinite elementary divisors, and thus the same zeros. Furthermore, complete equivalence can be seen as a particular case of e.u.e. (Definition 3) and e.c.e. (Definition 6). From a system theoretic point of view, complete equivalence takes the form described below. Given two descriptor systems of the form $$\rho E_{i}x_{i}(t)=A_{i}x_{i}(t)+B_{i}u(t),\qquad y(t)=C_{i}x_{i}(t)+D_{i}u(t) \qquad (31)$$ for $$i=1,2$$, where $$A_{i},E_{i}\in\mathbb{R}^{p_{i}\times p_{i}}$$, $$B_{i}\in\mathbb{R}^{p_{i}\times l}$$, $$C_{i}\in\mathbb{R}^{m\times p_{i}}$$, $$D_{i}\in\mathbb{R}^{m\times l}$$ and $$x_{i}(t)\in\mathbb{R}^{p_{i}}$$, we have the following Definition 9 [Complete system equivalence] Pugh et al. (1987) Two descriptor systems of the form (31) are completely system equivalent if there exist constant matrices $$M,N,X,Y$$ of appropriate dimensions such that $$\begin{bmatrix}M&0\\X&I_{m}\end{bmatrix}\begin{bmatrix}sE_{1}-A_{1}&-B_{1}\\C_{1}&D_{1}\end{bmatrix}=\begin{bmatrix}sE_{2}-A_{2}&-B_{2}\\C_{2}&D_{2}\end{bmatrix}\begin{bmatrix}N&Y\\0&I_{l}\end{bmatrix} \qquad (32)$$ where the composite matrices $$\begin{bmatrix}M&sE_{2}-A_{2}\end{bmatrix},\qquad\begin{bmatrix}sE_{1}-A_{1}\\-N\end{bmatrix} \qquad (33)$$ have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$. Notice that the above definition does not require the two descriptor vectors $$x_{i}(t)$$ to be of the same dimension. It was shown in Pugh et al. (1987) that two descriptor systems are completely system equivalent iff they are strongly system equivalent.
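For pencils of equal dimensions, the effect of a strict (and hence complete) equivalence transformation can be checked symbolically on a toy example; $$E,A,M,N$$ below are arbitrary illustrative choices, with $$E$$ singular so that an infinite elementary divisor is present:

```python
import sympy as sp

s = sp.symbols('s')

# Illustrative descriptor pencil sE - A with E singular, plus arbitrary
# invertible constant transformations (N acts on rows, M on columns).
E = sp.Matrix([[1, 0],
               [0, 0]])
A = sp.Matrix([[-2, 0],
               [0, 1]])
N_c = sp.Matrix([[1, 1],
                 [0, 1]])
M_c = sp.Matrix([[2, 0],
                 [1, 1]])
assert sp.det(N_c) != 0 and sp.det(M_c) != 0

P1 = s * E - A
P2 = N_c * P1 * M_c     # a strictly equivalent pencil

# The determinants differ only by the non-zero constant det(N)*det(M), so the
# finite elementary divisors (here the single finite zero s = -2) coincide.
ratio = sp.simplify(sp.det(P2) / sp.det(P1))
print(ratio)            # 2
```

Since $$\deg\det(sE-A)=1<2=p$$, the pencil also carries an infinite elementary divisor, which the constant invertible transformation likewise leaves intact.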
The corresponding relation between strong and constant system equivalence was established in Anderson et al. (1985). Furthermore, in Hayton et al. (1986) it was shown that two descriptor systems are completely equivalent iff there exist bijective maps that leave the input/output behavior and the essential dynamics unchanged. This form of equivalence is termed fundamental equivalence in Hayton et al. (1986). 5.3. Full equivalence In Anderson et al. (1985) a generalization of strong and complete system equivalence to polynomial matrix descriptions of arbitrary degree was proposed, and the dynamic interpretation of the corresponding invariants was studied in Coppel & Cullen (1985). However, in its original formulation, strong system equivalence of PMDs suffered from a serious technical drawback: two PMDs were termed strongly system equivalent if they are both polynomially system equivalent and system equivalent at infinity (Anderson et al., 1985). In order to overcome the difficulty of checking two separate conditions, a more compact form of matrix equivalence, known as full equivalence, was proposed in Pugh et al. (1992). Definition 10 [Full equivalence] Pugh et al. (1992) Let $$A_{1}(s),A_{2}(s)\in\mathcal{P}(m,l)$$. Then $$A_{1}(s),A_{2}(s)$$ are fully equivalent (FE) if there exist polynomial matrices $$M(s),N(s)$$ of appropriate dimensions, such that $$M(s)A_{1}(s)=A_{2}(s)N(s) \qquad (34)$$ is satisfied and the composite matrices $$\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix},\qquad\begin{bmatrix}A_{1}(s)\\-N(s)\end{bmatrix} \qquad (35)$$ (i) have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$; (ii) satisfy $$\delta_{M}\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix}=\delta_{M}A_{2}(s)$$ and $$\delta_{M}\begin{bmatrix}A_{1}(s)\\-N(s)\end{bmatrix}=\delta_{M}A_{1}(s)$$. Notice that $$\delta_{M}(\cdot)$$ denotes the McMillan degree, that is, the total number of poles of the rational matrix involved. Some of the invariants of FE are (see Pugh et al.
(1994)): the McMillan degree of $$A_{i}(s)$$; the finite and infinite zero structures of $$A_{i}(s)$$. The dynamic interpretation of the conditions appearing in Definition 10 is given below. A generalized version of the fundamental equivalence introduced in Hayton et al. (1986) for descriptor systems was proposed in Pugh et al. (1994), as a more complete form of the equivalence proposed in Pernebo (1977). Given two PMDs as in (25), we form the normalized system matrices $$\mathcal{P}_{i}(s)=\begin{bmatrix}A_{i}(s)&-B_{i}(s)&0&0\\C_{i}(s)&D_{i}(s)&-I_{m}&0\\0&I_{l}&0&-I_{l}\\0&0&I_{m}&0\end{bmatrix}=\begin{bmatrix}T_{i}(s)&-U_{i}\\V_{i}&0\end{bmatrix} \qquad (36)$$ which have the advantage over (25) of allowing a uniform treatment of finite and infinite frequency characteristics. With the above setup, the following system equivalence was proposed. Definition 11 [Normal full system equivalence] (Pugh et al., 1994) The normalized system matrices $$\mathcal{P}_{i}(s)$$, $$i=1,2$$ are normal full system equivalent if there exist polynomial matrices $$M(s)$$, $$N(s)$$, $$X(s)$$, $$Y(s)$$ such that the relation $$\begin{bmatrix}M(s)&0\\X(s)&I\end{bmatrix}\begin{bmatrix}T_{1}(s)&-U_{1}\\V_{1}&0\end{bmatrix}=\begin{bmatrix}T_{2}(s)&-U_{2}\\V_{2}&0\end{bmatrix}\begin{bmatrix}N(s)&Y(s)\\0&I_{l}\end{bmatrix} \qquad (37)$$ is a full equivalence relation. In Karampetakis & Vardulakis (1992) full unimodular equivalence of right and left MFDs was proposed. Definition 12 [Full unimodular (system) equivalence] (Karampetakis & Vardulakis, 1992) Let $$P_{i}(s)$$, $$i=1,2$$ be the system matrices of two right MFDs as in (6), i.e. $$P_{i}(s)=\begin{bmatrix}D_{R}^{i}(s)&I_{l}\\-N_{R}^{i}(s)&0\end{bmatrix}$$. Then $$P_{1}(s)$$ and $$P_{2}(s)$$ are called fully unimodular equivalent if and only if there exists a unimodular polynomial matrix $$T(s)\in\mathbb{R}[s]^{l\times l}$$ such that $$\begin{bmatrix}D_{R}^{2}(s)&I_{l}\\-N_{R}^{2}(s)&0\end{bmatrix}=\begin{bmatrix}D_{R}^{1}(s)&I_{l}\\-N_{R}^{1}(s)&0\end{bmatrix}\begin{bmatrix}T(s)&0\\0&I_{l}\end{bmatrix}$$ and the following hold: (i) the composite matrix $$\begin{bmatrix}D_{R}^{2}(s)\\N_{R}^{2}(s)\\T(s)\end{bmatrix}$$ has no zeros at infinity; (ii) $$\delta_{M}\begin{bmatrix}D_{R}^{2}(s)\\N_{R}^{2}(s)\\T(s)\end{bmatrix}=\delta_{M}\begin{bmatrix}D_{R}^{2}(s)\\N_{R}^{2}(s)\end{bmatrix}$$.
The corresponding definition for left MFDs is omitted. The dynamic interpretation of normal full system equivalence was demonstrated in Pugh et al. (1994) through fundamental equivalence of PMDs. Moreover, full equivalence and its behavioral interpretation have been generalized in (Bourles & Marinescu, 2011, Section 7.3) to the case of linear time varying systems. Definition 13 [Fundamental equivalence of PMDs] (Pugh et al., 1994) Let $$\mathcal{P}_{i}(s)$$, $$i=1,2$$ be the normalized forms of two PMDs described by (25). The PMDs are said to be fundamentally equivalent if there exists a bijective polynomial differential map of the form $$\xi_{2}(t)=N(\rho)\xi_{1}(t)+Y(\rho)u(t)$$ and they have the same output $$y(t)$$. It has been shown in Pugh et al. (1994) that two PMDs are fundamentally equivalent if and only if they are normal full system equivalent. Furthermore, since there exists a bijection between the solution/input pairs and the initial condition/output pairs, properties like controllability and observability remain invariant. It is interesting to notice that the equivalences described so far provide the theoretical tools to identify the relation between two polynomial matrices/system descriptions, leaving invariant certain aspects of their algebraic structures/behaviors. In practice, it is very difficult to verify the validity of such a relation, since this would require the computation of the transformation matrices involved. However, in certain cases, such as the construction of realizations of PMD models using state space or descriptor systems, closed form equivalences provide the means to verify the soundness of the method. Indicative examples of such constructions, where the transformation matrices are computed in a systematic way, include Verghese (1978); Bosgra & Van Der Weiden (1981); Anderson et al. (1985); Vafiadis & Karcanias (1995).
Proceeding a step further in the dynamic interpretation of full equivalence, fundamental equivalence of PMDs has been extended to fit the behavioral framework for higher order implicit (auto-regressive, AR) systems introduced in Geerts (1996). Given two AR representations $$A_{i}(\rho)\xi_{i}=0 \qquad (38)$$ where $$A_{i}(s)\in\mathbb{R}[s]^{p_{i}\times q_{i}}$$, $$i=1,2$$, of full normal rank, their behaviors are $$\mathcal{B}_{i}=\{\xi_{i}\in\ell_{imp}^{q_{i}}:A_{i}(\rho)\xi_{i}=0\} \qquad (39)$$ where $$\ell_{imp}$$ is the space of impulsive-smooth distributions. For a more detailed presentation of the distributional behavioral framework, we refer the reader to Hautus & Silverman (1983); Geerts (1996) and the references therein. In view of the above setting, the following definition of equivalence of AR representations was introduced. Definition 14 [Fundamental equivalence of AR representations] Pugh et al. (2007) The systems described by (38) are fundamentally equivalent if there exists a bijective polynomial differential map $$N(\rho):\mathcal{B}_{1}\rightarrow\mathcal{B}_{2}$$. The connection between fundamental system equivalence and full equivalence is given by the following.
Theorem 15 (Pugh et al., 2007) The systems described by the AR representations (38) are fundamentally equivalent iff there exists a polynomial differential operator $$N(s)\in\mathbb{R}[s]^{q_{2}\times q_{1}}$$ satisfying the following conditions: (i) there exists $$M(s)\in\mathbb{R}[s]^{p_{2}\times p_{1}}$$ such that $$M(s)A_{1}(s)=A_{2}(s)N(s)$$; (ii) $$\delta_{M}\begin{bmatrix}A_{1}(s)\\N(s)\end{bmatrix}=\delta_{M}(A_{1}(s))$$ and $$\delta_{M}\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix}=\delta_{M}(A_{2}(s))$$; (iii) $$\begin{bmatrix}A_{1}(s)\\N(s)\end{bmatrix},\begin{bmatrix}M(s)&A_{2}(s)\end{bmatrix}$$ have full rank and no zeros in $$\mathbb{C}\cup\{\infty\}$$; (iv) $$q_{1}-p_{1}=q_{2}-p_{2}$$. Notice that conditions (i)–(iii) in the above theorem coincide with the requirements of full equivalence, while condition (iv) is essentially an alternative formulation of the assumption $$A_{1}(s),A_{2}(s)\in\mathcal{P}(m,l)$$ in Definition 10. Hence, the polynomial matrices $$A_{1}(s),A_{2}(s)$$ involved in two fundamentally equivalent AR representations are fully equivalent. An open question is whether fundamental equivalence preserves not only the zero structure but also the left and right minimal indices. 5.4. Equivalence transformations of discrete time systems In this subsection, the behaviour of singular discrete time systems is considered over a finite time interval, following the approach used in Lewis (1984); Antoniou et al. (1998); Karampetakis et al. (2001). Given two AR representations $$A_{i}(\sigma)\xi_{i}[k]=0 \qquad (40)$$ where $$A_{i}(\sigma)\in\mathbb{R}[\sigma]^{r_{i}\times r_{i}}$$, $$i=1,2$$, of full normal rank and $$\sigma$$ is the forward shift operator, i.e. $$\sigma\xi[k]=\xi[k+1]$$. Following Lewis (1984); Antoniou et al. (1998); Karampetakis et al. (2001), we study the behaviors of (40) over a finite time interval $$k=0,1,2,\ldots,N$$, given appropriate initial and final conditions.
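The forward trajectories generated by the finite zeros of such a model are easy to simulate; the scalar AR representation below is a hypothetical example, chosen so that its finite zeros 1 and 2 produce the forward behaviour in closed form:

```python
import numpy as np

# Scalar AR model (sigma**2 - 3*sigma + 2) xi[k] = 0,
# i.e. the recursion xi[k+2] = 3*xi[k+1] - 2*xi[k].
# Its finite zeros 1 and 2 generate the forward solutions xi[k] = c1*1**k + c2*2**k.
N = 10
xi = np.zeros(N + 1)
xi[0], xi[1] = 2.0, 3.0              # initial conditions giving c1 = c2 = 1
for k in range(N - 1):
    xi[k + 2] = 3 * xi[k + 1] - 2 * xi[k]

# Forward propagation of the initial conditions matches the closed form.
closed_form = np.array([1.0 + 2.0**k for k in range(N + 1)])
assert np.allclose(xi, closed_form)
print(xi[-1])                        # 1025.0
```

Infinite elementary divisors would, analogously, propagate the final conditions backwards from $$k=N$$; the example above has none, so its behaviour is purely forward.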
In particular, we define
  $$\mathcal{B}_{A_{i}(\sigma)}^{N}=\{\xi_{i}[k]:A_{i}(\sigma)\xi_{i}[k]=0,\ k=0,1,2,\ldots,N-q_{i}\}$$ (41)
where $$q_{i}=\deg(A_{i}(\sigma))$$. According to this approach, the finite and infinite elementary divisors give rise to trajectories that can be seen as the forward or backward propagation of the initial and final conditions of the pseudostate, respectively. Thus, in order to preserve this type of behaviour, a new kind of matrix equivalence has to be introduced, one that preserves the finite and infinite elementary divisors of a polynomial matrix and provides a closed form relation between the relevant polynomial matrices.
Definition 16 [Divisor equivalence] (Karampetakis et al., 2004) Two regular matrices $$A_{i}(s)\in\mathbb{R}[s]^{r_{i}\times r_{i}}$$, $$i=1,2$$, with $$r_{1}\deg A_{1}(s)=r_{2}\deg A_{2}(s)$$, are said to be divisor equivalent if there exist polynomial matrices $$M(s),N(s)$$ of appropriate dimensions, such that
  $$M(s)A_{1}(s)=A_{2}(s)N(s)$$ (42)
is satisfied and the composite matrices
  $$\left[\begin{array}{cc} M(s) & A_{2}(s)\end{array}\right],\quad\left[\begin{array}{c} A_{1}(s)\\ -N(s) \end{array}\right]$$
have no finite and infinite elementary divisors.
The key property of divisor equivalence is that if $$A_{1}(s)$$, $$A_{2}(s)$$ are divisor equivalent, then they share an identical finite and infinite elementary divisor structure. In the special case where the $$A_{i}(s)$$ are matrix pencils, it has been shown that $$A_{i}(s)$$ are strictly equivalent (see Definition 7) if and only if they are divisor equivalent. From a system theoretic point of view, one may define a form of equivalence between two systems, seen as an isomorphism between their behaviors.
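The finite part of the divisor-equivalence conditions can be checked symbolically. The sketch below (Python with SymPy) uses a toy pair of our own choosing, $$A_1(s)=[s^2-1]$$ and $$A_2(s)=\mathrm{diag}(s-1,s+1)$$, which satisfies $$r_1\deg A_1=r_2\deg A_2=2$$, together with candidate transforming matrices $$M(s),N(s)$$ that are likewise our own choice; it illustrates relation (42) and the absence of finite zeros of the composite matrices, not the authors' construction:

```python
import itertools
import functools
import sympy as sp

s = sp.symbols('s')

# Toy pair (our own choice): A1(s) = [s^2 - 1] (1x1, degree 2) and
# A2(s) = diag(s - 1, s + 1) (2x2, degree 1), so r1*deg A1 = r2*deg A2 = 2.
A1 = sp.Matrix([[s**2 - 1]])
A2 = sp.diag(s - 1, s + 1)
# Candidate transforming matrices (also our own choice).
M = sp.Matrix([[1], [1]])
N = sp.Matrix([[s + 1], [s - 1]])

# Relation (42): M(s) A1(s) = A2(s) N(s).
assert sp.expand(M * A1 - A2 * N) == sp.zeros(2, 1)

# Finite part of the composite-matrix condition: the gcd of the
# maximal-order minors of each composite matrix is a nonzero constant,
# hence no finite elementary divisors.
C_row = sp.Matrix.hstack(M, A2)    # [M(s)  A2(s)],  size 2 x 3
C_col = sp.Matrix.vstack(A1, -N)   # [A1(s); -N(s)], size 3 x 1
g_row = functools.reduce(sp.gcd, [C_row[:, list(c)].det()
                                  for c in itertools.combinations(range(3), 2)])
g_col = functools.reduce(sp.gcd, list(C_col))
print(g_row, g_col)  # 1 1
```

Both matrices share the nontrivial invariant factor $$s^2-1=(s-1)(s+1)$$, i.e. the same finite elementary divisors, despite their different sizes. A complete divisor-equivalence check would additionally verify the absence of infinite elementary divisors of the composite matrices via their dual matrices, which is omitted in this sketch.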
Definition 17 [Fundamental equivalence] (Vardulakis & Antoniou, 2003) Two AR-representations
  $$A_{i}(\sigma)\xi_{i}[k]=0,$$ (43)
where $$A_{i}(\sigma)\in\mathbb{R}[\sigma]^{r_{i}\times r_{i}}$$, $$i=1,2$$, with $$r_{1}\deg A_{1}(\sigma)=r_{2}\deg A_{2}(\sigma)$$, will be called fundamentally equivalent over the finite time interval $$k=0,1,2,\ldots,N$$ if there exists a bijective polynomial map $$N(\sigma):\mathcal{B}_{A_{1}(\sigma)}^{N}\longrightarrow\mathcal{B}_{A_{2}(\sigma)}^{N}$$.
It has been shown in Karampetakis et al. (2004) that if $$A_{1}(\sigma)$$, $$A_{2}(\sigma)$$ are divisor equivalent, then the AR-representations (43) are fundamentally equivalent.
6. Conclusions
Equivalence transformations of polynomial matrices and their system theoretic counterparts are without doubt key concepts in the theory of linear multivariable systems developed during the last four decades. In the present paper we have attempted a review of the main results, both from a technical and a historical point of view. However, it is virtually impossible to cover every aspect of the subject. For instance, a very important subject closely related to equivalences of matrices and systems is the problem of linearization/realization of a polynomial matrix/model, which was not touched upon here. Additionally, important work has been done in the study of n-D polynomial matrix equivalences and the related systems (Levy, 1981; Johnson, 1993; Galkowski, 1996; Pugh et al., 1998; Pugh et al., 2005; Boudellioua, 2012, 2013). We hope that the present work will serve as a reference point for further studies.
Acknowledgements
The authors would like to thank the anonymous referees for their constructive comments and recommendations.
Funding
This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF), Research Funding Program: ARCHIMEDES III: Investing in knowledge society through the European Social Fund.
References
Anderson B. D. O., Coppel W. A. & Cullen D. J. (1985) Strong system equivalence (I). The ANZIAM J., 27, 194–222.
Antoniou E. N., Vardulakis A. I. G. & Karampetakis N. P. (1998) A spectral characterization of the behavior of discrete time AR-representations over a finite time interval. Kybernetika, 34, 555–564.
Bosgra O. H. & Van Der Weiden A. J. J. (1981) Realizations in generalized state-space form for polynomial system matrices and the definitions of poles, zeros and decoupling zeros at infinity. Int. J. Control, 33, 393–411.
Boudellioua M. S. (2012) Strict system equivalence of 2-D linear discrete state space models. J. Control Sci. Eng., https://doi.org/10.1155/2012/609276.
Boudellioua M. S. (2013) Further results on the equivalence to Smith form of multivariate polynomial matrices. Control Cybern., 42, 543–551.
Bourlès H. & Marinescu B. (1999) Poles and zeros at infinity of linear time-varying systems. IEEE Trans. Autom. Control, 44, 1981–1985.
Bourlès H. & Marinescu B. (2011) Linear Time-Varying Systems. Lecture Notes in Control and Information Sciences, vol. 410. Berlin, Heidelberg: Springer.
Cobb D. (1984) Controllability, observability, and duality in singular systems. IEEE Trans. Autom. Control, 29, 1076–1082.
Cobb D. (1982) On the solutions of linear differential equations with singular coefficients. J. Diff. Equations, 46, 310–323.
Cohn P. M. (1971) Free Rings and Their Relations. LMS Monographs. London: Academic Press.
Coppel W. A. & Cullen D. J. (1985) Strong system equivalence (II). The ANZIAM J., 27, 223–237.
Forney G. D. (1975) Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM J. Control, 13, 493–520.
Fuhrmann P. A. (1977) On strict system equivalence and similarity. Int. J. Control, 25, 5–10.
Galkowski K. (1996) Elementary operations and equivalence of two-dimensional systems. Int. J. Control, 63, 1129–1148.
Gantmacher F. R. (1959) The Theory of Matrices. New York: Chelsea Publishing Company.
Geerts T. (1993) Solvability conditions, consistency, and weak consistency for linear differential-algebraic equations and time-invariant singular systems: the general case. Linear Algebra Appl., 181, 111–130.
Geerts T. (1996) Higher-order continuous-time implicit systems: consistency and weak consistency, impulse controllability, geometric concepts, and invertibility properties. Linear Algebra Appl., 244, 203–253.
Gohberg I., Lancaster P. & Rodman L. (1982) Matrix Polynomials. Computer Science and Applied Mathematics. New York: Academic Press.
Hautus M. L. J. (1976) The formal Laplace transform for smooth linear systems. Lect. Notes Econ. Math. Syst., 131, 29–47.
Hautus M. L. J. & Silverman L. M. (1983) System structure and singular control. Linear Algebra Appl., 50, 369–402.
Hayton G., Fretwell P. & Pugh A. (1986) Fundamental equivalence of generalized state-space systems. IEEE Trans. Autom. Control, 31, 431–439.
Johnson D. S. (1993) Coprimeness in Multidimensional System Theory and Symbolic Computation. Ph.D. Thesis, Loughborough University of Technology, U.K.
Kailath T. (1980) Linear Systems, 1st edn. Englewood Cliffs, NJ: Prentice-Hall.
Karampetakis N. P. (2004) On the solution space of discrete time AR-representations over a finite time horizon. Linear Algebra Appl., 382, 83–116.
Karampetakis N. P. (2013) Construction of algebraic-differential equations with given smooth and impulsive behaviour. IMA J. Math. Control Inf., 32, 195–224.
Karampetakis N. P., Jones J. & Antoniou E. N. (2001) Forward, backward, and symmetric solutions of discrete ARMA representations. Circ. Syst. Signal Processing, 20, 89–109.
Karampetakis N. P. & Vardulakis A. I. (1992) Matrix fractions and full system equivalence. IMA J. Math. Control Inf., 9, 147–160.
Karampetakis N. P. & Vardulakis A. I. (1993) Generalized state-space system matrix equivalents of a Rosenbrock system matrix. IMA J. Math. Control Inf., 10, 323–344.
Karampetakis N. P., Vologiannidis S. & Vardulakis A. I. G. (2004) A new notion of equivalence for discrete time AR representations. Int. J. Control, 77, 584–597.
Kronecker L. (1890) Algebraische Reduktion der Scharen bilinearer Formen. Sitzungsber. Akad. Berlin, 763–776.
Levy B. C. (1981) 2-D Polynomial and Rational Matrices and Their Applications for the Modelling of 2-D Dynamical Systems. Ph.D. Thesis, Stanford University, U.S.A.
Levy B., Kung S.-Y., Morf M. & Kailath T. (1977) A unification of system equivalence definitions. Proceedings of the 16th IEEE Conference on Decision and Control, New Orleans, USA: IEEE Press, 795–800.
Lewis F. (1984) Descriptor systems: decomposition into forward and backward subsystems. IEEE Trans. Autom. Control, 29, 167–170.
Lewis F. L. (1986) A survey of linear singular systems. Circ. Syst. Signal Processing, 5, 3–36.
Luenberger D. (1977) Dynamic equations in descriptor form. IEEE Trans. Autom. Control, 22, 312–321.
Oberst U. (1990) Multidimensional constant linear systems. Acta Appl. Math., 20, 1–175.
Pernebo L. (1977) Notes on strict system equivalence. Int. J. Control, 25, 21–38.
Pugh A. C., Antoniou E. N. & Karampetakis N. P. (2007) Equivalence of AR-representations in the light of the impulsive-smooth behaviour. Int. J. Robust Nonlinear Control, 17, 769–785.
Pugh A. C., Hayton G. E. & Fretwell P. (1987) Transformations of matrix pencils and implications in linear systems theory. Int. J. Control, 45, 529–548.
Pugh A. C., Johnson D. S. & Hayton G. E. (1992) On conditions guaranteeing two polynomial matrices possess identical zero structures. IEEE Trans. Autom. Control, 37, 1383–1386.
Pugh A. C., Karampetakis N. P., Vardulakis A. I. G. & Hayton G. E. (1994) A fundamental notion of equivalence for linear multivariable systems. IEEE Trans. Autom. Control, 39, 1141–1145.
Pugh A. C., McInerney S. J., Boudellioua M. S., Johnson D. S. & Hayton G. E. (1998) A transformation for 2-D linear systems and a generalization of a theorem of Rosenbrock. Int. J. Control, 71, 491–503.
Pugh A. C., McInerney S. J. & El-Nabrawy E. M. O. (2005) Zero structures of n-D systems. Int. J. Control, 78, 277–285.
Pugh A. C. & Shelton A. K. (1978) On a new definition of strict system equivalence. Int. J. Control, 27, 657–672.
Rosenbrock H. H. (1970) State-space and Multivariable Theory, 1st edn. London, UK: Nelson.
Rosenbrock H. H. (1974) Structural properties of linear dynamical systems. Int. J. Control, 20, 191–202.
Smith M. C. (1981) Matrix fractions and strict system equivalence. Int. J. Control, 34, 869–883.
Vafiadis D. & Karcanias N. (1995) Generalized state-space realizations from matrix fraction descriptions. IEEE Trans. Autom. Control, 40, 1134–1137.
Vardulakis A. I. G. (1991) Linear Multivariable Control: Algebraic Analysis and Synthesis Methods, 1st edn. Chichester, UK: John Wiley & Sons.
Vardulakis A. I. G. & Antoniou E. (2003) Fundamental equivalence of discrete-time AR representations. Int. J. Control, 76, 1078–1088.
Vardulakis A. I. G. & Fragulis G. (1989) Infinite elementary divisors of polynomial matrices and impulsive solutions of linear homogeneous matrix differential equations. Circ. Syst. Signal Processing, 8, 357–373.
Vardulakis A. I. G., Limebeer D. N. J. & Karcanias N. (1982) Structure and Smith-MacMillan form of a rational matrix at infinity. Int. J. Control, 35, 701–725.
Verghese G. (1978) Infinite frequency behavior in dynamical systems. Ph.D. Thesis, Department of Electrical Engineering, Stanford University.
Verghese G., Levy B. & Kailath T. (1981) A generalized state-space for singular systems. IEEE Trans. Autom. Control, 26, 811–831.
Walker A. B. (1988) Equivalence Transformations for Linear Systems. Ph.D. Thesis, University of Hull.
Weierstrass K. T. W. (1868) Zur Theorie der bilinearen und quadratischen Formen. Monatsberichte der Akademie der Wissenschaften zu Berlin, 310–338 (Werke, II, 19–44).
Wolovich W. A. (1974) Linear Multivariable Systems. New York: Springer-Verlag.
Zerz E. (2000) On strict system equivalence for multidimensional systems. Int. J. Control, 73, 495–504.
© The authors 2016.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Mathematical Control and Information, Oxford University Press. Published: Dec 24, 2016.
