# Static output feedback canonical forms

**Abstract.** We construct complete static output feedback (SOF) invariants and canonical forms for the set of strictly proper, full rank, rational transfer function matrices. We consider the induced action of the SOF group on the set of Plücker coordinates. Some of these coordinates are SOF-invariant, while others vary linearly with the entries of the feedback gain. Further SOF-invariants are constructed from the consistency conditions of linear systems of equations. Together, these two lists of invariants are complete. The proof is given by introducing a new type of factorization. We construct an SOF-canonical form and apply the results to the class of systems with two inputs and two outputs, characterizing the SOF-equivalence classes with respect to SOF-pole assignability in the case of four states, and calculating the constrained dynamics in the case of more than four states.

## 1. Introduction

### 1.1. Problem statement and motivation

In this paper we construct complete invariants and canonical forms for the set of linear, time invariant, strictly proper control systems under the action of the static output feedback (SOF) group. We use them to characterize the SOF-equivalence classes, with respect to SOF-pole assignability, of the set of systems with two inputs, two outputs and four states, and the constrained dynamics in the case of more than four states. We recall from Mac Lane & Birkhoff (1999) that given a set $${\it{\Sigma}}$$ and an equivalence relation $${\rm E}$$ on it, a function $$f:{\it{\Sigma}} \to {\it{\Phi}}$$ is said to be $${\rm E}$$-invariant if $$\sigma \,{\rm E}\,\overset{\frown}{\sigma} \Rightarrow f(\sigma )=f(\overset{\frown}{\sigma})$$.
It is said to be a complete $${\rm E}$$-invariant if $$\sigma \,{\rm E}\,\overset{\frown}{\sigma} \Leftrightarrow f(\sigma )=f(\overset{\frown}{\sigma})$$. A subset $$\underline{{\it{\Sigma}}}$$ of $${\it{\Sigma}}$$ is said to be a set of $${\rm E}$$-canonical forms if to each $$\sigma \in {\it{\Sigma}}$$ there is exactly one $$\underline{\sigma} \in \underline{{\it{\Sigma}}}$$ with $$\sigma \,{\rm E}\,\underline{\sigma}$$. If the complete $${\rm E}$$-invariant function $$f$$ is surjective, $$f(\sigma )$$ is said to be a complete system of independent $${\rm E}$$-invariants (Popov, 1972). For this article, $${\it{\Sigma}}$$ is the set of linear time invariant strictly proper systems, identified with the set $$\mathbb{R}_{n}^{r\times m} \left\{ s \right\}$$ of $$r\times m$$ strictly proper, rational function matrices in the variable $$s$$, with real coefficients, of McMillan degree $$n$$. For the uniqueness of the representation, the denominators of the entries are supposed to be monic polynomials. We suppose additionally that the rows and the columns of $$F(s)\in {\it{\Sigma}}$$ are $$\mathbb{R}$$-linearly independent: $$\forall g\in \mathbb{R}^{m}:\ F(s)g={\rm O}_{r\times 1} \Leftrightarrow g={\rm O}_{m\times 1} \,\,\,\,(a),\qquad \forall e\in \mathbb{R}^{r}:\ eF(s)={\rm O}_{1\times m} \Leftrightarrow e={\rm O}_{1\times r} \,\,\,\,(b)$$ (1.1) Hypothesis (1.1) amounts to the matrices $$B,C$$ having full rank, with $$\left( {C,A,\,B} \right)$$ a minimal state space representation of the system. In this case, $$F(s)=C\left( {sI_{n} -A} \right)^{-1}B$$ with $$I_{n}$$ the unit of $$GL_{n} (\mathbb{R})$$.
We suppose additionally that the matrix $$F(s)$$ has full rank over the field of rational functions. The SOF group, denoted by $$\mathcal{H}$$, is the additive group of $$m\times r$$ real matrices, $$\mathcal{H}=\mathbb{R}^{m\times r}$$. It acts on $${\it{\Sigma}}$$ by the transformation: $$F(s)\mapsto \tilde{{F}}(s)=\left\{ {\begin{array}{@{}c} {(I_{r} +F(s)H)^{-1}F(s)\,\,\,(a)} \\[4pt] {F(s)(I_{m} +HF(s))^{-1}\,\,(b)} \end{array} } \right.\quad\forall \left( {F(s)\in {\it{\Sigma}} ,H\in \mathcal{H}} \right)$$ (1.2) Transformations (1.2a) and (1.2b) are identical, as $$(I_{r} +F(s)H)^{-1}F(s)=F(s)(I_{m} +HF(s))^{-1}\Leftrightarrow (I_{r} +F(s)H)F(s)=F(s)(I_{m} +HF(s))$$ (1.3) Without loss of generality we suppose that $$r\leqslant m$$, and it is more convenient to use (1.2a). In the case $$r<m$$, (1.1b) is covered by the rank hypothesis. In the case $$r=m$$, both (1.1a) and (1.1b) are covered by the rank hypothesis. The equivalence relation $${\rm E}$$ induced on $${\it{\Sigma}}$$ by the action of $$\mathcal{H}$$ is described by the equations: $$\begin{array}{l} F(s){\rm E}\,\overset{\frown}{F}(s)\Leftrightarrow \exists H\in \mathcal{H}\mbox{ with }\overset{\frown}{F}(s)=(I_{r} +F(s)H)^{-1}F(s) \\[4pt] \Leftrightarrow (I_{r} +F(s)H)\overset{\frown}{F}(s)=F(s)\Leftrightarrow F(s)H\overset{\frown}{F}(s)=F(s)-\overset{\frown}{F}(s) \end{array}$$ (1.4) We conclude that $${\rm E}$$-equivalence amounts to the existence of a real solution, in the $$mr$$ entries of the gain $$H$$, of a system of linear equations with coefficients in the field of rational functions.
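As a quick numerical illustration (our own sketch, not part of the paper), the identity (1.3) behind the two forms of the SOF transformation can be checked at a sample point: for any real matrices of compatible sizes, $$(I_r+FH)^{-1}F=F(I_m+HF)^{-1}$$ whenever the inverses exist. All names, sizes and the random seed below are illustrative.

```python
import numpy as np

# Sketch: check that the two forms (1.2a) and (1.2b) of the SOF
# transformation agree, as asserted by (1.3).  F stands in for the
# transfer function matrix F(s) evaluated at a fixed sample point s.
rng = np.random.default_rng(0)
r, m = 2, 3
F = rng.standard_normal((r, m))  # plays the role of F(s) at a fixed s
H = rng.standard_normal((m, r))  # static output feedback gain

lhs = np.linalg.solve(np.eye(r) + F @ H, F)   # (I_r + F H)^{-1} F
rhs = F @ np.linalg.inv(np.eye(m) + H @ F)    # F (I_m + H F)^{-1}
assert np.allclose(lhs, rhs)
print("transformations (1.2a) and (1.2b) agree at the sample point")
```

The generic draws make $$I_r+FH$$ invertible with probability one; for a rational $$F(s)$$ the same check can be repeated at several sample values of $$s$$.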
The objective of this article is to find necessary and sufficient conditions for the existence of a real solution to the system of linear equations (1.4), in terms of equality of complete systems of independent $${\rm E}$$-invariants $$f\left( {F(s)} \right)$$ and of canonical forms $$\underline{F}(s)$$. \begin{align} \exists H\in \mathcal{H}\mbox{ with }F(s)H\overset{\frown}{F}(s)=F(s)-\overset{\frown}{F}(s)\Leftrightarrow \left\{ \begin{array}{@{}c} {f(F(s))=f(\overset{\frown}{F}(s))} \\ {\underline{F}(s)=\underline{\overset{\frown}{F}}(s)} \end{array} \right. \end{align} (1.5) If $$F(s){\rm E}\,\overset{\frown}{F}(s)$$, hypothesis (1.1) implies that the gain $$H$$ achieving equivalence is uniquely determined, as $$H_{0} \in \mathcal{H}$$ with $$F(s)H_{0} ={\rm O}_{r\times r}$$ implies $$H_{0} ={\rm O}_{m\times r}$$. The search for SOF canonical forms of strictly proper linear systems ($${\rm E}$$-canonical forms of $${\it{\Sigma}}$$ in the sequel of the article) is a fascinating universal problem. It is additionally of great importance for other control problems compatible with the output feedback equivalence class. Consider for instance the problem of determining necessary and sufficient conditions for the existence of an output feedback gain $$H$$ assigning arbitrarily the coefficients of the denominator of $$(I_{r} +F(s)H)^{-1}\,F(s)$$, known as the SOF pole assignability problem. This problem is compatible with the equivalence class because, if there is a solution for $$F(s)$$, the same is true for any system $${\rm E}$$-equivalent to it.
The property of SOF assignability, viewed as a function from $${\it{\Sigma}}$$ to the Boolean set $$\left\{ {yes,\,\,no} \right\}$$, is $${\rm E}$$-invariant and, like every invariant function (Popov, 1972), it must be a function of any complete $${\rm E}$$-invariant function. In this article, we solve the universal problem of SOF-equivalence as stated below, by constructing complete SOF-invariant functions and canonical forms, and we use the solution to characterize the SOF-equivalence classes of systems with two inputs, two outputs and four states with respect to their SOF assignability properties. In the case of more than four states, we give the algebraic variety describing the closed loop dynamics and we use it to calculate the feedback gain assigning arbitrarily four poles.

### 1.2. Previous results

The problem of $${\rm E}$$-canonical forms of $${\it{\Sigma}}$$ has been open for more than four decades in control theory. The first result on $${\rm E}$$-canonical forms is presented by Yannakoudakis (1980), and it concerns scalar systems. A state space representation is considered and the SOF group is $$GL_{n} (\mathbb{R})\times \mathbb{R}^{m\times r}$$. In the same framework, computable necessary and sufficient conditions for $${\rm E}$$-equivalence on $${\it{\Sigma}}$$ (other than the equality of canonical forms) are given in terms of linear matrix equations. Hinrichsen & Prätzel-Wolters (1983) present a set of quasi $${\rm E}$$-canonical forms of $${\it{\Sigma}}$$. Byrnes & Crouch (1985) consider the full SOF group, involving also changes of basis of the input and output spaces, and present complete systems of invariants (but not canonical forms) for the related equivalence relation, in the scalar case. Complete systems of full SOF invariants and canonical forms of scalar systems are given by Helmke & Fuhrmann (1989). Kim & Lee (1995) present $${\rm E}$$-canonical forms for the equivalence classes that are SOF-assignable. Unfortunately, to characterize these equivalence classes, we need to know complete $${\rm E}$$-invariants.
Ravi et al. (2002) prove the existence of full SOF canonical forms in the case $$mr>n$$, but no construction algorithm is presented. Yannakoudakis (2007) presents in a new form the necessary and sufficient conditions for $${\rm E}$$-equivalence of Yannakoudakis (1980), and Yannakoudakis (2013a) gives computable necessary and sufficient conditions for full SOF equivalence. The above conditions are in terms of structured matrices, namely mosaic Hankel matrices. Summing up, we conclude that complete SOF or full SOF invariants, as well as canonical forms of $${\it{\Sigma}}$$, are known only for scalar systems. This is perhaps the reason that problems such as exact SOF pole placement still remain open. Newer invariants, such as the Hankel invariants introduced in Yannakoudakis (2013b), help to highlight some aspects of the problem (Yannakoudakis, 2015).

### 1.3. Outline of the methodology used and results

In this article, we directly construct complete systems of $${\rm E}$$-invariants and canonical forms of $${\it{\Sigma}}$$, without prerequisites such as constraints on the dimensions or SOF-assignability. We thus settle a problem that has been open for more than four decades. We outline below the methodology used. Complete development of the material, proofs and discussion are provided in the next sections. This subsection is meant to motivate the reader to continue with the rather unusual content of the article. The first step is to develop a particular factorization of the transfer function matrices (Section 2.2), $$F(s)=Z^{-1}(s)W(s)$$, that we call the exterior factorization, because the polynomial factor $$Z(s)$$ depends on the entries of the $$(r-1)$$th exterior power of $$F(s)$$ and the polynomial factor $$W(s)$$ depends on the entries of the $$r$$th exterior power of $$F(s)$$, both multiplied by the least common multiple of the denominators of the entries of $$F(s)$$. The second step is to prove that the factor $$W(s)$$ of the exterior factorization is $${\rm E}$$-invariant (Section 2.4, Theorem 2.2).
The $${\rm E}$$-invariance of the factor $$W(s)$$ is the property differentiating our approach from approaches using classical coprime factorizations, like that of Ravi et al. (2002). It is remarkable that the g.c.d. of the entries of $$W(s)$$ is state feedback invariant, while the whole of $$W(s)$$ is SOF invariant. We believe that the $${\rm E}$$-invariance of $$W(s)$$ will become fundamental in the theory of linear control systems. The exterior factorization makes multivariable systems look like scalar systems. Applying Theorem 2.2 (invariance) to the linear system of equations (1.4), we obtain a new linear system of equations suitable for the development of complete invariant functions and canonical forms $$\begin{array}{l} FH\overset{\frown}{F} =F-\overset{\frown}{F} \Leftrightarrow Z^{-1}WH\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} =Z^{-1}W-\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} \overset{\overset{\frown}{W}=W}{\Longrightarrow} Z^{-1}WH\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} =\Big( {Z^{-1}-\overset{\frown}{Z}{}^{-1}} \Big)\overset{\frown}{W} \\ \overset{\overset{\frown}{W}\mbox{ has full rank}}{\Longrightarrow} Z^{-1}WH\overset{\frown}{Z}{}^{-1}=Z^{-1}-\overset{\frown}{Z}{}^{-1}\Rightarrow WH=\overset{\frown}{Z}-Z \end{array}$$ (1.6) Evidently, the systems
$$\overset{\frown}{F}(s)$$ and $$F(s)$$ are $${\rm E}$$-equivalent, if and only if, for their exterior factors $$\overset{\frown}{Z}(s),\,\overset{\frown}{W}(s),\,Z(s),\,W(s)$$ we have \begin{align} \begin{array}{l} \overset{\frown}{W}(s)=W(s)\qquad\qquad\qquad(a) \\ \overset{\frown}{Z}(s)-Z(s)=W(s)H\qquad(b) \\ \end{array} \end{align} (1.7) Thanks to the nature of the left-hand side of (1.7b), necessary and sufficient conditions for the existence of solutions of linear systems of equations are easily translated into equalities of invariants. Equating the coefficients of the same powers of $$s$$, we obtain a matrix equation with constant coefficients. \begin{align} \overset{\frown}{\boldsymbol{Z}}-{\boldsymbol{Z}}={\boldsymbol{W}}H \end{align} (1.7c) If $$h_{1} ,h_{2} ,\ldots ,h_{r}$$, $$\overset{\frown}{\boldsymbol{z}}_{1} ,\overset{\frown}{\boldsymbol{z}}_{2} ,\ldots ,\overset{\frown}{\boldsymbol{z}}_{r}$$, $${\boldsymbol{z}}_{1} ,{\boldsymbol{z}}_{2} ,\ldots ,{\boldsymbol{z}}_{r}$$, $${\boldsymbol{w}}_{1} ,{\boldsymbol{w}}_{2} ,\ldots ,{\boldsymbol{w}}_{m}$$ are the columns of the gain $$H$$, of the matrix $$\overset{\frown}{\boldsymbol{Z}}$$, of the matrix $${\boldsymbol{Z}}$$ and of the matrix $${\boldsymbol{W}}$$, respectively, then (1.7c) decomposes into $$r$$ systems of equations $$\overset{\frown}{\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} ={\boldsymbol{W}}h_{k}$$ (1.8) We can calculate new invariants that complete the already known $${\boldsymbol{W}}$$ using the
consistency conditions of the linear systems of equations (1.8). We decompose the vectors $$\overset{\frown}{\boldsymbol{z}}_{k} ,\,{\boldsymbol{z}}_{k}$$ into two components: one, $$\big(\overset{\frown}{\boldsymbol{z}}{}^{\parallel}_{k},\,{\boldsymbol{z}}_{k}^{\parallel } \big)$$, belonging to the column space of $${\boldsymbol{W}}$$, and one, $$\big(\overset{\frown}{\boldsymbol{z}}{}^{\bot}_{k},\,{\boldsymbol{z}}_{k}^{\bot }\big)$$, orthogonal to it, i.e. $$\overset{\frown}{\boldsymbol{z}}_{k}=\overset{\frown}{\boldsymbol{z}}{}^{\parallel}_{k} +\overset{\frown}{\boldsymbol{z}}{}^{\bot}_{k}$$, $${\boldsymbol{z}}_{k} ={\boldsymbol{z}}_{k}^{\parallel }+{\boldsymbol{z}}_{k}^{\bot }$$. The consistency conditions become: \begin{align} \overset{\frown}{\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} \in \mbox{colspan}\left( {{\boldsymbol{W}}} \right)\Leftrightarrow \overset{\frown}{\boldsymbol{z}}{}^{\bot}_{k} ={\boldsymbol{z}}_{k}^{\bot } \end{align} (1.9) As the above decomposition is unique, going back one step we can decompose $$Z(s)$$ in a unique way (Proof of Theorem 3.1). \begin{align} \begin{array}{r@{\,}c@{\,}l} Z(s)&=&Z^{\bot } (s)+Z^{\parallel } (s)\,\,\mbox{with} \\[4pt] Z^{\parallel } (s)&=&W(s)H_{0} ,\,\,H_{0} \in \mathcal{H} \end{array} \end{align} (1.10) It follows that the pair $$\left( {W(s),\,Z^{\bot } (s)} \right)$$ is a complete system of independent $${\rm E}$$-invariants of $$F(s)\in {\it{\Sigma}}$$, and $$\underline{F}(s)=\left( {Z^{\bot } (s)} \right)^{-1}W(s)$$ is an $${\rm E}$$-canonical form of $$F(s)\in {\it{\Sigma}}$$. The gain $$H_{0}$$ parameterizes the orbit of $$\underline{F}(s)$$.
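The consistency test (1.9) can be sketched numerically (our own illustration, with arbitrary sizes and names): the system $$\overset{\frown}{\boldsymbol{z}}_{k}-{\boldsymbol{z}}_{k}={\boldsymbol{W}}h_{k}$$ is solvable for $$h_{k}$$ exactly when the components of $$\overset{\frown}{\boldsymbol{z}}_{k}$$ and $${\boldsymbol{z}}_{k}$$ orthogonal to $$\mbox{colspan}({\boldsymbol{W}})$$ coincide.

```python
import numpy as np

# Sketch of the consistency condition (1.9); sizes and seed are illustrative.
rng = np.random.default_rng(1)
W = rng.standard_normal((6, 3))          # plays the role of the matrix W
z = rng.standard_normal(6)               # a column z_k of Z
h = rng.standard_normal(3)               # a column h_k of the gain H
z_hat = z + W @ h                        # E-equivalent by construction

P = W @ np.linalg.pinv(W)                # orthogonal projector onto colspan(W)
z_perp = z - P @ z                       # z_k^perp
z_hat_perp = z_hat - P @ z_hat           # z_hat_k^perp
assert np.allclose(z_perp, z_hat_perp)   # (1.9): the orthogonal parts agree

# perturbing z_hat in a direction outside colspan(W) breaks consistency
z_bad = z_hat + np.linalg.svd(W)[0][:, -1]
assert not np.allclose(z_bad - P @ z_bad, z_perp)
```

The projector residual $$z-Pz$$ is exactly the orthogonal component used to build the invariant $$Z^{\bot}(s)$$; the last singular vector of $$W$$ supplies a direction outside the column space for the negative check.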
Roughly speaking, the entries of $$H_{0}$$ are the 'coordinates' of $$F(s)$$ in the orbit. If $$F(s){\rm E}\overset{\frown}{F}(s)$$, then the gain achieving equivalence is $$H=H_{0} -\overset{\frown}{H}_{0}$$. As already mentioned, the exterior factorization is the advantage of our approach over the approach of Ravi et al. (2002). As there are infinitely many left coprime factorizations, the authors consider a kind of normal form (autoregressive system). But the induced action of the SOF group on the set of autoregressive systems is not linear. The difficulty in giving construction algorithms arises from the non-linearity of the action. Similar problems appear if we decide to use other normal (or canonical) forms of a coprime factorization. The action of the SOF group on them is linear only in special cases. Besides, the 'numerator' is not always SOF-invariant. There are a finite number of exterior factorizations. For each one of them we can calculate an $$H_{0}$$ which verifies (1.10). This means that there are a finite number of canonical forms. To conform to the requirement of uniqueness, we introduce a total order on the set of sets of canonical forms and select the first one. The set of canonical forms is naturally unique (without the introduction of a total order) in the case of square and single-output systems, as in these cases the exterior factorization is unique. The complete system of independent invariants introduced is useful for the construction of a unique representative of the equivalence class. For the solution of control problems, other invariants, based on the consistency conditions of the Gauss elimination algorithm or on the Grassmannians of the spaces $${\boldsymbol{z}}_{k}\oplus\,\mbox{colspan}\left( {{\boldsymbol{W}}} \right)$$, appear to be more useful.

### 1.4. Organization of the article

The article is organized as follows. In the second section, we present the terminology used in this article, some well-known results in a new form using the presented terminology, and some new background results: the exterior factorization, the relation of the Plücker coordinates to the exterior powers of the transfer function matrix, and the properties of the closed loop parameters of the system. In the third section, the main results on complete equivalence invariants and canonical forms are presented. The fourth section is devoted to the particularization of the results to scalar, single-output, square and rectangular systems having full rank transfer function matrices. In the fifth section, we examine the relation of the presented complete $${\rm E}$$-invariants and canonical forms to the SOF-assignability problem. We characterize the quotient set $${\it{\Sigma}}/{\rm E}$$ of systems with $$(r,n,m)=(2,4,2)$$ with respect to their SOF assignability properties, and we describe by a symmetric multivariate polynomial the constrained dynamics of systems $$(2,n,2),n>4$$.

## 2. Preliminary results

In this section, we present some well-known results, as well as some preliminary results, in a form suitable for the development of our work on SOF-invariants of $${\it{\Sigma}}$$.

### 2.1. Terminology

Let $$S=\left\{ {s_{1} ,\,s_{2} ,\ldots ,s_{n} } \right\}$$ be a finite set and $$\prec$$ a total order on it: $$s_{1} \prec \,s_{2} \prec \cdots \prec s_{n}$$. For each $$n\in \mathbb{N}$$, let $${\bf n}$$ denote the ordered set $$\left\{ {1,2,\ldots ,n} \right\}$$. For each $$k\in {\bf n}$$, let $$\vartheta_{\prec } (s_{k} )=k$$ be the function assigning to each element of $$S$$ its ordinal number (position) with respect to the order $$\prec$$. Let $${\it{\mathsf P}}_{k} (S)$$ denote the set of the ordered subsets of $$S$$ having cardinality $$k$$.
To each $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}\in {\it{\mathsf P}}_{k} (S)$$, $$\vartheta_{\textit{lex}} (\alpha)$$ denotes the position of $$\alpha$$ in $${\it{\mathsf P}}_{k} (S)$$ with respect to the lexicographic order and $$\vartheta_{\textit{rlex}} (\alpha)$$ denotes the position of $$\alpha$$ in $${\it{\mathsf P}}_{k} (S)$$ with respect to the reverse lexicographic order. To each $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}\in {\it{\mathsf P}}_{k} ({\bf n})$$ we define its complement $$\bar{{\alpha }}$$ in $${\bf n}$$ to be $$\bar{{\alpha }}={\bf n}\backslash \alpha$$. Clearly, $$\bar{{\alpha }}\in {\it{\mathsf P}}_{n-k} ({\bf n})$$. Let $$\sigma_{\alpha } =\alpha_{1} +\alpha_{2} +\cdots +\alpha_{k}$$ be the sum of the elements of $$\alpha \in {\it{\mathsf P}}_{k} ({\bf n})$$, $$c_{k} (n)$$ the number of combinations of $$n$$ elements taken $$k$$ at a time without repetition, and $$\mathbf{c}_{k} (n)$$ the ordered set $$\left\{ {1,2,\ldots ,c_{k} (n)} \right\}$$. Let now $$X$$ be an $$r\times m$$ matrix. For every $$\alpha \in {\it{\mathsf P}}_{k} ({\bf r}),\,\,\beta \in {\it{\mathsf P}}_{l} ({\bf m})$$, $$X\{\alpha ,\beta \}$$ denotes the $$k\times l$$ submatrix of $$X$$ in the intersection of the rows $$\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}=\alpha$$ and the columns $$\left\{ {\beta_{1} ,\beta_{2} ,\ldots ,\beta_{l} } \right\}=\beta$$. For any $$v\in {\it{\mathsf P}}_{p} ({\bf r})$$ and $$w\in {\it{\mathsf P}}_{q} ({\bf m})$$, $$v\_ w$$ denotes the ordered multiset of $$p+q$$ elements with the first $$p$$ elements those of $$v$$ and the last $$q$$ elements those of $$w$$ (e.g. $$v=\left\{ {1,3,7} \right\},w=\left\{ {2,3,7,10} \right\}\Rightarrow v\_ w=\left\{ {1,3,7,2,3,7,10} \right\}$$). For each arbitrarily ordered set of natural numbers $$w$$, $$w^{>}$$ is the naturally ordered set and $$\mu (w)$$ is the number of transpositions one has to apply to $$w$$ in order to obtain $$w^{>}$$.
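One concrete reading of $$\mu(w)$$ (our own sketch, under the assumption that transpositions of adjacent elements are counted) is the inversion count of $$w$$, so that $$(-1)^{\mu(w)}$$ is the sign of the rearrangement:

```python
from itertools import combinations

# Sketch: mu(w) as the number of adjacent transpositions needed to sort w
# into w^>, i.e. the number of inversions (pairs out of natural order).
def mu(w):
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

v, w = (1, 3, 7), (2, 3, 7, 10)
v_w = v + w                       # the concatenation v_w from the text
print(mu(v_w), (-1) ** mu(v_w))   # inversion count of (1,3,7,2,3,7,10) and its sign
```

For the multiset of the example, the out-of-order pairs are $$(3,2)$$, $$(7,2)$$ and $$(7,3)$$, so $$\mu(v\_w)=3$$.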
If $$\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}=\alpha \in {\it{\mathsf P}}_{k} ({\bf m})$$, then $$\alpha +r,\,r\in \mathbb{N}$$, denotes the set $$\left\{ {\alpha_{1} +r,\alpha_{2} +r,\ldots ,\alpha_{k} +r} \right\}\in {\it{\mathsf P}}_{k} ({\bf m}+{\bf r})$$. Let $$X$$ be an $$n\times n$$ invertible matrix and $$Y$$ its inverse. The theorem for the minors of the inverse matrix (Gantmacher, 2000) is written using the above terminology: \begin{align} Y=X^{-1}\Rightarrow \forall \alpha ,\beta \in {\it{\mathsf P}}_{k} ({\bf n}),\,\,\left| {Y\{\alpha ,\beta \}} \right|=(-1)^{\sigma_{\alpha } +\sigma_{\beta } }\frac{\left| {X^{{\rm T}}\{\bar{{\alpha }},\bar{{\beta }}\}} \right|}{\left| X \right|} \end{align} (2.1) Definition 2.1 The compound of order $$k$$ of the $$r\times m$$ matrix $$X$$, denoted by $$\mathfrak{C}_{k} \left( X \right)$$, is: \begin{align} \forall k\leqslant \min (r,m),\,\,\mathfrak{C}_{k} \left( X \right)=\left[ {{\begin{array}{*{20}c} {{\bf x}_{11} } & {{\bf x}_{12} } & \cdots & {{\bf x}_{1,c_{k} (m)} } \\ {{\bf x}_{21} } & {{\bf x}_{22} } & \cdots & {{\bf x}_{2,c_{k} (m)} } \\ \vdots & \vdots & \ddots & \vdots \\ {{\bf x}_{c_{k} (r),1} } & {{\bf x}_{c_{k} (r),2} } & \cdots & {{\bf x}_{c_{k} (r),c_{k} (m)} } \\ \end{array} }} \right]{\begin{array}{*{20}c} {\mbox{with}\,\,{\bf x}_{\zeta \xi } =\left| {X\{\alpha ,\beta \}} \right|} \\[4pt] {\alpha =\vartheta_{\textit{lex}}^{-1} (\zeta )\in {\it{\mathsf P}}_{k} ({\bf r})} \\[4pt] {\beta =\vartheta_{\textit{lex}}^{-1} (\xi )\in {\it{\mathsf P}}_{k} ({\bf m})} \end{array}} \end{align} (2.2) We define the supplementary compound matrix of order $$k$$ of the matrix $$X$$, $$\mathfrak{C}^{k}\left( X \right)$$, in a different way than in the literature, so that it applies also to rectangular matrices. The extension is crucial for our work, as we deal with square matrices that are products of rectangular matrices, $$X=YZ$$.
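Identity (2.1) is easy to verify numerically. The following sketch (our own, with illustrative sizes and seed) checks every $$2\times 2$$ minor of the inverse of a random $$4\times 4$$ matrix against the complementary minor of $$X^{\rm T}$$:

```python
import numpy as np
from itertools import combinations

# Sketch: numerical check of (2.1) for a random invertible matrix.
rng = np.random.default_rng(2)
n, k = 4, 2
X = rng.standard_normal((n, n))
Y = np.linalg.inv(X)
detX = np.linalg.det(X)

for a in combinations(range(n), k):          # alpha, 0-based
    for b in combinations(range(n), k):      # beta, 0-based
        a_bar = [i for i in range(n) if i not in a]   # complement of alpha
        b_bar = [j for j in range(n) if j not in b]   # complement of beta
        sign = (-1) ** (sum(a) + sum(b) + 2 * k)      # (-1)^{sigma_a+sigma_b}, 1-based sums
        minor_Y = np.linalg.det(Y[np.ix_(a, b)])
        minor_XT = np.linalg.det(X.T[np.ix_(a_bar, b_bar)])
        assert np.isclose(minor_Y, sign * minor_XT / detX)
print("theorem (2.1) verified on a random 4 x 4 matrix")
```

The `2 * k` term converts the 0-based index sums of `itertools.combinations` to the 1-based sums $$\sigma_\alpha+\sigma_\beta$$ of the text; it does not change the sign.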
For such a product, the supplementary compound $$\mathfrak{C}^{k}\left( X \right)=\mathfrak{C}^{k}\left( {YZ} \right)$$ is meaningful, but with the classical definition the supplementary compounds of the rectangular factors $$Y,\,Z$$, namely $$\mathfrak{C}^{k}\left( Y \right),\,\,\mathfrak{C}^{k}\left( Z \right)$$, as well as their product $$\mathfrak{C}^{k}\left( Y \right)\mathfrak{C}^{k}\left( Z \right)$$, are meaningless. For square matrices $$X,Y,Z$$ with $$X=YZ$$ we have $$\mbox{Adjoint}(X)=\mbox{Adjoint}(Z)\mbox{Adjoint}(Y)$$. For rectangular matrices $$Y,Z$$ with $$X=YZ$$ a square matrix, $$\mbox{Adjoint}(X)$$ is meaningful while $$\mbox{Adjoint}(Z)$$ and $$\mbox{Adjoint}(Y)$$ are meaningless. This fact poses serious limits on the use of supplementary compound matrices, and we have to improve the definition. Definition 2.2 The supplementary compound of order $$k$$ of the $$r\times m$$ matrix $$X$$, denoted by $$\mathfrak{C}^{k}\left( X \right)$$, is: \begin{align} \forall k\leqslant \min (r,m),\,\,\mathfrak{C}^{k}\left( X \right)=\left[ {{\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1,c_{k} (m)} } \\ {x_{21} } & {x_{22} } & \cdots & {x_{2,c_{k} (m)} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{c_{k} (r),1} } & {x_{c_{k} (r),2} } & \cdots & {x_{c_{k} (r),c_{k} (m)} } \\ \end{array} }} \right]{\begin{array}{*{20}c} {\begin{array}{l} \mbox{with } \\[4pt] x_{\zeta \xi } =(-1)^{\sigma_{\alpha } +\sigma_{\beta } }\left| {X\left\{ {\alpha ,\beta } \right\}} \right| \\[4pt] \end{array}} \\ {\alpha =\vartheta_{\textit{rlex}}^{-1} (\zeta )\in {\it{\mathsf P}}_{k} ({\bf r})} \\[4pt] {\beta =\vartheta_{\textit{rlex}}^{-1} (\xi )\in {\it{\mathsf P}}_{k} ({\bf m})} \end{array} } \end{align} (2.3) The definition of the compound matrices coincides with the definitions in the literature. The definition of the supplementary compound is different, in order to apply also to rectangular matrices. Restricted to square matrices, it coincides with the definition of Wedderburn (1934).
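The compound of Definition 2.1 is straightforward to compute, since `itertools.combinations` emits $$k$$-subsets exactly in lexicographic order. A minimal sketch (function name and example ours; the matrix is the one of Example 2.2 below):

```python
import numpy as np
from itertools import combinations

# Sketch of Definition 2.1: entry (zeta, xi) of the k-th compound is the
# minor |X{alpha, beta}|, with alpha, beta the zeta-th / xi-th k-subsets
# of the rows / columns in lexicographic order.
def compound(X, k):
    r, m = X.shape
    row_sets = list(combinations(range(r), k))   # lexicographic by construction
    col_sets = list(combinations(range(m), k))
    return np.array([[np.linalg.det(X[np.ix_(a, b)]) for b in col_sets]
                     for a in row_sets])

# the 3 x 4 matrix of Example 2.2
X = np.array([[1., 2, 3, 4],
              [1, 0, 0, 1],
              [2, 1, 0, 3]])
print(np.round(compound(X, 2)).astype(int))
# reproduces the matrix C_2(X) displayed in Example 2.2
```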
The supplementary compound of order $$k$$ (adjugate compound of order $$k$$) as defined by Prells et al. (2003) is the transpose of the supplementary compound of order $$r-k$$ of this article, with $$r$$ the size of the (square) matrix. The supplementary compound matrix can be written as a product of five matrices. \begin{align} \mathfrak{C}^{k}\left( X \right)=R_{c_{k} (r)} S_{c_{k} (r)} \mathfrak{C}_{k} \left( X \right)S_{c_{k} (m)} R_{c_{k} (m)} \end{align} (2.4) $$S_{c_{k} (r)}$$ $$\left( {S_{c_{k} (m)} } \right)$$ is a diagonal matrix with its $$\zeta$$th ($$\xi$$th) element equal to $$(-1)^{\sigma_{\vartheta_{\textit{lex}}^{-1} (\zeta )} }$$ $$\left( {(-1)^{\sigma_{\vartheta_{\textit{lex}}^{-1} (\xi )} }} \right)$$. $$R_{c_{k} (r)}$$ $$\left( {R_{c_{k} (m)} } \right)$$ is a matrix with ones on the antidiagonal and zeros elsewhere. The matrices '$$S$$' serve to attribute the sign $$(-1)^{\sigma_{\vartheta_{\textit{lex}}^{-1} (\zeta )} +\sigma_{\vartheta_{\textit{lex}}^{-1} (\xi )} }$$ to the entry with coordinates $$\zeta ,\xi$$ of the compound matrix $$\mathfrak{C}_{k} \left( X \right)$$. The matrices '$$R$$' serve to reverse the ordering of rows and columns.
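Formula (2.4) translates directly into code (a sketch with our own function names): the sign matrices $$S$$ carry $$(-1)^{\sigma_{\alpha}}$$, and the antidiagonal matrices $$R$$ reverse the lexicographic enumeration, turning it into the reverse lexicographic one.

```python
import numpy as np
from math import comb
from itertools import combinations

# Sketch of (2.4): C^k(X) = R * S * C_k(X) * S' * R'.
def compound(X, k):
    r, m = X.shape
    return np.array([[np.linalg.det(X[np.ix_(a, b)])
                      for b in combinations(range(m), k)]
                     for a in combinations(range(r), k)])

def supp_compound(X, k):
    r, m = X.shape
    def S(n):  # diagonal signs (-1)^{sigma_alpha}; sum(a)+k converts 0- to 1-based
        return np.diag([(-1.) ** (sum(a) + k) for a in combinations(range(n), k)])
    def R(n):  # ones on the antidiagonal
        return np.flipud(np.eye(comb(n, k)))
    return R(r) @ S(r) @ compound(X, k) @ S(m) @ R(m)

X = np.array([[1., 2, 3, 4], [1, 0, 0, 1], [2, 1, 0, 3]])
print(np.round(supp_compound(X, 2)).astype(int))
# reproduces the matrix C^2(X) displayed in Example 2.2
```

Restricted to square matrices this reproduces Wedderburn's definition, and it remains well defined for rectangular $$X$$.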
Example 2.1 \begin{align*} X&=\left[ {{\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {b_{1} } & {b_{2} } & {b_{3} } \\ \end{array} }} \right]\Rightarrow\mathfrak{C}^{1}(X)\\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 \\ 0 & {(-1)^{2}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {b_{1} } & {b_{2} } & {b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 & 0 \\ 0 & {(-1)^{2}} & 0 \\ 0 & 0 & {(-1)^{3}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right] \\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 \\ 0 & {(-1)^{2}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {-a_{1} } & {a_{2} } & {-a_{3} } \\ {-b_{1} } & {b_{2} } & {-b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {a_{1} } & {-a_{2} } & {a_{3} } \\ {-b_{1} } & {b_{2} } & {-b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &= \left[ {{\begin{array}{*{20}c} {-b_{1} } & {b_{2} } & {-b_{3} } \\ {a_{1} } & {-a_{2} } & {a_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &=\left[ {{\begin{array}{*{20}c} {-b_{3} } & {b_{2} } & {-b_{1} } \\ {a_{3} } & {-a_{2} } & {a_{1} } \\ \end{array} }} \right] \end{align*} Example 2.2 The supplementary compound of order 2 of a 3 by 4 matrix \begin{align*} \begin{array}{*{20}c} X=\left[\!\! 
{\begin{array}{*{20}c} 1 & 2 & 3 & 4 \\ 1 & 0 & 0 & 1 \\ 2 & 1 & 0 & 3 \end{array}} \!\!\right]\Rightarrow \mathfrak{C}_{2} \left( X \right)=\left[ {\begin{array}{*{20}c} -2 & -3 & -3 & 0 & 2 & 3 \\ -3 & -6 & -5 & -3 & 2 & 9 \\ 1 & 0 & 1 & 0 & -1 & 0 \end{array}} \right],\quad S_{c_{2} (3)} =\left[ {\begin{array}{*{20}c} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}} \right] \end{array} \end{align*} \begin{align*} \begin{array}{*{20}c}S_{c_{2} (4)} =\left[ {\begin{array}{*{20}c} -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \end{array}} \right]\Rightarrow \mathfrak{C}^{2}\left( X \right)=\left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 1 & 0 & 1 \\ -9 & 2 & 3 & 5 & -6 & 3 \\ 3 & -2 & 0 & -3 & 3 & -2 \end{array}} \right]\!. \end{array} \end{align*} A formula analogous to (2.4) is derived in Prells et al. (2003) (for square matrices) using exterior algebra. For the very important properties of compound and supplementary compound matrices, see Prells et al. (2003), Wedderburn (1934) and the references therein. The basic properties of the supplementary compound of square matrices are preserved for the supplementary compound of rectangular matrices. We present some of the properties of the supplementary compound matrices used in this article. \begin{align} \begin{array}{l} \mathfrak{C}^{1}(I_{k} )=I_{k} ,\,\,\,\,\mathfrak{C}^{1}\left( {\mathfrak{C}^{1}(X)} \right)=(-1)^{r+m}X \\[4pt] X=Y+Z\Rightarrow \mathfrak{C}^{1}(X)=\mathfrak{C}^{1}(Y)+\mathfrak{C}^{1}(Z) \\[4pt] X=Y\cdot Z\Rightarrow \mathfrak{C}^{k}\left( X \right)=\mathfrak{C}^{k}\left( Y \right)\mathfrak{C}^{k}\left( Z \right) \\[4pt] \mbox{if }r=m,\,\,\,\,\,\,\,\mbox{Adjoint}(X)=\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {X^{{\rm T}}} \right)} \right)=\mathfrak{C}^{r-1}\left( {\mathfrak{C}_{1} \left( {X^{{\rm T}}} \right)} \right)=\mathfrak{C}^{r-1}\left( {X^{{\rm T}}} \right)\!.
\end{array} \end{align} (2.5) Their proofs are easily obtained using (2.4). The theorem for the minors of the inverse matrix of Gantmacher (2000) is written using compound and supplementary compound matrices as: Proposition 2.1 (Theorem for the minors of the inverse matrix) \[ \mathfrak{C}_{k} \left( {X^{-1}} \right)=\mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)\left| X \right|^{-1}. \] Proof. We use the Laplace expansion of the determinant (Wedderburn, 1934): \begin{align*} \begin{array}{l} \mathfrak{C}_{k} \left( X \right)\mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=I_{c(r,k)} \left| X \right|\Rightarrow \\[4pt] \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=\left( {\mathfrak{C}_{k} \left( X \right)} \right)^{-1}\left| X \right|\Leftrightarrow \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=\mathfrak{C}_{k} \left( {X^{-1}} \right)\left| X \right|\Leftrightarrow \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)\left| X \right|^{-1}=\mathfrak{C}_{k} \left( {X^{-1}} \right)\!. \end{array} \end{align*} □

### 2.2. Exterior factorization of a matrix

The Laplace expansion theorem for the determinant of a square matrix is written as \begin{align} \left| X \right|I_{c(r,1)} =\mathfrak{C}^{r-1}(X^{T})X=\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)\!. \end{align} (2.6) For a full rank rectangular matrix $$X$$, we consider the matrix \[ {\it{\Omega}} =\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)\!. \] We shall see that the matrix $${\it{\Omega}}$$ is a well-defined function of $$\mathfrak{C}_{r} (X)$$. Theorem 2.1 For any full rank $$r\times m,r<m$$ matrix $$X$$, the matrix $${\it{\Omega}} =\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)$$ is a function of $$\mathfrak{C}_{r} (X)$$. Proof. \begin{align*} X=\left[\!\!
{{\begin{array}{*{20}l} {x_{11} } & \cdots & {x_{1m} } \\ \vdots & \ddots & \vdots \\ {x_{r1} } & \cdots & {x_{rm} } \\ \end{array} }} \!\!\right]\Rightarrow \mathfrak{C}^{1}\left( {X^{T}} \right)=\left[\!\!{{\begin{array}{*{20}c} {(-1)^{r+m}x_{rm} } & \cdots & {(-1)^{m+1}x_{1m} } \\ \vdots & \ddots & \vdots \\ {(-1)^{m+r-\zeta +1}x_{r(m-\zeta +1)} } & \cdots & {(-1)^{m-\zeta +1}x_{1(m-\zeta +1)} } \\ \vdots & \ddots & \vdots \\ {(-1)^{r+1}x_{r1} } & \cdots & {x_{11} } \\ \end{array} }} \!\!\right]\!. \end{align*} Consider the matrix $${\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)$$. Its entry with coordinates (1, 1), is the inner product of the first row of $$\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)$$ and the first column of $$\mathfrak{C}_{r-1} \,\left( X \right)$$. The above inner product is up to a sign the Laplace expansion (by its last column), of the determinant of the virtual matrix constituted by the columns $$(1,2,\ldots ,r-1)=\vartheta_{\textit{lex}}^{-1} (1)\in P_{r-1} ({\bf m})\mbox{ and }m=\vartheta _{\textit{rlex}}^{-1} (1)\in P_{1} ({\bf m})$$ of $$X$$. $\mbox{This virtual matrix is:} \left[ {{\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1(r-1)} } & {x_{1m} } \\ {x_{21} } & {x_{22} } & \cdots & {x_{2(r-1)} } & {x_{2m} } \\ {\begin{array}{l} \,\,\,\vdots \\ x_{(r-1)1} \\ \end{array}} & {\begin{array}{l} \,\,\,\,\,\,\vdots \\ x_{(r-1)2} \\ \end{array}} & {\begin{array}{l} \ddots \\ \cdots \\ \end{array}} & {\begin{array}{l} \,\,\,\,\,\,\,\,\,\vdots \\ x_{(r-1)(r-1)} \\ \end{array}} & {\begin{array}{l} \,\,\,\,\,\,\,\vdots \\ x_{(r-1)m} \\ \end{array}} \\ {x_{r1} } & {x_{r2} } & \cdots & {x_{r(r-1)} } & {x_{rm} } \end{array} }} \right]$ The sign is $$(-1)^{r+m}$$, sign of $$x_{rm}$$ in $$\mathfrak{C}^{1}\left( {X^{T}} \right)$$ because in the Laplace expansion of the above virtual matrix by its last column,$$x_{rm}$$ goes with sign $$(-1)^{r+r}$$. 
As $$\vartheta_{\textit{lex}} \left( {(1,2,\cdots ,r-1, m)} \right)= m - r+1\mbox{ in }P_{r} ({\bf m})$$ we have: \begin{align} {\it{\Omega}} \{1,1\}=(-1)^{r+{m}}\mathfrak{C}_{r} \left( X \right)\{{m}-r+1\}. \end{align} (2.7) We proceed in a similar way for the entry with coordinates $$(\zeta ,\xi )$$ of the matrix $${\it{\Omega}}$$. $${\it{\Omega}} \left\{ {\zeta ,\xi } \right\}$$ is the determinant of the virtual matrix constituted by the columns $$\left( {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda _{r-1} } \right)=\vartheta_{\textit{lex}}^{-1} (\xi )\in P_{r-1} ({\bf m})$$ and $$m-\zeta +1=\vartheta_{\textit{rlex}}^{-1} (\zeta )\in P_{1} ({\bf m})$$ of $$X$$. Remark that: (i) this determinant vanishes if $$m-\zeta +1\in \left\{ {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda_{r-1} } \right\}$$, because the virtual matrix has two identical columns; (ii) it is equal, up to a sign, to $$\mathfrak{C}_{r} \left( X \right)\{\kappa \}$$ if $$m-\zeta +1\notin \left\{ {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda_{r-1} } \right\}$$, with $$\kappa =\vartheta_{\textit{lex}} \left( {\left( {\lambda_{1} ,\lambda_{2} ,\ldots ,\underline{\lambda_{\mu -1} },m-\zeta +1,\underline{\lambda_{\mu +1} },\ldots ,\lambda_{r-1} } \right)} \right)$$ in $$P_{r} ({\bf m})$$; (iii) the sign equals $$(-1)^{m+r-\mu -\zeta +1}$$, with $$\mu$$ the position of $$m-\zeta +1$$ in $$\left\{ {\lambda_{1} ,\lambda_{2} ,\cdots ,\underline{\lambda_{\mu -1} },m-\zeta +1,\underline{\lambda_{\mu +1} },\cdots ,\lambda_{r-1} } \right\}$$. In the exponent, $$\zeta +1$$ is due to the sign of the first element of the $$\zeta \mbox{th}$$ row of $$\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)$$, and $$r-\mu$$ to the number of permutations necessary to rearrange the columns of the virtual matrix in increasing order. 
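This index bookkeeping is easy to check numerically. The sketch below is a minimal Python/NumPy illustration (the helper names `compound` and `supp1` are ours, not the paper's; `supp1` realizes the order-one supplementary compound exactly as displayed above, i.e. a 180-degree rotation of the matrix with checkerboard signs). It verifies (2.7) and the involution property from (2.5) on a random $3\times 5$ matrix:

```python
import numpy as np
from itertools import combinations

def compound(X, k):
    """k-th compound matrix C_k(X): all k x k minors, with the row and
    column index sets enumerated in lexicographic order."""
    r, m = X.shape
    row_sets = list(combinations(range(r), k))
    col_sets = list(combinations(range(m), k))
    return np.array([[np.linalg.det(X[np.ix_(a, b)]) for b in col_sets]
                     for a in row_sets])

def supp1(M):
    """Order-one supplementary compound C^1(M): rotate M by 180 degrees
    and apply alternating signs, as in the displayed form of C^1(X^T)."""
    a, b = M.shape
    signs = np.fromfunction(lambda i, j: (-1.0) ** (a + b - i - j), (a, b))
    return signs * M[::-1, ::-1]

rng = np.random.default_rng(0)
r, m = 3, 5
X = rng.standard_normal((r, m))
Omega = supp1(X.T) @ compound(X, r - 1)

# (2.7): Omega{1,1} = (-1)^(r+m) C_r(X){m-r+1}   (1-indexed entries)
assert np.isclose(Omega[0, 0], (-1) ** (r + m) * compound(X, r)[0, m - r])

# involution property from (2.5): C^1(C^1(X)) = (-1)^(r+m) X
assert np.allclose(supp1(supp1(X)), (-1) ** (r + m) * X)
```

Entry $(1,1)$ of $\Omega$ indeed reproduces, up to the sign $(-1)^{r+m}$, the maximal minor of $X$ indexed by $m-r+1$.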
The entries of the matrix $${\it{\Omega}}$$ are: \begin{align} &\begin{array}{ll} {\it{\Omega}} \left\{ {\zeta ,\xi } \right\}=\left\{ \begin{array}{@{}ll} 0 &\mbox{if}\,\vartheta _{\textit{rlex}}^{-1} (\zeta )\in \vartheta_{\textit{lex}}^{-1} (\xi )\, \\[4pt] (-1)^{m+r-\mu -\zeta +1}\mathfrak{C}_{r} \left( X \right)\{\kappa \}&\mbox{if}\,\vartheta_{\textit{rlex}}^{-1} (\zeta) {\not\in}\vartheta_{\textit{lex}}^{-1} (\xi)\, \\[4pt] \end{array} \right.\,\,\,\kappa =\vartheta_{\textit{lex}} \left( {\left( {\vartheta_{\textit{lex}}^{-1} (\xi )\_ \vartheta _{\textit{rlex}}^{-1} (\zeta )} \right)^{>}} \right)\!. \\\\[-6pt] \qquad r-\mu \mbox{ is the number of elements of }\vartheta _{\textit{lex}}^{-1} (\xi )\mbox{ greater than }\vartheta_{\textit{rlex}}^{-1} (\zeta ) \end{array}\\ \end{align} (2.8) \begin{align} &\mbox{Of course, } {\it{\Omega}} \mbox{ is also given as } {\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right) \end{align} (2.9) Example 2.3 The entries of the matrix $${\it{\Omega}}$$ for a $$2\times 3$$ matrix $$X$$ are as follows: \begin{align} {\begin{array}{*{20}c} {\left( {\zeta ,\xi } \right)} & {\vartheta_{\textit{lex}}^{-1} (\xi )} & {\vartheta_{\textit{rlex}}^{-1} (\zeta )} & \mu & \kappa & {{\it{\Omega}} \left\{ {\zeta ,\xi } \right\}} \\ {\left( {1,1} \right)} & 1 & 3 & 0 & 2 & {-\mathfrak{C}_{2} (X)\{2\}} \\ {\left( {1,2} \right)} & 2 & 3 & 0 & 3 & {-\mathfrak{C}_{2} (X)\{3\}} \\ {\left( {1,3} \right)} & 3 & 3 & \times & \times & 0 \\ {\left( {2,1} \right)} & 1 & 2 & 0 & 1 & {\mathfrak{C}_{2} (X)\{1\}} \\ {\left( {2,2} \right)} & 2 & 2 & \times & \times & 0 \\ {\left( {2,3} \right)} & 3 & 2 & 1 & 3 & {-\mathfrak{C}_{2} (X)\{3\}} \\ {\left( {3,1} \right)} & 1 & 1 & \times & \times & 0 \\ {\left( {3,2} \right)} & 2 & 1 & 1 & 2 & {\mathfrak{C}_{2} (X)\{1\}} \\ {\left( {3,3} \right)} & 3 & 1 & 1 & 1 & {\mathfrak{C}_{2} (X)\{2\}} \end{array} } \end{align} (2.10) Theorem 2.1 is an extension to non-square matrices of the 
Laplace expansion of the determinant. Restricted to square matrices, it just means that a matrix equals the inverse of its inverse. The matrix $$X$$ is supposed to have rank $$r$$. So $$\mathfrak{C}_{r} (X)$$ is not identically zero and $$\mathfrak{C}_{r-1} \left( X \right)$$ has rank $$r$$. Consequently there is a subset of its columns $$\alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)\in P_{r} ({\bf c}_{r-1} (m))$$ with $$\left| {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$. Choosing among the equations (2.9) those of the columns $$\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r}$$ we obtain \[ {\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}=\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\} \] The above equation leads to the following. Theorem 2.2 For any full rank $$r\times m,r<m$$ matrix $$X$$, there exist two matrices $$Z,W$$, with $$Z$$ a function of $$\mathfrak{C}_{r-1} \left( X \right)$$, $$W$$ a function of $$\mathfrak{C}_{r} \left( X \right)$$ and $$X=Z^{-1}W$$. Proof. \[ {\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}=\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\} \] We take the supplementary compound of order one of both parts: $\mathfrak{C}^{1}\left( {{\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}} \right)=(-1)^{r+m}X^{{\rm T}}\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)$ Thanks to the choice of $$\alpha$$ we can solve for $$X$$. 
\begin{align} \mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right)&=(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}}X \notag\\ X&=(-1)^{r+m}\left( {\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}}} \right)^{-1}\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right) \end{align} (2.11) Putting $$Z=(-1)^{r+m}\left( {\mathfrak{C}^{1}\left({\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha }\right\}} \right)} \right)^{{\rm T}},W=\mathfrak{C}^{1}\left({{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right)$$ we can write $$X=Z^{-1}W$$. □ The ‘exterior’ factorization of the matrix $$X$$ will play a central role in this article. The matrices $$Z,W$$ depend on the choice of $$\alpha$$. As the number of possible choices of $$\alpha$$ is $$c_{r} (c_{r-1} (m))$$, there are at most $$c_{r} (c_{r-1} (m))$$ exterior factorizations of an $$r\times m$$ matrix. For square matrices the exterior factorization is unique, as $$c_{r} (c_{r-1} (r))=1$$. Remark that the choice of $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,...,\alpha_{r} } \right\}$$ amounts to a multiplication of both sides of (2.9) by a matrix $$V_{\alpha }$$ whose $$k$$th column has zero entries except the $$\alpha_{k}$$th, which is equal to one. Formula (2.11) holds true if instead of $$V_{\alpha }$$ we use any matrix $$V$$ with $$\left| {\mathfrak{C}_{r-1} (X)V} \right|\ne 0$$. The significant difference between the two is that in the first we use the minimal number of the entries of the matrices $$\mathfrak{C}_{r-1} (X),\,\,{\it{\Omega}}$$, while in the second we use linear combinations of a potentially larger number of entries and thus redundant data. 
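The factorization of Theorem 2.2 can be exercised numerically. The following minimal NumPy sketch (helper names `compound` and `supp1` are ours; `supp1` realizes $\mathfrak{C}^{1}$ as a 180-degree rotation with alternating signs, matching the displayed form of $\mathfrak{C}^{1}(X^{\rm T})$) picks an admissible $\alpha$ and recovers a random $2\times 4$ matrix $X$ from $Z$ and $W$ exactly as in (2.11):

```python
import numpy as np
from itertools import combinations

def compound(X, k):
    """k-th compound matrix: all k x k minors, index sets in lex order."""
    r, m = X.shape
    rows = list(combinations(range(r), k))
    cols = list(combinations(range(m), k))
    return np.array([[np.linalg.det(X[np.ix_(a, b)]) for b in cols] for a in rows])

def supp1(M):
    """Order-one supplementary compound: 180-degree rotation with signs."""
    a, b = M.shape
    signs = np.fromfunction(lambda i, j: (-1.0) ** (a + b - i - j), (a, b))
    return signs * M[::-1, ::-1]

rng = np.random.default_rng(1)
r, m = 2, 4
X = rng.standard_normal((r, m))

C = compound(X, r - 1)            # r x c_{r-1}(m)
Omega = supp1(X.T) @ C            # m x c_{r-1}(m), a function of C_r(X) by Thm 2.1

# lexicographically first alpha with a nonsingular C_{r-1}(X){r, alpha}
alpha = next(a for a in combinations(range(C.shape[1]), r)
             if abs(np.linalg.det(C[:, list(a)])) > 1e-9)

Z = (-1) ** (r + m) * supp1(C[:, list(alpha)]).T   # a function of C_{r-1}(X)
W = supp1(Omega[:, list(alpha)]).T                 # a function of C_r(X), via Omega

X_rec = np.linalg.solve(Z, W)                      # X = Z^{-1} W, as in (2.11)
assert np.allclose(X_rec, X)
```

The recovery works for any admissible $\alpha$; the lexicographically first one is used here only for definiteness.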
Furthermore, the sets $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,...,\alpha_{r} } \right\}$$ are ordered lexicographically, and we can use the first ($$\min \left( {\vartheta_{\textit{lex}}(\alpha )} \right)$$) among those satisfying $$\left| {\mathfrak{C}_{r-1} (X)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ to obtain, in a unique way, exactly one exterior factorization of the matrix $$X$$ involving a minimal number of parameters. 2.3. Plücker coordinates and Plücker powers Definition 2.3 Let $$X\in \mathbb{K}^{p\times q},p<q$$ be a $$p\times q$$ matrix over $$\mathbb{K}$$ and $$\mathcal{X}=\mbox{rowspan}(X)$$ the $$p$$-dimensional subspace of $$\mathbb{K}^{q}$$ with basis the rows of $$X$$. The elements of the set of all $$c_{p} (q)$$, $$p\times p$$ minors of $$X$$ are then called the Grassmann coordinates of $$\mathcal{X}$$. Definition 2.4 (Karcanias & Giannakopoulos, 1984) The Grassmann coordinates of the space $$\mbox{rowspan}\left( {\left[ {D(s)\,\,N(s)} \right]} \right)$$ with $$F(s)=D^{-1}(s)\,N(s)$$ a left coprime factorization or the space $\mbox{colspan}\left( {\left[ {\begin{array}{l} D(s)\, \\ N(s) \\ \end{array}} \right]} \right)$ with $$F(s)=\,N(s)D^{-1}(s)$$ a right coprime factorization are called the Plücker coordinates of the transfer function matrix $$F(s)$$. Let $$\boldsymbol{\mathfrak{P}}=\left[ {\boldsymbol{\mathfrak{p}}_{1},\boldsymbol{\mathfrak{p}}_{2} ,\ldots,\boldsymbol{\mathfrak{p}}_{c_{r} (m+r)} }\right]=\boldsymbol{\mathfrak{C}}_{r} \left( {\left[ {D(s)\,\,N(s)}\right]} \right)$$ be the vector with entries the Plücker coordinates of the transfer function matrix $$F(s)$$ and $$\boldsymbol{\mathfrak{G}}=\left[ {\boldsymbol{\mathfrak{g}}_{1},\boldsymbol{\mathfrak{g}}_{2} ,\cdots,\boldsymbol{\mathfrak{g}}_{c_{r} (m+r)} }\right]=\boldsymbol{\mathfrak{C}}_{r} \left( {\left[ {I_{r} \,F(s)}\right]} \right)$$ the vector with entries the Grassmann coordinates of $$\mbox{rowspan}([I_{r} \,\,F(s)])$$. 
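A quick numerical illustration of Definition 2.3: the Grassmann coordinates are attached to the subspace itself, in the sense that a change of basis $X\mapsto TX$ of $\mbox{rowspan}(X)$ rescales every maximal minor by $\det(T)$, so they are determined by the subspace up to a common nonzero factor. A minimal NumPy check (the function name `grassmann` is ours):

```python
import numpy as np
from itertools import combinations

def grassmann(X):
    """Grassmann coordinates of rowspan(X): all p x p minors of the full
    rank p x q matrix X, column index sets in lexicographic order."""
    p, q = X.shape
    return np.array([np.linalg.det(X[:, list(c)])
                     for c in combinations(range(q), p)])

rng = np.random.default_rng(2)
p, q = 2, 4
X = rng.standard_normal((p, q))
T = rng.standard_normal((p, p))   # almost surely nonsingular: change of basis

# a basis change of the row space rescales all Grassmann coordinates by det(T)
assert np.allclose(grassmann(T @ X), np.linalg.det(T) * grassmann(X))
```

This follows from the multiplicativity of the compound: $\mathfrak{C}_{p}(TX)=\mathfrak{C}_{p}(T)\mathfrak{C}_{p}(X)$, and $\mathfrak{C}_{p}(T)$ is the scalar $\det(T)$.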
\begin{align} \mbox{Remark that} \ \mathfrak{C}_{r} \left( {\left[ {D(s)\,\,N(s)} \right]} \right)=\mathfrak{C}_{r} \left( {\left[ {I_{r} \,\,D^{-1}(s)N(s)} \right]} \right)/\left| {D(s)} \right| \mbox{ and thus } \mathfrak{P}=\mathfrak{G}\left| {D(s)} \right| \end{align} (2.12) To each transfer function matrix $$F(s)\in {\it{\Sigma}}$$ we define the sequence: \begin{align} \begin{array}{l} Z_{0} (s)\,\,\mbox{is the least common multiple of the denominators of the entries of }F(s) \\ Z_{k} (s)=\mathfrak{C}_{k} \left( {F(s)} \right) Z_{0} (s)\,\,k=1,\ldots ,r \end{array} \end{align} (2.13) Proposition 2.2 The entities (scalars, vectors or matrices) $$Z_{0} (s),Z_{1} (s),\ldots ,Z_{r} (s)$$ of (2.13) are polynomial. Proof. We consider a left coprime factorization $$\left( {D(s),\,\,N(s)} \right)$$ of the transfer function matrix $$F(s)$$. Then $$F(s)=D^{-1}(s)N(s)$$ and we apply the theorem for the minors of the inverse matrix. \begin{align*} \begin{array}{l} \mathfrak{C}_{k} \left( {F(s)} \right)=\mathfrak{C}_{k} \left( {D^{-1}(s)N(s)} \right)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right)\left| {D(s)} \right|^{-1}\Rightarrow \\[-6pt] \Rightarrow \mathfrak{C}_{k} \left( {F(s)} \right)Z_{0} (s)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right)\Rightarrow Z_{k} (s)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right),\,\,\mbox{i.e. polynomial} \end{array} \\[-3.5pc] \end{align*} □ Definition 2.5 The entity $$Z_{k} (s)$$ will be called the $$k$$th Plücker power of the transfer function matrix $$F(s)$$. We will now establish a bijection between the set of the Plücker coordinates and the set of the entries of the Plücker powers of a transfer function matrix. 
For this purpose, to each $$\lambda \in \left\{ {1,2,\ldots ,c_{r} (m+r)} \right\}$$, consider $$\alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)=\vartheta_{\textit{lex}}^{-1} (\lambda )\in P_{r} ({\bf m}+{\bf r})$$, $$\beta$$ the subset of $$\alpha$$ with elements at most equal to $$r$$ and $$\gamma$$ the subset of elements greater than $$r$$. Let $$v$$ be the cardinality of $$\gamma$$ and $$\delta =\gamma -r\in P_{v} ({\bf m})$$. Let also $$\zeta =\vartheta_{\textit{lex}} \mbox{(}\bar{{\beta }}),\,\,\xi =\vartheta_{\textit{lex}} (\delta )$$. Proposition 2.3 \begin{align} \mathfrak{g}_{\lambda } =\mathfrak{s}_{\lambda } \left| {F(s)\left\{ {\bar{{\beta }},\delta } \right\}} \right|=\mathfrak{s}_{\lambda } \mathfrak{C}_{v} \left( {F(s)} \right)\left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.14) Proof. The determinant of the matrix constituted by the $$\alpha$$ columns of $$[I_{r} \,\,F(s)]$$ equals, up to a sign, the determinant of the matrix resulting from the replacement of the $$\bar{{\beta }}$$ columns of $$I_{r}$$ by the $$\delta$$ columns of $$F(s)$$. Using the Laplace expansion, we find that this determinant equals the determinant of the matrix constituted by the $$\bar{{\beta }}$$ rows and $$\delta$$ columns of $$F(s)$$. The sign equals $$(-1)^{\mu (\beta \_ \delta )}$$, with $$\mu (\beta \_ \delta )$$ the number of permutations needed to rearrange $$(\beta \_ \delta )$$ in increasing order. □ A direct consequence of the above proposition is that: \begin{align} \mathfrak{p}_{\lambda } =\mathfrak{s}_{\lambda } Z_{v} (s)\left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.15) Equation (2.15) establishes a one-to-one correspondence between the Plücker coordinates and the entries of the Plücker powers of transfer function matrices. Of course, we suppose that $$\mathfrak{C}_{0} \left( {F(s)} \right)=1$$. The difference between Plücker coordinates and Plücker powers is that the first are viewed as scalars and the second as matrices. 
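Proposition 2.3 can be verified exhaustively for small dimensions. In the sketch below (variable names are ours; a random constant matrix stands in for $F(s)$ frozen at a point), the sign $(-1)^{\mu(\beta\_\delta)}$ is obtained by expanding the determinant along the identity columns, which is an equivalent way of counting the permutations:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
r, m = 2, 3
F = rng.standard_normal((r, m))        # stands in for F(s) at a fixed s
IF = np.hstack([np.eye(r), F])

checked = 0
for alpha in combinations(range(r + m), r):
    beta = [i for i in alpha if i < r]        # columns taken from I_r
    delta = [i - r for i in alpha if i >= r]  # columns taken from F
    beta_bar = [i for i in range(r) if i not in beta]
    g = np.linalg.det(IF[:, list(alpha)])     # Grassmann coordinate g_lambda
    minor = np.linalg.det(F[np.ix_(beta_bar, delta)]) if delta else 1.0
    # the sign (-1)^{mu(beta_delta)} of Proposition 2.3, computed here by
    # cofactor expansion along the identity columns (0-indexed count)
    sign = (-1.0) ** (sum(beta) + sum(range(len(beta))))
    assert np.isclose(g, sign * minor)
    checked += 1
assert checked == 10                           # c_2(5) maximal minors checked
```

All $c_{r}(m+r)$ Grassmann coordinates of $[I_{r}\ F]$ are matched, with sign, to minors of $F$, as (2.14) asserts.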
Multiplying both sides of (2.14) by $$Z_{0} (s)$$ we get: \begin{align} \mathfrak{p}_{\lambda } =\mathfrak{g}_{\lambda } Z_{0} (s)=\mathfrak{s}_{\lambda } \left| {F(s)\left\{ {\bar{{\beta }},\delta } \right\}} \right|Z_{0} (s)=\mathfrak{s}_{\lambda } Z_{v} \left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.16) The set of Plücker coordinates gives rise to the Plücker matrix $$\mathcal{P}$$ of the system. Definition 2.6 (Karcanias & Giannakopoulos, 1984) The matrix $$\mathcal{P} =\left[ {{\boldsymbol{p}}_{1} ,{\boldsymbol{p}}_{2} ,\ldots ,{\boldsymbol{p}}_{c_{r} (m+r)} } \right]$$, with $${\boldsymbol{p}}_{\lambda }$$ the vector with entries the coefficients of $$\mathfrak{p}_{\lambda }$$ (each row of $$\mathcal{P}$$ contains coefficients of the same degree), is called the Plücker matrix of the system $$F(s)\in {\it{\Sigma}}$$. The Laplace expansion of the determinant, as well as the exterior factorization, are both subsets of the Plücker quadratic relations among the Plücker coordinates of a system, written in matrix multiplication form. 2.4. Exterior factorization of a transfer function matrix We now use the Plücker powers to calculate the exterior factors of a transfer function matrix $$F(s)$$. 
\[ \begin{array}{l} Z_{\alpha } =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {F(s)} \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}},W_{\alpha } =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {\mathfrak{C}_{r} \left( {F(s)} \right)} \right)\left\{ {{\bf m},\alpha } \right\}} \right)\Rightarrow F(s)=Z_{\alpha }^{-1} W_{\alpha } \\\\[-6pt] \Rightarrow F(s)=\left( {Z_{\alpha }^{-1} Z_{0}^{-1} } \right)\left( {Z_{0} W_{\alpha } } \right) \end{array} \] Then we can redefine $$Z_{\alpha } ,\,\,W_{\alpha }$$ as $\begin{array}{l} Z_{\alpha } =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {F(s)} \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}} Z_{0} =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {Z_{r-1} \left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}} \\\\[-6pt] W_{\alpha } =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {\mathfrak{C}_{r} \left( {F(s)} \right)} \right)\left\{ {{\bf m},\alpha } \right\}} \right)Z_{0} =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {Z_{r} } \right)\left\{ {{\bf m},\alpha } \right\}} \right) \\ \end{array}$ This way the factors $$Z_{\alpha } ,\,\,W_{\alpha }$$ become polynomial, depending on the Plücker powers $$Z_{r-1} (s)$$ and $$Z_{r} (s)$$, respectively. 
Example 2.4 Exterior factorizations of a $$2\times 3$$ matrix over the field of rational functions $F(s)=\left[ {{\begin{array}{*{20}c} {\frac{1}{s+1}} & {\frac{1}{s+2}} & {\frac{1}{s+3}} \\ {\frac{1}{s+4}} & {\frac{1}{s+5}} & {\frac{1}{s+6}} \\ \end{array} }} \right]$ The Plucker powers are $\begin{array}{l} Z_{0} (s)=s^{6}+21s^{5}+175s^{4}+735s^{3}+1624s^{2}+1764s+720\\ Z_{1} (s)=\left[ {{\begin{array}{*{20}c} {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} & {s^{5}+17s^{4}+107s^{3}+307s^{2}+396s+180} \\ {s^{5}+19s^{4}+137s^{3}+461s^{2}+702s+360} & {s^{5}+16s^{4}+95s^{3}+260s^{2}+324s+144} \\ {s^{5}+18s^{4}+121s^{3}+372s^{2}+508s+240} & {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} \\ \end{array} }} \right]^{{\rm T}} \\\\[-6pt] Z_{2} (s)=\left[ {{\begin{array}{*{20}c} {3s^{2}+27s+54} & {6s^{2}+42s+60} & {3s^{2}+15s+12} \\ \end{array} }} \right] \\ \end{array}$ The Omega matrix $${\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)=\mathfrak{C}^{1}\left( {F^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( F \right)$$ ${\it{\Omega}} (s)=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ {-3s^{2}-27s-54} & 0 & {3s^{2}+15s+12} \\ 0 & {-3s^{2}-27s-54} & {-6s^{2}-42s-60} \\ \end{array} }} \right]=W$ There are three exterior factorizations with $$\mathbf{\alpha }_{1} =\{1,2\},\,\,\mathbf{\alpha }_{2} =\{1,3\},\,\,\mathbf{\alpha }_{3} =\{2,3\}$$ \begin{align*} Z_{\mathbf{\alpha }_{1} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+16s^{4}+95s^{3}+260s^{2}+324s+144} & {-s^{5}-19s^{4}-137s^{3}-461s^{2}-702s-360} \\ {-s^{5}-17s^{4}-107s^{3}-307s^{2}-396s-180} & {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{1} } &=\left[ {{\begin{array}{*{20}c} {3s^{2}+27s+54} & 0 & {-3s^{2}-15s-12} \\ 0 & {3s^{2}+27s+54} & {6s^{2}+42s+60} \\ \end{array} }} \right] \\ Z_{\mathbf{\alpha }_{2} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} & 
{-s^{5}-18s^{4}-121s^{3}-372s^{2}-508s-240} \\ {-s^{5}-17s^{4}-107s^{3}-307s^{2}-396s-180} & {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{2} } &=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ 0 & {3s^{2}+27s+54} & {6s^{2}+42s+60} \\ \end{array} }} \right] \\ Z_{\mathbf{\alpha }_{3} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} & {-s^{5}-18s^{4}-121s^{3}-372s^{2}-508s-240} \\ {-s^{5}-16s^{4}-95s^{3}-260s^{2}-324s-144} & {s^{5}+19s^{4}+137s^{3}+461s^{2}+702s+360} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{3} } &=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ {-3s^{2}-27s-54} & 0 & {3s^{2}+15s+12} \\ \end{array} }} \right] \end{align*} We can easily verify that: $F(s)=\left( {Z_{\mathbf{\alpha }_{1} } } \right)^{-1}W_{\mathbf{\alpha }_{1} } =\left( {Z_{\mathbf{\alpha }_{2} } } \right)^{-1}W_{\mathbf{\alpha }_{2} } =\left( {Z_{\mathbf{\alpha }_{3} } } \right)^{-1}W_{\mathbf{\alpha }_{3} } \,\,\,$ Example 2.5 For $$r=3,\,\,m=4\, c_{3} (4)=4\Rightarrow z_{3} (s)=[z_{1} (s)\,\,z_{2} (s)\,\,z_{3} (s)\,\,z_{4} (s)]\,\,\,\,c_{2} (4)=6\Rightarrow {\it{\Omega}} (s)$$ is a $$4\times 6$$ matrix \begin{align*} {\it{\Omega}} (s)=\left[\!\!{\begin{array}{*{20}c} z_{2} (s) & z_{3}(s) & 0 & z_{4} (s) & 0 & 0 \\ -z_{1} (s) & 0 & z_{3} (s) & 0 & z_{4} (s) & 0 \\ 0 & -z_{1} (s) & -z_{2} (s) & 0 & 0 & z_{4} (s) \\ 0 & 0 & 0 & -z_{1} (s) & -z_{2} (s) & -z_{3} (s) \\ \end{array}}\!\! \right]\,\,\Rightarrow \left\{ \!\!\!{{\begin{array}{*{20}c} {{\rm {\bf \alpha }}=\left\{ {1,\,2,\,4} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[\!\!\!\! {{\begin{array}{*{20}c} {-z_{1} (s)} & 0 & 0 & {-z_{4} (s)} \\ 0 & {-z_{1} (s)} & 0 & {\,\,\,z_{3} (s)} \\ 0 & 0 & {-z_{1} (s)} & {-z_{2} (s)} \\ \end{array} }} \!\!\!\!\right]} \\ \end{array} }} \right. \\ \left\{\!\! 
{{\begin{array}{*{20}c} {\,\mathbf{\alpha }=\left\{ {1,\,3,\,6} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[\!\!\! {{\begin{array}{*{20}c} {-z_{3} (s)} & {-z_{4} (s)} & 0 & 0 \\ 0 & {-z_{2} (s)} & {-z_{3} (s)} & {\,\,\,0} \\ 0 & 0 & {-z_{1} (s)} & {-z_{2} (s)} \\ \end{array} }} \!\!\!\right]} \\ \end{array} }} \right.{\begin{array}{*{20}c} {\left\{\!\!\!{{\begin{array}{*{20}c} {{\rm {\bf \alpha }}=\left\{{4,\,5,\,6} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[ {{\begin{array}{*{20}c} {-z_{3} (s)} & {-z_{4} (s)} & 0 & 0 \\ {z_{2} (s)} & 0 & {-z_{4} (s)} & {\,\,\,0} \\ {-z_{1} (s)} & 0 & 0 & {-z_{4} (s)} \\ \end{array} }} \right]\,\,} \\ \end{array} }} \right.} & \\ \end{array} } \end{align*} 2.5. Closed loop Plücker coordinates, powers and exterior factors of a system The action of the SOF group on $${\it{\Sigma}}$$, induces an action on the set of the Plücker coordinates. \begin{align} &{[}I_{r} \,\,F(s)]\mapsto [I_{r} \,\,\left( {I_{r} +F(s)H} \right)^{-1}F(s)]=\left( {I_{r} +F(s)H} \right)^{-1}[\,\left( {I_{r} +F(s)H} \right)F(s)] \notag\\ &\quad\Rightarrow {[}I_{r} \,\,F(s)]\mapsto \left( {I_{r} +F(s)H} \right)^{-1}[I_{r} \,\,F(s)]\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right] \notag\\ &\quad\Rightarrow\mathfrak{C}_{r} \left( {[I_{r} \,\,F(s)]} \right)\mapsto \mathfrak{C}_{r} \left( {\left( {I_{r} +F(s)H} \right)^{-1}} \right)\mathfrak{C}_{r} \left( {[I_{r} \,\,F(s)]} \right)\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right]} \right) \notag\\ &\quad\Rightarrow \mathfrak{G}\mapsto \mathfrak{G}\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right]} \right)/\left| {I_{r} +F(s)H} \right|\Rightarrow \mathfrak{P}\mapsto \mathfrak{P}\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} 
} \\ H & {I_{m} } \\ \end{array} }} \right]} \right)=\mathfrak{P}\mathfrak{T}=\tilde{{\mathfrak{P}}} \end{align} (2.17) Remark that the matrix $$\mathfrak{T}$$ is lower triangular with ones on the diagonal. So each closed loop Plücker coordinate $$\tilde{{\mathfrak{p}}}_{{\kern 1pt}i}$$ is a linear combination of $$\mathfrak{p}_{{\kern 1pt}i}$$ and its subsequent open loop Plücker coordinates $$\mathfrak{p}_{{\kern 1pt}j} ,j>i$$. The above remark is sufficient to state and prove the results of this paper. We give, however, analytic expressions for the closed loop Plücker powers $$\tilde{{Z}}_{0} (s),\,\,\tilde{{Z}}_{1} (s),\,\,\tilde{{Z}}_{r-1} (s),\,\,\tilde{{Z}}_{r} (s)$$. Proposition 2.4 If $$z_{k\zeta \xi } (s)$$ is the entry with coordinates $$\zeta ,\,\xi$$ of $$Z_{k} \left( s \right)$$ and $$d_{k\xi \zeta }$$ the entry with coordinates $$\xi ,\,\zeta$$ of $$\mathfrak{C}_{k} \left( H \right)$$, the closed loop characteristic polynomial is \begin{align} \tilde{{Z}}_{0} (s)=Z_{0} (s)+\sum\limits_{k=1}^r {\sum\limits_{\zeta =1}^{\left( {_{k}^{r} } \right)} {\sum\limits_{\xi =1}^{\left( {_{k}^{m} } \right)} {z_{k\zeta \xi } (s)d_{k\xi \zeta } } } } \end{align} (2.18) Proof. We expand $$\left| {D(s)+N(s)H} \right|$$, the determinant of a sum of two matrices, using Theorem 4.1 of Prells et al. 
(2003) \begin{align} \left| {D(s)+N(s)H} \right|=\sum\limits_{k=0}^r {\mbox{trace}\left( {\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)H} \right)} \right)} \end{align} (2.19) \begin{align} \left| {D(s)+N(s)H} \right|=\sum\limits_{k=0}^r {\sum\limits_{\zeta =1}^{\left( {_{k}^{r} } \right)} {\sum\limits_{\xi =1}^{\left( {_{k}^{m} } \right)} {z_{k\zeta \xi } (s)d_{k\xi \zeta } } } } \mbox{ with }\left( {z_{011} (s)=z_{0} (s),d_{011} =1} \right) \end{align} (2.21a) We can use $$\chi_{k} =\vartheta_{\textit{lex}}^{-1} (\zeta )$$, $$\psi_{k} =\vartheta_{\textit{lex}}^{-1} (\xi )$$ to write (2.21a) as \begin{align} \left| {D(s)+N(s)H} \right|=\left| {D(s)} \right|\sum\limits_{k=0}^r {\sum\limits_{\chi_{k} \in {\it{\mathsf P}}_{k} ({\bf r})} {\sum\limits_{\psi_{k} \in {\it{\mathsf P}}_{k} ({\bf m})} } } \left| {F(s)\left\{ {\chi_{k} ,\psi_{k} } \right\}} \right|\left| {H\left\{ {\psi_{k} ,\chi_{k} } \right\}} \right| \end{align} (2.21b) Formula (2.21a) is oriented to the Plücker coordinates while (2.21b) to Plücker powers. □ As $$Z_{1} (s)=\mbox{adj}\left( {D(s)} \right)N(s),\tilde{{Z}}_{1} (s)=\mbox{adj}\left( {D(s)+N(s)H} \right)N(s)$$ one can use Jacobi’s formula to get an analytic expression for $$\tilde{{Z}}_{1} (s)$$. Proposition 2.5 The closed loop matrix $$\tilde{{Z}}_{1} (s)$$ is: \begin{align} \tilde{{Z}}_{1} (s)=\left[ {{\begin{array}{*{20}c} {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{11} }}} \right. } {\partial h_{11} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{21} }}} \right. } {\partial h_{21} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{m1} }}} \right. } {\partial h_{m1} }} \\ {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{12} }}} \right. } {\partial h_{12} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{22} }}} \right. } {\partial h_{22} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{m2} }}} \right. 
} {\partial h_{m2} }} \\ \vdots & \vdots & \ddots & \vdots \\ {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{1r} }}} \right. } {\partial h_{1r} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{2r} }}} \right. } {\partial h_{2r} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{mr} }}} \right. } {\partial h_{mr} }} \\ \end{array} }} \right]\tilde{{Z}}_{0} (s) \end{align} (2.22) Proof. We use Jacobi’s formula $$\frac{d}{dx}\det \left( {M(x)} \right)=tr\left( {\mbox{adj}(M)\frac{dM(x)}{dx}} \right)$$ for the derivative of the determinant of a matrix to obtain: $\frac{d}{dh_{\zeta \xi } }\det \left( {D(s)+N(s)H} \right)=tr\left( {\mbox{adj}(D(s)+N(s)H)\frac{d(D(s)+N(s)H)}{dh_{\zeta \xi } }} \right)\,\,$ But $$\frac{d(D(s)+N(s)H)}{dh_{\zeta \xi } }=N(s)E_{\zeta \xi }$$, where $$E_{\zeta \xi }$$ is the matrix with all entries zero except the $$(\zeta ,\xi )$$ one, which equals one. The trace therefore collapses to a single entry: $\frac{d}{dh_{\zeta \xi } }\det \left( {D(s)+N(s)H} \right)=\left( {\mbox{adj}(D(s)+N(s)H)N(s)} \right)\{\xi ,\zeta \}$ □ A formula similar to (2.22) can be used for the higher order Plücker powers, with the operator $$\partial \mathord{\left/ {\vphantom {\partial {\partial d_{\kappa \zeta \xi } }}} \right. } {\partial d_{\kappa \zeta \xi } }$$. For the closed loop Plücker powers of orders $$r-1$$ and $$r$$, however, it is easier to use the theorem for the minors of the inverse matrix. Theorem 2.3 \begin{align} \mbox{The } r^{\textit{th}} \mbox{ Plücker power } Z_{r} (s) \mbox{ is } {\rm E}-\mbox{invariant, i.e. } \tilde{{Z}}_{r} (s)=Z_{r} (s) \end{align} (2.23) Proof. 
For the open loop $$r^{\textit{th}}$$ Plücker power $$Z_{r} (s)$$ we have $\mathfrak{C}_{r} \left( {F(s)} \right)=\mathfrak{C}_{r} \left( {D^{-1}(s)N(s)} \right)=\mathfrak{C}_{r} \left( {N(s)} \right)\left| {D(s)} \right|^{-1}\Rightarrow Z_{r} (s)=\mathfrak{C}_{r} \left( {N(s)} \right)$ For the closed loop $$r^{th}$$ Plücker power $$\tilde{{Z}}_{r} (s)$$ we have \begin{align*} \mathfrak{C}_{r} \left( {\left( {D(s)+N(s)H} \right)^{-1}N(s)} \right)=\mathfrak{C}_{r} \left( {N(s)} \right)\left| {D(s)+N(s)H} \right|^{-1}\Rightarrow \tilde{{Z}}_{r} (s)=\mathfrak{C}_{r} \left( {N(s)} \right)=Z_{r} (s) \\[-3.5pc] \end{align*} □ Theorem 2.4 The $$(r-1)^{\textit{th}}$$ closed loop Plücker power $$\tilde{{Z}}_{r-1} (s)$$ varies linearly with the entries of the output feedback gain $$H$$. \begin{align} \tilde{{Z}}_{r-1} (s)=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right) \end{align} (2.24) Proof. \begin{align*} \mathfrak{C}_{r-1} \left( {\left( {D(s)+N(s)H} \right)^{-1}N(s)} \right)&=\mathfrak{C}^{1}\left( {D^{{\rm T}}(s)+H^{{\rm T}}N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\left| {D(s)+N(s)H} \right|^{-1}\Rightarrow \\ \tilde{{Z}}_{r-1} (s)&=\mathfrak{C}^{1}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)+\mathfrak{C}^{1}\left( {H^{{\rm T}}N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\Rightarrow \\ \tilde{{Z}}_{r-1} (s)&=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right)\mathfrak{C}^{1}\left( {N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\Rightarrow \\ \tilde{{Z}}_{r-1} (s)& = Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)\\[-3.2pc] \end{align*} □ A direct consequence of Theorems 2.3 and 2.4 is the following Theorem 2.5 The exterior factors of the closed loop transfer function matrix are \begin{align} \tilde{{W}}_{\alpha } =W_{\alpha } ,\,\,\,\tilde{{Z}}_{\alpha } =Z_{\alpha } +W_{\alpha } H\,\,\forall 
\alpha \mbox{ with }\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0 \end{align} (2.25) Proof. By Theorem 2.3, $$\tilde{{Z}}_{r} = Z_{r}$$ and so $$\tilde{{W}}_{\alpha } =W_{\alpha }$$ for all $$\alpha$$ with $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$. By Theorem 2.4, $$\tilde{{Z}}_{r-1} (s) = Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)$$ and so $$\tilde{{Z}}_{\alpha } =Z_{\alpha } +W_{\alpha } H$$ for all $$\alpha$$ with $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$. □ We shall now show that the property $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$ is $${\rm E}-$$invariant. Thus the exterior factorization related to $$\alpha$$ is meaningful for the whole equivalence class. Theorem 2.6 The property $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$ is $${\rm E}-$$invariant, i.e. $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0\Rightarrow \left| {\tilde{{Z}}_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$. Proof. \[ \tilde{{Z}}_{r-1} (s)=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)\Rightarrow \tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}=Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)\left\{ {{\bf m},\alpha } \right\} \] Hence $$\left| {\tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|=\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|+$$ a polynomial of lower degree. Thus $$\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ implies $$\left| {\tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$. □ 3. $${\rm E}-$$equivalence, invariants and canonical forms for full rank transfer function matrices 3.1. 
$${\rm E}-$$equivalence Theorem 3.1 Two systems $$F(s),\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)\in {\it{\Sigma}}$$ are SOF equivalent if and only if for some $$\alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)\in P_{r} ({\bf c}_{r-1} (m))$$ their exterior factors $$\left( {Z_{\alpha } ,W_{\alpha } } \right) ,\big( {\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } ,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{W}} _{\alpha } } \big)$$ satisfy the following: \begin{align} W_{\alpha } =\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{W}}_{\alpha }\\ \end{align} (3.1) \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H,\,\,H\in \mathbb{R}^{m\times r} \end{align} (3.2) Proof. Necessity: direct from Theorem 2.5. Sufficiency: As $$F=Z_{\alpha }^{-1} W_{\alpha }$$ and $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} =\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha }^{-1} \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{W}} _{\alpha }$$, $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H$$ implies $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} =\left( {Z_{\alpha } +W_{\alpha } H} \right)^{-1}W_{\alpha }$$, which by Theorem 2.5 is the closed loop transfer function matrix obtained from $$F(s)$$ with the SOF gain $$H$$. □ We can now construct new invariants based on the consistency conditions of the linear system of equations $$W_{\alpha } (s)H=\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)$$. 
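Once the polynomial identity (3.2) is rewritten in terms of constant coefficient stacks, its consistency conditions become ordinary linear algebra. A minimal NumPy sketch with random stand-in stacks (these are not data from an actual transfer function, and all names are ours): a consistent right-hand side recovers the gain, the component orthogonal to the column span is invariant, and an inconsistent perturbation is detected by a nonzero residual.

```python
import numpy as np

rng = np.random.default_rng(4)
r, m, rows = 2, 3, 8                 # rows: number of stacked coefficient rows

Wb = rng.standard_normal((rows, m))  # stand-in for the stacked W_alpha coefficients
Z = rng.standard_normal((rows, r))   # stand-in for the stacked Z_alpha coefficients
H = rng.standard_normal((m, r))      # a feedback gain
Zh = Z + Wb @ H                      # stacked Z_alpha of an SOF-equivalent system

# consistent system: least squares recovers H exactly (Wb has full column rank)
H_rec = np.linalg.lstsq(Wb, Zh - Z, rcond=None)[0]
assert np.allclose(H_rec, H)

# the component of Zh orthogonal to colspan(Wb) is unchanged by the feedback
P = Wb @ np.linalg.pinv(Wb)          # orthogonal projector onto colspan(Wb)
assert np.allclose((np.eye(rows) - P) @ Zh, (np.eye(rows) - P) @ Z)

# an inconsistent right-hand side shows up as a nonzero residual
bad = (Zh - Z).copy()
bad[:, 0] += (np.eye(rows) - P) @ np.ones(rows)
residual = np.linalg.lstsq(Wb, bad, rcond=None)[1]
assert residual.sum() > 1e-9
```

The invariant orthogonal component is exactly what the consistency-based invariants constructed below the fold exploit; here it is exhibited on synthetic data only.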
For this purpose, we transform the polynomial equation to another with constant coefficients \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha } (s)&=\sum\limits_{k=0}^{n-r+1-d} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,k} s^{k}} ,\,\,Z_{\alpha } (s)=\sum\limits_{k=0}^{n-r+1-d} {Z_{\alpha ,k} s^{k}} ,\,\,W_{\alpha } (s)=\sum\limits_{k=0}^{n-r-d} {W_{\alpha ,k} s^{k}}\\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha }& =\left[ {\begin{array}{l} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,(n-r-d+1)} \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,0} \\ \end{array}} \right],\,\,{\boldsymbol{Z}}_{\alpha } =\left[ {\begin{array}{l} Z_{\alpha ,(n-r-d+1)} \\ Z_{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ Z_{\alpha ,0} \\ \end{array}} \right],\,\,{\boldsymbol{W}}_{\alpha } =\left[ {\begin{array}{l} {\rm O}_{r\times m} \\ W_{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ W_{\alpha ,0} \\ \end{array}} \right]\Rightarrow \,\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\alpha } \mathbf{-Z}_{\alpha } \mathbf{=W}_{\alpha } H \notag \end{align} (3.3) where $$d$$ is the degree of the greatest common divisor of the entries of $$Z_{\alpha } (s)\mbox{ }\big( {\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)} \big)$$. If $$h_{1} ,h_{2} ,\ldots ,h_{r}$$, $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{z}}_{1},\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{2} ,\ldots ,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{r}$$, $${\boldsymbol{z}}_{1} ,{\boldsymbol{z}}_{2} ,\ldots ,{\boldsymbol{z}}_{r}$$, $$\mathbf{w}_{1} ,\mathbf{w}_{2} ,\ldots ,\mathbf{w}_{m}$$ are the 
columns of the gain $$H$$ and of the matrices $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{Z}}_{\alpha }$$, $${\boldsymbol{Z}}_{\alpha }$$ and $${\boldsymbol{W}}_{\alpha }$$, respectively, then (3.3) is decomposed into the $$r$$ equations \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} ={\boldsymbol{W}}_{\alpha } h_{k} \end{align} (3.3a) For the solvability conditions of equation (3.3a), we decompose $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{z}}_{k} ,\,\,{\boldsymbol{z}}_{k}$$ into two parts \begin{align} \begin{array}{l} {\boldsymbol{z}}_{k} ={\boldsymbol{z}}_{k}^{\bot } +{\boldsymbol{z}}_{k}^{\parallel } ,\mbox{ with }{\boldsymbol{z}}_{k}^{\parallel } \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\mbox{ and }{\boldsymbol{z}}_{k}^{\bot } \bot \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right) \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k} =\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\bot } +\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\parallel } ,\,\,\mbox{with }\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\parallel } \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\mbox{ and }\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\bot } \bot \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right) \\ \end{array} \end{align} (3.4) The solvability conditions of (3.3a) are $\forall k\in \mathbf{r},\,\,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\Leftrightarrow \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{k}^{\bot } ={\boldsymbol{z}}_{k}^{\bot }$. Evidently \begin{align} 
\boldsymbol{z}_{k}^{\parallel } =\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } \boldsymbol{z}_{k} ,\,\,\boldsymbol{z}_{k}^{\bot } =\,\left( {I-\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } } \right)\boldsymbol{z}_{k} \end{align} (3.5) For a proof, remark that $$\boldsymbol{W}_{\alpha }^{{\rm T}} \boldsymbol{z}_{k}^{\bot } =\boldsymbol{W}_{\alpha }^{{\rm T}} \,\left( {I-\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } } \right)\boldsymbol{z}_{k} =\left( {\boldsymbol{W}_{\alpha }^{{\rm T}} -\boldsymbol{W}_{\alpha }^{{\rm T}} } \right)\boldsymbol{z}_{k} =0$$. Remark that the Moore–Penrose (left) inverse $$\boldsymbol{W}_{\alpha }^{\dagger }$$ exists thanks to hypothesis (1.1): as the columns of $$F(s)$$ are $$\mathbb{R}-$$linearly independent, the same is true for the columns of $$W_{\alpha } (s)$$ and the columns of $$\boldsymbol{W}_{\alpha }$$. If a solution of (3.3) exists, it is $$H= \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{H}}_{\alpha } -{\boldsymbol{H}}_{\alpha }$$ with $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger} \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha } ,\,\,{\boldsymbol{H}}_{\alpha }={\boldsymbol{W}}_{\alpha }^{\dagger } {\boldsymbol{Z}}_{\alpha }\,$$. 
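The split (3.4)–(3.5) and the solvability test for (3.3a) are easy to check numerically. A minimal sketch, assuming numpy; `W` stands in for the coefficient matrix $$\boldsymbol{W}_{\alpha }$$ and `z` for a column $$\boldsymbol{z}_{k}$$ (the random data is ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))      # full column rank, plays the role of bold W_alpha
z = rng.standard_normal(8)           # plays the role of a column z_k of bold Z_alpha

P = W @ np.linalg.pinv(W)            # projector onto colspan(W), as in (3.5)
z_par, z_perp = P @ z, (np.eye(8) - P) @ z

assert np.allclose(z_par + z_perp, z)       # the decomposition (3.4)
assert np.allclose(W.T @ z_perp, 0)         # W^T z_perp = 0, as in the proof of (3.5)

# (3.3a) is solvable for h exactly when the right-hand side has no
# orthogonal component; here we manufacture a solvable right-hand side.
rhs = W @ np.array([1.0, -2.0, 0.5])
h = np.linalg.pinv(W) @ rhs
assert np.allclose(W @ h, rhs)
```

With full column rank, `np.linalg.pinv(W)` coincides with the left inverse $$(\boldsymbol{W}^{\rm T}\boldsymbol{W})^{-1}\boldsymbol{W}^{\rm T}$$ used throughout this section.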
□ Theorem 3.2 Two systems $$F(s),\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{F}} (s)\in {\it{\Sigma}}$$ are SOF equivalent if and only if the coefficients of their exterior factors $$\left({\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{\boldsymbol{W}}}_{\alpha },\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha } } \right)$$, $$\left({{\boldsymbol{W}}_{\alpha } ,{\boldsymbol{Z}}_{\alpha } } \right)$$ satisfy the following: \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\boldsymbol{W}}}_{\alpha } ={\boldsymbol{W}}_{\alpha }\\ \end{align} (3.6) \begin{align} \left( {I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger } } \right)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha } =\left( {I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger } } \right){\boldsymbol{Z}}_{\alpha } \end{align} (3.7) In that case, there is a constant solution $$H\in \mathbb{R}^{m\times r}$$ of the equation $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H$$, namely \begin{align} H=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{H}}_{\alpha } -{\boldsymbol{H}}_{\alpha } ,\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger } \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha } ,\,\,{\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger } {\boldsymbol{Z}}_{\alpha } \end{align} (3.8) Proof. 
The columns of the matrices $$\left( {I-{\boldsymbol{W}}_{\alpha }{\boldsymbol{W}}_{\alpha }^{\dagger } }\right)\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha }$$, $$\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha }$$ are the components of the columns of $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha }$$, $${\boldsymbol{Z}}_{\alpha }$$ that are orthogonal to the column space of $${\boldsymbol{W}}_{\alpha }$$. Equation (3.3) has a solution if and only if they are equal. □ Theorem 3.3 For any system $$F(s)\in {\it{\Sigma}}$$, the pair $$\left({{\boldsymbol{W}}_{\alpha } ,\,\,\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha } } \right)$$ is a complete system of independent $${\rm E}-$$invariants. Proof. For the invariance, remark that the gain $$H$$ cannot alter $$\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha }$$. For the completeness, remark that in the case of equality of the above pair of invariants, (3.3) has a solution. □ Theorem 3.4 For any system $$F(s)\in {\it{\Sigma}}$$ and each $$\alpha$$ with $$\left| {Z_{\alpha } (s)} \right|\ne 0$$, the rational function $$\underline{F_{\alpha } }(s)=\left( {Z_{\alpha } (s)-W_{\alpha } (s){\boldsymbol{H}}_{\alpha } } \right)^{-1}W_{\alpha } (s)=\left( {Z_{\alpha }^{\bot } (s)} \right)^{-1}W_{\alpha } (s)$$ is uniquely determined. Proof. 
Trivial, following the algorithm below. Algorithm 3.1
1. Calculate $$Z_{r-1} (s),\,\,Z_{r} (s)$$.
2. Find all the $$\alpha$$ with $$\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ and for each one of them:
3. Calculate the exterior factors $$Z_{\alpha } ,\,W_{\alpha }$$ of the transfer function matrix $$F(s)$$.
4. Calculate the coefficient matrices $${\boldsymbol{Z}}_{\alpha },{\boldsymbol{W}}_{\alpha }$$.
5. Decompose $${\boldsymbol{Z}}_{\alpha }$$ into two parts $${\boldsymbol{Z}}_{\alpha } ={\boldsymbol{Z}}_{\alpha }^{\bot} +{\boldsymbol{Z}}_{\alpha }^{\parallel }$$ with the columns of $${\boldsymbol{Z}}_{\alpha }^{\parallel }$$ belonging to the column space of $${\boldsymbol{W}}_{\alpha }$$ and the columns of $${\boldsymbol{Z}}_{\alpha }^{\bot }$$ orthogonal to it: $${\boldsymbol{Z}}_{\alpha }^{\parallel }={\boldsymbol{W}}_{\alpha } \left( {{\boldsymbol{W}}_{\alpha}^{{\rm T}} {\boldsymbol{W}}_{\alpha } }\right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}}{\boldsymbol{Z}}_{\alpha } ,\,{\boldsymbol{Z}}_{\alpha }^{\bot} =\,\left( {I-{\boldsymbol{W}}_{\alpha } \left({{\boldsymbol{W}}_{\alpha }^{{\rm T}} {\boldsymbol{W}}_{\alpha} } \right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}} }\right){\boldsymbol{Z}}_{\alpha }$$.
6. Calculate the output feedback gains $$H_{\alpha } =\left({{\boldsymbol{W}}_{\alpha }^{{\rm T}} {\boldsymbol{W}}_{\alpha} } \right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}}{\boldsymbol{Z}}_{\alpha }$$.
7. Calculate $$\underline{F_{\alpha } }(s)=\left( {Z_{\alpha } (s)-W_{\alpha } (s)H_{\alpha } } \right)^{-1}W_{\alpha } (s)$$. □
Remark that $$\underline{F_{\alpha } }(s)$$ is unique up to the choice of $$\alpha$$. It becomes fully unique if we choose the $$\alpha_{0}$$ with $$\vartheta_{\textit{lex}} \left( {\alpha_{0} } \right)\leqslant \vartheta_{\textit{lex}} \left( \alpha \right)$$ for every admissible $$\alpha$$. The resulting $$\underline{F_{\alpha_{0} } }(s)$$ is a uniquely determined representative of the equivalence class. 
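Steps 5–6 of the algorithm reduce to one least-squares projection. A hedged numpy sketch (the function name `sof_invariants` is ours; the sample data is that of the single-output Example 4.2 of Section 4.2, where $$\boldsymbol{Z}_{\alpha }$$ is the column $$p_{1}$$ and $$\boldsymbol{W}_{\alpha }=[p_{2}\ p_{3}]$$):

```python
import numpy as np

def sof_invariants(Z, W):
    """Steps 5-6 of Algorithm 3.1 on the coefficient matrices:
    gain H_alpha = (W^T W)^{-1} W^T Z and the invariant part
    Z_perp = (I - W (W^T W)^{-1} W^T) Z."""
    H = np.linalg.solve(W.T @ W, W.T @ Z)   # output feedback gain H_alpha
    Z_perp = Z - W @ H                      # component orthogonal to colspan(W)
    return H, Z_perp

# Data of Example 4.2: bold Z_alpha = p1, bold W_alpha = [p2 p3]
p1 = np.array([[1.0], [6.0], [11.0], [6.0]])
W = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 1.0],
              [3.0, 1.0]])
H, p1_perp = sof_invariants(p1, W)
assert np.allclose(H, [[-2.5], [13.5]])            # H_alpha = [-5/2, 27/2]^T
assert np.allclose(p1_perp.ravel(), [1, 6, 0, 0])  # s^3 + 6s^2 = s^2(s+6)
```

The recovered $$p_{1}^{\bot }$$ is exactly the canonical denominator $$s^{2}(s+6)$$ obtained in Example 4.2.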
It is a canonical form even though it does not conform to our intuitive notion of canonicity. The pair $$\left( {{\boldsymbol{W}}_{\alpha },\,{\boldsymbol{Z}}_{\alpha }^{\bot } } \right)$$ is a complete system of independent invariants. It is however oriented to the construction of canonical forms. For proper control problems, complete invariants related to the Grassmann coordinates of the spaces $${\boldsymbol{z}}_{k} \oplus \mbox{colspan}\left({{\boldsymbol{W}}_{\alpha } } \right)$$, $${\boldsymbol{z}}_{k}\wedge \mathbf{w}_{1} \wedge \mathbf{w}_{2} \wedge \cdots \wedge\mathbf{w}_{m}$$ seem to be more important. Roughly speaking, in the first case we have the base and the height of a prism and in the second the base and its volume. Starting from the Grassmann coordinates, we can construct multivariate polynomials that generalize the Bezoutian and represent the variety of the closed loop dynamics. The exterior powers of a transfer function matrix and their invariant properties do not appear here for the first time. They are well known from the calculation of the Smith and Smith–MacMillan forms. Let $$\delta_{k} (s)$$ be the greatest common divisor of the entries of $$Z_{k} (s)$$. Then $$\delta_{k} (s)$$ divides $$\delta_{k+1} (s)$$ and $$\delta_{r} (s)/\delta_{r-1} (s)$$ is known to be the $$r^{\it th}$$ invariant factor of the Smith form. It is well known that $$\delta_{r} (s)/\delta_{r-1} (s)$$ is state feedback invariant. The SOF invariance of the whole $$Z_{r} (s)$$ must be seen in this context. A question arises naturally: is it possible to solve the problem using a coprime instead of the exterior factorization? The problem of SOF (full SOF) equivalence is solved in Yannakoudakis (2013a) using generalized Bezoutians. 
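The gcd chain $$\delta_{k}(s)$$ and the invariant factor $$\delta_{r}(s)/\delta_{r-1}(s)$$ mentioned above can be computed directly. A small sympy sketch on a hypothetical $$2\times 2$$ polynomial matrix (the matrix is ours, chosen only for illustration):

```python
from functools import reduce
import sympy as sp

s = sp.symbols('s')
# Hypothetical 2x2 polynomial matrix, standing in for an exterior power Z_k(s)
Z = sp.Matrix([[s + 1, s**2 - 1],
               [0, (s + 1)*(s + 2)]])

delta1 = reduce(sp.gcd, list(Z))   # gcd of the entries (the 1x1 minors)
delta2 = sp.factor(Z.det())        # the single 2x2 minor
assert delta1 == s + 1
assert sp.rem(delta2, delta1, s) == 0            # delta_1 divides delta_2

inv_factor = sp.factor(sp.cancel(delta2 / delta1))
assert sp.expand(inv_factor - (s + 1)*(s + 2)) == 0
```

Here `inv_factor` is the quotient $$\delta_{2}/\delta_{1}=(s+1)(s+2)$$, the second invariant factor of the Smith form of this particular matrix.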
Given two systems $$F(s),\,\,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)$$ we consider their left and right coprime factorizations \begin{align} \begin{array}{l} \left( {D_{L} (s),\,\,N_{L} (s)} \right),\,\,\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{L} (s),\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{L} (s)} \right),\,\,\left( {\,N_{R} (s),\,\,D_{R} (s)} \right),\,\,\left( {\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{R} (s),\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}}_{R} (s)} \right) \\ F(s)=D_{L}^{-1} (s)N_{L} (s)=N_{R} (s)D_{R}^{-1} (s),\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}}_{L}^{-1} (s)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{L} (s)=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{R} (s)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{R}^{-1} (s)\, \\ \end{array} \end{align} (3.9) and their generalized Bezoutians \begin{align} B(\lambda ,\,\mu )=\frac{N_{L} (\lambda )D_{R} (\mu )-D_{L} (\lambda )N_{R} (\mu )}{\lambda -\mu },\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{B}} (\lambda ,\,\mu )=\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{L} (\lambda )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{R} (\mu )-\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{L} (\lambda )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{R} (\mu )}{\lambda -\mu } \end{align} (3.10) Then for SOF (full SOF) equivalence there must be unimodular matrices $$U_{H} (\mu ),\,\,V_{H} (\lambda )$$ with \begin{align} B(\lambda ,\,\mu )=\,U_{H} (\mu )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{B}} (\lambda ,\,\mu )V_{H} (\lambda ) \end{align} 
(3.11) The gain achieving equivalence is calculated by solving linear equations, but the approach cannot serve for the calculation of complete invariants, because of the bilinearity of formula (3.11). The SOF group acts on the set of the Bezoutians by the transformation \begin{align} B(\lambda ,\,\mu )\to \,U_{H} (\mu )B(\lambda ,\,\mu )V_{H} (\lambda ) \end{align} (3.12) It acts on the set of exterior factors by the transformation \begin{align} \left( {Z(s),\,W(s)} \right)\mapsto \left( {Z(s)+W(s)H,\,W(s)} \right) \end{align} (3.13) and on the set of the left coprime factors by the transformation \begin{align} \left( {D_{L} (s),\,N_{L} (s)} \right)\mapsto \left( {V_{H} (s)\left( {D_{L} (s)+N_{L} (s)H} \right),\,V_{H} (s)N_{L} (s)} \right) \end{align} (3.14) Evidently, the action is linear only on the set of exterior factors. Remark that transformation (3.14) cannot serve for the test of SOF equivalence either. 4. Applications 4.1. Scalar systems $${\it{\Sigma}}$$ is the set $$\mathbb{R}_{n} \left\{ s \right\}$$ of strictly proper rational functions with real coefficients of McMillan degree $$n$$ and monic denominator polynomial. $$\mathcal{H}$$ is the additive group of the real numbers $$\mathbb{R}$$. For the scalar system: \begin{align} {\it{\Sigma}} \ni F(s)=\frac{z(s)}{a(s)}=\frac{z_{n-1} s^{n-1}+z_{n-2} s^{n-2}+\cdots +z_{1} s+z_{0} }{s^{n}+a_{n-1} s^{n-1}+\cdots +a_{1} s+a_{0} } \end{align} (4.1) The Plücker coordinates and the Plücker matrix are: $\mathfrak{P}=[\mathfrak{p}{\kern 1pt}_{1} ,\,\mathfrak{p}_{2} ]=[Z_{0} (s)\,\,Z_{1} (s)]=[a(s)\,\,z(s)],\,\,\,\,{\mathcal{P}}=[{{\it{p}}}{\kern 1pt}_{1} ,\,{{\it{p}}}_{2} ]\,=\left[ {{\begin{array}{*{20}c} 1 & {a_{n-1} } & {a_{n-2} } & \cdots & {a_{0} } \\ 0 & {z_{n-1} } & {z_{n-2} } & \cdots & {z_{0} } \\ \end{array} }} \right]^{{\rm T}}$ There is a unique factorization of the transfer function matrix with $$\mathcal{H}$$–invariant numerator, achieved with $${\rm {\bf \alpha }}=\{1\}$$. 
\begin{align} Z_{{\rm {\bf \alpha }}} =z_{0} (s)=a(s),\,\,W_{{\rm {\bf \alpha }}} =z_{1} (s)=z(s) \end{align} (4.2) According to Theorem 3.1, $F(s)=\frac{z(s)}{a(s)}{\rm E}\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} (s)}{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)}=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)\Leftrightarrow \left\{ {{\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} (s)=z(s)} \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)-a(s)=hz(s)\,\,h\in \mathbb{R}} \\ \end{array} }} \right.$ The following are known to be complete $$\mathcal{H}$$-invariants. \begin{align} \text{Yannakoudakis (1980)}: \mathfrak{C}_{2} \left( {\mathcal{P}} \right)\\ \end{align} (4.3) \begin{align} \text{Byrnes & Crouch (1985)}: \left( {z(s),\mathfrak{b}(s)} \right) \end{align} (4.4)$$\mathfrak{b}(s)=a(s)\frac{d}{ds}z(s)-z(s)\frac{d}{ds}a(s)$$ is the so-called breakaway polynomial \begin{align} \text{Helmke & Fuhrmann (1989)}: \mathfrak{B}(a(s),z(s)) \end{align} (4.5)$$\mathfrak{B}(a(s),z(s))$$ is the Bezoutian of the polynomials $$a(s),z(s)$$. It is defined as the determinant of the alternant matrix of the polynomials for the variables $$s_{1} ,\,s_{2}$$ divided by the determinant of the Vandermonde matrix of the variables. \begin{align} &\mathfrak{B}(a(s),z(s))=\left| {{\begin{array}{*{20}c} {a(s_{1} )} & {a(s_{2} )} \\ {z(s_{1} )} & {z(s_{2} )} \\ \end{array} }} \right|\left( {\left| {{\begin{array}{*{20}c} 1 & 1 \\ {s_{1} } & {s_{2} } \\ \end{array} }} \right|} \right)^{-1}=\frac{a(s_{1} )z(s_{2} )-z(s_{1} )a(s_{2} )}{s_{2} -s_{1} } \\ \end{align} (4.6) \begin{align} &\mbox{The gain achieving equivalence is: } \,\,h=\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)-a(s)}{z(s)} \end{align} (4.7) None of the above invariants is independent. 
There are several ways to extract the independent parameters. We present the one proposed by Algorithm 3.1. $${\boldsymbol{Z}}_{\alpha } ={{\it{p}}}_{1},\,\,{\boldsymbol{W}}_{\alpha } ={{\it{p}}}_{2}$$. The vector $${{\it{p}}}_{1}$$ is decomposed into two components, one parallel and one orthogonal to $${{\it{p}}}{\kern 1pt}_{2}$$, i.e. $${\rm{\bf p}}_{1} ={{\it{p}}}_{1}^{\bot } +{{\it{p}}}_{2} H_{a},H_{a} =\left\langle {{{\it{p}}}_{1} ,{{\it{p}}}_{2} }\right\rangle /\left\| {{{\it{p}}}_{2} } \right\|^{2}$$. Then the pair $$\left( {{{\it{p}}}_{1}^{\bot } ,\,\,{{\it{p}}}_{2} } \right)$$ is a complete system of independent $${\rm E}-$$invariants. The canonical form is $$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=\frac{W_{\alpha } (s)}{Z_{\alpha } (s)-W_{\alpha } (s)H_{\alpha } }=\frac{\mathfrak{p}_{2} (s)}{\mathfrak{p}_{1}^{\bot } (s)}$$ Example 4.1 \begin{align*}\label{exam4.1} f(s)&=\frac{s^{2}+1}{s^{4}+3s^{3}+2s^{2}+s+5},\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{f}} (s)=\frac{f(s)}{1+3f(s)}=\frac{s^{2}+1}{s^{4}+3s^{3}+5s^{2}+s+8} \\ {\mathcal{P}}&=\left[ {{\begin{array}{*{20}c} {1\,\,\,3\,\,\,2\,\,\,1\,\,\,5} \\ {0\,\,\,0\,\,\,1\,\,\,0\,\,\,1} \\ \end{array} }} \right]^{{\rm T}}\!\!,\,{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\mathcal{P}}} }}=\left[ {{\begin{array}{*{20}c} {1\,\,\,3\,\,\,5\,\,\,1\,\,\,8} \\ {0\,\,\,0\,\,\,1\,\,\,0\,\,\,1} \\ \end{array} }} \right]^{{\rm T}}\,\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)=\left[ {0\,\,1\,\,0\,\,1\,\,3\,\,0\,\,3\,\,-1\,\,-3\,\,1} \right]^{{\rm T}}=\mathbf{\mathfrak{C}}_{2} \left( {{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\mathcal{P}}} }}} \right) \\ \mathfrak{B}(a,z)&=\mathbf{\mathfrak{C}}_{2} \left( {\left[ {{\begin{array}{*{20}c} {s_{1}^{4} \,\,s_{1}^{3} \ldots 1} \\ {s_{2}^{4} \,\,\,s_{2}^{3} \cdots 1} \\ \end{array} }} \right]} \right)\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} 
\right)(s_{1} -s_{2} )^{-1} \\ \mathfrak{B}(a,z)&= \left[ s_{1}^{3} s_{2}^{3} ,\,s_{1}^{2} s_{2}^{2} (s_{1} +s_{2} ),\,s_{1} s_{2} (s_{1}^{2} +s_{1} s_{2} +s_{2}^{2} ),\,(s_{1}^{3} +s_{1}^{2} s_{2} +s_{1} s_{2}^{2} +s_{2}^{3} ),s_{1}^{2} s_{2}^{2} ,s_{1} s_{2} (s_{1} +s_{2} ),s_{1}^{2}\right.\\ &\left.\quad +s_{1} s_{2} +s_{2}^{2} ,\,s_{1} s_{2} ,\,s_{1} +s_{2} ,\,1 \right]\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)\,\, \end{align*} $$a^{\bot }=[1\,\,3\,\,-3/2\,\,\,1\,\,3/2\,\,]$$ and the canonical form is $$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=\frac{s^{2}+1}{s^{4}+3s^{3}-3/2s^{2}+s+3/2\,}=\frac{z(s)}{a^{\bot }(s)}$$ The complete set of independent invariants of this paper is the pair $$\left( {\mathfrak{p}_{2} (s),\,\,\mathfrak{p}_{1}^{\bot } (s)} \right)=\left( {z(s),\,\,a^{\bot }(s)} \right)$$. The only visible relation with the other known complete invariants $$\mathfrak{C}_{2} \left( {\mathcal{P}} \right)$$, $$\left( {z(s),\mathfrak{b}(s)} \right)$$, $$\mathfrak{B}(s_{1} ,s_{2} )$$ is that we can calculate each one from the others. The most powerful tool is the Bezoutian $$\mathfrak{B}(s_{1} ,s_{2} )$$. It is the resultant of the closed loop polynomials $$a(s_{1} )+hz(s_{1} ),\,\,a(s_{2} )+hz(s_{2} )$$. We can extend it to the general case, eliminating the gain $$H$$, but we do not want to overload this article. However, the multivariate closed loop characteristic polynomial describing the constrained dynamics in the last section of the article is such a generalization of the Bezoutian. It is useful to remark that $$\mathfrak{B}(s_{1} ,s_{2} )=0$$ describes the constrained closed loop dynamics and is equivalent to the root locus of the system: if we suppose that $$s_{1}$$ is a closed loop pole, the roots in $$s_{2}$$ of $$\mathfrak{B}(s_{1} ,s_{2} )=0$$ are the other closed loop poles. 
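A quick numerical cross-check of the scalar construction, assuming numpy (the helper `perp` is ours and implements the projection $$p-p_{2}\left\langle p,p_{2}\right\rangle /\left\| p_{2}\right\|^{2}$$): the invariant $$a^{\bot }$$ computed from $$f$$ and from the closed loop system $$\overset{\frown}{f}=f/(1+3f)$$ of Example 4.1 must coincide.

```python
import numpy as np

# Example 4.1: f(s) = (s^2+1)/(s^4+3s^3+2s^2+s+5), feedback gain h = 3
p1 = np.array([1.0, 3.0, 2.0, 1.0, 5.0])   # coefficients of a(s)
p2 = np.array([0.0, 0.0, 1.0, 0.0, 1.0])   # coefficients of z(s), zero-padded
p1_hat = p1 + 3.0 * p2                     # closed-loop denominator a(s) + 3 z(s)

def perp(p, w):
    """Component of p orthogonal to span(w)."""
    return p - w * (p @ w) / (w @ w)

# a-perp is the same for the open- and closed-loop system (SOF invariance)
assert np.allclose(perp(p1, p2), perp(p1_hat, p2))
assert np.isclose(perp(p1, p2) @ p2, 0.0)
```

The shared value of `perp(p1, p2)` is the coefficient vector of $$a^{\bot }(s)$$, the canonical denominator of the equivalence class.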
The breakaway polynomial is $$\mathfrak{B}(s,s)$$ and the intersection of the root locus with the imaginary axis is given by the roots of $$\mathfrak{B}(j\omega ,-j\omega )=0$$. Remark also that $$\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)={{\it{p}}}_{1} \wedge {{\it{p}}}_{2}$$ is related to the Bezoutian by the equation below. $\mathfrak{B}(s_{1} ,s_{2} )={{\it{p}}}_{1} \wedge {{\it{p}}}_{2} \left( {\left( {s_{1}^{n} \,\,s_{1}^{n-1} \ldots 1} \right)^{{\rm T}}\wedge \left( {s_{2}^{n} \,\,\,s_{2}^{n-1} \cdots 1} \right)^{{\rm T}}} \right)/(s_{1} -s_{2} )$ 4.2. Single output systems For this section $${\it{\Sigma}}$$ is the set $$\mathbb{R}_{n}^{m} \left\{ s\right\}$$ of strictly proper rational vectors with real coefficients of McMillan degree $$n$$ and monic denominator, verifying also (1.1). $$\mathcal{H}$$ is the additive group of real vectors $$\mathbb{R}^{m}$$. Let $${\it{\Sigma}} \ni F(s)=a^{-1}(s)\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$. Then $$Z_{0} (s)=a(s),\,\,Z_{1} (s)=\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$ The Plücker coordinates and the Plücker matrix are: 
\begin{align} \mathfrak{P}&=[\mathfrak{p}{\kern 1pt}_{1} \,\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]=[z_{0} (s)\,\,z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)]=[a(s)\,\,z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)] \\ \end{align} (4.8) \begin{align} {\mathcal{P}}&=[{{\it{p}}}{\kern 1pt}_{1} \,\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]\,=\left[ {{\begin{array}{*{20}c} {\begin{array}{l} \,\,1 \\ a_{n-1} \\ a_{n-2} \\ \,\,\,\vdots \\ a_{0} \\ \end{array}} & {\begin{array}{l} \,\,0 \\ z_{1,n-1} \\ z_{1,n-2} \\ \,\,\,\,\vdots \\ z_{1,0} \\ \end{array}} & \cdots & {\begin{array}{l} \,\,0 \\ z_{m,n-1} \\ z_{m,n-2} \\ \,\,\,\,\vdots \\ z_{m,0} \\ \end{array}} \\ \end{array} }} \right] \end{align} (4.9) There is a unique factorization of the transfer function matrix with $$\mathcal{H}-$$invariant numerator achieved with $${\rm {\bf \alpha}}=\{1\}:\,\,Z_{{\rm {\bf \alpha }}} =Z_{0} (s)=a(s),\,\,W_{{\rm{\bf \alpha }}} =Z_{1} (s)=\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$ According to Algorithm 3.1, the systems $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{F}} (s),\,F(s)$$ are SOF-equivalent if and only if $${\boldsymbol{W}}_{a} =[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{{\it{p}}}}_{m+1} ]\,=[\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}}}}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}}}}_{m+1}]=\mathbf{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{W}} }_{a}$$ and ${{\it{p}}}_{1}^{\bot } =\left( {I_{n} -[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]\,[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }} \right){{\it{p}}}_{1} =\left( {I_{n} -[\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{m+1} ]\,[\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{m+1} ]^{\dagger }} \right){{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{1} ={{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{1}^{\bot }$ $$[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }$$ exists thanks to hypothesis (1.1). $$H_{a} =[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }{{\it{p}}}_{1}$$ and the canonical form is \begin{align} \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}} {{F}} (s)=\left( {\mathfrak{p}_{1} -[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]H_{\alpha } } \right)^{-1}[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]=\left( {\mathfrak{p}_{1}^{\bot } } \right)^{-1}[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ] \end{align} (4.10) Example 4.2 \begin{align*} &F(s)=\frac{1}{s^{3}+6s^{2}+11s+6}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right] \quad \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)=\frac{1}{s^{3}+6s^{2}+9s}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right]\\ &a(s)=s^{3}+6s^{2}+11s+6,\,\,z_{1} (s)=s+3,\,\,z_{2} (s)=s+1 \\ & \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)=s^{3}+6s^{2}+9s,\,\,z_{1} (s)=s+3,\,\,z_{2} (s)=s+1 \end{align*} \begin{align*} {\mathcal{P}}&=\left[\!\! {\begin{array}{l} 1\,\,\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 6\,\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 11\,\,\,\,\,1\,\,\,\,\,\,\,1 \\ 6\,\,\,\,\,\,\,3\,\,\,\,\,\,\,1 \\ \end{array}} \!\!\right],{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{P}} }}=\left[\!\! 
{\begin{array}{l} 1\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 6\,\,\,\,\,0\,\,\,\,\,\,0 \\ 9\,\,\,\,\,1\,\,\,\,\,\,\,1 \\ 0\,\,\,\,\,3\,\,\,\,\,\,\,1 \\ \end{array}} \!\!\right],\left( {I_{4} -\left[{{{\it{p}}}_{2} ,{{\it{p}}}_{3} } \right]\left[ {{{\it{p}}}_{2} ,{{\it{p}}}_{3} } \right]^{\dagger }} \right)=\left[\!\! {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} }} \!\!\right]\Rightarrow {{\it{p}}}_{1}^{\bot } ={\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{{\it{p}}}} }}_{1}^{\bot } \\ H_{a} &=\left[ \!\!{{\begin{array}{*{20}c} 0 & 0 & {-1/2} & {\,\,\,1/2} \\ 0 & 0 & {\,\,\,3/2} & {-1/2} \\ \end{array} }}\!\! \right]\left[ \!\!{{\begin{array}{*{20}c} 1 \\ 6 \\ {11} \\ 6 \\ \end{array} }} \!\!\right]=\left[ \!\!{{\begin{array}{*{20}c} {-5/2} \\ {27/2} \\ \end{array} }} \!\!\right]\!, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}}_{a} =\left[ \!\!{{\begin{array}{*{20}c} 0 & 0 & {-1/2} & {\,\,\,1/2} \\ 0 & 0 & {\,\,\,3/2} & {-1/2} \\ \end{array} }} \!\!\right]\left[ \!\!{{\begin{array}{*{20}c} 1 \\ 6 \\ 9 \\ 0 \\ \end{array} }} \!\!\right]=\left[ \!\!{{\begin{array}{*{20}c} {-9/2} \\ {27/2} \\ \end{array} }} \!\!\right] \end{align*} The gain achieving equivalence is $$H=\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{H}} _{\alpha } -H_{\alpha } =[-2\,\,\,0]^{{\rm T}}$$, and indeed $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)=(1+F(s)H)^{-1}F(s)$$. The canonical form of both $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s),\,F(s)\,$$is $\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}} {{F}} (s)=\frac{1}{s^{2}(s+6)}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right]$ Another complete invariant, more significant for control problems, is obtained by eliminating the gain $$H=\left[ {h_{1} ,\,h_{2} ,\,\ldots ,h_{m} \,} \right]^{{\rm T}}$$ among the closed loop polynomials $$[a(s_{i} )\,\,z_{1} (s_{i} 
)\,\,z_{2} (s_{i} )\,\,\cdots \,\,z_{m} (s_{i} )]H,\,\,i=1,2,\ldots ,m+1$$ $\mathfrak{B}(a,\,z_{1} ,\,\,z_{2} ,\,\,\ldots ,\,z_{m} )=\mathbf{\mathfrak{C}}_{2} \left( {\left[ {{\begin{array}{*{20}c} {s_{1}^{n} \,\,\,\,\,\,s_{1}^{n-1} \,\,\ldots \,\,1} \\ {\begin{array}{l} \,\,\,s_{2}^{n} \,\,\,\,\,s_{2}^{n-1} \,\,\,\cdots 1 \\ \,\,\,\,\,\vdots \,\,\,\,\,\,\,\,\,\,\vdots \,\,\,\,\,\,\,\,\,\,\vdots \,\,\,\vdots \\ \,\,s_{m+1}^{n} \,\,\,s_{m+1}^{n-1} \cdots 1 \\ \end{array}} \\ \end{array} }} \right]} \right)\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)\left| {{\begin{array}{*{20}c} {s_{1}^{n} \,\,\,\,\,\,s_{1}^{n-1} \,\,\ldots \,\,1} \\ {\begin{array}{l} \,\,\,s_{2}^{n} \,\,\,\,\,s_{2}^{n-1} \,\,\,\cdots 1 \\ \,\,\,\,\,\vdots \,\,\,\,\,\,\,\,\,\,\vdots \,\,\,\,\,\,\,\,\,\,\vdots \,\,\,\vdots \\ \,\,s_{m+1}^{n} \,\,\,s_{m+1}^{n-1} \cdots 1 \\ \end{array}} \\ \end{array} }} \right|^{-1}$ It describes the constrained closed loop dynamics, i.e. if we fix $$m$$ roots $$s_{1} ,\,\,s_{2} ,\,\,\cdots \,,\,s_{m}$$ its roots in $$s_{m+1}$$ give the other closed loop poles. 4.3. 
Square systems There is a unique factorization of the transfer function matrix with $${\rm E}-$$invariant numerator achieved with $${\rm {\bf \alpha }}=\{1,2,\ldots ,r\}$$ $Z_{{\rm {\bf \alpha }}} =\mathbf{\mathfrak{C}}^{1}\left( {z_{r-1} (s)} \right)=\left[ {{\begin{array}{*{20}c} {z_{11} (s)} & \cdots & {z_{1r} (s)} \\ \vdots & \ddots & \vdots \\ {z_{r1} (s)} & \cdots & {z_{rr} (s)} \\ \end{array} }} \right]\in \mathbb{R}^{r\times r}[s],\,\,W_{{\rm {\bf \alpha }}} =z_{r} (s)=w(s)\in \mathbb{R}[s]$ For square systems $$W_{{\rm {\bf \alpha }}}$$ is scalar and so $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{{\rm {\bf \alpha }}} =Z_{{\rm {\bf \alpha }}} +\,\,W_{{\rm {\bf \alpha }}} H$$ is decomposed into the $$r^{2}$$ scalar equations \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} _{ij} (s)=z_{ij} (s)+w(s)h_{ij} \end{align} (4.11) So a complete $${\rm E}-$$invariant function of the set of $$r\times r$$ square systems of McMillan degree $$n$$ is the set of $$r^{2}$$ complete $${\rm E}-$$invariant functions of the set of scalar systems of McMillan degree $$n-r+1$$ defined by (4.9). To construct complete systems of independent $${\rm E}-$$invariants and canonical forms, we proceed as for scalar systems $$r^{2}$$ times. In the last section, we present complete invariants and canonical forms for square $$2\times 2$$ systems. 4.4. Rectangular systems Direct application of Algorithm 3.1. The exterior factorization is not unique, and we must choose the ‘first’ $$\alpha$$ with $$\left| {Z_{\alpha } (s)} \right|\ne 0$$. 
Example 4.3 \begin{align*} F(s)&=\frac{1}{(s-1)^{4}}\left[ {{\begin{array}{*{20}c} {s^{3}-s^{2}} & {s^{3}-3s^{2}+3s-1} & {2s^{3}-5s^{2}+5s-2} \\ {2s^{3}-3s^{2}+s} & {s^{3}-3s^{2}+3s-1} & {s^{3}-s^{2}} \\ \end{array} }} \right]\left( {\mbox{Kim}-\mbox{Lee }\left( {\mbox{1995}} \right)} \right) \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)&=\frac{1}{s^{4}-2s^{3}+9s^{2}-16s+7}\left[ {{\begin{array}{*{20}c} {s^{3}-s^{2}-s} & {s^{3}-2s^{2}+4s-3} & {2s^{3}-2s^{2}+10s-8} \\ {2s^{3}-4s^{2}+2s} & {-4s^{2}+4s-1+s^{3}} & {-3s^{2}-s+2+s^{3}} \\ \end{array} }} \right]\\ {\it{\Omega}} &=\left[\!\!\! {{\begin{array}{*{20}c} {3s^{2}-2s} & {s^{2}-3s+2} & 0 \\ {-s^{2}+s} & 0 & {s^{2}-3s+2} \\ 0 & {-s^{2}+s} & {-3s^{2}+2s} \\ \end{array} }} \!\!\!\right]=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{{\it{\Omega}} }} \stackrel{{\mathbf{\alpha }=(1,2)}}{\Rightarrow} W_{\mathbf{\alpha }} =\left[\!\!\! {{\begin{array}{*{20}c} {-s^{2}+s} & 0 & {s^{2}-3s+2} \\ 0 & {-s^{2}+s} & {-3s^{2}+2s} \\ \end{array} }} \!\!\!\right] \\ Z_{\mathbf{\alpha }} &=\left[\!\!\! {{\begin{array}{*{20}c} {s^{3}-3s^{2}+3s-1} & {-s^{3}+3s^{2}-3s+1} \\ {3s^{2}-2s^{3}-s} & {s^{3}-s^{2}} \\ \end{array} }} \!\!\!\!\right],\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\mathbf{\alpha }} =\left[\!\!\!\! 
{{\begin{array}{*{20}c} {-4s^{2}+4s-1+s^{3}} & {-s^{3}+2s^{2}-4s+3} \\ {-2s+4s^{2}-2s^{3}} & {s^{3}-s^{2}-s} \\ \end{array} }} \!\!\!\!\right] \\ Z_{\mathbf{\alpha }} &=\underbrace {\left[ {\begin{array}{l} \,\,\,1\,\,\,\,\,-1 \\ -2\,\,\,\,\,\,1 \\ \end{array}} \right]}_{Z_{\alpha ,3} }s^{3}+\underbrace {\left[ {\begin{array}{l} -3\,\,\,\,\,\,3 \\ -2\,\,-1 \\ \end{array}} \right]}_{Z_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} \,\,\,3\,\,\,-3 \\ -1\,\,\,\,\,0 \\ \end{array}} \right]}_{Z_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} -1\,\,\,1 \\ \,\,\,0\,\,\,0 \\ \end{array}} \right]}_{Z_{\alpha ,0} },\\ W_{\mathbf{\alpha }} &=\underbrace {\left[ {\begin{array}{l} -1\,\,\,\,\,\,0\,\,\,\,\,\,\,\,1 \\ \,\,\,0\,\,\,-1\,\,\,-3 \\ \end{array}} \right]}_{W_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} 1\,\,\,0\,\,\,-3 \\ 0\,\,\,1\,\,\,\,\,\,2 \\ \end{array}} \right]}_{W_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} 0\,\,0\,\,\,2 \\ 0\,\,0\,\,\,0 \\ \end{array}} \right]}_{W_{\alpha ,0} } \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\mathbf{\alpha }} &=\underbrace {\left[\!\!\!\! {\begin{array}{l} \,\,\,1\,\,\,\,\,-1 \\ -2\,\,\,\,\,\,1 \\ \end{array}} \!\!\!\!\right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,3} }s^{3}+\underbrace {\left[\!\!\!\! {\begin{array}{l} -4\,\,\,\,\,2 \\ \,\,\,4\,\,-1 \\ \end{array}}\!\!\!\! 
\right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} \,\,\,4\,\,\,-3 \\ -2\,\,\,-1 \\ \end{array}} \right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} -1\,\,\,3 \\ \,\,\,0\,\,\,0 \\ \end{array}} \right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,0} }\,\, \\ {\boldsymbol{Z}}_{\mathbf{\alpha }}& =\left[ {{\begin{array}{*{20}c} {\,\,\,1} & {-2} & {-3} & {-2} & {\,\,\,3} & {-1} & {-1} & 0 \\ {-1} & {\,\,\,1} & {\,\,\,3} & {\,\,\,1} & {-3} & {\,\,\,0} & {\,\,\,1} & 0 \\ \end{array} }} \right]^{{\rm T}},\,\,\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\mathbf{\alpha }} =\left[ {{\begin{array}{*{20}c} {\,\,\,1} & {-2} & {-4} & {\,\,\,4} & {\,\,\,4} & {-2} & {-1} & 0 \\ {-1} & {\,\,\,1} & {\,\,\,2} & {-1} & {-3} & {-1} & {\,\,\,3} & 0 \\ \end{array} }} \right]^{{\rm T}}\\ {\boldsymbol{W}}_{\mathbf{\alpha }}& =\left[\!\!\! {{\begin{array}{*{20}c} 0 & 0 & {-1} & {\,\,\,0} & {\,\,\,1} & 0 & 0 & 0 \\ 0 & 0 & {\,\,\,0} & {-1} & {\,\,\,0} & 1 & 0 & 0 \\ 0 & 0 & {\,\,\,1} & {-3} & {-3} & 2 & 2 & 0 \\ \end{array} }} \!\!\!\right]^{{\rm T}}=\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{W}} }_{\mathbf{\alpha }} ,H_{\alpha } =\left[\!\!\! {{\begin{array}{*{20}c} {27/13} & {-29/13} \\ {-11/13} & {-6/13} \\ {-6/13} & {5/13} \\ \end{array} }} \!\!\!\right],\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}}_{\alpha } =\left[\!\!\! 
{{\begin{array}{*{20}c} {40/13} & {-3/13} \\ {-24/13} & {-45/13} \\ {-6/13} & {18/13} \\ \end{array} }} \!\!\!\right] \\ H&=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}} _{\alpha } -H_{\alpha } =\left[ {{\begin{array}{*{20}c} {-1} & {-2} \\ 1 & 3 \\ 0 & {-1} \\ \end{array} }} \right],\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\mathbf{\alpha }} -\,{\boldsymbol{Z}}_{\mathbf{\alpha }} =\,{\boldsymbol{W}}_{\mathbf{\alpha }} H \end{align*} 5. Complete SOF-invariants, SOF-assignability and constrained dynamics 5.1. Complete SOF-invariants and SOF-assignability In the huge literature on SOF (see for instance the survey paper Syrmos et al. (1997) and the references therein, and the more recent Leventides & Karcanias (2008) and Karcanias & Leventides (2016)), solvability conditions are expressed in terms of entities that are SOF-invariant. We cite some of them. For SOF-assignability, the Plücker matrix must have full rank (Giannakopoulos & Karcanias, 1985); the rank of the Plücker matrix is SOF-invariant. For SOF-assignability of systems with two inputs, two outputs and four states, the transfer function matrix must be rank deficient (Brockett & Byrnes, 1981); the rank of the transfer function matrix is SOF-invariant. For some distributions of Kronecker's indices, systems are generically SOF-assignable (Yannakoudakis, 2013b); Kronecker's indices are SOF-invariant. Until now, a complete system of independent SOF-invariants has been lacking, so the practice has been to solve the problems by whatever means were available and then to examine the solvability conditions with respect to their SOF-invariance. The question that arises is whether, now that a complete system of independent SOF-invariants is available, we can reverse the process and look from the outset for solvability conditions expressed in terms of the invariants. 
To this end, we give in this section necessary and sufficient conditions for SOF-assignability as functions of a complete system of SOF-invariants, for the class of systems with two inputs, two outputs and four states, with transfer function matrix \begin{align} F(s)=\frac{1}{a_{0} (s)}\left[ {{\begin{array}{*{20}c} {a_{11} (s)} & {a_{12} (s)} \\ {a_{21} (s)} & {a_{22} (s)} \\ \end{array} }} \right] \end{align} (5.1) Its open and closed loop Plücker coordinates are \begin{align} \mathfrak{P}&=\left[ {a_{0} (s),\,\,a_{21} (s),\,\,a_{22} (s),\,\,-a_{11} (s),\,\,-a_{12} (s),\,\,D(s)} \right],\,\,D(s)=\frac{a_{11} (s)a_{22} (s)-a_{12} (s)a_{21} (s)\,}{a_{0} (s)} \\ \tilde{{\mathfrak{P}}}&=\mathfrak{P}\mathfrak{T},\,\,\mathfrak{T}=\mathfrak{C}_{2} \left( {\left[ {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ {h_{11} } & {h_{12} } & 1 & 0 \\ {h_{21} } & {h_{22} } & 0 & 1 \\ \end{array} }} \right]} \right)=\left[ {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 & 0 \\ {h_{12} } & 1 & 0 & 0 & 0 & 0 \\ {h_{22} } & 0 & 1 & 0 & 0 & 0 \\ {-h_{11} } & 0 & 0 & 1 & 0 & 0 \\ {-h_{21} } & 0 & 0 & 0 & 1 & 0 \\ d & {-h_{21} } & {h_{11} } & {-h_{22} } & {h_{12} } & 1 \\ \end{array} }} \right]\!,\notag\\ d&=h_{11} h_{22} -h_{12} h_{21} \notag \end{align} (5.2) With feedback gain $H=\left[ {{\begin{array}{*{20}c} {h_{11} } & {h_{12} } \\ {h_{21} } & {h_{22} } \\ \end{array} }} \right]$, \begin{align} \begin{array}{l} \tilde{\mathfrak{p}}_{1} =\tilde{{a}}_{0} (s)=a_{0} (s)+a_{21} (s)h_{12} +a_{22} (s)h_{22} +a_{11} (s)h_{11} +a_{12} (s)h_{21} +D(s)d \\[6pt] \tilde{\mathfrak{p}}_{2} =\tilde{{a}}_{21} (s)=a_{21} (s)-h_{21} D(s) \\[6pt] \tilde{\mathfrak{p}}_{3} =\tilde{{a}}_{22} (s)=a_{22} (s)+h_{11} D(s) \\[6pt] \tilde{\mathfrak{p}}_{4} =\tilde{{a}}_{11} (s)=-a_{11} (s)-h_{22} D(s) \\[6pt] \tilde{\mathfrak{p}}_{5} =\tilde{{a}}_{12} (s)=-a_{12} (s)+h_{12} D(s) \\[6pt] \tilde{\mathfrak{p}}_{6} 
=\mathfrak{p}_{6} \end{array} \end{align} (5.3) Let $\bar{{a}}_{0} -\bar{{a}}_{cl} ,\,\,\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22} ,\,\,\bar{{D}}\in \mathbb{R}^{4}$ denote the coefficient vectors of the polynomials $a_{0} (s)-a_{cl} (s),\,\,a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s),\,\,D(s)$, respectively. We decompose each of the vectors $\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22}$ into a component parallel to $\bar{{D}}$ and a component orthogonal to it. \begin{align} \begin{array}{l} \left. {\begin{array}{l} \bar{{a}}_{11} =a_{11}^{\bot } +a_{11}^{\parallel } ,\,\,\bar{{a}}_{12} =a_{12}^{\bot } +a_{12}^{\parallel } ,\,\bar{{a}}_{21} =a_{21}^{\bot } +a_{21}^{\parallel } ,\,\,\bar{{a}}_{22} =a_{22}^{\bot } +a_{22}^{\parallel } \\[6pt] a_{11}^{\parallel } =\gamma_{11} \bar{{D}},\,\,a_{12}^{\parallel } =\gamma_{12} \bar{{D}},\,\,a_{21}^{\parallel } =\gamma_{21} \bar{{D}},\,\,a_{22}^{\parallel } =\gamma_{22} \bar{{D}} \end{array}} \right\} \gamma_{\zeta \xi } =\frac{\left\langle {\bar{{a}}_{\zeta \xi } ,\bar{{D}}} \right\rangle }{\left\| {\bar{{D}}} \right\|^{2}} \\[6pt] a_{11} (s)=a_{11}^{\bot } (s)+\gamma_{11} D(s),\,a_{12} (s)=a_{12}^{\bot } (s)+\gamma_{12} D(s),\\[6pt] a_{21} (s)=a_{21}^{\bot } (s)+\gamma_{21} D(s),\,a_{22} (s)=a_{22}^{\bot } (s)+\gamma_{22} D(s) \end{array} \end{align} (5.4) The function \begin{align} f:\mathbb{R}_{4}^{2\times 2} \left\{ s \right\}\to \mathbb{R}^{4\times 5},\quad F(s)\mapsto \left[ {a_{21}^{\bot } ,\,\,a_{22}^{\bot } ,\,\,a_{11}^{\bot } ,\,\,a_{12}^{\bot } ,\,\,\bar{{D}}} \right] \end{align} (5.5) is a complete SOF-invariant, and $\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=D(s)\left[ {{\begin{array}{*{20}c} {a_{22}^{\bot } (s)} & {-a_{12}^{\bot } (s)} \\ {-a_{21}^{\bot } (s)} & {a_{11}^{\bot } (s)} \\ \end{array} }} \right]^{-1}$ is an $${\rm E}$$-canonical form of $F(s)$. 
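A minimal numerical sketch of the decomposition (5.4) (plain Python; the helper name is ours, not the paper's):

```python
# Sketch of decomposition (5.4): gamma = <a_bar, D_bar> / ||D_bar||^2
# gives the component of a coefficient vector along D_bar; the
# remainder a_perp is orthogonal to D_bar.

def decompose(a_bar, D_bar):
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    gamma = dot(a_bar, D_bar) / dot(D_bar, D_bar)
    a_perp = [x - gamma * y for x, y in zip(a_bar, D_bar)]
    return gamma, a_perp
```

The four numbers $$\gamma_{11},\gamma_{12},\gamma_{21},\gamma_{22}$$ obtained this way determine the canonicalizing feedback gain described next.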
The canonical form is obtained from the initial system using the feedback gain $H_{\alpha } =-\left[ {{\begin{array}{*{20}c} {\,\,\,\gamma_{22} } & {-\gamma_{12} } \\ {-\gamma_{21} } & {\,\,\gamma_{11} } \\ \end{array} }} \right]$. Consider the truncated Plücker matrix of the canonical form. \begin{align} {\rm {\bf \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{P}} }}=\left[ {a_{11}^{\bot } \,\,a_{12}^{\bot } \,\,\,-a_{21}^{\bot } \,\,-a_{22}^{\bot } \,\,\bar{{D}}} \right] \end{align} (5.6) The vectors $a_{11}^{\bot } ,\,\,a_{12}^{\bot } ,\,\,a_{21}^{\bot } ,\,\,a_{22}^{\bot }$ are linearly dependent, as they belong to a three-dimensional subspace. We now consider the compound of order 4 of the truncated Plücker matrix of the canonical form and of the truncated Plücker matrix of the system \begin{align} \begin{array}{l} {\rm {\bf \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{P}} }}_{0} =\mathbf{\mathfrak{C}}_{4} \left( {\left[ {a_{11}^{\bot } \,\,a_{12}^{\bot } \,\,\,-a_{21}^{\bot } \,\,-a_{22}^{\bot } \,\,\bar{{D}}} \right]} \right)=\left[ {{\it{\Delta}}_{0} \,\,{\it{\Delta}}_{22} \,\,{\it{\Delta}}_{21} \,\,{\it{\Delta}}_{12} \,\,{\it{\Delta}}_{11} } \right],\,\,{\it{\Delta}}_{0} =0 \\ {\mathcal{P}}_{0} =\mathbf{\mathfrak{C}}_{4} \left( {\left[ {\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22} ,\,\,\bar{{D}}} \right]} \right)=\left[ {{\it{\Delta}} \,\,{\it{\Delta}}_{22} \,\,{\it{\Delta}}_{21} \,\,{\it{\Delta}} _{12} \,\,{\it{\Delta}}_{11} } \right],\\ {\it{\Delta}} =\,\,\gamma_{22} {\it{\Delta}}_{22} +\gamma_{21} {\it{\Delta}}_{21} +\gamma_{12} {\it{\Delta}}_{12} +\gamma_{11} {\it{\Delta}} _{11} \\ \end{array} \end{align} (5.7) We suppose that the polynomials $a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$ are $${\rm P}$$-linearly independent. 
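The compound operator $$\mathfrak{C}_{k}$$ used in (5.2) and (5.7) can be sketched directly from its definition, as the matrix of all $$k\times k$$ minors taken in lexicographic index order (a plain-Python illustration, not the paper's implementation):

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row
    (adequate for the small minors needed here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def compound(A, k):
    """k-th compound of A: all k x k minors, row and column index
    sets ordered lexicographically."""
    rows, cols = len(A), len(A[0])
    return [[det([[A[i][j] for j in js] for i in ks])
             for js in combinations(range(cols), k)]
            for ks in combinations(range(rows), k)]
```

Applied with $$k=2$$ to the $$4\times 4$$ feedback block matrix of (5.2), the first column of the result is $$(1,\,h_{12},\,h_{22},\,-h_{11},\,-h_{21},\,d)^{\rm T}$$ with $$d=h_{11}h_{22}-h_{12}h_{21}$$, reproducing the structure of the matrix $$\mathfrak{T}$$.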
Under this assumption, \begin{align} &D(s)=a_{11} (s)\delta_{11} +a_{12} (s)\delta_{12} +a_{21} (s)\delta_{21} +a_{22} (s)\delta_{22} \notag\\ &\quad\begin{array}{l} \delta_{11} ={\left| {\bar{{D}},\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\left| {\bar{{D}},a_{12}^{\bot } ,a_{21}^{\bot } ,a_{22}^{\bot } } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=-{{\it{\Delta}}_{11} }/{{\it{\Delta}}} \\[6pt] \delta_{12} ={\left| {\bar{{a}}_{11} ,\bar{{D}},\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\left| {a_{11}^{\bot } ,\bar{{D}},a_{21}^{\bot } ,a_{22}^{\bot } } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={{\it{\Delta}}_{12} }/{{\it{\Delta}}} \\[6pt] \delta_{21} ={\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{D}},\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\left| {a_{11}^{\bot } ,a_{12}^{\bot } ,\bar{{D}},a_{22}^{\bot } } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=-{{\it{\Delta}}_{21} }/{{\it{\Delta}}} \\[6pt] \delta_{22} ={\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{D}}} \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\left| {a_{11}^{\bot } ,a_{12}^{\bot } ,a_{21}^{\bot } ,\bar{{D}}} \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={{\it{\Delta}}_{22} }/{{\it{\Delta}}} \end{array}\\ &\begin{array}{l} a_{11} (s)\delta_{11} +a_{12} (s)\delta_{12} +a_{21} (s)\delta_{21} +a_{22} (s)\delta_{22} =D(s)\Rightarrow \\[6pt] \Rightarrow \left( {a_{11}^{\bot } (s)+\gamma_{11} D(s)} \right)\delta_{11} +\left( {a_{12}^{\bot } (s)+\gamma_{12} D(s)} \right)\delta_{12} +\left( {a_{21}^{\bot } (s)+\gamma_{21} D(s)} \right)\delta_{21} +\left( {a_{22}^{\bot } (s)+\gamma_{22} D(s)} \right)\delta_{22} -D(s)=0\Rightarrow \\[6pt] \Rightarrow a_{11}^{\bot } (s)\delta_{11} +a_{12}^{\bot } (s)\delta_{12} +a_{21}^{\bot } (s)\delta_{21} +a_{22}^{\bot } (s)\delta_{22} +\left( {\delta_{11} \gamma_{11} +\delta_{12} \gamma_{12} +\delta_{21} \gamma_{21} +\delta_{22} \gamma_{22} -1} \right)D(s)=0\Rightarrow \\[6pt] \Rightarrow \left\{ {{\begin{array}{*{20}c} {\delta_{11} \gamma_{11} +\delta_{12} \gamma_{12} +\delta_{21} \gamma_{21} +\delta_{22} \gamma_{22} -1=0} \\ {a_{11}^{\bot } (s)\delta_{11} +a_{12}^{\bot } (s)\delta_{12} +a_{21}^{\bot } (s)\delta_{21} +a_{22}^{\bot } (s)\delta_{22} =0}\end{array} }} \right. 
\end{array}\notag \end{align} (5.8) \[ \begin{array}{l} a_{cl} (s)=a_{0} (s)-a_{11} (s)\left( {h_{22} +\delta_{11} d} \right)-a_{12} (s)\left( {h_{21} +\delta_{12} d} \right)-a_{21} (s)\left( {h_{12} +\delta_{21} d} \right)-a_{22} (s)\left( {h_{11} +\delta_{22} d} \right)\Leftrightarrow \\ \Leftrightarrow a_{0} (s)-a_{cl} (s)=a_{11} (s)\left( {h_{22} +\delta_{11} d} \right)+a_{12} (s)\left( {h_{21} +\delta_{12} d} \right)+a_{21} (s)\left( {h_{12} +\delta_{21} d} \right)+a_{22} (s)\left( {h_{11} +\delta_{22} d} \right)\Rightarrow \end{array} \] \begin{align} \begin{array}{l} h_{22} +\delta_{11} d={\left| {\bar{{a}}_{0} -\bar{{a}}_{cl} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\mathbf{D}_{22} }/{{\it{\Delta}}}\Rightarrow h_{22} ={\mathbf{D}_{22} }/{{\it{\Delta}}}-\delta_{11} d \\[6pt] h_{21} +\delta_{12} d={\left| {\bar{{a}}_{11} ,\bar{{a}}_{0} -\bar{{a}}_{cl} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\mathbf{D}_{21} }/{{\it{\Delta}}}\Rightarrow h_{21} ={\mathbf{D}_{21} }/{{\it{\Delta}}}-\delta_{12} d \\[6pt] h_{12} +\delta_{21} d={\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{0} -\bar{{a}}_{cl} ,\bar{{a}}_{22} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\mathbf{D}_{12} }/{{\it{\Delta}}}\Rightarrow h_{12} ={\mathbf{D}_{12} }/{{\it{\Delta}}}-\delta_{21} d \\[6pt] h_{11} +\delta_{22} d={\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{0} -\bar{{a}}_{cl} } \right|}/{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\mathbf{D}_{11} }/{{\it{\Delta}}}\Rightarrow h_{11} ={\mathbf{D}_{11} }/{{\it{\Delta}}}-\delta_{22} d \end{array} \end{align} (5.9) But \begin{align} &h_{11} h_{22} -h_{12} h_{21} =d\Rightarrow \notag\\ &\left( {{\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} } \right)d^{2}-\left( {\mathbf{D}_{22} {\it{\Delta}}_{11} +\mathbf{D}_{11} {\it{\Delta}}_{22} +\mathbf{D}_{21} {\it{\Delta}}_{12} +\mathbf{D}_{12} {\it{\Delta}}_{21} +1} \right)d\notag\\ &+\left( {\mathbf{D}_{11} \mathbf{D}_{22} -\mathbf{D}_{12} \mathbf{D}_{21} } \right)=0\Leftrightarrow \notag\\ &\Leftrightarrow \alpha d^{2}+\beta d+\gamma =0 \notag\\ &\mathbf{D}\left( {a_{cl} (s)} \right)=\beta^{2}-4\alpha \gamma \end{align} (5.10) Remark that, when the determinant ${\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21}$ vanishes, the SOF pole placement problem is linear and has a solution whenever $\beta \left( {a_{cl} (s)} \right)\ne 0$. When ${\mathcal{D}}\ne 0$, the problem has a real solution if $\mathbf{D}\left( {a_{cl} (s)} \right)\geqslant 0$. In the case where the polynomials $a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$ are $${\rm P}$$-linearly dependent, similar methods show that the problem is linear whenever ${\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0$. In the case $D(s)\equiv 0$, the rank of the transfer function matrix is one and, by Theorem 3.1, the polynomials $a_{11}(s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$ are $\mathcal{H}$-invariant. For SOF-assignability the polynomials must be $${\rm P}$$-linearly independent by Theorem 3.2. In this case the problem is linear and every closed-loop characteristic polynomial is assignable. 
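The case analysis following (5.10) can be sketched as follows (plain Python; $$\alpha,\beta,\gamma$$ are assumed to have already been computed from the invariants $${\it\Delta}_{ij}$$ and the target polynomial):

```python
import math

# Sketch of the solvability test after (5.10): the Grassmann variable
# d = h11*h22 - h12*h21 must satisfy alpha*d**2 + beta*d + gamma = 0.

def admissible_d(alpha, beta, gamma):
    """Real solutions for d; an empty list means only complex gains work."""
    if alpha == 0:                      # Delta11*Delta22 - Delta12*Delta21 = 0
        return [-gamma / beta] if beta != 0 else []   # linear case
    disc = beta * beta - 4 * alpha * gamma            # this is D(a_cl(s))
    if disc < 0:
        return []
    return [(-beta + sgn * math.sqrt(disc)) / (2 * alpha) for sgn in (1, -1)]
```

Once an admissible $$d$$ is found, the gain entries follow linearly from (5.9).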
Remark also that for this class of systems we have ${\it{\Delta}} \ne 0,\ {\it{\Delta}}_{22} ={\it{\Delta}}_{21} ={\it{\Delta}}_{12} ={\it{\Delta}}_{11} =0$. We summarize in the following. \underline{\text{Characterization of the SOF-equivalence classes as to pole assignability}} \[ \begin{array}{*{20}c} {\mbox{SYSTEMS WITH}} \\ {(r,\,n,\,m)=(2,\,4,\,2)} \\ \end{array} \left\{ {\begin{array}{l} {\mathcal{P}}\equiv 0\ \left( {{\it{\Delta}} ={\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0} \right)\quad \underline{\mbox{Not SOF-assignable (1)}} \\[6pt] {\mathcal{P}}\not\equiv 0\left\{ {\begin{array}{l} D(s)\equiv 0\ \left( {{\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0} \right)\quad \underline{\mbox{Complete real assignability (2)}} \\[6pt] D(s)\not\equiv 0\left\{ {\begin{array}{l} {\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0\quad \underline{\mbox{Generic real assignability (3)}} \\[6pt] {\mathcal{D}}\neq 0\left\{ {\begin{array}{l} {\mathbf{D}}\left( {a_{cl} (s)} \right)\geqslant 0\quad \underline{\mbox{Real assignability of }a_{cl} (s)} \\[6pt] {\mathbf{D}}\left( {a_{cl} (s)} \right)<0\quad \underline{\mbox{Complex assignability of }a_{cl} (s)} \\ \end{array}} \right. \\ \end{array}} \right. \\ \end{array}} \right. \\ \end{array}} \right. \] Some of the above results are stated slightly differently in the literature. (1) The Plücker matrix is rank deficient (Giannakopoulos & Karcanias, 1985). 
(2) The entries of the transfer function matrix are $$\mathbb{R}-$$linearly independent (meaning that the Plücker matrix has full rank) but the transfer function matrix is rank deficient (Brockett & Byrnes, 1981). (3) The Plücker matrix and the transfer function matrix both have full rank, but the solution is linear (a result of this article). We conclude for this class of systems that the degree of the solution is a property of the SOF-equivalence class. If the degree is one, i.e. $${\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0\,$$, the equivalence class is completely assignable by real output feedback when $${\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0$$ and generically assignable otherwise, i.e. we can assign only closed-loop polynomials with $$\beta \left( {a_{cl} (s)} \right)\ne 0$$. If the degree is two, i.e. $${\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} \neq 0\,$$, real assignability is not a property of the equivalence class: it depends on both the system and the desired closed-loop polynomial, through the condition $$\mathbf{D}\left( {a_{cl} (s)} \right)\geqslant 0$$. Systems with the generic properties $$\left( {{\mathcal{P}}\not\equiv 0,\,\,D(s)\not\equiv 0,\,\,{\mathcal{D}}\neq 0} \right)$$ are assignable by a complex gain; real assignability of a given polynomial $$a_{cl} (s)$$ depends on both the system and the polynomial. The question that arises naturally is what happens for systems of higher dimension. We can proceed in the same way, by calculating a canonical form and then a Gröbner basis. The problem is that, even if we manage to compute a Gröbner basis whose coefficients are functions of the invariants, we cannot obtain closed formulas for the existence of real roots, based on the Sturm conditions or on the so-called trace form, as we did with the second-degree equation: we must proceed case by case. 5.2. 
Complete SOF-invariants and constrained closed-loop dynamics In Syrmos et al. (1997) it is reported that there is no known way to characterize the dynamics of the closed-loop system under SOF in the case where not all the poles are assignable. This problem of constrained closed-loop dynamics is analogous to the problem of fixed dynamics arising in decentralized control (Anderson & Clements, 1981; Karcanias et al., 1988). In this subsection, we address this problem; it concerns systems with $$mr<n$$. The idea has its roots in the Bezoutian of scalar systems. Let $$F(s)=\frac{z(s)}{a(s)}$$ and let $$r_{1} ,\,r_{2}$$ be closed-loop poles. Then $$a(r_{1} )+hz(r_{1} )=a(r_{2} )+hz(r_{2} )=0$$. Eliminating the gain $$h$$ between the two equations, we obtain $$a(r_{1} )z(r_{2} )-z(r_{1} )a(r_{2} )=0\Rightarrow \mathfrak{B}(r_{1} ,r_{2} )=0$$. Fixing $$r_{1}$$, the Bezoutian, viewed as a polynomial in $$r_{2}$$, is a closed-loop characteristic polynomial: its roots are the constrained poles of the closed-loop system. We proceed in an analogous way whenever $$mr<n$$, eliminating the output feedback gain among the $$mr+1$$ equations of the type (2.19): $\tilde{{Z}}_{0} (r_{v}) = Z_{0} (r_{v} )+\sum\limits_{k=1}^r {\sum\limits_{\zeta =1}^{\left( {_{k}^{r} } \right)} {\sum\limits_{\xi =1}^{\left( {_{k}^{m} } \right)} {z_{k\zeta \xi } (r_{v} )d_{k\xi \zeta } } } } ,\,\,v=1,2,\cdots ,mr+1$ In the case of two inputs and two outputs we do not need to use resultants. 
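The scalar Bezoutian elimination just described can be checked numerically. In this sketch (plain Python) the data $$a(s)=s^{2}-3s+2$$, $$z(s)=s$$ and the gain $$h=4$$ are illustrative choices of ours, not taken from the paper:

```python
import cmath

# Sketch of the scalar Bezoutian: for F(s) = z(s)/a(s) under gain h the
# closed-loop poles r satisfy a(r) + h*z(r) = 0; eliminating h between
# two such equations gives B(r1, r2) = a(r1)z(r2) - z(r1)a(r2) = 0.

def polyval(coeffs, s):
    """Horner evaluation, highest-degree coefficient first."""
    out = 0
    for c in coeffs:
        out = out * s + c
    return out

a = [1, -3, 2]          # a(s) = s^2 - 3s + 2   (illustrative data)
z = [1, 0]              # z(s) = s
h = 4
a_cl = [1, -3 + h, 2]   # a(s) + h*z(s) = s^2 + s + 2

# roots of the closed-loop polynomial (complex conjugate pair here)
disc = cmath.sqrt(a_cl[1] ** 2 - 4 * a_cl[0] * a_cl[2])
r1, r2 = (-a_cl[1] + disc) / 2, (-a_cl[1] - disc) / 2

bez = polyval(a, r1) * polyval(z, r2) - polyval(z, r1) * polyval(a, r2)
```

Since $$a(r)=-hz(r)$$ at each closed-loop pole, `bez` vanishes (up to rounding) for any such pair, whatever the gain.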
The transfer function matrix when $$(r,m)=(2,2)$$ is $F(s)=\frac{1}{a_{0} (s)}\left[ {{\begin{array}{*{20}c} {a_{11} (s)} & {a_{12} (s)} \\ {a_{21} (s)} & {a_{22} (s)} \\ \end{array} }} \right]$ and the Plücker coordinates are $\mathfrak{P}=\left[ {a_{0} (s),\,\,a_{21} (s),\,\,a_{22} (s),\,\,-a_{11} (s),\,\,-a_{12} (s),\,\,D(s)} \right],\,\,D(s)=\frac{a_{11} (s)a_{22} (s)-a_{12} (s)a_{21} (s)\,}{a_{0} (s)}$ The closed-loop Plücker coordinate $$\tilde{{a}}_{0} (s)$$ is $\tilde{{a}}_{0} (s)=a_{0} (s)+a_{21} (s)h_{12} +a_{22} (s)h_{22} +a_{11} (s)h_{11} +a_{12} (s)h_{21} +D(s)d$ Suppose now that $$r_{1} ,r_{2} ,r_{3} ,r_{4} ,r_{5}$$ are closed-loop roots. Then \begin{align} 0&=a_{0} (r_{k} )+a_{21} (r_{k} )h_{12} +a_{22} (r_{k} )h_{22} +a_{11} (r_{k} )h_{11} +a_{12} (r_{k} )h_{21} +D(r_{k} )d,\quad k\in \left\{ {1,2,3,4,5} \right\}\notag\\ 0&=\left[ {\begin{array}{cccccc} a_{0} (r_{1} ) & a_{21} (r_{1} ) & a_{22} (r_{1} ) & a_{11} (r_{1} ) & a_{12} (r_{1} ) & D(r_{1} ) \\ a_{0} (r_{2} ) & a_{21} (r_{2} ) & a_{22} (r_{2} ) & a_{11} (r_{2} ) & a_{12} (r_{2} ) & D(r_{2} ) \\ a_{0} (r_{3} ) & a_{21} (r_{3} ) & a_{22} (r_{3} ) & a_{11} (r_{3} ) & a_{12} (r_{3} ) & D(r_{3} ) \\ a_{0} (r_{4} ) & a_{21} (r_{4} ) & a_{22} (r_{4} ) & a_{11} (r_{4} ) & a_{12} (r_{4} ) & D(r_{4} ) \\ a_{0} (r_{5} ) & a_{21} (r_{5} ) & a_{22} (r_{5} ) & a_{11} (r_{5} ) & a_{12} (r_{5} ) & D(r_{5} ) \\ \end{array}} \right]\left[ {\begin{array}{c} 1 \\ h_{12} \\ h_{22} \\ h_{11} \\ h_{21} \\ d \\ \end{array}} \right] \end{align} (5.11) Let now $\left[ {D_{1} ,\,\,D_{2} ,\,\,D_{3} ,\,\,D_{4} ,\,\,D_{5} ,\,\,D_{6} } \right]=\mathfrak{C}_{5} \left( {\left[ {\begin{array}{cccccc} a_{0} (r_{1} ) & a_{21} (r_{1} ) & a_{22} (r_{1} ) & a_{11} (r_{1} ) & a_{12} (r_{1} ) & D(r_{1} ) \\ a_{0} (r_{2} ) & a_{21} (r_{2} ) & a_{22} (r_{2} ) & a_{11} (r_{2} ) & a_{12} (r_{2} ) & D(r_{2} ) \\ a_{0} (r_{3} ) & a_{21} (r_{3} ) & a_{22} (r_{3} ) & a_{11} (r_{3} ) & a_{12} (r_{3} ) & D(r_{3} ) \\ a_{0} (r_{4} ) & a_{21} (r_{4} ) & a_{22} (r_{4} ) & a_{11} (r_{4} ) & a_{12} (r_{4} ) & D(r_{4} ) \\ a_{0} (r_{5} ) & a_{21} (r_{5} ) & a_{22} (r_{5} ) & a_{11} (r_{5} ) & a_{12} (r_{5} ) & D(r_{5} ) \\ \end{array}} \right]} \right)$ The solution of the system of equations (5.11) is \begin{align} \begin{array}{l} h_{12} =\frac{D_{5} }{D_{6} },\,\,h_{22} =-\frac{D_{4} }{D_{6} },\,\,h_{11} =\frac{D_{3} }{D_{6} },\,\,h_{21} =-\frac{D_{2} }{D_{6} },\,\,d=\frac{D_{1} }{D_{6} }\Rightarrow \\[6pt] \frac{D_{1} }{D_{6} }=-\frac{D_{3} }{D_{6} }\frac{D_{4} }{D_{6} }+\frac{D_{2} }{D_{6} }\frac{D_{5} }{D_{6} }\Rightarrow D_{1} D_{6} -D_{2} D_{5} +D_{3} D_{4} =0 \\ \end{array} \end{align} (5.12) The Plücker quadratic relation (5.12), $D_{1} D_{6} -D_{2} D_{5} +D_{3} D_{4} =0$, is $${\rm E}$$-invariant and describes the closed-loop dynamics. Fixing $$r_{1} ,r_{2} ,r_{3} ,r_{4}$$ arbitrarily, the roots of (5.12), viewed as an equation in $$r_{5}$$, describe the constrained closed-loop dynamics. Example 5.1 \begin{align*} F(s)&=\left[ {\begin{array}{ll} 1/(s+2)\mbox{ }&1/(s+4) \\ (2s+4)/\left( {(s+1)(s+3)} \right)\mbox{ }&1/(s+5) \\ \end{array}} \right] \\ \mathfrak{P}&=\left[ {\begin{array}{l} s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120 \\ s^{4}+13s^{3}+59s^{2}+107s+60 \\ 2s^{4}+26s^{3}+120s^{2}+232s+160\, \\ s^{4}+11s^{3}+41s^{2}+61s+30 \\ s^{4}+10s^{3}+35s^{2}+50s+24 \\ -s^{3}-10s^{2}-29s-28 \\ \end{array}} \right]^{{\rm T}} \end{align*} The multivariate polynomial describing the closed-loop dynamics occupies more than a page, so we do not present it here. 
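The elimination behind (5.11)–(5.12) can be sketched with illustrative data (plain Python). The six polynomials below are our own choices, not derived from a specific transfer matrix; they are constructed so that the gain $$h_{11}=1$$, $$h_{12}=2$$, $$h_{21}=3$$, $$h_{22}=4$$ (hence $$d=-2$$) places the closed-loop poles at $$-1,\ldots,-5$$:

```python
# Sketch of the elimination in (5.11): evaluate the six Plücker
# coordinates at five prescribed closed-loop roots and recover the
# vector (1, h12, h22, h11, h21, d) from the null space of the
# resulting 5 x 6 matrix.  All data is illustrative.

def polyval(coeffs, s):
    out = 0.0
    for c in coeffs:
        out = out * s + c
    return out

polys = [
    [1, 13, 81, 224, 271, 122],  # a0(s), chosen so a0_tilde factors nicely
    [1, 0, 0, 0, 0],             # a21(s) = s^4   (assumption)
    [1, 0, 0, 0],                # a22(s) = s^3
    [1, 0, 0],                   # a11(s) = s^2
    [1, 0],                      # a12(s) = s
    [1],                         # D(s)  = 1
]
roots = [-1.0, -2.0, -3.0, -4.0, -5.0]
M = [[polyval(p, r) for p in polys] for r in roots]

# Solve M[:, 1:] x = -M[:, 0] by Gaussian elimination with partial
# pivoting; the null vector of M is then (1, x).
A = [row[1:] + [-row[0]] for row in M]
n = 5
for k in range(n):
    piv = max(range(k, n), key=lambda i: abs(A[i][k]))
    A[k], A[piv] = A[piv], A[k]
    for i in range(k + 1, n):
        f = A[i][k] / A[k][k]
        A[i] = [a - f * b for a, b in zip(A[i], A[k])]
x = [0.0] * n
for k in range(n - 1, -1, -1):
    x[k] = (A[k][n] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]

h12, h22, h11, h21, d = x
```

Here $$d$$ comes out equal to $$h_{11}h_{22}-h_{12}h_{21}$$ because the five roots were chosen consistently; for an arbitrary fifth root the two values differ, and relation (5.12) is precisely the condition that they agree.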
Fixing $$r_{1} =-0.5,\,\,r_{2} =-1.5,\,\,r_{3} =-2.5,\,\,r_{4} =-3.5$$, we obtain the polynomial describing the constrained dynamics: \[ -\frac{237555}{16}r_{5}^{2} -\frac{792135}{8}r_{5} -\frac{2299275}{16} \] Its roots are $$-4.5346$$ and $$-2.1345$$. The gain assigning $$r_{5} =-4.5346$$ is $H=\left[ {\begin{array}{cc} -0.3484 & -0.0273 \\ -1.0279 & -1.0345 \\ \end{array}} \right]$ and the gain assigning $$r_{5} =-2.1345$$ is $H=\left[ {\begin{array}{cc} 0.1110 & -0.1679 \\ 2.5348 & -7.1755 \\ \end{array}} \right]$ 6. Conclusions In this article, we constructed complete systems of independent SOF-invariants and canonical forms of strictly proper transfer function matrices of full rank, addressing a very old outstanding problem in control theory. To reinforce our arguments about the importance of complete invariants, we used them to answer concrete control problems. For the class of two-input, two-output, four-state systems, we used the invariants to characterize the assignability properties of the SOF-equivalence classes. The result is that exact SOF-assignability is a property of the equivalence class only for the non-generic equivalence classes ($${\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0\,)$$, and a property of both the equivalence class and the polynomial under assignment otherwise. For the class of systems with two inputs, two outputs and more than four states, we gave an invariant multivariate polynomial describing the closed-loop dynamics, and used it to calculate a feedback gain that partially assigns the poles of the system. Acknowledgements We would like to thank Professor N. Karcanias for the very long discussions on the SOF-invariants, Professor N. Tzanakis for his help with the formalism of the second section, and the reviewers for the time they spent reading this paper as well as for their remarks. References Anderson B. D. O. 
& Clements D. J. (1981) Algebraic characterization of fixed modes in decentralized control. Automatica, 17, 703–712.
Brockett R. W. & Byrnes C. I. (1981) Multivariable Nyquist criteria, root loci and pole placement: a geometric viewpoint. IEEE Trans. Autom. Control, AC-26, 271–284.
Byrnes C. I. & Crouch P. E. (1985) Geometric methods for the classification of linear feedback systems. Syst. Control Lett., 6, 239–246.
Gantmacher F. R. (2000) The Theory of Matrices. Providence, RI: AMS Chelsea Publishing.
Giannakopoulos C. & Karcanias N. (1985) Pole assignment of strictly proper and proper linear systems by constant output feedback. Int. J. Control, 42, 543–565.
Helmke U. & Fuhrmann P. A. (1989) Bezoutians. Linear Algebra Appl., 122–124, 1039–1097.
Hinrichsen D. & Pratzel-Wolters D. (1983) A canonical form for static output feedback. Universität Bremen, Report 101.
Karcanias N. & Giannakopoulos C. (1984) Grassmann invariants, almost zeros and the determinantal zero, pole assignment problems of linear systems. Int. J. Control, 40, 673–698.
Karcanias N., Laios B. & Giannakopoulos C. (1988) The decentralised determinantal assignment problem: fixed and almost modes and zeros. Int. J. Control, 48, 129–147.
Karcanias N. & Leventides J. (2016) Solution of the determinantal assignment problem using the Grassmann matrices. Int. J. Control, 89, 352–367.
Kim S. W. & Lee E. B. (1995) Complete feedback invariant form for linear output feedback. Proc. IEEE Conf. on Decision and Control, New Orleans, LA, USA.
Leventides J. & Karcanias N. (2008) Structured squaring down and zero assignment. Int. J. Control, 81, 294–306.
MacLane S. & Birkoff G. (1999) Algebra. Providence, RI: AMS Chelsea Publishing.
Popov V. M. (1972) Invariant description of linear time-invariant controllable systems. SIAM J. Control, 10, 252–264.
Prells U., Friswell M. I. & Garvey S. D. (2003) Use of geometric algebra: compound matrices and the determinant of the sum of two matrices. Proc. R. Soc. London A, 459, 273–285.
Ravi M. S., Rosenthal J. & Helmke U. (2002) Output feedback invariants. Linear Algebra Appl., 351–352, 623–637.
Syrmos V. L., Abdallah C. T., Dorato P. & Grigoriadis K. (1997) Static output feedback—a survey. Automatica, 33, 125–137.
Wedderburn J. H. M. (1934) Lectures on Matrices, vol. XVII. New York: American Mathematical Society Colloquium Publications.
Yannakoudakis A. (1980) Invariant algebraic structures in multivariable control theory. Preprint, Laboratoire d'Automatique de Grenoble, November 1980.
Yannakoudakis A. (2007) Output feedback equivalence. European Control Conference, July 2–5, Kos Island, Greece, paper TuD14.2.
Yannakoudakis A. (2013a) Full static output feedback equivalence. J. Control Sci. Eng., vol. 2013, https://doi.org/10.1155/2013/491709.
Yannakoudakis A. (2013b) Distribution of invariant indices and static output feedback pole placement. MED Conference on Control and Automation, Platanias, Crete, Greece, June 2013, https://doi.org/10.1109/MED.2013.6608843.
Yannakoudakis A. (2015) The static output feedback from the invariant point of view. IMA J. Math. Control Inform., https://doi.org/10.1093/imamci/dnu057.
© The authors 2018.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Mathematical Control and Information, Advance Article, 8 January 2018, 31 pages. Publisher: Oxford University Press. ISSN 0265-0754, eISSN 1471-6887. DOI: 10.1093/imamci/dnx051.
We suppose additionally that the matrix $$F(s)$$ has full rank over the field of rational functions. The SOF group, denoted by $$\mathcal{H}$$, is the additive group of $$m\times r$$ real matrices, $$\mathcal{H}=\mathbb{R}^{m\times r}$$.
It acts on $${\it{\Sigma}}$$ by the transformation: $$F(s)\mapsto \tilde{{F}}(s)=\left\{ {{\begin{array}{@{}c} {(I_{r} +F(s)H)^{-1}F(s)\,\,\,(a)} \\[4pt] {F(s)(I_{m} +HF(s))^{-1}\,\,(b)} \end{array} }} \right.\quad\forall \left( {F(s)\in {\it{\Sigma}} ,\,H\in \mathcal{H}} \right)$$ (1.2) Transformations (1.2a) and (1.2b) are identical, as $$(I_{r} +F(s)H)^{-1}F(s)=F(s)(I_{m} +HF(s))^{-1}\Leftrightarrow (I_{r} +F(s)H)F(s)=F(s)(I_{m} +HF(s))$$ (1.3) Without loss of generality we suppose that $$r\leqslant m$$, and it is more convenient to use (1.2a). In the case $$r<m$$, (1.1b) is covered by the rank hypothesis. In the case $$r=m$$, both (1.1a) and (1.1b) are covered by the rank hypothesis. The equivalence relation $${\rm E}$$ induced on $${\it{\Sigma}}$$ by the action of $$\mathcal{H}$$ is described by the equations: $$\begin{array}{l} F(s)\,{\rm E}\,\overset{\frown}{F} (s)\Leftrightarrow \exists H\in \mathcal{H}\mbox{ with }\overset{\frown}{F} (s)=(I_{r} +F(s)H)^{-1}F(s) \\[4pt] \Leftrightarrow (I_{r} +F(s)H)\overset{\frown}{F} (s)=F(s)\Leftrightarrow F(s)H\overset{\frown}{F} (s)=F(s)-\overset{\frown}{F} (s) \end{array}$$ (1.4) We conclude that $${\rm E}$$-equivalence amounts to the existence of a real solution, in the $$mr$$ entries of the gain $$H$$, of a system of $$mr$$ linear equations with coefficients in the field of rational functions. The objective of this article is to find necessary and sufficient conditions for the existence of a real solution to the system of linear equations (1.4), in terms of equality of complete systems of independent $${\rm E}$$-invariants $$f\left( {F(s)} \right)$$ and canonical forms $$\underline{F}(s)$$.
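The equality of the two closed-loop expressions asserted in (1.3) is easy to verify symbolically. A minimal sketch in sympy, with a sample 2×2 strictly proper transfer matrix and an arbitrary numeric gain (both chosen purely for illustration):

```python
import sympy as sp

s = sp.symbols('s')
F = sp.Matrix([[1/(s + 2), 1/(s + 4)],
               [(2*s + 4)/((s + 1)*(s + 3)), 1/(s + 5)]])
H = sp.Matrix([[1, -2],
               [3,  0]])            # an arbitrary gain in the SOF group R^{m x r}

I2 = sp.eye(2)
Fa = (I2 + F*H).inv() * F           # transformation (1.2a)
Fb = F * (I2 + H*F).inv()           # transformation (1.2b)

# (1.3): the two closed-loop transfer matrices coincide
assert (Fa - Fb).applyfunc(sp.cancel) == sp.zeros(2, 2)
```

The same check goes through for any rational $$F(s)$$ and any real $$H$$ for which $$I_{r}+F(s)H$$ is invertible.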
\begin{align} \exists H\in \mathcal{H}\mbox{ with }F(s)H\overset{\frown}{F} (s)=F(s)-\overset{\frown}{F} (s)\Leftrightarrow \left\{ \begin{array}{@{}c} {f(F(s))=f(\overset{\frown}{F} (s))} \\ {\underline{F}(s)=\,\underline{\overset{\frown}{F}}(s)} \end{array} \right. \end{align} (1.5) If $$F(s)\,{\rm E}\,\overset{\frown}{F}(s)$$, hypothesis (1.1) implies that the gain $$H$$ achieving equivalence is uniquely determined, since $$H_{0} \in \mathcal{H}$$ with $$F(s)H_{0} ={\rm O}_{r\times r}$$ implies $$H_{0} ={\rm O}_{m\times r}$$. The search for SOF canonical forms of strictly proper linear systems ($${\rm E}$$-canonical forms of $${\it{\Sigma}}$$ in the sequel of the article) is a fascinating universal problem. It is additionally of great importance for other control problems compatible with the output feedback equivalence class. Consider for instance the problem of determining necessary and sufficient conditions for the existence of an output feedback gain $$H$$ assigning arbitrarily the coefficients of the denominator of $$(I_{r} +F(s)H)^{-1}F(s)$$, known as the SOF pole assignability problem. This problem is compatible with the equivalence class because, if there is a solution for $$F(s)$$, the same is true for any system $${\rm E}$$-equivalent to it. The property of SOF assignability, viewed as a function from $${\it{\Sigma}}$$ to the Boolean set $$\left\{ {yes,\,no} \right\}$$, is $${\rm E}$$-invariant and, as every invariant function does (Popov, 1972), it has to be a function of any complete $${\rm E}$$-invariant function.
In this article, we solve the universal problem of SOF-equivalence as stated below, by constructing complete SOF-invariant functions and canonical forms, and we use the solution to characterize the SOF-equivalence classes of systems with two inputs, two outputs and four states as to their SOF assignability properties. In the case of more than four states, we give the algebraic variety describing the closed loop dynamics and we use it to calculate the feedback gain assigning arbitrarily four poles. 1.2. Previous results The problem of $${\rm E}$$-canonical forms of $${\it{\Sigma}}$$ has been pending for more than four decades in control theory. The first result on $${\rm E}$$-canonical forms was presented by Yannakoudakis (1980), and it concerns scalar systems. A state space representation is considered and the SOF group is $$GL_{n} (\mathbb{R})\times \mathbb{R}^{m\times r}$$. In the same framework, computable necessary and sufficient conditions for $${\rm E}$$-equivalence on $${\it{\Sigma}}$$ (other than the equality of canonical forms) are given in terms of linear matrix equations. Hinrichsen & Pratzel-Wolters (1983) present a set of quasi $${\rm E}$$-canonical forms of $${\it{\Sigma}}$$. Byrnes & Crouch (1985) consider the full SOF group, involving also changes of basis of the input and output spaces, and present complete systems of invariants (but not canonical forms) for the related equivalence relation in the scalar case. Complete systems of full SOF invariants and canonical forms of scalar systems are given by Helmke & Fuhrmann (1989). Kim & Lee (1995) present $${\rm E}$$-canonical forms for the equivalence classes that are SOF-assignable. Unfortunately, to characterize these equivalence classes, one needs to know complete $${\rm E}$$-invariants. Ravi et al. (2002) prove the existence of full SOF canonical forms in the case $$mr>n$$, but no construction algorithm is presented.
Yannakoudakis (2007) presents in a new form the necessary and sufficient conditions for $${\rm E}$$-equivalence of Yannakoudakis (1980), and Yannakoudakis (2013a) gives computable necessary and sufficient conditions for full SOF equivalence. The above conditions are in terms of structured matrices, namely mosaic Hankel matrices. Summing up, we conclude that complete SOF or full SOF invariants, as well as canonical forms of $${\it{\Sigma}}$$, are known only for scalar systems. This is perhaps the reason that problems such as SOF exact pole placement remain open. Newer invariants, such as the Hankel invariants introduced in Yannakoudakis (2013b), help in highlighting some aspects of the problem (Yannakoudakis, 2015). 1.3. Outline of the methodology used and results In this article, we construct complete systems of $${\rm E}$$-invariants and canonical forms of $${\it{\Sigma}}$$, without prerequisites such as constraints on the dimensions or SOF-assignability. We thus address a problem that has been open for more than four decades. We outline below the methodology used. Complete development of the material, proofs and discussion are provided in the next sections; this subsection is meant to motivate the reader to continue with the rather unusual content of the article. The first step is to develop a particular factorization of the transfer function matrices (Section 2.2), $$F(s)=Z^{-1}(s)W(s)$$, that we call the exterior factorization, because the polynomial factor $$Z(s)$$ depends on the entries of the $$(r-1)$$th exterior power of $$F(s)$$ and the polynomial factor $$W(s)$$ depends on the entries of the $$r$$th exterior power of $$F(s)$$, both multiplied by the least common multiple of the denominators of the entries of $$F(s)$$. The second step is to prove that the factor $$W(s)$$ of the exterior factorization is $${\rm E}$$-invariant (Section 2.4, Theorem 2.2).
The $${\rm E}$$-invariance of the factor $$W(s)$$ is the property differentiating our approach from approaches using classical coprime factorizations, like that of Ravi et al. (2002). It is remarkable that the g.c.d. of the entries of $$W(s)$$ is state feedback invariant, while the whole of $$W(s)$$ is SOF invariant. We believe that the $${\rm E}$$-invariance of $$W(s)$$ will become fundamental in the theory of linear control systems. The exterior factorization makes multivariable systems look like scalar systems. Applying Theorem 2.2 of invariance to the linear system of equations (1.4), we obtain a new linear system of equations suitable for the development of complete invariant functions and canonical forms $$\begin{array}{l} FH\overset{\frown}{F} =F-\overset{\frown}{F} \Leftrightarrow Z^{-1}WH\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} =Z^{-1}W-\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} \overset{\overset{\frown}{W} =W}{\Longrightarrow} Z^{-1}WH\overset{\frown}{Z}{}^{-1}\overset{\frown}{W} =\left( {Z^{-1}-\overset{\frown}{Z}{}^{-1}} \right)\overset{\frown}{W} \\[4pt] \overset{\overset{\frown}{W} \mbox{ has full rank}}{\Longrightarrow} Z^{-1}WH\overset{\frown}{Z}{}^{-1}=Z^{-1}-\overset{\frown}{Z}{}^{-1}\Rightarrow WH=\overset{\frown}{Z} -Z \end{array}$$ (1.6) Apparently the systems
$$\overset{\frown}{F}(s)$$ and $$F(s)$$ are $${\rm E}$$-equivalent if and only if their exterior factors $$\overset{\frown}{Z}(s),\,\overset{\frown}{W}(s),\,Z(s),\,W(s)$$ satisfy \begin{align} \begin{array}{ll} \overset{\frown}{W} (s)=W(s) & (a) \\ \overset{\frown}{Z} (s)-Z(s)=W(s)H & (b) \\ \end{array} \end{align} (1.7) Thanks to the nature of the left-hand side of (1.7b), necessary and sufficient conditions for the existence of solutions of linear systems of equations are easily translated into equality of invariants. Equating the coefficients of the same powers of $$s$$, we obtain a matrix equation with constant coefficients: \begin{align} \overset{\frown}{\boldsymbol{Z}}-{\boldsymbol{Z}}={\boldsymbol{W}}H \end{align} (1.7c) If $$h_{1} ,h_{2} ,\ldots ,h_{r}$$; $$\overset{\frown}{\boldsymbol{z}}_{1} ,\overset{\frown}{\boldsymbol{z}}_{2} ,\ldots ,\overset{\frown}{\boldsymbol{z}}_{r}$$; $${\boldsymbol{z}}_{1} ,{\boldsymbol{z}}_{2} ,\ldots ,{\boldsymbol{z}}_{r}$$; $${\boldsymbol{w}}_{1} ,{\boldsymbol{w}}_{2} ,\ldots ,{\boldsymbol{w}}_{m}$$ are the columns of the gain $$H$$, the matrix $$\overset{\frown}{\boldsymbol{Z}}$$, the matrix $${\boldsymbol{Z}}$$ and the matrix $${\boldsymbol{W}}$$, respectively, (1.7c) is decomposed into $$r$$ systems of equations $$\overset{\frown}{\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} ={\boldsymbol{W}}h_{k}$$ (1.8) We can calculate new invariants that complete the already known $${\boldsymbol{W}}$$ using the
consistency conditions of the linear systems of equations (1.8). We decompose the vectors $$\overset{\frown}{\boldsymbol{z}}_{k} ,\,{\boldsymbol{z}}_{k}$$ into two components: one, $$\big(\overset{\frown}{\boldsymbol{z}}{}_{k}^{\parallel },\,{\boldsymbol{z}}_{k}^{\parallel } \big)$$, belonging to the column space of $${\boldsymbol{W}}$$, and one, $$\big(\overset{\frown}{\boldsymbol{z}}{}_{k}^{\bot },\,{\boldsymbol{z}}_{k}^{\bot }\big)$$, orthogonal to it, i.e. $$\overset{\frown}{\boldsymbol{z}}_{k}=\overset{\frown}{\boldsymbol{z}}{}_{k}^{\parallel } +\overset{\frown}{\boldsymbol{z}}{}_{k}^{\bot }$$, $${\boldsymbol{z}}_{k} ={\boldsymbol{z}}_{k}^{\parallel }+{\boldsymbol{z}}_{k}^{\bot }$$. The consistency conditions become: \begin{align} \overset{\frown}{\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} \in \mbox{colspan}\left( {{\boldsymbol{W}}} \right)\Leftrightarrow \overset{\frown}{\boldsymbol{z}}{}_{k}^{\bot } ={\boldsymbol{z}}_{k}^{\bot } \end{align} (1.9) As the above decomposition is unique, going back a step we can decompose $$Z(s)$$ in a unique way (proof of Theorem 3.1): \begin{align} \begin{array}{r@{\,}c@{\,}l} Z(s)&=&Z^{\bot } (s)+Z^{\parallel } (s)\,\,\mbox{with} \\[4pt] Z^{\parallel } (s)&=&W(s)H_{0} ,\,\,H_{0} \in \mathcal{H} \end{array} \end{align} (1.10) Apparently: the pair $$\left( {W(s),\,Z^{\bot } (s)} \right)$$ is a complete system of independent $${\rm E}$$-invariants of $$F(s)\in {\it{\Sigma}}$$; $$\underline{F}(s)=\left( {Z^{\bot } (s)} \right)^{-1}W(s)$$ is an $${\rm E}$$-canonical form of $$F(s)\in {\it{\Sigma}}$$; and the gain $$H_{0}$$ parameterizes the orbit of $$\underline{F}(s)$$.
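Condition (1.9) is easy to test numerically: project onto $$\mbox{colspan}({\boldsymbol{W}})$$ and compare the orthogonal components. A small numpy sketch with randomly generated stand-in data (not taken from any particular system):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))          # constant matrix W, full column rank
P = W @ np.linalg.pinv(W)                # orthogonal projector onto colspan(W)

h = rng.standard_normal(2)               # a column h_k of the gain H
z = rng.standard_normal(5)
z_hat = z + W @ h                        # then z_hat - z lies in colspan(W)

# orthogonal components of z and z_hat
z_perp = z - P @ z
z_hat_perp = z_hat - P @ z_hat

# (1.9): z_hat - z in colspan(W)  <=>  equal orthogonal components
assert np.allclose(z_hat_perp, z_perp)

# and the gain column is recovered by least squares, as in (1.8)
h_rec = np.linalg.lstsq(W, z_hat - z, rcond=None)[0]
assert np.allclose(h_rec, h)
```

Since $${\boldsymbol{W}}$$ has full column rank here, the least-squares solution is the unique gain column.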
Roughly speaking, the entries of $$H_{0}$$ are the ‘coordinates’ of $$F(s)$$ in the orbit. If $$F(s)\,{\rm E}\,\overset{\frown}{F}(s)$$, then the gain achieving equivalence is $$H=H_{0} -\overset{\frown}{H}_{0}$$. As already mentioned, the exterior factorization is the advantage of our approach compared with the approach of Ravi et al. (2002). As there are infinitely many left coprime factorizations, those authors consider a kind of normal form (an autoregressive system). But the induced action of the SOF group on the set of autoregressive systems is not linear, and the difficulty of giving construction algorithms arises from this non-linearity. Similar problems appear if we decide to use other normal (or canonical) forms of a coprime factorization: the action of the SOF group on them is linear only in special cases, and besides, the ‘numerator’ is not always SOF-invariant. There is a finite number of exterior factorizations. For each one of them we can calculate an $$H_{0}$$ which verifies (1.10); consequently, there is a finite number of canonical forms. To conform to the requirement of uniqueness, we introduce a total order on the set of sets of canonical forms and select the first one. The set of canonical forms is naturally unique (without introducing a total order) in the case of square and single output systems, as for these cases the exterior factorization is unique. The complete system of independent invariants introduced is useful for the construction of a unique representative of the equivalence class. For the solution of control problems, other invariants, based on the consistency conditions of the Gauss elimination algorithm, or on the Grassmannians of the spaces $${\boldsymbol{z}}_{k}\oplus\,\mbox{colspan}\left( {{\boldsymbol{W}}} \right)$$, appear to be more useful. 1.4.
Organization of the article The article is organized in the following way. In the second section, we present the terminology used in this article, some well-known results in a new form using this terminology, and some new background results: the exterior factorization, the relation of the Plücker coordinates to the exterior powers of the transfer function matrix, and the properties of the closed loop parameters of the system. In the third section, the main results on equivalence, complete invariants and canonical forms are presented. The fourth section is devoted to the particularization of the results to scalar, single output, square and rectangular systems having full rank transfer function matrices. In the fifth section, we examine the relation of the presented complete $${\rm E}$$-invariants and canonical forms to the SOF-assignability problem. We characterize the quotient set $${\it{\Sigma}}/{\rm E}$$ of systems with $$(r,n,m)=(2,4,2)$$ as to their SOF assignability properties, and we describe by a symmetric multivariate polynomial the constrained dynamics of systems $$(2,n,2),\,n>4$$. 2. Preliminary results In this section, we present some well-known results, as well as some preliminary results, in a form suitable for the development of our work on SOF-invariants of $${\it{\Sigma}}$$. 2.1. Terminology Let $$S=\left\{ {s_{1} ,\,s_{2} ,\ldots ,s_{n} } \right\}$$ be a finite set and $$\prec$$ a total order on it: $$s_{1} \prec s_{2} \prec \cdots \prec s_{n}$$. For each $$n\in \mathbb{N}$$, let $${\bf n}$$ denote the ordered set $$\left\{ {1,2,\ldots ,n} \right\}$$. Let $$\vartheta_{\prec } (s_{k} )=k,\,\forall k\in {\bf n}$$, be the function assigning to each element of $$S$$ its ordinal number (position) with respect to the order $$\prec$$. Let $${\it{\mathsf P}}_{k} (S)$$ denote the set of the ordered subsets of $$S$$ having cardinality $$k$$.
To each $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}\in {\it{\mathsf P}}_{k} (S)$$, $$\vartheta_{\textit{lex}} (\alpha)$$ denotes the position of $$\alpha$$ in $${\it{\mathsf P}}_{k} (S)$$ with respect to the lexicographic order, and $$\vartheta_{\textit{rlex}} (\alpha)$$ denotes its position with respect to the reverse lexicographic order. To each $$\alpha \in {\it{\mathsf P}}_{k} ({\bf n})$$ we define its complement $$\bar{{\alpha }}$$ in $${\bf n}$$ to be $$\bar{{\alpha }}={\bf n}\backslash \alpha$$. Apparently, $$\bar{{\alpha }}\in {\it{\mathsf P}}_{n-k} ({\bf n})$$. Let $$\sigma_{\alpha } =\alpha_{1} +\alpha_{2} +\cdots +\alpha_{k}$$ be the sum of the elements of $$\alpha \in {\it{\mathsf P}}_{k} ({\bf n})$$, $$c_{k} (n)$$ the number of combinations of $$n$$ elements taken $$k$$ at a time without repetition, and $${\bf c}_{k} (n)$$ the ordered set $$\left\{ {1,2,\ldots ,c_{k} (n)} \right\}$$. Let now $$X$$ be an $$r\times m$$ matrix. For every $$\alpha \in {\it{\mathsf P}}_{k} ({\bf r}),\,\beta \in {\it{\mathsf P}}_{l} ({\bf m})$$, $$X\{\alpha ,\beta \}$$ denotes the $$k\times l$$ submatrix of $$X$$ in the intersection of the rows $$\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}=\alpha$$ and the columns $$\left\{ {\beta_{1} ,\beta_{2} ,\ldots ,\beta_{l} } \right\}=\beta$$. For any $$v\in {\it{\mathsf P}}_{p} ({\bf r})$$ and $$w\in {\it{\mathsf P}}_{q} ({\bf m})$$, $$v\_ w$$ denotes the partially ordered multiset of $$p+q$$ elements whose first $$p$$ elements are those of $$v$$ and whose last $$q$$ elements are those of $$w$$ (e.g. $$v=\left\{ {1,3,7} \right\},\,w=\left\{ {2,3,7,10} \right\}\Rightarrow v\_ w=\left\{ {1,3,7,2,3,7,10} \right\}$$). For each arbitrarily ordered set of natural numbers $$w$$, $$w^{>}$$ is the naturally ordered set and $$\mu (w)$$ is the number of transpositions one has to apply to $$w$$ in order to obtain $$w^{>}$$.
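The bookkeeping above ($$\vartheta_{\textit{lex}}$$, complements, $$\sigma_{\alpha}$$, $$c_{k}(n)$$) is straightforward to mechanize; a small Python sketch (the function names are ours, not the paper's):

```python
from itertools import combinations
from math import comb

def P_k(n, k):
    """P_k(n): the k-element subsets of {1,...,n}, in lexicographic order."""
    return [list(a) for a in combinations(range(1, n + 1), k)]

def theta_lex(alpha, n):
    """Position (1-based) of alpha in P_k(n) w.r.t. the lexicographic order."""
    return P_k(n, len(alpha)).index(sorted(alpha)) + 1

def complement(alpha, n):
    """The complement of alpha in {1,...,n}, naturally ordered."""
    return [i for i in range(1, n + 1) if i not in alpha]

assert len(P_k(4, 2)) == comb(4, 2)        # c_2(4) = 6
assert theta_lex([1, 2], 4) == 1           # first subset in lex order
assert theta_lex([3, 4], 4) == 6           # last subset in lex order
assert complement([1, 3], 4) == [2, 4]     # complement in {1,2,3,4}
assert sum([1, 3]) == 4                    # sigma_alpha for alpha = {1,3}
```

`itertools.combinations` already enumerates subsets lexicographically, which is exactly the order $$\vartheta_{\textit{lex}}$$ refers to.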
If $$\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{k} } \right\}=\alpha \in {\it{\mathsf P}}_{k} ({\bf m})$$, then $$\alpha +r,\,r\in \mathbb{N}$$, denotes the set $$\left\{ {\alpha_{1} +r,\alpha_{2} +r,\ldots ,\alpha_{k} +r} \right\}\in {\it{\mathsf P}}_{k} ({\bf m}+{\bf r})$$. Let $$X$$ be an $$n\times n$$ invertible matrix and $$Y$$ its inverse. The theorem for the minors of the inverse matrix (Gantmacher, 2000) is written, using the above terminology, as: \begin{align} Y=X^{-1}\Rightarrow \forall \alpha \wedge \forall \beta \in {\it{\mathsf P}}_{k} ({\bf n}),\,\,\left| {Y\{\alpha ,\beta \}} \right|=(-1)^{\sigma_{\alpha } +\sigma _{\beta } }\frac{\left| {X^{{\rm T}}\{\bar{{\alpha }},\bar{{\beta }}\}} \right|}{\left| X \right|} \end{align} (2.1) Definition 2.1 The compound of order $$k$$ of the $$r\times m$$ matrix $$X$$, denoted by $$\mathfrak{C}_{k} \left( X \right)$$, is: \begin{align} \forall k\leqslant \min (r,m),\,\,\mathfrak{C}_{k} \left( X \right)=\left[ {{\begin{array}{*{20}c} {{\bf x}_{11} } & {{\bf x}_{12} } & \cdots & {{\bf x}_{1,c_{k} (m)} } \\ {{\bf x}_{21} } & {{\bf x}_{22} } & \cdots & {{\bf x}_{2,c_{k} (m)} } \\ \vdots & \vdots & \ddots & \vdots \\ {{\bf x}_{c_{k} (r),1} } & {{\bf x}_{c_{k} (r),2} } & \cdots & {{\bf x}_{c_{k} (r),c_{k} (m)} } \\ \end{array} }} \right]{\begin{array}{*{20}c} {\mbox{with}\,\,{\bf x}_{\zeta \xi } =\left| {X\{\alpha ,\beta \}} \right|} \\[4pt] {\alpha =\vartheta_{\textit{lex}}^{-1} (\zeta )\in {\it{\mathsf P}}_{k} ({\bf r})} \\[4pt] {\beta =\vartheta_{\textit{lex}}^{-1} (\xi )\in {\it{\mathsf P}}_{k} ({\bf m})} \end{array}} \end{align} (2.2) We define the supplementary compound matrix of order $$k$$ of the matrix $$X$$, $$\mathfrak{C}^{k}\left( X \right)$$, in a way different from the literature, so that it applies also to rectangular matrices. The extension is crucial for our work, as we deal with square matrices that are products of rectangular matrices, $$X=YZ$$.
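Identity (2.1) can be checked directly; a sketch with sympy on a sample invertible 3×3 matrix (the index sets and the helper `minor` are ours, 1-based as in the text):

```python
import sympy as sp

X = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 4]])             # any invertible matrix will do
Y = X.inv()
n = 3
alpha, beta = [1, 2], [1, 3]           # a pair of k-subsets of {1,...,n}
alpha_bar = [i for i in range(1, n + 1) if i not in alpha]
beta_bar = [i for i in range(1, n + 1) if i not in beta]

def minor(M, rows, cols):
    """|M{rows, cols}|: determinant of the submatrix (1-based index sets)."""
    return M.extract([r - 1 for r in rows], [c - 1 for c in cols]).det()

sign = (-1) ** (sum(alpha) + sum(beta))     # (-1)^(sigma_alpha + sigma_beta)
lhs = minor(Y, alpha, beta)
rhs = sign * minor(X.T, alpha_bar, beta_bar) / X.det()
assert sp.simplify(lhs - rhs) == 0          # identity (2.1)
```

Looping the assertion over all pairs $$\alpha ,\beta \in {\it{\mathsf P}}_{k}({\bf n})$$ verifies the full statement for this matrix.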
The supplementary compound of the product is meaningful, $$\mathfrak{C}^{k}\left( X \right)=\mathfrak{C}^{k}\left( {YZ} \right)$$, but the supplementary compounds of the matrices $$Y,\,Z$$, namely $$\mathfrak{C}^{k}\left( Y \right),\,\mathfrak{C}^{k}\left( Z \right)$$, as well as their product $$\mathfrak{C}^{k}\left( Y \right)\mathfrak{C}^{k}\left( Z \right)$$, are meaningless. For square matrices $$X,Y,Z$$ with $$X=YZ$$ we have $$\mbox{Adjoint}(X)=\mbox{Adjoint}(Z)\mbox{Adjoint}(Y)$$. For rectangular matrices $$Y,Z$$ with $$X=YZ$$ a square matrix, $$\mbox{Adjoint}(X)$$ is meaningful while $$\mbox{Adjoint}(Z)$$ and $$\mbox{Adjoint}(Y)$$ are meaningless. This fact poses serious limits on the use of supplementary compound matrices, and we have to improve the definition. Definition 2.2 The supplementary compound of order $$k$$ of the $$r\times m$$ matrix $$X$$, denoted by $$\mathfrak{C}^{k}\left( X \right)$$, is: \begin{align} \forall k\leqslant \min (r,m),\,\,\mathfrak{C}^{k}\left( X \right)=\left[ {{\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1,c_{k} (m)} } \\ {x_{21} } & {x_{22} } & \cdots & {x_{2,c_{k} (m)} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{c_{k} (r),1} } & {x_{c_{k} (r),2} } & \cdots & {x_{c_{k} (r),c_{k} (m)} } \\ \end{array} }} \right]{\begin{array}{*{20}c} {\begin{array}{l} \mbox{with } \\[4pt] x_{\zeta \xi } =(-1)^{\sigma_{\alpha } +\sigma_{\beta } }\left| {X\left\{ {\alpha ,\beta } \right\}} \right| \\[4pt] \end{array}} \\ {\alpha =\vartheta_{\textit{rlex}}^{-1} (\zeta )\in {\it{\mathsf P}}_{k} ({\bf r})} \\[4pt] {\beta =\vartheta_{\textit{rlex}}^{-1} (\xi )\in {\it{\mathsf P}}_{k} ({\bf m})} \end{array} } \end{align} (2.3) The definition of the compound matrices coincides with the definitions in the literature. The definition of the supplementary compound is different, in order to apply also to rectangular matrices. Restricted to square matrices, it coincides with the definition of Wedderburn (1934).
The supplementary compound of order $$k$$ (adjugate compound of order $$k$$) as defined by Prells et al. (2003) is the transpose of the supplementary compound of order $$r-k$$ of this article, with $$r$$ the size of the (square) matrix. The supplementary compound matrix can be written as a product of five matrices: \begin{align} \mathfrak{C}^{k}\left( X \right)=R_{c_{k} (r)} S_{c_{k} (r)} \mathfrak{C}_{k} \left( X \right)S_{c_{k} (m)} R_{c_{k} (m)} \end{align} (2.4) $$S_{c_{k} (r)}$$ ($$S_{c_{k} (m)}$$) is a diagonal matrix with its $$\zeta$$th ($$\xi$$th) element equal to $$(-1)^{\sigma_{\vartheta_{\textit{lex}}^{-1} (\zeta )} }$$ ($$(-1)^{\sigma _{\vartheta_{\textit{lex}}^{-1} (\xi )} }$$). $$R_{c_{k} (r)}$$ ($$R_{c_{k} (m)}$$) is a matrix with ones on the antidiagonal and zeros elsewhere. The matrices ‘$$S$$’ serve to attribute the sign $$(-1)^{\sigma_{\vartheta_{\textit{lex}}^{-1} (\zeta )} +\sigma _{\vartheta_{\textit{lex}}^{-1} (\xi )} }$$ to the entry with coordinates $$\zeta ,\xi$$ of the compound matrix $$\mathfrak{C}_{k} \left( X \right)$$, and the matrices ‘$$R$$’ serve to reverse the ordering of rows and columns.
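Definition 2.1 and the factorization (2.4) are mechanical enough to code directly; a Python sketch (function names ours), checked against the 3×4 matrix of Example 2.2 below:

```python
from itertools import combinations
from math import comb
import numpy as np

def compound(X, k):
    """C_k(X), Definition 2.1: the k-by-k minors of X in lexicographic order."""
    r, m = X.shape
    return np.array([[round(np.linalg.det(X[np.ix_(a, b)]))
                      for b in combinations(range(m), k)]
                     for a in combinations(range(r), k)])

def supp_compound(X, k):
    """C^k(X) via the five-factor factorization (2.4)."""
    def S(n):  # diagonal of signs (-1)^{sigma_alpha}, alpha in lex order (1-based)
        return np.diag([(-1) ** sum(a)
                        for a in combinations(range(1, n + 1), k)])
    def R(n):  # ones on the antidiagonal: reverses the lexicographic ordering
        return np.fliplr(np.eye(comb(n, k), dtype=int))
    r, m = X.shape
    return R(r) @ S(r) @ compound(X, k) @ S(m) @ R(m)

X = np.array([[1, 2, 3, 4],
              [1, 0, 0, 1],
              [2, 1, 0, 3]])
print(supp_compound(X, 2))   # reproduces the matrix C^2(X) of Example 2.2
```

The minors are computed by `np.linalg.det` and rounded back to integers, which is adequate for small integer matrices like this one.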
Example 2.1 \begin{align*} X&=\left[ {{\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {b_{1} } & {b_{2} } & {b_{3} } \\ \end{array} }} \right]\Rightarrow\mathfrak{C}^{1}(X)\\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 \\ 0 & {(-1)^{2}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {b_{1} } & {b_{2} } & {b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 & 0 \\ 0 & {(-1)^{2}} & 0 \\ 0 & 0 & {(-1)^{3}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right] \\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {(-1)^{1}} & 0 \\ 0 & {(-1)^{2}} \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {-a_{1} } & {a_{2} } & {-a_{3} } \\ {-b_{1} } & {b_{2} } & {-b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &=\left[ {{\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} {a_{1} } & {-a_{2} } & {a_{3} } \\ {-b_{1} } & {b_{2} } & {-b_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &= \left[ {{\begin{array}{*{20}c} {-b_{1} } & {b_{2} } & {-b_{3} } \\ {a_{1} } & {-a_{2} } & {a_{3} } \\ \end{array} }} \right]\left[ {{\begin{array}{*{20}c} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} }} \right]\\ &=\left[ {{\begin{array}{*{20}c} {-b_{3} } & {b_{2} } & {-b_{1} } \\ {a_{3} } & {-a_{2} } & {a_{1} } \\ \end{array} }} \right] \end{align*} Example 2.2 The supplementary compound of order 2 of a 3 by 4 matrix \begin{align*} \begin{array}{*{20}c} X=\left[\!\! 
{\begin{array}{*{20}c} 1 & 2 & 3 & 4 \\[4pt] 1 & 0 & 0 & 1 \\ 2 & 1 & 0 & 3 \end{array}} \!\!\!\!\!\!\right]\Rightarrow \mathfrak{C}_{2} \left( X \right)=\left[\!\! {\begin{array}{*{20}c} -2 & -3 & -3 & 0 & 2 & 3 \\ -3 & -6 & -5 & -3 & 2 & 9 \\ 1 & 0 & 1 & 0 & -1 & 0 \end{array}} \!\!\right],\,\,S_{c_{2} (3)} =\left[\!\! {\begin{array}{*{20}c} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}} \!\!\right] \end{array} \end{align*} \begin{align*} \begin{array}{*{20}c}S_{c_{2} (4)} =\left[ {\begin{array}{*{20}c} -1& 0& 0& 0& 0& 0 \\ 0& 1& 0& 0& 0& 0 \\ 0& 0&-1& 0& 0& 0 \\ 0& 0& 0&-1& 0& 0 \\ 0& 0& 0& 0& 1& 0 \\ 0& 0& 0& 0& 0& -1 \end{array}} \right]\Rightarrow \mathfrak{C}^{2}\left( X \right)=\left[ {\begin{array}{*{20}c} 0 & 1& 0& 1& 0 & 1 \\ -9& 2& 3& 5& -6& 3 \\ 3& -2& 0& -3& 3& -2 \end{array}} \right]\!. \end{array} \end{align*} A formula analogous to (2.4) is derived in Prells et al. (2003) (for square matrices) using exterior algebra. For the very important properties of compound and supplementary compound matrices, see Prells et al. (2003), Wedderburn (1934) and the references therein. The basic properties of the supplementary compound of square matrices are preserved for the supplementary compound of rectangular matrices. We present some of the properties of the supplementary compound matrices used in this article. \begin{align} \begin{array}{l} \mathfrak{C}^{1}(I_{k} )=I_{k} ,\,\,\,\,\mathfrak{C}^{1}\left( {\mathfrak{C}^{1}(X)} \right)=(-1)^{r+m}X\,\, \\[4pt] X=Y+Z\Rightarrow \mathfrak{C}^{1}(X)=\mathfrak{C}^{1}(Y)+\mathfrak{C}^{1}(Z) \\[4pt] X=Y\cdot Z\Rightarrow \mathfrak{C}^{k}\left( X \right)=\mathfrak{C}^{k}\left( Y \right)\mathfrak{C}^{k}\left( Z \right) \\[4pt] \mbox{if }r=m,\,\,\,\,\,\,\,\mbox{Adjoint}(X)=\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {X^{{\rm T}}} \right)} \right)=\mathfrak{C}^{r-1}\left( {\mathfrak{C}_{1} \left( {X^{{\rm T}}} \right)} \right)=\mathfrak{C}^{r-1}\left( {X^{{\rm T}}} \right)\!. 
\end{array} \end{align} (2.5) Their proof is easily obtained using (2.4). The theorem for the minors of the inverse matrix of Gantmacher (2000) is written using compound and supplementary compound matrices as: Proposition 2.1 (Theorem for the minors of the inverse matrix) \[ \mathfrak{C}_{k} \left( {X^{-1}} \right)=\mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)\left| X \right|^{-1}. \] Proof. We use the Laplace expansion of the determinant (Wedderburn, 1934): \begin{align*} \begin{array}{l} \mathfrak{C}_{k} \left( X \right)\mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=I_{c(r,k)} \left| X \right|\Rightarrow \\[4pt] \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=\left( {\mathfrak{C}_{k} \left( X \right)} \right)^{-1}\left| X \right|\Leftrightarrow \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)=\mathfrak{C}_{k} \left( {X^{-1}} \right)\left| X \right|\Leftrightarrow \mathfrak{C}^{r-k}\left( {X^{{\rm T}}} \right)\left| X \right|^{-1}=\mathfrak{C}_{k} \left( {X^{-1}} \right)\!. \end{array} \\[-3.5pc] \end{align*} □ 2.2. Exterior factorization of a matrix The Laplace expansion theorem for the determinant of a square matrix is written as \begin{align} \left| X \right|I_{c(r,1)} =\mathfrak{C}^{r-1}(X^{T})X=\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)\!. \end{align} (2.6) For a full rank rectangular matrix X, we consider the matrix \[ {\it{\Omega}} =\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)\!. \] We shall see that the matrix $${\it{\Omega}}$$ is a well-defined function of $$\mathfrak{C}_{r} (X)$$. Theorem 2.1 For any full rank $$r\times m,r<m$$ matrix $$X$$, the matrix $${\it{\Omega}} =\mathfrak{C}^{1}(X^{T})\mathfrak{C}_{r-1} \left( X \right)$$ is a function of $$\mathfrak{C}_{r} (X)$$. Proof. \begin{align*} X=\left[\!\! 
{{\begin{array}{*{20}l} {x_{11} } & \cdots & {x_{1m} } \\ \vdots & \ddots & \vdots \\ {x_{r1} } & \cdots & {x_{rm} } \\ \end{array} }} \!\!\right]\Rightarrow \mathfrak{C}^{1}\left( {X^{T}} \right)=\left[\!\!{{\begin{array}{*{20}c} {(-1)^{r+m}x_{rm} } & \cdots & {(-1)^{m+1}x_{1m} } \\ \vdots & \ddots & \vdots \\ {(-1)^{m+r-\zeta +1}x_{r(m-\zeta +1)} } & \cdots & {(-1)^{m-\zeta +1}x_{1(m-\zeta +1)} } \\ \vdots & \ddots & \vdots \\ {(-1)^{r+1}x_{r1} } & \cdots & {x_{11} } \\ \end{array} }} \!\!\right]\!. \end{align*} Consider the matrix $${\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)$$. Its entry with coordinates (1, 1) is the inner product of the first row of $$\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)$$ and the first column of $$\mathfrak{C}_{r-1} \,\left( X \right)$$. The above inner product is, up to a sign, the Laplace expansion (by its last column) of the determinant of the virtual matrix constituted by the columns $$(1,2,\ldots ,r-1)=\vartheta_{\textit{lex}}^{-1} (1)\in P_{r-1} ({\bf m})\mbox{ and }m=\vartheta _{\textit{rlex}}^{-1} (1)\in P_{1} ({\bf m})$$ of $$X$$. $\mbox{This virtual matrix is: } \left[ {{\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1(r-1)} } & {x_{1m} } \\ {x_{21} } & {x_{22} } & \cdots & {x_{2(r-1)} } & {x_{2m} } \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ {x_{(r-1)1} } & {x_{(r-1)2} } & \cdots & {x_{(r-1)(r-1)} } & {x_{(r-1)m} } \\ {x_{r1} } & {x_{r2} } & \cdots & {x_{r(r-1)} } & {x_{rm} } \end{array} }} \right]$ The sign is $$(-1)^{r+m}$$, the sign of $$x_{rm}$$ in $$\mathfrak{C}^{1}\left( {X^{T}} \right)$$, because in the Laplace expansion of the above virtual matrix by its last column, $$x_{rm}$$ carries the sign $$(-1)^{r+r}$$. 
As $$\vartheta_{\textit{lex}} \left( {(1,2,\cdots ,r-1, m)} \right)= m - r+1\mbox{ in }P_{r} ({\bf m})$$ we have: \begin{align} {\it{\Omega}} \{1,1\}=(-1)^{r+{m}}\mathfrak{C}_{r} \left( X \right)\{{m}-r+1\}. \end{align} (2.7) We proceed in a similar way for the entry with coordinates ($$\zeta$$, $$\xi )$$ of the matrix $${\it{\Omega}}$$. $${\it{\Omega}} \left\{ {\zeta ,\xi } \right\}$$ is the determinant of the virtual matrix constituted by the columns $$\left( {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda _{r-1} } \right)=\vartheta_{\textit{lex}}^{-1} (\xi )\in P_{r-1} ({\bf m})$$ and $$m-\zeta +1=\vartheta_{\textit{rlex}}^{-1} (\zeta )\in P_{1} ({\bf m})$$ of $$X$$. Remark that: (i) This determinant vanishes if $$m-\zeta +1\in \left\{ {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda_{r-1} } \right\}$$, because the virtual matrix has two identical columns. (ii) It is equal up to a sign to $$\mathfrak{C}_{r} \left( X \right)\{\kappa \}$$ if $$m-\zeta +1\notin \left\{ {\lambda_{1} ,\lambda_{2} ,\ldots ,\lambda_{r-1} } \right\}$$, with $$\kappa =\vartheta_{\textit{lex}} \left( {\left( {\lambda_{1} ,\lambda_{2} ,\ldots ,\underline{\lambda_{\mu -1} },m-\zeta +1,\underline{\lambda_{\mu +1} },\ldots ,\lambda_{r-1} } \right)} \right)$$ in $$P_{r} ({\bf m})$$. (iii) The sign equals $$(-1)^{m+r-\mu -\zeta +1}$$, with $$\mu$$ the position of $$m-\zeta +1$$ in $$\left\{ {\lambda_{1} ,\lambda_{2} ,\cdots ,\underline{\lambda_{\mu -1} },m-\zeta +1,\underline{\lambda_{\mu +1} },\cdots ,\lambda_{r-1} } \right\}$$. The term $$\zeta +1$$ is due to the sign of the first element of the $$\zeta \mbox{th}$$ row of $$\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)$$, and the term $$r-\mu$$ to the number of permutations necessary to rearrange the columns of the virtual matrix in increasing order. 
The entries of the matrix $${\it{\Omega}}$$ are: \begin{align} &\begin{array}{ll} {\it{\Omega}} \left\{ {\zeta ,\xi } \right\}=\left\{ \begin{array}{@{}ll} 0 &\mbox{if}\,\vartheta _{\textit{rlex}}^{-1} (\zeta )\in \vartheta_{\textit{lex}}^{-1} (\xi )\, \\[4pt] (-1)^{m+r-\mu -\zeta +1}\mathfrak{C}_{r} \left( X \right)\{\kappa \}&\mbox{if}\,\vartheta_{\textit{rlex}}^{-1} (\zeta) \notin \vartheta_{\textit{lex}}^{-1} (\xi)\, \\[4pt] \end{array} \right.\,\,\,\kappa =\vartheta_{\textit{lex}} \left( {\left( {\vartheta_{\textit{lex}}^{-1} (\xi )\_ \vartheta _{\textit{rlex}}^{-1} (\zeta )} \right)^{>}} \right)\!. \\\\[-6pt] \qquad r-\mu \mbox{ is the number of elements of }\vartheta _{\textit{lex}}^{-1} (\xi )\mbox{ greater than }\vartheta_{\textit{rlex}}^{-1} (\zeta ) \end{array}\\ \end{align} (2.8) \begin{align} &\mbox{Of course } {\it{\Omega}} \mbox{ is also given as } {\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right) \end{align} (2.9) Example 2.3 The entries of the matrix {\it{\Omega}} for a 2\times 3 matrix X are as follows: \begin{align} {\begin{array}{*{20}c} {\left( {\zeta ,\xi } \right)} & {\vartheta_{\textit{lex}}^{-1} (\xi )} & {\vartheta_{\textit{rlex}}^{-1} (\zeta )} & \mu & \kappa & {{\it{\Omega}} \left\{ {\zeta ,\xi } \right\}} \\ {\left( {1,1} \right)} & 1 & 3 & 0 & 2 & {-\mathfrak{C}_{2} (X)\{2\}} \\ {\left( {1,2} \right)} & 2 & 3 & 0 & 3 & {-\mathfrak{C}_{2} (X)\{3\}} \\ {\left( {1,3} \right)} & 3 & 3 & \times & \times & 0 \\ {\left( {2,1} \right)} & 1 & 2 & 0 & 1 & {\mathfrak{C}_{2} (X)\{1\}} \\ {\left( {2,2} \right)} & 2 & 2 & \times & \times & 0 \\ {\left( {2,3} \right)} & 3 & 2 & 1 & 3 & {-\mathfrak{C}_{2} (X)\{3\}} \\ {\left( {3,1} \right)} & 1 & 1 & \times & \times & 0 \\ {\left( {3,2} \right)} & 2 & 1 & 1 & 2 & {\mathfrak{C}_{2} (X)\{1\}} \\ {\left( {3,3} \right)} & 3 & 1 & 1 & 1 & {\mathfrak{C}_{2} (X)\{2\}} \end{array} } \end{align} (2.10) Theorem 2.1 is an extension to non-square matrices of the 
Laplace expansion of the determinant. Restricted to square matrices, it just means that a matrix equals the inverse of its inverse. The matrix X is supposed to have rank r, so \mathfrak{C}_{r} (X) is not identically zero and \mathfrak{C}_{r-1} \left( X \right) has rank r. Consequently there is a subset of its columns \alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)\in P_{r} ({\bf c}_{r-1} (m)) with \left| {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0. Choosing among the equations (2.9) those of the columns \alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} \mbox{ we obtain } {\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}=\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\} The above equation leads to Theorem 2.2 For any full rank r\times m,r<m matrix X, there are two matrices Z,W, with Z a function of \mathfrak{C}_{r-1} \left( X \right), W a function of \mathfrak{C}_{r} \left( X \right), and X=Z^{-1}W. Proof. \[ {\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}=\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\} \] We take the supplementary compound of order one of both sides $\mathfrak{C}^{1}\left( {{\it{\Omega}} \left\{ {{\bf m},\alpha } \right\}} \right)=(-1)^{r+m}X^{{\rm T}}\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)$ Thanks to the choice of $$\alpha$$ we can solve for $$X$$. 
\begin{align} \mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right)&=(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}}X \notag\\ X&=(-1)^{r+m}\left( {\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}}} \right)^{-1}\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right) \end{align} (2.11) Putting $$Z=(-1)^{r+m}\left( {\mathfrak{C}^{1}\left({\mathfrak{C}_{r-1} \left( X \right)\left\{ {{\bf r},\alpha }\right\}} \right)} \right)^{{\rm T}},W=\mathfrak{C}^{1}\left({{\it{\Omega}}^{{\rm T}}\left\{ {{\bf m},\alpha } \right\}} \right)$$ we can write $$X=Z^{-1}W$$. □ The ‘exterior’ factorization of the matrix $$X$$ will play a central role in this article. The matrices $$Z,W$$ depend on the choice of $$\alpha$$. As the number of possible $$\alpha$$ is at most $$c_{r} (c_{r-1} (m))$$, we have at most $$c_{r} (c_{r-1} (m))$$ exterior factorizations of an $$r\times m$$ matrix. For square matrices the exterior factorization is unique, as $$c_{r} (c_{r-1} (r))=1$$. Remark that the choice of $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots,\alpha_{r} } \right\}$$ amounts to a multiplication of both sides of (2.9) by a matrix $$V_{\alpha }$$ whose $$k\mbox{th}$$ column has zero entries except the $$\alpha_{k}\mbox{th}$$, which is equal to one. Formula (2.11) holds true if instead of $$V_{\alpha }$$ we use any matrix $$V$$ with $$\left| {\mathfrak{C}_{r-1} (X)V} \right|\ne 0$$. The significant difference between the two is that in the first we use the minimal number of the entries of the matrices $$\mathfrak{C}_{r-1} (X),\,\,{\it{\Omega}}$$, while in the second we use linear combinations of a potentially bigger number of entries and thus redundant data. 
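Theorem 2.2 and formula (2.11) can be exercised numerically. The sketch below is ours (helper names are not notation from the text; exact arithmetic via `fractions.Fraction`): for a full rank 2×3 integer matrix it forms Ω, runs over all admissible α and verifies X = Z⁻¹W for each resulting exterior factorization.

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    if len(M) == 0:
        return 1
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(X, rows, cols):
    return det([[X[i - 1][j - 1] for j in cols] for i in rows])

def lex(n, k):
    return list(combinations(range(1, n + 1), k))

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def compound(X, k):
    r, m = len(X), len(X[0])
    return [[minor(X, a, b) for b in lex(m, k)] for a in lex(r, k)]

def supp_compound(X, k):
    r, m = len(X), len(X[0])
    return [[(-1) ** (sum(a) + sum(b)) * minor(X, a, b)
             for b in reversed(lex(m, k))] for a in reversed(lex(r, k))]

def inverse(A):
    # exact inverse via the adjugate
    n, d = len(A), Fraction(det(A))
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
            for k, row in enumerate(A) if k != i]) for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

X = [[1, 2, 3], [4, 5, 6]]           # full rank, r < m
r, m = 2, 3
C_prev = compound(X, r - 1)          # C_{r-1}(X)
Omega = matmul(supp_compound(transpose(X), 1), C_prev)   # m x c_{r-1}(m)
sign = (-1) ** (r + m)

count = 0
for alpha in combinations(range(1, len(C_prev[0]) + 1), r):
    sub = [[C_prev[i][a - 1] for a in alpha] for i in range(r)]   # C_{r-1}(X){r, alpha}
    if det(sub) == 0:
        continue
    # Z and W per (2.11); the selection picks the alpha columns of Omega
    Z = [[sign * z for z in row] for row in transpose(supp_compound(sub, 1))]
    W = supp_compound(transpose([[Omega[i][a - 1] for a in alpha] for i in range(m)]), 1)
    assert matmul(inverse(Z), W) == X          # X = Z^{-1} W
    count += 1

print(count, "exterior factorizations verified")   # at most c_r(c_{r-1}(m))
```

For this X all three α are admissible, so the loop confirms the bound c_r(c_{r-1}(m)) = c_2(3) = 3 stated in the text.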
Furthermore, the $$\alpha =\left\{ {\alpha_{1} ,\alpha_{2} ,\ldots,\alpha_{r} } \right\}$$ are ordered lexicographically, and we can use the first (that of minimal $$\vartheta_{\textit{lex}}(\alpha )$$) among those satisfying $$\left| {\mathfrak{C}_{r-1} (X)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ to obtain in a unique way exactly one exterior factorization of the matrix $$X$$ involving a minimal number of parameters. 2.3. Plücker coordinates and Plücker powers Definition 2.3 Let $$X\in \mathbb{K}^{p\times q},p<q$$ be a $$p\times q$$ matrix over $$\mathbb{K}$$ and $$\mathcal{X}=\mbox{rowspan}(X)$$ the $$p$$-dimensional subspace of $$\mathbb{K}^{q}$$ with basis the rows of $$X$$. The elements of the set of all $$c_{p} (q)$$, $$p\times p$$ minors of $$X$$ are then called the Grassmann coordinates of $$\mathcal{X}$$. Definition 2.4 (Karcanias & Giannakopoulos, 1984) The Grassmann coordinates of the space $$\mbox{rowspan}\left( {\left[ {D(s)\,\,N(s)} \right]} \right)$$ with $$F(s)=D^{-1}(s)\,N(s)$$ a left coprime factorization or the space $\mbox{colspan}\left( {\left[ {\begin{array}{l} D(s)\, \\ N(s) \\ \end{array}} \right]} \right)$ with $$F(s)=\,N(s)D^{-1}(s)$$ a right coprime factorization are called the Plücker coordinates of the transfer function matrix $$F(s)$$. Let $$\boldsymbol{\mathfrak{P}}=\left[ {\boldsymbol{\mathfrak{p}}_{1},\boldsymbol{\mathfrak{p}}_{2} ,\ldots,\boldsymbol{\mathfrak{p}}_{c_{r} (m+r)} }\right]=\boldsymbol{\mathfrak{C}}_{r} \left( {\left[ {D(s)\,\,N(s)}\right]} \right)$$ be the vector with entries the Plücker coordinates of the transfer function matrix $$F(s)$$ and $$\boldsymbol{\mathfrak{G}}=\left[ {\boldsymbol{\mathfrak{g}}_{1},\boldsymbol{\mathfrak{g}}_{2} ,\ldots,\boldsymbol{\mathfrak{g}}_{c_{r} (m+r)} }\right]=\boldsymbol{\mathfrak{C}}_{r} \left( {\left[ {I_{r} \,F(s)}\right]} \right)$$ the vector with entries the Grassmann coordinates of $$\mbox{rowspan}([I_{r} \,\,F(s)])$$. 
\begin{align} \mbox{Remark that} \ \mathfrak{C}_{r} \left( {\left[ {D(s)\,\,N(s)} \right]} \right)=\mathfrak{C}_{r} \left( {\left[ {I_{r} \,\,D^{-1}(s)N(s)} \right]} \right)/\left| {D(s)} \right| \mbox{ and thus } \mathfrak{P}=\mathfrak{G}\left| {D(s)} \right| \end{align} (2.12) To each transfer function matrix $$F(s)\in {\it{\Sigma}}$$ we define the sequence: \begin{align} \begin{array}{l} Z_{0} (s)\,\,\mbox{is the least common multiple of the denominators of the entries of }F(s) \\ Z_{k} (s)=\mathfrak{C}_{k} \left( {F(s)} \right) Z_{0} (s)\,\,k=1,\ldots ,r \end{array} \end{align} (2.13) Proposition 2.2 The entities (scalars, vectors or matrices) $$Z_{0} (s),Z_{1} (s),\ldots ,Z_{r} (s)\,$$ of (2.13) are polynomial. Proof. We consider a left coprime factorization $$\left( {D(s),\,\,N(s)} \right)$$ of the transfer function matrix $$F(s)$$. Then $$F(s)=D^{-1}(s)N(s)$$ and we apply the theorem for the minors of the inverse matrix. \begin{align*} \begin{array}{l} \mathfrak{C}_{k} \left( {F(s)} \right)=\mathfrak{C}_{k} \left( {D^{-1}(s)N(s)} \right)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right)\left| {D(s)} \right|^{-1}\Rightarrow \\[-6pt] \Rightarrow \mathfrak{C}_{k} \left( {F(s)} \right)Z_{0} (s)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right)\Rightarrow Z_{k} (s)=\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)} \right)\,\,\mbox{i.e. polynomial} \end{array} \\[-3.5pc] \end{align*} □ Definition 2.5 The entity Z_{k} (s) will be called the kth Plücker power of the transfer function matrix F(s). We will now establish a bijection between the set of the Plücker coordinates and the set of the entries of the Plücker powers of a transfer function matrix. 
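The identity used in the proof of Proposition 2.2, namely Proposition 2.1 applied to the minors of D⁻¹N, can be checked numerically with constant matrices standing in for D(s), N(s) at a frozen value of s. A minimal Python sketch (the helper names and the test matrices are ours):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    if len(M) == 0:
        return 1
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(X, rows, cols):
    return det([[X[i - 1][j - 1] for j in cols] for i in rows])

def lex(n, k):
    return list(combinations(range(1, n + 1), k))

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def compound(X, k):
    r, m = len(X), len(X[0])
    return [[minor(X, a, b) for b in lex(m, k)] for a in lex(r, k)]

def supp_compound(X, k):
    r, m = len(X), len(X[0])
    return [[(-1) ** (sum(a) + sum(b)) * minor(X, a, b)
             for b in reversed(lex(m, k))] for a in reversed(lex(r, k))]

def inverse(A):
    n, d = len(A), Fraction(det(A))
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
            for k, row in enumerate(A) if k != i]) for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

r = 2
D = [[2, 1], [1, 1]]                 # invertible "denominator" stand-in, |D| = 1
N = [[1, 2, 3], [0, 1, 4]]           # "numerator" stand-in
F = matmul(inverse(D), N)            # stand-in for F = D^{-1} N at a frozen s

for k in range(1, r + 1):
    lhs = [[x * det(D) for x in row] for row in compound(F, k)]   # C_k(F) |D|
    rhs = matmul(supp_compound(transpose(D), r - k), compound(N, k))
    assert lhs == rhs                # hence Z_k = C^{r-k}(D^T) C_k(N) is polynomial
print("ok")
```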
For this purpose, to each $$\lambda \in \{1,2,\ldots ,c_{r} (m+r)\}$$, consider $$\alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)=\vartheta_{\textit{lex}}^{-1} (\lambda )\in P_{r} ({\bf m}+{\bf r})$$, $$\beta$$ the subset of $$\alpha$$ with elements at most equal to $$r$$ and $$\gamma$$ the subset of elements greater than $$r$$. Let $$v$$ be the cardinal number of $$\gamma$$ and $$\delta =\gamma -r\in P_{v} ({\bf m})$$. Let also $$\zeta =\vartheta_{\textit{lex}} \mbox{(}\bar{{\beta }}),\,\,\xi =\vartheta_{\textit{lex}} (\delta )$$. Proposition 2.3 \begin{align} \mathfrak{g}_{\lambda } =\mathfrak{s}_{\lambda } \left| {F(s)\left\{ {\bar{{\beta }},\delta } \right\}} \right|=\mathfrak{s}_{\lambda } \mathfrak{C}_{v} \left( {F(s)} \right)\left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.14) Proof. The determinant of the matrix constituted by the $$\alpha$$ columns of $$[I_{r} \,\,F(s)]$$ equals, up to a sign, the determinant of the matrix resulting from the replacement of the $$\bar{{\beta }}$$ columns of $$I_{r}$$ by the $$\delta$$ columns of $$F(s)$$. Using the Laplace expansion, we find that this determinant equals the determinant of the matrix constituted by the $$\bar{{\beta }}$$ rows and $$\delta$$ columns of $$F(s)$$. The sign equals $$(-1)^{\mu (\beta \_ \delta )}$$, with $$\mu (\beta \_ \delta )$$ the number of permutations needed to rearrange $$(\beta \_ \delta )$$ in increasing order. □ A direct consequence of the above proposition is that: \begin{align} \mathfrak{p}_{\lambda } =\mathfrak{s}_{\lambda } Z_{v} (s)\left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.15) Equation (2.15) establishes a one-to-one correspondence between the Plücker coordinates and the entries of the Plücker powers of transfer function matrices. Of course, we suppose that $$\mathfrak{C}_{0} \left( {F(s)} \right)=1$$. The difference between Plücker coordinates and Plücker powers is that the first are viewed as scalars and the second as matrices. 
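Proposition 2.3 can be checked numerically with a constant stand-in for F(s). In the sketch below (ours, not from the text) the sign 𝔰_λ is written in the explicit form (-1)^{Σᵢ(βᵢ+i)}, which is how the permutation count of the proof comes out when the Laplace expansion is carried through column by column; this explicit form is our rendering.

```python
from itertools import combinations

def det(M):
    if len(M) == 0:
        return 1
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(X, rows, cols):
    return det([[X[i - 1][j - 1] for j in cols] for i in rows])

r, m = 2, 3
F = [[1, 2, 3], [4, 5, 6]]                       # constant stand-in for F(s)
A = [[1 if i == j else 0 for j in range(r)] + F[i] for i in range(r)]   # [I_r  F]

checked = 0
for alpha in combinations(range(1, m + r + 1), r):
    g = minor(A, tuple(range(1, r + 1)), alpha)  # Grassmann coordinate g_lambda
    beta = [a for a in alpha if a <= r]          # columns taken from I_r
    delta = [a - r for a in alpha if a > r]      # columns taken from F
    beta_bar = [i for i in range(1, r + 1) if i not in beta]
    s = (-1) ** (sum(beta) + len(beta) * (len(beta) + 1) // 2)   # explicit sign
    assert g == s * minor(F, tuple(beta_bar), tuple(delta))
    checked += 1

print(checked, "coordinates checked")            # all c_r(m+r) of them
```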
Multiplying both sides of (2.14) by Z_{0} (s) we get: \begin{align} \mathfrak{p}_{\lambda } =\mathfrak{g}_{\lambda } Z_{0} (s)=\mathfrak{s}_{\lambda } \left| {F(s)\left\{ {\bar{{\beta }},\delta } \right\}} \right|Z_{0} (s)=\mathfrak{s}_{\lambda } Z_{v} \left\{ {\zeta ,\xi } \right\},\mathfrak{s}_{\lambda } =(-1)^{\mu (\beta \_ \delta )} \end{align} (2.16) The set of Plücker coordinates gives rise to the Plücker matrix \mathcal{P} of the system. Definition 2.6 (Karcanias & Giannakopoulos, 1984) The matrix \mathcal{P} =\left[ {{\it{p}}_{1} , {{\it{p}}}_{2} ,\ldots ,{{\it{p}}}_{c_{r} (m+r)} } \right]\mbox{ with }{{\it{p}}}_{\lambda } the vector with entries the coefficients of \mathfrak{p}_{\lambda } (each row of {\mathcal{P}} contains coefficients of the same degree) is called the Plücker matrix of the system F(s)\in {\it{\Sigma}}. The Laplace expansion of the determinant, as well as the exterior factorization, are both subsets of the Plücker quadratic relations among the Plücker coordinates of a system, written in matrix multiplication form. 2.4. Exterior factorization of a transfer function matrix We will now use the Plücker powers to calculate the exterior factors of a transfer function matrix F(s). 
\[ \begin{array}{l} Z_{\alpha } =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {F(s)} \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}},W_{\alpha } =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {\mathfrak{C}_{r} \left( {F(s)} \right)} \right)\left\{ {{\bf m},\alpha } \right\}} \right)\Rightarrow F(s)=Z_{\alpha }^{-1} W_{\alpha } \\\\[-6pt] \Rightarrow F(s)=\left( {Z_{\alpha }^{-1} Z_{0}^{-1} } \right)\left( {Z_{0} W_{\alpha } } \right) \end{array} \] Then we can take as $$Z_{\alpha } ,\,\,W_{\alpha }$$ $\begin{array}{l} Z_{\alpha } =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {\mathfrak{C}_{r-1} \left( {F(s)} \right)\left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}} Z_{0} =(-1)^{r+m}\left( {\mathfrak{C}^{1}\left( {Z_{r-1} \left\{ {{\bf r},\alpha } \right\}} \right)} \right)^{{\rm T}} \\\\[-6pt] W_{\alpha } =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {\mathfrak{C}_{r} \left( {F(s)} \right)} \right)\left\{ {{\bf m},\alpha } \right\}} \right)Z_{0} =\mathfrak{C}^{1}\left( {{\it{\Omega}}^{{\rm T}}\left( {Z_{r} } \right)\left\{ {{\bf m},\alpha } \right\}} \right) \\ \end{array}$ This way the factors $$Z_{\alpha } ,\,\,W_{\alpha }$$ become polynomial, depending on the Plücker powers of orders $$r-1$$ and $$r$$, respectively. 
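These polynomial factors can be exercised numerically by evaluating everything at a sample point, say s₀ = 1, for the 2×3 rational matrix with entries 1/(s+j) used in Example 2.4 below. A minimal Python sketch (helper names ours; exact arithmetic via `fractions.Fraction`); the common scalar Z₀ cancels in Z_α⁻¹W_α, so the factorization identity survives the scaling.

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    if len(M) == 0:
        return 1
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(X, rows, cols):
    return det([[X[i - 1][j - 1] for j in cols] for i in rows])

def lex(n, k):
    return list(combinations(range(1, n + 1), k))

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def compound(X, k):
    r, m = len(X), len(X[0])
    return [[minor(X, a, b) for b in lex(m, k)] for a in lex(r, k)]

def supp_compound(X, k):
    r, m = len(X), len(X[0])
    return [[(-1) ** (sum(a) + sum(b)) * minor(X, a, b)
             for b in reversed(lex(m, k))] for a in reversed(lex(r, k))]

def inverse(A):
    n, d = len(A), Fraction(det(A))
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
            for k, row in enumerate(A) if k != i]) for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

r, m, s0 = 2, 3, 1
F0 = [[Fraction(1, s0 + 1), Fraction(1, s0 + 2), Fraction(1, s0 + 3)],
      [Fraction(1, s0 + 4), Fraction(1, s0 + 5), Fraction(1, s0 + 6)]]
Z0 = 1
for k in range(1, 7):
    Z0 *= s0 + k                     # lcm of the denominators, evaluated at s0
assert Z0 == 5040

Z_prev = [[Z0 * x for x in row] for row in compound(F0, r - 1)]   # Z_{r-1}(s0)
OmegaZ = [[Z0 * x for x in row]
          for row in matmul(supp_compound(transpose(F0), 1), compound(F0, r - 1))]
sign = (-1) ** (r + m)

count = 0
for alpha in combinations(range(1, len(Z_prev[0]) + 1), r):
    sub = [[Z_prev[i][a - 1] for a in alpha] for i in range(r)]
    if det(sub) == 0:
        continue
    Z_a = [[sign * z for z in row] for row in transpose(supp_compound(sub, 1))]
    W_a = supp_compound(transpose([[OmegaZ[i][a - 1] for a in alpha] for i in range(m)]), 1)
    assert matmul(inverse(Z_a), W_a) == F0       # F(s0) = Z_alpha^{-1} W_alpha
    count += 1

print(count, "polynomial exterior factorizations checked at s0 =", s0)
```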
Example 2.4 Exterior factorizations of a $$2\times 3$$ matrix over the field of rational functions $F(s)=\left[ {{\begin{array}{*{20}c} {\frac{1}{s+1}} & {\frac{1}{s+2}} & {\frac{1}{s+3}} \\ {\frac{1}{s+4}} & {\frac{1}{s+5}} & {\frac{1}{s+6}} \\ \end{array} }} \right]$ The Plücker powers are $\begin{array}{l} Z_{0} (s)=s^{6}+21s^{5}+175s^{4}+735s^{3}+1624s^{2}+1764s+720\\ Z_{1} (s)=\left[ {{\begin{array}{*{20}c} {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} & {s^{5}+17s^{4}+107s^{3}+307s^{2}+396s+180} \\ {s^{5}+19s^{4}+137s^{3}+461s^{2}+702s+360} & {s^{5}+16s^{4}+95s^{3}+260s^{2}+324s+144} \\ {s^{5}+18s^{4}+121s^{3}+372s^{2}+508s+240} & {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} \\ \end{array} }} \right]^{{\rm T}} \\\\[-6pt] Z_{2} (s)=\left[ {{\begin{array}{*{20}c} {3s^{2}+27s+54} & {6s^{2}+42s+60} & {3s^{2}+15s+12} \\ \end{array} }} \right] \\ \end{array}$ The Omega matrix $${\it{\Omega}} =\mathfrak{C}^{1}\left( {X^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( X \right)=\mathfrak{C}^{1}\left( {F^{{\rm T}}} \right)\mathfrak{C}_{r-1} \left( F \right)$$ is ${\it{\Omega}} (s)=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ {-3s^{2}-27s-54} & 0 & {3s^{2}+15s+12} \\ 0 & {-3s^{2}-27s-54} & {-6s^{2}-42s-60} \\ \end{array} }} \right]$ There are three exterior factorizations with $$\mathbf{\alpha }_{1} =\{1,2\},\,\,\mathbf{\alpha }_{2} =\{1,3\},\,\,\mathbf{\alpha }_{3} =\{2,3\}$$ \begin{align*} Z_{\mathbf{\alpha }_{1} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+16s^{4}+95s^{3}+260s^{2}+324s+144} & {-s^{5}-19s^{4}-137s^{3}-461s^{2}-702s-360} \\ {-s^{5}-17s^{4}-107s^{3}-307s^{2}-396s-180} & {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{1} } &=\left[ {{\begin{array}{*{20}c} {3s^{2}+27s+54} & 0 & {-3s^{2}-15s-12} \\ 0 & {3s^{2}+27s+54} & {6s^{2}+42s+60} \\ \end{array} }} \right] \\ Z_{\mathbf{\alpha }_{2} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} & 
{-s^{5}-18s^{4}-121s^{3}-372s^{2}-508s-240} \\ {-s^{5}-17s^{4}-107s^{3}-307s^{2}-396s-180} & {s^{5}+20s^{4}+155s^{3}+580s^{2}+1044s+720} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{2} } &=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ 0 & {3s^{2}+27s+54} & {6s^{2}+42s+60} \\ \end{array} }} \right] \\ Z_{\mathbf{\alpha }_{3} } &=\left[ {{\begin{array}{*{20}c} {s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120} & {-s^{5}-18s^{4}-121s^{3}-372s^{2}-508s-240} \\ {-s^{5}-16s^{4}-95s^{3}-260s^{2}-324s-144} & {s^{5}+19s^{4}+137s^{3}+461s^{2}+702s+360} \\ \end{array} }} \right] \\ W_{\mathbf{\alpha }_{3} } &=\left[ {{\begin{array}{*{20}c} {6s^{2}+42s+60} & {3s^{2}+15s+12} & 0 \\ {-3s^{2}-27s-54} & 0 & {3s^{2}+15s+12} \\ \end{array} }} \right] \end{align*} We can easily verify that: $F(s)=\left( {Z_{\mathbf{\alpha }_{1} } } \right)^{-1}W_{\mathbf{\alpha }_{1} } =\left( {Z_{\mathbf{\alpha }_{2} } } \right)^{-1}W_{\mathbf{\alpha }_{2} } =\left( {Z_{\mathbf{\alpha }_{3} } } \right)^{-1}W_{\mathbf{\alpha }_{3} } \,\,\,$ Example 2.5 For $$r=3,\,\,m=4\, c_{3} (4)=4\Rightarrow z_{3} (s)=[z_{1} (s)\,\,z_{2} (s)\,\,z_{3} (s)\,\,z_{4} (s)]\,\,\,\,c_{2} (4)=6\Rightarrow {\it{\Omega}} (s)$$ is a $$4\times 6$$ matrix \begin{align*} {\it{\Omega}} (s)=\left[\!\!{\begin{array}{*{20}c} z_{2} (s) & z_{3}(s) & 0 & z_{4} (s) & 0 & 0 \\ -z_{1} (s) & 0 & z_{3} (s) & 0 & z_{4} (s) & 0 \\ 0 & -z_{1} (s) & -z_{2} (s) & 0 & 0 & z_{4} (s) \\ 0 & 0 & 0 & -z_{1} (s) & -z_{2} (s) & -z_{3} (s) \\ \end{array}}\!\! \right]\,\,\Rightarrow \left\{ \!\!\!{{\begin{array}{*{20}c} {{\rm {\bf \alpha }}=\left\{ {1,\,2,\,4} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[\!\!\!\! {{\begin{array}{*{20}c} {-z_{1} (s)} & 0 & 0 & {-z_{4} (s)} \\ 0 & {-z_{1} (s)} & 0 & {\,\,\,z_{3} (s)} \\ 0 & 0 & {-z_{1} (s)} & {-z_{2} (s)} \\ \end{array} }} \!\!\!\!\right]} \\ \end{array} }} \right. \\ \left\{\!\! 
{{\begin{array}{*{20}c} {\,\mathbf{\alpha }=\left\{ {1,\,3,\,6} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[\!\!\! {{\begin{array}{*{20}c} {-z_{3} (s)} & {-z_{4} (s)} & 0 & 0 \\ 0 & {-z_{2} (s)} & {-z_{3} (s)} & {\,\,\,0} \\ 0 & 0 & {-z_{1} (s)} & {-z_{2} (s)} \\ \end{array} }} \!\!\!\right]} \\ \end{array} }} \right.{\begin{array}{*{20}c} {\left\{\!\!\!{{\begin{array}{*{20}c} {{\rm {\bf \alpha }}=\left\{{4,\,5,\,6} \right\}\Rightarrow W_{{\rm {\bf \alpha }}} (s)=} \\ {=-\left[ {{\begin{array}{*{20}c} {-z_{3} (s)} & {-z_{4} (s)} & 0 & 0 \\ {z_{2} (s)} & 0 & {-z_{4} (s)} & {\,\,\,0} \\ {-z_{1} (s)} & 0 & 0 & {-z_{4} (s)} \\ \end{array} }} \right]\,\,} \\ \end{array} }} \right.} & \\ \end{array} } \end{align*} 2.5. Closed loop Plücker coordinates, powers and exterior factors of a system The action of the SOF group on $${\it{\Sigma}}$$ induces an action on the set of the Plücker coordinates. \begin{align} &{[}I_{r} \,\,F(s)]\mapsto [I_{r} \,\,\left( {I_{r} +F(s)H} \right)^{-1}F(s)]=\left( {I_{r} +F(s)H} \right)^{-1}[\left( {I_{r} +F(s)H} \right)\,\,F(s)] \notag\\ &\quad\Rightarrow {[}I_{r} \,\,F(s)]\mapsto \left( {I_{r} +F(s)H} \right)^{-1}[I_{r} \,\,F(s)]\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right] \notag\\ &\quad\Rightarrow\mathfrak{C}_{r} \left( {[I_{r} \,\,F(s)]} \right)\mapsto \mathfrak{C}_{r} \left( {\left( {I_{r} +F(s)H} \right)^{-1}} \right)\mathfrak{C}_{r} \left( {[I_{r} \,\,F(s)]} \right)\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right]} \right) \notag\\ &\quad\Rightarrow \mathfrak{G}\mapsto \mathfrak{G}\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} } \\ H & {I_{m} } \\ \end{array} }} \right]} \right)/\left| {I_{r} +F(s)H} \right|\Rightarrow \mathfrak{P}\mapsto \mathfrak{P}\mathfrak{C}_{r} \left( {\left[ {{\begin{array}{*{20}c} {I_{r} } & {{\rm O}_{r\times m} 
} \\ H & {I_{m} } \\ \end{array} }} \right]} \right)=\mathfrak{P}\mathfrak{T}=\tilde{{\mathfrak{P}}} \end{align} (2.17) Remark that the matrix $$\mathfrak{T}$$ is lower triangular with ones on the diagonal. So each closed loop Plücker coordinate $$\tilde{{\mathfrak{p}}}_{{\kern 1pt}i}$$ is a linear combination of $$\mathfrak{p}_{{\kern 1pt}i}$$ and its subsequent open loop Plücker coordinates $$\mathfrak{p}_{{\kern 1pt}j} ,j>i$$. The above remark is sufficient to state and prove the results of this paper. We give, however, proper analytic expressions for the closed loop Plücker powers $$\tilde{{Z}}_{0} (s),\,\,\tilde{{Z}}_{1} (s),\,\,\tilde{{Z}}_{r-1} (s),\,\,\tilde{{Z}}_{r} (s)$$. Proposition 2.4 If $$z_{k\zeta \xi } (s)$$ is the entry with coordinates $$\zeta ,\,\xi$$ of $$Z_{k} \left( s \right)$$ and $$d_{k\xi \zeta }$$ the entry with coordinates $$\xi ,\,\zeta$$ of $$\mathfrak{C}_{k} \left( H \right)$$, the closed loop characteristic polynomial is \begin{align} \tilde{{Z}}_{0} (s)=Z_{0} (s)+\sum\limits_{k=1}^r {\sum\limits_{\zeta =1}^{\left( {_{k}^{r} } \right)} {\sum\limits_{\xi =1}^{\left( {_{k}^{m} } \right)} {z_{k\zeta \xi } (s)d_{k\xi \zeta } } } } \end{align} (2.18) Proof. We expand $$\left| {D(s)+N(s)H} \right|$$, the determinant of a sum of two matrices, using Theorem 4.1 of Prells et al. 
(2003) \begin{align} \left| {D(s)+N(s)H} \right|=\sum\limits_{k=0}^r {\mbox{trace}\left( {\mathfrak{C}^{r-k}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{k} \left( {N(s)H} \right)} \right)} \end{align} (2.19) \begin{align} \left| {D(s)+N(s)H} \right|=\sum\limits_{k=0}^r {\sum\limits_{\zeta =1}^{\left( {_{k}^{r} } \right)} {\sum\limits_{\xi =1}^{\left( {_{k}^{m} } \right)} {z_{k\zeta \xi } (s)d_{k\xi \zeta } } } } \mbox{ with }\left( {z_{011} (s)=z_{0} (s),d_{011} =1} \right) \end{align} (2.21a) We can use $$\chi_{k} =\vartheta_{\textit{lex}}^{-1} (\zeta )$$, $$\psi_{k} =\vartheta_{\textit{lex}}^{-1} (\xi )$$ to write (2.21a) as \begin{align} \left| {D(s)+N(s)H} \right|=\left| {D(s)} \right|\sum\limits_{k=0}^r {\sum\limits_{\chi_{k} \in {\it{\mathsf P}}_{k} ({\bf r})} {\sum\limits_{\psi_{k} \in {\it{\mathsf P}}_{k} ({\bf m})} } } \left| {F(s)\left\{ {\chi_{k} ,\psi_{k} } \right\}} \right|\left| {H\left\{ {\psi_{k} ,\chi_{k} } \right\}} \right| \end{align} (2.21b) Formula (2.21a) is oriented to the Plücker coordinates while (2.21b) to Plücker powers. □ As $$Z_{1} (s)=\mbox{adj}\left( {D(s)} \right)N(s),\tilde{{Z}}_{1} (s)=\mbox{adj}\left( {D(s)+N(s)H} \right)N(s)$$ one can use Jacobi’s formula to get an analytic expression for $$\tilde{{Z}}_{1} (s)$$. Proposition 2.5 The closed loop matrix $$\tilde{{Z}}_{1} (s)$$ is: \begin{align} \tilde{{Z}}_{1} (s)=\left[ {{\begin{array}{*{20}c} {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{11} }}} \right. } {\partial h_{11} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{21} }}} \right. } {\partial h_{21} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{m1} }}} \right. } {\partial h_{m1} }} \\ {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{12} }}} \right. } {\partial h_{12} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{22} }}} \right. } {\partial h_{22} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{m2} }}} \right. 
} {\partial h_{m2} }} \\ \vdots & \vdots & \ddots & \vdots \\ {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{1r} }}} \right. } {\partial h_{1r} }} & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{2r} }}} \right. } {\partial h_{2r} }} & \cdots & {\partial \mathord{\left/ {\vphantom {\partial {\partial h_{mr} }}} \right. } {\partial h_{mr} }} \\ \end{array} }} \right]\tilde{{Z}}_{0} (s) \end{align} (2.22) Proof. We use Jacobi’s formula $$\frac{d}{dx}\det \left( {M(x)} \right)=tr\left( {\mbox{adj}(M)\frac{dM(x)}{dx}} \right)$$ for the derivative of the determinant of a matrix to obtain: $\frac{d}{dh_{\zeta \xi } }\det \left( {D(s)+N(s)H} \right)=tr\left( {\mbox{adj}(D(s)+N(s)H)\frac{d(D(s)+N(s)H)}{dh_{\zeta \xi } }} \right)\,\,$ but $$\frac{d(N(s)H)}{dh_{\zeta \xi } }$$ is a matrix with all entries zero except the $$\{\zeta ,\xi \}$$ entry, so $\frac{d}{dh_{\zeta \xi } }\det \left( {D(s)+N(s)H} \right)=\mbox{adj}(D(s)+N(s)H)\{\zeta ,\xi \}N(s)\{\xi ,\zeta \}$ □ We can use a formula similar to (2.22) for the higher order Plücker powers, using the operator $$\partial /\partial d_{\kappa \zeta \xi }$$. For the closed loop Plücker powers of orders $$r-1$$ and $$r$$, however, it is easier to use the theorem for the minors of the inverse matrix. Theorem 2.3 \begin{align} \mbox{The } r^{\textit{th}} \mbox{ Plücker power } Z_{r} (s) \mbox{ is } {\rm E}-\mbox{invariant, i.e. } \tilde{{Z}}_{r} (s)=Z_{r} (s) \end{align} (2.23) Proof.
For the open loop $$r^{\textit{th}}$$ Plücker power $$Z_{r} (s)$$ we have $\mathfrak{C}_{r} \left( {F(s)} \right)=\mathfrak{C}_{r} \left( {D^{-1}(s)N(s)} \right)=\mathfrak{C}_{r} \left( {N(s)} \right)\left| {D(s)} \right|^{-1}\Rightarrow Z_{r} (s)=\mathfrak{C}_{r} \left( {N(s)} \right)$ For the closed loop $$r^{\textit{th}}$$ Plücker power $$\tilde{{Z}}_{r} (s)$$ we have \begin{align*} \mathfrak{C}_{r} \left( {\left( {D(s)+N(s)H} \right)^{-1}N(s)} \right)=\mathfrak{C}_{r} \left( {N(s)} \right)\left| {D(s)+N(s)H} \right|^{-1}\Rightarrow \tilde{{Z}}_{r} (s)=\mathfrak{C}_{r} \left( {N(s)} \right)=Z_{r} (s) \end{align*} □ Theorem 2.4 The $$(r-1)^{\textit{th}}$$ closed loop Plücker power $$\tilde{{Z}}_{r-1} (s)$$ varies linearly with the entries of the output feedback gain $$H$$. \begin{align} \tilde{{Z}}_{r-1} (s)=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right) \end{align} (2.24) Proof. \begin{align*} \mathfrak{C}_{r-1} \left( {\left( {D(s)+N(s)H} \right)^{-1}N(s)} \right)&=\mathfrak{C}^{1}\left( {D^{{\rm T}}(s)+H^{{\rm T}}N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\left| {D(s)+N(s)H} \right|^{-1}\Rightarrow \\ \tilde{{Z}}_{r-1} (s)&=\mathfrak{C}^{1}\left( {D^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)+\mathfrak{C}^{1}\left( {H^{{\rm T}}N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\Rightarrow \\ \tilde{{Z}}_{r-1} (s)&=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right)\mathfrak{C}^{1}\left( {N^{{\rm T}}(s)} \right)\mathfrak{C}_{r-1} \left( {N(s)} \right)\Rightarrow \\ \tilde{{Z}}_{r-1} (s)& = Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right) \end{align*} □ A direct consequence of Theorems 2.3 and 2.4 is the following. Theorem 2.5 The exterior factors of the closed loop transfer function matrix are \begin{align} \tilde{{W}}_{\alpha } =W_{\alpha } ,\,\,\,\tilde{{Z}}_{\alpha } =Z_{\alpha } +W_{\alpha } H\,\,\forall
\alpha \mbox{ with }\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0 \end{align} (2.25) Proof. By Theorem 2.3, $$\tilde{{Z}}_{r} =Z_{r}$$ and so $$\tilde{{W}}_{\alpha } =W_{\alpha } \,\,\forall \alpha \mbox{ with }\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$. By Theorem 2.4, $$\tilde{{Z}}_{r-1} (s)=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)$$ and so $$\tilde{{Z}}_{\alpha } =Z_{\alpha } +W_{\alpha } H\,\,\forall \alpha$$ with $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$. □ We shall now show that the property $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$ is $${\rm E}-$$invariant. Thus the exterior factorization related to $$\alpha$$ is meaningful for the whole equivalence class. Theorem 2.6 The property $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$ is $${\rm E}-$$invariant, i.e. $$\left| {Z_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0\Rightarrow \left| {\tilde{{Z}}_{r-1} \left\{ {\mathbf{r},\alpha } \right\}} \right|\ne 0$$ Proof. \[ \tilde{{Z}}_{r-1} (s)=Z_{r-1} (s)+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)\Rightarrow \tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}=Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}+\mathfrak{C}^{1}\left( {H^{{\rm T}}} \right){\it{\Omega}} \left( {Z_{r} (s)} \right)\left\{ {{\bf m},\alpha } \right\} \] so that $$\left| {\tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|=\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|+$$ a polynomial of lower degree. Thus $$\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ implies $$\left| {\tilde{{Z}}_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ □ 3. E–equivalence, invariants and canonical forms for full rank transfer function matrices 3.1.
$${\rm E}-$$equivalence Theorem 3.1 Two systems $$F(s),\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)\in {\it{\Sigma}}$$ are SOF equivalent if and only if for some $$\alpha =\left( {\alpha_{1} ,\alpha_{2} ,\ldots ,\alpha_{r} } \right)\in P_{r} ({\bf c}_{r-1} (m))$$ their exterior factors $$\left( {Z_{\alpha } ,W_{\alpha } } \right),\,\big( {\overset{\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } ,\overset{\hbox{\smash{\scriptscriptstyle\frown}}} {{W}} _{\alpha } } \big)$$ satisfy the following: \begin{align} W_{\alpha } =\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{W}}_{\alpha }\\ \end{align} (3.1) \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H,\,\,H\in \mathbb{R}^{m\times r} \end{align} (3.2) Proof. Necessity: direct from Theorem 2.5. Sufficiency: as $$F=Z_{\alpha }^{-1} W_{\alpha }$$ and $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} =\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha }^{-1} \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{W}} _{\alpha }$$, $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H$$ implies $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} =\left( {Z_{\alpha } +W_{\alpha } H} \right)^{-1}W_{\alpha }$$. □ We can now construct new invariants based on the consistency conditions of the linear system of equations $$W_{\alpha } (s)H=\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)$$.
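This consistency condition can also be checked numerically: stacking the polynomial coefficients (as is done formally in (3.3)) turns $$W_{\alpha } (s)H=\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)$$ into a constant linear system that a least squares solver can test. The sketch below is illustrative only; the helper name `sof_equivalent` is ours, and the scalar data are those of Example 4.1 below.

```python
import numpy as np

def sof_equivalent(Z_hat, Z, W, tol=1e-9):
    """Test solvability of W_alpha * H = Z_hat_alpha - Z_alpha over a constant H.

    Z_hat, Z : stacked coefficient matrices of the Z exterior factors, shape (N, r)
    W        : stacked coefficient matrix of the W exterior factor,    shape (N, m)
    Returns (equivalent, H): H is the least squares solution, and 'equivalent'
    is True exactly when the residual vanishes.
    """
    H, *_ = np.linalg.lstsq(W, Z_hat - Z, rcond=None)
    return np.allclose(W @ H, Z_hat - Z, atol=tol), H

# scalar illustration (r = m = 1): a_hat(s) - a(s) = z(s) h
a     = np.array([[1.0, 3.0, 2.0, 1.0, 5.0]]).T   # s^4+3s^3+2s^2+s+5
a_hat = np.array([[1.0, 3.0, 5.0, 1.0, 8.0]]).T   # closed loop for h = 3
z     = np.array([[0.0, 0.0, 1.0, 0.0, 1.0]]).T   # s^2+1, zero-padded as in (3.3)
ok, H = sof_equivalent(a_hat, a, z)
print(ok, H)   # True [[3.]]
```

When the two systems are not SOF equivalent, the residual of the least squares solution is nonzero and the test returns False.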
For this purpose, we transform the polynomial equation to another with constant coefficients \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha } (s)&=\sum\limits_{k=0}^{n-r+1-d} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,k} s^{k}} ,\,\,Z_{\alpha } (s)=\sum\limits_{k=0}^{n-r+1-d} {Z_{\alpha ,k} s^{k}} ,\,\,W_{\alpha } (s)=\sum\limits_{k=0}^{n-r-d} {W_{\alpha ,k} s^{k}}\\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha }& =\left[ {\begin{array}{l} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,(n-r-d+1)} \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\alpha ,0} \\ \end{array}} \right],\,\,{\boldsymbol{Z}}_{\alpha } =\left[ {\begin{array}{l} Z_{\alpha ,(n-r-d+1)} \\ Z_{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ Z_{\alpha ,0} \\ \end{array}} \right],\,\,{\boldsymbol{W}}_{\alpha } =\left[ {\begin{array}{l} {\rm O}_{r\times m} \\ W_{\alpha ,(n-r-d)} \\ \,\,\,\,\vdots \\ W_{\alpha ,0} \\ \end{array}} \right]\Rightarrow \,\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\alpha } \mathbf{-Z}_{\alpha } \mathbf{=W}_{\alpha } H \notag \end{align} (3.3) where $$d$$ is the degree of the greatest common divisor of the entries of $$Z_{\alpha } (s)\mbox{ }\big( {\overset{\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)} \big)$$. If $$h_{1} ,h_{2} ,\ldots ,h_{r}$$, $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{z}}_{1},\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{2} ,\ldots ,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{r}$$, $${\boldsymbol{z}}_{1} ,{\boldsymbol{z}}_{2} ,\ldots ,{\boldsymbol{z}}_{r}$$, $$\mathbf{w}_{1} ,\mathbf{w}_{2} ,\ldots ,\mathbf{w}_{m}$$ are the
columns of the gain $$H$$ and of the matrices $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{Z}}_{\alpha }$$, $${\boldsymbol{Z}}_{\alpha }$$ and $${\boldsymbol{W}}_{\alpha }$$, respectively, then (3.3) decomposes into the $$r$$ equations \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} ={\boldsymbol{W}}_{\alpha } h_{k} \end{align} (3.3a) For the solvability conditions of equation (3.3a) we decompose $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{z}}_{k} ,\,\,{\boldsymbol{z}}_{k}$$ into two parts \begin{align} \begin{array}{l} {\boldsymbol{z}}_{k} ={\boldsymbol{z}}_{k}^{\bot } +{\boldsymbol{z}}_{k}^{\parallel } ,\mbox{ with }{\boldsymbol{z}}_{k}^{\parallel } \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\mbox{ and }{\boldsymbol{z}}_{k}^{\bot } \bot \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right) \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k} =\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\bot } +\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\parallel } ,\,\,\mbox{with }\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\parallel } \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\mbox{ and }\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{z}}_{k}^{\bot } \bot \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right) \\ \end{array} \end{align} (3.4) The solvability conditions of (3.3a) are $\forall k\in \mathbf{r},\,\,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{k} -{\boldsymbol{z}}_{k} \in \mbox{colspan}\left( {{\boldsymbol{W}}_{\alpha } } \right)\Leftrightarrow \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{z}}_{k}^{\bot } ={\boldsymbol{z}}_{k}^{\bot }$ Apparently \begin{align}
\boldsymbol{z}_{k}^{\parallel } =\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } \boldsymbol{z}_{k} ,\,\,\boldsymbol{z}_{k}^{\bot } =\,\left( {I-\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } } \right)\boldsymbol{z}_{k} \end{align} (3.5) For a proof, remark that $$\boldsymbol{W}_{\alpha }^{{\rm T}} \boldsymbol{z}_{k}^{\bot } =\boldsymbol{W}_{\alpha }^{{\rm T}} \,\left( {I-\boldsymbol{W}_{\alpha } \boldsymbol{W}_{\alpha }^{\dagger } } \right)\boldsymbol{z}_{k} =\left( {\boldsymbol{W}_{\alpha }^{{\rm T}} -\boldsymbol{W}_{\alpha }^{{\rm T}} } \right)\boldsymbol{z}_{k} =0$$. Remark that the Moore–Penrose (left) inverse $$\boldsymbol{W}_{\alpha }^{\dagger }$$ exists thanks to hypothesis (1.1). As the columns of $$F(s)$$ are $$\mathbb{R}-$$linearly independent, the same is true for the columns of $$W_{\alpha } (s)$$ and the columns of $$\boldsymbol{W}_{\alpha }$$. If the solution of (3.3) exists, it is $$H= \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{H}}_{\alpha } -{\boldsymbol{H}}_{\alpha }$$ with $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger} \overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha } ,\,\,{\boldsymbol{H}}_{\alpha }={\boldsymbol{W}}_{\alpha }^{\dagger } {\boldsymbol{Z}}_{\alpha }\,$$.
□ Theorem 3.2 Two systems $$F(s),\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{F}} (s)\in {\it{\Sigma}}$$ are SOF equivalent if and only if the coefficients of their exterior factors $$\left({\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{\boldsymbol{W}}}_{\alpha },\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha } } \right)$$ and $$\left({{\boldsymbol{W}}_{\alpha } ,{\boldsymbol{Z}}_{\alpha } } \right)$$ satisfy the following: \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\boldsymbol{W}}}_{\alpha } ={\boldsymbol{W}}_{\alpha }\\ \end{align} (3.6) \begin{align} \left( {I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger } } \right)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha } =\left( {I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger } } \right){\boldsymbol{Z}}_{\alpha } \end{align} (3.7) In that case, there is a constant solution $$H\in \mathbb{R}^{m\times r}$$ to the equation $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{\alpha } (s)-Z_{\alpha } (s)=W_{\alpha } (s)H$$ with \begin{align} H=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{H}}_{\alpha } -{\boldsymbol{H}}_{\alpha } ,\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger } \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {\boldsymbol{Z}}_{\alpha } ,\,\,{\boldsymbol{H}}_{\alpha } ={\boldsymbol{W}}_{\alpha }^{\dagger } {\boldsymbol{Z}}_{\alpha } \end{align} (3.8) Proof.
The columns of the matrices $$\left( {I-{\boldsymbol{W}}_{\alpha }{\boldsymbol{W}}_{\alpha }^{\dagger } }\right)\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha }$$, $$\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha }$$ are the components of the columns of $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{\boldsymbol{Z}}_{\alpha }$$, $${\boldsymbol{Z}}_{\alpha }$$ that are orthogonal to the column space of $${\boldsymbol{W}}_{\alpha }$$. Equation (3.3) has a solution if and only if they are equal. □ Theorem 3.3 To any system $$F(s)\in {\it{\Sigma}}$$, the pair $$\left({{\boldsymbol{W}}_{\alpha } ,\,\,\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha } } \right)$$ is a complete system of independent $${\rm E}-$$invariants. Proof. For the invariance, remark that the gain $$H$$ cannot alter $$\left({I-{\boldsymbol{W}}_{\alpha } {\boldsymbol{W}}_{\alpha }^{\dagger} } \right){\boldsymbol{Z}}_{\alpha }$$. For the completeness, remark that in the case of equality of the above pair of invariants, (3.3) has a solution. □ Theorem 3.4 To any system $$F(s)\in {\it{\Sigma}}$$ and to each $$\alpha$$ with $$\left| {Z_{\alpha } (s)} \right|\ne 0$$ the rational function matrix $$\underline{F_{\alpha } }(s)=\left( {Z_{\alpha } (s)-W_{\alpha } (s){\boldsymbol{H}}_{\alpha } } \right)^{-1}W_{\alpha } (s)=\left( {Z_{\alpha }^{\bot } (s)} \right)^{-1}W_{\alpha } (s)$$ is uniquely determined. Proof.
Trivial, following the algorithm below: Algorithm 3.1

1. Calculate $$Z_{r-1} (s),\,\,Z_{r} (s)$$.
2. Find all the $$\alpha$$ with $$\left| {Z_{r-1} (s)\left\{ {{\bf r},\alpha } \right\}} \right|\ne 0$$ and for each one of them:
3. Calculate the exterior factors $$Z_{\alpha } ,\,W_{\alpha }$$ of the transfer function matrix $$F(s)$$.
4. Calculate the coefficient matrices $${\boldsymbol{Z}}_{\alpha } ,{\boldsymbol{W}}_{\alpha }$$.
5. Decompose $${\boldsymbol{Z}}_{\alpha }$$ into two parts $${\boldsymbol{Z}}_{\alpha } ={\boldsymbol{Z}}_{\alpha }^{\bot} +{\boldsymbol{Z}}_{\alpha }^{\parallel }$$ with the columns of $${\boldsymbol{Z}}_{\alpha }^{\parallel }$$ belonging to the column space of $${\boldsymbol{W}}_{\alpha }$$ and the columns of $${\boldsymbol{Z}}_{\alpha }^{\bot }$$ orthogonal to it: $${\boldsymbol{Z}}_{\alpha }^{\parallel }={\boldsymbol{W}}_{\alpha } \left( {{\boldsymbol{W}}_{\alpha}^{{\rm T}} {\boldsymbol{W}}_{\alpha } }\right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}}{\boldsymbol{Z}}_{\alpha } ,\,{\boldsymbol{Z}}_{\alpha }^{\bot} =\,\left( {I-{\boldsymbol{W}}_{\alpha } \left({{\boldsymbol{W}}_{\alpha }^{{\rm T}} {\boldsymbol{W}}_{\alpha} } \right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}} }\right){\boldsymbol{Z}}_{\alpha }$$.
6. Calculate the output feedback gains $$H_{\alpha } =\left({{\boldsymbol{W}}_{\alpha }^{{\rm T}} {\boldsymbol{W}}_{\alpha} } \right)^{-1}{\boldsymbol{W}}_{\alpha }^{{\rm T}}{\boldsymbol{Z}}_{\alpha }$$.
7. Calculate $$\underline{F_{\alpha } }(s)=\left( {Z_{\alpha } (s)-W_{\alpha } (s)H_{\alpha } } \right)^{-1}W_{\alpha } (s)$$. □

Remark that $$\underline{F_{\alpha } }(s)$$ is unique up to the choice of $$\alpha$$. It can become properly unique if we choose $$\alpha_{0}$$ with $$\vartheta_{\textit{lex}} \left( {\alpha_{0} } \right)\leqslant \vartheta_{\textit{lex}} \left( \alpha \right)$$. The resulting $$\underline{F_{\alpha_{0} } }(s)$$ is a uniquely determined representative of the equivalence class.
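Steps 5 and 6 can be sketched numerically with the pseudoinverse, as in (3.5). The helper name `canonical_data` is ours; the data are the stacked coefficients of the single output system of Example 4.2 below.

```python
import numpy as np

def canonical_data(Z, W):
    """Steps 5 and 6 of Algorithm 3.1 on stacked coefficient matrices.

    Z : (N, r) coefficients of Z_alpha(s); W : (N, m) coefficients of W_alpha(s).
    Returns (Z_perp, H_alpha) with Z = Z_perp + W @ H_alpha and the columns
    of Z_perp orthogonal to colspan(W).
    """
    W_dag = np.linalg.pinv(W)    # equals (W^T W)^{-1} W^T for full column rank W
    H_alpha = W_dag @ Z          # least squares output feedback gain
    Z_perp = Z - W @ H_alpha     # the invariant orthogonal component
    return Z_perp, H_alpha

# stacked coefficients of Example 4.2: p1 from a(s), W = [p2 p3]
Z = np.array([[1.0, 6.0, 11.0, 6.0]]).T
W = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [3.0, 1.0]])
Z_perp, H_alpha = canonical_data(Z, W)
print(Z_perp.ravel(), H_alpha.ravel())   # [1. 6. 0. 0.] [-2.5 13.5]
```

The hypothesis (1.1) guarantees that `W` has full column rank, so the pseudoinverse coincides with the left inverse used in the algorithm.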
The representative $$\underline{F_{\alpha_{0} } }(s)$$ is a canonical form, even though it does not conform to the usual feeling of canonicity. The pair $$\left( {{\boldsymbol{W}}_{\alpha },\,{\boldsymbol{Z}}_{\alpha }^{\bot } } \right)$$ is a complete system of independent invariants. It is, however, oriented to the construction of canonical forms. For proper control problems, complete invariants related to the Grassmann coordinates of the spaces $${\boldsymbol{z}}_{k} \oplus \mbox{colspan}\left({{\boldsymbol{W}}_{\alpha } } \right)$$, $${\boldsymbol{z}}_{k}\wedge \mathbf{w}_{1} \wedge \mathbf{w}_{2} \wedge \cdots \wedge\mathbf{w}_{m}$$ seem to be more important. Roughly speaking, in the first case we have the base and the height of a prism, and in the second the base and its volume. Starting from the Grassmann coordinates, we can construct multivariate polynomials that generalize the Bezoutian and represent the variety of the closed loop dynamics. The exterior powers of a transfer function matrix and their invariant properties do not appear here for the first time. They are well known from the calculation of the Smith and Smith–MacMillan forms. Let $$\delta_{k} (s)$$ be the greatest common divisor of the entries of $$Z_{k} (s)$$. Then $$\delta_{k} (s)$$ divides $$\delta_{k+1} (s)$$ and $$\delta_{r} (s)/\delta_{r-1} (s)$$ is known to be the $$r^{\it th}$$ invariant factor of the Smith form. It is well known that $$\delta_{r} (s)/\delta_{r-1} (s)$$ is state feedback invariant. The SOF invariance of the whole $$Z_{r} (s)$$ must be seen in this context. A question arises naturally: is it possible to solve the problem using a coprime factorization instead of the exterior factorization? The problem of SOF (full SOF) equivalence is solved in Yannakoudakis (2013a) using generalized Bezoutians.
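Numerically, the SOF invariance of $$Z_{r} (s)$$ (Theorem 2.3) is easy to confirm at a sample point of the variable $$s$$. The sketch below uses randomly generated data; the helper name `maximal_minors` is our own choice.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def maximal_minors(M, r):
    """All r x r minors of the r x m matrix M (its r-th compound, a row vector)."""
    return np.array([np.linalg.det(M[:, cols])
                     for cols in itertools.combinations(range(M.shape[1]), r)])

r, m = 2, 3
D = rng.standard_normal((r, r))   # D(s) evaluated at a sample point s
N = rng.standard_normal((r, m))   # N(s) evaluated at the same point
H = rng.standard_normal((m, r))   # arbitrary output feedback gain

open_loop = maximal_minors(np.linalg.solve(D, N), r) * np.linalg.det(D)
closed_loop = (maximal_minors(np.linalg.solve(D + N @ H, N), r)
               * np.linalg.det(D + N @ H))
print(np.allclose(open_loop, maximal_minors(N, r)),
      np.allclose(closed_loop, maximal_minors(N, r)))   # True True
```

Both products reduce to the maximal minors of $$N$$, exactly as in the proof of Theorem 2.3.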
Given two systems $$F(s),\,\,\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)$$ we consider their left and right coprime factorizations \begin{align} \begin{array}{l} \left( {D_{L} (s),\,\,N_{L} (s)} \right),\,\,\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{L} (s),\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{L} (s)} \right),\,\,\left( {\,N_{R} (s),\,\,D_{R} (s)} \right),\,\,\left( {\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{R} (s),\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}}_{R} (s)} \right) \\ F(s)=D_{L}^{-1} (s)N_{L} (s)=N_{R} (s)D_{R}^{-1} (s),\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}}_{L}^{-1} (s)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{L} (s)=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{R} (s)\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{R}^{-1} (s)\, \\ \end{array} \end{align} (3.9) and their generalized Bezoutians \begin{align} B(\lambda ,\,\mu )=\frac{N_{L} (\lambda )D_{R} (\mu )-D_{L} (\lambda )N_{R} (\mu )}{\lambda -\mu },\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{B}} (\lambda ,\,\mu )=\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}}_{L} (\lambda )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{R} (\mu )-\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{D}} _{L} (\lambda )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{N}} _{R} (\mu )}{\lambda -\mu } \end{align} (3.10) Then for SOF (full SOF) equivalence there must be unimodular matrices $$U_{H} (\mu ),\,\,V_{H} (\lambda )$$ with \begin{align} B(\lambda ,\,\mu )=\,U_{H} (\mu )\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{B}} (\lambda ,\,\mu )V_{H} (\lambda ) \end{align} 
(3.11) The gain achieving equivalence is calculated by solving linear equations, but the approach cannot serve for the calculation of complete invariants, because of the bilinearity of formula (3.11). The SOF group acts on the set of the Bezoutians by the transformation \begin{align} B(\lambda ,\,\mu )\to \,U_{H} (\mu )B(\lambda ,\,\mu )V_{H} (\lambda ) \end{align} (3.12) It acts on the set of exterior factors by the transformation \begin{align} \left( {Z(s),\,W(s)} \right)\mapsto \left( {Z(s)+W(s)H,\,W(s)} \right) \end{align} (3.13) and on the set of the left coprime factors by the transformation \begin{align} \left( {D_{L} (s),\,N_{L} (s)} \right)\mapsto \left( {V_{H} (s)\left( {D_{L} (s)+N_{L} (s)H} \right),\,V_{H} (s)N_{L} (s)} \right) \end{align} (3.14) Apparently, the action is linear only on the set of exterior factors. Remark that transformation (3.14) cannot serve for the test of SOF equivalence either. 4. Applications 4.1. Scalar systems $${\it{\Sigma}}$$ is the set $$\mathbb{R}_{n} \left\{ s \right\}$$ of strictly proper rational functions with real coefficients of McMillan degree $$n$$ and monic denominator polynomial. $$\mathcal{H}$$ is the additive group of the real numbers $$\mathbb{R}$$. For the scalar system: \begin{align} {\it{\Sigma}} \ni F(s)=\frac{z(s)}{a(s)}=\frac{z_{n-1} s^{n-1}+z_{n-2} s^{n-2}+\cdots +z_{1} s+z_{0} }{s^{n}+a_{n-1} s^{n-1}+\cdots +a_{1} s+a_{0} } \end{align} (4.1) The Plücker coordinates and the Plücker matrix are: $\mathfrak{P}=[\mathfrak{p}{\kern 1pt}_{1} ,\,\mathfrak{p}_{2} ]=[Z_{0} (s)\,\,Z_{1} (s)]=[a(s)\,\,z(s)],\,\,\,\,{\mathcal{P}}=[{{\it{p}}}{\kern 1pt}_{1} ,\,{{\it{p}}}_{2} ]\,=\left[ {{\begin{array}{*{20}c} 1 & {a_{n-1} } & {a_{n-2} } & \cdots & {a_{0} } \\ 0 & {z_{n-1} } & {z_{n-2} } & \cdots & {z_{0} } \\ \end{array} }} \right]^{{\rm T}}$ There is a unique factorization of the transfer function matrix with $$\mathcal{H}$$–invariant numerator, achieved with $${\rm {\bf \alpha }}=\{1\}$$.
\begin{align} Z_{{\rm {\bf \alpha }}} =Z_{0} (s)=a(s),\,\,W_{{\rm {\bf \alpha }}} =Z_{1} (s)=z(s) \end{align} (4.2) According to Theorem 3.1, $F(s)=\frac{z(s)}{a(s)}{\rm E}\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} (s)}{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)}=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)\Leftrightarrow \left\{ {{\begin{array}{*{20}c} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} (s)=z(s)} \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)-a(s)=hz(s)\,\,h\in \mathbb{R}} \\ \end{array} }} \right.$ The following are known to be complete $$\mathcal{H}$$-invariants. \begin{align} \text{Yannakoudakis (1980)}: \mathfrak{C}_{2} \left( {\mathcal{P}} \right)\\ \end{align} (4.3) \begin{align} \text{Byrnes & Crouch (1985)}: \left( {z(s),\mathfrak{b}(s)} \right) \end{align} (4.4) $$\mathfrak{b}(s)=a(s)\frac{d}{ds}z(s)-z(s)\frac{d}{ds}a(s)$$ is the so-called breakaway polynomial. \begin{align} \text{Helmke & Fuhrmann (1989)}: \mathfrak{B}(a(s),z(s)) \end{align} (4.5) $$\mathfrak{B}(a(s),z(s))$$ is the Bezoutian of the polynomials $$a(s),z(s)$$. It is defined as the determinant of the alternant matrix of the polynomials for the variables $$s_{1} ,\,s_{2}$$ divided by the determinant of the Vandermonde matrix of the variables. \begin{align} &\mathfrak{B}(a(s),z(s))=\left| {{\begin{array}{*{20}c} {a(s_{1} )} & {a(s_{2} )} \\ {z(s_{1} )} & {z(s_{2} )} \\ \end{array} }} \right|\left( {\left| {{\begin{array}{*{20}c} 1 & 1 \\ {s_{1} } & {s_{2} } \\ \end{array} }} \right|} \right)^{-1}=\frac{a(s_{1} )z(s_{2} )-z(s_{1} )a(s_{2} )}{s_{2} -s_{1} } \\ \end{align} (4.6) \begin{align} &\mbox{The gain achieving equivalence is: } \,\,h=\frac{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)-a(s)}{z(s)} \end{align} (4.7) None of the above complete invariants is a system of independent invariants.
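The $$\mathcal{H}$$-invariance of the Bezoutian (4.5) is easy to check numerically: replacing $$a(s)$$ by $$a(s)+hz(s)$$ in (4.6) leaves the value unchanged, since the $$hz(\cdot )z(\cdot )$$ terms cancel. A minimal sketch, assuming the polynomials of Example 4.1 below and the gain $$h=3$$:

```python
import numpy as np
P = np.polynomial.polynomial

def bez(a, z, lam, mu):
    """Bezoutian B(a, z) of (4.6) evaluated at (lam, mu); coefficients low-to-high."""
    return (P.polyval(lam, a) * P.polyval(mu, z)
            - P.polyval(lam, z) * P.polyval(mu, a)) / (mu - lam)

a = [5.0, 1.0, 2.0, 3.0, 1.0]             # s^4+3s^3+2s^2+s+5
z = [1.0, 0.0, 1.0]                       # s^2+1
a_cl = P.polyadd(a, np.multiply(3.0, z))  # closed loop denominator for h = 3
lam, mu = 0.7, -1.3
print(np.isclose(bez(a, z, lam, mu), bez(a_cl, z, lam, mu)))   # True
```

The same cancellation argument shows the invariance for every real gain $$h$$, not just the sampled one.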
There are several ways to extract the independent parameters. We present the one proposed by Algorithm 3.1: $${\boldsymbol{Z}}_{\alpha } ={{\it{p}}}_{1} ,\,\,{\boldsymbol{W}}_{\alpha } ={{\it{p}}}_{2}$$. The vector $${{\it{p}}}_{1}$$ is decomposed into two components, one parallel and one orthogonal to $${{\it{p}}}{\kern 1pt}_{2}$$, i.e. $${{\it{p}}}_{1} ={{\it{p}}}_{1}^{\bot } +{{\it{p}}}_{2} H_{a} ,\,\,H_{a} =\left\langle {{{\it{p}}}_{1} ,{{\it{p}}}_{2} }\right\rangle /\left\| {{{\it{p}}}_{2} } \right\|^{2}$$. Then the pair $$\left( {{{\it{p}}}_{1}^{\bot } ,\,\,{{\it{p}}}_{2} } \right)$$ is a complete system of independent $${\rm E}-$$invariants. The canonical form is $$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=\frac{W_{\alpha } (s)}{Z_{\alpha } (s)-W_{\alpha } (s)H_{\alpha } }=\frac{\mathfrak{p}_{2} (s)}{\mathfrak{p}_{1}^{\bot } (s)}$$ Example 4.1 \begin{align*}\label{exam4.1} f(s)&=\frac{s^{2}+1}{s^{4}+3s^{3}+2s^{2}+s+5},\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{f}} (s)=\frac{f(s)}{1+3f(s)}=\frac{s^{2}+1}{s^{4}+3s^{3}+5s^{2}+s+8} \\ {\mathcal{P}}&=\left[ {{\begin{array}{*{20}c} {1\,\,\,3\,\,\,2\,\,\,1\,\,\,5} \\ {0\,\,\,0\,\,\,1\,\,\,0\,\,\,1} \\ \end{array} }} \right]^{{\rm T}}\!\!,\,{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\mathcal{P}}} }}=\left[ {{\begin{array}{*{20}c} {1\,\,\,3\,\,\,5\,\,\,1\,\,\,8} \\ {0\,\,\,0\,\,\,1\,\,\,0\,\,\,1} \\ \end{array} }} \right]^{{\rm T}}\,\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)=\left[ {0\,\,1\,\,0\,\,1\,\,3\,\,0\,\,3\,\,-1\,\,-3\,\,1} \right]^{{\rm T}}=\mathbf{\mathfrak{C}}_{2} \left( {{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{\mathcal{P}}} }}} \right) \\ \mathfrak{B}(a,z)&=\mathbf{\mathfrak{C}}_{2} \left( {\left[ {{\begin{array}{*{20}c} {s_{1}^{4} \,\,s_{1}^{3} \ldots 1} \\ {s_{2}^{4} \,\,\,s_{2}^{3} \cdots 1} \\ \end{array} }} \right]} \right)\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}}
\right)(s_{1} -s_{2} )^{-1} \\ \mathfrak{B}(a,z)&= \left[ s_{1}^{3} s_{2}^{3} ,\,s_{1}^{2} s_{2}^{2} (s_{1} +s_{2} ),\,s_{1} s_{2} (s_{1}^{2} +s_{1} s_{2} +s_{2}^{2} ),\,(s_{1}^{3} +s_{1}^{2} s_{2} +s_{1} s_{2}^{2} +s_{2}^{3} ),s_{1}^{2} s_{2}^{2} ,s_{1} s_{2} (s_{1} +s_{2} ),s_{1}^{2}\right.\\ &\left.\quad +s_{1} s_{2} +s_{2}^{2} ,\,s_{1} s_{2} ,\,s_{1} +s_{2} ,\,1 \right]\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)\,\, \end{align*} $$a^{\bot }=[1\,\,3\,\,-3/2\,\,1\,\,3/2]$$ and the canonical form is $$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=\frac{s^{2}+1}{s^{4}+3s^{3}-(3/2)s^{2}+s+3/2}=\frac{z(s)}{a^{\bot }(s)}$$. The complete set of independent invariants of this paper is the pair $$\left( {\mathfrak{p}_{2} (s),\,\,\mathfrak{p}_{1}^{\bot } (s)} \right)=\left( {z(s),\,\,a^{\bot }(s)} \right)$$. The only visible relation with the other known complete invariants $$\mathfrak{C}_{2} \left( {\mathcal{P}} \right)$$, $$\left( {z(s),\mathfrak{b}(s)} \right)$$, $$\mathfrak{B}(s_{1} ,s_{2} )$$ is that each one can be calculated from the others. The most powerful tool is the Bezoutian $$\mathfrak{B}(s_{1} ,s_{2} )$$. It is the resultant of the closed loop polynomials $$a(s_{1} )+hz(s_{1} ),\,\,a(s_{2} )+hz(s_{2} )$$. We can extend it to the general case, eliminating the gain $$H$$, but we do not want to overload this article. However, the multivariate closed loop characteristic polynomial describing the constrained dynamics in the last section of the article is such a generalization of the Bezoutian. It is useful to remark that $$\mathfrak{B}(s_{1} ,s_{2} )=0$$ describes the constrained closed loop dynamics and is equivalent to the root locus of the system: if we suppose that $$s_{1}$$ is a closed loop pole, the roots in $$s_{2}$$ of $$\mathfrak{B}(s_{1} ,s_{2} )=0$$ are the other closed loop poles.
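The root locus reading of $$\mathfrak{B}(s_{1} ,s_{2} )=0$$ can be verified numerically: fixing a closed loop pole $$s_{1}$$ and deflating the factor $$(s_{1} -s_{2} )$$ leaves a polynomial in $$s_{2}$$ whose roots are the remaining closed loop poles. A sketch, assuming the data of Example 4.1 and the gain $$h=3$$:

```python
import numpy as np
P = np.polynomial.polynomial

a = [5.0, 1.0, 2.0, 3.0, 1.0]             # a(s), coefficients low-to-high
z = [1.0, 0.0, 1.0]                       # z(s)
a_cl = P.polyadd(a, np.multiply(3.0, z))  # closed loop polynomial for h = 3
poles = np.sort_complex(P.polyroots(a_cl))

s1 = poles[0]                             # fix one closed loop pole
num = P.polysub(P.polyval(s1, a) * np.asarray(z, complex),
                P.polyval(s1, z) * np.asarray(a, complex))  # a(s1)z(s2)-z(s1)a(s2)
quot, rem = P.polydiv(num, [s1, -1.0])    # deflate the factor (s1 - s2)
others = np.sort_complex(P.polyroots(quot))
print(np.allclose(others, poles[1:]))     # True: the remaining closed loop poles
```

The division is exact because $$a(s_{1} )=-hz(s_{1} )$$ makes the numerator a multiple of the closed loop polynomial in $$s_{2}$$.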
The breakaway polynomial is $$\mathfrak{B}(s,s)$$ and the intersection of the root locus with the imaginary axis is given by the roots of $$\mathfrak{B}(j\omega ,-j\omega )=0$$. Remark also that $$\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)={{\it{p}}}_{1} \wedge {{\it{p}}}_{2}$$ is related to the Bezoutian by the equation below. $\mathfrak{B}(s_{1} ,s_{2} )={{\it{p}}}_{1} \wedge {{\it{p}}}_{2} \left( {\left( {s_{1}^{n} \,\,s_{1}^{n-1} \ldots 1} \right)^{{\rm T}}\wedge \left( {s_{2}^{n} \,\,\,s_{2}^{n-1} \cdots 1} \right)^{{\rm T}}} \right)/(s_{1} -s_{2} )$ 4.2. Single output systems For this section $${\it{\Sigma}}$$ is the set $$\mathbb{R}_{n}^{m} \left\{ s\right\}$$ of strictly proper rational vectors with real coefficients of McMillan degree $$n$$ and monic denominator, verifying also (1.1). $$\mathcal{H}$$ is the additive group of real vectors $$\mathbb{R}^{m}$$. Let $${\it{\Sigma}} \ni F(s)=a^{-1}(s)\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$. Then $$Z_{0} (s)=a(s),\,\,Z_{1} (s)=\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$. The Plücker coordinates and the Plücker matrix are:
\begin{align} \mathfrak{P}&=[\mathfrak{p}{\kern 1pt}_{1} \,\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]=[z_{0} (s)\,\,z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)]=[a(s)\,\,z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)] \\ \end{align} (4.8) \begin{align} {\mathcal{P}}&=[{{\it{p}}}{\kern 1pt}_{1} \,\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]\,=\left[ {{\begin{array}{*{20}c} {\begin{array}{l} \,\,1 \\ a_{n-1} \\ a_{n-2} \\ \,\,\,\vdots \\ a_{0} \\ \end{array}} & {\begin{array}{l} \,\,0 \\ z_{1,n-1} \\ z_{1,n-2} \\ \,\,\,\,\vdots \\ z_{1,0} \\ \end{array}} & \cdots & {\begin{array}{l} \,\,0 \\ z_{m,n-1} \\ z_{m,n-2} \\ \,\,\,\,\vdots \\ z_{m,0} \\ \end{array}} \\ \end{array} }} \right] \end{align} (4.9) There is a unique factorization of the transfer function matrix with $$\mathcal{H}-$$invariant numerator achieved with $${\rm {\bf \alpha}}=\{1\}:\,\,Z_{{\rm {\bf \alpha }}} =Z_{0} (s)=a(s),\,\,W_{{\rm{\bf \alpha }}} =Z_{1} (s)=\left[ {z_{1} (s)\,\,z_{2} (s)\,\,\cdots \,\,z_{m} (s)} \right]$$ According to Algorithm 3.1, the systems $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{F}} (s),\,F(s)$$ are SOF-equivalent if and only if $${\boldsymbol{W}}_{a} =[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{{\it{p}}}}_{m+1} ]\,=[\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}}}}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}}}}_{m+1}]=\mathbf{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}}{{W}} }_{a}$$ and ${{\it{p}}}_{1}^{\bot } =\left( {I_{n+1} -[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ][\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }\,} \right){{\it{p}}}_{1} =\left( {I_{n+1} -[\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{m+1} ][\,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{2} \,\,\cdots \,\,{{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{m+1} ]^{\dagger }\,} \right){{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{1} ={{\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{{\it{p}}}} }}_{1}^{\bot }$ $$[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }$$ exists thanks to hypothesis (1.1). $$H_{a} =[\,\,{{\it{p}}}_{2} \,\,\cdots \,\,{{\it{p}}}_{m+1} ]^{\dagger }{{\it{p}}}_{1}$$ and the canonical form is \begin{align} \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}} {{F}} (s)=\left( {\mathfrak{p}_{1} -[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]H_{\alpha } } \right)^{-1}[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ]=\left( {\mathfrak{p}_{1}^{\bot } } \right)^{-1}[\mathfrak{p}_{2} \,\cdots \mathfrak{p}_{m+1} ] \end{align} (4.10) Example 4.2 \begin{align*} &F(s)=\frac{1}{s^{3}+6s^{2}+11s+6}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right] \quad \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)=\frac{1}{s^{3}+6s^{2}+9s}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right]\\ &a(s)=s^{3}+6s^{2}+11s+6,\,\,z_{1} (s)=s+3,\,\,z_{2} (s)=s+1 \\ & \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{a}} (s)=s^{3}+6s^{2}+9s,\,\,z_{1} (s)=s+3,\,\,z_{2} (s)=s+1 \end{align*} \begin{align*} {\mathcal{P}}&=\left[\!\! {\begin{array}{l} 1\,\,\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 6\,\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 11\,\,\,\,\,1\,\,\,\,\,\,\,1 \\ 6\,\,\,\,\,\,\,3\,\,\,\,\,\,\,1 \\ \end{array}} \!\!\right],{\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{P}} }}=\left[\!\!
{\begin{array}{l} 1\,\,\,\,\,\,0\,\,\,\,\,\,0 \\ 6\,\,\,\,\,0\,\,\,\,\,\,0 \\ 9\,\,\,\,\,1\,\,\,\,\,\,\,1 \\ 0\,\,\,\,\,3\,\,\,\,\,\,\,1 \\ \end{array}} \!\!\right],\left( {I_{4} -\left[ {{{\it{p}}}_{2} ,{{\it{p}}}_{3} } \right]\left[ {{{\it{p}}}_{2} ,{{\it{p}}}_{3} } \right]^{\dagger }} \right)=\left[\!\! {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} }} \!\!\right]\Rightarrow {{\it{p}}}_{1}^{\bot } ={\rm {\bf \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{{\it{p}}}} }}_{1}^{\bot } \\ H_{a} &=\left[ \!\!{{\begin{array}{*{20}c} 0 & 0 & {-1/2} & {\,\,\,1/2} \\ 0 & 0 & {\,\,\,3/2} & {-1/2} \\ \end{array} }}\!\! \right]\left[ \!\!{{\begin{array}{*{20}c} 1 \\ 6 \\ {11} \\ 6 \\ \end{array} }} \!\!\right]=\left[ \!\!{{\begin{array}{*{20}c} {-5/2} \\ {27/2} \\ \end{array} }} \!\!\right]\!, \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}}_{a} =\left[ \!\!{{\begin{array}{*{20}c} 0 & 0 & {-1/2} & {\,\,\,1/2} \\ 0 & 0 & {\,\,\,3/2} & {-1/2} \\ \end{array} }} \!\!\right]\left[ \!\!{{\begin{array}{*{20}c} 1 \\ 6 \\ 9 \\ 0 \\ \end{array} }} \!\!\right]=\left[ \!\!{{\begin{array}{*{20}c} {-9/2} \\ {27/2} \\ \end{array} }} \!\!\right] \end{align*} The gain achieving equivalence is $$H=\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{H}} _{\alpha } -H_{\alpha } =[-2\,\,\,0]^{{\rm T}}$$, so that $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s)=(1+F(s)H)^{-1}F(s)$$. The canonical form of both $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{F}} (s),\,F(s)$$ is $\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}} {{F}} (s)=\frac{1}{s^{2}(s+6)}\left[ {{\begin{array}{*{20}c} {s+3} & {s+1} \\ \end{array} }} \right]$ Another complete invariant, more significant for control problems, is obtained by eliminating the gain $$H=\left[ {h_{1} ,\,h_{2} ,\,\ldots ,h_{m} \,} \right]^{{\rm T}}$$ among the closed loop polynomials $$[a(s_{i} )\,\,z_{1} (s_{i}
)\,\,z_{2} (s_{i} )\,\,\cdots \,\,z_{m} (s_{i} )]H,\,\,i=1,2,\ldots ,m+1$$ $\mathfrak{B}(a,\,z_{1} ,\,z_{2} ,\ldots ,\,z_{m} )=\mathbf{\mathfrak{C}}_{2} \left( {\left[ {\begin{array}{cccc} {s_{1}^{n} } & {s_{1}^{n-1} } & \cdots & 1 \\ {s_{2}^{n} } & {s_{2}^{n-1} } & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ {s_{m+1}^{n} } & {s_{m+1}^{n-1} } & \cdots & 1 \\ \end{array}} \right]} \right)\mathbf{\mathfrak{C}}_{2} \left( {\mathcal{P}} \right)\left| {\begin{array}{cccc} {s_{1}^{n} } & {s_{1}^{n-1} } & \cdots & 1 \\ {s_{2}^{n} } & {s_{2}^{n-1} } & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ {s_{m+1}^{n} } & {s_{m+1}^{n-1} } & \cdots & 1 \\ \end{array}} \right|^{-1}$ It describes the constrained closed loop dynamics: if we fix $$m$$ roots $$s_{1} ,\,s_{2} ,\ldots ,\,s_{m}$$, its roots in $$s_{m+1}$$ give the remaining closed loop poles. 4.3.
Square systems There is a unique factorization of the transfer function matrix with $${\rm E}-$$invariant numerator, achieved with $${\rm {\bf \alpha }}=\{1,2,\ldots ,r\}$$ $Z_{{\rm {\bf \alpha }}} =\mathbf{\mathfrak{C}}^{1}\left( {z_{r-1} (s)} \right)=\left[ {{\begin{array}{*{20}c} {z_{11} (s)} & \cdots & {z_{1r} (s)} \\ \vdots & \ddots & \vdots \\ {z_{r1} (s)} & \cdots & {z_{rr} (s)} \\ \end{array} }} \right]\in \mathbb{R}^{r\times r}[s],\,\,W_{{\rm {\bf \alpha }}} =z_{r} (s)=w(s)\in \mathbb{R}[s]$ For square systems $$W_{{\rm {\bf \alpha }}}$$ is scalar, so $$\overset{\lower0.5em\hbox{\smash{\scriptscriptstyle\frown}}} {{Z}} _{{\rm {\bf \alpha }}} =Z_{{\rm {\bf \alpha }}} +\,\,W_{{\rm {\bf \alpha }}} H$$ decomposes into $$r^{2}$$ equations for scalar systems \begin{align} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{z}} _{ij} (s)=z_{ij} (s)+w(s)h_{ij} \end{align} (4.11) Thus a complete $${\rm E}-$$invariant function of the set of $$r\times r$$ square systems of McMillan degree $$n$$ is the set of the $$r^{2}$$ complete $${\rm E}-$$invariant functions, defined by (4.9), of the scalar systems of McMillan degree $$n-r+1$$. To construct complete systems of independent $${\rm E}-$$invariants and canonical forms, we proceed as for scalar systems, $$r^{2}$$ times. In the last section, we present complete invariants and canonical forms for square $$2\times 2$$ systems. 4.4. Rectangular systems Rectangular systems are treated by direct application of Algorithm 3.1. The exterior factorization is not unique, and we must choose the ‘first’ $${\rm {\bf \alpha }}$$ with $$\left| {Z_{\alpha } (s)} \right|\ne 0$$.
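Before turning to the rectangular example, the projection computations of Section 4.2 can be checked numerically on the data of Example 4.2. The following is an illustrative sketch using numpy (the variable names are ours; the paper contains no code):

```python
import numpy as np

# Plücker matrix of Example 4.2: columns [p1 p2 p3] for
# F(s) = [s+3  s+1] / (s^3 + 6 s^2 + 11 s + 6)
p1 = np.array([1.0, 6.0, 11.0, 6.0])      # coefficients of a(s)
p1_hat = np.array([1.0, 6.0, 9.0, 0.0])   # coefficients of the second denominator
W = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 1.0],
              [3.0, 1.0]])                # [p2 p3], the shared numerators

# Orthogonal projector onto the complement of span{p2, p3}
Proj = np.eye(4) - W @ np.linalg.pinv(W)

# SOF-invariant part of the denominators: both project to [1, 6, 0, 0],
# i.e. to s^2 (s + 6), the denominator of the canonical form
assert np.allclose(Proj @ p1, Proj @ p1_hat)

# Gains H_alpha and the gain achieving SOF-equivalence
H_a = np.linalg.pinv(W) @ p1              # [-5/2, 27/2]
H_a_hat = np.linalg.pinv(W) @ p1_hat      # [-9/2, 27/2]
H = H_a_hat - H_a                         # [-2, 0], as in the example
```

Here `np.linalg.pinv` computes the Moore–Penrose pseudo-inverse; since the numerator columns coincide for the two systems, SOF-equivalence reduces to the equality of the projected denominators, and `H` reproduces the gain $$[-2\,\,\,0]^{{\rm T}}$$ of Example 4.2.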
Example 4.3 \begin{align*} F(s)&=\frac{1}{(s-1)^{4}}\left[ {{\begin{array}{*{20}c} {s^{3}-s^{2}} & {s^{3}-3s^{2}+3s-1} & {2s^{3}-5s^{2}+5s-2} \\ {2s^{3}-3s^{2}+s} & {s^{3}-3s^{2}+3s-1} & {s^{3}-s^{2}} \\ \end{array} }} \right]\left( {\mbox{Kim}-\mbox{Lee }\left( {\mbox{1995}} \right)} \right) \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{F}} (s)&=\frac{1}{s^{4}-2s^{3}+9s^{2}-16s+7}\left[ {{\begin{array}{*{20}c} {s^{3}-s^{2}-s} & {s^{3}-2s^{2}+4s-3} & {2s^{3}-2s^{2}+10s-8} \\ {2s^{3}-4s^{2}+2s} & {-4s^{2}+4s-1+s^{3}} & {-3s^{2}-s+2+s^{3}} \\ \end{array} }} \right]\\ {\it{\Omega}} &=\left[\!\!\! {{\begin{array}{*{20}c} {3s^{2}-2s} & {s^{2}-3s+2} & 0 \\ {-s^{2}+s} & 0 & {s^{2}-3s+2} \\ 0 & {-s^{2}+s} & {-3s^{2}+2s} \\ \end{array} }} \!\!\!\right]=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{{\it{\Omega}} }} \stackrel{{\mathbf{\alpha }=(1,2)}}{\Rightarrow} W_{\mathbf{\alpha }} =\left[\!\!\! {{\begin{array}{*{20}c} {-s^{2}+s} & 0 & {s^{2}-3s+2} \\ 0 & {-s^{2}+s} & {-3s^{2}+2s} \\ \end{array} }} \!\!\!\right] \\ Z_{\mathbf{\alpha }} &=\left[\!\!\! {{\begin{array}{*{20}c} {s^{3}-3s^{2}+3s-1} & {-s^{3}+3s^{2}-3s+1} \\ {3s^{2}-2s^{3}-s} & {s^{3}-s^{2}} \\ \end{array} }} \!\!\!\!\right],\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\mathbf{\alpha }} =\left[\!\!\!\! 
{{\begin{array}{*{20}c} {-4s^{2}+4s-1+s^{3}} & {-s^{3}+2s^{2}-4s+3} \\ {-2s+4s^{2}-2s^{3}} & {s^{3}-s^{2}-s} \\ \end{array} }} \!\!\!\!\right] \\ Z_{\mathbf{\alpha }} &=\underbrace {\left[ {\begin{array}{l} \,\,\,1\,\,\,\,\,-1 \\ -2\,\,\,\,\,\,1 \\ \end{array}} \right]}_{Z_{\alpha ,3} }s^{3}+\underbrace {\left[ {\begin{array}{l} -3\,\,\,\,\,\,3 \\ -2\,\,-1 \\ \end{array}} \right]}_{Z_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} \,\,\,3\,\,\,-3 \\ -1\,\,\,\,\,0 \\ \end{array}} \right]}_{Z_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} -1\,\,\,1 \\ \,\,\,0\,\,\,0 \\ \end{array}} \right]}_{Z_{\alpha ,0} },\\ W_{\mathbf{\alpha }} &=\underbrace {\left[ {\begin{array}{l} -1\,\,\,\,\,\,0\,\,\,\,\,\,\,\,1 \\ \,\,\,0\,\,\,-1\,\,\,-3 \\ \end{array}} \right]}_{W_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} 1\,\,\,0\,\,\,-3 \\ 0\,\,\,1\,\,\,\,\,\,2 \\ \end{array}} \right]}_{W_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} 0\,\,0\,\,\,2 \\ 0\,\,0\,\,\,0 \\ \end{array}} \right]}_{W_{\alpha ,0} } \\ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} _{\mathbf{\alpha }} &=\underbrace {\left[\!\!\!\! {\begin{array}{l} \,\,\,1\,\,\,\,\,-1 \\ -2\,\,\,\,\,\,1 \\ \end{array}} \!\!\!\!\right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,3} }s^{3}+\underbrace {\left[\!\!\!\! {\begin{array}{l} -4\,\,\,\,\,2 \\ \,\,\,4\,\,-1 \\ \end{array}}\!\!\!\! 
\right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,2} }s^{2}+\underbrace {\left[ {\begin{array}{l} \,\,\,4\,\,\,-3 \\ -2\,\,\,-1 \\ \end{array}} \right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,1} }s+\underbrace {\left[ {\begin{array}{l} -1\,\,\,3 \\ \,\,\,0\,\,\,0 \\ \end{array}} \right]}_{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}}_{\alpha ,0} }\,\, \\ {\boldsymbol{Z}}_{\mathbf{\alpha }}& =\left[ {{\begin{array}{*{20}c} {\,\,\,1} & {-2} & {-3} & {-2} & {\,\,\,3} & {-1} & {-1} & 0 \\ {-1} & {\,\,\,1} & {\,\,\,3} & {\,\,\,1} & {-3} & {\,\,\,0} & {\,\,\,1} & 0 \\ \end{array} }} \right]^{{\rm T}},\,\,\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\mathbf{\alpha }} =\left[ {{\begin{array}{*{20}c} {\,\,\,1} & {-2} & {-4} & {\,\,\,4} & {\,\,\,4} & {-2} & {-1} & 0 \\ {-1} & {\,\,\,1} & {\,\,\,2} & {-1} & {-3} & {-1} & {\,\,\,3} & 0 \\ \end{array} }} \right]^{{\rm T}}\\ {\boldsymbol{W}}_{\mathbf{\alpha }}& =\left[\!\!\! {{\begin{array}{*{20}c} 0 & 0 & {-1} & {\,\,\,0} & {\,\,\,1} & 0 & 0 & 0 \\ 0 & 0 & {\,\,\,0} & {-1} & {\,\,\,0} & 1 & 0 & 0 \\ 0 & 0 & {\,\,\,1} & {-3} & {-3} & 2 & 2 & 0 \\ \end{array} }} \!\!\!\right]^{{\rm T}}=\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{W}} }_{\mathbf{\alpha }} ,H_{\alpha } =\left[\!\!\! {{\begin{array}{*{20}c} {27/13} & {-29/13} \\ {-11/13} & {-6/13} \\ {-6/13} & {5/13} \\ \end{array} }} \!\!\!\right],\,\,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}}_{\alpha } =\left[\!\!\! 
{{\begin{array}{*{20}c} {40/13} & {-3/13} \\ {-24/13} & {-45/13} \\ {-6/13} & {18/13} \\ \end{array} }} \!\!\!\right] \\ H&=\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{H}} _{\alpha } -H_{\alpha } =\left[ {{\begin{array}{*{20}c} {-1} & {-2} \\ 1 & 3 \\ 0 & {-1} \\ \end{array} }} \right],\,\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}} {{Z}} }_{\mathbf{\alpha }} -\,{\boldsymbol{Z}}_{\mathbf{\alpha }} =\,{\boldsymbol{W}}_{\mathbf{\alpha }} H \end{align*} 5. Complete SOF-invariants, SOF-assignability and constrained dynamics 5.1. Complete SOF-invariants and SOF-assignability In the huge literature on SOF, see for instance the survey paper Syrmos et al. (1997) and the references therein, and the more recent Leventides & Karcanias (2008) and Karcanias & Leventides (2016), solvability conditions are expressed in terms of entities that are SOF-invariant. We cite some of them. For SOF-assignability, the Plücker matrix must have full rank (Giannakopoulos & Karcanias, 1985); the rank of the Plücker matrix is SOF-invariant. For SOF-assignability of systems with two inputs, two outputs and four states, the transfer function matrix must be rank deficient (Brockett & Byrnes, 1981); the rank of the transfer function matrix is SOF-invariant. For some distributions of Kronecker’s indices, systems are generically SOF-assignable (Yannakoudakis, 2013b); Kronecker’s indices are SOF-invariant. Until now, a complete system of independent SOF-invariants was lacking, so the practice was to solve the problems somehow and then examine the solvability conditions with respect to their SOF-invariance properties. The question that arises is whether, now that a complete system of independent SOF-invariants is available, we can reverse the procedure and look from the start for solvability conditions expressed in terms of the invariants.
To this end, we give in this section the necessary and sufficient conditions for SOF-assignability, as functions of a complete system of SOF-invariants, for the class of systems with two inputs, two outputs and four states, with transfer function matrix: \begin{align} F(s)=\frac{1}{a_{0} (s)}\left[ {{\begin{array}{*{20}c} {a_{11} (s)} & {a_{12} (s)} \\ {a_{21} (s)} & {a_{22} (s)} \\ \end{array} }} \right] \end{align} (5.1) Its open and closed loop Plücker coordinates are \begin{align} \mathfrak{P}&=\left[ {a_{0} (s),\,\,a_{21} (s),\,\,a_{22} (s),\,\,-a_{11} (s),\,\,-a_{12} (s),\,\,D(s)} \right],\,\,D(s)=\frac{a_{11} (s)a_{22} (s)-a_{12} (s)a_{21} (s)}{a_{0} (s)} \\ \tilde{{\mathfrak{P}}}&=\mathfrak{P}\mathfrak{T},\,\,\mathfrak{T}=\mathfrak{C}_{2} \left( {\left[ {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ {h_{11} } & {h_{12} } & 1 & 0 \\ {h_{21} } & {h_{22} } & 0 & 1 \\ \end{array} }} \right]} \right)=\left[ {{\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 & 0 \\ {h_{12} } & 1 & 0 & 0 & 0 & 0 \\ {h_{22} } & 0 & 1 & 0 & 0 & 0 \\ {-h_{11} } & 0 & 0 & 1 & 0 & 0 \\ {-h_{21} } & 0 & 0 & 0 & 1 & 0 \\ d & {-h_{21} } & {h_{11} } & {-h_{22} } & {h_{12} } & 1 \\ \end{array} }} \right]\!,\notag\\ d&=h_{11} h_{22} -h_{12} h_{21} \notag \end{align} (5.2) With feedback gain $H=\left[ {{\begin{array}{*{20}c} {h_{11} } & {h_{12} } \\ {h_{21} } & {h_{22} } \\ \end{array} }} \right]$ we obtain \begin{align} \begin{array}{l} \tilde{{\mathfrak{p}}}_{1} =\tilde{{a}}_{0} (s)=a_{0} (s)+a_{21} (s)h_{12} +a_{22} (s)h_{22} +a_{11} (s)h_{11} +a_{12} (s)h_{21} +D(s)d \\[6pt] \tilde{{\mathfrak{p}}}_{2} =\tilde{{a}}_{21} (s)=a_{21} (s)-h_{21} D(s) \\[6pt] \tilde{{\mathfrak{p}}}_{3} =\tilde{{a}}_{22} (s)=a_{22} (s)+h_{11} D(s) \\[6pt] \tilde{{\mathfrak{p}}}_{4} =\tilde{{a}}_{11} (s)=-a_{11} (s)-h_{22} D(s) \\[6pt] \tilde{{\mathfrak{p}}}_{5} =\tilde{{a}}_{12} (s)=-a_{12} (s)+h_{12} D(s) \\[6pt] \tilde{{\mathfrak{p}}}_{6}
=\mathfrak{p}_{6} \end{array} \end{align} (5.3) Let $$\bar{{a}}_{0} -\bar{{a}}_{cl} ,\,\,\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22} ,\,\,\bar{{D}}\in \mathbb{R}^{4}$$ denote the vectors of the coefficients of the polynomials $$a_{0} (s)-a_{cl} (s),\,\,a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s),\,\,D(s)$$, respectively. Then, we decompose each of the vectors $$\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22}$$ into one component parallel and one orthogonal to $$\bar{{D}}$$. \begin{align} \begin{array}{l} \,\,\left. {\begin{array}{l} \bar{{a}}_{11} =a_{11}^{\bot } +a_{11}^{\parallel } ,\,\,\bar{{a}}_{12} =a_{12}^{\bot } +a_{12}^{\parallel } ,\,\bar{{a}}_{21} =a_{21}^{\bot } +a_{21}^{\parallel } ,\,\,\bar{{a}}_{22} =a_{22}^{\bot } +a_{22}^{\parallel } \\[6pt] a_{11}^{\parallel } =\gamma_{11} \bar{{D}},\,\,a_{12}^{\parallel } =\gamma _{12} \bar{{D}},\,\,a_{21}^{\parallel } =\gamma_{21} \bar{{D}},\,\,a_{22}^{\parallel } =\gamma_{22} \bar{{D}} \\[6pt] \end{array}} \right\} \gamma_{\zeta \xi } =\frac{\left\langle {\bar{{a}}_{\zeta \xi } ,\bar{{D}}} \right\rangle }{\left\| {\bar{{D}}} \right\|^{2}} \\[6pt] a_{11} (s)=a_{11}^{\bot } (s)+\gamma_{11} D(s),\,a_{12} (s)=a_{12}^{\bot } (s)+\gamma_{12} D(s),\\[6pt] a_{21} (s)=a_{21}^{\bot } (s)+\gamma_{21} D(s),\,a_{22} (s)=a_{22}^{\bot } (s)+\gamma_{22} D(s) \\ \end{array} \end{align} (5.4) The function \begin{align} f:\mathbb{R}_{4}^{2\times 2} \left\{ s \right\}\to \mathbb{R}^{4\times 5},F(s)\mapsto \left[ {a_{21}^{\bot } ,\,\,a_{22}^{\bot } ,\,\,a_{11}^{\bot } ,\,\,a_{12}^{\bot } ,\,\,\bar{{D}}} \right] \end{align} (5.5) is a complete SOF-invariant, and $$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{F}} (s)=D(s)\left[ {{\begin{array}{*{20}c} {a_{22}^{\bot } (s)} & {-a_{12}^{\bot } (s)} \\ {-a_{21}^{\bot } (s)} & {a_{11}^{\bot } (s)} \\ \end{array} }} \right]^{-1}$$ is an $${\rm E}-$$canonical form of $$F(s)$$.
It is obtained from the initial system using the feedback gain $$H_{\alpha } =-\left[ {{\begin{array}{*{20}c} {\gamma_{22} } & {-\gamma_{12} } \\ {-\gamma_{21} } & {\gamma_{11} } \\ \end{array} }} \right]$$. Consider the truncated Plücker matrix of the canonical form. \begin{align} {\rm {\bf \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{P}} }}=\left[ {a_{11}^{\bot } \,\,a_{12}^{\bot } \,\,\,-a_{21}^{\bot } \,\,-a_{22}^{\bot } \,\,\bar{{D}}} \right] \end{align} (5.6) The vectors $$a_{11}^{\bot } ,\,\,a_{12}^{\bot } ,\,\,a_{21}^{\bot } ,\,\,a_{22}^{\bot }$$ are linearly dependent, as they are four vectors in the three-dimensional orthogonal complement of $$\bar{{D}}$$ in $$\mathbb{R}^{4}$$. We consider now the compound of order 4 of the truncated Plücker matrix of the canonical form and of the truncated Plücker matrix of the system \begin{align} \begin{array}{l} {\rm {\bf \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}} {{P}} }}_{0} =\mathbf{\mathfrak{C}}_{4} \left( {\left[ {a_{11}^{\bot } \,\,a_{12}^{\bot } \,\,\,-a_{21}^{\bot } \,\,-a_{22}^{\bot } \,\,\bar{{D}}} \right]} \right)=\left[ {{\it{\Delta}}_{0} \,\,{\it{\Delta}}_{22} \,\,{\it{\Delta}}_{21} \,\,{\it{\Delta}}_{12} \,\,{\it{\Delta}}_{11} } \right],\,\,{\it{\Delta}}_{0} =0 \\ {\mathcal{P}}_{0} =\mathbf{\mathfrak{C}}_{4} \left( {\left[ {\bar{{a}}_{11} ,\,\,\bar{{a}}_{12} ,\,\,\bar{{a}}_{21} ,\,\,\bar{{a}}_{22} ,\,\,\bar{{D}}} \right]} \right)=\left[ {{\it{\Delta}} \,\,{\it{\Delta}}_{22} \,\,{\it{\Delta}}_{21} \,\,{\it{\Delta}} _{12} \,\,{\it{\Delta}}_{11} } \right],\\ {\it{\Delta}} =\,\,\gamma_{22} {\it{\Delta}}_{22} +\gamma_{21} {\it{\Delta}}_{21} +\gamma_{12} {\it{\Delta}}_{12} +\gamma_{11} {\it{\Delta}} _{11} \\ \end{array} \end{align} (5.7) We suppose that the polynomials $$a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$$ are $${\rm P}-$$linearly independent.
Then
\begin{align}
D(s)&=a_{11} (s)\delta_{11} +a_{12} (s)\delta_{12} +a_{21} (s)\delta_{21} +a_{22} (s)\delta_{22}\notag\\
\delta_{11} &=\frac{\left| {\bar{{D}},\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=\frac{\left| {\bar{{D}},a_{12}^{\bot } ,a_{21}^{\bot } ,a_{22}^{\bot } } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=-{\it{\Delta}}_{11} /{\it{\Delta}}\notag\\
\delta_{12} &=\frac{\left| {\bar{{a}}_{11} ,\bar{{D}},\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=\frac{\left| {a_{11}^{\bot } ,\bar{{D}},a_{21}^{\bot } ,a_{22}^{\bot } } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\it{\Delta}}_{12} /{\it{\Delta}}\notag\\
\delta_{21} &=\frac{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{D}},\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=\frac{\left| {a_{11}^{\bot } ,a_{12}^{\bot } ,\bar{{D}},a_{22}^{\bot } } \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=-{\it{\Delta}}_{21} /{\it{\Delta}}\notag\\
\delta_{22} &=\frac{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{D}}} \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}=\frac{\left| {a_{11}^{\bot } ,a_{12}^{\bot } ,a_{21}^{\bot } ,\bar{{D}}} \right|}{\left| {\bar{{a}}_{11} ,\bar{{a}}_{12} ,\bar{{a}}_{21} ,\bar{{a}}_{22} } \right|}={\it{\Delta}}_{22} /{\it{\Delta}}
\end{align} (5.8)
Substituting the decomposition (5.4) in $$a_{11} (s)\delta_{11} +a_{12} (s)\delta_{12} +a_{21} (s)\delta_{21} +a_{22} (s)\delta_{22} =D(s)$$, we obtain
\[
a_{11}^{\bot } (s)\delta_{11} +a_{12}^{\bot } (s)\delta_{12} +a_{21}^{\bot } (s)\delta_{21} +a_{22}^{\bot } (s)\delta_{22} +\left( {\delta_{11} \gamma_{11} +\delta_{12} \gamma_{12} +\delta_{21} \gamma_{21} +\delta_{22} \gamma_{22} -1} \right)D(s)=0
\]
and, since the vectors $$a_{\zeta \xi }^{\bot }$$ are orthogonal to $$\bar{{D}}$$, the two parts must vanish separately:
\[
\left\{ {{\begin{array}{*{20}c} {\delta_{11} \gamma_{11} +\delta_{12} \gamma_{12} +\delta_{21} \gamma_{21} +\delta_{22} \gamma_{22} -1=0} \\ {a_{11}^{\bot } (s)\delta_{11} +a_{12}^{\bot } (s)\delta_{12} +a_{21}^{\bot } (s)\delta_{21} +a_{22}^{\bot } (s)\delta_{22} =0} \\ \end{array} }} \right.
\]
Moreover
\[
\begin{array}{l}
a_{cl} (s)=a_{0} (s)-a_{11} (s)\left( {h_{22} +\delta_{11} d} \right)-a_{12} (s)\left( {h_{21} +\delta_{12} d} \right)-a_{21} (s)\left( {h_{12} +\delta_{21} d} \right)-a_{22} (s)\left( {h_{11} +\delta_{22} d} \right)\Leftrightarrow \\
\Leftrightarrow a_{0} (s)-a_{cl} (s)=a_{11} (s)\left( {h_{22} +\delta_{11} d} \right)+a_{12} (s)\left( {h_{21} +\delta_{12} d} \right)+a_{21} (s)\left( {h_{12} +\delta_{21} d} \right)+a_{22} (s)\left( {h_{11} +\delta_{22} d} \right)
\end{array}
\]
so that
\begin{align}
\begin{array}{l}
h_{22} +\delta_{11} d=\dfrac{\left| {\bar{{a}}_{0} -\bar{{a}}_{cl} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}=\mathbf{D}_{22} /{\it{\Delta}} \Rightarrow h_{22} =\mathbf{D}_{22} /{\it{\Delta}} -\delta_{11} d \\[6pt]
h_{21} +\delta_{12} d=\dfrac{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{0} -\bar{{a}}_{cl} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}=\mathbf{D}_{21} /{\it{\Delta}} \Rightarrow h_{21} =\mathbf{D}_{21} /{\it{\Delta}} -\delta_{12} d \\[6pt]
h_{12} +\delta_{21} d=\dfrac{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{0} -\bar{{a}}_{cl} ,\,\bar{{a}}_{22} } \right|}{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}=\mathbf{D}_{12} /{\it{\Delta}} \Rightarrow h_{12} =\mathbf{D}_{12} /{\it{\Delta}} -\delta_{21} d \\[6pt]
h_{11} +\delta_{22} d=\dfrac{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{0} -\bar{{a}}_{cl} } \right|}{\left| {\bar{{a}}_{11} ,\,\bar{{a}}_{12} ,\,\bar{{a}}_{21} ,\,\bar{{a}}_{22} } \right|}=\mathbf{D}_{11} /{\it{\Delta}} \Rightarrow h_{11} =\mathbf{D}_{11} /{\it{\Delta}} -\delta_{22} d
\end{array}
\end{align} (5.9)
But
\begin{align}
&h_{11} h_{22} -h_{12} h_{21} =d\Rightarrow \notag\\
&\left( {{\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} } \right)d^{2}-\left( {\mathbf{D}_{22} {\it{\Delta}}_{11} +\mathbf{D}_{11} {\it{\Delta}}_{22} +\mathbf{D}_{21} {\it{\Delta}}_{12} +\mathbf{D}_{12} {\it{\Delta}}_{21} +1} \right)d+\left( {\mathbf{D}_{11} \mathbf{D}_{22} -\mathbf{D}_{12} \mathbf{D}_{21} } \right)=0\Leftrightarrow \notag\\
&\Leftrightarrow \alpha d^{2}+\beta d+\gamma =0,\qquad \mathbf{D}\left( {a_{cl} (s)} \right)=\beta^{2}-4\alpha \gamma
\end{align} (5.10)
Remark that, when the determinant $${\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21}$$ vanishes, the SOF pole placement problem is linear and has a solution whenever $$\beta \left( {a_{cl} (s)} \right)\ne 0$$. When $${\mathcal{D}}\ne 0$$, the problem has a real solution if $$\mathbf{D}\left( {a_{cl} (s)} \right)\geqslant 0$$. In the case the polynomials $$a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$$ are $${\rm P}-$$linearly dependent, similar methods show that the problem is linear whenever $${\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0$$. In the case $$D(s)\equiv 0$$, the rank of the transfer function matrix is one and, thanks to Theorem 3.1, the polynomials $$a_{11} (s),\,\,a_{12} (s),\,\,a_{21} (s),\,\,a_{22} (s)$$ are $$\mathcal{H}-$$invariant. For SOF-assignability the polynomials must be $${\rm P}-$$linearly independent, thanks to Theorem 3.2. Evidently the problem is then linear and every closed loop characteristic polynomial is assignable.
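The derivation above rests on the expression (5.3) for the first closed-loop Plücker coordinate, $$\tilde{{a}}_{0} =a_{0} +a_{11} h_{11} +a_{12} h_{21} +a_{21} h_{12} +a_{22} h_{22} +D\,d$$. This is a 2×2 determinant identity, and it can be checked symbolically; the following is an illustrative sketch using sympy (the symbol names are ours):

```python
import sympy as sp

# Symbolic check of the first closed-loop Plücker coordinate (5.3):
# for F = N/a0 and feedback gain H,
#   a0 * det(I + F H) = a0 + a11 h11 + a12 h21 + a21 h12 + a22 h22 + D d,
# with D = det(N)/a0 and d = det(H).
a0, a11, a12, a21, a22 = sp.symbols('a0 a11 a12 a21 a22')
h11, h12, h21, h22 = sp.symbols('h11 h12 h21 h22')
N = sp.Matrix([[a11, a12], [a21, a22]])
H = sp.Matrix([[h11, h12], [h21, h22]])

closed = sp.expand(a0 * (sp.eye(2) + N * H / a0).det())
formula = a0 + a11*h11 + a12*h21 + a21*h12 + a22*h22 + N.det() * H.det() / a0
assert sp.simplify(closed - formula) == 0
```

Treating the entries $$a_{ij}, h_{ij}$$ as commuting scalars suffices here, since the identity is entrywise polynomial; the closed-loop denominator is linear in the entries of $$H$$ except for the single term in $$d=\det H$$, which is the source of the quadratic (5.10).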
Remark also that for this class of systems we have $${\it{\Delta}} \ne 0,\,\,{\it{\Delta}}_{22} ={\it{\Delta}}_{21} ={\it{\Delta}}_{12} ={\it{\Delta}}_{11} =0$$. We summarize in the following. \underline{\text{Characterization of the SOF-equivalence classes as to the pole assignability}} \[ \begin{array}{*{20}c} {\mbox{SYSTEMS WITH}} \\ {(r,\,n,\,m)=(2,\,4,\,2)} \\ \end{array} \left\{ {\begin{array}{l} {\mathcal{P}}\equiv 0\,\left( {{\it{\Delta}} ={\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0} \right)\quad \underline{\mbox{Not SOF-assignable}\,\,(1)} \\ {\mathcal{P}}\not\equiv 0\left\{ {\begin{array}{l} D(s)\equiv 0\,\left( {{\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0} \right)\quad \underline{\mbox{Complete real assignability}\,\,(2)} \\ D(s)\not\equiv 0\left\{ {\begin{array}{l} {\mathcal{D}}={\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0\quad \underline{\mbox{Generic real assignability}\,\,(3)} \\ {\mathcal{D}}\neq 0\left\{ {\begin{array}{l} \mathbf{D}\left( {a_{cl} (s)} \right)\geqslant 0\quad \underline{\mbox{Real assignability of}\,\,a_{cl} (s)} \\ \mathbf{D}\left( {a_{cl} (s)} \right)<0\quad \underline{\mbox{Complex assignability of}\,\,a_{cl} (s)} \\ \end{array}} \right. \\ \end{array}} \right. \\ \end{array}} \right. \\ \end{array}} \right. \] Some of the above results are stated slightly differently in the literature. (1) The Plücker matrix is rank deficient (Giannakopoulos & Karcanias, 1985).
(2) The entries of the transfer function matrix are $$\mathbb{R}-$$linearly independent (meaning that the Plücker matrix has full rank) but the transfer function matrix is rank deficient (Brockett & Byrnes, 1981). (3) The Plücker matrix and the transfer function matrix both have full rank, but the solution is linear; this is a result of this article. We conclude for this class of systems that the degree of the solution is a property of the SOF-equivalence class. If the degree is one, i.e. $${\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} =0$$, the equivalence class is completely assignable by real output feedback in the case $${\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0$$, and generically assignable otherwise, i.e. we can assign only closed loop polynomials with $$\beta \left( {a_{cl} (s)} \right)\ne 0$$. If the degree is two, i.e. $${\it{\Delta}}_{11} {\it{\Delta}}_{22} -{\it{\Delta}}_{12} {\it{\Delta}}_{21} \neq 0$$, real assignability is not a property of the equivalence class: it depends both on the system and on the desired closed loop polynomial, through the condition $$\mathbf{D}\left( {a_{cl} (s)} \right)\geqslant 0$$. Systems having the generic properties $$\left( {{\mathcal{P}}\not\equiv 0,\,\,D(s)\not\equiv 0,\,\,{\mathcal{D}}\neq 0} \right)$$ are always assignable by a complex gain; real assignability of a given polynomial $$a_{cl} (s)$$ depends on both the system and the polynomial. The question arising naturally is what happens for systems of higher dimension. We can proceed in the same way, by calculating a canonical form and then a Groebner basis. The problem is that, even if we manage to calculate a Groebner basis whose coefficients are functions of the invariants, we cannot obtain closed formulas for the existence of real roots on the basis of the Sturm conditions, or of the so-called trace form, as we did with the second degree equation. We are condemned to proceed case by case. 5.2.
Complete SOF-invariants and constrained closed-loop dynamics

Syrmos et al. (1997) note that there is no known way to characterize the constrained dynamics of the closed-loop system under SOF in the case where not all the poles are assignable. This problem of constrained closed-loop dynamics is analogous to the problem of fixed dynamics arising in decentralized control (Anderson & Clements, 1981; Karcanias et al., 1988). In this subsection, we address this problem. It concerns systems with $$mr<n$$. The idea has its roots in the Bezoutian of scalar systems. Let $$F(s)=\frac{z(s)}{a(s)}$$ and let $$r_{1} ,r_{2}$$ be closed-loop poles. Then $$a(r_{1} )+hz(r_{1} )=a(r_{2} )+hz(r_{2} )=0$$. Eliminating the gain $$h$$ between the above equations, we obtain $$a(r_{1} )z(r_{2} )-z(r_{1} )a(r_{2} )=0\Rightarrow \mathfrak{B}(r_{1} ,r_{2} )=0$$. The Bezoutian is a closed-loop characteristic polynomial: fixing $$r_{1}$$, the roots of $$\mathfrak{B}(r_{1} ,r_{2} )=0$$ are the constrained poles of the closed-loop system. We proceed in an analogous way whenever $$mr<n$$. We eliminate the output feedback gain among $$mr+1$$ equations of the type (2.19):
\[ \tilde{Z}_{0} (r_{v}) = Z_{0} (r_{v} )+\sum\limits_{k=1}^{r} \sum\limits_{\zeta =1}^{\binom{r}{k}} \sum\limits_{\xi =1}^{\binom{m}{k}} z_{k\zeta \xi } (r_{v} )d_{k\xi \zeta },\quad v=1,2,\ldots ,mr+1 \]
In the case of two inputs and two outputs we do not need to use resultants.
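The scalar Bezoutian elimination described above is easy to check numerically: for any gain $$h$$, the Bezoutian must vanish at every pair of closed-loop poles. A minimal sketch (the data $$a(s)=s^{2}+3s+2$$, $$z(s)=s+3$$ and the gain $$h$$ are illustrative choices, not taken from the paper):

```python
import numpy as np

# Scalar case: F(s) = z(s)/a(s); closed-loop polynomial is a(s) + h*z(s).
# Eliminating h between a(r1)+h z(r1) = 0 and a(r2)+h z(r2) = 0 gives the
# Bezoutian condition  B(r1, r2) = a(r1) z(r2) - z(r1) a(r2) = 0.
a = np.poly1d([1, 3, 2])   # a(s) = s^2 + 3s + 2  (hypothetical example data)
z = np.poly1d([1, 3])      # z(s) = s + 3
h = -0.5                   # any real gain

r1, r2 = (a + h * z).r     # the two closed-loop poles
bez = a(r1) * z(r2) - z(r1) * a(r2)
print(abs(bez))            # ~0: the Bezoutian vanishes at the closed-loop poles
```

Substituting $$a(r_{i})=-hz(r_{i})$$ shows the Bezoutian vanishes identically on closed-loop pole pairs, independently of $$h$$, which is exactly why $$h$$ can be eliminated.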
The transfer function matrix when $$(r,m)=(2,2)$$ is
\[ F(s)=\frac{1}{a_{0} (s)}\left[ {\begin{array}{cc} a_{11} (s) & a_{12} (s) \\ a_{21} (s) & a_{22} (s) \end{array} } \right] \]
The Plücker coordinates are
\[ \mathfrak{P}=\left[ a_{0} (s),\ a_{21} (s),\ a_{22} (s),\ -a_{11} (s),\ -a_{12} (s),\ D(s) \right],\quad D(s)=\frac{a_{11} (s)a_{22} (s)-a_{12} (s)a_{21} (s)}{a_{0} (s)} \]
The closed-loop Plücker coordinate $$\tilde{a}_{0} (s)$$ is
\[ \tilde{a}_{0} (s)=a_{0} (s)+a_{21} (s)h_{12} +a_{22} (s)h_{22} +a_{11} (s)h_{11} +a_{12} (s)h_{21} +D(s)d \]
Suppose now that $$r_{1} ,r_{2} ,r_{3} ,r_{4} ,r_{5}$$ are closed-loop roots; then
\begin{align} 0&=a_{0} (r_{k} )+a_{21} (r_{k} )h_{12} +a_{22} (r_{k} )h_{22} +a_{11} (r_{k} )h_{11} +a_{12} (r_{k} )h_{21} +D(r_{k} )d,\quad k\in \left\{ 1,2,3,4,5 \right\}\notag\\ 0&=\left[ {\begin{array}{cccccc} a_{0} (r_{1} ) & a_{21} (r_{1} ) & a_{22} (r_{1} ) & a_{11} (r_{1} ) & a_{12} (r_{1} ) & D(r_{1} ) \\ a_{0} (r_{2} ) & a_{21} (r_{2} ) & a_{22} (r_{2} ) & a_{11} (r_{2} ) & a_{12} (r_{2} ) & D(r_{2} ) \\ a_{0} (r_{3} ) & a_{21} (r_{3} ) & a_{22} (r_{3} ) & a_{11} (r_{3} ) & a_{12} (r_{3} ) & D(r_{3} ) \\ a_{0} (r_{4} ) & a_{21} (r_{4} ) & a_{22} (r_{4} ) & a_{11} (r_{4} ) & a_{12} (r_{4} ) & D(r_{4} ) \\ a_{0} (r_{5} ) & a_{21} (r_{5} ) & a_{22} (r_{5} ) & a_{11} (r_{5} ) & a_{12} (r_{5} ) & D(r_{5} ) \end{array} } \right]\left[ {\begin{array}{c} 1 \\ h_{12} \\ h_{22} \\ h_{11} \\ h_{21} \\ d \end{array} } \right] \end{align} (5.11)
Let now
\[ \left[ D_{1} ,\ D_{2} ,\ D_{3} ,\ D_{4} ,\ D_{5} ,\ D_{6} \right]=\mathfrak{C}_{5} \left( M \right) \]
where $$M$$ is the $$5\times 6$$ coefficient matrix of (5.11). The solution to the system of equations (5.11) is
\begin{align} h_{12} &=\frac{D_{5} }{D_{6} },\quad h_{22} =-\frac{D_{4} }{D_{6} },\quad h_{11} =\frac{D_{3} }{D_{6} },\quad h_{21} =-\frac{D_{2} }{D_{6} },\quad d=\frac{D_{1} }{D_{6} }\notag\\ \frac{D_{1} }{D_{6} }&=-\frac{D_{3} }{D_{6} }\frac{D_{4} }{D_{6} }+\frac{D_{2} }{D_{6} }\frac{D_{5} }{D_{6} }\ \Rightarrow\ D_{1} D_{6} -D_{2} D_{5} +D_{3} D_{4} =0 \end{align} (5.12)
The Plücker quadratic relation (5.12), $$D_{1} D_{6} -D_{2} D_{5} +D_{3} D_{4} =0$$, is $${\rm E}$$-invariant and describes the closed-loop dynamics. Fixing $$r_{1} ,r_{2} ,r_{3} ,r_{4}$$ arbitrarily, the roots of (5.12) in $$r_{5}$$ describe the constrained closed-loop dynamics.

Example 5.1
\begin{align*} F(s)&=\left[ {\begin{array}{cc} 1/(s+2) & 1/(s+4) \\ (2s+4)/\left( (s+1)(s+3) \right) & 1/(s+5) \end{array}} \right] \\ \mathfrak{P}&=\left[ {\begin{array}{l} s^{5}+15s^{4}+85s^{3}+225s^{2}+274s+120 \\ s^{4}+13s^{3}+59s^{2}+107s+60 \\ 2s^{4}+26s^{3}+120s^{2}+232s+160 \\ s^{4}+11s^{3}+41s^{2}+61s+30 \\ s^{4}+10s^{3}+35s^{2}+50s+24 \\ -s^{3}-10s^{2}-29s-28 \end{array}} \right]^{{\rm T}} \end{align*}
The multivariate polynomial describing the closed-loop dynamics takes more than one page, so we do not present it here.
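Although the invariant polynomial is unwieldy symbolically, the constrained poles of Example 5.1 can be cross-checked numerically without expanding it: for a trial fifth root $$r_{5}$$, solve the linear system (5.11) for $$(h_{12},h_{22},h_{11},h_{21},d)$$ and monitor the residual $$d-(h_{11}h_{22}-h_{12}h_{21})$$, whose zeros in $$r_{5}$$ are the constrained closed-loop poles. A sketch assuming numpy; the helper names and bisection brackets are our choices, not the paper's:

```python
import numpy as np

# Open-loop polynomials of Example 5.1 (the Plücker coordinates listed above)
a0  = np.poly1d([1, 15, 85, 225, 274, 120])   # (s+1)(s+2)(s+3)(s+4)(s+5)
a11 = np.poly1d([1, 13, 59, 107, 60])
a21 = np.poly1d([2, 26, 120, 232, 160])
a12 = np.poly1d([1, 11, 41, 61, 30])
a22 = np.poly1d([1, 10, 35, 50, 24])
D   = np.poly1d([-1, -10, -29, -28])          # a0(s) * det F(s)

def row(r):
    # One row of (5.11): coefficients of (1, h12, h22, h11, h21, d)
    return np.array([a0(r), a21(r), a22(r), a11(r), a12(r), D(r)])

r_fixed = [-0.5, -1.5, -2.5, -3.5]            # the four poles fixed in the paper

def residual(r5):
    """Solve (5.11) for (h12, h22, h11, h21, d); return d - det H.
    Zeros of this residual in r5 are the constrained closed-loop poles."""
    M = np.array([row(r) for r in r_fixed + [r5]])
    h12, h22, h11, h21, d = np.linalg.solve(M[:, 1:], -M[:, 0])
    return d - (h11 * h22 - h12 * h21)

def bisect(f, lo, hi, tol=1e-12):
    # Plain bisection on a bracketed sign change
    flo = f(lo)
    assert flo * f(hi) < 0, "bracket must straddle a sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

constrained = [bisect(residual, -4.7, -4.3), bisect(residual, -2.3, -2.0)]
print(constrained)   # the two constrained fifth poles
```

The residual formulation sidesteps any sign convention in the compound matrix $$\mathfrak{C}_{5}$$: it imposes directly that the fifth unknown $$d$$ of the linear solve equals $$\det H$$, which is the content of relation (5.12).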
Fixing $$r_{1} =-0.5,\ r_{2} =-1.5,\ r_{3} =-2.5,\ r_{4} =-3.5$$, we obtain the polynomial describing the constrained dynamics:
\[ -\frac{237555}{16}r_{5}^{2} -\frac{792135}{8}r_{5} -\frac{2299275}{16} \]
Its roots are $$-4.5346$$ and $$-2.1345$$. The gain assigning the poles in the case $$r_{5} =-4.5346$$ is
\[ H=\left[ {\begin{array}{cc} -0.3484 & -0.0273 \\ -1.0279 & -1.0345 \end{array}} \right] \]
The gain assigning the poles in the case $$r_{5} =-2.1345$$ is
\[ H=\left[ {\begin{array}{cc} 0.1110 & -0.1679 \\ 2.5348 & -7.1755 \end{array}} \right] \]

6. Conclusions

In this article, we constructed complete systems of independent SOF-invariants and canonical forms for strictly proper transfer function matrices of full rank, thereby addressing a long-standing open problem in control theory. To reinforce our arguments about the importance of complete invariants, we used them to answer concrete control problems. For the class of two-input, two-output, four-state systems, we used the above invariants to characterize the assignability properties of the SOF-equivalence classes. The result is that exact SOF-assignability is a property of the equivalence class only for the non-generic equivalence classes ($${\it{\Delta}}_{11} ={\it{\Delta}}_{22} ={\it{\Delta}}_{12} ={\it{\Delta}}_{21} =0$$), and a property of both the equivalence class and the polynomial under assignment otherwise. For the class of systems with two inputs, two outputs and more than four states, we gave an invariant multivariate polynomial describing the closed-loop dynamics, and used this polynomial to calculate a feedback gain partially assigning the poles of the system.

Acknowledgements

We would like to thank Professor N. Karcanias for the very long discussions on the SOF-invariants, Professor N. Tzanakis for his help with the formalism of the second section, and the reviewers for the time they spent reading this paper as well as for their remarks.

References

Anderson B. D. O.
& Clements D. J. (1981) Algebraic characterization of fixed modes in decentralized control. Automatica, 17, 703–712.

Brockett R. W. & Byrnes C. I. (1981) Multivariable Nyquist criteria, root loci and pole placement: a geometric viewpoint. IEEE Trans. Autom. Control, AC-26, 271–284.

Byrnes C. I. & Crouch P. E. (1985) Geometric methods for the classification of linear feedback systems. Syst. Control Lett., 6, 239–246.

Gantmacher F. R. (2000) The Theory of Matrices. Providence, RI: AMS Chelsea Publishing.

Giannakopoulos C. & Karcanias N. (1985) Pole assignment of strictly proper and proper linear systems by constant output feedback. Int. J. Control, 42, 543–565.

Helmke U. & Fuhrmann P. A. (1989) Bezoutians. Linear Algebra Appl., 122–124, 1039–1097.

Hinrichsen D. & Pratzel-Wolters D. (1983) A canonical form for static output feedback. Universität Bremen, Report 101.

Karcanias N. & Giannakopoulos C. (1984) Grassmann invariants, almost zeros and the determinantal zero, pole assignment problems of linear systems. Int. J. Control, 40, 673–698.

Karcanias N., Laios B. & Giannakopoulos C. (1988) The decentralised determinantal assignment problem: fixed and almost fixed modes and zeros. Int. J. Control, 48, 129–147.

Karcanias N. & Leventides J. (2016) Solution of the determinantal assignment problem using the Grassmann matrices. Int. J. Control, 89, 352–367.

Kim S. W. & Lee E. B. (1995) Complete feedback invariant form for linear output feedback. Proc. IEEE Conf. on Decision and Control, New Orleans, LA, USA.

Leventides J. & Karcanias N. (2008) Structured squaring down and zero assignment. Int. J. Control, 81, 294–306.

MacLane S. & Birkoff G. (1999) Algebra. Providence, RI: AMS Chelsea Publishing.

Popov V. M. (1972) Invariant description of linear time-invariant controllable systems. SIAM J. Control, 10, 252–264.

Prells U., Friswell M. I. & Garvey S. D. (2003) Use of geometric algebra: compound matrices and the determinant of the sum of two matrices. Proc. R. Soc. London A, 459, 273–285.

Ravi M. S., Rosenthal J. & Helmke U. (2002) Output feedback invariants. Linear Algebra Appl., 351–352, 623–637.

Syrmos V. L., Abdallah C. T., Dorato P. & Grigoriadis K. (1997) Static output feedback—a survey. Automatica, 33, 125–137.

Wedderburn J. H. M. (1934) Lectures on Matrices, vol. XVII. New York: American Mathematical Society Colloquium Publications.

Yannakoudakis A. (1980) Invariant algebraic structures in multivariable control theory. Preprint, Laboratoire d'Automatique de Grenoble, November 1980.

Yannakoudakis A. (2007) Output feedback equivalence. European Control Conference, July 2–5, Kos Island, Greece, paper TuD14.2.

Yannakoudakis A. (2013a) Full static output feedback equivalence. Journal of Control Science and Engineering, vol. 2013, https://doi.org/10.1155/2013/491709.

Yannakoudakis A. (2013b) Distribution of invariant indices and static output feedback pole placement. MED Conference on Control and Automation, Platanias, Crete, Greece, June, https://doi.org/10.1109/MED.2013.6608843.

Yannakoudakis A. (2015) The static output feedback from the invariant point of view. IMA J. Math. Control Inform., https://doi.org/10.1093/imamci/dnu057.

© The authors 2018.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

### Journal

IMA Journal of Mathematical Control and Information, Oxford University Press

Published: Jan 8, 2018
