$$H_{2}$$ model order reduction based on gramians of discrete-time bilinear systems

Abstract

In this paper, on the basis of gramians, $$H_{2}$$-optimal model order reduction of discrete-time bilinear dynamic systems is investigated. First, based on the reachability gramian and the observability gramian, the $$H_{2}$$-optimal necessary conditions of the discrete-time bilinear system are derived and the corresponding algorithm is proposed. When the algorithm is convergent, the resulting reduced system satisfies the necessary conditions. Then, for the single-input and single-output system, the $$H_2$$-optimal necessary conditions based on the generalized cross gramian are presented and the related algorithm is accordingly generated. Similarly, the convergent algorithm makes the reduced system satisfy the necessary conditions. Finally, two numerical examples are employed to illustrate the effectiveness of the proposed methods.

1. Introduction

In integrated circuits, automatic control and other engineering fields, there are many complex dynamic systems described by differential or difference equations. They generally have large dimension and are difficult to simulate on a computer. Model order reduction (MOR) is an effective way to deal with these problems. It uses a lower-dimensional system to approximate the original system so that the storage and computation costs can be reduced considerably (see Antoulas, 2005; Jiang, 2010). Recently, many researchers have been paying attention to MOR and the relevant methods have been extensively investigated (see Lin et al., 2009; Knockaert et al., 2011; Jiang & Chen, 2012; Ibrir & Bettayeb, 2014). MOR of linear time-invariant systems has been studied thoroughly over the past several decades. Many methods exist, such as balanced truncation methods (see Moore, 1981; Imran & Ghafoor, 2016), orthogonal polynomial MOR methods (see Jiang & Chen, 2012; Yuan et al., 2015) and Krylov subspace methods (see Lee et al., 2006; Lin et al., 2010; Druskin & Simoncini, 2011). Besides these, MOR of some special linear systems has also been explored. Gao et al. have considered MOR of switched hybrid systems (see Gao et al., 2006). MOR of Markovian jump systems can be found in the study by Wei et al. (2014). Wu & Zheng (2009) have discussed weighted $$H_{\infty }$$ MOR for linear switched systems with time-varying delay. Bilinear systems form a special class of non-linear systems. They arise as natural models for a variety of physical and biomedical processes (see Bai & Skoogh, 2006). Moreover, the Carleman bilinearization of non-linear systems also results in bilinear systems (see Sastry, 1999). Since the bilinear system has a simpler structure than the general non-linear system and approximates the non-linear system better than the linear system does, many researchers have devoted attention to it. Basic knowledge of bilinear systems can be found in the study by D’Alessandro et al. (1974) and in the book by Mohler (1991). Over the past few decades, MOR methods for bilinear systems have been proposed in the literature (see Shaker & Tahavori, 2014a,b; Jazlan et al., 2016). Based on the block Krylov subspace, Lin et al. have explored the moment-matching MOR method for the bilinear system (see Lin et al., 2009).
The $$H_{2}$$ norm of a system is the expected root-mean-square value of the output when the input is a unit-variance white noise process (see Zhang & Lam, 2002). It is one of the most commonly used norms for measuring the MOR error. Relevant work on linear systems can be found in the studies by Wilson (1974), Yan & Lam (1999), Gugercin et al. (2008) and Jiang & Xu (2017). Zhang and Lam have defined the $$H_{2}$$ norm of the bilinear system and obtained the first-order necessary conditions of $$H_{2}$$ optimality (see Zhang & Lam, 2002). Benner and Breiten have specialized the necessary conditions and extended the iterative rational Krylov algorithm to the bilinear system (see Benner & Breiten, 2012). They have also noted that the resulting reduced system satisfies the necessary conditions when the algorithm converges. The generalized cross gramian of the bilinear system was first introduced in Shaker & Tahavori (2015), where its existence condition was also given. Furthermore, Jazlan et al. (2017) have focused on the development of frequency interval cross gramians for both linear and bilinear systems, and new generalized Sylvester equations for computing the frequency interval cross gramians more efficiently have been derived. In addition, Xu et al. have viewed the $$H_{2}$$-optimal problem as a minimization problem on the Grassmann manifold and employed optimization techniques to generate the reduced system (see Xu et al., 2015). Compared with continuous-time bilinear systems, MOR of discrete-time bilinear systems has received less attention. Zhang et al. (2003) first defined the reachability gramian and the observability gramian of the discrete-time bilinear system and then applied the balanced truncation method to generate the reduced system. Benner et al. (2010) have proposed a Krylov subspace MOR method for the discrete-time bilinear system based on multimoments. Using the multimoments, Benner et al. have defined the $$H_{2}$$ norm of the discrete-time bilinear system. According to the gramians and the pole-residue formula of the generalized transfer function, different expressions of the $$H_{2}$$ norm have been given, and the generalized tangential interpolation for the discrete-time bilinear system has been discussed (see Benner et al., 2011). In addition, the Laguerre orthogonal polynomial MOR method for the discrete-time bilinear system has been explored by Wang & Jiang (2016), which employs the discrete Laguerre series to approximate the stable transfer function. For discrete-time linear systems, Bunse-Gerstner et al. (2012) have presented the necessary conditions for $$H_{2}$$-optimal MOR. This motivates us to explore $$H_{2}$$-optimal MOR based on gramians for discrete-time bilinear systems. In this paper, we investigate MOR methods of discrete-time bilinear systems based on gramians. Using the $$H_{2}$$-norm expression, we first derive the $$H_{2}$$-optimal necessary conditions based on the reachability gramian and the observability gramian. When the coefficient matrices of the bilinear terms are 0, the conditions reduce to the Wilson conditions for the discrete-time linear system. Next, an iterative algorithm based on these necessary conditions is established to generate the reduced system. It is worth mentioning that the reduced system satisfies the necessary conditions when the algorithm is convergent.
Concerning the single-input and single-output (SISO) system, the $$H_{2}$$ norm can be expressed by the generalized cross gramian. Then, the $$H_{2}$$-optimal necessary conditions are studied and the corresponding MOR algorithm is generated. Accordingly, the reduced system resulting from the convergent algorithm also satisfies the related necessary conditions. Finally, two numerical examples are utilized to illustrate the efficiency of our proposed methods.

This paper is organized as follows. In Section 2, we introduce some preliminary knowledge of the discrete-time bilinear system. In Section 3, the $$H_{2}$$-optimal MOR method based on the reachability gramian and the observability gramian is studied. In Section 4, we express the $$H_{2}$$ norm of SISO systems by the generalized cross gramian. Then, the $$H_{2}$$-optimal necessary conditions based on the generalized cross gramian are presented. Two numerical examples are given to verify the effectiveness of the proposed methods in Section 5. Some conclusions are drawn in Section 6.

2. Preliminaries

We consider the following discrete-time bilinear system:   \begin{align} \Sigma: \begin{cases} x(k+1)=Ax(k)+\sum\limits_{i=1}^{m}N_{i}x(k)u_{i}(k)+Bu(k)\text{,}\\[10pt] y(k)=Cx(k)\text{,} \end{cases} \end{align} (2.1) where A, $$N_{i}\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$ and $$C\in \mathbb{R}^{p\times n}$$ are the coefficient matrices of the system (2.1). $$x(k)\in \mathbb{R}^{n}$$ is the state variable, and $$u(k)\in \mathbb{R}^{m}$$ and $$y(k)\in \mathbb{R}^{p}$$ denote the input variable and the output variable of the system (2.1), respectively. $$u_{i}(k)$$ is the i-th element of u(k). The aim of this paper is to construct two transformation matrices W, $$V\in \mathbb{R}^{n\times r}$$ with $$W^{T}V=I$$ such that   \begin{align}\Sigma_{r}:\begin{cases} x_{r}(k+1)=A_{r}x_{r}(k)+\sum\limits_{i=1}^{m}N_{ir}x_{r}(k)u_{i}(k)+B_{r}u(k)\text{,}\\[10pt] y_{r}(k)=C_{r}x_{r}(k)\text{,} \end{cases}\end{align} (2.2) where $$A_{r}=W^{T}AV$$, $$N_{ir}=W^{T}N_{i}V\in \mathbb{R}^{r\times r}$$, $$B_{r}=W^{T}B\in \mathbb{R}^{r\times m}$$, $$C_{r}=CV\in \mathbb{R}^{p\times r}$$ and r ≪ n. $$x_{r}(k)\in \mathbb{R}^{r} $$ and $$y_{r}(k)\in \mathbb{R}^{p}$$ represent the state variable and the output variable of the system (2.2), respectively. When p = m = 1, the system (2.1) is an SISO discrete-time bilinear system. We let $$b=B\in \mathbb{R}^{n}$$, $$c=C\in \mathbb{R}^{1\times n}$$ and $$N=N_{1}\in \mathbb{R}^{n\times n}$$. Its reduced system has the same form as the reduced system (2.2).
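To make the state-space notation above concrete, the following MATLAB sketch simulates the response of a discrete-time bilinear system of the form (2.1). It is only an illustrative fragment, not code from the paper; the function name and interface are our choice, and the matrices A, N{i}, B, C, the input sequence u, the initial state x0 and the horizon K are assumed to be supplied by the user.

```matlab
% Minimal simulation sketch for the discrete-time bilinear system (2.1).
% Assumed inputs: A (n-by-n), N (1-by-m cell of n-by-n matrices), B (n-by-m),
% C (p-by-n), u (m-by-K input samples), x0 (n-by-1 initial state).
function y = simulate_bilinear(A, N, B, C, u, x0)
    [m, K] = size(u);
    x = x0;
    y = zeros(size(C, 1), K);
    for k = 1:K
        y(:, k) = C * x;                       % output y(k) = C x(k)
        bil = zeros(size(x));
        for i = 1:m
            bil = bil + N{i} * x * u(i, k);    % bilinear term sum_i N_i x(k) u_i(k)
        end
        x = A * x + bil + B * u(:, k);         % state update of (2.1)
    end
end
```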
In the following, we introduce the gramians of the discrete-time bilinear system.

Definition 2.1 (Zhang et al., 2003) Given the system (2.1), let   \begin{align*} P_{1}(k_{1})=&\,A^{k_{1}}B\text{,}~~ P_{i}(k_{1}\text{,}\cdots\text{,}~k_{i})=A^{k_{i}}[N_{1}P_{i-1}\text{,}\cdots\text{,}~N_{m}P_{i-1}]\text{,}\\[10pt] Q_{1}(k_{1})=&\,CA^{k_{1}}\text{,}\quad Q_{i}(k_{1}\text{,}\cdots\text{,}~k_{i})= \begin{bmatrix} Q_{i-1}N_{1} \\[2pt] Q_{i-1}N_{2} \\[2pt] \vdots \\[2pt] Q_{i-1}N_{m} \end{bmatrix}A^{k_{i}}\text{,}\quad i= 2,~3,\cdots\text{.} \end{align*}The reachability gramian P and the observability gramian Q of the system (2.1) are defined as   \begin{align} P=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{P_{i}{{P_{i}^{T}}}}\text{,} \end{align} (2.3)  \begin{align} Q=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{{{Q_{i}^{T}}}Q_{i}}. \end{align} (2.4)

Reachability and observability are two basic properties of bilinear systems and their definitions can be found in the study by Dorissen (1989). Furthermore, the reachability gramian P and the observability gramian Q satisfy two generalized Lyapunov equations, which are given by the following lemma.

Lemma 2.1 (Zhang et al., 2003) If A is stable, then P and Q defined in (2.3) and (2.4) satisfy the following generalized Lyapunov equations:   \begin{align} APA^{T}-P+\sum_{i=1}^{m}N_{i}P{{N_{i}^{T}}}+BB^{T}=0\text{,} \end{align} (2.5)  \begin{align} {\hskip-5pt}A^{T}QA-Q+\sum_{i=1}^{m}{{N_{i}^{T}}}QN_{i}+C^{T}C=0. \end{align} (2.6)

When the system (2.1) is an SISO system, we have   \begin{align*} P_{1}(k_{1})=&\,A^{k_{1}}b\text{,}~~ P_{i}(k_{1}\text{,}~k_{2}\text{,}\cdots\text{,}~k_{i})=A^{k_{i}}NP_{i-1}\text{,}\\ Q_{1}(k_{1})=&\,cA^{k_{1}}\text{,}~~ Q_{i}(k_{1}\text{,}~k_{2}\text{,}\cdots\text{,}~k_{i})=Q_{i-1}NA^{k_{i}}\text{,}\quad i= 2,~3,\cdots.\end{align*}Then, the generalized cross gramian of the system (2.1) is defined as (see Shaker & Tahavori, 2015)   \begin{align} R=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{P_{i}Q_{i}}. \end{align} (2.7) For the generalized cross gramian, a conclusion similar to Lemma 2.1 holds. Namely, if A is stable, R satisfies the following generalized Sylvester equation (see Shaker & Tahavori, 2015):   \begin{align} ARA-R+NRN+bc=0. \end{align} (2.8)

If the system (2.1) is reachable and observable, then the reachability gramian P and the observability gramian Q are positive definite (see Zhang et al., 2003). In the following, we always assume that the system (2.1) is reachable and observable. Then, the $$H_{2}$$ norm of $$\Sigma $$ can be expressed as (see Benner et al., 2011)   \begin{align} \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}(CPC^{T})=\textrm{tr}(B^{T}QB)\text{,} \end{align} (2.9) where tr(⋅) signifies the trace of the corresponding matrix. If the reachability gramian P exists and (2.3) is convergent, then P can be computed by means of the vectorization operator vec(⋅). A property of the operator vec(⋅) is that $$\textrm{vec}(XYZ)=(Z^{T}\otimes X)\textrm{vec}(Y)$$, where the notation ‘⊗’ represents the Kronecker product. Taking the vectorization on both sides of (2.5) and applying this property, we obtain   \begin{align} \left(A\otimes A-I_{n}\otimes I_{n}+\sum_{i=1}^{m}N_{i}\otimes N_{i}\right)\textrm{vec}(P)=\textrm{vec}(-BB^{T})\text{,} \end{align} (2.10) where $$I_{n}$$ denotes the identity matrix of dimension n. If $$A\otimes A-I_{n}\otimes I_{n}+\sum _{i=1}^{m}N_{i}\otimes N_{i}$$ is non-singular, then P is the unique solution of (2.5), i.e.   \begin{align} \textrm{vec}(P)=\left(A\otimes A-I_{n}\otimes I_{n}+\sum_{i=1}^{m}N_{i}\otimes N_{i}\right)^{-1}\textrm{vec}(-BB^{T}). \end{align} (2.11) Similarly, if the observability gramian Q exists, (2.4) is convergent and $$A^{T}\otimes A^{T}-I_{n}\otimes I_{n}+\sum _{i=1}^{m}{{N_{i}^{T}}}\otimes{{N_{i}^{T}}}$$ is non-singular, then Q is the unique solution of (2.6), i.e.   \begin{align} \textrm{vec}(Q)=\left(A^{T}\otimes A^{T}-I_{n}\otimes I_{n}+\sum_{i=1}^{m}{{N_{i}^{T}}}\otimes{{N_{i}^{T}}}\right)^{-1}\textrm{vec}(-C^{T}C). \end{align} (2.12)
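Since (2.11) and (2.12) are the computational basis for everything that follows, a small MATLAB sketch of this vectorization approach is given here. It is a direct transcription of (2.10)–(2.12) together with the H2-norm formula (2.9); the function name and interface are our choice, and it assumes the problem is small enough for the $$n^{2}\times n^{2}$$ Kronecker matrix to be formed explicitly, as in the numerical examples of Section 5.

```matlab
% Sketch: solve the generalized Lyapunov equations (2.5)/(2.6) via
% vectorization, and evaluate the H2 norm (2.9).
% N is a cell array {N_1, ..., N_m} of the bilinear coefficient matrices.
function [P, Q, h2] = bilinear_gramians(A, N, B, C)
    n = size(A, 1);
    KP = kron(A, A) - eye(n^2);
    KQ = kron(A', A') - eye(n^2);
    for i = 1:numel(N)
        KP = KP + kron(N{i}, N{i});        % A⊗A - I + sum_i N_i⊗N_i, cf. (2.10)
        KQ = KQ + kron(N{i}', N{i}');
    end
    P = reshape(KP \ (-reshape(B*B', [], 1)), n, n);   % vec(P), cf. (2.11)
    Q = reshape(KQ \ (-reshape(C'*C, [], 1)), n, n);   % vec(Q), cf. (2.12)
    h2 = sqrt(trace(C*P*C'));              % ||Sigma||_{H2}, cf. (2.9)
end
```

For large n the Kronecker matrices should be built from sparse factors or the equations solved by a dedicated Lyapunov solver; the dense form above is only meant to mirror (2.11) and (2.12) literally.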
3. $$H_{2}$$ MOR based on the reachability gramian and the observability gramian

In this section, we derive the $$H_{2}$$-optimal necessary conditions based on the reachability gramian and the observability gramian. Then, an algorithm is established to generate the reduced system. Finally, we show that the resulting reduced system satisfies the necessary conditions when the algorithm converges.

The aim of $$H_{2}$$-optimal MOR is to construct a reduced system $$\Sigma _{r}$$ that approximates the original system $$\Sigma $$ as well as possible in the $$H_{2}$$-norm sense, that is, to minimize $$\|\Sigma -\Sigma _{r}\|_{H_{2}}$$. To this end, we form the following error system:   \begin{align}\hat{\Sigma}: \begin{cases}\hat{x}(k+1)=\hat{A}\hat{x}(k)+\sum\limits_{i=1}^{m}\hat{N}_{i}\hat{x}(k)u_{i}(k)+\hat{B}u(k)\text{,}\\[6pt] \hat{y}(k)=\hat{C}\hat{x}(k), \end{cases} \end{align} (3.1) where   $$ \hat{x}(k)= \begin{bmatrix} x(k) \\ x_{r}(k) \end{bmatrix}\text{,}~~ \hat{y}(k)= \begin{bmatrix} y(k) \\ y_{r}(k) \end{bmatrix}\text{,}~~ \hat{A}= \begin{bmatrix} A & 0\\ 0 & A_{r} \end{bmatrix}\text{,}~~ \hat{N}_{i}= \begin{bmatrix} N_{i} & 0\\ 0 & N_{ir} \end{bmatrix}\text{,}~~ \hat{B}= \begin{bmatrix} B \\ B_{r} \end{bmatrix}\text{,}~~ \hat{C}= \begin{bmatrix} C & -C_{r} \end{bmatrix}. $$We assume that the error system $$\hat{\Sigma }$$ is stable and that the reachability gramian $$\hat{P}$$ and the observability gramian $$\hat{Q}$$ exist. Then, $$\hat{P}$$ and $$\hat{Q}$$ satisfy the following generalized Lyapunov equations:   \begin{align} \hat{A}\hat{P}\hat{A}^{T}-\hat{P}+\sum_{i=1}^{m}\hat{N}_{i}\hat{P}\hat{N}_{i}^{T}+\hat{B}\hat{B}^{T}=0\text{,} \end{align} (3.2)  \begin{align} {\hskip-5pt}\hat{A}^{T}\hat{Q}\hat{A}-\hat{Q}+\sum_{i=1}^{m}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}+\hat{C}^{T}\hat{C}=0. \end{align} (3.3)

The $$H_{2}$$ norm of the error system can be expressed as   $$ \|\hat{\Sigma}\|_{H_{2}}^{2}=\textrm{tr}(\hat{C}\hat{P}\hat{C}^{T})=\textrm{tr}(\hat{B}^{T}\hat{Q}\hat{B}).$$Thus, finding the $$H_{2}$$-optimal reduced system is converted into solving the following minimization problem:   \begin{align} \mathfrak{J}_{a}(A_{r}\text{,}~N_{ir}\text{,}~B_{r}\text{,}~C_{r}):=\min\|\hat{\Sigma}\|_{H_{2}}. \end{align} (3.4)

In order to further study the minimization problem (3.4), we partition $$\hat{P}$$ and $$\hat{Q}$$ as   \begin{align} \hat{P}= \begin{bmatrix} P_{11} & P_{12}\\ P_{12}^{T} & P_{22} \end{bmatrix}\text{,} ~~\hat{Q}= \begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{T} & Q_{22} \end{bmatrix}, \end{align} (3.5) where $$P_{11}$$, $$Q_{11}\in \mathbb{R}^{n\times n}$$ and $$P_{22}$$, $$Q_{22}\in \mathbb{R}^{r\times r}$$. Suppose that the reduced system is reachable and observable. The reachability gramian and the observability gramian of the reduced system are denoted by $$P_{r}$$ and $$Q_{r}$$, respectively. Substituting (3.5) into (3.2) and (3.3), we have   $$ P_{11}=P\text{,}~~ Q_{11}=Q\text{,}~~ P_{22}=P_{r}\text{,}~~ Q_{22}=Q_{r}.$$ Moreover, $$P_{12}$$ and $$Q_{12}$$ satisfy   \begin{align} AP_{12}{{A_{r}^{T}}}-P_{12}+\sum_{i=1}^{m}N_{i}P_{12}N_{ir}^{T}+B{{B_{r}^{T}}}=0\text{,} \end{align} (3.6)  \begin{align} {\hskip-15pt}A^{T}Q_{12}A_{r}-Q_{12}+\sum_{i=1}^{m}{{N_{i}^{T}}}Q_{12}N_{ir}-C^{T}C_{r}=0. \end{align} (3.7)

Next, we derive the $$H_{2}$$-optimal necessary conditions for the discrete-time bilinear system.

Theorem 3.1 Let $$\Sigma _{r}$$ be the solution of the minimization problem (3.4) and assume that $$\Sigma _{r}$$ is stable.
Then, the following equations hold:   \begin{align} {\hskip-23pt}Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=0\text{,} \end{align} (3.8)  \begin{align} {\hskip-30pt}Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r}=0\text{,} \end{align} (3.9)  \begin{align} CP_{12}-C_{r}P_{r}=0\text{,} \end{align} (3.10)  \begin{align} Q_{12}^{T}B+Q_{r}B_{r}=0. \end{align} (3.11) Proof. Let $$M:=\hat{C}^{T}\hat{C}$$, $$S:=\hat{B}\hat{B}^{T}$$ and $$\mathfrak{E}:= \|\hat{\Sigma } \|_{H_{2}}$$. Then $$\mathfrak{E}^{2}=\textrm{tr} (\hat{C}\hat{P}\hat{C}^{T} )=\textrm{tr} (\hat{P}M )$$. As the derivation procedures are similar, let $$\beta $$ be an arbitrary parameter of the minimization problem (3.4). Differentiating $$\mathfrak{E}^{2}$$ with respect to $$\beta $$, we can obtain   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}=\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta}M\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.12)Substituting (3.3) into (3.12), it yields that   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}=-\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta} \hat{A}^{T}\hat{Q}\hat{A}-\frac{\partial\hat{P}}{\partial\beta}\hat{Q}+ \sum_{i=1}^{m}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.13)Differentiating both sides of (3.2) with respect to $$\beta $$, we have   \begin{align} \frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}+\hat{A}\frac{\partial\hat{P}}{\partial\beta}\hat{A}^{T} +\hat{A}\hat{P}\frac{\partial\hat{A}^{T}}{\partial\beta}-\frac{\partial\hat{P}}{\partial\beta} +\sum_{i=1}^{m}\left(\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}+\hat{N}_{i}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T} +\hat{N}_{i}\hat{P}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\right)+\frac{\partial S}{\partial\beta}=0. \end{align} (3.14)Multiply (3.14) by $$\hat{Q}$$ from the left and take the trace on both sides. 
Then, it follows that   \begin{multline} -\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta}\hat{A}^{T}\hat{Q}\hat{A} -\frac{\partial\hat{P}}{\partial\beta}\hat{Q} +\sum_{i=1}^{m}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}\right) \\ = \textrm{tr}\left(\frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}\hat{Q} +\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}\hat{Q}+\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P} +\frac{\partial S}{\partial\beta}\hat{Q}\right).\end{multline} (3.15) Substituting (3.15) into (3.13), we get the following equation:   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}&= \textrm{tr}\left(\frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}\hat{Q} +\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}\hat{Q} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P} +\frac{\partial S}{\partial\beta}\hat{Q}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right)\notag\\ &=2\textrm{tr}\left(\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P}\right)+ 2\textrm{tr}\left(\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P}\right) +\textrm{tr}\left(\frac{\partial S}{\partial\beta}\hat{Q}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.16) Let $$a_{r}^{\,jk}$$, $$n_{ir}^{\,jk}$$, $$b_{r}^{\,jk}$$ and $$c_{r}^{\,jk}$$ be the $$(\,j, k) $$-th elements of the matrices $$A_{r}$$, $$N_{ir}$$, $$B_{r}$$ and $$C_{r}$$, respectively. We first let $$\beta =a_{r}^{\,jk}$$. According to (3.16), we have   $$ \frac{\partial\mathfrak{E}^{2}}{\partial a_{r}^{\,jk}}=2\textrm{tr}\left(\frac{\partial{{A_{r}^{T}}}}{\partial a_{r}^{\,jk}} (Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r})\right).$$If $$\Sigma _{r}$$ is $$H_{2}$$ optimal, then $$\frac{\partial \mathfrak{E}^{2}}{\partial a_{r}^{\,jk}}=0$$ holds for all $$a_{r}^{\,jk}$$ $$(\,j, k = 1, \cdots , r) $$ and $$Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=0$$ follows. Similarly, let $$\beta =n_{ir}^{\,jk}$$. It yields that   $$ \frac{\partial\mathfrak{E}^{2}}{\partial n_{ir}^{\,jk}}=2\textrm{tr}\left(\frac{\partial N_{ir}^{T}}{\partial n_{ir}^{jk}} (Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r})\right).$$Since $$\frac{\partial \mathfrak{E}^{2}}{\partial n_{ir}^{\,jk}}=0$$, we obtain $$Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r}=0.$$ Furthermore, we let $$\beta =b_{r}^{\,jk}$$ and $$c_{r}^{\,jk}$$, respectively. Equations (3.10) and (3.11) can be verified from $$\frac{\partial \mathfrak{E}^{2}}{\partial \beta }=0$$ in the same way. Thus, the proof of Theorem 3.1 is accomplished.

In the following, we establish the corresponding MOR algorithm based on Theorem 3.1. Suppose that the reduced system $$\Sigma _{r}$$ is stable, reachable and observable. Then, $$P_{r}$$ and $$Q_{r}$$ are positive definite.
By Theorem 3.1, we can get   \begin{align} A_{r}&=-Q_{r}^{-1}Q_{12}^{T}AP_{12}P_{r}^{-1}\text{,}\notag\\ N_{ir}&=-Q_{r}^{-1}Q_{12}^{T}N_{i}P_{12}P_{r}^{-1}\text{,}\notag\\ C_{r}&=CP_{12}P_{r}^{-1}\text{,}\notag\\ B_{r}&=-Q_{r}^{-1}Q_{12}^{T}B.\notag \end{align}Since $$P_{r}$$ and $$Q_{r}$$ are the reachability gramian and the observability gramian of the reduced system (2.2), they satisfy the following generalized Lyapunov equations:   \begin{align} A_{r}P_{r}{{A_{r}^{T}}}-P_{r}+\sum_{i=1}^{m}N_{ir}P_{r}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0\text{,} \end{align} (3.17)  \begin{align} {\hskip-5pt}{{A_{r}^{T}}}Q_{r}A_{r}-Q_{r}+\sum_{i=1}^{m}N_{ir}^{T}Q_{r}N_{ir}+{{C_{r}^{T}}}C_{r}=0. \end{align} (3.18) Premultiplying (3.6) by $$Q_{r}^{-1}Q_{12}^{T}$$, we have   \begin{align} Q_{r}^{-1}Q_{12}^{T}AP_{12}{{A_{r}^{T}}}-Q_{r}^{-1}Q_{12}^{T}P_{12}+ \sum_{i=1}^{m}Q_{r}^{-1}Q_{12}^{T}N_{i}P_{12}N_{ir}^{T}+Q_{r}^{-1}Q_{12}^{T}B{{B_{r}^{T}}}=0. \end{align} (3.19) Substituting (3.8), (3.9) and (3.11) into (3.19), it yields that   \begin{align} A_{r}P_{r}{{A_{r}^{T}}}+Q_{r}^{-1}Q_{12}^{T}P_{12}+\sum_{i=1}^{m}N_{ir}P_{r}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0. \end{align} (3.20) Comparing (3.17) with (3.20), we obtain $$P_{r}+Q_{r}^{-1}Q_{12}^{T}P_{12}=0$$, i.e. $$-Q_{r}^{-1}Q_{12}^{T}P_{12}P_{r}^{-1}=I_{r}$$. Let $$W^{T}:=-Q_{r}^{-1}Q_{12}^{T}$$ and $$V:=P_{12}P_{r}^{-1}$$. Then $$W^{T}V=I_{r}$$. Therefore, the coefficient matrices of the reduced system $$\Sigma _{r}$$ can be written as   $$ A_{r}=W^{T}AV\text{,}~~N_{ir}=W^{T}N_{i}V\text{,}~~B_{r}=W^{T}B\text{,}~~C_{r}=CV.$$ On the basis of Theorem 3.1 and the above discussion, we present an iterative algorithm (Algorithm 1) to generate the reduced system.
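The pseudocode box for Algorithm 1 did not survive the text extraction, so the following MATLAB-style sketch reconstructs the gramian-based iteration that the derivation above suggests: in each step the current projection matrices define a reduced system, the equations (3.6), (3.7), (3.17) and (3.18) are solved by the vectorization device of Section 2, and W and V are updated from $$P_{12}$$, $$Q_{12}$$, $$P_{r}$$ and $$Q_{r}$$. This is our reading of the procedure rather than the authors' verbatim algorithm; the function name, the initial matrices W0, V0 and the stopping rule are assumptions.

```matlab
% Sketch of Algorithm 1 (our reconstruction): gramian-based fixed-point
% iteration towards the H2-optimal necessary conditions of Theorem 3.1.
% N is a cell array {N_1,...,N_m}; W0, V0 are initial n-by-r projection matrices.
function [Ar, Nr, Br, Cr, W, V] = h2_mor_gramian(A, N, B, C, W0, V0, maxit, tol)
    n = size(A, 1); r = size(V0, 2); m = numel(N);
    W = W0; V = V0;
    for it = 1:maxit
        Ar = W'*A*V; Br = W'*B; Cr = C*V;
        Nr = cellfun(@(Ni) W'*Ni*V, N, 'UniformOutput', false);
        % Solve (3.6)/(3.7) for P12, Q12 and (3.17)/(3.18) for Pr, Qr by vectorization.
        K12 = kron(Ar, A) - eye(n*r);   Kq12 = kron(Ar', A') - eye(n*r);
        Kr  = kron(Ar, Ar) - eye(r^2);  Kqr  = kron(Ar', Ar') - eye(r^2);
        for i = 1:m
            K12  = K12  + kron(Nr{i}, N{i});    Kq12 = Kq12 + kron(Nr{i}', N{i}');
            Kr   = Kr   + kron(Nr{i}, Nr{i});   Kqr  = Kqr  + kron(Nr{i}', Nr{i}');
        end
        P12 = reshape(K12  \ (-reshape(B*Br', [], 1)), n, r);   % (3.6)
        Q12 = reshape(Kq12 \ ( reshape(C'*Cr, [], 1)), n, r);   % (3.7)
        Pr  = reshape(Kr   \ (-reshape(Br*Br', [], 1)), r, r);  % (3.17)
        Qr  = reshape(Kqr  \ (-reshape(Cr'*Cr, [], 1)), r, r);  % (3.18)
        Wnew = -Q12 / Qr;      % W^T = -Qr^{-1} Q12^T
        Vnew = P12 / Pr;       % V   =  P12 Pr^{-1}
        delta = norm(Wnew - W, 'fro') + norm(Vnew - V, 'fro');
        W = Wnew; V = Vnew;
        if delta < tol, break; end
    end
    Ar = W'*A*V; Br = W'*B; Cr = C*V;
    Nr = cellfun(@(Ni) W'*Ni*V, N, 'UniformOutput', false);
end
```

At convergence, $$W^{T}V\approx I_{r}$$ by the derivation above, and the returned matrices satisfy the necessary conditions of Theorem 3.1 in the sense of the following proposition.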
When Algorithm 1 is convergent, we can prove that the resulting reduced system $$\Sigma _{r}$$ satisfies the $$H_{2}$$-optimal necessary conditions. This result is stated in the following proposition.

Proposition 3.1 Assume that Algorithm 1 is convergent, that $$P_{12}$$ and $$Q_{12}$$ satisfy (3.6) and (3.7), respectively, and that $$A_{r}\otimes A_{r}-I_{r}\otimes I_{r}+\sum _{i=1}^{m}N_{ir}\otimes N_{ir}$$ is non-singular. Then, the returned coefficient matrices $$A_{r}$$, $$N_{ir}$$, $$B_{r}$$ and $$C_{r}$$ satisfy the $$H_{2}$$-optimal necessary conditions.

Proof. Let $${{W_{k}^{T}}}$$ and $$V_{k}$$ represent $$W^{T}$$ and V in the k-th iteration of Algorithm 1, respectively. Then, the returned coefficient matrices are $$A_{r}={{W_{k}^{T}}}AV_{k}$$, $$N_{ir}={{W_{k}^{T}}}N_{i}V_{k}$$, $$B_{r}={{W_{k}^{T}}}B$$ and $$C_{r}=CV_{k}$$. Due to the convergence of Algorithm 1, there are non-singular matrices $$T_{1}$$ and $$T_{2}$$ such that $$W_{k}=Q_{12}{{T_{1}^{T}}}$$ and $$V_{k}=P_{12}T_{2}$$. Multiplying (3.6) by $${{W_{k}^{T}}}$$ from the left, it follows that   $$ {{W_{k}^{T}}}AP_{12}{{A_{r}^{T}}}-{{W_{k}^{T}}}P_{12}+\sum_{i=1}^{m}{{W_{k}^{T}}}N_{i}P_{12}N_{ir}^{T}+{{W_{k}^{T}}}B{{B_{r}^{T}}}=0.$$Thus, it holds that   \begin{align} A_{r}T_{2}^{-1}{{A_{r}^{T}}}-T_{2}^{-1}+\sum_{i=1}^{m}N_{ir}T_{2}^{-1}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0. \end{align} (3.21) Subtracting (3.21) from (3.17), we have   \begin{align} A_{r}\left(T_{2}^{-1}-P_{r}\right){{A_{r}^{T}}}-\left(T_{2}^{-1}-P_{r}\right)+\sum_{i=1}^{m}N_{ir}\left(T_{2}^{-1}-P_{r}\right)N_{ir}^{T}=0. \end{align} (3.22) Since $$A_{r}\otimes A_{r}-I_{r}\otimes I_{r}+\sum _{i=1}^{m}N_{ir}\otimes N_{ir}$$ is non-singular, (3.22) has the unique solution $$T_{2}^{-1}-P_{r}=0$$, i.e. $$T_{2}^{-1}=P_{r}$$. Similarly, we can get $${{T_{1}^{T}}}=-Q_{r}^{-1}$$. This leads to   \begin{align} Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=Q_{12}^{T}AP_{12}+Q_{r}{{T_{1}^{T}}}Q_{12}^{T}AP_{12}T_{2}P_{r}=0. \notag \end{align}In the same way, we can prove that (3.9)–(3.11) hold. Thus, the proof of Proposition 3.1 is accomplished.

4. $$H_{2}$$ MOR based on the generalized cross gramian

In this section, we explore $$H_{2}$$ MOR of the SISO discrete-time bilinear system. First, the $$H_{2}$$ norm is expressed by the generalized cross gramian. Then, we show the corresponding $$H_{2}$$-optimal necessary conditions. Finally, an MOR algorithm is developed to obtain the reduced system. The k-th transfer function of the SISO discrete-time bilinear system is defined as (see Benner et al., 2011)   $$ H_{k}(z_{1}\text{,}~z_{2}\text{,}\cdots\text{,}~z_{k})=c(z_{k}I_{n}-A)^{-1}N(z_{k-1}I_{n}-A)^{-1}N\cdots N(z_{1}I_{n}-A)^{-1}b.$$Then, the $$H_{2}$$ norm of the system (2.1) can also be computed by   $$ \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}\left(\sum_{k=1}^{\infty}\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} \overline{H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}(H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}}))^{T}}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\!.$$In the following, we give the expression for the $$H_{2}$$ norm of the SISO system (2.1) based on the generalized cross gramian.

Theorem 4.1 Let $$\Sigma $$ be an SISO discrete-time bilinear system with the scalar matrix N, and assume that the generalized cross gramian R exists. Then the $$H_{2}$$ norm of $$\Sigma $$ can be expressed as   \begin{align} \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}(cRb)=cRb. \end{align} (4.1)

Proof. Since $$H_{k}$$ is a scalar, we have   \begin{align} \|\Sigma\|_{H_{2}}^{2}&=\textrm{tr}\left(\sum_{k=1}^{\infty}\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} \overline{H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}(H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}}))^{T}}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\notag\\ &=\sum_{k=1}^{\infty}\textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\!.\notag \end{align}Let $$R_{i}:=\sum _{k_{i}=0}^{\infty }\cdots \sum _{k_{1}=0}^{\infty} {P_{i}Q_{i}}$$. Then, $$R=\sum _{i=1}^{\infty} {R_{i}}$$, and we just need to prove that   \begin{align} \textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right) =cR_{k}b\text{,} \end{align} (4.2)where k = 1, 2, ⋯.
First, when k = 1, it holds that   \begin{align} &\textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}H_{1}(e^{-i\theta_{1}})H_{1}(e^{i\theta_{1}})}\, \textrm{d}\theta_{1}\right)\notag\\ &\quad=\textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}c(e^{-i\theta_{1}}I-A)^{-1}bc(e^{i\theta_{1}}I-A)^{-1}b}\, \textrm{d}\theta_{1}\right)\notag\\ &\quad=\textrm{tr}\left(c\frac{1}{2\pi}\int_{0}^{2\pi}{\left(\sum_{l_{1}=0}^{\infty}e^{il_{1}\theta_{1}}A^{l_{1}}\right)b c\left(\sum_{l_{2}=0}^{\infty}e^{-il_{2}\theta_{1}}A^{l_{2}}\right)}\, \textrm{d}\theta_{1}b\right)\notag\\ &\quad=\textrm{tr}\left(c\frac{1}{2\pi}\sum_{l_{1}=0}^{\infty}~\sum_{l_{2}=0}^{\infty}\int_{0}^{2\pi}e^{-i(l_{2}-l_{1})\theta_{1}}\, \textrm{d}\theta_{1}A^{l_{1}}bcA^{l_{2}}b\right).\notag \end{align}If $$l_{1}=l_{2}$$, then $$\int _{0}^{2\pi }e^{-i(l_{2}-l_{1})\theta _{1}}\, \textrm{d}\theta _{1}=2\pi $$; otherwise, $$\int _{0}^{2\pi }e^{-i(l_{2}-l_{1})\theta _{1}}\, \textrm{d}\theta _{1}=0$$. Hence, the above equation can be simplified as   $$ \textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}H_{1}(e^{-i\theta_{1}})H_{1}(e^{i\theta_{1}})}\, \textrm{d}\theta_{1}\right) =\textrm{tr}\left(c\sum_{l=0}^{\infty}{A^{l}bcA^{l}}b\right)=\textrm{tr}(cR_{1}b)=cR_{1}b. $$Next, when k = 2, it follows   \begin{align} &\textrm{tr}\left(\int_{0}^{2\pi}\int_{0}^{2\pi}{\frac{1}{(2\pi)^{2}}H_{2}(e^{-i\theta_{1}}\text{,}~e^{-i\theta_{2}}) H_{2}(e^{i\theta_{1}}\text{,}~e^{i\theta_{2}})}\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2}\right)\notag\\ &\ =\textrm{tr}\left(\int_{0}^{2\pi}\int_{0}^{2\pi}\left({\frac{1}{(2\pi)^{2}}c(e^{-i\theta_{2}}I-A)^{-1}N(e^{-i\theta_{1}}I-A)^{-1}b c(e^{i\theta_{2}}I-A)^{-1}N(e^{i\theta_{1}}I-A)^{-1}b}\right)\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2}\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{21}=0}^{\infty}~\sum_{l_{22}=0}^{\infty}~\sum_{l_{11}=0}^{\infty}~\sum_{l_{12}=0}^{\infty}\int_{0}^{2\pi}\!\!\int_{0}^{2\pi}\frac{1}{(2\pi)^{2}} e^{-i(l_{22}-l_{21})\theta_{2}}e^{-i(l_{12}-l_{11})\theta_{1}}\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2} A^{l_{21}}NA^{l_{11}}bcA^{l_{22}}NA^{l_{12}}b\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{2}=0}^{\infty}~\sum_{l_{1}=0}^{\infty}A^{l_{2}}NA^{l_{1}}bcA^{l_{2}}NA^{l_{1}}b\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{2}=0}^{\infty}~\sum_{l_{1}=0}^{\infty}A^{l_{2}}NA^{l_{1}}bcA^{l_{1}}NA^{l_{2}}b\right)\notag\\ &\ =cR_{2}b.\notag \end{align}Similarly, we can also verify that (4.2) holds for k = 3, 4,⋯. Therefore,   $$ \textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right) =\textrm{tr}(cR_{k}b)\text{,}\quad k=1\text{,}~2\text{,}\cdots. $$That is   \begin{align} \|\Sigma\|_{H_{2}}^{2}&=\sum_{k=1}^{\infty}\textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\notag\\ &=\textrm{tr}\left(\sum_{k=1}^{\infty}cR_{k}b\right)\notag\\ &=cRb.\notag \end{align}Thus, the proof of Theorem 4.1 is accomplished. 
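The proof above reduces the H2 norm of an SISO system to the single quantity cRb. As an illustration, the following MATLAB sketch computes R from the generalized Sylvester equation (2.8) by the same vectorization device used for (2.11), and evaluates (4.1). It is only a small-scale sketch under the assumptions of Theorem 4.1 (stable A, scalar matrix N, non-singular Kronecker matrix), with a function name of our choosing.

```matlab
% Sketch: generalized cross gramian R of an SISO discrete-time bilinear
% system via the Sylvester equation (2.8), and the H2 norm (4.1).
function [R, h2] = cross_gramian_h2(A, N, b, c)
    n = size(A, 1);
    % vec(ARA) = (A^T ⊗ A) vec(R), vec(NRN) = (N^T ⊗ N) vec(R)
    K = kron(A', A) - eye(n^2) + kron(N', N);
    R = reshape(K \ (-reshape(b*c, [], 1)), n, n);
    h2 = sqrt(c*R*b);                     % ||Sigma||_{H2}^2 = cRb, cf. (4.1)
end
```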
In order to obtain the $$H_{2}$$-optimal necessary conditions based on the generalized cross gramian, the error system (3.1) in the SISO case is reduced to   \begin{align}\hat{\Sigma}: \begin{cases} \hat{x}(k+1)=\hat{A}\hat{x}(k)+\hat{N}\hat{x}(k)u(k)+\hat{b}u(k)\text{,}\\[6pt] \hat{y}(k)=\hat{c}\hat{x}(k)\text{,} \end{cases}\end{align} (4.3)where   $$ \hat{x}= \begin{bmatrix} x(k) \\ x_{r}(k) \end{bmatrix}\text{,}~~ \hat{y}= \begin{bmatrix} y(k) \\ y_{r}(k) \end{bmatrix}\text{,}~~ \hat{A}= \begin{bmatrix} A & 0\\ 0 & A_{r} \end{bmatrix}\text{,}~~ \hat{N}= \begin{bmatrix} N & 0\\ 0 & N_{r} \end{bmatrix}\text{,}~~ \hat{b}= \begin{bmatrix} b \\ b_{r} \end{bmatrix}\text{,}~~ \hat{c}= \begin{bmatrix} c & -c_{r} \end{bmatrix}\text{.} $$According to the reduced system, we know that the system (4.3) is still an SISO system and $$N_{r}$$ is still a scalar matrix. When the error system (4.3) is stable and the generalized cross gramian $$\hat{R}$$ exists, $$\hat{R}$$ satisfies the following generalized Sylvester equation:   \begin{align} \hat{A}\hat{R}\hat{A}-\hat{R}+\hat{N}\hat{R}\hat{N}+\hat{b}\hat{c}=0. \end{align} (4.4)By Theorem 4.1, the $$H_{2}$$ norm of the system (4.3) can be written as   \begin{align} \|\hat{\Sigma}\|_{H_{2}}^{2}=\textrm{tr}(\hat{c}\hat{R}\hat{b})=\hat{c}\hat{R}\hat{b}. \end{align} (4.5)Hence, the $$H_{2}$$-optimal MOR problem can be converted into the minimization problem   \begin{align} \mathfrak{J}_{b}(A_{r}\text{,}~N_{r}\text{,}~b_{r}\text{,}~c_{r}):=\min\|\hat{\Sigma}\|_{H_{2}}. \end{align} (4.6)In order to further study the $$H_{2}$$ norm of the error system (4.3), $$\hat{R}$$ is partitioned as follows:   \begin{align} \hat{R}= \begin{bmatrix} R_{11} & R_{12}\\ R_{21} & R_{r} \end{bmatrix}, \end{align} (4.7)where $$R_{11}\in \mathbb{R}^{n\times n}$$ and $$R_{r}\in \mathbb{R}^{r\times r}$$. Substituting (4.7) into (4.4), we can get   \begin{align} AR_{11}A-R_{11}+NR_{11}N+bc=0\text{,} \end{align} (4.8)  \begin{align} {\hskip-10pt}AR_{12}A_{r}-R_{12}+NR_{12}N_{r}-bc_{r}=0\text{,} \end{align} (4.9)  \begin{align} {\hskip-10pt}A_{r}R_{21}A-R_{21}+N_{r}R_{21}N+b_{r}c=0\text{,} \end{align} (4.10)  \begin{align} {\hskip-8pt}A_{r}R_{r}A_{r}-R_{r}+N_{r}R_{r}N_{r}-b_{r}c_{r}=0. \end{align} (4.11)From (4.8), we have $$R_{11}=R$$. Based on the generalized cross gramian, we can also obtain the $$H_{2}$$-optimal necessary conditions for the SISO case, which are shown in the following theorem. Theorem 4.2 Suppose that the SISO system $$\Sigma _{r}$$ is the solution of the minimization problem (4.6) and it is stable. Then, the following equations hold:   \begin{align} {\hskip-25pt}R_{21}AR_{12}+R_{r}A_{r}R_{r}=0\text{,} \end{align} (4.12)  \begin{align} {\hskip-27pt}R_{21}NR_{12}+R_{r}N_{r}R_{r}=0\text{,} \end{align} (4.13)  \begin{align} cR_{12}-c_{r}R_{r}=0\text{,} \end{align} (4.14)  \begin{align} R_{21}b+R_{r}b_{r}=0. \end{align} (4.15) Proof. Let $$\mathfrak{E}_{b}$$ stand for the $$H_{2}$$ norm of the error system (4.3). Then, we have   $$ \mathfrak{E}_{b}^{2}=\|\Sigma\|_{H_{2}}^{2}=\textrm{tr}(\hat{c}\hat{R}\hat{b})=\textrm{tr}(\hat{R}U)\text{,}$$where $$U:=\hat{b}\hat{c}$$. Let $$\xi $$ denote an arbitrary parameter of $$\mathfrak{E}_{b}$$. 
Differentiating $$\mathfrak{E}_{b}^{2}$$ with respect to $$\xi $$ and combining with (4.4), we can obtain   \begin{align} \frac{\partial\mathfrak{E}_{b}^{2}}{\partial\xi}=\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}U\right) +\textrm{tr}\left(\hat{R}\frac{\partial U}{\partial\xi}\right)=-\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{A}\hat{R}\hat{A}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{R}\right)-\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{N}\hat{R}\hat{N}\right) +\textrm{tr}\left(\hat{R}\frac{\partial U}{\partial\xi}\right). \end{align} (4.16)In order to simplify (4.16), we differentiate (4.4) with respect to $$\xi $$ on both sides and it yields that   \begin{align} \frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}+\hat{A}\frac{\partial\hat{R}}{\partial\xi}\hat{A} +\hat{A}\hat{R}\frac{\partial\hat{A}}{\partial\xi}-\frac{\partial\hat{R}}{\partial\xi} +\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}+\hat{N}\frac{\partial\hat{R}}{\partial\xi}\hat{N} +\hat{N}\hat{R}\frac{\partial\hat{N}}{\partial\xi}+\frac{\partial U}{\partial\xi}=0. \end{align} (4.17)Premultiplying (4.17) by $$\hat{R}$$ and taking the trace, we can get   \begin{align} 2\textrm{tr}\left(\frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}\hat{R}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{A}\hat{R}\hat{A}\right) -\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}\hat{R}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{N}\hat{R}\hat{N}\right) +\textrm{tr}\left(\frac{\partial U}{\partial\xi}\hat{R}\right)=0. \end{align} (4.18)Substituting (4.18) into (4.16), we have   \begin{align} \frac{\partial\mathfrak{E}_{b}^{2}}{\partial\xi}= 2\textrm{tr}\left(\frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial U}{\partial\xi}\hat{R}\right)=0. \end{align} (4.19)Let $$A_{r}= [a_{r}^{ij} ]$$ and $$\xi =a_{r}^{ij}$$ (i, j = 1,2,⋯r). From (4.19), we obtain   $$ \frac{\partial\mathfrak{E}_{b}^{2}}{\partial a_{r}^{ij}}=2\textrm{tr}\left(\frac{\partial A_{r}}{\partial a_{r}^{ij}} (R_{21}AR_{12}+R_{r}A_{r}R_{r})\right).$$If $$A_{r}$$ is the solution of the optimization problem (4.6), then $$\frac{\partial \mathfrak{E}_{b}^{2}}{\partial a_{r}^{ij}}=0$$. Obviously, it holds that   $$ R_{21}AR_{12}+R_{r}A_{r}R_{r}=0.$$Equations (4.13)–(4.15) can be analogously proved when $$\xi $$ individually represents $$n_{r}^{ij}$$, $$b_{r}^{ij}$$ and $$c_{r}^{ij}$$, where $$n_{r}^{ij}$$, $$b_{r}^{ij}$$ and $$c_{r}^{ij}$$ are the $$(i,j)$$-th elements of the matrices $$N_{r}$$, $$b_{r}$$ and $$c_{r}$$. Thus, the proof of Theorem 4.2 is accomplished. Assume that the $$H_{2}$$-optimal reduced system $$\Sigma _{r}$$ is stable and $$R_{r}$$ is non-singular. Then, by Theorem 4.2, we can obtain the following formulas:   \begin{align} A_{r}=-R_{r}^{-1}R_{21}AR_{12}R_{r}^{-1}\text{,} \end{align} (4.20)  \begin{align} N_{r}=-R_{r}^{-1}R_{21}NR_{12}R_{r}^{-1}\text{,} \end{align} (4.21)  \begin{align}{\hskip-30pt} b_{r}=-R_{r}^{-1}R_{21}b\text{,} \end{align} (4.22)  \begin{align} {\hskip-37pt}c_{r}=cR_{12}R_{r}^{-1}. \end{align} (4.23)Next, we consider (4.10). Multiplying (4.10) by $$R_{12}R_{r}^{-1}$$ from the right, we can get   \begin{align} A_{r}R_{21}AR_{12}R_{r}^{-1}-R_{21}R_{12}R_{r}^{-1}+N_{r}R_{21}NR_{12}R_{r}^{-1}+b_{r}cR_{12}R_{r}^{-1}=0. 
\end{align} (4.24) Substituting (4.12)–(4.14) into (4.24), we obtain the following equation:   \begin{align} -A_{r}R_{r}A_{r}-R_{21}R_{12}R_{r}^{-1}-N_{r}R_{r}N_{r}+b_{r}c_{r}=0. \end{align} (4.25) Inserting (4.11) into (4.25), we have $$R_{21}R_{12}R_{r}^{-1}=-R_{r}$$, that is, $$-R_{r}^{-1}R_{21}R_{12}R_{r}^{-1}=I_{r}$$. Combining this with (4.20)–(4.23), we can take the transformation matrices $$W^{T}:=-R_{r}^{-1}R_{21}$$ and $$V:=R_{12}R_{r}^{-1}$$ such that $$W^{T}V=I_{r}$$ and $$N_{r}$$ is a scalar matrix. Namely, the coefficient matrices of the reduced system can be generated by   $$ A_{r}=W^{T}AV\text{,}~~ N_{r}=W^{T}NV\text{,}~~ b_{r}=W^{T}b\text{,}~~ c_{r}=cV.$$Therefore, we can establish the following algorithm (Algorithm 2).
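As with Algorithm 1, the pseudocode box for Algorithm 2 is missing from the extracted text. The MATLAB-style sketch below reconstructs the cross-gramian iteration implied by the formulas above: each step solves (4.9)–(4.11) for $$R_{12}$$, $$R_{21}$$ and $$R_{r}$$ by vectorization and updates W and V from them. It is our hedged reconstruction, not the authors' verbatim algorithm; the function name, initial matrices and stopping rule are assumptions.

```matlab
% Sketch of Algorithm 2 (our reconstruction): cross-gramian-based iteration
% towards the SISO necessary conditions of Theorem 4.2.
% W0, V0 are initial n-by-r projection matrices; maxit and tol control the loop.
function [Ar, Nr, br, cr, W, V] = h2_mor_cross_gramian(A, N, b, c, W0, V0, maxit, tol)
    n = size(A, 1); r = size(V0, 2);
    W = W0; V = V0;
    for it = 1:maxit
        Ar = W'*A*V; Nr = W'*N*V; br = W'*b; cr = c*V;
        % Solve (4.9)-(4.11) for R12 (n-by-r), R21 (r-by-n) and Rr (r-by-r).
        R12 = reshape((kron(Ar', A)  - eye(n*r) + kron(Nr', N))  \   reshape(b*cr,  [], 1), n, r);
        R21 = reshape((kron(A',  Ar) - eye(n*r) + kron(N',  Nr)) \ (-reshape(br*c, [], 1)), r, n);
        Rr  = reshape((kron(Ar', Ar) - eye(r^2) + kron(Nr', Nr)) \   reshape(br*cr, [], 1), r, r);
        Wnew = -(Rr \ R21)';   % W^T = -Rr^{-1} R21
        Vnew = R12 / Rr;       % V   =  R12 Rr^{-1}
        delta = norm(Wnew - W, 'fro') + norm(Vnew - V, 'fro');
        W = Wnew; V = Vnew;
        if delta < tol, break; end
    end
    Ar = W'*A*V; Nr = W'*N*V; br = W'*b; cr = c*V;
end
```

As in Section 3, $$W^{T}V\approx I_{r}$$ at convergence and, by the remark below, the returned matrices satisfy (4.12)–(4.15) when the iteration converges.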
Remark 4.1 Using a proof similar to that of Proposition 3.1, we can obtain an analogous result. Specifically, assume that Algorithm 2 is convergent, that $$R_{12}$$ and $$R_{21}$$ satisfy (4.9) and (4.10), respectively, and that $${{A_{r}^{T}}}\otimes A_{r}-I_{r}\otimes I_{r}+{{N_{r}^{T}}}\otimes N_{r}$$ is non-singular. Then, the returned coefficient matrices $$A_{r}$$, $$N_{r}$$, $$b_{r}$$ and $$c_{r}$$ satisfy the $$H_{2}$$-optimal necessary conditions (4.12)–(4.15).

5. Numerical examples

In this section, we present two numerical examples to illustrate the effectiveness of our proposed methods. We mainly show the relative errors of the output and the relative $$H_{2}$$ errors. The relative $$H_{2}$$ error is calculated by the formula $$\|\hat{\Sigma }\|_{H_{2}}/\|\Sigma \|_{H_{2}}$$. Both numerical examples are implemented in MATLAB (R2010a). Since the two experiments are not very large, we can directly solve (3.6) and (3.7) by the Kronecker product and the vectorization when the error system is stable. Similarly, (4.9) and (4.10) can be solved in the same way.

Example 5.1 The first example is a boundary controlled heat transfer system from Benner & Damm (2011). The partial differential equation is spatially discretized by finite differences on an h × h grid, which yields a continuous-time two-input, one-output bilinear system. Next, we convert this bilinear system into a discrete-time bilinear system by the semi-implicit Euler method ($$\Delta t=0.005$$) (see Wang & Jiang, 2016). In this example, we take h = 15, which leads to a discrete-time bilinear system of order n = 225. All reduced systems are obtained by Algorithm 1. For the input $$u(k)=[\frac{1}{k+1}\sin (\frac{k\pi }{5} ),~\frac{1}{k+1}\cos (\frac{k\pi }{5} ) ]^{T}$$, Table 1 presents the relative output errors between the original system and the reduced systems with different orders. From Table 1, we can see that the output of the original system is well approximated by those of the reduced systems. As the reduced order increases, the relative output error decreases on the whole. It is worth noting that the output and the relative output errors change if the input u(k) is changed. Table 2 shows the relative $$H_{2}$$ errors of these reduced systems with different reduced orders.

Table 1. Relative output errors with different orders in Example 5.1

|        | k = 5 | k = 15 | k = 25 | k = 35 | k = 45 |
| --- | --- | --- | --- | --- | --- |
| r = 8  | $$1.173\times 10^{-6}$$ | $$1.898\times 10^{-5}$$ | $$6.830\times 10^{-5}$$ | $$2.799\times 10^{-5}$$ | $$4.457\times 10^{-7}$$ |
| r = 12 | $$1.183\times 10^{-6}$$ | $$2.504\times 10^{-6}$$ | $$6.787\times 10^{-6}$$ | $$2.757\times 10^{-6}$$ | $$4.390\times 10^{-6}$$ |
| r = 16 | $$8.277\times 10^{-8}$$ | $$2.114\times 10^{-8}$$ | $$6.732\times 10^{-7}$$ | $$5.158\times 10^{-8}$$ | $$6.596\times 10^{-9}$$ |
| r = 20 | $$7.332\times 10^{-9}$$ | $$1.648\times 10^{-7}$$ | $$9.177\times 10^{-8}$$ | $$3.651\times 10^{-7}$$ | $$1.096\times 10^{-8}$$ |

Table 2. Relative $$H_{2}$$ errors with different reduced orders in Example 5.1

|                          | r = 8 | r = 12 | r = 16 | r = 20 |
| --- | --- | --- | --- | --- |
| Relative $$H_{2}$$ error | $$6.456\times 10^{-4}$$ | $$4.278\times 10^{-5}$$ | $$2.259\times 10^{-5}$$ | $$4.747\times 10^{-6}$$ |

It is clear that all the reduced systems approximate the original system well in the $$H_{2}$$-norm sense and that the relative $$H_{2}$$ error strictly decreases as the reduced order increases. Thus, Tables 1 and 2 demonstrate the efficiency of Algorithm 1.
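For reference, the relative $$H_{2}$$ error reported in Tables 2 and 4 can be evaluated directly from the error-system gramian, i.e. by applying (2.9) to the error system (3.1). The following MATLAB sketch shows one way to do this for a two-input system such as Example 5.1; it reuses the bilinear_gramians helper sketched at the end of Section 2 and assumes the full matrices A, N (cell), B, C and the reduced matrices Ar, Nr (cell), Br, Cr are already in the workspace. For dimensions like n = 225, the Kronecker matrices inside that helper should be assembled from sparse factors or replaced by a dedicated Lyapunov solver; the dense sketch is only illustrative.

```matlab
% Sketch: relative H2 error ||Sigma_hat||_{H2} / ||Sigma||_{H2} via the
% error system (3.1), reusing the bilinear_gramians helper from Section 2.
Ah = blkdiag(A, Ar);
Nh = cellfun(@(Ni, Nir) blkdiag(Ni, Nir), N, Nr, 'UniformOutput', false);
Bh = [B; Br];
Ch = [C, -Cr];
[~, ~, h2_err]  = bilinear_gramians(Ah, Nh, Bh, Ch);   % ||Sigma - Sigma_r||_{H2}
[~, ~, h2_full] = bilinear_gramians(A,  N,  B,  C);    % ||Sigma||_{H2}
rel_h2_error = h2_err / h2_full;
```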
Example 5.2 In this example, we study the following SISO discrete-time system:   \begin{align}\notag \begin{cases} x(k+1)=Ax(k)+Nx(k)u(k)+bu(k)\text{,}~\\[6pt]y(k)=cx(k)\text{,}~ \end{cases} \end{align}in which A is a randomly generated stable matrix with n = 598, the matrix N = diag(0.01, 0.01, ⋯ , 0.01) $$\in \mathbb{R}^{598\times 598}$$ and the vector $$b=c^{T}\in \mathbb{R}^{598}$$ has all elements equal to 1. All reduced systems are obtained by Algorithm 2. For the input $$u(k)=\frac{1}{k+1}\sin (\frac{k\pi }{7})$$, Table 3 shows the relative output errors between the original discrete-time bilinear system and the reduced systems with different reduced orders.

Table 3. Relative output errors with different reduced orders in Example 5.2

|        | k = 5 | k = 15 | k = 25 | k = 35 | k = 45 |
| --- | --- | --- | --- | --- | --- |
| r = 12 | $$4.292\times 10^{-6}$$ | $$1.647\times 10^{-4}$$ | $$5.107\times 10^{-4}$$ | $$3.665\times 10^{-3}$$ | $$3.080\times 10^{-4}$$ |
| r = 15 | $$4.581\times 10^{-6}$$ | $$2.451\times 10^{-4}$$ | $$1.229\times 10^{-4}$$ | $$1.423\times 10^{-3}$$ | $$1.726\times 10^{-5}$$ |
| r = 18 | $$3.801\times 10^{-7}$$ | $$2.683\times 10^{-5}$$ | $$5.025\times 10^{-5}$$ | $$1.089\times 10^{-3}$$ | $$2.498\times 10^{-5}$$ |
| r = 21 | $$6.754\times 10^{-8}$$ | $$2.919\times 10^{-7}$$ | $$1.474\times 10^{-5}$$ | $$3.059\times 10^{-6}$$ | $$5.651\times 10^{-6}$$ |

Table 3 indicates that the reduced systems approximate the original discrete-time bilinear system well in their output behaviour. The relative output error decreases substantially as the reduced order increases. Similarly, when the input variable is changed, the output variable and the relative output error change accordingly.
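The paper only states that A in Example 5.2 is randomly generated and stable, without specifying the construction. The short MATLAB sketch below shows one plausible way to build a test system with the stated structure; the rescaling strategy and the factor 0.95 are our assumptions, while N, b and c follow the description in the text.

```matlab
% Sketch: one way to set up a test system with the structure of Example 5.2.
% The construction of the random stable A is an assumption of ours.
n = 598;
A = randn(n);
A = 0.95 * A / max(abs(eig(A)));   % rescale so the spectral radius is below 1
N = 0.01 * eye(n);                 % N = diag(0.01, ..., 0.01)
b = ones(n, 1);                    % b = c^T with all elements equal to 1
c = ones(1, n);
```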
Table 4 provides the relative $$H_{2}$$ errors of the reduced systems with different reduced orders.

Table 4. Relative $$H_{2}$$ errors with different reduced orders in Example 5.2

|                          | r = 12 | r = 15 | r = 18 | r = 21 |
| --- | --- | --- | --- | --- |
| Relative $$H_{2}$$ error | $$6.041\times 10^{-4}$$ | $$2.126\times 10^{-4}$$ | $$5.593\times 10^{-5}$$ | $$1.450\times 10^{-6}$$ |

From Table 4, we observe that, although the reduced orders are much lower than 598, the reduced systems give good approximations of the original discrete-time bilinear system in the $$H_{2}$$-norm sense. Moreover, the relative $$H_{2}$$ error decreases as the reduced order increases. Tables 3 and 4 illustrate that the reduced systems generated by Algorithm 2 approximate the original system well. Therefore, our proposed methods are feasible for these two examples.

6. Conclusions

In this paper, we investigate $$H_{2}$$-optimal MOR of discrete-time bilinear systems based on gramians. First, we derive the $$H_{2}$$-optimal necessary conditions based on the reachability gramian and the observability gramian. Next, the corresponding MOR algorithm is established; when it converges, the reduced system satisfies the necessary conditions. Then, for SISO discrete-time bilinear systems, the $$H_{2}$$ norm is represented by the generalized cross gramian. On the basis of the generalized cross gramian, we derive the related $$H_{2}$$-optimal necessary conditions. Similarly, we propose an MOR algorithm based on the generalized cross gramian, and the reduced system resulting from this algorithm also satisfies the necessary conditions. Finally, two numerical examples are employed to illustrate the efficiency of our proposed methods.

Funding

Natural Science Foundation of China (61663043).

References

Antoulas, A. C. (2005) Approximation of Large-Scale Dynamical Systems. Philadelphia: Society for Industrial and Applied Mathematics (SIAM).
Bai, Z. & Skoogh, D. (2006) A projection method for model reduction of bilinear dynamical systems. Linear Alg. Appl., 415, 406–425.
Benner, P. & Breiten, T. (2012) Interpolation-based $$H_2$$ model reduction of bilinear control systems. SIAM J. Matrix Anal. Appl., 33, 859–885.
Benner, P., Breiten, T. & Damm, T. (2010) Krylov subspace methods for model order reduction of bilinear discrete-time control systems. Proceedings in Applied Mathematics and Mechanics. New Jersey: John Wiley, pp. 601–602.
Benner, P., Breiten, T. & Damm, T. (2011) Generalized tangential interpolation for model reduction of discrete-time MIMO bilinear systems. Int. J. Control, 84, 1398–1407.
Benner, P. & Damm, T. (2011) Lyapunov equations, energy functionals, and model order reduction of bilinear and stochastic systems. SIAM J. Control Optim., 49, 686–711.
Bunse-Gerstner, A., Kubalińska, D., Vossen, G. & Wilczek, D. (2012) $$h_2$$-norm optimal model reduction for large scale discrete dynamical MIMO systems. J. Comput. Appl. Math., 233, 1202–1216.
D’Alessandro, P., Isidori, A. & Ruberti, A. (1974) Realization and structure theory of bilinear dynamical systems. SIAM J. Control, 12, 517–535.
Dorissen, H. T. (1989) Canonical forms for bilinear systems. Syst. Control Lett., 13, 153–160.
Druskin, V. & Simoncini, V. (2011) Adaptive rational Krylov subspaces for large-scale dynamical systems. Syst. Control Lett., 60, 546–560.
Gao, H. J., Lam, J. & Wang, C. H. (2006) Model simplification for switched hybrid systems. Syst. Control Lett., 55, 1015–1021.
Gugercin, S., Antoulas, A. C. & Beattie, C. (2008) $$H_2$$ model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl., 30, 609–638.
Ibrir, S. & Bettayeb, M. (2014) Model reduction of a class of nonlinear systems: a convex-optimization approach. IMA J. Math. Control Inform., 31, 519–531.
Imran, M. & Ghafoor, A. (2016) Model reduction of generalized non-singular systems using limited frequency interval gramians. IMA J. Math. Control Inform., 32, 333–347.
Jazlan, A., Sreeram, V., Shaker, H. R. & Togneri, R. (2016) Frequency interval balanced truncation of discrete-time bilinear systems. Cogent Eng., 3, 1203082.
Jazlan, A., Sreeram, V., Shaker, H. R., Togneri, R. & Minh, H. B. (2017) Frequency interval cross gramians for linear and bilinear systems. Asian J. Control, 19, 22–34.
Jiang, Y. L. (2010) Model Order Reduction Methods. Beijing, China: Science Press.
Jiang, Y. L. & Chen, H. B. (2012) Time domain model order reduction of general orthogonal polynomials for linear input-output systems. IEEE Trans. Automat. Contr., 57, 330–343.
Jiang, Y. L. & Xu, K. L. (2017) $$H_2$$ optimal reduced models of general MIMO LTI systems via the cross Gramian on the Stiefel manifold. J. Franklin Inst., 354, 3210–3224.
Knockaert, L., Dhaene, T., Ferranti, F. & Zutter, D. D. (2011) Model order reduction with preservation of passivity, non-expansivity and Markov moments. Syst. Control Lett., 61, 53–61.
Lee, H. J., Chu, C. C. & Feng, W. S. (2006) An adaptive-order rational Arnoldi method for model-order reductions of linear time-invariant systems. Linear Alg. Appl., 415, 235–261.
Lin, Y. Q., Liang, B. & Wei, Y. M. (2009) Order reduction of bilinear MIMO dynamical systems using new block Krylov subspaces. Comput. Math. Appl., 58, 1093–1102.
Lin, Y. Q., Liang, B. & Wei, Y. M. (2010) Model-order reduction of large-scale kth-order linear dynamical systems via a kth-order Arnoldi method. Int. J. Comput. Math., 87, 435–453.
Mohler, R. R. (1991) Nonlinear Systems: Dynamics and Control, vol. 1; Applications to Bilinear Control, vol. 2. New Jersey, USA: Prentice Hall.
Moore, B. C. (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Automat. Contr., 26, 17–32.
Sastry, S. (1999) Nonlinear Systems: Analysis, Stability and Control. New York, USA: Springer.
Shaker, H. R. & Tahavori, M. (2014a) Frequency-interval model reduction of bilinear systems. IEEE Trans. Automat. Contr., 59, 1948–1953.
Shaker, H. R. & Tahavori, M. (2014b) Time-interval model reduction for bilinear systems. Int. J. Control, 87, 1487–1495.
Shaker, H. R. & Tahavori, M. (2015) Control configuration selection for bilinear systems via generalised Hankel interaction index array. Int. J. Control, 88, 30–37.
Wang, X. L. & Jiang, Y. L. (2016) Model reduction of discrete-time bilinear systems by a Laguerre expansion technique. Syst. Control Lett., 40, 6650–6662.
Wei, Y. L., Qiu, J. B., Karimi, H. R. & Wang, M. (2014) Model reduction for continuous-time Markovian jump systems with incomplete statistics of mode information. Int. J. Syst. Sci., 45, 1496–1507.
Wilson, D. A. (1974) Model reduction for multivariable systems. Int. J. Control, 20, 57–64.
Wu, L. G. & Zheng, W. X. (2009) Weighted $$H_{\infty }$$ model reduction for linear switched systems with time-varying delay. Automatica, 45, 186–193.
Xu, K. L., Jiang, Y. L. & Yang, Z. (2015) $$H_2$$ order-reduction for bilinear systems based on Grassmann manifold. J. Franklin Inst., 352, 4467–4479.
Yan, W. Y. & Lam, J. (1999) An approximate approach to $$H_2$$ optimal model reduction. IEEE Trans. Automat. Contr., 44, 1341–1358.
Yuan, J. W., Jiang, Y. L. & Xiao, Z. H. (2015) Structure-preserving model order reduction by general orthogonal polynomials for integral-differential systems. J. Franklin Inst., 352, 138–154.
Zhang, L. Q. & Lam, J. (2002) On $$H_2$$ model reduction of bilinear systems. Automatica, 38, 205–216.
Zhang, L. Q., Lam, J. & Huang, B. (2003) On gramians and balanced truncation of discrete-time bilinear systems. Int. J. Control, 76, 414–427.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

Concerning the single-input and single-output (SISO) system, the $$H_{2}$$ norm can be expressed by the generalized cross gramian. Then, the $$H_{2}$$-optimal necessary conditions are studied and the corresponding MOR algorithm is generated.
Accordingly, the reduced system resulting from the convergent algorithm can also satisfy the related necessary conditions. Finally, two numerical examples are utilized to illustrate the efficiency of our proposed methods. This paper is organized as follows. In Section 2, we introduce some preliminary knowledge of the discrete-time bilinear system. In Section 3, the $$H_{2}$$-optimal MOR method based on the reachability gramian and the observability gramian is studied. In Section 4, we express the $$H_{2}$$ norm of SISO systems by the generalized cross gramian. Then, the $$H_{2}$$-optimal necessary conditions based on the generalized cross gramian are presented. In Section 5, two numerical examples are given to verify the effectiveness of the proposed methods. Some conclusions are drawn in Section 6.

2. Preliminaries
We consider the following discrete-time bilinear system:   \begin{align} \Sigma: \begin{cases} x(k+1)=Ax(k)+\sum\limits_{i=1}^{m}N_{i}x(k)u_{i}(k)+Bu(k)\text{,}\\[10pt] y(k)=Cx(k)\text{,} \end{cases} \end{align} (2.1) where A, $$N_{i}\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$ and $$C\in \mathbb{R}^{p\times n}$$ are the coefficient matrices of the system (2.1). $$x(k)\in \mathbb{R}^{n}$$ is the state variable, $$u(k)\in \mathbb{R}^{m}$$ and $$y(k)\in \mathbb{R}^{p}$$ denote the input variable and the output variable of the system (2.1), respectively. $$u_{i}(k)$$ is the i-th element of u(k). The aim of this paper is to construct two transformation matrices W, $$V\in \mathbb{R}^{n\times r}$$ with $$W^{T}V=I_{r}$$ such that   \begin{align}\Sigma_{r}:\begin{cases} x_{r}(k+1)=A_{r}x_{r}(k)+\sum\limits_{i=1}^{m}N_{ir}x_{r}(k)u_{i}(k)+B_{r}u(k)\text{,}\\[10pt] y_{r}(k)=C_{r}x_{r}(k)\text{,} \end{cases}\end{align} (2.2) where $$A_{r}=W^{T}AV$$, $$N_{ir}=W^{T}N_{i}V\in \mathbb{R}^{r\times r}$$, $$B_{r}=W^{T}B\in \mathbb{R}^{r\times m}$$, $$C_{r}=CV\in \mathbb{R}^{p\times r}$$ and r ≪ n. $$x_{r}(k)\in \mathbb{R}^{r}$$ and $$y_{r}(k)\in \mathbb{R}^{p}$$ represent the state variable and the output variable of the system (2.2), respectively. When p = m = 1, the system (2.1) is an SISO discrete-time bilinear system. We let $$b=B\in \mathbb{R}^{n}$$, $$c=C\in \mathbb{R}^{1\times n}$$ and $$N=N_{1}\in \mathbb{R}^{n\times n}$$. Its reduced system has a form similar to that of the reduced system (2.2). In the following, we introduce the gramians of the discrete-time bilinear system.

Definition 2.1 (Zhang et al., 2003) Given the system (2.1), let   \begin{align*} P_{1}(k_{1})=&\,A^{k_{1}}B\text{,}~~ P_{i}(k_{1}\text{,}\cdots\text{,}~k_{i})=A^{k_{i}}[N_{1}P_{i-1}\text{,}\cdots\text{,}~N_{m}P_{i-1}]\text{,}\\[10pt] Q_{1}(k_{1})=&\,CA^{k_{1}}\text{,}\quad Q_{i}(k_{1}\text{,}\cdots\text{,}~k_{i})= \begin{bmatrix} Q_{i-1}N_{1} \\[2pt] Q_{i-1}N_{2} \\[2pt] \vdots \\[2pt] Q_{i-1}N_{m} \end{bmatrix}A^{k_{i}}\text{,}\quad i= 2,~3,\cdots\text{.} \end{align*} The reachability gramian P and the observability gramian Q of the system (2.1) are defined as   \begin{align} P=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{P_{i}{{P_{i}^{T}}}}\text{,} \end{align} (2.3)  \begin{align} Q=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{{{Q_{i}^{T}}}Q_{i}}. \end{align} (2.4) Reachability and observability are two basic properties of bilinear systems and their definitions can be found in the study by Dorissen (1989).
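For readers who want to experiment with these definitions, the state recursion in (2.1) and the construction of the projected system (2.2) can be sketched as follows (a minimal NumPy illustration; the helper names and the way the input is supplied are our own choices, not notation from the paper):

```python
import numpy as np

def simulate_bilinear(A, N_list, B, C, u, x0=None):
    """Simulate x(k+1) = A x(k) + sum_i N_i x(k) u_i(k) + B u(k), y(k) = C x(k).
    Here u is an array of shape (steps, m) holding the input samples."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    outputs = []
    for uk in u:
        outputs.append(C @ x)
        bilinear_term = sum(Ni @ x * uk[i] for i, Ni in enumerate(N_list))
        x = A @ x + bilinear_term + B @ uk
    return np.array(outputs)

def project(A, N_list, B, C, W, V):
    """Form the reduced matrices of (2.2): A_r = W^T A V, N_ir = W^T N_i V, B_r = W^T B, C_r = C V."""
    return W.T @ A @ V, [W.T @ Ni @ V for Ni in N_list], W.T @ B, C @ V
```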
Furthermore, the reachability gramian P and the observability gramian Q satisfy two generalized Lyapunov equations, which are given in the following lemma.

Lemma 2.1 (Zhang et al., 2003) If A is stable, then P and Q defined in (2.3) and (2.4) satisfy the following generalized Lyapunov equations:   \begin{align} APA^{T}-P+\sum_{i=1}^{m}N_{i}P{{N_{i}^{T}}}+BB^{T}=0\text{,} \end{align} (2.5)  \begin{align} {\hskip-5pt}A^{T}QA-Q+\sum_{i=1}^{m}{{N_{i}^{T}}}QN_{i}+C^{T}C=0. \end{align} (2.6)

When the system (2.1) is an SISO system, we have   \begin{align*} P_{1}(k_{1})=&\,A^{k_{1}}b\text{,}~~ P_{i}(k_{1}\text{,}~k_{2}\text{,}\cdots\text{,}~k_{i})=A^{k_{i}}NP_{i-1}\text{,}\\ Q_{1}(k_{1})=&\,cA^{k_{1}}\text{,}~~ Q_{i}(k_{1}\text{,}~k_{2}\text{,}\cdots\text{,}~k_{i})=Q_{i-1}NA^{k_{i}}\text{,}\quad i= 2,~3,\cdots.\end{align*} Then, the generalized cross gramian of the system (2.1) is defined as (see Shaker & Tahavori, 2015)   \begin{align} R=\sum_{i=1}^{\infty}\sum_{k_{i}=0}^{\infty}\cdots\sum_{k_{1}=0}^{\infty}{P_{i}Q_{i}}. \end{align} (2.7) For the generalized cross gramian, there is a conclusion similar to Lemma 2.1. Namely, if A is stable, R satisfies the following generalized Sylvester equation (see Shaker & Tahavori, 2015):   \begin{align} ARA-R+NRN+bc=0. \end{align} (2.8)

If the system (2.1) is reachable and observable, then the reachability gramian P and the observability gramian Q are positive definite (see Zhang et al., 2003). In the following, we always assume that the system (2.1) is reachable and observable. Then, the $$H_{2}$$ norm of $$\Sigma $$ can be expressed as (see Benner et al., 2011)   \begin{align} \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}(CPC^{T})=\textrm{tr}(B^{T}QB)\text{,} \end{align} (2.9) where tr(⋅) signifies the trace of the corresponding matrix. If the reachability gramian P exists, i.e. (2.3) is convergent, then P can be computed by means of the vectorization operator vec(⋅). A property of the operator vec(⋅) is that $$\textrm{vec}(XYZ)=(Z^{T}\otimes X)\textrm{vec}(Y)$$, where the notation ‘⊗’ represents the Kronecker product. Taking the vectorization on both sides of (2.5) and applying the property, we can obtain   \begin{align} \left(A\otimes A-I_{n}\otimes I_{n}+\sum_{i=1}^{m}N_{i}\otimes N_{i}\right)\textrm{vec}(P)=\textrm{vec}(-BB^{T})\text{,} \end{align} (2.10) where $$I_{n}$$ denotes the identity matrix of dimension n. If $$A\otimes A-I_{n}\otimes I_{n}+\sum _{i=1}^{m}N_{i}\otimes N_{i}$$ is non-singular, then P is the unique solution of (2.5), i.e.   \begin{align} \textrm{vec}(P)=\left(A\otimes A-I_{n}\otimes I_{n}+\sum_{i=1}^{m}N_{i}\otimes N_{i}\right)^{-1}\textrm{vec}(-BB^{T}). \end{align} (2.11) Similarly, if the observability gramian Q exists, (2.4) is convergent and $$A^{T}\otimes A^{T}-I_{n}\otimes I_{n}+\sum _{i=1}^{m}{{N_{i}^{T}}}\otimes{{N_{i}^{T}}}$$ is non-singular, then Q is the unique solution of (2.6), i.e.   \begin{align} \textrm{vec}(Q)=\left(A^{T}\otimes A^{T}-I_{n}\otimes I_{n}+\sum_{i=1}^{m}{{N_{i}^{T}}}\otimes{{N_{i}^{T}}}\right)^{-1}\textrm{vec}(-C^{T}C). \end{align} (2.12)
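As a concrete illustration of (2.10)–(2.12), the sketch below solves the generalized Lyapunov equations (2.5)–(2.6) and the generalized Sylvester equation (2.8) by this vectorization approach (a small NumPy example; forming the $$n^{2}\times n^{2}$$ Kronecker matrix is only sensible for modest n, and the same device is used later for the coupled equations (3.6)–(3.7) and (4.9)–(4.10) in the numerical examples):

```python
import numpy as np

def reach_gramian(A, N_list, B):
    """Solve A P A^T - P + sum_i N_i P N_i^T + B B^T = 0, cf. (2.5) and (2.10)-(2.11)."""
    n = A.shape[0]
    L = np.kron(A, A) - np.eye(n * n) + sum(np.kron(Ni, Ni) for Ni in N_list)
    vecP = np.linalg.solve(L, -(B @ B.T).reshape(-1, order="F"))  # column-stacking vec
    return vecP.reshape((n, n), order="F")

def obs_gramian(A, N_list, C):
    """Solve A^T Q A - Q + sum_i N_i^T Q N_i + C^T C = 0, cf. (2.6) and (2.12)."""
    return reach_gramian(A.T, [Ni.T for Ni in N_list], C.T)

def cross_gramian(A, N, b, c):
    """Solve A R A - R + N R N + b c = 0 in the SISO case, cf. (2.8)."""
    n = A.shape[0]
    L = np.kron(A.T, A) - np.eye(n * n) + np.kron(N.T, N)
    vecR = np.linalg.solve(L, -np.outer(b, c).reshape(-1, order="F"))
    return vecR.reshape((n, n), order="F")
```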
3. $$H_{2}$$ MOR based on the reachability gramian and the observability gramian
In this section, we derive the $$H_{2}$$-optimal necessary conditions based on the reachability gramian and the observability gramian. Then, an algorithm is established to generate the reduced system. Finally, we show that the resulting reduced system satisfies the necessary conditions when the algorithm converges.

The aim of $$H_{2}$$-optimal MOR is to construct a reduced system $$\Sigma _{r}$$ such that the reduced system $$\Sigma _{r}$$ approximates the original system $$\Sigma $$ in the $$H_{2}$$-norm sense as well as possible, that is, to minimize $$\|\Sigma -\Sigma _{r}\|_{H_{2}}$$. To this end, we form the following error system:   \begin{align}\hat{\Sigma}: \begin{cases}\hat{x}(k+1)=\hat{A}\hat{x}(k)+\sum\limits_{i=1}^{m}\hat{N}_{i}\hat{x}(k)u_{i}(k)+\hat{B}u(k)\text{,}\\[6pt] \hat{y}(k)=\hat{C}\hat{x}(k), \end{cases} \end{align} (3.1) where   $$ \hat{x}(k)= \begin{bmatrix} x(k) \\ x_{r}(k) \end{bmatrix}\text{,}~~ \hat{y}(k)= \begin{bmatrix} y(k) \\ y_{r}(k) \end{bmatrix}\text{,}~~ \hat{A}= \begin{bmatrix} A & 0\\ 0 & A_{r} \end{bmatrix}\text{,}~~ \hat{N}_{i}= \begin{bmatrix} N_{i} & 0\\ 0 & N_{ir} \end{bmatrix}\text{,}~~ \hat{B}= \begin{bmatrix} B \\ B_{r} \end{bmatrix}\text{,}~~ \hat{C}= \begin{bmatrix} C & -C_{r} \end{bmatrix}. $$ We assume that the error system $$\hat{\Sigma}$$ is stable and that its reachability gramian $$\hat{P}$$ and observability gramian $$\hat{Q}$$ exist. Then, $$\hat{P}$$ and $$\hat{Q}$$ satisfy the following generalized Lyapunov equations:   \begin{align} \hat{A}\hat{P}\hat{A}^{T}-\hat{P}+\sum_{i=1}^{m}\hat{N}_{i}\hat{P}\hat{N}_{i}^{T}+\hat{B}\hat{B}^{T}=0\text{,} \end{align} (3.2)  \begin{align} {\hskip-5pt}\hat{A}^{T}\hat{Q}\hat{A}-\hat{Q}+\sum_{i=1}^{m}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}+\hat{C}^{T}\hat{C}=0. \end{align} (3.3) The $$H_{2}$$ norm of the error system can be expressed as   $$ \|\hat{\Sigma}\|_{H_{2}}^{2}=\textrm{tr}(\hat{C}\hat{P}\hat{C}^{T})=\textrm{tr}(\hat{B}^{T}\hat{Q}\hat{B}).$$ Thus, finding the $$H_{2}$$-optimal reduced system can be converted into solving the following minimization problem:   \begin{align} \mathfrak{J}_{a}(A_{r}\text{,}~N_{ir}\text{,}~B_{r}\text{,}~C_{r}):=\min\|\hat{\Sigma}\|_{H_{2}}. \end{align} (3.4)

In order to further study the minimization problem (3.4), we partition $$\hat{P}$$ and $$\hat{Q}$$ as   \begin{align} \hat{P}= \begin{bmatrix} P_{11} & P_{12}\\ P_{12}^{T} & P_{22} \end{bmatrix}\text{,} ~~\hat{Q}= \begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{T} & Q_{22} \end{bmatrix}, \end{align} (3.5) where $$P_{11}$$, $$Q_{11}\in \mathbb{R}^{n\times n}$$ and $$P_{22}$$, $$Q_{22}\in \mathbb{R}^{r\times r}$$. Suppose that the reduced system is reachable and observable. The reachability gramian and the observability gramian of the reduced system are denoted by $$P_{r}$$ and $$Q_{r}$$, respectively. Substituting (3.5) into (3.2) and (3.3), we have   $$ P_{11}=P\text{,}~~ Q_{11}=Q\text{,}~~ P_{22}=P_{r}\text{,}~~ Q_{22}=Q_{r}.$$ Moreover, $$P_{12}$$ and $$Q_{12}$$ satisfy   \begin{align} AP_{12}{{A_{r}^{T}}}-P_{12}+\sum_{i=1}^{m}N_{i}P_{12}N_{ir}^{T}+B{{B_{r}^{T}}}=0\text{,} \end{align} (3.6)  \begin{align} {\hskip-15pt}A^{T}Q_{12}A_{r}-Q_{12}+\sum_{i=1}^{m}{{N_{i}^{T}}}Q_{12}N_{ir}-C^{T}C_{r}=0. \end{align} (3.7) Next, we derive the $$H_{2}$$-optimal necessary conditions for the discrete-time bilinear system.

Theorem 3.1 Let $$\Sigma _{r}$$ be the solution of the minimization problem (3.4) and assume that $$\Sigma _{r}$$ is stable. Then, the following equations hold:   \begin{align} {\hskip-23pt}Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=0\text{,} \end{align} (3.8)  \begin{align} {\hskip-30pt}Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r}=0\text{,} \end{align} (3.9)  \begin{align} CP_{12}-C_{r}P_{r}=0\text{,} \end{align} (3.10)  \begin{align} Q_{12}^{T}B+Q_{r}B_{r}=0. \end{align} (3.11)

Proof.
Let $$M:=\hat{C}^{T}\hat{C}$$, $$S:=\hat{B}\hat{B}^{T}$$ and $$\mathfrak{E}:= \|\hat{\Sigma } \|_{H_{2}}$$. Then $$\mathfrak{E}^{2}=\textrm{tr} (\hat{C}\hat{P}\hat{C}^{T} )=\textrm{tr} (\hat{P}M )$$. As the derivation procedures are similar, let $$\beta $$ be an arbitrary parameter of the minimization problem (3.4). Differentiating $$\mathfrak{E}^{2}$$ with respect to $$\beta $$, we can obtain   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}=\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta}M\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.12)Substituting (3.3) into (3.12), it yields that   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}=-\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta} \hat{A}^{T}\hat{Q}\hat{A}-\frac{\partial\hat{P}}{\partial\beta}\hat{Q}+ \sum_{i=1}^{m}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.13)Differentiating both sides of (3.2) with respect to $$\beta $$, we have   \begin{align} \frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}+\hat{A}\frac{\partial\hat{P}}{\partial\beta}\hat{A}^{T} +\hat{A}\hat{P}\frac{\partial\hat{A}^{T}}{\partial\beta}-\frac{\partial\hat{P}}{\partial\beta} +\sum_{i=1}^{m}\left(\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}+\hat{N}_{i}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T} +\hat{N}_{i}\hat{P}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\right)+\frac{\partial S}{\partial\beta}=0. \end{align} (3.14)Multiply (3.14) by $$\hat{Q}$$ from the left and take the trace on both sides. Then, it follows   \begin{multline} -\textrm{tr}\left(\frac{\partial\hat{P}}{\partial\beta}\hat{A}^{T}\hat{Q}\hat{A} -\frac{\partial\hat{P}}{\partial\beta}\hat{Q} +\sum_{i=1}^{m}\frac{\partial\hat{P}}{\partial\beta}\hat{N}_{i}^{T}\hat{Q}\hat{N}_{i}\right) \\ = \textrm{tr}\left(\frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}\hat{Q} +\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}\hat{Q}+\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P} +\frac{\partial S}{\partial\beta}\hat{Q}\right).\end{multline} (3.15)Substituting (3.15) into (3.13), we can get the following equation:   \begin{align} \frac{\partial\mathfrak{E}^{2}}{\partial\beta}&= \textrm{tr}\left(\frac{\partial\hat{A}}{\partial\beta}\hat{P}\hat{A}^{T}\hat{Q} +\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}}{\partial\beta}\hat{P}\hat{N}_{i}^{T}\hat{Q} +\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P} +\frac{\partial S}{\partial\beta}\hat{Q}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right)\notag\\ &=2\textrm{tr}\left(\frac{\partial\hat{A}^{T}}{\partial\beta}\hat{Q}\hat{A}\hat{P}\right)+ 2\textrm{tr}\left(\sum_{i=1}^{m}\frac{\partial\hat{N}_{i}^{T}}{\partial\beta}\hat{Q}\hat{N}_{i}\hat{P}\right) +\textrm{tr}\left(\frac{\partial S}{\partial\beta}\hat{Q}\right) +\textrm{tr}\left(\hat{P}\frac{\partial M}{\partial\beta}\right). \end{align} (3.16)Let $$a_{r}^{\,jk}$$, $$n_{ir}^{\,jk}$$, $$b_{r}^{\,jk}$$ and $$c_{r}^{\,jk}$$ be the $$(\,j, k) $$-th elements of the matrices $$A_{r}$$, $$N_{ir}$$, $$B_{r}$$ and $$C_{r}$$, respectively. We firstly let $$\beta =a_{r}^{\,jk}$$. 
According to (3.16), we have   $$ \frac{\partial\mathfrak{E}^{2}}{\partial a_{r}^{\,jk}}=2\textrm{tr}\left(\frac{\partial{{A_{r}^{T}}}}{\partial a_{r}^{\,jk}} (Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r})\right).$$ If $$\Sigma _{r}$$ is $$H_{2}$$ optimal, then $$\frac{\partial \mathfrak{E}^{2}}{\partial a_{r}^{\,jk}}=0$$ holds for all $$a_{r}^{\,jk}$$ $$(\,j, k = 1, \cdots , r)$$ and $$Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=0$$ can be obtained. Similarly, let $$\beta =n_{ir}^{\,jk}$$. It yields that   $$ \frac{\partial\mathfrak{E}^{2}}{\partial n_{ir}^{\,jk}}=2\textrm{tr}\left(\frac{\partial N_{ir}^{T}}{\partial n_{ir}^{jk}} (Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r})\right).$$ Since $$\frac{\partial \mathfrak{E}^{2}}{\partial n_{ir}^{\,jk}}=0$$, we can obtain $$Q_{12}^{T}N_{i}P_{12}+Q_{r}N_{ir}P_{r}=0.$$ Furthermore, we let $$\beta =b_{r}^{\,jk}$$ and $$c_{r}^{\,jk}$$, respectively; equations (3.11) and (3.10) can then be verified from $$\frac{\partial \mathfrak{E}^{2}}{\partial \beta }=0$$ in the same way. Thus, the proof of Theorem 3.1 is accomplished.

In the following, we establish the corresponding MOR algorithm based on Theorem 3.1. Suppose that the reduced system $$\Sigma _{r}$$ is stable, reachable and observable. Then, $$P_{r}$$ and $$Q_{r}$$ are positive definite. By Theorem 3.1, we can get   \begin{align} A_{r}&=-Q_{r}^{-1}Q_{12}^{T}AP_{12}P_{r}^{-1}\text{,}\notag\\ N_{ir}&=-Q_{r}^{-1}Q_{12}^{T}N_{i}P_{12}P_{r}^{-1}\text{,}\notag\\ C_{r}&=CP_{12}P_{r}^{-1}\text{,}\notag\\ B_{r}&=-Q_{r}^{-1}Q_{12}^{T}B.\notag \end{align} Since $$P_{r}$$ and $$Q_{r}$$ are the reachability gramian and the observability gramian of the reduced system (2.2), they satisfy the following generalized Lyapunov equations:   \begin{align} A_{r}P_{r}{{A_{r}^{T}}}-P_{r}+\sum_{i=1}^{m}N_{ir}P_{r}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0\text{,} \end{align} (3.17)  \begin{align} {\hskip-5pt}{{A_{r}^{T}}}Q_{r}A_{r}-Q_{r}+\sum_{i=1}^{m}N_{ir}^{T}Q_{r}N_{ir}+{{C_{r}^{T}}}C_{r}=0. \end{align} (3.18) Premultiplying (3.6) by $$Q_{r}^{-1}Q_{12}^{T}$$, we have   \begin{align} Q_{r}^{-1}Q_{12}^{T}AP_{12}{{A_{r}^{T}}}-Q_{r}^{-1}Q_{12}^{T}P_{12}+ \sum_{i=1}^{m}Q_{r}^{-1}Q_{12}^{T}N_{i}P_{12}N_{ir}^{T}+Q_{r}^{-1}Q_{12}^{T}B{{B_{r}^{T}}}=0. \end{align} (3.19) Substituting (3.8), (3.9) and (3.11) into (3.19), it yields that   \begin{align} A_{r}P_{r}{{A_{r}^{T}}}+Q_{r}^{-1}Q_{12}^{T}P_{12}+\sum_{i=1}^{m}N_{ir}P_{r}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0. \end{align} (3.20) Comparing (3.17) with (3.20), we obtain $$P_{r}+Q_{r}^{-1}Q_{12}^{T}P_{12}=0$$, i.e. $$-Q_{r}^{-1}Q_{12}^{T}P_{12}P_{r}^{-1}=I_{r}$$. Let $$W^{T}:=-Q_{r}^{-1}Q_{12}^{T}$$ and $$V:=P_{12}P_{r}^{-1}$$. Then $$W^{T}V=I_{r}$$. Therefore, the coefficient matrices of the reduced system $$\Sigma _{r}$$ can be written as   $$ A_{r}=W^{T}AV\text{,}~~N_{ir}=W^{T}N_{i}V\text{,}~~B_{r}=W^{T}B\text{,}~~C_{r}=CV.$$ On the basis of Theorem 3.1 and the above discussions, we present an iterative algorithm (Algorithm 1) to generate the reduced system.
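The derivation above suggests a natural fixed-point iteration: solve the coupled equations (3.6), (3.7), (3.17) and (3.18) for the current reduced matrices, update $$W^{T}=-Q_{r}^{-1}Q_{12}^{T}$$ and $$V=P_{12}P_{r}^{-1}$$, and repeat. A minimal sketch of one possible realization is given below; the initial projection, the re-biorthogonalization step, the stopping rule and the dense Kronecker solves are illustrative assumptions rather than part of the derivation itself:

```python
import numpy as np

def solve_coupled(A1, A2, N1, N2, F):
    """Solve A1 X A2^T - X + sum_i N1_i X N2_i^T + F = 0 by vectorization (Kronecker products)."""
    n, r = F.shape
    L = np.kron(A2, A1) - np.eye(n * r) + sum(np.kron(M2, M1) for M1, M2 in zip(N1, N2))
    return np.linalg.solve(L, -F.reshape(-1, order="F")).reshape((n, r), order="F")

def h2_mor_gramians(A, N_list, B, C, r, max_iter=100, tol=1e-8):
    """Fixed-point iteration suggested by Theorem 3.1: V = P12 Pr^{-1}, W^T = -Qr^{-1} Q12^T."""
    n = A.shape[0]
    V = np.eye(n)[:, :r]                                 # initial projection (an assumption)
    W = np.eye(n)[:, :r]
    for _ in range(max_iter):
        Ar = W.T @ A @ V
        Nr = [W.T @ Ni @ V for Ni in N_list]
        Br, Cr = W.T @ B, C @ V
        P12 = solve_coupled(A, Ar, N_list, Nr, B @ Br.T)                          # (3.6)
        Q12 = solve_coupled(A.T, Ar.T, [Ni.T for Ni in N_list],
                            [M.T for M in Nr], -C.T @ Cr)                         # (3.7)
        Pr = solve_coupled(Ar, Ar, Nr, Nr, Br @ Br.T)                             # (3.17)
        Qr = solve_coupled(Ar.T, Ar.T, [M.T for M in Nr], [M.T for M in Nr],
                           Cr.T @ Cr)                                             # (3.18)
        V_new = P12 @ np.linalg.inv(Pr)
        W_new = -Q12 @ np.linalg.inv(Qr)                 # Qr is symmetric, so this is -Q12 Qr^{-T}
        W_new = W_new @ np.linalg.inv(W_new.T @ V_new)   # keep W^T V = I_r during the iteration
        converged = np.linalg.norm(V_new - V) < tol and np.linalg.norm(W_new - W) < tol
        V, W = V_new, W_new
        if converged:
            break
    return W.T @ A @ V, [W.T @ Ni @ V for Ni in N_list], W.T @ B, C @ V
```

At a fixed point, V and W coincide with the matrices derived above, so the returned reduced system satisfies the necessary conditions of Theorem 3.1 under the assumptions of the following proposition.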
When Algorithm 1 is convergent, we can prove that the resulting reduced system $$\Sigma _{r}$$ satisfies the $$H_{2}$$-optimal necessary conditions. This result is stated in the following proposition.

Proposition 3.1 Assume that Algorithm 1 is convergent, that $$P_{12}$$ and $$Q_{12}$$ satisfy (3.6) and (3.7), respectively, and that $$A_{r}\otimes A_{r}-I_{r}\otimes I_{r}+\sum_{i=1}^{m}N_{ir}\otimes N_{ir}$$ is non-singular. Then, the returned coefficient matrices $$A_{r}$$, $$N_{ir}$$, $$B_{r}$$ and $$C_{r}$$ satisfy the $$H_{2}$$-optimal necessary conditions.

Proof. Let $${{W_{k}^{T}}}$$ and $$V_{k}$$ represent $$W^{T}$$ and V in the k-th iteration of Algorithm 1, respectively. Then, the returned coefficient matrices are $$A_{r}={{W_{k}^{T}}}AV_{k}$$, $$N_{ir}={{W_{k}^{T}}}N_{i}V_{k}$$, $$B_{r}={{W_{k}^{T}}}B$$ and $$C_{r}=CV_{k}$$. Due to the convergence of Algorithm 1, there are non-singular matrices $$T_{1}$$ and $$T_{2}$$ such that $$W_{k}=Q_{12}{{T_{1}^{T}}}$$ and $$V_{k}=P_{12}T_{2}$$. Multiplying (3.6) by $${{W_{k}^{T}}}$$ from the left, it follows that   $$ {{W_{k}^{T}}}AP_{12}{{A_{r}^{T}}}-{{W_{k}^{T}}}P_{12}+\sum_{i=1}^{m}{{W_{k}^{T}}}N_{i}P_{12}N_{ir}^{T}+{{W_{k}^{T}}}B{{B_{r}^{T}}}=0.$$ Thus, it holds that   \begin{align} A_{r}T_{2}^{-1}{{A_{r}^{T}}}-T_{2}^{-1}+\sum_{i=1}^{m}N_{ir}T_{2}^{-1}N_{ir}^{T}+B_{r}{{B_{r}^{T}}}=0. \end{align} (3.21) Subtracting (3.21) from (3.17), we have   \begin{align} A_{r}\left(T_{2}^{-1}-P_{r}\right){{A_{r}^{T}}}-\left(T_{2}^{-1}-P_{r}\right)+\sum_{i=1}^{m}N_{ir}\left(T_{2}^{-1}-P_{r}\right)N_{ir}^{T}=0. \end{align} (3.22) Since $$A_{r}\otimes A_{r}-I_{r}\otimes I_{r}+\sum _{i=1}^{m}N_{ir}\otimes N_{ir}$$ is non-singular, (3.22) has the unique solution $$T_{2}^{-1}-P_{r}=0$$, i.e. $$T_{2}^{-1}=P_{r}$$. Similarly, we can get $${{T_{1}^{T}}}=-Q_{r}^{-1}$$. This leads to   \begin{align} Q_{12}^{T}AP_{12}+Q_{r}A_{r}P_{r}=Q_{12}^{T}AP_{12}+Q_{r}{{T_{1}^{T}}}Q_{12}^{T}AP_{12}T_{2}P_{r}=0. \notag \end{align} In the same way, we can prove that (3.9)–(3.11) hold. Thus, the proof of Proposition 3.1 is accomplished.

4. $$H_{2}$$ MOR based on the generalized cross gramian
In this section, we explore $$H_{2}$$ MOR of the SISO discrete-time bilinear system. First, the $$H_{2}$$ norm is expressed by the generalized cross gramian. Then, we show the corresponding $$H_{2}$$-optimal necessary conditions. Finally, an MOR algorithm is developed to obtain the reduced system. The k-th transfer function of an SISO discrete-time bilinear system is defined as (see Benner et al., 2011)   $$ H_{k}(z_{1}\text{,}~z_{2}\text{,}\cdots\text{,}~z_{k})=c(z_{k}I_{n}-A)^{-1}N(z_{k-1}I_{n}-A)^{-1}N\cdots N(z_{1}I_{n}-A)^{-1}b.$$ Then, the $$H_{2}$$ norm of the system (2.1) can also be computed by   $$ \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}\left(\sum_{k=1}^{\infty}\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} \overline{H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}(H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}}))^{T}}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\!.$$ In the following, we give the expression for the $$H_{2}$$ norm of the SISO system (2.1) based on the generalized cross gramian.

Theorem 4.1 Let $$\Sigma $$ be an SISO discrete-time bilinear system with a scalar matrix N (i.e. $$N=\alpha I_{n}$$ for some scalar $$\alpha$$), and assume that the generalized cross gramian R exists. Then the $$H_{2}$$ norm of $$\Sigma $$ can be expressed as   \begin{align} \|\Sigma\|_{H_{2}}^{2}=\textrm{tr}(cRb)=cRb. \end{align} (4.1)

Proof.
Since $$H_{k}$$ is a scalar, we have   \begin{align} \|\Sigma\|_{H_{2}}^{2}&=\textrm{tr}\left(\sum_{k=1}^{\infty}\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} \overline{H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}(H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}}))^{T}}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\notag\\ &=\sum_{k=1}^{\infty}\textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\!.\notag \end{align} Let $$R_{i}:=\sum _{k_{i}=0}^{\infty }\cdots \sum _{k_{1}=0}^{\infty} {P_{i}Q_{i}}$$. Then, $$R=\sum _{i=1}^{\infty} {R_{i}}$$, and we just need to prove that   \begin{align} \textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right) =cR_{k}b\text{,} \end{align} (4.2) where k = 1, 2, ⋯. First, when k = 1, it holds that   \begin{align} &\textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}H_{1}(e^{-i\theta_{1}})H_{1}(e^{i\theta_{1}})}\, \textrm{d}\theta_{1}\right)\notag\\ &\quad=\textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}c(e^{-i\theta_{1}}I-A)^{-1}bc(e^{i\theta_{1}}I-A)^{-1}b}\, \textrm{d}\theta_{1}\right)\notag\\ &\quad=\textrm{tr}\left(c\frac{1}{2\pi}\int_{0}^{2\pi}{\left(\sum_{l_{1}=0}^{\infty}e^{il_{1}\theta_{1}}A^{l_{1}}\right)b c\left(\sum_{l_{2}=0}^{\infty}e^{-il_{2}\theta_{1}}A^{l_{2}}\right)}\, \textrm{d}\theta_{1}b\right)\notag\\ &\quad=\textrm{tr}\left(c\frac{1}{2\pi}\sum_{l_{1}=0}^{\infty}~\sum_{l_{2}=0}^{\infty}\int_{0}^{2\pi}e^{-i(l_{2}-l_{1})\theta_{1}}\, \textrm{d}\theta_{1}A^{l_{1}}bcA^{l_{2}}b\right).\notag \end{align} If $$l_{1}=l_{2}$$, then $$\int _{0}^{2\pi }e^{-i(l_{2}-l_{1})\theta _{1}}\, \textrm{d}\theta _{1}=2\pi $$; otherwise, $$\int _{0}^{2\pi }e^{-i(l_{2}-l_{1})\theta _{1}}\, \textrm{d}\theta _{1}=0$$. Hence, the above equation can be simplified as   $$ \textrm{tr}\left(\int_{0}^{2\pi}{\frac{1}{2\pi}H_{1}(e^{-i\theta_{1}})H_{1}(e^{i\theta_{1}})}\, \textrm{d}\theta_{1}\right) =\textrm{tr}\left(c\sum_{l=0}^{\infty}{A^{l}bcA^{l}}b\right)=\textrm{tr}(cR_{1}b)=cR_{1}b. $$
Next, when k = 2, it follows that   \begin{align} &\textrm{tr}\left(\int_{0}^{2\pi}\int_{0}^{2\pi}{\frac{1}{(2\pi)^{2}}H_{2}(e^{-i\theta_{1}}\text{,}~e^{-i\theta_{2}}) H_{2}(e^{i\theta_{1}}\text{,}~e^{i\theta_{2}})}\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2}\right)\notag\\ &\ =\textrm{tr}\left(\int_{0}^{2\pi}\int_{0}^{2\pi}\left({\frac{1}{(2\pi)^{2}}c(e^{-i\theta_{2}}I-A)^{-1}N(e^{-i\theta_{1}}I-A)^{-1}b c(e^{i\theta_{2}}I-A)^{-1}N(e^{i\theta_{1}}I-A)^{-1}b}\right)\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2}\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{21}=0}^{\infty}~\sum_{l_{22}=0}^{\infty}~\sum_{l_{11}=0}^{\infty}~\sum_{l_{12}=0}^{\infty}\int_{0}^{2\pi}\!\!\int_{0}^{2\pi}\frac{1}{(2\pi)^{2}} e^{-i(l_{22}-l_{21})\theta_{2}}e^{-i(l_{12}-l_{11})\theta_{1}}\, \textrm{d}\theta_{1}\, \textrm{d}\theta_{2} A^{l_{21}}NA^{l_{11}}bcA^{l_{22}}NA^{l_{12}}b\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{2}=0}^{\infty}~\sum_{l_{1}=0}^{\infty}A^{l_{2}}NA^{l_{1}}bcA^{l_{2}}NA^{l_{1}}b\right)\notag\\ &\ =\textrm{tr}\left(c\sum_{l_{2}=0}^{\infty}~\sum_{l_{1}=0}^{\infty}A^{l_{2}}NA^{l_{1}}bcA^{l_{1}}NA^{l_{2}}b\right)\notag\\ &\ =cR_{2}b.\notag \end{align} Here the penultimate equality uses the assumption that N is a scalar matrix, so that $$A^{l_{2}}NA^{l_{1}}=A^{l_{1}}NA^{l_{2}}$$. Similarly, we can also verify that (4.2) holds for k = 3, 4,⋯. Therefore,   $$ \textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right) =\textrm{tr}(cR_{k}b)\text{,}\quad k=1\text{,}~2\text{,}\cdots. $$ That is,   \begin{align} \|\Sigma\|_{H_{2}}^{2}&=\sum_{k=1}^{\infty}\textrm{tr}\left(\int_{0}^{2\pi}\cdots\int_{0}^{2\pi}{\frac{1}{(2\pi)^{k}} H_{k}(e^{-i\theta_{1}}\text{,}\cdots\text{,}~e^{-i\theta_{k}})H_{k}(e^{i\theta_{1}}\text{,}\cdots\text{,}~e^{i\theta_{k}})}\, \textrm{d}\theta_{1}\cdots \, \textrm{d}\theta_{k}\right)\notag\\ &=\textrm{tr}\left(\sum_{k=1}^{\infty}cR_{k}b\right)\notag\\ &=cRb.\notag \end{align} Thus, the proof of Theorem 4.1 is accomplished.
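Theorem 4.1 is easy to check numerically on a small example: for a randomly chosen stable SISO system with a scalar matrix N, the value cRb obtained from (2.8) should agree with $$\textrm{tr}(CPC^{T})$$ from (2.5) and (2.9). A minimal sketch of such a check (all system data below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
A *= 0.5 / max(abs(np.linalg.eigvals(A)))        # rescale A so that it is stable
N = 0.05 * np.eye(n)                             # a scalar matrix N
b = rng.standard_normal(n)
c = rng.standard_normal(n)

I = np.eye(n * n)
vec = lambda M: M.reshape(-1, order="F")         # column-stacking vectorization

# Reachability gramian P from (2.5) and generalized cross gramian R from (2.8)
P = np.linalg.solve(np.kron(A, A) - I + np.kron(N, N),
                    -vec(np.outer(b, b))).reshape((n, n), order="F")
R = np.linalg.solve(np.kron(A.T, A) - I + np.kron(N.T, N),
                    -vec(np.outer(b, c))).reshape((n, n), order="F")

print(c @ P @ c, c @ R @ b)   # both values equal the squared H2 norm (Theorem 4.1)
```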
In order to obtain the $$H_{2}$$-optimal necessary conditions based on the generalized cross gramian, the error system (3.1) in the SISO case is reduced to   \begin{align}\hat{\Sigma}: \begin{cases} \hat{x}(k+1)=\hat{A}\hat{x}(k)+\hat{N}\hat{x}(k)u(k)+\hat{b}u(k)\text{,}\\[6pt] \hat{y}(k)=\hat{c}\hat{x}(k)\text{,} \end{cases}\end{align} (4.3) where   $$ \hat{x}= \begin{bmatrix} x(k) \\ x_{r}(k) \end{bmatrix}\text{,}~~ \hat{y}= \begin{bmatrix} y(k) \\ y_{r}(k) \end{bmatrix}\text{,}~~ \hat{A}= \begin{bmatrix} A & 0\\ 0 & A_{r} \end{bmatrix}\text{,}~~ \hat{N}= \begin{bmatrix} N & 0\\ 0 & N_{r} \end{bmatrix}\text{,}~~ \hat{b}= \begin{bmatrix} b \\ b_{r} \end{bmatrix}\text{,}~~ \hat{c}= \begin{bmatrix} c & -c_{r} \end{bmatrix}\text{.} $$ Since the reduced system has the form (2.2) with $$W^{T}V=I_{r}$$, the system (4.3) is still an SISO system and $$N_{r}=W^{T}NV$$ is still a scalar matrix. When the error system (4.3) is stable and the generalized cross gramian $$\hat{R}$$ exists, $$\hat{R}$$ satisfies the following generalized Sylvester equation:   \begin{align} \hat{A}\hat{R}\hat{A}-\hat{R}+\hat{N}\hat{R}\hat{N}+\hat{b}\hat{c}=0. \end{align} (4.4) By Theorem 4.1, the $$H_{2}$$ norm of the system (4.3) can be written as   \begin{align} \|\hat{\Sigma}\|_{H_{2}}^{2}=\textrm{tr}(\hat{c}\hat{R}\hat{b})=\hat{c}\hat{R}\hat{b}. \end{align} (4.5) Hence, the $$H_{2}$$-optimal MOR problem can be converted into the minimization problem   \begin{align} \mathfrak{J}_{b}(A_{r}\text{,}~N_{r}\text{,}~b_{r}\text{,}~c_{r}):=\min\|\hat{\Sigma}\|_{H_{2}}. \end{align} (4.6)

In order to further study the $$H_{2}$$ norm of the error system (4.3), $$\hat{R}$$ is partitioned as follows:   \begin{align} \hat{R}= \begin{bmatrix} R_{11} & R_{12}\\ R_{21} & R_{r} \end{bmatrix}, \end{align} (4.7) where $$R_{11}\in \mathbb{R}^{n\times n}$$ and $$R_{r}\in \mathbb{R}^{r\times r}$$. Substituting (4.7) into (4.4), we can get   \begin{align} AR_{11}A-R_{11}+NR_{11}N+bc=0\text{,} \end{align} (4.8)  \begin{align} {\hskip-10pt}AR_{12}A_{r}-R_{12}+NR_{12}N_{r}-bc_{r}=0\text{,} \end{align} (4.9)  \begin{align} {\hskip-10pt}A_{r}R_{21}A-R_{21}+N_{r}R_{21}N+b_{r}c=0\text{,} \end{align} (4.10)  \begin{align} {\hskip-8pt}A_{r}R_{r}A_{r}-R_{r}+N_{r}R_{r}N_{r}-b_{r}c_{r}=0. \end{align} (4.11) From (4.8), we have $$R_{11}=R$$. Based on the generalized cross gramian, we can also obtain the $$H_{2}$$-optimal necessary conditions for the SISO case, which are shown in the following theorem.

Theorem 4.2 Suppose that the SISO system $$\Sigma _{r}$$ is the solution of the minimization problem (4.6) and it is stable. Then, the following equations hold:   \begin{align} {\hskip-25pt}R_{21}AR_{12}+R_{r}A_{r}R_{r}=0\text{,} \end{align} (4.12)  \begin{align} {\hskip-27pt}R_{21}NR_{12}+R_{r}N_{r}R_{r}=0\text{,} \end{align} (4.13)  \begin{align} cR_{12}-c_{r}R_{r}=0\text{,} \end{align} (4.14)  \begin{align} R_{21}b+R_{r}b_{r}=0. \end{align} (4.15)

Proof. Let $$\mathfrak{E}_{b}$$ stand for the $$H_{2}$$ norm of the error system (4.3). Then, we have   $$ \mathfrak{E}_{b}^{2}=\|\hat{\Sigma}\|_{H_{2}}^{2}=\textrm{tr}(\hat{c}\hat{R}\hat{b})=\textrm{tr}(\hat{R}U)\text{,}$$ where $$U:=\hat{b}\hat{c}$$. Let $$\xi $$ denote an arbitrary parameter of the minimization problem (4.6). Differentiating $$\mathfrak{E}_{b}^{2}$$ with respect to $$\xi $$ and combining with (4.4), we can obtain   \begin{align} \frac{\partial\mathfrak{E}_{b}^{2}}{\partial\xi}=\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}U\right) +\textrm{tr}\left(\hat{R}\frac{\partial U}{\partial\xi}\right)=-\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{A}\hat{R}\hat{A}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{R}\right)-\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{N}\hat{R}\hat{N}\right) +\textrm{tr}\left(\hat{R}\frac{\partial U}{\partial\xi}\right). \end{align} (4.16) In order to simplify (4.16), we differentiate (4.4) with respect to $$\xi $$ on both sides and it yields that   \begin{align} \frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}+\hat{A}\frac{\partial\hat{R}}{\partial\xi}\hat{A} +\hat{A}\hat{R}\frac{\partial\hat{A}}{\partial\xi}-\frac{\partial\hat{R}}{\partial\xi} +\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}+\hat{N}\frac{\partial\hat{R}}{\partial\xi}\hat{N} +\hat{N}\hat{R}\frac{\partial\hat{N}}{\partial\xi}+\frac{\partial U}{\partial\xi}=0. \end{align} (4.17) Premultiplying (4.17) by $$\hat{R}$$ and taking the trace, we can get   \begin{align} 2\textrm{tr}\left(\frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}\hat{R}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{A}\hat{R}\hat{A}\right) -\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}\hat{R}\right) +\textrm{tr}\left(\frac{\partial\hat{R}}{\partial\xi}\hat{N}\hat{R}\hat{N}\right) +\textrm{tr}\left(\frac{\partial U}{\partial\xi}\hat{R}\right)=0.
\end{align} (4.18) Substituting (4.18) into (4.16), we have   \begin{align} \frac{\partial\mathfrak{E}_{b}^{2}}{\partial\xi}= 2\textrm{tr}\left(\frac{\partial\hat{A}}{\partial\xi}\hat{R}\hat{A}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial\hat{N}}{\partial\xi}\hat{R}\hat{N}\hat{R}\right) +2\textrm{tr}\left(\frac{\partial U}{\partial\xi}\hat{R}\right). \end{align} (4.19) Let $$A_{r}= [a_{r}^{ij} ]$$ and $$\xi =a_{r}^{ij}$$ (i, j = 1, 2, ⋯, r). From (4.19), we obtain   $$ \frac{\partial\mathfrak{E}_{b}^{2}}{\partial a_{r}^{ij}}=2\textrm{tr}\left(\frac{\partial A_{r}}{\partial a_{r}^{ij}} (R_{21}AR_{12}+R_{r}A_{r}R_{r})\right).$$ If $$A_{r}$$ is the solution of the optimization problem (4.6), then $$\frac{\partial \mathfrak{E}_{b}^{2}}{\partial a_{r}^{ij}}=0$$. Obviously, it holds that   $$ R_{21}AR_{12}+R_{r}A_{r}R_{r}=0.$$ Equations (4.13)–(4.15) can be proved analogously when $$\xi $$ represents $$n_{r}^{ij}$$, $$b_{r}^{ij}$$ and $$c_{r}^{ij}$$, respectively, where $$n_{r}^{ij}$$, $$b_{r}^{ij}$$ and $$c_{r}^{ij}$$ are the $$(i,j)$$-th elements of $$N_{r}$$, $$b_{r}$$ and $$c_{r}$$. Thus, the proof of Theorem 4.2 is accomplished.

Assume that the $$H_{2}$$-optimal reduced system $$\Sigma _{r}$$ is stable and $$R_{r}$$ is non-singular. Then, by Theorem 4.2, we can obtain the following formulas:   \begin{align} A_{r}=-R_{r}^{-1}R_{21}AR_{12}R_{r}^{-1}\text{,} \end{align} (4.20)  \begin{align} N_{r}=-R_{r}^{-1}R_{21}NR_{12}R_{r}^{-1}\text{,} \end{align} (4.21)  \begin{align}{\hskip-30pt} b_{r}=-R_{r}^{-1}R_{21}b\text{,} \end{align} (4.22)  \begin{align} {\hskip-37pt}c_{r}=cR_{12}R_{r}^{-1}. \end{align} (4.23) Next, we consider (4.10). Multiplying (4.10) by $$R_{12}R_{r}^{-1}$$ from the right, we can get   \begin{align} A_{r}R_{21}AR_{12}R_{r}^{-1}-R_{21}R_{12}R_{r}^{-1}+N_{r}R_{21}NR_{12}R_{r}^{-1}+b_{r}cR_{12}R_{r}^{-1}=0. \end{align} (4.24) Substituting (4.12)–(4.14) into (4.24), we obtain the following equation:   \begin{align} -A_{r}R_{r}A_{r}-R_{21}R_{12}R_{r}^{-1}-N_{r}R_{r}N_{r}+b_{r}c_{r}=0. \end{align} (4.25) Inserting (4.11) into (4.25), we have $$R_{21}R_{12}R_{r}^{-1}=-R_{r}$$, that is, $$-R_{r}^{-1}R_{21}R_{12}R_{r}^{-1}=I_{r}$$. Combining this with (4.20)–(4.23), we can take the transformation matrices $$W^{T}:=-R_{r}^{-1}R_{21}$$ and $$V:=R_{12}R_{r}^{-1}$$ such that $$W^{T}V=I_{r}$$ and $$N_{r}$$ is a scalar matrix. Namely, the coefficient matrices of the reduced system can be generated by   $$ A_{r}=W^{T}AV\text{,}~~ N_{r}=W^{T}NV\text{,}~~ b_{r}=W^{T}b\text{,}~~ c_{r}=cV.$$ Therefore, we can establish the following algorithm (Algorithm 2).
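As in Section 3, the derivation suggests a fixed-point iteration: solve (4.9)–(4.11) for the current reduced matrices, update $$W^{T}=-R_{r}^{-1}R_{21}$$ and $$V=R_{12}R_{r}^{-1}$$, and repeat. One possible realization is sketched below (the initialization, normalization and stopping rule are again illustrative assumptions rather than prescribed steps):

```python
import numpy as np

def solve_gen_sylv(A1, A2, N1, N2, F):
    """Solve A1 X A2 - X + N1 X N2 + F = 0 by vectorization (Kronecker products)."""
    n, r = F.shape
    L = np.kron(A2.T, A1) - np.eye(n * r) + np.kron(N2.T, N1)
    return np.linalg.solve(L, -F.reshape(-1, order="F")).reshape((n, r), order="F")

def h2_mor_cross(A, N, b, c, r, max_iter=100, tol=1e-8):
    """Fixed-point iteration suggested by Theorem 4.2: W^T = -Rr^{-1} R21, V = R12 Rr^{-1}."""
    n = A.shape[0]
    V = np.eye(n)[:, :r]                               # initial projection (an assumption)
    W = np.eye(n)[:, :r]
    for _ in range(max_iter):
        Ar, Nr = W.T @ A @ V, W.T @ N @ V
        br, cr = W.T @ b, c @ V
        R12 = solve_gen_sylv(A, Ar, N, Nr, -np.outer(b, cr))     # (4.9)
        R21 = solve_gen_sylv(Ar, A, Nr, N, np.outer(br, c))      # (4.10)
        Rr = solve_gen_sylv(Ar, Ar, Nr, Nr, -np.outer(br, cr))   # (4.11)
        V_new = R12 @ np.linalg.inv(Rr)
        W_new = -(np.linalg.inv(Rr) @ R21).T
        W_new = W_new @ np.linalg.inv(W_new.T @ V_new)           # keep W^T V = I_r
        converged = np.linalg.norm(V_new - V) < tol and np.linalg.norm(W_new - W) < tol
        V, W = V_new, W_new
        if converged:
            break
    return W.T @ A @ V, W.T @ N @ V, W.T @ b, c @ V
```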
Remark 4.1 Using a proof similar to that of Proposition 3.1, we can obtain an analogous result. Specifically, assume that Algorithm 2 is convergent, that $$R_{12}$$ and $$R_{21}$$ satisfy (4.9) and (4.10), respectively, and that $${{A_{r}^{T}}}\otimes A_{r}-I_{r}\otimes I_{r}+{{N_{r}^{T}}}\otimes N_{r}$$ is non-singular. Then, the returned coefficient matrices $$A_{r}$$, $$N_{r}$$, $$b_{r}$$ and $$c_{r}$$ satisfy the $$H_{2}$$-optimal necessary conditions (4.12)–(4.15).

5. Numerical examples
In this section, we present two numerical examples to illustrate the effectiveness of our proposed methods. We mainly show the relative errors of the output and the relative $$H_{2}$$ errors. The relative $$H_{2}$$ error is calculated by the formula $$\|\hat{\Sigma }\|_{H_{2}}/\|\Sigma \|_{H_{2}}$$. Both numerical examples are implemented in MATLAB (R2010a). Since neither of the two test systems is very large, we can directly solve (3.6) and (3.7) by the Kronecker product and vectorization approach whenever the error system is stable. Equations (4.9) and (4.10) can be solved in the same way.

Example 5.1 The first example is a boundary controlled heat transfer system, which is also considered in Benner & Damm (2011). The partial differential equation is spatially discretized by finite differences on an h × h grid, which yields a continuous-time two-input and one-output bilinear system. Next, we convert this bilinear system into a discrete-time bilinear system by the semi-implicit Euler method ($$\Delta t=0.005$$) (see Wang & Jiang, 2016). In this example, we take h = 15, which leads to a discrete-time bilinear system of order n = 225. All reduced systems are obtained by Algorithm 1. For the input $$u(k)=[\frac{1}{k+1}\sin (\frac{k\pi }{5} ),~\frac{1}{k+1}\cos (\frac{k\pi }{5} ) ]^{T}$$, Table 1 presents the relative output errors between the original system and the reduced systems of different orders. From Table 1, we can see that the output of the original system is well approximated by those of the reduced systems. As the reduced order increases, the relative output error decreases on the whole. It is worth noting that the output and the relative output errors change if the input u(k) is changed. Table 2 shows the relative $$H_{2}$$ errors of these reduced systems for different reduced orders.
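The conversion step mentioned above can be made explicit. One common form of the semi-implicit Euler discretization of a continuous-time bilinear system $$\dot{x}=A_{c}x+\sum_{i}N_{c,i}xu_{i}+B_{c}u$$ treats only the linear term implicitly; the precise variant used in Wang & Jiang (2016) may differ in details, so the sketch below should be read as an illustrative assumption:

```python
import numpy as np

def semi_implicit_euler(Ac, Nc_list, Bc, Cc, dt):
    """One common semi-implicit Euler step for x' = Ac x + sum_i Nc_i x u_i + Bc u:
    the linear term is taken at the new time level, the bilinear and input terms at the old one,
    giving x(k+1) = (I - dt*Ac)^{-1} (x(k) + dt*sum_i Nc_i x(k) u_i(k) + dt*Bc u(k))."""
    n = Ac.shape[0]
    E = np.linalg.inv(np.eye(n) - dt * Ac)
    A = E                                      # discrete-time A
    N_list = [dt * E @ Nc for Nc in Nc_list]   # discrete-time N_i
    B = dt * E @ Bc                            # discrete-time B
    return A, N_list, B, Cc                    # output equation is unchanged
```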
Table 1. Relative output errors with different reduced orders in Example 5.1

          k = 5                    k = 15                   k = 25                   k = 35                   k = 45
r = 8     $$1.173\times 10^{-6}$$  $$1.898\times 10^{-5}$$  $$6.830\times 10^{-5}$$  $$2.799\times 10^{-5}$$  $$4.457\times 10^{-7}$$
r = 12    $$1.183\times 10^{-6}$$  $$2.504\times 10^{-6}$$  $$6.787\times 10^{-6}$$  $$2.757\times 10^{-6}$$  $$4.390\times 10^{-6}$$
r = 16    $$8.277\times 10^{-8}$$  $$2.114\times 10^{-8}$$  $$6.732\times 10^{-7}$$  $$5.158\times 10^{-8}$$  $$6.596\times 10^{-9}$$
r = 20    $$7.332\times 10^{-9}$$  $$1.648\times 10^{-7}$$  $$9.177\times 10^{-8}$$  $$3.651\times 10^{-7}$$  $$1.096\times 10^{-8}$$

Table 2. Relative $$H_{2}$$ errors with different reduced orders in Example 5.1

                          r = 8                    r = 12                   r = 16                   r = 20
Relative $$H_{2}$$ error  $$6.456\times 10^{-4}$$  $$4.278\times 10^{-5}$$  $$2.259\times 10^{-5}$$  $$4.747\times 10^{-6}$$

It is clear that all the reduced systems approximate the original system well in the $$H_{2}$$-norm sense and that the relative $$H_{2}$$ error strictly decreases as the reduced order increases. Thus, Tables 1 and 2 demonstrate the efficiency of Algorithm 1.

Example 5.2 In this example, we study the following SISO discrete-time system:   \begin{align}\notag \begin{cases} x(k+1)=Ax(k)+Nx(k)u(k)+bu(k)\text{,}~\\[6pt]y(k)=cx(k)\text{,}~ \end{cases} \end{align} in which A is a randomly generated stable matrix with n = 598, the matrix N = diag(0.01, 0.01, ⋯ , 0.01) $$\in \mathbb{R}^{598\times 598}$$ and the vector $$b=c^{T}\in \mathbb{R}^{598}$$ has all elements equal to 1. All reduced systems are obtained by Algorithm 2.
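For reproducibility, this test system can be set up along the following lines (the text does not specify how the random matrix A is made stable, so the spectral-radius scaling and the random seed below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 598
A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))   # rescale so that A is (Schur) stable -- assumed construction
N = 0.01 * np.eye(n)                        # N = diag(0.01, ..., 0.01)
b = np.ones(n)                              # b = c^T with all elements equal to 1
c = np.ones(n)

def u(k):                                   # input signal used in this example
    return np.sin(k * np.pi / 7) / (k + 1)
```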
For the input $$u(k)=\frac{1}{k+1}\sin (\frac{k\pi }{7})$$, Table 3 shows the relative output errors between the original discrete-time bilinear system and the reduced systems of varying orders.

Table 3. Relative output errors with different reduced orders in Example 5.2

          k = 5                    k = 15                   k = 25                   k = 35                   k = 45
r = 12    $$4.292\times 10^{-6}$$  $$1.647\times 10^{-4}$$  $$5.107\times 10^{-4}$$  $$3.665\times 10^{-3}$$  $$3.080\times 10^{-4}$$
r = 15    $$4.581\times 10^{-6}$$  $$2.451\times 10^{-4}$$  $$1.229\times 10^{-4}$$  $$1.423\times 10^{-3}$$  $$1.726\times 10^{-5}$$
r = 18    $$3.801\times 10^{-7}$$  $$2.683\times 10^{-5}$$  $$5.025\times 10^{-5}$$  $$1.089\times 10^{-3}$$  $$2.498\times 10^{-5}$$
r = 21    $$6.754\times 10^{-8}$$  $$2.919\times 10^{-7}$$  $$1.474\times 10^{-5}$$  $$3.059\times 10^{-6}$$  $$5.651\times 10^{-6}$$

Table 3 indicates that the reduced systems approximate the output behaviour of the original discrete-time bilinear system well. Obviously, the relative output error decreases substantially as the reduced order increases. Similarly, when the input is changed, the output and the relative output errors change accordingly. Table 4 provides the relative $$H_{2}$$ errors of the reduced systems for different reduced orders.

Table 4. Relative $$H_{2}$$ errors with different reduced orders in Example 5.2

                          r = 12                   r = 15                   r = 18                   r = 21
Relative $$H_{2}$$ error  $$6.041\times 10^{-4}$$  $$2.126\times 10^{-4}$$  $$5.593\times 10^{-5}$$  $$1.450\times 10^{-6}$$
From Table 4, we observe that although the reduced orders are much lower than 598, the reduced systems give good approximations of the original discrete-time bilinear system in the $$H_{2}$$-norm sense. Moreover, the relative $$H_{2}$$ error decreases correspondingly as the reduced order increases. Tables 3 and 4 illustrate that the reduced systems generated by Algorithm 2 approximate the original system well. Therefore, our proposed methods are feasible for these two examples.

6. Conclusions
In this paper, we mainly investigate $$H_{2}$$-optimal MOR of discrete-time bilinear systems based on gramians. First, we derive the $$H_{2}$$-optimal necessary conditions based on the reachability gramian and the observability gramian. Next, the corresponding MOR algorithm is established, which can make the reduced system satisfy the necessary conditions. Then, for SISO discrete-time bilinear systems, the $$H_{2}$$ norm can be represented by the generalized cross gramian, and on this basis we derive the related $$H_{2}$$-optimal necessary conditions. Similarly, we propose an MOR algorithm based on the generalized cross gramian, and the reduced system resulting from this algorithm can satisfy the necessary conditions. Finally, two numerical examples are employed to illustrate the efficiency of our proposed methods.

Funding
Natural Science Foundation of China (61663043).

References
Antoulas, A. C. (2005) Approximation of Large-Scale Dynamical Systems. Philadelphia: Society for Industrial and Applied Mathematics (SIAM).
Bai, Z. & Skoogh, D. (2006) A projection method for model reduction of bilinear dynamical systems. Linear Alg. Appl., 415, 406–425.
Benner, P. & Breiten, T. (2012) Interpolation-based $$H_2$$ model reduction of bilinear control systems. SIAM J. Matrix Anal. Appl., 33, 859–885.
Benner, P., Breiten, T. & Damm, T. (2010) Krylov subspace methods for model order reduction of bilinear discrete-time control systems. Proceedings in Applied Mathematics and Mechanics. New Jersey: John Wiley, pp. 601–602.
Benner, P., Breiten, T. & Damm, T. (2011) Generalized tangential interpolation for model reduction of discrete-time MIMO bilinear systems. Int. J. Control, 84, 1398–1407.
Benner, P. & Damm, T. (2011) Lyapunov equations, energy functionals, and model order reduction of bilinear and stochastic systems. SIAM J. Control Optim., 49, 686–711.
Bunse-Gerstner, A., Kubalińska, D., Vossen, G. & Wilczek, D. (2012) $$h_2$$-norm optimal model reduction for large scale discrete dynamical MIMO systems. J. Comput. Appl. Math., 233, 1202–1216.
D’Alessandro, P., Isidori, A. & Ruberti, A. (1974) Realization and structure theory of bilinear dynamical systems. SIAM J. Control, 12, 517–535.
Dorissen, H. T. (1989) Canonical forms for bilinear systems. Syst. Control Lett., 13, 153–160.
Druskin, V. & Simoncini, V. (2011) Adaptive rational Krylov subspaces for large-scale dynamical systems. Syst. Control Lett., 60, 546–560.
Gao, H. J., Lam, J. & Wang, C. H. (2006) Model simplification for switched hybrid systems. Syst. Control Lett., 55, 1015–1021.
Gugercin, S., Antoulas, A. C. & Beattie, C. (2008) $$H_2$$ model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl., 30, 609–638.
Ibrir, S. & Bettayeb, M. (2014) Model reduction of a class of nonlinear systems: a convex-optimization approach. IMA J. Math. Control Inform., 31, 519–531.
Imran, M. & Ghafoor, A. (2016) Model reduction of generalized non-singular systems using limited frequency interval gramians. IMA J. Math. Control Inform., 32, 333–347.
Jazlan, A., Sreeram, V., Shaker, H. R. & Togneri, R. (2016) Frequency interval balanced truncation of discrete-time bilinear systems. Cogent Eng., 3, 1203082.
Jazlan, A., Sreeram, V., Shaker, H. R., Togneri, R. & Minh, H. B. (2017) Frequency interval cross gramians for linear and bilinear systems. Asian J. Control, 19, 22–34.
Jiang, Y. L. (2010) Model Order Reduction Methods. Beijing, China: Science Press.
Jiang, Y. L. & Chen, H. B. (2012) Time domain model order reduction of general orthogonal polynomials for linear input-output systems. IEEE Trans. Automat. Contr., 57, 330–343.
Jiang, Y. L. & Xu, K. L. (2017) $$H_2$$ optimal reduced models of general MIMO LTI systems via the cross Gramian on the Stiefel manifold. J. Franklin Inst., 354, 3210–3224.
Knockaert, L., Dhaene, T., Ferranti, F. & Zutter, D. D. (2011) Model order reduction with preservation of passivity, non-expansivity and Markov moments. Syst. Control Lett., 61, 53–61.
Lee, H. J., Chu, C. C. & Feng, W. S. (2006) An adaptive-order rational Arnoldi method for model-order reductions of linear time-invariant systems. Linear Alg. Appl., 415, 235–261.
Lin, Y. Q., Liang, B. & Wei, Y. M. (2009) Order reduction of bilinear MIMO dynamical systems using new block Krylov subspaces. Comput. Math. Appl., 58, 1093–1102.
Lin, Y. Q., Liang, B. & Wei, Y. M. (2010) Model-order reduction of large-scale kth-order linear dynamical systems via a kth-order Arnoldi method. Int. J. Comput. Math., 87, 435–453.
Mohler, R. R. (1991) Nonlinear Systems: Dynamics and Control, vol. 1; Applications to Bilinear Control, vol. 2. New Jersey, USA: Prentice Hall.
Moore, B. C. (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Automat. Contr., 26, 17–32.
Sastry, S. (1999) Nonlinear Systems: Analysis, Stability and Control. New York, USA: Springer.
Shaker, H. R. & Tahavori, M. (2014a) Frequency-interval model reduction of bilinear systems. IEEE Trans. Automat. Contr., 59, 1948–1953.
Shaker, H. R. & Tahavori, M. (2014b) Time-interval model reduction for bilinear systems. Int. J. Control, 87, 1487–1495.
Shaker, H. R. & Tahavori, M. (2015) Control configuration selection for bilinear systems via generalised Hankel interaction index array. Int. J. Control, 88, 30–37.
Wang, X. L. & Jiang, Y. L. (2016) Model reduction of discrete-time bilinear systems by a Laguerre expansion technique. Syst. Control Lett., 40, 6650–6662.
Wei, Y. L., Qiu, J. B., Karimi, H. R. & Wang, M. (2014) Model reduction for continuous-time Markovian jump systems with incomplete statistics of mode information. Int. J. Syst. Sci., 45, 1496–1507.
Wilson, D. A. (1974) Model reduction for multivariable systems. Int. J. Control, 20, 57–64.
Wu, L. G. & Zheng, W. X. (2009) Weighted $$H_{\infty }$$ model reduction for linear switched systems with time-varying delay. Automatica, 45, 186–193.
Xu, K. L., Jiang, Y. L. & Yang, Z. (2015) $$H_2$$ order-reduction for bilinear systems based on Grassmann manifold. J. Franklin Inst., 352, 4467–4479.
Yan, W. Y. & Lam, J. (1999) An approximate approach to $$H_2$$ optimal model reduction. IEEE Trans. Automat. Contr., 44, 1341–1358.
Yuan, J. W., Jiang, Y. L. & Xiao, Z. H. (2015) Structure-preserving model order reduction by general orthogonal polynomials for integral-differential systems. J. Franklin Inst., 352, 138–154.
Zhang, L. Q. & Lam, J. (2002) On $$H_2$$ model reduction of bilinear systems. Automatica, 38, 205–216.
Zhang, L. Q., Lam, J. & Huang, B. (2003) On gramians and balanced truncation of discrete-time bilinear systems. Int. J. Control, 76, 414–427.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
