# A passivity-based observer for neural mass models

## Abstract

This paper presents a novel approach to designing a stabilizing observer for neural mass models, which can simulate distinct rhythms in electroencephalography (EEG). Owing to their structural characteristics, neural mass models can be expressed as Lurie systems. The stabilizing observer is designed by combining Lurie system theory with passivity theory. The observer matrices are constructed from solutions of a set of linear matrix inequality (LMI) conditions. Numerical simulations demonstrate the effectiveness of the results.

## 1. Introduction

State observers not only provide practical means for realizing state feedback (Beker et al., 2014) but also play important roles across science and engineering, from understanding and predicting weather (Deremble et al., 2009) to controlling dynamics in robotics (Yang et al., 2015) and neural systems (Adhyaru, 2012; Baz, 1992), most notably in brain research (Hajime et al., 2016), where sensors physically implanted in the brain cannot directly measure all the variables of interest. State estimation is therefore especially useful in neuroscience for a better understanding of the human brain. A state observer is a dynamical model of a system or process that reconstructs unmeasured or inaccessible variables from measurements of the external input and output (Surhone et al., 2013). Luenberger introduced the concept of state observers and presented a construction method for linear systems (Luenberger, 1964). Later, various observer design methods were developed for non-linear systems as well, such as the Lyapunov-like method (Kudva & Narendra, 1973; Jeong et al., 2011), the coordinate-transformation method (Marino, 1990; Nam, 1997) and the extended Luenberger observer (Zeitz, 1987).
Because non-linear systems are inherently complex, it is natural to divide them into types with specific structures in order to facilitate observer design. Lurie-type systems are typical non-linear systems that can be viewed as an interconnection of a linear system and a sector-restricted non-linearity (Leonov et al., 1996a). The classical circle criterion was employed to design observers for Lurie-type non-linear systems (Arcak & Kokotović, 2001a), and numerous modified circle-criterion observers have been constructed for this class (Arcak & Kokotović, 2001b; Fan & Arcak, 2003; Zemouche & Boutayeb, 2009). Neural mass models, which describe the activity of populations of neurons, share the same structure as Lurie-type systems (Lopes da Silva et al., 1974), so the earlier circle-criterion observers extend naturally to them. A direct extension, however, results in an infeasible linear matrix inequality (LMI) condition, which does not guarantee convergence of the estimation error to the origin; a generalized result that leads to feasible LMIs and to convergence of the estimation error to the origin was subsequently established on the basis of the circle-criterion observers (Chong et al., 2012). Using Lurie system theory and further mathematical manipulation, Liu and Miao then improved the observer design method for neural mass models (Liu et al., 2015). However, the internal stability of these observers remains somewhat unsatisfactory. As far as state observer design for the neural mass models used in brain research is concerned, there is still considerable room for improvement and for new methods; this prompted the present study. Passivity has long been a powerful tool for dealing with non-linear systems (van der Schaft, 1996; Bai et al., 2011), and it is effective for robustness issues in complex uncertain systems.
Over recent decades, constructive techniques based on passivity theory have provided useful tools for analysing and controlling a large class of non-linear systems in different areas, such as robotic manipulators (Zanchettin et al., 2015), walking robots (Li et al., 2013) and memristor-based recurrent neural networks (Rakkiyappan et al., 2014). Inspired by these advantages, we develop a new method of designing a state observer for neural mass models using passivity theory, and we demonstrate that the resulting passivity-based observer reconstructs the unmeasured variables.

**Notation.** The sets of real numbers, positive real numbers, n-dimensional real column vectors and n × m real matrices are denoted by $$\mathbb{R}$$, $$\mathbb{R}_{+}$$, $$\mathbb{R}^{n}$$ and $$\mathbb{R}^{n\times m}$$, respectively. A diagonal matrix is written diag(⋅), and the n × n identity matrix is In. The symmetric block of a symmetric matrix is denoted by ⋆, and the imaginary unit by ı. The L2 norm of a vector function x(t) is $$\|x(t)\|_{L_{2}}=\left[\int_{0}^{\infty}x^{T}(t)x(t)\,\textrm{d}t\right]^{\frac{1}{2}}$$.

## 2. Preliminaries

The aim of this paper is to design a new type of observer for neural mass models and to establish its advantages. To make the problem statement clearer, we first present some preliminaries.

### 2.1. Lurie systems

Lurie systems are a class of typical non-linear systems that can be viewed as the interconnection of a linear system and a non-linear function. Many non-linear systems can be represented in Lurie form, such as Chua's circuit (Camposcantón et al., 2014), Goodwin models (Ruoff et al., 2001) and neural mass models (Lopes da Silva et al., 1974).
Lurie systems can be expressed as
$$\dot{x}=Ax+Bf(Hx),$$ (1)
where $$x\in \mathbb{R}^{n}$$ is the state vector, $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$ and $$H\in \mathbb{R}^{m\times n}$$ are constant matrices, and $$f(\cdot ):\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}$$ is a continuous non-linear vector function whose entries fi(⋅) (i = 1, 2, ⋯, m) satisfy fi(0) ≡ 0 and the sector conditions
$$\mu_{i1}{\sigma_{i}^{2}}\leq \sigma_{i}\, f_{i}(\sigma_{i})\leq\mu_{i2}{\sigma_{i}^{2}},\quad\forall\sigma_{i}\in \mathbb{R},$$ (2)
where $$\mu _{i1},\mu _{i2}\in \mathbb{R}$$. Let μ1 = diag(μ11, μ21, ⋯, μm1), μ2 = diag(μ12, μ22, ⋯, μm2) and χ(ıω) = H(ıωIn − A)−1B. The origin x = 0 is an equilibrium point of (1). The circle criterion (Leonov et al., 1996a) is one of the classical criteria for global asymptotic stability of the origin of system (1).

**Lemma 1** (Circle criterion). Suppose that the pair (A, B) is controllable and the matrix A has no eigenvalues with zero real part. If the frequency-domain inequality
$$\textrm{Re}\left\{[\mu_{1}\chi(\imath\omega)+I_{m}]^{T}[\mu_{2}\chi(\imath\omega)+I_{m}]\right\}>0,\quad\forall\omega\in\mathbb{R}$$ (3)
is fulfilled, then the origin of system (1) is globally asymptotically stable.

By the well-known Kalman–Yakubovich–Popov (KYP) lemma (Rantzer, 1996), the frequency-domain inequality (3) can be transformed into an equivalent matrix inequality that is easy to solve.
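As a concrete illustration, the frequency-domain condition (3) can be spot-checked numerically on a grid of frequencies. The sketch below uses a toy two-dimensional Lurie system whose matrices and sector bounds are illustrative choices, not taken from the paper, and evaluates the Hermitian part of the quadratic form (using the conjugate transpose, as appropriate for complex χ).

```python
import numpy as np

# Spot-check of the circle-criterion condition (3) on a toy Lurie system
# (A, B, H and the sector bounds mu1, mu2 are illustrative, not from the paper).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz, no eigenvalues on the imaginary axis
B = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
mu1 = np.array([[0.0]])                    # sector [0, 0.5] for the single non-linearity
mu2 = np.array([[0.5]])

def chi(w):
    """Transfer matrix chi(i*w) = H (i*w*I - A)^{-1} B."""
    n = A.shape[0]
    return H @ np.linalg.solve(1j * w * np.eye(n) - A, B)

def circle_condition_holds(w):
    """Check Re{[mu1*chi + I]^H [mu2*chi + I]} > 0 at frequency w."""
    X = chi(w)
    m = X.shape[0]
    Q = (mu1 @ X + np.eye(m)).conj().T @ (mu2 @ X + np.eye(m))
    sym = (Q + Q.conj().T).real / 2        # Hermitian part carries the real-part condition
    return bool(np.all(np.linalg.eigvalsh(sym) > 0))

print(all(circle_condition_holds(w) for w in np.linspace(-50, 50, 2001)))  # → True
```

A grid check of this kind is of course only a numerical screen, not a proof; the LMI form obtained via the KYP lemma below removes the frequency sweep altogether.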
**Lemma 2** (KYP lemma). Given $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$ and $$M=M^{T}\in \mathbb{R}^{(n+m)\times (n+m)}$$, with $$\det (\imath \omega I-A)\neq 0$$ for all $$\omega \in \mathbb{R}$$ and (A, B) controllable, the following two statements are equivalent:

(i)
$$\left[\begin{array}{@{}c@{}}(\imath\omega I_{n}-A)^{-1}B\\ I\end{array}\right]^{T}M\left[\begin{array}{@{}c@{}}(\imath\omega I_{n}-A)^{-1}B\\ I\end{array}\right]\leq0,\quad\forall\omega\in \mathbb{R};$$ (4)

(ii) there exists a matrix $$P\in \mathbb{R}^{n\times n}$$ with P = PT such that
$$M+\left[\begin{array}{@{}cc@{}} A^{T}P+PA& PB\\ B^{T}P&0 \end{array}\right]\leq0.$$ (5)

The corresponding equivalence for strict inequalities holds even if (A, B) is not controllable.

**Lemma 3.** Suppose that the pair (A, B) is controllable and the matrix A has no eigenvalues with zero real part. If there exists a matrix P = PT such that the matrix inequality
$$\left[\begin{array}{@{}cc@{}} A^{T}P+PA+H^{T}\mu_{1}\mu_{2}H& PB+\frac{1}{2}H^{T}(\mu_{1}+\mu_{2})\\ \star&I_{m} \end{array}\right]>0$$ (6)
is fulfilled, then the origin of system (1) is globally asymptotically stable.

**Proof.** We rewrite (3) in the form of (4) with
$$M=\left[\begin{array}{@{}cc@{}} H^{T}\mu_{1}\mu_{2}H& \frac{1}{2}H^{T}(\mu_{1}+\mu_{2})\\ \star&I_{m} \end{array}\right].$$ (7)
According to the KYP lemma, we derive the equivalent inequality (6). This completes the proof.

It has been shown that directly applying the circle criterion to design an observer for neural mass models is not feasible (Chong et al., 2012). In this paper, we instead apply passivity theory to the observer design for the underlying model, so we now recall the definitions of passivity and output strict passivity.

### 2.2. Passivity of non-linear dynamical systems

Consider the dynamical system
$$\begin{cases} \dot{x}=f(x,u),\\ y=h(x,u), \end{cases}$$ (8)
where $$x\in \mathbb{R}^{n}$$ is the state vector, $$u\in \mathbb{R}^{p}$$ is the input, $$y\in \mathbb{R}^{m}$$ is the output, and $$f(x,u)\in \mathbb{R}^{n}$$ and $$h(x,u)\in \mathbb{R}^{m}$$ are functions of x and u.

**Definition 1.** System (8) is said to be passive if there exists a function S(x) ≥ 0 such that
$$\dot{S}\leq-W(x)+u^{T}y$$ (9)
is fulfilled for some positive semidefinite function W(x). System (8) is said to be output strictly passive if there exists a function S(x) ≥ 0 such that
$$\dot{S}\leq-y^{T}\varphi(y)+u^{T}y$$ (10)
is fulfilled, where yTφ(y) > 0 for all y ≠ 0. A function S satisfying (9) or (10) is called a storage function.

## 3. Problem formulation

Consider neural mass models with the structure
$$\begin{cases} \dot{x}=Ax+Bf(Hx)+B_{1}u,\\ z=Cx+D\omega, \end{cases}$$ (11)
where $$x\in \mathbb{R}^{n}$$ is the state vector; $$u\in \mathbb{R}^{p}$$ is the input; $$z\in \mathbb{R}^{q}$$ is the measurement output; $$\omega \in \mathbb{R}^{s}$$ is the measurement noise; $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$, $$B_{1}\in \mathbb{R}^{n\times p}$$, $$C\in \mathbb{R}^{q\times n}$$, $$H\in \mathbb{R}^{m\times n}$$ and $$D\in \mathbb{R}^{q\times s}$$ are constant matrices; and $$f(\cdot ):\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}$$ is a non-linear vector function. Each entry of f(Hx) has the form
$$f_{i}=f_{i}\left(\sum_{j=1}^{n} h_{ij}x_{j}\right)=f_{i}(h_{i}x),\quad i=1,2,\cdots,m,$$ (12)
and its derivative satisfies
$$0\leq f_{i}^{\prime}(\sigma)\leq\delta_{i},\quad \forall \sigma\in \mathbb{R},\ i=1,2,\cdots, m,$$ (13)
where hi = [hi1 ⋯ hin] and δi ≥ 0. Such models originate from the seminal work of Lopes da Silva et al. (1974) and have the same mathematical structure as (11).
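Definition 1 can be made concrete on the textbook first-order example ẋ = −x + u, y = x (an illustrative system, not the neural mass model): with storage function S(x) = x²/2 one gets Ṡ = x·ẋ = −x² + ux = −yφ(y) + uy with φ(y) = y, so the system is output strictly passive. A pointwise numerical check of inequality (10):

```python
import numpy as np

# Definition 1 on the textbook example xdot = -x + u, y = x (an illustrative
# system, not the neural mass model). With storage function S(x) = x^2/2,
#   Sdot = x * xdot = -x^2 + u*x = -y*phi(y) + u*y,   phi(y) = y,
# so inequality (10) holds with equality and the system is output strictly passive.
def satisfies_output_strict_passivity(x, u):
    y = x                                    # output map y = h(x, u) = x
    S_dot = x * (-x + u)                     # d/dt of S = x^2/2 along xdot = -x + u
    return S_dot <= -y * y + u * y + 1e-12   # inequality (10) with phi(y) = y

rng = np.random.default_rng(0)
print(all(satisfies_output_strict_passivity(x, u)
          for x, u in rng.normal(size=(1000, 2))))  # → True
```

Here the storage inequality holds with equality at every point, which is the sharpest possible case of output strict passivity.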
We apply an observer of the form
$$\begin{cases} \dot{\hat{x}}=A\hat{x}+Bf[H\hat{x}+K(C\hat{x}-z)]+L(C\hat{x}-z)+B_{1}u,\\ \hat{z}=C\hat{x}, \end{cases}$$ (14)
where $$\hat{x}$$ is the state estimate, $$\hat{z}$$ is the output estimate, and $$K\in \mathbb{R}^{m\times q}$$ and $$L\in \mathbb{R}^{n\times q}$$ are the observer matrices to be designed. Defining the state observation error $$e=\hat{x}-x$$ and the output observation error $$z_{e}=\hat{z}-z$$, we obtain the observation error dynamics
$$\begin{cases} \dot{e}=(A+LC)e+B\varphi(e,t)-LD\omega,\\ z_{e}=Ce-D\omega, \end{cases}$$ (15)
where φ(e, t) = f(V) − f(U), $$V=H\hat{x}+K(C\hat{x}-z)$$ and U = Hx. The constraint (13) implies that each entry of φ(e, t) satisfies
$$0\leq\frac{\varphi_{i}(e,t)}{v_{i}-u_{i}}=\frac{f_{i}(v_{i})-f_{i}(u_{i})}{v_{i}-u_{i}}\leq\delta_{i},\quad\forall t\in \mathbb{R}_{+},\ \forall e\in \mathbb{R}^{n},\ i=1,2,\cdots,m.$$ (16)

**Lemma 4.** The observation error dynamics (15) are output strictly passive if there exists a continuously differentiable positive semidefinite function S such that
$$\dot{S}\leq-{z_{e}^{T}}\rho{(z_{e})}+\omega^{T}z_{e},\quad{z_{e}^{T}}\rho{(z_{e})}>0,\ \forall z_{e}\neq 0.$$ (17)
Here S is called the storage function. We use the passivity property to determine the observer matrices K and L.

## 4. Main result

In this section, we provide the method for designing the observer matrices K and L.
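For implementation purposes, the observer (14) translates directly into a right-hand-side function. The sketch below is a structural transcription only: the matrices and the non-linearity f are placeholders to be instantiated for a concrete model.

```python
import numpy as np

# Direct transcription of the observer (14). All arguments are placeholders:
# A, B, B1, C, H are the model matrices of (11), K and L the observer gains,
# f the non-linear vector function, z the current measurement, u the input.
def observer_rhs(x_hat, z, u, A, B, B1, C, H, K, L, f):
    """xhat_dot = A xhat + B f(H xhat + K(C xhat - z)) + L(C xhat - z) + B1 u."""
    innovation = C @ x_hat - z   # output estimation error C xhat - z
    return (A @ x_hat
            + B @ f(H @ x_hat + K @ innovation)
            + L @ innovation
            + B1 @ u)
```

Note that the innovation enters twice: through the output-injection gain L and, via K, inside the argument of the non-linearity, which is what distinguishes (14) from a plain Luenberger observer.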
**Theorem 1.** If there exist positive definite matrices P and Γ, a positive definite diagonal matrix R = diag(r1, r2, ⋯, rm), and matrices K and L such that
$$\Sigma=\left[\begin{array}{@{}ccc@{}} \Sigma_{1}& \Sigma_{2}&-PLD-C^{T}\Gamma D-\frac{1}{2}C^{T}\\ \star&-R&-\frac{1}{2}R\Delta KD\\ \star&\star&D^{T}\Gamma D+\frac{1}{2}(D+D^{T}) \end{array}\right]\leq0,$$ (18)
then the error dynamics (15) are output strictly passive and, moreover, the origin of system (15) with ω = 0 is globally asymptotically stable, where
\begin{align} \Sigma_{1}=&\,P(A+LC)+(A+LC)^{T}P+C^{T}\Gamma C,\nonumber\\ \Sigma_{2}=&\,PB+\frac{1}{2}(H+KC)^{T}\Delta R. \end{align} (19)

**Proof.** Construct the quadratic function S = eTPe, which is positive definite because P is positive definite. Calculating the time derivative of S gives
$$\dot S=\dot{e}^{T}Pe+e^{T}P\dot{e} =[(A+LC)e+B\varphi(e,t)-LD\omega]^{T}Pe+e^{T}P[(A+LC)e+B\varphi(e,t)-LD\omega].$$ (20)
In view of (16), it follows that
$$\sum_{i=1}^{m}r_{i}\varphi_{i}(e,t)[\varphi_{i}(e,t)-\delta_{i}(v_{i}-u_{i})]\leq0,$$ (21)
where ri ≥ 0. Formula (21) can be expressed in the equivalent vector form
$$\varphi^{T}(e,t)R\varphi(e,t)-\varphi^{T}(e,t)R\Delta(V-U)\leq0,$$ (22)
where φ(e, t) = [φ1(e, t), φ2(e, t), ⋯, φm(e, t)]T, Δ = diag(δ1, δ2, ⋯, δm) and V − U = (H + KC)e − KDω.
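The passage from the componentwise inequality (21) to the vector form (22) is pure bookkeeping, which can be confirmed numerically on random data (the dimensions and values below are arbitrary illustrative choices):

```python
import numpy as np

# (21) vs. (22): the scalar sum and its vector form are the same expression.
# Random illustrative data, with r_i >= 0 and delta_i >= 0 as required.
rng = np.random.default_rng(2)
m = 3
r = rng.uniform(0.0, 2.0, m);     R = np.diag(r)        # R = diag(r_1, ..., r_m)
delta = rng.uniform(0.0, 1.0, m); Delta = np.diag(delta)  # Delta = diag(delta_i)
phi = rng.normal(size=m)                                  # phi(e, t)
vu = rng.normal(size=m)                                   # V - U

scalar_form = sum(r[i] * phi[i] * (phi[i] - delta[i] * vu[i]) for i in range(m))
vector_form = phi @ R @ phi - phi @ R @ Delta @ vu
print(np.isclose(scalar_form, vector_form))  # → True
```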
Combining (20) and (22), we have
\begin{align} \dot S\leq&\,[(A+LC)e+B\varphi(e,t)-LD\omega]^{T}Pe+e^{T}P[(A+LC)e+B\varphi(e,t)-LD\omega]\nonumber\\ &\,-\varphi^{T}(e,t)R\varphi(e,t)+\varphi^{T}(e,t)R\Delta[(H+KC)e-KD\omega]\nonumber\\ =&\,\chi^{T}\Sigma\chi-\chi^{T}\left[\begin{array}{@{}ccc@{}}C^{T}\Gamma C & 0 & -C^{T}\Gamma D-\frac{1}{2}C^{T}\\ \star & 0 & 0\\ \star & \star & D^{T}\Gamma D+\frac{1}{2}(D+D^{T})\end{array}\right]\chi\nonumber\\ \leq&\,\omega^{T}(Ce-D\omega)-(Ce-D\omega)^{T}\Gamma(Ce-D\omega)\nonumber\\ =&\,-{z_{e}^{T}}\Gamma z_{e}+\omega^{T}z_{e}, \end{align} (23)
where χ = [eT φT(e, t) ωT]T. The derivative of S thus satisfies (17) with ρ(ze) = Γze and $${z_{e}^{T}}\Gamma z_{e}>0$$. Hence S is a storage function and the error dynamics (15) are output strictly passive. When ω = 0, (23) reduces to
$$\dot S\leq -{z_{e}}^{T}\Gamma z_{e}.$$ (24)
Since Γ is positive definite, $$\dot S$$ is negative definite, so the storage function S is also a Lyapunov function. Thus, by Lyapunov stability theory, the origin of the error dynamics (15) is globally asymptotically stable. This completes the proof.

**Remark 1.** Applying a completion-of-squares inequality to (23) yields
\begin{align} \dot{S}\leq&\,-\lambda_{\min}{z_{e}}^{T}z_{e}+\omega^{T}z_{e}\nonumber\\ =&\,-\frac{1}{2\lambda_{\min}}(\omega-\lambda_{\min}z_{e})^{T}(\omega-\lambda_{\min}z_{e})+\frac{1}{2\lambda_{\min}}\omega^{T}\omega-\frac{\lambda_{\min}}{2}{z_{e}}^{T}z_{e}\nonumber\\ \leq&\,\frac{1}{2\lambda_{\min}}\omega^{T}\omega-\frac{\lambda_{\min}}{2}{z_{e}}^{T}z_{e}, \end{align} (25)
where $$\lambda _{\min }$$ is the minimal eigenvalue of Γ. Integrating (25) gives
$$\|z_{e}\|_{L_{2}}\leq\rho\|\omega\|_{L_{2}}+\beta,$$ (26)
where $$\rho =\frac{1}{\lambda _{\min }}$$ is the disturbance gain from ω to ze and $$\beta =\sqrt{\frac{2}{\lambda _{\min }}S(e(0))}$$.
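The completion-of-squares step in Remark 1 is an exact identity; the inequality only arises from dropping the non-positive square. A numerical check of the identity with random vectors, using for concreteness the value λ_min = 0.0033 reported later in Section 5:

```python
import numpy as np

# Completion of squares in Remark 1:
# -lam*z'z + w'z == -(1/(2lam))(w - lam z)'(w - lam z) + (1/(2lam)) w'w - (lam/2) z'z
lam = 0.0033                      # lambda_min from the example in Section 5
rng = np.random.default_rng(3)
ok = True
for _ in range(200):
    z, w = rng.normal(size=3), rng.normal(size=3)
    lhs = -lam * (z @ z) + w @ z
    rhs = (-(w - lam * z) @ (w - lam * z) / (2 * lam)
           + (w @ w) / (2 * lam)
           - lam / 2 * (z @ z))
    ok = ok and bool(np.isclose(lhs, rhs))
print(ok)  # → True
```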
By setting $$P_{L}=PL$$ and $$R_{K}=RK$$, (18) can be transformed into an LMI, and we have the following result.

**Theorem 2.** If there exist positive definite matrices P and Γ, a positive definite diagonal matrix R = diag(r1, r2, ⋯, rm), and matrices RK and PL such that
$$\left[\begin{array}{@{}ccc@{}} \Omega_{1}&\Omega_{2}&-P_{L}D-C^{T}\Gamma D-\frac{1}{2}C^{T}\\ \star&-R&-\frac{1}{2}\Delta{R_{K}}D\\ \star&\star&D^{T}\Gamma D+\frac{1}{2}(D+D^{T}) \end{array}\right]\leq 0,$$ (27)
then, with K = R−1RK and L = P−1PL, the error dynamics (15) are output strictly passive and, moreover, the origin of system (15) with ω = 0 is globally asymptotically stable, where
\begin{align} \Omega_{1}=&\,A^{T}P+C^{T}{P_{L}}^{T}+PA+{P_{L}}C+C^{T}\Gamma C,\nonumber\\ \Omega_{2}=&\,PB+\frac{1}{2}H^{T}\Delta R+\frac{1}{2}C^{T}{R_{K}}^{T}\Delta. \end{align} (28)

**Remark 2.** Theorem 2 provides a method of determining the observer matrices K and L such that the error dynamics (15) are output strictly passive and the origin of system (15) with ω = 0 is globally asymptotically stable. In (27), A, B, C, H and Δ are given, whilst P, R, PL, RK and Γ are unknown; thus (27) is an LMI. If (27) is feasible, then P, R, PL, RK and Γ can be found using the LMI toolbox in MATLAB (Gahinet et al., 1994), and the observer matrices follow as K = R−1RK and L = P−1PL.

## 5. Simulation

In this section, we use the following neural mass model as an example to show the effectiveness of the proposed method:
$$\begin{cases} \dot{x}_{1}(t)=x_{2}(t),\\ \dot{x}_{2}(t)=\theta_{A}aS[x_{3}(t)-x_{5}(t)]-2ax_{2}(t)-a^{2}x_{1}(t),\\ \dot{x}_{3}(t)=x_{4}(t),\\ \dot{x}_{4}(t)=\theta_{A}a\{u(t)+C_{2}S[C_{1}x_{1}(t)]\}-2ax_{4}(t)-a^{2}x_{3}(t),\\ \dot{x}_{5}(t)=x_{6}(t),\\ \dot{x}_{6}(t)=\theta_{B}b\{C_{4}S[C_{3}x_{1}(t)]\}-2bx_{6}(t)-b^{2}x_{5}(t),\\ \dot{x}_{7}(t)=x_{8}(t),\\ \dot{x}_{8}(t)=\theta_{A}a_{d}S[x_{3}(t)-x_{5}(t)]-2a_{d}x_{8}(t)-{a_{d}^{2}}x_{7}(t),\\ z(t)=x_{3}(t)-x_{5}(t)+D\omega(t), \end{cases}$$ (29)
where x1, x3, x5, x7 are the mean membrane postsynaptic potentials of the neural populations and x2, x4, x6, x8 are their time derivatives. The input u(t) is the afferent influence from neighbouring or more distant populations, represented by a pulse density that can be an arbitrary function, including white noise (Jansen & Rit, 1995). It has been reported that the neural mass model can produce signals similar to the spontaneous EEGs recorded with electrodes from neocortical structures during interictal periods when u(t) is Gaussian white noise with appropriate mean and variance (Wendling et al., 2000). In this paper, u(t) is modelled as Gaussian white noise with mean 220 and standard deviation 30, and the output x3 − x5 is used to simulate EEG signals.
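For readers who wish to reproduce the simulation, the drift of model (29) with the standard parameter values of Table 1 transcribes directly into code. This is a sketch of the state equations only; the measurement equation and the noise realization of u(t) are omitted.

```python
import numpy as np

# Right-hand side of the neural mass model (29), with the standard parameter
# values of Table 1 (u is the pulse-density input; the measurement equation
# z = x3 - x5 + D*omega is omitted here).
tA, tB, a, b, ad = 3.25, 22.0, 100.0, 50.0, 33.0
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75
e0, v0, r = 2.5, 6.0, 0.56

def S(v):
    """Sigmoid (30): firing rate as a function of average membrane potential."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def model_rhs(x, u):
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return np.array([
        x2,
        tA * a * S(x3 - x5) - 2 * a * x2 - a**2 * x1,
        x4,
        tA * a * (u + C2 * S(C1 * x1)) - 2 * a * x4 - a**2 * x3,
        x6,
        tB * b * C4 * S(C3 * x1) - 2 * b * x6 - b**2 * x5,
        x8,
        tA * ad * S(x3 - x5) - 2 * ad * x8 - ad**2 * x7,
    ])
```

The sigmoid's slope is maximal at v = v0, where it equals e0·r/2, consistent with the sector bound δ = e0·r/2 used in (13).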
Corresponding to the original model (29), the system can be written in the form (11) with
\begin{align} A=&\,\textrm{diag}(A_{1},\ldots,A_{4}),\quad A_{i}=\left[\begin{array}{@{}cc@{}}0&1\\ -{k_{i}^{2}}&-2k_{i}\end{array}\right],\quad k_{1}=k_{2}=a,\ k_{3}=b,\ k_{4}=a_{d},\nonumber\\ B=&\,\left[\begin{array}{@{}cccccccc@{}}0&\theta_{A}a&0&0&0&0&0&\theta_{A}a_{d}\\ 0&0&0&\theta_{A}aC_{2}&0&0&0&0\\ 0&0&0&0&0&\theta_{B}bC_{4}&0&0\end{array}\right]^{T},\nonumber\\ B_{1}=&\,\left[\begin{array}{@{}cccccccc@{}}0& 0& 0&\theta_{A}a& 0& 0& 0& 0\end{array}\right]^{T},\nonumber\\ C=&\,\left[\begin{array}{@{}cccccccc@{}}0&0&1&0&-1&0&0&0\end{array}\right],\quad D=-150,\nonumber\\ H=&\,\left[\begin{array}{@{}cccccccc@{}}0&0&1&0&-1&0&0&0\\ C_{1}&0&0&0&0&0&0&0\\ C_{3}&0&0&0&0&0&0&0\end{array}\right],\quad f(\cdot)=\left[\begin{array}{@{}c@{}} S(\cdot)\\ S(\cdot)\\ S(\cdot)\end{array}\right].\nonumber \end{align}
The notation S(⋅) denotes the non-linear sigmoid function
$$S(v)=\frac{2e_{0}}{1+e^{r(v_{0}-v)}},\quad \forall v\in\mathbb{R},$$ (30)
which satisfies (13) with $$\delta =\frac{1}{2}e_{0}r$$. Table 1 presents the physiological meaning and standard value of each model parameter.

Fig. 1. The time evolutions of the states x1 − x8 and their estimates.

Table 1. Physiological meanings and standard values of the model parameters (Wendling et al., 2000)

| Parameter | Physiological meaning | Standard value |
| --- | --- | --- |
| θA, θB | average gains of the excitatory and inhibitory synapses | θA = 3.25 mV, θB = 22 mV |
| a, b | membrane transfer and dendritic tree average time delays | a = 100 s−1, b = 50 s−1 |
| ad | average contact time between the neural populations | ad = 33 s−1 |
| C1, C2 | averages of synaptic connections in the excitatory feedback loop | C1 = 135, C2 = 108 |
| C3, C4 | averages of synaptic connections in the inhibitory feedback loop | C3 = C4 = 33.75 |
| e0 | maximum firing rate | e0 = 2.5 s−1 |
| v0 | postsynaptic potential (PSP) corresponding to the firing rate | v0 = 6 mV |
| r | bending degree (steepness) of the sigmoid function | r = 0.56 mV−1 |
| v | presynaptic average membrane potential | — |

The conditions in Theorem 2 are solved using the LMI toolbox in MATLAB, which yields
\begin{align} K=&\,\left[-0.2693\ \ \ 0.0490\ \ \ -0.0297\right]^{T},\nonumber\\ L=&\,10^{7}\ast\left[0.0160\ \ \ -0.0000\ \ \ -6.1533\ \ \ -0.013\ \ \ 3.1841\ \ \ 0.0169\ \ \ 0.0028\ \ \ -0.0001\right]^{T}\nonumber \end{align}
and $$\lambda _{\min }=0.0033$$.
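The block matrices above can also be assembled programmatically. The sketch below builds A, B, B1, C, D and H exactly as displayed, with the standard parameter values, so that the structure (each A_i a critically damped second-order block with double pole −k_i) can be inspected.

```python
import numpy as np

# Assemble the matrices of the state-space form (11) for the model (29),
# using the standard parameter values from Table 1.
tA, tB, a, b, ad = 3.25, 22.0, 100.0, 50.0, 33.0
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75

def second_order_block(k):
    """A_i = [[0, 1], [-k^2, -2k]]: critically damped block with double pole -k."""
    return np.array([[0.0, 1.0], [-k**2, -2.0 * k]])

A = np.zeros((8, 8))
for i, k in enumerate([a, a, b, ad]):       # k_1 = k_2 = a, k_3 = b, k_4 = a_d
    A[2*i:2*i+2, 2*i:2*i+2] = second_order_block(k)

B = np.zeros((8, 3))
B[1, 0], B[7, 0] = tA * a, tA * ad          # first column feeds x2dot and x8dot
B[3, 1] = tA * a * C2                       # second column feeds x4dot
B[5, 2] = tB * b * C4                       # third column feeds x6dot

B1 = np.zeros((8, 1)); B1[3, 0] = tA * a
C = np.zeros((1, 8)); C[0, 2], C[0, 4] = 1.0, -1.0
D = np.array([[-150.0]])
H = np.zeros((3, 8))
H[0, 2], H[0, 4] = 1.0, -1.0                # argument x3 - x5 of the first sigmoid
H[1, 0], H[2, 0] = C1, C3                   # arguments C1*x1 and C3*x1

print(A.shape, B.shape, H.shape)  # → (8, 8) (8, 3) (3, 8)
```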
According to Theorem 2, the observer with the computed matrices K and L can reconstruct the states of the model, and the following simulations confirm this. In the simulations, we set x(0) = [1 0.5 1 0.5 1 0.5 1 0.5]T, $$\hat{x}(0)={[0\ 0\ 0\ 0\ 0\ 0\ 0\ 0]}^{T}$$ and ω = 0. Figure 1 presents the time evolutions of the states of the model (black line) and of the passivity-based observer (red line), and Fig. 2 shows the time evolution of the state observation error. The error eventually converges to zero: all the states of the passivity-based observer converge well to the states of the model. Observers were previously designed for neural mass models using the classical circle criterion in Chong et al. (2012) and using Lurie system theory and mathematical manipulation in Liu et al. (2015). Taking the same simulation parameters, we obtain no feasible solutions when solving Theorem 1 of Chong et al. (2012) or of Liu et al. (2015). This illustrates that, for the models considered, the proposed observer outperforms the circle-criterion observer of Chong et al. (2012) and the observer of Liu et al. (2015).

Fig. 2. The observer error $$e=\hat{x}-x$$.

## 6. Conclusions

We have provided a new method for designing a passivity-based stabilizing observer for neural mass models. The observer matrices are obtained using the LMI toolbox in MATLAB. We have demonstrated that the proposed observer reconstructs the unmeasured variables from the measurement, and that, for the models considered, it outperforms the circle-criterion observer of Chong et al. (2012) and the observer of Liu et al. (2015).
The proposed method can also be applied to stability analysis or observer design for other Lurie-type systems, and it can be extended to complex networks composed of Lurie-type nodes, for example multiple coupled neural mass models. In addition, passivity theory is effective for robustness issues in complex uncertain systems and for stability issues in time-delay systems, so the proposed method can also be extended to robust stability analysis or observer design for complex networks with uncertainties or time delays.

## Funding

National Science Foundation of China (61473245 and 61004050); Natural Science Foundation of Hebei (F2017203218); Natural Science Foundation for Young Scientists of Hebei Province (F2014203099); Independent Research Program for Young Teachers of Yanshan University (13LGA006).

## References

Adhyaru, D. M. (2012) State observer design for nonlinear systems using neural network. Appl. Soft Comput., 12, 2530–2537.

Arcak, M. & Kokotović, P. (2001a) Nonlinear observers: a circle criterion design and robustness analysis. Automatica, 37, 1923–1930.

Arcak, M. & Kokotović, P. (2001b) Observer-based control of systems with slope-restricted nonlinearities. IEEE Trans. Autom. Control, 46, 1146–1150.

Bai, H., Arcak, M. & Wen, J. (2011) Cooperative Control Design: A Systematic, Passivity-Based Approach. New York: Springer, pp. 13–42.

Baz, A. (1992) A neural observer for dynamic systems. J. Sound Vib., 152, 227–243.

Beker, M. G., Bertolini, A., van den Brand, J. F. J., Bulten, H. J., Hennes, E. & Rabeling, D. S. (2014) State observers and Kalman filtering for high performance vibration isolation systems. Rev. Sci. Instrum., 85, 034501.

Camposcantón, I., Seguracisneros, O. A., Balderasnavarro, R. E. & Camposcantón, E. (2014) Chua's circuit and its characterization as a filter. Eur. J. Phys., 35, 065018.

Chong, M., Postoyan, R., Nešić, D., Kuhlmann, L. & Varsavsky, A. (2012) A robust circle criterion observer with application to neural mass models. Automatica, 48, 2986–2989.

Deremble, B., D'Andrea, F. & Ghil, M. (2009) Fixed points, stable manifolds, weather regimes, and their predictability. Chaos, 19, 043109.

Fan, X. Z. & Arcak, M. (2003) Observer design for systems with multivariable monotone nonlinearities. Syst. Control Lett., 50, 319–330.

Gahinet, P. M., Nemirovskii, A., Laub, A. J. & Chilali, M. (1994) The LMI control toolbox. Proc. IEEE Conf. Decision Control, 3, 2038–2041.

Hajime, H., Hiroaki, T., Kengo, N., Makito, H., Toshihiko, N., Kiyotaka, S., Takashi, T. & Jun, O. (2016) Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication. Jpn. J. Appl. Phys., 55, 04EM03.

Jansen, B. H. & Rit, V. G. (1995) Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol. Cybern., 73, 357–366.

Jeong, C. S., Yaz, E. E. & Yaz, Y. I. (2011) Lyapunov-based design of resilient mixed MSE-dissipative-type state observers for a class of nonlinear systems and general performance criteria. Int. J. Syst. Sci., 42, 789–800.

Kudva, P. & Narendra, K. (1973) Synthesis of an adaptive observer using Lyapunov's direct method. Int. J. Control, 18, 1201–1210.

Leonov, G. A., Ponomarenko, D. V. & Smirnova, V. B. (1996a) Frequency-Domain Methods for Nonlinear Analysis: Theory and Applications. Singapore: World Scientific.

Leonov, G. A., Ponomarenko, D. V. & Smirnova, V. B. (1996b) Frequency-Domain Methods for Nonlinear Analysis: Theory and Applications. Singapore: World Scientific, pp. 46–49.

Li, Q. D., Tang, S. & Yang, X. S. (2013) New bifurcations in the simplest passive walking model. Chaos, 23, 043110.

Liu, X., Miao, D. K., Gao, Q. & Xu, S. Y. (2015) A novel observer design method for neural mass models. Chin. Phys. B, 24, 090207.

Lopes da Silva, F., Hoeks, A., Smits, H. & Zetterberg, L. H. (1974) Model of brain rhythmic activity. Kybernetik, 15, 27–37.

Luenberger, D. G. (1964) Observing the state of a linear system. IEEE Trans. Military Electron., 8, 74–80.

Marino, R. (1990) Adaptive observers for single-output nonlinear systems. IEEE Trans. Autom. Control, 35, 1054–1058.

Nam, K. (1997) An approximate nonlinear observer with polynomial coordinate transformation maps. IEEE Trans. Autom. Control, 42, 522–527.

Rakkiyappan, R., Chandrasekar, A., Laksmanan, S. & Ju, H. P. (2014) State estimation of memristor-based recurrent neural networks with time-varying delays based on passivity theory. Complexity, 19, 32–43.

Rantzer, A. (1996) On the Kalman–Yakubovich–Popov lemma. Syst. Control Lett., 28, 7–10.

Ruoff, P., Vinsjevik, M., Monnerjahn, C. & Rensing, L. (2001) The Goodwin model: simulating the effect of light pulses on the circadian sporulation rhythm of Neurospora crassa. J. Theor. Biol., 209, 29–42.

Surhone, L. M., Tennoe, M. T. & Henssonow, S. F. (2013) State Observer. Germany: Betascript Publishing.

Van der Schaft, A. J. (1996) L2-Gain and Passivity Techniques in Nonlinear Control. New York: Springer.

Wendling, F., Bellanger, J. J., Bartolomei, F. & Chauvel, P. (2000) Relevance of nonlinear lumped-parameter models in the analysis of depth-EEG epileptic signals. Biol. Cybern., 83, 367–378.

Yang, H. J., Fan, X. Z., Xia, Y. Q. & Hua, C. C. (2015) Robust tracking control for wheeled mobile robot based on extended state observer. Adv. Robot., 30, 68–78.

Zanchettin, A. M., Lacevic, B. & Rocco, P. (2015) Passivity-based control of robotic manipulators for safe cooperation with humans. Int. J. Control, 88, 429–439.

Zeitz, M. (1987) The extended Luenberger observer for nonlinear systems. Syst. Control Lett., 9, 149–156.

Zemouche, A. & Boutayeb, M. (2009) A unified H∞ adaptive observer synthesis method for a class of systems with both Lipschitz and monotone nonlinearities. Syst. Control Lett., 58, 282–288.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

*IMA Journal of Mathematical Control and Information*, Oxford University Press.

# A passivity-based observer for neural mass models

Advance Article – Feb 5, 2018
11 pages

Publisher: Oxford University Press
© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
ISSN: 0265-0754
eISSN: 1471-6887
DOI: 10.1093/imamci/dny001

### Abstract

This paper presents a novel approach to designing a stabilizing observer for neural mass models, which can simulate distinct rhythms in electroencephalography (EEG). Owing to their structural characteristics, neural mass models can be expressed as Lurie systems. The stabilizing observer is designed by using Lurie system theory and passivity theory. The observer matrices are constructed from the solutions of linear matrix inequality (LMI) conditions. Numerical simulations demonstrate the effectiveness of the results.

### 1. Introduction

State observers not only provide practical possibilities for the realization of state feedback (Beker et al., 2014), but also play important roles in science and engineering, including understanding and predicting weather (Beker et al., 2009) and controlling dynamics from robotics (Yang et al., 2015) to neural systems (Adhyaru, 2012; Baz, 1992), most notably in brain research (Hajime, 2016), where sensors physically implanted in the brain cannot directly measure all the variables of interest. State estimation is therefore especially useful in neuroscience for a better understanding of the human brain. A state observer is a dynamical model of a system or process that reconstructs unmeasured or inaccessible variables from measurements of the external input and output (Surhone et al., 2013). Luenberger initially put forward the concept of state observers and presented a construction method for state observers of linear systems (Luenberger, 1964). In later works, various observer design methods were developed for non-linear systems as well, such as the Lyapunov-like method (Kudva & Narendra, 1973; Jeong et al., 2011), the coordinate transformation method (Marino, 1990; Nam, 1997) and the extended Luenberger observer (Zeitz, 1987).
To facilitate the design of observers, it is natural to divide non-linear systems into different types with specific structures, owing to the inherent complexity of non-linear systems. Lurie-type systems are typical non-linear systems that can be viewed as an interconnection of a linear system and a sector-restricted non-linearity (Leonov et al., 2001). The classical circle criterion was employed to design observers for Lurie-type non-linear systems (Arcak & Kokotović, 2001). Numerous modified circle criterion observers were also constructed for this type of non-linear system (Arcak & Kokotović, 2001; Fan & Arcak, 2003; Zemouche & Boutayeb, 2009). Neural mass models, which describe the activity of populations of neurons, share the same structure as Lurie-type systems (Lopes da Silva et al., 1974). The earlier circle criterion observers were naturally extended to neural mass models. A direct extension of these circle criterion observers to neural mass models results in an infeasible linear matrix inequality (LMI) condition (Chong et al., 2012), which does not guarantee the convergence of the estimation error to the origin. A generalized result that leads to feasible LMIs and the convergence of the estimation error to the origin was subsequently established on the basis of the circle criterion observers (Chong et al., 2012). By using Lurie system theory and algebraic manipulation, Liu and Miao further improved the observer design method for neural mass models (Liu et al., 2015). However, the internal stability of these observers is still somewhat unsatisfactory. As far as state observer design for the neural mass models involved in brain research is concerned, there is still considerable room for improvement and for developing new methods. This prompted us to carry out this study. Passivity has long been a powerful tool for dealing with non-linear systems (Van der Schaft, 1996; Bai et al., 2011). It is effective for solving robustness issues of complex uncertain systems.
During the last decades, constructive techniques based on passivity theory have provided useful tools for analysing and controlling a large class of non-linear systems from different areas, such as robotic manipulators (Zanchettin et al., 2015), walking robots (Li et al., 2013) and memristor-based recurrent neural networks (Rakkiyappan et al., 2014). Inspired by the advantages of the passivity property, we develop a new method of designing a state observer for neural mass models by using passivity theory. We will demonstrate that the passivity-based stabilizing observer can realize the reconstruction of unmeasured variables.

Notation. The sets of real numbers, positive real numbers, n-dimensional real column vectors and n × m-dimensional real matrices are represented by $$\mathbb{R}$$, $$\mathbb{R}_{+}$$, $$\mathbb{R}^{n}$$ and $$\mathbb{R}^{n\times m}$$, respectively. A diagonal matrix is denoted as diag(⋅). The n × n identity matrix is represented as $$I_{n}$$. The symmetric block component of a symmetric matrix is denoted by ⋆. The imaginary unit is denoted by ı. The L2 norm of a vector function x(t) is represented as $$\|x(t)\|_{L_{2}}=\left [\int _{0}^{\infty }x^{T}(t)x(t)\, \textrm{d}t\right ]^{\frac{1}{2}}$$.

### 2. Preliminaries

The subject of our research is to design a new type of observer for neural mass models and to demonstrate its advantages. To make the problem statement clearer, we first present some preliminaries employed in this paper.

#### 2.1. Lurie systems

Lurie systems are a class of typical non-linear systems that can be viewed as the connection of a linear system and a non-linear function. Many non-linear systems can be represented in the form of Lurie systems, such as Chua’s circuit (Camposcantón et al., 2014), Goodwin models (Ruoff et al., 2001) and neural mass models (Lopes da Silva et al., 1974).
Lurie systems can be expressed as follows   $$\dot{x}=Ax+Bf(Hx),$$ (1) where $$x\in \mathbb{R}^{n}$$ is the state vector, $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$, $$H\in \mathbb{R}^{m\times n}$$ are all constant matrices, and $$f(\cdot ):\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}$$ is a continuous non-linear vector function with each entry fi(⋅) (i = 1, 2, ⋯, m) satisfying fi(0) ≡ 0 and the sector conditions   $$\mu_{i1}{\sigma_{i}^{2}}\leq \sigma_{i}\, f_{i}(\sigma_{i})\leq\mu_{i2}{\sigma_{i}^{2}},\quad\forall\sigma_{i}\in \mathbb{R},$$ (2) where $$\mu _{i1}\in \mathbb{R}$$, $$\mu _{i2}\in \mathbb{R}$$. Let μ1 = diag(μ11, μ21, ⋯, μm1), μ2 = diag(μ12, μ22, ⋯, μm2) and $$\chi(\imath\omega)=H(\imath\omega I_{n}-A)^{-1}B$$. The origin x = 0 is an equilibrium point of (1). The circle criterion (Leonov et al., 2001) is one of the most classical criteria for judging the global asymptotic stability of the origin of system (1). Lemma 1 (Circle criterion) Suppose that the pair (A, B) is controllable and the matrix A has no eigenvalues with zero real parts. If the frequency-domain inequality   $$\textrm{Re}\left\{[\mu_{1}\chi(\imath\omega)+I_{m}]^{T}[\mu_{2}\chi(\imath\omega)+I_{m}]\right\}>0,\quad\forall\omega\in\mathbb{R}$$ (3) is fulfilled, then the origin of system (1) is globally asymptotically stable. By using the well-known Kalman–Yakubovich–Popov (KYP) lemma (Rantzer, 1996), the frequency-domain inequality (3) can be transformed into an equivalent matrix inequality that is easy to solve.
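As a quick sanity check, the frequency-domain inequality (3) can be sampled numerically for a toy Lurie system. The sketch below is illustrative only: the matrices and sector bounds are ours, and the conjugate transpose is used where (3) writes T, since χ(ıω) is complex.

```python
import numpy as np

# Toy Lurie system dx/dt = A x + B f(Hx) with a scalar sector
# nonlinearity f in [mu1, mu2]; all values below are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: no eigenvalues with zero real part
B = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
mu1, mu2 = np.diag([0.0]), np.diag([0.5])  # sector bounds mu_1, mu_2

def chi(w):
    """Transfer matrix chi(i*w) = H (i*w*I_n - A)^{-1} B from Lemma 1."""
    n = A.shape[0]
    return H @ np.linalg.solve(1j * w * np.eye(n) - A, B)

def circle_criterion_holds(omegas):
    """Check Re{[mu1*chi + I]^H [mu2*chi + I]} > 0 on a frequency grid."""
    m = B.shape[1]
    for w in omegas:
        c = chi(w)
        M = (mu1 @ c + np.eye(m)).conj().T @ (mu2 @ c + np.eye(m))
        herm = 0.5 * (M + M.conj().T)      # Hermitian (real) part
        if np.min(np.linalg.eigvalsh(herm)) <= 0:
            return False
    return True

print(circle_criterion_holds(np.linspace(-100, 100, 2001)))
```

A grid check of course only samples (3); it is a plausibility test, not a proof.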
Lemma 2 (KYP lemma) Given $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$, $$M=M^{T}\in \mathbb{R}^{(n+m)\times (n+m)}$$, with det$$(\imath \omega I-A)\neq 0$$ for $$\omega \in \mathbb{R}$$ and (A, B) controllable, the following two statements are equivalent: (i)   \begin{align} &\left[\begin{array}{@{}c@{}}(\imath\omega I_{n}-A)^{-1}B\\ I\end{array}\right]^{T}M\left[\begin{array}{@{}c@{}}(\imath\omega I_{n}-A)^{-1}B\\ I\end{array}\right]\leq0,\nonumber\\ &\forall\omega\in \mathbb{R}; \end{align} (4) (ii) There exists a matrix $$P\in \mathbb{R}^{n\times n}$$ such that $$P=P^{T}$$ and   $$M+{ \left[\begin{array}{@{}cc@{}} A^{T}P+PA& PB\\ B^{T}P&0 \end{array} \right]}\leq0.$$ (5) The corresponding equivalence for strict inequalities holds even if (A, B) is not controllable. Lemma 3 Suppose that the pair (A, B) is controllable and the matrix A has no eigenvalues with zero real parts. If there exists a matrix $$P=P^{T}$$ such that the matrix inequality   $${ \left[\begin{array}{@{}cc@{}} A^{T}P+PA+H^{T}\mu_{1}\mu_{2}H& PB+\frac{1}{2}H^{T}(\mu_{1}+\mu_{2})\\ \star&I_{m} \end{array} \right]}>0$$ (6) is fulfilled, then the origin of system (1) is globally asymptotically stable. Proof. We rewrite (3) in the form of (4) with   $$M={ \left[\begin{array}{@{}cc@{}} H^{T}\mu_{1}\mu_{2}H& \frac{1}{2}H^{T}(\mu_{1}+\mu_{2})\\ \star&I_{m} \end{array} \right]}.$$ (7) According to the KYP lemma, we can derive the equivalent inequality (6). This completes the proof. It has been shown that the direct application of the circle criterion to design an observer for neural mass models is not feasible (Chong et al., 2012). In this paper, we apply passivity theory to the observer design of the underlying model. We therefore present the definitions of passivity and output strict passivity.
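The direction (ii) ⇒ (i) of the KYP lemma can be illustrated numerically: choose P from a Lyapunov equation so that (5) holds by construction, then sample the quadratic form in (4) over a frequency grid (using the conjugate transpose for the complex vector). All matrices below are illustrative choices of ours, not from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz, so det(i*w*I - A) != 0
B = np.array([[0.0], [1.0]])
n, m = A.shape[0], B.shape[1]

# P = P^T solving A^T P + P A = -I_n (continuous Lyapunov equation).
P = solve_continuous_lyapunov(A.T, -np.eye(n))

# Choose M so that M + [[A^T P + P A, P B], [B^T P, 0]] = -I_{n+m} <= 0,
# i.e. statement (ii) holds by construction.
M = np.block([[np.zeros((n, n)), -P @ B],
              [-B.T @ P, -np.eye(m)]])

def freq_inequality_value(w):
    """v(w)^H M v(w) with v(w) = [(i*w*I_n - A)^{-1} B; I_m], as in (4)."""
    x = np.linalg.solve(1j * w * np.eye(n) - A, B)
    v = np.vstack([x, np.eye(m)])
    return np.real(v.conj().T @ M @ v)[0, 0]

vals = [freq_inequality_value(w) for w in np.linspace(-50, 50, 1001)]
print(max(vals) <= 0.0)   # statement (i) indeed holds on the sampled grid
```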
#### 2.2. Passivity of non-linear dynamical systems

Consider the dynamical system   $$\begin{cases} \dot{x}=f(x,u),\\ y=h(x,u), \end{cases}$$ (8) where $$x\in \mathbb{R}^{n}$$ is the state vector, $$u\in \mathbb{R}^{p}$$ is the input, $$y\in \mathbb{R}^{m}$$ is the output, and $$f(x,u)\in \mathbb{R}^{n}$$ and $$h(x,u)\in \mathbb{R}^{m}$$ are functions of x and u. Definition 1 System (8) is said to be passive if there exists a function S(x) ≥ 0 such that   $$\dot{S}\leq-W(x)+u^{T}y$$ (9) is fulfilled for some positive semidefinite function W(x). System (8) is said to be output strictly passive if there exists a function S(x) ≥ 0 such that   $$\dot{S}\leq-y^{T}\varphi(y)+u^{T}y$$ (10) is fulfilled, where $$y^{T}\varphi(y)>0$$. The function S satisfying (9) or (10) is called the storage function.

### 3. Problem formulation

Consider neural mass models with the following structure   $$\begin{cases} \dot{x}=Ax+Bf(Hx)+B_{1}u,\\ z=Cx+D\omega, \end{cases}$$ (11) where $$x\in \mathbb{R}^{n}$$ is the state vector; $$u\in \mathbb{R}^{p}$$ is the input; $$z\in \mathbb{R}^{q}$$ is the measurement output; $$\omega \in \mathbb{R}^{s}$$ is the measurement noise; $$A\in \mathbb{R}^{n\times n}$$, $$B\in \mathbb{R}^{n\times m}$$, $$B_{1}\in \mathbb{R}^{n\times p}$$, $$C\in \mathbb{R}^{q\times n}$$, $$H\in \mathbb{R}^{m\times n}$$ and $$D\in \mathbb{R}^{q\times s}$$ are all constant matrices; and $$f(\cdot ):\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}$$ is a non-linear vector function. Each entry of f(Hx) has the form   $$f_{i}=f_{i} \left (\sum_{j=1}^{n} h_{ij}x_{j} \right)=f_{i}(h_{i}x),\ i=1,2,\cdots,m,$$ (12) and its derivative satisfies   $$0\leq f_{i}^{\prime}(\sigma)\leq\delta_{i},\quad \forall \sigma\in \mathbb{R},\ i=1,2,\cdots, m,$$ (13) where hi = [hi1⋯hin], δi ≥ 0. The models originate from the seminal work of Lopes da Silva et al. (1974) and have the same mathematical structure as (11).
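The slope condition (13) can be verified numerically for a concrete non-linearity. Anticipating the sigmoid $$S(v)=\frac{2e_{0}}{1+e^{r(v_{0}-v)}}$$ used in Section 5 with the standard values of Table 1, its derivative stays in [0, δ] with δ = e0r/2; the grid and tolerances below are our choices.

```python
import numpy as np

# Slope condition (13) checked numerically for the sigmoid of Section 5;
# its derivative peaks at v = v0 with the value delta = e0*r/2.
e0, v0, r = 2.5, 6.0, 0.56            # standard values from Table 1

def S(v):
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

v = np.linspace(-50.0, 50.0, 200001)
slopes = np.gradient(S(v), v)          # numerical derivative S'(v)
delta = 0.5 * e0 * r                   # claimed slope bound from (13)

print(slopes.min() >= -1e-9)           # S is non-decreasing
print(slopes.max() <= delta + 1e-6)    # S' <= e0*r/2 everywhere
```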
We apply the following type of observer   $$\begin{cases} \dot{\hat{x}}=A\hat{x}+Bf[H\hat{x}+K(C\hat{x}-z)]+L(C\hat{x}-z)+B_{1}u,\\ \hat{z}=C\hat{x}, \end{cases}$$ (14) where $$\hat{x}$$ is the state estimate, $$\hat{z}$$ is the output estimate, and $$K\in \mathbb{R}^{m\times q}$$ and $$L\in \mathbb{R}^{n\times q}$$ are the observer matrices to be designed. Defining the state observation error as $$e=\hat{x}-x$$ and the output observation error as $$z_{e}=\hat{z}-z$$, we obtain the observation error dynamical system   $$\begin{cases} \dot{e}=(A+LC)e+B\varphi(e,t)-LD\omega,\\ z_{e}=Ce-D\omega, \end{cases}$$ (15) where φ(e, t) = f(V) − f(U), $$V=H\hat{x}+K(C\hat{x}-z)$$, U = Hx. The constraint (13) implies that each entry of φ(e, t) satisfies   \begin{align} 0&\leq\frac{\varphi_{i}(e,t)}{v_{i}-u_{i}} =\frac{f_{i}(v_{i})-f_{i}(u_{i})}{v_{i}-u_{i}}\leq\delta_{i},\nonumber\\ \forall t &\in \mathbb{R}_{+},\quad \forall e\in \mathbb{R},\ i=1,2,\cdots,m. \end{align} (16) Lemma 4 The observation error dynamical system (15) is output strictly passive if there exists a continuously differentiable positive semidefinite function S such that   $$\dot{S}\leq-{z_{e}^{T}}\rho{(z_{e})}+\omega^{T}z_{e}, \quad{z_{e}^{T}}\rho{(z_{e})}>0, \forall z_{e}\neq 0.$$ (17) Here S is called the storage function. We use the passivity property to determine the observer matrices K and L.

### 4. Main result

In this section, we provide the method of designing the observer matrices K and L.
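A minimal sketch of observer (14) running against a toy plant of form (11) may clarify the structure. The matrices, gains, nonlinearity and forward-Euler discretization below are all illustrative choices of ours, not the design produced by the theorems that follow.

```python
import numpy as np

def observer_rhs(x_hat, z, u, A, B, B1, C, H, K, L, f):
    """d x_hat/dt for the observer (14); z is the measured output, u the input."""
    innovation = C @ x_hat - z           # output estimation error C*x_hat - z
    return (A @ x_hat
            + B @ f(H @ x_hat + K @ innovation)  # nonlinearity with corrected argument
            + L @ innovation
            + B1 @ u)

# Toy plant of form (11) and a hand-picked stabilizing gain (illustrative only).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [0.1]])
B1 = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
H = np.array([[1.0, 0.0]])
K = np.zeros((1, 1))
L = np.array([[-5.0], [-5.0]])          # makes A + L C Hurwitz
f = lambda s: 0.5 * np.tanh(s)          # slope-restricted, satisfies (13) with delta = 0.5

dt = 1e-3
x = np.array([[1.0], [0.5]])
x_hat = np.zeros((2, 1))
u = np.array([[1.0]])                    # constant input for the demo
for _ in range(5000):                    # 5 s of forward-Euler integration
    z = C @ x                            # noise-free measurement (omega = 0)
    x = x + dt * (A @ x + B @ f(H @ x) + B1 @ u)
    x_hat = x_hat + dt * observer_rhs(x_hat, z, u, A, B, B1, C, H, K, L, f)

print(float(np.linalg.norm(x_hat - x)))  # near zero: the estimation error has converged
```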
Theorem 1 If there exist positive definite matrices P and Γ, a positive definite diagonal matrix R = diag(r1, r2, ⋯, rm), matrices K and L, and a scalar constant ε > 0, such that   $$\Sigma= { \left[\begin{array}{@{}ccc@{}} \Sigma_{1}& \Sigma_{2}&-PLD-C^{T} \Gamma D-\frac{1}{2}C^{T}\\ \star&-R&-\frac{1}{2}R\Delta KD\\ \star&\star&D^{T} \Gamma D+\frac{1}{2}(D+D^{T}) \end{array} \right]}\leq0,$$ (18) then the error dynamical system (15) is output strictly passive, and the origin of system (15) with ω = 0 is globally asymptotically stable, where   \begin{align} \Sigma_{1}=&\,P(A+LC)+(A+LC)^{ T }P+C^{T} \Gamma C,\nonumber\\[-2pt] \Sigma_{2}=&\,PB+\frac{1}{2}(H+KC)^{T} \Delta R. \end{align} (19) Proof. Construct a quadratic function of the form $$S=e^{T}Pe$$, which is positive definite because P is positive definite. Calculating the time derivative of S gives   $$\dot S=\dot{e}^{T}Pe+e^{T}P\dot{e} =[(A+LC)e+B\varphi(e,t)-LD\omega]^{T} Pe+e^{T} P[(A+LC)e+B\varphi(e,t)-LD\omega].$$ (20) In view of (16), it follows that   $$\sum_{i=1}^{m}r_{i}\varphi_{i}(e,t)[\varphi_{i}(e,t)-\delta_{i}(v_{i}-u_{i})]\leq0,$$ (21) where ri ≥ 0. The formula (21) can be expressed in the following equivalent form   $$\varphi^{T}(e,t)R\varphi(e,t)-\varphi^{T}(e,t)R\Delta(V-U)\leq0,$$ (22) where φ(e, t) = [φ1(e, t), φ2(e, t), ⋯, φm(e, t)]T, Δ = diag(δ1, δ2, ⋯, δm), V − U = (H + KC)e − KDω.
Incorporating (20) and (22), we have   \begin{align} \dot S\leq&\, [(A+LC)e+B\varphi(e,t) -LD\omega]^{T} Pe+e^{T} P[(A+LC)e+B\varphi(e,t)-LD\omega]\nonumber\\[-3pt] &\,-\varphi^{T}(e,t)R\varphi(e,t)+\varphi^{T}(e,t)R\Delta[(H+KC)e-KD\omega]\nonumber\\[-3pt] =&\, \chi^{T}\Sigma\chi-\chi^{T}\left[\begin{array}{@{}ccc@{}}C^{T}\Gamma C & 0 & -C^{T}\Gamma D-\frac{1}{2}C^{T}\\ \star & 0 & 0\\ \star & \star & D^{T} \Gamma D+\frac{1}{2}(D+D^{T})\end{array}\right]\chi\nonumber\\[-3pt] \leq&\, \omega^{T} (Ce-D\omega)-(Ce-D\omega)^{T} \Gamma(Ce-D\omega)\nonumber\\[-3pt] =&\, -{z_{e}^{T}}\Gamma z_{e}+\omega^{T} z_{e}, \end{align} (23) where $$\chi=[e^{T}\ \ \varphi^{T}(e,t)\ \ \omega^{T}]^{T}$$. The derivative of S satisfies (17) with ρ(ze) = Γze and $${z_{e}^{T}}\Gamma z_{e}>0$$. Thus, S is a storage function and the error dynamical system (15) is output strictly passive. When ω = 0, (23) can be written as   $$\dot S\leq -{z_{e}}^{T}\Gamma z_{e}.$$ (24) Since Γ is positive definite, $$\dot S$$ is negative definite. This shows that the storage function S is also a Lyapunov function. Thus, the origin of the error dynamical system (15) is globally asymptotically stable by the Lyapunov stability theory. This completes the proof. Remark 1 Applying standard inequality techniques to (23) yields   \begin{align} \dot{S} \leq&\, -\lambda_{\min}{z_{e}}^{T}z_{e}+\omega^{T}z_{e}\nonumber\\[-3pt] =&\,-\frac{1}{2\lambda_{\min}}(\omega-\lambda_{\min}z_{e})^{T}(\omega-\lambda_{\min}z_{e})+\frac{1}{2\lambda_{\min}}\omega^{T}\omega-\frac{\lambda_{\min}}{2}{z_{e}}^{T}z_{e}\nonumber\\[-3pt] \leq&\, \frac{1}{2\lambda_{\min}}\omega^{T}\omega-\frac{\lambda_{\min}}{2}{z_{e}}^{T}z_{e}, \end{align} (25) where $$\lambda _{\min }$$ is the minimal eigenvalue of Γ. Integrating (25) yields   $$\|z_{e}\|_{L_{2}}\leq\rho\|\omega\|_{L_{2}}+\beta,$$ (26) where $$\rho =\frac{1}{\lambda _{\min }}$$ denotes the disturbance gain from ω to ze and $$\beta =\sqrt{\frac{2}{\lambda _{\min }}S(e(0))}$$.
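The integration step behind (26) can be spelled out. Integrating (25) from 0 to ∞ and using S(e(t)) ≥ 0 gives   $$\frac{\lambda_{\min}}{2}\|z_{e}\|_{L_{2}}^{2}\leq S(e(0))+\frac{1}{2\lambda_{\min}}\|\omega\|_{L_{2}}^{2},$$ and taking square roots with $$\sqrt{\alpha+\beta}\leq\sqrt{\alpha}+\sqrt{\beta}$$ yields (26) with $$\rho=\frac{1}{\lambda_{\min}}$$ and $$\beta=\sqrt{\frac{2}{\lambda_{\min}}S(e(0))}$$.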
By setting $$P_{L}=PL$$ and $$R_{K}=RK$$, (18) can be transformed into an LMI, and we have the following result. Theorem 2 If there exist positive definite matrices P and Γ, a positive definite diagonal matrix R = diag(r1, r2, ⋯, rm), matrices $$R_{K}$$ and $$P_{L}$$, and a scalar constant ε > 0, such that   $$\left[ \begin{array}{@{}ccc@{}} \Omega_{1}&\Omega_{2}&-P_{L}D-C^{T} \Gamma D-\frac{1}{2}C^{T}\\ \star&-R&-\frac{1}{2}\Delta{R_{K}}D\\ \star&\star&D^{T} \Gamma D+\frac{1}{2}(D+D^{T}) \end{array} \right] \leq 0,$$ (27) then, with $$K=R^{-1}R_{K}$$ and $$L=P^{-1}P_{L}$$, the error dynamical system (15) is output strictly passive and the origin of system (15) with ω = 0 is globally asymptotically stable, where   \begin{align} \Omega_{1}=&\,A^{T} P+C^{T} {P_{L}}^{T} +PA+{P_{L}}C+C^{T} \Gamma C,\nonumber\\[-3pt] \Omega_{2}=&\,PB+\frac{1}{2}H^{T} \Delta R+\frac{1}{2}C^{T} {R_{K}}^{T} \Delta. \end{align} (28) Remark 2 Theorem 2 provides the method of determining the observer matrices K and L such that the error dynamical system (15) is output strictly passive and the origin of system (15) with ω = 0 is globally asymptotically stable. In (27), A, B, C, H, Δ are given, whilst P, R, $$P_{L}$$, $$R_{K}$$, Γ are unknown. Thus, (27) is an LMI. If (27) is feasible, then P, R, $$P_{L}$$, $$R_{K}$$ and Γ can be obtained by using the LMI toolbox in MATLAB (Gahinet et al., 1994). The observer matrices K and L can then be constructed via $$K=R^{-1}R_{K}$$ and $$L=P^{-1}P_{L}$$.
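The change of variables can be made concrete in code. The sketch below assembles the block matrix of (27) and recovers the gains via K = R⁻¹R_K, L = P⁻¹P_L; the toy matrices are illustrative, and in practice an SDP solver (the MATLAB LMI toolbox in the paper, or e.g. CVXPY) would search for P, R, Γ, P_L, R_K making this matrix negative semidefinite.

```python
import numpy as np

def lmi_matrix(A, B, C, D, H, Delta, P, R, Gamma, P_L, R_K):
    """Block matrix of (27); a solver would require it to be <= 0 (NSD)."""
    Omega1 = A.T @ P + C.T @ P_L.T + P @ A + P_L @ C + C.T @ Gamma @ C
    Omega2 = P @ B + 0.5 * H.T @ Delta @ R + 0.5 * C.T @ R_K.T @ Delta
    col3 = -P_L @ D - C.T @ Gamma @ D - 0.5 * C.T   # assumes q = s, as in the paper
    blk23 = -0.5 * Delta @ R_K @ D
    blk33 = D.T @ Gamma @ D + 0.5 * (D + D.T)
    return np.block([[Omega1, Omega2, col3],
                     [Omega2.T, -R, blk23],
                     [col3.T, blk23.T, blk33]])

def recover_gains(P, R, P_L, R_K):
    """Observer gains K = R^{-1} R_K and L = P^{-1} P_L after solving (27)."""
    return np.linalg.solve(R, R_K), np.linalg.solve(P, P_L)

# Toy dimensions n = 2, m = q = s = 1; all numbers are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]); D = np.array([[1.0]])
H = np.array([[1.0, 0.0]]); Delta = np.array([[0.7]])
P, R, Gamma = np.eye(2), 2.0 * np.eye(1), np.eye(1)
P_L, R_K = np.array([[0.3], [-0.1]]), np.array([[0.4]])

Sigma = lmi_matrix(A, B, C, D, H, Delta, P, R, Gamma, P_L, R_K)
K, L = recover_gains(P, R, P_L, R_K)
print(np.allclose(Sigma, Sigma.T), np.allclose(P @ L, P_L), np.allclose(R @ K, R_K))
```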
### 5. Simulation

In this section, we choose the following neural mass model as an example to show the effectiveness of the proposed method   $$\begin{cases} \dot{x}_{1}(t)=x_{2}(t),\\ \dot{x}_{2}(t)=\theta_{A}aS[x_{3}(t)-x_{5}(t)]-2ax_{2}(t)-a^{2}x_{1}(t),\\ \dot{x}_{3}(t)=x_{4}(t),\\ \dot{x}_{4}(t)=\theta_{A}a\{u(t)+C_{2}S[C_{1}x_{1}(t)]\}-2ax_{4}(t)-a^{2}x_{3}(t),\\ \dot{x}_{5}(t)=x_{6}(t),\\ \dot{x}_{6}(t)=\theta_{B}b\{C_{4}S[C_{3}x_{1}(t)]\}-2bx_{6}(t)-b^{2}x_{5}(t),\\ \dot{x}_{7}(t)=x_{8}(t),\\ \dot{x}_{8}(t)=\theta_{A}a_{d}S[x_{3}(t)-x_{5}(t)]-2a_{d}x_{8}(t)-{a_{d}^{2}}x_{7}(t),\\ z(t)=x_{3}(t)-x_{5}(t)+D\omega(t), \end{cases}$$ (29) where the variables x1, x3, x5, x7 are the outputs of the mean membrane postsynaptic potential modules of the neural populations and the variables x2, x4, x6, x8 are the time derivatives of x1, x3, x5, x7, respectively. The input u(t) is the afferent influence from neighbouring or more distant populations and is represented by a pulse density, which can be any arbitrary function, including white noise (Jansen & Rit, 1995). It has been reported that the neural mass model can produce signals similar to the spontaneous EEGs recorded from electrodes in neocortical structures during interictal periods if u(t) is Gaussian white noise with appropriate mean and variance (Wendling et al., 2000). In our paper, u(t) is modelled by Gaussian white noise with mean 220 and standard deviation 30. The output x3 − x5 is used to simulate EEG signals.
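A forward-Euler simulation sketch of (29) with the standard parameter values of Table 1; the step size, duration and random seed are our choices, not the paper's.

```python
import numpy as np

# Euler simulation of the neural mass model (29); parameters from Table 1.
thA, thB = 3.25, 22.0
a, b, ad = 100.0, 50.0, 33.0
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75
e0, v0, r = 2.5, 6.0, 0.56

def S(v):
    """Sigmoid firing-rate function (30)."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def rhs(x, u):
    """Right-hand side of (29) for the 8-dimensional state x."""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return np.array([
        x2,
        thA * a * S(x3 - x5) - 2.0 * a * x2 - a**2 * x1,
        x4,
        thA * a * (u + C2 * S(C1 * x1)) - 2.0 * a * x4 - a**2 * x3,
        x6,
        thB * b * C4 * S(C3 * x1) - 2.0 * b * x6 - b**2 * x5,
        x8,
        thA * ad * S(x3 - x5) - 2.0 * ad * x8 - ad**2 * x7,
    ])

dt, steps = 1e-4, 50000                  # 5 s of simulated activity
rng = np.random.default_rng(0)
x = np.zeros(8)
eeg = np.empty(steps)
for k in range(steps):
    u = rng.normal(220.0, 30.0)          # pulse-density input: Gaussian white noise
    x = x + dt * rhs(x, u)
    eeg[k] = x[2] - x[4]                 # simulated EEG output x3 - x5

print(np.all(np.isfinite(eeg)))          # the trajectory stays bounded
```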
Corresponding to the original model (29), the model is of the form (11) with   \begin{align} A=&\,\textrm{diag}(A_{1},\ldots,A_{4}),\ A_{i}= \left [\begin{array}{@{}cc@{}}0&1\\ -{k_{i}^{2}}&-2k_{i}\end{array}\right],\nonumber\\[-2pt] k_{1}=&\,k_{2}=a,\ k_{3}=b,\ k_{4}=a_{d},\nonumber\\[-2pt] B=&\,\left[\begin{array}{@{}cccccccc@{}}0&\theta_{A}a&0&0&0&0&0&\theta_{A}a_{d}\\[-2pt] 0&0&0&\theta_{A}aC_{2}&0&0&0&0\\ 0&0&0&0&0&\theta_{B}bC_{4}&0&0\end{array}\right]^{T},\nonumber\\[-2pt] B_{1}=&\,\left [\begin{array}{@{}cccccccc@{}}0& 0& 0&\theta_{A}a& 0& 0& 0& 0\end{array}\right]^{T},\nonumber\\[-2pt] C=&\,\left[\begin{array}{@{}cccccccc@{}}0&0&1&0&-1&0&0&0\end{array}\right],\ D=-150,\nonumber\\[-2pt] H=&\,\left[\begin{array}{@{}cccccccc@{}}0&0&1&0&-1&0&0&0\\ C_{1}&0&0&0&0&0&0&0\\ C_{3}&0&0&0&0&0&0&0\end{array}\right],\ f(\cdot)=\left[\begin{array}{@{}c@{}} S(\cdot)\\ S(\cdot)\\ S(\cdot)\end{array}\right].\nonumber \end{align} The notation S(⋅) denotes a non-linear sigmoid function of the form   $$S(v)=\frac{2e_{0}}{1+e^{r(v_{0}-v)}},\quad \forall v\in\mathbb{R}.$$ (30) It satisfies (13) with $$\delta =\frac{1}{2}e_{0}r$$. Table 1 presents the physiological meaning and standard value of each parameter in the model. Fig. 1. The time evolutions of the states x1–x8 and their estimates.
Table 1. Physiological meanings and standard values of the model parameters (Wendling et al., 2000)

| Parameter | Physiological meaning | Standard value |
| --- | --- | --- |
| θA, θB | average gains of the excitatory and inhibitory synapses | θA = 3.25 mV, θB = 22 mV |
| a, b | membrane transfer and dendritic tree average time delays | a = 100 s−1, b = 50 s−1 |
| ad | average contact time between the neural populations | ad = 33 s−1 |
| C1, C2 | averages of synaptic connections in the excitatory feedback loop | C1 = 135, C2 = 108 |
| C3, C4 | averages of synaptic connections in the inhibitory feedback loop | C3 = C4 = 33.75 |
| e0 | maximum firing rate | e0 = 2.5 s−1 |
| v0 | postsynaptic potential (PSP) corresponding to the firing rate e0 | v0 = 6 mV |
| r | bending degree of the sigmoid function | r = 0.56 mV−1 |
| v | presynaptic average membrane potential |  |

The conditions in Theorem 2 are solved by using the LMI toolbox in MATLAB. The computed results are as follows   \begin{align} K=&\left [-0.2693\ \ \ 0.0490\ \ \ -0.0297 \right]^{T},\nonumber\\ L=&10^{7}\ast\left [0.0160\ \ \ -0.0000\ \ \ -6.1533\ \ \ -0.013\ \ \ 3.1841\ \ \ 0.0169\ \ \ 0.0028\ \ \ -0.0001\right]^{T}\nonumber \end{align} and $$\lambda _{\min }=0.0033$$.
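As a sanity check, the matrices of form (11) listed above can be assembled from the Table 1 values; each 2 × 2 block of A contributes a repeated eigenvalue −ki, so A is Hurwitz. Variable names below are ours, not the authors' code.

```python
import numpy as np
from scipy.linalg import block_diag

# Assemble A, B, C, H of form (11) for model (29); values from Table 1.
thA, thB, a, b, ad = 3.25, 22.0, 100.0, 50.0, 33.0
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75

def A_block(k):
    """A_i = [[0, 1], [-k^2, -2k]]: a critically damped second-order block."""
    return np.array([[0.0, 1.0], [-k**2, -2.0 * k]])

A = block_diag(A_block(a), A_block(a), A_block(b), A_block(ad))
B = np.array([[0, thA * a, 0, 0, 0, 0, 0, thA * ad],
              [0, 0, 0, thA * a * C2, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, thB * b * C4, 0, 0]], dtype=float).T
C = np.array([[0, 0, 1, 0, -1, 0, 0, 0]], dtype=float)
H = np.array([[0, 0, 1, 0, -1, 0, 0, 0],
              [C1, 0, 0, 0, 0, 0, 0, 0],
              [C3, 0, 0, 0, 0, 0, 0, 0]], dtype=float)

# Each A_i contributes the double eigenvalue -k_i, so A is Hurwitz.
eigs = sorted(np.linalg.eigvals(A).real)
print(np.allclose(eigs, [-100, -100, -100, -100, -50, -50, -33, -33], atol=1e-2))
```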
According to Theorem 2, we conclude that the observer with the computed matrices K and L can reconstruct the states of the model. The following simulations confirm this. In the simulations, we set x(0) = [1 0.5 1 0.5 1 0.5 1 0.5]T, $$\hat{x}(0)={[0\ 0\ 0\ 0\ 0\ 0\ 0\ 0]}^{T}$$ and ω = 0. Figure 1 presents the time evolutions of the states of the model (black line) and the passivity-based observer (red line). The time evolution of the state observation error is shown in Fig. 2. We observe that the state observation error eventually converges to zero, showing that all the states of the passivity-based stabilizing observer converge well to the states of the model. Observers were designed for neural mass models by using the classical circle criterion in Chong et al. (2012) and by using Lurie system theory and algebraic manipulation in Liu et al. (2015). With the same simulation parameters, solving Theorem 1 of Chong et al. (2012) and of Liu et al. (2015) yields no feasible solutions. This illustrates that the proposed observer outperforms the circle criterion observer of Chong et al. (2012) and the observer of Liu et al. (2015) for the models considered. Fig. 2. The observer error: $$e=\hat{x}-x$$.

### 6. Conclusions

We have provided a new method for designing a passivity-based stabilizing observer for neural mass models. The observer matrices are obtained by using the LMI toolbox in MATLAB. We have demonstrated that the proposed observer achieves the reconstruction of the unmeasured variables from measurements, and that it performs better than the circle criterion observer of Chong et al. (2012) and the observer of Liu et al. (2015) for the models considered.
The proposed method can also be applied to study the stability or observer design of other Lurie-type systems. It can be extended to complex networks composed of Lurie-type nodes, for example, multiple coupled neural mass models. In addition, passivity theory is effective in dealing with robustness issues of complex uncertain systems and stability issues of time-delay systems. Thus, the proposed method can also be extended to study the robust stability or observer design of complex networks with uncertainties or time delays.

### Funding

National Science Foundation of China (61473245 and 61004050); Natural Science Foundation of Hebei (F2017203218); Natural Science Foundation for Young Scientists of Hebei Province (F2014203099); Independent Research Program for Young Teachers of Yanshan University (13LGA006).

### References

Adhyaru, D. M. (2012) State observer design for nonlinear systems using neural network. Appl. Soft Comput., 12, 2530–2537.
Arcak, M. & Kokotović, P. (2001a) Nonlinear observers: a circle criterion design and robustness analysis. Automatica, 37, 1923–1930.
Arcak, M. & Kokotović, P. (2001b) Observer-based control of systems with slope-restricted nonlinearities. IEEE Trans. Auto. Control, 46, 1146–1150.
Bai, H., Arcak, M. & Wen, J. (2011) Cooperative Control Design: A Systematic, Passivity-Based Approach. New York: Springer, pp. 13–42.
Baz, A. (1992) A neural observer for dynamic systems. J. Sound Vibra., 152, 227–243.
Beker, M. G., Bertolini, A., van den Brand, J. F. J., Bulten, H. J., Hennes, E. & Rabeling, D. S. (2014) State observers and Kalman filtering for high performance vibration isolation systems. Rev. Sci. Instrum., 85, 034501.
Camposcantón, I., Seguracisneros, O. A., Balderasnavarro, R. E. & Camposcantón, E. (2014) Chua’s circuit and its characterization as a filter. Eur. J. Phys., 35, 065018.
Chong, M., Postoyan, R., Nešić, D., Kuhlmann, L. & Varsavsky, A. (2012) A robust circle criterion observer with application to neural mass models. Automatica, 48, 2986–2989.
Deremble, B., D´Andrea, F. & Ghil, M. (2009) Fixed points, stable manifolds, weather regimes, and their predictability. Chaos, 19, 043109.
Fan, X. Z. & Arcak, M. (2003) Observer design for systems with multivariable monotone nonlinearities. Syst. Control Lett., 50, 319–330.
Gahinet, P. M., Nemirovskii, A., Laub, A. J. & Chilali, M. (1994) LMI control toolbox. Proc. IEEE Conf. Decision Control, 3, 2038–2041.
Hajime, H., Hiroaki, T., Kengo, N., Makito, H., Toshihiko, N., Kiyotaka, S., Takashi, T. & Jun, O. (2016) Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication. Jap. J. Appl. Phys., 55, 04EM03.
Jansen, B. H. & Rit, V. G. (1995) Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Bio. Cybern., 73, 357–366.
Jeong, C. S., Yaz, E. E. & Yaz, Y. I. (2011) Lyapunov-based design of resilient mixed MSE-dissipative-type state observers for a class of nonlinear systems and general performance criteria. Int. J. Syst. Sci., 42, 789–800.
Kudva, P. & Narendra, K. (1973) Synthesis of an adaptive observer using Lyapunov’s direct method. Int. J. Control, 18, 1201–1210.
Leonov, G. A., Ponomarenko, D. V. & Smirnova, V. B. (1996a) Frequency-Domain Methods for Nonlinear Analysis: Theory and Applications. Singapore: World Scientific.
Leonov, G. A., Ponomarenko, D. V. & Smirnova, V. B. (1996b) Frequency-Domain Methods for Nonlinear Analysis: Theory and Applications. USA: World Scientific, pp. 46–49.
Li, Q. D., Tang, S. & Yang, X. S. (2013) New bifurcations in the simplest passive walking model. Chaos, 23, 043110.
Liu, X., Miao, D. K., Gao, Q. & Xu, S. Y. (2015) A novel observer design method for neural mass models. Chin. Phys. B, 24, 090207.
Lopes da Silva, F., Hoeks, A., Smits, H. & Zetterberg, L. H. (1974) Model of brain rhythmic activity. Kybernetik, 15, 27–37.
Luenberger, D. G. (1964) Observing the state of a linear system. IEEE Trans. Military Electronics, 8, 74–80.
Marino, R. (1990) Adaptive observers for single-output nonlinear systems. IEEE Trans. Auto. Control, 35, 1054–1058.
Nam, K. (1997) An approximate nonlinear observer with polynomial coordinate transformation maps. IEEE Trans. Auto. Control, 42, 522–527.
Rakkiyappan, R., Chandrasekar, A., Laksmanan, S. & Ju, H. P. (2014) State estimation of memristor-based recurrent neural networks with time-varying delays based on passivity theory. Complexity, 19, 32–43.
Rantzer, A. (1996) On the Kalman–Yakubovich–Popov lemma. Syst. Control Lett., 28, 7–10.
Ruoff, P., Vinsjevik, M., Monnerjahn, C. & Rensing, L. (2001) The Goodwin model: simulating the effect of light pulses on the circadian sporulation rhythm of Neurospora crassa. J. Theor. Biol., 209, 29–42.
Surhone, L. M., Tennoe, M. T. & Henssonow, S. F. (2013) State Observer. Germany: Betascript Publishing.
Van der Schaft, A. J. (1996) L2-Gain and Passivity Techniques in Nonlinear Control. New York: Springer.
Wendling, F., Bellanger, J. J., Bartolomei, F. & Chauvel, P. (2000) Relevance of nonlinear lumped-parameter models in the analysis of depth-EEG epileptic signals. Bio. Cybern., 83, 367–378.
Yang, H. J., Fan, X. Z., Xia, Y. Q. & Hua, C. C. (2015) Robust tracking control for wheeled mobile robot based on extended state observer. Advanced Robotics, 30, 68–78.
Zanchettin, A. M., Lacevic, B. & Rocco, P. (2015) Passivity-based control of robotic manipulators for safe cooperation with humans. Int. J. Control, 88, 429–439.
Zeitz, M. (1987) The extended Luenberger observer for nonlinear systems. Syst. Control Lett., 9, 149–156.
Zemouche, A. & Boutayeb, M. (2009) A unified H$$\infty$$ adaptive observer synthesis method for a class of systems with both Lipschitz and monotone nonlinearities. Syst. Control Lett., 58, 282–288.

### Journal

IMA Journal of Mathematical Control and Information, Oxford University Press

Published: Feb 5, 2018
