Sheng Zhang, En-Mi Yong, Yu Zhou and Wei-Qi Qian Abstract A dynamic backstepping control method is proposed for non-linear systems in the pure-feedback form, for which the traditional backstepping method suffers from the need to solve implicit non-linear algebraic equations. This method treats the implicit algebraic equations directly in a dynamic way, by augmenting the (virtual) controls as states during each recursive step. Compared with the traditional backstepping method, one more Lyapunov design is executed in each step. As new dynamics are included in the design, the resulting control law is in the dynamic feedback form. Under appropriate assumptions, the proposed control scheme achieves uniform asymptotic stability and the closed-loop system is local input-to-state stable with respect to various disturbances. Moreover, the control law may be simplified to the inverse-free form by setting large gains, which alleviates the problem of `explosion of terms’. The effectiveness of this method is illustrated by stabilization and tracking numerical examples. 1. Introduction The backstepping methodology provides an effective tool for controller design for a large class of non-linear systems with a triangular structure. Krstic et al. (1995) systematically developed this approach, from considering the exact model to encompassing bounded and parameterized uncertainties. The basic idea behind backstepping is to break a design problem on the full system down into a sequence of sub-problems on lower-order systems and recursively use some states as `virtual controls’ to obtain the intermediate control laws with the control Lyapunov function (CLF). Starting from the lower-order system and routinely dealing with the interaction after the augmentation of new dynamics makes the controller design easy. In addition, advantages of backstepping control include the guaranteed stability, the stress on robustness and the computable transient performance (Krstic et al., 1995). The backstepping method has received a great deal of interest since its proposal and has been widely applied to various control problems arising in aerospace engineering (Lee & Kim, 2001; Sonneveldt et al., 2007; Lu et al., 2016), mechanical engineering (Bridges et al., 1995; Melhem & Wang, 2009; Guo et al., 2017), etc. Over these years of study, the method has evolved to be fairly systematic and inclusive. For example, techniques like the non-linear damping (Krstic et al., 1995), the variable structure control (Won & Hedrick, 1996; Chehardoli & Eghtesad, 2015; Li et al., 2017), the neural network adaptive control (Zhang et al., 2000) and the fuzzy adaptive control (Yang & Zhou, 2005) are synthesized to address the uncertainties, including the matched and unmatched types. To resolve the problem of `explosion of terms’, the dynamic surface control (Swaroop et al., 2000) and the command filtered backstepping control (Farrell et al., 2005) are further established. To address the deficiency in state information, the output feedback backstepping control is developed (Krstic et al., 1995; Tong et al., 2012). For the problem of input constraint, the boundedness propagation (Freeman & Praly, 1998) and the limiting filter (Farrell et al., 2005) are employed in the recursive design. Conversely, the output constraint may also be efficiently met with the backstepping control upon a barrier CLF (Tee et al., 2009).
Nonetheless, within the extensive research on the backstepping method, the plants studied are usually in the form of strict feedback, while more systems take the pure-feedback form. Kanellakopoulos et al. (1991) considered the adaptive control of parametric pure-feedback systems in a special form, and Wang & Huang (2002) explicitly solved the control for one class of pure-feedback systems. For the general pure-feedback plants, which have no affine appearance of the actual control and of the state variables to be used as the virtual controls, the usage of the backstepping method is restricted because of the intractable implicit algebraic equations encountered. To avoid the direct treatment of pure-feedback systems, Wang et al. (2010) exploited the mean-value theorem to obtain the affine form for the backstepping controller design. Na et al. (2012) also resorted to the mean-value theorem in designing the controller after the employment of a non-linear coordinate transformation. With the adaptive technique, Ge & Wang (2002) utilized neural networks to approximate the (virtual) controls out of the implicit algebraic equations and proved that the control error will be ultimately bounded. Under a similar strategy, Wang et al. (2006) employed input-to-state stability analysis and the small-gain theorem to solve the `circularity problem’ arising from the general pure-feedback plants. By using filtered signals, Zou & Hou (2008) circumvented the algebraic loop problems and applied a compensator to counteract the resulting approximation error. In this paper, we will solve the algebraic loop problem in a different way, and the core idea is to seek the root in an asymptotic dynamic manner from a control-based viewpoint. For a similar algebraic issue in non-affine-in-control systems with well-defined relative degree, Hovakimyan et al. (2008) introduced controller dynamics, based on the singular perturbation theory, to approximate the ideal control solution. Here, for the general pure-feedback non-linear systems, a dynamic backstepping method that augments the controls, including the virtual controls, as states is proposed. Unlike most of the existing works, it handles the implicit algebraic equations directly within the frame of the backstepping CLF. The main innovations of this paper are as follows: (i) root-finding in a dynamic way is proposed for the algebraic loop problems, and it may also be employed to address such issues arising in other controller design problems, for example, the feedback linearization control of non-affine plants; (ii) with large gains, the control law may be simplified or even applied in an inverse-free form, which will alleviate the problem of `explosion of terms’. The rest of this paper is organized as follows. In Section 2, the problem formulation and preliminaries regarding backstepping are presented. Section 3 details the dynamic backstepping method for the general pure-feedback system. Simplification and extension of the control law are discussed in Section 4. In Section 5, illustrative examples are solved to demonstrate the effectiveness of the proposed method. Concluding remarks are given at the end. 2. Preliminary and problem formulation We first briefly review the basic backstepping controller theory, and show its deficiency in treating the general pure-feedback plants. Since the controller design procedure is similar for higher-order systems, in introducing the method we only investigate the model of two cascades for the sake of brevity.
Consider the strict-feedback plant described as \begin{equation} {\dot{\boldsymbol{x}}}_1={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1\right)+{\boldsymbol{g}}_1\left({\boldsymbol{x}}_1\right){\boldsymbol{x}}_2 \end{equation} (2.1) \begin{equation} {\dot{\boldsymbol{x}}}_2={\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{\boldsymbol{g}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right){\boldsymbol{u}}, \end{equation} (2.2) where |${\boldsymbol{x}}_1,{\boldsymbol{x}}_2\in{\mathbb{R}}^m$| are the states, |${\boldsymbol{u}}\in{\mathbb{R}}^m$| is the control, |${\boldsymbol{f}}_1:{\mathbb{R}}^m\to{\mathbb{R}}^m$|, |${\boldsymbol{f}}_2:{\mathbb{R}}^m\times{\mathbb{R}}^m\to{\mathbb{R}}^m$| are smooth non-linear vector fields and the matrix-valued functions |${\boldsymbol{g}}_1:{\mathbb{R}}^m\to{\mathbb{R}}^{m\times m}$|, |${\boldsymbol{g}}_2:{\mathbb{R}}^m\times{\mathbb{R}}^m\to{\mathbb{R}}^{m\times m}$| are smooth and invertible. The control objective is to stabilize the states of the system towards the origin $$\big[\begin{array}{c}{\boldsymbol{x}}_1\\[-4pt] {}{\boldsymbol{x}}_2\end{array}\big]=\textbf{0}$$, which is assumed to be the equilibrium point. Other equilibrium points, as arise in set-point control problems, can be shifted to the origin by a coordinate transformation. Note however that the control |${\boldsymbol{u}}$| might not vanish at the origin. The basic backstepping design procedure for plant (2.1) and (2.2) is now summarized as follows: Step 1: consider Equation (2.1). Regard |${\boldsymbol{x}}_2$| as the virtual control in this equation and denote it by |${\boldsymbol{x}}_{2d}$|. The Lyapunov design is carried out as \begin{equation} {\displaystyle \begin{array}{l}\kern0.1em \textrm{CLF}:\kern0.5em \\[3pt] {}\kern1.5em {V}_1=\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{x}}_1\\[3pt] {}\kern3.899998em \Downarrow \\[3pt] {}\textrm{Driving}\kern0.5em {V}_1\kern0.5em \textrm{decrease}:\kern0.4em \\[3pt] {}\kern1.3em {\dot{V}}_1={{\boldsymbol{x}}_1}^{\textrm{T}}\left(\,{\boldsymbol{f}}_1\left({{\boldsymbol{x}}}_1\right)+{\boldsymbol{g}}_1\left({\boldsymbol{x}}_1\right){\boldsymbol{x}}_{2d}\right)\le 0\\[3pt] {}\kern3.899998em \Downarrow \\[3pt] {}\textrm{Algebraic}\kern0.3em \textrm{control}\kern0.5em \textrm{equation}:\kern0.4em \\[3pt] {}\kern1em {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1\right)+{\boldsymbol{g}}_1\left({\boldsymbol{x}}_1\right){\boldsymbol{x}}_{2d}={\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)\\[3pt] {}\kern3.899998em \Downarrow \\[3pt] {}\textrm{Explicit}\kern0.34em \textrm{virtual}\kern0.34em \textrm{control}\kern0.3em \textrm{expression}:\kern0.4em \\{}\kern1.3em {\boldsymbol{x}}_{2d}={{\boldsymbol{g}}_1}^{-1}\left(-{\boldsymbol{f}}_1+{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)\right)\end{array}} \end{equation} (2.3) where the superscript `|$\textrm{T}$|’ denotes the transpose operator. |${\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)$| is the expected dynamics that satisfies the following two conditions: (i) it drives |${\dot{V}}_1<0$| when |${\boldsymbol{x}}_1\ne \textbf{0}$|, and (ii) the resulting virtual control solution is continuous and |${{\boldsymbol{x}}_{2d}|}_{{\boldsymbol{x}}_1=\textbf{0}}=\textbf{0}$|. Usually it may be set that |${\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)=-{\boldsymbol{K}}_1{\boldsymbol{x}}_1$|, where |${\boldsymbol{K}}_1$| is a positive-definite gain matrix.
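As a concrete illustration of Step 1, the following minimal sketch computes the explicit virtual control of Equation (2.3) for a scalar strict-feedback subsystem and simulates the resulting x1-subsystem. The particular choices f1(x1) = x1^2, g1(x1) = 1 + x1^2 and kappa1(x1) = -k1*x1 are illustrative assumptions, not examples taken from the paper.

```python
# Minimal scalar sketch of Step 1, Eq. (2.3): the explicit virtual control for
# a strict-feedback subsystem  x1_dot = f1(x1) + g1(x1)*x2.
# The plant functions below are assumed toy choices, not the paper's examples.
import numpy as np

k1 = 2.0                                    # gain in kappa_1(x1) = -k1*x1
f1 = lambda x1: x1 ** 2                     # smooth, vanishes at the origin
g1 = lambda x1: 1.0 + x1 ** 2               # smooth and invertible (never zero)

def virtual_control(x1):
    """Solve f1(x1) + g1(x1)*x2d = kappa1(x1) explicitly (affine case)."""
    return (-f1(x1) - k1 * x1) / g1(x1)

# Simulate the x1-subsystem with x2 replaced by the virtual control x2d.
dt, x1 = 1e-3, 1.5
for _ in range(5000):                       # 5 s of Euler integration
    x1 += dt * (f1(x1) + g1(x1) * virtual_control(x1))
print(f"x1 after 5 s: {x1:.2e}")            # decays towards the origin
```

Substituting the virtual control reduces the subsystem to x1_dot = -k1*x1, i.e. exactly the expected dynamics kappa1.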
Step 2: consider Equations (2.1) and (2.2) together, and construct a synthetic CLF to obtain \begin{equation} {\displaystyle \begin{array}{l}\textrm{CLF}:\kern0.4em \\[3pt] {}\kern1.6em {V}_2=\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{x}}_1+\frac{1}{2}{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)\\[3pt] {}\kern10.2em \Downarrow \\[3pt] {}\textrm{Driving}\kern0.5em {V}_2\kern0.5em \textrm{decrease}:\kern0.7em \\[3pt] {}\kern1.5em {\dot{V}}_2={{\boldsymbol{x}}_1}^{\textrm{T}}\left(\,{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1\right)+{\boldsymbol{g}}_1\left({\boldsymbol{x}}_1\right){\boldsymbol{x}}_{2d}\right)+{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\dot{\boldsymbol{x}}}_2-{\dot{\boldsymbol{x}}}_{2d}+{\boldsymbol{g}}_1{\left({\boldsymbol{x}}_1\right)}^{\textrm{T}}{\boldsymbol{x}}_1\right)\le 0\\[3pt] {}\kern10.2em \Downarrow \\{}\textrm{Algebraic}\kern0.3em \textrm{control}\kern0.5em \textrm{equation}:\kern0.4em \\[3pt] {}\kern1.1em {\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{\boldsymbol{g}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right){\boldsymbol{u}}={\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)\\[3pt] {}\kern10.1em \Downarrow \\[3pt] {}\textrm{Explicit}\kern0.34em \textrm{control}\kern0.3em \textrm{expression}:\kern0.4em \\[3pt] {}\kern1.2em {\boldsymbol{u}}={{\boldsymbol{g}}_2}^{-1}\left(-{\boldsymbol{f}}_2+{\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)\right)\end{array}} \end{equation} (2.4) where |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2)$| is the expected dynamics that drives |${\dot{V}}_2<0$| when |${\boldsymbol{x}}_2\ne{\boldsymbol{x}}_{2d}$|, and a common choice is |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2)=-{\boldsymbol{K}}_2({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d})-{\boldsymbol{g}}_1{({\boldsymbol{x}}_1)}^{\textrm{T}}{\boldsymbol{x}}_1+{\dot{\boldsymbol{x}}}_{2d}$|, where |${\boldsymbol{K}}_2$| is a positive-definite gain matrix. The resulting control law |${\boldsymbol{u}}$| achieves asymptotic stabilization since |${\dot{V}}_2<0$| except at $$\big[\begin{array}{c}{\boldsymbol{x}}_1\\[-4pt] {}{\boldsymbol{x}}_2\end{array}\!\big]=\textbf{0}$$. Now consider the plant in the general pure-feedback form \begin{equation} {\dot{\boldsymbol{x}}}_1={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)\kern0.1em \end{equation} (2.5) \begin{equation} {\dot{\boldsymbol{x}}}_2={\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\right), \end{equation} (2.6) where |${\boldsymbol{x}}_1,{\boldsymbol{x}}_2\in{\mathbb{R}}^m$| are the states, |${\boldsymbol{u}}\in{\mathbb{R}}^m$| is the control and |${\boldsymbol{f}}_i\kern0.4em (i=1, 2)$| are smooth vector fields. Proceeding as in Step 1 above, we may obtain the algebraic control equation as \begin{equation} {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)={\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right). \end{equation} (2.7) However, different from the strict-feedback system, which has an affine structure, for the pure-feedback system it may not be possible to get an explicit expression for |${\boldsymbol{x}}_{2d}$| due to the implicit non-linear form of Equation (2.7), and this confines the application of the traditional backstepping method.
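To make the difficulty concrete, the sketch below forms Equation (2.7) for an assumed toy subsystem f1(x1, x2) = x1 + x2 + 0.5*sin(x2), which is non-affine in x2. No closed-form x2d exists, so a numerical root-finder merely stands in for the missing explicit expression; the method of Section 3 will instead drive x2d to this root through augmented dynamics.

```python
# Illustration of the implicit algebraic control equation (2.7) for a
# non-affine pure-feedback subsystem.  The function f1 is an assumed toy
# example, not one of the paper's systems.
import numpy as np
from scipy.optimize import brentq

k1 = 2.0
f1     = lambda x1, x2: x1 + x2 + 0.5 * np.sin(x2)    # non-affine in x2
kappa1 = lambda x1: -k1 * x1

def ideal_virtual_control(x1):
    """Root x2d* of h1(x2d) = f1(x1, x2d) - kappa1(x1) = 0 (Eq. (2.7));
    it cannot be isolated symbolically, so it is bracketed numerically."""
    h1 = lambda x2d: f1(x1, x2d) - kappa1(x1)
    return brentq(h1, -10.0, 10.0)           # f1 is monotone in x2, unique root

x2d_star = ideal_virtual_control(1.0)
print(x2d_star, f1(1.0, x2d_star) - kappa1(1.0))      # residual is ~ 0
```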
Moreover, there always exist model perturbations and uncertain inputs in a real dynamic system, which may be formulated as \begin{equation} {\dot{\boldsymbol{x}}}_1={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{d}}_1\right)\kern0.1em \end{equation} (2.8) \begin{equation} {\dot{\boldsymbol{x}}}_2={\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},{\boldsymbol{d}}_2\right), \end{equation} (2.9) where |${\boldsymbol{d}}_1$| and |${\boldsymbol{d}}_2$| represent the disturbances from different sources. Given a control law |${\boldsymbol{u}}$|, it makes sense to investigate the stability performance of the closed-loop system under the disturbance, which falls into the category of the so-called `input/state’ stability properties (Sontag & Wang, 1995). 3. Dynamic backstepping method To gain a systematic solution for the controller design problems on general pure-feedback systems, a dynamic backstepping method is proposed. We first present the stabilizing control law and then expound the procedure to aid understanding. 3.1. Main results Assumptions that make the results rigorous are first presented. Assumption 1 The exact solution for the implicit non-linear algebraic control equation exists, e.g. there exists |${\boldsymbol{x}}_{2d}$| that rigorously nullifies Equation (2.7). Assumption 2 For the controlled domain |$\mathbb{D}$| that contains the equilibrium of |${\boldsymbol{x}}_1=\textbf{0}$| and |${\boldsymbol{x}}_2=\textbf{0}$|, the Jacobian matrices |$\frac{\partial{\boldsymbol{f}}_1}{\partial{\boldsymbol{x}}_2}$|, |$\frac{\partial{\boldsymbol{f}}_1}{\partial{\boldsymbol{x}}_{2d}}$| and |$\frac{\partial{\boldsymbol{f}}_2}{\partial{\boldsymbol{u}}}$| are invertible. Assumption 3 For the controlled domain |$\mathbb{D}$| that contains the equilibrium of |${\boldsymbol{x}}_1=\textbf{0}$| and |${\boldsymbol{x}}_2=\textbf{0}$|, |${\,\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{a}})\ne{\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{b}})$| when |${\boldsymbol{a}}\ne{\boldsymbol{b}}$|. Assumption 1 guarantees the existence of the control law that stabilizes the closed-loop system within the framework of the backstepping design. Assumption 2 is related to Assumption 1 through the implicit function theorem (Wang et al., 2000). In particular, |${\boldsymbol{x}}_{2d}$| is an augmented state in the proposed method, and the invertibility of |$\frac{\partial{\boldsymbol{f}}_1}{\partial{\boldsymbol{x}}_{2d}}$| and |$\frac{\partial{\boldsymbol{f}}_1}{\partial{\boldsymbol{x}}_2}$| is consistent when |${\boldsymbol{x}}_{2d}$| and |${\boldsymbol{x}}_2$| are close enough. Assumption 3 is used to deduce a general result for the controlled plant in this section, and it may be removed under certain conditions, which will be shown in Section 4. Theorem 1 Consider the pure-feedback plant described by Equations (2.5) and (2.6). If Assumption 1 holds, then under the following dynamic feedback control law, there are |$\underset{t\to +\infty }{\lim }{\boldsymbol{x}}_1=\textbf{0}$| and |$\underset{t\to +\infty }{\lim }{\boldsymbol{x}}_2=\textbf{0}$| in the domain |$\mathbb{D}$| where Assumptions 2 and 3 hold.
\begin{equation} \dot{\boldsymbol{u}}=-{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2-{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{-1}\boldsymbol{\alpha} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right),\kern0.8000001em {\left.{\boldsymbol{u}}\right|}_{t=0}={\boldsymbol{u}}_0, \end{equation} (3.1) where \begin{equation} \boldsymbol{\alpha} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right)=\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\right)+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}+{\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}{\boldsymbol{e}}_{{\boldsymbol{f}}_1} \end{equation} (3.2) \begin{equation} {\boldsymbol{e}}_{{\boldsymbol{f}}_1}={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)-{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right).\, \, \ \ \ \quad\qquad\qquad\qquad\qquad\qquad\qquad \end{equation} (3.3) |${\boldsymbol{x}}_{2d}$| is an augmented state and its dynamics is \begin{equation} {\dot{\boldsymbol{x}}}_{2d}=-{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1-{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{-1}\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+{\boldsymbol{x}}_1\right),\kern0.5em {\left.{\boldsymbol{x}}_{2d}\right|}_{t=0}={\boldsymbol{x}}_{2d0}. \end{equation} (3.4) |${\boldsymbol{u}}_0$| and |${\boldsymbol{x}}_{2d0}$| are the initial values for the augmented states. |${\boldsymbol{K}}_{v2}$| and |${\boldsymbol{K}}_{v1}$| are positive-definite matrices. |${\boldsymbol{h}}_1$| and |${\boldsymbol{h}}_2$| denote the implicit algebraic control equations. They are \begin{equation} {\boldsymbol{h}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)-{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)\quad\ \ \ \end{equation} (3.5) \begin{equation} {\boldsymbol{h}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right)={\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\right)-{\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right). \end{equation} (3.6) Herein |${\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)$| may be any function that satisfies the following: (i) |${{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)<0$| when |${\boldsymbol{x}}_1\ne \textbf{0}$|, and (ii) the mapping |${\boldsymbol{x}}_{2d^\ast} ={{\boldsymbol{C}}}_1({\boldsymbol{x}}_1)$| implicitly determined by nullifying Equation (3.5) is continuous and |${{\boldsymbol{C}}}_1(\textbf{0})=\textbf{0}$|.
\begin{equation}\ \ \, {\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)=\boldsymbol{\varGamma} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)-{\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)}^{-1}\boldsymbol{\beta} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right) \end{equation} (3.7) \begin{equation} \boldsymbol{\beta} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)={\boldsymbol{x}}_1+{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1-\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}+\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\kern0.2em {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right) \end{equation} (3.8) and |$\boldsymbol{\varGamma} ({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| may be arbitrary function that satisfies |${{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_2)}{\partial{\boldsymbol{x}}_2}\boldsymbol{\varGamma} ({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})<0$| when |${\boldsymbol{x}}_2\ne{\boldsymbol{x}}_{2d}$|⁠. Proof. Construct a CLF as \begin{equation} V=\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{x}}_1+\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{\boldsymbol{h}}_1+\frac{1}{2}{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}+\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}{\boldsymbol{h}}_2, \end{equation} (3.9) where |${\boldsymbol{e}}_{{\boldsymbol{f}}_1}$| is given in Equation (3.3), |${\boldsymbol{h}}_1$| and |${\boldsymbol{h}}_2$| are given by Equations (3.5) and (3.6), respectively. Differentiating Equation (3.9) renders \begin{align} \dot{V}&={{\boldsymbol{x}}_1}^{\textrm{T}}{\dot{\boldsymbol{x}}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\dot{\boldsymbol{x}}}_1+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\kern0.2em {\dot{\boldsymbol{x}}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}{\dot{\boldsymbol{x}}}_2-\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\nonumber\\ &\quad+ {{\boldsymbol{h}}_2}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\dot{\boldsymbol{x}}}_1+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\dot{\boldsymbol{x}}}_2+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\dot{\boldsymbol{u}}\right). 
\end{align} (3.10) Treat the interactions with Equations (3.3), (3.5) and (3.6); there is \begin{equation} {\begin{array}{l} \dot{V}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)+{{\boldsymbol{h}}_1}^{\textrm{T}}{\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\boldsymbol{x}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\kern0.2em {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)\\[1.25em] {}\kern1.8em +{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1\kern0.1em +{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,\,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}{\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)-\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\\[1.25em] {}\kern1.8em +{{\boldsymbol{h}}_2}^{\textrm{T}}{\left(\displaystyle\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}+{{\boldsymbol{h}}_2}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{f}}_2\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\Big)+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\dot{\boldsymbol{u}}\right).\\[1.25em] \end{array}} \end{equation} (3.11) Use Equation (3.2) and substitute in Equation (3.4); note that |$\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}=\frac{\partial{\boldsymbol{f}}_1({\boldsymbol{x}}_1,\,{\boldsymbol{x}}_{2d})}{\partial{\boldsymbol{x}}_{2d}}$|⁠. 
Then \begin{equation} {\displaystyle \begin{array}{l}\displaystyle\kern0.1em \dot{V}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)-{{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left({\boldsymbol{x}}_1+{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\kern0.2em {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)\right)\\[1.5em]\displaystyle{}\kern1.8em +{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}{\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)-\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)+{{\boldsymbol{h}}_2}^{\textrm{T}}\left(\boldsymbol{\alpha} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right)\kern0.5em +\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\dot{\boldsymbol{u}}\right)\kern0.3em \end{array}} \end{equation} (3.12) Substituting in Equation (3.7) and Equation (3.1), we may further obtain that \begin{equation} \kern0.1em \dot{V}=-W\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right)\le 0, \end{equation} (3.13) where \begin{eqnarray} W\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}}\right)&=&-{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)+{{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1-{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\boldsymbol{\varGamma} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)\nonumber\\&&+\; {{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2 .\end{eqnarray} (3.14) Since the positive-definite function |$W({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}})\kern0.1em$| vanishes only when |${\boldsymbol{x}}_i=\textbf{0},\kern0.4em {\boldsymbol{h}}_i=\textbf{0}\kern0.5em (i=1,2)$|, this proves that |$\underset{t\to +\infty }{\lim }{\boldsymbol{x}}_1=\textbf{0}$| and |$\underset{t\to +\infty }{\lim }{\boldsymbol{x}}_2=\textbf{0}$| under the control law given by Equation (3.1). Actually, we may also establish |$\underset{t\to +\infty }{\lim }{\boldsymbol{x}}_{2d}=\textbf{0}$| from the preceding proof, while we only have |$\underset{t\to +\infty }{\lim}{\boldsymbol{u}}={\boldsymbol{u}}_{eq}$|, with |${\boldsymbol{u}}_{eq}$| not necessarily equal to zero. Now we will consider the dynamic system under disturbance as given in Equations (2.8) and (2.9).
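Before doing so, a minimal numerical sketch of the undisturbed control law (3.1)-(3.6) may help fix ideas. It treats the scalar (m = 1) case for an assumed toy pure-feedback plant with f1(x1, x2) = x1 + x2 + 0.5*sin(x2) and f2(x1, x2, u) = x1*x2 + u + 0.1*tanh(u), neither of which is taken from the paper, chooses Gamma = -K2*(df1/dx2)*e_f1 (the feasible choice that reappears as Equation (3.38) in Section 3.2), and approximates the partial derivatives of h2 by central differences instead of deriving them symbolically; it is an illustrative sketch of the scheme, not a reference implementation.

```python
# Scalar (m = 1) sketch of the dynamic backstepping law (3.1)-(3.8).
# Assumed toy pure-feedback plant (not from the paper); partial derivatives
# of h2 are approximated by central differences rather than derived by hand.
import numpy as np

k1, k2, kv1, kv2 = 2.0, 2.0, 2.0, 2.0        # kappa_1/Gamma gains and K_v1, K_v2

f1 = lambda x1, x2: x1 + x2 + 0.5 * np.sin(x2)          # non-affine in x2
f2 = lambda x1, x2, u: x1 * x2 + u + 0.1 * np.tanh(u)   # non-affine in u
df1_dx2 = lambda x1, x2: 1.0 + 0.5 * np.cos(x2)         # invertible (>= 0.5)

kappa1   = lambda x1: -k1 * x1
h1       = lambda x1, x2d: f1(x1, x2d) - kappa1(x1)     # Eq. (3.5)
dh1_dx1  = lambda x1, x2d: 1.0 + k1                     # df1/dx1 + k1 for this f1
dh1_dx2d = lambda x1, x2d: df1_dx2(x1, x2d)             # = df1/dx2 at (x1, x2d)

def x2d_dot(x1, x2d):                                   # Eq. (3.4)
    J = dh1_dx2d(x1, x2d)
    return -kv1 * J * h1(x1, x2d) - (dh1_dx1(x1, x2d) * f1(x1, x2d) + x1) / J

def kappa2(x1, x2, x2d):                                # Eqs. (3.7), (3.8)
    e_f1  = f1(x1, x2) - f1(x1, x2d)                    # Eq. (3.3); de_f1/dx1 = 0 here
    gamma = -k2 * df1_dx2(x1, x2) * e_f1                # feasible Gamma, cf. Eq. (3.38)
    beta  = (x1 + dh1_dx1(x1, x2d) * h1(x1, x2d)
             - dh1_dx2d(x1, x2d) * x2d_dot(x1, x2d))
    return gamma - beta / df1_dx2(x1, x2)

def h2(x1, x2, x2d, u):                                 # Eq. (3.6)
    return f2(x1, x2, u) - kappa2(x1, x2, x2d)

def pd(fun, args, i, eps=1e-6):                         # central-difference partial
    hi, lo = list(args), list(args)
    hi[i] += eps
    lo[i] -= eps
    return (fun(*hi) - fun(*lo)) / (2.0 * eps)

# Euler simulation of the closed loop; x2d and u are the augmented states.
dt, steps = 1e-3, 10000
x1, x2 = 1.0, -0.5
x2d, u = x2, 0.0                                        # initial values x2d0, u0
for _ in range(steps):
    e_f1   = f1(x1, x2) - f1(x1, x2d)
    v2d    = x2d_dot(x1, x2d)
    dh2_du = 1.0 + 0.1 / np.cosh(u) ** 2                # = df2/du for this f2
    alpha  = (pd(h2, (x1, x2, x2d, u), 0) * f1(x1, x2)  # Eq. (3.2)
              + pd(h2, (x1, x2, x2d, u), 1) * f2(x1, x2, u)
              + pd(h2, (x1, x2, x2d, u), 2) * v2d
              + df1_dx2(x1, x2) * e_f1)
    u_dot  = -kv2 * dh2_du * h2(x1, x2, x2d, u) - alpha / dh2_du   # Eq. (3.1)
    dx1, dx2 = f1(x1, x2), f2(x1, x2, u)                # plant (2.5), (2.6)
    x1, x2   = x1 + dt * dx1, x2 + dt * dx2
    x2d, u   = x2d + dt * v2d, u + dt * u_dot
print(f"x1 = {x1:.2e}, x2 = {x2:.2e}")                  # both should approach 0
```

In the vector-valued case, the scalar divisions by dh1/dx2d, df1/dx2 and dh2/du above become multiplications by the corresponding inverse Jacobians; avoiding exactly these inversions is what the simplified, large-gain form discussed in Section 4 is about.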
Sontag & Wang (1995) have proved that the input-to-state stability is equivalent to the robust stability, which means that any disturbance bounded by a class |${\mathcal{K}}_{\infty }$| function of the states will not change the asymptotic stability of the system. Therefore, we may study the robust stability of the closed-loop system under disturbance, with the tool of local input-to-state stability (Sontag & Wang, 1996). Definition 1 Let |$\mathbb{D}$| be a domain that contains the equilibrium. For the system \begin{equation} \dot{\boldsymbol{x}}=\boldsymbol{f}\left(\boldsymbol{x},\boldsymbol{d}\right), \end{equation} (3.15) where |${\boldsymbol{x}}$| is the state and |${\boldsymbol{d}}$| is the disturbance, it is said to be local input-to-state stable if for any |${\boldsymbol{x}}({t}_0)\in \mathbb{D}$| and |$\underset{\tau \in [{t}_0,t]}{\sup}\Vert{\boldsymbol{d}}(\tau )\Vert \le{r}_{{\boldsymbol{d}}}$|, where |${r}_{{\boldsymbol{d}}}$| is a positive bound, the solution satisfies |${\boldsymbol{x}}(t)\in \mathbb{D}$| for all |$t\ge{t}_0$| and the inequality \begin{equation} \Vert{\boldsymbol{x}}(t)-{\boldsymbol{x}}_{eq}\Vert \le \beta (\Vert{\boldsymbol{x}}({t}_0)-{\boldsymbol{x}}_{eq}\Vert, t-{t}_0)+\gamma \left(\underset{\tau \in \left[{t}_0,t\right]}{\sup}\left\Vert{\boldsymbol{d}}\left(\tau \right)\right\Vert \right), \end{equation} (3.16) where |${\boldsymbol{x}}_{eq}$| is the equilibrium with |${\boldsymbol{d}}=\textbf{0}$|. |$\beta$| is a class |$\mathcal{K}\mathcal{L}$| function and |$\gamma$| is a class |$\mathcal{K}$| function. Different from the notion of input-to-state stability, which is a global concept allowing arbitrary bounded disturbances, the local input-to-state stability highlights the limited domain of the disturbance input and the neighbouring region around the equilibrium. It has been pointed out that a system is input-to-state stable if it admits an input-to-state stability CLF (Sontag & Wang, 1995). Since for a finite controlled domain |$\mathbb{D}$|, there exist class |$\mathcal{K}$| functions |${\upsilon}_1$| and |${\upsilon}_2$| such that the CLF |$V({\boldsymbol{x}})$| is bounded as (Khalil, 2002) \begin{equation} {\upsilon}_1(\kern0.1em \Vert{\boldsymbol{x}}-{\boldsymbol{x}}_{eq}\Vert )\le V({\boldsymbol{x}})\le{\upsilon}_2(\kern0.1em \Vert{\boldsymbol{x}}-{\boldsymbol{x}}_{eq}\Vert ) \end{equation} (3.17) the local input-to-state stability of a non-linear system may be determined by the following lemma. Lemma 1 Let |$\mathbb{D}$| be a finite domain that contains the equilibrium |${\boldsymbol{x}}_{eq}$| and |$V({\boldsymbol{x}})$| be a continuously differentiable function such that for all |${\boldsymbol{x}}\in \mathbb{D}$| \begin{equation} \dot{V}\le -P\left({\boldsymbol{x}}\right),\kern0.3em \forall \Vert{\boldsymbol{x}}-{\boldsymbol{x}}_{eq}\Vert >\rho \left(\kern0.1em \left\Vert{\boldsymbol{d}}\right\Vert \right), \end{equation} (3.18) where |$P({\boldsymbol{x}})$| is a continuous positive-definite function and |$\rho$| is a class |$\mathcal{K}$| function that satisfies |$\mathbb{B}=\{{\boldsymbol{x}}|\big\Vert{\boldsymbol{x}}-{\boldsymbol{x}}_{eq}\big\Vert \le \rho ({r}_{{\boldsymbol{d}}})\}\subset \mathbb{D}$|. Then the system (3.15) is local input-to-state stable. Assumption 4 The dynamics of the system described by Equations (2.8) and (2.9) are Lipschitz in the disturbance, i.e.
\begin{equation} \left\Vert{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\right\Vert \le{L}_{{{\boldsymbol{d}}}_1}\left\Vert{{\boldsymbol{d}}}_1\right\Vert \end{equation} (3.19) \begin{equation} \left\Vert{\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},{{\boldsymbol{d}}}_2\right)-{\boldsymbol{f}}_2\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\Big)\right\Vert \le{L}_{{{\boldsymbol{d}}}_2}\left\Vert{{\boldsymbol{d}}}_2\right\Vert, \end{equation} (3.20) where |${\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_2):= {\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,\textbf{0})$| and |${\boldsymbol{f}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}):= {\boldsymbol{f}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},\textbf{0})$|⁠. |${L}_{{{\boldsymbol{d}}}_1}$| and |${L}_{{{\boldsymbol{d}}}_2}$| are Lipschitz constants. Assumption 5 The functions |${\alpha}_1(\Vert{\boldsymbol{x}}_1\Vert):= -{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)$| and |${\alpha}_2(\Vert{\boldsymbol{e}}_{{\boldsymbol{f}}_1}\Vert):= -{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_2)}{\partial{\boldsymbol{x}}_2}\boldsymbol{\varGamma} ({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| are class |$\mathcal{K}$| functions. Lemma 2 (Young’s inequality (Hardy et al., 1989)) For any two vectors |${{\boldsymbol{v}}}_1$| and |${{\boldsymbol{v}}}_2$|⁠, there are (i) |${{{\boldsymbol{v}}}_1}^{\textrm{T}}{{\boldsymbol{v}}}_2\le \frac{\varepsilon }{2}{{{\boldsymbol{v}}}_1}^{\textrm{T}}{\textbf{Q}{\boldsymbol{v}}}_1+\frac{1}{2\varepsilon }{{{\boldsymbol{v}}}_2}^{\textrm{T}}{\textbf{Q}}^{-1}{{\boldsymbol{v}}}_2$|⁠, where |$\varepsilon >0$| and |$\textbf{Q}$| is a positive-definite matrix; (ii) |${{{\boldsymbol{v}}}_1}^{\textrm{T}}{{\boldsymbol{v}}}_2\le \varepsilon \xi(\Vert{{\boldsymbol{v}}}_1\Vert )+\frac{1}{\varepsilon}\ell \xi(\Vert{{\boldsymbol{v}}}_2\Vert)$|⁠, where |$\varepsilon >0$|⁠, |$\xi$| is a class |$\mathcal{K}$| function and |$\ell \xi $| is the Legendre–Fenchel transform of |$\xi$| as |$\ell \xi (a)={\int}_0^a{\Big(\frac{\textrm{d}\xi (s)}{\textrm{d}s}\Big)}^{-1}\textrm{d}s$|⁠, which is also a class |$\mathcal{K}$| function. Theorem 2 Consider the pure-feedback plant under disturbance described by Equations (2.8) and (2.9). Let |$\Omega =\underset{\overline{\boldsymbol{x}}\in \partial \mathbb{D}}{\min}(W(\overline{\boldsymbol{x}}))$|⁠, where |$W(\cdot )$| is given in Equation (3.14) and |$\overline{\boldsymbol{x}}={\big[{{\boldsymbol{x}}_1}^{\textrm{T}}\kern0.5em {{\boldsymbol{x}}_2}^{\textrm{T}}\kern0.5em {{\boldsymbol{x}}_{2d}}^{\textrm{T}}\kern0.5em {\boldsymbol{u}}^{\textrm{T}}\big]}^{\textrm{T}}$|⁠. |$\partial \mathbb{D}$| denotes the boundary of the domain |$\mathbb{D}$|⁠. 
Then under Assumptions 1–5, the closed-loop system with the dynamic feedback control law (3.1) is local input-to-state stable in |$\mathbb{D}$|⁠, which contains the equilibrium $${\overline{\boldsymbol{x}}}_{eq}={\big[\begin{array}{cccc}{\textbf{0}}^{\textrm{T}}& {\textbf{0}}^{\textrm{T}}& {\textbf{0}}^{\textrm{T}}& {{\boldsymbol{u}}_{eq}}^{\textrm{T}}\end{array}\big]}^{\textrm{T}}$$ ⁠, for the disturbance bounds |${r}_{{{\boldsymbol{d}}}_1}$| and |${r}_{{{\boldsymbol{d}}}_2}$| that satisfy \begin{equation} {\alpha}_3\kern0.1em \left({r}_{{{\boldsymbol{d}}}_1}\right)+{\alpha}_4\kern0.1em \left({r}_{{{\boldsymbol{d}}}_2}\right)\le 2\Omega, \end{equation} (3.21) where \begin{eqnarray}&\!\!\!\!{\alpha}_3\left({r}_{{{\boldsymbol{d}}}_1}\right)=2\ell{\alpha}_1\! \left({L}_{{{\boldsymbol{d}}}_1}{r}_{{{\boldsymbol{d}}}_1}\!\right)+4\ell{\alpha}_2\! \left({L}_{{{\boldsymbol{d}}}_1}{r}_{{{\boldsymbol{d}}}_1}{\sigma}_{\textrm{max}}\left(\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\right)\right)+\frac{1}{2}{L_{{{\boldsymbol{d}}}_1}}^2{r_{{{\boldsymbol{d}}}_1}}^2{\sigma}_{\textrm{max}}\!\!\left(\!{\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}}^{\textrm{T}}{\!\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}\right)}^{\!\!\!-1}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\!\!\right)\nonumber\\&+\ {L_{{{\boldsymbol{d}}}_1}}^2{r_{{{\boldsymbol{d}}}_1}}^2{\sigma}_{\textrm{max}}\left({\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}\right)}^{-1}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)\kern0.2em \end{eqnarray} (3.22) \begin{equation} {\alpha}_4\kern0.1em \left({r}_{{{\boldsymbol{d}}}_2}\right)=4\ell{\alpha}_2\kern0.1em \left({L}_{{{\boldsymbol{d}}}_2}{r}_{{{\boldsymbol{d}}}_2}{\sigma}_{\textrm{max}}\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)\right)+{L_{{{\boldsymbol{d}}}_2}}^2{r_{{{\boldsymbol{d}}}_2}}^2{\sigma}_{\textrm{max}}\left({\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}\right)}^{-1}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right). \end{equation} (3.23) |${\sigma}_{\textrm{max}}(\cdot )$| denotes the maximum singular value of matrix. Proof. 
With the CLF given by Equation (3.9), we may obtain that \begin{eqnarray}&\dot{V}=-W\left(\overline{\boldsymbol{x}}\right)+\left({{\boldsymbol{x}}_1}^{\textrm{T}}+{{\boldsymbol{h}}_1}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}+{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)\left({\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\kern0.1em \right)\kern0.1em \nonumber\\ & +\left({{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}+{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)\left({\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},{{\boldsymbol{d}}}_2\right)-{\boldsymbol{f}}_2\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\Big)\right).\end{eqnarray} (3.24) With the Young’s inequalities and Assumptions 4 and 5, we have \begin{equation} {{\boldsymbol{x}}_1}^{\textrm{T}}\left({\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\kern0.1em \right)\le \frac{1}{2}{\alpha}_1\left(\left\Vert{\boldsymbol{x}}_1\right\Vert \right)+2\ell{\alpha}_1\kern0.1em \left({L}_{{{\boldsymbol{d}}}_1}\left\Vert{{\boldsymbol{d}}}_1\right\Vert \right) \end{equation} (3.25) \begin{eqnarray} &{{\boldsymbol{h}}_1}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\left({\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\kern0.1em \right)\le \frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1\qquad\qquad\qquad\qquad\qquad\quad\nonumber\\[1em]&+\displaystyle\frac{1}{2}{L_{{{\boldsymbol{d}}}_1}}^2{\left\Vert{{\boldsymbol{d}}}_1\right\Vert}^2{\sigma}_{\textrm{max}}\left({\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}}^{\textrm{T}}{\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}\right)}^{-1}\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right) \end{eqnarray} (3.26) \begin{equation} {{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\left({\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\kern0.1em \right)\le \frac{1}{4}{\alpha}_2\left(\left\Vert{\boldsymbol{e}}_{{\boldsymbol{f}}_1}\right\Vert \right)+4\ell{\alpha}_2\kern0.1em \left({L}_{{{\boldsymbol{d}}}_1}\left\Vert{{\boldsymbol{d}}}_1\right\Vert{\sigma}_{\textrm{max}}\left(\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\right)\right)\kern0.2em \end{equation} (3.27) \begin{eqnarray}&& 
{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\left({\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{{\boldsymbol{d}}}_1\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\Big)\kern0.1em \right)\le \frac{1}{4}{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2\qquad\qquad\qquad\qquad\qquad\qquad\quad\ \ \nonumber\\[1em]&&\qquad+\ {L_{{{\boldsymbol{d}}}_1}}^2{\left\Vert{{\boldsymbol{d}}}_1\right\Vert}^2{\sigma}_{\textrm{max}}\left({\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}\right)}^{-1}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right) \end{eqnarray} (3.28) \begin{eqnarray}&& {{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\!\textrm{T}}\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\kern-0.1em \left({\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},{{\boldsymbol{d}}}_2\right)-{\boldsymbol{f}}_2\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\Big)\right)\le \kern0.3em \frac{1}{4}{\alpha}_2\left(\left\Vert{\boldsymbol{e}}_{{\boldsymbol{f}}_1}\right\Vert \right)\qquad\qquad\qquad\qquad\qquad\qquad\quad\nonumber\\[1em]&&\qquad +\ 4\ell{\alpha}_2\kern-0.1em \left({L}_{{{\boldsymbol{d}}}_2}\left\Vert{{\boldsymbol{d}}}_2\right\Vert{\sigma}_{\textrm{max}}\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)\right) \end{eqnarray} (3.29) \begin{eqnarray}&& {{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\left({\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}},{{\boldsymbol{d}}}_2\right)-{\boldsymbol{f}}_2\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\Big)\right)\le \kern0.3em \frac{1}{4}{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2\qquad\qquad\qquad\qquad\qquad\quad\nonumber\\[1em] &&\qquad+\ {L_{{{\boldsymbol{d}}}_2}}^2{\left\Vert{{\boldsymbol{d}}}_2\right\Vert}^2{\sigma}_{\textrm{max}}\left({\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}\right)}^{-1}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right). \end{eqnarray} (3.30) Then \begin{equation} \dot{V}\le -\frac{1}{2}W\left(\overline{\boldsymbol{x}}\right)+{\alpha}_3\kern0.1em \left(\left\Vert{{\boldsymbol{d}}}_1\right\Vert \right)+{\alpha}_4\kern0.1em \left(\left\Vert{{\boldsymbol{d}}}_2\right\Vert \right). 
\end{equation} (3.31) Since we may find a class |$\mathcal{K}$| function that satisfies |${\alpha}_5(\Vert \overline{\boldsymbol{x}}-{\overline{\boldsymbol{x}}}_{eq}\Vert )\le W(\overline{\boldsymbol{x}})$| in |$\mathbb{D}$|, from any |${\boldsymbol{x}}({t}_0)\in \mathbb{D}$| we have \begin{equation} \dot{V}\le -\frac{1}{4}{\alpha}_5\kern0.1em \left(\left\Vert \overline{\boldsymbol{x}}-{\overline{\boldsymbol{x}}}_{eq}\right\Vert \right),\kern0.3em \forall \left\Vert \overline{\boldsymbol{x}}-{\overline{\boldsymbol{x}}}_{eq}\right\Vert \ge 2\left({\alpha_5}^{-1}\circ \left({\alpha}_3+{\alpha}_4\right)\right)\left(\left\Vert{\boldsymbol{d}}\right\Vert \right), \end{equation} (3.32) where |${\alpha_5}^{-1}\circ ({\alpha}_3+{\alpha}_4)$| is also a class |$\mathcal{K}$| function. According to Lemma 1, this proves the local input-to-state stability. Regarding the disturbance bound, Equation (3.21) may be established easily from Equation (3.31). 3.2. Design procedure The proof of Theorem 1 already sheds light on the controller design; nevertheless, to facilitate understanding of how the control law is constructed, the concrete procedure that implements the core idea is presented below. Step 1: consider Equation (2.5) only. Regard |${\boldsymbol{x}}_2$| as the virtual control in this equation and denote it by |${\boldsymbol{x}}_{2d}$|. A CLF is constructed to obtain the algebraic control equation as \begin{equation} {\displaystyle \begin{array}{c}{V}_{11}=\displaystyle\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{x}}_1\\[.5em] {}\Downarrow \\[.25em] {}{\dot{V}}_{11}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)\le 0\\[.25em] {}\Downarrow \\[.25em] {}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)={\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right),\end{array}} \end{equation} (3.33) where |${\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)$| satisfies the conditions in Theorem 1. Since we may not be able to obtain the analytic expression of |${\boldsymbol{x}}_{2d^\ast} ={{\boldsymbol{C}}}_1({\boldsymbol{x}}_1)$| that rigorously satisfies |${\boldsymbol{f}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d^\ast} )-{\boldsymbol{\kappa}}_1({\boldsymbol{x}}_1)\equiv \textbf{0}$|, we circumvent this problem by considering the dynamics of the virtual control |${\boldsymbol{x}}_{2d}$|, hoping that |${\boldsymbol{x}}_{2d}$| will satisfy the implicit algebraic equation in an asymptotic way. To realize this, again a CLF is constructed as \begin{equation} {V}_{12}={V}_{11}+\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{\boldsymbol{h}}_1, \end{equation} (3.34) where |${\boldsymbol{h}}_1$| is given by Equation (3.5).
Differentiating |${V}_{12}$| and driving |${\dot{V}}_{12}\le 0$|⁠, we have \begin{equation} { \begin{array}{l}{\dot{V}}_{12}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\dot{\boldsymbol{x}}}_1+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\\[1.75em] {}\kern1.4em ={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)+{{\boldsymbol{h}}_1}^{\textrm{T}}\left({\boldsymbol{x}}_1+\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\dot{\boldsymbol{x}}}_1+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\le 0\\[.75em] {}\kern7.299995em \Downarrow \\[.75em] {}\kern.1em{\dot{\boldsymbol{x}}}_{2d}=-{\boldsymbol{K}}_{v1}{\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1-{\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{-1}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+{\boldsymbol{x}}_1\right).\end{array}} \end{equation} (3.35) Thus, we have the virtual control |${\boldsymbol{x}}_{2d}$| that achieves |$\underset{t\to \infty }{\lim }{\boldsymbol{h}}_1=0$| and |$\underset{t\to \infty }{\lim }{\boldsymbol{x}}_1=0$|⁠. Under Assumptions 1 and 2 that guarantee the existence of |${\boldsymbol{x}}_{2d}$|⁠, Equation (3.4) is established with an initial value of |${\boldsymbol{x}}_{2d0}$|⁠. Step 2: consider Equations (2.5) and (2.6) together, and construct a CLF, which aims to track |${\boldsymbol{x}}_{2d}$| in virtue of Assumption 3 \begin{equation} {V}_{21}={V}_{12}+\frac{1}{2}{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}, \end{equation} (3.36) where |${\boldsymbol{e}}_{{\boldsymbol{f}}_1}$| is given in Equation (3.3). 
Proceeding similarly by driving |${\dot{V}}_{21}\le 0$| gives \begin{equation} {\displaystyle \begin{array}{l}{\dot{V}}_{21}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\\[1.5em] {}\kern2.2em + {{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}{\partial{\boldsymbol{x}}_1}\kern0.2em {\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}{\dot{\boldsymbol{x}}}_2-\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\kern0.1em \\[.75em] {}\kern1.6em \le 0\\[.75em] {}\kern5.699997em \Downarrow \\[.75em] {}{\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{u}}\right)={\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right), \end{array}} \end{equation} (3.37) where |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| is given by Equation (3.7) and a feasible choice of |$\boldsymbol{\varGamma} ({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| may be \begin{equation} \boldsymbol{\varGamma} \left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)=-{\boldsymbol{K}}_2{\left(\frac{\partial{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}, \end{equation} (3.38) where |${\boldsymbol{K}}_2$| is a positive-definite gain matrix. Analogously, we consider the dynamics of the control |${\boldsymbol{u}}$| to seek its explicit solution. The CLF for the whole plant is constructed as \begin{equation} {V}_{22}={V}_{21}+\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}{\boldsymbol{h}}_2, \end{equation} (3.39) where |${\boldsymbol{h}}_2$| is given by Equation (3.6). |${V}_{22}$| is just the CLF constructed in the proof of Theorem 1. Then the control law (3.1), which ensures that |${V}_{22}$| decreases, is obtained. Also, by Assumptions 1 and 2, the existence of |${\boldsymbol{u}}$| is guaranteed with a reasonable |${\boldsymbol{u}}_0$|. Through the design procedure, it is shown that there are two Lyapunov designs in each step. This operation, along with the augmentation of the dynamics of the (virtual) controls, is used to solve the implicit non-linear algebraic control equations, i.e., |${\boldsymbol{h}}_1({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d})=\textbf{0}$| and |${\boldsymbol{h}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d},{\boldsymbol{u}})=\textbf{0}$|. With this dynamic feedback control law, the states and the implicit non-linear algebraic equations are both driven to zero. 4. Further discussion 4.1.
Simplification of controller form The control law presented in the last section addresses the general situation, and the resulting form is complex. Simplification of the controller is possible under some conditions, and this helps to alleviate the problem of `explosion of terms’. Consider the plant described by Equations (2.5) and (2.6). In the second Lyapunov design of Step 1, presuming that \begin{equation} {\boldsymbol{\kappa}}_1\left({\boldsymbol{x}}_1\right)=-{\boldsymbol{K}}_1{\boldsymbol{x}}_1, \end{equation} (4.1) where |${\boldsymbol{K}}_1$| is a positive-definite gain matrix, then with the condition that |${\boldsymbol{K}}_{v1}$| satisfies \begin{equation} \frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}>\frac{1}{2}{{\boldsymbol{K}}_1}^{-1}+\frac{1}{2}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}} \end{equation} (4.2) the dynamics of the virtual control |${\boldsymbol{x}}_{2d}$| may be set as \begin{equation} {\dot{\boldsymbol{x}}}_{2d}=-{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1 \end{equation} (4.3) and this still guarantees the stability of the subsystem. This may be verified by substituting Equation (4.3) into |${\dot{V}}_{12}$|, which yields \begin{equation} {\displaystyle \begin{array}{l}{\dot{V}}_{12}={{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\right)+{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\dot{\boldsymbol{x}}}_1+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\right)\\[1em] {}\kern1.7em =-{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\left({\textbf{1}}_{m\times m}-\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1\right){\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{h}}_1-{{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1,\end{array}} \end{equation} (4.4) where |${\textbf{1}}_{m\times m}$| is the |$m\times m$| dimensional identity matrix.
With the equality \begin{equation} {{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{h}}_1=\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}+{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}\right){\boldsymbol{h}}_1 \end{equation} (4.5) and the Young’s inequality \begin{equation} {{\boldsymbol{h}}_1}^{\textrm{T}}\left({\textbf{1}}_{m\times m}-\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1\right){\boldsymbol{x}}_1\le \frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}\left({{\boldsymbol{K}}_1}^{-1}-\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}+{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}\right)+\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}\right){\boldsymbol{h}}_1, \end{equation} (4.6) we may get \begin{equation} {\dot{V}}_{12}\le -\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1-{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1, \end{equation} (4.7) where \begin{equation} {{\boldsymbol{M}}}_1=\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}-\frac{1}{2}{{\boldsymbol{K}}_1}^{-1}-\frac{1}{2}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}. \end{equation} (4.8) Upon the gain condition given by Equation (4.2), there is |${\dot{V}}_{12}\le 0$|⁠. Furthermore, in the first Lyapunov design of Step 2 to derive |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$|⁠, presume that the plant satisfies the Lipschitz condition of \begin{equation} \left\Vert{\boldsymbol{e}}_{{\boldsymbol{f}}_1}\right\Vert =\left\Vert{\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)-{\boldsymbol{f}}_1\Big({\boldsymbol{x}}_1,{\boldsymbol{x}}_{2d}\Big)\right\Vert \le L\left\Vert{\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right\Vert, \end{equation} (4.9) which usually holds for real systems. |$L$| is the Lipschitz constant. Then with the following condition on |${\boldsymbol{K}}_2$| \begin{equation} {\boldsymbol{K}}_2>{L}^2\left({\sigma}_{\textrm{max}}\left({{\boldsymbol{K}}_1}^{-1}\right)+\frac{1}{2}{\sigma}_{\textrm{max}}\left({\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{{{\boldsymbol{M}}}_1}^{-1}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)\right){\textbf{1}}_{m\times m} \end{equation} (4.10) the term |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| that guarantees the stability may be simplified as \begin{equation} {\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)=-{\boldsymbol{K}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)+{\dot{\boldsymbol{x}}}_{2d}. \end{equation} (4.11) In this way, Assumption 3 can also be removed. 
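As a small numerical illustration of the simplified Step-1 design, the sketch below integrates the inverse-free update (4.3) with kappa1 from (4.1) for the same assumed toy subsystem f1(x1, x2) = x1 + x2 + 0.5*sin(x2) used earlier (not from the paper), with x2 replaced by x2d as in the Step-1 analysis (4.4); the gain kv1 is chosen so that the scalar version of condition (4.2) holds.

```python
# Sketch of the simplified Step-1 design (4.1)-(4.3): x2d is driven by a
# gradient-like flow that requires no inversion of dh1/dx2d.
# Same assumed toy subsystem f1 as before (not from the paper).
import numpy as np

k1, kv1 = 1.0, 12.0                         # scalar form of (4.2) here reads
                                            # kv1*J^2 > 1/(2*k1) + k1*(1 + k1)^2/2,
                                            # and J >= 0.5, so kv1 = 12 suffices
f1 = lambda x1, x2: x1 + x2 + 0.5 * np.sin(x2)
J  = lambda x2d: 1.0 + 0.5 * np.cos(x2d)    # dh1/dx2d, between 0.5 and 1.5
h1 = lambda x1, x2d: f1(x1, x2d) + k1 * x1  # Eq. (3.5) with kappa1 = -k1*x1

dt, x1, x2d = 1e-3, 1.0, -0.5
for _ in range(10000):                      # 10 s of Euler integration
    dx1  = f1(x1, x2d)                      # x2 replaced by the virtual control
    dx2d = -kv1 * J(x2d) * h1(x1, x2d)      # Eq. (4.3): inverse-free update
    x1, x2d = x1 + dt * dx1, x2d + dt * dx2d
print(f"x1 = {x1:.2e}, h1 = {h1(x1, x2d):.2e}")   # both driven towards 0
```

Conditions (4.10) and (4.19) play the analogous role for the simplification of Step 2.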
To verify the stability under the simplified laws (4.3) and (4.11), consider the CLF \begin{equation} {V}_{21}={V}_{12}+\frac{1}{2}{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right). \end{equation} (4.12) Its derivative is \begin{equation} {\displaystyle \begin{array}{l}{\dot{V}}_{21}={{\boldsymbol{x}}_1}^{\textrm{T}}{\dot{\boldsymbol{x}}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}{\dot{\boldsymbol{h}}}_1+{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\dot{\boldsymbol{x}}}_2-{\dot{\boldsymbol{x}}}_{2d}\right)\\[1em] {}\kern1.6em =-{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\left({\textbf{1}}_{m\times m}-\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1\right){\boldsymbol{x}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\displaystyle\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{h}}_1+{{\boldsymbol{h}}_1}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}{\dot{\boldsymbol{x}}}_{2d}\\[1em] \quad\qquad+\,{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left({\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+{\boldsymbol{x}}_1\right)+{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\dot{\boldsymbol{x}}}_2-{\dot{\boldsymbol{x}}}_{2d}\right).\end{array}} \end{equation} (4.13) According to the Young’s inequality, we have \begin{equation}{}\kern1.7em {{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1\le \frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1+\frac{1}{2}{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{{{\boldsymbol{M}}}_1}^{-1}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}{\boldsymbol{e}}_{{\boldsymbol{f}}_1} \end{equation} (4.14) \begin{equation} {{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{\boldsymbol{x}}_1\le \frac{1}{4}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}{{\boldsymbol{K}}_1}^{-1}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}.\quad \end{equation} (4.15) Then, with Equation (4.3) and the Lipschitz condition (4.9), \begin{equation} {\displaystyle \begin{array}{l}{\dot{V}}_{21}\le -\displaystyle\frac{1}{2}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1-{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1+{{\boldsymbol{e}}_{{\boldsymbol{f}}_1}}^{\textrm{T}}\left({\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+{\boldsymbol{x}}_1\right)+{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}\left({\dot{\boldsymbol{x}}}_2-{\dot{\boldsymbol{x}}}_{2d}\right)\\[1.5em] {}\kern1.4em \le -\displaystyle\frac{1}{4}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1-\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1-{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}{{\boldsymbol{M}}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right), \end{array}} \end{equation} (4.16) where \begin{equation}
{{\boldsymbol{M}}}_2={\boldsymbol{K}}_2-{L}^2\left({\sigma}_{\textrm{max}}\left({{\boldsymbol{K}}_1}^{-1}\right)+\frac{1}{2}{\sigma}_{\textrm{max}}\left({\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{{{\boldsymbol{M}}}_1}^{-1}\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)\right){\textbf{1}}_{m\times m}. \end{equation} (4.17) Under the gain condition (4.10), we have |${\dot{V}}_{21}\le 0$|⁠. In particular, we may also use the first-order approximation of |${\boldsymbol{e}}_{{\boldsymbol{f}}_1}\approx \Big(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\Big)({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d})$| in Equation (4.16) to derive \begin{equation} {\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right)=-{\boldsymbol{K}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)-{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}\left({\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_1+{\boldsymbol{x}}_1\right)+{\dot{\boldsymbol{x}}}_{2d}, \end{equation} (4.18) which is more complex but may require a smaller |${\boldsymbol{K}}_2$|⁠. For the second Lyapunov design of Step 2, under the condition that \begin{eqnarray} &&\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}>\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}+\frac{1}{2}\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}+{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}\right)+\frac{1}{2}{{\boldsymbol{M}}}_3{{{\boldsymbol{M}}}_1}^{-1}{{{\boldsymbol{M}}}_3}^{\textrm{T}}\nonumber\\&&\qquad \qquad \qquad \qquad+\ \frac{L^2}{2}\frac{\sigma_{\textrm{min}}\left({{\boldsymbol{M}}}_2\right)}{\sigma_{\textrm{max}}\left({{\boldsymbol{M}}}_2\right)}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{{{\boldsymbol{M}}}_2}^{-1}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}+\frac{1}{2}{{\boldsymbol{M}}}_4, \end{eqnarray} (4.19) where |${\sigma}_{\textrm{min}}(\cdot)$| denotes the minimum singular value of matrix, and \begin{equation} {{\boldsymbol{M}}}_3=\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{}\kern1.93em\qquad\qquad\qquad\qquad\quad \end{equation} (4.20) \begin{equation} {{\boldsymbol{M}}}_4={{{\boldsymbol{M}}}_2}^{-1}+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_2{{{\boldsymbol{M}}}_2}^{-1}{\boldsymbol{K}}_2{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}-\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_2{{{\boldsymbol{M}}}_2}^{-1}+{{{\boldsymbol{M}}}_2}^{-1}{\boldsymbol{K}}_2{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}\right) \end{equation} (4.21) the final control law may be 
set as \begin{equation} \dot{\boldsymbol{u}}=-{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2, \end{equation} (4.22) which may guarantee the stability of the closed-loop system as well. Consider the CLF \begin{equation} V={V}_{21}+\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}{\boldsymbol{h}}_2, \end{equation} (4.23) noting that |${V}_{21}$| already contains the quadratic term in |${\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}$|. With Equations (4.3) and (4.11), it may be derived that \begin{equation} \dot{V}\kern0.3em \le -\frac{1}{4}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1-\frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1-{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}{{\boldsymbol{M}}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)+{{\boldsymbol{h}}_2}^{\textrm{T}}\left({\dot{\boldsymbol{h}}}_2+\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)\right). \end{equation} (4.24) With Equations (3.3), (3.5), (3.6), (4.3), (4.11) and (4.22), we have \begin{eqnarray} &&{\dot{\boldsymbol{h}}}_2=\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{h}}_1-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{h}}_2-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1\nonumber\\&&-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_{2d}}{\boldsymbol{K}}_{v1}{\left(\frac{\partial{\boldsymbol{h}}_1}{\partial{\boldsymbol{x}}_{2d}}\right)}^{\textrm{T}}{\boldsymbol{h}}_1-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}{\boldsymbol{h}}_2. \end{eqnarray} (4.25) Use the equality \begin{equation} {{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{h}}_2=\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}+{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}\right){\boldsymbol{h}}_2 \end{equation} (4.26) and the Young’s inequalities \begin{equation} -{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\boldsymbol{x}}_1\le \frac{1}{4}{{\boldsymbol{x}}_1}^{\textrm{T}}{\boldsymbol{K}}_1{\boldsymbol{x}}_1+{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_2 \end{equation} (4.27) \begin{equation}{}\kern1.3em\qquad{{\boldsymbol{h}}_2}^{\textrm{T}}{{\boldsymbol{M}}}_3{\boldsymbol{h}}_1\le \frac{1}{2}{{\boldsymbol{h}}_1}^{\textrm{T}}{{\boldsymbol{M}}}_1{\boldsymbol{h}}_1+\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}{{\boldsymbol{M}}}_3{{{\boldsymbol{M}}}_1}^{-1}{{{\boldsymbol{M}}}_3}^{\textrm{T}}{\boldsymbol{h}}_2 \end{equation} (4.28)
\begin{equation} {{\boldsymbol{h}}_2}^{\textrm{T}}\left({\textbf{1}}_{m\times m}-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}{\boldsymbol{K}}_2\right)\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)\le \frac{1}{2}{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}{{\boldsymbol{M}}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)+\frac{1}{2}{{\boldsymbol{h}}_2}^{\textrm{T}}{{\boldsymbol{M}}}_4{\boldsymbol{h}}_2 \end{equation} (4.29) \begin{equation} {{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{e}}_{{\boldsymbol{f}}_1}\le \frac{L^2}{2}\frac{\sigma_{\textrm{min}}\left({{\boldsymbol{M}}}_2\right)}{\sigma_{\textrm{max}}\left({{\boldsymbol{M}}}_2\right)}{{\boldsymbol{h}}_2}^{\textrm{T}}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{{{\boldsymbol{M}}}_2}^{-1}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}{\boldsymbol{h}}_2+\frac{1}{2}{\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right)}^{\textrm{T}}{{\boldsymbol{M}}}_2\left({\boldsymbol{x}}_2-{\boldsymbol{x}}_{2d}\right). \end{equation} (4.30) Note that the Lipschitz condition (4.9) is also used in Equation (4.30). This yields \begin{equation} \dot{V}\le -{{\boldsymbol{h}}_2}^{\textrm{T}}\left(\!\!\!\begin{array}{l}\displaystyle\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}{\boldsymbol{K}}_{v2}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{u}}}\right)}^{\textrm{T}}-\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{\boldsymbol{K}}_1{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}-\frac{1}{2}\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}+{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_2}\right)}^{\textrm{T}}\right)\\[2em] {}\displaystyle-\frac{1}{2}{{\boldsymbol{M}}}_3{{{\boldsymbol{M}}}_1}^{-1}{{{\boldsymbol{M}}}_3}^{\textrm{T}}-\frac{L^2}{2}\frac{\sigma_{\textrm{min}}\left({{\boldsymbol{M}}}_2\right)}{\sigma_{\textrm{max}}\left({{\boldsymbol{M}}}_2\right)}\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}{{{\boldsymbol{M}}}_2}^{-1}{\left(\frac{\partial{\boldsymbol{h}}_2}{\partial{\boldsymbol{x}}_1}\right)}^{\textrm{T}}-\frac{1}{2}{{\boldsymbol{M}}}_4\!\!\!\end{array}\right){\boldsymbol{h}}_2. \end{equation} (4.31) Under the gain condition given by Equation (4.19), it is guaranteed that |$V$| given by Equation (4.23) is a valid CLF for the closed-loop system. So, with large control gains, the form of the control law may be greatly simplified, even to the inverse-free form. In addition, the input-to-state stability property may also be established as before.
4.2. One treatable singularity case Recall that Assumption 2 was given as a precondition before presenting Theorem 1. For some special cases where Assumption 2 fails, the method may still be applicable. Consider a scalar illustrative example \begin{equation} {\dot{x}}_1={x}_1\left({x}_1+u+{u}^3\right), \end{equation} (4.32) where |${x}_1$| is the state and |$u$| is the control. The algebraic control equation obtained routinely is \begin{equation} h={x}_1\left({x}_1+u+{u}^3\right)-{\kappa}_1\left({x}_1\right). \end{equation} (4.33) Clearly, Assumption 2 is not satisfied and there are infinitely many solutions for |$u$| when |${x}_1=0$|. However, in such a case, a continuous mapping |$u^\ast =C({x}_1)$|, determined by Equation (4.33), may still exist. Thus, this problem is solvable.
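A small numerical sketch may make this degeneracy visible. Applying the dynamic root-finding update directly to Equation (4.33), with an assumed |${\kappa}_1({x}_1)=-{K}_1{x}_1$| and illustrative gains and step size, gives an update whose effective gain |$\partial h/\partial u={x}_1(1+3{u}^2)$| vanishes together with |${x}_1$|, so the adaptation of |$u$| slows down and effectively freezes near the origin; the rescaling introduced next removes the factor |${x}_1$|.

```python
# Illustrative sketch of the scalar example (4.32)-(4.33); kappa_1, gains and dt are assumptions.
K1, Kv, dt = 1.0, 2.0, 1e-3

def h(x1, u):
    # Algebraic control equation (4.33) with the assumed kappa_1(x1) = -K1*x1.
    return x1 * (x1 + u + u**3) + K1 * x1

def dh_du(x1, u):
    # dh/du = x1*(1 + 3u^2): vanishes at x1 = 0, which is where Assumption 2 fails.
    return x1 * (1.0 + 3.0 * u**2)

x1, u = 0.5, 0.0
for _ in range(int(10.0 / dt)):
    u_dot = -Kv * dh_du(x1, u) * h(x1, u)   # raw dynamic root-finding update
    x1_dot = x1 * (x1 + u + u**3)           # plant (4.32)
    x1, u = x1 + dt * x1_dot, u + dt * u_dot

# As x1 -> 0, dh/du -> 0 and the update of u slows to a crawl; working with h/x1
# instead (the transformation given next) keeps the update gain bounded away from zero.
print(f"x1 = {x1:.3e}, u = {u:.4f}, dh/du = {dh_du(x1, u):.3e}")
```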
Transforming the former algebraic Equation (4.33) into \begin{equation} \tilde{h}=\frac{1}{x_1}h=\left({x}_1+u+{u}^3\right)-\frac{\kappa_1\left({x}_1\right)}{x_1}, \end{equation} (4.34) a definite control may be determined under the dynamic backstepping framework. Now consider the general case of the |$i$|th-level dynamics in the pure-feedback system \begin{equation} {\dot{\boldsymbol{x}}}_i={\boldsymbol{f}}_i\left({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{i+1}\right). \end{equation} (4.35) If |${\boldsymbol{f}}_i(\textbf{0},\dots, \textbf{0},{\boldsymbol{x}}_{i+1})=\textbf{0}$| holds for arbitrary values of the virtual control |${\boldsymbol{x}}_{i+1}$|, then, to avoid the singularity, the original algebraic control equation, i.e., |${\boldsymbol{h}}_i={\boldsymbol{f}}_i\kern0.1em ({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{(i+1)d})-{\boldsymbol{\kappa}}_i({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{2d},\dots, {\boldsymbol{x}}_{id})$|, may be reformulated as \begin{equation} {\tilde{\boldsymbol{h}}}_i={\boldsymbol{R}}\left({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i\right){\boldsymbol{h}}_i, \end{equation} (4.36) which satisfies |$\det \big(\frac{\partial{\tilde{\boldsymbol{h}}}_i}{\partial{\boldsymbol{x}}_{(i+1)d}}\big)\ne 0$| on the controlled domain |$\mathbb{D}$| for a suitably chosen matrix-valued function |${\boldsymbol{R}}({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i)$|. Note that for the virtual control |${\boldsymbol{x}}_{(i+1)d}$|, the mapping |${\boldsymbol{x}}_{(i+1)d}^{\ast} ={{\boldsymbol{C}}}_i({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{2d},\dots, {\boldsymbol{x}}_{id})$| implicitly determined by nullifying Equation (4.36) should be continuous and satisfy |${{\boldsymbol{C}}}_i(\textbf{0},\dots, \textbf{0})=\textbf{0}$|. Meanwhile, in the Lyapunov design, Assumption 3 may be met with \begin{equation} {\tilde{\boldsymbol{f}}}_i\kern0.1em \left({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{i+1}\right)={\boldsymbol{R}}\left({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i\right){\boldsymbol{f}}_i\kern0.1em \left({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{i+1}\right). \end{equation} (4.37) Then, |${\tilde{\boldsymbol{f}}}_i\kern0.1em ({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{a}})\ne{\tilde{\boldsymbol{f}}}_i\kern0.1em ({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{b}})$| when |${\boldsymbol{a}}\ne{\boldsymbol{b}}$|. With these modifications, the CLFs used to derive the control law are \begin{equation} {V}_{i2}={V}_{i1}+\frac{1}{2}{{\tilde{\boldsymbol{h}}}_i}^{\textrm{T}}{\tilde{\boldsymbol{h}}}_i \end{equation} (4.38) and \begin{equation} {V}_{\left(i+1\right)1}={V}_{i2}+\frac{1}{2}{{\boldsymbol{e}}_{{\tilde{\boldsymbol{f}}}_i}}^{\textrm{T}}{\boldsymbol{e}}_{{\tilde{\boldsymbol{f}}}_i}, \end{equation} (4.39) where |${\boldsymbol{e}}_{{\tilde{\boldsymbol{f}}}_i}={\tilde{\boldsymbol{f}}}_i\kern0.1em ({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{i+1})-{\tilde{\boldsymbol{f}}}_i\kern0.1em ({\boldsymbol{x}}_1,\dots, {\boldsymbol{x}}_i,{\boldsymbol{x}}_{(i+1)d})$|.
4.3. System containing strict-feedback dynamics We have considered the problem where analytic expressions of all the (virtual) controls are unavailable; nevertheless, the proposed method may be extended to the situation where explicit expressions of some of the (virtual) controls can be obtained.
Variations of this case are abundant, but the key is to employ the dynamic backstepping design wherever it is necessary. For example, consider a system of the form \begin{equation} {\dot{\boldsymbol{x}}}_1={\boldsymbol{f}}_1\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right) \end{equation} (4.40) \begin{equation} {\dot{\boldsymbol{x}}}_2={\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{{\boldsymbol{g}}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right){\boldsymbol{u}}. \end{equation} (4.41) This plant contains strict-feedback dynamics from which the actual control |${\boldsymbol{u}}$| may be obtained directly. In the backstepping controller design procedure, Step 1 is the same as that in Section 3.2 and contains two Lyapunov designs. In Step 2, the CLF is the same as that in Equation (3.36), and through the algebraic control equation \begin{equation} {\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{{\boldsymbol{g}}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right){\boldsymbol{u}}={\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right), \end{equation} (4.42) we may directly obtain the control law as \begin{equation} {\boldsymbol{u}}={{{\boldsymbol{g}}}_2}^{-1}\left(-{\boldsymbol{f}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2\right)+{\boldsymbol{\kappa}}_2\left({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d}\right) \right), \end{equation} (4.43) where |${\boldsymbol{\kappa}}_2({\boldsymbol{x}}_1,{\boldsymbol{x}}_2,{\boldsymbol{x}}_{2d})$| is the same as that in Equation (3.7). In this step, only one Lyapunov design is required. The proof of closed-loop stability is similar to that in Section 3.1, obtained by nullifying the term regarding the implicit algebraic control equation arising from the second step.
4.4. Tracking problem In the preceding sections we focused on the stabilization problem; the proposed approach is also applicable to the tracking problem. For the general pure-feedback plant given by Equations (2.5) and (2.6), presume that the reference signal is |${\boldsymbol{r}}$| and the output function is \begin{equation} {\boldsymbol{y}}={\boldsymbol{x}}_1. \end{equation} (4.44) Then, with extra consideration of the effect of the reference signal, the design of the tracking controller, which may achieve asymptotic tracking, is similar to that of the stabilization controller. The difference is that the CLF constructed during the design procedure is \begin{equation} {V}_{11}=\frac{1}{2}{\left({\boldsymbol{x}}_1-{\boldsymbol{r}}\right)}^{\textrm{T}}\left({\boldsymbol{x}}_1-{\boldsymbol{r}}\right). \end{equation} (4.45) Thus, the resulting (virtual) controls, i.e., |${\boldsymbol{x}}_{2d}$| and |${\boldsymbol{u}}$|, are similar to those of the stabilization controller, yet with the dynamics of |${\boldsymbol{r}}$| included. Through the Lyapunov principle, |$\underset{t\to \infty }{\lim}({\boldsymbol{y}}-{\boldsymbol{r}})=0$| can be guaranteed if the dynamics of |${\boldsymbol{r}}$| is completely known. If not, the tracking error remains bounded owing to the local input-to-state stability property.
5. Illustrative examples 5.1. Example 1: stabilization of pure-feedback system A non-linear pure-feedback system (Wang et al., 2006; Zou & Hou, 2008) is considered. The dynamic equations are $$ { \begin{array}{l}{\dot{x}}_1={x}_1+{x}_2+\displaystyle\frac{x_2^3}{5}\\[1em] {}{\dot{x}}_2={x}_1{x}_2+u+\displaystyle\frac{u^3}{7},\end{array}} $$ where |${x}_1$| and |${x}_2$| are the states and |$u$| is the control.
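Before stating the control objective and the exact implicit control equations used in the reported simulations (given next), a minimal closed-loop sketch may help fix ideas. It is not the controller behind Figs 1–3: it replaces the full design of Section 3.2 with the simplified inverse-free laws (4.1), (4.3), (4.11) and (4.22) of Section 4.1, and the gain values, step size and horizon are assumptions.

```python
# Sketch of Example 1 under the simplified inverse-free laws of Section 4.1 (assumed gains).
def f1(x1, x2):
    return x1 + x2 + x2**3 / 5.0        # first-level pure-feedback dynamics

def f2(x1, x2, u):
    return x1 * x2 + u + u**3 / 7.0     # second-level dynamics, non-affine in u

K1, K2, Kv1, Kv2 = 1.0, 2.0, 2.0, 2.0   # illustrative gains (Section 4.1 nominally asks for large K2, Kv1, Kv2)
dt, T = 1e-3, 20.0
x1, x2, x2d, u = 0.5, 0.0, 0.0, 0.0     # plant states and augmented (virtual) control states

for _ in range(int(T / dt)):
    h1 = f1(x1, x2d) + K1 * x1                      # Step-1 implicit equation, kappa_1 = -K1*x1
    x2d_dot = -Kv1 * (1.0 + 0.6 * x2d**2) * h1      # (4.3); dh1/dx2d = 1 + 3*x2d^2/5
    kappa2 = -K2 * (x2 - x2d) + x2d_dot             # simplified kappa_2, cf. (4.11)
    h2 = f2(x1, x2, u) - kappa2                     # Step-2 implicit equation
    u_dot = -Kv2 * (1.0 + 3.0 * u**2 / 7.0) * h2    # inverse-free law (4.22); dh2/du = 1 + 3*u^2/7
    x1_dot, x2_dot = f1(x1, x2), f2(x1, x2, u)      # plant
    x1, x2 = x1 + dt * x1_dot, x2 + dt * x2_dot     # Euler integration
    x2d, u = x2d + dt * x2d_dot, u + dt * u_dot

print(f"x1={x1:.4f}, x2={x2:.4f}, x2d={x2d:.4f}, u={u:.4f}")
```

The gains here differ from the unit gains reported below, because the simplified laws trade the exact cancellations of the full design for sufficiently large gains.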
The control objective is to stabilize the states to the origin. According to the dynamic backstepping controller design procedure in Section 3.2, the implicit non-linear algebraic control equations derived are $$ {h}_1={x}_1+{x}_{2d}+\displaystyle\frac{x_{2d}^3}{5}+{K}_1{x}_1 $$ \begin{eqnarray} {h}_2={x}_1{x}_2+u+\frac{u^3}{7}+{K}_2\left(1+0.6{x_2}^2\right)\left(\begin{array}{l}\left({x}_2+0.2{x_2}^3\right)\\[.5em] {}-\left({x}_{2d}+0.2{x_{2d}}^3\right)\end{array}\right)\nonumber\\[.5em]\nonumber+\left(\frac{1}{1+0.6{x_2}^2}\right)\left(\begin{array}{l}\left(2-{K}_1-{K_1}^2\right){x}_1+2\left(1+{K}_1\right){h}_1\\[.5em] {}+{K}_{v1}{\left(1+0.6{x_{2d}}^2\right)}^2{h}_1\end{array}\right). \end{eqnarray} The control gains (in scalar form) |${K}_1$|, |${K}_2$|, |${K}_{v1}$| and |${K}_{v2}$| were all set to 1. The initial states of the system were |${[{x}_1,\kern0.3em {x}_2]}^{\textrm{T}}\big|_{t=0}={[0.5,\kern0.3em 0]}^{\textrm{T}}$|, and the initial conditions for the augmented dynamics were set to |${\big.{x}_{2d}\big|}_{t=0}=0$| and |${\big.u\big|}_{t=0}=0$|. The simulation results for the states are plotted in Fig. 1. It is shown that the states are stabilized to the origin as expected. In Fig. 2, the virtual control |${x}_{2d}$| and the control |$u$| are plotted; they return to zero after a period of oscillation. The profiles of the implicit algebraic control equations |${h}_1$| and |${h}_2$| are presented in Fig. 3, showing that they approach zero rapidly.
Fig. 1. The state profiles in Example 1.
Fig. 2. The virtual control and control profiles in Example 1.
Fig. 3. The implicit algebraic control equation profiles in Example 1.
5.2. Example 2: stabilization of system containing strict-feedback dynamics A simplified jet engine model from Krstic et al. (1995), which includes strict-feedback dynamics, is considered. The dynamic equations are $$ {\displaystyle \begin{array}{l}\dot{R}=-\sigma{R}^2-\sigma R\left(2\phi +{\phi}^2\right)\\[.5em] {}\dot{\phi}=-\frac{3}{2}{\phi}^2-\frac{1}{2}{\phi}^3-3 R\phi -3R-\psi \\[.5em] {}\dot{\psi}=-u\end{array}}, $$ where |$R$|, |$\phi$| and |$\psi$| are the states and |$u$| is the control. The parameter |$\sigma$| is 0.1. For this plant, the dynamics regarding |$R$| is in the pure-feedback form while the dynamics regarding |$\phi$| and |$\psi$| are in the strict-feedback form. Thus, within the proposed controller design scheme, only the first step includes two Lyapunov designs. In Krstic et al. (1995), the controller is elegantly designed by exploiting the input-to-state stability of the |$R$| dynamics; here the problem is addressed within a unified framework. This example does not aim to show a performance advantage of the proposed method; it is used to demonstrate the capacity to deal with a model that includes pure-feedback dynamics. In designing the controller, |${\kappa}_1(R)$| was set to |${\kappa}_1(R)=-{K}_1{R}^3$| so that the augmented state |${\phi}_d$| continuously approaches zero as |$R$| approaches zero.
Also, the treatable singularity was avoided with the modified implicit algebraic control equation, i.e. $$ {\tilde{h}}_1=\frac{1}{R}{h}_1=-\sigma R-\sigma \left(2{\phi}_d +{{\phi}_d}^2\right)+{K}_1{R}^2. $$ The scalar control gains |${K}_1$|, |${K}_2$|, |${K}_3$| and |${K}_{v1}$| were all set to 1. The initial states of the system were |${[R,\kern0.3em \phi,\kern0.3em \psi]}^{\textrm{T}}\big|_{t=0}={[2,\kern0.3em 1,\kern0.3em -1]}^{\textrm{T}}$|, and the initial condition for the augmented dynamics was set to |${\big.{\phi}_d\big|}_{t=0}=0$|. Figure 4 gives the profiles of the states, and all of them are stabilized to the origin. The profiles of the virtual controls |${\phi}_d$| (an augmented state) and |${\psi}_d$| (a function of |$R$|, |$\phi$| and |${\phi}_d$|) and the control |$u$| (a function of |$R$|, |$\phi$|, |$\psi$| and |${\phi}_d$|) are presented in Fig. 5. At the initial time, the values of |${\psi}_d$| and |$u$| are large, while |${\phi}_d$| increases rapidly from zero, its prescribed initial condition; all of them tend to zero over time. The profile of the implicit non-linear equation |${\tilde{h}}_1$| is given in Fig. 6; its value approaches zero as expected.
Fig. 4. The state profiles in Example 2.
Fig. 5. The virtual control and control profiles in Example 2.
Fig. 6. The implicit algebraic control equation profile in Example 2.
Fig. 7. The tracking results of the reference signal in Example 3.
Fig. 8. The virtual control and control profiles in Example 3.
Fig. 9. The implicit algebraic control equation profiles in Example 3.
5.3. Example 3: signal tracking Again, consider the pure-feedback plant given in Example 1. The tracking problem is defined in Wang et al. (2006) and Zou & Hou (2008), with the Van der Pol oscillator taken as the reference model: $$ \frac{\textrm{d}}{\textrm{d}t}\left[\begin{array}{c}r\\{}\dot{r}\end{array}\right]=\left[\begin{array}{c}\dot{r}\\{}-r+0.2\left(1-{r}^2\right)\dot{r}\end{array}\right]. $$ This model yields a limit-cycle trajectory for any initial state except |${[r,\kern0.3em \dot{r}]}^{\textrm{T}}={[0,\kern0.3em 0]}^{\textrm{T}}$|. The output of the controlled system is |${x}_1$|, and the control objective is to make |${x}_1$| follow the reference signal |$r$|. The dynamic backstepping controller is designed similarly to that in Example 1, with extra consideration of the reference signal.
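Since the control equations given next involve both |$\dot{r}$| and |$\ddot{r}$|, the reference model has to be propagated alongside the plant. The following is a minimal sketch of such a reference generator (the step size and horizon are assumptions), with |$\ddot{r}$| obtained directly from the Van der Pol equation.

```python
# Minimal Van der Pol reference generator; dt and T are assumptions.
def vdp_acc(r, r_dot):
    # r_ddot = -r + 0.2*(1 - r^2)*r_dot, as in the reference model above.
    return -r + 0.2 * (1.0 - r**2) * r_dot

dt, T = 1e-3, 20.0
r, r_dot = 0.5, 0.0                  # reference initial conditions used in the example

reference = []                       # stores (r, r_dot, r_ddot) for use by the controller
for _ in range(int(T / dt)):
    r_ddot = vdp_acc(r, r_dot)
    reference.append((r, r_dot, r_ddot))
    r, r_dot = r + dt * r_dot, r_dot + dt * r_ddot   # Euler propagation

print(f"final reference state: r = {r:.3f}, r_dot = {r_dot:.3f}")
```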
Now the implicit non-linear algebraic control equations are $$ {h}_1={x}_1+{x}_{2d}+\frac{x_{2d}^3}{5}+{K}_1\left({x}_1-r\right)-\dot{r} $$ \begin{eqnarray*} &{h}_2={x}_1{x}_2+u+\frac{u^3}{7}+{K}_2\left(1+0.6{x_2}^2\right)\left(\begin{array}{l}\left({x}_2+0.2{x_2}^3\right)\\[.5em] {}-\left({x}_{2d}+0.2{x_{2d}}^3\right)\end{array}\right)\\[1.25em] &\qquad\qquad\quad+\left(\displaystyle\frac{1}{1+0.6{x_2}^2}\right)\left(\begin{array}{l}\left(2-{K}_1-{K_1}^2\right){x}_1+2\left(1+{K}_1\right){h}_1\\[.25em] {}+{K}_{v1}{\left(1+0.6{x_{2d}}^2\right)}^2{h}_1-2r-{K}_1\dot{r}-\ddot{r}\end{array}\right). \end{eqnarray*} The control gains |${K}_1$|, |${K}_2$|, |${K}_{v1}$| and |${K}_{v2}$| were again all set to 1. The initial conditions for the reference model and the controlled system were |${[r,\kern0.3em \dot{r}]}^{\textrm{T}}\big|_{t=0}={[0.5,\kern0.3em 0]}^{\textrm{T}}$| and |${[{x}_1,\kern0.3em {x}_2]}^{\textrm{T}}\big|_{t=0}={[0.5,\kern0.3em 0]}^{\textrm{T}}$|, respectively. The initial conditions for the augmented dynamics were again |${\big.{x}_{2d}\big|}_{t=0}=0$| and |${\big.u\big|}_{t=0}=0$|. Figure 7 plots the output trajectory and the tracking error, showing that the transient error vanishes rapidly and nearly exact tracking is achieved. The corresponding virtual control |${x}_{2d}$| and control |$u$| for the periodic reference are presented in Fig. 8; they also vary periodically. Figure 9 gives the profiles of |${h}_1$| and |${h}_2$|; both approach zero quickly.
6. Conclusion A general backstepping controller design framework is developed for the pure-feedback system. The idea is to introduce new dynamics to describe the (virtual) control and then solve, from a control-based view, the implicit non-linear algebraic equations arising from the non-affine form of the dynamics, which consequently requires one more Lyapunov design in each step. The effectiveness of the dynamic root-finding is demonstrated and the control objective may be well achieved. In addition, an analysis shows that with large gains, the control law may be simplified or even applied in an inverse-free form, which will alleviate the problem of `explosion of terms’. This paper provides a solution to the controller design problem for general pure-feedback systems, and it may be extended to address parameterized uncertainties, as in the adaptive backstepping methods studied for strict-feedback systems. Moreover, control saturation and time delay may also be investigated. These problems will be studied in the future to further complete this method.
Funding National Defence Key Laboratory Fund (614222003060717).
References
Bridges, M. M., Dawson, D. M. & Abdallah, C. T. (1995) Control of rigid-link, flexible-joint robots: a survey of backstepping approaches. J. Robot. Syst., 12, 199–216.
Chehardoli, H. & Eghtesad, M. (2015) Robust adaptive control of switched non-linear systems in strict feedback form with unknown time delay. IMA J. Math. Control Inform., 32, 761–779.
Farrell, J., Sharma, M. & Polycarpou, M. (2005) Backstepping based flight control with adaptive function approximation. J. Guid. Control Dyn., 28, 1089–1102.
Freeman, R. & Praly, L.
(1998) Integrator backstepping for bounded controls and control rates. IEEE Trans. Autom. Control, 43, 258–262.
Ge, S. S. & Wang, C. (2002) Adaptive NN control of uncertain nonlinear pure-feedback systems. Automatica J. IFAC, 38, 671–682.
Guo, F., Liu, Y. & Luo, F. (2017) Adaptive stabilisation of a flexible riser by using the Lyapunov-based barrier backstepping technique. IET Control Theory Appl., 11, 2252–2260.
Hardy, G., Littlewood, J. E. & Polya, G. (1989) Inequalities, 2nd edn. Cambridge, UK: Cambridge University Press.
Hovakimyan, N., Lavretsky, E. & Cao, C. Y. (2008) Dynamic inversion for multivariable non-affine-in-control systems via time-scale separation. Internat. J. Control, 81, 1960–1967.
Kanellakopoulos, I., Kokotovic, P. V. & Morse, A. S. (1991) Systematic design of adaptive controller for feedback linearizable systems. IEEE Trans. Autom. Control, 36, 1241–1253.
Khalil, H. K. (2002) Nonlinear Systems, 3rd edn. New Jersey: Prentice Hall.
Krstic, M., Kanellakopoulos, I. & Kokotovic, P. V. (1995) Nonlinear and Adaptive Control Design. New York: Wiley.
Lee, T. & Kim, Y. (2001) Nonlinear adaptive flight control using backstepping and neural networks controller. J. Guid. Control Dyn., 24, 675–682.
Li, J. P., Yang, Y. N., Hua, C. C. & Guan, X. P. (2017) Fixed-time backstepping control design for high-order strict-feedback non-linear systems via terminal sliding mode. IET Control Theory Appl., 11, 1184–1193.
Lu, H., Liu, C. J., Coombes, M., Guo, L. & Chen, W. H. (2016) Online optimisation-based backstepping control design with application to quadrotor. IET Control Theory Appl., 10, 1601–1611.
Melhem, K. & Wang, W. (2009) Global output tracking control of flexible joint robots via factorization of the manipulator mass matrix. IEEE Trans. Robot., 25, 428–437.
Na, J., Ren, X. M., Shang, C. & Guo, Y. (2012) Adaptive neural network predictive control for nonlinear pure feedback systems with input delay. J. Process Control, 22, 194–206.
Sonneveldt, L., Chu, Q. P. & Mulder, J. A. (2007) Nonlinear flight control design using constrained adaptive backstepping. J. Guid. Control Dyn., 30, 322–336.
Sontag, E. D. & Wang, Y. (1995) On characterizations of input-to-state stability property. Systems Control Lett., 24, 351–359.
Sontag, E. D. & Wang, Y. (1996) New characterizations of input to state stability. IEEE Trans. Autom. Control, 41, 1283–1294.
Swaroop, D., Hedrick, J. K., Yip, P. P. & Gerdes, J. C. (2000) Dynamic surface control for a class of nonlinear systems. IEEE Trans. Autom. Control, 45, 1893–1899.
Tee, K. P., Ge, S. S. & Tay, E. H.
(2009) Barrier Lyapunov functions for the control of output-constrained nonlinear systems. Automatica J. IFAC, 45, 918–927.
Tong, S. C., Li, Y. M. & Shi, P. (2012) Observer-based adaptive fuzzy backstepping output feedback control of uncertain MIMO pure-feedback nonlinear systems. IEEE Trans. Fuzzy Syst., 20, 771–785.
Wang, C., Hill, D. J., Ge, S. S. & Chen, G. (2006) An ISS-modular approach for adaptive neural control of pure-feedback systems. Automatica J. IFAC, 42, 723–731.
Wang, D. & Huang, J. (2002) Adaptive neural network control for a class of uncertain nonlinear systems in pure-feedback form. Automatica J. IFAC, 38, 1365–1372.
Wang, H., Luo, J. S. & Zhu, J. M. (2000) Advanced Mathematics. Changsha, China: National University of Defense Technology Press.
Wang, M., Ge, S. S. & Hong, K. S. (2010) Approximation-based adaptive tracking control of pure-feedback nonlinear systems with multiple unknown time-varying delays. IEEE Trans. Neural Netw., 21, 1804–1816.
Won, M. C. & Hedrick, J. K. (1996) Multiple surface sliding control of a class of uncertain nonlinear systems. Internat. J. Control, 64, 693–706.
Yang, Y. & Zhou, C. (2005) Adaptive fuzzy H∞ stabilization for strict-feedback canonical nonlinear systems via backstepping and small-gain approach. IEEE Trans. Fuzzy Syst., 13, 104–114.
Zhang, T., Ge, S. S. & Hang, C. C. (2000) Adaptive neural network control for strict-feedback nonlinear systems using backstepping design. Automatica J. IFAC, 36, 1835–1846.
Zou, A. M. & Hou, Z. G. (2008) Adaptive control of a class of nonlinear pure-feedback systems using fuzzy backstepping. IEEE Trans. Fuzzy Syst., 16, 886–897.
© The Author(s) 2019. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)