IMA Journal of Mathematical Control and Information, Volume Advance Article – Mar 12, 2017

24 pages


- Publisher
- Oxford University Press
- Copyright
- © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
- ISSN
- 0265-0754
- eISSN
- 1471-6887
- D.O.I.
- 10.1093/imamci/dnx008

Abstract The article develops an iterative learning control scheme for a class of repetitive discrete-time networked control systems with Bernoulli-type stochastic packet dropouts occurring in both the input and the output communication channels. Motivated by the observation that learning capacity benefits from an appropriate compensation along the iteration direction, a synchronous substitution technique is proposed for handling the dropped instant-wise input and output data: the dropped data are substituted by the synchronous data utilized at the previous iteration. By evaluating the mathematical expectation of the 1-norm of the tracking error, convergence of the developed learning scheme is derived for linear and nonlinear systems, respectively, showing that the tracking error vanishes as the iteration proceeds. Linear and nonlinear examples are numerically simulated to verify the effectiveness and validity of the proposed scheme. 1. Introduction Iterative learning control (ILC) was invented in the 1980s as an intelligent technique enabling a robot manipulator to repetitively attempt a tracking task over a fixed time interval (Arimoto et al., 1984). The fundamental mechanism of ILC is to compensate the current control input by its observed tracking discrepancy so as to generate an upgraded control input for the next operation. The aim is that the sequence of control inputs may drive the system to track a desired trajectory perfectly as the iteration index goes to infinity. Over three decades, ILC has been widely investigated in terms of both theoretical developments and practical applications (Chen et al., 2008; Mi et al., 2005; Bristow et al., 2006 and the references therein). 
Existing ILC studies cover initial state resetting (Chen et al., 1999; Meng et al., 2010), stochastic ILC (Saab, 2001; Shen & Wang, 2014), monotonic convergence (Park & Bien, 2005; Ruan et al., 2012), randomly varying trial lengths (Li et al., 2014, 2015) and frequency-domain techniques (Ruan & Li, 2014), among others. In recent years, diverse networked control strategies have attracted much attention owing to the advantages of networks such as information sharing, easy installation, reduced system wiring, and simple system diagnosis and maintenance. In conventional networked control systems (NCSs), however, communication delays and packet dropouts occur, which may deteriorate the system performance. Regarding communication delays, a usual remedy is to replace the delayed data by the data captured at the previous instant when the delay is within one sampling step (Krtolica et al., 1994; Yang et al., 2006; Wen & Yang, 2011), while for packet dropouts the method is to mend the dropped data with the latest captured data (Wu & Chen, 2007; Wang et al., 2013). These investigations show that such handling techniques work satisfactorily under the assumption that the probabilities of communication delays and packet dropouts are constrained appropriately. Inspired by these handling techniques for NCSs, investigations of networked iterative learning control (NILC) systems concerning communication delays and/or packet dropouts have emerged. For example, Liu et al. (2012) consider a D-type NILC strategy for a class of linear time-invariant (LTI) multiple-input-multiple-output (MIMO) systems, where both the packet dropout and the communication delay are assumed to occur only in the output communication channel. Bu et al. 
(2013) propose a P-type NILC scheme for a class of nonlinear systems with random packet losses occurring in both the input and the output communication channels, where the dropped instant-wise data are substituted by the one-step-ahead data. This implies that the previous sampling data must be available whenever the current instant-wise data are dropped. Note that, in the above-mentioned NILC schemes, the handling strategy for lost/delayed data is still the one-step-ahead mode adopted in conventional NCSs (Krtolica et al., 1994; Yang et al., 2006; Wen & Yang, 2011). This replacement mechanism does not fully match the ILC scheme, which is an exact point-to-point mapping in time along the iteration direction. Indeed, the tracking error is asymptotically upper-bounded but nonzero when communication delays/packet dropouts occur (Liu et al., 2012; Bu et al., 2013). Lately, the authors' group has developed a P-type NILC scheme for a class of nonlinear systems with stochastic delays in the output and input channels, where the delayed data are substituted by the synchronous data of the previous iteration (Liu & Ruan, 2016); the proposed scheme can drive the NILC system to track the desired trajectory precisely. This suggests that an NILC scheme with synchronous replacement of delayed/dropped data may achieve better tracking performance than one with one-step-ahead substitution. Regarding packet dropouts in NILC systems, the existing works mainly concentrate on dropouts of the system output, where a usual handling method for the dropped output data is to zero the correction term of the ILC update law (Ahn et al., 2006, 2008; Bu & Hou, 2011; Bu et al., 2013, 2016; Shen & Wang, 2015a,b). 
This handling of dropped data differs from that in conventional NCSs (Wu & Chen, 2007; Wang et al., 2013), being determined mainly by the structural characteristics and aim of the ILC update law. In Ahn et al. (2006, 2008), Bu & Hou (2011) and Bu et al. (2013, 2016), the packet loss is modelled as a 0–1 Bernoulli-type stochastic variable, whose value is 0 when the data packet is dropped and 1 otherwise, under the assumption that the transmission of each data packet is independent of the others. Specifically, Ahn et al. (2006) developed a D-type scheme, via a Kalman filtering approach, for a class of LTI MIMO systems in which the whole instant-wise data packet may be dropped in the output channel. Their further work considers a D-type NILC law for the more general case where every component of the instant-wise system output may be dropped (Ahn et al., 2008). Besides, Bu & Hou (2011) analyzed the convergence of a D-type NILC algorithm on the basis of exponential stability of asynchronous dynamical systems, with the convergence condition given in the form of linear matrix inequalities. Bu, Hou, and Chi (2013) gave a result based on converting the stochastic systems into deterministic ones in terms of mathematical expectation. In addition, in Bu et al. (2016), the stochastic systems are converted into a 2-D model and the convergence is derived by a linear matrix inequality technique. Further, Shen & Wang (2015a,b) proposed a D-type NILC algorithm for a class of SISO nonlinear systems where the random packet loss is modelled as a 0–1 stochastic variable not obeying any particular probability distribution, but where, at each fixed sampling instant, the number of successive packet losses is bounded by a finite constant. 
There, the learning gain is designed as an iteration-decreasing sequence and the convergence is deduced by stochastic approximation and optimization techniques, which implies that the learning capacity vanishes as the iteration proceeds. As mentioned above, the existing NILC works are based on the assumption that data dropout occurs only in the sensor-to-controller (S/C) output channel, neglecting the impact of packet loss of the control input in the controller-to-actuator (C/A) input channel. In fact, packet dropout may occur in both the output and input channels. However, the conventional replacement algorithm for packet loss of the control input in NCSs may be infeasible, since the aim of ILC is to achieve perfect tracking of a desired trajectory rather than to stabilize the NCS, and the handling strategy for packet loss of the system output proposed in the aforementioned NILC works cannot be directly applied to packet loss of the control input. It is therefore meaningful to study packet loss of both the system output and the control input. Motivated by this limitation, this article develops a D-type NILC update law for discrete-time NCSs with Bernoulli-type stochastic packet dropouts occurring in both the input and output channels, where the dropped data are replaced by the synchronous data used at the previous iteration. By evaluating the expectation of the tracking error, zero-error convergence is derived for linear and nonlinear systems. The remainder of the article is organized as follows. In Section 2, a synchronous-substitution-type ILC scheme is constructed and some notations are presented. Section 3 analyzes the convergence of the developed learning scheme for linear systems and Section 4 addresses the convergence of the proposed learning rule for a class of affine nonlinear systems. The effectiveness and validity are numerically demonstrated in Section 5 and Section 6 concludes the article. 2. 
Synchronous-substitution-type ILC algorithm and notations Let $$(X, F, P)$$ be a probability space and $$p\in [0,1]$$ a constant, where $$X = \{0,1\}$$ is the sample space, $$F =\{\emptyset,\{0\},\{1\},\{0,1\} \} $$ is the set of events and $$P$$ is a probability measure on $$F$$ satisfying $$P(\emptyset) = 0$$, $$P(\{0\}) = p$$, $$P(\{1\}) = 1-p$$ and $$P(\{0,1\}) = 1$$. A stochastic variable $$\xi $$ defined on $$(X, F, P)$$ with $$\xi (0) = 0$$ and $$\xi (1) = 1$$ is said to be subject to the 0–1 Bernoulli distribution. Denote by $$E\{\xi \}$$ the mathematical expectation of the stochastic variable $$\xi $$. Then $$E\{\xi \} = P\{\xi = 1\} = 1 - p$$. Consider a class of repetitive discrete-time single-input-single-output (SISO) systems described as follows. {xk(t+1)=f(xk(t),uk(t)),t∈S−,yk(t)=g(xk(t)),t∈S, (1) where $$k\in \{1,2,\ldots \}$$ denotes the operation/iteration index, $$t$$ is the time index, $$S^{-} = \{0,1,2,\ldots, N - 1\}$$ and $$S = \{0,1,2,\ldots, N\}$$. Meanwhile, $$x_{k} (t) \in R^{n}$$, $$u_{k} (t) \in R$$ and $$y_{k} (t) \in R$$ are the $$ n$$-dimensional state, scalar input and scalar output of the system (1) at the $$k\mbox{th}$$ iteration, respectively, and $$f(\cdot,\cdot)$$ and $$g(\cdot)$$ are linear or nonlinear functions of the state and the input. In the system (1), when the control input $$u_{k} (t)$$ is generated by an ILC updating law and implemented through a network, namely, the control command is transmitted from the ILC unit to the actuator via the input communication channel, while the system output $$y_{k} (t)$$ is transferred through the output communication channel from the sensor to the ILC unit for updating the control input, the mode is regarded as a networked ILC paradigm, abbreviated NILC. The configuration is illustrated in Fig. 1. Fig. 1. Schematic diagram of NILC. 
In the schematic of Fig. 1, $$\tilde{{u}}_{k} (t)$$ denotes the control signal transmitted from the ILC unit to the actuator via the input channel, whilst $$u_{k} (t)$$ is the control command actually applied by the actuator, composed of $$\tilde{{u}}_{k} (t)$$ or $$u_{k-1} (t)$$ in an on/off switching mode. In particular, when the ILC signal $$\tilde{{u}}_{k} (t)$$ at instant $$t$$ is successfully captured by the actuator, $$\tilde{{u}}_{k} (t)$$ is taken as the stimulation signal, whilst when $$\tilde{{u}}_{k} (t)$$ is dropped, the actuator borrows the synchronous input $$u_{k-1} (t)$$ of the previous iteration. Mathematically, the control input $$u_{k} (t)$$ of the actuator is represented as follows. u1(t)=u~1(t),t∈S−, given as a test signal,uk(t)=ωk(t)u~k(t)+[1−ωk(t)]uk−1(t),t∈S−,k=2,3,…, (2) where $$\{\omega_{k} (t):t\in S^{-}, k=2,3,\cdot \cdot \cdot \}$$ is a stochastic sequence whose terms are subject to the 0–1 Bernoulli distribution. Here, $$\omega_{k} (t)=1$$ means that the signal $$\tilde{{u}}_{k} (t)$$ is successfully transmitted whilst $$\omega _{k} (t)=0$$ marks the signal $$\tilde{{u}}_{k} (t)$$ as dropped. Moreover, $$y_{k} (t)$$ denotes the system output transmitted to the ILC unit through the output channel, whilst $$\tilde{{y}}_{k} (t)$$ denotes the candidate signal for the ILC update, equal to either the system output $$y_{k} (t)$$ or the historical signal $$\tilde{{y}}_{k-1} (t)$$ depending on whether the data communication succeeds or fails. 
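The switching rule (2) amounts to a componentwise Bernoulli selection between the freshly transmitted command and the synchronous input of the previous iteration. A minimal sketch, assuming NumPy; the function name `apply_input_channel` and the shared random generator are our own illustrative choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_input_channel(u_tilde, u_prev, omega_bar):
    """Synchronous substitution for the input channel, cf. eq. (2):
    each instant-wise command u_tilde[t] is lost with probability
    omega_bar (a 0-1 Bernoulli trial); a dropped instant is replaced
    by the synchronous input u_prev[t] of the previous iteration."""
    omega = rng.random(len(u_tilde)) >= omega_bar  # True = packet arrives
    return np.where(omega, u_tilde, u_prev)
```

With `omega_bar = 0` every command arrives and the fresh signal is used; with `omega_bar = 1` every instant falls back to the previous iteration's input, mirroring the two extremes of (2).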
In detail, when the system output $$y_{k} (t)$$ is successfully transferred to the ILC unit, it is adopted for the data update, whilst when $$y_{k} (t)$$ is dropped, the ILC unit utilizes the synchronous signal $$\tilde{{y}}_{k-1} (t)$$ used at the previous iteration for upgrading the control input. Thus, the signal $$\tilde{{y}}_{k} (t)$$ is formulated as follows. y~1(t)=y1(t),t∈S+, a test output signal corresponding to u~1(t),y~k(t)=αk(t)yk(t)+[1−αk(t)]y~k−1(t),t∈S+,k=2,3,…, (3) where $$S^{+} = \{1,2,\ldots, N\}$$ and, for given $$t$$ and $$k$$, the stochastic variable $$\alpha_{k} (t)$$ is subject to the 0–1 Bernoulli distribution. Here, $$\alpha_{k} (t)=1$$ indicates that the output signal $$y_{k} (t)$$ is successfully transmitted whilst $$\alpha_{k} (t)=0$$ means that $$y_{k} (t)$$ is dropped. Note that, in the NILC configuration of Fig. 1, each communicated data packet is either dropped or captured successfully, which is modelled by the 0–1 Bernoulli distribution, and the occurrences of packet dropouts at distinct operations are independent. Thus, the following assumptions are adopted. (A1) Assume that the 0–1 Bernoulli stochastic variable $$\omega _{k} (t)$$ is independent of $$\omega_{l} (s)$$ for all $$k\ne l$$ and $$s,\; t\in S^{-}$$, that $$\alpha_{k} (t)$$ is independent of $$\alpha_{l} (s)$$ for all $$k\ne l$$ and $$s,\; t\in S^{+},$$ and that $$\alpha_{k} (t)$$ is independent of $$\omega_{l} (s)$$ for all $$k=2,3,\ldots, \quad l=2,3,\ldots, \quad t\in S^{+}$$ and $$s\in S^{-}$$. Moreover, to simplify the analysis, the following assumption is introduced. 
(A2) Assume that the data packet dropout probabilities in the input and output channels are $$\bar{{\omega }}$$ and $$\bar{{\alpha }}$$, respectively, mathematically, P{ωk(t)=0}=ω¯,0⩽ω¯<1, for t∈S−,k=2,3,…,P{αk(t)=0}=α¯,0⩽α¯<1, for t∈S+,k=2,3,…. By considering the 0–1 Bernoulli distributions assumptions with respect to the stochastic variables $$\omega_{k} (t)$$ and $$\alpha_{k} (t)$$, it is easy to calculate the expectations of those stochastic variables as follows. E{ωk(t)}=P{ωk(t)=1}=1−ω¯,0⩽ω¯<1, for t∈S−,k=2,3,…,E{αk(t)}=P{αk(t)=1}=1−α¯,0⩽α¯<1, for t∈S+,k=2,3,…. Taking the above formulations (2) and (3) into account, a synchronous-substitution-type ILC updating law is constructed in the form of u~k+1(t)=u~k(t)+Γδy~k(t+1),t∈S−,k=1,2,…, (4) where $$\delta \tilde{{y}}_{k} (t+1)=y_{d} (t+1)-\tilde{{y}}_{k} (t+1)$$ and $${\it {\Gamma}} $$ denotes the learning gain. In order to analyze the convergent characteristics of the developed learning scheme (4) with (2) and (3), the lifting technique is used and a set of denotations are introduced as follows. uk=[uk(0),uk(1),⋅⋅⋅,uk(N−1)]⊤∈RN,u~k=[u~k(0),u~k(1),⋅⋅⋅,u~k(N−1)]⊤∈RN,ud=[ud(0),ud(1),⋅⋅⋅,ud(N−1)]⊤∈RN,yk=[yk(1),yk(2),⋅⋅⋅,yk(N)]⊤∈RN,y~k=[y~k(1),y~k(2),⋅⋅⋅,y~k(N)]⊤∈RN,yd=[yd(1),yd(2),⋅⋅⋅,yd(N)]⊤∈RN,δyk=yd−yk,δy~k=yd−y~k,δuk=ud−uk,δu~k=ud−u~k,Ωk=diag(ωk(0),ωk(1),⋅⋅⋅,ωk(N−1))∈RN×N,Ω=diag(ω¯,ω¯,⋅⋅⋅,ω¯)∈RN×N,Λk=diag(αk(1),αk(2),⋅⋅⋅,αk(N))∈RN×N,Λ=diag(α¯,α¯,⋅⋅⋅,α¯)∈RN×N. Thus, the expressions (2) and (3) are respectively lifted as u1=u~1,uk=Ωku~k+(I−Ωk)uk−1,k=2,3…, (5) and y~1=y1,y~k=Λkyk+(I−Λk)y~k−1,k=2,3,…, (6) where $$I$$ is an identity matrix with appropriate dimension. Moreover, the algorithm (4) is lifted as u~k+1=u~k+Γδy~k. (7) The following concepts and properties are involved in the article. Definition 1 Let $$x = (x_{1},\cdot \cdot \cdot, x_{n})^{\top } \in R^{n}$$ be an $$n$$-dimensional vector and $$H = (h_{ij})_{m\times n} \in R^{m\times n}$$ be a matrix. 
Then the 1-norm of the vector $$x$$ is defined as $$\| x \|_{1} = \sum_{i=1}^n {\vert x_{i} \vert } $$ and the induced 1-norm of the matrix is $$\| H \|_{1} = \max_{1\leqslant j\leqslant n} \{\sum_{i=1}^m \vert h_{ij} \vert\}$$. Besides, denote |x|=(|x1|,|x2|,…,|xn|)⊤ and |H|=(|hij|)m×n. Definition 2 Let $$x = (x_{1},\ldots, x_{n})^{\top }$$ and $$y = (y_{1},\ldots, y_{n})^{\top } \in R^{n}$$ be two $$n$$-dimensional real vectors. The partial order relation $$\prec $$ is defined by $$x\prec y$$ if and only if $$x_{i} \leqslant y_{i} $$ for all $$i=1,2,\ldots, n$$. The basic properties of the above definitions are as follows. (P1) Let $$x,\;y,\;z\in R^{n}$$. If $$\vert x \vert \prec \vert y \vert $$ and $$\vert y \vert \prec \vert z \vert $$, then $$\vert x \vert \prec \vert z \vert $$. (P2) Let $$x\in R^{n}$$, $$y\in R^{m}$$ and $$H\in R^{m\times n}$$. If $$y=Hx$$, then $$\vert y \vert \prec \vert H \vert \vert x \vert $$. (P3) Let $$x,\;y\in R^{n}$$. If $$\vert x \vert \prec \vert y \vert $$, then $$\| x \|_{1} \leqslant \| y \|_{1} $$. (P4) Let $$x,\;y,\;z\in R^{n}$$. If $$z=x+y$$, then $$\vert z \vert \prec \vert x \vert +\vert y \vert $$. (P5) Let $$x, y\in R^{n}$$. Then $$\| \vert x \vert +\vert y \vert \|_{1} \leqslant \| x \|_{1} +\| y \|_{1} $$. (P6) Let $$H_{1},\;H_{2} \in R^{m\times n}$$. If $$H_{2} =\vert H_{1} \vert $$, then $$\| H_{2} \|_{1} =\| H_{1} \|_{1} $$. (P7) Let $$x,\;y\in R^{n}$$ be stochastic vectors. If $$\vert x \vert \prec \vert y \vert $$, then $$E\{\vert x \vert \}\prec E\{\vert y \vert \}$$. (P8) Let $$x\in R^{n}$$ be a stochastic vector. Then $$\| E\{\vert x \vert \} \|_{1} =E\{\| x \|_{1}\}$$. Furthermore, the following lemmas are used in this article. 
Lemma 1 Let $$\{e_{k} \}_{k=1}^{\infty}$$, $$\{\sigma_{k} \}_{k=1}^{\infty } $$ and $$\{\varphi_{k} \}_{k=1}^{\infty}$$ be nonnegative sequences, which satisfy $$e_{k+1} \leqslant \sum_{i=1}^k {\sigma_{i} e_{k-i+1} } +\varphi_{k}, \ \sigma =\sum_{i=1}^\infty {\sigma_{i} } <1$$ and $$\lim_{k\to \infty} \varphi_{k} =0$$. Then $$\lim_{k\to \infty } e_{k} =0$$. Proof. First, we prove that the nonnegative sequence $$\{e_{k} \}_{k=1}^{\infty}$$ is bounded. Since the sequence $$\{\varphi_{k}\}_{k=1}^{\infty}$$ is nonnegative satisfying $$\lim_{k\to \infty } \varphi_{k} = 0$$ and $$\sigma = \sum_{i=1}^\infty {\sigma_{i}} < 1$$, there exists a positive integer $$K_{1} $$ such that $$\varphi_{k} + \sigma < 1$$ for all $$k \geqslant K_{1} $$. Thus, for all $$k \geqslant K_{1} + 1$$, we have ek⩽max{e1,e2,…,eK1,1}. Let $$C = \max \{e_{1}, e_{2},\cdot \cdot \cdot, e_{K_{1} },1\}.$$ Because that $$\sigma = \sum_{i=1}^\infty {\sigma_{i} } < 1$$ and $$\lim_{k\to \infty } \varphi_{k} = 0,$$ then for any positive number $$\varepsilon >0$$ there exists a positive integer $$K_{\varepsilon } \quad (K_{\varepsilon } \geqslant K_{1})$$ such that ∑j=1∞σKε+j<1−σCε2 and φKε+i<ε2(1−σ) for all i=1,2,⋅⋅⋅. For $$k\geqslant K_{\varepsilon } +1$$, we have ek+1⩽σ1ek+σ2ek−1+⋅⋅⋅+σKεek−Kε+1+σKε+1ek−Kε+⋅⋅⋅+σke1+φk⩽σ1ek+σ2ek−1+⋅⋅⋅+σKεek−Kε+1+(σKε+1+⋅⋅⋅+σk)C+ε2(1−σ)⩽σ1ek+σ2ek−1+⋅⋅⋅+σKεek−Kε+1+1−σCε2C+ε2(1−σ)⩽σ1ek+σ2ek−1+⋅⋅⋅+σKεek−Kε+1+ε(1−σ). Taking supreme limit on both sides of the above inequality yields limk→∞supek+1⩽σ1limk→∞supek+σ2limk→∞supek−1+⋅⋅⋅+σKεlimk→∞supek−Kε+1+ε(1−σ)⩽(σ1+σ2+⋅⋅⋅+σKε)limk→∞supek+ε(1−σ)⩽σlimk→∞supek+ε(1−σ). The above inequality leads to limk→∞supek⩽ε. Consequently limk→∞ek=0. This completes the proof. 
□ Lemma 2 Let $$\{\phi_{k} \}_{k=1}^{\infty } $$, $$\{\lambda_{k} \}_{k=1}^{\infty } $$ and $$\{\Phi_{k} \}_{k=1}^{\infty } $$ be nonnegative sequences, which satisfy $$(i) \lim\limits_{k\to \infty } \phi_{k} = 0, \lim\limits_{k\to \infty } \lambda_{k} = 0$$ and $$\lim\limits_{k\to \infty } \Phi_{k} = 0,$$ (ii) $$\sum_{k=1}^\infty {\phi_{k} } $$ is bounded. Then $$\lim\limits_{k\to \infty } (\sum_{i=1}^k {\phi_{i} \lambda_{k-i+1} } + \Phi_{k}) = 0.$$ Proof. From $$\lim_{k\to \infty } \lambda_{k} =0$$, it follows that the nonnegative sequence $$\{\lambda_{k} \}_{k=1}^{\infty } $$ is bounded. Let $$C=\sup_{k=1,2,\cdot \cdot \cdot } \{\lambda_{k} \}$$ and $$\phi =\sum_{k=1}^\infty {\phi_{k} } $$. From the sequence $$\{\phi_{k} \}_{k=1}^{\infty } $$ is nonnegative and the assumption (ii), it is true that for any $$\varepsilon >0$$ there exists such a positive integer $$K_{1} $$ that $$\sum_{k=K_{1} +1}^\infty {\phi_{k} } <\tfrac{\varepsilon }{3C}$$. In addition, from the assumption ($$i)$$ it is immediate that there exists a positive integer $$K_{2} \quad (K_{2} >K_{1})$$ so that $$\lambda_{k-K_{1} +1} <\tfrac{\varepsilon }{3\phi }$$ for all $$k-K_{1} +1>K_{2} $$. Further, the assumptions that $$\{\Phi_{k} \}_{k=1}^{\infty } $$ is nonnegative and $$\lim _{k\to \infty } \Phi_{k} =0$$ imply that there exists a positive integer $$K_{3} $$ such that $$\Phi_{k} < \tfrac{\varepsilon }{3}$$ for all $$k > K_{3} $$. Thus, for all $$k > \max \{K_{1} + K_{2} - 1, K_{3} \},$$ we have ∑i=1kϕiλk−i+1+Φk=ϕ1λk+ϕ2λk−1+⋅⋅⋅+ϕK1λk−K1+1+ϕK1+1λk−K1+⋅⋅⋅+ϕkλ1+Φk⩽(ϕ1+ϕ2+⋅⋅⋅+ϕK1)ε3ϕ+C(∑k=K1+1∞ϕk)+Φk<ϕ×ε3ϕ+C×ε3C+ε3=ε. Consequently limk→∞(∑i=1kϕiλk−i+1+Φk)=0. This completes the proof. □ 3. Convergence analysis for SISO LTI systems Consider a class of repetitive SISO LTI discrete-time systems taking a form of {xk(t+1)=Axk(t)+buk(t),t∈S−,yk(t)=cxk(t),t∈S, (8) where $$A$$, $$b$$ and $$c$$ are matrices with appropriate dimensions. In particular, $$cb$$ is assumed to be nonzero. 
The dynamic system (8) can be lifted as yk=Huk+Gxk(0). (9) Here G=[(cA)⊤,(cA2)⊤,…,(cAN)⊤]⊤∈RN×n,H=[cbcAbcbcA2bcAbcb⋮⋮⋮⋱cAN−1bcAN−2bcAN−3b⋅⋅⋅cb]∈RN×N. It is noted that the assumption that $$cb$$ is nonzero means that both $$b$$ and $$c$$ are nonzero. Thus, for any given desired output $$y_{d} (t)$$, $$t\in S$$, from the recursive mode of the above equation (8) it is not difficult to induce that $$u_{d} (t)=\frac{1}{cb}[y_{d} (t+1)-cAx_{d} (t)]$$ and $$x_{d} (t+1)=Ax_{d} (t)+bu_{d} (t)$$. By recursive derivation, the desired state $$x_{d} (t)$$, $$t\in S$$, and the desired control input $$u_{d} (t)$$, $$t\in S^{-}$$ can be derived satisfying {xd(t+1)=Axd(t)+bud(t),t∈S−,yd(t)=cxd(t),t∈S (10) and yd=Hud+Gxd(0), (11) where $$x_{d} (0)$$ is chosen to satisfy $$cx_{d} (0)=y_{d} (0).$$ Theorem 1 Assume that the proposed synchronous-substitution-type ILC scheme (4) with (2) and (3) is applied to the system (8) and the initial state is resettable, namely, $$x_{k} (0)=x_{d} (0)$$ for all $$k=1,2,\ldots $$. Then the expectation $$E\{\| \delta y_{k} \|_{1} \}$$ of tracking error $$\| \delta y_{k} \|_{1} $$ is convergent to zero as the iteration goes to infinity if the condition ρ1=‖E{|I−ΓΛkHΩk|}‖1+(α¯+ω¯−α¯ω¯)|Γ‖|H‖1<1 holds. Proof. From (6), we derive that δy~1=δy1,δy~k=Λkδyk+(I−Λk)δy~k−1,k=2,3,⋅⋅⋅. (12) By backwardly iterating (12), we have δy~k=Λkδyk+∑m=2k−1[∏j=0k−1−m(I−Λk−j)]Λmδym+[∏j=0k−2(I−Λk−j)]δy1. (13) From (7) and (13), it follows δu~k+1=δu~k−ΓΛkδyk−∑m=2k−1Γ[∏j=0k−1−m(I−Λk−j)]Λmδym−Γ[∏j=0k−2(I−Λk−j)]δy1. (14) Noticing that $$x_{k} (0)=x_{d} (0)$$ and taking (9) and (11) into consideration yield δyk=Hδuk. (15) Substituting (15) into (14) achieves δu~k+1=δu~k−ΓΛkHδuk−∑m=2k−1Γ[∏j=0k−1−m(I−Λk−j)]ΛmHδum−Γ[∏j=0k−2(I−Λk−j)]Hδu1. (16) By (5), we have δu1=δu~1,δuk=Ωkδu~k+(I−Ωk)δuk−1,k=2,3,⋅⋅⋅. (17) Backwardly iterating (17) leads to δuk=Ωkδu~k+∑m=2k−1[∏j=0k−1−m(I−Ωk−j)]Ωmδu~m+[∏j=0k−2(I−Ωk−j)]δu~1. 
(18) Substituting (18) into (16) achieves δu~k+1=(I−ΓΛkHΩk)δu~k−∑m=2k−1ΓΛkH[∏j=0k−1−m(I−Ωk−j)]Ωmδu~m−ΓΛkH[∏j=0k−2(I−Ωk−j)]δu~1−∑m=2k−1Γ[∏j=0k−1−m(I−Λk−j)]ΛmH(Ωmδu~m+∑i=2m−1[∏j=0m−1−i(I−Ωm−j)]Ωiδu~i+[∏j=0m−2(I−Ωm−j)]δu~1)−Γ[∏j=0k−2(I−Λk−j)]Hδu~1. (19) By considering Definition 2 together with the properties (P1), (P2) and (P4), the above (19) leads to |δu~k+1|≺|I−ΓΛkHΩk||δu~k|+∑m=2k−1|Γ|Λk|H|[∏j=0k−1−m(I−Ωk−j)]Ωm|δu~m|+|Γ|Λk|H|[∏j=0k−2(I−Ωk−j)]|δu~1|+∑m=2k−1|Γ|[∏j=0k−1−m(I−Λk−j)]Λm|H|(Ωm|δu~m|+∑i=2m−1[∏j=0m−1−i(I−Ωm−j)]Ωi|δu~i|+[∏j=0m−2(I−Ωm−j)]|δu~1|)+|Γ|[∏j=0k−2(I−Λk−j)]|H||δu~1|. (20) It is obvious that the assumption A2 results in the following expectations E{Ωk}=I−Ω,E{I−Ωk}=Ω,E{Λk}=I−Λ andE{I−Λk}=Λ. Thus, calculating the expectation on both sides of (20) and taking the Assumption A1 and the property (P7) into account yield E{|δu~k+1|}≺E{|I−ΓΛkHΩk|}E{|δu~k|}+∑m=2k−1|Γ|(I−Λ)|H|Ωk−m(I−Ω)E{|δu~m|}+|Γ|(I−Λ)|H|Ωk−1E{|δu~1|}+∑m=2k−1|Γ|Λk−m(I−Λ)|H|([I−Ω]E{|δu~m|}+∑i=2m−1Ωm−i[I−Ω]E{|δu~i|}+Ωm−1E{|δu~1|})+|Γ|Λk−1|H|E{|δu~1|}. (21) Computing 1-norm on both sides of (21) and considering the properties (P3), (P5), (P6) and (P8) achieve E{‖δu~k+1‖1}⩽‖E{|I−ΓΛkHΩk|}‖1E{‖δu~k‖1}+∑m=2k−1|Γ|(1−α¯)‖H‖1ω¯k−m(1−ω¯)E{‖δu~m‖1}+∑m=2k−1|Γ|α¯k−m(1−α¯)‖H‖1((1−ω¯)E{‖δu~m‖1}+∑i=2m−1ω¯m−i(1−ω¯)E{‖δu~i‖1}+ω¯m−1E{‖δu~1‖1})+|Γ|(1−α¯)‖H‖1ω¯k−1E{‖δu~1‖1}+|Γ|α¯k−1‖H‖1E{‖δu~1‖1}=λ1E{‖δu~k‖1}+∑i=1k−1λi+1E{‖δu~k−i‖1}+φkE{‖δu~1‖1}, (22) where $$\lambda_{1} =\| E\{\vert I-{\it {\Gamma}} {\it {\Lambda}}_{k} H{\it {\Omega}}_{k} \vert \} \|_{1},\lambda_{i+1} =\vert {\it {\Gamma}} \vert \| H \|_{1} (1-\bar{{\alpha }})(1-\bar{{\omega }})\left[{\sum_{m=0}^i {\bar{{\alpha }}^{i-m}\bar{{\omega }}^{m}} } \right](i=1,2,\cdot \cdot \cdot, k - 1),$$ and φk=|Γ|||H‖1(1−α¯)[∑m=0kα¯k−mω¯m]+|Γ|‖H‖1α¯k+1. A direct computation shows that ∑i=1+∞λi=‖E{|I−ΓΛkHΩk|}‖1+(α¯+ω¯−α¯ω¯)|Γ|‖H‖1 and limk→∞φk=0. 
From the assumption $$\rho_{1} =\| E\{\vert I-{\it {\Gamma}} {\it {\Lambda}}_{k} H{\it {\Omega}}_{k} \vert \} \|_{1} +(\bar{{\alpha }}+\bar{{\omega }}-\bar{{\alpha }}\bar{{\omega }})\vert {\it {\Gamma}} \vert \| H \|_{1} <1$$ and Lemma 1, the inequality (22) gives rise to limk→∞E{‖δu~k‖1}=0. (23) Substituting (18) into (15) yields δyk=HΩkδu~k+∑m=2k−1H[∏j=0k−1−m(I−Ωk−j)]Ωmδu~m+H[∏j=0k−2(I−Ωk−j)]δu~1. (24) From the properties (P1), (P2) and (P4), the equality (24) reduces |δyk|≺|H|Ωk|δu~k|+∑m=2k−1|H|[∏j=0k−1−m(I−Ωk−j)]Ωm|δu~m|+|H|[∏j=0k−2(I−Ωk−j)]|δu~1|. (25) Taking the expectation on both sides of (25) and taking the assumptions A1 and A2 into account, we get E{|δyk|}≺|H|(I−Ω)E{|δu~k|}+∑m=2k−1|H|Ωk−m(I−Ω)E{|δu~m|}+|H|Ωk−1E{|δu~1|}. (26) Taking the 1-norm on both sides of (26) and taking the properties (P3) and (P8) into consideration, we obtain E{‖δyk‖1}⩽‖H‖1((1−ω¯)E{‖δu~k‖1}+∑m=1k−1ω¯k−m(1−ω¯)E{‖δu~m‖1}+ω¯kE{‖δu~1‖1}). (27) By (23), (27) and Lemma 2, we reach limk→∞E{‖δyk‖1}=0. This completes the proof. □ Remark 1 It is observed that for the case when $$\bar{{\alpha }}=0$$ and $$\bar{{\omega }}=0$$ the inequality (22) reduces that E{‖δu~k+1‖1}⩽‖E{|I−ΓΛkHΩk|}‖1E{‖δu~k‖1}. This implies that the expectation of the input error is monotonically convergent to zero if the probabilities of the input and output data dropouts are null. Ideally, in the case when the input and output data dropouts do not happen, the input error in the sense of 1-norm is monotonously convergent. This coincides with the existing conclusion in Ruan et al. (2012). 4. Convergence analysis for nonlinear systems This section considers a kind of affine nonlinear systems described as {xk(t+1)=f(xk(t))+buk(t),t∈S−,yk(t)=cxk(t),t∈S, (28) where $$x_{k} (t)\in R^{n}, \quad u_{k} (t)\in R$$ and $$y_{k} (t)\in R$$ are $$n$$-dimensional state, scalar input and scalar output, respectively. 
$$f(\cdot)$$ is a nonlinear function and $$cb$$ is supposed to be nonzero, under which there is no difficulty to check that for a given desired output $$y_{d} (t), \quad t\in S,$$ there exist desired state $$x_{d} (t), \quad t\in S$$ and desired control input $$u_{d} (t), \quad t\in S^{-}$$ such that {xd(t+1)=f(xd(t))+bud(t),t∈S−,yd(t)=cxd(t),t∈S, (29) i.e., $$y_{d} (t)$$ is realizable. (A3) Assume that the nonlinear function $$f(z)$$ is uniformly globally Lipschitz with respect to $$z$$, i.e., for any $$z_{1}, z_{2} \in R^{n}$$, there exists a positive constant $$L_{f} $$ such that ‖f(z1)−f(z2)‖1⩽Lf‖z1−z2‖1. In order to analyze the convergent characteristics of the proposed synchronous-substitution-type ILC scheme (4) with (2) and (3) for the nonlinear system (28), the lifting technique is used and a set of denotations are introduced as follows. xk=[(xk(0))⊤,(xk(1))⊤,…,(xk(N−1))⊤]⊤∈RnN,xk+=[(xk(1))⊤,(xk(2))⊤,…,(xk(N))⊤]⊤∈RnN,xd=[(xd(0))⊤,(xd(1))⊤,…,(xd(N−1))⊤]⊤∈RnN,xd+=[(xd(1))⊤,(xd(2))⊤,…,(xd(N))⊤]⊤∈RnN,f(xk)=[(f(xk(0)))⊤,(f(xk(1)))⊤,…,(f(xk(N−1)))⊤]⊤∈RnN,f(xd)=[(f(xd(0)))⊤,(f(xd(1)))⊤,…,(f(xd(N−1)))⊤]⊤∈RnN,B=diag(b,b,…,b)∈RnN×N,C=diag(c,c,…,c)∈RN×nN. Thus, (28) and (29) are respectively rewritten as {xk+=f(xk)+Buk,yk=Cxk+ (30) and {xd+=f(xd)+Bud,yd=Cxd+. (31) Theorem 2 Assume that the proposed synchronous-substitution-type ILC scheme (4) with (2) and (3) is applied to the nonlinear system (28) and the initial state is resettable, i.e. $$x_{k} (0)=x_{d} (0)$$ for $$k=1,2,\cdot \cdot \cdot $$. 
Then the expectation $$E\{\| \delta y_{k} \|_{1}\}$$ of the tracking error $$\| \delta y_{k} \|_{1}$$ converges to zero as the iteration tends to infinity if the inequalities $$L_{f}<1$$ and $$\rho_{2}=\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}+[(1-\bar{\alpha})(1-\bar{\omega})L_{f}+(\bar{\alpha}+\bar{\omega}-\bar{\alpha}\bar{\omega})]\frac{\vert{\it{\Gamma}}\vert\,\| b\|_{1}\| c\|_{1}}{1-L_{f}}<1$$ hold.

Proof. The formulae (7) and (13) give rise to

$$\delta\tilde{u}_{k+1}=\delta\tilde{u}_{k}-{\it{\Gamma}}{\it{\Lambda}}_{k}\delta y_{k}-\sum_{m=2}^{k-1}{\it{\Gamma}}\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Lambda}}_{k-j})\Big]{\it{\Lambda}}_{m}\delta y_{m}-{\it{\Gamma}}\Big[\prod_{j=0}^{k-2}(I-{\it{\Lambda}}_{k-j})\Big]\delta y_{1}. \quad (32)$$

From (30) and (31), it follows that

$$\delta y_{k}=C(x_{d}^{+}-x_{k}^{+})=C[f(x_{d})-f(x_{k})]+CB\,\delta u_{k}. \quad (33)$$

Substituting (33) into (32) yields

$$\begin{aligned} \delta\tilde{u}_{k+1}={}&\delta\tilde{u}_{k}-{\it{\Gamma}}{\it{\Lambda}}_{k}CB\,\delta u_{k}-{\it{\Gamma}}{\it{\Lambda}}_{k}C[f(x_{d})-f(x_{k})]\\ &-{\it{\Gamma}}\Big[\prod_{j=0}^{k-2}(I-{\it{\Lambda}}_{k-j})\Big]\big(C[f(x_{d})-f(x_{1})]+CB\,\delta u_{1}\big)\\ &-\sum_{m=2}^{k-1}{\it{\Gamma}}\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Lambda}}_{k-j})\Big]{\it{\Lambda}}_{m}\big(C[f(x_{d})-f(x_{m})]+CB\,\delta u_{m}\big). \end{aligned} \quad (34)$$

Substituting (18) into (34) leads to

$$\begin{aligned} \delta\tilde{u}_{k+1}={}&(I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k})\,\delta\tilde{u}_{k}-\sum_{m=2}^{k-1}{\it{\Gamma}}{\it{\Lambda}}_{k}CB\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Omega}}_{k-j})\Big]{\it{\Omega}}_{m}\,\delta\tilde{u}_{m}\\ &-{\it{\Gamma}}{\it{\Lambda}}_{k}CB\Big[\prod_{j=0}^{k-2}(I-{\it{\Omega}}_{k-j})\Big]\delta\tilde{u}_{1}\\ &-\sum_{m=2}^{k-1}{\it{\Gamma}}\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Lambda}}_{k-j})\Big]{\it{\Lambda}}_{m}\Big(C[f(x_{d})-f(x_{m})]+CB{\it{\Omega}}_{m}\,\delta\tilde{u}_{m}\\ &\qquad+\sum_{\nu=2}^{m-1}CB\Big[\prod_{j=0}^{m-1-\nu}(I-{\it{\Omega}}_{m-j})\Big]{\it{\Omega}}_{\nu}\,\delta\tilde{u}_{\nu}+CB\Big[\prod_{j=0}^{m-2}(I-{\it{\Omega}}_{m-j})\Big]\delta\tilde{u}_{1}\Big)\\ &-{\it{\Gamma}}{\it{\Lambda}}_{k}C[f(x_{d})-f(x_{k})]-{\it{\Gamma}}\Big[\prod_{j=0}^{k-2}(I-{\it{\Lambda}}_{k-j})\Big]\big(C[f(x_{d})-f(x_{1})]+CB\,\delta\tilde{u}_{1}\big). \end{aligned} \quad (35)$$

By Definition 2, the properties (P1), (P2) and (P4) and equation (35), we get

$$\begin{aligned} \vert\delta\tilde{u}_{k+1}\vert\prec{}&\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\,\vert\delta\tilde{u}_{k}\vert+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert{\it{\Lambda}}_{k}\vert CB\vert\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Omega}}_{k-j})\Big]{\it{\Omega}}_{m}\vert\delta\tilde{u}_{m}\vert\\ &+\vert{\it{\Gamma}}\vert{\it{\Lambda}}_{k}\vert CB\vert\Big[\prod_{j=0}^{k-2}(I-{\it{\Omega}}_{k-j})\Big]\vert\delta\tilde{u}_{1}\vert\\ &+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert\Big[\prod_{j=0}^{k-1-m}(I-{\it{\Lambda}}_{k-j})\Big]{\it{\Lambda}}_{m}\Big(\vert C\vert\,\vert f(x_{d})-f(x_{m})\vert+\vert CB\vert{\it{\Omega}}_{m}\vert\delta\tilde{u}_{m}\vert\\ &\qquad+\sum_{\nu=2}^{m-1}\vert CB\vert\Big[\prod_{j=0}^{m-1-\nu}(I-{\it{\Omega}}_{m-j})\Big]{\it{\Omega}}_{\nu}\vert\delta\tilde{u}_{\nu}\vert+\vert CB\vert\Big[\prod_{j=0}^{m-2}(I-{\it{\Omega}}_{m-j})\Big]\vert\delta\tilde{u}_{1}\vert\Big)\\ &+\vert{\it{\Gamma}}\vert{\it{\Lambda}}_{k}\vert C\vert\,\vert f(x_{d})-f(x_{k})\vert+\vert{\it{\Gamma}}\vert\Big[\prod_{j=0}^{k-2}(I-{\it{\Lambda}}_{k-j})\Big]\big(\vert C\vert\,\vert f(x_{d})-f(x_{1})\vert+\vert CB\vert\,\vert\delta\tilde{u}_{1}\vert\big). \end{aligned} \quad (36)$$

Calculating the expectation on both sides of (36) and taking the property (P7) and Assumptions A1 and A2 into consideration, we obtain

$$\begin{aligned} E\{\vert\delta\tilde{u}_{k+1}\vert\}\prec{}&E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}E\{\vert\delta\tilde{u}_{k}\vert\}+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert(I-{\it{\Lambda}})\vert CB\vert{\it{\Omega}}^{k-m}(I-{\it{\Omega}})E\{\vert\delta\tilde{u}_{m}\vert\}\\ &+\vert{\it{\Gamma}}\vert(I-{\it{\Lambda}})\vert CB\vert{\it{\Omega}}^{k-1}E\{\vert\delta\tilde{u}_{1}\vert\}+\vert{\it{\Gamma}}\vert(I-{\it{\Lambda}})\vert C\vert E\{\vert f(x_{d})-f(x_{k})\vert\}\\ &+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert{\it{\Lambda}}^{k-m}(I-{\it{\Lambda}})\Big(\vert C\vert E\{\vert f(x_{d})-f(x_{m})\vert\}+\vert CB\vert(I-{\it{\Omega}})E\{\vert\delta\tilde{u}_{m}\vert\}\\ &\qquad+\sum_{\nu=2}^{m-1}\vert CB\vert{\it{\Omega}}^{m-\nu}(I-{\it{\Omega}})E\{\vert\delta\tilde{u}_{\nu}\vert\}+\vert CB\vert{\it{\Omega}}^{m-1}E\{\vert\delta\tilde{u}_{1}\vert\}\Big)\\ &+\vert{\it{\Gamma}}\vert{\it{\Lambda}}^{k-1}\big(\vert C\vert E\{\vert f(x_{d})-f(x_{1})\vert\}+\vert CB\vert E\{\vert\delta\tilde{u}_{1}\vert\}\big). \end{aligned} \quad (37)$$

Taking the 1-norm on both sides of (37) and taking the property (P8), $$\| CB\|_{1}=\vert cb\vert$$ and $$\| C\|_{1}=\| c\|_{1}$$ into account, we have

$$\begin{aligned} E\{\|\delta\tilde{u}_{k+1}\|_{1}\}\leqslant{}&\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert\vert cb\vert(1-\bar{\alpha})(1-\bar{\omega})\bar{\omega}^{k-m}E\{\|\delta\tilde{u}_{m}\|_{1}\}\\ &+\vert{\it{\Gamma}}\vert(1-\bar{\alpha})\vert cb\vert\bar{\omega}^{k-1}E\{\|\delta\tilde{u}_{1}\|_{1}\}+\vert{\it{\Gamma}}\vert(1-\bar{\alpha})\| c\|_{1}E\{\| f(x_{d})-f(x_{k})\|_{1}\}\\ &+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert\bar{\alpha}^{k-m}(1-\bar{\alpha})\Big(\| c\|_{1}E\{\| f(x_{d})-f(x_{m})\|_{1}\}+(1-\bar{\omega})\vert cb\vert E\{\|\delta\tilde{u}_{m}\|_{1}\}\\ &\qquad+\sum_{\nu=2}^{m-1}\bar{\omega}^{m-\nu}(1-\bar{\omega})\vert cb\vert E\{\|\delta\tilde{u}_{\nu}\|_{1}\}+\vert cb\vert\bar{\omega}^{m-1}E\{\|\delta\tilde{u}_{1}\|_{1}\}\Big)\\ &+\vert{\it{\Gamma}}\vert\bar{\alpha}^{k-1}\big(\| c\|_{1}E\{\| f(x_{d})-f(x_{1})\|_{1}\}+\vert cb\vert E\{\|\delta\tilde{u}_{1}\|_{1}\}\big). \end{aligned} \quad (38)$$

From Assumption A3, it follows that

$$E\{\| f(x_{d})-f(x_{k})\|_{1}\}\leqslant L_{f}E\{\| x_{d}-x_{k}\|_{1}\}. \quad (39)$$

Substituting (39) into (38) yields

$$\begin{aligned} E\{\|\delta\tilde{u}_{k+1}\|_{1}\}\leqslant{}&\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert\vert cb\vert(1-\bar{\alpha})(1-\bar{\omega})\bar{\omega}^{k-m}E\{\|\delta\tilde{u}_{m}\|_{1}\}\\ &+\vert{\it{\Gamma}}\vert(1-\bar{\alpha})\vert cb\vert\bar{\omega}^{k-1}E\{\|\delta\tilde{u}_{1}\|_{1}\}+\vert{\it{\Gamma}}\vert(1-\bar{\alpha})\| c\|_{1}L_{f}E\{\| x_{d}-x_{k}\|_{1}\}\\ &+\sum_{m=2}^{k-1}\vert{\it{\Gamma}}\vert\bar{\alpha}^{k-m}(1-\bar{\alpha})\Big(\| c\|_{1}L_{f}E\{\| x_{d}-x_{m}\|_{1}\}+(1-\bar{\omega})\vert cb\vert E\{\|\delta\tilde{u}_{m}\|_{1}\}\\ &\qquad+\sum_{\nu=2}^{m-1}\bar{\omega}^{m-\nu}(1-\bar{\omega})\vert cb\vert E\{\|\delta\tilde{u}_{\nu}\|_{1}\}+\bar{\omega}^{m-1}\vert cb\vert E\{\|\delta\tilde{u}_{1}\|_{1}\}\Big)\\ &+\vert{\it{\Gamma}}\vert\bar{\alpha}^{k-1}\big(\| c\|_{1}L_{f}E\{\| x_{d}-x_{1}\|_{1}\}+\vert cb\vert E\{\|\delta\tilde{u}_{1}\|_{1}\}\big). \end{aligned} \quad (40)$$

By (30), (31) and (18), we have

$$x_{d}^{+}-x_{k}^{+}=f(x_{d})-f(x_{k})+B{\it{\Omega}}_{k}\delta\tilde{u}_{k}+\sum_{i=2}^{k-1}B\Big[\prod_{j=0}^{k-1-i}(I-{\it{\Omega}}_{k-j})\Big]{\it{\Omega}}_{i}\delta\tilde{u}_{i}+B\Big[\prod_{j=0}^{k-2}(I-{\it{\Omega}}_{k-j})\Big]\delta\tilde{u}_{1}. \quad (41)$$

By (41), Definition 2 and the properties (P1), (P2) and (P4), we obtain

$$\vert x_{d}^{+}-x_{k}^{+}\vert\prec\vert f(x_{d})-f(x_{k})\vert+\vert B\vert{\it{\Omega}}_{k}\vert\delta\tilde{u}_{k}\vert+\sum_{i=2}^{k-1}\vert B\vert\Big[\prod_{j=0}^{k-1-i}(I-{\it{\Omega}}_{k-j})\Big]{\it{\Omega}}_{i}\vert\delta\tilde{u}_{i}\vert+\vert B\vert\Big[\prod_{j=0}^{k-2}(I-{\it{\Omega}}_{k-j})\Big]\vert\delta\tilde{u}_{1}\vert. \quad (42)$$

Taking the expectation on both sides of (42) and considering Assumptions A1 and A2, we have

$$E\{\vert x_{d}^{+}-x_{k}^{+}\vert\}\prec E\{\vert f(x_{d})-f(x_{k})\vert\}+\vert B\vert(I-{\it{\Omega}})E\{\vert\delta\tilde{u}_{k}\vert\}+\sum_{i=2}^{k-1}\vert B\vert{\it{\Omega}}^{k-i}(I-{\it{\Omega}})E\{\vert\delta\tilde{u}_{i}\vert\}+\vert B\vert{\it{\Omega}}^{k-1}E\{\vert\delta\tilde{u}_{1}\vert\}. \quad (43)$$

Computing the 1-norm on both sides of (43) and considering the properties (P3), (P5), (P6), (P8) and $$\| B\|_{1}=\| b\|_{1}$$, we achieve

$$E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}\leqslant E\{\| f(x_{d})-f(x_{k})\|_{1}\}+(1-\bar{\omega})\| b\|_{1}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{i=2}^{k-1}(1-\bar{\omega})\bar{\omega}^{k-i}\| b\|_{1}E\{\|\delta\tilde{u}_{i}\|_{1}\}+\bar{\omega}^{k-1}\| b\|_{1}E\{\|\delta\tilde{u}_{1}\|_{1}\}. \quad (44)$$

Substituting (39) into (44) gives

$$E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}\leqslant L_{f}E\{\| x_{d}-x_{k}\|_{1}\}+(1-\bar{\omega})\| b\|_{1}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{i=2}^{k-1}(1-\bar{\omega})\bar{\omega}^{k-i}\| b\|_{1}E\{\|\delta\tilde{u}_{i}\|_{1}\}+\bar{\omega}^{k-1}\| b\|_{1}E\{\|\delta\tilde{u}_{1}\|_{1}\}. \quad (45)$$

From $$E\{\| x_{d}-x_{k}\|_{1}\}\leqslant E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}$$ and (45), it follows that

$$E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}\leqslant\frac{(1-\bar{\omega})\| b\|_{1}}{1-L_{f}}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{i=2}^{k-1}\frac{(1-\bar{\omega})\bar{\omega}^{k-i}\| b\|_{1}}{1-L_{f}}E\{\|\delta\tilde{u}_{i}\|_{1}\}+\frac{\bar{\omega}^{k-1}\| b\|_{1}}{1-L_{f}}E\{\|\delta\tilde{u}_{1}\|_{1}\}. \quad (46)$$

Furthermore, from (40) and (46), it follows that

$$\begin{aligned} E\{\|\delta\tilde{u}_{k+1}\|_{1}\}\leqslant{}&\Big(\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}+(1-\bar{\alpha})(1-\bar{\omega})L_{f}\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\Big)E\{\|\delta\tilde{u}_{k}\|_{1}\}+\bar{\alpha}^{k-1}\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}E\{\|\delta\tilde{u}_{1}\|_{1}\}\\ &+\sum_{m=2}^{k-1}(1-\bar{\alpha})(1-\bar{\omega})\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\bar{\omega}^{k-m}E\{\|\delta\tilde{u}_{m}\|_{1}\}+(1-\bar{\alpha})\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\bar{\omega}^{k-1}E\{\|\delta\tilde{u}_{1}\|_{1}\}\\ &+\sum_{m=2}^{k-1}\bar{\alpha}^{k-m}(1-\bar{\alpha})\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\Big((1-\bar{\omega})E\{\|\delta\tilde{u}_{m}\|_{1}\}+\sum_{i=2}^{m-1}(1-\bar{\omega})\bar{\omega}^{m-i}E\{\|\delta\tilde{u}_{i}\|_{1}\}+\bar{\omega}^{m-1}E\{\|\delta\tilde{u}_{1}\|_{1}\}\Big)\\ ={}&\lambda_{1}E\{\|\delta\tilde{u}_{k}\|_{1}\}+\sum_{i=1}^{k-1}\lambda_{i+1}E\{\|\delta\tilde{u}_{k-i}\|_{1}\}+\varphi_{k}E\{\|\delta\tilde{u}_{1}\|_{1}\}, \end{aligned} \quad (47)$$

where

$$\lambda_{1}=\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}+(1-\bar{\alpha})(1-\bar{\omega})L_{f}\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}},$$
$$\lambda_{i+1}=(1-\bar{\alpha})(1-\bar{\omega})\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\Big(\sum_{m=0}^{i}\bar{\omega}^{i-m}\bar{\alpha}^{m}\Big),\quad i=1,2,\ldots,k-1,$$
$$\varphi_{k}=(1-\bar{\alpha})\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\Big(\sum_{m=0}^{k}\bar{\omega}^{k-m}\bar{\alpha}^{m}\Big)+\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}\bar{\alpha}^{k+1}.$$

A direct computation shows that

$$\rho_{2}=\sum_{i=1}^{+\infty}\lambda_{i}=\| E\{\vert I-{\it{\Gamma}}{\it{\Lambda}}_{k}CB{\it{\Omega}}_{k}\vert\}\|_{1}+[(1-\bar{\alpha})(1-\bar{\omega})L_{f}+(\bar{\alpha}+\bar{\omega}-\bar{\alpha}\bar{\omega})]\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-L_{f}}$$

and $$\lim_{k\to\infty}\varphi_{k}=0$$. From the assumption $$\rho_{2}<1$$, Lemma 1 and (47), we have

$$\lim_{k\to\infty}E\{\|\delta\tilde{u}_{k}\|_{1}\}=0. \quad (48)$$

By (46), (48) and Lemma 2, we get

$$\lim_{k\to\infty}E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}=0. \quad (49)$$

From (30) and (31), it follows that

$$\delta y_{k}=C(x_{d}^{+}-x_{k}^{+}). \quad (50)$$

By (50) and a simple computation, we have

$$E\{\|\delta y_{k}\|_{1}\}\leqslant\| c\|_{1}E\{\| x_{d}^{+}-x_{k}^{+}\|_{1}\}. \quad (51)$$

Then (49) and (51) give $$\lim_{k\to\infty}E\{\|\delta y_{k}\|_{1}\}=0$$. This completes the proof. □

Remark 2 It is noticed that the condition $$L_{f}<1$$ is sufficient to guarantee the convergence but not necessary: in the authors' simulation experience, the proposed synchronous-substitution-type ILC may remain effective for some systems where the condition $$L_{f}<1$$ does not hold. This indicates that the condition $$L_{f}<1$$ is conservative, and it would be worthwhile to seek a weaker condition that still ensures the convergence.
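The closing step of the proof rests on a Lemma-1-type argument: any nonnegative sequence satisfying a recursion of the form (47), with $$\rho_{2}=\sum_{i=1}^{\infty}\lambda_{i}<1$$ and $$\lim_{k\to\infty}\varphi_{k}=0$$, is forced to zero. A minimal Python sketch of this mechanism follows; the geometric coefficients below are illustrative placeholders, not the paper's actual $$\lambda_{i}$$ and $$\varphi_{k}$$.

```python
# Illustration of the Lemma-1-type step behind (47)-(48): iterate
#   e_{k+1} = lam(1)*e_k + sum_{i=1}^{k-1} lam(i+1)*e_{k-i} + phi(k)*e_1
# with sum_i lam(i) = 0.8 < 1 and phi(k) -> 0, and observe e_k -> 0.
# All coefficients are made-up placeholders, not the paper's lambda_i.

def lam(i):
    # lam(1) weights e_k; lam(i) for i >= 2 decays geometrically so that
    # rho = sum_{i>=1} lam(i) = 0.5 + 0.3 = 0.8 < 1.
    return 0.5 if i == 1 else 0.3 * 0.6 * 0.4 ** (i - 2)

def phi(k):
    # vanishing forcing term, mirroring lim_{k->oo} phi_k = 0
    return 0.2 * 0.5 ** k

e = [1.0]  # e[0] plays the role of E{||delta u~_1||_1}
for k in range(1, 400):
    nxt = lam(1) * e[k - 1] + phi(k) * e[0]
    nxt += sum(lam(i + 1) * e[k - 1 - i] for i in range(1, k))
    e.append(nxt)

print(e[1], e[10], e[-1])  # the sequence decays geometrically towards zero
```

Running the recursion with equality (the worst case of the inequality) shows the bound decaying geometrically, which is exactly what Lemma 1 guarantees whenever $$\rho_{2}<1$$.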
Corollary 1 Assume that the proposed synchronous-substitution-type ILC scheme (4) with (2) and (3) is applied to the LTI system (8) and the initial state is resettable, i.e., $$x_{k}(0)=x_{d}(0)$$ for $$k=1,2,\ldots$$. Then the expectation $$E\{\|\delta y_{k}\|_{1}\}$$ of the tracking error $$\|\delta y_{k}\|_{1}$$ converges to zero as the iteration approaches infinity if the conditions $$\| A\|_{1}<1$$ and $$\tilde{\rho}_{1}=E\{\vert 1-{\it{\Gamma}}\alpha_{k}(1)CB\omega_{k}(0)\vert\}+[(1-\bar{\alpha})(1-\bar{\omega})\| A\|_{1}+\bar{\alpha}+\bar{\omega}-\bar{\alpha}\bar{\omega}]\frac{\vert{\it{\Gamma}}\vert\| b\|_{1}\| c\|_{1}}{1-\| A\|_{1}}<1$$ are satisfied.

Remark 3 It should be pointed out that the convergence condition $$\tilde{\rho}_{1}<1$$ does not hold for arbitrary packet-dropout probabilities. In other words, the condition $$\tilde{\rho}_{1}<1$$ constrains the admissible probabilities of the packet dropouts.

5. Numerical simulations

In order to evaluate the tracking performance in the statistical sense of mathematical expectation, the numerical experiments are carried out over 500 runs, where 'one run' means that the synchronous-substitution-type ILC-driven system operates until perfect tracking is reached. Thus, the expectation $$E\{\|\delta y_{k}\|_{1}\}$$ of the tracking error $$\|\delta y_{k}\|_{1}$$ is computed as $$E\{\|\delta y_{k}\|_{1}\}=\tfrac{1}{500}\sum_{m=1}^{500}\|\delta y_{k}^{(m)}\|_{1}$$ and the expectation $$E\{y_{k}(t)\}$$ of the system output $$y_{k}(t)$$ is calculated as the average $$\tfrac{1}{500}\sum_{m=1}^{500}y_{k}^{(m)}(t)$$, where the superscript $$(m)$$ marks the run order.

Example 1 Consider the following second-order linear system:

$$\begin{bmatrix}x_{1,k}(t+1)\\ x_{2,k}(t+1)\end{bmatrix}=\begin{bmatrix}\tfrac{1}{6}&\tfrac{1}{5}\\ \tfrac{1}{5}&\tfrac{1}{6}\end{bmatrix}\begin{bmatrix}x_{1,k}(t)\\ x_{2,k}(t)\end{bmatrix}+\begin{bmatrix}\tfrac{1}{2}\\ \tfrac{1}{2}\end{bmatrix}u_{k}(t),\quad t\in S^{-}=\{0,1,2,\ldots,29\},$$
$$y_{k}(t)=x_{1,k}(t)+x_{2,k}(t),\quad t\in S=\{0,1,2,\ldots,30\},\qquad x_{1,k}(0)=0,\quad x_{2,k}(0)=0.$$
The desired trajectory is given as $$y_{d}(t)=\sin(\tfrac{\pi}{15}t)$$, $$t\in S$$. The initial control signal is set as $$u_{1}(t)=0$$ for $$t\in S^{-}$$. In the proposed synchronous-substitution-type ILC scheme (4) with (2) and (3), the learning gain is chosen as $${\it{\Gamma}}=0.4$$. The probabilities of the input and output data packet dropouts are $$\bar{\omega}=0.1$$ and $$\bar{\alpha}=0.1$$, respectively. It is checked that the convergence condition of Corollary 1, $$\tilde{\rho}_{1}=\tfrac{2336}{2375}<1$$, holds. The system outputs at the 5th and 10th iterations are displayed in Fig. 2, where the star-solid curve refers to the desired trajectory, the circle-solid curve presents the system output at the 5th iteration and the square-dash one stands for the system output at the 10th iteration, respectively. Figure 3 displays that the tracking error $$\|\delta y_{k}\|_{1}$$ tends to zero as the iteration number increases. It is seen from Figs 2 and 3 that the proposed synchronous-substitution-type ILC scheme achieves a satisfactory tracking performance.

Fig. 2. System output of linear system. Fig. 3. Tracking error of linear system.

Figure 4 displays the average system outputs at the 5th and 10th iterations, where the star-solid curve presents the desired trajectory, the circle-solid curve stands for the average $$\tfrac{1}{500}\sum_{m=1}^{500}y_{5}^{(m)}(t)$$ of the system outputs at the 5th iteration and the square-dash one marks the average $$\tfrac{1}{500}\sum_{m=1}^{500}y_{10}^{(m)}(t)$$ of the system outputs at the 10th iteration, respectively. Figure 4 depicts that the synchronous-substitution-type ILC-driven system tracks the desired trajectory perfectly from a statistical point of view.
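The quoted value $$\tilde{\rho}_{1}=\tfrac{2336}{2375}$$ can be reproduced directly from the data of Example 1. The short Python sketch below does so in exact arithmetic; it assumes (consistently with the quoted number, though the definitions are not restated in this excerpt) that $$\alpha_{k}(1)$$ and $$\omega_{k}(0)$$ are independent Bernoulli variables equal to 1 with probabilities $$1-\bar{\alpha}$$ and $$1-\bar{\omega}$$, and that $$\|\cdot\|_{1}$$ is the induced (maximum absolute column sum) matrix norm.

```python
from fractions import Fraction as F

# Data of Example 1: system matrices, learning gain and dropout probabilities.
A = [[F(1, 6), F(1, 5)], [F(1, 5), F(1, 6)]]
b = [F(1, 2), F(1, 2)]               # input column vector
c = [F(1), F(1)]                     # output row vector
Gamma = F(2, 5)                      # learning gain 0.4
a_bar, w_bar = F(1, 10), F(1, 10)    # output/input dropout probabilities

# Induced 1-norms (maximum absolute column sums).
norm_A = max(abs(A[0][j]) + abs(A[1][j]) for j in range(2))  # = 11/30
norm_b = abs(b[0]) + abs(b[1])                               # = 1
norm_c = max(abs(ci) for ci in c)                            # = 1
CB = sum(ci * bi for ci, bi in zip(c, b))                    # cb = 1

# E{|1 - Gamma*alpha*CB*omega|}: the product alpha*omega equals 1 with
# probability (1 - a_bar)(1 - w_bar) and 0 otherwise (assumption above).
p_both = (1 - a_bar) * (1 - w_bar)
exp_term = p_both * abs(1 - Gamma * CB) + (1 - p_both)

rho_1 = exp_term + (
    ((1 - a_bar) * (1 - w_bar) * norm_A + a_bar + w_bar - a_bar * w_bar)
    * abs(Gamma) * norm_b * norm_c / (1 - norm_A)
)

print(rho_1)  # 2336/2375, matching the value quoted in Example 1
```

Exact rationals (`fractions.Fraction`) avoid any floating-point doubt about whether the computed $$\tilde{\rho}_{1}$$ really falls below 1.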
Figure 5 exhibits that the average $$\tfrac{1}{500}\sum_{m=1}^{500}\|\delta y_{k}^{(m)}\|_{1}$$ of the tracking errors converges to zero as the iteration number increases.

Fig. 4. Average system output of linear system. Fig. 5. Average tracking error of linear system.

Example 2 Consider the following nonlinear system:

$$\begin{bmatrix}x_{1,k}(t+1)\\ x_{2,k}(t+1)\end{bmatrix}=\begin{bmatrix}\tfrac{1}{3}\sin(x_{2,k}(t))\\ \tfrac{1}{3}\cos(x_{1,k}(t))\end{bmatrix}+\begin{bmatrix}\tfrac{1}{2}\\ \tfrac{1}{2}\end{bmatrix}u_{k}(t),\quad t\in S^{-}=\{0,1,2,\ldots,29\},$$
$$y_{k}(t)=x_{1,k}(t)+x_{2,k}(t),\quad t\in S=\{0,1,2,\ldots,30\},\qquad x_{1,k}(0)=0,\quad x_{2,k}(0)=0.$$

The desired trajectory is set as $$y_{d}(t)=\sin(\tfrac{\pi}{15}t)$$, $$t\in S$$. The control signal of the first iteration is chosen as $$u_{1}(t)=0$$ for $$t\in S^{-}$$. It is verified that the Lipschitz constant of the function $$f(x_{1,k}(t),x_{2,k}(t))=[\tfrac{1}{3}\sin(x_{2,k}(t)),\tfrac{1}{3}\cos(x_{1,k}(t))]^{\top}$$ is $$L_{f}=\tfrac{1}{3}$$. The learning gain is chosen as $${\it{\Gamma}}=0.4$$. The probabilities of the input and output data packet dropouts are $$\bar{\omega}=0.1$$ and $$\bar{\alpha}=0.1$$, respectively. It is verified that the convergence condition $$\rho_{2}=0.952<1$$ holds. The system outputs at the 5th and 10th iterations are exhibited in Fig. 6, where the star-solid curve refers to the desired trajectory, the circle-solid curve presents the system output at the 5th iteration and the square-dash one marks the system output at the 10th iteration, respectively, which shows that the synchronous-substitution-type ILC-driven system tracks the desired trajectory well. Figure 7 displays the tendency of the tracking error $$\|\delta y_{k}\|_{1}$$ along the iteration direction.

Fig. 6. System output of nonlinear system. Fig. 7. Tracking error of nonlinear system.
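The condition $$\rho_{2}=0.952$$ quoted above can likewise be cross-checked in exact arithmetic. The sketch below assumes (an interpretation consistent with the quoted number) that the Bernoulli variables in $${\it{\Lambda}}_{k}$$ and $${\it{\Omega}}_{k}$$ equal 1 with probabilities $$1-\bar{\alpha}$$ and $$1-\bar{\omega}$$, and uses $$cb=1$$ and $$\| b\|_{1}=\| c\|_{1}=1$$ obtained from the system data of Example 2.

```python
from fractions import Fraction as F

# Data of Example 2: Lipschitz constant, learning gain, dropout probabilities.
Lf = F(1, 3)                       # Lipschitz constant of f
Gamma = F(2, 5)                    # learning gain 0.4
a_bar, w_bar = F(1, 10), F(1, 10)  # output/input dropout probabilities
CB = F(1)                          # cb = [1 1] * [1/2, 1/2]^T = 1
norm_b = F(1)                      # induced 1-norm of b = [1/2, 1/2]^T
norm_c = F(1)                      # induced 1-norm of the row vector c = [1, 1]

# E{|1 - Gamma*Lambda_k*CB*Omega_k|} under the Bernoulli assumption above:
# the product of the two variables is 1 with probability (1-a_bar)(1-w_bar).
p_both = (1 - a_bar) * (1 - w_bar)
exp_term = p_both * abs(1 - Gamma * CB) + (1 - p_both)

rho_2 = exp_term + (
    ((1 - a_bar) * (1 - w_bar) * Lf + a_bar + w_bar - a_bar * w_bar)
    * abs(Gamma) * norm_b * norm_c / (1 - Lf)
)

print(rho_2)  # 119/125, i.e. 0.952, matching the quoted convergence value
```

The same computation also shows how larger dropout probabilities $$\bar{\alpha},\bar{\omega}$$ inflate $$\rho_{2}$$, in line with Remark 3's observation that the convergence condition restricts the admissible dropout rates.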
Figure 8 outlines the averages $$\tfrac{1}{500}\sum_{m=1}^{500}y_{5}^{(m)}(t)$$ and $$\tfrac{1}{500}\sum_{m=1}^{500}y_{10}^{(m)}(t)$$ of the system outputs at the 5th and 10th iterations, where the star-solid curve refers to the desired trajectory, the circle-solid curve stands for the average of the system outputs at the 5th iteration and the square-dash one marks the average of the system outputs at the 10th iteration, respectively. Figure 9 exhibits that the average $$\tfrac{1}{500}\sum_{m=1}^{500}\|\delta y_{k}^{(m)}\|_{1}$$ of the tracking errors decreases to zero as the iteration index increases.

Fig. 8. Average system output of nonlinear system. Fig. 9. Average tracking error of nonlinear system.

6. Conclusion

This article develops a synchronous-substitution-type ILC scheme for networked discrete-time linear and nonlinear time-invariant systems with input and output data dropouts. The developed learning algorithm is constructed by replacing the dropped data with the synchronous data used at the previous iteration. Moreover, the zero-error convergence of the synchronous-substitution-type ILC is derived for SISO linear and affine nonlinear time-invariant systems, respectively, with the tracking errors evaluated by the mathematical expectation of their 1-norms. The derivations, together with the numerical simulations, convey that the probabilities of the input and output data dropouts may influence the tracking performance. However, the derived convergence condition is conservative, and it is worthwhile to seek a weaker condition that ensures the convergence.
Funding

National Natural Science Foundation of China (F010114-60974140 and 61273135).

References

Ahn, H. S., Chen, Y. & Moore, K. L. (2006). Intermittent iterative learning control. Proceedings of the 2006 IEEE International Conference on Intelligent Control. IEEE: Munich, Germany, pp. 144–149.

Ahn, H. S., Moore, K. L. & Chen, Y. Q. (2008). Discrete-time intermittent iterative learning control with independent data dropouts. Proceedings of the 17th IFAC World Congress. Elsevier: Seoul, Korea, pp. 12442–12447.

Arimoto, S., Kawamura, S. & Miyazaki, F. (1984). Bettering operation of robots by learning. J. Robotic Syst., 1, 123–140.

Bristow, D. A., Tharayil, M. & Alleyne, A. G. (2006). A survey of iterative learning control. IEEE Contr. Syst. Mag., 26, 96–114.

Bu, X. H. & Hou, Z. S. (2011). Stability of iterative learning control with data dropouts via asynchronous dynamical system. Int. J. Autom. Comput., 8, 29–36.

Bu, X. H., Hou, Z. S. & Chi, R. H. (2013). Effect analysis of data dropout on iterative learning control. 2013 25th Chinese Control and Decision Conference (CCDC), pp. 991–996.

Bu, X. H., Yu, F. S., Hou, Z. S. & Wang, F. Z. (2013). Iterative learning control for a class of nonlinear systems with random packet losses. Nonl. Anal.-Real World Appl., 14, 567–580.

Bu, X. H., Hou, Z. S., Jin, S. T. & Chi, R. H. (2016). An iterative learning control design approach for networked control systems with data dropouts. Int. J. Robust Nonl. Contr., 26, 91–109.

Chen, Y. Q., Wen, C., Gong, Z. & Sun, M. (1999). An iterative learning controller with initial state learning. IEEE Trans. Automat. Contr., 44, 371–376.

Chen, Y. Q., Moore, K. L., Yu, J. & Zhang, T. (2008). Iterative learning control and repetitive control in hard disk drive industry—a tutorial. Int. J. Adapt. Contr. Signal Process., 22, 325–343.

Krtolica, R., Ozguner, U., Chan, H., Goktas, H., Winkelman, J. & Liubakka, M. (1994). Stability of linear feedback-systems with random communication delays. Int. J. Contr., 59, 925–953.

Li, X. F., Xu, J. X. & Huang, D. Q. (2014). An iterative learning control approach for linear systems with randomly varying trial lengths. IEEE Trans. Automat. Contr., 59, 1954–1960.

Li, X. F., Xu, J. X. & Huang, D. Q. (2015). Iterative learning control for nonlinear dynamic systems with randomly varying trial lengths. Int. J. Adapt. Contr. Signal Process., 29, 1341–1353.

Liu, C. P., Xu, J. X. & Wu, J. (2012). Iterative learning control for remote control systems with communication delay and data dropout. Math. Probl. Eng., https://doi.org/10.1155/2012/705474.

Liu, J. & Ruan, X. E. (2016). Networked iterative learning control approach for nonlinear systems with random communication delay. Int. J. Syst. Sci., https://doi.org/10.1080/00207721.2016.1165894.

Meng, D. Y., Jia, Y. M., Du, J. P. & Yu, F. S. (2010). Initial shift problem for robust iterative learning control systems with polytopic-type uncertainty. Int. J. Syst. Sci., 41, 825–838.

Mi, C. T., Lin, H. & Zhang, Y. (2005). Iterative learning control of antilock braking of electric and hybrid vehicles. IEEE Trans. Veh. Tech., 54, 486–494.

Park, K. H. & Bien, Z. Z. (2005). Intervalized iterative learning control for monotonic convergence in the sense of sup-norm. Int. J. Contr., 78, 1218–1227.

Ruan, X. E., Bien, Z. Z. & Wang, Q. (2012). Convergence properties of iterative learning control processes in the sense of the Lebesgue-P norm. Asian J. Contr., 14, 1095–1107.

Ruan, X. E. & Li, Z. Z. (2014). Convergence characteristics of PD-type iterative learning control in discrete frequency domain. J. Process Contr., 24, 86–94.

Saab, S. S. (2001). A discrete-time stochastic learning control algorithm. IEEE Trans. Automat. Contr., 46, 877–887.

Shen, D. & Wang, Y. Q. (2014). Survey on stochastic iterative learning control. J. Process Contr., 24, 64–77.

Shen, D. & Wang, Y. Q. (2015a). Iterative learning control for networked stochastic systems with random packet losses. Int. J. Contr., 88, 959–968.

Shen, D. & Wang, Y. Q. (2015b). ILC for networked nonlinear systems with unknown control direction through random lossy channel. Syst. Contr. Lett., 77, 30–39.

Wang, D., Wang, J. L. & Wang, W. (2013). H-infinity controller design of networked control systems with Markov packet dropouts. IEEE Trans. Syst. Man Cybern. Syst., 43, 689–697.

Wen, D. L. & Yang, G. H. (2011). Dynamic output feedback H-infinity control for networked control systems with quantisation and random communication delays. Int. J. Syst. Sci., 42, 1723–1734.

Wu, J. & Chen, T. W. (2007). Design of networked control systems with packet dropouts. IEEE Trans. Automat. Contr., 52, 1314–1319.

Yang, F. W., Wang, Z. D., Hung, Y. S. & Gani, M. (2006). H-infinity control for networked systems with random communication delays. IEEE Trans. Automat. Contr., 51, 511–518.