IMA Journal of Mathematical Control and Information, Volume Advance Article (2) – Dec 25, 2016

20 pages


- Publisher
- Oxford University Press
- Copyright
- © Crown copyright 2016.
- ISSN
- 0265-0754
- eISSN
- 1471-6887
- D.O.I.
- 10.1093/imamci/dnw068

Abstract This article addresses the problem of iterative learning control (ILC) for large-scale linear discrete-time systems with fixed initial state errors. Two kinds of system structure are considered: systems whose input dimension is less than or equal to the output dimension, and systems whose output dimension is less than or equal to the input dimension. According to the characteristics of each structure, decentralized learning schemes are proposed for such large-scale linear discrete-time systems, and the corresponding discrete-time output limiting trajectories under the action of these schemes are presented. The proposed controller of each subsystem relies only on local output variables, without exchanging any information with other subsystems. Using the contraction mapping method, we show that the schemes guarantee that the output of each subsystem converges uniformly to the corresponding discrete-time output limiting trajectory at all given discrete-time points within a finite time interval. Furthermore, decentralized initial rectifying strategies are applied to the large-scale linear discrete-time systems to eliminate the effect of the fixed initial state errors, and the corresponding decentralized learning schemes are established. Under these schemes, the output of each subsystem converges to its desired reference trajectory at all given discrete-time points beyond a pre-specified initial interval, no matter what value the fixed initial state error takes. Simulation examples illustrate the effectiveness of the proposed method.

1. Introduction

Since the complete algorithm of iterative learning control (ILC) was first proposed by Arimoto et al.
(1984), it has become a central topic in control theory and has attracted widespread attention in recent years (see Bondi et al., 1988; Sugie & Ono, 1991; Heinzinger et al., 1992; Ahn et al., 1993; Saab, 1994; Xu & Tan, 2002). The basic idea of ILC is to improve the control signal for the present operation cycle by feeding back the control error from the previous cycle. The classical formulation of the ILC design problem is as follows: find an update mechanism for the control input of a new cycle, based on information from previous cycles, so that the output trajectory converges asymptotically to the desired reference trajectory. Owing to its simplicity and effectiveness, ILC has been found to be a good alternative in many areas and applications; see Bristow et al. (2006) for detailed results. Nowadays, ILC plays an increasingly important role in controlling repeatable processes.

In ILC design, an interesting question is how to set the initial value of the iterative system properly at each iteration, so that the output trajectory of the iterative system converges to the desired reference trajectory. In early work, a common assumption was that the initial condition at each iteration should equal the initial condition of the desired reference trajectory (see Arimoto et al., 1984; Sugie & Ono, 1991; Ahn et al., 1993; Xu & Tan, 2002), or lie within its neighbourhood (see Bondi et al., 1988; Heinzinger et al., 1992; Saab, 1994). In the case of perturbed initial conditions, boundedness of the tracking error was established and the error bound was shown to be proportional to the bound on the initial condition errors (see Bondi et al., 1988; Heinzinger et al., 1992; Saab, 1994).
Recently, more attention has been paid to the performance of ILC in the presence of fixed initial state errors (see Porter & Mohamed, 1991; Lee & Bien, 1996; Park, 1999; Sun & Wang, 2002; Sun et al., 2015), and initial rectifying strategies have been introduced into learning algorithms (see Sun & Wang, 2002; Sun et al., 2015). For a class of partially irregular multivariable plants, Porter & Mohamed (1991) utilized initial impulse rectifying to eliminate the effect of the fixed state errors, so that complete reference trajectory tracking over the whole time interval was achieved. In the case of an initial state shift with fixed error, Lee & Bien (1996) gave the output limiting trajectory for the first time under the action of the D-type and PD-type learning schemes. Based on Lee & Bien (1996), Park (1999) further considered the convergence performance under the action of the PID-type learning scheme and extended the convergence result to nonlinear systems. Sun & Wang (2002) addressed the initial shift problem in ILC for affine nonlinear systems with relative degree, and ensured uniform convergence of the output trajectory to a desired one joined smoothly with a specified transient trajectory from the starting position, in the presence of fixed initial state errors. Sun et al. (2015) proposed a feedback-aided PD-type learning algorithm to solve the initial shift problem for linear time-invariant systems with fixed initial state errors. In Cheng & Moore (2002) and Yu et al. (2002), the ILC problem for discrete-time systems was considered under initial resetting conditions, and PD-type learning algorithms were designed. In the presence of fixed initial state errors, Sun & Wang (2003) discussed the ILC problem for a class of nonlinear discrete-time systems with arbitrary relative degree, and a learning algorithm was obtained based on the solutions of a class of difference equations.
Furthermore, the tracking performance was improved by the introduction of initial rectifying action, and complete tracking was achieved over a specified interval.

A large-scale system is a compound system composed of interconnected subsystems. Many practical control problems involve large-scale system models, such as power systems, chemical engineering processes, large space structures and computer communication networks. Because of the reliability of implementation, decentralized control has become an active branch of large-scale system theory (see Jamshidi, 1983; Pagilla, 1999; Pagilla & Zhong, 2003; Tae et al., 2012a, 2012b), and in recent years some results for large-scale continuous-time systems have been obtained based on decentralized ILC (see Dong-Hwan et al., 1993; Wu, 2007; Ruan et al., 2008a, 2008b; Fu, 2013). However, there has not been much research on ILC for large-scale discrete-time systems. Shen & Chen (2012) considered the ILC problem for a class of large-scale nonlinear discrete-time systems, under initial resetting conditions in the sense that the initial value of each subsystem converges asymptotically to the initial value of the corresponding desired reference trajectory as the iteration index increases. This observation motivates the present study. In this article, the PD-type ILC technique is applied to a class of large-scale discrete-time systems with fixed initial state errors. Decentralized iterative learning algorithms and decentralized initial rectifying strategies are introduced to achieve the corresponding results.
Two conclusions are obtained: first, the output of each subsystem converges uniformly to the corresponding output limiting trajectory at all given discrete-time points within the whole time interval under the action of the decentralized iterative learning schemes; second, the output of each subsystem converges uniformly to the corresponding desired reference trajectory at all given discrete-time points within the pre-specified interval under the action of the decentralized initial rectifying strategies. Two kinds of convergence conditions are proposed in the decentralized ILC design and, correspondingly, two alternative proofs are presented.

The following notation is adopted in this article. I denotes the identity matrix. For a given vector or matrix X, ||X|| denotes its Euclidean norm. For discrete-time systems, t denotes the discrete-time index, and t∈[0,T] denotes the integer sequence t=0,1,2,⋯,T. For a function h: [0,T]→R^n and a real number α≥1, ||h(⋅)||s denotes the supremum norm defined by ||h(⋅)||s = sup_{t∈[0,T]} ||h(t)||, and ||h(⋅)||α denotes the α-norm defined by ||h(⋅)||α = sup_{t∈[0,T]} α^{−t} ||h(t)||. From Wang (1998), we know that ||h(⋅)||s and ||h(⋅)||α are equivalent, i.e. either norm can be used to prove convergence.

2. Decentralized iterative learning algorithms

Consider the following large-scale linear discrete-time system:

xi(t+1) = Ai xi(t) + Bi ui(t) + ∑_{j=1,j≠i}^{N} Dij xj(t), yi(t) = Ci xi(t), (2.1)

where i=1,2,⋯,N indexes the subsystems, t∈[0,T] denotes the time instant, xi(t)∈R^{ni}, ui(t)∈R^{ri} and yi(t)∈R^{mi} represent the state, control input and output of the ith subsystem, respectively, and Ai, Bi, Ci and Dij are matrices of appropriate dimensions. It is assumed that the system (2.1) is repeatable over t∈[0,T]. Rewrite the system (2.1) at each iteration as

xik(t+1) = Ai xik(t) + Bi uik(t) + ∑_{j=1,j≠i}^{N} Dij xjk(t), yik(t) = Ci xik(t), (2.2)

where the subscript k denotes the iteration index.
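The repeated, interconnected dynamics (2.2) can be simulated trial by trial. The following Python sketch is our own illustration, not part of the paper; the function name and data layout are assumptions made for the example:

```python
import numpy as np

def rollout(A, B, C, D, x0, u, T):
    """Simulate one trial of the interconnected system (2.2):
    x_i(t+1) = A_i x_i(t) + B_i u_i(t) + sum_{j != i} D_ij x_j(t),
    y_i(t)   = C_i x_i(t),  t = 0, ..., T.
    A, B, C, x0 are lists indexed by subsystem; D is a dict {(i, j): D_ij};
    u[i] has shape (T, r_i). Returns the outputs y_i, each of shape (T+1, m_i)."""
    N = len(A)
    x = [np.zeros((T + 1, A[i].shape[0])) for i in range(N)]
    for i in range(N):
        x[i][0] = x0[i]                      # Assumption 2.1: fixed initial state
    for t in range(T):
        for i in range(N):
            coupling = sum(D[i, j] @ x[j][t] for j in range(N) if j != i)
            x[i][t + 1] = A[i] @ x[i][t] + B[i] @ u[i][t] + coupling
    return [x[i] @ C[i].T for i in range(N)]
```

Each learning iteration of the schemes in this section amounts to one such rollout followed by a local input update per subsystem.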
Construct the decentralized learning schemes for the system (2.2) as follows:

ui(k+1)(t) = uik(t) + Γi(eik(t+1) − e^{−Li} eik(t)), (2.3)

where Γi∈R^{ri×mi} and Li∈R^{mi×mi} are the learning gain matrices, every eigenvalue of Li has positive real part, and eik(t) = yid(t) − yik(t) is the tracking error of the ith subsystem at the kth iteration, i=1,2,⋯,N.

Before giving our decentralized ILC laws, we introduce the following assumptions for the system (2.2):

Assumption 2.1 For each iteration index k, the initial value of the ith subsystem is always set to the fixed value xi0, i.e. xik(0) = xi0, i=1,2,⋯,N.

Assumption 2.2 For the given trajectories yid(t) (i=1,2,⋯,N), there exist uid∗(t) (i=1,2,⋯,N) such that

xid∗(t+1) = Ai xid∗(t) + Bi uid∗(t) + ∑_{j=1,j≠i}^{N} Dij xjd∗(t), yid∗(t) = Ci xid∗(t),

where yid∗(t) = yid(t) − ei∗(t) and ei∗(t) = e^{−Li t}(yid(0) − Ci xi0).

Remark 2.1 Assumptions 2.1 and 2.2 are the usual conditions of PD-type ILC for systems with fixed initial state errors (see Lee & Bien, 1996; Sun et al., 2015).

We have the following theorems under Assumptions 2.1 and 2.2. For system (2.2) whose input dimension is less than or equal to the output dimension, i.e. ri≤mi, i=1,2,⋯,N, we have:

Theorem 2.1 If there exist gain matrices Γi∈R^{ri×mi} (i=1,2,⋯,N) such that

ρ = max_{1≤i≤N} ||I − Γi Ci Bi|| < 1, (2.4)

then the output yik(t) of each subsystem converges uniformly to the corresponding output limiting trajectory yid∗(t) over t∈[0,T] under the action of the learning scheme (2.3), i.e. lim_{k→∞} ||eik∗||s = 0, where eik∗(t) = yid∗(t) − yik(t), i=1,2,⋯,N.

Proof. From yid∗(t) = yid(t) − e^{−Li t}(yid(0) − Ci xi0), we have yid∗(0) = Ci xi0. It follows from Assumption 2.2 that yid∗(0) = Ci xid∗(0). Thus, it is reasonable to set the initial value as follows:

xid∗(0) = xi0. (2.5)

Since eik(t) = yid(t) − yik(t) and eik∗(t) = yid∗(t) − yik(t),

eik(t) = eik∗(t) + ei∗(t) = eik∗(t) + e^{−Li t}(yid(0) − Ci xi0).
So, we have

eik(t+1) − e^{−Li} eik(t) = eik∗(t+1) + e^{−Li(t+1)}(yid(0) − Ci xi0) − e^{−Li} eik(t) = eik∗(t+1) + e^{−Li} e^{−Li t}(yid(0) − Ci xi0) − e^{−Li} eik(t) = eik∗(t+1) + e^{−Li}(eik(t) − eik∗(t)) − e^{−Li} eik(t) = eik∗(t+1) − e^{−Li} eik∗(t). (2.6)

Denote Δuik∗(t) = uid∗(t) − uik(t) and Δxik∗(t) = xid∗(t) − xik(t), i=1,2,⋯,N. From (2.2), (2.3), (2.6) and Assumption 2.2, we obtain

Δui(k+1)∗(t) = uid∗(t) − ui(k+1)(t) = uid∗(t) − uik(t) − (ui(k+1)(t) − uik(t)) = Δuik∗(t) − Γi(eik(t+1) − e^{−Li} eik(t)) = Δuik∗(t) − Γi(eik∗(t+1) − e^{−Li} eik∗(t)) = Δuik∗(t) − Γi(Ci xid∗(t+1) − Ci xik(t+1) − e^{−Li}(Ci xid∗(t) − Ci xik(t))) = Δuik∗(t) − Γi Ci Δxik∗(t+1) + Γi e^{−Li} Ci Δxik∗(t) = Δuik∗(t) − Γi Ci (Ai Δxik∗(t) + Bi Δuik∗(t) + ∑_{j=1,j≠i}^{N} Dij Δxjk∗(t)) + Γi e^{−Li} Ci Δxik∗(t) = (I − Γi Ci Bi) Δuik∗(t) + (Γi e^{−Li} Ci − Γi Ci Ai) Δxik∗(t) − Γi Ci ∑_{j=1,j≠i}^{N} Dij Δxjk∗(t).

Taking the α-norm on both sides of the above equality yields

||Δui(k+1)∗||α ≤ ||I − Γi Ci Bi|| ||Δuik∗||α + ci1 ||Δxik∗||α + ∑_{j=1,j≠i}^{N} dij ||Δxjk∗||α,

where ci1 = ||Γi e^{−Li} Ci − Γi Ci Ai|| and dij = ||Γi Ci Dij||, i=1,2,⋯,N. Summing the above inequality over i from 1 to N and combining with (2.4), we obtain

∑_{i=1}^{N} ||Δui(k+1)∗||α ≤ ρ ∑_{i=1}^{N} ||Δuik∗||α + c1 ∑_{i=1}^{N} ||Δxik∗||α + (N−1) c2 ∑_{i=1}^{N} ||Δxik∗||α = ρ ∑_{i=1}^{N} ||Δuik∗||α + c3 ∑_{i=1}^{N} ||Δxik∗||α, (2.7)

where c1 = max_{1≤i≤N} ci1, c2 = max_{1≤i,j≤N,i≠j} dij and c3 = c1 + (N−1) c2. It follows from (2.2) and Assumption 2.2 that

Δxik∗(t+1) = Ai Δxik∗(t) + Bi Δuik∗(t) + ∑_{j=1,j≠i}^{N} Dij Δxjk∗(t).

Taking the Euclidean norm on both sides of the above equality yields

||Δxik∗(t+1)|| ≤ ||Ai|| ||Δxik∗(t)|| + ||Bi|| ||Δuik∗(t)|| + ∑_{j=1,j≠i}^{N} ||Dij|| ||Δxjk∗(t)||.

From (2.5) and Assumption 2.1, we know that Δxik∗(0) = 0. Therefore,

||Δxik∗||α = max_{t∈[0,T−1]} α^{−(t+1)} ||Δxik∗(t+1)|| ≤ α^{−1} ||Ai|| max_{t∈[0,T−1]} α^{−t} ||Δxik∗(t)|| + α^{−1} ||Bi|| max_{t∈[0,T−1]} α^{−t} ||Δuik∗(t)|| + α^{−1} ∑_{j=1,j≠i}^{N} ||Dij|| max_{t∈[0,T−1]} α^{−t} ||Δxjk∗(t)|| ≤ α^{−1} ||Ai|| max_{t∈[0,T]} α^{−t} ||Δxik∗(t)|| + α^{−1} ||Bi|| max_{t∈[0,T]} α^{−t} ||Δuik∗(t)|| + α^{−1} ∑_{j=1,j≠i}^{N} ||Dij|| max_{t∈[0,T]} α^{−t} ||Δxjk∗(t)|| = α^{−1} ||Ai|| ||Δxik∗||α + α^{−1} ||Bi|| ||Δuik∗||α + α^{−1} ∑_{j=1,j≠i}^{N} ||Dij|| ||Δxjk∗||α.
Summing the above inequality over i from 1 to N, we have

∑_{i=1}^{N} ||Δxik∗||α ≤ α^{−1} c4 ∑_{i=1}^{N} ||Δxik∗||α + α^{−1} c5 ∑_{i=1}^{N} ||Δuik∗||α + α^{−1}(N−1) c6 ∑_{i=1}^{N} ||Δxik∗||α = α^{−1} c7 ∑_{i=1}^{N} ||Δxik∗||α + α^{−1} c5 ∑_{i=1}^{N} ||Δuik∗||α,

where c4 = max_{1≤i≤N} ||Ai||, c5 = max_{1≤i≤N} ||Bi||, c6 = max_{1≤i,j≤N,i≠j} ||Dij|| and c7 = c4 + (N−1) c6. Choosing α so that α^{−1} c7 < 1, we have

∑_{i=1}^{N} ||Δxik∗||α ≤ α^{−1} c8 ∑_{i=1}^{N} ||Δuik∗||α, (2.8)

where c8 = c5/(1 − α^{−1} c7). Substituting (2.8) into (2.7) results in

∑_{i=1}^{N} ||Δui(k+1)∗||α ≤ ρ^ ∑_{i=1}^{N} ||Δuik∗||α, (2.9)

where ρ^ = ρ + α^{−1} c3 c8. Since 0 ≤ ρ < 1 by (2.4), it is possible to choose α sufficiently large so that ρ^ < 1. Then (2.9) is a contraction in ∑_{i=1}^{N} ||Δuik∗||α. It follows from (2.8) and (2.9) that lim_{k→∞} ∑_{i=1}^{N} ||Δxik∗||α = 0. Therefore, we have lim_{k→∞} ||eik∗||s = 0, i=1,2,⋯,N. This completes the proof. □

For system (2.2) whose output dimension is less than or equal to the input dimension, i.e. mi≤ri, i=1,2,⋯,N, we have:

Theorem 2.2 If there exist gain matrices Γi∈R^{ri×mi} (i=1,2,⋯,N) such that

ρ = max_{1≤i≤N} ||I − Ci Bi Γi|| < 1, (2.10)

then the output yik(t) of each subsystem converges uniformly to the corresponding output limiting trajectory yid∗(t) over t∈[0,T] under the action of the learning scheme (2.3), i.e. lim_{k→∞} ||eik∗||s = 0, where eik∗(t) = yid∗(t) − yik(t), i=1,2,⋯,N.

Proof. Denote δxik(t) = xi(k+1)(t) − xik(t) and δuik(t) = ui(k+1)(t) − uik(t). From (2.2), (2.3) and (2.6), we have

δxik(t+1) = Ai δxik(t) + Bi δuik(t) + ∑_{j=1,j≠i}^{N} Dij δxjk(t) = Ai δxik(t) + Bi Γi(eik(t+1) − e^{−Li} eik(t)) + ∑_{j=1,j≠i}^{N} Dij δxjk(t) = Ai δxik(t) + Bi Γi(eik∗(t+1) − e^{−Li} eik∗(t)) + ∑_{j=1,j≠i}^{N} Dij δxjk(t). (2.11)

From (2.2) and (2.11), we obtain

ei(k+1)∗(t) = eik∗(t) − (yi(k+1)(t) − yik(t)) = eik∗(t) − Ci δxik(t) = eik∗(t) − Ci(Ai δxik(t−1) + Bi Γi(eik∗(t) − e^{−Li} eik∗(t−1)) + ∑_{j=1,j≠i}^{N} Dij δxjk(t−1)) = (I − Ci Bi Γi) eik∗(t) − Ci(Ai δxik(t−1) − Bi Γi e^{−Li} eik∗(t−1) + ∑_{j=1,j≠i}^{N} Dij δxjk(t−1)).
Taking the Euclidean norm on both sides of the above equality yields

||ei(k+1)∗(t)|| ≤ ||I − Ci Bi Γi|| ||eik∗(t)|| + c9 ||δxik(t−1)|| + c10 ||eik∗(t−1)|| + c11 ∑_{j=1,j≠i}^{N} ||δxjk(t−1)||,

where c9 = max_{1≤i≤N} ||Ci Ai||, c10 = max_{1≤i≤N} ||Ci Bi Γi e^{−Li}|| and c11 = max_{1≤i,j≤N,i≠j} ||Ci Dij||. Note that ei(k+1)∗(0) = 0. Therefore,

||ei(k+1)∗||α = max_{t∈[1,T]} α^{−t} ||ei(k+1)∗(t)|| ≤ ||I − Ci Bi Γi|| max_{t∈[1,T]} α^{−t} ||eik∗(t)|| + c9 α^{−1} max_{t∈[1,T]} α^{−(t−1)} ||δxik(t−1)|| + c10 α^{−1} max_{t∈[1,T]} α^{−(t−1)} ||eik∗(t−1)|| + c11 α^{−1} ∑_{j=1,j≠i}^{N} max_{t∈[1,T]} α^{−(t−1)} ||δxjk(t−1)|| ≤ ||I − Ci Bi Γi|| ||eik∗||α + c9 α^{−1} ||δxik||α + c10 α^{−1} ||eik∗||α + c11 α^{−1} ∑_{j=1,j≠i}^{N} ||δxjk||α.

Summing the above inequality over i from 1 to N and combining with (2.10), we obtain

∑_{i=1}^{N} ||ei(k+1)∗||α ≤ ρ ∑_{i=1}^{N} ||eik∗||α + c9 α^{−1} ∑_{i=1}^{N} ||δxik||α + c10 α^{−1} ∑_{i=1}^{N} ||eik∗||α + c11 (N−1) α^{−1} ∑_{i=1}^{N} ||δxik||α = (ρ + c10 α^{−1}) ∑_{i=1}^{N} ||eik∗||α + c12 α^{−1} ∑_{i=1}^{N} ||δxik||α, (2.12)

where c12 = c9 + c11(N−1). By δxik(0) = 0 and (2.11), we have

||δxik||α = max_{t∈[0,T−1]} α^{−(t+1)} ||δxik(t+1)|| ≤ c4 α^{−1} max_{t∈[0,T−1]} α^{−t} ||δxik(t)|| + c13 max_{t∈[0,T−1]} α^{−(t+1)} ||eik∗(t+1)|| + c14 α^{−1} max_{t∈[0,T−1]} α^{−t} ||eik∗(t)|| + c6 α^{−1} ∑_{j=1,j≠i}^{N} max_{t∈[0,T−1]} α^{−t} ||δxjk(t)|| ≤ c4 α^{−1} ||δxik||α + c13 ||eik∗||α + c14 α^{−1} ||eik∗||α + c6 α^{−1} ∑_{j=1,j≠i}^{N} ||δxjk||α = c4 α^{−1} ||δxik||α + c15 ||eik∗||α + c6 α^{−1} ∑_{j=1,j≠i}^{N} ||δxjk||α,

where c13 = max_{1≤i≤N} ||Bi Γi||, c14 = max_{1≤i≤N} ||Bi Γi e^{−Li}|| and c15 = c13 + c14 α^{−1}. Choosing α so that c4 α^{−1} + c6 α^{−1}(N−1) < 1, we have

∑_{i=1}^{N} ||δxik||α ≤ c16 ∑_{i=1}^{N} ||eik∗||α, (2.13)

where c16 = c15/(1 − c4 α^{−1} − c6 α^{−1}(N−1)). Substituting (2.13) into (2.12) results in

∑_{i=1}^{N} ||ei(k+1)∗||α ≤ ρ¯ ∑_{i=1}^{N} ||eik∗||α,

where ρ¯ = ρ + c10 α^{−1} + c12 c16 α^{−1}. Since 0 ≤ ρ < 1 by (2.10), it is possible to choose α sufficiently large so that ρ¯ < 1. Then lim_{k→∞} ∑_{i=1}^{N} ||eik∗||α = 0. Therefore, we have lim_{k→∞} ||eik∗||s = 0, i=1,2,⋯,N. This completes the proof. □

Remark 2.2 Generally speaking, the PD-type learning algorithms for continuous-time systems guarantee that the output limiting trajectory of the system converges asymptotically to the desired reference trajectory as time increases (see Lee & Bien, 1996; Park, 1999; Sun et al., 2015).
It follows from yid∗(t) = yid(t) − e^{−Li t}(yid(0) − Ci xi0) that the decentralized learning algorithms for discrete-time systems in this article have the same property.

3. Decentralized initial rectifying strategies

In order to achieve complete reference trajectory tracking beyond a pre-specified initial interval, initial rectifying strategies (see Sun & Wang, 2002, 2003; Sun et al., 2015) are introduced into the decentralized learning schemes in this section. Initial rectifying terms are added to the decentralized learning schemes to eliminate the effect of the fixed initial state errors beyond a small initial time interval. Construct the decentralized iterative learning schemes with initial rectifying action for the system (2.2) as follows:

ui(k+1)(t) = uik(t) + Γi(eik(t+1) − e^{−Li} eik(t) + ri(t)), (3.1)

where ri(t) represents the initial rectifying action. For a given positive integer μ with μ < T, define

ei∗(t) = { e^{−Li t}(yid(0) − Ci xi0), 0≤t≤μ; 0, μ+1≤t≤T+1. }

Taking

ri(t) = −ei∗(t+1) + e^{−Li} ei∗(t), (3.2)

we have

ri(t) = { 0, 0≤t≤μ−1; e^{−Li(μ+1)}(yid(0) − Ci xi0), t=μ; 0, μ+1≤t≤T. }

We have the following theorems under Assumptions 2.1 and 2.2. For system (2.2) whose input dimension is less than or equal to the output dimension, i.e. ri≤mi, i=1,2,⋯,N, we have:

Theorem 3.1 If there exist gain matrices Γi∈R^{ri×mi} (i=1,2,⋯,N) such that ρ = max_{1≤i≤N} ||I − Γi Ci Bi|| < 1, then the output yik(t) of each subsystem converges uniformly to the corresponding output tracking trajectory yid∗(t) over t∈[0,T] under the action of the learning scheme (3.1), i.e. lim_{k→∞} ||eik∗||s = 0, where

yid∗(t) = yid(t) − ei∗(t), ei∗(t) = { e^{−Li t}(yid(0) − Ci xi0), 0≤t≤μ; 0, μ<t≤T+1, }

and eik∗(t) = yid∗(t) − yik(t), i=1,2,⋯,N.

Proof. Note that yid∗(0) = Ci xi0. Thus, by Assumption 2.2, it is reasonable to set the initial value as follows: xid∗(0) = xi0. Since eik(t) = yid(t) − yik(t) and eik∗(t) = yid∗(t) − yik(t),

eik(t) = eik∗(t) + ei∗(t).
(3.3)

From (3.2) and (3.3), we have

eik(t+1) − e^{−Li} eik(t) + ri(t) = eik∗(t+1) + ei∗(t+1) − e^{−Li} eik∗(t) − e^{−Li} ei∗(t) + ri(t) = eik∗(t+1) − e^{−Li} eik∗(t) + ei∗(t+1) − e^{−Li} ei∗(t) + ri(t) = eik∗(t+1) − e^{−Li} eik∗(t). (3.4)

The rest of the proof proceeds along exactly the same lines as the part after (2.6) in the proof of Theorem 2.1. □

For system (2.2) whose output dimension is less than or equal to the input dimension, i.e. mi≤ri, i=1,2,⋯,N, we have:

Theorem 3.2 If there exist gain matrices Γi∈R^{ri×mi} (i=1,2,⋯,N) such that ρ = max_{1≤i≤N} ||I − Ci Bi Γi|| < 1, then the output yik(t) of each subsystem converges uniformly to the corresponding output tracking trajectory yid∗(t) over t∈[0,T] under the action of the learning scheme (3.1), i.e. lim_{k→∞} ||eik∗||s = 0, where

yid∗(t) = yid(t) − ei∗(t), ei∗(t) = { e^{−Li t}(yid(0) − Ci xi0), 0≤t≤μ; 0, μ<t≤T+1, }

and eik∗(t) = yid∗(t) − yik(t), i=1,2,⋯,N.

Proof. Replacing (2.6) with (3.4) and (2.3) with (3.1), the proof is completed in the same way as that of Theorem 2.2. □

Theorems 3.1 and 3.2 show that the decentralized initial rectifying strategies achieve complete reference trajectory tracking beyond an initial time interval [0,μ].

4. Simulation examples

(1) Consider a system whose input dimension is less than or equal to the output dimension. Construct the following interconnected linear system:

x1k(t+1) = [1 0; 0 1] x1k(t) + [1; 2] u1k(t) + [1 0; 0 1] x2k(t), y1k(t) = [1 0; 0 1] x1k(t),
x2k(t+1) = [−1 0; 0 −1] x2k(t) + [2; 1] u2k(t) + [1 0; 0 1] x1k(t), y2k(t) = [1 0; 0 1] x2k(t),

where the subscript k denotes the iteration index. Set the initial state values at each iteration to the fixed values x10 = [2; 1] and x20 = [1; 2], and take T=10, L1 = L2 = [1 0; 0 1] and Γ1 = Γ2 = [0.2 0.2]; then |1 − Γ1C1B1| = |1 − Γ2C2B2| = 0.4 < 1.

1. Decentralized learning algorithms.
For the given desired reference trajectories y1d(t) = [2e^t − 2e^{−t}; e^t − e^{−t}] and y2d(t) = [0; 0], we have

y1d∗(t) = y1d(t) − e^{−L1 t}(y1d(0) − C1 x10) = [2e^t − 2e^{−t}; e^t − e^{−t}] − [e^{−t} 0; 0 e^{−t}]([0; 0] − [2; 1]) = [2e^t; e^t],
y2d∗(t) = y2d(t) − e^{−L2 t}(y2d(0) − C2 x20) = [0; 0] − [e^{−t} 0; 0 e^{−t}]([0; 0] − [1; 2]) = [e^{−t}; 2e^{−t}].

Take the initial controls u10(t) = u20(t) = t/4. Under the action of the learning scheme (2.3), the simulation results of ||eik∗||s (i=1,2) as k increases are shown in Figs 1 and 2.

Fig. 1. Tracking errors of y1k(t).

Fig. 2. Tracking errors of y2k(t).

2. Decentralized initial rectifying strategies. Take μ=2, and construct

e1∗(t) = { e^{−L1 t}(y1d(0) − C1 x10), 0≤t≤2; 0, 3≤t≤11 } = { −e^{−t}[2; 1], 0≤t≤2; 0, 3≤t≤11, }
e2∗(t) = { e^{−L2 t}(y2d(0) − C2 x20), 0≤t≤2; 0, 3≤t≤11 } = { −e^{−t}[1; 2], 0≤t≤2; 0, 3≤t≤11, }
r1(t) = { 0, 0≤t≤1; e^{−3L1}(y1d(0) − C1 x10), t=2; 0, 3≤t≤10, }
r2(t) = { 0, 0≤t≤1; e^{−3L2}(y2d(0) − C2 x20), t=2; 0, 3≤t≤10. }

For the same desired reference trajectories y1d(t) = [2e^t − 2e^{−t}; e^t − e^{−t}] and y2d(t) = [0; 0], we have y1d∗(t) = y1d(t) − e1∗(t) and y2d∗(t) = y2d(t) − e2∗(t). Take the initial controls u10(t) = u20(t) = t/4. Under the action of the learning scheme (3.1), the simulation results of ||eik∗||s (i=1,2) as k increases are shown in Figs 3 and 4.

Fig. 3. Tracking errors of y1k(t).

Fig. 4. Tracking errors of y2k(t).

(2) Consider a system whose output dimension is less than or equal to the input dimension. Construct the following interconnected linear system:

x1k(t+1) = [1 0; 0 1] x1k(t) + [1 0; 0 1] u1k(t) + [1 0; 0 1] x2k(t), y1k(t) = [1 1] x1k(t),
x2k(t+1) = [−1 0; 0 −1] x2k(t) + [1 0; 0 1] u2k(t) + [1 0; 0 1] x1k(t), y2k(t) = [1 1] x2k(t),

where the subscript k denotes the iteration index.
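As an illustration (our own sketch, not part of the paper), the decentralized scheme (2.3) can be run on this second example with the data given in the text: T=10, L1=L2=1, Γ1=Γ2=[0.25; 0.25], x10=[2; 1], x20=[1; 2] and initial controls u10(t)=u20(t)=[t/2; t/2]. All variable names below are our own:

```python
import numpy as np

T, K = 10, 500                                   # horizon, learning iterations
A = [np.eye(2), -np.eye(2)]                      # A_1, A_2
D = np.eye(2)                                    # D_12 = D_21 = I; B_1 = B_2 = I
C = np.array([1.0, 1.0])                         # C_1 = C_2 = [1 1]
x0 = [np.array([2.0, 1.0]), np.array([1.0, 2.0])]
gamma = np.array([0.25, 0.25])                   # Gamma_1 = Gamma_2
expL = np.exp(-1.0)                              # e^{-L_i} with L_1 = L_2 = 1

s = np.arange(T + 1)
yd = [3 * np.exp(s) - 3 * np.exp(-s), np.zeros(T + 1)]   # desired trajectories
ystar = [3 * np.exp(s), 3 * np.exp(-s)]                  # output limiting trajectories

u = [np.outer(np.arange(T) / 2, np.ones(2)) for _ in range(2)]  # u_i0(t) = [t/2, t/2]

def trial(u):
    """One repetition of the interconnected system; returns both outputs."""
    x = [np.zeros((T + 1, 2)), np.zeros((T + 1, 2))]
    x[0][0], x[1][0] = x0
    for t in range(T):
        x[0][t + 1] = A[0] @ x[0][t] + u[0][t] + D @ x[1][t]
        x[1][t + 1] = A[1] @ x[1][t] + u[1][t] + D @ x[0][t]
    return [x[i] @ C for i in range(2)]

for _ in range(K):
    y = trial(u)
    for i in range(2):                           # each subsystem uses only its own error
        e = yd[i] - y[i]                         # e_ik(t) = y_id(t) - y_ik(t)
        u[i] += np.outer(e[1:] - expL * e[:-1], gamma)   # update (2.3)

y = trial(u)
err = max(np.abs(ystar[i] - y[i]).max() for i in range(2))
```

After enough iterations, `err` (the supremum-norm distance to the output limiting trajectories y1d∗(t) = 3e^t and y2d∗(t) = 3e^{−t}) is driven essentially to zero, as Theorem 2.2 predicts, even though each subsystem's error remains bounded away from the desired trajectory yid(t) on the early time steps.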
Set the initial state values at each iteration to the fixed values x10 = [2; 1] and x20 = [1; 2], and take T=10, L1 = L2 = 1 and Γ1 = Γ2 = [0.25; 0.25]; then |1 − C1B1Γ1| = |1 − C2B2Γ2| = 0.5 < 1.

1. Decentralized learning algorithms. For the given desired reference trajectories y1d(t) = 3e^t − 3e^{−t} and y2d(t) = 0, we have

y1d∗(t) = y1d(t) − e^{−L1 t}(y1d(0) − C1 x10) = 3e^t − 3e^{−t} − e^{−t}(0 − 3) = 3e^t,
y2d∗(t) = y2d(t) − e^{−L2 t}(y2d(0) − C2 x20) = 0 − e^{−t}(0 − 3) = 3e^{−t}.

Take the initial controls u10(t) = u20(t) = [t/2; t/2]. Under the action of the learning scheme (2.3), the simulation results of ||eik∗||s (i=1,2) as k increases are shown in Figs 5 and 6.

Fig. 5. Tracking errors of y1k(t).

Fig. 6. Tracking errors of y2k(t).

2. Decentralized initial rectifying strategies. Take μ=2, and construct

e1∗(t) = { e^{−L1 t}(y1d(0) − C1 x10), 0≤t≤2; 0, 3≤t≤11 } = { −3e^{−t}, 0≤t≤2; 0, 3≤t≤11, }
e2∗(t) = { e^{−L2 t}(y2d(0) − C2 x20), 0≤t≤2; 0, 3≤t≤11 } = { −3e^{−t}, 0≤t≤2; 0, 3≤t≤11, }
r1(t) = { 0, 0≤t≤1; e^{−3L1}(y1d(0) − C1 x10), t=2; 0, 3≤t≤10, }
r2(t) = { 0, 0≤t≤1; e^{−3L2}(y2d(0) − C2 x20), t=2; 0, 3≤t≤10. }

For the same desired reference trajectories y1d(t) = 3e^t − 3e^{−t} and y2d(t) = 0, we have y1d∗(t) = y1d(t) − e1∗(t) and y2d∗(t) = y2d(t) − e2∗(t). Take the initial controls u10(t) = u20(t) = [t/2; t/2]. Under the action of the learning scheme (3.1), the simulation results of ||eik∗||s (i=1,2) as k increases are shown in Figs 7 and 8.

Fig. 7. Tracking errors of y1k(t).

Fig. 8. Tracking errors of y2k(t).

5. Conclusion

This article studies the decentralized ILC problem for large-scale linear discrete-time systems with fixed initial state errors.
Two kinds of system structure are considered: systems whose input dimension is less than or equal to the output dimension, and systems whose output dimension is less than or equal to the input dimension. Using the decentralized learning schemes, convergence theorems for the output tracking errors are established by the contraction mapping method. Two conclusions are obtained: first, the output of each subsystem converges uniformly to the corresponding output limiting trajectory over the whole time interval under the action of the decentralized learning schemes; second, the output of each subsystem converges uniformly to the corresponding desired reference trajectory beyond an initial time interval under the action of the initial rectifying strategies. The simulation results are consistent with the theoretical analysis. Since ILC for large-scale systems has so far focused mainly on continuous-time systems, the results in this article extend the range of application of ILC design for large-scale systems.

Acknowledgements

The authors would like to express their gratitude to the editor and the anonymous referees for their valuable suggestions, which have greatly improved the quality of the article.

Funding

National Natural Science Foundation of China (No. 11371013); the Natural Science Foundation of Suzhou University of Science and Technology in 2016.

References

- Ahn, H. S., Choi, C. H. & Kim, K. B. (1993) Iterative learning control for a class of nonlinear systems. Automatica, 29, 1575–1578.
- Arimoto, S., Kawamura, S. & Miyazaki, F. (1984) Bettering operation of robots by learning. J. Robot. Syst., 1, 123–140.
- Bondi, P., Casalino, G. & Gambardella, L. (1988) On the iterative learning control theory for robotic manipulators. IEEE J. Robot. Autom., 4, 14–22.
- Bristow, D. A., Tharayil, M. & Alleyne, A. G. (2006) A survey of iterative learning control: a learning-based method for high-performance tracking control. IEEE Control Syst. Mag., 26, 96–114.
- Cheng, Y.-Q. & Moore, K. L. (2002) An optimal design of PD-type iterative learning control with monotonic convergence. Proceedings of the 2002 IEEE International Symposium on Intelligent Control, Vancouver, Canada. IEEE, pp. 55–60.
- Dong-Hwan, H., Byung, K. K. & Zeungnam, B. (1993) Decentralized iterative learning control methods for large scale linear dynamical systems. Int. J. Syst. Sci., 24, 2239–2254.
- Fu, Q. (2013) Decentralized iterative learning control for large-scale interconnected nonlinear systems. J. Syst. Sci. Math. Sci., 33, 1281–1292 (in Chinese).
- Heinzinger, G., Fenwick, D., Paden, B. & Miyazaki, F. (1992) Stability of learning control with disturbances and uncertain initial conditions. IEEE Trans. Autom. Control, 37, 110–114.
- Jamshidi, M. (1983) Large Scale Systems: Modeling and Control. North Holland: Elsevier Science Publishing Co., Inc.
- Lee, H. S. & Bien, Z. (1996) Study on robustness of iterative learning control with non-zero initial error. Int. J. Control, 64, 345–359.
- Pagilla, P. R. (1999) Robust decentralized control of large-scale interconnected systems: general interconnections. Proceedings of the American Control Conference, San Diego, California. IEEE, pp. 4527–4531.
- Pagilla, P. R. & Zhong, H.-W. (2003) Semi-globally stable decentralized control of a class of large-scale interconnected systems. Proceedings of the American Control Conference, Denver, Colorado. IEEE, pp. 5017–5022.
- Park, K.-H. (1999) A study on the robustness of a PID-type iterative learning controller against initial state error. Int. J. Syst. Sci., 30, 49–59.
- Porter, B. & Mohamed, S. S. (1991) Iterative learning control of partially irregular multivariable plants with initial impulsive action. Int. J. Syst. Sci., 22, 447–454.
- Ruan, X.-E., Bien, Z. & Park, K.-H. (2008a) Decentralized iterative learning control to large-scale industrial processes for nonrepetitive trajectory tracking. IEEE Trans. Syst. Man Cybern. Part A, 38, 238–252.
- Ruan, X.-E., Chen, F.-M. & Wan, B.-W. (2008b) Decentralized iterative learning controllers for nonlinear large-scale systems to track trajectories with different magnitudes. Acta Automat. Sinica, 34, 426–432.
- Saab, S. S. (1994) On the P-type learning control. IEEE Trans. Autom. Control, 39, 2298–2302.
- Shen, D. & Chen, H.-F. (2012) Iterative learning control for large-scale nonlinear systems with observation noise. Automatica, 48, 577–582.
- Sugie, T. & Ono, T. (1991) An iterative learning control law for dynamical systems. Automatica, 27, 729–732.
- Sun, M.-X., Bi, H.-B., Zhou, G.-L. & Wang, H.-F. (2015) Feedback-aided PD-type iterative learning control: initial condition problem and rectifying strategies. Acta Autom. Sinica, 41, 157–164 (in Chinese).
- Sun, M. & Wang, D. (2002) Iterative learning control with initial rectifying action. Automatica, 38, 1177–1182.
- Sun, M. & Wang, D. (2003) Initial shift issues on discrete-time iterative learning control with system relative degree. IEEE Trans. Autom. Control, 58, 144–148.
- Tae, H. L., Ji, D. H., Ju, H. P. & Jung, H. J. (2012a) Robust H∞ decentralized guaranteed cost dynamic control for synchronization of a complex dynamical network with randomly switching topology. Appl. Math. Comput., 219, 996–1010.
- Tae, H. L., Ju, H. P., Wu, Z.-G., Sang-Choel, L. & Dong, H. L. (2012b) Robust H∞ decentralized dynamic control for synchronization of a complex dynamical network with randomly occurring uncertainties. Nonlinear Dyn., 70, 559–570.
- Wang, D.-W. (1998) Convergence and robustness of discrete time nonlinear systems with iterative learning control. Automatica, 34, 1445–1448.
- Wu, H.-S. (2007) Decentralized iterative learning control for a class of large scale interconnected dynamical systems. J. Math. Anal. Appl., 327, 233–245.
- Xu, J.-X. & Tan, Y. (2002) On the P-type and Newton-type ILC schemes for dynamic systems with non-affine-in-input factors. Automatica, 38, 1237–1242.
- Yu, S.-J., Wu, J.-H. & Yan, X.-W. (2002) A PD-type open-closed-loop iterative learning control and its convergence with discrete systems. Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, China. IEEE, pp. 659–662.

© Crown copyright 2016. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)
