Consensus-based rendezvous control of double integrators via binary relative positions and velocity feedback

Abstract

This article considers the consensus problem for a network of agents that have double integrator dynamics. A protocol is proposed to achieve a consensus-based rendezvous of agents that depends only on the sign of the relative positions and the sign of the individual velocity. The problem is formulated in terms of differential inclusions with Filippov solutions. A detailed analysis of the Filippov set-valued map of the vector field of the closed-loop system is provided, based on which the proposed protocol is proven to attain the consensus of the agent positions with double integrator dynamics. To prove the convergence, the invariant set of the trajectories of agents is investigated based on a generalized theory of the invariance principle. Numerical examples are provided to illustrate the convergence of the agents via the proposed protocol.

1. Introduction

Consensus problems of agents over networks have received a great deal of attention in the last two decades as an approach to solving problems such as formation, distributed estimation, and decision making in complex and large-scale systems. It is widely known that linear protocols can achieve consensus for agents that have single or double integrator dynamics based on relative feedback of the state variables (see, e.g. Ren & Beard, 2008). Numerous advanced works address issues such as time delays and time-varying and switching communication topologies. Moreover, non-linear discontinuous protocols have been studied for the consensus control of single and double integrators with binary or quantized relative positions and velocities. These protocols are attractive from an engineering point of view, allowing coarse measurements and reducing the amount of communication.
Stability analysis for discontinuous dynamical systems based on differential inclusions has been applied to prove consensus under such protocols, where the Filippov solution is employed in much of the literature. In Cortés (2006) and Chen et al. (2011), finite-time convergent protocols are presented for agents with single integrator dynamics. For problems with single integrator dynamics, Dimarogonas & Johansson (2010), Ceragioli et al. (2011) and Frasca (2012) proposed protocols that work with quantized relative values of the state variable. In Guo & Dimarogonas (2013), quantized relative feedback is investigated for both single and double integrator dynamics. For double integrators, the convergence to a consensus point with logarithmically quantized measurements is proved. However, the agents with uniform quantizers, including quantizers that output binary values, are only shown to approach some neighbourhood of a consensus point.

In this article, we consider the consensus problem for a network of agents that have double integrator dynamics. The proposed protocol achieves a consensus-based rendezvous. Namely, each agent gathers the relative positions of communicable agents and applies the protocol to determine the control input by also taking into account its own velocity. In particular, the protocol only depends on the sign of the relative positions and the sign of the individual velocity. We also consider the case where an exact velocity measurement is available. The problem involves discontinuous dynamics because of the discretized feedback of the relative positions. We formulate the consensus problem in terms of differential inclusions and analyse the system in the framework of the Filippov solution (Filippov, 1988). The stability analysis methods of Shevitz & Paden (1994), Fischer et al. (2013) and Bacciotti & Ceragioli (1999) are employed, with certain modifications for the problem.
We provide a detailed analysis of the Filippov set-valued map of the vector field of the closed-loop system. This reveals that the set-valued map is compact, convex and symmetric with respect to a certain point, by which the generalized time derivative (Shevitz & Paden, 1994) of a Lyapunov-like function is proved to be included in a singleton with a negative element. After showing stability using this approach, we investigate the trajectories of the agents to clarify that any point of the $$\omega$$-limit set (Shevitz & Paden, 1994; Bacciotti & Ceragioli, 1999) of any trajectory is indeed a consensus point. Together, these results establish that the proposed protocol attains the consensus-based rendezvous via feedback with binary relative positions.

The rest of the article is organized as follows. In Section 2, we formulate the consensus control problem with an agent network and propose a protocol with binary relative positions and a binary or linear velocity. Section 3 is devoted to preliminaries on the Filippov solution for differential inclusions and stability analysis. Based on the methods provided in Section 3, we prove the consensus of the feedback system in Section 4 by showing results on an analysis of the Filippov set-valued map and stability analysis based on the invariance principle. Section 5 presents numerical examples that illustrate the protocol. Lastly, we conclude the article in Section 6.

Notation: For a set $$S\subset{\mathbb {R}}^{n}$$, $${\mathop{\rm co}}\,S$$ is the convex hull of $$S$$ and $$\overline{\mathop{\rm co}}\,S$$ is the convex closure of $$S$$. The $$k$$-norm of $$x = (x^1, x^2, \ldots,x^n)\in{\mathbb {R}}^{n}$$ is written as $$|x|_k = (\sum_{i=1}^n |x^i|^k)^{1/k}$$, while the Euclidean norm is denoted by $$|x|$$ without the subscript. The elements of $${\mathbb {R}}^{n}$$ are regarded as column vectors if they appear in expressions with matrices and $$x^{\sf{T}}$$ denotes the transpose of $$x$$.
The identity matrix of $${\mathbb {R}}^{{n}\times{n}}$$ is denoted by $$I_n$$. Let $$B_r(x)$$ stand for an open ball with center $$x$$ and radius $$r$$. The distance between $$x\in{\mathbb {R}}^{n}$$ and $$Y\subset{\mathbb {R}}^{n}$$ is $$\text{dist}(x,Y) = \inf_{y\in Y} |x-y|$$. Let $$\mu(S)$$ denote the Lebesgue measure of $$S\subset{\mathbb {R}}^{n}$$.

2. Formulation of the consensus problem

Let $$\mathcal A=\{1,2,\ldots,n\}$$ be the set of agents and $$\mathcal E\subset\{(i,j)\in\mathcal A\times\mathcal A\}$$ be the set of pairs of agents in $$\mathcal A$$ that can communicate with each other to obtain information on their relative position. We assume that graph $$\mathcal G=(\mathcal A,\mathcal E)$$ is undirected and connected. Let $$\mathcal J^i = \{\,j\in\mathcal A:(i,j)\in\mathcal E\}$$ be the set of the agents connected to agent $$i$$. The dynamics of each agent is given by \begin{eqnarray} \dot p^i = q^i, \quad \dot q^i = u^i, \quad i=1,2,\ldots,n, \end{eqnarray} (2.1) where $$p^i\in{\mathbb {R}}$$ is the position, $$q^i\in{\mathbb {R}}$$ is the velocity, and $$u^i$$ is the force, which is the control input. Without loss of generality, only scalar variables $$p^i$$ and $$q^i$$ are considered for each agent. We assume that agent $$i$$ can only measure the sign of its velocity $$q^i$$ and the sign of the relative positions $$p^i - p^j$$ to agents $$j\in\mathcal J^i$$. The goal of the control is to drive the agents to converge to a consensus point for their position, namely, \begin{eqnarray} \lim_{t\to\infty} (p^i(t) - p^j(t)) = 0, \quad i,j=1,2,\ldots,n, \quad i\not=j, \qquad \lim_{t\to\infty} q^i(t) = 0, \quad i=1,2,\ldots,n. \end{eqnarray} (2.2) For this purpose, assuming the availability of the measurements of each agent stated above, we set control input $$u^i$$ as a consensus protocol, given as follows: \begin{eqnarray} u^i = - \alpha s^i(p) - \beta v^i(q), \quad i=1,2,\ldots,n, \end{eqnarray} (2.3) where $$\alpha$$ and $$\beta$$ are constants satisfying $$\alpha > \beta > 0$$. Functions $$s^i(\cdot)$$ and $$v^i(\cdot)$$ are defined as \begin{eqnarray} s^i(p) = \sum_{j\in\mathcal J^i} {\text{sgn}}(p^i - p^j), \quad v^i(q) = {\text{sgn}}\, q^i, \quad i=1,2,\ldots,n, \end{eqnarray} (2.4) where $${\text{sgn}}$$ is the signum function, defined for $$a\in{\mathbb {R}}$$ as \begin{eqnarray*} {\text{sgn}}\,a = \begin{cases} 1, & a > 0,\\ 0, & a = 0,\\ -1, & a < 0. \end{cases} \end{eqnarray*} Setting $$ p = \begin{bmatrix} p^1\\ p^2\\ \vdots\\ p^n \end{bmatrix}, \quad q = \begin{bmatrix} q^1\\ q^2\\ \vdots\\ q^n \end{bmatrix}, \quad u = \begin{bmatrix} u^1\\ u^2\\ \vdots\\ u^n \end{bmatrix}, \quad s(p) = \begin{bmatrix} s^1(p)\\ s^2(p)\\ \vdots\\ s^n(p) \end{bmatrix}, \quad v(q) = \begin{bmatrix} v^1(q)\\ v^2(q)\\ \vdots\\ v^n(q) \end{bmatrix},$$ the closed-loop system is represented as \begin{eqnarray} \dot x = f(x), \quad x\in{\mathbb {R}}^{2n}, \end{eqnarray} (2.5) where $$ x = \begin{bmatrix} p\\ q \end{bmatrix}, \quad f(x) = \begin{bmatrix} q\\ -\alpha s(p) - \beta v(q) \end{bmatrix}. $$ (2.6) This system has discontinuities at points for which $$p^i = p^j$$, $$j\in\mathcal J^i$$ and $$q^i=0$$, $$i=1,2,\ldots,n$$.
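Before turning to the analysis, the behaviour of closed-loop system (2.5)–(2.6) can be previewed numerically, anticipating the examples of Section 5. The sketch below is illustrative only: the forward-Euler discretization, path graph, gains and initial conditions are our own arbitrary choices, not taken from the article.

```python
import numpy as np

def sgn(a):
    # numpy's sign coincides with the signum function of (2.4): +1, 0, -1
    return np.sign(a)

def simulate(edges, p0, q0, alpha=2.0, beta=1.0, dt=1e-3, T=15.0):
    """Forward-Euler integration of dot p = q, dot q = -alpha*s(p) - beta*sgn(q)."""
    p, q = np.array(p0, dtype=float), np.array(q0, dtype=float)
    for _ in range(int(T / dt)):
        s = np.zeros_like(p)
        for i, j in edges:                 # undirected: count each edge in both directions
            s[i] += sgn(p[i] - p[j])
            s[j] += sgn(p[j] - p[i])
        u = -alpha * s - beta * sgn(q)     # protocol (2.3)-(2.4)
        p, q = p + dt * q, q + dt * u
    return p, q

# Path graph 1-2-3 (connected, undirected) with alpha > beta > 0 as in Theorem 4.1.
p, q = simulate(edges=[(0, 1), (1, 2)], p0=[0.0, 1.0, 3.0], q0=[0.0, 0.0, 0.0])
print(p.max() - p.min(), np.abs(q).max())  # position spread and residual speed, both small
```

Because of the sign feedback, the discretized trajectories chatter near the sliding surfaces, so the printed residuals are small but not exactly zero; shrinking `dt` tightens them.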
After giving the main results on the consensus (2.2) with protocol (2.3)–(2.4), we mention the case where $$v^i(q)$$ in (2.4) is replaced with \begin{eqnarray} v^i(q) = q^i, \quad i=1,2,\ldots,n \end{eqnarray} (2.7) and the condition on the gains is relaxed to $$\alpha,\beta>0$$, where the condition $$\alpha>\beta$$ is not required. The protocol with (2.7) can be implemented if each agent has a precise internal sensor to detect its velocity. With this linear $$v(q)$$, it is shown in Section 4.5 that the agents can attain the average consensus, i.e. each $$p^i(t)$$ converges to the average of $$p^i(0)$$, $$i=1,2,\ldots,n$$, if the average of $$q^i(0)$$, $$i=1,2,\ldots,n$$, is zero.

3. Preliminaries for the Lyapunov methods for discontinuous dynamical systems

This section is devoted to the preliminaries for the Lyapunov methods for the stability analysis of discontinuous dynamical systems within the framework of Filippov (1988).

3.1. Differential inclusions with Filippov solution

We formulate a differential inclusion for system (2.5) and consider the Filippov solution. For notational simplicity, in this section, we let $$x\in{\mathbb {R}}^{n}$$ in (2.5). The Filippov set-valued map is defined as \begin{eqnarray*} K[\,f](x) = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\, f(B_\delta(x)\setminus N). \end{eqnarray*} Definition 3.1 (Filippov, 1988) A function $$x(t)$$ is called a Filippov solution of the differential equation $$\dot x = f(x)$$, $$x\in{\mathbb {R}}^{n}$$, over $$[t_0,t_1]$$ with $$t_0<t_1$$ and initial condition $$x(t_0) = x_0$$, if $$x(t)$$ is absolutely continuous, satisfies $$x(t_0) = x_0$$ and satisfies the differential inclusion \begin{eqnarray} \dot x(t) \in K[\,f](x(t)) \end{eqnarray} (3.1) for almost all $$t\in[t_0,t_1]$$. If $$f:{\mathbb {R}}^{n}\to{\mathbb {R}}^{n}$$ is Lebesgue measurable and locally essentially bounded (i.e.
$$f\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$), the Filippov set-valued map $$K[\,f]$$ is upper semicontinuous with $$K[\,f](x)$$ being non-empty, bounded, closed and convex at each $$x$$. Hence, there exists at least one Filippov solution $$x(t)$$ satisfying an initial condition $$x(t_0)=x_0$$, defined for $$t\in[t_0,t_1]$$ with some $$t_1>t_0$$ (Filippov, 1988). The following calculus for Filippov set-valued maps is exploited in the following section (Bacciotti & Ceragioli, 1999, see also Paden & Sastry, 1997). Lemma 3.1 Map $$K$$ has the following properties: (i) If $$f\in C({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f](x) = \{f(x)\}\;\; \forall x\in{\mathbb {R}}^{n}$$. (ii) If $$f,g\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f+g](x) \subset K[\,f](x) +K[g](x)\;\; \forall x\in{\mathbb {R}}^{n}$$. If $$f$$ also belongs to $$C({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f+g](x) = f(x) +K[g](x)\;\; \forall x\in{\mathbb {R}}^{n}$$. (iii) If $$G\in C({\mathbb {R}}^{n},{\mathbb{R}}^{{m}\times{n}})$$ and $$u\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[G u](x) = G(x) K[u](x)\;\; \forall x\in{\mathbb {R}}^{n}$$.

3.2. Chain rule with differential inclusions

We handle a Lyapunov-like function and its derivatives along Filippov solutions for a differential inclusion. For this, we invoke a generalized theory on the derivatives of functions (Clarke, 1983). For a function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}^{m}$$, the right directional derivative $$V'(x,w)$$ of $$V$$ at $$x\in{\mathbb {R}}^{n}$$ in the direction $$w\in{\mathbb {R}}^{n}$$ is defined as $$ V'(x,w) = \lim_{t\to 0+} \{V(x + w t) - V(x)\}/t$$. The right directional derivative may not exist. The generalized directional derivative of $$V$$ at $$x\in{\mathbb {R}}^{n}$$ in the direction $$w\in{\mathbb {R}}^{n}$$ is $$V^\circ(x,w) = \limsup_{y\to x,\,t\to 0+} \{V(y + w t) - V(y)\}/t$$, which, in contrast, always exists. Definition 3.2
(Clarke, 1983) A function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}^{m}$$ is said to be regular at $$x\in{\mathbb {R}}^{n}$$ if $$V'(x,w)$$ exists for all $$w\in{\mathbb {R}}^{n}$$ and satisfies $$V'(x,w) = V^\circ(x,w)$$. Clarke’s generalized gradient of a locally Lipschitz continuous function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}$$ is defined as \begin{eqnarray} \partial V(x) = \overline{{\mathop{\rm co}}} \left\{\lim_{k\to\infty}\frac{\partial V}{\partial x}(x_k) \,\bigg | \, \lim_{k\to\infty} x_k = x,\; x_k\not\in N_V \right\}\!, \end{eqnarray} (3.2) where $$N_V$$ is the set of $$x\in{\mathbb {R}}^{n}$$ at which $$\partial V/\partial x$$ does not exist. Since $$V$$ is locally Lipschitz continuous, $$N_V$$ has zero measure. Moreover, the following is true: \begin{eqnarray} \partial V(x) = K\left[\frac{\partial V}{\partial x}\right](x). \end{eqnarray} (3.3) Lemma 3.2 (Shevitz & Paden, 1994) Let $$x(t)$$ be a Filippov solution of $$\dot x = f(x)$$ defined on $$[t_0,t_1]$$ and $$V: {\mathbb {R}}^{n}\to{\mathbb {R}}$$ be a locally Lipschitz continuous and regular function. Then, $$V(x(t))$$ is absolutely continuous with respect to $$t$$, $${\rm d}V(x(t))/{\rm d}t$$ exists for almost every $$t\in [t_0,t_1]$$ and \begin{eqnarray*} \frac{{\rm d}V(x(t))}{{\rm d}t} \in \dot{\tilde V}(x(t)) \end{eqnarray*} holds for almost every $$t\in [t_0,t_1]$$, where \begin{eqnarray*} \dot{\tilde V}(x) = \bigcap_{\xi\in\partial V(x)} \xi^{\sf{T}} K[\,f](x). \end{eqnarray*} Remark 3.1 From the statement of this lemma, $$\dot{\tilde V}(x)$$ may be empty for some $$x$$, while, for every solution $$x(\cdot)$$, $$\dot{\tilde V}(x(t))$$ is not empty for almost all $$t\in[t_0,t_1]$$.

3.3. Stability and invariance

In the following section, we consider a Lyapunov-like function $$V(x)$$ that is positive if velocity $$q$$ is not zero or position $$p$$ is not at a position of consensus.
This motivates the following modified version of Corollary 2 of Fischer et al. (2013), as a generalization of the LaSalle–Yoshizawa Theorem to discontinuous systems. Lemma 3.3 Let $$f\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$ and $$V: {\mathbb {R}}^{n}\to{\mathbb {R}}$$ be locally Lipschitz continuous and regular. Let $$C\in{\mathbb{R}}^{{m}\times{n}}$$ be a constant matrix. Suppose that, for every $$r>0$$, \begin{eqnarray} M_r = \sup_{|C x|\le r} |f(x)| < \infty, \end{eqnarray} (3.4) and that \begin{eqnarray} W_1(C x) \le V(x) \le W_2(C x), \end{eqnarray} (3.5) \begin{eqnarray} d \le - W_3(C x) \quad \forall d\in \dot{\tilde V}(x) \end{eqnarray} (3.6) hold for all $$x\in{\mathbb {R}}^{n}$$, where $$W_1,\, W_2: {\mathbb {R}}^{m}\to{\mathbb {R}}$$ are continuous positive definite functions and $$W_3: {\mathbb {R}}^{m}\to{\mathbb {R}}$$ is a positive semidefinite function. Then, any Filippov solution of $$\dot x = f(x)$$ with the initial condition $$x(0) = x_0$$ for any $$x_0\in{\mathbb {R}}^{n}$$ is defined for all $$t\ge 0$$ and $$C x(t)$$ is bounded for all $$t\ge 0$$. Moreover, it holds that \begin{eqnarray} \lim_{t\to\infty} \int_0^t W_3(C x(\tau))\,d\tau \quad \hbox{exists and is finite, and} \end{eqnarray} (3.7) \begin{eqnarray} \lim_{t\to\infty} W_3(C x(t)) = 0. \end{eqnarray} (3.8) Proof. The lemma is proved similarly to the original result of Fischer et al. (2013). Details are given in Appendix A. □ To prove the convergence of $$x(t)$$, we invoke a generalization of the invariance principle to discontinuous systems, based on the notion of $$\omega$$-limit points of trajectories and weakly invariant sets. Definition 3.3 (Filippov, 1988) Let $$x(t)$$ be an arbitrary maximal solution of (3.1) starting from an arbitrary initial point $$x_0\in{\mathbb {R}}^{n}$$.
A point $$\xi\in{\mathbb {R}}^{n}$$ is said to be an $$\omega$$-limit point of $$x(\cdot)$$ if there exists a sequence $$t_k\in{\mathbb {R}}$$ that satisfies $$\lim_{k\to\infty} t_k = \infty$$ and $$\lim_{k\to\infty} x(t_k) = \xi$$. The $$\omega$$-limit set $${\it{\Omega}}(x)\subset{\mathbb {R}}^{n}$$ is defined as the set of all $$\omega$$-limit points of $$x(\cdot)$$. Definition 3.4 (Filippov, 1988; Bacciotti & Ceragioli, 1999) A set $$S\subset{\mathbb {R}}^{n}$$ is said to be weakly invariant for (3.1) if, for every $$y_0\in S$$, there exists a maximal solution $$y(\cdot)$$ of (3.1) with $$y(0)=y_0$$ and $$y(t)\in S$$ for all $$t$$ for which $$y(t)$$ is defined. Once the state is proved to be bounded, the following lemma, which is based on Chapter 3 of Filippov (1988), will be useful for considering the convergence of $$x(t)$$. Lemma 3.4 (Filippov, 1988; Bacciotti & Ceragioli, 1999) Let $$x(\cdot)$$ be an arbitrary solution to (3.1). Then, the $$\omega$$-limit set $${\it{\Omega}}(x)$$ is weakly invariant. If $$\{x(t):t\ge 0\}$$ is bounded, then $${\it{\Omega}}(x)\not=\emptyset$$ and $${\it{\Omega}}(x)$$ is bounded and connected. Moreover, $$\text{dist}(x(t),{\it{\Omega}}(x))\to 0$$ as $$t\to\infty$$.

4. Proof of the consensus

Using the material provided in Section 3, we prove the following main result on the consensus defined in (2.2). The rest of this section is mainly devoted to the proof of Theorem 4.1. Theorem 4.1 Suppose that $$\alpha > \beta > 0$$. The consensus-based rendezvous (2.2) of system (2.1) is then achieved with protocol (2.3)–(2.4), where the solution of closed-loop system (2.5) is in the sense of Filippov for differential inclusion (3.1). In Section 4.1, we investigate Filippov set-valued map (3.1) for closed-loop system (2.5). Section 4.2 considers a candidate of a Lyapunov-like function $$V$$ for system (2.5), where we call $$V$$ ‘Lyapunov-like’ because it is only positive semidefinite.
Clarke’s generalized gradient is analysed to obtain a singleton that includes $$\dot{\tilde V}(x)$$. In addition, the equilibria are determined in this subsection. Lemma 3.3 is then applied in Section 4.3 to show that $$p(t)$$ is bounded and $$q(t)$$ converges to zero. Section 4.4 concludes the consensus based on an application of the invariance principle, where any point of the $$\omega$$-limit set of a Filippov solution of (2.5) is shown to be a consensus point. In Section 4.5, we show a corollary to Theorem 4.1 for the protocol with the linear feedback of $$q$$ defined in (2.7).

4.1 Filippov set-valued map

From Lemma 3.1, we have \begin{eqnarray} K[\,f](x) = \begin{bmatrix} q\\ -\alpha K[s](p) - \beta K[v](q) \end{bmatrix} \end{eqnarray} (4.1) for vector field (2.6), where it is immediate to see that \begin{eqnarray*} K[v](q) = \begin{bmatrix} {\rm Sgn}\,q^1\\ {\rm Sgn}\,q^2\\ \vdots\\ {\rm Sgn}\,q^n \end{bmatrix}, \quad {\rm Sgn}\,q^i = \begin{cases} \{1\}, & q^i > 0,\\ [-1,1], & q^i = 0,\\ \{-1\}, & q^i < 0. \end{cases} \end{eqnarray*} Next, let $$e_i$$ denote the $$i$$-th standard basis vector of $${\mathbb {R}}^{n}$$ and let $$\mathcal E_+ = \{(i,j)\in\mathcal E: i<j\}$$. Then, \begin{eqnarray*} s(p) = \sum_{i=1}^n e_i s^i(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} e_i {\text{sgn}}(p^i - p^j) = \sum_{(i,j)\in\mathcal E} e_i {\text{sgn}}(p^i - p^j) = \sum_{(i,j)\in\mathcal E_+} (e_i - e_j) {\text{sgn}}(p^i - p^j), \end{eqnarray*} where the last equality follows from the undirectedness of $$\mathcal E$$; both $$(i,j)$$ and $$(j,i)$$ are contained in $$\mathcal E$$ if either one is.
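As a quick sanity check of this edge-wise rewriting of $$s(p)$$, the per-agent and per-edge forms can be compared numerically; the graph and positions below are arbitrary choices of ours, not from the article.

```python
import numpy as np

edges_plus = [(0, 1), (1, 2), (1, 3), (2, 3)]   # E_+: edges (i, j) with i < j
p = np.array([0.3, -1.2, 0.5, 2.0])             # arbitrary positions, n = 4
n = len(p)

# Per-agent form: s^i = sum over neighbours j of sgn(p^i - p^j).
s_agent = np.zeros(n)
for i, j in edges_plus:                         # undirected: j neighbours i and vice versa
    s_agent[i] += np.sign(p[i] - p[j])
    s_agent[j] += np.sign(p[j] - p[i])

# Per-edge form: s = sum over (i, j) in E_+ of (e_i - e_j) sgn(p^i - p^j).
s_edge = np.zeros(n)
for i, j in edges_plus:
    s_edge[i] += np.sign(p[i] - p[j])
    s_edge[j] -= np.sign(p[i] - p[j])           # uses sgn(p^j - p^i) = -sgn(p^i - p^j)

print(s_agent, s_edge)                          # identical vectors
```

The two loops produce the same vector because the signum function is odd, which is exactly the cancellation exploited in the last equality above.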
Using this, observe that \begin{eqnarray} K[s](p) &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ s(B_\delta(p)\setminus N) \right\}\\ &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+} (e_i - e_j){\text{sgn}}((p^i+d^i) - (p^j+d^j)): d\in B_\delta(0)\setminus N \right\}\\ &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j){\text{sgn}}((p^i-p^j) + (d^i-d^j))\right.\\ && \qquad \left.+ \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j){\text{sgn}}((p^i-p^j) + (d^i-d^j)): d\in B_\delta(0)\setminus N \right\}\!, \end{eqnarray} (4.2) where $$\mathcal E_+^{\rm eq}(p)$$ and $$\mathcal E_+^{\rm neq}(p)$$ are defined as \begin{eqnarray*} \mathcal E_+^{\rm eq}(p) = \{(i,j)\in\mathcal E_+: p^i = p^j\}, \qquad \mathcal E_+^{\rm neq}(p) = \{(i,j)\in\mathcal E_+: p^i \not= p^j\}. \end{eqnarray*} Because $$B_\delta(0)$$ is monotonically increasing in $$\delta$$, one can replace $$\bigcap_{\delta>0}$$ with $$\bigcap_{0<\delta<\overline\delta}$$ for any $$\overline\delta > 0$$. Here, let us set \begin{eqnarray*} \overline\delta = \overline\delta(p) = \frac{1}{4} \min\{|p^i - p^j|:(i,j)\in\mathcal E_+^{\rm neq}(p)\}, \end{eqnarray*} which is strictly positive. Because this yields $$|d^i - d^j| < 2\overline\delta(p)$$ in (4.2), the $${\text{sgn}}(\cdot)$$ in the latter sum in (4.2) is constant independently of $$d^i - d^j$$, while $$p^i - p^j$$ vanishes in the former sum. Therefore, \begin{eqnarray} && \hbox{RHS of (4.2)}\\ && = \bigcap_{0<\delta<\overline\delta(p)} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j){\text{sgn}}(d^i-d^j): d\in B_\delta(0)\setminus N \right\}\\[2mm] && \qquad + \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j){\text{sgn}}(p^i-p^j). 
\end{eqnarray} (4.3) Obviously, from the definition of $$\mathcal E_+^{\rm neq}(p)$$, the second term of (4.3) is $$s(p)$$. Let $$S^{\rm eq}(p)$$ denote the first term of (4.3); if $$\mathcal E^{\rm eq}_+(p) = \emptyset$$, let $$S^{\rm eq}(p) = \{0\}$$. Then, \begin{eqnarray} S^{\rm eq}(p) &=& \bigcap_{0<\delta<\overline\delta(p)} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus N \right\} \\ &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus N \right\} \\ \end{eqnarray} (4.4) \begin{eqnarray} &=& {\mathop{\rm co}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n},\; d^i\not=d^j\hbox{ if }i\not=j\right\}\!. \end{eqnarray} (4.5) To obtain (4.4), we replaced $$B_\delta(0)$$ with $${\mathbb {R}}^{n}$$ since $$\text{sgn}(\cdot)$$ is homogeneous of order zero; hence the intersection with respect to $$\delta$$ can be removed. The last equality in (4.5) is obtained in Appendix B, where the set in (4.5) is a convex hull of a finite number of points in $${\mathbb {R}}^{n}$$ and hence $$S^{\rm eq}(p)$$ is compact and convex. We note that $$S^{\rm eq}(p)$$ is symmetric with respect to the origin, i.e. if $$s_0\in S^{\rm eq}(p)$$, then $$-s_0\in S^{\rm eq}(p)$$. In fact, from Carathéodory’s Theorem (e.g. Rockafellar, 1970), if $$s_0\in S^{\rm eq}(p)$$, there exist $$a_k\in{\mathbb {R}}$$ and $$d_k\in{\mathbb {R}}^{n}$$, $$k=1,2,\ldots,n+1$$, such that \begin{eqnarray*} s_0 = \sum_{k=1}^{n+1} a_k \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i_k-d^j_k), \quad \sum_{k=1}^{n+1} a_k = 1, \quad a_k\ge 0,\quad k=1,2,\ldots,n+1 \end{eqnarray*} and $$d^i_k\not=d^j_k$$ if $$i\not=j$$.
Obviously, $$ -s_0 = \sum_{k=1}^{n+1} a_k \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\,\text{sgn}((-d^i_k)-(-d^j_k)) \in S^{\rm eq}(p)$$, simply because $$-d_k\in{\mathbb {R}}^{n}$$ and $$-d_k^i\not=-d_k^j$$ if $$i\not=j$$. The results so far are summarized in the following lemma. Lemma 4.1 It holds that \begin{eqnarray} K[s](p) = S^{\rm eq}(p) + s(p), \end{eqnarray} (4.6) where $$S^{\rm eq}(p)$$ is represented as in (4.5). Moreover, $$S^{\rm eq}(p)$$ is compact, convex and symmetric with respect to the origin.

4.2 Candidate of a Lyapunov-like function

As a candidate of a Lyapunov-like function for system (2.5), we consider \begin{eqnarray} V(x) = \alpha V_1(p) + V_2(q), \end{eqnarray} (4.7) where \begin{eqnarray*} V_1(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} |p^i - p^j| = 2 \sum_{(i,j)\in\mathcal E_+} |p^i - p^j|, \qquad V_2(q) = \frac{1}{2} \sum_{i=1}^{n} |q^i|^2. \end{eqnarray*} These $$V_1$$, $$V_2$$ and $$V$$ are convex and hence regular. Let us apply (3.3) and Lemma 3.1 to $$V$$ to get \begin{eqnarray*} \partial V(x)= K\left[\frac{\partial V}{\partial x}\right](x) = \begin{bmatrix} \alpha K\left[\frac{\partial V_1}{\partial p}\right](p)\\ q \end{bmatrix} = \begin{bmatrix} \alpha\,\partial V_1(p)\\ q \end{bmatrix}, \end{eqnarray*} where the second equality is due to the fact that $$\partial V/\partial x$$ is the sum of a discontinuous (but locally essentially bounded) function $$\partial V_1/\partial p$$ and a continuous function $$\partial V_2/\partial q = q$$ (Paden & Sastry, 1997). If $$V_1$$ is differentiable at $$p$$, \begin{eqnarray} \frac{\partial V_1}{\partial p}(p) = \begin{bmatrix} \vdots\\ \sum_{j\in\mathcal J^i} {\text{sgn}}(p^i - p^j)\\ \vdots \end{bmatrix} = \sum_{i=1}^n e_i \sum_{j\in\mathcal J^i} \text{sgn}(p^i - p^j) = \sum_{(i,j)\in\mathcal E} e_i \,\text{sgn}(p^i - p^j) = \sum_{(i,j)\in\mathcal E_+} (e_i - e_j)\,\text{sgn}(p^i - p^j).
\end{eqnarray} (4.8) Define $$N_V$$ as the set of $$d\in{\mathbb {R}}^{n}$$ such that $$V_1$$ is not differentiable at $$p+d$$ and observe that \begin{align} & K\left[\frac{\partial V_1}{\partial p}\right](p)\\ &\quad{} = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \frac{\partial V_1}{\partial p}(w): w\in B_\delta(p)\setminus N,\; V_1 \hbox{ is differentiable at } w \right\}\\ &\quad{} = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+} (e_i - e_j)\,\text{sgn}((p^i+d^i) - (p^j+d^j)): d\in (B_\delta(0)\setminus N_V) \setminus N \right\}\\ &\quad{} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\,\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus N_V)\setminus N \right\}\\ &\qquad + \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j)\,\text{sgn}(p^i-p^j), \end{align} (4.9) which is obtained in a similar way to the set in Lemma 4.1. Appendix C fills the gap between (4.9) and \begin{eqnarray} K\left[\frac{\partial V_1}{\partial p}\right](p) = S^{\rm eq}(p) + s(p), \end{eqnarray} (4.10) where $$S^{\rm eq}(p)$$ is characterized as in Lemma 4.1. We can now state the following intermediate result of stability. Lemma 4.2 For $$V$$ defined as (4.7), it holds that \begin{eqnarray} \partial V_1(p) = S^{\rm eq}(p) + s(p), \qquad \partial V(x) = \begin{bmatrix} \alpha\,\partial V_1(p)\\ q \end{bmatrix}, \qquad \dot{\tilde V}(x) \subset \{-\beta |q|_1\}. \end{eqnarray} (4.11) Proof. The first and the second equalities have been shown above.
Let us consider $$\dot{\tilde V}$$: \begin{eqnarray*} \dot{\tilde V}(x) &=& \bigcap_{\hat\xi\in\partial V(x)} \hat \xi^{\sf{T}} K[\,f](x)\\ &=& \bigcap_{\xi\in S^{\rm eq}(p)} \begin{bmatrix} \alpha (\xi + s(p))\\ q \end{bmatrix}^{\sf{T}} K[\,f](x)\\ &=& \bigcap_{\xi\in S^{\rm eq}(p)} \begin{bmatrix} \alpha (\xi + s(p))\\ q \end{bmatrix}^{\sf{T}} \begin{bmatrix} q\\ -\alpha (S^{\rm eq}(p) + s(p)) - \beta K[v](q) \end{bmatrix}\\ &\subset& \alpha q^{\sf{T}} \bigcap_{\xi\in S^{\rm eq}(p)} (\xi - S^{\rm eq}(p)) - \beta |q|_1\\ &=& \alpha q^{\sf{T}} \bigcap_{\xi\in S^{\rm eq}(p)} \bigcup_{\eta\in S^{\rm eq}(p)} \{\xi - \eta\} - \beta |q|_1 = \{-\beta |q|_1\}, \end{eqnarray*} where $$q^{\sf{T}} K[v](q) = \{|q|_1\}$$ follows from Lemma 3.1(i) and (iii) since $$q^{\sf{T}} v(q) = |q|_1$$ is continuous, and the last equality is obtained via the lemma below, which exploits the fact that $$S^{\rm eq}(p)$$ is bounded and symmetric with respect to the origin. Thus, $$\dot{\tilde V}$$ has at most a single value $$-\beta|q|_1$$. □ Lemma 4.3 Suppose that a set $$S\subset{\mathbb {R}}^{n}$$ is not empty, symmetric with respect to the origin and bounded. Then, \begin{eqnarray*} S_1 = \bigcap_{\xi\in S} \bigcup_{\eta\in S} \{\xi - \eta\} = \{0\}. \end{eqnarray*} Proof. By definition, $$x\in S_1$$ $$\iff$$ $$(\forall\xi\in S)\,(\exists\eta\in S)\, x = \xi-\eta$$ $$\iff$$ $$(\forall\xi\in S)\, \xi - x\in S$$. Choosing $$\eta = \xi$$ shows that $$0\in S_1$$. Next, assume that some $$x\not=0$$ belongs to $$S_1$$. Then, as stated above, \begin{eqnarray} \forall\xi\in S\quad \xi - x\in S. \end{eqnarray} (4.12) Pick any $$\xi_0\in S$$, which is possible because $$S$$ is not empty. Applying (4.12) repeatedly, starting from $$\xi = \xi_0$$, yields $$\xi_0 - k x\in S$$ for all positive integers $$k$$. Because $$x\not=0$$, this contradicts the assumption that $$S$$ is bounded.
Therefore $$S_1 = \{0\}$$. □ In the following lemma, we determine the equilibrium points of differential inclusion (3.1) for system (2.5), where we say that $$x$$ is an equilibrium of differential inclusion (3.1) if $$0\in K[\,f](x)$$. Lemma 4.4 Let $$p,q\in{\mathbb {R}}^{n}$$. (i) The set-valued map $$K[s](p)$$ contains $$0$$ if and only if $$p^1=p^2=\cdots=p^n$$. (ii) If $$0\not\in\partial V_1(p)$$, it holds that $$\min\{|y|: y\in \{K[s](p)\}^i\} \ge 1$$ for some $$i$$ with $$1\le i\le n$$, where $$\{\cdot\}^i$$ stands for the $$i$$-th row of the set-valued map. (iii) Assume $$\alpha > \beta > 0$$. Then, $$x = \left[p^{\sf{T}} \quad q^{\sf{T}}\right]^{\sf{T}}$$ is an equilibrium point of differential inclusion (3.1) for system (2.5) if and only if $$p^1=p^2=\cdots=p^n$$ and $$q=0$$. Proof. (i) By (3.3) and Lemmas 4.1 and 4.2, $$K[s](p) = S^{\rm eq}(p) + s(p) = \partial V_1(p)$$. Because $$V_1$$ is convex, $$\partial V_1$$ further coincides with the subgradient of $$V_1$$ (Clarke, 1983): \begin{eqnarray*} \partial V_1(p) = \{w\in{\mathbb {R}}^{n}: w^{\sf{T}}(x-p) + V_1(p) \le V_1(x)\;\; \forall x\in{\mathbb {R}}^{n}\}. \end{eqnarray*} Hence, $$0\in \partial V_1(p)$$ if and only if $$V_1(p)$$ is the minimum of $$V_1$$. From the definition of $$V_1$$ and the assumption that graph $$\mathcal G$$ is connected, the minimum of $$V_1$$ is zero and attained exactly at a point of consensus, i.e. at $$p$$ with $$p^1=p^2=\cdots=p^n$$. (ii) In view of $$\partial V_1/\partial p$$ at differentiable points, as shown in (4.8), and the definition of Clarke’s generalized gradient (3.2), each row $$\{K[s](p)\}^i = \{\partial V_1(p)\}^i$$ is a closed interval $$[m^i_1,m^i_2]$$ with integer endpoints. Therefore, if $$0\not\in\partial V_1(p)$$, at least for one $$i$$, it holds that $$0\not\in [m^i_1,m^i_2]$$, which implies that $$m^i_1\ge 1$$ or $$m^i_2\le -1$$. This proves the claim. (iii) The sufficiency is obvious. To prove the necessity, suppose that $$0\in K[\,f](x)$$.
From (4.1), we have $$q=0$$ and $$0\in -\alpha K[s](p) - \beta K[v](q)$$. The latter can be rewritten as $$0\in K[s](p) + (\beta/\alpha)[-1,1]^n$$. To conclude $$0\in K[s](p)$$, assume to the contrary that $$0\not\in K[s](p)$$. Then, from (ii), there exists an $$i$$ with $$1\le i\le n$$ such that $$\min\{|y|: y\in \{K[s](p)\}^i\} \ge 1$$. This contradicts $$0\in K[s](p) + (\beta/\alpha)[-1,1]^n$$ for the $$i$$-th row because $$0 < \beta/\alpha < 1$$. Hence $$0\in K[s](p)$$. From (i), it holds that $$p^1=p^2=\cdots=p^n$$. □ 4.3. Partial proof of stability via Lemma 3.3 We apply Lemma 3.3 to derive its conclusions for system (2.5). Let $$m_0$$ be the number of undirected edges of graph $$\mathcal G$$; $$m_0 = |\mathcal E_+|$$. Let $$E$$ be the matrix whose rows are the $$m_0$$ distinct vectors $$(e_i - e_j)^{\sf{T}}$$ with $$(i,j)\in\mathcal E_+$$. Define \begin{eqnarray*} C = \left[\begin{matrix} E & 0 \\ 0 & I_n \end{matrix}\right],\quad \quad y = Cx = \left[\begin{matrix} Ep \\ q \end{matrix}\right]. \end{eqnarray*} Then, writing $$V$$ in terms of $$y = Cx\in{\mathbb{R}}^{m_0+n}$$, \begin{eqnarray*} V(x) &=& \alpha \sum_{i=1}^n \sum_{j\in\mathcal J^i} |p^i - p^j| + \frac{1}{2} \sum_{i=1}^{n} |q^i|^2 = 2 \alpha \sum_{(i,j)\in\mathcal E_+} |p^i - p^j| + \frac{1}{2} \sum_{i=1}^{n} |q^i|^2\\ &=& 2 \alpha \sum_{k=1}^{m_0} |y^k| + \frac{1}{2} \sum_{k=m_0+1}^{m_0+n} |y^k|^2. \end{eqnarray*} Thus, setting positive definite functions as \begin{eqnarray*} W_1(y) = W_2(y) = 2 \alpha \sum_{k=1}^{m_0} |y^k| + \frac{1}{2} \sum_{k=m_0+1}^{m_0+n} |y^k|^2, \end{eqnarray*} we have (3.5). Further, (3.6) holds for \begin{eqnarray} W_3(y) = W_3(C x) = \beta |q|_1. \end{eqnarray} (4.13) Lastly, consider (3.4). Because $$|q| \le r$$ if $$|C x|\le r$$, \begin{eqnarray} |f(x)| = \left|\, \left[\begin{matrix} q \\ - \alpha s(p) - \beta v(q) \end{matrix}\right] \,\right| \le |q| + \alpha |s(p)| + \beta |v(q)| \le r + \alpha n(n-1) + \beta n < \infty.
\end{eqnarray} (4.14) Thus, the assumptions of Lemma 3.3 are confirmed for the closed-loop system. Hence, $$E p(t)$$ and $$q(t)$$ are bounded and $$q(t)\to 0$$ as $$t\to\infty$$ for every solution $$x(t) = \left[{p(t)^{\sf{T}} \quad q(t)^{\sf{T}}}\right]^{\sf{T}}$$ of (2.5). Moreover, the boundedness of the average position is derived using Lemma 3.3. Define \begin{eqnarray*} \bar p(t) = \frac{1}{n} \sum_{i=1}^n p^i(t), \quad \bar q(t) = \frac{1}{n} \sum_{i=1}^n q^i(t), \end{eqnarray*} which are the averages of $$p(t)$$ and $$q(t)$$, respectively. Let $${\boldsymbol{1}} = \left[{1 \quad 1 \quad \cdots \quad 1}\right]^{\sf{T}}\in{\mathbb {R}}^{n}$$ and observe that \begin{eqnarray*} \dot{\bar p}(t) \in \frac{1}{n}\left[{\boldsymbol{1}^{\sf{T}} \quad 0}\right] K[\,f](x(t))= K\left[\frac{1}{n}\left[{\boldsymbol{1}^{\sf{T}} \quad 0}\right] f\right](x(t))= \{\bar q(t)\}, \end{eqnarray*} i.e. $$\dot{\bar p}(t) = \bar q(t)$$. Because (3.7) holds for $$W_3(C x) = \beta|q|_1$$, we see that $$\bar p(t)$$ is bounded as \begin{eqnarray*} |\bar p(t) - \bar p(0)| = \left|\int_0^t \bar q(\tau)d\tau\right| \le \int_0^t |\bar q(\tau)| d\tau \le \frac{1}{n} \int_0^t |q(\tau)|_1 d\tau = \frac{1}{\beta n} \int_0^t W_3(C x(\tau)) d\tau < \infty. \end{eqnarray*} Because graph $$\mathcal G$$ is connected, the boundedness of $$E p(t)$$ shown above implies that the distance between every two agents is bounded. Combining this with the boundedness of $$\bar p(t)$$, we see that $$p(t)$$ is bounded. The results are summarized below. Lemma 4.5 Position $$p(t)$$ is bounded for $$t\ge 0$$ and velocity $$q(t)$$ tends to $$0$$ as $$t\to\infty$$. 4.4. Proof of the consensus Based on the results of the previous subsections, we finish the proof of Theorem 4.1 by showing the first condition in (2.2). Proof of Theorem 4.1. (i) Let $$x_0\in{\mathbb{R}}^{2n}$$ be arbitrary and let $$x(t)$$ be any Filippov solution to (2.5) with $$x(0) = x_0$$. From Lemma 4.5, $$x(t)$$ is bounded for all $$t\ge 0$$.
Consider the $$\omega$$-limit set $${\it{\Omega}}(x)$$. From Lemma 3.4, $${\it{\Omega}}(x)$$ is not empty and is bounded, connected and weakly invariant. Together with the boundedness of $$x(t)$$, this implies that there exists a maximal Filippov solution $$z(t)$$ of (2.5) that lies in $${\it{\Omega}}(x)$$ for all $$t\ge 0$$. Furthermore, it holds that $$\lim_{t\to\infty} \text{dist}(x(t),{\it{\Omega}}(x)) = 0$$. (ii) Let $$\left[{p^{\sf{T}} \quad q^{\sf{T}}}\right]^{\sf{T}}\in{\it{\Omega}}(x)$$, where $$q(t)\to 0$$ ($$t\to\infty$$) from Lemma 4.5. Hence, $$q = 0$$. Moreover, it holds that $$V_1(p)$$ is constant in $${\it{\Omega}}(x)$$. This can be proved as in the proof of Theorem 3 of Bacciotti & Ceragioli (1999); recall that $$V(x(t))$$ is monotonically decreasing and bounded below. Hence, there exists a scalar $$c_0$$ such that \begin{eqnarray} \lim_{t\to\infty} V(x(t)) = c_0\ge 0. \end{eqnarray} (4.15) Let $$\xi\in{\it{\Omega}}(x)$$. There then exists a sequence $$t_k$$ satisfying \begin{eqnarray*} 0\le t_1<t_2<t_3<\cdots, \quad \lim_{k\to\infty} t_k = \infty, \quad\lim_{k\to\infty} x(t_k) = \xi. \end{eqnarray*} Combining this with (4.15), we see $$ V(\xi) = V(\lim_{k\to\infty} x(t_k)) = \lim_{k\to\infty} V(x(t_k)) = c_0 $$. Thus, $$V(\xi) = c_0$$ for all $$\xi\in{\it{\Omega}}(x)$$. Because the lower $$n$$ components are zero in set $${\it{\Omega}}(x)$$, we have, for $$c = c_0/\alpha \ge 0$$, \begin{eqnarray} {\it{\Omega}}(x) \subset \left\{\left[\begin{matrix}p\\ 0\end{matrix}\right]\in{\mathbb{R}}^{2n}: V_1(p) = c\right\}\!. \end{eqnarray} (4.16) (iii) Here, we determine $$c$$ in (4.16). Let $$z(t) = \left[{p_z(t)^{\sf{T}} \quad q_z(t)^{\sf{T}}}\right]^{\sf{T}}$$ be an arbitrary solution of (2.5) whose whole trajectory is included in $${\it{\Omega}}(x)$$. It holds from (4.16) that $$q_z(t) = 0$$ and $$V_1(p_z(t)) = c$$ for all $$t\ge 0$$. Because $$\dot p_z(t) = q_z(t)$$ for almost all $$t\ge 0$$ and $$q_z(t)=0$$ for all $$t\ge 0$$, $$p_z(t) = p_z(0) = p_0$$ for all $$t\ge 0$$. Moreover, it holds that $$\dot q_z(t) = 0\in - \alpha K[s](p_0) - \beta K[v](0)$$.
As in the proof of (iii) of Lemma 4.4, we can see that $$p_0^1=p_0^2=\cdots=p_0^n$$, i.e. $$p_0 = a {\bf 1}$$ for some $$a\in{\mathbb {R}}$$, and also $$V_1(p_0) = 0$$. Because $$V_1(p)$$ is constant in $${\it{\Omega}}(x)$$, as shown in (4.16), and $$\left[{p_0^{\sf{T}} \quad 0^{\sf{T}}}\right]^{\sf{T}}$$ belongs to $${\it{\Omega}}(x)$$, we have $$V_1(p)=c=0$$. This result of $$c=0$$ allows us to rewrite the right-hand side of (4.16) as \begin{equation} {\it{\Omega}}(x) \subset \left\{\left[\begin{matrix}a {\bf 1}\\ 0\end{matrix}\right]\in{\mathbb{R}}^{2n}: a\in{\mathbb {R}}\right\}\!. \end{equation} (4.17) (iv) Lastly, since $$\lim_{t\to\infty} \text{dist}(x(t),{\it{\Omega}}(x)) = 0$$, we conclude that $$\lim_{t\to\infty} p(t) = a{\bf 1}$$ for some $$a\in{\mathbb {R}}$$. This completes the proof. □ 4.5. Average consensus with linear velocity feedback We consider the case where $$v(q)$$ is set as in (2.7). Then, $$K[\,f](x)$$ in (4.1) becomes \begin{align*} K[\,f](x) = \left[\begin{matrix} q\\ -\alpha K[s](p)-\beta q\\ \end{matrix}\right] \end{align*} and (4.11) holds with $$|q|^2$$ in place of $$|q|_1$$. Further, $$W_3(C x) = \beta|q|_1$$ is replaced with $$W_3(C x) = \beta|q|^2$$ and an upper bound of $$|f(x)|$$ is given as \begin{eqnarray*} |f(x)| = \left|\,\left[\begin{matrix} q\\[3mm] -\alpha s(p)-\beta q\\ \end{matrix}\right]\,\right| \le \left|\,\left[\begin{matrix} q\\[3mm] -\beta q\\ \end{matrix}\right]\,\right| + \alpha |s(p)| \le \sqrt{1+\beta^2}\, r + \alpha n(n-1) < \infty. \end{eqnarray*} The proof of item (iii) of Lemma 4.4 is easier with $$v(q) = q$$: the condition $$0\in -\alpha K[s](p) - \beta K[v](q)$$ reduces to $$0\in K[s](p)$$ when $$q=0$$, so the assumption $$0<\beta/\alpha<1$$ is not needed. The boundedness of the average position shown in Section 4.3 remains valid, with more explicit expressions.
Because graph $$\mathcal G$$ is undirected, \begin{eqnarray*} {\bf 1}^{{\sf{T}}} s(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} \text{sgn}(p^i-p^j) = \sum_{(i,j)\in\mathcal E} \text{sgn}(p^i-p^j) = 0, \end{eqnarray*} which, together with Lemma 3.1, yields \begin{eqnarray*} \dot{\bar q}(t) \in \frac{1}{n}\left[\begin{matrix}0 & {\bf 1}^{{\sf{T}}}\end{matrix}\right] K[\,f](x(t)) = K\left[\frac{1}{n} {\bf 1}^{{\sf{T}}} (-\alpha s(p)-\beta q)\right](x(t)) = \{-\beta \bar q(t)\}. \end{eqnarray*} Therefore, $$\dot{\bar q}(t) = - \beta \bar q(t)$$, and hence \begin{eqnarray*} \bar q(t) = e^{-\beta t} \bar q(0), \quad \bar p(t) = \bar p(0) + \frac{1}{\beta}(1 - e^{-\beta t}) \bar q(0). \end{eqnarray*} Thus, the average position $$\bar p(t)$$ is bounded, which implies that $$x(t)$$ is bounded. Moreover, if the initial average velocity $$\bar q(0)$$ is zero, the average position satisfies $$\bar p(t) = \bar p(0)$$ for all $$t\ge 0$$, which means that protocol (2.3) with $$v(q) = q$$ attains an average consensus. The results are summarized below. Corollary 4.1 Suppose that $$\alpha,\beta>0$$. Then, the consensus-based rendezvous of the agents in the sense of (2.2) is achieved in system (2.1) by a protocol with control input $$u^i = - \alpha s^i(p) - \beta q^i$$, $$i=1,2,\ldots,n$$. Moreover, this protocol realizes the average consensus of the position: $$\lim_{t\to\infty} p^i(t) = \bar p(0)$$, $$i=1,2,\ldots,n$$, if the average of the initial velocities of the agents is zero. 5. Numerical examples Let us consider the graph shown in Fig. 1 with six agents. The graph is undirected and connected. The gains are set as $$\alpha = 1$$ and $$\beta = 0.5$$. The responses under input (2.3) with $$v^i(q) = \text{sgn}\, q^i$$ are shown in Fig. 2, where the initial values are as follows: Fig. 1. Graph for the example of six agents. Fig. 2.
Position $$p(t)$$ (left) and velocity $$q(t)$$ (right) of agents with $$\text{sgn}\,q_i$$ feedback. \begin{eqnarray*} \begin{array}{llllll} p_1(0) = 3, & p_2(0) = 4, & p_3(0) = -3, & p_4(0) = 1, & p_5(0) = 0, & p_6(0) = 1,\\ q_1(0) = 3, & q_2(0) = -2, & q_3(0) = 1, & q_4(0) = -5, & q_5(0) = -1, & q_6(0) = 4, \end{array} \end{eqnarray*} for which the averages of the initial position and velocity are $$\bar p(0) = 1$$ and $$\bar q(0) = 0$$, respectively. We can see in Fig. 2 that consensus (2.2) is attained. Figure 3 shows the responses with the same settings but with linear $$v^i(q)$$. In addition to (2.2), the positions converge to the average of the initial values. Fig. 3. Position $$p(t)$$ (left) and velocity $$q(t)$$ (right) of agents with linear $$q_i$$ feedback. We now demonstrate that there exists a problem instance that indeed needs the condition $$\alpha>\beta$$ for consensus. Consider graph $$\mathcal G=(\mathcal A,\mathcal E)$$ with $$\mathcal A=\{1,2\}$$ and $$\mathcal E=\{(1,2),(2,1)\}$$, the simplest connected graph. Let us set the gains as $$\alpha = 1$$ and $$\beta = 0.5$$ or $$\beta = 2$$. The responses of the positions are shown in Fig. 4, where the initial values are set as $$p_1(0) = 2$$, $$p_2(0) = 1$$, $$q_1(0) = 2$$, $$q_2(0) = -2$$. While the distance between the positions converges to zero with $$\beta = 0.5$$, the agents stall and do not attain consensus with $$\beta = 2$$. In fact, the equilibria for $$\beta = 2$$ are given by $$q^1=q^2=0$$ and $$\text{Sgn}(p^1-p^2) \cap [-2,2]\not=\emptyset$$. The latter condition is satisfied by any $$p^1$$ and $$p^2$$, and hence consensus is not guaranteed, as happens in Fig. 4.
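The two-agent experiment can be reproduced with a simple forward-Euler simulation (a sketch of our own, not the authors' code; the step size, horizon and Euler discretization of the discontinuous right-hand side are assumptions, so the trajectories only approximate the Filippov solutions):

```python
import numpy as np

def simulate(alpha, beta, p0, q0, edges, T=40.0, dt=1e-3):
    # Forward-Euler integration of the closed loop (2.5):
    #   p' = q,  q' = -alpha*s(p) - beta*sgn(q)
    p, q = np.array(p0, float), np.array(q0, float)
    for _ in range(int(T / dt)):
        s = np.zeros_like(p)
        for (i, j) in edges:              # directed pairs; both (i,j) and (j,i) listed
            s[i] += np.sign(p[i] - p[j])
        q = q + dt * (-alpha * s - beta * np.sign(q))
        p = p + dt * q
    return p, q

edges = [(0, 1), (1, 0)]                   # the two-agent graph of Fig. 4
p0, q0 = [2.0, 1.0], [2.0, -2.0]
p_c, _ = simulate(1.0, 0.5, p0, q0, edges)   # alpha > beta: gap closes
p_s, _ = simulate(1.0, 2.0, p0, q0, edges)   # beta > alpha: agents stall
print(abs(p_c[0] - p_c[1]), abs(p_s[0] - p_s[1]))
```

With $$\alpha=1$$, $$\beta=0.5$$ the position gap shrinks to (numerically) zero, whereas with $$\beta=2$$ the agents come to rest far apart, in line with the equilibrium analysis above.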
Fig. 4. Position $$p(t)$$ with $$\beta=0.5$$ (left) and $$\beta = 2$$ (right). 6. Conclusion In this article, we presented a protocol that attains a consensus-based rendezvous of agents that have double integrator dynamics, where only the sign of the relative positions and the sign of the velocity are needed for the protocol. We analysed the closed-loop system via Lyapunov methods extended to the Filippov solution of differential inclusions. In particular, to prove the consensus, we utilized the symmetry of the Filippov set-valued map of the closed-loop system and provided an analysis of the $$\omega$$-limit set of the trajectories. Numerical examples showed that the consensus of position is achieved by the proposed protocol. Appendix A. Proof of Lemma 3.3 As stated in Section 3.1, for any $$x_0\in{\mathbb {R}}^{n}$$, there exists at least one Filippov solution $$x(t)$$ with $$x(0) = x_0$$ for $$t\in[0,t_1]$$ for some $$t_1>0$$. Then, \begin{eqnarray} V(x(t)) - V(x_0) = \int_0^{t} \frac{{\rm d}V(x(\tau))}{{\rm d}\tau} {\rm d}\tau \le - \int_0^{t} W_3(C x(\tau)){\rm d}\tau \le 0 \end{eqnarray} (A.1) holds for all $$t\in [0,t_1]$$, and with it $$W_1(C x(t)) \le V(x(t)) \le V(x_0)$$. Because $$W_1$$ is positive definite, $$|C x(t)| \le r$$ holds for some $$r>0$$. From (3.4), $$|f(x(t))| \le M_r$$ is valid as far as $$x(t)$$ exists. This implies that $$x(t)$$ is defined for all $$t\ge 0$$ and $$C x(t)$$ is bounded. Now, from (A.1), we have $$\int_0^t W_3(C x(\tau)){\rm d}\tau \le V(x_0) < \infty$$ for all $$t\ge 0$$. Because $$W_3$$ is positive semidefinite, the left-hand side is non-decreasing in $$t$$. This means that $$\lim_{t\to\infty} \int_0^{t} W_3(C x(\tau)){\rm d}\tau$$ exists and is finite, as claimed in (3.7). Moreover, $$C x(t)$$ is Lipschitz continuous in $$t$$ (since $$|\,f(x(t))|\le M_r$$) with values in a compact set, and $$W_3$$ is continuous.
Hence, $$W_3(C x(t))$$ is uniformly continuous in $$t$$. From Barbalat’s Lemma (e.g. Khalil, 1996), it holds that $$W_3(C x(t))\to 0$$ as $$t\to\infty$$. Thus, (3.8) is proved. Appendix B. Proof of (4.5) Consider the set in (4.4): \begin{eqnarray*} S_{(4.4)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{\sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus N \right\}\!. \end{eqnarray*} Let $$D_{ij} = \{d\in{\mathbb {R}}^{n}: d^i = d^j\}$$, which is a closed $$(n-1)$$-dimensional subspace of $${\mathbb {R}}^{n}$$, so that $$\mu(D_{ij})=0$$. There are $$n(n-1)/2$$ different such subspaces, and we define \begin{eqnarray} D = \bigcup_{1\le i<j\le n} D_{ij}. \end{eqnarray} (B.1) Then, $$D$$ is a closed null set. It is obvious that \begin{eqnarray} S_{(4.4)} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N), \qquad S(N) = \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus D)\setminus N \right\}\!. \end{eqnarray} (B.2) Next, consider the following set, which includes $$S(N)$$: \begin{eqnarray*} S_0 = \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus D \right\}\!. \end{eqnarray*} To show the opposite inclusion, suppose that $$s_0\in S_0$$. Then, there exists a $$d_s\in{\mathbb {R}}^{n}\setminus D$$ such that \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d_s^i-d_s^j). \end{eqnarray*} Recall that $${\mathbb {R}}^{n}\setminus D$$ is open. It then holds that \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j) \quad \forall d\in B_{r_s}(d_s) \end{eqnarray*} for some $$r_s>0$$. Because a non-empty open set has positive measure, we have $$B_{r_s}(d_s)\setminus N \not=\emptyset$$ for any null set $$N$$.
Hence, there exists a $$d_N\in B_{r_s}(d_s)\setminus N$$ such that \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d_N^i-d_N^j), \end{eqnarray*} which implies that $$s_0\in S(N)$$. Therefore, $$S_0 = S(N)$$ holds for all null sets $$N$$. This yields \begin{eqnarray} S_{(4.4)} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N) = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S_0 = \overline{\mathop{\rm co}}\, S_0 = {\mathop{\rm co}}\, S_0, \end{eqnarray} (B.3) where the last equality holds because $$S_0$$ is a finite set, and $${\mathop{\rm co}}\, S_0$$ is the set that appears in (4.5). Appendix C. Proof of (4.9) Here, we consider \begin{eqnarray*} S_{(4.9)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus N_V)\setminus N \right\}\!, \end{eqnarray*} where the only difference from $$S_{(4.4)}$$ is that $$N_V$$ is also removed. For the null set $$D$$ defined in (B.1), we also have the following: \begin{eqnarray*} S_{(4.9)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus D)\setminus N \right\} \nonumber\\ &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N) = {\mathop{\rm co}}\, S_0, \end{eqnarray*} where the last two equalities are shown in (B.2) and (B.3), respectively.

References

Bacciotti, A. & Ceragioli, F. (1999) Stability and stabilization of discontinuous systems and non-smooth Lyapunov functions. ESAIM Control Optim. Calc. Var., 4, 361–376.
Ceragioli, F., De Persis, C. & Frasca, P. (2011) Discontinuities and hysteresis in quantized average consensus. Automatica, 47, 1916–1928.
Chen, G., Lewis, F. L. & Xie, L. (2011) Finite-time distributed consensus via binary control protocols. Automatica, 47, 1962–1968.
Clarke, F. H. (1983) Optimization and Nonsmooth Analysis. New York: Wiley.
Cortés, J. (2006) Finite-time convergent gradient flows with applications to network consensus. Automatica, 42, 1993–2000.
Dimarogonas, D. V. & Johansson, K. H. (2010) Stability analysis for multi-agent systems using the incidence matrix: quantized communication and formation control. Automatica, 46, 695–700.
Filippov, A. F. (1988) Differential Equations with Discontinuous Righthand Sides. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Fischer, N., Kamalapurkar, R. & Dixon, W. E. (2013) LaSalle–Yoshizawa corollaries for nonsmooth systems. IEEE Trans. Automat. Control, 58, 2333–2338.
Frasca, P. (2012) Continuous-time quantized consensus: convergence of Krasovskii solutions. Syst. Control Lett., 61, 273–278.
Guo, M. & Dimarogonas, D. V. (2013) Consensus with quantized relative state measurements. Automatica, 49, 2531–2537.
Khalil, H. (1996) Nonlinear Systems, 2nd edn. Upper Saddle River, NJ: Prentice Hall.
Li, S., Du, H. & Lin, X. (2011) Finite-time consensus algorithm for multi-agent systems with double-integrator dynamics. Automatica, 47, 1706–1712.
Paden, B. & Sastry, S. (1987) A calculus for computing Filippov’s differential inclusion with application to the variable structure control of robot manipulators. IEEE Trans. Circuits Syst., 34, 73–81.
Ren, W. & Beard, R. W. (2008) Distributed Consensus in Multi-vehicle Cooperative Control: Theory and Applications. London: Springer.
Rockafellar, R. T. (1970) Convex Analysis. Princeton: Princeton University Press.
Shevitz, D. & Paden, B. (1994) Lyapunov stability theory of nonsmooth systems. IEEE Trans. Automat. Control, 39, 1910–1914.
Xiao, F., Wang, L., Chen, J. & Gao, Y. (2009) Finite-time formation control for multi-agent systems. Automatica, 45, 2605–2611.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. IMA Journal of Mathematical Control and Information, Oxford University Press.

ISSN 0265-0754, eISSN 1471-6887, DOI 10.1093/imamci/dnx033

We provide a detailed analysis of the Filippov set-valued map of the vector field of the closed-loop system.
This reveals that the set-valued map is compact, convex and symmetric with respect to a certain point, by which the generalized time derivative (Shevitz & Paden, 1994) of a Lyapunov-like function is shown to be contained in a singleton with a non-positive element. After showing stability using this approach, we investigate the trajectories of the agents to clarify that any point of the $$\omega$$-limit set (Shevitz & Paden, 1994; Bacciotti & Ceragioli, 1999) of any trajectory is indeed a consensus point. Together, these results establish that the proposed protocol attains the consensus-based rendezvous via feedback with binary relative positions. The rest of the article is organized as follows. In Section 2, we formulate the consensus control problem with an agent network and propose a protocol with binary relative positions and a binary or linear velocity. Section 3 is devoted to preliminaries on the Filippov solution for differential inclusions and stability analysis. Based on the methods provided in Section 3, we prove the consensus of the feedback system in Section 4 by showing results on an analysis of the Filippov set-valued map and stability analysis based on the invariance principle. Section 5 presents numerical examples that illustrate the protocol. Lastly, we conclude the article in Section 6. Notation: For a set $$S\subset{\mathbb {R}}^{n}$$, $${\mathop{\rm co}}\,S$$ is the convex hull of $$S$$ and $$\overline{\mathop{\rm co}} S$$ is the convex closure of $$S$$. The $$k$$-norm of $$x = (x^1, x^2, \ldots,x^n)\in{\mathbb {R}}^{n}$$ is written as $$|x|_k = (\sum_{i=1}^n |x^i|^k)^{1/k}$$, while the Euclidean norm is denoted by $$|x|$$ without the subscript. The elements of $${\mathbb {R}}^{n}$$ are regarded as column vectors if they appear in expressions with matrices and $$x^{\sf{T}}$$ denotes the transpose of $$x$$. The identity matrix of $${\mathbb {R}}^{{n}\times{n}}$$ is denoted by $$I_n$$.
Let $$B_r(x)$$ stand for the open ball with centre $$x$$ and radius $$r$$. The distance between $$x\in{\mathbb {R}}^{n}$$ and $$Y\subset{\mathbb {R}}^{n}$$ is $$\text{dist}(x,Y) = \inf_{y\in Y} |x-y|$$. Let $$\mu(S)$$ denote the Lebesgue measure of $$S\subset{\mathbb {R}}^{n}$$. 2. Formulation of the consensus problem Let $$\mathcal A=\{1,2,\ldots,n\}$$ be the set of agents and $$\mathcal E\subset\mathcal A\times\mathcal A$$ be the set of pairs of agents in $$\mathcal A$$ that can communicate with each other to obtain information on their relative position. We assume that graph $$\mathcal G=(\mathcal A,\mathcal E)$$ is undirected and connected. Let $$\mathcal J^i = \{\,j\in\mathcal A:(i,j)\in\mathcal E\}$$ be the set of the agents connected to agent $$i$$. The dynamics of each agent is given by \begin{eqnarray} \dot p^i = q^i, \quad \dot q^i = u^i, \quad i=1,2,\ldots,n, \end{eqnarray} (2.1) where $$p^i\in{\mathbb {R}}$$ is the position, $$q^i\in{\mathbb {R}}$$ is the velocity, and $$u^i$$ is the force, which is the control input. Without loss of generality, only scalar variables $$p^i$$ and $$q^i$$ are considered for each agent. We assume that agent $$i$$ can only measure the sign of its velocity $$q^i$$ and the signs of the relative positions $$p^i - p^j$$ to agents $$j\in\mathcal J^i$$. The goal of the control is to drive the agents to converge to a consensus point for their position, namely, \begin{eqnarray} \left\{ \begin{array}{ll} \lim_{t \to \infty} (p^i(t) - p^j(t)) = 0, & i,j = 1,2, \ldots, n,\quad i\not=j,\\ \lim_{t \to \infty} q^i(t) = 0, & i = 1,2, \ldots, n. \end{array} \right. \end{eqnarray} (2.2) For this purpose, assuming the availability of the measurements of each agent stated above, we set control input $$u^i$$ as a consensus protocol, given as follows: \begin{eqnarray} u^i = - \alpha s^i(p) - \beta v^i(q), \quad i=1,2,\ldots,n, \end{eqnarray} (2.3) where $$\alpha$$ and $$\beta$$ are constants satisfying $$\alpha > \beta > 0$$. Functions $$s^i(\cdot)$$ and $$v^i(\cdot)$$ are defined as \begin{eqnarray} s^i(p) = \sum_{j\in\mathcal J^i} {\text{sgn}}(p^i - p^j), \quad v^i(q) = {\text{sgn}}\, q^i, \quad i=1,2,\ldots,n, \end{eqnarray} (2.4) where $$\text{sgn}$$ is the signum function, defined for $$a\in{\mathbb {R}}$$ as \begin{eqnarray*} \text{sgn}\,a = \left\{ \begin{array}{rl} 1, & a > 0,\\ 0, & a = 0,\\ -1, & a < 0. \end{array} \right. \end{eqnarray*} Setting \begin{eqnarray*} p = \left[\begin{matrix} p^1\\ p^2\\ \vdots\\ p^n \end{matrix}\right],\quad q = \left[\begin{matrix} q^1\\ q^2\\ \vdots\\ q^n \end{matrix}\right],\quad u = \left[\begin{matrix} u^1\\ u^2\\ \vdots\\ u^n \end{matrix}\right],\quad s(p) = \left[\begin{matrix} s^1(p)\\ s^2(p)\\ \vdots\\ s^n(p) \end{matrix}\right],\quad v(q) = \left[\begin{matrix} v^1(q)\\ v^2(q)\\ \vdots\\ v^n(q) \end{matrix}\right]\!, \end{eqnarray*} the closed-loop system is represented as \begin{eqnarray} \dot x = f(x), \quad x\in{\mathbb {R}}^{2n}, \end{eqnarray} (2.5) where \begin{eqnarray} x = \left[\begin{matrix} p\\ q \end{matrix}\right],\quad f(x) = \left[\begin{matrix} q\\ -\alpha s(p) - \beta v(q) \end{matrix}\right]\!. \end{eqnarray} (2.6) This system has discontinuities at the points for which $$p^i = p^j$$, $$j\in\mathcal J^i$$, or $$q^i=0$$, $$i=1,2,\ldots,n$$.
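As a concrete illustration of (2.3)–(2.6), the closed-loop vector field can be sketched in a few lines of Python (a sketch only; the adjacency list `edges` and the helper name `closed_loop_f` are our own, not from the article):

```python
import numpy as np

def closed_loop_f(p, q, edges, alpha, beta):
    """Vector field (2.6): f(x) = [q; -alpha*s(p) - beta*v(q)], with
    s^i(p) = sum_j sgn(p^i - p^j) over neighbours j and v^i(q) = sgn(q^i),
    as in (2.4)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = np.zeros_like(p)
    for (i, j) in edges:   # directed pairs; an undirected graph lists both (i,j) and (j,i)
        s[i] += np.sign(p[i] - p[j])
    u = -alpha * s - beta * np.sign(q)   # protocol (2.3)
    return np.concatenate([q, u])

# Two agents connected by one undirected edge:
edges = [(0, 1), (1, 0)]
f = closed_loop_f([2.0, 1.0], [2.0, -2.0], edges, alpha=1.0, beta=0.5)
print(f)  # components (2, -2, -1.5, 1.5)
```

Note that `np.sign(0.0)` returns `0`, matching the definition of $$\text{sgn}$$ above; away from the discontinuity surfaces the map coincides with the single-valued $$f$$.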
After giving the main results on the consensus (2.2) with protocol (2.3)–(2.4), we comment on the case where $$v^i(q)$$ in (2.4) is replaced with \begin{eqnarray} v^i(q) = q^i, \quad i=1,2,\ldots,n \end{eqnarray} (2.7) and the condition on the gains is relaxed to $$\alpha,\beta>0$$; that is, the condition $$\alpha>\beta$$ is not required. The protocol with (2.7) can be implemented if each agent has a precise internal sensor to detect its velocity. With this linear $$v(q)$$, it is shown in Section 4.5 that the agents can attain the average consensus, i.e. each $$p^i(t)$$ converges to the average of $$p^i(0)$$, $$i=1,2,\ldots,n$$, if the average of $$q^i(0)$$, $$i=1,2,\ldots,n$$, is zero. 3. Preliminaries for the Lyapunov methods for discontinuous dynamical systems This section is devoted to the preliminaries for the Lyapunov methods for the stability analysis of discontinuous dynamical systems within the framework of Filippov (1988). 3.1. Differential inclusions with Filippov solution We formulate a differential inclusion for system (2.5) and consider the Filippov solution. For notational simplicity, in this section, we let $$x\in{\mathbb {R}}^{n}$$ in (2.5). The Filippov set-valued map is defined as \begin{eqnarray*} K[\,f](x) = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\, f(B_\delta(x)\setminus N). \end{eqnarray*} Definition 3.1 (Filippov, 1988) A function $$x(t)$$ is called a Filippov solution of the differential equation $$\dot x = f(x)$$, $$x\in{\mathbb {R}}^{n}$$, over $$[t_0,t_1]$$ with $$t_0<t_1$$ and initial condition $$x(t_0) = x_0$$, if $$x(t)$$ is absolutely continuous, satisfies $$x(t_0) = x_0$$ and satisfies the differential inclusion \begin{eqnarray} \dot x(t) \in K[\,f](x(t)) \end{eqnarray} (3.1) for almost all $$t\in[t_0,t_1]$$. If $$f:{\mathbb {R}}^{n}\to{\mathbb {R}}^{n}$$ is Lebesgue measurable and locally essentially bounded (i.e.
$$f\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$), the Filippov set-valued map is upper semicontinuous with $$K[\,f](x)$$ being non-empty, bounded, closed and convex at each $$x$$. Hence, there exists at least one Filippov solution $$x(t)$$ satisfying an initial condition $$x(t_0)=x_0$$, defined for $$t\in[t_0,t_1]$$ with some $$t_1>t_0$$ (Filippov, 1988). The following calculus for the Filippov set-valued maps is exploited in the following section (Bacciotti & Ceragioli, 1999, see also Paden & Sastry, 1987). Lemma 3.1 Map $$K$$ has the following properties: (i) If $$f\in C({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f](x) = \{f(x)\}\;\; \forall x\in{\mathbb {R}}^{n}$$. (ii) If $$f,g\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f+g](x) \subset K[\,f](x) +K[g](x)\;\; \forall x\in{\mathbb {R}}^{n}$$. If $$f$$ also belongs to $$C({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[\,f+g](x) = f(x) +K[g](x)\;\; \forall x\in{\mathbb {R}}^{n}$$. (iii) If $$G\in C({\mathbb {R}}^{n},{\mathbb{R}}^{{m}\times{n}})$$ and $$u\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$, $$K[G u](x) = G(x) K[u](x)\;\; \forall x\in{\mathbb {R}}^{n}$$. 3.2. Chain rule with differential inclusions We handle a Lyapunov-like function and its derivatives along Filippov solutions for a differential inclusion. For this, we invoke a generalized theory on the derivatives of functions (Clarke, 1983). For a function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}^{m}$$, the right directional derivative $$V'(x,w)$$ of $$V$$ at $$x\in{\mathbb {R}}^{n}$$ in the direction $$w\in{\mathbb {R}}^{n}$$ is defined as $$ V'(x,w) = \lim_{t\to 0+} \{V(x + w t) - V(x)\}/t$$. The right directional derivative may not exist. The generalized directional derivative of $$V$$ at $$x\in{\mathbb {R}}^{n}$$ in the direction $$w\in{\mathbb {R}}^{n}$$ is $$V^\circ(x,w) = \limsup_{y\to x,\,t\to 0+} \{V(y + w t) - V(y)\}/t$$, which, in contrast, always exists. Definition 3.2
(Clarke, 1983) A function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}^{m}$$ is said to be regular at $$x\in{\mathbb {R}}^{n}$$ if the right directional derivative $$V'(x,w)$$ exists for all $$w\in{\mathbb {R}}^{n}$$ and satisfies $$V'(x,w) = V^\circ(x,w)$$. Clarke’s generalized gradient of a locally Lipschitz continuous function $$V:{\mathbb {R}}^{n}\to{\mathbb {R}}$$ is defined as \begin{eqnarray} \partial V(x) = \overline{{\mathop{\rm co}}} \left\{\lim_{k\to\infty} \frac{\partial V}{\partial x}(x_k) \,\bigg | \, \lim_{k\to\infty} x_k = x,\; x_k\not\in N_V \right\}\!, \end{eqnarray} (3.2) where $$N_V$$ is the set of $$x\in{\mathbb {R}}^{n}$$ at which $$\partial V(x)/\partial x$$ does not exist. Since $$V$$ is locally Lipschitz continuous, $$N_V$$ has zero measure. Moreover, the following is true: \begin{eqnarray} \partial V(x) = K\left[{{\partial V} \over {\partial x}} \right](x). \end{eqnarray} (3.3) Lemma 3.2 (Shevitz & Paden, 1994) Let $$x(t)$$ be a Filippov solution of $$\dot x = f(x)$$ defined on $$[t_0,t_1]$$ and $$V: {\mathbb {R}}^{n}\to{\mathbb {R}}$$ be a locally Lipschitz continuous and regular function. Then, $$V(x(t))$$ is absolutely continuous with respect to $$t$$, $${\rm d}V(x(t))/{\rm d}t$$ exists for almost every $$t\in [t_0,t_1]$$ and \begin{eqnarray*} {{{\rm{d}}V\left({x\left(t \right)} \right)} \over {{\rm{d}}t}} \in \dot{\tilde V}(x(t)) \end{eqnarray*} holds for almost every $$t\in [t_0,t_1]$$, where \begin{eqnarray*} \dot{\tilde V}(x) = \bigcap_{\xi\in\partial V(x)} \xi^{\sf{T}} K[\,f](x). \end{eqnarray*} Remark 3.1 From the statement of this lemma, $$\dot{\tilde V}(x)$$ may be empty for some $$x$$, while, for every solution $$x(\cdot)$$, $$\dot{\tilde V}(x(t))$$ is not empty for almost all $$t\in[t_0,t_1]$$. 3.3. Stability and invariance In the following section, we consider a Lyapunov-like function $$V(x)$$ that is positive whenever velocity $$q$$ is non-zero or position $$p$$ is not at a point of consensus.
This motivates the following modified version of Corollary 2 of Fischer et al. (2013), as a generalization of the LaSalle–Yoshizawa Theorem to discontinuous systems. Lemma 3.3 Let $$f\in L_{loc}^\infty({\mathbb {R}}^{n},{\mathbb {R}}^{n})$$ and $$V: {\mathbb {R}}^{n}\to{\mathbb {R}}$$ be locally Lipschitz continuous and regular. Let $$C\in{\mathbb{R}}^{{m}\times{n}}$$ be a constant matrix. Suppose that, for every $$r>0$$, \begin{eqnarray} M_r = \sup_{|C x|\le r} |f(x)| < \infty, \end{eqnarray} (3.4) and that \begin{eqnarray} && W_1(C x) \le V(x) \le W_2(C x), \\ \end{eqnarray} (3.5) \begin{eqnarray} && d \le - W_3(C x) \quad \forall d\in \dot{\tilde V}(x) \end{eqnarray} (3.6) hold for all $$x\in{\mathbb {R}}^{n}$$, where $$W_1,\, W_2: {\mathbb {R}}^{m}\to{\mathbb {R}}$$ are continuous positive definite functions and $$W_3\,{:}\,{\mathbb {R}}^{m}\,{\to}\,{\mathbb {R}}$$ is a positive semidefinite function. Then, any Filippov solution of $$\dot x = f(x)$$ with the initial condition $$x(0) = x_0$$ for any $$x_0\in{\mathbb {R}}^{n}$$ is defined for all $$t\ge 0$$ and $$C x(t)$$ is bounded for all $$t\ge 0$$. Moreover, it holds that \begin{eqnarray} && \lim_{t\to\infty} \int_0^t W_3(C x(\tau))d\tau \quad \hbox{exists and is finite, and}\\ \end{eqnarray} (3.7) \begin{eqnarray} && \lim_{t\to\infty} W_3(C x(t)) = 0. \end{eqnarray} (3.8) Proof. The lemma is proved similarly to that of the original result of Fischer et al. (2013). Details are given in Appendix A. □ To prove the convergence of $$x(t)$$, we invoke a generalization of the invariance principle to discontinuous systems, based on the notion of $$\omega$$-limit points of trajectories and weak invariant sets. Definition 3.3 (Filippov, 1988) Let $$x(t)$$ be an arbitrary maximal solution of (3.1) starting from an arbitrary initial point $$x_0\in{\mathbb {R}}^{n}$$. 
A point $$\xi\in{\mathbb {R}}^{n}$$ is said to be an $$\omega$$-limit point of $$x$$ if there exists a sequence $$t_k\in{\mathbb {R}}$$ that satisfies $$\lim_{k\to\infty} t_k = \infty$$ and $$\lim_{k\to\infty} x(t_k) = \xi$$. The $$\omega$$-limit set $${\it{\Omega}}(x)$$ of a solution $$x(\cdot)$$ is defined as the set of all $$\omega$$-limit points of $$x(\cdot)$$. Definition 3.4 (Filippov, 1988; Bacciotti & Ceragioli, 1999) A set $$S\subset{\mathbb {R}}^{n}$$ is said to be weakly invariant for (3.1) if, for every $$y_0\in S$$, there exists a maximal solution $$y(\cdot)$$ of (3.1) with $$y(0)=y_0$$ and $$y(t)\in S$$ for all $$t$$ for which $$y(t)$$ is defined. Once the state is proved to be bounded, the following lemma, which is based on Chapter 3 of Filippov (1988), will be useful for considering the convergence of $$x(t)$$. Lemma 3.4 (Filippov, 1988; Bacciotti & Ceragioli, 1999) Let $$x(\cdot)$$ be an arbitrary solution to (3.1). Then, the $$\omega$$-limit set $${\it{\Omega}}(x)$$ is weakly invariant. If $$\{x(t):t\ge 0\}$$ is bounded, then $${\it{\Omega}}(x)\not=\emptyset$$ and $${\it{\Omega}}(x)$$ is bounded and connected. Moreover, $$\text{dist}(x(t),{\it{\Omega}}(x))\to 0$$ as $$t\to\infty$$. 4. Proof of the consensus Using the material provided in Section 3, we prove the following main result on the consensus defined in (2.2). The rest of this section is mainly devoted to the proof of Theorem 4.1: Theorem 4.1. Suppose that $$\alpha > \beta > 0$$. The consensus-based rendezvous (2.2) of system (2.1) is then achieved with protocol (2.3)–(2.4), where the solution of closed-loop system (2.5) is in the sense of Filippov for differential inclusion (3.1). In Section 4.1, we investigate Filippov set-valued map (3.1) for closed-loop system (2.5). Section 4.2 considers a candidate of a Lyapunov-like function $$V$$ for system (2.5), where we call $$V$$ ‘Lyapunov-like’ because it is only positive semidefinite.
Clarke’s generalized gradient is analysed to obtain a singleton that includes $$\dot{\tilde V}(x)$$. In addition, the equilibrium is determined in this subsection. Lemma 3.3 is then applied in Section 4.3 to show that $$p(t)$$ is bounded and $$q(t)$$ converges to zero. Section 4.4 concludes the consensus based on an application of the invariance principle, where the analysis of the $$\omega$$-limit set of the Filippov solutions of (2.5) establishes the convergence to a consensus point. In Section 4.5, we show a corollary to Theorem 4.1 for the protocol with the linear feedback of $$q$$ defined in (2.7). 4.1 Filippov set-valued map From Lemma 3.1, we have \begin{eqnarray} K[\,f](x) = \left[\begin{matrix} q \\ - \alpha K[s](p) - \beta K[v](q) \end{matrix} \right] \end{eqnarray} (4.1) for vector field (2.6), where it is immediate to see that \begin{eqnarray*} K[v](q) = \left[\begin{matrix} {\rm{Sgn}}\,{q^1} \\ {\rm{Sgn}}\,{q^2} \\ \vdots \\ {\rm{Sgn}}\,{q^n} \end{matrix} \right], \quad {\rm{Sgn}}\,{q^i} = \begin{cases} \{1\}, & q^i > 0,\\ [-1,1], & q^i = 0,\\ \{-1\}, & q^i < 0. \end{cases} \end{eqnarray*} Next, let $$e_i$$ denote the $$i$$-th standard basis vector of $${\mathbb {R}}^{n}$$ and let $$\mathcal E_+ = \{(i,j)\in\mathcal E: i<j\}$$. Then, \begin{eqnarray*} s(p) &=& \sum_{i=1}^n e_i s^i(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} e_i {\text{sgn}}(p^i - p^j) = \sum_{(i,j)\in\mathcal E} e_i {\text{sgn}}(p^i - p^j)\\ &=& \sum_{(i,j)\in\mathcal E_+} (e_i - e_j) {\text{sgn}}(p^i - p^j), \end{eqnarray*} where the last equality follows from the undirectedness of $$\mathcal E$$: both $$(i,j)$$ and $$(j,i)$$ are contained in $$\mathcal E$$ if either one is.
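This edge-wise decomposition of $$s(p)$$ is easy to verify numerically; the following Python sketch compares the node-based and edge-based forms on an illustrative graph (the graph and positions are our own choices, not taken from this article):

```python
import random

def sgn(z):
    # scalar sign function with sgn(0) = 0
    return (z > 0) - (z < 0)

def s_nodes(p, neighbors):
    # node form: s^i(p) = sum_{j in J^i} sgn(p^i - p^j)
    return [sum(sgn(p[i] - p[j]) for j in neighbors[i]) for i in range(len(p))]

def s_edges(p, edges_plus):
    # edge form: s(p) = sum_{(i,j) in E_+} (e_i - e_j) sgn(p^i - p^j)
    s = [0] * len(p)
    for i, j in edges_plus:
        sg = sgn(p[i] - p[j])
        s[i] += sg
        s[j] -= sg
    return s

# illustrative undirected connected graph on four nodes
edges_plus = [(0, 1), (1, 2), (2, 3), (0, 3)]   # E_+ with i < j
neighbors = {i: [] for i in range(4)}
for i, j in edges_plus:
    neighbors[i].append(j)
    neighbors[j].append(i)

random.seed(0)
for _ in range(100):
    p = [random.uniform(-1.0, 1.0) for _ in range(4)]
    assert s_nodes(p, neighbors) == s_edges(p, edges_plus)
```

The agreement of the two forms rests exactly on the undirectedness used above: the edge $$(j,i)$$ contributes $$\text{sgn}(p^j - p^i) = -\text{sgn}(p^i - p^j)$$ to node $$j$$.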
Using this, observe that \begin{eqnarray} K[s](p) &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ s(B_\delta(p)\setminus N) \right\}\\ &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+} (e_i - e_j){\text{sgn}}((p^i+d^i) - (p^j+d^j)): d\in B_\delta(0)\setminus N \right\}\\ &=& \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j){\text{sgn}}((p^i-p^j) + (d^i-d^j))\right.\\ && \qquad \left.+ \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j){\text{sgn}}((p^i-p^j) + (d^i-d^j)): d\in B_\delta(0)\setminus N \right\}\!, \end{eqnarray} (4.2) where $$\mathcal E_+^{\rm eq}(p)$$ and $$\mathcal E_+^{\rm neq}(p)$$ are defined as \begin{eqnarray*} \mathcal E_+^{\rm eq}(p) = \{(i,j)\in\mathcal E_+: p^i = p^j\}, \qquad \mathcal E_+^{\rm neq}(p) = \{(i,j)\in\mathcal E_+: p^i \not= p^j\}. \end{eqnarray*} Because $$B_\delta(0)$$ is monotonically increasing in $$\delta$$, one can replace $$\bigcap_{\delta>0}$$ with $$\bigcap_{0<\delta<\overline\delta}$$ for any $$\overline\delta > 0$$. Here, let us set \begin{eqnarray*} \overline\delta = \overline\delta(p) = \frac{1}{4} \min\{|p^i - p^j|:(i,j)\in\mathcal E_+^{\rm neq}(p)\}, \end{eqnarray*} which is strictly positive. Because this yields $$|d^i - d^j| < 2\overline\delta(p)$$ in (4.2), the $${\text{sgn}}(\cdot)$$ in the latter sum in (4.2) is constant independently of $$d^i - d^j$$, while $$p^i - p^j$$ vanishes in the former sum. Therefore, \begin{eqnarray} && \hbox{RHS of (4.2)}\\ && = \bigcap_{0<\delta<\overline\delta(p)} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j){\text{sgn}}(d^i-d^j): d\in B_\delta(0)\setminus N \right\}\\[2mm] && \qquad + \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j){\text{sgn}}(p^i-p^j). 
\end{eqnarray} (4.3) Obviously, from the definition of $$\mathcal E_+^{\rm neq}(p)$$, the second term of (4.3) is $$s(p)$$. Let $$S^{\rm eq}(p)$$ denote the first term of (4.3); if $$\mathcal E^{\rm eq}_+(p) = \emptyset$$, let $$S^{\rm eq}(p) = \{0\}$$. Then, \begin{eqnarray} S^{\rm eq}(p) &=& \bigcap_{0<\delta<\overline\delta(p)} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in B_\delta(0)\setminus N \right\} \\ &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus N \right\} \\ \end{eqnarray} (4.4) \begin{eqnarray} &=& {\mathop{\rm co}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n},\; d^i\not=d^j\hbox{ if }i\not=j\right\}\!. \end{eqnarray} (4.5) To obtain (4.4), we replaced $$B_\delta(0)$$ with $${\mathbb {R}}^{n}$$ since $$\text{sgn}(\cdot)$$ is homogeneous of order zero. Hence the intersection with respect to $$\delta$$ in (4.4) can be removed. The last equality in (4.5) is obtained in Appendix B, where the set in (4.5) is a convex hull of a finite number of points in $${\mathbb {R}}^{n}$$ and hence $$S^{\rm eq}(p)$$ is compact and convex. We note that $$S^{\rm eq}(p)$$ is symmetric with respect to the origin, i.e. if $$s_0\in S^{\rm eq}(p)$$, $$-s_0\in S^{\rm eq}(p)$$. In fact, from Carathéodory’s Theorem (e.g. Rockafellar, 1970), if $$s_0\in S^{\rm eq}(p)$$, there exist $$a_k\in{\mathbb {R}}$$ and $$d_k\in{\mathbb {R}}^{n}$$, $$k=1,2,\ldots,n+1$$, such that \begin{eqnarray*} s_0 = \sum_{k=1}^{n+1} a_k \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i_k-d^j_k), \quad \sum_{k=1}^{n+1} a_k = 1, \quad a_k\ge 0,\quad k=1,2,\ldots,n+1 \end{eqnarray*} and $$d^i_k\not=d^j_k$$ if $$i\not=j$$.
Obviously, $$ -s_0 = \sum_{k=1}^{n+1} a_k \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\{\text{sgn}((-d^i_k)-(-d^j_k))\} \in S^{\rm eq}(p)$$, simply because $$-d_k\in{\mathbb {R}}^{n}$$ and $$-d_k^i\not=-d_k^j$$ if $$i\not=j$$. The results so far are summarized in the following lemma. Lemma 4.1 It holds that \begin{eqnarray} K[s](p) = S^{\rm eq}(p) + s(p), \end{eqnarray} (4.6) where $$S^{\rm eq}(p)$$ is represented as in (4.5). Moreover, $$S^{\rm eq}(p)$$ is compact, convex and symmetric with respect to the origin. 4.2 Candidate of a Lyapunov-like function As a candidate of a Lyapunov-like function for system (2.5), we consider \begin{eqnarray} V(x) = \alpha V_1(p) + V_2(q), \end{eqnarray} (4.7) where \begin{eqnarray*} V_1(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} |p^i - p^j| = 2 \sum_{(i,j)\in\mathcal E_+} |p^i - p^j|, \qquad V_2(q) = \frac{1}{2} \sum_{i=1}^{n} |q^i|^2. \end{eqnarray*} These $$V_1$$, $$V_2$$ and $$V$$ are convex and hence they are regular. Let us apply (3.3) and Lemma 3.1 to $$V$$ to get \begin{eqnarray*} \partial V(x)= K\left[{{{\partial V} \over {\partial x}}} \right]\left(x \right) = \left[\begin{matrix} \alpha K\left[{{{\partial {V_1}} \over {\partial p}}} \right](p) \\ q \end{matrix} \right] = \left[\begin{matrix} \alpha \,\partial {V_1}(p) \\ q \end{matrix} \right], \end{eqnarray*} where the second equality is due to the fact that $$\partial V/\partial x$$ is the sum of a discontinuous (but locally essentially bounded) function $$\partial V_1/\partial p$$ and a continuous function $$\partial V_2/\partial q = q$$ (Paden & Sastry, 1997). If $$V_1$$ is differentiable at $$p$$, \begin{eqnarray} \left[{{{\partial {V_1}} \over {\partial p}}} \right]\left(p \right) &=&\left[\begin{matrix} \vdots \\ \sum_{j \in \mathcal J^i} {\text{sgn}} (p^i - p^j) \\ \vdots \end{matrix} \right] = \sum_{i=1}^n e_i \sum_{j\in\mathcal J^i} \text{sgn}(p^i - p^j)\\ &=& \sum_{(i,j)\in\mathcal E} e_i \text{sgn}(p^i - p^j) = \sum_{(i,j)\in\mathcal E_+} (e_i - e_j)\text{sgn}(p^i - p^j).
\end{eqnarray} (4.8) Define $$N_V$$ as the set of $$d\in{\mathbb {R}}^{n}$$ such that $$V_1$$ is not differentiable at $$p+d$$ and observe that \begin{align} & K\left[{{{\partial {V_1}} \over {\partial p}}} \right]\left(p \right)\\ &\quad{} = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \left[{{{\partial {V_1}} \over {\partial p}}} \right]\left(w \right): w\in B_\delta(p)\setminus N,\; V_1 \hbox{ is differentiable at } w \right\}\\ &\quad{} = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+} (e_i - e_j)\text{sgn}((p^i+d^i) - (p^j+d^j)): d\in (B_\delta(0)\setminus N_V) \setminus N \right\}\\ &\quad{} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus N_V)\setminus N \right\}\\ &\qquad + \sum_{(i,j)\in\mathcal E_+^{\rm neq}(p)} (e_i - e_j)\text{sgn}(p^i-p^j), \end{align} (4.9) which is obtained in a similar way to the set in Lemma 4.1. Appendix C fills the gap between (4.9) and \begin{eqnarray} K\left[{{{\partial {V_1}} \over {\partial p}}} \right]\left(p \right) = S^{\rm eq}(p) + s(p), \end{eqnarray} (4.10) where $$S^{\rm eq}(p)$$ is characterized as in Lemma 4.1. We can now state the following intermediate result of stability. Lemma 4.2 For $$V$$ defined as (4.7), it holds that \begin{eqnarray} \partial V_1(p) = S^{\rm eq}(p) + s(p), \qquad \partial V(x) =\left[\begin{matrix} \alpha \,\partial {V_1}(p) \\ q \end{matrix} \right] , \qquad \dot{\tilde V}(x) \subset \{-\beta |q|_1\}. \end{eqnarray} (4.11) Proof. The first and the second equalities have been shown above.
Let us consider $$\dot{\tilde V}$$: \begin{eqnarray*} \dot{\tilde V}(x) &=& \bigcap_{\hat\xi\in\partial V(x)} \hat \xi^{\sf{T}} K[\,f](x)\\ &=& \bigcap_{\xi\in S^{\rm eq}(p)} \left[{\matrix{ {\alpha (\xi + s(p))} \cr q \cr } } \right]^ {\sf{T}} K[\,f](x)\\ &=& \bigcap_{\xi\in S^{\rm eq}(p)} \left[{\matrix{ {\alpha (\xi + s(p))} \cr q \cr } } \right]^ {\sf{T}} \left[{\matrix{ q \cr { - \alpha ({S^{{\rm{eq}}}}(p) + s(p)) - \beta K[v](q)} \cr } } \right] \\ &\subset& \alpha q^{\sf{T}} \bigcap_{\xi\in S^{\rm eq}(p)} (\xi - S^{\rm eq}(p)) - \beta |q|_1\\ &=& \alpha q^{\sf{T}} \bigcap_{\xi\in S^{\rm eq}(p)} \bigcup_{\eta\in S^{\rm eq}(p)} \{\xi - \eta\} - \beta |q|_1 = \{-\beta |q|_1\}, \end{eqnarray*} where $$q^T K[v](q) = |q|_1$$ is obvious from Lemma 3.1 and the last equality is obtained via the lemma below, which exploits the fact that $$S^{\rm eq}(p)$$ is bounded and symmetric with respect to the origin. Thus, $$\dot{\tilde V}$$ has at most a single value $$-\beta|q|_1$$. □ Lemma 4.3 Suppose that a set $$S\subset{\mathbb {R}}^{n}$$ is not empty, symmetric with respect to the origin and bounded. Then, \begin{eqnarray*} S_1 = \bigcap_{\xi\in S} \bigcup_{\eta\in S} \{\xi - \eta\} = \{0\}. \end{eqnarray*} Proof. By definition, $$x\in S_1$$$$\iff$$$$(\forall\xi\in S)\,$$$$(\exists\eta\in S)\,$$$$x = \xi-\eta$$$$\iff$$$$(\forall\xi\in S)\,$$$$\xi - x\in S$$. Moreover, note that $$S$$ contains the origin because it is not empty. Thus, it is obvious that $$0\in S_1$$ from the symmetry of $$S$$ with respect to the origin. Next, assume that $$x\not=0$$ belongs to $$S_1$$. Then, as stated above, \begin{eqnarray} \forall\xi\in S\quad \xi - x\in S. \end{eqnarray} (4.12) Since $$0\in S$$, choosing $$\xi = 0$$ in (4.12) yields $$-x\in S$$. Setting $$\xi = -x\in S$$ in (4.12), we have $$-2x\in S$$. This can be repeated to finally obtain $$-k x\in S$$ for all positive integers $$k$$. Because $$x\not=0$$, this contradicts the assumption that $$S$$ is bounded. 
Therefore $$S_1 = \{0\}$$. □ In the following lemma, we determine the equilibrium points of differential inclusion (3.1) for system (2.5), where we say that $$x$$ is an equilibrium of differential inclusion (3.1) if $$0\in K[\,f](x)$$. Lemma 4.4. Let $$p,q\in{\mathbb {R}}^{n}$$. (i) The set-valued map $$K[s](p)$$ contains $$0$$ if and only if $$p^1=p^2=\cdots=p^n$$. (ii) If $$0\not\in\partial V_1(p)$$, it holds that $$\min\{|y|: y\in \{K[s](p)\}^i\} \ge 1$$ for some $$i$$ with $$1\le i\le n$$, where $$\{\cdot\}^i$$ stands for the $$i$$-th row of the set-valued map. (iii) Assume $$\alpha > \beta > 0$$. Then, $$x = \left[{p^{\sf{T}} \quad q^{\sf{T}}}\right]^{\sf{T}}$$ is an equilibrium point of differential inclusion (3.1) for system (2.5) if and only if $$p^1=p^2=\cdots=p^n$$ and $$q=0$$. Proof. (i) By (3.3) and Lemmas 4.1 and 4.2, $$K[s](p) = S^{\rm eq}(p) + s(p) = \partial V_1(p)$$. Because $$V_1$$ is convex, $$\partial V_1$$ further coincides with the subdifferential of $$V_1$$ (Clarke, 1983): \begin{eqnarray*} \partial V_1(p) = \{w\in{\mathbb {R}}^{n}: w^{\sf{T}}(x-p) + V_1(p) \le V_1(x)\;\; \forall x\in{\mathbb {R}}^{n}\}. \end{eqnarray*} Hence, $$0\in \partial V_1(p)$$ if and only if $$V_1(p)$$ is the minimum of $$V_1$$. From the definition of $$V_1$$ and the assumption that graph $$\mathcal G$$ is connected, the minimum of $$V_1$$ is zero and attained exactly at the points of consensus, i.e. at $$p$$ with $$p^1=p^2=\cdots=p^n$$. (ii) In view of $$\partial V_1/\partial p$$ at differentiable points, as shown in (4.8), and the definition of Clarke’s generalized gradient (3.2), each row $$\{K[s](p)\}^i = \{\partial V_1(p)\}^i$$ is a closed interval $$[m^i_1,m^i_2]$$ with integer endpoints $$m^i_1$$ and $$m^i_2$$. Therefore, if $$0\not\in\partial V_1(p)$$, at least for one $$i$$, it holds that $$0\not\in [m^i_1,m^i_2]$$, which implies that $$m^i_1\ge 1$$ or $$m^i_2\le -1$$. This proves the claim. (iii) The sufficiency is obvious. To prove the necessity, suppose that $$0\in K[\,f](x)$$.
From (4.1), we have $$q=0$$ and $$0\in -\alpha K[s](p) - \beta K[v](q)$$. The latter can be rewritten as $$0\in K[s](p) + (\beta/\alpha)[-1,1]^n$$. To conclude $$0\in K[s](p)$$, assume that $$0\not\in K[s](p)$$. Then, from (ii), there exists an $$i$$ with $$1\le i\le n$$ such that $$\min\{|y|: y\in \{K[s](p)\}^i\} \ge 1$$. This contradicts $$0\in K[s](p) + (\beta/\alpha)[-1,1]^n$$ for the $$i$$-th row because $$0 < \beta/\alpha < 1$$. Hence $$0\in K[s](p)$$. From (i), it holds that $$p^1=p^2=\cdots=p^n$$. □ 4.3. Partial proof of stability via Lemma 3.3 We apply Lemma 3.3 to system (2.5). Let $$m_0$$ be the number of undirected edges of graph $$\mathcal G$$; $$m_0 = |\mathcal E_+|$$. Let $$E$$ be a matrix whose row vectors consist of the $$m_0$$ distinct vectors $$(e_i - e_j)^{\sf{T}}$$ with $$(i,j)\in\mathcal E_+$$. Define \begin{eqnarray*} C = \left[{\matrix{ E & 0 \cr 0 & {{I_n}} \cr } } \right],\quad \quad y = Cx = \left[{\matrix{ {Ep} \cr q \cr } } \right]. \end{eqnarray*} Then, with $$y = Cx\in{\mathbb{R}}^{m_0+n}$$ defined above, \begin{eqnarray*} V(x) &=& \alpha \sum_{i=1}^n \sum_{j\in\mathcal J^i} |p^i - p^j| + \frac{1}{2} \sum_{i=1}^{n} |q^i|^2 = 2 \alpha \sum_{(i,j)\in\mathcal E_+} |p^i - p^j| + \frac{1}{2} \sum_{i=1}^{n} |q^i|^2\\ &=& 2 \alpha \sum_{k=1}^{m_0} |y^k| + \frac{1}{2} \sum_{k=m_0+1}^{m_0+n} |y^k|^2. \end{eqnarray*} Thus, setting positive definite functions as \begin{eqnarray*} W_1(y) = W_2(y) = 2 \alpha \sum_{k=1}^{m_0} |y^k| + \frac{1}{2} \sum_{k=m_0+1}^{m_0+n} |y^k|^2, \end{eqnarray*} we have (3.5). Further, (3.6) holds for \begin{eqnarray} W_3(y) = W_3(C x) = \beta |q|_1. \end{eqnarray} (4.13) Lastly, consider (3.4). Because $$|q| \le r$$ if $$|C x|\le r$$, \begin{eqnarray} |f(x)| = \left|\, \left[{\matrix{ q \cr { - \alpha s(p) - \beta v(q)} \cr } } \right] \,\right| \le |q| + \alpha |s(p)| + \beta |v(q)| \le r + \alpha n(n-1) + \beta n < \infty.
\end{eqnarray} (4.14) Thus, the assumptions of Lemma 3.3 are confirmed for the closed-loop system. Hence, $$E p(t)$$ and $$q(t)$$ are bounded and $$q(t)\to 0$$ as $$t\to\infty$$ for every solution $$x(t) = \left[{p(t)^{\sf{T}} \quad q(t)^{\sf{T}}}\right]^{\sf{T}}$$ of (2.5). Moreover, the boundedness of the average position is derived from Lemma 3.3. Define \begin{eqnarray*} \bar p(t) = \frac{1}{n} \sum_{i=1}^n p^i(t), \quad \bar q(t) = \frac{1}{n} \sum_{i=1}^n q^i(t), \end{eqnarray*} which are the averages of $$p(t)$$ and $$q(t)$$, respectively. Let $${\boldsymbol{1}} = \left[{1 \quad 1 \quad \cdots \quad 1}\right]^{\sf{T}}\in{\mathbb {R}}^{n}$$ and observe that \begin{eqnarray*} \dot{\bar p}(t) \in \frac{1}{n}\left[{\boldsymbol{1}^{\sf{T}} \quad 0}\right] K[\,f](x(t))= K\left[\frac{1}{n}\left[{\boldsymbol{1}^{\sf{T}} \quad 0}\right] f\right](x(t))= \{\bar q(t)\}, \end{eqnarray*} i.e. $$\dot{\bar p}(t) = \bar q(t)$$. Because (3.7) holds for $$W_3(C x) = \beta|q|_1$$, we see that $$\bar p(t)$$ is bounded as \begin{eqnarray*} |\bar p(t) - \bar p(0)| = \left|\int_0^t \bar q(\tau)d\tau\right| \le \int_0^t |\bar q(\tau)| d\tau \le \frac{1}{n} \int_0^t |q(\tau)|_1 d\tau = \frac{1}{\beta n} \int_0^t W_3(C x(\tau)) d\tau < \infty. \end{eqnarray*} Because graph $$\mathcal G$$ is connected, the boundedness of $$E p(t)$$ shown above implies that the distance between every two agents is bounded. Combining this with the boundedness of $$\bar p(t)$$, we see that $$p(t)$$ is bounded. The results are summarized below. Lemma 4.5 Position $$p(t)$$ is bounded for $$t\ge 0$$ and velocity $$q(t)$$ tends to $$0$$ as $$t\to\infty$$. 4.4. Proof of the consensus Based on the results shown so far in the previous subsections, we finish the proof of Theorem 4.1 by showing the first condition in (2.2). Proof of Theorem 4.1. (i) Let $$x_0\in{\mathbb{R}}^{2n}$$ be arbitrary and let $$x(t)$$ be any Filippov solution to (2.5) with $$x(0) = x_0$$. From Lemma 4.5, $$x(t)$$ is bounded for all $$t\ge 0$$.
Consider the $$\omega$$-limit set $${\it{\Omega}}(x)$$. From Lemma 3.4, $${\it{\Omega}}(x)$$ is not empty and is bounded, connected and weakly invariant. This implies with the boundedness of $$x(t)$$ that there exists a maximal Filippov solution $$z(t)$$ of (2.5) that lies in $${\it{\Omega}}(x)$$ for all $$t\ge 0$$. Furthermore, it holds that $$\lim_{t\to\infty} \text{dist}(x(t),{\it{\Omega}}(x)) = 0$$. (ii) Let $$\left[{p^{\sf{T}} \quad q^{\sf{T}}}\right]^{\sf{T}}\in{\it{\Omega}}(x)$$, where $$q(t)\to 0$$ ($$t\to\infty$$) from Lemma 4.5. Hence, $$q = 0$$. Moreover, it holds that $$V_1(p)$$ is constant in $${\it{\Omega}}(x)$$. This can be proved as in the proof of Theorem 3 of Bacciotti & Ceragioli (1999): recall that $$V(x(t))$$ is non-increasing and bounded below. Hence, there exists a scalar $$c_0$$ such that \begin{eqnarray} \lim_{t\to\infty} V(x(t)) = c_0\ge 0. \end{eqnarray} (4.15) Let $$\xi\in{\it{\Omega}}(x)$$. There then exists a sequence $$t_k$$ satisfying \begin{eqnarray*} 0\le t_1<t_2<t_3<\cdots, \quad \lim_{k\to\infty} t_k = \infty, \quad\lim_{k\to\infty} x(t_k) = \xi. \end{eqnarray*} Combining this with (4.15) and the continuity of $$V$$, we see $$ V(\xi) = V(\lim_{k\to\infty} x(t_k)) = \lim_{k\to\infty} V(x(t_k)) = c_0 $$. Thus, $$V(\xi) = c_0$$ for all $$\xi\in{\it{\Omega}}(x)$$. Because the lower $$n$$ components are zero in set $${\it{\Omega}}(x)$$, we have, for $$c = c_0/\alpha \ge 0$$, \begin{eqnarray} {\it{\Omega}}(x) \subset \left\{\left[\begin{matrix}p\\ 0\end{matrix}\right]\in{\mathbb{R}}^{2n}: V_1(p) = c\right\}\!. \end{eqnarray} (4.16) (iii) Here, we determine $$c$$ in (4.16). Let $$z(t) = \left[{p_z(t)^{\sf{T}} \quad q_z(t)^{\sf{T}}}\right]^{\sf{T}}$$ be an arbitrary solution of (2.5) such that the whole trajectory of $$z(t)$$ is included in $${\it{\Omega}}(x)$$. It holds from (4.16) that $$q_z(t) = 0$$ and $$V_1(p_z(t)) = c$$ for all $$t\ge 0$$. Because $$\dot p_z(t) = q_z(t)$$ for almost all $$t\ge 0$$ and $$q_z(t)=0$$ for all $$t\ge 0$$, $$p_z(t) = p_z(0) = p_0$$ for all $$t\ge 0$$. Moreover, it holds that $$\dot q_z(t) = 0\in - \alpha K[s](p_0) - \beta K[v](0)$$.
As in the proof of (iii) of Lemma 4.4, we can see that $$p_0^1=p_0^2=\cdots=p_0^n$$, i.e. $$p_0 = a {\bf 1}$$ for some $$a\in{\mathbb {R}}$$, and hence $$V_1(p_0) = 0$$. Because $$V_1(p)$$ is constant in $${\it{\Omega}}(x)$$, as shown in (4.16), and $$\left[{p_0^{\sf{T}} \quad 0^{\sf{T}}}\right]^{\sf{T}}$$ belongs to $${\it{\Omega}}(x)$$, we have $$c = V_1(p_0) = 0$$. This result of $$c=0$$ allows us to rewrite the right-hand side of (4.16) as \begin{equation} {\it{\Omega}}(x) \subset \left\{\left[\begin{matrix}a {\bf 1}\\ 0\\ \end{matrix}\right]\in{\mathbb{R}}^{2n}: a\in{\mathbb {R}}\right\}\!. \end{equation} (4.17) (iv) Lastly, since $$\lim_{t\to\infty} \text{dist}(x(t),{\it{\Omega}}(x)) = 0$$, we conclude that $$\lim_{t\to\infty} p(t) = a{\bf 1}$$ for some $$a\in{\mathbb {R}}$$. This completes the proof. □ 4.5. Average consensus with linear velocity feedback We consider the case where $$v(q)$$ is set as in (2.7). Then, $$K[\,f](x)$$ in (4.1) becomes \begin{align*} K[\,f](x) = \left[\begin{matrix} q\\ -\alpha K[s](p)-\beta q\\ \end{matrix}\right] \end{align*} and (4.11) holds for $$|q|^2$$ instead of $$|q|_1$$. Further, $$W_3(C x) = \beta|q|_1$$ is replaced with $$W_3(C x) = \beta|q|^2$$ and an upper bound of $$|f(x)|$$ is given as \begin{eqnarray*} |f(x)| = \left|\,\left[\begin{matrix} q\\[3mm] -\alpha s(p)-\beta q\\ \end{matrix}\right]\,\right| \le \left|\,\left[\begin{matrix} q\\[3mm] -\beta q\\ \end{matrix}\right]\,\right| + \alpha |s(p)| \le \sqrt{1+\beta^2} r + \alpha n(n-1) < \infty. \end{eqnarray*} The proof of item (iii) of Lemma 4.4 is easier with $$v(q) = q$$, by which $$0\in -\alpha K[s](p) - \beta K[v](q)$$ reduces to $$0\in K[s](p)$$ when $$q=0$$, and here we do not need the assumption $$0<\beta/\alpha<1$$. The boundedness of the average position shown in Section 4.3 is also valid, with more explicit expressions as follows.
Because graph $$\mathcal G$$ is undirected, \begin{eqnarray*} {\bf 1}^{{\sf{T}}} s(p) = \sum_{i=1}^n \sum_{j\in\mathcal J^i} \text{sgn}(p^i-p^j) = \sum_{(i,j)\in\mathcal E} \text{sgn}(p^i-p^j) = 0, \end{eqnarray*} which yields with Lemma 3.1 that \begin{eqnarray*} \dot{\bar q}(t) \in \frac{1}{n}\left[\begin{matrix}{0 \quad {\bf 1}^{{\sf{T}}}}\end{matrix}\right] K[\,f](x(t)) = K\left[\frac{1}{n} {\bf 1}^{{\sf{T}}} (-\alpha s(p)-\beta q)\right](x(t)) = \{-\beta \bar q(t)\}. \end{eqnarray*} Therefore, $$\dot{\bar q}(t) = - \beta \bar q(t)$$, and hence \begin{eqnarray*} \bar q(t) = e^{-\beta t} \bar q(0), \quad \bar p(t) = \bar p(0) + \frac{1}{\beta}(1 - e^{-\beta t}) \bar q(0). \end{eqnarray*} Thus, the average position $$\bar p(t)$$ is bounded, which implies that $$x(t)$$ is bounded. Moreover, if the initial average velocity $$\bar q(0)$$ is zero, the average position $$\bar p(t)$$ is equal to the initial average position $$\bar p(0)$$ for all $$t\ge 0$$, which means that protocol (2.3) with $$v(q) = q$$ attains an average consensus. The results are summarized below. Corollary 4.1 Suppose that $$\alpha,\beta>0$$. Then, the consensus-based rendezvous of the agents in the sense of (2.2) is achieved in system (2.1) by a protocol with control input $$u^i = - \alpha s^i(p) - \beta q^i$$, $$i=1,2,\ldots,n$$. Moreover, this protocol realizes the average consensus of the position: $$\lim_{t\to\infty} p^i(t) = \bar p(0)$$, $$i=1,2,\ldots,n$$, if the average of the initial velocities of the agents is zero. 5. Numerical examples Let us consider the graph shown in Fig. 1 with six agents. The graph is undirected and connected. The gains are set as $$\alpha = 1$$ and $$\beta = 0.5$$. The response under input (2.3) with $$v^i(q) = \text{sgn}\,q^i$$ is shown in Fig. 2, where the initial values are as follows: Fig. 1. Graph for the example of six agents. Fig. 2.
Position $$p(t)$$ (left) and velocity $$q(t)$$ (right) of agents with $$\text{sgn}\,q_i$$ feedback. \begin{eqnarray*} \begin{array}{llllll} p_1(0) = 3, & p_2(0) = 4, & p_3(0) = -3, & p_4(0) = 1, & p_5(0) = 0, & p_6(0) = 1,\\ q_1(0) = 3, & q_2(0) = -2, & q_3(0) = 1, & q_4(0) = -5, & q_5(0) = -1, & q_6(0) = 4, \end{array} \end{eqnarray*} for which the averages of the initial position and velocity are $$\bar p(0) = 1$$ and $$\bar q(0) = 0$$, respectively. We can see in Fig. 2 that consensus (2.2) is attained. Figure 3 shows the responses with the same settings, but with linear $$v^i(q)$$ as in (2.7). In addition to (2.2), the positions converge to the average of the initial values. Fig. 3. Position $$p(t)$$ (left) and velocity $$q(t)$$ (right) of agents with linear $$q_i$$ feedback. We demonstrate that there exists a problem instance that indeed needs the condition $$\alpha>\beta$$ for consensus. Consider graph $$\mathcal G=(\mathcal A,\mathcal E)$$ with $$\mathcal A=\{1,2\}$$, $$\mathcal E=\{(1,2),(2,1)\}$$, which is the simplest connected graph. Let us set the gains as $$\alpha = 1$$ and $$\beta = 0.5$$ or $$\beta = 2$$. The responses of the positions are shown in Fig. 4, where the initial values are set as $$p_1(0) = 2$$, $$p_2(0) = 1$$, $$q_1(0) = 2$$, $$q_2(0) = -2$$. While the distance between the positions converges to zero with $$\beta = 0.5$$, the agents stall and do not attain consensus with $$\beta = 2$$. In fact, the equilibria for $$\beta = 2$$ are given by $$q^1=q^2=0$$ and $$\text{Sgn}(p^1-p^2) \cap [-2,2]\not=\emptyset$$. The latter condition is satisfied by any $$p^1$$ and $$p^2$$, and hence consensus is not guaranteed, as observed in Fig. 4. Fig. 4.
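The two-agent demonstration above can be reproduced with a simple forward-Euler integration of the closed loop $$\dot p^i = q^i$$, $$\dot q^i = -\alpha\,\text{sgn}(p^i - p^j) - \beta\,\text{sgn}\,q^i$$. The following sketch uses illustrative discretization parameters (step size and horizon are our choices, not from this article), and a single-valued $$\text{sgn}$$ with $$\text{sgn}(0)=0$$ as a stand-in for the set-valued $$\text{Sgn}$$:

```python
def sgn(z):
    # single-valued sign, sgn(0) = 0 (Euler stand-in for the set-valued Sgn)
    return (z > 0) - (z < 0)

def simulate_two_agents(alpha, beta, T=40.0, dt=1e-3):
    # initial values of the example: p(0) = (2, 1), q(0) = (2, -2)
    p = [2.0, 1.0]
    q = [2.0, -2.0]
    for _ in range(int(T / dt)):
        # protocol (2.3)-(2.4) on the single edge (1, 2)
        u = [-alpha * sgn(p[0] - p[1]) - beta * sgn(q[0]),
             -alpha * sgn(p[1] - p[0]) - beta * sgn(q[1])]
        p = [p[k] + dt * q[k] for k in range(2)]
        q = [q[k] + dt * u[k] for k in range(2)]
    return abs(p[0] - p[1])

gap_consensus = simulate_two_agents(alpha=1.0, beta=0.5)  # alpha > beta
gap_stall = simulate_two_agents(alpha=1.0, beta=2.0)      # alpha < beta
```

With $$\beta=0.5$$ the inter-agent distance decays towards zero (up to small chattering caused by the discretization), whereas with $$\beta=2$$ the agents brake to rest at distinct positions, consistent with Fig. 4.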
Position $$p(t)$$ with $$\beta=0.5$$ (left) and $$\beta = 2$$ (right). 6. Conclusion In this article, we presented a protocol that attains a consensus-based rendezvous of agents that have double integrator dynamics, where only the sign of the relative positions and the sign of the individual velocity are needed for the protocol. We analysed the closed-loop system via Lyapunov methods extended to the Filippov solutions of differential inclusions. In particular, to prove the consensus, we utilized the symmetry of the Filippov set-valued map of the closed-loop system and analysed the $$\omega$$-limit set of the trajectories. Numerical examples showed that the consensus of the positions is achieved by the proposed protocol. Appendix A. Proof of Lemma 3.3 As stated in Section 3.1, for any $$x_0\in{\mathbb {R}}^{n}$$, there exists at least one Filippov solution $$x(t)$$ with $$x(0) = x_0$$ for $$t\in[0,t_1]$$ for some $$t_1>0$$. Then, \begin{eqnarray} V(x(t)) - V(x_0) = \int_0^{t} \frac{{\rm d}V(x(\tau))}{{\rm d}\tau} {\rm d}\tau \le - \int_0^{t} W_3(C x(\tau)){\rm d}\tau \le 0 \end{eqnarray} (A.1) holds for all $$t\in [0,t_1]$$ with $$W_1(C x(t)) \le V(x(t)) \le V(x_0)$$. Because $$W_1$$ is positive definite, $$|C x(t)| \le r$$ holds for some $$r>0$$. From (3.4), $$|f(x(t))| \le M_r$$ is valid as far as $$x(t)$$ exists. This implies that $$x(t)$$ is defined for all $$t\ge 0$$ and $$C x(t)$$ is bounded. Now, from (A.1), we have $$\int_0^t W_3(C x(\tau)){\rm d}\tau \le V(x_0) < \infty$$ for all $$t\ge 0$$. Because $$W_3$$ is positive semidefinite, the LHS is non-decreasing in $$t$$. This means that $$\lim_{t\to\infty} \int_0^{t} W_3(C x(\tau)){\rm d}\tau$$ exists and is finite, as claimed in (3.7). Moreover, $$C x(t)$$ is absolutely continuous with respect to $$t$$ and bounded, and $$W_3$$ is continuous.
Hence, $$W_3(C x(t))$$ is uniformly continuous in $$t$$. From Barbalat's Lemma (e.g. Khalil, 1996), it follows that $$W_3(C x(t))\to 0$$ as $$t\to\infty$$. Thus, (3.8) is proved.

Appendix B. Proof of (4.5)

Consider the set in (4.4): \begin{eqnarray*} S_{(4.4)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{\sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus N \right\}\!. \end{eqnarray*} Let $$D_{ij} = \{d\in{\mathbb {R}}^{n}: d^i = d^j\}$$, which is an $$(n-1)$$-dimensional subspace of $${\mathbb {R}}^{n}$$; hence $$\mu(D_{ij})=0$$ and $$D_{ij}$$ is closed. There are $$n(n-1)/2$$ such subspaces, and we define \begin{eqnarray} D = \bigcup_{1\le i<j\le n} D_{ij}. \end{eqnarray} (B.1) Then, $$D$$ is a closed null set. Because $$D$$ is null, removing it does not affect the intersection over all null sets, so \begin{eqnarray} S_{(4.4)} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N), \qquad S(N) = \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus D)\setminus N \right\}\!. \end{eqnarray} (B.2) Next, consider the following set, which includes $$S(N)$$: \begin{eqnarray*} S_0 = \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in{\mathbb {R}}^{n}\setminus D \right\}\!. \end{eqnarray*} To show the opposite inclusion, suppose that $$s_0\in S_0$$. Then, there exists a $$d_s\in{\mathbb {R}}^{n}\setminus D$$ such that \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d_s^i-d_s^j). \end{eqnarray*} Recall that $${\mathbb {R}}^{n}\setminus D$$ is open, so the sign pattern is locally constant: \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j) \quad \forall d\in B_{r_s}(d_s) \end{eqnarray*} for some $$r_s>0$$. Because a nonempty open set has positive measure, we have $$B_{r_s}(d_s)\setminus N \not=\emptyset$$ for any null set $$N$$.
Hence, there exists a $$d_N\in B_{r_s}(d_s)\setminus N$$ such that \begin{eqnarray*} s_0 = \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d_N^i-d_N^j), \end{eqnarray*} which implies that $$s_0\in S(N)$$. Therefore, $$S_0 = S(N)$$ holds for every null set $$N$$. This yields \begin{eqnarray} S_{(4.4)} = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N) = \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S_0 = \overline{\mathop{\rm co}} S_0 = {\mathop{\rm co}} S_0, \end{eqnarray} (B.3) where the last equality holds because $$S_0$$ is a finite set, whose convex hull is closed, and $${\mathop{\rm co}} S_0$$ is the set that appears in (4.5).

Appendix C. Proof of (4.9)

Here, we consider \begin{eqnarray*} S_{(4.9)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus N_V)\setminus N \right\}\!, \end{eqnarray*} where the only difference from $$S_{(4.4)}$$ is that $$N_V$$ is also removed. With the null set $$D$$ defined in (B.1), we likewise have \begin{eqnarray*} S_{(4.9)} &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}} \left\{ \sum_{(i,j)\in\mathcal E_+^{\rm eq}(p)} (e_i - e_j)\text{sgn}(d^i-d^j): d\in({\mathbb {R}}^{n}\setminus D)\setminus N \right\} \nonumber\\ &=& \bigcap_{\mu(N)=0} \overline{{\mathop{\rm co}}}\; S(N) = {\mathop{\rm co}} S_0, \end{eqnarray*} where the last two equalities are shown in (B.2) and (B.3), respectively.

References

Bacciotti, A. & Ceragioli, F. (1999) Stability and stabilization of discontinuous systems and non-smooth Lyapunov functions. ESAIM Control Optim. Calc. Var., 4, 361-376.

Ceragioli, F., De Persis, C. & Frasca, P. (2011) Discontinuities and hysteresis in quantized average consensus. Automatica, 47, 1916-1928.

Chen, G., Lewis, F. L. & Xie, L. (2011) Finite-time distributed consensus via binary control protocols. Automatica, 47, 1962-1968.

Clarke, F. H. (1983) Optimization and Nonsmooth Analysis. New York: Wiley.

Cortés, J. (2006) Finite-time convergent gradient flows with applications to network consensus. Automatica, 42, 1993-2000.

Dimarogonas, D. V. & Johansson, K. H. (2010) Stability analysis for multi-agent systems using the incidence matrix: quantized communication and formation control. Automatica, 46, 695-700.

Filippov, A. F. (1988) Differential Equations with Discontinuous Righthand Sides. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Fischer, N., Kamalapurkar, R. & Dixon, W. E. (2013) LaSalle-Yoshizawa corollaries for nonsmooth systems. IEEE Trans. Automat. Control, 58, 2333-2338.

Frasca, P. (2012) Continuous-time quantized consensus: convergence of Krasovskii solutions. Syst. Control Lett., 61, 273-278.

Guo, M. & Dimarogonas, D. V. (2013) Consensus with quantized relative state measurements. Automatica, 46, 2531-2537.

Khalil, H. (1996) Nonlinear Systems, 2nd edn. Upper Saddle River, NJ: Prentice Hall.

Li, S., Du, H. & Lin, X. (2011) Finite-time consensus algorithm for multi-agent systems with double-integrator dynamics. Automatica, 47, 1706-1712.

Paden, B. & Sastry, S. (1987) A calculus for computing Filippov's differential inclusion with application to the variable structure control of robot manipulators. IEEE Trans. Circuits Syst., 34, 73-81.

Ren, W. & Beard, R. W. (2008) Distributed Consensus in Multi-vehicle Cooperative Control: Theory and Applications. London: Springer.

Rockafellar, R. T. (1970) Convex Analysis. Princeton: Princeton University Press.

Shevitz, D. & Paden, B. (1994) Lyapunov stability theory of nonsmooth systems. IEEE Trans. Automat. Control, 39, 1910-1914.

Xiao, F., Wang, L., Chen, J. & Gao, Y. (2009) Finite-time formation control for multi-agent systems. Automatica, 45, 2605-2611.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Mathematical Control and Information, Oxford University Press. Published: Aug 10, 2017.
