IMA Journal of Mathematical Control and Information
Volume: Advance Article, May 13, 2017

18 pages


- Publisher
- Oxford University Press
- Copyright
- © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
- ISSN
- 0265-0754
- eISSN
- 1471-6887
- DOI
- 10.1093/imamci/dnx026

Abstract

In this article, we deal with a multiple source seeking problem via distributed control of swarm robots. Source seeking is the task of finding sources of hazardous events, which can be effectively carried out by using swarm robots. However, the robots might gather on a few of the sources if they are equipped only with locally detectable sensors and short-range communication devices. To overcome this difficulty, in this study, we propose a method to appropriately spread the robots out by using only local information. The key idea of the method is to control the sample variance of the robot coordinates based on a distributed-optimization technique. Although the sample variance depends on all robot coordinates, the proposed method enables each robot to compute its value via short-range communication using local information on neighbouring robots. Finally, simulation results are presented to illustrate the effectiveness of the proposed method.

1. Introduction

Autonomous robots are being increasingly used in hazardous environments, such as space, seafloors, minefields and fires, in order to assist and protect humans (Bellingham & Rajan, 2007; Sukhatme et al., 2007; Murphy et al., 2009; Kumar et al., 2004). Such operations can be conducted even more efficiently if groups of robots, known as swarms, are used. Thus, swarm robotics has recently received considerable attention, and various control methodologies have been proposed for it (Mohan & Ponnambalam, 2009; Barca & Sekercioglu, 2013). The distinctive advantage of swarm robotics is that even if some of the robots break down, the others will continue to operate. This naturally leads to the requirement for a leaderless system with distributed control, a technique for individually controlling the swarm robots based on local information (Murray, 2007; Martínez et al., 2007).
One of the most promising applications of swarm robotics is source seeking (Ariyur & Krstic, 2003; Zhang & Ordóñez, 2011; Atanasov et al., 2012; Matveev et al., 2013), which is applicable to hazardous events, e.g., fire sites, chemical spills and radiation leaks. A typical temperature profile of a fire site is shown in Fig. 1, wherein there are possibly multiple sources in the field. In such a case, we should expect the swarm robots to effectively locate all the sources by cooperative behaviour. However, if they are equipped with only locally detectable sensors and short-range communication devices, there is a risk of the robots gathering on a few of the sources without every source being covered.

Fig. 1. A typical temperature profile of a fire site.

In this article, we deal with a multiple source seeking problem via swarm robots. Our approach to solving the issue of unexpected gathering is to control the sample variance of the robot coordinates via distributed control. This causes the robots to appropriately spread out and find all the sources instead of gathering at only particular ones. It should be noted that the sample variance depends on all robot coordinates. Nevertheless, we propose a distributed method by which the value of the sample variance can be computed on each robot via short-range communication between robots using only local information. Although the problem of sample-variance control has already been discussed by Antonelli & Chiaverini (2003), only a centralized method was considered, whereas our main contribution is to propose a distributed method.
Various studies have investigated source seeking by swarm robots from the engineering field (e.g., Biyik & Arcak, 2007; Choi et al., 2009; Li & Guo, 2012; Jadaliha et al., 2012; Briñón-Arranz & Schenato, 2013; Atanasov et al., 2015) and from the biological field (e.g., Koorehdavoudi & Bogdan, 2016; Attanasi et al., 2014). However, none of them has considered the sample variance of the robot coordinates or any index for determining the spread of the robots. In contrast, our proposed method to control the sample variance allows the robots to appropriately spread out and successfully find multiple sources. Moreover, the proposed method is different from coverage control (Miller & Murphey, 2015; Cortés et al., 2004; Kantaros & Zavlanos, 2014a,b; Caicedo-Nunez & Zefran, 2008) in the sense that each robot does not need to compute the integral of the density function over its Voronoi region. Although the proposed method is based on distributed optimization, it is different from existing methods such as the alternating direction method of multipliers (ADMM) and the alternating direction augmented Lagrangian (ADAL) (Boyd et al., 2010; Chatzipanagiotis et al., 2015). These methods provide a distributed controller of robotic behaviour by using a control signal common to all the robots, which indicates a penalty relating to the error of the sample variance; however, this common signal must be centrally computed by collecting information from all the robots. In contrast, our study provides a communication-based distributed method for obtaining the sample variance. The updates of this study over our conference article (Sakurama & Nishida, 2016) are as follows: (i) the mathematical grounds are detailed, including complete proofs of the theories; (ii) the feasibility of the proposed method is discussed; and (iii) simulation results with and without the proposed method are provided and compared.
This article is organized as follows: Section 2 presents a basic preliminary procedure for solving a constrained optimization problem. Section 3 formulates the main problem: source seeking with sample-variance control in a distributed manner. Section 4 explains an issue that arises when the basic optimization procedure is applied to the main problem. Section 5 proposes a distributed method that overcomes this issue, which is the main result of this study. Section 6 illustrates the effectiveness of the proposed method through numerical examples. Section 7 provides conclusions.

In this article, $${\mathbb R}$$, $${\mathbb R}^n$$ and $${\mathbb Z}_+$$ represent the sets of real numbers, $$n$$-dimensional real vectors, and non-negative integers, respectively. For a vector $$x \in {\mathbb R}^n$$, $$\|x\|$$ represents the Euclidean norm. For a finite countable set $${\mathcal N}$$, the number of its elements is represented by $$|{\mathcal N}|$$.

2. Preliminary

We begin by explaining a fundamental optimization method. Consider the constrained optimization problem
$$\begin{cases} \text{minimize} & U(x) \\ \text{subject to} & G(x) = 0, \end{cases} \quad (2.1)$$
where $$x = [x_{1}\ x_{2}\ \cdots\ x_{n}]^\top \in {\mathbb R}^{n}$$ is the decision variable, $$U: {\mathbb R}^n \to {\mathbb R}$$ is the objective function, and $$G: {\mathbb R}^n \to {\mathbb R}$$ is the constraint function. Now, we consider the augmented Lagrangian method to solve this problem (Bertsekas, 1976; Nocedal & Wright, 2006). The Lagrangian function $$L(x,\lambda) \in {\mathbb R}$$ is defined as
$$L(x,\lambda) = U(x) + \lambda G(x), \quad (2.2)$$
where $$\lambda \in {\mathbb R}$$ is the Lagrangian multiplier, and the augmented Lagrangian function $$L_a(x,\lambda)$$ is defined as
$$L_a(x,\lambda) = L(x,\lambda) + \frac{\rho}{2} G(x)^2, \quad (2.3)$$
where $$\rho$$ is a positive real number. Let $$x^{[s]} \in {\mathbb R}^n$$ and $$\lambda^{[s]} \in {\mathbb R}$$ be the variables with respect to step $$s \in {\mathbb Z}_+$$, which are updated to obtain the solution of (2.1).
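Before specializing to the robot setting, the augmented-Lagrangian machinery of (2.2)–(2.3) can be exercised on a toy problem: gradient descent on $$L_a$$ with the multiplier fixed, followed by multiplier ascent. The objective, constraint, gains and iteration counts below are illustrative assumptions of this sketch, not values from the article.

```python
import numpy as np

# Toy instance: minimize U(x) = ||x||^2 subject to G(x) = x1 + x2 - 1 = 0.
# The exact minimizer is x = [0.5, 0.5] with multiplier lambda = -1.

def U_grad(x):
    return 2.0 * x

def G(x):
    return x[0] + x[1] - 1.0

def G_grad(x):
    return np.ones(2)

def augmented_lagrangian(x0, lam0=0.0, rho=1.0, step=0.05, K=200, S=20):
    x, lam = np.array(x0, dtype=float), lam0
    for _ in range(S):
        # inner loop: K gradient steps on L_a(x, lam) with lam fixed
        for _ in range(K):
            x = x - step * (U_grad(x) + (lam + rho * G(x)) * G_grad(x))
        # outer loop: multiplier ascent by rho * G(x)
        lam = lam + rho * G(x)
    return x, lam

x_opt, lam_opt = augmented_lagrangian([0.0, 0.0])
```

With these gains the multiplier error contracts geometrically across outer steps, so a few dozen iterations already recover the constrained minimizer to high accuracy.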
The following is an algorithm based on the augmented Lagrangian.

Algorithm 2.1 (Augmented Lagrangian-based algorithm)

(S-0) Assign $$x^{[0]} \in {\mathbb R}^{n}$$, $$\lambda^{[0]} \in {\mathbb R}$$, $$\rho > 0$$, $$K \in {\mathbb Z}_+$$ and $$S \in {\mathbb Z}_+$$. Set $$s=0$$.

(S-1) Update $$x^{[s]}$$ so as to minimize $$L_a(x,\lambda^{[s]})$$ with respect to $$x$$. The update law is given as
$$x[0] = x^{[s]} \quad (2.4)$$
$$x[k+1] = x[k] - \frac{\partial L_a}{\partial x}(x[k],\lambda^{[s]}) = x[k] - \frac{\partial U}{\partial x}(x[k]) - \left(\lambda^{[s]} + \rho G(x[k])\right)\frac{\partial G}{\partial x}(x[k]) \quad (2.5)$$
$$x^{[s+1]} = x[K], \quad (2.6)$$
where the variable $$x^{[s+1]}$$ of the next step is recursively obtained via the intermediary variable $$x[k] \in {\mathbb R}^n$$.

(S-2) Update $$\lambda^{[s]}$$ so as to increase $$L_a(x,\lambda^{[s]})$$ at the next step. The update law is given as
$$\lambda^{[s+1]} = \lambda^{[s]} + \frac{\partial L_a}{\partial \lambda}(x^{[s+1]},\lambda^{[s]}) = \lambda^{[s]} + \rho G(x^{[s+1]}). \quad (2.7)$$

(S-3) If $$s<S$$, add $$1$$ to $$s$$ and return to (S-1). If $$s = S$$, stop the algorithm; the solution of (2.1) is obtained as $$x = x^{[s]}$$.

Remark 2.1 If the equation $$\partial L_a/\partial x(x,\lambda) = 0$$ is directly solvable with respect to $$x$$, then it is not necessary to obtain $$x^{[s+1]}$$ recursively in (S-1). However, this equation is not solvable when Algorithm 2.1 is applied to the source-seeking problem because the intensity of the sources in the field is not known in advance. Thus, the recursive approach is taken.

3. Problem setting

3.1. Control objective

In this subsection, a source-seeking problem by swarm robots is formulated via the optimization problem (2.1). Consider a swarm-robot system consisting of $$n$$ robots in a two-dimensional space. Let $$x_{i}[k] \in {\mathbb R}^2$$ ($$i = 1,2,\ldots,n$$) be the coordinate of robot $$i$$, where $$k \in {\mathbb Z}_+$$ represents the time. The dynamics of the robots are given by the discrete-time integrator for unit time as
$$x_i[k+1] = x_i[k] + u_i[k], \quad (3.1)$$
where $$u_i[k] \in {\mathbb R}^2$$ is the control input of robot $$i$$.
Equation (3.1) implies that each robot is controlled by an inner loop so as to move according to the velocity input $$u_i[k]$$. The set of robot coordinates is given as
$$x[k] = [x_1^\top[k]\ x_2^\top[k]\ \cdots\ x_n^\top[k]]^\top \in {\mathbb R}^{2n}. \quad (3.2)$$
For simplicity, we sometimes drop $$[k]$$ from $$x_{i}[k]$$ and $$x[k]$$. Consider multiple sources in the space, whose total intensity is represented by a function $$F: {\mathbb R}^2 \to {\mathbb R}$$. An example is depicted in Fig. 1. First, for source seeking, robot $$i$$ has to move in a direction such that the value of $$F(x_{i})$$ increases. Meanwhile, the robots have to avoid collisions and maintain communication connectivity. For this purpose, a function $$R_{i}(x) \in {\mathbb R}$$ is introduced to robot $$i$$ as
$$R_i(x) = \sum_{j=1}^n R_{ij}(x_i,x_j), \quad (3.3)$$
which is the sum of the functions $$R_{ij}: {\mathbb R}^2 \times {\mathbb R}^2 \to {\mathbb R}$$ with respect to $$j$$ as
$$R_{ij}(x_i,x_j) = \begin{cases} \dfrac{C_2}{4}\,\big|\|x_i-x_j\|-r_1\big|^2 & \text{if } \|x_i-x_j\| \le r_1 \\ 0 & \text{if } r_1 < \|x_i-x_j\| \le r_2 \\ \dfrac{C_3}{4}\,\big|\|x_i-x_j\|-r_2\big|^2 & \text{if } r_2 < \|x_i-x_j\| \le r_3 \\ \dfrac{C_3}{4}\,|r_3-r_2|^2 & \text{if } \|x_i-x_j\| > r_3, \end{cases} \quad (3.4)$$
where $$r_1$$, $$r_2$$, $$r_3$$, $$C_2$$ and $$C_3$$ are positive real numbers. The function $$R_{ij}(x_i,x_j)$$ works as a repulsion between robots $$i$$ and $$j$$ within distance $$r_1$$, and works as an attraction from distance $$r_2$$ to $$r_3$$. Then, we assign the following as the objective function:
$$U(x) = -C_1\sum_{i=1}^n F(x_i) + \sum_{i=1}^n R_i(x) \quad (3.5)$$
with a positive real number $$C_{1}$$. When $$U(x)$$ is minimized with respect to $$x$$, the coordinates $$x_i$$ are determined such that the robots are maneuvered around the sources without collisions or communication losses. Now, if there are multiple sources in the field, the swarm robots could all gather around a few particular sources. To prevent this, we seek to spread the robots out appropriately in the field. To do so, we regard the coordinates of the robots as statistical samples and control the robots to achieve a desired sample variance. Let $$\sigma^2 > 0$$ be the desired sample variance of the coordinates.
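As a concrete reading of the piecewise potential (3.4), the sketch below evaluates $$R_{ij}$$ using the parameter values later chosen in Section 6 ($$r_1=5$$, $$r_2=10$$, $$r_3=15$$, $$C_2=5\times 10^{-2}$$, $$C_3=2\times 10^{-2}$$); the function and variable names are our own.

```python
import numpy as np

# Interaction potential R_ij of (3.4); parameters taken from Section 6.
R1, R2, R3 = 5.0, 10.0, 15.0
C2, C3 = 5e-2, 2e-2

def R_ij(xi, xj):
    d = np.linalg.norm(np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float))
    if d <= R1:                      # repulsion zone: penalize closeness
        return (C2 / 4.0) * (d - R1) ** 2
    elif d <= R2:                    # neutral zone: no force
        return 0.0
    elif d <= R3:                    # attraction zone: penalize separation
        return (C3 / 4.0) * (d - R2) ** 2
    else:                            # saturated beyond the communication reach
        return (C3 / 4.0) * (R3 - R2) ** 2
```

Note that the potential is continuous at $$r_1$$, $$r_2$$ and $$r_3$$, and flat beyond $$r_3$$, so distant robots exert no force on each other.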
Then, for the average of the coordinates
$$\mu(x) = \frac{1}{n}\sum_{i=1}^n x_i, \quad (3.6)$$
we expect to obtain
$$\frac{1}{n}\sum_{i=1}^n \|x_i - \mu(x)\|^2 = \sigma^2, \quad (3.7)$$
where the left-hand side represents the sample variance of $$x_i$$ ($$i=1,2,\ldots,n$$). Now, the constraint function $$G(x)$$ is given by the difference between the left- and right-hand sides of (3.7) as
$$G(x) = \frac{1}{2}\left(\frac{1}{n}\sum_{i=1}^n \|x_i - \mu(x)\|^2 - \sigma^2\right). \quad (3.8)$$
The source-seeking problem by the swarm robots with sample-variance control is then described by the optimization problem (2.1) with (3.5) and (3.8).

3.2. Distributed algorithm

The aim of this study is to realize a distributed control method for the above control objective. Here, distributed control means that each robot maneuvers itself using only information about the robots within a communication range. Let $$r > 0$$ be the distance of the range, which is common among all the robots. Let $${\mathcal N}_i(r) \subset \{1,2,\ldots,n\}$$ be the set of robots within distance $$r$$ from robot $$i$$ as
$${\mathcal N}_i(r) = \{j \in \{1,2,\ldots,n\} : \|x_i - x_j\| \le r\}. \quad (3.9)$$
Then, robot $$i$$ can communicate with the robots $$j \in {\mathcal N}_i(r)$$, which are called the neighbours of robot $$i$$. Let $${\it{\Gamma}}_i(z_i[k],v_i[k],y_i)$$ be a system (i.e., a computational procedure) implemented on robot $$i$$, where $$z_i[k] \in {\mathbb R}^{n_i}$$ is a state variable, $$v_i[k] \in {\mathbb R}^{m_i}$$ is an input variable and $$y_i \in {\mathbb R}^{l_i}$$ is an output constant. The collections of states, inputs and outputs of robot $$i$$'s neighbours are respectively denoted as
$$z_{{\mathcal N}_i(r)}[k] = [z_{j_1}^\top[k]\ z_{j_2}^\top[k]\ \cdots\ z_{j_{|{\mathcal N}_i(r)|}}^\top[k]]^\top \in {\mathbb R}^{N_i} \quad (3.10)$$
$$v_{{\mathcal N}_i(r)}[k] = [v_{j_1}^\top[k]\ v_{j_2}^\top[k]\ \cdots\ v_{j_{|{\mathcal N}_i(r)|}}^\top[k]]^\top \in {\mathbb R}^{M_i} \quad (3.11)$$
$$y_{{\mathcal N}_i(r)} = [y_{j_1}^\top\ y_{j_2}^\top\ \cdots\ y_{j_{|{\mathcal N}_i(r)|}}^\top]^\top \in {\mathbb R}^{L_i}, \quad (3.12)$$
where $$j_h$$ ($$h = 1,2,\ldots,|{\mathcal N}_i(r)|$$) is an element of $${\mathcal N}_i(r)$$ satisfying $$\{j_1,j_2,\ldots,j_{|{\mathcal N}_i(r)|}\} = {\mathcal N}_i(r)$$, $$N_i = \sum_{j \in {\mathcal N}_i(r)} n_j$$, $$M_i = \sum_{j \in {\mathcal N}_i(r)} m_j$$ and $$L_i = \sum_{j \in {\mathcal N}_i(r)} l_j$$.
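The centralized quantities $$\mu(x)$$ and $$G(x)$$ of (3.6)–(3.8) are straightforward to evaluate when all coordinates are available; a minimal sketch follows, useful as a reference against which the distributed estimates of Section 5 can be checked (the function names are our own).

```python
import numpy as np

def mean_coordinate(x):
    # x: (n, 2) array of robot coordinates; returns mu(x) of (3.6)
    return x.mean(axis=0)

def G(x, sigma2):
    # Constraint function (3.8): half the gap between the sample
    # variance of the coordinates and the desired variance sigma2.
    mu = mean_coordinate(x)
    sample_var = np.mean(np.sum((x - mu) ** 2, axis=1))
    return 0.5 * (sample_var - sigma2)
```

For two robots at $$(\pm 1, 0)$$ the sample variance is $$1$$, so $$G$$ vanishes exactly when $$\sigma^2 = 1$$.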
The system $${\it{\Gamma}}_i(z_i[k],v_i[k],y_i)$$ of robot $$i$$ is said to be distributed if the computational procedure in the system uses only its own information $$z_i[k], v_i[k]$$ and that of its neighbours, i.e., $$z_j[k], v_j[k]$$ for $$j \in {\mathcal N}_i(r)$$. Then, a distributed system is described as
$$\begin{cases} z_i[0] = f_i(v_i[0], v_{{\mathcal N}_i(r)}[0]) \\ z_i[k+1] = g_i(z_i[k], z_{{\mathcal N}_i(r)}[k], v_i[k], v_{{\mathcal N}_i(r)}[k]) \\ y_i = h_i(z_i[K], z_{{\mathcal N}_i(r)}[K], v_i[K], v_{{\mathcal N}_i(r)}[K]) \end{cases} \quad (3.13)$$
for functions $$f_i: {\mathbb R}^{m_i} \times {\mathbb R}^{M_i} \to {\mathbb R}^{n_i}$$, $$g_i: {\mathbb R}^{n_i} \times {\mathbb R}^{N_i} \times {\mathbb R}^{m_i} \times {\mathbb R}^{M_i} \to {\mathbb R}^{n_i}$$, $$h_i: {\mathbb R}^{n_i} \times {\mathbb R}^{N_i} \times {\mathbb R}^{m_i} \times {\mathbb R}^{M_i} \to {\mathbb R}^{l_i}$$, and an integer $$K > 0$$. An algorithm is said to be distributed if it consists of distributed systems. The control objective formulated by the optimization problem (2.1) with (3.5) and (3.8) can be solved by Algorithm 2.1 introduced in Section 2. However, as shown in Section 4, this algorithm is not distributed. Accordingly, the main contribution of this study is to design a distributed algorithm to solve this optimization problem. The main problem is now given as follows.

Problem 3.1 Design a distributed algorithm to solve the optimization problem (2.1) with (3.5) and (3.8) with a control input $$u_i[k]$$ in (3.1) such that the robots move according to the algorithm.

We make certain assumptions as follows. First, we assume that the parameters $$r_1$$, $$r_2$$ and $$r_3$$ are chosen as follows.

Assumption 3.2 The positive real numbers $$r_1$$, $$r_2$$, $$r_3$$ and $$r$$ satisfy $$r_1 \le r_2 \le r_3 \le r$$.

Next, for the neighbours $${\mathcal N}_{i}(r)$$, we define the undirected graph $$\mathcal{G} = ({\mathcal V},{\mathcal E})$$ with the vertex set $${\mathcal V}$$ and edge set $${\mathcal E}$$ as
$${\mathcal V} = \{1,2,\ldots,n\}, \qquad {\mathcal E} = \{\{i,j\} \subset {\mathcal V} : j \in {\mathcal N}_i(r)\}.$$
The graph $$\mathcal{G}$$ represents the network topology over which the robots communicate with each other. Note that the graph $$\mathcal{G}$$ can vary according to the coordinates of the robots. Then, we assume the following for this graph.

Assumption 3.3 The graph $$\mathcal{G}$$ is undirected and connected at any step $$s \in {\mathbb Z}_+$$ and time $$k \in {\mathbb Z}_+$$.

Finally, we assume that the robots move around a bounded area as follows.

Assumption 3.4 There exists $$C > 0$$ such that
$$\|x_i[k]\| < C \quad (3.14)$$
for any $$i = 1,2,\ldots,n$$, $$s \in {\mathbb Z}_+$$ and $$k \in {\mathbb Z}_+$$.

4. Issue of the augmented Lagrangian-based algorithm

In this section, we discuss an issue that arises when Algorithm 2.1 (the augmented Lagrangian-based algorithm) is applied for distributed control of the swarm robots. For the objective function (3.5) and the constraint function (3.8), the augmented Lagrangian function (2.3) is given as
$$L_a(x,\lambda) = -C_1\sum_{i=1}^n F(x_i) + \sum_{i=1}^n R_i(x) + \lambda G(x) + \frac{\rho}{2}G(x)^2.$$
Then, the partial derivative of $$L_a(x,\lambda)$$ with respect to $$x_{i}$$ is calculated as
$$\frac{\partial L_a}{\partial x_i}(x,\lambda) = -C_1\frac{\partial F}{\partial x_i}(x_i) + 2\frac{\partial R_i}{\partial x_i}(x) + \left(\lambda + \rho G(x)\right)\frac{\partial G}{\partial x_i}(x).$$
From (3.6) and (3.8),
$$\frac{\partial G}{\partial x_i}(x) = \frac{1}{2}\frac{\partial}{\partial x_i}\left(\frac{1}{n}\sum_{j=1}^n \|x_j - \mu(x)\|^2 - \sigma^2\right) = \frac{1}{n}(x_i - \mu(x)) - \frac{1}{n}\sum_{j=1}^n (x_j - \mu(x))\frac{\partial \mu(x)}{\partial x_i} = \frac{1}{n}(x_i - \mu(x)) \quad (4.1)$$
is obtained. Moreover, from (3.3) and (3.4),
$$\frac{\partial R_i}{\partial x_i}(x) = \sum_{j=1}^n \frac{\partial R_{ij}}{\partial x_i}(x_i,x_j) = \frac{C_2}{2}\sum_{j \in {\mathcal N}_i(r_1)} (\|x_i - x_j\| - r_1)\frac{x_i - x_j}{\|x_i - x_j\|} + \frac{C_3}{2}\sum_{j \in {\mathcal N}_i(r_3)\setminus{\mathcal N}_i(r_2)} (\|x_i - x_j\| - r_2)\frac{x_i - x_j}{\|x_i - x_j\|} \quad (4.2)$$
is derived. Then, from (4.1) and (4.2), the update law (2.5) of $$x_i$$ reduces to
$$x_i[k+1] = x_i[k] + C_1\frac{\partial F}{\partial x_i}(x_i[k]) - C_2\sum_{j \in {\mathcal N}_i(r_1)} (\|x_i[k] - x_j[k]\| - r_1)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - C_3\sum_{j \in {\mathcal N}_i(r_3)\setminus{\mathcal N}_i(r_2)} (\|x_i[k] - x_j[k]\| - r_2)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - \frac{1}{n}\left(\lambda^{[s]} + \rho G(x[k])\right)(x_i[k] - \mu(x[k])). \quad (4.3)$$
Since $${\mathcal N}_i(r_1) \subset {\mathcal N}_i(r)$$ and $${\mathcal N}_i(r_3) \subset {\mathcal N}_i(r)$$ hold from Assumption 3.2, the first, second and third terms on the right-hand side of (4.3) can be computed from robot $$i$$'s own state $$x_i[k]$$ and those of its neighbours, i.e., $$x_j[k]$$, $$j \in {\mathcal N}_i(r)$$.
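The identity $$\partial G/\partial x_i = (x_i - \mu(x))/n$$ in (4.1) can be verified numerically by finite differences; the sketch below does so on a random configuration (the variable names and test configuration are our own).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=(n, 2))   # random 2-D coordinates of n robots
sigma2 = 2.0

def G(x):
    # Constraint function (3.8)
    mu = x.mean(axis=0)
    return 0.5 * (np.mean(np.sum((x - mu) ** 2, axis=1)) - sigma2)

mu = x.mean(axis=0)
analytic = (x[0] - mu) / n    # (4.1): gradient of G with respect to x_0

# Central finite differences on each component of x_0
eps = 1e-6
numeric = np.zeros(2)
for d in range(2):
    xp = x.copy(); xp[0, d] += eps
    xm = x.copy(); xm[0, d] -= eps
    numeric[d] = (G(xp) - G(xm)) / (2 * eps)
```

Since $$G$$ is quadratic in the coordinates, the central difference agrees with (4.1) up to floating-point error.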
However, to obtain the values of $$G(x[k])$$ and $$\mu(x[k])$$ in the final term of (4.3), we require all the robot coordinates $$x_i[k]$$ ($$i=1,2,\ldots,n$$) because of (3.6) and (3.8). Moreover, as for $$\lambda^{[s]}$$, the update law (2.7) requires all of $$x_i^{[s]}$$ ($$i=1,2,\ldots,n$$). Therefore, Algorithm 2.1 is not distributed.

5. Main result

5.1. Overview of the proposed method

As explained in the previous section, the values of $$\lambda^{[s]}$$, $$G(x[k])$$ and $$\mu(x[k])$$ cannot be computed directly in a distributed way. The key idea to overcome this difficulty is to estimate them on each robot in a distributed way. The estimates of $$\lambda^{[s]}$$, $$G(x[k])$$ and $$\mu(x[k])$$ by robot $$i$$ are denoted as $$\hat\lambda^{[s]}_i \in {\mathbb R}$$, $$\hat\nu_i[k] \in {\mathbb R}$$ and $$\hat\mu_i[k] \in {\mathbb R}^2$$, respectively. Note that while $$\lambda^{[s]}$$, $$G(x[k])$$ and $$\mu(x[k])$$ are common to all robots, the estimated variables $$\hat\lambda^{[s]}_i$$, $$\hat\nu_i[k]$$ and $$\hat\mu_i[k]$$ belong to individual robots. Then, (4.3) becomes
$$x_i[k+1] = x_i[k] + C_1\frac{\partial F}{\partial x_i}(x_i[k]) - C_2\sum_{j \in {\mathcal N}_i(r_1)} (\|x_i[k] - x_j[k]\| - r_1)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - C_3\sum_{j \in {\mathcal N}_i(r_3)\setminus{\mathcal N}_i(r_2)} (\|x_i[k] - x_j[k]\| - r_2)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - \frac{1}{n}\left(\hat\lambda_i^{[s]} + \rho\hat\nu_i[k]\right)(x_i[k] - \hat\mu_i[k]). \quad (5.1)$$
In this way, the update procedure of $$x^{[s]}$$ is given by (2.4), (2.6) and (5.1), which is a distributed system $${\it{\Gamma}}^a_i(x_i[k],(x_i^{[s]},\hat\lambda^{[s]}_i,\hat\nu_i[k],\hat\mu_i[k]),x_i^{[s+1]})$$. Moreover, the estimated variables $$\hat\lambda^{[s]}_i$$, $$\hat\nu_i[k]$$ and $$\hat\mu_i[k]$$ can be obtained by the distributed systems designed in the following subsection.

5.2. Distributed systems for estimation

First, we consider the design of a distributed system to estimate the average $$\mu(x[k])$$ of $$x_{i}[k]$$ ($$i=1,2,\ldots,n$$) in (3.6).
In order to update the estimated variable $$\hat\mu_i[k]$$, we propose the following system:
$$\bar\mu_i[0] = x_i[k] \quad (5.2)$$
$$\bar\mu_i[\ell+1] = \bar\mu_i[\ell] - \alpha\sum_{j \in {\mathcal N}_i(r)} (\bar\mu_i[\ell] - \bar\mu_j[\ell]) \quad (5.3)$$
$$\hat\mu_i[k] = \bar\mu_i[L], \quad (5.4)$$
where $$\alpha$$ is a positive real number and $$L$$ is a positive integer. Note that this system uses the intermediary variable $$\bar\mu_i[\ell] \in {\mathbb R}^2$$ to recursively obtain $$\hat{\mu}_{i}[k]$$ with respect to $$\ell \in {\mathbb Z}_+$$. Then, the following lemma is obtained.

Lemma 5.1 Under Assumption 3.3, for $$\alpha \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$, the system consisting of (5.2)–(5.4) is distributed, and it correctly estimates the value of $$\mu(x[k])$$ by $$\hat{\mu}_{i}[k]$$; namely, for any $$\delta > 0$$, there exists a positive integer $$L$$ such that
$$\|\hat\mu_i[k] - \mu(x[k])\| < \delta \quad (5.5)$$
holds for any $$i = 1,2,\ldots,n$$.

Proof. The system consisting of (5.2)–(5.4) is distributed because it is described as the distributed system $${\it{\Gamma}}^b_i(\bar\mu_i[\ell], x_{i}[k], \hat{\mu}_{i}[k])$$ of (3.13). Note that (5.3) is an average-consensus controller, and all the states $$\bar{\mu}_i[\ell]$$ converge to the average of the initial states $$\bar{\mu}_i[0]$$ under Assumption 3.3 for the gain $$\alpha \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$ (Olfati-Saber & Murray, 2004). Thus, from (5.4), for any $$\delta > 0$$, there exists a positive integer $$L$$ such that
$$\left\|\hat\mu_i[k] - \frac{1}{n}\sum_{i=1}^n \bar\mu_i[0]\right\| = \left\|\bar\mu_i[L] - \frac{1}{n}\sum_{i=1}^n \bar\mu_i[0]\right\| < \delta \quad (5.6)$$
holds for any $$i = 1,2,\ldots,n$$. From (3.6) and (5.2), we obtain
$$\frac{1}{n}\sum_{i=1}^n \bar\mu_i[0] = \frac{1}{n}\sum_{i=1}^n x_i[k] = \mu(x[k]). \quad (5.7)$$
Equations (5.6) and (5.7) lead to (5.5). □

Next, we consider how to estimate the value of $$G(x[k])$$ in (3.8). For this, we can use the estimate $$\hat\mu_{i}[k]$$ of $$\mu(x[k])$$.
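The average-consensus iteration (5.3) can be sketched in a few lines; the example below runs it on a fixed line graph of four nodes, with the gain $$\alpha$$ chosen below $$1/\max_i |{\mathcal N}_i(r)|$$ as Lemma 5.1 requires (the graph, values and names are illustrative assumptions of this sketch).

```python
import numpy as np

def consensus_average(values, neighbors, alpha, L):
    # Iteration (5.3): each node repeatedly moves toward its neighbours.
    v = np.array(values, dtype=float)
    for _ in range(L):
        v = v - alpha * np.array(
            [sum(v[i] - v[j] for j in neighbors[i]) for i in range(len(v))]
        )
    return v

# Line graph 0-1-2-3 (max degree 2, so alpha < 1/2 is required).
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
est = consensus_average([4.0, 0.0, 0.0, 0.0], nbrs, alpha=0.4, L=200)
```

After enough iterations every node holds (approximately) the global average $$1.0$$, even though each node only ever exchanged values with its immediate neighbours.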
Then, the system to derive the estimated variable $$\hat{\nu}_{i}[k]$$ of $$G(x[k])$$ is proposed as follows:
$$\bar\nu_i[0] = \frac{1}{2}\left(\|x_i[k] - \hat\mu_i[k]\|^2 - \sigma^2\right) \quad (5.8)$$
$$\bar\nu_i[m+1] = \bar\nu_i[m] - \gamma\sum_{j \in {\mathcal N}_i(r)} (\bar\nu_i[m] - \bar\nu_j[m]) \quad (5.9)$$
$$\hat\nu_i[k] = \bar\nu_i[M], \quad (5.10)$$
where $$\gamma$$ is a positive real number and $$M$$ is a positive integer. Note that this system derives $$\hat{\nu}_{i}[k]$$ through the intermediary variable $$\bar\nu_i[m] \in {\mathbb R}$$ recursively with respect to $$m \in {\mathbb Z}_+$$. Then, the following lemma is obtained.

Lemma 5.2 Under Assumptions 3.3 and 3.4, for $$\gamma \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$, the system consisting of (5.8)–(5.10) is distributed, and it correctly estimates the value of $$G(x[k])$$ by $$\hat{\nu}_{i}[k]$$; namely, for any $$\epsilon > 0$$, there exist positive integers $$L$$ and $$M$$ such that
$$|\hat\nu_i[k] - G(x[k])| < \epsilon \quad (5.11)$$
holds for any $$i = 1,2,\ldots,n$$.

Proof. The system consisting of (5.8)–(5.10) is distributed because it is described as the distributed system $${\it{\Gamma}}^c_i(\bar\nu_i[m], (x_{i}[k], \hat\mu_{i}[k]), \hat{\nu}_{i}[k])$$ of (3.13). Since (5.9) is an average-consensus controller, under Assumption 3.3, for $$\gamma \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$ and any $$\xi > 0$$, there exists a positive integer $$M$$ such that
$$\left|\hat\nu_i[k] - \frac{1}{n}\sum_{i=1}^n \bar\nu_i[0]\right| = \left|\bar\nu_i[M] - \frac{1}{n}\sum_{i=1}^n \bar\nu_i[0]\right| < \xi \quad (5.12)$$
holds for any $$i = 1,2,\ldots,n$$ from (5.10). Note that the following holds:
$$\big|\|x_i[k] - \hat\mu_i[k]\|^2 - \|x_i[k] - \mu(x[k])\|^2\big| = \big|\|x_i[k] - \hat\mu_i[k]\| - \|x_i[k] - \mu(x[k])\|\big| \times \big(\|x_i[k] - \hat\mu_i[k]\| + \|x_i[k] - \mu(x[k])\|\big) \le \|\hat\mu_i[k] - \mu(x[k])\| \times \big(\|\hat\mu_i[k] - \mu(x[k])\| + 2\|x_i[k] - \mu(x[k])\|\big) < \delta(\delta + 2C), \quad (5.13)$$
where the last inequality follows from (5.5) and
$$\|x_i[k] - \mu(x[k])\| \le \|x_i[k]\| + \frac{1}{n}\sum_{i=1}^n \|x_i[k]\| < 2C,$$
which follows from (3.14) in Assumption 3.4.
Then, from (3.8), (5.8) and (5.13),
$$\left|\frac{1}{n}\sum_{i=1}^n \bar\nu_i[0] - G(x[k])\right| = \left|\frac{1}{n}\sum_{i=1}^n \frac{1}{2}\left(\|x_i[k] - \hat\mu_i[k]\|^2 - \sigma^2\right) - \frac{1}{2}\left(\frac{1}{n}\sum_{i=1}^n \|x_i[k] - \mu(x[k])\|^2 - \sigma^2\right)\right| = \frac{1}{2n}\left|\sum_{i=1}^n \left(\|x_i[k] - \hat\mu_i[k]\|^2 - \|x_i[k] - \mu(x[k])\|^2\right)\right| \le \frac{1}{2n}\sum_{i=1}^n \big|\|x_i[k] - \hat\mu_i[k]\|^2 - \|x_i[k] - \mu(x[k])\|^2\big| < \frac{\delta}{2}(\delta + 2C) \quad (5.14)$$
is obtained. Then, (5.12) and (5.14) lead to
$$|\bar\nu_i[M] - G(x[k])| \le \left|\bar\nu_i[M] - \frac{1}{n}\sum_{i=1}^n \bar\nu_i[0]\right| + \left|\frac{1}{n}\sum_{i=1}^n \bar\nu_i[0] - G(x[k])\right| < \xi + \frac{\delta}{2}(\delta + 2C). \quad (5.15)$$
Note that $$\delta > 0$$ and $$\xi > 0$$ can be made arbitrarily small by choosing $$L$$ and $$M$$. Then, $$\xi + \delta(\delta + 2C)/2 \le \epsilon$$ can be achieved, and (5.11) holds from (5.15). □

Finally, we consider how to estimate the value of the Lagrangian multiplier $$\lambda^{[s]}$$, which has to be updated as in (2.7). Here, instead of $$G(x^{[s+1]})$$, we use the estimate $$\hat{\nu}_{i}[K]$$. Then, the system to update the estimated variable $$\hat\lambda^{[s]}_i$$ of $$\lambda^{[s]}$$ is proposed as follows:
$$\bar\lambda_i[0] = \hat\lambda_i^{[s]} + \rho\hat\nu_i[K] \quad (5.16)$$
$$\bar\lambda_i[h+1] = \bar\lambda_i[h] - \beta\sum_{j \in {\mathcal N}_i(r)} (\bar\lambda_i[h] - \bar\lambda_j[h]) \quad (5.17)$$
$$\hat\lambda_i^{[s+1]} = \bar\lambda_i[H], \quad (5.18)$$
where the positive real numbers $$\rho$$ and $$\beta$$ represent gains, $$H$$ is a positive integer, and $$\bar\lambda_{i}[h] \in \mathbb{R}$$ is the intermediary variable at $$h \in \mathbb{Z}_+$$. Then, the following lemma is obtained.

Lemma 5.3 Under Assumption 3.3, for $$\beta \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$, the system consisting of (5.16)–(5.18) is distributed and it correctly estimates the Lagrangian multiplier $$\lambda^{[s]}$$ by $$\hat\lambda_{i}^{[s]}$$; namely, for any $$\epsilon_1 > 0$$, $$\epsilon_2 > 0$$ and $$s \in \mathbb{Z}_+ \backslash \{0\}$$, there exist positive integers $$K$$, $$L$$, $$M$$ and $$H$$ such that
$$|\hat\lambda_i^{[s]} - \hat\lambda_j^{[s]}| < \epsilon_1 \quad (5.19)$$
$$\left|\hat\lambda_i^{[s+1]} - \left(\hat\lambda_i^{[s]} + \rho G(x^{[s+1]})\right)\right| < \epsilon_2 \quad (5.20)$$
hold for any $$i,j = 1,2,\ldots,n$$, where $$x^{[s+1]}$$ is given by (2.6).

Proof.
The system consisting of (5.16)–(5.18) is distributed because it is described as the distributed system $${\it{\Gamma}}^d_i(\bar\lambda_{i}[h], (\hat{\lambda}_{i}^{[s]}, \hat{\nu}_{i}[K]), \hat\lambda_{i}^{[s+1]})$$ of (3.13). Because (5.17) is an average-consensus controller, (5.19) is satisfied for some sufficiently large $$H$$. Moreover, from (5.18), for any $$\epsilon_3 > 0$$, the following is obtained for some positive integer $$H$$:
$$\left|\hat\lambda_i^{[s+1]} - \frac{1}{n}\sum_{j=1}^n \bar\lambda_j[0]\right| = \left|\bar\lambda_i[H] - \frac{1}{n}\sum_{j=1}^n \bar\lambda_j[0]\right| < \epsilon_3. \quad (5.21)$$
Note that the following is obtained:
$$\left|\hat\lambda_i^{[s]} + \rho G(x^{[s+1]}) - \frac{1}{n}\sum_{j=1}^n \bar\lambda_j[0]\right| = \left|\hat\lambda_i^{[s]} + \rho G(x^{[s+1]}) - \frac{1}{n}\sum_{j=1}^n \left(\hat\lambda_j^{[s]} + \rho\hat\nu_j[K]\right)\right| \quad (5.22)$$
$$\le \frac{1}{n}\sum_{j=1}^n \left|\hat\lambda_i^{[s]} - \hat\lambda_j^{[s]}\right| + \frac{\rho}{n}\sum_{j=1}^n \left|G(x^{[s+1]}) - \hat\nu_j[K]\right| < \epsilon_1 + \rho\epsilon, \quad (5.23)$$
where (5.22) follows from (5.16), and (5.23) follows from (2.6), (5.11) and (5.19). Then, (5.21) and (5.23) lead to
$$\left|\hat\lambda_i^{[s+1]} - \left(\hat\lambda_i^{[s]} + \rho G(x^{[s+1]})\right)\right| \le \left|\hat\lambda_i^{[s+1]} - \frac{1}{n}\sum_{j=1}^n \bar\lambda_j[0]\right| + \left|\frac{1}{n}\sum_{j=1}^n \bar\lambda_j[0] - \left(\hat\lambda_i^{[s]} + \rho G(x^{[s+1]})\right)\right| < \epsilon_3 + \epsilon_1 + \rho\epsilon. \quad (5.24)$$
By choosing appropriate positive integers $$K$$, $$L$$, $$M$$ and $$H$$, the positive real numbers $$\epsilon_1$$, $$\epsilon_3$$ and $$\epsilon$$ can be made arbitrarily small. Thus, $$\epsilon_3 + \epsilon_1 + \rho\epsilon < \epsilon_2$$ can be satisfied, and (5.20) is obtained from (5.24). □

5.3. Proposed distributed algorithm

Based on Algorithm 2.1, a new distributed algorithm for source seeking by swarm robots is proposed using the distributed systems designed above.

Algorithm 5.1 (Proposed algorithm)

(S-0) Assign $$x^{[0]}_i \in {\mathbb R}^2$$, $$\hat{\lambda}_{i}^{[0]} \in {\mathbb R}$$, $$\hat{\nu}_{i}^{[0]} \in {\mathbb R}$$, $$\hat{\mu}_{i}^{[0]} \in {\mathbb R}^2$$, $$\rho > 0$$, $$\alpha, \beta, \gamma \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$, positive integers $$S$$, $$K$$, $$L$$, $$M$$, and the desired sample variance $$\sigma^2 > 0$$. Set $$s=0$$.

(S-1-0) Set $$k=0$$ and $$x_{i}[0] = x_{i}^{[s]}$$.

(S-1-1) Calculate $$\hat\mu_i[k]$$ (the estimate of $$\mu(x[k])$$) according to (5.2)–(5.4).
(S-1-2) Calculate $$\hat\nu_i[k]$$ (the estimate of $$G(x[k])$$) according to (5.8)–(5.10).

(S-1-3) Calculate $$x_i[k+1]$$ according to (5.1).

(S-1-4) If $$k < K$$, go back to (S-1-1). If $$k = K$$, set $$x_{i}^{[s+1]} = x_i[K]$$ and $$\hat\nu_i^{[s+1]} = \hat\nu_i[K]$$, and go forward to (S-2).

(S-2) Calculate $$\hat\lambda_i^{[s+1]}$$ (the estimate of $$\lambda^{[s+1]}$$) according to (5.16)–(5.18).

(S-3) If $$s < S$$, add $$1$$ to $$s$$ and go back to (S-1-0). If $$s = S$$, stop the algorithm.

The validity of Algorithm 5.1 is guaranteed by Lemmas 5.1, 5.2 and 5.3. The control input $$u_i[k]$$ in (3.1) is derived from (5.1). Then, the solution to Problem 3.1 is obtained as follows:

Theorem 5.4 Under Assumptions 3.2, 3.3 and 3.4, for $$\alpha, \beta, \gamma \in (0, 1/\max_{i=1,2,\ldots,n} |{\mathcal N}_i(r)|)$$, Algorithm 5.1 is distributed. Moreover, this algorithm yields a variable $$x^{[s]}$$ arbitrarily close to that obtained by Algorithm 2.1 by appropriately choosing the positive integers $$S$$, $$K$$, $$L$$ and $$M$$. Moreover, the control input $$u_i[k]$$ in (3.1) is derived by
$$u_i[k] = C_1\frac{\partial F}{\partial x_i}(x_i[k]) - C_2\sum_{j \in {\mathcal N}_i(r_1)} (\|x_i[k] - x_j[k]\| - r_1)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - C_3\sum_{j \in {\mathcal N}_i(r_3)\setminus{\mathcal N}_i(r_2)} (\|x_i[k] - x_j[k]\| - r_2)\frac{x_i[k] - x_j[k]}{\|x_i[k] - x_j[k]\|} - \frac{1}{n}\left(\hat\lambda_i^{[s]} + \rho\hat\nu_i[k]\right)(x_i[k] - \hat\mu_i[k]). \quad (5.25)$$

This theorem shows that for sufficiently large integers $$S$$, $$K$$, $$L$$ and $$M$$, Algorithm 5.1 gives essentially the same solution as Algorithm 2.1 in a distributed way. Note that the update laws in (S-1-1) and (S-1-2) of Algorithm 5.1 compute the required values recursively. These computational processes are usually faster than the movement of the robots. Therefore, even if the recursion numbers $$L$$ and $$M$$ are relatively large, the computations can be carried out while the robots are moving according to the control input $$u_i[k]$$ in (5.25).

Remark 5.1 Algorithm 5.1 has the triple-loop structure of update laws shown in Fig. 2.
The outer loop is counted by $$s$$ and is the slowest, while the middle loop is counted by $$k$$, corresponding to the robot dynamics. The inner loops are counted by $$\ell$$, $$m$$ and $$h$$. Due to this construction, if the iteration numbers $$K$$, $$L$$, $$M$$ and $$H$$ are not sufficiently large, the estimation error causes an optimization error, which is proportional to the absolute value of the estimation error and its square (Jadbabaie et al., 2009). However, the estimation error can be decreased by the choice of the gains $$\alpha$$, $$\beta$$ and $$\gamma$$. Thus, even if the iteration numbers $$K$$, $$L$$, $$M$$ and $$H$$ are not sufficiently large, the proposed method works under an appropriate choice of the gains.

Fig. 2. Routines of the update laws in Algorithm 5.1.

Remark 5.2 The network needs to be connected to achieve correct estimation by consensus control (Olfati-Saber & Murray, 2004); if the network were not connected, the sample variance could not be correctly estimated. Consensus control is a basic method for reaching agreement and, to the best of the authors' knowledge, the simplest way to perform the estimation. From the properties of consensus control, the convergence speeds of the estimations depend on the network topology (more precisely, on the second smallest eigenvalue of its graph Laplacian) and on the values of the gains $$\alpha$$, $$\beta$$ and $$\gamma$$. The required iteration numbers can be evaluated and adjusted through the choice of the gains. Therefore, even though the convergence speeds tend to be small in large-scale networks, the proposed method works well in practice under an appropriate choice of the gains. How to obtain appropriate gains should be developed in future work.
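As a sanity sketch, the two-stage estimation of Lemmas 5.1 and 5.2 can be exercised on a small complete graph: each robot first runs consensus on the coordinates to estimate $$\mu(x)$$, then runs consensus on its local term (5.8) to estimate $$G(x)$$. The graph, coordinates and names below are illustrative assumptions, not values from the article.

```python
import numpy as np

def consensus(v, alpha, steps, nbrs):
    # Average-consensus iteration of the form (5.3)/(5.9)
    v = np.array(v, dtype=float)
    for _ in range(steps):
        v = v - alpha * np.array(
            [sum(v[i] - v[j] for j in nbrs[i]) for i in range(len(v))]
        )
    return v

x = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])   # 3 robots, mean (1, 1)
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}             # complete graph
sigma2 = 1.0

# Stage 1: mu-hat_i via (5.2)-(5.4), run componentwise.
mu_hat = np.stack([consensus(x[:, d], 0.3, 300, nbrs) for d in range(2)], axis=1)

# Stage 2: nu-hat_i via (5.8)-(5.10) on the local variance terms.
local = 0.5 * (np.sum((x - mu_hat) ** 2, axis=1) - sigma2)
nu_hat = consensus(local, 0.3, 300, nbrs)
```

Here the sample variance is $$8/3$$, so every $$\hat\nu_i$$ converges to $$G(x) = (8/3 - 1)/2 = 5/6$$, matching the centralized value without any robot collecting all coordinates.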
Remark 5.3 The proposed algorithm is assumed to work synchronously, but it is applicable to the asynchronous case; see existing articles, e.g., Zhang & Giannakis (2014), on asynchronous distributed optimization. In this case, if the robots move fast, the estimation might lag behind. To prevent this, we have to assume that the robots move more slowly than the convergence speeds of the estimations, which is reasonable because the communication rate is usually faster than the robot dynamics and the convergence speeds can be adjusted by the gains as stated in Remark 5.2.

6. Numerical examples

In this section, the effectiveness of the proposed method is illustrated by numerical examples. Consider the function $$F:{\mathbb R}^2 \to {\mathbb R}$$ of the source intensity as
$$F(x) = \frac{1}{2}\|x - x_a^*\|^2\,\|x - x_b^*\|^2\,\|x - x_c^*\|^2\,\|x - x_d^*\|^2, \quad (6.1)$$
which contains four sources at $$x_a^* = [40\ 40]^\top$$, $$x_b^* = [30\ 80]^\top$$, $$x_c^* = [80\ 30]^\top$$ and $$x_d^* = [80\ 80]^\top$$. The communication range is given by $$r = 15$$. We use Algorithm 5.1 with parameters $$r_1 = 5$$, $$r_2 = 10$$, $$r_3 = 15$$, $$C_1 = 2 \times 10^{-12}$$, $$C_2 = 5 \times 10^{-2}$$, $$C_3 = 2 \times 10^{-2}$$, $$\alpha = \beta = \gamma = 2 \times 10^{-2}$$, $$\rho = 1 \times 10^{-2}$$, $$S = 35$$, $$K = 10$$, $$L = M = 1{,}000$$ and $$\sigma^2 = 225$$. Figures 3(a)–(h) show the time transition of the robot coordinates, where the robots are depicted by dots. The areas of each figure are brightened according to the intensity of the sources (the value of $$F(x)$$ in (6.1)), so the four sources appear brighter. It is observed that (a) the robots are first located around the corner $$(0,0)$$, then (b) they gather around particular sources, then (d) they start spreading, and (h) they finally cover all the sources while some stay between the sources to keep communication connectivity.

Fig. 3. Transition of the robot coordinates with the proposed method.
We give one more numerical example in which the constraint on the sample variance (3.7) is not considered but the other conditions are the same as before. Figures 4(a)–(h) depict the result. In this case, it is observed that (b) the robots find two sources, but (f) they stop before the remaining sources are found.

Fig. 4. Transition of the robot coordinates without the constraint on the sample variance.

These simulation results show that the source seeking succeeds because of the proposed sample-variance control, which illustrates the effectiveness of the proposed method.

7. Conclusion

In this research, we proposed a distributed sample-variance control of swarm robots for multiple-source seeking. The proposed controller is distributed, which distinguishes it from existing optimization methods. To this end, we designed distributed systems to estimate the average of the coordinates, the constraint function and the Lagrange multiplier. Finally, the effectiveness of the proposed method was demonstrated via numerical examples. The proposed method improves the efficiency of source seeking by using prior information on the variance $$\sigma^2$$ of the sources. If this prior information differs significantly from the true value, the robots may spread too widely or cluster too tightly. Therefore, the performance of the proposed method is limited by the accuracy of the prior information on the sources.

Funding

Japan Society for the Promotion of Science KAKENHI [Grant number 15K06143].

References

Antonelli, G. & Chiaverini, S. (2003) Kinematic control of a platoon of autonomous vehicles. Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany.

Ariyur, K. B. & Krstic, M. (2003) Real-Time Optimization by Extremum-Seeking Control. New York, NY: Wiley.

Atanasov, N., Ny, J. L., Michael, N. & Pappas, G. J. (2012) Stochastic source seeking in complex environments. Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA.

Atanasov, N., Ny, J. L. & Pappas, G. (2015) Distributed algorithms for stochastic source seeking with mobile robot networks. ASME J. Dyn. Sys., Meas., Control, 137, 031004.

Attanasi, A., Cavagna, A., Castello, L. D., Giardina, I., Melillo, S., Parisi, L., Pohl, O., Rossaro, B., Shen, E., Silvestri, E. & Viale, M. (2014) Collective behaviour without collective order in wild swarms of midges. PLOS Comput. Biol., 10, e1003697.

Barca, J. C. & Sekercioglu, Y. A. (2013) Swarm robotics reviewed. Robotica, 31, 345–359.

Bellingham, J. G. & Rajan, K. (2007) Robotics in remote and hostile environments. Science, 318, 1098–1102.

Bertsekas, D. P. (1976) Multiplier methods: a survey. Automatica, 12, 133–145.

Biyik, E. & Arcak, M. (2007) Gradient climbing in formation via extremum seeking and passivity-based coordination rules. Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, USA.

Boyd, S., Parikh, N., Chu, E., Peleato, B. & Eckstein, J. (2010) Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 1–122.

Briñón-Arranz, L. & Schenato, L. (2013) Consensus-based source-seeking with a circular formation of agents. Proceedings of the European Control Conference, Zurich, Switzerland.

Caicedo-Nunez, C. & Zefran, M. (2008) Performing coverage on non-convex domains. Proceedings of the IEEE International Conference on Control Applications, San Antonio, USA.

Chatzipanagiotis, N., Dentcheva, D. & Zavlanos, M. M. (2015) An augmented Lagrangian method for distributed optimization. Math. Programming, 152, 405–434.

Choi, J., Oh, S. & Horowitz, R. (2009) Distributed learning and cooperative control for multi-agent systems. Automatica, 45, 2802–2814.

Cortés, J., Martinez, S., Karatas, T. & Bullo, F. (2004) Coverage control for mobile sensing networks. IEEE Trans. Robot. Autom., 20, 243–255.

Jadaliha, M., Lee, J. & Choi, J. (2012) Adaptive control of multiagent systems for finding peaks of uncertain static fields. ASME J. Dyn. Sys., Meas., Control, 134, 051007.

Jadbabaie, A., Ozdaglar, A. & Zargham, M. (2009) A distributed Newton method for network optimization. Proceedings of the 48th IEEE Conference on Decision and Control.

Kantaros, Y. & Zavlanos, M. M. (2014a) Communication-aware coverage control for robotic sensor networks. Proceedings of the 53rd IEEE Conference on Decision and Control.

Kantaros, Y. & Zavlanos, M. M. (2014b) Distributed simultaneous coverage and communication control by mobile sensor networks. Proceedings of the 2nd IEEE Global Conference on Signal and Information Processing.

Koorehdavoudi, H. & Bogdan, P. (2016) A statistical physics characterization of the complex systems dynamics: quantifying complexity from spatio-temporal interactions. Scientific Reports, 6, 27602.

Kumar, V., Rus, D. & Singh, S. (2004) Robot and sensor networks for first responders. IEEE Pervasive Comput., 3, 24–33.

Li, S. & Guo, Y. (2012) Distributed source seeking by cooperative robots: all-to-all and limited communications. Proceedings of the 2012 IEEE International Conference on Robotics and Automation.

Martínez, S., Cortés, J. & Bullo, F. (2007) Motion coordination with distributed information. IEEE Control Syst. Mag., 27, 75–88.

Matveev, A. S., Hoy, M. C. & Savkin, A. V. (2013) Reactive navigation of a mobile robot to the moving extremum of a dynamic unknown environmental field without derivative estimation. Proceedings of the European Control Conference.

Miller, L. & Murphey, T. (2015) Optimal planning for target localization and coverage using range sensing. Proceedings of the 2015 IEEE International Conference on Automation Science and Engineering.

Mohan, Y. & Ponnambalam, S. G. (2009) An extensive review of research in swarm robotics. Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing.

Murphy, R. R., Kravitz, J., Stover, S. L. & Shoureshi, R. (2009) Mobile robots in mine rescue and recovery. IEEE Robot. Autom. Mag., 16, 91–103.

Murray, R. M. (2007) Recent research in cooperative control of multivehicle systems. ASME J. Dyn. Sys., Meas., Control, 129, 571–583.

Nocedal, J. & Wright, S. J. (2006) Numerical Optimization, 2nd edn. New York, NY: Springer.

Olfati-Saber, R. & Murray, R. M. (2004) Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control, 49, 1520–1533.

Sakurama, K. & Nishida, S. (2016) Source seeking by distributed swarm robots with sample variance control. Proceedings of the American Control Conference.

Sukhatme, G., Dhariwal, A., Zhang, B., Oberg, C., Stauffer, B. & Caron, D. (2007) Design and development of a wireless robotic networked aquatic microbial observing system. Environmental Engineering Science, 24, 205–215.

Zhang, C. & Ordóñez, R. (2011) Extremum-Seeking Control and Applications: A Numerical Optimization-Based Approach (Advances in Industrial Control). London: Springer-Verlag.

Zhang, Y. & Giannakis, G. B. (2014) Efficient decentralized economic dispatch for microgrids with wind power integration. Proceedings of the Sixth Annual IEEE Green Technologies Conference.
