Location Privacy-Preserving Method for Auction-Based Incentive Mechanisms in Mobile Crowd Sensing

Abstract

It is of significant importance to provide incentives to smartphone users in mobile crowd sensing systems, and a number of auction-based incentive mechanisms have recently been proposed. However, an auction-based incentive mechanism may unexpectedly leak the location privacy of smartphone users, which may seriously reduce their willingness to contribute sensing data. In an auction-based incentive mechanism, even if the location of a user is not enclosed in his/her bid submitted to the platform, the location information may still be inferred by an adversary from the prices of the tasks requested by the user. We give an example to show how an attack can recover the location of a smartphone user by merely knowing his/her bid. To defend against such an attack, we propose a method to protect location privacy in auctions for mobile crowd sensing systems. This method encrypts the prices in a bid so that the adversary cannot access them, and hence the location privacy of users can be protected. Meanwhile, the auction can still proceed properly, i.e. the platform can select the user offering the lowest price for each sensing task, or select a set of users under a budget constraint. We demonstrate the effectiveness of our proposed method with theoretical analysis and simulations.

1. INTRODUCTION

The increasing popularity of smartphones with a rich set of sensors brings a new wireless sensing paradigm called mobile crowd sensing [1]. A mobile crowd sensing system consists of three components: organizers, a central platform and smartphone users. Organizers release sensing tasks, and the platform is responsible for distributing them to smartphone users, who are recruited to perform the tasks. The collected sensing data are gathered at the platform and then forwarded to the organizers.
There are a wide variety of application domains of mobile crowd sensing, such as transportation [1, 2], environment monitoring [3, 4] and healthcare [5, 6]. It is of significant importance to provide sufficient incentives for attracting smartphone users to contribute sensing data in a mobile crowd sensing system, and a number of auction-based incentive mechanisms [7–9] have been proposed. As illustrated in Fig. 1, a typical auction-based mechanism has the following basic steps. Firstly, the platform announces the set of sensing tasks to all smartphone users. Secondly, each user submits a bid containing the claimed price of each task. Lastly, based on the collected bids, the platform allocates the sensing tasks to the users. Each smartphone user then performs the allocated tasks, reports the sensing data back to the platform and gets the reward accordingly.

FIGURE 1. The basic steps of an auction-based incentive mechanism for mobile crowd sensing.

However, an auction-based incentive mechanism may unexpectedly leak the location privacy of smartphone users, which may seriously reduce users' willingness to participate in mobile crowd sensing. The main reason is as follows. For location-aware tasks, the distance between a task and a user is a deciding factor in the cost incurred by the user. As a consequence, the claimed prices may be exploited by the platform or an adversary to infer the location of a smartphone user. The intuition is that the bidding price for a given task generally becomes larger as the distance between the user and the task increases. Then, knowing the locations of the tasks and the bidding prices of a user, an attacker may determine the most likely location of the user. A number of location privacy-preserving methods have been proposed.
Existing works can be divided into three classes: (i) randomization [13–15], in which a smartphone user sends to the platform the true location as well as several fake locations; (ii) spatial generalization [16, 17], in which a smartphone user sends a geographical region that covers at least k−1 other users; and (iii) the time-fuzzy method [18]. In addition, several schemes [19–21] have been proposed for protecting sensing data. Unfortunately, existing methods for location preservation and data protection cannot be directly applied in auction-based mechanisms for mobile crowd sensing. Firstly, in an auction-based mechanism, even if the location of a user is not enclosed in the bid submitted to the platform, the location information may still be inferred from the prices of the tasks requested by the user. Secondly, existing methods for data protection ignore the location privacy leakage in the bids of the users. To defend against such an attack, we propose a location privacy-preserving method for auction mechanisms. This method encrypts the prices in the bid of a user so that the adversary cannot access them, and hence the location privacy of the user can be protected. Meanwhile, the auction can still proceed properly, i.e. the platform selects the user offering the lowest price for each sensing task, or selects a set of users under a budget constraint. To this end, our method employs the prefix membership verification scheme [22, 23]. The platform determines the winning user for a given sensing task using only the encrypted prices from all users. Through simulations, we demonstrate that the failure rate of the attack is dramatically increased with our protection method compared with no protection, while the auction mechanisms still perform well. The rest of the paper is organized as follows. Section 2 discusses related work.
Section 3 defines the system model of a crowd sensing system and makes several practical assumptions. In Section 4, we describe an attack model and show its effectiveness in compromising the location privacy of a smartphone user. Section 5 presents the design of our location privacy-preserving method. In Section 6, we present simulation results, and we conclude the paper in Section 7.

2. RELATED WORK

Incentive mechanisms [24] are important for attracting users and guaranteeing the quality of sensing data. According to different design goals, existing works can be divided into two types: user-centric [7–9] and platform-centric [25, 7, 26]. The reverse auction-based dynamic price (RADP) mechanism [8] is a typical user-centric incentive mechanism in mobile crowd sensing. The participants first submit their claimed prices for performing a sensing task, and then the platform chooses the user with the lowest price as the winner and recruits this user to gather the sensing data. An improvement of RADP called 'MSensing' is proposed in [7]. Incentive mechanisms with a budget constraint are proposed in [12, 27], where the platform needs to select a set of users for each task and the total payment cannot exceed the budget. In this paper, we consider the privacy leakage problem in both the RADP-based mechanism and the mechanism with a budget constraint. Several privacy-preserving schemes have been proposed for mobile crowd sensing. Most existing works introduce a trusted third party (TTP) as an agency [19–21]. In [19], participants use a symmetric key from the TTP. After sending sensing data encrypted with the symmetric key, they need to connect to the TTP again for identification and then get the reward. Li et al. [21] proposed a TTP-free approach. However, existing works focus on protecting sensing data or on cutting the relationship between users and data, ignoring the privacy leakage during the auction period.
The prefix membership verification technique [28] was first introduced in 2007 and then formalized in 2008 by Liu et al. [22]; it has been applied to privacy protection in dynamic spectrum auctions in cognitive radio networks [23]. In this paper, we leverage this idea to hide the bidding prices of each user.

3. SYSTEM MODEL

In this section, we present the system model considered in our work, which includes the model of crowd sensing systems and the threat model. We consider a typical crowd sensing system, which consists of three parts: an untrusted platform, smartphone users and a TTP. In particular, we focus on location-aware sensing tasks, and therefore we consider attackers whose goal is to obtain the real-time locations of smartphone users.

3.1. Crowd sensing system

The platform accepts commissions from different organizers and distributes K sensing tasks to all smartphone users. These K tasks are located at different places in the whole area. The system has S smartphone users. For convenience, we consider a rectangular area, which is divided into M×N fine-grained square grids. We design a location privacy-preserving method that can be employed both in the RADP-based mechanism and in the mechanism with a budget constraint. In the RADP-based mechanism, the platform always chooses the user with the lowest price as the winner for each task. In the mechanism with a budget constraint, each task has a budget C_j; the platform needs to select a set of users whose total payment is not larger than C_j. The interactions between the platform and the users proceed in the following four steps: (i) Announcing tasks: the platform announces the set of K location-aware tasks to all users. (ii) Submitting bids: each smartphone user submits an identification and the bid B_i = {b_1^i, …, b_K^i} to the platform, where b_j^i, ∀i∈{1,2,…,S}, ∀j∈{1,2,…,K}, is the bidding price claimed by user i for task j. (iii) Allocating tasks: the platform allocates the tasks to the winning users.
(iv) Rewarding users: after the winners upload the sensing data, the platform distributes the rewards to the winning users.

In an auction-based incentive mechanism, users are asked to submit a bidding price to the platform for each task they want to participate in. Considering truthful users, they always submit bidding prices according to their true cost of performing a task. In our work, we formulate the cost of user i performing a location-aware task j, which is also the bidding price submitted to the platform, as

b_j^i = f_i(d_j^i) + h_i(·) + ε.   (1)

The formulation comprises three terms representing different kinds of cost. The first term f_i(d_j^i) represents the cost of moving to the location of task j, which is a function of the distance d_j^i between user i and task j; f_i(·) can be any monotonically increasing function (e.g. a linear or quadratic function with non-negative coefficients). The second term h_i(·) denotes the cost of user i performing task j, which includes the energy consumption and so on. The last term ε characterizes random noise following a Gaussian distribution. Note that b_j^i can be set to b_max if user i has no intention of performing task j, where b_max is the maximum price that the platform can afford for a task. The TTP is responsible for distributing secret keys in the phase of announcing tasks. It also helps the platform to decrypt the bidding prices in the phase of allocating tasks. The adversary can be an external attacker or the platform itself. The platform is curious-but-honest, i.e. it honestly executes the auction-based mechanism but is curious about users' information. All notations are listed in Table 1.

Table 1. Notations and descriptions.
K            Number of sensing tasks
S            Number of smartphone users
M×N          Number of square grids
C_j          Budget of task j, where j∈{1,2,…,K}
b_j^i        Bidding price for task j submitted by user i
B_i          Bid of user i, where B_i={b_1^i,b_2^i,…,b_K^i}
d_j^i        Distance between user i and task j
p_j(m,n)     Normalized distance from grid (m,n) to task j
P_mn         Distance-to-tasks feature vector of grid (m,n), where P_mn=<p_1(m,n),p_2(m,n),…,p_K(m,n)>
q_j^i        Relative distance from user i to task j
Q_i          Distance-to-tasks feature vector of user i, where Q_i=<q_1^i,q_2^i,…,q_K^i>
s_i(m,n)     Similarity between user i and grid (m,n)
F(x)         Prefix family of number x
P([d1,d2])   Prefix family of range [d1,d2]
N(X)         Prefix numericalization function of prefix family X

3.2. Threat model and assumptions

We consider attackers whose goal is to obtain the locations of smartphone users by analyzing the bids submitted during the auction period. First, attackers know the locations of all tasks, which are public. Second, attackers can intercept all data packets between each smartphone user and the platform, which means they can obtain the bidding prices of all tasks submitted by each user. Third, the platform itself can also be an attacker.
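As an illustration, the cost model of Equation (1) can be sketched in Python as follows. This is our own minimal example; the linear movement cost f(d)=r+u·d, the parameter values and all function names are assumptions for illustration, not prescribed by the paper.

```python
import math
import random

def bidding_price(user_loc, task_loc, r=10.0, u=10.0, h=0.0, sigma=0.0):
    """Equation (1): b = f(d) + h(.) + eps, with an assumed linear movement
    cost f(d) = r + u*d and Gaussian noise eps ~ N(0, sigma^2)."""
    d = math.hypot(user_loc[0] - task_loc[0], user_loc[1] - task_loc[1])
    eps = random.gauss(0.0, sigma) if sigma > 0 else 0.0
    return r + u * d + h + eps

# A user at grid (0, 0) bidding on three tasks (no task cost, no noise):
# farther tasks yield strictly higher bidding prices.
tasks = [(0, 0), (1, 1), (1, 2)]
bid = [bidding_price((0, 0), t) for t in tasks]
```

With h(·) and ε suppressed, the bid vector directly encodes the distances to the tasks, which is exactly the leakage exploited in Section 4.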
According to Equation (1), a higher bidding price for a task submitted by a user implies a larger distance between the task and the user. Therefore, the location privacy of a user may be compromised given the locations of all tasks and the relative distances to these tasks derived from the bidding prices. Here, we assume that attackers have sufficient computing resources. In Section 4, we give an example showing how an attacker obtains the location of a user. Such attacks become stronger as the number of tasks grows or as the dependence of costs on distances becomes stronger; however, their effectiveness may weaken when the noise added in the cost model in Equation (1) is larger. We also assume that attackers have the following limitations. First, attackers cannot access the information in encrypted messages. Second, attackers do not intentionally jam communication channels.

4. ATTACK ON LOCATION PRIVACY

This section presents an attack on the location privacy of users in a crowd sensing system during the auction period. Note that this attack is inspired by the attack on location privacy in cognitive radio networks [23].

4.1. Basic idea

We assume that attackers already know the locations of the K tasks. On the one hand, for a given grid, the distances from the grid to all the tasks are known in advance by the adversary. On the other hand, according to Equation (1), a higher price indicates a larger distance. Therefore, given the bid of a user, the claimed prices of the tasks carry information about the relative distances from the current location of the user to the tasks. By comparing the similarity between the distances to all tasks of the user and of every grid, the attacker can estimate the location of the user by selecting the grid with the highest similarity.

4.2.
Distance-to-tasks feature of a grid

The distance-to-tasks feature of a given grid (m,n), denoted as P_mn = <p_1(m,n), p_2(m,n), …, p_K(m,n)>, is defined as the vector containing the normalized distances from the grid to all K tasks. The normalized distance p_j(m,n) from the grid to a specific task is defined as

p_j(m,n) = 2 / (1 + e^{θ·d_j(m,n)}),   (2)

where d_j(m,n) is the Euclidean distance between task j and grid (m,n), and θ is an adjustable parameter. Equation (2) is a monotonically decreasing function of distance. Clearly, p_j(m,n)∈(0,1], and p_j(m,n)=1 if the j-th task is located in grid (m,n). A larger p_j(m,n) means a shorter distance between grid (m,n) and task j.

4.3. Relative distance-to-tasks feature of a bid

We define the relative distance-to-tasks feature of a given user i, denoted as Q_i = <q_1^i, q_2^i, …, q_K^i>, as the vector containing the relative distances from the user to all K tasks. The relative distance q_j^i from the user to a specific task is defined as

q_j^i = b_min^i / b_j^i,   (3)

where b_min^i is the minimum price among all the prices in bid B_i = {b_j^i, 1≤j≤K}. Obviously, q_j^i∈(0,1]. According to Equation (3), a larger q_j^i means a lower bidding price, which implies a shorter distance between user i and task j. Thus, Q_i normalizes the distances from the user to all tasks.

4.4. Similarity between user and grid

To obtain the location of a user (i.e. which grid the user is currently in), the attacker finds the grid whose distance-to-tasks feature is most similar to the relative distance-to-tasks feature of the user. To this end, the similarity s_i(m,n) between user i and grid (m,n) can be computed as

s_i(m,n) = (Q_i − P_mn)^T · (Q_i − P_mn)   (4)
         = Σ_{j=1}^{K} (q_j^i − p_j(m,n))².   (5)

The smaller s_i(m,n), the higher the similarity. After calculating the similarities between the user and all M×N grids, the adversary chooses the grid with the highest similarity, i.e. the minimum s_i(m,n), as the location of the user. Simulation results in Section 6 show that this attack can locate a user with high accuracy.
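The feature computations and similarity search above can be sketched in Python as follows. This is our own minimal illustration; the function names, the use of grid coordinates as locations, and the linear bid cost used in the test are assumptions, not part of the paper's specification.

```python
import math

def distance(a, b):
    """Euclidean distance between two grid coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def grid_feature(grid, tasks, theta=1.0):
    """Distance-to-tasks feature P_mn of a grid, Equation (2)."""
    return [2.0 / (1.0 + math.exp(theta * distance(grid, t))) for t in tasks]

def bid_feature(bid):
    """Relative distance-to-tasks feature Q_i of a bid, Equation (3)."""
    b_min = min(bid)
    return [b_min / b for b in bid]

def locate_user(bid, tasks, M, N, theta=1.0):
    """Return the grid minimizing s_i(m,n), Equations (4)-(5)."""
    q = bid_feature(bid)
    best_grid, best_s = None, float("inf")
    for m in range(M):
        for n in range(N):
            p = grid_feature((m, n), tasks, theta)
            s = sum((qj - pj) ** 2 for qj, pj in zip(q, p))
            if s < best_s:
                best_grid, best_s = (m, n), s
    return best_grid
```

For instance, on a 3×3 area with tasks at grids (0,0), (1,1) and (1,2) and a user at (0,0) whose bids follow a linear cost 10+10·d, `locate_user` recovers the user's grid (0,0).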
The complete attack is shown in Algorithm 1.

Algorithm 1 Attack algorithm on location privacy
Input: the locations of all tasks and the bid of user i
Output: the grid (x,y) where user i is located
1: b_min^i = min{b_j^i, ∀j∈{1,2,…,K}}
2: for j=1,2,…,K do
3:   q_j^i = b_min^i / b_j^i
4: end for
5: for each grid (m,n) do
6:   for j=1,2,…,K do
7:     p_j(m,n) = 2 / (1 + e^{θ·d_j(m,n)})
8:   end for
9:   s_i(m,n) = Σ_{j=1}^{K} (q_j^i − p_j(m,n))²
10: end for
11: s_i(x,y) = min{s_i(m,n), ∀1≤m≤M, 1≤n≤N}

4.5. An example of the attack model

We take a region divided into 3×3 grids as an example. There is one user located in grid (1,1), and three tasks located in grids (1,1), (2,2) and (2,3), respectively, as shown in Fig. 2a. For ease of presentation, we set f(d)=r+u·d with r=10 and u=10, and ignore h_i(·) and ε. The attacker first calculates the distance-to-tasks feature of each grid by Equation (2) with θ=1, as shown in Fig. 2b.
We assume the bid of the user is B={10,24,32}; thus the relative distance-to-tasks feature is Q=[1,0.42,0.31]. The attacker then calculates the similarities between Q and the feature vector of each grid according to Equation (4). The results are shown in Fig. 2c. As a result, the attacker chooses grid (1,1) as the location of the user, since it has the minimum s value among all grids. Thus, the user's location privacy is leaked.

FIGURE 2. An example of the attack model. The attacker calculates the distance-to-tasks feature Q of the user and P_mn of each grid based on the locations of the three tasks and the bid B={10,24,32}. According to the similarities, the attacker locates the user in grid (1,1). (a) The locations of the three tasks and the user, (b) the distance-to-tasks feature of each grid and (c) the similarity between the distance-to-tasks feature of each grid and that of the user.

5. LOCATION PRIVACY-PRESERVING METHOD

In this section, we first give the basic idea. Then, we describe the preliminaries of prefix membership verification. After that, we present the detailed design for both the RADP-based incentive mechanism and the mechanism with a budget constraint. Finally, we offer a theoretical analysis.

5.1. Basic idea

As illustrated in Section 4, attackers can infer the location of a user from the bidding prices in B.
Some previous studies [22, 23] have proposed protection methods that encrypt the K prices in B with K different keys to prevent such an attack. However, encrypted prices are not comparable by the platform, and therefore the platform cannot select proper users for each task. In this work, we first introduce the prefix membership verification method to preprocess the prices. Under this method, the platform can compare prices encrypted with the same key. Thus, the platform can rank the prices for a given task and allocate the task in a cost-effective way.

5.2. Preliminaries

The main idea of prefix membership verification [22, 23, 28] is to convert the test of whether a number lies in a range into the test of whether the intersection of two sets is empty. There are several key concepts: the s-prefix, the prefix family and the prefix numericalization function. An s-prefix {0,1}^s{*}^{w−s} consists of s bits (each 0 or 1) followed by (w−s) wildcards *, where * represents either 0 or 1. For example, 1*** is a 1-prefix and 11** is a 2-prefix. An s-prefix represents the range from {0,1}^s{0}^{w−s} to {0,1}^s{1}^{w−s}; for example, p=1*** represents the range [1000,1111]. A number matches an s-prefix if and only if its leading s binary bits are the same as those of the s-prefix. Consider a w-bit binary number x=b_1b_2⋯b_w. We define the prefix family of x as F(x)={b_1b_2⋯b_w, b_1b_2⋯b_{w−1}*, …, b_1*⋯*, *⋯*}. The prefix family has (w+1) prefixes, and its i-th element is b_1b_2⋯b_{w−i+1}*⋯*. For example, the prefix family of 9, i.e. 1001, is F(9)={1001, 100*, 10**, 1***, ****}. Thus, given a number x and a prefix p, x is in the range of p if and only if p∈F(x). The prefix family of a range [d_1,d_2] is denoted as P([d_1,d_2]); it consists of a set of prefixes, each covering a subrange of [d_1,d_2], such that together they cover the whole range. For example, P([7,15])={0111, 1***}.
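These two constructions can be sketched in Python as follows. This is our own illustrative code; the recursive range cover uses the standard binary-trie splitting, which reproduces the examples above, and all function names are ours.

```python
def prefix_family(x, w):
    """F(x): all (w+1) prefixes of the w-bit number x (Section 5.2)."""
    bits = format(x, f"0{w}b")
    # i wildcards for i = 0..w: b1..bw, b1..b_{w-1}*, ..., ****
    return [bits[:w - i] + "*" * i for i in range(w + 1)]

def range_prefixes(d1, d2, w):
    """P([d1,d2]): a set of prefixes covering exactly the range [d1,d2]."""
    def cover(prefix, lo, hi):
        if d1 <= lo and hi <= d2:          # subtree fully inside the range
            return [prefix + "*" * (w - len(prefix))]
        if hi < d1 or d2 < lo:             # subtree disjoint from the range
            return []
        mid = (lo + hi) // 2               # recurse into the two subtrees
        return cover(prefix + "0", lo, mid) + cover(prefix + "1", mid + 1, hi)
    return cover("", 0, 2 ** w - 1)
```

Here `prefix_family(9, 4)` yields ['1001', '100*', '10**', '1***', '****'] and `range_prefixes(7, 15, 4)` yields ['0111', '1***'], matching F(9) and P([7,15]) above.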
Then, x∈[d_1,d_2] if and only if F(x)∩P([d_1,d_2])≠∅. For example, 9∈[7,15] and F(9)∩P([7,15])={1***}. This makes it possible to decide whether a number is in a range by comparing two sets of prefixes. A prefix numericalization function [22] is denoted as N(X), where X is a set of prefixes. The function turns each prefix into a binary number, yielding a prefix numericalization family (PNF), which makes the test F(x)∩P([d_1,d_2])≠∅ executable. Consider a w-bit prefix p=b_1b_2⋯b_s*⋯*. The function converts p to a (w+1)-bit number by inserting a 1 after b_s and replacing all the *s with 0s. We conclude:

x∈[d_1,d_2] if and only if N(F(x))∩N(P([d_1,d_2]))≠∅.   (6)

For instance, N(F(9))={10011, 10010, 10100, 11000, 10000} and N(P([7,15]))={01111, 11000}; 9∈[7,15] since N(F(9))∩N(P([7,15]))={11000}. To apply prefix membership verification, we convert a single price into two parts: the bidding price b_j^i itself and the range [b_j^i, b_max]. We build the price prefix numericalization family (PPNF) of b_j^i, N(F(b_j^i)), and the range prefix numericalization family (RPNF) of [b_j^i, b_max], N(P([b_j^i, b_max])). According to Equation (6), if N(F(b_j^i))∩N(P([b_j^k, b_max]))≠∅, then b_j^i∈[b_j^k, b_max], i.e. b_j^i ≥ b_j^k. After gathering all users' PPNFs and RPNFs for a given task, the platform can rank the prices. Table 2 shows an example of choosing the minimum price 2 among the four prices {5,15,2,9} with b_max=15.
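The numericalization step and the hashed intersection test of Equation (6) can be sketched as follows. This is our own illustrative code: Python's `hmac` keyed hashing stands in for the HMAC scheme used later in Section 5.3, the key is hypothetical, and P([7,15]) is hardcoded from the running example above.

```python
import hashlib
import hmac

def numericalize(prefix):
    """N(.): insert a 1 after the fixed bits and replace each * with 0."""
    s = prefix.find("*")
    s = len(prefix) if s == -1 else s
    return prefix[:s] + "1" + "0" * (len(prefix) - s)

def prefix_family(x, w):
    """F(x): all (w+1) prefixes of the w-bit number x."""
    bits = format(x, f"0{w}b")
    return [bits[:w - i] + "*" * i for i in range(w + 1)]

def hashed(pnf, key):
    """Hash every PNF element with the same key; equal plaintexts map to
    equal digests, so set intersection is preserved."""
    return {hmac.new(key, n.encode(), hashlib.sha256).hexdigest() for n in pnf}

# F(9) computed, P([7,15]) = {0111, 1***} taken from the text
ppnf = {numericalize(p) for p in prefix_family(9, 4)}
rpnf = {numericalize(p) for p in ["0111", "1***"]}
assert ppnf & rpnf == {"11000"}          # 9 is in [7,15]

# After keyed hashing, the intersection test still works without
# revealing the underlying prefixes
key = b"task-j-secret"                    # hypothetical per-task key
assert len(hashed(ppnf, key) & hashed(rpnf, key)) == 1
```

This equality-preserving property of keyed hashing is exactly what lets the platform compare encrypted prices in the designs below.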
Table 2. PPNF of the bids and RPNF of the bid ranges (the boldface column in the original marks the winner's price).

bid         5       15       2 (winner)  9
PPNF        01011   11111    00101       10011
            01010   11110    00110       10010
            01100   11100    00100       10100
            01000   11000    01000       11000
            10000   10000    10000       10000
bid range   [5,15]  [15,15]  [2,15]      [9,15]
RPNF        01011   11111    00110       10011
            01111            01100       10110
            11000            11000       11000

5.3. Design for the RADP-based mechanism

The main steps of our design for the RADP-based auction mechanism are shown in Fig. 3. Each user encrypts the K prices in B with K different keys using a keyed-hash message authentication code (HMAC). Since the same content is always mapped to the same ciphertext, the platform is capable of ranking the prices for a given task by using the aforementioned prefix membership verification.

Announcing tasks: the platform announces the set of K tasks to all users.

Distributing keys: the TTP generates K pairs of public and private keys {(gt_1,gc_1),(gt_2,gc_2),…,(gt_K,gc_K)} and distributes the public keys gt_j to each user. The private keys gc_j are kept by the TTP.
The TTP also distributes the mapping parameter c to each user, which is explained below.

Submitting bids: each user i first maps his/her bidding price b_j^i to a new number (b_j^i)′ in the range (c·(b_j^i−1), c·b_j^i]. Then, the user calculates the PPNF and the RPNF of (b_j^i)′. After that, the user encrypts every element of the two sets with the corresponding public key gt_j. Lastly, the encrypted sets, denoted as H_gt(N(F((b_j^i)′))) and H_gt(N(P([(b_j^i)′,(b_max)′]))), are submitted to the platform.

Selecting winners: after gathering all the encrypted bids from the users, the platform judges that b_j^i ≥ b_j^k for a given task j if the following holds:

H_gt(N(F((b_j^i)′))) ∩ H_gt(N(P([(b_j^k)′,(b_max)′]))) ≠ ∅.   (7)

The platform then selects the winner, denoted by t∈{1,2,…,S}. Clearly, the intersection of the winner's PPNF with the RPNF of any other user is always empty. The platform then sends the encrypted prices of the winners to the TTP.

Sending deciphered bids: the TTP decrypts each encrypted price using the corresponding private key gc_j. After decryption, the plaintext is a number in the range (c·(b_j^t−1), c·b_j^t]; the TTP divides it by c and sends b_j^t = ⌈(b_j^t)′/c⌉ back to the platform. If b_j^t = b_max, the TTP informs the platform that the bid is invalid.

Rewarding users: the users upload the sensing data, and the platform pays each winner accordingly.

FIGURE 3. The main steps of our design for the RADP-based auction mechanism.

Note that the number of distinct prices can be very limited. Even after encryption, the platform may still be able to guess the corresponding prices by ranking the encrypted prices for the same task, especially when there are a large number of users. For better protection, the parameter c maps each price into a larger range.

5.4.
Design for the mechanism with budget constraint

As shown in Section 5.2, the platform cannot learn the price of each user but is capable of comparing the prices. This is enough for the RADP-based auction mechanism but insufficient for the auction mechanism with a budget constraint: since each task has a budget C_j, the platform needs to make sure that the total payment given to the selected users is not more than C_j. Our design is inspired by the budget feasible mechanism in [29]. In this mechanism, the platform first sorts all S prices for a certain task j, denoted as b_j^1 ≤ b_j^2 ≤ ⋯ ≤ b_j^S. It then finds the largest k that satisfies b_j^k ≤ C_j/k. The top k users are selected as winners, and the payment for each winner is min{C_j/k, b_j^{k+1}}. Hence, after sorting the encrypted prices using Equation (7), the platform needs to return the rankings to the corresponding users. The user with rank k then uploads the encrypted k·b_j^k to the platform, using the same process as in submitting bids. The first-ranked user also uploads the encrypted C_j to the platform. Finally, the platform finds the largest k satisfying k·b_j^k ≤ C_j. The main steps are shown in Fig. 4.

Announcing tasks: the platform announces the set of K tasks and the budget constraints {C_1,C_2,…,C_K} to all users.

Distributing keys: the TTP generates K pairs of public and private keys {(gt_1,gc_1),(gt_2,gc_2),…,(gt_K,gc_K)} and distributes the public keys gt_j to each user. The private keys gc_j are kept by the TTP. The TTP also distributes the mapping parameter c to each user.

Submitting bids: each user i first maps the bid b_j^i to a number (b_j^i)′ in the range (c·(b_j^i−1), c·b_j^i]. Then, the user calculates the PPNF and the RPNF of (b_j^i)′. After that, the user encrypts every element of the two sets with the corresponding public key gt_j. Lastly, the encrypted sets H_gt(N(F((b_j^i)′))) and H_gt(N(P([(b_j^i)′,(b_max)′]))) are submitted to the platform.
Returning rankings: After gathering all the encrypted bids from the users and sorting the encrypted prices by Equation (7), the platform gets the ranking {1,2,…,S}. Note that each rank corresponds to a user. The platform then returns the corresponding rank to each user. Uploading encrypted k·bjk: After receiving its rank k, each user applies the same process as in the third step to k·bjk. Since the platform needs to compare Cj with k·bjk, the first-ranked user also applies the same process to the budget Cj. Then, each user uploads the encrypted information to the platform. Sending deciphered bids: The platform compares Cj and k·bjk, ∀k ∈ {1,2,…,S}, by using Equation (7). It finds the last number k that satisfies k·bjk ≤ Cj. The platform then sends the encrypted prices of the k-th user and the (k+1)-th user to the TTP. The TTP decrypts the encrypted prices using the corresponding private key gcj. Note that, after decryption, each plaintext is a number in the range (c·(bjk−1), c·bjk]. The TTP divides it by c. Finally, the TTP returns the prices of the two ciphertexts back to the platform. Selecting winners: The platform gets the prices of the k-th user and the (k+1)-th user. If the k-th bid is bmax, then the allocation fails, since at least one of the selected users is not willing to do the task. Otherwise, the platform allocates the task to the top k users and the payment of each winner is min{Cj/k, bj(k+1)}. Rewarding users: The users upload the sensing data, and the platform pays each user accordingly. Figure 4. The main steps of our design for the auction mechanism with budget constraint. 5.5. Theoretical analysis Theorem 1 Given the number K of tasks, the maximum bidding price bmax, the mapping parameter c and the ratio r of the encrypted output, the transmission cost of each user is K·r·(⌈log(c·bmax)⌉+1)·(3⌈log(c·bmax)⌉−1).
Proof Since bmax and c are fixed, the maximum number to be sent is c·bmax. Its binary representation has l = ⌈log(c·bmax)⌉ bits, and l is the length of all the bidding prices. In the preprocessing step, each bidding price generates two sets of binary numbers. Each element in the PPNF and the RPNF occupies l+1 bits. There are (l+1) elements in the PPNF and at most (2l−2) elements [22] in the RPNF. In the phase of sending bids, users encrypt all K prices with HMAC keys. The total transmission is hence K·r·(l+1)·(3l−1), which is K·r·(⌈log(c·bmax)⌉+1)·(3⌈log(c·bmax)⌉−1).□ The transmission cost is linear with respect to the number of tasks, and the computation cost of HMAC encryption is quite small. Therefore, the method is feasible for large-scale mobile crowd sensing. The platform cannot know the prices of a user without the keys and hence cannot compare the elements in vector Bi. However, the platform may still be able to find the correspondence between bidding prices and the encrypted numbers, especially when the number S of users is large. Theorem 2 Given the number S of smartphone users, for a given task, it is difficult for the curious platform to find the correspondence between the encrypted numbers and the bidding prices when c·bmax ≫ S. Proof There are at most c·bmax different ciphertexts for each task with a given c. If c·bmax ≤ S, consider the special case where the platform collects S ciphertexts with T = c·bmax different values for a given task. Since the platform knows bmax in advance, it infers c by calculating T/bmax. Then, the platform gets the corresponding prices by ranking the T ciphertexts and dividing them by c. However, if c·bmax ≫ S, the number of possible correspondences is immensely large. Assuming that the price for a given task follows the uniform distribution, the expected number of distinct ciphertexts is E(T) = c·bmax·(1 − ((c·bmax−1)/(c·bmax))^S).
Hence, with a large c, it is difficult for the platform to find the accurate correspondence.□ Theorem 3 In the auction mechanism with budget constraint, the location privacy-preserving method can hire at least half as many users as the mechanism without protection. Proof In the original mechanism with budget constraint, after sorting the users' bidding prices as {b1 ≤ b2 ≤ ⋯ ≤ bS}, the platform selects the maximum number l with ∑_{i=1}^{l} bi ≤ C and chooses the top l users as winners. We have (l/2)·b(l/2) ≤ ∑_{i=l/2}^{l} bi ≤ C. Hence, (l/2)·b(l/2) ≤ C. In the location privacy-preserving mechanism, we need to find the critical k which satisfies k·bk ≤ C. Hence l/2 ≤ k, and the protection mechanism can hire at least half of the winners of the original mechanism.□ 6. EVALUATION 6.1. Simulation settings To evaluate the performance of the attack model and our proposed privacy-protection method, we run simulations on a region of 50×50 grids. The default values of the parameters are set as follows. fi(d) is a monotonically increasing function. We use four different strategies for fi(d): a linear function (i.e. fi(d)=ai·d), a quadratic function (i.e. fi(d)=ai·d²), a logarithmic function (i.e. fi(d)=ai·log(d)) and a square root function (i.e. fi(d)=ai·√d), where ai for each user i ∈ {1,2,…,S} is randomly selected from the range [0,50]. hi(·) is set as a constant. ε obeys a Gaussian distribution with mean 0 and variance 10. bmax=100. The number of users and the number of tasks are both set to 400. The results shown below are the averages of 10 runs of simulations. If the inferred location is not the actual location of the user, the attack is considered a failure. The failure rate is defined as the ratio of failures out of all attacks. 6.2. Evaluation of the attack We first evaluate the effectiveness of the attack with the performance metric of failure rate. Since the pricing strategy of a user may intuitively have a large impact on the performance of the attack, we explore different pricing strategies.
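As a concrete illustration, the bid model of Equation (1) with these four pricing strategies can be sketched as follows. This is a minimal sketch: the function and variable names are our own, and the parameter defaults mirror the simulation settings above (noise variance 10, bmax = 100, a constant hi(·)).

```python
import math
import random

# Illustrative sketch of Equation (1): b = f_i(d) + h_i + eps.
# The four pricing strategies follow the simulation settings;
# all identifiers here are our own, not from the paper.
STRATEGIES = {
    "linear":    lambda a, d: a * d,
    "quadratic": lambda a, d: a * d ** 2,
    "log":       lambda a, d: a * math.log(d) if d > 0 else 0.0,
    "sqrt":      lambda a, d: a * math.sqrt(d),
}

def bidding_price(strategy, a, distance, h=1.0, noise_std=math.sqrt(10), b_max=100):
    """Movement cost + constant task cost + Gaussian noise, clamped to [0, b_max]."""
    eps = random.gauss(0, noise_std)
    price = STRATEGIES[strategy](a, distance) + h + eps
    return min(max(price, 0.0), float(b_max))

# Example: a user with linear pricing (coefficient drawn from [0, 50]) bids on a task.
a_i = random.uniform(0, 50)
print(bidding_price("linear", a_i, distance=12.5))
```

In the simulations, each user draws one strategy and one coefficient ai and then bids on every task through such a function, with the distance taken between the user's grid and the task's location.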
More specifically, we consider four different basic strategies of fi(d) for all users, i.e. the linear, quadratic, logarithmic and square root functions. In addition, we also consider a more complicated situation, in which users can adopt different strategies. In Fig. 5, we report the failure rate of the attack as the number of tasks increases from 50 to 500. We can see that as the number of sensing tasks increases, the failure rate drops very quickly for all pricing strategies. This shows that the attack becomes more effective as there are more sensing tasks. This is reasonable: as the number of sensing tasks becomes larger, the feature vector becomes longer, and it is less likely that two grids produce the same degree of similarity. We also find that different pricing strategies do not make much difference. When the number of tasks is 500, the square root pricing strategy results in a slightly larger failure rate of the attack. Figure 5. Failure rate of attack with different types of f(d) vs. number of tasks. To analyze the impact of random noise on the effectiveness of the attack model, we also run simulations varying the variance of ε from 5 to 50, as shown in Fig. 6. We can see that when the random noise becomes larger, the failure rate increases quickly for all pricing strategies. This is because the attack becomes harder as the correlation between distance and bidding price weakens due to the noise. Figure 6. Failure rate of attack with different types of f(d) vs. random noise. 6.3. Security evaluation of the location-protected mechanisms We next evaluate the effectiveness of the proposed method in preserving location privacy.
To this end, we study the failure rate of the attack with and without our proposed method. When our method is applied, the attacker blindly guesses the location of a user to be the location of a task assigned to that user. In addition, we evaluate the impacts of the number of users and the number of sensing tasks on the performance of the method. In Fig. 7, we report the failure rate with and without our protection method as the number of users increases from 100 to 1000. The number of sensing tasks is 400. We can observe that the failure rate is dramatically increased when the proposed method is applied. For example, when there are 1000 users, the failure rate with protection is 7.5 times higher than that without protection. As there are more users, the effectiveness of the proposed method becomes even higher. This is because the attacker can only use the locations of the 400 sensing tasks to infer the locations of the winning users, and as the number of users increases, this blind strategy becomes less effective. We also find that without protection, the failure rate is not sensitive to the changing number of users, staying stable at around 0.1. Figure 7. Comparison of failure rate with and without protection for different numbers of users. In Fig. 8, we report the failure rate with and without our protection method as the number of tasks increases from 300 to 800. The number of users is 400. We can see that for different numbers of sensing tasks, the protection of the proposed method is always effective. However, as the number of sensing tasks increases, the failure rate slightly decreases. This is because when there are more sensing tasks, the attacker has more candidate locations with which to infer the locations of the winning users. Figure 8.
Comparison of failure rate with and without protection for different numbers of tasks. 6.4. Evaluation of the auction performance We now evaluate the performance of the auction mechanism with our location privacy-preserving method. The original number of winners is denoted as N1 and the number of winners under our location privacy-preserving method is denoted as N2. We measure the impact of the location-protected method by the ratio N2/N1. The higher the ratio, the smaller the impact on the auction mechanism. In the simulations, we set the number of tasks to 100 and each result is the average over the 100 tasks. The evaluation is shown in Figs 9 and 10. In Fig. 9, we report the ratio for both the RADP-based (greedy) mechanism and the mechanism with budget constraint. The budget for each task obeys a Gaussian distribution with mean 350 and variance 100. The number of users increases from 100 to 550. We can observe that the ratio of the greedy mechanism is always 1, which shows that the location privacy-preserving method has no impact on its auction performance. The average ratio of the mechanism with a budget constraint is about 0.75. This is higher than the theoretical bound, which means the protected mechanism can hire most of the winners of the original one. In Fig. 10, we report the ratio of the mechanism with different budget constraints.
We can find that the ratio is around 0.75. As the number of users increases, the ratio increases slightly. When the budget increases and the number of users remains unchanged, N2/N1 decreases slightly. This is because, with more budget, the platform is more likely to choose users with larger prices, close to bmax. 7. CONCLUSION In this paper, we study the problem of preserving location privacy in mobile crowd sensing. We find that an adversary may easily determine the location of a smartphone user from only the prices of the tasks in the user's bid, even when the location information is not contained in the bid. This is because the pricing strategy of the user is strongly related to the distance between the user and the target task, which is quite common in the real world. We propose a location privacy-preserving method for both the RADP-based auction mechanism and the mechanism with budget constraint. This method relies on a TTP, which is responsible only for distributing keys and decryption. In the mechanism, a user submits a bid containing encrypted prices that the platform or the adversary cannot access. By doing so, the mechanism preserves the location privacy of the user but does not prevent the platform from selecting the winning users who offer the lowest prices for a given sensing task. Our simulations show that our mechanism is effective. FUNDING This work is supported in part by 973 Program (No. 2014CB340303); National Natural Science Foundation of China (No. 61772341, 61472254 and 61472241); the Science and Technology Commission of Shanghai (Grant No. 14511107500 and 15DZ1100305) and Singapore NRF (CREATE E2S2). This work is also supported by the Program for Changjiang Young Scholars in University of China; the Program for Shanghai Top Young Talents and the National Top Young Talents Support Program. REFERENCES 1 Mohan, P., Padmanabhan, V.N. and Ramjee, R.
(2008) Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones. Proc. 6th ACM Conf. Embedded Networked Sensor Systems (SenSys), Raleigh, NC, USA, November 5–7, pp. 323–336. ACM. 2 Thiagarajan, A., Ravindranath, L., LaCurts, K., Madden, S., Balakrishnan, H., Toledo, S. and Eriksson, J. (2009) VTrack: Accurate, Energy-aware Road Traffic Delay Estimation using Mobile Phones. Proc. 7th ACM Conf. Embedded Networked Sensor Systems (SenSys), Berkeley, CA, USA, November 4–6, pp. 85–98. ACM. 3 Mun, M., Reddy, S., Shilton, K., Yau, N., Burke, J., Estrin, D. and Boda, P. (2009) PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research. Proc. 7th Int. Conf. Mobile Systems, Applications, and Services (MobiSys), Kraków, Poland, June 22–25, pp. 55–68. ACM. 4 Rana, R.K., Chou, C.T., Kanhere, S.S., Bulusu, N. and Hu, W. (2010) Ear-phone: An End-to-end Participatory Urban Noise Mapping System. Proc. 9th ACM/IEEE Int. Conf. Information Processing in Sensor Networks (IPSN), Stockholm, Sweden, April 12–16, pp. 105–116. ACM. 5 Oliver, N. and Flores-Mangas, F. (2007) Healthgear: Automatic sleep apnea detection and monitoring with a mobile phone. JCM, 2, 1–9. 6 Gao, C., Kong, F. and Tan, J. (2009) Healthaware: Tackling Obesity with Health Aware Smart Phone Systems. Proc. IEEE Int. Conf. Robotics and Biomimetics (ROBIO), Guilin, China, December 18–22, pp. 1549–1554. IEEE. 7 Yang, D., Xue, G., Fang, X. and Tang, J. (2012) Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing. Proc. 18th Annual Int. Conf. Mobile Computing and Networking (Mobicom), Istanbul, Turkey, August 22–26, pp. 173–184. ACM. 8 Lee, J.S. and Hoh, B. (2010) Sell Your Experiences: A Market Mechanism based Incentive for Participatory Sensing. Proc. 8th IEEE Int. Conf.
Pervasive Computing and Communications (PerCom), 29 March–2 April, pp. 60–68. IEEE. 9 Xu, H. and Larson, K. (2014) Improving the Efficiency of Crowdsourcing Contests. Proc. 13th Int. Conf. Autonomous Agents and Multiagent Systems (AAMAS), Paris, France, May 5–9, pp. 461–468. 10 Jaimes, L.G., Vergara-Laurens, I. and Labrador, M.A. (2012) A Location-based Incentive Mechanism for Participatory Sensing Systems with Budget Constraints. Proc. 10th IEEE Int. Conf. Pervasive Computing and Communications (PerCom), Lugano, Switzerland, March 19–23, pp. 103–108. IEEE. 11 Feng, Z., Zhu, Y., Zhang, Q., Ni, L.M. and Vasilakos, A.V. (2014) Trac: Truthful Auction for Location-aware Collaborative Sensing in Mobile Crowdsourcing. Proc. IEEE Conf. Computer Communications (INFOCOM), Toronto, Canada, 27 April–2 May, pp. 1231–1239. IEEE. 12 Koutsopoulos, I. (2013) Optimal Incentive-driven Design of Participatory Sensing Systems. Proc. IEEE Conf. Computer Communications (INFOCOM), Turin, Italy, April 14–19, pp. 1402–1410. IEEE. 13 Kido, H., Yanagisawa, Y. and Satoh, T. (2005) Protection of location privacy using dummies for location-based services. Proc. 21st Int. Conf. Data Engineering Workshops (ICDEW), Tokyo, Japan, April 5–8, pp. 1248–1248. IEEE Computer Society. 14 Suzuki, A., Iwata, M., Arase, Y., Hara, T., Xie, X. and Nishio, S. (2010) A User Location Anonymization Method for Location based Services in a Real Environment. Proc. 18th Int. Conf. Advances in Geographic Information Systems (SIGSPATIAL GIS), San Jose, CA, November 2–5, pp. 398–401. ACM. 15 Kato, R., Iwata, M., Hara, T., Suzuki, A., Xie, X., Arase, Y. and Nishio, S. (2012) A Dummy-based Anonymization Method based on User Trajectory with Pauses. Proc. 20th Int. Conf. Advances in Geographic Information Systems (SIGSPATIAL GIS), Redondo Beach, CA, November 6–9, pp. 249–258. ACM. 16 Gedik, B. and Liu, L.
(2008) Protecting location privacy with personalized k-anonymity: Architecture and algorithms. IEEE Trans. Mobile Comput., 7, 1–18. 17 Pan, X., Xu, J. and Meng, X. (2012) Protecting location privacy against location-dependent attacks in mobile services. IEEE Trans. Knowl. Data Eng., 24, 1506–1519. 18 Yigitoglu, E., Damiani, M.L., Abul, O. and Silvestri, C. (2012) Privacy-preserving Sharing of Sensitive Semantic Locations under Road-network Constraints. Proc. 13th Int. Conf. Mobile Data Management (MDM), Bengaluru, India, July 23–26, pp. 186–195. IEEE. 19 Zhang, J., Ma, J., Wang, W. and Liu, Y. (2012) A Novel Privacy Protection Scheme for Participatory Sensing with Incentives. Proc. 2nd Int. Conf. Cloud Computing and Intelligent Systems (CCIS), Hangzhou, China, 30 October–1 November, pp. 1017–1021. IEEE. 20 Christin, D., Rokopf, C., Hollick, M., Martucci, L.A. and Kanhere, S.S. (2013) Incognisense: An anonymity-preserving reputation framework for participatory sensing applications. Pervasive Mobile Comput., 9, 353–371. 21 Li, Q. and Cao, G. (2013) Providing Privacy-aware Incentives for Mobile Sensing. Proc. 11th IEEE Int. Conf. Pervasive Computing and Communications (PerCom), San Diego, CA, USA, March 18–22, pp. 76–84. IEEE. 22 Liu, A.X. and Chen, F. (2008) Collaborative Enforcement of Firewall Policies in Virtual Private Networks. Proc. 27th Annual ACM Symposium on Principles of Distributed Computing (PODC), Toronto, Canada, August 18–21, pp. 95–104. ACM. 23 Liu, S., Zhu, H., Du, R., Chen, C. and Guan, X. (2013) Location Privacy Preserving Dynamic Spectrum Auction in Cognitive Radio Network. Proc. 33rd Int. Conf. Distributed Computing Systems (ICDCS), Philadelphia, USA, July 8–11, pp. 256–265. IEEE. 24 Gao, H., Liu, C.H., Wang, W., Zhao, J., Song, Z., Su, X. and Leung, K.K.
(2015) A survey of incentive mechanisms for participatory sensing. IEEE Commun. Surv. Tutorials, 17, 918–943. 25 Li, Q. and Cao, G. (2014) Providing Efficient Privacy-aware Incentives for Mobile Sensing. Proc. Int. Conf. Distributed Computing Systems (ICDCS), Madrid, Spain, 30 June–3 July, pp. 208–217. IEEE. 26 Luo, T., Tan, H.P. and Xia, L. (2014) Profit-maximizing Incentive for Participatory Sensing. Proc. IEEE Conf. Computer Communications (INFOCOM), Toronto, Canada, 27 April–2 May, pp. 127–135. IEEE. 27 Zhang, Q., Wen, Y., Tian, X., Gan, X. and Wang, X. (2015) Incentivize Crowd Labeling under Budget Constraint. Proc. IEEE Conf. Computer Communications (INFOCOM), Kowloon, Hong Kong, 26 April–1 May, pp. 2812–2820. IEEE. 28 Cheng, J., Yang, H., Wong, S.H., Zerfos, P. and Lu, S. (2007) Design and Implementation of Cross-domain Cooperative Firewall. Proc. IEEE Int. Conf. Network Protocols (ICNP), Beijing, China, October 16–19, pp. 284–293. IEEE. 29 Singer, Y. (2010) Budget Feasible Mechanisms. Proc. 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), Las Vegas, NV, USA, October 23–26, pp. 765–774. IEEE. Footnotes 1 In this work, we consider truthful users, whose claimed prices are equal to their costs. Moreover, many existing works [10–12] have proposed truthful incentive mechanisms. Author notes Handling editor: Andrew Martin © The British Computer Society 2017. All rights reserved. For Permissions, please email: journals.permissions@oup.com This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)


The Computer Journal, Oxford University Press. ISSN 0010-4620; eISSN 1460-2067; DOI 10.1093/comjnl/bxx124.

There are a wide variety of application domains of mobile crowd sensing, such as transportation [1, 2], environment monitoring [3, 4] and healthcare [5, 6]. It is of significant importance to provide sufficient incentives for attracting smartphone users to contribute sensing data in a mobile crowd sensing system. A number of auction-based incentive mechanisms [7–9] have been proposed. As illustrated in Fig. 1, a typical auction-based mechanism has the following basic steps. Firstly, the platform announces the set of sensing tasks to all smartphone users. Secondly, each user submits a bid containing the claimed price of each task. Lastly, based on the collected bids, the platform allocates the sensing tasks to the users. Then, each smartphone performs the allocated tasks, reports the sensing data back to the platform and gets the reward accordingly. FIGURE 1. The basic steps of an auction-based incentive mechanism for mobile crowd sensing. However, an auction-based incentive mechanism may unexpectedly release the location privacy of smartphone users, which may seriously reduce users' willingness to participate in mobile crowd sensing. The main reason is as follows. For location-aware tasks, the distance between a task and a user is a deciding factor in the cost incurred by the user. As a consequence, the claimed prices1 may be exploited by the platform or the adversary to infer the location of a smartphone user. The intuition behind inferring the location is that the bidding price for performing a given task generally becomes larger as the distance between the user and the task increases. Then, by knowing the locations of tasks and the bidding prices of a user, an attacker may determine the most likely location of the user. A number of location-preserving methods have been proposed.
Existing works can be divided into three classes: (i) randomization [13–15], in which a smartphone user sends to the platform the true location as well as several fake locations; (ii) spatial generalization [16, 17], in which a smartphone user sends a geographical region that covers at least k−1 other users; and (iii) the time-fuzzy method [18]. In addition, several schemes [19–21] have been proposed for protecting sensing data. Unfortunately, existing methods for location preservation and data protection cannot be directly applied in auction-based mechanisms for mobile crowd sensing. Firstly, in an auction-based mechanism, even if the location of the user is not enclosed in the bid submitted to the platform, the location information may still be inferred from the prices of the tasks required by the user. Secondly, existing methods for data protection ignore the location privacy leakage in the bids of the users. To defend against such an attack in an auction-based incentive mechanism, we propose a location privacy-preserving method for auction mechanisms. This method encrypts the prices in the bid of a user so that the adversary cannot access the prices, and hence the location privacy of the user can be protected. In the meanwhile, the auction can still proceed properly, i.e. the platform selects the user offering the lowest price for each sensing task, or selects users under a budget constraint. To this end, our method employs the prefix membership verification scheme [22, 23]. The platform determines the winning user for a given sensing task by using only the encrypted prices from all users. Through simulations, we demonstrate that the failure rate of the attack is dramatically increased with our protection method, compared with the case without any protection. Moreover, auction mechanisms can perform well under our protection. The rest of the paper is organized as follows. Section 2 discusses related work.
Section 3 defines the system model of a crowd sensing system and makes several practical assumptions. In Section 4, we describe an attack model and show its effectiveness in compromising the location privacy of a smartphone user. Section 5 presents the design of our location privacy-preserving method. In Section 6, we present simulation results, and finally we conclude the paper in Section 7. 2. RELATED WORK Incentive mechanisms [24] are important for attracting users and guaranteeing the quality of sensing data. According to different design goals, existing works can be divided into two types: user-centric [7–9] and platform-centric [7, 25, 26]. The reverse auction-based dynamic price (RADP) mechanism [8] is a typical user-centric incentive mechanism in mobile crowd sensing. The participants first submit their claimed prices for performing a sensing task, and then the platform chooses the user with the lowest price as the winner and recruits this user to gather the sensing data. One improvement of RADP, called 'MSensing', is proposed in [7]. Incentive mechanisms with budget constraint are proposed in [12, 27], where the platform needs to select a set of users for each task and the total payment cannot exceed the budget. In this paper, we consider the privacy leakage problem in both the RADP-based mechanism and the mechanism with budget constraint. Several privacy-preserving schemes have been proposed for mobile crowd sensing. Most existing works introduce a trusted third party (TTP) as an agency [19–21]. In [19], participants use a symmetric key from the TTP. After sending sensing data encrypted with the symmetric key, they need to connect to the TTP again for identification and then get the reward. Li et al. [21] proposed a TTP-free approach. However, existing works focus on protecting sensing data or cutting the link between users and data, and ignore the privacy leakage in the auction period.
The prefix membership verification scheme [28] was first introduced in 2007 and then formalized in 2008 by Liu et al. [22]; it has been applied to privacy protection in dynamic spectrum auctions in cognitive radio networks [23]. In this paper, we leverage this idea to hide the bidding prices of each user. 3. SYSTEM MODEL In this section, we present the system model considered in our work, which includes the model of crowd sensing systems and the threat model. We consider a typical crowd sensing system, which consists of three parts: an untrusted platform, smartphone users and a TTP. In particular, we focus on location-aware sensing tasks. Therefore, we consider that the attackers' goal is to obtain the real-time locations of smartphone users. 3.1. Crowd sensing system The platform accepts commissions from different organizers and distributes K sensing tasks to all smartphone users. These K tasks are located at different places in the whole area. The system has S smartphone users. For the sake of convenience, we consider a rectangular area, which is divided into M×N fine-grained square grids. We design a location privacy-preserving method that can be employed both in the RADP-based mechanism and in the mechanism with a budget constraint. In the RADP-based mechanism, the platform always chooses the user with the lowest price as the winner for each task. In the mechanism with budget constraint, each task j has a budget Cj. The platform needs to select a set of users such that the total payment is not larger than Cj. The interactions between the platform and the users proceed in the following four steps: (i) Announcing tasks: The platform announces the set of K location-aware tasks to all users. (ii) Submitting bids: Each smartphone user submits the identification and the bid Bi={b1i,…,bKi} to the platform, where bji, ∀i∈{1,2,…,S}, ∀j∈{1,2,…,K}, is the bidding price claimed by user i for task j. (iii) Allocating tasks: The platform allocates the tasks to the winning users.
(iv) Rewarding users: After the winners upload the sensing data, the platform distributes the rewards to the winning users. In an auction-based incentive mechanism, users are asked to submit a bidding price to the platform for each task they want to participate in. Since we consider truthful users, they always submit bidding prices according to their true cost of doing a task. In our work, we formulate the cost of a user performing a location-aware task, which is also the bidding price submitted to the platform, as follows: bji = fi(dji) + hi(·) + ε. (1) The formulation comprises three terms, which represent different kinds of cost. The first term fi(dji) represents the cost of moving to the location of task j, which is a function of the distance between the user and the task; dji denotes the distance between user i and task j. The second term hi(·) denotes the cost of user i performing task j, which includes the energy consumption and so on. The last term ε characterizes random noise, which follows a Gaussian distribution. fi(·) can be any monotonically increasing function (e.g. a linear or quadratic function with non-negative coefficients). Note that bji can be set to bmax if user i has no intention of performing task j, where bmax is the maximum price that the platform can afford for a task. The TTP is responsible for distributing secret keys in the phase of announcing tasks. It also helps the platform to decrypt the bidding prices in the phase of allocating tasks. The adversary can be an external attacker or the platform itself. The platform is curious-but-honest, i.e. it will honestly execute the auction-based mechanism but is curious about users' information. All notations are listed in Table 1. Table 1. Notations and descriptions.
K: Number of sensing tasks
S: Number of smartphone users
M×N: Number of square grids
Cj: Budget of task j, where j∈{1,2,…,K}
bji: Bidding price for task j submitted by user i
Bi: Bid of user i, where Bi={b1i,b2i,…,bKi}
dji: Distance between user i and task j
pj(m,n): Normalized distance from grid (m,n) to task j
Pmn: Distance-to-tasks feature vector of grid (m,n), where Pmn=<p1(m,n),p2(m,n),…,pK(m,n)>
qji: Relative distance from user i to task j
Qi: Relative distance-to-tasks feature vector of user i, where Qi=<q1i,q2i,…,qKi>
si(m,n): Similarity between user i and grid (m,n)
F(x): Prefix family of number x
P([d1,d2]): Prefix family of range [d1,d2]
N(X): Prefix numericalization function of prefix family X
3.2. Threat model and assumptions We consider attackers whose goal is to obtain the locations of smartphone users by analyzing the bids submitted during the auction period. First, attackers know the locations of all tasks, which are public. Second, attackers can intercept all data packets exchanged between each smartphone user and the platform, which means they can obtain the bidding prices of all tasks submitted by each user. Third, the platform itself can also be an attacker.
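To make the distance-price correlation that this threat model exploits concrete, the following sketch generates a bid according to Equation (1). It is only an illustration, not the authors' simulator: the linear movement cost f(d)=a·d is one of the forms used in Section 6, and the parameter values a, h and sigma are our own assumptions.

```python
import math
import random

B_MAX = 100  # maximum price the platform can afford for a task

def bid_price(user_xy, task_xy, a=10.0, h=5.0, sigma=3.0):
    """Bidding price per Equation (1): b_ji = f_i(d_ji) + h_i(.) + eps,
    here with a linear movement cost f(d) = a*d, a constant task cost h
    and Gaussian noise eps ~ N(0, sigma^2)."""
    d = math.dist(user_xy, task_xy)           # distance d_ji
    b = a * d + h + random.gauss(0.0, sigma)  # f(d) + h(.) + eps
    return min(max(round(b), 1), B_MAX)       # clamp into [1, b_max]

# A task close to the user tends to draw a lower bid than a distant one;
# this correlation is exactly what the attack in Section 4 exploits.
```

Because the noise term is small relative to the movement cost, an observer who sees only the bids can still recover the ordering of distances with high probability.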
According to Equation (1), it is easy to see that a higher bidding price for a task implies a larger distance between the task and the user. Therefore, the location privacy of a user may be compromised given the locations of all tasks and the relative distances to these tasks inferred from the bidding prices. Here, we assume that attackers have sufficient computing resources. In Section 4, we give an example to show how an attacker obtains the location information of a user. Such attacks become stronger when the number of tasks grows or the dependence of costs on distances becomes stronger. However, the effectiveness of such attacks may weaken when the noise added in the cost model in Equation (1) is larger. We also assume that attackers have the following limitations. First, attackers cannot access the information in encrypted messages. Second, attackers do not intentionally jam communication channels. 4. ATTACK ON LOCATION PRIVACY This section presents an attack on the location privacy of users in a crowd sensing system during the auction period. Note that this attack is inspired by the attack on location privacy in cognitive radio networks [23]. 4.1. Basic idea We assume that attackers already know the locations of the K tasks. On the one hand, for a given grid, the distances from the grid to all the tasks are known in advance by the adversary. On the other hand, according to Equation (1), a higher price indicates a larger distance. Therefore, given the bid of a user, the claimed prices of the tasks carry information about the relative distances from the user's current location to the tasks. By comparing the similarity between the distances to all tasks of a user and of every grid, the attacker is able to determine the location of the user by selecting the grid with the highest similarity. 4.2.
Distance-to-tasks feature of a grid The distance-to-tasks feature of a given grid (m,n), denoted as Pmn=<p1(m,n),p2(m,n),…,pK(m,n)>, is defined as the vector containing all the normalized distances from the grid to the K tasks. The normalized distance pj(m,n) from the grid to a specific task is defined as follows: pj(m,n) = 2/(1+e^(θ·dj(m,n))), (2) where dj(m,n) is the Euclidean distance between task j and grid (m,n), and θ is an adjustable parameter. Equation (2) is a monotonically decreasing function of distance. It is apparent that pj(m,n)∈[0,1], and pj(m,n)=1 if the jth task is located in grid (m,n). A larger pj(m,n) means a shorter distance between grid (m,n) and task j. 4.3. Relative distance-to-tasks feature of a bid We define the relative distance-to-tasks feature of a given user i, denoted as Qi=<q1i,q2i,…,qKi>, as the vector containing the relative distances from the user to all the K tasks. The relative distance qji from the user to a specific task is defined as follows: qji = bmini/bji, (3) where bmini is the minimum price among all the prices in bid Bi={bji,1≤j≤K}. It is obvious that qji∈[0,1]. According to Equation (3), a larger qji means a lower bidding price, which implies a shorter distance between user i and task j. Qi thus normalizes the distances from the user to all tasks. 4.4. Similarity between user and grid To obtain the location of a user (i.e. which grid the user is currently in), the attacker finds the grid whose distance-to-tasks feature is most similar to the relative distance-to-tasks feature of the user. To this end, the similarity si(m,n) between user i and grid (m,n) is computed as follows: si(m,n) = (Qi−Pmn)^T·(Qi−Pmn) (4) = ∑j=1..K (qji−pj(m,n))². (5) The smaller si(m,n), the higher the similarity. After calculating the similarities between the user and all the M×N grids, the adversary chooses the grid with the highest similarity, i.e. the minimum si(m,n), as the location of the user. Simulation results in Section 6 show that this attack can locate a user with high accuracy.
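The attack of Equations (2)-(5) can be sketched in a few lines of Python. This is a minimal illustration under our own naming, not the authors' implementation; the grid coordinate convention and the default θ are assumptions.

```python
import math

def locate_user(task_locs, bid_vec, M, N, theta=1.0):
    """Infer the grid a user is in from the bid alone.
    task_locs: (x, y) position of each task; bid_vec: prices b_j."""
    b_min = min(bid_vec)
    q = [b_min / b for b in bid_vec]          # Equation (3): relative distances
    best_grid, best_s = None, float("inf")
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            s = 0.0
            for (tx, ty), qj in zip(task_locs, q):
                d = math.dist((m, n), (tx, ty))
                p = 2.0 / (1.0 + math.exp(theta * d))  # Equation (2)
                s += (qj - p) ** 2                     # Equation (5)
            if s < best_s:                  # smaller s means higher similarity
                best_grid, best_s = (m, n), s
    return best_grid
```

On the 3×3 example discussed in Section 4.5 (tasks at (1,1), (2,2) and (2,3), bid B={10,24,32}), this sketch recovers grid (1,1).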
The complete algorithm of the attack is shown in Algorithm 1.
Algorithm 1 Attack Algorithm on Location Privacy
Input: The locations of all tasks and the bid of user i;
Output: The grid (x,y) where user i is located;
1: bmini = min{bji, ∀j∈{1,2,…,K}};
2: for j=1,2,…,K do
3:  qji = bmini/bji;
4: end for
5: for each grid (m,n) do
6:  for j=1,2,…,K do
7:   pj(m,n) = 2/(1+e^(θ·dj(m,n)));
8:  end for
9:  si(m,n) = ∑j=1..K (qji−pj(m,n))²;
10: end for
11: si(x,y) = min{si(m,n), ∀1≤m≤M, 1≤n≤N};
4.5. An example of the attack model We take a region divided into 3×3 grids as an example. There is one user located in grid (1,1) and three tasks located in grids (1,1), (2,2) and (2,3), respectively, as shown in Fig. 2a. For ease of presentation, we set f(d)=r+u·d with r=10 and u=10, and ignore hi(·) and ε. The attacker first calculates the distance-to-tasks feature of each grid by Equation (2) with θ=1, as shown in Fig. 2b.
We assume the bid of the user is B={10,24,32}, and thus the relative distance-to-tasks feature is calculated as Q=[1,0.42,0.31]. Then the attacker calculates the similarities between Q and the feature vector of each grid according to Equation (4). The results are shown in Fig. 2c. As a result, the attacker chooses grid (1,1) as the location of the user, since it has the minimum value among all grids. Thus, the user's location privacy is leaked. FIGURE 2. An example of the attack model. The attacker calculates the relative distance-to-tasks feature Q of the user and the feature Pmn of each grid based on the locations of the three tasks and the user's bid B={10,24,32}. According to the similarities, the attacker locates the user in grid (1,1). (a) The locations of the three tasks and the user, (b) the distance-to-tasks feature of each grid and (c) the similarity between the distance-to-tasks feature of each grid and that of the user. 5. LOCATION PRIVACY-PRESERVING METHOD In this section, we first give the basic idea. Then, we describe the preliminaries of prefix membership verification. After that, we present the detailed design for both the RADP-based incentive mechanism and the mechanism with budget constraint. Finally, we offer a theoretical analysis. 5.1. Basic idea As illustrated in Section 4, attackers can infer the location of a user by using the bidding prices in B.
Some previous studies [22, 23] have proposed privacy-preserving methods that encrypt the K prices in B with K different keys to defend against such an attack. However, prices encrypted in this way cannot be compared by the platform, and therefore the platform cannot select proper users for each task. In this work, we first introduce the prefix membership verification method to preprocess the prices. Under this method, the platform can compare prices encrypted with the same key. Thus, the platform can rank the prices for a given task and allocate the task in a cost-effective way. 5.2. Preliminaries The main idea of prefix membership verification [22, 23, 28] is to convert the question of whether a number lies in a range into the question of whether the intersection of two sets is empty. There are several key concepts: the s-prefix, the prefix family and the prefix numericalization function. A w-bit s-prefix {0,1}^s{*}^(w−s) consists of s specified bits (0s or 1s) followed by (w−s) wildcards *, where * matches either 0 or 1. For example, 1*** is a 1-prefix and 11* is a 2-prefix. An s-prefix represents the range from {0,1}^s{0}^(w−s) to {0,1}^s{1}^(w−s). For example, p=1*** represents the range [1000,1111]; a number matches an s-prefix if and only if its leading s binary bits are the same as those of the s-prefix. Consider a w-bit binary number x=b1b2⋯bw. We define the prefix family of the number x as F(x)={b1b2⋯bw, b1b2⋯bw−1*, …, b1*⋯*, *⋯*}. The prefix family has (w+1) prefixes and its ith element is b1b2⋯bw−i+1*⋯*. For example, the prefix family of 9, i.e. 1001, is F(9)={1001,100*,10**,1***,****}. Thus, given a number x and a prefix p, x is in the range of p if and only if p∈F(x). The prefix family of a range [d1,d2] is denoted as P([d1,d2]). It consists of a set of prefixes, where each prefix covers a subrange of [d1,d2] and all the prefixes together cover the whole range [d1,d2]. For example, P([7,15])={0111,1***}.
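These constructions, together with the prefix numericalization function N(·) introduced next in this subsection, can be sketched as follows. The helper names and the recursive range-cover strategy are our own; prefixes are represented by their specified leading bits only.

```python
def prefix_family(x, w):
    """F(x): all prefixes of the w-bit binary form of x, from the full
    number down to the all-wildcard prefix."""
    bits = format(x, f"0{w}b")
    return [bits[:s] for s in range(w, -1, -1)]

def range_prefixes(lo, hi, w):
    """P([lo, hi]): a minimal prefix cover of [lo, hi], found by
    recursively halving the w-bit value space."""
    out = []
    def cover(p_lo, p_hi, bits):
        if lo <= p_lo and p_hi <= hi:
            out.append(bits)                 # subrange lies fully inside
        elif p_lo <= hi and lo <= p_hi:      # partial overlap: split further
            mid = (p_lo + p_hi) // 2
            cover(p_lo, mid, bits + "0")
            cover(mid + 1, p_hi, bits + "1")
    cover(0, 2 ** w - 1, "")
    return out

def numericalize(prefixes, w):
    """N(X): append a 1 after the specified bits, pad with 0s to w+1 bits."""
    return {(p + "1").ljust(w + 1, "0") for p in prefixes}

# Running example: x = 9 (binary 1001) and the range [7, 15] with w = 4.
F9 = numericalize(prefix_family(9, 4), 4)
P7_15 = numericalize(range_prefixes(7, 15, 4), 4)
# Their intersection is {11000}, which verifies that 9 lies in [7, 15].
```

Membership now reduces to a set intersection on fixed-length bit strings, which is what later allows the same test to be carried out on HMAC-encrypted elements.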
Then, we conclude that x∈[d1,d2] if and only if F(x)⋂P([d1,d2])≠∅. For example, 9∈[7,15] and F(9)⋂P([7,15])={1***}. This makes it possible to decide whether a number is in a range by comparing two sets of prefixes. A prefix numericalization function [22], denoted as N(X) where X is a prefix family, turns a prefix family into a prefix numericalization family (PNF), which makes the test of whether F(x)⋂P([d1,d2])≠∅ executable on numbers. It converts each prefix p into a binary number: for a w-bit prefix p=b1b2⋯bs*⋯*, the function produces a (w+1)-bit number by appending a 1 after bs and replacing all the *s with 0s. We then have the following conclusion: x∈[d1,d2] if and only if N(F(x))⋂N(P([d1,d2]))≠∅. (6) For instance, N(F(9))={10011,10010,10100,11000,10000} and N(P([7,15]))={01111,11000}; 9∈[7,15] since N(F(9))⋂N(P([7,15]))={11000}≠∅. In order to apply prefix membership verification, we convert a single price into two parts: the bidding price bji itself and the range [bji,bmax]. We build the price prefix numericalization family (PPNF) of bji, N(F(bji)), and the range prefix numericalization family (RPNF) of [bji,bmax], N(P([bji,bmax])). According to Equation (6), if N(F(bji))⋂N(P([bjk,bmax]))≠∅, then bji∈[bjk,bmax] and hence bji≥bjk. After gathering all users' PPNFs and RPNFs for a given task, the platform can rank the prices. Table 2 gives an example of choosing the minimum price 2 among the four prices {5,15,2,9} with bmax=15. Table 2. PPNF of the bids and RPNF of the bid ranges.
bids: {5, 15, 2, 9}
bid 5: PPNF {01011, 01010, 01100, 01000, 10000}; bid range [5,15]: RPNF {01011, 01110, 11000}
bid 15: PPNF {11111, 11110, 11100, 11000, 10000}; bid range [15,15]: RPNF {11111}
bid 2 (winner): PPNF {00101, 00110, 00100, 01000, 10000}; bid range [2,15]: RPNF {00110, 01100, 11000}
bid 9: PPNF {10011, 10010, 10100, 11000, 10000}; bid range [9,15]: RPNF {10011, 10110, 11100}
The PPNF of the winner (price 2) intersects no other user's RPNF, while every other PPNF intersects the winner's RPNF. 5.3. Design for the RADP-based mechanism The main steps of our design for the RADP-based auction mechanism are shown in Fig. 3. Each user encrypts the K prices in B with K different keys using a keyed-Hash Message Authentication Code (HMAC). Since identical plaintexts are encrypted into identical ciphertexts, the platform is capable of ranking the prices for a given task by using the aforementioned prefix membership verification. Announcing tasks: The platform announces the set of K tasks to all users. Distributing keys: The TTP generates K pairs of public and private keys {(gt1,gc1),(gt2,gc2),…,(gtK,gcK)} and distributes the public keys gtj to each user. The private keys gcj are kept by the TTP.
The TTP also distributes the mapping parameter c to each user, which is explained in the next step. Submitting bids: Each user i first maps his/her bidding price bji to a new number (bji)′ in the range (c·(bji−1), c·bji]. Then, the user calculates the PPNF and the RPNF of (bji)′. After that, the user encrypts every element in the two sets with the corresponding public key gtj. Lastly, the encrypted sets, denoted as Hgt(N(F((bji)′))) and Hgt(N(P([(bji)′,(bmax)′]))), are submitted to the platform. Selecting winners: After gathering all the encrypted bids from the users, the platform decides that bji≥bjk for a given task j if the following formula holds: Hgt(N(F((bji)′)))⋂Hgt(N(P([(bjk)′,(bmax)′])))≠∅. (7) Then, the platform selects the winner, denoted by t∈{1,2,…,S}. Note that the intersection of the PPNF of the winner and the RPNF of any other user is always empty. The platform then sends the encrypted prices of the winners to the TTP. Sending deciphered bids: The TTP decrypts the encrypted prices using the corresponding private key gcj. Note that, after decryption, the plaintext is a number x in the range (c·(bjt−1), c·bjt]. The TTP computes ⌈x/c⌉=bjt and sends it back to the platform. If bjt=bmax, the TTP informs the platform that the bid is invalid. Rewarding users: The users upload the sensing data, and the platform pays each user accordingly. Figure 3. The main steps of our design for the RADP-based auction mechanism. Note that the number of distinct prices can be very limited. Even after encryption, the platform may still be able to guess the corresponding prices by ranking the encrypted prices for the same task, especially when there are a large number of users. For better protection, the parameter c is applied to map each price into a larger range. 5.4.
Design for the mechanism with budget constraint As shown in Section 5.2, the platform cannot learn the price of each user but is capable of comparing the prices. This is enough for the RADP-based auction mechanism but insufficient for the auction mechanism with budget constraint: as each task has a budget Cj, the platform needs to make sure that the total payment given to the selected users is no more than Cj. Our design is inspired by the budget-feasible mechanism in [29]. In that mechanism, the platform first sorts all the S prices for a certain task j, denoted as {bj1≤bj2≤⋯≤bjS}. Then it finds the last number k that satisfies bjk≤Cj/k. The top k users are selected as winners and the payment to each winner is min{Cj/k, bj(k+1)}. Hence, after sorting the encrypted prices using Equation (7), the platform needs to return the rankings to the corresponding users. Then the user ranked k uploads k·bjk back to the platform through the same process as in submitting bids. The user ranked first also uploads the encrypted Cj to the platform. Finally, the platform is capable of finding the last k which satisfies k·bjk≤Cj. The main steps are shown in Fig. 4. Announcing tasks: The platform announces the set of K tasks and the budget constraint of each task {C1,C2,…,CK} to all users. Distributing keys: The TTP generates K pairs of public and private keys {(gt1,gc1),(gt2,gc2),…,(gtK,gcK)} and distributes the public keys gtj to each user. The private keys gcj are kept by the TTP. The TTP also distributes the mapping parameter c to each user. Submitting bids: Each user i first maps the bid bji to a number (bji)′ in the range (c·(bji−1), c·bji]. Then, the user calculates the PPNF and the RPNF of (bji)′. After that, the user encrypts every element in the two sets with the corresponding public key gtj. Lastly, the encrypted sets, denoted as Hgt(N(F((bji)′))) and Hgt(N(P([(bji)′,(bmax)′]))), are submitted to the platform.
Returning rankings: After gathering all the encrypted bids from the users and sorting the encrypted prices by Equation (7), the platform obtains the ranking {1,2,…,S}. Note that each rank corresponds to a user. The platform then returns the corresponding rank to each user. Uploading encrypted k·bjk: After receiving its rank k, each user applies the same process as in the bid-submission step to k·bjk. Since the platform needs to compare Cj with k·bjk, the user ranked first also applies the same process to the budget Cj. Then, each user uploads the encrypted information to the platform. Sending deciphered bids: The platform compares Cj and k·bjk, ∀k∈{1,2,…,S}, by using Equation (7), and finds the last number k that satisfies k·bjk≤Cj. The platform then sends the encrypted prices of the kth user and the (k+1)th user to the TTP. The TTP decrypts the encrypted prices using the corresponding private key gcj. Note that, after decryption, each plaintext is a number in the range (c·(bjk−1), c·bjk], so the TTP divides it by c and takes the ceiling. Finally, the TTP returns the prices of the two ciphertexts to the platform. Selecting winners: The platform obtains the prices of the kth user and the (k+1)th user. If the kth bid is bmax, the allocation fails, since at least one selected user is not willing to perform the task. Otherwise, the platform allocates the task to the top k users, and the payment to each of them is min{Cj/k, bj(k+1)}. Rewarding users: The users upload the sensing data, and the platform pays each user accordingly. Figure 4. The main steps of our design for the auction mechanism with budget constraint. 5.5. Theoretical analysis Theorem 1. Given the number K of tasks, the maximum bidding price bmax, the mapping parameter c and the expansion ratio r of the encrypted output, the transmission cost of each user is K·r·(⌈log(c·bmax)⌉+1)·(3⌈log(c·bmax)⌉−1).
Proof. Since bmax and c are fixed, the largest number to be sent is c·bmax. Its binary representation has l=⌈log(c·bmax)⌉ bits, and l is the bit length of all the bidding prices. In the preprocessing step, each bidding price generates two sets of binary numbers. Each element in the PPNF and the RPNF occupies l+1 bits. There are (l+1) elements in the PPNF and at most (2l−2) elements [22] in the RPNF. In the phase of submitting bids, users encrypt all K prices with HMAC keys, which expands the output by a ratio r. The total transmission cost is hence K·r·(l+1)·(3l−1), which is K·r·(⌈log(c·bmax)⌉+1)·(3⌈log(c·bmax)⌉−1). □ The transmission cost is linear in the number of tasks, and the computation cost of HMAC encryption is quite small. Therefore, the method is feasible for large-scale mobile crowd sensing. The platform cannot know the prices of a user without the keys and hence cannot compare the elements of the vector Bi. However, the platform may still be able to find the correspondence between bidding prices and encrypted numbers, especially when the number S of users is large. Theorem 2. Given the number S of smartphone users, for a given task, it is difficult for the curious platform to find the correspondence between the encrypted numbers and the bidding prices when c·bmax≫S. Proof. There are at most c·bmax different ciphertexts for each task with a given c. If c·bmax≤S, consider the special case that the platform collects S ciphertexts with T=c·bmax distinct values for a given task. Since the platform knows bmax in advance, it infers c by calculating T/bmax. Then, the platform obtains the corresponding prices by ranking the T ciphertexts and dividing them by c. However, if c·bmax≫S, the number of possible correspondences is immensely large. Assuming that the price for a given task follows a uniform distribution, then E(T)=c·bmax·(1−((c·bmax−1)/(c·bmax))^S).
Hence, with a large c, it is difficult for the platform to find the accurate correspondence. □ Theorem 3. In the auction mechanism with budget constraint, the location privacy-preserving method can hire at least half as many users as the mechanism without protection. Proof. In the original mechanism with budget constraint, after sorting the users' bidding prices as {b1≤b2≤⋯≤bS}, the platform selects the maximum number l with ∑i=1..l bi ≤ C and chooses the top l users as winners. Since the prices are sorted, we have (l/2)·b(l/2) ≤ ∑i=l/2..l bi ≤ C. In the location-preserving mechanism, we need to find the critical k that satisfies k·bk≤C. Hence l/2≤k, and the protected mechanism hires at least half of the winners of the original mechanism. □ 6. EVALUATION 6.1. Simulation settings To evaluate the performance of the attack model and our proposed privacy-protection method, we run simulations on a region with 50×50 grids. The default values of the parameters are set as follows. fi(d) is a monotonically increasing function; we use four different forms of fi(d): a linear function (i.e. fi(d)=ai·d), a quadratic function (i.e. fi(d)=ai·d²), a logarithmic function (i.e. fi(d)=ai·log(d)) and a square root function (i.e. fi(d)=ai·√d), where ai, ∀i∈{1,2,…,S}, is randomly selected from the range [0,50] for each user. hi(·) is set to a constant. ε obeys a Gaussian distribution with zero mean and variance 10. bmax=100. The numbers of users and tasks are both 400 by default. The results shown below are the averages of 10 simulation runs. If the inferred location is not the actual location of the user, the attack is considered a failure. The failure rate is defined as the ratio of failures over all attacks. 6.2. Evaluation of the attack We first evaluate the effectiveness of the attack with the performance metric of failure rate. Since the pricing strategy of a user may intuitively have a large impact on the performance of the attack, we explore different pricing strategies.
More specifically, we consider four different basic forms of fi(d) for all users, i.e. the linear, quadratic, logarithmic and square root functions. In addition, we also consider a more complicated situation, in which users can adopt different strategies. In Fig. 5, we report the failure rate of the attack as the number of tasks increases from 50 to 500. We can find that as the number of sensing tasks increases, the failure rate drops very quickly for all pricing strategies. This shows that the attack becomes more effective as there are more sensing tasks. This is reasonable: as the number of sensing tasks becomes larger, the feature vector becomes longer, and it is less likely that two grids produce the same degree of similarity. We also find that different pricing strategies do not make much difference. When the number of tasks is 500, the square root pricing strategy results in a slightly larger failure rate of the attack. Figure 5. Failure rate of the attack with different types of f(d) vs. the number of tasks. To analyze the impact of random noise on the effectiveness of the attack model, we also run simulations varying the variance of ε from 5 to 50, as shown in Fig. 6. We can find that when the random noise becomes larger, the failure rate increases quickly for all pricing strategies. This is because it becomes harder to mount the attack when the correlation between distance and bidding price drops due to the noise. Figure 6. Failure rate of the attack with different types of f(d) vs. the random noise. 6.3. Security evaluation of the location-protected mechanisms We next evaluate the effectiveness of the proposed method in preserving location privacy.
To this end, we study the failure rate of the attack with and without our proposed method. When our method is applied, the attacker blindly guesses the location of a task as the location of a user who is assigned the task. In addition, we evaluate the impacts of the number of users and the number of sensing tasks on the performance of the method. In Fig. 7, we report the failure rate with and without our protection method as the number of users increases from 100 to 1000; the number of sensing tasks is 400. We can observe that the failure rate is dramatically increased when the proposed method is applied. For example, when there are 1000 users, the failure rate with protection is 7.5 times higher than that without protection. As there are more users, the effectiveness of the proposed method becomes even higher. This is because the attacker can only use the locations of the 400 sensing tasks to infer the locations of the winning users; as the number of users increases, this blind strategy becomes less effective. We also find that without protection, the failure rate is not sensitive to the number of users and is stable at around 0.1. Figure 7. Comparison of the failure rate with and without protection for different numbers of users. In Fig. 8, we report the failure rate with and without our protection method as the number of tasks increases from 300 to 800; the number of users is 400. We can find that for different numbers of sensing tasks, the protection of the proposed method is always effective. However, as the number of sensing tasks increases, the failure rate slightly decreases. This is because when there are more sensing tasks, the attacker has more candidate locations with which to infer the locations of the winning users. Figure 8.
Comparison of the failure rate with and without protection for different numbers of tasks. 6.4. Evaluation of the auction performance We now evaluate the performance of the auction mechanism with our location privacy-preserving method. The original number of winners is denoted as N1 and the number of winners under our location privacy-preserving method is denoted as N2. We measure the impact of the location-protected method by the ratio N2/N1: the higher the ratio, the smaller the impact on the auction mechanism. In the simulations, we set the number of tasks to 100 and each result is the average over the 100 tasks. The evaluation is shown in Figs 9 and 10. Figure 9. The ratio N2/N1 of the RADP-based (greedy) mechanism and the mechanism with budget constraint vs. the number of users. Figure 10. The ratio N2/N1 of the mechanism with budget constraint under different budgets. In Fig. 9, we report the ratio for both the RADP-based (greedy) mechanism and the mechanism with budget constraint. The budget for each task obeys a Gaussian distribution with mean 350 and variance 100. The number of users increases from 100 to 550. We can observe that the ratio of the greedy mechanism is always 1, which shows that the location-preserving method has no impact on its auction performance. The average ratio of the mechanism with a budget constraint is about 0.75. This is higher than the bound of one half given by Theorem 3, which means the protected mechanism can hire most of the winners of the original one. In Fig. 10, we report the ratio of the mechanism under different budget constraints.
We find that the ratio stays around 0.75 and increases slightly as the number of users grows. When the budget increases while the number of users remains unchanged, N2/N1 decreases slightly. This is because, with a larger budget, the platform is more likely to choose users with higher prices, such as bmax.

7. CONCLUSION

In this paper, we study the problem of preserving location privacy in mobile crowd sensing. We find that an adversary may easily determine the location of a smartphone user from nothing more than the prices of the tasks in the user's bid, even when the location information itself is not contained in the bid. This is because the pricing strategy of the user is strongly related to the distance between the user and the target task, which is quite common in the real world. We propose a location privacy-preserving method for both the RADP-based auction mechanism and the mechanism with budget constraint. The method relies on a TTP that is responsible only for distributing keys and for decryption. In the mechanism, a user submits a bid containing encrypted prices that neither the platform nor the adversary can access. By doing so, the mechanism preserves the location privacy of the user without preventing the platform from selecting the winning users who offer the lowest prices for a given sensing task. Our simulations show that our mechanism is effective.

FUNDING

This work is supported in part by the 973 Program (No. 2014CB340303); the National Natural Science Foundation of China (Nos. 61772341, 61472254 and 61472241); the Science and Technology Commission of Shanghai (Grant Nos. 14511107500 and 15DZ1100305) and Singapore NRF (CREATE E2S2). This work is also supported by the Program for Changjiang Young Scholars in University of China, the Program for Shanghai Top Young Talents and the National Top Young Talents Support Program.

REFERENCES

[1] Mohan, P., Padmanabhan, V.N. and Ramjee, R.
(2008) Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones. Proc. 6th ACM Conf. Embedded Networked Sensor Systems (SenSys), Raleigh, NC, USA, November 5–7, pp. 323–336. ACM.
[2] Thiagarajan, A., Ravindranath, L., LaCurts, K., Madden, S., Balakrishnan, H., Toledo, S. and Eriksson, J. (2009) VTrack: Accurate, Energy-aware Road Traffic Delay Estimation using Mobile Phones. Proc. 7th ACM Conf. Embedded Networked Sensor Systems (SenSys), Berkeley, CA, USA, November 4–6, pp. 85–98. ACM.
[3] Mun, M., Reddy, S., Shilton, K., Yau, N., Burke, J., Estrin, D. and Boda, P. (2009) PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research. Proc. 7th Int. Conf. Mobile Systems, Applications, and Services (MobiSys), Kraków, Poland, June 22–25, pp. 55–68. ACM.
[4] Rana, R.K., Chou, C.T., Kanhere, S.S., Bulusu, N. and Hu, W. (2010) Ear-phone: An End-to-end Participatory Urban Noise Mapping System. Proc. 9th ACM/IEEE Int. Conf. Information Processing in Sensor Networks (IPSN), Stockholm, Sweden, April 12–16, pp. 105–116. ACM.
[5] Oliver, N. and Flores-Mangas, F. (2007) Healthgear: Automatic sleep apnea detection and monitoring with a mobile phone. JCM, 2, 1–9.
[6] Gao, C., Kong, F. and Tan, J. (2009) Healthaware: Tackling Obesity with Health Aware Smart Phone Systems. Proc. IEEE Int. Conf. Robotics and Biomimetics (ROBIO), Guilin, China, December 18–22, pp. 1549–1554. IEEE.
[7] Yang, D., Xue, G., Fang, X. and Tang, J. (2012) Crowdsourcing to Smartphones: Incentive Mechanism Design for Mobile Phone Sensing. Proc. 18th Annual Int. Conf. Mobile Computing and Networking (MobiCom), Istanbul, Turkey, August 22–26, pp. 173–184. ACM.
[8] Lee, J.S. and Hoh, B. (2010) Sell Your Experiences: A Market Mechanism based Incentive for Participatory Sensing. Proc. 8th IEEE Int. Conf. Pervasive Computing and Communications (PerCom), 29 March–2 April, pp. 60–68. IEEE.
[9] Xu, H. and Larson, K. (2014) Improving the Efficiency of Crowdsourcing Contests. Proc. 13th Int. Conf. Autonomous Agents and Multiagent Systems (AAMAS), Paris, France, May 5–9, pp. 461–468.
[10] Jaimes, L.G., Vergara-Laurens, I. and Labrador, M.A. (2012) A Location-based Incentive Mechanism for Participatory Sensing Systems with Budget Constraints. Proc. 10th IEEE Int. Conf. Pervasive Computing and Communications (PerCom), Lugano, Switzerland, March 19–23, pp. 103–108. IEEE.
[11] Feng, Z., Zhu, Y., Zhang, Q., Ni, L.M. and Vasilakos, A.V. (2014) TRAC: Truthful Auction for Location-aware Collaborative Sensing in Mobile Crowdsourcing. Proc. IEEE Conf. Computer Communications (INFOCOM), Toronto, Canada, 27 April–2 May, pp. 1231–1239. IEEE.
[12] Koutsopoulos, I. (2013) Optimal Incentive-driven Design of Participatory Sensing Systems. Proc. IEEE Conf. Computer Communications (INFOCOM), Turin, Italy, April 14–19, pp. 1402–1410. IEEE.
[13] Kido, H., Yanagisawa, Y. and Satoh, T. (2005) Protection of Location Privacy using Dummies for Location-based Services. Proc. 21st Int. Conf. Data Engineering Workshops (ICDEW), Tokyo, Japan, April 5–8, pp. 1248–1248. IEEE Computer Society.
[14] Suzuki, A., Iwata, M., Arase, Y., Hara, T., Xie, X. and Nishio, S. (2010) A User Location Anonymization Method for Location based Services in a Real Environment. Proc. 18th Int. Conf. Advances in Geographic Information Systems (SIGSPATIAL GIS), San Jose, CA, November 2–5, pp. 398–401. ACM.
[15] Kato, R., Iwata, M., Hara, T., Suzuki, A., Xie, X., Arase, Y. and Nishio, S. (2012) A Dummy-based Anonymization Method based on User Trajectory with Pauses. Proc. 20th Int. Conf. Advances in Geographic Information Systems (SIGSPATIAL GIS), Redondo Beach, CA, November 6–9, pp. 249–258. ACM.
[16] Gedik, B. and Liu, L. (2008) Protecting Location Privacy with Personalized k-Anonymity: Architecture and Algorithms. IEEE Trans. Mobile Comput., 7, 1–18.
[17] Pan, X., Xu, J. and Meng, X. (2012) Protecting Location Privacy against Location-dependent Attacks in Mobile Services. IEEE Trans. Knowl. Data Eng., 24, 1506–1519.
[18] Yigitoglu, E., Damiani, M.L., Abul, O. and Silvestri, C. (2012) Privacy-preserving Sharing of Sensitive Semantic Locations under Road-network Constraints. Proc. 13th Int. Conf. Mobile Data Management (MDM), Bengaluru, India, July 23–26, pp. 186–195. IEEE.
[19] Zhang, J., Ma, J., Wang, W. and Liu, Y. (2012) A Novel Privacy Protection Scheme for Participatory Sensing with Incentives. Proc. 2nd Int. Conf. Cloud Computing and Intelligent Systems (CCIS), Hangzhou, China, 30 October–1 November, pp. 1017–1021. IEEE.
[20] Christin, D., Roßkopf, C., Hollick, M., Martucci, L.A. and Kanhere, S.S. (2013) IncogniSense: An Anonymity-preserving Reputation Framework for Participatory Sensing Applications. Pervasive Mobile Comput., 9, 353–371.
[21] Li, Q. and Cao, G. (2013) Providing Privacy-aware Incentives for Mobile Sensing. Proc. 11th IEEE Int. Conf. Pervasive Computing and Communications (PerCom), San Diego, CA, USA, March 18–22, pp. 76–84. IEEE.
[22] Liu, A.X. and Chen, F. (2008) Collaborative Enforcement of Firewall Policies in Virtual Private Networks. Proc. 27th Annual ACM Symposium on Principles of Distributed Computing (PODC), Toronto, Canada, August 18–21, pp. 95–104. ACM.
[23] Liu, S., Zhu, H., Du, R., Chen, C. and Guan, X. (2013) Location Privacy Preserving Dynamic Spectrum Auction in Cognitive Radio Network. Proc. 33rd Int. Conf. Distributed Computing Systems (ICDCS), Philadelphia, USA, July 8–11, pp. 256–265. IEEE.
[24] Gao, H., Liu, C.H., Wang, W., Zhao, J., Song, Z., Su, X. and Leung, K.K. (2015) A Survey of Incentive Mechanisms for Participatory Sensing. IEEE Commun. Surv. Tutorials, 17, 918–943.
[25] Li, Q. and Cao, G. (2014) Providing Efficient Privacy-aware Incentives for Mobile Sensing. Proc. Int. Conf. Distributed Computing Systems (ICDCS), Madrid, Spain, 30 June–3 July, pp. 208–217. IEEE.
[26] Luo, T., Tan, H.P. and Xia, L. (2014) Profit-maximizing Incentive for Participatory Sensing. Proc. IEEE Conf. Computer Communications (INFOCOM), Toronto, Canada, 27 April–2 May, pp. 127–135. IEEE.
[27] Zhang, Q., Wen, Y., Tian, X., Gan, X. and Wang, X. (2015) Incentivize Crowd Labeling under Budget Constraint. Proc. IEEE Conf. Computer Communications (INFOCOM), Kowloon, Hong Kong, 26 April–1 May, pp. 2812–2820. IEEE.
[28] Cheng, J., Yang, H., Wong, S.H., Zerfos, P. and Lu, S. (2007) Design and Implementation of Cross-domain Cooperative Firewall. Proc. IEEE Int. Conf. Network Protocols (ICNP), Beijing, China, October 16–19, pp. 284–293. IEEE.
[29] Singer, Y. (2010) Budget Feasible Mechanisms. Proc. 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), Las Vegas, NV, USA, October 23–26, pp. 765–774. IEEE.

Footnotes

1. In this work, we consider truthful users, whose claimed prices are equal to their costs. Moreover, many existing works [10–12] have proposed truthful incentive mechanisms.

Author notes: Handling editor: Andrew Martin

© The British Computer Society 2017. All rights reserved. For Permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Journal: The Computer Journal (Oxford University Press)
Published: Dec 29, 2017
