Five fears about mass predictive personalization in an age of surveillance capitalism

Karen Yeung

International Data Privacy Law (2018) 8(3) 258. doi: 10.1093/idpl/ipy020

Key Points

This comment begins from the observation that data-driven service delivery is catalysing a change in modes of production and consumption, marked by a move away from 'mass production' in favour of 'mass predictive personalisation.' Despite the portrayal of personalisation as 'empowering' consumers, I identify five fears that the rise of mass predictive personalisation may portend for collective values and commitments. Fears (1), (2) and (3) are largely concerned with the values of fairness and justice, and can ultimately be attributed to the systematic use of digital profiling techniques that apply machine learning algorithms to merged sets of data, collected from the digital traces generated by continuously tracking users' online behaviour, in order to make calculated predictions about individuals across a population. Fears (4) and (5) coalesce around concerns for social solidarity and loss of community that may be associated with the increasing personalisation of services and offerings, which is both fuelling, and being fuelled by, the increasingly narcissistic mindset that mass personalisation makes possible. These reflections seek to provoke critical discussion and reflection that will motivate more penetrating research and place questions of this kind more firmly onto the academic, policy, public and political agenda.

Introduction

Contemporary production and consumption practices are being transformed in ways that depart radically from those which characterized the pre-digital, pre-networked age. Indeed, many commentators claim that the 'New Industrial Revolution' now dawning will provoke changes across every aspect of social life of a magnitude and scale as disruptive and far-reaching as those brought about by the original Industrial Revolution.1 While many factors have contributed to these changes, there is no doubt that technological innovation has played a critical role, particularly the emergence of the Internet, the ability to store digital data in the cloud, and the rise of ubiquitous computing (including the rapid and widespread take-up of 'smart' connected devices in contemporary industrialized societies). All of these innovations have supported the emergence of the powerful technologies currently referred to as 'artificial intelligence.' It is these technologies that, in retrospect, may come to be regarded as the 21st-century equivalent of the late 18th-century steam engine, for it is the application of machine learning algorithms to massive volumes of digital data that is powering the transformations in cultures of consumption and production currently unfolding. One of the most commercially valuable applications of these technologies entails the digital profiling of individuals and groups across a population. These techniques collect data gleaned from tracking the online behaviour of individuals across a population and subject that data to machine learning in order to create detailed profiles capable of generating highly accurate predictions about the behaviours, interests, preferences, and traits of individuals and groups.
Because these techniques can operate continuously and automatically, updating their operation via feedback loops that enable algorithms to improve their own performance, they have become exceptionally popular and powerful vehicles for accurately sifting, sorting, and scoring individuals across a population in ways that the retailers and marketers of the pre-networked age could barely have imagined. Yet, despite the sophistication of these 21st-century techniques, the predominant business model that has emerged for contemporary consumer-facing digital services, and with which the New Industrial Revolution is increasingly associated, is rather primitive, for it is essentially one of 'barter'. According to this model, users agree to continuous monitoring of their online behaviour and the collection of the digital breadcrumbs thereby generated, in return for services, known in contemporary parlance as a 'free' rather than a 'fee' for services revenue model.2

The widespread adoption of profiling technologies to analyse and predict the preferences and behaviours of individuals and groups, together with this 'free services' business model, has spawned a post-industrial form of capitalist production, which Shoshana Zuboff has dubbed 'surveillance capitalism'.3 According to Zuboff, the logic of surveillance capitalism rests on the generation of revenue from data assets acquired by ubiquitous automated operations, driven primarily by Silicon Valley's hyperscale technology firms (and spearheaded by Google), which achieve growth mainly by leveraging automation via global digital platforms.4 The resulting pools of data, which these socio-technical systems routinely and strategically collect, constitute a new asset class that Zuboff dubs 'surveillance assets'.5 Investors (so-called 'surveillance capitalists') generate profit from the global networked environment through a default business model in which company valuations routinely depend upon capturing human attention ('the market for eyeballs') rather than revenue as a predictor of return on investment. Surveillance capitalism, therefore, entails the continuous and pervasive collection of personal data by tracking the digital footprints of individuals, then channelling and controlling flows of personal information while converting them (via the use of AI technologies) into flows of profit, all in ways that are highly opaque to users.6

The displacement of industrial capitalism by surveillance capitalism, gradual in its origins yet rapid in its advance, can be understood as driven by changes in modes of production made possible by advances in networked computational technologies, highlighting the role of technological innovation as a catalyst for social transformation. It was the development of the steam engine that provided the technological driver underpinning the original Industrial Revolution and which, in turn, enabled the development of manufacturing and production processes at a scale and magnitude not previously possible. The steam engine enabled the construction of machinery, housed within factories, that could produce identical units on a mass basis. In a similar vein, the development of big data analytic techniques and the widespread and increasing use of connected smart devices, which are rapidly becoming indispensable in contemporary everyday life, have spawned not only a new model of capitalism but also a new mode of production: that of 'mass personalization'.
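Before comparing these two modes of production, it may help to make the profiling-and-feedback mechanism described above concrete. The following fragment is a deliberately minimal sketch, not any actual provider's system: the feature names, data, and model choice are invented for illustration. It shows how a behavioural model can be updated incrementally from a stream of click feedback, so that each interaction sharpens the next round of predictions:

    # Minimal, illustrative sketch of a profiling feedback loop (hypothetical data).
    # Each interaction yields a feature vector of behavioural signals and a label
    # (clicked or not); the model is updated online, so every new digital trace
    # improves the next prediction -- the 'feedback loop' described in the text.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")  # logistic regression, fitted incrementally

    # Invented behavioural features: [pages_viewed, late_night_browsing, price_checks]
    click_stream = [
        (np.array([[12, 1, 3]]), 1),  # user clicked the personalized offer
        (np.array([[2, 0, 0]]), 0),   # user ignored it
        (np.array([[8, 1, 5]]), 1),
    ]

    for features, clicked in click_stream:
        model.partial_fit(features, [clicked], classes=[0, 1])

    # The continuously updated profile drives the next targeting decision:
    print(model.predict_proba(np.array([[10, 1, 4]])))  # P(interest) for a fresh trace

Real systems differ in every detail, but this loop structure (predict, observe, update, predict again) is what allows such profiling to run continuously and automatically at population scale.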
Common to the modes of production prevailing in both the industrial era and the emerging networked digital era is their capacity to operate at 'scale'. One of the defining features of the first Industrial Revolution, critical to its socially disruptive effects, was its departure from the small-scale production that characterized rural, agrarian societies: the emergence of factories made production at scale possible, precipitating the growth and development of urban centres in which workers could live in close proximity to the factories for which their labour was essential. Despite this common capacity to operate at scale (and, thus, on a 'mass' basis), mass personalization can be distinguished from mass production in at least five ways:

(i) It is primarily concerned with the provision of 'services' (although it increasingly extends to the personalization of 'goods');7

(ii) rather than generating identical units of production, services are 'personalized' to each user, tailored to fit his or her individual tastes, interests, preferences, lifestyle, and behaviours;

(iii) service provision operates, by default, on a 'predictive' basis, owing to the application of advanced algorithmic profiling to 'infer and predict' each user's service preferences with the aim of 'anticipating' user needs and 'pushing' personalized services to them accordingly;

(iv) services are continually and automatically reconfigured in light of the feedback gleaned from monitoring the recipient's response to the service, without requiring the recipient to provide active and intentional feedback concerning their interest in the service thereby provided; and

(v) although recipients of the service can be characterized as consumers, in the same way that those consuming factory-produced outputs under industrial capitalism may be understood as consumers, users are also concurrently providing raw materials and labour to the producer, in the form of both the personal data that they generate via their digital interactions and the resulting 'data exhaust', the digital by-products incidentally generated from those interactions, which the service provider can use to train and improve its algorithms.

This move to mass personalization is likely to signify a major shift in our 'modes and culture of consumption', and it is clearly ripe for serious and wide-ranging academic research and reflection. It is beyond the scope of this article, however, to reflect comprehensively on the broader implications of its emergence. Rather, the purpose of this article is modest and limited. Although service providers (particularly social media platforms) typically portray personalized services as offering users a more 'meaningful' experience, the shift from mass production to mass personalization generates a number of potential dangers, including risks to collective values and normative commitments that are, in my view, indispensable if democratic communities are to thrive. Yet, these threats have been largely overlooked in public and academic debates, notwithstanding growing public anxieties about the adverse implications associated with the so-called 'rise of the machines'. This article identifies five fears (or worries) that the rise of mass predictive personalization may portend for these collective values and commitments. These fears can be grouped into two broad sets of concerns.
The first three fears are largely concerned with the potential for these techniques to be used to engage in unfair practices and, therefore, directly implicate the values of 'fairness and justice'. These fears can ultimately be attributed to the systematic use of digital profiling techniques that apply machine learning algorithms to merged sets of data, collected from the digital traces generated by continuously tracking users' online behaviour, in order to make calculated predictions about individuals across a population. The remaining two fears coalesce around concerns for 'social solidarity and loss of community' that may be associated with the increasing personalization of services and offerings, which is both fuelling, and being fuelled by, an increasingly narcissistic mindset that mass personalization makes possible. In identifying these fears, my aim is primarily to provoke critical discussion and reflection that will motivate more penetrating academic research and place questions of this kind more firmly onto the policy agenda of law-makers and regulators, who are in a position to identify and implement the safeguards needed to maintain and protect core values that are necessary for individuals and communities to flourish in networked digital societies.

A more meaningful experience and an empowered consumer?

For the marketing industry, the emergence of digital technologies that enable the automatic personalization of offers and services to fit the preferences, interests, and tastes of individual consumers is in many ways the fulfilment of ambitions that first emerged during the 1950s, when the retail industry began attempting to segment its clients' customer bases in order to distinguish between existing and potential customers.8 For users and consumers, the predictive personalization of services may appear to offer considerable benefits. Proponents of contemporary personalization strategies that utilize data profiling techniques typically invoke the rhetoric of consumer 'empowerment',9 claiming that these strategies relieve users from unwanted or irrelevant offers, whilst their predictive nature offers greater levels of convenience and efficiency. For individuals who rank at the top end of what are, in essence, customer scoring systems10 and who are, thus, deemed highly desirable customers, mass personalization is likely to serve them well. But, will the turn to predictive personalization serve consumers as a whole well? Moreover, will even those high-end consumers deemed highly desirable be well served by mass predictive personalization in the long run? Affirmative answers to these questions are, in my view, far from certain, given the following dangers that such practices are likely to provoke.

Unfair practices

Fear #1: It expands opportunities for consumer exploitation

Because personalization practices rely on the digital profiling of individuals by mining digital datasets to infer and predict the tastes, interests, preferences, and vulnerabilities of individuals, these practices foster and exacerbate the asymmetry of power between profilers (and their industry clients) and those to whom personalized services are provided, thereby increasing the opportunities for the former to exploit the latter.
In particular, it is important to bear in mind that contemporary personalization practices rely critically on the mass surveillance of individuals across populations on a continuous and highly granular basis, and this will invariably have implications for the individuals and groups within the surveilled population. As the Rathenau Instituut commented in its report for the Council of Europe's Parliamentary Assembly:

    … many technologies nowadays can operate at a distance, most of us are not even aware of the mass surveillance taking place by state and market actors. This creeping development as a whole, and its impact on human rights and society, has received little attention and there has been scarcely any fundamental political and public debate so far. As a result, human beings are rather defenceless relative to this mass surveillance culture, since there are few opportunities to escape the surveillance activities if one does not want to be measured, analysed or coached (as part of a persuasion strategy).11

These observations not only draw attention to the pervasive, population-wide surveillance that mass personalization necessarily entails but also highlight both the opacity of these processes and the asymmetry in power between the profilers and the profiled. While the former acquire ever greater and more detailed knowledge of the latter, the latter have no equivalent access to the inner workings and practices of the organizations who profile them. In any relationship, whether commercial or otherwise, those with greater knowledge of the other's tastes, interests, preferences, dislikes, and vulnerabilities will be better placed to exploit them. At the same time, by dividing consumers into increasingly smaller segments, profiling further enhances the power of profilers and those on whose behalf they act. Such practices resonate directly with the time-honoured strategy of 'divide and conquer' that has been successfully invoked by ruthless political leaders and others to shore up their own power in pursuit of self-interest. Because personalization strategies serve to isolate individual consumers from each other, they erode consumers' power to act collectively in ways that might serve their interests as a whole. Moreover, because these profiling practices are highly opaque, dynamic, and largely hidden from public view, they exacerbate and reinforce the asymmetry of knowledge and power between producer and consumer, thereby increasing the risk of exploitation of the latter by the former. Some of the most obvious risks of exploitation arise as we enter the so-called era of 'artificial emotional intelligence.'12 In 2017, it was reported that tech firms were already developing billboards that can recognize and demographically categorize individuals and then direct personalized messages at them accordingly.13 Analysis of the click-through behaviour of individuals can readily identify when individuals are feeling low, more likely to make impulse purchases, or more susceptible to particular kinds of offers, enabling retailers to exploit detailed knowledge inferred from user profiles to micro-target personalized offers in ways that will maximize the opportunities to make a sale. But, mass personalization practices need not prey upon users' emotional vulnerabilities to be classed as exploitative. Concrete examples may be found in the use of 'personalized pricing', which data-driven profiling and the rise of digital retailing make possible.
Under industrial capitalism, goods were mass produced, supplied to retailers, and offered for sale in bricks-and-mortar retail stores, where they were made available to all customers entering the store at a particular time at the same price. In contrast, data-driven profiling now enables goods and services to be offered to potential customers at 'personalized' prices. Because each customer only sees his or her own individualized 'digital shop front' and does not have access to the prices or offers made to others online, prices can be personalized through the use of data-driven profiling in order to identify each individual's estimated maximum 'willingness to pay'. While this strategy may optimize revenue for the retailer, it means that two individuals might be offered exactly the same item at precisely the same time, yet at very different prices.14 Although personalized pricing might not be regarded as exploitative per se, the ability of retailers to profile their customers and tailor prices and offers accordingly can nonetheless support specific personalization practices that have a distinctly exploitative hue. For example, economic models of price discrimination suggest that there is one group of consumers who might be routinely made worse off through the use of online personalized pricing: those who are algorithmically identified as likely to have difficulty making good decisions, whether owing to lack of knowledge, poor digital literacy, or consumer disengagement, and who therefore neither actively shop around for better offers nor switch to alternative providers willing to offer them better deals than their current provider.15 These so-called 'sleepers'16 might fail to shop around owing to sheer laziness and apathy, and if so, one might regard them as 'fair game' for suppliers who can 'get away with' charging them higher prices. On the other hand, many of these consumers are likely to be vulnerable individuals, including those on low incomes, the elderly, the digitally disempowered (including those with no ready access to the Internet), and the poorly educated. The digital footprint of these individuals may superficially suggest that they have a low price-elasticity of demand (meaning that they are not very responsive to, nor care much about, price changes) because they fail to search out the best deal or switch providers when prices rise. Accordingly, these individuals are more likely to be offered higher prices than well-informed, digitally engaged, and financially savvy consumers who regularly switch providers, and higher prices than would have prevailed under a uniform pricing regime, in which less savvy consumers are effectively 'protected' by well-informed consumers who shop around for the best offers.17 Consumers who fall into this class may find themselves being offered inferior deals to those offered to others, in ways that may appropriately be regarded as exploitative, yet are likely to be completely unaware that they are being subjected to inferior treatment.
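The mechanics of personalized pricing can be illustrated with a toy sketch. This is not any retailer's actual model: the numbers, thresholds, and profile fields below are invented, and real systems infer willingness to pay from far richer behavioural profiles. The point is simply how an inferred low price-elasticity translates into a higher quote for the same item at the same moment:

    # Toy sketch of personalized pricing (all numbers and fields hypothetical).
    # A stand-in for a trained model estimates each customer's maximum
    # willingness to pay (WTP) from behavioural signals; the quoted price
    # then tracks that estimate rather than a uniform list price.
    def estimate_wtp(profile: dict) -> float:
        base = 40.0  # hypothetical population-average valuation of the item
        if profile["price_checks_per_week"] < 1:
            base *= 1.30  # inferred low price-elasticity ('sleeper') -> higher quote
        if profile["income_band"] == "high":
            base *= 1.15
        return base

    def quoted_price(profile: dict, floor: float = 25.0) -> float:
        return max(floor, round(estimate_wtp(profile), 2))

    savvy = {"price_checks_per_week": 9, "income_band": "mid"}
    sleeper = {"price_checks_per_week": 0, "income_band": "mid"}

    print(quoted_price(savvy))    # 40.0 -- shops around, receives the keener price
    print(quoted_price(sleeper))  # 52.0 -- same item, same moment, higher price

Even in this caricature, the distributional effect described above is visible: the customer whose digital footprint signals disengagement pays 30 per cent more, without ever seeing the price offered to anyone else.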
Fear #2: It enables subtle but powerful manipulation of individuals at scale

Not only is mass predictive personalization likely to increase opportunities for those who employ such practices to exploit customers, but it also provides digital service providers with powerful tools of manipulation, which can be used to shape the decisions and behaviours of users in subtle yet powerful ways.18 The recent Facebook/Cambridge Analytica scandal, in which it is alleged that data unlawfully harvested from the Facebook profiles of millions of users were utilized for political micro-targeting in ways that may have perverted the outcomes of the 2016 US presidential election and the 2016 Brexit referendum, reveals not only how readily mass personalization techniques can be exploited and abused but also how serious and damaging their consequences might be for the health and integrity of democratic political orders. But, even in the consumer context, their manipulative potential becomes apparent by contrasting the characteristics of data-driven predictive personalized services with traditional service personalization, epitomized by the bespoke suit produced by the tailors of London's famous Savile Row. Handcrafted human personalization typically entails the client first identifying a suitable craftsman and reviewing the craftsman's credentials to assess his or her suitability to undertake the task, taking into account, for example, the craftsman's skills, qualifications, capacity, reliability, and so forth. Once a craftsman is identified, the client then invites him or her to provide a particular service according to a set of requirements that the client explicitly identifies and stipulates. Those requirements might reflect the client's conventional patterns of taste, preference, and need, but they might not: indeed, they might depart entirely from her previous tastes and have nothing to do with population-wide trends, or the bespoke service or product might be intended for a person other than the person commissioning the service. In contrast, the typical aim of algorithmic decision-making systems that rely on machine profiling is pre-emptively to 'infer' the preferences, behaviours, and lifestyles of individuals across a population in order to tailor informational services (whether video content, search engine results, retail products available for purchase, etc.) to fit those inferred preferences and interests, using behavioural profiling to generate computational predictions about which services are 'likely' to be of interest to individual users, not only 'before' the individual has requested the service but often 'without' the individual requesting such a service at all. This comparison highlights crucial differences between traditional and automated, algorithmic forms of service personalization. In the case of traditional, handcrafted service provision, the client explicitly communicates her needs and requirements concerning the desired service; in the case of algorithmic personalization, the service is automatically configured according to the preferences and interests that the service provider has 'inferred' about the individual and offered 'pre-emptively', without any express request for service by the individual. Because the individual has not explicitly stated her preferences and interests concerning the service in question (indeed, she may not want the service at all), the potential to utilize predictive personalization techniques to pursue ends that may not be in the customer's interests becomes more apparent once we attend to the underlying aim of these algorithmic systems. What is it, precisely, that these systems seek to optimize, and who has the power to specify that overarching goal?
From the perspective of the system owner, the overarching aim of these systems is to channel users' behaviour and decisions in the direction preferred by the system designer or 'choice architect' and, hence, the system is intentionally configured to optimize whatever variables will generate maximum commercial returns to its owner.19 Accordingly, there is no guarantee that these objectives will align with the longer-term interests and welfare of the users whose decisions and behaviours these systems are aimed at influencing.20 It might be argued that these concerns are misplaced, given that individuals are free to choose whether or not to consume these offerings, just as individuals have always been free to reject or ignore so-called 'invitations to treat'. After all, the whole point and purpose of marketing is to present attractive offers that are likely to be of interest to customers, with the ultimate aim of promoting the interests of the retailer. In other words, it is inherent in the nature of capitalism generally that retailers will seek to influence the behaviour of customers in order to benefit the financial interests of sellers and producers, and it has long been recognized that the design and placement of products in-store can have significant effects on customers' purchasing behaviour. But, despite these similarities between marketing practices in the pre-digital age and those involved in mass predictive personalization, it is important to recognize the powerful, subtle, and typically subliminal way in which these data-driven, predictive profiling systems operate. They employ particular kinds of 'nudging' techniques,21 which intentionally seek to exploit the systematic tendency of individuals to rely on cognitive heuristics or mental short-cuts in making decisions, rather than arriving at them through conscious, reflective deliberation. It is this intentional exploitation of the cognitive weaknesses of individuals, with the aim of influencing them to behave in ways desired by the choice architect, which underpins their deceptive qualities.22 Not only are nudging techniques problematic because they can be understood as manipulative and lacking in transparency,23 but, when used for the purposes of mass predictive personalization, their manipulative power is enhanced. In particular, networked data-driven profiling techniques make it possible for each user's choice architecture to be continuously reconfigured in real time in three directions: first, through continuous refinement of the user's choice environment in response to changes in the user's behaviour and environment that the system designer has identified as relevant to the user's decision-making, via algorithmic analysis of the user's constantly expanding data profile; secondly, by feeding data back to the system designer, where it can be collected, stored, and repurposed for other algorithmic applications; and thirdly, by continuous monitoring and refinement of the individual's choice environment in light of population-wide trends, made possible by the population-wide surveillance upon which mass personalization techniques rely.
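These three reconfiguration channels can be sketched as a single update loop. The sketch below is schematic only: the data structures and function names are invented for illustration and do not describe any actual platform's implementation. Its purpose is to show how one recorded interaction simultaneously refines the individual's own choice environment, stocks a repurposable data store, and shifts the population-wide signal that shapes everyone else's rankings:

    # Schematic sketch of the three-way reconfiguration loop described above
    # (all names and structures hypothetical).
    from collections import defaultdict

    user_profiles = defaultdict(list)     # per-user behavioural traces
    data_warehouse = []                   # channel (2): repurposable interaction log
    population_trends = defaultdict(int)  # channel (3): aggregate cross-user signal

    def rank_offers(user_id, offers):
        # Channel (1): weight each offer by this user's own engagement history,
        # breaking ties with population-wide popularity (channel (3)).
        personal = defaultdict(int)
        for event in user_profiles[user_id]:
            personal[event["category"]] += 1
        return sorted(
            offers,
            key=lambda o: (personal[o["category"]], population_trends[o["category"]]),
            reverse=True,
        )

    def record_interaction(user_id, event):
        user_profiles[user_id].append(event)       # (1) refines her next ranking
        data_warehouse.append((user_id, event))    # (2) stored for later repurposing
        population_trends[event["category"]] += 1  # (3) nudges everyone's rankings

    record_interaction("u1", {"category": "travel", "clicked": True})
    offers = [{"id": 1, "category": "travel"}, {"id": 2, "category": "books"}]
    print(rank_offers("u1", offers))  # the travel offer is now ranked first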
Data-driven nudging techniques are, therefore, particularly nimble, unobtrusive, and highly potent, in ways that qualitatively enhance their manipulative power. I have referred to this phenomenon elsewhere as 'hypernudging',24 and Lanzing argues that it can be understood as threatening both informational and decisional privacy.25

Fear #3: It systematically marginalizes and excludes 'low-value' individuals

The consumer empowerment rhetoric employed by advocates of data-driven personalization highlights the capacity of these services to filter out and remove irrelevant services and promotional offers from the informational choice environment of users. While this may be beneficial to users, it conceals the underlying process that such personalization entails: it is the users themselves who are being sifted, sorted, and scored.26 As Rieder has observed, the collapse of traditional social structures of class, religion, and lineage, and the associated heterogeneity in individual lifestyles and preferences, creates acute challenges for marketers and retailers wishing to engage in customer segmentation.27 Accordingly, one of the great attractions of data-driven profiling techniques is their capacity to 'make the social legible again' by 'reinstalling mastery over societies that continuously diversify, creating differentiation that no longer conform to traditional groupings and categorization', which can be understood as the 'raison d'être' of contemporary data analytics.28 Although those individuals who score highly in these algorithmic rankings are likely to benefit in the form of generous and attractive offers and opportunities, those who score poorly, and are thus deemed poor prospects by marketers and retailers alike, are likely to be disadvantaged and disempowered by the turn to mass personalization. As Gandy has observed:

    Part of the difficulty with the collection and use of personal information to support the marketing and sale of a variety of goods and services rests in the fact that personal information is not only used to include individuals within the marketing scan, but may also be used to exclude them from other life choices linked to employment, insurance, housing, education and credit.29

Almost a quarter of a century ago, Gandy had already warned of the dangers of what he called the 'panoptic sort', through which individuals are monitored, identified, and classified for the purposes of enabling retailers and advertisers to direct their efforts towards groups they believe represent good investments while ignoring those who are deemed unlikely to yield a profit.30 For individuals who are systematically excluded from a wide range of opportunities and facilities, many of which have very significant effects on their individual welfare, mass personalization threatens to diminish their personal autonomy. According to the legal philosopher Joseph Raz, autonomy not only requires the capacity to plot one's own life, but also that individuals have a range of acceptable options from which to do so.31 Thus, by foreclosing options that would otherwise be available to the individual in the absence of mass personalization, data-driven profiling techniques can be understood as narrowing the range of worthwhile opportunities available to individuals, and this, in turn, curtails their autonomy.32 In societies committed to individual freedom, the autonomy-diminishing impact of mass personalization should be a real cause for concern.
One of the most problematic characteristics of the data-driven profiling techniques upon which mass personalization relies is their opacity, arising from the highly complex nature of these socio-technical systems, from the trade-secret protection that often applies to the algorithms developed by commercial providers and through which individuals are sorted and scored, and from the way in which these systems operate automatically, seamlessly, and unobtrusively, integrated into users' daily routines. This makes it practically impossible for individuals to understand why particular services are offered to them at a particular time and on particular terms, or to identify and understand how and why they are being profiled in a certain way and with what consequences for the resulting personalized offers and services they receive. As a result, the opportunities for individuals to challenge or otherwise contest the way that they have been profiled, sorted, and scored, particularly when those profiles have been constructed on the basis of erroneous or otherwise inaccurate data, are extremely limited.33 As Rieder has observed, algorithmic tools offer an 'aura of objectivity, rationality, and legitimacy' that is derived from their empirical underpinnings.34 Moreover, cases of systematic exclusion from opportunities may be virtually impossible for the excluded individuals to detect. Take, for example, the study by Carnegie Mellon researchers, which found that male users were shown high-paying job ads six times more often than their female counterparts, based on an automated algorithmic assessment that concluded that women were 'not interested' in high-paying jobs because they had historically not been employed in high-paying occupations.35 How could female users possibly detect that these ads were being systematically withheld from them? Although the EU General Data Protection Regulation introduces a number of rights that might assist data subjects seeking information of this kind, many of these rights apply only to algorithmic decision-making systems that have 'significant effects', and it remains unknown whether promotional offerings and advertisements would be interpreted to fall within the scope of this term.36 For Turow, these profiles (which he calls 'reputations') play a central role in determining the promotional content (offers) that a person encounters online, and these reputations can be hard to shift:

    That marketers and media producers employ these activities hidden from the vast majority of viewers is deeply problematic—for the reputations of individuals may bear little on an individual's self-perception, while viewers have little opportunities to correct misconceptions.37
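Disparities of the kind uncovered by the Carnegie Mellon study are typically surfaced only by external audits that compare exposure rates across controlled user groups; the cited study used automated agents with simulated profiles for exactly this purpose. A minimal sketch of such a measurement follows, with invented impression counts standing in for collected data:

    # Minimal sketch of an ad-exposure audit (counts invented for illustration;
    # Datta et al. gathered real impression data using automated agents with
    # controlled, simulated profiles).
    impressions = {
        "male":   {"high_paying_job_ad": 1800, "total": 10000},
        "female": {"high_paying_job_ad": 300,  "total": 10000},
    }

    def exposure_rate(group: str) -> float:
        g = impressions[group]
        return g["high_paying_job_ad"] / g["total"]

    ratio = exposure_rate("male") / exposure_rate("female")
    print(f"exposure ratio (male/female): {ratio:.1f}x")  # 6.0x with these counts

The individual user, seeing only her own personalized feed, has no access to the comparison on which such a ratio depends; only an observer positioned across many profiles can compute it.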
Distributive justice, community, and social solidarity

Fear #4: It perpetuates structural inequalities and exacerbates distributive injustice

The aggregate and cumulative effect of mass personalization over time is likely to contribute to, and exacerbate, social inequality and distributive injustice. By providing marketers and retailers with the technological capacity to segment consuming publics into distinct groups, based on their relative value and profitability to the retailer, these technologies enable sellers to engage in a commercially rational form of social sorting, seeking to cultivate and attract the choicest customers (the so-called 'strong market' in economic parlance) and exclude low-value customers (the 'weak market'). Akiva Miller argues that, if a considerable part of the information used as the basis for personalization pertains to persistent qualities, such as the neighbourhood in which consumers live, their income level, and their highest level of educational attainment, then such practices are likely to result in some individuals routinely suffering the same poor-quality offers and treatment across different sellers. Commenting on the use of personalized pricing, Miller fears that, over time, these practices threaten to create a market divide between a class of consumers who receive lavish personal attention and preferential treatment, including offers for the best products, services, and prices, and a class of consumers algorithmically assessed as 'low value', who are systematically ignored. In other words, Miller fears that retailers will, in essence, rationally seek to 'cherry-pick' the rich and 'lemon-drop' the poor. In many ways, mass personalization can be understood as pursuing the logic of market segmentation to its ultimate destination, in which each individual user is reduced to a unique market. The overarching logic of market segmentation is to identify and reinforce differences between consumers as the basis for identifying the services deemed (from the perspective of the service provider, rather than the user) most suitable to their inferred needs, preferences, traits, and interests. While customer segmentation strategies have understandable appeal to service providers (assuming that the cost of pursuing these strategies does not outweigh their value to the provider), segmentation in the online world can produce and reinforce societal inequalities because of its inevitable exclusionary effects, in ways that may be much harder to detect and combat than the segmentation strategies pursued in traditional 'bricks and mortar' retail environments. In the offline world, market segmentation does not prevent consumers in one market segment from accessing the services offered in another market segment on the same terms and conditions available there. For example, supermarkets such as Waitrose and Whole Foods Market seek to target and appeal to high-end, high-income customers by stocking a greater range of fresh organic produce, luxury and imported groceries, gourmet ready meals, and an extensive range of wines, while others, such as Aldi and Lidl, target low-end customers, offering a more limited product range across a set of basic groceries, stocked in higher volumes and offered at lower prices. Yet, in the offline world, a customer who routinely shops at Aldi may nonetheless shop at Waitrose every now and again, simply by entering the store and availing himself of the full range of products available in-store at the same prices and on precisely the same terms as those offered to customers who routinely shop there (leaving aside any loyalty discount that the latter might receive). In other words, the segmentation of customers does not prevent customers from one segment from taking advantage of offers targeted at a quite different customer segment, simply by shopping in-store. In the online environment, however, this is no longer possible. Although low-end consumers are free to browse the websites aimed at affluent consumers, they have no way of knowing whether the prices at which goods and services are offered to them are the same as those offered to others, because they only see their own 'personalized' version of the retailer's site.
This not only partitions consumers into discrete segments but also segregates and excludes them from the offers made available to other consumers, of which they are likely to be completely unaware. Accordingly, the net cumulative effect of mass personalization at scale is to perpetuate and reinforce existing forms of systematic societal discrimination; yet, these inherently discriminatory and exclusionary logics and effects may proceed unnoticed, concealed and obscured by the rhetoric of personalization.38

Fear #5: It fuels a culture of narcissism, prioritizing economic morality over social equality, thus eroding solidarity and community

My fears about the capacity of data-driven mass personalization techniques to exacerbate social inequality and undermine collective justice point to deeper misgivings about what the cumulative and aggregate effect of these practices may portend for us, both in terms of our moral character as individuals and our collective culture as a moral and political community. While the tech and digital marketing industries portray personalization strategies as a boon for users, these strategies seek pre-emptively to cater to the individual's predicted needs, desires, interests, and tastes through reliance on algorithmic sorting, scoring, and ranking. Mass personalization is, therefore, likely to create and exacerbate distributive injustice (see Fear #4) by furthering and fostering processes of 'social fragmentation' through which individuals are classified, categorized, and mathematically sorted in ways that reinforce 'differences' between individuals and groups across society. Although it is impossible to prove that the increasing polarization and social segmentation evident in the results of political elections across many industrialized and ostensibly democratic states in recent years, in which extremist political parties have gained ground, can be attributed to the increasing use of data-driven profiling to personalize media content pushed to users at scale, it is far from implausible to think that these techniques have significantly contributed to it. The socially corrosive effects of this fragmentation are evident in the dangers associated with the political manipulation and micro-targeting that was allegedly utilized at scale in recent presidential election and referendum campaigns in the USA and UK, respectively. Yet, it is not merely the effect on political processes that forms the focus of my fifth and final fear. Rather, one of my gravest concerns about mass personalization is the elusive yet highly consequential effect of these practices on both our character and mindset as individuals and on the collective moral culture of our societies. Population-wide and pervasive data-driven personalization prioritizes values of economic morality over values associated with community, particularly the value of equality, in ways that may weaken and marginalize our commitment to equality as both a matter of principle and of social practice. My fear is that, deployed at scale, the way in which these data-driven personalization techniques are being applied in contemporary societies, to maximize the economic 'value proposition' for the organizations and institutions that utilize them, is likely to have two significant and troubling consequences. Firstly, it fosters a cult of the individual, signifying a shift in the culture of capitalism beyond material consumerism towards narcissism.
Secondly, mass personalization may over time so corrode social solidarity and so loosen our social bonds that it could threaten the very nature of our collective character as a moral and political community. Each of these two concerns is briefly discussed in turn, beginning with the rise of narcissism. In one respect, the emergence of mass personalization can be understood as a straightforward and unexceptional manifestation of the logic of capitalism, through which resources and opportunities follow the logic of market forces. Yet, it cannot be denied that the widespread practices of the marketing industry have significantly affected our political and social culture, prompting economic sociologists and political economists to examine the broader social and political implications of what is sometimes referred to as 'varieties of capitalism'. In many ways, the introduction of modern marketing techniques from the 1950s onwards served to inform a logic of industrial capitalism associated with mass production, consumption, and advertising. As Draper and Turow observe, there were already moves towards 'market segmentation' in the 1950s, marking a shift in the logic of industrial capitalism away from mass production towards mass customization, which, by the end of the 20th century, had become entrenched.39 At the same time, the logic of the neoliberal policies resting on an unshakable belief in laissez-faire free market capitalism pursued by the Thatcher and Reagan administrations on both sides of the Atlantic from the early 1980s onwards is widely seen as accelerating and deepening the rise of 'consumerism', in which the increasing consumption of goods is seen as economically desirable and in which individuals are increasingly preoccupied by, and inclined towards, the buying of consumer goods, such that material wealth and possessions become an important indicator of self-worth and social status.40 My fear is that, because the predictive personalization strategies that surveillance capitalism fuels are all oriented towards fulfilling the idiosyncratic (albeit inferred) preferences, tastes, desires, and inclinations of each individual, they may serve to reinforce and legitimate each individual's belief in the central importance of their own personal tastes and inclinations. In so doing, mass personalization may foster the rise of widespread narcissism. Unlike consumerism, within a culture of narcissism it is not the acquisition of material possessions per se that is valorized, but rather the satisfaction of the individual's idiosyncratic tastes, preferences, views, and inclinations. Because data-driven profiling enables mass personalization to extend well beyond the sphere of commercial consumption into the larger political and social sphere, I worry that it will foster and legitimize an attitude of narcissistic self-obsession, with detrimental effects not only for the individual but also for society more generally. In particular, I worry that the rise of mass personalization may precipitate the rise of mass narcissism, which might be so corrosive of social solidarity, and so loosen our social bonds, that the very nature of our collective character as a moral and political community might be called into question.
Over time, the aggregate and cumulative effect of increasingly narcissistic tendencies and traits amongst individuals may fuel a larger narcissistic culture, in which individuals come to believe in the validity of their own self-obsession and entitlement, and in which the individual's subjective needs, perceptions, and desires are celebrated, concurrently fostering social isolation and undermining a sense of shared identity and collective purpose.41 Viewed in normative terms, personalization strategies focus on isolating and highlighting 'differences' between individuals in ways that emphasize 'particularity over generality'. This is the core logic underpinning market segmentation and data-driven personalization strategies, which seek to identify particular features of the targeted individual that are predictively identified as highly correlated with particular offers or messages and, thus, more likely to be of direct relevance and particular interest to the individual in question. Yet, as Frederick Schauer has observed, practices based on generality and generalization may have important community-building functions:

    … at times, though certainly not at all times, generality and generalization may bring positive virtues. Generality, by excluding local variation (and not just in the geographic sense of 'local'), can serve important binding and community-creating functions … the rise of Europe as a power, a force, an idea, and a community has been a positive development, even if that positive development has come at the cost of some loss of local identity and national autonomy. By imposing uniformity in the face of diversity, harmonization in the face of difference, and consistency in the face of variance, generality flattens or dampens the variations among people, places, groups and events, often to the discomfort of those whose particular claims and situations are flattened, but often, as in Europe, to the benefit of the larger group.42

It is this social binding function, which inheres in and is essential for the idea of a community, that may provide one of the strongest arguments in favour of equality. As Schauer puts it, 'If we are not a community of equals, perhaps we are equals because we are a community'.43 He continues:

    The affinity between community and equality stems from our recognition of the real bite of the idea of equality. That bite emerges not from the fact of descriptive equality, but … from the fact that most of the interesting exemplars of equality exist against the background of important inequalities. Rather than being important because it treats like cases alike, equality becomes important precisely because it treats unlike cases alike … Even once we have recognized this fact, it remains puzzling for many people why we would want to treat unlike cases alike. Indeed, most of the sceptical writing about equality has maintained that equality for equality's sake has little to be said for it, and that typically what masquerades as equality is something else entirely. But perhaps there is something to be said for equality for equality's sake, and perhaps that something is closely related to the idea of community. That is, perhaps equality for equality's sake is important precisely because the levelling or Procrustean effect of equality for equality's sake is just what creates communities. To abstract away from our differences is to bring us closer together and to generate a focus on our similarities.
    When we ignore or abstract away from our differences, we necessarily increase our emphasis on shared standards and equal treatment … When community is pursued, one important way of pursuing it, and one consequence of its pursuit, is the emergence of a generality whose effect, in turn, is to treat unlike cases alike in the service of community.44

By shifting the balance of values away from the collective interest, and its associated commitment to generality, in favour of particularity, mass personalization might thereby weaken our commitment to the principle and practice of equality. My fear is that it will weaken and dilute the communal and solidaristic character of our societies, such that our treatment of individuals persistently and systematically prioritizes the differences between us rather than our similarities and shared features. This necessarily entails a significant shift away from the community-building nature of generality, threatening to erode our sense of common purpose, community, and solidarity while reinforcing social differences and exacerbating social stratification. In expressing this fear, I am not suggesting that we should flatten and ignore all the differences between us, or that there is no individual and social value in seeking out and associating with others whose tastes, behaviours, interests, and political outlooks are similar to our own. As Turow has observed in relation to the increasing customization of media environments that data-driven technologies have enabled, a healthy society needs both (i) 'segment-making media', which encourage 'small slices of society to talk among themselves',45 and (ii) 'society-making media', which have the potential to promote conversation across segments.46 Each form of media has its own benefits and drawbacks for supporting a healthy society. While segment-making media tend to offer their audiences a narrow set of views, society-making media have a track record of marginalizing particular voices. Yet, together, both create the possibility for engagement within and across interest groups.47 Similarly, healthy societies require both recognition of the differences between individuals and groups that may warrant particular treatment and recognition of their similarities and common features, in respect of which equality of treatment is both necessary and justified. My fear is that, by encouraging and legitimizing an attitude of narcissism, in which individuals become preoccupied with themselves and the immediate satisfaction of their particular interests, desires, needs, and wants, there is a real risk that, over time, the turn to mass personalization might erode our collective character as a community altogether. A narcissist is, by definition, not merely preoccupied with him- or herself, but considers him- or herself to be the ultimate object of admiration, without any interest in or concern for others. If we, as a community of individuals, become increasingly narcissistic in our outlook, this might ultimately threaten the very nature of collective life as a community. Indeed, a community of narcissists is a contradiction in terms. It is a collection of isolated individuals, not a community of equals bound to each other through a shared commitment to the recognition of their common humanity, which demands both equality of treatment and an other-regarding attitude of mutual respect and self-restraint.
Conclusion

The breath-taking pace of technological change in recent decades is fuelling social transformations that are likely, in retrospect, to justify their description as 'revolutionary'. These include potentially profound changes to our modes and cultures of both production and consumption. In this article, I have sought to reflect critically on the rise of the mass predictive personalization that data-driven service delivery in a hyperconnected age makes possible. While the tech and marketing industries champion the benefits of these services in enabling users to enjoy a more 'meaningful' experience, I have suggested that all that glitters is not gold.48 In particular, I have identified five fears that the rise of mass predictive personalization may portend for important collective values and commitments, which can be broadly understood as concerns about fairness and justice on the one hand and the erosion of social solidarity and loss of community on the other. In raising these concerns, I hope to provoke critical discussion and reflection that will motivate more penetrating research and place questions of this kind more firmly onto the policy, public and political agenda. It is vital that, in the face of so much hype, hope and fear associated with our new data-driven technologies, we are actively involved in shaping our technological future by asking ourselves, ultimately: what is the good digital life? Only then can we identify whether the vision of the good digital life currently offered to us by Silicon Valley and its satellites is one that we willingly wish to embrace and, if not, identify the social and technical governance mechanisms that are needed to reorient us and to nurture and sustain the foundational values that are vital for individuals and communities to flourish in a data-driven age.

I am grateful to Fiona de Londras for comments on earlier drafts. All errors are entirely my own.

Footnotes

1 danah boyd and Kate Crawford, 'Critical Questions for Big Data' (2012) 15 Information, Communication and Society 662.
2 José van Dijck, 'Datafication, Dataism and Dataveillance: Big Data Between Scientific Paradigm and Ideology' (2014) 12 Surveillance and Society 199. By offering services for 'free', this eliminates an important barrier to adoption faced by firms seeking to attract new customers with initially high uncertainty about their valuation of the services offered: Anja Lambrecht, 'The Economics of Pricing Services Online' in SN Durlauf and LE Blume (eds), The New Palgrave Dictionary of Economics (Palgrave Macmillan, UK 2013).
3 Shoshana Zuboff, 'Big Other: Surveillance Capitalism and the Prospects of an Information Civilization' (2015) 30 Journal of Information Technology 75.
4 Ibid.
5 Ibid, 80. See also World Economic Forum, Personal Data: The Emergence of a New Asset Class (2011).
6 Ibid.
7 The Deloitte Consumer Review, Made to Order: The Rise of Mass Personalisation (2015) accessed 16 August 2018; Amaan Sadiq, 'The Shift from Mass Production to Mass Personalisation' (CapGemini Consulting, 31 May 2018) accessed 16 August 2018.
8 Nora A Draper and Joseph Turow, 'Audience Constructions, Reputations and Emerging Media Technologies: New Issues of Legal and Social Policy' in R Brownsword, E Scotford and K Yeung (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford University Press 2017) 1143–67.
9 Sadiq (n 7).
10 Frank Pasquale, The Black Box Society (Harvard University Press 2015).
11 Rinie van Est and Joost Gerritsen, Human Rights in the Robot Age (Rathenau Instituut 2017) 43.
12 Richard Yonck, Heart of the Machine: Our Future in a World of Emotional Artificial Intelligence (Arcade Publishing 2017).
13 Richard Yonck, 'Welcome to the Emotion Economy – Where AI Responds to, and Predicts, Your Feelings' Fastcompany Newsletter (2 March 2017) accessed 16 August 2018.
14 Christopher Townley, Eric Morrison and Karen Yeung, 'Big Data and Personalized Price Discrimination in EU Competition Law' (2017) 36 Yearbook of European Law 683; Akiva A Miller, 'What Do We Worry about When We Worry about Price Discrimination?: The Law and Ethics of Using Personal Information for Pricing' (2014) 19 Journal of Technology Law and Policy 41.
15 Townley, Morrison and Yeung, ibid, 37.
16 Ariel Ezrachi and Maurice E Stucke, Virtual Competition (Harvard University Press 2016) 114.
17 Townley, Morrison and Yeung (n 14) 37–38.
18 Concerns about the use of 'persuasive technologies' have been a central theme in the recent observations of the Parliamentary Assembly of the Council of Europe. See, for example, van Est and Gerritsen (n 11) and Parliamentary Assembly of the Council of Europe, Report on Technological Convergence, Artificial Intelligence and Human Rights (2017) by Rapporteur Jean-Yves Le Déaut, accessed 14 September 2018.
19 Karen Yeung, '"Hypernudge": Big Data as a Mode of Regulation by Design' (2017) 20 Information, Communication and Society 118.
20 James Grimmelmann, 'The Platform Is the Message' (2018) 2 Georgetown Law Technology Review 217.
21 Richard Thaler and Cass Sunstein, Nudge (Penguin Books 2008).
22 Karen Yeung, 'Nudge as Fudge' (2012) 75 Modern Law Review 122.
23 Ibid.
24 Yeung (n 19).
25 Marjolein Lanzing, '"Strongly Recommended": Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies' (2018) Philosophy and Technology 1.
26 Draper and Turow (n 8).
27 Bernhard Rieder, 'On the Diversity of the Accountability Problem: Machine Learning and Knowing Capitalism' (2015) 2 Digital Culture and Society 39.
28 Ibid, citing Mark Andrejevic, Infoglut: How Too Much Information Is Changing the Way We Think and Know (Taylor & Francis 2013) 11.
29 Oscar H Gandy, 'Coming to Terms with the Panoptic Sort' in David Lyon and Elia Zureik (eds), Computers, Surveillance and Privacy (University of Minnesota Press 1996) 132.
30 Oscar H Gandy, The Panoptic Sort: A Political Economy of Personal Information (Westview 1993).
31 Joseph Raz, The Morality of Freedom (Oxford University Press 1986) 398.
32 Ibid.
33 For disturbing examples of the effects on individuals of inaccurate algorithmic profiling, see Bill Davidow, 'Welcome to Algorithmic Prison—The Use of Big Data to Profile Citizens is Subtly, Silently Constraining Freedom' The Atlantic (20 February 2014) and Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin, UK 2016).
34 Rieder (n 27).
35 Amit Datta, Michael Carl Tschantz and Anupam Datta, 'Automated Experiments on Ad Privacy Settings' (2015) 1 Proceedings on Privacy Enhancing Technologies 92.
36 Lilian Edwards and Michael Veale, '"Slave to the Algorithm"? Why a "Right to an Explanation" is Probably Not the Remedy You Are Looking For' (2017) 16 Duke Law and Technology Review 18.
37 Draper and Turow (n 8) 1181.
38 Rieder (n 27).
39 Draper and Turow (n 8).
40 Alain de Botton, Status Anxiety (Knopf Doubleday Publishing Group 2008).
41 Robert D Putnam, Bowling Alone (Simon & Schuster 2001).
42 Frederick Schauer, Profiles, Probabilities and Stereotypes (Harvard University Press 2003) 289.
43 Ibid 296.
44 Ibid 296–98.
45 Joseph Turow, The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth (Yale University Press 2012) 194.
46 Ibid.
47 Draper and Turow (n 8) 1149.
48 Anthony Danna and Oscar Gandy, 'All That Glitters Is Not Gold: Digging Beneath the Surface of Data Mining' (2002) 30 Journal of Business Ethics 373.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com