TY - JOUR
AU - Gray, Mia
AB - The relationship between technology and work, and concerns about the displacement effects of technology and the organisation of work, have a long history. The last decade has seen the proliferation of academic papers, consultancy reports and news articles about the possible effects of Artificial Intelligence (AI) on work—creating visions of both utopian and dystopian workplace futures. AI has the potential to transform the demand for labour, the nature of work and operational infrastructure by solving complex problems with high efficiency and speed. However, despite hundreds of reports and studies, AI remains an enigma, a newly emerging technology, and its rate of adoption and implications for the structure of work are still only beginning to be understood. The current anxiety about labour displacement anticipates the growth and direct use of AI. Yet, in many ways, at present AI is likely being overestimated in terms of impact. Still, an increasing body of research argues the consequences for work will be highly uneven and depend on a range of factors, including place, economic activity, business culture, education levels and gender, among others. We appraise the history and the blurry boundaries around the definitions of AI. We explore the debates around the extent of job augmentation, substitution, destruction and displacement by examining the empirical basis of claims, rather than mere projections. Explorations of corporate reactions to the prospects of AI penetration, and the role of consultancies in prodding firms to embrace the technology, represent another perspective on our inquiry. We conclude by exploring the impacts of AI-driven changes in the quantity and quality of labour on a range of social, geographic and governmental outcomes.
Artificial Intelligence, bias in machine learning, automation, geography of technology, job displacement and growth

When machines think for us: the consequences for work and place

To what extent is Artificial Intelligence (AI) fundamentally reshaping our relationship to work? Will AI affect how and where we work? The last decade has seen the proliferation of academic papers, consultancy reports and news articles about the possible effects of AI on work—creating visions of both utopian and dystopian workplace futures. AI has the potential to transform the demand for labour, the nature of work and operational infrastructures, by solving complex problems with high efficiency and speed. However, despite a proliferation of reports and studies, AI remains an enigma, a newly emerging technology, and its rate of adoption and implications for the structure of work are still only beginning to be understood. While economic analysis tends to foreground AI’s potential to increase innovation, productivity rates, output and the demand for labour, other research stresses issues of job loss, surveillance, and encoded and systemic biases. There is an ongoing debate over whether automation and AI will create mass unemployment, and the actual scale of potential labour displacement is disputed (Acemoglu and Restrepo, 2017; Autor, 2015; Brynjolfsson and McAfee, 2017). Oxford University scholars Frey and Osborne (2017) predict up to 47% of US jobs will be at a ‘high risk’ of computerisation by the early 2030s, while an Organization for Economic Cooperation and Development (OECD) study by Arntz et al. (2016) claims this figure is too pessimistic, finding only 9% of jobs across the OECD are automatable. More interestingly, only a few studies attempt to determine under what circumstances displacement might override increases in labour demand from growth.
In this vein, Acemoglu and Restrepo (2020) in this Special Issue of CJRES focus on how productivity increases could outweigh the displacement effect under the ‘right’ type of AI, and how better social outcomes are more likely when governments play an active role in shaping the direction of AI. The relationship between technology and work, and concerns about the displacement effects of technology and the organisation of work, have a long history. The displacement of labour by technology, in particular, is an old and recurring theme. The Luddites famously wrecked English textile machines in the 19th century to preserve their work. In the USA, a classic African American folksong pits the all-too-human efforts of John Henry against the steam-powered drilling machine (US Library of Congress, n.d.). The theme also resonates throughout the history of political economy. Marx put technological change, ownership and organisational structure at the centre of his critique of capitalism. Schumpeter told a story of the almost heroic inevitability of economic growth through the diffusion of technological innovation which destroys the previous round of jobs, skills and built capital. But, over time, other observers saw a more complicated relationship between work and technology. Although Keynes (2010) stressed the role of technology in mass unemployment at the height of the Great Depression, he also famously argued that technology-led productivity increases could usher in a dramatically shortened workweek, the net result being more leisure and culture. So, although these themes are not new, the emergence of AI technologies has created a renewed emphasis on—and societal anxiety over—the issue of potential mass worker displacement. Until recently, the application of machine automation primarily focussed on particular industries and tasks, such as the use of robotics for factory tasks that require highly repetitive movements or significant physical force.
But the fusion of AI and automation constitutes a new and unprecedented development, which might mean that the threat of job loss is more severe, wide ranging and potentially disruptive. By implication, maldistribution of income between and within countries, in both the global south and the global north, would increase. In developing countries, there are particular concerns that industrialisation, once considered a secure means of development, may fail to produce additional jobs and income—with robotics completing the dull, dangerous and dirty tasks formerly done by low-skilled workers (Felipe et al., 2019; Frey and Rahbari, 2016). Additionally, in both the global north and south, there is an increasing number of examples in services and in the public sector in which integrated applications of automation, machine learning and AI capabilities will affect middle-skill, white-collar occupations. Legal services, data operations, public service delivery, medical and customer service occupations are all potential targets for transformation. A lack of clarity about what AI actually is adds to the uncertainty around the effects of its applications. Much of the current anxiety about labour displacement anticipates the growth and direct use of AI, but in many ways, AI might be overestimated in terms of impact. According to some scholars, AI technologies are more substantial in our imaginations than in our workplaces (see Vanderborght, 2019; for a moderated perspective, see Kenney and Zysman, 2020). Spencer and Slater (2020) stress that the potential of AI technologies lies not in removing low-skilled work, but instead in increasing low-skilled and poorly paid work. Terms such as ‘AI’, ‘machine learning’ and ‘automation’ have become conflated—not just in the popular press, but in academic work too. Dignam (2020) reminds us that the idea of AI as sentient human consciousness to date exists only in science fiction.
AI has, in many ways, become a catch-all term for a range of technological change at work. Research on services (Brooks et al., 2020; Susskind and Susskind, 2015) and manufacturing (Waldman-Brown, 2020) suggests that small business owners and managers may talk of AI when they are actually describing new applications of automation. That is, different technological forms of automation and machine learning are understood as AI, even if the use of AI (real machine intelligence) itself is more limited. Moreover, the business consultancy sector’s projections of the growth of AI often seem inflated—a linear projection of current trends—and are difficult to assess. We examine these consultancies not as neutral observers of technological adoption, but as important actors in driving a demand for, and use of, AI in the workplace. However AI is measured and understood, it is unlikely to be evenly utilised across the economy. Despite this, a common assumption in the literature is that the consequences of AI will be homogeneous across a nation. Many accounts of AI hold that the nation-state is the relevant unit of analysis and that the relationship between AI and labour displacement or productivity increases is unchanging across a nation—assuming the link is widespread and uniform. Nevertheless, there is an increasing body of research which explores automation, AI and machine learning, and argues that the consequences for work will be highly uneven and depend on a range of factors, including place, economic activity, business culture, education levels and gender, among others. We contend that the importance of place has not received enough attention in the literature on AI and work. Although country rankings and indices on ‘readiness’ for AI adoption have been produced (see Oxford Insights, 2019), these analyses rarely explore causal mechanisms. Place matters, first, because of the importance of existing regional sectoral patterns.
Industrial processes and the production of services are delivered and concentrated in particular areas (see Acemoglu and Restrepo, 2017, for an analysis of robotic applications in the USA; also see Leigh et al., 2020). Automation may well be more straightforward, generally speaking, in industrial over service activities; hence, industry mix is likely to be important. But, just as important, the organisation of work culture will create a context that is more or less susceptible to being a target of automation (see Waldman-Brown, 2020). We already know from the literature on innovation and international business that industry customs and practices are embedded within national, regional and sectoral industrial practices (Gertler, 2004; Saxenian, 1994; Yeung, 2016). For this reason, production systems vary by place. For example, a report by the consultancy firm PricewaterhouseCoopers (PwC, 2020) argues that automation will be less impactful on employment levels in Japan compared with Germany. Their argument suggests that, even though the two countries’ sectoral industrial compositions are similar, Japanese industry has a lower proportion of manual tasks relative to management. Other nationally based systems, such as regulation and education, will also affect the use of AI. Regulatory structures—usually nationally based—are considered important factors in shaping future technology applications. Variation in regulation (of AI itself, of labour practices, of privacy laws and of intellectual property rights), in labour relations, and in taxation (R&D tax breaks, taxes on automated systems, etc.) may affect working practices in different ways around the globe. This means that the scope of automation or AI will be neither equally economically viable nor necessarily evident or possible in all contexts.
While it seems inevitable that the scope of AI adoption will unevenly affect job loss or growth, the distribution of job opportunities may also be affected by the growth of AI in business services. Both Zhou (2019) and Dignam (2020) explore the gendered and racial biases embedded within AI algorithms. Studies suggest that the use of technologies, such as facial and voice recognition, automated screening of CVs and resumes, and targeted profiling, may inadvertently narrow the pool of successful applicants in profoundly prejudiced ways (Criado Perez, 2019; Tatman and Kasten, 2017). Thus, the spread of AI and automation in general business services, such as Human Resource functions around job acquisition, may have significant consequences for jobseekers long before they make contact and engage with a live person. For example, automated processes that scan CVs may create a barrier to job acquisition that reflects programmers’ conscious or unconscious bias, or simply the poor quality of the underlying data they are asked to model. The role of AI in reproducing bias may compound pre-existing organisational problems with gender, ethnic and class inequalities. Education is likely to be another key to understanding the effects of AI on work. Generally, less-educated workers are more vulnerable to the effects of automation compared with those engaged in more complex and discretionary tasks. For example, in the financial and insurance sectors, repetitive data-intensive operations may be more automatable in the USA than in the UK due to the difference in the average education levels of finance and insurance professions. In legal services, it is the paralegal, less-skilled occupations that are at the greatest risk of displacement (Brooks et al., 2020). At present, it appears that men’s jobs are more vulnerable to automation, especially those that require lower educational attainment levels.
These jobs tend to be routine industrial tasks subject to mechanisation and routinisation (Acemoglu and Restrepo, 2017). However, this may change in the future. Women dominate many care sector jobs in the so-called ‘high touch’ occupations, where emotional and cognitive labour are significant (Hochschild, 1983; McDowell, 2011). These jobs do not appear immediately threatened by technological encroachment due to their face-to-face qualities. In the short run, experts consider these tasks less amenable to automation. In the medium term, however, emerging technology applications aim to augment even these service functions with machine assistance and are likely to interact with and produce new gendered divisions of labour. The rest of this article consists of four sections. First, we appraise the history and the blurry boundaries around the definitions of AI. Second, we explore the debates around the extent of job augmentation, substitution, destruction and displacement by examining the empirical basis of claims, rather than mere projections. Third, we analyse corporate reactions to the prospects of AI penetration, and the role of consultancies in prodding firms to embrace the technology by suggesting those that fail to act will fall irreparably behind the competition. Fourth, we explore the impacts of AI-driven changes in the quantity and quality of labour on a range of social, geographic and governmental outcomes. Finally, we briefly touch on some of the themes accompanying the geographic expression of these technologies, both in terms of their likely location of production and in terms of their geographic expression in user settings.

Definition and history of AI

Underpinning the current fascination with advancing technologies, including AI, machine learning and automation, is a blurry understanding of what these technologies are, how they work, their lineage in conjunction with prior generations of technology and how they interact with one another.
While there are many permutations on the definition of AI, for the sake of brevity, we define it as ‘a set of technologies that can imitate intelligent human behavior’ (KPMG, 2019, 3). Machine learning is an application of AI that provides systems the ability to learn and improve automatically from experience without being explicitly programmed to do so. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. Automation is a system of technology that enables a machine to perform a process based on programmable commands, combined with feedback controls that ensure the process executes according to the programmed instructions. Applied today, the combination of these three technological capabilities is transforming work and working practices across the spectrum of sectors comprising the modern economy.

AI, machine learning and automation: the search to replicate human intelligence

As we explore three different and potentially interconnecting technology capabilities, we begin by answering a simple question: what is human intelligence? Human intelligence comprises several intellectual activities—from simple computation to data processing, to pattern recognition, as well as the ability to solve problems, use judgment in solving them, demonstrate creativity by extending ideas in several directions and, importantly, communicate the results of thinking. During the Second World War, Alan Turing, a British mathematician, outfoxed the German code-generating machine, the Enigma, enabling the British codebreaking station to inform British and American naval operations of the location of U-boats attempting to break the supply lines to Britain. In 1952, Turing went on to develop two computers, and his paper published in 1950 is considered the first description of how machines might think like humans (DiSalvo, 2012).
The challenges of automation emerged from the 1950s onward and were accompanied by new technological developments in AI and machine learning. The early start of AI traces back to the quest to build a machine to replace the human mind in completing complex mathematical calculations. During the Second World War, the US military required an ability to generate rocket trajectories using calculus and advanced mathematics. Women played a central role: ‘behind the computer’s apparent intelligence was the arduous and ground-breaking programming work of a team of six women, who themselves had previously worked as “computers”’ (Schwartz, 2019). Though the work was considered drudgery, these highly skilled women mathematicians took up the task of programming the operating code necessary to direct the first computer. After the war, social scientist Herbert Simon and computer scientist Allen Newell designed a programme built to mimic the problem-solving skills of the human mind, laying out the basic logic and predicting that these new technologies would eventually approach the capabilities of the human brain. Already in the 1950s and 1960s, technological advances moved towards enabling machines to undertake specific tasks, including visual and speech recognition, pattern recognition in unstructured data and decision-making based on experience and current information. At the same time, other researchers, including Minsky (1961), Papert (1988), McCarthy (1959), Newell et al. (1957) and Newell et al. (1983), pursued various technological paths, all aimed at demonstrating that the time would come when machines would be able to think like humans. In the 1960s and 1970s, researchers regularly over-promised and under-delivered on the creation of devices that could be said to think. It was not until the 1990s that researchers openly suggested the quest for AI was possible, aided by increasingly powerful computers that could demonstrate they were better than humans in computation and data processing.
With greater realism and more modest intentions, scholars turned their attention to replicating the functionality of the human brain for pattern recognition and prediction. Today, we distinguish between ‘general’ and ‘narrow’ AI (Acemoglu and Restrepo, 2020). Much of what companies, individuals and organisations currently pursue is a family of restricted applications, which are examples of narrow forms of AI. Many decision problems and activities of narrow AI represent examples of pattern recognition and prediction. These include facial and speech recognition, and pattern recognition in data that relies on familiarity with past patterns and utilises analogical reasoning to understand current information. More advanced and more speculative is Artificial General Intelligence, the intelligence of a machine that can understand or learn any intellectual task that a human being can. Such a capability does not currently exist, and scientists disagree on whether it will ever fully function (Vincent, 2018). Computers are still distant from the ability to reason like a human being. Over the last decade, significant advances have come about as a result of computing power and increasingly sophisticated algorithms that enable researchers to process massive amounts of unstructured data. Herein we find the connection to methods of machine learning, the statistical techniques that allow computers and algorithms to learn, predict and perform tasks from large amounts of data without being explicitly programmed. Robots often make use of narrow forms of AI and other digital technologies for processing data. They are programmed and trained to accomplish specific tasks by interacting with the physical world (moving around, transforming, rearranging or joining objects).
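The phrase ‘without being explicitly programmed’ can be made concrete with a minimal sketch. The Python toy below (entirely illustrative; it describes no system discussed in this article) estimates a decision rule, a slope and an intercept, from example data by least squares rather than having the rule hand-coded, and then generalises to an unseen input:

```python
# Illustrative toy: 'learning' a rule from data rather than coding it.
# The slope and intercept below are estimated from examples, so the
# resulting predictor was never explicitly given the rule y = 2x.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical past observations: inputs and observed outcomes.
xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]
slope, intercept = fit_line(xs, ys)

# The fitted rule generalises to an input it has never seen.
prediction = slope * 5 + intercept  # 10.0 for this data
```

Real machine-learning systems replace this single fitted line with models of millions of parameters, but the principle is the same: the behaviour is induced from data rather than specified in advance.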
Industrial robots are already widespread in many manufacturing industries and in some retail and wholesale establishments (see, for example, Acemoglu and Restrepo, 2020; Dignam, 2020). Their commercial use is quite specific. It centres on the automation of narrow tasks, substituting machines for certain specific activities and functions previously performed by humans.

The growth debate around jobs: augmentation, substitution, destruction and displacement

To understand whether AI creates or destroys jobs, we must assess how it is used in the production of goods and services. Flat or sluggish labour productivity rates have characterised most G7 countries since the global financial crash of 2008. Countries including the USA, Canada, the UK and Germany have experienced very low labour productivity increases, while, in contrast, more peripheral countries in the Baltics and Eastern Europe have shown much higher rates of labour productivity growth (OECD, 2019a). These sluggish productivity rates in G7 countries suggest that firms may embrace investment in new ‘technological fixes’ to the problem of low labour productivity. Conversely, in countries such as the UK, where labour is relatively low cost, some firms may want to avoid the costs of investing in AI systems (Spencer and Slater, 2020). Given these low labour productivity increases over the last decade, many national governments are hoping that the development and application of AI throughout the economy will kickstart a new phase of economic growth with much higher rates of productivity. Despite this, the effects of embracing AI are often unclear. Standard neoclassical measures of labour productivity, a ratio of output to labour hours, are commonly used as a shorthand to encapsulate the efficiency of labour. However, measures of labour productivity only partially capture the productivity of labour in terms of worker skills or the intensity of worker effort.
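In symbols, the standard headline measure is simply the ratio

```latex
\mathrm{labour\ productivity} \;=\; \frac{Y}{H},
```

where \(Y\) is output (usually GDP) and \(H\) is total hours worked. A change in this ratio conflates many causes: capital deepening, work intensity, skills and the organisational use of technology all move \(Y/H\), which is why the headline figure alone cannot show whether AI is augmenting labour or displacing it.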
Fundamentally, the ratio between GDP and hours worked depends on the combination of other inputs, such as capital and the technical, social and organisational structure of work—issues profoundly affected by the manner in which AI technologies are used, and not easily captured by neoclassical approaches (Fernández-Huerga, 2019; Spencer and Slater, 2020; Zambelli, 2018). So the ways in which AI technologies are used in the workplace are likely to vary across sectors and individual firms, and the technologies can be used either to increase productivity or to displace labour. Therefore, predicting the effect of AI in the economy is difficult. The rise and spread of AI, machine learning and automation technologies have led to generalised anxieties about the effects on employment and, in the extreme, to stark, apocalyptic visions that human labour could become increasingly redundant. Driverless cars, mass surveillance or the use of algorithms to determine social security status are cited as part of a dystopian AI future. Popular historian Yuval Noah Harari (2014) has claimed one of the most significant threats of the 21st century will be the rise of the ‘useless class’, predicting billions of people being pushed out of jobs by these new technologies. Anxiety about the effects of technologies associated with automation on employment has been a recurrent concern in economic theory. At the risk of oversimplifying, three main categories help classify this body of work: the pessimists, the pragmatists and the optimists. For Karl Marx, the Industrial Revolution shifted production towards a more significant role for industry and technology. He predicted this would lead to an unavoidable class struggle between the working and ruling classes. ‘Technological unemployment’ was, therefore, the cornerstone of Marx’s theory of capitalist exploitation.
In contrast, writing during the Great Depression, John Maynard Keynes warned of the ‘new disease’ of ‘technological unemployment’ (Keynes, 2010), arguing unemployment would occur when the economising of labour use outran the pace at which new uses for labour could be found. Similarly, Wassily Leontief (1952) predicted automation would render labour less and less critical to the functioning of the economy. He argued that, even when new employment emerged, the number of jobs would be insufficient to employ everyone who wanted one. For these pragmatists, public policies were required to support education, training and re-skilling to counter the potentially harmful effects of technological unemployment. Joseph Schumpeter (1942) offered a more optimistic reading of the effects of automation on employment. He believed that, although technological change was disruptive in the short term—leading to unemployment—in the medium to longer term it would generate job creation in innovative industries with higher productivity. His concept of ‘creative destruction’ saw innovation as an engine of growth, competition and structural change. AI and machine learning technologies—in combination with the Information and Communications Technology (ICT) revolution that began in the 1970s—effectively provide the infrastructure for a globally inter-connected, platform-led economy. Scholars including Freeman and Soete (1994) and Perez (2009) argued that a new ‘techno-economic paradigm’ had arrived, which was leading to a crisis of technological unemployment. The effects of technology required carefully planned responses, these authors argued: work itself was being profoundly reorganised, while governments were called upon to invest heavily in worker retraining and education. Indeed, this research was instrumental in spurring further research into technology and employment. Many institutions concurred, including the OECD and the European Commission.
From the early 2000s on, further breakthroughs in deep learning (algorithms that use multi-layered software, such as neural nets) occurred. Today, advances in machine learning are generating renewed anxiety, expectations and excitement regarding where these technologies will interface with employment outcomes. Many commentators accept that automation will most likely displace routinised and low-skilled jobs in the manufacturing sector (Arntz et al., 2016). The tasks regularly targeted tend to be physical and repetitive, such as work on the assembly line. However, this new wave of technologies also has the potential to affect non-routine, cognitive tasks, usually performed by ‘white-collar’, highly skilled and well-paid workers—even in the professional and service industries (Koutroumpis and Lafond, 2018). To a great extent, the historical evidence on the actual relationship between technological change and employment seems to suggest that, over the centuries, technological change has, on the one hand, displaced workers but, by way of increasing productivity, has had positive consequences for the economy and workers as a whole. Economic analyses suggest that neither the Industrial Revolution nor the introduction of the first assembly lines precipitated mass technological unemployment (see Compagnucci et al., 2020). In other words, until now, technological change has not had the worrying effects predicted by Marx, Keynes, Leontief, or Freeman and Soete. Will it be different this time? What will AI, machine learning and automation mean for the future? Hundreds of publications are now in print that debate the consequences of AI, machine learning and automation for employment into the 21st century. Authors include consultancies, firms, government agencies and scholars. Hyperbolic claims abound on the topic, with little evidence to back them up. Setting these aside, among the most highly cited quantitative studies, predictions differ—dramatically.
For example, Frey and Osborne (2017), following an occupation-based approach, estimate that 47% of total employment in the USA is highly ‘susceptible’ to computerisation, including a range of occupations in services, sales and construction. In contrast, Arntz et al. (2016) suggest this approach is an overestimation. Taking a task-based approach, they claim only 9% of jobs across the OECD region are automatable. Both studies argue that minimally qualified workers will be disproportionately affected by technological change. The methodologies used, geographies covered and temporal periods largely explain the differences found in these studies. Other scholars argue that AI may not lead to mass job displacement at all. Leigh et al. (2020) find automation via robots has boosted regional employment growth in the USA. Spencer and Slater (2020) argue that, for some countries, AI technologies may lead to a proliferation of low-income, low-skilled jobs. They argue that scholars such as Frey and Osborne over-estimate mass job loss because job loss cannot simply be read off from industrial mix. Moreover, they argue that the application of AI technologies ‘will be driven by managerial decisions, sectoral context, workforce skills availability, existing technological investments and lock-in, industrial relations considerations, and in a cross-country context, additional legal and social as well as institutional considerations’ (Spencer and Slater, 2020). But, they argue, many tasks within jobs may well be reconfigured, given the new technological capabilities of artificial intelligence and automation. This may well function to increase low-skilled and poorly paid work.

Is the future of work preordained?

The impact of technology on employment is not deterministic—the deployment of these new technologies is contingent upon a multitude of factors, including public policy, firm strategy and geography, among others.
Acemoglu and Restrepo (2020) argue that the consequences of emerging technologies for employment depend, ultimately, on whether firms and governments promote the ‘right’ kind of AI—or not. In other words, the future of employment depends on how and what kind of technological platform we build. They argue that the ‘right’ sort of AI is technology that has the potential to increase productivity and generate broad-based prosperity. The argument is that technology has either an ‘enabling’ or a ‘replacing’ effect. Enabling technology augments the ongoing work of humans, thereby increasing productivity. Good examples include a laptop for a university professor, which gives them more tools to organise their work; a scanner for a supermarket cashier; or Computer-Aided Design, which provides architects, designers and engineers with greater precision to envision and build. Enabling technology, then, helps employment by increasing productivity, presumably leading to wage increases. While this may not benefit all workers—it may hurt a section of the labour force—the benefits reach enough workers that labour demand increases. Brooks et al. (2020) demonstrate the effect of enabling technology in their study of the introduction of AI-based technologies into the legal sector. At least in the short term, these authors conclude, the technologies are helping lawyers to reduce the time required for legal research and for contract review and analysis, as well as speeding up procedures and augmenting decision-making processes (Alarie et al., 2019). AI functions are principally used to automate the more labour-intensive practices where little professional judgment is required, such as increasing the accuracy of day-to-day legal practice (Brooks et al., 2020). Hence, the introduction of AI into the legal profession seems to function in an enabling capacity.
The alternative is that AI functions as a replacing technology, meaning it takes away a task previously performed by a worker, effectively substituting a machine for the worker in task completion. Acemoglu and Restrepo (2020) argue that industrial robots were not designed to increase productivity; instead, they were devised to automate tasks previously performed by production workers on the factory floor. Their effect is to decrease the cost per unit of output and therefore increase the incomes of owners of capital, who tend to be richer than those relying on labour income. Examples of replacing technology include 'self-check-outs' in supermarkets, mail sorting, assembly lines and cash dispensers. Replacing technologies cause first-order displacement effects, since humans are no longer needed to complete the task, and can bring about adverse effects on employment and wages, as well as greater income inequality. However, the displacement effects of these new technologies are often complex and are unlikely to equal a simple summation of jobs lost. Job growth may well counter some of the displacement effect. One perspective suggests that if a supermarket uses self-check-outs, for example, and this ends up lowering costs, the benefits will pass to the entire supermarket industry, potentially creating new employment and increased labour demand in that sector as well as in related industries. At present, while many retail firms have installed self-checkout machines, they find they still need staff to reduce theft and to assist and reassure customers, which leaves the new labour requirements for stores using the technology ambiguous (Dizik, 2019). Indeed, some stores are trying to differentiate themselves by specialising in knowledgeable staff on the shop floor (Chapman, 2019). Additionally, lower costs can place more money in the hands of consumers to spend on other things, potentially leading to new employment across the economy.
Waldman-Brown (2020) reports on the application of automation technologies in small- and medium-sized enterprises. She finds examples of both enabling and replacing technology as firms opt for incremental automation that generally increases productivity. The firm owners in her study augment workers' tasks rather than replace them but, when workers are displaced, they are re-deployed in a different function within the factory. Hence, there is a kind of 'race' between the displacement effect caused by technologies and a productivity effect that has the potential to be a counterweight. A strong productivity effect is more likely to be created when AI technologies perform well and reduce costs; if the productivity effect is weak, which occurs when new technologies are merely 'so-so', it may be insufficient to dampen the displacement effect. If jobs are displaced, an additional question becomes: who will lose their work? The UK's Office of National Statistics (ONS) estimates that the growing applications of automation in the service industries and public sector are likely to affect women and young workers the most (ONS, 2019). The ONS data show that the largest group of lower skilled workers in the UK are between 20 and 24 years old. They also estimate that over 70% of the jobs at high risk of automation in the UK are held by women (ONS, 2019), potentially creating a new gendered division of labour. In conclusion, the consequences of AI, machine learning and automation for employment are not preordained, but rather are contingent on the decisions of firms and governments to promote productivity-increasing forms of AI: either deploying enabling technologies or utilising highly productive replacing ones. These are, in turn, partly determined by geography and local capacities. Compagnucci et al. (2020) demonstrate how different economic structures, with different industrial specialisations, have unique abilities and capacity to adapt to new technologies.
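The 'race' between displacement and productivity effects can be made concrete with a stylised back-of-envelope calculation. The figures and the linear demand-expansion rule below are entirely illustrative assumptions, not Acemoglu and Restrepo's actual model: automation removes a share of the labour needed per unit of output, while the resulting cost saving expands demand for the product, and net employment depends on which force dominates.

```python
def net_employment(labour_per_unit, automated_share, cost_saving,
                   demand_elasticity, base_output=100.0):
    """Return (employment before, employment after) automation.

    Illustrative assumptions: automation removes `automated_share` of
    the labour needed per unit, and the cost saving expands output
    linearly with the price elasticity of demand.
    """
    before = labour_per_unit * base_output
    labour_after = labour_per_unit * (1 - automated_share)
    output_after = base_output * (1 + demand_elasticity * cost_saving)
    return before, labour_after * output_after

# 'So-so' technology: displaces 20% of tasks but cuts costs by only 2%.
before, soso = net_employment(1.0, 0.20, 0.02, demand_elasticity=1.5)
# Strongly productive technology: same displacement, 30% cost saving.
_, strong = net_employment(1.0, 0.20, 0.30, demand_elasticity=1.5)

print(round(before, 1), round(soso, 1), round(strong, 1))
```

With the same displacement share, the 'so-so' variant shrinks net employment (roughly 82 jobs against an initial 100 in this toy calibration) while the strongly cost-reducing one expands it (roughly 116), which is the intuition behind the claim that weak productivity effects fail to counter displacement.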
Public policy will need to support retraining workers to ensure they can cope with the challenges thrown up by displacement. A dynamic approach to education and training, with more adaptable and flexible learning, will create the right kinds of capabilities, along with better safety nets. Finally, even where the 'right AI' is selected, the resulting productivity gains will take considerable time to become visible and therefore measurable. Brynjolfsson et al. (2017) explain that this lag arises because most AI technologies are 'general-purpose technologies', meaning they have an extensive application across sectors and require industry itself to restructure around these innovations to see the benefits. Consultancy influence on AI adoption While futurists, academics and policymakers opine about the probable consequences of AI, machine learning and automation, corporations are busy pursuing relevant elements of these technologies for a variety of purposes. Two groups are of particular interest: companies that are actively applying these technologies to solve well-understood problems and consultants establishing market niches that fit with their expertise. While each may inform the other, it is equally likely that the current moment is simply ad hoc. Like corporations, consultants are madly hiring experts in these fields to build their product lines. The promotional material of top global consulting groups such as Boston Consulting Group (2019), PWC (2019), Accenture (2019) and McKinsey (Cam et al., 2019) is leaving nothing to chance. These firms are focussing on the bread-and-butter products they are already known for, while offering surprisingly similar warnings, either about the problems coming down the line (thereby justifying their services) or about the prerequisite behavioural changes required to ensure a likelihood of success. In other words, 'front and centre' is a disclaimer, masked as a statement of due diligence.
Consultants are limiting the liability of their firms by using their first-mover clients to test their services, with the proviso that, should customers fail to carry out the exacting advice proffered to them, these consulting firms cannot be held liable. A few lines from corporate materials underscore their uncertainty in light of the emergent nature of this suite of technologies. Consultants are converging on a narrative claiming that AI specifically, further enabled by machine learning and automation, is no less than a new factor of production. Accenture, a global consulting firm perched atop Google search results, claims: 'It's a completely new factor of production, capable of driving business growth by augmenting natural human expertise, taking automation to new places, and diffusing innovation throughout society'. More prosaically, and with greater caution, KPMG (2019) suggests the gains from these combined technologies are likely to pervade critical business areas, including data, business processes, the workforce, and risk and reputation. Further suggesting the level of caution required, KPMG (2019) goes on to indicate that upfront costs will be substantial. The idea that AI is a simple 'plug-n-play' strategy is a misconception: these consultancies warn that companies need to build extensive internal capabilities even to start to take advantage of AI's options. Moving into AI is not for the faint-hearted. A successful application is more likely for those both nimble and unshackled from legacy assets such as pre-existing platforms, built-in procedures and the rigid practices characteristic of the past. In other words, untethered young firms may be able to augment their capabilities more easily compared with companies with established and safe operational practices. McKinsey, Inc., the international consulting firm headquartered in the USA, recently published its annual assessment of AI applications among its client firms (Cam et al., 2019).
In two reports, their findings suggest that the utilisation of AI, machine learning and automation across sectors maps onto present-day areas where efficiencies are evident (Cam et al., 2019). Autos and heavy industries are deploying robotics and machine learning to automate dirty, dangerous and difficult tasks. Retailers are using AI analytics to track sales and customer preferences. Across the world, AI applications are increasingly utilised, with the Asia Pacific and North America leading the way. Still, crucially, the report suggests that most applications are aimed at near-term problems. Far fewer firms are taking advantage of more sophisticated AI-related applications, attributing the reticence to an unwillingness to 'invest in talent, such as translator expertise, and ensuring that business staff and technical teams have the skills necessary for successful scaling' (Accenture, 2019). For the majority of business enterprises, AI technologies are currently being deployed to solve low-risk problems, such as tracking customer preferences or delving into millions of lines of data for key patterns. In general, organisations are attempting to scale up slowly, absorbing these new competencies, instead of pursuing actions leading to organisational transformation. In terms of stages of application, companies are plucking 'low-hanging fruit' and dipping their big toe into the frigid waters of the unknown (see Brooks et al., 2020). Two further developments are worth noting. Popular press accounts and global consultancies are converging on a set of concerns related to the effects of AI, machine learning and automation: privacy, specifically the control of subjectivity and unintentional bias, and the anticipated employment consequences of eventual adoption (Cheatham et al., 2019).
The services consultancies offer require that clients relinquish control over data, in the form of confidential information about the organisation and the configuration-based competitive advantages tied up in firm practices, culture and strategy. Accounting firms raise serious concerns about data privacy. Some scholars highlight upfront the challenges companies face when pursuing a future augmented by AI, machine learning and automation. There is still much to learn about the consequences of these technological capabilities. Companies and consultants openly recognise that displacement is not only possible, but likely. McKinsey's annual report suggests the near-term job consequences are positive, especially in firms that are actively planning for and executing retraining efforts for their existing workforces. For how long this remains the case is unclear. At its starkest, we see two paths forward. Fuelled by scare tactics and the 'great unknown', consulting firms are pushing companies to jump on the bandwagon before being left behind. Each consultancy is carving out a niche along distinct trajectories, from relying on cutting costs and eliminating low-skilled labour to utilising the potential of these technological developments to repurpose existing workforces, drawing on their tacit knowledge to facilitate the transition to a new frontier. Currently, the outcome is indeterminate; it is predicated on culture, context and policy. As presented here, we see another alternative. Corporate consultants are suggesting that, with AI, economic growth is gaining a fourth dimension of productivity: human mental capability positively augmented by machine enablement (Accenture, 2019). Echoing Acemoglu and Restrepo's (2020) suggestion, the consequences of applying these technologies are not preordained, and will reflect choices made at the organisational and societal levels.
Social consequences: the question of who works There are so many soothsayers proselytising about the wonders, worries and woes of AI, machine learning and automation that it is difficult to separate the significant from the banal. From self-driving cars, to facial recognition, to structured searches producing finely grained precedents argued by machines in criminal courts, the world is changing rapidly. Beyond the hyperbole, what do these technologies of mass computational power mean for social questions such as who works, and how dramatic are the changes accompanying these new technological capabilities? Some scholars offer a more critical analysis of the lack of accountability and public purpose in the development and spread of automation, robotics and AI (Browne, 2018; Dignam, 2020; Sharkey et al., 2018). The pressure is mounting to make algorithms, AI and robotics fair, transparent, understandable and, therefore, more accountable. Sharkey and colleagues see the need to 'incorporate ethical aspects of human well-being that may not be automatically considered in the current design and manufacture of A/IS technologies and reframe the notion of success, so human progress can include the intentional prioritisation of individual, community and societal, ethical values' (Sharkey et al., 2018, 27). However, to date, the main actors are in the private sector, and the development of AI is relatively unregulated. The laws and regulatory structures around AI and the public interest are still embryonic and only advisory. For example, the UK has regulatory bodies and a public process to explore the bioethics of many applications of biotechnology and of human fertilisation technologies, which are intended to bridge the gap between the public, experts and government. However, there is no parallel process for the development of AI (Browne, 2018).
Dignam (2020) points to the libertarian beliefs of many founders and CEOs of AI technology firms, who actively advocate for government to pursue a light regulatory structure for AI. They argue that the new technology '…is free of law and regulation by its very nature: that new technological frontiers are governed by mathematical calculation and old, outdated pre-existing laws cannot stand in the way' (Dignam, 2020). The same critics (Acemoglu and Restrepo, 2020; Dignam, 2020; Sharkey et al., 2018) remind us that technology does not determine its uses and applications, but that society does and that, in this sense, technology is a social construction. A significant concern is the fairness of AI systems. One of the problems already observed in the widespread application of AI has been its use in filtering access to opportunities in today's labour markets. There are numerous examples of AI reproducing and amplifying gender and racial biases. Using overly 'feminine' descriptions of yourself in your CV or resume may lead the programme to screen out your application. The colour of your skin might determine whether or not facial recognition software recommends you for the job. For example, Zhou (2019) found bias in Amazon's experiments with its system for automating the recruitment process. 'Since the tech industry was male-dominated, most of the resumes fed into the machine learning model were for men. The resulting system favoured men over women. If this recruiting system deploys, it will further alienate women from the tech sector' (Zhou, 2019, 65). Although this may not be the intention, extensive feedback on the use of AI in 'human resource' functions points to a fundamental problem with AI: biased datasets. Zhou's work highlights that 'AI systems rely solely on learning from a database; if the database is incomplete or biased, the system can unintentionally perpetuate bias to a widespread scale'.
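The mechanism Zhou describes can be illustrated with a deliberately minimal, entirely hypothetical sketch (the data, tokens and scoring rule below are invented for illustration and bear no relation to Amazon's actual system). A naive screener learns token weights from past hiring decisions; because tokens associated with women's activities co-occur only with historical rejections, an otherwise identically qualified candidate is screened out:

```python
from collections import Counter

def train_keyword_model(resumes, labels):
    """Score each token by how often it appeared in hired versus rejected
    resumes. A crude stand-in for learning from historical decisions."""
    hired, rejected = Counter(), Counter()
    for text, label in zip(resumes, labels):
        for tok in set(text.split()):
            (hired if label else rejected)[tok] += 1
    return {t: hired[t] - rejected[t] for t in set(hired) | set(rejected)}

def screen(model, resume):
    """Accept the resume if its summed token score is positive."""
    return sum(model.get(tok, 0) for tok in set(resume.split())) > 0

# Hypothetical training history: past hires came overwhelmingly from one
# group, so 'womens chess captain' co-occurs only with rejections.
history = [
    ("python engineering experience", 1),
    ("python engineering experience", 1),
    ("python engineering experience", 1),
    ("womens chess captain python", 0),
    ("womens chess captain python", 0),
    ("womens chess captain python", 0),
]
model = train_keyword_model([r for r, _ in history], [y for _, y in history])

candidate_a = "python engineering experience"
candidate_b = "python engineering experience womens chess captain"
print(screen(model, candidate_a))  # True
print(screen(model, candidate_b))  # False: same qualifications, screened out
```

Note that the model is never told anyone's gender; it simply inherits the correlation between gendered tokens and past outcomes, which is exactly the 'incomplete or biased database' problem Zhou identifies.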
Similarly, Tatman and Kasten (2017) found bias in YouTube's voice recognition programmes; they showed statistically significant differences in word error rates between dialects and races, with non-white ethnicities showing much higher error rates. In this way, unrepresentative data, which contain subtle racist and sexist patterns, reproduce those same patterns (Zhou, 2019). These findings suggest AI can function to reproduce and amplify existing gender and ethnic bias at work. As Wachter et al. (2017, 1) note: 'systems can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive environments that put human interests and safety at risk'. Because many large datasets are incomplete and unrepresentative, they may incorrectly identify specific individuals, especially when it comes to minority groups and women. 'Datasets are often unrepresentative of the public demographic. There is a high level of data deprivation when it comes to capturing vulnerable groups. Biased datasets amplify gender and racial inequality and project past and present biases into the future' (Dillon and Collett, 2019, 5). For example, Buolamwini and Gebru (2018) examine bias in facial recognition systems. Their 'Gender Shades' study evaluates the degree of bias in facial recognition systems and finds substantial disparities in the accuracy of classifying darker females, lighter females, darker males and lighter males. In particular, darker-skinned women were misclassified with an error rate of up to 34.7%, while lighter-skinned men had a maximum error rate of only 0.8%. Similarly, Garnerin et al. (2019) show how women's under-representation in the media, in terms of speakers and 'speech turns', leads to their under-representation in voice recognition data. Criado Perez (2019) argues the problem is twofold.
Not only are the datasets dominated by white men, but their over-representation, in turn, skews the output: 'male data makes up the majority of what we know' and so 'what is male is seen as being universal' (Criado Perez, 2019, 24). Thus, AI can be a method of perpetuating bias, causing unintended negative consequences and exacerbating inequities (Chowdhury and Mulani, 2018; Dillon and Collett, 2019). Sharkey, reflecting the severe nature of these biases in AI, calls for a ban on all 'life-changing decision-making algorithms' (MacDonald, 2019). Dignam (2020) highlights another problem: the lack of diversity in the actual labour force designing, coding, engineering and programming AI technologies. He examines employment data for firms such as Apple, Google, Facebook and others, and finds high levels of gender disparity in the AI workforce, especially among technical staff. Dillon and Collett (2019) identify the same problem and also call for diversification of the AI workforce to make the design and implementation of technology more equitable. 'This becomes even more urgent as there is increased demand for skilled technological experts accompanying the rise of AI. At the current rate, existing inequalities will only be aggravated and enlarged by an AI labour market which fails to reflect a diverse population' (Dillon and Collett, 2019, 5). Again, we are reminded why the social construction critique is so powerful: 'mass computational power, while it allows greater statistical scale does not move the needle at all concerning basic statistical integrity, which hinges entirely on human judgment. Highly skilled human design, operation, and oversight are still essential' (Dignam, 2020). The geographies of AI, machine learning and automation Daily, we endure new prognostications of what the future will include.
On the one hand, the empirical evidence on the location of these technologies suggests they are mostly reinforcing the current geographies of tech industries, higher education and personal wealth. So far, no breakout locations have emerged to upset the applecart of global industry and technology leadership. However, geopolitics may play an important role in spurring the national development of AI technologies. For more than 20 years, China has been challenging the leading purveyors of technology in the USA, UK and Japan. Today, China's role is not that of a follower, but that of a nimble competitor in these industries. Defence applications have spurred the growth of many new technology industries in the USA, including AI (Allen and Chan, 2017; Markusen et al., 1991). Given the lack of good-quality information from countries such as Russia and North Korea, it is difficult to know whether they are miles ahead or on an expected trajectory, given their financial condition and interests in espionage, advanced weaponry and geopolitical intrigue. We can speculate, however, that technological development in both of these countries will focus first and foremost on the deployment of technologies to surveil and control their citizenry. Thus, geography is an important, but not defining, determinant of where industries associated with these technologies will emerge. Presently, the geography of technology development, as opposed to adoption, appears to be following mostly tried and true paths based on existing sites of human capital formation and longstanding industrial concentrations. Early evidence does suggest, however, that the geography of use may be more important in defining where specialisms emerge than the location of pre-existing technology clusters alone. We note that in 1997 geographers Stan and Christine Openshaw wrote a 384-page book entitled Artificial Intelligence in Geography.
This book foretold a future world where geo-computation would evolve briskly and find application in any number of cognate and derivative fields. In addition, scholars have shed light on the fact that, despite the strength of globalisation, national and regional industrial cultures and working practices may still vary by place (Gertler, 2004; Saxenian, 1994; Yeung, 2016). Saxenian (1994) famously examined how the computer and software industries had different models and working practices in different regional economies. Gertler (2004) showed that these different working practices mean that the same technology is deployed in subtle but distinct ways in different environments. He examined the difficulties that Canadian firms had in realising productivity increases from new German machine tools, due to the ways in which work was organised along slightly different lines in the two countries. German working practices, and in particular a closer relationship between engineers and the shop floor, were embedded in the new technology. Therefore, technological diffusion, of AI or other new technologies, is not always straightforward and can be complicated by regional or national working practices. Kitson (2019) argues that in the UK, innovation policy is too heavily focussed on generating innovation, which can function to ignore the complications of encouraging the diffusion and adoption of innovations. In Deloitte's (2017) State of Cognitive survey, the consultants found that one of the biggest barriers to AI use was the difficulty of integrating cognitive projects with existing processes and systems; the survey gives no clue as to the regional or national patterns behind this finding, but it suggests that issues of 'integration' still loom large for firms.
Finally, our knowledge of existing centres of technology firms suggests that the development of AI is likely to reinforce the economic power of existing agglomerations of technology, as the economic power of the 'platform' firms is reinforced by the newest technological advances (Kenney and Zysman, 2020). These existing technology agglomerations could experience job growth associated with the new technologies. Reinforcing the point that growth is most likely to occur in ready-made clusters of similar activities, early probes suggest that AI, machine learning and automation will do little to improve the age-old problem of uneven development. That is, initial endowments are clearly critical determinants of an area's potential to benefit from developing new technology, and lagging regions, in particular, are likely to be at a distinct disadvantage (Barzotto et al., 2020). Buarque et al. (2020) come to similar conclusions in considering the geography of patents filed by firms in our target industrial activities. In EU countries, the firms spawning the lion's share of innovations in AI, machine learning and automation are, as expected, primarily found in critical nodes of existing industrial development. However, in a recent double issue of CJRES on industrial policy, Bailey et al. (2019) highlight how the EU is trying to strategically invest in lagging regional economies to enable them to develop new technological specialisms that will allow movement into new trajectories. Meanwhile, the geography of AI adoption is also likely to be uneven and strongly tied to existing industrial strengths and weaknesses. Within the UK, the ONS finds the probability of job losses from automation varies across the country. Poorer regional economies in peripheral regions, outside the growth centres in the southeast and London, are those most likely to experience job displacement from automation.
Thus, small-town and rural areas such as Boston, South Holland and Fenlands are among those with the highest probability of job loss, as are areas such as North Cornwall, West Lancashire and Ribble Valley (ONS, 2019). These areas have distinct local economies, but share a local job structure with a higher-than-average number of low-skilled jobs, which are likely to be the easiest to automate. This story is likely to play out in many different national contexts, as certain local economies will house a larger proportion of jobs vulnerable to automation and machine learning. Making sense of AI policy Particularly since 2017, governments around the world have started publishing national AI policy or strategy documents, with a view to preparing the ground for early adoption of AI, seen as essential to position a nation as an 'AI leader'. Clinching AI leadership is seen to have essential economic, as well as political and military, advantages. On the economic side, governments are promoting AI in order to gain national competitive advantage: the expectation is that AI will lead to greater industrial efficiency, larger manufacturing volume, higher productivity, better processes and so forth. From the political and military perspective, governments are interested in leading the 'AI race' to enhance cybersecurity and, more generally, to strengthen a nation's place in military terms in the global order, including trade in military-related AI technology (Dasgupta and Wendler, 2019). However, there are more sinister reasons why governments seek to be at the forefront of AI technology. Some governments, especially non-democratic ones, may wish to use AI to increase their capacity to surveil, and potentially repress, their own populations. AI could, for instance, be used to track down political dissidents and constrain individuals' rights to self-expression (OECD, 2019a).
Reporting in the New York Times in the spring of 2019, Paul Mozur (2019) criticised China for using facial recognition software to develop a database of scans to profile a minority group in its population: 'In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group'. AI-based surveillance technology is one of the chilling dark sides of AI and may become an important new export destined for governments around the world (Feldstein, 2019). Even if not deliberate, AI may function to undermine human rights. Using AI technologies to predict recidivism, for example, is likely to be prone to bias (OECD, 2019b). Noel Sharkey (2019) argues AI technologies are better at identifying white than darker-skinned individuals, for example, and has called for a moratorium on the use of 'biased algorithms' to take decisions affecting humans until bias can be rooted out. In terms of national industrial strategy, Canada is usually credited as the first government to launch a national AI plan, in March 2017, followed quickly by Japan, Singapore, China, Malaysia and Finland in the same year. In 2018, a flurry of governments published strategies, including the UK, France, Germany, Mexico and Taiwan. In February 2019, US President Donald Trump signed an Executive Order outlining a strategy which focuses on promoting and protecting national AI technology and innovation (The White House, 2019). A chronology and explanation of the content of national AI plans and strategies can be found in the report by the OECD (2019a). Interestingly, national approaches to AI are quite different. While some are fully fledged, broad and accompanied by specific budgets, others serve more as orienting guidelines. Dutton (2018) provides a visual map with which to comparatively analyse the content of 18 national plans and strategies.
In this framework, he identified eight distinct policy concerns: scientific research; talent development; skills and the future of work; industrial strategy; ethical standards; data and digital infrastructure; government; and inclusion and social well-being. Looking across all 18 strategies, it is clear how lopsided these policy packages are, with the lion's share of attention going to industrial issues and very little paid to AI's social implications. For example, nearly all programmes give significant attention to research, talent and industrial strategy, but only four strategies pay significant attention to the future of work. Moreover, only two programmes focus significantly on AI in government and only one (India's) on social well-being. This map shows at a glance that, although national approaches are distinct, governments are prioritising the economic and military sides of AI: there is a lack of attention to the social challenges it will pose. To illustrate national approaches to AI, two major plans are compared: Canada's and China's. The Canadian government's Pan-Canadian AI Strategy is a five-year, 125 million Canadian dollar project, which takes a twofold approach, focussing on research and talent. Its main objectives are as follows: (i) to increase the number of AI researchers and graduates; (ii) to establish three clusters of excellence (in Edmonton, the Alberta Machine Intelligence Institute; in Montreal, the Montreal Institute for Learning Algorithms; and in Toronto, the Vector Institute for Artificial Intelligence); (iii) to lead thinking about AI in all its facets; and (iv) to support the national research community on AI (Dutton, 2018; OECD, 2019b). In contrast, the approach by the Chinese government, released in July 2017,1 is very wide-ranging and comprehensive.
The report covers (i) R&D; (ii) industrial strategy; (iii) talent; (iv) education; (v) skills acquisition; (vi) standard setting and regulation; (vii) ethical norms; and (viii) security (Dutton, 2018). What stands out about the Chinese approach is its ambition: the government claims it will, first, ensure China's AI industry is in line with its competitors by 2020; second, become a world leader in specific AI fields by 2025; and third, become the primary centre of AI innovation by 2030, with an AI industry worth RMB 1 trillion (about US$150 billion). Hence, the Chinese national AI plan demonstrates an ambition to influence, or even dominate, the setting of new standards, laws, norms and regulations around emerging AI technologies (OECD, 2019a). AI and the future of work? AI, and its antecedents, are poised to transform the nature of work and the operation of systems and infrastructure by enabling solutions to complex problems with high efficiency and speed. While many applications of these technologies aim to enhance the experience of the user of a service, a growing number encompass the complete transformation of operations heretofore completed by human mental and physical effort. The consequences of AI, machine learning and automation technologies are still emergent and therefore likely to generate continuing debate across the international scientific and business communities. The extent to which these developments will challenge jobs is difficult to determine. Labour productivity growth, which has been low across the last decade, means that firms may well want to, or be encouraged to, pursue labour-replacing forms of AI rather than, or in addition to, labour-enhancing forms of the technologies. Clearly though, the spread and diffusion of these technologies is not deterministic: this is a social and political choice.
We have highlighted many ways in which the accepted usages of these still emerging technologies are structured by legal and policy systems and the incentives and disincentives these create. But the regulatory infrastructure for AI technologies is itself forming in a slow and ad hoc manner around the world. Biases in AI systems are often opaque, but are likely to reinforce existing gender and ethnic biases. There are calls for more transparency within AI technologies so that the programmes’ decision-making processes are opened up to scrutiny. There is a clear and important role for government in regulating, overseeing the ethics of, and setting boundaries around the new technologies. It is important to design policy that pushes private-sector initiatives to produce outcomes that increase societal well-being, not merely GDP (Gray et al., 2012). The role for government lies not just in creating industrial policy, the approach that typifies most countries’ efforts to date, but also in determining and shaping the types of AI technologies we have, how they are adopted and the acceptable boundaries of their use. State policies (taxation, R&D relief, regulation) can help shift incentives away from labour-replacing technologies and towards labour-enabling forms of AI. The empirical evidence so far suggests that only a small proportion of firms have unproblematically adopted AI technologies to date (Deloitte, 2017). Many firms—large and small—are pursuing incremental adoption, just putting their ‘toe in the water’. Firms in some industries may be cautious about investing in high-cost AI systems, especially in countries with relatively low labour costs. Despite the international consulting firms acting as boosters for the new technology, firms large and small are testing its uses and monitoring the effects.
The largest consumers of AI technologies may be nation states, interested in defence and surveillance, rather than commercial firms. Is this the over-selling of AI’s transformative capabilities, a time lag, or the cautious approach of firms towards a still unproven and often expensive technology? In the mid-1980s, economists also tried to understand the slower-than-predicted diffusion of computer technology into the broader economy. Robert Solow (1987, 36) dubbed it the ‘productivity paradox’, in which computers were to be found ‘everywhere but in the productivity statistics’. Might we say the same for AI technologies? Like the spread of computers earlier, the diffusion and adoption of AI technologies may be slower than the industry boosters suggest. One overriding concern is the extent to which the interplay between fundamental scientific discovery, the technological enthusiasm exhibited by the consulting world, and national policy blends into configurations that create pathways to the future which lead to a race to the bottom. Past technological eras tipped the balance of power among political rivals, leaving many behind in the race to lead. Regardless of the perspective, warning signs are evident that policies will be needed to ensure the well-being of those who will experience disruptions from this new era. At its starkest, we see two paths forward. Fuelled by scare tactics and the ‘great unknown’, consulting firms are pushing companies to jump on the bandwagon to avoid becoming economic ‘laggards’. Each consultancy is carving out a niche toward distinct trajectories, from cutting costs to eliminating low-skilled labour. In parallel, the lion’s share of government policy on AI expresses a narrow focus on economic gains. However, another more considered path is also possible.
This would utilise the potential of these technological developments to repurpose existing workforces, drawing on workers’ tacit knowledge to facilitate the transition to a new frontier and enabling them to use their skills alongside new technologies. In addition, AI and associated technologies could be deployed in an effort to positively transform education, health and, even, peace. There is nothing preordained about how AI will be deployed. Echoing Acemoglu and Restrepo’s (2020) suggestion, the consequences of applying these technologies will reflect choices made at the organisational, political and societal levels. The future of AI, like that of all technologies, is too important to leave to the technological specialists in the AI field. Instead, social scientists, lawyers of technology and experts in the ethics of technology need to actively engage in shaping the structure of AI’s development and adoption.

Footnotes
1 See www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm July 2017.

References
Accenture (2019) Artificial Intelligence. Available online at: https://www.accenture.com/us-en/insights/artificial-intelligence-index [Accessed 12 October 2019].
Acemoglu, D. and Restrepo, P. (2017) Robots and Jobs: Evidence from U.S. Labor Markets. NBER Working Paper No. 23285.
Acemoglu, D. and Restrepo, P. (2020) The wrong kind of AI? Artificial intelligence and the future of labor demand, Cambridge Journal of Regions, Economy and Society, this issue.
Alarie, B., Niblett, A. and Yoon, A. (2019) Data analytics and tax law, SSRN Electronic Journal. doi:10.2139/ssrn.3406784.
Allen, G. and Chan, T. (2017) Artificial Intelligence and National Security. Cambridge, MA: Belfer Center for Science and International Affairs.
Arntz, M., Gregory, T. and Zierahn, U. (2016) The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Papers No. 189. Paris: OECD.
Autor, D. (2015) Why are there still so many jobs? The history and future of workplace automation, Journal of Economic Perspectives, 29: 3–30.
Bailey, D., Glasmeier, A. and Tomlinson, P. R. (2019) Industrial policy back on the agenda: putting industrial policy in its place?, Cambridge Journal of Regions, Economy and Society, 12: 319–326.
Barzotto, M., Corradini, C., Fai, F., Labory, S. and Tomlinson, P. R. (2020) Enhancing innovative capabilities in lagging regions: an extra-regional collaborative approach to RIS3, Cambridge Journal of Regions, Economy and Society, 12: 213–232.
Boston Consulting Group (2019) Artificial Intelligence. Available online at: https://www.bcg.com/publications/collections/strategy-digital-artificial-intelligence-business.aspx [Accessed 30 December 2019].
Brooks, C., Gherhes, C. and Vorley, T. (2020) Artificial Intelligence, business models, and the pressures and challenges of transformation in the UK legal services sector, Cambridge Journal of Regions, Economy and Society, this issue.
Browne, J. (2018) 100 Years to Bliss? Artificial Intelligence, Politics and Regulation. Presented at the CFI Artificial Intelligence, Politics and Regulation: A Workshop, 27 September 2018. Available online at: http://lcfi.ac.uk/events/artificial-intelligence-politics-and-regulation-wo/ [Accessed 30 December 2019].
Brynjolfsson, E. and McAfee, A. (2017) The business of artificial intelligence, Harvard Business Review.
Brynjolfsson, E., Rock, D. and Syverson, C. (2017) Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics. NBER Working Paper No. 24001. Available online at: http://www.nber.org/papers/w24001 [Accessed 30 December 2019].
Buarque, B., Davies, R., Hynes, R. and Kogler, D. (2020) OK computer: the creation and integration of AI in Europe, Cambridge Journal of Regions, Economy and Society, this issue.
Buolamwini, J. and Gebru, T. (2018) Gender shades: intersectional accuracy disparities in commercial gender classification, Proceedings of Machine Learning Research, 81: 1–15.
Cam, A., Chui, M. and Hall, B. (2019) Global AI Survey: AI Proves Its Worth, but Few Scale Impact. Available online at: https://www.mckinsey.com/featured-insights/artificial-intelligence/global-ai-survey-ai-proves-its-worth-but-few-scale-impact [Accessed 28 October 2019].
Chapman, B. (2019) Supermarkets with no tills: will they be the death knell for 200,000 cashier jobs?, The Independent, 27 December.
Cheatham, B., Javanmardian, K. and Samandari, H. (2019) Confronting the risks of artificial intelligence, McKinsey Quarterly, April. Available online at: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence [Accessed 10 November 2019].
Chowdhury, R. and Mulani, N. (2018) Auditing algorithms for bias, Harvard Business Review. Available online at: https://hbr.org/2018/10/auditing-algorithms-for-bias [Accessed 30 December 2019].
Compagnucci, F., Gentili, A., Valentini, E. and Gallegati, M.
(2020) Are machines stealing our jobs?, Cambridge Journal of Regions, Economy and Society, 13.
Criado Perez, C. (2019) Invisible Women: Exposing Data Bias in a World Designed for Men. London: Chatto & Windus.
Dasgupta, A. and Wendler, S. (2019) AI Adoption Strategies. Working Paper Series No. 9. Centre for Technology and Global Affairs, University of Oxford. Available online at: https://www.ctga.ox.ac.uk/files/aiadoptionstrategies-march2019pdf [Accessed 30 December 2019].
Deloitte (2017) 2017 Deloitte State of Cognitive Survey. Available online at: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-da-2017-deloitte-state-of-cognitive-survey.pdf [Accessed 30 December 2019].
Dignam, A. (2020) Artificial intelligence, tech corporate governance and the public interest regulatory response, Cambridge Journal of Regions, Economy and Society, this issue.
Dillon, S. and Collett, C. (2019) AI and Gender: Four Proposals for Future Research. Cambridge: The Leverhulme Centre for the Future of Intelligence.
DiSalvo, D. (2012) How Alan Turing helped win WWII and was thanked with criminal prosecution for being gay, Forbes Magazine, 12 May.
Dizik, A. (2019) Yes, self-checkout machines encourage shoplifting. Here’s why stores love them anyway, Money Magazine. Available online at: https://money.com/shelf-checkout-encourages-shoplifting/ [Accessed 28 December 2019].
Dutton, T. (2018) Building an AI World: Report on National and Regional AI Strategies. CIFAR. Available online at: https://www.cifar.ca/cifarnews/2018/12/06/building-an-ai-world-report-on-national-and-regional-ai-strategies [Accessed 30 December 2019].
Feldstein, S. (2019) The road to digital unfreedom: how artificial intelligence is reshaping repression, Journal of Democracy, 30: 40–52.
Felipe, J., Mehta, A. and Rhee, C. (2019) Manufacturing matters…but it’s the jobs that count, Cambridge Journal of Economics, 43: 139–168.
Fernández-Huerga, E. (2019) The labour demand of firms: an alternative conception based on the capabilities approach, Cambridge Journal of Economics, 43: 37–60.
Freeman, R. and Soete, L. (1994) Work for All or Mass Unemployment: Computerised Technical Change into the 21st Century. London/New York: Pinter Publishers.
Frey, C. B. and Osborne, M. A. (2017) The future of employment: how susceptible are jobs to computerisation?, Technological Forecasting and Social Change, 114: 254–280.
Frey, C. B. and Rahbari, E. (2016) Do Labour-Saving Technologies Spell the Death of Jobs in the Developing World? Available online at: https://www.brookings.edu/wpcontent/uploads/2016/07/Global_20160720_Blum_FreyRahbari.pdf [Accessed 30 December 2019].
Garnerin, M., Rossato, S. and Besacier, L. (2019) Gender representation in French broadcast corpora and its impact on ASR performance. arXiv preprint arXiv:1908.08717.
Gertler, M. S. (2004) Manufacturing Culture: The Institutional Geography of Industrial Practice. Oxford/New York: Oxford University Press.
Gray, M., Lobao, L. and Martin, R. (2012) Making space for well-being, Cambridge Journal of Regions, Economy and Society, 5: 3–13.
Harari, Y. N. (2014) Sapiens: A Brief History of Humankind. New York: Random House.
Hochschild, A. (1983) The Managed Heart: Commercialization of Human Feeling. Berkeley: UC Press.
Kenney, M. and Zysman, J. (2020) The platform economy: restructuring the space of capitalist accumulation, Cambridge Journal of Regions, Economy and Society, this issue.
Keynes, J. M. (2010) Economic possibilities for our grandchildren. In Essays in Persuasion, pp. 321–332. London: Palgrave Macmillan. doi:10.1007/978-1-349-59072-8_25.
Kitson, M. (2019) Innovation policy and place: a critical assessment, Cambridge Journal of Regions, Economy and Society, 12: 293–315. doi:10.1093/cjres/rsz007.
Koutroumpis, P. and Lafond, F. (2018) Disruptive technologies and regional innovation policy. Background paper for an OECD/EC workshop within the workshop series Broadening Innovation Policy: New Insights for Regions and Cities, 22 November 2018. Paris: OECD.
KPMG (2019) AI Transforming the Enterprise: Eight AI Adoption Trends. Available online at: https://advisory.kpmg.us/content/dam/advisory/en/pdfs/2019/8-ai-trends-transforming-the-enterprise.pdf [Accessed 30 December 2019].
KPMG (2019) Artificial Intelligence. Available online at: https://home.kpmg/xx/en/home/campaigns/2019/03/the-state-of-intelligent-automation.html [Accessed 12 October 2019].
Leigh, N. G., Kraft, B. and Heonyeong, L. (2020) Robots, skill demand and manufacturing in US regional labour markets, Cambridge Journal of Regions, Economy and Society, this issue.
Leontief, W. (1952) Machines and man, Scientific American, 187: 150–164.
MacDonald, H. (2019) AI expert calls for end to UK use of ‘racially biased’ algorithms, The Guardian, 12 December.
Markusen, A. R., Campbell, S., Deitrick, S. and Hall, P. (1991) The Rise of the Gunbelt: The Military Remapping of Industrial America. Oxford: Oxford University Press.
McCarthy, J. (1959) Programs with common sense. In Mechanisation of Thought Processes, Proceedings of the Symposium of the National Physics Laboratory, pp. 77–84. London: UK Her Majesty’s Stationery Office. Reprinted in McCarthy (1990).
McDowell, L. (2011) Working Bodies: Interactive Service Employment and Workplace Identities. Chichester: John Wiley & Sons.
Minsky, M. (1961) Steps toward Artificial Intelligence, Proceedings of the IRE, 49: 8–30.
Mozur, P. (2019) 500,000 face scans: how China is using A.I. to profile a minority, New York Times, 19 April.
Newell, A., Shaw, J. C. and Simon, H. A. (1957) Empirical explorations of the logic theory machine: a case study in heuristics. In Proceedings of the Western Joint Computer Conference, pp. 218–230. New York: Institute of Radio Engineers.
Newell, A., Shaw, J. C. and Simon, H. A. (1983) Chess-playing programs, and the problem of complexity. In E. A. Feigenbaum and J. Feldman (eds) Computers and Thought, pp. 39–70. New York: McGraw-Hill.
OECD (2019a) GDP per Hour Worked (Indicator). Paris: OECD.
OECD (2019b) Artificial Intelligence in Society. Paris: OECD.
Office for National Statistics (2019) Which Occupations Are at Highest Risk of Being Automated? Available online at: https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/articles/whichoccupationsareathighestriskofbeingautomated/2019-03-25 [Accessed 27 December 2019].
Oxford Insights and the International Development Research Centre (2019) Government Artificial Intelligence Readiness Index. Available online at: https://www.oxfordinsights.com/ai-readiness2019 [Accessed 30 December 2019].
Papert, S. and Minsky, M. L. (1988) Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.
Perez, C. (2009) Technological revolutions and techno-economic paradigms, Cambridge Journal of Economics, 34: 185–202.
PWC (2019) PWC’s Global Artificial Intelligence Study. Available online at: https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html [Accessed 12 December 2019].
PWC (2020) 2020 AI Predictions: Lead on Risk and Responsible AI. Available online at: https://www.pwc.com/us/en/services/consulting/library/artificial-intelligence-predictions-2020/lead-on-risk-and-responsibility.html [Accessed 15 February 2020].
Saxenian, A. (1994) Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge, MA: Harvard University Press.
Schumpeter, J. (1942) Capitalism, Socialism and Democracy. New York/London: Harper & Brothers.
Schwartz, O. (2019) Untold history of AI: invisible women programmed America’s first electronic computer, IEEE Spectrum, 25 March.
Sharkey, N. (2019) AI expert calls for end to UK use of ‘racially biased’ algorithms, The Guardian, 12 December.
Sharkey, N., van Wynsberghe, A., Havens, J. C. and Michael, K. (2018) Socioethical approaches to robotics development, IEEE Robotics & Automation Magazine, 25: 26–28.
Solow, R. (1987) We’d better watch out, New York Times Book Review, 12 July.
Spencer, D. and Slater, G. (2020) No automation please, we’re British: technology and the prospects for work, Cambridge Journal of Regions, Economy and Society, this issue.
Susskind, R. and Susskind, D. (2015) The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford: Oxford University Press.
Tatman, R. and Kasten, C. (2017) Effects of talker dialect, gender & race on accuracy of Bing Speech and YouTube automatic captions. In INTERSPEECH, pp. 934–938.
The White House (2019) Artificial Intelligence for the American People. Available online at: https://www.whitehouse.gov/ai/ [Accessed 30 December 2019].
US Library of Congress (n.d.) John Henry. Available online at: https://www.loc.gov/item/ihas.200196572 [Accessed 15 December 2019].
Vanderborght, B. (2019) Robotic dreams, robotic realities, IEEE Robotics & Automation Magazine, 26: 4–5.
Vincent, J. (2018) This Is When AI’s Top Researchers Think Artificial General Intelligence Will Be Achieved. Available online at: https://www.theverge.com/2018/11/27/18114362/ai-artificial-general-intelligence-when-achieved-martin-ford-book [Accessed 12 September 2019].
Wachter, S., Mittelstadt, B. and Floridi, L. (2017) Transparent, explainable, and accountable AI for robotics, Science Robotics, 2: 1–5.
Waldman-Brown, A. (2020) Redeployment or robocalypse? Workers and automation in Ohio manufacturing SMEs, Cambridge Journal of Regions, Economy and Society, this issue.
Yeung, H. W. (2016) Strategic Coupling: East Asian Industrial Transformation in the New Global Economy. Cornell Studies in Political Economy. Ithaca, NY: Cornell University Press.
Zambelli, S. (2018) The aggregate production function is NOT neoclassical, Cambridge Journal of Economics, 42: 383–426.
Zhou, A. (2019) The intersection of ethics and AI, AI Matters, 5: 64–69.
© The Author(s) 2020. Published by Oxford University Press on behalf of the Cambridge Political Economy Society. All rights reserved.
For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).
When machines think for us: the consequences for work and place, Cambridge Journal of Regions, Economy and Society, vol. 13, issue 1, p. 3, 15 May 2020. doi:10.1093/cjres/rsaa004.