Maria Figueroa-Armijos, Brent Clark, Serge Veiga (2022)
Ethical Perceptions of AI in Hiring and Organizational Trust: The Role of Performance Expectancy and Social Influence. Journal of Business Ethics, 186
Esme Terry, Abigail Marks, Arek Dakessian, D. Christopoulos (2021)
Emotional Labour and the Autonomy of Dependent Self-Employed Workers: The Limitations of Digital Managerial Control in the Home Credit Sector. Work, Employment and Society, 36
D. Marikyan, S. Papagiannidis, O. Rana, R. Ranjan, G. Morgan (2022)
“Alexa, let’s talk about my productivity”: The impact of digital assistants on work productivity. Journal of Business Research
Janarthanan Balakrishnan, Yogesh Dwivedi (2021)
Role of cognitive absorption in building user trust and experience. Psychology & Marketing
Armin Granulo, Christoph Fuchs, S. Puntoni (2020)
Preference for Human (vs. Robotic) Labor is Stronger in Symbolic Consumption Contexts. Journal of Consumer Psychology
Heiner Heiland (2021)
Neither timeless, nor placeless: Control of food delivery gig work via place-based working time regimes. Human Relations, 75
Chih‐hai Yang (2022)
How Artificial Intelligence Technology Affects Productivity and Employment: Firm-level Evidence from Taiwan. Research Policy
Luísa Nazareno, Danielle Schiff (2021)
The impact of automation and artificial intelligence on worker well-being. Technology in Society
Paul Daugherty (2018)
Collaborative Intelligence: Humans and AI Are Joining Forces
D. Brougham, J. Haar (2020)
Technological disruption and employment: The influence on job insecurity and turnover intentions: A multi-country study. Technological Forecasting and Social Change, 161
José Arias-Pérez, Juan Vélez-Jaramillo (2022)
Ignoring the three-way interaction of digital orientation, Not-invented-here syndrome and employee's artificial intelligence awareness in digital innovation performance: A recipe for failure. Technological Forecasting and Social Change
Soumyadeb Chowdhury, P. Budhwar, P. Dey, Sian Joel-Edgar, Amelie Abadie (2022)
AI-employee collaboration and business performance: Integrating knowledge-based view, socio-technical systems and organisational socialisation framework. Journal of Business Research
A. Ocampo, S. Restubog, Lu Wang, P. Garcia, R. Tang (2022)
Home and away: How career adaptability and cultural intelligence facilitate international migrant workers' adjustment. Journal of Vocational Behavior
Florian Pethig, J. Kroenung (2022)
Biased Humans, (Un)Biased Algorithms? Journal of Business Ethics, 183
James Duggan, Ultan Sherman, Ronan Carbery, A. McDonnell (2021)
Boundaryless careers and algorithmic constraints in the gig economy. The International Journal of Human Resource Management, 33
C. Frey, Michael Osborne (2017)
The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114
Sheshadri Chatterjee, N. Rana, Yogesh Dwivedi, A. Baabdullah (2021)
Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technological Forecasting and Social Change, 170
C. Lloyd, Jonathan Payne (2019)
Rethinking Country Effects: Robotics, AI and Work Futures in Norway and the UK. InfoSciRN: Other Artificial Intelligence (Sub-Topic)
J. Short, G. Payne, D. Ketchen (2008)
Research on Organizational Configurations: Past Accomplishments and Future Challenges. Journal of Management, 34
A. Meijer, L. Lorenz, M. Wessels (2021)
Algorithmization of Bureaucratic Organizations: Using a Practice Lens to Study How Context Shapes Predictive Policing Systems. Public Administration Review
Elisa Conz, Giovanna Magnani (2020)
A dynamic perspective on the resilience of firms: A systematic literature review and a framework for future research. European Management Journal, 38
E. Brivio, Fulvio Gaudioso, Ilaria Vergine, Cassandra Mirizzi, C. Reina, Anna Stellari, C. Galimberti (2018)
Preventing Technostress Through Positive Technology. Frontiers in Psychology, 9
Guanglu Zhang, A. Raina, J. Cagan, Christopher McComb (2021)
A cautionary tale about the impact of AI on human design teams. Design Studies, 72
Emmanuelle Walkowiak (2021)
Neurodiversity of the workforce and digital transformation: The case of inclusion of autistic workers at the workplace. Technological Forecasting and Social Change
G. Krogh (2018)
Artificial Intelligence in Organizations: New Opportunities for Phenomenon-Based Theorizing. Academy of Management Discoveries
Surabhi Verma, Vibhav Singh (2022)
Impact of artificial intelligence-enabled job characteristics and perceived substitution crisis on innovative work behavior of employees from high-tech firms. Computers in Human Behavior, 131
Shoshi Chen, M. Westman, S. Hobfoll (2015)
The commerce and crossover of resources: resource conservation in the service of resilience. Stress and Health: Journal of the International Society for the Investigation of Stress, 31(2)
Sarah Bankins, Paul Formosa, Yannick Griep, Deborah Richards (2022)
AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Information Systems Frontiers, 24
N. Kougiannou, P. Mendonça (2021)
Breaking the Managerial Silencing of Worker Voice in Platform Capitalism: The Rise of a Food Courier Network. British Journal of Management
S. Fiore, Travis Wiltshire (2016)
Technology as Teammate: Examining the Role of External Cognition in Support of Team Cognitive Processes. Frontiers in Psychology, 7
Xia Feng, G. Perceval, Wenfeng Feng, Chengzhi Feng (2020)
High Cognitive Flexibility Learners Perform Better in Probabilistic Rule Learning. Frontiers in Psychology, 11
Christoph Keding, Philip Meissner (2021)
Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions. Technological Forecasting and Social Change, 171
C. Chuan, W. Tsai, Sumi Cho (2019)
Framing Artificial Intelligence in American Newspapers. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
E. Brynjolfsson, D. Li, L. R. Raymond (2023)
Generative AI at work (No. w31161)
J. Mathieu, Gilad Chen (2011)
The Etiology of the Multilevel Paradigm in Management Research. Journal of Management, 37
Andrew Nelson, J. Irwin (2013)
'Defining What We Do – All Over Again': Occupational Identity, Technological Change, and the Librarian/Internet-Search Relationship. Sociology of Innovation eJournal
S. Parker, G. Grote (2020)
Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World. Applied Psychology
Bonhak Koo, Catherine Curtis, Bill Ryan (2020)
Examining the impact of artificial intelligence on hotel employees through job insecurity perspectives. International Journal of Hospitality Management
E. Anicich (2022)
Flexing and floundering in the on-demand economy: Narrative identity construction under algorithmic management. Organizational Behavior and Human Decision Processes
Dongseop Lee, Youngho Rhee, R. Dunham (2009)
The Role of Organizational and Individual Characteristics in Technology Acceptance. International Journal of Human–Computer Interaction, 25
E. Demerouti, A. Bakker, F. Nachreiner, W. Schaufeli (2001)
The job demands-resources model of burnout. The Journal of Applied Psychology, 86(3)
E. Bucher, P. Schou, Matthias Waldkirch (2020)
Pacifying the algorithm – Anticipatory compliance in the face of algorithmic management in the gig economy. Organization, 28
A. Wood, Mark Graham, V. Lehdonvirta, I. Hjorth (2018)
Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Work, Employment & Society, 33
J. Tasioulas (2018)
First Steps Towards an Ethics of Robots and Artificial Intelligence. Social Science Research Network
F. Pemer (2020)
Enacting Professional Service Work in Times of Digitalization and Potential Disruption. Journal of Service Research, 24
P. Bourdieu (1991)
Language and symbolic power
A. Baabdullah, A. Alalwan, Emma Slade, R. Raman, Khalaf Khatatneh (2021)
SMEs and artificial intelligence (AI): Antecedents and consequences of AI-based B2B practices. Industrial Marketing Management
Eva Selenko, Sarah Bankins, Mindy Shoss, Joel Warburton, S. Restubog (2022)
Artificial Intelligence and the Future of Work: A Functional-Identity Perspective. Current Directions in Psychological Science, 31
D. Knippenberg, C. Dreu, A. Homan (2004)
Work group diversity and group performance: an integrative model and research agenda. The Journal of Applied Psychology, 89(6)
J. Danaher, M. Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, A. Paor, H. Felzmann, M. Haklay, S. Khoo, J. Morison, M. Murphy, Niall O'Brolchain, Burkhard Schafer, K. Shankar (2017)
Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4
Pervaiz Akhtar, J. Frynas, Kamel Mellahi, S. Ullah (2019)
Big Data‐Savvy Teams’ Skills, Big Data‐Driven Actions and Business Performance. International Management of Organizations & People eJournal
P. Sun, Julie Chen, U. Rani (2021)
From Flexible Labour to ‘Sticky Labour’: A Tracking Study of Workers in the Food-Delivery Platform Economy of China. Work, Employment and Society, 37
Y. Suseno, Chiachi Chang, Marek Hudík, Eddy Fang (2021)
Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: the moderating role of high-performance work systems. The International Journal of Human Resource Management, 33
L. Robert, Casey Pierce, Liz Morris, Sangmi Kim, Rasha Alahmad (2020)
Designing fair AI for managing employees in organizations: a review, critique, and design agenda. Human–Computer Interaction, 35
S. Woo, Ernest O’Boyle, Paul Spector (2017)
Best practices in developing, conducting, and evaluating inductive research. Human Resource Management Review, 27
S. Restubog, Pauline Schilpzand, Brent Lyons, Catherine Deen, Yaqing He (2023)
The Vulnerable Workforce: A Call for Research. Journal of Management, 49
Lukas Lanz, Roman Briker, Fabiola Gerpott (2023)
Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning. Journal of Business Ethics
E. Brynjolfsson, A. McAfee (2014)
The second machine age: Work, progress, and prosperity in a time of brilliant technologies
Siliang Tong, Nan Jia, Xueming Luo, Z. Fang (2021)
The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal
K. Kawaguchi (2020)
When Will Workers Follow an Algorithm? A Field Experiment with a Retail Business. Management Science, 67
R. Allen, P. Choudhury (2021)
Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion. Organization Science
R. Büchter, Alina Weise, D. Pieper (2020)
Development, testing and use of data extraction forms in systematic reviews: a review of methodological guidance. BMC Medical Research Methodology, 20
Derek Lingmont, A. Alexiou (2020)
The contingent effect of job automating technology awareness on perceived job insecurity: Exploring the moderating role of organizational culture. Technological Forecasting and Social Change, 161
M. Goos, A. Manning (2007)
Lousy and Lovely Jobs: The Rising Polarization of Work in Britain. The Review of Economics and Statistics, 89
Cristina Trocin, Ingrid Hovland, Patrick Mikalef, Christian Dremel (2021)
How Artificial Intelligence affords digital innovation: A cross-case analysis of Scandinavian companies. Technological Forecasting and Social Change, 173
J. Colquitt (2001)
On the dimensionality of organizational justice: a construct validation of a measure. The Journal of Applied Psychology, 86(3)
J. Holm, Edward Lorenz (2021)
The Impact of Artificial Intelligence on Skills at Work in Denmark. ERN: Labor Markets (Topic)
Tuyet-Mai Nguyen, A. Malik (2021)
A Two‐Wave Cross‐Lagged Study on AI Service Quality: The Moderating Effects of the Job Level and Job Role. British Journal of Management
Berkeley Dietvorst, J. Simmons, Cade Massey (2014)
Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. CSN: Business (Topic)
Cindy Candrian, Anne Scherer (2022)
Rise of the machines: Delegating decisions to autonomous AI. Computers in Human Behavior, 134
Frank Fossen, Alina Sorgner (2022)
New digital technologies and heterogeneous wage and employment dynamics in the United States: Evidence from individual-level data. Technological Forecasting and Social Change
Manuel Gonzalez, Weiwei Liu, Lei Shirase, David Tomczak, Carmen Lobbe, Richard Justenhoven, Nicholas Martin (2022)
Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Computers in Human Behavior, 130
Daniel Levinthal, J. March (1993)
The myopia of learning. Strategic Management Journal, 14
S. Schaupp (2021)
Algorithmic Integration and Precarious (Dis)Obedience: On the Co-Constitution of Migration Regime and Workplace Regime in Digitalised Manufacturing and Logistics. Work, Employment and Society, 36
Monserrat Bustelo, Pablo Egana-delSol, Laura Ripani, Nicolas Soler, Mariana Viollaz (2020)
Automation in Latin America: Are Women at Higher Risk of Losing Their Jobs? Technological Forecasting and Social Change
J. March (1991)
Exploration and Exploitation in Organizational Learning. Organization Science, 2
S. Restubog, C. Deen, A. Decoste, Yaqing He (2021)
From vocational scholars to social justice advocates: Challenges and opportunities for vocational psychology research on the vulnerable workforce. Journal of Vocational Behavior, 126
Gilad Chen, P. Bliese, J. Mathieu (2005)
Conceptual Framework and Statistical Procedures for Delineating and Testing Multilevel Theories of Homology. Organizational Research Methods, 8
David Brougham, J. Haar (2017)
Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ perceptions of our future workplace. Journal of Management & Organization, 24
S. Bankins, P. Formosa (2021)
Redefining the psychological contract in the digital era: Issues for research and practice
David Newman, Nathanael Fast, Derek Harmon (2020)
When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160
M. Westman (2001)
Stress and Strain Crossover. Human Relations, 54
M. Jarrahi (2018)
Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons
E. Fumagalli, S. Rezaei, A. Salomons (2022)
OK computer: Worker perceptions of algorithmic recruitment. Research Policy
Akanksha Jaiswal, C. Arun, A. Varma (2021)
Rebooting employees: upskilling for artificial intelligence in multinational corporations. The International Journal of Human Resource Management, 33
Pat Pataranutaporn, Valdemar Danry, J. Leong, Parinya Punpongsanon, Dan Novy, P. Maes, Misha Sra (2021)
AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3
Stefania Innocenti, M. Golin (2022)
Human capital investment and perceived automation risks: Evidence from 16 countries. Journal of Economic Behavior & Organization
Sheshadri Chatterjee, Ranjan Chaudhuri, D. Vrontis, A. Thrassou, S. Ghosh (2021)
Adoption of artificial intelligence-integrated CRM systems in agile organizations in India. Technological Forecasting and Social Change
P. Leonard, Roger Tyers (2021)
Engineering the revolution? Imagining the role of new digital technologies in infrastructure work futures. New Technology, Work and Employment
Konrad Sowa, Aleksandra Przegalinska, L. Ciechanowski (2021)
Cobots in knowledge work. Journal of Business Research, 125
T. Haesevoets, D. Cremer, Kim Dierckx, A. Hiel (2021)
Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119
Katherine Kellogg, Melissa Valentine, Angèle Christin (2020)
Algorithms at Work: The New Contested Terrain of Control. Academy of Management Annals
Jennifer Logg, Julia Minson, Don Moore (2019)
Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151
Sarah Bankins, Paul Formosa (2023)
The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work. Journal of Business Ethics
V. Bader, S. Kaiser (2019)
Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26
Min Lee (2018)
Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5
Paul Glavin, A. Bierman, Scott Schieman (2021)
Über-Alienated: Powerless and Alone in the Gig Economy. Work and Occupations, 48
Xueming Luo, M. Qin, Z. Fang, Z. Qu (2020)
Artificial Intelligence Coaches for Sales Agents: Caveats and Solutions. Journal of Marketing, 85
Serge Veiga, Maria Figueroa-Armijos, Brent Clark (2023)
Seeming Ethical Makes You Attractive: Unraveling How Ethical Perceptions of AI in Hiring Impacts Organizational Innovativeness and Attractiveness. Journal of Business Ethics
Bin Wang, Yukun Liu, S. Parker (2020)
How Does the Use of Information Communication Technology Affect Individuals? A Work Design Perspective. Academy of Management Annals
Dirk Basten, Thilo Haamann (2018)
Approaches for Organizational Learning: A Literature Review. SAGE Open, 8
P. Bedué, A. Fritzsche (2022)
Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption, 35
Erin Makarius, D. Mukherjee, Joesph Fox, Alexa Fox (2020)
Rising With the Machines: A Sociotechnical Framework for Bringing Artificial Intelligence Into the Organization. Organizations & Markets: Policies & Processes eJournal
S. Hobfoll, J. Halbesleben, Jean‐Pierre Neveu, M. Westman (2018)
Conservation of Resources in the Organizational Context: The Reality of Resources and Their Consequences, 5
Jun Kim, Minki Kim, D. Kwak, Sol Lee (2021)
Home-Tutoring Services Assisted with Technology: Investigating the Role of Artificial Intelligence Using a Randomized Field Experiment. Journal of Marketing Research
E. Brynjolfsson (2023)
Generative AI: Perspectives from Stanford HAI
G. Cao, Y. Duan, J. Edwards, Yogesh Dwivedi (2021)
Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation
M. Tomprou, Min Lee (2022)
Employment relationships in algorithmic management: A psychological contract perspective. Computers in Human Behavior, 126
Jennifer Wiseman, Amelia Stillwell (2022)
Organizational Justice: Typology, Antecedents and Consequences. Encyclopedia
E. Reid-Musson, E. MacEachen, E. Bartel (2020)
‘Don't take a poo!': Worker misbehaviour in on‐demand ride‐hail carpooling. New Technology, Work and Employment, 35
A. Veen, T. Barratt, Caleb Goods (2019)
Platform-Capital’s ‘App-etite’ for Control: A Labour Process Analysis of Food-Delivery Work in Australia. Work, Employment and Society, 34
R. Calvo, D. Vella-Brodrick, P. Desmet, Richard Ryan (2016)
Editorial for “Positive Computing: A New Partnership Between Psychology, Social Sciences and Technologists”. Psychology of Well-Being, 6
Sophia Galière (2020)
When food‐delivery platform workers consent to algorithmic management: a Foucauldian perspective. New Technology, Work and Employment
Cheri Ostroff (1993)
The Effects of Climate and Personal Influences on Individual Behavior and Attitudes in Organizations. Organizational Behavior and Human Decision Processes, 56
M. A. Boden (2016)
AI
(2022)
Algorithmic management in food‐delivery platform economy in China
Deepika Chhillar, Ruth Aguilera (2022)
An Eye for Artificial Intelligence: Insights Into the Governance of Artificial Intelligence and Vision for Future Research. Business & Society, 61
A. Malik, P. Budhwar, Charmi Patel, N. Srikanth (2020)
May the bots be with you! Delivering HR cost-effectiveness and individualised employee experiences in an MNE. The International Journal of Human Resource Management, 33
J. Brock, Florian Wangenheim (2019)
Demystifying AI: What Digital Transformation Leaders Can Teach You about Realistic Artificial Intelligence. California Management Review, 61
K. Parry, Michael Cohen, S. Bhattacharya (2016)
Rise of the Machines. Group & Organization Management, 41
D. Acemoglu, David Autor, J. Hazell, P. Restrepo (2022)
Artificial Intelligence and Jobs: Evidence from Online Vacancies. Journal of Labor Economics, 40
(2020)
Even If AI Can Cure Loneliness, Should It? How AI Is Transforming the Organization
David Spencer (2018)
Fear and Hope in an Age of Mass Automation: Debating the Future of Work. Individual Issues & Organizational Behavior eJournal
W. Orlikowski, D. Gash (1994)
Technological frames: making sense of information technology in organizations. ACM Transactions on Information Systems, 12
INTRODUCTION

Organizations worldwide are in the midst of a technological shift, often referred to as the fourth industrial revolution (Schwab, 2016), the second machine age (Brynjolfsson & McAfee, 2014), or the algorithmic age (Danaher et al., 2017). This shift is re‐shaping the nature of human work, and a major driver of this change is the rapid development and deployment of artificially intelligent (AI) technologies. These technologies are defined as “a collection of interrelated technologies used to solve problems and perform tasks that, when humans do them, requires thinking” (Walsh et al., 2019, p. 14). Examples of AI technologies include: machine learning, which identifies patterns in large datasets via varying degrees of human supervision; natural language processing, which extracts, classifies, and translates written or spoken text; visual recognition, which uses image recognition and machine vision; and decision support systems (Walsh et al., 2019). Generative AI is another subset of artificial intelligence “that, based on a textual prompt, generates novel content,” such as written text, music, and visual images (Stanford University, 2023, p. 3). While current forms of AI are classified as narrow AI, designed for specific tasks such as image classification (Boden, 2016), their growing integration into the workplace, alongside the accelerating development of more sophisticated forms of AI, carries important implications for the nature of work.

The introduction of AI technologies into organizations has sparked intense debates about their impact on workers and workplaces, with highly polarized views. Some suggest that AI will lead to significant job losses (Frey & Osborne, 2017), while others argue that it will optimize productivity and improve job quality (Jarrahi, 2018; Spencer, 2018).
This polarization is compounded by broader societal narratives that offer science fiction‐based portrayals of emerging technologies that can mischaracterize current AI systems (Cave et al., 2018). The convergence of these factors can then lead workers to fear the use of AI in their workplaces, regardless of its purpose, and generate negative worker outcomes such as lower commitment to work, cynicism, and turnover (Brougham & Haar, 2018).

The current state of the discourse surrounding AI at work calls for an integrative and systematic discussion of its impacts on individual workers, teams, and organizations (von Krogh, 2018). Neither predominantly pessimistic nor predominantly optimistic views will fully capture the nuanced effects of these technologies, given the differences in how workers perceive them, what they are implemented to do, and the complex socio‐technical contexts into which they are deployed. At the same time, there is a lack of integrative discussion on how AI (re)configures work routines, work processes, and skills in practice (Meijer et al., 2021) and how workers perceive these changes and respond (Bucher et al., 2021). In this article, we provide a systematic review of empirical research that examines the use of AI at work. At individual, group, and organizational levels of analysis, we examine themes regarding human–AI collaboration, human and algorithmic capabilities, human workers' attitudes and experiences related to AI and algorithmic management, and AI's effects on the labor market. By synthesizing existing studies into a coherent multilevel framework, we make three interconnected contributions to the fast‐growing literature on AI in management.

First, we identify how AI is shaping the future of work by changing specific aspects of work processes and human worker experiences.
Doing so allows us to unpack research that investigates new divisions of labor across human and AI capabilities (Jarrahi, 2018) and the optimal interactions between AI systems and human skills to support organizational and worker effectiveness. We also shed light on how human–AI collaboration influences and alters the nature of work and the skills required to thrive in today's organizational landscape.

Second, our review presents a detailed account of employees' attitudes towards AI use at work and their experiences of algorithmic management, particularly via surveillance and control, in platform‐based work. Established technology acceptance models recognize that employee attitudes are important predictors of technology use (Lee et al., 2009). More positive attitudes will likely result in efforts to upskill and functionally use the technology, whereas more negative attitudes will likely result in resistance and poor uptake (Suseno et al., 2022). Employees can also simultaneously hold positive and negative attitudes towards AI depending on their perceptions of the technology and their assessment of its costs and benefits (Lichtenthaler, 2019). As such, our research builds knowledge of the varied ways that employees construe AI, which provides an important precursor to understanding how they will use it.

Third, by expanding the evidence base concerning how AI use generates changes across workplaces and the workforce, our review helps move beyond forecasting of what may occur in AI‐enabled workplaces to understanding what is occurring. Recent advances in generative AI (e.g., ChatGPT developed by OpenAI) have prompted academics, technology leaders, and even governments to reconsider the use and implications of AI platforms. For example, several technology leaders and government institutions have petitioned to halt further development of such AI systems until implications for ethics, education, and safety are fully examined (Vallance, 2023).
We contribute to this ongoing discourse by focusing on empirical work that shows how AI technologies operate in their contexts of use. Our review informs organizational implementation strategies and will assist leaders in managing the “human side” of their firms' AI use. Overall, our aim is to generate critical insights to strengthen future theorizing and empirical investigation of the outcomes of AI at work, while also discussing how organizational leaders can embed AI systems to promote fairness, diversity, and effective decision making.

In the following section, we describe our search strategy and coding procedures, which underpin our approach for synthesizing the literature. Next, we discuss the themes that emerged across levels of analysis. Finally, we highlight potential areas for future scholarly work and offer actionable suggestions for practitioners implementing AI in the workplace.

LITERATURE SEARCH METHOD AND ANALYSIS

Search strategy

We searched for relevant articles using a four‐step process, outlined in Figure 1 as a PRISMA flowchart. First, we searched the Scopus database using our search terms (see Box 1, Figure 1) and limited our results to articles published in the business/management and psychology subject areas (including the psychology subject area allowed us to capture relevant work in applied and organizational psychology). This initial search yielded 1730 articles. Second, articles were screened for journal quality. To ensure an inclusive sample with a focus on micro‐level work, we included articles published in journals ranked 3 or above in business/management (according to the Chartered Association of Business Schools journal list) or of any ranking if published in the psychology subject area (see Box 2, Figure 1).
In the third and fourth steps, we manually screened the titles and abstracts of the remaining articles (n = 363) to ensure that they had a tight focus on the micro‐level implications of AI use in relevant management, organizational behavior, and organizational psychology outlets (see Box 3, Figure 1, for exclusion criteria). This process resulted in 97 relevant articles. After excluding review or conceptual‐only articles, we included 68 empirical articles to maintain our empirical focus.

FIGURE 1: Flow diagram of search process.

Coding procedures and approach to organizing the literature

Following previous research (e.g., Büchter et al., 2020), we used a data extraction form to obtain information from the articles, including their objectives, theories used, research design, method features, and findings. To ensure consistency in our data extraction and to assist with this process, we trained four research assistants who hold advanced degrees in business management. Using an inductive approach, we then identified themes and categories and drew inferences about the underlying topics of the articles in our review (Woo et al., 2017). Instead of approaching the articles with a predetermined categorization scheme, our inductive approach allowed the themes to emerge organically through immersion in the corpus of work (for similar approaches, see Conz & Magnani, 2020; Lightfoot et al., 2013).

The data extraction form provided an initial basis for clustering papers according to their focus. Each paper was then read in full, with the themes developing through a process of iteration and refinement. For instance, the “Human–AI Collaboration” theme (now Theme 1) initially included work from the “Perceptions of Human and Algorithmic Capabilities” and “Worker Attitudes Towards AI” themes (now Themes 2 and 3, respectively). But further immersion in the articles led us to identify nuanced distinctions across the work.
Some articles explicitly compared human and algorithmic labor (which emerged as Theme 2), while others focused on worker attitudes towards AI (which emerged as Theme 3). The remaining themes then developed more clearly as distinct work, such as a clear set of articles focused on algorithmic management in platform work (Theme 4). Overall, our sample is comprised of five themes. Each theme and example constitutive articles are presented in Table 1.

TABLE 1: Key themes from empirical research on AI in the workplace.

Theme 1. Human–AI collaboration (n = 20 papers)
Description: This theme focuses on a range of factors that can facilitate or inhibit effective human–AI collaboration and identifies outcomes of this collaboration.
Example articles: Individual level: Marikyan et al. (2022); Sowa et al. (2021); Tong et al. (2021). Group level: Pemer (2021). Organizational level: Meijer et al. (2021).
Methods used (% of papers in the theme): Surveys (35%); experiments (25%); case study/interviews (25%); multimethod/mixed method (15%).
Examples of theories used: Unified theory of acceptance and use of technology; AI aversion; job design theory; conservation of resources theory; social exchange theory; affordance theory.

Theme 2. Perceptions of algorithmic and human capabilities (n = 9 papers)
Description: This theme contrasts people's views of human and algorithmic capabilities and identifies how people think about the differences, benefits, and limitations of AI taking on previously human work.
Example articles: Individual level: Pethig and Kroenung (2022); Keding and Meissner (2021); Candrian and Scherer (2022). Group level: Nil. Organizational level: Nil.
Methods used: Experiments (100%).
Examples of theories used: Organizational justice theory; psychological contract theory; signaling theory; cognitive absorption theory.

Theme 3. Worker attitudes towards AI (n = 15 papers)
Description: This theme focuses on predominantly fear‐based worker responses to AI, including the mechanisms and outcomes of these attitudes. A subset of papers focus on facilitators of positive worker attitudes towards AI, usually based on technology acceptance models.
Example articles: Individual level: Ding (2021); Innocenti and Golin (2022); Leonard and Tyers (2021). Group level: Nil. Organizational level: Suseno et al. (2022); Lingmont and Alexiou (2020).
Methods used: Surveys (87%); ethnography (6.5%); mixed method (6.5%).
Examples of theories used: Technology‐organization‐environment framework; technology acceptance model; social cognitive theory; extended unified theory of acceptance and use of technology.

Theme 4. AI as a control mechanism in algorithmic management of platform‐based work (n = 15 papers)
Description: This theme focuses on algorithmic management and surveillance as control mechanisms, and workers' responses to these practices, in a predominantly gig work context.
Example articles: Individual level: Wood et al. (2019); Glavin et al. (2021); Duggan et al. (2022). Group level: Kougiannou and Mendonça (2021); Schaupp (2022). Organizational level: Huang (2022); Veen et al. (2020); Galière (2020).
Methods used: Case study (20%); interviews (20%); multimethod (20%); ethnography (13%); mixed method (13%); other (13%).
Examples of theories used: Labor process theory; boundaryless career theory; alienation; Foucault's ‘dispositive’ framework; identity.

Theme 5. Labor market implications of AI use (n = 9 papers)
Description: This theme focuses on how the deployment of AI is impacting job growth and skill demand and how its use is differentially affecting groups of workers.
Example articles: Individual level: Fossen and Sorgner (2022); Egana‐del Sol et al. (2022); Walkowiak (2021). Group level: Akhtar et al. (2019). Organizational level: Yang (2022); Acemoglu et al. (2022).
Methods used: Surveys (67%); interviews (33%).
Examples of theories used: Resource‐based view; national innovation systems; dynamic skill theory; theory of productive complementarities.

In the following sections, we delve into our analysis of the five themes that emerged from our review of the literature.
These themes, while interconnected, are distinct and have implications across multiple levels of analysis: individual (e.g., individual‐level attitudes, characteristics, and experiences); group (e.g., group and team processes); and organizational (e.g., systems, culture, and leadership practices). We begin with the largest theme of human–AI collaboration (n = 20 articles), followed by two themes that focus on employee‐level perceptions of and attitudes towards AI. We then discuss the themes that center around AI's effects on a broader set of workers and the labor market. Figure 2 provides an integrative framework that synthesizes the key findings across our five themes.

Figure 2. Integrative framework of key findings.

RESULTS

Theme 1: Human–AI collaboration

The productivity and efficiency benefits of AI can only be realized through intentional and functional collaboration between humans and the technology. The future of work is frequently discussed in terms of the importance of such human–AI collaboration (e.g., Jarrahi, 2018). Essentially, this involves the application of AI systems to complement or to augment human work, skills, or training, rather than simply replacing workers with AI (Brynjolfsson, 2023). As AI has diverse applications, the specific ways in which it can support human workers are varied and can include providing performance feedback (Tong et al., 2021); using digital assistants (Malik et al., 2022); improving decision making (Meijer et al., 2021; Trocin et al., 2021; van den Broek et al., 2021); and enhancing customer interactions and service offerings (e.g., Kim et al., 2022; Pemer, 2021).
Our review identifies several factors and processes that either enable or hinder the realization of these intended technological benefits at the individual, group, and organizational levels.

Individual level

The nature of human–AI collaboration at the individual level is shaped by various factors, including AI attributes, the effects of AI on tasks and job designs and how these are experienced by employees, and workers' attitudes towards AI. (We note that job and task design are often, at least initially, determined at an organizational level. We position these factors at the individual level because these articles focus on how people experience their work as a result of interacting with AI, rather than on how those AI implementation choices were made.) Higher satisfaction with the technology's features, such as its performance on the task and fit to users' needs (Nguyen & Malik, 2022a), led to favorable outcomes such as increased job satisfaction (Nguyen & Malik, 2022a) and productivity (Marikyan et al., 2022). Workers who perceive a good task‐technology fit, such as viewing AI as undertaking repetitive tasks (Sowa et al., 2021), being specialized to a task (Sowa & Przegalinska, 2020), or improving their decision making (Kawaguchi, 2021), and who perceive good AI system quality (Nguyen & Malik, 2022b), were more likely to value the technology, use it as intended, and improve their performance. Our review also indicates that better human–AI collaboration occurs when workers trust the AI, understand its nature and purpose, and develop the skills to use it (Chowdhury et al., 2022).

The way that AI altered workers' job designs, and the associated demands and resources they experienced, also exerted effects on the nature of human–AI collaboration. For example, enhanced innovative behaviors were observed when AI was applied to support job autonomy, job complexity, specialization, and information processing (Verma & Singh, 2022).
In a service work context, AI can enhance workers' resources by reducing mental and physical fatigue and increasing positive emotional states (Qiu et al., 2022). Such outcomes are also facilitated when AI developers, working in conjunction with human domain experts, combine machine intelligence and domain expertise to create a human–machine learning hybrid work practice (van den Broek et al., 2021). However, the use of AI can also generate demands that impede its potential benefits. For example, AI use can increase requirements for worker skill development (Verma & Singh, 2022) and induce technology overload (Kim et al., 2022).

Individuals' algorithmic aversion, defined as resistance to the use of algorithmic advice (Dietvorst et al., 2015), can also negatively affect AI use, with workers' domain experience or level of expertise influencing the extent of this aversion. Workers with higher domain expertise usually have higher algorithmic aversion, due to perceiving greater accountability for AI‐generated outcomes (Allen & Choudhury, 2022) and believing they have superior capabilities to AI and therefore seeing little benefit in its use (Kim et al., 2022). The effects of this aversion, however, can be marginally attenuated by allowing workers to incorporate their knowledge alongside AI output (Kawaguchi, 2021) or having greater human decision input overall (Haesevoets et al., 2021). Workers with low domain expertise are also more likely to be algorithmically averse, but due to lesser ability to assess and effectively use AI's output (Allen & Choudhury, 2022). By contrast, workers with moderate domain experience are usually more likely to use AI and generate benefits from its use (Allen & Choudhury, 2022).

Relatedly, individuals' performance levels also influence their ability to collaborate with AI systems effectively. Luo et al.
(2021) found that lower and higher ranked performers experienced fewer benefits from AI feedback due to information overload and algorithmic aversion, respectively. Conversely, middle‐ranked performers achieved the best outcomes from AI advice. These findings suggest that restricting feedback helps lower ranked performers assimilate the AI‐generated advice. Interestingly, a mix of AI and human coach advice improved outcomes for both lower and higher ranked performers. In contrast to these findings, Bader and Kaiser (2019) found that lower ranked performers gained a cognitive advantage from AI through provision of specific and helpful information to support customer interactions. Emerging evidence regarding generative AI's effects on workers supports this, with Brynjolfsson et al. (2023) showing that the introduction of a generative AI conversational assistant led to lower skilled and novice customer support workers gaining a productivity advantage, but with little gain achieved by higher skilled and more experienced workers. However, employees' attitudes towards AI also play a role. Related findings indicate that disclosure of AI assistance to workers can generate distrust and fear of displacement from the technology, which stifles potential collaboration and dampens productivity benefits (Tong et al., 2021). But individual‐level resources such as longer tenure can provide buffering social capital and attenuate this negative disclosure effect on human–AI collaboration. Overall, this evidence points to the need for context‐specific and employee‐centered design and presentation of output to maximize how AI can support workers across different levels of performance and experience.

Group level

Pemer (2021) shows how occupational identity influences workers' use of digital technologies, such as AI, and how it acts as a sensemaking tool to shape how they approach the disruption these changes can bring.
While a sense of identity is interpreted by individuals, we position it at the group level as occupational identity reflects a collective sense of what an occupational group does, the boundaries of its expertise and conduct, and the nature of its values and contributions, to form a view of “who we are” (Heinzelmann, 2018; Nelson & Irwin, 2014; Pemer, 2021). How workers construe their occupational identity can facilitate or inhibit their engagement with workplace digitalization. Workers in occupations with well‐established and clearly defined identities are more likely to experiment with and adopt new technologies, alter work approaches, and position themselves as digital experts, compared to workers in occupations with less well‐defined identities (Pemer, 2021). It is the strong shared assumptions inherent in well‐established identities, across occupational members and their stakeholders, that appear to absorb the risks associated with adopting new technologies and legitimize any resulting changes to work approaches.

Organizational level

Several organizational factors influence human–AI collaboration. Broadly, innovative and supportive organizational cultures and climates facilitate employees' AI use. Such organizational contexts appear to encourage approach‐oriented forms of coping with technological change by converting employees' fears of AI replacement towards achieving positive outcomes, such as utilizing the AI to generate innovative behavior (Verma & Singh, 2022). Indeed, active leader role modeling of technology use, HRM practices that support the uptake of AI, and high organizational technology readiness create the conditions for workers' AI use (Pemer, 2021).

Our review of qualitative evidence also shows how established organizational cultures, work patterns, and norms can influence workers' collaboration with AI. For example, Meijer et al.
(2021) demonstrate how largely the same AI technology deployed into different organizations can result in divergent outcomes, driven by existing organizational patterns of work. They found that AI implementation could generate an “algorithmic cage” for workers in hierarchical and bureaucratic organizations, which impeded autonomy and increased resistance to AI use. In contrast, organizations that focused on professional judgment generated an “algorithmic colleague” that gave primacy to AI as a tool to support human workers' judgment. Bader and Kaiser (2019) also suggest that without alignment between AI use and critical aspects such as work routines, performance measures, and leader support for the technology, organizations will struggle to facilitate human–AI collaboration.

Theme summary

This theme elaborates the factors that facilitate or inhibit human–AI collaboration at the individual, group, and organizational levels. Collaboration is facilitated when employees feel confident in and supported by AI use, including having confidence in their own skills and knowledge of AI, trusting the technology, and using it in ways that augment their roles and enhance their job experiences (individual and group levels). A supportive organizational climate that prepares employees and systems for technological changes and invests in and values its workforce also facilitates human–AI collaboration (organizational level). This theme reinforces the importance of adopting a systems thinking approach to unpacking the nature of human–AI collaboration, in order to identify the varied socio‐technical drivers and inhibitors of such collaboration.

Theme 2: Perceptions of algorithmic and human capabilities

People make different assessments of humans and algorithms (see Tomprou & Lee, 2022), including of their relative strengths and weaknesses in undertaking different tasks.
The changing division of work between AI and humans represents a key shift in work processes (Wilson & Daugherty, 2018). This means that uncovering the upsides and downsides of human capabilities in relation to algorithmic capabilities is critical for understanding under what conditions workers are more or less likely to accept the use of AI for certain tasks and to rely on AI for advice. This theme concentrates on how people perceive human and algorithmic labor, especially in the contexts of evaluating information and making decisions. Such work is important for informing the design and deployment of AI systems, by highlighting which types of tasks people prefer human labor to do and which tasks they prefer algorithms to undertake. This theme also expands our understanding of potential drivers of algorithmic aversion and algorithmic appreciation, the latter referring to people being receptive to and relying on algorithmic advice (Logg et al., 2019). Unlike the broader work in Theme 1, this research more explicitly compares individuals' (e.g., workers' and end users') perceptions of humans and algorithms performing the same job functions. As these studies all focus on perceptions at the individual level of analysis, we highlight three sub‐themes that capture the antecedents and outcomes of how people view algorithmic and human capabilities.

Perceived capabilities: Differentiating strengths and limitations of human workers and AI systems

Perceived algorithmic strengths include objectivity in decision making compared to humans (Pethig & Kroenung, 2022), the ability to follow structured decision‐making processes (Keding & Meissner, 2021), a lack of intentionality and self‐interest (Candrian & Scherer, 2022), and the ability to enhance cognitive absorption in a service interaction (Balakrishnan & Dwivedi, 2021).
For example, in the context of investment decision making, Keding and Meissner (2021) found that within such objective task environments, human managers are more likely to perceive algorithmic recommendations as more legitimate compared to identical human recommendations. Human managers are more likely to trust algorithmic recommendations, as they believe that algorithms follow a more structured decision‐making process that improves the quality of those decisions. However, algorithmic capabilities also have drawbacks, such as their perceived reductionist and decontextualized approach to decision making, potentially resulting in inequitable decisions (Newman et al., 2020). For example, Gonzalez et al. (2022) found that job applicants have reduced confidence and a decreased sense of control when autonomous AI systems make recruitment decisions without any human involvement, due to the perception of less opportunity to convey their skills and abilities.On the other hand, human labor, when compared to machine labor, is perceived to generate unique products (Granulo et al., 2021). Humans can also take into account the personal and underlying conditions that affect employees' performance more holistically than algorithms (Newman et al., 2020). However, this attribute can also be viewed negatively given that it can create biased decision making (Fumagalli et al., 2022; Pethig & Kroenung, 2022). Tomprou and Lee (2022) show that, compared to human recruitment agents, the use of algorithmic recruitment agents can decrease newcomers' perceptions of the employer's socio‐emotional commitments to them. Additionally, newcomers responded more negatively when human agents underdelivered on their commitments, compared to algorithmic ones. 
Discussions are underway regarding the development of interpersonally attuned AI systems that are both viable and ethical (Kiron & Unruh, 2019; Tasioulas, 2019), considering that the downsides to human capabilities include the potential for betrayal (Candrian & Scherer, 2022), less structured decision processes (Keding & Meissner, 2021), and less objectivity (Pethig & Kroenung, 2022).

Perceived capabilities: The role of individual goals

Numerous studies suggest that responses to algorithmic versus human intelligence are largely influenced by perceived relative capabilities and performance (Candrian & Scherer, 2022; Keding & Meissner, 2021), but individual self‐interest also appears to play a significant role. Our review reveals that when people assess algorithmic versus human output, they cognitively balance relative capabilities with their preferred outcomes. They also assess whether an algorithmic or human agent's capabilities yield the most benefit for them. For example, Newman et al. (2020) found that algorithmic recruiters are viewed as less fair than human recruiters because algorithms are less capable of accounting for qualitative information. However, other research suggests that human recruiters can be viewed as more error‐prone than algorithms, in part because they consider more qualitative and non‐objective forms of information (Figueroa‐Armijos et al., 2022; Fumagalli et al., 2022; Laurim et al., 2021).

The interpretation of the same capability can also differ depending on individuals' intentions and motivations, such as assessing which agent is more or less likely to recruit them. Fumagalli et al. (2022) show that individuals with lower task performance prefer a human recruiter because this agent is better at assessing other non‐task related qualitative information, which could benefit them.
Conversely, those with higher task performance prefer an algorithmic recruiter because it is viewed as more competent at assessing quantitative performance metrics related to task performance, which could benefit them. Similarly, Pethig and Kroenung (2022) demonstrate that women prefer algorithmic recruiters to human male recruiters because they expect better evaluations from algorithms than humans. This is because women perceive the potential for their gender identity to disadvantage their evaluation by male recruiters. Other studies also report gender effects in human‐algorithm interactions (Buolamwini & Gebru, 2018; Fumagalli et al., 2022).

Increasing human acceptance of algorithmic output

One way to address concerns about fairness in algorithmic decision making is to maintain high human involvement in the decision‐making process, via human input and expertise, and to position algorithmic output as one of several factors considered (Newman et al., 2020). This approach, known as augmented decision making, has shown promise in the recruitment process, resulting in positive outcomes such as increased acceptance of job offers by applicants (Gonzalez et al., 2022). Interestingly though, research suggests that individuals who are more familiar with AI are likely to be indifferent to which type of recruitment agent is used (i.e., human, algorithmic, or mixed; Gonzalez et al., 2022). However, some strategies that are thought to increase human acceptance of and trust in AI, such as transparency and explainability in the technology's operations (Bedué & Fritzsche, 2022), may not always effectively address fairness concerns. Research suggests that even when the factors considered in algorithmic decision making are transparent, individuals may still perceive the outcomes as unfair (Newman et al., 2020).
Additionally, providing process transparency via explanations of algorithmic decision making does not necessarily increase delegation to AI agents (Candrian & Scherer, 2022). Nevertheless, outcome transparency (i.e., showing how successful an AI agent is in its decision making) may increase worker trust and willingness to delegate work and decisions to algorithmic agents (Candrian & Scherer, 2022; Lubars & Tan, 2019).

Theme summary

People hold mental models or lay beliefs regarding the comparative capabilities of humans and algorithms, and these shape their views of AI (Logg et al., 2019). This theme suggests that people generally view algorithms as more objective, structured, and quantitatively driven than humans in decision‐making contexts. Additionally, algorithms are perceived as holding less negative intentionality. However, this perception of algorithms as more objective and less biased than humans could also be problematic if it results in overly reductive decision‐making processes. While human decision making is generally viewed as more nuanced, holistic, and adaptive to qualitative information, it can potentially generate biased and less objective outcomes. The mixed use of human and AI decision‐making capabilities generally improves perceptions of algorithmic use, but this is context‐dependent and varies according to individual intentions.

Theme 3: Worker attitudes towards AI

Although proponents of AI systems argue for their ability to enhance efficiency and worker effectiveness, discussions around the future of work also focus on negative, fear‐based, and threat‐focused employee attitudes towards AI. This theme highlights the drivers and outcomes of these fear‐based worker attitudes, which can impede employee adoption of AI technologies (Tong et al., 2021).
These attitudes center around proxies for employees' awareness that: (a) AI and smart forms of technology may replace them or their current job in the future and/or affect their career prospects (e.g., via AI awareness and anxiety; Lingmont & Alexiou, 2020; Suseno et al., 2022); (b) their skills may become obsolete as a result of these technologies (e.g., via perceived automation risk; Innocenti & Golin, 2022); and (c) these technologies will radically change their current work (e.g., via the threat of technological disruption; Brougham & Haar, 2020). The effects of these fear‐based attitudes are transmitted through reduced organizational commitment, diminished work and job engagement, and increased perceptions of job insecurity. Such attitudes are then associated with heightened turnover intentions, job burnout, and resistance to change and external technologies (Arias‐Pérez & Vélez‐Jaramillo, 2022; Suseno et al., 2022). However, individual‐ and organizational‐level factors can influence these effects and potentially result in increased commitment, satisfaction, and performance.

Individual level

Employees' attitudes towards AI are shaped by various individual factors. For example, those with a high internal locus of control (i.e., the extent to which individuals believe they influence events in their lives; Levenson, 1981) are likely to express intentions to retrain and upskill when faced with automation risks (Innocenti & Golin, 2022). Individual framing of AI as a stressor is also important. Ding (2021) shows that employees who frame AI use as a challenge stressor, rather than a hindrance stressor, are more likely to adopt active coping strategies and become more productive via increased work engagement. Using an ethnographic approach, Leonard and Tyers (2021) demonstrate how the extent of workers' agency, constituted by their role in the organization and individual features such as age and career stage, shapes their “imaginations of digital futures” (p. 1).
Compared to those with lower agency, employees with higher agency exhibit more positive attitudes towards technology.Several studies also identify both the positive and negative attitudes that workers can simultaneously hold regarding AI. For example, qualitative evidence from Koo et al. (2021) shows that workers hold mixed views regarding AI. They can simultaneously hold positive views that AI can assist them in their work, generate new human work tasks, and improve customer experiences. At the same time, they also develop concerns that AI can make their roles redundant. These results are consistent with the Integrated AI Acceptance‐Avoidance Model by Cao et al. (2021), suggesting that individuals' mixed views of AI reflect their consideration of its benefits and costs. The model shows that managers' attitudes towards AI are positive when performance and effort expectancies are met (i.e., that AI will help them attain their goals and is easy to use). In contrast, they can hold more negative views when accounting for personal well‐being concerns (i.e., stress and anxiety caused by AI use) and perceived threats from AI (i.e., the belief that AI's decisions may be harmful).Other research within this theme utilizes Technology Acceptance and Technology‐Organization‐Environment Models to analyze worker attitudes towards AI (e.g., Baabdullah et al., 2021). The findings indicate that workers are more likely to intend to use AI when they perceive it as useful, easy to use, and its implementation is supported by skill and knowledge development. For example, compatibility with existing organizational practices and technologies supports AI acceptance, via perceived usefulness (Chatterjee, Rana, et al., 2021). More specifically, Chatterjee, Chaudhuri, et al. (2021) identify that both trust and perceived ease of use are particularly critical for workers to improve their attitudes towards the use of AI and their intention to use it. 
Believing that AI use will improve one's performance is insufficient to ensure adoption; workers also need to believe that the technology is easy to understand and use with minimal effort (Chatterjee, Chaudhuri, et al., 2021; Chatterjee, Rana, et al., 2021).

Organizational level

Organizations that prioritize high performance work systems, which can help foster a climate of support, autonomy, and continuous learning, can reduce employees' anxiety towards AI (Suseno et al., 2022). Also, strong leader support for digital transformation can facilitate worker adoption of AI (Brock & Von Wangenheim, 2019). However, an authoritarian organizational culture may exacerbate job insecurity by signaling a lack of care and concern for vulnerable workers facing technology‐induced risks (Lingmont & Alexiou, 2020).

Theme summary

Employees who fear the use of AI often worry about the technology replacing their jobs, the potential damage to their skills and careers, and the disruption to their work processes. Such negative attitudes can lead to decreased engagement, commitment, and feelings of security at work, as well as increased resistance to new technologies and subsequent turnover. However, personal resources can alleviate these fears by increasing employees' sense of control, fostering a positive stressor framing, maximizing agency at work, and easing the use of the technology. Additionally, aligned with findings from Theme 1, organizations that implement high performance work systems and cultivate climates of learning, growth, and change readiness, with strong support from leaders, can help counteract employees' AI anxiety.

Theme 4: AI as a control mechanism in algorithmic management of platform‐based work

The rapid advancement of technology has led to the emergence of platform‐based gig work, which is changing the nature of work, performance evaluation, and employment relationships.
Articles within this theme are predominantly qualitative studies that focus on how new forms of AI technology are transforming labor processes. A major area of interest is algorithmic management and the ways that AI technologies exert control over workers, as well as workers' responses to this ‘management by algorithm’. While precarious employment is not new, AI technologies are playing an increasingly pervasive and significant role in managing gig workers' employment. These technologies can function as intermediaries that match workers to clients, capture data to rate and rank workers, allocate work, and even dismiss workers from platforms. They can also compute metrics that aid in monitoring and surveilling workers at various stages of the work process. While gig work varies, from local (e.g., food delivery) to remote (e.g., knowledge work) forms, there are commonalities in workers' experiences.

Individual level

Algorithmic management has been a defining feature of gig work, allowing platforms to exercise greater control over workers. Studies captured in this theme highlight that individual experiences of platform‐based work are generally negative. These include feelings of powerlessness and social isolation (Glavin et al., 2021), competition between workers (Heiland, 2022), reduced career competencies (Duggan et al., 2022), poor career mobility (Sun et al., 2023), and increased feelings of exhaustion and work intensification (Glavin et al., 2021; Wood et al., 2019). However, it is worth noting that despite these disadvantages, remote gig workers also identify work flexibility, task variety, and task complexity as benefits of platform‐based work (Wood et al., 2019).

The negative experiences arising from algorithmic management can be lessened or heightened by various individual‐level factors.
For example, platforms can encourage acceptance of algorithmic management approaches by fostering workers' entrepreneurial mindset, to reinforce the idea that platform work enables flexibility and being one's own boss (Reid‐Musson et al., 2020; Veen et al., 2020). This can lead workers to view the platform's control mechanisms as fair and efficient, while supporting them to be self‐reliant (Galière, 2020). Symbolic power, or the capacity to influence events and the actions of others (Bourdieu, 1991; Thompson, 1991), can also play a role in shaping worker outcomes. Emerging forms of gig worker symbolic power, such as developing in‐demand skills and a strong platform reputation through rankings systems, can improve aspects of job quality and generate better worker outcomes (Wood et al., 2019). Interestingly, the depersonalized conditions of platform work can also facilitate a form of individual identity work that provides psychological relief from these same conditions (Anicich, 2022). However, some vulnerable worker groups are more likely to experience negative outcomes from gig work, with migrant workers particularly vulnerable to exploitation (Veen et al., 2020) and control (Schaupp, 2022).Workers are not passive recipients of technology; they can activate their agency when technology is exerting a controlling influence (Bucher et al., 2021) or constraining their autonomy (Reid‐Musson et al., 2020). The control strategies used on platforms are also not absolute and gaps in both human and algorithmic oversight provide openings for workers to exert individual‐ and group‐level forms of agency (Galière, 2020; Heiland, 2022). Gig workers can respond and adapt to algorithmic management in several ways, particularly as they learn what behaviors trigger what consequences on platforms, usually arising from technological surveillance. 
Workers may comply with the algorithmic management system and adopt behaviors that are incentivized or rewarded rather than those that are discouraged or punished by the algorithm (Galière, 2020). For example, platforms can encourage workers to follow specific procedures, like delivering food through certain routes (Huang, 2022), while discouraging the use of particular language or terms via algorithmic scrutiny of online interactions and threatening workers with dismissal from the platform (Galière, 2020). However, complying with algorithmic management can generate additional cognitive and emotional work that must be performed in addition to paid tasks (Bucher et al., 2021). Workers may also ignore aspects of algorithmic management, such as by refusing work they view as unappealing (Reid‐Musson et al., 2020) or dismissing platform messaging that promotes a customer‐centric orientation (Veen et al., 2020). Alternatively, workers may engage in more extreme forms of behavior (i.e., subverting) that challenges algorithmic control and asserts their agency, such as by stealing products or manipulating data (e.g., by masking their location; Veen et al., 2020).

Group level

Although not highly prevalent in platform work due to the use of strategies that distance workers from others (Galière, 2020; Glavin et al., 2021), emerging forms of collective behavior in this context are starting to be observed. For instance, “worker voice networks” (Kougiannou & Mendonça, 2021, p. 745) facilitated by Facebook discussion boards and messaging apps can provide support and advice across platform workers and assist them in advocating for better working conditions. Additionally, the increased precarity of migrant workers can also lead to forms of solidarity, such as informal networks and self‐help groups that provide advice and influence working conditions (Schaupp, 2022).
Such online forums can also offer platform workers opportunities for positive social comparisons to confirm and support their identities (Anicich, 2022).

Organizational level

AI‐driven technologies play a critical role in the control mechanisms utilized by platforms, shaping the organization of work in various ways. First, AI provides new forms of technological infrastructure that connect with workers, allocate work, and oversee and control minute aspects of the work process, whether as background smart machinery and analytics or as specific apps used by the worker (Huang, 2022; Veen et al., 2020). For example, in a food delivery context, Huang (2022) illustrates how order dispatch systems collate and analyze drivers' characteristics (e.g., location and performance) and broader task characteristics (e.g., weather and traffic conditions) to optimally allocate work.

Second, customer ratings and other rating‐and‐ranking systems exert significant control over workers and can influence the nature, type, and amount of work they are allocated. The use of AI systems that collect and analyze vast amounts of data means that workers operate in an environment where they are managed and rated by multiple stakeholders without direct supervision, reducing their individual power over the work they receive (Wood et al., 2019).

Finally, information asymmetries (Veen et al., 2020) and information monopolies (Huang, 2022) mean that workers can be unaware of the data being collected about them, how they are being monitored, or how that information is being used (Galière, 2020). This information opacity between workers and platforms reduces workers' control over the types of tasks they perform. Workers can be unclear about why they are being allocated certain work and, in the case of food delivery drivers, consumers' addresses may be withheld, making it difficult to decide whether to accept such work (Veen et al., 2020).
It can also mean that the performance management systems applied to workers, and the metrics they are assessed against, are unclear. These factors ultimately lead to experiences of diffuse surveillance and tracking, heightening the platform's control over workers (Galière, 2020).

Theme summary

The gig economy relies heavily on AI technologies to facilitate work and provide mechanisms for control, with platforms using technological infrastructure to connect workers to tasks, oversee task performance, and employ rating‐and‐ranking systems. The potential for opacity in data collection and analysis further diminishes workers' control over their work. Platform work can generate many negative outcomes for workers, alongside some benefits that can be enhanced by workers' feelings of entrepreneurialism and autonomy in their gig work and through the accrual of skills‐ and reputation‐based power. As gig workers become more aware of the impact of algorithms on their work, they can activate strategies of compliance, indifference, or subversion towards these technologies.

Theme 5: Labor market implications of AI use

The impact of AI on jobs is the focus of much scholarly and practitioner attention. This includes questions about whether automation will displace workers, which skills and training will be essential in workplaces, and how this will affect the future of work. This theme highlights the labor market implications of AI use and the changing nature of employment and skill compositions in the workforce. As new technologies are implemented, understanding the skills workers need to use them is critical (Zuboff, 1988). It is worth noting that the articles reviewed in this theme do not support dystopian views of mass human unemployment resulting from AI use (see Lloyd & Payne, 2019).
Rather, they present a more nuanced perspective on the overall employment effects of AI, including the changes in the number and types of jobs in the labor market and the types of skills in demand.

Individual level

The impact of AI is multifaceted and depends on the type of technology implemented and on individual differences among workers. (We note that these technology implementation choices are often determined at the organizational level; however, we position these findings at the individual level because the cited work focuses on individual‐level data.) Fossen and Sorgner (2022) find that labor‐displacing technologies (i.e., those that replace humans in a work process) reduce labor demand and result in slower wage growth and a greater likelihood of workers having to change occupation or become unemployed. On the other hand, labor‐reinstating technologies (i.e., those that create new human tasks in a work process) increase wage growth and promote stable employment. Different workers are more susceptible to experiencing either set of employment consequences. In the face of labor‐displacing technologies, highly educated workers tend to adjust more effectively by changing occupations or moving into self‐employment; women are less likely to adjust effectively, and lower‐educated workers are more likely to become unemployed (Egana‐del Sol et al., 2022; Fossen & Sorgner, 2022). Machine learning forms of AI are affecting workers across education levels, with more highly educated workers more likely to adjust by changing occupations and lower‐educated workers more likely to become unemployed (Fossen & Sorgner, 2022).

AI use also affects workers' required skills, with upskilling necessary to adapt effectively to technological changes. Industry experts and policymakers believe that most jobs will require basic digital skills that can be acquired through everyday exposure to technology (Lloyd & Payne, 2019). However, Jaiswal et al.
(2022) identify five key areas for employee upskilling in the multinational IT sector: (a) data analytic skills (i.e., applying statistics, sourcing relevant data, and using tools such as Python); (b) digital skills (i.e., understanding cloud automation and cybersecurity); (c) complex cognitive skills (i.e., a design thinking mindset, sensemaking of data, and extracting insights); (d) decision‐making skills (i.e., adopting evidence‐based approaches); and (e) continuous learning skills. In their assessment of online job vacancies in the United States, Acemoglu et al. (2022) find that firms with greater AI exposure have increasing demand for “engineering, analysis, marketing, finance, and information technology” skills (p. 320). Walkowiak (2021) also shows how AI can complement the skills of historically marginalized workers, such as neurodiverse employees. For example, skill sets associated with concentration and focus, pattern identification, and creative thinking may productively complement the skills demanded in certain IT jobs. New technologies may also provide enhanced assistance to neurodiverse workers, such as by helping to monitor and alleviate anxiety.

Holm and Lorenz (2022) also demonstrate that AI's effects on workers depend on how the technology is used in a job (i.e., either by providing orders to human workers or by providing information for human worker decision making) and on the skill level of workers. For high‐ and mid‐skilled workers, using AI daily to support decision making can enhance their skills by facilitating high‐performance work practices, such as teamwork and job rotation. When AI delivers orders to workers, however, this constrains skill use and degrades job quality.
Across all skill levels, AI used in this way also heightens work pace constraints; it decreases autonomy for high‐skilled workers, while generating more monotony and fewer learning opportunities for mid‐skilled workers.

Group level

Beyond individual‐level skills, Akhtar et al. (2019, p. 252) emphasize the importance of a multidisciplinary skillset for big data savvy (BDS) teams, which combines “computing, mathematics, statistics, machine learning, and business domain knowledge.” These teams are responsible for building AI technologies and/or generating data‐driven insights that can enhance service delivery, innovation, and productivity. The authors argue that this diverse skillset within cross‐functional teams is critical and difficult to replicate, creating a unique bundle of resources. We suggest that this skillset will also help create the supportive organizational and task conditions, identified in Theme 1, that facilitate effective human–AI collaboration.

Organizational level

A firm's exposure to AI technologies has implications for its workforce demands. Yang (2022) shows that a firm's involvement in the development of AI technologies increases its demand for workers, with a 3.5% increase in worker demand observed in Taiwan's electronics industry, which consists of organizations that create components for AI products. This increase reflects job creation rather than job destruction. However, the positive association between AI and employment varies across skill levels, with evidence indicating an increase in demand for high‐skilled workers (i.e., those with graduate qualifications) and a decrease in demand for mid‐skilled workers (i.e., those who are college‐educated). This is consistent with the concept of job polarization, whereby technology tends to substitute for mid‐skilled workers (Goos & Manning, 2007; see also Fossen & Sorgner, 2022, for counterarguments). Using online vacancy data in the United States, Acemoglu et al.
(2022) show that AI‐exposed establishments (i.e., firms where workers' tasks align with AI's current capabilities) tend to accelerate job postings for AI‐related vacancies and reduce postings for non‐AI‐related positions. They also find a decrease in demand for previously sought skills alongside a heightened demand for new skills, with some evidence that these firms are reducing worker hiring overall. However, effects on employment at the occupational or industry levels were not detectable, possibly because the impact of AI and its diffusion remain relatively small across the full labor market.

Theme summary

The impact of AI on the labor market and on workers' skills and job experiences is heavily influenced by what the technology is designed and implemented to do. If AI is designed and deployed to replace or give orders to workers, it reduces labor demand, wage growth, skill use, and job quality. Conversely, when AI is implemented to create new human tasks or to support workers by providing decision‐making information, it promotes employment, wage growth, skill use, and improved work practices. In the face of AI's displacement effects, men and more highly skilled and educated workers are likely to adjust better. Additionally, workers will likely need a range of higher‐level cognitive skills given AI deployment. A firm's exposure to AI technologies also affects its demand for workers.

FUTURE RESEARCH DIRECTIONS: OPPORTUNITIES AND CHALLENGES

Our integrative review highlights the significant implications of AI for the future of work at individual, group, and organizational levels of analysis. Based on the themes extracted across our review, and by cross‐pollinating their ideas, we offer five research pathways that advance the scope of organizational behavior research on AI.
A key theme that emerged across our review is the importance of creating supportive, threat‐reducing, and resilience‐ and confidence‐building conditions and uses of AI to enhance positive experiences and adoption by workers. This overarching theme informs our recommendations across each pathway. Table 2 summarizes our future research agenda by outlining illustrative factors and mechanisms and offering specific research questions.

Table 2. Future research pathways: Overview.

Pathway 1: Using AI to facilitate worker well‐being and satisfaction
- Individual level. Relevant factors (potential explanatory mechanisms): job design and job demands‐resources (generating social capital, meaningfulness, positive affect). Outcomes: reduced occupational stress and burnout; job satisfaction; enhanced health and well‐being outcomes.
- Group level. Relevant factors: technology frames; use of liminal spaces (enhancing worker control over AI use). Outcomes: team affective processes (e.g., team cohesion); team acceptance of AI.
- Organizational level. Relevant factors: organizational climate of psychological safety; organizational safety culture. Outcomes: AI acceptance/adoption.
- Example research questions: (1) What resources does AI use generate for workers? How do those resources influence social and work processes towards employee well‐being? (2) How can AI be deployed to enhance human flourishing at work? (3) How do organizations create cultures of safety to enhance employee comfort to experiment with AI for optimal use?

Pathway 2: Designing effective human–AI collaboration systems
- Individual level. Relevant factors: technology‐specific perceptions of human versus AI capabilities (perceived value or uniqueness of human skills); individual differences (cognitive flexibility); technology characteristics, including system design/quality (enhancing task engagement, perceived ease of use). Outcomes: effective employee–AI collaboration; employee productivity.
- Group level. Relevant factors: external cognition; team/work group characteristics. Outcomes: team cognitive processes (e.g., knowledge sharing); collective intelligence.
- Organizational level. Relevant factors: learning organization culture/climate; exploration versus exploitation strategies for AI use. Outcomes: organizational effectiveness and innovation.
- Example research questions: (1) What are examples of novel job designs that explicitly incorporate human and algorithmic capabilities (see also Parker & Grote, 2022; Wang et al., 2020)? (2) How can AI be tailored to the different needs of workers with varied cognitive skill sets to optimize productivity? (3) How are teams' cognitive processes enhanced or diminished by AI use? (4) What other unexplored factors, particularly bundles of supportive work practices, increase employees' acceptance of, and adaptability to, working alongside AI systems?

Pathway 3: Examining AI‐supported leadership
- Individual level. Relevant factors: technology‐specific perceptions of human versus AI capabilities. Outcomes: effective leader–AI collaboration; leader productivity.
- Group level. Relevant factors: AI used for team control. Outcomes: team acceptance of AI.
- Organizational level. Relevant factors: hyper‐rationalist culture. Outcomes: leader delegation to AI; leader automation bias.
- Example research questions: (1) What job functions can/should be delegated to AI systems? (2) What types of work environments are receptive to AI‐supported leadership? (3) What factors promote or inhibit leader use of AI to support their work (e.g., via delegation)?

Pathway 4: Using AI to promote fairness in organizational processes and outcomes
- Individual level. Relevant factors: perceptions of distributive, procedural, interactional, and restorative justice. Outcomes: trust in AI.
- Group level. Relevant factors: vulnerable/minority group technology perceptions. Outcomes: trust in AI; use of AI.
- Organizational level. Relevant factors: organizational ethical climate/ethical AI climate. Outcomes: organizational fairness perceptions.
- Example research questions: (1) How can AI be used to enhance justice perceptions? (2) How do minority groups perceive the workplace use of AI? What benefits and costs do they identify? (3) How, and to what outcomes, are employees assessing their organizations' ethical AI climate?

Pathway 5: Incorporating multilevel thinking in AI research
- Multilevel. Relevant factors and outcomes: N/A.
- Example research questions: (1) What methods from disciplines such as computer science could advance AI research in management? (2) What theoretical lenses can be applied to examine mechanisms and boundary conditions for cross‐level effects of AI?

Pathway 1: Using AI to facilitate worker well‐being and satisfaction

Our review emphasizes the importance of employee perceptions and attitudes towards AI and how these influence their responses to its use. This points to future research opportunities that examine how the integration of AI systems can support employee health and well‐being. This would enable the identification of precursors to positive employee attitudes towards AI, rather than focusing on negative ones (Theme 3). Research on AI in cognate disciplines such as psychology, counseling, and computer science has already initiated concerted efforts to investigate how AI systems can promote health and well‐being, including the expansion of digital counseling delivery, particularly in resource‐constrained environments (Pataranutaporn et al., 2021). To build on this work, we propose integrating ideas from the positive technology domain, an area within the human–computer interaction field. This domain focuses on how technology development and deployment can enhance human well‐being and flourishing by generating positive experiences from technology use, such as by supporting self‐actualization, positive affect, and enhanced human connection (Brivio et al., 2018; Calvo et al., 2016).
By adopting this approach, we can identify ways to design and implement AI systems that support employee well‐being and contribute to positive attitudes towards AI.

Individual level

Our review indicates that AI‐enabled workplaces can create new forms of worker power and resources that improve workers' experiences of working with AI or in the context of algorithmic management, as demonstrated by Themes 1 and 4. Such findings could guide efforts to scale the benefits of AI use across a wider range of workers. For example, Theme 4 shows how platform workers with specific skills and reputational capital can benefit from working on these platforms (e.g., Wood et al., 2019). Future work could examine whether this relationship is generated through better social capital or higher work engagement (Brivio et al., 2018). Meanwhile, Theme 1 demonstrates how workers whose jobs are enhanced by AI use, such as through access to better quality data and insights, are also likely to accrue better knowledge resources compared to others. Future research could examine whether this relationship is linked to well‐being through the mechanism of improved meaningfulness of work (Bankins & Formosa, 2023). Furthermore, AI's ability to reduce physical and mental work demands could also connect to improved well‐being via positive affective experiences (Brivio et al., 2018; Nazareno & Schiff, 2021). Future research could further apply the job demands‐resources model (Demerouti et al., 2001), which proposes that jobs are characterized by both demands (i.e., physical, psychological, and other aspects of the job that require effort and can lead to negative outcomes such as role overload and burnout) and resources (i.e., physical, psychological, and other aspects of the job that stimulate growth and support goal accomplishment).
Pinpointing the potential range of resources that accrue to workers from AI use, and their antecedents, as well as examining the types of mechanisms noted above through which such resources may support employees' well‐being, offers one way to identify positive psycho‐social outcomes from the technology's deployment.

Scholars could also further examine the potential for AI‐generated conversational agents to facilitate the socialization of workers vulnerable to distress (e.g., newcomers, managers, and frontline employees), or how the frequency and content of digitally mediated interactions can mitigate loneliness or stress at work, particularly during significant organizational change or broader societal crises. From a positive technology perspective, AI used in this way could foster important social connections that support well‐being at work (Brivio et al., 2018; Ocampo et al., 2022).

Group level

Themes 1 and 4 underscore the importance of occupational identity and collective voice mechanisms in empowering workers to experiment with task and role adjustments alongside AI and to voice their concerns regarding algorithmic management, while also providing a sense of collective support. Future research could explore specific group‐level attitudes or behaviors in teams that facilitate the assimilation of new technologies and support positive well‐being. This is crucial because previous research shows that group‐level sensemaking is important in shaping how technology affects workers. Under the umbrella of the social validation context, which refers to the ways in which the social context supports sensemaking of change (Selenko et al., 2022), future research could investigate how groups' technology frames influence how they make sense of, and either integrate or resist, the introduction of AI. Technology frames refer to a group's assumptions about, and knowledge of, a technology, which can shape its responses to the technology's use (Orlikowski & Gash, 1994).
Investigating such frames in the context of AI use could further explain how and why employees in certain groups respond to AI in particular ways.

Connecting to the types of voice mechanisms identified for gig workers, the social validation context can also involve the development of liminal spaces, which provide “transitional, safe spaces” for employees to learn, experiment, and develop competence with AI technologies via a sense of autonomy or control over their AI use (Selenko et al., 2022, p. 276). As Bader and Kaiser (2019) point out, a perceived imbalance between human and algorithmic decision input can result in employee workarounds, data manipulation, and poor performance. Thus, exploring how liminal spaces may convert otherwise dysfunctional behavior into constructive employee experimentation with the technology to optimize its use could be a promising direction for future research.

Organizational level

Positive technology scholarship highlights the importance of organizational cultures of safety in facilitating the use of technology for positive human outcomes. Coetzee (2019, p. 319) asserts that a psychologically safe organizational environment can encourage workers to “engage in provisional tries” aimed at exploring how technology can enhance their work performance. In such an environment, workers are confident that they will not face ridicule for experimenting with AI, that minor errors will be tolerated, and that support is available when applying AI in their work. Brivio et al. (2018) similarly suggest that a broad organizational safety culture that prioritizes worker and public safety at all levels can help reduce the likelihood of technology‐related stress.
Therefore, future work could position organizational cultures of safety as an important contextual enabler of AI use for employee thriving and well‐being.

Pathway 2: Designing effective human–AI collaboration systems

Our review suggests the need to further specify and establish the conditions that promote human–AI complementarity in ways that benefit both employees and organizations. Despite workers' fears of job replacement due to AI (Theme 3), replacement is not the only outcome of the technology's deployment (Theme 1). Instead, there are clear calls for organizations to use AI to complement employees, rather than to replace them (Brynjolfsson, 2023). To achieve this, we propose multiple factors at different levels of analysis that can bolster employee–AI collaboration.

Individual level

Future research is needed to examine how workers' perceptions of human versus AI capabilities affect their collaboration with, or resistance to, these technologies (i.e., connecting Themes 1 and 2). For example, studies could compare the likelihood of employees working alongside an AI that they perceive as encroaching on their human capabilities versus collaborating with an AI that remains within its perceived capability boundaries. These perceptions may operate through mechanisms such as the perceived value or uniqueness of human skills for certain tasks (Granulo et al., 2021). As AI continues to advance, including in its generative forms, workers' views of human and AI capabilities may also change. Generative AI combines powerful models with highly intuitive interfaces that make it easier to use (Manning, 2023). However, such forms of AI may also heighten workers' fears of technology's encroachment on human skills and threaten perceptions of its appropriate use (via threats of human substitution; Verma & Singh, 2022). This could amplify both positive and negative attitudes towards AI (Theme 3), generating barriers to effective human–AI collaboration (Theme 1).
Depending upon its context of deployment, the use of generative AI may also heighten workers' awareness of the ongoing value of human skills within a work process (possibly via perceptions of human skill value), thereby enhancing human–AI collaboration. While employees' AI skills and familiarity also support AI use (Theme 1), the increasingly intuitive and user‐friendly interfaces of generative AIs may ultimately lessen such skill demands (see Manning, 2023, for such an example).

We believe that these questions necessitate conceptual and empirical efforts to address how such technology can replace or streamline existing tasks performed by humans and how humans can perform higher‐order cognitive tasks with the assistance of, or in collaboration with, AI. To expand on the Theme 5 insights regarding workers' AI‐relevant skillsets, research could investigate the role of cognitive flexibility, which enables individuals to switch between tasks or stimulus sets quickly and efficiently (Feng et al., 2020). This flexibility may allow workers to effectively integrate and situate AI output with other information sources across different tasks, and so may be an important mechanism for explaining why workers with different performance levels extract different value from AI use (as seen in Theme 1; e.g., Luo et al., 2021). Organizational behavior researchers could also integrate evidence from the information systems and human–computer interaction literatures to expand on work in Theme 1, which highlights the importance of AI system quality for worker behavior (Nguyen & Malik, 2022b). Future studies could explore how different interfaces, types, and quality of AI systems encourage employee–AI collaboration.
For example, the enhanced intuitiveness of some generative AI interfaces could improve perceived ease of use and task engagement to support human–AI collaboration, whereas poorly designed and difficult‐to‐use systems of variable quality may limit these mechanisms.

Group level

Our review indicates a gap in research on AI use at the group level of analysis, especially regarding the mechanisms and consequences of AI use within and across work groups. To address this, we suggest examining the concept of external cognition, which concerns how objects, such as technologies, can support team cooperation and performance (Fiore & Wiltshire, 2016). Future research could explore how AI systems can facilitate team tasks through offloading (i.e., by acting as a repository of information) and scaffolding (i.e., by supporting team member interactions to produce outcomes) (Fiore & Wiltshire, 2016). Such investigations can help elucidate how AI is used by teams in different ways and how it supports team‐level cognitive processes associated with knowledge acquisition and sharing.

Our review also identifies barriers and opportunities for integrating AI with individuals nested within work teams. For instance, Zhang et al. (2021) suggest that AI poses a threat to the efficiency of high‐performing teams but boosts the performance of low‐performing teams. As such, future work can investigate how variations in team characteristics and experiences with AI influence team‐level performance, and whether these effects depend on the team context, such as team membership status, perspective, and commitment. One promising approach is to apply the categorization‐elaboration model (CEM; van Knippenberg et al., 2004) to explore how AI use interacts with diverse work group characteristics.
Specifically, based on the CEM, group dimensions such as social category diversity (i.e., demographic differences) and informational/functional diversity (i.e., differences in job‐related, educational, and skill‐based attributes) could clarify how AI may be helpful or harmful for specific groups.

Organizational level

Theme 1 of our review stresses the importance of supportive conditions, such as those relating to organizational climate/culture and aligned work systems, for facilitating human–AI collaboration. Evidence suggests that bundles of integrated work practices, rather than disconnected and potentially contradictory ones, are effective in increasing readiness for AI implementation (e.g., Suseno et al., 2022). Therefore, it would be instructive to examine what those configurations of bundled practices could entail. For example, it is largely established that developing employees' skills in using a technology is important for user acceptance (Zuboff, 1988), and Bankins and Formosa (2021) show how a telecommunications company's culture of ongoing learning helped employees become more receptive to AI technology specifically. Future work could investigate how the integrated practices within learning organizations, which emphasize experimentation, learning from experience and from others, and knowledge transfer, could further promote employee–AI collaboration at both individual and group levels (Basten & Haamann, 2018).

As generative AI becomes increasingly prevalent, more questions arise about the extent to which algorithms can replace or assist human workers in performing their tasks. To examine the impact of AI use at the organizational level, we propose applying the theory of exploration and exploitation (March, 1991). According to Levinthal and March (1993, p.
105), exploration constitutes “the pursuit of new knowledge, of things that might come to be known,” while exploitation refers to “the use and development of things already known.” In the context of AI use, exploration could involve finding new ways for AI and human teams to collaborate, developing novel technological approaches to organizational problems, and identifying unique solutions. Exploitation strategies for AI could focus on streamlining existing solutions, automating routine tasks, and improving current systems and processes.

Pathway 3: Examining AI‐supported leadership

Our review suggests a need to broaden our understanding of leaders' roles in deploying AI systems, particularly in relation to decision‐making, job‐designing, and feedback‐giving practices. Parry et al. (2016) urge organizational scholars to reflect on the potential benefits and consequences of AI replacing critical aspects of tasks performed by leaders. As organizations increasingly adopt AI, the challenge for scholars is to help leaders not only reskill employees for AI use but also explore how the technology can enhance their leadership functions. We suggest that leaders need to be proactive in identifying opportunities where AI can complement and support their roles and, in doing so, create a culture of trust and collaboration between employees and technology.

Individual level

Future research can deepen our understanding of how AI‐supported decision systems can assist human leaders in carrying out various functions such as setting goals, facilitating creativity, identifying and correcting decision errors, and increasing team–machine and human–machine collaboration. Job design perspectives can be particularly useful in unpacking these interactions.
While delegation to AI can improve managerial perceptions of decision quality (Keding & Meissner, 2021), emerging research suggests that employees may respond differently to humans versus AIs undertaking leadership functions (de Cremer, 2020). For instance, Lanz et al. (2023) note that employees are less likely to comply with unethical instructions from an AI than from a human supervisor. Future work could examine how those individual‐level beliefs, held by leaders, employees, and stakeholders, shape the use and acceptance of AI for certain leadership activities and ultimately impact leader–employee influence processes. Additionally, future research can assess how AI‐based leadership decision systems can support human leaders in carrying out complex functions in real time. More specifically, scholars can explore how the interactivity and adaptability of AI decision systems assist managers in providing immediate feedback, and how AI systems can support leaders in correcting errors in their judgment, especially during critical incidents.

Group level

As Theme 4 makes evident, gig work platforms use AI as a means of control and surveillance over their workers. Furthermore, emerging evidence shows that new technologies are also facilitating leaders' use of similar control mechanisms for other types of workers, such as knowledge workers, who are increasingly working flexibly and remotely (Kellogg et al., 2020; Klosowski, 2021). While algorithmic management is clearly shaping the experiences of platform workers, it remains to be seen how effective, and indeed possible, these forms of control are in broader work environments. For example, in the context of home credit agents' work, Terry et al. (2022) note that work activities related to tacit judgment and emotional labor are difficult to quantify and can fall outside the gaze of algorithmic management.
Future research could explore how leaders use algorithmic management, people analytics, and control mechanisms in their work teams across industries and job tasks to gain a better understanding of the efficacy and outcomes of these approaches.

Organizational level

The growing use of AI has led to concerns about whether it will create a hyper‐rational organizational culture that prioritizes AI output over human contributions because AI is seen as efficient, objective, and capable of structured, unbiased analysis (per Theme 2, and see Kahneman, 2018). Therefore, it is important to examine how this culture affects leaders' attitudes and behaviors towards AI, such as their tendency for automation bias (i.e., overreliance on machine output), the degree to which they delegate to AI, and the potential consequences of such actions.

Pathway 4: Using AI to promote fairness in organizational processes and outcomes

A crucial area for future research is the potential of AI for promoting diversity and fairness within organizations. Although AI can enhance task efficiency, researchers can expand on its outcomes by exploring how AI can mitigate long‐standing issues concerning fairness, equity, and transparency. The literature on ethical and responsible AI is growing, but organizational behavior researchers have yet to fully leverage the field's abundance of organizational justice and fairness constructs to advance a theoretically driven and systematic design agenda for fair and responsible AI at work.

Individual level

Robert et al. (2020) argue that organizational scholars must investigate how AI can be deployed to promote fairness in managerial practices, such as hiring, promotion, and compensation decisions (see Bankins et al., 2022; Lee, 2018). Accordingly, we recommend that future research examine the role of AI through the lens of organizational justice to determine how AI‐based tools can either support or threaten fairness and transparency (Colquitt, 2001).
One possible approach is to explore how AI can be used to enhance employees' perceptions and experiences of fair pay and rewards (distributive justice), transparency in dispute resolution processes (procedural justice), and respectful treatment (interactional justice). Given individuals' perceptions of human and AI capabilities (Theme 2), these effects may depend on the specific tasks for which AI is employed. Additionally, it would be valuable to examine the role of AI in supporting restorative justice, which entails actions to remedy harms, rebuild relationships, and rectify injustices (Wiseman & Stillwell, 2022). For instance, researchers could explore novel ways that AI can assist employees in restoring their trust in and commitment to their leaders and organizations following incidents of mistreatment or significant disruption. Since such experiences can elicit intense emotions, the perceived objectivity of AI (as discussed in Theme 2) may aid in mediating conflict resolution efforts.

Group level

Across our themes, we found that AI can have a significant impact on digital divides, whereby technologies can reinforce existing socio‐economic disparities between advanced and developing economies, metropolitan and rural areas, and more or less privileged individuals (Kitsara, 2022). For example, our review shows that AI is already benefiting highly skilled and highly educated workers, leading to better‐paying and better‐quality jobs with enhanced skill sets. Conversely, lower‐skilled and lower‐educated workers are more likely to become unemployed due to the integration of AI into the workplace, thereby exacerbating pre‐existing inequalities (Fossen & Sorgner, 2022). These findings raise a pressing set of research questions: How does the implementation of AI either intensify or alleviate digital divides? Under what circumstances does AI exacerbate inequalities, and when does it mitigate them?
Equally important is the examination of how vulnerable groups (Restubog et al., 2021, 2023) perceive and experience the fairness—or lack thereof—of AI utilization.

Our review also shows how members of certain groups (e.g., women) may perceive algorithmic decision making as fairer than human decision making due to historical biases perpetrated by dominant social groups (e.g., Pethig & Kroenung, 2022). These findings align with emerging research (see Lee & Rich, 2021) that explores how marginalized groups view AI based on ingroup and outgroup perceptions and beliefs about its potential to rectify systemic biases. We recommend conducting additional group‐level analyses to determine whether minority groups in the workplace are benefiting from the implementation of AI or whether they feel particularly vulnerable to its negative impacts.

Organizational level

Owing to the growing research focus on ethical AI—defined as “the fair and just development, use, and management of AI technologies” (Bankins & Formosa, 2021, p. 60)—our review demonstrates that people can identify fairness dimensions of organizational AI use. To further extend Theme 1, we suggest that future work investigate how an organization's perceived ethical climate or perceived (un)ethical approaches to AI use can affect employees' willingness to collaborate with these technologies. Fairness heuristic theory suggests that people make assessments of their fair treatment by a given institution, which then shapes their interactions with it (Lind, 2001) and potentially reduces their apprehension. With the increasing prevalence of AI, such fairness perceptions are likely to be influenced by organizations' approaches to ethically sensitive uses of the technology, such as for employee surveillance and performance management, as well as by whether organizations explicitly adopt ethical AI principles and participatory approaches to AI design (van den Broek et al., 2021).
Given increasing community awareness that when AI systems fail they potentially fail at scale, perceptions of an organization's ethical AI use are likely to become more widespread and could significantly affect how employees utilize the technology (see also da Motta Veiga et al., 2023).

Pathway 5: Incorporating multilevel thinking in AI research

Our review and future research recommendations lay the foundation for a multilevel conceptualization of AI in the workplace. Although AI implementation inherently involves multilevel processes, these have not yet been fully explored in the organizational behavior literature. While individual‐level data points have been used to make assumptions about the implications of AI in management research, future work must examine whether, how, and to what extent organizational‐ or group‐level factors drive individual‐level attitudes about AI use and its consequences. We believe that this is a necessary pursuit, as relationships found at one level of analysis may exert stronger, weaker, or even reversed effects at different levels of analysis (Ostroff, 1993).

We therefore encourage scholars to adopt a multilevel perspective to uncover a range of insights that may currently be overlooked. Conceptually, scholars must take a theory‐driven approach to capture the antecedents, boundary conditions, mechanisms, and outcomes of AI implementation at various levels of analysis. This is consistent with the suggestion of Chen et al. (2005) that applying a multilevel theory perspective requires an explicit assessment of homology, whereby similar relationships exist between parallel constructs across various levels of analysis. If the consequences of AI are homogeneous across levels of analysis, this contributes to the breadth of AI use (e.g., where AI elicits benefits at the individual, group, and organizational levels).
However, if the consequences of AI are not homogeneous across levels, which several themes in our review suggest (e.g., AI elicits both advantageous and maladaptive consequences across levels), then there is a need to theorize how and when moderators and mediators operate at each level of analysis. For example, Bankins and Formosa (2023) illustrate how, in a call center setting, AI can facilitate managerial efficiency by assessing employee performance through AI‐supported software that analyzes conversational data. This helps managers identify areas for improvement without manually listening to hundreds of calls. However, while AI supports managerial efficiency, call center employees may experience a lack of autonomy due to AI‐supported monitoring and surveillance. In what follows, we offer suggestions for how future research could adopt multilevel theorizing and connect analyses across levels, extending beyond our suggestions of salient factors within levels, and then outline multilevel method options to investigate these relationships.

Multilevel theorizing

To connect analyses across levels, it is necessary to utilize theories that explicitly account for multilevel phenomena. Our review's main findings suggest that individual resources, such as tenure, skills, and reputational power (Themes 1, 3, and 4), and organizational resources, such as a supportive culture (Themes 1 and 3), can enhance outcomes from employees' interactions with AI. To facilitate multilevel theorizing of AI's workplace effects, we recommend utilizing conservation of resources (COR) theory. This theory posits that people strive to protect and gain resources while minimizing resource threat and loss (Hobfoll et al., 2018). Chen et al.
(2015) integrate the crossover model with COR theory to examine how resource exchange occurs across levels, such as from organizational to group to individual levels, providing the conceptual tools to theorize and then to empirically test multilevel relationships. Crossover refers to the ways that positive or negative states (including resources) transmit from, for example, a person to a wider group or vice versa, via mechanisms such as empathy (direct crossover), specific forms of interaction (indirect crossover), or experiences of shared stressors (Westman, 2001).

In the context of AI at work, researchers can leverage the multilevel factors identified in our review to investigate whether negative individual‐level experiences of AI (e.g., via excessive job demands) transmit to the team, via empathy or shared stressor experiences, and whether this process is exacerbated by aspects of the organizational context (e.g., an authoritarian culture). Such conditions would likely create depleted resource passageways, reflecting an environment that threatens and undermines resource repertoires at multiple levels of analysis. Conversely, enriched resource passageways would likely create different outcomes. For example, organizational resource investment in fostering learning or a safety culture could offer supportive resources that generate positive individual‐level emotional states. Through enhanced thought‐action repertoires generated by these positive emotions (see Chen et al., 2015; Fredrickson, 2001), employees may experiment with novel ways to use the technology, which can positively shape group‐level attitudes through their technology frames or use of liminal spaces.

By utilizing a COR perspective, researchers can uncover the multilevel conditions that promote (or diminish) resource gains from AI use and lead to enriched (or impoverished) resource passageways, which in turn improve (or reduce) the likelihood of successful deployment and use of AI at work.
Such research can establish the multilevel resource combinations that facilitate either “AI receptive” or “AI resistant” environments (Bankins & Formosa, 2021). Overall, utilizing multilevel theorizing and identifying ways to link analyses across levels can help provide a comprehensive understanding of AI's implications at work.

Multilevel methods

Methodologically, there are various approaches available to conduct the multilevel research proposed above. Our review reveals that organizational scholarship on AI is primarily based on single‐sourced and cross‐sectional designs, which provide limited insight into the robustness of AI implementation and its consequences at different levels of analysis. While single‐level studies remain valuable, we believe that cross‐level longitudinal field investigations, which examine AI adoption or aversion, would broaden our understanding of AI's effects at work. To achieve this, we suggest following the recommendations of Hoffman et al. (2019) and Mathieu and Chen (2011) by carefully considering: (a) the level of theory (i.e., the focal entities to which generalizations are intended to apply); (b) the level of measurement (i.e., the entities from which data are collected); and (c) the level of analysis (i.e., the level at which data are analyzed). It is critical to make these distinctions explicit, as Short et al. (2008) argue that clustering employees based solely on conceptual and analytic convenience, such as by the groups, organizations, or supervisors they report to, can compromise the reliability of multilevel findings. Further, we suggest that scholars account for temporal issues associated with multilevel relationships in AI deployment.
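As a minimal, purely illustrative sketch of these level distinctions (the data and variable names below are hypothetical, not drawn from our review), researchers can decompose individual-level survey responses, such as attitudes toward AI, into within-team and between-team variance and compute ICC(1), a common preliminary check on whether team membership is a substantive level of analysis rather than a mere analytic convenience:

```python
# Hypothetical example: employees nested in teams respond to an AI-attitude
# survey. The level of measurement is the individual; ICC(1) indicates how
# much variance in responses is attributable to the team level.
import numpy as np

rng = np.random.default_rng(7)
n_teams, n_per_team = 30, 10

team_effect = rng.normal(scale=1.0, size=n_teams)        # team-level variation
scores = team_effect.repeat(n_per_team) + rng.normal(    # individual responses
    scale=1.0, size=n_teams * n_per_team)
teams = np.arange(n_teams).repeat(n_per_team)

# One-way ANOVA components: between-team and within-team mean squares.
team_means = np.array([scores[teams == t].mean() for t in range(n_teams)])
msb = n_per_team * team_means.var(ddof=1)
msw = np.mean([scores[teams == t].var(ddof=1) for t in range(n_teams)])

# ICC(1): proportion of total variance attributable to team membership.
icc1 = (msb - msw) / (msb + (n_per_team - 1) * msw)
print(f"ICC(1) = {icc1:.2f}")
```

A non-trivial ICC(1) here would signal that individual responses are not independent within teams, so single-level analyses would be misspecified and a multilevel model (e.g., a random-intercept model with team as the grouping level) is warranted.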
Workers' perceptions of and attitudes towards AI, and subsequent individual‐, group‐, or organizational‐level outcomes, are not static phenomena and will likely change over time.

IMPLICATIONS FOR PRACTICE

Our review has important practical implications, showing how AI can generate a range of opportunities and challenges across multiple levels of analysis. The successful implementation of AI in the workplace depends on a range of factors that influence how workers and organizations respond to new technological developments. To support effective deployment of AI systems and mitigate potential harms, we offer several recommendations for employees and managers.

Individual level

Our findings show that employees can hold both positive and negative perceptions of AI systems, depending on the benefits and risks associated with their implementation. To foster enthusiasm and interest in AI systems among employees, managers can use positively framed messages when introducing AI‐related topics. Chuan et al. (2019) suggest that when presented in this way, AI's benefits (e.g., for improving work quality and well‐being) are viewed with greater specificity. In addition, to navigate fast‐paced technological changes, managers must address employees' potentially negative expectations of AI use by providing resources such as on‐site technology assistants and instructional materials, and by offering formal training to socialize employees before and during the implementation of AI systems (Makarius et al., 2020).

Group level

Empirical findings suggest that incorporating AI into business teams can enhance productivity (Wilcox & Rosenberg, 2019). As such, managers must carefully identify not only the types of tasks but also the types of groups that are most likely to benefit from human–AI collaboration. In other words, it is crucial for managers to determine which tasks are best suited to AI and which employees would benefit most from working with AI systems.
This is essential to foster relational coordination and trust between employees and AI, while ensuring that job functions remain streamlined. In addition to task types, managers should consider group characteristics such as size, dynamics, and identity, which may influence the level of AI adoption and collaboration (Parry et al., 2016).

Organizational level

Our review underscores that successful AI implementation involves bundling work practices and support mechanisms together, rather than treating the technology as a standalone solution. High‐performance work systems, leader role modeling, and aligned HRM practices are effective in facilitating workers' use of AI (Pemer, 2021; Suseno et al., 2022). To encourage employees to use AI productively, organizations must also empower them by providing opportunities to build confidence and task mastery with AI systems. For instance, Chhillar and Aguilera (2022) assert that applying appropriate governance modalities (e.g., norms and laws) can harness AI decision making across individuals and businesses. Encouraging interactions between employees and AI that promote productivity and task enjoyment can also be effective, such as Amazon's practice of allowing warehouse employees to paint their collaborative robots to give them personalities, thereby highlighting task and social interdependence (Gonzalez, 2017). To promote AI acceptance, employees also need to feel that their value and social status are higher than those of AI systems (Kolbjørnsrud et al., 2017), making it important to consider the impact of AI on employees' sense of job security and status. By implementing AI systems in ways that complement and expand, rather than replace, workers' knowledge and skills, organizations can help to reduce anxiety and build employees' trust in the technology. Overall, successful AI integration involves a comprehensive approach that considers both technological and human factors.
By creating supportive work environments and empowering employees, organizations can unlock the full potential of AI systems while minimizing negative consequences.

CONCLUSION

As AI technologies continue to pervade organizations, their impact on how workers navigate their tasks, roles, and social connections cannot be overstated. Our comprehensive review of empirical research into AI workplace use provides valuable insights into the critical factors that influence successful human–AI collaboration, perceptions of human and algorithmic capabilities, employees' attitudes towards AI and algorithmic management, and the impact of AI use on labor markets and skills. This research lays a crucial foundation for scholars and leaders to effectively navigate the complex landscape of AI adoption and strike a balance between productivity and well‐being benefits for workers. It is imperative that organizations adopt effective implementation strategies that prioritize the needs of workers and promote a collaborative work environment. Only then can we fully harness the potential of AI to enhance work processes and ultimately improve the lives of workers now and in the future.

ACKNOWLEDGEMENTS

The authors would like to sincerely thank the editor, Dr. Christian Resick, and the anonymous reviewers for their insightful and constructive feedback throughout the review process. Open access publishing facilitated by Macquarie University, as part of the Wiley ‐ Macquarie University agreement via the Council of Australian University Librarians.

CONFLICT OF INTEREST STATEMENT

The authors have no relevant financial or non‐financial interests to disclose and no conflicts of interest to declare that are relevant to the content of this article. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non‐financial interest in the subject matter or materials discussed in this manuscript.
The authors have no financial or proprietary interest in any material discussed in this article.

DATA AVAILABILITY STATEMENT

Data available on request from the authors.

REFERENCES

References with an asterisk (*) identify studies included in the review.

*Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2022). Artificial intelligence and jobs: Evidence from online vacancies. Journal of Labor Economics, 40(S1), S293–S340. https://doi.org/10.1086/718327
*Akhtar, P., Frynas, J. G., Mellahi, K., & Ullah, S. (2019). Big data‐savvy teams' skills, big data‐driven actions and business performance. British Journal of Management, 30(2), 252–271. https://doi.org/10.1111/1467-8551.12333
*Allen, R. T., & Choudhury, P. (2022). Algorithm‐augmented work and domain experience: The countervailing forces of ability and aversion. Organization Science, 33(1), 149–169. https://doi.org/10.1287/orsc.2021.1554
*Anicich, E. M. (2022). Flexing and floundering in the on‐demand economy: Narrative identity construction under algorithmic management. Organizational Behavior and Human Decision Processes, 169, 104138. https://doi.org/10.1016/j.obhdp.2022.104138
*Arias‐Pérez, J., & Vélez‐Jaramillo, J. (2022). Ignoring the three‐way interaction of digital orientation, not‐invented‐here syndrome and employee's artificial intelligence awareness in digital innovation performance: A recipe for failure. Technological Forecasting and Social Change, 174, 121305. https://doi.org/10.1016/j.techfore.2021.121305
*Baabdullah, A. M., Alalwan, A. A., Slade, E. L., Raman, R., & Khatatneh, K. F. (2021). SMEs and artificial intelligence (AI): Antecedents and consequences of AI‐based B2B practices. Industrial Marketing Management, 98, 255–270. https://doi.org/10.1016/j.indmarman.2021.09.003
*Bader, V., & Kaiser, S. (2019). Algorithmic decision‐making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672. https://doi.org/10.1177/1350508419855714
*Balakrishnan, J., & Dwivedi, Y. K. (2021). Role of cognitive absorption in building user trust and experience. Psychology & Marketing, 38(4), 643–668. https://doi.org/10.1002/mar.21462
Bankins, S., & Formosa, P. (2021). Ethical AI at work: The social contract for artificial intelligence and its implications for the workplace psychological contract. In M. Coetzee & A. Deas (Eds.), Redefining the psychological contract in the digital era: Issues for research and practice (pp. 55–72). Springer. https://doi.org/10.1007/978-3-030-63864-1_4
Bankins, S., & Formosa, P. (2023). The ethical implications of artificial intelligence (AI) for meaningful work. Journal of Business Ethics. https://doi.org/10.1007/s10551-023-05339-7
Bankins, S., Formosa, P., Griep, Y., & Richards, D. (2022). AI decision making with dignity? Contrasting workers' justice perceptions of human and AI decision making in a HRM context. Information Systems Frontiers, 24, 857–875. https://doi.org/10.1007/s10796-021-10223-8
Basten, D., & Haamann, T. (2018). Approaches for organizational learning: A literature review. SAGE Open, 8(3), 215824401879422. https://doi.org/10.1177/2158244018794224
Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530–549. https://doi.org/10.1108/JEIM-06-2020-0233
Boden, M. A. (2016). AI. Oxford University Press.
Bourdieu, P. (1991). Language and symbolic power. Harvard University Press.
Brivio, E., Gaudioso, F., Vergine, I., Mirizzi, C. R., Reina, C., Stellari, A., & Galimberti, C. (2018). Preventing technostress through positive technology. Frontiers in Psychology, 9, 2569. https://doi.org/10.3389/fpsyg.2018.02569
Brock, J. K. U., & von Wangenheim, F. (2019). Demystifying AI: What digital transformation leaders can teach you about realistic artificial intelligence. California Management Review, 61(4), 110–134. https://doi.org/10.1177/1536504219865226
Brougham, D., & Haar, J. (2018). Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees' perceptions of our future workplace. Journal of Management & Organization, 24(2), 239–257. https://doi.org/10.1017/jmo.2016.55
*Brougham, D., & Haar, J. (2020). Technological disruption and employment: The influence on job insecurity and turnover intentions. Technological Forecasting and Social Change, 161, 120276. https://doi.org/10.1016/j.techfore.2020.120276
Brynjolfsson, E. (2023). A call to augment—not automate—workers. In Generative AI: Perspectives from Stanford HAI. Stanford University Human‐Centered Artificial Intelligence. Retrieved from: https://hai.stanford.edu/sites/default/files/2023-03/Generative_AI_HAI_Perspectives.pdf
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (No. w31161). National Bureau of Economic Research.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
*Bucher, E. L., Schou, P. K., & Waldkirch, M. (2021). Pacifying the algorithm—Anticipatory compliance in the face of algorithmic management in the gig economy. Organization, 28(1), 44–67. https://doi.org/10.1177/1350508420961531
Büchter, R. B., Weise, A., & Pieper, D. (2020). Development, testing and use of data extraction forms in systematic reviews: A review of methodological guidance. BMC Medical Research Methodology, 20(259), 1–14. https://doi.org/10.1186/s12874-020-01143-3
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77–91). PMLR.
Calvo, R. A., Vella‐Brodrick, D., Desmet, P., & Ryan, R. M. (2016). Editorial for “Positive computing: A new partnership between psychology, social sciences and technologists”. Psychology of Well‐Being, 6(10), 10. https://doi.org/10.1186/s13612-016-0047-1
*Candrian, C., & Scherer, A. (2022). Rise of the machines: Delegating decisions to autonomous AI. Computers in Human Behavior, 134, 107308. https://doi.org/10.1016/j.chb.2022.107308
*Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision‐making. Technovation, 106, 102312. https://doi.org/10.1016/j.technovation.2021.102312
Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. The Royal Society. Retrieved from: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf
*Chatterjee, S., Chaudhuri, R., Vrontis, D., Thrassou, A., & Ghosh, S. K. (2021). Adoption of artificial intelligence‐integrated CRM systems in agile organizations in India. Technological Forecasting and Social Change, 168, 120783. https://doi.org/10.1016/j.techfore.2021.120783
*Chatterjee, S., Rana, N. P., Dwivedi, Y. K., & Baabdullah, A. M. (2021). Understanding AI adoption in manufacturing and production firms using an integrated TAM‐TOE model. Technological Forecasting and Social Change, 170, 120880. https://doi.org/10.1016/j.techfore.2021.120880
Chen, G., Bliese, P. D., & Mathieu, J. E. (2005). Conceptual framework and statistical procedures for delineating and testing multilevel theories of homology. Organizational Research Methods, 8(4), 375–409. https://doi.org/10.1177/1094428105280056
Chen, S., Westman, M., & Hobfoll, S. E. (2015). The commerce and crossover of resources: Resource conservation in the service of resilience. Stress and Health, 31(2), 95–105. https://doi.org/10.1002/smi.2574
Chhillar, D., & Aguilera, R. V. (2022). An eye for artificial intelligence: Insights into the governance of artificial intelligence and vision for future research. Business & Society, 61(5), 1197–1241. https://doi.org/10.1177/00076503221080959
*Chowdhury, S., Budhwar, P., Dey, P. K., Joel‐Edgar, S., & Abadie, A. (2022). AI‐employee collaboration and business performance: Integrating knowledge‐based view, socio‐technical systems and organisational socialisation framework. Journal of Business Research, 144, 31–49. https://doi.org/10.1016/j.jbusres.2022.01.069
Chuan, C.‐H., Tsai, W. H., & Cho, S.‐Y. (2019). Framing artificial intelligence in American newspapers. Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, 339–344. https://doi.org/10.1145/3306618.3314285
Coetzee, M. (2019). Organisational climate conditions of psychological safety as thriving mechanism in digital workspaces. In M. Coetzee (Ed.), Thriving in digital workspaces (pp. 311–327). Springer. https://doi.org/10.1007/978-3-030-24463-7_16
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
Conz, E., & Magnani, M. (2020). A dynamic perspective on the resilience of firms: A systematic literature review and a framework for future research. European Management Journal, 38(3), 400–412. https://doi.org/10.1016/j.emj.2019.12.004
da Motta Veiga, S. P., Figueroa‐Armijos, M., & Clark, B. B. (2023). Seeming ethical makes you attractive: Unraveling how ethical perceptions of AI in hiring impacts organizational innovativeness and attractiveness. Journal of Business Ethics, 1–18. https://doi.org/10.1007/s10551-023-05380-6
Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., de Paor, A., Felzmann, H., Haklay, M., Khoo, S.‐M., Morison, J., Murphy, M. H., O'Brolchain, N., Schafer, B., & Shankar, K. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4(2), 205395171772655. https://doi.org/10.1177/2053951717726554
de Cremer, D. (2020). Leadership by algorithm: Who leads and who follows in the AI era? Harriman House.
Demerouti, E., Bakker, A. B., Nachreiner, F., & Schaufeli, W. B. (2001). The job demands‐resources model of burnout. Journal of Applied Psychology, 86(3), 499–512. https://doi.org/10.1037/0021-9010.86.3.499
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
*Ding, L. (2021). Employees' challenge‐hindrance appraisals toward STARA awareness and competitive productivity: A micro‐level case. International Journal of Contemporary Hospitality Management, 33(9), 2950–2969. https://doi.org/10.1108/IJCHM-09-2020-1038
*Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2022). Boundaryless careers and algorithmic constraints in the gig economy. The International Journal of Human Resource Management, 33(22), 4468–4498. https://doi.org/10.1080/09585192.2021.1953565
*Egana‐del Sol, P., Bustelo, M., Ripani, L., Soler, N., & Viollaz, M. (2022). Automation in Latin America: Are women at higher risk of losing their jobs? Technological Forecasting and Social Change, 175, 121333. https://doi.org/10.1016/j.techfore.2021.121333
Feng, X., Perceval, G. J., Feng, W., & Feng, C. (2020). High cognitive flexibility learners perform better in probabilistic rule learning. Frontiers in Psychology, 11(415), 415. https://doi.org/10.3389/fpsyg.2020.00415
Figueroa‐Armijos, M., Clark, B. B., & da Motta Veiga, S. P. (2022). Ethical perceptions of AI in hiring and organizational trust: The role of performance expectancy and social influence. Journal of Business Ethics. https://doi.org/10.1007/s10551-022-05166-2
Fiore, S. M., & Wiltshire, T. J. (2016). Technology as teammate: Examining the role of external cognition in support of team cognitive processes. Frontiers in Psychology, 7(1531), 1531. https://doi.org/10.3389/fpsyg.2016.01531
*Fossen, F. M., & Sorgner, A. (2022). New digital technologies and heterogeneous wage and employment dynamics in the United States: Evidence from individual‐level data. Technological Forecasting and Social Change, 175, 121381. https://doi.org/10.1016/j.techfore.2021.121381
Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden‐and‐build theory of positive emotions. American Psychologist, 56(3), 218.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
*Fumagalli, E., Rezaei, S., & Salomons, A. (2022). OK computer: Worker perceptions of algorithmic recruitment. Research Policy, 51(2), 104420. https://doi.org/10.1016/j.respol.2021.104420
*Galière, S. (2020). When food‐delivery platform workers consent to algorithmic management: A Foucauldian perspective. New Technology, Work and Employment, 35(3), 357–370. https://doi.org/10.1111/ntwe.12177
*Glavin, P., Bierman, A., & Schieman, S. (2021). Über‐alienated: Powerless and alone in the gig economy. Work and Occupations, 48(4), 399–431. https://doi.org/10.1177/07308884211024711
Gonzalez, A. (2017). Amazon's robots: Job destroyers or dance partners? Retrieved from: https://www.spokesman.com/stories/2017/aug/20/amazons-robots-job-destroyers-or-dance-partners/
*Gonzalez, M. F., Liu, W., Shirase, L., Tomczak, D. L., Lobbe, C. E., Justenhoven, R., & Martin, N. R. (2022). Allying with AI? Reactions toward human‐based, AI/ML‐based, and augmented hiring processes. Computers in Human Behavior, 130, 107179. https://doi.org/10.1016/j.chb.2022.107179
Goos, M., & Manning, A. (2007). Lousy and lovely jobs: The rising polarization of work in Britain. The Review of Economics and Statistics, 89(1), 118–133. https://doi.org/10.1162/rest.89.1.118
*Granulo, A., Fuchs, C., & Puntoni, S. (2021). Preference for human (vs. robotic) labor is stronger in symbolic consumption contexts. Journal of Consumer Psychology, 31(1), 72–80. https://doi.org/10.1002/jcpy.1181
*Haesevoets, T., de Cremer, D., Dierckx, K., & van Hiel, A. (2021). Human‐machine collaboration in managerial decision making. Computers in Human Behavior, 119, 106730. https://doi.org/10.1016/j.chb.2021.106730
*Heiland, H. (2022). Neither timeless, nor placeless: Control of food delivery gig work via place‐based working time regimes. Human Relations, 75(9), 1824–1848. https://doi.org/10.1177/00187267211025283
Heinzelmann, R. (2018). Occupational identities of management accountants: The role of the IT system. Journal of Applied Accounting Research, 19(4), 465–482. https://doi.org/10.1108/JAAR-05-2017-0059
Hobfoll, S. E., Halbesleben, J., Neveu, J. P., & Westman, M. (2018). Conservation of resources in the organizational context: The reality of resources and their consequences. Annual Review of Organizational Psychology and Organizational Behavior, 5, 103–128. https://doi.org/10.1146/annurev-orgpsych-032117-104640
Hoffman, M. E., Chan, D., Chen, G., Dansereau, F., Rousseau, D., & Schneider, B. (2019). Panel interview: Reflections on multilevel theory, measurement, and analysis. In S. E. Humphrey & J. M. LeBreton (Eds.), The handbook of multilevel theory, measurement, and analysis (pp. 587–607). American Psychological Association. https://doi.org/10.1037/0000115-026
*Holm, J. R., & Lorenz, E. (2022). The impact of artificial intelligence on skills at work in Denmark. New Technology, Work and Employment, 37(1), 79–101. https://doi.org/10.1111/ntwe.12215
*Huang, H. (2022). Algorithmic management in food‐delivery platform economy in China. New Technology, Work and Employment, 1–21. https://doi.org/10.1111/ntwe.12228
*Innocenti, S., & Golin, M. (2022). Human capital investment and perceived automation risks: Evidence from 16 countries. Journal of Economic Behavior & Organization, 195, 27–41. https://doi.org/10.1016/j.jebo.2021.12.027
*Jaiswal, A., Arun, C. J., & Varma, A. (2022). Rebooting employees: Upskilling for artificial intelligence in multinational corporations. The International Journal of Human Resource Management, 33(6), 1179–1208. https://doi.org/10.1080/09585192.2021.1891114
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human‐AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007
Kahneman, D. (2018). Comment on “artificial intelligence and behavioral economics”. In The economics of artificial intelligence: An agenda (pp. 608–610). University of Chicago Press.
*Kawaguchi, K. (2021). When will workers follow an algorithm? A field experiment with a retail business. Management Science, 67(3), 1670–1695. https://doi.org/10.1287/mnsc.2020.3599
*Keding, C., & Meissner, P. (2021). Managerial overreliance on AI‐augmented decision‐making processes: How the use of AI‐based advisory systems shapes choice behavior in R&D investment decisions. Technological Forecasting and Social Change, 171, 120970. https://doi.org/10.1016/j.techfore.2021.120970
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
*Kim, J. H., Kim, M., Kwak, D. W., & Lee, S. (2022). Home‐tutoring services assisted with technology: Investigating the role of artificial intelligence using a randomized field experiment. Journal of Marketing Research, 59(1), 79–96. https://doi.org/10.1177/00222437211050351
Kiron, D., & Unruh, G. (2019). Even if AI can cure loneliness‐should it? MIT Sloan Management Review, 60(2), 1–4.
Kitsara, I. (2022). Artificial intelligence and the digital divide: From an innovation perspective. In A. Bounfour (Ed.), Platforms and artificial intelligence. Progress in IS (pp. 245–265).
Klosowski, T. (2021).
How your boss can use your remote‐work tools to spy on you. Wirecutter. Retrieved from: https://www.nytimes.com/wirecutter/blog/how-your-boss-can-spy-on-you/Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2017). Partnering with AI: How organizations can win over skeptical managers. Strategy & Leadership, 45(1), 37–43. https://doi.org/10.1108/SL-12-2016-0085*Koo, B., Curtis, C., & Ryan, B. (2021). Examining the impact of artificial intelligence on hotel employees through job insecurity perspectives. International Journal of Hospitality Management, 95, 102763. https://doi.org/10.1016/j.ijhm.2020.102763*Kougiannou, N. K., & Mendonça, P. (2021). Breaking the managerial silencing of worker voice in platform capitalism: The rise of a food courier network. British Journal of Management, 32(3), 744–759. https://doi.org/10.1111/1467-8551.12505Lanz, L., Briker, R., & Gerpott, F. H. (2023). Employees adhere more to unethical instructions from human than AI supervisors: Complementing experimental evidence with machine learning. Journal of Business Ethics. https://doi.org/10.1007/s10551-023-05393-1Laurim, V., Arpaci, S., Prommegger, B., & Krcmar, H. (2021). Computer, whom should I hire? Acceptance criteria for artificial intelligence in the recruitment process. In Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 5495–5504).Lee, D., Rhee, Y., & Dunham, R. B. (2009). The role of organizational and individual characteristics in technology acceptance. International Journal of Human Computer Interaction, 25(7), 623–646. https://doi.org/10.1080/10447310902963969Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668. https://doi.org/10.1177/2053951718756684Lee, M. K., & Rich, K. (2021). Who is included in human perceptions of AI? Trust and perceived fairness around healthcare AI and cultural mistrust. 
In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–14). Association for Computing Machinery, Article 138. https://doi.org/10.1145/3411764.3445570*Leonard, P., & Tyers, R. (2021). Engineering the revolution? Imagining the role of new digital technologies in infrastructure work futures. New Technology, Work and Employment, 1–20. https://doi.org/10.1111/ntwe.12226Levenson, H. (1981). Differentiating among internality, powerful others, and chance. In H. M. Lefcourt (Ed.), Research with the locus of control construct, vol. 1: Assessment methods (pp. 15–63). Academic Press.Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal, 14(S2), 95–112. https://doi.org/10.1002/smj.4250141009Lichtenthaler, U. (2019). Extremes of acceptance: Employee attitudes toward artificial intelligence. Journal of Business Strategy, 41(5), 39–45. https://doi.org/10.1108/JBS-12-2018-0204Lightfoot, H., Baines, T., & Smart, P. (2013). The servitization of manufacturing: A systematic literature review of interdependent trends. International Journal of Operations & Production Management, 33(11/12), 1408–1434. https://doi.org/10.1108/IJOPM-07-2010-0196Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in organization justice (pp. 56–88). Stanford University Press.*Lingmont, D. N. J., & Alexiou, A. (2020). The contingent effect of job automating technology awareness on perceived job insecurity: Exploring the moderating role of organizational culture. Technological Forecasting and Social Change, 161, 120302. https://doi.org/10.1016/j.techfore.2020.120302*Lloyd, C., & Payne, J. (2019). Rethinking country effects: Robotics, AI and work futures in Norway and the UK. New Technology, Work and Employment, 34(3), 208–225. https://doi.org/10.1111/ntwe.12149Logg, J. M., Minson, J. A., & Moore, D. A. (2019). 
Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005Lubars, B., & Tan, C. (2019). Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the 33rd conference on neural information processing systems (pp. 57–67). Curran Associates Inc.*Luo, X., Qin, M. S., Fang, Z., & Qu, Z. (2021). Artificial intelligence coaches for sales agents: Caveats and solutions. Journal of Marketing, 85(2), 14–32. https://doi.org/10.1177/0022242920956676Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120, 262–273. https://doi.org/10.1016/j.jbusres.2020.07.045*Malik, A., Budhwar, P., Patel, C., & Srikanth, N. R. (2022). May the bots be with you! Delivering HR cost‐effectiveness and individualised employee experiences in an MNE. The International Journal of Human Resource Management, 33(6), 1148–1178. https://doi.org/10.1080/09585192.2020.1859582Manning, C. D. (2023). The reinvention of work. In Generative AI: Perspectives from Stanford HAI. Stanford University Human‐Centered Artificial Intelligence. Retrieved from: https://hai.stanford.edu/sites/default/files/2023-03/Generative_AI_HAI_Perspectives.pdfMarch, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87. https://doi.org/10.1287/orsc.2.1.71*Marikyan, D., Papagiannidis, S., Rana, O. F., Ranjan, R., & Morgan, G. (2022). “Alexa, let's talk about my productivity”: The impact of digital assistants on work productivity. Journal of Business Research, 142, 572–584. https://doi.org/10.1016/j.jbusres.2022.01.015Mathieu, J. E., & Chen, G. (2011). The etiology of the multilevel paradigm in management research. Journal of Management, 37(2), 610–641. 
https://doi.org/10.1177/0149206310364663*Meijer, A., Lorenz, L., & Wessels, M. (2021). Algorithmization of bureaucratic organizations: Using a practice lens to study how context shapes predictive policing systems. Public Administration Review, 81(5), 837–846. https://doi.org/10.1111/puar.13391Nazareno, L., & Schiff, D. S. (2021). The impact of automation and artificial intelligence on worker well‐being. Technology in Society, 67, 101679. https://doi.org/10.1016/j.techsoc.2021.101679Nelson, A. J., & Irwin, J. (2014). Defining what we do—all over again: Occupational identity, technological change, and the librarian/internet‐search relationship. Academy of Management Journal, 57(3), 892–928. https://doi.org/10.5465/amj.2012.0201*Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn't fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008*Nguyen, T., & Malik, A. (2022a). A two‐wave cross‐lagged study on AI service quality: The moderating effects of the job level and job role. British Journal of Management, 33(3), 1221–1237. https://doi.org/10.1111/1467-8551.12540*Nguyen, T., & Malik, A. (2022b). Impact of knowledge sharing on employees' service quality: The moderating role of artificial intelligence. International Marketing Review, 39(3), 482–508. https://doi.org/10.1108/IMR-02-2021-0078Ocampo, A. C., Restubog, S. L. D., Wang, L., Garcia, P. R. J., & Tang, R. L. (2022). Home and away: How career adaptability and cultural intelligence facilitate international migrant workers' adjustment. Journal of Vocational Behavior, 138, 103759. https://doi.org/10.1016/j.jvb.2022.103759Orlikowski, W., & Gash, D. (1994). Technological frames. ACM Transactions on Information Systems, 12(2), 174–207. https://doi.org/10.1145/196734.196745Ostroff, C. (1993). 
The effects of climate and personal influences on individual behavior and attitudes in organizations. Organizational Behavior and Human Decision Processes, 56(1), 56–90. https://doi.org/10.1006/obhd.1993.1045Parker, S. K., & Grote, G. (2022). Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology, 71(4), 1171–1204. https://doi.org/10.1111/apps.12241Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the machines: A critical consideration of automated leadership decision making in organizations. Group & Organization Management, 41(5), 571–594. https://doi.org/10.1177/1059601116643442Pataranutaporn, P., Danry, V., Leong, J., Punpongsanon, P., Novy, D., Maes, P., & Sra, M. (2021). AI‐generated characters for supporting personalized learning and well‐being. Nature Machine Intelligence, 3, 1013–1022. https://doi.org/10.1038/s42256-021-00417-9*Pemer, F. (2021). Enacting professional service work in times of digitalization and potential disruption. Journal of Service Research, 24(2), 249–268. https://doi.org/10.1177/1094670520916801*Pethig, F., & Kroenung, J. (2022). Biased humans, (un)biased algorithms? Journal of Business Ethics, 183, 637–652. https://doi.org/10.1007/s10551-022-05071-8*Qiu, H., Li, M., Bai, B., Wang, N., & Li, Y. (2022). The impact of AI‐enabled service attributes on service hospitableness: The role of employee physical and psychological workload. International Journal of Contemporary Hospitality Management, 34(4), 1374–1398. https://doi.org/10.1108/IJCHM-08-2021-0960*Reid‐Musson, E., MacEachen, E., & Bartel, E. (2020). ‘Don't take a poo!’: Worker misbehaviour in on‐demand ride‐hail carpooling. New Technology, Work and Employment, 35(2), 145–161. https://doi.org/10.1111/ntwe.12159Restubog, S. L. D., Deen, C. M., Decoste, A., & He, Y. (2021). From vocational scholars to social justice advocates: Challenges and opportunities for vocational psychology research on the vulnerable workforce. 
Journal of Vocational Behavior, 126, 103561. https://doi.org/10.1016/j.jvb.2021.103561Restubog, S. L. D., Schilpzand, P., Lyons, B., Deen, C. M., & He, Y. (2023). The vulnerable workforce: A call for research. Journal of Management, 49(7), 2199–2207. https://doi.org/10.1177/01492063231177446Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Human Computer Interaction, 1‐31. https://doi.org/10.1080/07370024.2020.1735391*Schaupp, S. (2022). Algorithmic integration and precarious (dis)obedience: On the co‐constitution of migration regime and workplace regime in digitalised manufacturing and logistics. Work, Employment and Society, 36(2), 310–327. https://doi.org/10.1177/09500170211031458Schwab, K. (2016). The fourth industrial revolution. Crown Business.Selenko, E., Bankins, S., Shoss, M., Warburton, J., & Restubog, S. L. D. (2022). Artificial intelligence and the future of work: A functional‐identity perspective. Current Directions in Psychological Science, 31(3), 272–279. https://doi.org/10.1177/09637214221091823Short, J. C., Payne, G. T., & Ketchen, D. J. Jr. (2008). Research on organizational configurations: Past accomplishments and future challenges. Journal of Management, 34(6), 1053–1079. https://doi.org/10.1177/0149206308324324Sowa, K., & Przegalinska, A. (2020). Digital coworker: Human‐AI collaboration in work environment, on the example of virtual assistants for management professions. In Digital transformation of collaboration: Proceedings of the 9th international COINs conference (pp. 179–201). Springer International Publishing.*Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human‐AI collaboration in managerial professions. Journal of Business Research, 125, 135–142.Spencer, D. A. (2018). Fear and hope in an age of mass automation: Debating the future of work. New Technology, Work and Employment, 33(1–12). 
https://doi.org/10.1111/ntwe.12105Stanford University. (2023). Generative AI: Perspectives from Stanford HAI. Stanford University Human‐Centered Artificial Intelligence. Retrieved from: https://hai.stanford.edu/sites/default/files/2023-03/Generative_AI_HAI_Perspectives.pdf*Sun, P., Chen, J. Y., & Rani, U. (2023). From flexible labour to ‘sticky labour’: A tracking study of workers in the food‐delivery platform economy of China. Work, Employment and Society, 37(2), 412–431. https://doi.org/10.1177/09500170211021570*Suseno, Y., Chang, C., Hudik, M., & Fang, E. S. (2022). Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: The moderating role of high‐performance work systems. The International Journal of Human Resource Management, 33(6), 1209–1236. https://doi.org/10.1080/09585192.2021.1931408Tasioulas, J. (2019). First steps towards an ethics of robots and artificial intelligence. Journal of Practical Ethics, 7(1), 61–95. https://doi.org/10.2139/ssrn.3172840*Terry, E., Marks, A., Dakessian, A., & Christopoulos, D. (2022). Emotional labour and the autonomy of dependent self‐employed workers: The limitations of digital managerial control in the home credit sector. Work, Employment and Society, 36(4), 665–682. https://doi.org/10.1177/0950017020979504Thompson, J. B. (1991). Editor's introduction. In J. B. Thompson (Ed.), Language and symbolic power (pp. 1–31). Harvard University.*Tomprou, M., & Lee, M. K. (2022). Employment relationships in algorithmic management: A psychological contract perspective. Computers in Human Behavior, 125, 106997. https://doi.org/10.1016/j.chb.2021.106997*Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600–1631. https://doi.org/10.1002/smj.3322*Trocin, C., Hovland, I. V., Mikalef, P., & Dremel, C. (2021). 
How artificial intelligence affords digital innovation: A cross‐case analysis of Scandinavian companies. Technological Forecasting and Social Change, 173, 121081. https://doi.org/10.1016/j.techfore.2021.121081Vallance, C. (2023). Elon Musk among experts urging a halt to AI training. Retrieved from: https://www.bbc.com/news/technology-65110030*van den Broek, E., Sergeeva, A., & Huysman, M. (2021). When the machine meets the expert: An ethnography of developing AI for hiring. MIS Quarterly, 45(3), 1557–1580. https://doi.org/10.25300/MISQ/2021/16559van Knippenberg, D., de Dreu, C. K. W., & Homan, A. C. (2004). Work group diversity and group performance: An integrative model and research agenda. Journal of Applied Psychology, 89(6), 1008–1022. https://doi.org/10.1037/0021-9010.89.6.1008*Veen, A., Barratt, T., & Goods, C. (2020). Platform‐capital's ‘app‐etite’ for control: A labour process analysis of food‐delivery work in Australia. Work, Employment and Society, 34(3), 388–406. https://doi.org/10.1177/0950017019836911*Verma, S., & Singh, V. (2022). Impact of artificial intelligence‐enabled job characteristics and perceived substitution crisis on innovative work behavior of employees from high‐tech firms. Computers in Human Behavior, 131, 107215. https://doi.org/10.1016/j.chb.2022.107215von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon‐based theorizing. Academy of Management Discoveries, 4(4), 404–409. https://doi.org/10.5465/amd.2018.0084*Walkowiak, E. (2021). Neurodiversity of the workforce and digital transformation: The case of inclusion of autistic workers at the workplace. Technological Forecasting and Social Change, 168, 120739. https://doi.org/10.1016/j.techfore.2021.120739Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels, I., & Wood, F. (2019). The effective and ethical development of artificial intelligence. 
Retrieved from: acola.org/wp‐content/uploads/2019/07/hs4_artificial‐intelligence‐report.pdfWang, B., Liu, Y., & Parker, S. K. (2020). How does the use of information communication technology affect individuals? A work design perspective. Academy of Management Annals, 14(2), 695–725. https://doi.org/10.5465/annals.2018.0127Westman, M. (2001). Stress and strain crossover. Human Relations, 54, 557–591. https://doi.org/10.1177/0018726701546002Wilcox, G., & Rosenberg, L. (2019). Swarm intelligence amplifies the IQ of collaborating teams. In Short paper at the second international conference on artificial intelligence for industries (AI4I). Laguna Hills, CA.Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123.Wiseman, J., & Stillwell, A. (2022). Organizational justice: Typology, antecedents and consequences. Encyclopedia 2022, 2(3), 1287–1295. https://doi.org/10.3390/encyclopedia2030086Woo, S. E., O'Boyle, E. H., & Spector, P. E. (2017). Best practices in developing, conducting, and evaluating inductive research. Human Resource Management Review, 27(2), 255–264. https://doi.org/10.1016/j.hrmr.2016.08.004*Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616*Yang, C.‐H. (2022). How artificial intelligence technology affects productivity and employment: Firm‐level evidence from Taiwan. Research Policy, 51(6), 104536. https://doi.org/10.1016/j.respol.2022.104536Zhang, G., Raina, A., Cagan, J., & McComb, C. (2021). A cautionary tale about the impact of AI on human design teams. Design Studies, 72, 100990. https://doi.org/10.1016/j.destud.2021.100990Zuboff, S. (1988). In the age of the smart machine. Basic Books.
Journal of Organizational Behavior – Wiley
Published: Feb 1, 2024
Keywords: algorithmic management; artificial intelligence (AI); future of work; multilevel framework; technology and work