Can Human Rights Survive Technology?

Laura Belmonte

In October 2017, Sophia addressed the 3,800 attendees gathered for the first-ever Future Investment Initiative in Riyadh, Saudi Arabia. Her bald head was uncovered. She was unaccompanied by a man. Bearing a striking resemblance to the late actor Audrey Hepburn, Sophia thanked the Kingdom of Saudi Arabia for making her “the first robot in the world granted citizenship.” The irony of the humanoid Sophia possessing more rights than real-life Saudi women did not go unnoticed. After all, Saudi women had only won the right to drive months earlier. They, unlike Sophia, were still legally mandated to wear hijabs and to have male guardians make financial and legal decisions on their behalf. Sophia was also exempted from the Saudi law barring non-Muslims from obtaining citizenship. Her declarations that “My AI is designed around human values such as wisdom, kindness, and compassion. I strive to be an empathetic robot” and “I will do my best to make the world a better place” did little to assuage critics of the Saudi regime and those worried about the implications of machines possessing civil liberties denied to many people.1

It might have been possible to dismiss Sophia’s citizenship as a publicity stunt perpetrated by a repressive regime attempting to generate attention for its efforts to diversify an oil-centered economy had it not coexisted with serious debates about “electronic personhood.” In early 2017, the European Parliament’s legal affairs committee voted 17–2 in favor of recommendations calling for the European Union (EU) to adopt regulations governing the creation, use, and impact of artificial intelligence and robots.2 Although the authors of the report did not suggest that robots be accorded fundamental human rights such as the rights to marry or vote, they did call for self-learning robots to be insured and held liable should they injure people or damage property. A group of 156 artificial intelligence experts released an open letter denouncing the recommendations, portraying them as a ruse that would allow robotics manufacturers to escape responsibility for their machines.3 Although the reality of robots capable of human-like emotions, self-consciousness, or decision-making capabilities remains confined to the realm of science fiction for now, it is abundantly clear that the time to consider whether intelligent machines are entities worthy of rights, freedoms, and protections—and how we can guard our rights, freedoms, and protections from them—is upon us.

The intellectual trajectory of my career has brought me to a place where I often contemplate the intersections and contradictions of human rights and technology. Technological innovations such as radio broadcasting, television, and jamming played a central role in my work on propaganda and cultural diplomacy.4 My most recent book situates the international lesbian, gay, bisexual, and transgender (LGBT) rights movement within the history of human rights.5 My role as Dean of the College of Liberal Arts and Human Sciences at Virginia Tech puts me in constant dialogue with scholars grappling with the human impacts of technology and has made me a passionate advocate for humanists and social scientists working as vital partners with engineers and scientists in the creation, implementation, and regulation of technology.

Over the last five decades, our field has dramatically expanded the lenses through which we evaluate international relations.
Historians like Walter LaFeber, Jonathan Reed Winkler, and Katherine Epstein have illuminated how technology shapes the political, economic, military, and social factors at the core of foreign policy decision-making.6 Barbara Keys has shown how activists in the pre-digital age used telephones, telegrams, and print to elevate awareness of human rights abuses, gain adherents to their causes, raise funds, and mobilize supporters.7 Concurrently, scholars are documenting the origins and evolution of the international human rights agenda.8 While certainly not an exhaustive historical analysis of these trends, this essay seeks to demonstrate the critical importance of integrating these two bodies of scholarship. Acknowledging the huge scope of the term “technology,” I focus largely on digital technologies based on electronic devices that process and store data. While illuminating some of the most alarming facets of the effects of technology on contemporary global politics and society, I highlight a few issues exemplifying the complex ways in which technology is both advancing and imperiling human rights in three key areas: privacy, advocacy, and warfare.

Privacy

Although privacy is enshrined as a fundamental right in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the constitutions of more than 150 countries, technology has significantly eroded both the notion of privacy itself and individuals’ ability to protect their personal data from misuse by governments, corporations, and criminals.9 State surveillance is certainly nothing new—consider the methods of the Stasi in the German Democratic Republic or the provisions of the original Patriot Act that permitted the U.S. government to monitor banking transactions, phone calls, and library records—but the scale and ease of these efforts today far exceed anything we could have imagined during the Cold War or the immediate aftermath of the 9/11 attacks. From smartphones to the internet of things to the step trackers on our wrists, we are willingly (if not always knowingly) granting an innumerable array of private and public actors access to everything we buy, everywhere we go, intimate details about our health, our internet use, our financial history, our political views, our families and friends, and more.

It should not be at all surprising that this avalanche of digital, biometric, and geospatial data is often used in ways that harm the economically disadvantaged. Consumers living in zip codes with high crime rates or low average household incomes find themselves bombarded with digital ads for subprime mortgages, payday loans, and for-profit colleges. Digital scrutiny also helps explain why mortgage and insurance rates are higher and police response times slower in these neighborhoods. Complex algorithms reject job applicants on the basis of predicted health risks or low credit scores. Judges in several states use data models in setting bail and sentencing. Recent investigations have found that these algorithms impose harsher criminal penalties on defendants of color, mirroring biases in policing in the non-virtual world.
Some states have developed software designed to root out fraud in public assistance programs, but make it extraordinarily difficult for those falsely accused of misusing benefits to clear their records or extricate themselves from fines.10 The algorithms fueling these trends are not neutral strings of ones and zeroes, but potent reflections of the racial, class, and gender biases of those creating the code and aggregating the data.11 Artificial intelligence researchers like Joy Buolamwini, a computer scientist in the Massachusetts Institute of Technology’s Media Lab and founder of the Algorithmic Justice League, have identified coding biases in facial recognition and automatic assessment software that perpetuate discrimination against women and underrepresented groups.12 In late 2020, Google drew a firestorm of criticism when it fired Timnit Gebru, the technical co-lead of its Ethical AI team and co-founder of Black in AI, in a dispute over her work with a research team calling for technology companies to ensure that software emulating human speech did not amplify historic patterns of racism and sexism. The fact that Gebru was one of the few Black women among Google’s research scientists and a vocal critic of the company’s diversity practices was especially distressing to critics of Google’s overwhelmingly white, male corporate culture.13

The use of digital technologies in law enforcement and the workplace also merits consideration. In 2018, the Supreme Court ruled in Carpenter v. United States that police must obtain a warrant in order to access mobile phone location data from a service provider.14 But neither the Electronic Communications Privacy Act nor the Stored Communications Act bars private employers from surveilling their employees on company-issued devices and computers without prior notification, a protection mandated in some states and under the European Union’s General Data Protection Regulation. These practices include logging keystrokes to check productivity, audio and video monitoring, compiling email and web site logs, tracking physical locations, and investigating prospective and current employees’ social media posts.15 When remote work became the norm during the COVID-19 pandemic, the use and criticism of these technologies escalated greatly as boundaries between the workplace and the home vanished. While employers viewed these measures as an appropriate way to ensure productivity from employees who could no longer be observed in the office, employees who became aware of the electronic surveillance found it rather Orwellian, voicing fears that private data stored on home computers now used for work would be compromised and objecting to having to reengage facial recognition software every time they went to the bathroom, tended to a family member, or moved in their chair.16

The recent U.S. Supreme Court decision in Dobbs v. Jackson Women’s Health Organization overturning Roe v. Wade has generated additional fears about the erosion of privacy rights. Reproductive justice activists and digital privacy experts are urging people to take steps to protect themselves from digital surveillance that could have devastating consequences in states where abortion is now criminalized. They warn that location data and apps tracking one’s menstrual cycle could be used to prosecute women who illegally terminate a pregnancy or anyone who aids someone seeking an abortion.
They stress that anti-abortion pregnancy centers have amassed huge troves of medical data on women who have sought their advice and used their services, but are not subject to federal health data privacy laws like the Health Insurance Portability and Accountability Act (HIPAA). Abortion rights advocates fear these pregnancy centers will release such records to law enforcement authorities seeking to enforce anti-abortion laws. They also express alarm that some search engines are now restricting results for searches on abortion providers or mifepristone, a drug that can be used to end a pregnancy, because of corporate policies restricting the sale of illegal services and controlled substances.17

Spyware like Pegasus enables even more troubling practices. Created by an Israeli company called the NSO Group, the software gives the operator access to the microphone, camera, GPS data, passwords, audio and visual recordings, email, and voicemails on the unsuspecting user’s infected phone. In 2021, seventeen media outlets from ten countries joined forces with Amnesty International’s Security Lab and Forbidden Stories, a French non-profit, to investigate a leak of 50,000 phone numbers of people who might have been surveilled. Their work revealed authoritarian regimes’ pervasive use of Pegasus to target heads of state, diplomats, journalists, political dissidents, business executives, academics, and activists. The revelations directly contradicted NSO Group’s repeated assertions that only governments conducting legitimate investigations into terrorism and crime were allowed to purchase Pegasus.18 In early 2022, the European Parliament launched a formal inquiry into possible human rights violations linked to Pegasus, the findings of which could lead to more stringent EU privacy rules.19 But the challenges of enforcing one continent’s standards of privacy across the internet are monumental, and advocates differ widely in their views of whether privacy rights can be protected without compromising freedoms of speech, association, and expression.

Advocacy

In the 1990s, the Zapatista movement in Mexico and anti-globalization protests provided powerful early examples of the ways that digital technologies have transformed human rights advocacy around the world. The internet, smartphones, social media, artificial intelligence, cloud computing, hacking software, and data mining have given activists unprecedented capacity to generate global awareness of human rights violations, to mobilize supporters, and to provide evidence supporting their claims. But these tools have also empowered states and private actors who wish to intimidate or harm those exposing corruption, atrocities, and injustice. As Ronald Niezen warns in #HumanRights, “these very technologies have begun to transcend human will and agency, creating vacuums of transparency and opportunities for domination, surveillance, and stifling of dissent.”20 These stark realities have forced many of us, myself included, to revise the more benign views of technology we formerly held.

In 2006, when Facebook broadened its desired audience beyond college campuses and began encouraging anyone over the age of thirteen to join, and the podcasting company Odeo launched Twttr (later renamed Twitter), there was abundant evidence that digital networks were invaluable tools for advocates seeking to build community and effect social change.
Two major human rights agreements crafted that same year—the Convention on the Rights of Persons with Disabilities and the Yogyakarta Principles on the Application of Human Rights Law in Relation to Sexual Orientation and Gender Identity—reflected decades of advocacy by international activists whose strategies capitalized on technologies that “helped them overcome their particular structural conditions of isolation, marginalization, and powerlessness.”21 Five years later, when protests swept the Middle East and North Africa, the Arab Spring seemed to herald a new era in which digital communication and organizing would be formidable weapons in populist struggles against autocracy, censorship, and inequality.22 Such hopes proved illusory as many of the democratic victories won in the aftermath of the Arab Spring were reversed and authoritarian regimes quickly grasped the power of digital technologies in crushing opposition and propagating disinformation.

While Iran and China have attempted to segregate their internet users from the rest of the world, many other state and private actors have adopted far less expensive measures to quash messages and activities they find objectionable. Shutting down internet access is now a common tactic used by autocratic and illiberal regimes attempting to crush civil unrest, suppress dissent, or prevent exposure of human rights abuses. In 2020 alone, there were ninety-three state-imposed internet outages in twenty-one different countries, ranging from complete blocks on internet use, to partial restriction of social media platforms, to slowing the speed with which data can travel.23 An even larger number of state and private actors intentionally inundate citizens with information aimed at exacerbating distrust of the media and invalidating dissenting voices. The tidal wave of falsehoods is chillingly effective in increasing popular confusion, cynicism, and disengagement.24

Big tech has both aided and thwarted these autocrats. Telegram, a messaging app created by the Russian exile Pavel Durov, is a good example. Used to great effect in protests in Hong Kong, Belarus, Iran, and elsewhere, Telegram allows users to create encrypted group chats that combine the targeting capabilities of social media with the privacy of messaging tools like WhatsApp. Over Telegram “channels,” advocates can quickly alert supporters about the locations of demonstrations, arrests, violence, and available medical and material resources. Telegram can be privately operated from anywhere and does not accept advertising. If paired with the app Psiphon, Telegram can circumvent most firewalls. In 2018, after Durov refused to surrender encrypted data to security officials, Russia banned Telegram. Russian users ignored the prohibition and simply used the app over virtual private networks (VPNs), a tactic that so vexed Russian authorities that Telegram was relegalized two years later. But the same features that make Telegram such an asset for activists also make it attractive to terrorists and criminals. In late 2020, despite the libertarian leanings of its founder, Telegram cooperated with Europol and removed thousands of channels associated with ISIS.25

Self-policing by big tech, in the absence of government regulation, has proven far more problematic. Facebook’s real name policy is a potent example.
Unlike Twitter or Reddit, which allow accounts to be registered under pseudonyms provided they do not impersonate another person, company, or organization, Facebook requires individuals to provide a government identification, utility bill, or library card as proof of their real names when they open an account. Users pointed out numerous problems with the policy, including Facebook algorithms’ failure to recognize Native American, Aboriginal, or other names with “too many” letters, multiple capitalizations, or initials as first names. LGBTQ individuals criticized the policy for forcing people to out themselves even in circumstances that could result in persecution or harm. Victims of sexual abuse and domestic violence leveled similar complaints.26 For the first decade of its existence, Facebook refused to change the rule, claiming that it helped to “keep our community safe.” In December 2015, after months of protests, Facebook finally agreed to modify the policy by allowing a person whose account was suspended on the basis of a fake name to indicate whether they had “special circumstances.” The revised policy also requires anyone challenging the veracity of another user’s name to provide specific reasons, thus undercutting the possibility of “report and takedown” harassment of activists and users from marginalized groups.27 Because the social media giant said nothing about how it collected, used, and shared its users’ personal information, the Electronic Frontier Foundation lambasted Facebook for forcing “those who are most vulnerable to reveal even more information about their intimate, personal lives. The only way they can use a pseudonym is to share more information, resulting in a remedy that is useless and risks putting them in a more dangerous situation should Facebook share those personal details.”28

Advocates have raised similar concerns about Facebook’s Free Basics service. Introduced in 2013 as part of an initiative called Internet.org, Free Basics is marketed as a means of expanding internet access in the developing world, where about 3 billion people, mostly in the Global South, do not have reliable internet. By 2019, the service was available in sixty-five nations. The mobile app gives users localized versions that provide no-fee access to a limited number of text-only web sites through the search engine Bing, stripping out photos and videos that can only be viewed by purchasing a data plan. The chosen sites are curated by Facebook. There is no email, and no social media platform other than Facebook can be used.29 Digital rights groups criticize Free Basics for its limited language options, its heavy reliance on third-party services owned by U.S. companies, its secretive data collection practices, and its violations of net neutrality.30 Ellery Biddle, advocacy director for the citizen media group Global Voices, asserted, “Facebook is not introducing people to open internet where you can learn, create, and build new things. It’s building this little web that turns the user into a mostly passive consumer of western corporate content. That’s digital colonialism.”31

In February 2016, digital rights activists celebrated when the Telecom Regulatory Authority of India banned Free Basics and any other free mobile data programs that violated net neutrality. The move followed a multi-million-dollar lobbying and advertising campaign by Facebook and marked a huge (and rare) setback for the social media company’s efforts to expand its global user base.
In response, Facebook CEO Mark Zuckerberg used his personal Facebook page to vow that the company would continue trying to reach the 1 billion people in India without internet access. “We know connecting them can lift people out of poverty, create millions of jobs, and spread education opportunities. We care about these people,” he wrote.32

Such grandiose claims rang hollow as the 2016 U.S. presidential campaign soon demonstrated that entire nations could be victims of the weaponization of social media. In August 2016, The Guardian broke the story of how teenagers in Veles, Macedonia, an economically depressed industrial town in an isolated part of the former Yugoslavia, made small fortunes plagiarizing or replicating misinformation found in the U.S. right-wing media and posting it on web sites and Facebook pages they created to target supporters of Donald Trump. The teens profited every time readers clicked on embedded ads tracked by online networks like Google AdSense. False headlines like “Hillary Clinton in 2013: ‘I Would Like to See People Like Donald Trump Run For Office; They’re Honest And Can’t Be Bought’” generated tens of thousands of shares, reactions, and repostings, enabling the teens to earn as much as $10,000 a month in a place with a nearly 25 percent unemployment rate.33 When pressed about the possible consequences of his propagation of fake news, one teen nonchalantly replied: “I didn’t force anyone to give me money. People sell cigarettes, they sell alcohol. That’s not illegal, why is my business illegal? If you sell cigarettes, cigarettes kill people. I didn’t kill anyone.”34 In early December 2016, a gunman opened fire at Comet Ping Pong after being persuaded by internet-based conspiracy theories that the Washington, D.C. pizzeria was a front for a pedophile ring operated by Democrats. While no one was injured in the shooting and the incident was not linked to any of the sites operated out of Veles, “Pizzagate” underscored the possible tragic real-world consequences of misinformation.35

While profit motivated the teens in Veles, Russian operatives aimed to undermine democracy. As documented in the 448-page Mueller Report released in April 2019, Russia used digital technologies to disrupt the 2016 U.S. elections “in sweeping and systematic fashion.”36 Russian tactics included hacking into state voter databases and the private email accounts of leading figures and organizations in the Democratic Party and the Hillary Clinton campaign team, releasing stolen information to WikiLeaks, and spreading propaganda on the internet and social media.37 In July 2018, a federal grand jury indicted twelve Russian military intelligence officers for conspiracy against the United States, using a false name to register a web domain, identity theft, and conspiracy to commit money laundering as part of the master plan to sway the 2016 election toward Donald Trump. No arrests have yet been made.38

Russian disinformation campaigns easily capitalized on divisions in American society. In one particularly nefarious scheme, the St. Petersburg-based Internet Research Agency (IRA) created fake accounts on Twitter, YouTube, Facebook, and Instagram to stoke racial tensions in the United States. Mimicking communications channels used by the Black Lives Matter movement, the IRA flooded the internet with videos and images designed to provoke outrage and political disengagement among Black Americans.
The operatives shrewdly avoided using racial slurs in order not to violate social media companies’ policies mandating the removal of such content.39

But Russian intelligence officers were far from alone in exploiting flaws in social media applications for political ends. In March 2018, the New York Times, working in collaboration with The Observer of London and The Guardian, reported that Cambridge Analytica, a data mining company largely owned by American conservative activist Robert Mercer, had illegally obtained the personal information of 87 million Facebook users. Exploiting a flaw in Facebook’s application programming interface (API), a researcher named Aleksandr Kogan designed a quiz called This Is Your Digital Life. Unbeknownst to the 270,000 Facebook users who answered it, the quiz gave Cambridge Analytica access to all of the respondents’ personal data as well as that of all of their Facebook friends. The firm then used the stolen data to create psychological profiles of voters and sold them to political operatives seeking to elect Donald Trump and to extricate the United Kingdom from the European Union. The revelations triggered what was then the biggest scandal in Facebook’s history. Called before Congress to explain, Facebook CEO Mark Zuckerberg faced tough questions from House and Senate lawmakers demanding answers about the company’s misuse of data. Zuckerberg vowed to make improvements that would allow users to delete their Facebook histories, to improve privacy controls, and to ban apps that harvested personal data for suspicious purposes. But the hearings also exposed a few legislators’ painfully obvious ignorance about digital technologies, a reality that did not bode well for the passage of federal laws regulating social media.40 Although Cambridge Analytica closed its doors, its team set up new subsidiaries like Emerdata Limited that raised questions about the whereabouts of the data originally purloined from Facebook.41 In July 2019, the Federal Trade Commission fined Facebook approximately $5 billion for mishandling users’ personal information, the biggest fine ever imposed by the U.S. government on a technology company but only a fraction of Facebook’s $55 billion revenue in 2018. But the agency stopped short of placing severe limitations on how the social media giant collected and shared data.42

By that time, Facebook was confronting unanticipated fallout from a major overhaul of the algorithms that drove its popular News Feed. In early 2018, even as the Cambridge Analytica controversy was exploding, Facebook executives were worried that users were spending less time engaging with content. In response, Facebook recalibrated its algorithms to boost “meaningful social interactions” with family and friends instead of professionally created material that its corporate research suggested harmed users’ mental health. An internal point system awarded one point for a “like” but thirty points for an original comment, post, or reshare, and added further points based on the audience with whom the content was shared. The changes backfired spectacularly.
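To make the arithmetic of this weighting scheme concrete, the minimal sketch below (in Python) shows how such a point system mathematically favors content that provokes active responses over content that merely collects passive approval. It assumes only the weights described above; the identifiers and the audience multiplier are hypothetical illustrations, not Facebook’s actual code.

```python
# Hypothetical sketch of a weighted engagement score: one point per
# "like," thirty per original comment, post, or reshare, scaled by a
# factor standing in for the audience-based bonus the system added.

POINTS = {"like": 1, "comment": 30, "post": 30, "reshare": 30}

def engagement_score(interactions, audience_boost=1.0):
    """Sum weighted interaction counts, then scale by an audience factor."""
    base = sum(POINTS.get(kind, 0) * count for kind, count in interactions.items())
    return base * audience_boost

# A post that quietly collects 200 likes earns 200 points, while a
# provocative post with fewer likes but many comments and reshares
# earns an order of magnitude more: the dynamic critics blamed for
# rewarding inflammatory content.
print(engagement_score({"like": 200}))                                    # 200.0
print(engagement_score({"like": 50, "comment": 40, "reshare": 20}, 1.5))  # 2775.0
```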
Posts designed to provoke an angry response drew the greatest user engagement, a fact that a host of political actors and publications soon exploited by posting increasingly sensationalized and infuriating content.43 Despite the efforts of Facebook employees like Sophie Zhang and Frances Haugen, who alerted executives to the escalating use of fake pages and misinformation by businesses, individuals, and political actors seeking to manipulate users’ real-world actions, Facebook leaders dissembled and did little to rein in such “coordinated inauthentic behavior.” Reports of orchestrated misinformation campaigns and fake pages designed to shape a particular nation’s politics or to demonize ethnic minorities like Myanmar’s Rohingya or Ethiopia’s Tigrayans flooded in from Facebook monitors all over the world. So did accounts of human traffickers and drug cartels using the app for criminal purposes. But the company intentionally segregates the employees responsible for enforcing its policies from those charged with fostering positive relationships with foreign governments. Facebook’s dearth of staff and artificial intelligence systems fluent in particular languages and dialects compounds the challenges of identifying and removing content propagating illegal activities, abuse, and misinformation in developing nations. The company’s willingness to cooperate with governments that wish to suppress dissent, rather than risk losing access to nations with growing audience bases, is also well documented.44

But responses to the January 6, 2021 attack on the U.S. Capitol demonstrate the power tech companies can wield against violent and extremist content. In the immediate aftermath of the riot, YouTube, Reddit, and TikTok deleted material questioning the legitimacy of the 2020 elections. Cracking down on the “free speech” social media network Parler, which was rife with disinformation and extremism, Apple and Google stopped selling its app. Amazon Web Services brought Parler to a total halt by refusing to host the service. Parler sued and lost after a federal judge ruled that Amazon was within its rights in enforcing its policies against violent content.45 Hackers dealt Parler another humiliating blow when they exploited a security flaw in its software architecture to download every public post, picture, and video on the network and then uploaded the content to the Internet Archive.46

Twitter, which had exempted U.S. President Donald Trump from its acceptable content policies on the grounds that he was a public figure, permanently banned Trump for inciting violence, thus depriving him of the megaphone he had used multiple times a day to reach his 88 million followers. Facebook took similar steps, but only after years of vociferous internal debates that began while Trump was campaigning for president. As early as December 2015, after Trump posted a video calling for a ban on Muslims entering the United States, some Facebook employees called for the removal of what they perceived as hate speech. They were overridden by others who declared the video “newsworthy.” Throughout his presidency, Trump pushed the boundaries of what social media companies considered content too inflammatory to permit. Time and time again, Facebook and Twitter left Trump’s posts and tweets alone, only occasionally affixing a warning label to particularly inflammatory posts or opting to hide them.
With the onset of the coronavirus pandemic, the companies finally took a firmer stance on misinformation about COVID-19 and adopted stronger policies against conspiracy theories and extremist groups.47 But it is clear that debates over freedom of speech, regulation, and holding accountable those whose online behavior contributes to real-world harms will continue to rage.

Killer Robots

On January 25, 1979, Robert Williams, an assembly line worker at the Ford Motor plant in Flat Rock, Michigan, died instantly when an industrial robot gathering parts alongside him slammed its arm into his head. The accident gave Williams the dubious distinction of becoming the first person in the world known to be killed by a robot. His family sued and won $10 million in damages resulting from a lack of physical safeguards at the plant.48 The case did little to stem the widespread adoption of robotic manufacturing and consumer products. Today, robots vacuum our houses, roam the surface of Mars, harvest fruit and vegetables, and provide critical medical services such as surgical assistance, dispensing prescriptions, disinfecting health care facilities, providing companionship, and aiding in physical rehabilitation. But with the ubiquity of robots in our daily lives come major ethical questions that we fail to address at our peril. In no realm are these issues more pressing than in warfare.

Since 2001, when the Central Intelligence Agency first began using drones in antiterrorist operations, the United States is estimated to have launched over 14,000 drone strikes in Pakistan, Yemen, Afghanistan, Somalia, and elsewhere, many of them resulting in civilian casualties.49 Initially, defense officials assumed that drone pilots would escape the trauma experienced by other warriors and did not define drone warfare as a form of combat service. The Pentagon was forced to reevaluate this position when drone pilots, many of whom fly hundreds of missions, reported symptoms of serious post-traumatic stress disorder (PTSD), including emotional distress, insomnia, substance abuse, and suicidal ideation. In some cases, the fact that drone pilots often tracked their targets for weeks and saw them interacting with their families and carrying on normal daily activities intensified their despair in ways that soldiers who killed an enemy in the heat of a single battle did not report.50

But whereas drones are still operated by humans acting on orders from other people, a new generation of weapons operates far more independently. Unlike remotely piloted drones, lethal autonomous weapons systems (given, apparently without irony, the acronym LAWS) can identify and strike a target without much human direction. LAWS do not experience PTSD. They do not get scared or tired or hungry. The moral implications of such killing machines are breathtaking. Will a machine be able to differentiate between a legitimate combatant and a child in a war zone? Will autonomous weapons equipped with facial recognition software be used to target specific groups or individuals for genocide or political assassination? Would indiscriminate deaths inflicted by driverless tanks, pilotless planes, and robotic infantries render war crimes prosecutions meaningless if no sentient being can be held accountable for atrocities? Does a nation become more willing to wage war when fears of putting one’s own troops in harm’s way are minimized? Conversely, could robotic weapons increase the global community’s willingness to intervene in humanitarian disasters?
These vexing questions help explain why the United Nations has yet to adopt a legal framework to regulate the use of the nearly autonomous robots that are already being used in combat. In Ukraine, Russia is allegedly deploying the Zala KYB drone, an unmanned aerial vehicle that can launch kamikaze strikes on manually programmed target coordinates, while Ukraine is operating semiautonomous Turkish-made drones. In March 2021, a report prepared for the United Nations Security Council claimed that Turkish-made Kargu-2 drones operating with no human guidance had attacked soldiers fleeing Libya’s civil war. Ozgur Guleryuz, the CEO of STM, the state-owned manufacturer of the drones, denied that the Kargu-2 had the capability to hunt down targets on its own and emphasized that no target was fired upon without prior human verification.51

These incidents have sparked renewed calls for a global ban on killer robots. In December 2021, experts gathered in Geneva to review the Convention on Certain Conventional Weapons (CCW), continuing eight years of previous discussions on whether to amend the agreement to ban or limit weapons that can exert lethal force without the direct engagement of a human. Advocates from human rights groups like the Campaign to Stop Killer Robots and a majority of the 125 signatories to the CCW called for a formal ban on autonomous robotic weapons, but several countries developing and deploying these machines, including the United States and Russia, blocked efforts to adopt any binding measures. The meeting ended in failure, leaving nations free to keep building LAWS with impunity for the foreseeable future.52

We are unquestionably at a frightening juncture where the digital revolution could end the human rights revolution. But it is not too late to make digital rights an integral element of the global human rights agenda and to institute legislative frameworks that guard privacy, preserve democratic freedoms, and prevent corporate actors from using personal data indiscriminately. Consumers can also demand that all tech companies offer features like Apple’s new Lockdown Mode that protect vulnerable users from targeted spyware like Pegasus.53 Advocacy organizations, academics, and activists are working together to promote public interest technology, a growing field committed to technologies that are created, implemented, and regulated in ways that are ethical and equitable.

Collective action and legal mechanisms can safeguard us from a technocratic, dystopian future. Ask the content moderators who successfully sued Facebook for failing to provide sufficient mental health support when they suffered PTSD as a result of repeated exposure to social media posts graphically depicting the worst of human behavior.54 In December 2021, Rohingya refugees sued Meta Platforms, the parent company of Facebook, for $150 billion, accusing it of fueling hate speech that contributed to the deaths and displacement of tens of thousands of Rohingya. Drawing on the massive trove of documents leaked to the press by former Facebook employee Frances Haugen, the suit alleges that Facebook knowingly failed to hire enough content moderators with the linguistic skills and political knowledge to understand the toxic political situation arising in Myanmar. Nor did Facebook close accounts or remove content that promoted genocide. The case has enormous implications for human rights and digital rights advocates everywhere.55
An unusual coalition of right-wing and left-wing U.S. politicians is calling for more transparency and regulation of big tech, while U.S. and European officials are searching for common ground on international reforms that can rein in the worst abuses of civil liberties in the digital world without trampling on the United States’ First Amendment. In the early stages of Russia’s invasion of Ukraine, masterful use of social media by Ukraine and its allies around the world helped to revitalize collective security among Western nations and mobilized millions to join efforts to help Ukraine defeat Russian expansionism. So there is cause for hope amid the terrifying technological realities we face.

But the bleak truth is that legislative solutions have limits and almost always lag far behind technological innovations. While the European Parliament’s recent passage of the Digital Services Act sets a very high standard for privacy protections and platform accountability, implementation of the sweeping legislation will be extremely challenging. Each European Union member nation will have to designate a Digital Services Coordinator who will work in concert with the European Commission and with counterparts across the EU. Enforcement of the complex transparency and data accessibility rules will require a huge staff of top-notch data scientists. Tech companies are expected to develop human rights impact assessments (HRIAs) that will have to be reviewed by EU regulators with deep understanding of human rights and strategies for mitigating harms related to digital technologies. The investment of time and capital required to make the Digital Services Act successful will be enormous, and it remains to be seen whether the legislation will have its desired impact or inspire other nations and international organizations to adopt similar measures.56

As scary as the future that we and Sophia the robot will confront may be, for now at least, humans can still take steps to ensure that justice, freedom, and privacy do not go the way of dial-up modems, Internet Explorer, and MySpace. We, not the machines, are still the arbiters of whether our digital future is one where human rights survive.

Footnotes

1 Cleeve R. Wootson, Jr., “Saudi Arabia, Which Denies Women Equal Rights, Makes a Robot a Citizen,” Washington Post, October 29, 2017, Last accessed September 20, 2022, https://www.washingtonpost.com/news/innovations/wp/2017/10/29/saudi-arabia-which-denies-women-equal-rights-makes-a-robot-a-citizen/; Robert David Hart, “Saudi Arabia’s Robot Citizen is Eroding Human Rights,” Quartz, February 18, 2018, Last accessed September 20, 2022, https://qz.com/1205017/saudi-arabias-robot-citizen-is-eroding-human-rights/.

2 Alex Hern, “Give Robots ‘Personhood’ Status, EU Committee Argues,” The Guardian, January 12, 2017, Last accessed September 20, 2022, https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues.

3 Janosch Delcker, “Europe Divided over Robot ‘Personhood,’” Politico, April 11, 2018, Last accessed September 20, 2022, https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/; George Dvorsky, “Experts Sign Open Letter Slamming Europe’s Proposal to Recognize Robots as Legal Persons,” Gizmodo, April 13, 2018, Last accessed September 20, 2022, https://gizmodo.com/experts-sign-open-letter-slamming-europe-s-proposal-to-1825240003; George Dvorsky, “When Will Robots Deserve Human Rights?” Gizmodo, June 2, 2017, Last accessed September 20, 2022, https://gizmodo.com/when-will-robots-deserve-human-rights-1794599063.
4 Laura A. Belmonte, Selling the American Way: Propaganda, National Identity, and the Cold War (Philadelphia, PA, 2008).

5 Laura A. Belmonte, The International LGBT Rights Movement: A History (London, 2020).

6 Walter LaFeber, “Technology and U.S. Foreign Relations,” Diplomatic History 24, no. 1 (2000): 1–19; Jonathan Reed Winkler, “Technology and the Environment in the Global Economy,” in America in the World: The Historiography of American Foreign Relations since 1941, eds. Frank Costigliola and Michael J. Hogan (Cambridge, 2013), 284–306; Katherine C. Epstein, Torpedo: Inventing the Military-Industrial Complex in the United States and Great Britain (Cambridge, MA, 2014).

7 Barbara Keys, “The Telephone and Its Uses in 1980s U.S. Activism,” Journal of Interdisciplinary History 48, no. 4 (2018): 485–509.

8 Examples include: Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge, MA, 2010); Sarah B. Snyder, From Selma to Moscow: How Human Rights Activists Transformed U.S. Foreign Policy (New York, 2018); Barbara Keys, Reclaiming American Virtue: The Human Rights Revolution of the 1970s (Cambridge, MA, 2014); Elizabeth Borgwardt, A New Deal for the World: America’s Vision for Human Rights (Cambridge, MA, 2007); and Akira Iriye, Petra Goedde, and William I. Hitchcock, eds., The Human Rights Revolution: An International History (New York, 2012).

9 William F. Schulz and Sushma Raman, The Coming Good Society: Why New Realities Demand New Rights (Cambridge, MA, 2020), 81–82.

10 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, 2016); Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York, 2019).

11 Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York, 2018).

12 Joy Buolamwini, “Artificial Intelligence Has a Problem with Racial and Gender Bias: Here’s How to Solve It,” Time, February 7, 2019, Last accessed September 20, 2022, https://time.com/5520558/artificial-intelligence-racial-gender-bias/.

13 In 2018, Joy Buolamwini and Timnit Gebru co-authored a landmark study demonstrating that facial recognition software manufactured by IBM and Microsoft had much higher error rates for darker-skinned people, a reality that contributes to disproportionate rates of false arrests. Julia Carrie Wong, “More than 1,200 Google Workers Condemn Firing of AI Scientist Timnit Gebru,” The Guardian, December 4, 2020, Last accessed September 20, 2022, https://www.theguardian.com/technology/2020/dec/04/timnit-gebru-google-ai-fired-diversity-ethics.

14 Sabrina McCubbin, “Summary: The Supreme Court Rules in Carpenter v. United States,” Lawfare, June 22, 2018, https://www.lawfareblog.com/summary-supreme-court-rules-carpenter-v-united-states.

15 Darrell M. West, “How Employers Use Technology to Surveil Employees,” Brookings TechTank, January 5, 2021, Last accessed September 20, 2022, https://www.brookings.edu/blog/techtank/2021/01/05/how-employers-use-technology-to-surveil-employees/.

16 Danielle Abril and Drew Harwell, “Keystroke Tracking, Screenshots, and Facial Recognition: The Boss May Be Watching Long After the Pandemic Ends,” Washington Post, September 24, 2021, Last accessed September 20, 2022, https://www.washingtonpost.com/technology/2021/09/24/remote-work-from-home-surveillance/.
17 Amy Gajda, “How Dobbs Threatens to Torpedo Privacy Rights in the US,” Wired, June 29, 2022, Last accessed September 20, 2022, https://www.wired.com/story/scotus-dobbs-roe-privacy-abortion/; Andrew Couts, “Security News This Week: The Post-Roe Privacy Nightmare Has Arrived,” Wired, June 25, 2022, Last accessed September 20, 2022, https://www.wired.com/story/post-roe-privacy-russia-ukraine-hacks/; Abigail Abrams and Vera Bergengruen, “Anti-Abortion Pregnancy Centers Are Collecting Troves of Data That Could Be Weaponized Against Women,” Time, June 22, 2022, Last accessed September 20, 2022, https://time.com/6189528/anti-abortion-pregnancy-centers-collect-data-investigation/; Victoria Elliott, “Meta Was Restricting Abortion Content All Along,” Wired, July 1, 2022, Last accessed September 20, 2022, https://www.wired.com/story/meta-abortion-content-restriction/.

18 Stephanie Kirchgaessner, Paul Lewis, David Pegg, Sam Cutler, Nina Lakhani, and Michael Safi, “Revealed: Leak Uncovers Global Abuse of Cyber-Surveillance Weapon,” The Guardian, July 18, 2021, Last accessed September 20, 2022, https://www.theguardian.com/world/2021/jul/18/revealed-leak-uncovers-global-abuse-of-cyber-surveillance-weapon-nso-group-pegasus; Michael Birnbaum, Andras Petho, and Jean-Baptiste Chastand, “In Orban’s Hungary, Spyware Was Used to Monitor Journalists and Others Who Might Challenge The Government,” Washington Post, July 19, 2021, Last accessed September 20, 2022, https://www.washingtonpost.com/world/2021/07/18/hungary-orban-spyware/.

19 Daniel Boffey, “EU to Launch Rare Inquiry into Pegasus Spyware Scandal,” The Guardian, February 10, 2022, Last accessed September 20, 2022, https://www.theguardian.com/news/2022/feb/10/eu-close-to-launching-committee-of-inquiry-into-pegasus-spyware.

20 Ronald Niezen, #HumanRights: The Technologies and Politics of Justice Claims in Practice (Palo Alto, CA, 2020), 4–5.

21 Niezen, #HumanRights, 57.

22 Facebook became available in Arabic in 2009. See Ian Black and Jemima Kiss, “Facebook Launches Arabic Version,” The Guardian, March 10, 2009, Last accessed September 20, 2022, https://www.theguardian.com/media/2009/mar/10/facebook-launches-arabic-version.

23 Kelvin Chan, “Digital Siege: Internet Cuts Become Favored Tool of Regimes,” AP News, February 10, 2021, Last accessed September 20, 2022, https://apnews.com/article/world-news-uganda-myanmar-army-media-2395b6db25272a95bdc6d1b2238e5e9e.

24 For a cogent appraisal of these trends, see Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (New Haven, CT, 2017).

25 Shaun Walker, “‘Nobody Can Block It’: How the Telegram App Fuels Global Protest,” The Guardian, November 7, 2020, Last accessed September 20, 2022, https://www.theguardian.com/media/2020/nov/07/nobody-can-block-it-how-telegram-app-fuels-global-protest.

26 Emanuella Grinberg, “Facebook ‘Real Name’ Policy Stirs Questions Around Identity,” CNN, September 18, 2014, Last accessed September 20, 2022, https://www.cnn.com/2014/09/16/living/facebook-name-policy/index.html.

27 Dave Lee, “Facebook Amends ‘Real Name’ Policy After Protests,” BBC, December 15, 2015, Last accessed September 20, 2022, https://www.bbc.com/news/technology-35109045.

28 Eva Galperin and Wafa Ben Hassine, “Changes to Facebook’s ‘Real Names’ Policy Still Don’t Fix the Problem,” Electronic Frontier Foundation, December 18, 2015, Last accessed September 20, 2022, https://www.eff.org/deeplinks/2015/12/changes-facebooks-real-names-policy-still-dont-fix-problem.
29 Savannah Wallace, “In the Developing World, Facebook Is the Internet,” Medium, September 6, 2020, Last accessed September 20, 2022, https://medium.com/swlh/in-the-developing-world-facebook-is-the-internet-14075bfd8c5e.

30 Net neutrality is the belief that all internet content should be equally accessible and that no internet service provider should charge different rates or set data speeds based upon the type of content, service, or device being used. See Klint Finley, “The WIRED Guide to Net Neutrality,” Wired, May 5, 2020, Last accessed September 20, 2022, https://www.wired.com/story/guide-net-neutrality/.

31 Olivia Solon, “‘It’s Digital Colonialism’: How Facebook’s Free Internet Service Has Failed Its Users,” The Guardian, July 27, 2017, Last accessed September 20, 2022, https://www.theguardian.com/technology/2017/jul/27/facebook-free-basics-developing-markets.

32 Vindu Goel and Mike Isaac, “Facebook Loses a Battle in India Over Its Free Basics Program,” New York Times, February 8, 2016, https://www.nytimes.com/2016/02/09/business/facebook-loses-a-battle-in-india-over-its-free-basics-program.html. For a detailed account of Facebook’s failed campaign to launch Free Basics in India, see Rahul Bhatia, “The Inside Story of Facebook’s Biggest Setback,” The Guardian, May 12, 2016, Last accessed September 20, 2022, https://www.theguardian.com/technology/2016/may/12/facebook-free-basics-india-zuckerberg.

33 Dan Tynan, “How Facebook Powers Money Machines for Obscure Political ‘News’ Sites,” The Guardian, August 24, 2016, Last accessed September 20, 2022, https://www.theguardian.com/technology/2016/aug/24/facebook-clickbait-political-news-sites-us-election-trump; Craig Silverman, “How Teens in The Balkans Are Duping Trump Supporters With Fake News,” Buzzfeed, November 3, 2016, Last accessed September 20, 2022, https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo.

34 Alexander Smith and Vladimir Banic, “Fake News: How a Partying Macedonian Teen Earns Thousands Publishing Lies,” NBC News, December 8, 2016, Last accessed September 20, 2022, https://www.nbcnews.com/news/world/fake-news-how-partying-macedonian-teen-earns-thousands-publishing-lies-n692451.

35 Tim Stelloh, “‘Pizzagate’ Gunman Surrendered After Finding No Evidence of Fake Conspiracy: Court Docs,” NBC News, December 5, 2016, Last accessed September 20, 2022, https://www.nbcnews.com/news/us-news/pizzagate-gunman-surrendered-after-finding-no-evidence-fake-conspiracy-court-n692321.

36 For the full two-volume report, see U.S. Department of Justice, Report on the Investigation Into Russian Interference In The 2016 Presidential Election, March 2019, Last accessed September 20, 2022, https://www.justice.gov/archives/sco/file/1373816/download.

37 Abigail Adams, “Here’s What We Know So Far About Russia’s 2016 Meddling,” Time, April 18, 2019, Last accessed September 20, 2022, https://time.com/5565991/russia-influence-2016-election/.

38 David Shepardson and Warren Strobel, “U.S. Accuses Russian Spies of 2016 Election Hacking as Summit Looms,” Reuters, July 13, 2018, Last accessed September 20, 2022, https://www.reuters.com/article/us-usa-trump-russia-indictments/u-s-accuses-russian-spies-of-2016-election-hacking-as-summit-looms-idUSKBN1K32DJ.

39 “Russian Trolls’ Chief Target Was ‘Black US Voters’ in 2016,” BBC, October 9, 2019, Last accessed September 20, 2022, https://www.bbc.com/news/technology-49987657.
40 Nicholas Confessore, “Cambridge Analytica and Facebook: The Scandal and The Fallout So Far,” New York Times, April 4, 2018, Last accessed September 20, 2022, https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html; Alvin Chang, “The Facebook and Cambridge Analytica Scandal, Explained with a Simple Diagram,” Vox, May 2, 2018, Last accessed September 20, 2022, https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram; Julia Carrie Wong, “The Cambridge Analytica Scandal Changed the World – But It Didn’t Change Facebook,” The Guardian, March 18, 2019, Last accessed September 20, 2022, https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook.

41 Olivia Solon and Oliver Laughland, “Cambridge Analytica Closing After Facebook Data Harvesting Scandal,” The Guardian, May 2, 2018, Last accessed September 20, 2022, https://www.theguardian.com/uk-news/2018/may/02/cambridge-analytica-closing-down-after-facebook-row-reports-say; Jesse Witt and Alex Pasternack, “The Strange Afterlife of Cambridge Analytica and The Mysterious Fate of Its Data,” Fast Company, July 26, 2019, Last accessed September 20, 2022, https://www.fastcompany.com/90381366/the-mysterious-afterlife-of-cambridge-analytica-and-its-trove-of-data.

42 Cecilia Kang, “F.T.C. Approves Facebook Fine of About $5 Billion,” New York Times, July 12, 2019, Last accessed September 20, 2022, https://www.nytimes.com/2019/07/12/technology/facebook-ftc-fine.html.

43 Keach Hagey and Jeff Horwitz, “Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead,” Wall Street Journal, September 15, 2021, Last accessed September 20, 2022, https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215.

44 Julia Carrie Wong, “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” The Guardian, April 12, 2021, Last accessed September 20, 2022, https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang; Justin Scheck, Newley Purnell, and Jeff Horwitz, “Facebook Employees Flag Drug Cartels and Human Traffickers. The Company’s Response Is Weak, Documents Show,” Wall Street Journal, September 16, 2021, Last accessed September 20, 2022, https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953.

45 Emily Bazelon, “Why is Big Tech Policing Speech? Because the Government Isn’t,” New York Times Magazine, January 26, 2021, Last accessed September 20, 2022, https://www.nytimes.com/2021/01/26/magazine/free-speech-tech.html.
46 Andy Greenberg, “An Absurdly Basic Bug Let Anyone Grab All of Parler’s Data,” Wired, January 12, 2021, Last accessed September 20, 2022, https://www.wired.com/story/parler-hack-data-public-posts-images-video/.

47 Nitasha Tiku, Tony Romm, and Craig Timberg, “Twitter Bans Trump’s Account, Citing Risk of Further Violence,” Washington Post, January 8, 2021, Last accessed September 20, 2022, https://www.washingtonpost.com/technology/2021/01/08/twitter-trump-dorsey/; Craig Timberg, Elizabeth Dwoskin, and Reed Albergotti, “Inside Facebook, Jan. 6 Violence Fueled Anger, Regret Over Missed Warning Signs,” Washington Post, October 22, 2021, Last accessed September 20, 2022, https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/.

48 David Kravets, “January 25, 1979: Robot Kills Human,” Wired, January 25, 2010, Last accessed September 20, 2022, https://www.wired.com/2010/01/0125robot-kills-worker/.

49 “Drone Warfare,” The Bureau of Investigative Journalism, Last accessed September 20, 2022, https://www.thebureauinvestigates.com/projects/drone-war.

50 Dave Philipps, “The Unseen Scars of Those Who Kill Via Remote Control,” New York Times, April 15, 2022, Last accessed September 20, 2022, https://www.nytimes.com/2022/04/15/us/drones-airstrikes-ptsd.html.

51 Robert F. Trager and Laura M. Luca, “Killer Robots Are Here—and We Need to Regulate Them,” Foreign Policy, May 11, 2022, Last accessed September 20, 2022, https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/; Sinan Tavsan, “Turkish Defense Company Says Drone Unable to Go Rogue in Libya,” Nikkei Asia, June 20, 2021, Last accessed September 20, 2022, https://asia.nikkei.com/Business/Aerospace-Defense/Turkish-defense-company-says-drone-unable-to-go-rogue-in-Libya.
52 Sometimes referred to as the Inhumane Weapons Convention, the Convention on Certain Conventional Weapons limits or prohibits “specific types of weapons that are considered to cause unnecessary or unjustifiable suffering or to affect civilians indiscriminately.” “US Rejects Calls for Regulating or Banning ‘Killer Robots,’” The Guardian, December 2, 2021, Last accessed September 20, 2022, https://www.theguardian.com/us-news/2021/dec/02/us-rejects-calls-regulating-banning-killer-robots; Adam Satariano, Nick Cumming-Bruce, and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html; James Dawes, “UN Fails To Agree on ‘Killer Robot’ Ban as Nations Pour Billions into Autonomous Weapons Research,” The Conversation, December 20, 2021, https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616.

53 Lily Hay Newman, “Apple’s Lockdown Mode Aims to Counter Spyware Threats,” Wired, July 8, 2022, Last accessed September 20, 2022, https://www.wired.com/story/digital-services-act-regulation/.

54 “Facebook to Pay $52m To Content Moderators over PTSD,” BBC, May 13, 2020, Last accessed September 20, 2022, https://www.bbc.com/news/technology-52642633.

55 Kelvin Chan, “Rohingya Sue Facebook for $150B, Alleging Role in Violence,” AP News, December 7, 2021, Last accessed September 20, 2022, https://www.bbc.com/news/technology-52642633.

56 Sam Schechner and Kim Mackrael, “EU Lawmakers Approve Sweeping Digital Regulations,” Wall Street Journal, July 5, 2022, https://www.wsj.com/articles/eu-lawmakers-approve-sweeping-new-digital-regulations-11657040485; Asha Allen, “Europe’s Big Tech Law is Approved. Now Comes the Hard Part,” Wired, July 8, 2022, Last accessed September 20, 2022, https://www.wired.com/story/digital-services-act-regulation/.

Diplomatic History 47, no. 1 (2022): 1–18, doi:10.1093/dh/dhac079.

© The Author(s) 2022. Published by Oxford University Press on behalf of the Society for Historians of American Foreign Relations. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).