Abstract

The link between cyberspace and national security is often presented as an unquestionable and uncontested "truth." However, there is nothing natural or given about this link: It had to be forged, argued, and accepted in the (security) political process. This article explores the constitutive effects of different threat representations in the broader cyber-security discourse. In contrast to previous work on the topic, the focus is not solely on discursive practices by "visible" elite actors, but also on how a variety of less visible actors inside and outside of government shape a reservoir of acceptable threat representations that influence everyday practices of cyber-security. Such an approach allows for a more nuanced understanding of the diverse ways in which cyber-security is presented as a national security issue and of the consequences of particular representations.

Matters of cyber-(in)-security—though not always under this name—have been an issue in security politics for at least three decades (Dunn Cavelty 2008). As a result, the link between national security and cyberspace has become an uncontested, unshakable "truth" with budgetary and political consequences. However, this link is far more diverse than is often assumed in the literature. The cyber-security discourse encompasses more than one threat form, ranging from computer viruses and other malicious software to cyber-crime activity to the categories of cyber-terror and cyber-war. Each subissue is represented and treated differently in the political process and at different points in time. Consequently, cyber-security policies contain an amalgam of countermeasures, tailored to meet different, and at times conflicting, security needs. How these heterogeneous political manifestations are linked to different threat representations—ways to depict what counts as a threat or risk—is the focus of this article.

Importantly, and in contrast to most other research on the subject, this paper focuses on cyber-security comprehensively, rather than looking at discourse in subcategories like cyber-crime, cyber-terrorism, or cyber-war. I argue that only a broad understanding of cyber-security as a discursive practice by a multitude of actors inside and outside of government reveals the variety of choices available to political actors at all times and enables us to show what the consequences of such choices are.

The paper has four parts. First, the theoretical approach, a cross-over between the linguistic and the sociological (or practice-based) approaches of Securitization Theory and discourse theory more broadly, is outlined. Second, the language used to talk about cyberspace is described, as it forms the basis for how security and insecurity in this realm are conceptualized. Third, three key threat representations are identified: the portrayal of malware using a biological register (viruses/worms); the description of disembodied miscreants (hackers); and the complex interrelationships between critical infrastructures and the cyber-substructure, with a subsequent emphasis on vulnerability. Fourth, the paper links these threat representations to cyber-security policies and practices (as an indication of their constitutive effects). In conclusion, I reflect on what the choice of specific threat representations and the type of practices they inspire signify for the present and the future of cyber-security.
Discursive Practices "Below the Radar"

To date, political science literature on cyber-security—and closely related subissues such as cyber-crime, cyber-terrorism, or cyber-war—remains policy-oriented and does not communicate with more general international relations theory, not even neo-realism (Eriksson and Giacomello 2007). There is a notable exception: A limited number of scholars have used frameworks derived from Securitization Theory (Buzan, Wæver, and De Wilde 1998) to establish how different actors in politics have tried to argue the link between the cyber-dimension and national security (cf. Bendrath 2001, 2003; Eriksson 2001; Dunn Cavelty 2008; Hansen and Nissenbaum 2009; Lawson 2011, 2012).

First-generation Securitization Theory treats security as a discursive (intersubjective and performative) practice, in which particular actors need to successfully proclaim something a security problem. In other words, links between so-called referent objects and the need to use security political means to secure or protect them need to be forged, argued, and accepted in the political process. Like many other discourse-theoretical approaches, the linguistic variant of Securitization Theory focuses on "politically salient speech acts" by "visible" political figures that can be approved or disapproved of by the general public (Huysmans 2011:371). Therefore, the emphasis is almost always on official statements by "the heads of states, governments, senior civil servants, high ranked military, heads of international institutions" (Hansen 2006:64). Such a focus reveals the constitutive effects the discursive practices of "capable actors" can have in (world) politics (Weldes and Saco 1996; Campbell 1998). However, approaches focusing on elite expressions neglect how these discursive practices are facilitated or thwarted by the preceding and preparatory discursive practices of actors that are not as easily visible.

This paper starts from the premise that within any given discourse, various actors seek to assert themselves and their pattern of argumentation and to establish a dominant discourse pattern. Arguably, social contests over the legitimate definition of reality do not only take place in the open political arena: Depending on the issue at hand, state and nonstate actors—including specialized bureaucratic units, consultants, or technical experts—have the capacity to establish "the truth" about certain threats (Huysmans 2006:72; Léonard and Kaunert 2011; also Kingdon 2003), thereby creating a "reservoir" of accepted threat representations on which visible political actors (must) draw. The argument this paper makes is that in order to understand the fundamentals of everyday security political processes, the interplay between political discourse and constitutive effects must also be studied in the realm of "little security nothings" (Huysmans 2011). The emergence of cyberspace is an exceptional opportunity to look at how such a reservoir of threat representations was formed.

Political Images of Cyberspace

Cyber-security is a type of security that unfolds in and through cyberspace; the making and practice of cyber-security is both constrained and enabled by this environment. Therefore, understanding the basic language used to talk about the digital realm is the first necessary step in this analysis, since cyber-security cannot be imagined without drawing on the language used to describe the environment in which it operates.
"Cyberspace" is one of the debate's pertinent neologisms, a portmanteau word combining "cybernetics" and "space." The term was coined by a cyber-punk novelist (William Gibson, who called it "a consensual hallucination" [1984:67]) and then imported into the political realm (and changed in the process) by John Perry Barlow, a prominent cyber-libertarian and activist. What is known today as "Barlovian cyberspace" is an inherently political concept: When Barlow announced the formation of the Electronic Frontier Foundation in 1990 (Barlow 1990), the space metaphor made sense and was useful for his purposes. First, it supported, and at the same time added a high-tech flavor to, the basic intuition that the interconnection of computers brings forth a sort of new "place." Conceiving of cyberspace as a place allows for different notions of control and domination over the virtual lands. Second, the place metaphor was convenient for establishing the image of the "Western frontier" together with the name "Electronic Frontier." It suggests an unexplored land, freedom from the legal and social constraints associated with the civilized East (Yen 2003), and opportunities in line with the cyber-libertarian agenda of minimal Internet regulation and state involvement (Barlow 1996). The self-identification of the cyber-community as digital "pioneers" further helped to solidify the image of "good" cowboys inhabiting the uncharted land (Mihalache 2002).

There are two basic ways in which cyberspace as place is conceptualized and defined: The first model excludes (physical and other) infrastructures from its definitions, whereas the second includes them to various degrees. In the first, cyberspace emerges as a space between the hardware components of computer networks, where interaction happens (Sterling 1993), a place that is fundamentally different from reality, as "the new home of Mind," "a world that is both everywhere and nowhere, but it is not where our bodies live" (Barlow 1996). The second model takes into account different layers and abstractions of information riding on a physical layer of hardware (cf. Libicki 2009:12–13). Here, cyberspace is seen as comprising both a material and a virtual realm; it is a "space of things and ideas, structure and content" (Deibert and Rohozinski 2010:16). As an example of a definition situated at the extreme end of that spectrum, the cyberspace definition of the US Department of Defense refers almost exclusively to the (hardware) technology component, although software and data may be inferred from the wording: "a global domain within the information environment consisting of the interdependent network of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers" (Department of Defense 2010:77).

A different metaphor for cyberspace, which is less widespread than the place/frontier conception, uses the image of the "ecosystem" to describe the cyber-realm as a set of network technologies and network technology customers. In contrast to the simple spatiality found in the Western frontier metaphor, cyberspace as an ecosystem comes with images of organic evolution, interconnectedness, and complexity (Lapointe 2011). Ecosystems are habitats for a variety of different species that co-exist, influence each other, and are affected by a variety of external forces. From this point of view, social and technological forces are symbiotic.
The ecosystem metaphor also illuminates the realm's ability to accommodate change in a more or less automatic way. Under this concept, cyberspace is defined as the fusion of all communication networks, databases, and sources of information into a vast, tangled, and diverse blanket of electronic interchange, a "bioelectronic environment that is literally universal" (Dyson, Gilder, Keyworth and Toffler 1996) or one that exists through the (symbiotic) interaction of different social actors (DHS 2011:2). The way cyberspace is imagined and defined has consequences for the way any type of action or strategy is conceptualized (Betz and Stevens 2011:36). Cyber-threat representations, too, are influenced by the different place metaphors. As will be shown below, the frontier image in particular is influential in shaping the conception of an unruly and lawless place in need of order.

Cyber-Threat Representations: Creating and Changing "the Reservoir"

Cyber-security as understood in this paper is a combination of linguistic and non-linguistic discursive practices from many different "communities" of actors. To systematize threat representations, this diversity needs to be mapped. While the common topic in all communities is the security of computers and computer networks, the communities differ most in which higher-level issues they regard as being connected to or influenced by that security. In other words, the biggest difference between the communities is the "referent object" to which they pay particular attention. Taking these different referent objects in the broader discourse as the basis for the mapping, four clusters can be formed: In each cluster, particular groups of actors shape and drive the discourse, and there are specific threats to the referent objects they are concerned about (Table 1).
Table 1. Mapping the Field of Cyber-security

Cluster I. Main actors: Hacking subculture; computer (security) experts. Referent object: Computers; computer networks. Threats: Malware; network disruptions; hackers (all kinds).

Cluster II. Main actors: Business actors; anti-virus industry; law enforcement; intelligence community. Referent object: Private sector (business networks); classified information (government networks). Threats: Advanced persistent threats (malware); cyber-criminals (nonstate); cyber-spies (state).

Cluster III. Main actors: Civil defense/homeland security. Referent object: Critical (information) infrastructures; society (particularly its "functioning"). Threats: Disruptions in critical infrastructures; cascading effects; cyber-terrorists (nonstate); cyber-commands (state).

Cluster IV. Main actors: Military. Referent object: Networked armed forces (military networks); nation/state. Threats: (Catastrophic) attacks on critical infrastructures; cyber-terrorists (nonstate); cyber-spies (state); cyber-commands (state).

Table 1 reveals considerable overlap with regard to the threats discussed in each cluster.
A combination of the most similar types leads to three clusters of threat types: The first is technical in nature, focusing on malware (a portmanteau word combining "malicious" and "software"). The second is about socio-political threats, mainly human "wrongdoers" in a variety of shapes. The third is focused on human–machine interactions (Table 2).

Table 2. Three Threat Representations

Technological cluster. Threats: Malware; network disruptions; advanced persistent threats (malware). Threat representation: Virus; intruders; weapons.

Socio-political cluster. Threats: Hackers (all kinds); cyber-criminals (nonstate); cyber-spies (state); cyber-terrorists (nonstate); cyber-commands (state). Threat representation: Lawlessness; anonymity.

Human–machine cluster. Threats: Complexity; disruptions in critical infrastructures; cascading effects; (catastrophic) attacks on critical infrastructures. Threat representation: Vulnerability; unknowability; inevitability.

When looking more closely at these threat types, particular ways of talking about them become obvious: These "ways of talking" are the threat representations this paper is interested in. The technical domain is particularly rich in biological, and particularly virus-related, metaphors. The second cluster capitalizes on the lawlessness of cyberspace (or the Western Frontier) and mainly revolves around shady, invisible, but powerful foes. The third is focused on the complexity of the computer infrastructure and the societal vulnerabilities created by our dependence on it. These three threat types with their particular threat representations, including their origins and evolution, are detailed below in three subsections. All three influence countermeasures and form "the reservoir" on which visible political actors draw, as will be shown in the next section.

Biologizing Technology: Viruses, Worms, and Other Bugs

The first threat representation is about "digital accidents" (Sampson 2007) in the form of malware and the biological register employed for their depiction.
The concept of the information virus was, like cyberspace, coined in sci-fi literature, but it took a computer scientist rather than an activist to popularize it (Parikka 2007). Fred Cohen's experiments with self-replicating mini-programs in the 1980s mark an important milestone in the conceptualization of software as risk. His description of the (substantial) danger stemming from computer viruses uses the image of a biological killer virus:

As an analogy to a computer virus, consider a biological disease that is 100% infectious, spreads whenever animals communicate, kills all infected animals instantly at a given moment, and has no detectable side effects until that moment. [...] If a computer virus of this type could spread throughout the computers of the world, it would [...] wreak havoc on modern government, financial, business, and academic institutions. [Cohen 1987]

When home computers, and with them viral metaphors, became more widespread in the 1980s, they were vivid and effective ways of explaining to non-technical experts how malware works and were actively used as such by computer specialists. However, a real-world incident and a lot of media attention were needed to instill the imagery of (digital) viruses in the public mind. The "Morris Worm" of 1988 was such an incident: The worm used so many system resources that the attacked computers could no longer function, and large parts of the early Internet went down (Parikka 2005). Thereafter, the image of computers as the epitome of control, reliability, efficiency, and order was transformed into an image of computers as threatened by the unexpected, though inevitable, danger of rogue, rampant programs.

Professional and popular discussions of computer viruses figured computer systems as self-contained bodies that must be protected from outside threat. These discussions mainly fed on anxieties about sexual contamination in populations, particularly AIDS, as expressed in statements like "Browsing the Internet without protection is just plain foolish!," or in calling behavioral computer rules "safe hex" practices. Computer security rhetoric about compromised networks also employs language suggestive of that used to describe the bodies of nation-states under military threat (Lupton 1994). Such language describes viruses using images of foreignness, illegality, and otherness. The biological description of viruses as key "intruder technology" is militarized: A virus consists of self-replicating code and a "payload." The former is like the propulsion unit of a missile; the latter is the warhead it delivers (Helmreich 2000:473).

The use of science-fiction terminology as the main source of cyber-threat representations in the technological domain seems an inevitable consequence of both the closeness of the computer community to the sci-fi subculture and a lack of alternatives. When the Internet and computer networks began to spread, there were few literary realms other than sci-fi, with its fascination for outer space and alien life forms, that could have helped policymakers and the public learn how to cope with these novelties. Given the special place viruses have in history as one of the scourges of mankind, fear of infectious disease, virtual or real, is deeply ingrained in the human psyche, so that employing viral metaphors for things we are scared of, especially "known unknowns," seems to come naturally to us. The parallels between "real" biological viruses (and the discourses about them) and their digital variants are striking.
Not only have biological metaphors directly inspired technical innovation (like genetic algorithms or evolutionary programming), but biological models have also led creative individuals to new and more disruptive ways of programming malware (like polymorphic code that mutates). Also, biology occasionally looks to computer viruses to learn about viral and societal behavior, as in the case of the "Corrupted Blood" incident in "World of Warcraft" (Balicer 2005). This virtual plague, which was due to a programming error, "mirrored real-world epidemics in numerous ways: It originated in a remote, uninhabited region and was carried by travelers to urban centers; hosts were both human and animal, such as with avian flu; it was spread by close spatial contact" (Orland 2008).

Most importantly, however, when looking at the evolution of the technical threat representations, AIDS has stopped being the prime health concern in the Western world: Our epidemic fears today revolve around viruses that jump boundaries, especially those that overcome species barriers (zoonoses), like Swine Flu or Ebola. In 2010, the computer world had its own barrier-jumping incident, when a worm known as Stuxnet overcame the barrier between the virtual and the corporeal worlds by having a "real" (as opposed to virtual) effect. Stuxnet was discovered in June 2010 and has been called "[O]ne of the great technical blockbusters in malware history" (Gross 2011) due to its complexity and sophistication. While it was initially impossible to know for certain who was behind this piece of code, though many suspected one or several state actors (Farwell and Rohozinski 2011), it was revealed in mid-2012 that Stuxnet was part of a US and Israeli intelligence operation, programmed and released to sabotage the Iranian nuclear program. For many observers, Stuxnet as a "digital first strike" marks the beginning of an age of (unrestrained) cyber-war between states. In the post-Stuxnet world, viruses are no longer like the common flu; they have killer qualities. Like the virus bred in the bio-terrorists' laboratory, the modern computer virus is increasingly conceptualized as a weapon, aimed at a specific target. And where there is a weapon, there is malicious intent, which is where the second type of threat representation comes in.

Anonymous Data Wizards in Lawless Space

Socio-political threats and their representations do not thrive on metaphors much. Rather, the threat representations in this cluster are shaped by the material realities of computer networks and by the malicious activities taking place in or through them that manifest as "digital accidents." New categories of threats were formed in government circles and think-tanks by linking the prefix "cyber-" to established and known threats to security, thus creating terms such as cyber-vandalism, cyber-crime, cyber-espionage, cyber-terror, or cyber-war (Denning 2012). "Old" forms of deviant behavior become "new" as they are imbued with a sense of "through the use of a computer" or "related to cyberspace." The exact origin of these terms is sometimes hard to fathom, and tracking down the provenance of individual terms would be a lengthy undertaking; what seems most relevant in the context of this article is that behind all of these categories lies, in essence, the archetype/stereotype of the "hacker": individuals with technical superpowers. In the early stages of the discourse, computer hackers were depicted as highly skilled (male) youths (Ross 1991).
Once again, it was popular culture that was at the forefront of shaping the simplified image of "the hacker." The movie War Games (1983) in particular is regarded as crucial not only for giving substance to the hacker culture, but also for exposing the general public to the idea of computer hacking for the first time. In this film, a young computer whizz kid sees an advert for online war games and starts trying to hack into the company's server. When he finally gets access, he starts to play a simulated game called "global thermonuclear war." Unfortunately, he has hacked into the military's simulation computer at NORAD, which starts to act out a response to an attack from Russia. In the end, World War III is barely averted. The popular conception of the hacker as an adolescent boy, hunched over his computer and posing a latent but severe threat to national security, was both reflected in and popularized by this movie, and supported at the same time by real-world hacking incidents.

Very soon thereafter, computer hackers were increasingly branded as criminals in government circles, not least because computer break-ins seemed to become more widespread and received a lot of media attention. In the 1980s, growing parts of society in the United States had already become dependent on computing for business practices and other basic functions. Tampering with computers suddenly meant potentially endangering people's careers and property, and some even said their lives (Spafford 1989). Not surprisingly, the development of legal tools to prosecute unauthorized entry into computer systems (like the Computer Fraud and Abuse Act of 1986 in the United States) coincided with the first serious network incidents. Furthermore, the new "authoritative voice" in the field, the anti-virus industry, invested a great deal of resources into spreading public information about the danger from hackers (Skibell 2002).

However, the evolution of the hacker image did not stop at criminalization. In general, the debate moved away from hacker criminals to terrorist hackers after 9/11 (Bendrath, Eriksson, and Giacomello 2007), and then, in parallel to the development shown in the technological cluster, on to highly professional state-sponsored cyber-mercenaries, able to develop highly effective cyber-weapons. Such conceptions are supported by reports from anti-virus companies, which describe the main threat as one from increasingly organized professionals (cf. Panda Security 2010). Over the years, this discourse has become particularly focused on so-called advanced persistent threats, a cyber-attack category that connotes an attack with a high degree of sophistication and stealthiness over a prolonged duration of time. The attack objectives typically extend beyond immediate financial gain, so that states as instigators of cyber-misdemeanors, currently mainly in the form of cyber-espionage, are the main focus of attention.

The lawlessness of cyberspace, in which hacking data wizards thrive and prosper, emerges as one of the prime threat representations in this debate. This space is depicted as anarchic and out of control, or rather, in need of new rules and control. Threats are represented as disembodied: Laws of nature, especially physics, do not apply in this special space/place; there are no linear distances, no bodies, no physical co-presences. Hackers are represented by symbols, and their actions unfold their effects through this space/place anywhere, instantaneously.
As a result, clever opponents can hide behind the anonymity provided by the technical realm. The attribution problem, which refers to the difficulty of identifying those initially responsible for a cyber-attack and their motivating factors, is key in solidifying this threat representation. Attacks and exploits that seemingly benefit states might well be the work of third-party actors operating under a variety of motivations. At the same time, the challenges of clearly identifying perpetrators give state actors convenient "plausible deniability and the ability to officially distance themselves from attacks" (Deibert and Rohozinski 2009:12). In addition, the cui bono logic ("to whose benefit?") is hardly ever sufficient for justifying political action.

There is an additional aspect of threat representation in this cluster: The "threat" has a voice of its own, so that the image of the "hacker" is not only a result of branding from the outside but also a result of how this threat represents itself. Over the past few years, the multifaceted activities of hacker collectives and activists such as Anonymous and LulzSec have added yet another twist to the cyber-threat story. In particular, the name "Anonymous" has become associated with highly mediatized computer break-ins and the subsequent release of sensitive information as part of a (new) form of global protest. Anonymous and other hacktivism groups cleverly use the same catchphrases that governments use to depict them as threats to governments and other high-ranking establishments. Beyond its slogan ("We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us"), its use of Guy Fawkes masks, and its orientation toward issues of censorship, information freedom, and anonymous speech, the movement resists straightforward definition and classification. Shrouded in deliberate mystery in a time obsessed with control and surveillance, it purports to have no leaders, no hierarchical structure, nor any geographical epicenter. This elusiveness as a graspable actor feeds right off and into the fears about digital foes sketched above. Ocean metaphors used by hacker groups to talk about the Internet, with clear allusions to piracy and buccaneering, reinforce fears in government circles that control over digital data is impossible, particularly once it has been stolen.

But it is not only nonstate actors that form their own image of strength in the digital realm; states do the same. Chinese authorities have stated repeatedly that they consider cyberspace a strategic domain and that they hope that mastering it will help redress the existing military imbalance between China and the United States more quickly. Recently, the United States, Israel, Britain, and Germany have made subtle shows of cyber-capabilities, likely designed to serve as deterrents. They are informing would-be attackers that they have the capability to deliver frightening reprisals both in the cyber-domain and, if necessary, kinetically: Most prominently, the White House's International Strategy for Cyberspace of 2011 states that the United States reserves the right to retaliate against hostile acts in cyberspace with military force. The fears and unpredictability created through the attribution problem let the cyber-debate run parallel to general strategic geopolitical debates, in which cyberspace has emerged as an additional domain for conflict.
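The attribution problem lends itself to a simple illustration. The following toy model is a minimal sketch, not a description of any real incident: all host names, operators, and the routing scenario are hypothetical. It shows why the observable evidence in an attack bounced through compromised third-party machines systematically points away from the instigator.

```python
# Toy model of the attribution problem: the defender's logs show only
# the last network hop, not the instigator. All hosts and operators
# below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    operator: str  # who runs the machine, not who controls the attack

def observed_origin(route: list[Host]) -> Host:
    """Return what the victim's logs show: the hop just before the victim."""
    return route[-2]

# Hypothetical route: true instigator -> compromised university server
# -> rented commercial proxy -> victim.
route = [
    Host("origin-host", "state-sponsored group"),   # true instigator
    Host("uni-mail-01", "compromised university"),
    Host("vps-paris", "commercial hosting firm"),
    Host("victim-scada", "utility company"),
]

hop = observed_origin(route)
print(f"Logs point to: {hop.name} ({hop.operator})")
print(f"True origin:   {route[0].name} ({route[0].operator})")
# The gap between these two lines is what cui bono reasoning has to
# fill, and it is also why "plausible deniability" comes almost for free.
```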
In military circles, cyberspace is depicted as a battlefield (or rather, a battlespace) on which a covert war can be fought or, depending on one's beliefs, is already being fought (Clarke 2010). Most references to a cyber(ed) battlefield are literal references to an actual battlefield. Military terms like cyber-weapons, cyber-capabilities, cyber-offense, cyber-defense, and cyber-deterrence suggest that cyberspace can and should be handled as an operational domain of warfare like land, sea, air, and outer space; and cyberspace has been officially recognized as a new domain in US military doctrine (Lynn 2010). While opinions about what a war in cyberspace will look like differ considerably among experts, a particular threat representation is responsible for how governments all over the world are rhetorically and practically arming up for cyber-battle.

Vulnerabilities of the Complex, Interdependent Body

At the heart of the third type of threat representation is the conceptualization of security threats as problems of (system) vulnerabilities: the degree to which a system is susceptible to, or unable to cope with, adverse effects. For the most part, this discourse was shaped by actors in the civil defense environment (Collier and Lakoff 2008). In it, particular systems and the functions they perform are singled out by the authorities as "critical" (in the sense of vital, crucial, essential) because their prolonged unavailability harbors the potential for major crisis, both political and social. Nowadays, these systems are thoroughly cybered: Information infrastructures are intermediaries between physical assets and physical infrastructure, and the material dimension of infrastructures has also expanded to encompass complex assemblages of knowledge. Bridged and interlinked by information pathways, the body of critical infrastructures is seen as interconnected, interdependent, and highly complex (cf. PCCIP 1997; Duit and Galaz 2008).

At the same time, the image of modern critical infrastructures has become one in which it is futile to try to separate the human from the technological. Technology is not simply a tool that makes life livable; technologies become constitutive of novel forms of "a complex subjectivity," characterized by an inseparable ensemble of material and human elements (Coward 2009:414). From this ecological understanding of subjectivity, a specific image of society emerges: Society becomes inseparable from critical infrastructure networks. In this way, systemic risks—understood as risks to critical infrastructure systems—are risks to the entire system of modern life and being.

The main threat representation in this cluster is centered on one's own vulnerability (stemming from complexity, interdependency, and dependency). The very connectedness of infrastructures "poses dangers in terms of the speed and ferocity with which perturbations within them can cascade into major disasters" (Dillon 2005:3). Advances in information and communication technology have thus augmented the potential for major disaster (or systemic risk) in critical infrastructures by vastly increasing the possibility for local risks to mutate into systemic risks. Critical infrastructure protection practitioners are particularly concerned about two types of system effects: cascades and surprise effects.
Cascade effects are those that produce a chain of events crossing geography, time, and various types of systems; surprise effects are unexpected events that arise out of interactions between agents and the negative and positive feedback loops produced through this interaction. Technological development is depicted as a force out of control, and the combination of technology and complexity conveys a sense of unmanageability, combining forces with an overall pessimistic perspective concerning accidents and the limited possibilities of preventing and coping with them (Perrow 1984). Furthermore, complexity manifests as an epistemological breakdown: Because all of the interacting parts move between each other at varying speeds, future system behavior becomes hard to determine and predict. However, the traditional risk assessment tools used to evaluate threats to critical infrastructures are grounded in strict, measurable assessments and predictive modeling (all of which is based on past behavior and experiences) and in linear cause-effect thinking (cf. DHS 2009). They inevitably fail their purpose when applied to the truly complex and the uncertain. Therefore, the threats to the system are depicted as unpredictable and in essence unknowable, which adds to the feeling of vulnerability.

This focus on vulnerabilities results in two noteworthy characteristics of the threat representation: First, the protective capacity of space is obliterated; there is no place that is safe from an attack. Second, the threat becomes quasi-universal, because it is now everywhere, creating a sense of "imminent but inexact catastrophe, lurking just beneath the surface of normal, technologised [...] everyday life" (Graham 2006:258). Threats or dangers are no longer perceived as coming exclusively from a certain direction—traditionally, the outside—but are system-inherent; the threat is a quasi-latent characteristic of the system, which feeds a permanent sense of vulnerability and inevitable disaster.

Linking Language/Practice to Practice/Language

This article started from the premise that different conceptualizations of cyberspace, and the different ways threats from and to that place are conceptualized, have an effect not only on how visible actors form their opinions about political actions, but also on how the more general countermeasures and basic practices are shaped. Indeed, there seem to be several straightforward links between the type of threat representation and the type of countermeasure in the field of cyber-security; these links are presented in the first subsection below. A second subsection then shows in what way the "reservoir" of threat representations appears in top-level government discourse.

Cyber-security Practices

In the technological threat-cluster, the anti-virus industry emerged as the most powerful actor alongside the hacker community, and with it came the techniques and programs for virus recognition, destruction, and prevention. Incidents like the Morris Worm were conceptualized as a "hygiene lesson" (Kocher 1989). Another common health-related metaphor compares the cyber-security mechanisms needed for the future to the human immune system (Taubes 1994:887).
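To make the immune-system metaphor concrete: in technical practice it roughly translates into anomaly detection, where a system learns a baseline of "normal" behavior and reacts to deviations, rather than blocking at a fixed border or matching known signatures. The following is a minimal sketch under hypothetical assumptions; the traffic figures and the three-sigma threshold are invented for illustration.

```python
# Immune-system-style defense in miniature: learn what "self" looks
# like, then flag "non-self" as it appears. All figures are hypothetical.

import statistics

def learn_baseline(samples):
    """Learn 'self', e.g. from observed daily outbound traffic (GB)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=3.0):
    """Flag 'non-self': anything more than k standard deviations out."""
    return abs(value - mean) > k * stdev

normal_days = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1]  # a week of normal traffic
mean, stdev = learn_baseline(normal_days)

print(is_anomalous(1.2, mean, stdev))   # False: ordinary variation
print(is_anomalous(14.0, mean, stdev))  # True: react (possible exfiltration)
```

Nothing in this sketch "knows" what an attack is; the system only knows itself and responds to deviation, which is precisely the logic of the second body image discussed next.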
The hygiene and immune-system metaphors point to two types of countermeasures, grounded in two different body images: The first sees (individual) computers as bodies, which comes with the ability to distinguish the inside from the outside and to block intruders and bad information flows (for example, with "firewalls," whereby the good/safe is on the inside and the bad/threatening always comes from the outside) (Lupton 1994). The second conceptualization is more ecological in nature and potentially linked to the image of computers as networked entities, in which borders and inside–outside do not play a crucial role. It suggests an immediate, unreflecting, constructive, self-regulating response to intrusions, which cannot be prevented but only handled. In addition, and closely linked to the second threat-cluster, conceptions of software as "weapons" in the hands of evildoers have led to discussions about how to control the military use of computer exploitation through arms control or multilateral behavioral norms: agreements that might pertain to the development, distribution, and deployment of cyber-weapons, or to their use (Libicki 2009).

The second threat-cluster is linked to the development and use of law enforcement tools. On the one hand, these are tools for punishing miscreants effectively after the fact. As mentioned, the development of IT or computer law was instrumental in creating the image of the digital thief by providing the necessary concepts and categories. The "cyberspace as place" metaphor has been widely adopted by courts as a way of understanding the new medium (Olson 2005), but also of making it fit legal concepts. On the other hand, the necessity of finding, but also catching, those responsible before legal tools can be employed has led to an almost insatiable data hunger on the part of law enforcement and intelligence agencies. This translates into attempts to push the boundaries of digital surveillance or data retention, or into gigantic projects to intercept, decipher, analyze, and store vast swaths of the world's communications, in which every single individual is a target (Deibert 2003; Bamford 2012). Here, too, there are two types of practices: The first seeks to solidify the bodiless miscreant by filling the information deficit about him/her (by forensic means or by data mining and digital surveillance). The second moves away from a reactive model of crime control toward a preventive one: a model of "distributed security," which uses criminal sanctions to require computer users and those who provide access to cyberspace to employ reasonable security measures to prevent the execution of cyber-crimes (Brenner and Clarke 2009).

In the third threat-cluster, there are again two types of practices, with different logics. The first is about protection. The essence of critical infrastructure protection as a security practice is to throw a "protective or preservative measure [...] around a valued subject or object" (Dillon and Lobo-Guerrero 2008:276). Before this security can unfold, the valued subject/object needs to be identified, localized, and given substance in the world. As practiced by public and private actors alike, the identification and designation of the protection-worthy follows the well-established steps of (technical) risk analysis techniques, which contain both an act of naming and an act of prioritizing on the basis of cost-effectiveness.
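The linear, cost-effectiveness-based logic of this prioritization can be illustrated with the standard textbook risk formula of annualized loss expectancy (ALE = single loss expectancy x annualized rate of occurrence). The sketch below is illustrative only, not drawn from any real assessment; all asset names and figures are hypothetical.

```python
# Naming and prioritizing by cost-effectiveness: rank assets by
# annualized loss expectancy (ALE = SLE * ARO), a standard risk
# assessment formula. All assets and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    single_loss: float  # SLE: estimated cost of one incident (USD)
    annual_rate: float  # ARO: expected incidents/year, from past data

    @property
    def ale(self) -> float:
        return self.single_loss * self.annual_rate

assets = [
    Asset("billing database", 2_000_000, 0.10),
    Asset("public web site", 50_000, 2.00),
    Asset("SCADA controller", 10_000_000, 0.01),
]

# "Criticality" falls out of a simple ranking. Note the built-in
# limitation discussed above: annual_rate is extrapolated from past
# behavior, which is exactly what fails for complex, unprecedented events.
for a in sorted(assets, key=lambda a: a.ale, reverse=True):
    print(f"{a.name}: ALE = ${a.ale:,.0f}/year")
```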
However, much of the expertise and many of the resources required for taking better protective measures are located outside governments. The military—or any other state entity for that matter—does not own most critical (information) infrastructures and has no direct access to them. Because the state has lost control over its body parts, it needs the private sector to localize them for it (Dunn Cavelty and Suter 2009).

The second practice in this domain is resilience (Suter 2011). Resilience is commonly defined as the ability of a system to recover from a shock, either returning to its original state or settling into a new, adjusted state (Perelman 2007). Resilience has become an approach to risk management "which foregrounds the limits to predictive knowledge and insists on the prevalence of the unexpected" (Walker and Cooper 2011:6) across the entire range of societally relevant issues. A link between the growing popularity of resilience among national security experts and the threat perception fed by "the complex" is obvious: Since the task of "knowing" the threats to vital systems is so hard, preventing them seems unachievable. In this context, resilience has become a concept that allows the security practitioner to have, or rather regain, a sense of safety/security even though disruptions of various kinds are seen as inevitable. The concept therefore promises an additional safety net against large-scale, major, and unexpected (unknowable) events. Resilience privileges self-organized governance from within the system rather than governance by hierarchically superior actors outside the system. Instead of investing in robustness against adverse influences, resilience thinking favors more flexible responses based on the principle of adaptation, which resembles the image of a self-healing, immune-system type of security on a higher system level. In parallel to the view of the close interrelationship/fusion of society with critical infrastructures, resilience becomes a quality that the entire society should aspire to develop. Security in the age of resilience rests on self-regulating and self-organizing networks.

"Visible" Discursive Practices

"Visible" political figures make use of the reservoir of available threat representations to various degrees and in different settings when talking about cyber-threats to national security and when proposing action. Since this was not the main focus of this article, this last subsection gives only a brief overview of how threat representations influence discursive practices among "visible" political actors. Most often, the representations are used for mobilization in the political discourse, to show that there is a need for (more) action and for more forceful attempts to control the digital realm, with an increasing focus on "cyber-war" (Lawson 2012). The necessity of constantly using such mobilizers stems from the fact that there is little reliable data about the "actual" level of risk and that no cyber-incident has ever come anywhere near doomsday. By pointing to potential but looming cyber-doom, actors in the political process are successfully convinced that more and immediate action is required. Technological threat representations are mainly used to depict malware as weapons with a real-world impact.
One often-used analogy is the designation of malware as "weapons of mass disruption," analogous to "weapons of mass destruction" (Chairman of the Joint Chiefs of Staff 2006), also in the form of "eWMDs." When speaking about the distributed denial of service (DDoS) attacks against Estonian Web sites in 2007, the speaker of the Estonian parliament said: "When I look at a nuclear explosion and the explosion that happened in our country in May, I see the same thing" (Poulsen 2007). Some also depict cyber-attacks as more dangerous than traditional explosives: "We are at risk. Increasingly, America depends on computers. [...] Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb" (National Academy of Sciences 1991:7).

Socio-political threat representations are also directly linked to how political actors mobilize in the political arena. Many public statements directly address the need for less anonymity in cyberspace. In addition, since it can hardly ever be known who is behind an incident unless the perpetrator steps forward or the incident is investigated thoroughly (which usually takes months if not years), the common practice in government circles is to assume and communicate the worst when break-ins are discovered, the worst-case scenario usually being that of enemy state actors stealing the most sensitive data. Early in the debate, hacking incidents that were prominently discussed in the media—such as the intrusions into high-level computers perpetrated by the Milwaukee-based 414s—were turned into calls for action: If teenagers (the 414s) were able to penetrate computer networks that easily, it was assumed to be highly likely that better organized entities such as states would be even better equipped to do so. Actual cyber-espionage cases (most prominently the Cuckoo's Egg incident [Stoll 1989], which involved a German hacker and the KGB) provided the proof for this assumption. Nowadays, China is most frequently held responsible for high-level penetrations of government and business computers, even though, or exactly because, these allegations almost exclusively rely on anecdotal and circumstantial evidence (Deibert and Rohozinski 2009).

A very widespread practice linked to the third threat representation is to refer back to actual cyber-incidents, which are then portrayed as either devastating or, more commonly, as harbingers of imminent cyber-doom. The most prominent such cases are the DDoS attacks on Estonia in 2007 and on Georgia in 2008, Stuxnet, or, more recently, Flame, which have become signifiers of the no-longer-future-but-reality of cyber-war. Such attempts to mobilize are also evident in the classical use of analogies or case-based reasoning. The most famous analogy is that of an "Electronic Pearl Harbor," first used in testimony before the US Congress in 1991 (Schwartau 1994:43). The analogy links the cyber-security debate to a real and successful surprise attack on critical US military infrastructures during World War II while, at the same time, warning against the idea of US invulnerability due to its geographical position. The visions conjured up by this reference are of a sudden crippling blow against critical infrastructures resulting in chaos and destruction.
Another prominent example is the "cyber-9/11." Senator Joseph Lieberman said in a recent floor statement: "I fear that when it comes to protecting America from cyberattack it is September 10, 2001, and the question is whether we will confront this existential threat before it happens [...] The system is blinking red, yet we fail to connect the dots—again" (Lieberman 2012). Other prominent figures have said that a cyber-9/11 is a matter of when, not if (Arquilla 2009) or have even predicted that a cyber-attack could surpass the impacts of 9/11 "by an order of magnitude" (Lawson 2011).

Conclusion

This paper identified multiple cyber-threat representations in order to explore possible connections between discursive representations of threats and cyber-security practices. While the analysis shows a close connection between threat representations and the material-objective realities of computer networks at all times, the paper's focus on expressions of in-security allows for a more nuanced understanding of how cyber-security is presented as a national security issue and of how these representations can change the shape of cyber-security practices. In particular, the paper has looked at who shapes threat representations, who (re-)uses them in what ways, and with what constitutive effects. As expected, different actors are involved in forming and shaping the representations—many of them outside of government. Non-governmental actors thus play a substantial part in constructing discursive settings. It seems reasonable to assume that this role is particularly important when new issues are put on the security political agenda or when there is a high level of disagreement among experts about the level and the nature of the threat or risk.

The representation of threats in and through cyberspace is accomplished with different (linguistic) means. The cyber-security discourse is particularly rich in metaphors, which emerge as powerful perception-shapers and anchoring devices, but also as tools that are actively used in political discourse. In other words, discursive constructions both set the linguistic rules of the game and are used instrumentally. First, viral metaphors were used by technical experts to allow laypersons to understand (complicated) technological issues. But viral metaphors and the virus-as-weapon also speak to deep-seated fears in the human psyche that make national security solutions the logical choice. Second, space/place metaphors are used to convey a specific political image of the cyber-environment (Western Frontier/Ecosystem/Ocean ...), which leads to different associations for different actor groups. For Internet activists, the Western Frontier is a place of freedom and adventure that must be kept free of government involvement. For law enforcement and other state bodies, it is a place of lawlessness in need of sheriffs and control. For some, an ecosystem is a place that prospers best by itself and will always reach a harmonious equilibrium; for others, reaching the state of equilibrium comes with the threat of extinction or uncontrolled growth. Last, the metaphor of the ocean suggests flows and vastness, but also uncontrollability and piracy. A different practice shown in the paper is the use of analogies in government circles to connect the cyber-realm more solidly to the real world and to suggest that non-action will lead to sure disaster.
The analysis also reveals a certain tension in how different actors represent threats in and through cyberspace that may well translate into tensions on the political/response level. I would argue that two different principles or logics are conveyed through threat representations and cyber-security practices. The first logic links cyberspace to state power, control, and order: The more the discourse is about (re-)establishing control and borders, the more it is about physical infrastructures that can be subjected to the principles of territoriality and sovereignty. The second is inspired by organic response, the image of networks and interconnectedness, and is more closely connected to aspirations for self-healing, self-organization, and decentralization. Such different representations do not constitute a problem per se, but these two do not co-exist very well: If cyberspace is conceptualized as an auto-generating immune force, then the role of the state is that of a gardener and facilitator. If cyberspace is conceptualized as a problematic, unruly place that needs to be tamed at all cost, then this inevitably leads to calls for strong(er) interference in the global cyber-system, including the topology of the Internet.

The change in threat representations over the last few years shows a steady progression toward the first logic at the expense of the second. The frequent use of military language to speak about cyber-matters suggests that cyber-security can and should be managed as a military issue by military actors. The stronger the link between cyberspace and a threat of strategic dimensions becomes, the more natural it seems that the keeper of the peace in cyberspace should be the military. How easily the digital domain has been subjected to Cold War rhetoric and practices in recent years is both fascinating and alarming. Ultimately, it is a matter of choice, or at least a matter of (complicated) political processes, that has produced this particular outcome. Awareness of the power of threat representations and the preferences that come with them can help us understand that it is neither natural nor inevitable that cyber-security should be presented in terms of power-struggles, war-fighting, and military action, and that there are always different, and sometimes better, options.

Notes

1. Previous versions of this article have been presented at the pre-ISA workshop "International Relations in the Information Age" and the Presidential panel at the International Studies Association in San Diego (April 2012); at the 2nd Annual Conference of the "Transformation of Security Culture" project in Frankfurt (May 2012); and at the Research Colloquium at the Center for Security Studies, ETH Zurich (May 2012). I would like to thank the participants at these events as well as three anonymous reviewers and the editors of this special issue for their valuable comments.

2. I understand cyber-security policy to be the set of technologies, processes, and practices designed to protect networks, computers, programs, and data from attack, damage, or unauthorized access, in accordance with the common information security goals: the protection of confidentiality, integrity, and availability of information.

3. Singh (2013) advances a similar critique with his concept of meta-power, which focuses the attention of the analyst on the transformative power of interactions among individuals and the possibility for change.
His move is more "radical" than mine, however, as it opens up the possibility for a politics beyond the state, whereas my analysis remains centered on the policy world.

4. As a side note: From early on, the different types of hackers were called White Hat ("good guys") or Black Hat ("bad guys") hackers, a designation borrowed from Western films and inspired by the Frontier metaphor.

5. A two-pronged approach was used for this mapping. On the one hand, the structuring of Table 1 is based on the study of numerous cyber-security policy documents. The analysis initially focused on what was depicted as being at threat (the "referent object"). The most similar of these referent objects were clustered. In a second step, the main threats to these referent objects were identified from the documents and also clustered according to similarity. Third, the actors most interested in and in charge of these different referent objects were identified. The initial clustering was cross-checked with secondary literature and solidified by a number of formal and informal talks with cyber-security experts in a variety of countries. While I do not claim that Table 1 represents the only possible way to cluster this field, it is accepted as "valid" and robust by cyber-security experts.

6. The threat representations presented in Table 2 are loose categories formed out of a variety of ways of "talking about the threat." For this part of the analysis, the initial set of documents was enlarged by more secondary literature and by observations made at a number of cyber-security relevant conferences and workshops that I have attended over the years. I do not claim that these representations are the only ones (far from it), but they are the most dominant.

7. Examples are John Brunner's The Shockwave Rider (1975) or David Gerrold's When HARLIE Was One (1972). Early programming games such as "Darwin" (1961) and later "Core War" (1984) are about programs that compete for control and have viral characteristics.

8. See, for example: http://www.granneman.com/techinfo/security/securityanalogies/. The administrator of that website states that "one of the challenges security experts face is expressing in simple language the issues involved in security. Analogies are often a good way of making plain what the issues are, in a language that is easy to understand."

9. http://winhelp2002.mvps.org/security.htm

10. I am indebted to an anonymous reviewer for this point.

11. http://pastebin.com/9KyA0E5v

12. Mueller, Schmidt, and Kuerbis (2013) start from a similar observation. Their analysis of two recent cases of networked governance on the Internet shows a less dramatic picture, however: While there are calls for stronger hierarchical governance structures in the name of security (with states in the driver's seat), practice still favors networked approaches, with state authorities "embedding themselves into existing technical-operational networks."

References

Arquilla, John. (2009) Click, Click. Counting Down to Cyber-9/11. San Francisco Chronicle (26 July), E2.

Balicer, Ran. (2005) Modeling Infectious Diseases Dissemination through Online Role-Playing Games. Epidemiology 18(2): 260–261.

Bamford, James. (2012) The NSA Is Building the Country's Biggest Spy Center (Watch What You Say). Wired Magazine (March 15). Available at http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/all/1. (Accessed November 15, 2012.)

Barlow, John P. (1990) Crime and Puzzlement.
Available at http://w2.eff.org/Misc/Publications/John_Perry_Barlow/HTML/crime_and_puzzlement_1.html. (Accessed November 15, 2012.)
Barlow, John P. (1996) A Declaration of the Independence of Cyberspace. Available at http://homes.eff.org/~barlow/Declaration-Final.html. (Accessed November 15, 2012.)
Bendrath, Ralf. (2001) The Cyberwar Debate: Perception and Politics in US Critical Infrastructure Protection. Information & Security: An International Journal 7: 80–103.
Bendrath, Ralf. (2003) The American Cyber-Angst and the Real World—Any Link? In Bombs and Bandwidth: The Emerging Relationship between IT and Security, edited by Robert Latham. New York: The New Press.
Bendrath, Ralf, Johan Eriksson, and Giampiero Giacomello. (2007) Cyberterrorism to Cyberwar, Back and Forth: How the United States Securitized Cyberspace. In International Relations and Security in the Digital Age, edited by Johan Eriksson and Giampiero Giacomello. London: Routledge.
Betz, David J., and Tim Stevens. (2011) Cyberspace and the State: Towards a Strategy for Cyber-power. London: The International Institute for Strategic Studies.
Brenner, Susanne, and Leo Clarke. (2009) Combating Cybercrime through Distributed Security. International Journal of Intercultural Information Management 1(3): 259–274.
Buzan, Barry, Ole Wæver, and Jaap De Wilde. (1998) Security: A New Framework for Analysis. Boulder: Lynne Rienner.
Campbell, David. (1998) Writing Security: United States Foreign Policy and the Politics of Identity. Minneapolis: University of Minnesota Press.
Chairman of the Joint Chiefs of Staff. (2006) The National Military Strategy for Cyberspace Operations. Washington, DC: Chairman of the Joint Chiefs of Staff.
Clarke, Richard. (2010) Cyber War: The Next Threat to National Security and What to Do About It. New York: Ecco.
Cohen, Fred. (1987) Computer Viruses—Theory and Experiments. Computers & Security 6(1): 22–35.
Collier, Stephen, and Andrew Lakoff. (2008) The Vulnerability of Vital Systems: How Critical Infrastructure Became a Security Problem. In The Politics of Securing the Homeland: Critical Infrastructure, Risk and Securitisation, edited by Myriam Dunn Cavelty and Kristian S. Kristensen. London: Routledge.
Coward, Martin. (2009) Network-Centric Violence, Critical Infrastructure and the Urbanization of Security. Security Dialogue 40(4–5): 399–418.
Deibert, Ronald. (2003) Black Code: Censorship, Surveillance, and the Militarisation of Cyberspace. Millennium: Journal of International Studies 32(3): 501–530.
Deibert, Ronald, and Rafal Rohozinski. (2009) Tracking GhostNet: Investigating a Cyber-Espionage Network. Information Warfare Monitor. Available at http://www.infowar-monitor.net/2009/09/tracking-ghostnet-investigating-a-cyber-espionage-network/. (Accessed November 15, 2012.)
Deibert, Ronald, and Rafal Rohozinski. (2010) Risking Security: Policies and Paradoxes of Cyberspace Security. International Political Sociology 4(1): 15–32.
Denning, Dorothy. (2012) Stuxnet: What Has Changed? Future Internet 4: 672–687.
Department of Defense. (2010) Department of Defense Dictionary of Military and Associated Terms.
Available at http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf. (Accessed November 15, 2012.)
Department of Homeland Security, DHS. (2009) National Infrastructure Protection Plan: Partnering to Enhance Protection and Resiliency. Available at http://www.dhs.gov/xlibrary/assets/NIPP_Plan.pdf. (Accessed November 15, 2012.)
Department of Homeland Security, DHS. (2011) Enabling Distributed Security in Cyberspace—Building a Healthy and Resilient Cyber Ecosystem with Automated Collective Action. Available at http://www.dhs.gov/xlibrary/assets/nppd-cyberecosystem-white-paper-03-23-2011.pdf. (Accessed November 15, 2012.)
Dillon, Michael. (2005) Global Security in the 21st Century: Circulation, Complexity and Contingency. ISP/NSC Briefing Paper 05/02, Chatham House.
Dillon, Michael, and Luis Lobo-Guerrero. (2008) Biopolitics of Security in the 21st Century: An Introduction. Review of International Studies 34: 265–292.
Duit, Andreas, and Victor Galaz. (2008) Governance and Complexity—Emerging Issues for Governance Theory. Governance: An International Journal of Policy, Administration, and Institutions 21(3): 311–335.
Dunn Cavelty, Myriam. (2008) Cyber-Security and Threat Politics: US Efforts to Secure the Information Age. London: Routledge.
Dunn Cavelty, Myriam, and Manuel Suter. (2009) Public-Private Partnerships Are No Silver Bullet: An Expanded Governance Model for Critical Infrastructure Protection. International Journal of Critical Infrastructure Protection 2(4): 179–187.
Dyson, Esther, George Gilder, George Keyworth, and Alvin Toffler. (1996) Cyberspace and the American Dream: A Magna Carta for the Knowledge Age. The Information Society 12(3): 295–308.
Eriksson, Johan. (2001) Cyberplagues, IT, and Security: Threat Politics in the Information Age. Journal of Contingencies and Crisis Management 9(4): 211–222.
Eriksson, Johan, and Giampiero Giacomello, Eds. (2007) International Relations and Security in the Digital Age. London: Routledge.
Farwell, James P., and Rafal Rohozinski. (2011) Stuxnet and the Future of Cyber-War. Survival: Global Politics and Strategy 53(1): 23–40.
Gibson, William. (1984) Neuromancer. New York: Ace.
Graham, Stephen. (2006) Cities and the "War on Terror." International Journal of Urban and Regional Research 30(2): 255–276.
Gross, Michael J. (2011) Stuxnet Worm: A Declaration of Cyber-War. Vanity Fair, April. Available at http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104. (Accessed December 27, 2012.)
Hansen, Lene. (2006) Security as Practice: Discourse Analysis and the Bosnian War. London: Routledge.
Hansen, Lene, and Helen Nissenbaum. (2009) Digital Disaster, Cyber-Security, and the Copenhagen School. International Studies Quarterly 53: 1155–1175.
Helmreich, Stefan. (2000) Flexible Infections: Computer Viruses, Human Bodies, Nation-States, Evolutionary Capitalism. Science, Technology, & Human Values 25(4): 472–491.
Huysmans, Jef. (2006) The Politics of Insecurity: Fear, Migration and Asylum in the EU. London: Routledge.
Huysmans, Jef. (2011) What's in an Act? On Security Speech Acts and Little Security Nothings. Security Dialogue 42(4–5): 371–383.
Kingdon, John W. (2003) Agendas, Alternatives, and Public Policies, 2nd edition. New York: Harper Collins College Publishers.
Kocher, Bryan. (1989) A Hygiene Lesson. Communications of the ACM 32(1): 3–6.
Lapointe, Adriane. (2011) When Good Metaphors Go Bad: The Metaphoric "Branding" of Cyberspace. Center for Strategic and International Studies. Available at http://csis.org/publication/when-good-metaphors-go-bad-metaphoric-branding-cyberspace. (Accessed November 15, 2012.)
Lawson, Sean. (2011) Beyond Cyber-doom: Cyberattack Scenarios and the Evidence of History. Mercatus Center, George Mason University, Working Paper No. 11-01.
Lawson, Sean. (2012) Putting the "War" in Cyberwar: Metaphor, Analogy, and Cybersecurity Discourse in the United States. First Monday. Available at http://www.firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3848/3270. (Accessed November 15, 2012.)
Léonard, Sarah, and Christian Kaunert. (2011) Reconceptualizing the Audience in Securitization Theory. In Securitization Theory: How Security Problems Emerge and Dissolve, edited by Thierry Balzacq. London: Routledge.
Libicki, Martin C. (2009) Cyberdeterrence and Cyberwar. Santa Monica: RAND.
Lieberman, Joseph. (2012) Floor Statement for Sen. Joseph Lieberman, Introduction of Cybersecurity Act of 2012, Washington, DC, February 14, 2012. Available at http://www.hsgac.senate.gov/download/senator-liebermans-statement-on-introduction-of-the-cybersecurity-act-of-2012. (Accessed November 15, 2012.)
Lupton, Deborah. (1994) Panic Computing: The Viral Metaphor and Computer Technology. Cultural Studies 8(3): 556–568.
Lynn, William J., III. (2010) Defending a New Domain: The Pentagon's Cyberstrategy. Foreign Affairs 89(5): 97–108.
Mihalache, Adrian. (2002) The Cyber Space–Time Continuum: Meaning and Metaphor. The Information Society 18: 293–301.
Mueller, Milton, Andreas Schmidt, and Brenden Kuerbis. (2013) Internet Security and Networked Governance in International Relations. International Studies Review 15(1): 86–104.
National Academy of Sciences. (1991) Computer Science and Telecommunications Board, Computers at Risk: Safe Computing in the Information Age. Washington, DC: National Academy Press.
Olson, Kathleen K. (2005) Cyberspace as Place and the Limits of Metaphor. Convergence 11(1): 10–18.
Orland, Kyle. (2008) GFH: The Real Life Lessons of WoW's Corrupted Blood. Available at http://www.gamasutra.com/php-bin/news_index.php?story=18571. (Accessed November 15, 2012.)
Panda Security. (2010) Panda Security Report: The Cyber-crime Black Market: Uncovered. Bilbao: Panda Security Labs.
Parikka, Jussi. (2005) Digital Monsters, Binary Aliens—Computer Viruses, Capitalism and the Flow of Information. Fibreculture Journal. Available at http://vxheavens.com/lib/mjp00.html. (Accessed November 15, 2012.)
Parikka, Jussi. (2007) Digital Contagions—A Media Archaeology of Computer Viruses. New York: Peter Lang.
Perelman, Lewis J. (2007) Shifting Security Paradigms: Toward Resilience. In Critical Thinking: Moving from Infrastructure Protection to Infrastructure Resilience, edited by John A. McCarthy. Washington, DC: George Mason University.
Perrow, Charles. (1984) Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
Poulsen, Kevin.
(2007) "Cyberwar" and Estonia's Panic Attack. Threat Level, 22 August. Available at http://www.wired.com/threatlevel/2007/08/cyber-war-and-e. (Accessed November 15, 2012.)
President's Commission on Critical Infrastructure Protection, PCCIP. (1997) Critical Foundations: Protecting America's Infrastructures. Washington, DC: US Government Printing Office.
Ross, Andrew. (1991) Hacking Away at the Counterculture. In Technoculture, edited by Constance Penley and Andrew Ross. Minneapolis: University of Minnesota Press.
Sampson, Tony. (2007) The Accidental Topology of Digital Culture: How the Network Becomes Viral. Transformations: Online Journal of Region, Culture and Society. Available at http://www.transformationsjournal.org/journal/issue_14/editorial.shtml. (Accessed November 15, 2012.)
Schwartau, Winn. (1994) Information Warfare. Cyberterrorism: Protecting Your Personal Security in the Electronic Age. New York: Thundermouth Press.
Singh, J.P. (2013) Information Technologies, Meta-power, and Transformations in Global Politics. International Studies Review 15(1): 5–29.
Skibell, Reid. (2002) The Myth of the Computer Hacker. Information, Communication & Society 5(3): 336–356.
Spafford, Eugene. (1989) The Internet Worm: Crisis and Aftermath. Communications of the ACM 32(6): 678–687.
Sterling, Bruce. (1993) Hacker Crackdown: Law and Disorder on the Electronic Frontier. New York: Bantam Books.
Stoll, Clifford. (1989) The Cuckoo's Egg: Tracking a Spy through the Maze of Computer Espionage. New York: Doubleday.
Suter, Manuel. (2011) Resilience and Risk Management: Exploring the Relationship and Comparing Its Use. Zurich: Center for Security Studies.
Taubes, Gary. (1994) Devising Software Immune Systems for Computers. Science 265: 887.
Walker, Jeremy, and Melinda Cooper. (2011) Genealogies of Resilience: From Systems Ecology to the Political Economy of Crisis Adaptation. Security Dialogue 42(2): 143–160.
Weldes, Jutta, and Diane Saco. (1996) Making State Action Possible: The U.S. and the Discursive Construction of "the Cuban Problem," 1960–1994. Millennium: Journal of International Studies 25(2): 361–395.
Yen, Alfred C. (2003) Western Frontier or Feudal Society? Metaphors and Perceptions of Cyberspace. Berkeley Technology Law Journal. Available at http://www.law.berkeley.edu/journals/btlj/articles/vol17/Yen.stripped.pdf. (Accessed November 15, 2012.)

© 2013 International Studies Association