TY - JOUR
AU - Mandal, Anthony
AB - This chapter examines material published in the field of digital humanities in 2017. Owing to controversial developments in the political sphere, public awareness of the role of big data in our lives has grown. Anxieties about ‘microtargeting’ and ‘dataveillance’ inflect our increasingly troubled relationship with computational culture, particularly as the commercialization of the Internet and its fragmentation into proprietary platforms mean that algorithmic and machine-learning processes are hidden away from scrutiny in ‘black box’ systems. Books by Nick Srnicek and Richard J. Lane discuss the turn towards big data and platforms, detailing the ways in which humanities scholars might engage with such transformations. A second strand of digital culture looks at the relationship between humanity and machines, with the material turn encouraging more sustained examination of ‘digital bodies’. This is the name of the collection edited by Susan Broadhurst and Sara Price, who bring together essays from a range of artists, performers, fashion designers, and sociologists. Looking at the augmentation of humanity by the digital, Andrew Pilsch aims to rehabilitate the transhumanist movement in scholarly circles by relocating it within a longer tradition of utopian evolutionary futurism that can be traced back to the early twentieth century. By contrast, N. Katherine Hayles turns to cognitive processes, arguing that recent discoveries in neuroscience regarding nonconscious cognition can realign our understanding of the relationship between humanity and machine. The final part of this chapter looks at recent monographs by David Berry and Anders Fagerjord and by James Smithies that propose new inflections of the digital humanities in response to the challenges outlined in the foregoing discussion.
After surveying a number of publications that dealt with big data and the quantification of human identity, last year’s chapter on digital humanities (DH) concluded rather forebodingly with a reference to Cambridge Analytica. In the period between that chapter and this, the alleged misuse of big data in the Brexit referendum, the 2016 US presidential election, and other national elections, and its potential to subvert democratic processes, became mainstream news. Social media users’ interactions with seemingly innocuous quizzes on Facebook, for example, enabled Cambridge Analytica to generate 5,000 ‘data points’ on 220 million American citizens, which were then analysed using a range of algorithms as a means of predicting or influencing voting behaviour. As we move ever more deeply into the algorithmic age, our lives are governed, our identities constrained, and our futures shaped by digital processes. While we haven’t yet reached the ‘Singularity’—a term used by transhumanist Ray Kurzweil to describe the moment when machine intelligence will exceed all of humanity’s collective intelligence—as machine learning becomes increasingly complex, our understanding of the protocols that control so much of our lives is becoming increasingly mystified. If the nascent Internet of the 1990s was like the Wild West, a sort of free-for-all in open territory, the Web 2.0 era represented by social media giants like Facebook, Twitter, and Instagram has seen the commercialization of that ‘open territory’.
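The kind of profiling invoked above can be pictured, in deliberately schematic terms, as a supervised-learning problem: behavioural ‘data points’ become a feature matrix from which a propensity score is estimated for each profile. The sketch below is a minimal illustration only; the features, data, and model are invented for the example and bear no relation to Cambridge Analytica’s actual systems or data.
```python
# Illustrative only: a toy propensity model over invented 'data points'.
# Nothing here reflects Cambridge Analytica's methods; all numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_profiles, n_data_points = 1_000, 50              # stand-ins for users and per-user 'data points'
X = rng.normal(size=(n_profiles, n_data_points))   # e.g. quiz answers, page likes, engagement scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_profiles) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
propensity = model.predict_proba(X)[:, 1]          # per-profile probability used to rank targets
print("Highest-propensity profiles:", np.argsort(propensity)[-5:])
```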
Not only is our humanity normalized online through avatars that fit predetermined templates provided by these platforms: as commercial enterprises these systems remain ‘black-boxed’ to us, with the divide between us and our data growing deeper and wider. If we find it difficult to manage our personal data—where they go, who sees them, what is done with them—on whom can we rely to oversee the zettabytes of this digital effulgence in the epoch of big data? In 2010, following a noble humanistic tradition established centuries ago, the Library of Congress thought it could attend to part of this task by archiving all the public tweets issued on Twitter from its first tweet of 21 March 2006 onwards. By the end of 2017, the LoC announced enough was enough, and that it would only deposit tweets selectively with effect from 1 January 2018. As Amanda Petrusich notes in her blog post for the New Yorker on this decision: ‘Healthy consumption of the Internet requires curation. Though reading widely and expansively offline remains crucial, the present Internet deluge still means we all have to make serious choices about what we let in’ (Petrusich, ‘The Library of Congress Quits Twitter’). The books discussed in this chapter all share a preoccupation with the challenge of ‘what we let in’—and indeed ‘where we let ourselves go’—concerning themselves with three core questions: the role of datafication and platforms in the age of big data; the relationship between cognition and presence as our bodies become increasingly digitalized; and the role that can be played by digital humanists in responding to these phenomena. Nick Srnicek’s Platform Capitalism explores the transformation of businesses into platforms—that is, businesses (like Google, Facebook, and Microsoft) that provide the digital infrastructure for other businesses. Srnicek’s focus, much like that of the global market itself, is away from labour and on capital raised within the digital economy. This digital economy comprises more than simply the technology sector, cutting through most sectors, and legitimates capitalism by its seeming disruptiveness: ‘The digital economy is becoming a hegemonic model: cities are to become smart, businesses must be disruptive, workers are to become flexible, and governments must be lean and intelligent’ (p. 5). Platform Capitalism explores the shift in capitalism, from manufacturing to data-processing, and the coterminous emergence of platforms in directing global markets. The apparent novelty of these shifts, Srnicek argues, in fact belies much longer-standing trajectories in the market; moreover, the capitalist imperatives of efficiency and competition ineluctably demand constant technological change in order to lower the costs of production. Platform Capitalism begins with an economic history of three crises that have led to the present situation: the response to the economic downturn during the 1970s; the boom-and-bust cycles of the 1990s; and the reaction to the 2008 financial crisis. Post-Second World War Western economies were characterized by a Fordist–Taylorist model of mass production and the separation of labour into small, lower-skilled tasks and processes. This post-war period was an atypical ‘golden age’ for American capitalism and manufacturing, such that an increasingly globalized marketplace put pressure on US profits from the 1960s. This led the US to embark on a series of economic policies and activities in the 1960s and 1970s that culminated in the global crisis of the later 1970s. 
Competitiveness in the market stimulated a push towards overproduction (and therefore less profit), while a concurrent attack on the power of the unions weakened workers’ benefits and lowered liability costs. The 1970s laid the ground for the 1990s’ ‘dot-com boom’, which established the infrastructure for the digital market economy of today. In particular, the 1990s saw the commercialization of the Internet, driven in no small measure by venture capitalism, which realigned the focus of the economy from manufacturing to telecoms, which in turn drove further technological advances in infrastructure. However, such growth was to be short-lived: the dot-com bust at the turn of the millennium led to the deregulation of monetary policies that itself culminated in the much larger crisis of 2008 and the subsequent era of austerity. This made further economic stimulation based on renewed infrastructure projects that would generate labour a political impossibility, pushing investors into riskier digital ventures combined with cash-hoarding and offshoring of liquid assets—especially by tech companies. As Srnicek notes, ‘Tax evasion, austerity, and extraordinary monetary policies are all mutually reinforcing’ (p. 33), and inevitably such economic behaviour impacts on the labour market, leading to stagnation and precarity. When crises hit, capitalism recalibrates itself accordingly: we now live in an economy that can be defined as ‘cognitive’, ‘informational’, or ‘knowledge’-based. Hence, ‘some argue that the economy today is dominated by a new class, which does not own the means of production but rather has ownership over information’ (p. 38; original emphasis). However, the old business models are no longer appropriate for this new material, requiring instead an equally innovative model: ‘Platforms, in sum, are a new type of firm; they are characterised by providing the infrastructure to intermediate between the different user groups, by displaying monopoly tendencies driven by network effects, by employing cross-subsidisation to draw in different user groups, and by having a designed core architecture that governs the interaction possibilities’ (p. 48). Srnicek outlines five types of platform: advertising platforms (like Google and Facebook), which provide a service for free while harvesting user data; cloud platforms (such as Amazon Web Services and Salesforce), which supply software and hardware infrastructure; industrial platforms (for instance, Siemens and GE), which are transforming traditional manufacturing into Internet-connected processes; product platforms (like Rolls Royce and Spotify), which transform goods into services through rental or subscription; and lean platforms (most notably Uber and AirBnB), which reduce ownership of assets by the business to an absolute minimum in order to profit by reducing costs. It is interesting to note that, in Srnicek’s estimation, Amazon spans all five categories. Many of these businesses, for instance Google and Facebook, accumulate capital that is then stored overseas, is used in acquisitions or mergers, or is funnelled into startups. As Srnicek observes: ‘Enabled by digital technology, platforms emerge as the means to lead and control industries’ (p. 92). If capitalism renews itself through the creation of new technological complexes, Srnicek speculates whether information technology can revive capitalism’s ‘moribund growth’ (p. 94). 
On the one hand, new platforms are monopolistic: the greater the number of users who interact with the platform, the more valuable the platform becomes for each user. On the other hand, capitalism always supplies means for competition, and new ventures can eventually topple existing monopolies. However, unlike manufacturing, platforms are not judged solely on the differentials between costs and prices: instead, data acquisition and processing are far more important. Many of these systems are driven by analytics, particularly artificial intelligence, and convergence, as different platforms seek to replicate services provided by their competitors. Data extraction is funnelled into siloed platforms, locking users and their data into a specific platform, whether that be Apple’s iCloud ecosystem or Facebook. This phenomenon is part of a broader shift from an open to a closed web dominated by fragmented platforms, itself driven by our movement from computers to smartphones as our primary point of access to the Internet. However, platforms are themselves vulnerable to risk from a number of challenges. One key feature of the industrial Internet is the overcapacity of products. As a consequence, austerity is likely to continue and production will remain in decline; lean platforms will be unable to provide sustained momentum; and outsourcing will have expended itself. The result is an inherent lack of profitability in these new models, particularly given that lean platforms are entirely reliant on a vast mass of surplus capital to mitigate immense start-up costs; however, ‘[w]hereas the tech boom of the 1990s at least left us with the basis for the internet, the tech boom of the 2010s looks as though it will simply leave us with premium services for the rich’ (p. 121). Srnicek suggests that, in terms of profitability, Amazon is more representative of the future than Google, Facebook, and Uber. As such, existing socio-economic inequalities will be reflected in access inequalities to the Internet, resulting in a digital deficit. Platform Capitalism proposes a simple but profound solution to this problem: Rather than just regulating corporate platforms, efforts could be made to create public platforms—platforms owned and controlled by the people. […] More radically, we can push for postcapitalist platforms that make use of the data collected by these platforms in order to distribute resources, enable democratic participation, and generate further technological development. Perhaps today we must collectivise the platforms. (p. 128) One notable scholarly response to the global shift towards platforms and datafication has been the rising advocacy for a ‘big humanities’ commensurate with the challenges of ‘big data’. The big humanities featured notably in both versions of the field-defining ‘Digital Humanities Manifesto’ (2008, 2009), focusing on ‘the building of bigger pictures out of the tesserae of expert knowledge […] [which] promotes collaboration across domains of expertise’ (para. 16). More recently, Mirko Tobias Schäfer and Karin van Es prefaced their collection The Datafied Society: Studying Culture through Data (2017) by noting that ‘data have moved to the centre of media research and have become protagonists in media narratives. […] Data have become ontological and epistemological objects of research—manifestations of social interaction and cultural production’ (p. 11). 
Likewise, Patrik Svensson’s Big Digital Humanities: Imagining a Meeting Place for the Humanities and the Digital (2016) proposes that ‘big’ in this case encapsulates a multiplicity of humanistic approaches to the digital world, rather than simply scholarship on a macro-scale: Big digital humanities facilitates multiple modes of engagement between the humanities and the digital, stretches across all of the humanities and outside, and functions as a platform for the humanities. According to this model, the digital humanities engages with the digital as a tool, as an object of inquiry, and as an expressive medium. (p. x) Richard J. Lane’s The Big Humanities: Digital Humanities/Digital Laboratories grapples with this challenge by focusing on the digital laboratory as the paradigmatic environment of the DH. His book aims to demystify the emergent field of the ‘Big Humanities’ in the same way that ‘Big Science’ has been made accessible. Lane situates his argument within a rich tradition of previous debates, drawing on a range of examples of existing digital projects, including Perseus, Transcribe Bentham, and the Devonshire Manuscript. Additionally, the book contextualizes the DH within a broader history, exploring various antecedents to the current debates—including the ‘Two Cultures’ controversies that ensnared Matthew Arnold and T. H. Huxley in the late nineteenth century and C. P. Snow and F. R. Leavis in the 1960s—as well as drawing on Heidegger’s concept of ‘Enframing’: ‘technology, for Heidegger, occupies both sides of the “two cultures” divide: it is instrumental, facilitating the physical sciences, and transforming nature and humanity into a quantifiable resource; but it is simultaneously that which allows humanity to endure as the intelligent beings who will receive and comprehend truth’ (p. 33). Lane proposes the laboratory as ‘the new [space] for humanistic inquiry’, in which ‘the shift to lab-based hybrid humanistic/scientific research practices, and the accompanying self-reflexivity and theoretical engagement […] [have] the potential to […] “rebuild” the otherwise declining arts and humanities’ (p. 2). In this context, the big humanities draw together a variety of modes: traditional humanistic activity, remediating and manipulating texts, employing practical skills and technology, drawing on significant funding and team collaboration (p. 7). Lane posits the various digital tools available to humanities scholars as virtual laboratories. A key example here is temporality: machine reading of large datasets in seconds substitutes for potential years of close reading by scholars, enabling them to direct their energies elsewhere, such that the ‘cycle’ of humanistic work undergoes a radical change. Lane also invites us to consider technology in Heideggerian terms as ‘equipment’: ‘the tools that are part of an entire environment or horizon of understanding and intention’, which both fulfils ‘its pre-assigned function and tak[es] that function elsewhere, perhaps a long way from where we thought it was going or where it should be’ (p. 34). David Berry, for example in Understanding Digital Humanities (2012), has made similar claims about a multi-phasic DH, the latest wave of which transforms the very nature of ‘research’ through working within a primarily computational medium. The Big Humanities explores the role of ‘collaboratories’ in today’s social Internet, looking in particular at two cases: crowdsourcing in the Transcribe Bentham project and the social edition in the Devonshire Manuscript.
The discussion draws on a number of responses to these initiatives, which range from criticisms of the positivism that substitutes data-crunching for answering deep intellectual questions, to celebrations of the citizen-researcher who can now participate in humanistic enquiry. Probing these disputes, Lane observes that ‘what is at stake here is the role of the “public intellectual” and the ways in which the broader academic community communicates (or not) with the general public’ (p. 62). Countering instrumentalist readings of such projects, he suggests that collaboratories like Transcribe Bentham and the Devonshire Manuscript facilitate ‘neural networks’, which encourage discussion, debate, and discourse that are themselves metacritical, even if they are not ‘theoretical’. So, for all the talk of ‘crowds’, participation is important because of quality rather than quantity: ‘with the example of Transcribe Bentham, the act of transcribing can facilitate a deeper “active” reading, or the sort of slow reading that leads an individual to reflective and critical questions that forge new modes of understanding’ (p. 81). The open-source movement is becoming increasingly powerful in shaping future digital labour, and for Lane this has important consequences for the big humanities. He helpfully distinguishes between the forking of the ‘open source’ and ‘free software’ movements, the former supporting the rights of individual curiosity to expand our knowledge and understanding of how those tools are made and should function, the latter advocating for community access to digital resources that are now seen as essential cultural tools. The challenge for the humanities lies in the potential disruption that open-source philosophy can bring to academia, whose infrastructure typically relies on closed systems and proprietary platforms such as costly textbooks and commercial databases. Here lies an essential challenge: if we are to be the informed stakeholders for which the open-source movement advocates, then we must learn to code: ‘As the humanities transitions into a Big Humanities model, code/knowledge reuse becomes essential, freeing humanists from the costs of either using proprietary software or paying programmers to write entire applications from scratch’ (p. 105). Lane concludes by considering how the humanities can be reinvigorated through big data and distant reading. Comparing a number of linguistic corpora to astronomical datasets, he notes that humanistic data are similar in magnitude to some of the largest scientific datasets available. However, quantity is less important than may initially seem to be the case: there are numerous ‘big’ humanities datasets that are small compared to scientific ones, yet they will be significant from a humanistic perspective and will still be of a much greater magnitude than the corpora traditionally examined in close textual analysis or literary-historical surveys. As a case in point, Lane considers the restricted nature of Ian Watt’s The Rise of the Novel (1957), whose ‘dataset is simply too small to meaningfully answer any of his questions’ (p. 112). In the future, the big humanities will pre-empt such self-selectivity, by both emphasizing the importance of scale and providing the tools required to undertake more comprehensive analysis. The book finishes by scrutinizing Franco Moretti’s controversial model of ‘distant reading’, which is driven by patterns rendered through big data rather than the vagaries of literary tradition and historical contingency. 
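The scale Lane gestures at here can be conveyed by even a trivially small script. The sketch below, offered purely as an illustration and not as Lane’s or Moretti’s actual tooling, tallies word frequencies across a hypothetical folder of plain-text novels: a ‘machine reading’ that takes seconds where close reading would take years.
```python
# A minimal sketch of 'machine reading' at corpus scale, using only the standard library.
# The corpus directory is hypothetical; point it at any folder of plain-text files.
from collections import Counter
from pathlib import Path
import re

def corpus_frequencies(corpus_dir: str) -> Counter:
    """Aggregate word counts over every .txt file in the given folder."""
    counts = Counter()
    for text_file in Path(corpus_dir).glob("*.txt"):
        words = re.findall(r"[a-z']+", text_file.read_text(encoding="utf-8").lower())
        counts.update(words)
    return counts

if __name__ == "__main__":
    freqs = corpus_frequencies("novels_corpus")   # hypothetical directory of plain-text editions
    for word, n in freqs.most_common(20):
        print(f"{word}\t{n}")
```
Distant reading proper layers far more sophisticated modelling on top of such counts, but the point stands: the bottleneck shifts from reading time to the design and interpretation of the queries.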
The challenge, then, ‘involves shifting from the microscope to the telescope, that is to say, from a limited set of books and close reading examined through the capacity of the human eye, to lab-based or lab-generated Big Humanities data’ (p. 119). Ultimately, the role of the scholar in the age of big humanities is not to provide answers, but to design tools that enable us to traverse the spaces and places opened up by such research. In so doing, the emphasis must be on crossing borders, Lane argues, rather than providing authoritative solutions akin to those preferred by Watt over half a century ago. While much DH scholarship has focused on the growing digitalization of a humanity considered from a distance and perhaps rendered more ephemeral than ever before, another strand explores the entanglements between our material and digital selves. Susan Broadhurst and Sara Price’s Digital Bodies: Creativity and Technology in the Arts and Humanities is a collection of seventeen essays that ‘illustrate the synergies and differences in the theorisation of the body and technology, and how these in turn shape new or evolving research practices across the arts and humanities’ (pp. 1–2). The book is divided into four sections: ‘The Performing Body: Creativity and Technology in Performance’; ‘Designing, (Re)designing: Embodiment and Digital Creativity in Art Practices’; ‘Digital Aesthetics and Identity: Creativity in Fashion Design’; and ‘Embodied Interaction: Digital Communication and Meaning Making in the Social Sciences’. The essays consider how the digital body is both extended and reconfigured in new and transformative ways, while technology is correspondingly posited as something that is part of, rather than apart from, the body, altering and reconditioning our experience in the world. For the research project detailed in Susan Broadhurst’s ‘Digital Performance and Creativity’, participants found that, rather than being obstacles, ‘technological limits can become creative opportunities’ (p. 20), and that ‘the digital does what all avant-garde art does: it is an experimental extension of the sociopolitical and cultural tendencies of an era’ (p. 21). Digital bodies become transitive, and the boundaries between ‘self’ and ‘other’ blur, with performer–audience relationships gaining increasing fluidity. Drawing on Wagnerian notions of the Gesamtkunstwerk (the total artwork), Broadhurst sees the re-emergence of Romantic aesthetics in such co-mingling of the digital and the somatic. If physical and topological boundaries can blur in digital performance, Helga Schmid’s ‘The Embodiment of Time’ observes that digitality has rendered time increasingly precise (atomistic, computational), making Romantic returns a challenge: ‘A leap back in time to temporal structures of the past does not meet contemporary living standards of individuality, autonomy and freedom of choice’ (p. 97). Schmid’s solution to this crisis lies in ‘uchronia’ (p. 98), a kind of temporal utopia, and she argues that time, as we engage with it, is a social construct. Her performance work, which relies on embedding the body within light and space for hours, enables the artist to ‘unlearn’ time and overcome conventional temporal structures. This can elicit a productive shift: from the ‘vita activa’ superintended by mechanical and algorithmic time to a ‘vita contemplativa’ governed by natural and biological rhythms. 
Camille Baker’s ‘Critical Interventions in Wearable Tech, Smart Fashion and Textiles in Art and Performance’ builds on the fact that, despite our increasing datafication, third parties have better access to our data than we do, with little ethical oversight. In Baker’s reading, the critical discourse is lacking when it comes to improving our personal rights over data and understanding the existential implications of data-as-identity. The essay traces the outcomes of Baker’s research network, which explored issues of ownership of personal data, how identity is performed in the digital space, and the need to educate the wider public about this phenomenon. The extension or renegotiation of boundaries through our immersion in the digital can manifest itself in ludic ways, as examined in a number of this collection’s chapters. Maria Chatzichristodolou’s ‘Karen by Blast Theory: Leaking Privacy’ looks at the artist collective Blast Theory, whose work deals with the sociopolitical functions of technology. Blast Theory employs genre-warping approaches to create immersive works that cross the boundaries between ‘game, art, and life’, in order to critique our digital lives; the collective ‘offers entertainment […] that is less concerned with enjoyment or pleasure (the sine qua non of all mainstream game design and entertainment ventures) and more with what [Blast Theory member Matt] Adams terms a “productive anxiety”’ (p. 68). Such productive anxiety can enhance our in-game experiences in three ways: as entertainment, as enlightenment, or for sociality. With social functions in mind, concepts of play can extend beyond the political and critical, into scenarios that can facilitate professional development: Caroline Pelletier and Roger Kneebone’s ‘Playing at Doctors and Nurses: Technology, Play and Medical Simulation’ challenges long-standing critical discourse on medical simulation. Scholars have typically argued that ‘fidelity’ is the most important criterion in the medical use of electronic substitutes or representations of the human body, which assumes transparency or a lack of mediation in the devices. Instead, the authors argue that ‘fidelity’ is itself socially constructed within the expectations of clinical practice, and that such encounters should be analysed through the lens of play. ‘Treating medical simulation as play does not mean treating it as idleness or triviality, but rather as an activity implicated in symbolising the world and, consequently, in experimenting with how it can be made sense of’ (p. 241). Moreover, play should be understood here not as developmental (and utilitarian), but as imaginative and affective, depending on a host of creative traditions, such as the carnivalesque and the phantasmagoric. These interactions demonstrate that ‘realism’ is a product of imaginative play, rather than its opposite—and, by extension, STEM conventions themselves might be better understood with the aid of humanistic models. Interaction, in its most intimate manifestations, forms another key strand through Digital Bodies, particularly when the role of touch is examined. Laura Ferrarello’s ‘The Oxymoron of Touch: The Tactile Perception of Hybrid Reality through Material Feedbacks’ looks at how haptic devices generate hybrid experiences that invest the digital with seemingly physical properties. 
In the digital age, ‘materiality’ intersects the physical and the virtual; Ferrarello uses the example of an apple, scanned into a computer as a 3D image, then manipulated as a digital file, before being printed using a 3D printer: ‘The oxymoron apple is the result of a series of material states that link the physical, digital and physical reality via an interwoven loop. Our mind combines information from the physical and digital reality to shape a third materiality’ (p. 139). As we shift further into the digital, Ferrarello argues, we must see it not as a simulation of the physical realm, but rather ‘the physical and the digital [should be] understood as a whole’ (p. 140). This fluid relationship between the two worlds is a problem encountered by fashion designers, according to Bruna Petreca’s ‘Giving Body to Digital Fashion Tools’. The main challenge is that, until recently, attempts to incorporate digitality into fashion design have sought to mimic haptic engagements in the physical world, with the focus falling heavily on touch, whereas there may be other sensory tools, such as sound, of greater benefit to designers. Petreca advocates for more nuanced encounters between designers and their tools and materials, which involves moving beyond hands: ‘designers “need to feel”. Since feeling seems to involve a balance between perceptual, conceptual and affective levels of experience, there is a need to balance the current realistic (physical) approach to textiles with the imaginary and the emotional’ (p. 201; original emphasis). Douglas Atkinson’s ‘Post-Industrial Fashion and the Digital Body’ examines the wider sociocultural interactions between fashion and the digital. Post-industrial design represents a major departure from traditional practices and perspectives—‘no longer shaped by the will of a single designer but distributed and accessible, to meet the challenges of a world of networked intelligence, digital tools and ecological woes’ (p. 148). Fashion has long resisted the digital owing to its haptic, material, and embodied principles: today’s problem is that post-industrial designers are gaining digital competencies while losing access to traditional material skills, leading to a divide between the certainty of the older, individual forms and the uncertainty of the digital, collective endeavours. With bodies increasingly subjected to datafication, the challenge facing designers is how to meet the needs and practices of the new post-industrial digital age without sacrificing the long tradition of materialized, embodied engagement that has given fashion its remarkable role in culture. Andrew Pilsch’s Transhumanism: Evolutionary Futurism and the Human Technologies of Utopia situates current transhumanist engagements with digitalized bodies in a broader sociohistorical context. Since its emergence in the 1960s, the transhumanist movement has celebrated the future of human potential in two ways: by means of planetary communication systems and through the application of radical technologies to the body to extend our lifespans and augment our cognitive facilities. As Pilsch notes, transhumanism has fared poorly in academic circles, perceived as something akin to a pseudo-scientific fad at best or a eugenicist cult at worst. Secondly, transhumanists have tended to eschew humanist traditions, focusing almost solely on scientific self-justifications. 
Taking a critical approach to transhumanism, Pilsch nonetheless wishes to engage with it within academic circles by examining, in detail, the sociocultural and literary traditions that have contributed to the development of transhumanism over the last century, particularly through its connections to evolutionary futurism and utopianism. According to Pilsch, transhumanism is ‘a rhetorical mode, a means of creating and seducing through language about the future […] mapping the general flux (raw data) of experience into a specific program for action’ (p. 11). Moreover, he argues that transhumanism can be located within the tradition of twentieth-century evolutionary futurism—a set of rhetorical approaches that explore how machines can assist our evolution and betterment. Chapter 1 looks at evolutionary futurism in avant-garde circles at the start of the twentieth century, in particular tackling the ambiguous function of Nietzsche’s Übermensch. Despite its rejection by transhumanists, Nietzsche’s ‘break from the human’ was influential on early evolutionary futurists, most notably in the mystical Darwinism of P. D. Ouspensky and the feminist futurism of Mina Loy. Chapter 2 explores the relationship between pulp science fiction (SF) from the 1930s and evolutionary futurism. Examining A. E. van Vogt’s serialized novel Slan and John W. Campbell’s stewardship of the magazine Astounding Science Fiction, Pilsch traces how reality and fiction interacted in SF fandom. Perceiving themselves both as outsiders isolated from the mainstream and as Slan-like superhumans, SF fans and authors established a number of utopian communities grounded in the challenges facing human evolution. The third chapter turns to examine the role of suffering in evolutionary futurism in the work of Pierre Teilhard de Chardin, a Jesuit priest. Teilhard’s The Phenomenon of Man (1955) predicted a transhumanist teleology that, reversing traditional Christology, would lead humanity to create God as a cosmic consciousness. Building on the work of Paul Virilio, Jean-François Lyotard, and Vladimir Vernadsky, Pilsch looks at Teilhard’s adoption of Vernadsky’s model of geological evolution: ‘When the geosphere was dominant, matter increasingly complexified through rock and metals until giving rise to the basic elements of life (single-celled organisms). At this point, the biosphere began to blanket the earth as the dominant configuration of matter (moving from mineral to life)’; the next stage, with humanity as the dominant geological, if not cosmic, force, would see the emergence of the ‘noösphere’, or collective intelligence (p. 117). Teilhard posited that matter itself was evolving to create a force in us (as different as life is from the mineral world), driven purely by thought, which would eventually be shared by each individual cognizer as part of a cosmic consciousness. Pilsch’s fourth chapter argues for an ‘aesthetics’ of transhumanism that counters the rationalism which has for many years been the cornerstone of contemporary transhumanist thought. Pilsch discusses Natasha Vita-More’s ‘Transhuman Manifesto’ (1983, revised as the ‘Transhumanist Arts Statement’ in 2003), which argued for an aesthetics that would generate an ethical transhumanism. He then turns to the architectural art of Arakawa and Gins, whose world is guided by a recognition that ‘our built environments, especially the ones in which we live, are part of our bodies and should be considered, definitionally, as such’ (p. 143).
More recently, the New Aesthetic movement has sought to counter the increasing banality of technology in our lives by reinvigorating it with a sense of wonder. Their responses are structured around ‘[t]he way humans increasingly trust algorithms and objects to make decisions for them or to answer questions we previously could never have asked’, signalling an essential transformation in our relationship with technology (p. 159). Pilsch continues to meander between high and low culture, by wrapping up the chapter with a brief consideration of memes, specifically the ‘LOLcats’ phenomenon made (in)famous by the ‘ICANHASCHEEZBURGER’ meme as constitutive of a new language of an emergent collective consciousness still in its infancy. Pilsch concludes his study by reflecting that there can be no return to a pre-computational golden age of humanity, so we must seek to locate the ‘cyborg as a Utopian figure now more than ever’ (p. 186). He pauses upon two radical responses that might effect such utopian outcomes. The first is Alex Williams and Nick Srnicek’s ‘#Accelerate: Manifesto for an Accelerationist Politics’ (2014), which advocates for the end of neoliberalism through the very technoscientific acceleration that capitalism has engendered. Acceleration is not the ‘speeding up’ favoured by transhumanists like Ray Kurzweil and Nick Land, but ‘something as mutational as the introduction of print was to the human sensorium’ (p. 190)—in other words, a paradigm shift. Pilsch’s second utopian aim is to be found in the feminist collective Laboria Cuboniks’ ‘Xenofeminist Manifesto’ (2015), which reminds us that the future is discursively shaped by humanity, rather than reified as an objective telos. ‘[X]enofeminism outlines a version of history in which short-term major goals are sacrificed in favor of long-term accomplishments’ (p. 192). This view has much in common with evolutionary futurism, but is incompatible with contemporary transhumanism, which has typically emerged ‘from cis-gendered, white males who want to move beyond the limits of their privilege’ (p. 193). Rather than extending hegemonic models through technological augmentation, xenofeminism proposes a liberatory praxis: a ‘freedom-to’ rather than a ‘freedom-from’ (p. 194). In highlighting these two recent approaches to our transhuman futures, Pilsch consolidates his reading of an evolutionary futurist history guided by probabilities rather than certainties—and invoking, in the Romantic spirit, a sort of Keatsian ‘Negative Capability’ that provides a refreshing alternative to the positivism of much transhumanist discourse. N. Katherine Hayles is one of today’s leading thinkers about computational culture and posthumanism: her latest book, Unthought: The Power of the Cognitive Nonconscious, builds on recent discoveries in neuroscience ‘confirming the existence of nonconscious cognitive processes inaccessible to conscious introspection but nevertheless essential for consciousness to function’ (p. 1). Hayles’s definition of cognition can be applied to technical as well as biological systems—‘cognition’ here is not to be confused or conflated with ‘thinking’, as it extends to all life forms, including plants and micro-organisms: ‘Cognition is a process: this implies that cognition is not an attribute, such as intelligence is sometimes considered to be, but rather a dynamic unfolding within an environment in which its activity makes a difference’ (p. 25; original emphasis). 
Humans and machines form interdependent systems through ‘assemblages [that] are precisely structured by the sensors, perceptors, actuators, and cognitive processes of the interactors’ (p. 11)—thus, a person becomes part of a cognitive nonconscious assemblage while talking on a mobile phone. The cognitive nonconscious shares similarities with theories of cybernetics, but, as Hayles points out, if cybernetics humanizes the machine, the ‘cognitivist paradigm’ mechanizes the mind. Information should not be seen as separate from the cognitive processes, but as emerging from the embeddedness of an organism/mechanism within its environment. Hayles proposes a tripartite framework for human cognition: at the top are consciousness and unconsciousness, modes of awareness; next is nonconscious cognition, inaccessible but linked to consciousness; finally, we find the material processes that form the basis of all cognitive activities (pp. 27–28). In this context, technologies develop within complex ecologies, and their trajectories follow paths that optimize their advantages within their ecological niches. […] Computational media are distinct […] because they have a stronger evolutionary potential than any other technology, and they have this potential because of their cognitive capabilities, which, among other functionalities, enable them to simulate any other system. (p. 33; original emphasis) Unthought proceeds to read different attributes of cognition through the lens of the cognitive nonconscious, and attempts to signal its relevance to the humanities: ‘At stake is whether ordinary human activities are pervaded by rationality […] or whether nonrational processes also have important roles to play’ (p. 57). Citing recent findings in neuroscience that have demonstrated that the bulk of information-processing is not conscious at all, Hayles speculates that humanistic debates could turn ‘not on the question of whether humans are capable of reason […] but whether reason is central to everyday human action in the world’ (p. 59). These reflections are further complicated by a chapter on the impact of ‘the new materialisms’ in decentring the humanist subject, by their focus on matter as ‘lively’ and ‘vibrant’, their placement of the human on a continuum with nonhuman and material processes, and their emphasis on new kinds of political action. Hayles aligns her model of unthought with new materialist approaches to ontology, evolution, and transformation, as well as Deleuzian categories like survival and force. Sharing the new materialists’ attempts to challenge traditional ideas about the centrality of humans in the world, she identifies her theory of nonconscious cognition as something that ‘enlists the cognitive powers of humans […] while also insisting that nonhumans have cognitive powers of their own’ (pp. 84–85). The second part of Unthought examines the systemic effects of (human–technical) cognitive assemblages, which emphasize ‘the flow of information through a system and the choices and decisions that create, modify, and interpret the flow’ (p. 116). Hayles’s model supplies the study of power and politics missing from other systems theories, such as Bruno Latour’s Actor–Network Theory. Moreover, unlike networks, which have edges and nodes that convey ‘a clean materiality’, assemblages ‘allow for contiguity in a fleshly sense, touching, incorporating, repelling, mutating’ (p. 118).
Because human and technical systems in an assemblage interconnect, the cognitive decisions of one affect the other, and their hybridity raises questions of agency and responsibility. As one of her key examples, Hayles examines the cognitive nonconscious frameworks that operate in high-frequency trading, relying on complex temporalities and illustrating the gap between human and technical cognizers. Tracing how such processes are implicated in the 2008 financial crisis, Hayles observes that the algorithms responsible for processing financial data suffer from both hypermnesia (having to process masses of data in real time) and hypomnesia (having to jettison the data in a matter of seconds once it has served its purpose). ‘Humans may set up these systems, but they are not in complete control of how they operate, evolve, and mutate’ (p. 172). Moreover, alongside other technical devices, cognitive assemblages also include ‘overtly political concerns as racism, gender discrimination, urban infrastructural design, and institutional politics’ (p. 185). Like Pilsch, Hayles concludes her study by examining the utopian potential of cognitive assemblages, considering how the mid-century cybernetic paradigm promulgated by Norbert Wiener and his successors correctly anticipated the emergence of communications between humans, nonhuman life-forms, and machines. However, she argues that cyberneticians were wrong in thinking that feedback mechanisms would enable control in this future: ‘In fact the whole idea of control, with its historical baggage of human domination and exceptionalism, has come to seem increasingly obsolete, if not outright dangerous’ (p. 202). Indeed, the more control is embedded within the computational paradigm, the clearer it becomes that complete control over systems is impossible. This is where a digital humanism can come into its own: the various thinkers, writers, and activists on whose work Hayles builds have one thing in common—they took the time to understand the system in detail, after which they were able to identify where change might be introduced to transform the system dynamics, while drawing on traditions of fair play, justice, sustainability, and environmental ethics. We live in an age where machine cognition is becoming increasingly complex and entangled with that of humans, raising a gamut of ethical and existential challenges. Assemblages promise a fairer and more just world, not only for our futures but for those of nonhumans and indeed machines. Over the past decade, David Berry’s work on theorizing the digital humanities has made an important contribution to the field. His latest intervention, Digital Humanities: Knowledge and Critique in a Digital Age, written with Anders Fagerjord, aims to stimulate a ‘critical digital humanities’. The DH are unique in constituting a field positioned between technology and culture, yet technology is perceived as something done to the humanities: ‘even as the term “digital humanities” has solidified and entered into more general usage, we are keen to acknowledge that digital humanities is, and remains, a contested term’ (p. 32). As noted earlier in this chapter, the drift towards quantification is changing our understanding of ‘research’ (including research by academics), while new technology is accelerating the neoliberal transformation of higher education.
Despite the established links between the DH and ‘making’, they draw deeply on a liberal arts tradition, enabling DH practitioners to provide much-needed critique of the wider digitalization of culture. Yet, ‘[w]ithout a keen critical reflexivity, digital humanities is failing in its normative potential to contribute to the wider humanities, above and beyond its instrumental contributions’ (p. 11). In an era of big data dominated by big platforms, humanists must treat the digital seriously and move away from occlusive contrasts between ‘digital’ and ‘non-digital’. Indeed, ‘[w]orking with digital humanities requires a new kind of critical approach to computational thought, which we call computational thinking’ (p. 40). Computational thinking emerges from an algorithmic phronesis, wisdom that builds on action (in contrast to techné, purely technical knowledge of the craft or art): a kind of ‘know-how’ rather than a ‘knowing that’. The exponential growth of social media over the last decade has transformed modern culture and its everyday practices. The corporate media titans of yesteryear have been toppled by younger companies that have embraced Web 2.0: Facebook, Google, Apple, Amazon (p. 50). The forward momentum of these companies is towards sophisticating artificial intelligence (nowadays relabelled ‘machine learning’) processes in order to improve revenues. In the age of social media, our identities are increasingly algorithmic, making it incumbent on humanists to take a stake in future developments. Indeed, while the DH are making productive use of big data, practitioners can complement distant-reading approaches with ‘nearly close reading’ models, such that ‘the focus should be on research infrastructures that intensify and allow creative forms of humanities research for the twenty-first century, not their replacement by science as a hegemonic form of knowledge creation’ (p. 84). Machine code, for example, can no longer be seen as neutral or ahistorical, but is deeply rooted in the socio-economic circumstances of its generation, stimulating all kinds of aesthetic and ethical questions that demand we read code and its conventions closely. We need to interpret code in order to describe the digital world, especially where deep understanding rather than mere explanation is required. Berry and Fagerjord invoke Bernard Stiegler’s concerns (raised in What Makes a Modern Life Worth Living: On Pharmacology [2013]) regarding the breakdown of the ‘long circuits’ of theoretical knowledge, as digitalization renders the materials we examine increasingly fragmented and the processes we use to analyse them ever more opaque to us: This Stiegler diagnoses as a serious danger to societies as they deconstruct the very structures of education and learning on which they are built. […] Thus, we enter a time of a new illegibility, in which we might say that we can no longer read what we are writing—we increasingly rely on digital technology both to write and to read for us as a form of algorithmic inspiration. (p. 69) The relationship between fragmentation and delimited knowledge circuits in the making of DH resources also points to a key tension within higher education: the growing importance of infrastructure in research projects, particularly when the digital component becomes a fetish for success. The authors inveigh against this economic model of the DH as bringing humanities into instrumentalism. 
Moreover, they insist that while the DH cannot simply translate the ‘old’ humanities into new formats (from ‘paper to file, library to database’), they also should not be ‘the conduit by which the “humanities” are “modernized” or “rationalized”’ to service economic agendas (p. 83). If computers can aid close, qualitative reading, ‘scaling up’ can conversely be seen as a humanistic endeavour, suited to addressing broader cultural and philosophical questions, and not solely the purview of science. ‘[D]igital humanists can actively contribute […] as a form of critique and as a thoughtful contribution to ensuring that critical, intellectual and hermeneutic specificities are defended, as well as being augmented, through new research infrastructures’ (p. 104). For the authors of Digital Humanities, the challenge facing the field is that we need to uncover the politics behind algorithms and software by deconstructing things as well as by building them. What Berry and Fagerjord propose is a plurality of approaches, whereby the critical digital humanities can extend the methods already available through the traditional humanities, while productively slowing down DH work to incorporate reflection and to resist instrumentalism. To meet these ambitions, they propose three areas of focus for DH practitioners who might wish to be guided by ‘a more interventionist and activist role’ (p. 144). Firstly, we need to interrogate how infrastructure interacts with society and culture, for example in the drift towards ‘platformization’ analysed by Srnicek; secondly, we must consider the impact of our data profiles in the digital world, especially where quantification and surveillance by third parties and governmental agencies are involved; finally, we need to engage with ‘how visibility is made problematic when mediated through computational systems’ (p. 146), particularly where gender and race are concerned, ensuring a voice is given to those who might otherwise be excluded by the digital age. James Smithies’s The Digital Humanities and the Digital Modern reminds us that technological modernity has always had its detractors—the resistance to computing within certain enclaves of the humanities is only the latest example. Smithies offers a scathing critique of the anti-DH movement, while pointing out how practitioners of the DH have laid themselves open to attack by their emphasis on production rather than theorization. Nevertheless, as observed by Berry and Fagerjord, the DH are well positioned to interrogate critically modernity’s processes and ideologies. The organizing principle, indeed the counterpart to the DH, is what Smithies terms ‘the digital modern’: To understand the digital humanities we have to apprehend the structural nature of digital culture and society, and we must remember that the field is unavoidably influenced by postindustrial capitalism and its associated sociopolitical machinery. […] The digital modern has little interest in the humanist tradition. It is brittle, contradictory, heterogeneous, networked, hierarchical and non-hierarchical, elitist and democratic. (pp. 18–19) Invoking the work of Anthony Giddens, Smithies locates the digital modern within a second, ‘reflexive modernity’. This second modernity is different from postmodernity, as it makes the consequences of modernity more radicalized and universalized than before. Turning inwards, modernity radicalizes itself by disrupting the certainties of industrial society and replacing them with ‘uncertainty and chaos’ (p. 23). 
Nowhere is this more evident than in the rise of cognitive or information capital, which transcends the boundaries of the nation state, being routed instead through the neoliberal logic of global capitalism. One consequence of second modernity’s instrumentalization of knowledge production is the discourse of ‘crisis’ that attaches itself to the humanities. Smithies examines how this rhetoric has been deployed by conservative commentators in the US, particularly in relation to the DH. (He identifies Francis Fukuyama and David Golumbia as prominent voices in this camp.) Of course, this anti-computational angst is not solely the purview of conservatives, and is shared by left-leaning academics who associate the DH with oppressive mainstream culture and politics. Smithies frames this resistance within a longer tradition of humanist scepticism towards technical work, which has itself been a major achievement of the humanities over the last three centuries. Nevertheless, a more nuanced response is needed, as exemplified by scholars like Alan Liu, whose ‘work presents a critique of digital technology that positions it as a subject area worthy of further development, rather than a threat in need of proscription’ (p. 61). The information society has stimulated the exponential growth in computing as the primary driver of the twenty-first-century global economy, placing the DH at an important critical juncture. Smithies identifies three key specific aspects of computational culture that merit further investigation by the DH: artificial intelligence, cyberinfrastructure, and the ‘multistability’ of computers. In Heidegger’s dystopian vision, as technology advances on its own solipsistic logic it ‘enframes’ our interactions with the world in increasingly insular ways, limiting human action and thought. Artificial intelligence also weakens the power of intellectuals and diminishes the role of expert knowledge—we can see this manifest itself in our increasingly normalized ‘Let’s-Google-that’ instincts. ‘The instability of knowledge related to artificial intelligence and automation feed into a sense of incipient risk’ (p. 81). Cyberinfrastructure is the glue that binds disciplines together, ‘a complex interpretative domain, characterised by a blend of technical, cultural, and sociopolitical factors that combine to resist simple elucidation’ (p. 113). Smithies notes that a ‘systems analysis of the humanities’ must begin with an acceptance that technologies are the very medium of human existence (p. 114). He suggests that technology should be understood as a network of complex sociotechnological systems, deeply embedded in if not constitutive of society itself—in many ways echoing Hayles’s concept of assemblages, discussed earlier in this chapter. Smithies is another commentator pointing to the centrality of platform studies and further examination of open standards in shaping our relationship to cyberinfrastructure. This is especially salient given the multistability of technologies, which he identifies as their ‘tendency to develop organically beyond the intentions of their original designers […] [implying] entanglement between humans and our technologies’ (p. 153). 
Countering Heidegger’s oppositional model of humanity versus technology, Smithies adopts a ‘postphenomenological’ approach that locates technologies as the medium of human existence, focusing in particular on software: ‘When viewed as a technology entangled with human identity and creative expression, software becomes a medium—necessarily limited, like pen, paper, and card catalogue—that humanists can use to gain knowledge about the world’ (p. 154). While postphenomenology can assist our understanding of the entanglements between technology and humanity, for Smithies it is not enough to help us analyse the operation of research itself. His solution is to draw upon ‘postfoundationalism’, which posits that all knowledge incorporates both facts and theories simultaneously: ‘Rather than cleaving to the notion that research can provide access to a world of perfectly objective truth, postfoundationalism claims that robust methods and appropriate levels of critical awareness can lead as close to it as is needed for effective engagement with the world’ (p. 161). Knowledge thus emerges from the interplay between different perspectives and multiple intellects, using a range of tools and methods in multidisciplinary approaches—breaking away from the ‘lone scholar’ model traditionally associated with the humanities. We can employ a panoply of computational tools—algorithmic criticism, distant reading, data-modelling—but we need to recognize these as interpretative, critical interventions rather than as empirical, comprehensive solutions. Indeed, uncertainty itself can productively yield new perspectives when we are faced with massive datasets and multifarious ways of crunching the data—especially during an age when machine-learning processes will continue to separate humanists increasingly from their materials.
Books Reviewed
Berry, David M., and Anders Fagerjord, Digital Humanities: Knowledge and Critique in a Digital Age (Cambridge: Polity, 2017). ISBN 9780745697666.
Broadhurst, Susan, and Sara Price, eds, Digital Bodies: Creativity and Technology in the Arts and Humanities, Palgrave Studies in Performance and Technology (London: Palgrave Macmillan, 2017). ISBN 9781349952403.
Hayles, N. Katherine, Unthought: The Power of the Cognitive Nonconscious (Chicago: University of Chicago Press, 2017). ISBN 9780226447889.
Lane, Richard J., The Big Humanities: Digital Humanities/Digital Laboratories (London: Routledge, 2016). ISBN 9780741574882.
Pilsch, Andrew, Transhumanism: Evolutionary Futurism and the Human Technologies of Utopia (Minneapolis: University of Minnesota Press, 2017). ISBN 9781517901028.
Smithies, James, The Digital Humanities and the Digital Modern (London: Palgrave Macmillan, 2017). ISBN 9781137499431.
Srnicek, Nick, Platform Capitalism, Theory Redux (Cambridge: Polity, 2017). ISBN 9781509504879.
References
Berry, David, ed., Understanding Digital Humanities (Basingstoke: Palgrave Macmillan, 2012).
‘A Digital Humanities Manifesto’ (15 December 2008), [accessed 28 June 2018].
‘The Digital Humanities Manifesto 2.0’ (29 May 2009), [accessed 28 June 2018].
Laboria Cuboniks, ‘Xenofeminism: A Politics for Alienation’ (2015), [accessed 29 June 2018].
Petrusich, Amanda, ‘The Library of Congress Quits Twitter’ (2 January 2018), [accessed 12 June 2018].
Schäfer, Mirko Tobias, and Karin van Es, eds, The Datafied Society: Studying Culture through Data (Amsterdam: Amsterdam University Press, 2017).
Stiegler, Bernard, What Makes a Modern Life Worth Living: On Pharmacology, trans. by Daniel Ross (Cambridge: Polity, 2013).
Svensson, Patrik, Big Digital Humanities: Imagining a Meeting Place for the Humanities and the Digital (Ann Arbor: University of Michigan Press, 2016).
Vita-More, Natasha, ‘Transhuman Manifesto’ (1983), [no longer available online].
Vita-More, Natasha, ‘Transhumanist Arts Statement’ (2003), [accessed 13 June 2018].
Williams, Alex, and Nick Srnicek, ‘#Accelerate: Manifesto for an Accelerationist Politics’, in #Accelerate: The Accelerationist Reader, ed. by Robin Mackay and Armen Avanessian (Falmouth: Urbanomic, 2014).
© The English Association (2018). All rights reserved.
TI - 16. Digital Humanities
JF - The Year's Work in Critical and Cultural Theory
DO - 10.1093/ywcct/mby016
DA - 2018-11-01
UR - https://www.deepdyve.com/lp/oxford-university-press/16digital-humanities-gHwH21AWn2
SP - 305
VL - 26
IS - 1
DP - DeepDyve
ER -