TY - JOUR AU - Ammar,, Jamil AB - Abstract Among our mundane and technical concepts, machine learning is currently one of the most important and widely used, but least understood. To date, legal scholars have conducted comparatively little work on its cognate concepts. This article critically examines the use of machine learning technologies to suppress or block access to al-Qaida and IS-inspired propaganda. It will: (i) demonstrate that, insofar as law and policy dictate that machine learning systems comply with desired constitutional norms, automated-decision making systems are not as effective as critics would like; (ii) emphasize that, under the current envisaged ‘proactive’ role of networking sites, equating radical and extreme ideas and ideology with ‘violence’ is a practical reality; and (iii) outline a workable strategy for cross-border legal and technical counterterrorism that satisfies the requirements for algorithmic fairness. INTRODUCTION Social media operates as an effective platform from which jihadi extremists disseminate their propaganda and reach a worldwide audience. Overwhelming scope and open access have made social media a conspicuous domain for promoting warfare.1 During the last five years, the widespread adoption of networking sites as a global jihadi communication medium has contributed significantly to the international recruitment of fighters.2 As legal scholars and policymakers recognize these problems, pressure on social media and digital intermediaries to play a ‘proactive’ role in curbing the spread of ‘terrorist speech’ is mounting. Precisely how these platforms are supposed to do so, however, remains unclear. This lack of insight affords a disquieting opportunity for broad censorship of content that lacks a meaningful or decisive link to violence. Beginning in Europe, and then extending to the USA, the increased use of social media by jihadi groups was an important motivation for the first generation of modern Internet intermediaries’ laws and initiatives, which partially abandoned the well-established negligence-based system in favour of a more strict liability-leaning approach.3 Anchored in a set of intermediaries’ immunity practices,4 many of these laws and initiatives are still in the developing stages. For example, in its 2016 Digital Single Market Strategy, the European Commission planned to introduce ‘sectorial legislation’ that would effectively impose specific duties of care for certain intermediaries, thus introducing the possibility for extending liability relating to certain categories of illegal content.5 The European Commission’s 2017 communication on illegal content calls for ‘fully automated deletion or suspension of content’ where the circumstances leave little doubt about its illegality, such as with material reviewed by law enforcement authorities.6 Germany’s NetzDG law, which came into effect in January 2018, offers another notable example. Of particular concern in this case is placing the burden on intermediaries to determine, under a strict timetable, when a speech violates the law.7 France and the UK have also ‘considered’ levying steep penalties against users who visit jihadi websites and platforms that fail to find and delete terrorist content within two hours after hosts have uploaded it.8 In the USA, social media companies’ lack of response to these concerns has led many to call for imposing a strict liability regime for platforms failing to disclose or delete illegal materials. 
Professor Susan Klein, for example, has appealed for new legislation that would ‘criminalize social media companies’ failure to discover and release terrorism-related posts to the government’.9 Other controversial ideas include the criminalization of access to websites that ‘glorify, express support for, or provide encouragement for IS’ or even ‘distribute links to those websites or videos’.10 Absent proper evidence-based policy, however, enacting a ‘legislating without evidence’ strategy will surely run afoul of the current lexicon of free speech rights and values in the USA, Canada and theUK.11 Reflecting across geographies, four common threads emerge: (i) The networking industry must go ‘further’ in removing terrorist content;12 (ii) by developing automated filters, the industry needs to delete terrorist content ‘faster’;13 (iii) social media platforms have largely been perceived as ‘enablers’,14 and thus, to a certain degree, a threat to safety, often leading policymakers to seek—and the public at large to demand—additional safeguards in the legal and regulatory arena; and (iv) striking the right balance between free speech and security is identified as a top priority.15 Digital intermediaries and social media platforms have responded to pressure by developing a number of allegedly effective initiatives to counter violent extremism (CVE). On 31 May 2016, the European Commission and leading IT companies (Facebook, Twitter, YouTube and Microsoft) announced the adoption of filtering technology, a shared database of ‘hashes’ for filtering out ‘extremity content as defined in their communities’ standards.16 The Global Internet Forum to Counter Terrorism was established in 2017, sharing filtering technology with relevant parties.17 Facebook has reported that currently more than 98 per cent of CVE is instigated by automated systems, rather than by users.18 Facebook, in particular, used its 2018 report to advance a plan for artificial intelligence to root out, among other things, violent extremism. ‘If we do our job really well, we can be in a place where every piece of content is flagged by artificial intelligence before our users see it’, said Alex Schultz, Facebook’s vice president of data analytics.19 The company did not provide illustrations of removed content, however. Google’s four-fold test is another notable example.20 The tech giant has sought to increase the use of machine-learning technologies and video analysis models to automatically detect hateful and other illegal content.21 Social networking sites and digital intermediaries have evidently devoted enormous technological resources to suppress or block access to al-Qaida and IS-inspired propaganda.22 Evidence suggests, however, that ideology and technology are not necessarily the primary drivers of radicalization; social networks, grievances, vulnerabilities, inclination towards violence are equally influential.23 These foundations, however, are under-represented in most counter-radicalization initiatives24—a lapse of far-reaching significance. To date, despite heavy investment, Western governments have struggled to find practical solutions to CVE. 
In fact, of all their problematic features, most CVE initiatives lack independent and robust evaluation; as such, the security community continues to struggle to measure their impact.25 While logical, focusing narrowly on preventing access to ideological radicalization that lacks a substantial or decisive link to violence risks implying that radical jihadi beliefs are a proxy for terrorism, which is only partially true. Further, a system that concentrates on the ideological causes of radicalization is likely to err on the side of deleting even legal speech related to that ideology.26 In the same vein, excessive focus on the ideological impact of jihadi groups, and thus excessive censorship of material related to Islam, is not only misconceived, but counterproductive. Such a strategy reinforces the narrative that Islam is under attack by the USA and its Western allies and makes them seem like oppressive regimes themselves. The endurance of IS and al-Qaida indicates that, while blocking access to terrorist material should remain a priority, a removal policy (to be addressed further in this article) is insufficient to neutralize the global threat of violent extremism. A better, more affective counterterrorism strategy would address the circumstances by which individuals turn into terrorists, as opposed to tackling violent extremist narratives and ideologies directly.27 In the field of counterterrorism, this ‘move fast and break things’28 policy (in our context, deleting suspicious content that lacks a meaningful or decisive link to violence as quickly as possible) leads to censorship of content with no connection to violence; thus, further fueling jihadi extremists’ misguided perception of being persecuted. It goes without saying that fighting the causes of terrorism and extremism by thwarting the radicalization process is a must. Relying heavily on blocking technology to achieve this formidable task, as this essay will show, is questionable. A draconian blocking policy is significant if only for one reason, to date, little is known about when, if at all, jihadist propaganda leads to violence or how to stop it from doing so.29 A study by the Brookings Institute describes the literature in this field as fraught with ‘anecdotal observations, strongly held opinions, and small data samples derived with relatively weak- or entirely undisclosed-methods’.30 The extent to which machine learning algorithms offer a sustainable solution to terrorist groups online remains largely speculative, and the extent to which they can successfully meet the needs of a complex cast of private and public characters to deal with a cybersecurity-related issue remains to be seen. This article critically assesses the sources and impediments to the progress of a universal counterterrorism policy, offering insights into how legal, technical and economic factors have created a complex intergovernmental environment that both shapes global counter-jihadism policy and encourages myriad non-governmental actors with competing interests to influence it. Based on strong empirical grounding, this article starts by introducing machine-learning technology in the field of counterterrorism. It then highlights a number of factors that significantly hinder its effective deployment to counter jihadist propaganda, training particular focus on al-Qaida’s new propaganda theory. The essay concludes by offering a comprehensive counterterrorism proposal that meets the requirements of algorithmic fairness. WHAT IS MACHINE LEARNING? 
We are on the cusp of a revolution in robotics wherein machine-learning technology is expected to enable innovations that, until recently, were beyond the grasp of human ingenuity alone.31 This technology could be vastly useful, especially in complex fields with a multiplicity of variables, such as cybersecurity, free speech and counter-radicalization. In these spheres, humans face formidable challenges trying to manually design inventions that effectively and efficiently draw the line between deliberate and inciteful terrorist speech and outrageously offensive, dangerous speech that in no meaningful way correlates with violence. In 2018, the number of social media users worldwide reached 3.196 billion.32 Two-and-a-half quintillion bytes of data are created each day.33 Using human moderators to review exclusively violence-related data is costly and very time consuming. In this context, machine learning can be very effectively utilized to quickly and efficiently generate, simulate, and evaluate a large number of potential solutions in no time.34 While these complex algorithms are rapidly encroaching into the field of counterterrorism, it is imperative to stress that clarity about what is legal and what is not is a rare commodity in this complex arena. Material that is manifestly illegal and must without question be deleted—that is, certain terrorist speech or advocacy of violence—as can be seen with ISIS’s brutal and very graphic propaganda is not the primary concern. Due to the scale of this job, however, digital intermediaries and networking sites will continue to use machine-learning technologies to scan voluminous data for content potentially violating their rules. Most automated decision-making capabilities are driven by advanced algorithms, and no one, including the designers of these systems, can get more than a glimpse at the internal workings of the simplest of machine-learning technologies.35 All complex algorithms used by social media giants are inscrutable to observers, which effectively cloaks readily visible bias or discrimination.36 In this context, therefore, given the mounting pressure on social media platforms to do more to curb the spread of violent jihadi propaganda, technological bias (disproportionally deleting ideologically based material that in no meaningful way correlates with violence) is a practical certainty,37 circumstances often further complicated by the fact that the source code of most truly opaque algorithms is propriety trade secret. Facebook’s newsfeed algorithm is a notable example. It is tweaked on weekly basis, taking thousands of metrics into consideration,38 making it often virtually indecipherable even to Facebook’s own engineers.39 More intriguing, disclosing the source code of the algorithm does not necessarily improve transparency.40 Algorithms function by breaking/processing inputs into outputs. Once enough steps and data are added, the algorithm can calculate a great number of possibilities, in some cases, reaching millions of individual data points.41 The ability to act in a dynamic, interactive and purposive modality, however, makes it difficult to understand how it works, let alone how it predicts outputs.42 Given the large number of variables, ‘rationalizing’ the quality of the final output is oftentimes challenging. This complication, perhaps, explains why social networking sites reveal very little information about the number of deleted contents vis-à-vis the number of materials originally flagged. 
The unpredictable nature of machine systems is often referred to as the ‘black box society’, or the ‘black swans’.43 To further explicate this issue, consider the following: at its core, a machine learning algorithm is used to make predictions, which is hardly a unique method. Traditional statistical techniques have long been used to make predictions.44 However, unlike machine learning algorithms, traditional techniques require that the expert first conduct two tasks: she should ‘specify a mathematical equation expressing an outcome variable as a function of selected explanatory variables’, and, secondly, should test the extent to which the data will fit with her choices.45 Equally pressing, the expert should specify equations that represent her beliefs about the functional relationships between ‘independent’ and ‘dependent’ variables. In legal terms, this regression analysis between the two types of variables ostensibly represents the ‘causal relationship’ in the real world.46 By stark contrast, it can be difficult to explain the logic behind the final output of a machine learning algorithm because a counterterrorism system, such as neural network, does not typically extract causal relationships between inputs and outputs.47 That is why, to avoid stretching a cybersecurity system to a breaking point, the law must adjust to ensure a workable counterterrorism system, while keeping public interest in terms of free speech and religious freedom firmly in mind. Should algorithms, therefore, be designed with an extra dose of social responsibility whereby only content with links to violence should be deleted?48 If so, how do platforms technically draw the line between social responsibility and censorship? What counts as pro-jihad propaganda? How shall assessments be made about whether a particular expression should be regarded as manifestly illegal? Counterterrorism machine systems must wrestle with these and many other exigent legal variables. The lack of transparent technological criteria for identifying manifestly terrorist speech suggests that the limitations imposed on the freedom of expression often might not be in line with free speech and religious freedom.49 In theory, the legal rules governing these rights could provide reasonably clear standards. However, the growing use of digital intermediaries by extreme groups, the reliance on subtle religious propaganda material to spread jihadist ideologies, and the increased need for machine learning technologies have made the line between religious speech and political speech difficult to draw, particularly in Canada, the USA, and Europe, where free speech norms place limits on how far prohibitions on religious propaganda activities can reach.50 Some of the key questions here are whether digital intermediaries deploying machine learning decide what information should be censored, when a religious speech turns into a political speech, and when the latter becomes pro-jihadi propaganda? To elucidate the significance of these concerns, next we assess the requirements of an effective counterterrorism machine learning system. FLAGGING TERRORIST CONTENT WITH MACHINE LEARNING A conventional machine learning system contains the following components. The training data:51 Including pro-jihadi propaganda videos, images, sounds and texts that the algorithm will detect. Absent a clear cross-border definition of ‘terrorist content’, radical views, and jihadi propaganda that can be consistently applied, training data will have little value. 
The algorithm that will process the data and make decisions/predictions.52 Broadly speaking, algorithms serve four major functions: Prioritization:53 for our purposes, bringing attention to contents that should be flagged.54 Classification:55 in this context, categorization based on a number of features, such as propaganda videos, sounds or texts. Association: indicates relationship, such a link between a given account on social media and a terrorist group. Filtering: excluding certain information.56 Evaluation/Assessment: a technical criterion by which the performance of the algorithm can be measured.57 Results:58 in this context, results mean effective identification of terrorist materials or radical views (pro-jihadist material) that are ‘likely’ to lead to violence. A machine learning system can make predictions and/or interventions. For our purposes, multilayered neural networks—as an example, often referred to as ‘deep learning’—have been used to make predictions, especially in the case of high-dimensional data, such as images, texts and speech. Neural networks extract patterns with little prior knowledge about variables.59 Their power resides in their ability to produce outputs designed to inform cybersecurity personnel, enhancing their knowledge and understanding.60 Intervention type machine systems produce outputs that are actionable and should be applied directly, such as flagging or deleting propaganda material.The success of machine learning-based decision-making rests on training and access to sufficient data, often a significant body of accurate data. For instance, a dataset under 40,000 examples is barely sufficient to achieve decent accuracy for a machine-based program.61 Gathering sufficient training data is not a straightforward effort. The process is time-consuming, especially, in the case of newly emerged al-Qaida-inspired groups. Any significant change in media strategy requires a lot of new input and re-training if the system is to remain effective. In this respect, IS’s consistent and prolific media machine is the exception rather than the rule. The expanding use of Support Vector Machine (SVM), Neural Networks, Probabilistic Analytics, and Causal Models, however, has greatly increased the volume and variety of users’ collected data.62 Typically, training data is not inherently rivalrous. However, due to its sensitive nature, among other factors, in the field of counterterrorism, most social networking sites impose obstacles to their competitors’ access to training data. As machine learning technologies develop, the sensitive nature of the data, the potential proprietary value of it and the involvement of trade secrecy, to name just a few issues, are likely to further restrict access to training data and thus could hinder development of the technology. COUNTERTERRORISM AND ILLIBERAL TECHNOLOGY Facebook has ceremoniously announced that its automated techniques to eradicate terrorism-related posts are ‘bearing fruit’.63 Facebook’s vice president of data analytics further stressed that artificial intelligence is the means of flagging ‘100 percent’ of inappropriate content before users can even see it.64 The goal of this section is to assess the capacity of machine learning to achieve such accuracy. Before going into details, a few relevant facts must be borne in mind. First, terrorist groups have long diversified their outlets by exploiting alternative platforms as safe havens for their propaganda material. 
Facebook, therefore, while still relevant, is no longer a preferred venue.65 Jihadi groups active on Facebook have noticeably changed key elements of their online radicalization strategies. For instance, al-Qaida in Syria has been systematically working to reposition itself as a more moderate entity; refraining—to a large extent—from using offensive language or violent content. For example, Abdullah al-Muhaysini, a prominent Saudi cleric and one of the most influential Jihadi figures of Hayat Tahrir al-Sham (HTS)—an al-Qaida-affiliated group operating in Syria66—maintains a regular presence on Facebook and YouTube, often podcasting conventional content. Jihad-related material, however, is published on his ‘Maydanyia’ (battleground) Telegram channel. Al-Qaida’s deliberations about modifying its media strategy in Syria began tentatively in 2013, long before Facebook’s effective deployment of machine learning to counter-jihadism. A second distinct, though related, issue is that Facebook’s automated system is designed mainly to curb the influence of well-known terrorist groups and, in general, cannot accommodate new but equally effective insurgents.67 Facebook acknowledges the limitations of its approach by asserting that expanding its efforts to counter the propaganda of other radical groups was ‘not as simple as flipping a switch’.68 Thirdly, although Facebook aims to flag ‘100 percent’ of inappropriate content,69 the tech giant has provided neither examples of removed materials nor information about the number of deleted contents vis-à-vis the number of materials originally flagged. While social media has repeatedly pointed to the volume of deleted content as proof of the technology sophistication, we must be wary of drawing firm conclusions about the effectiveness of machine technology. ALGORITHMIC FAIRNESS: PERCEPTIONS VERSUS REALITY Networking sites do not apply a blanket prohibition on content associated with religious extremism.70 Indeed, social media and digital intermediaries have an economic incentive to voluntarily censor only material that attracts ‘universal’ condemnation, as opposed to removing all content related to hate speech. By working this fine line, networking sites avoid negative criticism. Public image is a prominent influencing factor71—not only does it affect the speed at which a social networking site deletes allegedly illegal content, but also to avoid a public backlash, it could enable algorithmic bias where minority groups can be the target. In this context, the economic incentive to automize the process of identifying and deleting allegedly terrorist content, even without meaningful and regression link to violence, is appealing. Expeditious removal of a ‘widely’ perceived radical content, though not necessarily violent, circumvent both potential litigation costs and high wages for increasingly in-demand employees to assess whether a claim is valid. To stay afloat, social media must avoid a public backlash at all costs. Social media’s large user-base makes it desirable to advertisers, which is why, in the context of countering extremism, from a public image perspective, there is a clear economic incentive for platforms to treat some offensive or radical religious contents less favourably than others. Counterterrorism is a politically charged issue, and stakeholders have an incentive to game against the system. 
Juan Mateos-Garcia speaks of ‘entropic forces’ that degrade algorithm accuracy; he continues: ‘[N]o matter how much more data you collect, it just impossible to make perfect predictions about a complex, dynamic [such as counterterrorism] reality.’72 In the context of fighting jihadist groups—obfuscation of security aside—algorithmic bias, and thus the heavy censorship of jihad-related content without a compelling link to violence, are not only unfair but also counterproductive. Media bias both shapes public opinion and promotes misperceptions that the core doctrine of Islam is the inherent threat—and not, in fact, the interpretations and distortions of Islamic teachings by militant violent extremists. Most recently, a study by Kearns, Betus and Lemieux shows that terrorist attacks committed by Muslim extremists receive 357 per cent more coverage in the USA than those committed by non-Muslims.73 As to why some terrorist attacks receive more media coverage than others, the study concludes that ‘perpetrator religion is the largest predictor of news coverage’.74 This finding is troubling given that 71 per cent of extremist-related killing in the USA (2008–2017) was perpetrated by right-wing extremism, while 26 per cent of attacks were committed by Islamic extremists.75 Some have already raised the alarm. Professor Barbara Perry, for instance, explores how politicians and media misrepresentation have contributed to an environment in which both antipathy towards Muslims and hate-motivated violence are on the rise.76 ‘The rhetoric of hate does not fall on deaf ears’, Perry argues.77 Political demonization of Muslims oftentimes ‘finds its way into policies and practices that further stigmatize the group’, ‘especially … the activities of … security agencies’.78 For our purposes here, networking sites and machine systems are the cornerstones for fighting terrorism. It is fair to assume that political polarization and media mischaracterization fuel algorithmic bias. As discussed in this section, social media must avoid a public backlash at all costs to maintain its large user-base, which makes it desirable to advertisers. Though arguably unfair, as data from the USA show, platforms have a clear economic incentive to treat some offensive or radical religious content less favourably than others, which is why one can fairly assume that technological bias is certain. This inevitability bears heavy consequences for one of the largest religious communities in the world. From the Muslim community’s perspective, heavy censorship cultivates an environment in which violence and antipathy towards non-Muslims can be readily justified. 
Algorithmic fairness, therefore, affects people’s lives and must be considered when considering counterterrorism technologies.79 THE TROUBLE WITH AUTOMATIC FILTERING SYSTEMS For our purposes here, credible literature and real-world examples both seem to overwhelmingly indicate that machine learning technology still has a long way to go.80 YouTube’s presumably sophisticated algorithms failed to spot the difference between news reporting and al-Qaida and IS propaganda videos, and subsequently removed the archive of a UK-based human rights organization documenting war crimes in Syria!81 YouTube’s algorithm placed adverts from some famous brands on videos with hate speech.82 Facebook’s algorithm posted violent videos in its users’ feeds.83 Facebook removed Wikipedia content containing US-designated terrorist organizations, including the ‘Jewish Defense League’, and Google auto-complete algorithm directed people looking for information about the Holocaust to neo-Nazi websites.84 Of significance is that algorithms’ errors are not primarily caused by discriminatory data or the algorithm’s inability to improvise creatively. On the contrary, even when they are generating routine predictions, based on non-biased data, algorithms will make errors.85 A ROLE FOR THE COST? In the field of fighting terrorism, the power of machine systems resides in their ability to reduce costs. Automating the process of identification and deletion of pro-jihadi materials makes economic sense, especially in the age of Big Data. As discussed before in this article, over 2.5 quintillion bytes of data are created every single day.86 In the context of counterterrorism, training data are produced mainly, but not exclusively, by machine learning-generated decisions, combining both the prediction and actionable functions of machine systems. These automated systems will likely gradually assume the function of identifying illegal material from humans especially given increasing pressure to pre-emptively take down abhorrent content (such as the latest terrorist attack in Christchurch in New Zealand which was podcasted live on Facebook)87—a task human moderators would be exceptionally challenged to deliver in timely fashion. For less controversial content, however, human moderators are likely to continue, at least for the time being, to play a role in choosing the appropriate cause of action. To be sure, such a job is not highly sought after. A human moderator job in this field has been characterized as depressing, overburdened and underpaid, and often conducted by people with little to no previous experience.88 Until very recently, a team of only 11 employees was assigned the task of moderating the content of 200 million users on Facebook!89 The scale of the job is simply mind boggling: 2.27 billion users and more than 100 languages are used on Facebook monthly.90 Costs aside, an army of human moderators struggles to review data generated by this number of users in a reasonable amount of time. But technology is not an ideal solution, either. Languages, such as Arabic and Burmese, present natural language processing technology with serious challenges.91 The outcome is far from ideal. 
As participants at the Berkman Klein Center for Internet and Security at Harvard University pointed out, algorithmically flagged material oftentimes violates constitutional law.92 The public neither know how social media algorithms work, nor how human moderators are trained.93 So, while both Google and Facebook have pledged to beef up the number of content moderators,94 their efforts are unlikely to go far enough. It is a matter of time before automated systems take de facto control over the moderation process. The technology is cost-effective and therefore likely to be heavily used. Identifying the risks and opportunities that the algorithmic decision-making process has posed, therefore, is a delicate task that requires acute consideration of the context in which the algorithm functions;95 this is especially so in the case of more complex algorithms used to counterterrorism. ON-DEMAND RADICALIZATION AND JIHADI ARCHIVES Another formidable challenge that a counterterrorism machine learning-based system must tackle is al-Qaeda and ISIS jihadi archives (accessible online and offline). While the last bureaucratic structure of the Islamic State is driven out of the town of Al-Bagouz in eastern Syria,96 it is abundantly clear that the propaganda battle against the IS—especially in the digital terrain—is yet to be won. This impediment is hardly surprising given ISIS’s extensive propaganda production. Its media arsenal includes magazines (such as Rumiyah, available in 11 languages), news updates (examples include Amaq and al-Bayan), and the weekly issue of al-Naba (Arabic language newspaper). Dissemination of this sophisticated, diverse, and reasonably maintained media machine across hundreds of online and offline domains has continued even under circumstances of extreme duress. ISIS has a proven capacity to produce a decent number of videos and magazine even as its fighters are being squeezed in their last urban stronghold in Syria.97 As an end-to-end encryption platform, Telegram provides a number of popular features, such as sharing an unlimited number of photos, videos, or files (doc, zip, mp3, etc.) of up to 1, 5 GB each, enables data storage on the cloud, and allows messages to be automatically deleted using Self-Destruct Timer.98 This combination of security and accessibility features has made Telegram the platform of choice by jihadi groups coordinating propaganda activities before being placed on other social media outlets, such as Facebook, YouTube or Twitter. This ‘on demand’ radicalization archive is quite effective. Typically, using a Telegram private (secret) channel, an online user sends a request for specific propaganda content (a document, an mp3 file, a video, etc.) using either the title of the piece, the name of the jihadi group or any other indicative keyword. Once received, the content can be downloaded and dispensed using different private or public communication channels. Different jihadist channels use Telegram to share content with other mainstream platforms such as Facebook, YouTube or Twitter. In this context, Telegram’s different channels play different roles within the network. Some Telegram channels insure the steady flow of propaganda content online, publishing in concentrated bursts two to three times a week. Other channels provide links to verified jihadi channels. A third type provides news coverage about jihadi activities and commentary related to recent jihadi and political events. 
A final type, such as ‘Idlib Province Radio’ Telegram channel (coordinated by al-Qaeda supporters in Syria and claiming 28,083 followers),99 publishes mainly daily news and only occasionally important or authoritative content disseminated by the core of the HTS movement.100 This wide-ranging archive and media dissemination is oftentimes overshadowed by mounting pressure to expeditiously delete online jihadi propaganda. With the existence of such a vast and authoritative archive, however, while desirable, the use of machine learning to swiftly remove online propaganda content has limited impact in disrupting or dismantling jihadis groups’ media machine, especially the dozens of thousands of Arabic books, articles, images and videos which aim to show the manifestation and realization of jihadist creed and methodology. DYSTECHNIA: TECHNOLOGY DEFICIENCY AND IMPLICATIONS IN THE AGE OF ON-DEMAND RADICALIZATION This section seeks to explain the practical effect of al-Qaeda’s new media strategy, along with the archiving of jihadi core ideology, on-demand radicalization, on the effective implementation of machine learning technology to counter violent extremism. HTS has assiduously sought to re-establish itself as a local (national) political and military power that focuses solely on jihad in Syria.101 Whether or not HTS has, indeed, severed ties with al-Qaida, from a counterterrorism standpoint, this new, less confrontational media strategy could see an expansion of the current wave of jihadist terrorism and possibly breathe new life into the jihadist movement. This new gambit will also make the already-challenging task of identifying terrorist speech harder, as it used to typically contain violent scenes and key incendiary vocabulary. The anonymity feature of social networking sites which embodies speakers making extreme statements by hiding behind an artificial identity will become less relevant, opening the door for wider, more transparent support for terrorists’ ‘allegedly’ moderate agenda. Recent HTS media campaigns on Telegram indicate that al-Qaida is taking this new model seriously. Beginning on 28 July 2016, the date HTS falsely rebranded itself as a local societal organization, the al-Qaeda brand, and animosity towards the West, almost vanished from the 67 Telegram channels run by HTS or its supporters, studied during this period.102 Although it strictly maintains the crux of jihad, this changes in its media strategy challenge the effectiveness of contemporary counterterrorism technologies, including Facebook’s machine learning system.103 As mentioned, an effective counterterrorism system requires a large volume of data for training purposes. Information about newly emerged radical groups and al-Qaida’s new propaganda strategy takes time to develop. To ensure the model’s accuracy, the effective utilization of a machine learning system requires ongoing monitoring and evaluation. Al-Qaida’s current changes in the propaganda strategy can lead a benign program to render undesired outputs; that is, control of a machine-based system varies depending on myriad factors,104 sufficient training data chief among them. 
Accounting for changes in al-Qaida’s propaganda strategy dictates that all components of any deployed system undergo consistent monitoring and evolution, otherwise, the concept of control becomes nuanced.105Absent sufficient, up-to-date data, an accurate model very quickly becomes obsolete.106 Data accuracy and control aside, other terminology-related issues impact the quality and effectiveness of the algorithm. A system’s correctness, and thus accuracy, can only be established relative to the clarity of the task at hand,107 without which even a sound system may not be faithfully implemented.108 From a policy perspective, the correctness of the system partially rests on understanding the nature of the problem the system is trying to solve. Put differently, a machine system must not mistakenly limit radical and extreme ideas and ideology to violence. When it comes to curbing the spread of al-Qaeda and IS propaganda, apart from resorting to violence to achieve its goals, there is hardly any meaningful ideological distinction between Salafi narratives, which are widely spread online, and violent jihadi narrative.109 Put differently, apart from one immensely influential political factor, namely, whether to resort to violence to achieve religious or political goals, both Salafi and jihadi groups are ideologically aligned.110 As such, under al-Qaida’s new propaganda theory, what counts or should be counted as pro-jihad propaganda? How to distinguish ‘terrorist content’ from a controversial religious speech that has no meaningful link to violence such as that enmeshed in the Salafi narrative? This question is relevant for one major reason: The machine learning system can fail due to a lack of precision about the definition and diameter of the system’s objective: to distinguish what should be censored from what should be kept online. An algorithm may be discriminate or yield some other results prohibited by law (such as deleting lawful Salafi narrative/speech) because the criteria to differentiate terrorist from contested speech are not sufficiently distinct. This complication will most assuredly be the case if the training database is solely or overwhelmingly the result of machine learning-informed counter-radicalization decisions. This outcome presents abstract challenges for the legal system which mainly result from the manner in which machine learning works. The increased use of machine learning, combined with a search algorithm that would flag a string of phrases indicative of terrorism, will not only be less effective but will also yield undesired results. Developing a machine learning tool to distinguish illegal speech from controversial, but non-hateful speech requires an annotated dataset111 with a comprehensive set of examples of terrorist speech and non-violent speech clearly delineated and categorized. One has to be extremely optimistic to believe that, given the complexity and geopolitical scope of content removal, online profit-seeking platforms with a global user base are in a better position than courts to impartially define these clearly charged terms. 
In sum, when forming a counterterrorism machine learning-based strategy, account must be taken of not only the terrorists themselves, why they became involved in terrorism, and how to stop others from doing so, but also why other individuals, such as the followers of the Salafi School of Islam, choose not to become involved in political violence.112 The challenge is more complex than merely distinguishing between violent and non-violent speech. A consensus is slowly emerging on the importance of preventing access to al-Qaeda and IS propaganda material, assuming that such a defensive approach persuades individuals not to become involved in political violence. While logical, very little empirical evidence exists to support this consensus, especially since, in some parts of the world, such as the Middle East, people are constantly exposed to extra doses of pro-jihadi material.113 And yet, relatively speaking, few individuals choose to become terrorists. Why do not the overwhelming majority of people, such as the Salafists, who have had the same or similar life experience as terrorists and adhere almost to the same ideology turn to political violence? Little research has been conducted into this question.114 This lapse is troubling, if only because the answers are critical to a thorough understanding of how to avoid the formation of a draconian online strategy to curb the spread of ideologically based materials that might have the non-intended effect of fuelling radicalization and political violence. This issue is far from insignificant: Al-Qaeda’s ‘A Course in the Art of Recruitment’ handbook encourages recruiters to seek individual recruits who are likely to be the most suitable for al-Qaeda; as opposed to approaching volunteer jihadists.115 Based on an understanding of this handbook, it would be wise not to capitulate to pressure by adopting a hard-nosed censorship policy before gaining a more nuanced understanding of the pathway to rejecting violent extremism in favour of more peaceful tactics; if only because individuals inclined to political violence have almost unfiltered access to a rich archive of jihadi propaganda. Avoiding a censorship policy that could stifle connection to, for example, the mostly peaceful dominant Salafi community is a priority that should always be kept in consideration. Counterterrorism is a complex, politically charged problem that is difficult to effectively solve. Various fundamental legal and technological limits continue to challenge different aspects of the technology. Insofar as law and policy dictate that machine learning systems comply with desired norms, counterterrorism strategies cannot be as effective as policymakers would like. Counter-radicalization is an example for which there is no free speech–friendly algorithm. Al-Qaida’s new media strategy can lead tech giants to censor ideas that lead to violence as well as controversial content that does not lead to violence. This outcome is far from insignificant. Most proposals to authorize government officials to force intermediaries to take down offending content do not precisely spell out the type of speech that warrants censorship.116 Deputizing private entities to censor on government’s behalf is, therefore, a business with uncertain outcomes.117 THE WAY FORWARD: ALGORITHMIC SOCIAL RESPONSIBILITY AS A PROGRAMMABLE SET OF LEGAL INSTRUCTIONS FOR ACTION The need to scan voluminous data for harmful content inevitably increases the reliance on technology. 
Automated decision-making informed by algorithms is here to stay and will play a part in many areas of law and law enforcement. They are increasingly used to aid governments and private actors to make more informed evaluations. This section examines why algorithmic transparency and social responsibility are two key tenets of democratic content monitoring and filtering methods. Algorithmic transparency has long been an important concept for holding governments and private entities accountable.118It is considered a means of seeing the motives behind individuals’ actions, ensuring social and legal accountability.119 Transparency permits access to information which influences power relationships between institutions and individuals—in our case, between social media, digital intermediaries and Internet users. The notion of algorithmic ethics is based on the idea that an ethical action can be programmed and automated in a logical language. In theory, ethical programming is possible without machine bias.120 The point here, however, is that the more complex an algorithm is, the more obscure it becomes—so much so that the processes developed by machine learning algorithms to generate certain results cannot even be explained by their developers.121 Machine learning systems have the capacity to make decisions the programmers did not directly anticipate.122 A machine system could rewrite its code from scratch, essentially changing the underlying dynamics of optimization. The key implication for our purposes is that, after reaching some threshold of criticality, a machine learning system can make a significantly intelligent leap123 or not so intelligent jumps.124 The unforeseeability component of the system is an intended feature, enabling it to free itself from cognitive biases and thus to come up with solutions that human thinking may not have considered. Algorithmic opaqueness also serves another important function: it makes the system considerably less vulnerable to manipulation and spam.125 The same feature, however, makes it difficult to verify that the system functions within agreed-upon conditions. Legal scholars and computer scientists have different meanings and functions for algorithmic transparency.126 Legal scholars perceive transparency as a mechanism for viewing the internals of an automated system in order to identify how it works and to address potential risks associated with its operation.127 This is hardly a new concept. Two decades ago, Paul Schwartz and Danielle Citron proposed a solution to mitigate possible undesired outcomes, such as discrimination or suppressing speech, using automated decision-marking.128 In this legal context, algorithmic transparency means that someone ‘ought to be able to look under the hood’129 of complex algorithms such as those used by social networking sites. It is a method to hold the designer of a faulty system accountable.130 For our purposes, transparency should go further than disclosing the source code of the algorithm. 
By itself, this act does not reveal whether the software was used in any particular manner and thus adds very little in terms of legal accountability.131 Computer scientists, in contrast, view algorithmic transparency as a process that can verify that a system functions as it is supposed to.132 In this context, transparency means the production of ‘a tamper-evident record that provides non-repudiable evidence’ of relevant steps (actions) by the automated system.133 The evidence is then used to hold the creator of the system accountable for those actions.134 Technical transparency, therefore, is a step towards legal accountability. Without this measure, it is impossible to hold private or public actors who design a system accountable for their actions.135 In principle, machine learning systems offer better chances of identifying harms, prohibiting outcomes, and banning undesirable uses,136 because an algorithm’s creation and rules, in some systems, are separate from human design and implementation—which is why some machine learning systems are more amenable to meaningful inspection and management.137 The selection of an appropriate algorithm not only impacts the quality of the system, but equally important, the degree to which the inner workings of the system can be interpreted and controlled.138 For example, two popular counterterrorism-related systems, namely, neural networks and SVMs, are listed as the least interpretable.139 In contrast, decision tree and naïve Bayes and rule learners are the most interpretable.140 With the decline of trust in social media,141 despite its limitations, algorithmic transparency could be a way to make accountable terrorist speech monitoring processes. The problem, however, is that machine learning technologies seem to be emerging in a ‘regulatory vacuum’.142 Most traditional methods of regulations143 are ill-suited to effectively manage the risks associated with this technology.144 The challenging characteristics of machine learning make ex-ante regulation also unsuited.145 A machine learning system requires little physical infrastructure and could be built by individuals from dispersed geographic locations.146 More significantly, it is difficult to secure any proof that the system is not designed to engage in, or has parts of it that lead to, undesired or even prohibited acts.147 Different components may be crafted by individuals in widely dispersed geographic locations.148 These and similar concerns demand a new way of thinking. To this, we now turn our attention. COUNTERTERRORISM: SPEED, CENSORSHIP AND COLLATERAL DAMAGE Digital networking sites are private entities. While they are voluntarily bound by free speech and religious freedom norms, they are not directly bound by constitutional law. Further, Internet intermediaries are not under any duty to monitor their services or to affirmatively seek facts indicating illegal activities.149 Still, there is increasing demand for openness and social responsibility to detect any potential misdeeds.150 The literature pertaining to illegal discrimination on the bases of race, gender and religion is particularly relevant151—oftentimes, it is challenging to identify the source of discrimination and thus to find a party to blame.152 There are simply too many factors and players to be considered. Blaming one party ignores context and correlations.153 In theory, machine learning systems could avoid discrimination.154 This potential accountability, nevertheless, is hindered by the use of computational censorships. 
Governments should not encumber private entities with the task of judging the legality of content.155 But, at the same time, one must acknowledge that traditional legal approaches for protecting speech are time-sensitive and ill-suited to security-related issues.156 Such complexities explain the need for a qualified independent team of stakeholders with ‘controlled’ access or insight into the intricate process of monitoring, checking, and even deleting allegedly terrorist content. In this context, algorithmic social responsibility opens the door for more independent examination of the monitoring process. This article contends that it is possible to design a socially responsible counterterrorism machine learning system that reasonably complies with relevant legal standards and practices to suppress or block access to online terrorist content.157 The development of algorithmic tools that aim to integrate legal and technical approaches could help pave the way for a more strategic and systematic method to counter violent jihadi ideologies. This proposal calls upon relevant actors to incorporate modern legal and technological approaches into their algorithmic settings. A well-crafted counterterrorism system could relieve intermediaries from adjudicative functions, while still providing a mechanism to encourage the removal of terrorist content. This proposal necessitates the normative turn towards algorithmic social responsibility as a key tenet of democratic content monitoring and filtering methods. A workable suggestion ought to avoid a scenario in which fundamental rights, such as free speech and religious freedom, meet head on with security, technical, and economic concerns. A sensible proposal (machine learning system), therefore, should be (i) evidence-based and (ii) satisfy the following requirements: There is no empirical evidence to suggest that seeing or sharing extremist content necessarily leads to violence. The link between removing terrorist material and enhancing security is anything but certain.158 The Internet is a one factor rather than the only cause of radicalization.159 A workable system would acknowledge that, in the context of fighting terrorism, blocking access to presumably terrorist content cannot be the only available option. A workable system needs to acknowledge negative side-effects on other areas of law, such as free speech and religious freedom. A workable system should not impose any further restrictions on free speech and religious freedom than is necessary for fighting terrorism and would never subject minority groups to restrictions worse than that could reasonably be imposed using a human-guided monitoring process.160 It should make a distinction between the core doctrine of Islam, and the interpretations and distortions of Islamic teachings by militant violent extremists. Already, over seventy social justice organizations cautioned Facebook that it removes more speech from minority speakers than other dominant groups.161 As the testimony of a number of social platforms witnesses indicates, using human reviews, as opposed to automated systems, does not necessarily solve the over-removal problem.162 Finally, the system would provide ‘workable’ legal clarity as it relates to the definition of terrorism, terrorist content and religious hate speech. Terrorist speech is transnational in nature. A conventional global counter-radicalization regime, however, exists in the context of a multilateral response to the cyber risk that is, at best, partial. 
Two factors contribute to the institutions’ struggle to adjust to this complex interplay. The first stems from the technical complexity associated with the task of curbing the spread of terrorist speech; the second relates to the lack of harmonization in legislation created by different nations in their attempts to deal with this global phenomenon.163 To harness the strengths and capabilities of the public and private sectors in offering a practical solution to the spread of jihadist agenda, this article proposes the establishment of International Multi-Stakeholder Technology and Counter Radicalisation Body (IMTCRB); a not-for-profit organization responsible for providing consistent recommendations in the field of counterterrorism. Ideally, this ‘specialised’ Body should be composed of independent legal experts with a strong technical background and access to secret security-related technical information, to react to new technology, to report on performance under existing legal norms, and, more generally, to keep the bigger picture in mind while advising social media companies on counterterrorism-related issues.164 The Body would have two components: evidence-based decision making and certification. The decision-making division would ensure that decisions are based on evidence to avoid the unfair targeting of any particular community. More specifically, the Body must counter and redress policies that equate radical and extreme ideas and ideology with violence. In this context, contemporary counter jihadist movement policies must acknowledge the limitations of machine learning technologies. Currently, governments and the security services have been drawn into a rather fruitless pattern of Internet censorship, in which precious resources are wasted on an elusive war to block access to al-Qaeda and ISIS propaganda on social media, while a comprehensive archive of core jihadi material can be accessed online and/or offline almost effortlessly, thereby capturing only a small percentage of the problem. It should be clear by now that concepts such as terrorism and radicalization are complex and very context-dependent and may never have universally agreed-upon definitions.165 Lack of precision about the definition and diameter of a system’s objective (what should be censored and what should be kept online) render it less effective. In this context, the British and French ‘action plan’ aimed, amongst other thing, at preventing the Internet from being used as a ‘safe space for terrorist and criminals’ is helpful.166 To encourage ‘technical and policy solutions to rapidly remove terrorist content on the internet’, the French and British governments advocate for a new regulation to identify what constitutes ‘unacceptable content online’.167 The ‘Five Eyes’ intelligence alliance (an alliance comprising Australia, New Zealand, Canada, the UK and the USA) has endorsed critical efforts to remove extremist material and terrorist manuals from the Internet.168 This productive first step should be conducted while keeping in mind that jihadi violent groups have been relentlessly seeking to securitize democratic societies (i), to undermine free speech norms and (ii), to divide the world into two opposing camps and two trenches, ‘with no third camp present’: ‘The camp of Islam and faith, and the camp of kufr (disbelief) and hypocrisy’ firmly in mind (iii). 
For these reasons, from counterterrorism’s standpoint, shifting the barometer of willingness to participate, profit-seeking social media platforms’ actions could potentially be counter-productive by fuelling sectarianism even further. In this regard, Ian Ayres and John Braithwaite’s stimulating work on ‘responsive regulation’ is helpful, providing as it does indicate where counterterrorism policies could be moving.169 Ayres and Braithwaite envisage a sliding scale of government policies to secure the compliance of industry—in this context, social media platforms—moving from attempts to persuade to significant punitive measures at the top, namely, revocation of licences (or accounts in the social media sphere).170 Here, while regulators should be prepared to ‘signal the capacity to get as tough as needed’ to secure compliance, it is assumed that both the regulators and industries favour voluntary and cooperative mechanisms. The most recent example is Australia’s draft law to crack down on violent videos on social media. Drafted in the wake of the Christchurch terrorist attack in New Zealand,171 the law seems to open the possibility of criminalizing anyone working for social media platforms for a failure to remove ‘abhorrent violent material’, defined by the law as ‘videos that show terrorist attacks, murders, rape or kidnapping’.172 Penalties for convicted individuals can be up to three years imprisonment or a $2.1m fine, or both. Corporate penalties can go up to $10.5m or 10 per cent of annual turnover.173 Enforced by civil or criminal penalties, the ‘securitization’ of social media platforms will eventually force these companies to adopt an aggressive censorship policy, with little to show by way of improved security. So how might a socially responsible counterterrorism machine learning system be applied to more effectively and rigorously remove extremist content online without stifling free speech? Across geographies, there is a consensus that the networking industry must go further and faster in removing terrorist content. Two tasks are required to properly achieve this goal. The first is to develop automated filters; the second is to identify what constitutes ‘unacceptable content online’ in the first place. There is a need therefore to draw a line between two classes of speech: (i) automatically (expeditiously) deletable content, manifestly illegal speech, such as IS online graphic videos or the divisive and incendiary speeches of identified terrorists and (ii) other contested material, such as ideologically based propaganda, such as the shared narrative of Salafi and jihadi groups. This latter category of speech should always be subject to human review. Armed with independent legal experts who have a strong technical background and access to secret security–related technical information, the Body would be well-equipped to provide guidance as to how best to draw this distinction without unduly violating existing free speech and religious freedom norms. Fully automated (swift) deletion or suspension of content then could be applied only to content whose circumstances leave little doubt about its illegality. The certification division will have the power to issue a verifiable public certificate, a workable ‘Free Speech Stress Test’ (FST) for the algorithms themselves. Combatting hair-trigger vilification is crucial. 
As Professor Barbara Perry argues, political demonization of Muslims oftentimes ‘finds its way into policies and practices that further stigmatize the group’, ‘especially the case … of … security agencies’.174 Social media and machine systems are the cornerstones of fighting terrorism. Algorithmic fairness dictates that machine learning algorithms be examined from the perspectives of both accuracy and fairness.175 Aligned with this theory, this voluntary certification system would analyse an algorithm on the basis of its identification and deletion strategies. Over-removing or biased algorithms must not be used. The sole aim of the FST is to ascertain an algorithm’s suitability for the field of counter-radicalization,176 especially given that some machine learning systems are more amenable to meaningful inspection and management than others.177 In the context of fighting violent jihadism, relying on complaints and a user flagging system is clearly not a workable policy.178 The Body could also stress the importance of reputational deterrents, acknowledging the efforts of networking platforms that adopt active policies to responsibly remove manifestly illegal content. I am not claiming that this Body should directly influence social networks’ algorithms. The lack of ‘universal’ consensus about what terrorism, terrorist speech, hate speech or the public interest looks like makes the task of predicting how an algorithm should respond to content posted online ever more challenging. The proposed Body, however, would provide some clarity as to the framework needed to circumvent de facto discrimination or algorithmic bias under the guise of the global war on terrorism. To this end, it requires some verifiable evidence to assure different stakeholders that a given system is acting under the agreed-upon standard. The Body, finally, is to encourage coherent, effective, evidence-based, cross-border approaches to counterterrorism.

CONCLUSION

It is doubtful that machine learning systems will correctly and consistently flag every example of terrorist content before users see it. The characterization of content is a question of subjective judgment, culture, values and legal experience that machine learning may not process soundly. The task is demanding, and the temptation to mistakenly equate certain ideological speech with violence is difficult to resist. Cost reduction, political polarization and mass-media characterization are likely to push algorithmic decision making in the wrong direction. Compelling networking sites to promptly delete manifestly unlawful speech,179 while desirable, is also problematic. Deputizing the task of rooting out terrorist content to profit-seeking businesses will almost certainly yield undesired outcomes, or what Yale Law Professor Jack Balkin calls ‘collateral censorship’.180 Concerns here revolve around the means rather than the ends. The author accepts that the main objective in compelling social media to perform a more proactive role is to face the challenges posed by IS- and al-Qaida-inspired terrorism (and indeed any terrorism). The concerns are not with this objective but with how social media purport to achieve this important goal. The author argues that, in their design and manner of addressing the legitimate goal of fighting terrorism, machine learning systems, if heavily used, will chill freedom of expression, potentially unfairly target the Muslim community, and at the same time not make us safer from genuine terrorist threats.
Given the challenges of responding to changes in propaganda strategies by terrorist groups, social media and digital intermediaries will ultimately censor all manner of speech only remotely linked to risks of violence. Machine systems may suppress lawful speech as the means to suppress terrorist speech. To date, there is no judicially tested technical method by which a selected number of qualified individuals outside a given private entity are given a chance to question, investigate or challenge decisions made by an automated system. Moreover, the legal process of algorithmic transparency is complicated by the adoption of computational censorship and the opacity of its automated decision-making capabilities. This article therefore proposes the establishment of an IMTCRB as a possible answer to these concerns.

An early version of this article was presented at the Nathanson Centre, Osgoode Hall Law School. Special thanks to Mary G. Condon, Craig M. Scott and Saptarishi Bandopadhyay. The author would also like to acknowledge the financial support of the Institute of International Education, Scholar Rescue Fund Program, New York (IIE-SRF).

Footnotes

1 Jamil Ammar and Songhua Xu, When Jihadi Ideology Meets Social Media (Palgrave Macmillan 2018) 25. 2 ibid. 3 Commission, ‘Impact Assessment, Accompanying the document Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online, Staff Working Document’, COM (2018) 640 final, SEC 397 final, SWD 409 final, 73; see also, Commission, ‘Communication from the Commission to the European Parliament, public consultation on the regulatory environment for platforms, online intermediaries, data and cloud computing and the collaborative economy’ COM (2016) 288 final (2016), 8 and finally, Susan Klein and Crystal Flinn, ‘Social Media Compliance Programs and the War Against Terrorism’ (2017) Harvard Nat’l Sec J 8, 53, 112 accessed 19 January 2019. 4 See, in general, Arts 13, 14 and 15 of Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) OJ L178/1; see also, 17 US Code § 512—Limitations on liability relating to material online—of the US Digital Millennium Copyright Act. 5 Impact Assessment and Communication from the Commission to the European Parliament (n 3). 6 Commission, ‘Communication on Tackling Illegal Content Online - Towards an enhanced responsibility of online platforms, Brussels’, COM (2017) 555 final, 14. 7 Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act), Bearbeitungsstand: 12 July 2017, sections 3 and 4. 8 Natasha Lomas, ‘UK to Hike Penalties on Viewing Terrorist Content Online’ (TechCrunch, 3 October 2017) accessed 19 January; Chloe Farand, ‘France Scraps Law Making “Regular” Visits to Jihadi Websites an Offense’ Independent (London, 10 February 2017); Robert Hutton, ‘Tech Firms Face Fines Unless Terrorist Material Removed in Hours’ (Bloomberg, 19 September 2017) accessed 19 January 2019. 9 Klein and Flinn (n 3). 10 Eric Posner, ‘ISIS Gives Us No Choice but to Consider Limits on Speech’ (SLATE, 15 December 2015) accessed 19 January 2019. 11 Ammar and Xu (n 1) 61 (Chapter 4, Extreme Groups Propaganda War Under a Free Speech Lens: The Unwinnable Battle). 12 Theresa May's Davos address (World Economic Forum, 25 January 2018) accessed 19 January 2019.
13 May (n 12). 14 See, Canada Center for Community Engagement and Prevention of Violence, National Strategy on Countering Radicalization to Violence (2008), in particular, priority 2: ‘Addressing Radicalization to Violence in the Online Space’, 23. 15 Lomas, Farand, and Hutton (n 8). 16 European Commission—Press Release, ‘European Commission and IT Companies announce Code of Conduct on illegal online hate speech’ (Brussels, 31 May 2016). 17 Extremist Propaganda and Social Media (C-SPAN, 17 January 2018) accessed 19 January 2019. 18 Sheera Frenkel, ‘Facebook Says It Deleted 865 Million Posts, Mostly Spam’ (The New York Times, 15 May 2018) . 19 Frenkel (n 18). 20 Google, Press Release, ‘Four Steps We’re Taking Today To Fight Terrorism Online’ (Google in Europe, 18 June 2017) . 21 Google, Press Release (n 20). 22 Facebook Newsroom, Hard Questions: What Are We Doing to Stay Ahead of Terrorists? November 8, 2018, . 23 See, in general, Keiran Hardy, ‘Comparing Theories of Radicalisation with Countering Violent Extremism Policy’ (2018) J Deradicalisation 15, 98, accessed 19 January 2019. See also, Canada Center for Community Engagement and Prevention of Violence, National Strategy on Countering Radicalization to Violence (2008), 8–9. 24 Hardy (n 23). 25 Vidhya Ramalingam and Henry Tuck, The Need For Exit Programmer (Institute for Strategic Dialogue 2014) 1. See also, Tinka Veldhuis, Designing Rehabilitation and Reintegration Programmes for Violent Extremist Offenders: A Realist Approach (2012) 17. 26 Sam Levin, ‘Civil Rights Groups Urge Facebook to Fix Racially Biased Moderation System’ (The Guardian, 18 January 2017) accessed 19 January 2019. 27 Such a pragmatic approach was adopted by Denmark and Sweden. See Government of Denmark (2009), A common and safe future: An action plan to prevent extremist views and radicalization among young people, ; see also Swedish Ministry of Justice (2011), Action plan to safeguard democracy against violence-promoting extremist, 9, . 28 Coined by Facebook founder Mark Zuckerberg. Hemant Taneja, ‘The Era of “Move Fast and Break Things” Is Over’ (2019) Harvard Bus Rev . 29 JM Berger and Jonathon Morgan, ‘The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter’(2015) Brookings Project on U.S. Relations with the Islamic World Analysis Paper No 20 accessed 19 January 2019. 30 Berger and Morgan (n 29) 53. 31 Alva Noe believes we are approaching a ‘singularity’, the point in time when machine technologies will be able to invent increasingly more intelligent inventions. See, in general, Alva Noe, ‘The Ethics of the “Singularity”’ (NPR, 23 January 2015) accessed 19 January 2019. 32 Dave Chaffey, Global Social Media Research Summary 2019 (Smart Insights, 12 February 2019). 33 DOMO, Data Never Sleeps 5.0 (1 April 2019), . 34 Paul Marks, ‘Eureka Machines’ (New Scientist, 2015) 227, 32–35 accessed 19 January 2019. 35 Frank Pasquale, The Black Box Society. The Secret Algorithms that Control Money and Information (Harvard University Press 2015) 1. 36 Sam Levin, ‘Civil Rights Groups Urge Facebook to Fix Racially Biased Moderation System’ (The Guardian, 18 January 2017) accessed 19 January 2019. 37 Some have already raised the alarm, see, Sam Levin, ‘Civil Rights Groups Urge Facebook to Fix Racially Biased Moderation System’ (The Guardian, 18 January 2017) accessed 19 January 2019. 38 Somaiya Ravi, ‘How Facebook Is Changing the Way Its Users Consume Journalism’ (The New York Times, 26 October 2014) accessed 19 January 2019. 
39 The Ethics of Algorithms: From Radical Content to Self-driving Cars (The Center for Internet and Human Rights 2015) accessed 19 January 2019. 40 Deven Desai and Joshua Kroll, ‘Trust but Verify: A Guide to Algorithms and the Law’ (2017) 31(1) Harvard J Law Technol 10 accessed 19 January 2019. 41 The Center for Internet and Human Rights (n 39). 42 ibid. 43 Frank Pasquale (n 35). 44 Cary Conglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-learning Era’ (2017) 105 Geo L J 1147, 1157. 45 Conglianese and Lehr (n 44). 46 ibid. 47 ibid. 48 For the purpose of this piece, social responsibility means taking all legal and technical measures necessary not to unfairly impinge on the constitutional rights of any particular group, in this case, the Muslim community 49 Ammar and Xu (n 1) 61. 50 ibid. 51 See, in general, Christopher Bishop, Pattern Recognition and Machine Learning (Springer 2006) 4-58. 52 ibid. 53 Nicholas Diakopoulos, ‘Algorithmic Accountability’ (2014) Digital Journalism 1–18 accessed 19 January 2019. 54 ibid. 55 ibid. 56 ibid. 57 Bishop (n 51). 58 ibid. 59 Ammar and Xu (n 1) 112. 60 ibid. 61 Prakash Jay, ‘Transfer Learning Using Keras towards Data Science’ (15 April 2017) accessed 19 January 2019. 62 Ammar and Xu (n 1) 112. 63 ‘Facebook's AI wipes terrorism-related posts’ (BBC News, 29 November 2017) accessed 19 January 2019. 64 Frenkel (n 17). 65 See, for example, Martyn Frampton, Ali Fisher and Nico Prucha, The New Netwar: Countering Extremism Online (Policy Exchange 2017), the ‘Information Environment- the Problem of “Findability”, and Referrers’. 66 HTS has been designated as terrorist group in the USA, Canada and the UK. The Department of State has amended the designation of al-Nusrah Front—an al-Qa’ida affiliate in Syria—to include Hay’at Tahrir al-Sham (HTS) and other aliases (31 May 2018) ; Regulations Amending the Regulations Establishing a List of Entities: SOR/2018-103, Canada Gazette, Part II, Volume 152, Number 11, 23 May 2018, . See also, UK Home Office, Proscribed terrorist groups or organizations (1 March 2019), . 67 BBC News (n 63). 68 ibid. 69 Frenkel (n 18). 70 For example, Facebook permits the sharing of ‘objectionable content’, such as hate speech, for the purpose of raising awareness or educating others. Facebook Community Standards, accessed 2 May 2019. 71 See, for example, Caroline Corbin, ‘Terrorists Are Always Muslim but Never White: at the Intersection of Critical Race Theory and Propaganda’ (2017) 86(2) Fordham L Rev 455, 455 accessed 19 January 2019. 72 Juan Mateos-Carcia, ‘To Err Is Algorithm: Algorithmic Fallibility And Economic Organisation’ (NESTA, 10 May 2017), accessed 19 January 2019. 73 Erin Kearns, Allison Betus and Anthony Lemieux, Why Do Some Terrorist Attacks Receive More Media Attention Than Others? (2019) Justice Quart 10. 74 Kearns, Betus and Lemieux (n 73) 1. 75 Anti-Defamation League, . 76 Barbara Perry, ‘All of a Sudden, There Are Muslims’: Visibilities and Islamophobic Violence’ (2015) 4 Int J Crime, Justice Social Democracy 3, 4–15, 9 accessed 19 January 2019. 77 Perry (n 76). 78 ibid 8–9. 79 See, in general, Micah Altman, Alexandra Wood and Effy Vayena, ‘A Harm Reduction Framework for Algorithmic Fairness’ (2018) 16(3) IEEE Security & Privacy 34–45 accessed 19 January 2019. 80 ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’ (The Center for Democracy and Technology, 28 November 2017), accessed 19 January 2019. 
81 Malachy Browne, ‘YouTube Removes Videos Showing Atrocities in Syria’ (The New York Times, 22 August 2017) accessed 19 January 2019. 82 Olivia Solon, ‘Google's Bad Week: YouTube Loses Millions As Advertising Row Reaches US’ (The Guardian, 25 March 2017) accessed 19 January 2019. 83 Olivia Solon, ‘Live and Death: Facebook Sorely Needs a Reality Check About Video’ (The Guardian, 26 April 2017) accessed 19 January 2019. 84 Carole Cadwalladr, ‘Google, Democracy and the Truth About Internet Search’ (The Guardian, 4 December 2016) . 85 Mateos-Carcia (n 72). 86 ‘Data Never Sleeps’ (DOMO, 2018), accessed 19 January 2019. 87 ‘Christchurch Massacre: PM Confirms Children Among Shooting Victims – As It Happened’ (The Guardian.co.uk, 19 March 2019) . 88 Olivia Solon, ‘Underpaid and Overburdened: the Life of a Facebook Moderator’ (The Guardian, 25 May 2017), accessed 19 January 2019. 89 Alexis Madrigal, ‘Inside Facebook's Fast-Growing Content-Moderation Effort’ (The Atlantic, 7 February 2018) accessed 19 January 2019. 90 The number of monthly active Facebook users worldwide, as of the 3rd quarter of 2018 (in millions) accessed 19 January 2019. 91 The Arabic language is unique in terms of it diglossic nature, structure, and more importantly for our purpose here, its inseparable link with Islam; a significant point given that ubiquitous Sunni leaning religious groups disseminate hate speech in Arabic, and a large percentage of terrorism acts occur in Arabic speaking countries. 92 Jeffrey Rosen, ‘Harmful Speech Online: At the Intersection of Algorithms and Human Behavior’ (Berkman Klein Center for Internet and Security, 8 August 2017) accessed 19 January 2019. 93 ibid. 94 Michael Lev-Ram, ‘Why Thousands of Human Moderators Won't Fix Toxic Content on Social Media’ (Fortune, 22 March 2018) accessed 19 January 2019. 95 Lorena Jaume-Palasí and Matthias Spielkamp, ‘Ethics and Algorithmic Processes for Decision Making and Decision Support’ (Algorithm Watch, 1 June 2017) 5 accessed 19 January 2019. 96 Ben Wedeman and Lauren Said-Moorhouse, ISIS has lost its final stronghold in Syria, the Syrian Democratic Forces says (CNN, 23 March 2019) . 97 For example, IS produced two substantial propaganda pieces in Bagouz, Syria, entitled, ‘The Islamic State’, ‘Covenant and Steadfastness’, Sinai Wilayah (26 March 2019), and ‘In the Hospitality of the Amir of the Faithful Shaykh Abu Bakr al-Hussayni al-Quryshi al-Bagdadi’ (29 April 2019). 98 Telegram FAQs; How secure is Telegram? ; How do self-destructing messages work? . 99 As of 2 April 2019. Source: Telegram/me. 100 Observations are based on two years examination of al-Qaeda’s media activities online. Some of the channels run by HTS or its supporters in Syria followed in this study lasted for few months, others for over a year. 101 We discussed elsewhere how the Islamic Front, an umbrella organization, that once controlled more than 70,000 fighters in Syria, tried to build a bridge with the West by issuing two diametrically opposite manifestos, one in English speaks to Westerns, and another in Arabic speaks to locals. The English manifesto makes it very clear that, as an organization, the Islamic Front is ‘far from religious fanaticism and its resulting deviation of creed and action’. The Front’s English manifesto states that the organization believes in reform, solidarity and coexistence. The Front’s Arabic manifesto, however, reveals a very different story. 
It states: the Islamic Front is an ‘Islamic, military, political and social Body seeking to … establish an Islamic State where God’s law is the only source of legislation for the individual, the society and the state’. Jamil Ammar and Songhua Xu, ‘Yesterday’s Ideology Meets Today’s Technology: A Strategic Prevention Framework For Curbing The Use Of Social Media By Radical Groups’ (2016) Albany Law J Sci & Technol 26(2) 257. accessed19 January 2019. 102 Some of the channels run by HTS or its supporters followed in this study lasted for few months, others for over a year. 103 BBC News (n 63). 104 Jatinder Singh and others, ‘Responsibility & Machine Learning: Part of a Process’ (2016) University of Cambridge Computer Laboratory 4 accessed 19 January 2019. 105 Singh (n 104) 11–12. 106 ibid 11. 107 See, in general, Douglas Dunlop and Victor Basili, ‘A Comparative Analysis of Functional Correctness’ (1982) 14 ACM Computing Surv 229 accessed 19 January 2019. 108 See, in general, Desai and Kroll (n 40) 24. 109 Ammar and Xu (n 1) 25–59 (Extreme Groups and Militarization of Social Media). 110 ibid. 111 For a good account on how machine learning systems are trained, see The Center for Democracy and Technology (n 80). 112 See generally, R Kim Cragin, ‘Resisting Violent Extremism: A Conceptual Model for Non-Radicalization’ (2014) 26 Terrorism Political Violence 337–53, 337. 113 Ammar and Xu (n 1) 25–59 (Extreme Groups and Militarization of Social Media). 114 It is still quite difficult to pinpoint when an individual becomes radicalized. See, for example, Sophia Moskalenko and Clark McCauley, ‘Measuring Political Mobilization: The Distinction between Activism and Radicalism’ 21(2) (2009) Terrorism Political Violence 239, 242. 115 Abu Amr al-Qa’idi, A Handbook In the Art of Recruitment, trans, Abdullah Warius (West Point Counter Terrorism Center 2010) 2. 116 Take the UK as an example. The UK Terrorism Act of 2006 prohibits the ‘glorification’ of terrorism. The Act also criminalizes a ‘statement that is likely to be understood by some or all of the members of the public to whom it is published as a direct or indirect encouragement or other inducement to them to the commission, preparation or instigation of acts of terrorism or [other] offences’ (this would count as encouragement, The Terrorism Act of 2006, ch 11). The UK’s attempt to proscribe the so-called ‘terrorist speech’ would be impermissible in the USA and Canada. In the USA, the First Amendment and some of the case law of the US Supreme Court seem to permit a ‘peaceful’ advocacy for the moral propriety of violence, such as Jihad, and the moral necessity for a resort to force and violence. ‘The mere abstract teaching … of the moral propriety or even moral necessity for a resort to force and violence is not the same as preparing a group for violent action’ and thus cannot be restricted by the government (Brandenburg v Ohio, 395 US 444, (1969) 448). In Canada, Section 319(1) provides that ‘everyone who, by communicating statements in any public place, incites hatred against any identifiable group where such incitement is likely to lead to a breach of the peace is guilty of … an indictable offence … or an offence punishable on summary conviction’. However, section 319 (3) of the Canadian Criminal Code provides a strong defence to a person expressing, in good faith, or attempted to establish by an argument an opinion on a religious subject or an opinion based on a belief in a religious text. Criminal Code (R.S.C., 1985, c. C-46). 
117 New York Times Co v Sullivan, 376 US 254 (1964), the Supreme Court of the USA confirmed that it would not delegate the fight to sanction speech to third parties. See also, Sorrell v IMS Health Inc, 564 US 552 (2011), in this case, the Supreme Court of the USA has confirmed that ‘information is speech for First Amendment purposes’. 118 Jack Balkin, ‘How Mass Media Simulate Political Transparency’ (1999) Yale Law School Faculty Scholarship Series 259 accessed 19 January 2019. 119 ibid. 120 Jaume-Palasí and Spielkamp (n 95) 5. 121 ibid. 122 ‘The Ethics of Algorithms: From Radical Content to Self-driving Cars’ (The Center for Internet and Human Rights, 2015) accessed 19 January 2019. 123 Eliezer Yudkowsky, ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’ (Machine Learning Research Institute, 2008) accessed 19 January 2019. 124 Browne (n 81). 125 The Center for Internet and Human Rights (n 39). 126 Desai and Kroll (n 40) 7. 127 ibid 8. 128 Paul Schwartz, ‘Data Processing and Government Administration: The Failure of the American Legal Response to the Computer’ (1992) 43 Hastings L J 1321, 1343–74, ; see also, Danielle Citron, Technological Due Process (2008) 85 Wash U L Rev 1281–88, accessed 19 January 2019. 129 Frank Pasquale (n 35) 165. 130 Desai and Kroll (n 40) 8. 131 ibid 10. 132 Andreas Haeberlen, Petr Kuznetsov and Peter Druschel, ‘Practical Accountability for Distributed Systems’ (2007) 41 ACM SIGOPS Operating Sys Rev 175; see also, Desai and Kroll (n 29) 10. 133 Haeberlen, Kuznetsov and Druschel (n 132); see also, Desai and Kroll (n 40) 10. 134 ibid. 135 Desai and Kroll (n 40) 11. 136 The United State Federal Trade Commission, ‘Trade Comm’n, Big Data: A Tool for Inclusion or Exclusion?’ 1 (2016) 5–12 accessed 19 January 2019. 137 Singh (n 104). 138 ibid. 139 ibid. 140 ibid. 141 Nicholas Confessore, ‘Cambridge Analytica and Facebook: The Scandal and the Fallout So Far’ (The New York Times, 4 April 2018) accessed 19 January 2019. 142 Aileen Graef and Elon Musk, ‘We are ‘Summoning A Demon’ with Artificial Intelligence’ (UPI, 27 October 2014) ≤www.upi.com/Business_News/2014/10/27/Elon-Musk-We-are-summoning-a-demon-with-artificial-intelligence/4191414407652/> accessed 19 January 2019. 143 Such as, product licensing or tort liability. 144 Matthew Scherer, ‘Regulating Artificial Intelligences System: Risks, Challenges, Competencies, and Strategies’ (2016) 29(2) Harvard J Law & Technol 356 accessed 19 January 2019. 145 Scherer (n 144) 357. 146 ibid 356. 147 ibid 357. 148 ibid. 149 See Digital Millennium Copyright Act of 1998, Public Law No 105-304, section 512 (the USA), Arts 12, 13 and 14 of Council Directive 2000/31 (Mere Conduit—passive role—cashing and hosting) 2000/31/EC, Official Journal of the European Communities, L 178, Volume 43, 17 July 2000; and Section 31.1 of the Canadian Copyright Modernization Act, SC 2012, c 20). 150 Frank Pasquale, ‘Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries’ (2010) 104 Nw U L Rew 1, 166. 151 See, for example, Latanya Sweeney, ‘Discrimination in Online Ad Delivery’ (2013) 56 COMM ACM (suggesting the existence of noticeable difference in the delivery of ads based on searches of racially associated names); Amit Datta, Michael Carl Tschantz and Anupam Datta, ‘Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination’ (2015) 105. accessed 19 January 2019. 
152 Sweeney (n 151) 52; Amit Datta and others, Automated Experiments on Ad Privacy Settings, Proceedings on Privacy Enhancing Techniques, April (2015) 92. 153 Datta, Tschantz and Datta (n 151) 105. 154 ibid. 155 ‘Why CDA 230 Is So Important’, ELEC. Frontier Found accessed 19 January 2019. 156 On both sides of the Atlantic, courts have established a robust, but lengthy, body of pro-free speech norms: in Europe, under arts 9, 10 and 17 of the European Convention on Human Rights; in the USA, under the First Amendment; and in Canada, under the Canadian Charter of Rights and Freedoms (sections 1, 2 and 15) and s 319 of the Criminal Code (RSC, 1985, c C-46) (hate speech in Canada). Some of the major cases include: Brandenburg v Ohio, 395 US 444 (1969), Hess v Indiana, 414 US 105 (1973), United States v Kelner, 534 F2d 1020 (2d Cir 1976), cert denied, 429 US 1022 (1976), Holder v Humanitarian Law Project, 561 US 1, 4 (2010), Gündüz v Turkey No 35071/97, ECHR 2003-XI (European Court of Human Rights), Gündüz v Turkey (No 2) Case No 59745/00, 13 November 2003, Pavel Ivanov v Russia (Application no 35222/04, European Court of Human Rights) and Leroy v France (Application no 36109/03). 157 For example, in the USA, a content-based restriction on speech is only justified where two high-threshold conditions are met: the speech restriction must directly advance a compelling government interest; and, secondly, the restriction must be the least restrictive means for achieving that interest (Sable Communications of California v Federal Communications Commission, 492 US 115, 126 (1989)). In contrast, courts in Europe seem more willing to accord considerable leeway to regulate extreme expressions (Gündüz v Turkey No 35071/97, ECHR 2003-XI (European Court of Human Rights), Gündüz v Turkey (No 2) Case No 59745/00, 13 November 2003, Pavel Ivanov v Russia (Application no 35222/04, European Court of Human Rights) and Leroy v France (Application no 36109/03)). 158 ‘Terrorism and Social Media: #IsBigTechDoingEnough? Hearing before the Senate Commerce Comm’ (17 January 2018) (at 23:40 sec) accessed 19 January 2019. 159 See Meleagrou-Hitchens and Kaderbhai, ‘Research Perspectives on Online Radicalisation’ (2006–2016) ICSR 19–39 accessed 19 January 2019. 160 There is a growing suspicion that social networking sites are already taking down too much. See Daphne Keller, ‘Internet Platforms, Observations on Speech, Danger, and Money’ (2018) Hoover Institution Essay, Aegis Series Paper No 1807, 4 accessed 19 January 2019. 161 Sam Levin, ‘Civil Rights Groups Urge Facebook to Fix Racially Biased Moderation System’ (The Guardian, 18 January 2017) accessed 19 January 2019. 162 The United States Senate Testimony of 2018: Extremist Propaganda and Social Media (C-SPAN, 17 January 2018) accessed 19 January 2019. 163 A notable example is the International Bill of Human Rights. Among the Arab Gulf States, only Bahrain and Kuwait have signed this convention, and both made it clear that the application of the Bill does not affect ‘in any way the prescriptions of the Islamic Shariah’, accessed 19 January 2019. 164 Both Australia and the UK appoint independent reviewers of anti-terrorism law and empower them to see secret information in order to report on performance under existing law. See Craig Forcese and Kent Roach, False Security: The Radicalisation of Canadian Anti-terrorism (Irwin Law 2015) 111. 165 Andrew Silke, ‘Terrorism and the Blind Men’s Elephant’ (1996) 8 Terrorism Political Violence 12–28.
166 French-British action plan: internet security, Cabinet Office and Home Office, 13 June 2017 . 167 French-British action plan: internet security, Cabinet Office and Home Office, 13 June 2017 . 168 Five Eyes countries join Britain’s call to remove terror content online, Home Office, 28 June 2017 . 169 Ian Ayres and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (OUP 1992) 35. 170 The sliding scale provides six consecutive steps. These are: persuasion, warning letter, civil penalty, criminal penalty, license suspension and, finally, license revocation. ibid. 171 See text to note 87. 172 Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019, The Parliament of the Commonwealth of Australia (April 2019). 173 ibid. 174 Perry (n 76) 8–9. 175 Cynthia Dwork, ‘Skewed or Rescued?: The Emerging Theory of Algorithmic Fairness’ (2018) Harvard University accessed 19 January 2019. 176 FTC (n 136) 5–12. 177 Singh (n 104). 178 See, for example, Jamil Ammar and Songhua Xu, ‘Yesterday’s Ideology Meets Today’s Technology: A Strategic Prevention Framework for Curbing the Use of Social Media by Radical Jihadists’ (2016) 26 Alb L J Sci & Tech, Torture in the Name of Allah: Deadly but not Offensive 277. 179 See pages 1–3. 180 Jack Balkin, ‘Old-School/New-School Speech Regulation’ (2014) 127 Harvard Law Rev 2296, 2298 accessed 19 January 2019.