2022 Information Polity
doi: 10.3233/ip-211531
The Passenger Name Record (PNR) Directive has introduced a pre-emptive, risk-based approach in the landscape of European databases and information exchange for security purposes. The article contributes to ongoing debates on algorithmic security and data-driven decision-making by fleshing out the specific way in which the EU PNR-based approach to security substantiates core characteristics of algorithmic regulation. The EU PNR framework appropriates data produced in the commercial sector for generating security-related behavioural predictions, and does so in a way that gives rise to a paradoxical normativity directly dependent on empirical states. Its ‘securitisation move’ is moreover characterised by an inherent tendency to expand. As a result, the PNR Directive poses challenges for existing checks-and-balances mechanisms and for human autonomy. These challenges could be partially addressed by strengthening ex-post control procedures and independent auditing. Yet in the decision to adopt a risk-based security model, something more fundamental seems to be at stake, namely, the preservation of the idea of human beings as moral agents able to direct and modify their behaviour in accordance with an intelligible, reliable and predictable normative order.
2022 Information Polity
doi: 10.3233/ip-211524
Artificial Intelligence (AI)-based surveillance technologies such as facial recognition, emotion recognition and other biometric technologies have been rapidly introduced by both public and private entities all around the world, raising major concerns about their impact on fundamental rights, the rule of law and democracy. This article questions the efficiency of the European Commission’s Proposal for a Regulation on Artificial Intelligence, known as the AI Act, in addressing the threats and risks to fundamental rights posed by AI biometric surveillance systems. It argues that, in order to meaningfully address risks to fundamental rights, the proposed classification of these systems should be reconsidered. Although the draft AI Act acknowledges that some AI practices should be prohibited, its multiple exceptions and loopholes should be closed, and new prohibitions, in particular of emotion recognition and biometric categorisation systems, should be added to counter AI surveillance practices that violate fundamental rights. The AI Act should also introduce stronger legal requirements, such as third-party conformity assessment, fundamental rights impact assessment and transparency obligations, and should strengthen existing EU data protection law and the rights and remedies available to individuals, so as not to miss the unique opportunity to adopt the first legal framework that truly promotes trustworthy AI.
Gremsl, Thomas; Hödl, Elisabeth
2022 Information Polity
doi: 10.3233/ip-211529
The European Commission has presented a draft for an Artificial Intelligence Act (AIA). This article deals with legal and ethical questions raised by the datafication of human emotions, in particular the question of how emotions are to be classified legally. It focuses on the concept of “emotion recognition systems” within the meaning of the draft AIA. As it turns out, the fundamental right to freedom of thought becomes relevant in this context, as do questions of the common good and human dignity, especially when such systems are combined with others, such as scoring models.
2022 Information Polity
doi: 10.3233/ip-211532
Artificial Intelligence (AI) is one of the most significant of the information and communications technologies being applied to surveillance. AI’s proponents argue that its promise is great and that successes have been achieved, whereas its detractors draw attention to the many threats it embodies, some of which are much more problematic than those arising from earlier data analytics tools. This article considers the full gamut of regulatory mechanisms. The scope extends from natural and infrastructural regulatory mechanisms, via self-regulation, including the recently popular field of ‘ethical principles’, to co-regulatory and formal approaches. An evaluation is provided of the adequacy or otherwise of the world’s first proposal for the formal regulation of AI practices and systems, put forward by the European Commission. To lay the groundwork for the analysis, an overview is provided of the nature of AI. The conclusion reached is that, despite the threats inherent in the deployment of AI, current safeguards are seriously inadequate and the prospects for near-future improvement are far from good. To avoid undue harm from the application of AI to surveillance, it is necessary to rapidly enhance the existing, already-inadequate safeguards and to establish additional protections.
De Hert, Paul; Bouchagiar, Georgios
2022 Information Polity
doi: 10.3233/ip-211525
Earlier this year, the European Commission (EC) registered the ‘Civil society initiative for a ban on biometric mass surveillance practices’, a European Citizens’ Initiative. Citizens are thus given the opportunity to call on the EC to propose legislative instruments that would permanently ban biometric mass surveillance practices. This contribution finds the initiative particularly promising as part of a broader development towards bans in the European Union (EU). It analyses the EU’s approach to facial, visual and biometric surveillance, with the objective of submitting some ideas that the European legislator could consider when strictly regulating such practices.
Eneman, Marie; Ljungberg, Jan; Raviola, Elena; Rolandsson, Bertil
2022 Information Polity
doi: 10.3233/ip-211538
Emerging technologies based on artificial intelligence (AI) and machine learning are laying the foundation for surveillance capabilities of a magnitude never seen before. This article focuses on facial recognition, a technology now being rapidly introduced in many police authorities around the world with expectations of enhanced security, but also one subject to privacy concerns. The article examines a recent case in which the Swedish police used the controversial facial recognition application Clearview AI, a use that prompted a supervisory investigation which deemed it illegitimate. The following research question guided the study: how do the trade-offs between privacy and security unfold in the police’s use of facial recognition technology? The study was designed as a qualitative document analysis of the institutional dialogue between the police and two regulatory authorities; theoretically, we draw on the concepts of technological affordance and legitimacy. The results show how the police’s use of facial recognition gives rise to various tensions that force the police, as well as policy makers, to rethink and further articulate the meaning of privacy. By identifying these tensions, the article contributes insights into controversial legitimacy issues that may arise around the rules governing the availability and use of facial recognition.
Urquhart, Lachlan; Miranda, Diana; Podoletz, Lena
2022 Information Polity
doi: 10.3233/ip-211541
In this paper, we develop the concept of smart home devices as ‘invisible witnesses’ in everyday life. We explore contemporary examples that highlight how smart devices have been used by the police and unpack the socio-technical implications of using these devices in criminal investigations. We draw on several sociological, computing and forensics concepts to develop our argument. We consider the challenges of obtaining and interpreting trace evidence from smart devices; unpack the ways in which these devices are designed to be ‘invisible in use’; and reflect on the processes by which they become domesticated into everyday life. We also analyse the differentiated levels of control occupants have over smart home devices, and the surveillance impacts of making everyday life visible to third parties, particularly the police.
Pedrozo, Silvana; Klauser, Francisco
2022 Information Polity
doi: 10.3233/ip-211533
The use of police drones in Switzerland has increased significantly since the early 2000s. However, Swiss citizens remain largely uninformed about the manifold uses of drones by the police. Whilst many public institutions today advocate more transparent decision-making processes in security matters, this paper demonstrates that the acquisition of police drone systems results from a set of interdependent socio-technical mediations that are inherently opaque and that bring together both formal and informal mechanisms. Drawing upon qualitative interviews and observational research conducted with the Neuchâtel police (Switzerland), this analysis highlights the importance of the practical and relational mechanisms that interlink various public and private actors with the objects in question. This raises a series of questions about informality surrounding the acquisition of police technologies more generally.