
Information Polity

Publisher:
IOS Press
ISSN:
1570-1255
Scimago Journal Rank:
39
Introduction to special issue algorithmic transparency in government: Towards a multi-level perspective

Giest, Sarah; Grimmelikhuijsen, Stephan

2020 Information Polity

doi: 10.3233/IP-200010

The editorial sets the stage for the special issue on algorithmic transparency in government. The papers in the issue bring together transparency challenges experienced across different levels of government, including the macro-, meso-, and micro-levels. This highlights that transparency issues transcend different levels of government – from European regulation to individual public bureaucrats. With a special focus on these links, the editorial sketches a future research agenda for transparency-related challenges. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others. Finally, this introduction presents an agenda for future research, which opens the door to comparative analyses and new insights for policymakers.
Transparency in algorithmic decision-making: Ideational tensions and conceptual shifts in Finland

Ahonen, Pertti; Erkkilä, Tero

2020 Information Polity

doi: 10.3233/IP-200259

This article uses a theoretical and methodological framework derived from the political theorist Quentin Skinner and the conceptual historian Reinhart Koselleck to examine ideational and conceptual tensions and shifts related to the transparency of algorithmic and other automatic governmental decision-making in Finland. Most of the research material comprises national and international official documents and semi-structured expert interviews. In Finland, the concepts of ‘algorithmic transparency’ and the ‘transparency of other automatic decision-making’ are situated amongst a complex array of legal, ethical, political, policy-oriented, managerial, and technical semantic fields. From 2016 to 2019, Finland’s Deputy Parliamentary Ombudsman and the Constitutional Committee of Parliament pinpointed issues in algorithmic and other automatic decision-making, with the consequence that, at the turn of 2019 and 2020, the Ministry of Justice began preparing new legislation to resolve these issues. In conclusion, and as expected, Finland’s version of the Nordic tradition of the public sphere, with its established legal guarantees of public access to government documents, has both important enabling and constraining effects on resolving these transparency issues.
A machine learning approach to open public comments for policymaking

Ingrams, Alex

2020 Information Polity

doi: 10.3233/IP-200256

In this paper, the author argues that the conflict between the copious amounts of digital data processed by public organisations and the need for policy-relevant insights to aid public participation constitutes a ‘public information paradox’. Machine learning (ML) approaches may offer one solution to this paradox through algorithms that transparently collect data and apply statistical modelling to provide insights for policymakers. Such an approach is tested in this paper. The test applies an unsupervised machine learning approach, latent Dirichlet allocation (LDA), to thousands of public comments submitted to the United States Transportation Security Administration (TSA) on a 2013 proposed regulation for the use of new full-body imaging scanners in airport security terminals. The analysis produces salient topic clusters that could be used by policymakers to make sense of large amounts of text, such as those generated in an open public comment process. The results are compared with the actual final proposed TSA rule, and the author reflects on new questions raised for transparency by the implementation of ML in open rule-making processes.
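For readers unfamiliar with the technique, the brief Python sketch below illustrates the kind of unsupervised LDA topic modelling the abstract describes, using scikit-learn and a small hypothetical list of comment strings; it is an illustrative sketch only, not the author's actual pipeline or the TSA dataset.

# Illustrative sketch: unsupervised LDA topic modelling of public comments.
# The 'comments' list is hypothetical; the article uses thousands of real TSA comments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Body scanners are an invasion of passenger privacy.",
    "The scanners make air travel safer and screening faster.",
    "Health effects of the imaging machines remain unclear.",
]

# Turn the free-text comments into a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

# Fit an LDA model with a chosen number of topics (2 here, purely for illustration).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Report the top words per topic - the 'salient topic clusters' the abstract refers to.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")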
Algorithmic transparency and bureaucratic discretion: The case of SALER early warning system

Criado, J. Ignacio; Valero, Julián; Villodre, Julián

2020 Information Polity

doi: 10.3233/IP-200260

The governance of public sector organizations has been challenged by the growing adoption and use of Artificial Intelligence (AI) systems and algorithms. Algorithmic transparency, conceptualized here using the dimensions of accessibility and explainability, fosters the appraisal of algorithms’ footprint in the decisions of public agencies, and should include impacts on civil servants’ work. However, although discretion will not disappear, AI innovations might have a negative impact on how public employees support their decisions. This article is intended to answer the following research questions: RQ1. To what extent do algorithms affect the discretionary power of civil servants to make decisions? RQ2. How can algorithmic transparency impact the discretionary power of civil servants? To do so, we analyze SALER, a case based on a set of algorithms focused on the prevention of irregularities in the Valencian regional administration (GVA), Spain, using a qualitative methodology based on semi-structured interviews and documentary analysis. Our empirical work suggests the existence of a series of factors that might be linked to the positive impacts of algorithms on the work and discretionary power of civil servants. We also identify different pathways for achieving algorithmic transparency, such as the involvement of civil servants in active development, or auditing processes recognized by law, among others.
Administration by algorithm: A risk management framework

Bannister, Frank; Connolly, Regina

2020 Information Polity

doi: 10.3233/IP-200249

Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.
Artificial intelligence, bureaucratic form, and discretion in public service

Bullock, Justin; Young, Matthew M.; Wang, Yi-Fan

2020 Information Polity

doi: 10.3233/IP-200223

This article examines the relationship between Artificial Intelligence (AI), discretion, and bureaucratic form in public organizations. We ask: How is the use of AI both changing and changed by the bureaucratic form of public organizations, and what effect does this have on the use of discretion? The diffusion of information and communication technologies (ICTs) has changed administrative behavior in public organizations. Recent advances in AI have led to its increasing use, but too little is known about the relationship between this distinct form of ICT and both the exercise of discretion and bureaucratic form along the continuum from street-level to system-level. We articulate a theoretical framework that integrates work on the unique effects of AI on discretion, and on its relationship to task and organizational context, with the theory of system-level bureaucracy. We use this framework to examine two strongly differing cases of public sector AI use: health insurance auditing and policing. We find that AI’s effect on discretion is nonlinear and nonmonotonic as a function of bureaucratic form. At the same time, the use of AI may act as an accelerant in transitioning organizations from street- and screen-level to system-level bureaucracies, even if these organizations previously resisted such changes.
The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making

Peeters, Rik

2020 Information Polity

doi: 10.3233/IP-200253

With the rise of computer algorithms in administrative decision-making, concerns are voiced about their lack of transparency and of discretionary space for human decision-makers. However, calls to ‘keep humans in the loop’ may be moot if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion. Through a review of recent academic literature, the article identifies three algorithmic design variables that determine the preconditions for transparency and human discretion, and four main sources of variation in ‘human-algorithm interaction’. The article makes two contributions. First, the existing evidence is analysed and organized to demonstrate that, by working upon behavioural mechanisms of decision-making, the agency of algorithms extends beyond their computer code and can profoundly impact human behaviour and decision-making. Second, a research agenda for studying how computer algorithms affect administrative decision-making is proposed.