Two Kinds of Discrimination in AI-Based Penal Decision-Making


ACM SIGKDD Explorations Newsletter, Volume 23 (1): 10 – May 27, 2021



Publisher
Association for Computing Machinery
Copyright
Copyright © 2021, held by the owner/author(s)
ISSN
1931-0145
eISSN
1931-0153
DOI
10.1145/3468507.3468510

Abstract

The famous COMPAS case has demonstrated the difficulties in identifying and combatting bias and discrimination in AI-based penal decision-making. In this paper, I distinguish two kinds of discrimination that need to be addressed in this context. The first is related to the well-known problem of inevitable trade-offs between incompatible accounts of statistical fairness, while the second refers to the specific standards of discursive fairness that apply when basing human decisions on empirical evidence. I will sketch the essential requirements of non-discriminatory action within the penal sector for each dimension. Concerning the former, we must consider the relevant causes of perceived correlations between race and recidivism in order to assess the moral adequacy of alternative standards of statistical fairness, whereas regarding the latter, we must analyze the specific reasons owed in penal trials in order to establish what types of information must be provided when justifying court decisions through AI evidence. Both positions are defended against alternative views which try to circumvent discussions of statistical fairness or which tend to downplay the demands of discursive fairness, respectively.
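The trade-off the abstract invokes is the impossibility result made prominent by the COMPAS debate: when two groups have different base rates of recidivism, a risk score cannot in general satisfy both predictive parity (equal precision across groups) and equal false-positive rates. The sketch below illustrates this with purely hypothetical confusion-matrix counts — the numbers are not from the paper, only chosen so the two groups' base rates differ (0.5 vs. 0.2):

```python
# Illustrative sketch (hypothetical counts, not data from the paper):
# with unequal base rates, equal precision (PPV) across groups forces
# unequal false-positive rates (FPR), and vice versa.

def rates(tp, fp, tn, fn):
    """Return (PPV, FPR) from confusion-matrix counts."""
    ppv = tp / (tp + fp)   # P(reoffends | classified high risk)
    fpr = fp / (fp + tn)   # P(classified high risk | does not reoffend)
    return ppv, fpr

# Group A: base rate 0.5 (50 of 100 reoffend)
ppv_a, fpr_a = rates(tp=40, fp=20, tn=30, fn=10)
# Group B: base rate 0.2 (20 of 100 reoffend)
ppv_b, fpr_b = rates(tp=16, fp=8, tn=72, fn=4)

print(f"Group A: PPV={ppv_a:.2f}, FPR={fpr_a:.2f}")  # PPV=0.67, FPR=0.40
print(f"Group B: PPV={ppv_b:.2f}, FPR={fpr_b:.2f}")  # PPV=0.67, FPR=0.10
```

Here the score is equally "well calibrated" for both groups (identical PPV), yet non-reoffenders in group A are flagged high risk four times as often as those in group B — the pattern at the core of the ProPublica/Northpointe dispute over COMPAS, and the first kind of discrimination the paper distinguishes.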

Journal

ACM SIGKDD Explorations Newsletter, Association for Computing Machinery

Published: May 27, 2021

Keywords: ai-based decision-making
