Learning to Discriminate (Davies and Douglas, Oxford University Press preproof)

It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the resulting predictive model does not use race as an explicit predictor. However, if race is correlated with measured recidivism in the training data, the ML tool may ‘learn’ a perfect proxy for race. If such a proxy is found, the exclusion of race would do nothing to weaken the correlation between risk (mis)classifications and race. Is this a problem? We argue that, on some explanations of the wrongness of discrimination, it is. On these explanations, the use of an ML tool that perfectly proxies race would (likely) be more wrong than the use of a traditional tool that imperfectly proxies race. Indeed, on some views, use of a perfect proxy for race is plausibly as wrong as explicit racial profiling. We end by drawing out four implications of our arguments.

Full description

Bibliographic record details
Language: English
Published: Oxford University Press 2024
id: oapen-20.500.12657-90555
record format: dspace
Record updated: 2024-05-24
Chapter 6: Learning to Discriminate
Authors: Davies, Benjamin; Douglas, Thomas
Keywords: Discrimination; Profiling; Machine Learning; Algorithmic Fairness; Racial Bias; Redundant Encoding; Criminal Recidivism; Crime Prediction; Artificial Intelligence; AI
Subjects (thema EDItEUR): J Society and Social Sciences::JK Social services and welfare, criminology::JKV Crime and criminology::JKVF Criminal investigation and detection; J Society and Social Sciences::JK Social services and welfare, criminology::JKV Crime and criminology
Year of publication: 2022
Deposited: 2024-05-23
Type: chapter
Handle: https://library.oapen.org/handle/20.500.12657/90555
Language: eng
Format: application/pdf
Licence: Attribution 4.0 International
File: DAVIES AND DOUGLAS Learning to Discriminate OUP preproof.pdf
Publisher: Oxford University Press
Published in: Sentencing and Artificial Intelligence
Collection ids: b9501915-cdee-4f2a-8030-9c0b187854b2; 6a453af5-fc90-47b7-ae93-bda93088bb1d
Funder: European Research Council (ERC)
Pages: 26
Access: open access
Institution: OAPEN
Collection: DSpace
Language: English
Description: It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the resulting predictive model does not use race as an explicit predictor. However, if race is correlated with measured recidivism in the training data, the ML tool may ‘learn’ a perfect proxy for race. If such a proxy is found, the exclusion of race would do nothing to weaken the correlation between risk (mis)classifications and race. Is this a problem? We argue that, on some explanations of the wrongness of discrimination, it is. On these explanations, the use of an ML tool that perfectly proxies race would (likely) be more wrong than the use of a traditional tool that imperfectly proxies race. Indeed, on some views, use of a perfect proxy for race is plausibly as wrong as explicit racial profiling. We end by drawing out four implications of our arguments.
Publisher: Oxford University Press
Publish date: 2024