Algorithmic fairness

Across jurisdictions, criminal justice systems are enamored with the evidence-based practices movement. The idea is to use the best scientific data to identify and classify individuals based on their potential future risk of reoffending, and then to manage offender populations accordingly. While evidence-based methodologies are widely exalted as representing best practices, ethical and scientific objections have been raised. This body of research explores whether algorithmic risk assessment tools are racist and/or sexist in their algorithms or outcomes. Still, the fairness of any particular actuarial tool is itself subject to debate. Numerous measures of equity and parity exist for predictive risk tools, and many of these measures are mutually exclusive; in other words, no prediction model is statistically capable of satisfying them all. This research outlines the judgment calls required to trade off false positive rates against false negative rates, balancing the goal of reducing correctional resource use against public safety interests.
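The mutual exclusivity claim above can be made concrete with a small numeric sketch. The groups, confusion-matrix counts, and the `rates` helper below are hypothetical illustrations, not from the research itself; the example shows the well-known result (associated with Chouldechova's impossibility analysis) that when two groups have different base rates of reoffending, a tool cannot equalize predictive value and both error rates across groups at once.

```python
# Minimal sketch, assuming hypothetical confusion-matrix counts for two groups.

def rates(tp, fp, tn, fn):
    """Return (false positive rate, false negative rate, positive predictive value)."""
    fpr = fp / (fp + tn)   # non-reoffenders wrongly flagged as high risk
    fnr = fn / (fn + tp)   # reoffenders wrongly labeled low risk
    ppv = tp / (tp + fp)   # how often a high-risk label is correct
    return fpr, fnr, ppv

# Two hypothetical groups with different base rates of reoffending.
group_a = rates(tp=40, fp=20, tn=120, fn=20)   # base rate 60/200 = 30%
group_b = rates(tp=60, fp=30, tn=80,  fn=30)   # base rate 90/200 = 45%

# Both groups have equal PPV (0.67) and equal FNR (0.33), yet their false
# positive rates differ (0.14 vs 0.27): equalizing one fairness measure
# across groups with different base rates forces inequality in another.
print(f"Group A: FPR={group_a[0]:.2f}, FNR={group_a[1]:.2f}, PPV={group_a[2]:.2f}")
print(f"Group B: FPR={group_b[0]:.2f}, FNR={group_b[1]:.2f}, PPV={group_b[2]:.2f}")
```

Which disparity to tolerate, a higher false positive rate (more people needlessly detained) or a higher false negative rate (more reoffenders released), is precisely the kind of judgment call the research describes.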

Melissa Hamilton

Dr Melissa Hamilton is a member of the Hub and a Reader at the School of Law.

Publications

Automated risk assessment is all the rage in the criminal justice system. Proponents view risk assessment as an objective way to reduce …

The ProPublica/Northpointe racial bias debate, and the broader issue of algorithmic fairness, present significant dilemmas for criminal …

This essay reports original research using a large dataset of offenders who were scored on the popular risk assessment tool COMPAS.

Today, judges across the U.S. use risk assessment tools like COMPAS in sentencing decisions. In at least 10 states, these tools are a …

Talks