In criminal justice, predicting who will commit a crime is a long-standing practice. Such predictions inform outcomes such as whether to arrest a suspect, release a person pretrial, sentence a convicted individual to a term of imprisonment, or grant early release to an incarcerated person. Historically, these risk predictions rested on the instinct or whim of a human decision-maker. Today, algorithms developed from scientific studies of the factors that predict offending are deployed across jurisdictions to inform such decisions. The great hope is that this form of AI offers more accurate predictions while avoiding the downsides of human bias.
But can the AI turn in criminal justice deliver transformational reforms that reduce prison populations without sacrificing public safety? Or is it a cautionary tale, in which AI produces unintended consequences such as recreating racial inequalities when minorities are systematically assigned higher risk scores?
This inaugural lecture explores these issues, drawing on original research into AI-based risk tools used in the field. Among the lessons learned is a warning: AI cannot entirely bar the infiltration of bias when its algorithms are trained on already biased big data. Human input remains necessary to ensure that AI provides value while limiting potential unfairness to individuals. These lessons matter because criminal justice decisions significantly affect individuals' interests in privacy, freedom, and equality.