Punishing Artificial Intelligence: Legal Fiction or Science Fiction

Abstract

Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI functionally commits a crime and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement for a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show that AI punishment cannot be categorically ruled out with quick theoretical arguments. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

Publication
UC Davis Law Review, forthcoming
Ryan Abbott
Research Lead in AI

Professor Ryan Abbott is the Hub’s Research Lead in AI and a Professor of Law and Health Sciences at the School of Law.

Alex Sarch

Professor Alex Sarch is a member of the Hub. He is also a Professor of Legal Philosophy at the School of Law and the Head of the School.