SMU Science and Technology Law Review

ORCID (Links to author’s additional scholarship at ORCID.org)

Shai Dothan: https://orcid.org/0000-0001-7026-4114

Gregor Maučec: https://orcid.org/0000-0003-0879-8128

 

Abstract

There is ample evidence that people are not completely rational. They suffer from a series of biases that limit their ability to make the best decisions and to stick to them. Judges are a unique group of people. They go through many years of training that counters some of these biases, but not all of them. In fact, a whole field of research is dedicated to predicting how judges, with their human flaws, are going to behave. But today, judges can use a growing number of artificial intelligence (AI) tools to assist with their craft, particularly with research and writing. Technology offers judges the possibility of transcending their human flaws and letting machines produce flawless legal texts. As tempting as this vision sounds, this paper warns that it may quickly turn dystopic. The reason is that the flaws that make people and the work they produce imperfect are also necessary to maintain the values cherished by humanity. Judges assisted by AI may avoid certain human biases, but they may also produce outcomes that are inhumane.
