Over the last few years, legal scholars, policy-makers,
activists and others have generated a vast and
rapidly expanding literature concerning the ethical
ramifications of using artificial intelligence,
machine learning, big data and predictive software
in criminal justice contexts. These concerns can be
clustered under the headings of fairness,
accountability and transparency. First, can we trust
technology to be fair, especially given that the
data on which the technology is based are biased in
various ways? Second, whom can we blame if the
technology goes wrong, as it inevitably will on
occasion? Finally, does it matter if we do not know
how an algorithm works or, relatedly, cannot
understand how it reached its decision? I argue
that, while these are serious concerns, they are not
irresolvable. More importantly, the very same
concerns of fairness, accountability and
transparency apply, with even greater urgency, to
existing modes of decision-making in criminal
justice. The question is therefore comparative: can
algorithmic modes of decision-making improve upon
the status quo in criminal justice? There is
unlikely to be a categorical answer to this
question, although there are some reasons for
cautious optimism.