Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder

Excerpt of Introduction

In the many debates about whether and how algorithmic technologies should be used in law enforcement, all sides seem to share one assumption: that, in the struggle for justice and equity in our systems of governance, the subjectivity of human judgment is something to be overcome. While there is significant disagreement about the extent to which, for example, a machine-generated risk assessment might ever be unpolluted by the problematic biases of its human creators and users, no one in the scholarly literature has so far suggested that if such a thing were achievable, it would be undesirable.

This essay argues that it only becomes possible for policing to be something other than mere brutality when the activities of policing are themselves a way of deliberating about what policing is and should be, and that algorithms are definitionally opposed to such deliberation. An algorithmic process, whether carried out by a human brain or by a computer, can only operate at all if the terms that govern its operations have fixed definitions. Fixed definitions may be useful or necessary for human endeavors—like getting bread to rise or designing a sturdy foundation for a building—which can be reduced to techniques of measurement and calculation. But the fixed definitions that underlie policing algorithms (what counts as transgression, which transgressions warrant state intervention, etc.) relate to an ancient, fundamental, and enduring political question, one that cannot be expressed by equation or recipe: the question of justice. The question of justice is not one to which we can ever give a final answer, but one that must be the subject of ongoing ethical deliberation within human communities.