105 Iowa L. Rev. 463 (2020)

Abstract

Imagine that an algorithmic computer model known to be 80 percent accurate predicts that a particular car is likely to be transporting drugs. Does that prediction provide law enforcement probable cause to search the car? When such evidence of statistical likelihood is generated by humans, courts have consistently regarded it as insufficiently individualized to satisfy even the most permissive legal standards—a position that has generated decades of debate among commentators. The proliferation of artificial-intelligence-generated predictions—predictions that will be more accurate than humans’ and therefore more tempting to employ—requires us to revisit this debate over the use of probabilistic evidence with renewed urgency, and to consider its implications for the use of predictive algorithms. This Article argues that reliance on probabilistic evidence to establish the individualized suspicion required by the Fourth Amendment, regardless of that evidence’s statistical accuracy—i.e., how likely it is that the predictions of criminal activity are correct—disregards fundamental interests that individualized suspicion is meant to protect, namely respect for human dignity, preservation of individual autonomy, and guarantees of procedural justice. So while accuracy is a necessary element of individualized suspicion findings, this Article contends that no level of statistical likelihood is sufficient. Further, it argues that careful consideration of these issues has become critically important in today’s big data world, because the shortcomings that “analog” probabilistic evidence presents are even more pronounced in the context of predictive algorithms.

Published:
Saturday, February 15, 2020