Note

The Need for Transparency in the Age of Predictive Sentencing Algorithms

I.     Introduction

As the United States faces unprecedented rates of incarceration,1 criminal law experts have sought ways to decrease recidivism, believing that a small percentage of the population is responsible for a majority of crime.2 This theory, known as “selective incapacitation,” posits “that a small subset of repeat offenders is responsible for the majority of crime and that incapacitating that small group would have exponential benefits for the overall crime rate.”3 Researchers have begun developing strategies that use objective evidence to identify the criminals who pose the most serious risk to the community based on their likelihood to reoffend.4 This process, known as “risk assessment,” has led to the creation of actuarial instruments, or statistical models that predict risk of recidivism by studying the common traits of paroled inmates responsible for committing multiple crimes.5

Prior to the use of actuarial instruments, predicting an offender’s risk of recidivism was done by clinical assessment—“‘an informal, “in the head,” [and] impressionistic, subjective conclusion’ about the offender’s future dangerousness.”6 In the clinical model, assessments of a defendant are made either by mental health experts or by other actors in the criminal justice system, such as judges or parole boards.7 The movement in criminal law has been away from these clinical methods based on decision-makers’ subjective judgments, observations, and experiences, and toward the objective application of statistical models derived from large datasets of criminal offenders.8 This trend toward objective risk assessment has led to the development of actuarial studies that attempt to isolate the specific factors relevant to estimating risk.9 Sometimes called “mechanical prediction,” actuarial methods consist of “the mechanical combining of information for classification purposes, and the resultant probability figure which is an empirically determined relative frequency.”10 The scores generated by actuarial risk assessment are now routinely used at all stages of the criminal proceeding, including parole determinations, prison classification, and sentencing.11

As the use of predictive risk assessment has increased, several states have turned to private companies to supply the algorithms needed to generate a defendant’s risk score.12 Although proponents of such methods claim the practice is efficient and effective,13 these predictive algorithms can be problematic in practice.14 Defendants have challenged judges’ reliance on algorithmic risk scores in sentencing decisions because they have no way of validating the accuracy of the formulas.15 So far, such attempts have been unsuccessful because the companies behind the formulas assert trade secret protection, ensuring that the formulas used to calculate risk scores remain unknown.16 The problem is further complicated by the fact that most states have taken no steps of their own to ensure the accuracy of these formulas.17 Despite the recommendation that local governments conduct validation studies to confirm that a formula produces accurate predictions for the population on which it is used, very few jurisdictions have taken this step.18 Defendants must therefore accept as accurate a formula that the state has typically not validated and that they are prohibited from testing themselves.

This Note argues that because private companies are benefitting financially by providing a public service, they should be required to conform to the same transparency requirements as public agencies. Part II first examines the development of predictive risk assessment models in the criminal justice system, particularly their use in sentencing. Part III then discusses the effect of privatization on defendants’ ability to challenge the validity of risk scores. Finally, Part IV explains the benefits of public access to risk assessment tools and proposes alternatives to privatized algorithms that would conform to freedom of information laws and keep the public fully informed of government actions.

II.     Background

In order to identify potential problems with predictive risk assessment tools, this Part reviews the objectives driving the development of risk assessment and the resulting implementation of risk assessment in criminal law. Part II.A details the recent growth in incarceration and the steps taken to counter recidivism. Next, Part II.B explains the basic structure of risk assessment tools, including a description of one of the most commonly used systems, COMPAS. Part II.C explores the expansion of actuarial risk assessment in the criminal sentencing phase, which has led to a competitive industry of privately owned, for-profit predictive sentencing formulas. Finally, Part II.D provides a brief explanation of freedom of information laws and the trade secret exemption that prevents the public from acquiring information on the proprietary risk assessment tools currently in use for criminal sentencing.

A.     Risk Assessment as a Response to an Overburdened Justice System

Criminal sentencing policy over the last 30 years has focused on punishment and imprisonment.19 As a result, the United States faces high levels of incarceration, increasing correction costs, and unparalleled recidivism rates.20 To counter the escalating costs of recidivism, policymakers have turned to predictive algorithms, which at their most basic level “mine personal information to make guesses about individuals’ likely actions and risks.”21 In the criminal context, predictive algorithms are used as a form of risk analysis “to constrain a dependence on imprisonment, encourage alternative rehabilitative programming, reduce recidivism risk, and improve public safety.”22 These analyses are based on “assessing an offender’s risk of reoffending, matching supervision and treatment to the offender’s risk level, and targeting the offender’s criminogenic needs or dynamic risk factors with the social learning and cognitive-behavioral programs most likely to effect change in the offender’s behavior given specific offender characteristics.”23 An actuarial risk assessment is a “statistical prediction[] about the criminality of groups or group traits to determine criminal justice outcomes for particular individuals within those groups.”24 Professor Ernest Burgess of the University of Chicago developed one of the first risk assessment methods in 1927—the “Burgess method”—which predicted an individual’s likelihood of success or failure on parole based on 21 factors.25 The principal competitor to the Burgess method was developed in 1930 by Sheldon and Eleanor Glueck, a criminology professor and a criminology research assistant at Harvard Law School.26 The Gluecks examined 510 inmates and concluded that prediction methods should rely on a narrow set of factors, focusing on only seven.27 The competing methods led to a surge in research on behavioral analysis and sociology, eventually resulting in the adoption of modern risk assessment methods as a means “to inform post-conviction decisions and management strategies, such as parole determinations, supervised release conditions, provision of reentry services, decisions to revoke supervision, and judgments concerning probation and parole sanctions.”28 Instruments that calculate risk assessment are now used in most stages of criminal litigation, including sentencing in some states.29 In the past 20 years, advances in social science, as well as a growing national effort to curb repeat offenders, have resulted in a greater reliance on risk assessment tools.30 Prediction of risk has become a routine step in the criminal justice system, “seen as a necessity, no longer a mere convenience.”31 Proponents of such tools contend that they help relieve the burden of high incarceration rates by more efficiently allocating resources to prioritize correctional detention and supervision of high-risk criminals rather than low-risk individuals.32

B.     The Basics of Predictive Risk Assessment Tools

There are over 60 different types of risk assessment tools currently in use in courthouses throughout the United States.33 The most widespread predictive tools are, at their core, questionnaires that assign points based on factors such as demographics, family background, and criminal history.34 Experts claim that recidivism is associated with a “central eight” risk-needs categories.35 Making up the “central eight” are “the ‘big four’[:] antisocial attitudes, antisocial associates, antisocial personalities, and criminal history” and “the ‘moderate four’[:] substance abuse, family characteristics, education and employment, and lack of prosocial leisure or recreation.”36 An offender receiving a low score will be designated as low risk to reoffend and will likely receive a lighter punishment than an individual designated as high risk to reoffend.37 The scores are derived from behavioral research that tracks large populations of former prisoners for several years to determine which characteristics correspond with recidivism.38 The points are then weighted based on the statistical probability that the defendant’s behavior will correspond with the behavior of repeat offenders.39 Experts assert that factors indicative of repetitive criminal behavior include “feeling proud of breaking the law or having marital or substance abuse problems.”40 Researchers claim that the strongest predictors of reoffending are sex, age, and prior criminal history; these factors therefore often have the most impact on an individual’s score.41
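To make these mechanics concrete, the following sketch shows the basic arithmetic of a weighted actuarial checklist. It is a minimal illustration only: the factor names and weights below are hypothetical, since no vendor’s actual formula, including COMPAS’s, has been publicly disclosed.

```python
# A minimal sketch of a weighted actuarial checklist. The factors and
# weights are hypothetical placeholders, not any vendor's actual formula.

HYPOTHETICAL_WEIGHTS = {
    "prior_criminal_history": 3.0,  # age, sex, and criminal history are
    "age_under_25": 2.0,            # reportedly the strongest predictors
    "male": 2.0,
    "substance_abuse": 1.0,         # one of the "moderate four" categories
    "antisocial_associates": 1.5,   # one of the "big four" categories
}

def raw_risk_score(answers: dict) -> float:
    """Sum the weights of every factor flagged as present for a defendant."""
    return sum(weight for factor, weight in HYPOTHETICAL_WEIGHTS.items()
               if answers.get(factor))

# Example: a young male defendant with a prior record scores 3 + 2 + 2 = 7.
print(raw_risk_score({"prior_criminal_history": True, "age_under_25": True,
                      "male": True}))
```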

There are three main risk assessment instruments used in most jurisdictions: Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”), Public Safety Assessment (“PSA”), and Level of Service Inventory Revised (“LSI-R”).42 COMPAS is a “web-based tool designed to assess offenders’ criminogenic needs and risk of recidivism. Criminal justice agencies across the nation use COMPAS to inform decisions regarding the placement, supervision, and case management of offenders.”43 This 137-question evaluation collects data such as “criminal and parole history, age, employment status, social life, education level, community ties, drug use and beliefs.”44 Some of the questions include: “‘Did a parent figure who raised you ever have a drug or alcohol problem?’ and ‘Do you feel that the things you do are boring or dull?’”45 “The questionnaire also asks people to agree or disagree with statements such as ‘A hungry person has a right to steal’ and ‘If people make me angry or lose my temper, I can be dangerous.’”46 The questions are either answered by defendants or extracted from their criminal records.47 An interviewer will administer the questionnaire and “has some leeway in asking the questions in order to build rapport with the interviewee.”48 Some questions require interviewees to respond with a yes or a no, while other responses are scored on a numerical scale.49 The interviewer then scores the questionnaire, which determines the defendant’s risk level.50 These scores represent the odds that the defendant will reoffend within a certain time period.51 For states that use predictive algorithms in sentencing decisions, the questionnaire is generally completed after conviction during a pre-sentence investigation.52 The pre-sentence investigation report, which includes the defendant’s risk score, is then delivered to the judge to consider at the sentencing hearing.53
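The last step described above, translating a score into the coarse label a judge sees in the pre-sentence report, can be sketched as follows. The decile banding mirrors the convention ProPublica used when reporting COMPAS scores (1 to 4 low, 5 to 7 medium, 8 to 10 high); how raw questionnaire points are normalized into deciles is proprietary, so the mapping here is purely illustrative.

```python
# Illustrative only: map a 1-10 decile risk score to a categorical label.
# The cut-points follow ProPublica's reporting convention; the proprietary
# step of converting raw questionnaire points into deciles is not shown
# because it is not publicly known.

def risk_label(decile: int) -> str:
    """Return the coarse risk category for a 1-10 decile score."""
    if not 1 <= decile <= 10:
        raise ValueError("decile score must be between 1 and 10")
    if decile <= 4:
        return "low"
    if decile <= 7:
        return "medium"
    return "high"

print(risk_label(9))  # "high"
```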

As the use of predictive risk assessments has become more widespread, critics have raised numerous objections to their use in the courtroom.54 For example, legal scholars believe that inherent biases result in higher scores for certain classes, races, and genders.55 Former U.S. Attorney General Eric Holder warned of potential bias, stating: “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice, . . . exacerbat[ing] unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”56 Additionally, some critics claim that the results are inaccurate.57 A recent study of COMPAS by ProPublica found that “[t]he formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”58 Specifically, the study found:

[T]he algorithm is more likely to misclassify a black defendant as higher risk than a white defendant. Black defendants who do not recidivate were nearly twice as likely to be classified by COMPAS as higher risk compared to their white counterparts (45 percent vs. 23 percent). However, black defendants who scored higher did recidivate slightly more often than white defendants (63 percent vs. 59 percent).

The test tended to make the opposite mistake with whites, meaning that it was more likely to wrongly predict that white people would not commit additional crimes if released compared to black defendants. COMPAS under-classified white reoffenders as low risk 70.5 percent more often than black reoffenders (48 percent vs. 28 percent). The likelihood ratio for white defendants was slightly higher (2.23) than for black defendants (1.61).59

The study indicated that “[t]he score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.”60 When considering the results broadly, looking at the score’s accuracy at predicting recidivism following any crime, including misdemeanors, “the algorithm was somewhat more accurate than a coin flip.”61 Despite legal challenges seeking to prevent the use of such tools in parole and sentencing decisions, courts have so far upheld the use of actuarial risk assessment tools.62
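The disparities quoted above reduce to simple confusion-matrix arithmetic. The sketch below uses hypothetical group counts chosen only to echo the reported rates, and shows how “falsely flagged” (the false positive rate among non-recidivists) and “under-classified as low risk” (the false negative rate among recidivists) are computed for each group.

```python
# Confusion-matrix arithmetic behind error-rate comparisons of this kind.
# The counts below are hypothetical, sized only to echo the rates quoted
# above (roughly 45% vs. 23% false positives, 28% vs. 48% false negatives).

def error_rates(fp: int, tn: int, fn: int, tp: int) -> tuple[float, float]:
    """Return (false positive rate, false negative rate) for one group."""
    fpr = fp / (fp + tn)  # non-recidivists wrongly labeled higher risk
    fnr = fn / (fn + tp)  # recidivists wrongly labeled lower risk
    return fpr, fnr

black_fpr, black_fnr = error_rates(fp=450, tn=550, fn=280, tp=720)
white_fpr, white_fnr = error_rates(fp=230, tn=770, fn=480, tp=520)

print(f"black defendants: FPR={black_fpr:.0%}, FNR={black_fnr:.0%}")  # 45%, 28%
print(f"white defendants: FPR={white_fpr:.0%}, FNR={white_fnr:.0%}")  # 23%, 48%
```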

C.     Expansion of Actuarial Assessments in Criminal Sentencing

While risk assessment tools first emerged as a method of weighing parole decisions, the Justice Department’s National Institute of Corrections now promotes the use of predictive algorithms for all phases of criminal cases, including sentencing.63 Virginia was the first state to implement an actuarial risk assessment tool at sentencing for the purpose of diverting low-risk offenders otherwise bound for prison into alternative sanctions.64 Now, at least 20 states require that judges be provided with a risk score at the sentencing phase.65 A 2010 national survey conducted by the Vera Institute of Justice monitored the growing use of risk assessment tools, finding “that almost every state uses an assessment tool at one or more points in the criminal justice system to assist in the better management of offenders in institutions and in the community. Overall, over 60 community supervision agencies in 41 states reported using an actuarial assessment tool, suggesting that an overwhelming majority of corrections agencies nationwide routinely utilize assessment tools to some degree.”66 A sentencing reform bill currently pending in Congress would mandate the use of such assessments in federal prisons.67

In 2006, after a survey revealed that 75% of respondents favored major sentencing reform to offset high incarceration rates, the Conference of Chief Justices (“CCJ”) and the Conference of State Court Administrators (“COSCA”) established a national sentencing reform plan called “Getting Smarter About Sentencing.”68 The most important objectives of the plan were: “(a) expanding use of evidence-based practices and risk and needs assessment tools and (b) promoting community-based alternatives to incarceration for appropriate offenders.”69 The following year, CCJ and COSCA adopted a resolution titled “In Support of Sentencing Practices that Promote Public Safety and Reduce Recidivism.”70 This resolution encouraged the use of predictive risk assessment in sentencing and corrections policies by urging states to adopt such tools that have been shown to reduce recidivism,71 recognizing that “[t]he public desires and deserves criminal justice systems that promote public safety while making effective use of taxpayer dollars.”72 The resolution also “urge[d] all members of the judiciary to educate themselves about the effectiveness of community-based corrections programs in their jurisdictions and to advocate and, when appropriate, make use of those programs shown to be effective in reducing recidivism.”73 Missouri Chief Justice Ray Price echoed this sentiment in his 2010 State of the Judiciary speech: “There is a better way. We need to move from anger-based sentencing that ignores cost and effectiveness to evidence-based sentencing that focuses on results—sentencing that assesses each offender’s risk and then fits that offender with the cheapest and most effective rehabilitation that he or she needs.”74

A 2009 discussion draft of the Model Penal Code also supported the use of predictive risk assessment tools in sentencing, stating that “[t]he commission shall develop . . . offender risk-assessment instruments or processes, supported by current and ongoing recidivism research of felons in the state, that will estimate the relative risks that individual felons pose to public safety through future criminal conduct.”75 The commentary suggested that sentencing decisions made by judges are “notoriously imperfect.”76 The decisions vary based on the ability and intuition of each individual decision-maker, who often lacks training in human behavior, and the conscious or unconscious bias of judges.77 The commentary encouraged the use of predictive formulas in the sentencing process, as

[a]ctuarial—or statistical—predictions of risk, derived from objective criteria, have been found superior to clinical predictions built on the professional training, experience, and judgment of the persons making predictions. The superiority of actuarial over clinical tools in this arena is supported by more than 50 years of social-science research.78

State legislatures have largely followed these recommendations for utilizing risk assessment tools in an effort to reduce recidivism, passing comprehensive corrections reform legislation requiring courts and correction agencies to adopt predictive risk assessment practices.79 The risk assessment is typically conducted during the pre-sentence investigation and subsequently included in a report given to the judge for consideration at sentencing.80 As a result of the growing trend to implement actuarial risk assessment in sentencing, risk assessment has become a competitive industry with both governmental and for-profit businesses developing instruments.81 “Recidivism prediction is ubiquitous. . . . There is an enormous body of academic and professional literature. Unprecedented private sector involvement has occurred in designing and marketing instruments and providing services to [the] government.”82

A significant disadvantage of proprietary risk assessment tools, however, is that for-profit companies do not publicly disclose the formulas used to arrive at a risk score, so neither defendants nor the public are privy to the calculations.83 For example, Northpointe, the company that sells COMPAS, does not reveal how it weighs the answers to arrive at a risk score.84 The company has claimed trade secret protection to preserve the secrecy of its formula.85 The Wisconsin Supreme Court recently addressed the use of predictive algorithms in the sentencing phase of a case, noting the proprietary nature of the formula at issue.86 The court stated that “Northpointe, Inc. . . . considers COMPAS a proprietary instrument and a trade secret. Accordingly, it does not disclose how the risk scores are determined or how the factors are weighed.”87 However, the court did not further analyze the problems created by the trade secret protection, holding merely that any pre-sentencing investigative report must note the proprietary nature of COMPAS in order to caution judges about the limitations of the risk assessment and enable them to better weigh the factors.88

D.     Government Transparency and Freedom of Information

The invocation of trade secret protection to evade disclosure of proprietary information implicates issues of government transparency. The Freedom of Information Act (“FOIA”), enacted in 1966 to supplement the Administrative Procedure Act of 1946 (“APA”), promotes an open government by granting individuals access to government records.89 President Lyndon Johnson emphasized the importance of public access and transparency when signing FOIA, stating “that ‘this legislation springs from one of our most essential principles: a democracy works best when the people have all the information that the security of the nation permits.’”90 This view has been echoed by former President Obama, who has stated that “[a] democracy requires accountability, and accountability requires transparency.”91 FOIA serves as a means to achieve this accountability and transparency by allowing individuals access to information being used by the government.92 By providing information to the public, FOIA is essential to a democratic society as it prevents secrecy and corruption and allows citizens to hold the government accountable for its actions.93

FOIA permits members of the public to formally request documents from the federal government, including agency rules, opinions, orders, records, and proceedings, which the government must promptly make available.94 This includes “essentially anything reproducible over which an agency has possession and control, no matter the format in which the record is maintained.”95 To access information under FOIA, an individual sends a request letter to the relevant government entity.96 If the request “reasonably describes” the information, the agency must “make a determination on the request within 20 business days.”97 There are certain limitations to disclosure, though. FOIA recognizes nine exemptions, one of which covers trade secrets.98 The Restatement of Torts defines “trade secret” as “any formula, pattern, device or compilation of information which is used in one’s business, and which gives him an opportunity to obtain an advantage over competitors who do not know or use it.”99 As discussed in Part II.C, Northpointe has claimed trade secret protection in COMPAS, thereby avoiding disclosure of its risk assessment algorithm.100

Despite these exemptions, FOIA still favors disclosure. The Supreme Court has held that the nine exemptions are mostly “discretionary” and are to be “narrowly construed.”101 In one case, a private company contracting with the government objected to the release of proprietary records sought under an FOIA request, arguing that the trade secret exemption barred such a disclosure.102 The Court rejected the argument and held that “[e]nlarged access to governmental information undoubtedly cuts against the privacy concerns of nongovernmental entities, and as a matter of policy some balancing and accommodation may well be desirable. We simply hold here that Congress did not design the FOIA exemptions to be mandatory bars to disclosure.”103 More recently, former President Obama issued a memorandum for the heads of executive departments and agencies clarifying FOIA, in which he stated that “[t]he Freedom of Information Act should be administered with a clear presumption: In the face of doubt, openness prevails.”104 This presumption of disclosure illustrates the extent to which an open and transparent government is valued in American society as a means to hold the government accountable to its citizens.105 It also suggests that nongovernment entities may lose some privacy protection when contracting to perform government functions.

III.     How the Private Sector Is Capitalizing on Trade Secret Protection to Evade Public Disclosure of Its Sentencing Algorithms

In order to fully understand the impact of actuarial risk assessment instruments, it is helpful to examine cases in which judges have fully embraced the use of risk assessment scores in sentencing criminal defendants. The cases described in this Part demonstrate the weight given to risk assessment scores and how judges then implement those scores at sentencing. Although these tools are routinely being used to make sentencing decisions, defendants cannot properly challenge their accuracy because they do not have access to the formulas. Compounding this problem is the fact that many jurisdictions do not validate the accuracy of the algorithms on the local population prior to use. Part III.A first discusses the full-scale adoption of predictive algorithms in the sentencing phase and looks at two Wisconsin cases in which the judges specifically cited risk assessment scores as a factor in determining length of incarceration. Part III.B then examines the effects of privatization, specifically the use of trade secret protection to prevent disclosure of the formula to defendants who wish to challenge the accuracy of their risk scores. Finally, Part III.C explains how the so-called “validation problem” further complicates the use of risk assessment instruments, noting that the lack of validation studies calls into question the decision to deny the public access to risk assessment formulas.

A.     A Look at Predictive Algorithms in the Courts

Wisconsin has been a major proponent of utilizing risk assessment tools in sentencing decisions, operating the COMPAS risk assessment software at each step in the prison system, from sentencing to parole.106 The Department of Corrections attaches the COMPAS risk assessment score to a confidential pre-sentence report given to judges prior to all felony sentencing decisions.107 The primary purpose of the test is to determine a defendant’s eligibility for probation or treatment.108 Theoretically, judges are not supposed to give harsher sentences to defendants with higher risk scores.109 In practice, however, this is not always the case. Two cases illustrate the ways in which Wisconsin judges have relied on risk scores, to the defendant’s detriment, when determining sentences. One case involved Paul Zilly, a man accused of stealing a push lawnmower and other tools.110 Zilly and his lawyer agreed to a plea deal with prosecutors, in which the state would recommend one year in a county jail followed by supervision to ensure Zilly would “stay[] on the right path.”111 However, Judge James Babler overturned the plea deal and sentenced Zilly to two years in prison, stating: “When I look at the risk assessment . . . it is about as bad as it could be.”112 The judge referenced the score generated by COMPAS, which calculated Zilly as high risk for future violent crime and medium risk for general recidivism.113 At an appeals hearing, Judge Babler explained his sentencing decision: “Had I not had the COMPAS, I believe it would likely be that I would have given one year, six months.”114

In a similar sentencing decision, another Wisconsin judge directly referenced the defendant’s risk score during sentencing.115 Eric Loomis agreed to plead guilty to two of the five criminal charges brought by the state; in exchange, the state agreed to dismiss the other counts.116 After accepting Loomis’s plea, the court ordered a “pre-sentence investigation report” (“PSI”).117 The sentencing judge reviewed the PSI, which included an attached COMPAS risk score.118 At sentencing, the judge stated: “You’re identified, through the COMPAS assessment, as an individual who is at high risk to the community.”119 After noting that the risk assessment score suggested that Loomis was at an extremely high risk to reoffend, the judge then sentenced him within the maximum of the two charges for which he entered a plea.120 Loomis appealed the sentencing court’s decision, specifically challenging the use of the risk assessment portion of the COMPAS report at sentencing, asserting that use of the tool “violates a defendant’s right to due process.”121 Although the Wisconsin Supreme Court ultimately held that use of the score did not violate Loomis’s right to due process, the case illustrates the extent to which sentencing judges incorporate the scores into their decisions, as well as a defendant’s lack of access to the algorithm.122 The court rejected Loomis’s argument that “he is in the best position to refute or explain the COMPAS risk assessment, but cannot do so based solely on a review of the scores as reflected in the bar charts.”123 The court found that Loomis was not entitled to review how the factors are weighed to determine risk score in order to verify its accuracy.124

B.     Invoking Trade Secret Protection to Avoid Disclosure

The Wisconsin cases underscore not only the growing presence of risk assessment at sentencing, but also the protection afforded proprietary risk formulas. As the demand for risk assessment tools has grown, the private sector has drastically expanded its involvement in designing and marketing risk assessment tools to sell to the government.125 One judge noted the recognizable private interests at stake in Loomis, stating: “Northpointe has an obvious financial and proprietary interest in the continued use of COMPAS.”126 Responding to concerns related to the growing use of COMPAS in criminal proceedings, Northpointe claims that “[t]here’s no secret sauce to what we do; it’s just not clearly understood.”127 Despite these claims, defendants and the public do not have full access to Northpointe’s “future-crime formula” because Northpointe carefully guards the calculations used to generate the risk scores.128 The company considers the algorithm in its risk assessment tool a proprietary instrument and a trade secret.129 Consequently, as the court addressed in Loomis, Northpointe “does not disclose how the risk scores are determined or how the factors are weighed.”130 In response to Loomis’s argument that his right to due process was violated because the proprietary nature of COMPAS prevented him from assessing its accuracy, the court held that all PSIs must now include a written advisement disclosing the proprietary nature of COMPAS.131 Professor David Levine argues that “[a]s public-private partnerships and government commercial activities increase, more frequent assertion of government trade secrets may leave us by default with policies and practices that would not stand up to public scrutiny if the policies were made by legislatures in an open, deliberative fashion.”132

C.     The Validation Problem

Because companies can avoid disclosing the algorithms behind their risk assessment tools, defendants cannot challenge the accuracy of the results.133 In response, Northpointe General Manager Jeffrey Harmon stated that “[t]he outcome . . . is all that is needed to validate the tools.”134 A report drafted by the National Center for State Courts emphasized the need for states themselves to validate risk assessment tools prior to full-scale implementation: “Given the purpose for and potential judicial consequences of using assessment information at sentencing, research must provide evidentiary support that the tool can effectively categorize all types of offenders in the local population on which the instrument will be used into groups with different probabilities of recidivating.”135 The report recommends that the supervising agency validate the instrument on a sample of the local population before fully implementing the tool in its jurisdiction.136 The report states:

After identifying the most promising tool for use in a jurisdiction, the supervising agency should validate the instrument on a sample that is representative of the local population before undertaking full-scale implementation. Importantly, this should include empirical efforts to norm the tool on different groups of offenders in the target population to ensure that the tool produces accurate risk classifications across subgroups.137

Validation of risk assessment tools is a necessary precursor to full-scale implementation because a single algorithm is unlikely to have widespread applicability across multiple populations.138 A report by the Center for Criminal Justice Research at the University of Cincinnati found that only 30% of agencies utilizing these tools had validated them on the local population prior to widespread use in their jurisdiction.139 This has created the “validation problem,” in which the government cannot verify the accuracy of the risk assessment tools implemented in its jurisdiction.140 Additionally, few independent studies have validated risk assessments.141 ProPublica cited research conducted in 2013 that examined 19 different risk assessment instruments used throughout the United States and reported that “in most cases, validity had only been examined in one or two studies” and “frequently, those investigations were completed by the same people who developed the instrument.”142 The researchers’ analysis found that the risk assessment instruments “were moderate at best in terms of predictive validity.”143 The results of such studies reveal that many jurisdictions have fully implemented proprietary risk assessment tools, such as COMPAS, without first testing their validity.144 Courts seemingly accept these formulas as accurate, even when no validation studies have been conducted, and in turn deny defendants the opportunity to challenge them. Consequently, critics of these proprietary risk assessments are requesting more transparency in order to challenge the validity of the results at sentencing hearings.145
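In statistical terms, the local validation study the report calls for is straightforward to run. The sketch below uses randomly generated placeholder data, not real offender records, to show one common check: whether a tool’s scores actually discriminate between recidivists and non-recidivists in the local population, measured by the area under the ROC curve (AUC) overall and within demographic subgroups.

```python
# A minimal sketch of a local validation check, run on placeholder data.
# With real records, scores would come from the tool under evaluation and
# outcomes from follow-up (e.g., rearrest within two years).

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1_000
scores = rng.integers(1, 11, size=n)        # the tool's 1-10 risk scores
reoffended = rng.integers(0, 2, size=n)     # observed recidivism outcomes
subgroup = rng.choice(["A", "B"], size=n)   # demographic subgroup labels

print("overall AUC:", round(roc_auc_score(reoffended, scores), 3))
for g in ("A", "B"):
    mask = subgroup == g
    print(f"subgroup {g} AUC:",
          round(roc_auc_score(reoffended[mask], scores[mask]), 3))

# An AUC near 0.5 ("somewhat more accurate than a coin flip") on local data
# would weigh against full-scale implementation in that jurisdiction.
```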

IV.     Requiring Transparency for Risk Assessment Formulas Developed by Private, For-Profit Companies

This Part explains how denying the public access to information used in governmental proceedings frustrates the purpose of freedom of information laws, which serve as a tool to keep the public fully informed of government actions. Part IV.A argues that the current application of FOIA, which recognizes trade secret protection for risk assessment tools used in criminal sentencing, frustrates government transparency. Part IV.B reviews state-generated predictive sentencing algorithms as an alternative to proprietary instruments, explaining how these government-created tools avoid the trade secret problems inherent in privatized formulas. Finally, Part IV.C argues that disclosure requirements should extend to proprietary risk assessment formulas.

A.     Access to Information

Despite government assertions that transparency and accountability are necessary for a functioning democracy, government actions in criminal sentencing decisions have been informed by proprietary risk assessment tools that are shielded from the public by trade secret protection.146 Because companies such as Northpointe profit from performing a public service, FOIA and its state counterparts should be amended so that the trade secret exemption to disclosure does not apply to proprietary risk assessment tools used in criminal sentencing.

Although private companies provide government services by supplying the risk assessment tools used in criminal law, these companies simultaneously take advantage of commercial law protections.147 Professor David Levine argues “that we can and should expect such public disclosure when companies step out of the purely private commercial world and seek to reap the financial benefits of providing essential public infrastructure, and that trade secret law stands in the way of this goal.”148 Private firms that elect to provide public services should be subjected to the same transparency and accountability requirements as government agencies.149 This would ensure that the underlying principle of a transparent government is maintained. The basis of American freedom of information laws can be traced to Jeremy Bentham, the 18th-century English philosopher, who wrote:

But in an open and free policy, what confidence and security—I do not say for the people, but for the governors themselves! Let it be impossible that any thing should be done which is unknown to the nation—prove to it that you neither intend to deceive nor to surprise—you take away all the weapons of discontent. The public will repay with usury the confidence you repose in it. Calumny will lose its force; it collects its venom in the caverns of obscurity, but it is destroyed by the light of day.150

Although legislation such as FOIA embodies the “open” policy described by Bentham, the trade secret exemption has effectively frustrated those goals.151 The Supreme Court attributed the creation of the trade secret exception to the fact that “private entities [were] seeking [g]overnment contracts.”152 Despite creating an exception for trade secrets, “FOIA sets a default of disclosure[,] . . . orient[ing] government towards disclosure” rather than secrecy.153 Once a private company deviates from purely commercial matters towards governmental matters, trade secrecy and democratic values come into conflict.154 Law professors Danielle Citron and Frank Pasquale reason that “the logics of predictive scoring systems should be open to public inspection . . . . There is little evidence that the inability to keep such systems secret would diminish innovation.”155

The need for an amendment to FOIA is illustrated by the results of a 2015 study at the University of Maryland addressing the lack of access to proprietary risk assessment instruments.156 That study involved FOIA requests for documents or source code related to predictive algorithms used in criminal proceedings.157 Oregon was the only state to disclose its predictive algorithm, providing “the 16 variables and their weights.”158 Some states provided documents with descriptions of their risk assessments, but no details regarding development or validation.159 Other states refused the request completely based on trade secret protection.160 Because there is growing acceptance that trade secret protection exempts companies like Northpointe from FOIA disclosure requirements, an amendment to the Freedom of Information Act is needed to require disclosure of risk assessment formulas, thereby giving effect to the presumption of disclosure that the government claims to embrace. As a result, companies that sell risk assessment instruments that are then used by public agencies and considered by judges in criminal sentencing decisions would be subject to the same transparency requirements as other public agencies.

Requiring disclosure would help ensure that the risk scores generated by proprietary instruments, which are often not validated before implementation, are in fact accurate. It would also ensure public access to government information. As more public functions are contracted out to private entities that are not subject to the same standards of transparency imposed on government actors, the effectiveness of FOIA is undermined.161 Danielle Citron warns that preventing public access to data models “undermines the democratic process.”162 When the creators of proprietary algorithms refuse to reveal the method and logic behind their models, the tools are left “shrouded in secrecy.”163 In a statement before the Senate Committee on Governmental Affairs, David Sobel testified that “public disclosure of this information improves government oversight and accountability. It also helps ensure that the public is fully informed about the activities of government.”164 Existing transparency procedures should be applied to risk assessment algorithms to ensure that freedom of information laws are not undercut.165

B.     The Benefits of Public Access to Risk Assessment

Requiring the disclosure of proprietary risk assessment tools is good public policy because it allows for close evaluation of the algorithm, which in turn helps ensure its accuracy. Indeed, some states avoid proprietary trade secret problems by developing their own risk assessment algorithms. Pennsylvania, for example, created the Pennsylvania Risk Assessment Project to help inform criminal sentencing decisions.166 The legislation behind the project, Act 95, directed a commission to adopt a risk assessment tool that would consider risk of reoffending, potential threats to public safety, and the possibility of alternative sentencing programs, all based on empirical data believed to predict recidivism.167

Unlike other states that use proprietary algorithms in sentencing decisions, “the level of transparency around the Pennsylvania Risk Assessment Project is laudable, with several publicly available in-depth reports on the development of the system.”168 These reports include extensive analysis of the development of the risk assessment method, explaining the factors to be considered and the weight to be given each factor in calculating the risk scale.169 The transparency in providing widespread access to the reports gives the public an opportunity to evaluate the risk assessment method, an opportunity that is lacking with proprietary instruments.170 The commission also published a comprehensive validation report, providing the results of focus group and beta testing.171

Similarly, the Ohio Department of Rehabilitation and Correction constructed its own statewide risk assessment system, the Ohio Risk Assessment System, to “improve[] consistency and facilitate[] communication across criminal justice agencies.”172 By creating a risk assessment instrument designed around its local, target population, Ohio has avoided one of the problems associated with commercial risk assessment instruments, which are generally developed on samples from a different population.173 Like Pennsylvania, Ohio published an in-depth study that it made available to the public.174 The study outlines “the creation and validation” of the risk assessment tools.175 While some jurisdictions rely on commercially developed risk assessment instruments due to limited resources, a state-developed instrument such as the Ohio Risk Assessment System may be more accurate at predicting recidivism for the state’s local population because it is designed around that population.176

C.     Extending Disclosure Requirements to Proprietary Risk Assessment Instruments

As the government relies increasingly on technology, Congress and the courts have conditioned technological expansion on government adherence to safeguards “designed to ensure the fairness, transparency, and accountability of agencies’ decisions about particular individuals.”177 Because this technological expansion implicates both public and private interests, some have argued that “a reconsideration of the rules for [the] government is not only a good idea but a necessity.”178 The rationale for allowing companies to maintain a competitive advantage through trade secret protection should not carry the same force when applied to risk assessment tools like COMPAS, because the company’s private interests cannot be reconciled with the public interest.179 Indeed, some courts have rejected trade secret protection the moment a private function is recognized as governmental.180 Despite this, transparency for the public good is often sacrificed for the protection of commercial interests. Professor Levine argues that “[a]s public-private partnerships and government commercial activities increase, more frequent assertion of government trade secrets may leave us by default with policies and practices that would not stand up to public scrutiny if the policies were made by legislatures in an open, deliberative fashion.”181 When tools such as COMPAS are used to make sentencing decisions, the function of the risk assessment tool should be considered governmental rather than proprietary. Once this occurs, companies can no longer assert protections such as trade secrecy to prevent disclosure of the algorithms, and they must submit to the same transparency requirements as government agencies.

V.     Conclusion

As jurisdictions continue to incorporate scientific and technological methods as a means of reducing recidivism, they should consider the costs and benefits of incorporating proprietary risk assessment instruments. While the appeal of the increased efficiency associated with systems like COMPAS is understandable, state governments should not abandon the values of open government on which the country was founded. As the cases above demonstrate, states have accepted proprietary algorithms without first validating their accuracy. Defendants in the criminal justice system are then forced to accept the accuracy of these algorithms as well, because trade secret protection prevents disclosure of the formulas.

In short, when private companies benefit from providing a public service, they should be subjected to the same transparency requirements as public agencies. Just as states that develop their own risk assessment instruments disclose to the public in-depth reports about their algorithms, private companies should adhere to the same requirements when they choose to contract with the government.


 

1. See Claire Botnick, Note, Evidence-Based Practice and Sentencing in State Courts: A Critique of the Missouri System, 49 Wash. U. J.L. & Pol’y 159, 159 (2015) (“The number of adults under some form of correctional supervision in the United States has increased by 270 percent since 1980.”).

2. Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age 88–89 (2007).

3. Id. at 88; see also Pamela M. Casey et al., Nat’l Ctr. for State Courts, Using Offender Risk and Needs Assessment Information at Sentencing: Guidance for Courts from a National Working Group 2 (2011) (“A sample of felony defendants from the nation’s 75 most populous counties during [2004] revealed that more than 75 percent had a prior arrest history, and 53 percent had at least five prior arrest charges. Another study of nearly 275,000 prisoners released in 1994 found that two-thirds were rearrested for a new offense within three years.” (citation omitted)).

4. Harcourt, supra note 2, at 88.

5. Bernard E. Harcourt, Risk as a Proxy for Race 2 (John M. Olin Law & Econ. Working Paper No. 535 (2d Series) & Pub. Law & Legal Theory Working Paper No. 323, 2010), http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1265&context=public_law_and_legal_theory.

6. Dawinder S. Sidhu, Moneyball Sentencing, 56 B.C. L. Rev. 671, 687 (2015) (alteration in original) (quoting William M. Grove & Paul E. Meehl, Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical–Statistical Controversy, 2 Psychol. Pub. Pol’y & L. 293, 294 (1996)).

7. Harcourt, supra note 2, at 16–17.

8. Id. at 2, 17–18; see also Scott VanBenschoten, Risk/Needs Assessment: Is This the Best We Can Do?, 72 Fed. Prob. 38, 38–39 (2008) (stating that the increased use of actuarial risk assessment instruments demonstrates a trend over the last 30 years away from the clinical judgment of judicial officers); Joe Palazzolo, Wisconsin Supreme Court to Rule on Predictive Algorithms Used in Sentencing, Wall St. J. (June 5, 2016, 5:30 AM), http://www.wsj.com/articles/wisconsin-supreme-court-to-rule-on-predictive-algorithms-used-in-sentencing-1465119008 (“‘Evidence has a better track record for assessing risks and needs than intuition alone,’ wrote Christine Remington, an assistant attorney general in Wisconsin, ... defending the state’s use of the evaluations.”); Harcourt, supra note 5, at 2 (“There are, to be sure, political advantages to using technical instruments such as actuarial tools to justify prison releases. Risk-assessment tools protect political actors and serve to de-responsibilize decision-makers.”).

9. Harcourt, supra note 2, at 2 (“Today, the actuarial permeates the field of criminal law and its enforcement.”).

10. Id. at 16–17 (quoting Paul E. Meehl, Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence 3 (1954)).

11. See infra Part II.A.

12. Algorithms in the Criminal Justice System, Electronic Privacy Info. Ctr., https://epic.org/algorithmic-transparency/crim-justice (last visited July 9, 2017).

13. See Harcourt, supra note 2, at 21 (“It has become, today, second nature to believe that actuarial methods enhance the efficiency of our carceral practices with hardly any offsetting social costs—with the exception, for some, at least publicly, of racial profiling. To most people, criminal profiling on a nonspurious trait simply increases the detection of crime and renders police searches more successful, which inevitably reduces crime rates... . [T]he detection of crime will increase, the efficiency of law enforcement will improve, and, through the traditional mechanisms of deterrence and incapacitation, crime rates will decrease.”); see also Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 4 (2014) (“The scoring trend is often touted as good news. Advocates applaud the removal of human beings and their flaws from the assessment process. Automated systems are claimed to rate all individuals in the same way, thus averting discrimination. But this account is misleading. Because human beings program predictive algorithms, their biases and values are embedded into the software’s instructions, known as the source code and predictive algorithms.”).

14. See infra Part II.B–C.

15. See infra Part III.C.

16. See infra Part III.B.

17. See infra Part III.C.

18. See infra Part III.C.

19. Roger K. Warren, Evidence-Based Practices and State Sentencing Policy: Ten Policy Initiatives to Reduce Recidivism, 82 Ind. L.J. 1307, 1308 (2007) (“Over the last thirty years, the applicable state statutes, rules, and guidelines have increasingly relied upon imprisonment and incarceration—both for the purpose of punishment for criminal behavior, and for the purpose of incapacitation and deterrence from future criminal conduct—and lessened reliance on other forms of punishment as well as on strategies of rehabilitation.”).

20. Id.

21. Citron & Pasquale, supra note 13, at 3.

22. Melissa Hamilton, Risk-Needs Assessment: Constitutional and Ethical Challenges, 52 Am. Crim. L. Rev. 231, 232 (2015); see also Harcourt, supra note 5, at 1 (“An increasing chorus argues, today, that risk-assessment instruments are a politically feasible method to redress our problem of mass incarceration and reduce prison populations.”).

23. Casey et al., supra note 3, at 6.

24. Harcourt, supra note 2, at 17.

25. Harcourt, supra note 5, at 4; see also Harcourt, supra note 2, at 47 (“[T]here can be no doubt of the feasibility of determining the factors governing the success or the failure of the man on parole.” (alteration in original)). The factors include:

    (1) nature of offense; (2) number of associates in committing offense for which convicted; (3) nationality of the inmate’s father; (4) parental status, including broken homes; (5) marital status of the inmate; (6) type of criminal, as first offender, occasional offender, habitual offender, professional criminal; (7) social type, as ne’er-do-well, gangster, hobo; (8) county from which committed; (9) size of community; (10) type of neighborhood; (11) resident or transient in community when arrested; (12) statement of trial judge and prosecuting attorney with reference to recommendation for or against leniency; (13) whether or not commitment was upon acceptance of lesser plea; (14) nature and length of sentence imposed; (15) months of sentence actually served before parole; (16) previous criminal record of the prisoner; (17) his previous work record; (18) his punishment record in the institution; (19) his age at time of parole; (20) his mental age according to psychiatric examination; (21) his personality type according to psychiatric examination; and (22) psychiatric prognosis.

    Id. at 57, 276 n.44 (quoting Andrew A. Bruce et al., The Workings of the Indeterminate-Sentence Law and the Parole System in Illinois 262–63 (1928)).

26. Harcourt, supra note 2, at 60.

27. Id. at 61, 277 n.73 (“The seven factors follow: (1) industrial habits, (2) seriousness and frequency of prereformatory crime, (3) arrests for crimes preceding, (4) penal experience preceding, (5) economic responsibility preceding, (6) mental abnormality on entrance, (7) frequency of offences in reformatory.”).

28. Hamilton, supra note 22, at 234 (footnotes omitted); see also Harcourt, supra note 5, at 6–8 (detailing the trends in research and later adoption of prediction methods).

29. Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), http://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (explaining that risk assessments are used in setting bond amounts and, in some states, given to judges for criminal sentencing); Anna Maria Barry-Jester et al., The New Science of Sentencing, Marshall Project (Aug. 4, 2015, 7:15 AM), https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing (“Many court systems use the tools to guide decisions about which prisoners to release on parole, for example, and risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial.”); see also Hamilton, supra note 22, at 234–35 (“The adoption of the evidence-based model in general, and the implementation of risk-needs tools more specifically, has recently been promoted in pretrial contexts, such as pretrial diversion, deferred adjudication, bail, and plea negotiations, and juvenile transfers to adult court.” (footnotes omitted)).

30. Joe Palazzolo, Court: Judges Can Consider Predictive Algorithms in Sentencing, Wall St. J. (July 13, 2016, 5:04 PM), http://blogs.wsj.com/law/2016/07/13/court-judges-can-consider-predictive-algorithms-in-sentencing.

31. Harcourt, supra note 2, at 16.

32. Barry-Jester et al., supra note 29; see also Palazzolo, supra note 30 (stating that the value in risk evaluation tools “is in helping authorities allocate resources more precisely, ideally ensuring that high-risk individuals receive more supervision and services and for longer periods than those deemed low risk”).

33. Barry-Jester et al., supra note 29.

34. Id.

35. Hamilton, supra note 22, at 235–36.

36. Id. (“[R]isk-needs instruments in the field of criminal offending often embed at least a few factors from the central eight categories.”).

37. Barry-Jester et al., supra note 29.

38. Id.

39. Id.

40. Id.

41. Id.

42. Algorithms in the Criminal Justice System, supra note 12 (COMPAS “assesses variables under five main areas: criminal involvement, relationships/lifestyles, personality/attitudes, family, and social exclusion. The LSI-R, developed by Canadian company Multi-Health Systems, also pulls information from a wide set of factors, ranging from criminal history to personality patterns. Using a narrower set of parameters, the Public Safety Assessment, developed by the Laura and John Arnold Foundation, only considers variables that relate to a defendant’s age and criminal history.”). While the different systems vary slightly, this Note focuses on COMPAS, as it is one of the most widely used in the United States. See Jeff Larson et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 23, 2016), https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (“COMPAS ... is one of the most popular scores used nationwide ... .”).

  43. [43]. Hamilton, supra note 22, at 239 (quoting Northpointe, Practitioner’s Guide to COMPAS 1 (2012)).

  44. [44]. Palazzolo, supra note 8. COMPAS distinguishes itself from other algorithms:

    Unlike other risk assessment instruments, which provide a single risk score, the COMPAS provides separate risk estimates for violence, recidivism, failure to appear, and community failure. In addition to the Overall Risk Potential, as represented by those four scales, the COMPAS provides a Crimonogenic and Needs Profile for the offender. This profile provides information about the offender with respect to criminal history, needs assessment, criminal attitudes, social environment, and additional factors such as socialization failure, criminal opportunity, criminal personality, and social support.

    Hamilton, supra note 22, at 239 (quoting Tracy L. Fass et al., The LSI-R and the COMPAS: Validation Data on Two Risk-Needs Tools, 35 Crim. Just. & Behav. 1095, 1098 (2008)).

  45. [45]. Palazzolo, supra note 8.

  46. [46]. Angwin et al., supra note 29.

  47. [47]. Id.

  48. [48]. Harcourt, supra note 2, at 78–80.

  49. [49]. Id. at 80.

  50. [50]. Id.

  51. [51]. Id. “These grids and matrices reflect what Paul Robinson refers to as a fundamental shift in our criminal justice system during the last decades of the twentieth century: a shift ‘from punishing past crimes to preventing future violations through the incarceration and control of dangerous offenders’—or, more succinctly, ‘the shifting of the criminal justice system toward the dangerous offenders.’” Id. at 87 (quoting Paul H. Robinson, Punishing Dangerousness: Cloaking Preventive Detention as Criminal Justice, 114 Harv. L. Rev. 1429, 1429, 1432 (2001)).

  52. [52]. Angwin et al., supra note 29.

  53. [53]. Id.

  54. [54]. Barry-Jester et al., supra note 29 (noting that the use of risk assessment in criminal sentencing raises some concerns because “[i]t’s a higher-stakes decision point in terms of someone’s liberty”).

  55. [55]. Palazzolo, supra note 30.

  56. [56]. Angwin et al., supra note 29.

  57. [57]. See id. (“Boessenecker, who trains other judges around the state in evidence-based sentencing, cautions his colleagues that the score doesn’t necessarily reveal whether a person is dangerous or if they should go to prison... . ‘A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job,’ Boessenecker said. ‘Meanwhile, a drunk guy will look high risk because he’s homeless. These risk factors don’t tell you whether the guy ought to go to prison or not; the risk factors tell you more about what the probation conditions ought to be.’”). But see Barry-Jester et al., supra note 29 (“When implemented correctly, whether in the fields of medicine, finance or criminal justice, statistical actuarial tools are accurate at predicting human behavior—about 10 percent more accurate than experts assessing without the assistance of such a tool, according to a 2000 paper by a team of psychologists at the University of Minnesota.”).

  58. [58]. Angwin et al., supra note 29.

  59. [59]. Larson et al., supra note 42.

  60. [60]. Angwin et al., supra note 29. The study was based on an analysis of more than 7,000 arrested individuals. After two years, researchers compared which of those individuals had been charged with new crimes against their prior COMPAS scores. Id.

  61. [61]. Id. (“Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.”).

  62. [62]. See infra Part III.A (discussing court decisions); see also Barry-Jester et al., supra note 29 (“The ACLU challenged the constitutionality of the law, arguing that basing sentences on statistical correlations, rather than the details of a specific case, ‘cuts to the core of the fundamental Constitutional principles of equality and fairness.’”).

  63. [63]. Angwin et al., supra note 29; see also Harcourt, supra note 2, at 16 (“[A]ctuarial risk assessment ‘has become a largely uncontested aspect of a much-expanded criminal process, and it has been entrusted to a range of criminal justice actors, including prosecutors, juries, judges, and administrative appointees.’ Prediction of criminality has become de rigueur in our highly administrative law enforcement and prison sectors ... .” (quoting Jonathan Simon, Reversal of Fortune: The Resurgence of Individual Risk Assessment in Criminal Justice, 1 Ann. Rev. L. & Soc. Sci. 397, 398 (2005))).

  64. [64]. Richard P. Kern & Mark H. Bergstrom, A View from the Field: Practitioners’ Response to Actuarial Sentencing: An “Unsettled” Proposition, 25 Fed. Sent’g Rep. 185, 187 (2013).

  65. [65]. Palazzolo, supra note 30.

  66. [66]. Memorandum from the Vera Inst. of Justice, Ctr. on Sentencing & Corr. to Del. Justice Reinvestment Task Force 4 (Oct. 12, 2011), http://www.ma4jr.org/wp-content/uploads/2014/10/vera-institute-memo-on-risk-assessment-for-delaware-2011.pdf; see also Nathan James, Cong. Research Serv., Risk and Needs Assessment in the Criminal Justice System 4 (2015). “[A]ssessments can occur at different points in the system including pretrial detention, sentencing, intake to probation, entry to prison, release on parole, and during probation or parole supervision.” Memorandum from the Vera Inst. of Justice, Ctr. on Sentencing & Corr. to Del. Justice Reinvestment Task Force, supra, at 9.

  67. [67]. Angwin et al., supra note 29; see also Sentencing Reform and Corrections Act of 2015, S. 2123, 114th Cong. (2015).

  68. [68]. Casey et al., supra note 3, at 2–3. See generally Tracy W. Peters & Roger K. Warren, Nat’l Ctr. for State Courts, Getting Smarter About Sentencing: NCSC’s Sentencing Reform Survey (2006).

  69. [69]. Casey et al., supra note 3, at 3.

  70. [70]. Conference of Chief Justices & Conference of State Court Adm’rs, Resolution 12: In Support of Sentencing Practices that Promote Public Safety and Reduce Recidivism, Nat’l Ctr. for St. Cts. (Aug. 1, 2007).

  71. [71]. Id. (“[T]he Conference of Chief Justices and the Conference of State Court Administrators support state efforts to adopt sentencing and corrections policies and programs based on the best research evidence of practices shown to be effective in reducing recidivism; and ... the Conferences urge each chief justice and state court administrator to work with members of the executive and legislative branches as appropriate to promote policies and practices that place properly identified offenders in corrections programs and facilities shown to be effective in reducing recidivism ... .”).

  72. [72]. Id.

  73. [73]. Id.

  74. [74]. Casey et al., supra note 3, at 3.

  75. [75]. Model Penal Code § 6B.09 (Am. Law Inst., Discussion Draft No. 2, 2009).

  76. [76]. Id. cmt. a.

  77. [77]. Id.

  78. [78]. Id. cmt. a.

  79. [79]. Pew Ctr. on the States, Risk/Needs Assessment 101: Science Reveals New Tools to Manage Offenders (2011), http://www.pewtrusts.org/~/media/legacy/uploadedfiles/pcs_assets/2011/pewriskassessmentbriefpdf.pdf.

  80. [80]. Memorandum from the Vera Inst. of Justice, Ctr. on Sentencing & Corr. to Del. Justice Reinvestment Task Force, supra note 66, at 10 (“These results may include the offender’s level of risk, the needs or risk factors identified, and the strengths or assets identified. The report may also include a proposed supervision plan based on the identified needs and a recommendation as to whether the person is suitable for community placement.”). “The results are usually shared with the defendant’s attorney, but the calculations that transformed the underlying data into a score are rarely revealed.” Angwin et al., supra note 29.

  81. [81]. Hamilton, supra note 22, at 232.

  82. [82]. Id. at 234 (quoting Michael Tonry, Legal and Ethical Issues in the Prediction of Recidivism, 26 Fed. Sent’g Rep. 167, 167 (2014)).

  83. [83]. Id.

  84. [84]. Palazzolo, supra note 30.

  85. [85]. State v. Loomis, 881 N.W.2d 749, 760 (Wis. 2016).

  86. [86]. Id.

  87. [87]. Id. at 761.

  88. [88]. Id. at 763–64 (“Specifically, any PSI containing a COMPAS risk assessment must inform the sentencing court about the following cautions regarding a COMPAS risk assessment’s accuracy: (1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; and (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations. Providing information to sentencing courts on the limitations and cautions attendant with the use of COMPAS risk assessments will enable courts to better assess the accuracy of the assessment and the appropriate weight to be given to the risk score.”).

  89. [89]. Jefferey M. Sellers, Note, Public Enforcement of the Freedom of Information Act, 2 Yale L. & Pol’y Rev. 78, 80 (1983). “The original APA, the first statute to provide for public disclosure of records, orders, and other Executive documents, required only that ‘matters of official record shall in accordance with published rule be made available to persons properly and directly concerned except information held confidential for good cause found.’ It allowed broad exemptions from this requirement ... .” Id. (quoting Administrative Procedure Act, § 3, 60 Stat. 237, 238 (1946), codified at 5 U.S.C. § 1002 (1964)). FOIA was meant to provide more transparency than the APA and only allowed for a narrower list of exemptions. See id.

  90. [90]. Benny L. Kass, The New Freedom of Information Act, 53 A.B.A. J. 667, 669 (1967).

  91. [91]. U.S. Dep’t of Justice, Department of Justice Guide to the Freedom of Information Act 1–2 (2013), https://www.justice.gov/sites/default/files/oip/legacy/2014/07/23/intro-july-19-2013.pdf (alteration in original) (quoting Freedom of Information Act: Presidential Memorandum for Heads of Executive Departments and Agencies Concerning the Freedom of Information Act, 74 Fed. Reg. 4683, 4683 (Jan. 21, 2009)).

  92. [92]. See id. at 1.

  93. [93]. Id.

  94. [94]. Freedom of Information Act, 5 U.S.C. § 552(a)(2) (2012). Each agency must disclose to the public “final opinions, including concurring and dissenting opinions, as well as orders, made in the adjudication of cases,” “those statements of policy and interpretations which have been adopted by the agency and are not published in the Federal Register,” “administrative staff manuals and instructions to staff that affect a member of the public,” and “copies of all records.” Id.

  95. [95]. Justin Cox, Maximizing Information’s Freedom: The Nuts, Bolts, and Levers of FOIA, 13 N.Y.C. L. Rev. 387, 390 n.10 (2010).

  96. [96]. Id. at 391.

  97. [97]. Id.

  98. [98]. See 5 U.S.C. § 552(b). The nine categories that are exempted under FOIA include:

    (1) information classified under criteria established by Executive Order, (2) materials related solely to an agency’s internal rules and practices, (3) information specifically exempted from disclosure by statute, (4) trade secrets and confidential commercial or financial information, (5) agency memoranda that would not be available to the public by law, (6) files whose disclosure would constitute a clearly unwarranted invasion of privacy, (7) investigatory files compiled for law enforcement purposes, (8) certain materials related to regulation or supervision of financial institutions, and (9) geological and geophysical information.

    Sellers, supra note 89, at 80–81.

  99. [99]. Restatement (First) of Torts § 757 cmt. b (Am. Law Inst. 1939).

  100. [100]. See supra Part II.C.

  101. [101]. Cox, supra note 95, at 391.

  102. [102]. Chrysler Corp. v. Brown, 441 U.S. 281, 291 (1979).

  103. [103]. Id. at 293.

  104. [104]. Freedom of Information Act, supra note 91, at 4683; see also Cox, supra note 95, at 392 (“The presumption remains, at all times, that agency records are to be disclosed.”).

  105. [105]. Freedom of Information Act, supra note 91, at 4683 (“As Justice Louis Brandeis wrote, ‘sunlight is said to be the best of disinfectants.’ In our democracy, the Freedom of Information Act (FOIA), which encourages accountability through transparency, is the most prominent expression of a profound national commitment to ensuring an open Government. At the heart of that commitment is the idea that accountability is in the interest of the Government and the citizenry alike.”).

  106. [106]. Angwin et al., supra note 29.

  107. [107]. Id.

  108. [108]. Id.

  109. [109]. Id.

  110. [110]. Id.

  111. [111]. Id.

  112. [112]. Id.

  113. [113]. Id.

  114. [114]. Id.

  115. [115]. State v. Loomis, 881 N.W.2d 749, 755 (Wis. 2016).

  116. [116]. Id. at 754. The state of Wisconsin alleged that Loomis was the driver in a drive-by shooting. Id. The state “charged him with five counts, all as a repeater: (1) First-degree recklessly endangering safety ... ; (2) Attempting to flee or elude a traffic officer ... ; (3) Operating a motor vehicle without the owner’s consent; (4) Possession of a firearm by a felon ... ; (5) Possession of a short-barreled shotgun or rifle ... .” Id. According to the plea agreement, “[t]he State will leave any appropriate sentence to the Court’s discretion, but will argue aggravating and mitigating factors.” Id.

  117. [117]. Id.

  118. [118]. Id. at 755.

  119. [119]. Id.

  120. [120]. Id. at 756.

  121. [121]. Id. at 757.

    Specifically, Loomis asserts that the circuit court’s use of a COMPAS risk assessment at sentencing violates a defendant’s right to due process for three reasons: (1) it violates a defendant’s right to be sentenced based upon accurate information, in part because the proprietary nature of COMPAS prevents him from assessing its accuracy; (2) it violates a defendant’s right to an individualized sentence; and (3) it improperly uses gendered assessments in sentencing.

    Id.

  122. [122]. See id. at 771.

  123. [123]. Id. at 761.

  124. [124]. Id.

  125. [125]. Hamilton, supra note 22, at 234.

  126. [126]. Loomis, 881 N.W.2d at 776 (Abrahamson, J., concurring).

  127. [127]. Jason Tashea, Risk-Assessment Algorithms Challenged in Bail, Sentencing and Parole Decisions, ABA J. (Mar. 1, 2017, 1:30 AM), http://www.abajournal.com/magazine/article/algorithm_bail_sentencing_parole.

  128. [128]. Angwin et al., supra note 29.

  129. [129]. Loomis, 881 N.W.2d at 761.

  130. [130]. Id.

  131. [131]. Id. at 757, 769. The court stated that the written advisement should disclose the following:

    The proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are determined. Because COMPAS risk assessment scores are based on group data, they are able to identify groups of high-risk offenders—not a particular high-risk individual. Some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism. A COMPAS risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed. Risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations. COMPAS was not developed for use at sentencing, but was intended for use by the Department of Corrections in making determinations regarding treatment, supervision, and parole.

    Id. at 769–70.

  132. [132]. David S. Levine, The People’s Trade Secrets?, 18 Mich. Telecomm. & Tech. L. Rev. 61, 100 (2011).

  133. [133]. Algorithms in the Criminal Justice System, supra note 12.

  134. [134]. Palazzolo, supra note 8.

  135. [135]. Casey et al., supra note 3, at 30. The authors argue that “[j]urisdictions should select instruments that fit their assessment needs and that have been properly validated for use with their offender populations.” Id. at 29.

  136. [136]. Id. at 29; see also Edward Latessa et al., Creation and Validation of the Ohio Risk Assessment System: Final Report 8–9 (2009) (“Many criminal justice agencies often use empirically derived tools that have been developed on samples from a different population. Although this is less cost restrictive, it assumes that the instrument is a valid predictor of recidivism for each agency’s specific population. Also, it is likely that there are different populations of offenders within jurisdictions... . Given that it is unlikely for a single instrument to have universal applicability across various offending populations, there is a clear necessity to validate risk assessment instruments to each specific target population.” (citations omitted)). Further complicating the need for validity research is the fact that many risk assessment instruments were not actually designed for use in sentencing defendants. See Angwin et al., supra note 29 (“Most modern risk tools were originally designed to provide judges with insight into the types of treatment that an individual might need—from drug treatment to mental health counseling.”).

  137. [137]. Casey et al., supra note 3, at 29–30; see also VanBenschoten, supra note 8, at 40 (“The accuracy of both the risk and need prediction is the most critical component to the future of risk assessment. Although companies that market off-the-shelf instruments make strong claims about the predictive statistical quality of their instrument, the question is deceptively complex. Because an instrument predicts well in the aggregate does not mean it predicts risk with every subpopulation.”).

  138. [138]. Edward J. Latessa et al., The Creation and Validation of the Ohio Risk Assessment System (ORAS), 74 Fed. Prob. 16, 17 (2010).

  139. [139]. Dana Jones Hubbard et al., Ctr. for Criminal Justice Research, Case Classification in Community Corrections: A National Survey of the State of the Art 34 (2001).

  140. [140]. Casey et al., supra note 3, at 30.

  141. [141]. Angwin et al., supra note 29.

  142. [142]. Id.

  143. [143]. Id.

  144. [144]. Algorithms in the Criminal Justice System, supra note 12; see also State v. Loomis, 881 N.W.2d 749, 762 (Wis. 2016) (“Wisconsin has not yet completed a statistical validation study of COMPAS for a Wisconsin population.”); Angwin et al., supra note 29 (“As often happens with risk assessment tools, many jurisdictions have adopted Northpointe’s software before rigorously testing whether it works. New York State, for instance, started using the tool to assess people on probation in a pilot project in 2001 and rolled it out to the rest of the state’s probation departments—except New York City—by 2010. The state didn’t publish a comprehensive statistical evaluation of the tool until 2012.”).

  145. [145]. See Palazzolo, supra note 30.

  146. [146]. See David S. Levine, Secrecy and Unaccountability: Trade Secrets in Our Public Infrastructure, 59 Fla. L. Rev. 135, 138 (2007) (“Secrecy, and its attendant goals of pecuniary gain and commercial competition, conflict with the methods and purpose of transparent and accountable democratic governance.”).

  147. [147]. Id. at 137.

  148. [148]. Id. at 140.

  149. [149]. Id. (“When private firms provide public infrastructure, commercial trade secrecy should be discarded (at least in its pure form) and give way to more transparency and accountability.”).

  150. [150]. Id. at 159 (quoting Jeremy Bentham, An Essay on Political Tactics, in 2 The Works of Jeremy Bentham 299, 310–12 (John Bowring ed., 1837)).

  151. [151]. See id.

  152. [152]. Levine, supra note 146, at 160 (second alteration in original) (quoting Chrysler Corp. v. Brown, 441 U.S. 281, 292 (1979)).

  153. [153]. Id. (explaining that open government values are “the opposite of trade secrecy, which protects secrecy except in limited circumstances”).

  154. [154]. Id. at 162 (“Trade secrecy and democratic values collide in the private provision of public infrastructure.”).

  155. [155]. Citron & Pasquale, supra note 13, at 26.

  156. [156]. Nicholas Diakopoulos, We Need to Know the Algorithms the Government Uses to Make Important Decisions About Us, Conversation (May 23, 2016, 8:48 PM), https://theconversation.com/we-need-to-know-the-algorithms-the-government-uses-to-make-important-decisions-about-us-57869.

  157. [157]. Id.

  158. [158]. Id.

  159. [159]. Id.

  160. [160]. Id.

  161. [161]. See Algorithms in the Criminal Justice System, supra note 12 (“[B]ecause such algorithms are proprietary, they are not subject to state or federal open government laws.”).

  162. [162]. Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1290–91 (2008) (“Because the public has no opportunity to identify problems with troubled systems, it cannot present those complaints to elected officials. In turn, government actors are unable to influence policy when it is shrouded in closed code.”).

  163. [163]. Citron & Pasquale, supra note 13, at 5 (“No one can challenge the process of scoring and the results because the algorithms are zealously guarded trade secrets.”).

  164. [164]. Securing Our Infrastructure: Private/Public Information Sharing: Hearing Before the S. Comm. on Governmental Affairs, 107th Cong. 37 (2002) (statement of David L. Sobel, Gen. Counsel, Elec. Privacy Info. Ctr.); see also Algorithms in the Criminal Justice System, supra note 12 (“Secrecy of the algorithms used to determine guilt or innocence undermines faith in the criminal justice system.”).

  165. [165]. Diakopoulos, supra note 156.

  166. [166]. Risk Assessment Project, Penn. Comm’n on Sentencing, http://pcs.la.psu.edu/publications-and-research/research-and-evaluation-reports/risk-assessment (last visited July 9, 2017) (“The correctional reform legislation enacted in 2008 . . . requires the Commission to develop and adopt new guidelines for parole (county and state) and re-parole, as well as for re-sentencing following revocation probation, County Intermediate Punishment . . . and State Intermediate Punishment . . . . In developing guidelines for parole, Act 81 of 2008 mandates that the guidelines consider validated risk assessment tools, and take into account available research relating to the risk of recidivism.”).

  167. [167]. Id.

  168. [168]. Nicholas Diakopoulos, How to Hold Governments Accountable for the Algorithms They Use, Slate (Feb. 11, 2016, 8:00 AM), http://www.slate.com/articles/technology/future_tense/2016/02/how_to_hold_governments_accountable_for_their_algorithms.html.

  169. [169]. Penn. Comm’n on Sentencing, Interim Report 4: Development of Risk Assessment Scale 1, 5–7 (2012), http://pcs.la.psu.edu/publications-and-research/research-and-evaluation-reports/risk-assessment/phase-i-reports/interim-report-4-development-of-risk-assessment-scale/view.

  170. [170]. Diakopoulos, supra note 156.

  171. [171]. Penn. Comm’n on Sentencing, Phase II: Interim Report 1: Development of a Risk Assessment Scale by Offense Gravity Score for All Offenders 2–3 (2015), http://pcs.la.psu.edu/publications-and-research/research-and-evaluation-reports/risk-assessment/phase-ii-reports/Interim-Rpt-1-Phase-2/view.

  172. [172]. Latessa et al., supra note 136, at 6.

  173. [173]. See id. at 7.

  174. [174]. See generally id.

  175. [175]. Id. at 10.

  176. [176]. Id. at 8–10.

  177. [177]. Citron, supra note 162, at 1251.

  178. [178]. Levine, supra note 132, at 67.

  179. [179]. See id. at 68 (“Th[e] reality is a force opposing the trend of transparency at the state level because some traditional operating principles of government, like transparency and accountability, conflict with those of the private sector, like maintaining commercial secrecy for competitive advantage.”).

  180. [180]. See Hoffman v. Commonwealth, 455 A.2d 731, 733 (Pa. Commw. Ct. 1983) (“[T]rade secret contention ceases to be of any moment when the function is recognized as governmental, rather than that of a private business ... .”).

  181. [181]. Levine, supra note 132, at 100; see also Angwin et al., supra note 29 (“‘Risk assessments should be impermissible unless both parties get to see all the data that go into them,’ said Christopher Slobogin, director of the criminal justice program at Vanderbilt Law School. ‘It should be an open, full-court adversarial proceeding.’”).

*	J.D. Candidate, The University of Iowa College of Law, 2018; B.A., The University of Iowa, 2013. I would like to thank Professor Sarah Seo for encouraging me to write on this topic. Thank you also to the members of the Iowa Law Review for their hard work during the editing process, especially Courtney Brokloff, Nicholas Huffmon, and Lindsay Moulton. Finally, a special thanks to Rich and Mary Jo Parrino, who have been a tremendous source of advice and support throughout law school.