Response

The Future of Empirical Legal Studies: Observations on Holte & Sichelman’s Cycles of Obviousness

I.     Introduction

Over the last five years, the Iowa Law Review has published dozens of empirical legal studies.1 These studies have been a valuable contribution to the literature, adding data to help assess the validity of legal theories and hypotheses. Intellectual property law, in particular, has been an especially active growth area for empirical legal studies,2 and the Iowa Law Review—long one of the leaders in intellectual property law scholarship generally—has been at the cutting edge of empirical studies of intellectual property law.3

Ryan Holte and Ted Sichelman’s recent Cycles of Obviousness,4 the first in a planned series of articles on patent law’s obviousness requirement, is part of this development. Their study reflects many of the current norms in empirical intellectual property law studies: intensive case collection, systematic coding, and data availability for those seeking to replicate their study. Yet, while these norms reflect progress in the quality of empirical intellectual property law studies, there remain norms and practices in the field that can continue to improve. This improvement is vital: Empirical legal studies generally have been subjected to widespread criticism,5 and while there are deep theoretical and normative debates about their value, the reality is that they provide an important component of the landscape of knowledge about the law.6 In this context, there is room for all of us to grow.

This is not the first piece in a law journal to address ways in which norms and practices in empirical legal studies can improve. Almost twenty years ago, Professors Lee Epstein and Gary King published their provocative assault on the state of empirical legal studies.7 They fiercely targeted some of the most prominent studies of the time, and while some of their most direct critiques were ferociously contested in responses,8 their identification of fundamental methodological problems in empirical legal studies generally was critical for the development of the field.

Despite the waves that those articles made, and the progress of the field over the last 18 years, issues that Epstein and King identified persist in empirical legal studies.9 These issues extend beyond errors in the use of statistical tools or questions about the validity of ideological versus attitudinal models of judicial decisionmaking10 to also encompass topics such as the sparsity of methodological descriptions, uncertainty over data reliability, and disclosure practices—things that are sometimes viewed as mundane and boring.11

This Essay argues that a critical way to increase the credibility—and thus the value—of empirical legal studies published in the Iowa Law Review is to focus first on these seemingly mundane issues, emphasizing sound data acquisition descriptions, data reliability assessment, and data transparency. In other words, focus on the data.12 At the end of the day, if the data itself is clear, well described, and publicly accessible, it heightens the quality—and thus the credibility—of the study. This is what gives empirical work its core value.13 Indeed, a hypervigilant focus on whether a regression was done correctly may overlook whether the data being fed into the model was any good in the first place.14

In service of this goal, this Essay identifies some of the excellent methodological practices of Holte and Sichelman’s Cycles of Obviousness and some of the areas in which it reflects norms within the field that could be improved. Fortunately, the issues that I identify in this Essay are areas that can be addressed in future work. This Essay concludes with a few practical, easy-to-implement recommendations for both the empirical legal studies published in the Iowa Law Review and the journal itself.

II.     Background

Ryan Holte and Ted Sichelman’s recent Cycles of Obviousness is, at its heart, an empirical legal study of the patent law doctrine of nonobviousness.15 Following in the footsteps of other scholars who have also used empirical methods to study nonobviousness, such as Banks Miller and Brett Curry,16 Lee Petherbridge and R. Polk Wagner,17 Glynn S. Lunney, Jr. and Christian T. Johnson,18 Jennifer Nock and Sreekar Gadde,19 Ali Mojibi,20 and myself,21 Cycles of Obviousness aims to examine all substantive obviousness determinations by both district courts and the United States Court of Appeals for the Federal Circuit between January 1, 2003, and December 31, 2013.22 The study involved coding both outcome metrics, such as whether the court concluded that the patent was obvious or nonobvious, and elements of legal reasoning, such as whether the court used the teaching-suggestion-motivation test at issue in the Supreme Court’s KSR v. Teleflex decision.23 Ultimately, the authors reported results that are consistent with past studies of nonobviousness while adding additional detail—especially on the courts’ reasoning and district court behavior.24 The study provides a valuable addition to the literature on the nonobviousness doctrine before and after the Supreme Court’s opinion in KSR v. Teleflex.25

Cycles of Obviousness reflects many of the best aspects of current norms in empirical legal studies. At the same time, there are areas in which those norms could be improved. With this frame in mind, this Essay offers a few observations on Cycles of Obviousness as it relates to the areas of data collection, reliability, and publication, with the goal of identifying ways in which the quality of other empirical studies published in the Iowa Law Review can be improved.26

III.     Data Collection

The selection of which observations to include in a study is a fundamental—and sometimes determinative—step. As Mark Hall and Ronald Wright note, “[a]n empirical researcher must first decide which cases to select and sample.”27 This creates the study frame, and thus affects what inferences can be drawn from the data. Selection of only appellate decisions, for example, limits the observations that one can draw based on the way that appellate decisions come to be.28 The determinations of what to include—and what not to include—shape the very data from which observations will be made. In addition, once the study frame is selected, there can still be challenges of quantity: Is it possible to study the entire population of that frame or is it necessary to sample?

Two of the good design choices of Cycles of Obviousness are to focus on both district court and appellate decisions of nonobviousness and to attempt to collect the entire population of decisions. Studying both district court decisions and appellate decisions is good, in that it provides for a more complete study frame. In addition, as many empirical legal scholars have noted, studying the entire population is ideal—and sometimes, as in studies of appellate decisions, it is even possible.29 The authors’ stated goal is to collect and review all decisions involving actual obviousness determinations at both the district courts and Federal Circuit, and they reference methods to achieve that: They describe “[u]sing a variety of search techniques, and [] collecting raw data from authors of several previous studies on nonobviousness.”30 They describe the sources that they searched.31 They reference a search methodology from a prior study.32 They also include Rule 36 summary affirmances33—a critical element of any study of doctrinal outcomes at the Federal Circuit.34 All of this is good: The authors appear to have begun with a wide funnel of potential decisions. Once all those possible records have been gathered, the next step is to decide which ones to actually include in the dataset. Typically, this involves some form of human review, as Holte and Sichelman performed.35 As with developing the initial set (the top of the funnel), the cases selected at this next stage (the bottom of the funnel) matter: These cases, after all, will make up the data that are analyzed in the study. In all, the authors provide about two pages of detail, including footnotes, describing their data collection methodology.36

The amount of detail that Holte and Sichelman provide about their data selection is consistent with many other empirical intellectual property law studies, and in part may reflect the weight that law reviews place on methodological descriptions versus theoretical discussions.37 For example, prior empirical studies of obviousness generally contain about two pages of discussion of case selection methods and data coding processes.38 These studies typically describe the databases that were searched and the search terms that were used, and they include some reference to the touchstone for determining whether a case involves an analysis of obviousness.39

Yet, while reflective of the amount of methodological detail in other studies, this is an area where we as empirical legal researchers can do better. While the nature of the search is outlined in general terms, specifics on how the search was conducted are lacking. The authors collected data from the authors of other studies and from several websites, but search parameters are not identified, nor is there much detail about how these data sources fit together to produce the initial “700 . . . decisions that plausibly raised obviousness issues.”40 There is a reference to methods used in Rantanen (2013),41 but that study did not include district court decisions,42 which are especially difficult to identify.43 Cycles of Obviousness also provides little detail on the criteria used for the human determination of which cases to include. The authors report narrowing down the initial 700 decisions to “319 district court opinions and 192 Federal Circuit opinions.”44 The only criterion described for the final set of cases was that they “had actual obviousness determinations.”45

Lack of detail about case selection raises several issues. First, it reduces the replicability of the study, and thus the degree to which others can rely on it. In The Rules of Inference, Epstein and King observe that “[a] major source of unreliability in measurement is vagueness: if researchers cannot replicate a measure it is probably because the original study did not adequately describe it.”46 A study with inadequate information about how the data were selected has a gap when it comes to replication. Someone seeking to replicate the study from the methodological description provided in the article would have to fill in gaps, perhaps making the same decisions as Holte and Sichelman; perhaps not. We don’t know, for example, whether the authors had a particular meaning of “actual obviousness determinations” in mind or whether this was the only criterion used in selecting the final set of cases for analysis.47 These choices can also be more difficult than they initially appear. As Epstein and King demonstrate, even the decision about whether an appellate court affirms or reverses a district court is more complex than just “affirm” or “reverse.”48

In addition, describing the methodology in detail helps to show that the authors followed a process. Systematic, step-by-step practices are important in empirical legal studies—especially at the data collection stage, as choices made at this point dictate the entire contents of the dataset. Care taken in describing the methodology can serve as an indicator of care in the process itself.

Finally, describing the case selection methodology matters for interpretation of the results because decisions made during selection can systematically affect the data and inferences drawn from the data. Consider, for example, the criterion of “actual obviousness determinations.”49 Does this mean that the appellate court made a substantive determination on the issue of obviousness?50 Or does it mean that the appellate court decided that a patent was either obvious or not?51 The consequence of the former would be to include decisions in which the Federal Circuit vacated—as opposed to outright reversed—the district court determination of obviousness or nonobviousness; the consequence of the latter would be to exclude decisions to vacate.52 A definition that resulted in the systematic exclusion of decisions to vacate would cause the reported affirmance rate to be higher: Rather than including all decisions to affirm, vacate, and reverse in the denominator, it would only include decisions to affirm or reverse.53 It could also affect the conclusions drawn from the data. If, for example, the rate at which the Federal Circuit vacated district court decisions changed after KSR, that would affect the affirmance rate—even if the proportion of affirmances to outright reversals remained the same. The consequence is that the authors’ conclusion that the affirmance rate for district court determinations of “obviousness” stayed about the same while its affirmance rate for determinations of “nonobviousness” fell following KSR54 may or may not be reflective of the data—the answer is that it depends.55
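To make the arithmetic concrete, consider a minimal sketch with purely hypothetical counts; the numbers below are invented for illustration and are not drawn from the authors’ dataset.

    # Hypothetical appellate outcomes (illustrative only; not the Cycles of Obviousness data)
    affirmed, outright_reversed, vacated = 70, 20, 10

    # Affirmance rate when decisions to vacate are excluded from the denominator
    rate_excluding_vacates = affirmed / (affirmed + outright_reversed)            # 70/90  ≈ 0.78

    # Affirmance rate when decisions to vacate are included in the denominator
    rate_including_vacates = affirmed / (affirmed + outright_reversed + vacated)  # 70/100 = 0.70

    print(rate_excluding_vacates, rate_including_vacates)

The same set of affirmances yields a meaningfully different reported affirmance rate depending solely on how the denominator is defined.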

All of this is to say that data collection methodologies, although perhaps not really all that exciting for most readers, are an area where current norms in empirical intellectual property law studies can continue to develop. In part, this may require a shift in the weight that law reviews place on methodological descriptions versus theoretical discussions. On our end, we as study authors should aim to make our data selection processes as clear as possible, with meaningful detail about key decisions. This may involve, for example, publishing an appendix with a detailed methodology, if the methodological description is lengthy or does not fit into the style of the article itself.56 Doing so will improve the quality and credibility of our studies and raise the level of the field. It is also an area that Holte and Sichelman can further refine in future articles.

In addition to providing a clear and detailed description of data collection methodologies, there are also methods to directly improve data collection and reliability. One technique is to use a methodological checklist or framework when designing and describing the collection process. An example is the PRISMA methodology for systematic reviews, which provides a series of stages to walk through when building the set for the review.57 This methodology could readily be adapted to studies of judicial decisions.58 Ultimately, the overall goal with any methodology should be to improve the replicability and transparency of the empirical study.
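As a rough illustration of how a PRISMA-style flow could be adapted to report a case-selection funnel, the sketch below records the number of decisions remaining at each stage. The stage labels and the exclusion figure are hypothetical; only the roughly 700 initial decisions and 511 final opinions echo totals reported in Cycles of Obviousness.

    # Hypothetical PRISMA-style selection log for a study of judicial decisions.
    # Stage labels and the exclusion count are illustrative; only the ~700 and 511
    # totals echo figures reported in Cycles of Obviousness.
    selection_log = [
        ("decisions identified via database searches and prior authors' datasets", 700),
        ("decisions screened by human review", 700),
        ("decisions excluded (no actual obviousness determination)", 189),
        ("decisions included in the final dataset (district court + Federal Circuit)", 511),
    ]

    for stage, count in selection_log:
        print(f"{stage}: {count}")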

IV.     Data Reliability

A second element of a high-quality study involves the reliability of the data itself. Cycles of Obviousness describes extensive quality control steps that the authors took: They “provided an intensive training instruction period including multiple training sessions reviewing the coding process,” they “conducted on-going review of the case coding in order to ensure the accuracy of [their] data,” and they “performed multiple rounds of quality checking during this process.”59 All of this was intended to produce as accurate a dataset as possible.

Accuracy is extremely important—but it does not necessarily confer data reliability. Epstein and King describe “a measure [as] reliable when it produces the same results repeatedly regardless of who or what is actually doing the measuring.”60 In other words, if we weighed a potato on a kitchen scale, and each time we got the same weight for the potato, we could say that the scale’s measurement is reliable.61 The question then becomes: How does the reader know that the methods were reliable? And equally important, is it possible for someone else to replicate the results?62

In their methodological guide, Epstein and King note that “[t]he key to producing reliable measures is to write down a set of very precise rules for the coders . . . . This list should be made even if the investigator codes the data him- or herself, since without it others would not be able to replicate the research (and the measure).”63 A common practice is to put the coding instructions into a codebook—a written set of instructions that coders can follow.64

Consistent with this best practice, Holte and Sichelman created a codebook that coders could refer to when recording data about particular aspects of the courts’ decisions.65 Their codebook, available on SSRN as of early November 2020,66 contains instructions for coding variables, including both the coding categories (an important mechanism for achieving replicable outcomes67) and notes on when particular categories apply.68 Having a codebook that is shared publicly reflects the best of current norms and is a practice that should be expected in future empirical legal studies published in the Iowa Law Review.

However, having a system doesn’t necessarily mean that observations are reliable. Even if coding instructions are detailed and clear, the indeterminacy of the data itself can still produce outcomes that are inconsistent. The entire patent law jurisprudence on claim construction, for example, is an attempt to make something that reasonable minds disagree upon more determinate—and as numerous authors have shown, historically there’s been a lot of disagreement about the meaning of claim terms even when really smart patent law experts are the ones making decisions.69 Or to give a more classic example, is it a rabbit or a duck?70

Because coding instructions can be vague and data subjective or otherwise indeterminate, it’s important to assess how reliable the data actually are. If a type of data isn’t reliable because different coders might interpret the same thing differently—even when given the same instructions—the credibility of inferences drawn from the data suffers greatly.71 This is necessary even if experts such as Professors Holte and Sichelman are the ones doing the coding.72

While Cycles of Obviousness indicates that a majority of the cases were coded by multiple coders and their coding compared, it doesn’t report any intercoder reliability measures.73 This limits the inferences that can be drawn about the data, and it raises the question: How reliable is the data? This matters because different types of information can have different reliability. For example, based on intercoder reliability data from prior studies, the case outcomes reported in Cycles of Obviousness are probably highly reliable while determinations about the reasoning employed by the court, such as whether the court used some form of the teaching-suggestion-motivation test, are probably less reliable.74 We know even less about the reliability of the extensive list of factors that the authors used for their analysis of judicial reasoning. A measure of reliability would provide information on whether the data are reliable or not, and thus on how much weight to give them.

Formal assessments of coding need not be complex. For example, one method is to have two coders independently code a sample of cases and then compare the results using a simple percentage agreement; a more sophisticated approach is to use a measure such as Cohen’s kappa, which corrects for the agreement expected by chance.75 The end goal is to give readers a sense of data variability for a given coded parameter. A low agreement score can indicate a need to improve the coding instructions; it can also tell us how much indeterminacy the data possess, and thus how cautious we should be about inferences drawn from them.
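For readers unfamiliar with these measures, the sketch below shows how percentage agreement and Cohen’s kappa might be computed for a single coded variable. The coder data are hypothetical, and the kappa calculation assumes the scikit-learn library is available.

    # Hypothetical codes from two independent coders for ten decisions
    # (1 = the court used some form of the TSM test, 0 = it did not)
    from sklearn.metrics import cohen_kappa_score

    coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

    # Simple percentage agreement: the share of decisions coded identically
    agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

    # Cohen's kappa discounts the agreement that would be expected by chance alone
    kappa = cohen_kappa_score(coder_a, coder_b)

    print(f"percent agreement: {agreement:.2f}; Cohen's kappa: {kappa:.2f}")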

V.     Data Publication

Public access to data is central to the reliability, credibility, and value of an empirical legal study. Without the data, others can’t test the authors’ results, assess the degree to which the study methodology is replicable, or test the authors’ analyses of the data.76 This, in turn, feeds into study credibility: The ability to examine and test study data is, in large part, what makes a study believable.77 And without the data, others cannot build on what the study authors have done; instead, they must recreate the dataset version of the wheel. In a worst-case scenario, a lack of public archiving may lead to the data being lost forever and the thousands of hours of work vanishing.78 Unsurprisingly, in many fields publication of data has been an expected practice, especially when the data are not proprietary and do not contain personal information.79

In terms of data disclosure, Cycles of Obviousness is as good as or better than many of the other empirical intellectual property law studies. Consider, for example, James Bessen and Michael J. Meurer’s provocative article The Direct Costs from NPE Disputes, a study that relied on proprietary data sources for its analyses and conclusions.80 As David Schwartz and Jay Kesan discussed at length in a subsequent essay, because the data underlying that study were not available, they could not assess data quality or methods.81 Their request to the authors about their data on publicly traded non-practicing entities received no response.82 Indeed, Schwartz and Kesan note that even Bessen and Meurer did not have access to some of the underlying data.83

In contrast, the authors of Cycles of Obviousness provided me with their data and codebook when I requested them for purposes of a replication study that I am writing separately, and they provided them to another group that requested them as well.84 They also informed me that they provided it to the Iowa Law Review with the understanding that it could be given to researchers seeking to verify it.85 They have also told me that their data will be published once the series of articles they are writing is complete.86 All of this is good, and is consistent with norms of making data available on request.87

Given this voluntary disclosure of their data, the question becomes: Is it enough? Or should the authors of empirical legal studies be expected to do more—to publish the data underlying the study on a site that is accessible to all at the time the article is published? While some studies have published their data,88 it is still common in the field for authors to not affirmatively publish or publicly archive their data.89 Nor do the policies on the Iowa Law Review’s website say anything about data disclosure and publication for empirical studies.90

The time is right for this practice to change. At its heart, an empirical legal study’s primary evidentiary foundation is its dataset: the source that the article relies on to give it persuasive weight. As Epstein and King observed in 2002, a lack of data publication is strangely inconsistent with law reviews’ general approach, which is to place a high value on the documentation of evidence supporting every other sentence in an article.91

The lack of a general practice of data publication also limits the extent to which studies relying on confidential, proprietary data stand out. If the norm is not to publish data, it becomes harder to identify the studies in which the data are intentionally withheld (for example, because they are proprietary) and to give appropriate weight to their findings. Those studies simply blend into the wallpaper as just more studies in which the data were not published.

There may be reasons why the authors of empirical legal studies don’t want to put their data in a public archive. One may be that, frankly, it’s terrifying. When I write a theoretical article, someone might disagree with my reasoning. But when I publish an empirical dataset, my mistake is indelibly stamped: If I’ve made an error, it’s there for all to see. Michael Heise describes this as “[e]xposure to [f]alsification [t]hrough [r]eplication.”92 Or put another way, “numbers provide less shelter than words.”93 Yet, it’s important for those of us who do empirical legal research to face our fears and publish our datasets. The reality is, we will make mistakes.94 But the field as a whole cannot grow if we don’t publish our data.95 Sure, someone else may use it in their own study—although really, that’s half the point of publicly releasing the data. The potential for falsification by open assessment is what ultimately gives empirical legal research much of its weight. Because what we say about the data in empirical studies can be tested and either confirmed or rejected, it has a basis for trust that a law professor’s assertion of an unfalsifiable statement does not.96 And central to that falsifiability is public access to data. In the social sciences, for example, adequate disclosure of underlying research can be particularly important for credibility.97

Another argument against data publication may be that it is too difficult. A decade ago, that argument might have held some weight. Today, however, there are numerous public archives for data, such as the Harvard Dataverse and the Open Science Framework.98 Many of these options are free of charge for datasets of the size typically involved in empirical legal studies.99

A third argument may flow from the confidentiality of the dataset, either because it contains personally identifiable information (such as from surveys) or because it is proprietary data that was obtained under the terms of a non-disclosure agreement. For the former, anonymity may be necessary, and institutional review boards can help preserve human subject privacy. For the latter, the proprietary nature of the data may reduce the weight of the study, or at least place the burden on the authors to establish its credibility.100

While authors can be willing to share their data with others who ask, as Holte and Sichelman did, in my view true data publication doesn’t mean just making the data available on request. That imposes an unnecessary hurdle to replication and places too much power in the hands of the study authors to decide whether or not to disclose once a request is made. It also places the recipient of the data in a bind: By not publishing the dataset themselves, are the authors implying that it shouldn’t be published by the recipient either? It also tends to incentivize laziness: Why create a final, clean version of the dataset today when you don’t need to do it until (and unless) someone asks for it? Holte and Sichelman shared with me that they have not published the dataset because they will use it for other studies in the future.101 But the fact that authors may have strategic reasons for not disclosing—or even that this practice is a norm in some fields—does not diminish the issues posed by non-publication of data. In the end, the expectation should be that the data underlying empirical legal studies are published. Exceptions will exist; but they should be clearly delineated as an exception.102

VI.     Best Practices Going Forward

Scholars have proposed a variety of ways to remedy issues with empirical legal studies.103 Many of these recommendations are pitched at a high level of generality, offering ideas such as developing peer-review mechanisms for law reviews or aligning incentives by creating ranking systems.104 Others are more specific, such as the use of badges and study preregistration,105 but require an infrastructure that doesn’t yet exist.

Instead of suggesting general recommendations, then, I will offer three specific, concrete and practical things that the Iowa Law Review, in particular, could do to improve the quality of the empirical legal studies that it publishes. These do not come from the void; they draw on the recommendations of other sources cited in this Essay. They also provide a way for the Iowa Law Review to be a pioneer in this space: With a small number of exceptions, top law journals have not yet embraced credibility-enhancing practices.106

First, the Iowa Law Review should publicly adopt a policy that requires authors of empirical legal studies to submit their data at the time the article is submitted and to publish the data on which the study is based and the accompanying codebook at the time the article is published by the journal.107 For small-scale studies, this might take the form of tables in an appendix to the article, but for most studies this will necessitate publication in a data repository. For studies with larger amounts of data, there are a host of well-established data repositories.108

Some top law journals have already adopted a data archiving policy for empirical studies.109 For example, the Northwestern Law Review states that “the acceptance of any empirical work will be contingent upon the author’s documentation and archival of all datasets in a manner sufficient to allow third parties to replicate the published findings. These datasets will be posted in a publicly available space, such as the Law Review’s website.”110 Requiring publication of the underlying data is a step that the Iowa Law Review could take today, without investing any additional resources, and it would produce immediate dividends in study transparency and quality.

Second, the Iowa Law Review should create its own repository for data.111 As a journal that is over a century old, it has the institutional weight to create a permanent archive. More importantly, it makes sense to archive the data along with the article itself. This would be in addition to other places where the data might be archived. The Stanford Law Review, Northwestern Law Review, and New York University Law Review all have policies requiring publication of data on the journals’ websites.112

Ideally, the webpage hosting the article should include a link to the dataset and codebook. More detailed methodological information can be included as a downloadable appendix as well.113

Third, the Iowa Law Review should consider creating a methodological review board and implementing a mechanism for conditional acceptance of empirical studies. Recommendations for peer review of law review submissions are common in the literature, and some journals have already begun seeking peer reviews.114 However, peer review comes with its own problems and challenges. In addition, it can be difficult to find professors who can serve as peer reviewers, especially on the short turnarounds that are necessary in the competitive law journal market.

Rather than develop a general system of peer reviews, the Iowa Law Review could instead consider developing an empirical advisory board with expertise in empirical studies that could provide input on the methodological aspects of empirical legal studies submissions to the journal.115 The input of the board would be limited to empirical methodology, and the board itself could be composed of both legal scholars and scholars from other disciplines. For example, at the University of Iowa, there are multiple faculty members with relevant social science PhDs in the College of Law and in other departments. The expertise of these faculty on the methodological aspects of empirical legal studies could be brought to bear to improve the methodological quality of published studies.

In addition, rather than deciding directly to accept or reject an empirical study, the journal could consider a system that is common in other fields: the conditional acceptance.116 In a conditional acceptance, the article is accepted for publication—provided that the authors make the changes suggested by the editors. While this practice may be inappropriate for other types of articles—such as traditional doctrinal, theoretical, or normative pieces—it is especially well-suited for empirical studies where a methodological issue could be corrected by the study authors. Along those lines, the empirical advisory board would be able to identify any issues that needed to be addressed before the journal made an offer of publication. The end goal would be higher quality empirical studies—that is, studies published by the journal that addressed methodological issues up front.

***

In the end, Cycles of Obviousness is a valuable addition to the empirical literature on the nonobviousness doctrine that reflects both good norms of empirical research and those that can still be improved. Once the authors ultimately publish their data, it will be an even greater contribution. As Michael Heise and Lawrence Friedman observed, “Empirical research is hard work.”117 Hopefully, these modest steps will help to improve the quality of the empirical legal studies of the future.

  1. [1]. Empirical legal studies published in Volumes 100–105 of the Iowa Law Review range from Natalya Shnitser, Funding Discipline for U.S. Public Pension Plans: An Empirical Analysis of Institutional Design, 100 Iowa L. Rev. 663 (2015) to Carissa Byrne Hessick & Michael Morse, Picking Prosecutors, 105 Iowa L. Rev. 1537 (2020). To find these studies, my research team searched the text of articles published in the Iowa Law Review for “empirical” and then reviewed the articles to see whether they contained an original empirical legal study. The list of empirical studies that we identified is available at Studies and Data, Fed. Cir. Data Project, https://empirical.law.uiowa.edu/studies-and-data [https://perma.cc/VG4Y-869D].

  2. [2]. To see the wealth of empirical legal studies in intellectual property law, one need only crack open 2 Research Handbook on the Economics of Intellectual Property Law: Analytical Methods (Peter S. Menell & David L. Schwartz eds., 2019), a 675-page tome with chapters covering numerous studies of courts, patents, and more.

  3. [3]. Among the recent empirical legal studies relating to intellectual property law published in the Iowa Law Review are Michael D. Frakes & Melissa F. Wasserman, Patent Trial and Appeal Board’s Consistency-Enhancing Function, 104 Iowa L. Rev. 2417 (2019); Paul R. Gugliuzza, Elite Patent Law, 104 Iowa L. Rev. 2481 (2019); Saurabh Vishnubhakat, Renewed Efficiency in Administrative Patent Revocation, 104 Iowa L. Rev. 2643 (2019); Stephen Yelderman, Prior Art in Inter Partes Review, 104 Iowa L. Rev. 2705 (2019); Derek E. Bambauer, Paths or Fences: Patents, Copyrights, and the Constitution, 104 Iowa L. Rev. 1017 (2019); Matthew B. Kugler & Thomas H. Rousse, The Privacy Hierarchy: Trade Secret and Fourth Amendment Expectations, 104 Iowa L. Rev. 1223 (2019); and Janet Freilich, Patent Clutter, 103 Iowa L. Rev. 925 (2018). And these are just a few of them!

  4. [4]. Ryan T. Holte & Ted Sichelman, Cycles of Obviousness, 105 Iowa L. Rev. 107 (2019).

  5. [5]. See, e.g., Elizabeth Chambliss, When Do Facts Persuade? Some Thoughts on the Market for “Empirical Legal Studies”, Law & Contemp. Probs., Spring 2008, at 17, 25–31 (summarizing critiques of empirical legal studies).

  6. [6]. See Theodore Eisenberg, The Origins, Nature, and Promise of Empirical Legal Studies and a Response to Concerns, 2011 U. Ill. L. Rev. 1713, 1722–37 (describing the impact of empirical legal studies).

  7. [7]. See generally Lee Epstein & Gary King, The Rules of Inference, 69 U. Chi. L. Rev. 1 (2002) (providing methods to improve empirical legal research studies).

  8. [8]. See generally Frank Cross, Michael Heise & Gregory C. Sisk, Above the Rules: A Response to Epstein and King, 69 U. Chi. L. Rev. 135 (2002) (critiquing Epstein and King’s attack in The Rules of Inference); Jack Goldsmith & Adrian Vermeule, Empirical Methodology and Legal Scholarship, 69 U. Chi. L. Rev. 153 (2002) (same); Richard L. Revesz, A Defense of Empirical Legal Scholarship, 69 U. Chi. L. Rev. 169 (2002) (same).

  9. [9]. See, e.g., Kathryn Zeiler, The Future of Empirical Legal Scholarship: Where Might We Go from Here?, 66 J. Legal Educ. 78, 81–86 (2016).

  10. [10]. See Epstein & King, supra note 7, at 25–29 (critiquing statistical methodologies); Harry T. Edwards & Michael A. Livermore, Pitfalls of Empirical Studies that Attempt to Understand the Factors Affecting Appellate Decisionmaking, 58 Duke L.J. 1895, 1913–22 (2009) (critiquing the validity of the attitudinal model of judicial decisionmaking).

  11. [11]. Cf. Lee Epstein & Andrew D. Martin, Quantitative Approaches to Empirical Legal Research, in The Oxford Handbook of Empirical Legal Research 901, 911 (Peter Cane & Herbert M. Kritzer eds., 2010) (“[D]espite the common and fundamental role it plays in research, coding typically receives only the briefest mention in most volumes on empirical research; it has received almost no attention in empirical legal studies.”).

  12. [12]. See Jason Rantanen, Empirical Analyses of Judicial Opinions: Methodology, Metrics, and the Federal Circuit, 49 Conn. L. Rev. 227, 281–82 (2016) [hereinafter Rantanen, Methodology & Metrics].

  13. [13]. See Michael Heise, The Importance of Being Empirical, 26 Pepp. L. Rev. 807, 818 (1999).

  14. [14]. See Michele Landis Dauber, The Big Muddy, 57 Stan. L. Rev. 1899, 1912 (2005).

  15. [15]. See generally Holte & Sichelman, supra note 4.

  16. [16]. See generally Banks Miller & Brett Curry, Expertise, Experience, and Ideology on Specialized Courts: The Case of the Court of Appeals for the Federal Circuit, 43 Law & Soc’y Rev. 839 (2009) (analyzing the impact of experience and ideology on judicial decisionmaking through an empirical study of obviousness patent cases).

  17. [17]. See generally Lee Petherbridge & R. Polk Wagner, The Federal Circuit and Patentability: An Empirical Assessment of the Law of Obviousness, 85 Tex. L. Rev. 2051 (2007) (empirically studying the Federal Circuit’s doctrine of obviousness).

  18. [18]. See generally Glynn S. Lunney, Jr. & Christian T. Johnson, Not So Obvious After All: Patent Law’s Nonobviousness Requirement, KSR, and the Fear of Hindsight Bias, 47 Ga. L. Rev. 41 (2012) (analyzing the role of hindsight in obviousness determinations).

  19. [19]. See generally Jennifer Nock & Sreekar Gadde, Raising the Bar for Nonobviousness: An Empirical Study of Federal Circuit Case Law Following KSR, 20 Fed. Cir. Bar J. 369 (2011) (providing “an empirical study of all Federal Circuit obviousness decisions in the two and a half years following the KSR decision”).

  20. [20]. See generally Ali Mojibi, An Empirical Study of the Effect of KSR v. Teleflex on the Federal Circuit’s Patent Validity Jurisprudence, 20 Alb. L.J. Sci. & Tech. 559 (2010) (analyzing the effect of KSR v. Teleflex on the law of obviousness).

  21. [21]. See generally Jason Rantanen, The Federal Circuit’s New Obviousness Jurisprudence: An Empirical Study, 16 Stan. Tech. L. Rev. 709 (2013) [hereinafter Rantanen, New Obviousness Jurisprudence] (analyzing the effect of KSR on the Federal Circuit’s obviousness jurisprudence through an empirical study of Federal Circuit case law pre- and post-KSR).

  22. [22]. Holte & Sichelman, supra note 4, at 136.

  23. [23]. Id. at 137–38.

  24. [24]. Id. at 135–61.

  25. [25]. KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398 (2007).

  26. [26]. Cycles of Obviousness is hardly alone. One need only look at other studies published in the Iowa Law Review to find similar issues.

  27. [27]. Mark A. Hall & Ronald F. Wright, Systematic Content Analysis of Judicial Opinions, 96 Calif. L. Rev. 63, 101 (2008).

  28. [28]. See, e.g., John R. Allison, Mark A. Lemley & David L. Schwartz, Understanding the Realities of Modern Patent Litigation, 92 Tex. L. Rev. 1769, 1777 (2014). As Hall & Wright observe, at each layer in the appellate process, “a variety of factors potentially distort what one stage can reveal about the other. These biases can fundamentally threaten the ability to generalize or the validity of a study’s findings.” Hall & Wright, supra note 27, at 104 (footnote omitted).

  29. [29]. See, e.g., Epstein & King, supra note 7, at 99–103.

  30. [30]. Holte & Sichelman, supra note 4, at 136 (footnote omitted).

  31. [31]. See id. at 136–37. A minor critique is that while Holte & Sichelman refer to datasets provided by other authors, they don’t identify who those authors were. See id.

  32. [32]. See id. at 136 n.219.

  33. [33]. Id. at 136.

  34. [34]. See, e.g., Kimberly A. Moore, Markman Eight Years Later: Is Claim Construction More Predictable?, 9 Lewis & Clark L. Rev. 231, 234–35 (2005).

  35. [35]. Holte & Sichelman, supra note 4, at 137.

  36. [36]. See id. at 136–37.

  37. [37]. Compare id. at 136–37 (detailing their case selection methodology), with Petherbridge & Wagner, supra note 17, at 2070–72 (same), Nock & Gadde, supra note 19, at 386–87 (same), and Mojibi, supra note 20, at 575–76 (same).

  38. [38]. See, e.g., Mojibi, supra note 20, at 575–77; Petherbridge & Wagner, supra note 17, at 2071–74; Nock & Gadde, supra note 19, at 386–89. Rantanen, New Obviousness Jurisprudence, supra note 21, at 726–30, contains a detailed description of collection methods and design choices with additional detail provided in the accompanying codebook.

  39. [39]. See Holte & Sichelman, supra note 4, at 136; Mojibi, supra note 20, at 575–77.

  40. [40]. See Holte & Sichelman, supra note 4, at 136.

  41. [41]. Id. at 136 n.219.

  42. [42]. See Rantanen, New Obviousness Jurisprudence, supra note 21, at 726–29.

  43. [43]. See Allison et al., supra note 28, at 1771–72 (describing issues in collecting district court decisions); John R. Allison & Mark A. Lemley, Empirical Evidence on the Validity of Litigated Patents, 26 AIPLA Q.J. 185, 194–97 (1998). Because the universe of district court decisions is larger and often decisions are not published, more detail is necessary to determine what the authors have included and excluded from the study than when the study just focuses on appellate decisions. It can also be very costly to collect data on district court decisions. See Adam R. Pah, David L. Schwartz, Sarath Sanga, Zachary D. Clopton, Peter DiCola, Rachel Davis Mersey, Charlotte S. Alexander, Kristian J. Hammond & Luís A. Nunes Amaral, How to Build a More Open Justice System, Science, July 10, 2020, at 134, 134–35. None of this is unique to Cycles of Obviousness—it’s true of almost every study of district court litigation.

  44. [44]. See Holte & Sichelman, supra note 4, at 136.

  45. [45]. Id. A review of the codebook for Cycles of Obviousness did not give any additional information about the human review criteria.

  46. [46]. Epstein & King, supra note 7, at 83.

  47. [47]. See id. at 85–86. Epstein and King argue that proxies alone—such as researcher reputation—aren’t enough: The study should stand on its own. See id. at 34.

  48. [48]. Id. at 84–86. My own experience is that articulating reliable criteria for whether or not to include a decision for purposes of a study on obviousness is quite challenging; for example, the Codebook for The Federal Circuit’s New Obviousness Jurisprudence included several pages of description of the choices made in this determination. See Jason Rantanen, Codebook for Empirical Study of Federal Circuit Obviousness Jurisprudence (2013), https://empirical.law.uiowa.edu/sites/empirical.law.uiowa.edu/files/wysiwyg_uploads/Obviousness%20Codebook%20Final%202013-07-05.pdf [https://perma.cc/7M8F-4NQ6] [hereinafter Rantanen Codebook].

  49. [49]. See Holte & Sichelman, supra note 4, at 136.

  50. [50]. See id. at 145 (referencing “substantive Federal Circuit obviousness decisions”).

  51. [51]. Other questions include how to address obviousness-type double-patenting, obviousness determinations in the context of an interference, instances in which obviousness was decided by the district court but not appealed, instances in which obviousness was appealed but the appellate court affirmed on an alternate ground such as anticipation, and other borderline questions. See Rantanen Codebook, supra note 48.

  52. [52]. While there is ambiguity in the description of the methodology, my conclusion is that the authors did not include decisions to vacate because they do not report any decisions in which the Federal Circuit outcome was “No Final Determination” in Figure 5 and reference only a de minimis number of vacates. See Holte & Sichelman, supra note 4, at 141–46. This is despite the article referencing decisions vacating the district court, see id. at 146 n.252, and two decisions coded as “Vacated” in the dataset they provided to me (numbers 9 and 136). In contrast, as shown in Rantanen (2013), there were actually a non-trivial number of these decisions. See Rantanen, New Obviousness Jurisprudence, supra note 21, at 741 tbl.3 (reporting 15% decisions to vacate pre-KSR and 9% post-KSR for appeals arising from the district court and ITC). There were only six analyses in total in Rantanen (2013) that arose from the ITC. See id. at 733 n.110. The authors subsequently provided information to confirm that vacates were not included; however, it is preferable to avoid requesting clarification from the study authors. See Gary King, Replication, Replication, 28 PS 444, 444 (1995) (“The replication standard holds that sufficient information exists with which to understand, evaluate, and build upon a prior work if a third party could replicate the results without any additional information from the author.”).

  53. [53]. See Rantanen, Methodology & Metrics, supra note 12, at 263–65 (showing how the way that affirmances are counted can have a major effect on reported affirmance rates).

  54. [54]. Holte & Sichelman, supra note 4, at 142.

  55. [55]. As I have previously written, there is no “correct” approach to reporting affirmances. See Rantanen, Methodology & Metrics, supra note 12, at 281–82. My point here is not that there is a “best” approach but that data selection methodology is important and can affect how metrics are reported and the inferences that are drawn from them.

  56. [56]. One extraordinary example is the 52-page methods index for Abbe R. Gluck & Lisa Schultz Bressman, Statutory Interpretation from the Inside—An Empirical Study of Congressional Drafting, Delegation, and the Canons: Part I, 65 Stan. L. Rev. 901 (2013). See Abbe R. Gluck & Lisa Schultz Bressman, Statutory Interpretation from the Inside: Methods Appendix, Stan. L. Rev. (2013), https://review.law.stanford.edu/wp-content/uploads/sites/3/2017/01/Gluck_Bressman_65_Stan._L._Rev._Methods_Appendix.pdf [https://perma.cc/PD7G-DVL7] [hereinafter Gluck & Bressman, Methods Appendix].

  57. [57]. See David Moher, Alessandro Liberati, Jennifer Tetzlaff & Douglas G. Altman, Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement, PLoS Med., July 2009, at 1, 3–5.

  58. [58]. See Jason M. Chin, Alexander C. DeHaven, Tobias Heycke, Alexander O. Holcombe, David T. Mellor, Justin T. Pickett, Crystal N. Steltenpohl, Simine Vazire & Kathryn Zeiler, Improving the Credibility of Empirical Legal Research: Practical Suggestions for Researchers, Journals, and Law Schools 26 (Bos. Univ. Sch. of L., Public Law & Legal Theory Paper No. 20-32, 2020), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3703150 [https://perma.cc/T4NF-EF5L].

  59. [59]. Holte & Sichelman, supra note 4, at 137.

  60. [60]. Epstein & King, supra note 7, at 83.

  61. [61]. See id.

  62. [62]. Cf. Janet Freilich, The Replicability Crisis in Patent Law, 95 Ind. L.J. 431, 438–48 (2020) (describing the replicability crisis in both the scientific literature and patent law).

  63. [63]. Epstein & King, supra note 7, at 85.

  64. [64]. See id.; Hall & Wright, supra note 27, at 107. A codebook need not be a set of coding instructions; for other studies it can involve a description of the contents of data fields, such as national GDP.

  65. [65]. See Holte & Sichelman, supra note 4, at 137.

  66. [66]. See Ted Sichelman & Ryan Holte, Codebooks for Cycles of Obviousness, SSRN (Nov. 9 2020), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3719135 [https://perma.cc/DLB5-L6HB].

  67. [67]. See Epstein & King, supra note 7, at 84–85.

  68. [68]. See Sichelman & Holte, supra note 66. In the interests of full disclosure, much of the codebook for Cycles of Obviousness appears to be a copy of my codebook for The Federal Circuit’s New Obviousness Jurisprudence, so take my favorable review with that in mind.

  69. [69]. See, e.g., Moore, supra note 34, at 239–46 (studying the high reversal rate on claim construction following Markman v. Westview Instruments).

  70. [70]. Welche Thiere Gleichen Einander Am Meisten?: Kaninchen und Ente [Which Animals Most Resemble One Another?: Rabbit and Duck], Fliegende Blätter, Oct. 23, 1892, at 147.

  71. [71]. See Hall & Wright, supra note 27, at 112 (“If there might be elements of subjectivity or uncertainty in applying coding categories to legal decisions, any claim to scientific rigor requires some evaluation of whether different people would code the documents consistently.”).

  72. [72]. Id. (“Coding that primarily reflects the subjective, idiosyncratic interpretation of the particular individuals who read the cases or that has large elements of error or arbitrariness undermines the claim of replicability.”).

  73. [73]. Compare Holte & Sichelman, supra note 4, at 137 (not reporting any intercoder reliability measures), with Petherbridge & Wagner, supra note 17, at 2075 (reporting intercoder reliability measures), and Rantanen, New Obviousness Jurisprudence, supra note 21, at app. (same). In the interests of full transparency, I am currently conducting a replication analysis for Cycles of Obviousness that will be published separately. That analysis will go into depth on the reliability of different aspects of the data. As that study has not yet been publicly released, however, I am not comfortable relying on its data here.

  74. [74]. See Rantanen, New Obviousness Jurisprudence, supra note 21, at app. (reporting a Cohen’s kappa of 0.960 for the outcome of the Federal Circuit analysis and 0.481 for whether some form of the “teaching suggestion motivation” test was used by the court when compared to coding by an independent third party).

  75. [75]. Id. at 724.

  76. [76]. See Shari Seidman Diamond, Empirical Legal Scholarship: Observations on Moving Forward, 113 Nw. U. L. Rev. 1229, 1233–35 (2019) (discussing the value of transparency in empirical legal studies).

  77. [77]. See Epstein & King, supra note 7, at 12, 131–32; cf. Lisa Larrimore Ouellette & Andrew Tutt, How Do Patent Incentives Affect University Researchers?, 61 Int’l Rev. L. & Econ. (Special Issue) 6–16 (2020) (describing the results of a replication analysis of a major empirical study of university patent policies).

  78. [78]. Cf. Epstein & King, supra note 7, at 131 (making a similar point).

  79. [79]. See, e.g., id. at 130–32; Rantanen, Methodology & Metrics, supra note 12, at 282; Robin Feldman, Mark A. Lemley, Jonathan S. Masur & Arti K. Rai, Open Letter on Ethical Norms in Intellectual Property Scholarship, 29 Harv. J.L. & Tech. 339, 348 (2016). That said, practices may vary from field to field and even within fields, with different journals having different policies relating to data publication.

  80. [80]. James Bessen & Michael J. Meurer, The Direct Costs from NPE Disputes, 99 Cornell L. Rev. 387, 394–98, 398 n.55 (2014).

  81. [81]. David L. Schwartz & Jay P. Kesan, Analyzing the Role of Non-Practicing Entities in the Patent System, 99 Cornell L. Rev. 425, 445–47 (2014).

  82. [82]. See id. at 443; see also Mark A. Lemley, Kent Richardson & Erik Oliver, The Patent Enforcement Iceberg, 97 Tex. L. Rev. 801, 804 n.11 (2019) (stating that due to the nature of the data, the authors were departing “from the standard practice of releasing the raw data after publication”).

  83. [83]. See Schwartz & Kesan, supra note 81, at 446.

  84. [84]. E-mail from Ted Sichelman, Professor of L., Univ. of San Diego Sch. of L., to author (Dec. 18, 2019, 13:05 CST) (on file with author) (providing the dataset); E-mail from Ted Sichelman, Professor of L., Univ. of San Diego Sch. of L., to author (Jan. 7, 2020, 16:55 CST) (on file with author) (providing the codebook); E-mail from Ted Sichelman, Professor of L., Univ. of San Diego Sch. of L., to author (Nov. 12, 2020, 10:39 CST) (on file with author) [hereinafter Nov. 2020 E-mail from Ted Sichelman to author] (stating that they provided the dataset to another group that requested it).

  85. [85]. Nov. 2020 E-mail from Ted Sichelman to author, supra note 84. While this is commendable, there is no indication in either the article or on the journal’s website that the data is available through a request to the Iowa Law Review.

  86. [86]. Id.

  87. [87]. All of the authors of the studies of nonobviousness referenced earlier in this Essay provided the data when I requested them. See E-mail from Lee Petherbridge, Professor of L., Loyola L. Sch., to author (Nov. 12, 2020, 19:52 CST) (on file with author) (providing the dataset for The Federal Circuit and Patentability); E-mail from Glynn S. Lunney, Jr., Professor of L., Tex. A&M Sch. of L., to author (Oct. 20, 2020, 06:09 CST) (on file with author) (providing the dataset for Not So Obvious After All); E-mail from Banks Miller, Assoc. Dean of Graduate Educ., Univ. of Tex. at Dall., to author (Oct. 22, 2020, 08:25 CST) (on file with author) (providing the dataset for Expertise, Experience, and Ideology on Specialized Courts); E-mail from Ali Mojibi, Partner, Covington & Burling LLP, to author (Nov. 13, 2020, 12:42 CST) (providing the dataset for An Empirical Study of the Effect of KSR v. Teleflex on the Federal Circuit’s Patent Validity Jurisprudence); E-mail from Jennifer Nock, Member, Rothwell Figg, to author (Oct. 29, 2020, 15:14 CST) (on file with author) (providing the dataset for Raising the Bar for Nonobviousness). The dataset for The Federal Circuit’s New Obviousness Jurisprudence is publicly available. Studies and Data, supra note 1.

  88. [88]. See, e.g., Bambauer, supra note 3, at 1030 n.92; NPE Patent Data Project, NPE Data, http://www.npedata.com [https://perma.cc/9S2K-N4BQ]; Patent Litigation Docket Reports Data, U.S. Pat. & Trademark Off., https://www.uspto.gov/learning-and-resources/electronic-data-products/patent-litigation-docket-reports-data [https://perma.cc/2QVB-VL2X].

  89. [89]. See, e.g., Freilich, supra note 3, at 939–44; Reid Kress Weisbord & David Horton, Boilerplate and Default Rules in Wills Law: An Empirical Analysis, 103 Iowa L. Rev. 663, 685–88 (2018). This is not necessarily an intentional choice. Indeed, that’s the problem—there is no expectation of publishing data, and so often it is not.

  90. [90]. See Submissions, Iowa L. Rev., https://ilr.law.uiowa.edu/about/submissions [https://perma.cc/KAF4-8C7R].

  91. [91]. See Epstein & King, supra note 7, at 130–31. I am in no way singling them out, but as one example of the importance law review editors place on footnotes, see generally Jonathan H. Adler & Christopher J. Walker, Delegation and Time, 105 Iowa L. Rev. 1931 (2020) (63-page article with 362 footnotes).

  92. [92]. Heise, supra note 13, at 818.

  93. [93]. Id.

  94. [94]. See Hall & Wright, supra note 27, at 105 (“All empirical studies are imperfect, especially observational (non-experimental) social science studies.”). Of course, the goal is always to minimize the imperfections—especially those that are due to methodology.

  95. [95]. See, e.g., Feldman et al., supra note 79, at 348.

  96. [96]. See, e.g., Heise, supra note 13, at 818–19; Gregory Mitchell, Empirical Legal Scholarship as Scientific Dialogue, 83 N.C. L. Rev. 167, 183 (2004).

  97. [97]. Mitchell, supra note 96, at 187, 199–200.

  98. [98]. See Dataverse Project, https://dataverse.org [https://perma.cc/3H3X-UJS7]; OSF Home, https://osf.io [https://perma.cc/7CS7-7Z7N]. For more options, see Recommended Data Repositories, Sci. Data, https://www.nature.com/sdata/policies/repositories [https://perma.cc/5QUT-Q7KE] (listing both discipline-specific, community-recognized repositories and generalist repositories).

  99. [99]. For example, storing under 1 GB of data per file is generally free. See, e.g., OSF Home, supra note 98 (“OSF is a free, open platform to support your research and enable collaboration.”).

  100. [100]. See Schwartz & Kesan, supra note 81, at 445–46.

  101. [101]. Nov. 2020 E-mail from Ted Sichelman to author, supra note 84.

  102. [102]. See, e.g., Lemley et al., supra note 82, at 804 n.11 (identifying the practice in that study as an exception and explaining why it was done).

  103. [103]. See, e.g., Epstein & King, supra note 7, at 116–33.

  104. [104]. See Zeiler, supra note 9, at 90–97.

  105. [105]. See Chin et al., supra note 58, at 10–12; Maggie Wittlin, Lisa Larrimore Ouellette & Gregory N. Mandel, What Causes Polarization on IP Policy?, 52 U.C. Davis L. Rev. 1193, 1237–39 (2018).

  106. [106]. See Chin et al., supra note 58, at 16–17, 43–45 (analyzing the 25 top law journals under the TOP score mechanism and finding that only 3 student-edited law journals had any relevant credibility-enhancing policies).

  107. [107]. This is not an original suggestion. See, e.g., Feldman et al., supra note 79, at 348–49. At a minimum, if a journal is going to require data to be submitted and make them available on request to the journal, the journal should indicate that the data are available on request on its website and in the article itself, and should require that any restrictions on access to the data be clearly stated.

  108. [108]. See sources cited supra note 98.

  109. [109]. My research team looked at the submission policies on the webpages of the top ten law reviews and found that three (the Yale Law Journal, Stanford Law Review, and New York University Law Review) had empirical study policies. See Yale L.J., Data Retention Policy for Authors, https://www.yalelawjournal.org/files/DataRetentionPolicyforAuthors_c1stami8.pdf [https://perma.cc/9A2V-GLMM]; Article Submissions, Stan. L. Rev., https://www.stanfordlawreview.org/submissions/article-submissions [https://perma.cc/3BCM-YZSM]; Submissions, N.Y.U. L. Rev., https://www.nyulawreview.org/submissions [https://perma.cc/CH82-F35L]. We could not find empirical study policies for the Harvard Law Review, Columbia Law Review, University of Pennsylvania Law Review, Georgetown Law Journal, California Law Review, Notre Dame Law Review, and University of Chicago Law Review.

  110. [110]. Print Submissions, Nw. U. L. Rev., https://northwesternlawreview.org/submissions/print [https://perma.cc/5XHN-56MT].

  111. [111]. See, e.g., Hall & Wright, supra note 27, at 109.

  112. [112]. See Article Submissions, Stan. L. Rev., supra note 109; Print Submissions, supra note 110; Submissions, N.Y.U. L. Rev., supra note 109.

  113. [113]. See, e.g., Gluck & Bressman, Methods Appendix, supra note 56.

  114. [114]. See Zeiler, supra note 9, at 90–93, 92 n.57.

  115. [115]. The website of the Northwestern Law Review, for example, describes a mechanism similar to what I’ve proposed here. Empirical Issue, Nw. U. L. Rev., https://northwesternlawreview.org/submissions/empirical-issue [https://perma.cc/BWY5-42DD].

  116. [116]. See, for example, the process for review of articles submitted to Group & Organization Management, which includes a methods review for articles that use quantitative methods. Group & Organization Management: Submit Paper, Sage J., https://journals.sagepub.com/author-instructions/GOM [https://perma.cc/84ZQ-9396].

  117. [117]. Heise, supra note 13, at 816 (quoting Lawrence M. Friedman, The Law and Society Movement, 38 Stan. L. Rev. 763, 774 (1986)).


* Professor of Law, Ferguson-Carlson Fellow, and Director, Iowa Innovation, Business & Law Center.



Thanks to Daniel Kieffer, Lindsay Kriz, and Madison Murhammer Colon for assistance in preparing this Essay. Thanks also to Janet Freilich, Ryan Holte, Lee Petherbridge, Anya Prince, David Schwartz, and Ted Sichelman for comments on a draft version of this Essay.