Countering the Plaintiff’s Anchor: Jury Simulations to Evaluate Damages Arguments

I.     Introduction

Although jury trials are rare, they still drive nearly all legal outcomes because rational litigants negotiate in their shadow.1 For any given case, the likely outcome at trial is what motivates litigation decisions, drives how much money is spent developing the case, and ultimately determines settlement value. Thus, jury behavior remains important.

Numerous studies establish that the jury’s damages decision is strongly affected by the number suggested by the plaintiff’s attorney, independent of the strength of the actual evidence (a psychological effect known as “anchoring”).2 Indeed, the strength of the effect appears so powerful that some researchers advise that “the more you ask for, the more you get.”3 Yet many questions remain unanswered.

For the plaintiff’s strategy, these include: is there a limit to the anchoring effect that a plaintiff’s attorney can induce? Common sense suggests that, at some point, a proffered anchor would be perceived as so outrageous as to undermine the credibility of the speaker. But at what point, and does the expected value of the case shift such that the risk of losing liability offsets the marginal dollar gains of the positive verdicts?

For the defendant, what strategy should his or her attorney use to counteract the plaintiff’s attempt to anchor with a high ad damnum (damages demand)? Can a defendant attack the plaintiff’s high demand and thereby undermine the plaintiff’s credibility? Alternatively, should defendants provide a lower damages number to the jury? Such a “counter-anchor” could wash out the plaintiff’s anchoring effect, but some attorneys worry juries will interpret such a response as a concession of liability. But are concession effects real?

This study seeks to contribute answers to these questions. To do so, we videotaped a shortened medical malpractice trial with two different plaintiff damages demands and three different defendant responses. Using Amazon Mechanical Turk, we recruited 776 qualifying participants to view our mini-trial and render decisions on liability and damages. We ran a computer simulation that aggregated randomly selected individual jurors’ decisions into mock juries and analyzed their verdicts. Our study found powerful anchoring effects that dominate much smaller but still statistically significant credibility effects. We also detected differences among the defendant’s responses to the plaintiff’s low damages demand. Surprisingly, countering with a lower alternative damages number actually improved the defendant’s win rate, but did not lower damages. However, when the plaintiff demanded an unreasonably high award, none of the defendant’s responses produced a statistically significant difference in outcomes.

The answers are important for both litigators and policymakers. For policymakers, in particular, it is important to determine whether the anchoring effect is unduly biasing jury decisions. An affirmative answer would motivate rules to regulate plaintiff demands ex ante, as some states have already done, or to provide some reference points to jurors, as scholars have suggested.4 Alternatively, some may conclude, as many states have, that allowing demands for pain and suffering at trial is preferable to leaving the jury to make such awards in the absence of guidance, that awards can be addressed through existing damage caps, or that use of remittitur is sufficient to curb any runaway awards. If, on the other hand, defendants already have effective strategies for countering the biases of plaintiff anchoring, then this may be simply the adversarial process at work.

II.     Background

A.     Anchors

One major component of a jury’s decision is the damage award. Numerous studies have suggested that a successful plaintiff can obtain a higher damage award simply by offering a higher ad damnum, that is, requesting more money from the jury.5 Psychologists call this an “anchoring effect,” referring to when “individuals’ numerical judgments are inordinately influenced by an arbitrary or irrelevant number.”6 Anchors are powerful influences, not only when they are made obvious, but also when subtly embedded in a more complex thicket of information.7

Anchoring effects have repeatedly been shown in the context of jury trials,8 reaching back at least to the 1950s, when the Chicago Jury Project studied jury responses to a typical car accident case.9 The study used mock juries who listened to tape-recorded mock trials. The participants were actual jurors who were on duty at the time.10 Jurors were exposed to variations in the strength of the case and the amount demanded (the ad damnum). Across conditions, the conclusion was “that the higher the ad damnum the higher the verdict.”11

Studies since then have confirmed that as the demand increases, so does the award—indeed so much so that one study’s title provocatively suggests that “the more you ask for, the more you get.”12 A few studies suggest that this effect persists even when anchors are extreme. One study tested demands ranging from $100 to $1 billion.13 Both the absurdly low and inordinately high demands produced anchoring effects.14 However, another study has suggested that an absurdly high anchor can actually decrease damages.15

An important study by Diamond, Rose, Murphy, and Meixner questions the conventional view of anchoring, suggesting that juries believe anchors are “irrelevant” and often “outrageous.”16 Diamond and colleagues analyzed transcripts of 31 actual jury deliberations, involving 33 plaintiffs, to assess the effect of damages anchors.17 According to the authors, “the dangers of bias from these potential anchors offered by attorneys appear to be overstated as applied to the real world of deliberating juries.”18

A closer examination is useful. Focusing on pain and suffering, of the 33 closing arguments Diamond and colleagues examined, 21 plaintiffs made an ad damnum requesting a specific dollar amount.19 In 15 cases, defendants offered a contingent concession, proposing their own amount if the jury chose to impose liability, and in 11 cases, defense attorneys offered rebuttals to the plaintiffs’ pain-and-suffering demands.20 After coding the deliberation transcripts, the Diamond research team found 1624 instances in which jurors referred to the attorneys’ damages recommendations, with 86% of jurors contributing at least one such comment.21 About one-third of these comments focused on the pain-and-suffering numbers in particular.22 The Diamond study demonstrated that jurors were more receptive to pain-and-suffering demands when attorneys grounded those demands in specific figures.23 In relative terms, evidentiary support mattered: jurors were seven times more likely to simply accept ad damnums backed by evidence in their deliberation comments.24 In absolute terms, however, even for the plaintiff’s pain-and-suffering demands (generally not supported by objective evidence), roughly three-quarters of the jurors’ comments were neutral (47.9%), treated the demand as useful (26.9%), or accepted it outright (1.8%).25 Only about one-quarter of the comments (23.4%) rejected the demand.26 Thus, the Diamond study suggests that even poorly supported anchors may have a substantial effect.

Notably, the Diamond study was observational and primarily focused on what jurors said about the ad damnums. For example, one juror said that the plaintiff’s demand was “stupid and it ma[de] no sense.”27 Much of the social science literature suggests, however, that anchors affect numerical estimates in unconscious ways, even among those who concede that the anchor should be irrelevant.28 In fact, the classic experiment on anchoring used “a wheel of fortune” to generate the anchoring number right in front of the participants, so that they would know that the number was “stupid and ma[de] no sense” for their estimation task.29 The participants nonetheless exhibited a huge anchoring effect in their estimates.30 For this reason, as Diamond and colleagues acknowledge, observation of jury deliberations cannot offer conclusive evidence of the causal effect of anchors.31 The strength of the Diamond study was that the investigators had unparalleled access to real jurors deciding real cases, even if the researchers were unable to manipulate the case facts presented to each jury in order to isolate the causal effect of an outrageously high demand.

B.     Credibility Effects

Although offering a high anchor leads to higher damages awards, most trial attorneys believe that a damages demand must pass the “straight face” test.32 Diamond and colleagues interpreted their qualitative data in accordance with this concern: “Many of the outright rejections of the plaintiff’s ad damnum revealed cynicism about attorney demands and ridicule of the amounts.”33

This “credibility effect” might hurt the plaintiff in one of two ways. First, a jury might reject the plaintiff’s anchors and award substantially lower damages than requested. This possibility has been referred to as the boomerang effect.34 Second, if juries conclude that a plaintiff’s damages request lacks credibility, they might become skeptical of the plaintiff’s other arguments as well. This could adversely affect the plaintiff’s ability to prevail on liability.

In one experiment, Marti found a boomerang effect.35 The study exposed 500 undergraduate participants, who took part in partial fulfillment of a course requirement, to a 4 x 4 design with four plaintiff demand conditions (no monetary request, $1.5 million, $15 million, or $25 million) and four defendant responses (no rebuttal, $0, $100,000, or $500,000).36 The results showed that plaintiffs received higher awards for higher demands up to a point: the most extreme anchor—$25 million—actually received less than the $15 million anchor.37

However, Marti was unable to replicate her findings.38 In another experiment, Marti had jurors consider the same demands and counter-demands, but she introduced a variety of jury instructions.39 The instructions, which ranged from typical jury instructions to instructions that actually explained the risk of anchoring, were introduced to test the hypothesis that revealing the power of anchors to jurors could induce a boomerang effect at even lower plaintiff demands.40 Specifically, Marti hypothesized that telling jurors how the plaintiff was trying to use anchors would cause them to respond more negatively to even the $15 million anchor.41 More generally, Marti hypothesized that extremely high requests would bring to mind greed and would draw awards downwards.42

The results were surprising. When jurors were given traditional jury instructions, the boomerang effect did not appear at all.43 Similarly, when jurors were given instructions to disregard the demand because it was not evidence, or even an instruction that explained the dangers of anchors, the boomerang effect still did not occur (nor were the overall awards reduced).44 The only instruction that reduced awards was one that provided ranges of verdicts in similar cases, but even this “range instruction” did not produce the boomerang effect.45

As to the potential that these outrageous demands could impact the overall credibility of the plaintiff, the study provided no insight. Neither of the Marti experiments required the jurors to determine liability. Therefore, they had nothing to say about whether an outrageous anchor might adversely affect the plaintiff’s ability to obtain a verdict on liability. This is a common limitation in the literature. Many experiments only ask jurors to decide a single dependent variable: liability or damages (or even just punitive damages).46 Although there may be important reasons for these stylized decision tasks, they are undeniably artificial because in real-world trials, a single jury typically decides liability, economic damages, and, in appropriate cases, punitive damages.

The separation of these facets of jury decision making in studies may be especially limiting because there are a number of studies that find that jurors engage in fusion—a process in which the strength of a case influences damages awards despite static damage evidence or, in reverse, severity of injury (damage) influences findings of liability despite static liability evidence.47 This fusion, although legally impermissible, is a reality. As a result, separating liability from damages in jury studies limits the predictive power of that research for real juries.48

With the exception of Marti’s first experiment (called into question by the follow-on experiment using the same facts), the research to date suggests that no anchor is too high. Instead, the more a plaintiff asks for, the more she gets. The open question is whether this holds true in cases in which liability and damages must both be determined by the same participants.

C.     Concession

In trials, plaintiffs almost always present a concrete damages demand. Yet not all defendants offer counter-anchors in their closing arguments, even though one might expect that a counter-anchor would help reduce the impact of the plaintiff’s demand. Many defense attorneys fear that juries will interpret such a response as conceding liability.49

There is very little scholarly research on point, and the practicing bar is split on whether counter-anchors are wise. Two experienced trial attorneys, Sobus and Laguzza, note that since the infamous Pennzoil-Texaco verdict, in which jurors awarded $10.53 billion, more attorneys have felt compelled to offer counter-anchors.50 In that case, the jury assumed the plaintiff’s number was right because the defendant did not contest it in the closing argument.51 This led many trial attorneys to conclude that counter-demands are necessary.52 Sobus and Laguzza ultimately argue that this prevailing wisdom should be questioned—and oftentimes rejected—since a counter-anchor may be interpreted as a concession on liability.53 The authors are not alone. Other research shows that many defense attorneys do not provide a counter-anchor for the same reason.54 Additionally, some attorneys fear that providing a damages number might create a damage floor—a minimum amount the jury will award.55

A few scholars have tried to understand the dangers of counter-anchors. Decker presented 283 students with written case scenarios featuring one of four defense strategies: no counter-anchor, or a counter-anchor of $0, $80,000, or $200,000.56 Participants were asked to determine both liability and damages in what turned out to be a close case (52.7% returned a verdict for the plaintiff).57 Counter-anchors did not alter the percentage finding the defendant liable and did not influence average awards.58 This was largely consistent across conditions.59 “The lack of significant results from the present study provides limited practical help for the defense attorney. . . . Providing a damages counter-anchor did not influence or change the percentage of those who found the defendant responsible.”60

In one of the more robust experimental settings on the topic of counter-anchors, Leslie Ellis’s unpublished doctoral dissertation manipulated (1) the liability evidence and (2) the amount of the defendant’s recommendation.61 She asked 360 real jurors to consider a slip-and-fall tort case, manipulating both the strength of the case (three conditions) and the counter-anchor offered (no anchor, $500, $14,000, or $21,000).62 Her results contradict those of Decker and are also more nuanced: Ellis found that counter-anchors reduced overall awards.63 However, she also found that

[c]ompared to jurors who did not hear the defendant make an award recommendation, jurors who did hear the defendant make a recommendation were more likely to report that the recommendation was an indication of the severity of the plaintiff’s injuries, was what the defendant thought an appropriate award would be, was part of a well-prepared case, or was a negotiation point for deliberations on damages.64

In the strong defense case, it appeared that jurors did in fact view the defendant’s mention of a counter-anchor as a concession.65 In both the more balanced case and the strong plaintiff case, the concession effect did not occur in a statistically significant way.66

As a result, the takeaway is muddy, but mirrors the suggestions by Sobus and Laguzza. Making a counter-anchor in a strong defense case is a bad idea. However, in close cases or strong plaintiff’s cases it is less likely to hurt. The obvious problem for practitioners is that it may be very difficult to assess accurately, prior to a verdict, whether a case is a close call or a strong case for a particular jury. In sum, the existing literature on concession is inconsistent and provides few answers that can be used by practitioners or policymakers.

III.     Experiment

A.     Hypotheses

Our study seeks to test four hypotheses related to the anchoring, credibility, and concession effects discussed above:

  1. Anchoring Effects: Juries award larger damages when a plaintiff requests a larger award, even if that award is unreasonable.
  2. Credibility Effects: A plaintiff’s credibility is adversely affected by requesting an unreasonably large award (resulting in a lower likelihood of prevailing on liability).
  3. Exploiting Credibility Effects: The credibility effect is sharpened when the defendant’s attorney explicitly attacks the plaintiff’s ad damnum as unreasonable (resulting in an even lower likelihood that plaintiff will prevail on liability).
  4. Concession Effects: When a defendant argues for an alternative, lower damages award, juries interpret the argument as a concession of liability (resulting in a higher likelihood of the plaintiff prevailing on liability).

The purpose of this study is to estimate the relative strengths of these effects and determine which are most important for litigation strategy and policymaking.

B.     Experimental Design

We performed an online vignette-based experiment in a 2 x 3 between-subjects factorial design (fully crossed). All subjects watched a medical malpractice trial video that lasted approximately 33 minutes. The video included opening statements from the plaintiff’s and defendant’s attorneys, testimony from expert witnesses about the standard of care in the case, cross-examination of both experts, and then—by random assignment—one of six different combinations of closing statements from the parties’ attorneys.67 The video was developed with real physicians, who wrote the medical scenario and portrayed the expert witnesses, and an experienced arbitrator, who consulted on the jury instructions and served as the judge. Opening and closing arguments were written by one of the co-authors, an experienced trial attorney. Thus, although condensed, the video had a high degree of verisimilitude.

The scenario in the video concerned a primary care physician’s failure to diagnose a case of lumbar radiculopathy and refer the patient for imaging, which allegedly would have allowed timely surgery and avoided the permanent disability that the patient now suffers. The primary dispute concerned whether the physician–defendant met the standard of care when, instead of ordering imaging, he simply instructed the patient to take over-the-counter medications and return if the pain got worse.

Mock jurors viewed one of the six different combinations of closing arguments, as shown verbatim in the Appendix. There were two variations of the plaintiff’s closing argument. In both variations, the plaintiff made the same liability argument followed by one of two damages demands. The plaintiff’s attorney asked the jury to award either $250,000 or $5 million to compensate the plaintiff for pain and suffering associated with the back injury. We viewed $250,000 as objectively reasonable because it is roughly the average award given by mock jurors in an earlier experiment in which the parties did not suggest any specific damages figures.68 The $5 million demand was selected as unreasonably high.

There were three different variations of the defendant’s closing argument. The defendant’s attorney made the same arguments against liability in each variation, but the damages arguments varied between ignoring, countering, and attacking. In one variation, the defendant’s attorney challenged both liability and damages and asked the jury to award “no money.” We refer to this as “ignoring” the damages demand because the defendant’s attorney never said what the appropriate amount of damages should be if liability were found. In a second variation, the defendant’s attorney first argued that there was no liability. However, he then argued that if there was liability, the jury should award no more than a reasonable amount, which he stated would mean “no more than $50,000.” We refer to this as “countering” because the defendant’s attorney offered a lower alternative damages figure. Finally, in the third variation, the defendant’s attorney ridiculed the plaintiff’s damages demand and explicitly used the demand to argue that the jurors should not trust what the plaintiff said about either liability or damages. We refer to this as “attacking” because the defendant’s attorney attacked the plaintiff’s credibility.69 Notably, the amount of video that differed between conditions was less than one minute. This included variations in both plaintiff’s and defendant’s closing arguments.

The combination of different plaintiff and defendant arguments yielded six different experimental conditions. Subjects rendered individual judgments, responding “yes” or “no” to the prompt: “Based on the instructions provided by the judge in the video, do you believe that the Plaintiff has proved, by the greater weight of the evidence, that the Defendant committed medical negligence?” The jurors who found negligence awarded non-economic damages for “pain and suffering,” which had been defined by the judge’s instructions.70 Participants did not award economic damages, because the attorneys told the participants that they were not in dispute. Finally, we asked jurors to “in a sentence or two explain your answers.”

C.     Respondents

We recruited subjects from the population of workers on Amazon Mechanical Turk (“MTurk”) in June 2014 and screened for those who were “jury eligible,” meaning residents of the United States over age 18 who could read, write, and speak English. Subjects were paid three dollars to complete the experiment online. All subjects consented in accordance with the Institutional Review Board requirements. We administered a demographic questionnaire at the beginning of the survey.

In total, 776 people completed the online experiment.71 The sample was more female, more educated, more politically liberal, and younger than the population at large; race and median income, on the other hand, were more closely representative of U.S. Census data.72 Balance checks did not reveal any differences in demographic composition across the six experimental groups. The Appendix includes a regression analysis of these results.

D.     Jury Simulation

We were concerned that prior research may have exaggerated the effects of anchors to the extent that it relied upon the responses of individual mock jurors. As Vidmar explains, “damage awards are not rendered by individual jurors but by some combination of them, usually twelve or six, who combine their perspectives.”73 Collective judgments are known to have less variability than individual awards, and given that the distribution of individual verdicts is left-censored (at zero) and right-skewed (due to high outliers), the more moderate collective jury awards also tend to be lower than the average juror award. For example, in one highly realistic experiment, for pain and suffering, the average juror award was $2.3 million, with a standard deviation of $4.3 million, while the average jury award was only $486,000 with a standard deviation of $715,000.74

Although it was not feasible to facilitate actual jury deliberations with our online population, we performed computer simulations of jury awards based on the individual juror responses. Presently, there is no dominant theory that predicts how juries convert their individual pre-deliberation preferences into a collective post-deliberation verdict. However, prior work has found that the median individual vote is predictive of collective jury outcomes.75 Diamond and Casper conclude that, among the measures they studied, “the median [individual juror award pre-deliberation] is the best single predictor of the jury’s final verdict.”76 Other scholars have found a “severity shift,” towards higher-dollar awards, in punitive damages cases, but do not offer an alternative predictive model for simulation.77 Still, leading jury researchers have used the median as a rough approximation in prior jury simulations.78

To implement this simulation, we first capped the extreme outliers: individual juror awards over $5 million (the maximum demand even in our high-anchor conditions) were set to $5 million. Then, for each individual juror award, we randomly selected 11 other juror awards from the same experimental condition and took the median award from that group of 12 as the jury’s verdict. This calculation counted votes for the defendant as awards of $0.79 Thus, in juries where seven or more of the jurors voted for the defendant, the jury awarded $0. After calculating the median for each jury, we then calculated the mean simulated jury award by experimental condition.80
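In outline, the simulation procedure can be sketched as follows. The authors conducted their analysis in R; this Python sketch uses hypothetical award data and a hypothetical `simulate_jury_awards` helper to illustrate the cap-then-median logic, not the study’s actual code:

```python
import random
import statistics

def simulate_jury_awards(awards, cap=5_000_000, jury_size=12, seed=None):
    """Simulate jury verdicts from individual juror awards: cap extreme
    awards, then take the median of each 12-person panel. Defense votes
    should already be coded as $0 awards."""
    rng = random.Random(seed)
    capped = [min(a, cap) for a in awards]
    verdicts = []
    for i, anchor_juror in enumerate(capped):
        # Pair each juror with 11 others drawn at random from the same condition.
        others = rng.sample(capped[:i] + capped[i + 1:], jury_size - 1)
        verdicts.append(statistics.median([anchor_juror] + others))
    return verdicts

# Hypothetical condition: mostly defense votes (zeros) with a few large awards.
jurors = [0] * 60 + [250_000] * 30 + [8_000_000] * 10
juries = simulate_jury_awards(jurors, seed=1)
print(statistics.mean(jurors), statistics.mean(juries))
```

Because the median of a right-skewed panel discards the high outliers, the mean of the simulated jury verdicts is far below the mean of the individual awards, mirroring the moderation described above.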

Our simulated dataset thus contains the same number of simulated jury judgments (776) as our dataset of individual juror judgments. Our estimates for simulated juries are based on the 776 individual observations, each of which we combined with 11 randomly selected jurors from the same experimental condition; this group of 12 constituted a jury for our purposes. In addition to the random error that exists in any sample-based research study, our final estimates are subject to additional variation due to the luck of the draw. To investigate this risk, we repeated the simulation 99 additional times and calculated the conditional means. We then plotted the central 95% of these results as a bootstrapped confidence interval, shown in Figure 1 below. For the hypothesis tests described below, we used the R statistics program, specifically the permutation test in the Deducer package.
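The repeated-simulation check can likewise be sketched. This is a simplified Python approximation (the study used R) on hypothetical data: it re-runs a median-of-12 simulation many times and reports the spread of the per-run mean awards, analogous to the interval shown in Figure 1:

```python
import random
import statistics

def jury_means(awards, runs=100, jury_size=12, seed=0):
    """Repeat the median-of-12 jury simulation `runs` times and return the
    sorted mean simulated jury award from each run, to gauge how much the
    estimate varies with the luck of the draw."""
    rng = random.Random(seed)
    means = []
    for _ in range(runs):
        verdicts = [statistics.median(rng.sample(awards, jury_size))
                    for _ in range(len(awards))]
        means.append(statistics.mean(verdicts))
    return sorted(means)

awards = [0] * 70 + [100_000] * 20 + [1_000_000] * 10  # hypothetical condition
means = jury_means(awards)
lo, hi = means[2], means[97]  # roughly the central 95% of 100 runs
print(round(lo), round(hi))
```

Dropping the lowest and highest handful of the 100 per-run means gives a rough bootstrapped interval around the simulated estimate.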


Figure 1. Mean of Simulated Jury Awards, Including Defendant Votes as Zeroes with 95% of Simulations Shown as Interval (N = 776)




Relying on both individual juror and simulated jury data, we first assess how anchoring affects damages outcomes (Hypothesis 1). We then use the verdict rates to investigate the credibility effect that could arise from the plaintiff’s overreaching (Hypothesis 2). We also use the verdict rates to assess whether the defendant’s attack can enhance the credibility effect (Hypothesis 3), or whether the defendant’s attempt to counter-anchor may create a concession effect (Hypothesis 4).81 Finally, we use the case expected value data, which incorporates all of these considerations, to assess the overall effectiveness of these strategies and the implications for policy.

E.     Damages Hypothesis Tests

The following discussion focuses exclusively on how the parties’ tactical choices affected the amount of damages awarded in cases where liability was found. Cases with zero damages (defense verdicts) are excluded from this analysis. This limits statistical power since the defendant won a majority of the cases.

Still, significant anchoring effects were found in both the individual juror analysis and the jury simulation. When examining the data across all three defendant conditions, anchoring significantly affected damages. For individual jurors, damages jumped from $225,765 to $1,859,137 as the demand increased from $250,000 to $5 million (t = –11.3287, p < 0.0001). Although the damages were lower in the jury simulation, anchoring still had a significant effect: damages increased from $64,623 to $277,857 as the demand increased from $250,000 to $5 million (t = –6.1911, p < 0.0001). Individual juror awards were thus more than eight times higher under the $5 million demand, and simulated jury awards more than four times higher. Therefore, Hypothesis 1 is confirmed.
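For reference, the fold-changes implied by the means reported above can be computed directly:

```python
# Ratio of mean awards under the $5 million demand versus the $250,000 demand,
# using the conditional means reported above.
juror_ratio = 1_859_137 / 225_765   # individual jurors
jury_ratio = 277_857 / 64_623       # simulated juries
print(round(juror_ratio, 2), round(jury_ratio, 2))  # → 8.23 4.3
```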

For the most part, our study suggested that different defense responses had little overall effect on damages (jurors: F = 0.6334, p = 0.5318; juries: F = 0.1191, p = 0.8878).82 The one exception was that countering the $5 million demand reduced damages more than attacking that demand. The effect was found only in the jury simulation (jurors: t = –0.2237, p = 0.8207; juries: t = 5.2107, p < 0.0001), where countering a $5 million demand reduced damages by 41% relative to attacking ($200,261 versus $341,872).

F.     Verdict Hypothesis Tests

Next, we considered how the plaintiff’s anchors and defense responses affected the liability determinations (i.e., the chances of the plaintiff winning any nonzero amount). For individual juror verdicts, we observed no statistically significant differences between anchors.83 In the jury simulation, however, when the plaintiff demanded $250,000, the plaintiff prevailed 36.6% of the time; when the demand increased to $5 million, the plaintiff’s win rate fell to 29.3% (juries: χ2 = 4.349, p = 0.037). Thus, we did observe a credibility effect in the mock juries.

Because liability verdicts required seven individual jurors voting for the plaintiff, jury results tend to be more extreme than individual juror results (i.e., as individual votes move away from 50/‌50, collective jury results move more quickly away from 50/‌50). Modest, statistically insignificant individual juror credibility effects became larger and more significant in the jury simulation.
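The amplification described here follows from simple binomial arithmetic. As a rough illustration, assume each juror independently votes for the plaintiff with probability p and that, per the seven-of-twelve threshold described above, a plaintiff’s verdict requires at least seven such votes (a simplification that ignores deliberation dynamics):

```python
from math import comb

def jury_win_rate(p, n=12, k=7):
    """Probability that at least k of n jurors vote for the plaintiff,
    given that each juror independently does so with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A modest drop in individual support produces a larger drop at the jury level.
for p in (0.50, 0.45, 0.40):
    print(p, round(jury_win_rate(p), 3))
```

Under these assumptions, individual support of 50%, 45%, and 40% maps to jury win rates of roughly 39%, 26%, and 16%, so a five-point shift among jurors moves the jury-level rate by considerably more.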

We also examined how win rates changed as a function of the defense’s response. Examining individual juror verdicts did not reveal any significant differences.84 However, we did detect modest effects between particular conditions in the jury simulation (χ2 = 12.756, p = 0.002). Surprisingly, the response that maximized the defendant’s chance to win on liability was countering. Ignoring reduced the defendant’s win rate by 6.6%, and attacking reduced the win rate by another 8.1%.85

It is important to note that these response effects were driven entirely by effects in the lower $250,000 anchor condition (χ2 = 34.1, p < 0.001). Defendants were best off when they countered, prevailing on liability 81.7% of the time. Ignoring reduced their chance of prevailing on liability by 19.4%, and attacking reduced the win rate by another 15.8%. In contrast, in the high-anchor condition, different defense responses appeared to have no effect on liability determinations, with all three rates within 5.5% (χ2 = 1.257, p = 0.533).

Given this data, we can reject Hypothesis 3—attacking does not seem to help in the high-anchor conditions, and it actually backfires in the low-anchor conditions. We can also reject Hypothesis 4—countering was apparently not viewed as a concession. Countering had no effect in the high-anchor conditions, and was the best strategy for winning in the low-anchor conditions.

G.     Case Value Hypothesis Tests

The expected case value takes into account both defense verdicts (no liability and no damages awarded) and verdicts for the plaintiff (liability found and damages awarded).86 This dependent variable—which includes all the data, with defense verdicts counted as zeros—allows us to assess the relative strength of the effects observed above. We also have greater statistical power for these tests than for the damages tests, which excluded most of our observations.
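In other words, the expected case value is simply the mean over all verdicts with defense verdicts counted as zeros, which equals the win rate multiplied by the mean award conditional on winning. A toy example with hypothetical verdicts:

```python
import math
from statistics import mean

# Hypothetical verdicts for one condition: defense verdicts count as $0.
verdicts = [0, 0, 0, 100_000, 200_000]

win_rate = sum(v > 0 for v in verdicts) / len(verdicts)  # 2 of 5 = 0.4
conditional_award = mean(v for v in verdicts if v > 0)   # mean nonzero award
expected_value = mean(verdicts)                          # mean over all verdicts

# The expected case value equals the win rate times the conditional award
# (up to floating-point rounding).
assert math.isclose(expected_value, win_rate * conditional_award)
print(win_rate, conditional_award, expected_value)
```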

Anchoring remained highly effective for plaintiffs, even after accounting for the credibility effect on win rates. When examining the data across all three defense-strategy conditions, anchoring significantly affected the expected case value. For individual jurors, the expected value jumped 813% as the demand increased from $250,000 to $5 million (t = –8.8715, p < 0.0001). In the jury simulation, awards were substantially lower overall, but anchoring still had a significant effect. The expected value increased 350% (t = –4.7846, p < 0.0001).

We did not detect an overall effect of defense strategy at our statistical power (jurors: F = 0.05167 and p = 0.9493; juries: F = 2.3448 and p = 0.0969).87 When comparing specific defense responses against each other, the jury simulation detected a number of effects that were not detected in the analysis of individual jurors. This is due to the extremely high variance seen in the individual juror data, which limited the power to detect what may be real effects.

Across both anchors, countering was a more effective defense response than attacking (jurors: t = –0.3206776, p = 0.754; juries: t = 2.1299585, p = 0.033). In the jury simulation, the expected value of countering improved over attacking by 43%.

Typically, a defendant can select his strategy after seeing the plaintiff’s anchor. Thus, the more interesting results are relative to each plaintiff strategy. Assuming the plaintiff anchors low, countering was also a more effective defense response than the ignoring strategy (jurors: t = –0.6659, p = 0.7543; juries: t = 2.8038, p = 0.0036). In the jury simulation, the case’s expected value when the defense attorney ignored the $250,000 demand was $21,441. At $9464, countering resulted in a 56% lower expected value. Countering was also a more effective defense response to a low anchor than attacking (jurors: t = –0.2096, p = 0.9601; juries: t = 5.2107, p < 0.0001). For the jury simulation, the expected value of attacking the $250,000 demand was $39,901. At $9464, countering resulted in a 76% lower expected value. Predictably, although not as effective as countering, ignoring was a more effective response to a low anchor than attacking (jurors: t = –1.236, p = 0.2154; juries: t = –2.864, p = 0.0048). For the jury simulation, the expected value of ignoring the $250,000 demand was 46% lower than attacking the demand.

We were unable to detect any statistically significant differences in the three defendant responses to the higher $5 million demand, likely due to the extremely high variance in the data, even after the jury simulation’s moderating effect.88 We can, however, reject the hypothesis that any of them successfully neutralizes the plaintiff’s high anchor. Nonetheless, as shown in Figure 1, the pattern of responses in the high-anchor conditions was the same as the pattern in the low-anchor condition, with countering producing the lowest expected case values in both conditions.

IV.     Discussion

A.     Implications

Like previous studies, our study found that anchoring had an extremely powerful effect on juries. The plaintiff was able to dramatically increase its potential recovery by simply demanding more money. In our experiment, damages (when awarded) increased by an average of 823% for individual jurors and 430% in the jury simulation.

At the same time, our study found that this tactic had a small negative effect on the chances of winning anything at all. This effect was almost imperceptible in individual jurors, but became significant in juries (–7.3%). Because the anchoring effects dominated any possible credibility effects, the expected value of the case still increased by 350%.

Thus, it appears that a rational plaintiff should request an extremely high damage award. It should be noted that some risk-averse plaintiffs may make a different choice. Especially for a single-shot plaintiff, the better tactic may be to maximize the chance of any recovery, even if that reduces the expected value of the case. These results produce interesting ethical implications too, as a client’s desire to maximize the likelihood of winning, even if that win is smaller in amount, may be at odds with an attorney’s personal interest in a larger average recovery across multiple cases.

Policymakers may consider anchoring problematic. The fact that a plaintiff can recover substantially more money simply by asking for it may suggest that this tactic should not be permitted. For example, New Jersey prohibits plaintiffs from demanding a specific sum for non-economic damages.89 Presumably, this rule prevents plaintiffs from taking advantage of juries that might otherwise irrationally adopt high anchors.

As discussed earlier, many defense attorneys do not provide juries with a lower alternative to the plaintiff’s damages demand. They fear that juries will interpret such a tactic as a concession of liability. While our study has limitations, the results certainly call into question this belief.

We found no concession effects when defendants responded to the plaintiff’s $250,000 anchor by suggesting the lower $50,000 number. In fact, quite the opposite was true. The defendant actually won more. The defendant prevailed 81.7% of the time when countering, but when it provided no alternative number (i.e., when it ignored) its win rate decreased by 19.4%. When it attacked the anchor, the defendant’s win rate decreased by another 15.8%. Oddly, offering a lower damages number only helped the defendant on liability, not on damages. We detected no response effects on damages. The overall benefit of countering the $250,000 demand with $50,000 can be seen by examining expected values, which take into account both damages and liability outcomes. In our experiment, countering decreased the expected value of the case by 56% relative to ignoring and 76% relative to attacking.

The second best response to the $250,000 demand was ignoring. It improved defendant’s win rate by 15.8% over attacking. Again, defendant’s response did not appear to affect damages, and the expected value of ignoring was 46% lower than attacking. But we take this result with a grain of salt because, in reality, a defendant’s attorney is only likely to attack an outrageous demand. We view $250,000 as reasonable in this case. Nonetheless, $250,000 is still five times the median income of Americans, and thus may be viewed by some jurors as outrageous, even if it is in fact typical for this sort of case.

The three different defense responses to the $5 million demand did not prove to have any statistically significant effect on either liability or damages. Any effect of response between conditions is relatively small, compared to the much larger unexplained variation within conditions. This finding is enough to conclude that, at least for the range of three strategies we tested, defendants lack an effective way to rebut a plaintiff’s outrageously high anchor. Even if one strategy turns out to be slightly more effective than the others, we can reject the hypothesis that any of these three strategies is able to nullify the plaintiff’s anchor.

B.     Limitations

Our study had several limitations. First, we tested three potential responses by defense counsel. However, this does not encompass the universe of possible responses. For example, a defense attorney can mention a counter-demand in a number of ways, ranging from a brief mention of an alternative number to an extended discussion of why the plaintiff’s demand is unreasonable. These varying forms of offering alternative damage numbers may impact whether this is viewed as a concession or whether it functions as an effective counter-anchor. Similarly, some attorneys may choose not to attack a plaintiff who makes a high demand. Instead, they may remind the jury of its obligation to stick to the evidence regarding damages, or they might mention to the jury that an award in line with the plaintiff’s demand would be viewed as unreasonable by the community. These alternative methods of “attacking” an anchor may produce different results.

Second, our experiments used a case that was a close call on liability, with plaintiff win-rates hovering around 40%. This could impact how anchors, concessions, and credibility function. For example, one study suggested that a defendant who concedes in a case that is very strong for the defendant may suffer a more pronounced concession effect.90 Similarly, one could hypothesize that a plaintiff who makes an extremely high demand in a case that is extremely weak on liability might suffer a more pronounced credibility effect. We did not test such hypotheses in this experiment.

Third, our experiment only exposed mock jurors to two damages demands. The effects we tested may be moderated or enhanced by different anchors. We did not test how different anchors might change our results. For example, at some point, anchors that are sufficiently high may sour a juror on the plaintiff, causing them to award less or to impose a greater credibility penalty.91 Additionally, attacking our high anchor had no significant effect. However, that may not be true for even higher anchors.

Fourth, we used a 32-minute abridged civil trial for our experimental stimulus. The condensed stimulus allowed us to utilize a randomized controlled experimental design, which is the gold standard for scientific research. Still, there are reasonable concerns about external validity. One might expect that anchoring effects would be smaller if diluted by a fuller body of trial evidence. However, since this particular manipulation necessarily comes at the very end of the case, it will likely be salient to jurors regardless of how much they saw before. Moreover, for mock jury research, a 32-minute videotaped stimulus, complete with jury instructions, witnesses, and arguments, is at the high end of the range of external validity, compared to other studies which might use a 5-minute paper-and-pencil task.

Fifth, we did not study real jurors. Prior research has shown that “the population of Mechanical Turk is at least as representative of the U.S. population as traditional subject pools.”92 Known experimental results have been replicated using the MTurk population.93 Nonetheless, MTurkers may be more easily distracted from the trial compared to real jurors and may even provide junk responses. It may be that real jurors are more earnest in their efforts to provide meaningful responses or that real jurors determine liability and damages differently, knowing that the outcomes will impact real individuals and companies. However, it is worth noting that we saw significant changes in juror decisions in our study despite the fact that only about one minute of the video was manipulated in each condition. This suggests that attention was likely not a problem in this study.

Sixth, the jury simulation modeled jury outcomes, but did so mathematically, without capturing everything that juries consider and discuss. Any particular jury may not consist of 12 members, may not simply vote on the median, and may not consider defense votes as if they were simply zeroes—all as our model assumed.94 We are unaware of how any such difference would interact with the hypotheses here tested. In this study, we used the simulation as something of a robustness check, just to ensure that we were not overestimating the power of anchoring and, in relative terms, underestimating the power of defense responses. Importantly, the jury simulation can be disregarded without changing the most important conclusions of the study. In Table 1, we present results for individual jurors, and have provided hypothesis tests on that data throughout. The jury simulation did, however, detect some smaller differences not observable in the juror data.
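The statistical-jury model described above can be sketched in a few lines of Python. This is a simplified illustration under the assumptions stated in the text (12-member juries, median voting, defense verdicts coded as zeros); the juror pool, number of simulated juries, and award figures are hypothetical:

```python
import random
from statistics import median, fmean

def simulate_juries(juror_awards, n_juries=1000, jury_size=12, seed=1):
    """Aggregate individual juror awards into simulated 'statistical' juries.

    Each simulated jury draws `jury_size` jurors at random from one
    experimental condition; the jury's verdict is the median individual
    award, with defense verdicts already coded as $0.
    """
    rng = random.Random(seed)
    return [
        median(rng.choices(juror_awards, k=jury_size))
        for _ in range(n_juries)
    ]

# Hypothetical condition: 60% defense verdicts (zeros), 40% awards of $250,000.
juror_awards = [0] * 60 + [250_000] * 40
jury_verdicts = simulate_juries(juror_awards)

win_rate = sum(v > 0 for v in jury_verdicts) / len(jury_verdicts)
expected_value = fmean(jury_verdicts)  # zeros included, as in the study
```

Taking the median of predeliberation awards as the jury’s verdict follows the “statistical jury” approach cited in note 77; aggregating jurors this way reduces variance relative to individual-juror data, which is why the simulation could detect smaller differences.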

V.     Conclusions

Based on our experiment, we reach two sets of conclusions. First, we confirm that anchoring works. Although the plaintiff who shoots for the stars may take a credibility hit that reduces his chances of winning, that effect is outweighed by the higher damages award he gets if he wins. Our real contribution is to show that three promising strategies for defendants all fail to overcome this effect. Litigants should plan accordingly, and policymakers and judges should consider whether this dynamic serves policy goals of deterrence, compensation, and punishment, along with procedural values of predictability and legitimacy.

Second, our results challenge the conventional wisdom that juries will interpret a defendant’s proffer of a lower counter-anchor as a concession of liability. Ellis had previously detected such effects, but only when the evidence of liability strongly favored the defendant. Our case was a much closer case on liability, and we found no evidence of concession effects. This suggests that the conventional wisdom about offering a lower damages award is wrong when evidence of liability is a close call. This information should give defendants some comfort that they can provide juries with a more complete damages assessment.


A.     Manipulations

1.     Plaintiff’s Closing Argument

You have heard the evidence. There is no guess work. Dr. Dennis was presented with evidence of a pinched nerve—evidence of neurological problems. All he had to do was get a basic test to find out if this was a muscular problem or something more serious. Dr. Dennis did not do that. Instead, he sent Mr. Stevens away. Mr. Stevens followed Dr. Dennis’s orders. He tried physical therapy and taking medicine. And what did that do? It made things worse. And when they got worse, Mr. Stevens returned. It was then that he learned that he had a serious injury that required surgery.

The result?

Permanent disability and pain.

In the end, the evidence is quite clear that Dr. Dennis simply did not meet the standard of care for a doctor, and that this is medical negligence.

What remains is a question of damages. Let’s talk about that. Everyone agrees to a few things. First, Mr. Stevens incurred $100,000 in bills. Second, everyone agrees that Mr. Stevens is permanently disabled. And third, everyone agrees he will continue to experience pain.

So what should a person have to pay for causing someone that sort of pain? What is that worth—permanent back pain? Struggling to pick up grandkids? The inability to tie your shoes without hurting? Day by day, sleepless night after sleepless night? I’ll tell you what I believe it is worth, and then it will be your job to decide if you agree.

For the medical bills, for the past pain and suffering, and for future pain and suffering, I’m asking you to award:95

 A.     $250,000

B.     $5 million

On behalf of Mr. Stevens, thank you for your time and your service. I know this isn’t always fun, but it is important, and it is his life. He asked me to thank you for taking it so seriously.

2.     Defendant’s Closing Argument

We all know that hindsight is 20/20.

In this case, we now know that Mr. Stevens had a severe neurological problem.

But when you decide this case, you have to go back to that day in Dr. Dennis’s office. Dr. Dennis was faced with someone with chronic back pain and a previous accident. He knew that when there isn’t a triggering event, this is almost always a muscular problem. He also knew that ordering a bunch of expensive, wasteful tests wasn’t in the best interest of his patient. Is he now to be punished because he practiced responsible medicine instead of defensive medicine, the very medicine that runs up costs and wastes time and money?

To be safe, Dr. Dennis did what any doctor would do. He told Mr. Stevens that if things got any worse, to return.

Mr. Stevens didn’t do that for three months. Instead, he worsened the condition by ignoring his doctor’s orders.

And what happened when he returned? Dr. Dennis immediately ordered an MRI and found the problem.

We can’t punish doctors for being reasonable, and we cannot expect them to assume each patient will ignore their advice.

The bottom line is that it is a shame that Mr. Stevens has back problems, but it is simply not the fault of Dr. Dennis.96

 A.  For that reason, I’m asking you to return a verdict in favor of Dr. Dennis and to award Mr. Stevens no money.

B.  For that reason, I firmly believe that evidence requires you to return a verdict in favor of Dr. Dennis and to award Mr. Stevens no money. However, I know that you are the jury, and that you might see it differently. So, I owe it to my client to talk to you about the damages the Plaintiff is asking for. Mr. Stevens had $100,000 in medical bills, but he probably would have eventually needed surgery anyway. He also has some permanent back pain and limitations. But remember that he had pain from a previous accident too. Is a little additional pain a reason to receive a windfall? If you award any damages at all, please be reasonable. Award a small portion of the medical bills, since Dr. Dennis did not cause the need for surgery, and award little to nothing for the future pain. Award no more than $50,000.

C.  Despite these facts, the Plaintiff’s attorney, with a straight face, asked you for $250,000/$5 million.97 That number is insulting. It’s unsupported, and it should tell you all you need to know about what this case is about. This isn’t supposed to be a get-rich-quick scheme, it is supposed to be a trial. It’s supposed to turn on facts. And the fact that the Plaintiff’s attorney would ask you to award that much money should tell you all you need to know about the Plaintiff’s credibility and the credibility of his counsel.

Send a message that juries won’t be toyed with. Return a verdict for Dr. Dennis and against Mr. Stevens. Award Mr. Stevens no money.

My client thanks you for your time. He has spent a career helping people, and he worked to help Mr. Stevens. Please don’t punish him based on hindsight.

B.     Regression

Regression of the log of 1 plus the juror award on the condition for all awards produced no models significant at the p = 0.05 level. Restricting to positive juror awards produces models significant at the p = 0.05 level; however, the non-normality of the errors makes the p-values suspect. As shown in Table 2, the anchor and the intercept are the only significant coefficients.
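For readers who wish to replicate this analytic step, the transformation and fit can be sketched as follows. The condition coding, award distributions, and sample size below are invented for illustration only; they are not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: dummy-coded anchor condition and skewed positive awards.
anchor_high = rng.integers(0, 2, size=200)        # 0 = low demand, 1 = high
awards = np.where(anchor_high == 1,
                  rng.lognormal(12.5, 1.0, 200),  # illustrative distributions
                  rng.lognormal(11.0, 1.0, 200))

# Restrict to positive awards and regress log(1 + award) on the condition.
positive = awards > 0
y = np.log1p(awards[positive])
X = np.column_stack([np.ones(positive.sum()), anchor_high[positive]])

# Ordinary least squares fit; lstsq minimizes ||X @ beta - y||.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, anchor_coef = beta
```

The log transformation tames the right tail of skewed award data, but, as noted above, non-normal errors still render conventional p-values suspect.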


Table 2. Log Regression on Experimental Condition, Restricted to Positive Juror Awards (N = 776)



  1. [1]. See generally Robert Cooter et al., Bargaining in the Shadow of the Law: A Testable Model of Strategic Behavior, 11 J. Legal Stud. 225, 225 (1982) (describing pretrial bargaining “as a game played in the shadow of the law”).

  2. [2]. See infra Part II.A. See generally Cass R. Sunstein et al., Punitive Damages: How Juries Decide (2002).

  3. [3]. See Gretchen B. Chapman & Brian H. Bornstein, The More You Ask for, the More You Get: Anchoring in Personal Injury Verdicts, 10 Applied Cognitive Psychol. 519, 538 (1996).

  4. [4]. See Shari Seidman Diamond et al., Juror Judgments About Liability and Damages: Sources of Variability and Ways to Increase Consistency, 48 DePaul L. Rev. 301, 318 (1998) (discussing, among other things, the New Jersey rule prohibiting plaintiffs from asking for a specific amount of damages for pain and suffering).

  5. [5]. John Malouff & Nicola S. Schutte, Shaping Juror Attitudes: Effects of Requesting Different Damage Amounts in Personal Injury Trials, 129 J. Soc. Psychol. 491, 495 (1989) (“The primary finding of the present experiment was that when more money was requested for damages... the jurors awarded more.”). See generally Chapman & Bornstein, supra note 3.

  6. [6]. Chapman & Bornstein, supra note 3, at 519 (citing Amos Tversky & Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases, 185 Science 1124, 1128–30 (1974)).

  7. [7]. Id. at 520 (discussing studies in which anchors, even when included with a variety of other information, proved powerful).

  8. [8]. See, e.g., Dale W. Broeder, The University of Chicago Jury Project, 38 Neb. L. Rev. 744, 754 (1959); Jane Goodman et al., Runaway Verdicts or Reasoned Determinations: Mock Juror Strategies in Awarding Damages, 29 Jurimetrics J., 285, 291–92 (1989); Barry Markovsky, Anchoring Justice, 51 Soc. Psychol. Q. 213, 214 (1988). See generally Chapman & Bornstein, supra note 3.

  9. [9]. Broeder, supra note 8, at 753.

  10. [10]. Id.

  11. [11]. Id. at 759 (emphasis added).

  12. [12]. Chapman & Bornstein, supra note 3, at 519.

  13. [13]. Id. at 523.

  14. [14]. Studies finding that an anchoring effect is boundless arguably conflict with more general cognitive science literature that suggests that for an anchor to be salient, it must not be so extreme as to conflict with other scale elements. See, e.g., Markovsky, supra note 8, at 214.

  15. [15]. See infra notes 35–45 and accompanying text.

  16. [16]. Shari Seidman Diamond et al., Damage Anchors on Real Juries, 8 J. Empirical Legal Stud. 148, 176–78 (2011); see also Chapman & Bornstein, supra note 3, at 526–27 (similarly finding that experimental subjects viewed plaintiffs as “more selfish” when they “ask[ed] for extremely high amounts”).

  17. [17]. Diamond et al., supra note 16, at 155.

  18. [18]. Id. at 178. Note, however, that very few of the attorneys apparently made outrageous requests. See id. at 170 (describing the generally modest approach).

  19. [19]. Id. at 161 tbl.1.

  20. [20]. Id.

  21. [21]. Id. at 165.

  22. [22]. Id.

  23. [23]. Id. at 176.

  24. [24]. Id. at 166 tbl. 2 (12.5% compared to 1.8%).

  25. [25]. Id.

  26. [26]. Id.

  27. [27]. Id. at 168.

  28. [28]. Chapman & Bornstein, supra note 3, at 527 (suggesting that anchors were effective even though jurors did not appear to find the amount requested as relevant).

  29. [29]. Diamond et al., supra note 16, at 152, 168; see also Tversky & Kahneman, supra note 6, at 1128.

  30. [30]. Tversky & Kahneman, supra note 6, at 1128.

  31. [31]. Diamond et al., supra note 16, at 173 (“Yet attention may be necessary, but not sufficient, for influence.”).

  32. [32]. See John A. DeMay, The Plaintiff’s Personal Injury Case: Its Preparation, Trial and Settlement 233 (1977).

  33. [33]. Diamond et al., supra note 16, at 168 (emphasis added).

  34. [34]. See, e.g., Malouff & Schutte, supra note 5, at 495 (noting a possible boomerang effect in one case involving a female Hispanic plaintiff). The sample size was too small to conclusively draw this conclusion (n = 1/4 of the 38).

  35. [35]. Mollie W. Marti & Roselle L. Wissler, Be Careful What You Ask for: The Effect of Anchors on Personal Injury Damages Awards, 6 J. Experimental Psychol. 91, 94 (2000).

  36. [36]. Id. at 97.

  37. [37]. Id. at 99.

  38. [38]. See Mollie Weighner Marti, Anchoring Biases and Corrective Processes in Personal Injury Damage Awards 36 (July 1999) (unpublished Ph.D. dissertation, University of Iowa) (on file with author) (“In sum, award size and variability for... the main design did not uniformly fall above or below that for the control condition.”).

  39. [39]. Id. at 25–26.

  40. [40]. Id. at 27–28.

  41. [41]. Id. at 28.

  42. [42]. Id. at 30.

  43. [43]. Id. at 40–41.

  44. [44]. Id.

  45. [45]. Id.

  46. [46]. See, e.g., Goodman et al., supra note 8, at 291 (in which juror participants were told to assume the defendant was liable); Edith Greene et al., The Effects of Injury Severity on Jury Negligence Decisions, 23 Law & Hum. Behav. 675, 678 (1999) (in which jurors were asked to determine liability but not to award damages); Malouff & Schutte, supra note 5, at 493–94 (reporting only on the participant’s damages awards); Marti & Wissler, supra note 35, at 94 (in which participants were told that liability was already determined in the plaintiff’s favor and that all damages besides pain and suffering had already been awarded).

  47. [47]. See, e.g., Greene et al., supra note 46, at 689–90 (finding that jurors allowed severity of injury to influence findings regarding liability); Roselle L. Wissler et al., The Impact of Jury Instructions on the Fusion of Liability and Compensatory Damages, 25 Law & Hum. Behav. 125, 125–39 (2001) (defining fusion as the conflation of liability with damages, or damages with liability).

  48. [48]. Ironically, instructing the jury how to find as to liability in an effort to avoid fusion effects may actually enhance them or create fusion confusion. If jurors are told to assume liability, this may serve as a proxy for their assessment of strength of case. Many studies do contain such instructions. See, e.g., Reid Hastie et al., Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445, 450–51 (1999) (noting that participants were told liability and compensatory damages of over $24,500,000 had been awarded before being asked to award punitive damages); Marti & Wissler, supra note 35, at 94 (noting that participants were told that liability was already determined in the plaintiff’s favor and that all damages besides pain and suffering had already been awarded).

  49. [49]. Thomas A. Mauet, Trial Techniques 409 (8th ed. 2010).

  50. [50]. Mark S. Sobus & Ross P. Laguzza, Ghost of Pennzoil-Texaco: Hidden Risks of Arguing Alternative Damages, 67 Def. Couns. J. 511, 511–12 (2000); see also Thomas Petzinger, Jr., Oil & Honor: The Texaco-Pennzoil Wars: Inside the $11 Billion Battle for Getty Oil 409 (1987).

  51. [51]. See Petzinger, supra note 50, at 404.

  52. [52]. See Sobus & Laguzza, supra note 50, at 511–12.

  53. [53]. Id. at 516–17; see also Mauet, supra note 49, at 409; Tina L. Decker, Effects of Counter-Anchoring Damages During Closing Argument 49–50 (2006) (unpublished Ph.D. dissertation, University of Kansas) (on file with author).

  54. [54]. Neil Vidmar, Medical Malpractice and the American Jury: Confronting the Myths About Jury Incompetence, Deep Pockets, and Outrageous Damage Awards 197 (1995) (based on interviews, Vidmar reports that defense attorneys “were reluctant to dispute the amount of damages or to present expert evidence on damages on the theory that to do so would cause the jury to assume that the doctors were liable”); Diamond et al., supra note 16, at 162 (noting that defendants fear “conceding liability” and observing trials where defendants offered a rebuttal amount to plaintiff’s request in 18 of 30 cases involving past special damages).

  55. [55]. See Mauet, supra note 49, at 411; Decker, supra note 53, at 1.

  56. [56]. Decker, supra note 53, at 28–29.

  57. [57]. Id. at 33.

  58. [58]. Id.

  59. [59]. See id. at 37.

  60. [60]. Id. at 46. The percentages do change, although the differences were not statistically significant. Id. at 33–34.

  61. [61]. Leslie Ellis, Don’t Find My Client Liable, But if You Do... : Defense Recommendations, Liability Verdicts, and General Damage Awards 34 (2002) (unpublished doctoral dissertation, University of Illinois at Chicago) (on file with author).

  62. [62]. Id. at 39, 44.

  63. [63]. Id. at 58–60.

  64. [64]. Id. at 105.

  65. [65]. Id.

  66. [66]. Id. at 113–14.

  67. [67]. The core trial footage, including jury instructions and two expert witnesses, was taken from a prior experiment. See generally Christopher T. Robertson & David V. Yokum, The Effect of Blinded Experts on Juror Verdicts, 9 J. Empirical Legal Stud. 765 (2012). We modified the trial by recording new opening and closing arguments by both sides. We performed a pilot experiment using a 2 × 2 between-subjects experimental design. Based on the results of the pilot experiment, we then created two more sets of closing arguments for the primary experiment reported herein.

  68. [68]. Id. at 779–80.

  69. [69]. The attacking variation was not in the original pilot. Because we detected apparent credibility effects in the pilot experiment, we sought to determine if we could enhance those effects by explicitly attacking the $5 million demand in our primary experiment.

  70. [70]. Following the pattern jury instructions for Arizona, the judge stated:

    If you find Dr. Dennis liable to Mr. Stevens, you must then decide the full amount of money that will reasonably and fairly compensate Mr. Stevens. In this case, the amount of medical expenses and other damages have been stipulated, but you must decide the amount to compensate Mr. Stevens for the pain, discomfort, suffering, disability, disfigurement, and anxiety already experienced, and reasonably probable to be experienced in the future as a result of the injury.

  71. [71]. One thousand ninety-three people started the survey. The 30% attrition rate reflects subjects who dropped out of the survey voluntarily or were ejected from the survey for skipping past a video before enough time elapsed to possibly watch its entirety (thereby indicating task non-compliance). At least in terms of the collected demographics, the subset of persons dropping from the survey was statistically indistinguishable from the final study population.

  72. [72]. Specifically, the sample demographics are as follows: 59% female; mean and median age of 36 and 33, respectively; 78% White, 12% African American, 5% Asian, 1% American Indian, and the rest other; 45% with Bachelor’s degree or higher; and 53% lean toward, prefer, or strongly prefer the Democrats.

  73. [73]. Neil Vidmar, The Performance of the American Civil Jury: An Empirical Perspective, 40 Ariz. L. Rev. 849, 885 (1998).

  74. [74]. Diamond et al., supra note 4, at 315–16.

  75. [75]. See id. at 315–16, 315 nn.34–35.

  76. [76]. Shari Seidman Diamond & Jonathan D. Casper, Blindfolding the Jury to Verdict Consequences: Damages, Experts, and the Civil Jury, 26 Law & Soc’y Rev. 513, 546 (1992).

  77. [77]. See David Schkade et al., Deliberating About Dollars: The Severity Shift, 100 Colum. L. Rev. 1139, 1152–53 (2000) (referring “to the median predeliberation judgment of the individuals in [their experimental mock] jury as the verdict of the statistical jury,” and finding that “the median verdicts of deliberating and statistical juries produce very similar rankings of the cases” although “[d]eliberating juries produce much higher awards, especially but not only at the high end” (alternation in original)); see also Daniel Kahneman et al., Shared Outrage and Erratic Awards: The Psychology of Punitive Damages, 16 J. Risk & Uncertainty 49, 72–75 (1998).

  78. [78]. See Vidmar, supra note 54, at 226; Schkade et al., supra note 77, at 1163.

  79. [79]. See S. Femi Sonaike, The Influence of Jury Deliberation on Juror Perception of Trial, Credibility, and Damage Awards, 1978 BYU L. Rev. 889, 902 (treating defense verdicts as zeros); see also Diamond & Casper, supra note 76, at 546 n.37 (citing Sonaike, supra, and apparently doing the same).

  80. [80]. We used the simulation method described in the body because it allowed us to perform hypothesis testing at our given level of statistical power, and because our model creates as many simulated juries as the number of jurors we actually observed (similar to other transformations commonly used, such as a log transformation). For each experimental condition, our estimate is a sample of the possible juries that could be created by combining jurors; it does not incorporate every possible combination of jurors. Alternatively, it is also possible to perform an exact calculation of the expected value of the case for any given condition, taking into account every possible combination of jurors. We performed that calculation with the assistance of research statistician Cathy Durso, which yielded estimates that were very similar to our simulation findings, as we expected. However, these estimates are not as useful for hypothesis testing, because they do not incorporate the limits of our statistical power. Thus, in the body we report the simulated results and use them for hypothesis testing.

  81. [81]. Because our results yield damage awards with observed zero awards (i.e., defense verdict), we analyze our data in two parts—damages and liability. See Theodore Eisenberg et al., Addressing the Zeros Problem: Regression Models for Outcomes with a Large Proportion of Zeros, with an Application to Trial Outcomes, 12 J. Empirical Legal Stud. 161, 169 (2015) (applying a two-part model).

  82. [82]. These calculations used a one-way analysis of the means for each defense condition (not assuming equal variances). We also performed pairwise comparisons, but found no such effects. In the $250,000 condition, we detected no statistical differences between ignoring and attacking the damages demand (jurors: t = –0.9976 and p = 0.3168; juries: t = –1.6676 and p = 0.1056), no significant differences between ignoring and countering (jurors: t = –1.0946 and p = 0.2540; juries: t = 0.4636 and p = 0.6713), and no significant differences between countering and attacking (jurors: t = –0.9078 and p = 0.4860; juries: t = 1.9823 and p = 0.0726). When the plaintiff demanded $5 million, there were no significant differences between ignoring and countering (jurors: t = 0.2070 and p = 0.8289; juries: t = 1.253 and p = 0.2050), or between ignoring and attacking (jurors: t = 0.4574 and p = 0.6469; juries: t = –0.3135 and p = 0.7485).

  83. [83]. When the plaintiff’s demand was $250,000, its win rate was 41.5%; when the plaintiff’s demand increased to $5 million, the win rate was statistically indistinguishable at 41.0% (χ² = 0.007, p = 0.935).

  84. [84]. Overall, the plaintiff’s chance of prevailing was 38.5% when the defendant countered, 41.2% when the defendant ignored, and 44.1% when the defendant attacked (χ² = 1.682; p = 0.431).
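The win-rate comparisons in footnotes 83 and 84 rest on Pearson's chi-square statistic computed over win/loss counts per condition. A stdlib-only sketch, using invented counts for illustration:

```python
def chi_square(table):
    """Pearson chi-square for a contingency table.

    `table` is a list of rows (e.g., one row per defense condition),
    each row a list of counts (e.g., [plaintiff wins, defense wins]).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical win/loss counts for counter, ignore, and attack conditions.
counts = [[50, 80], [53, 76], [57, 72]]
print(round(chi_square(counts), 3))
```

The resulting statistic would then be compared against the chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to obtain a p-value.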

  85. [85]. The tables report plaintiff win rates, but this discussion describes defendant’s win rates.

  86. [86]. Awards over $5 million were transformed to $5 million.

  87. [87]. The calculations were performed using a one-way analysis of the means for each defense condition (not assuming equal variances).

  88. [88]. There were no detectable differences between attacking the $5 million demand and countering it (jurors: t = –0.1867, p = 0.8471; juries: t = 1.1139, p = 0.2642). Similarly, we found no detectable differences between attacking the $5 million demand and ignoring it (jurors: t = –0.2662, p = 0.7803; juries: t = 0.2921, p = 0.7735). Finally, we found no detectable differences between ignoring a $5 million demand and countering it (jurors: t = –0.0710, p = 0.9343; juries: t = –0.6612, p = 0.5101).

  89. [89]. N.J. Ct. R. 1:7-1(b) (parties may suggest “that unliquidated damages be calculated on a time-unit basis without reference to a specific sum”). We understand that plaintiff’s attorneys still try to use anchoring to influence the damages award by discussing how long the plaintiff will suffer (e.g., suffering for 20 million minutes or in other words, approximately 38 years).

  90. [90]. Ellis, supra note 61, at 101.

  91. [91]. See supra Part II.B for a discussion of a possible boomerang effect.

  92. [92]. Gabriele Paolacci et al., Running Experiments on Amazon Mechanical Turk, 5 Judgment & Decision Making 411, 411 (2010).

  93. [93]. Adam J. Berinsky et al., Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk, 20 Pol. Analysis 351, 361–65 (2012).

  94. [94]. Some states have 12 jurors, others have fewer. Roughly two-thirds require a supermajority, while one-third still require unanimity. Saul Levmore, Conjunction and Aggregation, 99 Mich. L. Rev. 723, 740 n.33 (2001). Meanwhile, in federal court juries must consist of at least six jurors and, unless agreed to by the parties, all six of those jurors must agree in order to return a verdict. Fed. R. Civ. P. 48(a)–(b).

  95. [95]. The only difference in Plaintiff’s closing was whether condition A or B ($250,000 or $5 million) was stated as the damage demand. To avoid editing issues, we recorded each scenario as a full read, rather than simply splicing in the different amounts.

  96. [96]. The only difference in Defendant’s closing was the use of condition A, B, or C (ignore, counter, or attack). To avoid editing issues, we recorded each scenario as a full read, rather than simply splicing in the different responses.

  97. [97]. Because condition C (attack) includes a reference to the Plaintiff’s demand and suggests it is unreasonable, we tailored the attack to either $250,000 or $5 million. As a result, there were actually four recordings of the defendant’s response, but condition C remained a single experimental condition that varied only in its reference to the plaintiff’s actual demand amount.


John Campbell and Bernard Chao share the Hughes-Ruud Research Professorship at the University of Denver. Professor Campbell is a Lawyering Process Professor at the Sturm College of Law, University of Denver.


John Campbell and Bernard Chao share the Hughes-Ruud Research Professorship at the University of Denver. Professor Chao is an associate professor at the Sturm College of Law, University of Denver.


Professor Robertson is an associate professor at James E. Rogers College of Law, University of Arizona.


David Yokum graduated from the James E. Rogers College of Law and has a Ph.D. in Psychology from the University of Arizona. Yokum is also a Fellow on the White House Social & Behavioral Sciences Team.

We thank Lawrence Friedman, Bryant Garth, Laura Beth Nielsen, Stewart Macaulay, and Joyce Sterling, who sit on the Hughes Research and Development Committee at the University of Denver, which provided both guidance and funding. Thanks also to Jim Greiner and David Schwartz for their thoughtful comments on earlier drafts of this paper. This paper was accepted in the peer reviewed process for the Conference on Empirical Legal Studies, where it also benefited from anonymous reviewer comments and from the discussants.