It showed up quietly in news feeds a couple of weeks ago: a little press release announcing a striking conclusion. "[I]n at least one out of eight cases, juries came up with the wrong decision," was the nutshell version of Northwestern statistics professor Bruce Spencer's paper, "Estimating The Accuracy of Jury Verdicts." Since then the paper has been noticed, to put it mildly.
A range of responses
It was like a jury Rorschach test. Thoughtful writers made thoughtful comments. Emma Barrett at Psychology and Crime News noted Spencer's "somewhat complex method"; Nicole Black at Sui Generis said "the method used to determine whether mistakes were made in a given trial is a bit questionable, in my humble opinion." Mark Bennett at Defending People had an extended analysis yesterday, and Scott Greenfield at Simple Justice added his thoughts today.
Jury critics criticized. "By whatever means necessary, let’s put an end to this inaccurate farce we call 'trial by jury,'" said one. "It is stories like this and Nifong, Fitzgerald and the rest that hurt our system. We need an overhaul of how juries are seated," another agreed.
And there was anger, a lot of anger, at Spencer himself. "Who made himself God in this study to declare which juries are right and which are wrong?" said one blogger. "This “study” sounds little like a study and more like a platform for advocating more liberal courtroom standards." And widely-read Patterico, calling his post "God Himself Appears In Human Form," said:
And just how does some statistics professor sitting in his office know whether the people in these cases were truly guilty or innocent? . . . I haven’t seen the study itself, but I’m calling bullshit. There is, quite simply, no way that some statistics professor sitting in his office can know the true guilt or innocence of 290 criminal defendants. That’s why we have a system, pal — because there is no way for any one self-appointed individual to be the Sole Arbiter of who’s guilty and who’s not.
Even DentalPlans.com had a post on Prof. Spencer's work.
What the paper says
Meanwhile I kept trying to read the paper, which is to be published in the July issue of the Journal of Empirical Legal Studies, but is also on Northwestern's web site here. It's about statistics, so it's rough going. (Northwestern put out a more detailed and very helpful press release the other day; when an author's own press release calls his paper "a technical report," you know you're in trouble.)
The bottom line is this. In two major prior studies, judges were asked what their personal "verdict" would have been in trials that were decided by juries. Spencer looked at both studies and found that judges and juries agreed only in around 77 percent of cases in one study and 80 percent in the other. Where they disagreed, he figured, one of them had to be wrong.
Spencer did not, as critics assumed, simply decide that the judges were right and the juries were wrong. To the contrary, Spencer stresses, "[T]he judge’s verdict is not taken as the gold standard." The whole paper is an effort to figure out via statistical analysis how accurate -- objectively (we'll come back to that) -- juries were. Spencer applied different analyses to try to eliminate three kinds of "errors" that might have skewed the disagreement rate: "survey error" because the initial data might be tainted, "identification error" because juries and judges who agreed might both be wrong, and "specification error," exactly because when juries and judges disagree we don't know which was right.
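The core of the inference is simpler than the paper's machinery suggests: on a binary verdict, when judge and jury disagree, exactly one of them must be wrong, so the disagreement rate puts a floor under their combined error rate. Here's a back-of-envelope sketch of that logic in Python -- my own illustration, not Spencer's actual analysis, which goes much further in correcting for the survey, identification, and specification errors described above:

```python
# Illustrative arithmetic only -- NOT Spencer's statistical model.
# On a binary guilty/not-guilty verdict, a judge-jury disagreement
# means exactly one of the two is wrong, so the disagreement rate
# is a lower bound on their combined error rate.

def min_combined_error(agreement_rate):
    """Floor on (jury error rate + judge error rate), given how
    often judge and jury reached the same verdict."""
    return 1.0 - agreement_rate

# The two agreement rates Spencer reports from the prior studies.
for agree in (0.77, 0.80):
    d = min_combined_error(agree)
    # If errors split evenly (a naive assumption -- Spencer does not
    # assume this), each decision-maker errs in at least d/2 of cases.
    print(f"agreement {agree:.0%}: combined error >= {d:.2f}, "
          f"even split would give >= {d / 2:.3f} each")
```

Note that an even split of the disagreements lands in the neighborhood of Spencer's headline "one in eight" figure, which is why the disagreement rates alone make the estimate plausible; the paper's contribution is doing the bounding rigorously rather than assuming the split.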
Spencer misses few chances to stress the limitations of his work. "The estimates [of jury error] are no basis for action other than future studies," he says. It's clear that all he's trying to do is find out if this kind of measurement is possible, and at that level, it's a fascinating effort. You can see why the empirical legal studies folks are drawn to what they do.
For lawyers, though, what may be more interesting than Spencer's 33-page statistical analysis is his one-paragraph discussion of what it means to say that a jury is "wrong." Correctness can be defined two ways, he explains. There's "procedural" correctness:
Jury accuracy refers to the average probability for a set of cases that the jury verdict is, in some sense, correct. “Correct” can be interpreted in a variety of ways. A “procedural viewpoint” considers a “correct” decision to be one which applies the legal standards correctly: if proof is not demonstrated to the standards prescribed by the law, the defendant should be acquitted. Thus, if a person tried for a crime truly committed the crime, but the evidence was lacking, the procedurally correct decision would be acquittal.
And then there's factual, "impartial and rational" correctness:
An alternative, “omniscient viewpoint”, holds that the correct decision is the one that would be reached by an impartial and rational observer with perfect information (including complete and correct evidence) and complete understanding of the law. If the defendant committed the crime, the correct decision is guilt, regardless of the strength of evidence. Although this definition of correct decision could be ambiguous in certain settings involving legal complexities, where even experts would disagree, in many applications it could be precise enough.
"This perspective has the advantage of being defined independently of the evidence and courtroom presentations," the paragraph concludes, "and conforms to popular notions of justice." So it's factual error he's measuring for.
Popular -- meaning jurors' -- notions of justice
Think about that statement. The decision we ask juries to make -- did the government prove the defendant guilty of the precise crime charged beyond a reasonable doubt -- does not conform to popular notions of justice.
Prof. Spencer is outside his expertise when he says this, but he's right. And if anything accounts for the wrongful convictions he's trying to measure, it may be this dissonance. In trial, we have a few days to convince jurors that their job is to decide whether the exact elements of the case, civil or criminal, have been proven to the required legal standard. But their whole lives before then have taught them to believe that trials are supposed to seek justice and truth. The formulations overlap, but they're not the same. The popular notion is, well, wrong.
In short, the relevance of Prof. Spencer's work to trial lawyers may be not in what it says about juries, but in what it says about jurors. From high-school dropouts to university professors, they come to court with a clear, and often mistaken, idea of what their task is. If your case depends on correcting that idea, you may need to work harder than you imagined.
- Prof. Spencer's study concludes that where judges and juries disagreed, judges were more likely to convict. That rings true, but don't write off judges. A 2005 study by Andrew Leipold of the University of Illinois asks "Why Are Federal Judges So Acquittal Prone?" and backs it up: "Between 1989 and 2002, the average conviction rate for federal criminal defendants was 84% in jury trials, but a mere 55% in bench trials." The difference may be that Leipold studied cases where defense counsel chose a bench trial, while in all of Spencer's cases, defense counsel chose a jury trial.
- Mark Bennett writes often about the difference between factual and legal guilt, including this the other day:
"Right and wrong" and "legal and illegal" are entirely separate concepts. "Right" and "wrong" are moral terms, not legal terms. The fact that something is illegal does not make it wrong, and the fact that it is legal does not make it right. (Flipping those two propositions, the fact that something is right does not make it legal, and the fact that it is wrong does not make it illegal.) Something that is wrong and illegal does not become right if it's decriminalized. Something that's right and legal doesn't become wrong if it's criminalized.
(Image at http://flickr.com/photos/olivepress/580930/; license details there.)