Monday, July 28, 2014

Looking Backwards: How Safe Are Fingerprint Identifications?

Yesterday, I explained why the frequency with which factors like confessions are found in cases of wrongful convictions does not measure the general prevalence of those factors. I questioned one published claim that false confessions occur in at least 25% of all cases. My argument was not that this conclusion is wrong, but rather that the studies of false convictions do not provide data that are directly applicable to estimating prevalence.

My analysis was not confined to confessions. It is based on the fact that the wrongful-conviction studies are retrospective. We take the outcome—a false conviction—and ask what evidence misled the judge or jury. This backwards look reveals the frequency of a type of evidence e given a false conviction (FC). The statistic P(e|FC) equals the prevalence of e in all cases if there is no association between false convictions and e. 1/ In general, such independence is most unlikely.
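
A small numerical sketch, using purely invented rates (none of these numbers come from any study), shows how far P(e|FC) can drift from the overall prevalence P(e) when the evidence and false convictions are associated:

```python
# Hypothetical rates, chosen only to illustrate the point in note 1.
p_e = 0.05                # e (say, a confession) occurs in 5% of all cases
p_fc_given_e = 0.02       # false-conviction rate when e is present
p_fc_given_not_e = 0.002  # false-conviction rate when e is absent

# Total probability of a false conviction, P(FC)
p_fc = p_e * p_fc_given_e + (1 - p_e) * p_fc_given_not_e

# Bayes' rule gives the share of false convictions that contain e, P(e|FC)
p_e_given_fc = p_e * p_fc_given_e / p_fc

print(f"P(e)    = {p_e:.3f}")           # 0.050, the prevalence in all cases
print(f"P(e|FC) = {p_e_given_fc:.3f}")  # about 0.345, the share among false convictions
```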

The flip side of invoking wrongful conviction statistics to conclude that false confessions are common is calling on them to show that fingerprint misidentifications are extremely rare. In United States v. Herrera, 704 F.3d 480 (7th Cir. 2013), Judge Richard Posner wrote that
Of the first 194 prisoners in the United States exonerated by DNA evidence, none had been convicted on the basis of erroneous fingerprint matches, whereas 75 percent had been convicted on the basis of mistaken eyewitness identification. 2/
For this remark, he received some flak. Northwestern University law professor Jay Koehler chastised Judge Posner for ignoring a clear case of misidentification. Koehler wrote that the court’s “claim is inaccurate. Stephan Cowans, who was the 141st person exonerated by postconviction DNA evidence, was famously convicted on the strength of an erroneous fingerprint match.” 3/ However, whether a 0 or instead a 1 belongs in the numerator is not so clear.

Judge Posner cited Greg Hampikian et al., The Genetics of Innocence: Analysis of 194 U.S. DNA Exonerations, 12 Annual Rev. of Genomics and Human Genetics 97, 106 (2011), for the view that there were no erroneous fingerprint matches. Interestingly, this paper gives a larger figure than either 0 or 1. It claims that three “cases ... involving fingerprint testimony were found to be invalid or improper.” 4/ However, none of the “invalid or improper” fingerprinting results misidentified anyone. Rather, “[i]n the 3 cases that were found to be problematic, the analyst in 1 case reported that the fingerprint was unidentifiable when in fact there was a clear print (later discovered and analyzed); in the 2 other cases, police officers who testified did not disclose the fact that there were fingerprints that excluded the exonerees.” 5/ Taking these words at face value, the court could well conclude that none of the exonerations involved false positives from fingerprint comparisons.

However, the fingerprint evidence in the Cowans case involved both concealment and an outright false identification. As Professor Koehler noted, one of the foremost scholars of false convictions, Professor Brandon Garrett of the University of Virginia School of Law, reported the Cowans case as a false positive. Garrett clearly stated that although the Boston Police fingerprint examiner “realized at some point prior to trial that Cowans was excluded, ... he concealed the fact and instead told the jury that the print matched Cowans’s.” 6/ Likewise, along with the national Innocence Project’s co-founder and co-director Peter Neufeld, Professor Garrett explained in an earlier law review article that the trial transcript showed that “Officer LeBlanc misrepresented to the jury that the latent print matched Cowans’s.” 7/ Thus, the Innocence Project serves up 1.7% as the figure for “improper” fingerprint evidence in the first 300 exonerations.

This may seem like much ado about almost nothing. One problem case in a small sample is not significantly different from none. But there is a legal issue lurking here. To ascertain the more appropriate figure we need to specify the purpose of the inquiry. Do we want to estimate the prevalence of all kinds of improper behavior—including perjury (or at least knowing falsehoods uttered or implied) by fingerprint examiners? If so, the Hampikian or Koehler numbers are the candidates for further analysis.

But Judge Posner was responding to a Daubert challenge to fingerprinting. The question before the Herrera court was whether latent fingerprint examiners can provide valid, seemingly scientific, testimony—not whether they can lie or conceal evidence. The rate of unintentional misidentifications therefore is the relevant one, and that rate seems closer to zero (in the exonerations to date) than to 1.7%. 8/

So Judge Posner is not clearly wrong in speaking of zero errors. But what can we legitimately conclude from his observation that "[o]f the first 194 prisoners in the United States exonerated by DNA evidence, none had been convicted on the basis of erroneous fingerprint matches, whereas 75 percent had been convicted on the basis of mistaken eyewitness identification"? Does this comparison prove that latent print examiners are more accurate than eyewitnesses?

Not necessarily. In rape cases, where DNA exonerations are concentrated (because DNA for postconviction testing is more likely to be available), there are more instances of eyewitness identifications than of fingerprint identifications. Even if the probability of a false positive identification is the same for fingerprint examiners as for eyewitnesses, there are fewer opportunities for latent print misidentifications to occur. Consequently, the set of false rape convictions will be disproportionately populated with eyewitness errors. The upshot of this base rate effect is that the relative frequency of the errors with different types of evidence in a sample of wrongful convictions may not reflect the relative accuracy of each type of evidence.
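
A back-of-the-envelope calculation, again with invented numbers, illustrates the base rate effect: even with identical false-positive probabilities, the evidence type that appears more often will dominate the pool of errors.

```python
# Invented numbers: equal error rates, unequal base rates.
error_rate = 0.01          # assume the same false-positive probability for both
eyewitness_cases = 900     # prosecutions resting on an eyewitness identification
fingerprint_cases = 100    # prosecutions resting on a latent-print identification

eyewitness_errors = error_rate * eyewitness_cases    # 9 expected misidentifications
fingerprint_errors = error_rate * fingerprint_cases  # 1 expected misidentification
total_errors = eyewitness_errors + fingerprint_errors

print(f"Eyewitness share of errors:  {eyewitness_errors / total_errors:.0%}")   # 90%
print(f"Fingerprint share of errors: {fingerprint_errors / total_errors:.0%}")  # 10%
```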

Nonetheless, we still have to ask why it is that no (or almost no) cases of unintentional false positives have emerged in the wrongful-conviction cases. Does not this absence of evidence of error prove that errors are absent? Koehler’s answer is that
The fact that few of the DNA exonerations cases overturned verdicts based on erroneous fingerprint matches says virtually nothing about the accuracy of fingerprint analysis precisely because cases involving fingerprint matches are rarely selected for postconviction DNA analyses. By this flawed logic, one might also infer that polygraph errors are “very rare” because none of the DNA exoneration cases overturned erroneous polygraph testimony. 9/
But gathering latent prints is more common than polygraphing defendants, and Koehler does not document his assertion that cases with fingerprint matches are much more rarely the subject of postconviction DNA testing than are cases with other kinds of evidence. Traditionally, it may have been harder to obtain DNA testing when a reported fingerprint match indicated guilt, but postconviction DNA testing has become more widely available. Indeed, Virginia has pursued a test-them-all approach in convictions (with available DNA) for sexual assaults, homicides, and cases of non-negligent manslaughter from 1973 to 1987. 10/ Nevertheless, a selection effect that creates a bias against the inclusion of reported fingerprint matches in the sample of known false verdicts cannot be dismissed out of hand. Certainly, Virginia’s comprehensive testing is exceptional.

Even so, pointing to a likely selection effect is not the same as assessing its impact. Selecting against fingerprinting cases reduces the value of P(FV|FL), the proportion of false latent print matches that end in detected false verdicts. At the same time, a reported latent print match is highly persuasive evidence, which pushes P(FV|FL) upward. If Koehler’s selection effect is dominant, we might try out a value such as P(FV|FL) = 0.04. That is, we assume that only 4% of all cases with false latent print matches culminate in detected false convictions. How large a fraction of false matches (out of all declared matches) could be reconciled with the observation that no more than 1% or so of the false convictions established by DNA testing involved an arguable latent fingerprint false positive error?

As explained yesterday, this will depend on other variables, some of which are interrelated. Consider 1,000 cases in which police recover and examine latent prints suitable for comparison in 100 (10%) of them. Suppose that 15 of these examinations (15%) produce false matches, and that (as proposed above) only 4% of these false-match cases terminate in convictions later upended by DNA evidence. The result is about 1 false conviction. Now consider the other 900 cases with no fingerprint evidence. If, say, 80% of these cases end in convictions of which 10% are false, 72 other false convictions will accrue. Upon examining the 73 false-conviction cases, one would find a false fingerprint match present in only 1/73 (about 1%) of them. Yet, a full 15% of all the fingerprint matches were (by hypothesis) false positives.
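
For readers who want to check the arithmetic, here is the same hypothetical worked through in a few lines of Python. Every input is assumed for illustration, exactly as in the text.

```python
# Hypothetical numbers from the paragraph above; nothing here is empirical.
total_cases = 1_000
print_cases = 100        # 10% of cases have usable latent prints
false_matches = 15       # 15% of those examinations yield false matches
p_fv_given_fl = 0.04     # P(FV|FL): 4% end as detected false convictions

fc_with_prints = false_matches * p_fv_given_fl        # 0.6, i.e., about 1 case

no_print_cases = total_cases - print_cases            # 900 cases without prints
convictions = 0.80 * no_print_cases                   # 720 convictions
fc_without_prints = 0.10 * convictions                # 72 false convictions

total_fc = round(fc_with_prints) + fc_without_prints  # about 73 false convictions
print(f"Fingerprint share of detected false convictions: "
      f"{round(fc_with_prints) / total_fc:.1%}")      # about 1.4%
print(f"False-match rate among latent print examinations: "
      f"{false_matches / print_cases:.0%}")           # 15%
```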

Now, I am not contending that any of these hypothetical numbers is realistic. But they do show how a high rate of false fingerprint identification can occur in general casework along with a low rate in the known DNA-based exonerations. Better evidence of the general validity of latent fingerprint analysis than the figures from exonerations should be—and is—available.

Notes
  1. By definition, P(e|FC) = P(e & FC) / P(FC). If e and FC are independent, then P(e & FC) = P(e) P(FC), so P(e|FC) = P(e) P(FC) / P(FC) = P(e).
  2. Id. at 487. 
  3. Jonathan J. Koehler, Forensic Fallacies and a Famous Judge, 54 Jurimetrics J. 211, 217 (2014) (note omitted).
  4. Greg Hampikian et al., The Genetics of Innocence: Analysis of 194 U.S. DNA Exonerations, 12 Annual Rev. of Genomics and Human Genetics 97, 106 (2011).
  5. Id.
  6. Brandon L. Garrett, Convicting the Innocent: Where Criminal Prosecutions Go Wrong 107 (2011).
  7. Brandon L. Garrett & Peter J. Neufeld, Invalid Forensic Science Testimony and Wrongful Convictions, 95 Va. L. Rev. 1, 74 (2009).
  8. I say “closer” because it appears that Officer LeBlanc first reported that Cowans’ prints were on a mug at the site of the murder for which he was convicted. According to the Innocence Project, “Cowans' fingerprints were actually compared to themselves and not to the fingerprint on the evidence.” Innocence Project, Wrongful Convictions Involving Unvalidated or Improper Forensic Science that Were Later Overturned through DNA Testing. Independent auditors concluded that LeBlanc knew of his mistake before trial but tried to conceal it. Garrett & Neufeld, supra note 7, at 73–74. Perhaps we should score an initially mistaken analysis that an examiner knows is mistaken (and that would not produce false testimony from an honest analyst and that would be caught by the simple safeguard of blind verification) as half a misidentification?
  9. Koehler, supra note 3, at 217. 
  10. John Roman et al., Urban Institute, Post-conviction DNA Testing and Wrongful Conviction 11–12 (2012).
Related Post: Looking Backwards: How Prevalent Are False Confessions?, July 27, 2014
