Friday, July 7, 2023

No "Daubert Hearing" on Latent Fingerprint Matching in US v. Ware

Last month, in United States v. Ware, 69 F.4th 830 (11th Cir. 2023), the U.S. Court of Appeals for the Eleventh Circuit "carefully review[ed]" the convictions of Dravion Sanchez Ware arising out of a month-long crime spree near Atlanta in 2017. He was found to have participated "in robbing ... three spas, four massage parlors, a nail salon, and a restaurant." The opinion recounts the nine brutal robberies in luxuriant detail. It also discusses Mr. Ware's argument that the district court erred "by not holding a formal Daubert hearing before admitting expert fingerprint evidence."

In a word, the Eleventh Circuit rejected the argument as "unpersuasive." No surprise there. More surprising is the opinion's incoherent discussion of the 2009 NRC report on forensic science and the 2016 PCAST follow-up report. \1/ On the one hand, we are told that "[t]he science could not possibly have been so unreliable as to be inadmissible." On the other hand, "[t]he District Court here could have held a Daubert hearing to assess the relatively new reports Ware presented." So which is it? If a type of evidence cannot possibly be excluded as scientifically invalid under Daubert, how can it be proper to hold a pretrial testimonial hearing on admissibility under Daubert? And, was the court of appeals correct in concluding that the two reports do not impeach, to the point of requiring a hearing, the traditional practice of admitting latent fingerprint comparisons?

During Ware's trial, an unnamed "crime lab scientist with the Georgia Bureau of Investigation Division of Forensic Sciences" "outlined the science behind fingerprints themselves, including their uniqueness" and explained "the four-step process the lab follows ... : 'Analysis, Comparison, Evaluation, and Verification,' or ACEV." The last step "involves another examiner completing the whole process a second time." The opinion does not indicate whether the verifying analyst is blinded to the main examiner's finding. Interestingly as well (think Confrontation Clause), the opinion implies that the testifying expert in Ware was not the main examiner. "[S]he was the verifying examiner," and "she testified that the lab concluded the latent print ... led to an identification conclusion matched to Ware's left middle finger." After that,

Defense counsel specifically asked about the PCAST report [and] vigorously cross-examined ... discussing the possibility of a latent fingerprint not being usable ... , the subjectiveness of every step ... , and the bias that may creep into the verification process ... . The expert and defense counsel discussed ... the potential for false positives and negatives. On cross, the defense also attacked the expert's claim that she did not know of the Georgia Bureau of Investigation ever misidentifying someone with a fingerprint comparison, and that she did not know the rate at which a verifier disagrees with the original assessment.

To preclude such testimony about his unique fingerprint on an item stolen in one of the robberies, Ware had moved before the trial for an order excluding fingerprint-comparison evidence. Of course, such a ruling would have been extraordinary, but the defense contended that the 2009 and the 2016 reports required nothing less \2/ and asked for a full-fledged pretrial hearing on the matter. In response, "[t]he District Court conditionally denied the motion ... unless Ware's counsel could produce before trial a case from this Court or a district court in this Circuit that favors excluding fingerprint expert evidence under Daubert." \3/

The court of appeals correctly observed that "[f]ingerprint comparison has long been accepted as a field worthy of expert opinions in this Circuit, as well as in almost every one of our sister circuits." The only problem is that all the opinions cited to show this solid wall of precedent predate both the NRC and PCAST reports. A more complete analysis would have to establish that the scientists' reviews of friction-ridge pattern matching do not raise enough doubt to expect that a hearing would let the defense breach the wall.

Along these lines, the court of appeals wrote that

The [District] Court considered the reports and arguments presented and found that fingerprint evidence was reliable enough as a general matter to be presented to the jury. Many of the critiques of fingerprint evidence found in the PCAST report go to the weight that ought to be given fingerprint analysis, not to the legitimacy of the practice as a whole. Appellant Br. at 25 (“The studies collectively demonstrate that many examiners can, under some circumstances, produce correct answers at some level of accuracy.” (emphasis in original)).

This quotation from the PCAST report is faint praise. Although the court of appeals was sure that "Ware's contrary authority even says that fingerprint evidence can be reliable," the depth of its knowledge about the PCAST (and the earlier NRC committee) reports is open to question. The circuit court had trouble keeping track of the names of the groups. It transformed the National Research Council (the operating arm of the National Academies of Sciences, Engineering, and Medicine) into a "United States National Resource Council" (69 F.4th at 840) and then imagined an "NCAST report[]" (id. at 848). \4/

Deeper inspection of "Ware's contrary authority" is in order. The 2009 NRC committee report quoted with approval the searing conclusion of Haber & Haber that “[w]e have reviewed available scientific evidence of the validity of the ACE-V method and found none.” It reiterated the Habers' extreme claim that because "the standards upon which the method’s conclusions rest have not been specified quantitatively ... the validity of the ACE-V method cannot be tested." To be sure, the committee agreed that fingerprint examiners had something going for them. It wrote that "more research is needed regarding the discriminating value of the various ridge formations [to] provide examiners with a solid basis for the intuitive knowledge they have gained through experience." But does "intuitive knowledge" qualify as "scientific knowledge" under Daubert? Is a suggestion that friction-ridge comparisons need a more solid basis equal to a statement that the comparisons are "reliable" within the meaning of that opinion? The response to "NCAST" was underwhelming.

But research has progressed since 2009. The second "contrary authority," the PCAST report, reviewed this research. At first glance, this report supports the court's conclusion that no hearing was necessary. It assures courts that "latent fingerprint analysis is a foundationally valid subjective methodology." In doing so, it rejects the NRC committee's notion that the absence of quantitative match rules precludes testing whether examiners can reach valid conclusions. It discusses two so-called black-box studies of the work of examiners operating in the "intuitive" mode. Yet, the Ware court does not cite or quote the boxed and highlighted finding (Number 5).

Perhaps the omission reflects the fact that the PCAST finding is so guarded. PCAST added that "additional black-box studies are needed to clarify the reliability of the method," undercutting the initial assurance, which was "[b]ased largely on two ... studies." Furthermore, according to PCAST, to be "scientifically valid," latent-print identifications must be accompanied by admissions that "false positive rates" could be substantial (as high as 1 in 18). \5/

The Ware court transforms all of this into a blanket and bland assertion that the report establishes reliability even though it "may cast doubt on the error rate of fingerprint analysis and comparison." The latter concern, it says, goes not to admissibility, but only to "weight" or "credibility." 

Can it really be this simple? Are not "error rates" an explicit factor affecting admissibility (as well as weight) under Daubert? Certainly, the Eleventh Circuit's view that the problems with fingerprint comparisons articulated in the two scientific reports are not profound enough to force a wave of pretrial hearings is defensible, but the court's explanation of its position in Ware is sketchy.

At bottom, the problem with the fingerprint evidence introduced against Ware (as best as one can tell from the opinion) is not that it is speculative or valueless. The difficulty is that the judgments are presented as if they were scientific truths. The Ware court is satisfied because "Defense counsel put the Government's expert through his paces during cross-examination, and counsel specifically asked the expert about the findings in the PCAST report." But would it be better to moderate the presentations to avoid overclaiming in the first place? 

The impending amendment to Rule 702 of the Federal Rules of Evidence is supposed to encourage this kind of "gatekeeping." Defense counsel might be more successful in constraining overreaching experts than in excluding them altogether. That too should be part of the "considerable leeway" granted to district courts seeking to reconcile expert testimony with modern scientific knowledge.

Notes

  1. President's Council of Advisors on Sci. & Tech., Exec. Office of the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (2016), [https://perma.cc/R76Y-7VU]
  2. In Ware:
    The pretrial motion to exclude the fingerprint identification "relied on the 2009 United States National Resource Council ("NRC") report and subsequent 2016 President's Council of Advisors on Science and Technology ("PCAST") report," which supposedly revealed a dearth of "proper scientific studies of fingerprint comparison evidence" and claimed that "there is no scientific basis for concluding a fingerprint was left by a specific person," positing that "because fingerprint analysis involves individual human judgement, the resulting [fingerprint comparison] conclusion can be influenced by cognitive bias."
  3. Why insist on a pre-existing determination in one particular geographic region that scientific validity is lacking in order to grant a hearing on whether scientific validity is present? Is the "science" underlying fingerprint comparisons in Georgia and the other southeastern states that comprise the Eleventh Circuit different from that in the rest of the country?
  4. OK, these peccadillos are not substantive, but one would have thought that three circuit court judges, after "carefully reviewing the record," could have gotten the names and acronyms straight. Senior Judge Gerald Tjoflat wrote the panel opinion. At one point, he was a serious contender for the Supreme Court seat filled by Justice Anthony Kennedy. After Judge Tjoflat announced that he would retire to senior status on the bench in 2019, President Donald Trump nominated Robert J. Luck to the court. In addition to Judge Luck, Judge Kevin C. Newsom, a 2017 appointee of President Trump, was on the panel. Judicial politics being what it is, over 30 senators voted against the confirmation of Judges Newsom and Luck.
  5. PCAST suggested that if a court agreed that what it called "foundational validity" were present, then to achieve "validity as applied" some very specific statements about "error rates" would be required:
    Overall, it would be appropriate to inform jurors that (1) only two properly designed studies of the accuracy of latent fingerprint analysis have been conducted and (2) these studies found false positive rates that could be as high as 1 in 306 in one study and 1 in 18 in the other study. This would appropriately inform jurors that errors occur at detectable frequencies, allowing them to weigh the probative value of the evidence.
    The studies actually found conditional false-positive proportions of 6/3628 (0.17%) and 42/995 (4.2%, or 7/960 = 0.73% if one discards "clerical errors") (P. 98, tbl. 1). Earlier postings discuss these FBI-Noblis and Miami-Dade police department numbers.
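    Where, then, do the "as high as 1 in 306" and "1 in 18" figures come from? They appear to be one-sided 95% (Clopper-Pearson) upper confidence bounds on the observed false-positive proportions rather than the observed proportions themselves. The short Python sketch below is offered only to illustrate that interpretation; the function name and rounding choices are mine, not PCAST's.

    ```python
    # A minimal sketch (not from the PCAST report itself): compute one-sided 95%
    # Clopper-Pearson upper confidence bounds on the false-positive proportions
    # reported for the two black-box studies discussed in note 5.
    from scipy.stats import beta

    def upper_bound(false_positives: int, comparisons: int, confidence: float = 0.95) -> float:
        """One-sided Clopper-Pearson upper bound on a binomial proportion."""
        return beta.ppf(confidence, false_positives + 1, comparisons - false_positives)

    for label, fp, n in [("FBI-Noblis", 6, 3628), ("Miami-Dade", 42, 995)]:
        print(f"{label}: observed {fp}/{n} = {fp / n:.2%}; "
              f"95% upper bound ≈ {upper_bound(fp, n):.1%}")

    # Approximate output:
    #   FBI-Noblis: observed 6/3628 = 0.17%; 95% upper bound ≈ 0.3%
    #   Miami-Dade: observed 42/995 = 4.22%; 95% upper bound ≈ 5.4%
    # For comparison, PCAST's "1 in 306" is about 0.33% and "1 in 18" is about 5.6%.
    ```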
