Thursday, March 21, 2019

Admitting Palm Print Evidence in United States v. Cantoni

A second U.S. District Court judge in the Eastern District of New York has ruled that latent fingerprint analysts may testify to categorical source attributions despite concerns voiced in prominent reviews of the field. 1/ Given the research demonstrating the ability of latent fingerprint examiners to associate prints with their sources, that is no surprise. More problematically, the case suggests that defense experts should not be permitted to testify about error rates in a forensic method unless they "conclude that [the analyst’s] conclusions were erroneous or that the process used was fundamentally unreliable."

In United States v. Cantoni, No. 18-cr-562 (ENV), 2019 WL 1259630 (E.D.N.Y. Mar. 19, 2019), the government was prepared to call three examiners from the New York City Police Department Latent Print Section to testify that a palm print found on the demand note from a bank robbery is Cantoni’s. Cantoni moved before trial for an order excluding this source conclusion or, alternatively, directing the examiners to include cautions about the risk of a false identification. In what the court called "throwing incense in Caesar’s face," Cantoni apparently relied on a 2017 report from a committee established by the AAAS, 2/ the 2016 report of the President’s Council of Advisors on Science and Technology (PCAST), 3/ a 2012 report of a committee formed by the National Institute of Standards and Technology (NIST), 4/ and a 2006 report of the Inspector General of the Department of Justice. 5/ These did not get him very far.

Conclusions Admissible

In response to the demand for outright exclusion, District Judge Eric N. Vitaliano pointed out that the PCAST report acknowledged the validity of latent print analysis. The studies that impressed PCAST tested examiners’ ability to draw categorical conclusions about fingerprints, but presumably those findings can be extrapolated to palm prints.

Cantoni tried to sidestep this evaluation by arguing that the NYPD examiners did not follow all the recommendations in the PCAST report. These were said to include testing examiners for minimal proficiency, disclosing the order in which the prints were examined and any extraneous facts that could have influenced the conclusions, documenting the comparison, and verifying that the latent print is comparable in quality to the prints used in validation studies.

The court noted that several of these matters, such as the fact that the examiners underwent proficiency testing, were not at issue in the case and that any remaining deficiencies did not make the process “so fundamentally unreliable as to preclude the testimony of the experts.” The court wrote that considering “Daubert’s liberal standard for the admission of expert testimony,” concerns about cognitive bias that underlay the PCAST procedures “are fodder for cross-examination rather than grounds to exclude the latent print evidence entirely.”

Limitations on the Testimony

Although the court did not exclude the examiners’ testimony in toto, it agreed to ensure that the testimony not include assertions “that their conclusion is certain, that latent print analysis has a zero error rate, or that their analysis could exclude all other persons who might have left the print.” The government had no problem with this constraint. It acknowledged that “[t]he language and claims that are of concern to defense counsel are disfavored in the latent print discipline.” One might think that analysts would not testify that way anymore, 6/ but some latent print examiners continue to offer conclusions in the traditional manner. 7/

Defendant’s remaining efforts to shape the testimony on direct examination were less successful. He wanted “the government [to] acknowledge, through the examiners or by stipulation, that studies have found the [false positive] error rate to be as high as 1 in 18 or 1 in 306.”

These numbers do not fairly or fully represent the findings of the validation studies from which they are drawn. The studies observed smaller false-positive rates; the quoted figures are inflated because they are the upper limits of one-sided 95% confidence intervals above the observed false-positive proportions. That a small study (one that lacks statistical power) cannot rule out a larger error rate is not a finding that the error rate is that large.
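To see where such figures come from, here is a minimal sketch in Python of the standard Clopper-Pearson computation, using the counts PCAST reported from the FBI-Noblis experiment (6 false identifications among 3,628 nonmated comparisons). The code and the choice of method are a reconstruction for illustration, not anything stated in the opinion:

    # One-sided Clopper-Pearson upper confidence bound on a false-positive rate.
    # Illustrative counts: the 6 false identifications among 3,628 nonmated
    # comparisons that PCAST drew from the FBI-Noblis study.
    from scipy.stats import beta

    def upper_bound(errors, trials, level=0.95):
        """Upper limit of a one-sided confidence interval on an error rate."""
        return beta.ppf(level, errors + 1, trials - errors)

    errors, trials = 6, 3628
    observed = errors / trials            # about 0.0017, i.e., 1 in 605
    upper = upper_bound(errors, trials)   # about 0.0033, i.e., 1 in 306

    print(f"observed rate: 1 in {1 / observed:.0f}")
    print(f"95% upper bound: 1 in {1 / upper:.0f}")

The observed proportion reflects what the examiners actually did; the upper bound reflects only what a study of this size cannot rule out. Conflating the two turns a measure of statistical uncertainty into a purported error rate.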

Perhaps the government pointed out this misuse of statistics, but the opinion describes a different rejoinder: “The government suggests that the studies Cantoni cites are inapposite because they involved the Federal Bureau of Investigation and the Miami-Dade Police Department.”

That response is disappointing. The most compelling study is the FBI-Noblis test of the accuracy of latent fingerprint examiners generally. It consisted of an experiment in which the researchers — but not the analysts — knew which pairs of prints were from the same finger and which were from fingers of different individuals. The test subjects were hardly limited to FBI examiners. To the contrary,
In order to get a broad cross-section of the latent print examiner community, participation was open to practicing latent print examiners from across the fingerprint community. A total of 169 latent print examiners participated; most were volunteers, while the others were encouraged or required to participate by their employers. Participants were diverse with respect to organization, training history, and other factors. 8/
The government may have meant to argue that although such studies show that error rates are small among representative examiners, an average figure of that kind is not a precise measure of the probability of error in a specific case. That much is true, but it does not mean that the error rates in the experiments are irrelevant or useless to a proper understanding of the risk of error in New York City.

The court did not reach such nuances. Judge Vitaliano wrote that “[c]ross-examination is the appropriate means to elicit weaknesses in direct testimony” and that “Cantoni may explore the error rates generated by the studies on cross-examination.” Yet the court refused to let the defense inform the jury of error rates and other limitations on latent print examinations through an expert of its choice, the social scientist Simon Cole, who has researched the history of fingerprinting and written extensively about it.

Keeping Cole Away

The court provided two reasons for excluding Cole’s testimony. First, “this is a matter that may be explored on cross-examination and does not require an expert to offer an opinion.” However, the opportunity to cross-examine one expert normally does not preclude a party from calling its own expert. In a toxic tort case in which a manufacturer denies that its product is toxic, for example, the manufacturer ordinarily would be able not only to cross-examine plaintiff’s physicians, toxicologists or epidemiologists, but also to present its own comparable experts.

The difference, one might argue, is that “Dr. Cole’s opinions appear to be directed at NYPD’s methods for latent print analysis in general rather than specific issues in this case. ... [H]e does not thereby conclude that NYPD’s conclusions were erroneous or that the process used was fundamentally unreliable.” However, no rule of law requires an expert to give a final opinion. Simply providing background information — such as the accuracy of a medical diagnostic test — may assist the jury in a tort case in which plaintiff’s testifying physician relied on the diagnostic test in forming his or her opinion.

Second, the court suggested that the rule against hearsay makes the educational expert’s testimony inadmissible. Judge Vitaliano wrote that “to the extent that Dr. Cole merely plans to convey the contents of studies he did not conduct, he is poised to act as a conduit for hearsay, which is a prohibited role for an expert.” In the toxic tort case, however, the defendant’s epidemiologist could report the findings of other researchers. Federal Rule of Evidence 703 often permits experts to rely on, and sometimes to disclose, inadmissible hearsay as the basis for their own opinions. The defense thus need not call the authors of the studies as witnesses to lay a foundation for the epidemiologist to describe the studies on point and to explain why the substance has not been scientifically established to be toxic.

Again, one might argue that there is a difference. Epidemiologists study the types and quality of proof of toxicity; as a result of their specialized training in methodology, they can give an expert opinion on the state of the science. To inform that opinion, the epidemiologist can — indeed, must — review the published work of other researchers. But is Dr. Cole, as a sociologist of science, qualified to give the opinion that “there is now consensus in the scientific and governmental community that categorical conclusions of identification — such as the one made in this case — are scientifically indefensible”? The opinion in Cantoni does not squarely confront this Rule 702 question.

Whatever the answer to the question of qualifications, 9/ the hearsay rule should not preclude a statistician or other methodological expert from giving an opinion that well-designed research suggests that latent fingerprint examiners who have been the subject of experiments reach incorrect source conclusions under certain conditions or at certain rates. Analogous expert testimony about the accuracy of eyewitness identifications is now plainly admissible (in the court's discretion) in most jurisdictions. Cross-examination (this time by the government) can explore the extent to which these findings are applicable to the case at bar. If the known error rates are clearly inapposite, they should be excluded under Rule 403, but the hearsay objection seems misplaced.

NOTES
  1. The previous cases are noted in Ignoring PCAST’s Explication of Rule 702(d): The Opinions on Fingerprint Evidence in Pitts and Lundi, Forensic Sci., Stat. & L., July 16, 2018, and More on Pitts and Lundi: Why Bother with Opposing Experts?, Forensic Sci., Stat. & L., July 17, 2018.
  2. William Thompson, John Black, Anil Jain & Joseph Kadane, Forensic Science Assessments: A Quality and Gap Analysis, Latent Fingerprint Examination (2017).
  3. Executive Office of the President, President’s Council of Advisors on Sci. & Tech., Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (2016).
  4. NIST Expert Working Group on Human Factors in Latent Print Analysis, Latent Print Examination and Human Factors: Improving the Practice Through a Systems Approach (David H. Kaye, ed. 2012).
  5. U.S. Dep't of Justice, Office of the Inspector General, A Review of the FBI’s Handling of the Brandon Mayfield Case (2006).
  6. Cf. Another US District Court Finds Firearms-mark Testimony Admissible in the Post-PCAST World, Forensic Sci., Stat. & L., Mar. 15, 2019.
  7. Nicole Westman, Bad Police Fingerprint Work Undermines Chicago Property Crime Cases, Chi. Reporter, Mar. 21, 2019.
  8. Bradford T. Ulery et al., Accuracy and Reliability of Forensic Latent Fingerprint Decisions, 108 Proc. Nat’l Acad. Sci. 7733, 7734 (2011). For more discussion of the characteristics of the volunteers, see Part II of Fingerprinting Under the Microscope: Examiners Studied and Methods, Forensic Sci., Stat. & L., Apr. 27, 2011.
  9. For discussion of the qualifications of non-forensic scientists to opine on the state of forensic science methods, see David H. Kaye et al., The New Wigmore on Evidence: Expert Evidence, Ch. 2 (2d ed. 2011) (2019 cumulative update).
