The court did not dispute the absence of a standardized procedure with well-defined judgmental criteria for source attribution. Rather, it maintained that the very scientific literature cited by PCAST does not contradict a previous ruling in the case that the judgments of firearms examiners amount to the kind of “scientific knowledge” necessary to admit scientific evidence under the Supreme Court's opinion in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993). As discussed at length in many publications (and occasionally in depth), Justice Blackmun's opinion in Daubert articulates a loose, multifactor standard for ascertaining the scientific soundness of proposed testimony, 3/ and the district court had examined the “Daubert factors” in its first ruling. This time, it limited its analysis to “error rates.” Judge John J. Tharp, Jr., wrote that:
But is it true that “the report does not dispute the accuracy ... of firearm toolmark analysis within the courts”? The report claims that the accuracy of the statements that appear in court is not known with adequate precision. According to the authors, no convincing range for the risk of error can be derived from the existing scientific literature. PCAST maintains that “[b]ecause firearms analysis is at present a subjective feature-comparison method, its foundational validity can only be established through multiple independent black box studies ... .” 4/ The report is emphatic (some would say dogmatic) on this point: “the sole way to establish foundational validity is through multiple independent ‘black-box’ studies that measure how often examiners reach accurate conclusions across many feature-comparison problems involving samples representative of the intended use. In the absence of such studies, a feature-comparison method cannot be considered scientifically valid” (pp. 66, 68).
Under this criterion for scientific validity, it is hard to see how the PCAST report can be characterized as not disputing accuracy. The Miami-Dade study that the opinion relies on barely counts for PCAST. The report lists it under the heading of “Non-black-box studies of firearms analysis” (p. 106). That leaves a single study, and a single study cannot satisfy the report's demand for multiple studies. For better or worse, PCAST's bottom line is clear.
The district court in Chester read this passage as meaning that the science is there, but that it would be nice to have a few more studies to show that other researchers can replicate the small error rates in the unpublished study. But is not PCAST really saying that there is a paucity of acceptable experiments from which to ascertain applicable error probabilities? In this regard, much more than “reliability” (in the statistical sense) and “reproducibility” of a number is at issue. One should ask not just whether a second research group has replicated a given study, but more broadly, whether a solid body of studies with varied designs and different samples of examiners establishes that the findings as a whole are robust and generalizable.
In contrast, the Chester court is satisfied with two studies that it understands to reveal false positive error probabilities in the neighborhood of 2%. Where PCAST is unable to perceive evidence of “validity” in the sense of reasonably well known error probabilities, the court finds the probability of error to be small enough to allow testimony as to the origin of the bullets.
But which probability does the court conclude is comfortingly small? The PCAST report defines a false-positive error probability one way. The Chester court expresses a different understanding of the meaning of this probability. Stay tuned for later discussion of this “false-positive error fallacy.”
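To sketch the distinction at issue (leaving the fuller treatment for the later discussion): the report's false-positive probability conditions on the ground truth, whereas the transposed reading conditions on the examiner's conclusion. The two are related by Bayes' rule but are not the same number. The figures below are purely hypothetical illustrations, not values drawn from the studies the court cites.

```latex
% PCAST-style definition: chance of a reported match when the items
% in fact come from different sources.
\Pr(M \mid D)
% Transposed reading: chance that the items come from different sources
% given that the examiner reported a match.
\Pr(D \mid M) \;=\;
  \frac{\Pr(M \mid D)\,\Pr(D)}
       {\Pr(M \mid D)\,\Pr(D) + \Pr(M \mid S)\,\Pr(S)}
% where D = different sources, S = same source, M = declared match.
%
% Hypothetical illustration: take Pr(M|D) = 0.02 and Pr(M|S) = 0.95.
% If Pr(D) = 0.5:
%   Pr(D|M) = (0.02)(0.5) / [(0.02)(0.5) + (0.95)(0.5)] \approx 0.021
% If Pr(D) = 0.9:
%   Pr(D|M) = (0.02)(0.9) / [(0.02)(0.9) + (0.95)(0.1)] \approx 0.159
```

The point of the sketch is only that the transposed quantity depends on the prior odds of a common source, so a 2% false-positive probability in the PCAST sense does not by itself fix the probability that a declared match is erroneous.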
- See Eric Lander, William Press, S. James Gates, Jr., Susan L. Graham, J. Michael McQuade, and Daniel Schrag, PCAST Releases Report on Forensic Science in Criminal Courts, Sept. 20, 2016, https://www.whitehouse.gov/blog/2016/09/20/pcast-releases-report-forensic-science-criminal-courts
- Seth Augerstein, Firearms Evidence Allowed in Chicago Hobos Gang Trial—Despite PCAST Argument, Forensic Mag., Oct. 13, 2016. http://www.forensicmag.com/news/2016/10/firearms-evidence-allowed-chicago-hobos-gang-trial-despite-pcast-argument
- See, e.g., David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2d ed. 2011) (updated annually).
- P. 106. The definition of “black-box study” at page 48, Box 2, is seriously incomplete. There, the report explains that “[b]y a ‘black-box study,’ we mean an empirical study that assesses a subjective method by having examiners analyze samples and render opinions about the origin or similarity of samples.” But “empirical” studies come in a multitude of designs. The PCAST authors have specific ideas about the design of empirical studies that they call “black-box studies.”