The court reasoned that the state had no need to prove that the “expert testimony ... albeit scientific in nature” was based on a scientifically validated procedure because the physical comparison was “neither scientifically obscure nor instilled with 'aura of mystic infallibility' ... which merely places a jury ... in in [sic] a position to weigh the probative value of the testimony without abandoning common sense and sacrificing independent judgment to the expert's assertions.” Patel (quoting Maher v. Quest Diagnostics, Inc., 269 Conn. 154, 170-71 n.22, 847 A.2d 978 (2004)). The defendant, however, had relied on

a September 2016 report by the President's Council of Advisors on Science and Technology [stating] that ‘there are no appropriate empirical studies to support the foundational validity of footwear analysis to associate shoeprints with particular shoes based on specific identifying marks (sometimes called randomly 'randomly [sic] acquired characteristics'). Such conclusions are unsupported by any meaningful evidence or estimates of their accuracy and thus are not scientifically valid.’
But the Superior Court did not stop here. Judge Danaher wrote that the President’s Council (PCAST) lacked relevant scientific expertise, and its skepticism did not alter the fact that courts previously had approved of “the ACE-V method under Daubert for footwear and fingerprint impressions.” He declared that "[t]here is no basis on which this court can conclude, as the defendant would have it, that the PCAST report constitutes 'the scientific community.'" These words might mean that the relevant scientific community disagrees with the Council's conclusion that footwear-mark comparisons purporting to associate a particular shoe with a questioned impression lack adequate scientific validation. Other scientists might disagree either because they do not demand the same type or level of validation, or because they find that the existing research satisfies PCAST's more demanding standards. The former is more plausible than the latter, but it is not clear which possibility the court accepted as true.
To reject the PCAST Report's negative finding, Judge Danaher relied exclusively on the testimony of “Lisa Ragaza, MSFS, CFWE, a ‘forensic science examiner 1’ ... who holds a B.S. degree from Tufts University and an M.S. degree from the University of New Haven.” What did the forensic-science examiner say to support the conclusion that PCAST erred in its determination that no adequate body of scientific research supports the accuracy of examiner judgments? To begin with,
Ms. Ragaza testified that, in her opinion, footwear comparison analysis is generally accepted in the relevant scientific community. She testified that such evidence has been admitted in 48 or 49 of the 50 states in the United States, in many European countries, and also in India and China. In fact, she testified, such analyses have been admitted in United States courts since the 1930s, although she is also aware that one such analysis was carried out in Scotland as early as 1786.

It seems odd to have forensic examiners instruct the court in the law. That the courts in these jurisdictions (not all of which even require a showing of scientific validity) admit the testimony of footwear analysts that a given shoe is the source of a mark says little about the extent to which these judgments have been subjected to scientific testing. As a committee of the National Academy of Sciences reported in 2009, “Daubert has done little to improve the use of forensic science evidence in criminal cases.” NRC Committee on Strengthening Forensic Science in the United States, Strengthening Forensic Science in the United States: A Path Forward 106 (2009). Instead, “courts often ‘affirm admissibility citing earlier decisions rather than facts established at a hearing.’” Id. at 107.
Second,
Ms. Ragazza testified that there are numerous treatises and journals, published in different parts of the world, on the topic of footwear comparison analysis. She testified that there have been studies relative to the statistical likelihood of randomly acquired characteristics appearing in various footwear.

But the existence of “treatises and journals” — including what the NAS Committee called “trade journals,” id. at 150 — does not begin to contradict PCAST’s conclusion about the dearth of studies of the accuracy of examiner judgments. PCAST commented (pp. 116-17) on one of the “studies relative to the statistical likelihood”:
a mathematical model by Stone that claims that the chance is 1 in 16,000 that two shoes would share one identifying characteristic and 1 in 683 billion that they would share three characteristics. Such claims for “identification” based on footwear analysis are breathtaking—but lack scientific foundation. ... The model by Stone is entirely theoretical: it makes many unsupported assumptions (about the frequency and statistical independence of marks) that it does not test in any way.
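To see why the untested independence assumption matters so much, here is a minimal sketch of how an independence model turns a modest per-mark probability into an astronomically small coincidence probability. It is not a reproduction of Stone's actual model; the per-mark probability and the number of marks below are illustrative placeholders only.

```python
# Illustrative only: shows how an assumed independence model multiplies
# per-mark probabilities. This is NOT Stone's published model; the inputs
# below are hypothetical placeholders.

def coincidental_match_probability(per_mark_prob: float, n_marks: int) -> float:
    """Probability that an unrelated shoe shares n_marks characteristics,
    *if* each mark matches independently with probability per_mark_prob."""
    return per_mark_prob ** n_marks

p = 1 / 16_000          # hypothetical chance that one mark matches by coincidence
for n in (1, 2, 3):
    prob = coincidental_match_probability(p, n)
    print(f"{n} shared mark(s): about 1 in {1 / prob:,.0f}")

# With p = 1/16,000, three "independent" marks give roughly 1 in 4 trillion.
# Such figures are only as good as the untested assumptions behind them: if
# marks cluster in high-wear areas or are not independent, the true
# probability can be orders of magnitude larger.
```

The apparent precision of the output is entirely an artifact of the assumptions; as PCAST observed, the model does not test the assumed frequencies or the independence of the marks in any way.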
Third,

Ms. Ragazza testified that her work is subject to peer review, including having a second trained examiner carry out a blind review of each analysis that she does. In response to the defendant's question as to whether such reviews have ever resulted in the second reviewer concluding that Ms. Ragazza had carried out an erroneous analysis, she responded that there were no such instances. Most of her work is not done in preparation for litigation. It is frequently done for investigative purposes and may be used to inculpate, but also exculpate, an individual. She indicated that the forensic laboratory carries out its analyses for both prosecutors and defense counsel.

Verification of an examiner’s conclusion by another examiner is a good thing, but it does almost nothing to establish the validity of the examination process. Making sure that two readers of tea leaves agree in their predictions does not validate tea reading (although it could offer data on measurement reliability, which is necessary for validity).
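The tea-leaves point can be made concrete with a hypothetical simulation (the numbers below are invented for illustration and come from no footwear study): two examiners who read the same nearly non-diagnostic feature of a mark and follow the same rule will agree with each other almost every time, even though each one's conclusions are scarcely better than a coin flip.

```python
# Hypothetical simulation, not data from any study: both examiners read the
# same misleading feature of a mark and apply the same decision rule, so they
# agree almost always, yet their calls are barely better than chance. High
# inter-examiner agreement (reliability) is necessary for, but does not
# establish, accuracy (validity).
import random

random.seed(1)
trials = 100_000
agreements = correct = 0

for _ in range(trials):
    true_source = random.random() < 0.5          # ground truth for this mark
    # A salient but nearly non-diagnostic cue on the mark itself:
    cue_says_match = random.random() < (0.95 if true_source else 0.90)
    # Each examiner follows the shared cue, with a small independent slip rate.
    call_1 = cue_says_match if random.random() < 0.98 else not cue_says_match
    call_2 = cue_says_match if random.random() < 0.98 else not cue_says_match
    agreements += (call_1 == call_2)
    correct += (call_1 == true_source)

print(f"Inter-examiner agreement: {agreements / trials:.1%}")   # roughly 96%
print(f"Examiner 1 accuracy:      {correct / trials:.1%}")      # roughly 52%
```

Only experiments with known ground truth, of the black-box kind PCAST called for, can separate agreement from accuracy.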
Fourth,
Ms. Ragazza explained how footwear comparison analysis is carried out, using a protocol known as ACE-V, and employing magnifiers and/or microscopes.

Plainly, this misses the point. If tea reading were expanded to include magnifiers and microscopes, that would not make it more valid. (Actually, I believe that footwear-mark comparisons based on “randomly acquired characteristics” are a lot better than tea reading, but I still am searching for the scientific studies that let us know how much better.)
Sixth,
Ms. Ragazza does not agree with the PCAST report because, in her view, that report did not take into account all of the available research on the issue of footwear comparison evidence.

Maybe there is something to this complaint, but what validity studies does the PCAST report overlook? The Supporting Documentation for Department of Justice Proposed Uniform Language for Testimony and Reports for the Forensic Footwear and Tire Impression Discipline (2016) begins, “The origin of the principles used in the forensic analysis of footwear and tire impression evidence dates back to when man began hunting animals.” But the issue the PCAST Report addresses is not whether a primitive hunter can distinguish between the tracks of an elephant and a tiger. It is the accuracy with which modern forensic fact hunters can identify the specific origin of a shoeprint or a tire tread impression. If Ms. Ragazza provided the court with studies of this particular issue that would produce a different conclusion about the extent of the validation research reported on in both the NRC and PCAST reports, the court did not see fit to list them in the opinion.
A footnote to the claim that "an examiner can identify a specific item of footwear/tire as the source of the footwear/tire impression" can be found in the Justice Department document mentioned above. This note (#12) lists the following publications:
- Cassidy, M.J. Footwear Identification. Canadian Government Publishing Centre: Ottawa, Canada, 1980, pp. 98-108;
- Adair, T., et al. (2007). The Mount Bierstadt Study: An Experiment in Unique Damage Formation in Footwear. Journal of Forensic Identification 57(2): 199-205;
- Banks, R., et al. Evaluation of the Random Nature of Acquired Marks on Footwear Outsoles. Research presented at Impression & Pattern Evidence Symposium, August 4, 2010, Clearwater, FL;
- Stone, R. (2006). Footwear Examinations: Mathematical Probabilities of Theoretical Individual Characteristics. Journal of Forensic Identification 56(4): 577-599;
- Wilson, H. (2012). Comparison of the Individual Characteristics in the Outsoles of Thirty-Nine Pairs of Adidas Supernova Classic Shoes. Journal of Forensic Identification 62(3): 194-203.
Finally,
She testified that, to her knowledge, the PCAST members did not include among their membership any forensic footwear examiners.

It's true. The President's Council of Advisors on Science and Technology does not include footwear examiners. But would we say that only tea-leaf readers are able to judge whether there have been scientific studies of the validity of tea-leaf reading? That only polygraphers are capable of determining whether the polygraph is a valid lie detector? That only pathologists can ascertain whether an established histological test for cancer is accurate?
PCAST's conclusion was that no direct experiments currently establish the sensitivity and specificity of footwear-mark identification. In the absence of a single counter-example from the opinion, that conclusion seems sound. But the legal problem is whether to accept the PCAST report's premise that this information is essential to admissibility of footwear evidence under the standard for scientific expert testimony codified in Federal Rule of Evidence 702. Is it true, as a matter of law (or science), that only a large number of so-called black box studies with large samples can demonstrate the scientific validity of subjective identification methods or that the absence of precisely known error probabilities as derived from these experiments dictates exclusion? I fear that the PCAST report is too limited in its criteria for establishing the requisite scientific validity for forensic identification techniques, for there are other ways to test examiner performance and to estimate error rates. But however one comes out on such details, the need for courts to demand substantial empirical as well as theoretical studies that demonstrate the validity and quantify the risks of errors in using these methods remains paramount.
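What would such an empirical demonstration look like in practice? The following is a minimal sketch, using invented counts rather than results from any actual study, of how a black-box study translates into the quantities PCAST emphasizes: sensitivity, specificity, and a false-positive rate together with an exact one-sided 95% upper confidence bound (a Clopper-Pearson bound, computed here by bisection).

```python
# Illustrative calculation only: the counts below are hypothetical, not results
# of any real black-box study of footwear-mark examiners.
from math import comb


def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def clopper_pearson_upper(errors: int, n: int, alpha: float = 0.05) -> float:
    """One-sided upper (1 - alpha) confidence bound on an error probability,
    found by bisection on the exact binomial tail."""
    lo, hi = errors / n, 1.0
    for _ in range(60):                      # 60 halvings: precision ~1e-18
        mid = (lo + hi) / 2
        if binom_cdf(errors, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi


# Hypothetical study: examiners judge mated and non-mated shoe/mark pairs
# whose true source is known to the researchers but not to the examiners.
mated_pairs, true_ids = 300, 282          # correct "identification" calls
nonmated_pairs, false_ids = 300, 6        # erroneous "identification" calls

sensitivity = true_ids / mated_pairs
specificity = (nonmated_pairs - false_ids) / nonmated_pairs
fpr = false_ids / nonmated_pairs
fpr_upper = clopper_pearson_upper(false_ids, nonmated_pairs)

print(f"Sensitivity:         {sensitivity:.1%}")
print(f"Specificity:         {specificity:.1%}")
print(f"False-positive rate: {fpr:.1%} (upper 95% bound {fpr_upper:.1%})")
```

The particular numbers are beside the point; what matters is that, once such studies exist, a court can weigh a measured false-positive rate and its uncertainty rather than an examiner's assurance that a second reviewer has never disagreed with her.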
Although Patel is merely one unpublished pretrial ruling with no precedential value, the case indicates that defense counsel cannot just cite the conclusions of the PCAST report and expect judges to exclude familiar types of evidence. They need to convince courts that "the reliability requirements" for scientific evidence include empirical proof that a technique actually works as advertised. Then the parties can focus on whether PCAST's assessments of the literature omit or give too little weight to studies that would warrant different conclusions. Broadbrush references to "treatises and journals" and a history of judicial acceptance should not be enough to counter PCAST's findings of important gaps in the research base of a forensic identification method.