In ... fields such as latent fingerprint identification, firearms, clinical psychology, and clinical psychiatry ... , if judges ask the question, “Where are the data?” they would be met with blank stares. If you ask a latent fingerprint examiner, “Where are your data?” the answer is likely to be, “Data. We have no data. In fact, we don't need data. We're specialists.” ... Many of these experts have been practicing their trade for twenty-five years; they know it when they see it. ... Under Daubert, however, even if your data happen to be experience, you have to be able to articulate how you came to know what you think you know. (Faigman 2013, 914).

A footnote explains that being “able to articulate how you came to know what you think you know” can be accomplished by “checking the basis for believing that the experience will produce reliable testimony.” (Ibid., 914 n. 64).
Now, I am no fan of fingerprint examiners’ claims that they can match latent prints to reference prints with absolute certainty (NIST 2012), or of the lax and superficial court opinions allowing such testimony (Kaye 2013; Kaye, Bernstein, and Mnookin 2011). But the assertion that there are absolutely no data to show that latent print examiners can “produce reliable testimony” is too much for me to swallow. Indeed, in his treatise on scientific evidence, Professor Faigman does not insist that “no data” exist. The treatise correctly recognizes that “[a] few well-designed studies have now been conducted” (Faigman et al. 2012, § 33:56). To the list of six studies noted in the treatise (ibid., § 33:49 n. 10), one can add Tangen, Thompson, and McCarthy (2011). (As explained here last June, this Australian study found a false negative rate of under 8% and a false positive rate of under 1% (Fingerprinting Error Rates Down Under, June 24, 2012).)
Perhaps Professor Faigman meant to say that even if data exist to support the judgments of fingerprint analysts—as they clearly do at a general level—a particular examiner’s judgments are not based on data, but on standardless, subjective impressions of the degree of similarity that warrants an identification or an exclusion. They just “know it when they see it.” That observation is closer to the mark (no pun intended). It is the gist of David Harris’s contention that “most forensic science does not qualify as science in any true sense of that term.” (Harris 2012, 36). Like Professor Faigman, Professor Harris complains that “[d]isciplines like fingerprint analysis, firearms tool-mark analysis, and bite-mark analysis have no basis in statistics, and do not originate in inquiry conducted according to scientific principles. Rather, they rely on human judgment grounded in experience ... without reference to rigorous and agreed-upon standards.” (Ibid.) Identification experts who do not follow a protocol with quantitative or other external standards to achieve high inter-rater reliability should not insist that they are following the “scientific method.” (Compare Kaye 2012, 123).
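To make the point about inter-rater reliability concrete, here is a minimal sketch (in Python) of how agreement between examiners can be quantified. The labels are hypothetical, invented solely to illustrate the computation of Cohen’s kappa, a standard chance-corrected measure of agreement between two raters who independently classify the same comparisons; nothing below comes from an actual proficiency test.

from collections import Counter

# Hypothetical conclusions by two examiners on the same ten comparisons
# ("id" = identification, "excl" = exclusion); not real data.
examiner_a = ["id", "id", "excl", "id", "excl", "id", "excl", "excl", "id", "id"]
examiner_b = ["id", "excl", "excl", "id", "excl", "id", "id", "excl", "id", "id"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n                  # raw agreement
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)   # agreement expected by chance
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(examiner_a, examiner_b):.2f}")  # 0.58 for these made-up labels

A kappa near 1 would mean that different examiners, looking at the same marks, reach the same conclusions far more often than chance alone would produce; values well below that are the quantitative face of the “know it when they see it” problem.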
But "science" is not the only source of useful information, and experiments can measure the levels of accuracy for subjective as well as objective procedures. DNA laboratories have verified that DNA analysis performed in a specified way correctly distinguishes between samples taken from the same source and samples taken from different sources. Indeed, this is the only sense in which it could be said that “DNA profiling [always] ... had known error rates” (Faigman 2013, 913). Even today, the error rates of DNA laboratories in actual case work is not really known. In the same manner, tests of fingerprint analyses performed by trained examiners show that they are capable of routinely distinguishing between marks taken from the same source and marks taken from different sources (with some errors). Again, however, we do not know the error rates of these examiners in actual case work. (Kaye 2012).
Consequently, appropriately documented latent print comparisons, undertaken without unnecessary exposure to biasing information, presented with a recognition of the uncertainty in the largely subjective procedure, and verified by an independent examiner blinded to the initial outcome as well as to the output of an automated scoring system, should survive the “more rigorous test” (Faigman 2013) established in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), and embellished in later cases. Although there is ample room to improve fingerprint comparisons by human examiners and to implement automated systems for latent print work, “blank stares” and “no data” are no longer the only answers available to an objection under Daubert.
References
Faigman, David L. 2013. “The Daubert Revolution and the Birth of Modernity: Managing Scientific Evidence in the Age of Science.” University of California at Davis Law Review 46:893–930.
Harris, David A. 2012. Failed Evidence: Why Law Enforcement Resists Science. New York and London: New York University Press.
Kaye, David H. 2013. “Experimental and Scientific Evidence: Criminalistics.” In McCormick on Evidence, edited by Kenneth Broun, § 207. Eagan, MN: West Publishing Co.
Kaye, David H., ed. 2012. Latent Print Examination and Human Factors: Improving the Practice Through a Systems Approach. Report of the Expert Working Group on Human Factors in Latent Print Analysis. Gaithersburg, MD: National Institute of Standards and Technology.
Kaye, David H., David E. Bernstein, and Jennifer L. Mnookin. 2011. The New Wigmore: A Treatise on Evidence: Expert Evidence. 2d ed. New York: Aspen Publishing Company.
Tangen, Jason M., Matthew B. Thompson, and Duncan J. McCarthy. 2011. “Identifying Fingerprint Expertise.” Psychological Science 22:995 (available online).