Thursday, March 8, 2012

Latent Fingerprint Identification for a New Generation: The NIST Report

Last month, after three and one-half years of seemingly interminable meetings, the Law Enforcement Standards Office at NIST (the National Institute of Standards and Technology) released a comprehensive report on improving the practice of latent print examination for criminal identification. It can be downloaded as a single PDF file. My views are not those of an objective observer (I was the editor and an author of the report), but I can say with 100% confidence that the book should be of interest to lawyers who encounter fingerprint evidence, not to mention all fingerprint analysts and many other individuals concerned with the production and delivery of forensic science evidence.

The oversized working group included both “insiders” (fingerprint examiners and representatives of professional organizations) and “outsiders” (psychologists, statisticians, engineers, law professors, etc.). Like most expert groups charged with making scientific assessments and recommendations for best practices, the group did not conduct new research, but it reviewed and described a substantial body of existing work in the forensic sciences, cognitive science, human factors, and law.

The group was hardly unanimous on every issue, but it succeeded in addressing the larger issues that have been prominent in the modern literature and cases on fingerprint identification — issues such as the measurement of error rates in fingerprint examinations, the role of statistical and intuitive methods in drawing inferences about identity, the scientific foundations of the discipline, and the ethical and legal considerations in reporting and testifying about laboratory findings.

One suggestion in the report regarding the last item is that the fingerprint community needs to be open to rethinking its historical postulates about what “identification” means and should adopt more cautious ways of describing the implications of similarities in latent and exemplar prints. Thus, a summary paragraph notes that
[W]ays to describe the possible association include statements about the strength of the evidence (the likelihoods) or the posterior probability. With appropriate data and validated modeling, such statements could be quantitative . . . , but less precise qualitative descriptions of the strength of the evidence or the source probability also are possible. . . . Given the current state of scientific and professional knowledge, . . . it is best to avoid testimony based on the theory of global general uniqueness. [E]xaminers [should] not testify to an identification to the exclusion of all other individuals in the world, [and] other, more conservative methods for conveying the probative value of a match [are available]. . . . The Working Group did not reach a consensus on which of these alternatives to universal-individualization testimony is best.
The alternatives listed in the report do not include statements like "it is a practical certainty that the prints are from the same finger." They do include the more radical suggestion that experts should avoid source attributions in favor of statements about "likelihoods." This is a bit of statistical jargon that I won't take the time to explain here. The book contains a sample laboratory report that presents an example of the strength-of-the-evidence approach.
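For readers curious about the jargon, the strength-of-the-evidence idea can be sketched in a few lines of arithmetic. In the likelihood-ratio framework discussed in the forensic statistics literature, the examiner reports how much more probable the observed similarity is if the prints share a source than if they do not, and a fact-finder could (in principle) combine that ratio with prior odds via Bayes' rule. The numbers below are purely hypothetical, chosen for illustration; nothing in the report endorses these particular values.

```python
def likelihood_ratio(p_similarity_same_source, p_similarity_diff_source):
    """Strength of the evidence: how much more probable the observed
    degree of similarity is under the same-source hypothesis than
    under the different-source hypothesis."""
    return p_similarity_same_source / p_similarity_diff_source

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * LR.
    The prior odds are for the fact-finder, not the examiner, to supply."""
    return prior_odds * lr

# Hypothetical figures: the similarity is nearly certain if the prints
# come from the same finger, and very unlikely as a coincidence.
lr = likelihood_ratio(0.99, 0.00001)  # roughly 99,000

# With even prior odds (1:1), the posterior odds equal the LR itself;
# with long prior odds against (say 1:1,000,000), they remain modest.
even_prior = posterior_odds(1.0, lr)
long_prior = posterior_odds(1.0 / 1_000_000, lr)
```

The point of the framework is visible in the last two lines: the same evidence yields very different posterior odds depending on the prior, which is why reporting the likelihood ratio alone stops short of a source attribution.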


Expert Working Group on Human Factors in Latent Print Analysis, NIST, Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach: The Report of the Expert Working Group on Human Factors in Latent Print Analysis (David H. Kaye ed. 2012), available at
