Saturday, March 17, 2012

Who is Nelson Acosta-Roque? (Part I)

According to the Department of Homeland Security, he is Victor Antonio Pecheca-Aromboles, a former cocaine dealer born in the Dominican Republic who served time in Pennsylvania in the early 1990s, was deported, and came back to the U.S. under an alias (via Puerto Rico, where he lived illegally for a few years), ending up in Anchorage, where he has worked as a waiter in a brew pub for the last ten years. [1] On the basis of a comparison of copies of fingerprint cards from Pennsylvania (for Pecheca-Aromboles) and New Jersey, Puerto Rico, and Alaska (for Acosta-Roque), the government convinced the immigration judge and the Board of Immigration Appeals that the Anchorage waiter is the former Pennsylvania cocaine dealer.
The removal order is before the Ninth Circuit Court of Appeals. According to law professor Caleb Mason, who represents Acosta-Roque, “the only question is the weight of an alleged 8-point match: is that enough, standing on its own, to warrant a finding of identity by clear and convincing evidence?” [1] To address this question, Professor Mason enlisted the aid of 39 “Scientists and Scholars of Fingerprint Identification as Amici Curiae” to file a brief in support of his client. The brief, which was written by Simon Cole of the University of California at Irvine, describes the expert’s testimony as problematic, unjustified, inadequately supported, vague, and groundless. [2]
My reading of the transcript, however, suggests that the issue on appeal is not really whether “an alleged 8-point match” is enough. The four ten-print cards were compared by Susan R. Blei, the supervisor of Alaska’s Criminal Records and Identification Bureau. Ms. Blei’s qualifications consisted of about seven weeks of training courses over the years, many “educational conferences” on AFIS (the Automated Fingerprint Identification System) organized by NEC, and many years of on-the-job experience. She never testified that there was a match at exactly eight points of comparison. She testified that she likes to have at least eight points before signing off on an identification, but she did not give the number of features on which she relied to form her “100% certain” opinion that all the ten-print cards came from one and the same individual. In essence, she said, “I compared, I counted, I concluded (and so did my unnamed verifying examiner).” She provided this ipse dixit with the encouragement of the government’s counsel. Thus, the issue should be whether the opinion of a fingerprint examiner, presented in this conclusory, “trust-me” fashion, can amount to clear and convincing evidence—the standard the government has to meet to establish that Acosta-Roque obtained his permanent resident status fraudulently, having been deported under a different name.
Is a fingerprint analyst’s assertion of identity clear and convincing proof when it lacks any meaningful description of the process, when it contains no statements that would show that the examiner followed accepted protocols (beyond the fact that the examiner took some short courses and has years of experience), and when it does not indicate the examiner’s performance on rigorous tests of her proficiency? There is at least a decent argument that it is not.
Related questions are whether the “trust me” testimony in this case met professional and ethical standards and whether existing scientific research warrants the claim of 100% certainty. I shall comment on some aspects of these questions, and on the positions of the “scientists and scholars of fingerprint identification,” in later postings.
1. Caleb Mason, Scientific Validity of Fingerprint Comparison: New Case and Amicus Brief (Mar. 5, 2012).
2. Brief of Scientists and Scholars of Fingerprint Identification as Amici Curiae in Support of Petitioner and in Favor of Reversal, Acosta-Roque v. Holder, No. 11-70705 (9th Cir., Mar. 8, 2012).

Thursday, March 8, 2012

Latent Fingerprint Identification for a New Generation: The NIST Report

Last month, after three-and-one-half years of seemingly interminable meetings, the Law Enforcement Standards Office at NIST (the National Institute of Standards and Technology) released a comprehensive report on improving the practice of latent print examination for criminal identification. It can be downloaded as one big pdf file. My views are not those of an objective observer (I was the editor and an author of the report), but I can say with 100% confidence that the book should be of interest to lawyers who encounter fingerprint evidence, not to mention all fingerprint analysts and many other individuals concerned with the production and delivery of forensic science evidence.

The oversized working group included both “insiders” (fingerprint examiners and representatives of professional organizations) and “outsiders” (psychologists, statisticians, engineers, law professors, etc.). Like most expert groups charged with making scientific assessments and recommendations for best practices, the group did not conduct new research, but it reviewed and described a substantial body of existing work in the forensic sciences, cognitive science, human factors, and law.

The group was hardly unanimous on every issue, but it succeeded in addressing the larger issues that have been prominent in the modern literature and cases on fingerprint identification — issues such as the measurement of error rates in fingerprint examinations, the role of statistical and intuitive methods in drawing inferences about identity, the scientific foundations of the discipline, and the ethical and legal considerations in reporting and testifying about laboratory findings.

One suggestion in the report regarding the last item is that the fingerprint community needs to be open to rethinking its historical postulates about what “identification” means and should adopt more cautious ways of describing the implications of similarities in latent and exemplar prints. Thus, a summary paragraph notes that
[W]ays to describe the possible association include statements about the strength of the evidence (the likelihoods) or the posterior probability. With appropriate data and validated modeling, such statements could be quantitative . . . , but less precise qualitative descriptions of the strength of the evidence or the source probability also are possible. . . . Given the current state of scientific and professional knowledge, . . . it is best to avoid testimony based on the theory of global general uniqueness. [E]xaminers [should] not testify to an identification to the exclusion of all other individuals in the world, [and] other, more conservative methods for conveying the probative value of a match [are available]. . . . The Working Group did not reach a consensus on which of these alternatives to universal-individualization testimony is best.
The alternatives listed in the report do not include statements like "it is a practical certainty that the prints are from the same finger." They do include the more radical suggestion that experts should avoid source attributions in favor of statements about "likelihoods." This is a bit of statistical jargon that I won't take the time to explain here. The book contains a sample laboratory report that presents an example of the strength-of-the-evidence approach.
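To give a rough sense of the distinction the report draws, here is a minimal sketch of the "likelihood" (strength-of-the-evidence) idea. All the numbers below are invented purely for illustration; they do not come from the report or from any fingerprint study, and the function names are my own.

```python
# Hypothetical illustration of likelihood-ratio reporting versus
# source-attribution testimony. The probabilities are invented.

def likelihood_ratio(p_given_same_source, p_given_different_source):
    """LR = P(observed similarity | same source)
           / P(observed similarity | different source).
    The LR expresses the strength of the evidence without asserting
    a source probability."""
    return p_given_same_source / p_given_different_source

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * LR.
    The prior (and hence any 'source probability') is supplied by
    the fact-finder, not the examiner."""
    return prior_odds * lr

# Suppose (hypothetically) the observed agreement is 1,000 times more
# probable if the two prints share a source than if they do not:
lr = likelihood_ratio(0.99, 0.00099)
print(round(lr))  # 1000

# An examiner who reports only the LR stops there. Combining it with
# hypothetical prior odds of 1:100 on same source yields posterior
# odds of 10:1, i.e., a posterior probability of 10/11, about 0.91:
print(posterior_odds(1 / 100, lr))  # 10.0
```

The point of the example is structural rather than numerical: a likelihood statement characterizes how strongly the comparison favors one hypothesis over the other, while a posterior "source probability" additionally requires a prior, which is why the report treats the two kinds of statements as distinct alternatives to universal-individualization testimony.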


Expert Working Group on Human Factors in Latent Print Analysis, NIST, Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach: The Report of the Expert Working Group on Human Factors in Latent Print Analysis (David H. Kaye ed. 2012), available at