Saturday, November 21, 2015

Latent Fingerprint Identification in Flux?

Two recent articles suggest that seeds of change are taking root in the field of latent fingerprint identification.

I. The Emerging Paradigm Shift in the Epistemology of Fingerprint Conclusions

In The Emerging Paradigm Shift in the Epistemology of Fingerprint Conclusions, the chief of the latent print branch of the U.S. Army Criminal Investigation Laboratory, Henry J. Swofford, writes of “a shift away from categoric conclusions having statements of absolute certainty, zero error rate, and the exclusion of all individuals to a more modest and defensible framework integrating empirical data for the evaluation and articulation of fingerprint evidence.” Mr. Swofford credits Christophe Champod and Ian Evett with initiating “a fingerprint revolution” by means of a 2001 “commentary, which at the time many considered a radical approach for evaluating, interpreting, and articulating fingerprint examination conclusions.” He describes the intense resistance this paper received in the latent print community and adds a mea culpa:
Throughout the years following the proposition of this new paradigm by Champod and Evett, the fingerprint community continued to respond with typical rhetoric citing the historical significance and longstanding acceptance by court systems, contending that the legal system is a validating authority on the science, as the basis to its reliability. Even the author of this commentary, after undergoing the traditional and widely accepted training at the time as a fingerprint practitioner, defensively responded to critiques of the discipline without fully considering, understanding, or appreciating the constructive benefits of such suggestions [citing Swofford (2012)]. Touting 100% certainty and zero error rates throughout this time, the fingerprint community largely attributed the cause of errors to be the incompetence of the individual analyst and failure to properly execute the examination methodology. Such attitudes not only stifled potential progress by limiting the ability to recognize inherent weaknesses in the system, they also held analysts to impossible standards and created a culture of blame amongst the practitioners and a false sense of perfection for the method itself.
The article by Champod and Evett is a penetrating and cogent critique of what its authors called the culture of “positivity.” They were responding to the fingerprint community’s understanding, as exemplified in guidelines from the FBI’s Technical Working Group on Friction Ridge Analysis, Study and Technology (TWGFAST), that
"Friction ridge identifications are absolute conclusions. Probable, possible, or likely identification are outside the acceptable limits of the science of friction ridge identification" (Simons 1997, p. 432).
Their thesis was that a “science of friction ridge identification” could not generate “absolute conclusions.” Being “essentially inductive,” the reasoning process was necessarily “probabilistic.” In comparing latent prints and exemplars in “an open population ... probabilistic statements are unavoidable.” (I would go further and say that even in a closed population — one in which exemplars from all the possible perpetrators have been collected — any inferences to identity are inherently probabilistic, but one source of uncertainty has been eliminated.) Although the article referred to “personal probabilities,” their analysis was not explicitly Bayesian. Although they wrote about “numerical measures of evidential weight,” they only mentioned the probability of a random match. They indicated that if “the probability that there is another person who would match the mark at issue” could be calculated, it “should be put before the court for the jury to deliberate.”
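
To see concretely why the open-versus-closed-population point matters, here is a back-of-the-envelope sketch (the numbers and the independence assumption are purely hypothetical, not validated fingerprint statistics). If p is the probability that a single unrelated person would happen to match the mark and N is the number of alternative possible sources, then the probability that at least one of them matches is 1 - (1 - p)^N, which is roughly Np when p is small:

    # Back-of-the-envelope sketch; p and N are hypothetical, not validated statistics.
    def prob_at_least_one_other_match(p, N):
        """Chance that at least one of N alternative sources matches the mark,
        assuming each matches independently with probability p."""
        return 1 - (1 - p) ** N

    # Open population: a million potential donors, random-match probability one in a million.
    print(prob_at_least_one_other_match(1e-6, 10**6))  # about 0.63
    # Closed population: exemplars from only 50 possible perpetrators on file.
    print(prob_at_least_one_other_match(1e-6, 50))     # about 0.00005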

Mr. Swofford’s recent article embraces the message of probabilism. Comparing the movement toward statistically informed probabilistic reasoning in forensic science to the development of evidence-based medicine, the article calls for “more scientifically defensible ways to evaluate and articulate fingerprint evidence [and] quantifiable, standardized criterion to support subjective, experience-based opinions, thus providing a more transparent, demonstrable, and scientifically acceptable framework to express fingerprint evidence.”

Nonetheless, the article does not clearly address how the weight or strength of the evidence should be expressed, and a new Defense Forensic Science Center (DFSC) policy that he signed off on is not fully consistent with the approach that Champod, Evett, and others have developed and promoted. That approach, as Part II of this posting will indicate, uses the likelihood ratio or Bayes factor to express the strength of evidence. In their 2001 clarion call to the latent fingerprint community, however, Champod and Evett did not actually present the framework for “evidential weight” that they have championed both before and since (e.g., Evett 2015). The word “likelihood” appears but once in the article (in a quotation from a court that uses it to mean the posterior probability that a defendant is the source of a mark).

II. Fingerprint Identification: Advances Since the 2009 National Research Council Report

The second article does not have the seemingly obligatory words “paradigm shift” in its title, but it does appear in a collection of papers on “the paradigm shift for forensic science.” In a thoughtful review, Fingerprint Identification: Advances Since the 2009 National Research Council Report, Professor Christophe Champod of the Université de Lausanne efficiently summarizes and comments on the major institutional, scientific, and scholarly developments involving latent print examination during the last five or six years. For anyone who wants to know what is happening in the field and what is on the horizon, this is the article to read.

Champod observes that “[w]hat is clear from the post NRC report scholarly literature is that the days where invoking ‘uniqueness’ as the main (if not the only) supporting argument for an individualization conclusion are over.” He clearly articulates his favored substitute for conclusions of individualization:
A proper evaluation of the findings calls for an assignment of two probabilities. The ratio between these two probabilities gives all the required information that allows discriminating between the two propositions at hand and the fact finder to take a stand on the case. This approach is what is generally called the Bayesian framework. Nothing prevents its adoption for fingerprint evidence.
and
[M]y position remains unchanged: the expert should only devote his or her testimony to the strength to be attached to the forensic findings and that value is best expressed using a likelihood ratio. The questions of the relevant population—which impacts on prior probabilities—and decision thresholds are outside the expert’s province but rightly belong to the fact finder.
I might offer two qualifications. First, although presenting the likelihood ratio is fundamentally different from expressing a posterior probability (or announcing a decision that the latent print comes from the suspect’s finger), and although the Bayesian conceptualization of scientific reasoning clarifies this distinction, one need not be a Bayesian to embrace the likelihood ratio (or its logarithm) as a measure of the weight of evidence. The intuition that evidence that is more probable under one hypothesis than under another lends more support to the former than to the latter can be taken as a starting point. (But counter-examples to and criticisms of the “law of likelihood” have been advanced. E.g., van Enk (2015); Mayo (2014).)
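
A bare-bones numerical sketch may make this division of labor concrete. The numbers below are entirely hypothetical; they merely illustrate how a reported likelihood ratio would combine, through Bayes’ rule, with prior odds that remain the fact finder’s business:

    # Hypothetical numbers, for illustration only.
    # The examiner assigns two probabilities for the observed correspondence E:
    p_E_if_suspect_source = 0.99     # P(E | the suspect's finger left the mark)
    p_E_if_other_source = 0.0001     # P(E | someone else's finger left the mark)

    # The expert's report would stop at the likelihood ratio, i.e., the weight of the evidence.
    likelihood_ratio = p_E_if_suspect_source / p_E_if_other_source   # 9,900

    # Prior odds belong to the fact finder.  Suppose 1,000 people plausibly could
    # have left the mark, so the prior odds on the suspect are 1 to 999.
    prior_odds = 1 / 999

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    posterior_odds = prior_odds * likelihood_ratio
    posterior_probability = posterior_odds / (1 + posterior_odds)
    print(round(likelihood_ratio), round(posterior_probability, 3))  # 9900 0.908

On this division of labor, the expert would testify only to the 9,900 figure (or some verbal equivalent), while anything like the 0.908 posterior probability would be for the fact finder to reach, or not, in light of all the evidence.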

Second, whether the likelihood-ratio approach to presenting results is thought to be Bayesian or to rest on a distinct "law of likelihood," what stands in the way of its widespread adoption is conservatism and the absence of data-driven conditional probabilities with which to compute likelihood ratios. To be sure, even without accepted numbers for likelihoods, the analyst who reaches a categorical conclusion should have some sense of the likelihoods that underlie the decision. As subjective and fuzzy as these estimates may be, they can be the basis for reporting the results of a latent print examination as a qualitative likelihood ratio (NIST Expert Working Group on Human Factors in Latent Print Analysis 2012).  Still, a question remains: How do we know that the examiner is as good at judging these likelihoods as at coming to a categorical decision without articulating them?
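
One way to operationalize such qualitative reporting is sketched below: translate the examiner’s (admittedly subjective) likelihood ratio into a standardized verbal statement of support. The cut-points follow one commonly published verbal scale for likelihood ratios; they are illustrative assumptions, not a standard endorsed for latent prints:

    import math

    # Illustrative verbal scale; the bands are assumptions, not an endorsed standard.
    VERBAL_SCALE = [
        (10, "weak support"),
        (100, "moderate support"),
        (1000, "moderately strong support"),
        (10000, "strong support"),
        (math.inf, "very strong support"),
    ]

    def verbal_equivalent(lr):
        """Map a likelihood ratio greater than 1 to a qualitative statement of
        support for the proposition that the suspect's finger left the mark."""
        if lr <= 1:
            raise ValueError("this toy scale covers only LR > 1")
        for upper_bound, phrase in VERBAL_SCALE:
            if lr <= upper_bound:
                return phrase

    print(verbal_equivalent(9900))   # "strong support"

Even so, the question just raised remains: such a mapping is only as trustworthy as the examiner’s underlying likelihood judgments.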

Looking forward to less opaquely ascertained likelihoods, Champod presents the following vision:
I foresee the introduction in court of probability-based fingerprint evidence. This is not to say that fingerprint experts will be replaced by a statistical tool. The human will continue to outperform machines for a wide range of tasks such as assessing the features on a mark, judging its level of distortion, putting the elements into its context, communicating the findings and applying critical thinking. But statistical models will bring assistance in an assessment that is very prone to bias: probability assignment. What is aimed at here is to find an appropriate distribution of tasks between the human and the machine. The call for transparency from the NRC report will not be satisfied merely with the move towards opinions, but also require offering a systematic and case-specific measure of the probability of random association that is at stake. It is the only way to bring the fingerprint area within the ethos of good scientific practice.
Acknowledgement: Thanks to Ted Vosk for telling me about the first article discussed here.

1 comment:

  1. With regard to the likelihood ratio, what are your thoughts on Hari Iyer's (NIST) claims that there are concerns with the LR because the priors cannot be known and assigning them all a value of 1 is not an adequate solution?

    Perhaps, as conclusions move further from binary (ID/Exclusion) decisions, they might fare better using non-Bayesian diagnosticity, as outlined in Jonathan Nelson's 2005 paper.
