Thursday, November 19, 2015

Marching Toward Improved Latent Fingerprint Testimony at the Army's Defense Forensic Science Center

The U.S. Army’s Defense Forensic Science Center (DFSC) has announced a change in its practice of reporting a positive association between a latent fingerprint and an exemplar. (The full notice of November 3, 2015, is reproduced below.)

The notice seems to say that it is no longer appropriate to “use the terms ‘identification’ or ‘individualization’ in technical reports and expert witness testimony to express the association of an item of evidence to a specific known source” because “these terms imply absolute certainty of the conclusion to the fact-finder which has not been demonstrated by available scientific data.” The DFSC “recognizes the importance of ensuring forensic science results are reported to the fact-finder in a manner which appropriately conveys the strength of the evidence, yet also acknowledges that absolute certainty should not be claimed based on currently available scientific data.”

All this sounds forward-looking, but are the words “based on currently available scientific data” meant to imply that the lack of “absolute certainty” is just a temporary deficiency, soon to be cured by more research? If so, the implication is mistaken. Inasmuch as all science is contingent (potentially subject to revision), no amount of research can deliver “absolute certainty.” But some propositions are nearly certain. Although we cannot be absolutely certain that the earth is the third planet orbiting the sun, we can be darned sure of it.

So what is the DFSC’s understanding of the current data? Are fingerprint analysts not allowed to say that they have made an “identification” because they cannot be third-planet-from-the-sun sure of it? Or is the policy change a reflection of substantially greater uncertainty than this?

The notice skates above the surface of these questions. However, three years ago, its author wrote an article entitled “Individualization Using Friction Skin Impressions: Scientifically Reliable, Legally Valid” in which he insisted on “the validity of testimonial claims of individualization.” At that time, he maintained that even though “[n]othing in science can ever be proven in the most absolute sense,”
It can be well agreed that the fundamental premise of friction ridge skin uniqueness has withstood considerable scrutiny since the late 17th century. Furthermore, ... friction ridge skin uniqueness is well within the bounds to be considered a scientific law that will occur invariably as a natural phenomenon, and it should be recognized, as such ... .
(Swofford 2012, p. 75). Sounds like an assertion that individualization via latent prints is third-planet-from-the-sun science. In a perceptive 2015 article, however, Mr. Swofford repudiated this traditional view of the current level of fingerprint-identification certainty as unduly defensive and detrimental to the field.*

In any event,
[T]he DFSC has modified the language which is used to express “identification” results on latent print technical reports. The revised languages [sic] is as follows: “The latent print on Exhibit ## and the record finger/palm prints bearing the name XXXX have corresponding ridge detail. The likelihood of observing this amount of correspondence when two impressions are made by different sources is considered extremely low.”
This is a step forward from an assertion that it is 100% certain that the latent print comes from XXXX’s finger. But the details of the move from absolute to partial certainty are not perfect. (What is?)

Exactly what “is considered extremely low” and by whom?  First, is the examiner saying that some number such as 0.00001 is known and that the DFSC considers this number to be extremely low? Or is the testimony that no specific figure is known, but the DFSC believes that it is within an otherwise unspecified range that is extremely low? Although asking the expert on cross-examination for the “extremely low” number should reveal that no specific number is known, would it be better to make this clear at the outset?

Second, what is the nature of the quantity that someone considers extremely low? Is it a “likelihood” or instead a probability? Colloquially, the words are synonyms, but technically, “likelihood” pertains to the hypothesis, not to the evidence. The probability of the evidence E given the hypothesis H (written P(E|H)), when summed or integrated over all possible E, must equal 1. The hypothesis is fixed, the evidence varies, and the probability attaches to the evidence. For example, if the probability of observing this “amount of correspondence” is 0.00001 “when two impressions are made by different sources,” then the probability of observing every other possible amount must total 0.99999.
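
To make the contrast concrete, here is a minimal Python sketch of the probability viewpoint. (Python is used purely for illustration; the 0.00001 figure is the hypothetical one from the preceding paragraph, not a number supplied by the DFSC.)

    # Probability viewpoint: the hypothesis is fixed, the evidence varies.
    # The 0.00001 figure is the hypothetical one from the text, not a DFSC value.
    p_E_given_H0 = {
        "this amount of correspondence": 0.00001,
        "any other amount":              0.99999,
    }

    # Summed over all possible evidence outcomes, P(E|H0) must equal 1.
    assert abs(sum(p_E_given_H0.values()) - 1.0) < 1e-12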

The concept of likelihood, however, treats the evidence as fixed and asks how strongly the fixed evidence supports possible hypotheses. The hypotheses vary, and there is no reason to believe that the sum or integral over all possible hypotheses “will be anything in particular” (Edwards 1992, p. 12). If the probability of observing this “amount of correspondence” is 0.00001 when the two impressions are made by different sources, then the probability of the same correspondence when the two impressions are made by the same source could be 0.01. Or it could be 0.05, or many other values between 0 and 1. Mathematically, the likelihood of H given E is proportional to the conditional probability of E given H, but conceptually, “the distinction between probability and likelihood is vital ... .” (Ibid.)
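
By contrast, a minimal sketch of the likelihood viewpoint, again using the hypothetical figures from the preceding paragraphs rather than any real data:

    # Likelihood viewpoint: the evidence E is fixed, the hypotheses vary.
    p_E_given_H0 = 0.00001   # P(E | different sources) -- hypothetical
    p_E_given_H1 = 0.01      # P(E | same source) -- could just as well be 0.05

    # Viewed as likelihoods of H0 and H1 given the same fixed E, these values
    # need not sum to 1 (here they sum to 0.01001); each hypothesis's likelihood
    # is merely proportional to the corresponding conditional probability of E.
    print(p_E_given_H0 + p_E_given_H1)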

Apparently, the DFSC is not using “likelihood” as statisticians do when they are thinking about the logic of statistical inference, but is simply referring to the garden-variety probability of the latent print examiner’s observations (the evidence E) conditional on the hypothesis H0 of “different sources.” This sounds a lot like traditional null hypothesis testing or like the use of a Fisherian p-value (Kaye 2015).

Such discourse is fine as far as it goes. Talking about the low probability of the level of correspondence when the prints come from different fingers is much better than asserting that the observed correspondence is utterly inconceivable under the different-source hypothesis H0 or that the same-source hypothesis is absolutely certain to be true.

Nevertheless, this kind of testimony is still incomplete. As has been discussed many times in the forensic science and statistics literature (see Can Forensic Pattern Matching be Validated?), it is necessary to consider the evidence probability under the alternative hypothesis H1. How probable is it that the latent print would have the same degree of observed correspondence to the exemplar if it originated from the finger that produced the exemplar?

The ratio of these two probabilities, P(E|H1) to P(E|H0), equals the likelihood ratio, L(H1; E) to L(H0; E). Unless the numerator P(E|H1) is 1 (a conclusion that also is not “based on currently available scientific data”), reporting only the denominator is problematic. It ignores a limitation of the evidence emphasized in two of the three references cited in support of the new policy (the 2009 NRC Report and the 2012 NIST Expert Working Group Report).
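
To see why the unreported numerator matters, consider a minimal sketch with purely hypothetical numbers (none come from the DFSC notice or from any validation study): the same “extremely low” denominator can correspond to very different likelihood ratios depending on how probable the observed correspondence is under the same-source hypothesis.

    # Hypothetical likelihood ratios sharing the same "extremely low" denominator.
    # All figures are invented for illustration only.
    p_E_given_H0 = 0.00001   # P(E | different sources), the reported quantity

    for p_E_given_H1 in (1.0, 0.9, 0.1, 0.01):   # P(E | same source), the missing quantity
        lr = p_E_given_H1 / p_E_given_H0
        print(f"P(E|H1) = {p_E_given_H1:<4}  ->  likelihood ratio = {lr:,.0f}")

    # The likelihood ratio ranges from 100,000 down to 1,000: the strength of
    # the evidence cannot be judged from the denominator alone.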

Hopefully, the DFSC and other organizations will continue to refine their method of reporting associations. The Center's Information Paper promises that “[t]he next step will be to quantify both the amount of corresponding ridge detail and the related likelihood calculations.” But the DFSC need not wait for “likelihood calculations” to acknowledge in its reports and testimony that there is some variability in latent prints from the same finger.

Note

* Henry J. Swofford, The Emerging Paradigm Shift in the Epistemology of Fingerprint Conclusions, 65 J. Forensic Identification 201, 203 (2015). Postscript: This reference and the accompanying text were added on 11/20/15 10:35 pm EST. This article is discussed briefly in a posting of 11/21/15.

References
  • A. W. F. Edwards, Likelihood: Expanded Edition (1992).
  • David H. Kaye, Presenting Forensic Identification Findings: The Current Situation, in Communicating the Results of Forensic Science Examinations 12–30 (C. Neumann et al. eds. 2015) (Final Technical Report for NIST Award 70NANB12H014).
  • Henry J. Swofford, Individualization Using Friction Skin Impressions: Scientifically Reliable, Legally Valid, 62 J. Forensic Identification 65 (2012).
Acknowledgments

Thanks to Ted Vosk for calling the Information Paper discussed here to my attention.

APPENDIX

DEPARTMENT OF THE ARMY
DEFENSE FORENSIC SCIENCE CENTER
4930 N 31ST STREET
FOREST PARK, GA 30297

CIFS-FSL-LP
03 November 2015

INFORMATION PAPER

SUBJECT: Use of the term “Identification” in Latent Print Technical Reports

1. Forensic science laboratories routinely use the terms “identification” or “individualization” in technical reports and expert witness testimony to express the association of an item of evidence to a specific known source. Over the last several years, there has been growing debate among the scientific and legal communities regarding the use of such terms within the pattern evidence disciplines to express source associations which rely on expert interpretation. Central to the debate is that these terms imply absolute certainty of the conclusion to the fact-finder which has not been demonstrated by available scientific data. As a result, several well respected and authoritative scientific committees and organizations have recommended forensic science laboratories not report or testify, directly or by implication, to a source attribution to the exclusion of all others in the world or to assert 100% certainty and state conclusions in absolute terms when dealing with population issues.

2. The Defense Forensic Science Center (DFSC) recognizes the importance of ensuring forensic science results are reported to the fact-finder in a manner which appropriately conveys the strength of the evidence, yet also acknowledges that absolute certainty should not be claimed based on currently available scientific data. As a result, the DFSC has modified the language which is used to express “identification” results on latent print technical reports. The revised languages is as follows:
"The latent print on Exhibit ## and the record finger/palm prints bearing the name XXXX have corresponding ridge detail. The likelihood of observing this amount of correspondence when two impressions are made by different sources is considered extremely low."
3. This revision to the reporting language is not the result of changes in the examination methods and does not impact the strength of the source associations. Instead, it simply reflects a more scientifically appropriate framework for expressing source associations made when evaluating latent print evidence. The next step will be to quantify both the amount of corresponding ridge detail and the related likelihood calculations. In the interim, customers should continue to maintain strong confidence in latent print examination results.

4. References:
a. National Research Council (2009). Strengthening Forensic Science in the United States: A Path Forward. National Research Council, Committee on Identifying the Needs of the Forensic Science Community. National Academies Press, Washington, D.C.

b. National Institute of Standards and Technology (2012). Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach. Expert Working Group on Human Factors in Latent Print Analysis, U.S. Department of Commerce, National Institute of Standards and Technology.

c. Garrett, R. (2009). Letter to All Members of the International Association for Identification, Feb. 19, 2009.
5. Questions regarding this information paper may be directed to Mr. Henry Swofford, Chief, Latent Print Branch, USACIL, DFSC, 404-469-5611 and Henry.J.Swofford.Civ@mail.mil.
