The mask is down, and this should lead to heated debates in the near future as many practitioners have not yet realized the earth-shattering nature of the changes. (Preface, at xi).
If you thought that fingerprint identification is a moribund and musty field, you should read the second edition of Fingerprints and Other Ridge Skin Impressions (FORSI for short), by Christophe Champod, Chris Lennard, Pierre Margot, and Milutin Stoilovic.
The first edition "observed a field that is in rapid progress on both detection and identification issues." (Preface 2003). In the ensuing 13 years, "the scientific literature in this area has exploded (over 1,000 publications) and the related professions have been shaken by errors, challenges by courts and other scientists, and changes of a fundamental nature related to previous claims of infallibility and absolute individualization." (Preface 2016, at xi).
The Scientific Method
From the outset, the authors -- all leading researchers in forensic science -- express dissatisfaction with "standard, shallow statements such as 'nature never repeats itself'" and "the tautological argument that every entity in nature is unique." (P. 1). They also dispute the claim, popular among latent print examiners, that the "ACE-V protocol" is a deeply "scientific method":
ACE-V is a useful mnemonic acronym that stands for analysis, comparison, evaluation, and verification ... . Although [ACE-V was] not originally named that way, pioneers in forensic science were already applying such a protocol (Heindl 1927; Locard 1931). ... It is a protocol that does not, in itself, give details as to how the inference is conducted. Most authors stay at this descriptive stage and leave the inferential or decision component of the process to "training and experience" without giving any more guidance as to how examiners arrive at their decisions. As rightly highlighted in the NRC report (National Research Council 2009, pp. 5-12): "ACE-V provides a broadly stated framework for conducting friction ridge analyses. However, this framework is not specific enough to qualify as a validated method for this type of analysis." Some have compared the steps of ACE-V to the steps of standard hypothesis testing, described generally as the "scientific method" (Wertheim 2000; Triplett and Cooney 2006; Reznicek et al. 2010; Brewer 2014). We agree that ACE-V reflects good forensic practice and that there is an element of peer review in the verification stage ... ; however, draping ACE-V with the term "scientific method" runs the risk of giving this acronym more weight than it deserves. (Pp. 34-35).
Indeed, it is hard to know what to make of claims that "standard hypothesis testing" is the "scientific method." Scientific thinking takes many forms, and the source of its spectacular successes is a set of norms and practices for inquiry and acceptance of theories that go beyond some general steps for qualitatively assessing how similar two objects are and what the degree of similarity implies about a possible association between the objects.
Exclusions as Probabilities
Many criminalists think of exclusions as logical deductions. They think, for example, that deductively valid reasoning shows that the same finger could not possibly be the source of two prints that are so radically different in some feature or features. I have always thought that exclusions are part of an inductive logical argument -- not, strictly speaking, a deductive one. 1/
However, FORSI points out that if the probability is zero that "the features in the mark and in the submitted print [are] in correspondence, meaning within tolerances, if these have come from the same source," then "an exclusion of common source is the obvious deductive conclusion ... ." (P. 71). This is correct. Within a Boolean logic (one in which the truth values of all propositions are 1 or 0), exclusions are deductions, and deductive arguments are either valid or invalid.
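A compact way to put this deductive reading is as modus tollens; the propositional sketch below uses my own symbols (S for a common source, C for correspondence within tolerances), not FORSI's:

```latex
% A minimal Boolean sketch of an exclusion -- symbols are mine, not FORSI's.
% S: the mark and the print come from the same source.
% C: the compared features are in correspondence (within tolerances).
\begin{align*}
  &\text{Premise 1:}~ S \rightarrow C  && \text{(equivalently, } \Pr(\lnot C \mid S) = 0\text{)}\\
  &\text{Premise 2:}~ \lnot C          && \text{(a discrepancy is observed)}\\
  &\text{Conclusion:}~ \lnot S         && \text{(exclusion, by modus tollens)}
\end{align*}
```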
But the usual articulation of what constitutes an exclusion (with probability 1) does not withstand analysis. Every pair of images has some difference in every feature (even when the images come from the same source). How does the examiner know (with probability 1) that a difference "cannot be explained other than by the hypothesis of different sources"? (P. 70). In some forensic identification fields, the answer is that the difference must be "significant." 2/
But this is an evasion. As FORSI explains,
In practice, the difficulty lies in defining what a "significant difference" actually is (Thornton 1977). We could define "significant" as being a clear difference that cannot be readily explained other than by a conclusion that the print and mark are from different sources. But it is a circular definition: Is it "significant" if one cannot resolve it by another explanation than a different source, or do we conclude to an exclusion because of the "significant" difference? (P. 71).
Fingerprint examiners have their own specialized vocabulary for characterizing differences in a pair of prints. FORSI defines the terms "exclusion" and "significant" by invoking a concept familiar (albeit unnecessary) in forensic DNA analysis -- the match window within which two measurements of what might be the same allele are said to match. In the fingerprint world, the analog seems to be "tolerance":
The terms used to discuss differences have varied over the years and can cause confusion (Leo 1998). The terminology is now more or less settled (SWGFAST 2013b). Dissimilarities are differences in appearance between two compared friction ridge areas from the same source, whereas discrepancy is the observation of friction ridge detail in one impression that does not exist in the corresponding area of another impression. In the United Kingdom, the term disagreement is also used for discrepancy and the term explainable difference for dissimilarity (Forensic Science Regulator 2015a). A discrepancy is then a "significant" difference and arises when the compared features are declared to be "out of tolerance" for the examiner, tolerances as defined during the analysis. This ability to distinguish between dissimilarity (compatible to some degree with a common source) and discrepancy (meaning almost de facto different sources) is essential and relies mainly on the examiner's experience. ... The first key question ... then becomes ... :

Q1. How probable is it to observe the features in the mark and in the submitted print in correspondence, meaning within tolerances, if these have come from the same source? (P. 71).
The phrase "almost
de facto different sources" is puzzling. "De facto" means in fact as opposed to in law. Whether a print that is just barely out of tolerance originated from the same finger always is a question of fact. I presume "almost de facto different sources" means the smallest point at which probability of being out of tolerance is so close to zero that we may as well round it off to exactly zero. An exclusion is thus a claim that it is
practically impossible for the compared features to be out of tolerance when they are in an image from the same source.
But to insist that this probability is zero is to violate "Cromwell's Rule," as the late Dennis Lindley called the admonition to avoid probabilities of 0 or 1 for empirical claims. As long as there is a non-zero probability that the perceived "discrepancy" could somehow arise -- as there always is, if only because every rule of biology could have a hitherto unknown exception -- deductive logic does not make an exclusion a logical certainty. Exclusions are probabilistic. So are "identifications" or "individualizations."
Inclusions as Probabilities
At the opposite pole from an exclusion is a categorical "identification" or "source attribution." Categorical exclusions are statements of probability -- the examiner is reporting "I don't see how these differences could exist for a common source" -- from which it follows that the hypothesis of a different source has a high probability (not that it is deductively certain to be true). Likewise, categorical "identifications" are statements of probability -- now the examiner is reporting "I don't see how all these features could be as similar as they are for different sources" -- from which it follows that the hypothesis of a common source has a high probability (not that it is certain to be true). This leaves a middle zone of inclusions in which the examiner is not confident enough to declare an identification or an exclusion and makes no effort to describe the comparison's probative value -- beyond saying "It is not conclusive proof of anything."
The idea that examiners report all-but-certain exclusions and all-but-certain inclusions ("identifications") has three problems. First, how should examiners get to these states of subjective near-certainty? Second, each report seems to involve the probability of the observed features under only a single hypothesis -- different source for exclusions and same source for inclusions. Third, everything between the zones of near-certainty gets tossed in the dustbin.
I won't get into the first issue here, but I will note FORSI's treatment of the other two. FORSI seems to accept exclusions (in the sense of near-zero probabilities for the observations given the same-source hypothesis) as satisfactory; nevertheless, for inclusions, it urges examiners to consider the probability of the observations under both hypotheses. In doing so, it adopts a mixed perspective, using a match-window p-value for the exclusion step and a likelihood ratio for an inclusion. Some relevant excerpts follow:
The above discussion has considered the main factors driving toward an exclusion (associated with question Q1); we should now move to the critical factor that will drive toward an identification, with this being the specificity of the corresponding features. ...

Considerable confusion exists among laymen, indeed also among fingerprint examiners, on the use of words such as match, unique, identical, same, and identity. Although the phrase "all fingerprints are unique" has been used to justify fingerprint identification opinions, it is no more than a statement of the obvious. Every entity is unique, because an entity can only be identical to itself. Thus, to say that "this mark and this print are identical to each other" is to invoke a profound misconception; the two might be indistinguishable, but they cannot be identical. In turn, the notion of "indistinguishability" is intimately related to the quantity and quality of detail that has been observed. This leads to distinguishing between the source variability derived from good-quality prints and the expressed variability in the mark, which can be partial, distorted, or blurred (Stoney 1989). Hence, once the examiner is confident that they cannot exclude, the only question that needs to be addressed is simply:

Q2. What is the probability of observing the features in the mark (given their tolerances) if the mark originates from an unknown individual?

If the ratio is calculated between the two probabilities associated with Q1 and Q2, we obtain what is called a likelihood ratio (LR). Q1 becomes the numerator question and Q2 becomes the denominator question. ...

In a nutshell, the numerator is the probability of the observed features if the mark is from the POI, while the denominator is the probability of the observed features if the mark is from a different source. When viewed as a ratio, the strength of the observations is conveyed not only by the response to one or the other of the key questions, but by a balanced assessment of both. ... The LR ... applies regardless of the type of forensic evidence considered and has been put at the core of evaluative reporting in forensic science (Willis 2015). The range of values for the LR is between 0 and infinity. A value of 1 indicates that the forensic findings are equally likely under either proposition and they do not help the case in one direction or the other. A value of 10,000, as an example, means that the forensic finding provides very strong support for the prosecution proposition (same source) as opposed to its alternative (the defense proposition -- different sources). A value below 1 will strengthen the case in favor of the view that the mark is from a different source than the POI. The special case of exclusion is when the numerator of the LR is equal to 0, making the LR also equal to 0. Hence, the value of forensic findings is essentially a relative and conditional measure that helps move a case in one direction or the other depending on the magnitude of the LR. The explicit formalization of the problem in the form of a LR is not new in the area of fingerprinting and can be traced back to Stoney (1985). (P. 75)
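Written out, the Q1/Q2 structure from the quoted passage looks like this; the numerical values are hypothetical, chosen by me only to echo the 10,000 example above:

```latex
% The LR built from FORSI's two questions. The numbers are hypothetical,
% for illustration only; they are not taken from FORSI.
\[
  \mathrm{LR}
  = \frac{\Pr(\text{observed features, within tolerances} \mid \text{same source})}
         {\Pr(\text{observed features, within tolerances} \mid \text{different source})}
  = \frac{\text{answer to } Q1}{\text{answer to } Q2}
\]
% If Q1 = 0.99 and Q2 = 0.0001, then LR = 0.99 / 0.0001 = 9,900: strong support
% for a common source. If Q1 = 0, then LR = 0: the special case of an exclusion.
\[
  \frac{0.99}{0.0001} = 9{,}900, \qquad \frac{0}{0.0001} = 0.
\]
```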
In advocating a likelihood ratio (albeit one for an initial "exclusion" with poorly defined statistical properties), FORSI is at odds with historical practice. That practice, as we saw, demands near certainty if a comparison is to be labelled an "identification" or an "exclusion." In the middle range, examiners "report 'inconclusive' without any other qualifiers of the weight to be assigned to the comparison." (P. 98). FORSI disapproves of this "peculiar state of affairs." (P. 99). It notes that
Examiners could, at times, resort to terms such as "consistent with," "points consistent with," or "the investigated person cannot be excluded as the donor of the mark," but without offering any guidance as to the weight of evidence [see, for example, Maceo (2011a)]. In our view, these expressions are misleading. We object to information formulated in such broad terms that may be given more weight than is justified. These terms have been recently discouraged in the NRC report (National Research Council 2009) and by some courts (e.g., in England and Wales R v. Puaca [2005] EWCA Crim 3001). And this is not a new debate. As early as 1987, Brown and Cropp (1987) suggested to avoid using the expressions "match," "identical," and "consistent with."
There is a need to find appropriate ways to express the value of findings. The assignment of a likelihood ratio is appropriate. Resorting to the term "inconclusive" deprives the court of information that may be essential. (P. 99).
The Death of "Individualization" and the Sickness of "Individual Characteristics"
The leaders of the latent print community have all but abandoned the notion of "individualization" as a claim that one and only one finger that ever existed could have left the particular print. (Judging from public comments to the National Commission on Forensic Science, however, individual examiners are still comfortable with such testimony.) FORSI explains:
In the fingerprint field, the term identification is often used synonymously with individualization. It represents a statement akin to certainty that a particular mark was made by the friction ridge skin of a particular person. ... Technically, identification refers to the assignment of an entity to a specific group or label, whereas individualization represents the special case of identification when the group is of size 1. ... [Individualization] has been called the Earth population paradigm (Champod 2009b). ... Kaye (2009) refers to "universal individualization" relative to the entire world. But identification could also be made without referring to the Earth's population, referring instead to a smaller subset, for example, the members of a country, a city, or a community. In that context, Kaye talks about "local individualization" (relative to a proper subset). This distinction between "local" and "global" was used in two cases ... [W]e would recommend avoiding using the term "individualization." (P. 78).
The whole-earth definition of "individualization" also underlies the hoary distinction in forensic science between "class" and "individual" characteristics. But a concatenation of class characteristics can be extremely rare and hence of probative value similar to that of putatively individual characteristics, and one cannot know a priori that "individual" characteristics are limited to a class of size 1. In the fingerprinting context, FORSI explains that
In the literature, specificity was often treated by distinguishing "class" characteristics from "individual" characteristics. Level 1 features would normally be referred to as class characteristics, whereas levels 2 and 3 deal with "individual" characteristics. That classification had a direct correlation with the subsequent decisions: only comparisons involving "individual" characteristics could lead to an identification conclusion. Unfortunately, the problem of specificity is more complex than this simple dichotomy. This distinction between "class" and "individual" characteristics is just a convenient, oversimplified way of describing specificity. Specificity is a measure on a continuum (probabilities range from 0 to 1, without steps) that can hardly be reduced to two categories without more nuances. The term individual characteristic is particularly misleading, as a concordance of one minutia (leaving aside any consideration of level 3 features) would hardly be considered as enough to identify. The problem with this binary categorization is that it encourages the examiner to disregard the complete spectrum of feature specificity that ranges from low to high. It is proposed that specificity at each feature level be studied without any preconceived classification of its identification capability by itself. Indeed, nothing should prevent a specific general pattern -- such as, for example, an arch with continuous ridges from one side to the other (without any minutiae) -- from being considered as extremely selective, since no such pattern has been observed to date. (P. 74)
FORSI addresses many other topics -- errors, fraud, automated matching systems, probabilistic systems, chemical methods for detection of prints, and much more. Anyone concerned with latent-fingerprint evidence should read it. Those who do will see why the authors express views like these:
Over the years, the fingerprint community has fostered a state of laissez-faire that left most of the debate to the personal informed decisions of the examiner. This state manifests itself in the dubious terminology and semantics that are used by the profession at large ... . (P. 344).
We would recommend, however, a much more humble way of reporting this type of evidence to the decision maker. Fingerprint examiners should be encouraged to report all their associations by indicating the degree of support the mark provides in favor of an association. In that situation, the terms "identification" or "individualization" may disappear from reporting practices as we have suggested in this book. (P. 345).
Notes
1/ David H. Kaye, Are "Exclusions" Deductive and "Identifications" Merely Probabilistic?, Forensic Sci., Stat. & L., Apr. 28, 2017, http://for-sci-law.blogspot.com/2017/04/
2/ E.g., SWGMAT, Forensic Paint Analysis and Comparison Guidelines 3.2.9 (2000), available at https://drive.google.com/file/d/0B1RLIs_mYm7eaE5zOV8zQ2x5YmM/view