Columbia University has announced that "AI Discovers That Not Every Fingerprint Is Unique"! The subtitle of the January 10, 2024, press release boldly claims that
Columbia engineers have built a new AI that shatters a long-held belief in forensics–that fingerprints from different fingers of the same person are unique. It turns out they are similar, only we’ve been comparing fingerprints the wrong way!
Forensic Magazine immediately and uncritically rebroadcast the confused statements about uniqueness (quoting verbatim from the press release without acknowledgment). According to the Columbia release and Forensic Magazine, "It’s a well-accepted fact in the forensics community that fingerprints of different fingers of the same person—or intra-person fingerprints—are unique and therefore unmatchable." Forensic Magazine adds that "Now, a new study shows an AI-based system has learned to correlate a person’s unique fingerprints with a high degree of accuracy."
Does this mean that the "well-accepted fact" and "long-held belief" in uniqueness have been shattered? Clearly not. The study is about similarity, not uniqueness. In fact, uniqueness has essentially nothing to do with it. I can classify equilateral triangles drawn on a flat surface as triangles rather than as other regular polygons whether or not each triangle differs enough from the others (uniqueness within the set of triangles) for me to notice the differences. To say that objects "are unique and therefore unmatchable" is a non sequitur. A human genome is probably unique to that individual, but forensic geneticists know that six-locus STR profiles are "matchable" to those of other individuals in the population. A cold hit to a person who could not have been the source of the six-locus profile in the U.K. database occurred long ago (as was to be expected given the random-match probabilities of the genotypes).
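The arithmetic behind that expectation is simple. Here is a minimal sketch, with purely illustrative numbers rather than the actual U.K. database or profile figures, of how many adventitious "cold hits" one should expect when a crime-scene profile is compared to every profile in a database:

```python
# Illustrative sketch (hypothetical numbers, not the actual U.K. figures):
# expected coincidental matches when a crime-scene profile is searched
# against a database of non-sources, given a random-match probability (RMP).

def expected_adventitious_hits(rmp: float, database_size: int) -> float:
    """Expected number of coincidental matches among non-sources."""
    return rmp * database_size

def prob_at_least_one_hit(rmp: float, database_size: int) -> float:
    """Probability of at least one coincidental match among non-sources."""
    return 1 - (1 - rmp) ** database_size

# Hypothetical values: a six-locus RMP of 1 in 50 million and a
# database of 5 million profiles.
rmp, n = 1 / 50_000_000, 5_000_000
print(expected_adventitious_hits(rmp, n))  # 0.1 expected false hits per search
print(prob_at_least_one_hit(rmp, n))       # ~0.095
```

With many such searches, or with the larger random-match probabilities of partial profiles, an adventitious hit somewhere in the system becomes unsurprising, uniqueness of full genomes notwithstanding.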
Perhaps the myth that the study shatters is that it is impossible to distinguish fingerprints left by different fingers of the same individual X from fingerprints left by fingers of different individuals (not-X). But there is no obvious reason why this would be impossible even if every print is distinguishable from every other print (uniqueness).
The Columbia press release describes the study design this way:
[U]ndergraduate senior Gabe Guo ... who had no prior knowledge of forensics, found a public U.S. government database of some 60,000 fingerprints and fed them in pairs into an artificial intelligence-based system known as a deep contrastive network. Sometimes the pairs belonged to the same person (but different fingers), and sometimes they belonged to different people.
Over time, the AI system, which the team designed by modifying a state-of-the-art framework, got better at telling when seemingly unique fingerprints belonged to the same person and when they didn’t. The accuracy for a single pair reached 77%. When multiple pairs were presented, the accuracy shot significantly higher, potentially increasing current forensic efficiency by more than tenfold.
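For readers unfamiliar with the method, the following is a minimal sketch of what a "deep contrastive network" for pair classification can look like, assuming PyTorch. It is not the authors' modified state-of-the-art framework; it only illustrates the study design the press release describes: two fingerprint images pass through a shared encoder, and a head predicts whether they come from (different fingers of) the same person.

```python
# Minimal Siamese/contrastive pair classifier (illustrative only; not the
# authors' architecture). Requires PyTorch.

import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Shared encoder applied to both images (weights are tied).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Head that scores whether the two embeddings share a source person.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.cat([z1, z2], dim=1)).squeeze(1)  # logits

# Training minimizes binary cross-entropy on labeled pairs.
model = PairClassifier()
loss_fn = nn.BCEWithLogitsLoss()
x1 = torch.randn(8, 1, 96, 96)              # dummy fingerprint crops
x2 = torch.randn(8, 1, 96, 96)
same_person = torch.randint(0, 2, (8,)).float()
loss = loss_fn(model(x1, x2), same_person)
loss.backward()
```

As for the "multiple pairs" figure: if each pair yields a roughly independent decision that is correct 77% of the time, aggregating several such decisions (for instance, by majority vote) can push overall accuracy well above 77%, which is presumably the sense in which accuracy "shot significantly higher."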
The press release reported the following odd facts about the authors' attempts to publish their study in a scientific journal:
Once the team verified their results, they quickly sent the findings to a well-established forensics journal, only to receive a rejection a few months later. The anonymous expert reviewer and editor concluded that “It is well known that every fingerprint is unique,” and therefore it would not be possible to detect similarities even if the fingerprints came from the same person.
The team ... fed their AI system even more data, and the system kept improving. Aware of the forensics community's skepticism, the team opted to submit their manuscript to a more general audience. The paper was rejected again, but [Professor Hod] Lipson ... appealed. “I don’t normally argue editorial decisions, but this finding was too important to ignore,” he said. “If this information tips the balance, then I imagine that cold cases could be revived, and even that innocent people could be acquitted.” ...
After more back and forth, the paper was finally accepted for publication by Science Advances. ... One of the sticking points was the following question: What alternative information was the AI actually using that has evaded decades of forensic analysis? ... “The AI was not using ... the patterns used in traditional fingerprint comparison,” said Guo ... . “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.”
Proprietary fingerprint matching algorithms also do not arrive at matches the way human examiners do. They "see" different features in the patterns and tend to rank the top candidates for true matches in a database trawl differently than human experts do. Again, however, these facts about automated systems neither prove nor disprove claims of uniqueness. And theoretical uniqueness has little or nothing to do with the actual probative value of assertions of matches by humans, automated systems, or both.
Although it is not directly applicable, the day after the publicity on the Guo et al. paper I came across the following report on "Limitations of AI-based predictive models" in a weekly survey of papers in Science:
A central promise of artificial intelligence (AI) in health care is that large datasets can be mined to predict and identify the best course of care for future patients. Unfortunately, we do not know how these models would perform on new patients because they are rarely tested prospectively on truly independent patient samples. Chekroud et al. showed that machine learning models routinely achieve perfect performance in one dataset even when that dataset is a large international multisite clinical trial (see the Perspective by Petzschner). However, when that exact model was tested in truly independent clinical trials, performance fell to chance levels. Even when building what should be a more robust model by aggregating across a group of similar multisite trials, subsequent predictive performance remained poor. -- Science p. 164, 10.1126/science.adg8538; see also p. 149, 10.1126/science.adm9218
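To see how a model can look excellent within one dataset and still collapse on truly independent data, here is a toy sketch (entirely synthetic data, not Chekroud et al.'s models or trials): a classifier that latches onto a dataset-specific artifact aces within-dataset cross-validation yet falls to chance, or worse, on an independent sample where the artifact differs.

```python
# Toy illustration (synthetic data; not Chekroud et al.'s study): within-dataset
# cross-validation can wildly overestimate how a model performs on a truly
# independent sample when its "signal" is a dataset-specific artifact.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trial(n, leak_sign):
    """Simulated trial: outcome is random, but one feature leaks the label
    in a direction that differs between trials (a site-specific artifact)."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 10))
    X[:, 0] += leak_sign * (y - 0.5)   # artifact flips sign across trials
    return X, y

X_a, y_a = make_trial(1000, leak_sign=+2.0)   # "development" trial
X_b, y_b = make_trial(1000, leak_sign=-2.0)   # independent trial

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X_a, y_a, cv=5).mean())  # looks strong within-trial
print(model.fit(X_a, y_a).score(X_b, y_b))            # near or below chance
```

The same caution applies to any AI system validated only on the database it was developed from, fingerprint matchers included.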
Note: This posting was last modified on 1/12/24 2:45 PM