Sunday, February 8, 2015

"Remarkably Accurate": The Miami-Dade Police Study of Latent Fingerprint Identification (Pt. 1)

A week ago (Feb. 2, 2015), the Justice Department issued a press release entitled "Fingerprint Examiners Found to Have Very Low Error Rates." According to the Department:
A large-scale study of the accuracy and reliability of decisions made by latent fingerprint examiners found that examiners make extremely few errors. Even when examiners did not get an independent second opinion about the decisions, they were remarkably accurate. But when decisions were verified by an independent reviewer, examiners had a 0% false positive, or incorrect identification, rate and a 3% false negative, or missed identification, rate. ... “The results from the Miami-Dade team address the accuracy, reliability, and validity in the forensic science disciplines, ...” said Gerald LaPorte, Director of NIJ’s Office of Investigative and Forensic Sciences.
Inasmuch as the researchers -- latent print examiners and a police commander in the Miami-Dade Police Department 1/ -- studied the performance of only 109 latent print examiners, it is not clear how many forensic science disciplines the study actually addresses. Nor is it obvious what "validity" means (beyond "accuracy") in this one activity.
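One statistical point worth keeping in mind when reading "0% false positive rate": an observed rate of zero does not mean the true error rate is zero. The best one can say is that the true rate lies below an upper confidence bound that shrinks with the number of comparisons performed, a number the press release does not report. A minimal sketch of the exact (Clopper-Pearson) one-sided bound, with purely illustrative denominators since the actual comparison counts are not given in the release:

```python
def upper_bound_95(n_comparisons: int) -> float:
    """Exact one-sided 95% upper confidence bound on the true error
    rate when 0 errors are observed in n_comparisons trials.
    Solves (1 - p)^n = 0.05 for p (the Clopper-Pearson bound)."""
    return 1 - 0.05 ** (1 / n_comparisons)

# These denominators are hypothetical; the press release does not say
# how many non-mated comparisons produced the observed 0% rate.
for n in (100, 1000, 5000):
    print(f"0 errors in {n:>5} comparisons -> "
          f"true rate could be as high as {upper_bound_95(n):.2%}")
```

For roughly 100 comparisons, the true false-positive rate could still be near 3%; it takes thousands of error-free comparisons before the bound falls well below 1%.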

But let's put press releases to the side and look into the study itself. The authors assert that
The foundation of latent fingerprint identification is that friction ridge skin is unique and persistent. Through the examination of all of the qualitative and quantitative features available in friction ridge skin, impressions can be positively identified or excluded to the individual that produced it. 2/
This study does next to nothing to validate this foundation. The premise of uniqueness is very difficult to validate, and this study is limited to "80 latent prints with varying quantity and quality of information from [a grand total of] ten known sources." 3/ But, to its credit, the research does tell us about the ability of one large group of examiners to correctly and reliably pair these particular latent prints to the more complete known prints of the fingers that generated them. Let's see how much it reveals in this regard.

The Test Set

As for the prints used in the experiment, "[a] panel of three International Association for Identification (IAI) certified latent print examiners independently examined and compared the 320 latent prints to the known standards and scored each latent print and subsequent comparison to their known standard according to a rating scale that was designed and used for this research; 80 were selected as the final latent prints to be used for testing purposes." 4/ The purpose of the three independent examinations was to rate the latent-known pairs on a difficulty scale "in order to present the participants with a broad range of latent print examinations that were representative of actual casework." 5/ Although the researchers may well have succeeded in fashioning a test set with pairs of varying difficulty, the report does not explain how they knew that this set was "representative of actual casework" and that "[t]he test sets utilized in this study were similar to the work that participants perform on a daily basis." 6/ Neither did they report how consistently the three uber-experts gauged the difficulty of the pairs.

The Examiners Who Were Tested

It seems that readers of the Miami-Dade report must take on faith the assertion that the test set is "representative of actual casework." In contrast, it is plain that the test subjects are not representative of all caseworkers. Rather than seek a random sample of all practicing latent print examiners -- which would be a difficult undertaking -- the researchers chose a convenience sample. Only "[l]atent print examiners in the United States who were an active member [sic] of the IAI received an email invitation from the MDPD FSB inviting them to participate in this study." 7/ Inasmuch as IAI membership is a mark of professional engagement, the sampling frame diverges from the population of all examiners. Departing from good statistical practice, the report does not state how large the nonresponse rate for the IAI invitees was. If it was high (as seems probable), the sample of examiners is likely to be a biased sample of even the IAI members.

In addition to soliciting participation from IAI-certified examiners, "[a]pplications were also made available to any qualified latent print examiner, regardless of affiliation with a professional organization." 8/ How this was done is not explained, but in the end, 55% of the subjects were not IAI-certified. 9/
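The concern about self-selection is easy to illustrate numerically. In the toy simulation below, every quantity -- the population size, the spread of individual error rates, and the assumption that more proficient examiners are likelier to volunteer -- is invented for illustration; the point is only that under such an assumption, the volunteers' average error rate understates the population's.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 examiners, each with an invented
# per-comparison error probability drawn uniformly from 0% to 10%.
population = [random.uniform(0.0, 0.10) for _ in range(10_000)]

# Assume examiners with lower error rates are more likely to volunteer:
# response probability falls linearly from 60% (best) to 10% (worst).
volunteers = [p for p in population if random.random() < 0.60 - 5.0 * p]

pop_mean = sum(population) / len(population)
vol_mean = sum(volunteers) / len(volunteers)
print(f"population mean error rate: {pop_mean:.3f}")
print(f"volunteer mean error rate:  {vol_mean:.3f}")
```

Under these assumed numbers, the volunteers' mean error rate comes out meaningfully below the population mean. Any study that reports only the volunteers' performance would, in this scenario, paint too rosy a picture of the profession as a whole.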

Of course, these features of the sampling method do not deprive the study of all value. The experiment shows what a set of motivated examiners (volunteers) with high representation from IAI-certified examiners achieved when they (1) knew that their performance would be used in a report on the capabilities of their profession, (2) had an unspecified period of time to work, and (3) may not have always worked alone on the test materials. In the next posting on the study, I will describe these results.


  1. The only description of the authors in the report is on the title page, which identifies them as Igor Pacheco, CLPE (MDPD), Brian Cerchiai, CTPE (MDPD), and Stephanie Stoiloff, MS (MDPD). The International Association for Identification lists the first two authors as certified latent print examiners as of Dec. 4, 2014. Mr. Cerchiai is also an IAI-certified tenprint examiner. The third author is a senior police bureau commander in the Forensic Services Bureau of the Miami-Dade Police Department (MDPD). In July 2012, she testified before the Senate Judiciary Committee on behalf of the International Association of Chiefs of Police that "[f]orensic science is not the floundering profession that some may portray it to be."
  2. Igor Pacheco, Brian Cerchiai & Stephanie Stoiloff, Miami-Dade Research Study for the Reliability of the ACE-V Process: Accuracy & Precision in Latent Fingerprint Examinations, Final Technical Report, Award No. 2010-DN-BX-K268, Dec. 2014 (abstract).
  3. Id. The latent prints were not just from fingers. Some were palm prints.
  4. Id. at 24.
  5. Id. at 27.
  6. Id. at 35.
  7. Id. at 34.
  8. Id. at 35.
  9. Id. at 51.
Related Postings
  • Reports on studies in mainstream journals can be found on this blog under the labels "fingerprint" and "error."
