Love argued that latent fingerprint analysis lacks the scientific or other foundation for admissibility under the Federal Rules of Evidence. The district judge, M. Margaret McKeown, held a pretrial hearing at which Ruth, who "possesses degrees in genetic engineering and in forensic and biological anthropology," id. at *9, was the only witness. In her (inexplicably unreported) opinion, Judge McKeown described the testimony on the risk of error as follows:
Latent fingerprint examiners have sometimes stated that their analyses have a zero error rate. . . . Ruth did not so testify. Rather, she stated that the ACE–V methodology is unbiased and that the methodology itself introduces no random error. Errors occur, but those errors are human errors resulting from human implementation of the ACE–V process. . . . Because human errors are nonsystematic, Ruth believes that there is no overall predictive error rate in latent fingerprint analysis.

Id. at *5. This sounds like gobbledygook to me. To begin with, the FBI’s continued insistence that there is an ACE–V “methodology” separate from the human beings who do the analysis, comparison, and evaluation is mystifying. As a NIST expert working group on human factors in fingerprinting recently wrote, human errors are a function of a system that includes human beings. Such a system can be structured to favor errors of one type—false identifications—or of the other—false eliminations. The Noblis-FBI study (see the postings of April 26, 27, 30, and May 1) indicates that in the analysis phase, examiners systematically discard prints that contain useful information. In the evaluation phase, they systematically err in favor of false eliminations over false identifications.
Moreover, the reasoning that “[b]ecause human errors are nonsystematic, ... there is no overall predictive error rate in latent fingerprint analysis” is hard to decipher. Long-term features of random (nonsystematic) processes are predictable. If a laboratory’s examiners make false identifications and false eliminations, each with a constant probability of 0.001, then the “human errors are nonsystematic,” but the 0.001 error rate still can be used to predict the incidence of errors in casework. Ms. Ruth probably meant that error probabilities are not constant and that existing data do not permit realistic or accurate statistical modeling of errors in a laboratory.
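To put a number on this, here is a minimal sketch in Python (my own illustration; the 0.001 error probability and the 100,000-comparison caseload are assumptions, not figures from the case) showing that a “nonsystematic” process with a constant error probability still yields a predictable long-run error count:

```python
# Sketch: independent "human errors" with a constant probability p are
# nonsystematic, yet their long-run incidence is predictable (binomial mean).
import random

p = 0.001          # assumed constant false-identification probability
n = 100_000        # hypothetical caseload of comparisons
random.seed(1)     # for a reproducible illustration

expected = n * p   # binomial expectation: about 100 errors
simulated = sum(random.random() < p for _ in range(n))

print(f"expected false identifications:  {expected:.0f}")
print(f"count in one simulated caseload: {simulated}")
# Both figures land near 100 -- a stable, predictive error rate of 0.1%,
# "nonsystematic" errors notwithstanding.
```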
In any event, a better argument is available. It is clear that fingerprint examinations can produce useful information—they can make correct source attributions at levels far greater than chance alone would produce. The opinion refers to
very few . . . cases in which an examiner identifies a latent print as matching a known print even though the two prints were actually made by different individuals. Most significantly, the May 2011 [Noblis-FBI] study of the performance of 169 fingerprint examiners revealed a total of six false positives among 4,083 comparisons of non-matching fingerprints for “an overall false positive rate of 0.1%.” See Ulery et al., supra, at 7733, 7735.

Id. Of course, the 0.1% figure from a controlled experiment in which the examiners knew they were being tested might not be “predictive” of the rates in casework. An experiment reveals what happens under the conditions of the experiment; generalizing to other situations takes judgment. Apparently, Ms. Ruth thought that the error rate in practice would be smaller “because the prints used in the study were of relatively low quality.” Id. And the court noted that verification by a second examiner should reduce the rate still more. Id.
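Even taken on its own terms, the experiment’s six false positives in 4,083 nonmated comparisons pin down the underlying error probability only loosely. A hedged sketch (my own arithmetic, using only the counts quoted in the opinion) of an exact binomial confidence interval:

```python
# Clopper-Pearson (exact binomial) 95% interval for the false-positive
# probability, given 6 false positives in 4,083 nonmated comparisons.
from scipy.stats import beta

k, n = 6, 4083
alpha = 0.05

point = k / n                                  # about 0.0015, i.e., roughly 0.1%
lower = beta.ppf(alpha / 2, k, n - k + 1)      # exact lower bound
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)  # exact upper bound

print(f"point estimate: {point:.3%}")
print(f"95% interval:   ({lower:.3%}, {upper:.3%})")
# The interval runs from roughly 0.05% to 0.3% -- even before asking whether
# test-taking conditions generalize to casework, the data alone leave
# appreciable uncertainty about the underlying error probability.
```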
The observed error rate of 0.1%, the court wrote, was only “marginally higher than the rates the FBI has previously estimated.” Id. The only previous statistic that the court cites is “1 in 11 million cases.” Id. The difference between 1/1,000 and 1/11,000,000—a factor of 11,000—strikes me as more serious than the term “marginal” connotes. The government ought to devote more resources to reducing the risk of error if the chance of a false identification is as high as 1/1,000 than if it is a mere 1/11,000,000. The former may be "very low," but the latter is incredibly low.
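A back-of-the-envelope comparison makes the gap vivid (the one-million-comparison caseload below is hypothetical, chosen only for illustration):

```python
# Expected false identifications per million comparisons under the two rates
# the opinion treats as only "marginally" different.
caseload = 1_000_000

for label, p in [("study rate, 1/1,000", 1 / 1_000),
                 ("prior FBI estimate, 1/11,000,000", 1 / 11_000_000)]:
    print(f"{label}: about {caseload * p:,.2f} expected errors")
# About 1,000 errors versus about 0.09 errors -- a factor of 11,000, which
# is hard to square with the word "marginal."
```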
Furthermore, historically derived error statistics such as 1/11,000,000 are highly problematic in the absence of any mechanism that would uncover possible errors in run-of-the-mill cases. Consequently, and contrary to the view expressed in the opinion, one should not regard a supervisor's recollections of errors and the outcomes of investigations of occasional complaints of misidentifications as "evidence suggest[ing] that the rate is much lower than that figure [of 0.1%]." Id. at *6.
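A toy calculation (the numbers are assumptions, chosen only for illustration) shows how heavily such statistics depend on the detection mechanism:

```python
# If errors surface only when someone happens to complain and an
# investigation confirms the mistake, the historical record understates the
# true error rate by the detection probability.
true_rate = 1 / 1_000    # hypothetical true false-identification rate
detection = 0.01         # assume just 1% of errors are ever uncovered

historical = true_rate * detection
print(f"apparent historical rate: 1 in {1 / historical:,.0f}")
# The books would show roughly 1 error in 100,000 cases even though the true
# rate is 1 in 1,000. A figure like 1/11,000,000 may say more about how
# rarely errors are detected than about how rarely they occur.
```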
Appendix
Love is the first case to generate a widely available opinion discussing the Noblis-FBI study. The opinion is reproduced below:
United States v. Love, No. 10cr2418–MMM, 2011 WL 2173644 (S.D. Cal. June 1, 2011) (not reported)
ORDER REGARDING DONNY LOVE, SR.'S MOTION TO EXCLUDE LATENT FINGERPRINT TESTIMONY
Honorable M. MARGARET McKEOWN, District Judge.
The government charged Donny Love, Sr. with crimes related to the May 4, 2008 bombing of the federal courthouse in San Diego. A trial on those charges began on May 23, 2011. Before trial, Love moved to exclude the testimony of Robin Ruth, the standards and practices program manager of the Federal Bureau of Investigation's (“FBI”) latent fingerprint unit. Love argued that latent fingerprint analysis generally, and therefore Ruth's specific testimony about fifteen latent prints she analyzed for this case, are insufficiently reliable for admission under Federal Rule of Evidence 702 and the Supreme Court's opinions in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), and Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999). At Love's request, on May 16, 2011, the court held a Daubert hearing regarding the latent fingerprint evidence. Ruth testified at that hearing regarding latent fingerprint analysis generally, the FBI's practices, and the analyses she performed in the course of her work on this case. No other witnesses were called to testify at the hearing. Because the May 16 hearing took place one week before trial, in order to provide sufficient notice of the ruling, the court provided an oral ruling at the close of the hearing and denied Love's motion to exclude the latent fingerprint testimony. The court also gave a brief explanation of that ruling, but stated that it would issue a written order elaborating on its reasons for denying Love's motion. This order provides that further explanation. To the extent anything in this order differs from the court's oral ruling of May 16, 2011, this order supersedes the oral ruling.
I. Latent Fingerprint Analysis
There are, broadly speaking, two kinds of fingerprints. “Rolled,” “full,” or “known” prints are taken under controlled circumstances, often for official use. “Latent” prints, by contrast, are left—often unintentionally—on the surfaces of objects touched by a person. Latent fingerprint examiners compare unidentified latent prints to known prints as a means of identifying the person who left the latent print.
In United States v. Mitchell, 365 F.3d 215 (3d Cir.2004), the Third Circuit provided a succinct introduction to the terminology used by latent fingerprint examiners and to the methodology employed by FBI examiners. In that court's words,
[f]ingerprints are left by the depositing of oil upon contact between a surface and the friction ridges of fingers. The field uses the broader term “friction ridge” to designate skin surfaces with ridges evolutionarily adapted to produce increased friction (as compared to smooth skin) for gripping. Thus toeprint or handprint analysis is much the same as fingerprint analysis. The structure of friction ridges is described in the record before us at three levels of increasing detail, designated as Level 1, Level 2 and Level 3. Level 1 detail is visible with the naked eye; it is the familiar pattern of loops, arches, and whorls. Level 2 detail involves “ridge characteristics”--the patterns of islands, dots, and forks formed by the ridges as they begin and end and join and divide. The points where ridges terminate or bifurcate are often referred to as “Galton points,” whose eponym, Sir Francis Galton, first developed a taxonomy for these points. The typical human fingerprint has somewhere between 75 and 175 such ridge characteristics. Level 3 detail focuses on microscopic variations in the ridges themselves, such as the slight meanders of the ridges (the “ridge path”) and the locations of sweat pores. This is the level of detail most likely to be obscured by distortions.
The FBI ... uses an identification method known as ACE–V, an acronym for “analysis, comparison, evaluation, and verification.” The basic steps taken by an examiner under this protocol are first to winnow the field of candidate matching prints by using Level 1 detail to classify the latent print. Next, the examiner will analyze the latent print to identify Level 2 detail (i.e., Galton points and their spatial relationship to one another), along with any Level 3 detail that can be gleaned from the print. The examiner then compares this to the Level 2 and Level 3 detail of a candidate full-rolled print (sometimes taken from a database of fingerprints, sometimes taken from a suspect in custody), and evaluates whether there is sufficient similarity to declare a match.
Id. at 221–22. At the evaluation step, the examiner may reach three conclusions—that the prints are a match (“identification”), that the prints are not a match (“exclusion”), or that more information is needed (“inconclusive”). See Hearing Ex. 1, at 42. “In the final step, the match is independently verified by another examiner.” Mitchell, 365 F.3d at 222. It bears noting that since Mitchell was decided in 2004, there have been important refinements and developments with respect to latent print identification, as documented in testimony before the court.
II. General Admissibility of Latent Fingerprint Evidence
As with all scientific or technical evidence, latent fingerprint evidence is admissible in court if it “will assist the trier of fact to understand or to determine a fact in issue” and “if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.” Fed.R.Evid. 702. “Many factors ... bear on the [reliability] inquiry.” Daubert, 509 U.S. at 592–93. The Supreme Court has identified several factors that may often be relevant. These include whether the “technique can be (and has been) tested,” “[w]hether it has been subjected to peer review and publication,” the “known or potential rate of error,” “whether there are standards controlling the technique's operation,” and “whether the ... technique enjoys general acceptance within a relevant scientific community.” Kumho Tire, 526 U.S. at 149–50 (internal quotation marks and alteration omitted); accord Daubert, 509 U.S. at 593–94. Nevertheless, “the test of reliability is flexible, and Daubert 's list of specific factors neither necessarily nor exclusively applies to all experts or in every case.” Kumho, 526 U.S. at 141 (internal quotation marks omitted).
“The test under Daubert is not the correctness of the expert's conclusions but the soundness of [her] methodology.” Primiano v. Cook, 598 F.3d 558, 564 (9th Cir.2010). As a result, “the role of the courts in reviewing proposed expert testimony is to analyze expert testimony in the context of its field to determine if it is acceptable science.” Boyd v. City & Cnty. of S.F., 576 F.3d 938, 946 (9th Cir.2009). So long as “an expert meets the threshold established by Rule 702 as explained in Daubert, the expert may testify and the jury decides how much weight to give that testimony.” Primiano, 598 F.3d at 565. Put another way, although the court “must ensure that ‘junk science’ plays no part in the [jury's] decision,” Elsayed Mukhtar v. Cal. State Univ., Hayward, 299 F.3d 1053, 1063 (9th Cir.2002), “[s]haky but admissible evidence is to be attacked by cross examination, contrary evidence, and attention to the burden of proof, not exclusion,” Primiano, 598 F.3d at 564.
In this case, the parties discuss each of the five factors mentioned in Daubert and Kumho. They also address two additional factors drawn from United States v. Downing, 753 F.2d 1224 (3d Cir.1985)—namely, whether latent fingerprint analysis has a “relationship to ... established modes of scientific analysis” and whether it has “non-judicial uses.” Id. at 1238–39. The court addresses each factor in turn.
A. Testing
The first factor discussed in Daubert and Kumho asks whether a methodology can be tested and whether it has been tested. The parties do not dispute that the reliability of latent fingerprint analysis can be tested, and the record reveals three categories of potential tests. First, latent fingerprint analysis rests on two hypotheses—that fingerprints are unique to an individual and that prints persist, which is to say that they do not change over the course of an individual's life.\1/ These hypotheses are open to testing. The uniqueness hypothesis would be falsified if two people were found to share identical fingerprints, while the persistence hypothesis would be falsified if the same finger produced prints with details that evolved over time. Second, it is possible, at least in principle, to learn “the prevalence of different ridge flows and crease patterns” and “ridge formations and clusters of ridge formations” across individuals. See National Research Council of the National Academies, Strengthening Forensic Science in the United States: A Path Forward 144 (2009) (“NAS Report”), available at: http://www.nap.edu/catalog/12589.html. This information would facilitate estimates of the reliability of the conclusion that two specific fingerprints are from the same individual. See Opp'n, Ex. B, at 8. Third, it is possible to test the reliability of conclusions reached by a given fingerprint analyst through controlled examinations.
1. At the hearing, Ruth testified that permanent scars alter an individual's fingerprints and thus are a known exception to the persistence hypothesis.
The fact that latent fingerprint analysis can be tested for reliability, without more, allows the first Daubert “factor to weigh in support of admissibility.” See Mitchell, 365 F.3d at 238. Ruth also testified, however, that at least some actual testing and research has been performed along each of the three dimensions discussed above. Testing of the uniqueness and persistence hypotheses dates to the eighteenth century and includes, among other things, a simple longitudinal study performed by Sir Francis Galton in the 1890s, a 1982 study of twins, and contemporary Bayesian statistical models. See Hearing Ex. 1, at 26–28; see also Mitchell, 365 F.3d at 236 & n. 16 (discussing a test of 50,000 fingerprints for uniqueness). Some recent statistical models also bear on the distribution of particular ridge characteristics across the population as a whole. See Opp'n, Ex. B, at 8 (citing three additional studies). Finally, several studies of the performance of fingerprint examiners have been performed. The most recent such study was published in May 2011. See Bradford T. Ulery et al., “Accuracy and Reliability of Forensic Latent Fingerprint Decisions,” 108 Proceedings of the National Academy of Sciences 7733 (May 10, 2011); see also Hearing Ex. 1, at 45 (citing five additional articles). The FBI also conducts proficiency examinations of its examiners, which—even if taken under conditions that “do not accurately represent [those] encountered in the field”—are of some value in assessing the reliability of individual examiners. See United States v. Baines, 573 F.3d 979, 990 (10th Cir.2009).
The court recognizes that the NAS Report called for additional testing to determine the reliability of latent fingerprint analysis generally and of the ACE–V methodology in particular. See NAS Report at 143–44. The Report also questions the validity of the ACE–V method. See id. at 142. However, Daubert, Kumho, and Rule 702 do not require “absolute certainty,” Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311, 1316 (9th Cir.1995) (opinion on remand); instead, they ask whether a methodology is testable and has been tested. On this record, the court finds that latent fingerprint analysis can be tested and has been subject to at least a modest amount of testing—some of which, like the study published in May 2011, was apparently undertaken in direct response to the NAS's concerns. The court therefore concludes that this factor weighs in favor of admitting latent fingerprint evidence. See Baines, 573 F.3d at 990 (“[W]hile we must agree with defendant that this record does not show that the technique has been subject to testing that would meet all of the standards of science, it would be unrealistic in the extreme for us to ignore the countervailing evidence.”).
B. Peer Review and Publication
The second factor in Daubert and Kumho concerns whether a methodology has been subject to publication and peer review. Publications regarding latent fingerprint analysis are relatively few in number. The government introduced into evidence a list of roughly thirty publications touching on various aspects of the field. See Hearing Ex. 1, at 66–69. Love's moving papers also contain citations to a handful of other relevant publications. See Mot. at 19–21. Although limited in quantity, many of these publications appear to “address ... theoretical/foundational questions” in latent fingerprint analysis. Mitchell, 365 F.3d at 239. The articles are therefore relevant to the reliability inquiry.
Ruth stated that at least one of these articles was published in a peer-reviewed journal, and the government's brief cites three other examples of peer-reviewed work. See Opp'n at 17. Even assuming that all of the articles cited by Ruth and the parties are peer-reviewed, latent fingerprint analysis would have only a small fraction of the number of peer reviewed publications found in established sciences. Nonetheless, because there are a handful of publications that concern the reliability of latent fingerprint analysis, at least a few of which are peer-reviewed, this factor is either neutral or weighs slightly in favor of admissibility. See, e.g., Boyd, 576 F.3d at 946 (affirming the admission of evidence of a theory supported by fourteen publications, ten of which were peer-reviewed); United States v. Prime, 431 F.3d 1147, 1153–54 (9th Cir.2005) (admitting handwriting analysis evidence in part because of publications that were peer reviewed by other forensic scientists).
C. Error Rates
The third factor deals with the known or potential error rate of a methodology. Latent fingerprint examiners have sometimes stated that their analyses have a zero error rate. See, e.g., NAS Report at 143. Ruth did not so testify. Rather, she stated that the ACE–V methodology is unbiased and that the methodology itself introduces no random error. Errors occur, but those errors are human errors resulting from human implementation of the ACE–V process. See Hearing Ex. 1, at 47–52. Because human errors are nonsystematic, Ruth believes that there is no overall predictive error rate in latent fingerprint analysis.
Ruth's testimony does not mean that the ACE–V process is perfect, or even that it is necessarily the best possible process for identifying latent prints. See NAS Report at 143 (“The [ACE–V] method, and the performance of those who use it, are inextricably linked, and both involve multiple sources of error (e.g., errors in executing the process steps, as well as errors in human judgment).”). Nevertheless, all of the relevant evidence in the record before the court suggests that the ACE–V methodology results in very few false positives—which is to say, very few cases in which an examiner identifies a latent print as matching a known print even though the two prints were actually made by different individuals. Most significantly, the May 2011 study of the performance of 169 fingerprint examiners revealed a total of six false positives among 4,083 comparisons of non-matching fingerprints for “an overall false positive rate of 0.1%.” See Ulery et al., supra, at 7733, 7735. This false positive rate is marginally higher than the rates the FBI has previously estimated. See, e.g., Baines, 573 F.3d at 990–91 (recounting testimony suggesting that the FBI's error rate was 1 in 11 million cases). That discrepancy may be partially due to the fact that identifications in the study were not subject to verification.\2/ At least in Ruth's estimation, the error rate in the study may also have been marginally higher because the prints used in the study were of relatively low quality.\3/ In any case, a false positive rate of 0.1% remains quite low.\4/
2. No two individuals made the same false positive conclusion. See Ulery et al., supra, at 7735.
3. The examiners who participated in the study generally said that the prints in the study were similar in quality to those they encounter in their work. See Ulery et al., supra, at 7734.
4. The false negative rate in the May 2011 study was somewhat higher. See Ulery et al., supra, at 7733, 7736. The rate of false negatives is, however, not relevant here insofar as Ruth's testimony will identify various fingerprints as belonging to Love or others. See Mitchell, 365 F.3d at 239. The FBI also concluded that two fingerprints in this case do not match those of Love or of the three individuals who pled guilty to the bombing. However, Love's brief does not specifically challenge the ACE–V methodology on the basis of its false negative rate, perhaps because a higher rate of false negatives could result if examiners take special care not to misidentify prints. See id. at 239 n. 19. The rate of false negatives may therefore be inversely related to the rate of false positives.
To counter the evidence of a low false positive rate, Love relies heavily on a high-profile misidentification made by the FBI in its investigation into the terrorist bombing of a train in Madrid. However, one confirmed misidentification is in no way inconsistent with an exceedingly low rate of error. Nor is the only other evidence on which Love relies. An examiner who served for fourteen years on a board that investigates complaints of misidentification has stated that he knew of twenty-five to thirty misidentifications that occurred in the United States during those fourteen years. See Mot. at 57 (citing Office of the Inspector General, A Review of the FBI's Handling of the Brandon Mayfield Case 137 (2006), available at: http://www.justice.gov/oig/special/s0601/final.pdf). Of course, any misidentification is troublesome. Without more foundation, however, this statement does not translate into a quantifiable error rate.\5/ The Inspector General's report does not include a benchmark for comparison.
5. For example, some rudimentary math suggests that this statement may imply a very low error rate. Ruth testified that she has made 1,200 identifications in six years as an examiner. At the same rate, she would perform 2,800 identifications over fourteen years, and it would take only ten examiners to perform 28,000 identifications over that time—roughly the number needed for 25–30 false positives assuming a 0.1% false positive rate. Because there are undoubtedly many more than ten latent fingerprint examiners in the United States, the examiner's statement is, if anything, indicative of a false positive rate lower than 0.1%.
The court acknowledges that, as Ruth testified, historical error rates do not necessarily reflect the predictive error rate that should be applied to identifications made by any given examiner. See also United States v. Cordoba, 194 F.3d 1053, 1059 (9th Cir.1999) (discussing polygraphs). However, there is no evidence in the record to suggest that the rate of misidentifications made by latent fingerprint examiners is greater than an average of 0.1%, and some evidence suggests that the rate is much lower than that figure. The court therefore concludes that the error rate favors admission of latent fingerprint evidence. Accord, e.g., United States v. John, 597 F.3d 263, 275 (5th Cir.2010); Mitchell, 365 F.3d at 241; see also United States v. Chischilly, 30 F.3d 1444, 1454–55 (9th Cir.1994) (“conclud[ing] that there was a sufficient showing of low error rates” in DNA profiling despite arguments that “the potential rate of error in the forensic DNA typing technique is unknown”) (internal quotation marks omitted).
D. Standards
The fourth factor discussed in Daubert and Kumho is whether standards control a technique's operation. It is not disputed that the ACE–V methodology leaves much room for subjective judgment. According to Ruth, a fingerprint examiner must decide whether a latent print has value for comparison. Then, if the print does have value, the examiner must choose both the points at which to begin her comparison of the latent print to the known print and the sequence of comparisons to make from those starting points. Examiners in the United States also make a subjective decision at the evaluation stage; it is up to the examiner to determine whether or not prints match on the basis of the comparison and her experience and expertise. The verifying examiner then makes the same subjective evaluation as the original examiner. Love suggests that the standards factor weighs against admission because these subjective elements in the ACE–V process imply a lack of specific, objective standards that guard against examiner bias and error.
The standards used to guide an examiner's judgment vary across laboratories. There is not a single national standard. In part for this reason, and in part because of the subjective judgments made during the ACE–V process, the court acknowledges that the standards used in fingerprint analysis “are insubstantial in comparison to the elaborate and exhaustively refined standards found in many scientific and technical disciplines.” Mitchell, 365 F.3d at 241.
The court finds, however, that various standards imposed by the FBI's latent fingerprints unit, which conducted the analyses relevant to this case, sufficiently safeguard against bias and error. The FBI uses three different types of standards in an effort to reach reliable conclusions through the ACE–V process. At the laboratory level, the FBI adheres to the standards for calibration laboratories provided by the International Organization for Standardization and the standards for forensic laboratories promulgated by the American Society of Crime Laboratory Directors Laboratory Accreditation Board (“ASCLD/LAB”). See Prime, 431 F.3d at 1153–54 (noting that a laboratory conformed to ASCLD/LAB standards). The FBI also follows the consensus-based guidelines for laboratories engaged in friction ridge analysis issued by the Scientific Working Group on Friction Ridge Analysis Study and Technology.
At the level of individual examiners, the FBI applies relatively stringent standards for qualification. In addition to a college degree, examiners must have a significant amount of training in the physical sciences. They are then given eighteen months of FBI training and a four-day proficiency examination. After passing that test, an examiner has a six-month probationary period in which every aspect of her work is fully reviewed. Even thereafter, examiners undergo annual audits, proficiency tests, and continuing education.
Finally, at the level of individual comparisons and evaluations, the ACE–V methodology provides procedural standards that must be followed, such as the requirement that examiners assess each ridge and ridge feature in the prints under comparison. See United States v. Crisp, 324 F.3d 261, 269 (4th Cir.2003). The FBI has also recently incorporated documentation requirements that record the process used to detect latent prints as well as the examiner's comparison of the latent print to a known print. These documentation requirements and other procedures are enforced through the technical and administrative review of examiners' work. And the verification process serves as an error-reducing backstop.\6/
6. Under certain relatively rare circumstances, the FBI now performs verifications that are blind to both the identity of the original examiner and the conclusion that examiner reached. Blind verification was used for only one print at issue in this case.
In short, despite the subjectivity of examiners' conclusions, the FBI laboratory imposes numerous standards designed to ensure that those conclusions are sound. See United States v. Llera–Plaza, 188 F.Supp.2d 549, 571 (E.D.Pa.2002) (concluding that the subjective elements in latent fingerprint analysis are of a significantly “restricted compass”). The court therefore concludes that this factor weighs in favor of admission.\7/
7. Love's brief points to an ongoing controversy regarding whether a minimum number of matching details should serve as a prerequisite to the identification of a latent fingerprint. Ruth testified that a minimum points standard would be misguided, because three matching details in an area of the finger containing relatively few characteristics can be more telling than, say, eight matching points in a detail-rich area. In holding that the standards factor supports admission, the court does not attempt to resolve this dispute. The court instead concludes that other standards unrelated to a minimum points standard provide sufficient guidance to satisfy Daubert, Kumho, and Rule 702.
E. General Acceptance
The final factor discussed in Daubert and Kumho is whether a technique is generally accepted in a relevant scientific community. Love argues that the NAS report's criticisms of latent fingerprint analysis in general and the ACE–V methodology in particular demonstrate that friction ridge analysis is not accepted in the relevant scientific community. That assertion contains a kernel of truth. The NAS report does demonstrate some hesitancy in accepting latent fingerprint analysis on the part of the broader scientific community. Love's claim is subject, however, to two significant qualifications. First, the NAS report itself states that “friction ridge analysis has served as a valuable tool, both to identify the guilty and to exclude the innocent.” NAS Report at 142. Instead of a full-fledged attack on friction ridge analysis, the report is essentially a call for better documentation, more standards, and more research. Cf. Chischilly, 30 F.3d at 1454 (“[T]he mere existence of scientific institutions that would interpret data more conservatively scarcely indicates a ‘lack of general acceptance’ ....”).
Second, Love does not dispute that the forensic science and law enforcement communities strongly support the use of friction ridge analysis. Acceptance in that narrower community is also relevant to the Daubert inquiry. See, e.g., Baines, 573 F.3d at 991 (stating that “acceptance of other experts in the field should ... be considered”); Mitchell, 365 F.3d at 241 (“[W]e consider as one factor in the Daubert analysis whether fingerprint identification is generally accepted within the forensic identification community.”); see also Prime, 431 F.3d at 1154 (affirming the admission of handwriting evidence in part because the district court “recognized the broad acceptance of handwriting analysis and specifically its use by such law enforcement agencies as the CIA, FBI, and the United States Postal Inspection Service”). For both of these reasons, the court concludes that the general acceptance factor at least weakly supports the admission of latent fingerprint evidence.
F. Relationship to Established Techniques
“[A] court assessing reliability may consider the ‘novelty’ of the new technique, that is, its relationship to more established modes of scientific analysis.” Downing, 753 F.2d at 1238. Insofar as such a relationship exists, “the scientific basis of the new technique [may have] been exposed to” indirect “scientific scrutiny.” See id. at 1239. The Third Circuit held in Mitchell, and Ruth testified, that friction ridge analysis is related to the undisputedly scientific “fields of developmental embryology and anatomy,” which explain “the uniqueness and permanence of areas of friction ridge skin.” Mitchell, 365 F.3d at 242. Love argues that this strong tie to the biological sciences is irrelevant, because “‘uniqueness and persistence are necessary’” but not sufficient conditions for friction ridge analysis to be reliable. Mot. at 67 (quoting NAS Report at 144). Even if uniqueness and persistence alone cannot validate latent fingerprint analysis, however, it remains true that “[i]ndependent work in [developmental embryology and anatomy] bolsters” two critical “underlying premises of fingerprint identification.” Mitchell, 365 F.3d at 242. This factor weighs in favor of admission.
G. Non–Judicial Applications
“[N]on-judicial use of a technique can imply that third parties ... would vouch for the reliability of the expert's method.” Id. at 242–43. Ruth testified that friction ridge analysis is used for various other purposes, including to identify disaster victims, to identify newborns at hospitals, for biometric devices, for some passports and visas, and for certain jobs. See Hearing Ex. 1, at 64–65. Love stresses that these non-judicial applications generally use rolled and not latent prints. Ruth testified, however, that latent prints are used to identify disaster victims when rolled prints are unavailable. See also Baines, 573 F.3d at 990 (noting the use of latent prints in this context). But see Mitchell, 365 F.3d at 243 (stating that post-disaster identifications “differ from latent fingerprint identification because [those] identification[s] us[e] actual skin[, which] eliminates the challenges introduced by distortions”). On the basis of the widespread use of fingerprints, and occasional use of latent prints, for non-judicial identification purposes, the court concludes that this factor modestly supports the admission of latent fingerprint evidence.
H. Summary
The court recognizes that the NAS Report and other publications cited by Love critique some aspects of latent fingerprint analysis. However, the forensic science community generally and the FBI in particular have begun to take appropriate steps to respond to that criticism. On this record, in part because of recent developments regarding testing, publication, error rates, and the FBI's governing standards, none of the seven factors discussed by the parties weighs against the admission of latent fingerprint evidence. Friction ridge analysis is not foolproof, but it is also far removed from the types of “junk science” that must be excluded under Rule 702, Daubert, and Kumho. Considering and weighing all of the factors, the credible testimony of Ruth, and the written submissions by both parties, the court concludes that latent fingerprint analysis is sufficiently reliable to be admitted. The court denies Love's motion to exclude the testimony of Robin Ruth insofar as that motion is based on the supposed unreliability of latent fingerprint analysis generally. See, e.g., John, 597 F.3d at 276 (affirming the admission of latent fingerprint evidence); United States v. Pena, 586 F.3d 105, 110–11 (1st Cir.2009) (same); Baines, 573 F.3d at 992 (same); Mitchell, 365 F.3d at 245–46 (same); Crisp, 324 F.3d at 269 (same); United States v. Hernandez, 299 F.3d 984, 991 (8th Cir.2002) (same); United States v. Havvard, 260 F.3d 597, 601 (7th Cir.2001) (same); see also United States v. Sherwood, 98 F.3d 402, 408 (9th Cir.1996) (holding that it was not error to admit fingerprint evidence when the party challenging the evidence admitted that several Daubert factors were satisfied).
III. The Evidence in this Case
Love's motion to exclude the specific evidence the government wishes to introduce in this case also implicates the questions of whether Robin Ruth is qualified to testify as an expert and whether Ruth's analyses are reliable.
There can be no doubt that Ruth is qualified to testify as an expert in latent fingerprint analysis. Since October 2010, Ruth has served as the standards and practices program manager in the FBI's latent prints unit, meaning that she is tasked with overseeing the quality control efforts in that unit. She previously spent five years as an FBI fingerprint examiner; since joining the FBI, she has conducted over 150,000 comparisons and has made roughly 1,200 fingerprint identifications. Ruth possesses degrees in genetic engineering and in forensic and biological anthropology, see Opp'n, Ex. A, at 2, has co-authored two articles about friction ridge analysis, and routinely teaches and lectures about the field, see id. at 4–5.
Love argues that Ruth should not be allowed to testify because she is not certified by the International Association for Identification (“IAI”).\8/ Love's contention relies on the NAS Report, but that report does not state that all examiners should be IAI-certified. It instead simply recommends that forensic scientists be accredited by some outside body, and suggests that the not-yet-existing National Institute of Forensic Science should “determin[e] appropriate standards for accreditation and certification.” See NAS Report 208, 215. Rule 702 requires only that an expert be “qualified as an expert by knowledge, skill, experience, training, or education.” “No specific credentials or qualifications are mentioned,” and the Ninth Circuit has “held that an expert need not have [any] official credentials in the relevant subject matter to meet Rule 702's requirements.” United States v. Smith, 520 F.3d 1097, 1105 (9th Cir.2008). The court rejects Love's challenge to Ruth's qualifications.
8. Ruth has never taken the IAI's certification test for fingerprint examiners, but she is an IAI member, and she passed the FBI's own proficiency test.
Love does not argue that the evidence in this case is less reliable than other latent fingerprint evidence, and there is no reason to believe it is. The fingerprint evidence in this case was analyzed using the ACE–V methodology and the FBI's standard operating procedures. See Sherwood, 98 F.3d at 408 (noting that the examiner's “technique [was] the generally-accepted technique for testing fingerprints”). Ruth is actually the third FBI examiner to analyze the evidence at issue: After an examiner performed the initial comparison and evaluation, those steps were verified by the senior examiner who performs technical reviews for the FBI's latent print unit. Ruth then re-examined the prints. All three examiners reached the same conclusions regarding each of the fifteen prints at issue. Ruth also testified that no recourse to Level 3 details—the very small details in a print like pores, which Love contends are easily misinterpreted—was necessary to reach or support her conclusions. In short, nothing about the latent prints in this case suggests that Ruth's conclusions are less reliable than other conclusions reached using the ACE–V method as implemented by the FBI. Those conclusions are therefore sufficiently reliable to be admitted into evidence.\9/ Of course, Ruth will be subject to cross-examination about her background, methods, analysis, conclusions, and latent fingerprint analysis generally.
9. It is undisputed that the fingerprint evidence—which includes evaluations of latent prints taken from literature related to the construction of pipe bombs—is highly relevant to this case.
For these reasons, it is ORDERED that Love's motion to exclude the testimony of Robin Ruth is DENIED.