Friday, July 20, 2018

Handwriting Evidence in Almeciga and Pitts: Ships Passing in the Night?

Almeciga: A Signature Case

Erica Almeciga sued the Center for Investigative Reporting (CIR) for releasing a video on Rosalio Reta, a former member of the Los Zetas Drug Cartel, in which she was interviewed about Reta, her “romantic partner at the time.” Almeciga v. Center for Investigative Reporting, Inc., 185 F.Supp.3d 401 (S.D.N.Y. 2016). Her complaint was that the producers breached a promise to conceal her identity, causing her to “develop[] paranoia” and to be “treated for depression and Post Traumatic Stress Disorder.” Id. at 409. In response, CIR “produced a standard release form ... authorizing [it] to use [her] ‘name, likeness, image, voice, biography, interview, performance and/or photographs or films taken of [her] ... in connection with the Project.’” The release, she said, was fabricated—she never saw or signed it—and she obtained an expert report from “a reputed handwriting expert, Wendy Carlson,” id. at 413, concluding that “‘[b]ased on [her] scientific examination’ the signature on the Release was a forgery.” Id. at 414. To conduct that examination, Carlson compared the signature on the release to “numerous purported ‘known’ signatures” given to her by Almeciga’s lawyer. Id. at 414.

The case found its way to the United States District Court for the Southern District of New York. Judge Jed S. Rakoff dismissed the complaint because “New York's Statute of Frauds [requires that] if a contract is not capable of complete performance within one year, it must be in writing to be enforceable.” Id. at 409. The alleged promise to keep Almeciga's identity concealed was oral, not written.

The court also imposed sanctions on Almeciga for “fabricat[ing] the critical allegations in her Amended Complaint.” Id. at 408. Of course, if the handwriting expert’s analysis was correct, Almeciga’s claim that the release was forged was true, and there would have been no “fraud upon the Court.” Id. at 413. Therefore, Judge Rakoff held “a ‘Daubert’ hearing on the admissibility of Carlson's testimony” in conjunction with the evidentiary hearing on CIR's motion for sanctions. Id. at 414. His conclusion was uncompromising:
[T]he Court grants defendant's motion to exclude Carlson's “expert” testimony, finding that handwriting analysis in general is unlikely to meet the admissibility requirements of Federal Rule of Evidence 702 and that, in any event, Ms. Carlson's testimony does not meet those standards.
Id. at 407–08. As this sentence indicates, there are two facets to the Almeciga opinion: (1) “that handwriting analysis in general”—meaning “the ‘ACE–V’ methodology ... , an acronym for ‘Analyze, Compare, Evaluate, and Verify’” (id. at 418)—“bears none of the indicia of science and suggests, at best, a form of subjective expertise” (id. at 419); and (2) that the particular examination of the signatures not only “flunks Daubert” (id. at 493), but also falls short of the potentially less stringent requirements for nonscientific expertise.

Although one would not expect the defects in the particular case to be at issue in all or even most cases, one would expect the court’s Daubert holding to be a wake-up call. As Judge Rakoff noted, “even if handwriting expertise were always admitted in the past (which it was not), it was not until Daubert that the scientific validity of such expertise was subject to any serious scrutiny.” Id. at 418.

Pitts: “Inapposite and Unpersuasive”

Lee Andrew Pitts allegedly “entered a branch of Chase Bank ... and handed [the manager at a teller window] a withdrawal slip that had written on it: “‘HAND OVER ALL 100, 50, 20 I HAVE A GUN I WILL SHOOT.’” United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1116550 (E.D.N.Y. Feb. 26, 2018). After the manager repeatedly said that she had no money, the would-be robber “fled on foot ... leaving behind the withdrawal slip” with latent fingerprints. A trawl of a fingerprint database — the court does not say which one or how it was conducted — led New York police to arrest Pitts.

At Pitts’s impending trial on charges of entering the bank with the intent to rob it, the government planned to elicit testimony from “Criminalist Patricia Zippo, who is a handwriting examiner and concluded that Defendant ‘probably may have’ written the demand note found at the crime scene.” Pitts moved “to preclude the government from introducing expert opinion testimony as to ... handwriting analysis.” He “relie[d] principally on the [Almeciga] decision” from the other side of the East River.

Chief Judge Dora L. Irizarry dismissed Almeciga as “inapposite and unpersuasive” because of “significant factual differences from the instant case.” Let’s look at each of these differences.
  • First, the plaintiff in Almeciga tasked the analyst with determining whether plaintiff’s signature on a contractual release was a forgery. ... Forgery analysis is markedly more difficult than comparing typical signatures and has considerably higher error rates than simpler comparisons. Id. at 422 (citation omitted) (“[W]hile forensic document examiners might have some arguable expertise in distinguishing an authentic signature from a close forgery, they do not appear to have much, if any, facility for associating an author’s natural handwriting with his or her disguised handwriting.”).
It is true that the task in Pitts was not to compare signatures. It was to investigate the similarity between two written sentences as they appear on the robbery note and ... what? Exemplars the defendant was forced to write (and that, like the exemplars in Almeciga, might have been disguised versions of normal handwriting)? Or did Zippo receive previously existing exemplars of defendant’s handwriting? What do scientific studies of performance on this sort of handwriting-comparison task show? The Pitts opinion does not even hazard a guess, and it blithely ignores the broad conclusion in Almeciga that
[as to] the third Daubert factor, “[t]here is little known about the error rates of forensic document examiners.” While a handful of studies have been conducted, the results have been mixed and “cannot be said to have ‘established’ the validity of the field to any meaningful degree.” (Citations omitted.)
  • Second, the expert performed her initial analysis without any independent knowledge of whether the “known” handwriting samples used for comparison belonged to the plaintiff.
This refers to the fact that in Almeciga, the expert received the exemplars from the lawyer—she did not collect them herself. Her conclusion therefore was conditioned on the assumption that the exemplars really were representative of Almeciga's true signatures. But the need to make this assumption does not pertain to the validity of the ACE-V part of the examination. This difference therefore has no bearing on Almeciga’s conclusion that handwriting determinations have not been scientifically validated.
  • Third, the expert conflictingly claimed that her analysis was based on her “experience” as a handwriting analyst, but then claimed in her expert report that her conclusions were based on her “scientific examination” of the handwriting samples.
Certainly, Judge Rakoff was not impressed with the witness, but the conclusion that Judge Rakoff drew from the juxtaposed statements was only that given these and other statements about the high degree of subjectivity in handwriting comparisons, “[i]t therefore behooves the Court to examine more specifically whether the ACE–V method of handwriting analysis, as described by Carlson, meets the common indicia of admissible scientific expertise as set forth in Daubert.” Judge Irizarry evidently was not disposed to conduct a similar inquiry.
  • Fourth, the court noted several instances of bias introduced by plaintiff’s counsel. For example, counsel initiated the retention by providing a conclusion that “[t]he questioned document was a Release that Defendant CIR forged.” (Citations omitted.)
Was the witness in Pitts insulated from expectation bias? The opinion does not describe any precautions taken to avoid potentially biasing information. What did Patricia Zippo know when she received the handwritten note? Was she given equivalent sets of exemplars from several writers, without being led to expect that only one of them was the writer? That seems doubtful.
  • Fifth, the expert contradicted herself in numerous respects, including by stating that her conclusions were verified when they were not, and claiming both that the signature on the questioned document was “‘made to resemble’ plaintiff’s” and also that the signatures were “‘very different.’”
Like many of the other differences, this one does not bear on Judge Rakoff’s conclusion that the “amorphous, subjective approach” of ACE-V “flunks Daubert.” Almeciga simply used Carlson's contradictory statements and the other as-applied factors to reject the argument that, even if the handwriting examination was inadmissible as scientific evidence, it might be admissible as expertise that “is not scientific in nature.”

The Significant Difference

In sum, the Pitts opinion does not grapple with the Daubert issue of scientific validity. Instead of surveying the scientific literature to ascertain whether handwriting examiners’ claims of expertise have been validated (which boils down to studies of how accurate examiners are at the kind of comparisons performed in the case), the court reasons that the process must be accurate because handwriting examiners’ opinions are commonly admitted in court and “wholesale exclusion of handwriting analysis ... is not the majority view in this Circuit.”

Both the Almeciga and Pitts courts were “free to consider how well handwriting analysis fares under Daubert and whether ... testimony is admissible, either as “science” or otherwise.” Almeciga, 185 F.Supp.3d at 418. The most significant difference between the two opinions is that one judge took a hard look at what is actually known about handwriting expertise (or at least tried to), while the other did not.

Tuesday, July 17, 2018

More on Pitts and Lundi: Why Bother with Opposing Experts?

In the post-PCAST cases of United States v. Pitts 1/ and United States v. Lundi 2/, the government succeeded in preventing a scholar of the development and culture of fingerprinting from testifying for the defense. The proposed witness was Simon Cole, Professor of Criminology, Law and Society at the Department of Social Ecology of the University of California (Irvine). Pitts “contend[ed] that Dr. Cole’s testimony [was] necessary ‘contrary evidence’ that calls into question the reliability of fingerprint analysis.” Lundi wanted Cole to testify about “forensic print analysis, in particular in the areas of accuracy and validation,” including “best practices.” The federal district court would not allow it.

Qualifications of an Expert Witness

In Pitts, the government first denied that Cole had the qualifications to say anything useful about fingerprinting. It maintained (in the court's words) that “Dr. Cole (1) is ‘not a trained fingerprint examiner’; (2) ‘has not published peer-reviewed scientific articles on the topic of latent fingerprint evidence’; and (3) ‘has not conducted any validation research in the field.’” The court neither accepted nor clearly rejected this argument, for it decided to keep Cole away from the jury on a different ground.

Exclusion for lack of qualifications would have done violence to Federal Rule of Evidence 702. First, Cole was not going to offer an opinion, either as a criminalist (which he is not) or as an interdisciplinary scholar (which he is), on whether the examiner in the case had accurately perceived, compared, and evaluated the images. More likely, he would have opined on the extent to which scientific studies have shown that fingerprint examiners can distinguish between same-source and different-source prints. Training and experience in conducting fingerprint identifications is largely irrelevant to this task.

Second, Rule 702 does not require someone to publish peer-reviewed articles on a topic to be qualified to give an opinion as to the state of the scientific literature. Epidemiologists and toxicologists, for example, can opine about the toxicity of a compound without first publishing their own research on the compound's toxicity. Finally, there is no support in logic or law for the notion that someone has to conduct his or her own validity study to have helpful information on the studies that others have done and what these studies prove.

The Panacea of Cross-examination

The government’s other argument was “that Dr. Cole’s testimony will not assist the trier of fact.” But this argument was garbled:
Specifically, the government points out that Dr. Cole’s only disclosed opinion is that the government’s expert’s testimony “exaggerates the probative value of the evidence because such testimony improperly purports to eliminate the probability that someone else might be the source of the latent print.” “Professor Cole fails to provide any analysis of why latent fingerprint evidence [in general] is so unreliable that it should not be submitted to the jury or, if such evidence can be reliable in some circumstances, what precisely the NYPD examiners did incorrectly in this case.” Dr. Cole is not expected to testify that the identification made by the government’s expert in this case is unreliable or that the examiners made a misidentification. Therefore, the government argues Dr. Cole’s opinion goes to the weight of the government’s evidence, not its admissibility. (Citations and internal quotation marks omitted.)
Chief Judge Dora L. Irizarry had already ruled that a source attribution made with an acknowledgment of at least a theoretical possibility that the match could be a false positive was admissible. If expert evidence is admitted, the opposing party is normally permitted to counter it with expert testimony that it deserves little weight. To argue that rebuttal evidence about weight is inadmissible just because the challenged evidence goes to weight makes no sense. Once the evidence is admitted, its weight is the only game in town.

The real issue is what validity or possibility-of-error testimony would add to the jury's knowledge. In this regard, Judge Irizarry wrote that
The Court is not convinced that Dr. Cole’s testimony would be helpful to the trier of fact. The only opinion Defendant seeks to introduce is that fingerprint examiners “exaggerate” their results to the exclusion of others. However, the government has indicated that its experts will not testify to absolutely certain identification nor that the identification was to the exclusion of all others. Thus, Defendant seeks [to] admit Dr. Cole’s testimony for the sole purpose of rebutting testimony the government does not seek to elicit. Accordingly, Dr. Cole’s testimony will not assist the trier of fact to understand the evidence or determine a fact in issue. (Citations omitted.)
At first blush, this seems reasonable. If the only thing Cole was prepared to say was that fingerprinting does not permit “absolutely certain identification,” and if the fingerprint examiners will have said this anyway, why have him repeat it?

But surely Cole (or another witness—say, a statistician) could have testified to something more than that. An expert with statistical knowledge could inform the jury that although there is very little direct evidence on how frequently fingerprinting experts err in making source attributions in real casework, experiments have tested their accuracy, and the researchers detected errors at various rates. This information could “assist the trier of fact to understand the evidence or determine a fact in issue.” So why keep Cole from giving this “science framework” testimony?

The court’s answer boils down to this:
Moreover, the substance of Dr. Cole’s opinion largely appears in the reports and attachments cited in Defendant’s motion to suppress .... For example, Dr. Cole’s article More Than Zero contains a lengthy discussion about error rates in fingerprint analysis and the rhetoric in conveying those error rates ... , and the PCAST Report notes that jurors assume that error rates are much lower than studies reveal them to be (PCAST Report at 9-10 (noting that error rates can be as high as one in eighteen)). Defendant identifies no additional information or expertise that Dr. Cole’s testimony provides beyond what is in these articles and does not explain why cross-examination of the government’s experts using these reports would be insufficient. 3/
Now, I think the 1 in 18 figure is mildly ridiculous, 4/ but there is no general rule that because published findings could be introduced via cross-examination, a party cannot call on an opposing expert to present or summarize the findings. First, the expert being cross-examined might not concede that the findings are from authoritative sources. This occurred repeatedly when a number of prosecution DNA experts flatly refused to acknowledge the 1992 NAS report on forensic DNA technology as authoritative. That created a hearsay problem for defendants. After all, the authors of the report were not testifying and hence were not subject to cross-examination. The rule against hearsay applies to such statements because the jury would have to evaluate the truthfulness of the statements without hearing from the individuals who wrote them.

Therefore, counsel could not quote or paraphrase the report’s statements over a hearsay objection unless the report fell under some exception to the rule against hearsay. The obvious exception—for “learned treatises”—does not apply unless the report first is “established as a reliable authority by the testimony or admission of the witness or by other expert testimony or by judicial notice.” 5/

In Pitts, however, it appears that the government’s experts were willing to concede that the NAS and PCAST reports were authoritative (even though a common complaint from vocal parts of the forensic-science community about both reports was that they were not credible because they lacked representation from enough practicing forensic scientists). Moreover, a court might well have to admit the PCAST report under the hearsay exception for government reports.

Nonetheless, a second problem with treating cross-examination as the equivalent of testimony from an opposing expert is that it is not equivalent. By way of comparison, would judges in a products liability case against the manufacturer of an alleged teratogen reason that the defense cannot call an expert to present and summarize the results of studies that address the strength of the association between exposure and birth defects, but rather can only ask the plaintiff’s experts about the studies?

In criminal cases, even if the defense expert eschews opinions on whether the defendant is the source of the latent prints as beyond his (or anyone’s?) expertise, the jury might consider this expert to be more credible and more knowledgeable about the underlying scientific literature than the latent print examiners. Examiners understandably can have great confidence in their careful judgments and in the foundations of the important work that they do. It would not be surprising for their message on cross-examination (or re-direct examination) to be, yes, errors are possible and they have occurred in artificial experiments and a few extreme cases, but, really, the process is highly valid and reliable. An outside observer may have a less sanguine perspective to offer even when discussing the same underlying literature.

Cross-examination is all well and good, but cross-examination of experts is delicate, difficult, and dangerous. Confining the defense to posing questions about specific studies in lieu of its own expert testimony about these studies is not normal. Court instructions about error probabilities (analogous to instructions about the factors that degrade eyewitness identifications) might be a device to avoid unduly time-consuming defense witnesses, but those do not yet exist. The opportunity to cross-examine the other party's witnesses rarely warrants depriving a party of the right to present testimony from its experts.

NOTES
  1. United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1169139 (E.D.N.Y. Mar. 2, 2018).
  2. United States v. Lundi, 17-CR-388 (DLI), 2018 WL 3369665 (E.D.N.Y. July 10, 2018).
  3. The opinion in Lundi is similar:
    The government seeks to preclude Defendant’s proposed expert, Dr. Cole, from testifying, and points to this Court’s decision in Pitts, ... . The government argues that, as was the case in Pitts, Dr. Cole’s anticipated testimony would serve to rebut testimony from the government’s experts that the government does not expect to elicit. ... The government argues further that Dr. Cole’s additional proposed testimony, which would address the reliability of fingerprint examinations and the “best practices” to be followed when conducting such examinations, is not distinguishable from the information contained in the reports Defendant attached to his motion, and with which he can cross examine the government’s experts. ...

    Defendant claims that Dr. Cole’s testimony is necessary in this case because the reports could not be introduced through the government’s experts. .... However, the government has given every indication that its experts would recognize these reports, such that Defendant can use them on cross-examination. See Opp’n at 18 (“[t]o the extent the defendant wants to cross examine the [fingerprint] examiners on the basis of the empirical studies in which the error rates cited in the defendant’s motion were found, the defendant is free to do so....”). The Court finds that Dr. Cole’s testimony would not assist the trier of fact. See Pitts, 2018 WL 1169139, at *3. Accordingly, the testimony is precluded.
  4. See David H. Kaye, On a “Ridiculous” Estimate of an “Error Rate for Fingerprint Comparisons”, Forensic Sci., Stat. & L., Dec. 10, 2016, http://for-sci-law.blogspot.com/2016/12/on-ridiculous-estimate-of-error-rate.html.
  5. Federal Rule of Evidence 803(18); see generally David H. Kaye, David E. Bernstein, & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence § 5.4 (2d ed. 2011).

Monday, July 16, 2018

Ignoring PCAST’s Explication of Rule 702(d): The Opinions on Fingerprint Evidence in Pitts and Lundi

With the release of an opinion in February and another in July 2018, the District Court for the Eastern District of New York became at least the second federal district court to find that the 2016 report of the President’s Council of Advisors on Science and Technology (PCAST) [1] did not militate in favor of excluding testimony that a defendant is the source of a latent fingerprint. Chief Judge Dora L. Irizarry wrote both opinions.

United States v. Pitts [2]

The first ruling came in United States v. Pitts. The government alleged that Lee Andrew Pitts “entered a branch of Chase Bank ... and handed [the manager at a teller window] a withdrawal slip that had written on it: ‘HAND OVER ALL 100, 50, 20 I HAVE A GUN I WILL SHOOT.’” After the manager repeatedly said that she had no money, the would-be robber “fled on foot ... leaving behind the withdrawal slip” with latent fingerprints. A trawl of a fingerprint database — the court does not say which one or how it was conducted — led New York police to arrest Pitts two weeks later.

Facing trial on charges of entering the bank with the intent to rob it, Pitts moved “to preclude the government from introducing expert opinion testimony as to latent fingerprint and handwriting analysis.” The opinion does not specify the exact nature of the expert's fingerprint testimony. Presumably, it would have been an opinion that Pitts is the source of the print on the withdrawal slip. The court merely noted that the government "claims that its fingerprint experts do not intend to testify that fingerprint analysis has a zero or near zero error rate."

Judge Irizarry made short work of Pitts’s contention that such testimony would contravene Federal Rule of Evidence 702 and Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). Pitts relied “chiefly on the findings of the PCAST Report, the [2009] NAS Report [3], and several out-of-circuit court decisions that question the reliability of latent fingerprint analysis.” The judge was “not persuaded.” She acknowledged that “[t]he PCAST and NAS Reports [indicate that] error rates are much higher than jurors anticipate” and that the NAS Report stated that “[w]e have reviewed available scientific evidence of the validity of the ACE-V method and found none.” But she was “dismayed that Defendant’s opening brief failed to address an addendum to the PCAST Report.” According to the court,
[The 2017 Addendum] “applaud[ed] the work of the friction-ridge discipline” for steps it had taken to confirm the validity and reliability of its methods. ... The PCAST Addendum further concluded that “there was clear empirical evidence” that “latent fingerprint analysis [...] method[ology] met the threshold requirements of ‘scientific validity’ and ‘reliability’ under the Federal Rules of Evidence.”
Actually, the Addendum [4] adds little to the 2016 report. It responds to criticisms from the forensic-science establishment. The assessment of the scientific showing for the admissibility of latent fingerprint identification under Rule 702 is unchanged. The original report stated that “latent fingerprint analysis is a foundationally valid subjective methodology—albeit with a false positive rate that is substantial and is likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis.” It added that “[i]n reporting results of latent-fingerprint examination, it is important to state the false-positive rates based on properly designed validation studies.” The Addendum does not retreat from or modify these conclusions in any way.

Both the Report and the Addendum reinforce the conclusion that, despite the lack of detailed, objective standards for evaluating the degree of similarity between pairs of prints, experiments have shown that analysts can reach the conclusion that two prints have a common source with good accuracy. But the Report also lists five more conditions that bear on whether a particular analyst has reached the correct conclusion in a given case. It coins the neoteric phrase “validity as applied” for the showing that a procedure has been properly applied in the case at bar:
Scientific validity as applied, then, requires that an expert: (1) has undergone relevant proficiency testing to test his or her accuracy and reports the results of the proficiency testing; (2) discloses whether he or she documented the features in the latent print in writing before comparing it to the known print; (3) provides a written analysis explaining the selection and comparison of the features; (4) discloses whether, when performing the examination, he or she was aware of any other facts of the case that might influence the conclusion; and (5) verifies that the latent print in the case at hand is similar in quality to the range of latent prints considered in the foundational studies.
The opinion does not discuss whether the court accepts or rejects this five-part test for admitting the proposed testimony. It jumps to the unedifying conclusion that defendant’s “critiques [do not] go to the admissibility of fingerprint analysis, rather than its weight.”

United States v. Lundi [5]

Chief Judge Irizarry returned to the question of the admissibility of source attributions from latent prints in United States v. Lundi. In the middle of the afternoon of February 20, 2017, three men entered a check-cashing and hair salon business on Flatbush Avenue in Brooklyn. They forced an employee in a locked glass booth to let them in by pointing a gun at the head of a customer. They made off with approximately $13,000, but one of them had put his hands on top of the glass booth. Police ran an image of the latent prints from the booth through a New York City automated fingerprint identification system (AFIS) database. They decided that those prints came from Steve Lundi. Federal charges followed.

In advance of trial, Lundi moved to exclude the identification. He avoided the Pitts pitfall of arguing that there was no adequate scientific basis for expert latent print source attributions (although the more recent report of an American Association for the Advancement of Science (AAAS) working group would have lent some credence to such a claim [6]). Instead, Lundi “challeng[ed] the application of that [validated] science to the specific examinations conducted in the instant case.” It is impossible to tell from the opinion whether the court was made aware of the PCAST five-part test for admissibility under Rule 702(d). Again, citing the unpublished opinion of a federal court in Illinois, Judge Irizarry apparently leapt over this part of the Report to the conclusion that
This Court is not persuaded that Defendant’s challenges go to the admissibility of the government’s fingerprint evidence, rather than to the weight accorded to it. Moreover, as this Court noted in Pitts, fingerprint analysis has long been admitted at trial without a Daubert hearing. ... The Court sees no reason to preclude such evidence here. Accordingly, Defendant’s motion to preclude fingerprint evidence is denied.
Again, it is impossible to tell from the court's cursory and conclusory analysis whether the theory is that an uncontroverted assurance that an expert undertook an “analysis,” a “comparison,” and an “evaluation” and that another expert did a “verification” ipso facto satisfies Rule 702(d).  The judge noted that “the government points to concrete indicators of how the ACE-V method actually was followed by Detective Skelly,” but it would be hard to find a modern fingerprint identification in which there were no indications that the examiner (1) analyzed the latent print (decided that it was of adequate quality to continue), (2) picked out features to compare and compared them, and then (3) evaluated what was seen. If this is all it takes to satisfy the Rule 702(d) requirement that “the expert has reliably applied the principles and methods to the facts of the case,” then the normal burden on the advocate of expert evidence to show that it meets all the rule’s requirements has evaporated into thin air.

Yet, this could be all that the court required. It suggested that all expert evidence is admissible as long as it is reliable in some general sense, writing that “our adversary system provides the necessary tools for challenging reliable, albeit debatable, expert testimony” and “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence” (citing Daubert, 509 U.S. at 596).

The suggestion assumes what is to be proved: that the evidence, “shaky” or unshakeable, is “admissible.” The PCAST Report tried to give meaning to the case-specific reliability prong of Rule 702 (which simply codifies post-Daubert jurisprudence) by spelling out, for highly subjective procedures like ACE-V, what is necessary to demonstrate legally reliable application in a specific case. Perhaps the “concrete indicators” showed that PCAST’s conditions were satisfied. Perhaps they did not go that far. Perhaps the PCAST conditions are too demanding. Perhaps they are too flaccid. Judge Irizarry does not tell us what she thinks.

After Lundi and Pitts, courts should strive to fill the gap in the analysis of the application of a highly subjective procedure. They should reveal what they think of PCAST’s effort to clarify (or, more candidly, to prescribe) what is required for long-standing methods in forensic science to be admissible under Rule 702(d).

REFERENCES
  1. Executive Office of the President, President’s Council of Advisors on Science and Technology, Report to the President: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, Sept. 2016.
  2. United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1116550 (E.D.N.Y. Feb. 26, 2018).
  3. Comm. on Identifying the Needs of the Forensic Sci. Cmty., Nat'l Research Council, Strengthening Forensic Science in the United States: A Path Forward (2009).
  4. PCAST, An Addendum to the PCAST Report on Forensic Science in Criminal Courts, Jan. 6, 2017.
  5. United States v. Lundi, 17-CR-388 (DLI), 2018 WL 3369665 (E.D.N.Y. July 10, 2018).
  6. William Thompson, John Black, Anil Jain & Joseph Kadane, Forensic Science Assessments: A Quality and Gap Analysis, Latent Fingerprint Examination (2017).

Thursday, July 5, 2018

A Strange Report of "Forensic Epigenetics ... in CODIS"

The “Featured Story” in today’s Forensic Magazine is “Forensic Epigenetics: How Do You Sort Out Age, Smoking in CODIS?” The obvious answer is that you don't and you can't. CODIS records contain no epigenetic data.

What Is Epigenetics?

As a Nature educational webpage explains, “[e]pigenetics involves genetic control by factors other than an individual's DNA sequence. Epigenetic changes can switch genes on or off and determine which proteins are transcribed.” 1/ One chemical mechanism for accomplishing this is DNA methylation, "a chemical process that adds a methyl group to DNA." 2/ More precisely, "methylation of DNA (not to be confused with histone methylation) is a common epigenetic signaling tool that cells use to lock genes in the 'off' position." 3/ This methylation is involved in cell differentiation and hence the formation and maintenance of different tissue types. 4/  "Given the many processes in which methylation plays a part, it is perhaps not surprising that researchers have also linked errors in methylation to a variety of devastating consequences, including several human diseases.” 5/

In forensic genetics, "DNA methylation profiling [has been proposed] for tissue determination, age prediction, and differentiation between monozygotic twins." 6/ Because this "profiling" can uncover health-related and other information as well, discussion of regulating its use by police has begun. 7/

What Is CODIS?

CODIS is “the acronym for the Combined DNA Index System and is the generic term used to describe the FBI’s program of support for criminal justice DNA databases as well as the software used to run these databases.” 8/ The DNA data, which come from twenty locations (loci) on various chromosomes, reveal nothing about methylation patterns. The information from these loci relates solely to the set of underlying DNA sequences. These particular sequences are not transcribed, are essentially identical in all tissues and all identical twins, and do not change as a person ages (except for occasional mutations).

What Is “Sort[ing] Out Age, Smoking in CODIS?”

I don't know. Forensic epigenetics or epigenomics involves neither CODIS databases, CODIS loci, nor CODIS software. Does Forensic Magazine's “Senior Science Writer” think that the databases will be expanded to include epigenetic data? That is not what the article asserts. The only attempt to bridge the two is a concluding sentence that reads, "But some studies, like a Stanford exploration last spring, show that even 13 loci can carry more information than originally believed."

That is not much of a connection, and the statement itself is a trifle misleading. Thirteen is the number of STR loci in CODIS profiles before the expansion to twenty in 2017. The description of the “Stanford exploration” referenced in the article 9/ does not show that the original understanding of the information contained in those core CODIS loci was faulty. Rather, it talks about the growth of the size of the databases and research showing that CODIS profiles “could possibly” be linked to records in medical research databases by “authorized or unauthorized analysts equipped with two datasets, one with SNP genotypes and another CODIS genotypes.” 10/

This possibility does not come as a complete surprise. CODIS profiles are meant to be individual identifiers (or nearly so). If there are genomic databases that sufficiently overlap these regions, then a CODIS profile can be used to locate the record pertaining to the same individual in those databases. The extent to which this possibility is cause for concern is worth considering, 11/ but it has nothing to do with the privacy implications of epigenetic data.

NOTES
  1. Simmons, D. (2008) Epigenetic influence and disease. Nature Education 1(1):6
  2. Id.
  3. Theresa Phillips (2008) The role of methylation in gene expression. Nature Education 1(1):116.
  4. See, e.g., Karyn L. Sheaffer, Rinho Kim, Reina Aoki, et al. (2014) DNA methylation is required for the control of stem cell differentiation in the small intestine. Genes & Development, http://genesdev.cshlp.org/content/28/6/652.abstract; Bo Zhang, Yan Zhou, Nan Lin, et al. (2013) Functional DNA methylation differences between tissues, cell types, and across individuals discovered using the M&M algorithm. Genome Research, https://genome.cshlp.org/content/early/2013/06/26/gr.156539.113.abstract
  5. Phillips, supra note 3.
  6. Athina Vidaki & Manfred Kayser (2017) From forensic epigenetics to forensic epigenomics: broadening DNA investigative intelligence, Genome Biol. 18: 238, doi:  10.1186/s13059-017-1373-1
  7. Mahsa Shabani, Pascal Borry, Inge Smeers, & Bram Bekaert (2018) Forensic Epigenetic Age Estimation and Beyond: Ethical and Legal Considerations. Trends in Genet 34(7): 489–491
  8. FBI, Frequently Asked Questions on CODIS and NDIS, https://www.fbi.gov/services/laboratory/biometric-analysis/codis/codis-and-ndis-fact-sheet
  9. Seth Augenstein, CODIS Has More ID Information than Believed, Scientists Find,” Forensic Mag., May 15, 2017, https://www.forensicmag.com/news/2017/05/codis-has-more-id-information-believed-scientists-find
  10. Id.
  11. Cf. David H. Kaye, The Genealogy Detectives: A Constitutional Analysis of “Familial Searching,” 51 Am. Crim. L. Rev. 109, 137 n. 170 (2013) (“However, there is at least one rather roundabout way in which the identification profiles could reveal substantial medical information. In the future, when the full genomes of individuals are recorded in clinical databases of medical records, a police agency possessing the profile and having surreptitious access to the database could locate the entry for the individual’s genome and any associated medical records without anyone’s knowledge. Although the STRs would be useful only for identification, that use could be the key to locating information in patient records. Furthermore, the patient’s records and full genome could lead police to the stored genomes and records of relatives. Although I cannot think of many scenarios in which police would be motivated to engage in this computer hacking and medical snooping, there may be some.”).

Saturday, June 23, 2018

Trawling Genealogy Databases and the Fourth Amendment: Part I

Law-enforcement use of a DNA database created for genealogy enthusiasts helped discover the man believed to be the Golden State Killer. It also provoked an immediate outpouring of media reports of concerns about "genetic privacy." Now, essays from groups of bioethicists and lawyers have appeared in both the Annals of Internal Medicine [1] and Science [2]. Neither article gives a convincing and complete analysis of the legal issues—hardly surprising given the word limits for such policy forum essays—but both are useful as starting points for discussion.

I. The Misplaced “Abandonment” Theory

The Annals article, Is It Ethical to Use Genealogy Data to Solve Crimes? [1], assures us that the law provides “clarity.” The authors find this clarity in “the abandonment doctrine.” The following paragraph comprises their entire legal analysis:
The legal questions raised by genealogy searches are measurably simpler than the ethical concerns. In terms of the U.S. Constitution, a genealogy search triggered by DNA collected from a crime scene probably would not count as a “search” under the Fourth Amendment (4). Even assuming it would, the applicable legal theory—the “abandonment doctrine”—holds that a person has no “reasonable expectation of privacy” in abandoned materials. Courts have allowed law enforcement to test DNA “abandoned” in a range of settings (such as hair clippings and discarded cigarette butts). At genealogy Web sites, users voluntarily upload (that is, abandon) familial data into commercial databases. Whether they are aware that their data are subject to police collection is, legally, irrelevant (5). Notwithstanding the clarity of the law, it is questionable whether it is good social policy to consider the uploading of genealogic data the same as abandoning DNA in a public space.
These remarks confuse two very different questions. The first is whether a data-gathering method is a search within the meaning of the Fourth Amendment. The Amendment protects against “unreasonable searches and seizures” of “persons, houses, papers, and effects,” in large part by requiring police to acquire judicial warrants based on probable cause before undertaking a search or seizure. (Reliance on a properly issued warrant makes the search reasonable.)

But not all information collection is a search or seizure that triggers the Fourth Amendment demand for reasonableness. For example, a police officer who merely watches a shady character—or anyone else—walk down the street has not searched or seized anyone. If the officer snaps a photo of the person and compares it to photos of wanted criminals, there is still no search or seizure, for there has been no interference with the individual’s body, movements, or property. And if no search has occurred, there is no need to ask whether the officer’s decision to study the individual was reasonable in light of the facts known to the officer. The notion that what a person knowingly exposes to the general public cannot be the subject of a “search” is sometimes called the “public exposure doctrine.” It pertains to the threshold question of whether a search has occurred.

The “abandonment doctrine” also applies to this threshold question. It applies to property that a person has discarded or left behind. If the police see an individual throw away a syringe, they may collect it and then analyze it for the presence of heroin without obtaining a warrant—because they have not performed a search that affects any legitimate interest. By intentionally relinquishing the syringe, the individual has given up any property interest. He or she might not want it to become known that the syringe has traces of heroin in it, but if heroin possession is a criminal act, then the individual can hardly claim that the interest in keeping this fact secret is legitimate and hence protected by the Fourth Amendment. So the abandonment doctrine is another route to a conclusion that the police have conducted no search or seizure within the meaning of the Amendment.

The lower courts have almost always applied the abandonment doctrine to DNA molecules shed or deposited in both legal and illegal activities. But it is odd to maintain, as Is It Ethical? does, that abandonment makes a search reasonable because “a person has no ‘reasonable expectation of privacy’ in abandoned materials.” That mistakes the question of whether a search is justified for the question of whether the police conduct is a search. The “reasonable expectation” standard, introduced in Katz v. United States, 389 U.S. 347 (1967), is merely a way to show that police have engaged in a search; it is not a way to show that a search is reasonable.

This distinction may sound finicky. Functionally, what is the difference between (1) defining everything as a search, but then asking whether the investigation is reasonable because there is no reasonable expectation of privacy in the items searched, and (2) asking whether there is a search because there is no reasonable expectation of privacy in the first place? Nevertheless, no court that is faithful to the Supreme Court's many Fourth Amendment opinions would adopt the position contemplated in Is It Ethical?. No such court would write that even though profiling DNA from a crime scene is a search, it does not require a warrant (or an exception to the generally applicable warrant requirement for a search to be constitutionally reasonable).

More importantly, the opinions allowing the police to profile shed DNA and compare the identifying profile with a suspect's DNA (or with all the profiles in a law enforcement DNA database), all without a warrant, do not demonstrate that police can also trawl a database of DNA sequences to see who might be related to whom. Further analysis is required to determine the constitutionality of familial, or other-directed searching by the state in both law-enforcement [3] and private (i.e., non-governmental) databases such as GEDmatch and the more restrictive commercial ones.

With respect to the private databases, the state's argument lies not so much in abandonment as in public exposure. The very reason the individual puts DNA data on the database is to enable curious members of the public to inspect it. As such, the Fourth Amendment issue is whether police (without a warrant) can do what anyone else can—namely, trawl the database for a partial match indicative of a genetic relationship to the suspect whose DNA is associated with a crime. In many contexts—overflying private property to get a look at what is there, for example—the Supreme Court has reasoned that what is open to the public generally is open to the police as well. Indeed, the Court has even held that entrusting or conveying information to private parties defeats the claim of a reasonable expectation of privacy and hence the claim of a search that requires probable cause and a warrant. Exposure to a small slice of the public—even a banker or a telephone company—is enough to let the police in without a showing of probable cause. (Disclosure of information to one’s lawyer may be protected by the attorney-client privilege but not the Fourth Amendment.)

In the past several years, however, some Justices have evinced discomfort with this “third-party doctrine.” Just yesterday, the Court held in Carpenter v. United States, No. 16–402, 2018 WL 3073916 (U.S. June 22, 2018), that certain data generated by a cell-phone service provider—the third party—is not outside the protective umbrella of the Fourth Amendment just because it has been given to or generated by a third party. The data in the case amounted to extended tracking of the past whereabouts of a person’s cellphone via the electronic tracks, so to speak, left at cell towers. That information, the majority reasoned, was so sensitive as to make its possession by the cellular phone service providers insufficient to defeat the claim of a reasonable expectation of privacy. A warrant was required.

The Science article correctly frames the pivotal Fourth Amendment issue as the scope of the third-party doctrine, but it leaves much unsaid. I will turn to the implications of this evolving doctrine for trawls of genealogy databases in a later installment.

REFERENCES

1. Benjamin E. Berkman, Wynter K. Miller & Christine Grady, Is It Ethical to Use Genealogy Data to Solve Crimes?, Annals Internal Med., May 29, 2018, DOI: 10.7326/M18-1348.
2. Natalie Ram, Christi J. Guerrini & Amy L. McGuire, Genealogy Databases and the Future of Criminal Investigation, 360 Science 1078-1079 (2018) DOI: 10.1126/science.aau1083
3. David H. Kaye, The Genealogy Detectives: A Constitutional Analysis of “Familial Searching”, 51 Am. Crim. L. Rev. 109 (2013), https://ssrn.com/abstract=2043091

Tuesday, June 5, 2018

DNA Evidence and the Warrant Affidavit in the Golden State Killer Case

Last Friday, Sacramento Superior Court Judge Michael Sweet “ordered arrest and search warrant information in the East Area Rapist/Golden State Killer case unsealed after weeks of arguments between attorneys over how the release would impact the trial of suspect Joseph James DeAngelo.” 1/ The 171 heavily redacted pages of documents did not discuss the kinship trawl of the publicly accessible genealogy database that occupied much of the news about the case. However, they did refer to later DNA tests of surreptitiously procured samples of DeAngelo’s DNA:
     [I]nvestigators didn't have a sample of DeAngelo's DNA, so Sacramento sheriff's detectives began following him as he moved about town, finally watching April 18 as DeAngelo parked his car in a public parking lot at a Hobby Lobby store in Roseville, according to an arrest warrant affidavit unsealed Friday.
     "A swab was collected from the door handle while DeAngelo was inside the store," according to the affidavit from sheriff's Detective Sgt. Ken Clark. "This car door swab was submitted to the Sacramento DA crime lab for DNA testing." ... The swab contained DNA from three different people, and 47 percent of the DNA came from one person, the affidavit said.
     That DNA was compared to murders in Orange and Ventura counties where DNA had been collected and saved from decades before, and it came back with results that elated investigators. "The likelihood ratio for the three-person mixture can be expressed as at least 10 billion times more likely to obtain the DNA results if the contributor was the same as the Orange County/Ventura County (redacted) profile and two unknown and unrelated individuals than if three unknown and unrelated individuals were the contributors," Clark wrote in his affidavit seeking an arrest warrant for DeAngelo. ...
     Sacramento County District Attorney Anne Marie Schubert has said previously that even with the possible match she asked for a better sample, so investigators went hunting again, this time focusing on DeAngelo's trash on April 23.
     "The trash can was put out on the street in front of his house the night before," Clark wrote. "DeAngelo is the only male ever seen at the residence during the surveillance of his home which has occurred over the last three days."
     Detectives gathered "multiple samples" from the trash can and sent them to the crime lab on Broadway for analysis. "Only one item, a piece of tissue (item #234-#8), provided interpretable DNA results," Clark wrote. "The likelihood ratio for this sample can be expressed as at least 47.5 Septillion times more likely to obtain the DNA results if the contributor was the same as the Orange County/Ventura County (redacted) profile than if an unknown and unrelated individual is the contributor." 2/
Compare this statement of the likelihood ratio to the misstated version from a “senior science writer” for Forensic Magazine:
The warrants now show that: ... [t]he additional surreptitious sample was from DeAngelo’s trash can set out at the curb. Only one piece of tissue provided interpretable DNA results, but those translated to a likelihood that DeAngelo was 47.5 septillion times more likely to be the Golden State Killer than an unknown and unrelated individual. 3/
To see the error, click on the label “transposition” in this blog. Of course, one can ask what’s the big deal when the likelihood ratio is in the septillions. But that question translates into an argument about harmless error. Sometimes the errors associated with sloppy phrasing won’t have immediate repercussions, but a magazine written for forensic practitioners ought not propagate sloppy thinking. In any case, things are looking up when detectives take care to avoid transposing their conditional probabilities.
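For readers who want the arithmetic, the point is that a likelihood ratio compares the probability of the DNA results under two hypotheses; it is not, by itself, a statement of how likely it is that DeAngelo is the contributor. That further statement requires prior odds and Bayes' rule. Here is a minimal sketch in Python (the prior odds below are hypothetical, chosen only to show the mechanics):

    # The likelihood ratio (LR) from the affidavit compares
    #   P(DNA results | DeAngelo is the contributor) with
    #   P(DNA results | an unknown, unrelated person is the contributor).
    # Turning it into "how likely is it that he is the contributor"
    # requires prior odds: posterior odds = prior odds x LR.

    def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
        """Bayes' rule in odds form."""
        return prior_odds * likelihood_ratio

    lr = 47.5e24            # 47.5 septillion, as stated in the affidavit
    prior = 1 / 1_000_000   # hypothetical prior odds, for illustration only

    post = posterior_odds(prior, lr)
    prob = post / (1 + post)   # convert odds to a probability

    print(f"posterior odds: {post:.3e}")
    print(f"posterior probability: {prob}")
    # With any non-negligible prior, the posterior here is overwhelming, but
    # the LR statement and the "times more likely to be the killer" statement
    # remain different claims.

With a ratio in the septillions the practical difference vanishes, which is why the mistake is arguably harmless here; with more modest likelihood ratios, the transposed statement can badly overstate the case.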

NOTES
  1. Sam Stanton & Darrell Smith, Read the Warrant Documents in the East Area Rapist Case, Sacramento Bee, June 1, 2018, http://www.sacbee.com/news/local/article212377094.html
  2. Sam Stanton & Darrell Smith, How Detectives Collected DNA Samples from the East Area Rapist Suspect, Sacramento Bee, June 1, 2018, http://www.sacbee.com/latest-news/article212334279.html
  3. Seth Augenstein, Golden State Killer Warrants Show Evolution of Killer — But Not Genealogy, Forensic Mag., June 4, 2018, https://www.forensicmag.com/news/2018/06/golden-state-killer-warrants-show-evolution-killer-not-genealogy

Wednesday, May 30, 2018

Fusing Humans and Machines to Recognize Faces

A new article on the accuracy of facial recognition by humans and machines represents “the most comprehensive examination to date of face identification performance across groups of humans with variable levels of training, experience, talent, and motivation.” 1/ It concludes that the optimal performance comes from a “fusion” of man (or woman) and machine. But the meanings of “accuracy” and “fusion” are not necessarily what one might think.

Researchers from NIST, The University of Texas at Dallas, the University of Maryland, and the University of New South Wales displayed “highly challenging” pairs of face images to individuals with and without training in matching images, and to “deep convolutional neural networks” (DCNNs) that trained themselves to classify images as being from the same source or from different sources.

The Experiment
Twenty pairs of pictures (12 same-source and 8 different-source pairs) were presented to the following groups:
  • 57 forensic facial examiners (“professionals trained to identify faces in images and videos [for use in court] using a set of tools and procedures that vary across forensic laboratories”);
  • 30 forensic facial reviewers (“trained to perform faster and less rigorous identifications [for] generating leads in criminal cases”);
  • 13 super-recognizers (“untrained people with strong skills in face recognition”);
  • 31 undergraduate students; and
  • 4 DCNNs (“deep convolutional neural networks” developed between 2015 and 2017).
Students took the test in a single session, while the facial examiners, reviewers, super-recognizers, and fingerprint examiners (a comparison group whose results appear below) had three months to complete the test. They all expressed degrees of confidence that each pair showed the same person as opposed to two different people. (+3 meant that “the observations strongly support that it is the same person”; –3 meant that “the observations strongly support that it is not the same person”). The computer programs generated “similarity scores” that were transformed to the same seven-point scale.

Comparisons of the Groups
To compare the performance of the groups, the researchers relied on a statistic known as AUC (or, more precisely, AUROC, for “Area Under the Receiver Operating Characteristic” curve). AUROC combines two more familiar statistics—the true-positive (TP) proportion and the false-positive (FP) proportion—into one number. In doing so, it pays no heed to the fact that a false positive may be more costly than a false negative. A simple way to think about the number is this: the AUROC of a classifier is the probability that the classifier will assign a higher score to a randomly chosen same-source pair of images than to a randomly chosen different-source pair. That is,

AUROC = P(score|same > score|different)

Because making up scores at random would be expected to be correct in this sense about half the time, an AUROC of 0.5 means that, overall, the classifier’s scores are useless for distinguishing between same-source and different-source pairs. AUROCs greater than 0.5 indicate better overall classifications, but the value for the area generally does not translate into the more familiar (and more easily comprehended) measures of accuracy such as the sensitivity (the true-positive probability) and specificity (the true-negative probability) of a classification test. See Box 1. Basically, the larger the AUROC, the better the scores are—in some overall sense—at discriminating between same-source and different-source pairs.
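To make that probabilistic reading concrete, here is a minimal sketch in Python, using invented ratings on the study's −3 to +3 scale (twelve same-source and eight different-source pairs, matching the structure but not the data of the experiment), that computes an AUROC directly from the definition, counting ties as half:

    # AUROC as the probability that a same-source pair outscores a
    # different-source pair (ties count as one half).

    def auroc(same_scores, diff_scores):
        wins = 0.0
        for s in same_scores:
            for d in diff_scores:
                if s > d:
                    wins += 1.0
                elif s == d:
                    wins += 0.5
        return wins / (len(same_scores) * len(diff_scores))

    # Hypothetical ratings, not data from the study.
    same = [3, 2, 2, 1, 0, 3, -1, 2, 1, 2, 3, 0]   # 12 same-source pairs
    diff = [-3, -2, 0, -1, -3, 1, -2, -3]          # 8 different-source pairs

    print(round(auroc(same, diff), 3))   # 1.0 = perfect ranking; 0.5 = chance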

Now that we have some idea of what the AUROC signifies (corrections are welcome—I do not purport to be an expert on signal detection theory), let’s see how the different groups of classifiers did. The median performance of each group was
A2017b:░░░░░░░░░░ 0.96
facial examiners:░░░░░░░░░ 0.93
facial reviewers:░░░░░░░░░ 0.87
A2017a:░░░░░░░░░ 0.85
super-recognizers:░░░░░░░░ 0.83
A2016:░░░░░░░░ 0.76
fingerprint examiners:░░░░░░░░ 0.76
A2015:░░░░░░░ 0.68
students:░░░░░░░ 0.68
Again, these are medians. Roughly half the classifiers in each group had higher AUROCs, and half had lower ones. (The automated systems A2015, A2016, A2017a, and A2017b had only one ROC, and hence only one AUROC.) “Individual accuracy varied widely in all [human] groups. All face specialist groups (facial examiners, reviewers, and super-recognizers) had at least one participant with an AUC below the median of the students. At the top of the distribution, all but the student group had at least one participant with no errors.”

Using the distribution of student AUROCs (fitted to a normal distribution), the authors reported the fraction of participants in each group who scored above the student 95th percentile as follows:
facial examiners:░░░░░░░░░░░ 53%
super-recognizers:░░░░░░░░░ 46%
facial reviewers:░░░░░░░ 36%
fingerprint examiners:░░░ 17%
The best computerized system, A2017b, had a higher AUROC than 73% of the face specialists. To put it another way, “35% of examiners, 13% of reviewers, and 23% of superrecognizers were more accurate than A2017b,” which “was equivalent to a student at the 98th percentile.”
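
As best I can tell, the comparison to the student distribution works roughly as in the following sketch (again with invented AUROCs; the authors' exact fitting procedure may differ): fit a normal distribution to the student AUROCs, take its 95th percentile, and count how many members of another group exceed that cutoff.

# Rough sketch (invented AUROCs): fit a normal distribution to the student
# AUROCs, take its 95th percentile, and compute the fraction of another group
# scoring above that cutoff.
import statistics
from statistics import NormalDist

student_aucs  = [0.55, 0.62, 0.68, 0.70, 0.74, 0.66, 0.71, 0.60]   # hypothetical
examiner_aucs = [0.81, 0.99, 0.93, 0.72, 0.96, 1.00, 0.88, 0.95]   # hypothetical

fitted = NormalDist(mu=statistics.mean(student_aucs),
                    sigma=statistics.stdev(student_aucs))
cutoff = fitted.inv_cdf(0.95)                   # student 95th percentile under the fit
above = sum(a > cutoff for a in examiner_aucs) / len(examiner_aucs)
print(f"cutoff = {cutoff:.3f}, fraction above = {above:.0%}")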

But none of the preceding reveals how often the classifications based on the scores would be right or wrong. Buried in an appendix to the article (and reproduced below in Box 2) are estimates of “the error rates associated with judgments of +3 and −3 [obtained by computing] the fraction of high-confidence same-person (+3) ratings made to different identity face pairs” and estimates of “the probability of same identity pairs being assigned a −3.” The table indicates that facial examiners who were very confident usually were correct: they gave a different-source pair the highest same-person rating (a false positive) less than 1% of the time and gave a same-source pair the most emphatic different-person rating (a false negative) less than 2% of the time. Students made these errors a little more than 7% and 14% of the time, respectively.
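
For readers curious about how such estimates and intervals are produced, a rough sketch with hypothetical counts follows. The appendix's exact interval method is not described in the excerpt above, so the sketch uses a Wilson score interval as one reasonable stand-in.

# Sketch (hypothetical counts): the high-confidence error rate is estimated as
# the fraction of +3 ratings given to different-identity pairs, here with a
# Wilson score 95% interval (one reasonable choice, not necessarily the paper's).
from math import sqrt
from statistics import NormalDist

def error_rate_with_ci(errors, trials, level=0.95):
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = errors / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return p, center - half, center + half

# e.g., 3 maximum-confidence "same person" ratings on 320 different-source comparisons
q_hat, low, high = error_rate_with_ci(errors=3, trials=320)
print(f"q_hat = {q_hat:.3f}, 95% CI approximately ({low:.3f}, {high:.3f})")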

Fusion
The article promises to “show the benefits of a collaborative effort that combines the judgments of humans and machines.” It describes the method for ascertaining whether “a collaborative effort” improves performance as follows:
We examined the effectiveness of combining examiners, reviewers, and superrecognizers with algorithms. Human judgments were fused with each of the four algorithms as follows. For each face image pair, an algorithm returned a similarity score that is an estimate of how likely it is that the images show the same person. Because the similarity score scales differ across algorithms, we rescaled the scores to the range of human ratings (SI Appendix, SI Text). For each face pair, the human rating and scaled algorithm score were averaged, and the AUC was computed for each participant–algorithm fusion.
Unless I am missing something, there was no collaboration between human and machine. Each did their own thing. A number midway between the separate similarity scores on each pair produced a larger area under the ROC than either set of separate scores. To the extent that “Fusing Humans and Machines” conjures images of cyborgs, it seems a bit much. The more modest point is that a very simple combination of scores of a human and a machine classifier works better (with respect to AUROC as a measure of success) than either one alone.
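
A small sketch (with hypothetical ratings and similarity scores, not the study's data) may make the mechanics concrete: rescale the algorithm's similarity scores to the ±3 rating range, average each rescaled score with the corresponding human rating, and compute the AUROC of the fused scores. With these toy numbers, the ordering of the three AUROCs means nothing; the point is only the procedure.

# Sketch (hypothetical data) of simple score fusion: rescale algorithm
# similarity scores to the human rating range, average with the human rating,
# and compute the AUROC of the fused scores.
def rescale(scores, lo=-3.0, hi=3.0):
    mn, mx = min(scores), max(scores)
    return [lo + (s - mn) * (hi - lo) / (mx - mn) for s in scores]

def auroc(scores, labels):  # labels: 1 = same-source pair, 0 = different-source pair
    same = [s for s, y in zip(scores, labels) if y == 1]
    diff = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if s > d else 0.5 if s == d else 0.0 for s in same for d in diff)
    return wins / (len(same) * len(diff))

labels        = [1, 1, 1, 1, 0, 0, 0, 0]                           # hypothetical ground truth
human_ratings = [3, 1, -1, 2, -2, 0, -3, 1]                        # hypothetical examiner ratings
algo_scores   = [0.91, 0.62, 0.55, 0.80, 0.40, 0.58, 0.22, 0.49]   # hypothetical similarities

fused = [(h + a) / 2 for h, a in zip(human_ratings, rescale(algo_scores))]
for name, s in [("human", human_ratings), ("algorithm", rescale(algo_scores)), ("fused", fused)]:
    print(name, round(auroc(s, labels), 3))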

BOX 1. THE ROC CURVE AND ITS AREA

Suppose that we were to take a score of +1 or more as sufficient to classify a pair of images as originating from the same source. Some of these classifications would be incorrect (contributing to the false-positive (FP) proportion for this decision threshold), and some would be correct (contributing to the true-positive (TP) proportion). Of course, the threshold for the classification could be set at other scores. The ROC curve is simply a plot of the points (FPP[score], TPP[score]) for the person or machine scoring the pairs of images, one point for each possible decision threshold.

For example, if the threshold score for a positive classification were set higher than all the reported scores, there would be no declared positives. Both the false-positive and the true-positive proportions would be zero. At the other extreme, if the threshold score were placed at the bottom of the scale, all the classifications would be positive. Every same-source pair would be classified positively, as would every different-source pair, so both the TPP and the FPP would be 1. A so-called random classifier, in which the scores have no correlation with the actual sources of the images, would be expected to produce a straight line connecting these points (0,0) and (1,1). A more useful classifier would have a curve with mostly higher points, as shown in the sketch below.

      TPP (sensitivity)
     1 |           *   o
       |       *
       |           o
       |   
       +   *   o
       |                   o Random (worthless) classifier
       |   o               * Better classifier (AUC > 0.5)
       |                         
       o---+------------ FPP (1 – specificity)
                       1
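
The threshold sweep that generates such a curve can be traced in a few lines (hypothetical ratings again): each threshold yields one (FPP, TPP) point, and the collection of points is the ROC curve.

# Sketch (hypothetical ratings): sweep a decision threshold over the rating
# scale, calling a pair "same source" when its rating is at or above the
# threshold, and record the resulting (FPP, TPP) point.
same_source      = [3, 2, 1, 3, 0, 2, -1, 1]      # ratings given to same-source pairs
different_source = [-3, -1, 0, -2, 1, -3, 2, -2]  # ratings given to different-source pairs

for t in range(4, -4, -1):   # from "no positives" down to "everything positive"
    tpp = sum(r >= t for r in same_source) / len(same_source)
    fpp = sum(r >= t for r in different_source) / len(different_source)
    print(f"threshold >= {t:+d}: FPP = {fpp:.2f}, TPP = {tpp:.2f}")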
An AUROC of, say, 0.75 does not mean that 75% of the classifications (using a particular score as the threshold for declaring a positive association) are correct. Neither does it mean that 75% is the sensitivity or specificity when using a given score as a decision threshold. Nor does it mean that 25% is the false-positive or the false-negative proportion. Instead, how many classifications are correct at a given score threshold depends on (1) the sensitivity at that score threshold, (2) the specificity at that score threshold, and (3) the proportion of same-source and different-source pairs in the sample or population of pairs.

Look at the better classifier in the graph (the one whose operating characteristics are indicated by the asterisks). Consider the score implicit in the asterisk above the little tick-mark on the horizontal axis and across from the mark on the vertical axis. The FPP there is 0.2, so the specificity is 0.8. The sensitivity is the height of the better ROC curve at that implicit score threshold. The height of that asterisk is 0.5. The better classifier with that threshold makes correct associations only half the time when confronted with same-source pairs and 80% of the time when presented with different-source pairs. When shown 20 pairs, 12 of which are from the same face, as in the experiment discussed here, the better classifier is expected to make 50% × 12 = 6 correct positive classifications and 80% × 8 = 6.4 correct negative classifications. The overall expected percentage of correct classifications is therefore 12.4/20 = 62% rather than 75%.
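
The same arithmetic, written as a small function (the numbers are those of the example just given):

# Overall expected accuracy at a given threshold depends on sensitivity,
# specificity, and the mix of same- and different-source pairs, not on the
# AUROC itself.
def expected_accuracy(sensitivity, specificity, n_same, n_different):
    correct = sensitivity * n_same + specificity * n_different
    return correct / (n_same + n_different)

# The example in the text: sensitivity 0.5, specificity 0.8,
# 12 same-source pairs and 8 different-source pairs.
print(expected_accuracy(0.5, 0.8, 12, 8))   # 0.62, i.e., 62%, not 75%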

The moral of the arithmetic: The area under the ROC is not so readily related to the accuracy of the classifier for particular similarity scores. (It is more helpful in describing how well the classifier generally ranks a same-source pair relative to a different-source pair.) 2/


BOX 2. "[T]he estimate qˆ for the error rate and the upper and lower limits of the 95% confidence interval." (From Table S2)
Group                    Estimate    95% CI
Type of Error: False Positive (+3 on different faces)
  Facial Examiners       0.9%        0.002 to 0.022
  Facial Reviewers       1.2%        0.003 to 0.036
  Super-recognizers      1.0%        0.0002 to 0.052
  Fingerprint Examiners  3.8%        0.022 to 0.061
  Students               7.3%        0.044 to 0.112
Type of Error: False Negative (-3 on same faces)
  Facial Examiners       1.8%        0.009 to 0.030
  Facial Reviewers       1.4%        0.005 to 0.032
  Super-recognizers      5.1%        0.022 to 0.099
  Fingerprint Examiners  3.3%        0.021 to 0.050
  Students               14.5%       0.111 to 0.185



UPDATES
June 9, 2018: Corrections and additions made in response to comments from Hari Iyer.
NOTES
  1. P.J. Phillips, A.N. Yates, Y. Hu, C.A. Hahn, E. Noyes, K. Jackson, J.G. Cavazos, G. Jeckeln, R. Ranjan, S. Sankaranarayanan, J.-C. Chen, C.D. Castillo, R. Chellappa, D. White and A.J. O’Toole. Face Recognition Accuracy of Forensic Examiners, Superrecognizers, and Algorithms. Proceedings of the National Academy of Sciences, Published online May 28, 2018. DOI: 10.1073/pnas.1721355115
  2. As Hari Iyer put it in response to the explanation in Box 1, "given a randomly chosen observation x1 belonging to class 1, and a randomly chosen observation x0 belonging to class 0, the (empirical) AUC is the estimated probability that the evaluated classification algorithm will assign a higher score to x1 than to x0." For a proof, see Alexej Gossman, Probabilistic Interpretation of AUC, Jan. 25, 2018, http://www.alexejgossmann.com/auc/. A geometric proof can be found in Matthew Drury, The Probabilistic Interpretation of AUC, in Scatterplot Smoothers, Jun 21, 2017, http://madrury.github.io/jekyll/update/statistics/2017/06/21/auc-proof.html.

Saturday, May 26, 2018

Against Method: ACE-V, Reproducibility, and Now Preproducibility

Forensic-science practitioners like to describe their activities as scientific. Indeed, if the work they did were not scientific, how could one say that they were practicing forensic science?

Thus, one finds textbooks with impressive titles like "Forensic Comparative Science" devoted to “[t]he comparative science disciplines of finger prints, firearm/tool marks, shoe prints/tire prints, documents and handwriting,” and more. 1/ The practitioners of these "science disciplines" describe their work as "analogous to scientific method of critically observing details in images, determining similarities or differences in the data, performing comparative measurements to experiment whether the details in the images actually agree or disagree," and so on. 2/ They insist that they follow a multistep process that "is a scientific methodology" for "hypothesis testing"3/ — even if the process lacks any defined threshold for deciding when perceived (or even objectively measured) features are sufficiently similar or different to reach a conclusion.

An entire article in the Journal of Forensic Identification — “a scientific journal that provides more than 100 pages of articles related to forensics ... written by forensic authorities from around the world who are practitioners or academics in forensic science fields” 4/ — is devoted to demonstrating that “[a]nalysis, comparison, evaluation, and verification (ACE-V) is a scientific methodology that is part of the scientific method.” 5/ The abstract observes that
Several publications have attempted to explain ACE-V as a scientific method or its role within the scientific method, but these attempts are either not comprehensive or not explicit. This article ... outlines the scientific method as a seven-step process. The scientific method is discussed using the premises of uniqueness, persistence, and classifiability. Each step of the scientific method is addressed specifically as it applies to friction ridge impression examination in casework. It is important for examiners to understand and apply the scientific method, including ACE-V, and be able to articulate this method. 6/
The Scientific Working Group on Friction Ridge Analysis, Study, and Technology (SWGFAST) agreed, urging examiners to write in their reports that ACE-V is nothing less than “[t]he acronym for a scientific method: Analysis, Comparison, Evaluation, and Verification.” 7/

It is revealing to contrast such assertions with comments on the meaning of reproducibility in science that appear in an essay published this week in Nature. There, Philip Stark, the Associate Dean of the Division of Mathematical and Physical Sciences and Professor of Statistics at the University of California (Berkeley), noted that reproducibility means different things in different fields, but pointed to “preproducibility” as a  prerequisite to reproducibility:
An experiment or analysis is preproducible if it has been described in adequate detail for others to undertake it. ... The distinction between a preproducible scientific report and current common practice is like the difference between a partial list of ingredients and a recipe. To bake a good loaf of bread, it isn’t enough to know that it contains flour. It isn’t even enough to know that it contains flour, water, salt and yeast. The brand of flour might be omitted from the recipe with advantage, as might the day of the week on which the loaf was baked. But the ratio of ingredients, the operations, their timing and the temperature of the oven cannot.

Given preproducibility — a ‘scientific recipe’ — we can attempt to make a similar loaf of scientific bread. If we follow the recipe but do not get the same result, either the result is sensitive to small details that cannot be controlled, the result is incorrect or the recipe was not precise enough ... . 8/
Descriptions of procedures for subjective pattern-matching in traditional forensic science are much like the list of ingredients. There are no quantitative instructions for how many and which of the potentially distinguishing features to use and how long to process or cook these ingredients at each step of the process. Indeed, it is even worse than that. Even though trained examiners know what ingredients to choose from, they can pick any subset of them that they think could be effective for the case at hand. Thus, although ACE-V can be described as a series of steps within "a broadly stated framework," 9/ that does not make it a “scientific recipe.” Imprecision at every step deprives it of  "preproducibility." It might be called a "process" rather than a "method," 10/ but in the end, “ACE-V is an acronym, not a methodology." 11/

It does not follow, however, that the comparisons and conclusions are of no epistemic value or that they cannot be studied scientifically. Quite the contrary. Unlike in the natural sciences, the absence of preproducibility does not preclude reproducibility: another laboratory expert can start with the same materials to be compared, and we can see whether the outcome is the same. Moreover, we can even conduct blind tests of examiner performance to determine how accurately criminalists are able to classify traces originating from the same source and traces coming from different sources.

Such validation studies show that latent print examiners, for example, have real expertise, but these findings do not mean that expert examiners are following a particularly “scientific method” in making their judgments. Psychologists report that some individuals are phenomenally accurate in recognizing faces, 12/ but that does not mean that the “super recognizers” are using a well-defined, or indeed, any kind of scientific procedure to accomplish these feats. The training and experience that criminalists receive may include instruction in facts and principles of biology and physics, and their performance may be generally accurate and reliable, but that does not mean that they are applying a scientific method. A flow chart is not a scientific test.

Until criminalists can articulate and follow a preproducible procedure, they should not present their work as deeply scientific. In court, they can explain that scientists have studied the nature of the patterns they analyze. They can refer to any well-designed studies proving that subjective pattern-matching by trained analysts can be valid and reliable. They can vouch for the fact that criminalists have been making side-by-side comparisons for a long time. If courts are persuaded that the resulting individual opinions are helpful, then skilled witnesses can give those opinions. But such opinions should not be gussied up as a scientific method of hypothesis testing. 13/ As one critic of such rhetoric explained, "forensic science could show that it does have validation, certification, accreditation, oversight, and basic research without showing that it uses the 'scientific method.'" 14/

NOTES
  1. John R. Vanderkolk, Forensic Comparative Science xii (2009).
  2. Id. at 90.
  3. M. Reznicek, R.M. Ruth & D.M. Schilens, ACE-V and the Scientific Method, 60 J. Forensic Identification 87, 87 (2010). See also Michele Triplett & Lauren Cooney, The Etiology of ACE-V and its Proper Use: An Exploration of the Relationship Between ACE-V and the Scientific Method of Hypothesis Testing, 56 J. Forensic Identification 345, 353 (2006), http://www.fprints.nwlean.net/JFI.pdf (“ACE-V is synonymous with hypothesis testing. A more in-depth understanding of scientific methodology can be found by reading the works of well-known scientists and philosophers such as Aristotle, Isaac Newton, Francis Bacon, Galileo Galilei, and Karl Popper, to name just a few.”).
  4. Abstract of Journal of Forensic Identification (JFI), https://www.theiai.org/publications/jfi.php, accessed May 25, 2018.
  5. Reznicek et al., supra note 3, at 87.
  6. Id. The Department of Justice has retreated from this phrasing, preferring to call “an examiner’s belief” an “inductive inference . . . made in a logical and scientifically defensible manner.” Department of Justice, Approved Uniform Language for Testimony and Reports for the Forensic Latent Print Discipline, Feb. 22, 2018, at 2 & 2 n.2, https://www.justice.gov/file/1037171/download.
  7. Scientific Working Group on Friction Ridge Analysis, Study, and Technology, Standard for Reporting Friction Ridge Examinations (Latent/Tenprint), Appendix, at 4 n.2, 2012, https://www.nist.gov/sites/default/files/documents/2016/10/26/swgfast_standard_reporting_2.0_121124.pdf (emphasis added).
  8. Philip B. Stark, Before Reproducibility must Come Preproducibility, 557 Nature 613 (2018), doi: 10.1038/d41586-018-05256-0, https://www.nature.com/articles/d41586-018-05256-0.
  9. Comm. on Identifying the Needs of the Forensic Sci. Cmty., Nat'l Research Council, Strengthening Forensic Science in the United States: A Path Forward 142 (2009).
  10. Michele Triplett, Is ACE-V a Process or a Method?, IDentification News, June/July 2012, at 5–6, http://www.fprints.nwlean.net/ProcessOrMethod.pdf.
  11. Sandy L. Zabell, Fingerprint Evidence, 13 J. L. & Pol'y 143, 178 (2005).
  12. Richard Russell, Brad Duchaine, and Ken Nakayama, Super-recognizers: People with Extraordinary Face Recognition Ability, 16 Psychonomic Bull. and Rev. 252 (2009), doi: 10.3758/PBR.16.2.252.
  13. David H. Kaye, How Daubert and Its Progeny Have Failed Criminalistics Evidence, and a Few Things the Judiciary Could Do About It, 86 Fordham L. Rev. 1639 (2018), https://ssrn.com/abstract=3084881.
  14. Simon A. Cole, Acculturating Forensic Science: What Is ‘Scientific Culture’, and How Can Forensic Science Adopt It?, 38 Fordham Urb. L.J. 436, 451 (2010).

Monday, May 21, 2018

Firearms Toolmark Testimony: Looking Back and Forward

By inspecting toolmarks on bullets or spent cartridge cases, firearms examiners can supply valuable information on whether a particular gun fired the ammunition in question. But the limits on this information have not always been respected in court, and a growing number of opinions have tried to address this fact. A forthcoming article in a festschrift for Professor Paul Giannelli surveys the developing law on this type of feature-matching evidence.

The article explains how the courts have moved from a position of skepticism of the ability of examiners to link bullets and other ammunition components to a particular gun to full-blown acceptance of identification “to the exclusion of all other firearms.” From that apogee, challenges to firearm-mark evidence over the past decade or so have generated occasional restrictions on the degree of confidence that firearms experts can express in court, but they have not altered the paradigm of making source attributions and exclusions instead of statements about the degree to which the evidence supports these conclusions. After reviewing the stages in the judicial reception of firearm-mark evidence, including the reactions to reports from the National Academy of Sciences and the President's Council of Advisors on Science and Technology, the article concludes by describing a more scientific, quantitative, evidence-based form of testimony that should supplant or augment the current experience-based decisions of skilled witnesses. A few excerpts follow:
From: David H. Kaye, Firearm-Mark Evidence: Looking Back and Looking Ahead, Case Western Reserve Law Review (forthcoming Vol. 68, Issue 3, 2018) (most footnotes omitted)

* * *
I. Rejection of Expert Source Attributions
For a time, courts did not admit testimony that items originated from a particular firearm. Some courts reasoned that jurors could make the comparisons and draw their own conclusions. In People v. Weber, for example, the trial court struck from the record an examiner’s testimony “that in his opinion the two bullets taken from the bodies were fired from this pistol, leaving that as a question for the jury to determine by an inspection of the bullets themselves.” In this 1904 trial, the court did not question the expert’s ability to discover toolmarks that could be probative of identity, but it saw no reason to believe that the expert would be better than lay jurors at drawing inferences from that information. Other courts allowed such opinions, but not if they were stated as “facts.” * * *

IV. Heightened Scrutiny Following the 2009 NAS Report
* * *
Neither the 2008 nor the 2009 NAS report made recommendations on admissibility of evidence, for that was not part of their charge. Practitioners and prosecutors proposed that this meant that the reports should not or could not be taken as undermining the admissibility of traditional highly judgmental pattern-matching identifications. However, the committees’ reviews of the literature clearly lent credence to the questions about the routine admission of categorical source attributions based on firearm-marks. 50/ In five prominent published opinions, courts cited the NAS reports and the opinions cited in Part III of this Article to limit such testimony. * * *

50. For example, in describing the scientific basis of “forensic science fields like firearms examination,” the 2008 report quoted with approval an article by two forensic scientists stating that “[f]orensic individualization sciences that lack actual data, which is most of them, . . . simply . . . assume the conclusion of a miniscule probability of a coincidental match . . . .” [Nat'l Research Council Comm. To Assess the Feasibility, Accuracy, and Tech. Capability of a Nat'l Ballistics Database, Ballistic Imaging 1, 54-55 (Daniel L. Cork et al., eds. 2008)] (quoting John I. Thornton & Joseph L. Peterson, The General Assumptions and Rationale of Forensic Identification, in 3 David L. Faigman, David H. Kaye, Michael J. Saks, & Joseph Sanders, Modern Scientific Evidence: the Law and Science of Expert Testimony § 24-7.2, at 169 (2002)). Apparently recognizing the threat of such assessments, AFTE complained that the committees’ literature reviews were shallow. In response to the 2008 Report, it wrote that “the committee lacked the expertise and information necessary for the in-depth study that would be required to offer substantive statements with regard to these fundamental issues of firearm and toolmark identification.” [AFTE Comm. for the Advancement of the Sci. of Firearm & Toolmark Identification, The Response of the Association of Firearm and Tool Mark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasibility, Accuracy, and Technical Capability of a National Ballistics Database, AFTE J., Summer 2008, at 243, available at https://afte.org/uploads/documents/position-nas-2008.pdf 243]. Likewise, it wrote that “the [2009] NAS committee in effect chose to ignore extensive research supporting the scientific underpinnings of the identification of firearm and toolmark evidence.” AFTE Comm. for the Advancement of the Sci. of Firearm & Toolmark Identification, The Response of the Association of Firearms and Tool Mark Examiners to the February 2009 National Academy of Science Report “Strengthening Forensic Science in the United States: A Path Forward,” AFTE J., Summer 2009, at 204, 206. According to AFTE, “years of empirical research . . . conclusively show[] that sufficient individuality is often present on tool (firearm tools or non-firearm tools) working surfaces to permit a trained examiner to conclude that a toolmark was made by a certain tool and that there is no credible possibility that it was made by any other tool working surface.” AFTE Comm. Response, supra * * * , at 242. After all, “[t]he principles and techniques utilized in forensic firearms identification have been used internationally for nearly a century by the relevant forensic science community to both identify and exclude specific firearms as the source of fired bullets and cartridge cases.” Id. at 237 (emphasis added). Prosecutors too sought to blunt the implications of the skeptical statements about the limited validation of the premises of the traditional theory of firearm-mark identification with an affidavit from the chairman of the NAS committee that wrote the 2008 Report. Affidavit of John E. Rolph at 1-3, United States v. Edwards, No. F-516-01 (D.C. Super. Ct., May 23, 2008). Yet, the affidavit merely collects excerpts from the report itself and ends with one that could be read as supporting admissibility under certain conditions. 
For another affidavit from a committee member contending that NAS “has questioned the validity of these fundamental assumptions of uniqueness and reproducibility,” see Declaration of Alicia Carriquiry, PhD. In Support of Motion in Limine to Exclude Firearms Examiner’s Opinion at 5, People v. Knight, No. LA067366 (Cal. Super. Ct. Apr. 2012). The use of affidavits from one or two committee members to give their personal views on what the words that the committee as a whole agreed upon really mean is ill-advised. It resembles asking individual members of Congress to provide their post hoc thoughts on what a committee report on legislation, or the statute itself, really meant.

Sunday, April 8, 2018

On the Difficulty of Latent Fingerprint Examinations

This morning the Office of Justice Programs of the Department of Justice circulated an email mentioning a “New Article: Defining the Difficulty of Fingerprint Comparisons.” The article, written a couple of weeks ago, is from the DOJ’s National Institute of Justice (NIJ). [1] It summarizes an NIJ-funded study that was completed two years ago. [2] The researchers published findings on their attempt to measure difficulty in the online science journal PLOS One four years ago. [3]

The “New Article” explains that
[T]he researchers asked how capable fingerprint examiners are at assessing the difficulty of the comparisons they make. Can they evaluate the difficulty of a comparison? A related question is whether latent print examiners can tell when a print pair is difficult in an objective sense; that is, whether it would broadly be considered more or less difficult by the community of examiners.
The first of these two questions asks whether examiners’ subjective assessments of difficulty are generally accurate. To answer this question, one needs an independent, objective criterion for difficulty. If the examiners’ subjective assessments of difficulty line up with the objective measure, then we can say that examiners are capable of assessing difficulty.

Notice that agreement among examiners on what is difficult and what is easy would not transform subjective assessments of difficulty into “objective” ones, any more than the fact that a particular supermodel would “broadly be considered” beautiful would make her beautiful “in an objective sense.” It would simply mean that there is inter-subjective agreement within a culture. One should not mistake inter-examiner reliability for objectivity.

In psychometrics, a simple measure of the difficulty of a question on a test is the proportion of test-takers who answer correctly. [4] Of course, “difficulty” could have other meanings. It might be that test-takers would think that one item is more difficult than another even though, after struggling with it, they did just as well as they had on an item that they (reliably) rated as much easier. A criterion for difficulty in this sense might be the length of time a test-taker devotes to the question. But the correct-answer criterion is appropriate in the fingerprint study because the research is directed at finding a method of identifying those subjective conclusions that are most likely to be correct (or incorrect).
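
For illustration only, the proportion-correct measure of item difficulty can be computed in a few lines (with hypothetical responses):

# Minimal sketch (hypothetical responses): an item's difficulty index is the
# proportion of examiners who answered it correctly (lower = harder).
responses = {                      # item -> list of 1 (correct) / 0 (incorrect)
    "pair_A": [1, 1, 1, 1, 1, 0, 1, 1],
    "pair_B": [1, 0, 0, 1, 0, 1, 0, 0],
    "pair_C": [0, 0, 0, 1, 0, 0, 0, 0],
}
for item, answers in responses.items():
    print(item, "proportion correct =", round(sum(answers) / len(answers), 2))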

NIJ’s new article also mentions the hotly disputed issue of whether error probabilities, as estimated by the performance of examiners making a specific set of comparisons, should be applied to a single examiner in a given case. One would think the answer is that as long as the conditions of the experiments are informative of what can happen in practice, group means are the appropriate starting point—recognizing that they are subject to adjustment by the factfinder for individualized determinations about the acuity of the examiner and the difficulty of the task at hand. However, prosecutors have argued that the general statistics are irrelevant to weighing the case-specific conclusions from their experts. The NIJ article states that
The researchers noted that being aware that some fingerprint comparisons are highly accurate whereas others may be prone to error, “demonstrates that error rates are indeed a function of comparison difficulty.” “Because error rates can be tied to comparison difficulty,” they said, “it is misleading to generalize when talking about an overall error rate for the field.”
But the assertion that “it is misleading to generalize when talking about an overall error rate for the field” cannot be found in the 59-page document. When I searched for the string “generalize,” no such sentence appeared. When I searched for “misleading,” I found the following paragraph (p. 51):
The mere fact that some fingerprint comparisons are highly accurate whereas others are prone to error has a wide range of implications. First, it demonstrates that error rates are indeed a function of comparison difficulty (as well as other factors), and it is therefore very limited (and can even be misleading) to talk about an overall “error rate” for the field as a whole. In this study, more than half the prints were evaluated with perfect accuracy by examiners, while one print was misclassified by 91 percent of those examiners evaluating it. Numerous others were also misclassified by multiple examiners. This experiment provides strong evidence that prints do vary in difficulty and that these variations also affect the likelihood of error.
As always, the inability to condition on relevant variables with unknown values “can be misleading” when making an estimate or prediction. But this fact about statistics does not make an overall mean irrelevant. Knowing that there is a high overall rate of malaria in a country is at least somewhat useful in deciding whether to take precautions against malaria when visiting that country—even though a more finely grained analysis of the specific locales within the country could be more valuable. That said, when a difficulty-adjusted estimate of a probability of error becomes available, requiring it to be presented to the triers of fact instead of the group mean would be a sound approach to the relevance objection.
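
A toy calculation (with made-up numbers, not the study's) shows the point: an overall error rate is a mixture of difficulty-specific rates, weighted by how often each difficulty level arises, and conditioning on difficulty, when it is known, gives a sharper estimate.

# Illustration (made-up numbers): the overall error rate as a weighted mixture
# of difficulty-specific error rates.
strata = {               # difficulty level -> (share of comparisons, error rate)
    "easy":   (0.60, 0.00),
    "medium": (0.30, 0.03),
    "hard":   (0.10, 0.25),
}
overall = sum(share * rate for share, rate in strata.values())
print(f"overall error rate = {overall:.1%}")   # 3.4%
for level, (share, rate) in strata.items():
    print(f"  {level} comparisons ({share:.0%} of the mix): error rate {rate:.0%}")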

The experiments described in the report to NIJ are fascinating in many respects. In the long run, the ideas and findings could lead to better estimates of accuracy (error rates) for use in court. More immediately, one can ask how the error rates seen in these experiments compare to earlier findings (reviewed in the report and on this blog). But it is hard to make meaningful comparisons. In the first of the three experiments in the NIJ-funded research, 56 examiners were recruited from participants in the 2011 IAI Educational Conference. These examiners (a few of whom were not latent-print examiners) made forced judgments with a time constraint about the association (positive or negative) of many pairs of prints. The following classification table can be inferred from the text of the report:

                    Truly Positive   Truly Negative
Positive Reported          985               37
Negative Reported          163             1107
Total                     1148             1144

The observed sensitivity, P(say + | is +), across the examiners and comparisons was 985/1148 = 85.8%, and the observed specificity, P(say – | is –), was 1107/1144 = 96.8%. The corresponding conditional error proportions are 14.2% for false negatives and 3.2% for false positives. These error rates are higher than those in other research, but in those experiments, the examiners could declare a comparison to be inconclusive and did not have to make a finding within a fixed time. These constraints were modified in a subsequent experiment in the NIJ-funded study, but the report does not provide a sufficient description to produce a complete table.
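
For completeness, the computation just described, run on the inferred table:

# Sensitivity, specificity, and the corresponding error proportions from the
# 2x2 table inferred from the report.
true_pos, false_pos = 985, 37     # positive reported: truly positive / truly negative
false_neg, true_neg = 163, 1107   # negative reported: truly positive / truly negative

sensitivity = true_pos / (true_pos + false_neg)   # P(say + | is +)
specificity = true_neg / (true_neg + false_pos)   # P(say - | is -)
print(f"sensitivity = {sensitivity:.1%}, false-negative rate = {1 - sensitivity:.1%}")
print(f"specificity = {specificity:.1%}, false-positive rate = {1 - specificity:.1%}")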

References
1. National Institute of Justice, “Defining the Difficulty of Fingerprint Comparisons,” March 22, 2018, NIJ.gov: https://nij.gov/topics/forensics/evidence/impression/Pages/defining-difficulty-of-fingerprint-comparisons.aspx

2. Jennifer Mnookin, Philip J. Kellman, Itiel Dror, Gennady Erlikhman, Patrick Garrigan, Tandra Ghose, Everett Metler, & Dave Charlton, Error Rates for Latent Fingerprinting as a Function of Visual Complexity and Cognitive Difficulty, May 2016, https://www.ncjrs.gov/pdffiles1/nij/grants/249890.pdf

3. Philip J. Kellman, Jennifer L. Mnookin, Gennady Erlikhman, Patrick Garrigan, Tandra Ghose, Everett Mettler, David Charlton, & Itiel E. Dror, Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty, PLOS One, May 2, 2014, https://doi.org/10.1371/journal.pone.0094617

4. Frederic M. Lord, The Relationship of the Reliability of Multiple-Choice Test to the Distribution of Item Difficulties, 18 Psychometrika 181 (1952).