Friday, July 27, 2018

The ACLU’s In-Your-Face Test of Facial Recognition Software

The ACLU has reported that Amazon’s facial recognition “software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.” [1] This figure is calculated to impress the very legislators the ACLU is asking to “enact a moratorium on law enforcement use of face recognition.” All these false matches, the organization announced, create “28 more causes for concern.” Inasmuch as there are 535 members of Congress (Senators plus Representatives), the false-match rate is about 5%.

Or is it? The ACLU’s webpage states that
To conduct our test, we used the exact same [sic] facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza.

Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.
So there were 535 × 25,000 = 13,375,000 comparisons. With that denominator, the false-match rate is about 2 per million (0.0002%).
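For concreteness, here is the arithmetic behind both rates (a minimal back-of-the-envelope sketch using the counts the ACLU reported):

# Counts as reported by the ACLU
members = 535            # Senators plus Representatives
mugshots = 25_000        # arrest photos in the Rekognition collection
false_matches = 28       # members incorrectly "matched" to a mugshot

rate_per_member = false_matches / members          # about 0.052, i.e., roughly 5%
comparisons = members * mugshots                   # 13,375,000 pairwise comparisons
rate_per_comparison = false_matches / comparisons  # about 2.1e-06, i.e., roughly 2 per million

print(f"{rate_per_member:.1%} of members; {rate_per_comparison:.4%} of comparisons")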

But none of these figures—28, 5%, or 0.0002%—means very much, since the ACLU’s “test” used a low level of similarity to make its matches. The default setting for the classifier is 80%. Police agencies do not use so weak a threshold [2, 3]. Using a low figure like 80% ensures that there will be more false matches among so many comparisons. Amazon recommends that police who use its system raise the threshold to 95%. The ACLU apparently neglected to adjust the level (even though doing so would have cost less than a large pizza). Or, worse, it tried the system at the higher level and chose not to report an outcome that probably would have produced fewer "causes for concern." Either way, public discourse would benefit from more complete testing or reporting.
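For readers curious where the threshold enters, the sketch below (written against the boto3 SDK; the bucket, key, and collection names are hypothetical, and this is not the ACLU's actual code) shows the single parameter that controls the cutoff. Rekognition's default is 80; Amazon's recommendation for law enforcement is 95:

import boto3

# A sketch, not the ACLU's actual pipeline; the names below are made up.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.search_faces_by_image(
    CollectionId="mugshot-collection",    # a collection built from the 25,000 arrest photos
    Image={"S3Object": {"Bucket": "probe-photos", "Name": "member-of-congress.jpg"}},
    FaceMatchThreshold=95,                # raise the similarity cutoff from the default of 80
    MaxFaces=5,                           # return at most five candidate matches
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])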

It also is unfortunate that Amazon and journalists [2, 3] call the threshold for matches a “confidence threshold.” The percentage is not a measure of how confident one can be in the result. It is not the probability of a true match given a classified match. It is not a probability at all. It is a similarity score on a scale of 0 to 1. A similarity score of 0.95, or 95%, does not even mean that the paired images are 95% similar in any intuitively obvious sense.

The software does give a “confidence value,” which sounds like a probability, but the Amazon documentation I have skimmed suggests that this quantity relates to some kind of “confidence” in the conclusion that a face (as opposed to anything else) is within the rectangle of pixels (the “bounding box”). The Developer Guide states that [4]
For each face match, the response provides a bounding box of the face, facial landmarks, pose details (pitch, roll, and yaw), quality (brightness and sharpness), and confidence value (indicating the level of confidence that the bounding box contains a face). The response also provides a similarity score, which indicates how closely the faces match.
and [5]
For each face match that was found, the response includes similarity and face metadata, as shown in the following example response [sic]:
{
   ...
    "FaceMatches": [
        {
            "Similarity": 100.0,
            "Face": {
                "BoundingBox": {
                    "Width": 0.6154,
                    "Top": 0.2442,
                    "Left": 0.1765,
                    "Height": 0.4692
                },
                "FaceId": "84de1c86-5059-53f2-a432-34ebb704615d",
                "Confidence": 99.9997,
                "ImageId": "d38ebf91-1a11-58fc-ba42-f978b3f32f60"
            }
        },
        {
            "Similarity": 84.6859,
            "Face": {
                "BoundingBox": {
                    "Width": 0.2044,
                    "Top": 0.2254,
                    "Left": 0.4622,
                    "Height": 0.3119
                },
                "FaceId": "6fc892c7-5739-50da-a0d7-80cc92c0ba54",
                "Confidence": 99.9981,
                "ImageId": "5d913eaf-cf7f-5e09-8c8f-cb1bdea8e6aa"
            }
        }
    ]
}
From a statistical standpoint, the ACLU’s finding is no surprise. Researchers encounter the false discovery problem with big data sets every day. If you make enough comparisons with a highly accurate system, a small fraction will be false alarms. Police are well advised to use facial recognition software in the same manner as automated fingerprint identification systems—not as simple, single-source classifiers, but as a screening tool to generate a list of potential sources. And they can have more confidence in classified matches from comparisons against a small database of images of, say, dangerous fugitives than in a reported hit to one of thousands upon thousands of mug shots.
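The arithmetic of the false discovery problem is easy to illustrate. Suppose, purely for illustration, that a system's false-match probability at a given threshold is one in a million per comparison; searching every member of Congress against a 25,000-photo gallery would still be expected to produce a dozen or so false alarms:

# Illustrative only: the per-comparison false-match probability is assumed, not measured.
p_false_match = 1e-6      # hypothetical false-match probability per comparison
probes = 535              # one probe photo per member of Congress
gallery = 25_000          # mugshots in the database

comparisons = probes * gallery
expected_false_alarms = comparisons * p_false_match
print(expected_false_alarms)   # about 13 expected false alarms from 13,375,000 comparisons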

These observations do not negate the privacy concerns with applying facial recognition software to public surveillance systems. Moreover, I have not discussed the ACLU’s statistics on differences in false-positive rates by race. There are important issues of privacy and equality at stake. In addressing these issues, however, a greater degree of statistical sophistication would be in order.

REFERENCES
  1. Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots, July 26, 2018, 8:00 AM, https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
  2. Natasha Singer, Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says, N.Y. Times, July 26, 2018,  https://www.nytimes.com/2018/07/26/technology/amazon-aclu-facial-recognition-congress.html
  3. Ryan Suppe, Amazon's Facial Recognition Tool Misidentified 28 Members of Congress in ACLU Test, USA Today, July 26, 2018, https://www.usatoday.com/story/tech/2018/07/26/amazon-rekognition-misidentified-28-members-congress-aclu-test/843169002/
  4. Amazon Rekognition Developer Guide: CompareFaces, https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html
  5. Amazon Rekognition Developer Guide: SearchFaces Operation Response, https://docs.aws.amazon.com/rekognition/latest/dg/search-face-with-id-procedure.html

Friday, July 20, 2018

Handwriting Evidence in Almeciga and Pitts: Ships Passing in the Night?

Almeciga: A Signature Case

Erica Almeciga sued the Center for Investigative Reporting (CIR) for releasing a video on Rosalio Reta, a former member of the Los Zetas Drug Cartel, in which she was interviewed about Reta, her “romantic partner at the time.” Almeciga v. Center for Investigative Reporting, Inc., 185 F.Supp.3d 401 (S.D.N.Y. 2016). Her complaint was that the producers breached a promise to conceal her identity, causing her to “develop[] paranoia” and to be “treated for depression and Post Traumatic Stress Disorder.” Id. at 409. In response, CIR “produced a standard release form ... authorizing [it] to use [her] ‘name, likeness, image, voice, biography, interview, performance and/or photographs or films taken of [her] ... in connection with the Project.’” The release, she said, was fabricated—she never saw or signed it—and she obtained an expert report from “a reputed handwriting expert, Wendy Carlson,” id. at 413, concluding that “‘[b]ased on [her] scientific examination’ the signature on the Release was a forgery.” Id. at 414. To conduct that examination, Carlson compared the signature on the release to “numerous purported ‘known’ signatures” given to her by Almeciga’s lawyer. Id. at 414.

The case found its way to the United States District Court for the Southern District of New York. Judge Jed S. Rakoff dismissed the complaint because “New York's Statute of Frauds [requires that] if a contract is not capable of complete performance within one year, it must be in writing to be enforceable.” Id. at 409. The alleged promise to keep Almeciga's identity concealed was oral, not written.

The court also imposed sanctions on Almeciga for “fabricat[ing] the critical allegations in her Amended Complaint.” Id. at 408. Of course, if the handwriting expert’s analysis was correct, Almeciga’s claim that the release was forged was true, and there would have been no “fraud upon the Court.” Id. at 413. Therefore, Judge Rakoff held “a ‘Daubert’ hearing on the admissibility of Carlson's testimony” in conjunction with the evidentiary hearing on CIR's motion for sanctions. Id. at 414. His conclusion was uncompromising:
[T]he Court grants defendant's motion to exclude Carlson's “expert” testimony, finding that handwriting analysis in general is unlikely to meet the admissibility requirements of Federal Rule of Evidence 702 and that, in any event, Ms. Carlson's testimony does not meet those standards.
Id. at 407–08. As this sentence indicates, there are two facets to the Almeciga opinion: (1) “that handwriting analysis in general”—meaning “the ‘ACE–V’ methodology ... , an acronym for ‘Analyze, Compare, Evaluate, and Verify’” (id. at 418)—“bears none of the indicia of science and suggests, at best, a form of subjective expertise” (id. at 419); and (2) that the particular examination of the signatures in this case not only “flunks Daubert” (id. at 493), but also falls short of the potentially less stringent requirements for nonscientific expertise.

Although one would not expect the defects in the particular case to be at issue in all or even most cases, one would expect the court’s Daubert holding to be a wake-up call. As Judge Rakoff noted, “even if handwriting expertise were always admitted in the past (which it was not), it was not until Daubert that the scientific validity of such expertise was subject to any serious scrutiny.” Id. at 418.

Pitts: “Inapposite and Unpersuasive”

Lee Andrew Pitts allegedly “entered a branch of Chase Bank ... and handed [the manager at a teller window] a withdrawal slip that had written on it: “‘HAND OVER ALL 100, 50, 20 I HAVE A GUN I WILL SHOOT.’” United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1116550 (E.D.N.Y. Feb. 26, 2018). After the manager repeatedly said that she had no money, the would-be robber “fled on foot ... leaving behind the withdrawal slip” with latent fingerprints. A trawl of a fingerprint database — the court does not say which one or how it was conducted — led New York police to arrest Pitts.

At Pitts’s impending trial on charges of entering the bank with the intent to rob it, the government planned to elicit testimony from “Criminalist Patricia Zippo, who is a handwriting examiner and concluded that Defendant ‘probably may have’ written the demand note found at the crime scene.” Pitts moved “to preclude the government from introducing expert opinion testimony as to ... handwriting analysis.” He “relie[d] principally on the [Almeciga] decision” from the other side of the East River.

Chief Judge Dora L. Irizarry dismissed Almeciga as “inapposite and unpersuasive” because of “significant factual differences from the instant case.” Let’s look at each of these differences.
  • First, the plaintiff in Almeciga tasked the analyst with determining whether plaintiff’s signature on a contractual release was a forgery. ... Forgery analysis is markedly more difficult than comparing typical signatures and has considerably higher error rates than simpler comparisons. Id. at 422 (citation omitted) (“[W]hile forensic document examiners might have some arguable expertise in distinguishing an authentic signature from a close forgery, they do not appear to have much, if any, facility for associating an author’s natural handwriting with his or her disguised handwriting.”).
It is true that the task in Pitts was not to compare signatures. It was to investigate the similarity between two written sentences as they appear on the robbery note and ... what? Exemplars the defendant was forced to write (and that, like the exemplars in Almeciga, might have been disguised versions of normal handwriting)? Or did Zippo receive previously existing exemplars of defendant’s handwriting? What do scientific studies of performance on this sort of handwriting-comparison task show? The Pitts opinion does not even hazard a guess, and it blithely ignores the broad conclusion in Almeciga that
[as to] the third Daubert factor, “[t]here is little known about the error rates of forensic document examiners.” While a handful of studies have been conducted, the results have been mixed and “cannot be said to have ‘established’ the validity of the field to any meaningful degree.” (Citations omitted.)
  • Second, the expert performed her initial analysis without any independent knowledge of whether the “known” handwriting samples used for comparison belonged to the plaintiff.
This refers to the fact that in Almeciga, the expert received the exemplars from the lawyer—she did not collect them herself. Her conclusion therefore was conditioned on the assumption that the exemplars really were representative of Almeciga's true signatures. But the need to make this assumption does not pertain to the validity of the ACE-V part of the examination. This difference therefore has no bearing on Almeciga’s conclusion that handwriting determinations have not been scientifically validated.
  • Third, the expert conflictingly claimed that her analysis was based on her “experience” as a handwriting analyst, but then claimed in her expert report that her conclusions were based on her “scientific examination” of the handwriting samples.
Certainly, Judge Rakoff was not impressed with the witness, but the conclusion that Judge Rakoff drew from the juxtaposed statements was only that given these and other statements about the high degree of subjectivity in handwriting comparisons, “[i]t therefore behooves the Court to examine more specifically whether the ACE–V method of handwriting analysis, as described by Carlson, meets the common indicia of admissible scientific expertise as set forth in Daubert.” Judge Irizarry evidently was not disposed to conduct a similar inquiry.
  • Fourth, the court noted several instances of bias introduced by plaintiff’s counsel. For example, counsel initiated the retention by providing a conclusion that “[t]he questioned document was a Release that Defendant CIR forged.” (Citations omitted.)
Was the witness in Pitts insulated from expectation bias? The opinion does not describe any precautions taken to avoid potentially biasing information. What did Patricia Zippo know when she received the handwritten note? Was she given equivalent sets of exemplars from several writers, with no expectation that only one of them was the writer? That seems doubtful.
  • Fifth, the expert contradicted herself in numerous respects, including by stating that her conclusions were verified when they were not, and claiming both that the signature on the questioned document was “‘made to resemble’ plaintiff’s” and also that the signatures were “‘very different.’”
Like many of the other differences, this one does not bear on Judge Rakoff’s conclusion that the “amorphous, subjective approach” of ACE-V “flunks Daubert.” Almeciga simply used Carlson's contradictory statements and the other as-applied factors to reject the argument that, even if the handwriting examination was inadmissible as scientific evidence, it might be admissible as expertise that “is not scientific in nature.”

The Significant Difference

In sum, the Pitts opinion does not grapple with the Daubert issue of scientific validity. Instead of surveying the scientific literature to ascertain whether handwriting examiners’ claims of expertise have been validated (which boils down to studies of how accurate examiners are at the kind of comparisons performed in the case), the court reasons that the process must be accurate because handwriting examiners’ opinions are commonly admitted in court and “wholesale exclusion of handwriting analysis ... is not the majority view in this Circuit.”

Both the Almeciga and Pitts courts were “free to consider how well handwriting analysis fares under Daubert and whether ... testimony is admissible, either as ‘science’ or otherwise.” Almeciga, 185 F.Supp.3d at 418. The most significant difference between the two opinions is that one judge took a hard look at what is actually known about handwriting expertise (or at least tried to), while the other did not.

Tuesday, July 17, 2018

More on Pitts and Lundi: Why Bother with Opposing Experts?

In the post-PCAST cases of United States v. Pitts 1/ and United States v. Lundi 2/, the government prevented a scholar of the development and culture of fingerprinting from testifying for the defense. The proposed witness was Simon Cole, Professor of Criminology, Law and Society at the Department of Social Ecology of the University of California (Irvine). Pitts “contend[ed] that Dr. Cole’s testimony [was] necessary ‘contrary evidence’ that calls into question the reliability of fingerprint analysis.” Lundi wanted Cole to testify about “forensic print analysis, in particular in the areas of accuracy and validation,” including "best practices." The federal district court would not allow it.

Qualifications of an Expert Witness

In Pitts, the government first denied that Cole had the qualifications to say anything useful about fingerprinting. It maintained (in the court's words) that “Dr. Cole (1) is ‘not a trained fingerprint examiner’; (2) ‘has not published peer-reviewed scientific articles on the topic of latent fingerprint evidence’; and (3) ‘has not conducted any validation research in the field.’” The court neither accepted nor clearly rejected this argument, for it decided to keep Cole away from the jury on a different ground.

Exclusion for lack of qualifications would have done violence to Federal Rule of Evidence 702. First, Cole was not going to offer an opinion, either as a criminalist (which he is not) or as an interdisciplinary scholar (which he is), on whether the examiner in the case had accurately perceived, compared, and evaluated the images. More likely, he would have opined on the extent to which scientific studies have shown that fingerprint examiners can distinguish between same-source and different-source prints. Training and experience in conducting fingerprint identifications are largely irrelevant to that task.

Second, Rule 702 does not require someone to publish peer-reviewed articles on a topic to be qualified to give an opinion as to the state of the scientific literature. Epidemiologists and toxicologists, for example, can opine about the toxicity of a compound without first publishing their own research on the compound's toxicity. Finally, there is no support in logic or law for the notion that someone has to conduct his or her own validity study to have helpful information on the studies that others have done and what these studies prove.

The Panacea of Cross-examination

The government’s other argument was “that Dr. Cole’s testimony will not assist the trier of fact.” But this argument was garbled:
Specifically, the government points out that Dr. Cole’s only disclosed opinion is that the government’s expert’s testimony “exaggerates the probative value of the evidence because such testimony improperly purports to eliminate the probability that someone else might be the source of the latent print.” “Professor Cole fails to provide any analysis of why latent fingerprint evidence [in general] is so unreliable that it should not be submitted to the jury or, if such evidence can be reliable in some circumstances, what precisely the NYPD examiners did incorrectly in this case.” Dr. Cole is not expected to testify that the identification made by the government’s expert in this case is unreliable or that the examiners made a misidentification. Therefore, the government argues Dr. Cole’s opinion goes to the weight of the government’s evidence, not its admissibility. (Citations and internal quotation marks omitted.)
Chief Judge Dora L. Irizarry had already ruled that a source attribution made with an acknowledgment of at least a theoretical possibility that the match could be a false positive was admissible. If expert evidence is admitted, the opposing party is normally permitted to counter it with expert testimony that it deserves little weight. To argue that rebuttal evidence about weight is inadmissible just because the challenged evidence “goes to weight” makes no sense. Once the evidence is admitted, its weight is the only game in town.

The real issue is what validity or possibility-of-error testimony would add to the jury's knowledge. In this regard, Judge Irizarry wrote that
The Court is not convinced that Dr. Cole’s testimony would be helpful to the trier of fact. The only opinion Defendant seeks to introduce is that fingerprint examiners “exaggerate” their results to the exclusion of others. However, the government has indicated that its experts will not testify to absolutely certain identification nor that the identification was to the exclusion of all others. Thus, Defendant seeks admit Dr. Cole’s testimony for the sole purpose of rebutting testimony the government does not seek to elicit. Accordingly, Dr. Cole’s testimony will not assist the trier of fact to understand the evidence or determine a fact in issue. (Citations omitted.)
At first blush, this seems reasonable. If the only thing Cole was prepared to say was that fingerprinting does not permit “absolutely certain identification,” and if the fingerprint examiners will have said this anyway, why have him repeat it?

But surely Cole (or another witness—say, a statistician) could have testified to something more than that. An expert with statistical knowledge could inform the jury that although there is very little direct evidence on how frequently fingerprinting experts err in making source attributions in real casework, experiments have tested their accuracy, and the researchers detected errors at various rates. This information could “assist the trier of fact to understand the evidence or determine a fact in issue.” So why keep Cole from giving this “science framework” testimony?

The court’s answer boils down to this:
Moreover, the substance of Dr. Cole’s opinion largely appears in the reports and attachments cited in Defendant’s motion to suppress .... For example, Dr. Cole’s article More Than Zero contains a lengthy discussion about error rates in fingerprint analysis and the rhetoric in conveying those error rates ... , and the PCAST Report notes that jurors assume that error rates are much lower than studies reveal them to be (PCAST Report at 9-10 (noting that error rates can be as high as one in eighteen)). Defendant identifies no additional information or expertise that Dr. Cole’s testimony provides beyond what is in these articles and does not explain why cross-examination of the government’s experts using these reports would be insufficient. 3/
Now, I think the 1 in 18 figure is mildly ridiculous, 4/ but there is no general rule that because published findings could be introduced via cross-examination, a party cannot call on an opposing expert to present or summarize the findings. First, the expert being cross-examined might not concede that the findings are from authoritative sources. This occurred repeatedly when a number of prosecution DNA experts flatly refused to acknowledge the 1992 NAS report on forensic DNA technology as authoritative. That created a hearsay problem for defendants. After all, the authors of the report were not testifying and hence were not subject to cross-examination. The rule against hearsay applies to such statements because the jury would have to evaluate the truthfulness of the statements without hearing from the individuals who wrote them.

Therefore, counsel could not quote or paraphrase the report’s statements over a hearsay objection unless the report fell under some exception to the rule against hearsay. The obvious exception—for “learned treatises”—does not apply unless the report first is “established as a reliable authority by the testimony or admission of the witness or by other expert testimony or by judicial notice.” 5/

In Pitts, however, it appears that the government’s experts were willing to concede that the NAS and PCAST reports were authoritative (even though a common complaint from vocal parts of the forensic-science community about both reports was that they were not credible because they lacked representation from enough practicing forensic scientists). Moreover, a court might well have to admit the PCAST report under the hearsay exception for government reports.

Nonetheless, a second problem with treating cross-examination as the equivalent of testimony from an opposing expert is that it is not equivalent. By way of comparison, would judges in a products liability case against the manufacturer of an alleged teratogen reason that the defense cannot call an expert to present and summarize the results of studies that address the strength of the association between exposure and birth defects but rather can only ask the plaintiff’s experts about the studies?

In criminal cases, even if the defense expert eschews opinions on whether the defendant is the source of the latent prints as beyond his (or anyone’s?) expertise, the jury might consider this expert to be more credible and more knowledgeable about the underlying scientific literature than the latent print examiners. Examiners understandably can have great confidence in their careful judgments and in the foundations of the important work that they do. It would not be surprising for their message on cross-examination (or re-direct examination) to be, yes, errors are possible and they have occurred in artificial experiments and a few extreme cases, but, really, the process is highly valid and reliable. An outside observer may have a less sanguine perspective to offer even when discussing the same underlying literature.

Cross-examination is all well and good, but cross-examination of experts is delicate, difficult, and dangerous. Confining the defense to posing questions about specific studies in lieu of its own expert testimony about these studies is not normal. Court instructions about error probabilities (analogous to instructions about the factors that degrade eyewitness identifications) might be a device to avoid unduly time-consuming defense witnesses, but those do not yet exist. The opportunity to cross-examine the other party's witnesses rarely warrants depriving a party of the right to present testimony from its experts.

NOTES
  1. United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1169139 (E.D.N.Y. Mar. 2, 2018).
  2. United States v. Lundi, 17-CR-388 (DLI), 2018 WL 3369665 (E.D.N.Y. July 10, 2018).
  3. The opinion in Lundi is similar:
    The government seeks to preclude Defendant’s proposed expert, Dr. Cole, from testifying, and points to this Court’s decision in Pitts, ... . The government argues that, as was the case in Pitts, Dr. Cole’s anticipated testimony would serve to rebut testimony from the government’s experts that the government does not expect to elicit. ... The government argues further that Dr. Cole’s additional proposed testimony, which would address the reliability of fingerprint examinations and the “best practices” to be followed when conducting such examinations, is not distinguishable from the information contained in the reports Defendant attached to his motion, and with which he can cross examine the government’s experts. ...

    Defendant claims that Dr. Cole’s testimony is necessary in this case because the reports could not be introduced through the government’s experts. .... However, the government has given every indication that its experts would recognize these reports, such that Defendant can use them on cross-examination. See Opp’n at 18 (“[t]o the extent the defendant wants to cross examine the [fingerprint] examiners on the basis of the empirical studies in which the error rates cited in the defendant’s motion were found, the defendant is free to do so....”). The Court finds that Dr. Cole’s testimony would not assist the trier of fact. See Pitts, 2018 WL 1169139, at *3. Accordingly, the testimony is precluded.
  4. See David H. Kaye, On a “Ridiculous” Estimate of an “Error Rate for Fingerprint Comparisons”, Forensic Sci., Stat. & L., Dec. 10, 2016, http://for-sci-law.blogspot.com/2016/12/on-ridiculous-estimate-of-error-rate.html.
  5. Federal Rule of Evidence 803(18); see generally David H. Kaye, David A. Bernstein, & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence § 5.4 (2d ed. 2011).

Monday, July 16, 2018

Ignoring PCAST’s Explication of Rule 702(d): The Opinions on Fingerprint Evidence in Pitts and Lundi

With the release of an opinion in February and another in July 2018, the District Court for the Eastern District of New York became at least the second federal district court to find that the 2016 report of the President’s Council of Advisors on Science and Technology (PCAST) [1] did not militate in favor of excluding testimony that a defendant is the source of a latent fingerprint. Chief Judge Dora L. Irizarry wrote both opinions.

United States v. Pitts [2]

The first ruling came in United States v. Pitts. The government alleged that Lee Andrew Pitts “entered a branch of Chase Bank ... and handed [the manager at a teller window] a withdrawal slip that had written on it: ‘HAND OVER ALL 100, 50, 20 I HAVE A GUN I WILL SHOOT.’” After the manager repeatedly said that she had no money, the would-be robber “fled on foot ... leaving behind the withdrawal slip” with latent fingerprints. A trawl of a fingerprint database — the court does not say which one or how it was conducted — led New York police to arrest Pitts two weeks later.

Facing trial on charges of entering the bank with the intent to rob it, Pitts moved “to preclude the government from introducing expert opinion testimony as to latent fingerprint and handwriting analysis.” The opinion does not specify the exact nature of the expert's fingerprint testimony. Presumably, it would have been an opinion that Pitts is the source of the print on the withdrawal slip. The court merely noted that the government "claims that its fingerprint experts do not intend to testify that fingerprint analysis has a zero or near zero error rate."

Judge Irizarry made short work of Pitts’s contention that such testimony would contravene Federal Rule of Evidence 702 and Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). Pitts relied “chiefly on the findings of the PCAST Report, the [2009] NAS Report [3], and several out-of-circuit court decisions that question the reliability of latent fingerprint analysis.” The judge was “not persuaded.” She acknowledged that “[t]he PCAST and NAS Reports [indicate that] error rates are much higher than jurors anticipate” and that the NAS Report stated that “[w]e have reviewed available scientific evidence of the validity of the ACE-V method and found none.” But she was “dismayed that Defendant’s opening brief failed to address an addendum to the PCAST Report.” According to the court,
[The 2017 Addendum] “applaud[ed] the work of the friction-ridge discipline” for steps it had taken to confirm the validity and reliability of its methods. ... The PCAST Addendum further concluded that “there was clear empirical evidence” that “latent fingerprint analysis [...] method[ology] met the threshold requirements of ‘scientific validity’ and ‘reliability’ under the Federal Rules of Evidence.”
Actually, the Addendum [4] adds little to the 2016 report. It responds to criticisms from the forensic-science establishment. The assessment of the scientific showing for the admissibility of latent fingerprint identification under Rule 702 is unchanged. The original report stated that “latent fingerprint analysis is a foundationally valid subjective methodology—albeit with a false positive rate that is substantial and is likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis.” It added that “[i]n reporting results of latent-fingerprint examination, it is important to state the false-positive rates based on properly designed validation studies.” The Addendum does not retreat from or modify these conclusions in any way.
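PCAST's insistence on reporting false-positive rates from validation studies raises the question of how a rate should be summarized when a study observes only a handful of errors. As I understand it, PCAST reported upper 95% confidence bounds rather than bare observed proportions; the sketch below, with hypothetical counts, shows one standard way to compute such a bound (the Clopper-Pearson method):

from scipy.stats import beta

def upper_bound(errors: int, trials: int, level: float = 0.95) -> float:
    """One-sided Clopper-Pearson upper confidence bound on a false-positive rate."""
    if errors >= trials:
        return 1.0
    return beta.ppf(level, errors + 1, trials - errors)

# Hypothetical validation-study counts, for illustration only.
print(upper_bound(1, 500))   # observed rate 1/500 = 0.002; upper bound ~0.0095 (about 1 in 105)
print(upper_bound(0, 500))   # even zero observed errors yields a bound ~0.006 (about 1 in 167)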

Both the Report and the Addendum reinforce the conclusion that, despite the lack of detailed, objective standards for evaluating the degree of similarity between pairs of prints, experiments have shown that analysts can reach the conclusion that two prints have a common source with good accuracy. But the Report also lists five more conditions that bear on whether a particular analyst has reached the correct conclusion in a given case. It coins the neoteric phrase “validity as applied” for the showing that a procedure has been properly applied in the case at bar:
Scientific validity as applied, then, requires that an expert: (1) has undergone relevant proficiency testing to test his or her accuracy and reports the results of the proficiency testing; (2) discloses whether he or she documented the features in the latent print in writing before comparing it to the known print; (3) provides a written analysis explaining the selection and comparison of the features; (4) discloses whether, when performing the examination, he or she was aware of any other facts of the case that might influence the conclusion; and (5) verifies that the latent print in the case at hand is similar in quality to the range of latent prints considered in the foundational studies.
The opinion does not discuss whether the court accepts or rejects this five-part test for admitting the proposed testimony. It jumps to the unedifying conclusion that defendant’s “critiques [do not] go to the admissibility of fingerprint analysis, rather than its weight.”

United States v. Lundi [5]

Chief Judge Irizarry returned to the question of the admissibility of source attributions from latent prints in United States v. Lundi. In the middle of the afternoon of February 20, 2017, three men entered a check-cashing and hair salon business on Flatbush Avenue in Brooklyn. They forced an employee in a locked glass booth to let them in by pointing a gun at the head of a customer. They made off with approximately $13,000, but one of them had put his hands on top of the glass booth. Police ran an image of the latent prints from the booth through a New York City automated fingerprint identification system (AFIS) database. They decided that those prints came from Steve Lundi. Federal charges followed.

In advance of trial, Lundi moved to exclude the identification. He avoided the Pitts pitfall of arguing that there was no adequate scientific basis for expert latent print source attributions (although the more recent report of an American Association for the Advancement of Science (AAAS) working group would have lent some credence to such a claim [6]). Instead, Lundi “challeng[ed] the application of that [validated] science to the specific examinations conducted in the instant case.” It is impossible to tell from the opinion whether the court was made aware of the PCAST five-part test for admissibility under Rule 702(d). Again, citing the unpublished opinion of a federal court in Illinois, Judge Irizarry apparently leapt over this part of the Report to the conclusion that
This Court is not persuaded that Defendant’s challenges go to the admissibility of the government’s fingerprint evidence, rather than to the weight accorded to it. Moreover, as this Court noted in Pitts, fingerprint analysis has long been admitted at trial without a Daubert hearing. ... The Court sees no reason to preclude such evidence here. Accordingly, Defendant’s motion to preclude fingerprint evidence is denied.
Again, it is impossible to tell from the court's cursory and conclusory analysis whether the theory is that an uncontroverted assurance that an expert undertook an “analysis,” a “comparison,” and an “evaluation” and that another expert did a “verification” ipso facto satisfies Rule 702(d).  The judge noted that “the government points to concrete indicators of how the ACE-V method actually was followed by Detective Skelly,” but it would be hard to find a modern fingerprint identification in which there were no indications that the examiner (1) analyzed the latent print (decided that it was of adequate quality to continue), (2) picked out features to compare and compared them, and then (3) evaluated what was seen. If this is all it takes to satisfy the Rule 702(d) requirement that “the expert has reliably applied the principles and methods to the facts of the case,” then the normal burden on the advocate of expert evidence to show that it meets all the rule’s requirements has evaporated into thin air.

Yet, this could be all that the court required. It suggested that all expert evidence is admissible as long as it is reliable in some general sense, writing that “our adversary system provides the necessary tools for challenging reliable, albeit debatable, expert testimony” and “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence” (citing Daubert, 509 U.S. at 596).

The suggestion assumes what is to be proved—that the evidence, “shaky” or unshakeable, is “admissible.” The PCAST Report tried to give meaning to the case-specific reliability prong of Rule 702 (which simply codifies post-Daubert jurisprudence) by spelling out, for highly subjective procedures like ACE-V, what is necessary to demonstrate legally reliable application in a specific case. Perhaps the “concrete indicators” showed that PCAST’s conditions were satisfied. Perhaps they did not go that far. Perhaps the PCAST conditions are too demanding. Perhaps they are too flaccid. Judge Irizarry does not tell us what she thinks.

After Lundi and Pitts, courts should strive to fill the gap in the analysis of the application of a highly subjective procedure. They should reveal what they think of PCAST’s effort to clarify (or, more candidly, to prescribe) what is required for long-standing methods in forensic science to be admissible under Rule 702(d).

REFERENCES
  1. Executive Office of the President, President’s Council of Advisors on Science and Technology, Report to the President: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, Sept. 2016.
  2. United States v. Pitts, 16-CR-550 (DLI), 2018 WL 1116550 (E.D.N.Y. Feb. 26, 2018).
  3. Comm. on Identifying the Needs of the Forensic Sci. Cmty., Nat'l Research Council, Strengthening Forensic Science in the United States: A Path Forward (2009).
  4. PCAST, An Addendum to the PCAST Report on Forensic Science in Criminal Courts, Jan. 6, 2017.
  5. United States v. Lundi, 17-CR-388 (DLI), 2018 WL 3369665 (E.D.N.Y. July 10, 2018).
  6. William Thompson, John Black, Anil Jain & Joseph Kadane, Forensic Science Assessments: A Quality and Gap Analysis, Latent Fingerprint Examination (2017).

Thursday, July 5, 2018

A Strange Report of "Forensic Epigenetics ... in CODIS"

The “Featured Story” in today’s Forensic Magazine is “Forensic Epigenetics: How Do You Sort Out Age, Smoking in CODIS?” The obvious answer is that you don't and you can't. CODIS records contain no epigenetic data.

What Is Epigenetics?

As a Nature educational webpage explains, “[e]pigenetics involves genetic control by factors other than an individual's DNA sequence. Epigenetic changes can switch genes on or off and determine which proteins are transcribed.” 1/ One chemical mechanism for accomplishing this is DNA methylation, "a chemical process that adds a methyl group to DNA." 2/ More precisely, "methylation of DNA (not to be confused with histone methylation) is a common epigenetic signaling tool that cells use to lock genes in the 'off' position." 3/ This methylation is involved in cell differentiation and hence the formation and maintenance of different tissue types. 4/  "Given the many processes in which methylation plays a part, it is perhaps not surprising that researchers have also linked errors in methylation to a variety of devastating consequences, including several human diseases.” 5/

In forensic genetics, "DNA methylation profiling [has been proposed] for tissue determination, age prediction, and differentiation between monozygotic twins." 6/ Because this "profiling" can uncover health-related and other information as well, discussion of regulating its use by police has begun. 7/

What Is CODIS?

CODIS is “the acronym for the Combined DNA Index System and is the generic term used to describe the FBI’s program of support for criminal justice DNA databases as well as the software used to run these databases.” 8/ The DNA data, which come from twenty locations (loci) on various chromosomes, reveal nothing about methylation patterns. The information from these loci relates solely to the set of underlying DNA sequences. These particular sequences are not transcribed, are essentially identical in all tissues and all identical twins, and do not change as a person ages (except for occasional mutations).

What Is “Sort[ing] Out Age, Smoking in CODIS?”

I don't know. Forensic epigenetics or epigenomics involves neither CODIS databases, CODIS loci, nor CODIS software. Does Forensic Magazine's “Senior Science Writer” think that the databases will be expanded to include epigenetic data? That is not what the article asserts. The only attempt to bridge the two is a concluding sentence that reads, "But some studies, like a Stanford exploration last spring, show that even 13 loci can carry more information than originally believed."

That is not much of a connection, and the statement itself is a trifle misleading. Thirteen is the number of STR loci in CODIS profiles before the expansion to twenty in 2017. The description of the “Stanford exploration” referenced in the article 9/ does not show that the original understanding of the information contained in those core CODIS loci was faulty. Rather, it talks about the growth of the size of the databases and research showing that CODIS profiles “could possibly” be linked to records in medical research databases by “authorized or unauthorized analysts equipped with two datasets, one with SNP genotypes and another CODIS genotypes.” 10/

This possibility does not come as a complete surprise. CODIS profiles are meant to be individual identifiers (or nearly so). If there are genomic databases that sufficiently overlap these regions, then a CODIS profile can be used to locate the record pertaining to the same individual in those databases. The extent to which this possibility is cause for concern is worth considering, 11/ but it has nothing to do with the privacy implications of epigenetic data.
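Mechanically, the linkage is nothing more exotic than a join on a (nearly) unique key. Here is a toy sketch with fabricated STR genotypes; real linkage would have to cope with partial profiles, typing discrepancies, and the statistical inference described in the Stanford work:

# Toy illustration with fabricated genotypes; no real data, loci chosen arbitrarily.
codis_profiles = {
    # profile (genotypes at a few loci) -> identity known to police
    ("D3S1358 15,16", "vWA 17,18", "FGA 21,24"): "arrestee #1047",
}

research_records = {
    # the same genotypes attached to coded medical-research records
    ("D3S1358 15,16", "vWA 17,18", "FGA 21,24"): {"subject": "S-203", "phenotype_data": "..."},
}

for profile, identity in codis_profiles.items():
    record = research_records.get(profile)
    if record is not None:
        print(f"{identity} links to research subject {record['subject']}")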

NOTES
  1. Simmons, D. (2008) Epigenetic influence and disease. Nature Education 1(1):6
  2. Id.
  3. Theresa Phillips (2008) The role of methylation in gene expression. Nature Education 1(1):116.
  4. See, e.g., Karyn L. Sheaffer, Rinho Kim, Reina Aoki, et al. (2014) DNA methylation is required for the control of stem cell differentiation in the small intestine. Genes & Development, http://genesdev.cshlp.org/content/28/6/652.abstract; Bo Zhang, Yan Zhou, Nan Lin, et al. (2013) Functional DNA methylation differences between tissues, cell types, and across individuals discovered using the M&M algorithm. Genome Research, https://genome.cshlp.org/content/early/2013/06/26/gr.156539.113.abstract
  5. Phillips, supra note 3.
  6. Athina Vidaki & Manfred Kayser (2017) From forensic epigenetics to forensic epigenomics: broadening DNA investigative intelligence, Genome Biol. 18: 238, doi:  10.1186/s13059-017-1373-1
  7. Mahsa Shabani, Pascal Borry, Inge Smeers, & Bram Bekaert (2018) Forensic Epigenetic Age Estimation and Beyond: Ethical and Legal Considerations. Trends in Genet 34(7): 489–491
  8. FBI, Frequently Asked Questions on CODIS and NDIS, https://www.fbi.gov/services/laboratory/biometric-analysis/codis/codis-and-ndis-fact-sheet
  9. Seth Augenstein, CODIS Has More ID Information than Believed, Scientists Find, Forensic Mag., May 15, 2017, https://www.forensicmag.com/news/2017/05/codis-has-more-id-information-believed-scientists-find
  10. Id.
  11. Cf. David H. Kaye, The Genealogy Detectives: A Constitutional Analysis of “Familial Searching,” 51 Am. Crim. L. Rev. 109, 137 n. 170 (2013) (“However, there is at least one rather roundabout way in which the identification profiles could reveal substantial medical information. In the future, when the full genomes of individuals are recorded in clinical databases of medical records, a police agency possessing the profile and having surreptitious access to the database could locate the entry for the individual’s genome and any associated medical records without anyone’s knowledge. Although the STRs would be useful only for identification, that use could be the key to locating information in patient records. Furthermore, the patient’s records and full genome could lead police to the stored genomes and records of relatives. Although I cannot think of many scenarios in which police would be motivated to engage in this computer hacking and medical snooping, there may be some.”).