Wednesday, November 15, 2017

It’s a Match! But What Is That?

When it comes to identification evidence, no one seems to know precisely what a match means. The comedian John Oliver used the term to riff on CSI and other TV shows in which forensic scientists or their machines announce devastating “matches.” The President’s Council of Advisors on Science and Technology could not make up its mind. The opening pages of its 2016 report included the following sentences:

[T]esting labs lacked validated and consistently-applied procedures ... for declaring whether two [DNA] patterns matched within a given tolerance, and for determining the probability of such matches arising by chance in the population (P. 2)
Here, a “match” is a correspondence in measurements, and it is plainly not synonymous with a proposed identification. The identification would be an inference from the matching measurements, which could arise “by chance” or because the DNA samples being analyzed are from the same source.
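The arithmetic behind “the probability of such matches arising by chance” is the familiar product rule: multiply the estimated frequency of the genotype at each locus across the loci in the profile. A minimal sketch in Python (the allele frequencies are invented for illustration, not taken from any population database):

    # Random match probability for a DNA profile via the product rule.
    # The allele frequencies below are invented for illustration only.
    profile = [
        (0.12, 0.08),  # heterozygous locus: genotype frequency 2pq
        (0.20, 0.20),  # homozygous locus:   genotype frequency p^2
        (0.05, 0.31),  # heterozygous locus
    ]

    rmp = 1.0
    for p, q in profile:
        rmp *= p * p if p == q else 2 * p * q

    print(f"Random match probability: about 1 in {1 / rmp:,.0f}")

The number this produces describes coincidental matching in the population; it is not, by itself, the probability that the two samples share a source.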

By subjective methods, we mean methods including key procedures that involve significant human judgment—for example, about which features to select within a pattern or how to determine whether the features are sufficiently similar to be called a probable match. (P. 5 n.3)
Now it seems that “match” refers to “sufficiently similar” features and to an identification of a single, probable source of the traces with these similar features.

Forensic examiners should therefore report findings of a proposed identification with clarity and restraint, explaining in each case that the fact that two samples satisfy a method’s criteria for a proposed match does not mean that the samples are from the same source. For example, if the false positive rate of a method has been found to be 1 in 50, experts should not imply that the method is able to produce results at a higher accuracy. (P. 6)
Here, “proposed match” seems to be equated to “proposed identification.” (Or does “proposed match” mean that the degree of similarity a method uses to characterize the measurements as matching might not really be present in the particular case, but is merely alleged to exist?)

Later, the report argues that
Because the term “match” is likely to imply an inappropriately high probative value, a more neutral term should be used for an examiner’s belief that two samples come from the same source. We suggest the term “proposed identification” to appropriately convey the examiner’s conclusion ... . (Pp. 45-46.)
Is this a blanket recommendation to stop using the term “match” for an observed degree of similarity? It prompted the following rejoinder:
Most scientists would be comfortable with the notion of observing that two samples matched but would, rightly, refuse to take the logically unsupportable step of inferring that this observation amounts to an identification. 1/
I doubt that it is either realistic or essential to banish the word “match” from the lexicon for identification evidence. But it is essential to be clear about its meaning. As one textbook lucidly explains:
Yet another word that is the source of much confusion is 'match'. 'Match' can mean three different things:
• Two traces share some characteristic which we have defined and categorised, for example, when two fibres are both made of nylon.
• Two traces display characteristics which are on a continuous scale but fall within some arbitrarily defined distance of each other.
• Two traces have the same source, as implied in expressions such as 'probable match' or 'possible match'.
If the word 'match' must be used, it should be carefully defined. 2/
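The second of these meanings, a match window on a continuous measurement, is easy to make concrete. A minimal sketch (the tolerance is arbitrary, invented for illustration) also displays the discontinuity criticized in note 1 below: a pair of measurements just inside the window "matches," while a nearly indistinguishable pair just outside does not.

    # A "match" as a window on a continuous measurement.
    # The tolerance is arbitrary, chosen only for illustration.
    TOLERANCE = 0.05

    def declares_match(x: float, y: float) -> bool:
        """Declare a match when two measurements fall within the window."""
        return abs(x - y) <= TOLERANCE

    print(declares_match(0.300, 0.349))  # True  (difference 0.049)
    print(declares_match(0.300, 0.351))  # False (difference 0.051)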
NOTES
  1. I.W. Evett, C.E.H. Berger, J.S. Buckleton, C. Champod, & G. Jackson, Finding the Way Forward for Forensic Science in the US—A Commentary on the PCAST Report, 278 Forensic Sci. Int'l 16, 19 (2017). One might question whether “most scientists” should “be comfortable” with observing that two samples “matched” on a continuous variable (such as the medullary index of hair). Designating a range of matching and nonmatching values means that a close nonmatch is treated radically differently than an almost identical match at the edge of the matching zone. Ideally, a measure of probative value of evidence should not incorporate this discontinuity.
  2. Bernard Robertson, Charles E. H. Berger, and G. A. Vignaux, Interpreting Evidence: Evaluating Forensic Science in the Courtroom 63 (2d ed. 2016).

Saturday, November 4, 2017

Louisiana's Court of Appeals Brushes Aside PCAST Report for Fingerprints and Toolmark Evidence

A defense effort to exclude fingerprint and toolmark identification evidence failed in State v. Allen, No. 2017-0306, 2017 WL 4974768 (La. Ct. App., 1 Cir., Nov. 1, 2017). The court described the evidence in the following paragraphs:
The police obtained an arrest warrant for the defendant and a search warrant for his apartment. During the ... search ... , the police recovered a .40 caliber handgun and ammunition ... The defendant denied owning a firearm ... .
     BRPD [Baton Rouge Police Department] Corporal Darcy Taylor processed the firearm ... , lifting a fingerprint from the magazine of the gun and swabbing various areas of the gun and magazine. Amber Madere, an expert in latent print comparisons from the Louisiana State Police Crime Lab (LSPCL), examined the fingerprint evidence and found three prints sufficient to make identifications. The latent palm print from the magazine of the gun was identified as the defendant's left palm print.
     Patrick Lane, a LSPCL expert in firearms identification, examined the firearm and ammunition in this case. Lane noted that the firearm in evidence was the same caliber as the cartridge cases. ... He further test-fired the weapon and fired reference ammunition from the weapon for comparison to the ammunition in evidence. Lane determined that based on the quality and the quantity of markings that were present on the evidence cartridge case and the multiple test fires, the weapon in evidence fired the cartridge case in evidence.
Defendant moved for "a Daubert hearing ... based on a report, released by the President's Council of Advisors on Science and Technology (PCAST) three days before the motion was filed, which called into question the validity of feature-comparison models of testing forensic evidence." The court of appeals decided that there had been such a hearing. The opinion is not explicit about the timing, but it suggests that the hearing consisted of questions to the testifying criminalists immediately before they testified. In the court's words,
[B]efore the trial court's determination as to their qualification as experts and the admission of their expert testimony, Madere and Lane were thoroughly questioned as to their qualifications, and as to the reliability of the methodology they used, including the rates of false positives and error. The defendant was specifically allowed to reference the PCAST report during questioning. Thus, the trial court allowed a Daubert inquiry to take place in this case. The importance of the Daubert hearing is to allow the trial judge to verify that scientific testimony is relevant and reliable before the jury hears said testimony. Thus, the timing of the hearing is of no moment, as long as it is before the testimony is presented.
The court was correct to suggest that a "hearing" can satisfy Daubert even if it was not conducted before the trial. However, considering that the "hearing" involved only two prosecution witnesses, whether it should be considered "thorough" is not so clear.

As for the proof of scientific validity, the court pointed to a legal history of admissibility mostly predating the 2009 NRC report on Strengthening Forensic Science in the United States and the later PCAST report. It failed to consider a number of federal district court opinions questioning the type of expert testimony apparently used in the case (a categorical assertion that "the weapon ... fired the cartridge case"). Yet, it insisted that "[c]onsidering the firmly established reliability of fingerprint evidence and firearm examination analyses, the expert witness's comparison of the defendant's fingerprints, not with latent prints, but with known fingerprints, ... we find [no] error in the admission of the testimony in question." The assertion that the latent print examiner did not compare "the defendant's fingerprints [to] latent fingerprints" is puzzling. The fingerprint expert testified that "[t]he latent palm print from the magazine of the gun was identified as the defendant's left palm print." That was the challenged testimony, not some unexplained comparison of known prints to known prints.

The text of the opinion did not address the reasoning in the PCAST report. A footnote summarily -- and unconvincingly -- disposed of the report in a few sentences:
[T]he PCAST report did not wholly undermine the science of firearm analysis or fingerprint identification, nor did it actually establish unacceptable error rates for either field of expertise. In fact, the PCAST report specifically states that fingerprint analysis remains “foundationally valid” and that “whether firearms should be deemed admissible based on current evidence is a decision that belongs to the courts.”
"Did not wholly undermine the science" is faint praise indeed. The council's views about the "foundational valdidty" (and hence the admissibility under Daubert) of firearms identification via toolmarks was clear: "Because there has been only a single appropriately designed study, the current evidence falls short of the scientific criteria for foundational validity." (P. 111).

As regards fingerprints, the court's description of the report is correct but incomplete. The finding of "foundational validity" was grudging: "The studies collectively demonstrate that many examiners can, under some circumstances, produce correct answers at some level of accuracy." (P. 95). The council translated its misgivings about latent fingerprint identification into the following recommendation:
Overall, it would be appropriate to inform jurors that (1) only two properly designed studies of the accuracy of latent fingerprint analysis have been conducted and (2) these studies found false positive rates that could be as high as 1 in 306 in one study and 1 in 18 in the other study. This would appropriately inform jurors that errors occur at detectable frequencies, allowing them to weigh the probative value of the evidence. (P. 96).
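The quoted rates are not the raw error proportions observed in the studies; PCAST reported upper 95% confidence bounds on the false positive rates, so that small studies with few observed errors do not yield misleadingly rosy figures. A minimal sketch of that arithmetic, using the exact (Clopper-Pearson) bound and placeholder counts rather than the actual study data:

    from math import comb

    def binom_cdf(k: int, n: int, p: float) -> float:
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    def upper_bound_fpr(errors: int, comparisons: int,
                        confidence: float = 0.95) -> float:
        """One-sided Clopper-Pearson upper bound on an error rate,
        found by bisection on the binomial CDF."""
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if binom_cdf(errors, comparisons, mid) >= 1 - confidence:
                lo = mid  # mid is still consistent with the data
            else:
                hi = mid
        return lo

    # Placeholder counts, not the data from either study PCAST cited:
    bound = upper_bound_fpr(errors=6, comparisons=3000)
    print(f"Upper 95% bound: about 1 in {1 / bound:.0f}")

Even with zero observed errors the bound is positive, which is one reason the report warned against testimony implying that a method's error rate is zero or negligible.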
To say that the Louisiana court did not undertake a careful analysis of the PCAST report would be an understatement. Of course, courts need not accept the report's detailed criteria for establishing "validity." Neither must they defer to its particular views on how to convey the probative value of scientific evidence. But if they fail to engage with the reasoning in the report, their opinions will be superficial and unpersuasive.

Why the News Stories on the Louisiana Lawyer Dog Are Misleading

The news media and the bloggers are abuzz with stories of how Louisiana judges think that a suspect's statement "get me a lawyer dog" (or maybe "give me a lawyer, dog") is not an invocation of the right to counsel, which, under Miranda v. Arizona, requires the police to terminate a custodial interrogation. Although the case has nothing to do with forensic science or statistics, this blog often points out journalists' misrepresentations, so I'll digress from its main theme to explain how the media misrepresented the case.

Five days ago, a reason.com blog called "Hit and Run" observed that Justice Scott Crichton of the Louisiana Supreme Court wrote a concurring opinion to explain why he agreed that the court need not review the case of Warren Demesme. It seems that Demesme said to his interlocutors, "If y'all, this is how I feel, if y'all think I did it, I know that I didn't do it so why don't you just give me a lawyer dog cause this is not what's up." The police continued the interrogation. Demesme made some admissions. Now he is in jail on charges of aggravated rape and indecent behavior with a juvenile.

Reason.com writer Ed Krayewski (formerly with Fox Business and NBC) read the Justice's opinion and thought
Crichton's argument relies specifically on the ambiguity of what a "lawyer dog" might mean. And this alleged ambiguity is attributable entirely to the lack of a comma between "lawyer" and "dog" in the transcript. As such, the ambiguity is not the suspect's but the court's. And it requires willful ignorance to maintain it.
Credulous writers at Slate, the Washington Post, and other news outlets promptly amplified and embellished Krayewski's report. Slate writer Mark Joseph Stern announced that
Justice Scott Crichton ... wrote, apparently in absolute seriousness, that “the defendant’s ambiguous and equivocal reference to a ‘lawyer dog’ does not constitute an invocation of counsel that warrants termination of the interview.”
Reason’s Ed Krayewski explains that, of course, this assertion is utterly absurd. Demesme was not referring to a dog with a license to practice law, since no such dog exists outside of memes. Rather, as Krayewski writes, Demesme was plainly speaking in vernacular; his statement would be more accurately transcribed as “why don’t you just give me a lawyer, dawg.” The ambiguity rests in the court transcript, not the suspect’s actual words. Yet Crichton chose to construe Demesme’s statement as requesting Lawyer Dog, Esq., rather than interpreting his words by their plain meaning, transcript ambiguity notwithstanding.
This Slate article also urged the U.S. Supreme Court to review the case (if it were to receive a petition from the as-yet-untried defendant). The Post's Tom Jackman joined the bandwagon, arguing that
When a friend says, “I’ll hit you up later dog,” he is stating that he will call again sometime. He is not calling the person a “later dog.”
But that’s not how the courts in Louisiana see it. .... It’s not clear how many lawyer dogs there are in Louisiana, and whether any would have been available to represent the human suspect in this case ... .
Yet, the case clearly does not turn on "the lack of a comma between 'lawyer' and 'dog,'" and Justice Crichton did not maintain that Mr. Demesme's request was too ambiguous because "lawyer" was followed by "dog." Public defender Derwyn D. Bunton contended that when Mr. Demesme said, "with emotion and frustration, 'Just give me a lawyer,'" he "unequivocally and unambiguously asserted his right to counsel." (At least, this is what the Washington Post reported.) If this were all there was to the request, there would be no doubt that the police violated Miranda.

The problem for Mr. Demesme is that the "unambiguous" assertion "just give me a lawyer" did not stand alone. It was conditional. What he said was "if y'all think I did it, I know that I didn't do it so why don't you just give me a lawyer ...[?]" For Justice Crichton, the "if" was the source of the ambiguity. That ambiguity did not arise from the phrase "lawyer dog." It would have made no difference if the defendant had said "lawyer" without the "dog." Contrary to the media's howling, Justice Crichton was not taking the phrase "lawyer dog" literally. He was taking the phrase "if y'all think" literally. Here is what the judge actually wrote:
I agree with the Court’s decision to deny the defendant’s writ application and write separately to spotlight the very important constitutional issue regarding the invocation of counsel during a law enforcement interview. The defendant voluntarily agreed to be interviewed twice regarding his alleged sexual misconduct with minors. At both interviews detectives advised the defendant of his Miranda rights and the defendant stated he understood and waived those rights. ... I believe the defendant ambiguously referenced a lawyer—prefacing that statement with “if y’all, this is how I feel, if y’all think I did it, I know that I didn’t do it so why don’t you just give me a lawyer dog cause this is not what’s up.”... In my view, the defendant’s ambiguous and equivocal reference to a “lawyer dog” does not constitute an invocation of counsel that warrants termination of the interview ... .
The Justice cited a Louisiana Supreme Court case and the U.S. Supreme Court case, Davis v. United States, 512 U.S. 452 (1994). In Davis, Naval Investigative Service agents questioned a homicide suspect after reciting Miranda warnings and securing his consent to be questioned. An hour and a half into the questioning, the suspect said "[m]aybe I should talk to a lawyer." At that point, "[a]ccording to the uncontradicted testimony of one of the interviewing agents, the interview then proceeded as follows:"
[We m]ade it very clear that we're not here to violate his rights, that if he wants a lawyer, then we will stop any kind of questioning with him, that we weren't going to pursue the matter unless we have it clarified is he asking for a lawyer or is he just making a comment about a lawyer, and he said, [']No, I'm not asking for a lawyer,' and then he continued on, and said, 'No, I don't want a lawyer.'
They took a short break, after which "the agents reminded petitioner of his rights to remain silent and to counsel. The interview then continued for another hour, until petitioner said, 'I think I want a lawyer before I say anything else.' At that point, questioning ceased."

The Supreme Court held that the initial statement “[m]aybe I should talk to a lawyer,” coming after a previous waiver of the right to consult counsel and followed by the clarification that "I'm not asking for a lawyer," was too equivocal and ambiguous to require the police to terminate the interrogation immediately.

The Louisiana case obviously is different. Police did not seek any clarification of the remark about a lawyer "if y'all think I did it." From what has been reported, they continued without missing a beat. However, in the majority opinion for the Court, Justice Sandra Day O'Connor went well beyond the facts of the Davis case to write that
Of course, when a suspect makes an ambiguous or equivocal statement it will often be good police practice for the interviewing officers to clarify whether or not he actually wants an attorney. That was the procedure followed by the NIS agents in this case. Clarifying questions help protect the rights of the suspect by ensuring that he gets an attorney if he wants one, and will minimize the chance of a confession being suppressed due to subsequent judicial second-guessing as to the meaning of the suspect's statement regarding counsel. But we decline to adopt a rule requiring officers to ask clarifying questions. If the suspect's statement is not an unambiguous or unequivocal request for counsel, the officers have no obligation to stop questioning him.
The Louisiana courts -- and many others -- have taken this dictum -- repudiated by four concurring Justices -- to heart. Whether it should ever apply and whether Justice Crichton's application of it to the "if ..." statement is correct are debatable. But no responsible and knowledgeable journalist could say that the case turned on an untranscribed comma or on the difference between "lawyer" and "lawyer dog." The opinion may be wrong, but it is clearly unfair to portray it as "willful ignorance" and "utterly absurd." The majority opinion in Davis and the cases it has spawned are fair game (and the Post article pursues that quarry), but the writing about the dispositive role of the lawyer dog meme in the Louisiana case is barking up the wrong tree.

Friday, October 27, 2017

Dodging Daubert to Admit Bite Mark Evidence

At a symposium for the Advisory Committee on the Federal Rules of Evidence, Chris Fabricant juxtaposed two judicial opinions about bite-mark identification. To begin with, in Coronado v. State, 384 S.W.3d 919 (Tex. App. 2012), the Texas Court of Appeals deemed bite mark comparisons to be a “soft science” because it is “based primarily on experience or training.” It then applied a less rigorous standard of admissibility than that for a “hard science.”

The state’s expert dentist, Robert Williams, “acknowledged that there is a lack of scientific studies testing the reliability of bite marks on human skin, likely due to the fact that few people are willing to submit to such a study. However, he did point out there was one study on skin analysis conducted by Dr. Gerald Reynolds using pig skin, ‘the next best thing to human skin.’” The court did not state what the pig skin study showed, but it must have been apparent to the court that direct studies of the ability of dentists to distinguish among potential sources of bite marks were all but nonexistent.

It is not apparent that dentists have a way to include or exclude suspects as possible biters with known or well-estimated rates of accuracy. Yet, the Texas appellate court upheld the admission of the "soft science" testimony without discussing whether it was presented as hard science, as "soft science," or as nonscientific expert testimony.

A trial court in Hillsborough County, Florida, went a step further. Judge Kimberly K. Fernandez wrote that
During the evidentiary hearing, the testimony revealed that there are limited studies regarding the accuracy or error rate of bite mark identification, 3/ and there are no statistical databases regarding uniqueness or frequency in dentition. Despite these factors, the Court finds that this is a comparison-based science and that the lack of such studies or databases is not an accurate indicator of its reliability. See Coronado v. State, 384 S.W. 3d 919 (Tex. App. 2012) ("[B]ecause bite mark analysis is based partly on experience and training, the hard science methods of validation such as assessing the potential rate of error, are not always appropriate for testing its reliability.")
The footnote added that "[o]ne study in 1989 reflected that there was a 63% error rate." This is a remarkable addition. Assuming "the error rate" is a false-positive rate for a task comparable to the one in the case, it is at least relevant to the validity of bite-mark evidence. In Coronado, the Texas court found the absence of validation research not preclusive of admissibility. That was questionable enough. But in O'Connell, the court found the presence of research contradicting any claim of validity "inappropriate" to consider! That turns Daubert on its head.

Friday, October 20, 2017

"Probabilistic Genotyping," Monte Carlo Methods, and the Hydrogen Bomb

Many DNA samples found in criminal investigations contain DNA from several people. A number of computer programs seek to "deconvolute" these mixtures -- that is, to infer the several DNA profiles that are mushed together in the electrophoretic data. The better ones do so using probability theory and an estimation procedure known as a Markov Chain Monte Carlo (MCMC) method. These programs are often said to perform "probabilistic genotyping." Although both words in this name are a bit confusing, 1/ lawyers should appreciate that the inferred profiles are just possibilities, not certainties. At the same time, some may find the idea of using techniques borrowed from a gambling casino (in name at least) disturbing. Indeed, I have heard the concern that "You know, don't you, that if the program is rerun, the results can be different!"

The answer is, yes, that is the way the approximation works. Using more steps in the numerical process also could give different output, but would we expect the further computations to make much of a difference? Consider a physical system that computes the value of π. I am thinking of Buffon's Needle. In 1777, Georges-Louis Leclerc, the Count of Buffon, imagined "dropping a needle on a lined sheet of paper and determining the probability of the needle crossing one of the lines on the page." 2/ He found that the probability is directly related to π. For example, if the length of the needle and the distance between the lines are identical, one can estimate π as twice the number of drops divided by the number of hits. 3/ Repeating the needle-dropping procedure the same number of times will rarely give exactly the same answer. (Note that pooling the results for two runs of the procedure is equivalent to one run with twice as many needle drops.) For a very large number of drops, however, the approximation should be pretty good.
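A short simulation makes the point concrete: two runs of the same length yield slightly different estimates of π, and longer runs tighten the approximation. A minimal sketch, with the needle length equal to the line spacing:

    import random
    from math import sin, pi

    def estimate_pi(drops: int, seed: int) -> float:
        """Buffon's needle with needle length equal to the line spacing.
        The needle crosses a line when the distance from its center to
        the nearest line is at most (1/2)sin(theta)."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(drops):
            center = rng.uniform(0, 0.5)    # distance to the nearest line
            theta = rng.uniform(0, pi / 2)  # angle between needle and lines
            if center <= 0.5 * sin(theta):
                hits += 1
        return 2 * drops / hits             # pi is about 2 * drops / hits

    # Same number of drops, different randomness, slightly different answers:
    print(estimate_pi(100_000, seed=1))
    print(estimate_pi(100_000, seed=2))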

MCMC computations are more complicated. They simulate a random walk that samples values of a random variable so as to approximate a posterior probability distribution. (A bare-bones sketch appears after the excerpt below.) The walk could get stuck for a long time in a particular region. Nevertheless, the general approach is very well established in statistics, and Monte Carlo methods are widely used throughout the sciences. 4/ Indeed, they were integral to the development of nuclear weapons. 5/ The book Dark Sun: The Making of the Hydrogen Bomb provides the following account:
On leave from the university, resting at home during his extended recovery [from a severe brain infection], [Stanislaw] Ulam amused himself playing solitaire. Sensitivity to patterns was part of his gift. He realized that he could estimate how a game would turn out if he laid down a few trial cards and then noted what proportion of his tries were successful, rather than attempting to work out all the possible combinations in his head. "It occurred to me then," he remembers, "that this could be equally true of all processes involving branching of events." Fission with its exponential spread of reactions was a branching process; so would the propagation of thermonuclear burning be. "At each stage of the [fission] process, there are many possibilities determining the fate of the neutron. It can scatter at one angle, change its velocity, be absorbed, or produce more neutrons by a fission of the target nucleus, and so on." Instead of trying to derive the expected outcomes of these processes with complex mathematics, Ulam saw, it should be possible to follow a few thousand individual sample particles, selecting a range for each particle's fate at each step of the way by throwing in a random number, and take the outcomes as an approximate answer—a useful estimate. This iterative process was something a computer could do. ...[W]hen he told [John] von Neumann about his solitaire discovery, the Hungarian mathematician was immediately interested in what he called a "statistical approach" that was "very well suited to a digital treatment." The two friends developed the mathematics together and named the procedure the Monte Carlo method (after the famous gaming casino in Monaco) for the element of chance it incorporated. 6/
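To connect Ulam's card-laying to the "random walk" described above, here is a bare-bones Metropolis sampler for a single unknown proportion. It is a toy stand-in for mixture deconvolution, not the algorithm of any actual probabilistic genotyping program, but it displays the advertised behavior: two runs of the same length give close, yet not identical, answers.

    import random
    from math import exp, log

    def log_posterior(p: float, successes: int, trials: int) -> float:
        """Log posterior for a binomial proportion under a flat prior
        (up to an additive constant)."""
        if not 0.0 < p < 1.0:
            return float("-inf")
        return successes * log(p) + (trials - successes) * log(1.0 - p)

    def posterior_mean(successes: int, trials: int,
                       steps: int, seed: int) -> float:
        """Random-walk Metropolis estimate of the posterior mean."""
        rng = random.Random(seed)
        p, samples = 0.5, []
        for _ in range(steps):
            proposal = p + rng.gauss(0.0, 0.05)  # a small random step
            delta = (log_posterior(proposal, successes, trials)
                     - log_posterior(p, successes, trials))
            if rng.random() < exp(min(0.0, delta)):  # Metropolis rule
                p = proposal
            samples.append(p)
        kept = samples[len(samples) // 5:]  # discard early "burn-in" steps
        return sum(kept) / len(kept)

    # Rerunning the sampler gives a slightly different estimate each time:
    print(posterior_mean(7, 20, steps=20_000, seed=1))
    print(posterior_mean(7, 20, steps=20_000, seed=2))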
Even without a computer in place, Los Alamos laboratory staff, including a "bevy of young women who had been hastily recruited to grind manually on electric calculators," 7/ performed preliminary calculations examining the feasibility of igniting a thermonuclear reaction. As Ulam recalled:
We started work each day for four to six hours with slide rule, pencil and paper, making frequent quantitative guesses. ... These estimates were interspersed with stepwise calculations of the behavior of the actual motions [of particles] ... The real times for the individual computational steps were short ... and the spatial subdivisions of the material assembly very small. ... The number of individual computational steps was therefore very large. We filled page upon page with calculations, much of it done by [Cornelius] Everett. In the process he almost wore out his own slide rule. ... I do not know how many man hours were spent on this problem. 8/
NOTES
  1. In forensic DNA work, probabilities also are presented to explain the probative value of the discovery of a "deterministic" DNA profile -- one that is treated as known to a certainty. See David H. Kaye, SWGDAM Guidelines on "Probabilistic Genotyping Systems" (Part 2), Forensic Sci., Stat. & L., Oct. 25, 2015. In addition, the "genotypes" in "probabilistic genotyping" do not refer to genes.
  2. Office for Mathematics, Science, and Technology Education, College of Education, University of Illinois, Buffon's Needle: An Analysis and Simulation, https://mste.illinois.edu/activity/buffon/.
  3. Id.
  4. See, e.g., Persi Diaconis, The Markov Chain Monte Carlo Revolution, 46 Bull. Am. Math. Soc'y 179-205 (2009), https://doi.org/10.1090/S0273-0979-08-01238-X; Sanjib Sharma, Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy, arXiv:1706.01629 [astro-ph.IM], https://doi.org/10.1146/annurev-astro-082214-122339.
  5. Roger Eckhardt, Stan Ulam, John von Neumann, and the Monte Carlo Method, Los Alamos Sci., Special Issue 1987, pp. 131-41, http://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-88-9068.
  6. Richard Rhodes, Dark Sun: The Making of the Hydrogen Bomb 303-04 (1995).
  7. Id. at 423 (quoting Françoise Ulam).
  8. Id.