Wednesday, November 22, 2017

A Two-culture Problem with Forensic Science?

Recently, U.S. Court of Appeals Judge Harry T. Edwards complained that
Forensic practitioners are the people who got us in trouble in the first place. They don't know what they don't know. ... The people who are doing this do not understand what we mean when we say to them, "what you're doing has no scientific foundation." They don't understand it because they were brought up in a different world. They don't understand science. 1/
That is a harsh and sweeping generalization. Judge Edwards, who co-chaired a committee empaneled by the National Research Council to study forensic science in the United States, knows that the forensic-science community is far from monolithic. 2/

But I fear that, in part and in some instances, practitioners' frustration with the criticism -- that the validity and reliability of pattern-matching practices such as fingerprint, toolmark, footwear, handwriting, and bitemark comparisons have yet to be sufficiently demonstrated -- does reflect a lack of appreciation for what it takes to validate a process of measurement and inference in these fields.

An example is the reaction of the president of the International Association for Identification (IAI) to a draft recommendation of a subcommittee of the U.S. National Commission on Forensic Science. 3/ The document, which received the endorsement of 60% of the members of the Commission 4/ (including, I would guess, all of the nonforensic scientists on the panel), suggested that
Forensic science practitioners should not state that a specific individual or object is the source of the forensic science evidence and should make it clear that, even in circumstances involving extremely strong statistical evidence, it is possible that other individuals or objects could possess or have left a similar set of observed features. Forensic science practitioners should confine their evaluative statements to the support that the findings provide for the claim linked to the forensic evidence.
In other words, practitioners should evaluate the probability of the observed correspondence in the features of the specimens they examine if the specimens have a common origin and the probability of these observations if the specimens are from different sources. But they should not take it upon themselves to opine on the ultimate issue of who is the source.
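In likelihood-ratio terms, the two probabilities the recommendation describes can be combined to measure the strength of the evidence without ever opining on the source. A minimal sketch, with invented numbers that come from no real validation study, of why the ratio alone does not yield a source probability:

```python
# Hypothetical illustration of the likelihood-ratio framework described in
# the recommendation. All numbers are invented for the example.
p_given_same_source = 0.95   # P(observed correspondence | same source)
p_given_diff_source = 0.001  # P(observed correspondence | different sources)

likelihood_ratio = p_given_same_source / p_given_diff_source  # 950

# Converting the ratio into a probability that a specific person is the
# source requires prior odds -- a step the recommendation leaves to the
# factfinder, not the examiner. With a hypothetical 1-in-1000 prior:
prior_odds = 1 / 1000
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"LR = {likelihood_ratio:.0f}, posterior probability = {posterior_prob:.3f}")
```

The same findings that "support" the claim 950-fold still leave the source probability below one-half under this prior, which is the point of confining testimony to support rather than attribution.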

Although advocacy of this approach is hardly novel in the international forensic-science community, the president of "the foremost international organization" 5/ of practitioners was incensed at the thought that latent fingerprint examiners should cease and desist from "conclusion decisions" 6/ in court. In an interview for Forensic Magazine, he said that it was just unfair. 7/ He fulminated (or elaborated):
"Even if all the minutiae all match up, you're telling me I can't say it came from the same source?" Ruslander said. "There are millions of fingerprints in AFIS, and there's never been a bad match, to my knowledge," he added. "That's a pretty good empirical study." 8/
Anyone competent in scientific methodology would have to call this kind of study fundamentally misconceived. One could design a good experiment to ascertain the validity of an automated fingerprint matcher, but it would not consist of one person’s memory of no “bad matches” — whatever that means for a system that merely produces a list of possibly matching candidates rather than a single-source “conclusion decision.” Moreover, even if a perfectly accurate, fully automated system to make single-source attributions existed, what would its uncanny performance tell us about the validity and reliability of mere mortals who do not make their “conclusion decisions” the same way and who are known to err from time to time?
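A designed study would report counts, not recollections, and even zero observed errors would not establish a zero error rate. A sketch of the arithmetic, with hypothetical counts:

```python
# Sketch of what a designed validation study reports, in contrast to
# "there's never been a bad match, to my knowledge." Counts are hypothetical.
n = 1000  # comparisons of known different-source pairs
k = 0     # false positives observed

point_estimate = k / n  # 0.0 -- but zero observed errors is not a zero error rate

# With k = 0, an exact 95% upper confidence bound on the true false-positive
# rate is 1 - 0.05**(1/n), approximated by the "rule of three" as 3/n.
exact_upper = 1 - 0.05 ** (1 / n)
rule_of_three = 3 / n
print(point_estimate, exact_upper, rule_of_three)
```

So a flawless run of a thousand trials is consistent with a true false-positive rate as high as roughly 3 in 1,000 -- which is why "no bad matches" is an anecdote, not an error rate.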

Now there are studies that clearly demonstrate that latent print examiners can make source attributions and exclusions at rates far better than chance — in other words, there is scientifically demonstrable expertise even if the procedure is highly subjective and not particularly “scientific” at critical junctures. But remarks like those of the leader of “the world’s oldest and largest forensic science identification association” 9/ as to what is a “good empirical study” only lend credence to Judge Edwards’ complaint. They make it appear that practitioners “were brought up in a different world” and “don't understand science.” The forensic-science community can and must do better.

  1. "Our Worst Fears Have Been Realized" -- Forensic "Evidence, Science, and Reason in an Era of 'Post-truth' Politics" (Part 1), Forensic Science, Statistics and Law, Nov. 20, 2017, 
  2. His committee’s 2009 NRC Report noted that
    the “forensic science community” ... consists of a host of practitioners, including scientists (some with advanced degrees) in the fields of chemistry, biochemistry, biology, and medicine; laboratory technicians; crime scene investigators; and law enforcement officers. There are very important differences, however, between forensic laboratory work and crime scene investigations. There are also sharp distinctions between forensic practitioners who have been trained in chemistry, biochemistry, biology, and medicine (and who bring these disciplines to bear in their work) and technicians who lend support to forensic science enterprises. (P. 7)
  3. In the interest of full disclosure, I should note that I participated in drafting the "views document" that contains this recommendation and that I made suggestions to the Commission for further revisions.
  4. Transcript of Meeting 13, Part 1, Apr. 10, 2017, at 48,
  5. United States v. Herrera, 704 F.3d 480, 486 (7th Cir. 2013).
  6. H.W. “Rus” Ruslander, Feb. 5, 2017, IAI Position Statement on Conclusions, Qualified Statements, and Probability Modeling,
  7. Seth Augenstein, National Commission on Forensic Science Asks for Public Comment, Forensic Mag., Feb. 22, 2017,
  8. Augenstein, supra note 7.
  9. IAI’s Mission Statement,

Monday, November 20, 2017

"Our Worst Fears Have Been Realized" -- Forensic "Evidence, Science, and Reason in an Era of 'Post-truth' Politics" (Part 1)

On October 27, a trio of panelists spoke at the Harvard Law School on "Evidence, Science, and Reason in an Era of 'Post-truth' Politics." The organizers, law professors Scott Brewer and Dan Kahan, called the panel “stellar” -- and for good reason. The speakers were
★ Judge (and now Professor) Harry T. Edwards, who co-chaired the National Academy of Sciences’ 17-member committee on Strengthening Forensic Science in the United States: A Path Forward. The committee’s 2009 report found that “In a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem.”
★ Professor (and former Justice) Charles Fried, who was the U.S. Solicitor General during the Reagan administration, and who argued on behalf of Merrell Dow Pharmaceuticals in the landmark case of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), and
★ Professor Eric S. Lander, who is best known to the forensic-science community for his early testimony and writing on DNA evidence and, of late, for his leadership role in a 2016 report of the President’s Council of Advisors on Science and Technology. This report on “Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods” noted that "[f]ederal appellate courts have not with any consistency or clarity imposed standards ensuring the application of scientifically valid reasoning and reliable methodology in criminal cases involving Daubert questions." His day job lies in directing the Broad Institute of MIT and Harvard, which “is empowering a revolution in biomedicine to accelerate the pace at which the world conquers disease.”
The program did not deal with “Evidence, Science, and Reason” writ large. Mostly, it concerned how the legal system has handled the issue of ensuring that trace evidence, which can associate individuals or objects with crimes, produces “scientific truth.”

What follows is a summary and compilation of some of the more provocative -- and sometimes ad hominem -- statements by Professor Brewer and Judge Edwards. There are also notes on a few of their remarks. I hope to touch on the remainder of the hour-and-a-half program in later installments. The full video recording is on the web.

PROFESSOR BREWER introduced the panel and the “question of how the truth that the [best?] of science has to offer can inform and guide what is called ‘forensic science’ in such a way that when judges and jurors rely, as they clearly do, on forensic science, they are actually relying on information that legitimately and accurately claims the mantle of scientific truth ... .” He opined that “The response to both reports has been, or seems to me anyway, dispiriting.” The response to which he referred was
☁ A statement from then Attorney General Loretta Lynch that “the Department will not be adopting the recommendations [of the 2016] report related to the admissibility of forensic-science evidence”
☁ Testimony from then Senator (now Attorney General) Jeff Sessions after the 2009 report that “I don’t think we should suggest that these proven scientific principles that we’ve been using for decades are somehow uncertain ... .”
☁ “More recently, the Sessions' Justice Department appointed Ted Hunt, a former prosecutor, as was Sessions, as senior forensic advisor overseeing a forensic science working group to create guidelines for forensic examiners to follow in court testimony. Hunt was ... one of two commissioners [on the National Commission on Forensic Science] to reject the recommendation that forensic experts and attorneys working on behalf of the Justice Department stop using the phrase ... ‘to a reasonable degree of scientific certainty.’” 1/
☁ The Justice Department under Attorney General Sessions “disbanded the National Commission on Forensic Science.” (For the official explanation, see The Justice Department’s Explanation for the End of the National Commission on Forensic Science, Forensic Sci., Stat. & L., April 26, 2017.)
JUDGE EDWARDS spoke of the "two-and-one-half years" that the NAS committee spent "going through all of the research that was available," the discouraging conclusions ("a community in disarray"), and the recommendation for an "independent federal agency." As for the latter,
[O]ne of our most important recommendations was the DOJ, the Department of Justice, should not be that agency. ... They had a vested interest in prosecuting. It's inconsistent with the culture of science ... and we were all, to a person, unanimous in the view that DOJ had to be kept out of it, and boy, were we prescient.
Our worst fears have been realized. ... We got no serious help from DOJ once the report issued, and that goes through ... all the administrations that have been involved. And the only time that DOJ acted in a way that has been useful is when they've been under pressure -- for example, when they were exposed on microscopic hair examinations ... and they had to 'fess up ... Other than that, we were not getting help from DOJ. ... I was appalled we could get no support from the Department to try to advance reform movement.
The other part of our worst fears realized is DOJ is now the self-anointed leader of the forensic science reform movement, which is a disaster. ... [I]t is shrewd on their part because they want to control what is and is not done -- mostly what is not done. And they are in control right now, which is really unfortunate.
On the other hand, Judge Edwards added that
Now ... just so you get the full picture, after the [2009 NAS] report issued, it was cited throughout the world ... . The settlement of the hair cases, press reports, and the National Institute of Justice [within the DOJ] has begun to sponsor some research to try to improve some of the disciplines. ... And in 2013, ... DOJ, and (in my view) under pressure, because a lot of us including the press, had been pressing for some movement, they cosponsored with the National Institutes of Standards and Technology the creation of the National Commission on Forensic Science.
But the Commission, Judge Edwards contended, suffered precisely because it was run by the Justice Department:
Now, here's the problem. DOJ held on to it. Not NIST -- DOJ. So any recommendations coming out of this group went to DOJ, and DOJ decided whether or not the recommendations would be implemented. [ 2/ ] Most of the recommendations have not been implemented. [ 3/ ] They only met twice a year. [ 4/ ] There was no real leadership, and at one point one of my colleagues, Judge Jed Rakoff, a district court judge in New York, resigned from the Commission because DOJ was going to limit the scope of their work, and he wrote an article, an op-ed piece, in the press, and they backed down and forced DOJ to open it back up again. [ 5/ ] But they had no enforcement, and the recommendations were not being accepted by DOJ, but there was a little bit of progress because at least until they were shut down, they started to come together around some recommendations that would have advanced the forensic science project. [ 6/ ]
Next, Judge Edwards discussed the PCAST report and the reaction of the Justice Department:
It's a really strong report ... essentially saying that with respect to these pattern-matching disciplines there are serious problems -- this is not science. You have people testifying about things on the assumption that it's science, and there was no scientific basis for what they were saying.

And then you have the current Department of Justice. ... They tried to block the issuance of the White House report -- DOJ did. I know about the internal battle. ... These were world-class scientists who had studied all of these disciplines [and] had come to very serious conclusions about the frailties of these disciplines, and DOJ pressed the White House not to let this report come out. It finally did come out. DOJ said "we're not interested," and when the new administration came in, DOJ said "we're still not interested, and we're not going to apply any of the recommendations here."

The current DOJ -- and I had an opportunity to hear the new leader within the Department of Justice, Ted Hunt -- who spoke to the National Academy of Sciences a week or two ago at a meeting that I was at. This is a person who was on the National Commission on Forensic Science [and] had voted against a number of proposals that would have helped to reform the forensic science community. He blasted the PCAST report and said "they did not understand what they were talking about with respect to science." ... I really wish I could have videotaped the exchange. When he blasted PCAST -- and I'm sitting in a room with world-class scientists at the National Academy of Sciences -- and he did his critique on scientific research, and one of my colleagues couldn't stand it any longer. She said "what are you talking about?" She said, "I teach scientific methodology every Friday, every Friday at 1, and you haven't the faintest idea of what you are talking about." And it was exactly accurate. He had no sense of scientific methodology. ... He made a comment that was one of the most astonishing things I've ever heard. He said, "and incidentally, the jury is still out on bitemarks." The jury is not still out on bitemarks. Trust me, there is no science supporting bitemarks, and yet it is still a discipline that we use in the United States, and it's still being accepted by the courts. And this new person who's heading the forensic science wing at DOJ has the chutzpah to say that the jury's still out. And I said, "well if you're really serious about advancing reform, wouldn't the first thing you would want to say to the world be, bitemarks is gone as far as we're concerned?" He had no interest in an independent group overseeing the reform effort, and he may kill the National Institute of Science and Technology effort.
In sum,
The National Commission on Forensic Science is dead now because Justice has killed it. So an enterprise that might have produced recommendations that could have been helpful is no longer in existence. There is this guy in Justice, Ted Hunt, who's now called head of it all, and he has his little working group, and no one knows what they're doing, and they refuse to have an independent science group oversee it.
As for NIST and its creation, the Organization of Scientific Area Committees for Forensic Science (OSAC), Judge Edwards maintained the committees are not staffed by enough "real scientists" who actually "understand science":
NIST still exists, and they oversee Scientific Area Committees. And what they are trying to do is establish standards in the disciplines for each of these groups. But I want you to understand ... while this is a noble enterprise in some respects, it's not going to get us where we need to go. The NIST enterprise with these Area Committees is pretty much dominated by forensic practitioners. Forensic practitioners are the people who got us in trouble in the first place. They don't know what they don't know. That's the problem. ... The people who are doing this do not understand what we mean when we say to them, "what you're doing has no scientific foundation." They don't understand it because they were brought up in a different world. They don't understand science. The disciplines that they are now trying to set standards for, many of them have not been validated and they have not been shown to be reliable. So how do you set standards for a discipline that has not been shown to be valid and not shown to be reliable? That's one of the frailties of this whole NIST project. ...

These practitioners do not want to know sources of variability. They don't want to try and understand error rates. They don't want to believe that uncertainty exists. They object to blind studies that would help to confirm the reliability of their work. They're persuaded by very small sample sizes. And they fight the real scientists with whom they are working on these Area Committees. And they dominate the committees by 70% to 30%, and the real scientists on these committees with whom I've been in contact say it's a nightmare trying to struggle with them because they don't understand the issues.
His final remarks concerned courtroom testimony and judicial permissiveness:
The exaggerated testimony in court is horrible. We have people testifying "zero error rate, vanishingly small, essentially zero," and we have appellate court opinions in the federal courts adopting zero error rates as if it were a viable notion. ... [T]he federal rules are no help. ... Rule 702 ... was based on Daubert, which purports to talk about scientific validity, [but] Daubert has been ... a failure in ... the criminal arena ... In criminal cases, the notion of scientific validity that is very much a part of Daubert has not worked. It has failed. And it has failed because ... of judges who are wedded to precedent [and] believe that because we said it before, it must be right, and because these practitioners have been around for a long time, it must be right. In other words, history is the proof, and precedent controls. ... And when the experts come in, even when they have some science ... [in cases involving the compositional analysis of bullet lead] they did not know how to do ... a statistical analysis to look at variability and error rates -- they don't know anything about it. And the courts didn't know they didn't know anything about it. ...
  1. On the nature of Commissioner Hunt’s arguments against the Commission’s recommendation, see "Reasonable Scientific Certainty," the NCFS, the Law of the Courtroom, and that Pesky Passive Voice, Forensic Sci., Stat. & L., Mar. 1, 2016; Is "Reasonable Scientific Certainty" Unreasonable?, Forensic Sci., Stat. & L., Feb. 26, 2016.
  2. It is not obvious that the division of authority between NIST and DOJ was unreasonable or nefarious. I think Judge Edwards' criticism boils down to frustration with the absence of a centralized, scientific agency that regulates forensic science in America. Neither NIST nor DOJ has the power to make mandatory rules for all forensic scientists. As discussed in a posting of April 12, 2017 (Two Misconceptions About the End of the National Commission on Forensic Science), NCFS provided advice for specific actions by the Attorney General and promulgated more general views for the benefit of what DOJ and NIST call "stakeholders." NIST officials co-chaired and vice-chaired the Commission, and, with DOJ funding, NIST established a complementary structure -- the Organization of Scientific Area Committees on Forensic Science (OSAC) -- to develop science-based, voluntary standards. The stated aim of OSAC is "to identify and promote technically sound, consensus-based, fit-for-purpose documentary standards that are based on sound scientific principles." How well OSAC has met this goal is a distinct question. The Commission recommended the creation of an independent body to ensure "technical merit" of the forensic-science standards that OSAC deems meritorious. OSAC has three "resource committees," but none of them is tasked with reviewing and reporting on the technical merit of standards that the committees propose for addition to a registry of OSAC-approved standards. 
  3. The Commission made recommendations for actions by the Department of Justice regarding accreditation of forensic-science service providers, proficiency testing, public release of quality-management-systems documents, a code of professional responsibility for forensic providers, AFIS interoperability, root-cause analysis of errors, certification of medicolegal death examiners, accreditation of medical examiner and coroner offices, electronic networking of those offices, a national disaster call center, a national office for medicolegal death investigation, model legislation for medicolegal death investigations, use of the term "reasonable scientific certainty," pretrial discovery, documentation and reporting, and forensic-science curriculum development. It also promulgated "views documents" that did not call on the Attorney General to take specific actions. See Work Products Adopted by the Commission, Nov. 6, 2017,
         Computing a percentage for the adoption of the recommendation would not be trivial, as some were adopted in part and rejected in other parts. For example, the code of professional responsibility that the Department adopted omitted or altered the following provisions:
    ▹ Utilize scientifically validated methods and new technologies, while guarding against the use of unproven methods in casework and the misapplication of generally-accepted standards. [Rather than commit to using scientifically validated methods, DOJ enjoins forensic-science professionals to "[c]onduct research and forensic casework using the scientific method or agency best practices. Where validation tools are not known to exist or cannot be obtained, conduct internal or inter-laboratory validation tests in accordance with the quality management system in place."]
    ▹ Conduct independent, impartial, and objective examinations that are fair, unbiased, and fit-for-purpose. [DOJ's version omits "objective" and reads "Conduct examinations that are fair, unbiased, and fit-for-purpose."]
    ▹ Once a report is issued and the adjudicative process has commenced, communicate fully when requested with the parties through their investigators, attorneys, and experts, except when instructed that a legal privilege, protective order or law prevents disclosure. [DOJ's version requires "[h]onest[] communication ... when permitted by ... agency practice."]
    ▹ Appropriately inform affected recipients (either directly or through proper management channels) of all nonconformities or breaches of law or professional standards that adversely affect a previously issued report or testimony and make reasonable efforts to inform all relevant stakeholders, including affected professional and legal parties, victim(s) and defendant(s). [DOJ preferred that its laboratory professionals have a much more limited duty to "[i]nform the prosecutors involved through proper laboratory management channels of material nonconformities or breaches of law or professional standards that adversely affect a previously issued report or testimony."]
  4. The Commission did not meet only twice a year. It met four times in each of 2014 and 2015, three times in 2016, and twice in the first quarter of 2017, after which its charter was not renewed -- thirteen meetings in about three and a quarter years, an average of four a year.
  5. Judge Rakoff did not write an op-ed article -- at least not one that I can find on the web. His letter of resignation appeared in the Washington Post, and newly appointed Deputy Attorney General Sally Yates flew to New York, talked with Judge Rakoff, and rescinded the decision that pretrial discovery rules were outside the Commission's mandate. The Commission made its recommendations for more complete pretrial discovery in criminal cases involving forensic-science evidence, and DOJ implemented them. See "A Bump in the Road" for the National Commission on Forensic Science, Jan. 29, 2015, Forensic Sci., Stat. & L.; Justice Department Reverses Decision on the Mandate of the National Commission on Forensic Science, Jan. 31, 2015; Joseph Ax, After Quitting in Protest, Prominent U.S. Judge Rejoins DOJ Commission, Reuters, Jan. 30, 2015,
  6. Sadly, the Commission was unable to attain a two-thirds majority on a subcommittee's "Final Draft Views on Report and Case Record Contents" (59% voted in favor) and "Final Draft Views on Statistical Statements in Forensic Testimony" (60% voted in favor). Whether more meetings would have produced consensus (in the sense of the required two-thirds vote) on these important matters is unclear.

Saturday, November 18, 2017

A New Breed of Peer Reviewer

Dr. Olivia Doll has ample academic credentials on her C.V. She holds a doctorate in Canine Studies from the Subiaco College of Veterinary Science (Dissertation: Canine Responses to Avian Proximity); a master’s degree in Early Canine Studies from Shenton Park Institute for Canine Refuge Studies; and a bachelor’s degree from the Staffordshire College of Territorial Science. Her research includes such topics as the relationships between Doberman Pinschers and Staffordshire Terriers in domestic environments, the role of domestic canines in promoting optimal mental health in ageing males, the impact of skateboards on canine ambulation, and the benefits of abdominal massage for medium-sized canines.

However, the Shenton Park “Institute” is the animal shelter in which she lived as a pup, and there is no College of Veterinary Science in Subiaco. Her expertise in canine studies comes strictly from her skill, experience, and training (not to mention her instincts) as a five-year-old Staffordshire Terrier, AKA Ollie.

Nevertheless, this tongue-in-jowl record was sufficient to induce the following flaky academic journals to add the pit bull from Perth to their editorial boards:
■ *EC Pulmonary and Respiratory Medicine, Echronicon Open Access
■ Journal of Community Medicine and Public Health Care, Herald Scholarly Open Access
■ Journal of Tobacco Stimulated Diseases, Peertechz
■ Journal of Alcohol and Drug Abuse, SMGroup
■ Alzheimer’s and Parkinsonism: Research and Therapy, SMGroup
■ *Journal of Psychiatry and Mental Disorders, Austin Publishing Group
■ *Global Journal of Addiction and Rehabilitation Medicine (Associate Editor), Juniper Publishers
■ Austin Addiction Sciences, Austin Publishing Group
Ollie's exploits received international coverage in newspapers and in Science in May. Yet, Dr. Doll remains on the websites of the journals marked with an asterisk (accessed November 18, 2017). I do not know if she is still reviewing papers for the Journal of Community Medicine and Public Health Care, for that journal does not list names of editors or reviewers.


Friday, November 17, 2017

Comedic License

The other day, I mentioned a satire by the comedian John Oliver on developments in forensic science. A few minutes of the video were played for an international audience at Harvard Law School on October 27 in the introduction to a panel discussion entitled "Evidence, Science, and Reason in an Era of 'Post-truth' Politics." The clip included remarks from Santae Tribble and his lawyer about hair evidence in the highly (and deservedly) publicized exoneration of Mr. Tribble. 1/ I got the impression from the Oliver video that an FBI criminalist claimed that there was but a 1 in 10 million chance that the hair could have come from anyone else. In Oliver's recounting of the case,
Take Santae Tribble, who was convicted of murder, and served 26 years in large part thanks to an FBI analyst who testified that his hair matched hair left at the scene, and he will tell you the evidence was presented as being rock solid. "They said they matched my hair in all microscopical characteristics. And that's the way they presented it to the jury, and the jury took it for granted that that was my hair."

You know, I can see why they did. Who other than an FBI expert would possibly know that much about hair? ... Jurors in Tribble's case were actually told that there was one chance in ten million that it could be someone else's hair.
I have not seen the trial transcript, but a Washington Post reporter, who presumably has, provided this account:
In Tribble’s case, the FBI agent testified at trial that the hair from the stocking matched Tribble’s “in all microscopic characteristics.” In closing arguments, federal prosecutor David Stanley went further: “There is one chance, perhaps for all we know, in 10 million that it could [be] someone else’s hair.” 2/
If the Post is correct, it was not "an FBI expert" who "actually told" jurors that the probability the hair was not Tribble's was 1/10,000,000. It was an aggressive prosecutor who made up a number that he imagined "perhaps ... could" be true. Like the similar figure of 1/12,000,000 in the notorious case of People v. Collins, 438 P.2d 33 (Cal. 1968), this number plainly was objectionable.
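The objection is not only that the figure was invented. Even a genuine random-match probability cannot simply be transposed into "the chance it is someone else's hair." A toy computation of the classic transposition fallacy, with hypothetical numbers that come from nowhere in the Tribble record:

```python
# Toy illustration of the transposition ("prosecutor's") fallacy.
# All numbers are hypothetical; none comes from the Tribble record.
p_match_if_not_source = 1e-7  # a "1 in 10 million" random-match probability
pool = 5_000_000              # hypothetical pool of alternative sources

# Expected number of coincidental matchers in the pool:
expected_matches = p_match_if_not_source * pool  # about 0.5

# Under a uniform prior over the defendant and everyone in the pool, the
# probability that the matching defendant is the source works out to
# 1 / (1 + expected coincidental matchers) -- not 1 minus the match
# probability, as the transposed argument would have it.
posterior_source_prob = 1 / (1 + expected_matches)
print(expected_matches, posterior_source_prob)
```

On these made-up inputs, a 1-in-10-million match probability yields a source probability of about two-thirds, not 9,999,999 in 10,000,000 -- which is why the prosecutor's leap in Collins-style arguments is objectionable even when the underlying number is real.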

In distinguishing between the criminalist's testimony and the lawyer's argument, I do not mean to be too critical of Oliver's condensed version and passive voice. Sometimes a little oversimplification saves a lot of pedantic explanation. The sad fact is that, whichever participants in the trial were responsible for the 1/10,000,000 figure, Tribble's case exemplifies Oliver's earlier observation that "[t]he problem is, not all forensic science is as reliable as we have become accustomed to believing. ... It's not that all forensic science is bad, because it's not. But too often, its reliability is dangerously overstated ... ." 3/

  1. See The FBI's Worst Hair Days, July 31, 2014, Forensic Sci., Stat. & L.,
  2. Spencer S. Hsu, Santae Tribble Cleared in 1978 Murder Based on DNA Hair Test, Wash. Post, Dec. 14, 2012, (emphasis added).
  3. Cf. David H. Kaye, Ultracrepidarianism in Forensic Science: The Hair Evidence Debacle, 72 Wash. & Lee L. Rev. Online 227 (2015),

After composing these remarks, I located the 2012 motion to vacate Tribble's conviction. (Motion to Vacate Conviction and Dismiss Indictment With Prejudice on the Grounds of Actual Innocence Under the Innocence Protection Act, United States v. Tribble, Crim. No. F-4160-78 (D.C. Super. Ct. Jan. 18, 2012), available at ) This document contains more extended excerpts from the trial transcript. These indicate that although the FBI agent, James Hilverda, did not generate a nonsource probability of 1/10,000,000, he did insist that it was at least "likely" that Tribble was the source, and he indicated that the probability of a coincidental match could be on the order of one in a thousand.
Q. Is it possible that two individuals could have hairs with the same characteristics?
A. It is possible, but my personal experience, I rarely have seen it. Only on very rare occasions have I seen hairs of two individuals that show the same characteristics. Usually when we find hairs that we cannot distinguish from two individuals, it is due to the fact that the hairs are either so dark that you cannot see sufficient characteristics, or they are too light that they do not have very many characteristic[s] to examine. So you cannot make a real good analysis in that manner. But there is a potential even with sufficient characteristics to once in a while to see hairs that you cannot distinguish.
Q. And is it because of that small possibility that two hairs can be from different people that you never give absolutely positive opinion that two hairs, in fact, did come from the same person.
A. That is correct.
Q. If you compare them and they microscopically match in all characteristics, could you say they could have come from the same person?
A. That's correct.
Q. Have you ever seen two hairs from two different people that microscopically match?
A. Well, it is possible in the thousands of hair examples I have done, it is very, very rare to find hairs from two different people that exhibit the same microscopic characteristics. It is possible, but as I said, very rare.
A. ... I found that these hairs — the hairs that I removed from the stocking matched in all microscopic characteristics with the head hair samples submitted to me from Santae Tribble. Therefore, I would say that this hair could have originated from Santae Tribble.
Q. Is there any microscopic characteristics [sic] of the known hair that did not match?
A. No. The hair that I aligned with Santae Tribble matched in all microscopic characteristics, all characteristics were the same.
Q. And were there a large number of characteristics which were the same or—
A. All the characteristics were the same and there was a sufficient number of characteristics to allow me to do my examinations.
A. ... I think you can identify individuals, say, as to race, you can an [sic] indication that it came from an individual because it matches in all characteristics and I would say that when I have-in my experience that I feel when I have made this type of examination, it is likely that it came from the individual which I depicted it as coming from.
Mrs. McCormick said at least one of the robbers, the only one that she saw, was wearing a stocking mask. A block from where that homicide occurred, more or less, through the alley and around the corner on a fresh trail found by a trained canine dog, was found what, a stocking mask. In that stocking mask was found what, a hair.
Now whose hair was that? Well it was compared by the FBI laboratory and they can say for one thing, it wasn't Cleveland Wright's hair. They can say that that hair matched in every microscopic characteristic, the hair of Santae Tribble. Now for scientific reasons, which were explained, they cannot say positively that that is Santae Tribble's hair because on rare occasions they have seen hairs from two different people that matched. But usually that is the case where there have been few characteristics present. Now Agent Hilverda told you that on these hairs, the known hairs of Santae Tribble and the hair found in the stocking, there were plenty of characteristics present, not that situation at all. But because the FBI is cautious, he cannot positively say that that is Santae Tribble's hair.
Well what did Special Agent Hilverda say, microscopic analysis. Throwing the large terms away, what did he say? "Could be." He looked at one hair, he looked at Santae Tribble's hair and said, could be.
[T]he dog found a stocking with a hair which not only could be Santae Tribble's, it exactly matches Santae Tribble's in every microscopic characteristic. And Mr. Potenza says throw away all the scientific terms and reduce it to "could be." Scientific terms are important. He told you how he compares hairs with powerful microscopes. It is not just the color or wave. He could reject Cleveland Wright's hair immediately; it wasn't Cleveland Wright's hair in that stocking. But he couldn't reject Santae Tribble's because it was exactly the same. And the only reason he said could be is because there is one chance, perhaps for all we know, in ten million that it could [be] someone else's hair. But what kind of coincidence is that?
... The hair is a great deal more than "could be." And if you listened to Agent Hilverda and what he really said, the hair exactly matched Santae Tribble's hair, found in the stocking.

Wednesday, November 15, 2017

It’s a Match! But What Is That?

When it comes to identification evidence, no one seems to know precisely what a match means. The comedian John Oliver used the term to riff on CSI and other TV shows in which forensic scientists or their machines announce devastating “matches.” The President’s Council of Advisors on Science and Technology could not make up its mind. The opening pages of its 2016 report included the following sentences:

[T]esting labs lacked validated and consistently-applied procedures ... for declaring whether two [DNA] patterns matched within a given tolerance, and for determining the probability of such matches arising by chance in the population. (P. 2)
Here, a “match” is a correspondence in measurements, and it is plainly not synonymous with a proposed identification. The identification would be an inference from the matching measurements that could arise "by chance" or because the DNA samples being analyzed are from the same source.

By subjective methods, we mean methods including key procedures that involve significant human judgment—for example, about which features to select within a pattern or how to determine whether the features are sufficiently similar to be called a probable match. (P. 5 n.3)
Now it seems that “match” refers to “sufficiently similar” features and to an identification of a single, probable source of the traces with these similar features.

Forensic examiners should therefore report findings of a proposed identification with clarity and restraint, explaining in each case that the fact that two samples satisfy a method’s criteria for a proposed match does not mean that the samples are from the same source. For example, if the false positive rate of a method has been found to be 1 in 50, experts should not imply that the method is able to produce results at a higher accuracy. (P. 6)
Here, “proposed match” seems to be equated to “proposed identification.” (Or does “proposed match” mean that the degree of similarity a method uses to characterize the measurements as matching might not really be present in the particular case, but is merely alleged to exist?)
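The arithmetic behind PCAST's caution is worth spelling out. The sketch below (a minimal illustration; the 1-in-50 false positive rate is PCAST's hypothetical figure, and the assumption of perfect sensitivity and even prior odds are mine) shows why a method's false positive rate caps the probative force of a reported match:

```python
# Sketch of why a false positive rate caps the strength of a reported "match."
# The 1-in-50 rate is PCAST's hypothetical; perfect sensitivity and even
# prior odds are illustrative assumptions, not facts about any method.

def likelihood_ratio(sensitivity, false_positive_rate):
    """LR = P(match | same source) / P(match | different source)."""
    return sensitivity / false_positive_rate

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * lr

# Even a perfectly sensitive method cannot support an LR above 50
# if its false positive rate is 1 in 50.
lr = likelihood_ratio(sensitivity=1.0, false_positive_rate=1 / 50)
print(lr)  # 50.0

# Starting from even (1:1) prior odds, a declared match raises the odds
# to 50:1 -- a source probability of 50/51, about 0.98, not certainty.
odds = posterior_odds(1.0, lr)
print(odds / (1 + odds))
```

On these assumptions, the expert who implies near-certainty from a match is overstating what a method with a 1-in-50 false positive rate can deliver, which is the point of the quoted recommendation.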

Later, the report argues that
Because the term “match” is likely to imply an inappropriately high probative value, a more neutral term should be used for an examiner’s belief that two samples come from the same source. We suggest the term “proposed identification” to appropriately convey the examiner’s conclusion ... . (Pp. 45-46.)
Is this a blanket recommendation to stop using the term “match” for an observed degree of similarity? It prompted the following rejoinder:
Most scientists would be comfortable with the notion of observing that two samples matched but would, rightly, refuse to take the logically unsupportable step of inferring that this observation amounts to an identification. 1/
I doubt that it is either realistic or essential to banish the word “match” from the lexicon for identification evidence. But it is essential to be clear about its meaning. As one textbook on interpreting forensic-science evidence cautions:
Yet another word that is the source of much confusion is 'match'. 'Match' can mean three different things:
• Two traces share some characteristic which we have defined and categorised, for example, when two fibres are both made of nylon.
• Two traces display characteristics which are on a continuous scale but fall within some arbitrarily defined distance of each other.
• Two traces have the same source, as implied in expressions such as 'probable match' or 'possible match'.
If the word 'match' must be used, it should be carefully defined. 2/
  1. I.W. Evett, C.E.H. Berger, J.S. Buckleton, C. Champod, & G. Jackson, Finding the Way Forward for Forensic Science in the US—A Commentary on the PCAST Report, 278 Forensic Sci. Int'l 16, 19 (2017). One might question whether “most scientists” should “be comfortable” with observing that two samples “matched” on a continuous variable (such as the medullary index of hair). Designating a range of matching and nonmatching values means that a close nonmatch is treated radically differently than an almost identical match at the edge of the matching zone. Ideally, a measure of probative value of evidence should not incorporate this discontinuity.
  2. Bernard Robertson, Charles E. H. Berger, and G. A. Vignaux, Interpreting Evidence: Evaluating Forensic Science in the Courtroom 63 (2d ed. 2016).
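The discontinuity described in note 1 can be made concrete with a small numerical sketch. The match window and measurement values below are invented for illustration and do not come from any actual protocol:

```python
# Illustration of the "match window" discontinuity for a continuous
# measurement. A categorical rule treats a value just inside the window
# completely differently from one just outside it, although the two
# observations are nearly identical. All numbers here are invented.

TOLERANCE = 0.05  # arbitrary match window on a continuous measurement

def declares_match(trace_value, reference_value, tol=TOLERANCE):
    """Categorical rule: 'match' iff the values fall within the window."""
    return abs(trace_value - reference_value) <= tol

reference = 0.30   # e.g., a medullary-index-like measurement
edge_match = 0.349  # just inside the window
near_miss = 0.351   # just outside the window

print(declares_match(edge_match, reference))  # True
print(declares_match(near_miss, reference))   # False
# The measurements differ by only 0.002, yet one is reported as a "match"
# and the other as a nonmatch -- the discontinuity the footnote criticizes.
```

A measure of probative value that varied smoothly with the degree of similarity (such as a likelihood ratio over the measured difference) would avoid this all-or-nothing jump at the window's edge.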

Saturday, November 4, 2017

Louisiana's Court of Appeals Brushes Aside PCAST Report for Fingerprints and Toolmark Evidence

A defense effort to exclude fingerprint and toolmark identification evidence failed in State v. Allen, No. 2017-0306, 2017 WL 4974768 (La. Ct. App., 1 Cir., Nov. 1, 2017). The court described the evidence in the following paragraphs:
The police obtained an arrest warrant for the defendant and a search warrant for his apartment. During the ... search ... , the police recovered a .40 caliber handgun and ammunition ... The defendant denied owning a firearm ... .
     BRPD [Baton Rouge Police Department] Corporal Darcy Taylor processed the firearm ... , lifting a fingerprint from the magazine of the gun and swabbing various areas of the gun and magazine. Amber Madere, an expert in latent print comparisons from the Louisiana State Police Crime Lab (LSPCL), examined the fingerprint evidence and found three prints sufficient to make identifications. The latent palm print from the magazine of the gun was identified as the defendant's left palm print.
     Patrick Lane, a LSPCL expert in firearms identification, examined the firearm and ammunition in this case. Lane noted that the firearm in evidence was the same caliber as the cartridge cases. ... He further test-fired the weapon and fired reference ammunition from the weapon for comparison to the ammunition in evidence. Lane determined that based on the quality and the quantity of markings that were present on the evidence cartridge case and the multiple test fires, the weapon in evidence fired the cartridge case in evidence.
Defendant moved for "a Daubert hearing ... based on a report, released by the President's Council of Advisors on Science and Technology (PCAST) three days before the motion was filed, which called into question the validity of feature-comparison models of testing forensic evidence." The court of appeals decided that there had been such a hearing. The opinion is not explicit about the timing of the hearing. It suggests that it consisted of questions to the testifying criminalists immediately before they testified. In the court's words,
[B]efore the trial court's determination as to their qualification as experts and the admission of their expert testimony, Madere and Lane were thoroughly questioned as to their qualifications, and as to the reliability of the methodology they used, including the rates of false positives and error. The defendant was specifically allowed to reference the PCAST report during questioning. Thus, the trial court allowed a Daubert inquiry to take place in this case. The importance of the Daubert hearing is to allow the trial judge to verify that scientific testimony is relevant and reliable before the jury hears said testimony. Thus, the timing of the hearing is of no moment, as long as it is before the testimony is presented.
The court was correct to suggest that a "hearing" can satisfy Daubert even if it was not conducted before the trial. However, considering that the "hearing" involved only two prosecution witnesses, whether it should be considered "thorough" is not so clear.

As for the proof of scientific validity, the court pointed to a legal history of admissibility mostly predating the 2009 NRC report on Strengthening Forensic Science in the United States and the later PCAST report. It failed to consider a number of federal district court opinions questioning the type of expert testimony apparently used in the case (that it is certain that "the weapon ... fired the cartridge case"). Yet, it insisted that "[c]onsidering the firmly established reliability of fingerprint evidence and firearm examination analyses, the expert witness's comparison of the defendant's fingerprints, not with latent prints, but with known fingerprints, ... we find [no] error in the admission of the testimony in question." The assertion that the latent print examiner did not compare "the defendant's fingerprints [to] latent fingerprints" is puzzling. The fingerprint expert testified that "[t]he latent palm print from the magazine of the gun was identified as the defendant's left palm print." That was the challenged testimony, not some unexplained comparison of known prints to known prints.

The text of the opinion did not address the reasoning in the PCAST report. A footnote summarily -- and unconvincingly -- disposed of it in a few sentences:
[T]he PCAST report did not wholly undermine the science of firearm analysis or fingerprint identification, nor did it actually establish unacceptable error rates for either field of expertise. In fact, the PCAST report specifically states that fingerprint analysis remains “foundationally valid” and that “whether firearms should be deemed admissible based on current evidence is a decision that belongs to the courts.”
"Did not wholly undermine the science" is faint praise indeed. The council's views about the "foundational validity" (and hence the admissibility under Daubert) of firearms identification via toolmarks was clear: "Because there has been only a single appropriately designed study, the current evidence falls short of the scientific criteria for foundational validity." (P. 111).

As regards fingerprints, the court's description of the report is correct but incomplete. The finding of "foundational validity" was grudging: "The studies collectively demonstrate that many examiners can, under some circumstances, produce correct answers at some level of accuracy." (P. 95). The council translated its misgivings about latent fingerprint identification into the following recommendation:
Overall, it would be appropriate to inform jurors that (1) only two properly designed studies of the accuracy of latent fingerprint analysis have been conducted and (2) these studies found false positive rates that could be as high as 1 in 306 in one study and 1 in 18 in the other study. This would appropriately inform jurors that errors occur at detectable frequencies, allowing them to weigh the probative value of the evidence. (P. 96).
To say that the Louisiana court did not undertake a careful analysis of the PCAST report would be an understatement. Of course, courts need not accept the report's detailed criteria for establishing "validity." Neither must they defer to its particular views on how to convey the probative value of scientific evidence. But if they fail to engage with the reasoning in the report, their opinions will be superficial and unpersuasive.

Why the News Stories on the Louisiana Lawyer Dog Are Misleading

The news media and the bloggers are abuzz with stories of how Louisiana judges think that a suspect's statement to "get me a lawyer dog" (or maybe "give me a lawyer, dog") is not an invocation of the right to counsel, which, under Miranda v. Arizona, requires the police to terminate a custodial interrogation. Although the case has nothing to do with forensic science or statistics, this blog often points out journalists' misrepresentations, so I'll digress from its main theme to explain how the media have misrepresented the case.

Five days ago, a blog called "Hit and Run" observed that Justice Scott Crichton of the Louisiana Supreme Court wrote a concurring opinion to explain why he agreed that the court need not review the case of Warren Demesme. It seems that Demesme said to his interlocutors, "If y'all, this is how I feel, if y'all think I did it, I know that I didn't do it so why don't you just give me a lawyer dog cause this is not what's up." The police continued the interrogation. Demesme made some admissions. Now he is in jail on charges of aggravated rape and indecent behavior with a juvenile. Hit and Run writer Ed Krayewski (formerly with Fox Business and NBC) read the Justice's opinion and thought
Crichton's argument relies specifically on the ambiguity of what a "lawyer dog" might mean. And this alleged ambiguity is attributable entirely to the lack of a comma between "lawyer" and "dog" in the transcript. As such, the ambiguity is not the suspect's but the court's. And it requires willful ignorance to maintain it.
Credulous writers at Slate, the Washington Post, and other news outlets promptly amplified and embellished Krayewski's report. Slate writer Mark Joseph Stern announced that
Justice Scott Crichton ... wrote, apparently in absolute seriousness, that “the defendant’s ambiguous and equivocal reference to a ‘lawyer dog’ does not constitute an invocation of counsel that warrants termination of the interview.”
Reason’s Ed Krayewski explains that, of course, this assertion is utterly absurd. Demesme was not referring to a dog with a license to practice law, since no such dog exists outside of memes. Rather, as Krayewski writes, Demesme was plainly speaking in vernacular; his statement would be more accurately transcribed as “why don’t you just give me a lawyer, dawg.” The ambiguity rests in the court transcript, not the suspect’s actual words. Yet Crichton chose to construe Demesme’s statement as requesting Lawyer Dog, Esq., rather than interpreting his words by their plain meaning, transcript ambiguity notwithstanding.
This Slate article also urged the U.S. Supreme Court to review the case (if it were to receive a petition from the as-yet-untried defendant). The Post's Tom Jackman joined the bandwagon, arguing that
When a friend says, “I’ll hit you up later dog,” he is stating that he will call again sometime. He is not calling the person a “later dog.”
But that’s not how the courts in Louisiana see it. .... It’s not clear how many lawyer dogs there are in Louisiana, and whether any would have been available to represent the human suspect in this case ... .
Yet, the case clearly does not turn on "the lack of a comma between 'lawyer' and 'dog,'" and Justice Crichton did not maintain that Mr. Demesme's request was too ambiguous because "lawyer" was followed by "dog." Public defender Derwyn D. Bunton contended that when Mr. Demesme said, "with emotion and frustration, 'Just give me a lawyer,'" he "unequivocally and unambiguously asserted his right to counsel." (At least, this is what the Washington Post reported.) If this were all there was to the request, there would be no doubt that the police violated Miranda.

The problem for Mr. Demesme is that the "unambiguous" assertion "just give me a lawyer" did not stand alone. It was conditional. What he said was "if y'all think I did it, I know that I didn't do it so why don't you just give me a lawyer ...[?]" For Justice Crichton, the "if" was the source of the ambiguity. That ambiguity did not arise from the phrase "lawyer dog." It would have made no difference if the defendant had said "lawyer" without the "dog." Contrary to the media howling, Justice Crichton was not taking the phrase "lawyer dog" literally. He was taking the phrase "if y'all think" literally. Here is what the judge actually wrote:
I agree with the Court’s decision to deny the defendant’s writ application and write separately to spotlight the very important constitutional issue regarding the invocation of counsel during a law enforcement interview. The defendant voluntarily agreed to be interviewed twice regarding his alleged sexual misconduct with minors. At both interviews detectives advised the defendant of his Miranda rights and the defendant stated he understood and waived those rights. ... I believe the defendant ambiguously referenced a lawyer—prefacing that statement with “if y’all, this is how I feel, if y’all think I did it, I know that I didn’t do it so why don’t you just give me a lawyer dog cause this is not what’s up.”... In my view, the defendant’s ambiguous and equivocal reference to a “lawyer dog” does not constitute an invocation of counsel that warrants termination of the interview ... .
The Justice cited a Louisiana Supreme Court case and the U.S. Supreme Court case, Davis v. United States, 512 U.S. 452 (1994). In Davis, Naval Investigative Service agents questioned a homicide suspect after reciting Miranda warnings and securing his consent to be questioned. An hour and a half into the questioning, the suspect said "[m]aybe I should talk to a lawyer." At that point, "[a]ccording to the uncontradicted testimony of one of the interviewing agents, the interview then proceeded as follows:"
[We m]ade it very clear that we're not here to violate his rights, that if he wants a lawyer, then we will stop any kind of questioning with him, that we weren't going to pursue the matter unless we have it clarified is he asking for a lawyer or is he just making a comment about a lawyer, and he said, [']No, I'm not asking for a lawyer,' and then he continued on, and said, 'No, I don't want a lawyer.'
They took a short break, after which "the agents reminded petitioner of his rights to remain silent and to counsel. The interview then continued for another hour, until petitioner said, 'I think I want a lawyer before I say anything else.' At that point, questioning ceased."

The Supreme Court held that the initial statement “[m]aybe I should talk to a lawyer,” coming after a previous waiver of the right to consult counsel and followed by the clarification that "I'm not asking for a lawyer," could be deemed too equivocal and ambiguous to require the police to terminate the interrogation immediately.

The Louisiana case obviously is different. Police did not seek any clarification of the remark about a lawyer "if y'all think I did it." From what has been reported, they continued without missing a beat. However, in the majority opinion for the Court, Justice Sandra Day O'Connor went well beyond the facts of the Davis case to write that
Of course, when a suspect makes an ambiguous or equivocal statement it will often be good police practice for the interviewing officers to clarify whether or not he actually wants an attorney. That was the procedure followed by the NIS agents in this case. Clarifying questions help protect the rights of the suspect by ensuring that he gets an attorney if he wants one, and will minimize the chance of a confession being suppressed due to subsequent judicial second-guessing as to the meaning of the suspect's statement regarding counsel. But we decline to adopt a rule requiring officers to ask clarifying questions. If the suspect's statement is not an unambiguous or unequivocal request for counsel, the officers have no obligation to stop questioning him.
The Louisiana courts -- and many others -- have taken this dictum -- repudiated by four concurring Justices -- to heart. Whether it should ever apply and whether Justice Crichton's application of it to the "if ..." statement is correct are debatable. But no responsible and knowledgeable journalist could say that the case turned on an untranscribed comma or on the difference between "lawyer" and "lawyer dog." The opinion may be wrong, but it is clearly unfair to portray it as "willful ignorance" and "utterly absurd." The majority opinion in Davis and the cases it has spawned are fair game (and the Post article pursues that quarry), but the writing about the dispositive role of the lawyer dog meme in the Louisiana case is barking up the wrong tree.