Saturday, September 24, 2016

The PCAST Report and Argumentum Ad Hominem

I suppose it was inevitable. If you don't like the message, just dismiss the messengers as ill-informed or biased. Toss in a few simplistic generalities, and some praise for the people who agree with you, and, voilà, you can avoid analyzing the substantive basis for the message. This dynamic is at work in the reactions to the report of the President's Council of Advisors on Science and Technology (PCAST) on forensic science. I'll give two examples — one from a pro-report commentator, and another from an anti-report group.

On the pro-report side, we have Radley Balko writing on the Washington Post website that the PCAST report should be seen as scientifically correct because it is written by "people such as Eric Lander, a geneticist, biologist and mathematician at the Massachusetts Institute of Technology; Sylvester James Gates Jr., who studies superstring theory and particle physics at the University of Maryland; Susan Graham, acclaimed computer scientist at the University of California at Berkeley; and William H. Press, an astrophysicist, theoretical physicist, computer scientist and computational biologist at the University of Texas." 1/ In contrast, the prosecutors who attacked PCAST's assessments of the scientific literature in a press release a few weeks ago are "district attorneys — people who work in an adversarial profession; who have achieved great success in a field where rhetoric and persuasion are just as if not more important than facts (remember, most DAs are elected) [and their] ability to win convictions stands to get much more difficult should these disciplines be barred from the courtroom."

Now this is not a fallacious argumentum ad hominem. Everything else being equal, it is rational to believe scientists' views about science over lawyers'. But the acclaim that individuals have achieved in other fields of science is not a sure guide to the validity of their judgments of the scientific foundations of various techniques and theories. William Shockley's outstanding work on electrons and holes in semiconductors did not lend substantive weight to his arguments on intelligence, heredity, and race. The contribution of biochemist Kary Mullis — another Nobel Laureate — to PCR amplification reveals little about the truth of his views on the scientific foundations of climate change, ozone depletion, and the cause of AIDS. 2/

This is not to say that the reputation of a scientist has nothing to do with the reception of his or her ideas. Neither is it a comparison of any one scientist to another or a claim that the PCAST report is mistaken. It is simply a statement that the authority proposing an idea or making an argument matters more to lawyers than it does (or, more precisely, than it should) to scientists. In science, "arguments from authority are worthless," or so it has been said. 3/ Given this Galilean ideal, 4/ one would hope that the responses to the PCAST report from forensic scientists either would concur with PCAST's observations or reject them in the light of a more thorough or more perceptive study of the foundational literature.

So far, I have not seen such responses — although it is way too early to expect to find them. Instead, one group of forensic scientists has spoken out against the report by taking the unscientific tack of ad hominem argumentation. At least, this is a prominent feature of the Position Statement of the American Congress of Forensic Science Laboratories. 5/ These forensic scientists had this to say about the report:
Unfortunately, it was born of an imbalanced and inexperienced working group whose make-up included no forensic practitioners nor any other professionals with demonstrated experience in the practice of forensic science. The Chair of the aforementioned working group, Eric Lander, sits on the Board of Directors of the Innocence Project, a legal-activism group that has itself been publicly criticized on numerous occasions (including within peer reviewed literature) for the unfairness of its public statements and the conflicts of interest that have long called into question its motives. In addition, the working group’s writer, Tania Simoncelli, has publicly authored previous opinions that DNA database collections violate civil liberties.
The argument in the first sentence that a competent literature review cannot be written by a group of scientists that does not include "forensic practitioners" or "professionals [who] practice ... forensic science" is the mark of an insular guild that resists scientific norms. Anyone skilled in empirical research methods and statistics should be able to survey and comment on the state of the science. "Just as an epidemiologist who does not treat patients may be qualified to testify to the state of the scientific proof of a chemical’s toxicity, a scientist with expertise in research methods and statistics need not be an expert in the details of toolmark examinations to determine whether the scientific literature supports a practitioner’s professed abilities." 6/

The remainder of the polemic "is not [there] to disparage any individuals or their motives," but it is hard to see how else it is relevant. I have disagreed with some of the legal, scientific, and policy positions espoused by Tania Simoncelli, 7/ but a person's views on controversial issues such as the constitutionality of DNA sampling on arrest — a procedure that four U.S. Supreme Court Justices insisted was unconstitutional — hardly make that person unsuitable as a technical writer. Finally, the idea that a co-chair of the PCAST working group is tainted by an association with the Innocence Project — an advocacy group that itself relies on forensic science — is about as far from a substantive critique as one can get.

Forensic scientists who want to show that parts of the PCAST report are flawed will have to resist the temptation to dismiss the report as "a political phenomenon" born of "ignoran[ce] of the ugly realities associated with solving crimes like murder and rape as quickly and accurately as possible." 8/ They will have to respond in a scientifically defensible manner.

  1. Radley Balko, White House Science Council: Bite-mark Matching Is Junk Science, The Watch, Sept. 7, 2016,
  2. Even within the fields in which a given scientist has made important contributions, examples of erroneous judgments from that scientist are commonplace. Arguably, Eric Lander's conclusions about "spectacular deviations from Hardy-Weinberg, indicating the presence of genetically distinct subgroups" in People v. Castro, an early DNA-evidence case noted in the PCAST report, fall into this category. See David H. Kaye, The Double Helix and the Law of Evidence 115-20 (2010). Even Albert Einstein made his share of mistakes in physics and mathematics. Hans C. Ohanian, Einstein's Mistakes: The Human Failings of Genius (2008).
  3. Carl Sagan, Cosmos 277 (1985).
  4. "In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual." Galileo Galilei, as quoted in François Arago, Biographies of Distinguished Scientific Men 365 (1859).
  5. American Congress of Forensic Science Laboratories, Position Statement, Sept. 21, 2016,
  6. David H. Kaye et al., The New Wigmore on Evidence: Expert Evidence § 373 (Cum. Suppl. 2016). The better reasoned judicial opinions on the qualifications of expert witnesses to testify to their reviews adopt this approach. E.g., State v. Romero, 365 P.3d 358, 362 (Ariz. 2016).
  7. See, e.g., Hot Off the Presses: Chimeric Criminals, Forensic Sci., Stat. & L., Mar. 23, 2013,; "Genetic Justice": The Potential and the Real, Forensic Sci., Stat. & L., June 6, 2011,
  8. Position Statement, supra note 5.
More on the PCAST Report

Tuesday, September 20, 2016

The PCAST Report on Forensic Science: "A Roadmap for Defense Lawyers"

Yesterday, U.S. Court of Appeals judge Alex Kozinski announced in the Wall Street Journal that after examining "the scientific validity of forensic-evidence techniques—DNA, fingerprint, bitemark, firearm, footwear and hair analysis," the President’s Council of Advisors on Science and Technology (PCAST) "concludes that virtually all of these methods are flawed, some irredeemably so." The report, he predicted in this op-ed on "Rejecting Voodoo Science," will "immediately influence ongoing criminal cases, as it provides a road map for defense lawyers to challenge prosecution experts."

Here I will merely point to some large landmarks on this map. Later, I hope to critically explore the report's seemingly tendentious use of phrases like "validity," "reliability," and "scientifically meaningless." In haec verba, the "scientific findings ... concerning foundational validity of six forensic feature comparison methods" are as follows:
  • DNA analysis of single-source samples or simple mixtures of two individuals, such as from many rape kits, is an objective method that has been established to be foundationally valid (P. 147).
  • DNA analysis of complex mixtures based on CPI [Combined Probability of Inclusion]-based approaches has been an inadequately specified, subjective method that has the potential to lead to erroneous results. As such, it is not foundationally valid (P. 148).
  • Objective analysis of complex DNA mixtures with probabilistic genotyping software is [a] relatively new and promising approach. ... At present, published evidence supports the foundational validity of analysis, with some programs, of DNA mixtures of 3 individuals in which the minor contributor constitutes at least 20 percent of the intact DNA in the mixture and in which the DNA amount exceeds the minimum required level for the method (P. 148).
  • [B]itemark analysis does not meet the scientific standards for foundational validity, and is far from meeting such standards (P. 148).
  • [L]atent fingerprint analysis is a foundationally valid subjective methodology—albeit with a false positive rate that is substantial and is likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis. [¶] Conclusions of a proposed identification may be scientifically valid, provided that they are accompanied by accurate information about limitations on the reliability of the conclusion—specifically, that (1) only two properly designed studies of the foundational validity and accuracy of latent fingerprint analysis have been conducted, (2) these studies found false positive rates that could be as high as 1 error in 306 cases in one study and 1 error in 18 cases in the other, and (3) because the examiners were aware they were being tested, the actual false positive rate in casework may be higher (P. 149).
  • [F]irearms analysis currently falls short of the criteria for foundational validity, because there is only a single appropriately designed study to measure validity and estimate reliability (P. 150).
  • [T]here are no appropriate empirical studies to support the foundational validity of footwear analysis to associate shoeprints with particular shoes based on specific identifying marks (sometimes called “randomly acquired characteristics”). Such conclusions are ... not scientifically valid. (P. 150).
It seems safe to predict that the phrase "foundational validity" and the basis for the report's conclusions will be the subject of heated debate. More on that later.


Friday, September 16, 2016

Plagiarism as a Clinical Offence

Regrettably, there is no shortage of flaky academic publishers. They are the dark side of open access, the bad money in Gresham's Law.  One of these is Jacobs Publishers, purportedly based in Austin, Texas. The About Us page explains that:
  • We are involved in filling perforation in Open Access Journals. We are anchormen in leading articles with international standards.
  • It is based on the most exciting researchers with respect to the functional journals covering cosmic fields ... .
  • This Journal opts to bring elixir to the problem ... .
  • Jacobs Publishers is on the way of new strategies in scientific and medical field which are retrievable.
  • Our Mission is to foster and enrich the top-tier research around the globe by our diligence towards inventiveness and innovation in bringing science, medicine, engineering and Pharmacy to the spearhead.
Ouch! But it is good to know that
We strictly oppose copying of content as Plagiarism which is a clinical offence.

Monday, September 12, 2016

US Dep't of Justice to discourage expressions of "reasonable scientific certainty"

At this afternoon's 11th meeting of the National Commission on Forensic Science, Dr. Victor Weedn, "the Senior Forensic Advisor to the Deputy Attorney General," informed the Commission of the Department of Justice's responses to its recent recommendations. He explained that the Department is "adopting the core" of the recommendations seeking to discourage the use of the phrase "reasonable scientific certainty" and variants on it in reports and testimony. In particular, Department forensic labs will adopt policies and procedures to ensure that such terms are not used, and Department prosecutors will abstain from posing questions with these terms to witnesses (unless required to do so by judges). Commission member and theoretical physicist James Gates complimented the Department for dealing with these words that "make scientists cringe."

Monday, September 5, 2016

The National District Attorneys Association’s Slam: PCAST "Usurps the Constitutional Role of the Courts"

A few days ago, the National District Attorneys Association issued a press release entitled “National District Attorneys Association slams President’s Council of Advisors on Science and Technology report.” That these attorneys have a negative reaction to criticism of their witnesses is not surprising, but what are their arguments? Is the slam a devastating critique, bombastic oratory, or something in between? Here are the principal arguments and some thoughts on them:
The forensic science disciplines that the PCAST authors attack are (and have been) reliably used every day by investigators, prosecutors, and defense attorneys across the United States to aid in both exonerating the innocent and convicting the guilty.
If “reliably” refers to the accuracy of a forensic examiner’s judgment in a given case, the statement is true. Analysts of bitemarks, bullet striations, footwear marks, fingerprints, hair fibers, and complex DNA mixtures make correct positive and negative associations every day. That much is true of decisions based on the flip of a coin. It also could be true that every day these analysts make incorrect judgments, incriminating the innocent and exonerating the guilty. Wouldn’t it be nice to know how many? And to know whether the correct statements are the result of demonstrable expertise?
[I]n each instance that such evidence is used, the process of presenting and cross examining the forensic evidence is overseen by objectively neutral judges whose role is to fairly supervise the introduction of evidence into trials and to act as “gatekeepers” to determine the reliability and admissibility of forensic evidence on a case-by-case basis.
I have great respect for judges, but most of the ones I know candidly admit that they lack the scientific training and knowledge to discriminate between scientifically valid, unvalidated, and invalid methods. The judges who admitted Dr. Louise Robbins’ breathtaking identifications based on the size and shape of feet (not friction skin ridge patterns) were “objectively neutral.” (Robbins was an academic physical anthropologist who also managed to identify a 3.5-million-year-old fossilized footprint in Tanzania as the mark of a prehistoric woman who was five-and-one-half months pregnant.) Even today, the law in some jurisdictions does not require expert identification of things that jurors can see and compare for themselves (such as handwriting) to meet the standards established for scientific evidence generally.
As with all evidence presented in criminal courts, this forensic scientific evidence is subject to cross-examination as well as evaluation by the Court.
The issue raised by PCAST is what it takes to show that forensic-science evidence is scientifically valid. An expert phrenologist would be subject to cross-examination and evaluation. That does not convert phrenology into a science.
Many accrediting bodies consisting of world-renowned scientists and highly skilled experts evaluate forensic labs and practitioners, helping to guarantee that only qualified forensic experts testify to solid forensic facts in our courts.
District attorneys and defense counsel also present forensic-science testimony from experts who do not work in accredited laboratories, and, anyway, accreditation is not designed to examine the validity of the very premises of the field.
PCAST has taken it upon itself to usurp the Constitutional role of the Courts and decades of legal precedent and insert itself as the final arbiter of the reliability and admissibility of the information generated through these forensic science disciplines.
Someone sent me a copy of the draft, and I may as well quote from it on this point, since Eric Lander did in Thursday’s PCAST meeting:
Judges’ decisions about the admissibility of scientific evidence rest solely on legal standards; they are exclusively the province of the courts. But, the overarching subject of the judges’ inquiry is scientific validity.6 It is the proper province [of] the scientific community to provide guidance concerning scientific standards for scientific validity.7

7. In this report, PCAST addresses solely the scientific standards for scientific validity and reliability. We do not offer opinions concerning legal standards.

(1) The admissibility of expert testimony depends on a threshold test of, among other things, whether it meets certain legal standards embodied in Rule 702. These decisions about admissibility are exclusively the province of the courts.

(2) Yet, as noted above, the overarching subject of the judge’s inquiry under Rule 702 is “scientific validity.” It is the proper province of the scientific community to provide guidance concerning scientific standards for scientific validity.

PCAST does not opine here on the legal standards, but seeks only to clarify the scientific standards that underlie them.
The scientists' last sentence may not be entirely accurate. The report does more than merely “clarify the scientific standards.” It also applies them. It assesses the state of scientific knowledge in the fields it examined in light of those standards. But to do so is hardly “to usurp the Constitutional role of the Courts and decades of legal precedent and insert [themselves] as the final arbiter.” Anyone is free to disagree with the report’s definition of scientific validity and its assessments of the state of the science. Indeed, if time permits, I may advance some criticisms of my own.

But I will try to avoid empty rhetoric and undocumented claims that “PCAST ... ignored vast bodies of research, validation studies, and scientific literature authored by true subject matter experts” and eschewed “engagement with recognized subject matter experts.” To suggest that the PCAST draft report makes it necessary “to defend our constitutional adversarial system of criminal justice . . . against those who would seek to undermine the role of the courts, prosecutors, defense attorneys, and juries” is not merely unfounded—it is offensive.


Friday, September 2, 2016

Unhappiness Expressed over PCAST Draft Report

The PCAST Report described yesterday is the subject of a couple of newspaper articles that provide more information about it. The Los Angeles Times quoted verbatim from the "do not quote or distribute" draft report:
  • In what is likely to be its most controversial finding, the report states that analysis linking firearms to bullets and shell casings “falls short” of scientific standards for admission as evidence. If judges permit such testimony, the report says, they should tell jurors that error rates by firearms examiners are higher than would be expected.
  • “It has become apparent, over the past decade, that faulty forensic feature comparison has led to numerous miscarriages of justice,” according to the draft report dated Aug. 26. “It has also been revealed that the problems are not due simply to poor performance by a few practitioners, but rather to the fact that the reliability of many forensic feature-comparison methods has never been meaningfully evaluated.”
As for reactions,
  • [Unnamed] current and former U.S. law enforcement officials ... said they were particularly irked that the group was calling into question firearms evidence, which has long been considered grounded in science by judges and appeals courts.
  • John Walsh, former U.S. attorney in Colorado, said he was “surprised and concerned” by the commission’s findings. He said the Justice Department had been working for years to better evaluate potential evidence and fairly explain its meaning and limitations to jurors.
  • Jim Pasco, executive director of the National Fraternal Order of Police, was briefed on the recommendations and said it appeared to be based on a “half-baked model” that “calls into question technologies” that have long been used in court.
The Wall Street Journal, referring to a copy of the report that its reporters had read, provided these quotations from the draft:
  • “It has become increasingly clear in recent years that lack of rigor in the assessment of the scientific validity of forensic evidence is not just a hypothetical problem but a real and significant weakness in the judicial system”
  • In the draft, the council looked at several common analyses used in criminal trials, including latent fingerprints, firearms, footwear, bite marks and DNA [and] found ... that a number of them either weren’t scientifically valid or hadn’t been independently scrutinized enough by “science based agencies” to have “foundational validity,” meaning it had met the standard for “whether evidence is based on reliable principles and methods.”
  • The report said foundational validity requires studies by more than one group, but only one such study had been done on firearm analysis; because only one such study has been done, “the current evidence falls short of the scientific criteria for foundational validity.”
It too described angst or concern on the part of some organizations:
  • Jim Bueermann, president of the Police Foundation, which does law enforcement-related research, said he would be interested in the opinions of crime lab experts from the Federal Bureau of Investigation. "Just because there is a lack of science does not mean the analysis is inaccurate or done wrong or is not worthwhile,” he said.
  • “What they’ve done is turn the accepted reliability of expert witnesses and their evidence on their heads,” said Jim Pasco, executive director of the Fraternal Order of Police. “As a result there will be people who are not going to go to jail who should be incarcerated and some who are currently incarcerated will be released. The effect will be a threat to the public safety of American citizens.”

○ Del Quentin Wilber, White House Panel Expected to Issue Report Critical of Some Forensic Evidence in Criminal Cases, L.A. Times, Sept. 1, 2016
○ Gary Fields & Kate O’Keeffe, Presidential Advisory Council Questions Validity of Forensics in Criminal Trials: Group Sees Lack of Science Behind Much of Bite-mark, Hair, Footwear, Firearm and Tool-mark Analysis, Wall St. J., Sept. 1, 2016
PCAST Recommends More Forensic Science R&R (Research & Reform), Forensic Sci., Stat. & L., Sept. 1, 2016,


Thursday, September 1, 2016

PCAST Recommends More Forensic Science R&R (Research & Reform)

This is a near real-time bulletin on a meeting of the President's Council of Advisors on Science and Technology (PCAST) to consider a working group's recommendations regarding forensic science. Eric Lander presented the working group's draft report to the Council for a vote. He made the following comments (not quite verbatim):

The report is a natural follow-on to the 2009 NRC Report on strengthening forensic science. We've been seeing a shift in the forensic science community toward empirical measurement of accuracy. The FBI has produced a series of truly elegant papers ... black box studies and white box studies on latent fingerprint identification. These show an error rate of 1/600, or, if you include a confidence interval, as you should, 1/300. 1/

The report lays out criteria for determining scientific validity and applies them to DNA analysis of mixed samples, bitemarks, latent fingerprinting, firearms, footwear, and, to a lesser extent, to hair analysis. It makes the following recommendations:
  • Ongoing assessments of foundational validity for forensic feature-comparison methods, performed annually by the National Institute of Standards and Technology (NIST);
  • A leadership role for NIST in transforming subjective methods into objective feature-comparison methods;
  • NIST should improve the process by which the Organization of Scientific Area Committees (OSAC) develop or approve standards by using a metrology resource committee;
  • The White House Office of Science and Technology Policy should develop an R&D strategy for forensic science;
  • The FBI laboratory should undertake additional black box studies of subjective assessments and implement routine blind proficiency testing in case flow; 
  • The FBI's Uniform Language for Testifying and Reporting standards should entail statements based on empirical tests of validity;
  • Federal judges should take into account the appropriate scientific criteria for judging validity and should ensure that testimony about the probative value of the comparisons is consistent with empirical studies.
  • Judicial organizations should provide guidance and education for judges on how to do this;
  • There should be more federal funding for forensic-science research to make the transformation to objective methods and to improve the use of subjective procedures in the interim.
In response to a question about the nature of the objective methods being sought, Lander referred to recent, impressive progress in computerized facial recognition.

PCAST unanimously adopted the report (subject to final edits that will not change the substance of the recommendations). There was no mention of when the report will become public.

1. Editorial comment: Of course, a two-sided interval is equally consistent with a smaller false positive rate for individual examiner judgments, and the rate for blindly verified positive identifications would be smaller.
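Figures like "1/600, or, with a confidence interval, 1/300" pair a point estimate with the upper end of a confidence interval for a binomial error rate. For readers who want to see the arithmetic, here is a minimal Python sketch of a one-sided Clopper-Pearson upper bound of the kind such statements rely on. The counts in the example are made up for illustration, not the studies' actual data:

```python
from math import comb

def binom_cdf(x: int, n: int, p: float) -> float:
    """P(X <= x) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(x + 1))

def clopper_pearson_upper(x: int, n: int, conf: float = 0.95) -> float:
    """One-sided upper Clopper-Pearson confidence bound on a binomial rate.

    Bisects for the largest p with P(X <= x | n, p) >= 1 - conf, which is
    the exact upper confidence limit for x errors observed in n trials.
    """
    alpha = 1.0 - conf
    lo, hi = 0.0, 1.0
    for _ in range(100):               # bisection; converges well below float noise
        mid = (lo + hi) / 2.0
        if binom_cdf(x, n, mid) >= alpha:
            lo = mid                   # p can still be pushed higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative only: zero observed false positives in 100 trials still
# leaves a 95% upper bound of about 3% (the statistical "rule of three").
print(round(clopper_pearson_upper(0, 100), 4))  # prints 0.0295
```

The design choice behind quoting 1/300 rather than 1/600 is the conservative one the footnote above describes: the point estimate tells jurors what was observed, while the upper bound tells them how high the true rate could plausibly be given the limited size of the studies.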