Wednesday, December 30, 2015

Higher math in a Kansas case

The diagram of a car crash is drawn at a scale of 1 inch to 20 feet. The distance between two points on the diagram is 3 and 3/16 inches. How far apart are two locations shown in the diagram?

You would think that an expert in the field of "accident reconstruction" could answer this question correctly with a pencil and paper or a calculator (if not in his head). But today's online New York Times hosts a re-enactment of the deposition testimony of an expert accident reconstructionist who refused to try without his "formula sheets" and computer.

Here is a small part of the transcript:
A. Three and three-sixteenths inches.
Q. And that is, when you convert that from the scale, what does that convert to?
A. Sixty-eight feet, approximately, sir.
Q. What are the numbers?
A. Three and three-sixteenths.
Q. OK, well here, run it out for me (handing the witness a pocket calculator).
A. Run it out?
Q. Yeah, calculate it for me.
A. (Working on calculator) And again, I'd do this on the computer.
Q. You can't do it, can you?
A. Not without my formulas in front of me, no sir. I can't do it from my head.
Q. You're not able to do a simple scaling problem with a calculator?
A. I don't wish to. I don't wish to make any mistakes. I use instrumentation that does it exact [sic].
Q. You can't show us, based on the numbers you just gave me, that will spit out the 68-foot distance, can you?
A. Not here today I can't, no.
This colloquy suggests an extra-credit problem: Multiply 3 and 3/16 by 20. Do you obtain 68?
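
For readers who want to check the arithmetic themselves, here is a quick sketch in Python (the fractions module keeps the mixed number exact):

```python
from fractions import Fraction

# Scale: 1 inch on the diagram corresponds to 20 feet on the ground.
measured_inches = Fraction(3) + Fraction(3, 16)   # 3 and 3/16 inches
distance_feet = measured_inches * 20

print(float(measured_inches))  # 3.1875
print(float(distance_feet))    # 63.75 -- not 68
```

At the stated scale, 3 and 3/16 inches corresponds to 63.75 feet, so the witness's "approximately" 68 feet overstates the distance by more than 4 feet.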

Filmmaker and comic writer Brett Weiner dramatized this and more of the transcript, without changing a word, to achieve the surreal video Verbatim: Expert Witness. Last year, a similar film, Verbatim: What Is a Photocopier?, won the audience award for best short film at the 2014 Dallas Film Festival. In that film, an IT guy in Ohio struggles with yet another deeply technical issue -- the meaning of the term "photocopier."

Thursday, December 24, 2015

Flaky Academic Conferences

Paralleling the proliferation of journals of ill repute is the globalization of the marketing of academic conferences. Information on, and tidbits from, sellers whose incessant spam has reached me can be found at the blog Flaky Academic Conferences. Links to these postings follow.

BIT Life Sciences, aka BIT Congress and BIT Group Global, is "Your Think Tank." It lists conference organizers, presenters, and session chairs without their knowledge or over their objections (see The Dalian Letters).
Looks like another mushy mega-conference organizer for China travel.
Conference Series LLC
a front for OMICS with (as of 8/25/16) "1000+ Global Events Every Year across USA, Europe & Asia" with support from 1,000 more scientific societies; it "Publishes 700+ Open access journals which contains over 100000 eminent personalities, reputed scientists ... ."
DEStech Publications
runs "a leading conference for all researchers from different countries and territories to present their research results about human society and spiritual cultures of human annually"
Engineering Information Institute
Hardly limited to engineering, this group's "mission is to meet the satisfactions of our authors involved in all kinds of comprehensive conferences. ... We look forward to benefiting and establishing harmonious relationship with everybody."
Eureka Science
Nobel Laureate Ferid Murad promises conferences that "should provide eminent scientists the opportunity to present their cutting edge researches" at "important," "exciting," and, of course, "scientific events."
Global Science and Technology Forum
a group from Singapore that has been accused of "conference hijacking" and is on Beall's List as "an exploitative publisher that ... everyone should avoid."
Institute of Research Engineers and Doctors (IRED)
"IRED welcomes all the Doctors, Scientist, Engineers Professionals, Researchers, Scholars and Medical and Health, Technical Engineering Colleges and Universities to join us to exchange information and ideas; in according with our objective to facilitate this, we call upon to network with us."
International Scientific Events
Come to Hotel "Royal Castle" on the Bulgarian Black Sea Coast.
North Sea Conference and Journal
Believes forensic science reform is part of the Internet of things
OMICS
Well known as a "predatory publisher," OMICS is also in the conference business -- big time. Despite a California address, its roots are in India, and the FTC has charged it with deceptive practices.
Oxford Global Marketing, Ltd. (OGM)
With offices in Singapore and London, "We also offer bespoke event management to companies in the sector." It belts out email for the Annual Genetics in Forensics Congress.
Oxford Round Table
An American invention (starting in Kentucky) with a history that would impress a corporate reorganization lawyer.
Pace Institute of Technology and Sciences (PITS)
"Elsevier based conference is going to organizing in Andhra pradesh, India during 29th to 30th July 2016," but don't trust us about that ("disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited")
SASI Institute of Technology & Engineering (SASI)
All of engineering conferences from "[w]hat began as a small school in with a 9 students in a small village in West Godavari [that] has created a sensation in the field of education" ... "The very name SASI instills confidence in the minds."
Scientific Federation (SF)
an "abode for researchers"


Wednesday, December 23, 2015

Flaky Academic Journals

Barely a week goes by without my being asked to submit an article to, or become an editor of, the Journal of This or That. One such missive is from Maple Xiao of the Canadian Center of Science and Education. Less than a week after I posted a 38-year-old book review on the Social Science Research Network, Ms. Xiao pounced:
I have the honor to read your paper "Book Review, The Right and the Power: The Prosecution of Watergate", and really appreciate your contributions in this area. As the editorial assistant of Journal of Management and Sustainability, I write to invite you to submit manuscripts to our journal.
Apparently, "management and sustainability" embraces a wide swath of subjects. (The book review asked whether it was proper for a former prosecutor to disclose previously private information about the subjects of the criminal investigations, and it questioned the justifications given by former Watergate Special Prosecutor Leon Jaworski for some of his decisions.)

Although the problem of bogus academic journals is widely recognized, 1/ I began listing excerpts from some of the academic journal spam that was reaching me along with observations about the purveyors and their editorial boards. When the posting grew too long, I converted it into a blog of its own, named Flaky Academic Journals. Links to the accumulated information on the spamming and dubious journals follow.

Academicians' Research Center (ARC)
email from ARC Journal of Forensic Science and ARC Journal of Nursing and Healthcare
Allied Academies
"confidential" email from Journal of Forensic Genetics and Medicine and Journal of Sinusitis and Migraine
American Association for Science and Technology (AASCIT)
American Institute of Science (AIS)
American Research Institute for Policy Development (ARID)
email from Journal of Law and Criminal Justice
Annex Publishers
email for the Journal of Forensic Science and Criminology, publisher of articles on the sacred geometry of fingerprints
Apex Journal International
email soliciting for 12 journals at once, including International Research on Medical Sciences, Journal of Education Research and Behavioral Sciences, and International Law and Policy Research Journal
Austrian Scientific Publication House (ASPH)
Where are the Austrians?
Bentham Science
email from Current Drug Abuse Reviews and Neuroscience and Biomedical Engineering
Bioaccent Group
email from BOAJ Urology and Nephrology
Biomed Central
email from Skeletal Muscle
Canadian Center of Science and Education (CCSE)
email from Journal of Management and Sustainability
Center for Promoting Ideas (CPI), USA
email from American International Journal of Social Science and International Journal of Humanities and Social Sciences
Centre of Excellence for Scientific and Research Journalism (COES&RJ)
email from Journal of Social Science with a fake address in Texas
Elyns Publishing Group
email from Journal of Forensic Medicine and Legal Affairs
Gavin Publishers
email from Journal of Forensic Studies
Herald Scholarly Open Access (HSOA)
email from Journal of Forensic, Legal & Investigative Sciences and a vision "to highlight quality exploration work to the biggest possible swarm over development points secured under the field of medicine."
Internal Medicine Review
"complete rubbish" from "a completely fake medical journal that falsely claims to be based in Washington, D.C."
Institute of Research in Engineering and Technology (IRET)
email from International Journal of Emerging Trends in Electrical and Electronics (IJETEE)
Insight Medical Publishing (iMedPub)
email from Journal of Medical Toxicology and Clinical Forensic Medicine
Jacobs Publishers
"bringing science, medicine, engineering and Pharmacy to the spearhead."
email from Jacobs Journal of Forensic Science.
JSciMed Central (JSM)
email from Annals of Forensic Science and Analysis
Juniper Publishers
email from the Journal of Forensic Science and Criminal Investigation
Knowledge Enterprises, Inc. (KEI Journals)
"A publisher to avoid"
"confidential" email from Medical Research Archives
Medwin Publishers
seeking "to intellectualize the global society by providing them with the advancements"
email from Vaccines & Vaccination Open Access and comments on the International Journal of Forensic Sciences
Mehta Press
No editorial boards, but "rigorously reviewed" with "Maximum review time 15 days"
Merit Journals
"This journal opts to bring panacea"
Net Journals
email from International Research Journal of Medicine and Medical Sciences (IRJMMS) and Biochemistry and Biotechnology Research (BBR)
OMICS International
email from Intellectual Property Rights: Open Access, Journal of Civil & Legal Sciences, and Global Journal of Nursing & Forensic Studies
"We look forward for a long lasting scientific relationship."
Open Access Library (OALib) Journal
Openventio Publishers
email from Anthropology - Open Journal
Peertechz Journals
email from Forensic Science and Technology and Archives of Sports Medicine and Physiotherapy
"themed Organization setted up with 40 Peer Reviewed Medical Journals"
Progressive Science Publications (PSCIPUB)
emails from four journals at once; "+5000 active participants"
Public Science Framework
"Continued Privilege: Publishing Papers with 50% Discount" and you can submit "the extended version" of your previously published paper
Remedy Publications
email from Clinics in Oncology
Research Institute for Progression of Knowledge (RIPK)
email from International Journal of Education and Social Science and International Journal of Humanities and Social Science Review
Science Publishing Group (SciencePG)
articles include "Modification of Einstein's E = mc2 to E = 1/22 mc2" and "Mathematical Proof of the Law of Karma"
Scientific Research Association (SCIREA)
"takes an opportunity to serve the scientific community, students and researchers with undefiled research works."
Scientific Research Publishing Inc. (SCIRP)
Looks like a subject-based filing system that, for a fee, stores even randomly generated papers.
Scientifica (Hindawi)
Cairo-based Hindawi Publishing claims to have "more than 30,000 internationally-recognized Editors"
email from Journal of Nucleic Acids
Scinzer Scientific Journals
"Fast track paper publication (3-10 Days)" and "Papers from your country are welcome." Only "40 USD per paper."
Scitech Central
27 journals out to save the world by being "a quantum to research"
SciTechnol
OMICS in disguise
email from Journal of Forensic Toxicology and Pharmacology
Time Journals
"It arises from a reaction to the severe restriction of knowledge distribution"
email from Time Journal of Biological Sciences
Trade Science Inc.
"wide spectrum of audience. SUBMIT MANUSCRIPTS NOW !!"
email from 18 journals

  1. Declan Butler, Investigating Journals: The Dark Side of Publishing, 495 Nature 433 (2013); John D. Bowman, Predatory Publishing, Questionable Peer Review, and Fraudulent Conferences, 78 Am. J. Pharm. Educ. 176 (2014), doi:10.5688/ajpe7810176; Kevin Carey, Fake Academe, Looking Much Like the Real Thing, N.Y. Times, Dec. 30, 2016; Jocalyn Clark & Richard Smith, Firm Action Needed on Predatory Journals, 350 Brit. Med. J. h210 (2015); Colleen Flaherty, Librarians and Lawyers, Inside Higher Education, Feb. 15, 2013; David Moher & Ester Moher, Stop Predatory Publishers Now: Act Collaboratively, Annals Internal Med. (2016); Cenyu Shen & Bo-Christer Björk, ‘Predatory’ Open Access: A Longitudinal Study of Article Volumes and Market Characteristics, 13 BMC Medicine 230 (2015), doi:10.1186/s12916-015-0469-2.

Thursday, December 17, 2015

"Remarkably Accurate": The Miami-Dade Police Study of Latent Fingerprint Identification (Pt. 3)

The Department of Justice continues to communicate the sweeping view that fingerprint examiners make extremely few errors in their work. A few days ago, it issued this bulletin:

Miami-Dade Examiner Discusses a Highly Accurate Print Identification Process
In a new video, Brian Cerchiai talks about an NIJ-supported study conducted by the Miami-Dade Police Department on the accuracy of fingerprint examiners. The study found that fingerprint examiners make extremely few errors. Even when examiners did not get an independent second opinion about their decisions, they were remarkably accurate. But when decisions were verified by an independent reviewer, examiners had a 0-percent false positive, or incorrect identification, rate and a 3-percent false negative, or missed identification, rate.

A transcript of the NIJ video can be found below. The naive reader of the bulletin might think that Miami-Dade's latent print examiners do not make false identifications -- they are "remarkably accurate" in their initial judgments -- and they have a "0-percent" rate of incorrectly declaring a match in their cases. In previous postings, I suggested that this first characterization is a remarkably rosy view of the results reported in the study, but I did not address the verification phase that brought the false positive rate (of 3 or 4%) for the judgments of individual examiners down to zero.
Today, Professor Jay Koehler shared his reactions to both aspects of the Miami-Dade study on a discussion list of evidence law professors. I have not reread the study myself to verify the details of his analysis, but here is his take on the study:
[Photograph of a latent print on a bottle]

Regarding the Miami-Dade fingerprint proficiency test (funded by the Department of Justice) - and DOJ’s claim that it showed a 0% false positive error rate - I urge you to be skeptical.

First, the study was not blind (examiners knew they were being tested) and the participants were volunteers. If we are serious about estimating casework error rates, these features are not acceptable.

Second, the Department of Justice’s press release indicates that the study showed examiners to be “remarkably accurate” and “found that examiners make extremely few errors.” But the press release doesn’t actually state what those remarkable error rates were.

Here they are: the false positive error rate was 4.2% (42 erroneous identifications out of 995 chances, excluding inconclusives), and the false negative error rate was 8.7% (235 erroneous exclusions out of 2,692 chances, excluding inconclusives). In case you are wondering whether the false positive errors were confined to a few incompetents, 28 of the 109 examiners who participated in the study made an erroneous identification. Also, the identification errors occurred on 21 of the 80 different latent prints used in the study.

The error rates identified in this study produce a likelihood ratio of about 22:1 for a reported fingerprint match. This means that one should believe that it is about 22 times more likely that the suspect is the source of the latent print in question than it was prior to learning of the match. Not 22 million or billion times more likely to be the source of the latent print in question. Just 22 times more likely.

But not all false positive errors are equal, and most of those reported in this study really shouldn’t count as false positive errors if we are concerned with who is the source of the fingerprint as opposed to which finger is the source of the fingerprint. The authors report that 35 of the 42 false positive errors seemed to be nothing more than “clerical errors” in which the correct matching person was selected but the wrong finger was identified. If we move those 35 minor false positives into the correct calls category, we are left with 7 major false positive errors (i.e., a person who was not the source is falsely identified as the source). This translates to a 0.7% false positive error rate (i.e., about one false positive error per 142 trials), and a likelihood ratio of 130:1. Better, but still not even close to millions or billions to one.
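
Koehler's likelihood ratios are easy to reproduce from the counts he gives, if one takes the likelihood ratio for a reported match to be the true positive rate divided by the false positive rate (inconclusives excluded, as in his figures). A sketch in Python:

```python
# Reproducing the likelihood ratios from the reported counts.
false_positive_rate = 42 / 995      # erroneous identifications, ~4.2%
false_negative_rate = 235 / 2692    # erroneous exclusions, ~8.7%
true_positive_rate = 1 - false_negative_rate

lr_all = true_positive_rate / false_positive_rate
print(round(lr_all))        # 22 -- "about 22:1"

# Counting only the 7 "major" false positives (wrong person, not wrong finger):
major_fp_rate = 7 / 995             # ~0.7%
lr_major = true_positive_rate / major_fp_rate
print(round(lr_major))      # 130 -- "130:1"
```

Both figures match the ones quoted above, and both remain orders of magnitude short of "millions or billions to one."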

Third, the study provides some evidence about the value of verification for catching false positive errors, but caution is needed here as well. The 42 false positives were divided up and assigned to one of three verification conditions: a group of different examiners, a group of examiners who were led to believe that they were the 2nd verifiers, and the original examiners themselves (months later). The 0% post-verification error rate that the Department of Justice touts is an apparent reference to the performance of the first verification group only. None of the 15 false positive errors that were sent to this group of verifiers was repeated. But some of the original false positive errors were repeated by the second and third group of verifiers. The authors are silent on whether any of the 7 major false positive errors were falsely verified or not.

Appendix: NIJ Video Transcript: How Reliable Are Latent Fingerprint Examiners?
Forensic Science, Statistics, and the Law gratefully acknowledges the U.S. Department of Justice, Office of Justice Programs, National Institute of Justice, for allowing it to reproduce the transcript of the video How Reliable Are Latent Fingerprint Examiners? The opinions, findings, and conclusions or recommendations expressed in this video are those of the speaker and do not necessarily represent the official position or policies of the U.S. Department of Justice.

Research Conducted by the Miami-Dade Police Department.
Speaking in this video: Brian Cerchiai, CLPE, Latent Fingerprint Examiner, Miami-Dade Police Department

The goal of the research was to determine whether latent fingerprint examiners can properly make identifications and exclusions with prints not visible to the naked eye. In this case, we had these 13 volunteers leave over 2,000 prints on different objects that were round, flat, or smooth, and we developed them with black powder and tape lifts.

We did the ACE, which is analyze, compare, evaluate, where we gave latent examiners -- 109 latent examiners -- unknown fingerprints or palm prints and latents to look at and compare to three known sources. So essentially, compare this latent to one of these 30 fingers or one of these six palms.
[Slide text] 109 examiners compared the unknown latent prints to known sources. Can they match the prints correctly?
So as participants were looking at the latent list and comparing the latents to the subjects, we asked them if they could identify any of those three subjects as being the source of that latent print. In that case, they would call that an identification. If we asked them to exclude, we were basically asking them to tell us that none of those three standards made that latent -- that they were not the source of that latent print.

The ACE plus verification (ACE-V) process adds a second step: a second examiner looks at that comparison, does their own analysis and comparison, and gives their evaluation of that decision.

We found that under normal conditions, where one examiner made an identification and a second examiner verified it, no erroneous identification got past that second latent examiner. So it had a false positive rate of zero.
[Slide text] With verification, 0% false positive.
So when we are looking at ACE comparisons, where one latent examiner looked at a print -- analyzed, compared, and evaluated it -- and came up with a decision, there was a false positive rate: basically an erroneous identification, where they identified the wrong source.
[Slide text] Without verification, 3% false positive.
Without verification, there was a three percent error rate for that type of identification. We also tracked a false negative rate, where, given those three standards, people erroneously excluded the source -- where you’re given the source, check one of these three people, and you then determine that the latent print does not come from one of those three people, even though it did. That would be a false negative. And that false negative rate was 7.5 percent.
[Slide text] Without verification, 7.5% false negative.
And in the third phase of this test, we were testing for repeatability and reproducibility. After six months, we sent participants back their own answers, and we also gave them answers from other participants. But all those answers came back as if they were verifying somebody else’s answers.
[Slide text] To test the error rate further, an independent examiner verified comparisons conducted by other examiners.
Under normal conditions, we’d give them the source and latent number and ask them to agree, disagree, or find it inconclusive. Under biased conditions, we’d give them an identification that someone had made, along with the answer of a verifier. So now it’s already been verified, and we want them to give a second verification. Sending those erroneous identifications out to other examiners under the regular verification process, not one latent examiner repeated them; they caught all those errors. That brought the reported error rate down to zero.
[Slide text] The independent examiner caught all of the errors, dropping false positive error rate to 0%.
We maintained our regular caseload; this was done in the gaps in between, after hours. The hardest part of doing this was not being dedicated researchers. That’s why it took us quite a long time to get this done. Now that it’s finally out and we are doing things like this -- giving presentations this year -- we really hope to expand on this research. The results from this study are fairly consistent with those of other studies.
[Slide text] This research project was funded by the National Institute of Justice, award no. 2010-DN-BX-K268. The goal of this research was to evaluate the reliability of the Analysis, Comparison, and Evaluation (ACE) and Analysis, Comparison, Evaluation, and Verification (ACE-V) methodologies in latent fingerprint examinations.
Produced by the Office of Justice Programs, Office of Communications. For more information contact the Office of Public Affairs at: 202-307-0703.

Friday, December 11, 2015

More on Task Relevance in Forensic Tests

Yesterday, I suggested that the National Commission on Forensic Science's views on task relevance are a significant step forward, and I elaborated on the use of conditional independence in determining which information is task relevant. The NCFS position is simple -- the examiner "should rely solely on task-relevant information when performing forensic analyses."

As the Commission explained, excluding task-irrelevant information guards against subtle biases. 1/ However, what if the potentially biasing information could improve the accuracy of the analyst's conclusions? Statisticians often use biased estimators because they have greater precision -- they tend to give estimates that are closer to the true value with limited data -- even though these estimates tend to lie consistently on one side of that value. Moreover, even if the bias from the task-irrelevant information would increase the risk of an incorrect conclusion, what if it would be very costly to keep it out of the examination process? One might argue that the NCFS view is too stringent.
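
The statistical point about biased estimators can be illustrated with a toy example that is not drawn from the NCFS document: the maximum-likelihood variance estimator, which divides by n, is biased low, yet for normally distributed samples it typically has a smaller mean squared error than the unbiased divide-by-(n-1) estimator. A simulation sketch in Python:

```python
import random

# Toy illustration: the divide-by-n variance estimator is biased low, yet for
# normal samples it typically has a smaller mean squared error (MSE) than the
# unbiased divide-by-(n-1) estimator.
random.seed(1)
true_var = 1.0          # variance of the population being sampled
n, trials = 5, 20000

mse_biased = mse_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    mse_biased += (ss / n - true_var) ** 2          # biased (divide-by-n)
    mse_unbiased += (ss / (n - 1) - true_var) ** 2  # unbiased (divide-by-n-1)
mse_biased /= trials
mse_unbiased /= trials

print(mse_biased < mse_unbiased)  # True: the biased estimator is more precise
```

The biased estimator trades a consistent downward pull for smaller typical error, which is the tension the argument above exploits.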

This challenge to the simple rule of no reliance is not persuasive. First, it is rather theoretical. Reasonably cheap methods to blind analysts to biasing, task-irrelevant information are generally available. The NCFS document explains how they can work.

Second, the conclusions that are likely to be more accurate are not those that the analyst should be drawing. At least with identification evidence in the courtroom, the expert should explain the strength of the scientific evidence, leaving the conclusion as to the identity of the true source to the judge or jury to decide based on all the evidence in the case. The NCFS views document adopts this philosophy most clearly in the last sentence of the appendix, which reads "[a]ny inferences analysts might draw from the task-irrelevant information involve matters beyond their scientific expertise that are more appropriately considered by others in the justice system, such as police, prosecutors, and jurors."

But it is not just a matter of relative expertise that should limit the analyst to task-relevant information. Forensic scientists are supposed to be conveying scientific information, and if the putative scientific judgment comes from a mixture of scientific and other information, the judge or jury cannot properly evaluate its weight without knowing what is the scientific part and what is some other part. 

Information contamination also makes it difficult to discern the validity of scientific tests. Consider hair-morphology evidence. I have presented the Houck-Budowle study of the correspondence between microscopic hair examinations and mitochondrial DNA tests as evidence that the former has some modest probative value (as measured by the likelihood ratio for positive associations). 2/ But inasmuch as the examiners were not blinded to task-irrelevant information, it is hard to tell from this one study how much of the probative value comes from the features of the hair and how much comes from other information that the hair examiners might have considered.

Studies of polygraphic lie detection offer another example. The technique sounds scientific, and the graphs of physiological responses look technical. But if the examiners' conclusions used in a validation study are influenced by impressions of the subject, the study does not reveal the diagnostic value of just the information in the tracings -- the impact of that information and the subjective impressions are confounded. (This problem can be avoided by computerized scoring of the data.)

As the NCFS appendix emphasizes, the task-irrelevant information "does not help the analyst draw conclusions from the physical evidence that has been designated for examination through correct application of an accepted analytic method." At the risk of oversimplifying a complex subject, the message is that forensic scientists should stick to the scientific information.

Even this precept is not a complete response to concerns about bias. What if the task-relevant information also poses a serious risk of bias? If the contribution to the scientific analysis is minor and the risk of distortion is great, should not the examiner be blinded to this concededly task-relevant information? NCFS expressed no view on this situation. Perhaps it never arises, but if it does, standard-setting organizations should deal with it.

  1. The NCFS observes that "there are risks entailed in exposing examiners unnecessarily to task-irrelevant information." But if the information is truly task-irrelevant, why would it be necessary? And if such information exists, would not the same risk of biasing the analysis be present?
  2. David H. Kaye, Ultracrepidarianism in Forensic Science: The Hair Evidence Debacle, 72 Wash. & Lee L. Rev. Online 227 (2015); Disentangling Two Issues in the Hair Evidence Debacle, Forensic Sci., Stat. & L., Aug. 22, 2015,

Thursday, December 10, 2015

Blinding Forensic Analysts to Task-irrelevant Information: A National Commission (NCFS) Speaks Out

This week, the National Commission on Forensic Science (NCFS) approved a “views document” entitled Ensuring That Forensic Analysis Is Based Upon Task-Relevant Information. 1/ If these views are translated into practice, it will be a major step forward in making sure forensic science findings are based on scientific data and not on extraneous information. The document is thus cause for celebration.

Here, I describe how the document defines task-relevance. I identify an arguable inconsistency in the Commission’s terminology and elaborate on the use of what is known in probability theory as conditional independence.

The NCFS’ views are these:
  1. FSSPs [Forensic Science Service Providers] should rely solely on task-relevant information when performing forensic analyses.
  2. The standards and guidelines for forensic practice being developed by the Organization of Scientific Area Committees (OSAC) should specify what types of information are task-relevant and task-irrelevant for common forensic tasks.
  3. Forensic laboratories should take appropriate steps to avoid exposing analysts to task-irrelevant information through the use of context management procedures detailed in written policies and protocols.
The analysis and explication that follows this enumeration tries to define task-relevance both in words and in symbols involving conditional probabilities. The NCFS definition is in two parts:
(1) [I]nformation is task-relevant for analytic tasks if it is necessary for drawing conclusions: (i.) about the propositions in question, (ii.) from the physical evidence that has been designated for examination, (iii.) through the correct application of an accepted analytic method by a competent analyst.
(2) Information is task-irrelevant if it is not necessary for drawing conclusions about the propositions in question, if it assists only in drawing conclusions from something other than the physical evidence designated for examination, or if it assists only in drawing conclusions by some means other than an appropriate analytic method.
Taken literally, this formulation seems to dismiss as task-irrelevant information that could help the analyst assess the strength of the evidence yet is not necessary for drawing conclusions about the propositions.  For example, suppose the proposition P in question is whether a trace sample that has both clear and ambiguous features came from a suspect. Viewing a tape of someone who looks like (and thus might be) the suspect leaving the mark is task-irrelevant under (2). No analyst needs to view the tape to compare the questioned mark to a known exemplar. The analyst can reach a conclusion of some sort without the video.

Nevertheless, viewing the tape could help the analyst doing a side-by-side comparison resolve the ambiguities in the features in the mark and thereby “assess the strength of the inferential connection between the physical evidence being examined and the propositions the analyst is evaluating.” For example, if the mark is a distorted fingerprint, observing how it was deposited might help the analyst. It seems as if the tape should be declared task-relevant, but (1) requires that it be necessary to the analysis. Strictly speaking, it is not.

Indeed, a few sentences later, the document states that information “is task-relevant if it helps the analyst assess the strength of the inferential connection between the physical evidence being examined and the propositions the analyst is evaluating.” Not everything that is helpful is necessary.

That the Commission did not really mean to require necessity also can be gleaned from the “more formal definition of task-relevance and task-irrelevance ... in the technical appendix.” For “two mutually exclusive propositions P and NP that a forensic science service provider (FSSP) is asked to evaluate,” and for E defined as “the features or characteristics of the physical evidence that has been designated for examination,”
(1) information is task-relevant if it has the potential to assist the examiner in evaluating either the conditional probability of E under P—which can be written p(E|P)—or the conditional probability of E under NP—which can be written p(E|NP);
(2) information is task-irrelevant if it has no bearing on the conditional probabilities p(E|P) or p(E|NP).
Again, necessity is not crucial: The phrase “has the potential to assist” has been substituted for “is necessary,” and “has no bearing” has replaced “is not necessary.”

Technical definitions (1) and (2) also depart from (or refine) the main definitions (1) and (2) in that the only “conclusions” that can be considered in judging task-relevance are conditional probabilities for “features” given certain propositions. These conditional probabilities often are called “likelihoods” to distinguish them from the posterior probabilities of the propositions given the features. Traditionally, analysts testified about posterior probabilities expressed qualitatively or categorically. For example, the statement P that “Jane Doe’s thumb is the source of the latent print” is a categorical conclusion meaning that the posterior probability Pr(P|E) is close to 1.

Using likelihoods could be valuable in clarifying task-relevance, but the concepts of “bearing” and “potential to assist” remain undefined. It would seem that the NCFS intends to equate task-irrelevance with conditional independence. Let I denote the information that might be task-irrelevant. E and I are conditionally independent given some proposition R if and only if (iff) Pr(E&I|R) = Pr(E|R) Pr(I|R). An equivalent definition looks to whether Pr(E|I&R) = Pr(E|R). The idea is that once R is known, knowing I brings no additional information about E.

If we take conditional independence to be the NCFS’ technical definition of task-irrelevance, and we use R to stand for either P or NP, then we can rewrite the NCFS definitions as
(3) I is task-relevant iff Pr(E|I&R) ≠ Pr(E|R);
(4) I is task-irrelevant iff Pr(E|I&R) = Pr(E|R).
This more precise definition is easier to write than to apply. The “physical evidence” itself — bits of soil, specimens of handwriting, latent and rolled fingerprints, and so on — is not “E” in the probability function. Instead, E is “the features or characteristics of the physical evidence.” But are these the actual features or the declared features?

I think the formal definition works better when E refers to the true features (although the judge or jury only knows what the expert thinks they are). First, let’s look at an easy case. Suppose that E refers to the DNA alleles present at each locus in the suspect’s DNA (A0) and the profile in the crime-scene DNA (A1). Thus, E = A0 & A1. P means that the suspect is the source of both DNA samples; let Q mean that someone else is. I is a credible report that the suspect was near the crime scene just after the crime occurred. Finally, suppose that A0 and A1 are the same — the DNA in both samples has the same true features.

If P is true, then the samples must have the same features, so Pr(E|P) = Pr(E|I&P) = 1. If Q is true, then whether the samples have the same features also does not depend on I — if someone else left the DNA, the suspect’s propinquity does not affect the alleles that the true contributor possesses and left at the crime scene. Consequently, under (4), I is task-irrelevant, just as it should be.
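The easy case can be worked through numerically. The sketch below (my own illustration, with invented numbers that are not in the NCFS document) builds a toy joint distribution for E and I under each hypothesis and then applies definition (4): I is task-irrelevant iff Pr(E|I&R) = Pr(E|R).

```python
# Toy numerical check of definition (4): I is task-irrelevant iff
# Pr(E|I&R) = Pr(E|R), for R = P (the suspect is the source) and
# R = Q (someone else is). All numbers are hypothetical illustrations.

# E = "the two samples share the same true DNA features".
# Under P the samples must match, so Pr(E|P) = 1; under Q a match
# occurs only by coincidence (a small random-match probability).
p_E_given = {"P": 1.0, "Q": 0.001}

# I = "a credible report places the suspect near the crime scene".
# I bears on P versus Q, but not on E once a hypothesis is fixed.
p_I_given = {"P": 0.8, "Q": 0.3}

def joint(e, i, r):
    """Pr(E=e, I=i | R=r), built from the conditionals above."""
    pe = p_E_given[r] if e else 1.0 - p_E_given[r]
    pi = p_I_given[r] if i else 1.0 - p_I_given[r]
    return pe * pi

def pr_E_given_I(i, r):
    """Pr(E | I=i, R=r), recovered from the joint table."""
    return joint(True, i, r) / sum(joint(e, i, r) for e in (True, False))

def task_irrelevant(r, tol=1e-12):
    """True iff knowing I leaves Pr(E|R) unchanged -- definition (4)."""
    return all(abs(pr_E_given_I(i, r) - p_E_given[r]) < tol
               for i in (True, False))

print(task_irrelevant("P"), task_irrelevant("Q"))  # True True
```

Whatever the suspect’s whereabouts, Pr(E|P) stays at 1 and Pr(E|Q) stays at the random-match probability, so I is task-irrelevant under (4) — exactly the conclusion reached in the text.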

Now, let’s make it more complicated. The laboratory is asked to assess whether a suspect’s “touch” DNA is present on a gun used in a killing. Several small peaks in the electropherogram are at the positions one would expect if this were the case, but they are at the limit of detectability. Some analysts would treat them as real (true peaks), but others would see them as spurious. The question is whether the analyst should be able to know the profile reported for the suspect — let’s call it r[A0] — before ascertaining the profile A1 in the crime-scene sample. Is I = r[A0] task-relevant to the determination of A1?

Some analysts might argue that I is task-relevant because knowing what is in the suspect’s DNA helps them understand what really is in the crime-scene DNA. They could say that the fact that the small peaks in the crime-scene sample are located at just the same places as their larger homologs in the suspect’s sample helps them resolve the ambiguity arising from the small peak heights. Of course, if they are thoughtful, they also will recognize that the related information I could bias them, and they might well agree that they should not be exposed to it because it does not contribute enough to the accuracy of their determinations. But are they wrong in their claim that I is task-relevant (applying the NCFS definition)?

The views document does not give an explicit answer. It concludes with the observation that 
[Task-irrelevant information] might help the analyst draw conclusions about the propositions, but it does not help the analyst draw conclusions from the physical evidence that has been designated for examination through correct application of an accepted analytic method. Any inferences analysts might draw from the task-irrelevant information involve matters beyond their scientific expertise that are more appropriately considered by others in the justice system, such as police, prosecutors, and jurors.
This relative-expertise criterion, however, does not quite define task-irrelevance. The inference that a small peak in an electropherogram is the result of chemiluminescence from alleles as opposed to an artifact or background noise may be difficult to make correctly, but it is not clear that it lies more squarely within the expertise of police, prosecutors, and jurors than of DNA analysts. 2/

The formal definition can help us out here. If P is true, then regardless of what the peaks look like and irrespective of the suspect’s reported profile r[A0], the true profile of the crime-scene sample is A1 = A0. Thus, Pr(E=A1|P) = Pr(A1|P&I) = Pr(A1|P&r[A0]) = 1. Likewise, if someone else’s DNA is on the gun instead of the suspect’s, then the probability that the profiles match also is unrelated to a report of what is in the suspect’s DNA sample. Once again, the conditional-independence definition of task-irrelevance seems to work.  Sometimes probability notation is purely window dressing, but the approach begun in the technical appendix might do some useful work in spotting task-irrelevant information. 3/

  1. The document should appear on the Commission’s web page in the near future. The principal drafter of the document was Bill Thompson, who is the chair of the Human Factors Resource Committee of the NIST Organization of Scientific Area Committees (OSACs) that is developing standards for forensic science.
  2. One can argue that looking at the suspect's profile or peaks before resolving ambiguities in the crime-scene profile is not a "correct application of an accepted analytic method." However, given that the task is to ascertain and compare the two DNA samples, it seems odd to call this information about the profiles "irrelevant" as opposed to improper or not acceptable. And, if there were no standard in place rejecting this practice (as was true for a period of time), this criterion would not render the information task-irrelevant.
  3. This is so even though the likelihoods involved are not necessarily the ones that determine the probative value of the forensic analysis with respect to the two competing hypotheses P and Q. Those likelihoods are Pr(E*|P) and Pr(E*|Q). The asterisk is attached because we do not know the true features E in the samples. We have data E* on them (fallible measurements or observations of them). The probative value of E* (or, if you like, of the analysis that generates these data) is the likelihood ratio Pr(E*|P) / Pr(E*|Q). For example, even though we can speak of the likelihood ratio for the true DNA profiles (A1 & A0) for present purposes, the court’s evidence is the reported profiles: E* = r[A1 & A0].

Sunday, December 6, 2015

Hair Evidence in the “Clearly Not Exonerated” Exoneration of Mark Reid

On November 3, the fictional forensic scientist in the world’s most watched television drama was aghast that she had once performed microscopic hair comparisons. Having learned that such comparisons are entirely discredited, NCIS’s Abby Sciuto is horrified: “Can you imagine if I messed up, what that really means? It means that innocent people went to jail because of me, because of my mistakes.” As a writer for Entertainment Weekly wrote, “Abby’s spinning out of control, locked in her lab and reexamining every single case she’s ever touched.” 1/

The “16 Years” episode is fiction, but real people have gone to jail for longer than that — and some could have been executed — because of mistakes by examiners. One disturbing hair-comparison case is State v. Reid. 2/ I have cited the Connecticut Supreme Court's opinion in two publications that survey different ways to testify about the implications of similarities between trace evidence and samples from known sources (such as the defendant). 3/ In doing so, I was not expressing the slightest agreement with the supreme court’s reasoning or arguing that the court was correct to hold that the trial judge properly admitted the testimony. But the case does illustrate how a careful criminalist called upon to testify could proceed in the face of gaping scientific uncertainty about the significance of similarities in the trace material and the known samples. It also illustrates the different sorts of errors that can occur in ascertaining hair morphology and drawing inferences from it as well as the different types of exonerations that can occur with the benefit of DNA testing.

I. The Criminalist Gives “Features Only” Testimony in Reid

It seems hard to deny that various physical features of hair display at least some variation within a population.  But without extensive population data that might permit at least rough estimates of the relative frequencies of the features, and without employing methods that have demonstrated reliability in measuring the features of interest, it is not clear how, or even if, this information should be used in trials.

One possibility is to limit the testimony to a presentation of the observed features (and perhaps a characterization of the samples' features as similar or different, as the case may be). According to the Connecticut Supreme Court, the analyst in Reid pursued this “features only” approach:
[He] displayed an enlarged photograph of one of the defendant's hairs and one of the hairs recovered from the victim's clothing as they appeared side-by-side under the comparison microscope. [He] explained to the jurors how the hairs were similar and what particular features of the hairs were visible. He also drew a diagram of a hair on a courtroom blackboard for the jurors. The jurors were free to make their own determinations as to the weight they would accord the expert's testimony in the light of the photograph and their own powers of observation and comparison.
The trial court had held a pretrial hearing to decide whether this testimony satisfied the preliminary showing of scientific validity normally required of all suitably challenged scientific evidence. The court found that it did, but the supreme court did not rely on or discuss either the scientific validity or the general scientific acceptance of visual hair comparisons. It avoided the issue by holding that the testimony did not have to satisfy such standards — because it was not “scientific evidence” at all. Rather, the expert “testified about a subject that simply required the jurors to use their own powers of observation and comparison.”

Three years later, in 2003, the Superior Court granted a petition for a new trial. 4/ Its opinion casts doubt on the no-science theory. Elaborating on the supreme court’s description of the testimony, this court observed that at the trial, the expert “indicated that hair comparison analysis ... is generally accepted as reliable within the field of forensic science” and “that he could state, ‘to a reasonable degree of scientific certainty,’ that the pubic hairs found on the victim's clothing were microscopically similar to those pubic hair samples taken from Mark Reid.” On such a record, the supreme court’s conclusion that the usual standards for scientific evidence are beside the point is hard to swallow.

II. The Superior Court Orders a New Trial While Insisting that the DNA Evidence Does Not Exonerate Reid

In any event, at Reid’s trial
Mr. Settachatgul testified that the three hairs recovered from the victim's clothing were pubic hairs. These hairs were rootless, indicating that they were shed, not plucked; one was found on the victim's jeans, another on a sock, and another on her lower undergarment (panty). ... Based on the microscopic analysis, Mr. Settachatgul's conclusion was that the three rootless hairs recovered from M.'s clothing were Negroid pubic hairs which had similar characteristics to the pubic hairs supplied by petitioner.
As the State’s Attorney explained in her summation, “the hairs ... appeared to be the same color, both had an abundance of fuci [sic], and both exhibited the shadow of twisting, indicative of pubic hair.”  However, she acknowledged that “statistics are not done in the comparison field, [and] the only conclusion that can be drawn is similar or dissimilar characteristics, not the percentage of the population which shares those hair characteristics.” She told the jury “this is not conclusive evidence. I agree with the defense to that degree. It is not conclusive. But it is supportive of the victim's I.D.”

This support collapsed when postconviction mitochondrial DNA testing established that the three pubic hairs came from the same individual or from individuals in the same maternal lineage. Critically, Reid was not in that maternal line, while the victim, a white woman, was. This left two major possibilities: either the unknown rapist was the source of the three hairs or the victim was. The former scenario totally exculpates Reid; the latter renders the “supportive” expert testimony inconclusive.

Reid argued that the mtDNA test proved his actual innocence — that the criminalist’s determination of race had to be true, that the victim was white, and therefore the true rapist must have been some other black man who deposited three hairs on the victim’s clothes. The court did not buy this argument. And for good reason. Because the mtDNA testing showed that the victim’s mtDNA sequences matched those of the three hairs, the most plausible conclusion is that the criminalist erred in finding “Negroid pubic hairs.” The hairs probably were the victim’s rather than any rapist’s.

Thus, nothing was left to connect Reid to the rape except the victim’s identification of him. The Superior Court concluded:
This is a close, difficult case. The new mtDNA evidence merely excludes petitioner as the depositor of the unknown hairs; it clearly does not exonerate him. And, as stated, the victim, M., was certain and steadfast in her identification of Mark Reid, and the circumstances surrounding that identification support its reliability, at least when viewed absent the newly discovered mtDNA evidence.
There are reasons to question this rosy picture of the eyewitness testimony, but whatever one thinks of that identification, the Superior Court found that Reid was entitled to a new trial at which he could use the mtDNA evidence to devastate a major part of the state’s case — the hair testimony.

III. The End of the Story Leaves Questions Hanging

In the end, there was no retrial. According to the University of Michigan Law School’s National Registry of Exonerations, the state dismissed the charges “after the victim declined to participate ... . Reid, who had other felony convictions, was deported to his native Jamaica. In 2004, Reid filed a lawsuit seeking $2 million in damages from East Hartford.” The Registry does not report the outcome of that action. 5/

Whether or not all microscopic hair testimony is scientifically invalid because the comparisons have not been shown to be scientifically reliable, there is agreement that criminalists frequently have erred by using the similarity between hairs to make strong or quantified statements about the source of the trace hairs. 6/ This type of overclaiming apparently did not occur in Reid. In granting a new trial, the Superior Court emphasized that “Mr. Settachatgul was testifying only to the very ‘narrow opinion’ that the three pubic hairs recovered from the victim's clothing were similar to the samples obtained from the defendant, and, that he, Settachatgul could not say that the questioned specimens were the pubic hairs of petitioner/defendant.”

This testimony would not be judged as scientifically invalid under the FBI’s guidelines for reviewing microscopic-hair-comparison testimony. 7/ Likewise, the prosecutor’s summation was not infected with the sort of egregious overstatements, such as “There is one chance, perhaps for all we know, in 10 million that it could [be] someone else’s hair,” heard in other cases. 8/

The hair analyst may have erred in concluding that the features were similar. But mtDNA testing cannot tell us that. The small DNA molecules in the mitochondria do not relate to hair morphology. They provide a complementary — and more specific — test for identity. Sequence differences can exclude suspects when the inherently less discriminating visible features cannot. Standing alone, this limitation does not make microscopic hair analysis scientifically invalid, and it does not mean that the analyst misjudged the visual features here. But it does underscore the need to estimate the likelihoods or conditional error rates for microscopic hair comparisons. Without this information, how can anyone know what weight to give to the criminalist’s findings of similar hairs?
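What such estimates would buy can be shown with a back-of-the-envelope calculation. The error rates below are invented for illustration — precisely the quantities that have not been measured for microscopic hair comparison — and the point is only the form of the computation.

```python
# Hypothetical weight-of-evidence calculation for a "similar hairs" report.
# S  = the analyst reports the hairs as similar
# H1 = the trace hair came from the defendant
# H2 = the trace hair came from someone else
# Both conditional probabilities are invented illustrations; no validated
# figures of this kind exist for microscopic hair comparison.
pr_S_given_H1 = 0.95  # sensitivity: similar-call rate for true same-source pairs
pr_S_given_H2 = 0.20  # false-positive rate: similar-call rate for different sources

likelihood_ratio = pr_S_given_H1 / pr_S_given_H2
print(likelihood_ratio)  # 4.75
```

On these made-up numbers, a “similar” report would be modestly supportive — roughly in the spirit of the prosecutor’s “not conclusive ... but supportive” characterization. Raise the false-positive rate to 0.5 and the same report is worth a likelihood ratio of only 1.9. Without measured rates, neither figure can be known.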

Finally, given the mitochondrial results for the victim and the pubic hairs in Reid, the hair analyst probably erred in concluding that the three pubic hairs were of “Negroid origin.” As noted earlier, the woman who was attacked in Reid was white, and the simplest conclusion is that the hairs on her clothes were hers, as the mtDNA sequences suggest. Furthermore, the unqualified assurance as to the racial origin of the hairs was unjustified — even if the classification turned out to be correct. Skimming a few forensic science textbooks, I can find no reference to publications in the scientific literature to support the position that hair analysts can make firm determinations of biogeographic ancestry. Caution is usually advised. 9/

  1. Sara Netzley, “16 Years,” Entertainment,
  2. 757 A.2d 482 (Conn. 2000). The case is noted in J.M. Taupin, Forensic Hair Morphology Comparison— A Dying Art or Junk Science?, 44 Sci. & Justice 95 (2004).
  3. David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2d ed. 2011); David H. Kaye, Presenting Forensic Identification Findings: The Current Situation, in Communicating the Results of Forensic Science Examinations 12–30 (C. Neumann et al. eds. 2015) (Final Technical Report for NIST Award 70NANB12H014).
  4. Reid v. State, No. CV020818851, 2003 WL 21235422 (Ct. Super. Ct. May 14, 2003).
  5. Maurice Possley, Mark Reid, The National Registry of Exonerations.
  6. David H. Kaye, Ultracrepidarianism in Forensic Science: The Hair Evidence Debacle, 72 Wash. & Lee L. Rev. Online 227 (2015).
  7. Id.
  8. Spencer S. Hsu, Santae Tribble Cleared in 1978 Murder Based on DNA Hair Test, Dec.14, 2012 (quoting from federal prosecutor David Stanley’s closing argument).
  9. Max M. Houck & Jay A. Siegel, Fundamentals of Forensic Science 303 (2015) (“Estimating the ethnicity or ancestry of an individual from his or her hairs is just that: an estimate.”); Richard Saferstein, Forensic Science: An Introduction 419 (2d ed. 2011) (“all of these observations are general, with many possible exceptions. The criminalist must approach the determination of race from hair with caution and a good deal of experience.”).
Acknowledgment: Thanks to Chris Fabricant for thoughts on State v. Reid and for pointing me to the full history of the case.