Friday, December 11, 2015

More on Task Relevance in Forensic Tests

Yesterday, I suggested that the National Commission on Forensic Science's views on task relevance are a significant step forward, and I elaborated on the use of conditional independence in determining which information is task relevant. The NCFS position is simple -- the examiner "should rely solely on task-relevant information when performing forensic analyses."

As the Commission explained, excluding task-irrelevant information guards against subtle but real possibilities of bias. 1/ However, what if the potentially biasing information could improve the accuracy of the analyst's conclusions? Statisticians often use biased estimators because they are more precise -- with limited data, their estimates tend to fall closer to the true value -- even though those estimates lie consistently on one side of that value. Moreover, even if the bias from the task-irrelevant information would increase the risk of an incorrect conclusion, what if keeping it out of the examination process would be very costly? One might argue that the NCFS view is too stringent.
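
To make the statistical point concrete, here is a minimal simulation of my own (nothing in the NCFS document), written in Python. The variance estimator that divides by n is biased downward, yet for small samples from a normal population its typical error is smaller than that of the unbiased estimator that divides by n - 1:

    import numpy as np

    # Compare the unbiased (n - 1) and biased (n) variance estimators on many
    # small samples drawn from a normal population with known variance.
    rng = np.random.default_rng(0)
    true_var = 4.0
    n, trials = 5, 100_000

    samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(trials, n))
    unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1
    biased = samples.var(axis=1, ddof=0)    # divides by n

    for name, est in [("unbiased (n-1)", unbiased), ("biased (n)", biased)]:
        bias = est.mean() - true_var
        mse = np.mean((est - true_var) ** 2)
        print(f"{name:>15}: bias = {bias:+.3f}, mean squared error = {mse:.3f}")

The biased estimator's average squared distance from the true variance comes out smaller, which is the sense in which a biased procedure can be the more accurate one.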

This challenge to the simple rule of no reliance is not persuasive. First, it is rather theoretical. Reasonably cheap methods to blind analysts to biasing, task-irrelevant information are generally available. The NCFS document explains how they can work.

Second, the conclusions that are likely to be more accurate are not those that the analyst should be drawing. At least with identification evidence in the courtroom, the expert should explain the strength of the scientific evidence, leaving the conclusion as to the identity of the true source to the judge or jury to decide based on all the evidence in the case. The NCFS views document adopts this philosophy most clearly in the last sentence of the appendix, which reads "[a]ny inferences analysts might draw from the task-irrelevant information involve matters beyond their scientific expertise that are more appropriately considered by others in the justice system, such as police, prosecutors, and jurors."

But it is not just a matter of relative expertise that should limit the analyst to task-relevant information. Forensic scientists are supposed to be conveying scientific information, and if the putative scientific judgment rests on a mixture of scientific and other information, the judge or jury cannot properly evaluate its weight without knowing which part is the science and which part is something else.

Information contamination also makes it difficult to discern the validity of scientific tests. Consider hair-morphology evidence. I have presented the Houck-Budowle study of the correspondence between microscopic hair examinations and mitochondrial DNA tests as evidence that the former has some modest probative value (as measured by the likelihood ratio for positive associations). 2/ But inasmuch as the examiners were not blinded to task-irrelevant information, it is hard to tell from this one study how much of the probative value comes from the features of the hair and how much comes from other information that the hair examiners might have considered.
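
For readers unfamiliar with the measure, the likelihood ratio for a reported positive association compares how often examiners report an association when the two samples really do share a source with how often they report one when the samples do not. A schematic calculation (with made-up counts, not the Houck-Budowle figures) looks like this:

    # Likelihood ratio for a reported positive association:
    # P(association reported | same source) / P(association reported | different source).
    def likelihood_ratio(true_pos, same_source_total, false_pos, diff_source_total):
        sensitivity = true_pos / same_source_total
        false_positive_rate = false_pos / diff_source_total
        return sensitivity / false_positive_rate

    # Purely hypothetical counts, for illustration only.
    print(likelihood_ratio(true_pos=90, same_source_total=100,
                           false_pos=10, diff_source_total=100))  # prints 9.0

On these invented numbers, a reported association is nine times more probable when the hairs share a source than when they do not. The point in the text is that, without blinding, the counts in such a study may reflect more than the hair features themselves.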

Studies of polygraphic lie detection offer another example. The technique sounds scientific, and the graphs of physiological responses look technical. But if the examiners' conclusions used in a validation study are influenced by their impressions of the subject, the study does not reveal the diagnostic value of the information in the tracings alone -- the effects of the tracings and of the subjective impressions are confounded. (This problem can be avoided by computerized scoring of the data.)
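
A toy simulation of my own (the numbers are assumptions, not data from any polygraph study) shows how the confounding works: when the examiner's call mixes the tracings with an extraneous impression of the subject, the study's accuracy figure credits the technique with discriminating power that partly comes from the impression.

    import numpy as np

    # Simulate a "validation study" in which each call depends on a weak signal
    # in the tracings plus an extraneous impression of the subject.
    rng = np.random.default_rng(1)
    n = 100_000
    deceptive = rng.random(n) < 0.5                              # ground truth
    tracing = rng.normal(np.where(deceptive, 0.5, 0.0), 1.0)     # signal in the charts
    impression = rng.normal(np.where(deceptive, 1.0, 0.0), 1.0)  # extraneous cue

    calls = {
        "tracings alone": tracing > 0.25,
        "tracings + impression": (tracing + impression) > 0.75,
    }
    for name, call in calls.items():
        print(f"{name:>22}: accuracy = {np.mean(call == deceptive):.3f}")

In this toy setup the mixed decision rule scores noticeably better, so a study that validates examiners' calls, rather than the tracings alone, overstates what the charts by themselves can do.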

As the NCFS appendix emphasizes, the task-irrelevant information "does not help the analyst draw conclusions from the physical evidence that has been designated for examination through correct application of an accepted analytic method." At the risk of oversimplifying a complex subject, the message is that forensic scientists should stick to the scientific information.

Even this precept is not a complete response to concerns about bias. What if the task-relevant information also poses a serious risk of bias? If the contribution to the scientific analysis is minor and the risk of distortion is great, should not the examiner be blinded to this concededly task-relevant information? NCFS expressed no view on this situation. Perhaps it never arises, but if it does, standard-setting organizations should deal with it.

Notes
  1. The NCFS observes that "there are risks entailed in exposing examiners unnecessarily to task-irrelevant information." But if the information is truly task-irrelevant, when would exposing an examiner to it ever be necessary? And if such necessarily disclosed information did exist, would it not pose the same risk of biasing the analysis?
  2. David H. Kaye, Ultracrepidarianism in Forensic Science: The Hair Evidence Debacle, 72 Wash. & Lee L. Rev. Online 227 (2015); Disentangling Two Issues in the Hair Evidence Debacle, Forensic Sci., Stat. & L., Aug. 22, 2015, http://for-sci-law.blogspot.com/2015/08/disentangling-two-issues-in-hair.html.
