This innovation -- said to be unique among forensic laboratories and to exceed the demands of accreditation -- does not refer to blind testing of samples from crime scenes. It is generally recognized that analysts should be blinded to information that they do not need in order to reach conclusions about the similarities and differences between crime-scene samples and samples from suspects or other persons of interest. One would hope that many laboratories already employ this strategy for managing unwanted sources of possible cognitive bias.
Perhaps confusingly, the Houston lab's announcement refers to "'blindly' test[ing] its analysts and systems, assisting with the elimination of bias while also helping to catch issues that might exist in the processes." More clearly stated, "[u]nder HFSC’s blind testing program analysts in five sections do not know whether they are performing real casework or simply taking a test. The test materials are introduced into the workflow and arrive at the laboratory in the same manner as all other evidence and casework."
A month earlier, the National Commission on Forensic Science unanimously recommended, as a research strategy, "introducing known-source samples into the routine flow of casework in a blinded manner, so that examiners do not know their performance is being studied." Of course, whether the purpose is research or instead what the Houston lab calls a "blind quality control program," the Commission noted that "highly challenging samples will be particularly valuable for helping examiners improve their skills." It is often said that existing proficiency testing programs not only fail to blind examiners to the fact that they are being tested, but also are only designed to test minimum levels of performance.
The Commission bent over backward to imply that the outcomes of the studies it proposed would not necessarily be admissible in litigation. It wrote that
To avoid unfairly impugning examiners and laboratories who participate in research on laboratory performance, judges should consider carefully whether to admit evidence regarding the occurrence or rate of error in research studies. If such evidence is admitted, it should only be under narrow circumstances and with careful explanation of the limitations of such data for establishing the probability of error in a given case.

The Commission's concern was that applying statistics from work with unusually difficult cases to more typical casework might overstate the probability of error in the less difficult cases. At the same time, its statement of views included a footnote implying that the defense should have access to the outcomes of performance tests:
[T]he results of performance testing may fall within the government’s disclosure obligations under Brady v. Maryland, 373 U.S. 83 (1963). But the right of defendants to examine such evidence does not entail a right to present it in the courtroom in a misleading manner. The Commission is urging that courts give careful consideration to when and how the results of performance testing are admitted in evidence, not that courts deny defendants access to evidence that they have a constitutional right to review.

Using traditional proficiency test results -- or the results of the newer performance tests in which examiners are blinded to the fact that they are being tested (a better way to measure proficiency) -- to impeach a laboratory's reported findings raises interesting questions of relevance under Federal Rules of Evidence 403 and 404. See, e.g., Edward J. Imwinkelried & David H. Kaye, DNA Typing: Emerging or Neglected Issues, 76 Wash. L. Rev. 413 (2001).