Tuesday, December 25, 2012

The Judicial Reception of Acquiring Biometric Data on Arrest: Photographing, Sizing, and Fingerprinting Before 1933

A recent article suggests that in the first third of the Twentieth Century, American courts generally did not sanction photographing, fingerprinting, or making physical measurements of arrestees (Bertillonage). According to Wayne A. Logan, Policing Identity, 92 B.U. L. Rev. 1561 (2012), courts did not “sanction the common practice of ‘mugging’ every suspect whose picture and measurements the police would like to have. Nor d[id the courts] sustain the right to retain the prints and measurements after acquittal.” Id. at 1579 (quoting A.M. Kidd, The Right to Take Fingerprints, Measurements and Photographs, 8 Cal. L. Rev. 25, 32 (1919)). A mere “handful of decisions [adopted] a more generous stance.” Id. The early regime of general disapproval of police acquisition of biometric data then changed “[s]tarting in the 1930s, [as] courts began evincing a less critical and more accepting view. Most notably, in United States v. Kelly [55 F.2d 67 (2d Cir. 1932)] the Second Circuit rejected a challenge to the use of identity evidence, based on the absence of statutory authority to extract prints, brought by a defendant facing misdemeanor prosecution under the National Prohibition Act.” Logan, supra, at 1580.

The theory that United States v. Kelly, 55 F.2d 67 (2d Cir. 1932), represents a phase shift is open to serious question. True, a mere handful of decisions upheld the collection, retention, and widespread circulation of biometric records all at once. But then again, there were only a handful of reported opinions of any kind on these matters. An annotation from 1933 observed that
Considering the large number of arrests that are made, there are comparatively few cases in which the right of a person to prevent his description from taking its place in the "rogues' gallery" has been involved. The decisions of those few cases are not uniform. This is accounted for in part by the difficulty the courts find, on account of the form of the pleadings, in granting the relief sought. Both mandamus and injunction have been held not to be the proper remedies.
Annotation, Right to Take Finger Prints and Photographs of Accused Before Trial, or to Retain Same in Police Record After Acquittal or Discharge of Accused, 83 A.L.R. 127 (1933).

Having now read almost all the appellate opinions in the original annotation, my conclusion is that the clear majority rule favored the collection and retention of the data. Both before and after Kelly, appellate courts did not recognize high constitutional barriers to collecting biometric data from arrestees and rarely found merit in demands for the destruction of fingerprints, photographs, or bodily measurements.

Proceeding chronologically, in the earliest case collected in the Annotation, the Indiana Supreme Court evinced no doubt that a sheriff had the power to acquire biometric data on an arrestee. In State ex rel. Bruns v. Clausmeier, 57 N.E. 541 (Ind. 1900), the gravamen of the complaint was libel—that the sheriff damaged an acquitted defendant’s reputation by circulating his picture. The Indiana Supreme Court observed (in dictum) that “It would seem, therefore, if, in the discretion of the sheriff, he should deem it necessary to the safe-keeping of a prisoner and to prevent his escape, or to enable him the more readily to retake the prisoner if he should escape, to take his photograph, and a measurement of his height, and ascertain his weight, name, residence, place of birth, occupation, and the color of his eyes, hair, and beard, as was done in this case, he could lawfully do so.” The court cited no opinions to the contrary.

In Shaffer v. United States, 24 App. D.C. 417 (D.C. 1904), the defendant objected to the use of his arrest photograph (in which he had no beard) to help a witness identify him at trial (when he had a beard) on the ground that the state had no right to photograph him for that purpose and that using the photograph at trial violated his right not to be compelled to incriminate himself. The District of Columbia’s Court of Appeals forcefully rejected the argument, observing that photographing arrestees was “one of the usual means employed in the police service of the country, and it would be matter of regret to have its use unduly restricted upon any fanciful theory of constitutional privilege.”

Thus, the earliest appellate cases reflect no condemnation of the routine collection of biometric identification information following an arrest. However, the Louisiana Supreme Court expressed a different view in Itzkovitch v. Whitaker, 42 So. 228 (La. 1906), and Schulman v. Whitaker, 42 So. 227 (La. 1906). In these cases, the court saw no reason to allow the police to take photographs (and share them with other police agencies) before conviction. The Louisiana Supreme Court cited no opinions previously adopting this position. Clausmeier and Shaffer notwithstanding, the Whitaker court “found no precedent directly pertinent to the issues here.”

The Arkansas Supreme Court in Mabry v. Kettering, 117 S.W. 746 (Ark. 1909), followed Clausmeier and Shaffer. Three men charged with state crimes and held in a county jail sought an injunction against developing the negatives of photographs that their jailers had taken. The state planned to give the photographs to federal officials “for the purpose of identifying appellants in the various localities where [federal] offenses are charged to have been committed.” The state supreme court held that they were not entitled to the injunction. It wrote that “[t]he authorities cited by appellants in support of their claim for a temporary injunction clearly recognize the principle that public officers, charged with the enforcement of criminal laws, and having in their custody individuals charged with crime, may use photographs for the purpose of identifying the individual accused.” The “identification” here plainly involved the use of the photographs in separate and unrelated investigations.

In the same year, Maryland’s highest court also rejected the Louisiana approach. In Downs v. Swann, 73 A. 653 (Md. 1909), the Court of Appeals affirmed the dissolution of an injunction against taking photographs and bodily measurements to identify an arrestee. The court defined “[t]he precise question” as “whether the police authorities of Baltimore city may lawfully provide themselves, for the use of their department of the city government, with the means of identification of a person arrested by them upon a charge of felony, but not yet tried or convicted, by photographing and measuring him under the Bertillon system.” The court perceived no constitutional defect in acquiring the identifying information. As for the general state of the law, the court explained that “The right of the police authorities to employ the Bertillon process for the identification of convicted criminals has been recognized in most, if not all, of the jurisdictions in which the subject has received consideration, although several courts and text-writers have either questioned or denied the right to subject to that process persons accused of crimes before their trial or conviction.” It cautioned, however, that it was not countenancing “the placing in the rogues' gallery of the photograph of any person, not a habitual criminal, who has been arrested, but not convicted, on a criminal charge, or the publication under those circumstances of his Bertillon record.”

In 1915, the Washington Supreme Court rejected an offender’s demand for the destruction of postconviction photographs. Although Hodgeman v. Olsen, 150 P. 1122 (Wash. 1915), is not a case on arrestee data collection—indeed, the court judiciously noted that it did not need to reach the question of preconviction data collection—the opinion sheds light on the judicial understanding of that question at that time. The Washington court “call[ed] attention to . . . cases explicit in affirming the implied police power to take, preserve, and make reasonable use of such photographs and data for the identification of persons convicted of crime, and even of persons accused of crime, but not yet convicted.” 150 P. 1122. The only appellate cases to the contrary were the isolated Louisiana ones.

Finally, in Miller v. Gillespie, 163 N.W. 22 (Mich. 1917), the Michigan Supreme Court held that an arrestee was not entitled to the destruction of identifying records even though the charges against him were dismissed at trial. It perceived absolutely no authority—including the Louisiana cases—“for granting relief . . . , unless it can be said that the mere preservation in the files of the police department of a report proper to be made in the first instance—a true report—exposes plaintiff to ridicule, obloquy, or disgrace.” Naturally, the court was unwilling to say any such thing.

In sum, by the 1920s, the Louisiana cases were the exception to the rule. Those opinions, and ones from trial judges reaching similar results, never were the majority rule. The main innovation of United States v. Kelly, 55 F.2d 67 (2d Cir. 1932), was its deliberate extension of the majority rule to misdemeanor arrests. It described fingerprinting of arrestees as "widely known and frequently practiced both in jurisdictions where there are statutory provisions regulating it and where it has no sanction other than the common law." Id. at 70. The opinion was not radical. If it was persuasive in expanding the established doctrine, its impact may have been related to its unusually detailed analysis of the law and policy and to the prestige of the court. Judges Augustus Hand, Learned Hand, and Thomas Swan composed the panel—a veritable judicial powerhouse.


Annotation, Right to Take Finger Prints and Photographs of Accused Before Trial, or to Retain Same in Police Record After Acquittal or Discharge of Accused, 83 A.L.R. 127 (1933).

David H. Kaye, The Constitutionality of DNA Sampling on Arrest, 10 Cornell J.L. & Pub. Pol'y 455 (2001)

A.M. Kidd, The Right to Take Fingerprints, Measurements and Photographs, 8 Cal. L. Rev. 25, 32 (1919)

Wayne A. Logan, Policing Identity, 92 B.U. L. Rev. 1561 (2012)

Saturday, December 22, 2012

"Human Error, Bias, and Malfeasance" in DNA Databases and Law Reviews

A new article in the Boston University Law Review offers the following warning:
[E]xpansive police arrest authority—and the desire to continually enlarge identity evidence databases at very little cost in time and expense—should give pause for several reasons. First, contrary to common public perception, DNA is not infallible. Rather, like other evidence, it is subject to human error, bias, and malfeasance, and has figured in several wrongful accusations and convictions. As Professor David Kaye notes in his recent book:
How probable is it that two, correctly identified DNA genotypes would be the same if they originated from two unrelated individuals? By definition, [such matches] do not consider any uncertainty about the origins of the samples (the chain-of-custody issue), about the relatedness of the individuals who left or contributed the samples (the identical-alleles-by-descent issue), or about the determination of the genotypes themselves (the laboratory-error issue).
Wayne A. Logan, Policing Identity, 92 B.U. L. Rev. 1561, 1580-89 (2012) (footnote numbers omitted).
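The random-match probability that the quoted passage isolates from chain-of-custody, kinship, and laboratory-error questions is a straightforward calculation under the "product rule": assume Hardy-Weinberg proportions at each locus and independence across loci. The sketch below is illustrative only; the allele frequencies are invented round numbers, not real population data.

```python
# Illustrative random-match-probability (RMP) calculation under the
# product rule. Frequencies are made-up numbers for illustration.

def genotype_frequency(p, q=None):
    """Expected single-locus genotype frequency under Hardy-Weinberg:
    p^2 for a homozygote, 2pq for a heterozygote."""
    if q is None:
        return p * p
    return 2 * p * q

def random_match_probability(locus_freqs):
    """Multiply single-locus genotype frequencies across independent loci."""
    rmp = 1.0
    for freqs in locus_freqs:
        rmp *= genotype_frequency(*freqs)
    return rmp

# A hypothetical three-locus profile: heterozygous at two loci,
# homozygous at one.
profile = [(0.1, 0.2), (0.05, 0.3), (0.15,)]
print(f"RMP = {random_match_probability(profile):.2e}")
```

For these invented frequencies the product is 0.04 × 0.03 × 0.0225, about 1 in 37,000; with a dozen or more real STR loci the product becomes vanishingly small, which is why the separate sources of error Kaye lists, not the genotype-frequency arithmetic, dominate debates over database reliability.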

Having searched without success for a single case in the U.S. of a false conviction based on DNA evidence from a database search,1 I was puzzled. Could I have missed several false convictions arising from erroneous DNA testing? Did these cases involve database trawls, where observer “bias” is not normally an issue?

Being a lawyer, I did what any reader of law review articles must do. I turned to the footnotes. The footnote on false convictions as a reason to constrain DNA databases reads as follows:
See Greg Hampikian et al., The Genetics of Innocence: Analysis of 194 DNA Exonerations, 12 Ann. Rev. Genomics & Hum. Genetics 97, 107 (2011) (mentioning existence of at least fifteen exonerations in which DNA resulted in conviction).
If Professor Logan (and the source-citation reviewers of the Boston University Law Review) are correct, Professor Hampikian discovered at least 15 cases of DNA evidence that resulted in false convictions. How could I have missed all these cases in my earlier postings?

The Genomics and Human Genetics review article plainly does not even begin to support the claim that DNA testing produced 15 false convictions. It merely states that among previously analyzed cases of postconviction exonerations, "there were at least 15 cases where DNA was tested prior to conviction." Hampikian et al., supra, at 107. Let's look at the outcomes of this DNA testing, as presented by Dr. Hampikian and his colleagues:
  • The cited article does not even discuss the outcome of the DNA tests in two of the 15 cases because there were no "transcripts or other accurate information on the DNA results available." Id. Counting two cases on which there is no information as showing that contemporary DNA databases produce false convictions is surprising.
  • "The majority of these cases included proper testimony, with DNA results that excluded the exoneree (9 of the 13 cases). These exclusions were explained away by the state in various ways—perhaps the defendant had an unknown codefendant, the DNA could have come from a consensual sex partner, etc." Id. Claiming that DNA databases should be constrained because most DNA typing accurately showed that a defendant was not the source of an incriminating sample is a blunder.
  • "In 5 of the 13 cases, DQ alpha tests included the exonerees as possible contributors. In 4 of these 5 cases, however, more discriminating tests performed postconviction excluded the exonerees. In the remaining case, a second round of DQ alpha testing exonerated the defendant after it was discovered that the original lab analysis was incorrect." Id. Before the DQA test was retired from forensic DNA testing, it was known to be relatively undiscriminating. See, e.g., Cecelia A. Crouse, Analysis of HLA DQ alpha Allele and Genotype Frequencies in Populations from Florida, 39 J. Forensic Sci. 731 (1994); NFSTC, DNA Analyst Training. Questioning databases stocked with CODIS profiles because a different, far less polymorphic locus has different properties is off target.
  • "There were four cases where improper DNA testimony was given at trial. In one, the analyst testified about a match based on DQ alpha testing; however, the analyst did not disclose that it was only a partial match. In another case, the analyst did not provide the proper statistic for the population included by the results of DQ alpha testing." To be sure, "improper" testimony is deplorable, but it is not clear that the analyst in the first case incorrectly stated the implications of the match or, more importantly for worries about databases, that analysts working with database matches would give incorrect estimates of genotype frequencies.
  • "In a third case, the analyst testified that the DNA matched the exoneree, but failed to disclose an additional exclusionary DNA result." Withholding exculpatory evidence of any sort—DNA, fingerprint, toolmark, eyewitness, or anything else—is unconscionable and unconstitutional. But it is not much of an argument against inclusive DNA databases.
  • “In the final case, the analyst misinterpreted the results of the testing (which was performed incorrectly—failing to separate the male and female DNA during differential extraction), falsely including the exoneree as a source of the DNA when in fact he should have been excluded.” Yes, if crime-scene DNA is mistyped, and if this error goes unnoticed, a database match could result.
Can DNA databases produce false convictions? Of course they can. Police can commit perjury about DNA evidence, just as they can about other evidence. If there were no databases, it might be slightly harder to fabricate such impressive evidence. DNA evidence, like all evidence, “is subject to human error, bias, and malfeasance.” So are law review articles. And so are blog postings—corrections are welcome.


1. David H. Kaye, Have DNA Databases Produced False Convictions?, Forensic Science, Statistics, and the Law, July 7, 2012 (cross-posted to The Double Helix Law Blog); David H. Kaye, Genetic Justice: Potential and Real, Forensic Science, Statistics, and the Law, June 5, 2011 (cross-posted to The Double Helix Law Blog).

Monday, December 17, 2012

The Department of Justice and the Definition of Junk DNA

In drafting an amicus brief in Maryland v. King, the case in which the Supreme Court is reviewing the constitutionality of routine collection of DNA before conviction, I decided it is important to clarify the term "junk DNA" if only because it gets tossed around in so many court opinions and briefs. The Department of Justice defines “junk DNA” as “[s]tretches of DNA that do not code for genes.” U.S. Dep’t of Justice, Nat’l Institute of Justice, DNA Initiative Training for Officers of the Court, Glossary, http://www.dna.gov/glossary/ (last visited Dec. 17, 2012). In scientific discourse, however, DNA does not “code for genes.” Rather, parts of genes encode proteins and RNAs. "Junk DNA" is not a synonym for the rest of the genome. It is a provocative and deprecated term for that "fraction of DNA that has little or no adaptive advantage for the organism." Sean R. Eddy, The C-value Paradox, Junk DNA and ENCODE, 22 Current Biology R898 (2012). Some of what NIJ thinks is "junk DNA" is important to fitness. It is not "junk."

NIJ's sloppy treatment of terms like "genes" and "junk" is unfortunate, but in the end I decided the awkward definition was not important enough to snipe at in the brief. On a blog, however, one can be more snippy.

Tuesday, December 11, 2012

Reconsidering the “Considered Analysis”: How Convincing Are the Cases Cited in the Stay Order in Maryland v. King?

For nearly a decade, DNA-on-arrest laws eluded scrutiny in the courts. For another five years, they withstood a gathering storm of constitutional challenges. In King v. State, 42 A.3d 549 (Md. 2012), however, the Maryland Court of Appeals reasoned that usually fingerprints provide everything police need to establish the true identity of an individual before trial and that the state's interest in finding the perpetrators of crimes by trawling databases of DNA profiles is too "generalized" to support "a warrantless, suspicionless search." The U.S. Supreme Court reacted forcefully. Even before the Court could consider issuing a writ of certiorari, Chief Justice Roberts stayed the Maryland judgment. His chambers opinion signaled that "given the considered analysis of courts on the other side of the split, there is a fair prospect that this Court will reverse the decision below."

Some thoughts on the lower court opinions and the issues the Supreme Court will confront are in press in the online Discourse section of the UCLA Law Review. The essay provides a more coherent, complete, and polished presentation than the scattered remarks in earlier postings on this blog. It briefly examines four sets of opinions—the early one from the Virginia Supreme Court in Anderson, the Third Circuit’s en banc opinions in Mitchell, the Ninth Circuit’s panel opinions in Haskell (vacated to make way for en banc review), and the Arizona Supreme Court’s opinion in Mario W. Building on these judicial efforts, the essay outlines the Fourth Amendment questions that a fully considered analysis must answer, identifies questionable treatments of “searches” and “seizures” in the four sets of opinions, and criticizes the creative compromise in Mario W. that allows sample collection but not DNA testing before conviction.

I do not think that there is much room for compromise on the constitutional question. Various opinions maintain (in dictum) that preconviction collection is acceptable after, but not before, an indictment or preliminary hearing. That's another compromise, of sorts, and the Maryland law (as the state has implemented it) postpones DNA collection until after a probable-cause-for-trial hearing. Thus, anything the Supreme Court will say in King on DNA collection as part of the booking procedure will be dictum. It seems to me, however, that once an individual is legitimately detained, either the Fourth Amendment permits the compulsory collection, analysis, and use of DNA—the whole ball of wax—as a biometric identifier for both authentication and criminal intelligence purposes or it does not.  Thus, I am betting that the Court will write a broad opinion upholding DNA database laws at all points after arrest.  But IMHO, it's a close question.

  • David H. Kaye, On the "Considered Analysis" of Collecting DNA Before Conviction, 60 UCLA L. Rev. Discourse (forthcoming 2013) (preprint)
  • David H. Kaye, Drawing Lines: Unrelated Probable Cause as a Prerequisite to Early DNA Collection, 91 N.C. L. Rev. Addendum 1 (2012) (preprint)

Wednesday, November 7, 2012

The Dictionary and the Database: Thoughts on State v. Emerson

Last week, the Supreme Court of Ohio held that, without seeking a new warrant, the state may use in a completely unrelated case information derived from a DNA sample acquired pursuant to a search warrant. This result is not novel—indeed, a contrary outcome would have departed from the law elsewhere.

Nevertheless, the opinion in State v. Emerson presents a new wrinkle. After Dajuan Emerson was acquitted of the 2005 rape of a 7-year-old girl, his DNA profile somehow resided in the state’s convicted-offender database. Then, in 2007, 37-year-old Marnie Macon was stabbed 74 times in her apartment. (Ludlow 2012). Police recovered blood from a door handle. The DNA profile from this crime-scene sample (often called a “forensic sample”) was run against the state database. It matched Emerson’s profile from 2005. After the trial court denied a motion to suppress this match, the case went to trial and the jury found Emerson guilty of aggravated murder (and tampering with evidence). An Ohio District Court of Appeals affirmed, and the state supreme court affirmed that judgment.

The obvious questions are why the 2005 profile entered the convicted-offender database and whether the Fourth Amendment’s exclusionary rule for unreasonable searches or seizures applies to the resulting cold hit. The Ohio Supreme Court’s analysis of these issues is a little odd. I shall quickly run through the opinion, indicating the oddities.

What is an allele?

The first peculiarity is ultimately of no moment, but I’ll mention it anyway because it shows the continuing inability of too many judges (or the recent law school graduates who are their clerks) to consult suitable scientific references. According to the opinion, “[a] DNA profile consists of a series of numbers that represent different alleles that are present at different locations on the DNA” and “[a]n allele is defined as ‘either of a pair of genes located at the same position on both members of a pair of chromosomes and conveying characters [sic] that are inherited in accordance with Mendelain [sic] law.’ Webster’s New World Dictionary, Third College Edition 36 (1988).”

The alleles used in modern DNA databases are not parts of genes. (Well, some of them are meaningless variations within introns, but even those do not “convey characters” as the classical definition from Webster’s would require.) Perhaps judges should not be criticized for thinking that the word “allele” always refers to genes. To denote variations in DNA sequences that are not the allelotypes of genes, forensic scientists themselves borrowed from the terminology for genes, inviting such confusion. (Kaye 2010). But there are many reasonably accurate explanations of forensic STR “alleles” in the legal and forensic science literature. Consequently, there is little excuse for using the inapt dictionary definition. Fortunately, this error does not affect anything else in the opinion.

How did Emerson’s DNA profile get into a CODIS database?

The justices evinced little concern about the statutory violation that led to the fateful match in the case. In fact, the unanimous opinion prominently denies that putting the profile of someone who was not convicted into the state and national databases (SDIS and NDIS) for future trawls departed from Ohio’s convicted-offender law.

The court reached this counter-intuitive result by relying on Black’s Law Dictionary:
Appellant is correct that R.C. 2901.07 does not support the inclusion of his profile in CODIS. However, the same cannot be said for R.C. 109.573. The superintendent of BCI is empowered to “establish and maintain a DNA database.” R.C. 109.573(B)(1)(b). “DNA database” is defined in part as “a collection of DNA records from forensic casework.” R.C. 109.573(A)(3). “Forensic” is defined as “[u]sed in or suitable to courts of law or public debate.” Black’s Law Dictionary 721 (9th Ed.2009). In this case, the police lawfully obtained the DNA sample in the course of the 2005 rape investigation. Therefore, the profile obtained from the sample is a record from forensic casework and is properly maintained in CODIS. Moreover, we note that neither R.C. 109.573 nor 2901.07 require that the state, on its own initiative, remove the DNA profile of a person who was acquitted at trial.
Again, the failure to consult relevant sources for the actual terminology in the field is a gross mistake. Ohio Revised Code § 109.573(A)(3) defines “DNA database” as
a collection of DNA records from forensic casework or from crime scenes, specimens from anonymous and unidentified sources, and records collected pursuant to sections 2152.74 and 2901.07 of the Revised Code and a population statistics database for determining the frequency of occurrence of characteristics in DNA records.
(This is the current version. I am assuming the words are the same as they were in 2007.) The “records collected” under the enumerated sections pertained to “adjudicated delinquents” and to convicted offenders—not to mere suspects. The phrase “forensic casework or crime-scene samples” refers to DNA of unknown origin—from vaginal swabs, clothing, property, etc. As the FBI explains, “the DNA data that may be maintained at NDIS [consists of profiles from] convicted offender, arrestees, legal, detainees, forensic (casework), unidentified human remains, missing persons and relatives of missing persons.” (FBI, undated). There is no authorized category for sundry individuals whose DNA profiles have become known to the police for miscellaneous reasons. Ohio did not take DNA samples from arrestees or detainees until 2011. Under the Emerson court’s peculiar reading of the statute, police in Ohio could use the “abandoned DNA” ploy to acquire a profile from a person even without a warrant and upload it to the state and national databases.

The court’s theory that the Ohio legislature used the phrase “forensic casework” to cover every sample and profile “[u]sed in or suitable to courts of law or public debate” is astonishing. A convicted-offender database system has one set of so-called “forensic” profiles (that could link perpetrators to crimes) and another set of convicted-offender profiles (whose sources might be found to be the perpetrators of the unsolved crimes). The “forensic” profiles come from the unknown perpetrators of the crimes. They can be matched, if possible, against the convicted offenders’ profiles (and among one another to identify serial crimes). Neither they nor the convicted-offender database was intended to house profiles from specific suspects who never were found guilty of a qualifying crime. Thus, the state had no convincing legal basis for uploading Emerson’s profile to SDIS and NDIS—and the court should not have approved of such misconduct.

Nonetheless, the statutory violation does not justify excluding the cold hit under the Fourth Amendment. The U.S. Supreme Court has not been kind to the exclusionary rule in recent years. As Emerson observes, it has held that a violation of a state statute does not make a search constitutionally unreasonable.

Did Emerson lack standing to complain of a Fourth Amendment violation?

The Emerson opinion contains a third error. The court holds “that a person does not have standing to object to the retention of his or her DNA profile or to the profile’s use in a subsequent criminal investigation.” This misrepresents the meaning of “standing.” In the Fourth Amendment context, the standing requirement bars “attempts to vicariously assert violations of the Fourth Amendment rights of others.” United States v. Salvucci, 448 U.S. 83, 86 (1980). Thus, in Salvucci, police searched an apartment rented by a defendant’s mother and found checks that her son had stolen from the mails. In his prosecution for possession of stolen mail, the son lacked standing to complain that the search violated the mother’s interest in the privacy of her apartment.

In Emerson, the defendant never argued that the cold hit violated someone else’s rights. He argued that it violated his right to be free from unreasonable searches because he had a legitimate expectation of privacy in his DNA profile retained by the state. He surely had standing to raise that claim, and the court’s references to “standing” are superfluous and confused.

Was the retention of the profile and the trawl of the database a search or seizure?

At last, we come to the dispositive issue in the case—was any Fourth Amendment interest of Emerson’s violated by the retention of his profile and the trawl of the database? The court held—correctly, I believe—that Emerson had no such interest. The state acquired the DNA sample in 2005 pursuant to a search warrant of unchallenged validity. Laboratory analysis of the sample was not a separate search, but the very reason for the search warrant. Simply keeping the identifying profile and looking to see whether it matched new profiles in the “forensic index,” as the FBI calls them, does not rise to the level of a new search. Once the government legitimately acquires information pursuant to a search warrant, it need not toss out and forget about that information if it cannot secure a conviction. In later investigations and prosecutions, it can use what it finds in the fully authorized and entirely legitimate search.
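The trawl at issue is, mechanically, nothing more than record matching: the profile from the crime scene is compared locus by locus against each retained profile. A minimal sketch follows; the locus names are real core CODIS loci, but the repeat numbers and the record identifiers are invented for illustration, and a real CODIS search involves 13 or more loci and must cope with partial profiles, mixtures, and moderate-stringency matches.

```python
# Minimal sketch of a database trawl. Each profile maps an STR locus
# name to an unordered pair of repeat numbers ("alleles"). The numbers
# and record IDs below are invented; this ignores partial profiles,
# mixtures, and stringency levels that real CODIS searches handle.

def normalize(profile):
    # Treat (15, 17) and (17, 15) as the same genotype.
    return {locus: frozenset(alleles) for locus, alleles in profile.items()}

def trawl(forensic_profile, offender_index):
    """Return IDs of offender profiles identical to the crime-scene
    ("forensic") profile at every locus."""
    target = normalize(forensic_profile)
    return [pid for pid, prof in offender_index.items()
            if normalize(prof) == target]

offender_index = {
    "record_2005": {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)},
    "record_b":    {"D3S1358": (16, 16), "vWA": (17, 18), "FGA": (20, 22)},
}
crime_scene = {"D3S1358": (17, 15), "vWA": (16, 14), "FGA": (24, 21)}
print(trawl(crime_scene, offender_index))  # ['record_2005']
```

The point of the sketch is that the trawl consults only the stored identifying numbers, not the biological sample—one reason courts treating the comparison as something less than a fresh search have found the step unobjectionable once the profile was lawfully acquired.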

Obviously, the situation would be otherwise if the original search were unreasonable. Then the evidence should be excluded to vindicate the defendant’s right to be free from unreasonable searches and seizures. But it would be worse than pointless to exclude, on constitutional grounds, legitimately acquired evidence of guilt. This is the sound core of the reasoning in Emerson. Whether the defendant was acquitted in the case that generated the search warrant, whether he was convicted then, or whether he never was prosecuted in that case makes no difference. There is no constitutional reason to exclude evidence from a reasonable search.

In Boroian v. Mueller, a case that Emerson overlooks, the U.S. Court of Appeals for the First Circuit held that trawls of a database may continue even after an offender has completed his sentence. Emerson extends the reasoning of Boroian to an individual whose DNA profile should not have been in the database in the first place. But because the objection in that respect is entirely statutory, it does not change the result.

Of course, one can question the conclusion that trawling a database is not a separate search, and some commentators as well as some recent opinions on the constitutionality of pre-conviction DNA sampling, analysis, and trawling have spoken of different steps in the process as if they were independent searches, each of constitutional magnitude. For reasons stated in Kaye (2011), however, I doubt that these claims are tenable. Despite the terminological and conceptual flaws in the opinion in Emerson, the Ohio Supreme Court reached the correct result.


United States v. Salvucci, 448 U.S. 83, 86 (1980)

Boroian v. Mueller, 616 F.3d 60 (1st Cir. 2010)

State v. Emerson, No. 2011-0486 (Ohio Nov. 1, 2012) (Slip Opinion No. 2012-Ohio-5047)

FBI, Frequently Asked Questions (FAQs) on the CODIS Program and the National DNA Index System, http://www.fbi.gov/about-us/lab/codis/codis-and-ndis-fact-sheet.

David H. Kaye, The Double Helix and the Law of Evidence (2010)

David H. Kaye, DNA Database Trawls and the Definition of a Search in Boroian v. Mueller, 97 Va. L. Rev. in Brief 41 (2011)

Randy Ludlow, Ohio Suspects' DNA Can Be Saved for Later Cases, Court Rules, Columbus Dispatch, Nov. 6, 2012

Cross-posted to The Double Helix Law Blog.

Sunday, November 4, 2012

Lies/Fibs, Damned Lies, and Experts/Statistics

Perhaps the most famous quotation about statistics is the most annoying—the one that Mark Twain mistakenly attributed to Benjamin Disraeli: “There are three kinds of lies: lies, damned lies, and statistics.” This tripartite classification of mendacity is quoted with great frequency (I won’t give a statistic) by writers criticizing some dubious statistic or other. For example, one self-styled “critical thinker” uncritically accepts the 19th-century British Prime Minister as the originator of the aphorism. [1]

According to Yale Law Librarian Fred Shapiro, “the first known use of the famous words ‘lies, damned lies, and statistics’ was quoted in the Leeds Mercury, June 29, 1892. The source was a speech by Arthur Balfour—yet another prime minister.” [2] But comparable words, often with “experts” in place of “statistics” appeared in print before then, and Balfour referred to it as “an old saying.” [3] The results of more sleuthing can be found on a webpage maintained at the University of York’s mathematics department’s website.


  1. Jim Baird, How Statistics Can Lie: Are You Impressed by Remarkable Claims in Product Ads? Here's Why You Might Want to Be Skeptical, http://turf.unl.edu/extpresentationspdf/BairdStats.pdf 
  2. Fred R. Shapiro, You Can Quote Them, Yale Alumni Mag., Sept.-Oct. 2012, at 56.
  3. Peter M Lee (?), Lies, Damned Lies and Statistics, July 19, 2012, http://www.york.ac.uk/depts/maths/histstat/lies.htm

Thursday, November 1, 2012

Who Is Nelson Acosta-Roque? (Part IV)

Despite a brief from 39 “Scientists and Scholars of Fingerprint Identification as Amici Curiae” questioning the telephonic testimony of a fingerprint analyst, the U.S. Court of Appeals for the Ninth Circuit recently allowed a deportation order based on that testimony to stand. The unpublished per curiam opinion frames the issue as "whether substantial evidence supported the BIA's [Board of Immigration Appeals'] finding 'by clear and convincing evidence' that Mr. Acosta-Roque and Mr. Pecheca-Aromboles [who was convicted in 1991 of delivery of a controlled substance in Pennsylvania] are the same person." The brief opinion concludes that "Mr. Acosta-Roque has not shown that 'no reasonable factfinder' would find that the government proved by clear and convincing evidence that he was a criminal alien under [8 U.S.C.] § 1182(a)(2)."

The reasoning sandwiched between these statements is brief:
[S]cientists and courts have regarded such evidence as reliable for upwards of a century. See United States v. Calderon-Segura, 512 F.3d 1104, 1108-09 (9th Cir. 2008). When, as here, the fingerprints “were exemplars taken under controlled circumstances and were complete, not fragmented,” fingerprint evidence is in fact highly reliable. Id. at 1109. Although the fingerprint examiner in this case may have been less than cautious in her testimony, the immigration judge and the BIA did not err in relying upon it, given the examiner’s experience and the fact that another technician corroborated the findings.
The examiner in the case was unclear about how complete her identification was. How can the court conclude that an examination "is in fact highly reliable" just because an examiner says that the prints match? In effect, the court applied a presumption of reliability to the testimony--or at least enough of a presumption to make a cursorily presented but unquestioned opinion "substantial."


Wednesday, October 31, 2012

Florida Trial Court Excludes the Opinion of a Latent Fingerprint Examiner — Maybe

Last week, Miami-Dade Circuit Court Judge Milton Hirsch issued an “order” stating that when the time comes in a burglary case, he will exercise his “common sense” to fulfill his “gatekeeping function” for scientific evidence under Florida law to prevent “excessive and unsupportable claims made by fingerprint examiners.” Order on Defendant's Motion in Limine, State v. Borrego, Nos. F12-101 & F12-7083, at 16 (Fla. Cir. Ct. Oct. 25, 2012) [cited as ODMIL]. One would not think that this promise would be horribly out of line.

Yet, the Miami Herald reported that outraged prosecutors now “vow to appeal” this “rare and controversial legal move.” David Ovalle, Miami-Dade Judge Rules Fingerprint Evidence Should Be Restricted, Miami Herald, Oct. 28, 2012. So what, exactly, is the shocking legal move here? Reviewing the three documents filed so far in the case, the judge's "order" looks more like a vague campaign pledge than a concrete judicial order amenable to interlocutory review.

The Defendant’s Motion

The defense asked for a specific ruling. It filed a pretrial motion for an order limiting the examiner’s testimony “to the similarities and dissimilarities he observed.” Defendant’s Motion in Limine, State v. Borrego, Nos. F12-101 & F12-7083 (Fla. Cir. Ct.) [cited as DMIL]. The ghost of U.S. District Court Judge Pollak’s perceptive (but then disowned) initial opinion in United States v. Llera Plaza rises just in time for Halloween. Andy Newman, Judge Rules Fingerprints Cannot Be Called a Match, N.Y. Times, Jan. 11, 2002.

In addition, the public defender asked for a series of “thou shalt nots.” One would think that these would have been superfluous had the court granted the first request. Nonetheless, the defense wanted an order prohibiting the analyst from speaking of a “match” or “identification,” from stating his “level of confidence in his own testimony,” and from revealing or suggesting that a second examiner verified the match. DMIL at 2.

The defense did not rely on the general acceptance standard for scientific evidence that Florida follows. It could have. The Florida Supreme Court has applied the standard in a manner that resembles the direct inquiry into scientific validity mandated for federal courts in Daubert v. Merrell Dow Pharmaceuticals, and even a long history of use in police laboratories is not conclusive proof of general acceptance when a broader cross-section of the scientific community expresses doubts. David H. Kaye et al., The New Wigmore on Evidence: Expert Evidence (2d ed. 2011).

Instead of raising this threshold objection, however, the defense contended that a good fingerprint examiner is no better than a juror in forming a categorical opinion on the basis of the similarities and differences between an exemplar and a latent print. DMIL at 2-3. This claim is highly problematic. Existing research may not be extensive, but it does support the view that trained examiners can outperform the laity. See Fingerprinting Error Rates Down Under, June 24, 2012.

Also implausibly, the defense argued that “match” necessarily means “absolute certainty” and somehow reached the conclusion that in recommending an end to testimony of “a source attribution to the exclusion of all others in the world,” DMIL at 4, the NIST report on latent fingerprinting supported the contention that no testimony about a “match” should be allowed. See Government-sponsored Report on Latent Fingerprint Work in Criminal Investigation and Prosecution, Feb. 18, 2012.

The defense advanced several other peculiar arguments. It maintained that the expert was unqualified to attribute a print to Borrego, even tentatively, because he lacked training or education in “population statistics or probabilities.” DMIL at 5. This qualifications argument has no force of its own. The real argument in this part of the public defender's memorandum is that fingerprint examiners do not follow the practice of DNA analysts of reporting probabilities “based on established scientific principles.” Id. at 6. A course in statistics and probability would not solve this problem. The gravamen of the complaint is not really the education of examiners. It is the practice of using personal judgment instead of a generally accepted statistical model.

Finally, the defense suggested, with no legal analysis, that “due process” and the “constitutional right to trial by jury, rather than trial by ‘expert’” justified “an order limiting Womack’s testimony to the parameters [sic] described herein.” Id. at 8. In light of the normal opportunity to challenge excessive or dubious claims before a jury, however, the Fifth and Sixth Amendments do not add much, if anything, to the evidentiary argument.

The Prosecutor’s Reply

Rather than respond to any of these overblown arguments, the prosecution filed the State’s Motion to Disqualify Judge, State v. Borrego, Nos. F12-101 & F12-7083 (Fla. Cir. Ct. Oct. 15, 2012) [SMDJ]. (With unintended humor, the Miami Herald’s website refers to this as a “motion to rescue Judge Hirsch from the case.”) The motion stated that Judge Hirsch, in another case, had suggested that the prosecutor read his writings on fingerprint evidence and then said he would recuse himself if the state moved for his disqualification. On this basis, the prosecutor wrote that she entertained “a reasonable belief that Judge Hirsch will not be fair and impartial in ruling on any motions on fingerprint testimony.” SMDJ at 2.

Judge Hirsch was not swayed. He summarily denied the request. His Order Denying Motion to Disqualify Judge, State v. Borrego, Nos. F12-101 & F12-7083 (Fla. Cir. Ct. Oct. 25, 2012), is unedifying. But then again, the state’s theory that a judge must or should disqualify himself because he has written something on a subject or has stated a willingness to recuse himself in another case seems flimsy. These statements do not mean that a judge is incapable of making a fair ruling. Still, the state’s argument, if raised on appeal, might gain more traction if (as the Miami Herald reported) “Hirsch issued his order Thursday before prosecutors could write their reply to Borrego’s defense request to restrict the testimony of the fingerprint expert.”

The Order on the Fingerprint Testimony

The Order on Defendant’s Motion in Limine stretches across 17 pages. The court prepared and issued this sprawling “order” without a reply memorandum from the state, and the document does not discuss the defendant’s argument about the education of the expert in probability and statistics. Instead, it surveys the history of science and scientific evidence, Florida’s adoption and application of the standard of general acceptance for such evidence, and a judge’s role in excluding evidence.

Then it turns to fingerprinting in literature and law. The court states that no problem arises when a fingerprint analyst merely displays the similarities and differences between two images. ODMIL at 13. This might be so if the examiner did not present himself as a forensic scientist, did not speak of the “science” of fingerprinting, did not refer to “the scientific method,” and insisted that he was present as little more than a photographer of images with no more skill than any juror to make the comparison. But even with all these fangs removed from the expert testimony, the admissibility of such testimony on ordinary relevance grounds is open to question. What intelligent use can jurors who know nothing about the variability of impressions of fingerprint features make of the images that are said to coincide sufficiently to be incriminating? Kaye et al., supra. Yet, the court seems to regard it as axiomatic that “[w]hen blow-ups, photographs, or other reproductions of fingerprints are admitted into evidence, the truth-seeking function of trials is advanced.” ODMIL at 14-15.

The Order castigates the most extreme testimony that fingerprint examiners once provided. Testimony that an identification can be made “to the exclusion of every other fingerprint in the history of the world” is “unsupportable.” Id. at 14. Testimony that “the error rate associated with their work, or with fingerprint examination in general, is zero” is “worse than wrong.” Id. Yet, the ensuing discussion of error rates from “human imperfections” is itself rather confused. The Order defines “error rate” as “false positives plus false negatives over total population.” ODMIL at 14. Whatever this means, it is not an “error rate” suitable for presentation to a jury. Kaye et al., supra. Moreover, contrary to the implication in the Order, it is not the “measurable error rate” used in “DNA analysis, the gold standard in forensic evidence.” Id. at 14. Like fingerprint examiners, DNA analysts do not normally present a rate of errors from “human imperfections,” and few of them would claim that human errors are impossible or never occur.
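One reason a pooled “false positives plus false negatives over total population” figure is unsuitable is that it depends on the mix of true and false pairs in whatever sample happens to be examined, whereas the two conditional error probabilities do not. A minimal sketch in Python, using purely hypothetical sensitivity and specificity values (nothing in the Order supplies real ones), illustrates the point:

```python
# Hypothetical illustration: a test with FIXED conditional error
# probabilities yields very different pooled "error rates" depending
# on how many true matches vs. non-matches are in the sample tested.

def pooled_error_rate(sens, spec, n_match, n_nonmatch):
    """(false negatives + false positives) / total population."""
    false_neg = (1 - sens) * n_match      # missed true matches
    false_pos = (1 - spec) * n_nonmatch   # false identifications
    return (false_neg + false_pos) / (n_match + n_nonmatch)

SENS, SPEC = 0.99, 0.90  # assumed, purely illustrative values

# Sample dominated by true matches:
print(round(pooled_error_rate(SENS, SPEC, 900, 100), 3))  # 0.019
# Sample dominated by non-matches:
print(round(pooled_error_rate(SENS, SPEC, 100, 900), 3))  # 0.091
```

The same hypothetical test produces a pooled “error rate” nearly five times larger in the second sample, even though its sensitivity and specificity never changed; that is why the pooled figure tells a jury little about the case at hand.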

Thus, only this much of the Order is clear: The expert may not state that there is no chance at all that someone other than Radames Borrego left the fingerprints, that the association between Mr. Borrego and the latent prints is absolutely certain, or that mistakes are impossible. But beyond these limitations—which should be part of the profession’s own standards anyway—the Order is incredibly murky. It states that
a trial court must ... protect the integrity of the truth-seeking function from pollution and misdirection due to excessive and unsupportable claims made by fingerprint examiners. For a fingerprint witness to testify, "I direct the jury's attention to the arch appearing here, and the loop appearing here" is one thing; for a fingerprint witness to testify, "I have concluded that this fingerprint matches that of the defendant to the exclusion of all other fingerprints in the history of the world" is a very different thing. And in between these two very different things lie a thousand nuances and gradations of testimony. The trial judge must apply Frye, and Ramirez [a Florida Supreme Court case excluding testimony not generally accepted among toolmark analysts about the marks on cartilage], and his gatekeeping function, and his common sense, to each one of them when and as they are offered in evidence. [¶] And that is exactly what I intend to do at the trial of the case at bar.
Id. at 16.

The Bottom Line

So what do the 17 pages mean for the “thousand nuances and gradations” of testimony that are the subject of allusion rather than analysis? Would a description of relevant features followed by a qualitative statement of likelihoods be acceptable? Cf. Going South with Shoeprint Testimony, July 14, 2012. How about a statement that Mr. Borrego cannot be excluded as the source, although most people could be? Would the judge's "common sense" allow the observation that the prints in question are far more consistent with each other than randomly selected ones? May the examiner opine that the prints “match” as that term is used in the field, while adding that this match does not mean no one else in the world also might have a finger that would produce a matching image? Cf. Who Is Nelson Acosta-Roque? (Part III).

The operative part of the Order is not helpful here. It reads, “Defendant's motion in limine is respectfully GRANTED only to the extent of the foregoing order.” ODMIL at 16. It is fine to enliven opinions with poetry, as this one does, but some precision in evidentiary rulings would be more useful to the parties.

Acknowledgments: Thanks to Professor Joelle Moreno for calling the case to my attention.

Wednesday, October 17, 2012

More on Semrau: The Other Daubert Factors

In United States v. Semrau, the U.S. Court of Appeals for the Sixth Circuit upheld the exclusion of a defendant’s “unilateral” fMRI testing for conscious deception. Previously, I focused on the court’s discussion of error rates. Known error rates implicate admissibility under both Federal Rule of Evidence 702 and Federal Rule of Evidence 403. (Rule 702 is the locus of the scientific validity standard adopted in Daubert v. Merrell Dow Pharmaceuticals, and Rule 403 states the common law, ad hoc balancing test for virtually all evidence.) I do not think the opinion is as clear as it could have been on which error rate pertained to what. Nevertheless, it is encouraging that the court recognized that two parameters are necessary to describe the accuracy of a procedure that classifies items or people into two categories (liar or truth teller).

But Daubert's list of factors extends beyond error rates, and the Semrau court’s handling of the other Daubert subissues also merits a mixed review. First, the court suggested that fMRI lie detection satisfied Daubert’s criteria for testing and peer review. It referred to “several factors in Dr. Semrau's favor,” namely:
“[T]he underlying theories behind fMRI-based lie detection are capable of being tested, and at least in the laboratory setting, have been subjected to some level of testing. It also appears that the theories have been subjected to some peer review and publication.” Semrau, 2010 WL 6845092, at *10. The Government does not appear to challenge these findings, although it does point out that the bulk of the research supporting fMRI research has come from Dr. Laken himself.
The suggestion that these factors favor the defendant treats Daubert’s references to testing, peer review, and publication rather superficially. That a scientific theory is “capable of being tested” tells us almost nothing about the validity of the theory. The theory that in the year 2075, the moon will turn into a blob of green cheese is capable of being tested, but that does not help validate it today. Likewise, the mere existence of peer reviewed publications means nothing without examining the content of the publications and the reactions to them in the scientific literature.

The court came closer to addressing the true Daubert issue of whether peer reviewed publications, considered as a whole, validate a technique or theory when it responded to the defendant’s argument that the district court was overly concerned with the realism of validity studies. In that context, the court of appeals quoted the caveat in one fMRI study that:
This study has several factors that must be considered for adequate interpretation of the results. Although this study attempted to approximate a scenario that was closer to a real-world situation than prior fMRI detection studies, it still did not equal the level of jeopardy that exists in real-world testing. The reality of a research setting involves balancing ethical concerns, the need to know accurately the participant's truth and deception, and producing realistic scenarios that have adequate jeopardy.... Future studies will need to be performed involving these populations.
But even this mention of the content of one study does not explain why the experiments are inadequate to demonstrate validity. Why would it be harder to detect a lie that has grave consequences to the subject of the laboratory experiment or field study than one that has more trivial consequences?

Second, the court of appeals wrote that the “controlling standards factor” had not been satisfied because “[w]hile it is unclear from the testimony what the error rates are or how valid they may be in the laboratory setting, there are no known error rates for fMRI-based lie detection outside the laboratory setting, i.e., in the ‘real-world’ or ‘real-life’ setting.” But what does the realism of laboratory experiments have to do with the existence of a clear protocol for gathering and interpreting data? Naturally, if a test is not standardized, it is hard to ascertain its error rate—a point that has been prominent in debates over fingerprinting. And, if the tester departs slightly from the standard test protocol, the probative value of the test should be questioned under Rule 403. But the issue of external validity should not be confused with the issue of whether standards are in place for administering a test.

Finally, the court implied that without realistic field testing, there could be no general scientific acceptance of a method of lie detection in the forensic setting. This may be true, but all empirical studies pertain to particular times, places, and subjects. Deciding what generalizations are reasonable or generally accepted depends on understanding the phenomena in question. Can laboratory experiments alone show that certain factors tend to affect the accuracy of eyewitness identifications? For years, many experimental psychologists seemed willing to accept forensic applications of laboratory results that lacked complete realism. The ability of fingerprint examiners to match true pairs of prints and exclude false pairs of prints can be demonstrated in laboratory studies with artificially created pairs. Applying the error rates from such laboratory experiments to the actual forensic setting could well be hazardous, but the experiments still prove that there is information that analysts can use to make valid judgments. In that situation, it is doubtful that raising the stakes of a judgment will render the technique invalid.

The Semrau court does not pinpoint the source of its discomfort with pure laboratory experiments. As we have just seen, a court should not assume that laboratory experiments never can establish validity of a technique as applied to casework. However, in the case of fingerprint identification, it seems clear enough that the prints do not change depending on whether they are deposited in the course of a crime or produced at another location. The fMRI data might well be different when generated under fully realistic circumstances. As a result, proving that there is detectable brain activity specific to conscious deception under low stakes conditions might not establish that the same pattern arises under high stakes conditions. Without a generally accepted theory of underlying mechanisms to justify extrapolations to the usual conditions of casework, low stakes laboratory findings may not suffice to show general acceptance of validity under those conditions.

In sum, Semrau should not be read as establishing that the existence of testability carries any significant weight in favor of admission, that publication in a peer reviewed journal necessarily demonstrates validity, that a lack of complete realism in laboratory studies proves that there are no “controlling standards” in practice, or that only field studies can establish general acceptance.

These concerns about the wording of the opinion notwithstanding, the problem of generalizing from the laboratory studies to the conditions of the Semrau case is substantial, and the court’s conclusion is difficult to dispute.

Wednesday, September 26, 2012

True Lies: fMRI Evidence in United States v. Semrau

This month, the U.S. Court of Appeals for the Sixth Circuit issued an opinion on “a matter of first impression in any jurisdiction.” The case is United States v. Semrau, No. 11-5396, 2012 WL 3871357 (6th Cir. Sept. 7, 2012). Its subject is the admissibility of the latest twist, the ne plus ultra, in lie detection—functional magnetic resonance imaging (fMRI).

In several ways, the case resembles what may well be the single most cited case on scientific evidence—namely, Frye v. United States, 293 F. 1013 (D.C. Cir. 1923). Frye instituted a special test for admitting scientific evidence. In Frye, a defense lawyer asked a psychologist, Dr. William Moulton Marston, who had developed and published studies of a systolic blood pressure test for conscious deception, to examine a young man accused of murdering a prominent physician. Dr. Marston came to Washington and was prepared to testify that the accused was truthful in retracting his confession to the murder. The trial court would not hear of it. The jury convicted. The defendant appealed. In a short opinion pregnant with implications, the Court of Appeals for the District of Columbia affirmed the exclusion of the expert’s opinion that the defendant was not lying to him.

In United States v. Semrau, defense counsel invited Dr. Steven Laken to examine the owner and CEO of two firms accused of criminal fraud in billing Medicare and Medicaid for psychiatric services that the firm supplied in nursing homes. Like Marston, Dr. Laken had invented and published on an impressive method of lie detection. Following three sessions with the defendant, Dr. Laken concluded that the accused “was generally truthful as to all of his answers collectively.” As in Frye, the district court excluded such testimony. As in Frye, a jury convicted. As in Frye, the defendant appealed. As in Frye, the court of appeals affirmed.

Dr. Marston held degrees from Harvard in law and in psychology. He worked hard to develop and popularize psychological theories (and he created the comic book character, Wonder Woman). Like Marston, Dr. Laken is highly creative, productive, and enterprising. Dr. Laken started his scientific career in genetics and cellular and molecular medicine. He achieved early fame for discovering a genetic marker and developing a screening test for an elevated risk of a form of colon cancer. For that accomplishment, MIT’s Technology Review recognized him as one of the most important 35 innovators under the age of 35 and noted that “Laken believes his methods could spot virtually any illness with a genetic component, from asthma to heart disease.” I do not know if that happened. After four years as Director of Business Development and Intellectual Asset Management at Exact Sciences, a “molecular diagnostics company focused on colorectal cancer,” Laken left genetic science to found Cephos, “the world-class leader in providing fMRI lie detection, and in bringing fMRI technology to commercialization.”1/

Despite these parallels, Laken is not Marston, and Semrau is not Frye. For one thing, in Frye, the trial judge excluded the evidence without an explanation. In Semrau, the trial judge had a magistrate conduct a two-day hearing. Two highly qualified experts called by the government challenged the validity of Dr. Laken’s theories, and the magistrate judge wrote a 43-page report recommending exclusion of the fMRI testimony from the trial.2/

Furthermore, in Frye, there was no previous body of law imposing a demanding standard on the proponents of scientific evidence—the Frye court created from whole cloth the influential “general acceptance” test.3/ In Semrau, the court began with the Federal Rules of Evidence, ornately embroidered with the Supreme Court's opinions in Daubert v. Merrell Dow Pharmaceuticals and two related cases and with innumerable lower court opinions applying the Daubert trilogy. This legal tapestry requires a showing of “reliability” rather than “general acceptance,” and it usually involves attending to four or five factors relating to scientific validity enumerated in Daubert.3/ I want to look briefly at a few of these in the context of Semrau.

* * *

Even though the only judges to address fMRI-based lie detection (those in Semrau) have deemed it inadmissible under both the Daubert standard and the Frye criterion of general acceptance, Cephos continues to advise potential clients that “[t]he minimum requirements for admissibility of scientific evidence under the U.S. Supreme Court ruling Daubert v. Merrell Dow Pharmaceuticals, are likely met.” One can only wonder whether its “legal advisors,” such as Dr. Henry Lee (see note 1), are comfortable with Cephos’s reasoning that
According to a PubMed search, using the keywords “fMRI” or “functional magnetic resonance imaging” yields over 15,000 fMRI publications. Therefore, the technique from which the conclusions are drawn is undoubtedly generally accepted.
The reasoning is peculiar, or at least incomplete. The sphygmomanometer that Dr. Marston used also was “undoubtedly generally accepted.” This pressure meter was invented in 1881, improved in 1896, and modernized in 1901, when Harvey Cushing popularized the device in the medical community. However, the acknowledged ability to measure systolic blood pressure reliably and accurately does not validate the theory—which predated Marston—that blood pressure is a reliable and valid indicator of conscious deception. Likewise, the number of publications about fMRI in general—and even particular evidence that it is a wonderful instrument with which to measure blood oxygenation levels in parts of the brain—reveals very little about the validity of the theory that these levels are well correlated with conscious deception. To be sure, there is more research on this association than there was on the blood pressure theory in Frye, but the Semrau courts were not overly impressed with the applicability of the experimentation to the examination conducted in the case before them.4/

* * *

In addition to directing attention to general acceptance, Daubert v. Merrell Dow Pharmaceuticals identifies “the known or potential rate of error in using a particular scientific technique” as a factor to consider in determining “evidentiary reliability.” The Daubert Court took this factor from circuit court cases involving polygraphy and “voiceprints.” Unfortunately, the ascertainment of meaningful error rates has long confused the courts,5/ and the statistics in Semrau are not presented as clearly as one might hope.

According to Cephos, “[p]eer review results support high accuracy,” but this short statement begs vital questions. Accuracy under what conditions? How “high” is it? Higher for diagnoses of conscious deception than for diagnoses of truthfulness, or vice versa? The court of appeals began its description of Semrau’s evidence on this score as follows:
Based on these studies, as well as studies conducted by other researchers, Dr. Laken and his colleagues determined the regions of the brain most consistently activated by deception and claimed in several peer-reviewed articles that by analyzing a subject's brain activity, they were able to identify deception with a high level of accuracy. During direct examination at the Daubert hearing, Dr. Laken reported these studies found accuracy rates between eighty-six percent and ninety-seven percent. During cross-examination, however, Dr. Laken conceded that his 2009 “Mock Sabotage Crime” study produced an “unexpected” accuracy decrease to a rate of seventy-one percent. ...
But precisely what do these “accuracy rates” measure? By “identify deception,” does the court mean that 71%, 86%, and 97% are the proportions of subjects who were diagnosed as deceptive out of those whom the experimenters asked to lie? If we denote a diagnosis of deception as a “positive” finding (like testing positive for a disease), then such numbers are observed values for the sensitivity of the test. They indicate the probability that given a lie, the fMRI test will detect it—in symbols, P(diagnose liar | liar), where “|” means “given.” The corresponding conditional error probability is the false negative probability P(diagnose truthful | liar) = 1 – sensitivity. It is the probability of missing the act of lying when there is a lie.

So far so good. But it takes two probabilities to characterize the accuracy of a diagnostic test. The other conditional probability is known as specificity. Specificity is the probability of a negative result when the condition is not present. In symbols that apply here, the specificity is P(diagnose truthful | truthful). Its complement, 1 – specificity, is the false positive, or false alarm, probability, P(diagnose liar | truthful). That is, the false alarm probability is the probability of diagnosing the condition as present (the subject is lying) when it is absent (the subject actually is not lying). What might the specificity be? According to the court,
Dr. Laken testified that fMRI lie detection has “a huge false positive problem” in which people who are telling the truth are deemed to be lying around sixty to seventy percent of the time. One 2009 study was able to identify a “truth teller as a truth teller” just six percent of the time, meaning that about “nineteen out of twenty people that were telling the truth we would call liars.” . . .
Why was this not a problem for Dr. Laken in this case? Well, the fact that the technique has a high false positive error probability (that it classifies most truthful subjects as liars) does not mean that it also has a high false negative probability (that it classifies most lying subjects as truthful). Dr. Laken conceded that the false positive probability, P(diagnose liar | truthful), is large (around 0.65, from the paragraph quoted immediately above). Indeed, the reference to identifying a truth teller as a truth teller only 6% of the time (the technique’s specificity in that study) corresponds to a false positive probability of 1 – 0.06 = 0.94. The average figure for this false alarm probability, according to Dr. Laken’s statements in the preceding quoted paragraph, is lower, but it is still a whopping 0.65. Nevertheless, if the phrase “accuracy rates” in the first quoted paragraph refers to sensitivity, then the estimates of sensitivity that he provided are respectable. The average of 0.71, 0.86, and 0.97 is 0.85.
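These definitions can be made concrete with a toy calculation. The counts below are hypothetical, chosen only to reproduce the ballpark rates discussed in the text (a test that flags about 85% of liars but also about 65% of truth tellers); they are not data from Dr. Laken's studies:

```python
# Hypothetical confusion matrix for a lie-detection experiment.
# The counts are invented to match the ballpark rates in the text;
# they are not data from any actual fMRI study.
liars_called_liar = 85         # true positives
liars_called_truthful = 15     # false negatives (missed lies)
truthful_called_liar = 65      # false positives (false alarms)
truthful_called_truthful = 35  # true negatives

sensitivity = liars_called_liar / (liars_called_liar + liars_called_truthful)
specificity = truthful_called_truthful / (truthful_called_truthful + truthful_called_liar)
false_negative_rate = 1 - sensitivity  # P(diagnose truthful | liar)
false_positive_rate = 1 - specificity  # P(diagnose liar | truthful)

print(f"sensitivity  P(diagnose liar | liar):         {sensitivity:.2f}")
print(f"specificity  P(diagnose truthful | truthful): {specificity:.2f}")
print(f"false alarm  P(diagnose liar | truthful):     {false_positive_rate:.2f}")
```

The point of the sketch is simply that sensitivity and specificity are computed from different rows of the table, so a test can score well on one and miserably on the other.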

What do these numbers prove? One answer is that they apply only under the conditions of the experiments and only to subjects of the type tested in these experiments. The opinions take this strict view of the data, pointing out that the experimental subjects were younger than Semrau and that they faced low penalties for lying. Indeed, the court explained that
Dr. Peter Imrey, a statistician, testified: “There are no quantifiable error rates that are usable in this context. The error rates [Dr. Laken] proposed are based on almost no data, and under circumstances [that] do not apply to the real world [or] to the examinations of Dr. Semrau.”
These remarks go largely to the Daubert question. If the experiments are of little value in estimating an error rate in populations that would be encountered in practice, then the validity of the technique is difficult to gauge, and Cephos’s assurance that this factor weighs in favor of admissibility is vacuous. If there is no way to estimate the conditional error probability for the examination of Semrau, then it is hard to conclude that the test has been validated for its use in the case.

* * *

Fair enough, but I want to go beyond this easy answer. Psychologists often are willing to generalize from laboratory conditions to the real world and from young subjects (usually psychology students) to members of the general public. So let us indulge, at least arguendo, the heroic assumption that the ballpark figures for the specificity and the false alarm probability apply to defendants asserting innocence in cases like Semrau. On this assumption, how useful is the test?

Judging from the experiments as described in the court of appeals opinion, if Semrau lies in denying any intent to defraud, there is roughly a 0.85 probability of detecting the lie, and if he is truthful, there is maybe a 0.65 probability of misdiagnosing him as deceptive. So a diagnosis of deception is not much more probable when a subject is lying than when he is truthful, and such a diagnosis has little probative value. (The likelihood ratio is 0.85/0.65 = 1.3.) By the same token, the diagnosis of truthfulness that Semrau actually received is only somewhat more probable when he is truthful than when he is lying (0.35 versus 0.15, a likelihood ratio of about 2.3), hardly a compelling showing.
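In likelihood-ratio terms, the point can be sketched in a few lines. The 0.85 and 0.65 figures are the ballpark conditional probabilities quoted in the text; the 1:1 prior odds are purely an illustrative assumption, not a fact about the case:

```python
# Probative value of a diagnosis expressed as a likelihood ratio,
# using the two ballpark conditional probabilities from the text.
p_diagnosis_if_h1 = 0.85  # probability of the diagnosis under one hypothesis
p_diagnosis_if_h2 = 0.65  # probability of the same diagnosis under the other

likelihood_ratio = p_diagnosis_if_h1 / p_diagnosis_if_h2

prior_odds = 1.0  # assumed 1:1 prior odds, for illustration only
posterior_odds = prior_odds * likelihood_ratio
posterior_probability = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio:      {likelihood_ratio:.2f}")
print(f"posterior probability: {posterior_probability:.2f}")
```

A ratio near 1 barely moves the odds no matter what the prior is: starting from even odds, the posterior probability ends up only slightly above one half.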

That a diagnosis of deception is almost as probable for truthful subjects as for mendacious ones bears mightily on the Rule 403 balancing of prejudice against probative value. The court held that this balancing justified exclusion of Dr. Laken’s testimony, largely for reasons that I won’t go into.6/ It referred to questions about “reliability” in general, but it did not use the error probabilities to shed a more focused light on the probative value of the evidence.

However, it seems from the opinion that Dr. Laken offered at least one probability to show that his diagnosis was correct. The court noted that
Dr. Imrey also stated that the false positive accuracy data reported by Dr. Laken does not “justify the claim that somebody giving a positive test result ... [h]as a six percent chance of being a true liar. That simply is mathematically, statistically and scientifically incorrect.”
It is hard to understand what the “six percent chance” for “somebody giving a positive test result” had to do with the negative diagnosis (not lying) for Semrau. A jury provided with negative fMRI evidence (“He was not lying”) must decide whether the result is a true negative or a false negative—not what might have happened had there been a positive diagnosis.

As for the 6% solution, it is impossible to know from the opinion how Dr. Laken arrived at such a number for the probability that a subject is lying given a positive diagnosis. The conditional probabilities from the experiments run in the opposite direction. They address the probability of evidence (a diagnosis) given an unknown state of the world (a liar or a truthful subject). If Dr. Laken really opined on the probability of the state of the world (a liar) given the fMRI signals, then he either was naively transposing a conditional probability—a no-no discussed many times in this blog—or he was using Bayes’ rule. In light of Dr. Imrey’s impeccable credentials as a biostatistician and his unqualified dismissal of the number as “mathematically, statistically and scientifically incorrect,” I would not bet on the latter explanation.
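A short sketch shows why the two directions of conditioning come apart. The error rates are the ballpark figures from the text; the priors are arbitrary illustrative assumptions, since nothing in the opinion supplies one:

```python
# Why P(liar | positive diagnosis) cannot simply be read off from
# P(positive diagnosis | liar): Bayes' rule requires a prior.
sensitivity = 0.85  # P(diagnose liar | liar), ballpark figure from the text
false_alarm = 0.65  # P(diagnose liar | truthful), ballpark figure from the text

def p_liar_given_positive(prior_liar):
    """Posterior probability of lying after a positive (deception) diagnosis."""
    p_positive = sensitivity * prior_liar + false_alarm * (1 - prior_liar)
    return sensitivity * prior_liar / p_positive

# The very same test result yields very different posteriors
# depending on the assumed prior probability of lying:
for prior in (0.1, 0.5, 0.9):
    print(f"prior {prior:.1f} -> posterior {p_liar_given_positive(prior):.2f}")
```

Transposing the conditional treats P(liar | positive) as if it equaled P(positive | liar) = 0.85, but as the loop shows, the posterior ranges widely with the prior and never equals the sensitivity here.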


1. If the firm’s website is any indication, it is not an equivalent leader in good grammar. Apparently seeking the attention of wayward lawyers, it advertises that “[i]f you or your client professes their innocence, we may provide pro bono consulting.” The website also offers intriguing reasons to believe in the company’s prowess: it is “represented by one of the top ten intellectual property law firms”; it has “been asked to present to the ... Sandra Day O’Connor Federal Courthouse”; and its legal advisors include Dr. Henry C. Lee (whose website includes “recent sightings of Dr. Lee”). In addition to its lie-detection work, Cephos offers DNA testing, so perhaps I should not say that Dr. Laken has withdrawn entirely from genetic science.

2. The court of appeals buttressed its approval of the report with the observation that “Professor Owen Jones, who observed the hearing” and is on the faculties of law and biology at Vanderbilt University, stated in an interview with Wired that the report was “carefully done.”

3. For elaboration, see David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence—Expert Evidence (2d ed. 2011) http://www.aspenpublishers.com/product.asp?catalog_name=Aspen&product_id=0735593531

4. For a short discussion of validity in this context, see Francis X. Shen & Owen D. Jones, Brain Scans as Evidence: Truths, Proofs, Lies, and Lessons, 62 Mercer L. Rev. 861 (2011).

5. See David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence—Expert Evidence (2d ed. 2011).

6. The court of appeals wrote that “the district court did not abuse its discretion in excluding the fMRI evidence pursuant to Rule 403 in light of (1) the questions surrounding the reliability of fMRI lie detection tests in general and as performed on Dr. Semrau, (2) the failure to give the prosecution an opportunity to participate in the testing, and (3) the test result's inability to corroborate Dr. Semrau's answers as to the particular offenses for which he was charged.”

Thursday, September 20, 2012

Dear Judges: A Letter from the Electronic Frontier Foundation to the Ninth Circuit

On the eve of the en banc oral argument in Haskell v. Harris, the Electronic Frontier Foundation (EFF) filed a letter asking "the Court to consider the ENCODE project findings in determining the outcome of this case." It seems hard to oppose the idea that the court should consider relevant scientific research, but without input from the scientific community, will the judges do better than they have in the past as "amateur scientists" (to use the skeptical phrase of Chief Justice Rehnquist in Daubert v. Merrell Dow Pharmaceuticals, Inc.)?

Deciphering the ENCODE papers' descriptions of the data is no easy task, and EFF's lawyers do not seem to be up to it. Their letter asserts that the project "has determined that more than 80% of DNA once thought to be no more than 'junk' has at least one biochemical function, controlling how our cells, tissue and organs behave." This is not a fair characterization of the findings. Which geneticist ever claimed that all noncoding DNA plays no role in how cells behave? The issue always has been how much is junk, how much is functional, and what those "functions" are.

What does EFF mean by "controlling"? Making organs function? Stimulating tissue growth? Turning normal cells into cancerous ones? Making us tall or short, fat or skinny, gay or straight? None of those things are mentioned in the Nature cover story cited in the letter. Instead, the EFF relies on New York Times reporter Gina Kolata's misleading news article for EFF's claim that "The ENCODE project has determined that 'junk' DNA plays a critical role in determining a person’s susceptibility to disease and physical traits like height."

My earlier postings described the limited meaning of the phrase "biochemical function" in the cited paper. I'd love to see a citation to a page of an ENCODE paper that asserts that fully 80% of the noncoding DNA is determining "susceptibility to disease and physical traits like height." And if I were a judge, I would demand an explanation of why "physical traits like height" are, in the words of the EFF letter, "sensitive and private."

After the judges consider the ENCODE papers (by having their law clerks read them?), will they be better informed about the actual privacy implications of the CODIS loci than they were before this excursion into the realm of bioinformatics? I would not bet on it, but maybe I am growing cynical.

Wednesday, September 19, 2012

On the "clear" outcome of "established" law

Today's New York Times included an editorial (California and the Fourth Amendment) on Haskell v. Harris, the challenge to the California Proposition requiring DNA sampling on arrest. En banc oral argument takes place today. The following is a letter I sent to the Times editor. I expect somewhere between 0 and 50 percent of it to be published there (point estimate = 0):

Dear Editor,

Your editorial (September 19) asserts that the constitutionality of taking DNA on arrest “should be clear” given “established rights against unreasonable search and seizure.” Yet, over vigorous dissents, federal courts of appeals have ruled otherwise—twice in panels of the Ninth Circuit and once in the Third Circuit.

Whether acquiring purely biometric data from arrestees necessitates a warrant is doubtful, and whether acquiring DNA data is “unreasonable” is a close question. The physical invasion of personal security is minor when the individual is already in custody and the sampling is only marginally more intrusive than fingerprinting. The medical information content of the identification profile is (given current knowledge) only slightly more significant than that of a fingerprint. Very few false convictions arising from DNA database searches have been documented. (One in Australia has been reported.)

Contrary to the suggestion in the editorial, what divided the judges in the Ninth Circuit was not whether “the law’s real purpose was investigation.” No one doubted that. The dissenting judge believed that the Supreme Court already had decided that “fingerprints may not be taken from an arrestee solely for an investigative purpose, absent a warrant or reasonable suspicion that the fingerprints would help solve the crime for which he was taken into custody.” What the Court actually held was “that transportation to and investigative detention at the station house without probable cause or judicial authorization together violate the Fourth Amendment.” The dissenting judge also worried, among other things, that “it is possible that ... at some future time,” an identification profile might permit strong inferences about the diseases an arrestee has or might develop.

I do not claim that arrestee DNA sampling clearly is constitutional. There are a number of valid concerns about indefinite sample retention and other matters. Neither do I maintain that its benefits (which are not well quantified) plainly outweigh its costs and its impact on legitimate interests in personal privacy and security. But assertions that the balance is “clear” and that the “established” law dictates the result oversimplify a delicate constitutional question.

Tuesday, September 18, 2012

ENCODE’S “Functional Elements” and the CODIS Loci (Part II. Alice in Genomeland)

Yesterday, I introduced the concepts and terms required to ascertain whether the estimated proportion of the genome that encodes the structure of proteins or regulates gene expression has jumped from 5 or 10% to 80%. Today, I shall focus on the possible meanings of "functional" to show that this is not what the ENCODE papers state or imply. “Functional” is an adjective, and Alice learned from Humpty Dumpty that adjectives are malleable:
"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master—that's all."
Alice was too much puzzled to say anything, so after a minute Humpty Dumpty began again. "They've a temper, some of them—particularly verbs, they're the proudest—adjectives you can do anything with, but not verbs—however, I can manage the whole lot! Impenetrability! That's what I say!"
Like Humpty, who was redefining the word “glory,” the ENCODE authors recognized that “functional” can have many meanings. As Ewan Birney later explained:
Like many English language words, “functional” is a very useful but context-dependent word. Does a “functional element” in the genome mean something that changes a biochemical property of the cell (i.e., if the sequence was not here, the biochemistry would be different) or is it something that changes a phenotypically observable trait that affects the whole organism?1/
Still other possibilities exist. For example, the first paper to use the adjective “junk” for noncoding DNA noted that even debris accumulated in the course of evolution or introduced from viral infections could have a function simply by creating spaces between genes.2/ The pieces of dead wood that are joined together to form the hull of a row boat have a function—they exclude the water from the vessel to keep it afloat. This does not mean that the detailed structure of the planks—the precise width of each plank or the number of ridges on its surface—affects its functionality. And, just as something can be inactive and functional, so too something can be alive with activity and yet be nonfunctional.

ENCODE uses biochemical activity—the notion that “the biochemistry would be different”—as a synonym for functional. Here is the definition of “functional” in the top-level paper:
Operationally, we define a functional element as a discrete genome segment that encodes a defined product (for example, protein or non-coding RNA) or displays a reproducible biochemical signature (for example, protein binding, or a specific chromatin structure).3/
This definition may be useful for the purpose of describing the size of ENCODE’s catalog of elements for later study, but it contrasts sharply with the notion of functional as affecting a nontrivial phenotype. The ENCODE papers show that 80% of the genome displays signs of certain types of biochemical activity—even though the activity may be insignificant, pointless, or unnecessary. This 80% includes all of the introns, for they are active in the production of pre-mRNA transcripts. But this hardly means that they are regulatory or otherwise functional.4/ Indeed, if one carries the ENCODE definition to its logical extreme, 100% of the genome is functional—for all of it participates in at least one biochemical process—DNA replication.

That the ENCODE project would not adopt the most extreme biochemical definition is understandable—that definition would be useless. But the ENCODE definition is still grossly overinclusive from the standpoint of evolutionary biology. From that perspective, most estimates of the proportion of “functional” DNA are well under 80%. Various biologists and related specialists have offered varying guesstimates:
  • Under 50%: “About 1% … is coding. Something like 1-4% is currently expected to be regulatory noncoding DNA ... . About 40-50% of it is derived from transposable elements, and thus affirmatively already annotated as “junk” in the colloquial sense that transposons have their own purpose (and their own biochemical functions and replicative mechanisms), like the spam in your email. And there’s some overlap: some mobile-element DNA has been co-opted as coding or regulatory DNA, for example. [¶] … Transposon-derived sequence decays rapidly, by mutation, so it’s certain that there’s some fraction of transposon-derived sequence we just aren’t recognizing with current computational methods, so the 40-50% number must be an underestimate. So most reasonable people (ok, I) would say at this point that the human genome is mostly junk (“mostly” as in, somewhere north of 50%).”5/

  • 40%: “ENCODE biologist John Stamatoyannopoulos … said … that some of the activity measured in their tests does involve human genes and contributes something to our human physiology. He did admit that the press conference mislead people by claiming that 80% of our genome was essential and useful. He puts that number at 40%.”6/

  • 20%: “[U]sing very strict, classical definitions of “functional” [to refer only to] places where we are very confident that there is a specific DNA:protein contact, such as a transcription factor binding site to the actual bases—we see a cumulative occupation of 8% of the genome. With the exons (which most people would always classify as “functional” by intuition) that number goes up to 9%. … [¶] In addition, in this phase of ENCODE we did [not] sample … completely in terms of cell types or transcription factors. [W]e’ve seen [at most] around 50% of the elements. … A conservative estimate of our expected coverage of exons + specific DNA:protein contacts gives us 18%, easily further justified (given our [limited] sampling) to 20%.”7/
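The dueling numbers can be summarized as a mapping from definition to estimate. The percentages below come from the passages quoted in this post; the labels are my own shorthand, not ENCODE's terminology:

```python
# Rough estimates of the "percent functional" genome, keyed to the
# definition of "functional" that generates each one. Percentages are
# taken from the estimates quoted in the text; labels are my shorthand.
estimates = {
    "protein-coding sequence only": 1,
    "exons + confident DNA:protein contacts (measured)": 9,
    "same, extrapolated for unsampled cell types": 20,
    "any reproducible biochemical activity (ENCODE)": 80,
}

for definition, pct in estimates.items():
    print(f"{pct:>3}% functional under: {definition}")
```

The spread, from roughly 1% to 80%, is the whole dispute in miniature: the answer depends almost entirely on which definition one is the master of.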
So why did the ENCODErs opt for the broadest arguable definition? Birney’s answer is that it describes a quantity that the project could measure; that the larger number underscores that a lot is happening in the genome; that it would have confused readers to receive a range of numbers; and that the smaller number would not have counted the efforts of all the researchers.

Whether these are very satisfactory reasons for trumpeting a widely misunderstood number is a matter that biologists can debate. All I can say is that (1) I have been unable to extract a clear number—whatever one should make of it—for a percentage of the genome that constitutes the regulatory elements—the promoters, enhancers, silencers, ncRNA “genes,” and so on; (2) this number is almost surely less than the 80% figure that, at first glance, one might have thought ENCODE was reporting; and (3) “functional element” as defined by the ENCODE Project is not a term that has clear or direct implications for claims of the law enforcement community that the loci used in forensic identification are not coding and therefore not informative.

Of course, none of this means that the description from law enforcement is correct. It simply means that even after this phase of ENCODE, there are still a huge number of base pairs that might or might not be regulatory or influence regulation and hence, gene expression. And the CODIS STRs might or might not be among them. Published reports suggest that they are not,8/ but the logic that a DNA sequence that is noncoding (and nonregulatory) therefore conveys zero information about phenotype is flawed. It overlooks the possibility that a nonfunctional sequence is correlated with a functional one (because it sits next to an exon or a regulatory sequence).9/ Again, however, the published literature reviewing the CODIS STRs does not reveal any population-wide correlations that permit valid and strong inferences about disease status or propensity or other socially significant phenotypes.10/

Will this situation change? A thoughtful answer would take up a lot of space.11/ For now, I'll just repeat the aphorism attributed to Yogi Berra, Niels Bohr, and Storm P: "It's hard to make predictions, especially about the future."


1. Ewan Birney, ENCODE: My Own Thoughts, Ewan’s Blog: Bioinformatician at Large, Sept. 5, 2012, http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html.

2. David E. Comings, The Structure and Function of Chromatin, in 3 Advances in Human Genetics 237, 316 (H. Harris & K. Hirschhorn eds. 1972) (“Large spaces between genes may be a contributing factor to the observation that most recombination in eukaryotes is inter- rather than intragenic. Furthermore, if recombination tended to be sloppy with most mutational errors occurring in the process, it would [be] an obvious advantage to have it occur in intergenic junk.”). For more discussion of this paper, see T. Ryan Gregory, ENCODE (2012) vs. Comings (1972), Sept. 7, 2012, http://www.genomicron.evolverzone.com/2012/09/encode-2012-vs-comings-1972/.

3. Ian Dunham et al., An Integrated Encyclopedia of DNA Elements in the Human Genome, 489 Nature 57 (2012).

4. These regions do contain some RNA-coding sequences, and those small parts could be doing something interesting (producing RNAs that are regulatory or that defend against infection by viral DNA, for example), but this kind of activity does not exist in the bulk of the introns that are, under the ENCODE definition, 100% functional.

5. Sean Eddy, ENCODE Says What?, Sept. 8, 2012, http://selab.janelia.org/people/eddys/blog/?p=683. He adds that:
[A]s far as questions of “junk DNA” are concerned, ENCODE’s definition isn’t relevant at all. The “junk DNA” question is about how much DNA has essentially no direct impact on the organism’s phenotype—roughly, what DNA could I remove (if I had the technology) and still get the same organism. Are transposable elements transcribed as RNA? Do they bind to DNA-binding proteins? Is their chromatin marked? Yes, yes, and yes, of course they are—because at least at one point in their history, transposons are “alive” for themselves (they have genes, they replicate), and even when they die, they’ve still landed in and around genes that are transcribed and regulated, and the transcription system runs right through them.
6. Faye Flam, Skeptical Takes on Elevation of Junk DNA and Other Claims from ENCODE Project, Sept. 12, 2012, http://ksj.mit.edu/tracker/2012/09/skeptical-takes-elevation-junk-dna-and-o. Stamatoyannopoulos added that:
What the ENCODE papers … have to say about transposons is incredibly interesting. Essentially, large numbers of these elements come alive in an incredibly cell-specific fashion, and this activity is closely synchronized with cohorts of nearby regulatory DNA regions that are not in transposons, and with the activity of the genes that those regulatory elements control. All of which points squarely to the conclusion that such transposons have been co-opted for the regulation of human genes -- that they have become regulatory DNA. This is the rule, not the exception.
7. Ewan Birney, ENCODE: My Own Thoughts, Ewan’s Blog: Bioinformatician at Large, Sept. 5, 2012, http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html.

8. E.g., Sara H. Katsanis & Jennifer K. Wagner, Characterization of the Standard and Recommended CODIS Markers, J. Forensic Sci. (2012).

9. E.g., David H. Kaye, Two Fallacies About DNA Databanks for Law Enforcement, 67 Brook. L. Rev. 179 (2001).

10. E.g., Sara H. Katsanis & Jennifer K. Wagner, Characterization of the Standard and Recommended CODIS Markers, J. Forensic Sci. (2012); Jennifer K. Wagner, Reconciling ENCODE and CODIS, Penn Medicine News Blog, Sept. 18, 2012, http://news.pennmedicine.org/blog/2012/09/reconciling-encode-and-codis.html.

11. For my earlier, and possibly dated, effort to evaluate the likelihood that the CODIS loci someday will prove to be powerfully predictive or diagnostic, see David H. Kaye, Please, Let's Bury the Junk: The CODIS Loci and the Revelation of Private Information, 102 Nw. U. L. Rev. Colloquy 70 (2007) and Mopping Up After Coming Clean About "Junk DNA", Nov. 23, 2007.