Wednesday, September 26, 2012

True Lies: fMRI Evidence in United States v. Semrau

This month, the U.S. Court of Appeals for the Sixth Circuit issued an opinion on “a matter of first impression in any jurisdiction.” The case is United States v. Semrau, No. 11-5396, 2012 WL 3871357 (6th Cir. Sept. 7, 2012). Its subject is the admissibility of the latest twist, the ne plus ultra, in lie detection—functional magnetic resonance imaging (fMRI).

In several ways, the case resembles what may well be the single most cited case on scientific evidence—namely, Frye v. United States, 293 F. 1013 (D.C. Cir. 1923). Frye instituted a special test for admitting scientific evidence. In Frye, a defense lawyer asked a psychologist, Dr. William Moulton Marston, who had developed and published studies of a systolic blood pressure test for conscious deception, to examine a young man accused of murdering a prominent physician. Dr. Marston came to Washington and was prepared to testify that the accused was truthful in retracting his confession to the murder. The trial court would not hear of it. The jury convicted. The defendant appealed. In a short opinion pregnant with implications, the Court of Appeals for the District of Columbia affirmed the exclusion of the expert’s opinion that the defendant was not lying to him.

In United States v. Semrau, defense counsel invited Dr. Steven Laken to examine the owner and CEO of two firms accused of criminal fraud in billing Medicare and Medicaid for psychiatric services that the firms supplied in nursing homes. Like Marston, Dr. Laken had invented and published on an impressive method of lie detection. Following three sessions with the defendant, Dr. Laken concluded that the accused “was generally truthful as to all of his answers collectively.” As in Frye, the district court excluded such testimony. As in Frye, a jury convicted. As in Frye, the defendant appealed. As in Frye, the court of appeals affirmed.

Dr. Marston held degrees from Harvard in law and in psychology. He worked hard to develop and popularize psychological theories (and he created the comic book character, Wonder Woman). Like Marston, Dr. Laken is highly creative, productive, and enterprising. Dr. Laken started his scientific career in genetics and cellular and molecular medicine. He achieved early fame for discovering a genetic marker and developing a screening test for an elevated risk of a form of colon cancer. For that accomplishment, MIT’s Technology Review recognized him as one of the 35 most important innovators under the age of 35 and noted that “Laken believes his methods could spot virtually any illness with a genetic component, from asthma to heart disease.” I do not know if that happened. After four years as Director of Business Development and Intellectual Asset Management at Exact Sciences, a “molecular diagnostics company focused on colorectal cancer,” Laken left genetic science to found Cephos, “the world-class leader in providing fMRI lie detection, and in bringing fMRI technology to commercialization.”1/

Despite these parallels, Laken is not Marston, and Semrau is not Frye. For one thing, in Frye, the trial judge excluded the evidence without an explanation. In Semrau, the trial judge had a magistrate conduct a two-day hearing. Two highly qualified experts called by the government challenged the validity of Dr. Laken’s theories, and the magistrate judge wrote a 43-page report recommending exclusion of the fMRI testimony from the trial.2/

Furthermore, in Frye, there was no previous body of law imposing a demanding standard on the proponents of scientific evidence—the Frye court created from whole cloth the influential “general acceptance” test.3/ In Semrau, the court began with the Federal Rules of Evidence, ornately embroidered with the Supreme Court's opinions in Daubert v. Merrell Dow Pharmaceuticals and two related cases and with innumerable lower court opinions applying the Daubert trilogy. This legal tapestry requires a showing of “reliability” rather than “general acceptance,” and it usually involves attending to four or five factors relating to scientific validity enumerated in Daubert.3/ I want to look briefly at a few of these in the context of Semrau.

* * *

Even though the only judges to address fMRI-based lie detection (those in Semrau) have deemed it inadmissible under both the Daubert standard and the Frye criterion of general acceptance, Cephos continues to advise potential clients that “[t]he minimum requirements for admissibility of scientific evidence under the U.S. Supreme Court ruling Daubert v. Merrell Dow Pharmaceuticals, are likely met.” One can only wonder whether its “legal advisors,” such as Dr. Henry Lee (see note 1), are comfortable with Cephos’s reasoning that
According to a PubMed search, using the keywords “fMRI” or “functional magnetic resonance imaging” yields over 15,000 fMRI publications. Therefore, the technique from which the conclusions are drawn is undoubtedly generally accepted.
The reasoning is peculiar, or at least incomplete. The sphygmomanometer that Dr. Marston used also was “undoubtedly generally accepted.” This pressure meter was invented in 1881, improved in 1896, and modernized in 1901, when Harvey Cushing popularized the device in the medical community. However, the acknowledged ability to measure systolic blood pressure reliably and accurately does not validate the theory—which predated Marston—that blood pressure is a reliable and valid indicator of conscious deception. Likewise, the number of publications about fMRI in general—and even particular evidence that it is a wonderful instrument with which to measure blood oxygenation levels in parts of the brain—reveals very little about the validity of the theory that these levels are well correlated with conscious deception. To be sure, there is more research on this association than there was on the blood pressure theory in Frye, but the Semrau courts were not overly impressed with the applicability of the experimentation to the examination conducted in the case before them.4/

* * *

In addition to directing attention to general acceptance, Daubert v. Merrell Dow Pharmaceuticals identifies “the known or potential rate of error in using a particular scientific technique” as a factor to consider in determining “evidentiary reliability.” The Daubert Court took this factor from circuit court cases involving polygraphy and “voiceprints.” Unfortunately, the ascertainment of meaningful error rates has long confused the courts,5/ and the statistics in Semrau are not presented as clearly as one might hope.

According to Cephos, “[p]eer review results support high accuracy,” but this short statement raises vital questions. Accuracy under what conditions? How “high” is it? Higher for diagnoses of conscious deception than for diagnoses of truthfulness, or vice versa? The court of appeals began its description of Semrau’s evidence on this score as follows:
Based on these studies, as well as studies conducted by other researchers, Dr. Laken and his colleagues determined the regions of the brain most consistently activated by deception and claimed in several peer-reviewed articles that by analyzing a subject's brain activity, they were able to identify deception with a high level of accuracy. During direct examination at the Daubert hearing, Dr. Laken reported these studies found accuracy rates between eighty-six percent and ninety-seven percent. During cross-examination, however, Dr. Laken conceded that his 2009 “Mock Sabotage Crime” study produced an “unexpected” accuracy decrease to a rate of seventy-one percent. ...
But precisely what do these “accuracy rates” measure? By “identify deception,” does the court mean that 71%, 86%, and 97% are the proportions of subjects who were diagnosed as deceptive out of those whom the experimenters asked to lie? If we denote a diagnosis of deception as a “positive” finding (like testing positive for a disease), then such numbers are observed values for the sensitivity of the test. They indicate the probability that given a lie, the fMRI test will detect it—in symbols, P(diagnose liar | liar), where “|” means “given.” The corresponding conditional error probability is the false negative probability P(diagnose truthful | liar) = 1 – sensitivity. It is the probability of missing the act of lying when there is a lie.

So far so good. But it takes two probabilities to characterize the accuracy of a diagnostic test. The other conditional probability is known as specificity. Specificity is the probability of a negative result when the condition is not present. In symbols that apply here, the specificity is P(diagnose truthful | truthful). Its complement, 1 – specificity, is the false positive, or false alarm, probability, P(diagnose liar | truthful). That is, the false alarm probability is the probability of diagnosing the condition as present (the subject is lying) when it is absent (the subject actually is not lying). What might the specificity be? According to the court,
Dr. Laken testified that fMRI lie detection has “a huge false positive problem” in which people who are telling the truth are deemed to be lying around sixty to seventy percent of the time. One 2009 study was able to identify a “truth teller as a truth teller” just six percent of the time, meaning that about “nineteen out of twenty people that were telling the truth we would call liars.” . . .
Why was this not a problem for Dr. Laken in this case? Well, the fact that the technique has a high false positive error probability (that it classifies most truthful subjects as liars) does not mean that it also has a high false negative probability (that it classifies most lying subjects as truthful). Dr. Laken conceded that the false positive probability, P(diagnose liar | truthful), is large (around 0.65, from the paragraph quoted immediately above). Indeed, the reference to 6% accuracy in classifying truth tellers (the technique’s specificity) corresponds to a false positive probability of 1 – 0.06 = 0.94. The average figure for this false alarm probability, according to Dr. Laken’s statements in the preceding quoted paragraph, is lower, but it is still a whopping 0.65. Nevertheless, if the phrase “accuracy rates” in the first quoted paragraph refers to sensitivity, then the estimates of sensitivity that he provided are respectable. The average of 0.71, 0.86, and 0.97 is 0.85.
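
For readers who want to keep the quantities straight, here is a minimal sketch in Python (mine, not the court's or Dr. Laken's) restating the quoted figures as conditional probabilities. The variable names, the averaging, and the reading of the "accuracy rates" as sensitivities are illustrative assumptions; the numbers themselves come from the testimony recounted in the opinion.

```python
# A sketch (not from the opinion) restating the quoted figures as
# conditional probabilities. "Positive" = a diagnosis of deception.

accuracy_rates = [0.71, 0.86, 0.97]          # read here as sensitivities, P(diagnose liar | liar)
sensitivity = sum(accuracy_rates) / len(accuracy_rates)  # about 0.85
false_negative = 1 - sensitivity             # P(diagnose truthful | liar), about 0.15

false_positive_avg = 0.65                    # the conceded 60-70% "false positive problem"
specificity_avg = 1 - false_positive_avg     # P(diagnose truthful | truthful), about 0.35

specificity_2009 = 0.06                      # "truth teller as a truth teller" 6% of the time
false_positive_2009 = 1 - specificity_2009   # 0.94, the figure in the text

print(round(sensitivity, 2), round(false_negative, 2),
      round(specificity_avg, 2), round(false_positive_2009, 2))
# 0.85 0.15 0.35 0.94
```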

What do these numbers prove? One answer is that they apply only under the conditions of the experiments and only to subjects of the type tested in these experiments. The opinions take this strict view of the data, pointing out that the experimental subjects were younger than Semrau and that they faced low penalties for lying. Indeed, the court explained that
Dr. Peter Imrey, a statistician, testified: “There are no quantifiable error rates that are usable in this context. The error rates [Dr. Laken] proposed are based on almost no data, and under circumstances [that] do not apply to the real world [or] to the examinations of Dr. Semrau.”
These remarks go largely to the Daubert question. If the experiments are of little value in estimating an error rate in populations that would be encountered in practice, then the validity of the technique is difficult to gauge, and Cephos’s assurance that this factor weighs in favor of admissibility is vacuous. If there is no way to estimate the conditional error probability for the examination of Semrau, then it is hard to conclude that the test has been validated for its use in the case.

* * *

Fair enough, but I want to go beyond this easy answer. Psychologists often are willing to generalize from laboratory conditions to the real world and from young subjects (usually psychology students) to members of the general public. So let us indulge, at least arguendo, the heroic assumption that the ballpark figures for the sensitivity and the false alarm probability apply to defendants asserting innocence in cases like Semrau. On this assumption, how useful is the test?

Judging from the experiments as described in the court of appeals opinion, if Semrau lied in denying any intent to defraud, there is roughly a 0.85 probability that the test would detect it, and if he was truthful, there is maybe a 0.65 probability that the test would misdiagnose him as a liar. A diagnosis of deception therefore is not much more probable for a lying subject than for a truthful one (the likelihood ratio is .85/.65 ≈ 1.3), and the diagnosis of truthfulness that Dr. Laken actually reported is only about twice as probable for a truthful subject as for a lying one (.35/.15 ≈ 2.3). Either way, the fMRI result has little probative value.
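
Under that heroic assumption, the arithmetic can be laid out explicitly. Here is a short sketch (mine, not anything in the opinion) that computes the likelihood ratios for both possible diagnoses from the ballpark figures above.

```python
# Likelihood ratios implied by the ballpark figures, under the heroic
# assumption that the laboratory error rates carry over to this case.

sensitivity = 0.85      # P(diagnose liar | liar)
false_positive = 0.65   # P(diagnose liar | truthful)

lr_deception_diagnosis = sensitivity / false_positive             # 0.85/0.65, about 1.3
lr_truthful_diagnosis = (1 - false_positive) / (1 - sensitivity)  # 0.35/0.15, about 2.3

print(round(lr_deception_diagnosis, 1), round(lr_truthful_diagnosis, 1))
# 1.3 2.3
```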

That a diagnosis of deception is almost as probable for truthful subjects as for mendacious ones bears mightily on the Rule 403 balancing of prejudice against probative value. The court held that this balancing justified exclusion of Dr. Laken’s testimony, largely for reasons that I won’t go into.6/ It referred to questions about “reliability” in general, but it did not use the error probabilities to shed a more focused light on the probative value of the evidence.

However, it seems from the opinion that Dr. Laken offered at least one probability to show that his diagnosis was correct. The court noted that
Dr. Imrey also stated that the false positive accuracy data reported by Dr. Laken does not “justify the claim that somebody giving a positive test result ... [h]as a six percent chance of being a true liar. That simply is mathematically, statistically and scientifically incorrect.”
It is hard to understand what the “six percent chance” for “somebody giving a positive test result” had to do with the negative diagnosis (not lying) for Semrau. A jury provided with negative fMRI evidence (“He was not lying”) must decide whether the result is a true negative or a false negative—not what might have happened had there been a positive diagnosis.

As for the 6% solution, it is impossible to know from the opinion how Dr. Laken arrived at such a number for the probability that a subject is lying given a positive diagnosis. The conditional probabilities from the experiments run in the opposite direction. They address the probability of the evidence (a diagnosis) given an unknown state of the world (a liar or a truthful subject). If Dr. Laken really opined on the probability of the state of the world (a liar) given the fMRI signals, then he either was naively transposing a conditional probability—a no-no discussed many times in this blog—or he was using Bayes’ rule. In light of Dr. Imrey’s impeccable credentials as a biostatistician and his unqualified dismissal of the number as “mathematically, statistically and scientifically incorrect,” I would not bet on the latter explanation.
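
To see why the transposition matters, consider a toy Bayesian calculation. The error rates are the ballpark figures discussed above; the prior probabilities of lying are purely illustrative assumptions of mine, not anything in the record.

```python
# Why transposing P(positive | truthful) into P(liar | positive) is invalid:
# the posterior depends on a prior that the experiments do not supply.

sensitivity = 0.85      # P(diagnose liar | liar)
false_positive = 0.65   # P(diagnose liar | truthful)

def p_liar_given_positive(prior_liar):
    """Bayes' rule for the probability of lying after a positive (deception) diagnosis."""
    p_positive = sensitivity * prior_liar + false_positive * (1 - prior_liar)
    return sensitivity * prior_liar / p_positive

for prior in (0.1, 0.5, 0.9):   # assumed priors, for illustration only
    print(prior, round(p_liar_given_positive(prior), 2))
# 0.1 0.13
# 0.5 0.57
# 0.9 0.92
```

The posterior moves with the prior, and nothing in the experiments supplies a prior probability of lying for a defendant in Semrau's position.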

Notes

1. If the firm’s website is any indication, it is not an equivalent leader in good grammar. Apparently seeking the attention of wayward lawyers, it advertises that “[i]f you or your client professes their innocence, we may provide pro bono consulting.” The website also offers intriguing reasons to believe in the company’s prowess: it is “represented by one of the top ten intellectual property law firms”; it has “been asked to present to the ... Sandra Day O’Connor Federal Courthouse”; and its legal advisors include Dr. Henry C. Lee (whose website includes “recent sightings of Dr. Lee”). In addition to its lie-detection work, Cephos offers DNA testing, so perhaps I should not say that Dr. Laken has withdrawn entirely from genetic science.

2. The court of appeals buttressed its approval of the report with the observation that “Professor Owen Jones, who observed the hearing” and who is on the faculties of law and biology at Vanderbilt University, stated in an interview with Wired that the report was “carefully done.”

3. For elaboration, see David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence—Expert Evidence (2d ed. 2011) http://www.aspenpublishers.com/product.asp?catalog_name=Aspen&product_id=0735593531

4. For a short discussion of validity in this context, see Francis X. Shen & Owen D. Jones, Brain Scans as Evidence: Truths, Proofs, Lies, and Lessons, 62 Mercer L. Rev. 861 (2011).

5. See David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence—Expert Evidence (2d ed. 2011).

6. The court of appeals wrote that “the district court did not abuse its discretion in excluding the fMRI evidence pursuant to Rule 403 in light of (1) the questions surrounding the reliability of fMRI lie detection tests in general and as performed on Dr. Semrau, (2) the failure to give the prosecution an opportunity to participate in the testing, and (3) the test result's inability to corroborate Dr. Semrau's answers as to the particular offenses for which he was charged.”

Thursday, September 20, 2012

Dear Judges: A Letter from the Electronic Frontier Foundation to the Ninth Circuit

On the eve of the en banc oral argument in Haskell v. Harris, the Electronic Frontier Foundation (EFF) filed a letter asking "the Court to consider the ENCODE project findings in determining the outcome of this case." It seems hard to oppose the idea that the court should consider relevant scientific research, but without input from the scientific community, will the judges do better than they have in the past as "amateur scientists" (to use the skeptical phrase of Chief Justice Rehnquist in Daubert v. Merrell Dow Pharmaceuticals, Inc.)?

Deciphering the ENCODE papers' descriptions of the data is no easy task, and EFF's lawyers do not seem to be up to it. Their letter asserts that the project "has determined that more than 80% of DNA once thought to be no more than 'junk' has at least one biochemical function, controlling how our cells, tissue and organs behave." This is not a fair characterization of the findings. Which geneticist ever claimed that all noncoding DNA plays no role in how cells behave? The issue always has been how much is junk, how much is functional -- and what counts as a "function"?

What does EFF mean by "controlling"? Making organs function? Stimulating tissue growth? Turning normal cells into cancerous ones? Making us tall or short, fat or skinny, gay or straight? None of those things are mentioned in the Nature cover story cited in the letter. Instead, the EFF relies on New York Times reporter Gina Kolata's misleading news article for EFF's claim that "The ENCODE project has determined that 'junk' DNA plays a critical role in determining a person’s susceptibility to disease and physical traits like height."

My earlier postings described the limited meaning of the phrase "biochemical function" in the cited paper. I'd love to see a citation to a page of an ENCODE paper that asserts that fully 80% of the noncoding DNA is determining "susceptibility to disease and physical traits like height." And if I were a judge, I would demand an explanation of why "physical traits like height" are, in the words of the EFF letter, "sensitive and private."

After the judges consider the ENCODE papers (by having their law clerks read them?), will they be better informed about the actual privacy implications of the CODIS loci than they were before this excursion into the realm of bioinformatics? I would not bet on it, but maybe I am growing cynical.

Wednesday, September 19, 2012

On the "clear" outcome of "established" law

Today's New York Times included an editorial (California and the Fourth Amendment) on Haskell v. Harris, the challenge to the California Proposition requiring DNA sampling on arrest. En banc oral argument takes place today. The following is a letter I sent to the Times editor. I expect somewhere between 0 and 50 percent of it to be published there (point estimate = 0):

Dear Editor,

Your editorial (September 19) asserts that the constitutionality of taking DNA on arrest “should be clear” given “established rights against unreasonable search and seizure.” Yet, over vigorous dissents, federal courts of appeals have ruled otherwise—twice in panels of the Ninth Circuit and once in the Third Circuit.

Whether acquiring purely biometric data from arrestees necessitates a warrant is doubtful, and whether acquiring DNA data is “unreasonable” is a close question. The physical invasion of personal security is minor when the individual is already in custody and the sampling is only marginally more intrusive than fingerprinting. The medical information content of the identification profile is (given current knowledge) only slightly more significant than that of a fingerprint. Very few false convictions arising from DNA database searches have been documented. (One in Australia has been reported.)

Contrary to the suggestion in the editorial, what divided the judges in the Ninth Circuit was not whether “the law’s real purpose was investigation.” No one doubted that. The dissenting judge believed that the Supreme Court already had decided that “fingerprints may not be taken from an arrestee solely for an investigative purpose, absent a warrant or reasonable suspicion that the fingerprints would help solve the crime for which he was taken into custody.” What the Court actually held was “that transportation to and investigative detention at the station house without probable cause or judicial authorization together violate the Fourth Amendment.” The dissenting judge also worried, among other things, that “it is possible that ... at some future time,” an identification profile might permit strong inferences about the diseases an arrestee has or might develop.

I do not claim that arrestee DNA sampling clearly is constitutional. There are a number of valid concerns about indefinite sample retention and other matters. Neither do I maintain that its benefits (which are not well quantified) plainly outweigh its costs and its impact on legitimate interests in personal privacy and security. But assertions that the balance is “clear” and that the “established” law dictates the result oversimplify a delicate constitutional question.

Tuesday, September 18, 2012

ENCODE’S “Functional Elements” and the CODIS Loci (Part II. Alice in Genomeland)

Yesterday, I introduced the concepts and terms required to ascertain whether the estimated proportion of the genome that encodes the structure of proteins or regulates gene expression has jumped from 5 or 10% to 80%. Today, I shall focus on the possible meanings of "functional" to show that this is not what the ENCODE papers state or imply. “Functional” is an adjective, and Alice learned from Humpty Dumpty that adjectives are malleable:
"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master—that's all."
Alice was too much puzzled to say anything, so after a minute Humpty Dumpty began again. "They've a temper, some of them—particularly verbs, they're the proudest—adjectives you can do anything with, but not verbs—however, I can manage the whole lot! Impenetrability! That's what I say!"
Like Humpty, who was redefining the word “glory,” the ENCODE authors recognized that “functional” can have many meanings. As Ewan Birney later explained:
Like many English language words, “functional” is a very useful but context-dependent word. Does a “functional element” in the genome mean something that changes a biochemical property of the cell (i.e., if the sequence was not here, the biochemistry would be different) or is it something that changes a phenotypically observable trait that affects the whole organism?1/
Still other possibilities exist. For example, the first paper to use the adjective “junk” for noncoding DNA noted that even debris accumulated in the course of evolution or introduced from viral infections could have a function simply by creating spaces between genes.2/ The pieces of dead wood that are joined together to form the hull of a row boat have a function—they exclude the water from the vessel to keep it afloat. This does not mean that the detailed structure of the planks—the precise width of each plank or the number of ridges on its surface—affects its functionality. And, just as something can be inactive and functional, so too something can be alive with activity and yet be nonfunctional.

ENCODE uses biochemical activity—the notion that “the biochemistry would be different”—as a synonym for functional. Here is the definition of “functional” in the top-level paper:
Operationally, we define a functional element as a discrete genome segment that encodes a defined product (for example, protein or non-coding RNA) or displays a reproducible biochemical signature (for example, protein binding, or a specific chromatin structure).3/
This definition may be useful for the purpose of describing the size of ENCODE’s catalog of elements for later study, but it contrasts sharply with the notion of functional as affecting a nontrivial phenotype. The ENCODE papers show that 80% of the genome displays signs of certain types of biochemical activity—even though the activity may be insignificant, pointless, or unnecessary. This 80% includes all of the introns, for they are active in the production of pre-mRNA transcripts. But this hardly means that they are regulatory or otherwise functional.4/ Indeed, if one carries the ENCODE definition to its logical extreme, 100% of the genome is functional—for all of it participates in at least one biochemical process—DNA replication.

That the ENCODE project would not adopt the most extreme biochemical definition is understandable—that definition would be useless. But the ENCODE definition is still grossly overinclusive from the standpoint of evolutionary biology. From that perspective, most estimates of the proportion of “functional” DNA are well under 80%. Various biologists or related specialists have provided varying guesstimates:
  • Under 50%: “About 1% … is coding. Something like 1-4% is currently expected to be regulatory noncoding DNA ... . About 40-50% of it is derived from transposable elements, and thus affirmatively already annotated as “junk” in the colloquial sense that transposons have their own purpose (and their own biochemical functions and replicative mechanisms), like the spam in your email. And there’s some overlap: some mobile-element DNA has been co-opted as coding or regulatory DNA, for example. [¶] … Transposon-derived sequence decays rapidly, by mutation, so it’s certain that there’s some fraction of transposon-derived sequence we just aren’t recognizing with current computational methods, so the 40-50% number must be an underestimate. So most reasonable people (ok, I) would say at this point that the human genome is mostly junk (“mostly” as in, somewhere north of 50%).”5/

  • 40%: “ENCODE biologist John Stamatoyannopoulos … said … that some of the activity measured in their tests does involve human genes and contributes something to our human physiology. He did admit that the press conference mislead people by claiming that 80% of our genome was essential and useful. He puts that number at 40%.”6/

  • 20%: “[U]sing very strict, classical definitions of “functional” [to refer only to] places where we are very confident that there is a specific DNA:protein contact, such as a transcription factor binding site to the actual bases—we see a cumulative occupation of 8% of the genome. With the exons (which most people would always classify as “functional” by intuition) that number goes up to 9%. … [¶] In addition, in this phase of ENCODE we did [not] sample … completely in terms of cell types or transcription factors. [W]e’ve seen [at most] around 50% of the elements. … A conservative estimate of our expected coverage of exons + specific DNA:protein contacts gives us 18%, easily further justified (given our [limited] sampling) to 20%.”7/
So why did the ENCODErs opt for the broadest arguable definition? Birney’s answer is that it describes a quantity that the project could measure; that the larger number underscores that a lot is happening in the genome; that it would have confused readers to receive a range of numbers; and that the smaller number would not have counted the efforts of all the researchers.

Whether these are very satisfactory reasons for trumpeting a widely misunderstood number is a matter that biologists can debate. All I can say is that (1) I have been unable to extract a clear number—whatever one should make of it—for a percentage of the genome that constitutes the regulatory elements—the promoters, enhancers, silencers, ncRNA “genes,” and so on; (2) this number is almost surely less than the 80% figure that, at first glance, one might have thought ENCODE was reporting; and (3) “functional element” as defined by the ENCODE Project is not a term that has clear or direct implications for claims of the law enforcement community that the loci used in forensic identification are not coding and therefore not informative.

Of course, none of this means that the description from law enforcement is correct. It simply means that even after this phase of ENCODE, there are still a huge number of base pairs that might or might not be regulatory or influence regulation (and hence gene expression). And the CODIS STRs might or might not be among them. Published reports suggest that they are not,8/ but the logic that a DNA sequence conveys zero information about phenotype just because it is noncoding (and nonregulatory) is flawed. It overlooks the possibility of a correlation between the nonfunctional sequence and a functional one (because the nonfunctional sequence sits next to an exon or a regulatory sequence).9/ Again, however, the published literature reviewing the CODIS STRs does not reveal any population-wide correlations that permit valid and strong inferences about disease status or propensity or other socially significant phenotypes.10/

Will this situation change? A thoughtful answer would take up a lot of space.11/ For now, I'll just repeat the aphorism attributed to Yogi Berra, Niels Bohr, and Storm P: "It's hard to make predictions, especially about the future."

Notes

1. Ewan Birney, ENCODE: My Own Thoughts, Ewan’s Blog: Bioinformatician at Large, Sept. 5, 2012, http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html.

2. David E. Comings, The Structure and Function of Chromatin, in 3 Advances in Human Genetics 237, 316 (H. Harris & K. Hirschhorn eds. 1972) (“Large spaces between genes may be a contributing factor to the observation that most recombination in eukaryotes is inter- rather than intragenic. Furthermore, if recombination tended to be sloppy with most mutational errors occurring in the process, it would an obvious advantage to have it occur in intergenic junk.”). For more discussion of this paper, see T. Ryan Gregory, ENCODE (2012) vs. Comings (1972), Sept. 7, 2012, http://www.genomicron.evolverzone.com/2012/09/encode-2012-vs-comings-1972/.

3. Ian Dunham et al., An Integrated Encyclopedia of DNA Elements in the Human Genome, 489 Nature 57 (2012).

4. These regions do contain some RNA-coding sequences, and those small parts could be doing something interesting (producing RNAs that are regulatory or that defend against infection by viral DNA, for example), but this kind of activity does not exist in the bulk of the introns that are, under the ENCODE definition, 100% functional.

5. Sean Eddy, ENCODE Says What?, Sept. 8, 2012, http://selab.janelia.org/people/eddys/blog/?p=683. He adds that:
[A]s far as questions of “junk DNA” are concerned, ENCODE’s definition isn’t relevant at all. The “junk DNA” question is about how much DNA has essentially no direct impact on the organism’s phenotype—roughly, what DNA could I remove (if I had the technology) and still get the same organism. Are transposable elements transcribed as RNA? Do they bind to DNA-binding proteins? Is their chromatin marked? Yes, yes, and yes, of course they are—because at least at one point in their history, transposons are “alive” for themselves (they have genes, they replicate), and even when they die, they’ve still landed in and around genes that are transcribed and regulated, and the transcription system runs right through them.
6. Faye Flam, Skeptical Takes on Elevation of Junk DNA and Other Claims from ENCODE Project, Sept. 12, 2012, http://ksj.mit.edu/tracker/2012/09/skeptical-takes-elevation-junk-dna-and-o. Stamatoyannopoulos added that:
What the ENCODE papers … have to say about transposons is incredibly interesting. Essentially, large numbers of these elements come alive in an incredibly cell-specific fashion, and this activity is closely synchronized with cohorts of nearby regulatory DNA regions that are not in transposons, and with the activity of the genes that those regulatory elements control. All of which points squarely to the conclusion that such transposons have been co-opted for the regulation of human genes -- that they have become regulatory DNA. This is the rule, not the exception.
7. Ewan Birney, ENCODE: My Own Thoughts, Ewan’s Blog: Bioinformatician at Large, Sept. 5, 2012, http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html.

8. E.g., Sara H. Katsanis & Jennifer K. Wagner, Characterization of the Standard and Recommended CODIS Markers, J. Forensic Sci. (2012).

9. E.g., David H. Kaye, Two Fallacies About DNA Databanks for Law Enforcement, 67 Brook. L. Rev. 179 (2001).

10. E.g., Sara H. Katsanis & Jennifer K. Wagner, Characterization of the Standard and Recommended CODIS Markers, J. Forensic Sci. (2012); Jennifer K. Wagner, Reconciling ENCODE and CODIS, Penn Medicine News Blog, Sept. 18, 2012, http://news.pennmedicine.org/blog/2012/09/reconciling-encode-and-codis.html.

11. For my earlier, and possibly dated, effort to evaluate the likelihood that the CODIS loci someday will prove to be powerfully predictive or diagnostic, see David H. Kaye, Please, Let's Bury the Junk: The CODIS Loci and the Revelation of Private Information, 102 Nw. U. L. Rev. Colloquy 70 (2007) and Mopping Up After Coming Clean About "Junk DNA", Nov. 23, 2007.

Sunday, September 16, 2012

ENCODE’S “Functional Elements” and the CODIS Loci (Part I)

Last week I noted some of the hyperbolic headlines accompanying the coordinated publication of a large number of datasets from the ENCODE Project. The abstract of the top-level paper begins as follows:
The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions.1/
Hoping to decipher these sentences, I have been reading about gene regulation. This modest effort stems from more than academic curiosity. If the popular and even some of the scientific press is to be believed, ENCODE has exorcized “junk DNA” from the body of scientific knowledge.2/ The bright light suddenly shining on the “dark matter” of the genome (to introduce another sloppy metaphor)3/ raises a giant question mark for the criminal justice system. Law enforcement authorities have always insisted that the snippets of DNA used to generate DNA identification profiles are just nonfunctional "junk."4/ Now, according to New York Times science correspondent Gina Kolata,
As scientists delved into the “junk” — parts of the DNA that are not actual genes containing instructions for proteins — they discovered a complex system that controls genes. At least 80 percent of this DNA is active and needed. … [¶] … The thought before the start of the project, said Thomas Gingeras, an Encode researcher from Cold Spring Harbor Laboratory, was that only 5 to 10 percent of the DNA in a human being was actually being used.5/
This juxtaposition of percentages suggests that the scientific community has shifted from the view that “only 5 to 10 percent” of the genome is functional (“needed” for the organism to function normally) to a sudden realization that 80% falls into this category.

But the more I read, the clearer it became that this description of a sudden phase transition in science is wildly inaccurate. Johns Hopkins biostatistician Steve Salzberg, in a penetrating and provocative Simply Statistics podcast interview, describes the 80% figure touted in the ENCODE paper as irresponsible.6/ University of Toronto biochemist Laurence Moran saw it as a repeat of a similar, problematic performance five years ago, at the conclusion of the pilot phase of ENCODE.7/ Responding to criticism, ENCODE Project leader Ewan Birney explained the new knowledge this way:
After all, 60% of the genome with the new detailed manually reviewed (GenCode) annotation is either exonic or intronic, and a number of our assays (such as PolyA- RNA, and H3K36me3/H3K79me2) are expected to mark all active transcription. So seeing an additional 20% over this expected 60% is not so surprising.8/
“Not so surprising”? A whopping 60%—not a minor 5 or 10%—was already estimated to be “active”? What is going on here?

The answer lies in the definition of some key terms (like exons, introns, and transcription) and requires a rudimentary understanding of the fundamentals of gene expression and its regulation in human beings. This posting presents the essential terminology and concepts. Part II will apply them to explain what ENCODE’s “assign[ing] biochemical functions for 80% of the genome” means. Anyone who knows what RNA transcripts and transcription factors do can skip this first part (or can read it to let me know of my inaccuracies).

To avoid suspense, I shall lay out my conclusions here and now: (1) if ENCODE gives a clear number for a percentage of the genome that regulates genes—the promoters, enhancers, silencers, ncRNA “genes,” and so on—I have yet to find it; (2) this number is almost surely less than the 80% figure reported for functionality; and (3) “functional element” as defined by the ENCODE Project is not a term that has clear or direct implications for claims of the law enforcement community that the loci used in forensic identification are not coding and therefore not informative. Those claims of zero information are somewhat exaggerated, but that is another story. For now, I merely describe some basics of gene expression and regulation.

Genes make proteins. But how? There are three big steps (with many activities within each step): transcription; post-transcription modification and transportation; and translation. All involve RNA, a single-stranded molecule related to DNA, and proteins. The basic picture is
  • Transcription to precursor messenger RNA: DNA + proteins --> pre-mRNA (in nucleus)
  • Post-transcriptional modification and transportation: pre-mRNA + proteins and RNAs --> mature mRNA (in nucleus, then exported to cytoplasm)
  • Translation to protein: mRNA + tRNA and proteins --> expressed protein (in cytoplasm)
In the first big step, the base pairs of the gene are transcribed jot-for-jot into an RNA molecule (precursor messenger RNA, or pre-mRNA). In the second major step, the transcript is modified at its ends, edited to remove parts that do not code for the protein that will be made (splicing), and the mature messenger RNA (m-RNA) is moved outside the nucleus. In the third phase, another type of RNA (transfer RNA, or tRNA) stitches together individual amino acids in the order dictated by the m-RNA transcript to form a protein, thereby translating the DNA sequence mirrored in the mRNA into the amino-acid order of the protein. Translation occurs on a kind of microscopic workbench (a ribosome) made of yet another RNA (ribosomal RNA, or rRNA).
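
For readers who find a toy model clearer than prose, the following Python sketch caricatures the three steps just described. It is my illustration, not anything from the ENCODE papers: the "gene," the "intron," and the four-entry codon table are invented for the example, and real transcription, splicing, and translation are far more elaborate.

```python
# Toy illustration of the three steps described above (grossly simplified):
# transcription, splicing out an intron, and translation with a tiny codon table.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}  # tiny subset

def transcribe(dna):
    """DNA -> pre-mRNA (here, simply the coding strand rewritten with U for T)."""
    return dna.replace("T", "U")

def splice(pre_mrna, introns):
    """Remove intronic stretches from the pre-mRNA transcript."""
    for intron in introns:
        pre_mrna = pre_mrna.replace(intron, "")
    return pre_mrna

def translate(mrna):
    """Read codons three bases at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i+3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

gene = "ATGTTTGGTACCGGCTAA"          # two exons interrupted by a made-up intron (GGTACC)
pre_mrna = transcribe(gene)          # "AUGUUUGGUACCGGCUAA"
mrna = splice(pre_mrna, ["GGUACC"])  # "AUGUUUGGCUAA"
print(translate(mrna))               # Met-Phe-Gly
```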

For all this to happen, the DNA, which lies tightly coiled in the chromosomes (in a protein-DNA matrix known as chromatin), must open up for transcription to occur. Thus, changes in the chromatin regulate transcription, and these changes can be brought about in a number of ways. Transcription factors (specialized proteins) bind to the DNA. The bound transcription factors then recruit an enzyme (RNA polymerase) that produces RNA. This occurs within a region of DNA, known as a promoter, near the start of the protein-coding DNA (the structural gene). The level of transcription is influenced by activator or repressor proteins that bind to still other small regions (enhancers and silencers, respectively) that also lie outside the structural gene. In short, chemical interactions that open or close the chromatin that houses the DNA and transcription factors regulate the first step in the DNA-to-protein process.

In the past decade, other mechanisms of regulation or control of gene expression have been discovered. Many DNA sequences are not transcribed into messenger RNA, but they are transcribed into a variety of other RNAs. These non-protein-coding DNA sequences can be thought of as genes for RNA. Courting confusion, they usually are called “noncoding” (ncDNA)—because they do not code for protein—but they certainly code for RNAs that are crucial to translation—rRNA and tRNA—and for other RNAs that affect transcription, translation, and DNA replication. So it turns out that the genome is abuzz with transcription-to-RNA activity and other events that feed into the expression of the (protein-)coding DNA.

Yet, this hardly means that every biochemical event along the DNA is functionally important. Some, perhaps many, non-mRNA transcripts are just “noise.” They may float around for a while, but they may not do anything except wither away. In addition, large segments of the DNA transcribed in the course of making mRNA appear in the initial transcript (the pre-mRNA) but never make it into mature mRNA. These unused parts of the pre-mRNA transcripts correspond to long stretches of DNA, known as introns, that interrupt the smaller coding parts—the exons—that are translated into proteins. The initially transcribed intronic parts are removed from the pre-mRNA in a process called RNA splicing. Most of the RNA from introns probably just dissipates.9/

All these terms are a mouthful, but armed with this basic understanding of genes, RNA, and proteins, we can see why the 80% figure does not mean what one might think. We shall also see that the estimated proportion of the genome that encodes the structure of proteins or regulates gene expression has not jumped from 5 or 10% to 80%.

Notes

1. Ian Dunham et al., An Integrated Encyclopedia of DNA Elements in the Human Genome, 489 Nature 57 (2012).

2. E.g., Elizabeth Pennisi, ENCODE Project Writes Eulogy for Junk DNA, 337 Science 1159 (2012).

3. E.g., Gina Kolata, Bits of Mystery DNA, Far From ‘Junk,’ Play Crucial Role, N.Y. Times, Sept. 5, 2012. In one respect, the "dark matter" metaphor misrepresents dark matter. The presence of dark matter is inferred from its gravitational effects on visible matter. The presence of noncoding DNA is known from experiments that detect and characterize it just as they do coding DNA. Perhaps the metaphor means that the sequence of “dark matter” DNA cannot be deduced from the structure of a protein made in a cell. This, however, is like saying that dark matter is matter that cannot be seen with the naked eye. And that is not what astronomers mean by dark matter.

4. E.g., House Committee on the Judiciary, Report on the DNA Analysis Backlog Elimination Act of 2000, 106th Cong., 2d Sess., H.R. Rep. No. 106-900(1), at 27 (“the genetic markers used for forensic DNA testing … show only the configuration of DNA at selected ‘junk sites’ which do not control or influence the expression of any trait.”); New York State Law Enforcement Council, Legislative Priorities 2012: DNA at Arrest, at 5, http://nyslec.org/pdfs/2012/1_DNA_2012.pdf (“The pieces of DNA that are analyzed for the databank were specifically chosen because they are ‘junk DNA.’”).

5. Kolata, supra note 3.

6. Interview by Roger Peng with Steven Salzberg, podcast on Simply Statistics, Sept. 7, 2012, http://simplystatistics.org/post/31056769228/interview-with-steven-salzberg-about-the-encode (“Why do they feel a need to say that 80% of the genome is functional? … They know it’s not true. They shouldn’t say it. … You don’t distort the science to get into the headlines.”).

7. Laurence A. Moran, The ENCODE Data Dump and the Responsibility of Scientists, Sept. 6, 2012, http://sandwalk.blogspot.com/2012/09/the-encode-data-dump-and-responsibility_6.html (“This is, unfortunately, another case of a scientist acting irresponsibly by distorting the importance and the significance of the data.”).

8. Ewan Birney, ENCODE: My Own Thoughts, Ewan’s Blog: Bioinformatician at Large, Sept. 5, 2012, http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html.

9. Post-splicing processing of a small fraction of the RNA from introns can produce noncoding RNAs that may regulate protein expression. L. Fedorova & A. Fedorov, Puzzles of the Human Genome: Why Do We Need Our Introns?, 6 Current Genomics 589, 592 (2005).

I am grateful to Eileen Kane for explaining some of the molecular biology to me.

Friday, September 7, 2012

Trashing Junk DNA

You have seen the headlines:
  • Bits of Mystery DNA, Far From 'Junk,' Play Crucial Role (New York Times)
  • 'Junk DNA' Concept Debunked by New Analysis of Human Genome (Washington Post)
  • 'Junk DNA' Debunked (Wall Street Journal)
  • Breakthrough Study Overturns Theory of 'Junk DNA' in Genome (Guardian)
Or maybe you heard MSNBC report that the data from ENCODE "shows us living beyond our genes" -- whatever that means -- or listened to CBC intone that "'Junk DNA' has a purpose" -- sounds divine -- or saw the Independent's mishugina announcement that "Scientists Debunk 'Junk DNA' Theory to Reveal Vast Majority of Human Genes Perform a Vital Function!" -- like we did not know that genes were functional and important?

The level of hype here is phenomenal. (Some useful clarification can be found at the Nature News blog.) In the next few days, I hope to post some quick thoughts on what the ENCODE figures (like 80%) being bandied about for the "functional" or "biologically active" fraction of the human genome mean for the loci used in forensic DNA identification.


(If any readers have insights to share, post a comment or send me an email at kaye at alum.mit.edu, and I'll try to use them. I am still educating myself about some of the details of gene regulation and can use any help I can get.)