Friday, March 25, 2011

The Matrixx: Missing the Difference Between Statistical Significance and Study Design

This week, the Supreme Court released a unanimous opinion on the need for "a statistically significant number" in a securities fraud complaint. In a rare display of solidarity, there were no separate opinions to accompany Justice Sotomayor's opinion for the Court in Matrixx Initiatives, Inc. v. Siracusano, No. 09-1156, 2011 WL 977060 (U.S. Mar. 22, 2011).

The issue was whether a complaint for securities fraud under federal law could be based on the failure of a manufacturer of a homeopathic remedy for colds to disclose "adverse events associated with a product if the reports do not disclose a statistically significant number of adverse events." Adverse event reports (AERs) are reports from consumers or physicians sent to the FDA. The adverse events here were loss of smell (anosmia) following the use of the nasal spray or gel marketed as the Zicam Cold Remedy.

I have no quarrel with the result -- a reasonable investor might want to know of such reports if they are sufficiently extensive and disturbing that they could prompt the FDA to take some action or might lead to costly lawsuits. Of course, the fact that investors might feel this way does not mean that AERs provide valid statistical proof of causation, and the Court cautioned that "we do not attempt to define here what constitutes reliable evidence of causation." Nevertheless, the Court offered some remarks on this latter topic that are not entirely lucid.

The problem with AERs is that they are "anecdotal" -- a series of little stories of event X (e.g., use of the remedy) followed by event Y (e.g., anosmia). At best, such anecdotes can establish an association between X and Y, but even this requires a comparison group. After all, some people who do not use the Matrixx product also suffer a loss of smell. If the condition occurs no more often among users of Zicam (or similar nostrums) than among people who do not use it, then the number of AERs does not prove that Zicam is associated with -- let alone causes -- anosmia.

This is where statistical significance comes in. One has to consider, not the sheer number of AERs, but a statistic such as the difference between the proportion of Zicam users who develop anosmia and the proportion of nonusers who come down with it. If one posits that each group is a random sample of some larger population, then the probability of observing, by chance alone, a difference at least this large can be computed. If this probability is small, then the AERs are statistically significant evidence that something other than chance is at work.
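To make the calculation concrete, here is a minimal sketch of the standard two-proportion z-test just described. The counts are wholly hypothetical -- the actual AER figures were not analyzed this way in the litigation -- and the function names are my own:

```python
import math

def two_proportion_z_test(cases_a, n_a, cases_b, n_b):
    """Two-sided z-test for a difference between two proportions.

    Returns (z, p_value): under the hypothesis that chance alone is at
    work, p_value is the probability of a difference in proportions at
    least this large in either direction.
    """
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal tail area
    return z, p_value

# Hypothetical counts: 30 of 1,000 users report anosmia vs. 10 of 1,000 nonusers.
z, p = two_proportion_z_test(30, 1000, 10, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers, the difference (3% versus 1%) would be statistically significant at conventional levels; had the two proportions been equal, the test would attribute the reports to chance.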

That "something" might not be Zicam at all, but some other factor also associated with Zicam use. That is why clinical trials (or, at a minimum, further analysis of potentially confounding variables in observational studies) are important. They can help eliminate other factors as explanations for an observed difference. Thus, "a statistically significant number of adverse events" does very little to establish causation, but it can serve as a trigger for further study. In the absence of a significant association of any kind, however, there is little reason to undertake further studies that might clarify the speculative causal links. In simple terms, there is no reason to consider rival explanations for an association if there is no association to explain.
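A toy numerical example (every count invented for illustration) shows how a confounder can manufacture an apparent association that vanishes once the confounder is held constant:

```python
# Suppose people with severe colds are both more likely to use the remedy
# and more likely to lose their sense of smell, so cold severity confounds
# the crude comparison.
# Each stratum: (cases_users, n_users, cases_nonusers, n_nonusers)
strata = {
    "severe cold": (8, 80, 2, 20),   # 10% anosmia among users AND nonusers
    "mild cold":   (1, 20, 4, 80),   # 5% anosmia among users AND nonusers
}

# Crude (pooled) comparison, ignoring the confounder:
cu = sum(s[0] for s in strata.values())   # user cases
nu = sum(s[1] for s in strata.values())   # users
cn = sum(s[2] for s in strata.values())   # nonuser cases
nn = sum(s[3] for s in strata.values())   # nonusers
print(f"crude: users {cu/nu:.0%}, nonusers {cn/nn:.0%}")

# Within each stratum the rates are identical -- no association to explain.
for name, (a, na, b, nb) in strata.items():
    print(f"{name}: users {a/na:.0%}, nonusers {b/nb:.0%}")
```

Here the crude comparison shows users faring worse (9% versus 6%), yet within each severity stratum the rates are identical. Randomization in a clinical trial, or stratified analysis in an observational study, guards against exactly this kind of artifact.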

Justice Sotomayor's opinion is a little fuzzy about all this. On the one hand, the opinion states that “the mere existence of reports of adverse events . . . says nothing in and of itself about whether the drug is causing the adverse events.” Fair enough. Mere temporal association is not necessarily causation. That is a matter of study design, not of statistical significance.

On the other hand, the Justice lists "a temporal relationship" in a single patient as one indication of "a reliable causal link." Furthermore, the opinion suggests that even when the number of AERs is small enough to be explained by chance alone, medical researchers or regulators could treat them as proving causation, at least when combined with other information. As support for this view, Justice Sotomayor writes that "ethical considerations may prohibit researchers from conducting randomized clinical trials to confirm a suspected causal link for the purpose of obtaining statistically significant data."

To be sure, other types of studies and knowledge of biological mechanisms are relevant in causal analysis. But the fact that "medical professionals and researchers do not limit the data they consider to the results of randomized clinical trials or to statistically significant evidence" obviously does not mean that one can infer causation in the absence of good experimental or observational data. AERs do not constitute such data. Shareholders may care about them, and they can stimulate further research that occasionally pans out, but courts should not accept them as proof of "a reliable causal link."


Matrixx Initiatives, Inc. v. Siracusano, No. 09-1156, 2011 WL 977060 (U.S. Mar. 22, 2011)

David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence, New York: Aspen Pub. Co. (2d ed. 2011)

David H. Kaye & David A. Freedman, Statistics, in Reference Manual on Scientific Evidence (Federal Judicial Center ed., 3d ed. 2011) (preprint)
