Saturday, May 26, 2007

On Avandia: The difference between statistically significant and not statistically significant ...

... is not statistically significant.

I'm placing less and less trust in p-values. Perhaps the Bayesians are getting to me. But hear me out.

In the Avandia saga, GSK got their drug approved on two adequate and well-controlled trials (and have studied their drug even more!). There was some concern over cardiovascular risks (including heart attack), but apparently the risk did not outweigh the benefits. Then Steve Nissen performed a meta-analysis on GSK's data from 42 (!) randomized controlled trials, and now the lawyers are lining up, and the FDA's favorite congressmen are keeping the fax lines busy with request letters and investigations.

Here's how the statistics are shaking out: the results from the meta-analysis show a 43% increase in the relative risk of myocardial infarction, with a p-value of 0.03. The (unspecified) increase in deaths didn't reach statistical significance, with a p-value of 0.06.

Argh. Seriously, argh. Does this mean that the relative risk of myocardial infarction is "real" but the increase in deaths is "not real"? Does the 43% increase in relative risk even mean anything? (C'mon people, show the absolute risk increase as well!)
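One way to see how shaky that "real vs. not real" carve-up is: convert the two p-values back into z-scores. This is a back-of-the-envelope sketch assuming two-sided tests under a normal approximation (the NEJM analysis used different machinery, so treat the exact numbers as illustrative only):

```python
from statistics import NormalDist

# Invert a two-sided p-value to a z-score under a normal approximation.
def z_from_p(p):
    return NormalDist().inv_cdf(1 - p / 2)

z_mi = z_from_p(0.03)     # MI result: z ~ 2.17, "significant"
z_death = z_from_p(0.06)  # death result: z ~ 1.88, "not significant"

print(round(z_mi, 2), round(z_death, 2), round(z_mi - z_death, 2))
```

The two results sit about 0.3 standard errors apart, which is nowhere near a significant difference between them. Declaring one effect real and the other not real, on the basis of which side of 0.05 each landed on, is exactly the fallacy in this post's title.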

According to the Mayo Clinic, the risk is 1/1000 (Avandia) vs. 1/1300 (other medications) in the diabetic study populations. That works out to a 30% increase in relative risk, not the same as what MedLineToday reported. The FDA's safety alert isn't very informative, either.
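To make the relative-vs-absolute point concrete, here's the arithmetic on the Mayo figures quoted above (these are their rough rates, not the NEJM data; the "number needed to harm" framing at the end is my addition, not something either source reports):

```python
# Rough event rates quoted by the Mayo Clinic (illustration only).
p_avandia = 1 / 1000  # event rate on Avandia
p_other = 1 / 1300    # event rate on other medications

relative_increase = p_avandia / p_other - 1  # 0.30, i.e. a "30% increase"
absolute_increase = p_avandia - p_other      # ~0.00023, i.e. 0.023 percentage points

# Flip the absolute increase into a "number needed to harm":
# roughly one extra event per this many patients.
print(round(relative_increase, 2), round(1 / absolute_increase))
```

A 30% relative increase sounds alarming; one extra event per several thousand patients sounds rather different. Both numbers describe the same data, which is why the absolute risk should always be shown alongside the relative risk.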

Fortunately, the NEJM article is public, so you can get your fill of statistics there. So, let me reference Table 4. My question: was the cardiovascular risk real in all studies combined (p=0.03), but not in DREAM (p=0.22), ADOPT (p=0.27), or all small trials combined (p=0.15)? That seems to be a pretty bizarre statement to make, and is probably why the European agencies, the FDA, and Prof. John Buse of UNC-Chapel Hill (who warned the FDA of cardiovascular risks in 2000) have urged patients not to switch right away.

The fact of the matter is if you look for something hard enough, you will find it. It apparently took 42 clinical trials, 2 of them very large, to find a significant p-value. Results from such a meta-analysis on the benefits of a drug probably wouldn't be taken as seriously.
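A back-of-the-envelope calculation shows why "look hard enough and you will find it" worries me. To be fair, the meta-analysis pooled the 42 trials into a single test rather than running 42 separate ones, so this is an illustration of the general worry, not an indictment of Nissen's method:

```python
# If a truly null effect were examined in 42 independent looks, each tested
# at alpha = 0.05, the chance of at least one "significant" result is high.
alpha = 0.05
looks = 42
p_at_least_one = 1 - (1 - alpha) ** looks

print(round(p_at_least_one, 2))  # ~0.88
```

With that many opportunities, a false positive somewhere is the expected outcome under the null, which is why a p-value of 0.03 emerging from such a large pile of data deserves extra scrutiny rather than extra excitement.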

Let me say this: the cardiovascular risks may be real. Steve Nissen's and John Buse's words on the matter are not to be taken lightly. But I think we need to slow down and not get too excited over a p-value that's less than 0.05. This needs a little more thought, not just because I'm questioning whether the statistical significance of the MI analysis means anything, but also because I'm questioning whether the non-significance of the mortality analysis means the death rates aren't different.

Update: Let me add one more thing to this post. The FDA realizes that p-values don't tell the whole story. They have statistical reviewers, medical reviewers, pharmacokinetic reviewers, and so forth. They look at the whole package, including the p-values, the medical mechanism of action, how the drug moves through the body, and anything else that might affect how the drug changes the body. Likewise, Nissen and company discuss the medical aspects of this drug, and don't let the p-values tell the whole story. This class of compounds -- the -glitazones (also known as PPAR agonists) -- is particularly troublesome for reasons described in the NEJM article. So, again, don't get too excited about p-values.

Tuesday, May 1, 2007

A plunge into the dark side

I'm referring, of course, to Bayesian statistics. My statistical education is grounded firmly in frequentist inference, though we did cover some Bayesian topics in the advanced doctorate classes. I even gave one talk on empirical Bayes. However, in the last 8 or so years, all that knowledge (such as it was) was covered over.

No more. I've made it a goal to get my feet wet again, because I knew that some time or another I would have to deal with it. Well, that some time or another is now, and it probably won't be another eight years before I have to do it again. So off to a short course I go, and equipped with books by Gelman et al. and Gamerman, I'll be a fully functional Bayesian imposter in no time. I'm looking forward to it.