Full disclosure here: at one time I wanted to be a complementary and alternative medicine (CAM) researcher. Or integrative, or whatever the cool kids call it these days. I thought that CAM research would bear fruit if researchers could just tighten up their methodology and leave nothing to question. While this is not intended to be a discussion of my career path, I’m glad I did not go down that road.
This article is a discussion of why. Its basic premise is that a positive clinical trial does not provide strong evidence for an implausible therapy, for much the same reason that doctors order a confirmatory test for an individual who screens positive for HIV. A positive screening test provides some, but not conclusive, evidence of infection, simply because HIV is so rare in the population: the positive predictive value of even a very good test is poor when the condition is rare. Likewise, the predictive value of a pretty good clinical trial is poor if the treatment’s plausibility has not been established.
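The screening-test analogy is easy to check numerically. The numbers below are hypothetical, chosen only to illustrate the point: even a test with 99.9% sensitivity and 99.9% specificity has a coin-flip positive predictive value when prevalence is 0.1%.

```python
# Hypothetical illustrative numbers, not real HIV test characteristics.
sensitivity = 0.999   # P(positive test | infected)
specificity = 0.999   # P(negative test | not infected)
prevalence = 0.001    # P(infected) in the screened population

# Law of total probability: overall rate of positive tests.
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' Rule: probability of infection given a positive test.
ppv = sensitivity * prevalence / p_pos
print(f"P(HIV | positive test) = {ppv:.3f}")  # prints 0.500
```

With these numbers, true positives and false positives occur at exactly the same rate, so a positive result is no better than 50/50 despite the test being excellent by any conventional measure.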
Put it this way: if we have a treatment that has zero probability of working (so the “null hypothesis” of no effect, in statistical parlance, is true), there is still a 5% probability that it will show a significant result in a conventional clinical trial run at the conventional 0.05 significance level. But let’s turn that on its head using Bayes’ Rule:
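That 5% figure can be checked with a quick simulation. This is only a sketch: the two-sample z-test, sample sizes, and seed below are my own arbitrary choices for illustration, not anything from the argument itself.

```python
import numpy as np

# Simulate many trials of a useless treatment: both arms are drawn from
# the same standard normal distribution, so the null hypothesis is true
# by construction.
rng = np.random.default_rng(0)
n_trials, n_per_arm = 20_000, 50

treatment = rng.standard_normal((n_trials, n_per_arm))
control = rng.standard_normal((n_trials, n_per_arm))

# Two-sample z statistic; under the null it is approximately N(0, 1).
z = (treatment.mean(axis=1) - control.mean(axis=1)) / np.sqrt(2 / n_per_arm)

# Fraction of trials "significant" at the two-sided 0.05 level (|z| > 1.96).
false_positive_rate = (np.abs(z) > 1.96).mean()
print(f"false positive rate ≈ {false_positive_rate:.3f}")
```

Run it and roughly 5% of the simulated trials of this truly useless treatment come out “significant.”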
Prob(treat is useless | positive trial) * Prob(positive trial) = Prob(positive trial | treat is useless) * Prob(treat is useless)

(This is just the definition of Prob(treat is useless AND positive trial), written two ways.)
Expanding, and using the law of total probability:
Prob(treat is useless | positive trial) = Prob(positive trial | treat is useless) * Prob(treat is useless) / [Prob(positive trial | treat is useless) * Prob(treat is useless) + Prob(positive trial | treat is not useless) * Prob(treat is not useless)]
Now we can substitute, assuming that our treatment is in fact truly useless, i.e. Prob(treat is useless) = 1:
Prob(treat is useless | positive trial) = p-value * 1 / (p-value * 1 + who cares * 0) = 1
That is to say, if we know the treatment is useless, the clinical trial offers no new knowledge, even if it was well conducted.
Drugs that enter human trials are required to have some evidence for efficacy and safety, such as that gained from in vitro and animal testing. The drug development paradigm isn’t perfect in this regard, but the principle of requiring scientific and empirical evidence for safety and efficacy is sound. When we get better models for predicting safety and efficacy we will all be a lot happier. The point is to reduce the prior probability that the treatment is useless to something low, and to maximize the probability of a positive trial given that the treatment is not useless (that is, the power), which would result in something like:
Prob(treat is useless | positive trial) = p-value * <something tiny> / (p-value * <something tiny> + <something large> * <something close to 1>) = something tiny
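The whole argument fits in a few lines of code. This is a sketch of the formula above, with assumed rates (a 0.05 false-positive rate for a useless treatment and 80% power for an effective one) that are conventional choices, not from the derivation:

```python
# Posterior probability that a treatment is useless given a positive trial,
# as a function of the prior probability that it is useless.
#   alpha: P(positive trial | treat is useless), assumed 0.05
#   power: P(positive trial | treat is not useless), assumed 0.80
def prob_useless_given_positive(prior_useless, alpha=0.05, power=0.80):
    numerator = alpha * prior_useless
    denominator = numerator + power * (1 - prior_useless)
    return numerator / denominator

# Sweep the prior from "certainly useless" down to "well supported".
for prior in (1.0, 0.99, 0.5, 0.01):
    print(f"prior = {prior:4}: posterior = {prob_useless_given_positive(prior):.4f}")
```

With a prior of 1 the positive trial changes nothing: the posterior is still 1, as in the substitution above. With a highly implausible treatment (prior near 0.99), a positive trial still leaves the treatment more likely useless than not; only when preclinical evidence has already driven the prior down does a positive trial yield a small posterior.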
Of course, there are healthy debates regarding the utility of the p-value. I question it as well, given that it requires reference to hypothetical repetitions of trials that can never be run. These debates need to happen among regulators, academia, and industry to determine the best indicators of evidence of efficacy and safety.
But CAM studies have a long way to go before they can even think about such issues.