Thursday, March 19, 2015

Lying with statistics, CAM version

Full disclosure here: at one time I wanted to be a complementary and alternative medicine (CAM) researcher. Or integrative, or whatever the cool kids call it these days. I thought that CAM research would yield positive fruit if researchers could just tighten up their methodology and leave nothing to question. While this is not intended to be a discussion of my career path, I’m glad I did not go down that road.

This article is a discussion of why. The basic premise of the article is that positive clinical trials do not really provide strong evidence for an implausible therapy, for much the same reason that doctors follow up a positive HIV screening test with a stronger, confirmatory test. A positive test provides some, but not conclusive, evidence of HIV simply because HIV is so rare in the population: the positive predictive value of even a pretty good test is poor when the condition is rare. And the predictive value of a pretty good clinical trial is pretty poor if the treatment’s plausibility has not been established.
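To make the screening analogy concrete, here is a minimal sketch in Python. The prevalence, sensitivity, and specificity are illustrative numbers I chose, not figures from the article:

```python
# Positive predictive value (PPV) of a good screening test for a rare
# condition. Illustrative numbers: 1-in-10,000 prevalence, 99.9%
# sensitivity, 99.9% specificity -- a very good test.
prevalence = 0.0001
sensitivity = 0.999
specificity = 0.999

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"P(disease | positive test) = {ppv:.3f}")  # about 0.091
```

Even with a test that is wrong only once in a thousand, fewer than one in ten positives is a true positive when the condition is this rare.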

Put it this way: if we have a treatment that truly does not work (the “null hypothesis” in statistical parlance), there is still a 5% probability that it will show a significant result in a conventional clinical trial run at the usual 0.05 significance level. But let’s turn that on its head using Bayes’ Rule:

Prob(treat is useless | positive trial) * Prob(positive trial) = Prob(positive trial | treat is useless) * Prob(treat is useless)

(This is just the definition of Prob(treat is useless AND positive trial), written two ways.)

Expanding, and using the law of total probability:

Prob(treat is useless | positive trial) = Prob(positive trial | treat is useless) * Prob(treat is useless) / [Prob(positive trial | treat is useless) * Prob(treat is useless) + Prob(positive trial | treat is not useless) * Prob(treat is not useless)]

Now we can substitute, assuming that our treatment is in fact truly useless, so that Prob(treat is useless) = 1:

Prob(treat is useless | positive trial) = p-value * 1 / (p-value * 1 + who cares * 0) = 1

That is to say, if we already know the treatment is useless, the clinical trial offers no new knowledge of the result, even if it was well conducted.

Drugs that enter human trials are required to have some evidence for efficacy and safety, such as that gained from in vitro and animal testing. The drug development paradigm isn’t perfect in this regard, but the principle of requiring scientific and empirical evidence for safety and efficacy is sound. When we get better models for predicting safety and efficacy, we will all be a lot happier. The point is to reduce the prior probability of futility to something low and to maximize the probability of a positive trial given that the treatment is not useless (the power), which would result in something like:

Prob(treat is useless | positive trial) = p-value * something tiny / (p-value * something tiny + something large * something close to 1) = something tiny
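Here is the same arithmetic as a short Python sketch. The 0.05 false-positive rate and 0.80 power are conventional illustrative values, not numbers from any particular trial:

```python
# Posterior probability that a treatment is useless, given a positive
# trial, as a function of the prior probability that it is useless.
# alpha = Prob(positive trial | useless);
# power = Prob(positive trial | not useless).
alpha, power = 0.05, 0.80

def prob_useless_given_positive(prior_useless):
    numerator = alpha * prior_useless
    denominator = alpha * prior_useless + power * (1 - prior_useless)
    return numerator / denominator

for prior in (0.50, 0.95, 0.999, 1.0):
    posterior = prob_useless_given_positive(prior)
    print(f"prior = {prior:5.3f} -> posterior = {posterior:.3f}")
```

With a prior probability of uselessness near 1, even a positive, well-run trial leaves the posterior near 1 (0.999 becomes about 0.984); with a 50/50 prior, the same positive trial is quite persuasive (0.5 drops to about 0.059).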

Of course, there are healthy debates regarding the utility of the p-value. I question it as well, given that it requires a reference to trials that can never be run. These debates need to be had among regulators, academia, and industry to determine the best indicators of evidence of efficacy and safety.

But CAM studies have a long way to go before they can even think about such issues.

Monday, March 16, 2015

Lying with statistics, anti-vax edition 2015

Sometimes Facebook’s suggestions of things to read lead to some seriously funny material. After clicking on a link about vaccines, Facebook recommended I read an article about health outcomes in unvaccinated children. Reading this rubbish made me as annoyed as a certain box of blinking lights, but it again affords me the opportunity to describe how people can confuse, bamboozle, and twist logic using bad statistics.

First of all, Health Impact News has all the markings of a crank site. For instance, its banner claims it is a site for “News that impacts your health that other media sources may censor.” This in itself ought to be a red flag, just like Kevin Trudeau’s Natural Cures They Don’t Want You to Know About.

But enough about that. Let’s see how this article and the study it references abuse statistics.

First of all, this is a bit of a greased pig. Their link leads to a malformed PDF file on a site called vaccineinjury.info. The site’s apparent reason for existence is to host a questionnaire for parents who did not vaccinate their children. So I’ll have to go on what the article says. There appears to be another discussion on the vaccineinjury.info site, which I’ll get to in a moment.

The authors claim

No study of health outcomes of vaccinated people versus unvaccinated has ever been conducted in the U.S. by CDC or any other agency in the 50 years or more of an accelerating schedule of vaccinations (now over 50 doses of 14 vaccines given before kindergarten, 26 doses in the first year).

Here’s one. A simple PubMed search will bring up others fairly quickly. These don’t take long to find. What happens after this statement is a long chain of unsupported assertions about what data the CDC has and has not collected, which I really don’t have an interest in debunking right now (and so leave as an exercise).

So on to the good stuff. They have a pretty blue and red bar graph that’s just itching to be shredded, so let’s do it. The graph is designed to demonstrate that vaccinated children are more likely to develop certain medical conditions, such as asthma and seizures, than unvaccinated children. Pretty scary stuff, if their evidence were actually true.

One of the most important principles in statistics is defining your population. If you fail at that, you might as well quit, get your money back from SAS, and call it a day, because nothing that comes after is meaningful; you might as well make up a bunch of random numbers.

This study fails miserably at defining its population. As best I can tell, the comparison is between the population in an observational study called KIGGS and respondents to an open-invitation survey conducted at vaccineinjury.info.

What could go wrong? Rhetorical question.

We don’t know who responded to the vaccineinjury.info questionnaire, but it is aimed at parents who did not vaccinate their children. This pretty much tanks the rest of their argument. From what I can tell, these respondents seem to be motivated to give answers favorable to the antivaccine movement. That the data they present are supplemented with testimonials gives this away. They are comparing apples to rotten oranges.
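A quick simulation shows how far self-selection alone can move the numbers. Assume, purely for illustration, that the true asthma rate is identical in both groups and that parents of affected children are simply less likely to respond to (or report the condition on) the web survey. The rate and response probabilities below are made up:

```python
import random

random.seed(1)

TRUE_RATE = 0.10  # same true asthma rate in both groups (assumption)
N = 100_000

# Population-based study (like KIGGS): every sampled child is recorded.
kiggs_sample = [random.random() < TRUE_RATE for _ in range(N)]

# Open web survey: parents of affected children respond only 40% of the
# time; parents of healthy children respond 90% of the time. These
# response probabilities are invented to illustrate self-selection.
survey_sample = []
for _ in range(N):
    sick = random.random() < TRUE_RATE
    responds = random.random() < (0.40 if sick else 0.90)
    if responds:
        survey_sample.append(sick)

print(f"population study rate: {sum(kiggs_sample) / len(kiggs_sample):.3f}")   # ~0.100
print(f"web survey rate:       {sum(survey_sample) / len(survey_sample):.3f}")  # ~0.047
```

Identical true rates, yet the self-selected sample looks less than half as sick, with no effect of vaccination anywhere in the simulation.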

The right way to answer a question like this is a matched case-control study of vaccinated and unvaccinated children. An immunologist is probably the best person to determine which factors need to be included in the matching. That way, an analysis conditioned on the matching can point clearly to the effect of the vaccinations, rather than leave open the question of whether differences in outcomes were due to differences in inherent risk factors.
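For what it’s worth, the analysis such a design supports is simple. In a matched-pairs analysis, only pairs discordant on exposure carry information; here is a minimal sketch with hypothetical counts:

```python
# Matched-pairs (conditional) odds ratio. Each pair is one case (a child
# with the condition) matched to one control on the chosen risk factors.
# Only pairs discordant on exposure (vaccination) are informative.
# The counts below are hypothetical.
case_vaccinated_control_not = 25  # case vaccinated, matched control not
case_not_control_vaccinated = 22  # case unvaccinated, matched control vaccinated

odds_ratio = case_vaccinated_control_not / case_not_control_vaccinated
print(f"matched odds ratio = {odds_ratio:.2f}")  # ~1.14; near 1 means no signal
```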

I’m wondering if there isn’t some ascertainment bias going on as well. Though I really couldn’t tell what the KIGGS population was, it was represented as the vaccinated population. So in addition to imbalances in risk factors, I’m wondering if the “diagnosis” in the unvaccinated population was derived from asking parents which medical conditions their children have. In that case, we have no clue what the real rate is, because we are comparing parents’ judgments (parents who are probably more likely to ignore mainstream medicine, at that) with, presumably, a GP’s more rigorous diagnosis. That’s not to say that no children in the survey were diagnosed by an MD, but without that documentation (which this web-based survey isn’t going to be able to provide), the red bars in the pretty graph are essentially meaningless. (Which they were even before this discussion.)

But let’s move on.

The vaccineinjury.info site cites some other studies that seem to agree with their little survey. For instance, McKeever et al. published a study in the American Journal of Public Health in 2004, from which the vaccineinjury.info site claims an association between vaccines and the development of allergies. However, that apparent association, as stated in the study, is possibly the result of ascertainment bias (the association was only strong in the stratum with the least frequent GP visits). Even objections to the discussion of ascertainment bias leave the evidence for an association between vaccines and allergic diseases unclear.

The vaccineinjury.info site also cites the Guinea-Bissau study reported by Kristensen et al. in BMJ in 2000. They claim, falsely, that the study showed higher mortality in vaccinated children.

They also cite a New Zealand study.

What they don’t do is describe how they chose the studies to be displayed on the web site. What were the search terms? Were these studies cherry-picked to demonstrate their point? (Probably, but they didn’t do a good job.)

What follows the discussion of other studies is an utter waste of internet space. They report the results of their “survey,” I think. Or somebody else’s survey. I really couldn’t figure out what was meant by “Questionnaire for my unvaccinated child (‘Salzburger Elternstudie’)” (the “Salzburg parent study”). The age breakdown for the “children” is interesting: 2 of the 1,004 “children” were over 60! At any rate, if you are going to talk about diseases in children, you need to present the results by age, because, well, age is a risk factor in disease development. But they did not do this.
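To see why this matters, here is a toy example: two groups with identical age-specific rates but different age mixes produce very different crude rates. All numbers are invented:

```python
# Identical age-specific rates, different age mixes, very different
# crude (pooled) rates. All numbers are invented for illustration.
rates_by_age = {"0-4": 0.04, "5-9": 0.08, "10-17": 0.12}

group_a = {"0-4": 700, "5-9": 200, "10-17": 100}  # mostly young children
group_b = {"0-4": 100, "5-9": 200, "10-17": 700}  # mostly older children

def crude_rate(age_counts):
    expected_cases = sum(rates_by_age[age] * n for age, n in age_counts.items())
    return expected_cases / sum(age_counts.values())

print(f"group A crude rate: {crude_rate(group_a):.3f}")  # 0.056
print(f"group B crude rate: {crude_rate(group_b):.3f}")  # 0.104
```

Same age-specific rates, nearly a twofold difference in the crude rates. Without an age breakdown, a pooled comparison like theirs is uninterpretable.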

What is interesting about the survey, though, is the reasons the parents did not vaccinate their children, if only to give a preliminary notion of the range of responses.

In short, vaccineinjury.info, and the reporting site Health Impact News, present statistics that are designed to scare rather than inform. Proper epidemiological studies, contrary to the sites’ claims, have been conducted, and they provide no clear evidence for the notion that vaccinations cause allergies except in rare cases. In trying to compile evidence for their claims, the sites failed to show that they did a proper systematic review, and they even misquoted the conclusions of the studies they presented.

All in all, a day in the life of a crank website.