## Monday, July 7, 2008

### There is only one word for results like this: ouch!

The mysterious Condor has posted a (slightly enhanced, but presumably not in a data-changing fashion) graph summarizing a subgroup analysis of ENHANCE, and it shows that adding ezetimibe to a statin regimen didn't do any good. At all. The one apparently statistically significant good result, for the 2nd quartile of IMT, looks spurious to me and makes no sense except as a Type I error. By the same token, the statistically significant bad result for the 3rd quartile looks like another Type I error. Overall, this looks like nothing, or if anything a modestly negative effect.

Ouch!

Labels: ENHANCE, graphics, Type I error

## Sunday, July 6, 2008

### How can "functional unblinding" work?

So, continuing a series on blinding (i.e. hiding treatment group assignments from study personnel until all the data have been collected and cleaned and the analysis programs written), I will talk about possible ways to do so-called "functional unblinding" -- that is, effectively getting an answer on a drug's treatment effect before the treatment codes are released for analysis. Here I will assume that the treatment codes actually remain sealed, so we aren't talking about outright cheating. Unsealing the codes early is a serious breach of ethics and merits its own discussion, but I'm saving that for another time.

Also, this post was inspired by the ENHANCE trial and the fallout from it, but I'm not going to make any further comment about it except to say that there are a lot of other features to that situation that make it, at best, appear suspicious. (And to the wrong people.)

So, on to the question: "Is it possible to determine a treatment effect without unblinding the trial?" My answer: a very risky yes, and only in some circumstances. I think it is going to be very difficult to show that no treatment effect exists, while a huge treatment effect will be clear. Since I'll be throwing several statistical tools at the problem, this post will only be the first in a series.

The first method is graphical: kernel density estimation. This method has the nice feature that it can be done quickly in R or SAS (and, I think, most other statistical packages) and produces nice graphs. Here I simulated 3 possible drug trials. In the first, the treatment had no effect whatsoever. In the second, the drug had what I would consider a moderate effect (equal to the standard deviation of the outcome). In the third, the drug had a huge effect (3 times the standard deviation of the outcome, probably larger than would commonly be seen in drug trials today). I ran the default kernel density estimate in R (the density() function with defaults) on the pooled, still-blinded data and came up with the image accompanying this post. The top graph looks like a normal distribution, as one would expect. The middle graph also looks like a normal distribution, but more spread out than the top one. The third clearly shows two groups.
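For readers who want to reproduce the idea without R, here is a rough Python sketch of the same simulation. It hand-rolls a Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth (roughly what R's density() does by default) and counts the modes of the pooled, blinded data. The sample size and effect sizes are my own illustrative assumptions, not taken from any real trial.

```python
import math
import random

def gaussian_kde(data, grid):
    """Evaluate a Gaussian KDE on a grid, Silverman rule-of-thumb bandwidth."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    h = 1.06 * sd * n ** (-1 / 5)  # Silverman's bandwidth (requires sd > 0)
    return [
        sum(math.exp(-0.5 * ((g - x) / h) ** 2) for x in data)
        / (n * h * math.sqrt(2 * math.pi))
        for g in grid
    ]

def count_modes(density):
    """Count strict local maxima in the estimated density curve."""
    return sum(
        1 for i in range(1, len(density) - 1)
        if density[i - 1] < density[i] > density[i + 1]
    )

random.seed(1)
n = 200  # patients per arm (illustrative)
for effect in (0.0, 1.0, 3.0):  # treatment effect in units of the outcome SD
    placebo = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    pooled = placebo + treated  # what a blinded analyst actually sees
    grid = [i / 10 for i in range(-50, 80)]
    modes = count_modes(gaussian_kde(pooled, grid))
    print(f"effect = {effect} SD: {modes} mode(s) in the pooled density")
```

As in the graphs, the no-effect and moderate-effect trials tend to look unimodal (the moderate one just more spread out), while the huge effect splits the pooled density into two visible bumps.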

Identifying huge effects seems to be pretty easy, at least by this method. Identifying moderate effects is a whole lot harder, and distinguishing them from no effect is a bit risky.

However, this isn't the only method of analyzing this problem, and so I will talk about some other methods next time.

## Wednesday, July 2, 2008

### Grassley thinks blinding doesn't matter, at least in ENHANCE

So, I've been meaning to discuss this for some time and will do so, but for now I will note that Sen. Grassley thinks blinding doesn't matter in the ENHANCE trial -- that simulations could have been run to assess statistical significance on the basis of blinded data.

Of course, this is disturbing on several levels. I'm going to argue that this kind of analysis is possible but risky. At the same time, it makes blinding arguments much weaker. As it stands now, anyone who lays eyes on an unsealed randomization schedule, the results of an unblinded analysis, or any summary that might involve unblinded data is considered unblinded and therefore should not make decisions that influence further conduct of the study. The worst-case scenario of this new argument is that anybody with blinded data and the knowledge of how to assess statistical significance from it will be considered unblinded.
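To make the worry concrete, here is one way such a blinded assessment could work in principle. This is my own illustration, not anything proposed in the Grassley letter: with 1:1 randomization and a known within-arm standard deviation sigma, a true mean difference delta inflates the pooled (blinded) variance to sigma^2 + delta^2/4, so an analyst who never sees the codes can still back out an estimate of the effect size from the pooled spread.

```python
import math
import random

def implied_effect(pooled, sigma):
    """Estimate |delta| from blinded pooled data, assuming 1:1 randomization
    and a known within-arm SD sigma (both strong assumptions)."""
    n = len(pooled)
    m = sum(pooled) / n
    var = sum((x - m) ** 2 for x in pooled) / (n - 1)
    excess = max(var - sigma ** 2, 0.0)  # variance beyond the within-arm part
    return 2.0 * math.sqrt(excess)       # since var ~= sigma^2 + delta^2 / 4

random.seed(2)
sigma, n = 1.0, 5000  # large n so the illustration isn't noisy
for delta in (0.0, 1.0, 3.0):
    arm_a = [random.gauss(0, sigma) for _ in range(n)]
    arm_b = [random.gauss(delta, sigma) for _ in range(n)]
    est = implied_effect(arm_a + arm_b, sigma)  # codes never consulted
    print(f"true effect {delta}: blinded estimate {est:.2f}")
```

The catch, and the reason I call this risky, is that it leans hard on knowing sigma: any extra between-patient variability gets misread as a treatment effect.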

Now, we're getting into murky territory.
