Sunday, May 25, 2008

Blinding and randomization, Part II

I could title this post "things that go bump with blinding and randomization." The nice, clean picture I presented in the first part of this series works much of the time, but problems do come up.

Before I go into them, there's one aspect I didn't touch on in the first part: the business aspect. That's the concern in the Schering-Plough case -- if executives were unblinded to trial results, they could make financial decisions, such as "cashing out," while shareholders are stuck footing the bill for the trial. I usually don't deal with that end of things, but it's a very important one for company management.

Ok, so back to things that go bump:

  1. It's hard to make a placebo. Sometimes it's really hard to match the drug. In an active-controlled trial, what happens if the active control is an intravenous injection and the experimental treatment is a pill? You could use a double-dummy design, where everybody gets both an IV and a pill (and only one is active), but the more complicated the scheme, the more room there is for error.
  2. The primary endpoint is not the only way a drug shows itself. For example, if your drug is known to dry out skin, and a patient presents with a severe skin-drying adverse event, your investigator has a pretty good idea of what the assigned treatment is.
  3. If all outcomes come back pretty close to each other, relative to the expected variability, you have a pretty good idea the treatment has no effect. While this may not unblind individual patients, it gives a pretty good idea of trial results during the writing of analysis programs -- and, presumably, to senior management, who can then make decisions based on "very likely" results.
It's the third case I want to treat in some detail, and the third case that relates to the ENHANCE trial.
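To make the third case concrete, here's a minimal simulation sketch (all numbers hypothetical). Even with treatment labels hidden, the pooled standard deviation of all outcomes carries information: a real difference between arms inflates it above the assumed within-group variability, while a null treatment leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 500, 1.0  # hypothetical per-arm size and within-group SD

def blinded_sd(delta):
    """Pooled SD of all outcomes with treatment labels hidden."""
    placebo = rng.normal(0.0, sigma, n)
    active = rng.normal(delta, sigma, n)
    return np.std(np.concatenate([placebo, active]), ddof=1)

# With no treatment effect, the blinded SD stays near the within-group SD;
# with a real effect delta, it inflates toward sqrt(sigma^2 + delta^2 / 4).
print(f"blinded SD, delta = 0: {blinded_sd(0.0):.3f}")
print(f"blinded SD, delta = 1: {blinded_sd(1.0):.3f}")
```

So a programmer staring at blinded listings where everything sits within the expected noise has, in effect, already seen the shape of the result.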

Wednesday, May 14, 2008

Blinding and randomization: the basics

Before I start talking too much about whether it's possible to effectively unblind a study without knowing treatment codes, it will be helpful to establish why blinding (also called masking) is important. In a nutshell, these techniques, when applied correctly, counteract our unconscious tendency to bias results in favor of an experimental treatment.

Randomization is the deliberate introduction of chance into the process of assigning subjects to treatments. This technique not only establishes a statistical foundation for the analysis methods used to draw conclusions from the trial data, but also corrects the well-known tendency of doctors to assign sicker patients to a particular treatment group (for example, the experimental treatment in a placebo-controlled trial, or the active control when that control is well established).
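For illustration, here's a toy sketch of one common way that chance gets introduced: permuted-block randomization, which keeps the group sizes balanced as subjects enroll (arm labels, block size, and seed are all hypothetical):

```python
import random

def block_randomize(n_subjects, block_size=4, seed=2008):
    """Assign subjects to arms 'A'/'B' in randomly permuted blocks,
    so group sizes stay balanced throughout enrollment."""
    assert block_size % 2 == 0, "block must split evenly between two arms"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_subjects:
        block = ['A'] * (block_size // 2) + ['B'] * (block_size // 2)
        rng.shuffle(block)  # the doctor cannot steer the next assignment
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = block_randomize(12)
print(schedule, schedule.count('A'), schedule.count('B'))
```

The key point is that neither doctor nor patient can predict or influence the next assignment, which is exactly what removes the sicker-patients-to-one-arm bias.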

Blinding is keeping the treatment assignment secret from the patient, the doctor, or both. (Statisticians and other study personnel are kept blinded as well.) Single-blind studies withhold the treatment assignment from the subject. Double-blind studies withhold it from both the subject and the doctor. I have run across a case where the doctor was blinded to the treatment assignment but not the subject, but such designs are rare.

For some examples of kinds of bias handled by these techniques, see here.

If a particular patient experiences problems with treatment such that the treatment assignment has to be known, we have ways of exposing just that one patient's assignment without exposing everybody's. If all goes well, this is a relatively rare event. That's a big "if."

At the end of the study, ideally we produce the statistical analysis with dummy randomization codes in order to get a "shell" of what the statistical analysis will look like. This analysis is conducted according to a prespecified plan documented in the study protocol and a statistical analysis plan. In many cases, we draw up a shell of the tables in Microsoft Word or other editing software. (I've heard about some efforts at using the OASIS table model for both shells and analysis.) When we are satisfied with the results, we drop in the true randomization codes (seen for the first time) and hope nothing strange happens. (Usually, very little goes awry unless there was a problem in specifying the data structure of the true randomization codes.)
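As a toy illustration of that dry-run-then-unblind workflow (all subject IDs, outcomes, and codes invented -- real trials use validated systems, not a script like this), the same analysis program is exercised first on dummy codes, then on the true codes:

```python
from statistics import mean

# Hypothetical outcome data keyed by subject ID.
outcomes = {101: 5.2, 102: 7.1, 103: 4.8, 104: 6.9}

def summary_table(codes):
    """Per-arm (n, mean) summary -- the kind of number the table shell holds."""
    arms = {}
    for subj, arm in codes.items():
        arms.setdefault(arm, []).append(outcomes[subj])
    return {arm: (len(vals), round(mean(vals), 2)) for arm, vals in arms.items()}

# Dry run with dummy codes, to debug the program and fill in the shell...
dummy_codes = {101: 'A', 102: 'A', 103: 'B', 104: 'B'}
print(summary_table(dummy_codes))

# ...then drop in the true randomization codes, seen for the first time.
true_codes = {101: 'B', 102: 'A', 103: 'B', 104: 'A'}
print(summary_table(true_codes))
```

If the true codes arrive in a different structure than the dummies (different arm labels, extra subjects), the swap is exactly where things break -- hence the parenthetical caveat above.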

Any analysis that occurs afterward might be used to generate hypotheses, but isn't used to support an efficacy or safety claim. If something interesting does come up, it has to be confirmed in a later study.

Ideally.

What happens when things aren't so ideal? Stay tuned.

Tuesday, May 13, 2008

Ban abbrs!

So, maybe this is a little bit radical, but I think we should stop using abbreviations. We have the technology to expand abbreviations in writing automatically, so the time-saving advantage is pretty much gone. For anyone who is a nonlinear reader or writer, the rules create a huge waste of time: you have to track the first use of each abbreviation in order to spell it out there and add it to a table of abbreviations. For a nonlinear writer, who might write the introduction after the body, this gets cumbersome. For a nonlinear reader, such as someone who uses SQ3R or a similar method, it's an irritation to have to thumb back to a table of abbreviations (and another huge waste of time).
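The "we have the technology" part really is trivial; a few lines can expand every abbreviation from a glossary (a hypothetical sketch -- the glossary entries here are my own examples):

```python
import re

# Hypothetical glossary; in practice this would come from a style file.
GLOSSARY = {'AE': 'adverse event', 'SAP': 'statistical analysis plan'}

def expand(text, glossary=GLOSSARY):
    """Replace each whole-word abbreviation with its full expansion."""
    pattern = re.compile(r'\b(' + '|'.join(map(re.escape, glossary)) + r')\b')
    return pattern.sub(lambda m: glossary[m.group(1)], text)

print(expand("Report each AE per the SAP."))
```

Run that over a document at publication time and nobody ever needs to hunt down a first use again.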

So, how about it, style guide writers? Time to move grammar out of the IBM Selectric age?

Thursday, May 8, 2008

Can the blind really see?

That's Sen. Grassley's concern, stated here. (A thorough and well-done blog with some eye candy, though I don't agree with a lot of opinions expressed there.)

I've wondered about this question since even before the ENHANCE trial came to light, but, since I'm procrastinating on getting out a deliverable (at 11:30 pm!), I'll just say that I plan to write about this soon.

Saturday, May 3, 2008

Critical thinking about vaccines

I encourage people to think critically about vaccines (just like any other topic). However, pseudoskepticism about vaccines (just like any other topic) is harmful because discouraging others from vaccinating leads to a rise in, for example, whooping cough. Orac has also written about the rise in measles in the wake of decreasing vaccination.

Remember, the decisions you make about vaccination affect others as well. Some critical thinking about vaccines is good -- for example knowing when to go ahead or delay a shot due to illness (or knowing what conditions may lead to a life-changing reaction). However, a blind rejection is as bad as blind acceptance.

Friday, May 2, 2008

Well, why not?

Since I'm posting, I might as well point toward Derek Lowe's post about the failure of the Singulair/Claritin idea. Too bad for Merck, though one has to wonder how long this combination-drug strategy among big pharma is going to play out. After all, wouldn't it be about as cheap to take the two pills separately (since one is over-the-counter) as to ask your insurance to pay for a prescription combination? Heck, a lot of people take the combination separately now, anyway.

So, at any rate, Derek deduces that the problem lies in efficacy. Is it possible to support a marketing claim that the combination is more than the sum of its parts? Merck apparently thinks so, but the FDA does not. Unless there's an advisory committee meeting on this, or the drug eventually gets approved, or efforts to get the results of all clinical trials posted publicly succeed, we won't know for sure. What I do know is that for one of these combinations to gain marketing approval, at the very least there has to be a statistically significant synergistic effect -- the treatment effect of the combination has to be greater than the sum of the treatment effects of the drugs alone. Studies that demonstrate this effect tend to require a lot of patients, especially if multiple dose levels are involved. It isn't easy, and I've known more than one combination development program to fizzle out.
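To see why such trials need so many patients, here's a rough simulation of a 2x2 factorial design with a made-up half-point synergistic effect (all means, SDs, and sample sizes invented). The interaction contrast estimates the effect of the combination beyond the sum of its parts, and its standard error shrinks only with the per-cell sample size:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical subjects per cell; synergy contrasts need large samples

# Hypothetical 2x2 factorial: each drug alone adds 1.0, and the
# combination adds 0.5 on top of the additive 2.0 (the "synergy").
true_means = {'placebo': 0.0, 'A': 1.0, 'B': 1.0, 'A+B': 2.5}
cells = {arm: rng.normal(mu, 2.0, n) for arm, mu in true_means.items()}

# Interaction contrast: (A+B) - A - B + placebo. Synergy means > 0.
contrast = (cells['A+B'].mean() - cells['A'].mean()
            - cells['B'].mean() + cells['placebo'].mean())
se = math.sqrt(sum(c.var(ddof=1) / n for c in cells.values()))
z = contrast / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
print(f"estimated synergy: {contrast:.2f} (SE {se:.2f}), z = {z:.2f}, p = {p:.4f}")
```

Note that the contrast sums noise from all four cells, so detecting a small synergy on top of sizable main effects takes far more patients than detecting either drug's effect alone -- which is why these programs fizzle.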

Update: but see this serious safety concern for Singulair reported by Pharmalot.

It's easy to make silly claims when you take numbers out of context

I often respect Mark Schauss, but when he airs his hatred of the pharmaceutical and healthcare industries, his logic tends to go out the window.

Take, for example, his latest silly claim: "stay out of hospitals to live longer." Ok, I guess one could make the argument that behaviors or genetic predispositions that land one in a hospital would probably tend to shorten life. Fair enough. But rather than taking that fairly obvious argument, we are treated to a naked number: 99,000 deaths from nosocomial (hospital-acquired) infections per year. Rather than delve into that number, Mark simply calls it "unacceptable."

Granted, we all want to reduce that number. But let's take a closer look by reviewing the report on which Mark bases his post. (Link is a pdf.)
    The infection rate per 1,000 patient-days was highest in ICUs (13.0), followed by high-risk nurseries (6.9), and well-baby nurseries (2.6).
Now, let's think about the claim that people are better off out of the hospital than in it. The highest infection rates are in ICUs and high-risk nurseries. Well-baby nurseries registered as well. Sounds to me like anyone who needs to be in one of these places has some pretty serious problems, and, infections considered, they are better off in the hospital than out of it. I doubt that bolting from the ICU to avoid infection is going to lead to a longer life in the long run.

99,000 is a number we all want to go down to zero, and I suspect that more judicious use of antibiotics, solving the problems of overtired and overworked healthcare practitioners, and avoiding drug dispensing and therapeutic errors will all be part of the solution. But before we draw any silly conclusions from this number, let's see what the problems really are and solve them, rather than cut off our noses to spite our faces.