Wednesday, May 14, 2008

Blinding and randomization: the basics

Before I start talking too much about whether it's possible to effectively unblind a study without knowing the treatment codes, it will be helpful to establish why blinding (also called masking) is important. In a nutshell, blinding and randomization, when applied correctly, counteract our unconscious tendencies to bias results in favor of an experimental treatment.

Randomization is the deliberate introduction of chance into the process of assigning subjects to treatments. This technique not only establishes the statistical foundation for the analysis methods used to draw conclusions from the trial data, but also corrects the well-known tendency for doctors to assign sicker patients to one treatment group (e.g., the experimental treatment in a placebo-controlled trial, or the active control when that control is well-established).
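To make that concrete, here is a minimal Python sketch of a permuted-block scheme, one common way the chance is introduced while keeping the groups balanced. (The function name, block size, and arm labels are all invented for illustration, not any particular system's.)

```python
import random

def permuted_block_schedule(n_subjects, block_size=4,
                            arms=("ACTIVE", "PLACEBO"), seed=20080514):
    """Build a permuted-block randomization schedule.

    Each block holds an equal count of every arm, shuffled, so the
    groups stay balanced throughout enrollment and nobody can
    predict the next assignment from the ones before it.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the arm count"
    rng = random.Random(seed)  # fixed seed: the schedule is reproducible
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

# First eight assignments for a two-arm trial
print(permuted_block_schedule(8))
```

Blocking keeps the arms near an even split at every point during enrollment, which also matters if the trial stops early.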

Blinding is keeping the treatment assignment secret from the patient, the doctor, or both. (Statisticians and other study personnel are kept blinded as well.) Single-blind studies withhold the treatment assignment from the subject; double-blind studies withhold it from both the subject and the doctor. I have run across a case where the doctor was blinded to treatment assignment but the subject was not, but such designs are rare.
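One common way to implement the blind is to dispense treatment under opaque kit codes, with the code-to-treatment key held away from the study team. A toy sketch of that idea, again in Python with made-up names:

```python
import random

def blind_schedule(schedule, seed=42):
    """Replace treatment labels with opaque kit codes.

    The site sees only (subject, kit code); the kit-to-treatment
    key is held separately until the database is locked.
    """
    rng = random.Random(seed)
    kit_codes = rng.sample(range(1000, 10000), len(schedule))
    site_view = [(subj + 1, f"KIT-{code}") for subj, code in enumerate(kit_codes)]
    key = {f"KIT-{code}": arm for code, arm in zip(kit_codes, schedule)}
    return site_view, key

site_view, key = blind_schedule(["ACTIVE", "PLACEBO", "PLACEBO", "ACTIVE"])
print(site_view)  # what doctor and patient see: kit codes only
# `key` stays locked away until the planned unblinding
```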

For some examples of kinds of bias handled by these techniques, see here.

If a particular patient experiences problems with treatment such that the treatment assignment has to be known, we have ways of exposing just that one patient's assignment without having to unblind everybody. If all goes well, this is a relatively rare event. That's a big "if."
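Traditionally those "ways" were sealed code-break envelopes, one per subject, so opening one reveals nothing about the rest; interactive voice or web systems do the same bookkeeping electronically. A hypothetical sketch of that bookkeeping:

```python
def code_break(key, kit_code, reason, log):
    """Reveal a single subject's assignment (an "emergency code break").

    Only the one kit code requested is unblinded, and every break is
    logged with a reason so it can be accounted for at study end.
    """
    log.append((kit_code, reason))
    return key[kit_code]

key = {"KIT-4821": "ACTIVE", "KIT-1307": "PLACEBO"}  # held by the unblinded party
unblind_log = []
print(code_break(key, "KIT-1307", "serious adverse event", unblind_log))
print(unblind_log)
```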

At the end of the study, ideally we first produce the statistical analysis with dummy randomization codes in order to get a "shell" of what the output will look like. This analysis is conducted according to a prespecified plan that is documented in the study protocol and a statistical analysis plan. In many cases, we will draw up a shell of what the tables will look like in Microsoft Word or other editing software. (I've heard about some efforts at using the OASIS table model for both shells and analysis.) When we are satisfied with the results, we drop in the true randomization codes (seen for the first time) and hope nothing strange happens. (Usually, very little goes awry unless there was a problem in specifying the data structure of the true randomization codes.)
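As a toy illustration of why the dry run helps (in real life this would be a SAS or similar program reading a randomization dataset; the numbers here are invented): the analysis program never changes, only the codes file it reads does, so any structural problem shows up on the dummy run.

```python
def summary_table(outcomes, codes):
    """Tabulate the mean outcome by treatment group for a given code file.

    Run first with dummy codes to debug the table shell; once the
    output looks right, rerun with the true codes dropped in.
    """
    groups = {}
    for subj, value in outcomes.items():
        groups.setdefault(codes[subj], []).append(value)
    return {arm: sum(vals) / len(vals) for arm, vals in groups.items()}

outcomes = {101: 4.2, 102: 3.1, 103: 5.0, 104: 2.8}
dummy_codes = {101: "A", 102: "B", 103: "A", 104: "B"}  # arbitrary placeholder
true_codes = {101: "ACTIVE", 102: "PLACEBO", 103: "PLACEBO", 104: "ACTIVE"}

print(summary_table(outcomes, dummy_codes))  # dry run against the shell
print(summary_table(outcomes, true_codes))   # final run: same program, real codes
```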

Any analysis that occurs after unblinding might be used to generate hypotheses, but isn't used to support an efficacy or safety claim. If something interesting does come up, it has to be confirmed in a later study.

Ideally.

What happens when things aren't so ideal? Stay tuned.