Tuesday, July 31, 2007

Biostatistics makes the news, and hope for advances

Black hole or black art, biostatistics (and its mother field, statistics) is a topic people tend to avoid. I find this unfortunate, because it makes discussions of drug development and surveillance news rather difficult.

Yet these discussions affect millions of lives, from Avandia to echinacea to zinc. So I get tickled pink when a good discussion of statistics comes out in the popular press. The article even quoted two biostatisticians who know what they are talking about, Susan Ellenberg and Frank Harrell, Jr. Thanks, BusinessWeek! Talking about the pros and cons of meta-analysis is a difficult task even if you're aiming for an audience of statisticians. To tackle the topic in a popular news magazine is courageous, and I hope it establishes a trend.

On the other hand, I have a few friends who cannot pick up a copy of USA Today without casting a critical eye. It turns out they had a professor who was constantly mining the paper for examples of bad statistical graphics. (I have nothing against USA Today. In fact, I've appreciated their treatment of trans fats.)

In other news, two new books on missing data have been released this year. Little and Rubin have released the second edition of their useful 1987 book, and Molenberghs and Kenward have come out with a book designed specifically for missing data in clinical studies. I ended up picking up the latter for its focus, and I attended a workshop earlier this year by Geert Molenberghs that was pretty good. I'm very glad these books have been released, because they're sorely needed. And at the Joint Statistical Meetings this year, there was a very good session on missing data (including a very good presentation by a colleague). I hope this means that, in the future, we can think more intelligently about how to handle missing data because, well, in clinical trials you can count on patients dropping out.

Friday, July 20, 2007

Whistleblower on "statistical reporting system"

Whether you love or hate Peter Rost (and there seems to be very little in between), you can't work in the drug or CRO industry and ignore him. Yesterday, he and Ed Silverman (Pharmalot) broke a story on a director of statistics who blew the whistle on Novartis. Of course, this caught my eye.

While I can't really determine whether Novartis is "at fault" from these two stories (and related echoes throughout the pharma blogs), I can tell you about statistical reporting systems, and why I think these allegations could impact Novartis's bottom line in a major way.

Gone are the days of doing statistics with pencil, paper, and a desk calculator. These days, and especially in commercial work, statistics is all done by computer. Furthermore, no statistical calculation is done in a vacuum. In a clinical trial especially, there are thousands of these calculations, which must be integrated and presented so that they can be interpreted by a team of scientists and doctors who then decide whether a drug is safe and effective (or, more accurately, whether a drug's benefits outweigh its risks).

A statistical reporting system, briefly, is a collection of standards, procedures, practices, and computer programs (usually SAS macros, but it may involve programs in any language) that standardize the computation and reporting of statistics. Assuming they are well written, these processes and programs are general enough to process the data from any kind of study and produce reports that are consistent across all studies and, hopefully, across all product lines in a company.

For example, there may be one program to turn raw data into summary statistics (n, mean, median, standard deviation) and present them in a standardized way in a text table. Since this is a procedure we do many times, we'd like to be able to just "do it" without having to fuss over the details: we feed in the variable name (and perhaps some other details, like the number of decimal places) and, voila, out comes the table. Not all statistics is that routine (and good for me, because that means job security), but perhaps 70-80% is and can be made more efficient. Other programs and standards take care of titles, footnotes, column headers, formatting, tracking, and validation in a standardized and efficient way. This saves a lot of time in both programming and in the review and validation of tables.
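To make that concrete, here is a minimal sketch of what such a summary-statistics macro might look like. The macro name, parameters, and dataset names are hypothetical, and a production version would also route through shared standards for titles, footnotes, and layout:

  /* Hypothetical reporting macro: compute n, mean, median, and standard
     deviation for one variable and print them in a standardized layout. */
  %macro summtab(ds=, var=, dec=1);
    proc means data=&ds noprint;
      var &var;
      output out=_stats n=n mean=mean median=median std=std;
    run;

    proc print data=_stats noobs;
      var n mean median std;
      format mean median std 8.&dec;
    run;
  %mend summtab;

  /* One call per variable; the details stay hidden behind the macro. */
  %summtab(ds=demog, var=age, dec=1);

The point is that the statistician makes one short call per table and never re-derives the mechanics, which is exactly where the efficiency (and the systemic risk, as we'll see) comes from.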

So far, so good. But what happens when these systems break? As you might expect, you have to pay careful attention to these statistical reporting systems, even going so far as applying some software development life cycle methodology to them. If they break, you affect not just one calculation but perhaps thousands. And there is no way of knowing the extent of the damage: an obscure bug in the code might influence just ten studies out of a whole series, while a more serious bug might affect everything. If the system is applied to every product in house (and it should probably be general enough to apply to at least one category of products, such as all cancer products), the integrity of the data analysis for a whole series of products is compromised.
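One common safeguard in the industry is independent double programming: a second programmer reproduces a key output from the raw data, and the two versions are compared automatically. A sketch of that validation step, with made-up library and dataset names:

  /* Hypothetical validation step: compare the production summary dataset
     against an independently programmed QC version. Any discrepancy
     flags a potential bug somewhere in the reporting system. */
  proc compare base=prod.summary compare=qc.summary listall;
  run;

A clean PROC COMPARE run doesn't prove the system is bug-free, but systematic comparisons like this are how problems in a macro library tend to get caught.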

Allegations were also made that a contract programmer was told to change dates on adverse events. That could be a benign but bizarre request, provided the reasons for the change are well documented: it's better to change dates in the database than at the program level, because changes to a database are easier to audit, and hard-coding specific changes to specific dates keeps a program from being generalizable to other, similar circumstances. Or it could be an ethical nightmare, if the changes were made to make the safety profile of the drug look better. From Pharmalot's report, the latter was alleged.
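To illustrate the generalizability point, here is what a hypothetical program-level date "fix" looks like (subject ID, variable names, and dates invented for the example). The hard-coded values tie the program to one study's data, and the change is invisible to anyone auditing the database:

  /* Hypothetical hard-coded date change at the program level. */
  data adverse;
    set adverse;
    if subjid = '1001-004' and aestdt = '12JUL2005'd then
      aestdt = '19JUL2005'd;  /* no audit trail; breaks generalizability */
  run;

A correction made in the database itself, by contrast, is captured in the database's audit trail and leaves the reporting programs untouched.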

You can guess the consequences of systematic errors in data submitted to the FDA. The FDA has the authority to kick out an application if it has good reason to believe the data are incorrect; the application then has to be completely redone and go through the resubmission process. (The FDA will only do this if the problems are systematic.) This erodes the confidence the reviewers have in the application, and probably in every application submitted by the sponsor who made the errors. That kind of distrust is very costly, resulting in longer review periods, more work to assure the validity of the data, analysis, and interpretation, and, ultimately, lower profits. Much lower.

It doesn't look like the FDA has invoked its Application Integrity Policy on Novartis's Tasigna or any other product. But it has invoked its right to three more months of review time, saying it needs to "review additional data."

So, yes, this is big trouble as of now. Depending on the investigation, it could get bigger. A lot bigger.

Update: Pharmalot has posted a response from Novartis. In it, Novartis reiterates their confidence in the integrity of their data and claims to have proactively shared all data with the FDA (as they should). They also claim that the extension to the review time for the NDA was for the FDA to consider amendments to the submission.

This is a story to watch (and without judgment, for now, since this is currently a matter of "he said, she said"). And, BTW, I think Novartis responded very quickly. (Ed seems to think that 24 hours was too long.)