Sunday, February 28, 2010

How to waste millions of dollars on clinical trials

When I started working for my current company (a clinical research organization), I was presented with a project that had been discontinued due to lack of funding. The sponsor simply wanted to stop enrolling and draw whatever conclusions they could from the data collected so far. We presented some basic summary statistics, and then they presented me with an ethical dilemma: they had found a couple of numbers that "looked good" and wanted me to generate a p-value they could put into a press release to help them get more funding. In statistics this is known as data dredging or p-value hunting. P-values generated this way are known to be extremely biased in favor of the drug being studied, simply because the brain is very good at picking out patterns. Recently, however, I read that this company had a failed Phase 3 trial, which cost millions of dollars.
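To see how large that bias can be, here is a toy simulation of my own (not from that study; the number of endpoints and sample sizes are made up): if a drug with no real effect is measured on twenty endpoints and only the best-looking p-value is reported, a "significant" result turns up most of the time.

```python
# Toy simulation of p-value hunting: under a drug with no true effect,
# scanning many endpoints and reporting only the best-looking p-value
# produces "significant" findings far more often than 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_trials = 2000      # simulated studies
n_endpoints = 20     # endpoints scanned for something that "looks good"
n_per_arm = 50       # patients per arm (illustrative)

false_positives = 0
for _ in range(n_trials):
    # No true treatment effect on any endpoint.
    treatment = rng.normal(0, 1, size=(n_endpoints, n_per_arm))
    control = rng.normal(0, 1, size=(n_endpoints, n_per_arm))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treatment, control)]
    if min(pvals) < 0.05:        # report only the best-looking result
        false_positives += 1

print(f"Chance of a 'press-release' p-value < 0.05 with no real effect: "
      f"{false_positives / n_trials:.0%}")   # roughly 1 - 0.95**20, about 64%
```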

In a previous life, I worked with someone whose large Phase 2 trial failed on its primary endpoint. However, a secondary endpoint looked very good. They commissioned a Phase 3 study with thousands of patients to study and hopefully confirm the new endpoint. However, that study ended up failing as well, and I believe development of the drug was discontinued.

In my opinion, those failed studies could have been avoided. A Phase 2 study need not reach statistical significance (and certainly should not be designed so that it has to), but results in Phase 2 should be robust and strong enough to inspire confidence going into Phase 3. For example, the estimated treatment effect should be clinically relevant, even if the confidence interval is wide enough to extend to 0. Related secondary endpoints should show similar trends, relevant subgroups should show similar effects, and different clinical sites should show consistent trends.

I personally would prefer Bayesian methods, which can quantify the concepts I just listed and can even give a probability of success in a Phase 3 trial (with a given enrollment) based on the treatment effect and variation seen in the Phase 2 trial. However, these methods aren't necessary to apply the concepts above.
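Here is a minimal sketch of that "probability of success" (often called assurance) calculation, with made-up numbers and a normal approximation to the Phase 2 posterior. It is not the method used on any of the studies above; it just shows how averaging Phase 3 power over the uncertainty in the Phase 2 estimate gives a more sober number than plugging in the point estimate as if it were known exactly.

```python
# Sketch of a Bayesian assurance calculation (illustrative values only):
# average the Phase 3 power over the uncertainty in the Phase 2 estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical Phase 2 results.
delta_hat = 0.25    # estimated treatment effect (standardized)
se_phase2 = 0.18    # standard error of that estimate -- the CI crosses 0

# Planned Phase 3 design (unit-SD outcome, two-sided test).
n_per_arm = 300
alpha = 0.05
se_phase3 = np.sqrt(2.0 / n_per_arm)        # SE of the Phase 3 effect estimate
z_crit = stats.norm.ppf(1 - alpha / 2)

# Draw plausible true effects from the approximate Phase 2 posterior,
# then average the Phase 3 power over those draws.
true_effects = rng.normal(delta_hat, se_phase2, size=100_000)
power_given_effect = stats.norm.cdf(true_effects / se_phase3 - z_crit)
assurance = power_given_effect.mean()

naive_power = stats.norm.cdf(delta_hat / se_phase3 - z_crit)
print(f"Power if the Phase 2 estimate were exactly right: {naive_power:.0%}")
print(f"Assurance, averaging over Phase 2 uncertainty:    {assurance:.0%}")
```

With these illustrative numbers, the naive power is in the high 80s while the assurance is closer to two-thirds, which is exactly the kind of gap that should prompt the soul-searching described below.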

In both of the cases I described above, the causes were extremely worthy, and products able to accomplish what the sponsors intended would have been useful additions to medical practice. However, those products have probably now been shelved, millions of dollars too late. The end of Phase 2 can be a very difficult, soul-searching time, especially when a Phase 2 trial gives equivocal or negative results. It's better to shelve the compound, or even run further small proof-of-concept studies, than to waste such large sums of money on failed large trials.

Sunday, February 7, 2010

Barnard’s exact test -- a test that ought to be used more

Barnard’s exact test – a powerful alternative for Fisher’s exact test (implemented in R) | R-statistics blog describes the use of Barnard's test, which I think is preferable to Fisher's exact test.

Barnard's exact test has one further advantage over Fisher's exact: Fisher's exact requires two fixed margins (e.g. both the number of subjects in a treatment group and the number of subjects with a given adverse event), whereas in most settings where it is used only one of the margins is fixed (i.e. the number in a treatment group, but not the number with a given adverse event).

The downside is that not too many software packages implement it. Specifically, SAS doesn't seem to implement it, so it doesn't get much use in the pharmaceutical industry. Having an implementation in R is a good start, so maybe more people will explore it and it will see more use.
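For what it's worth, Python's SciPy library also ships an implementation (scipy.stats.barnard_exact, added in later versions), so a quick comparison is easy to run. The counts below are made up purely for illustration.

```python
# Compare Barnard's and Fisher's exact tests on a small, made-up 2x2 table.
# Rows: adverse event yes/no; columns: treatment, control (group sizes fixed).
from scipy.stats import barnard_exact, fisher_exact

table = [[7, 1],
         [13, 19]]

barnard = barnard_exact(table, alternative="two-sided")
_, fisher_p = fisher_exact(table, alternative="two-sided")

print(f"Barnard's exact p-value: {barnard.pvalue:.4f}")
print(f"Fisher's exact p-value:  {fisher_p:.4f}")
```

Because Barnard's test conditions only on the group sizes rather than on both margins, its p-value will typically be a bit smaller on tables like this, which is the extra power the linked post describes.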

Tuesday, February 2, 2010

Bradford Cross's "100 proof" project

While it sounds like a recipe for the perfect whiskey, the 100 proof project is actually a way to revisit the fundamentals of mathematics and how to write a proof. Bradford Cross is taking up the project for just this reason, and we can all benefit.

Every once in a while I pull out the functional analysis book just to brush up on things like the spectral theorem (a ghost of grad school past), but that doesn't have the impact that this project will have.

I found the project via John Cook's AnalysisFact Twitter feed.