Tuesday, September 28, 2010

Future of Clinical Trials conference


Up over at Ask-Cato.


Tuesday, September 21, 2010

Future of clinical trials recap

Clinical trials are complex, so any meeting about the future of trials is going to be complex. Indeed, the Future of Clinical Trials meeting offered something from many perspectives, from recruitment to ethics to statistics. Of course, I viewed most of the presentations with an eye for how to apply them to adaptive trials. So, here are the themes of what I heard in the presentations:


  • Relationships are going to be the most important key to the success of any clinical trial. Pharma companies are starting to outsource in such a way that they expect strategic partner-level participation by the vendor (such as a clinical research organization, or CRO), and the CRO had best bring its A-game in project management and in the design and execution of trials.
  • I had not thought about this particular area before, but business development is going to play a key role as well. We discussed several aspects, but the one that sticks in my mind is structuring contracts in such a way as to minimize change orders. I think this will be helpful because change orders take precious time away from the team and make the relationship more difficult to maintain.
  • Regulatory uncertainty drives us to be more efficient, but we are also uncertain about the changes that are required to make us more efficient. We can expect the difficult regulatory environment to get worse before it gets better because of the recent politicization of drug safety.
  • I think a new wave of technologies is going to make designing and running trials more efficient. Improvements are being made to study startup, clinical trial management, patient recruitment, site selection, and ethics approval of protocols. It may take a while, but any company wanting to stay competitive will need to either employ some of these technologies or use something else to make up the lag in efficiency.
This is only a small overview. I think we will be hearing a lot more about these issues in the years to come.

Wednesday, September 15, 2010

Adaptive trials can be hard on the team

Clinical trials are hard enough to do as it is: many people, coming from many different backgrounds and with many different focuses, have to coordinate their efforts to produce a good quality finished product--a clinical trial with good data that answers the research questions in a persuasive and scientifically valid way. Add to that mix several interim analyses with tight turnaround times (required to make the interim analysis results useful enough to adapt the trial), and you really are putting your sites, clinical, data management, and statistical teams in the pressure cooker. Making stupid mistakes that your teams would not ordinarily make is a real danger (believe me, I have made a few of those myself), and one that can endanger the results of the interim analysis. Here are some ideas to cut down on those stupid mistakes:


  • Overplan during study startup.
  • Get the whole trial execution team, including data management and stats, together around the table in the beginning.
  • Do a dry run of the interim analysis, with everybody around the table. Personally, I think it's worth it to fly people in if they are scattered around the world, but at the very least use web conferencing.
  • Draw a diagram of data flow for the interim analysis. Use Visio, a white board, note cards and string, or whatever is useful. The process of making this diagram is more important than the diagram itself, but the diagram is important as well. Of course, the data flow will more than likely change during the course of the study, but the diagrams can be updated as well.
  • Fuss over details. Little details can trip up the team when the chips are down. Make the process as idiot-proof as possible. I once screwed up an interim analysis because I forgot to change the randomization directory from the dummy randomization (used so that blinded programmers could write programs) to the real randomization (so I could produce the reports). After that, I talked with the lead programmer and refined the report production process even further.
  • Plan for turnover. You like the members of your team, and some of them will go away during the execution of the trial. New members will come on board. Business continuity planning is very important and is increasingly being scrutinized, so scrutinize it on your trials. Because you've overplanned, done dry runs, drawn diagrams, and fussed over details, you've written all of this down, and the content is readily available to put together in a binder (or PDF). You might even repeat the dry run process with new staff.
  • And, for the statisticians, run clinical trial simulations. A well-done simulation will not only show how the trial performs, but also illuminate the assumptions behind the trial. Then further simulations can show how robust the trial is to departures from those assumptions. (A minimal sketch of what I mean appears at the end of this post.)
Running adaptive trials is hard, but a thoughtful process and a prepared staff will help you realize the potential gains that adaptive trials can bring.
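
As an example of the simulation point above, here is a minimal sketch (in Python) of the kind of simulation I have in mind: a hypothetical two-arm trial with a single interim look for early efficacy. The sample size, boundaries, and effect size are all invented for illustration; a real design would derive them properly and would simulate many more scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-arm trial with one interim look at half the sample size.
# All design parameters and effect sizes below are invented for illustration.
N_PER_ARM = 100        # final sample size per arm
INTERIM_FRAC = 0.5     # interim look after half the patients
Z_INTERIM = 2.80       # early-efficacy boundary at the interim (illustrative)
Z_FINAL = 1.98         # final-analysis boundary (illustrative)
TRUE_EFFECT = 0.4      # assumed standardized treatment effect in this scenario
N_SIMS = 10_000

def z_stat(treat, control):
    """Two-sample z statistic assuming unit variance in both arms."""
    n = len(treat)
    return (treat.mean() - control.mean()) / np.sqrt(2.0 / n)

stopped_early = 0
successes = 0
for _ in range(N_SIMS):
    n_interim = int(N_PER_ARM * INTERIM_FRAC)
    treat = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    if z_stat(treat[:n_interim], control[:n_interim]) > Z_INTERIM:
        stopped_early += 1   # stop early for efficacy at the interim look
        successes += 1
    elif z_stat(treat, control) > Z_FINAL:
        successes += 1       # succeed at the final analysis

print(f"Estimated power: {successes / N_SIMS:.2%}")
print(f"Probability of stopping early for efficacy: {stopped_early / N_SIMS:.2%}")
```

Rerunning this under different assumed effects (including no effect at all) shows the design's power and type I error, which is exactly the kind of assumption-probing I mean.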

Saturday, September 11, 2010

Bayesian dose-ranging trials, ASTIN, and execution of adaptive clinical trials

Bayesian adaptive trials have a lot of potential to cut down sample sizes in dose-ranging trials and enable better selection of the best dose to take into pivotal trials. The canonical example is the ASTIN trial, published in Clinical Trials in 2005.

The power of the Bayesian adaptive trial as it is used in ASTIN is that data from all subjects are used to find the dose of choice (in the case of ASTIN, the ED95, or the dose that gives 95% of the maximum efficacy over control). This is in contrast to most parallel-group multi-dose trials, where only data from a particular treatment group are used to estimate the treatment effect at that dose, and also different from dose-effect models such as Emax, where the dose-response curve is assumed to have a certain shape. For example, the ASTIN trial was able to detect a non-monotone dose-response curve (and a good thing, too!).
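
To make the ED95 idea concrete, here is a tiny Python sketch, not the ASTIN model itself (which estimated the curve far more flexibly), that simply reads the ED95 off an assumed Emax curve with made-up parameters:

```python
import numpy as np

# Illustrative only: a simple Emax dose-response curve with invented parameters.
# This sketch just shows what "ED95" means once you have an estimated curve.
def effect_over_control(dose, emax=10.0, ed50=25.0):
    """Mean improvement over control at a given dose (hypothetical units)."""
    return emax * dose / (ed50 + dose)

doses = np.linspace(0, 120, 1201)          # hypothetical dose range studied
curve = effect_over_control(doses)
target = 0.95 * curve.max()                # 95% of the best effect in the range
ed95 = doses[np.argmax(curve >= target)]   # smallest dose reaching that target

print(f"Estimated ED95 over this dose range: {ed95:.1f} (hypothetical units)")
```

The key point in ASTIN is that the whole curve is estimated from all of the accumulating data, so every new patient sharpens the ED95 estimate, which is what allowed the trial to adapt as it went.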

What is notable about the ASTIN trial is that the literature is very transparent about both the methodology and the operational aspects of the trial. Thus, the whole clinical trial project team can learn important lessons about running any adaptive trial, including modern flexible adaptive designs such as ASTIN's.

Though the literature is a little heavy on the math, I recommend that any clinical trial professional check it out (ignoring the math if necessary and concentrating on the overall ideas), starting with the article linked above.

Thursday, September 9, 2010

Meta-analysis is under the microscope again, this time for drug safety

FDA Asks For “Restraint” On Drug Safety Worries - Matthew Herper - The Medicine Show - Forbes

Meta-analysis is a class of techniques used to combine data from multiple, often disparate, studies on a given topic. Essentially, the methodology involves reverse-engineering published literature or data from a website and then statistically combining the results. Of course, as with all statistical analyses, there are several ways of doing a meta-analysis, and within each way there are lots of smaller assumptions that affect how a meta-analysis should be interpreted. Bias, especially publication bias, is a primary worry.
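
As a rough illustration of what "statistically combining the results" can look like, here is a minimal fixed-effect (inverse-variance) sketch in Python; the study names, log odds ratios, and standard errors are all invented:

```python
import math

# Hypothetical study results: log odds ratios and their standard errors,
# as might be reverse-engineered from published papers. Values are invented.
studies = [
    {"name": "Study A", "log_or": 0.40, "se": 0.25},
    {"name": "Study B", "log_or": 0.10, "se": 0.15},
    {"name": "Study C", "log_or": 0.55, "se": 0.30},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1.0 / s["se"] ** 2 for s in studies]
pooled_log_or = sum(w * s["log_or"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

lo = pooled_log_or - 1.96 * pooled_se
hi = pooled_log_or + 1.96 * pooled_se
print(f"Pooled OR: {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Every choice along the way (fixed versus random effects, which studies to include, how to handle studies with zero events) changes the answer, which is part of why these analyses deserve the restraint the reviewers call for.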

In the article linked above, FDA reviewers are calling for restraint in the use of this tool, and for good reason. In the drive toward transparency and open data (or at least open results in our industry), coupled with the wide availability of statistical software, anybody can easily create a meta-analysis. The Vioxx and Avandia examples show that a meta-analysis can kick off a process of scrutiny that eventually causes a drug to be pulled from the market or relegated to a "last resort" status. The ugly downside, of course, is that some drugs may be inappropriately targeted and their use inappropriately reduced due to market withdrawal, patient fears, or refusal of reimbursement. The reviewers note that tiotropium should not follow the path of Vioxx and Avandia despite a negative meta-analysis.

My comment is that they are absolutely right: the meta-analysis is only one aspect of the whole picture. In the cases of Vioxx and Avandia, further investigations were made into the data, and those investigations supported the original meta-analyses. A negative meta-analysis should not automatically target a drug for removal or reduced usage; rather, it should trigger a more detailed analysis that includes the original approval data and any subsequent post-marketing data.

What does likeability have to do with statistics?

Likeability is a very important skill for statisticians. While the best among us are recognized for our skill, being likeable entices our clients to listen to us more closely, and with active listening skills we can better understand our clients' problems. This is Tukey's saying, "Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise," in action.

With so many people now needing statistical services, we statisticians need to be good listeners, good communicators, and likeable. So I heartily recommend Bruna Martinuzzi's Likeability: It's an Inside Job.

Statistics and Statisticians in Clinical Trials – Beginning with the End in Mind

Up over at Ask Cato.

Sunday, September 5, 2010

Great things coming up


In just a couple of weeks, I'll be giving my talk at the Future of Clinical Trials conference. For the next few weeks, I'll be posting material here and at Ask Cato about the best ways to negotiate adaptive clinical trials with the FDA and to design and execute them so they can reach their potential.