Sunday, June 27, 2010

Reproducible research - further options

Mostly just a list of possible reproducible research options, as a follow-up to a previous entry. I still don't like these quite as much as R/Sweave, but they might do in a variety of situations.

  • Inference for R - connects R with Microsoft Office 2003 or later. I evaluated this a couple of years ago, and I think there's a lot to like about it. It is very Sweave-like, with the slight disadvantage that it really prefers the data to be coupled tightly with the report. However, it is easy enough to decouple the two by not using Inference's data features, which is advantageous when you want to regenerate the report after the data are updated. Another disadvantage is that I didn't see a way to redo a report quickly, as you can with Sweave/LaTeX by creating a batch or shell script file (perhaps this is possible with Inference; see the first sketch after this list for the Sweave version). An advantage is that you can also connect to Excel and PowerPoint. If you absolutely require Office 2003 or later, Inference for R is worth a look. It is, however, not free.
  • R2wd (link is to a very nice introduction) is a nice package a bit like R2HTML, except that it writes to a Word file. (SciViews has something similar, I think.) This is unlike many of the other options I've written about, because everything must be generated from R code. It is also a bit rough around the edges (for example, you cannot just write wdBody(summary(lm(y~x,data=foo))); see the second sketch after this list for a workaround). I think some of the dependent packages, such as those from the statconn project, also allow connections to Excel and other applications, if that is needed.
  • There are similar solutions that allow connection to OpenOffice or Google Docs, some of which can be found in the comments section of the previous link.
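Two sketches related to the items above. First, the quick Sweave/LaTeX rebuild: a minimal sketch (the file names are hypothetical) that can live in a file such as rebuild.R and be run with R CMD BATCH rebuild.R whenever the data change.

    # Rebuild the report from scratch; "report.Rnw" is a placeholder name.
    Sweave("report.Rnw")                       # weave R chunks into report.tex
    tools::texi2dvi("report.tex", pdf = TRUE)  # compile the LaTeX to a PDF

Second, the R2wd workaround: capture the printed model summary as text before sending it to Word. This is only a sketch, assuming R2wd's wdGet/wdBody/wdSave functions and a working Word installation; the data and file name are made up.

    # Sketch: send a model summary to Word by capturing it as text first.
    library(R2wd)
    wdGet()                                     # connect to (or start) Word
    foo <- data.frame(x = rnorm(20))
    foo$y <- 2 * foo$x + rnorm(20)
    fit <- lm(y ~ x, data = foo)
    txt <- paste(capture.output(summary(fit)), collapse = "\n")
    wdBody(txt)                                 # now a plain character string
    wdSave("model-summary.doc")                 # hypothetical file name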

The solutions that connect R with Word are very useful for businesses that rely on the Office platform. The solutions that connect to OpenOffice are useful for those who rely on the OpenOffice platform, or who need to exchange documents with Microsoft Office users but do not want to purchase Office. However, for reproducible research in the way I'm describing, these solutions are not ideal, because they allow the display version to be edited easily, which would make it difficult to update the report when new data arrive. Perhaps if there were a way to make the document "comment-only" (i.e. no one could edit the document but could only add comments) this would be workable. It is possible to manually set a protection flag that allows redlining but not source editing of a Word file, but my Windows skills are not quite sufficient to make that happen from, for example, a batch file or other script.
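In case someone's COM skills are better than mine, here is an untested sketch using the RDCOMClient package that might set a Word document to comment-only from R. The file path and password are placeholders, and the protection constant (1, i.e. wdAllowOnlyComments in the Word object model) is from memory.

    # Untested sketch: open a Word file and allow only comments.
    # Requires Windows, Microsoft Word, and the RDCOMClient package.
    library(RDCOMClient)
    word <- COMCreate("Word.Application")
    doc <- word[["Documents"]]$Open("C:\\reports\\report.doc")  # placeholder path
    doc$Protect(1, FALSE, "secret")  # 1 = wdAllowOnlyComments; placeholder password
    doc$Save()
    word$Quit()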

Exchanging with Google Docs is a different beast. Google Docs allows easy collaboration without having to send emails with attachments. I think this idea will catch on, and once IT personnel are satisfied with security, this idea (whether it's Google's system, Microsoft's attempt at catching up, or someone else's) will become the primary way of editing small documents that require heavy collaboration. Again, I'm not clear whether it's possible to share a Google document while putting it into a comment-only mode, which I think would be required for a reproducible research context to work, but I think this technology will be very useful.


The effect of protocol amendments on statistical inference of clinical trials

Lu, Chow, and Zhang recently released an article detailing some statistical adjustments they claim need to be made when a clinical trial protocol is amended. While I have not investigated their method (when there is no obvious or straightforward algorithm, they seem to revert to my first choice: maximum likelihood), I do appreciate the fact that they have considered this issue at all. I have been thinking for a while that the way we tinker with clinical trials during their execution (all for good reasons, mind you) ought to be reflected in the analysis. For example, if a sponsor is unhappy with enrollment, it will often alter the inclusion/exclusion criteria to speed enrollment. This, as Lu et al. point out, tends to increase the variance of the treatment effect (and possibly affect the means as well). But rather than assess that impact directly, we end up analyzing a mixture of populations.
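A toy simulation makes the variance point concrete. The numbers here are invented purely for illustration.

    # Invented numbers: broadening inclusion criteria partway through
    # enrollment mixes two different populations in one analysis.
    set.seed(20100627)
    strict  <- rnorm(100, mean = 10, sd = 2)  # under the original criteria
    relaxed <- rnorm(100, mean = 11, sd = 4)  # under the broadened criteria
    pooled  <- c(strict, relaxed)
    sd(strict); sd(relaxed); sd(pooled)
    # The pooled SD exceeds the SD under the original criteria, and the
    # pooled mean is a blend of two different population means.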

This and related papers seem to be rather heavy on the math, but I will be reviewing these ideas more closely over the coming weeks.


Sunday, June 13, 2010

Drug development from a kindergartener's point of view: sharing is good

Derek Lowe notes this effort by several drug makers to share data from failed clinical trials in Alzheimer's disease. The reason we do not have very good treatments for Alzheimer's is that it's a very tough nut to crack, and we're not even sure the conventional wisdom about the mechanisms (the amyloid plaque theory, for instance) is correct. The hope is that by sharing data from a whole string of failed clinical trials, someone will be able to find something that can move a cure, or at least an effective treatment, forward.

It should be appreciated that participating in this initiative is not easy. The desire to protect the privacy of research participants is embedded deeply within the clinical trial process, and if any of the sensitive patient-level data is to be made public, it has to be anonymized (and the anonymization documented).
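To give a trivial, made-up flavor of the kind of work involved (real anonymization is far more involved than this sketch, and all names and values below are invented):

    # Made-up data frame and two of the simplest de-identification steps:
    # recoding subject IDs and coarsening birth dates to years.
    df <- data.frame(subject_id = c("A001", "A002", "A003"),
                     birth_date = as.Date(c("1950-03-14", "1947-11-02",
                                            "1952-07-30")),
                     adas_cog   = c(22, 18, 25))   # hypothetical scores
    set.seed(1)
    df$subject_id <- sample(sprintf("ANON%03d", seq_len(nrow(df))))
    df$birth_year <- as.integer(format(df$birth_date, "%Y"))
    df$birth_date <- NULL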

The data is also very expensive to collect, and the desire to protect it vigorously as a trade secret is very strong.

I think this effort is notable in light of the drive toward open data discussed by Tim Berners-Lee in his recent TED talk. This effort seems to be the first of several in difficult diseases such as Parkinson's. Stay tuned, because this will be something to watch closely.


Sunday, June 6, 2010

How to waste millions of dollars with clinical trials: MS drug trial 'a fiasco' – and NHS paid for it (The Independent)

The most expensive publicly funded drug trial in history is condemned today as a "fiasco" which has wasted hundreds of millions of pounds of NHS cash and raised fresh concerns about the influence of the pharmaceutical industry.
The scheme involved four drugs for multiple sclerosis launched in the 1990s which were hailed as the first treatments to delay progression of the disabling neurological condition that affects 80,000 people in the UK.
It was set up in 2002 after the National Institute for Clinical Excellence (Nice) unexpectedly ruled that the drugs were not cost effective and should not be used on the NHS. To head off opposition from patient groups and the pharmaceutical industry, the Department of Health established the largest NHS "patient access scheme", to provide patients with the drugs, costing an average £8,000 a year, on the understanding that if they turned out to be less effective than expected, the drug companies would reduce the price.
The first report on the outcome was due after two years but was not published until last December, seven years later. It showed that the drugs failed to delay the onset of disability in patients – defined as walking with a stick or using a wheelchair – and may even have hastened it. On that basis, the drug companies would have had to pay the NHS to use them to make them cost effective.
Despite this finding, the price was not reduced and the scientific advisory group monitoring the scheme advised that "further follow up and analyses" were required. It said that disability may yet improve, the disease may have become more aggressive and the measure of disability used may have underestimated benefit. There were 5,583 patients in the scheme at a cost to the NHS of around £50m a year, amounting to £350m over seven years to 2009. The Multiple Sclerosis Society said twice as many patients were using the drugs outside the trial. That implies a total NHS cost of £700m for a treatment that does not work.
In a series of articles in today's British Medical Journal, experts criticise the scheme. James Raftery, professor of health technology assessment at the University of Southampton and an adviser to Nice, said the scientific advisory group included representatives from the four drug companies, two MS groups, and the neurologists treating patients, all of whom had lobbied for the continued use of the drugs on the NHS.
"The independence of this group is questionable," he said. "Monitoring and evaluation of outcomes must be independent. Transparency is essential, involving annual reports, access to data, and rights to publish. Any of these might have helped avoid the current fiasco."
Professor Christopher McCabe, head of health economics at the University of Leeds, writing with colleagues in the BMJ, said: "None of the reasons for delaying the price review withstand critical assessment." Professor McCabe told The Independent: "We should be asking questions about paying for these drugs. In terms of disability avoidance, the evidence is not there."
Alastair Compston, professor of neurology at the University of Cambridge, defended the scheme. He said that despite a disappointing outcome, the scheme had "advanced the situation for people with multiple sclerosis" by improving understanding and care of the disease. Neil Scolding, professor of neurosciences at the University of Bristol, said the proportion of British patients treated with drugs (10-15 per cent) was tiny compared to France and Germany (40-50 per cent). He said the scheme had also led to the appointment of 250 multiple sclerosis nurses.
"[Though] expensive and flawed, if it turns out to have been no better than a clever wooden horse, then the army of MS healthcare specialists it delivered may make it more than worthwhile," he wrote. The MS Society claimed success for the scheme up to 2007 but after publication of the results last December, withdrew its support.
MS: why the drugs don't work
Multiple sclerosis is a chronic disease. It may take 40 years to run its course. In developing drugs to slow its progression, doctors have used brain scans to show lesions which the drugs appeared to prevent, and gave quicker results. Some experts thought the lesions were the disease but little effort was made to check. But preventing lesion formation does not prevent disability caused by the condition. The drugs deal with the lesions, not the disease.
Jeremy Laurance

Shouldn't those scientific questions about disease progression have been dealt with before the trial began?

Friday, June 4, 2010

Healthcare and the drive toward open data

If you haven't seen the TED talk by Tim Berners-Lee, you should. Follow the link right now if you haven't. It will only take 6 minutes, and I'll be waiting right here when you're done.

So now you've seen the talk, right? Good. We can take the following as axioms in today's technology and culture:
  • There is a strong push toward greater transparency, and resisting that push is futile.
  • With technology, people will create their own open data. When they create their own open data, they will create their own analyses. And when they create their own analyses, they will create their own conclusions.
To see the first point, I direct you to the excellent Eye on FDA, a blog run by Mark Senak, a communications professional who is very familiar with the workings of the FDA and the pharmaceutical industry. Further evidence is the opening of government data, such as the Data.gov site in the US and corresponding sites in other countries. Even municipalities such as San Francisco are joining the movement. And, of course, the efforts of Berners-Lee are further evidence.

The second point is perhaps a little harder to see. However, realize that there are discussion forums where doctors discuss adverse events of drugs, without any intervention or oversight from the companies that create these products. This also gets discussed, in an informal way, on the CafePharma.com website, the wild, wild west of pharma rep discussion boards. With text analysis and web search tools, it is possible to analyze this data too, much like the Google Flu Trends tool.
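Even something as crude as the following base-R sketch hints at what's possible; the posts and terms here are invented for illustration.

    # Crude sketch with invented posts: count mentions of adverse-event
    # terms across a set of forum messages.
    posts <- c("patient reported a severe headache after the second dose",
               "tolerated well, no complaints",
               "another headache, and some nausea this time")
    terms <- c("headache", "nausea", "rash")
    sapply(terms, function(term) sum(grepl(term, posts, ignore.case = TRUE)))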

The traditionally tight-lipped pharmaceutical and biotech industries, which rely on data for their livelihoods, have to adjust, far beyond what is presented in the prescribing information labels. So far, this movement hasn't been kind to pharma companies, as evidenced by the Avandia meta-analysis that essentially brought the drug down and created a multibillion-dollar nightmare for GlaxoSmithKline. That meta-analysis, by the way, was based on data GSK had published through its own efforts at being transparent with clinical trial data. At first, it sounded like Steve Nissen (the primary author of the meta-analysis) was simply trying to bring the company down. However, as more details emerged, it turned out that GSK was aware of the results of the meta-analysis before publication. GSK was caught with its pants down, in essence, not sure as a company how to function in this new environment of data.

Attacking an analysis on the basis of methodological questions doesn't seem to work, at least so far. For example, the analysis method that Nissen used in the Avandia meta-analysis wasn't quite correct, as it ignored studies in which no cardiovascular events occurred. Nor did it help to point out that the oft-cited 43% increase in risk was a bit misleading: the absolute cardiovascular event risk of Avandia was very small, and small absolute risks tend to produce large relative risks.
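To see that last point, consider some illustrative numbers (these are not the actual Avandia figures):

    # Illustrative numbers only, not the actual Avandia data: a small
    # absolute risk can still produce a dramatic-sounding relative risk.
    p_drug    <- 0.0086   # hypothetical event risk on treatment
    p_control <- 0.0060   # hypothetical event risk on control
    p_drug / p_control    # relative risk: about 1.43, a "43% increase"
    p_drug - p_control    # absolute increase: about 0.26 percentage points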

Here, GSK was a leader in opening its data, and it got burned. However, this kind of openness will have to continue, as there is too much public good in having open data. One way or another, we will have to adjust to living in an open-data world.

Update: Ask-Cato notes this issue from a different perspective.