It is no wonder that between 66% and 75% of clinical trials whose results are published in the major medical journals are funded by the drug industry.
The derision and hostility of the audience were palpable when, at a recent cardiology conference in Delhi, I stated that much of the data published in the world’s leading medical journals not only exaggerates drug effects but can fairly be called misleading. There was stunned silence when I named the New England Journal of Medicine (NEJM) as one of the leading culprits in publishing manipulated data, because the NEJM is considered by most to be the holy grail of medical journals.
I was compelled to drive the point home by quoting Richard Horton, editor of The Lancet: “Journals have devolved into information laundering operations for the pharmaceutical industry.” In the same year (2004), Marcia Angell, former editor of the NEJM, criticised the industry for transforming into “primarily a marketing machine” and co-opting “every institution that might stand in its way”. Richard Smith, former editor of the British Medical Journal (BMJ), noted that Jerry Kassirer, another former editor of the NEJM, had argued that the drug industry has deflected the moral compass of many physicians.
In an excellent article published in the journal PLOS Medicine in 2005, Richard Smith argues that although medical journals earn a substantial income from printing advertisements, advertising is still the least corrupting form of dependence. The advertisements may be misleading (and worth millions in profit), but they are visible for all to see and criticise. Moreover, as in every other sphere, the public understands that an advertisement is a larger-than-life sales pitch.
A larger problem is the publication of clinical trials by medical journals. Readers are deeply influenced by these randomised controlled trials, which they believe are the acme of scientific rigour and evidence. A large trial published in a leading medical journal will be distributed around the world and may often be covered by the international media. A favourable trial is worth thousands of pages of advertising for a drug company and that’s the reason it is prepared to spend as much as a million dollars to purchase reprints that may be distributed globally.
A study of manufacturer-supported trials of non-steroidal anti-inflammatory drugs in the treatment of arthritis, published in the Archives of Internal Medicine in 1994, found that not a single one of the 56 trials examined published negative results. Every trial demonstrated that the company’s drug was superior to, or as good as, the control treatment. In 2003, a systematic review of 30 studies comparing the outcomes of industry-sponsored and non-industry-sponsored trials found that, overall, studies funded by a company were four times more likely to have a favourable result than studies funded from elsewhere.
It is no wonder then that between 66% and 75% of trials published in the major journals – Annals of Internal Medicine, JAMA, The Lancet and NEJM – are funded by the drug industry. It therefore becomes imperative that medical journals, too, declare conflicts of interest. A ‘conflict of interest’ is defined as a set of conditions in which professional judgment regarding a primary interest (such as patients’ welfare or the validity of research) tends to be unduly influenced by a secondary interest (e.g. financial advantage). The reality, however, is that the big medical journals have a serious conflict of interest when dealing with industry trials: they fear losing the large income earned from reprint sales if they become too critical.
It is not rare for an editor to get a call from the industry announcing that massive quantities of reprints will be ordered if the industry-sponsored paper is published. An editor may thus face a stark choice: earn $100,000 in profit by publishing the paper, or reject it and have to fire a subeditor.
In 2004, the BMJ devoted an entire issue to conflicts of interest, with a cover showing doctors dressed as pigs at a banquet, accompanied by industry salespeople drawn as lizards. The drug industry threatened to withdraw £75,000 worth of advertising. The Annals of Internal Medicine lost around $1.5 million in advertising revenue after publishing a paper critical of drug-industry advertising.
The NEJM had long maintained a solid policy on reviews and editorials: “Because the essence of reviews and editorials is selection and interpretation of the literature, the Journal expects that authors of such articles will not have any financial interest in a company (or its competitor) that makes a product discussed in the article.” By 2002, however, the editors of the NEJM were complaining that it was difficult to find non-conflicted authors. As a result, they substantially lowered the bar by altering the rule to allow an author to receive up to $10,000 from such companies! There was no limit on income received from companies whose products were not mentioned in the article.
A 2012 study found that the median and largest reprint orders for The Lancet were worth £287,383 and £1.55 million, respectively.
Among the journals mentioned above, the NEJM has the highest impact factor (IF): the average number of citations received in a year by papers the journal published over the preceding two years. Most doctors consider the IF the most prestigious ornament of a medical journal, but Peter Gotzsche is not impressed. In his book Deadly Medicines and Organized Crime, Gotzsche describes a review by The Cochrane Collaboration of Pfizer’s antifungal drug voriconazole. The review examined two studies published in the NEJM, both with misleading abstracts. In one of the trials, voriconazole was significantly inferior to the comparator medicine, amphotericin B, yet the paper concluded that voriconazole was a suitable alternative. There were in fact more deaths in the voriconazole group, and the stated reduction in fungal infections disappeared when the Cochrane reviewers included infections that had been arbitrarily excluded from the study.
In the other trial published in the NEJM, voriconazole was given for 77 days while amphotericin B was given for a mere 10 days. The amphotericin group received no premedication to reduce drug toxicity, nor were fluids given to reduce kidney toxicity. The trial was significantly flawed, but the conclusion published by the NEJM was: “In patients with invasive aspergillosis, initial therapy with voriconazole led to better responses and improved survival and resulted in fewer side effects than the standard approach of initial therapy with amphotericin B” (emphasis added).
The NEJM earned a lot of money from selling reprints of these two articles, and the drug company ensured that the journal’s IF improved by generating a large number of ghostwritten secondary publications that cited the original (flawed) papers. Pfizer’s voriconazole trials were cited an amazing 192 and 344 times, respectively, over the subsequent three years. The Cochrane reviewers selected a random sample of 25 references to each of these trials and found that not one paper referred to the flaws in the original trials.
Further, none of the major US medical journals has been willing to reveal the revenue it earns from advertising or the sale of reprints. The Lancet conceded that 41% of its revenue came from reprints, while the BMJ made less than 7% from reprints. The situation in specialty journals is no better. Gotzsche notes that editors of specialty journals often have conflicts of interest with the companies that submit papers to them, including owning shares and working as paid consultants. Many specialty journals publish industry-sponsored symposia: the industry usually pays to have them published, they are rarely peer reviewed, they use brand names instead of generic names, and they carry misleading titles.
A US Congressional Investigation of spinal device products led to the revelation in 2012 that an orthopaedic surgeon had received more than $20 million in patent royalties and more than $2 million in consulting fees from Medtronic during his stint as editor of the Journal of Spinal Disorders and Techniques. Medtronic sells spinal implants and was able to publish in every issue – with this orthopaedic surgeon as editor – a positive paper on their device. The FDA had found an adverse event rate of 10% to 50% but not a single device-associated adverse effect was reported in 13 industry-sponsored papers regarding safety and efficacy in 780 patients treated with the device.
Drug companies often employ the following techniques to obtain flattering results for their products:
- Conduct a trial of your drug against a treatment known to be inferior
- Compare your drug against too low a dose of the competitor drug
- Use multiple endpoints in the trial and select for publication those that show favourable results
- Conduct multicentre trials and select for publication results from centres that are favourable
- Conduct subgroup analyses and select for publication those that are favourable
- Present results as reductions in relative rather than absolute risk, the form most likely to impress
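The last technique is worth making concrete with a quick calculation. A minimal sketch, using event rates invented purely for illustration (not taken from any actual trial):

```python
# Hypothetical event rates, invented purely for illustration.
control_rate = 0.020   # 2.0% of control patients suffer the event
drug_rate = 0.015      # 1.5% of treated patients suffer the event

arr = control_rate - drug_rate   # absolute risk reduction
rrr = arr / control_rate         # relative risk reduction
nnt = 1 / arr                    # patients treated per event avoided

print(f"Absolute risk reduction: {arr:.1%}")   # 0.5% -- the sober figure
print(f"Relative risk reduction: {rrr:.0%}")   # 25%  -- the headline figure
print(f"Number needed to treat:  {nnt:.0f}")   # 200
```

The same trial can be reported as “25% fewer events” or as “one event avoided for every 200 patients treated”; both are arithmetically true, but only the relative figure makes the press release.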
The last technique is the most common method of exaggerating a drug’s effects in industry-sponsored cardiology articles published in the NEJM. The absolute difference is often less than 1%, but the data is highlighted in relative terms that, albeit statistically significant, have little clinical relevance. It is for this reason that the American Statistical Association (ASA), increasingly concerned with researchers’ obsession with the p value, recently called for a rethink in an official guideline. The p value is a statistical measure of how surprising a result would be if the drug truly had no effect: the lower the p value, the less compatible the data are with that assumption. The ASA wrote,
- p values do not measure the probability that the studied hypothesis is true or the probability that the data were produced by random chance alone
- p values can indicate how incompatible the data are with a specified statistical model
- Scientific conclusions and business or policy decisions should not be based only on whether a p value passes a specific threshold
- Proper inference requires full reporting and transparency
- A p value or statistical significance does not measure the size of an effect or the importance of a result
- By itself, a p value does not provide a good measure of evidence regarding a model or hypothesis
The executive director of the ASA has gone on record saying that the “p value was never intended to be a substitute for scientific reasoning”. Clearly, the ASA statement is intended to steer research into a “post p<0.05 era” – referring to the threshold with which researchers have become obsessed.
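The ASA’s point about thresholds can be illustrated with a rough sketch: in a large enough trial, even a clinically trivial difference yields a vanishingly small p value. The numbers below are hypothetical, and the two-proportion z-test is computed by hand using the standard normal approximation:

```python
import math

# Hypothetical mega-trial: 100,000 patients per arm, invented for illustration.
n = 100_000
control_rate, drug_rate = 0.020, 0.015   # a 0.5% absolute difference

# Two-proportion z-test under the pooled null hypothesis (normal approximation).
pooled = (control_rate + drug_rate) / 2
se = math.sqrt(2 * pooled * (1 - pooled) / n)
z = (control_rate - drug_rate) / se

# Two-sided p value from the normal tail, via the complementary error function.
p_value = math.erfc(z / math.sqrt(2))

print(f"z = {z:.1f}, p = {p_value:.1e}")                       # overwhelmingly 'significant'
print(f"Absolute difference: {control_rate - drug_rate:.1%}")  # still only 0.5%
```

A result like this clears any p < 0.05 threshold by many orders of magnitude, yet the clinical effect remains a 0.5% absolute difference – which is precisely the ASA’s point that statistical significance does not measure the size or importance of an effect.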
In the meantime, the festering corruption in medical journals may be tackled by more public funding of trials and by ending the current practice of journals publishing complete trials. Medical journals should instead publish critiques of protocols and results that have been posted for public scrutiny on regulated websites. The entire raw data of a trial should be made available to researchers, doctors, patients and regulators so that plausible conclusions can be drawn.
Deepak Natarajan is a cardiologist based in New Delhi.