The Atlantic: Ioannidis [is] ... what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.
... He first stumbled on the sorts of problems plaguing the field, he explains, as a young physician-researcher in the early 1990s at Harvard. At the time, he was interested in diagnosing rare diseases, for which a lack of case data can leave doctors with little to go on other than intuition and rules of thumb. But he noticed that doctors seemed to proceed in much the same manner even when it came to cancer, heart disease, and other common ailments. Where were the hard data that would back up their treatment decisions? There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases. A new “evidence-based medicine” movement was just starting to gather force, and Ioannidis decided to throw himself into it, working first with prominent researchers at Tufts University and then taking positions at Johns Hopkins University and the National Institutes of Health. He was unusually well armed: he had been a math prodigy of near-celebrity status in high school in Greece, and had followed his parents, who were both physician-researchers, into medicine. Now he’d have a chance to combine math and medicine by applying rigorous statistical analysis to what seemed a surprisingly sloppy field.
... In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.
... He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.
... Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.
Carson Chow gives a Bayesian formulation of Ioannidis' argument here (click through to see the equations):
John Ioannidis published a very interesting paper in PLoS Medicine in 2005 entitled “Why most published research findings are false.” In it he argued that most affirmative results in biology papers that are based on a statistical significance test (e.g. a p-value less than 0.05) are probably wrong. His argument was couched in traditional statistics language but it is really a Bayesian argument. The paper is a wake-up call that we may need to look more closely at how we use statistics and even how we do research.
... The question he asked was: Given some hypothesis, what is the probability that the hypothesis is true given that an experiment confirms the result (up to some level of statistical significance)? ...
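In outline, the calculation goes like this (my sketch of the standard Bayesian significance-test argument; the particular prior, power, and alpha values below are illustrative assumptions, not numbers from the paper):

```python
# Bayesian form of Ioannidis' argument (illustrative sketch).
# Let prior = P(hypothesis true) before the experiment,
# power = P(significant result | true), alpha = P(significant result | false).
# Bayes' theorem gives the post-study probability:
#   P(true | significant) = power * prior / (power * prior + alpha * (1 - prior))

def posterior_true(prior, power=0.8, alpha=0.05):
    """Probability the hypothesis is true given a significant result."""
    return power * prior / (power * prior + alpha * (1 - prior))

# If only 1 in 10 hypotheses tested is actually true -- plausible for
# exploratory research -- a significant finding is right only ~64% of the
# time, and with low power and bold hypotheses it drops below 50%:
for prior, power in [(0.5, 0.8), (0.1, 0.8), (0.01, 0.5)]:
    p = posterior_true(prior, power)
    print(f"prior={prior:.2f}, power={power:.1f} -> P(true|significant)={p:.2f}")
```

The point is that a p-value bounds P(data | hypothesis false), not P(hypothesis true | data); the two can differ wildly when the prior probability is small.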
In high energy physics (where we don't talk about p-values, but rather the number of SDs of significance of a result) it has been a common bit of folk wisdom since before I entered the field that at any given time there must be some number of multi-SD anomalies in recent experimental results, but that most (perhaps all) of these will eventually go away. (If you think about it, this is basically Ioannidis' claim.) Fortunately, because we are studying the fundamental laws of Nature (and because everyone in the field understands basic statistics; in medicine it seems almost no one does), these anomalies tend to be revisited, and meta-analyses are always done, so wrong results are not likely to become accepted conclusions for years or decades at a time.
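To make the folk wisdom concrete (a back-of-the-envelope sketch; N here is a made-up count of independent measurements, not data from any experiment):

```python
# Expected number of >= k-sigma fluctuations among N independent null
# measurements (illustrative sketch).
from scipy.stats import norm

N = 1000  # hypothetical number of recent independent measurements

for k in [3, 4, 5]:
    p = 2 * norm.sf(k)  # two-sided tail probability of a k-sigma deviation
    print(f"{k}-sigma: tail prob = {p:.1e}, expected anomalies = {N * p:.3f}")

# With N = 1000 we expect roughly 2.7 spurious 3-sigma "signals" at any
# given time even if every true effect is zero -- one reason particle
# physics reserves the word "discovery" for 5 sigma.
```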
I disagree that most people in medicine do not understand basic statistics. They actually do, and that may be a dangerous thing. The problem in medicine is mostly sociological, not scientific. In particle physics there is a very good idea of the priors and likelihoods, so it's just a matter of waiting before you are confident of a result. In medicine you have no such thing. As Ioannidis shows, if people used Bayesian inference and multiple-comparisons corrections in their estimates of significance, there would be a smaller chance of finding erroneous results. There would also be fewer high-impact results, and that would hurt their ability to get funding, form companies, and get paid. Most researchers in medical schools are paid on soft money, which means their salary relies on grants. The system is built so that the incentive to be correct is much, much weaker than the incentive to have a high-impact story. There is almost no repercussion for being wrong unless you commit fraud. Most Nature papers are not cited after a few years because they don't hold up, so the expectation that results are not always right is to some degree already built in. Ideas are also not straightforward to prove, and especially to disprove, in medicine. The field is data-sparse and prior-dominated, and experiments on humans can take decades to unfold. The field is self-correcting, although the time scale can be very long.
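A quick numerical illustration of the multiple-comparisons point (a minimal sketch; the scenario of 20 simultaneous tests is made up):

```python
# False-positive inflation from uncorrected multiple comparisons
# (illustrative sketch).
alpha = 0.05
m = 20  # hypothetical number of independent hypotheses tested in one study

# If all m null hypotheses are true, the chance of at least one
# spurious "significant" result is:
p_any = 1 - (1 - alpha) ** m
print(f"P(>=1 false positive, uncorrected) = {p_any:.2f}")  # ~0.64

# Bonferroni correction: test each hypothesis at alpha/m instead.
alpha_bonf = alpha / m
p_any_bonf = 1 - (1 - alpha_bonf) ** m
print(f"Bonferroni threshold = {alpha_bonf:.4f}")
print(f"P(>=1 false positive, corrected) = {p_any_bonf:.3f}")  # ~0.049
```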
When I wrote that I was mainly referring to medical practitioners, not researchers.
However, if people *really* understood statistics, they would adopt a "we actually don't know" position most of the time, and they would understand Ioannidis' argument. Perhaps most medical researchers understand that they are doing junk science (and can just blame it on the soft-money incentive system), but I doubt it.
I pointed out that we have the same problem with wrong results in physics, but people have built up a meta-understanding such that they don't take the results seriously until they have been more carefully checked. What is claimed above in the Atlantic article would not be the case.