You Can’t Believe Everything You Read–Even in a Scientific Journal!

Many proponents of Complementary and Alternative Medicine (CAM) reject the very idea of scientific evaluation of their methods. If scientists say they can find no trace of Ch’i or vertebral subluxations or water memory, well, that just shows that science doesn’t know everything, right? These are the easiest CAM believers to deal with, because there is nothing to be done except agree to disagree. If mystical and undetectable entities can only be seen and understood by those who have faith in them, then there is no foundation for productive debate.

More challenging are those promoters of alternative medicine who seek to claim legitimacy through scientific credentials or research. Of course, who could object to scientific training for CAM providers or proper research into their claims? Well….

The first problem is that the general public sometimes imagines scientific credentials to be validation, in and of themselves, for whatever scientific claims one makes. Sadly, this is not true. Linus Pauling was a brilliant scientist who happened to be dead wrong about the value of megadoses of Vitamin C. Andrew Weil may have an M.D., but he is much more a priest of New Age mysticism than a doctor of scientific medicine. Scientists, however intelligent, are scarcely less susceptible than non-scientists to the errors that lead to false beliefs in medicine. We experience non-specific (aka “placebo”) effects, and we fall prey to availability bias, confirmation bias, and all the other cognitive traps that the method of science is designed to help us avoid. If we don’t follow the method, we’re just as likely to be wrong as the next guy.

And just as scientific credentials are often erroneously interpreted as evidence for one’s claims, scientific publications are frequently granted more respect than they deserve. For many reasons, simply being able to produce citations from a journal does not constitute a QED for a medical claim. For starters, publication bias leads to the publication of positive findings, and to the quiet demise of negative findings, often enough to make conclusions based on the published literature less reliable than we’d all like. This is more of a problem in some countries than in others. In China and Russia, for example, nearly 100% of published clinical trials report positive results. Perhaps the scientists in these countries are smarter and better at their work than those in Europe and the U.S.? Or perhaps there is a cultural stigma attached to publishing negative results? In any case, no one is immune from the pressure to report positive and dramatic results, both to justify the time and money spent on the research and to show oneself to be a smart and productive scientist, and that pressure reduces the reliability of published research.
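
To see how this works in practice, here is a minimal simulation sketch, written in Python purely for illustration (the sample size, trial count, and crude significance test are my assumptions, not anything from a real study). It runs many small trials of a therapy with zero true effect and “publishes” only the positive, statistically significant ones.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

N_TRIALS = 1000   # hypothetical trials of a therapy with zero true effect
N_PER_ARM = 20    # small groups, as is common in veterinary research

published = []
for _ in range(N_TRIALS):
    # Both arms are drawn from the same distribution: the therapy does nothing.
    treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Crude two-sample z-style test; a real analysis would use a proper t-test.
    se = (statistics.stdev(treated) ** 2 / N_PER_ARM
          + statistics.stdev(control) ** 2 / N_PER_ARM) ** 0.5
    if diff / se > 1.96:  # only positive, "significant" trials get published
        published.append(diff)

print(f"{len(published)} of {N_TRIALS} trials published")
print(f"mean effect in the published trials: {statistics.mean(published):.2f}")
```

By construction, roughly 2–3% of these trials will clear the significance bar by chance alone, and every one of them reports a benefit, so the “published literature” shows a consistent effect for a therapy that does nothing at all.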

Judging the credibility of published research requires looking at how the research was conducted. There is a hierarchy of reliability for clinical evidence, from personal anecdote and clinical experience (the easiest to find and the least reliable) up to the rigorously controlled clinical trial. And though we open ourselves to charges of cultural or institutional bias by saying so, there is also a hierarchy of reliability for the journals in which clinical trials are published. Even excluding countries in which negative results simply aren’t published, there is a meaningful difference in reliability between a study that appears in the New England Journal of Medicine or the Lancet and one that appears in the Journal of Alternative and Complementary Medicine or The Chiropractic Journal. Journals that exist solely to publish research that does not meet the standards of established high-quality journals are bound to be more ideological, less evidence-based, and less reliable as a guide to the truth about clinical therapies.

Sadly, the scientific literature in veterinary medicine is not only sparser than that in human medicine, it is frequently of inferior quality. As a simple function of resources, trials are smaller and less well designed, and they are often funded by pharmaceutical companies, commercial firms, or others with a vested interest in the outcome, a factor that has been clearly shown to influence the reported results of research studies. A recent review of published studies in veterinary dermatology found widespread misuse and misinterpretation of basic statistics and data presentation methods.

So with cognitive biases, ideological biases, publication bias, the influence of funding sources, and all the other factors that limit the reliability of published research, how are we to decide which therapies work and which don’t, what the risks and benefits of various therapies are, and all the other questions we must answer to provide good care?

To start with, we should follow the principles of science-based medicine, which differs from evidence-based medicine in a small but highly significant way: it treats the plausibility of the underlying physiologic rationale as a key component in evaluating a therapy. It makes no sense to run a hundred clinical trials, with all the errors to which they are prone, to test the efficacy of manipulating mysterious energy fields that can only be real if everything science has shown us is wrong. As one blogger has described it, this is Tooth Fairy Science. You can conduct a well-designed study to evaluate how much money the Tooth Fairy leaves for each type of tooth, age of child, and so on. You can even do statistical analyses on the findings. None of this means the Tooth Fairy exists.
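
In the same spirit, here is a tiny illustrative sketch, again in Python and with entirely invented numbers (nothing here comes from any real dataset): the statistics on “Tooth Fairy payouts” compute just fine, and their tidiness tells us nothing about whether the Tooth Fairy exists.

```python
import random
import statistics

random.seed(42)  # reproducible, and entirely fictional, "data"

# Invented payouts, in dollars, for two types of teeth.
molars = [random.gauss(2.0, 0.5) for _ in range(30)]
incisors = [random.gauss(1.5, 0.5) for _ in range(30)]

print(f"mean payout, molars:   ${statistics.mean(molars):.2f}")
print(f"mean payout, incisors: ${statistics.mean(incisors):.2f}")
print(f"difference:            ${statistics.mean(molars) - statistics.mean(incisors):.2f}")
# The analysis is internally valid, but it is silent on whether any
# Tooth Fairy actually made the payments.
```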

Next, we can educate ourselves about the source of information. A study of chiropractic funded by chiropractors, conducted by chiropractors, and published in an all-chiropractic journal isn’t automatically wrong, but it is a good bit less reliable and more susceptible to bias than an independent study published in the New England Journal of Medicine. A journal that follows the principles of the CONSORT statement, requiring advance registration of trials with pre-stated primary endpoints and full disclosure of funding and potential conflicts of interest, is more likely to be reliable than one that does not.

Finally, we can read each study in a critical and thoughtful way, not just the abstract and the discussion section. I am frequently amazed at how often those parts of a paper fail to present an accurate interpretation of the actual data reported. Reading studies critically also means recognizing the limitations of different kinds of evidence: a small case series is less reliable than a retrospective cohort study, which is less reliable than a prospective randomized controlled double-blinded study, and so on. This means more work for the individual clinician, of course, but that is a price worth paying for better medicine.


5 Responses to You Can’t Believe Everything You Read–Even in a Scientific Journal!

  1. Bartimaeus says:

    Another problem with some articles written by supporters of alternative medicine is deliberate (probably) misrepresentation or misinterpretation of their bibliographies. I have seen one article on veterinary acupuncture in which the author claims a scientific basis for modern acupuncture, but reading the references shows that this basis is a house of cards. Among the references supporting “scientific acupuncture” are a book chapter, written by an acupuncturist, that can be summarized as “People don’t believe in Chi, so let’s call it neuroanatomical acupuncture instead,” with nothing to support that assertion, and a paper by Edzard Ernst reporting that acupuncture was no better than placebo.

    I think that some authors expect that no one will check the references they list, especially in a non-peer-reviewed article. This problem is not limited to CAM supporters, but it seems especially prevalent among them.

  2. skeptvet says:

    Yes, checking references is another of the chores that no one likes to do but that is necessary to evaluate an article properly. In a review article on veterinary joint supplements, the authors went through an elaborate process of grading and ranking interventions based on “evidence” and concluded there was a reasonable probability that glucosamine was beneficial. They cited a single reference, and when I read it, I found it compared glucosamine/chondroitin supplementation to an NSAID and found no benefit from the glucosamine.

  3. gwen says:

    I just came back from a combined nursing/physician conference, and not only did they include a couple of doctors peddling woo, but they were passing out Gideon Bibles in the display area. I am boycotting Contemporary Forums conferences. When I pay big bucks for a conference, I expect science, not woo. I will be sending a letter to that effect.

  4. Pingback: Homeopathy Works for Arthritis–Or Maybe Not « The SkeptVet Blog

  5. Pingback: Yunnan Paiyao–Secret Herbal Formula to Stop Bleeding? « The SkeptVet Blog
