Evidence Update-Chinese Studies of Acupuncture Are Always Positive: Perfect Medicine or Hidden Bias?

What many people don’t realize about scientific studies is that since they are designed, conducted, analyzed, and reported by fallible human beings, they are prone to all sorts of bias and error. They often contain mechanisms to minimize these sources of error, which is why they are still more reliable than personal experience, history, and other uncontrolled sources of information. And, of course, the best compensation for the failings of individual scientists is the work of other scientists, critiquing the work, trying to replicate it, and generally bashing it about until the truth falls out. Science is a community endeavor, and the community keeps the individual honest. At least, that’s the theory.

However, some communities prize this sort of critical, competitive error-correction process more than others. I have written often about the general tendency of people in the alternative medicine community to prefer unity and mutual validation of each other’s theories over rigorous, skeptical scrutiny aimed at paring away bias and error. As a category, Alternative Medicine exists only to protect some practices from the standards of evidence that scientific medicine is expected to meet. If these therapies could prove themselves by accepted scientific means, they would not be “alternative” or “complementary” and wouldn’t need to be “integrated” with regular medicine because they would simply be regular medicine.

One example of the misuse of science to confirm and support rather than challenge accepted beliefs is the literature concerning acupuncture. The vast majority of acupuncture studies are done in countries where it is a widely accepted practice (though not as widely as sometimes claimed), and where most practitioners and others already accept its effectiveness. China, in particular, contributes a tremendous percentage of the research on acupuncture. And as I’ve discussed before, there is strong evidence that Chinese acupuncture studies are biased in favor of acupuncture. This evidence includes studies showing that negative results of acupuncture research are almost never published, that trials are often inaccurately reported as randomized when they are not, and that systematic reviews often selectively search and report the literature in ways favorable to acupuncture. Yet another study has now been published which confirms that Chinese researchers simply do not produce or report negative results for acupuncture.

Yuyi Wang, Liqiong Wang, Qianyun Chai, Jianping Liu. Positive results in randomized controlled trials on acupuncture published in Chinese journals: a systematic literature review. J Altern Complement Med. 2014 May;20(5):A129.

This review found 847 reported randomized clinical trials of acupuncture in Chinese journals. 99.8% of these reported positive results. Of those that compared acupuncture to conventional therapies, 88.3% found acupuncture superior, and 11.7% found it as good as conventional treatments. Very few of the studies properly reported important markers of quality and control for bias such as blinding, allocation concealment, and losses to follow-up.

Of course, one could argue that the failure to publish negative results, and the overwhelming superiority of acupuncture compared with conventional treatments, is evidence that acupuncture is incredibly effective and that Chinese researchers do a nearly perfect job of employing it. That seems a pretty implausible interpretation, however. It would suggest that acupuncture is unlike any other therapy ever tested scientifically, and that Chinese acupuncturists are nearly perfect clinicians. It would also raise the question of why acupuncture was practiced in one form or another for thousands of years without meaningfully improving the life expectancy or mortality patterns of people in China, while science-based medicine has dramatically extended life and reduced disease there as everywhere else.
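As a rough back-of-the-envelope illustration of just how implausible that is (my own sketch with assumed numbers, not an analysis from the review): even if acupuncture genuinely worked for every condition studied and every trial had a generous 95% chance of detecting that effect, we would still expect dozens of negative trials among 847, not two.

```python
# Back-of-the-envelope check (my own assumptions, not the review's analysis):
# even if every trial studied a real effect and had a generous 95% chance of
# a positive result, how surprising would 845+ positives out of 847 be?
from scipy.stats import binom

n_trials = 847
p_positive = 0.95                                   # assumed per-trial chance of a positive result
expected_negatives = n_trials * (1 - p_positive)    # ~42 negative trials expected
reported_negatives = round(n_trials * (1 - 0.998))  # review reports 99.8% positive -> ~2 negatives

# Probability of seeing this few negative trials purely by chance under these assumptions
p_so_few = binom.cdf(reported_negatives, n_trials, 1 - p_positive)

print(f"Expected negative trials: {expected_negatives:.0f}")
print(f"Reported negative trials: {reported_negatives}")
print(f"P(<= {reported_negatives} negatives | assumptions): {p_so_few:.1e}")
```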

A more likely interpretation of this and the other studies showing that Chinese researchers almost never report failures of acupuncture treatment is simply that the design, conduct, and reporting of these studies are biased toward supporting the already widespread belief that acupuncture works. Belief trumps and distorts science all the time, and this is likely yet another example. All kinds of cultural theories can be advanced to explain these findings and the differences between the acupuncture literature in China and the English-language literature, where negative studies are much more common. I am no sociologist, but I do know that science exists specifically as a method for combating the natural human tendency to seek confirmation rather than refutation of our existing beliefs, and that no system for checking human bias can succeed without an explicit commitment to following the methods and accepting the results even when they are not consistent with what we want to believe.

I have talked previously about the dangers of alternative medicine research functioning as marketing and propaganda rather than a careful and genuine effort to seek the truth. The analysis of the Chinese literature pertaining to acupuncture, and most of the literature related to homeopathy, illustrate this danger. It is imperative that scientific evaluation of alternative therapies be held to at least as high a standard as research on conventional treatments is in order to prevent people, and our pets, from being subjected to ineffective or unsafe therapies under the misguided belief that they have been proven to work.

Posted in Acupuncture | Leave a comment

Update-Do Dogs Defecate in Alignment with the Earth’s Magnetic Field?

Earlier this year, I reviewed a research study claiming that dogs orient themselves to fluctuations in the earth’s magnetic field when defecating. I was asked to re-post the review on Publons, a site which publishes reviews of journal articles. The authors have posted a response there which answers some concerns I expressed in my review, but which also illustrates some of the same misconceptions about statistics and hypothesis testing that I originally discussed. I have responded both here and on the Publons forum.

The summary of the reactions of the media on our paper is very fitting and we agree. The critic of our study is, however, biased and indicates that the author did not read the paper carefully, misinterpreted it in some cases, and, in any case is so “blinded” by statistics that he forgets biology. Statistics is just a helpful mean to prove or disprove observed phenomena. The problem is that statistics can “prove” phenomena and relations which actually do not exist, but it can also “disprove” phenomena which objectively exist. So, not only approaches which ignore proper statistics might be wrong but also uncritical sticking on statistical purity and ignoring real life.

To begin with, I believe the authors and I agree that statistics are easily and commonly misused in science. Unfortunately, this response seems to perpetuate some of the misconceptions about the role of statistics in testing hypotheses that I discussed in my original critique.

Statistics never prove or disprove anything. Schema such as Hill’s Criteria of Causation and other mechanisms for evaluating the evidence for relationships observed in research studies illustrate the fact that establishing the reality of hypothesized phenomena in nature is a complex business that must rest on a comprehensive evaluation of many different kinds of evidence. It is unfortunate that p-values have become the sine qua non of validating explanations of natural phenomena, at least in medicine (which is the domain I am most familiar with). The work of John Ioannidis and the growing interest in Bayesian statistical methods are examples of the move in medical research to address the problem of improper use of, and overreliance on, frequentist statistical methods.
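To make this concrete, here is a small sketch of my own (using the simple positive-predictive-value framing popularized by Ioannidis; the priors, alpha, and power are assumed numbers, not figures from any particular study) showing how prior plausibility changes what a “statistically significant” result actually means:

```python
# Illustrative sketch (assumed numbers): post-study probability that a
# "statistically significant" finding is actually true, given its prior
# plausibility, the significance threshold, and the study's power.
# This is the positive-predictive-value framing popularized by Ioannidis.
def post_study_probability(prior: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """P(hypothesis true | p < alpha), ignoring bias and multiple testing."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-grounded hypothesis vs. a biologically implausible one (both priors assumed)
for prior in (0.50, 0.05):
    print(f"prior = {prior:.2f} -> P(true | 'significant') = {post_study_probability(prior):.2f}")
# prior = 0.50 -> about 0.94; prior = 0.05 -> about 0.46
```

In other words, the same “significant” p-value is far less convincing when the hypothesis was implausible to begin with, which is exactly why statistics alone cannot “prove” a phenomenon.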

That said, these methods do have an important role in data analysis, and they contribute significantly to our ability to control for chance and other sources of error in research. The proper role of statistical hypothesis testing is to help assess the likelihood that our findings might be due to chance or confounding variables, which humans are notoriously terrible at recognizing. If we employ these tools improperly, then they cease to fulfill this function and instead they generate a false impression of truth or reliability for results that may easily be artifacts of chance or bias.

The authors accuse me of being “so ‘blinded’ by statistics that he forgets biology.” This is ironic, since their paper uses statistics to “prove” something which a broader consideration of biology, evolution, and other information would suggest is improbable. Even if the statistical methods were perfectly and properly applied, they would not be “proof” of anything, any more than improper use of statistics would be definitive “disproof” of the authors’ hypothesis. While I discussed some concerns about how statistics were used in the paper, my objections were broader than that, which the authors do not appear to acknowledge.

The author of this critic blames us of “data mining”. Well, first we should realize that there is nothing wrong about data mining. This is an approach normally used in current biology and a source of many interesting and important findings. We would like to point out that we have not “played” with statistics in order to find out eventually some “positive” results. And we have definitively not sorted data out. We just tested several hypotheses and always when we rejected one, we returned all the cards (i.e. data) into the game and tested, independently, anew, another hypothesis.

Though I am not a statistician, I believe there is a consensus that while exploratory analysis of data is, of course, appropriate and necessary, the post-hoc application of statistical significance tests to data after patterns in the data have already been observed is incorrect and misleading. This is what the paper appeared to suggest was done, and this would fit the definition of inappropriate data-dredging.
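To illustrate why this matters, here is a minimal simulation of my own (the group sizes and the number of hypotheses are arbitrary assumptions, not a re-analysis of the dog data): testing many hypotheses on data that contain nothing but noise will, far more often than not, turn up at least one “statistically significant” pattern.

```python
# Minimal simulation (arbitrary group sizes and hypothesis count): run 20
# independent comparisons on pure noise and see how often at least one of
# them comes out "statistically significant" at p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments, n_hypotheses, n_per_group, alpha = 2000, 20, 30, 0.05

runs_with_false_positive = 0
for _ in range(n_experiments):
    pvals = [ttest_ind(rng.normal(size=n_per_group),
                       rng.normal(size=n_per_group)).pvalue
             for _ in range(n_hypotheses)]
    if min(pvals) < alpha:
        runs_with_false_positive += 1

# Roughly 1 - 0.95**20, i.e. about 64% of noise-only experiments "find" something
print(f"Experiments with at least one 'significant' result: "
      f"{runs_with_false_positive / n_experiments:.0%}")
```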

Note also that we performed this search for the best explanation in a single data sample of one dog only, the borzoi Diadem, for which we had most data. When we had found a clue, we tested this final hypothesis in other dogs, now without Diadem.

This was not indicated in the description of the methods provided in the original paper. If the exploratory analysis was done with one data set, and the final hypothesis was then tested on a separate data set to which the authors were blind during that exploratory phase, then that would be an appropriate method of data analysis. The subsequent statistically significant results would not, of course, necessarily prove the hypothesis to be true, but they would at least reliably indicate how likely it is that they were due solely to chance.

This does not, however, entirely answer the concern that the study began without a defined hypothesis and examined a broad range of behaviors and magnetic variables in order to identify a pattern or relationship. As exploratory, descriptive work this is, of course, completely appropriate. But the authors then use statistical hypothesis testing to support very strong claims to have “proven” a hypothesis not even identified until after the data collection was completed. This seems a questionable way to employ frequentist statistical methods.

Let us illustrate our above arguments about statistics and “real life” on two examples. Most medical diagnoses are done through exclusion or verification of different hypotheses in subsequent steps. Does it mean that when the physician eventually finds that a patient suffers under certain illness, the diagnosis must be considered improbable because the physician has already before tested (and rejected) several other hypotheses?

This analogy is inapplicable. The process of inductive reasoning a clinician engages in to seek a diagnosis in an individual patient is not truly analogous to the process of collecting data and then evaluating it statistically to assess the likelihood that patterns seen in the data are due to chance. Making multiple statistical comparisons, particularly after one has already sought for patterns in the data, invalidates the application of statistical hypothesis testing. The fact that in other contexts, and without the use of such statistical methods, people consider possible explanations and then accept or reject them based on their observations is irrelevant.

Or imagine that we want to test the hypothesis that the healthy human can run one kilometer with an average speed of 3 m/s. We find volunteers all over the country who should organize races and measure the speed. We shall get a huge sample of data, we have an impression that our hypothesis is correct but the large scatter makes the result insignificant. So we try to find out what could be the factors influencing speed. We test the age – and find out that indeed older people are slower than younger ones, so we divide the sample into age categories, but the scatter is still too high, so we test the effect of sex, we find a slight influence, but it still cannot explain the scatter, we test the position of the sun and time of the day, but find no effect, we test the effect of wind, but the wind was weak or it was windless during races, so we find no effect. We are desperate and we visit the places where the races took place – and we find the clue: some races were done downhill (and people ran much faster), some uphill (and people ran much slower), those who ran in flat land ran on average with the speed we expected. So we can now conclude that our hypothesis was correct and moreover we found an effect of the slope on running speed. We publish a paper describing these findings and then you publish a critic arguing that our approach was just data mining and was wrong and hence our observation is worthless and that the slope has no effect on running speed at all. Absurd!

Again, this example simply describes a process for considering and evaluating multiple variables in order to explain an observed outcome, which is not the objection raised to the original paper. If the only hypothesis in a study such as described here was that at least one human being could run this fast, then a single data point would be sufficient proof and statistics would be unnecessary. However, if one is trying to explain differences in the average speed of different groups of people based on the sorts of variables mentioned, the reliability of the conclusions and the appropriateness of the statistical methods used would depend on how the data was collected and analyzed. In any case, nothing about this has any direct relevance to whether or not the data collection and analysis in the original paper was appropriate or justified the authors’ conclusions.

As I said in the original critique, this study raises an interesting possibility; that dogs may adjust their behavior to features of the magnetic field of the earth. The study was clearly a broadly targeted exploration of behavior and various features of the magnetic environment: “we monitored spontaneous alignment in dogs during diverse activities (resting, feeding and excreting) and eventually focused on excreting (defecation and urination incl. marking) as this activity appeared to be most promising with regard to obtaining large sets of data independent of time and space, and at the same time it seems to be least prone to be affected by the surroundings.” It did not apparently start with a specific, clearly defined hypothesis and prediction, so in this sense it seems an interesting exploratory project.

However, with such a broad focus, with mostly post-hoc hypothesis generation, and with a lack of clear controls for a number of possible alternative explanations, the study cannot be viewed as definitive “proof” of the validity of the explanation the authors provide for their observations, though this is what is claimed in the paper: “…for the first time that (a) magnetic sensitivity was proved in dogs, (b) a measurable, predictable behavioral reaction upon natural MF fluctuations could be unambiguously proven in a mammal, and (c) high sensitivity to small changes in polarity, rather than in intensity, of MF was identified as biologically meaningful.”

I agree with the authors that their results are interesting and should be a stimulus for further research, but I do not agree that the results provide the unambiguous proof they claim. As always, replication and research focused on testing specific predictions based on the hypothesis put forward in this report, with efforts to account for alternative explanations of these observations, will be needed to determine whether the authors’ confidence in their findings is justified.

 

Posted in General | 1 Comment

Shocking News! Media Coverage of Healthcare Research Often Not Very Good.

As a veterinarian, explaining science to non-scientists and interpreting the meaning of scientific research is a key part of my job. Pet owners cannot make truly informed decisions about what to do for their animal companions without reliable information they can understand. This blog arose out of my efforts to provide better information to my clients, and it has led to further efforts to inform the public, and my colleagues in veterinary medicine, about how to evaluate medical interventions and understand the scientific research we need to support making decisions for pets.

My own knowledge about how we understand health and disease has come from many years of academic study. This includes a master’s degree I will be finishing this year in epidemiology, the branch of science specifically devoted to understanding health and disease and generating safe and effective healthcare interventions. And hopefully I have developed some ability to effectively communicate about science through my academic background, my years as a veterinarian, and my work speaking and writing for the veterinary community and, of course, in this blog.

In a sense, this blog has made me part of “The Media,” as has my involvement with the American Society of Veterinary Journalists. Unfortunately, taken as a whole “The Media” does not do a very good job of covering scientific topics, and journalists seem to contribute to misconceptions at least as often as they dispel them. A newly published study looking specifically at media coverage of healthcare research illustrates this starkly.

Schwitzer GA. A Guide to Reading Health Care News Stories. JAMA Intern Med. Published online May 05, 2014. doi:10.1001/jamainternmed.2014.1359

This paper reports on a 7-year evaluation of media stories from print and electronic media of various kinds. It details a number of specific errors in how journalists often present and interpret scientific research that lead to a false understanding of what the results mean. The conclusion of the study was

After reviewing 1889 stories (approximately 43% newspaper articles, 30% wire or news services stories, 15% online pieces [including those by broadcast and magazine companies], and 12% network television stories), the reviewers graded most stories unsatisfactory on 5 of 10 review criteria: costs, benefits, harms, quality of the evidence, and comparison of the new approach with alternatives. Drugs, medical devices, and other interventions were usually portrayed positively; potential harms were minimized, and costs were ignored.

The specific kinds of mistakes made in many stories about healthcare research struck me not only because I see them all the time in the media, but because they mirror very closely the sorts of mistakes made by advocates of alternative therapies. Though this study did not, unfortunately, look specifically at coverage of alternative medicine, my subjective impression is that the media makes the same sorts of errors but is even less careful and critical in covering this area. Pieces on veterinary medicine, in particular, are often poor quality because they are part of the “lifestyle” or “human interest” beat and treated as entertainment rather than being written by qualified science journalists interested in the truth about healthcare practices.

In any case, here are the major problems the study identified in media coverage of healthcare science:

Risk Reduction Stated in Relative, Not Absolute, Terms
Stories often framed benefits in the most positive light by including statistics on the relative reduction in risk but not the absolute reduction in risk. Consequently, the potential benefits of interventions were exaggerated.

While journalists are often understandably loath to talk about anything that sounds like math, it is impossible to talk meaningfully about the effects of medical therapies without distinguishing between absolute and relative risk. If you have a 1 in a million chance of developing a terrible disease, and something raises your chances to 2 in a million, that is a relative risk increase of 100%. Sounds terrible! But at a chance of 2 in a million, you are still almost certainly not going to get that disease, and doubling your risk does not make it meaningfully more likely that you will. Such a simple distinction is critical to deciding whether medical interventions are worthwhile.
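Here is that arithmetic spelled out (a simple sketch using the hypothetical one-in-a-million figures above):

```python
# Worked version of the hypothetical example above (1-in-a-million baseline risk
# doubling to 2 in a million).
baseline_risk = 1 / 1_000_000
exposed_risk = 2 / 1_000_000

relative_risk = exposed_risk / baseline_risk                 # 2.0
relative_increase = (exposed_risk - baseline_risk) / baseline_risk
absolute_increase = exposed_risk - baseline_risk             # one extra case per million
number_needed_to_harm = 1 / absolute_increase

print(f"Relative risk: {relative_risk:.1f} (a {relative_increase:.0%} increase)")
print(f"Absolute risk increase: {absolute_increase:.6%}")
print(f"One additional case per {number_needed_to_harm:,.0f} people exposed")
```

A “100% increase in risk” and “one extra case per million people” describe exactly the same data, but only the second framing tells a reader whether the risk is worth worrying about.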

Failure to Explain the Limits of Observational Studies
Often, the stories fail to differentiate association from causation.

You may have heard the saying “correlation does not mean cause and effect.” Just because two things are associated doesn’t mean one caused the other. If, for example, a study found that carrying matches in your pocket was associated with a tenfold increase in your risk of lung cancer, would that mean matches cause lung cancer? Of course not! Carrying matches may mean you’re a smoker, and smoking certainly does cause lung cancer, but the simple association between matches and cancer doesn’t mean one causes the other.

Here’s a great site that illustrates all kinds of such bogus associations. While this may not be something everyone appreciates in daily life, journalists writing about healthcare research ought to understand it.
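A tiny simulation makes the point concrete (my own sketch with made-up probabilities; the smoking rates and risks are arbitrary assumptions): matches and cancer look strongly associated overall, but the association vanishes once you look within smokers and non-smokers separately.

```python
# Toy simulation (made-up probabilities): smoking causes both match-carrying and
# lung cancer, so matches and cancer look associated even though one does not
# cause the other. Stratifying by smoking makes the association disappear.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
smoker = rng.random(n) < 0.25
carries_matches = rng.random(n) < np.where(smoker, 0.70, 0.05)
lung_cancer = rng.random(n) < np.where(smoker, 0.10, 0.01)   # risk depends only on smoking

def risk(mask):
    """Proportion of people in the selected group who develop lung cancer."""
    return lung_cancer[mask].mean()

print(f"All people, matches vs. none: {risk(carries_matches):.3f} vs. {risk(~carries_matches):.3f}")
print(f"Smokers only, matches vs. none: {risk(smoker & carries_matches):.3f} "
      f"vs. {risk(smoker & ~carries_matches):.3f}")
print(f"Non-smokers only, matches vs. none: {risk(~smoker & carries_matches):.3f} "
      f"vs. {risk(~smoker & ~carries_matches):.3f}")
```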

The Tyranny of the Anecdote
Stories may include positive patient anecdotes but omit trial dropouts, adherence problems, patient dissatisfaction, or treatment alternatives.

I’ve written about anecdotes and miracle stories many times. The number one “argument” presented in the comments on this blog in defense of treatments I evaluated critically is the presentation of anecdotes that look like they show the treatment working. Anecdotes can only suggest hypotheses to test, but they can never prove these hypotheses true.

There are many reasons treatments that don’t work may seem like they do, and professionals who interpret and explain science should know anecdotes are unreliable and often misleading. While personal stories make for more interesting and emotionally appealing narratives, they should always be used carefully only to illustrate something that has been demonstrated to be true or false by more reliable evidence.

Surrogate Markers May Not Tell the Whole Story
Journalists should distinguish changes in surrogate markers of disease from clinical endpoints, including serious disease or death. Many news stories, however, focus only on surrogate markers, as do many articles in medical journals.

The bottom line for any medical treatment is whether it reduces the meaningful symptoms of disease, including the most final of all, death. It makes no difference if a therapy raises or lowers the amount of some chemical we can measure in the blood if that isn’t a clear and well-established indicator that the therapy will also reduce suffering or prevent death. Surrogate markers are, as the article suggests, overused by healthcare researchers in many cases because they are often cheaper and easier to measure than real symptoms or mortality, but they have significant limitations, and this should be made clear when talking about research using them.

Stories About Screening Tests That Do Not Explain the Tradeoffs of Benefits and Harms
Stories about screening tests often emphasize or exaggerate potential benefits while minimizing or ignoring potential harms. We found many stories that lacked balance about screening for cardiovascular disease and screening for breast, lung, ovary, and prostate cancer.

I have frequently referred to the growing appreciation in human medicine, which has not yet come very far in the veterinary field, that screening tests have risks as well as benefits, and these need to be carefully weighed. The Choosing Wisely project is a key resource for people trying to make smart decisions about screening tests, as is the web site for the U.S. Preventive Services Task Force. Both provide real evidence to help balance the risks and benefits of potential screening tests. Journalists should be aware of the limitations and pitfalls of screening, including risks such as overdiagnosis, and should include those considerations in stories about screening tests.

Fawning Coverage of New Technologies
Journalists often do not question the proliferation of expensive technologies.

I would add that journalists rarely question the value or evidence for alternative therapies and tend to fawn over them and their proponents more often than not. Reporting that is truly informative and useful must be thoughtful and based on assessment of the real evidence, not simply unquestioningly enthusiastic about therapies with a token quote or two from skeptics for “balance.” Drugs are not the only medical treatment to have risks, but it seems journalists are far more likely to talk about the risks of pharmaceuticals than other treatments.

Uncritical Health Business Stories
Health business stories often provide cheerleading for local researchers and businesses, not a balanced presentation of what new information means for patients. Journalists should be more skeptical of what they are told by representatives of the health care industry.

I would argue that identifying any potential bias, financial or otherwise, in a source for a news story should be an ordinary part of journalistic practice. The idea behind seeking multiple sources is not just to provide a superficial impression of balance by including opposing points of view regardless of merit, but to ensure that the journalist has a comprehensive awareness of the evidence for and against the treatment they are writing about so that they can provide a useful explanation of what is known about it. The study also found, however, that journalists often don’t follow this practice.

Single-Source Stories and Journalism Through News Releases
Half of all stories reviewed relied on a single source or failed to disclose the conflicts of interest of sources. However, journalists are expected to independently vet claims. Our project identified 121 stories (8% of all applicable stories) that apparently relied solely or largely on news releases as the source of information.

There really shouldn’t be any need to point out that this is lazy and unacceptable journalistic practice and does not lead to accurate, useful information for the public.

I don’t want to suggest that there are not many excellent journalists providing accurate and informative interpretation and analysis of healthcare research. The study specifically identifies examples of stories that succeeded in avoiding the mistakes they found, and there are certainly many in the media who do a brilliant job reporting and explaining health sciences research. Hopefully, by identifying common problems and mistakes, this study will contribute to improving the quality of healthcare science journalism.

Posted in General | 1 Comment

Evidence Update-Data for Resveratrol and Antioxidants Not Looking Good

At the very beginning of this blog in 2009, I wrote about a compound called resveratrol. I concluded at that time that it was “promising but unproven.” I subsequently reported on a scandal in which a key researcher studying this compound had a large number of papers retracted due to fraud. And last November, I passed along the conclusions of a couple of reviews of the existing evidence to that point concerning resveratrol. The conclusions had changed little from my first report. Resveratrol shows promising properties in the lab and in animal models, but it has not yet been shown to be an effective treatment or preventative health agent in humans or, of course, in pets.

A new study has recently been published which adds to the existing evidence that intake of resveratrol from natural food sources, notably red wine and chocolate, is not associated with a reduced risk of any disease.

Semba RD. Resveratrol in red wine, chocolate, grapes not associated with improved health. JAMA Intern Med. Published online May 12, 2014. doi:10.1001/jamainternmed.2014.1582

This study followed nearly 800 people for nine years and measured the amount of resveratrol metabolites excreted in the urine, which represented how much resveratrol was being consumed in the diet. Though the study had some methodological limitations (it was an observational study, not a clinical trial, so there was no randomization, and it’s not clear from the abstract if there was any blinding for the analysis), it is a piece of data to add to our current understanding. The conclusion, unfortunately, was that

The antioxidant resveratrol found in red wine, chocolate and grapes was not associated with longevity or the incidence of cardiovascular disease, cancer and inflammation.

This does not, of course, mean that we can definitively say dietary resveratrol is of no value, or that higher doses provided as a supplement might not have beneficial effects. There is still some weak clinical trial evidence to suggest supplementation might be beneficial in some cases. The bottom line here is that we don’t know for certain, but the compound is looking less promising the more we study it.

This is less surprising than it might once have been since the theoretical reason to expect a benefit from resveratrol is its antioxidant effects. While “antioxidant” is something of a magic word in alternative medicine, science-based medicine has been soberly and carefully investigating the idea that such compounds might have a wide variety of beneficial health effects. As I discussed earlier this year, however, the evidence is becoming more and more convincing that antioxidants have fewer benefits than once supposed and, as should be expected with all therapies that do anything at all, they come with risks.

A recent animal model study actually found evidence suggesting free radicals might in some cases be protective and that the reason they increase with age may be because they are part of the body’s attempt to fight the effects of aging rather than the cause of it. This is, of course, only one lab study, but it is a piece in a growing body of information which suggests that the hype and reality about antioxidants like resveratrol may differ quite a bit. Enthusiastic recommendations for antioxidant supplement products are clearly not warranted, and justifying therapies, including herbal and other alternative treatments, as useful by arguing they provide antioxidant effects is a weak and unconvincing rationale.

 

Posted in Herbs and Supplements | Leave a comment

Videos

This link leads to a YouTube channel with videos related to evidence-based veterinary medicine and scientific evaluation of alternative medicine.


 

Posted in Presentations & Lectures | Leave a comment

Finally, A Journalist Takes a Skeptical View of Claims for Veterinary Acupuncture

I am occasionally interviewed by journalists writing articles about alternative medicine for pets. Many of these articles focus on acupuncture, and they tend to follow a pretty consistent pattern:

1. Story about a dog or cat with some pain or disability, often that has not responded to conventional treatment.

2. Owner takes pet to veterinarian who recommends acupuncture (or other alternative treatment).

3. Brief summary of claims of benefits for acupuncture and reference to its long history of use in people and pets. Sometimes there is a reference at this point to research studies supporting the use of acupuncture (almost never to studies that do not support it).

4. Brief quote from grumpy, killjoy token skeptic (that would be me) to create the impression of “balanced” reporting.

5. Return to story of pet from beginning of article, now all better and frolicking happily.

It is well known that the media caters to our inherent preference for stories over statistics, and that when journalists cover scientific subjects, especially those that are controversial, nuance and thoughtful analysis of evidence are often sacrificed for a compelling narrative. And the well-meaning notion that a journalist should present voices from “all sides” of a controversy all too often results in stories that suggest a legitimate debate about scientific facts when, in reality, there is strong evidence and consensus on one “side” and the unshakeable faith of a small minority on the other.

I was pleased, therefore, to see a recent piece about veterinary acupuncture in Slate which took the scientific evidence, and the perspective of skeptics, more seriously than the feel-good anecdotes of believers.

If Your Veterinarian Offers Acupuncture, Find a Different Vet

Sticking needles in your dog won’t make it feel better.

The title may be a bit extreme, since unfortunately a lot of otherwise excellent veterinarians have been fooled by the claims and shaky evidence for acupuncture, but the overall message of the article is right on target. Mr. Palmer appreciates that while the scientific evidence is mixed, an appropriate evaluation requires considering the quality and limitations of studies and the preponderance of the evidence.

If you’re an acupuncture enthusiast, you’re probably getting ready to point me toward studies proving the efficacy of veterinary acupuncture. Before you do that, let’s make a deal: I will concede that there are studies supporting veterinary acupuncture if you concede that there are studies opposing it. The issue is assessing the quality of the studies and determining where the weight of the evidence lies.

His conclusion, with which I agree entirely, is that the most reasonable interpretation of the balance of the evidence is that acupuncture is a placebo for humans and likely has little to no predictable beneficial effects in animals. Most veterinary acupuncture studies are deeply flawed, and better research could be justified, but the best evidence in humans does not suggest this is a promising area for veterinary medicine.

Mr. Palmer’s piece also points out that the potential financial conflicts of interest which alternative medicine proponents so blithely use to dismiss research on pharmaceuticals or other conventional therapies are just as much of an issue in research on alternative veterinary therapies, including acupuncture. One of the most prominent researchers in veterinary acupuncture and so-called Traditional Chinese Veterinary Medicine (TCVM) also makes his living teaching TCVM and selling herbs and related products. While this does not automatically invalidate the research Dr. Xie is involved in, it points to a clear a priori bias which necessitates rigorous scientific controls, including replication by others, in order to generate reliable evidence. These controls are seldom present in veterinary acupuncture research.

It is encouraging to see the mainstream media identify the unimpressive scientific reality behind the widespread, positive, anecdote-driven claims for veterinary acupuncture. Here are some links to previous posts on veterinary acupuncture and TCVM which offer more details about these claims and the science, or lack of science, associated with them.

Veterinary Acupuncture

“Electroacupuncture” as a Treatment for Nausea & Vomiting Caused by Morphine in Dogs

Acupuncture vs Opioids for Surgical Pain in Dogs: Which is Better?

JAVMA Article on Electroacupuncture for IVDD

Traditional Chinese Veterinary Medicine

Evaluation of the Chinese Herbal Remedies San Ren Tang, Wei Lin Tang, and Alisma for Feline Urinary Tract Disease

The History of Veterinary Acupuncture: It’s Not What You Think

 

Posted in Acupuncture | 6 Comments

Presentations About Evidence-Based Veterinary Medicine

These are three presentations I gave to veterinarians at the 2014 Western Veterinary Conference. I previously posted the slide decks and a bibliography. These videos are narrated versions of the slide decks.

What is Evidence-Based Veterinary Medicine & Why Do We Need It?

 

Evidence-Based Veterinary Medicine: Myths & Misconceptions

 

The Future of Evidence-Based Veterinary Medicine: Challenges & Opportunities

Posted in Presentations & Lectures | Leave a comment

Pox Parties for Dogs: Brought to You by Veterinary Homeopaths

When I first heard of Pox Parties, I found it amazing, and frightening, that people actually fear vaccines so much that they will deliberately expose their children to infectious disease to avoid them. Such parents take their children to play with others who have active infections, or even mail infectious material to each other to expose their children (which is illegal as well as stupid).

It is true that some children exposed in this manner will develop a protective immunity. Some will also have to endure an active infection, which those of us old enough to have experienced these diseases can dimly recall is very unpleasant. And a small number of these children will experience serious, even life-threatening or permanently disabling injury from these infections. Given the well-demonstrated and remarkable safety and effectiveness of vaccines, there is simply no excuse other than ignorance and irrational fear for this behavior.

I was somewhat surprised to see recently that some veterinarians are apparently recommending a version of the Pox Party for dogs in order to avoid vaccinating for parvovirus and canine distemper, two common and very serious infectious diseases. I was not, however, surprised that these veterinarians were homeopaths.

An article by Dr. Will Falconer (about whose bizarre and dangerous view of medicine I have written before here, here, and here), posted on the Facebook page for the Academy of Veterinary Homeopathy, shares the “exciting” news about how to avoid evil vaccines for parvo and distemper.

Imagine avoiding risky vaccinations while getting very strong immune protection against parvo and distemper, the two potentially deadly diseases of puppies.

You know vaccinations are grossly over provided in our broken system of veterinary medicine. The pushing of vaccinations by Dr. WhiteCoat throughout your animal’s life doesn’t add to her immunity…And you know that vaccines are harmful. Chronic disease often follows vaccination, even a single vaccination.

Despite his assumption that his readers share his understanding about the devastating consequences of vaccination, Dr. Falconer is actually completely deluded on this point. While vaccines have risks, like any medical therapy with any effects at all, they also have benefits, and the two must be balanced against one another. Vaccination has greatly reduced, and in some cases completely eliminated infectious diseases that afflicted humanity and caused enormous suffering and death for millennia, and the risks have proven to be surprisingly few in light of the enormous benefits.

Vaccine protocols are changing in veterinary medicine in recognition that these therapies provide even greater protection than once thought and do not need to be given as often as they traditionally have been. However, no reasonable veterinarian, and no legitimate scientific evidence, supports these kinds of hysterical claims about the dangers of vaccination or the idea that they can or should be entirely avoided.

The alternative Dr. Falconer and his colleagues propose to vaccination for parvovirus and distemper is further illustration of a truly astounding level of delusion and ignorance of history:

A lecture on parvo by Dr. Todd Cooney lit us up, as he showed us statistics from his homeopathic practice in Indiana that the vaccinated pups had less chance of surviving parvo than those not vaccinated for that disease!

Parvo vaccine itself was immune suppressive.

Parvovirus was ubiquitous in the environment.

Animals treated homeopathically when sick with parvo had far better survival rates than those treated with the usual drugs.

Distemper was prevented by taking pups to a known wildlife area where raccoons with distemper lived.

Dr. Rosemary Manziano learned of the outbreak of canine distemper in raccoons in her area through the CDC. She boldly suggested to her puppy owners over a period of 11 years that they visit a pond known to be a hangout for these raccoons. After a brief period of sniffing around the bushes and maybe drinking the water, the pups were brought home.

This was repeated a week later, and on the third week, the good doctor would test for distemper titers, the evidence of immune response. Lo and behold, these pups had fantastic titers indicating strong immunity! And, in case you’re wondering, not one puppy ever got sick in the least. This happened in well over a hundred pups and was, as Dr. Manziano called it, “fool proof immunization.”

After eleven years, it stopped working. She assumed that the disease in raccoons had run its course, natural resistance having been gained by their population.

Dr. Manziano suggested that her new pup owners who wanted natural immunization take short, five minute visits to the most popular dog parks. Those parks with the highest dog traffic were recommended.

This kind of irresponsible advice is not supported by scientific evidence, but then that sort of evidence is of no concern to people who practice the mystical discipline of homeopathy anyway. The reality is that these veterinarians appear to be discouraging their clients from making use of safe and effective therapies that have dramatically reduced the risk of life-threatening illness in dogs, and instead to be recommending that dogs be exposed to these very serious diseases in the bizarre belief that they are more likely to develop protective immunity without active infection or harm. This belief requires a dramatic ignorance of the entire field of immunology and the history of vaccination, or simply a complete rejection of science in favor of the infallible wisdom of uncontrolled personal experience.

In either case, it seems indefensible and unethical. It is a sad comment on the state of the veterinary profession that these doctors are allowed to promote such practices while maintaining the appearance of legitimacy, and the exclusive right to practice veterinary medicine, that comes with being licensed veterinarians. Their behavior not only places their patients at unnecessary risk but undermines the legitimacy of the veterinary profession and the confidence of the public.

 

Posted in Homeopathy, Vaccines | 11 Comments

Australian National Health and Medical Research Council Review Concludes Homeopathy Doesn’t Work

When I reviewed the case against homeopathy as part of the debate over the AVMA resolution to acknowledge it as a placebo therapy, there were many previous reviews to refer to. In addition to the numerous systematic reviews of the scientific literature, the UK House of Commons Science and Technology Committee had recently conducted an extensive review and hearings and concluded:

In our view, the systematic reviews and meta-analyses conclusively demonstrate that homeopathic products perform no better than placebos. We could find no support from independent experts for the idea that there is good evidence for the efficacy of homeopathy.

Many other organizations have, upon reviewing the evidence, come to the same conclusion, including:

The British Veterinary Association-

The BVA cannot endorse the use of homeopathic medicines, or indeed any medicine making therapeutic claims, which have no proven efficacy.

The Australian Veterinary Association-

That the Board agreed that the veterinary therapies of homeopathy and homotoxicology are considered ineffective therapies in accordance with the AVA promotion of ineffective therapies Board resolution.

And even the AVMA Council on Research-

There is no clinical evidence to support the use of homeopathic remedies for treatment or prevention of diseases in domestic animals.

A new, comprehensive review similar to that of the House of Commons committee, complete with input from homeopaths, has just been completed and submitted for public comment by the Australian National Health and Medical Research Council, and it comes once again to the only rational, evidence-based conclusion:

NHMRC concludes that the assessment of the evidence from research in humans does not show that homeopathy is effective for treating the range of health conditions considered.

The organization reviewed the scientific literature, including evidence specifically submitted by homeopaths in defense of their claims. I have reviewed such evidence before and found it neither reliable nor supportive of these claims, and the NHMRC review agrees:

There were no health conditions for which there was reliable evidence that homeopathy was effective. No good-quality, well-designed studies with enough participants for a meaningful result reported either that homeopathy caused greater health improvements than a substance with no effect on the health condition (placebo), or that homeopathy caused health improvements equal to those of another treatment.

**For some health conditions, homeopathy was found to be not more effective than placebo.

**For other health conditions, some studies reported that homeopathy was more effective than placebo, or as effective as another treatment, but those studies were not reliable.

**For the remaining health conditions it was not possible to make any conclusion about whether homeopathy was effective or not, because there was not enough evidence.

The review discusses the evidence examined and the procedures used to evaluate it in detail. All of the procedures were vetted by the Australian Cochrane Centre and correspond to the standard practices of evidence-based medicine applied to conventional therapies. The conclusion was clear:

Based on all the evidence considered, there were no health conditions for which there was reliable evidence that homeopathy was effective. No good-quality, well-designed studies with enough participants for a meaningful result reported either that homeopathy caused greater health improvements than placebo, or caused health improvements equal to those of another treatment.

The evidence, and independent systematic reviews of the evidence concerning homeopathy, are unequivocal. Homeopathy is no better than a placebo. Failure to accept this conclusion does not represent a scientific controversy but a fundamentally religious faith in homeopathy that is not capable of being altered by any evidence.

Posted in Homeopathy | 5 Comments

“Electroacupuncture” as a Treatment for Nausea & Vomiting Caused by Morphine in Dogs

A study was recently published investigating the possible effects of “electroacupuncture” on nausea and vomiting induced by morphine in dogs. This study illustrates some of the challenges of evaluating acupuncture in general, and it shows how the ambiguity in study results can allow for positive or negative conclusions depending on one’s point of view.

Koh RB, Isaza N, Xie H, Cooke K, Robertson S. Effects of maropitant, acepromazine, and electroacupuncture on vomiting associated with administration of morphine in dogs. J Am Vet Med Assoc. 2014;244(7):820-29.

The Study

222 dogs who were going to be neutered were included in the study, 37 in each of six groups:

saline injection (placebo control)
maropitant injection (anti-emetic medication)
acepromazine injection (anti-emetic/sedative medication)
electrical stimulation/needling at 1 acupuncture point
electrical stimulation/needling at 5 acupuncture points
electrical stimulation/needling at location not designated as acupuncture point

Twenty minutes after one of the treatments above, the dogs were given an injection of morphine, a narcotic pain reliever commonly used for surgical patients. Morphine frequently induces nausea and vomiting shortly after injection, so the authors evaluated the dogs for the occurrence of vomiting or retching, the number of times dogs vomited or retched, the time it took for vomiting or retching to stop, and signs of nausea evaluated on a subjective numerical nausea assessment scale.

The dogs were randomly assigned to the treatment groups, and there appeared to be no significant differences between the dogs in the various groups. Assessment of vomiting and retching were made by an investigator aware of the treatment used in each dog. Assessment of nausea was made using a video recording assessment by a blinded investigator.

In terms of the objective sign of response to treatment, the occurrence of vomiting and retching, these events occurred significantly less often in the medication groups than in the placebo and acupuncture groups, which did not differ from each other. Out of the 37 dogs in each group, the number (percentage) experiencing vomiting or retching was as follows:

Saline- 28 (75.7%)
maropitant- 14 (37.8%)
acepromazine- 17 (45.9%)
1 acupuncture point- 24 (64.8%)
5 acupuncture points- 26 (70.3%)
location not designated as acupuncture point- 32 (86.5%)
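
For readers who want to check the headline comparison for themselves, here is a quick sketch using the counts above (my own illustration with a simple Fisher exact test; this is not the statistical analysis the authors actually performed, which may well differ):

```python
# Quick check of the vomiting counts above (my own illustration using a Fisher
# exact test; the paper's own statistical methods may differ).
from scipy.stats import fisher_exact

n_per_group = 37
vomited = {"saline": 28, "maropitant": 14, "acepromazine": 17,
           "1 acupuncture point": 24, "5 acupuncture points": 26,
           "non-acupuncture point": 32}

for group, count in vomited.items():
    if group == "saline":
        continue
    # 2x2 table: rows are saline vs. comparison group, columns are vomited vs. did not
    table = [[vomited["saline"], n_per_group - vomited["saline"]],
             [count, n_per_group - count]]
    _, p = fisher_exact(table)
    print(f"{group} vs. saline: {count}/{n_per_group} vomited, p = {p:.3f}")
# The medication groups, but not the acupuncture groups, should differ from saline,
# consistent with the pattern described in the text.
```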

When the total number of vomiting or retching episodes was counted, the pattern was similar, but the two acupuncture groups appeared to have fewer episodes than the control groups:

Saline- 88
maropitant- 21
acepromazine- 38
1 acupuncture point- 35
5 acupuncture points- 34
location not designated as acupuncture point- 109

In terms of the number (percentage) of dogs showing signs of nausea, the results were as follows:

Saline- 11 (29.7%)
maropitant- 12 (32.4%)
acepromazine- 3 (8.1%)
1 acupuncture point- 7 (18.9%)
5 acupuncture points- 4 (10.8%)
location not designated as acupuncture point- 15 (40.5%)

Nausea scores were calculated on a 4-point scale at 10, 15, and 20 minutes after the morphine was given. The average scores ranged from 1.1 to 1.6, so there was little difference between groups. Statistically, the scores for maropitant, saline, and sham acupuncture were higher than before the morphine at 2 of these 3 time points, while the scores did not change after morphine for the other three groups.

What Does It Mean?

In humans, there is some evidence acupuncture may be effective at reducing nausea and vomiting in a variety of situations, though some reviews for some conditions find no benefit. The tricky part in assessing this is that nausea and vomiting are very subjective and influenced by mental states which are highly subject to placebo effects and bias. It is challenging to blind patients to whether or not they are receiving acupuncture, and it is impossible to blind the people giving the acupuncture therapy, so some residual bias exists in all acupuncture studies. This is problematic because the attitude and demeanor of the acupuncturist has a significant effect on outcome in acupuncture studies, so unblinded therapists, patients, and investigators make the results of studies looking at highly subjective symptoms difficult to rely on.

In this study, the sham control for the acupuncture treatment was application of the same therapy (needling and electrical stimulation) at a location not considered an acupuncture point. The biggest problem with this is that there is no such thing as an acupuncture point. The locations for needling in dogs are determined subjectively based on interpretation of historical texts for humans, which were themselves based on mystical and supernatural criteria, and the localization of acupuncture points is highly subjective and inconsistent depending on the individual acupuncturist and the school of acupuncture in which they were trained. They have not been consistently correlated with identifiable anatomic structures, and they cannot be consistently and repeatedly detected by acupuncturists tested under controlled conditions.

More challenging still is the question of whether the “electroacupuncture” tested in this study is really acupuncture at all. Sticking needles in the skin and running electrical current through them undoubtedly has physiologic effects. Transcutaneous electrical nerve stimulation (TENS) is a conventional therapy that uses this process to treat pain and nausea. The only thing that qualifies the therapy used in this study as acupuncture is the choice of needling location based on meridians and acupuncture points. However, since these don’t really exist, it is not clear that the sham treatment was really sham or the “real” acupuncture treatment real or distinct from any other method of using needles and electrical current.

As for the other treatments used, maropitant is a widely used and well-studied anti-emetic which has good evidentiary support for its effectiveness. Acepromazine is a sedative which has been reported to have anti-nausea effects, though it is less widely used for this purpose because of its sedating effects. And certainly, morphine is well known to cause nausea and vomiting in dogs. So in general the study design was reasonable, apart from the question of how acupuncture or “electroacupuncture” are defined.

The investigator assessing vomiting and retching was not blind to treatment, so even though these are pretty objective measures, it is possible for there to be some bias in this assessment. The results show pretty clearly that dogs given maropitant or acepromazine were less likely to vomit than those given placebo and that acupuncture had no more effect than the placebo. So for the most obviously important and objective measure, the acupuncture did not seem to work and the medications did.

When the total number of vomiting and retching episodes was counted, it appeared that acupuncture was almost as effective as maropitant. However, this measure by definition only included dogs who vomited. So if this is a true representation of the effects of these treatments, it would mean that maropitant and acepromazine prevent vomiting and acupuncture does not, but that if the patient does vomit, acupuncture is about as good as acepromazine, though not as good as maropitant, in reducing the number of times the dog vomits. This is a possible interpretation; however, it is a bit convoluted, and it raises the question of whether using acupuncture instead of maropitant makes sense, since it would seem preventing vomiting is better than simply reducing the number of times a dog vomits after getting morphine.

The blinding was appropriate for the assessment of signs of nausea, though this measure is inherently more subjective, and the nausea scale is not a validated instrument. However, the results for this measure make little sense. The number of dogs showing any signs of nausea was lower for the acupuncture treatment groups and the acepromazine group than for the sham acupuncture, placebo, and maropitant groups. For this to be a true result, it would mean that acupuncture was better at reducing nausea than maropitant even though maropitant is better at preventing vomiting. This is possible, of course, but it seems the more plausible interpretation is that the results don’t make sense because this measure is not reliable.

The results of the numerical nausea scores are similarly dubious. Again, the score increased after morphine for the saline, sham acupuncture, and maropitant but not for the acepromazine and acupuncture treatments. This might mean that the acupuncture reduced nausea and the maropitant didn’t, but again this makes little sense given the extensive evidence for the anti-emetic effects of maropitant and the significantly lower number of dogs given this drug who vomited compared with those treated with placebo or with acupuncture. It is also hard to believe that a drug so effective in preventing vomiting was worse than saline placebo in preventing nausea. It seems more likely that the unvalidated nausea scale simply wasn’t a useful measurement tool.

It is also not clear that the differences in scores seen, even if they are real, are meaningful. The largest change seen for maropitant was an increase from 1.0 to 1.5 on a 4-point scale. In humans evaluated with nausea scales, the smallest change in nausea considered even marginally significant is a change of 15%. The largest change measured in this study was only 15%, for the sham acupuncture, and the change for the saline placebo was no more than 10%.

Bottom Line

The most positive interpretation possible for this study is that sticking needles into patients and running electrical current through them might reduce morphine-induced nausea marginally but does not prevent vomiting compared with maropitant. The question of whether or not this procedure can be legitimately called acupuncture, rather than TENS, hinges on whether or not the locations selected by these particular acupuncturists yield a better outcome than other locations.

While this study suggests they might, the bulk of the acupuncture literature suggests needling locations don’t really matter for the effects of needling. Different acupuncture schools use so many different locations, ranging from points on the entire body to just those on the ear or the hand, that almost any point at all would be considered an acupuncture point by someone. Acupuncturists are inconsistent in where they locate specific points, and no consistent, detectable anatomic or physiologic structure has been associated with all the points claimed to be special. So it seems unlikely that using any particular traditional scheme for selecting needling location would be beneficial over any other.

In any case, the amount of change in nausea was not only clinically insignificant, but it requires us to accept that a drug which clearly prevents vomiting is less effective than a saline placebo in preventing nausea, which makes little sense.

The results also suggest that the nausea scale employed is not a very useful instrument. It would be worthwhile to have a validated measurement tool for this problem in veterinary patients, but developing it beginning with testing of a controversial intervention such as “electroacupuncture” is probably not the best way to proceed. Once such an instrument has been validated with more evidence-based therapies, using it to evaluate acupuncture would be reasonable.

This study provides little evidence for the value of acupuncture in treating morphine-induced nausea. Replication with a valid instrument would be necessary to document that there is, in fact, an effect of acupuncture on nausea and vomiting in dogs, but even if this were done this study suggests such effects are unlikely to be clinically significant, so there is little reason to substitute acupuncture for medications with clear efficacy, or even to add acupuncture to these treatments.

 

 

 

Posted in Acupuncture | 1 Comment