Another Homeopathy Study: Mastitis in Dairy Cows

Science is ultimately an epistemological enterprise. The purpose of scientific research is to understand how things work, often with the goal of influencing them. Clinical trials in medicine aim to confirm or disprove hypotheses about diseases or medical therapies. However, no research study is perfect, and all have flaws or limitations in their methodology that cast some doubt on their findings. And even with appropriate controls and methodology, it is possible for biases and other factors to influence the results of any experiment. A real scientific truth must be robust enough to be demonstrable repeatedly and by different investigators. The balance of the evidence, and the consistency of a theory with established knowledge, not the results of any particular experiment, are the most reliable guides to what is true or false.

Ultimately, though, science is pointless if it doesn’t at some point allow us to decide that some hypotheses are true and others are false. All scientific truths may be provisional, “truth” with a small “t” as it were, but there comes a point when the evidence is strong enough that doubting an established scientific truth is unreasonable. Perhaps it is theoretically possible that previous experiments have been in error and the sun actually does circle the earth, but the evidence against this hypothesis is so overwhelming that believing it is possible requires at least willful ignorance, if not outright blind faith.

And while it is true that absence of evidence is not necessarily evidence of absence, when you have looked hard enough for long enough for evidence that an idea is true and you have failed to find any, it becomes reasonable to take this failure as at least some evidence against the idea. So when I see clinical trials published concerning hypotheses that are implausible in themselves and that can only be true if mountains of evidence against them are all mistaken, I tend to give an exasperated sigh and set to reading the paper with some a priori bias against the hypothesis. Is this closed-minded? Only if expecting that an experiment designed to show the sun revolves around the earth will fail to do so is closed-minded. To be open-minded about ideas with a long history of failure to demonstrate their veracity and which require giving up much more solidly established ideas is not a virtue, it is wishful thinking or cognitive dissonance in action.

Homeopathy is an idea whose time has come and gone. It is theoretically implausible in the extreme, and decades of research have failed to support its theories or show any meaningful clinical effects. It is a flat-Earth hypothesis, and the expenditure of resources on further clinical trials to investigate it is the epitome of Tooth Fairy Science. Trials such as these, however honest the intent behind them, merely muddy the waters by providing proponents of a failed idea with what appears to be evidence to support their claims but is, when weighed against the balance of the voluminous research available, not meaningful.

So whenever I see a study purporting to demonstrate a clinical effect of homeopathy, I look carefully at the designs, the statistics, and all the markers of quality for a scientific paper. I also look at the publication in which the study appears. There are some journals devoted exclusively to alternative medicine, and these exist only to publish CAM studies not judged to be of sufficient quality to be published in standard journals. These journals rarely publish negative findings (though to be fair neither do most mainstream journals), so a degree of skepticism about papers that appear in them is warranted.

I want to be fair to such studies, and to their authors who undoubtedly honestly believe that they are trying to use real, legitimate science to investigate a practice which their experience suggests has value. However, even if such studies are methodologically no more flawed than many which investigate truly legitimate ideas, they frustrate me. Especially in veterinary medicine, where resources for clinical research are so limited and where there are a multitude of serious problems and plausible potential solutions to investigate, it seems a shame to spend intellectual and material capital on an idea which a dispassionate analysis of the voluminous evidence would have long since caused us to abandon.

I recently came across a paper investigating the potential therapeutic use of homeopathy for mastitis in dairy cows, and I wanted to examine it closely both to see whether it met the standards for real evidence that homeopathy might have clinical benefits and to provide an example of how one approaches an evidence-based reading of a scientific publication. The first step in such a reading is to recognize the limitations of one’s own knowledge and expertise. As a small animal vet, I know very little about bovine mastitis, and I am certainly no statistician. So I sought help from colleagues more familiar with these subjects than I, and many people generously offered their perspective on this paper. I have incorporated these perspectives in my analysis, though any errors or misinterpretations arising from them are strictly my responsibility.

Werner C, Sobiraj A, Sundrum A. Efficacy of homeopathic and antibiotic treatment strategies in cases of mild and moderate bovine clinical mastitis. Journal of Dairy Research 2010;77:460–467.

The study design was a bit odd. Initially, 136 cows (147 affected udder quarters) were randomly allocated to treatment with antibiotics, homeopathy, or placebo, which is standard practice. The sickest cows, those with signs of systemic illness or a fever, were excluded. This introduces a possible bias, as these are the cases most likely to need an effective therapy, whereas less ill animals are more likely to recover on their own regardless of the effectiveness of treatment. Most cases of mastitis are mild and may be self-limiting, depending on the organism involved, so it is appropriate to study interventions for these, but we must simply bear in mind that the effect of treatment may be harder to judge accurately when the disease often resolves by itself.

The initial randomization was counteracted to some extent, however, by the fact that cases not responding to treatment in the first 5 days were shifted from whatever treatment group they were in to the other (antibiotic or homeopathic treatment), or from the placebo group to one of the two treatment groups. This decision was made at the discretion of one of the investigators, which introduces another potential bias.

Blinding of the farmers/owners to treatment group was incomplete, as the antibiotic treatment approach differed significantly from the homeopathic and placebo treatments (which also differed somewhat from each other). After the first 5 days, the farmer took over treatment and was able to distinguish antibiotic treatment from the other two groups, which might have affected other aspects of their care and evaluation of the animals. So any assessments made after the first 5 days could be influenced by bias associated with the farmers knowing what treatment the cows were receiving and thus managing them differently.

The antibiotic treatment also involved local therapy applied directly to the teat, whereas the homeopathic and placebo treatments involved only oral medication. This, again, could have influenced the results if local treatment alone, regardless of the agent used or of any systemic treatment, had an impact on outcome.

The homeopathic treatments used were “low dilution” preparations, which unlike most common homeopathic remedies could actually contain some residual amount of the substance the remedies were prepared from. This raises the question of whether any effect seen would be due to homeopathic methods or to potential physiological effects of the original agents. This is significant since even if these agents have some effect, the majority of the homeopathic remedies in use no longer contain any of them, so most of these remedies would not be able to take advantage of any such effect.

The results mostly showed no difference between treatments, though cases of mastitis with positive bacterial cultures did seem to respond better to antibiotic treatment compared to homeopathic and placebo treatment. In fact, the authors themselves remarked, “in our opinion, contagious pathogens had to be excluded from mastitis studies dealing with alternative medicine because of their epidemiological background and the existence of well-proven conventional elimination strategies.” Essentially, they acknowledge that mastitis with infection already has an effective treatment and it would be unethical to deny this to patients in order to test alternative treatments. Of course, this again leaves only the cases most likely to get better on their own to be tested and treated with alternative therapies.

The homeopathic treatment appeared to be statistically different from the placebo only at one of the 6 evaluation time points, Day 56 after the beginning of treatment, and only for the subgroup with positive bacterial cultures. The rate of total cure seen with antibiotic treatment was lower than reported elsewhere, which raises the possibility that the lack of a clear superiority of antibiotic treatment over homeopathy might be due to the failure of the antibiotic treatment applied in this trial rather than a true equivalence between antibiotic and homeopathic treatment.

Finally, from the point of view of statistical analysis, there were several issues that would decrease confidence in the conclusions. The sample size was relatively small, and the number of animals in the study may not have been enough to justify the statistical conclusions reached (not all the relevant information to judge this was provided in the methods section). The biggest problem with the statistical methods, however, and by far the most common statistical error made in papers reporting the results of clinical trials, is the use of multiple comparisons at multiple time points without correction for the probability of random positive results.

The threshold for statistical significance is usually set at 5%. This means that if you plan to compare two treatments on a single measurement, say the percentage of animals cured in each group, then a difference large enough to be called statistically significant would arise by chance only 5% of the time if there were no real difference between the treatments. The difference could, of course, be due to many other factors besides the original hypothesis of the investigators. Statistical significance does not mean the hypothesis is true, only that random chance by itself is unlikely to explain the difference seen.
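To make the 5% threshold concrete, here is a minimal simulation, purely illustrative and not drawn from the paper’s data (the group size and the “cure score” below are hypothetical). Two groups are repeatedly drawn from the same population, so there is no real treatment effect, yet a “significant” difference at p < 0.05 still turns up in roughly 5% of the simulated comparisons.

```python
# Minimal sketch of what a 5% significance threshold means when there is no real effect.
# All numbers here are hypothetical, chosen only for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials = 10_000          # simulated two-group comparisons
n_per_group = 30           # hypothetical number of cows per group
false_positives = 0

for _ in range(n_trials):
    # Both groups come from the same distribution, so any "significant" result is spurious.
    group_a = rng.normal(loc=50, scale=10, size=n_per_group)
    group_b = rng.normal(loc=50, scale=10, size=n_per_group)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_trials:.3f}")  # close to 0.05
```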

However, the more comparisons you make, the more likely you are to get some that show a difference which isn’t real just by chance. There are statistical tools for correcting for this, but they do not appear to have been used in this study. Thus, comparing multiple measures (somatic cell counts, milk score, palpation score, etc) on multiple days is likely by random chance alone to lead to some difference that looks significant even though it isn’t. For such a difference to be accepted as real, it either needs to be evaluated by proper statistical methods or at least be seen repeatedly in multiple studies by different investigators.
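A rough back-of-the-envelope sketch shows why this matters. Assuming, for the sake of argument, that the comparisons are independent (in reality the outcome measures are likely correlated, so this somewhat overstates the inflation), the probability of at least one spurious “significant” result grows quickly with the number of comparisons, and a simple Bonferroni correction illustrates how much stricter each individual test would need to be. The specific counts of outcomes and time points below are assumptions for illustration, not figures from the paper.

```python
# Illustrative arithmetic: family-wise false-positive risk under multiple comparisons,
# assuming independent tests, plus the corresponding Bonferroni-corrected threshold.
alpha = 0.05

for n_comparisons in (1, 6, 24):  # e.g. one outcome, 6 time points, 4 outcomes x 6 time points
    p_any_false_positive = 1 - (1 - alpha) ** n_comparisons
    bonferroni_threshold = alpha / n_comparisons  # per-test threshold keeping overall error near 5%
    print(f"{n_comparisons:2d} comparisons: "
          f"P(at least one false positive) = {p_any_false_positive:.2f}, "
          f"Bonferroni per-test threshold = {bonferroni_threshold:.4f}")
```

With 24 uncorrected comparisons, the chance of at least one false positive is roughly 70%, which is why a single isolated “significant” time point should not be taken at face value.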

If a large number of studies are done without appropriate correction for making multiple comparisons between groups, and if each one shows a couple of significant differences but these are not consistently the same measurement in every study, then it is likely that each study found a couple of false differences by chance. Yet in alternative medicine, such differences, even if only found in a couple of studies without appropriate statistical methods, are often cited as proof of a treatment effect. This is misleading. It allows one to cite many papers purporting to show an effect of a treatment, which conveys an impression of scientific legitimacy even if the difference shown by each paper is not real and there is no consistency among the papers as to what exactly the effect is.

Another methodological concern is the apparent use of unplanned subgroup analysis. This means that after the study data was collected, the authors divided the study groups into subsidiary groups (e.g. mastitis cases with positive bacterial cultures and with negative bacterial cultures) and then compared the responses of these subgroups to the different treatments. As with multiple outcome measures, subgroup comparisons can lead to false conclusions without appropriate statistical controls and careful interpretation of the results.

The study was published in the Journal of Dairy Research, which is a reputable scientific publication.

So overall, what can we conclude from this paper? Does it demonstrate “an effectiveness of the homeopathic treatment strategy as a ‘regulation therapy’ stimulating the activity of immune response and resulting in a long-time healing” as the authors conclude? Not at all. It is unclear exactly what this statement means in any case, but it is certain that the weaknesses in the design and execution of the study and the data analysis do not allow a great deal of confidence in the hypothesis that homeopathy is as good as or better than antibiotic treatment, or even superior to no treatment, for mastitis in dairy cattle. In cases without bacterial infection and with only mild or moderate localized disease, which are likely to get better without treatment anyway, homeopathy was not demonstrated to be any more or less effective than the antibiotic therapy used in this study, which was itself less effective than other studies have reported. By itself this would be pretty weak evidence for using homeopathy to treat mastitis.

Again, most scientific studies have such flaws or weaknesses, and this one is not exceptionally inferior in design or execution. However, when the specific flaws of the study are considered in conjunction with the weaknesses of the theory underlying the homeopathic approach and the overall failure of decades of scientific research to show any benefit to homeopathic treatment, the results are essentially meaningless. To overcome the theoretical issues and the failure of any evidence of effectiveness to accumulate despite the amount of research so far done on homeopathic remedies, a quite dramatic and unequivocal result would be necessary to raise any reasonable question of whether homeopathic treatment might be beneficial. This study provides no such results.

Could studies such as this be done better, with fewer methodological flaws? Sure. Should they be? Do we really need still more negative or inconclusive studies of homeopathy before we are allowed to judge it a useless therapy without being accused of unscientific closed-mindedness? Or must we continue to test every possible hypothesis that has its advocates indefinitely? If we are never allowed to declare an idea dead, then what purpose does science serve?


20 Responses to Another Homeopathy Study: Mastitis in Dairy Cows

  1. Suzanne says:

    Considering there are still people in the US that sincerely believe the earth is flat, no, there will never be enough evidence to satisfy some people.

    I am so ready for homeopathy to be declared dead. Thanks for this in-depth analysis. Though I’m sure homeopathic believers will only hear “bla bla bla positive result! bla bla”.

  2. Rita says:

    Poor bloody dairy cows: it was said at a recent conference of the Universities’ Federation for Animal Welfare that the (disgraceful) welfare level of dairy cows had not improved in the last ten years: mastitis is a major cause of suffering and failing fitness: “…life expectancy is falling below three lactations, cows having to be culled for infertility, mastitis and lameness” (John Webster, Animal Welfare: Limping towards Eden, Blackwell 2005, p. 109). Haven’t we done enough to these unfortunate animals without subjecting them to treatments practically bound to fail, when there is an (only partially, alas!) effective treatment available? This really is not fair!

  3. Rita says:

    Mice, not cows, but, on a recent homeopathy study, SB says this:

    “There must be a less expensive and less cruel way to kill off 142 mice. D-Con is cheaper and spring loaded traps are quicker. Given that homeopathy is divorced from reality, this is more needless cruelty to animals than a reasonable scientific study.”

    Can nothing be done about this state of affairs?

  4. ellen says:

    jan 15 20111

    skeptvet–this just aired in canada, but you can view it on youtube:

    CBC Marketplace – Homeopathy: Cure or Con? Part 1 of 2
    http://www.youtube.com/watch?v=kFKojcTknbU

  5. ellen says:

    jan 15 20111

    oops. that should be 2011. lol

  6. ellen says:

    here’s are a few more links-

    CBC News – Health – Marketplace examines homeopathy
    http://www.cbc.ca/health/story/2011/01/14/f-homeopathy-naturopathic-marketplace.html?ref=rss

    CBC Marketplace Investigates Homeopathy: A Review
    http://sciencebasedtherapy.wordpress.com/2011/01/15/cbc-marketplace-investigates-homeopathy-a-review/

  7. ellen says:

    and just for laughs…

    Shocked Canadian Homeopaths and Homeopathy Supporters Ask, “Is the CBC Marketplace Show Infiltrated by Pharmaceutical Company Sponsored Skeptics?”

    http://homeopathyresource.wordpress.com/2011/01/06/shocked-canadian-homeopaths-ask-is-the-cbc-marketplace-show-infiltrated-by-pharmaceutical-company-skeptics/

  8. ellen says:

    here’s are a few more links-

    another oops. here are a few more links.

  9. ellen says:

    it appears that homeopaths are on the warpath. 😉

    The War on Natural Health Freedom
    http://www.freezepage.com/1294921448XMTNBEWKAD

  10. skeptvet says:

    Great links, Ellen, thanks!

  11. phayes says:

    Excellent article.

    “Could studies such as this be done better, with fewer methodological flaws? Sure. Should they be? ”

    No – I’d say probably this one, and certainly all ultradilute homeopathy CTs, are pure pseudoscience (pathological science at best), because neither it nor even much better ones are actually in principle capable of providing support for the hypothesis that homeopathy works. IOW, I believe there’s a fundamental methodological flaw in trying to test a bizarre, physics/chemistry/biology-textbook-mocking hypothesis with a CT (or CTs) alone, which no amount of skill and care in trial design and conduct can fix:

    “clinical trials published concerning hypotheses that are implausible in themselves and that can only be true if mountains of evidence against them are all mistaken,”

    “Statistical significance does not mean the hypothesis is true, only that random chance by itself is unlikely to explain the difference seen.”

    Those two observations taken together lead me to conclude that if ever a large and very well done CT of some homeopathic remedy did produce positive results, of course it would be reasonable to rule out random chance as an explanation, but of all the other possible explanations – from the vaguely plausible to the extremely implausible – the homeopathic efficacy hypothesis is among the most extremely implausible. So, in Bayesian terms, its posterior probability would’ve been raised by the surprising results but so equally would those of a myriad other possibilities – many if not most of them (a prank/fraud, equipment error, human error, …) with much larger prior probabilities.

  12. Soroush Ebrahimi says:

    BBC2’s QED programme circa 1992 showed an experiment where a herd of cows was randomly split and each group was given a trough. Blindly, to one a homeopathic remedy was added and to the other just plain water. The incidence of mastitis was then considered after some weeks. The herd that had had the homeopathic remedy suffered only one case – the other had double-figure incidents. Hence Q E D.
    There are many homeopathic vets who treat large herds of cows, sheep, etc. homeopathically, and you cannot even use the ‘placebo’ excuse as there is no way of fooling a herd of cows.

  13. skeptvet says:

    If you will read the Case Against Homeopathy and my response to the AVH, The Evidence For Homeopathy-A Close Look, you will see that I have read many published studies of homeopathy in food animals, and they are not strong or compelling evidence of beneficial effects. There are many sources of error that good scientific studies are intended to mitigate, and generally the large animal homeopathy literature fails to account for most of these.

    In terms of placebo effects, it is not a matter of “fooling a herd of cows.” It is a matter of not having subjective measures of response to treatment that are made by people who know which treatment the cows are receiving. If a farmer knows one group of cows is getting a treatment he believes is effective and another group are not, he is likely to treat and evaluate the two groups very differently, which leads to false conclusions. When such effects are controlled for by proper study design, the apparent effects of homeopathy go away.

  14. v.t. says:

    Soroush, all that shows (without actually looking at the study details) is that the so-called experiment was used as a homeopathic preventative measure – which homeopathy obviously did not work. So, if it can’t even prevent disease, how’s it going to actually treat it?

    Homeopaths will always find a way to “make” homeopathy “work”, including contradicting their own claims and their own experiments and when challenged will loudly exclaim the rest of us simply do not understand how it works. Ironic, isn’t it?

  15. Ann says:

    Skeptvet above wrote…”If a farmer knows one group of cows is getting a treatment he believes is effective and another group are not, he is likely to treat and evaluate the two groups very differently, which leads to false conclusions.”

    One wonders what Skeptvet means by “this leads to false conclusions”!

    Surely if what the farmer wants is to ‘prevent’ mastitis developing in her/his cows (a painful condition mostly treated by antibiotics) by using Homeopathy, whether or not her/his behaviour changes towards the cows when administering it, the result is that most of the cows do not develop mastitis – surely if the result is positive for the cows the science doesn’t need to be proved!!

    I grew up with homeopathy used in my family home…then in my teens & twenties thought it rubbish & used conventional medicine, i.e. steroids for eczema etc – but this did not cure the problem. I returned to homeopathy via an allopath/homeopath Dr (there are plenty of them). For me it’s been successful for preventing & getting rid of many ailments – I’ve used it for over 40 years.

    When I broke my hip (I have osteoporosis likely due to using steroids) it was repaired by a surgeon – however I took Comfrey/knitbone 6c to help it heal. It is irrelevant to me whether or not my belief in homeopathy is what makes it work for me – the facts are I made a very quick recovery & was able to drive again 4 weeks after a hip operation.

    Generally a homeopathic Dr would advise patients/clients to eat healthily & exercise also…it’s a holistic treatment where the client/patient takes on responsibility for their health – rather than relying on pill popping.

    Like me most of my friends (a few of whom are scientists) use it for their families & pets also. We are intelligent people who use homeopathy because it works – as it’s inexpensive it also keeps down medical & vet costs.

  16. skeptvet says:

    Actually, the fact that the group getting homeopathy may not get as much mastitis doesn’t mean the homeopathy is responsible. That’s exactly the point of blinding experiments. If people know whether they are getting (or giving their animals) a treatment or a placebo, they behave differently in ways that can make useless therapies look like they work. The farmers may clean the udders more carefully, keep the stalls cleaner, milk the cows differently, or do any number of things that can affect mastitis risk all because they know which group is getting the treatment and which isn’t. This is a well-established fact of human behavior and is the reason blinded studies are conducted and unblinded studies are not very useful.

    The fact that you get better after you take a remedy doesn’t prove the remedy made you better. If you get a bunch of people to take the remedy and another bunch just like them to not take it, and nobody knows who’s getting what, and the people on the remedy still get better faster or more often than those not taking it, then you have some evidence it works. When you try this with homeopathy, it almost never works. The only reasonable explanation for this is that it is a placebo and we are fooling ourselves into thinking it works, just like we did for thousands of years with bloodletting and many other remedies. The hard part, of course, is that people can’t help believing their own experiences no matter how clear the evidence is that they aren’t reliable. That’s why nonsense like homeopathy persists.

  17. Sukh johal says:

    Is there any homeopathic medicine which helps to prevent mastitis, just the same as a vaccine?

  18. skeptvet says:

    Absolutely not.

  19. Siraj says:

    All the trials done against homeopathy proving its inefficiency are done by allopaths or their proponents. This in itself is a very strong lobby bias and a preconceived notion. Moreover, allopathic trial results have many times contradicted the results in the marketplace. Rampant unethical practices have been reported, wrapped under sugarcoated results and beautiful graphs. A recent pioglitazone study revealed several deaths due to cardiovascular failure during the trial, but the researchers carpeted the findings because big Pharma were funding the..

  20. skeptvet says:

    Nonsense. Many of the studies done that show no benefits are done by homeopaths. And by your reasoning, any study done by homeopaths that is positive should be ignored because of their anti-allopathic bias. This is not how science works.
