I recently ran across a fantastic web site, Testing Treatments, which explains clearly and simply how we use science to test our medical treatments. For anyone not already very familiar with this process, that site will explain why a lot of the evidence people offer here for their favorite therapies isn’t really evidence we should trust. One major problem with anecdotes and other kinds of low-quality evidence is the influence of the placebo effect. Contrary to what many people believe, this is just as big a source of error in veterinary studies as in human research.
One of the most entrenched fallacies used to defend anecdotal evidence as validation for veterinary treatments is the idea that any perceived effect must be real because the placebo effect doesn’t work in animals. While our veterinary patients don’t, as humans do, experience a perception of relief based solely on their beliefs or expectations when they aren’t actually getting better, placebo effects can still fool us when we try to decide whether our therapies are working in our pets.
Improvements associated with human contact, better care when involved in a clinical trial, operant conditioning, the natural course of disease, and many other factors do occur, and they can fool us into thinking an ineffective therapy is working. These are aspects of the cluster of errors usually labeled the placebo effect.
In addition, there is good evidence for what is usually called the “caregiver placebo effect.” Since many of the measures of our therapies’ effects are subjective and assessed by owners and vets, not reported directly by the patients, the beliefs and expectations of owners and vets can often create an impression of improvement where none really exists. This is especially true for measures that are difficult to assess objectively and consistently, such as pain and nausea.
For these reasons, a placebo control is essential in any veterinary clinical study if we are to have confidence in the apparent effects of the treatment being tested. This is not controversial, except sometimes for folks pushing therapies they believe are effective but that haven’t been tested, or that have failed testing, in placebo-controlled trials.
However, I have always assumed that with proper placebo controls, a clinical trial would usually be able to detect a real difference in outcome when the treatment being tested truly worked. Even though placebo effects would make the control group appear to be getting better, they would affect the treatment group to the same degree. And the real effects of a treatment that works ought to make the improvement in the treatment group significantly greater than that in the placebo group. Yet a cool new study in cats suggests this may not be true, and that caregiver placebo effects may be masking the real effects of some treatments.
M.E. Gruen, E. Griffith, A. Thomson, W. Simpson, and B.D.X. Lascelles. Detection of Clinically Relevant Pain Relief in Cats with Degenerative Joint Disease Associated Pain. J Vet Intern Med 2014;28:346–350
The authors in this study were interested in whether the effect of a pain reliever in cats with arthritis could be better identified by looking for recurrence of symptoms after the medication was stopped than by looking for improvement in symptoms while the medication was being given. They compared meloxicam, an NSAID for which there is already good evidence of efficacy, to an identical placebo.
The study selected cats with significant symptoms of arthritis and used owner surveys to evaluate symptom severity. All cats got a placebo for 2 weeks, and the investigators and owners knew they were getting only placebo. This was to accustom the cats and owners to giving the medication and monitoring symptoms. Then the cats were randomly assigned to two groups, one getting meloxicam and one placebo for three weeks. During this time, both owners and investigators were blinded to which treatment each cat was getting, which reduced the influence of bias. Finally, the cats on meloxicam were shifted to placebo, but only the investigators knew this, not the owners.
During the treatment period, all cats appeared to get better according to the owner evaluations. This is consistent with placebo effects. The cats on meloxicam did not improve significantly more than those on placebo, which would seem to suggest that the meloxicam wasn’t working. However, when the cats on meloxicam were switched to placebo, without the owners knowing it, these cats got significantly worse while the placebo group did not change. This indicates that the meloxicam was having a real effect; it was simply swamped during the treatment period by the owner placebo effect, which influenced the results in both groups.
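The masking effect described above is easier to see with numbers. The toy model below is my own illustration, not the authors’ analysis, and every figure in it (baseline score, drug effect, caregiver bias) is invented for the sketch. The idea is that owner-reported scores mix the cat’s true pain with a fixed “caregiver bias” that appears whenever owners believe their cat may be on treatment, so both groups look better during the blinded phase, while withdrawing the drug from blinded owners isolates the true drug effect.

```python
# Hypothetical toy model of the caregiver placebo effect.
# All numbers (scores, effect sizes) are invented for illustration only;
# real owner-reported scores would also carry substantial noise.

TRUE_BASELINE = 10   # assumed owner-reported pain score before the trial
DRUG_EFFECT = 2      # assumed real pain reduction from the active drug
CAREGIVER_BIAS = 3   # assumed apparent improvement owners report once
                     # they believe their cat may be on treatment

def reported_score(on_drug: bool, owner_expects_treatment: bool) -> int:
    """Owner-reported pain = true pain minus caregiver bias (lower = better)."""
    true_pain = TRUE_BASELINE - (DRUG_EFFECT if on_drug else 0)
    bias = CAREGIVER_BIAS if owner_expects_treatment else 0
    return true_pain - bias

# Phase 1: open-label placebo run-in (owners know it is only placebo)
baseline = reported_score(on_drug=False, owner_expects_treatment=False)

# Phase 2: blinded treatment period
placebo_group = reported_score(on_drug=False, owner_expects_treatment=True)
drug_group = reported_score(on_drug=True, owner_expects_treatment=True)

# Phase 3: drug cats switched to placebo; owners still blinded
drug_group_withdrawn = reported_score(on_drug=False, owner_expects_treatment=True)

print("baseline:", baseline)                          # 10
print("placebo group on treatment:", placebo_group)   # 7  (improved)
print("drug group on treatment:", drug_group)         # 5  (improved)
print("drug group after withdrawal:", drug_group_withdrawn)  # 7 (worsened)
```

In this sketch both groups “improve” during the blinded phase, and the between-group gap (2 points) is smaller than the shared placebo shift (3 points), so in noisy real-world scoring it can fail to reach significance. The within-cat worsening after withdrawal (5 back to 7), by contrast, recovers the assumed drug effect directly, which is the logic of the study’s design.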
The authors do a good job of assessing the limitations of their own study, which is an important element of the discussion section of any scientific paper. The investigators were not blinded during the last three weeks, so potentially they could have influenced the owner evaluations in some way. There are also some weaknesses in how the diagnosis of arthritis was established, how the severity of disease was evaluated, and some other factors. But such limitations occur in every study, which is why no single paper can be definitive evidence. This was still a clever and illuminating project, and if it holds up when replicated, it may lead to a significant change in how placebo effects are controlled for in veterinary research.