WVC 2016: Clinical Audit

Here is my presentation on clinical audit.

“Clinical audit is a process used by health professionals to assess, evaluate and improve patient care…Clinical audits can be used to compare current practice with the best available evidence. It provides a methodology to assess if the best evidence-based medicine is being applied within the practice.”1

Clinical auditing is a quality improvement process in clinical practice that seeks to establish guidelines for dealing with particular problems, based on documented evidence when it is available; to measure the effectiveness of these guidelines once they have been put into effect; and to modify them as appropriate. It should be an ongoing upward spiral of appraisal and improvement.2

The most obvious and significant reason to conduct clinical audits is to improve patient care. Though it is a difficult idea to prove empirically, it is likely true that formal processes to evaluate patient outcomes, implement evidence-based changes in clinical practices, and then assess the impact of these changes on patient outcomes should improve patient care. There is some research evidence to support this concept, though the effectiveness of clinical audit depends on many factors, such as the baseline level of performance, the details of how changes are implemented, and others.3

In addition, clinical audit can contribute to job satisfaction for all members of the veterinary healthcare team. Taking active steps to assess the effectiveness of one’s practices and implement change and then seeing the results of those efforts can give veterinarians and veterinary nurses a greater enthusiasm for their work, and a sense of confidence in their clinical decisions.

Clinical audit can also be used as a means to strengthen client confidence in a practice and reassure clients about the risks of specific interventions. Knowing that there is an established and ongoing process of quality control in place promotes trust in the doctors and the practice. And if clients have concerns about the risks of surgery or other interventions, clinical audit enables us to provide them with specific and relevant data to support fully informed consent and reassure them about the procedure and our commitment to safe and effective care.

There are two basic types of clinical audit, though the specific goals and procedures are tailored to each setting and question. Standards-based audits are those which compare current practice and results to some designated standard. The standards can be goals set specifically for the practice or derived from published clinical guidelines or other external, evidence-based sources.

A standards-based audit could evaluate how often antibiotics prescribed for urinary tract infections turned out to be appropriate based on subsequent culture and sensitivity results, with an eye to adjusting empirical antibiotic choices if necessary. Or such an audit could be focused on ensuring a certain percentage of newly-diagnosed cases of feline chronic kidney disease were fully staged according to the published IRIS guidelines.5 Or an audit could evaluate patient outcomes, such as the proportion of spay surgeries experiencing seroma or other complications, with the goal of optimizing practices to minimize these complications.

Critical Incident or Significant Event audits are structured discussions that occur in response to some identified error or process failure or simply an undesired outcome. Such events can be clinical, such as an anesthetic death or patient escape in the hospital, or they can be procedural, such as a failure to submit a urine sample for culture or to report biopsy results in a timely manner.

The goal of a significant event audit is not to assign blame but to determine if any feature of hospital policy or procedure might have increased the risk of such events and how this risk might be reduced through any process changes.

The specific steps in a clinical audit will be somewhat different in every setting. However, there are some general principles that apply broadly to the audit process. For audits to result in meaningful, effective change in practices, the entire healthcare team must be convinced of the value and legitimacy of the process. This requires participation and input by all team members, rather than an authoritarian “top-down” approach. Having clear and transparent procedures agreed upon in advance for how audits are to be conducted and a regular, consistent execution of these procedures will facilitate creating an institutional culture in which audits are seen as a routine and necessary part of patient care.

The general steps of the audit process are illustrated in Figure 1.


Figure 1. The process of clinical audit (from Dunn, 2012)4

The process begins by identifying a question or area of concern to investigate. All members of the team should be invited to offer questions for consideration, and an open discussion involving the entire team should be held to decide how to structure and prioritize them. Individuals or a small team responsible for a particular audit should then identify guidelines or standards to be used as benchmarks and develop a plan for collecting appropriate data. This plan should again be discussed with all members of the team who will be involved to make sure it is appropriate and not unduly complex or burdensome.

Once the data is collected, it should be analyzed to compare the results to the designated standards. Audits are not intended as clinical research projects, and no statistical analysis is necessary or appropriate. The goal is to compare the results in the practice setting clearly and directly with the designated guideline or standard in a way that identifies potential areas for improvement.
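The comparison step really can be this simple: an observed percentage held up against the designated standard. A minimal sketch in Python, using the CKD-staging example from earlier; the case counts and the 90% target are illustrative assumptions, not figures from the text:

```python
# Hypothetical standards-based audit calculation. The case counts and the
# 90% target are illustrative assumptions, not figures from the text.

def audit_result(cases_meeting_standard, total_cases, target_pct):
    """Compare observed performance (in percent) to a designated standard."""
    observed_pct = 100.0 * cases_meeting_standard / total_cases
    return observed_pct, observed_pct >= target_pct

# e.g. 42 of 50 newly diagnosed CKD cats fully staged per IRIS guidelines,
# against a practice target of 90%
observed, met = audit_result(42, 50, 90.0)
print(f"Observed: {observed:.1f}% vs target 90.0% -> standard met: {met}")
```

No statistical analysis is involved, consistent with the point above: the output is a clear, direct comparison the whole team can read and act on.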

Since the overall purpose of audits is to facilitate improvement, when an audit identifies practices that do not meet the designated standard, the entire team should again be involved in formulating a plan for implementing changes to improve performance. Once these are well established, a repeat of the audit cycle can be used to assess whether the changes have had the desired effect.

Though audits are not intended to be a form of controlled clinical research, they do have the potential to produce data useful to those outside the practice in which they are conducted. The results are specific to the environment in which the audit was conducted and so cannot necessarily be generalized to other settings. However, in the evidence-poor environment of veterinary medicine, clinical audit can at least suggest hypotheses about best practices or the impact of clinical practice guidelines, and provide rough estimates of the frequency of some outcomes in various settings. Therefore, sharing the results of clinical audits is to be encouraged. This also has the added benefit of making clients aware of the active quality improvement processes in place, helping to build confidence in the practice.

There are a number of barriers to conducting clinical audits. The biggest in the U.S. is the lack of awareness of the audit process. Clinical audit is well-established in the human healthcare system in various forms. And in the U.K., veterinary practices are required to have some formal mechanism for assessing outcomes and improving performance, which can be satisfied by a clinical audit mechanism. However, there is not widespread awareness of the value or methods of clinical audit among veterinarians in the U.S.

Other practical barriers include time and other resources, resistance to change from team members, and concerns about the impact of identifying errors or undesirable outcomes on employee performance evaluations or client confidence. All of these are limitations that have been successfully addressed in the implementation of clinical audit in human medicine and that can be overcome in the use of this valuable tool in veterinary medicine.

RESOURCES

The Royal College of Veterinary Surgeons (RCVS) Knowledge group has published a useful Clinical Audit Toolkit that is freely available online. The references below also include several useful guides to the implementation of clinical audit in veterinary practice.2,4,6


  1. Cockroft P, Holmes M. Handbook of Evidence-Based Veterinary Medicine. Oxford, England: Blackwell Publishing; 2003.
  2. Viner B. Introducing clinical audit into veterinary practice. PhD dissertation, Middlesex University, London, England; 2006.
  3. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006;(2):CD000259.
  4. Dunn J. Clinical audit: a tool in defense of clinical standards. In Practice. 2012;34:167-169.
  5. International Renal Interest Society (IRIS). IRIS staging of CKD (modified 2013). Available at: http://www.iris-kidney.com/guidelines/staging.aspx. Accessed October 8, 2015.




WVC 2016: What You Know that Ain’t Necessarily So

Here is my presentation on the evidence for a few common veterinary practices.

Experienced clinicians often have an enormous knowledge base about the health problems their patients present with and the available diagnostic and therapeutic options. This knowledge is built over time from a variety of sources: basic pathophysiology and clinical information learned in school; practice tips and pearls imparted by professors and speakers at continuing education meetings; review articles and primary research papers in veterinary journals; textbooks; advice from mentors and colleagues in practice; and of course clinical experience with previous cases.

This knowledge base smoothly and efficiently informs the day-to-day activities of clinical practice. When cases with familiar features are seen, the appropriate diagnostic and treatment steps often come to mind automatically, or with minimal prodding. Unlike students and new graduates, experienced clinicians often have little sense of dredging up facts committed to memory and more of a sense of simply knowing things. One of the hallmarks of expertise is that the collating of observations and relevant knowledge into a coherent picture of the problem and a plan becomes less deliberate and more automatic with time.1

While this process, which is a universal and automatic feature of how the human brain functions, leads to greater efficiency than the explicit, conscious use of algorithms and reference sources employed by less experienced practitioners, it has a number of potential limitations. One problem, for example, is that the knowledge one relies on often can no longer be connected to its original source. We often simply know things without being aware of how we came to know them. This limits our ability to judge the reliability of the source of our knowledge. In fact, such established, automatic knowing often generates a sense of certainty greater than that which accompanies deliberately seeking and finding information.2 We are more likely to trust what we already know, even if we don’t remember where we learned it, than we are to trust what we have just discovered after searching a trustworthy source of information.

There are also a large number of well-characterized cognitive biases and sources of errors inherent in how our brains acquire, process, store, and utilize information that can lead us astray.3 These are more likely to create error when our reasoning is automatic rather than deliberate, as it necessarily must be in an efficient clinical environment that is not devoted primarily to teaching.

One of the major functions of evidence-based veterinary medicine (EBVM) is to provide tools and resources to make the knowledge base we employ more reliable. This includes generating better quality information through research and facilitating the integration of that information into clinical decision making. When the relevant evidence is of high quality, this can add confidence to our decisions.

More commonly, when the evidence has significant limitations, we may end up with less confidence in our knowledge than we would have without an explicit evaluation of the evidence. However, this is not as undesirable an outcome as it may appear. A clear, accurate understanding of the uncertainty associated with a particular practice protects us, and our patients, from the dangers of acting with unjustified confidence. We are more likely to weigh thoughtfully the risks and benefits of action in the context of an individual case when we understand the degree of uncertainty about our ability to predict or manipulate the patient’s condition.

Being clear about the sources of our knowledge, and the appropriate level of confidence to have in them, also aids in fulfilling our duty to provide clients with informed consent. Surveys of veterinary clients have shown that they value truthfulness highly in the information we provide to them, and that they want to be made aware of the uncertainties involved in the treatment of their animals.4-5 Only if we understand the reliability and limitations of the information we employ in making our recommendations can we give clients the knowledge and guidance they need to make informed choices.

The purpose of these lectures is to examine some widespread or long-standing beliefs and practices in small animal medicine and assess their evidentiary foundations. In some cases, this may clearly validate or invalidate these beliefs. In most cases, however, such an exploration will likely not lead to greater certainty but to a clearer understanding of the degree of uncertainty associated with these beliefs. Hopefully, this will be useful in making clinical decisions and in communicating with clients. The exercise may also be useful in illustrating how to make use of the research literature in establishing and maintaining the knowledge base that informs one’s clinical practice.

Some of the topics that will be covered include:

  1. Pheromone therapy for behavioral problems in dogs and cats
  2. Anti-histamines for treatment of atopic dermatitis in dogs
  3. Steroids for anaphylaxis and acute allergic reactions


  1. Benner, P. From novice to expert. Amer J Nursing, 1982; 82(3):402-7.
  2. Burton, R. On Being Certain: Believing You’re Right Even When You’re Not. New York: St. Martin’s Press. 2008
  3. McKenzie, BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. J Amer Vet Med Assoc. 2014;244(3):271-276.
  4. Mellanby RJ, Crisp J, De Palma G, et al. Perceptions of veterinarians and clients to expressions of clinical uncertainty. J Small Anim Pract 2007;48:26–31.
  5. Stoewen DL, et al. Qualitative study of the information expectations of clients accessing oncology care at a tertiary referral center for dogs with life-limiting cancer. J Am Vet Med Assoc. 2014;245(7):773-83.





WVC 2016: The Laser Craze

Here are my notes and slides from my presentation on low-level lasers.

Laser therapy is, at its simplest, the application of light to living organisms to obtain health benefits. However, there is a bewildering amount of detail behind this simple idea. The wavelength and power of the laser used, the location and duration of exposure, the number of treatments, the conditions for which treatment might be useful, and many other variables are subject to extensive debate. Generally, low-level or “cold” lasers (really a misnomer, since many do generate heat during use) utilize wavelengths between 600 and 1000 nm and power levels from 5 to 500 mW. More powerful lasers are used in surgery, but these function primarily to cut or cauterize tissue, break up uroliths, or otherwise cause controlled damage. Low-level lasers are intended to have biological effects on tissue, known as photobiomodulation, without causing damage.
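As a quick illustration of those parameter ranges, here is a sketch that flags whether a device's stated wavelength and power fall within the low-level range described above. The function name and the example devices are assumptions for illustration; this is only the rough definition given here, not a regulatory classification.

```python
# Check whether stated device parameters fall within the "low-level" range
# described above (600-1000 nm, 5-500 mW). This reflects only the rough
# definition in the text, not any regulatory classification.

def is_low_level(wavelength_nm, power_mw):
    return 600 <= wavelength_nm <= 1000 and 5 <= power_mw <= 500

print(is_low_level(810, 100))      # a typical therapy unit -> True
print(is_low_level(10600, 30000))  # a surgical CO2 laser -> False
```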

The FDA classifies lasers, from Class 1 to Class 4, based primarily on their potential to harm the user or the patient. Low-level laser therapy typically involves Class 3 lasers, though more powerful Class 4 devices are sometimes used for non-surgical therapy.

The most common recommended uses of low-level laser therapy are to facilitate wound healing, reduce inflammation, and improve musculoskeletal pain or disease. However, proponents of laser therapy, and companies selling therapeutic lasers, often claim or suggest that low-level lasers can treat nearly any medical condition. Lasers have been promoted for use in specific clinical problems (allergic skin disease, gingivitis, bacterial and viral infections, envenomation, etc.), vaguely defined general health improvement (enhancing immune function, normalizing metabolic function, “energizing” cells, etc.), and unscientific nonsense (fixing “Qi-stagnation”1). Some practitioners recommend using laser as a means of stimulating acupuncture points, which adds the question of the efficacy of acupuncture and the selection of such points to the original question of the potential utility of laser therapy itself.

Sorting through the claims made for lasers, from the reasonable to the ridiculous, is challenging due to the heterogeneity of lasers and therapeutic approaches employed and the complexity and inconsistency of the available research on medical lasers.

The principles of evidence-based veterinary medicine (EBVM) can help us sort through the evidence concerning low-level laser therapy and try to identify the strengths and limitations of the evidence for specific potential uses. Though there are various ways to organize our evaluation of the existing evidence, it is generally agreed that some sort of hierarchy of evidence is appropriate, with the most reliable types of evidence at the apex and the most available, but least reliable, at the base. Figure 1 illustrates one way of visualizing such a hierarchy of evidence.


Figure 1. Hierarchy of evidence. (CAT-critically-appraised topic, RCT-randomized clinical trial, CE- continuing education)

Within the levels of this hierarchy, there are multiple types of evidence which themselves have different levels of reliability. Randomized clinical trials, for example, provide better evidence than case reports, and studies in the species one intends to treat are more likely to predict outcomes than studies in another species. However, this scheme provides a convenient overview of one useful way to think about the types of evidence available to guide decisions about the use of lasers.

Systematic reviews of multiple clinical trials, with detailed analysis of the limitations of each trial and an overall assessment of the quality of evidence, are the most trustworthy source of evidence on any clinical intervention. Unfortunately, such reviews are often not available in veterinary medicine, and there are none for low-level laser therapy.

Controlled clinical trials of naturally occurring disease in the target species are the next most reliable form of evidence, and there appear to be no high-quality examples available for veterinary patients. Some studies in clinical patients do exist, though they have significant limitations.

For example, a pilot study adding laser therapy to standard treatment for dogs with acute intervertebral disk disease suggested laser treatment might have shortened time to ambulation after surgery.2 However, the absence of randomization, blinding, and placebo control limits the strength of this conclusion, and another similar study did not report a clinical benefit.3

There have been many experimental studies of laser therapy for a variety of conditions in veterinary species; however, the methodological quality is variable and the results are mixed. Some studies of wound healing, for example, show possible benefits4 while many others do not.5-9 Small studies looking at lasers for skin disease have also found mixed results (some beneficial effect on non-inflammatory alopecia10, no apparent effect on atopic pedal pruritus11), though again there are significant methodological limitations in these studies.

Two systematic reviews of lab animal studies are available. One found that there were some potentially beneficial effects in bone healing models, though there were few studies to review.12 The other reviewed in vitro and animal model studies relevant to wound healing and concluded rather strongly that, “these studies failed to show unequivocal evidence to substantiate the decision for trials with [low-level laser therapy] in a large number of patients…We conclude that this type of phototherapy should not be considered a valuable (adjuvant) treatment for this selected, generally therapy-refractory condition in humans.”13

The evidence, as usual, is much more voluminous for use of lasers in human medicine. There are literally hundreds of systematic reviews available for specific conditions, often with several different reviews of the same set of studies for particular indications. There is great inconsistency in the results. Most reviews conclude that the evidence is not strong enough to support definitive statements about efficacy. Some reviews do show some positive results, with weak to moderate evidence supporting a benefit for particular conditions, though in some cases other reviews of the same evidence reach different conclusions.

There is an enormous body of in vitro evidence showing effects of laser light on various tissues. It is clear that laser has significant biological effects in such models, and some of the effects seen could potentially have clinical benefits in living patients.

Finally, there are, of course, innumerable anecdotes regarding the effects of laser therapy, and some laser advocates are absolutely convinced by their own experiences that low-level laser is a powerful therapeutic tool. However, such anecdotal evidence can be found for most every intervention available, and equally strong anecdotal support by thousands of people over centuries has existed for many therapies that scientific research has shown conclusively to be ineffective, such as bloodletting and homeopathy. Therefore, the value of such evidence is limited to suggesting hypotheses for further research, and it cannot validate any claims about laser therapy.

Most experimental and clinical studies of low-level laser therapy have found few adverse effects. Inappropriate use of higher-power lasers or excessive duration of treatment can result in heating of tissues and thermal damage in some cases. There are also potential risks to operators of laser equipment. And in the absence of research evaluating the long-term effects of ongoing laser treatment, the potential for some of the many biological effects of light on cellular metabolism to result in harm is unknown.

Safety guidelines, from government agencies, manufacturers, and the medical literature, are available and should be scrupulously followed.

Lasers have significant measurable effects on living tissues in laboratory experiments, so it is plausible that they might have clinical benefits. The extensive research done in humans, however, has so far only found limited evidence to support the use of lasers in a few conditions, and high-quality controlled studies often contradict the positive findings of initial, small and poorly controlled trials.

The experimental evidence in veterinary species is mixed and low quality, and there are no high-quality published clinical trials validating laser therapy for specific indications. It is possible that high-quality research may one day validate some of the claimed benefits for laser therapy. However, at present the best that can be said about this intervention is that it appears promising for some conditions, such as wounds and musculoskeletal pain.

The growing popularity of lasers is based largely on anecdotal evidence and economic factors. Laser units are being aggressively marketed to veterinarians, often using unsubstantiated claims of clinical benefits. Laser therapy represents a potential source of income for practitioners and, of course, for laser device manufacturers. It appears likely that this profit potential contributes to an enthusiasm for laser therapy not matched by the quality of scientific evidence for its benefits to patients.

Veterinary therapies often lack robust high-quality clinical trial evidence to support their use, and this is not itself a reason to avoid these therapies. However, when employing interventions that have not yet been rigorously demonstrated to be safe and effective, we have a duty to acknowledge the limitations of the evidence. Clients should be fully informed about the uncertainties concerning the effectiveness of laser therapy and the potential for unforeseen effects. Established therapies with stronger evidence identifying their risks and benefits should take precedence over promising but unproven therapies like laser treatment. And those interested in promoting low-level laser, particularly those marketing laser equipment and training, should proportion their claims to the available evidence and assume some responsibility for developing the evidence base further so that practitioners and animal owners can make better-informed decisions about this practice.


  1. Petermann U. Pulse laser as ATP-generator: the use of low level laser-therapy in alleviating Qi-shortcomings. Zeitschrift für Ganzheitliche Tiermedizin. 2012;26(1):8-14.
  2. Draper WE, Schubert TA, Clemmons RM, Miles SA. Low-level laser therapy reduces time to ambulation in dogs after hemilaminectomy: a preliminary study. J Small Anim Pract. 2012;53(8):465-469.
  3. Williams C, Barone G. Is low level laser therapy an effective adjunctive treatment to hemilaminectomy in dogs with acute onset paraplegia secondary to intervertebral disc disease? Proceedings, American College of Veterinary Internal Medicine Forum, Denver, CO. June 2010.
  4. Singh M, Bhargava MK, Sahi A, Jawre S, Singh R, Chandrapuria VP, Kocchar G. Efficacy of low level LASER therapy on wound healing in dogs. Indian Journal of Veterinary Surgery. 2011;32(2):103-106.
  5. Grayson LC, Cassie NL, Juergen P, et al. Effect of laser treatment on first-intention incisional wound healing in ball pythons (Python regius). Am J Vet Res. 2015;76(10):904-912.
  6. Kurach LM, Stanley BJ, Gazzola KM, et al. The effect of low-level laser therapy on the healing of open wounds in dogs. Vet Surg. 2015 Oct 8. doi: 10.1111/vsu.12407. [Epub ahead of print]
  7. Low level laser therapy for the healing of contaminated wounds in dogs: histopathological changes. Indian Journal of Veterinary Surgery. 2013;34(1):57-58. (Madhya Pradesh Pashu Chikitsa Vishwavidyalaya, Jabalpur, MP, India.)
  8. In de Braekt MM, van Alphen FA, Kuijpers-Jagtman AM, et al. Effect of low level laser therapy on wound healing after palatal surgery in beagle dogs. Lasers Surg Med. 1991;11(5):462-470.
  9. Petersen SL, Botes C, Olivier A, et al. The effect of low level laser therapy (LLLT) on wound healing in horses. Equine Vet J. 1999;31(3):228-231.
  10. Olivieri L, Cavina D, Radicchi G, et al. Efficacy of low-level laser therapy on hair regrowth in dogs with noninflammatory alopecia: a pilot study. Veterinary Dermatology. 2015;26(1):35-e11.
  11. Stich AN, Rosenkrantz WS, Griffin CE. Clinical efficacy of low-level laser therapy on localized canine atopic dermatitis severity score and localized pruritic visual analog score in pedal pruritus due to canine atopic dermatitis. Veterinary Dermatology. 2014;25(5):464-e74.
  12. Tajali SB, MacDermid JC, Houghton P, et al. Effects of low power laser irradiation on bone healing in animals: a meta-analysis. Journal of Orthopaedic Surgery and Research. 2010;5:1.
  13. Lucas C, Criens-Poublon LJ, Cockrell CT, et al. Wound healing in cell studies and animal model experiments by low level laser therapy; were clinical studies justified? A systematic review. Lasers Med Sci. 2002;17(2):110-134.




WVC 2016: Overdiagnosis and Overtreatment

Here are the notes and slides from my presentation on Overdiagnosis.


A 5-year-old Labrador retriever presents for an acute cranial cruciate ligament rupture. Otherwise, the dog is healthy in every way with no clinical symptoms other than lameness. However, pre-operative bloodwork shows moderate elevations in ALT, and an ultrasound exam shows some indistinct, mildly hypoechoic nodular lesions in the liver. An ultrasound-guided needle biopsy is performed which ultimately shows benign nodular hyperplasia. Unfortunately, the dog dies of complications from the biopsy procedure. What this dog really died from, however, was overdiagnosis.

Veterinarians, especially early in their careers, are often fearful of misdiagnosis: incorrectly identifying disorders in their patients or diagnosing diseases the patients do not actually have. However, few worry about the dangers of overdiagnosis: the correct diagnosis and treatment of disorders patients do have but which will never cause clinical symptoms or mortality.

Overdiagnosis is now recognized as a common and serious problem in human medicine which causes significant harm in terms of cost and suffering for patients and their caregivers. There are annual international conferences on preventing overdiagnosis, and a consortium of seventy specialty groups has created the online resource Choosing Wisely (www.choosingwisely.org) to help physicians and patients make better decisions and reduce overdiagnosis and overtreatment. Changes in clinical practice guidelines for many conditions, including highly publicized changes in breast cancer and prostate cancer screening programs, have resulted from the recognition that overdiagnosis harms patients. Yet the subject of overdiagnosis is virtually unknown in veterinary medicine.


A challenge in controlling overdiagnosis is that there is no absolute way to know in advance if a finding in a particular patient is going to prove clinically important or not. This can only be known after sufficient time has passed to evaluate whether or not the finding has resulted in disease or death. However, it is possible to evaluate the frequency of overdiagnosis in a population based on the frequency of diagnosis and mortality data for specific conditions and populations. As an example, CT imaging of clinically healthy people frequently leads to diagnosis of cancers which, based on mortality figures, are never going to lead to early mortality (Table 1).




Site                   % with lesion        10-yr risk of          Chance lesion is        Chance lesion is
                       detected by CT (a)   cancer mortality (b)   lethal cancer (c=b/a)   not lethal cancer
Lung (smokers)         50                   1.8                    3.6                     96.4
Lung (never smoked)    15                   0.1                    0.7                     99.3
Kidney                 23                   0.05                   0.2                     99.8
Liver                  15                   0.08                   0.5                     99.5
Thyroid (US)           67                   0.005                  <0.01                   >99.99

Table 1. Detection and risk of mortality from cancer using CT imaging in asymptomatic humans. All values are percentages. (From Welch, 2012)1
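The arithmetic behind Table 1 is simple enough to reproduce. A sketch assuming the column definitions given there, where a is the percentage of people with a CT-detected lesion, b is the 10-year risk of cancer mortality, and c = b/a (all in percent):

```python
# Reproduce the Table 1 arithmetic: the chance that a CT-detected lesion is
# a lethal cancer is c = b / a, with a and b both expressed in percent.
# Values are the (a, b) pairs from Table 1 (Welch, 2012).

rows = {
    "Lung (smokers)":      (50.0, 1.8),
    "Lung (never smoked)": (15.0, 0.1),
    "Kidney":              (23.0, 0.05),
    "Liver":               (15.0, 0.08),
    "Thyroid (US)":        (67.0, 0.005),
}

for site, (a, b) in rows.items():
    c = 100.0 * b / a  # chance the lesion is a lethal cancer, in percent
    print(f"{site}: lethal {c:.2g}%, not lethal {100 - c:.1f}%")
```

Running this recovers the last two columns of the table, e.g. 1.8/50 gives the 3.6% figure for smokers' lung lesions, leaving a 96.4% chance that a detected lesion is not a lethal cancer.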

Similar data is available for many other diseases, suggesting that overdiagnosis is extremely common in human medicine. There is no published data specifically addressing the frequency of overdiagnosis in veterinary medicine.


The diagnosis of diseases that are unlikely to ever cause significant illness or mortality causes harm in several ways. The testing leading to diagnosis, and the treatment often offered once such a diagnosis is made, have financial costs. It is estimated, for example, that overdiagnosis and overtreatment of clinically irrelevant lesions detected through mammography costs $4 billion annually in the United States alone.2 Another study suggests that unnecessary treatment of people with mild hypertension in the U.S., with no benefit in terms of reducing symptoms or early mortality, may cost $32 billion annually.3

These are only estimates for the costs of overdiagnosis for two diseases in one country, so the global financial cost is undoubtedly much greater. And despite the impulse to feel that no price should be put on efforts to improve health and treat disease, it is undeniable that such waste raises the overall costs of healthcare and reduces access for some people, all without any benefit to patients.

There are no estimates of the costs of overdiagnosis in veterinary medicine. The economic model of the veterinary profession is quite different from human medicine, and the financial costs of overdiagnosis may not impact the overall cost of veterinary care or the availability of care as dramatically. However, these costs are still a waste of client resources, and they can reduce the ability of some clients to pay for subsequent care that may be necessary or beneficial for their animals.

The financial costs of overdiagnosis, however, have not been the major driver of change in clinical practice in human medicine; the physical and emotional harm to patients has. Diagnostic testing and treatment for conditions not destined to cause illness or death can cause both physical injury and psychological distress. It has been estimated, for example, that prostate-specific antigen (PSA) screening for prostate cancer will identify cancer in 30-100 men who would never have been clinically affected for every death such testing prevents.1 And of these men who are overdiagnosed and go on to have biopsies or treatment, up to 50% will experience sexual dysfunction, 30% will have urination difficulties, and 1-2 per thousand screened will die as a result of unnecessary treatment. Research has also shown that quality of life diminishes after a diagnosis of prostate cancer, and that the risk of suicide and cardiovascular death increases immediately following such a diagnosis.4-5

Similar evidence of physical and psychological harm from overdiagnosis is available for many other conditions in human medicine. There has been, however, no published research on the risks of overdiagnosis in the veterinary field.


Overdiagnosis is driven by numerous factors. Screening tests, imaging, and other diagnostics employed without specific clinical justification frequently lead to the detection of abnormalities. Such abnormalities are far less likely to be clinically important than those which cause symptoms, and therefore they often represent overdiagnosis. However, once an abnormality is detected, some of the psychological harm to patients or caregivers has already occurred. And because of the anxiety induced by the finding and the desire of both clients and doctors to take some action, even when it is unclear this will benefit the patient (a phenomenon known as “commission bias”6), further testing and even therapy often results from an initial overdiagnosis.

Overdiagnosis also stems from the development of more sensitive tests, which identify conditions earlier and prior to the onset of clinical symptoms. Expanded definitions of disorders, which encompass patients previously not considered to have these disorders, can also lead to overdiagnosis and overtreatment. Psychologically, doctors are prone to overdiagnosis because they are likely to be punished, in the form of blame or even litigation, for failing to diagnose a medical condition if it does eventually lead to symptoms or death. However, doctors are almost never punished for unnecessarily diagnosing and treating conditions which would never have caused any harm if undiagnosed.

The inappropriate reliance on anecdotal evidence and clinical experience to guide diagnostic practices also contributes to overdiagnosis. Even in the face of strong evidence, for example, that mammography of women under 50 years of age leads to significantly more harm from overdiagnosis than benefit from earlier diagnosis and treatment, there has been significant resistance to the change in screening guidelines implemented to reduce this harm. Much of that resistance is justified with the use of anecdotes of women who had been diagnosed and treated for breast cancer because they and their doctors believed, rightly or wrongly, that this intervention had saved their lives.

Any doctor who believes their use of screening or other diagnostic interventions in asymptomatic patients has saved someone’s life will be very reluctant to stop using those interventions regardless of the evidence that they do more harm than good. Individual stories are always more psychologically compelling than statistical data. But acting on our emotional response to such stories and ignoring the evidence regarding overdiagnosis ultimately causes more unnecessary suffering for real patients.

Patients are similarly inclined to seek diagnosis and take action on it even if the statistical evidence suggests it is in their best interests not to. Doing so gives people a sense of control over their fate, or that of their animals. Even if this sense of control is an illusion, it tends to outweigh rational considerations. One survey found that 98% of people mistakenly diagnosed with cancer through screening were still glad they had had the test once the follow-up evaluation showed they actually did not have cancer.7 Like doctors, patients are inclined to take action rather than choosing inaction, even when inaction is demonstrably the better choice.

Finally, we cannot ignore the potential influence of financial interests on overdiagnosis. Companies selling diagnostic tools and veterinarians using them receive income from the use of these tools. And the follow-up testing and treatment of diseases, even when they are overdiagnoses, also generates revenue. While doctors are unlikely to intentionally pursue unnecessary testing and treatment purely for financial gain, it would be naive to imagine such revenue has no impact on doctors’ decision making. Federal law prohibits doctors from referring patients to diagnostic facilities in which they have a financial interest because research has shown such an interest increases the number of tests done and the costs to patients.8-9 There is no reason to believe veterinarians to be exempt from the same potential for financial self-interest to influence clinical decisions.


The first step in reducing the harms from overdiagnosis is to understand the phenomenon and its causes. This includes developing the data to identify overdiagnosis of specific conditions. Because overdiagnosis can only be identified in retrospect in individual patients, we must gather and analyze epidemiologic data to recognize the level of risk for overdiagnosis of particular diseases. We cannot safely rely on anecdotes and uncontrolled clinical experience alone to drive our diagnostic and therapeutic practices. We need data.

In the meantime, in the absence of such data, the best strategy is to understand the limitations of our diagnostic tests, including important measures such as their positive and negative predictive value, which help us to appreciate the likely significance and reliability of test results in particular patient populations.  We should also ensure that we have an appropriate clinical index of suspicion for any condition before we begin testing for it. “Fishing expeditions,” “shotgun diagnostics,” indiscriminate imaging, and other such irrational diagnostic practices raise the risk of overdiagnosis.
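To make the predictive-value point concrete, here is a small sketch of how positive and negative predictive value follow from a test's sensitivity and specificity and the prevalence of disease in the population tested. The numbers are hypothetical, chosen only for illustration, not drawn from any cited study; the point is that the same test gives very different answers in sick versus healthy populations:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute (PPV, NPV) from test characteristics and disease prevalence."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)   # chance a positive result reflects real disease
    npv = tn / (tn + fn)   # chance a negative result reflects real health
    return ppv, npv

# A hypothetical test that is 90% sensitive and 95% specific:
# tested in symptomatic patients (prevalence 30%) vs. wellness screening (1%).
ppv_sick, _ = predictive_values(0.90, 0.95, 0.30)    # ~0.89
ppv_screen, _ = predictive_values(0.90, 0.95, 0.01)  # ~0.15
```

In the screening scenario, roughly 85% of positive results would be false alarms, which is exactly why indiscriminate testing of asymptomatic patients invites overdiagnosis.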

We must also learn to accept the inevitable uncertainty in medicine and be honest with clients about our ability to predict and control all patient outcomes. We need to recognize and disclose that testing and treatment have costs and risks as well as benefits, especially in patients without significant clinical symptoms associated with the disorders we are trying to diagnose and treat. Though it is psychologically more difficult for us, it is often wiser to avoid action when there is not good evidence to show that our actions will truly benefit our patients. Don’t just do something, stand there!


  1. Welch HG, Schwartz L, Woloshin S. Overdiagnosed: Making People Sick in the Pursuit of Health. Boston: Beacon Press; 2012.
  2. Ong MS, Mandl KD. National expenditure for false-positive mammograms and breast cancer overdiagnoses estimated at $4 billion a year. Health Aff. 2015;34(4):576-583.
  3. Martin SA, Boucher M, Wright JM, et al. Mild hypertension in people at low risk. BMJ. 2014;349:g5432.
  4. Heijnsdijk EA, Wever EM, Auvinen A, et al. Quality-of-life effects of prostate-specific antigen screening. N Engl J Med. 2012;367:595.
  5. Fang F, et al. Immediate risk of suicide and cardiovascular death after a prostate cancer diagnosis: cohort study in the United States. JNCI. 2010;102:307.
  6. McKenzie BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. JAVMA. 2014;244(3):271-276.
  7. Schwartz LM, Woloshin S, Fowler FJ, et al. Enthusiasm for cancer screening in the United States. JAMA. 2004;291(1):71-78.
  8. Levin DC, Rao VM. Turf wars in radiology: updated evidence on the relationship between self-referral and the overutilization of imaging. J Am Coll Radiol. 2008;5(7):806-810.
  9. Gazelle GS, Halpern EF, Ryan HS, Tramontano AC. Utilization of diagnostic medical imaging: comparison of radiologist referral versus same-specialty referral. Radiology. 2007;245(2):517-522.




Posted in Presentations, Lectures, Publications & Interviews

Marijuana and Cannabis-Based Products for Pets: Any News?

In 2013, I wrote about a particular medical marijuana product marketed for veterinary use, Canna-pet, as an illustration of the uncertainties and issues surrounding the potential medical use of cannabis-derived products. At that time, my conclusion was that 1) there is enough pre-clinical evidence to suggest cannabinoids of various types have physiologic effects that could prove beneficial, 2) there is limited evidence for some clinical use in humans, 3) but overall the evidence in humans is weak, 4) and in veterinary species it is non-existent.

Sadly, the state of the evidence hasn’t changed in the intervening couple of years, but the marketing of such products to pet owners and veterinarians has continued to grow. The lack of meaningful regulation of dietary supplements allows the sale of unproven remedies so long as the benefits are only implied and not directly stated. This loophole has created a wonderful opportunity for companies to profit from products that might or might not work and might or might not be safe. This does not strike me as serving the best interests of patients. The money and energy put into marketing these products could be better used to fund research to identify the true risks and benefits.

The only veterinary “research” that has emerged recently is the kind that I have discussed many times before, research that is intended to sell an idea or product rather than to find out the truth about it. This is the kind of research most preferred by companies selling such products and by alternative medicine advocates such as the AHVMA, and both are involved in this particular study.

Consumers’ perceptions of hemp products for animals. JAHVMA. Spring, 2016, vol. 2

The full details of the study are not available except to subscribers, but the results are summarized on the AHVMA web site, and this summary has been widely distributed by Canna-Pet. It consisted of an online survey “provided by Canna-Pet to their customers.” Obviously, this represents clear selection bias, since those responding to the survey are going to be those buying a hemp product for use in their pets because they expect or hope it will help. Anyone who doesn’t have a pre-existing bias in favor of using such a product, or who has used it and had negative experiences, is not going to be a customer and so is not going to participate in this survey.

Of the 632 respondents, about half felt it had helped their pet with pain, sleep, anxiety, and, in cats, inflammation. No data are provided on the conditions for which owners felt it wasn’t helpful, or any other relevant information about the animals, their conditions, other treatments, and so on.

About 15-20% of owners reported undesirable effects, such as sedation or excessive appetite.

There is no way to evaluate this survey without more information, other than to say that it appears to contain no controls whatsoever for bias. As such, it is likely to be as unreliable as most online testimonials. And it is actually a bit surprising that even with a survey that doesn’t control for bias, the best the company could say about the results was that only about half of the users of the product felt it was beneficial. Not a powerful endorsement given the exclusion of likely sources of negative feedback.

In any case, such a survey at most represents attitudes towards cannabis products and says nothing about whether or not they actually work. However, the media and advocates for veterinary use of cannabis are certainly spinning it as at least implying such products are effective or worth a try.

As I mentioned in my previous post, the good news about the diminishing stigma associated with marijuana is the possibility of real research into chemical compounds that will likely prove to have medical benefits. The bad news is that there is some evidence legalization has led to an increase in marijuana poisoning for dogs, and it has provided more opportunities for companies to get into the business of selling the potential benefits of cannabis-based products before doing the necessary work of proving these benefits exist and that they are worth the risks.

One example of this is a new company vying with Canna-Pet for this potentially lucrative market, Canna Companion. This company is a little more circumspect in their claims for the medical effects of their product, but they still aggressively promote it as a “holistic” therapy, playing on the mythology that things labeled “holistic” or “natural” can be assumed to be safe based on pre-clinical evidence or anecdote alone and don’t require the rigorous clinical testing conventional therapies are expected to undergo.

The company web site, like that of Canna-Pet, doesn’t discuss any clinical studies in companion animals since there aren’t any. They simply point to the basic science research that shows the potential for benefits from cannabis-based products. Such research often fails to live up to its promise when specific products are tested in real-world patients, but this never seems to be a concern for companies marketing untested products. The company certainly has some legitimate scientists working for them, and their Chief Clinical Epidemiologist is a well-qualified public health researcher at the NC State Veterinary College. Such an individual should be well-suited to organizing rigorous scientific evaluation of cannabis-based products. I was, therefore, quite disappointed by the very unscientific way his experience and credentials are used to promote the product:

Professor Peter Cowen of North Carolina State University’s College of Veterinary Medicine, now the Company’s Chief Clinical Epidemiologist and an Advisory Board member, commented: “Based on my own experience with my dog, Londun, there is something of extreme value here. I am impressed not only from a therapeutic perspective, but also from a psychological perspective.” Professor Cowen is orchestrating a clinical study at NCSU for the coming year and the Company is looking forward to presenting those findings to the veterinary community and the public at large.

An anecdote from an epidemiologist is worth no more than an anecdote from anyone else, and the company certainly sounds like the results of the study they are planning are a foregone conclusion. The risk of bias here is, obviously, quite high, and I wonder how eager Canna Companion will be to promote the results if they turn out not to support the product, unlikely as that is. Only time will tell.

That might also be the most appropriate conclusion to the question of whether or not cannabis-based products are useful for veterinary patients. At the moment, there is no reliable evidence, so only time will tell. Hopefully, the pursuit of profits before science won’t lead to too many animals being exposed to useless or even harmful substances before we have the data we need to know which cannabis-based products might be useful for which problems.

Posted in Herbs and Supplements

Myofascial Trigger Points-Real or Imaginary?

One of the reasons I chose the acupuncture course I am currently taking is that the instructors are very clear about rejecting the Traditional Chinese Medicine mythology of Qi, Yin and Yang, and all the rest that is often used to justify or explain the potential benefits of needling. The course purports to take a purely scientific approach to understanding and using acupuncture. As I have discussed previously, however, a fair bit of traditional acupuncture practice is accepted as effective based on anecdotal experience and then rationalized post hoc with sometimes questionable anatomic or neurophysiologic explanations. One of the most intensively used of these is the myofascial trigger point concept (MTrP).

Myofascial trigger points are supposed to be focal areas of tension or contraction in muscles which are irritable and contribute to chronic refractory pain. The argument is that these develop in response to local injury, to certain postural or activity patterns, or even to diseases in internal organs or the nervous system. Practitioners who treat such trigger points claim to be able to detect them as knots or taut bands within muscles.  Such trigger points are treated primarily by “releasing” them via some kind of stimulus, such as needling, electrical stimulation, massage, laser therapy, and so on.

The concept of the MTrP is more widely accepted in the conventional medical community than acupuncture more generally, though it is primarily utilized by osteopathic physicians, chiropractors, and others who focus on physical manipulative therapies, such as massage therapists, physical therapists, etc. However, the validity of the concept and the effect of needling and other MTrP-releasing therapies is often assumed as proven in this course and then used as an explanation for some of the proposed effects of acupuncture. This is not surprising since the course director, Dr. Robinson, is an osteopath as well as a veterinarian.

There certainly is some research evidence to support the concept of the MTrP and the effect of needling as a treatment. But there is also research evidence that appears to support acupuncture, and as we have seen when looking at it carefully and critically, it doesn’t necessarily mean what advocates claim it means. (1, 2) There is clearly controversy about the MTrP concept and the effects of therapies focused on myofascial release, and it is worth bearing this in mind rather than simply accepting the idea as true and using it to justify claims for acupuncture.

The most recent narrative review challenging this concept was published last year, and the authors make a quite definitive claim about it:

Quintner JL, Bove GM, Cohen ML. A critical evaluation of the trigger point phenomenon. Rheumatology (Oxford). 2015 Mar;54(3):392-9.

We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted.

Their claim rests on several grounds. The first is the problem with consistent identification of trigger points. Several studies involving experts who treat MTrP look at inter-observer reliability. These experts were asked to examine the same patients and give independent assessments of where trigger points were found. In these studies, the practitioners claimed to locate trigger points in different places and did not agree with each other to any significant extent unless they were first told what the underlying diagnosis was. This suggests that without knowing what is wrong with the patient in advance, even experts cannot reliably detect trigger points on physical exam and that they are inclined to base their subjective identification of such points primarily on what they expect to find when they already know the diagnosis, rather than on what they actually feel when doing a physical exam.
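Agreement in studies like these is typically quantified with Cohen's kappa, which corrects raw agreement for the agreement two examiners would reach by chance alone; a kappa near zero means they agree no better than guessing. A minimal sketch, using invented example ratings rather than data from any of the cited studies:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of cases on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
                   for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical exam findings: 1 = trigger point identified at a site, 0 = not.
examiner_1 = [1, 1, 0, 0, 1, 0, 1, 0]
examiner_2 = [1, 0, 0, 1, 1, 0, 0, 0]
print(cohens_kappa(examiner_1, examiner_2))  # 0.25
```

Here the examiners agree on 5 of 8 sites, which sounds respectable, but chance alone predicts agreement on half of them, so the corrected agreement is poor; this is the kind of result the cited reliability studies reported.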

This is a pretty serious problem given that physical examination is supposed to be the main way trigger points are found. It is the same kind of problem that helped demonstrate that the Vertebral Subluxation touted by chiropractors as a major cause of illness was actually imaginary. If such inability to detect trigger points turns out to be a consistent finding, it would strongly suggest that such points don’t exist as objective entities which can be detected by physical examination, which would greatly undermine the idea that they exist at all or are a major source of clinical symptoms.

The authors also review other ways of identifying and characterizing trigger points, including biopsy findings, electromyography, and others, and they conclude that the evidence is mixed and unclear as to whether there is a single, common lesion that can be found on physical exam and associated with clinical disease.

This review also examines the evidence that needling to release trigger points is clinically effective. A number of systematic reviews and clinical trials have been done on this question, and the overall conclusions are: the quality of the research evidence is mixed and often too low to be reliable; many other therapies are usually used along with trigger point needling, so it is difficult to determine which, if any, might be responsible for improvement; and trigger points are identified in many very different patients with very different underlying diseases, so variation in how these patients do is high and complicates comparison of studies looking at needling for trigger point release.

Other critics of MTrP theory have made similar criticisms, including some physical medicine practitioners who have shifted from automatic acceptance of the concept to skepticism. I haven’t invested the time in examining the evidence as closely as I have looked at that concerning acupuncture, so I don’t have a strong opinion, but I do have some skepticism about the concept.

In particular, I am concerned by the inherent subjectivity in detection of trigger points and assessment of patient response to therapy. In demonstrating the location and treatment of trigger points, Dr. Robinson rests a lot of weight on interpretation of patient behaviors that could reasonably be interpreted differently. As not only a vet who has practiced for many years but someone with training in animal behavior, I know how easy it is to project our own expectations onto the behavior of other animals. If I expect to find pain in a certain spot and initially don’t, it is easy to press just a little harder until I get the reaction I expect, often without even realizing I am doing so. The lack of an objective, verifiable way of detecting trigger points and their resolution with needling is, then, a significant problem for this concept in veterinary medicine.

Given the lack of clarity on MTrP theory, it is not very helpful to use this concept as an explanation or guide for acupuncture. It simply shifts the ground from one muddy and poorly demonstrated set of ideas to another. There is no doubt, of course, that people often feel better when given various kinds of manual treatments. I suspect the same is true of many companion animals who have, after all, been intensively selected for generations to accept or even desire human contact. However, we must be cautious in projecting our expectations, beliefs, and theories onto our animal patients without robust objective evidence, since we run the risk of being fooled by the caregiver placebo effect and other phenomena that can leave us believing we have helped them when in reality we have not.


Here are a few of the studies discussed in the Quintner review:

Inter-observer reliability in MTrP detection:
Hsieh CY, Hong CZ, Adams AH, et al. Interexaminer reliability of the palpation of trigger points in the trunk and lower limb muscles. Arch Phys Med Rehabil 2000;81:258-64.

Lew PC, Lewis J, Story I. Inter-therapist reliability in locating latent myofascial trigger points using palpation. Man Ther 1997;2:87-90.

Myburgh C, Larsen AH, Hartvigsen J. A systematic, critical review of manual palpation for identifying myofascial trigger points: evidence and clinical significance. Arch Phys Med Rehabil 2008;89:1169-76.

Wolfe F, Simons DG, Fricton J, et al. The fibromyalgia and myofascial pain syndromes: a preliminary study of tender points and trigger points in persons with fibromyalgia, myofascial pain syndrome and no disease. J Rheumatol 1992;19:944-51.

Clinical effect of needling trigger points:
Annaswamy TM, De Luigi AJ, O’Neill BJ, et al. Emerging concepts in the treatment of myofascial pain: a review of medications, modalities, and needle-based interventions. PM R 2011;3:940-61.

Cummings TM, White AR. Needling therapies in the management of myofascial trigger point pain: a systematic review. Arch Phys Med Rehabil 2001;82:986-92.

Ho KY, Tan KH. Botulinum toxin A for myofascial trigger point injection: a qualitative systematic review. Eur J Pain 2007;11:519-27.

Rickards LD. The effectiveness of non-invasive treatments for active myofascial trigger point pain: a systematic review of the literature. Int J Osteopathic Med 2006;9:120-36.

Tough EA, White AR, Cummings TM, et al. Acupuncture and dry needling in the management of myofascial trigger point pain: a systematic review and meta-analysis of randomised controlled trials. Eur J Pain 2009;13:3-10.



Posted in General

Is Evidence-based Medicine a Dead End?

A recent blog post promoted by the American Holistic Veterinary Medical Association (AHVMA) asks the question, “Is Evidence-based Medicine at a Dead End?” Since scientific evidence often fails to support the beliefs and claims of alternative vets, and the AHVMA has demonstrated many times that it accepts only evidence that supports the approaches it promotes, it is no surprise that the article contains a variety of tired misconceptions about EBVM and unproven assumptions about CAM that lead to the inevitable conclusion the author wanted to arrive at.

The first step is to suggest that EBM has somehow diverged from the presumably pure state in which it originated under the impetus of the “founding father” of EBM, Dr. David Sackett:

Evidence-Based Medicine (EBM) as practiced today (and not as originally conceived by Sackett), emphasizes fact-based medicine.

I suspect Dr. Sackett would be surprised to hear that emphasizing facts over beliefs and opinions is somehow a departure from his vision for EBM. The very fact that this can be used as a criticism of EBM highlights the disdain CAM practitioners and the AHVMA have for those pesky facts, and their preference for belief and pure faith-based medicine.

Large numbers of responses are analyzed, so this is valid for populations as a whole. In veterinary medicine this is called herd medicine, and decisions are made that will best protect the whole herd.

This is intended to build up to the claim CAM practitioners often make that because scientific research involves studying groups the results aren’t applicable to individuals, who are all unique. This is a common fallacy I’ve covered before, and which involves both the Vegas Delusion and the Snowflake Fallacy.

The Vegas Delusion is the idea that because statistics apply to groups and the outcome of any event for an individual is not perfectly predictable, we can ignore statistics in making decisions. The name for this fallacy comes from the fact that gamblers often use it to justify their hopes of winning despite the odds being overwhelming that they will lose. It is true that some individuals win at games of chance, and sometimes they win big. But casinos are lucrative businesses and gambling is a problem that ruins lives because the statistics do predict what will happen to most people most of the time.

Similarly, in medicine the prognosis or response to treatment can’t be perfectly predicted for any individual patient. But the averages observed in groups are often a useful and realistic guide to what will probably happen, and ignoring these to make up treatments haphazardly for each individual patient is a dangerous and unreliable way to practice medicine.

The Snowflake Fallacy refers to the belief that because every individual is unique, we cannot use information about a group of patients to guide the treatment of any particular patient. While it is true that we are all unique in many ways, we are also very much alike in many ways. And often the differences that we notice immediately, in appearance or behavior, aren’t relevant to how we respond to medical treatment. The significance of such differences has to be demonstrated with good research, not simply assumed or made up.

There is also a lie inherent in the claim that CAM individualizes treatment more than conventional medicine because it doesn’t rely on controlled scientific research. CAM practitioners do use information about groups of patients to guide their treatment of each individual, they simply use the haphazard and uncontrolled personal observations of their patients and of other CAM practitioners instead of scientific research. A homeopath who chooses an “individualized” remedy from a repertory is doing so based on patterns of symptoms and responses to treatment observed in other patients by other doctors. This is applying group results to individuals, it is just doing so without any effort to control for bias, placebo effects, and other important sources of error.

The author then goes on to point out that it is often difficult to control for placebo effects when testing non-pharmacologic therapies, such as acupuncture. Because of this, she believes EBM automatically finds fault with research in such therapies and so denies their obvious benefits unfairly.

While it is true that it is challenging to control for placebo effects in such approaches, it often can be done. In the case of acupuncture, needling at locations not considered to be “real” acupuncture points, or not needling at all but simply convincing the patient you have, often has just as much a clinical effect as verum acupuncture (1, 2). This most likely means that the “real” acupuncture is only an elaborate placebo, but acupuncturists are unwilling to accept this conclusion, choosing instead to claim that the method of testing doesn’t work. They have no alternative to propose, and simply expect such therapies to be accepted as effective on the basis of clinical experience alone. This is clearly a self-serving approach and not an example of a fatal flaw in evidence-based medicine.

This vet then goes on to argue that because scientific research studies try to minimize variables and simplify circumstances to make results easier to evaluate, science ignores complexity and can only be useful in evaluating and treating very simple problems with single causes. Again, there is some truth to the notion that one will almost never find a real-world situation as clear and simple as even the most complex research study, so there are things such studies can’t tell us about phenomena in the real world. But the issue isn’t whether scientific evidence is perfect, it is whether it is more reliable than the alternatives.

The alternative proposed to scientific research is simply trial-and-error experience with individual patients. The CAM alternative to clinical research is simply to try to identify patterns in anecdotal experiences and in our own clinical practice and rely on those. This is what human beings did for all of history until the scientific method was developed. It was a spectacular failure in comparison to what we have achieved using science.

Haphazard, uncontrolled observation never doubled average life expectancy, dramatically reduced infant and maternal mortality, or eliminated entire diseases; in thousands of years, it accomplished nothing like the improvements in health and well-being that we have achieved in a mere couple of centuries of science. People like this author are not suggesting a new alternative or an improvement in our understanding of disease diagnosis and treatment. They are proposing we return to the folk medicine methods that served us so poorly for so long.

Constructive criticism and improvement to scientific methods, including evidence-based medicine, is essential, and no one challenges concepts or practices in science more vigorously than scientists. But identifying, exaggerating, and fabricating weaknesses in EBM and then proposing we return to the even more limited and unreliable methods of history is not in the best interests of patients, human or veterinary.




Posted in General | Leave a comment

Evidence Update: Dodds Study on Vaccine Dose in Small Breed Dogs

I have been able to get a look at the published paper for the study I recently discussed, by Dr. Jean Dodds, investigating lower doses of vaccine for small breed dogs. There is nothing in the published report that changes my earlier conclusion. This study adds nothing of substance to our understanding of optimal vaccination practices. In design and execution, it is simply a marketing tool to promote a set of pre-existing beliefs about vaccination, and in itself does not help to clarify what optimal vaccination practices might be.

The argument Dr. Dodds seems to be making contains a number of elements I agree with and believe to be supported by good science:

  1. The effectiveness and duration of immunity vary by vaccine type and with many other factors, but in general core canine vaccines are very effective at preventing illness and likely most pets who receive the initial vaccine series at the appropriate time are well-protected for at least 3 years and probably much longer.
  2. Vaccines can have adverse effects, and while these are rare they can be potentially serious. The precise factors that make some individuals more susceptible to such reactions than others are unclear, but size appears to be a factor, with small-breed dogs reporting more reactions than larger breeds. (This is quite a bit more restrained than previous statements she has made about “vaccinosis” in small animals.)
  3. Avoiding unnecessary vaccination in animals already immune to particular infectious diseases is a desirable goal.
  4. Titers can often tell us if an animal is already immune, depending on the disease in question, though they generally cannot tell us if the animal is vulnerable to a disease since they only reflect part of the overall immune response.

She adds to these a number of claims which are not supported by good evidence, including most of the claims related to this specific study.

The dose of canine distemper virus (CDV) and canine parvovirus vaccine (CPV) vaccines can be reduced to 50%, but not more, for small breed and small mixed breed type dogs, based on body weight, and still convey full duration of immunity.

She states this in the introduction, indicating it is a pre-existing belief she intends to buttress with this study. However, her citations for this very clear and specific claim include three of her other papers expressing this opinion and an editorial from 1999 discussing concerns among practitioners about vaccination practices. No specific research is cited that supports this claim. And elsewhere in the paper, she makes it clear that the claim is actually based primarily on her personal experience, aka anecdotal evidence.

In the informed consent sheet for clients, she says “Clinical experience has shown…” and “One of the principal investigators has nearly five decades of clinical and research experience with vaccinations in companion animals. This experience has shown…” and then repeats this claim. It is not a claim supported by research evidence but simply something she has come to believe based on patients she has seen, and it should be clearly presented as such, as mere opinion appropriate for generating a hypothesis but not for making confident claims.

The only relevant research she cites is one study in which children were shown to have an adequate protective response to a lower quantity of Hepatitis B vaccine. This was tested primarily to reduce the cost of vaccination and make vaccination available to more people, not to avoid adverse effects. But in any case, it doesn’t validate the general concept that vaccines should be dosed by body weight, which is not accepted vaccine science in human or veterinary medicine.

As for the study itself, it suffers from many serious flaws that likely would have prevented publication in an ordinary veterinary journal, which may be part of why it appears in the journal of the AHVMA.

The first issue is selection bias. The subjects were recruited by an announcement on Dr. Dodds’ web page and emails to “holistic veterinarians.” This does not appear to have been very successful since only 13 animals were recruited. But in any case, these likely represent an unusual patient population, since “holistic” veterinarians, and of course Dr. Dodds, recommend quite different approaches to preventative and therapeutic healthcare than most vets, including different vaccination practices. These animals may not be sufficiently similar to pets that receive standard veterinary care, including with respect to their vaccine history. This would limit the ability to generalize any results to other populations.

Another problem was the lack of any standard definition for “a half dose of vaccine,” which is what participating vets were told to give. While all used the same specific vaccine, this vague description of the main intervention being tested allows for a lot of unpredictable variation from subject to subject, and makes it hard to compare with any other research that may be done. The specific antigenic load given would be much more useful information at this stage of research.

A core problem with the study is that it did not address any of the underlying issues of whether giving a half dose of vaccine would protect dogs as well from disease or reduce the number of adverse vaccine reactions. Neither of these subjects was evaluated in any of the study dogs. All that was done was that antibody levels were measured before vaccination and at 4 and 6 months later. Here are the main results:

[Table 1 from the paper (J Am Hol Vet Med Assoc): CDV and CPV titers before vaccination and at 4 and 6 months after.]

All dogs had titers considered indicative of immunity before being vaccinated. Most, but not all, dogs had an increase in their titer after vaccination at 4 months (9/13 for CPV and 11/13 for CDV) and at 6 months (6/8 for CPV and 3/8 for CDV). This tells us, at most, that a smaller amount of vaccine than is usually given promotes some increase in antibody levels for CPV and CDV in some dogs. It tells us nothing, unfortunately, about how best to vaccinate dogs to protect them from these diseases while minimizing any adverse health effects.

(The difference in the number of samples at 4 and 6 months reflects the fact that, while all dogs had blood samples taken at both times, “5 dogs had samples drawn at 6 months but these were inadvertently discarded.” Accidentally throwing out nearly ¼ of your samples is a pretty serious error in any study, and it raises questions about the validity of the data as well as the conclusions.)

These data, even if accepted as legitimate, do not answer any of the pertinent questions, such as whether dogs receiving half of the usual vaccine dose would be protected as well long-term or healthier and less likely to experience health problems than dogs receiving the usual vaccine dose. The study doesn’t, in other words, provide any real evidence to support or refute the claims Dr. Dodds and many other “holistic” vets make about the best vaccination practices. And given she has admitted that she had no intention of following these dogs further or conducting any larger trials based on this “pilot” study, it is pretty clear that the only purpose of this study was to generate ammunition for a marketing campaign to promote ideas about vaccination that Dr. Dodds has developed entirely based on personal experience and belief.

I have addressed both the evidence concerning risks and benefits of vaccination and the issue of using titers to help make vaccination decisions. Limitations in the available evidence make a variety of different practices equally justifiable. While I probably vaccinate less than many conventional vets, I refrain from making definitive statements beyond the evidence about the effects of various approaches to vaccination. Dr. Dodds’ position is somewhat intermediate between the rabidly anti-vaccine views of some holistic vets and the unthinking annual vaccination too often still recommended by many conventional vets, and she and I are probably not too far apart in principle. However, she chooses to emphasize the risks of vaccination (especially in places where, unlike this article, she talks about nonsense like “vaccinosis”), and she makes confident claims about the best vaccination approach that she presents as science-based but which really are simply her opinion.

In this study, she has provided the illusion of scientific evidence to support these claims, but the reality is that this study is too flawed in design and execution to add anything useful to the question. Unfortunately, Dr. Dodds and others are already promoting it widely as evidence that their preferred vaccination approaches are better for patients than those of others, including the current most evidence-based guidelines. This is a misleading misuse of science consistent, unfortunately, with her approach in many other areas.

Posted in Vaccines | 9 Comments

“Traditional Chinese” Emergency and Critical Care Medicine?

I ran across an article recently in which a board-certified specialist in veterinary emergency medicine recommends so-called Traditional Chinese Veterinary Medicine (TCVM) for critically ill patients.

As I’ve discussed in detail, there is some very limited evidence for a few potentially useful effects from passing electricity through acupuncture needles. However, the bulk of TCVM practice, and all of the theories behind it, is pure folk mythology and pseudoscience. It is always amazing and disappointing to see someone with an advanced scientific education treating such belief systems, and the therapies associated with them, as if they were in any way equivalent to science-based medicine, or as if it were legitimate to experiment with them on our sickest patients without good research evidence to support the claims made for them.

Such individuals would never tolerate the same near complete absence of evidence for a conventional drug or therapy. They are willing to give untested chemicals (herbs) and needle patients based solely on individual clinical experience and the belief that these practices have been used historically with success (which is often untrue).

Ultimately, it comes down to believing that a therapy is helpful based on individual clinical experience not only in the absence of high-quality evidence but in the absence of any controlled evidence or even a plausible theory. The history of medicine is one long lesson in why uncontrolled clinical observation is a very, very poor second to scientific research in evaluating the efficacy of our therapies. From bloodletting to internal mammary artery ligation, from Lourdes water to antibiotics for cats with interstitial cystitis, every ineffective therapy ever tried has appeared to work sometimes based on trial-and-error use. Either every possible treatment works for some patients, or clinical observation is an unreliable way to validate our treatments. Personally, I think the case is much stronger for the latter conclusion than the former.

I also think it is more than a question of whether or not we have clinical trial evidence. Of course we lack that for many of our treatments. But even therapies based on sound basic physiology and on pre-clinical in vitro and animal-model testing fail most of the time when subjected to clinical studies. Isn’t it even less likely that a therapy based on Tonifying Yang or Releasing Wind is going to be truly effective? The rationale matters, especially in the absence of good controlled evidence.

Of course, in challenging these beliefs, I am immediately subjected to accusations that I am “closed-minded.” An open mind means not judging automatically and without regard to evidence, but it doesn’t mean not judging. We all have to make judgments about the safety and efficacy of the therapies we use. There is nothing inherently better or fairer about a positive judgment. If someone chooses to believe that TCVM, or bloodletting, or any other unscientific approach works based on the weak evidence of uncontrolled personal observation, they are not being more fair or open-minded than a critic who asks for better evidence before accepting such therapies. They are simply applying a different, looser standard of evidence.

I don’t claim with certainty that these therapies do not work, only that their theoretical foundations are unscientific, which makes the prior probability of their working very low, and that there is no good reason to believe they work in the absence of good-quality evidence to raise this probability. This is not being closed-minded, merely applying the principles of science and evidence-based medicine, which it seems to me have proven their worth quite dramatically compared with history, tradition, and anecdotes.

While this vet is usually careful to recommend these treatments alongside conventional care, or instead of it only when the owner declines conventional treatment, I still can’t help feeling it is unethical for a specialist to promote and legitimize such pseudoscience. We are essentially experimenting on sick patients without acknowledging it, and claiming to have effective treatments when they are both implausible and not properly tested. We are giving something a special pass on the usual scientific testing we require of all our other therapies only because someone has slapped the label “alternative” on it.
Here are some examples of the comments in the article that I find disturbing:

If you have a patient that is bleeding post-operatively (post-op spay) or an unstable hemoabdomen that needs to go to the operating room, you can try dry needling Tian-Ping.

One indication for acupuncture could be in a post-op soft palate resection in a brachycephalic dog. By injecting B-12 at An-Shen to help calm a patient instead of writing an order for Acepromazine PRN

there are six typical Traditional Chinese Veterinary Medicine (TCVM) patterns for heart failure….If an owner is unwilling to do MV and the pet has collapse of Yang Qi, points for shock can be used as well.

If you have a feline patient with megacolon, and the owner is unwilling, or it is too risky, to place a pet under anesthesia for a de-obstipation, then enemas, lactulose, intravenous fluids and acupuncture can be used. There are 2 typical patterns for the Eastern diagnosis of megacolon, it is either Qi deficiency, or Yin and Blood deficiency. The acupuncture points would be selected based on what pattern they were exhibiting.

We treat many primary IMHAs and when they respond quickly it is great, but often we have patients that do not respond to the typical immunosuppressives. The traditional Chinese medicine pattern would need to be identified since there are different patterns. Typically for an extravascular hemolysis case, the main issues tend to be spleen Qi deficiency/blood deficiency. So selecting acupuncture points that would tonify the Qi/Blood, support the spleen, and immunomodulating points such as (LI-4, LI-10, LI-14, ST-36, GV-14) would be best. If the patient has evidence of intravascular hemolysis, clearing the heat and damp would be important and thus direct your acupuncture approach. The use of herbal therapy is becoming more popular and for a non-responding, primary ITP case Gui Pi Tang may be helpful.

the cat that is having an acute asthma attack that is not responding to typical interventions such as oxygen, steroids, and bronchodilators. Knowing LI-20, Bi-tong and Lung-hui acupuncture points can come in very handy. There are really countless uses for dry needling, aqua and electrical acupuncture in the CCU and it will likely become a more routine treatment in the critical care veterinary setting.

Posted in General | 16 Comments

Alternative Standards for Alternative Continuing Education Courses

I have written several times about the efforts of alternative vets to circumvent the systems intended to ensure quality and scientific legitimacy in continuing education for veterinarians. In brief, most states require vets to regularly take a certain number of hours of continuing education. The idea is that scientific and medical knowledge grows and changes over time. Since states give vets an exclusive monopoly on practicing veterinary medicine, they want to ensure the public is protected against veterinarians who have outdated or inaccurate knowledge and skills.

Such requirements are meaningless if there is no control over the kinds of education vets can use for this license requirement. If I can take a class in origami or Renaissance poetry, that’s great fun, but it doesn’t help ensure I am an up-to-date and competent vet. So most states require that continuing education (CE) courses be accredited to be used for licensure. Vets can still take any courses they want in any subject; they just can’t get credit for them for licensing purposes unless they are accredited.

The American Association of Veterinary State Boards (AAVSB) is the main organization recognized as accrediting CE courses, through its RACE program. Appropriately, RACE requires that the content of CE courses have at least minimal scientific evidence or compatibility with science.

[Courses must] build upon or refresh the participant in the standards for practice and the foundational, evidence-based material presented in accredited colleges or schools of veterinary medicine or accredited veterinary technician programs…CE programs that advocate unscientific modalities of diagnosis or therapy are not eligible for RACE approval…All scientific information referred to, reported or used in RACE Program Applications in support or justification of an animal-care recommendation must conform to the medically accepted and scientifically supported standards of experimental design, data collection and analysis.

This category includes all conventional medical and surgical sub-categories that are evidence based… Based on scientific principles, there must be an established “probability” of success that conforms to the medically accepted and scientifically supported standards of experimental design, data collection and analysis.

Content of a Category One: Scientific Program must be supported by:

i. Availability of beneficial evidence (peer reviewed journal) OR

ii. Three peer reviewed studies OR

iii. Study review – Case control studies leading to the benefit of the patient OR

iv. Evidence based studies OR

v. Proven usefulness/effectiveness OR

vi. Evidence of rigorous scientific research OR

vii. FDA (animal approved) objective information/about the product (safety) plus one of the categories above.

RACE does approve some CE offerings involving alternative therapies, but many do not qualify for accreditation due to lack of compliance with RACE requirements for scientific validity. Organizations of alternative medicine providers have often responded to the denial of RACE approval not by producing better scientific support for the disputed content but by circumventing the approval process. The American Holistic Veterinary Medical Association (AHVMA) and other groups have advocated bypassing RACE and getting CE approval directly through the states. Smaller numbers of individuals are better able to influence the political process at the state level, so this has often been a successful strategy.

Also, since the AHVMA has now qualified for a seat in the AVMA House of Delegates, some states automatically accept its content as legitimate CE regardless of scientific validity or RACE approval. House of Delegates membership essentially requires only a certain number of members who also belong to the AVMA, so it is not a mark of any kind of legitimacy for the mission of the organization. However, it is a useful component of an overall campaign to market alternative medicine, and in this case it facilitates bypassing the usual standards veterinary CE courses must meet.

The Academy of Veterinary Homeopathy (AVH) actually sued AAVSB over denial of RACE approval, unsuccessfully. (1,2)

But the CAVM community has gone even further, creating an alternative CE accreditation board specifically to approve alternative medicine content, RAIVE. As they state on their web site:

The CAVM community now relies on RAIVE to validate their educational meetings. We urge all state boards to do the same for CAVM courses. The opinion of the RACE committee is no longer valid.

Failing to approve every element of their CE offerings, regardless of scientific validity, apparently invalidates the entire CE approval process. The AHVMA, RAIVE and other CAVM organizations are now attempting to get state veterinary medical boards to accept RAIVE accreditation as comparable to RACE approval. Washington state, for example, is currently accepting input on a proposal to do just this (The proposal)

The RAIVE web site argues there is no need to demonstrate scientific validity for CE offerings, only that CAVM should be taught by “experts” in that field. As I have pointed out before, that by definition requires that any judgment of CAVM be made by individuals who already believe in its safety and efficacy and have committed themselves to practicing and teaching specific modalities. This effectively eliminates any possibility for falsifying or even significantly challenging these methods, and makes the standard of validity not scientific evidence but expert opinion. It is the perfect closed shop.

The site further specifically states that scientific evidence is a secondary consideration and need only be developed if the methods they advocate are already accepted and taught:

RAIVE recognizes that evidence based medicine does not define the practice of veterinary medicine, but is a process of clinical decision-making, and veterinarians can also benefit from education in emerging subjects with little scientific support. In these cases, RAIVE approved CE must incorporate experts with advanced training or deep clinical experience in these subjects.

RAIVE recognizes that not all CAVM modalities meet the currently accepted standards of evidence. However, RAIVE also recognizes that in order for the evidence to be produced, CE in these modalities must be encouraged in order to develop and strengthen skills in practitioners of these modalities.

So we should teach people to use these methods first, and then maybe test them scientifically later? This effort to avoid the usual standards of evidence that conventional veterinary CE offerings are expected to meet, and to specifically reject scientific testing as the primary standard in favor of individual expertise, reflects the common view often manifest in the CAVM community that science is useful only for proving what one already “knows” from personal experience or for convincing others of the validity of such knowledge. The issue is not whether CAVM methods are or can be scientifically validated. Some practices might meet this standard, and others clearly do not. As I’ve said repeatedly, many aspects of CAVM can and should be investigated scientifically. But others, like Chinese Medicine and Reiki, cannot be because they are belief systems, not scientific hypotheses. Others, like homeopathy, have already been evaluated and proven not to work. And CAVM proponents themselves often claim scientific evaluation is unnecessary or inappropriate for their methods. Is it really appropriate for a vet to maintain their license, their legal monopoly to practice veterinary medicine, by studying things that are inherently unfalsifiable or incompatible with science, or that have already been proven false?

The core issue is that the ethos of the CAVM community views scientific evidence as, at most, a nice extra to add on after one has already figured things out by trial and error and, at worst, completely irrelevant to the evaluation of the treatments we use on our patients. This is not just a disagreement about the evidence for specific practices, but an attempt to fundamentally alter the epistemological foundations of veterinary medicine.



Posted in General | 4 Comments