WVC 2015: Can We Trust the Scientific Literature?

INTRODUCTION

Evidence-based veterinary medicine (EBVM) is the explicit and deliberate integration of scientific research evidence with the circumstances and needs of individual patients to support clinical decision making. It requires that the practitioner be able to find and critically evaluate published reports of scientific research, and it therefore requires that these reports be accurate and reliable. But there is abundant evidence in human medicine that published research findings can often be misleading due to uncontrolled sources of error and bias. There is far less research concerning this problem in veterinary medicine, but a few studies have suggested the veterinary literature may suffer from similar weaknesses.

BIAS IN THE HUMAN MEDICAL LITERATURE

The design of scientific research studies is specifically intended to control for known sources of bias. The way that study subjects are selected and assigned to different groups, the way that researchers and subjects are blinded to the nature of these groups, and the way in which the resulting data are analyzed can all reduce error and lead to more reliable results. However, when studies are poorly designed, conducted, or reported, hidden bias can remain and the results can be misleading.

A clear source of potential bias is the funding for a study. In the human medical literature, funding source has been shown to be associated with outcomes in predictable ways that indicate bias is present. This bias can be reduced by strict adherence to standards of study design and conduct. However, evaluating the quality of these controls for bias requires that the methods used be adequately reported, and again the human medical literature shows that insufficient information is often present in published reports to allow accurate assessment of the risk of funding bias.

The same is true for many sources of error in the human medical literature, such as publication bias, selection bias, information bias, confounding, and others. Often, the steps necessary to effectively minimize these sources of error cannot be assessed because the published reports provide insufficient information about how the studies were conducted. Some of the core aspects of study design and conduct, such as subject allocation and allocation concealment, blinding, and handling of subjects lost to follow up, are commonly not adequately reported or not properly implemented.

BIAS IN THE VETERINARY MEDICAL LITERATURE

There is less information about potential bias in the veterinary literature. The studies that have been done suggest that the quality of reporting is generally very low, and the risk of bias in individual studies is often impossible to determine because of insufficient information. Several studies have found serious deficiencies in the reporting of randomization methods, blinding, and handling of losses to follow up, and serious problems with the quality of statistical methods employed in many study reports.

Even though risk of bias can be difficult or impossible to determine when reporting quality is low, some researchers have found that poor reporting is associated with a higher likelihood of positive results in published studies. This at least suggests that poor reporting may signal results biased in favor of the investigators’ preconceptions. Little research has been done on the subjects of funding and publication bias in the veterinary medical literature.

A SYSTEMATIC REVIEW OF VETERINARY CLINICAL TRIALS

I am currently conducting a systematic review of veterinary clinical trial reports intended to evaluate the quality of reporting and the risk of bias using instruments commonly used for this purpose in the human medical literature. Preliminary results suggest very poor quality of reporting generally, with some variation across species areas. Because of this poor reporting, it is difficult to assess the risk of bias in veterinary clinical trial reports. However, consistent with previous studies, these data do suggest that reporting quality is associated with the likelihood of positive outcomes, suggesting again that poorer quality studies are more likely to generate results consistent with the hypothesis under investigation. If this represents uncontrolled bias, then these results may be misleading.
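To make the kind of association just described concrete, here is a minimal sketch, using entirely hypothetical counts rather than any data from the review, of how trial reports might be tabulated by reporting quality and outcome and an odds ratio computed:

```python
# Hypothetical illustration: association between reporting quality and
# positive outcomes in a set of trial reports. The counts below are
# invented for demonstration; they are not data from the review.
from math import log, sqrt, exp

# 2x2 table: rows = reporting quality, columns = study outcome
poor_positive, poor_negative = 40, 10    # poorly reported trials
good_positive, good_negative = 20, 20    # well reported trials

# Odds ratio: odds of a positive result in poorly vs. well reported trials
odds_ratio = (poor_positive * good_negative) / (poor_negative * good_positive)

# Approximate 95% confidence interval on the log-odds scale (Woolf method)
se = sqrt(1/poor_positive + 1/poor_negative + 1/good_positive + 1/good_negative)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# An OR above 1 would indicate positive results are more common among
# poorly reported trials, consistent with possible uncontrolled bias.
```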

This study does not consider the issue of publication bias since it is difficult to identify unpublished veterinary clinical trials. Unlike in human medical research, there is currently no widely used or mandatory registry for clinical trials, making it difficult to detect studies which may not be published due to negative results.

WHERE DO WE GO FROM HERE?

The deficiencies in the veterinary clinical trial literature are not a reason to abandon evidence-based medicine. EBM has had significant benefits in terms of patient care when implemented in the human medical field, and it is likely to be equally beneficial in veterinary medicine. However, the ability of veterinarians to practice EBVM is hampered by the limited reliability of published veterinary research, and steps should be undertaken to improve the reliability of the literature. Such steps could include mandatory reporting guidelines adopted by journals, funding agencies, and research institutions. This would improve the quality of reporting, which would make it easier to accurately assess the real risk of bias in published studies.

A number of organizations are also currently working towards the creation of a clinical trial registry for veterinary studies. This would greatly improve our ability to assess potential sources of bias, including publication bias and bias associated with losses to follow up in studies. If such a registry contained the original data collected, it would also help identify and deter questionable analytic or statistical practices.

Controlled scientific research is clearly superior to informal methods of evaluating diagnostic and therapeutic interventions because it minimizes the role of biases of all kinds. However, this function cannot be performed effectively if proper methods of study design and conduct are not employed. And the reliability of published evidence cannot be critically evaluated if the necessary information is not reported. A key feature, then, in the development of evidence-based veterinary medicine needs to be improvements in the quality and reliability of the veterinary medical literature.
 



WVC 2015: EBVM in the Trenches

WHAT IS EVIDENCE-BASED VETERINARY MEDICINE (EBVM)?

Evidence-based veterinary medicine (EBVM) is the formal, explicit application of the philosophy and methods of science to generating understanding and making decisions in veterinary medicine. It is often associated with academic research and university or specialty practice. However, EBVM also provides a perspective and a set of behaviors that veterinarians in general practice can employ to control bias, reduce errors, and manage information more efficiently. In this setting, where limited time and resources and the agendas of our clients constrain our actions, EBVM can facilitate better clinical decision making, improve patient care, aid in managing uncertainty, support effective communication with clients, and help establish habits of ongoing learning and improvement throughout one’s career.

WHAT CAN EBVM DO FOR ME?

The main benefit of employing EBVM techniques is having better information on which to base our clinical practice. When there is good quality evidence to help us evaluate diagnostic and therapeutic interventions, EBVM helps us find this evidence and learn how to use it to inform our decisions. When, as is often the case, the evidence is poor in quality and quantity, understanding this helps us to avoid the risks of unjustified certainty and be mindful of the need for flexibility and lifelong learning in clinical practice.

Better information, and more informed decision making, leads to better patient care. There is evidence from human medicine that the implementation of evidence-based clinical practice guidelines and other EBM tools improves patient outcomes, and this is the goal of EBVM as well.

Finally, EBVM can help practitioners meet our ethical obligations to patients and clients. We have a duty to our patients to provide the best care possible, and EBVM facilitates this. We also have a duty to provide truly informed consent to our clients. Only by understanding the evidence behind our recommendations, and having a clear view of the degree of uncertainty present, can we effectively guide clients in making decisions for their animals.

THE STEPS OF EVIDENCE-BASED PRACTICE

Evidence-based practice involves following a set of explicit steps to integrate formal scientific research information with the individual circumstances of each case to facilitate decision making. The busy practitioner will clearly not be able to execute each step for every problem in every case, nor is this necessary. But by regularly employing the EBVM process, we build and maintain a knowledge base that informs our decisions.

Of course, every veterinary clinician already has extensive knowledge and opinions that inform his or her practice. However, without EBVM, our knowledge base is haphazard and uncritically derived from sources of unknown or low reliability. EBVM allows us to have greater confidence in the knowledge we rely on when making recommendations for individual patients.

These are the basic steps of EBVM:

  1. Ask useful questions
  2. Find relevant evidence
  3. Assess the value of the evidence
  4. Draw a conclusion
  5. Assign a level of confidence to your conclusion

ASKING USEFUL QUESTIONS

Vague or overly broad questions impede effective use of research evidence in informing clinical practice. “Does drug X work?” or “What should I do about disease Y?” are not questions that are likely to lead to the recovery of useful information from published research. There are a number of schemes for constructing questions the scientific literature can help answer. One of the easiest is the PICO scheme; a small worked example follows the four elements below.

P – Patient, Problem: Define clearly the patient in terms of signalment, health status, and other factors relevant to the treatment, diagnostic test, or other intervention you are considering. Also clearly and narrowly define the problem and any relevant comorbidities. This is a routine part of good clinical practice and so does not represent “extra work” when employed as part of the EBVM process.

I – Intervention: Be specific about what you are considering doing: what test, drug, procedure, or other intervention you need information about.

C – Comparator: What might you do instead of the intervention you are considering? Nothing is done in isolation, and the value of most of our interventions can only be measured relative to the alternatives. Always remember that educating the client, rather than selling a product or procedure, should often be considered as an alternative to any intervention you are contemplating.

O – Outcome: What is the goal of doing something? What, in particular, does the client wish to accomplish? Being clear and explicit, with yourself and the client, about what you are trying to achieve (cure, extended life, improved performance, decreased discomfort, etc.) is essential in evidence-based practice.
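To illustrate the PICO scheme in concrete form, the sketch below (in Python, with an invented example question about antihistamines for canine atopic dermatitis) simply captures the four elements and assembles them into a searchable clinical question; the field names mirror the scheme described above.

```python
# A minimal representation of a PICO clinical question.
# The example values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    patient: str       # P - patient/problem, defined narrowly
    intervention: str  # I - the specific test, drug, or procedure considered
    comparator: str    # C - the alternative, including doing nothing
    outcome: str       # O - the specific goal the client wants to achieve

    def as_question(self) -> str:
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparator}, "
                f"improve {self.outcome}?")

q = PicoQuestion(
    patient="adult dogs with uncomplicated atopic dermatitis",
    intervention="oral antihistamine therapy",
    comparator="no treatment",
    outcome="owner-assessed pruritus scores",
)
print(q.as_question())
```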

FIND RELEVANT EVIDENCE

Experienced clinicians typically have opinions on the value of most interventions they routinely consider. Unfortunately, we rarely know where those opinions originally came from or how consistent they are with the current best scientific evidence. And given the constraints of time and resources, practitioners will rarely have the ability to find and critically evaluate all the primary research studies relevant to a particular question. Fortunately, there are sources of evidence that can provide reliable guidance in an efficient, practical manner.

The best EBVM resources for busy clinicians are evidence-based clinical practice guidelines. These are comprehensive evaluations of the research in a general subject area that explicitly and transparently identify the relevant evidence and the quality of that evidence and make recommendations with clear disclosure of the level of confidence one should place in those recommendations based on the evidence.

Sadly, many guidelines produced in veterinary medicine are not evidence based but opinion-based (so-called GOBSAT or “Good Old Boys Sat At a Table” guidelines). These are no more reliable than any other form of expert opinion. Excellent examples of truly evidence-based guidelines are those of the RECOVER Initiative for small animal CPR and the guidelines produced by the International Task Force for Canine Atopic Dermatitis.

After evidence-based guidelines, the next most useful resources are systematic reviews and critically-appraised topics (CATs). These are more focused but still explicit and transparent reviews of the available evidence on specific topics. Systematic reviews can be identified by searching the VetSRev database, a free online resource produced by the Centre for Evidence-based Veterinary Medicine (CEVM) at the University of Nottingham. Unfortunately, getting full-text copies of these reviews can be challenging for vets not at universities, but there are a number of options depending on where one practices.

Critically appraised topics are also produced by CEVM and freely available on the web as BestBetsforVets. There are a number of other free CAT resources, including the Banfield Applied Research and Knowledge web site.

Finally, primary research studies are a useful source of guidance for clinicians, though they take more effort and expertise to find and critically evaluate.
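As a practical illustration of searching for evidence of this kind, the sketch below queries PubMed through the NCBI E-utilities esearch endpoint; the search term is only an example, and VetSRev and BestBetsforVets, mentioned above, are searched through their own web interfaces rather than this API.

```python
# Example: searching PubMed via the NCBI E-utilities esearch endpoint
# for systematic reviews on an example topic. The query string is
# illustrative only; adjust it to the PICO question at hand.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = "(dogs) AND (atopic dermatitis) AND (systematic review)"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 20,
})

with urllib.request.urlopen(f"{BASE}?{params}") as response:
    result = json.load(response)["esearchresult"]

print(f"{result['count']} records found; first PMIDs: {result['idlist']}")
```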

ASSESS THE VALUE OF THE EVIDENCE

The most challenging part of the EBVM process for vets in practice is critical appraisal: learning to identify important limitations in published research studies that affect how confident we can be in the conclusions and how relevant they are to our patients. There are resources available to teach these skills, and hopefully such training will become more common in veterinary colleges, but for most practitioners pre-appraised evidence, such as guidelines and systematic reviews, will be more useful.

The clinician still has an important role, however, in determining the relevance of research evidence to individual patients. The details of a patient’s medical condition, the values, goals, and resources of the owner, and the expertise and resources available to the veterinarian all determine the degree to which a particular conclusion based on formal research is applicable to a given patient. The role of EBVM is not to replace clinician judgment with automatic reliance on published research but to ensure the clinician has the best available information and understands clearly what is known and not known when tailoring the evidence to the needs of individual animals.

DRAW A CONCLUSION

Ultimately, the job of a veterinarian is to guide the client in making decisions about care for their animals. When the clinician is aware of the existing evidence and its limitations and clearly appreciates the degree of uncertainty, then he or she can best help the client to understand their options. Making evidence-informed decisions and clearly communicating with clients about the needs and choices for their animal is the core of clinical veterinary medicine, and this is what the tools and methods of EBVM exist to support.

ASSIGN A LEVEL OF CONFIDENCE TO YOUR CONCLUSIONS

Often, the relevant research evidence is incomplete or flawed, and sometimes there is little or no such evidence applicable to a given patient’s needs. EBVM is still useful in this situation, because it allows us to clearly, systematically identify and communicate the uncertainty inherent in our work.

It is also important that we openly discuss with clients our use of evidence to inform our recommendations. Research has suggested that clients want to be told about the uncertainties involved in the treatment of their animals, and that discussing this does not reduce their confidence in their veterinarians. Clients also identify truthfulness as their highest priority in communication with their vet. By explicitly discussing our process in identifying and evaluating relevant evidence, we enhance our clients’ understanding of the role we play, and we help them to appreciate the value of our expertise, not only the products and procedures we sell.

EBVM AND THE GENERAL PRACTITIONER

The job of the general practitioner is to be informed about the research evidence relevant to their patients’ needs and to think critically about this evidence and the uncertainty it contains. It is also the role of practitioners to communicate clearly with clients about this information and guide them in making informed decisions. Ideally, general practitioners can also contribute by sharing what they learn in applying the EBVM process. Critically appraising individual studies or synthesizing the literature on particular questions will create useful information that can then be shared with colleagues.

Properly applied and with adequate support, EBVM can enhance the quality of information supporting the decisions and recommendations of vets in clinical practice. This not only reduces stress and wasted effort for veterinarians but improves client communication and patient care.



WVC 2015: What You Know that Ain’t Necessarily So

INTRODUCTION

Experienced clinicians often have an enormous knowledge base about the health problems their patients present with and the available diagnostic and therapeutic options. This knowledge is built over time from a variety of sources: basic pathophysiology and clinical information learned in school; practice tips and pearls imparted by professors and speakers at continuing education meetings; review articles and primary research papers in veterinary journals; textbooks; advice from mentors and colleagues in practice; and of course clinical experience with previous cases.

This knowledge base smoothly and efficiently informs the day-to-day activities of clinical practice. When cases with familiar features are seen, the appropriate diagnostic and treatment steps often come to mind automatically, or with minimal prodding. Unlike students and new graduates, experienced clinicians often have little sense of dredging up facts committed to memory and more of a sense of simply knowing things. One of the hallmarks of expertise is that the collating of observations and relevant knowledge into a coherent picture of the problem and a plan becomes less deliberate and more automatic with time.1

While this process, which is a universal and automatic feature of how the human brain functions, leads to greater efficiency than the explicit, conscious use of algorithms and reference sources employed by less experienced practitioners, it has a number of potential limitations. One problem, for example, is that the knowledge one relies on often can no longer be connected to its original source. We often simply know things without being aware of how we came to know them. This limits our ability to judge the reliability of the source of our knowledge. In fact, such established, automatic knowing often generates a sense of certainty greater than that which accompanies deliberately seeking and finding information.2 We are more likely to trust what we already know, even if we don’t remember where we learned it, than we are to trust what we have just discovered after searching a trustworthy source of information.

There are also a large number of well-characterized cognitive biases and sources of errors inherent in how our brains acquire, process, store, and utilize information that can lead us astray.3 These are more likely to create error when our reasoning is automatic rather than deliberate, as it necessarily must be in an efficient clinical environment that is not devoted primarily to teaching.

One of the major functions of evidence-based veterinary medicine (EBVM) is to provide tools and resources to make the knowledge base we employ more reliable. This includes generating better quality information through research and facilitating the integration of that information into clinical decision making. When the relevant evidence is of high quality, this can add confidence to our decisions.

More commonly, when the evidence has significant limitations, we may end up with less confidence in our knowledge than we would have without an explicit evaluation of the evidence. However, this is not as undesirable an outcome as it may appear. A clear, accurate understanding of the uncertainty associated with a particular practice protects us, and our patients, from the dangers of acting with unjustified confidence. We are more likely to weigh thoughtfully the risks and benefits of action in the context of an individual case when we understand the degree of uncertainty about our ability to predict or manipulate the patient’s condition.

Being clear about the sources of our knowledge, and the appropriate level of confidence to have in them, also aids in fulfilling our duty to provide clients with informed consent. Surveys of veterinary clients have shown that they value truthfulness highly in the information we provide to them, and that they want to be made aware of the uncertainties involved in the treatment of their animals.4-5 Only if we understand the reliability and limitations of the information we employ in making our recommendations can we give clients the knowledge and guidance they need to make informed choices.

The purpose of these lectures is to examine some widespread or long-standing beliefs and practices in small animal medicine and assess their evidentiary foundations. In some cases, this may clearly validate or invalidate these beliefs. In most cases, however, such an exploration will likely not lead to greater certainty but to a clearer understanding of the degree of uncertainty associated with these beliefs. Hopefully, this will be useful in making clinical decisions and in communicating with clients. The exercise may also be useful in illustrating how to make use of the research literature in establishing and maintaining the knowledge base that informs one’s clinical practice.

 

REFERENCES

  1. Benner, P. From novice to expert. Amer J Nursing, 1982; 82(3):402-7.
  2. Burton, R. On Being Certain: Believing You’re Right Even When You’re Not. New York: St. Martin’s Press. 2008
  3. McKenzie, BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. J Amer Vet Med Assoc. 2014;244(3):271-276.
  4. Mellanby RJ, Crisp J, De Palma G, et al. Perceptions of veterinarians and clients to expressions of clinical uncertainty. J Small Anim Pract 2007;48:26–31.
  5. Stoewen DL, et al. Qualitative study of the information expectations of clients accessing oncology care at a tertiary referral center for dogs with life-limiting cancer. J Am Vet Med Assoc. 2014;245(7):773-83.

Slides



Skeptvet’s Acupuncture Adventure

I have collected here the various posts written about my experiences taking the Medical Acupuncture for Veterinarians training course. This provided an opportunity to take a detailed look at the claims and evidence for acupuncture outside of the realm of Chinese folk medicine.

Introduction- My views on acupuncture at the start of the course

Claims and Evidence Regarding Points and Channels

Approaching the Evidence for Medical Acupuncture

Acupuncture and Spinal Cord Injury

Acupuncture and Anesthesia/Analgesia

Acupuncture and Neuromodulation of Cranial Nerves

Themes in the Approach to the Evidence for Acupuncture

Myofascial Trigger Points-Real or Imaginary?

The Hands-on Training and Wrap-Up


WVC 2016: Clinical Audit

Here is my presentation on clinical audit.

WHAT IS CLINICAL AUDIT?
“Clinical audit is a process used by health professionals to assess, evaluate and improve patient care…Clinical audits can be used to compare current practice with the best available evidence. It provides a methodology to assess if the best evidence-based medicine is being applied within the practice.”1

Clinical audit is a quality improvement process in clinical practice that seeks to establish guidelines for dealing with particular problems (based on documented evidence when it is available), to measure the effectiveness of these guidelines once they have been put into effect, and to modify them as appropriate. It should be an ongoing upward spiral of appraisal and improvement.2

WHY SHOULD I CONDUCT CLINICAL AUDITS?
The most obvious and significant reason to conduct clinical audits is to improve patient care. Though it is a difficult idea to prove empirically, it is likely true that formal processes to evaluate patient outcomes, implement evidence-based changes in clinical practices, and then assess the impact of these changes on patient outcomes should improve patient care. There is some research evidence to support this concept, though the effectiveness of clinical audit depends on many factors, such as the baseline level of performance, the details of how changes are implemented, and others.3

In addition, clinical audit can contribute to job satisfaction for all members of the veterinary healthcare team. Taking active steps to assess the effectiveness of one’s practices and implement change and then seeing the results of those efforts can give veterinarians and veterinary nurses a greater enthusiasm for their work, and a sense of confidence in their clinical decisions.

Clinical audit can also be used as a means to strengthen client confidence in a practice and reassure clients about the risks of specific interventions. Knowing that there is an established and ongoing process of quality control in place promotes trust in the doctors and the practice. And if clients have concerns about the risks of surgery or other interventions, clinical audit enables us to provide them with specific and relevant data to support fully informed consent and reassure them about the procedure and our commitment to safe and effective care.

TYPES OF CLINICAL AUDIT
There are two basic types of clinical audit, though the specific goals and procedures are tailored to each setting and question. Standards-based audits are those which compare current practice and results to some designated standard. The standards can be goals set specifically for the practice or derived from published clinical guidelines or other external, evidence-based sources.

A standards-based audit could evaluate how often antibiotics prescribed for urinary tract infections turned out to be appropriate based on subsequent culture and sensitivity results, with an eye to adjusting empirical antibiotic choices if necessary. Or such an audit could be focused on ensuring a certain percentage of newly-diagnosed cases of feline chronic kidney disease were fully staged according to the published IRIS guidelines.5 Or an audit could evaluate patient outcomes, such as the proportion of spay surgeries experiencing seroma or other complications, with the goal of optimizing practices to minimize these.
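As a minimal sketch of the arithmetic behind such a standards-based audit (all records and the 5% target below are invented for illustration), one might simply tabulate cases and compare the observed proportion with the agreed benchmark:

```python
# Hypothetical standards-based audit: postoperative seroma rate after
# spay surgery compared with a benchmark agreed on by the practice.
# The records and the 5% target below are invented for illustration.
records = [
    {"patient": "A", "complication": False},
    {"patient": "B", "complication": True},
    {"patient": "C", "complication": False},
    {"patient": "D", "complication": False},
]
TARGET_COMPLICATION_RATE = 0.05  # practice-agreed standard (5%)

n_cases = len(records)
n_complications = sum(r["complication"] for r in records)
observed_rate = n_complications / n_cases

print(f"{n_complications}/{n_cases} complications ({observed_rate:.1%})")
if observed_rate > TARGET_COMPLICATION_RATE:
    print("Above the agreed standard: review technique/aftercare and re-audit.")
else:
    print("Within the agreed standard: continue monitoring and repeat the audit.")
```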

Critical Incident or Significant Event audits are structured discussions that occur in response to some identified error or process failure or simply an undesired outcome. Such events can be clinical, such as an anesthetic death or patient escape in the hospital, or they can be procedural, such as a failure to submit a urine sample for culture or to report biopsy results in a timely manner.

The goal of a significant event audit is not to assign blame but to determine if any feature of hospital policy or procedure might have increased the risk of such events and how this risk might be reduced through any process changes.

HOW DOES CLINICAL AUDIT WORK?
The specific steps in a clinical audit will be somewhat different in every setting. However, there are some general principles that apply broadly to the audit process. For audits to result in meaningful, effective change in practices, the entire healthcare team must be convinced of the value and legitimacy of the process. This requires participation and input by all team members, rather than an authoritarian “top-down” approach. Having clear and transparent procedures agreed upon in advance for how audits are to be conducted and a regular, consistent execution of these procedures will facilitate creating an institutional culture in which audits are seen as a routine and necessary part of patient care.

The general steps of the audit process are illustrated in Figure 1.


Figure 1. The process of clinical audit (from Dunn, 2012)4

The process begins by identifying a question or area of concern to investigate. All members of the team should be invited to offer questions for consideration, and an open discussion of these questions, and of how to structure and prioritize them, should be held involving the entire team. Individuals or a small team responsible for a particular audit should then identify guidelines or standards to be used as benchmarks and develop a plan for collecting appropriate data. This plan should again be discussed with all members of the team who will be involved to make sure it is appropriate and not unduly complex or burdensome.

Once the data is collected, it should be analyzed to compare the results to the designated standards. Audits are not intended as clinical research projects, and no statistical analysis is necessary or appropriate. The goal is to compare the results in the practice setting clearly and directly with the designated guideline or standard in a way that identifies potential areas for improvement.

Since the overall purpose of audits is to facilitate improvement, when an audit identifies practices that do not meet the designated standard, the entire team should again be involved in formulating a plan for implementing changes to improve performance. Once these are well established, a repeat of the audit cycle can be used to assess whether the changes have had the desired effect.

Though audits are not intended to be a form of controlled clinical research, they do have the potential to produce data useful to those outside the practice in which they are conducted. The results are specific to the environment in which the audit was conducted and so cannot necessarily be generalized to other settings. However, in the evidence-poor environment of veterinary medicine, clinical audit can at least suggest hypotheses about best practices or the impact of clinical practice guidelines and provide at least rough estimates of the frequency of some outcomes in various settings. Therefore, sharing the results of clinical audits is to be encouraged. This also has the added benefit of making clients aware of the active quality improvement processes in place, helping to build confidence in the practice.

BARRIERS TO CLINICAL AUDIT
There are a number of barriers to conducting clinical audits. The biggest in the U.S. is the lack of awareness of the audit process. Clinical audit is well-established in the human healthcare system in various forms. And in the U.K., veterinary practices are required to have some formal mechanism for assessing outcomes and improving performance, which can be satisfied by a clinical audit mechanism. However, there is not widespread awareness of the value or methods of clinical audit among veterinarians in the U.S.

Other practical barriers include time and other resources, resistance to change from team members, and concerns about the impact of identifying errors or undesirable outcomes on employee performance evaluations or client confidence. All of these are limitations that have been successfully addressed in the implementation of clinical audit in human medicine and that can be overcome in the use of this valuable tool in veterinary medicine.

RESOURCES

The Royal College of Veterinary Surgeons (RCVS) Knowledge group has published a useful Clinical Audit Toolkit that is freely available online. The references below also include several useful guides to the implementation of clinical audit in veterinary practice.2,4,6

REFERENCES

  1. Cockroft P, Holmes M. Handbook of evidence-based veterinary medicine. Oxford, England: Blackwell Publishing, 2003
  2. Viner B. Introducing clinical audit into veterinary practice. PhD dissertation, Middlesex University, London, England. 2006.
  3. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006;(2):CD000259.
  4. Dunn, J. Clinical audit: A tool in defense of clinical standards. In Practice. 2012;34:167-169.
  5. International Renal Interest Society (IRIS). IRIS staging of CKD (modified 2013). Available at: http://www.iris-kidney.com/guidelines/staging.aspx. Downloaded October 8, 2015.
  6. Viner, B. Using audit to improve clinical effectiveness. In Practice. 2009;31:240-243.

Slides



WVC 2016: What You Know that Ain’t Necessarily So

Here is my presentation on the evidence for a few common veterinary practices.

INTRODUCTION
Experienced clinicians often have an enormous knowledge base about the health problems their patients present with and the available diagnostic and therapeutic options. This knowledge is built over time from a variety of sources: basic pathophysiology and clinical information learned in school; practice tips and pearls imparted by professors and speakers at continuing education meetings; review articles and primary research papers in veterinary journals; textbooks; advice from mentors and colleagues in practice; and of course clinical experience with previous cases.

This knowledge base smoothly and efficiently informs the day-to-day activities of clinical practice. When cases with familiar features are seen, the appropriate diagnostic and treatment steps often come to mind automatically, or with minimal prodding. Unlike students and new graduates, experienced clinicians often have little sense of dredging up facts committed to memory and more of a sense of simply knowing things. One of the hallmarks of expertise is that the collating of observations and relevant knowledge into a coherent picture of the problem and a plan becomes less deliberate and more automatic with time.1

While this process, which is a universal and automatic feature of how the human brain functions, leads to greater efficiency than the explicit, conscious use of algorithms and reference sources employed by less experienced practitioners, it has a number of potential limitations. One problem, for example, is that the knowledge one relies on often can no longer be connected to its original source. We often simply know things without being aware of how we came to know them. This limits our ability to judge the reliability of the source of our knowledge. In fact, such established, automatic knowing often generates a sense of certainty greater than that which accompanies deliberately seeking and finding information.2 We are more likely to trust what we already know, even if we don’t remember where we learned it, than we are to trust what we have just discovered after searching a trustworthy source of information.

There are also a large number of well-characterized cognitive biases and sources of errors inherent in how our brains acquire, process, store, and utilize information that can lead us astray.3 These are more likely to create error when our reasoning is automatic rather than deliberate, as it necessarily must be in an efficient clinical environment that is not devoted primarily to teaching.

One of the major functions of evidence-based veterinary medicine (EBVM) is to provide tools and resources to make the knowledge base we employ more reliable. This includes generating better quality information through research and facilitating the integration of that information into clinical decision making. When the relevant evidence is of high quality, this can add confidence to our decisions.

More commonly, when the evidence has significant limitations, we may end up with less confidence in our knowledge than we would have without an explicit evaluation of the evidence. However, this is not as undesirable an outcome as it may appear. A clear, accurate understanding of the uncertainty associated with a particular practice protects us, and our patients, from the dangers of acting with unjustified confidence. We are more likely to weigh thoughtfully the risks and benefits of action in the context of an individual case when we understand the degree of uncertainty about our ability to predict or manipulate the patient’s condition.

Being clear about the sources of our knowledge, and the appropriate level of confidence to have in them, also aids in fulfilling our duty to provide clients with informed consent. Surveys of veterinary clients have shown that they value truthfulness highly in the information we provide to them, and that they want to be made aware of the uncertainties involved in the treatment of their animals.4-5 Only if we understand the reliability and limitations of the information we employ in making our recommendations can we give clients the knowledge and guidance they need to make informed choices.

The purpose of these lectures is to examine some widespread or long-standing beliefs and practices in small animal medicine and assess their evidentiary foundations. In some cases, this may clearly validate or invalidate these beliefs. In most cases, however, such an exploration will likely not lead to greater certainty but to a clearer understanding of the degree of uncertainty associated with these beliefs. Hopefully, this will be useful in making clinical decisions and in communicating with clients. The exercise may also be useful in illustrating how to make use of the research literature in establishing and maintaining the knowledge base that informs one’s clinical practice.

Some of the topics that will be covered include:

  1. Pheromone therapy for behavioral problems in dogs and cats
  2. Anti-histamines for treatment of atopic dermatitis in dogs
  3. Steroids for anaphylaxis and acute allergic reactions

REFERENCES

  1. Benner, P. From novice to expert. Amer J Nursing, 1982; 82(3):402-7.
  2. Burton, R. On Being Certain: Believing You’re Right Even When You’re Not. New York: St. Martin’s Press. 2008
  3. McKenzie, BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. J Amer Vet Med Assoc. 2014;244(3):271-276.
  4. Mellanby RJ, Crisp J, De Palma G, et al. Perceptions of veterinarians and clients to expressions of clinical uncertainty. J Small Anim Pract 2007;48:26–31.
  5. Stoewen DL, et al. Qualitative study of the information expectations of clients accessing oncology care at a tertiary referral center for dogs with life-limiting cancer. J Am Vet Med Assoc. 2014;245(7):773-83.

 

Slides



WVC 2016: The Laser Craze

Here are my notes and slides from my presentation on low-level lasers.

WHAT IS LASER THERAPY?
Laser therapy is, at its simplest, the application of light to living organisms to obtain health benefits. However, there is a bewildering amount of detail behind this simple idea. The wavelength and power of the laser used, the location and duration of exposure, the number of treatments, the conditions for which treatment might be useful, and many other variables are subject to extensive debate. Generally, low-level or “cold” lasers (a misnomer, since many do generate heat during use) utilize wavelengths of 600-1000 nm and power levels of 5-500 mW. More powerful lasers are used in surgery, but these function primarily to cut or cauterize tissue, break up uroliths, or otherwise cause controlled damage. Low-level lasers are intended to have biological effects on tissue, known as photobiomodulation, without causing damage.

The FDA classifies lasers, from Class 1 to Class 4, based primarily on their potential to harm the user or the patient. Low-level laser therapy typically involves Class 3 lasers, though more powerful Class 4 devices are sometimes used for non-surgical therapy.

WHAT ARE THE POSSIBLE USES FOR LASER THERAPY?
The most common recommended uses of low-level laser therapy are to facilitate wound healing, reduce inflammation, and improve musculoskeletal pain or disease. However, proponents of laser therapy, and companies selling therapeutic lasers, often claim or suggest that low-level lasers can treat nearly any medical condition. Lasers have been promoted for use in specific clinical problems (allergic skin disease, gingivitis, bacterial and viral infections, envenomation, etc.), vaguely defined general health improvement (enhancing immune function, normalizing metabolic function, “energizing” cells, etc.), and unscientific nonsense (fixing “Qi-stagnation”1). Some practitioners recommend using laser as a means of stimulating acupuncture points, which adds the question of the efficacy of acupuncture and the selection of such points to the original question of the potential utility of laser therapy itself.

Sorting through the claims made for lasers, from the reasonable to the ridiculous, is challenging due to the heterogeneity of lasers and therapeutic approaches employed and the complexity and inconsistency of the available research on medical lasers.

WHAT IS THE EVIDENCE FOR LASER THERAPY?
The principles of Evidence-based Veterinary Medicine (EBVM) can help us sort through the evidence concerning low-level laser therapy and try to identify the strengths and limitations of the evidence for specific potential uses. Though there are various ways to organize our evaluation of the existing evidence, it is generally agreed that some sort of hierarchy of evidence is appropriate, with the most reliable types of evidence at the apex and the most available, but less reliable, evidence at the base. Figure 1 illustrates one way of visualizing such a hierarchy of evidence.


Figure 1. Hierarchy of evidence. (CAT-critically-appraised topic, RCT-randomized clinical trial, CE- continuing education)

Within the levels of this hierarchy, there are multiple types of evidence which themselves have different levels of reliability. Randomized clinical trials, for example, provide better evidence than case reports, and studies in the species one intends to treat are more likely to predict outcomes than studies in another species. However, this scheme provides a convenient overview of one useful way to think about the types of evidence available to guide decisions about the use of lasers.
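Because the pyramid figure is not reproduced here, the short sketch below lists one plausible ordering, highest reliability first, consistent with the levels discussed in the paragraphs that follow; the exact placement of some categories varies between published hierarchies.

```python
# A rough hierarchy of evidence, highest reliability first, following the
# order discussed in the text. Placement of some levels varies by author.
EVIDENCE_HIERARCHY = [
    "Systematic reviews and evidence-based guidelines",
    "Controlled clinical trials of natural disease in the target species",
    "Experimental studies in veterinary species and lab animal models",
    "In vitro research",
    "Anecdote, clinical experience, and expert opinion",
]

def reliability_rank(evidence_type: str) -> int:
    """Lower rank = more reliable; raises ValueError if the type is unknown."""
    return EVIDENCE_HIERARCHY.index(evidence_type) + 1

for level in EVIDENCE_HIERARCHY:
    print(f"{reliability_rank(level)}. {level}")
```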

Systematic reviews of multiple clinical trials, with detailed analysis of the limitations of each trial and an overall assessment of the quality of evidence, are the most trustworthy source of evidence on any clinical intervention. Unfortunately, such reviews are often not available in veterinary medicine, and there are none for low-level laser therapy.

Controlled clinical trials of naturally occurring disease in the target species are the next most reliable form of evidence, and there appear to be no such controlled trials of laser therapy available for veterinary patients. Some less rigorous studies in clinical patients do exist, though they have significant limitations.

For example, a pilot study adding laser therapy to standard treatment for dogs with acute intervertebral disk disease suggested laser treatment might have shortened time to ambulation after surgery.2 However, the absence of randomization, blinding, and placebo control limits the strength of this conclusion, and another similar study did not report a clinical benefit.3

There have been many experimental studies of laser therapy for a variety of conditions in veterinary species; however, the methodological quality is variable and the results are mixed. Some studies of wound healing, for example, show possible benefits4 while many others do not.5-9 Small studies looking at lasers for skin disease have also found mixed results (some beneficial effect on non-inflammatory alopecia10, no apparent effect on atopic pedal pruritus11), though again there are significant methodological limitations in these studies.

Two systematic reviews of lab animal studies are available. One found that there were some potentially beneficial effects in bone healing models, though there were few studies to review.12 The other reviewed in vitro and animal model studies relevant to wound healing and concluded rather strongly that, “these studies failed to show unequivocal evidence to substantiate the decision for trials with [low-level laser therapy] in a large number of patients…We conclude that this type of phototherapy should not be considered a valuable (adjuvant) treatment for this selected, generally therapy-refractory condition in humans.”13

The evidence, as usual, is much more voluminous for use of lasers in human medicine. There are literally hundreds of systematic reviews available for specific conditions, often with several different reviews of the same set of studies for particular indications. There is great inconsistency in the results. Most reviews conclude that the evidence is not strong enough to support definitive statements about efficacy. Some reviews do show some positive results, with weak to moderate evidence supporting a benefit for particular conditions, though in some cases other reviews of the same evidence reach different conclusions.

There is an enormous body of in vitro evidence showing effects of laser light on various tissues. It is clear that laser has significant biological effects in such models, and some of the effects seen could potentially have clinical benefits in living patients.

Finally, there are, of course, innumerable anecdotes regarding the effects of laser therapy, and some laser advocates are absolutely convinced by their own experiences that low-level laser is a powerful therapeutic tool. However, such anecdotal evidence can be found for nearly every intervention available, and equally strong anecdotal support by thousands of people over centuries has existed for many therapies that scientific research has shown conclusively to be ineffective, such as bloodletting and homeopathy. Therefore, the value of such evidence is limited to suggesting hypotheses for further research, and it cannot validate any claims about laser therapy.

IS LASER THERAPY SAFE?
Most experimental and clinical studies of low-level laser therapy have found few adverse effects. Inappropriate use of higher-power lasers or excessive duration of treatment can result in heating of tissues and thermal damage in some cases. There are also potential risks to operators of laser equipment. And in the absence of research evaluating the long-term effects of ongoing laser treatment, the potential for some of the many biological effects of light on cellular metabolism to result in harm is unknown.

Safety guidelines, from government agencies, manufacturers, and the medical literature, are available and should be scrupulously followed.

BOTTOM LINE
Lasers have significant measurable effects on living tissues in laboratory experiments, so it is plausible that they might have clinical benefits. The extensive research done in humans, however, has so far only found limited evidence to support the use of lasers in a few conditions, and high-quality controlled studies often contradict the positive findings of initial, small and poorly controlled trials.

The experimental evidence in veterinary species is mixed and low quality, and there are no high-quality published clinical trials validating laser therapy for specific indications. It is possible that high-quality research may one day validate some of the claimed benefits for laser therapy. However, at present the best that can be said about this intervention is that it appears promising for some conditions, such as wounds and musculoskeletal pain.

The growing popularity of lasers is based largely on anecdotal evidence and economic factors. Laser units are being aggressively marketed to veterinarians, often using unsubstantiated claims of clinical benefits. Laser therapy represents a potential source of income for practitioners and, of course, for laser device manufacturers. It appears likely that this profit potential contributes to an enthusiasm for laser therapy not matched by the quality of scientific evidence for its benefits to patients.

Veterinary therapies often lack robust high-quality clinical trial evidence to support their use, and this is not itself a reason to avoid these therapies. However, when employing interventions that have not yet been rigorously demonstrated to be safe and effective, we have a duty to acknowledge the limitations of the evidence. Clients should be fully informed about the uncertainties concerning the effectiveness of laser therapy and the potential for unforeseen effects. Established therapies with stronger evidence identifying their risks and benefits should take precedence over promising but unproven therapies like laser treatment. And those interested in promoting low-level laser, particularly those marketing laser equipment and training, should proportion their claims to the available evidence and assume some responsibility for developing the evidence base further so that practitioners and animal owners can make better-informed decisions about this practice.

REFERENCES

  1. Petermann U. Pulse laser as ATP-generator: the use of low level laser-therapy in alleviating Qi-shortcomings. Zeitschrift für Ganzheitliche Tiermedizin. 2012;26(1):8-14.
  2. Draper WE, Schubert TA, Clemmons RM, Miles SA. Low-level laser therapy reduces time to ambulation in dogs after hemilaminectomy: a preliminary study. J Small Anim Pract. 2012;53(8):465–469.
  3. Williams, C. Barone, G. Is Low Level Laser Therapy an Effective Adjunctive Treatment to Hemilaminectomy in Dogs with Acute Onset Parapleglia Secondary to Intervertebral Disc Disease? Proceedings, American College of Veterinary Internal Medicine Forum, Denver, CO. June, 2010.
  4. Singh M, Bhargava MK, Sahi A, Jawre S, Singh R, Chandrapuria VP, Kocchar G. Efficacy of low level LASER therapy on wound healing in dogs. Indian Journal of Veterinary Surgery. 2011;32(2):103-106.
  5. Grayson LC; Cassie NL; Juergen P. et al. Effect of laser treatment on first-intention incisional wound healing in ball pythons (Python regius). Am J Vet Res. October 2015;76(10):904-12.
  6. Kurach LM, Stanley BJ, Gazzola KM, et al. The Effect of Low-Level Laser Therapy on the Healing of Open Wounds in Dogs. Vet Surg. 2015 Oct 8. doi: 10.1111/vsu.12407. [Epub ahead of print]
  7. Madhya Pradesh Pashu Chikitsa Vishwavidyalaya, Jabalpur, MP, India. Low level laser therapy for the healing of contaminated wounds in dogs: histopathological changes. Indian Journal of Veterinary Surgery. 2013;34(1):57-58.
  8. In de Braekt MM, van Alphen FA, Kuijpers-Jagtman AM, et al. Effect of low level laser therapy on wound healing after palatal surgery in beagle dogs. Lasers Surg Med. 1991;11(5):462-70.
  9. Petersen SL, Botes C, Olivier A, et al. The effect of low level laser therapy (LLLT) on wound healing in horses. Equine Vet J. 1999 May;31(3):228-31.
  10. Olivieri L, Cavina D, Radicchi G, et al. Efficacy of low-level laser therapy on hair regrowth in dogs with noninflammatory alopecia: a pilot study. Veterinary Dermatology. 2015;26(1):35-e11.
  11. Stich AN, Rosenkrantz WS, Griffin CE. Clinical efficacy of low-level laser therapy on localized canine atopic dermatitis severity score and localized pruritic visual analog score in pedal pruritus due to canine atopic dermatitis. Veterinary Dermatology. 2014;25(5):464-e74.
  12. Tajali, SB. MacDermid, JC. Houghton, P. et al. Effects of low power laser irradiation on bone healing in animals: a meta-analysis. Journal of Orthopaedic Surgery and Research 2010, 5:1
  13. Lucas, C. Criens-Poublon, LJ. Cockrell, CT. et al. Wound healing in cell studies and animal model experiments by Low Level Laser Therapy; were clinical studies justified? a systematic review. Lasers Med Sci. 2002;17(2):110-34.

Slides



WVC 2016: Overdiagnosis and Overtreatment

Here are the notes and slides from my presentation on Overdiagnosis.

WHAT IS OVERDIAGNOSIS?

A 5-year-old Labrador retriever presents for an acute cranial cruciate ligament rupture. Otherwise, the dog is healthy in every way with no clinical symptoms other than lameness. However, pre-operative bloodwork shows moderate elevations in ALT, and an ultrasound exam shows some indistinct, mildly hypoechoic nodular lesions in the liver. An ultrasound-guided needle biopsy is performed which ultimately shows benign nodular hyperplasia. Unfortunately, the dog dies of complications from the biopsy procedure. What this dog really died from, however, was overdiagnosis.

Veterinarians, especially early in their careers, are often fearful of misdiagnosis: incorrectly identifying disorders in their patients or diagnosing diseases the patients do not actually have. However, few worry about the dangers of overdiagnosis: the correct diagnosis and treatment of disorders patients do have but which will never cause clinical symptoms or mortality.

Overdiagnosis is now recognized as a common and serious problem in human medicine which causes significant harm in terms of cost and suffering for patients and their caregivers. There are annual international conferences on preventing overdiagnosis, and a consortium of seventy specialty groups has created the online resource Choosing Wisely (www.choosingwisely.org) to help physicians and patients make better decisions and reduce overdiagnosis and overtreatment. Changes in clinical practice guidelines for many conditions, including highly publicized changes in breast cancer and prostate cancer screening programs, have resulted from the recognition that overdiagnosis harms patients. Yet the subject of overdiagnosis is virtually unknown in veterinary medicine.

HOW COMMON IS OVERDIAGNOSIS?

A challenge in controlling overdiagnosis is that there is no absolute way to know in advance if a finding in a particular patient is going to prove clinically important or not. This can only be known after sufficient time has passed to evaluate whether or not the finding has resulted in disease or death. However, it is possible to evaluate the frequency of overdiagnosis in a population based on the frequency of diagnosis and mortality data for specific conditions and populations. As an example, CT imaging of clinically healthy people frequently leads to diagnosis of cancers which, based on mortality figures, are never going to lead to early mortality (Table 1).

 

| Organ | % with lesion detected by CT (a) | 10-yr risk of cancer mortality, % (b) | Chance lesion is lethal cancer, % (c = b/a) | Chance lesion is not lethal cancer, % (d = 1 - c) |
|---|---|---|---|---|
| Lung (smokers) | 50 | 1.8 | 3.6 | 96.4 |
| Lung (never smoked) | 15 | 0.1 | 0.7 | 99.3 |
| Kidney | 23 | 0.05 | 0.2 | 99.8 |
| Liver | 15 | 0.08 | 0.5 | 99.5 |
| Thyroid (US) | 67 | 0.005 | <0.01 | >99.99 |

Table 1. Detection and risk of mortality from cancer using CT imaging in asymptomatic humans. (From Welch, 20121)
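The arithmetic behind columns (c) and (d) is straightforward; the sketch below simply reproduces it for the figures quoted in Table 1.

```python
# Reproducing the arithmetic in Table 1: given the percentage of people
# with a lesion detected on CT (a) and the 10-year risk of cancer death (b),
# the chance a detected lesion is a lethal cancer is c = b / a, and the
# chance it is not is d = 1 - c. Values are the percentages quoted above.
rows = {
    "Lung (smokers)":      (50.0, 1.8),
    "Lung (never smoked)": (15.0, 0.1),
    "Kidney":              (23.0, 0.05),
    "Liver":               (15.0, 0.08),
    "Thyroid (US)":        (67.0, 0.005),
}

for organ, (pct_with_lesion, pct_10yr_mortality) in rows.items():
    chance_lethal = pct_10yr_mortality / pct_with_lesion * 100  # as a percentage
    chance_not_lethal = 100 - chance_lethal
    print(f"{organ}: lethal {chance_lethal:.2f}%, not lethal {chance_not_lethal:.2f}%")
```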

Similar data is available for many other diseases suggesting that overdiagnosis is extremely common in human medicine. There is no published data specifically addressing the frequency of overdiagnosis in veterinary medicine.

WHAT IS THE HARM OF OVERDIAGNOSIS?

The diagnosis of diseases that are unlikely to ever cause significant illness or mortality causes harm in several ways. The testing leading to diagnosis, and the treatment often offered once such a diagnosis is made, have financial costs. It is estimated, for example, that overdiagnosis and overtreatment of clinically irrelevant lesions detected through mammography costs $4 billion annually in the United States alone.2 Another study suggests that unnecessary treatment of people with mild hypertension in the U.S., with no benefit in terms of reducing symptoms or early mortality, may cost $32 billion annually.3

These are only estimates for the costs of overdiagnosis for two diseases in one country, so the global financial cost is undoubtedly much greater. And despite the impulse to feel that no price should be put on efforts to improve health and treat disease, it is undeniable that such waste raises the overall costs of healthcare and reduces access for some people, all without any benefit to patients.

There are no estimates of the costs of overdiagnosis in veterinary medicine. The economic model of the veterinary profession is quite different from human medicine, and the financial costs of overdiagnosis may not impact the overall cost of veterinary care or the availability of care as dramatically. However, these costs are still a waste of client resources, and they can reduce the ability of some clients to pay for subsequent care that may be necessary or beneficial for their animals.

The financial costs of overdiagnosis, however, have not been the major driver of change in clinical practice in human medicine; the physical and emotional harm to patients has. Diagnostic testing and treatment for conditions not destined to cause illness or death can cause both physical injury and psychological distress. It has been estimated, for example, that prostate-specific antigen (PSA) screening for prostate cancer in men will identify cancer in 30-100 patients who would never have been clinically affected for every death such testing prevents.1 And among these men who are overdiagnosed and go on to have biopsies or treatment, up to 50% will experience sexual dysfunction, 30% will have urination difficulties, and 1-2 per thousand screened will die as a result of unnecessary treatment. Research has also shown that quality of life diminishes after a diagnosis of prostate cancer, and that the risk of suicide and cardiovascular death increases immediately following such a diagnosis.4-5

Similar evidence of physical and psychological harm from overdiagnosis is available for many other conditions in human medicine. There has been, however, no published research on the risks of overdiagnosis in the veterinary field.

WHAT CAUSES OVERDIAGNOSIS?

Overdiagnosis is driven by numerous factors. Screening tests, imaging, and other diagnostics employed without specific clinical justification frequently lead to the detection of abnormalities. Such abnormalities are far less likely to be clinically important than those which cause symptoms, and therefore they often represent overdiagnosis. However, once an abnormality is detected, some of the psychological harm to patients or caregivers has already occurred. And because of the anxiety induced by the finding and the desire of both clients and doctors to take some action, even when it is unclear this will benefit the patient (a phenomenon known as “commission bias”6), further testing and even therapy often results from an initial overdiagnosis.

Overdiagnosis also stems from the development of more sensitive tests, which identify conditions earlier, before the onset of clinical symptoms. Expanded definitions of disorders, which encompass patients previously not considered to have these disorders, can also lead to overdiagnosis and overtreatment. Psychologically, doctors are prone to overdiagnosis because they are likely to be punished, in the form of blame or even litigation, for failing to diagnose a medical condition that eventually leads to symptoms or death. However, doctors are almost never punished for unnecessarily diagnosing and treating conditions which would never have caused any harm if left undiagnosed.

The inappropriate reliance on anecdotal evidence and clinical experience to guide diagnostic practices also contributes to overdiagnosis. Even in the face of strong evidence that mammography in women under 50 years of age leads to significantly more harm from overdiagnosis than benefit from earlier diagnosis and treatment, for example, there has been significant resistance to the changes in screening guidelines implemented to reduce this harm. Much of that resistance is justified with anecdotes from women who were diagnosed with and treated for breast cancer and who believe, along with their doctors, rightly or wrongly, that this intervention saved their lives.

Any doctor who believes their use of screening or other diagnostic interventions in asymptomatic patients has saved someone’s life will be very reluctant to stop using those interventions, regardless of the evidence that they do more harm than good. Individual stories are always more psychologically compelling than statistical data. But acting on our emotional response to such stories and ignoring the evidence regarding overdiagnosis ultimately causes more unnecessary suffering for real patients.

Patients are similarly inclined to seek diagnosis and take action on it even if the statistical evidence suggests it is in their best interests not to. Doing so gives people a sense of control over their fate, or that of their animals. Even if this sense of control is an illusion, it tends to outweigh rational considerations. One survey found that 98% of people mistakenly diagnosed with cancer through screening were still glad they had had the test once the follow-up evaluation showed they actually did not have cancer.7 Like doctors, patients are inclined to take action rather than choosing inaction, even when inaction is demonstrably the better choice.

Finally, we cannot ignore the potential influence of financial interests on overdiagnosis. Companies selling diagnostic tools and veterinarians using them receive income from the use of these tools. And the follow-up testing and treatment of diseases, even when they are overdiagnoses, also generates revenue. While doctors are unlikely to intentionally pursue unnecessary testing and treatment purely for financial gain, it would be naive to imagine such revenue has no impact on doctors’ decision making. Federal law prohibits doctors from referring patients to diagnostic facilities in which they have a financial interest because research has shown such an interest increases the number of tests done and the costs to patients.8-9 There is no reason to believe veterinarians to be exempt from the same potential for financial self-interest to influence clinical decisions.

HOW DO WE REDUCE OVERDIAGNOSIS & OVERTREATMENT?

The first step in reducing the harms from overdiagnosis is to understand the phenomenon and its causes. This includes developing the data to identify overdiagnosis of specific conditions. Because overdiagnosis can only be identified in retrospect in individual patients, we must gather and analyze epidemiologic data to recognize the level of risk for overdiagnosis of particular diseases. We cannot safely rely on anecdotes and uncontrolled clinical experience alone to drive our diagnostic and therapeutic practices. We need data.

In the meantime, in the absence of such data, the best strategy is to understand the limitations of our diagnostic tests, including important measures such as their positive and negative predictive value, which help us to appreciate the likely significance and reliability of test results in particular patient populations.  We should also ensure that we have an appropriate clinical index of suspicion for any condition before we begin testing for it. “Fishing expeditions,” “shotgun diagnostics,” indiscriminate imaging, and other such irrational diagnostic practices raise the risk of overdiagnosis.
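To make the value of these measures concrete, here is a minimal sketch of how positive and negative predictive values follow from a test’s sensitivity and specificity together with the pre-test prevalence of disease. The specific figures (90% sensitivity, 95% specificity, and the two prevalence values) are purely hypothetical, chosen only to illustrate the point.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied to a population with the given
    pre-test prevalence of disease. All inputs are proportions (0-1)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical test: 90% sensitive, 95% specific.
# In a symptomatic population where half the patients have the disease,
# a positive result is very likely to be a true positive...
print(predictive_values(0.90, 0.95, prevalence=0.50))  # PPV ~0.95, NPV ~0.90
# ...but in an asymptomatic screening population where only 1% are diseased,
# most positive results are false positives.
print(predictive_values(0.90, 0.95, prevalence=0.01))  # PPV ~0.15, NPV ~0.999
```

The same test that is quite trustworthy in a population where disease is common produces mostly false positives when used to screen a population where disease is rare, and each of those positives invites further testing and treatment of questionable benefit.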

We must also learn to accept the inevitable uncertainty in medicine and be honest with clients about our ability to predict and control all patient outcomes. We need to recognize and disclose that testing and treatment have costs and risks as well as benefits, especially in patients without significant clinical symptoms associated with the disorders we are trying to diagnose and treat. Though it is psychologically more difficult for us, it is often wiser to avoid action when there is not good evidence to show that our actions will truly benefit our patients. Don’t just do something, stand there!

REFERENCES

  1. Welch HG, Schwartz L, Woloshin S. Overdiagnosed: Making People Sick in the Pursuit of Health. Boston: Beacon Press; 2012.
  2. Ong M, Mandl KD. National expenditure for false-positive mammograms and breast cancer overdiagnoses estimated at $4 billion a year. Health Aff. 2015;34(4):576-583.
  3. Martin SA, Boucher M, Wright JM, et al. Mild hypertension in people at low risk. BMJ. 2014;349:g5432.
  4. Heijnsdijk EA, Wever EM, Auvinen A, et al. Quality-of-life effects of prostate-specific antigen screening. N Engl J Med. 2012;367:595.
  5. Fang F, et al. Immediate risk of suicide and cardiovascular death after a prostate cancer diagnosis: cohort study in the United States. JNCI. 2010;102:307.
  6. McKenzie BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. JAVMA. 2014;44(3):271-276.
  7. Schwartz LM, Woloshin S, Fowler FJ, et al. Enthusiasm for cancer screening in the United States. JAMA. 2004;291(1):71-78.
  8. Levin DC, Rao VM. Turf wars in radiology: updated evidence on the relationship between self-referral and the overutilization of imaging. J Am Coll Radiol. 2008;5(7):806-810.
  9. Gazelle GS, Halpern EF, Ryan HS, Tramontano AC. Utilization of diagnostic medical imaging: comparison of radiologist referral versus same-specialty referral. Radiology. 2007;245(2):517-522.

 


Marijuana and Cannabis-Based Products for Pets: Any News?

In 2013, I wrote about a particular medical marijuana product marketed for veterinary use, Canna-pet, as an illustration of the uncertainties and issues surrounding the potential medical use of cannabis-derived products. At that time, my conclusion was that 1) there is enough pre-clinical evidence to suggest cannabinoids of various types have physiologic effects that could prove beneficial, 2) there is limited evidence for some clinical use in humans, 3) but overall the evidence in humans is weak, 4) and in veterinary species it is non-existent.

Sadly, the state of the evidence hasn’t changed in the intervening couple of years, but the marketing of such products to pet owners and veterinarians has continued to grow. The lack of meaningful regulation of dietary supplements allows the sale of unproven remedies so long as the benefits are only implied and not directly stated. This loophole has created a wonderful opportunity for companies to profit from products that might or might not work and might or might not be safe. This does not strike me as serving the best interests of patients. The money and energy put into marketing these products could be better used to fund research to identify the true risks and benefits.

The only veterinary “research” that has emerged recently is the kind that I have discussed many times before, research that is intended to sell an idea or product rather than to find out the truth about it. This is the kind of research most preferred by companies selling such products and by alternative medicine advocates such as the AHVMA, and both are involved in this particular study.

Consumers’ perceptions of hemp products for animals. JAHVMA. Spring, 2016, vol. 2

The full details of the study are not available except to subscribers, but the results are summarized on the AHVMA web site, and this summary has been widely distributed by Canna-Pet. The study consisted of an online survey “provided by Canna-Pet to their customers.” Obviously, this represents clear selection bias, since those responding to the survey are going to be those already buying a hemp product for use in their pets because they expect or hope it will help. Anyone who doesn’t have a pre-existing bias in favor of using such a product, or who has used it and had negative experiences, is not going to be a customer and so is not going to participate in this survey.

Of the 632 respondents, about half felt the product had helped their pets with pain, sleep, or anxiety, and, in cats, with inflammation. No data are provided on the conditions for which owners felt it was not helpful, nor any other relevant information about the animals, their conditions, other treatments, and so on.

About 15-20% of owners reported undesirable effects, such as sedation or excessive appetite.

There is no way to evaluate this survey without more information, other than to say that it appears to contain no controls whatsoever for bias. As such, it is likely to be as unreliable as most online testimonials. And it is actually a bit surprising that even with a survey that doesn’t control for bias, the best the company could say about the results was that only about half of the users of the product felt it was beneficial. Not a powerful endorsement given the exclusion of likely sources of negative feedback.

In any case, such a survey at most represents attitudes towards cannabis products and says nothing about whether or not they actually work. However, the media and advocates for veterinary use of cannabis are certainly spinning it as at least implying such products are effective or worth a try.

As I mentioned in my previous post, the good news about the diminishing stigma associated with marijuana is the possibility of real research into chemical compounds that will likely prove to have medical benefits. The bad news is that there is some evidence legalization has led to an increase in marijuana poisoning for dogs, and it has provided more opportunities for companies to get into the business of selling the potential benefits of cannabis-based products before doing the necessary work of proving these benefits exist and that they are worth the risks.

One example of this is a new company vying with Canna-Pet for this potentially lucrative market, Canna Companion. This company is a little more circumspect in their claims for the medical effects of their product, but they still aggressively promote it as a “holistic” therapy, playing on the mythology that things labeled “holistic” or “natural” can be assumed to be safe based on pre-clinical evidence or anecdote alone and don’t require the rigorous clinical testing conventional therapies are expected to undergo.

The company web site, like that of Canna-Pet, doesn’t discuss any clinical studies in companion animals, since there aren’t any. It simply points to the basic science research that shows the potential for benefits from cannabis-based products. Such research often fails to live up to its promise when specific products are tested in real-world patients, but this never seems to be a concern for companies marketing untested products. The company certainly has some legitimate scientists working for it, and its Chief Clinical Epidemiologist is a well-qualified public health researcher at the NC State Veterinary College. Such an individual should be well-suited to organizing rigorous scientific evaluation of cannabis-based products. I was, therefore, quite disappointed by the very unscientific way his experience and credentials are used to promote the product:

Professor Peter Cowen of North Carolina State University’s College of Veterinary Medicine, and now the Company’s Chief Clinical Epidemiologist and an Advisory Board member, commented: “Based on my own experience with my dog, Londun, there is something of extreme value here. I am impressed not only from a therapeutic perspective, but also from a psychological perspective.” Professor Cowen is orchestrating a clinical study at NCSU for the coming year and the Company is looking forward to presenting those findings to the veterinary community and the public at large.

An anecdote from an epidemiologist is worth no more than an anecdote from anyone else, and the company certainly makes it sound as if the results of the study they are planning are a foregone conclusion. The risk of bias here is, obviously, quite high, and I wonder how eager Canna Companion will be to promote the results if they turn out not to support the product, unlikely as that is. Only time will tell.

That might also be the most appropriate conclusion to the question of whether or not cannabis-based products are useful for veterinary patients. At the moment, there is no reliable evidence, so only time will tell. Hopefully, the pursuit of profits before science won’t lead to too many animals being exposed to useless or even harmful substances before we have the data we need to know which cannabis-based products might be useful for which problems.


Myofascial Trigger Points: Real or Imaginary?

One of the reasons I chose the acupuncture course I am currently taking is that the instructors are very clear about rejecting the Traditional Chinese Medicine mythology of Qi, Yin and Yang, and all the rest that is often used to justify or explain the potential benefits of needling. The course purports to take a purely scientific approach to understanding and using acupuncture. As I have discussed previously, however, a fair bit of traditional acupuncture practice is accepted as effective based on anecdotal experience and then rationalized post hoc with sometimes questionable anatomic or neurophysiologic explanations. One of the most heavily used of these rationalizations is the myofascial trigger point (MTrP) concept.

Myofascial trigger points are supposed to be focal areas of tension or contraction in muscles which are irritable and contribute to chronic refractory pain. The argument is that these develop in response to local injury, to certain postural or activity patterns, or even to diseases in internal organs or the nervous system. Practitioners who treat such trigger points claim to be able to detect them as knots or taut bands within muscles.  Such trigger points are treated primarily by “releasing” them via some kind of stimulus, such as needling, electrical stimulation, massage, laser therapy, and so on.

The concept of the MTrP is more widely accepted in the conventional medical community than acupuncture generally, though it is primarily utilized by osteopathic physicians, chiropractors, and others who focus on physical manipulative therapies, such as massage therapists and physical therapists. However, the validity of the concept and the effect of needling and other MTrP-releasing therapies are often assumed to be proven in this course and then used as an explanation for some of the proposed effects of acupuncture. This is not surprising, since the course director, Dr. Robinson, is an osteopath as well as a veterinarian.

There certainly is some research evidence to support the concept of the MTrP and the effect of needling as a treatment. But there is also research evidence that appears to support acupuncture, and as we have seen when looking at it carefully and critically, it doesn’t necessarily mean what advocates claim it means. (1, 2) There is clearly controversy about the MTrP concept and the effects of therapies focused on myofascial release, and it is worth bearing this in mind rather than simply accepting the idea as true and then using it to justify claims for acupuncture.

The most recent narrative review challenging this concept was published last year, and the authors make a quite definitive claim about it:

Quintner JL, Bove GM, Cohen ML. A critical evaluation of the trigger point phenomenon. Rheumatology (Oxford). 2015 Mar;54(3):392-9.

We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted.

Their claim rests on several grounds. The first is the problem of consistently identifying trigger points. Several studies have examined inter-observer reliability among experts who treat MTrPs: the experts were asked to examine the same patients and give independent assessments of where trigger points were located. In these studies, the practitioners located trigger points in different places and did not agree with each other to any significant extent unless they were first told what the underlying diagnosis was. This suggests that, without knowing in advance what is wrong with the patient, even experts cannot reliably detect trigger points on physical exam, and that they base their subjective identification of such points primarily on what they expect to find when they already know the diagnosis rather than on what they actually feel during the exam.
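The cited studies report agreement in different ways, but the basic question they ask is whether examiners agree more often than chance alone would produce. Cohen’s kappa is one common way to express this; the sketch below is purely illustrative, with made-up examiner data that are not drawn from any of the studies discussed here.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making the same yes/no judgment
    (e.g., 'trigger point present at this site?') across a series of sites.
    1.0 = perfect agreement; 0 = agreement no better than chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's overall rate of 'yes' calls
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (observed - expected) / (1 - expected)

# Two hypothetical examiners palpating the same 10 muscle sites
# (1 = trigger point identified, 0 = none). They agree on half the sites
# in raw terms, but that is exactly what chance would predict.
examiner_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
examiner_2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 1]
print(cohens_kappa(examiner_1, examiner_2))  # 0.0: no better than chance
```

A kappa near zero, as in this toy example, means the examiners’ raw agreement is no better than would be expected if each were simply calling sites positive at their own overall rate, which is essentially the pattern the reliability studies describe.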

This is a pretty serious problem given that physical examination is supposed to be the main way trigger points are found. It is the same kind of problem that helped demonstrate that the Vertebral Subluxation touted by chiropractors as a major cause of illness was actually imaginary. If such inability to detect trigger points turns out to be a consistent finding, it would strongly suggest that such points don’t exist as objective entities which can be detected by physical examination, which would greatly undermine the idea that they exist at all or are a major source of clinical symptoms.

The authors also review other ways of identifying and characterizing trigger points, including biopsy findings, electromyography, and others, and they conclude that the evidence is mixed and unclear as to whether there is a single, common lesion that can be found on physical exam and associated with clinical disease.

This review also examines the evidence that needling to release trigger points is clinically effective. A number of systematic reviews and clinical trials have been done on this question, and the overall conclusions are that the quality of the research evidence is mixed and often too low to be reliable; that many other therapies are usually used along with trigger point needling, so it is difficult to determine which, if any, is responsible for improvement; and that trigger points are identified in many very different patients with very different underlying diseases, so variation in how these patients do is high, which complicates comparison of studies looking at needling for trigger point release.

Other critics of MTrP theory have made similar criticisms, including some physical medicine practitioners who have shifted from automatic acceptance of the concept to skepticism. I haven’t invested the time in examining the evidence as closely as I have looked at that concerning acupuncture, so I don’t have a strong opinion, but I do have some skepticism about the concept.

In particular, I am concerned by the inherent subjectivity in the detection of trigger points and in the assessment of patient response to therapy. In demonstrating the location and treatment of trigger points, Dr. Robinson rests a lot of weight on readings of patient behavior that could reasonably be interpreted differently. As not only a vet who has practiced for many years but also someone with training in animal behavior, I know how easy it is to project our own expectations onto the behavior of other animals. If I expect to find pain in a certain spot and initially don’t, it is easy to press just a little harder until I get the reaction I expect, often without even realizing I am doing so. The lack of an objective, verifiable way of detecting trigger points and their resolution with needling is, then, a significant problem for this concept in veterinary medicine.

Given the lack of clarity on MTrP theory, it is not very helpful to use this concept as an explanation or guide for acupuncture. It simply shifts the ground from one muddy and poorly demonstrated set of ideas to another. There is no doubt, of course, that people often feel better when given various kinds of manual treatments. I suspect the same is true of many companion animals who have, after all, been intensively selected for generations to accept or even desire human contact. However, we must be cautious in projecting our expectations, beliefs, and theories onto our animal patients without robust objective evidence, since we run the risk of being fooled by the caregiver placebo effect and other phenomena that can leave us believing we have helped them when in reality we have not.

References

Here are a few of the studies discussed in the Quintner review:

Inter-observer reliability in MTrP detection:
Hsieh CY, Hong CZ, Adams AH, et al. Interexaminer reliability of the palpation of trigger points in the trunk and lower limb muscles. Arch Phys Med Rehabil. 2000;81:258-64.

Lew PC, Lewis J, Story I. Inter-therapist reliability in locating latent myofascial trigger points using palpation. Man Ther. 1997;2:87-90.

Myburgh C, Larsen AH, Hartvigsen J. A systematic, critical review of manual palpation for identifying myofascial trigger points: evidence and clinical significance. Arch Phys Med Rehabil. 2008;89:1169-76.

Wolfe F, Simons DG, Fricton J, et al. The fibromyalgia and myofascial pain syndromes: a preliminary study of tender points and trigger points in persons with fibromyalgia, myofascial pain syndrome and no disease. J Rheumatol. 1992;19:944-51.

Clinical effect of needling trigger points:
Annaswamy TM, De Luigi AJ, O’Neill BJ, et al. Emerging concepts in the treatment of myofascial pain: a review of medications, modalities, and needle-based interventions. PM R. 2011;3:940-61.

Cummings TM, White AR. Needling therapies in the management of myofascial trigger point pain: a systematic review. Arch Phys Med Rehabil. 2001;82:986-92.

Ho KY, Tan KH. Botulinum toxin A for myofascial trigger point injection: a qualitative systematic review. Eur J Pain. 2007;11:519-27.

Rickards LD. The effectiveness of non-invasive treatments for active myofascial trigger point pain: a systematic review of the literature. Int J Osteopathic Med. 2006;9:120-36.

Tough EA, White AR, Cummings TM, et al. Acupuncture and dry needling in the management of myofascial trigger point pain: a systematic review and meta-analysis of randomised controlled trials. Eur J Pain. 2009;13:3-10.

 

 
