As a proponent of evidence-based medicine (EBM), I often emphasize that in the absence of good-quality controlled research evidence, we cannot reliably assess the safety or efficacy of medical treatments. And as a skeptic, I am often forced to clarify that skepticism is not the automatic rejection of the unfamiliar but the position that conclusions about whether specific claims are true or false should be based on reliable evidence. Again, in the absence of such evidence, I do not assume claims to be false; I classify them as unproven. A popular aphorism to express this position is, “The absence of evidence is not evidence of absence.”
However, despite a degree of truth in it, this aphorism is frequently and widely misused. Proponents of unproven therapies often use it to suggest that such therapies should be given the benefit of the doubt, assumed to be innocent (that is, safe and effective) until proven guilty (that is, unsafe or ineffective). When I point out on this blog that specific practices have no good evidence to support them (and no, anecdotes and testimonials do not count as “good” evidence), the response from advocates of these practices is often, “You can’t prove X doesn’t work!” That entirely misses the point in several ways.
For one thing, the burden of proof is always on those making a claim, not those asking for evidence to support it. If I say there is an invisible, man-eating dragon in my garage and you doubt me, it is not your job to conclusively prove the dragon doesn’t exist. There is no need for you to take this, or any other claim about the world, seriously until the person making the claim provides evidence for it.
This is especially true for claims that contain within them assumptions that violate well-established facts. Since invisibility and dragons have never been proven to exist, and there is good reason to think they do not, my claim is not only unproven but unlikely. In statistical terms, my hypothesis has a low prior probability.
Prior probability is, in a sense, an exception to the principle that “unproven” does not mean “false.” It is technically true that my guard-dragon is only unproven, not definitively disproven. But in practical terms, the prior probability of its existence is so low that it makes more sense to behave as if it does not exist than to behave as if it does. Would you choose never to enter my garage no matter what just in case the dragon might eat you? Would this be sensible?
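The arithmetic behind a low prior probability can be made concrete with Bayes’ theorem. The sketch below uses invented numbers purely for illustration: even a moderately reliable piece of positive evidence barely moves the posterior probability when the prior is vanishingly small.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    numerator = sensitivity * prior
    evidence = numerator + false_positive_rate * (1 - prior)
    return numerator / evidence

# Invented numbers: a claim with a one-in-a-million prior, "supported" by
# an observation that correctly confirms a true claim 90% of the time but
# falsely confirms a false one 10% of the time.
p = posterior(prior=1e-6, sensitivity=0.9, false_positive_rate=0.1)
print(f"P(claim | positive evidence) = {p:.6f}")  # still far below 1%
```

Swap in your own numbers: only when the prior is not minuscule does a single piece of positive evidence yield a posterior probability worth acting on.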
The same principle applies in medicine. Many therapies, both conventional and alternative, are unproven in the sense that there is not robust research evidence to characterize their safety and efficacy in all possible situations. One misconception about evidence-based medicine is that such an absence or weakness of evidence means we are to refrain from making any decision about these therapies. EBM does not require that we stand idle with our hands in our pockets whenever there is no systematic review or large, high-quality clinical trial evidence available concerning the therapies we are considering using. We must quantify and acknowledge the uncertainty associated with weak evidence, but this is only one part of the job of balancing the need to intervene with the degree of uncertainty about the consequences of our interventions.
Unfortunately, some proponents of EBM do fall a little ways into this trap, often concluding that in the absence of perfect evidence, “no conclusion can be drawn” and “more evidence is needed.” More evidence may often be desirable, but a clinician working with actual patients must always draw a conclusion, however tentative. That is our job, to guide and care for patients as best we can using the evidence we have, not the evidence we wish we had.
On the other hand, proponents of implausible or unproven therapies often make the opposite error, assuming that the absence of evidence frees them to do as they like without taking into account the uncertainty of not having good scientific evidence. While clinicians may often be forced to rely on clinical experience alone, we should never forget how deeply unreliable a guide this is. As I’ve pointed out before, the three most dangerous words in medicine are “In my experience….” The absence of evidence should not reassure us that anecdotes, personal experience, historical or cultural tradition, or any other information with low reliability and high risk of bias can be sufficient to support or recommend a therapy. In medicine, “unproven” may not mean “false,” but it absolutely means “risky!”
Because there is always a chance of doing harm, of making a patient worse when we intervene, it is incumbent on doctors to be wary of interventions with a great deal of uncertainty or a lack of evidence about safety and effectiveness. This is reflected in another popular aphorism in medicine: Primum non nocere (First, do no harm). And just as the benefits of a therapy remain unknown when there is no strong scientific evidence, only anecdote and uncontrolled observation, so the safety is uncertain in the absence of good-quality evidence. There must be an urgent need to act, and very clear disclosure of the uncertainty to clients and patients, before we use a therapy whose true safety or efficacy we cannot know.
The necessity to avoid making things worse is a major reason why we generally avoid using therapies without good evidence for their effects. It is widely accepted that if a pharmaceutical company invents a new drug, they don’t start selling it to patients on the basis that it hasn’t yet been proven not to work! These companies are required to go to great lengths to identify the risks and benefits before we are willing to give new medicines to our patients.
A low prior probability makes such a precautionary approach even more appropriate. Even if a therapy is “unproven,” in the sense of there not being much reliable research evaluating it, if the theories and assumptions behind the therapy contradict established scientific understanding, then the burden of proof is even higher, and the principle of avoiding such therapies in order not to unintentionally do harm is even more appropriate.
As an illustration, here are a few “unproven” practices that, nevertheless, most of us would follow despite the lack of controlled scientific evidence because they have a high prior probability of benefitting us:
- Wearing a parachute when jumping out of an airplane in flight
- Looking both ways before crossing the street
- Taking an ambulance rather than a taxi to the hospital after having been shot in the chest
There are no high-quality clinical trials to show that these practices reduce injury or death, but it is rational to follow them anyway because of high prior probability, because they are based on well-established principles and sound reasoning.
The opposite is also true. Here are a few examples of untested and unproven practices that we would avoid despite the absence of controlled scientific evidence because the prior probability of their efficacy is very low:
- Using one’s Qi or spiritual energy to fly when jumping out of an airplane without a parachute
- Using The Force to detect oncoming cars when crossing the street without looking
- Calling a cab to the hospital and waiting by the curb for it to come after having been shot in the chest
The quantity and reliability of the evidence is, of course, often more complex and in greater dispute in medicine than in examples such as these, but my point is that sometimes the absence of evidence should be taken as a reason to avoid a therapy when it has low prior probability of being safe and effective. This is a rational, common practice that we all follow in many other situations, and it makes sense in medicine as well.
Finally, it is important to remember that absolute, 100% proof is never a product of science. Even the strongest evidence can be undone by the discovery of new facts, by unidentified weaknesses in the research, or by rare events. People do survive falling out of airplanes without parachutes.
Proponents of implausible or unproven therapies often trumpet this concept as a way of defending their practices. Since nothing is absolutely certain in science, and since a few crazy ideas have proven to be true, and a few well-demonstrated claims have turned out to be false, it is tempting to conclude that one can believe anything one likes since no one knows for certain. This is pretty obviously a silly and dangerous conclusion. While people may rarely survive jumping out of airplanes without parachutes, that doesn’t make choosing not to use a parachute a sensible decision.
The corollary of the fact that science never provides absolute, eternal truth is that at some point we have to accept the evidence as “good enough.” The precise point will be fuzzy and subject to debate, but in the real world we have to be able to make decisions based on some reasonable level of probability informed by science. If a therapy has been studied extensively over a significant period of time and no evidence of a benefit has emerged, it is not rational to go on saying that “no conclusion can be drawn” and “more evidence is needed.” At some point, enough is enough.
The failure to produce good evidence of a meaningful benefit despite reasonable effort is itself evidence that there is no such benefit to find. When a therapy has a low prior probability and fails to be validated after a fair effort, this absence of evidence is evidence of absence of any real benefit. With limited time and resources, we cannot afford to forever keep trying to find evidence for implausible claims that have failed multiple attempts at validation, and we harm our patients by wasting our efforts and resources in such endless and almost certainly futile efforts. The fact that we can never say with 100% certainty that a claim is false does not justify never making the pragmatic decision to ignore that claim and move on to more promising hypotheses.
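This point can also be put in Bayesian terms. In the toy sketch below (the numbers are assumed, purely for illustration), a real benefit would make each well-run trial likely to come out positive, so every failed trial is itself evidence that lowers the probability that the benefit exists.

```python
def update_on_failure(prior, p_pos_if_real, p_pos_if_not):
    """One Bayesian update after observing a negative (failed) trial."""
    p_neg_if_real = 1 - p_pos_if_real   # chance a real benefit goes undetected
    p_neg_if_not = 1 - p_pos_if_not     # chance a nonexistent benefit tests negative
    numerator = p_neg_if_real * prior
    return numerator / (numerator + p_neg_if_not * (1 - prior))

# Invented numbers: start generously at 50/50, and assume a real benefit
# gives each trial an 80% chance of detecting it, while a nonexistent one
# gives only a 5% chance of a (false) positive.
p = 0.5
for trial in range(1, 6):
    p = update_on_failure(p, p_pos_if_real=0.8, p_pos_if_not=0.05)
    print(f"after failed trial {trial}: P(benefit) = {p:.3f}")
# After five failed trials, P(benefit) has fallen below 0.1%.
```

Under these assumptions, “nothing found” is not neutral: each negative result shifts the odds further toward absence, which is exactly the sense in which repeated failed validation becomes evidence of absence.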
It is often true that the absence of evidence is not evidence of absence. It is also true that the absence of evidence is sometimes suggestive that a claim is false, particularly when highly motivated individuals have tried to find or produce positive evidence and have repeatedly failed. And it is always true that we must make decisions in the context of some uncertainty. EBM helps us quantify this uncertainty and integrate it into our decision-making, but it doesn’t obviate making decisions. And sometimes, the most appropriate decision for our patients is that “unproven” means “risky” and, in some cases, “unlikely to be true.”