Shocking News! Media Coverage of Healthcare Research Often Not Very Good.

As a veterinarian, explaining science to non-scientists and interpreting the meaning of scientific research is a key part of my job. Pet owners cannot make truly informed decisions about what to do for their animal companions without reliable information they can understand. This blog arose out of my efforts to provide better information to my clients, and it has led to further efforts to inform the public, and my colleagues in veterinary medicine, about how to evaluate medical interventions and understand the scientific research we need to support making decisions for pets.

My own knowledge about how we understand health and disease has come from many years of academic study. This includes a master’s degree I will be finishing this year in epidemiology, the branch of science specifically devoted to understanding health and disease and generating safe and effective healthcare interventions. And hopefully I have developed some ability to effectively communicate about science through my academic background, my years as a veterinarian, and my work speaking and writing for the veterinary community and, of course, in this blog.

In a sense, this blog has made me part of “The Media,” as has my involvement with the American Society of Veterinary Journalists. Unfortunately, taken as a whole “The Media” does not do a very good job of covering scientific topics, and journalists seem to contribute to misconceptions at least as often as they dispel them. A newly published study looking specifically at media coverage of healthcare research illustrates this starkly.

Schwitzer G. A Guide to Reading Health Care News Stories. JAMA Intern Med. Published online May 5, 2014. doi:10.1001/jamainternmed.2014.1359

This paper reports on a 7-year evaluation of media stories from print and electronic media of various kinds. It details a number of specific errors in how journalists often present and interpret scientific research that lead to a false understanding of what the results mean. The conclusion of the study was:

After reviewing 1889 stories (approximately 43% newspaper articles, 30% wire or news services stories, 15% online pieces [including those by broadcast and magazine companies], and 12% network television stories), the reviewers graded most stories unsatisfactory on 5 of 10 review criteria: costs, benefits, harms, quality of the evidence, and comparison of the new approach with alternatives. Drugs, medical devices, and other interventions were usually portrayed positively; potential harms were minimized, and costs were ignored.

The specific kinds of mistakes made in many stories about healthcare research struck me not only because I see them all the time in the media, but because they mirror very closely exactly the sorts of mistakes made by advocates of alternative therapies. Though this study did not, unfortunately, look specifically at coverage of alternative medicine, my subjective impression is that the media makes the same sorts of errors but is even less careful and critical in coverage of this area. Pieces on veterinary medicine, in particular, are often of poor quality because they are part of the “lifestyle” or “human interest” beat and are treated as entertainment, rather than being written by qualified science journalists interested in the truth about healthcare practices.

In any case, here are the major problems the study identified in media coverage of healthcare science:

Risk Reduction Stated in Relative, Not Absolute, Terms
Stories often framed benefits in the most positive light by including statistics on the relative reduction in risk but not the absolute reduction in risk. Consequently, the potential benefits of interventions were exaggerated.

While journalists are often understandably loath to talk about anything that sounds like math, it is impossible to talk appropriately about the effects of medical therapies without identifying the difference between absolute and relative risk. If you have a 1 in a million chance of developing a terrible disease, and something raises your chances to 2 in a million, that is a relative risk increase of 100%. Sounds terrible! But the thing is, at a chance of 2 in a million, you are still almost certainly not going to get that disease. And doubling your risk does not make it meaningfully more likely that you will. Such a simple distinction is critical to deciding whether medical interventions are worthwhile.
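To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical one-in-a-million numbers from the example above (not figures from any real study), showing how the very same change in risk looks when reported in relative versus absolute terms:

```python
# Hypothetical risks from the example above: 1 in a million vs. 2 in a million.
baseline_risk = 1 / 1_000_000   # risk without the exposure
exposed_risk = 2 / 1_000_000    # risk with the exposure

# Relative risk increase: how stories often frame the result.
relative_increase = (exposed_risk - baseline_risk) / baseline_risk
print(f"Relative risk increase: {relative_increase:.0%}")   # 100% -- sounds alarming

# Absolute risk increase: what the change actually means for an individual.
absolute_increase = exposed_risk - baseline_risk
print(f"Absolute risk increase: {absolute_increase:.6%}")   # about one extra case per million people
```

The "100%" and the "one extra case per million" describe exactly the same finding; only the framing differs.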

Failure to Explain the Limits of Observational Studies
Often, the stories fail to differentiate association from causation.

You may have heard the saying “correlation does not mean cause and effect.” Just because two things are associated doesn’t mean one caused the other. If, for example, a study found that carrying matches in your pocket was associated with a tenfold increase in your risk of lung cancer, would that mean matches cause lung cancer? Of course not! Carrying matches may mean you’re a smoker, and smoking certainly does cause lung cancer, but the simple association between matches and cancer doesn’t mean one causes the other.

Here’s a great site that illustrates all kinds of such bogus associations. While this may not be something everyone appreciates in daily life, journalists writing about healthcare research ought to understand it.
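To see how a lurking variable can manufacture an association, here is a small simulated sketch (the numbers are invented purely for illustration) in which carrying matches has no effect on lung cancer at all, yet match carriers still show a much higher cancer rate because smoking drives both the habit and the disease:

```python
import random

random.seed(0)

def simulate_person():
    # Smoking is the confounder: it influences both match-carrying and cancer.
    smoker = random.random() < 0.2
    carries_matches = random.random() < (0.8 if smoker else 0.05)
    # Cancer risk depends only on smoking, never on the matches themselves.
    cancer = random.random() < (0.10 if smoker else 0.01)
    return carries_matches, cancer

people = [simulate_person() for _ in range(100_000)]

def risk(group):
    # Proportion of the group that developed cancer.
    return sum(cancer for _, cancer in group) / len(group)

with_matches = [p for p in people if p[0]]
without_matches = [p for p in people if not p[0]]

# Matches "predict" cancer even though they play no causal role.
print(f"Cancer risk, match carriers: {risk(with_matches):.2%}")
print(f"Cancer risk, non-carriers:   {risk(without_matches):.2%}")
print(f"Relative risk: {risk(with_matches) / risk(without_matches):.1f}x")
```

An observational study of this simulated population would find match carriers several times more likely to develop cancer, even though taking away their matches would change nothing.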

The Tyranny of the Anecdote
Stories may include positive patient anecdotes but omit trial dropouts, adherence problems, patient dissatisfaction, or treatment alternatives.

I’ve written about anecdotes and miracle stories many times. The number one “argument” presented in the comments on this blog in defense of treatments I evaluated critically is the presentation of anecdotes that look like they show the treatment working. Anecdotes can only suggest hypotheses to test, but they can never prove these hypotheses true.

There are many reasons treatments that don’t work may seem like they do, and professionals who interpret and explain science should know anecdotes are unreliable and often misleading. While personal stories make for more interesting and emotionally appealing narratives, they should always be used carefully only to illustrate something that has been demonstrated to be true or false by more reliable evidence.

Surrogate Markers May Not Tell the Whole Story
Journalists should distinguish changes in surrogate markers of disease from clinical endpoints, including serious disease or death. Many news stories, however, focus only on surrogate markers, as do many articles in medical journals.

The bottom line for any medical treatment is whether it reduces the meaningful symptoms of disease, including the most final of all, death. It makes no difference if a therapy raises or lowers the amount of some chemical we can measure in the blood if that isn’t a clear and well-established indicator that the therapy will also reduce suffering or prevent death. Surrogate markers are, as the article suggests, overused by healthcare researchers in many cases because they are often cheaper and easier to measure than real symptoms or mortality, but they have significant limitations, and this should be made clear when talking about research using them.

Stories About Screening Tests That Do Not Explain the Tradeoffs of Benefits and Harms
Stories about screening tests often emphasize or exaggerate potential benefits while minimizing or ignoring potential harms. We found many stories that lacked balance about screening for cardiovascular disease and screening for breast, lung, ovary, and prostate cancer.

I have frequently referred to the growing appreciation in human medicine, which has not yet come very far in the veterinary field, that screening tests have risks as well as benefits, and these need to be carefully weighed. The Choosing Wisely project is a key resource for people trying to make smart decisions about screening tests, as is the web site for the U.S. Preventive Services Task Force. Both provide real evidence to help balance the risks and benefits of potential screening tests. Journalists should be aware of the limitations and pitfalls of screening, including risks such as overdiagnosis, and should include those considerations in stories about screening tests.

Fawning Coverage of New Technologies
Journalists often do not question the proliferation of expensive technologies.

I would add that journalists rarely question the value or evidence for alternative therapies and tend to fawn over them and their proponents more often than not. Reporting that is truly informative and useful must be thoughtful and based on assessment of the real evidence, not simply unquestioningly enthusiastic about therapies with a token quote or two from skeptics for “balance.” Drugs are not the only medical treatment to have risks, but it seems journalists are far more likely to talk about the risks of pharmaceuticals than about the risks of other treatments.

Uncritical Health Business Stories
Health business stories often provide cheerleading for local researchers and businesses, not a balanced presentation of what new information means for patients. Journalists should be more skeptical of what they are told by representatives of the health care industry.

I would argue that identifying any potential bias, financial or otherwise, in a source for a news story should be an ordinary part of journalistic practice. The idea behind seeking multiple sources is not just to provide a superficial impression of balance by including opposing points of view regardless of merit but to ensure that the journalist has a comprehensive awareness of the evidence for and against the treatment they are writing about so that they can provide a useful explanation of what is known about it. The study also found, however, that journalists often don’t follow this practice.

Single-Source Stories and Journalism Through News Releases
Half of all stories reviewed relied on a single source or failed to disclose the conflicts of interest of sources. However, journalists are expected to independently vet claims. Our project identified 121 stories (8% of all applicable stories) that apparently relied solely or largely on news releases as the source of information.

There really shouldn’t be any need to point out that this is lazy and unacceptable journalistic practice and does not lead to accurate, useful information for the public.

I don’t want to suggest that there are not many excellent journalists providing accurate and informative interpretation and analysis of healthcare research. The study specifically identifies examples of stories that succeeded in avoiding the mistakes it found, and there are certainly many in the media who do a brilliant job reporting and explaining health sciences research. Hopefully, by identifying common problems and mistakes, this study will contribute to improving the quality of healthcare science journalism.


One Response to Shocking News! Media Coverage of Healthcare Research Often Not Very Good.

  1. Beccy Higman says:

    My perception is that there are too many journalists doing science stories with an inadequate science education. It appears to me that those journalists who have a higher level of qualification in science, in general, write more accurate stories, whereas the pure arts graduates think they can report the science the same way they’d report any other story, which can work, but all too often doesn’t.
