Central to the nutritional and general healthcare philosophy of most alternative practitioners is the idea that anything we eat is either good or bad. They often quote that cutting-edge physician Hippocrates as saying “Let food be thy medicine” (though for some reason they less commonly quote him saying “A physician without a knowledge of Astrology has no right to call himself a physician” or “What medicines do not heal, the lance will; what the lance does not heal, fire will”). So-called “superfoods” are lauded as having nearly magical healing properties, whereas foods like high-fructose corn syrup, or even entire categories of food such as grains, are considered off-limits in a healthy diet. One of the most common scare tactics used to promote alternative approaches to pet nutrition is the claim that commercial pet food causes cancer.
So what is the evidence to support these claims that foods cause or cure cancer or other serious disease? Often there is little or no evidence at all, but occasionally there will be an observational study (one without any controls for chance, bias, or other sources of error). Usually these studies are done in humans, and they tend to drive food fads for both humans and, to a lesser extent, pets, despite the dangers of extrapolating from human research to veterinary patients. However, as has often been pointed out, research studies are only as good as the quality of their design and conduct, and single studies, especially observational studies, are rarely solid enough evidence to justify major changes in behavior. I recently ran across a systematic review of the human nutrition literature that sheds some light on why it sometimes seems like everything we eat causes or cures cancer.
Schoenfeld JD, Ioannidis JP. Is everything we eat associated with cancer? A systematic cookbook review. Am J Clin Nutr. 2013 Jan;97(1):127-34. doi: 10.3945/ajcn.112.047142. Epub 2012 Nov 28.
This clever little study selected recipes at random from a popular cookbook and then evaluated all the ingredients to see if there was any research literature suggesting they increased or decreased cancer risk. Here are the ingredients for which some research studies were found:
veal, salt, pepper spice, flour, egg, bread, pork, butter, tomato, lemon, duck, onion, celery, carrot, parsley, mace, sherry, olive, mushroom, tripe, milk, cheese, coffee, bacon, sugar, lobster, potato, beef, lamb, mustard, nuts, wine, peas, corn, cinnamon, cayenne, orange, tea, rum, and raisin.
Of these, there were more than 5 studies for 65% of the ingredients, with over 216 publications altogether. About 40% of the studies found an increased risk of cancer associated with one of these ingredients, 33% found a decreased cancer risk, and about 25% found no clear evidence either way. When a risk was identified, the statistical support was weak or not technically significant in 80% of the studies, so most individual studies did not show very robust results. About half of the meta-analyses included, however, had stronger statistical results, which is not surprising since the whole point of meta-analyses is that evaluations of multiple studies give stronger evidence than the results of individual studies. The distribution of effects reported in the meta-analyses centered around zero, suggesting random variation but no clear real effect.
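To see why a pile of individually "significant" findings can still center on zero, it helps to remember what the conventional p < 0.05 threshold means. The toy simulation below (my own illustration, not anything from the paper) generates effect estimates for many hypothetical ingredient studies in which the true effect on cancer risk is exactly zero; by chance alone, roughly 5% of them will still clear the significance bar, and the estimates as a whole will cluster around zero, just as the meta-analyses in the review did.

```python
import random
import statistics

random.seed(42)

# Simulate 1000 hypothetical "studies" of an ingredient with NO real
# effect on cancer risk. Each study's estimated effect (e.g. a log
# relative risk) is pure sampling noise around a true value of zero.
n_studies = 1000
se = 0.2  # assumed standard error of each study's estimate
estimates = [random.gauss(0, se) for _ in range(n_studies)]

# A study "finds" an effect when its estimate lies more than ~1.96
# standard errors from zero (the conventional p < 0.05 threshold).
significant = [e for e in estimates if abs(e) > 1.96 * se]

print(f"mean estimate across studies: {statistics.mean(estimates):.3f}")
print(f"'significant' studies: {len(significant)} of {n_studies}")
```

Even with no real effect anywhere, a few dozen of the thousand studies come out "significant," and if only those were written up and publicized, the literature would look like strong evidence for harm or benefit. The mean across all studies, like the pooled meta-analytic estimates in the review, stays near zero.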
The authors’ discussion summarizes very well not only the results of this study but the general problem with much of the observational and pre-clinical research often used to justify specific practices in the absence of clinical trials:
80% of ingredients from randomly selected recipes had been studied in relation to malignancy and the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence. Many statistically insignificant “negative” and weak results were relegated to the full text rather than to the study abstract. Individual studies reported larger effect sizes than did the meta-analyses.
…the credibility of studies in this and other fields is subject to publication and other selective outcome and analysis reporting biases, whenever the pressure to publish fosters a climate in which “negative” results are undervalued and not reported. Ingredients viewed as “unhealthy” may be demonized, leading to subsequent biases in the design, execution and reporting of studies. Some studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting. When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings. This issue may be especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive or inflated results.
Overinterpretation of individual studies with often small effects that are only marginally significant statistically, and may be insignificant clinically, is a major problem in all areas of medicine, and it manifests especially dramatically in alternative medicine, where any data is good data so long as it supports existing beliefs. To find out what is actually true and what will really improve health, we must be mindful of the limitations in the evidence and seek to improve the design, conduct, and reporting of clinical studies. That is a job for the researchers. However, the job for the rest of us is to know at least enough about research evidence to be wary of overinterpretation and of placing excessive confidence in data that does not merit it. This will hopefully dampen the wild swings back and forth between claims like “Food X will kill you” and “Food X will make you live forever.”