Health watchdog gives Rxs for interpreting myriad studies

Readers eat up health news like popcorn for the brain, and that's understandable. Most people want to know what they can do to stay healthier longer. But study findings can be tricky, to the point that people don't know what to believe.

Today you may read about a study showing that a particular diet may cause cancer. Tomorrow another study will claim that the same diet protects against cancer. How do you know what to believe?

Gary Schwitzer, publisher of the website HealthNewsReview.org (www.healthnewsreview.org), which evaluates health news coverage, says the pressures behind that whiplash come from every direction.

Journalists are looking for stories; researchers, institutions and journals are looking for media coverage; and public relations staffs are looking to write news releases. "We're losing sight of the impact on readers," Schwitzer said, "and we need to slow down."

Here's his list of what to watch for and watch out for:

— Mainstream stories about animal research often make unfounded links to people, he said. Some articles and press releases neglect to emphasize that animals, not people, were used in the research.

— Look for conflicts of interest among a study's researchers, institutions and funders. Know who paid for the research. If a pharmaceutical company funded a drug study, evaluate the findings with extra caution.

— Remember that not all studies are equally reliable. The randomized controlled trial (RCT) is considered the most scientifically rigorous study design because it can show cause and effect: similar groups of people are randomly assigned either to a group that receives an intervention or to a group that receives no intervention or a placebo, so the only systematic difference between the comparison groups is the intervention itself.

Conducting an RCT isn't always possible. A study of the link between cigarette smoking and lung cancer, for example, could not ethically use an RCT design, because it would require assigning one group to smoke.
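
For readers who like to see the mechanics, here is a minimal Python sketch of random assignment; the participant count is an invented illustration, not data from any real trial.

    import random

    random.seed(0)  # fixed seed so the illustration is reproducible
    participants = list(range(200))  # 200 hypothetical volunteers
    random.shuffle(participants)

    treatment_group = participants[:100]  # receives the intervention
    control_group = participants[100:]    # receives a placebo or nothing

    # Random assignment balances age, habits and health on average across
    # the two groups, so the only systematic difference is the intervention,
    # and a difference in outcomes can be credited to it.
    print(len(treatment_group), len(control_group))  # 100 100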

— The alternative is an observational study. These studies, often well structured and reliable, are the only way certain research questions can be studied. But they show only associations, not cause and effect. A good evaluation of such a study requires looking at bias, chance and confounding variables that researchers may have failed to control for or eliminate, any of which can undermine the validity of the findings.

When you read such studies, watch for causal language applied to findings that don't actually prove cause and effect. Schwitzer said studies on diet, for example, are famous for erroneously implying cause and effect when they have shown only a statistical association.
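
A toy simulation makes the confounding problem concrete. In the sketch below, every number is invented: age drives both coffee drinking and heart disease, and coffee causes nothing, yet drinkers show a higher disease rate, exactly the kind of spurious association Schwitzer warns about.

    import random

    random.seed(1)
    records = []
    for _ in range(10_000):
        age = random.uniform(20, 80)
        drinks_coffee = random.random() < age / 100   # older people drink more coffee
        heart_disease = random.random() < age / 200   # older people get more disease
        records.append((drinks_coffee, heart_disease))

    def disease_rate(coffee_flag):
        outcomes = [disease for coffee, disease in records if coffee == coffee_flag]
        return sum(outcomes) / len(outcomes)

    print(f"disease rate among coffee drinkers: {disease_rate(True):.1%}")
    print(f"disease rate among abstainers:      {disease_rate(False):.1%}")
    # Drinkers show more disease purely because they skew older; coffee
    # itself does nothing in this simulation.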

— Assess the quality of the evidence. Look at the size of the study group: generally, the larger the study population, the more likely it is to represent the general population. Keep in mind that a good study also will tell you the trial's dropout rate. If it's high, it can create an imbalance between groups that skews the findings.
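
The effect of study size can be put in numbers. This short sketch, which assumes a hypothetical 10 percent event rate, shows how the margin of error of an estimated rate shrinks as the number of participants grows.

    import math

    assumed_rate = 0.10  # assume the true event rate is 10 percent
    for n in (50, 500, 5000):
        # standard error of a proportion: sqrt(p * (1 - p) / n)
        se = math.sqrt(assumed_rate * (1 - assumed_rate) / n)
        print(f"n = {n:5d}: estimate typically lands within ±{1.96 * se:.1%} of the truth")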

— When evaluating a study, look for both the risks and the benefits. When findings are reported only in percentages, you're not getting the full impact of the results. A drug study that reports a 50 percent reduction in hip fractures sounds impressive. But, Schwitzer said, the question you need to ask is: 50 percent of what?

For example, if people not on the drug had hip fractures at a rate of 2 in 100 and those on the drug had hip fractures at a rate of 1 in 100, the absolute risk reduction is 2 percent minus 1 percent, or 1 percentage point of the study population. That means one person benefited who might not have otherwise, while the other 99 had to take the drug, run the risk of side effects and pay for it, all with no benefit. Yes, going from 2 in 100 to 1 in 100 is also a 50 percent relative reduction, so the claim isn't inaccurate; it's just half the story, and not a very helpful half, Schwitzer added.
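
Here is that hip-fracture arithmetic as a short Python sketch, using the article's illustrative rates, with the number needed to treat (how many people must take the drug for one to benefit) added for completeness.

    control_rate = 2 / 100  # hip fractures without the drug: 2 in 100
    drug_rate = 1 / 100     # hip fractures with the drug: 1 in 100

    relative_reduction = (control_rate - drug_rate) / control_rate  # the headline "50 percent"
    absolute_reduction = control_rate - drug_rate                   # 1 percentage point
    number_needed_to_treat = 1 / absolute_reduction                 # people treated per fracture averted

    print(f"relative risk reduction: {relative_reduction:.0%}")          # 50%
    print(f"absolute risk reduction: {absolute_reduction:.1%}")          # 1.0%
    print(f"treated for one person to benefit: {number_needed_to_treat:.0f}")  # 100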

— Until study findings can be replicated by independent researchers, they're preliminary and inconclusive. "What's happening," Schwitzer said, "is that too much flawed research is getting published, too many journalists are trumpeting unfounded claims, and people can be hurt by inaccurate, imbalanced, incomplete health care news."

———

©2015 Chicago Tribune

Visit the Chicago Tribune at www.chicagotribune.com

Distributed by Tribune Content Agency, LLC.