Criticism of a Criticism of Science Journalism

A Slate piece lays out some obvious rules for science journalists. Does this critique itself mislead by failing to name names?

I came across this piece in Slate, reprinted from New Scientist, about rules science journalists should follow. What interested me is that the piece itself breaks a rule I've always considered important: be specific whenever possible.

The rules outlined by the Slate/New Scientist piece are no-brainers for those of us at the Inquirer – especially for my colleagues who specialize in medical reporting. By implying these rules aren't already widely followed, the Slate story leaves the impression that it's the Wild West at newspapers, where we freely distort and hype news to get more play, with little or no intention of explaining complicated things or digging up the truth:

A checklist would look something like the following. Every story on new research should include the sample size and highlight where it may be too small to draw general conclusions. Any increase in risk should be reported in absolute terms as well as percentages: For example, a "50 percent increase" in risk or a "doubling" of risk could merely mean an increase from 1 in 1,000 to 1.5 or 2 in 1,000. A story about medical research should provide a realistic time frame for the work's translation into a treatment or cure. It should emphasize what stage findings are at: If it is a small study in mice, it is just the beginning; if it's a huge clinical trial involving thousands of people, it is more significant. Stories about shocking findings should include the wider context: The first study to find something unusual is inevitably very preliminary; the 50th study to show the same thing may be justifiably alarming. Articles should mention where the story has come from: a conference lecture, an interview with a scientist, or a study in a peer-reviewed journal, for example.

Another concern is the sometimes misguided application of "balance" in science reporting. An obsession with including both sides of a story has often obscured the fact that the weight of scientific evidence lies firmly on one side—witness some coverage of climate change and GM crops.
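The checklist's point about risk statistics is worth making concrete. Here's a minimal sketch – my own illustration, not anything from the Slate piece – of why a headline-grabbing relative increase can be a tiny absolute one, using the quoted 1-in-1,000 baseline (the function name is mine, invented for this example):

```python
# Illustration of the checklist's point about absolute vs. relative risk.
# Numbers match the quoted example: a baseline risk of 1 in 1,000.

def absolute_risk_change(baseline, relative_increase):
    """Return the new risk and the absolute change, given a baseline risk
    (as a proportion) and a relative increase (e.g. 0.5 for a 50% rise)."""
    new_risk = baseline * (1 + relative_increase)
    return new_risk, new_risk - baseline

baseline = 1 / 1000  # 1 in 1,000

for label, rel in [("50 percent increase", 0.5), ("doubling", 1.0)]:
    new_risk, change = absolute_risk_change(baseline, rel)
    print(f"A '{label}' takes the risk from {baseline:.4f} "
          f"to {new_risk:.4f} (absolute change: {change:.4f})")
```

Run it and the "doubling" that sounds alarming turns out to be an absolute change of 0.001 – one additional case per thousand.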

I agree that these are good rules, but to learn anything from this piece, I need to know who is failing to follow them. Not following such basic good practice is a failure of journalistic integrity. If you're going to accuse newspaper science reporters of lacking integrity, you need good examples. Instead, there's a blanket condemnation of the coverage of two stories from a decade or more ago – the start of the vaccine/autism scare and a silly story about cultists and cloning.

I'm not sure what the British press did with the vaccine scare. Here in the U.S., one of the main characters promoting the story was not a journalist but a very pretty blonde named Jenny McCarthy, who got lots of TV time. If there are newspaper reporters who unscrupulously rode the wave of McCarthy-induced panic, I want to know who they are.

The other story dates to the early 2000s, when a cult known as the Raelians claimed to have cloned a baby. The cult's claim led to a congressional hearing in which the Raelians testified in their robes and medallions. Some congressmen tried to take them seriously so they could be seen as protecting the public with anti-cloning legislation. It was a circus, and a good chance to poke some fun at our public officials. There were funny stories and debunking stories. If there were misleading stories, I'd like to see examples.

It's easy to see how a story should have been reported years after the fact, or when one side is being promoted by UFO-cult members in robes. I'd be much more interested in examples of misleading reporting on the health and environmental risks of the Fukushima nuclear leak, airport X-ray scanners, or GM salmon. For that, you have to go to the Knight Science Journalism Tracker, which covers big issues in science and isn't afraid to tell you who it thinks got it right, and who got it wrong. You don't have to be a journalist to read it.

Another concern I have with the Slate piece is the old refrain condemning false balance. Again, this complaint isn't specific enough. A blanket prohibition against interviewing the "wrong" side could discourage reporters from debunking false claims, since you have to explain what the bunk is before you can debunk it. There are too many science writers rewriting press releases out there and not nearly enough debunking.

A journalist friend used the term "thumb-sucker" for stories in which a reporter interviews people with opposing viewpoints but fails to add any analysis or draw a conclusion. That term should get more use, because that's exactly what we should be avoiding. It's good to offer contrasting viewpoints to clarify the nature of a controversy, as long as the reporter exposes bad logic or factual errors where they exist and the piece is well researched enough to lead somewhere.

One serious problem not mentioned by Slate is the confusion between stories and blog posts. Stories are researched, reported, edited pieces that appear in print. Many blog posts aren't edited and rarely appear in the print version of the paper. I think of them as "Zen and the Art of Writing" warm-up exercises. This is a blog post. Yesterday's piece, in which I created a voice for my cat, was a blog post. The "Higgs the Cat" piece had the best hits-to-effort ratio of anything I've written since starting the evolution column and its associated blog. We try to be responsible in writing these posts, but there are no rules.