Should we believe health advice in newspapers? Just take it with a pinch of salt (but avoid salt for health reasons)
Why is it that newspapers often give completely
contradictory health advice? It’s difficult enough to keep track of what is
supposed to be healthy, without the papers changing their minds the whole time.
Let’s take the Daily Mail as an example: a paper that hands out health advice
more freely than a Jehovah’s Witness hands out pamphlets.
A brief look through the Daily Mail health pages tells you
“Aspirin causes cancer” but also “Aspirin prevents cancer”. “Beer causes cancer” except when “Beer prevents cancer”. “Coffee causes cancer” but also “Coffee prevents cancer”. “Eggs cause cancer” apart from the times when “Eggs prevent cancer”. “Soya causes cancer” but also “Soya prevents cancer”. “Stress causes cancer”
but sometimes “Stress prevents cancer”.
See
a trend here? Well, part of the reason for this is that knowledge about a
subject changes over time. We all remember adverts where doctors recommended cigarettes, but you’d struggle to find any serious person nowadays claiming
cigarettes aren’t harmful (except maybe Nigel Farage).
But that’s only part of the story. To understand more about
it, let’s look at how medical research gets published, and how newspapers
access this research.
If you’re a researcher wanting to get your work published,
you submit it to a professional journal. There’s a kind of pecking order of
these, with the top ones being well known (Nature, The British Medical Journal,
The Lancet, etc.).
At the top of the journal food-chain, editors can call on plenty of
independent experts to “peer-review” articles before they are published,
checking that everything looks accurate. It’s the journal’s reputation on the
line too. These journals make their money by being a trusted resource that
people are keen to subscribe to in order to keep up with cutting-edge research.
Because you often have to pay to access these articles, it’s less likely that
the general public or the press will read them.
A bit down the chain are the less big-name journals that
provide articles free to access and make their money through advertising.
These journals still peer-review articles, and they often play an important
role in publishing work that bigger journals might not take on but that is
still important: for example, articles about treatments that have been tried
and don’t work.
Further down the food-chain are journals that rely on
charging researchers to publish in them, just for the kudos of having their
research published. The majority of these journals are honest, but a few are
simply money-making schemes that will accept any article, because their
business depends on it. They will often actively go looking for researchers to
submit articles to them, and while they claim to be “peer-reviewed”, often the
articles aren’t reviewed at all.
This was demonstrated by David Mazieres and Eddie Kohler of
New York University and the University of California, Los Angeles. They were
contacted by a journal that had sent hundreds of emails out to researchers,
suggesting they submit work to it. As a joke, they wrote a paper called
“Get Me Off Your Fucking Mailing List”, literally to get the journal to take
them off its fucking mailing list. Surprisingly, their paper was
“peer-reviewed” and accepted for publication by the impressively named
“International Journal of Advanced Computer Technology”.
Now, that’s an extreme case, but it shows that we need to be
cautious about some of these medical articles that are the easiest for
newspapers to access. Just because something has been published in a “journal”
doesn’t mean it’s good research, and proper expert peer-review is essential.
Unfortunately, it’s often the very articles that are least rigorously checked
that are the easiest for newspapers to get hold of.
Why is this a problem? Well, proper research is often really
difficult to do, but misinterpreting results is easy. As any statistician worth
their salt will tell you, 86.7% of statistics are made up. So if an article
hasn’t been “peer-reviewed”, you’re going to have to review it yourself.
Researchers have to be careful that their findings are
“statistically significant”, and they use various ways to check this. They
often quote a figure in their articles called a “p value”, which is a measure
of how likely it is that results like theirs would turn up purely by chance if
there were no real effect. For example, a p value of 0.05 suggests there’s only
a 5 in 100 chance of getting results like these just by chance. That sounds
impressive, right? Well, yeah, but if your research is looking at 100 things
that might cause cancer, around 5 of them might show up as “significant” just
by chance.
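Just to show how easily this happens, here’s a little toy simulation in Python (my own sketch, with entirely made-up numbers rather than anything from a real study). It “investigates” 100 factors that genuinely do nothing and counts how many still come out looking “significant” at the usual p < 0.05 threshold:

```python
import random

random.seed(42)

N_FACTORS = 100   # hypothetical "risk factors" under investigation
GROUP_SIZE = 50   # people in each group, per factor
ALPHA = 0.05      # the usual significance threshold

def fake_study():
    """Compare two groups drawn from the *same* population, so any
    'significant' difference is pure chance. Returns a rough p-value
    from a crude permutation test."""
    group_a = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    group_b = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    observed = abs(sum(group_a) / GROUP_SIZE - sum(group_b) / GROUP_SIZE)
    pooled = group_a + group_b
    hits, trials = 0, 200
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(sum(pooled[:GROUP_SIZE]) / GROUP_SIZE -
                   sum(pooled[GROUP_SIZE:]) / GROUP_SIZE)
        if diff >= observed:
            hits += 1
    return hits / trials

false_positives = sum(1 for _ in range(N_FACTORS) if fake_study() < ALPHA)
print(f"'Significant' findings among {N_FACTORS} factors that do nothing: {false_positives}")
```

Run it a few times and you’ll usually get a handful of “discoveries”, each one a potential headline, despite there being nothing there to find.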
You also need to be sure that the thing you are looking at
really is the cause, and isn’t just a “confounding factor”, meaning that it
isn’t really the cause but is associated with something that is. Let’s say
you’re looking at what causes obesity, and you find that diet drinks are
linked to it. It might be that they really are a cause – but that seems
unlikely, considering the number of calories in them is often zero. A more
plausible explanation is that overweight people are more likely to drink diet
drinks because they want to lose weight.
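The diet-drink trap is easy to sketch the same way (again my own toy model, with invented numbers): weight drives the choice of drink, the drinks themselves do nothing, and yet the diet-drink drinkers still come out heavier on average:

```python
import random

random.seed(1)

people = []
for _ in range(1000):
    weight_kg = random.gauss(80, 15)  # a person's underlying weight
    # Heavier people are more likely to be dieting, so more likely to pick
    # diet drinks; the drinks themselves change nothing in this model.
    p_diet = min(max((weight_kg - 60) / 60, 0.0), 1.0)
    drinks_diet = random.random() < p_diet
    people.append((drinks_diet, weight_kg))

def average(values):
    return sum(values) / len(values)

diet = [w for d, w in people if d]
regular = [w for d, w in people if not d]
print(f"Average weight of diet-drink drinkers:     {average(diet):.1f} kg")
print(f"Average weight of non-diet-drink drinkers: {average(regular):.1f} kg")
```

The two things really are linked in the data, but reading that link as “diet drinks cause obesity” gets the causation backwards.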