In many science articles, indeed in most of them, you will come across some form of statistical data: the number of subjects affected by something, expected outcomes, percentages, and so on.
More often than not, these statistics have been drawn from some scientific source, and along the path from source to published news article, they will get warped, misinterpreted, and even taken completely out of context.
Oftentimes, the reporter will emphasise something that, scientifically, has little meaning. I mentioned an example of this in a previous post: just because a certain number of test mice died when eating food from a GM crop, for example, does not mean that the crop is unsafe.
A lot of the time, the reporter will see this one piece of information and miss or ignore everything else around it. Sure, a couple of mice died, but the sample size (the number of mice tested) might be in the tens, hundreds, or even thousands. Just because some of the mice died doesn't mean the result is statistically significant.
Any number of factors could have influenced this outcome, from environmental conditions to the genetic makeup of the individual mouse. Again, these are things that reporters will often miss or ignore.
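To put rough numbers on that mouse example, here is a minimal sketch in Python. The figures are invented purely for illustration: it assumes lab mice have some background chance of dying over the course of a study regardless of what they are fed, and asks how surprising a couple of deaths in the test group really is.

```python
# Toy illustration with invented numbers: if each mouse has a 5% background
# chance of dying during the study anyway, how surprising are 2 deaths in a
# group of 30? We sum the binomial probabilities for 2 or more deaths.
from math import comb

def prob_at_least(k, n, p):
    """P(at least k deaths among n mice, each dying with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

baseline = 0.05  # assumed background death rate for the study period
print(f"Chance of 2+ deaths out of 30 by chance alone: {prob_at_least(2, 30, baseline):.0%}")
```

With these made-up figures, two deaths out of thirty would happen by chance alone almost half the time, which is exactly why "a couple of mice died" is not, on its own, evidence of anything.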
The way statistics are presented can have a large impact on the way they are received. For example, it sounds far more dramatic to say that some new drug has caused 100 deaths than to say it has cured 200 people.
When talking about statistics, particularly percentages, a frame of reference is important. Say, for example, you come across this sentence: "10% of patients in a study were adversely affected by a test drug."
We have no way of knowing from that sentence alone how important this really is. Does it mean that 10 out of every 100 people will be adversely affected, or that 10 out of every 100 people might be? And how many people were in the study anyway? If there were only 10 people and one of them was adversely affected, that is technically still 10% of the sample, but could something else be the issue? Just because 1 person was affected, can we really say that 1 out of every 10 people in the world will be affected?
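This is exactly what a confidence interval captures, and it is easy to demonstrate. The sketch below (my own example, not drawn from any particular study) computes a 95% Wilson score interval for the same headline figure of "10% affected" at three different sample sizes:

```python
# Sketch: the same headline figure of "10% affected" carries very different
# uncertainty depending on the sample size. We compute 95% Wilson score
# confidence intervals for 1/10, 10/100, and 100/1000.
from math import sqrt

def wilson_interval(affected, n, z=1.96):
    """95% Wilson score interval for an observed proportion affected/n."""
    p = affected / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

for affected, n in [(1, 10), (10, 100), (100, 1000)]:
    low, high = wilson_interval(affected, n)
    print(f"{affected}/{n} affected: true rate plausibly {low:.1%} to {high:.1%}")
```

With 1 person out of 10, the true rate could plausibly be anywhere from under 2% to around 40%; with 100 out of 1000, it is pinned down to roughly 8% to 12%. Same headline percentage, wildly different certainty.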
Usually scientists will cover all of these bases within their reports, and of course to them it all makes perfect sense. But it is easy to see how, when such statistics are taken out of context by the reporter, they can be blown out of proportion, or presented in a way that is just plain confusing and doesn't represent what the scientist was trying to say at all.
Now of course, this isn't just the fault of the reporter; perhaps scientists need to be more careful when explaining their statistics?