Statistical error

Today in statistics we talked about Type I and Type II error.

A Type I error is when you reject the null hypothesis when you should not have. You see an outlier in the data that turns out to be there by chance, as part of the normal distribution, but you think it is not: you think it shows a cause-and-effect relationship between two variables.
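To see what that looks like, here is a minimal simulation sketch in Python (the two-sample t-test, the sample size of 30, and the 5% cutoff are my illustrative assumptions, not anything from class):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# The null hypothesis is TRUE here: both samples come from the
# exact same normal distribution, so any "difference" is chance.
trials, false_alarms = 10_000, 0
for _ in range(trials):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:  # rejecting a true null is a Type I error
        false_alarms += 1

print(f"Type I error rate: {false_alarms / trials:.3f}")
```

Run it and the rate comes out close to 0.05, which is just the cutoff we chose.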

A Type II error is when you fail to reject the null hypothesis when you should have. You see a data point inside the normal distribution, so you think it is there by chance, when in actuality it reflects a cause-and-effect relationship between two variables.
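And here is the mirror-image sketch: now there really is an effect (I'm assuming a small shift of 0.3 standard deviations), but the test often misses it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# The null hypothesis is FALSE here: group b really is shifted
# by 0.3 standard deviations, but with only 30 samples per group
# that shift is easy to miss.
trials, misses = 10_000, 0
for _ in range(trials):
    a = rng.normal(0.0, 1, 30)
    b = rng.normal(0.3, 1, 30)
    _, p = stats.ttest_ind(a, b)
    if p >= 0.05:  # failing to reject a false null is a Type II error
        misses += 1

print(f"Type II error rate: {misses / trials:.3f}")
```

With these numbers the test misses the real effect most of the time, which is why Type II error is worth worrying about.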

Type I error is the more talked about of the two, probably because it's the easiest to measure. Statistics has a convention of seeking a 95% or 99% confidence level in your results. That means you are willing to accept a 5% or 1% chance of making a Type I error, that is, of identifying a causal relationship in the data that turns out to be there by chance.

Type II error doesn't have nearly as strong a convention. The closest thing is statistical power, the probability of detecting an effect that really is there, which analyses often set at 80% (accepting a 20% Type II error rate). In this scenario, there is a causal relationship between two variables, but because the data fall within the realm of the normal distribution, you think they are there by chance.
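If you do want to plan for Type II error, the standard tool is a power analysis. A sketch using statsmodels (the 0.3 effect size is the same illustrative assumption as above):

```python
from statsmodels.stats.power import TTestIndPower

# Power = 1 - beta, where beta is the Type II error rate.
# A common (if softer) convention is to aim for 80% power.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,  # assumed standardized effect size
    alpha=0.05,       # Type I error threshold
    power=0.8,        # i.e., a 20% Type II error rate
)
print(f"Samples needed per group: {n_per_group:.0f}")  # about 175
```

That is far more than the 30 per group in the simulations above, which is why small studies miss real effects so often.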

Underestimating the effects of climate change is just such a Type II error, science historian Naomi Oreskes writes in a recent op-ed column in The New York Times. She argues that scientists are trained to be skeptical: to assume the null hypothesis of no relationship is right unless the researcher can show that a relationship as strong as the one observed would occur by chance less than 1 time in 20.

Here the relationship is between putting carbon into the atmosphere, mainly by burning fossil fuels, and the effects of climate change, such as extreme weather and ocean acidification.

Oreskes argues that because of its predilection toward skepticism, science has actually underestimated the effects of climate change. In 2009, at the 15th Conference of the Parties in Copenhagen, world leaders, on the recommendation of scientists, agreed on a maximum threshold of 2 degrees C as the “safe” limit for global warming.

However, we are now at 0.8 degrees C of warming and already seeing a lot of damage. Droughts and wildfires in California, superstorms, blizzards and floods on the East Coast and in the Midwest, coral reef die-offs in the oceans, melting ice in Greenland and the Arctic: all these effects are happening much faster and more intensely than scientists predicted.

Is this an example of a Type II error? Maybe not technically in statistics, unless we are talking about a particular study. But it does bring out a general trend in science. Normally being skeptical pays off, because if there is a causal relationship, the results of ongoing research will back this up over time. The theory of evolution is an example of a hypothesis that has stood the test of time – decades of research have continued to add evidence making it stronger.

However, we don’t have a lot of time where climate change is concerned. Because carbon stays in the atmosphere for so long after we burn it – in 100 years half the carbon we put there today will still be there – we already have another degree of warming in the pipeline.

That combined with the warming we’ve already seen means there is not much time left to get a handle on the chief cause of climate change, fossil fuel consumption. And if the relationship between burning fossil fuels and all the effects of climate change we are seeing is actually stronger than we realized, we may have even less time than we think.
