A recent article by Naomi Oreskes (co-author of the brilliant but depressing The Collapse of Western Civilization: A View From the Future) questions why we always play dumb in climate science [Playing Dumb on Climate Change].
Prof. Oreskes argues that the well-accepted (read: rarely questioned) 95% confidence limit in statistical tests is a severe standard: it reflects a greater fear of Type I errors (false positives) than of Type II errors (false negatives). It essentially asks scientists to “play dumb”: pretend they know nothing about the phenomenon and reject causality unless there is less than a 1-in-20 chance that the observed relationship occurred by chance.
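To make the convention concrete, here is a minimal simulation sketch in Python (using NumPy and SciPy; the two-sample t-test, sample size of 30, and seed are illustrative assumptions of mine, not anything from Oreskes’s article). When the null hypothesis is true, a p < 0.05 threshold lets roughly 1 in 20 experiments through as false positives:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05  # the conventional "95% confidence" threshold

# Simulate experiments in which the null hypothesis is TRUE:
# both groups are drawn from the same distribution, so any
# "significant" result is a false positive (a Type I error).
n_experiments = 10_000
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

# By construction this prints roughly 0.05: the 1-in-20 false
# positive rate the convention is willing to tolerate.
print(false_positives / n_experiments)
```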
But, the 95% confidence standard is a convention: it has no basis in nature. What if we’re not so dumb – instead of a blank slate, what if we have good theory to guide our empirical investigation? Or, what if the consequences of a false negative are much greater than those of a false positive? Should we accept higher odds of a Type I error (and lower odds of a Type II error) by lowering the required confidence level? What should that level be? Should it vary?
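The flip side of the trade is Type II error. A companion sketch (same illustrative assumptions as above; the effect size of 0.5 standard deviations is an arbitrary choice of mine) shows that relaxing the significance threshold catches more genuinely real effects, i.e., lowering the required confidence raises statistical power:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(alpha, effect=0.5, n=30, trials=10_000):
    """Fraction of experiments that detect a genuinely real effect."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)  # the effect really exists here
        _, p_value = stats.ttest_ind(a, b)
        hits += p_value < alpha
    return hits / trials

# Relaxing the threshold trades Type I risk for power; the Type II
# error rate is 1 - power, the chance of missing a real effect.
for alpha in (0.01, 0.05, 0.10, 0.20):
    print(f"alpha={alpha:.2f}  power={power(alpha):.2f}")
```

Running this, power climbs steadily as alpha is relaxed: the question is not whether there is a trade-off, but where to set it.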
Solid theory and high consequences from false negatives are certainly the case for climate science. But, this is a much broader issue across all the sciences. Why 95%? During the birth of statistics in the 18th and 19th centuries, there were good reasons to play dumb. There are good reasons to be smarter now.