Trends that will shape us: Transportation

On April 7, I participated in a panel discussion at the Columbus Metropolitan Club on the topic "Trends that Will Shape Us: Transportation." The other guests were Jack Marchbanks (Director, Ohio Department of Transportation) and Kevin Chambers (Managing Director – Logistics, Distribution and Supply Chain, JobsOhio).

It was an interesting and lively conversation, spanning public transit, the impact of COVID on cities, social equity, infrastructure, and freight and logistics. Check it out!

Link to recording

How SUVs conquered the world – at the expense of its climate

I was interviewed by The Guardian (UK) newspaper about Sport Utility Vehicles (SUVs): how they came to dominate the US market, and the damage they do to the environment, cities, and people:

“How SUVs conquered the world – at the expense of its climate,” The Guardian, 1 September 2020.

The article was also reprinted in Slate:

“C U, SUV: The hulking car has become the world’s most dominant form of transportation—and one of its biggest climate threats,” Slate, 8 September 2020.

Should we always play dumb in science?

A recent article by Naomi Oreskes (co-author of the brilliant but depressing The Collapse of Western Civilization: A View From the Future) questions why we always play dumb in climate science [Playing Dumb on Climate Change].

Prof. Oreskes argues that the well-accepted (read: rarely questioned) 95% confidence limit in statistical tests is a severe standard: it reflects a greater fear of Type I errors (false positives) than of Type II errors (false negatives). It essentially asks scientists to “play dumb”: pretend they know nothing about the phenomenon and reject causality unless there is at most a 1-in-20 chance that the observed relationship occurred by chance.

But, the 95% confidence standard is a convention: it has no basis in nature. What if we’re not so dumb? Instead of starting from a blank slate, what if we have good theory to guide our empirical investigation? Or, what if the consequences of a false negative are much greater than those of a false positive? Should we then accept higher odds of a Type I error (and lower odds of a Type II error) by lowering the required confidence level? What should that level be? Should it vary?
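As a rough illustration of that trade-off, here is a minimal simulation sketch, assuming a simple one-sided z-test with an invented sample size (n = 30) and effect size (0.4 standard deviations); the numbers are illustrative only, not from Oreskes’s article.

    import random
    from statistics import NormalDist, mean

    random.seed(0)
    n, effect, trials = 30, 0.4, 20_000   # assumed study size, assumed true effect, simulation runs

    def reject_rate(true_effect, alpha):
        """Fraction of simulated studies whose one-sided z-test rejects 'no effect'."""
        crit = NormalDist().inv_cdf(1 - alpha)             # one-sided critical value for this alpha
        hits = 0
        for _ in range(trials):
            sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
            z = mean(sample) / (1.0 / n ** 0.5)            # known sigma = 1 for simplicity
            hits += z > crit
        return hits / trials

    for alpha in (0.05, 0.10, 0.20):
        type1 = reject_rate(0.0, alpha)      # false-positive rate: the effect really is zero
        power = reject_rate(effect, alpha)   # detection rate when the effect is real
        print(f"alpha={alpha:.2f}  Type I ~ {type1:.3f}  Type II ~ {1 - power:.3f}")

In this toy setup, each relaxation of alpha raises the false-positive rate (by construction, it equals alpha) while shrinking the miss rate: the trade-off that, Oreskes argues, should be weighed by its consequences rather than settled by convention.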

Solid theory and high consequences from false negatives certainly describe climate science. But, this is a much broader issue across all sciences. Why 95%? During the birth of statistics in the 18th and 19th centuries, there were good reasons to play dumb. There are good reasons to be smarter now.