The Cassandra Articles

He knows that they're the fools.

NOAA vs. Punxsutawney Phil

Each year, around May, the National Oceanic and Atmospheric Administration (NOAA) releases their annual Atlantic Hurricane Season Outlook. This gets picked up by all the major media outlets and is usually displayed prominently, especially when the prediction is for an "above average" season. For a long time, this annual ritual has left me asking myself the same question: "Is this really news?"

If the NOAA's yearly prediction had a good record of being accurate, then one would conclude that this is indeed news. But what if their predictions have a terrible track record? Would that still be news? What if a coin flip were just as accurate as the NOAA's yearly predictions? Has anyone ever gone back and evaluated the NOAA's track record on these Hurricane Season Outlook reports?

Of course, nowadays "news organizations" are no longer really about news and so do not really care about pesky details such as prediction accuracy. Their business is not about news, it is about getting eyeballs on their content and generating revenue. The annual NOAA report is always a conversation piece, and it is even better when an "above average" forecast can be sensationalized. Analytical thinking and facts just get in the way of their market share.

However, let's pretend for a few minutes that we lived in a world that made some sense and revisit the question of whether the NOAA's annual outlook has any scientific value at all. After years of raising this question to myself, I finally decided to spend the time to examine the NOAA's track record on these predictions. As my analysis shows (see below), a coin flip would probably do better than the NOAA's predictions.

The NOAA does some great work and has a lot of good scientists, so I do not mean to disparage the organization as a whole, but there is no scientific value in these wildly speculative and unreliable seasonal predictions, especially in the way the media reports them. This is merely a public relations vehicle and, luckily for them, this is all the media cares about nowadays, so they eagerly eat up this "non-news".

My criticism lies not with the NOAA, but with the political environment that forces a scientific organization to produce meaningless press releases, and with the media outlets that regurgitate those press releases without any thought or, worse, over-sensationalize them. If you are going to run a story about a prediction, basic critical thinking and rudimentary journalistic principles say that you should review and present the track record of the predictor.

Criticism is easy and solutions are hard, so let me not be merely a critic. My solution is to employ Punxsutawney Phil for these annual hurricane predictions. I am sure Phil is not busy this time of year and he definitely has the experience for this type of work. This frees up NOAA resources, which they can divert to doing actual science, and the media still gets its annual spectacle.

NOAA vs Punxsutawney Phil

The Analysis

The main data points reported from the NOAA's Hurricane Season Outlook are the percentages for the season being below, above, or near "normal", so that is the focus of this analysis.

Over the 13-year period from 2001 through 2013, their predictions have been wrong 9 times; they were correct only 31% of the time. Here are the year-by-year predictions and actuals:

NOAA Atlantic Hurricane Season Predictions
Year   Prediction      Actual          Result
2013   above average   below average   Wrong
2012   average         above average   Wrong
2011   above average   average         Wrong
2010   above average   above average   Correct
2009   average         below average   Wrong
2008   above average   above average   Correct
2007   above average   average         Wrong
2006   above average   below average   Wrong
2005   above average   above average   Correct
2004   above average   above average   Correct
2003   above average   average         Wrong
2002   average         below average   Wrong
2001   average         above average   Wrong
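
For anyone who wants to check the arithmetic, here is a small Python sketch that simply tallies the table above. The data is the table transcribed into a list (with "above" and "below" short for "above average" and "below average"); the script itself is mine and not part of the NOAA material.

```python
from collections import Counter

# (year, predicted, actual) transcribed from the table above
seasons = [
    (2013, "above", "below"), (2012, "average", "above"), (2011, "above", "average"),
    (2010, "above", "above"), (2009, "average", "below"), (2008, "above", "above"),
    (2007, "above", "average"), (2006, "above", "below"), (2005, "above", "above"),
    (2004, "above", "above"), (2003, "above", "average"), (2002, "average", "below"),
    (2001, "average", "above"),
]

# Overall hit rate
correct = sum(1 for _, predicted, actual in seasons if predicted == actual)
print(f"correct: {correct} of {len(seasons)} = {correct / len(seasons):.0%}")  # 4 of 13 = 31%

# How often each kind of outlook was issued, and how often it turned out right
issued = Counter(predicted for _, predicted, _ in seasons)
hits = Counter(predicted for _, predicted, actual in seasons if predicted == actual)
for category in ("above", "average", "below"):
    print(f"predicted '{category}': issued {issued[category]}, correct {hits[category]}")
```

Running it gives the 4-of-13 (31%) figure along with the per-category breakdown discussed in the next few paragraphs.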

Given that there are three prediction choices (below, above, and near average), this 31% prediction rate seems like it is no better than a chance guess, but their record is actually worse than that. In those 13 years, the NOAA prediction has been "above average" for 9 of those years, and they have only been correct when the season actually turned out to be "above average". The boy who cries wolf is bound to be right eventually. An even more cynical view would be that they lean toward the sensational for the added public relations benefits.
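
To put a rough number on the chance-guess comparison, here is a back-of-the-envelope check of my own (not anything from the NOAA reports): treat each year as an independent three-way guess with a 1/3 probability of being right, and ask how likely it is that blind guessing would match or beat 4 correct calls out of 13.

```python
from math import comb

n, k, p = 13, 4, 1 / 3  # 13 seasons, 4 correct outlooks, 1/3 chance per blind guess

# One-sided binomial tail: probability of at least k correct by guessing alone
p_at_least_k = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"expected correct by chance: {n * p:.1f}")                # about 4.3
print(f"P(at least {k} correct by chance): {p_at_least_k:.2f}")  # about 0.68
```

Under this simple independence assumption, blind guessing would be expected to get about 4.3 seasons right and would match or beat the NOAA's record roughly two times out of three, so the 31% hit rate carries no evidence of skill.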

During this period they have also never predicted a "below average" season, despite the fact that a below average season occurred 4 times. Note that they have actually predicted a below average season for 2014, so it will be interesting to see what happens this year.

All this means that they have no track record of predicting whether a season will be "below average", and every time they have predicted a "normal" season, they were wrong. In fact, when they say it will be "normal", they have been evenly wrong in both directions: two of those seasons turned out "above average" and two turned out "below average".

To be fair, the NOAA actually breaks down this data into much more detail and connects it to the historical data, but that is not the sort of detail that media outlets get into. I have not focused my analysis on these details, but on the simplified classification that the media reports, since it is the media's use of this data that is most in question.

If you read the report details, you can see the NOAA is really just reporting statistical correlations with past data, so you cannot really say their data is "wrong". However, the data they have does not allow anything to be concluded with any useful degree of statistical significance. It is the only data they have, so they use it, but it cannot really be used in any meaningful way, which is why their track record of using this data for predictions is so abysmal.

"You don't need a weather man to know which way the wind blows." - Subterranean Homesick Blues by Bob Dylan, 1965

Methodology

Some notes on the methodology used here:

  • The data sources used were: the NOAA Atlantic Hurricane Season Outlook reports from 2001 through 2013 and the NOAA HURDAT2 yearly hurricane data from 1851 through 2013. See http://www.cpc.ncep.noaa.gov/products/outlooks/hurricane-archive.shtml and http://www.aoml.noaa.gov/hrd/hurdat/Data_Storm.html
  • Maximum wind speeds were taken from the HURDAT2 data, and the Saffir-Simpson Hurricane Wind Scale was used to classify the storms.
  • A "normal" season is defined as the rounded yearly average of the number of actual hurricanes over the years 1980 through 2013.
  • The "normal" season average is compared to the actual number of hurricanes in a given year to determine whether that year actually was below, above, or near average; a rough code sketch of this classification follows these notes.
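
For the curious, here is a rough Python sketch of how that classification could be computed from per-storm records. The record format, the 64-knot hurricane threshold from the Saffir-Simpson scale, and the treatment of a season exactly at the rounded average as "near average" are assumptions of mine for illustration, not details taken from the NOAA files.

```python
from collections import Counter

HURRICANE_THRESHOLD_KT = 64  # Saffir-Simpson minimum hurricane wind speed, in knots

def hurricanes_per_year(storms):
    """Count storms per year whose maximum wind reaches hurricane strength.

    `storms` is assumed to be an iterable of (year, max_wind_knots) pairs,
    i.e. HURDAT2 rows already parsed and reduced to each storm's peak wind.
    """
    counts = Counter()
    for year, max_wind_kt in storms:
        if max_wind_kt >= HURRICANE_THRESHOLD_KT:
            counts[year] += 1
    return counts

def classify_season(year, counts, base_years=range(1980, 2014)):
    """Label a season relative to the rounded 1980-2013 average hurricane count."""
    normal = round(sum(counts.get(y, 0) for y in base_years) / len(base_years))
    actual = counts.get(year, 0)
    if actual < normal:
        return "below average"
    if actual > normal:
        return "above average"
    return "near average"  # assumption: exactly the rounded average counts as "near"

# Example use (parsed_storms would come from the HURDAT2 file):
#   counts = hurricanes_per_year(parsed_storms)
#   classify_season(2013, counts)
```

Parsing the HURDAT2 files into those (year, peak wind) pairs is the only step omitted here.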