How Musings about Nate Silver are Misguided
I’m a Nate Silver fan. Make no mistake. His book is at the top of my holiday reading list. I have been an avid follower of both his predictive models and his musings for the past 4.5 years. And, when my sister picked me up at the Austin airport on Tuesday evening and we headed to an election watch party, we wound up in a rather heated discussion as to the degree to which the results of the Presidential election were in doubt (I claimed that, as of Tuesday at 7:30 PM Central…they weren’t).
By late Tuesday night, analysts everywhere — despite their political affiliation — rejoiced. Tuesday night was a clear victory for math. The visualization below by the folk at Simply Statistics isn’t the prettiest thing in the world, but it shows how Silver’s predictions compared to the actual vote %:
That’s all well and good. As I said, a victory for math. The Simply Statistics post made an accurate statement when they wrote:
While the pundits were claiming the race was a “dead heat”, the day before the election Nate gave Obama a 90% chance of winning. Several pundits attacked Nate (some attacks were personal) for his predictions and demonstrated their ignorance of Statistics.
But, somehow, this “Silver vs. pundits” thing has taken an interpretive turn off course, as it is now being hailed as the death knell for the punditry profession:
Hold the phone there, Leroy!
From Google, a pundit is:
An expert in a particular subject or field who is frequently called on to give opinions about it to the public
Pundits do a lot more than simply predict election outcomes. Political pundits, certainly, weigh in on how elections might turn out. But, they also weigh in on interpreting the electorate’s behavior, explaining and debating economic/foreign/healthcare policy, and proposing and defending various political strategies and tactics.
Nate Silver’s intent is to predict the outcome of the election based on the data available at the point of the prediction. One of Silver’s main, front page visualizations actually illustrates how his predictions changed over time:
The closer to the election, the more the probabilities drifted towards 100%/0%. This makes sense (and gets us to the main point of this post). There are two — and only two — underlying factors in the accuracy of the prediction at any point in time:
- The mindset/preference of all voters at that point in time — this is measured, quite accurately, by the slew of polls conducted throughout the election cycle
- The amount of time between that point in time and the election — the opportunity for “something to happen” that changes the mindset/preference of voters
Either of these factors can drive up the confidence of the prediction (“confidence” is a dangerous word to use here — I don’t mean it in the strict statistical sense).
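To make the interplay of these two factors concrete, here is a minimal sketch — my own toy model, not Silver’s actual methodology — that treats a candidate’s polling margin as a random walk between now and election day. The function name, the `daily_volatility` parameter, and the normal-drift assumption are all illustrative choices, not anything from the source:

```python
import math

def win_probability(current_margin, days_remaining, daily_volatility=0.3):
    """Toy forecast (illustrative assumption, not Nate Silver's model):
    the polling margin drifts like a random walk, so the total movement
    before election day is roughly normal with standard deviation
    daily_volatility * sqrt(days_remaining). The win probability is the
    chance the final margin stays above zero."""
    if days_remaining == 0:
        # No time left for "something to happen": the poll snapshot decides it
        return 1.0 if current_margin > 0 else 0.0
    sigma = daily_volatility * math.sqrt(days_remaining)
    # P(current_margin + drift > 0) for drift ~ N(0, sigma^2), via the error function
    return 0.5 * (1 + math.erf(current_margin / (sigma * math.sqrt(2))))

# Same 2-point lead, shrinking time horizon: the probability drifts toward 1
for days in (90, 30, 7, 1):
    print(days, round(win_probability(2.0, days), 3))
```

The point of the sketch is the shape, not the numbers: hold the polls fixed and shrink the time remaining, and the probability is pushed toward 100%/0% — exactly the drift visible in Silver’s front-page chart.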
Pundits — who, often, also double as partisan strategists — actually weigh in on the interpretation of both of these factors:
- The polls represent facts…but the facts have to be interpreted. And that interpretation is done with an eye toward action. Which is…
- What can “we” (strategist hat) or “they” (pundit hat) do to cause “something to happen” between now and the election to change voter mindset?
Michael Gerson weighed in on this in an op-ed piece that, while I disagree with his main conclusion, makes this point very eloquently:
The most interesting and important thing about politics is not the measurement of opinion but the formation of opinion. Public opinion is the product — the outcome — of politics; it is not the substance of politics.
Now, let’s pivot from political predictive analytics to marketers and marketing analytics. If a CMO goes to the CEO on Day 1 of a new quarter and says, “Our data is showing that we’re losing market share, but we’ve been working on a major new campaign that is launching next week, and we think we will turn the corner by the end of the quarter,” then he is making a reasonable claim. If, however, that same statement is made on Day 89 of the quarter…it is clearly poppycock.
Analytics — and predictive analytics in particular — are based on historical and “now” data. Analysts can and should be presenting “the truth” as to how things stand. They should be identifying specific problem spots so that, in collaboration with marketers, they can look to do something that will “make something happen” that drives more favorable results.
“Analytics vs. Punditry” is a false debate, as is “Analytics vs. Strategy.”