After a flurry of activity surrounding our site launch and the election, we thought this would be an appropriate time to take a step back and revisit what we are trying to accomplish with PunditTracker.
The way we see it, we have two overarching goals: deflating inflated pundit reputations and surfacing the good pundits.
(1) Deflating inflated pundit reputations
Few would dispute the notion that there is an accountability gap in the world of punditry. Simply put, the incentives in the industry are all wrong, which results in hits (correct predictions) being reported far more frequently than misses (incorrect predictions).
Let’s assume the role of each industry participant to illustrate why this is the case:
- Pundit: When wrong, keep quiet. When right, self-promote: write a book, go on the speaking circuit, etc. Easy, no?
- Media: A pundit comes on your program and makes a bold prediction that turns out laughably wrong. What should you do? Pointing that out would indirectly discredit you for booking him in the first place, so instead you introduce the pundit again as an “expert”, with no reference to the wrong call. Moreover, consider how news is generated. Some event happens, which triggers you to sift through history to see who got it right. The flip side does not hold: events that never happen trigger no search for the pundits who wrongly predicted they would.
- Us: Not only are we force-fed a skewed sample of prediction outcomes, but we are also psychologically wired to remember hits more than misses. As we discussed in a previous post, unusual information has an outsized grip on our memory. Bold calls are typically incorrect, so we quickly forget those. Meanwhile, bold calls that turn out right are unusual and therefore stick in our mind. And because we tend to confuse ease of recall with frequency, we develop a warped sense of the pundit’s batting average.
All these factors have given rise to what we call the pundit playbook. Pundits are entirely incentivized to churn out a bunch of bold predictions, knowing that there is plenty of upside if they get one right and no downside if they get them all wrong. Ever wonder how One-Hit Wonders and Broken Clocks are able to sell books about their new predictions despite pathetic track records?
PunditTracker aims to fix this moral hazard by playing the role of public scorekeeper. There are two notable twists in how we score predictions. Our first twist is to incorporate boldness in addition to accuracy. The typical “hit rate” or “batting average” approach (# of correct calls divided by # of total calls) assumes all predictions are equal, which is decidedly not the case. The daily prediction “the sun will rise tomorrow” would (hopefully) yield a perfect hit rate, after all. We instead use a “$1 Yield” metric, which measures the average payout had you bet $1 on each of the pundit’s predictions, based on consensus odds at the time (odds are driven by user votes). A yield of exactly $1.00, for instance, means the pundit’s predictions were no better or worse than the consensus view at the time.
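To make the metric concrete, here is a minimal sketch of how a “$1 Yield” could be computed. The payout rule is an assumption on our part (the post does not spell out the formula): we treat the consensus probability from user votes as fair odds, so a correct call at probability p pays $1/p and an incorrect call pays nothing.

```python
# Hypothetical sketch of the "$1 Yield" metric. Assumes each prediction
# carries the consensus probability (from user votes) that it would come
# true, plus whether it actually did.

def dollar_yield(predictions):
    """Average payout from betting $1 on each prediction at fair
    consensus odds: a hit at probability p pays 1/p, a miss pays 0."""
    payouts = [1.0 / p if hit else 0.0 for p, hit in predictions]
    return sum(payouts) / len(payouts)

# A pundit who merely matches the consensus tends toward a $1.00 yield:
coin_flips = [(0.5, True), (0.5, False)]
print(f"${dollar_yield(coin_flips):.2f}")  # $1.00

# One bold hit (consensus said only 20% likely) outweighs a routine miss:
bold = [(0.2, True), (0.5, False)]
print(f"${dollar_yield(bold):.2f}")  # $2.50
```

Note how this rewards boldness automatically: the less likely the consensus deemed a prediction, the more a correct call pays out, while a string of safe, obvious hits earns no premium over $1.00.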
(2) Surfacing the good pundits (assuming they exist)
A fundamental problem PunditTracker faces is that the pundits we are currently tracking are likely below-average. The reason is that the majority of pundits on our site are household names. In today’s prediction industry, a mainstream pundit is usually one who has perfected the playbook, making bombastic predictions simply to garner media attention. In other words, the mainstream pundit is all too often a bad pundit. This election prediction season was a perfect example: many of the political pundits we tracked are arguably not pundits at all but rather partisan mouthpieces (see: broken clocks). This is simply a function of the environment: we would love to track those who refuse to employ the playbook, but they aren’t on television.
This problem triggered the idea for the second twist to our website: providing users with a platform to make predictions. When you vote on the likelihood of a pundit’s prediction, you are effectively making a prediction of your own. This enables us to score our users the exact same way we grade pundits. By leveling the playing field between “pundits” and “users,” we can introduce a much-needed dose of meritocracy into the system.
By deflating inflated pundit reputations and surfacing the good pundits, we strive to disrupt the prediction industry. This won’t be an easy task, as there are plenty of entrenched interests at play and the system is wired for self-protection.
The good news is that our timing may prove fortuitous. “Moneyball” has popularized the notion of data-driven analysis, and given the public’s growing demand for accountability, we believe the stars are aligned for disruption. For this to happen, though, it needs to be a collective effort. While we are ramping up the number of pundits tracked on PunditTracker (now more than 120), we readily admit that there are many more to catalog. So the next time you see a prediction on television, hear one on the radio, or read one on the Internet, take a second to send it our way to track and ultimately score. With your voice and our platform, we can finally bring accountability to the prediction industry.
We would like to close by thanking our users for all their suggestions for the website. Keep ’em coming!