Posts Tagged ‘General’
In our last post, we highlighted our preference for “second acts,” which we defined as repeated success in different environments. That discussion focused on the first component, repeated success. Here, we delve into the second: different environments.
As with broken clocks, which are right twice each day, many pundits make the same prediction over and over, knowing that they will be intermittently correct. The stock market offers an ideal breeding ground for this type of pundit, given its long cycles and binary outcomes (the market is either up or down). Those who are always optimistic on the stock market (known as “perma-bulls”) are right during bull markets, while those who are always downbeat (“perma-bears”) are right during bear markets. Both groups are anointed as gurus when the environment happens to be in their favor, but they are dead wrong during the other cycle. The behavioral elements discussed in our One-Hit Wonders post are again at play: recent, vivid, and unusual information are front-loaded in our brains. But another bias is at work here: the “fundamental attribution error.”
The fundamental attribution error refers to the idea that when explaining successes and failures, we tend to overweight the role of the individual and underweight the roles of chance and context. For instance, in the early 2000s, much ink was spilled in lauding Terry Semel and Meg Whitman (then CEOs of Yahoo and eBay, respectively) as superstar executives. The alternative explanation — that these CEOs were running companies that happened to be at the right place at the right time — never received much thought. That’s not surprising, given that cause-and-effect, character-driven narratives are inherently much more appealing than chalking things up to good fortune.
Similarly, rather than considering that some pundits offer a static viewpoint which happens to coincide with an existing cycle, we elevate them to oracle status. We hope PunditTracker will help distinguish those pundits with the mental flexibility to provide insight in different environments from the broken clock crowd.
At PunditTracker, we place extra weight on second acts: people who demonstrate repeated success in different environments. Examples include former Apple CEO Steve Jobs and NFL coach Bill Parcells.
Just as half-truths are more dangerous than outright lies, the most dangerous pundit is the one who parlays a single correct call into guru status. While the media plays a central role in this game, our brains are culpable as well. Behavioral studies reveal that certain types of information have an outsized grip on our memory, including that which is recent, vivid, and unusual. This explains why we: (1) buy earthquake insurance after a (recent) earthquake, (2) believe that more people die from homicide (vivid) than from stomach cancer, (3) complain that we always get stuck in the slowest-moving line (unusual) at the grocery store.
The pundit playbook fully exploits this dynamic. To understand how, let’s place ourselves in the shoes of a pundit. What is the best way to make a call that meets all three criteria: recent, vivid, and unusual? Well, recent is easy—just make a lot of calls. That way, there will always be a fresh one out there. And vivid is just another word for bold. So if we make bold calls frequently, we are two-thirds of the way there. But how about unusual?
The beauty of the pundit playbook is that unusual takes care of itself. Bold calls are typically incorrect, so the correct ones are by definition unusual. Said differently, bold calls that turn out wrong are less likely to be remembered because they fail to meet the unusual threshold. Pundits are therefore entirely incentivized to churn out brash predictions, knowing that only the correct ones will stick in our mind. And because we tend to confuse ease of recall with frequency, we develop a warped sense of the pundit’s batting average.
This phenomenon is found in all walks of life, including sports. NBA guard Chauncey Billups, for instance, has been dubbed “Mr. Big Shot,” presumably because he has hit many clutch shots in his career. A closer look at the numbers, however, suggests that Billups’ nickname might be undeserved. Data from 82games.com reveals that Billups’ game-winning shot percentage between 2003 and 2008 was a paltry 16% (6 for 37), well below his 42% overall career shooting average. Our hunch is that a few of those game-winners were in high-profile, nationally televised games (vivid), thus sticking in the public’s mind and inflating Billups’ reputation as a clutch shooter.
The real danger comes when actions are taken based on a false premise. In this case, Billups’ inflated reputation is likely to garner him most of his team’s game-winning shot attempts, even though other players would be better options. Teammate Carmelo Anthony, for instance, had a sterling 48% game-winning shot percentage (13 for 27) over the same timeframe.
As anyone in marketing knows, associations, once established, are very sticky. This explains how pundits are able to cash in for many years on the “one big call” they got right, despite sporting a terrible track record both before and afterwards. By playing the role of public scorekeeper and surfacing the data for everyone to see, we hope to expose the one-hit wonders.
We are thrilled to announce the launch of the PunditTracker.com website. The blog will continue in its current form, serving to contextualize, analyze, and discuss the content on the main site.
Let us know what you think!
As our readers know by now, we consider the “hit rate” approach typically used to evaluate pundits to be flawed. This approach assumes that all predictions are equal, which is decidedly not the case. When we embarked on addressing the accountability gap in the prediction industry, we knew an alternative scoring system was needed.
Our solution will be to calibrate each prediction for boldness. We will measure this by asking our users how likely they think a given prediction is to occur. If everyone says “unlikely,” then the call is bold, and the pundit, if correct, should receive more credit than he would for a call deemed “likely.” This moment-in-time gauge of consensus opinion will underpin our scoring algorithm.
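As a rough illustration, the calibration described above might look something like the following sketch. This is our simplified stand-in for exposition, not the production algorithm: a correct call earns credit in proportion to how few users deemed it likely, while missing a consensus “sure thing” costs more than missing a long shot.

```python
# Hypothetical boldness-weighted scoring sketch (illustrative only,
# not PunditTracker's actual algorithm). Each prediction carries the
# fraction of users who voted that it was likely to occur.

def score_prediction(correct: bool, consensus_likelihood: float) -> float:
    """Return a score in [-1, 1].

    consensus_likelihood: fraction of users (0.0-1.0) who deemed the
    prediction likely before the outcome was known.
    """
    if correct:
        # A call everyone expected (likelihood 1.0) earns nothing;
        # a call no one expected (likelihood 0.0) earns full credit.
        return 1.0 - consensus_likelihood
    # Missing a consensus call costs more than missing a long shot.
    return -consensus_likelihood

# A bold correct call earns far more than a consensus correct call.
print(score_prediction(True, 0.1))   # 0.9
print(score_prediction(False, 0.9))  # -0.9
```

Under a rule like this, the sun-will-rise pundit accumulates almost no credit, while a correct out-of-consensus call moves the needle.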
The trick, of course, is getting people to vote. While we could bank on the appetite of our user base to hold pundits accountable, we have decided to throw in an extra twist: we will score our users as well. When users vote on the boldness of a pundit’s prediction, we have all the data needed to grade them just as we do our pundits (on an opt-in basis, of course). We envision a significant social component to the website, including user rankings and the ability for users to submit their own predictions once they achieve a certain score.
This notion of user scores should not only make the website interactive and engaging but also lend more integrity to our underlying pundit scoring algorithm. In the coming weeks, we will be providing more examples of how the scoring system will work. As always, we welcome any feedback.
When we at PunditTracker initially discussed the notion of creating a website to hold pundits accountable, the first question on our minds was: “Why doesn’t this exist already?” There is clearly demand for such a service: the Google search “pundits accountability” yields more than one million results.
As we began developing the site, we quickly realized the answer to our question: Because it is more difficult than it sounds.
Predictions are rarely black-and-white and therefore fail to fit a neat scoring system. Consider the following examples:
- Pundit: “Gold will go up 20% this year.”
Outcome: Gold goes up 19%.
Should the pundit receive some (partial) credit?
- Pundit: “Donald Trump will probably run for president.”
Outcome: Trump does not run.
Was the pundit wrong, or does the hedge “probably” provide an escape hatch?
- Pundit: “Dwight Howard will get traded.”
Outcome: Dwight Howard is still on the Magic.
Given that there is no specified end date, when can this call officially be marked wrong?
- Pundit: “The new iPhone will be groundbreaking.”
Outcome: The new iPhone is released to rave reviews.
How do you define groundbreaking? Isn’t it purely subjective?
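For the close-call case above (gold up 19% against a predicted 20%), one hypothetical approach, which we sketch here purely for illustration, is to award partial credit that decays with the size of the miss relative to a tolerance band:

```python
# Hypothetical partial-credit rule for numeric predictions. The
# tolerance band is an assumption for illustration, not a parameter
# any real scoring system has committed to.

def partial_credit(predicted: float, actual: float, tolerance: float) -> float:
    """Return credit in [0, 1]: full credit for an exact hit,
    zero once the miss exceeds `tolerance` percentage points."""
    miss = abs(predicted - actual)
    return max(0.0, 1.0 - miss / tolerance)

# "Gold will go up 20%" vs. an actual 19% rise, with a 5-point band:
print(partial_credit(20.0, 19.0, tolerance=5.0))  # 0.8
```

The choice of band is itself a judgment call, which is exactly why close calls resist a neat scoring system.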
From close calls to hedged calls to unbounded calls to subjective calls, pundit predictions are typically bursting with shades of grey. A cynic (realist?) would ascribe this to a deliberate ploy by pundits to garner media attention while evading blame. A recent Wall Street Journal article discussed the brilliance of the “40% rule”: assign a 40% chance to an event, and you look prescient if it happens yet never actually said it was likely. (Our running joke internally is that there is a 49.99% chance that PunditTracker will be a smashing success.)
Moreover, even when predictions are black-and-white, scoring them is not necessarily so. A fairly obvious scoring system employs what’s called a “hit rate” or “batting average” approach: take the number of correct calls and divide it by the number of total calls. If I make ten calls and get seven right, my hit rate is 70%. The problem is that this figure is useless without context. If I predict each day that the sun is going to rise tomorrow, I am (hopefully) going to have a perfect hit rate. Using this system, I receive a score twice as high as the pundit who predicted the 2008 financial collapse but then missed a trivial call the following year. That hardly seems fair, which suggests that predictions should somehow be calibrated for “boldness.”
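The hit-rate arithmetic above can be made concrete. In this sketch, a pundit who only makes trivial calls scores twice as high as one who nailed the hardest call of the decade but missed a layup:

```python
# Why an unweighted hit rate misleads: every call counts equally,
# regardless of difficulty.

def hit_rate(calls: list[bool]) -> float:
    """Fraction of calls that were correct."""
    return sum(calls) / len(calls)

sun_riser = [True] * 10        # ten "the sun will rise" predictions
crisis_caller = [True, False]  # called the 2008 collapse, missed an easy one

print(hit_rate(sun_riser))      # 1.0
print(hit_rate(crisis_caller))  # 0.5
```

The raw numbers say the sun-riser is twice as good, which is precisely the distortion a boldness calibration needs to correct.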
We have wrestled with all these issues—and more—while designing our scoring system. We will share some of our solutions with you over the coming weeks. As always, we welcome any feedback.
- One Hit Wonders = Pundits who are able to cash in for many years on the one big call they got right, despite sporting a terrible track record both before and afterwards.
- Broken Clocks = Pundits who make the same prediction over and over, knowing that they will be intermittently correct.
As a reminder, we are always looking for suggestions for which pundits to track — simply click the button at the bottom of the page. We have already received some great ideas.
PunditTracker’s mission is to bring accountability to the prediction industry.
The absence of media memory creates significant moral hazard in the world of punditry. Nuance and restraint do not play well on soundbite- and ratings-driven media, so shelf space is granted to those who espouse more extreme views. Ideally, these pundits would gain or lose credibility based on the outcomes of their calls. The 24-hour news cycle, however, means that the media is always latching on to the new flavor of the day. Rare is the postmortem to evaluate prior stories.
Pundits are highly incentivized to adhere to the following playbook:
- make a brash prediction
- if wrong, don’t worry: no one will remember
- if right, selectively tout for self-promotion
- repeat cycle
By cataloging and scoring the predictions of pundits, we hope to bring some balance to the equation. Pundits who demonstrate a track record of making accurate, out-of-consensus calls will appropriately receive their due. Meanwhile, those who are bombastic solely to garner media attention will be exposed.
The website is slated to launch in 2012 and will initially track three types of pundits: Financial, Political, and Sports. In the meantime, we will be running the site as a blog, previewing what’s to come as well as sharing some general thoughts about the prediction industry. We welcome all feedback, including suggestions on who you would like to see tracked – just click here.