Fact: According to the latest Rasmussen poll released Saturday July 12, and promptly headlined by the Drudge Report, “The race for the White House is tied. The Rasmussen Reports daily Presidential Tracking Poll for Saturday shows Barack Obama and John McCain each attract 43% of the vote.” Newsweek is reporting a similar result in its own poll, with Obama moving down and McCain up (“Obama, McCain in Statistical Dead Heat“), and other polls increasingly show a similarly close race.
Analysis: I’ve been tracking the growing divide between two quite different methods purporting to offer statistical predictive analysis for the November presidential election. Polls are saying one thing, but Prediction Markets are saying another.
The most prominent Prediction Market (PM) offering presidential predictions, based on the wisdom of its particular crowd, is Intrade, and it still has Obama handily whipping McCain. More than $4.5 million has been wagered at Intrade on those two candidates, and the money is banking on the Democrat by more than two to one. Other PMs (like NewsFutures) continue to reflect Obama as a prohibitive favorite.
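How does a market price turn into a "two to one" favorite? Here is a minimal sketch, assuming hypothetical winner-take-all contract prices (the placeholder numbers below are illustrative, not actual Intrade quotes), showing how a contract price maps to an implied win probability alongside the poll shares:

```python
# Sketch: converting prediction-market contract prices into implied
# win probabilities, and contrasting them with raw poll shares.
# The market prices below are hypothetical placeholders, not real Intrade quotes.

def implied_probability(price_cents: float, payout_cents: float = 100.0) -> float:
    """A winner-take-all contract pays 100 if the candidate wins; its trading
    price (in cents) is read as the crowd's probability estimate."""
    return price_cents / payout_cents

# Hypothetical prices consistent with "better than two to one" money on the Democrat
market_prices = {"Obama": 66.0, "McCain": 31.0}   # cents per contract (placeholder)
poll_shares   = {"Obama": 43.0, "McCain": 43.0}   # Rasmussen daily tracking, July 12

for candidate in market_prices:
    prob = implied_probability(market_prices[candidate])
    print(f"{candidate}: market-implied win probability {prob:.0%}, "
          f"poll share {poll_shares[candidate]:.0f}%")
```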
So whom to trust? Admittedly, they’re answering slightly different questions. PM wagerers are betting on who’ll win on Election Day, while pollsters typically ask, “Whom would you vote for if the election were held today?” But pollsters came to prefer that question after sociological research showed it better reflected likely voting behavior.
True science is cool for several reasons, but foremost among them is: falsifiability. You can think of this as “testability” but the scientific method itself necessarily includes the prediction and its potential refutation, not just the test. This is sexy stuff at the core of the philosophy of science. As a grad student at Stanford back in the day, I was exposed to the works of Karl Popper, Thomas Kuhn and Imre Lakatos (giants of falsifiability), only because Political Science – like Economics – was under heavy criticism for its purported non-falsifiability. Calumny, my professors cried. “We are too falsifiable! Let’s make predictions!” Only to be proven wrong…
Within a couple of years, while still young and wasting my time in “applied politics,” I used to say that one thing I really liked about working on campaigns was the definitive end. There came, inevitably, Election Day. No matter what your candidate had done or left undone during the campaign, no matter how frenetic the final weeks and days, the day of reckoning (and voting) came as predicted – and you had your result.
Then came the 2000 election. Oh well.
I’ve written before about Prediction Markets – and there is increasing interest in the intelligence community in their potential for warning and predictive analysis. But they are somewhat controversial in an intelligence context, both within and outside the community. Many remember the abortive prediction market attempted in DARPA’s TIA program several years ago, done in by the “shock value” of the revelation that a government agency might be fostering “wagers” on assassinations and the like.
Of course one of the hallmarks of “prediction science” is the value and necessity of falsifiability. And in political prognostication, you should try to make your predictions as clearly and definitively as the available evidence allows.
Is Obama way up, or has McCain pulled even?
It may be that average people talking to pollsters are weighing different evidence than the self-chosen participants in prediction markets. Differences in campaign style and focus – NASCAR (McCain’s favored sport) vs. Wilco (Obama’s favored band), let’s say – might be echoing and resonating among the vast masses of American voters, most of whom are only slowly coming to focus on their choice this fall.
What is actually being put to the test each election year is the definition and utility within the political context of “Wisdom of the crowds,” the paradigm popularized by James Surowiecki holding that the choices of “groups are often smarter than the smartest individual within them.” Poll results depend on the accumulated count of which candidate people say they’re voting for; PM results reflect the accumulated cognitive guesswork of those who think they know which candidate people will vote for.
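To make that contrast concrete, here is a toy simulation of my own (not anything from Surowiecki or the pollsters), with assumed parameters: a poll tallies what a random sample of respondents say they will do, while a market price approximates the average of its self-selected participants’ noisy beliefs about the outcome – which is where the “smarter than the smartest individual” effect is supposed to come from.

```python
# Toy simulation of the two aggregation mechanisms, under assumed parameters.
# A poll tallies what respondents say they will do; a market averages what
# participants *believe* the outcome probability is.
import random

random.seed(42)
TRUE_SUPPORT = 0.52          # assumed "true" share intending to vote for the Democrat

# Poll: sample 1,000 respondents and count stated preferences.
respondents = [random.random() < TRUE_SUPPORT for _ in range(1000)]
poll_estimate = sum(respondents) / len(respondents)

# Market: 200 self-selected traders each hold a noisy belief about the win
# probability; the market price approximates the crowd's average belief.
TRUE_WIN_PROB = 0.65         # assumed probability that the 52% edge holds up in November
traders = [min(max(random.gauss(TRUE_WIN_PROB, 0.15), 0.0), 1.0) for _ in range(200)]
market_price = sum(traders) / len(traders)

avg_individual_error = sum(abs(t - TRUE_WIN_PROB) for t in traders) / len(traders)
crowd_error = abs(market_price - TRUE_WIN_PROB)

print(f"Poll estimate of vote share: {poll_estimate:.1%} (truth {TRUE_SUPPORT:.0%})")
print(f"Market price (avg belief):   {market_price:.1%} (truth {TRUE_WIN_PROB:.0%})")
print(f"Average individual error {avg_individual_error:.3f} vs crowd error {crowd_error:.3f}")
```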
The answer to “who’s right” for political prediction isn’t as simple as positing “crowds” or “groups” in opposition to “individuals.” Prediction Markets are groups themselves, but groups of self-appointed, self-interested, non-randomly chosen “experts.” Poll results are commonly reported as a communal vox populi, but every individual respondent answers in the solitude of a phone call from a stranger.
There are some thorny sociological, psychological, and mathematical issues still puzzling those trying to understand whether Prediction Markets offer great value or not. Microsoft Research has explored prediction markets, running an internal one as the “Information Forecasting Exchange” from 2003 to 2006. Internal efforts at Yahoo and Google have also been noted. But frankly, I’m not actively promoting PMs to government friends, as I don’t believe we understand the results and supporting science well enough yet.
Indeed, it appears to me that PMs are growing not from corporate or government use, but mostly organically from within academia, stock-futures circles and political-junkie communities. I’m reading the interesting variety of writers and prediction-marketeers at MidasOracle, which brings together widely ranging posts from faculty members at Harvard and other universities, daytraders, and even a few “amateurs.”
And meanwhile the ultimate political junkies, the pollsters, are stepping across the divide: the aforementioned Rasmussen polling firm has launched its own Prediction Market! But as if to underline the confusion between these two approaches, it turns out that while Rasmussen’s polls show a dead heat, the Rasmussen Prediction Market has Obama clobbering McCain!
It just goes to show that the “Wisdom of the Crowd” depends upon which crowd you want to trust: a crowd of self-appointed experts, or a crowd of the great unwashed.
“I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.” – William F. Buckley, 1965
Finally, there’s always that wacky “unpredictability of human events” X factor, where people sometimes just decide to go hog-wild and do something outside the bounded game-theory boxes. Tom Edsall, once a political writer for the Washington Post and now blogging for the Huffington Post, wrote yesterday that there’s even some thought that Hillary Clinton may rise again, at the Democratic convention in Denver.
Take that, Science!
Hi Lewis,
The validity of any forecasting technique should be tested by looking at historical results. The way to track the accuracy of a prediction market is to look at how its historical forecasts did against the actual outcomes. Google published a very nice analysis of that here:
http://googleblog.blogspot.com/2005/09/putting-crowd-wisdom-to-work.html
We have done the same analysis for around 5,000 markets we run at Hubdub (www.hubdub.com), and we are getting similar results.
Nigel
Nigel – thank you for the links. Your point on historical results is well taken; political polling certainly has an observable track record and I’m sure that the political PM field is developing and analyzing its own.
I knew about the Google work; to be honest, I’m not sure who followed whom – GOOG and MSFT each ran prediction-market efforts on product-development subjects, and both seemed interesting. I have now registered for Hubdub to check it out. Thanks again – lewis
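For what it’s worth, the historical check Nigel describes boils down to a calibration analysis: bin past market prices, then see how often the predicted events actually happened within each bin. A minimal sketch with invented records (any real study would substitute archived market data):

```python
# Sketch of a calibration check for a prediction market's historical record.
# Each record is (market-implied probability at close, whether the event occurred).
# The records below are invented for illustration only.

records = [
    (0.15, False), (0.22, False), (0.35, True), (0.48, False),
    (0.55, True),  (0.63, True),  (0.71, True), (0.78, True),
    (0.84, True),  (0.91, True),  (0.12, False), (0.67, False),
]

def calibration_table(records, bins=5):
    """Group forecasts into probability bins; in a well-calibrated market the
    observed frequency should roughly match each bin's average forecast."""
    table = []
    for i in range(bins):
        lo, hi = i / bins, (i + 1) / bins
        in_bin = [(p, outcome) for p, outcome in records if lo <= p < hi]
        if in_bin:
            avg_forecast = sum(p for p, _ in in_bin) / len(in_bin)
            observed = sum(1 for _, outcome in in_bin if outcome) / len(in_bin)
            table.append((f"{lo:.1f}-{hi:.1f}", len(in_bin), avg_forecast, observed))
    return table

for rng, n, forecast, observed in calibration_table(records):
    print(f"bin {rng}: n={n:2d}  avg forecast {forecast:.2f}  observed freq {observed:.2f}")
```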