FACT: Among the inspired ideas of polymath Danny Hillis (pioneer of parallel computing) was establishing the Long Now Foundation, whose projects include the millennial Long Now Clock (“the world’s slowest computer”) and the notion of “Long Bets.” A Long Bet is an “accountable prediction”: one that has a specified end date and a testable, wagerable proposition. One of the early Long Bets posted is a $2,000 wager that “By 2020, no one will have won a Nobel Prize for work on superstring theory, membrane theory, or some other unified theory describing all the forces of nature.” That bet is one of many signs of scientific skepticism about string theory.
ANALYSIS: Even without the ease of hyperlinks, old-fashioned newspapers foster serendipitous connections between articles, particularly if you’re reading a Sunday morning paper with lots of sections. On Sunday the Washington Post did me a service by placing in different sections two articles that I connected, one about intelligence “failures” and the other about stock-market prediction, leading me to some web-surfing about the questionable validity of string theory and to some related observations about the difficulty of predicting human behavior.
In the Outlook section, the Post ran an opinionated and thought-provoking op-ed piece by Mark Lowenthal, one of the most “intelligent” individuals in the recent history of the U.S. intelligence community (after all, he was the 1988 Jeopardy grand champion, as well as a former assistant director of the CIA).
Lowenthal’s piece is worth reading. In fact, an early copy of it was being emailed around the community of former intelligence executives late last week and getting attention; I got it from four different people on Friday. One main point he makes is:
In 9/11, intelligence was excoriated for “failing to connect the dots” — a demeaning, inapt concept that, unfortunately, has entered the popular lexicon. But in Iraq, intelligence was blamed for connecting too many dots.
His major thesis is that “Intelligence is not in the business of predicting or forecasting,” and that the rest of us need to acknowledge that, and to expect from our intelligence community only what it can reliably produce: information wrapped in intelligent analytic context. There’s no magic involved, save the rare occurrence when technology or espionage results in a truly unknown secret of immediate value, stolen and delivered in timely form. Those instances are rare, and as Lowenthal points out:
Intelligence tends to do worse on the “big events” (Pearl Harbor, 9/11, the fall of the Berlin Wall) because these events, by their very nature, are counterfactual or surprising. Nor can intelligence eliminate the element of surprise. Intelligence is actually good at something that can seem awfully mundane: keeping policymakers generally well-informed on a recurring basis so that they can make decisions with a reasonable sense of confidence.
Meanwhile, over in the Post’s Sunday Business section, there’s an unrelated yet relevant article about the role of behavioral economics in understanding individual investors’ decision-making. The article, “How Thinking Costs You: When It Comes to Investing, People Aren’t That Smart,” summarizes some of the work of Nobel Prize winner Daniel Kahneman, who among other things highlights the role of an individual’s “overconfidence” in making investment decisions. As Kahneman puts it, “It’s the idea that you know better than the market, which is a very strange idea. Individual investors have no business at all thinking they can do better.” And in remarks that sound very parallel to Mark Lowenthal’s assessment of intelligence analysis, Kahneman says:
It’s because we have no way of thinking properly about what we don’t know. What we do is we give weight to what we know and then we add a margin of uncertainty. You act on what you think will happen. In fact, in most situations what you don’t know is so overwhelmingly more important than what you do know that you have no business acting on what you know.
There are no solidly scientific routes to “connecting the dots” when the dots themselves are incomplete, fuzzy, and drawn from the amorphously chaotic world of human behavior. These two articles drove me to think about an upcoming visit to the new Microsoft Research New England lab in Cambridge, Mass., which will focus on the social sciences, and I turned to the web to refresh myself on the background of its fascinating director Jennifer Chayes, a specialist in mathematics, theoretical computer science, and cryptography (I wrote about her earlier in a post about social networks and the social sciences).
So while I was surfing around a bit Sunday afternoon, enjoying reading about Chayes’s work, I found a mention on the blog of Peter Woit, a well-regarded mathematician at Columbia University. (They apparently attended grad school together at Princeton where, according to Woit, Chayes had “even more impressive leather outfits than the Ramones”). Woit is a leading skeptic on the subject of string theory, regarding it essentially as non-falsifiable and therefore not real science. Here’s a solid review of his critique in The Times of London.
Well, now I’ve basically drawn a line (connected some dots) between intelligence analysis, market forecasting, and string theory. Is it really fair to compare making unified-theory math predictions, or stock predictions, with making foreign-affairs predictions in intelligence analysis? Probably not – if only because I think the intelligence problem is actually harder, and less amenable to rigor.
I suspect that Woit would enjoy reading Lowenthal’s book on improving intelligence analysis and the futility of connecting the dots (“Intelligence: From Secrets to Policy”). Similarly, the next time I see Mark Lowenthal I’m going to see if he’s read Peter Woit’s brilliantly feisty book “Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law.” Lowenthal may already have; after all, he’s a Jeopardy champion 🙂
And, maybe I’ll work on getting Woit, Kahneman, and Lowenthal all in a room in Cambridge to talk with Jennifer Chayes about a unified theory for predicting all human behavior….