How do intelligence analysts handle the long-discussed problem of information overload? (The same question goes for information workers and government data of any kind.)
I put the word “problem” in quotation marks because there is now a persuasive counter-argument that no such thing exists: Clay Shirky argues that people may in fact suffer from what he calls filter failure, but that more information in an info-centric environment is, by definition, still a good thing.
“Information overload started in Alexandria, in the library of Alexandria, right? That was the first example where we have concrete archaeological evidence that there was more information in one place than one human being could deal with in one lifetime, which is almost the definition of information overload. And the first deep attempt to categorize knowledge so that you could subset; the first take on the information filtering problem appears in the library of Alexandria.
By the time that the publishing industries spun up in Venice in the early- to mid-1500s, the ability to have access to more reading material than you could finish in a lifetime is now starting to become a general problem of the educated classes. And by the 1800s, it’s a general problem of the middle class. So there is no such thing as information overload, there’s only filter failure, right? Which is to say the normal case of modern life is information overload for all educated members of society…
So, the real question is, how do we design filters that let us find our way through this particular abundance of information?” – “Overload! Interview with Clay Shirky,” Columbia Journalism Review, December 19, 2008
So let’s assume that filters are working for an analyst, yet she or he still sits atop an ever-replenished mountain of new and now presumably relevant data: images, field reports, news accounts, cryptic tweets, classic message traffic, and the like. Even if filters whittle down the data to usable information – “good dots” – it still takes the human brain to investigate, explore, analyze, and connect those dots into accurate patterns, or to discern the true absence of connection, or to uncover a misleading connection.
I’ve often heard that intelligence work would be easier if we could be like Tom Cruise in Minority Report, manipulating large amounts of data by hand with 3D touch gestures. We can’t do that yet, but we’re approaching the capability in a variety of ways. One new way is inherent in the Windows 7 operating system I’ve been using in our internal Microsoft deployment for several months, because Win 7 has built-in support for multi-touch and gesture-based interaction techniques.
But most people won’t have a touch-screen laptop or monitor for their Win 7 use, at least not for a while. So there are possibilities we’re exploring internally and with partners for cheap and easy ways to take advantage of the multitouch capability in the OS. For example, why not just plug in a USB mouse-like device that actually enables full gesture-based computing?
Here’s a video showing several prototypes of exactly that, from our Applied Sciences Group and several Microsoft Research labs; check out especially the cool “Side Mouse” which appears at about 6:30 into the video:
Thinking of national-security work alone, several of these approaches would be useful for imagery analysts, say, or defense planners, or intelligence professionals collaborating on large amounts of information. But there’s broader applicability too. I’ve been perplexed by my new Google Wave beta, in large part because the array of incoming information seems chaotic and non-intuitive; perhaps multitouch would help there as well, even just among textual items.
This past week was a big exposition party for several new UI approaches from Microsoft Research, as researchers from our Cambridge and Redmond labs presented 13 separate papers and demos at the annual Symposium on User Interface Software and Technology (UIST 2009) in Victoria, British Columbia. UIST is a premier forum for innovation in the software and technology of human-computer interfaces, sponsored by the Association for Computing Machinery’s special interest groups on computer-human interaction and computer graphics.
Among the papers MSR presented were the following:
Ripples: Utilizing Per-Contact Visualizations to Improve User Interaction with Touch Displays
Contact Area Interaction with Sliding Widgets
Collabio: A Game for Annotating People within Social Networks
Augmenting Interactive Tables with Mice & Keyboards
Enabling Always-Available Input with Muscle-Computer Interfaces
Optically Sensing Tongue Gestures for Computer Input
Integrated Videos and Maps for Driving Directions
You can find a full list of the papers we presented at UIST 2009 here, with links to some additional cool videos as well.
Let me know if you think there’s an interesting application of any of these approaches for government use in your arena. 🙂
I know I’m changing the subject, as you intended this post to be about technologies for managing information. But in the case of intelligence analysis, I don’t think we should enable more data. Instead, we should put more emphasis on the data we already have.
Very soon into the analysis process, the marginal value of each new piece of data starts to plummet, and eventually becomes negative. Though analysts recognize this, it’s hard to put down the mouse: after all, the golden piece of information always feels like it’s one click away, and if they just keep looking, they’ll eventually find the answer to all of their questions, spelled out for them in a single source document. But that document seldom exists.
Instead of asking analysts to indefinitely search and sort, we should have them do more actual analysis with the information they already have. When information is lacking, work with collectors to change that instead of foraging for it online.
Hi Matthew, thanks for the very thoughtful comment. I agree – and didn’t intend to highlight technologies for endless searching, or aggregation. These are technologies (hardware included) for better analysis, not for more data-collection, at least to my way of thinking.
I think that touch-enabled exploration of just existing data, already determined to be relevant, can be thought of as the equivalent of “chewing over” the information. Rolling it around and around in your head, mulling it over, puzzling it out, seeking different perspectives on the evidence you think you have assembled, but seen in a new light.
This is something that might be better shown and experienced than described here, so why not come on down and collect that beer I owe you (or you owe me), and I’ll take you into the lab and show you a few things, to get your critique. Should be fun.
[…] Once You Get Past Filter Failure – Lewis Shepherd, Shepherd’s Pi […]
Thanks for the useful information! Looking forward to new posts. 😉
[…] technologist by choice, so I tend to see the potential for future progress in addressing the issue. I’ve written before about moving beyond “Filter Failure” – I don’t believe there is “too much information,” but a lack of imagination and […]