Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype involving powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot of the Truth Teller prototype (left), it runs an automated fact-checking algorithm against the streaming video of politicians or other talking heads and displays in real time a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics 🙂

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, whether by government agencies themselves or in efforts to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated Political Bullshit Detector, and it addresses each of the factors I mentioned to GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, and imbuing automated systems with semantic sense-making technology so they can operate at the speed of today’s real world.

Real-time politics? Real-time truth detection.  Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, who is the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether those come in prepared remarks, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video Indexing Service (MAVIS) technology. MAVIS is a Windows Azure application which uses state-of-the-art Deep Neural Network (DNN)-based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching, also known as fuzzy string searching. As our first implementation, we have built a modified version of the Rabin-Karp algorithm using Levenshtein distance. In the future this will be extended to recognize paraphrasing and negative connotations.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller”
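
To make that matching step concrete, here’s a minimal sketch in Python of the kind of fuzzy claim lookup the Post describes. This is not Truth Teller’s actual code: the tiny fact database and the distance threshold are invented for illustration, and where the Post pairs Rabin-Karp with Levenshtein distance, this simplified version just slides a plain Levenshtein comparison across the rolling transcript.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical fact database: claim pattern -> verdict from human fact-checkers.
FACT_CHECKS = {
    "cut the deficit in half": "False",
    "unemployment is below eight percent": "True",
}

def check_transcript(transcript: str, max_ratio: float = 0.25):
    """Yield (claim, verdict) for each known claim found, approximately,
    in the transcript, tolerating speech-to-text errors."""
    text = transcript.lower()
    for claim, verdict in FACT_CHECKS.items():
        n = len(claim)
        # Compare the claim against every window of the same length.
        for start in range(max(1, len(text) - n + 1)):
            if levenshtein(text[start:start + n], claim) <= int(n * max_ratio):
                yield claim, verdict
                break

# A speech-to-text transcript with a typical recognition error ("defecit"):
for claim, verdict in check_transcript(
        "we have cut the defecit in half since taking office"):
    print(f"{verdict}: {claim}")
```

The real prototype has to do this continuously against streaming audio, and a rolling-hash approach like Rabin-Karp avoids re-scoring every window from scratch; but the basic idea – approximate matching of known claims against an error-prone transcript – is the same.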

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and TechCrunch discusses some of the political ramifications of the prototype in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

2012 Year in Review for Microsoft Research

The year draws to a close… and while the banality and divisiveness of politics and government have been on full display around the world during the past twelve months, the year has been rewarding for me personally whenever I could retreat into the world of research. Fortunately there’s a great deal of it going on among my colleagues.

2012 has been a great year for Microsoft Research, and I thought I’d link you to a quick set of year-in-review summaries of some of the exciting work that’s been performed and the advances made:

Microsoft Research 2012 Year in Review

The work ranges from our Silicon Valley lab work in “erasure code” to social-media research at the New England lab in Cambridge, MA; from “transcending the architecture of quantum computers” at our Station Q in Santa Barbara, to work on cloud data systems and analytics by the eXtreme Computing Group (XCG) in Redmond itself.

Across global boundaries we have seen “work towards a formal proof of the Feit-Thompson Theorem” at Microsoft Research Cambridge (UK), and improvements for Bing search in Arab countries made at our Advanced Technology Labs in Cairo, Egypt.

All in all, an impressive array of research advances, benefiting from an increasing amount of collaboration with academic and other researchers as well. The record is one more fitting tribute to our just-departing Chief Research and Strategy Officer Craig Mundie, who is turning over his reins, including MSR oversight, to Eric Rudder (see his bio here), while Craig focuses for the next two years on special work reporting to CEO Steve Ballmer. Eric’s a great guy and a savvy technologist, and has been a supporter of our Microsoft Institute’s work as well … I did say he’s savvy 🙂

There’s a lot of hard work already going on in projects that should pay off in 2013, and the New Year promises to be a great one for technologists and scientists everywhere – with the possible exception of any remaining Mayan-apocalypse/ancient-alien-astronaut-theorists. But even to them, and perhaps most poignantly to them, I say Happy New Year!

MSR gets wired, WIRED gets MSR

MSR natural-user-interaction immersive technologies

WIRED Magazine’s online site ran a great long-form profile of Microsoft Research late yesterday, with interviews and project features: “How Microsoft Researchers Might Invent a Holodeck.”

I have written about or mentioned all of the individual projects or technologies on my blog before, but the writing at WIRED is so much better than my own – and the photographs so cool – that I thought I should post a link to the story. Continue reading

The almighty ampersand linking R and D

According to Wikipedia, the lowly ampersand or “&” is a logogram representing the conjunction word “and” using “a ligature of the letters in et,” which is of course the Latin word for “and.”

In my line of work I most frequently encounter the ampersand in the common phrase “R&D,” for research and development, although I notice that with texting and short-form social media the ampersand is making something of a comeback in everyday use.

Continue reading

Kinecting Communities

On April 16 I will be speaking at the Mobile Citizen Summit in Washington DC (registration still open), which brings together “practitioners across the government, nonprofit, advocacy, and political spaces—the kinds of people who develop the strategy and the tools to reach, engage, educate, and enable citizens across the country and around the world.”

But I’m going to be talking about “mobile” in a different way than others use the term: they focus on the handheld device, while I will be focusing on the mobile citizen. As I have said before, I don’t believe our future involves experiencing “augmented reality” by always holding up little 3-inch plastic screens in front of our faces. Natural user interfaces and immersive computing offer much more for how we access computational resources – and how technology will help us interact with one another. Here’s an example, in a story from the past week.

Continue reading

Tearing the Roof off a 2-Terabyte House

I was home last night playing with the new Kinect, integrating it with Twitter, Facebook, and Zune. Particularly because of that last service, I was glad I got the Xbox 360 model with the 250-gigabyte (gb) hard disk drive. It holds a lot more music and photos – and, of course, primarily games and game data.

So we wind up with goofy scenes like my wife zooming along yesterday in Kinect Adventures’ River Rush – captured not only in my photo (right) but in in-game photos taken by the Kinect sensor sitting there below the TV monitor.

Later as I was waving my hands at the TV screen, swiping magically through the air to sweep through Zune’s albums and songs as if pawing through a shelf of actual LP’s, I absent-mindedly started totting up the data-storage capacity of devices and drives in my household.  Here’s a rough accounting:

  • One Zune music-player, 120gb;
  • 2 old iPods 30gb + 80gb;
  • an iPad 3G at 16gb;
  • one HP netbook 160gb;
  • an aging iMac G5 with 160gb;
  • three Windows laptops of 60gb, 150gb, and 250gb;
  • a DirecTV DVR with a 360gb disk;
  • a single Seagate 750gb external HDD;
  • a few 1gb and 2gb SD cards for cameras, plus a single 32gb card;
  • a handful of 2gb and 4gb USB flash drives, plus one 16gb;
  • and most recently a 250gb Xbox 360, for Kinect. 

All told, I’d estimate that my household data storage capacity totals 2.5 terabytes. A terabyte, you’ll recall, is 10^12 bytes, or 1,000,000,000,000 (1 trillion) bytes, or alternately a thousand gigabytes.
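
For the curious, here’s the back-of-the-envelope tally behind that estimate, as a throwaway Python sketch. The SD-card and flash-drive “handfuls” are rounded to guesses, so the total is approximate:

```python
# Tot up the household drives listed above, in gigabytes (gb).
drives_gb = [
    120,            # Zune music-player
    30, 80,         # two old iPods
    16,             # iPad 3G
    160,            # HP netbook
    160,            # aging iMac G5
    60, 150, 250,   # three Windows laptops
    360,            # DirecTV DVR
    750,            # Seagate external HDD
    1, 2, 2, 32,    # assorted camera SD cards (guess)
    2, 4, 4, 16,    # assorted USB flash drives (guess)
    250,            # Xbox 360, for Kinect
]

total_gb = sum(drives_gb)
print(f"{total_gb} gb ~= {total_gb / 1000:.1f} terabytes")  # roughly 2.5 TB
```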

Continue reading

Playing with virtual data in spatial reality

For the past few months, when I’ve had visitors to Microsoft Research on the Redmond campus, one of the things I’ve enjoyed demonstrating is the technology behind the new Kinect system for Xbox 360 – the controller-free gaming and immersive entertainment system that Microsoft is releasing for the holiday market in a month or so. In particular, I’ve enjoyed having Andy Wilson of MSR talk with visitors about some of the future implications in non-gaming scenarios, including general information work, and about how immersive augmented reality (AR) could transform our capabilities for working with information and virtual objects, and how we all share and use knowledge among ourselves.

We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised by 2015.

In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall-monitor while also wearing glasses or holding a cellphone in front of them in order to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of “spatial AR,” and the promise of controlling a live 3D environment of realtime data. Continue reading

Mix, Rip, Burn Your Research

You’ve done research; you’ve collected and sifted through mounds of links, papers, articles, notes, and raw data. Shouldn’t there be a way to organize all that material that’s as easy and intuitive as, say, iTunes or Zune – helping you manage and share your snippets and research the way you share and enjoy your music?

Continue reading

Your choice, Dataviz as event or book

A friend wrote asking if I could make it to an event happening this week near DC. I can’t make it, but fortunately he also mentioned as consolation that he has a cool new book on the cusp of release – and I’ve now ordered my copy.

The Friend: legendary visualization and HCI guru Ben Shneiderman (Wikipedia entry). Ben is a computer-science professor at the University of Maryland and the founder of its well-known Human-Computer Interaction Laboratory (HCIL), as well as an ACM Fellow and AAAS Fellow. He has done government a million favors over the years, consulting for agencies – including his recent work helping Recovery.gov organize, host, and visualize data from hundreds of thousands of sources for millions of visitors. I first got to know Ben through his support for better intelligence analysis – he helped invent a longtime intelligence analytics tool, Spotfire (see his article “Dynamic queries, starfield displays, and the path to Spotfire“). Ben’s also well known for his award-winning 2002 book Leonardo’s Laptop: Human Needs and the New Computing Technologies, which I enjoyed and still think about when brainstorming new techie toys.

Continue reading

DARPA crowd guru gets a new lab

It’s been a little over two years since I came back to the tech private sector from my government service, and it’s great when other folks take the same path, for it improves the knowledge of each side about the other. Today we’re announcing that Peter Lee, currently the leader of the Defense Advanced Research Projects Agency’s innovative Transformational Convergence Technology Office (TCTO), is joining Microsoft to run the mighty flagship Redmond labs of Microsoft Research.

Continue reading
