Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype that applies powerful new technologies to journalism and democratic accountability in politics and government. As you can see from the screenshot of the Truth Teller prototype (left), it runs an automated fact-checking algorithm against the streaming video of politicians or other talking heads and displays a “True” or “False” label in real time as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics 🙂

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of the factors I mentioned to GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection.  Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether the statements come in speeches, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video Indexing Service (MAVIS) technology. MAVIS is a Windows Azure application that uses state-of-the-art Deep Neural Network (DNN)-based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching, or fuzzy string searching. As a first implementation, we are using a modified version of the Rabin-Karp algorithm with Levenshtein distance as the similarity measure. This will be extended to recognize paraphrasing and negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller”
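To make that matching step concrete, here is a minimal, self-contained sketch of the general idea behind the excerpt above: slide a window across the rough transcript and flag any window whose Levenshtein (edit) distance to a previously fact-checked claim is small. Note that this is a plain sliding-window comparison rather than the Post’s modified Rabin-Karp implementation, and the sample claims, verdicts, and threshold are purely illustrative.

```python
# Sketch of transcript-vs-fact-check fuzzy matching (illustrative, not the Post's code).

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def find_claims(transcript: str, fact_checks: dict, max_ratio: float = 0.25):
    """Yield (spoken phrase, verdict) pairs whose edit distance to a known claim is small."""
    words = transcript.lower().split()
    for claim, verdict in fact_checks.items():
        n = len(claim.split())
        for start in range(len(words) - n + 1):
            window = " ".join(words[start:start + n])
            if levenshtein(window, claim.lower()) <= max_ratio * len(claim):
                yield window, verdict
                break  # report each known claim at most once

# Hypothetical mini-database of fact-checked statements and their verdicts.
fact_checks = {
    "the deficit has doubled in the last four years": "False",
    "unemployment is below eight percent": "True",
}

transcript = "as i have said the deficit has doubled over the last four years"
for phrase, verdict in find_claims(transcript, fact_checks):
    print(f"{verdict}: matched '{phrase}'")
```

In a production system the claim database would live in a real search index (such as the Lucene index the Post describes), and the matching threshold would need tuning against real transcripts.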

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and TechCrunch discusses some of the political ramifications of the prototype in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

Free Tools for the New Scientific Revolution

Blogs are great for supplementing real-life events, giving space and time for the specific examples and links that can’t be referenced at the time. I was invited to give a talk last week at the first-ever NASA Information Technology Summit in Washington, DC, and the topic I chose was “Government and the Revolution in Scientific Computing.” That’s an area Microsoft Research has been focusing on quite a bit lately, so below I’ll give some examples I didn’t use in my talk.

One ground rule was that invited private-sector speakers were not allowed to give anything resembling a “sales pitch” of their company’s wares. Fair enough – I’m no salesman. The person who immediately preceded me, keynoter Vint Cerf, bent the rules slightly and talked a bit about his employer Google’s products, but gee whiz, that’s the prerogative of someone who is in large part responsible for the Internet we all use and love today.

I described in my talk the radical new class of super-powerful technologies enabling large-data research and computing on platforms of real-time and archival government data. That revolution is happening now, and I believe government could and should be playing a different and less passive role. I advocated for increased attention to the ongoing predicament of U.S. research and development funding.

Alex Howard at O’Reilly Radar covered the NASA Summit and today published a nice review of both Vint’s talk and mine.  Some excerpts: Continue reading

Bing vs Google, the quiet semantic war

On Wednesday night I had dinner at a burger joint with four old friends; two work in the intelligence community today on top-secret programs, and two others are technologists in the private sector who have done IC work for years. The five of us share a particular interest besides good burgers: semantic technology.

Oh, we talked about mobile phones (iPhones were whipped out, as was my Windows Phone, and apps were debated) and cloud storage (they were stunned that Microsoft gives 25 gigabytes of free cloud storage with free SkyDrive accounts, compared to the puny 2 gigabytes they’d been using on Dropbox).

But we kept returning to semantic web discussions, semantic approaches, semantic software. One of these guys goes back to the DAML days of DARPA fame, the guys on the government side are using semantic software operationally, and we all are firm believers in Our Glorious Semantic Future.

Continue reading

Gunning the Microsoft Semantic Engine

Screenshot: the new Bing Maps Beta with embedded data layers from Twitter and other social feeds.

There’s a lot of information on the Internet already. Every day, more is added – a lot more. And while there are a concomitant number of new analytic or sense-making tools on the web, they butt up against the fact that the data – the all-important data – is held in multiple places, formats, and platforms.

How are we going to deal with all this? One approach is almost mechanical: ensuring that datasets can be accessed in common ways, as with our new Microsoft Dallas platform associated with the Windows Azure cloud platform. In the government realm, the anticipated reliance on “government-as-a-platform” (a meme popularized by Tim O’Reilly) holds promise for making aggregated datasets openly accessible.

Continue reading

A-Space Past and Future

This week marks the second anniversary of the first live internal demo of the intelligence community’s A-Space project, groundbreaking for the IC in its goal of collaborative use of social media across agency lines. Somewhere in Maryland, a remarkable government employee and friend named Mike Wertheimer should pause and quietly celebrate the fruition of his early evangelism for it.

I was still a government employee then, but wrote about the effort at the time here on Shepherd’s Pi (“A-Space: Top-secret social networking”). It makes me chuckle to remember back to those days when it was still mostly unheard-of for IC employees to blog openly on the public web about current technology projects. Now you can’t shut ’em up! 🙂

It made sense, I thought, to set down a few notes at the time for several reasons: Continue reading

Data in the Cloud from Dallas to Mars

There’s a lot going on at this week’s Microsoft Professional Developers Conference (PDC 09); it’s a traditional launchpad for cool new stuff. I thought I’d point out several of the government-relevant announcements and technology roll-outs.

I specifically want to spotlight something called Codename Dallas, and how NASA and others have begun using it. In the keynote this morning Microsoft’s Chief Software Architect Ray Ozzie told PDC attendees (and his streaming-video audience) that a landslide of new sensors and observational systems is changing the world by recording “unimaginable volumes of data… But this data does no good unless we turn the potential into the kinetic, unless we unlock it and innovate in the realm of applications and solutions that’s wrapped around that data.”

Here’s how we’re addressing that, with a bit of step-by-step context on the overall cloud-computing platform enabling it.  The steps are: 1. Azure, 2. Pinpoint, and 3. Dallas.

Continue reading

Bad News for the Pithy

Just my luck. Right when I start to push out the pithy quotes, Reader’s Digest announces that it is filing for bankruptcy. I remember the days when everyone would recite the newest pearls from their “Quotable Quotes” column.

My little gems, such as they are, came in two recent interviews, both on the subject of semantic computing and the semantic web. The subject matter in each is somewhat similar – I wasn’t asked so much about future work that Microsoft is doing as for assessments of different approaches in semantic computing, past and present, and where the field is heading.

Continue reading
