When Public Meets Private in Intelligence

Today is the anniversary of the 9/11 attacks on the American homeland, the sequence of events that wound up bringing me from Silicon Valley to Washington DC in 2002 for a stint working in the Intelligence Community. I notice that no one asks me anymore, as they often did back then, why I was so intent on bridging the gap between DC and the Valley (broadly, not geographically, defined).

Today it surprises few when we do something unorthodox like invite Amazon and Blue Origin founder Jeff Bezos to appear inside an intelligence agency earlier this year, for a probing one-on-one at the AFCEA Spring Intelligence Symposium with several hundred IC professionals about the rapid changes in technology, views on public/private collaboration, and the impacts of AI and robotics on his business and theirs.

That rapid pace of change continues to accelerate, following its own Moore’s-Law-like curve, and daily one sees a blurring between how “intelligence” is performed inside government and out among the public. To wit, check out this article from early August:

News Item: BuzzFeed News Trained A Computer To Search For Hidden Spy Planes. This Is What We Found:

“Surveillance aircraft often keep a low profile: The FBI, for example, registers its planes to fictitious companies to mask their true identity. So BuzzFeed News trained a computer to find them by letting a machine-learning algorithm sift for planes with flight patterns that resembled those operated by the FBI and the Department of Homeland Security…

First we made a series of calculations to describe the flight characteristics of almost 20,000 planes in the four months of Flightradar24 data: their turning rates, speeds and altitudes flown, the areas of rectangles drawn around each flight path, and the flights’ durations. We also included information on the manufacturer and model of each aircraft, and the four-digit squawk codes emitted by the planes’ transponders.

Then we turned to an algorithm called the “random forest,” training it to distinguish between the characteristics of two groups of planes: almost 100 previously identified FBI and DHS planes, and 500 randomly selected aircraft. The random forest algorithm makes its own decisions about which aspects of the data are most important. But not surprisingly, given that spy planes tend to fly in tight circles, it put most weight on the planes’ turning rates. We then used its model to assess all of the planes, calculating a probability that each aircraft was a match for those flown by the FBI and DHS…

The algorithm was not infallible: Among other candidates, it flagged several skydiving operations that circled in a relatively small area, much like a typical surveillance aircraft. But as an initial screen for candidate spy planes, it proved very effective. In addition to aircraft operated by the US Marshals and the military contractor Acorn Growth Companies, covered in our previous stories, it highlighted a variety of planes flown by law enforcement, and by the military and its contractors.

Some of these aircraft use technologies that challenge our assumptions about when and how we’re being watched, tracked, or listened to. It’s only by understanding when and how these technologies are used from the air that we’ll be able to debate the balance between effective law enforcement, national security, and individual privacy.”
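BuzzFeed's actual model was a random forest trained on real Flightradar24 data; as a hedged illustration only, here is a tiny pure-Python sketch of the random-forest idea they describe: decision stumps trained on bootstrap samples with random feature subsets, voting to produce a "spy plane" probability. Every feature value and aircraft below is invented for the sketch.

```python
import random

# Each row: [turn rate (deg/s), speed (knots), altitude (ft), duration (h)]
# All values are invented for illustration only.
SPY = [[4.2, 140, 1500, 4.0], [3.8, 150, 1600, 3.5], [5.0, 130, 1400, 4.5],
       [4.5, 145, 1550, 3.8], [3.6, 155, 1450, 4.2], [4.9, 135, 1520, 3.9]]
OTHER = [[0.3, 460, 10500, 2.1], [0.5, 480, 11000, 1.8], [0.2, 450, 9800, 2.5],
         [0.6, 470, 10200, 1.5], [0.4, 440, 11500, 2.2], [0.7, 455, 10800, 1.9]]

def train_stump(rows, labels, feats):
    """Best single-feature threshold split among the candidate features."""
    best = (len(rows) + 1, feats[0], 0.0, False)  # (errors, feature, thr, flip)
    for f in feats:
        values = sorted({r[f] for r in rows})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2
            errs = sum((r[f] > thr) != bool(y) for r, y in zip(rows, labels))
            for flip, e in ((False, errs), (True, len(rows) - errs)):
                if e < best[0]:
                    best = (e, f, thr, flip)
    return best[1:]

def predict_stump(stump, row):
    f, thr, flip = stump
    vote = row[f] > thr
    return int(vote != flip)  # flip inverts the stump's vote

def train_forest(rows, labels, n_trees=50, n_feats=2, seed=7):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]    # bootstrap sample
        feats = rng.sample(range(len(rows[0])), n_feats)  # random feature subset
        forest.append(train_stump([rows[i] for i in idx],
                                  [labels[i] for i in idx], feats))
    return forest

def spy_probability(forest, row):
    """Fraction of trees voting 'surveillance aircraft'."""
    return sum(predict_stump(s, row) for s in forest) / len(forest)

forest = train_forest(SPY + OTHER, [1] * len(SPY) + [0] * len(OTHER))
circling = spy_probability(forest, [4.0, 148, 1500, 4.0])   # tight, slow, low, long
airliner = spy_probability(forest, [0.4, 470, 10800, 2.0])  # straight, fast, high
```

As in BuzzFeed's description, the ensemble "decides for itself" which features matter: with this toy data the turning rate dominates, so a slow, tightly-circling, long-duration flight scores far higher than an airliner-like track.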

It has become commonplace to observe the dwindling distinctions between how longstanding government intelligence agencies and private-sector companies (news outlets or social-media platforms, for example) use so-called “intelligence capabilities.” For a tour-de-force argument on this point, you will profit from reading John Lanchester’s epic new book-review essay “You Are the Product” in the London Review of Books, in which he examines Google, Microsoft, Facebook and the like with a critical lens and concludes:

“[E]ven more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company….”

A short blog piece is not the place to examine fully this rich topic, but it is a good place to point out that I enjoy spending time helping all sides of this divide understand each other. By all sides, I mean government entities and officers (including intelligence and law enforcement), private-sector companies, and most importantly the public citizenry and customer base of those organizations. A great forum for doing that has been AFCEA, which this past week co-hosted with INSA the annual Intelligence and National Security Summit in DC. Along with helping oversee the agenda I had the opportunity to organize one of the panel sessions with my old friend (and former CIA Deputy Director of Intelligence) Carmen Medina.

Our panel – very relevant to the above discussion – was on “The Role of Intelligence in the Future Threat Environment,” and our excellent participants addressed some gnarly problems. I tweeted many of the comments and observations (see my hashtagged feed here), and you can find more content and videos from all 15 sessions archived here.

Your suggestions on new approaches to these dialogues are welcome as always. As we commemorate the horrific surprise attacks of 9/11/2001, in a rapidly changing world where real-time surveillance is performed by more and more entities, governmental and commercial, it is increasingly important to engage in thoughtful – and sometimes urgent – discussion about who watches whom, and why.

IoT Botnet Attacks – Judge for Yourself

Yesterday’s mass-IoT-botnet attack on core Internet services (Twitter, Netflix, etc. via DNS provider Dyn) is drawing a lot of attention, mainly because for the public at large it is an eye-opening education in the hidden Internet of Things connections between their beloved electronic devices and online services.

[Image: swarming networked DVRs and webcams]

You can read elsewhere the details of the attack as they’re currently understood (e.g. “Hacked Cameras, DVRs Powered Today’s Massive Internet Outage” by Brian Krebs). And you’ll be reading more and more warnings that this particular attack is just the beginning (e.g. from my friend Alan Silberberg, “Mirai Botnet DDoS Just the Beginning of IoT Cybersecurity Breaches”).

But today, in the wake of the attack, a DC friend known for peering around corners asked for my opinion about the ultimate meaning of this approach, and whether this attack means “the game has changed.” Here’s my response:

Last year I was asked by Georgetown Law School to give a private briefing to the Federal Judicial Center’s annual convocation of 65 federal judges from jurisdictions across the United States. The overall FJC session addressed “National Security, Surveillance Technology and the Law,” and in part was prompted by the Edward Snowden and WikiLeaks events. Here’s an article about the conference, and you can view the full agenda here. As you can see from the agenda, I joined noted security expert Bruce Schneier in presenting on “Computer Architectures and Remote Access.” That’s a fairly technical topic, and so I asked an organizer ahead of time what the judges wanted to learn and why, and was told “They’re encountering a tidal wave of cases that involve claims against government warrants for access, and conversely claims involving botnet attacks and liability.” I then asked what level of technical proficiency I should assume in preparing my remarks, and was told, “Based on their own self-assessments, you should assume they’re newbies encountering computers for the very first time.”

After a good laugh, that was the approach I took, and with patience Bruce and I were able both to educate and to spark a great back-and-forth conversation among the nation’s judges about the intricacies of applying slowly evolving legal doctrines to rapidly evolving technical capabilities.

The answer to today’s question is yes: the game has changed. The tidal wave is well upon us and, in large part, won’t be turned back by technical means. We can (over time) introduce tighter security into some elements of IoT devices and networks, but that won’t be easy, and it would hamper the ease and invisibility of IoT operations. I think eventually we’ll come to realize that the notion of “Internet Security” is going to be like “Law & Order” – a good aspiration, which in everyday practice is observed in the breaking.

We’ll develop more robust judicial and insurance remedies, providing better avenues for penalization and risk valuation in what will inevitably be a continuing onslaught of law-breaking.

Yet in that onslaught crimes will be better defined, somewhat better policed, definitely better prosecuted (our Judges will be better educated!), and perhaps most importantly victims will be better insured and compensated, as we learn to manage and survive each new wave of technological risk.

By the way, if you’d like to plunge into the reading list those federal judges were assigned as homework on surveillance technologies and national security law, click here or the image below to download the 5-page syllabus for the session, courtesy of Georgetown Law, with links to the full set of Technology Readings and Legal Readings across fields like Interception and Location Tracking, Digital Forensics, Metadata and Social Network Analytics, and Cloud Computing and Global Communications. It’s a very rich and rewarding collection, guaranteed to make you feel as smart as a federal judge 🙂

[Image: Readings on Law and Tech syllabus]

RIP Justice Antonin Scalia

Supreme Court Justice Scalia passed away today. My wife Kathryn Ballentine Shepherd, a semi-retired attorney, has worked at the Supreme Court since 2003 (in the Curator’s Office, giving Chambers tours and lectures on the history of the Court and its Justices). Through her I’ve met and spent quite a bit of time with Justice Scalia over the years, and always enjoyed his writing and analyses, his humor and humanity. You see here a recent photo of Kathryn joking with him at the Supreme Court – he really seemed to love spending time with her, joshing with her in front of crowds (perhaps because she was a smart lawyer as well), and he always seemed to steer visiting friends to her for a “private” tour.

I was at Chief Justice Rehnquist’s funeral in 2005; he was deeply loved by the Supreme Court “family.” On today’s Court, the most-loved by them in my observation: Antonin Scalia.

One of the funnier moments in my recollection was at a 2006 Supreme Court Historical Society reenactment of the Aaron Burr treason trial held in the Court’s actual Chambers one evening, with Justice Scalia playing the role of the actual trial judge, Chief Justice John Marshall. Scalia peered down from the bench as the DC attorneys recruited for the event began to play out their own roles – among them Scalia’s own son Eugene, a powerhouse lawyer in his own right. “Chief Justice Marshall” (Justice Scalia) looked over his glasses and boomed out, “OK, who’s next – it says here your name is, um, Scall-ee-a, Scall-eye-a, what kind of name is that??” The audience roared with laughter. That was the common reaction to his ever-present, ever-witty humor.

For seven years I’ve recycled an old Reagan-era joke (it was originally about Thurgood Marshall), updating it for the Obama Administration and asking, “Who’s the most important conservative in Washington DC? Justice Scalia’s doctor.” In today’s hyper-politicized era, we’re about to see why….


Burning Man, Artificial Intelligence, and Our Glorious Future

I’ve had several special opportunities in the last few weeks to think a bit more about Artificial Intelligence (AI) and its future import for us remaining humans. Below I’m using my old-fashioned neurons to draw some non-obvious links.

The cause for reflection is the unexpected parallel between two events I’ve been involved in recently: (1) an interview of Elon Musk which I conducted for a conference in DC; and (2) the grand opening in London of a special art exhibit at the British Library which my wife and I are co-sponsoring. They each have an AI angle, and I believe their small lessons demonstrate something intriguingly hopeful about a future of machine superintelligence.

Continue reading

Young Americans and the Intelligence Community

A few days ago I travelled down to Orlando – just escaping the last days of the DC winter. I was invited to participate in a conference hosted by the Intelligence Community’s Center of Academic Excellence (IC CAE) at the University of Central Florida. The title of my speech was “The Internet, 2015-2025: Business and Policy Challenges for the Private Sector.” But I actually learned as much as I taught, maybe more. Continue reading

Insider’s Guide to the New Holographic Computing

In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts which additively culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or who care about the government they pay for, PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best benefit any technology can: empowering people through simplicity rather than complexity, and by extension delivering new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and  understanding how to adapt with the holographic revolution is a great place to start, for local, national, and transnational agencies.

Now some background… Continue reading

Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed. Here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype invoking powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against streaming video of politicians or other talking heads and displays a “True” or “False” label in real time as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics 🙂

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of the factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology that operates at the speed of today’s real world.

Real-time politics? Real-time truth detection.  Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether the claims appear in speeches, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video Indexing Service (MAVIS) technology. MAVIS is a Windows Azure application which uses state-of-the-art Deep Neural Network (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching, also known as fuzzy string searching. As a first implementation, we use a modified version of the Rabin-Karp algorithm with Levenshtein distance; in the future this will be extended to recognize paraphrasing and negative connotations.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller”
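The Post names a modified Rabin-Karp with Levenshtein distance; as a rough sketch of the same underlying idea – approximately matching a known claim against a noisy speech-to-text transcript – here is a pure-Python sliding-window version. The function names, the claim, and the transcript text are all invented for illustration, not taken from the Truth Teller codebase.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def find_claim(transcript: str, claim: str, max_dist: int = 3):
    """Slide a claim-sized window of words over the transcript and return
    (best_matching_window, edit_distance), or None if nothing is close."""
    words = transcript.lower().split()
    target = claim.lower()
    n = len(target.split())
    best = None
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        d = levenshtein(window, target)
        if best is None or d < best[1]:
            best = (window, d)
    return best if best is not None and best[1] <= max_dist else None

# A garbled transcript still matches the claim despite the transcription typo.
transcript = ("and tonight i promise that we will cut taxs "
              "for the middle class once again")
hit = find_claim(transcript, "cut taxes for the middle class")
```

The tolerance for small edit distances is what lets a fact database survive speech-recognition errors like “taxs” – exact matching would miss the claim entirely, which is presumably why the Post’s engineers focused on patterns rather than distinct phrases.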

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and TechCrunch discusses some of the political ramifications of the prototype in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and  juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, and the congressional elections of 2014 and the presidential race of 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”
