Problem Number One, Watching for Superintelligence

Two years ago, the AFCEA Intelligence Committee (I’m a member) invited Elon Musk for a special off-the-record session at our annual classified Spring Intelligence Symposium. The Committee assigned me the task of conducting a wide-ranging on-stage conversation with him, going through a variety of topics, but we spent much of our time on artificial intelligence (AI) – and particularly artificial general intelligence (AGI, or “superintelligence”).

I mention that the session was off-the-record. In my own post back in 2015 about the session, I didn’t characterize Elon’s side of the conversation or his answers to my questions – but for flavor I did include the text of one particular question on AI which I posed to him. I thought it was the most important question I asked…

(Our audience that day: the 600 attendees included a top-heavy representation of the Intelligence Community’s leadership, its foremost scientists and technologists, and executives from the nation’s defense and national-security private-sector partners.)

Here’s that one particular AI question I asked, quoted from my blogpost of 7/28/2015:

“AI thinkers like Vernor Vinge talk about the likelihood of a “Soft takeoff” of superhuman intelligence, when we might not even notice and would simply be adapting along; vs a Hard takeoff, which would be a much more dramatic explosion – akin to the introduction of Humans into the animal kingdom. Arguably, watching for indicators of that type of takeoff (soft or especially hard) should be in the job-jar of the Intelligence Community. Your thoughts?”

Months after that AFCEA session, in December 2015 Elon worked with Greg Brockman, Sam Altman, Peter Thiel and several others to establish and fund OpenAI, “a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence (AGI).” OpenAI says it has a full-time staff of 60 researchers and engineers, working “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Fast-forward to today. Over the weekend I was reading through a variety of AI research and sources, keeping current in general for some of my ongoing consulting work for Deloitte’s Mission Analytics group. I noticed something interesting on the OpenAI website, specifically on a page it posted several months ago labelled “Special Projects.”

There are four such projects listed, described as “problems which are not just interesting, but whose solutions matter.” Interested researchers are invited to apply for a position at OpenAI to work on one of them – all four are intriguing, and could lead to consequential work.

But the first Special Project problem caught my eye, because of my question to Musk the year before:

  “1. Detect if someone is using a covert breakthrough AI system in the world. As the number of organizations and resources allocated to AI research increases, the probability increases that an organization will make an undisclosed AI breakthrough and use the system for potentially malicious ends. It seems important to detect this. We can imagine a lot of ways to do this — looking at the news, financial markets, online games, etc.”

That reads to me like a classic “Indications & Warning” problem statement from the “other” non-AI world of intelligence.

I&W (in the parlance of the business) is a process used by defense intelligence and the IC to detect indicators of potential threats while sufficient time still exists to counter them. The doctrine of seeking advantage through warning is as old as the art of war; Sun Tzu called it “foreknowledge.” There are many I&W examples from the Cold War, from the overall analytic challenge (see the classic thesis “Anticipating Surprise”), and from specific domain challenges (see for example this 1978 CIA study, Top Secret but since declassified, on “Indications and Warning of Soviet Intentions to Use Chemical Weapons during a NATO-Warsaw Pact War”).

The I&W concept has since been extended to new domains of intelligence like Space/Counter-Space (see the 2013 DoD “Joint Publication on Space Operations Doctrine,” which describes the “unique characteristics” of the space environment for conducting I&W, whether from orbit or in other forms), and of course since 9/11 the I&W approach has been applied intensively in counter-terrorism realms in defense and homeland security.

It’s obvious Elon Musk and his OpenAI cohort believe that superintelligence is a problem worth watching. Elon’s newest company, the brain-machine-interface startup Neuralink, sets its core motivation as avoiding a future in which AGI outpaces simple human intelligence. So I’m staying abreast of indications of AGI progress.

For the AGI domain I am tracking many sources through citations and published research (see OpenAI’s interesting list here), and watching for any mention of I&W monitoring attempts or results that meet the challenge OpenAI sets out in Problem #1. So far, nothing of note.
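For flavor, here is what the most naive possible version of that tracking might look like. To be clear, this is a toy sketch, not anything resembling a real I&W system: the watch-list terms, weights, and threshold below are invented for illustration, and genuine warning analysis would rest on far richer signals than keyword counts.

```python
# Toy "indications" scan over a feed of paper abstracts or news snippets.
# The watch-list and weights are invented for this example only.
WATCH_LIST = {
    "self-improvement": 3,
    "recursive": 2,
    "general intelligence": 3,
    "transfer learning": 1,
}

def indicator_score(text: str) -> int:
    """Sum the weights of watch-list terms appearing in the text."""
    lowered = text.lower()
    return sum(w for term, w in WATCH_LIST.items() if term in lowered)

def flag_items(items: list[str], threshold: int = 3) -> list[str]:
    """Return items whose score meets the threshold, for human review."""
    return [item for item in items if indicator_score(item) >= threshold]
```

The point of even a toy like this is the I&W workflow it implies: machines do the broad, cheap triage, and scarce analyst attention goes only to what gets flagged.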

But I’ll keep a look out, so to speak.

 

 

Docere et Facere, To Teach and To Do

“Helping aspiring data scientists forge their own career paths, more universities are offering programs in data science or analytics.” – Wall Street Journal, March 13, 2017

George Bernard Shaw’s play Man and Superman provides the maxim, “He who can, does. He who cannot, teaches.” Most of us know this as “Those who can’t do, teach.” (And Woody Allen added a punch line in Annie Hall: “… and those who can’t teach, teach gym.”)

I’m determined both to do and to teach, because I enjoy each of them. When it comes to data and advanced analytics, something I’ve been using or abusing my entire career, I’m excited about expanding what I’m doing. So below I’m highlighting two cool opportunities I’m engaging in now…

 

Teaching Big Data Architectures and Analytics in the IC

I’ve just been asked by the government to teach again a popular graduate course I’ve been doing for several years, “Analytics: Big Data to Information.” It’s a unique course, taught on-site for professionals in the U.S. intelligence community, and accredited by George Mason University within GMU’s Volgenau Graduate School of Engineering. My course is the intro Big Data course for IC professionals earning a master’s or Ph.D. from GMU’s Department of Information Sciences and Technology, as part of the specialized Directorate for Intelligence Community Programs.

I enjoy teaching enormously, not having done it since grad school at Stanford a million years ago (ok, the ’80s). The students in the program are hard-working data scientists, technologists, analysts, and program managers from a variety of disciplines within the IC, and they bring their A-game to the classroom. I can’t share the full syllabus, but here’s a summary:

This course is taught as a graduate-level discussion/lecture seminar, with a Term Paper and end-of-term Presentation as assignments. Course provides an overview of Big Data and its use in commercial, scientific, governmental and other applications. Topics include technical and non-technical disciplines required to collect, process and use enormous amounts of data available from numerous sources. Lectures cover system acquisition, law and policy, and ethical issues. It includes discussions of technologies involved in collecting, mining, analyzing and using results, with emphasis on US Government environments.

I worry that mentioning this fall’s class now might gin up too much interest (last year I was told the waiting list had 30+ students who wanted to get in but couldn’t, and I don’t want to expand beyond a reasonable number), but when I agreed this week to offer the course again I immediately began thinking about changes I may make to the syllabus. And I solicit your input in the comments below (or by email).

For the 2016 fall semester, I had to make many changes to keep up with technological advance, particularly in AI. I revamped and expanded the “Machine Learning Revolution” section, and beefed up the segments on algorithmic analytics and artificial intelligence, just to keep pace with advances in the commercial and academic research worlds. Several of the insights I used came from my onstage AI discussion with Elon Musk in 2015, and his subsequent support for the OpenAI initiative.

More importantly, I provided my students (can’t really call them “kids” as they’re mid-career intelligence officials!) with tools and techniques to keep abreast of advances outside the walls of government – or within the walls of non-U.S. government agencies overseas. So I’m going to have to do some work again this year to keep the course au courant, and your insight is welcome.

But as noted at the beginning, I don’t want to just teach gym – I want to be athletic. So my second pursuit is news on the work front.

 

Joining an elite Mission Analytics practice

I’m announcing what I like to think of as the successful merger of two leading consultancies: my own solo gig and Deloitte Consulting. And I’m even happy Deloitte won the coin-toss to keep its name in our merger 🙂

For the past couple of years I have been a solo consultant and I’ve enjoyed working with some tremendous clients, including government leaders, established tech firms, and great young companies like SpaceX and LGS Innovations (which traces its lineage to the legendary Bell Labs).

But working solo has its limitations, chiefly in implementation of great ideas. Diagnosing a problem and giving advice to an organization’s leadership is one thing – pulling together a team of experts to execute a solution is entirely different. I missed the camaraderie of colleagues, and the “mass-behind-the-arrowhead” effect to force positive change.

When I left Microsoft, the first phone call I got was from an old intelligence colleague, Scott Large – the former Director of NRO who had recently joined Deloitte, the world’s leading consulting and professional services firm. Scott invited me over to talk. It took a couple of years for that conversation to culminate, but I decided recently to accept Deloitte’s irresistible offer to join its Mission Analytics practice, working with a new and really elite team of experts who understand advanced technologies, are developing new ones, and are committed to making a difference for government and the citizens it serves.

Our group is already working on some impressively disruptive solutions using massive-scale data, AI, and immersive VR/AR… it’s wild. And since I know pretty much all the companies working in these spaces, I decided to go with the broadest, deepest, and smartest team, with the opportunity for highest impact.

Who could turn down the chance to teach, and to do?

 

Video of DoD Innovation Discussion at Cybersecurity Summit

Earlier this week I wrote (“Beware the Double Cyber Gap“) about an upcoming Cybersecurity Summit, arranged by AFCEA-DC, for which I would be a panelist on innovation and emerging technologies for defense.

The Summit was a big success, and in particular I was impressed with the level and quality of interaction between the government participants and their private-sector counterparts, both on stage and off. Most of the sessions were filmed, and are now available at http://www.cybersecuritytv.net.

You can watch our panel’s video, “Partnering with Industry for Innovation,” for an up-to-the-moment view of how US Cyber Command and the Department of Defense as a whole are attacking the innovation challenge, featuring leadership from the USCYBERCOM Capabilities Development Group and the Defense Innovation Unit-Experimental. Solarflare CEO Russ Stern (a serial entrepreneur from California) and I offered some historical, technical, market, and regulatory context for the challenge those two groups face in finding the best technologies for national security. Most of my remarks come after the 16:00 mark; click the photo below to view the video:

photo: Lewis Shepherd; Gen. “Wheels” Wheeler (Ret.) of DIUx; Russell Stern, CEO Solarflare

From my remarks:

“I’m here to provide context. I’ve been in both these worlds – I came from Silicon Valley; I came to the Defense Intelligence Agency after 9/11, and found all of these broken processes, all of these discontinuities between American innovation & ingenuity on one hand, and the Defense Department & the IC & government at large…
Silicon was a development of government R&D money through Bell Labs, the original semiconductor; so we have to realize the context that there’s been a massive disruption in the divorcing of American industry and the technology industry, from the government and the pull of defense and defense needs. That divorcing has been extremely dramatic just in the past couple of years post-Snowden, emblematically exemplified with Apple telling the FBI, “No thanks, we don’t think we’ll help you on that national security case.”
So these kinds of efforts like DIUx are absolutely essential, but you see the dynamic here, the dynamic now is the dog chasing the tail – the Defense Department chasing what has become a massive globally disruptive and globally responsive technology industry…  This morning we had the keynote from Gen. Touhill, the new federal Chief Information Security Officer, and Greg told us that what’s driving information security, the entire industry and the government’s response to it is the Internet – through all its expressions, now Internet of Things and everything else – so let’s think about the massive disruption in the Internet just over the last five years.
Five years ago, the top ten Internet companies measured by eyeballs, by numbers of users, the Top 10 were all American companies, and it’s all the ones you can name: Amazon, Google, Microsoft, Facebook, Wikipedia, Yahoo… Guess what, three years ago the first crack into that Top 10, only six of those companies were American companies, and four – Alibaba, Baidu, Tencent, and Sohu – were Chinese companies. And guess what, today only five are American companies, and those five – Google, Amazon, Microsoft, Facebook, Yahoo – eighty percent or more of their users are non-U.S. Not one of those American internet companies has even twenty percent of their user-base being U.S. persons, U.S. citizens. Their market, four out of five of their users are global.
So when [DoD] goes to one of these CEOs and says, “Hey c’mon, you’re an American” – well, maybe, maybe not. That’s a tough case to sell. Thank God we have these people, with the guts and drive and the intellect to be able to try and make this case, that technological innovation can and must serve our national interest, but that’s an increasingly difficult case to make when [internet] companies are now globally mindsetted, globally incentivized, globally prioritizing constantly…”

Kudos to my fellow panelists for their insights, and their ongoing efforts, and to AFCEA for continuing its role in facilitating important government/industry partnerships.

Intelligence, Artificial and Existential

“Not to Be or Not to Be?” artwork by Shuwit, http://shuwit.deviantart.com/

I just published a short piece over at SIGNAL Magazine on an increasingly public debate over artificial intelligence, to which the editor gave a great Shakespearean title echoing Hamlet’s timeless question, “To be, or not to be.”

Meet the Future-Makers

Question: Why did Elon Musk just change his Twitter profile photo? He now seems to be evoking James Bond – or Dr. Evil:

twitter photos, Elon v Elon

I’m not certain, but I think I know the answer. Read on…

Insider’s Guide to the New Holographic Computing

In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts which additively culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or who care about the government they pay for, PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best benefit of any technology: empowering people by simplifying complexity, and by extension offering a way to deliver new & unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start, for local, national, and transnational agencies.

Now some background…

Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed – and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype invoking powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against streaming video of politicians or other talking heads and displays in real time a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics 🙂

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and addresses each of those factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection. Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, who is the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether in stump speeches, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video indexing service (MAVIS) technology. MAVIS is a Windows Azure application which uses State of the Art of Deep Neural Net (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching or a fuzzy string searching algorithm. We are implementing a modified version Rabin-Karp using Levenshtein distance algorithm as our first implementation. This will be modified to recognize paraphrasing, negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller
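The fuzzy-matching step the Post describes can be sketched in a few lines. This is my own illustrative toy, not the Post’s actual implementation: it computes plain Levenshtein edit distance and slides a window across a transcript to see whether a known claim appears approximately, which is the general shape of the approach they name (the `max_ratio` edit budget is an invented parameter for the example).

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, or substitutions to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def fuzzy_contains(transcript: str, claim: str, max_ratio: float = 0.25) -> bool:
    # Slide a claim-sized window of words across the transcript, and accept
    # if any window is within a budget of edits proportional to claim length.
    words_t = transcript.lower().split()
    words_c = claim.lower().split()
    n = len(words_c)
    if n == 0 or n > len(words_t):
        return False
    budget = int(len(claim) * max_ratio)
    return any(
        levenshtein(" ".join(words_t[i:i + n]), claim.lower()) <= budget
        for i in range(len(words_t) - n + 1)
    )
```

Why approximate rather than exact matching? A speech-to-text transcript is noisy – dropped articles, misheard words – so an exact substring search against a fact database would miss nearly everything; an edit-distance budget tolerates transcription error, at the cost of occasional false matches.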

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and see TechCrunch’s discussion of some of the political ramifications of the prototype, based on the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”
