Problem Number One, Watching for Superintelligence

Two years ago, the AFCEA Intelligence Committee (I’m a member) invited Elon Musk for a special off-the-record session at our annual classified Spring Intelligence Symposium. The Committee assigned me the task of conducting a wide-ranging on-stage conversation with him, going through a variety of topics, but we spent much of our time on artificial intelligence (AI) – and particularly artificial general intelligence (AGI, or “superintelligence”).

I mention that the session was off-the-record. In my own post back in 2015 about the session, I didn’t characterize Elon’s side of the conversation or his answers to my questions – but for flavor I did include the text of one particular question on AI which I posed to him. I thought it was the most important question I asked…

(Our audience that day: the 600 attendees included a top-heavy representation of the Intelligence Community’s leadership, its foremost scientists and technologists, and executives from the nation’s defense and national-security private-sector partners.)

Here’s that one particular AI question I asked, quoted from my blogpost of 7/28/2015:

“AI thinkers like Vernor Vinge talk about the likelihood of a “Soft takeoff” of superhuman intelligence, when we might not even notice and would simply be adapting along; vs a Hard takeoff, which would be a much more dramatic explosion – akin to the introduction of Humans into the animal kingdom. Arguably, watching for indicators of that type of takeoff (soft or especially hard) should be in the job-jar of the Intelligence Community. Your thoughts?”

Months after that AFCEA session, in December 2015 Elon worked with Greg Brockman, Sam Altman, Peter Thiel and several others to establish and fund OpenAI, “a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence (AGI).” OpenAI says it has a full-time staff of 60 researchers and engineers, working “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Fast-forward to today. Over the weekend I was reading through a variety of AI research and sources, keeping current in general for some of my ongoing consulting work for Deloitte’s Mission Analytics group. I noticed something interesting on the OpenAI website, specifically on a page it posted several months ago labelled “Special Projects.”

There are four such projects listed, described as “problems which are not just interesting, but whose solutions matter.” Interested researchers are invited to apply for a position at OpenAI to work on them – and all four are interesting and could lead to consequential work.

But the first Special Project problem caught my eye, because of my question to Musk the year before:

  1. “Detect if someone is using a covert breakthrough AI system in the world. As the number of organizations and resources allocated to AI research increases, the probability increases that an organization will make an undisclosed AI breakthrough and use the system for potentially malicious ends. It seems important to detect this. We can imagine a lot of ways to do this — looking at the news, financial markets, online games, etc.”

That reads to me like a classic “Indications & Warning” problem statement from the “other” non-AI world of intelligence.

I&W (in the parlance of the business) is a process used by defense intelligence and the IC to detect indicators of potential threats while sufficient time still exists to counter those efforts. The doctrine of seeking advantage through warning is as old as the art of war; Sun Tzu called it “foreknowledge.” There are many I&W examples from the Cold War, from the overall analytic challenge (see the classic thesis “Anticipating Surprise”), and from specific domain challenges (see for example this 1978 CIA study, Top Secret but since declassified, on “Indications and Warning of Soviet Intentions to Use Chemical Weapons during a NATO-Warsaw Pact War”).
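To make the I&W idea concrete, here’s a deliberately toy sketch of my own – the indicator names, weights, and threshold are all made up for illustration, and bear no relation to any real doctrine or system. The core logic is simply: define a weighted checklist of indicators in advance, score incoming observations against it, and warn when the accumulated evidence crosses a threshold.

```python
# Toy Indications & Warning (I&W) scorer -- purely illustrative.
# Indicators and weights below are invented examples, not real tradecraft.

def warning_score(indicators, observations):
    """Return the fraction of total indicator weight currently observed.

    indicators: dict mapping indicator name -> weight (how diagnostic it is)
    observations: set of indicator names observed so far
    """
    total = sum(indicators.values())
    if total == 0:
        return 0.0
    observed = sum(w for name, w in indicators.items() if name in observations)
    return observed / total

# Hypothetical indicators of a covert AGI breakthrough (echoing OpenAI's
# suggestions: news, financial markets, online games).
AGI_INDICATORS = {
    "unexplained_compute_buildout": 0.3,
    "abrupt_benchmark_jumps": 0.4,
    "anomalous_market_trading": 0.2,
    "superhuman_game_agents": 0.1,
}

WARNING_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

score = warning_score(
    AGI_INDICATORS,
    {"abrupt_benchmark_jumps", "superhuman_game_agents"},
)
print(score, score >= WARNING_THRESHOLD)
```

The real analytic hard part, of course, is everything this sketch assumes away: choosing indicators that are actually diagnostic, weighting them, and collecting the observations.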

The I&W concept has since been extended to new domains of intelligence like Space/Counter-Space (see the 2013 DoD “Joint Publication on Space Operations Doctrine,” which describes the “unique characteristics” of the space environment for conducting I&W, whether from orbit or in other forms), and of course since 9/11 the I&W approach has been applied intensively in the counter-terrorism realm in defense and homeland security.

It’s obvious Elon Musk and his OpenAI cohort believe that superintelligence is a problem worth watching. Elon’s newest company, the brain-machine-interface startup Neuralink, sets its core motivation as avoiding a future in which AGI outpaces simple human intelligence. So I’m staying abreast of indications of AGI progress.

For the AGI domain I am tracking many sources through citations and published research (see OpenAI’s interesting list here), and watching for any mention of I&W monitoring attempts or results by others that address what OpenAI cites as “Problem #1.” So far, nothing of note.

But I’ll keep a lookout, so to speak.

 

 

Docere et Facere, To Teach and To Do

“Helping aspiring data scientists forge their own career paths, more universities are offering programs in data science or analytics.” – Wall Street Journal, March 13, 2017

George Bernard Shaw’s play Man and Superman provides the maxim, “He who can, does. He who cannot, teaches.” Most of us know this as “Those who can’t do, teach.” (And Woody Allen added a punch line in Annie Hall: “… and those who can’t teach, teach gym.”)

I’m determined both to do and to teach, because I enjoy each of them. When it comes to data and advanced analytics, something I’ve been using or abusing my entire career, I’m excited about expanding what I’m doing. So below I’m highlighting two cool opportunities I’m engaging in now…

 

Teaching Big Data Architectures and Analytics in the IC

I’ve just been asked by the government to teach again a popular graduate course I’ve been doing for several years, “Analytics: Big Data to Information.” It’s a unique course, taught on-site for professionals in the U.S. intelligence community, and accredited by George Mason University within GMU’s Volgenau Graduate School of Engineering. My course is the intro Big Data course for IC professionals earning a master’s or Ph.D. from GMU’s Department of Information Sciences and Technology, as part of the specialized Directorate for Intelligence Community Programs.

I enjoy teaching enormously, not having done it since grad school at Stanford a million years ago (ok, the ’80s). The students in the program are hard-working data scientists, technologists, analysts, and program managers from a variety of disciplines within the IC, and they bring their A-game to the classroom. I can’t share the full syllabus, but here’s a summary:

This course is taught as a graduate-level discussion/lecture seminar, with a Term Paper and end-of-term Presentation as assignments. Course provides an overview of Big Data and its use in commercial, scientific, governmental and other applications. Topics include technical and non-technical disciplines required to collect, process and use enormous amounts of data available from numerous sources. Lectures cover system acquisition, law and policy, and ethical issues. It includes discussions of technologies involved in collecting, mining, analyzing and using results, with emphasis on US Government environments.

I worry that mentioning this fall’s class now might gin up too much interest (last year I was told the waiting list had 30+ students who wanted to get in but couldn’t, and I don’t want to expand beyond a reasonable number), but when I agreed this week to offer the course again I immediately began thinking about the changes in the syllabus I may make. And I solicit your input in the comments below (or by email).

For the 2016 fall semester, I had to make many changes to keep up with technological advance, particularly in AI. I revamped and expanded the “Machine Learning Revolution” section, and beefed up the segments on algorithmic analytics and artificial intelligence, just to keep pace with advances in the commercial and academic research worlds. Several of the insights I used came from my onstage AI discussion with Elon Musk in 2015, and his subsequent support for the OpenAI initiative.

More importantly, I provided my students (can’t really call them “kids” as they’re mid-career intelligence officials!) with tools and techniques to keep abreast of advances outside the walls of government – or within the walls of non-U.S. government agencies overseas. So I’m going to have to do some work again this year to keep the course au courant, and your insight is welcome.

But as noted at the beginning, I don’t want to just teach gym – I want to be athletic. So my second pursuit is news on the work front.

 

Joining an elite Mission Analytics practice

I’m announcing what I like to think of as the successful merger of two leading consultancies: my own solo gig and Deloitte Consulting. And I’m even happy Deloitte won the coin-toss to keep its name in our merger 🙂

For the past couple of years I have been a solo consultant and I’ve enjoyed working with some tremendous clients, including government leaders, established tech firms, and great young companies like SpaceX and LGS Innovations (which traces its lineage to the legendary Bell Labs).

But working solo has its limitations, chiefly in implementation of great ideas. Diagnosing a problem and giving advice to an organization’s leadership is one thing – pulling together a team of experts to execute a solution is entirely different. I missed the camaraderie of colleagues, and the “mass-behind-the-arrowhead” effect to force positive change.

When I left Microsoft, the first phone call I got was from an old intelligence colleague, Scott Large – the former Director of NRO who had recently joined Deloitte, the world’s leading consulting and professional services firm. Scott invited me over to talk. It took a couple of years for that conversation to culminate, but I decided recently to accept Deloitte’s irresistible offer to join its Mission Analytics practice, working with a new and really elite team of experts who understand advanced technologies, are developing new ones, and are committed to making a difference for government and the citizens it serves.

Our group is already working on some impressively disruptive solutions using massive-scale data, AI, and immersive VR/AR… it’s wild. And since I know pretty much all the companies working in these spaces, I decided to go with the broadest, deepest, and smartest team, with the opportunity for highest impact.

Who could turn down the chance to teach, and to do?

 

Burning Man, Artificial Intelligence, and Our Glorious Future

I’ve had several special opportunities in the last few weeks to think a bit more about Artificial Intelligence (AI) and its future import for us remaining humans. Below I’m using my old-fashioned neurons to draw some non-obvious links.

The cause for reflection is the unexpected parallel between two events I’ve been involved in recently: (1) an interview of Elon Musk which I conducted for a conference in DC; and (2) the grand opening in London of a special art exhibit at the British Library which my wife and I are co-sponsoring. Each has an AI angle, and I believe their small lessons demonstrate something intriguingly hopeful about a future of machine superintelligence.

Continue reading

Intelligence, Artificial and Existential

"Not to Be or Not to Be?" artwork by Shuwit, http://shuwit.deviantart.com/

“Not to Be or Not to Be?” artwork by Shuwit, http://shuwit.deviantart.com/

I just published a short piece over at SIGNAL Magazine on an increasingly public debate over artificial intelligence, which the editor gave a great Shakespearean title echoing Hamlet’s timeless question “To be, or not to be”: Continue reading

Contributing to Intelligence Innovation

Below are two ways to contribute to innovation in government, and specifically in intelligence matters. One is for you to consider, the other is a fun new path for me.

Continue reading

Some say Obama has already chosen Cyber Czar

I’ll wade into the breach again, analyzing (and trying to anticipate) some national-security appointments for the new Obama Administration. Today I must admit that I’m taken with the latest reportage from the U.K. Spectator – a quite conservative publication not usually known for its closeness to the Obama inner circle.

Continue reading

“The Largest Social Network Ever Analyzed”

FACT: According to ComScore data cited in a story in Monday’s Financial Times, “Facebook, the fast-growing social network, has taken a significant lead over MySpace in visitor numbers for the first time… Facebook attracted more than 123 million unique visitors in May, an increase of 162 per cent over the same period last year… That compared with 114.6 million unique visitors at MySpace, Facebook’s leading rival, whose traffic grew just 5 per cent during the same period… The findings mark the first time that Facebook, launched in 2004, has taken a significant lead in unique visitors, [and] come at a time of change inside Facebook, as the one-time upstart attempts to transform itself into a leading media company.”

ANALYSIS:  This week several members of the Microsoft Institute met in Redmond with a visiting friend from government, and among other talks we had a very interesting discussion with Eric Horvitz, a Microsoft Research principal researcher and manager.  Eric’s well known for his work in artificial intelligence and currently serves as president of the Association for the Advancement of Artificial Intelligence (AAAI).

We talked about one of Eric’s recent projects for quite a while: “Planetary-Scale Views on a Large Instant-Messaging Network,” a project which has been described by his co-author as “the largest social network ever analyzed.” 

Continue reading

Intriguing Politics – Social Media Discussion

FACT:  The second International Conference on Weblogs and Social Media (ICWSM) wrapped up yesterday in Seattle. It was organized again by the Association for the Advancement of Artificial Intelligence (or AAAI), with co-sponsorship by Microsoft, Google, and several universities and Web 2.0 companies. The papers are already being posted online here, which is great as there were some very interesting topics explored. 

ANALYSIS:   One really thought-provoking theme was proposed by Matthew Hurst, a scientist at Microsoft’s Live Labs (and co-creator of BlogPulse), who was a participant on the “Politics and Social Media” panel.  He’s summarized his points on his own blog, but it’s definitely worth pointing out the key distinction he posed:

Firstly, politics is about scaling social organization. A premier can’t talk to every citizen, so s/he has lieutenant’s. They have their own underlings, and so on in a typical hierarchical/departmental structure. Social media, however, is all about individuals – we read entries in weblogs, etc. So, if a politician wants to connect via social media, isn’t there some sort of fundamental mismatch? Obama may have 20, 000 followers on Twitter, but how many comments has he left on blog posts?

Secondly, there is the issue of social media amplifying the polarization (or homophily) found in any topical community. Thus, individuals look around at their neighbours in the social graph and see much of what they themselves are made of.

This bottom-down, top-up dichotomy has been discussed more generally about social media and social networks (often drawing a sharp distinction between “old-media” and “new-media,” or more colloquially if imprecisely as between “the media” and “the web.”)

Continue reading