When Public Meets Private in Intelligence

Today is the anniversary of the 9/11 attacks on the American homeland, the event that set off the sequence which brought me from Silicon Valley to Washington DC in 2002 for a stint working in the Intelligence Community. I notice that no one asks me anymore, as they often did back then, why I was so intent on bridging the gap between DC and the Valley (broadly, not geographically, defined).

Today it surprises few when we do something unorthodox, like inviting Amazon and Blue Origin founder Jeff Bezos inside an intelligence agency earlier this year for a probing one-on-one at the AFCEA Spring Intelligence Symposium, where several hundred IC professionals heard his views on the rapid changes in technology, on public/private collaboration, and on the impact of AI and robotics on his business and theirs.

That rapid pace of change continues to accelerate, following its own Moore’s-Law-like curve, and daily one sees a blurring between how “intelligence” is performed inside government and how it is performed out among the public. To wit, check out this article from early August:

News Item: “BuzzFeed News Trained A Computer To Search For Hidden Spy Planes. This Is What We Found” …

“Surveillance aircraft often keep a low profile: The FBI, for example, registers its planes to fictitious companies to mask their true identity. So BuzzFeed News trained a computer to find them by letting a machine-learning algorithm sift for planes with flight patterns that resembled those operated by the FBI and the Department of Homeland Security…

First we made a series of calculations to describe the flight characteristics of almost 20,000 planes in the four months of Flightradar24 data: their turning rates, speeds and altitudes flown, the areas of rectangles drawn around each flight path, and the flights’ durations. We also included information on the manufacturer and model of each aircraft, and the four-digit squawk codes emitted by the planes’ transponders.

Then we turned to an algorithm called the “random forest,” training it to distinguish between the characteristics of two groups of planes: almost 100 previously identified FBI and DHS planes, and 500 randomly selected aircraft. The random forest algorithm makes its own decisions about which aspects of the data are most important. But not surprisingly, given that spy planes tend to fly in tight circles, it put most weight on the planes’ turning rates. We then used its model to assess all of the planes, calculating a probability that each aircraft was a match for those flown by the FBI and DHS…

The algorithm was not infallible: Among other candidates, it flagged several skydiving operations that circled in a relatively small area, much like a typical surveillance aircraft. But as an initial screen for candidate spy planes, it proved very effective. In addition to aircraft operated by the US Marshals and the military contractor Acorn Growth Companies, covered in our previous stories, it highlighted a variety of planes flown by law enforcement, and by the military and its contractors. Some of these aircraft use technologies that challenge our assumptions about when and how we’re being watched, tracked, or listened to. It’s only by understanding when and how these technologies are used from the air that we’ll be able to debate the balance between effective law enforcement, national security, and individual privacy.”
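
What BuzzFeed describes is a fairly standard supervised-learning workflow. Here is a minimal sketch of that approach in Python with scikit-learn; the column names, feature set, and parameters are illustrative assumptions on my part, not BuzzFeed's actual implementation.

```python
# A hedged sketch of the workflow described above: compute per-aircraft flight
# features, train a random forest on known surveillance planes vs. a random
# sample of ordinary planes, then score every aircraft with a probability.
# Feature names and DataFrame columns here are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

FEATURES = [
    "turn_rate",    # how tightly and often the plane circles
    "speed",
    "altitude",
    "bbox_area",    # area of the rectangle drawn around the flight path
    "duration",
    "squawk",       # four-digit transponder code (treated numerically for simplicity)
]

def train_model(labeled: pd.DataFrame) -> RandomForestClassifier:
    """labeled: one row per aircraft, FEATURES columns plus a 0/1 'surveil' label."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(labeled[FEATURES], labeled["surveil"])
    return model

def score_all(model: RandomForestClassifier, planes: pd.DataFrame) -> pd.DataFrame:
    """Return each aircraft with its estimated probability of being a surveillance plane."""
    probs = model.predict_proba(planes[FEATURES])[:, 1]
    return planes.assign(surveil_prob=probs).sort_values("surveil_prob", ascending=False)
```

As the article notes, such a screen produces false positives (skydiving operations circle too), so the output is a ranked candidate list for human review, not a verdict.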

It has become commonplace to observe the dwindling distinctions in the use of “intelligence capabilities” between longstanding government intelligence agencies and private-sector companies such as news outlets or social-media platforms. For a tour-de-force expression of that point of view you will profit from reading John Lanchester’s new and epic book-review essay “You Are the Product” in the London Review of Books, in which he examines Google, Microsoft, Facebook and the like with a critical lens and concludes:

[E]ven more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company….”

A short blog piece is not the place to examine fully this rich topic, but it is a good place to point out that I enjoy spending time helping all sides of this divide understand each other. By all sides, I mean government entities and officers (including intelligence and law enforcement), private-sector companies, and most importantly the public citizenry and customer base of those organizations. A great forum for doing that has been AFCEA, which this past week co-hosted with INSA the annual Intelligence and National Security Summit in DC. Along with helping oversee the agenda I had the opportunity to organize one of the panel sessions with my old friend (and former CIA Deputy Director of Intelligence) Carmen Medina.

Our panel – very relevant to the above discussion – was on “The Role of Intelligence in the Future Threat Environment,” and our excellent participants addressed some gnarly problems. I tweeted many of the comments and observations (see my hashtagged feed here), and you can find more content and videos from all 15 sessions archived here.

Your suggestions on new approaches to these dialogues are welcome as always. As we commemorate the horrific surprise attacks of 9/11/2001, in a rapidly changing world where real-time surveillance is performed by more and more entities, governmental and commercial, it is increasingly important to engage in thoughtful – and sometimes urgent – discussion about who watches whom, and why.

Problem Number One, Watching for Superintelligence

Two years ago, the AFCEA Intelligence Committee (I’m a member) invited Elon Musk for a special off-the-record session at our annual classified Spring Intelligence Symposium. The Committee assigned me the task of conducting a wide-ranging on-stage conversation with him, going through a variety of topics, but we spent much of our time on artificial intelligence (AI) – and particularly artificial general intelligence (AGI, or “superintelligence”).

I mention that the session was off the record. In my own post about the session back in 2015, I didn’t characterize Elon’s side of the conversation or his answers to my questions – but for flavor I did include the text of one particular question on AI which I posed to him. I thought it was the most important question I asked…

NGA Photo: Lewis Shepherd, Elon Musk, 2015

(Our audience that day: the 600 attendees included a top-heavy representation of the Intelligence Community’s leadership, its foremost scientists and technologists, and executives from the nation’s defense and national-security private-sector partners.)

Here’s that one particular AI question I asked, quoted from my blogpost of 7/28/2015:

“AI thinkers like Vernor Vinge talk about the likelihood of a “Soft takeoff” of superhuman intelligence, when we might not even notice and would simply be adapting along; vs a Hard takeoff, which would be a much more dramatic explosion – akin to the introduction of Humans into the animal kingdom. Arguably, watching for indicators of that type of takeoff (soft or especially hard) should be in the job-jar of the Intelligence Community. Your thoughts?”

Months after that AFCEA session, in December 2015 Elon worked with Greg Brockman, Sam Altman, Peter Thiel and several others to establish and fund OpenAI, “a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence (AGI).” OpenAI says it has a full-time staff of 60 researchers and engineers, working “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Fast-forward to today. Over the weekend I was reading through a variety of AI research and sources, keeping current in general for some of my ongoing consulting work for Deloitte’s Mission Analytics group. I noticed something interesting on the OpenAI website, specifically on a page it posted several months ago labelled “Special Projects.”

There are four such projects listed, described as “problems which are not just interesting, but whose solutions matter.” Interested researchers are invited to apply for a position at OpenAI to work on them – and all four could lead to consequential work.

But the first Special Project problem caught my eye, because of my question to Musk the year before:

  1. Detect if someone is using a covert breakthrough AI system in the world. As the number of organizations and resources allocated to AI research increases, the probability increases that an organization will make an undisclosed AI breakthrough and use the system for potentially malicious ends. It seems important to detect this. We can imagine a lot of ways to do this — looking at the news, financial markets, online games, etc.”

That reads to me like a classic “Indications & Warning” problem statement from the “other” non-AI world of intelligence.

I&W (in the parlance of the business) is a process used by defense intelligence and the IC to detect indicators of potential threats while sufficient time still exists to counter them. The doctrine of seeking advantage through warning is as old as the art of war; Sun Tzu called it “foreknowledge.” There are many I&W examples from the Cold War, both from the overall analytic challenge (see the classic thesis “Anticipating Surprise”) and from specific domain challenges (see for example this 1978 CIA study, Top Secret but since declassified, on “Indications and Warning of Soviet Intentions to Use Chemical Weapons during a NATO-Warsaw Pact War”).

The I&W concept has steadily been extended to new intelligence domains like Space/Counter-Space (see the 2013 DoD “Joint Publication on Space Operations Doctrine,” which describes the “unique characteristics” of the space environment for conducting I&W, whether from orbit or in other forms), and of course since 9/11 the I&W approach has been applied intensively to counter-terrorism work in defense and homeland security.

It’s obvious Elon Musk and his OpenAI cohort believe that superintelligence is a problem worth watching. Elon’s newest company, the brain-machine-interface startup Neuralink, sets its core motivation as avoiding a future in which AGI outpaces simple human intelligence. So I’m staying abreast of indications of AGI progress.

For the AGI domain I am tracking many sources through citations and published research (see OpenAI’s interesting list here), and watching for any mention of I&W monitoring attempts or results by others that would meet the challenge OpenAI poses in “Problem #1.” So far, nothing of note.

But I’ll keep a lookout, so to speak.

Coming to DC, One-Day Delivery from Jeff Bezos

If you read this blog you care about government and technology. And whether you’re a technologist or not, you can see the tech forces shaping and sharpening the uses of digital capabilities in accomplishing the ends of government, whether that’s citizen-service delivery and local law enforcement, or global diplomacy and nation-state combat. I’ve worked on and written about them all – from intelligence to space to AI, to the quantification of Supreme Court humor, even “Punk Rock and Moore’s Law.”

Understanding and forecasting that radical pace of external change is difficult for government professionals, and they need help doing that. Let’s say you wanted to tap someone to offer insight. Who’d be on your dream list? At the top of my dream list – my absolute “if-only-I-could-ask” list – would be Jeff Bezos, founder of Amazon and Blue Origin.

So I’m going to sit down with Jeff Bezos on stage later this month, at the annual AFCEA Intelligence Symposium, for a conversation about areas where technology critically intersects with the nation’s response to enduring challenges and opportunities, such as artificial intelligence, digital innovation, the revolution in cloud computing, and commercial space operations. (Alongside my day job at Deloitte, I serve as national Vice Chairman of the Intelligence Committee at AFCEA, the 35,000-member Armed Forces Communications & Electronics Association.)

photo: AFCEA Symposium Invitation

The 2017 Symposium features, as usual, a stellar line-up of top leaders in national security, with panels on Advanced Conventional Threats, the Contested Environment of Space, Terrorism, Cyber Threats from Nation-States and Non-State Entities, and Gray-Zone Conflicts/Hybrid Warfare (the topic of last year’s Defense Science Board study, on which I served). All sessions feature senior thought-leaders from government and industry.

Jeff Bezos might be new in that particular mix, but you can understand why we invited him. He has been TIME Magazine’s Person of the Year (early in his career, in 1999) and Fortune Magazine’s Businessperson of the Year, and has topped the Forbes annual list of “World’s Greatest Leaders.” Our rationale for this conversation, though, is his long track record of revolutionary contributions to technological and economic advance, as well as to US national security. AWS is now of central importance to the public sector (including intelligence), and the broader contributions of Amazon and Blue Origin to the nation’s economic future and success are incalculable.


If you have a Top Secret/SI/TK clearance, you can attend the Symposium – register here. [Update: sorry, the 2017 Symposium is sold out.]

There’s a longstanding meme that government should be “run like a business.” I typically don’t think in precisely those terms, having been on both sides and recognizing the significant differences in intent and stakeholders.

photo: Lewis Shepherd; Gen. “Wheels” Wheeler (Ret.) of DIUx; Russell Stern, CEO Solarflare

Panel on Defense Innovation and DIUx

I’ve been more interested in helping each sector understand the unique contributions of the other, and the complexities inherent in their relationship. (See for example my recent post on DoD Innovation and DIUx in Silicon Valley.)

But the “run government like a business” impetus is understandable here in the United States as a reflection of dissatisfaction with government performance in meeting its own goals and the expectations of the citizens it serves. President Trump recently assigned Jared Kushner to lead a new White House Office of American Innovation, and Kushner told the Washington Post, “We should have excellence in government. The government should be run like a great American company.” The Washington Post (coincidentally owned by, yes, Jeff Bezos) ran a piece exploring the history of that thinking, dating its surge in popularity to the early 1980s under President Reagan – a timeline borne out by running the phrase through Google’s Ngram Viewer (see chart).
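
If you want to reproduce that Ngram check yourself, here is a minimal sketch. It calls the undocumented JSON endpoint that the Ngram Viewer web page uses behind the scenes; the endpoint, its parameters, the response shape, and the exact phrase queried are assumptions on my part and could change without notice.

```python
# A hedged sketch: fetch a phrase's frequency over time from the (unofficial,
# undocumented) JSON endpoint behind Google's Ngram Viewer. The endpoint,
# parameters, and response shape are assumptions and may change.
import requests

def ngram_series(phrase: str, year_start: int = 1960, year_end: int = 2008):
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()  # expected: list of {"ngram": ..., "timeseries": [...]}
    if not data:
        return []
    return list(zip(range(year_start, year_end + 1), data[0]["timeseries"]))

if __name__ == "__main__":
    # Phrase chosen for illustration; the chart in the post may have used a variant.
    for year, freq in ngram_series("run government like a business"):
        if year % 5 == 0:
            print(year, f"{freq:.2e}")
```

The returned values are relative frequencies within the Google Books corpus, so the interesting signal is the shape of the curve (the early-1980s upswing), not the absolute numbers.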

The last time I invited a smart young billionaire to come speak to Intelligence Community leaders, it worked out pretty well for the audience (see Burning Man and AI: What Elon Musk told me and the role of Art). So I’m aiming even higher this year…

If you don’t have a Top Secret clearance, you can’t get into the Symposium, and won’t be able to hear Bezos firsthand on April 27. But here’s a substitute, nearly as good: this week Bezos published his annual Letter to Shareholders of Amazon. Most people in the business world know about his legendary 1997 “first annual letter to shareholders” in which he laid out an extraordinary long-term vision for his company. The 2017 version is also extraordinary, and I urge you to read it in full. My friend Jeff Jonas, former IBM Chief Scientist for Context Computing and now founder/Chief Scientist at Senzing, calls it “the most impressive annual letter to shareholders I’ve ever read; this line of thinking leads to greatness.”

– – – – –

For some parting eye-candy, here’s video from last week’s annual Space Symposium in Colorado Springs, where attendees got a first-hand look at the historic Blue Origin New Shepard rocket booster (first to land vertically after spaceflight, first to relaunch, and now a trophy after five flights), and an inside tour of the crew capsule with “the largest windows in space travel.”

Docere et Facere, To Teach and To Do

“Helping aspiring data scientists forge their own career paths, more universities are offering programs in data science or analytics.” – Wall Street Journal, March 13, 2017

George Bernard Shaw’s play Man and Superman provides the maxim, “He who can, does. He who cannot, teaches.” Most of us know this as “Those who can’t do, teach.” (And Woody Allen added a punch line in Annie Hall: “… and those who can’t teach, teach gym.”)

I’m determined both to do and to teach, because I enjoy each of them. When it comes to data and advanced analytics, something I’ve been using or abusing my entire career, I’m excited about expanding what I’m doing. So below I’m highlighting two cool opportunities I’m engaging in now…

Teaching Big Data Architectures and Analytics in the IC

I’ve just been asked by the government to teach again a popular graduate course I’ve been doing for several years, “Analytics: Big Data to Information.” It’s a unique course, taught on-site for professionals in the U.S. intelligence community, and accredited by George Mason University within GMU’s Volgenau Graduate School of Engineering. My course is the intro Big Data course for IC professionals earning a master’s or Ph.D. from GMU’s Department of Information Sciences and Technology, as part of the specialized Directorate for Intelligence Community Programs.

I enjoy teaching enormously, not having done it since grad school at Stanford a million years ago (ok, the ’80s). The students in the program are hard-working data scientists, technologists, analysts, and program managers from a variety of disciplines within the IC, and they bring their A-game to the classroom. I can’t share the full syllabus, but here’s a summary:

This course is taught as a graduate-level discussion/lecture seminar, with a Term Paper and end-of-term Presentation as assignments. Course provides an overview of Big Data and its use in commercial, scientific, governmental and other applications. Topics include technical and non-technical disciplines required to collect, process and use enormous amounts of data available from numerous sources. Lectures cover system acquisition, law and policy, and ethical issues. It includes discussions of technologies involved in collecting, mining, analyzing and using results, with emphasis on US Government environments.

I worry that mentioning this fall’s class now might gin up too much interest (last year I was told the waiting list had 30+ students who wanted to get in but couldn’t, and I don’t want to expand beyond a reasonable number), but when I agreed this week to offer the course again I immediately began thinking about the changes I may make to the syllabus. I solicit your input in the comments below (or by email).

For the 2016 fall semester, I had to make many changes to keep up with technological advance, particularly in AI. I revamped and expanded the “Machine Learning Revolution” section, and beefed up the segments on algorithmic analytics and artificial intelligence, just to keep pace with advances in the commercial and academic research worlds. Several of the insights I used came from my onstage AI discussion with Elon Musk in 2015, and his subsequent support for the OpenAI initiative.

More importantly I provided my students (can’t really call them “kids” as they’re mid-career intelligence officials!) with tools and techniques for them to keep abreast of advances outside the walls of government – or those within the walls of non-U.S. government agencies overseas. So I’m going to have to do some work again this year, to keep the course au courant, and your insight is welcome.

But as noted at the beginning, I don’t want just to teach gym – I want to be athletic. So my second item is news on the work front.

Joining an elite Mission Analytics practice

I’m announcing what I like to think of as the successful merger of two leading consultancies: my own solo gig and Deloitte Consulting. And I’m even happy Deloitte won the coin-toss to keep its name in our merger 🙂

For the past couple of years I have been a solo consultant and I’ve enjoyed working with some tremendous clients, including government leaders, established tech firms, and great young companies like SpaceX and LGS Innovations (which traces its lineage to the legendary Bell Labs).

But working solo has its limitations, chiefly in the implementation of great ideas. Diagnosing a problem and giving advice to an organization’s leadership is one thing – pulling together a team of experts to execute a solution is entirely different. I missed the camaraderie of colleagues, and the “mass-behind-the-arrowhead” effect needed to force positive change.

When I left Microsoft, the first phone call I got was from an old intelligence colleague, Scott Large – the former Director of NRO who had recently joined Deloitte, the world’s leading consulting and professional services firm. Scott invited me over to talk. It took a couple of years for that conversation to culminate, but I decided recently to accept Deloitte’s irresistible offer to join its Mission Analytics practice, working with a new and really elite team of experts who understand advanced technologies, are developing new ones, and are committed to making a difference for government and the citizens it serves.

Our group is already working on some impressively disruptive solutions using massive-scale data, AI, and immersive VR/AR… it’s wild. And since I know pretty much all the companies working in these spaces, I decided to go with the broadest, deepest, and smartest team, with the opportunity for highest impact.

Who could turn down the chance to teach, and to do?

IoT Botnet Attacks – Judge for Yourself

Yesterday’s mass IoT-botnet attack on core Internet services (Twitter, Netflix, and others, via DNS provider Dyn) is drawing a lot of attention, mainly because, for the public at large, it is an eye-opening education in the hidden Internet-of-Things connections between their beloved electronic devices and online services.

Image of swarming networked DVRs and Webcams

You can read elsewhere the details of the attack as they are understood so far (e.g. “Hacked Cameras, DVRs Powered Today’s Massive Internet Outage” by Brian Krebs). And you’ll be reading more and more warnings that this particular attack is just the beginning (e.g. from my friend Alan Silberberg, “Mirai Botnet DDoS Just the Beginning of IoT Cybersecurity Breaches”).

But today, in the wake of the attack, a DC friend known for peering around corners asked for my opinion about the ultimate meaning of this approach, and whether this attack means “the game has changed.” Here’s my response:

Last year I was asked by Georgetown Law School to give a private briefing to the Federal Judicial Center’s annual convocation of 65 federal judges from jurisdictions across the United States. The overall FJC session addressed “National Security, Surveillance Technology and the Law,” and in part was prompted by the Edward Snowden and WikiLeaks events. Here’s an article about the conference, and you can view the full agenda here. As you can see from the agenda, I joined noted security expert Bruce Schneier in presenting on “Computer Architectures and Remote Access.” That’s a fairly technical topic, and so I asked an organizer ahead of time what the judges wanted to learn and why, and was told “They’re encountering a tidal wave of cases that involve claims against government warrants for access, and conversely claims involving botnet attacks and liability.” I then asked what level of technical proficiency I should assume in preparing my remarks, and was told, “Based on their own self-assessments, you should assume they’re newbies encountering computers for the very first time.”

After a good laugh, that was the approach I took, and with patience Bruce and I were able both to educate and to spark a great back-and-forth conversation among the nation’s judges about the intricacies of applying slowly evolving legal doctrines to rapidly evolving technical capabilities.

The answer to today’s question is yes: the game has changed. The tidal wave is well upon us, and for the most part it won’t be turned back by technical means. We can (over time) introduce tighter security into some elements of IoT devices and networks, but that won’t be easy and would hamper the ease and invisibility of IoT operations. I think eventually we’ll come to realize that the notion of “Internet Security” is going to be like “Law & Order” – a good aspiration, which in everyday practice is observed in the breaking.

We’ll develop more robust judicial and insurance remedies, providing better avenues for penalization and risk valuation, for what will be an inevitable and continuing onslaught of law-breaking.

Yet in that onslaught crimes will be better defined, somewhat better policed, definitely better prosecuted (our Judges will be better educated!), and perhaps most importantly victims will be better insured and compensated, as we learn to manage and survive each new wave of technological risk.

By the way, if you’d like to plunge into the reading list those federal judges were assigned as homework on surveillance technologies and national security law, click here or the image below to download the 5-page syllabus for the session, courtesy of Georgetown Law, with links to the full set of Technology Readings and Legal Readings across fields like Interception and Location Tracking, Digital Forensics, Metadata and Social Network Analytics, Cloud Computing and Global Communications…. It’s a very rich and rewarding collection, guaranteed to make you feel as smart as a federal judge 🙂

image: Readings on Law and Tech

Video of DoD Innovation Discussion at Cybersecurity Summit

Earlier this week I wrote (“Beware the Double Cyber Gap“) about an upcoming Cybersecurity Summit, arranged by AFCEA-DC, for which I would be a panelist on innovation and emerging technologies for defense.

The Summit was a big success, and in particular I was impressed with the level and quality of interaction between the government participants and their private-sector counterparts, both on stage and off. Most of the sessions were filmed, and are now available at http://www.cybersecuritytv.net.

You can watch our panel’s video, “Partnering with Industry for Innovation,” and it will provide an up-to-the-moment view of how US Cyber Command and the Department of Defense as a whole are attacking the innovation challenge, featuring leadership from the USCYBERCOM Capabilities Development Group, and the Defense Innovation Unit-Experimental. Solarflare CEO Russ Stern (a serial entrepreneur from California) and I offered some historical, technical, market, and regulatory context for the challenge those two groups face in finding the best technologies for national security. Most of my remarks are after the 16:00 minute mark; click the photo below to view the video:

photo: Lewis Shepherd; Gen. “Wheels” Wheeler (Ret.) of DIUx; Russell Stern, CEO Solarflare

From my remarks:

“I’m here to provide context. I’ve been in both these worlds – I came from Silicon Valley; I came to the Defense Intelligence Agency after 9/11, and found all of these broken processes, all of these discontinuities between American innovation & ingenuity on one hand, and the Defense Department & the IC & government at large…
Silicon was a development of government R&D money through Bell Labs, the original semiconductor; so we have to realize the context that there’s been a massive disruption in the divorcing of American industry and the technology industry, from the government and the pull of defense and defense needs. That divorcing has been extremely dramatic just in the past couple of years post-Snowden, emblematically exemplified with Apple telling the FBI, “No thanks, we don’t think we’ll help you on that national security case.”
So these kinds of efforts like DIUx are absolutely essential, but you see the dynamic here, the dynamic now is the dog chasing the tail – the Defense Department chasing what has become a massive globally disruptive and globally responsive technology industry…  This morning we had the keynote from Gen. Touhill, the new federal Chief Information Security Officer, and Greg told us that what’s driving information security, the entire industry and the government’s response to it is the Internet – through all its expressions, now Internet of Things and everything else – so let’s think about the massive disruption in the Internet just over the last five years.
Five years ago, the top ten Internet companies measured by eyeballs, by numbers of users, the Top 10 were all American companies, and it’s all the ones you can name: Amazon, Google, Microsoft, Facebook, Wikipedia, Yahoo… Guess what, three years ago the first crack into that Top 10, only six of those companies were American companies, and four – Alibaba, Baidu, Tencent, and Sohu – were Chinese companies. And guess what, today only five are American companies, and those five – Google, Amazon, Microsoft, Facebook, Yahoo – eighty percent or more of their users are non-U.S. Not one of those American internet companies has even twenty percent of their user-base being U.S. persons, U.S. citizens. Their market, four out of five of their users are global.
So when [DoD] goes to one of these CEOs and says, “Hey c’mon, you’re an American” – well, maybe, maybe not. That’s a tough case to sell. Thank God we have these people, with the guts and drive and the intellect to be able to try and make this case, that technological innovation can and must serve our national interest, but that’s an increasingly difficult case to make when [internet] companies are now globally mindsetted, globally incentivized, globally prioritizing constantly…”

Kudos to my fellow panelists for their insights, and their ongoing efforts, and to AFCEA for continuing its role in facilitating important government/industry partnerships.

Beware the Double Cyber Gap

I’ve somehow been invited onto yet another star-studded panel in Washington DC – on October 11 at the 2016 AFCEA DC Cybersecurity Summit. I don’t recommend many cyber conferences or events, as they’ve become overly frequent and unfocused. This one’s different: it brings together acknowledged senior experts from multiple federal agencies, including the Department of Homeland Security, the Department of Defense, and the intelligence community, along with counterparts from industry. If cyber’s your game you should be there; the line-up of speakers is truly impressive.

(It’s too late to register online, but on-site registration is available for the first day at the venue, DC’s Grand Hyatt on H Street downtown. The second day, classified sessions at the TS/SCI level at a separate location, is already sold out, but Day 1 still has a few seats left.)

I realize, though, that most of my readers will not be in attendance, so I thought I’d share a few highlights which I expect from my own panel, titled “Partnering with Industry for Innovation – DIUx” and focusing on DoD’s new Defense Innovation Unit Experimental (now in Version 2.0!) and its partnerships in government and the private sector.

Our session participants:

  • Moderator: Francis Rose, Host, Government Matters on ABC 
  • Charles Nelson, Deputy Director for Outreach, U.S. Cyber Command Capabilities Development Group (CDG)
  • Lewis Shepherd, Private Consultant on Advanced Technologies and Strategic Innovation
  • Sean Singleton, Director of Engagement, DIUx
  • Russell Stern, CEO, Solarflare Communications
  • Maj Gen Robert “Wheels” Wheeler (Ret.), Senior Advisor, DIUx

We intend to cover the DIUx approach to working with innovative companies (in Silicon Valley and across the United States) to deliver new solutions and technologies for warfighters.

But I also intend to discuss a certain two-sided disparity: the Double Cyber Gap.

If you’re of a certain age, you can’t help thinking about national security strategy as momentary scenes from “Dr. Strangelove” flicker by in your mind. I’ve always loved Stanley Kubrick’s 1964 satirical nuclear black comedy, which answered the question, “What would happen if the wrong person pushed the wrong button in a nuclear-armed world?” One of the many classic moments is a send-up of the era’s bipolar worry about superpower equipoise, with a “Doomsday Machine Gap” and its inevitable successor, a “Mineshaft Gap.”

Kubrick was skewering the mindset of the “Missile Gap” controversy, which was fresh in his mind as he wrote the screenplay during President Kennedy’s term; JFK had won office in 1960 in part by attacking Vice President Richard Nixon for ignoring an imminent Soviet “Missile Gap” superiority. As Wikipedia summarizes, “Kennedy is credited with inventing the term in 1958 as part of the ongoing election campaign, in which a primary plank of his rhetoric was that the Eisenhower administration was weak on defense. It was later learned that Kennedy was apprised of the actual situation [no actual gap] during the campaign, which has led scholars to question what the (future) president knew and when he knew it. There has been some speculation that he was aware of the illusory nature of the missile gap from the start, and was using it solely as a political tool, an example of policy by press release.”

You can read the New York Times retrospective look (it popped the Missile Gap bubble originally in a 1961 story), and go through a valuable collection of the CIA’s now declassified documents from the era. But what’s relevant is the notion of early warning about a perceived or real disparity between opposing forces. Unfortunately that’s what I see developing, in a couple of very significant ways.

The Double Cyber Gap

Picture in your mind both faces of a double-sided coin. The Double Cyber Gap consists of two linked phenomena:

  1. The Post-Snowden Gap: there’s a newly demonstrable political or ideological cleavage between Silicon Valley commercial technology companies and their erstwhile innovation partners in DoD and the US intelligence community. The Apple/FBI dispute over decrypting an iPhone in the San Bernardino terrorism case was only one dramatic example; others aren’t played out in open media. I’ve written and spoken about that gap for the past few years as I’ve watched it yawn open, and have tried to limit its width in my government advisory roles and while consulting for tech firms. DIUx works to that goal as well, though the Secretary of Defense himself acknowledged that its first highly-touted incarnation was a failure.
  2. The Capability-Adoption Gap: Those same commercial companies aim their innovations at the widest possible market – meaning a global one. For advanced cyber capabilities (dual-use as defensive or offensive) or other digital disruptions, we can predict that early adopters will include nation-state government agencies (including in Russia and China), hacking communities, and individual cyber criminals working on their own illicit agendas.

You can practically draw a cyclical diagram of the progression of advanced cyber techniques and technologies, with their adoption passing rapidly from commercial bleeding-edge users to foreign actors and malevolent individuals… and then, tardily if at all, to mainline US government agencies, long after their potency is being exploited by adversaries, or reverse-engineered and exceeded.

The Double Cyber Gap presents DoD with nearly a Hobson’s Choice. DoD can rely increasingly on commercial cyber technologies because of their rapid innovation and disruption – but only while realizing that it won’t be gaining any advantage over foreign adversaries, who are adopting the same commercial capabilities and likely deploying them even faster. It’s deeply problematic for US cybersecurity strategy, and a potentially fatal flaw for DoD’s related “Third Offset” strategy as well.

Let me illustrate that “no-choice-at-all” dilemma with an intriguing behind-the-scenes story, an excerpt from a new profile of Silicon Valley entrepreneur Sam Altman, the president of Y Combinator, who is now not only driving his YC startups but also the new OpenAI artificial intelligence research company he has co-founded with Elon Musk and others. The excerpt presents the AI vector of what I’m calling the Double Cyber Gap:

This spring, Altman met Ashton Carter, the Secretary of Defense, in a private room at a San Francisco trade show. Altman wore his only suit jacket, a bunchy gray number his assistant had tricked him into getting measured for on a trip to Hong Kong. Carter, in a pin-striped suit, got right to it. “Look, a lot of people out here think we’re big and clunky. And there’s the Snowden overhang thing, too,” he said, referring to the government’s treatment of Edward Snowden. “But we want to work with you in the Valley, tap the expertise.”

“Obviously, that would be great,” Altman said. “You’re probably the biggest customer in the world.” The Defense Department’s proposed research-and-development spending next year is more than double that of Apple, Google, and Intel combined. “But a lot of startups are frustrated that it takes a year to get a response from you.” Carter aimed his forefinger at his temple like a gun and pulled the trigger. Altman continued, “If you could set up a single point of contact, and make decisions on initiating pilot programs with YC companies within two weeks, that would help a lot.”

“Great,” Carter said, glancing at one of his seven aides, who scribbled a note. “What else?”

Altman thought for a while. “If you or one of your deputies could come speak to YC, that would go a long way.”

“I’ll do it myself,” Carter promised.

As everyone filed out, Chris Lynch, a former Microsoft executive who heads Carter’s digital division, told Altman, “It would have been good to talk about OpenAI.” Altman nodded noncommittally. The 2017 U.S. military budget allocates three billion dollars for human-machine collaborations known as Centaur Warfighting, and a long-range missile that will make autonomous targeting decisions is in the pipeline for the following year. Lynch later told me that an OpenAI system would be a natural fit.

Altman was of two minds about handing OpenAI products to Lynch and Carter. “I unabashedly love this country, which is the greatest country in the world,” he said. At Stanford, he worked on a DARPA project involving drone helicopters. “But some things we will never do with the Department of Defense.” He added, “A friend of mine says, ‘The thing that saves us from the Department of Defense is that, though they have a ton of money, they’re not very competent.’ But I feel conflicted, because they have the world’s best cyber command.” Altman, by instinct a cleaner-up of messes, wanted to help strengthen our military—and then to defend the world from its newfound strength.

Altman is patriotic, and thoughtful – very. But his conversation with Secretary Carter might best have begun with that private reluctance he shared only with the reporter later.

Even though the Double Cyber Gap is palpable, in Altman’s thinking and elsewhere, there are ways around that Hobson’s Choice dilemma. I share those with my consulting clients, and we’ll be addressing them and new ideas at the Cybersecurity Summit as well. I hope to see you there, but I’d also be interested in hearing your thoughts (comments below or by email).
