Meet the Future-Makers

Question: Why did Elon Musk just change his Twitter profile photo? He now seems to be evoking James Bond, or perhaps Dr. Evil:

twitter photos, Elon v Elon

I’m not certain, but I think I know why. Read on…

________________________________________

“Prediction is very difficult, especially about the future.”

      – Niels Bohr, winner of the 1922 Nobel Prize for Physics

“History will be kind to me for I intend to write it.”

      – Winston Churchill

If you take those two quotations to heart, you might decide to forgo the difficulty of predicting the future, instead aiming to bend the future’s story arc yourself. In a nutshell, that’s what R&D is all about: making the future.

Who makes the future for the intelligence community? Who has more influence on the future technologies which intelligence professionals will use: government R&D specialists, or private-sector industry?

On the one hand, commercial industry’s R&D efforts are pulled by billions of invisible consumer hands around the globe, driving rapid innovation and ensuring that bold bets can be rewarded in the marketplace. Recent examples are things like Web search, mobile phones and tablets, and SpaceX launches.

To be fair, though, the US IC and DoD have the ability to focus intently on specific needs, with billions of dollars if necessary, and to drive exotic, game-changing R&D for esoteric mission use. During my time in government I saw great successes; they are classified, of course, but they exist.

If you want to explore both sides and you have a Top Secret clearance, you’re in luck, because you can attend what I expect will be an extraordinary gathering of Future-Makers from inside and outside the IC, at next month’s AFCEA Spring Intelligence Symposium.

Last fall, the organizing committee for this annual classified Symposium began planning topics and participants. We decided that this year’s overall theme had to be “IC Research & Development,” and we departed from tradition by bringing together an unprecedented array of senior leaders from inside and outside government to explore the path forward for IC innovation and change.

The May 20-21 Symposium, held at NGA’s Headquarters, will be a one-of-a-kind event designed to set the tone and agenda for billions of dollars in IC investment. On the government front, attendees will witness the roll-out of the new (classified) Science & Technology 2015-2019 Roadmap; see this article for some background on that. Attendees will also meet and hear R&D leaders from all major IC agencies, including:

  • Dr. David Honey, Director of Science and Technology, ODNI
  • Dr. Peter Highnam, Director, Intelligence Advanced Research Projects Activity (IARPA)
  • Glenn Gaffney, Deputy Director for Science & Technology, CIA
  • Stephanie O’Sullivan, Principal Deputy Director of National Intelligence
  • Dr. Greg Treverton, Chairman, National Intelligence Council
  • The IC’s functional managers for SIGINT, MASINT, GEOINT, HUMINT, OSINT, and Space

Meanwhile from the private sector, we’ll have:

  • Elon Musk, CEO/CTO of SpaceX, CEO/Chief Product Architect of Tesla Motors, CEO of SolarCity, Co-founder of PayPal
  • Gilman Louie, Partner at Alsop Louie Venture Capital, former CEO of In-Q-Tel
  • Bill Kiczuk, Raytheon VP, CTO, and Senior Principal Engineering Fellow
  • Zach Lemnios, IBM VP for Research Strategy and Worldwide Operations
  • Pres Winter, Oracle VP, National Security Group

When I first proposed that we invite an array of “outside” future-makers to balance the government discussion with a different perspective, I said to my colleagues on the planning committee, “Wouldn’t it be awesome to get someone like Elon Musk…”

Well, we did, and next month I’ll be welcoming him on stage.

These are dark and challenging times in international security, but for scientists, technologists, and engineers, there’s never been a more exciting time – and like them, intelligence professionals should stretch their horizons.

I’m looking forward to the conference… and here’s your link to register to join us.

PS: Just to whet your appetite, here’s new video of this week’s revolutionary SpaceX Falcon 9 first-stage landing attempt on a drone barge at sea – it nearly made it, very exciting:

Young Americans and the Intelligence Community

A few days ago I travelled down to Orlando – just escaping the last days of the DC winter. I was invited to participate in a conference hosted by the Intelligence Community’s Center of Academic Excellence (IC CAE) at the University of Central Florida. The title of my speech was “The Internet, 2015-2025: Business and Policy Challenges for the Private Sector.” But I actually learned as much as I taught, maybe more.

First, several surprises I learned while preparing for my presentation. UCF is now the nation’s second-largest university, with over 60,000 students (trailing only Arizona State University). The size of the undergraduate and graduate student population obviously translates into a robustly diverse set of student activities. Among those we met with were several leaders and members of the Collegiate Cyber Defense Club (see their site at HackUCF.org), which has hundreds of members, holds weekly meetings, and is fresh off winning the national 2014 Collegiate Cybersecurity Championship Cup, beating out more than 200 schools based on performance in competitions throughout the year.

LS speaking at IC CAE conference

IC CAE conference, photo by Mike Macedonia

Another surprise was the full extent of the umbrella activity within which the conference was organized. The UCF IC-CAE is one of several such centers (overall info page here) established over the past decade, under a 2005 congressionally mandated mission “to increase intelligence community job applicants who are multi-disciplinary, as well as culturally and ethnically diverse [via] grants to competitively accredited U.S. four-year colleges and universities to support the design and development of intelligence-related curricula.” There are some two dozen colleges now participating, including Duke, Penn State, and Virginia Tech.

One more significant thing I learned: young Americans today are not hostile to the nation’s intelligence mission and those who perform it. In fact, I published a short piece today at SIGNAL Magazine on some startling recent poll results, which I reviewed while preparing for the UCF visit. From my piece:

NSA’s negative coverage [over the past two years] was driven by a long series of front-page stories covering Edward Snowden’s leaked documents, including their impact on technology giants such as Facebook, Apple, Google and others. Many pundits have opined that young American millennials are horrified by the revelations and angry at the NSA for “domestic spying.”

Yet the 2015 national survey by the respected Pew Research Center asked specifically about the NSA, and it reveals that, “Young people are more likely than older Americans to view the intelligence agency positively. About six in 10 (61 percent) of those under 30 view the NSA favorably, compared with 40 percent of those 65 and older.”

 – excerpt from “NSA’s Biggest Fans are Young Americans”

I didn’t know much about the IC-CAE program before my campus visit, but it strikes me overall as a valuable channel for the Intelligence Community to remain in close sync with the nation’s values and societal changes.  And, as I wrote in SIGNAL, I learned that “students at the nation’s second-largest university reflect the Pew poll findings, and on balance hold a positive view of the intelligence community and its efforts in the national interest. They admire our nation’s intelligence professionals, and they’re supportive of a robust foreign-intelligence collection program.”

I’ve posted the slides that accompanied my talk below, though of course much of the discussion isn’t reflected in the slides themselves. Smart audience, insightful questions.


I must mention that the other presenters at the conference were great as well. In sum it was an enjoyable and enlightening experience, and it was reassuring to observe that America’s next great generation will be eager and expert recipients of the reins of national security.


Debating Big Data for Intelligence

I’m always afraid of engaging in a “battle of wits” only half-armed.  So I usually choose my debate opponents judiciously.

Unfortunately, I recently had a contest thrust upon me with a superior foe: my friend Mark Lowenthal, Ph.D. from Harvard, an intelligence community graybeard (literally!) and former Assistant Director of Central Intelligence (ADCI) for Analysis and Production, Vice Chairman of the National Intelligence Council – and as if that weren’t enough, a past national Jeopardy! “Tournament of Champions” winner.

As we both sit on the AFCEA Intelligence Committee and have also collaborated on a few small projects, Mark and I have had occasion to explore one another’s biases and beliefs about the role of technology in the business of intelligence. We’ve had several voluble but collegial debates about that topic, in long-winded email threads and over grubby lunches. Now, the debate has spilled onto the pages of SIGNAL Magazine, which serves as something of a house journal for the defense and intelligence extended communities.

SIGNAL Editor Bob Ackerman suggested a “Point/Counterpoint” short debate on the topic: “Is Big Data the Way Ahead for Intelligence?” Our pieces are side-by-side in the new October issue, and are available here on the magazine’s site.

Mark did an excellent job of marshalling the skeptic’s view on Big Data, under the not-so-equivocal title “Another Overhyped Fad.” Below you will find an early draft of my own piece, an edited version of which is published under the title “A Longtime Tool of the Community”:

Visit the National Cryptologic Museum in Ft. Meade, Maryland, and you’ll see three large-machine displays, labeled HARVEST and TRACTOR, TELLMAN and RISSMAN, and the mighty Cray XMP-24. They’re credited with helping win the Cold War, from the 1950s through the end of the 1980s. In fact, they are pioneering big-data computers.

Here’s a secret: the Intelligence Community has necessarily been a pioneer in “big data” since inception – both our modern IC and the science of big data were conceived during the decade after the Second World War. The IC and big-data science have always been intertwined because of their shared goal: producing and refining information describing the world around us, for important and utilitarian purposes.

What do modern intelligence agencies run on? They are internal combustion engines burning pipelines of data, and the more fuel they burn the better their mileage. Analysts and decisionmakers are the drivers of these vast engines, but to keep them from hoofing it, we need big data.

Let’s stipulate that today’s big-data mantra is overhyped. Too many technology vendors are busily rebranding storage or analytics as “big data systems” under the gun from their marketing departments. That caricature is, rightly, derided by both IT cognoscenti and non-techie analysts.

I personally get the disdain for machines, as I had the archetypal humanities background and was once a leather-elbow-patched tweed-jacketed Kremlinologist, reading newspapers and HUMINT for my data. I stared into space a lot, pondering the Chernenko-Gorbachev transition. Yet as Silicon Valley’s information revolution transformed modern business, media, and social behavior across the globe, I learned to keep up – and so has the IC. 

Twitter may be new, but the IC is no Johnny-come-lately in big data on foreign targets.  US Government funding of computing research in the 1940s and ‘50s stretched from World War II’s radar/countermeasures battles to the elemental ELINT and SIGINT research at Stanford and MIT, leading to the U-2 and OXCART (ELINT/IMINT platforms) and the Sunnyvale roots of NRO.

In all this effort to analyze massive observational traces and electronic signatures, big data was the goal and the bounty.

War planning and peacetime collection were built on collection of ever-more-massive amounts of foreign data from technical platforms – telling the US what the Soviets could and couldn’t do, and therefore where we should and shouldn’t fly, or aim, or collect. And all along, the development of analog and then digital computers to answer those questions, from Vannevar Bush through George Bush, was fortified by massive government investment in big-data technology for military and intelligence applications.

In today’s parlance big data typically encompasses just three linked computerized tasks: storing collected foreign data (think Amazon’s cloud), finding and retrieving relevant foreign data (Bing or Google), and analyzing connections or patterns among the relevant foreign data (powerful web-analytic tools).
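
Those three linked tasks can be caricatured in a few lines of code. The toy Python sketch below is illustrative only (the function names are mine, and a real system would use a distributed object store and a search engine like Lucene rather than in-memory dictionaries), but it shows how storage, retrieval, and analysis build on one another:

```python
# Toy illustration of the three "big data" tasks: store, retrieve, analyze.
from collections import defaultdict

store = []                   # task 1: storage (think a cloud object store)
index = defaultdict(set)     # task 2: an inverted index for retrieval

def ingest(doc_id, text):
    """Store a document and index each of its terms."""
    store.append((doc_id, text))
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Retrieve the IDs of documents mentioning a term."""
    return sorted(index[term.lower()])

def cooccurrence(term_a, term_b):
    """Task 3: a trivial 'analysis' -- which documents connect two terms?"""
    return sorted(index[term_a.lower()] & index[term_b.lower()])

ingest(1, "radar countermeasures test over the Baltic")
ingest(2, "radar calibration signals near the Baltic coast")
ingest(3, "routine naval exercise in the Pacific")

print(search("radar"))                  # -> [1, 2]
print(cooccurrence("radar", "baltic"))  # -> [1, 2]
```

The analysis step is where value compounds: once data is stored and indexed, connections between pieces of data (here, a simple set intersection) come nearly for free.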

Those three Ft. Meade museum displays demonstrate how NSA and the IC pioneered those “modern” big data tasks. Storage is represented by TELLMAN/RISSMAN, which ran from the 1960s throughout the Cold War using innovation from Intel. Search and retrieval were the hallmark of HARVEST/TRACTOR, built by IBM and StorageTek in the late 1950s. Repetitive what-if analytic runs boomed in 1983, when Cray delivered a supercomputer to a customer site for the first time ever.

The benefit of IC early adoption of big data wasn’t only to cryptology – although decrypting enemy secrets would be impossible without it. More broadly, computational big-data horsepower was in use constantly during the Cold War and after, producing intelligence that guided US defense policy and treaty negotiations or verification. Individual analysts formulated requirements for tasked big-data collection with the same intent as when they tasked HUMINT collection: to fill gaps in our knowledge of hidden or emerging patterns of adversary activities.

That’s the sense-making pattern that leads from data to information, to intelligence and knowledge. Humans are good at it, one by one. Murray Feshbach, a little-known Census Bureau demographic researcher, made astonishing contributions to the IC’s understanding of the crumbling Soviet economy and its sociopolitical implications by studying reams of infant-mortality statistics, and noticing patterns of missing data. Humans can provide that insight, brilliantly, but at the speed of hand-eye coordination.

Machines make a passable rote attempt, but at blistering speed, and they don’t balk at repetitive, mind-numbing data volume. Amid the data, patterns emerge. Today’s Feshbachs want an Excel spreadsheet or Hadoop table at hand, so they’re not limited to the data they can reasonably carry in their mind’s eye.

To cite a recent joint research paper from Microsoft Research and MIT, “Big Data is notable not because of its size, but because of its relationality to other data.  Due to efforts to mine and aggregate data, Big Data is fundamentally networked.  Its value comes from the patterns that can be derived by making connections between pieces of data, about an individual, about individuals in relation to others, about groups of people, or simply about the structure of information itself.” That reads like a subset of core requirements for IC analysis, whether social or military, tactical or strategic.

The synergy of human and machine for knowledge work is much like modern agricultural advances – why would a farmer today want to trudge behind an ox-pulled plow? There’s no zero-sum choice to be made between technology and analysts, and the relationship between CIOs and managers of analysts needs to be nurtured, not cleaved apart.

What’s the return on big-data spending? Outside the IC, I challenge humanities researchers to go a day without a search engine. The IC’s record is just as clear: ISR, targeting, and warning are better because of big data; data-enabled machine translation of foreign sources opens the world; and correlation of anomalies amid large-scale financial data pinpoints otherwise unseen hands behind global events. Why, in retrospect, the Iraq WMD conclusion was a result of remarkably small-data manipulation.

Humans will never lose their edge in analyses requiring creativity, smart hunches, and understanding of unique individuals or groups. If that’s all we need to understand the 21st century, then put down your smartphone. But as long as humans learn by observation, and by counting or categorizing those observations, I say crank the machines for all their robotic worth.

Make sure to read both sides, and feel free to argue your own perspective in a comment on the SIGNAL site.

Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future, that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new technology prototype invoking powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against the streaming video of politicians or other talking heads and displays in real time a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics :)

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of the factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection. Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, the Post’s Executive Producer for Digital News, says the system “aims to fact-check speeches in as close to real time as possible,” whether in live remarks, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video indexing service (MAVIS) technology. MAVIS is a Windows Azure application which uses State of the Art of Deep Neural Net (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching or a fuzzy string searching algorithm. We are implementing a modified version Rabin-Karp using Levenshtein distance algorithm as our first implementation. This will be modified to recognize paraphrasing, negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller
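
The Post hasn’t published its matching code, so the following is only my own sketch of the technique the article names: approximate string matching using Levenshtein edit distance over a sliding window. The function names and the sample “fact” are hypothetical, and this omits the Rabin-Karp hashing and the paraphrase handling they plan to add:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(transcript: str, claim: str, tolerance: int = 3) -> bool:
    """Slide a claim-sized window across the (noisy) transcript and accept
    any window within `tolerance` edits of the claim."""
    t, c = transcript.lower(), claim.lower()
    n = len(c)
    return any(levenshtein(t[i:i + n], c) <= tolerance
               for i in range(max(1, len(t) - n + 1)))

# A hypothetical one-entry fact database; speech-to-text output is noisy,
# which is exactly why exact matching fails and fuzzy matching is needed.
facts = {"the deficit has doubled": False}
spoken = "And folks, the deficit has doubld since then."
for claim, verdict in facts.items():
    if fuzzy_match(spoken, claim):
        print(claim, "->", "True" if verdict else "False")
```

The design choice is the interesting part: because the MAVIS transcript will mangle words (“doubld”), matching on patterns within a small edit-distance budget is far more robust than looking for exact phrases.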

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and TechCrunch discusses some of the political ramifications of the prototype in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, and the congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

2012 Year in Review for Microsoft Research

The year draws to a close… and while the banality and divisiveness of politics and government have been on full display around the world during the past twelve months, the year has been rewarding for me personally whenever I could retreat into the world of research. Fortunately there’s a great deal of it going on among my colleagues.

2012 has been a great year for Microsoft Research, and I thought I’d link you to a quick set of year-in-review summaries of some of the exciting work that’s been performed and the advances made:

Microsoft Research 2012 Year in Review

The work ranges from our Silicon Valley lab work in “erasure code” to social-media research at the New England lab in Cambridge, MA; from “transcending the architecture of quantum computers” at our Station Q in Santa Barbara, to work on cloud data systems and analytics by the eXtreme Computing Group (XCG) in Redmond itself.

Across global boundaries we have seen “work towards a formal proof of the Feit-Thompson Theorem” at Microsoft Research Cambridge (UK), and improvements for Bing search in Arab countries made at our Advanced Technology Labs in Cairo, Egypt.

All in all, an impressive array of research advances, benefiting from an increasing amount of collaboration with academic and other researchers as well. The record is one more fitting tribute to our just-departing Chief Research and Strategy Officer Craig Mundie, who is turning over his reins, including MSR oversight, to Eric Rudder (see his bio here), while Craig focuses for the next two years on special work reporting to CEO Steve Ballmer. Eric’s a great guy and a savvy technologist, and has been a supporter of our Microsoft Institute’s work as well … I did say he’s savvy :)

There’s a lot of hard work already going on in projects that should pay off in 2013, and the New Year promises to be a great one for technologists and scientists everywhere – with the possible exception of any remaining Mayan-apocalypse/ancient-alien-astronaut-theorists. But even to them, and perhaps most poignantly to them, I say Happy New Year!

MSR gets wired, WIRED gets MSR

MS Research in natural-user-interaction technologies

WIRED Magazine’s online site ran a great long profile of Microsoft Research late yesterday, with interviews and project features: “How Microsoft Researchers Might Invent a Holodeck.”

I have written about or mentioned all of the individual projects and technologies on my blog before, but the writing at WIRED is so much better than my own – and the photographs so cool – that I thought I should post a link to the story.

The almighty ampersand linking R and D

According to Wikipedia, the lowly ampersand or “&” is a logogram representing the conjunction word “and” using “a ligature of the letters in et,” which is of course the Latin word for “and.”

In my line of work I most frequently encounter the ampersand in the common phrase “R&D” for research and development, although I notice that with texting and short-form social media the ampersand is making something of a comeback in frequency of use anyway.

