Insider’s Guide to the New Holographic Computing

In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts that additively culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or who care about the government they pay for, PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the greatest benefit any technology can: empowering people by making the complex simple, and by extension delivering new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start for local, national, and transnational agencies.

Now some background…

Programmatic Context for HoloLens

An enduring aspect of working on new technologies is stealthiness. It isn’t always the right approach – sometimes open collaboration beyond company borders has superior value to a quiet insular team. I learned the distinction well when I was in government (Intellipedia and A-Space were among our results) and in the startup culture in Silicon Valley before that.

But stealth has an electric appeal, the spark of conspiracy. At Microsoft and some other companies, the terminology is “Tented” – you have no clue about the work if you’re outside the tent, enforced as rigorously as in a SCIF.

Last March my Microsoft Institute team was quietly invited to add our efforts to a startling tented project in Redmond – one which was already gaining steam on the strength of its revolutionary promise and technical wizardry, but which would require extraordinary stealth in development, for a variety of reasons. I won’t share anything proprietary, of course, but will say that our secrecy was Apple-esque, to use a Valley term of high praise.

That project is being announced to the world today, as HoloLens. I couldn’t be prouder of my (erstwhile) colleagues at Microsoft who are launching a revolutionary platform. The praise is already rolling in. WIRED’s story is “Our Exclusive Hands-On With Microsoft’s Unbelievable New Holographic Goggles,” while TechCrunch quickly assesses: “Augmented reality has had some false starts on mobile, but in this context, it seems more viable, and thus more credible than it ever has before.”

Next, let’s look at some background on the technical area which HoloLens now bestrides like a colossus, with Oculus Rift and Google Glass as the also-rans. Then below I’ll sketch some initial observations on the relevance for government uses and the world at large.

Technology Context: Ambient Computing

I’ve been writing about virtual reality and augmented reality (the VR/AR split) for a decade, first inside government and over the past seven years on this blog.  The term I prefer for the overall approach is “Ambient Computing” – combining advanced projection, immersion, machine vision, and environmental sensing.

Ambient computing devices are embedded all around us in the environment; they are addressable and usable via traditional human senses like sight, hearing, and touch/gestures; and they can understand people’s intent and even operate on their behalf.

Previous ShepherdsPi posts on Virtual Reality/Augmented Reality:

  • 2008: War is Virtual Hell, on emergent defense thinking about virtual reality
  • 2008: Stretching collaboration with Embodied Social Proxies, on robotics and VR
  • 2009: Immersed in Augmented Reality, with the concept of “Instrumenting the World” as an important foundation for what is now called the Internet of Things
  • 2010: The promise of mobile augmented reality
  • 2010: Air Everything
  • 2011: Kinecting Communities

By 2010 I could look ahead (“Playing with virtual data in spatial reality”) and see clearly where we are heading based on trends:

We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised by 2015. In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall-monitor while also wearing glasses or holding a cellphone in front of them in order to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of  “spatial AR,” and the promise of controlling a live 3D environment of realtime data.

That vision begins to become “virtually real” with today’s HoloLens announcement.

Competitive Context

I’ll leave it to analysts, and to the holiday market later this year and next, to judge where the competing technologies lie on the “hype curve” of reality and utility. But I can list the efforts I’m paying closest attention to, and why:

Samsung’s Gear VR and Project Beyond: The Gear VR headset hasn’t lit expectations very brightly among analysts or the tech media, but it does now have alongside it the recently announced “Project Beyond,” a 360-degree panopticon camera module which is planned to capture a gigapixel of surrounding 3D footage every second, and stream that footage back to someone wearing a Gear VR headset, “essentially transporting them into that world.” Unlike HoloLens, it’s not a full computer.

Google Glass: The granddaddy of widely available AR experiments. Withdrawn from the public last week, but not before inspiring a raft of venture-funded lookalikes which are now also-rans. Google undoubtedly learned a great deal by dipping its giant toe into the virtual realm so enthusiastically with its Explorers program, but most of my friends who participated developed a “ho-hum” attitude about the device, which now gathers dust on shelves across the world.

Magic Leap: Google’s withdrawal of Glass can be seen in the context of the revelation that the search/advertising giant has instead plowed a large amount of cash into this start-up, followed by several A-list Silicon Valley VC funds. Magic Leap has now raised an astonishing $542 million in Series B funding – yes, that’s half a billion – with no product or launch date in sight, but a long list of developer openings on its website. (But don’t worry, the company just hired a novelist as its Chief Futurist.)

Oculus VR and the Rift (or its follow-ons): Oculus Rift has to be considered the leading rival to Microsoft’s HoloLens, so much so that Facebook acquired its parent startup company for an eye-opening $2 billion, ten months ago. Mark Zuckerberg at the time indicated patience and a long view in his strategy, but industry watchers don’t expect a device release until late 2015 or 2016. And Rift, as described to date, isn’t a full computing experience but merely virtual-reality immersion. There’s also no see-through aspect to its headset (unlike the visible real-world context of HoloLens), which has led to widely reported nausea problems among Rift prototype users.

These all feel a bit laggard now, particularly because the companies involved (with the exception of Google) don’t have the experience of Microsoft in launching global computing platforms on which communities of developers can make magic.  Most importantly, none of these efforts are audacious enough to incorporate a full computing device (CPU, GPU, wirelessly connected) into a comfortably wearable device.

Bottom Line for Government…

Ambient computational resources are driving a new revolution, which the private sector is exploiting rapidly. That industrial and consumer revolution runs in useful parallel with a virtuous cycle: ubiquitous sensing (the Internet of Things) producing zettabytes of Big Data, which is manipulated and mined by pioneering machine-learning techniques for so-called “soft AI” (see IBM’s Irving Wladawsky-Berger in last week’s Wall Street Journal, “Soft Artificial Intelligence Is Suddenly Everywhere”).

We humans, we of soft tissue, need all the help we can get to preside over those new and accelerating forces of technological change. The real magic is when our tools give us such powerful command in a simple and fun way. That is the promise of Holographic Computing, and HoloLens.

There are inevitably challenges. There will be devious uses of Holographic Computing, of course. Already we see the deceptive capabilities of regular screen-based “virtual reality,” and one can only imagine the perils of these techniques in the wrong hands, in full 3D immersion; check out these examples from the Emmy-winning visual-effects (VFX) team behind HBO’s “Boardwalk Empire”:

We can’t allow government to waddle slowly behind, as real people live their lives increasingly affected by immersive technologies used for good or ill.

Governments exist to answer the needs of their citizens; government agencies and personnel should be using up-to-date tools capable of keeping up with what individual citizens are using, if only to avoid embarrassment and dinosauric irrelevance!

Holographic Computing offers government agencies real benefits:

  • Unique and insanely powerful mission applications; the company has been working on training, modeling & simulation, event forensics, gesture-driven immersive big-data visualization, and distance learning, and you can easily imagine uses in widely varied fields like remote logistics management, geospatial analytics, and telemedicine – anything that uses personal computing software.
  • A government workforce and workplace transformed by the collaboration capabilities already evident in early applications like HoloSkype;
  • Awareness of and contemporaneous familiarity with the technological changes affecting society, through consumer and entertainment channels.

I’ll end with the newly-released overall video on Microsoft’s Holographic Computing; note the NASA/Jet Propulsion Lab scenes studying the surface of Mars first-hand. Note the 3D modeling from HoloStudio and its infinite shelf of parts. Note the HoloSkype example of real-time step-by-step advice on technical repair, from someone remote yet as near as by your side.

Imagine what you could do with HoloLens….

Let me know your ideas.

Debating Big Data for Intelligence

I’m always afraid of engaging in a “battle of wits” only half-armed.  So I usually choose my debate opponents judiciously.

Unfortunately, I recently had a contest thrust upon me with a superior foe: my friend Mark Lowenthal, Ph.D. from Harvard, an intelligence community graybeard (literally!) and former Assistant Director of Central Intelligence (ADCI) for Analysis and Production, Vice Chairman of the National Intelligence Council – and as if that weren’t enough, a past national Jeopardy! “Tournament of Champions” winner.

As we both sit on the AFCEA Intelligence Committee and have also collaborated on a few small projects, Mark and I have had occasion to explore one another’s biases and beliefs about the role of technology in the business of intelligence. We’ve had several voluble but collegial debates about that topic, in long-winded email threads and over grubby lunches. Now, the debate has spilled onto the pages of SIGNAL Magazine, which serves as something of a house journal for the defense and intelligence extended communities.

SIGNAL Editor Bob Ackerman suggested a “Point/Counterpoint” short debate on the topic: “Is Big Data the Way Ahead for Intelligence?” Our pieces are side-by-side in the new October issue, and are available here on the magazine’s site.

Mark did an excellent job of marshalling the skeptic’s view on Big Data, under the not-so-equivocal title “Another Overhyped Fad.” Below you will find an early draft of my own piece, an edited version of which is published under the title “A Longtime Tool of the Community”:

Visit the National Cryptologic Museum in Ft. Meade, Maryland, and you’ll see three large-machine displays, labeled HARVEST and TRACTOR, TELLMAN and RISSMAN, and the mighty Cray XMP-24. They’re credited with helping win the Cold War, from the 1950s through the end of the 1980s. In fact, they are pioneering big-data computers.

Here’s a secret: the Intelligence Community has necessarily been a pioneer in “big data” since inception – both our modern IC and the science of big data were conceived during the decade after the Second World War. The IC and big-data science have always been intertwined because of their shared goal: producing and refining information describing the world around us, for important and utilitarian purposes.

What do modern intelligence agencies run on? They are internal combustion engines burning pipelines of data, and the more fuel they burn the better their mileage. Analysts and decisionmakers are the drivers of these vast engines, but to keep them from hoofing it, we need big data.

Let’s stipulate that today’s big-data mantra is overhyped. Too many technology vendors are busily rebranding storage or analytics as “big data systems” under the gun from their marketing departments. That caricature is, rightly, derided by both IT cognoscenti and non-techie analysts.

I personally get the disdain for machines, as I had the archetypal humanities background and was once a leather-elbow-patched tweed-jacketed Kremlinologist, reading newspapers and HUMINT for my data. I stared into space a lot, pondering the Chernenko-Gorbachev transition. Yet as Silicon Valley’s information revolution transformed modern business, media, and social behavior across the globe, I learned to keep up – and so has the IC. 

Twitter may be new, but the IC is no Johnny-come-lately in big data on foreign targets.  US Government funding of computing research in the 1940s and ‘50s stretched from World War II’s radar/countermeasures battles to the elemental ELINT and SIGINT research at Stanford and MIT, leading to the U-2 and OXCART (ELINT/IMINT platforms) and the Sunnyvale roots of NRO.

In all this effort to analyze massive observational traces and electronic signatures, big data was the goal and the bounty.

War planning and peacetime collection were built on collection of ever-more-massive amounts of foreign data from technical platforms – telling the US what the Soviets could and couldn’t do, and therefore where we should and shouldn’t fly, or aim, or collect. And all along, the development of analog and then digital computers to answer those questions, from Vannevar Bush through George Bush, was fortified by massive government investment in big-data technology for military and intelligence applications.

In today’s parlance big data typically encompasses just three linked computerized tasks: storing collected foreign data (think Amazon’s cloud), finding and retrieving relevant foreign data (Bing or Google), and analyzing connections or patterns among the relevant foreign data (powerful web-analytic tools).
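Those three linked tasks can be sketched in a few lines of code. This is a toy, in-memory illustration (not any agency’s or vendor’s actual system): a dictionary for storage, an inverted index for retrieval, and a simple co-occurrence count standing in for pattern analysis. All names and sample records are invented.

```python
# Toy sketch of the three linked big-data tasks: store, retrieve, analyze.
from collections import defaultdict
from itertools import combinations

class TinyDataStore:
    def __init__(self):
        self.docs = {}                 # task 1: storage (think cloud object store)
        self.index = defaultdict(set)  # task 2: retrieval (inverted index, a la search engines)

    def store(self, doc_id, text):
        self.docs[doc_id] = text
        for word in text.lower().split():
            self.index[word].add(doc_id)

    def search(self, word):
        return sorted(self.index[word.lower()])

    def cooccurrence(self):
        # task 3: analysis -- count which terms appear together within documents
        counts = defaultdict(int)
        for text in self.docs.values():
            for a, b in combinations(sorted(set(text.lower().split())), 2):
                counts[(a, b)] += 1
        return counts

store = TinyDataStore()
store.store("r1", "convoy sighted near border")
store.store("r2", "convoy returned to base")
print(store.search("convoy"))  # ['r1', 'r2']
```

Real systems replace the dictionary with distributed storage and the index with a search cluster, but the division of labor among the three tasks is the same.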

Those three Ft. Meade museum displays demonstrate how NSA and the IC pioneered those “modern” big-data tasks. Storage is represented by TELLMAN/RISSMAN, running from the 1960s throughout the Cold War using innovation from Intel. Search/retrieval was the hallmark of HARVEST/TRACTOR, built by IBM and StorageTek in the late 1950s. Repetitive what-if analytic runs boomed in 1983, when Cray delivered a supercomputer to a customer site for the first time ever.

The benefit of IC early adoption of big data wasn’t only to cryptology – although decrypting enemy secrets would be impossible without it. More broadly, computational big-data horsepower was in use constantly during the Cold War and after, producing intelligence that guided US defense policy and treaty negotiations or verification. Individual analysts formulated requirements for tasked big-data collection with the same intent as when they tasked HUMINT collection: to fill gaps in our knowledge of hidden or emerging patterns of adversary activities.

That’s the sense-making pattern that leads from data to information, to intelligence and knowledge. Humans are good at it, one by one. Murray Feshbach, a little-known Census Bureau demographic researcher, made astonishing contributions to the IC’s understanding of the crumbling Soviet economy and its sociopolitical implications by studying reams of infant-mortality statistics, and noticing patterns of missing data. Humans can provide that insight, brilliantly, but at the speed of hand-eye coordination.

Machines make a passable rote attempt, but at blistering speed, and they don’t balk at repetitive mindnumbing data volume. Amid the data, patterns emerge. Today’s Feshbachs want an Excel spreadsheet or Hadoop table at hand, so they’re not limited to the data they can reasonably carry in their mind’s eye.
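Feshbach’s trick – treating the gaps in reported data as the signal – is exactly the kind of rote scan a machine does well at scale. A hypothetical sketch, with invented region names and numbers:

```python
# Toy version of the Feshbach insight: flag data series whose share of
# missing values is suspiciously high -- the gaps themselves are the pattern.
def missing_rate(series):
    """Fraction of entries in a series that are missing (None)."""
    return sum(1 for v in series if v is None) / len(series)

def flag_suspicious(regions, threshold=0.3):
    """Return names of regions whose missing-data rate exceeds the threshold."""
    return sorted(name for name, series in regions.items()
                  if missing_rate(series) > threshold)

# Invented infant-mortality-style reports, one value per year:
reports = {
    "region_a": [21.0, 20.5, 20.1, 19.8, 19.5],  # complete reporting
    "region_b": [24.0, None, None, 25.5, None],  # conspicuously gappy
}
print(flag_suspicious(reports))  # ['region_b']
```

A human spots this in one table; the machine spots it across millions of tables, at blistering speed.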

To cite a recent joint research paper from Microsoft Research and MIT, “Big Data is notable not because of its size, but because of its relationality to other data.  Due to efforts to mine and aggregate data, Big Data is fundamentally networked.  Its value comes from the patterns that can be derived by making connections between pieces of data, about an individual, about individuals in relation to others, about groups of people, or simply about the structure of information itself.” That reads like a subset of core requirements for IC analysis, whether social or military, tactical or strategic.
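That notion of relationality can be made concrete with a small sketch: each pairwise link below says little on its own, but connecting them recovers the underlying groups (connected components of a graph). The names are, of course, invented.

```python
# Minimal sketch of "networked" data: value emerges from the connections.
from collections import defaultdict

def components(edges):
    """Group nodes into connected components via depth-first traversal."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(sorted(group))
    return sorted(groups)

links = [("alice", "bob"), ("bob", "carol"), ("dave", "erin")]
print(components(links))  # [['alice', 'bob', 'carol'], ['dave', 'erin']]
```

No single record mentions a three-person group, yet the structure falls out once the pieces are connected – a subset, as the paper says, of core requirements for analysis.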

The synergy of human and machine for knowledge work is much like modern agricultural advances – why would a farmer today want to trudge behind an ox-pulled plow? There’s no zero-sum choice to be made between technology and analysts, and the relationship between CIOs and managers of analysts needs to be nurtured, not cleaved apart.

What’s the return for big-data spending? Outside the IC, I challenge humanities researchers to go a day without a search engine. The IC record’s just as clear: ISR, targeting, and warning are better because of big data; data-enabled machine translation of foreign sources opens the world; correlation of anomalies amid large-scale financial data pinpoints otherwise unseen hands behind global events. Why, in retrospect, even the Iraq WMD conclusion was a result of remarkably small-data manipulation.

Humans will never lose their edge in analyses requiring creativity, smart hunches, and understanding of unique individuals or groups. If that’s all we need to understand the 21st century, then put down your smartphone. But as long as humans learn by observation, and by counting or categorizing those observations, I say crank the machines for all their robotic worth.

Make sure to read both sides, and feel free to argue your own perspective in a comment on the SIGNAL site.

Kinecting Communities

On April 16 I will be speaking at the Mobile Citizen Summit in Washington DC (registration still open), which brings together “practitioners across the  government, nonprofit, advocacy, and political spaces—the kinds of  people who develop the strategy and the tools to reach, engage, educate,  and enable citizens across the country and around the world.”

But I’m going to be talking about “mobile” in a different way than others use the term: they focus on a handheld device, while I will be focusing on the mobile citizen. As I have said before, I don’t believe our future involves experiencing “augmented reality” by always holding up little 3-inch plastic screens in front of our faces. Natural user interfaces and immersive computing offer much more for how we access computational resources – and how technology will help us interact with one another. Here’s an example, in a story from the past week.


Air Everything

Like many people, I was very impressed by a video over the weekend of the Word Lens real-time translation app for iPhone. It struck with a viral bang, and within a few days racked up over 2 million YouTube views. What particularly made me smile was digging backwards through the Twitter stream of a key Word Lens developer whom I follow, John DeWeese, and finding this pearl of a tweet (right) from several months ago, as he was banging out the app in my old stomping grounds of the San Francisco Bay Area. That’s a hacker mentality for you :)

But one thought I had in watching the video was, why do I need to be holding the little device in front of me, to get the benefit of its computational resources and display? I’ve seen the studies and predictions that “everything’s going mobile,” but I believe that’s taking too literally the device itself, the form-factor of a little handheld box of magic.


Mix, Rip, Burn Your Research

You’ve done research; you’ve collected and sifted through mounds of links, papers, articles, notes and raw data. Shouldn’t there be a way to manage all that material that’s as easy and intuitive as, say, iTunes or Zune – helping you manage and share your snippets and research the way you share and enjoy your music?


Your choice, Dataviz as event or book

A friend wrote asking if I could make it to an event happening this week near DC. I can’t make it, but fortunately he also mentioned as consolation that he has a cool new book on the cusp of release – and I’ve now ordered my copy.

The Friend: legendary visualization and HCI guru Ben Shneiderman (Wikipedia entry). Ben is a computer-science professor at the University of Maryland and the founder of its well-known Human-Computer Interaction Laboratory (HCIL), as well as an ACM Fellow and AAAS Fellow. He has done government a million favors over the years, consulting for agencies, including his recent work on the site, helping that platform organize, host, and visualize data – from hundreds of thousands of sources – for millions of visitors. I first got to know Ben through his support for better intelligence analysis – he helped invent a longtime intelligence analytics tool, Spotfire (see his article “Dynamic queries, starfield displays, and the path to Spotfire”). Ben’s also well-known for his award-winning 2002 book Leonardo’s Laptop: Human Needs and the New Computing Technologies, which I enjoyed and still think about when brainstorming new techie toys.


Free Tools for the New Scientific Revolution

Blogs are great for supplementing real-life events, by giving space and time for specific examples and links which can’t be referenced at the time. I was invited to give a talk last week at the first-ever NASA Information Technology Summit in Washington DC, and the topic I chose was “Government and the Revolution in Scientific Computing.” That’s an area that Microsoft Research has been focusing on quite a bit lately, so below I’ll give some examples I didn’t use at my talk.

One groundrule was that invited private-sector speakers were not allowed to give anything resembling a “sales pitch” of their company’s wares. Fair enough – I’m no salesman.  The person who immediately preceded me, keynoter Vint Cerf, slightly bent the rules and talked a bit about his employer Google’s products, but gee whiz, that’s the prerogative of someone who is in large part responsible for the Internet we all use and love today.

I described in my talk the radical new class of super-powerful technologies enabling large-data research and computing on platforms of real-time and archival government data. That revolution is happening now, and I believe government could and should be playing a different and less passive role. I advocated for increased attention to the ongoing predicament of U.S. research and development funding.

Alex Howard at O’Reilly Radar covered the NASA Summit and today published a nice review of both Vint’s talk and mine.

