Insider’s Guide to the New Holographic Computing

In my seven years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts that culminated, step by step, in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or those who care about the government they pay for: PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best kind of benefit any technology can: empowering people by making the complex simple – and, by extension, a way to deliver new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world, yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start – for local, national, and transnational agencies.

Now some background…


Virtual recipe stirs in Apple iPad, Microsoft Kinect

Who says Apple and Microsoft can’t work together? They certainly can, at least when it involves the ingenuity of their users, the more inventive of whom combine technologies from both companies (and others).

Here’s a neat example, “a just-for-fun experiment from the guys at Laan Labs,” where they whip up an Augmented Reality recipe: take one iPad, one Kinect, and stir:

Some technical detail from the Brothers Laan, the engineers who did the work:

“We used the String Augmented Reality SDK to display real-time 3D video+audio recorded from the Kinect. Libfreenect from the http://openkinect.org/ project was used for recording the data coming from the Kinect. A textured mesh was created from the calibrated depth+RGB data for each frame and played back in real-time. A simple depth cutoff allowed us to isolate the person in the video from the walls and other objects. Using the String SDK, we projected it back onto a printed image marker in the real world.” – source, Laan Labs blog.
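The depth-cutoff trick the Laan brothers mention is easy to sketch. Below is a minimal, hypothetical Python example – not Laan Labs’ actual code, which used libfreenect and the String SDK – assuming a depth frame in millimeters and a color frame already registered to it; every pixel farther than the cutoff is zeroed out, leaving only the person in the foreground.

```python
import numpy as np

def isolate_foreground(depth_mm, rgb, cutoff_mm=1500):
    """Keep only pixels closer than cutoff_mm; zero out the rest.

    depth_mm: (H, W) uint16 depth frame in millimeters (0 = no reading)
    rgb:      (H, W, 3) uint8 color frame registered to the depth frame
    """
    # Valid readings closer than the cutoff count as foreground
    mask = (depth_mm > 0) & (depth_mm < cutoff_mm)
    fg = np.zeros_like(rgb)
    fg[mask] = rgb[mask]
    return fg, mask

# Toy frame: a 2x2 "person" patch at 1 m against a wall at 3 m
depth = np.full((4, 4), 3000, dtype=np.uint16)
depth[1:3, 1:3] = 1000
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)

fg, mask = isolate_foreground(depth, rgb)
print(mask.sum())  # 4 foreground pixels survive the cutoff
```

In the real pipeline each surviving depth pixel would then become a vertex in the textured mesh; the cutoff simply keeps the wall from ending up in that mesh.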

As always, check out http://www.kinecthacks.com/ for the latest and greatest Kinect hacks – or more accurately now, the latest cool uses of the openly released free Kinect SDK, available here.

There are several quiet projects underway around the DC Beltway to make use of the SDK, testing non-commercial but government-relevant deployments – more detail and examples at the appropriate time. We will eventually release a commercial SDK with even more functionality and higher-level programming controls, which will directly benefit government early adopters.

In the meantime, I may report on some of the new advances being made by our research group on Computational User Experiences, who “apply expertise in machine learning, visualization, mobile computing, sensors and devices, and quantitative and qualitative evaluation techniques to improve the state of the art in physiological computing, healthcare, home technologies, computer-assisted creativity, and entertainment.” That’s a rich agenda, and the group is in the very forefront of defining how Natural User Interaction (NUI) will enhance our personal and professional lives….


Kinecting Communities

On April 16 I will be speaking at the Mobile Citizen Summit in Washington DC (registration still open), which brings together “practitioners across the government, nonprofit, advocacy, and political spaces—the kinds of people who develop the strategy and the tools to reach, engage, educate, and enable citizens across the country and around the world.”

But I’m going to be talking about “mobile” in a different way than others use the term: they focus on the handheld device, while I will be focusing on the mobile citizen. As I have said before, I don’t believe our future involves experiencing “augmented reality” by always holding up little 3-inch plastic screens in front of our faces. Natural user interfaces and immersive computing offer much more for how we access computational resources – and for how technology will help us interact with one another. Here’s an example, in a story from the past week.


Air Everything

Like many people, I was very impressed by a video over the weekend of the Word Lens real-time translation app for iPhone. It struck with a viral bang, and within a few days racked up over 2 million YouTube views. What particularly made me smile was digging back through the Twitter stream of a key Word Lens developer whom I follow, John DeWeese, and finding this pearl of a tweet (right) from several months ago, as he was banging out the app in my old stomping grounds of the San Francisco Bay Area. That’s a hacker mentality for you 🙂

But one thought I had in watching the video was: why do I need to be holding the little device in front of me to get the benefit of its computational resources and display? I’ve seen the studies and predictions that “everything’s going mobile,” but I believe that takes the device itself too literally – the form-factor of a little handheld box of magic.


Playing with virtual data in spatial reality

For the past few months, when I’ve had visitors to Microsoft Research on the Redmond campus, one of the things I’ve enjoyed demonstrating is the technology behind the new Kinect system for Xbox 360 – the controller-free gaming and immersive entertainment system that Microsoft is releasing for the holiday market in a month or so. In particular, I’ve enjoyed having Andy Wilson of MSR talk with visitors about some of the future implications in non-gaming scenarios, including general information work, and about how immersive augmented reality (AR) could transform our capabilities for working with information and virtual objects – and how we all share and use knowledge among ourselves.

We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised by 2015.

In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall-monitor while also wearing glasses or holding a cellphone in front of them in order to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of “spatial AR,” and the promise of controlling a live 3D environment of real-time data.

Slate of the Union Day

Today is “Slate of the Union” day, when the two most charismatic individuals in recent American history go on stage and attempt to reclaim their mantles as innovators. I’ll leave aside the fellow with lower poll numbers for now (President Obama). More eyes in the tech world will be watching as Steve Jobs makes his newest product announcement: the Apple tablet/Tabloid/iSlate thing – the iPad (it’s official).

Back in the late 1980s I worked for the legendary “Mayor of Silicon Valley” Tom McEnery (he was actually the mayor of San Jose), and we did many joint projects with Apple, particularly with CEO John Sculley, a great guy.


The promise of mobile augmented reality

My intention with this blog is always to write medium-length “think-pieces” about technology, government, or preferably both. I’m working on several (the Jefferson Gov 2.0 piece, the Evil Twin 2.0 piece, and one on “whither the multilingual web”), but they do truly require thought and some free time, so they percolate a bit.

In the meantime, readers like the latest cool demo videos, so for Friday fun here’s another one (watch below or on YouTube), which was featured on TechCrunch last night (“Bing comes to the iPhone via Robotvision”) – an augmented reality app for the iPhone that uses Bing Maps and Bing’s real-time data (website here). The company describes itself this way:

