On April 16 I will be speaking at the Mobile Citizen Summit in Washington DC (registration still open), which brings together “practitioners across the government, nonprofit, advocacy, and political spaces—the kinds of people who develop the strategy and the tools to reach, engage, educate, and enable citizens across the country and around the world.”
But I’m going to be talking about “mobile” in a different way than most people use the term: where they focus on the handheld device, I will focus on the mobile citizen. As I have said before, I don’t believe our future involves experiencing “augmented reality” by always holding little 3-inch plastic screens up in front of our faces. Natural user interfaces and immersive computing offer much more for how we access computational resources – and how technology will help us interact with one another. Here’s an example, in a story from the past week.
Sometimes long tectonic shifts reveal themselves in snapshot “key-frame” moments. Last week on April Fool’s Day 2011, many observers of the tech industry – and lots of just plain web fans – took notice of Microsoft’s reemergence on the innovation front in a rapid two-step punch/counterpunch over something as simple, and complex, as personal use of immersive Augmented Reality (AR).
In its tradition of April Fool’s jokes, Google went to great effort with the “launch” of its Gmail Motion Beta, announcing that “now you can control Gmail with your body.” A splashy video featured lots of Google employees acting the part of gesture-recognition pioneers, composing and sending email using body movements and hand and facial gestures. It drew a small chuckle of amusement.
But it also drew the immediate notice, and bemusement, of millions of fans of Microsoft Kinect (launched before Christmas last fall, and now the fastest-selling consumer electronics device in history). They instantly thought, as I did: why is Google taking a slightly less-than-gracious swipe at Kinect? The Goog’s effort seemed almost resentful, mocking the real advances in gesture recognition and natural user interaction (NUI).
Immediately, Kinect users responded (the thing only costs $149, so it’s attractive to researchers and innovative hackers), and I’ve now seen several impressive Kinect hacks duplicating the Gmail Motion beta – that is, making real what Google thought was a fictional prank. For a good example, see Cnet’s coverage of the quick work by Evan Suma, a postdoc researcher at the University of Southern California, who created his working prototype within 30 minutes and took another 90 minutes to make a video of himself using it. That quick turnaround seemed to draw as much or more media coverage than Google’s original prank!
I believe that episode underlines the inherent attraction and enormous potential of NUI, which is playing more and more of a role in Microsoft’s research and future product development. I’ve written recently about that potential (see “Air Everything”), and see more evidence every day.
Check out this great new work by students at the MIT Media Lab, combining Kinect and videoconferencing in a wonderful new immersive way. The project site is at “Kinected Conference,” where they describe their use of the embedded audio and depth sensors in the Kinect device:
We explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location. Four features are implemented, which are “Talking to Focus”, “Freezing Former Frames”, “Privacy Zone” and “Spacial Augmenting Reality.” – MIT Media Lab Kinected Conference Project
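The MIT team’s code isn’t shown here, but the depth-segmentation idea behind a feature like their “Privacy Zone” can be sketched in a few lines. This is purely an illustrative mock-up, not the project’s implementation: the function name, the millimeter depth units (the style a Kinect-like sensor reports), and the zone thresholds are all my own assumptions.

```python
import numpy as np

def apply_privacy_zone(rgb, depth, near_mm=500, far_mm=1500):
    """Blank out every pixel whose depth reading falls outside the
    allowed zone, so only the person near the camera is transmitted.

    rgb   : (H, W, 3) uint8 color frame
    depth : (H, W) depth map in millimeters (illustrative units)
    """
    # Boolean mask of pixels inside the permitted depth band
    in_zone = (depth >= near_mm) & (depth <= far_mm)
    out = np.zeros_like(rgb)          # everything starts blacked out
    out[in_zone] = rgb[in_zone]       # keep only in-zone pixels
    return out

# Tiny synthetic example: a 2x2 frame where one pixel is "too far away"
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[800, 800], [800, 3000]])  # bottom-right pixel is 3 m off
masked = apply_privacy_zone(rgb, depth)
```

The point is that once each pixel carries a depth value alongside its color, privacy filtering becomes a simple per-pixel test rather than a hard computer-vision problem – which is exactly the kind of “semantically rich three-dimensional pixel” the MIT description is getting at.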
Check out the impressive video:
Note the thoughtful work by the MIT researchers into the interpersonal aspects of communication, incorporating their intuition and research findings on social interactivity. The implications are far broader than simply controlling your email inbox.
The best thing about the explosion in NUI-hack creativity is that we are now just weeks away from the Spring release of the Kinect SDK, a non-commercial Kinect for Windows software development kit from Microsoft Research. While we plan to release a commercial SDK with even more features later, this SDK will be a starter kit for creating rich natural user interfaces, offering access to deep Kinect system information: audio, system application programming interfaces, and direct programmatic control of the Kinect sensor.
I’m excited by the work that’s already begun to explore the social implications of NUI, and (in my arena) the opportunities to advance citizen-government interaction for disadvantaged communities through pervasive immersive interactivity.
Note to Google: This episode demonstrates what William Gibson famously said: “The future is already here — it’s just not evenly distributed.” Someday it will even come to Mountain View! 🙂