For the past few months, when I’ve had visitors to Microsoft Research on the Redmond campus, one of the things I’ve enjoyed demonstrating is the technology behind Kinect, the new controller-free gaming and immersive entertainment system for Xbox 360 that Microsoft is releasing for the holiday market in a month or so. In particular, I’ve enjoyed having Andy Wilson of MSR talk with visitors about some of the future implications in non-gaming scenarios, including general information work, and about how immersive augmented reality (AR) could transform the way we work with information and virtual objects, and the way we all share and use knowledge among ourselves.
We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised by 2015.
In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall monitor while wearing glasses or holding a cellphone in front of them just to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of “spatial AR,” and the promise of controlling a live 3D environment of realtime data.

From the paper cited at left: a unified 3D mesh from three depth cameras. Two people in the space are tracked separately, and can interact with and control data on the tabletop, the wall, or in between.
This week Andy is giving that same kind of demo at a conference in New York City, and has just released a new video showing some of the same stuff we’ve been playing with in the lab, and describing the implications. His research presentation is titled Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces.
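The core trick behind that unified 3D mesh is conceptually simple: each depth camera’s image is back-projected into 3D points, and each camera’s points are transformed by a calibrated camera-to-world pose into one shared coordinate frame. Here is a minimal sketch of that idea in Python/NumPy; the function names, the pinhole-camera intrinsics, and the offline-calibrated 4×4 extrinsic matrices are my own illustrative assumptions, not code from the LightSpace project.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points
    using an assumed pinhole model with focal lengths fx, fy and
    principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def fuse_cameras(depth_maps, intrinsics, extrinsics):
    """Merge several cameras' points into one world-space point cloud.

    extrinsics[i] is a 4x4 camera-to-world transform, assumed to come
    from an offline calibration step (hypothetical in this sketch)."""
    clouds = []
    for depth, K, T in zip(depth_maps, intrinsics, extrinsics):
        pts = depth_to_points(depth, *K)
        # Homogeneous coordinates let one matrix apply rotation + translation.
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        clouds.append((homo @ T.T)[:, :3])
    return np.vstack(clouds)
```

A real system would then mesh or voxelize the combined cloud and track people within it, but the shared world frame above is what lets interactions span the tabletop, the wall, and the space in between.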
The video below shows Andy and colleague Hrvoje Benko as they describe and play with their “work,” a project called LightSpace which ace blogger Todd Bishop says takes us “one step closer to the Star Trek Holodeck.”
This video version, by the way, appears to end at about the five-minute mark, but in reality there’s a bonus “technical appendix” at the 6:15 mark, in which Andy shows some behind-the-scenes detail of how the system works. Think of it as a hidden bonus track 🙂
The conference they’ve been presenting at, by the way, is UIST 2010, the Association for Computing Machinery’s (ACM’s) Symposium on User Interface Software and Technology, with presentations and demos on topics ranging from traditional graphical and web user interfaces, to VR and AR software, to natural user interfaces (NUIs). Twitter has been full of #UIST2010 tweets all week, so check that for other activity there. Other collaborative Microsoft Research presentations this year include the following:
- A Framework for Robust and Flexible Handling of Inputs with Uncertainty: Julia Schwarz, Carnegie Mellon University; Scott Hudson, Carnegie Mellon University; and Andy Wilson, Microsoft Research Redmond.
- Content-Aware Dynamic Timeline for Video Browsing: Suporn Pongnumkul, University of Washington; Jue Wang, Adobe; Gonzalo Ramos, Microsoft; and Michael Cohen, Microsoft Research Redmond.
- Gestalt: Integrated Support for Implementation and Analysis in Machine Learning Processes: Kayur Patel, University of Washington; Naomi Bancroft, University of Washington; Steven Drucker, Microsoft Research Redmond; James Fogarty, University of Washington; Andrew Ko, University of Washington; and James Landay, University of Washington.
- Jogging over a Distance between Europe and Australia: Florian “Floyd” Mueller, University of Melbourne and Distance Lab; Frank Vetere, University of Melbourne; Martin R. Gibbs, University of Melbourne; Darren Edge, Microsoft Research Asia; Stefan Agamanolis, Distance Lab; and Jennifer G. Sheridan, London Knowledge Lab.
- Pen + Touch = New Tools: Ken Hinckley, Microsoft Research Redmond; Koji Yatani, Microsoft Research and the University of Toronto; Michel Pahud, Microsoft Research Redmond; Nicole Coddington, Microsoft; Jenny Rodenhouse, Microsoft; Andy Wilson, Microsoft Research Redmond; Hrvoje Benko, Microsoft Research Redmond; and Bill Buxton, Microsoft Research.
Filed under: innovation, Microsoft, R&D, Society, Technology | Tagged: 3D, Andy Wilson, AR, augmented reality, cellphone, CHI, computers, entertainment, gaming, Hrvoje Benko, Kinect, machine-learning, Microsoft, Microsoft Research, mobile, MSR, R&D, research, Star Trek, tech, Technology, UIST, UIST2010, video, virtual reality, VR, Xbox, Xbox 360 |
[…] This post was mentioned on Twitter by Lewis Shepherd, 3D News Aggregator. 3D News Aggregator said: Playing with virtual data in spatial reality http://bit.ly/dujGKD […]
[…] 2010 I could look ahead (“Playing with virtual data in spatial reality“) and see clearly where we are heading based on […]