If you were to ask many Generation X members of the technology community what inspired them to become computer scientists, engineers, and entrepreneurs, many would point to the emergence of "Virtual Reality" as a new computing paradigm in the early '90s. William Gibson's Neuromancer, the novel that introduced the world to cyberspace, provided a vision of what cyberspace and virtual reality were supposed to be. It consisted of immersive interfaces composed of everyday, interactable objects, and for future computer scientists it promised exciting possibilities. In Gibson's vision, you could interact with data in the same ways you interact with your physical world. Interfaces became objects, and abstractions were grounded in immersive virtual worlds. It was an exciting book that sparked a generation of scientists to bring an immersive world to life. Fast forward to today's device-centric culture, and virtual environments promise the ability to interact with data without having to look through the 4×7-inch window of your cellphone.

It may have taken a few decades longer than expected, and we're still not at the level of Gibson's fictional world, but scientific progress is definitely being made. Researchers at Microsoft have recently introduced a working prototype of a system they call "SurroundWeb": an immersive, room-sized experience for viewing web content. The project is a revitalization of the IllumiRoom project that most folks had thought would never make it out of the lab. This new initiative seems to focus on how in-home users can put immersive projection to practical use.

The system makes use of advances in 3D depth mapping and augmented reality through projectors to transform a room into an environment covered in interactable web pages. By combining the ability to map projected displays onto open regions of your environment with gestural interfaces, the room itself becomes an application and visualization landscape, without the need for the user to interact with a device. Even more interesting, the system can automatically determine how to map information onto just about any surface that accepts projection. Unlike immersive environments of the past, such as the famous CAVE, it requires no specialized projection screens or dedicated room. To me, this means the system can make its way into common spaces and will be accepted by end-users much faster than the expensive, dedicated visualization displays of times past.
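
To make that concrete, here is a minimal sketch of the first step such a system might take: estimating per-pixel surface normals from a depth map and flagging pixels that face the projector squarely enough to host a projected page. This is an illustrative assumption on my part, not SurroundWeb's published algorithm; the camera intrinsics (fx, fy), the tilt threshold, and the synthetic scene are all hypothetical.

    import numpy as np

    def estimate_normals(depth, fx=525.0, fy=525.0):
        """Approximate per-pixel surface normals from a depth map.

        Uses finite differences of depth and hypothetical pinhole
        intrinsics (fx, fy); a common approximation, not an exact
        back-projection.
        """
        dz_dv, dz_du = np.gradient(depth)  # gradients along rows (v) and columns (u)
        nx = -dz_du * fx
        ny = -dz_dv * fy
        nz = np.ones_like(depth)
        n = np.stack([nx, ny, nz], axis=-1)
        return n / np.linalg.norm(n, axis=-1, keepdims=True)

    def projectable_mask(depth, max_tilt_deg=15.0):
        """Flag pixels whose surface faces the projector within a tilt tolerance."""
        normals = estimate_normals(depth)
        facing = normals[..., 2]  # cosine of angle between normal and optical axis
        return facing > np.cos(np.deg2rad(max_tilt_deg))

    # Toy scene: a flat wall 2 m away whose right side recedes steeply.
    depth = np.full((240, 320), 2.0)
    u = np.linspace(0.0, 1.0, 320)
    depth[:, 240:] += (u[240:] - u[240]) * 2.0  # steeply receding side surface
    mask = projectable_mask(depth)
    print(f"{mask.mean():.0%} of pixels flagged as projectable")

A real system would go further, segmenting the flagged pixels into contiguous rectangular regions, ranking them by size and visibility, and pre-warping the projected imagery to compensate for surface geometry, but the normal test above captures the core idea of discovering projectable surfaces automatically.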

Of course, this is research, and in my experience it will take another five to seven years before commercially viable systems come online. The work is important for my readers to note because of the underlying trend: the ability to present public, immersive interfaces without the need for a specific device.

I've spoken to some very large AV companies that have similar visions driving their research agendas. The idea that users will no longer be bound to the picture-frame edges of their displays is interesting to think about and has far-reaching implications, from how consumers will watch TV to how meetings will be conducted in corporate enterprises. I'm excited to think about these prospects. How about you?

About Christopher Jaynes

Jaynes received his doctoral degree at the University of Massachusetts, Amherst, where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media, dedicated to research related to video surveillance, human-computer interaction, and display technologies.
