Well, almost.  I arrived in Santa Clara last night to introduce our newest product at DEMO tomorrow.  There's an embargo, so I can't share any details about the product before midnight tonight, but I can share some history with you.  This is the thing that gets me into the office at 7 a.m. on weekends.  It's something I've been working on for years.

In 2007 I wrote a proposal to the National Science Foundation that described how intelligent software could be combined with the world's public displays to create a truly shared infrastructure, one that would change the way we work, play, and communicate with one another.  At the time, I analyzed several technology trends: the dropping price per pixel, the increasing deployment rate of public displays (over 9 trillion in 2007), and the exploding efficiency and coverage of wireless networks (here my estimates were only directionally correct, as I focused on 802.11 and missed the explosion of high-bandwidth cellular).

By looking at those trends, I argued that by 2010 a tipping point would be reached: transporting pixels via cables and video switches would no longer be necessary, our public spaces would be saturated with pixels, and, through software and accompanying protocols, the world's displays could be transformed into a shared, accessible infrastructure for visual communication.  The program managers in NSF's computer science and engineering directorate believed in this future enough to fund some of the initial work.

If you saw me speak (or just chatted with me over a beer) between 2008 and 2011, you probably heard me rant about how a shared display network could change everything.  It could transform our educational system from traditional lectures into shared, collaborative educational interactions.  It could change how we socialize: imagine a sports bar where your Facebook status or your fantasy football picks can be posted on some of the displays.  Retail stores like Patagonia could let customers post photos of their latest adventures.  It could even change how we elect officials: Facebook and Twitter have done this to some extent, but imagine campaign posters and comments in a world where LCD panels at bus stops support free public commentary.

I haven't just been talking about this vision.  Since 2007, with the help of funding from the National Science Foundation, and thanks to financial investors who were both patient and visionary, we have been building the software to make this vision real.

Mersive is now ready to launch.  Be sure to tune in for more details this week.  I’d love to hear your feedback.

About Christopher Jaynes

Jaynes received his doctoral degree at the University of Massachusetts, Amherst where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media and dedicated to research related to video surveillance, human-computer interaction, and display technologies.