If you work in the AV space, chances are you are either working with video over IP already or at least need to be educated about video streaming and transport. Many of you will have heard of Quality of Service (QoS) metrics as they relate to computer networks – the ability of a network to provide different priorities to different applications. If an application has real-time, fixed bit rate requirements and is delay-sensitive, QoS can guarantee a certain level of service for that application. It’s what allows voice over IP and video streaming to operate on the Internet alongside email. QoS speaks to the technology and architectural needs at the networking layer, but it doesn’t address how those requirements change based on the needs of the end user. It’s one thing to say, “I need streaming video at 24fps with low jitter.” It’s another to say, “I only need to hit those requirements when it’s important to the user.” This is the domain of a new and exciting area of research termed Quality of Experience (QoE). QoE refers to the ability to deliver a certain user experience over a commodity packet-switched network, and it is judged by user-centric metrics such as average viewing time or number of visits to a site. Folks developing products in the AV industry should pay attention to this area.
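To make the QoS half of that contrast concrete, here is a minimal sketch, assuming a Linux host and a DiffServ-aware network, of how an application can mark its real-time traffic for priority handling. The DSCP value, address, and port are illustrative assumptions, not details from this article:

```python
import socket

# Minimal network-layer QoS sketch: mark a UDP socket's packets with a
# DSCP class so routers that honor DiffServ can prioritize them.
# (Values below are illustrative, not from the article.)

DSCP_EF = 46           # "Expedited Forwarding" -- the low-delay class
                       # commonly used for real-time voice and video
TOS = DSCP_EF << 2     # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Packets sent on this socket now carry the EF marking; whether they are
# actually prioritized depends on the network honoring DiffServ.
sock.sendto(b"rtp-payload", ("192.0.2.10", 5004))
```

Note that this is exactly the layer where QoS stops: the marking says nothing about whether the user is actually watching the stream, which is the gap QoE research aims to fill.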

Given my professional interest in delivering an elegant and efficient wireless collaboration experience for people over their existing network, I’ve been reading quite a bit of the literature in this area. Although the research area is relatively new, it builds on a rich body of work dedicated to improving the end-user experience for Internet video streaming and, ultimately, to supporting a large part of our economy that now depends on revenue models tied to online video streaming. For a quick background primer, take a look at some of the work at the all-important International Telecommunication Union (the United Nations agency charged with developing global communications standards). ITU-T Recommendation P.800 covers methods for the subjective determination of transmission quality, and ITU-T Recommendation P.910 covers subjective video quality assessment methods for multimedia applications.

QoE fundamentally asks: what underlying mechanisms does the network need to provide in order for you to enjoy watching game seven of the World Series on your laptop? Traditional QoS asks: what does the network need to provide to achieve a particular signal-to-noise ratio and average throughput? It’s an area of research that is producing exciting results, ranging from modifying server-side encode parameters in real time to ensure higher end-user engagement (no clicking away), to predictive models of bit rate adjustment that yield a 20 percent improvement in how users rate their video viewing experience. As an example, take a look at some of the work from a consortium of researchers at Carnegie Mellon, Stony Brook, the University of Wisconsin-Madison, and UC Berkeley. Developments in this area can drive video encoding and network resource allocation in real time based on how and when the video is being used by the application. In our case, we are streaming potentially dozens of videos simultaneously onto a shared display for collaboration. As users “pull” a video into focus, resolution becomes more important; for videos that drift to the background, lower bit rate, reduced spatial resolution, and even reduced color depth are acceptable. By feeding information about the end user all the way back through both the networking QoE layer and, ultimately, the video encoding/streaming technology, a fundamentally better user experience results.
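As a concrete illustration of that last point, here is a hypothetical sketch of how focus information from the shared display could select per-stream encode parameters. The names, resolutions, and bit rates are my own assumptions for illustration, not Mersive’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: choose per-stream encode parameters from how the
# user is currently attending to each video on the shared display.
# (All names and numbers here are illustrative assumptions.)

@dataclass
class EncodeParams:
    width: int
    height: int
    bitrate_kbps: int
    chroma_subsampling: str  # coarser chroma stands in for reduced color fidelity

def params_for_focus(in_focus: bool) -> EncodeParams:
    if in_focus:
        # The stream the user "pulled" forward gets full resolution and rate.
        return EncodeParams(1920, 1080, 6000, "4:2:2")
    # Background streams tolerate lower bit rate, spatial resolution,
    # and color fidelity without hurting the perceived experience.
    return EncodeParams(640, 360, 500, "4:2:0")

# Re-evaluate parameters as focus changes and hand them to the encoder.
streams = {"camera-1": True, "slides": False, "camera-2": False}
for name, focused in streams.items():
    p = params_for_focus(focused)
    print(f"{name}: {p.width}x{p.height} @ {p.bitrate_kbps} kbps, {p.chroma_subsampling}")
```

The design point is that the decision input (user focus) lives at the application layer, while the decision output (encode parameters) lives at the media layer; QoE is the feedback path connecting the two.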

About Christopher Jaynes

Jaynes received his doctoral degree at the University of Massachusetts, Amherst, where he worked on camera calibration and aerial image interpretation technologies now in use by the federal government. Jaynes received his BS degree with honors from the School of Computer Science at the University of Utah. In 2004, he founded Mersive and today serves as the company's Chief Technology Officer. Prior to Mersive, Jaynes founded the Metaverse Lab at the University of Kentucky, recognized as one of the leading laboratories for computer vision and interactive media, dedicated to research related to video surveillance, human-computer interaction, and display technologies.
