Since the first days of computing, researchers have been interested in the boundary between humans and the computers they were designing: the human-machine interface. I'm lucky enough to have an original copy of one of the first computer science conference proceedings, "Automata." It's an inspiring read. Even back then, some of the fathers of modern computing were asking important questions about how machines were to interact with humans in a natural way. In the introduction to the proceedings, John Von Neumann (a scientist largely regarded as one of the founders of computer science theory) asks: "The development of large scale computers has led to a clearer understanding of the theory and design of information processing devices. Is it impossible then, that man should soon develop machines with the ability to interact with those devices in ways that are more natural to him?"

Obviously, given that the year of the conference was 1956, Von Neumann was peering into a future that did not yet exist. Fast forward more than three decades and a rapid succession of better human-computer interaction technologies emerged: from the mouse and keyboard, to light pens and touch panels, to today's multi-touch and gestural interfaces, and on to the latest class of many-to-many interfaces that support human-human and human-computer collaboration (see my previous posts about Solstice).

Over time, our interaction with machines has become more intuitive (I use a simple word processor rather than typing a page of Fortran 77 commands to get a computer to print out a table of data), more parallel (I can define many operations I want a computer to carry out with a multi-touch gesture versus a sequence of mouse clicks), and more friendly to the innate abilities of our physical bodies (gestural interfaces versus punch cards). It's not surprising, then, that a direct interface between your computer and your skin is on the horizon.

Work at the University of Tokyo is exploring how a computational substrate, complete with sensors and transistors, can be woven into your skin. Takao Someya reported in July's edition of Nature on a polymer sheet, only one micrometer thick (thinner than plastic wrap), that embeds organic transistors and tactile sensors. The material can bend, stretch, and crumple, and it borrows heavily from the display business: TFT matrix manufacturing and row/column pixel signaling, for example, are being used to address the millions of sensors in the mesh. Unlike the traditional photolithography used in the flat-panel display industry, skin interfaces cover too large an area (your skin spans roughly two square meters, much larger than a mass-manufactured flat panel), so the team is looking at inkjet deposition methods to build up the electronic substrates.
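To make the row/column addressing idea concrete, here is a minimal sketch (my own toy Python, not anything from Someya's paper): an N x M sensor grid is read with only N + M signal lines by selecting one row at a time and sampling every column, the same trick a TFT display matrix uses to drive millions of pixels. The grid size and the `read_cell` function are invented stand-ins for real hardware.

```python
import random

ROWS, COLS = 4, 6  # a tiny grid; a real e-skin mesh would have vastly more cells


def read_cell(row: int, col: int) -> float:
    """Stand-in for an ADC reading of the tactile sensor at (row, col).

    Here we just return a random pressure value in arbitrary units.
    """
    return random.random()


def scan_matrix() -> list[list[float]]:
    """Select one row at a time and sample every column on that row."""
    frame = []
    for r in range(ROWS):  # in hardware: energize row line r
        row_samples = [read_cell(r, c) for c in range(COLS)]
        frame.append(row_samples)
    return frame


if __name__ == "__main__":
    for row in scan_matrix():
        print(" ".join(f"{v:.2f}" for v in row))
```

The payoff of this scheme is wiring: a million-cell mesh needs roughly two thousand shared lines instead of a million dedicated ones, which is what makes dense sensor skins manufacturable at all.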

Currently, most research is centered on biomedical applications of "electronic skin." Examples include recording electrical activity from muscles and measuring heart-rate activity over time. Researchers at Stanford are even thinking about how to use e-skin to enhance your sense of touch: millions of embedded heat and pressure sensors could measure environmental conditions and then signal them to the wearer, with enough sensitivity to register a gentle breeze, all without exposing your real skin to dangerous chemicals.
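As a rough illustration of that sense-then-signal loop (again my own sketch; the thresholds and names are invented, not values from any of the cited research), each cell's raw readings could be mapped to a small set of user-facing events, from a light breeze up to a heat warning:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CellReading:
    pressure_kpa: float     # pressure sensed at one cell
    temperature_c: float    # temperature sensed at the same cell


BREEZE_PRESSURE_KPA = 0.01  # assumed sensitivity needed to register a gentle breeze
HOT_SURFACE_C = 45.0        # assumed safety limit before alerting the wearer


def classify(reading: CellReading) -> Optional[str]:
    """Map one cell's raw values to a user-facing signal, or None if nothing notable."""
    if reading.temperature_c >= HOT_SURFACE_C:
        return "heat warning"
    if reading.pressure_kpa >= BREEZE_PRESSURE_KPA:
        return "light touch / airflow"
    return None


if __name__ == "__main__":
    samples = [CellReading(0.002, 22.0), CellReading(0.03, 22.5), CellReading(0.005, 60.0)]
    for i, s in enumerate(samples):
        print(f"cell {i}: {classify(s) or 'no event'}")
```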

The vision that drives this research is quite compelling, and it promises a new era in human-machine interactions as well as human-environmental interfaces that go way beyond things like gesture. Someya describes his vision in a recent IEEE article:

“Human skin is so thin, yet it serves as a boundary between us and the external world. My dream is to make responsive electronic coverings that bridge that divide…I imagine machines and people clothed in sensitive e-skin, allowing for a two-way exchange of information.”

It’s great to see that the vision of a perfect and seamless human-machine interface originally defined by Von Neumann and others lives on.

