VRCodes

Viral Spaces, MIT Media Lab

Envision a world where inconspicuous and unobtrusive display surfaces act as general digital interfaces that transmit words and pictures as well as machine-readable data. They also encode relative orientation and position. Any display can be a transmitter and any phone a receiver. Further, data can be rendered invisibly on the screen.

VRCodes present the design, implementation, and evaluation of a novel visible-light communications architecture based on undetectable codes embedded in a picture and easily resolved by an inexpensive camera. The software-defined interface creates an interactive system in which any aspect of the signal processing can be dynamically modified to fit changing hardware peripherals as well as the demands of the desired human interaction.
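The text does not specify how the embedded codes are made undetectable, so the following is only a minimal sketch of one plausible mechanism: complementary-frame temporal modulation, where each bit perturbs two successive frames in opposite directions so that a human viewer (averaging over time) sees the original picture while a camera exposing a single frame recovers the offsets. The function names, the per-region pixel model, and the `delta` amplitude are all illustrative assumptions, not the VRCodes implementation.

```python
# Sketch only: complementary-frame embedding (assumed mechanism, not the
# actual VRCodes signal design). Each list element stands for the mean
# intensity of one image region carrying one bit.

def embed(base, bits, delta=8.0):
    """Split the base image into two frames whose average equals `base`,
    offsetting each region up or down according to its bit."""
    frame_a = [p + (delta if b else -delta) for p, b in zip(base, bits)]
    frame_b = [p - (delta if b else -delta) for p, b in zip(base, bits)]
    return frame_a, frame_b

def perceived(frame_a, frame_b):
    """A viewer above the flicker-fusion rate sees the temporal average."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def decode(frame, base):
    """A camera exposing a single frame sees each region's offset and
    recovers the bit from its sign."""
    return [1 if f > p else 0 for f, p in zip(frame, base)]

base = [120.0, 64.0, 200.0, 30.0]
bits = [1, 0, 0, 1]
frame_a, frame_b = embed(base, bits)
assert perceived(frame_a, frame_b) == base  # invisible to the eye
assert decode(frame_a, base) == bits        # readable by the camera
```

In a software-defined pipeline of the kind described above, parameters such as the modulation depth and region layout could be renegotiated at runtime to match the display and camera at hand.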

This design of a visual environment rich in information for both people and their devices overcomes many of the limitations imposed by radio-frequency (RF) interfaces: it is scalable, directional, and potentially high-capacity. We demonstrate it through NewsFlash, a multi-screen set of images in which each user's phone acts as an informational magnifying glass, reading codes arranged around the images.

VRCodes are currently being developed by Pixels.IO as a spinoff of the Viral Spaces group. See the MIT Media Lab PLDB entry or contact grace@pixels.io.

VRCodes was initiated by Grace Woo in the MIT Media Lab as part of her PhD thesis. Special thanks to Andy Lippman, Ramesh Raskar, Gerald Sussman, Vincent Chan, Szymon Jakubczak, and Eyal Toledano.