VISITORS at the Ricco/Maresca Gallery in New York can float on clouds, swim in a pond or plunge through line upon line of computer code – all without leaving the gallery.

The exhibit, called Osmose, is the brainchild of Charlotte Davies, an artist working for a 3D graphics company called SoftImage, based in Montreal, Canada. Davies hopes to challenge the popular image of virtual reality as a hard-edged medium that is only suitable for playing war games in outer space. Osmose instead uses a super-fast Silicon Graphics computer to conjure up real-time, soft-edged, transparent images of trees, clouds, water and plants.

Visitors can either "immerse" themselves in Osmose using a head-mounted display, or they can use polarising spectacles to look at a 3D projection of the images. For the first time, visitors wearing a headset can control their environment by changing their breathing. Davies wanted people to get a feeling of floating, as if they were scuba diving, so they fill their lungs to float upwards and breathe out to tumble back down. People can also direct their journey with body movements, such as bending forwards or leaning to one side.
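
How breath and posture translate into motion is not spelled out in the article, but the idea can be sketched in a few lines of code. The function below is a hypothetical illustration, not Osmose's actual software; the sensor ranges and the speeds are assumptions.

def update_motion(chest_expansion, lean_forward, lean_side, dt):
    """Map breath and lean to a per-frame displacement (illustrative sketch).
    chest_expansion: 0.0 (fully exhaled) to 1.0 (fully inhaled)
    lean_forward, lean_side: -1.0 to 1.0, derived from the spine sensors
    dt: frame time in seconds"""
    NEUTRAL = 0.5        # breath level at which the visitor hovers (assumed)
    RISE_SPEED = 2.0     # vertical speed at a full breath, metres per second (assumed)
    DRIFT_SPEED = 1.5    # horizontal speed at a full lean, metres per second (assumed)
    dy = (chest_expansion - NEUTRAL) * 2 * RISE_SPEED * dt   # inhale to rise, exhale to sink
    dz = lean_forward * DRIFT_SPEED * dt                     # lean forward to drift ahead
    dx = lean_side * DRIFT_SPEED * dt                        # lean sideways to drift across
    return dx, dy, dz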

Body position is calculated from the output of three sensors – one on the headset, one at the top of the spine and one at the bottom. The sensors are simple devices made by the US company Polhemus. They relay position and orientation data to the computer, which adjusts the display to match.
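
Trackers of this kind report a position and an orientation for each sensor. The sketch below shows, purely as an illustration, how three such readings might be combined into a camera pose and a lean estimate; the class and field names are invented and are not the driver interface the Osmose team used.

from dataclasses import dataclass

@dataclass
class SensorReading:
    # Position in metres and orientation in degrees, as a 6-DOF tracker reports them.
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def compute_pose(head, spine_top, spine_base):
    """The camera follows the head sensor; the two spine sensors give a
    crude estimate of how far the torso is leaning (axis conventions assumed)."""
    camera_position = (head.x, head.y, head.z)
    camera_orientation = (head.yaw, head.pitch, head.roll)
    lean_forward = spine_top.z - spine_base.z   # forward lean
    lean_side = spine_top.x - spine_base.x      # sideways lean
    return camera_position, camera_orientation, lean_forward, lean_side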

Pressure sensors measure the expansion and contraction of the chest. "We originally put these close to the diaphragm and measured the expansion of that region," says John Harrison, one of the design team working with Davies. "This, however, meant that the user had to be capable of deep breathing – or breathing into the stomach – something that divers, dancers and singers can do, but which is not so natural for those without this kind of experience. So we eventually decided to place the sensors on the ribcage."
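
A chest reading presumably has to be scaled to each visitor's own breathing range before it can drive the motion. The class below is a hypothetical sketch of one way to do that; the article does not describe how the team actually normalised the signal.

class BreathCalibrator:
    """Scale a raw chest-expansion reading into 0.0 (exhaled) to 1.0 (inhaled),
    using the shallowest and deepest breaths seen so far as the reference range."""
    def __init__(self):
        self.low = float("inf")
        self.high = float("-inf")

    def normalised(self, raw):
        self.low = min(self.low, raw)
        self.high = max(self.high, raw)
        if self.high <= self.low:
            return 0.5   # no breathing range observed yet; hover at neutral
        return (raw - self.low) / (self.high - self.low)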

Harrison says that the biggest challenge was speed. "To give the user a sense of being surrounded by the computer-generated environment, no more than one-twentieth to one-thirtieth of a second can pass from measuring the position of the head to displaying the appropriate images."
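
That figure amounts to a budget of roughly 33 to 50 milliseconds between reading the sensors and showing the matching frame. The loop below is a minimal sketch of how such a budget might be checked; read_sensors and render stand in for the real tracker and graphics calls, which the article does not describe.

import time

FRAME_BUDGET = 1.0 / 20.0   # 50 ms upper bound; 1/30 s (about 33 ms) is the tighter target

def frame_loop(read_sensors, render):
    while True:
        start = time.perf_counter()
        pose = read_sensors()    # measure the head and spine positions
        render(pose)             # draw the view that matches the new pose
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            print(f"frame over budget: {elapsed * 1000:.1f} ms")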

The exhibit has sparked interest from other virtual reality researchers. Mel Slater, who heads the virtual reality team at University College London, says: "Generally, I support the notion of trying to read as much information as possible from the human body. Using breathing is a novel idea, which should be explored."

