Some people can examine pain—or pleasure—and render it into something transcendent, creating something universally moving from something uniquely personal. These people are artists. Some artists are using virtual reality technology as a medium for expression. These artists are pioneers.

Through immersive technology, Rita Addison's haunting, award-winning Detour: Brain Deconstruction Ahead renders vivid the terror and perceptual distortions resulting from a head injury she suffered in a car accident in 1992. In 1994, MIT Media Lab Professor David Zeltzer introduced Addison, a nature photographer before her accident, to the CAVE and the folks at the University of Illinois Electronic Visualization Lab. There she worked with a team of CAVE programmers and modelers led by Marcus Thiebaux. They created a virtual gallery of scanned nature photos that she'd shot in her pre-accident life, simulated the accident, then deformed and obscured the virtual photos and gallery to simulate the effects of that injury upon one's vision. Detour premiered at SIGGRAPH '94.

A hallway within Rita Addison's virtual gallery. The nature photographs, taken by Addison before a car accident damaged her vision, have been deliberately deformed and obscured to represent the effects of the accident.

Beyond representing the side effects of Addison's trauma, the resulting evocation speaks to the sense of psychic mangling she experienced. By presenting a stereoscopic immersive display of her brain's damaged tangle of axons and dendrites, Detour illustrates her misery much as Picasso's Guernica captures agony. By making her loss material and specific, Addison's work also proves cathartic—with that classic Aristotelian mix of pity and terror—for other brain-injury sufferers and those close to them. Indeed, after experiencing Detour, Aldous Huxley's widow tearfully remarked to Addison, "That's what Aldous was trying to tell me his world was like!" For those with no direct connection to brain damage, Addison's work provokes a strong response purely at the aesthetic level. That response is possible primarily because of her chosen medium—although Addison would probably say the medium chose her.

Seeing the Trees and the Forest

Char Davies, director of visual research at SOFTIMAGE, created the award-winning OSMOSE, a remarkably oceanic and organic artwork that incorporates a Silicon Graphics Onyx Reality Engine and immersive, kinetic interfaces to take participants through the macrocosm and microcosm of a forested glade: among the leaves, beyond the bank, through a meadow, into the roots underground, and even into a representation of the Performer code running it all.

OSMOSE enables immersants to maneuver underground, even through the root system of a tree.

Working with programmer John Harrison, who previously worked with VR artists at the Banff Art Centre in Canada, Davies created a work that, she says, represents "the metaphorical aspects of nature" and logically descends from paintings she based on similar themes years before. Davies also comments that "spatial ambiguity and transparency were most important"—clearly essential to OSMOSE's ineffable quality of magical realism, an artistically rich suggestiveness rather than the photo-realism so often sought after in virtual reality.

Harrison reports that he enjoys working with artists, and that his and Davies' sensibilities were well matched. Davies adds that on a project like this, every person's role is necessary, from artist to programmer, animator, and sound engineer, as in the dynamic partnership of film director, writer, cinematographer, and composer. Harrison also enjoyed the technical challenges that Davies' vision demanded: her "we need this," leading to his "I don't know if this is possible," leading to her "find a way!" describes the iterative creative flow between painter and technologist.

OSMOSE's 3D Cartesian wireframe grid, which extends to infinity in all directions. Movement within OSMOSE is controlled in uniquely human ways: immersants inhale to rise, exhale to descend, and lean forward or backward to propel themselves back and forth.
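The breath-and-balance navigation described above can be sketched in code. This is an illustrative sketch only—OSMOSE's actual interface (a chest-expansion sensor and body tilt tracking) is not publicly documented, so the function, sensor names, and gain values below are assumptions for demonstration:

```python
# Hypothetical sketch of a breath-and-lean navigation mapping, in the
# spirit of OSMOSE's interface. Sensor names and gains are assumptions.

def navigation_velocity(chest_expansion, rest_expansion, lean_angle,
                        vertical_gain=1.0, forward_gain=2.0):
    """Map breathing and leaning to a (forward, vertical) velocity pair.

    chest_expansion: current reading from a chest-strap girth sensor
    rest_expansion:  calibrated reading at a relaxed, neutral breath
    lean_angle:      torso tilt in degrees (positive = leaning forward)
    """
    # Inhaling expands the chest above its resting value -> rise;
    # exhaling contracts it below the resting value -> descend.
    vertical = vertical_gain * (chest_expansion - rest_expansion)
    # Leaning forward propels the immersant forward; leaning back reverses.
    forward = forward_gain * lean_angle
    return forward, vertical

# Inhale while leaning slightly forward: drift forward and upward.
fwd, vert = navigation_velocity(chest_expansion=1.2, rest_expansion=1.0,
                                lean_angle=5.0)
```

The design point is that neither hand controllers nor joysticks appear anywhere in the mapping—the body itself is the input device, which is much of what gives the work its oceanic, meditative quality.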

Civilization Now and Then

Involved not with representations of the natural world, but with representations of the human past, Carl Loeffler, director of the SIMLAB at Carnegie-Mellon University, headed up a team that created Virtual Pompeii for the Archeological Institute of America, the oldest learned society in the U.S. In part a fundraiser for the AIA, Virtual Pompeii has visited museums in the U.S. and suggested a direction for VR-based art in education and entertainment.

Today you can learn more about Virtual Pompeii by pointing your Web browser to http://www.sgi.com/ International/UK/news_events/press_vhc.html [sic]. When visiting this site, consider Loeffler's vision: that Virtual Pompeii's Doric Temple—the largest outdoor theater outside Rome in 79 A.D.—could soon be used for theatrical events in networked VR. Or that a film documentary segueing between the actual archeological site and the virtual reconstruction could allow the narrator to move through the reconstruction. Or consider the possibility that, if developed as a VRML-based Web site, real-time virtual reality segments could become part of a CD-ROM or a kiosk in a museum.

In the meantime, a Virtual Vatican is in the works.

Music of the Spheres

A scene from Christian Greuel's musical virtual world, which invites participants to interact with music and sound-generated imagery.

Christian Greuel, a painter and theater designer who also works as a programmer, uses the Fakespace Soundsculpt Toolkit to let music drive a virtual world, experienced via the stereoscopic, high-resolution Fakespace BOOM. Turning the traditional relationship between image and sound on its head, Greuel's Vacuii, which premiered at SIGGRAPH '95, mapped music into moving 3D images.

Greuel says he's "not into heroes," but does point out that "Leonardo was a scientist-artist," adding that, "Somehow, the two [art and science] got separated." His goal "is to bring them together again, as with the Greeks." His latest project, debuting at SIGGRAPH '96, draws on the classical tradition of the still life; only here, through VR and MIDI, viewers can travel into a violin or move a skull through space to experience synesthesia. How things sound (in each section of the audio frequency spectrum of the piece of music) determines how things look. "The skull does different things depending on what's going on in the music," Greuel explains. Sounds create or change objects, and thus an elusive, generative relationship exists between sound and image.
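The mapping Greuel describes—sections of the audio frequency spectrum driving visual behavior—can be sketched as follows. This is an illustrative sketch only, not the Soundsculpt Toolkit's actual API: the band edges and the visual parameters each band drives are assumptions for demonstration:

```python
# Hypothetical sketch of sound-to-image mapping in the spirit of Greuel's
# work: partition the spectrum into bands, then let each band's energy
# drive a visual parameter. Band edges and parameter pairings are assumed.

BANDS = {                    # frequency range (Hz) -> parameter it might drive
    "bass":   (20, 250),     # e.g., the skull's scale
    "mid":    (250, 2000),   # e.g., its rotation speed
    "treble": (2000, 8000),  # e.g., its surface brightness
}

def band_energies(spectrum):
    """Sum magnitude over each frequency band.

    spectrum: list of (frequency_hz, magnitude) pairs, e.g. from an FFT.
    Returns a dict mapping band name -> total energy in that band.
    """
    energies = {name: 0.0 for name in BANDS}
    for freq, mag in spectrum:
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                energies[name] += mag
    return energies

# A moment of music with strong bass and a touch of treble:
e = band_energies([(60, 0.9), (120, 0.5), (3000, 0.2)])
# e["bass"] sums the 60 Hz and 120 Hz magnitudes, so the skull would
# grow large while its brightness shifts only slightly.
```

Because the analysis runs continuously on the live signal, objects respond moment to moment to the music—the "elusive, generative relationship" between sound and image that Greuel describes.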

Greuel, Loeffler, Davies, and Addison, among other pioneering artists, use Silicon Graphics technology to represent and extend their visions—and create new artworks that would not otherwise have been possible for them to imagine, much less make real.


San Francisco writer Paulina Borsook (loris@well.com) writes on technology and culture.

[ed. some images from the original magazine article have been omitted in order to create balance between text and images in this web version]

This article may include minor changes from the original publication in order to improve legibility and layout consistency within the Immersence Website.  † Significant changes from the original text have been indicated in red square brackets.

Put online: May 2017. Last verified: June 2017.