Perception And Embodiment In A VR Documentary

    An experimental documentary set in an Istanbul hospital

    Hospital with one entrance and two exits is a collaboration between Ainsley Sutherland, Deniz Tortum, and Cagri Zaman.

    Inside a hospital in Istanbul, Turkey, a viewer moves through an operating room corridor, with stretchers, surgical equipment, and the buzzing of fluorescent lights. She sees the hospital as a machine might see it. Hospital with one entrance and two exits is a virtual reality documentary that deliberately changes the results of a viewer’s physical movement over the course of the piece. The user’s body is highlighted as a form of instrumentation, examining and moving through the building’s architecture, evoking the instrumentation used by doctors treating patients within the hospital.

    An excerpt from Hospital (video via vimeo.com).

    Hospital is created from point cloud data generated by laser scanning a hospital in Istanbul. We set a LIDAR device in the center of a room, and it scans the visible area by firing laser pulses at every surface and measuring the time each pulse takes to reflect back. The scanner collects a matrix of numbers representing the spatial position of each point it measured.
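    In rough terms, each return can be converted from a round-trip time and a pair of scan angles into a point in space. Here is a minimal sketch of that conversion (the function name and angle conventions are illustrative, not the scanner’s actual firmware):

        import numpy as np

        C = 299_792_458.0  # speed of light, in m/s

        def lidar_return_to_point(time_of_flight_s, azimuth_rad, elevation_rad):
            """Convert one LIDAR return into an (x, y, z) point, scanner at origin.

            The pulse travels out to the surface and back, so the range
            is half the round-trip distance.
            """
            r = C * time_of_flight_s / 2.0
            x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
            y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
            z = r * np.sin(elevation_rad)
            return np.array([x, y, z])

        # A full scan is this repeated millions of times: an N x 3 matrix.
        # A pulse returning after ~33 nanoseconds hit a surface ~5 meters away:
        print(lidar_return_to_point(33e-9, 0.0, 0.0))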

    Machines don’t see the way we see, but they still have “sense organs”. With this piece, we wanted to explore the process of translating machine senses into human senses -- what is lost, what is changed, and how the process can be made visible. Video games render 3D space as continuous surfaces, called meshes, but we wanted to retain the point cloud in the final experience, as a way to arrest the translation halfway between data and model.
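    As a minimal illustration of what retaining the point cloud means (this is not the code of the piece itself; the file name and column layout are assumed), you can draw the raw points directly instead of reconstructing surfaces from them:

        import numpy as np
        import matplotlib.pyplot as plt

        # Hypothetical scan export: one row per point -- x, y, z, r, g, b,
        # with colors stored as 0-255 values.
        scan = np.loadtxt("hospital_scan.xyz")
        idx = np.random.choice(len(scan), size=min(50_000, len(scan)), replace=False)
        pts = scan[idx]  # subsample so the plot stays responsive

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        # Drawing points instead of a mesh keeps surfaces porous: walls stay
        # see-through wherever the scanner missed or saw past them.
        ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c=pts[:, 3:6] / 255.0, s=0.5)
        plt.show()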

    Increasingly, robotic sense organs navigate the world alongside us. Not only do they create maps; they also give a kind of “embodiment” to our computers. Phones have rudimentary eyes, ears, mouths, and limbic systems; a self-driving car has even more senses. But in contrast to a map, which reduces space and fixes it in time, this machine perspective is shifting, situational, and responsive. The algorithm has a body to navigate and to learn with, recording not just images but actions: successful responses, possible responses, likelihoods. Just as we experience space both as a physical place and as a place in which we can do certain things, so do embodied computers.

    Much has been made of the “first-person” perspective in VR, but perhaps what “first person” means is more of an assumption than a requirement. In Hospital we have attempted to modify perspective, to fold physical movement and embodiment into it, so that physical gestures change a user’s visual perception in a way that does not occur in everyday life.

    At one point in the experience, a viewer moves her head by just inches, but the camera travels meters. This video (via vimeo.com) shows me testing the system on an HTC Vive, though the piece has been optimized for Oculus.

    With Hospital, we are seeking to subvert expectations about how a space can be (for example, it can be transparent, permeable) as well as expectations about what our movements and senses tell us. A viewer has to learn a new environment: physical movement doesn’t translate directly inside the headset. Instead, by leaning forward you traverse far more space than your body would otherwise expect. Craning your body allows you to navigate in new ways without additional controllers or conventional game locomotion. Movements are amplified throughout the piece: if you lean several inches to the right, your perspective moves several meters within the virtual environment.
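    The underlying mechanic is simple to sketch: each frame, the tracked head offset from a calibration origin is multiplied by a gain before it is applied to the virtual camera. This is a schematic reconstruction, not our engine code; the gain value and the names are illustrative:

        import numpy as np

        GAIN = 40.0  # illustrative: a lean of ~10 cm becomes ~4 m of travel

        def amplified_camera_position(head_pos, head_origin, world_anchor, gain=GAIN):
            """Scale tracked head movement into much larger virtual travel.

            head_pos:     current tracked head position (meters, tracking space)
            head_origin:  head position captured when this section of the piece began
            world_anchor: where the virtual camera sat at that same moment
            """
            offset = np.asarray(head_pos) - np.asarray(head_origin)
            return np.asarray(world_anchor) + gain * offset

        # Leaning 10 cm to the right moves the viewpoint 4 m to the right:
        print(amplified_camera_position([0.10, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.6, 0.0]))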

    Inside the hospital, physicians use both intimate somatic perception, like learning by touch how to identify organs, and robotic perception, such as surgical robots, to treat patients. We wanted to explore the role of instrumentation in medical work, in part by using the point cloud data rather than a fully rendered mesh. The point cloud is somewhere in between how we see a building (as opaque planes) and how LIDAR perceives it (color values and a set of x-y-z coordinates). Transparency, color tint, and blind spots appear in the rendering.
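    One way those qualities can arise, sketched here under assumed inputs (a per-point RGB color plus a return-intensity value from the scanner): weak returns become translucent points, and a uniform tint shifts the whole cloud away from photographic color.

        import numpy as np

        def shade_points(colors, intensities, tint=(0.7, 0.9, 1.0)):
            """Derive per-point RGBA from scanner color and return intensity.

            colors:      N x 3 array of RGB values in [0, 1]
            intensities: N return strengths in [0, 1]; weak returns become
                         more transparent, so blind spots and glancing
                         surfaces read as gaps and translucency
            """
            tinted = np.asarray(colors) * np.asarray(tint)      # cool color cast
            alpha = np.clip(np.asarray(intensities), 0.1, 1.0)  # faint, not gone
            return np.column_stack([tinted, alpha])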

    One example of how imaging techniques are used in tandem with surgical robots (daVinci Surgery, via youtube.com).

    Current research in scientific approaches to consciousness suggests that attention and perception of our environment are intrinsically related to conscious experience. Modifying the “possibility space” of an environment -- its affordances, what we can do within it -- doesn’t have to be limited to game mechanics; it can also be done at the level of physical navigation. One of the most interesting aspects of head-mounted VR is that it allows us to easily create interventions into our own perceptual system. For example, rapidly blurring one eye causes a user to feel a slight, painless “tap” on that eye (via binocular rivalry). It’s a shame that the current version of Oculus (Consumer Version 1) doesn’t allow manipulation of single-eye projection -- we have to use the Development Kit to test binocular rivalry.
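    The image operation itself is trivial; what the consumer runtime withholds is access to the per-eye render targets. As a stand-in sketch (file names hypothetical, using Pillow on captured frames rather than a live headset):

        from PIL import Image, ImageFilter

        # Hypothetical stereo pair; in the piece this would act on the
        # per-eye render targets, which the CV1 runtime does not expose.
        left = Image.open("left_eye.png")
        right = Image.open("right_eye.png")  # the right eye stays sharp

        # Blur only one eye. Shown simultaneously, the mismatched images
        # compete for perception -- the slight, painless "tap" described above.
        rivalry_left = left.filter(ImageFilter.GaussianBlur(radius=8))
        rivalry_left.save("left_eye_blurred.png")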

    When we make something that’s visually immersive, we’re blinding someone to the external world and providing them with a kind of substitute inner world. There are many examples of fantastic visual worlds, but within these there is also a strong push toward “physical” realism: don’t make the user sick, don’t frighten the user, make her comfortable. “Sickening” VR isn’t the end goal here, but it is possible to proceed more tentatively, using the instincts of a new VR user -- to feel about herself with her hands, to be cautious and slow, to be intermittently aware of a dual space -- as part of the piece.

    If you want to read more (academic) musings on embodied perception in virtual reality, my thesis on performance and affect or my collaborator’s thesis on montage and immediacy in VR might be interesting to you!


    Open Lab for Journalism, Technology, and the Arts is a workshop in BuzzFeed’s San Francisco bureau. We offer fellowships to artists and programmers and storytellers to spend a year making new work in a collaborative environment. Read more about the lab.