"Spatial storytelling" or "environmental narrative", are common descriptions of how to make 360-degree video, where a user can look anywhere and isn't confined by a frame. A lot of work has focused on staging scenes as though they are theatrical spaces, and activating the entire scene that way.
However, I've been curious about changes in the editing process. After all, it is rare that someone working as a reporter or documentarian can carefully block out action for each event. This makes the editing that goes on in postproduction pretty important. Additionally, although VR in general often gives people the illusion that they're "there", wherever there is, it is still an illusion-- and goes through even more postproduction (to mask out stitching defects and tripods and crew) than traditional video. It produces a new space, not a precise record of events. Even if VR seems like a good way to reproduce reality, it can be deceptive. What would it mean to push a 360 video in the opposite direction, away from realism?
I recently went with BuzzFeed SF's bureau chief Mat Honan to film an interview he did with Satya Nadella, the CEO of Microsoft. One of the things I wanted to try in this project was bringing context to the interview, particularly through other spherical clips. This was experimental: some things worked, and others didn't. I focused on experimenting with "blended transitions", where one scene cuts to the next in an incomplete fashion.
Viewers barely notice most cuts in video, even though cuts occur frequently and often instantaneously. "Jump cuts" are common, cutting directly from one clip to the next. "Fades" are another common type of transition, in which the opacity of one clip decreases as the opacity of the subsequent clip increases. Editing footage so that two scenes are displayed at once is relatively uncommon, although it was once groundbreaking. Split screens, animated transitions, and call-out boxes all seem reminiscent of late-night car infomercials or reality-TV dating shows from the early 2000s. Still, one of the challenges of VR video is getting beyond the static panoramic shot. Filmmakers have tried to add narrative to static panoramas by mapping traditional video onto flat surfaces within the scene.
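A fade of the kind described above amounts to a per-pixel alpha blend between two frames. Here is a minimal sketch in Python with NumPy (the function name and interface are mine, not any editor's API):

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Linear fade between two frames: at t=0 show frame_a, at t=1 show frame_b."""
    t = min(max(t, 0.0), 1.0)  # clamp the blend factor to [0, 1]
    blended = (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)
    return blended.astype(frame_a.dtype)
```

In an editor, `t` would be driven by the playhead position across the transition's duration.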
But when both scenes are shot in spherical video, the effect is a bit different: if the two shots are stitched together along a central seam, a viewer experiences both scenes at once.
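Because equirectangular frames map yaw linearly onto the horizontal axis, that central-seam stitch can be sketched as nothing more than taking half the columns from each shot (a simplified illustration with a made-up function name, assuming both frames share the same equirectangular layout):

```python
import numpy as np

def blend_hemispheres(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Stitch two equirectangular frames along a central seam:
    the left hemisphere comes from shot A, the right from shot B."""
    h, w = frame_a.shape[:2]
    out = frame_b.copy()
    out[:, : w // 2] = frame_a[:, : w // 2]  # left half of the sphere from shot A
    return out
```

A real edit would feather the seam rather than cut it hard, but the geometry is the same.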
In this scenario, a viewer is able to see part of each scene, or turn their attention entirely toward one scene or the other, experiencing the video as a tradeoff between two synchronous scenes. This tradeoff reminds me more of a video installation than of an "interactive video" piece. Omer Fast's "The Casting" is one example:
The Casting 2007 is a four-channel video installation projected on two double-sided screens, so that only two projections can be seen at any one time. One side shows obviously edited footage of an American soldier in plain clothes telling the artist two violent stories[…]. On the other side, Fast presents reconstructions of these two interwoven stories and of the casting session for the role of the soldier.
I saw The Casting about a year ago, and was struck by the feeling that time was still passing, and other narratives unfolding, on the screens I couldn't see. Fast cuts different narratives together into one, a story that was never recounted exactly as we hear it. This kind of editing creates a piece that feels real, feels documentary, but actually produces a new story. I was curious about how something similar could be achieved in a single-channel video. In the installation, of course, it isn't as simple as turning your head left or right; you'd have to walk around the screens to see the other side, so the effect of "missing" information is much more powerful.
Despite Silicon Valley futurist-inflected chatter around 360-degree video-- that it's a stepping stone to "true" VR, that it's a retrofit of the medium onto "old" forms, that the resolution isn't high enough for "real presence" (whatever that means)-- I think there's still something quite interesting about the particular kind of ambient interactivity 360 video affords. In some ways, it has more in common with video installation than with interactive video, games, or traditional linear video.
Another approach I experimented with in this video was combining two videos through the unique geometry of individual shots (using existing lines in the frame to help create properly distorted masks) to create a new space. Using standard masking techniques and video layers, elements of one scene can be blended into another. Nothing about this approach is technically new, but the effect is different in spherical video: it lends itself to shots that use the entire environment.
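The underlying compositing operation is simple: a mask weights each pixel toward one scene or the other. A minimal sketch (function name mine; in practice the mask would be hand-drawn along lines in the shot, not computed):

```python
import numpy as np

def composite_with_mask(base: np.ndarray, overlay: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
    """Blend overlay into base: mask is a float array in [0, 1] per pixel.
    1.0 shows the overlay scene, 0.0 shows the base scene,
    and in-between values feather the edge."""
    if mask.ndim == 2:
        mask = mask[..., None]  # broadcast one mask value across color channels
    blended = mask * overlay.astype(np.float64) + (1.0 - mask) * base.astype(np.float64)
    return blended.astype(base.dtype)
```

The spherical twist is only in how the mask is drawn: it has to follow the equirectangular distortion of the scene's geometry, which is why existing lines in the shot are useful guides.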
I've been referring to this as the "fishbowl" shot, because it oscillates the viewer's POV between being a camera inside the conference room and feeling as though the conference room itself, and its contents, are the subject of other people's scrutiny. It is a kind of reversal of VR's fly-on-the-wall voyeurism.
A few other things worth mentioning:
* Color correction is a bit tough when superimposing these shots; I achieved it through masking in Premiere. My workflow went from Premiere to After Effects and back to Premiere. If you want the details, leave a comment or email me-- I'm not sure my approach is the most efficient, but I'd be happy to step through it.
* This was a preliminary project to dip my toes into 360 animation; I have a longer project in the works. If anyone is working in hand-drawn or After Effects animation for 360 video, drop me a line!
* While I liked being able to show images of the topics discussed in the interview (Minecraft, et cetera), I'm not sure this format had much advantage here over traditional video, where one could simply cut to a product image and back.
* Partial cuts are one way to address the difficult timing of 360 video: curious viewers get something new to explore, without a full cut away from the scene that might disorient other viewers.
* This demo also helped me test out some ideas for another prototype I'm working on: instead of splitting the screen, I want to make an interactive interface that displays one shot when a viewer looks to the right, and a different shot when a viewer looks to the left. Unfortunately this type of interactivity isn't supported by major platforms like YouTube or Facebook.
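The look-direction prototype described in the last bullet comes down to mapping head yaw to a shot choice. A toy sketch of that selection logic (shot names and the hemisphere split are my invented placeholders; a real player would read yaw from the headset or drag controls each frame):

```python
def select_shot(yaw_deg: float) -> str:
    """Pick which shot to render based on head yaw in degrees:
    looking into the right hemisphere shows one shot, the left shows another."""
    yaw = yaw_deg % 360.0  # normalize, so e.g. -90 degrees becomes 270
    return "shot_right" if yaw < 180.0 else "shot_left"
```

Since YouTube and Facebook only deliver a single fixed video stream, this kind of runtime switching would need a custom player (for example, a WebGL page) rather than a standard 360 upload.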