As an Open Lab fellow at BuzzFeed, I'm developing user interfaces for VR that respond to and interact with a user's gaze through head-tracking, a project I'm calling Glance.
Recently, I spent a Saturday with a friend who makes very cool experimental interfaces, and we made a little prototype for a 3D "buzz browser".
(Caution: there's an audio component!)
If you have a Google Cardboard, you can try viewing in your phone's browser to see if the circles converge.
You can imagine if, instead of the view above, the camera were placed in the middle of the rotating cubes. The cubes could be more evenly spaced; they could move more slowly. This wouldn't necessarily be a useful tool, but it suggests some tradeoffs and advantages when a browsing environment has spatial depth. For example, a user on the same visual plane as these images would have to look away from one image to see the others. But they could also use spatial memory to recall the locations of certain images, and potentially reorganize them. I rarely use my computer's history and bookmarks functions, and they're especially useless on dynamic sites like Facebook.
This talk by Bret Victor about screens and human-computer interaction made me consider what characteristics of information design in 3D space are the most useful. Are the rectangular surfaces that get most of our attention an "inhumane" interface? In this vein, I get one of two reactions when I tell people I'm working in VR:
# It's amazing, it's the future, it's like you're really there.
# It takes the worst aspects of "screens" and makes them…inescapable.
The buzz-browser was inspired by the BuzzFeed mobile view, where vastly different types of articles -- from quizzes to political news to investigative journalism -- are presented side by side. The UI doesn't make a decision about which categories articles belong to. It feels sort of like pulling articles out of a feed, browsing rather than searching, a little out of control. I wanted to play with this kind of nonhierarchical presentation of content, as well as explore how flat content appears in 3D space. The result is…a screensaver, but a screensaver that helped me think through some of these questions.
We used Three.js to build the buzz browser, and its documentation has been helpful. Starting from a basic demo in those docs, we created 52 cubes and mapped images onto them as textures.
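As a rough sketch of that kind of setup (hypothetical function and values, not our exact code), here's one way 52 cubes could be spaced evenly on a ring around a central camera. In Three.js, each computed position would become a textured cube's `cube.position.set(x, y, z)`:

```javascript
// Sketch: evenly space `count` cubes on a ring of the given radius,
// staggering alternate cubes up and down so neighbors overlap less.
function ringLayout(count, radius) {
  const positions = [];
  for (let i = 0; i < count; i++) {
    const angle = (i / count) * 2 * Math.PI; // even angular spacing
    positions.push({
      x: radius * Math.cos(angle),
      y: i % 2 === 0 ? 0.5 : -0.5, // alternate heights
      z: radius * Math.sin(angle),
    });
  }
  return positions;
}

const cubes = ringLayout(52, 10); // 52 headline cubes, radius 10 units
```

With the camera at the origin, every cube is the same distance away, which is one way to get the "more evenly spaced" arrangement mentioned above.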
We used PhantomJS to create .png files from BuzzFeed headlines. This was my first time working with it, and I was impressed. PhantomJS is a "headless browser": a web browser just like Firefox or Google Chrome, but operated from the command line rather than through a GUI. This is useful for scraping modern webpages with endless scrolling and lazy loading, which only execute code, load elements, or display content in response to user actions. For example, we spoofed an iPhone user agent on BuzzFeed's homepage to get a nice grid of images and titles. We also collected (though did not use) the URL for each article.
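A PhantomJS capture script along these lines would be run with `phantomjs capture.js`. This is a sketch, not our exact script; the user-agent string, viewport size, and filenames are illustrative:

```javascript
// Illustrative settings for spoofing an iPhone-sized browser.
var settings = {
  userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.4',
  viewport: { width: 375, height: 667 }
};

// Guard so the file only drives a browser when run under PhantomJS.
if (typeof phantom !== 'undefined') {
  var page = require('webpage').create();
  page.settings.userAgent = settings.userAgent; // pretend to be an iPhone
  page.viewportSize = settings.viewport;
  page.open('http://www.buzzfeed.com', function (status) {
    if (status === 'success') {
      page.render('buzzfeed.png'); // save the rendered page as a PNG
    }
    phantom.exit();
  });
}
```

Because PhantomJS runs a full WebKit engine, the page's JavaScript executes before the render, so lazy-loaded images and mobile layouts show up in the screenshot.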
We set up some functions to bring the images into the center one after another. Then I added an effect that renders the doubled stereoscopic image so the scene can be viewed in a headset. While this worked for me and a few others, some people said the images did not converge. This might have to do with the eye-separation setting, which is adjustable in the code.
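The stereoscopic doubling works by rendering the scene twice, once per eye, from two camera positions offset along the camera's horizontal axis by the eye-separation distance. A rough sketch of that offset math (a hypothetical helper, not Three.js's actual stereo-effect code), assuming the up direction is (0, 1, 0):

```javascript
// Compute left/right eye positions by offsetting the camera position
// along its horizontal axis: right = forward x up, with up = (0, 1, 0).
function eyePositions(position, forward, eyeSeparation) {
  const rx = -forward.z; // cross product of forward with (0, 1, 0)
  const rz = forward.x;
  const len = Math.hypot(rx, rz) || 1; // normalize (avoid divide-by-zero)
  const half = eyeSeparation / 2;
  const ox = (rx / len) * half;
  const oz = (rz / len) * half;
  return {
    left:  { x: position.x - ox, y: position.y, z: position.z - oz },
    right: { x: position.x + ox, y: position.y, z: position.z + oz },
  };
}
```

If the separation is too wide or too narrow for a given viewer's eyes (or phone screen), the two images won't fuse, which matches what some testers reported.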
I'm really interested in how 3D browsing could affect the way we think of "browser history." What if we were able to glance at what we'd seen that day in video playback or an array of images sized by time spent on the page? There are a few other things from this study that I want to continue thinking about.
* Communication: I like how, slowed down, the robot voices appear to struggle so hard to communicate. You have to listen carefully, phoneme by phoneme, an experience that totally changes the normal dynamic of a BuzzFeed headline.
* What about behind-the-scenes? This prototype was set up at a recent event, and I found it interesting that a few people wondered what it would look like if it reflected traffic data, almost like an editorial dashboard.
* It's not easy to read text in this format, since headset resolution isn't good enough yet, so audio is really important.
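Returning to the browser-history idea: "sized by time spent on the page" could be as simple as mapping dwell time to a tile's scale, e.g. on a log scale so a tab left open all day doesn't dwarf everything else. This is an entirely hypothetical sketch, with made-up constants:

```javascript
// Map seconds spent on a page to a tile scale factor, using a log
// curve clamped to [minScale, maxScale] so outliers stay bounded.
function tileScale(seconds, minScale = 0.5, maxScale = 3) {
  const s = 0.5 + Math.log10(1 + seconds) / 2; // 0s -> 0.5, 99s -> 1.5
  return Math.min(maxScale, Math.max(minScale, s));
}
```

Each tile in a 3D history view could then be scaled by `tileScale(dwellSeconds)`, making the pages you actually spent time with visually prominent at a glance.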
Stay tuned! My next project involves 360-degree video. I'm hoping to find ways to combine the two worlds of VR: virtual environments and spherical video.