
UX/UI Case Study: NextVR Content Browser

NextVR: taking UI to the next level
Challenge
NextVR films and broadcasts live sports and music in virtual reality. They build their own cameras, place broadcast trucks at events, control the video, and broadcast it to apps on VR devices. I was hired as the only UX/UI Designer, and the first thing I tackled was the app software interface.

When I started, there was an interface in place on GearVR, but the team wanted something undefinably 'better.' These were the early days of VR, and the app was built almost on the metal; none of the middleware tools most people are familiar with were involved. This kept overhead down, because performance is especially critical in VR: far more pixels are rendered than in other media (the menu wraps 360 degrees around the viewer), and at higher frame rates (60 fps minimum). And because NextVR is all about stereoscopy, everything is rendered twice, once for each eye. That meant I couldn't really prototype in a 3D tool or script anything.
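To put that rendering load in perspective, here is a rough back-of-the-envelope sketch. The per-eye resolution is a hypothetical placeholder, not GearVR's actual render-target size; the point is the stereo multiplier and the 60 fps floor.

```python
# Rough illustration of why performance budgets are tight in VR.
# The per-eye resolution is a hypothetical placeholder, not NextVR's
# or GearVR's actual render-target size.

EYE_WIDTH = 1024      # assumed per-eye render width, in pixels
EYE_HEIGHT = 1024     # assumed per-eye render height, in pixels
EYES = 2              # stereoscopy: everything is rendered twice
MIN_FPS = 60          # minimum acceptable frame rate in VR

pixels_per_frame = EYE_WIDTH * EYE_HEIGHT * EYES
pixels_per_second = pixels_per_frame * MIN_FPS

print(f"{pixels_per_frame:,} pixels per frame")     # 2,097,152
print(f"{pixels_per_second:,} pixels per second")   # 125,829,120
```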

I knew if we designed this carefully, the flagship could set the stage for all the other platforms on the roadmap.
(Not my work) 1. Original badging system, where a few indicators could get stacked in the upper right-hand corner of thumbnails. 2. Original UI: pages of square and rectangular (often dark) thumbnails in a world of blackness.
Research
All I had access to were people internally, so I interviewed a couple handfuls of NextVR stakeholders. The most senior of them had spent less than a year with the existing interface. I began taking inventory of all the screens and features needed to replace the current version and to cover the near- to mid-term roadmap. A survey covered the rest of the employees, and I ran brand-focused exercises with two executives to get a handle on the NextVR brand (there was no Marketing team yet).
What did we learn?
I got feedback on the entire experience (I categorized it as Discovery, App, Content, and Post-Experience), but in this study I'll focus on the app. Boiling it all down, people asked for three things: 'sexiness,' simplification, and the ability to browse huge numbers of media items and events.

I started thinking about dimension and motion, and about a redesign of the thumbnails and their organization. I ended up estimating the maximum number of media items over the next couple of years (the rough lifespan of this UI) at under three hundred, which is considerably more specific than 'a huge amount.'
Solutions
This was something people wanted to evaluate inside VR; flats didn't get me far. Very early on, I figured out a way to get low-fidelity wireframes and mockups into a headset. It wasn't stereoscopic, but it took only about two minutes to go from Photoshop to VR, which freed me to iterate quickly. I developed a couple dozen menu solutions covering a broad array of approaches. I can't show this work for NDA reasons, but think Netflix with vertical categories, big thumbnails, small ones, stacks, grids, text menus, lists, 360-degree grids, you name it. I used a voting method, and people coalesced pretty clearly around a solution with a row of parent categories and paged grids of child events.

Perhaps surprisingly for a group that had strongly espoused wildly spatial and innovative approaches, they all reacted poorly to that unfamiliar stuff. I think the future has to arrive methodically, in baby steps. They universally loved curvature with shallow depth; it seemed just that one comfortable tick more futuristic than the web and app interfaces everyone is used to.
I also learned some more technical things that are now considered common knowledge. For instance, people were irritated by moving their heads to navigate, especially upward. This was before handheld controllers were commonplace, but even so, it was useful to keep all UI within a 90-degree field of view, primary elements within 60 degrees, and common interactions lower in the view.

I devised a 'field grid' that mapped my hot zones and 1- and 10-degree lines onto a sphere around the viewer. This allowed me to be specific in my tests, and it helped the engineering team implement the designs. I started a 'VR Bible' that outlined all of this: UI distance, eye height, interpupillary distance, everything I thought would be relevant to, or should be standardized across, multiple platforms. All of this laid the foundation for the dimensionality and motion we wanted.
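As a rough illustration of how a field grid like this translates into coordinates an engine can use, here is a minimal sketch. The radius and zone thresholds are assumptions for illustration; only the 90-degree and 60-degree guidelines come from the notes above.

```python
import math

# Hypothetical sketch of the 'field grid': place a UI element on a sphere
# around the viewer given horizontal/vertical angles in degrees.
# Names and default values are illustrative, not NextVR's actual spec.

UI_RADIUS_M = 2.0        # assumed comfortable distance for active UI
MAX_FOV_DEG = 90.0       # keep all UI within this span
PRIMARY_FOV_DEG = 60.0   # keep primary elements within this span

def place_on_sphere(azimuth_deg, elevation_deg, radius=UI_RADIUS_M):
    """Convert field-grid angles to a 3D point (x right, y up, z forward)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.sin(az)
    y = radius * math.sin(el)
    z = radius * math.cos(el) * math.cos(az)
    return (x, y, z)

def in_primary_zone(azimuth_deg, elevation_deg):
    """True if an element falls inside the 60-degree primary zone."""
    half = PRIMARY_FOV_DEG / 2
    return abs(azimuth_deg) <= half and abs(elevation_deg) <= half
```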
The resulting content browser menu screen displayed a horizontal carousel of content-category thumbnails that orbited the viewer. When a category was selected, its event/media thumbnails appeared in grid pages below. With the projected number of media items in mind, the original grids were 3 x 3, but they drew criticism because people wanted more information than was readable on thumbnails that small. I revised this to a 3 x 2 grid with larger thumbnails. The number of categories and pages could grow as needed.
1. Early 3 x 3 page grid. 2. Revised 3 x 2 page grid comp from spec doc (alignments and degree-spacers noted)
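The paging arithmetic for the revised grid is simple. This sketch (function names are mine, purely illustrative) shows how the roughly three-hundred-item ceiling maps onto 3 x 2 pages.

```python
import math

THUMBS_PER_PAGE = 3 * 2  # revised 3 x 2 grid

def page_count(event_count):
    """Grid pages needed to hold a category's events."""
    return max(1, math.ceil(event_count / THUMBS_PER_PAGE))

def events_on_page(events, page_index):
    """Slice of events shown on one page, kept in chronological order."""
    start = page_index * THUMBS_PER_PAGE
    return events[start:start + THUMBS_PER_PAGE]

# A category holding 30 events needs 5 pages; even the full ~300-item
# ceiling landing in a single category would stay at 50 pages.
assert page_count(30) == 5
assert page_count(300) == 50
```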
People had originally craved simplification and said the UI could be more usable somehow, but couldn't articulate how. I identified one primary way of addressing this: the thumbnail framework.

Initially, a thumbnail could be large or small, square or rectangular, regardless of what it represented. Tapping a thumbnail might open a new page (or subpages) like a directory structure of nested groups, or it might launch a recorded or live event. The old design relied on overlaid badge icons to clue people in. We needed more shape and size coding, and less abstraction, for readability.

I set up a new framework in which square thumbnails were only for categories, and the categories were grouped consistently in one carousel rather than intermingled with events. I overcame objections to flattening the nesting-doll, directory-style organization: events would be grouped chronologically into pages, only one level deep. There would be only one badge overlay, to indicate Live (later, when we needed something more prominent, the Live treatment took over the whole thumbnail every few seconds; I call this using the 4th dimension). I moved labels off the thumbnails to give them room to shine as graphic mini-billboards. Knowing the world was likely to be dark, these could really pop with contrast and enhance their place in space. These changes made the content much easier to parse and browse.
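The '4th dimension' Live treatment is essentially a timed cycle. A minimal sketch, with purely illustrative timings:

```python
def live_takeover_active(elapsed_s, period_s=8.0, takeover_s=2.0):
    """During each cycle, the Live treatment covers the whole thumbnail
    for a short window, then the artwork returns. Timings are assumptions."""
    return (elapsed_s % period_s) < takeover_s

# e.g. at 0-2 s the Live takeover shows, at 2-8 s the normal artwork shows,
# and the cycle repeats every few seconds.
```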

Dimensionally, there was one other important feature. People reported feeling a little 'lost,' and further exploration clarified that feeling and helped me kill two birds with one stone. We would treat screens, dialogs, and so on as layers: triggering one would push the existing UI back in space and blur it slightly, and backing out or completing a task would pull the last layer forward. There was a consistent distance for active UI; remember, it is stereoscopic, and some people were very sensitive to this being dialed in. Layering stabilized people's mental maps of where they were, and it also felt sexy and polished. It was a bit hard on the engineers, but even they were happy with the result.
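A minimal sketch of the layering model, assuming a simple stack where each inactive layer recedes by a fixed step and gains blur; the distances and blur values are illustrative, not the shipped numbers.

```python
# Hypothetical layering model: each new screen/dialog pushes the existing UI
# back in depth and blurs it slightly; backing out pulls it forward again.

ACTIVE_DEPTH_M = 2.0   # assumed consistent distance for the active layer
PUSH_BACK_M = 0.5      # assumed step each inactive layer recedes
BLUR_PER_LAYER = 1.0   # arbitrary blur units per level of depth

class LayerStack:
    def __init__(self):
        self.layers = []

    def push(self, layer_name):
        """Open a screen or dialog: everything behind it recedes and blurs."""
        self.layers.append(layer_name)

    def pop(self):
        """Back out or complete a task: the previous layer returns to focus."""
        if self.layers:
            self.layers.pop()

    def placement(self):
        """(name, depth_m, blur) for each layer; the active layer is last."""
        top = len(self.layers) - 1
        return [
            (name, ACTIVE_DEPTH_M + (top - i) * PUSH_BACK_M, (top - i) * BLUR_PER_LAYER)
            for i, name in enumerate(self.layers)
        ]

# Usage: pushing 'settings' over 'browser' leaves the browser half a metre
# behind the dialog and slightly blurred; popping restores it to focus.
stack = LayerStack()
stack.push("browser")
stack.push("settings")
print(stack.placement())  # [('browser', 2.5, 1.0), ('settings', 2.0, 0.0)]
```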
Validation
This one is simple. I ran internal surveys before and after the new UI was implemented, and the results showed universal improvement.
Page from UI report. The original UI ratings grid (left) shows more negative sentiment. Pretty satisfied with all the 10s and extra green to the right.





