A novel hyperspectral painting experience in virtual reality
Keming Gao · Yudi Tan · Janaki Vivrekar · Yuhan Yang
HyperBlend enables users to engage in a unique hyperspectral painting experience in virtual reality. With HyperBlend, users can draw on a canvas using a hyperspectral color palette, which renders slightly varying colors to each eye in a virtual reality interface. Through binocular fusion (the physiological process of combining signals from both eyes into a single blended image), users perceive the slightly varying colors shown to the two eyes as a single lustrous, hyperspectral color. By providing a flexible and accessible interface for painting with hyperspectral colors, HyperBlend prototypes a novel way of interacting with hyperspectral colors while creating art.
Background and Motivation
Most humans have three types of retinal cone cells, which produce trichromatic color experiences. However, a small number of rare individuals have tetrachromatic, or four-dimensional, color vision. Simulating tetrachromatic or higher-dimensional color vision for trichromats is a rich problem space, where little is known about the experiential nature of higher-dimensional color vision. In our project, we leverage the physiological process of binocular fusion to introduce a fourth "dimension" of color by slightly varying the colors shown to each eye.
The virtual reality painting space is an appropriate and innovative medium to explore the effects of binocular fusion on color vision in a creative domain. By embedding hyperspectral colors into an interactive, immersive painting application, we propose a new context for color vision researchers to probe deeper into how individuals interact with color when creating, and understand the potential for hyperspectral color experiences in creative contexts.
To use HyperBlend, the user wears a Google Cardboard interfacing with the HyperBlend mobile application and provides user input (i.e., paint strokes and color selections) via a trackpad and keyboard connected to the HyperBlend desktop web client. The HyperBlend mobile app communicates with the desktop web client via a WebSocket connection to synchronize changes across both screens.
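The synchronization described above can be sketched as a small message protocol: the desktop client serializes each input event, sends it over the WebSocket, and the mobile app applies it to its local canvas state. The message schema and field names below are illustrative assumptions, not HyperBlend's actual wire format.

```javascript
// Hypothetical stroke-synchronization messages for a HyperBlend-style app.
// The desktop client calls encodeStroke() and sends the result over a
// WebSocket; the mobile app calls applyMessage() on each received message.

function encodeStroke(stroke) {
  // stroke: { x, y, brushSize, leftColor, rightColor } (assumed fields)
  return JSON.stringify({ type: 'stroke', payload: stroke });
}

function applyMessage(canvasState, message) {
  const { type, payload } = JSON.parse(message);
  if (type === 'stroke') {
    // Record the stroke; rendering to each eye's half happens elsewhere.
    canvasState.strokes.push(payload);
  } else if (type === 'resize') {
    // Canvas calibration/resize events use the same channel.
    canvasState.width = payload.width;
    canvasState.height = payload.height;
  }
  return canvasState;
}
```

In a browser context, the desktop side would call something like `socket.send(encodeStroke(stroke))`, and the mobile side would hook `socket.onmessage` to feed `event.data` into `applyMessage`.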
User Interface Details
The HyperBlend app screen is split into two halves corresponding to the left and right eyes. When viewed through a Google Cardboard, the screen appears as a single canvas with a single row of controls along the bottom.
The paint brush color palette (left-most control) and the background color palette (right-most control) contain a predefined selection of four “regular” colors (same color shown to both eyes) in the bottom half of the palette and four hyperspectral colors (slightly different color shown to each eye) in the top half of the palette. The predefined hyperspectral colors are selected based on informal interview data from two dichromats, conducted by a student group in Computational Color (CS 294-164) this semester.
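The two-tier palette above could be represented as a list of per-eye color pairs, where "regular" entries repeat the same color for both eyes and hyperspectral entries differ slightly. The hex values and helper names below are illustrative, not the app's actual palette data.

```javascript
// One possible representation of HyperBlend's palette (values are made up).
// Each entry stores one color per eye, so the renderer can pick the right
// color when drawing each half of the split screen.

function regular(color) {
  // Same color shown to both eyes.
  return { left: color, right: color, hyperspectral: false };
}

function hyperspectral(leftColor, rightColor) {
  // Slightly different color shown to each eye, fused binocularly.
  return { left: leftColor, right: rightColor, hyperspectral: true };
}

const brushPalette = [
  // Top half of the palette: four hyperspectral colors.
  hyperspectral('#e63946', '#d62828'),
  hyperspectral('#2a9d8f', '#24b38a'),
  hyperspectral('#457b9d', '#3f6fa8'),
  hyperspectral('#f4a261', '#f1953f'),
  // Bottom half of the palette: four regular colors.
  regular('#000000'),
  regular('#ffffff'),
  regular('#ffb703'),
  regular('#8338ec'),
];

function colorForEye(entry, eye /* 'left' or 'right' */) {
  return entry[eye];
}
```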
Furthermore, users can select distinct custom colors for each half of the screen from a color wheel (located in the bottom center of each half of the screen). Users may also upload a background image, which HyperBlend transforms from the RGB color space to an RGG’B color space by displaying the image with a slightly different green color channel to each eye. In other words, the original RGB image is rendered to the left eye and an RG’B image is rendered to the right eye.
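The RGB → RGG'B background transform described above amounts to rendering the original pixels to the left eye and a green-shifted copy to the right eye. The sketch below assumes a simple constant offset on the green channel; the actual shift HyperBlend applies is not specified here.

```javascript
// Minimal sketch of the RGB -> RGG'B background transform. The left eye
// sees the original image unchanged; the right eye sees a copy whose green
// channel G' is offset. GREEN_SHIFT is an illustrative assumption.

const GREEN_SHIFT = 20; // hypothetical offset on a 0-255 channel scale

function toRightEyePixel([r, g, b]) {
  // Clamp the shifted green channel to the valid 0-255 range.
  const gPrime = Math.min(255, Math.max(0, g + GREEN_SHIFT));
  return [r, gPrime, b];
}

function toRightEyeImage(pixels) {
  // pixels: array of [r, g, b] triples; the left eye renders `pixels` as-is.
  return pixels.map(toRightEyePixel);
}
```

For example, `toRightEyePixel([10, 100, 50])` yields `[10, 120, 50]` under this assumed shift, so the right eye sees a slightly greener version of each background pixel.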
The setup for using HyperBlend includes a laptop, a smartphone, and a Google Cardboard. To accommodate different phone screen sizes, users may calibrate their app screen and resize the canvas before drawing via the web client. They can click and drag the mouse to draw paint strokes in the web client, and change the brush size or switch the left/right brush with keyboard shortcuts. These actions are synchronized to the phone in real time, so multiple clients can view the drawing through a Google Cardboard with a smartphone inside.
At the start of the semester, our team brainstormed several ideas for applications centered on the hyperspectral color experience generated by binocular fusion of slightly different colors (e.g., counterfeit detection of makeup items, hyperspectral imaging of the fundus, a hyperspectral color detection game).
As we chatted with a group of students in Computational Color (CS 294-164) this semester, we realized that color vision researchers lack tools to explore hyperspectral color phenomena with users in contextual, task-based settings. As a result, many color vision researchers resort to testing color experiences in isolation, detached from practical use cases.
VR painting is an accessible, creative activity that will enable users and color vision researchers to explore hyperspectral experiences in a specific artistic context. The following are some of our early sketches of our idea.
We iterated on our sketches by creating a Figma wireframe mockup of the mobile and web user interfaces.
The resulting system is HyperBlend, a hyperspectral virtual reality painting application. With HyperBlend, users can experience binocularly fused colors in an artistic context and create hyperspectral paintings in virtual reality.
HyperBlend was created as a class project for Virtual Reality and Immersive Computing (CS 294-137) at UC Berkeley in Fall 2020.