UI for Bespoke Scenes
This semester, we explored a user interface for scene-augmentation apps that lets users customize their own scenes.
We were inspired to do this because current scene-augmentation apps generally rely on a set of rules or known relationships between the objects in a room to calculate the most likely placements for a user's situation. As a result, creative designers working with exotic scenes or unusual objects not found in common datasets like Matterport3D would have to build scale models or construct a physical set like those used in filmmaking. We wanted to see if we could extend AR-based scene-augmentation apps to such situations. This user interface is the result.
Semantic Coloring of Matterport3D Meshes
Matterport3D meshes are colorless, which makes them unfriendly to work with in a Unity project: it is difficult for users to differentiate among the objects in the scene. Here's an example of a colorless Matterport3D mesh:
Fortunately, Matterport3D meshes come with semantic labeling, which made it possible to color each object in the room. The process is as follows: we first extract the segment id of each face in the mesh; each segment id is mapped to a label (e.g. bed, table, tv). This lets us collect the set of faces belonging to each object, and by assigning the same color to all faces in a set, we achieve semantic coloring. Here's what the same mesh looks like after semantic coloring:
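The face-grouping step above can be sketched roughly as follows. This is a minimal illustration, not our actual pipeline: the function name, the input shapes (a per-face segment-id list like the one stored in Matterport3D's `.fsegs.json` files, plus a segment-to-label map), and the evenly spaced HSV palette are all assumptions made for the example.

```python
import colorsys

def color_faces(face_segments, segment_labels):
    """Group mesh faces by object label and assign one color per object.

    face_segments: list where face_segments[i] is the segment id of face i.
    segment_labels: dict mapping segment id -> object label (e.g. "bed").
    Returns a list of RGB tuples, one per face.
    """
    # Collect the set of faces belonging to each labeled object.
    objects = {}
    for face_idx, seg_id in enumerate(face_segments):
        label = segment_labels.get(seg_id, "unlabeled")
        objects.setdefault(label, []).append(face_idx)

    # Assign each object a distinct hue, then color all of its faces alike.
    palette = {}
    for i, label in enumerate(sorted(objects)):
        hue = i / max(len(objects), 1)
        palette[label] = colorsys.hsv_to_rgb(hue, 0.8, 0.9)

    face_colors = [None] * len(face_segments)
    for label, faces in objects.items():
        for f in faces:
            face_colors[f] = palette[label]
    return face_colors
```

In Unity, the resulting per-face colors would then be baked into the mesh's vertex colors (or a small texture) so each object renders in a uniform, distinguishable color.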
The first feature of the system is the import of Matterport3D data into Unity. The Matterport3D dataset consists of 3D meshes of 90 different buildings. The user imports a scene into the Unity application by selecting the room from a dropdown menu and clicking the "Load Mesh" button. See demonstration below.
The second feature is the ability to interact with a scene. Users can rotate the scene freely with 3 degrees of freedom. When the user clicks to select a location in the scene, the Unity application displays a bar graph in the left panel, where each bar indicates the distance from the clicked point to one of the room's objects. See demonstration below.
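One way the per-object distances behind the bar graph might be computed is a nearest-vertex query against each object's face set. This is a hedged sketch: the function name, the input format, and the choice of nearest-vertex distance (rather than, say, distance to an object's centroid or bounding box) are assumptions for illustration, not a description of our actual implementation.

```python
import math

def distances_to_objects(point, object_vertices):
    """Distance from a clicked point to each object in the room.

    point: (x, y, z) world coordinates of the clicked location.
    object_vertices: dict mapping object label -> list of (x, y, z) vertices.
    Returns a dict mapping label -> distance to that object's nearest vertex,
    which could drive one bar per object in the bar graph.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    return {label: min(dist(point, v) for v in verts)
            for label, verts in object_vertices.items()}
```

In the Unity app itself, the clicked point would come from a raycast against the mesh, and the same per-object minimum would be taken over that object's colliders or vertices.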
We have begun implementing a series of dropdown menus that will let the user search for rooms similar to theirs, first by functionality and then by specifics, but these are not yet integrated with the complete Matterport3D dataset. Additionally, we would like to offer tools that let users customize the layout of rooms and the objects within them.
Inspired by the works of:
Chang, Angel, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. "Matterport3D: Learning from RGB-D Data in Indoor Environments." arXiv preprint arXiv:1709.06158 (2017).
Keshavarzi, Mohammad, Aakash Parikh, Xiyu Zhai, Melody Mao, Luisa Caldas, and Allen Y. Yang. "SceneGen: Generative Contextual Scene Augmentation using Scene Graph Priors." arXiv preprint arXiv:2009.12395 (2020).
Zhang, Song-Hai, Shao-Kui Zhang, Yuan Liang, and Peter Hall. "A survey of 3D indoor scene synthesis." Journal of Computer Science and Technology 34, no. 3 (2019): 594-608.
...and many more! Read our paper for a full list of acknowledgements.