From January 2017 to May 2017, I worked with a group of students at Harvard University’s School of Engineering and Applied Sciences to develop a solution for the unique needs of deaf and hard of hearing students on Harvard’s campus. I led the Applications subteam of our 18-person class to design and develop the interface for our final deliverable: a tool to provide actionable data on sound levels in classrooms and other study spaces to these students.
Given the broad scope of the original problem, one of our main priorities was to learn more about this space and develop insights that could guide the project forward. We took a systems-level approach, identifying factors in physiology, infrastructure, and the affected individual's experience that could contribute to the current user experience. Our research spanned studies of existing deaf spaces (such as Gallaudet University), interviews with hard of hearing people in the Harvard ecosystem, and various other methods of inquiry; in all of these, our goal was to clarify the dimensions of the problem space.

From all of this, we were able to develop a number of insights:

1) Hearing disabilities cause a certain level of social discomfort, even in academic settings. 

2) Assistive listening solutions in rooms on campus are designed for larger groups, not individual listeners. 

3) Listening through hearing aids becomes exhausting over time. 

We crafted a number of personas, aiming to cover the various types of deaf and hard of hearing individuals. 
Based on our research, we developed two main problem statements: 

1) There are few inclusive, non-auditory means of communication that are widely applicable in social and academic settings at Harvard University, creating challenges for effective communication and participation for members of the deaf/hard of hearing community.

2) Inclusion of deaf and hard of hearing individuals in social and academic environments is hindered by both a lack of cultural awareness and education regarding effective modes of communication.

We decided to focus on the first problem statement (tackling the issue of effective communication and participation) and expanded our solution space using a "technology-first" organization: we developed concrete ideas that directly addressed the problem and grouped them by the medium they relied on (text, visuals, tactile, etc.). 

After a period of deliberation (with technical feasibility and time as the primary determining factors), we settled on two potential solutions to pursue. In the interest of time, this case study focuses on the first: Sweetspot, a student-accessible display of ideal study and work spaces based on qualities of sound in the room.

EXPANDED RESEARCH
From the interviews and general research conducted earlier, we had a few good insights to work with. In particular, we knew:

- Hearing (especially with a hearing aid) is exhausting.
- A new location’s sound profile is generally unknown before actually experiencing it.
- Existing solutions on campuses don't generally address personal hearing profiles or preferences.

However, we wanted to dig a little deeper by conducting our own user interviews. Given our time constraints, we used these interviews to a) confirm or refute our assumptions about sound perception in educational spaces, b) identify potential causes, and c) pinpoint specific locations on campus that were bad for completing user jobs (for simplicity's sake, we defined these around common, broad student jobs -- specifically, socializing and studying).

The research confirmed our suspicions; of the 49 respondents, 57% noted difficulty hearing the professor in classroom settings, and 84% reported difficulty hearing other students in those classrooms. Additionally, based on the results of the survey, we identified a few rooms to use as ideal test rooms; Lamont Cafe (a popular library cafe) emerged as a study space that many people generally liked to study or socialize in, but its purpose changed depending on the time of day and noise level.

Having identified three primary variables that could play into a user’s choice of room (location, time, and noise), we felt confident in moving forward in the design process.
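These three variables can be captured in a simple data model. The sketch below is purely illustrative (the names and structure are my own, not the prototype's), assuming each microphone sample records where and when it was taken along with a sound level:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical model of a single sound sample, built around the three
# variables our research surfaced: location, time, and noise.
@dataclass
class SoundReading:
    location: str        # e.g. "Lamont Cafe"
    timestamp: datetime  # when the sample was taken
    noise_db: float      # measured sound level in decibels

reading = SoundReading("Lamont Cafe", datetime(2017, 4, 3, 14, 30), 62.5)
```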

INITIAL DESIGN
Given the variables from before, we were able to define a user journey through our to-be-designed application. The path was to be kept as simple as possible:

1) Understand the user’s prioritized factor. From the research, we had determined that time, location, and sound level would be the most important factors to our target user. Of these, we wanted to know which was most important to the user.
2) Ask for more details of that factor. For location, we would simply ask which location on campus they would like to go to, providing additional details (such as opening times, capacity, etc.) to guide their decision.
3) Provide recommendation to user. Based on their detailed preferences, we could provide a recommendation on place and time for their desired activity.
4) Allow for exploration. Once the recommendation has been made, allow the user to change their preferences and note the changes immediately.
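The four steps above can be sketched as a single recommendation function. Everything here is hypothetical (the data shape, the matching rule, and the names are mine for illustration, not the prototype's actual logic):

```python
# Illustrative sketch of the recommendation flow: given the user's
# prioritized factor (step 1) and their preference for it (step 2),
# return the best-matching room/time slot (step 3). Re-running with a
# changed preference models the exploration in step 4.

def recommend(readings, priority, preference):
    """Return the reading closest to the user's prioritized preference.

    readings   -- list of dicts with 'location', 'hour', 'noise_db' keys
    priority   -- 'location', 'hour', or 'noise_db'
    preference -- the desired value for that factor
    """
    def distance(r):
        value = r[priority]
        if isinstance(value, str):
            return 0 if value == preference else 1
        return abs(value - preference)

    return min(readings, key=distance)

readings = [
    {"location": "Lamont Cafe", "hour": 14, "noise_db": 62.0},
    {"location": "Lamont Cafe", "hour": 9, "noise_db": 45.0},
    {"location": "Science Center", "hour": 14, "noise_db": 55.0},
]

# A user who prioritizes quiet gets steered to the 9am slot.
quietest = recommend(readings, "noise_db", 40.0)
```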

As we iterated on this flow, we came up with a few guiding principles that would govern our design decisions: flexibility in choice on the platform, clarity in communication (especially with abstract concepts like noise), and universality in utility. 

Based on the refined user flow and these principles, I took the lead on designing the interface for the application, focusing on a desktop-first experience. In keeping with the principles of flexibility and responsiveness, I designed the entire application as a single-page application that ‘slid’ various sections into focus as the user navigated back and forth. This paradigm let the user mentally track the order of the steps and easily navigate between sections.
Throughout this process, a primary concern was how the user would interact with the various data involved, primarily the loudness of the room. For key moments like selecting a loudness preference, we used common scenarios (an empty library, a busy room, etc.) as set points on a scale and mapped them to quantitative ranges based on the microphone data. Given the similar use case, we referred to the Google feature that tells users when places of interest are busiest; when showing the data itself, we divided it into hour-by-hour buckets and omitted units of measurement, as decibel values meant little to our target users.
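The two presentation ideas above can be sketched in a few lines: averaging raw samples into hourly buckets, and translating decibel values into familiar scenarios rather than exposing units. The dB cut-offs here are illustrative assumptions, not the ranges we actually derived from the microphone data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical scenario scale: each cut-off (in dB) maps to a familiar
# reference point instead of a raw number.
SCENARIOS = [
    (45, "empty library"),
    (60, "quiet conversation"),
    (75, "busy room"),
]

def label_for(db):
    """Translate a decibel value into a familiar scenario label."""
    for cutoff, label in SCENARIOS:
        if db <= cutoff:
            return label
    return "very loud"

def hourly_buckets(samples):
    """Average (hour, db) samples into one sound level per hour."""
    buckets = defaultdict(list)
    for hour, db in samples:
        buckets[hour].append(db)
    return {hour: mean(dbs) for hour, dbs in buckets.items()}

samples = [(14, 60.0), (14, 64.0), (9, 44.0), (9, 46.0)]
levels = hourly_buckets(samples)
labels = {hour: label_for(db) for hour, db in levels.items()}
```

The point of the design choice: the user never sees "62 dB", only "busy room at 2pm".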

I also took the lead on implementing the prototype, built from scratch with a few libraries used for the sliders’ styling and value fetching. The prototype can be found here.

Moving forward, we hoped to think more about designing for more personalized use, exploring the various types of hearing disabilities (in frequency, overall dampening of sound, etc.) and how we might provide more helpful recommendations for all individuals, regardless of the extent of their disability.