TinCan.ai is a Voice User Interface (VUI) prototyping tool. TinCan.ai enables you to build an interactive prototype of your VUI (e.g. an Amazon Alexa Skill) with no coding required. You can also use TinCan.ai to test your VUI remotely with beta users and collect utterance data.

TinCan.ai is currently in beta, and we are admitting beta users on a rolling basis. You can sign up for the beta at https://tincan.ai
Whiteboard sketch of a voice-based "dating" app
I approached this design challenge by first asking small teams of would-be VUI designers to "whiteboard" VUI designs for various applications. These teams sketched designs for voice-based transportation apps, museum navigation, and "dating" apps. I studied these whiteboard sketches for common themes.
TinCan/VUI Design Tool, Version 1 - "Build"
TinCan/VUI Design Tool, Version 1 - "Play"
The first prototype addressed the need to test app responses with users. It included "Build" and "Play" windows. In the Build window you could add labels to system responses. In the Play window you could then conduct a "Wizard of Oz" style test and record a transcript of the interaction.
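A "Wizard of Oz" session like the one the Play window supports can be modeled as a simple turn log: the wizard picks a labeled system response for each user utterance, and every turn is recorded for the transcript. This is a minimal sketch with hypothetical names, not TinCan.ai's actual data model:

```python
# Minimal sketch of a Wizard-of-Oz test session. The "wizard" plays a
# labeled system response (defined in the Build window) after each user
# utterance, and every turn is appended to a transcript.
class WozSession:
    def __init__(self, responses):
        # responses: label -> canned system response text
        self.responses = responses
        self.transcript = []  # list of (speaker, text) tuples

    def user_says(self, utterance):
        self.transcript.append(("user", utterance))

    def wizard_plays(self, label):
        text = self.responses[label]
        self.transcript.append(("system", text))
        return text

session = WozSession({"greeting": "Hi! Where would you like to go?"})
session.user_says("open my travel app")
session.wizard_plays("greeting")
```

Recording the transcript as plain (speaker, text) pairs keeps it easy to export or review after the test.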

This first version of the prototype was tested with 8 expert users, ranging from technical developers to UX and UI designers. 
TinCan/VUI Design Tool, Version 2 - "Editor"
TinCan/VUI Design Tool, Version 2 - "Play"
In the second version of the prototype we added a number of features based on user testing. These features included:
- Ability to define "paths"
- Ability for the system to "auto-respond" to user utterances
- Ability to define Entities (i.e. Slots) in association with Intents
- Ability to pre-fill Utterances in association with Intents
- Ability to use the microphone for user input
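The Slot, Utterance, and auto-respond features above imply a data model along these lines. The sketch below is hypothetical (intent names, fields, and the naive exact-match strategy are all my assumptions, not TinCan.ai's implementation):

```python
# Hypothetical data model: each Intent carries Slots, pre-filled
# Utterances, and a system response. "Auto-respond" then matches a user
# utterance against the pre-filled utterances (naive exact match here).
intents = {
    "BookRideIntent": {
        "slots": ["Destination"],
        "utterances": ["take me to {Destination}", "book a ride"],
        "response": "Okay, booking your ride.",
    },
    "CancelIntent": {
        "slots": [],
        "utterances": ["cancel", "never mind"],
        "response": "Ride cancelled.",
    },
}

def auto_respond(user_utterance):
    """Return the response of the first intent whose pre-filled
    utterances contain the user's utterance (case-insensitive)."""
    for name, intent in intents.items():
        for pattern in intent["utterances"]:
            if user_utterance.lower() == pattern.lower():
                return intent["response"]
    return "Sorry, I didn't get that."
```

A real auto-responder would use fuzzy matching or an NLU model rather than exact string comparison; the point here is only the shape of the Intent/Slot/Utterance association.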

This prototype again went through a round of user testing. We tested with a mix of software developers and UX designers. We specifically targeted members of the Alexa Development community to participate in the user tests. 
TinCan.ai, current version - "Editor", with tabs for Slots, Utterances and Response
TinCan.ai, current version - "Simulate"
TinCan.ai, current version - "Launched Prototype"
The current version of TinCan.ai builds on the previous two prototypes and adds a number of new features. Most notably, with TinCan.ai you can:
- use Microsoft LUIS to build a natural language understanding model
- dynamically add user utterances to Intents to improve natural-language training
- export utterance examples and intent schema in a format compatible with Amazon Alexa Development
- create a "launched prototype": a publicly accessible URL for your voice prototype that you can share with others and use to test your voice application remotely
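To illustrate the export feature, here is a sketch of serializing a prototype's intents to the classic Alexa Skills Kit convention: an intent-schema JSON plus a plain-text sample-utterances list ("IntentName utterance" per line). The internal `intents` dict is hypothetical; only the output shape follows the ASK convention:

```python
import json

# Hypothetical internal model of one intent with one slot.
intents = {
    "GetExhibitIntent": {
        "slots": {"Exhibit": "AMAZON.LITERAL"},
        "utterances": ["tell me about the {Exhibit}"],
    },
}

def export_intent_schema(intents):
    """Serialize intents to a classic ASK-style intent schema JSON."""
    return json.dumps({
        "intents": [
            {
                "intent": name,
                "slots": [{"name": s, "type": t}
                          for s, t in spec.get("slots", {}).items()],
            }
            for name, spec in intents.items()
        ]
    }, indent=2)

def export_sample_utterances(intents):
    """One 'IntentName utterance' line per pre-filled utterance."""
    return "\n".join(
        f"{name} {u}"
        for name, spec in intents.items()
        for u in spec["utterances"]
    )
```

Keeping the export as plain JSON and text means the files can be pasted directly into the Alexa developer console.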