In Coveo's Administration Console, some users found it difficult to see the effect of different ML configurations on their setup. Testing the efficiency of ML models required navigating multiple sections of the product, along with a lot of manual tracking and performance analysis. In the end, users were left with configurations that did not improve their relevance.
The goal of this project was to provide an easy way to test ML configurations and to add visibility and transparency into ML performance. We wanted users to see the effect of ML on their relevance, and even whether ML was beneficial to them at all.
Our hypothesis was as follows: if users could see inside the ranking algorithm, they would understand what part ML plays in the scoring of results.
At Coveo, we have coworkers who use our product daily and have direct access to customers. We interviewed some of them to better understand the problem our users were facing.
We designed an ML model testing feature that allows the user to quickly test a query against a specific ML model and see how each result was ranked by the algorithm. We also added a quick way to compare two configurations against each other.
Different result views were designed to surface more granular information. The default view uses an accessible color palette to indicate what percentage each ranking weight represents. An optional detailed view is available for users who want more information.
Advanced configuration settings were then added to the tester so that users could fully test their setup without leaving the tool.
The model tester is a great first step toward ML transparency at Coveo and has been well received by our users. We continue to improve this aspect of our product and plan to add relevant flows to guide users through our console.