When embarking on a new project, user research is key to working out the best solution for the customer. Talking directly to users helps paint a vivid picture, but a fatal flaw of user research is often the people themselves.
We’re not calling anyone a liar, but it’s easy to misremember previous behaviour or to twist the narrative slightly based on unconscious (or conscious) bias. We’re only human, after all.
This is why it’s essential to have statistics to back up your user interviews. Numbers don’t lie, and they are extremely important for validating the results of the user research phase.
An added bonus is that analysing the stats will often reveal whether the behaviour of the smaller group you’ve spoken to is representative of a much wider audience.
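To make that idea concrete, here’s a minimal sketch of one way to run that check. Every number below is hypothetical (the analytics rate, the interview counts and the “advanced filters” behaviour are all invented for illustration). An exact binomial test asks: if the people we interviewed really behaved like the wider user base, how likely is a result this extreme?

```python
from math import comb

# Hypothetical figures: analytics say 12% of ALL users run advanced filters,
# but 7 of the 10 people we interviewed claimed they do.
population_rate = 0.12
sample_hits, sample_n = 7, 10

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials at rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exact one-sided binomial test: probability of seeing 7 or more
# "advanced" users in a sample of 10 if the true rate were 12%.
p_value = sum(binom_pmf(k, sample_n, population_rate)
              for k in range(sample_hits, sample_n + 1))

print(f"p = {p_value:.6f}")
# A tiny p-value (here well below 0.001) suggests the people we spoke to
# aren't representative of the wider user base on this behaviour.
```

With samples as small as a typical round of interviews, an exact test like this is safer than a normal-approximation z-test, which assumes larger counts than ten interviewees can provide.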
We often face evaluator effects when we conduct usability tests: we think we’re observing something objectively, but in fact we might just be seeing what we want to see.
So, does that mean that user research isn’t accurate?
Not at all. User research is one of the best ways to gain an understanding of how people think and act; it’s just prone to bias from both sides of the table. A well-executed research phase will reveal what your customer actually wants and which solutions might be most appropriate. It places people at the core of your design process.
The information that statistics provide gives an unbiased view of the facts, complementing the rest of your qualitative data. Often, inaccurate responses come down to the questions being asked; it’s hard to get these absolutely right.
If you want to find out more about the ins and outs of qualitative and quantitative data, we actually wrote another article about it a while back (as well as some other useful bits).
Stats are best used alongside other data and research, and they can be the deciding factor for your stakeholders.
In the real world
In a recent project with IJ Global, a data-heavy SaaS platform for people looking to uncover data on global infrastructure projects, our brief was to create a simplified interface for all users: both those who only run simple queries and the advanced users who run complex searches and use the tool to its full capacity.
The initial assumption was that our users needed to be able to filter by everything, instantly. However, once we dug a little deeper, we realised that the stats weren’t backing this assumption up.
The vast majority of people only used a subset of popular filter types, while a very small number of users explored the rest. Our interviews alone hadn’t revealed this distinction, so the data was vital in shaping and simplifying the interface.
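To give a flavour of what that looks like in practice, here’s a small sketch of the kind of quick tally we mean. The log format and filter names below are invented for illustration; IJ Global’s real analytics aren’t shown here.

```python
from collections import Counter

# Invented event log: one entry per search, listing the filters it applied.
searches = [
    ["region", "sector"],
    ["region"],
    ["region", "sector", "deal_status"],
    ["region", "sector"],
    ["currency", "tranche_type"],  # the rare power-user query
]

filter_counts = Counter(f for search in searches for f in search)
total = len(searches)

# Share of searches touching each filter, most popular first. A steep
# drop-off is the signal that most filters can be tucked away by default.
for name, count in filter_counts.most_common():
    print(f"{name:>13}: used in {count / total:.0%} of searches")
```

Even a tally this crude makes the drop-off obvious, and it was the shape of that drop-off, not the interview anecdotes alone, that gave us the confidence to simplify.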
We were able to remove a huge amount of noise from the interface and make it much easier and quicker to carry out the most common tasks.
Key learnings
While qualitative research methods such as user interviews and personas (amongst others) are great for gaining insight into user needs and their stories, quantitative methods like stats let us view a fuller picture through an objective lens.
We like to use a mix of techniques to get the best possible results. By combining methods, it’s easier to see the full picture and avoid going down the wrong path by relying on just one data source.