We’ve seen AI become ingrained in UX research workflows over the last year, with researchers moving from experimentation to relying on it as an everyday tool. Taking a step back, we looked at the impact AI has brought to our industry, but also the benefits it has yet to bring. We can’t help but think that AI’s growth shouldn’t stop here.
While it can adequately direct methodology, pull together themes and even run research sessions, in its current application the way AI is being used in UXR feels less like the Swiss army knife we wish it was, and a little more like a spork. It’s undeniably useful in the right places, but kinda clumsy and funny looking. And this made us think: is AI really doing what researchers need it to do? How can we shape its use to make it work best for us?
We’ve got problems, but methodology ain’t one
While AI has sped up the pace of work, and given us more efficient ways to conduct research and understand findings, it hasn’t really changed our methods. AI has augmented UXR processes with tools that let us scale our research better and navigate data or preparation more efficiently, but we haven’t yet seen an AI application that completely changed the game.
However, we think there are two areas where UXR still struggles and where AI solutions definitely have room to grow: synthesis and insight socialisation.
Specifically:
How might we enable more robust democratised research through nuanced, self-reflective synthesis?
How might we make past insights easier to find, connect, & apply to current customer behaviour?
Wish #1: AI keeps enabling democratised research, but gets better at reading sentiment and flagging its own context gaps.
AI is already doing the heavy lifting with problem 1: lowering the barrier to democratised research. It’s helping stakeholders and product teams default to research by giving them easy ways to engage with and interpret data for themselves through chat-based GPTs.
These AI tools are a big driving force behind the boom in democratised research we’re seeing within our clients’ businesses (more people outside of core research teams are conducting their own research because they’re equipped to do it). GPT-generated interview scripts, methodology how-to guides, automatic transcription and decent-sounding trend analysis all help people who aren’t trained in research to understand their users a bit better.
But, AI supports democratised research only as long as we keep an eye on the quality.
Having more people do research because of AI tools is a great step forward, but it should come with a cautionary note: as we know, AI output still needs a lot of human verification. Poorly framed questions and misinterpreted trends can quickly lead teams in the wrong direction. Current AI tools are great at directing, but we must take care to correct their use, because they’re also great at confidently misdirecting.
LLMs exhibit modality bias: they over-interpret textual data as a complete representation of human meaning. Because AI works only with text, all analysis is based on what people say, not how they say it. It can’t interpret what people show us through gestures, body language, or tone of voice. Then the AI-trust paradox kicks in: because AI presents trends with confidence, they’re easier to believe. Analysis that ignores these more nuanced data points can lead to strategic decisions being made on findings that really shouldn’t be taken as fact.
Wouldn’t it be great if GPTs could point out potential bias and plot holes as readily as they point out neatly grouped themes and next steps?
Wish #2: Research repositories that learn easily and surface meaningful insights from past research.
Continuous research is the gold standard for UX research. However, it’s hamstrung if you struggle to find and use insights from the past when you need them. This is a long-standing challenge in UXR, but with democratised research, the urgency to solve this issue is increasing. Clients are increasingly looking for insights that don’t live in individual slide decks, but rather help them connect current behaviours to long-term customer trends. While AI-driven research repositories seem like the perfect solution, we feel they’re still falling just short of what’s needed: a genuinely searchable archive of past research, across formats, that surfaces patterns without stripping away critical context.
We’ve tested a range of these tools over the past year, and while many offer slick document storage and chat-based search, we’ve yet to find a goldilocks solution.
Overall, we found the core tool features work well: document storage is decent, chat-based search is convenient, and places to store search results are a nice touch. It’s the way these tools feed back search outcomes that concerns us. Smaller but important insights can get lost when they don’t neatly align with larger themes, and trend analysis tends to flatten the user nuances that give those insights meaning. And because AI models are trained on snapshots of information collected over time, they don’t always reflect a current or consistent understanding of a topic.
Even with capable tools, managing a research repository remains a lot of work, particularly when older research exists only as PDFs. Making AI useful often requires retroactively tagging and restructuring files so it can understand context, which quickly becomes a time-consuming task.
Wouldn’t it be great if repository searches not only located facts, but retained the who, what, and why of the research to keep insights in context?
So while AI has sunk its teeth into democratising research, we’re still seeing gaps when it comes to solutions that help teams search, question, and utilise past research insights. For now, making insights truly shareable still needs human judgement.
So AI, if you’re listening, our wishes for the future of AI are…
- Highlight knowledge or context gaps to help product teams who do research know where nuance might be missing.
- Make insights from larger bodies of research easy to manage, maintain, and find, while respecting the context.
Ultimately, we want to get to the point where humans can step in where it matters most to interpret, connect, and move the work forward. That said, we’re genuinely excited by how far AI in UXR has come in just the last year. It’s already speeding up research through solid transcription and analysis, and it’s opened the door for more people than ever to engage with research. We’re counting this as a positive sign for the future of AI in UXR.
If you’re interested to see how we can help move your UX work forward through solid, human research (assisted by our mechanical friends), give us a shout!