We’ve been reading about (and producing) a lot of AI thinkpieces lately that focus on speed, scale and democratisation of research. This is not about to change.
But as research becomes more operationalised, I’m craving a human moment. I had a discussion with an AI tool about the benefits of AI-assisted interviewing, and an interesting stance emerged: the proposition that an AI interviewer can move beyond stigma and judgement to offer a beneficially neutral presence in patient research.
Depending on the context, I’m not sure I believe this is true.
Neutrality is not the same as safety, and reducing stigma is not the same as knowing when to stop.
I believe patient research exposes what cannot be reduced to process and I’ll explain why.
What’s so human about patient research?
My colleagues and I were recently discussing some of the most challenging interviews we’ve conducted over the years. Unsurprisingly, many of them were in the field of healthcare. Patient interviews require researchers to sit with people at their most vulnerable, and to turn those conversations into insights without flattening, exploiting, or exposing the humans behind them. These kinds of interviews are often the most unpredictable and highlight the human skills that are so valuable in a researcher’s toolkit. In light of this, I would like to take a moment to think about the emotional labour of research and ethical decision-making in the moment.
The first challenge: finding people who may not want to be found
Recruitment is a logistical hurdle, but it’s also the unsung hero of research.
You can’t produce quality data without talking to the right people. But what happens when people don’t want to be found and don’t want to talk?
We conducted a diary study for SameSame two years ago, which involved speaking with queer youth across multiple regions about highly sensitive issues. These participants were exceptionally hard to find, even through specialised channels, and the challenges didn’t end there. Once recruited, they experienced a fear of evidence: diary entries sitting on shared family devices; interviews being recorded.
Here, we’re not just facing drop-offs and project delays; we’re confronting the reality that our research methods carry risk, and the consequences extend far beyond adoption and the bottom line.
The interview itself: walking the line between insight and intrusion
Once someone agrees to speak, a second challenge emerges, one that no discussion guide can fully prepare you for.
Patient research often feels less like interviews and more like moments of trust, and these moments are often unpredictable.
People may share experiences they haven’t articulated before, or emotions they didn’t expect to surface. Some are exhausted from retelling their story.
Now consider the other side of the scale: moving from easing patients who are reluctant to speak to grounding conversations when distress surfaces.
As a team, we’ve spoken about gynaecological research where patients have been more than willing to share harrowing stories of past trauma, loss and daily personal struggles.
These conversations are tough to handle at both ends, but they highlight the specialised skills a researcher needs to allow this level of unpredictable vulnerability while maintaining the baseline integrity of the research.
In those moments, researchers are constantly making judgment calls:
- When is it appropriate to probe for clarity or detail?
- When is silence the most respectful response?
- How do you acknowledge emotion without positioning yourself as a therapist?
- How do you keep the conversation safe while still gathering meaningful insight?
This is an exercise in restraint. You validate feelings, but you don’t diagnose. You listen deeply, but guide the conversation and control how far it goes. This sometimes means moving slowly when speed would be easier and sometimes requires an exceptional level of tact when a conversation needs to be moved along.
Turning human stories into responsible narratives
Perhaps the hardest part of patient research comes after the conversation ends.
As researchers, we’re tasked with transforming deeply personal stories into reports, frameworks, and recommendations. This is where ethical risk quietly re-enters the room.
Powerful stories can drive empathy and change, but they can also oversimplify, sensationalise, or expose participants if handled carelessly.
Responsible synthesis asks hard questions:
- What details are truly relevant to decision-making?
- What belongs to the person, not the project?
- Are we preserving dignity, complexity, and context, or just emotional impact?
The goal isn’t to strip stories of emotion, but to translate experience into insight without reducing people to their pain.
The takeaway
Patient research exposes a layer of the profession we don’t often train for: emotional regulation, ethical judgment in real time, and the ability to balance empathy with boundaries. These are not soft skills. They are risk-management skills, and I find it hard to see how we can outsource this level of accountability.
Above all, the participant’s wellbeing must always outweigh the value of any single insight, and that’s a controversial stance in environments driven by deadlines, deliverables, and business needs.
If that feels uncomfortable, it should.
Because patient research isn’t just operational. It’s personal.