AI in research - our approach
Debate about AI and its implications for individuals, organisations and wider society seems to be everywhere at the moment. It feels like no time at all since AI was spoken about as a futuristic, remote possibility – and now it is most definitely here and affecting all of our lives in numerous ways.
We know that it’s also top of mind for some of the clients we work with – not least those working in the regulatory space. We have previously written about our research on the regulatory implications of AI in medical practice.
We’re also spending time thinking about what it means for our business here at Community Research – how we can harness the good things about the technology (and pass these advantages on to clients), whilst being mindful of the associated risks and without compromising the quality of what we produce.
A plethora of new tech businesses in the market research sector are aiming to capitalise on the AI revolution, offering software which transcribes, analyses and reports on research feedback at the press of a button. Technology is also being adopted whereby AI interviewers replace human researchers[1] – sometimes even interviewing AI respondents. It can be extremely hard to navigate this new normal, both in terms of practical application and the wider ethical and data privacy implications.
We are very alive to the fact that this innovation presents a valuable opportunity to automate some of the more basic ‘grunt’ work, potentially making research less costly and more accessible. It also offers real potential for new, innovative ways of presenting research findings – bringing them to life and to a wider audience. As a micro-business, we are used to working with diverse sub-contractors and associates, collaborating to bring the right mix of skills to a project. In this respect, AI feels like an extension of what we already do.
At the same time, we are conscious that AI cannot be fully trusted: it is designed to ‘try to please’, which in practice means it simply makes things up if it cannot find a suitable answer. This well-documented tendency to hallucinate[2] has clear dangers. Whilst AI interviewers questioning AI respondents obviously saves money and time, it means that no new evidence is created – any insight is based on what is already out there (which may be US-centric and certainly won’t represent the views of more marginalised communities). Furthermore, we have experimented with using AI to support thematic analysis of interview and group transcripts, and have found that whilst it provides a basic understanding of what is coming out, it can miss the nuance of what is said. Importantly, it also misses what is not said.
We have therefore decided to use AI as an enabler – a way of producing creative outputs and a useful ‘sense-check’ of our own analysis – rather than as a replacement for researcher input. Nothing can beat an experienced researcher who really understands the people they are interviewing and the wider context. The research we conduct is often with some of the most vulnerable people in society, on issues that have a huge impact on their lives. In a world where their voices and needs frequently go unheard, we have a duty to ensure that their views are reported truthfully, as well as in a human and humane way.
[1] The Future Of AI Market Research: Two Powerful Paths Are Emerging
[2] AI hallucinations are getting worse – and they're here to stay, New Scientist