AI has definitely thrown quite the curveball at qualitative researchers. – Interview with Dr. Tamara Pavasović Trošt

Tamara Pavasović Trošt is an Associate Professor of Sociology at the School of Economics and Business, University of Ljubljana. She received a PhD in Sociology from Harvard University and an MA in Political Science from Syracuse University. Her research addresses issues of stereotypes, identity, populism, history, education, collective memory, and youth values, and she specialises in qualitative and mixed methods. Recently, she has begun integrating artificial intelligence (AI) into her qualitative research and now teaches courses on its application in this field. In this interview, we discussed how AI is transforming qualitative research methods, her workflows, and her current research.

How is AI reshaping methods in qualitative research?

AI has definitely thrown quite the curveball at qualitative researchers. On the one hand, there are more quantitative approaches to qualitative data analysis, such as the “text-as-data” approach, which relies on computational text analysis, natural language processing, and both supervised and unsupervised machine learning. All of these draw on some of the principles underlying generative artificial intelligence (GAI) and have been in use for a long time. In these approaches, AI has undoubtedly added capabilities, but I see it more as an extension of what qualitative researchers have been doing for decades.
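
To make the “text-as-data” idea concrete, here is a minimal, hypothetical sketch of unsupervised topic modelling in Python with scikit-learn; the example documents and the choice of two topics are purely illustrative and not drawn from Dr. Pavasović Trošt's data.

```python
# Minimal sketch of the "text-as-data" approach: unsupervised topic modelling
# over a few illustrative sentences (not real interview data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "National identity is central to how students remember the war.",
    "Textbooks frame collective memory through selective historical narratives.",
    "Young people express identity through music, sport, and social media.",
    "Populist rhetoric draws on stereotypes about neighbouring groups.",
]

# Turn free text into a document-term matrix, dropping common English stopwords.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

# Fit a small LDA model; the number of topics is a researcher's choice, not a given.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words per topic so the researcher can interpret and label them.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```

Note that the model only surfaces word co-occurrence patterns; deciding what a topic means, and what to call it, remains the researcher's interpretive work.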

On the other hand, for what I would call “purely” qualitative analysis methods, including thematic, narrative, and phenomenological analysis, AI has been truly cataclysmic. In these types of analysis, the key feature is deep reading and full immersion in the data. Although different software was available to aid in coding and analysis, it was fundamentally and exclusively within the domain of humans. You could not simply “dump” interview transcripts into a programme and click “analyse”. You could code, auto-code, search for patterns, and draw network diagrams, but the programme would not and could not actually do the analysis, such as finding and naming themes, for you. AI now does this – you can actually say “do a thematic analysis of these interviews” and it produces the themes from hundreds of pages of interview transcripts in a few seconds. This obviously raises significant ethical and epistemological questions, which we can perhaps discuss later.

You specialise in using AI for qualitative data analysis. Could you please explain your workflow?

In qualitative research, AI can now be used throughout the workflow, much like in other types of research. You can use AI from the initial stages of brainstorming ideas, searching the literature, refining research questions, and identifying gaps. More specifically, AI has contributed to transcription (speech-to-text), translation, and cleaning or sorting large volumes of qualitative data, tasks that previously had to be done manually. In these steps, the use of AI is likely to seem straightforward, although AI can, for example, bypass the entire transcription step – you can upload a voice recording, and it can analyse it without first transcribing. However, this is not recommended, as transcription remains a key element of the qualitative research process.
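
As an illustration of the speech-to-text step mentioned above, a minimal sketch using the open-source openai-whisper package might look as follows; the file name is a placeholder, and any real interview audio would need to be handled under the project's confidentiality and consent rules.

```python
# Minimal sketch of automated interview transcription with the open-source
# openai-whisper package (pip install openai-whisper; requires ffmpeg).
# "interview_01.mp3" is a placeholder, not a real file.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("interview_01.mp3", language="en")

# Save the raw transcript; in practice the researcher still reviews and corrects it,
# since transcription remains part of familiarisation with the data.
with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```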

During data collection, AI can assist in piloting interview questions, generating relevant open-ended questions, tailoring them for different participants, and creating images and scenarios to use as stimuli in interviews or focus groups. Some AI chatbots can even conduct interviews, simulating a human interviewer and adapting questions in real time. I find this feature epistemologically problematic and still insufficient, as it is currently possible to recognise that it is a bot rather than a real person. AI can also be used to simulate responses and generate synthetic interview or ethnographic/observational data, which is useful for piloting, preparation, scenario design, and, of course, for teaching.

The next step is AI in analysis, which is my main focus, as the use of AI in qualitative analysis has the potential to fundamentally transform the field. As I mentioned earlier, you can now input large amounts of qualitative data, including unedited interview transcripts, videos, and images, and prompt AI to perform thematic, narrative, or discourse analysis. AI tools conduct the analysis, completely bypassing the steps of iterative deep reading, familiarisation, coding, and re-reading. You receive an output that resembles the qualitative results you would traditionally obtain through manual analysis, but the actual process – the “black box” of analysis – remains opaque; you do not know exactly how the conclusions were reached.

When we talk about AI in qualitative data analysis, which software are we actually referring to?

Regarding the use of AI in this final step of analysis, there are three types. First, there are the “regular” AI tools such as ChatGPT, Gemini, Perplexity, and Copilot. With these, you input your qualitative data, write very specific and tailored prompts, and receive the output (for example, themes if you are conducting thematic analysis). However, you would not be able to publish this type of analysis in any reputable journal, and it raises significant issues, particularly concerning the privacy and confidentiality of the interview responses you provide.
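
For illustration only, this “regular AI tool” route could be scripted roughly as below, using the OpenAI Python client; the model name, prompt wording, and excerpt are assumptions, and, as noted above, sending non-anonymised interview data to such a service raises exactly the confidentiality concerns described here.

```python
# Illustrative sketch of the "regular AI tool" route: sending anonymised excerpts
# to a general-purpose model with a tailored thematic-analysis prompt.
# Requires the openai package and an API key; model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpts = "P1: I only found out about my ancestry results last year...\n"  # anonymised, illustrative text

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are assisting with reflexive thematic analysis. "
                    "Identify candidate themes, each with a name, a short definition, "
                    "and supporting quotes. Do not invent quotes."},
        {"role": "user", "content": excerpts},
    ],
)

print(response.choices[0].message.content)  # candidate themes, to be checked by the researcher
```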

The second option is a “midway” approach – AI programmes specifically designed for qualitative analysis, such as Coloop, Ailyze, Reveal, Qinsights, or Tailwind. These programmes are essentially more sophisticated versions of ChatGPT, with well-developed prompts that adhere to all textbook rules for qualitative analysis. Some research has compared the output from these programmes with that produced by human coding, and they appear to be efficient and accurate. However, the results often struggle with nuance and context, and fail to flag problematic data that a human researcher would find obvious.

The third option is the use of new AI tools within traditional Computer-Assisted Qualitative Data Analysis Software (CAQDAS) such as NVivo, ATLAS.ti, and MAXQDA. In this context, AI tools can assist in generating themes and codes, but the researcher retains control over the entire process. For example, the software may suggest a “finding” (such as a theme) and ask you to confirm its accuracy, allowing you to decide which suggestions to accept and which to reject. In my view, this is currently the only ethical and legitimate way to use AI in qualitative analysis, as you maintain full control over the data and the process remains transparent.
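
The human-in-the-loop pattern described here can be illustrated with a generic sketch; this is not how NVivo, ATLAS.ti, or MAXQDA implement their AI assistants, only a simplified illustration of the principle that the software proposes and the researcher decides.

```python
# Generic sketch of the human-in-the-loop pattern: AI proposes, the researcher decides.
# The suggested codes below are made up for illustration.

suggested_codes = [
    ("identity talk", "Participant links ancestry results to national belonging."),
    ("family narrative", "Participant retells family migration stories."),
]

accepted, rejected = [], []
for code, description in suggested_codes:
    answer = input(f"Accept suggested code '{code}' ({description})? [y/n] ")
    (accepted if answer.strip().lower() == "y" else rejected).append(code)

print("Accepted codes:", accepted)
print("Rejected codes:", rejected)
```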

What makes the use of AI different in qualitative and quantitative research?

Ah, this goes to the heart of the question of what qualitative research is. If it is intended to be an interpretative, contextual, and emergent approach that seeks a deep understanding of the nuances of social phenomena, the use of AI raises serious concerns. AI can certainly improve efficiency, and in quantitative analyses of qualitative data, it enables us to analyse vast amounts of material that we previously could not have managed due to time constraints. However, in qualitative analyses, the entire point is the iterative process of familiarisation, reading, re-reading, coding, and so on – where findings and theory should emerge from the data itself. Skipping this step omits a critical part of the purpose of conducting qualitative research.

Dr. Tamara Pavasović Trošt (Photo credit: personal archive)

What are the main challenges of applying AI in qualitative analysis?

I mentioned some of these already, but I think the biggest challenge is the epistemological question concerning the very foundation of qualitative inquiry. Qualitative approaches such as thematic and narrative analysis are rooted in interpretive, context-sensitive, and reflexive understandings of human experience, where meaning is constructed through nuanced engagement with data, researcher positionality, and social context. AI, especially when used to automate coding or generate themes outright, tends to prioritise surface-level patterns, linguistic regularities, and statistical associations. Emphasising computational efficiency and pattern recognition can flatten the richness of qualitative data. This may violate the epistemological commitments of qualitative research, which values depth and the co-construction of meaning over objectivity and replicability.

The other challenges concern ethical and privacy issues. While traditional approaches were certainly marked by researcher bias, the use of AI now introduces AI model bias. I also wonder, and this remains to be seen, whether using AI for qualitative analysis can truly yield novel theoretical insights, given that it searches for patterns based on previous data. In addition, there are issues of data confidentiality, authorship, reproducibility, transparency, and the possibility of hallucinations.

You are on the editorial board of the International Journal of Qualitative Methods. Just yesterday, an article was published proposing AI as a co-researcher in qualitative research. Do you agree that AI is becoming a co-researcher that could also help with interpretations, or is it still just a tool for data analysis?

I still think AI is excellent for many tasks and can be immensely helpful, particularly for manual and time-consuming work that does not contribute to our analysis. I also believe it can assist with coding, and I find the AI tools in MAXQDA (the programme I use) useful for cross-checking and suggesting patterns I might not have noticed. I have also found Ailyze excellent for summarising trends in article abstracts over time – basically those kinds of analyses where you know exactly what you are looking for and need help with efficiency and computing power. However, the researcher’s full control and oversight remain absolutely necessary, and I see AI as simply another tool in this process – more like an opt-in assistant than a tool to which you would simply hand over your data.

How do you use AI in your own research?

I have recently switched from NVivo to MAXQDA, which I believe offers better AI tools. I still use CAQDAS in the same way as before, but now with the new “AI Assist” tool, which is helpful for code suggestions, code labels, and interacting with the data. It allows you to maintain full control over your data and is very transparent – for example, you still code the data yourself. It also includes safeguards for privacy and confidentiality, such as GDPR compliance, and does not store data. I am currently using it for a project on DNA genetic ancestry testing and ethnic identity, which relies on 80 in-depth interview transcripts that I am thematically analysing.

Have you obtained any interesting results?

I am currently preparing a paper for the upcoming World Conference on Qualitative Research, comparing my manual analysis of 30 years of history textbooks in Serbia and Croatia (published in a Nations & Nationalism article in 2018) with an analysis of the same textbooks using ChatGPT, AI Assist in MAXQDA, and Ailyze. I aim to determine whether these AI-based tools identify the same patterns and can detect context and nuance, which will enable me to comment on efficiency and accuracy versus theoretical novelty and innovation. The conference is in February – I will let you know what I find.

Do you have any book recommendations for readers of this interview?

Artificial Humanities by Nina Beguš will be released in Europe in a few weeks. I think anyone interested in how AI is affecting the humanities will appreciate this book. For researchers wondering where to start, I have found the webinars and online courses on using AI in qualitative analysis by Prof. Christina Silver very helpful, as they are both practical and reflective.