Source: “Classification Cupboard” by Anton Grabolle via Better Images of AI

So you’re a qualitative researcher who finally wants to take the plunge into digital methods. Maybe your current project has yielded a treasure trove of textual material that you’d like to make sense of with the help of text analysis techniques. Maybe you got curious about newer developments in network science because you want to use a little social network analysis in an upcoming ethnographic project. Maybe you have decided to finally yield to that nagging voice in your head that keeps saying that you should probably look at what people do online, so now you think you might have to pick up some form of scraping or analytics.

If any of that applies to you, congratulations, you’re in for a fun ride. I’ve got some unsolicited words of wisdom that may help along the way.[1]

Mind the gap

As you venture into digital methods, you will likely have to engage with scholarship that is very different from the kind you are used to. When we pick up papers by physicists or computer scientists, we don’t just encounter new forms of data and new tools, but a different epistemic culture. It helps to have an understanding of cultural differences that may trip us up as qualitative social scientists when branching out into this kind of work. These differences are mostly tacit and seldom acknowledged, which is why I often highlight them when asked to speak about doing mixed-methods work.

First, what do I mean by epistemic culture? This term, first introduced by Karin Knorr-Cetina, is intended to trouble the idea that science generates knowledge according to a single method, as suggested by people who talk about “the scientific method.” Instead, Knorr-Cetina finds that there is not a single way of creating knowledge, but a diversity of scientific activities that differ between fields. These scientific activities include not just methods, but different forms of reasoning and of making claims about the world.

When I say that branching out confronts us with a different epistemic culture, I mean especially these kinds of differences. Let me highlight three in particular. First, in this culture there is a strong preference for explanation over understanding. This may not come as a surprise, since many social scientists are already familiar with this difference from interactions with colleagues who use quantitative methods such as surveys and regression analysis, but the degree to which a lot of computational work simply does not care about understanding might still be surprising. After all, most quantitative social scientists develop a quite detailed understanding of what they study, even if their primary concern is causal inference. Physicists and computer scientists developing (for instance) new machine-learning algorithms often have almost none of what they call “domain expertise” and mostly care about empirical datasets only insofar as they help them to validate their claims. A small number of example datasets get reused over and over for such purposes (if interested in such data, look up “Zachary’s karate club” and “Southern Women data”).

Second, the culture you will encounter values null-hypothesis significance testing (NHST) over an inductive or abductive process of inquiry. Again, this may be a familiar difference for those who have had dealings with quantitative social science, but again the contrast will be starker. Most quantitative social scientists are familiar with other epistemologies, but researchers in other fields are sometimes baffled by how many “degrees of freedom” there are in qualitative inquiry, and the idea that things can be done “manually,” as in the grounded theory process of coding and inducing categories, rather than algorithmically, and still be considered scientific.

The third difference has to do with the importance of models and simulations over empirical research. Where qualitative researchers care about the nitty-gritty, computational researchers value elegance. For that reason, they use a variety of devices that help them get a neat handle on a world full of ambiguities, contingencies, and nuances. These devices are basically fictional constructs, most often made using the tools of mathematics. There’s a famous saying that “all models are false, but some are useful.” Models can be more than useful; they can even be “unreasonably effective” at making sense of the empirical world. That, anyway, is the hope driving a lot of scholarship you are likely to encounter.

Why do these differences matter? They affect almost the entire approach to research: priorities, research questions, and the research design. Because of the emphasis on coming up with novel explanations or new ways of modeling a domain, the priority in research will often be to develop a new technique that will do better on an established benchmark or deliver results with greater precision. Because of the focus on testing hypotheses and causal inference, research questions will usually be “why” questions. Descriptive questions (what?), process questions (how?) and retrospective questions (did?) are, for the most part, ruled out, as are more open-ended questions. Finally, the research design will differ pretty fundamentally. It will be almost entirely front-loaded, leaving little space for iteration.

Know your strengths

It is easy to feel inadequate as someone who is “only” a qualitative researcher. After all, you “just” hang out with and talk to people, and even when you use fancy language to describe this (participant observation, interviews), it isn’t all that different from what people do in their daily lives. In fact, that’s the whole point of qualitative methods.

Even so, qualitative researchers have an important toolkit, and I often find that computational researchers appreciate its importance more than some qualitative researchers do. There are at least three strengths of this toolkit that we should highlight.

Qualitative researchers don’t just test pre-existing theory; they set out to theorize from new discoveries. They find inspiration and support for this endeavor in the philosophies of science of Charles Sanders Peirce, Gaston Bachelard, and Roy Bhaskar, among others, which each in their own way stress that knowledge stems from a creative and social process that cannot be totally formalized.

Further, qualitative researchers introduce a reflexive moment into their research. Recognizing that knowledge is created in the interaction between researchers and the social worlds they study, they understand that their methods and concepts are not neutral.

Finally, qualitative researchers pay attention to context rather than discrete variables. This doesn’t just help to attain a higher level of external validity, but is also more compatible with complex understandings of the social world.

Build conceptual bridges

Even though I emphasize differences, the point of what I wrote above is not to reinforce the gap between “us” and “them.” In fact, we should take such encounters as an occasion to have our assumptions and habits challenged. In my experience, this can help us recognize that, though unsettling, there is much that is exciting and worthwhile about entering the spaces between fields.

One exciting opportunity is to take on the role of a cultural ambassador. In this role, we can encourage asking more open-ended research questions, provide models for an iterative approach to inquiry, and foster ongoing reflection on the researchers’ role and ethical implications. This isn’t always easy, because it requires making the tacit explicit. Hopefully this post provides some help along the way.

Another way to build bridges is to engage in necessary conceptual work that doesn’t neatly fit in one or the other epistemic culture. I can think of a few examples:

  1. Data visualization. This is perhaps the most obvious interface between the computational and the qualitative, and I know digital humanists have written a lot about how the process of visualization can help create new knowledge. Even so, I think there is still plenty of room for further discussion, for instance on what it means to move beyond text when representing and analyzing ethnographic data.
  2. Machine learning. When ML algorithms are used for the kinds of things they are extremely good at, such as clustering, we quickly run into the boundary between the computational and the qualitative: What does it mean that these entities are clustered together, but not those? This may not just be an epistemological issue, but a social justice issue as well.
  3. Natural language processing. Similarly, when using NLP techniques like topic modeling, we wind up with “bags of words” that indicate topics, but the step from words to topics is not automatic: it involves interpretation.

Note that this list is neither exhaustive, nor do I expect it to have a long shelf life.

Taking the leap

Coming up with new conceptual and methodological frameworks that make this kind of interstitial work possible and meaningful is incredibly exciting. It holds the promise of putting you in the middle of some of the most vibrant scholarly debates taking place these days. It helps to have an understanding of differences, because in my experience they make necessary conversations quite laborious at first. It also helps to understand that qualitative researchers bring something important to the table, and feelings of inadequacy should not result in hiding one’s light under a bushel.

Most importantly, be willing to be challenged, and have fun with it.


  1. I initially prepared the bulk of what’s in this blog post for a September 2021 lecture in the Qualitative Digital Methods course for doctoral candidates convened by David Herbert at the University of Bergen.