

3. Methodology and Research Design

3.8. Validity, data saturation and quality in qualitative research

As point 7 in the AAA principles states, ‘There is an ethical dimension to all professional relationships. Whether working in academic or applied settings, anthropologists have a responsibility to maintain respectful relationships with others’. I have strived to show my interlocutors such respect by refraining from cherry-picking their more colourful statements when I feel these would not give an accurate picture.

Such choices should not be interpreted as manipulation or self-censorship. Rather, when I had the opportunity to illustrate a given process or tendency in the material, I have chosen illustrations that do not unreasonably expose certain departments, nor do frivolous harm to the people who inhabit them. However, it is certainly essential to reflect upon when one is adhering to such ethical standards and when one is perhaps surrendering to feelings of debt towards the people one studies without sufficient professional justification.

With the interpretivist epistemology employed in this project, I make no pretence of finding the absolute ‘truth’. My goal is rather to approach different truths and to understand the perceptions, motivations and realities of my interlocutors. Hence, the criterion I would argue is most relevant for assessing the quality of this research is transparency about the research process, so that the reader knows how the empirical material was generated and how my understandings came into being. Similarly, Welch and Piekkari (2017) point out that validity is a relevant quality criterion in qualitative research, but they also argue that the approach to assessing validity should be ‘pluralist’ and ‘context-dependent’, meaning that an ethnographer striving for validity should be transparent about methodological choices, assess the appropriateness of these choices for a particular research setting and evaluate analytical interpretations against the context in which the qualitative material was generated (Welch and Piekkari 2017:721–22). Much in the same vein, Symon et al. (2018:146) argue that homogenous and standardized evaluation criteria are inappropriate for assessing the quality of qualitative research; they thus call for more context-sensitive criteria.

Reliability, on the other hand, understood as the assurance that results can be consistently reproduced by another scholar, is an unattainable and irrelevant concept within ethnographic research, as the ethnographer’s role in co-producing ethnographic material makes that material inseparable from the researcher (cf. Pratt 2009:859).

More appropriately, Piekkari and Tietze (2016) mention reflexivity as a relevant quality criterion. Quoting Hardy et al. (2001:554), they write that ‘reflexive knowledge is situated and includes a recognition of the multiple translation strategies that bring it into being’ (Piekkari and Tietze 2016:230).

Following Piekkari and Tietze and others before them (cf. Alvesson and Sköldberg 2000), I likewise argue that reflexivity is a highly relevant quality criterion for organizational ethnography. Hence, I have strived to demonstrate such reflexivity within this chapter and throughout the dissertation.

3.8.2. ‘Data saturation’ and how much ethnographic material is ‘enough’

A fairly established approach to determining when to conclude the generation of empirical material in qualitative research is ‘data saturation’, the point in the process at which a repetition of themes emerges in observations and new interviews (Burrell 2017:58; see also Fusch and Ness 2015). Although the concept of data saturation is not well defined, and disagreements exist over when and how it is reached (Hennink, Kaiser, and Marconi 2017), observing a repetition of themes and deeming it significant enough to represent a tendency arguably requires a richness and density in the empirical material that might be hard to achieve with only very few interviews and a few participatory observations. However, a higher number of interviews or a greater amount of time spent in the field will not automatically lead to richer empirical material, as the richness of the material also depends on the depth of the interviews and observations and on the adequacy of the selected informants for the purpose of the study (Morse 2015:587). Thus, the question of how deep and how much qualitative material is ‘enough’ still seems relevant (Hennink et al. 2017:591). The insights presented in this dissertation are based on the ethnographic material described in this chapter, which I have found sufficient to conduct my analysis. By offering the details in this chapter about where, with whom and how much qualitative material has been generated, I have sought to adhere to the quality criterion of transparency and to enable the reader to assess the quality of this research.

3.8.3. A note on rigour in coding

Within organization and management studies, qualitative researchers often borrow measures of analytical rigour that belong to more realist ontologies and apply them to interpretive research (see e.g. Seale and Silverman 1997). For example, the practice of coding, as well as the codes and the codebook itself, is often treated as evidence of the truthfulness of the analysis rather than as a tool for conducting it. One example is how the practice of inter-rater reliability is assumed to determine the correctness of an analysis; i.e. if two or more researchers code the material in largely similar ways, then the analysis is assumed to be more valid. However, the fact that two people understand a bundle of qualitative material in the same way says more about the degree of coordination between the two researchers than about the truthfulness of the coding or of the ‘findings’. This is because the ethnographic material itself has (most often) been generated by the researchers who are doing the coding. As such, their understandings and engagements within the field have shaped the kind of material that was generated in the first place. Field notes, for example, are not a 1:1 representation of what the researcher has experienced, as it is simply impossible to write down every single aspect of any situation. Rather, field notes represent nothing more than what a researcher has noticed (most often guided by a research question she herself has created), what she has written down, how notes were taken, how interviews were conducted, how interviewees were selected, and what she decided was worth recording.
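To illustrate what coding the material ‘in largely similar ways’ is usually taken to mean, inter-rater agreement is commonly quantified with a chance-corrected statistic such as Cohen’s kappa. The sketch below is purely illustrative, using hypothetical coders, interview segments and code labels; it is not part of this project’s method, which, as argued above, does not treat such agreement as evidence of truthfulness.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen 1960)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of segments given the same code.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() & freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders labelling the same six interview segments:
a = ["theme1", "theme1", "theme2", "theme2", "theme1", "theme2"]
b = ["theme1", "theme2", "theme2", "theme2", "theme1", "theme1"]
print(round(cohens_kappa(a, b), 3))  # → 0.333
```

A kappa of 1 would indicate perfect agreement beyond chance and 0 no more agreement than chance; here the two hypothetical coders agree only modestly once chance agreement is discounted. Even a kappa of 1, however, would only show that the two coders were coordinated, not that their codes capture what was ‘really going on’.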

Similarly, the coding process is not a process of simply ordering and ‘teasing out’ what is ‘really going on’; coding is an interpretive act that involves as much co-creation as the fieldwork itself. The degree to which two researchers attribute similar interpreted meanings to interviews and observations does not reveal any underlying truth within the material (cf. Saldaña 2016:4–5). Rather, as pointed out by Madden (2010:141), coding is a dual exercise involving both ethnographic facts and the ethnographer’s choice.

Coding qualitative material is thus a matter of indexing concrete events in the field as well as organizing the interpretive aspects of the material (see also Kouritzin 2002). The codebook should not be considered proof or a source of any truth-telling, and the coding process should not be viewed as a test that verifies the reliability of the analyses.

In the following, I account for how the qualitative material for this project was coded and interpreted, in order to make my choices and understandings transparent to the reader. Readers can then make their own validity judgments as to whether they find my analyses convincing.