5 Discussion

5.1 Rigor

5.1.2 Reliability

The category of Reliability/Dependability/Auditability refers to whether the process of the study is consistent and reasonably stable over time, across researchers, and across methods (Miles and Huberman 1994). Silverman (2006) argues that reliability in the context of qualitative research is often achieved through transparency in the research process, and Miles and Huberman (1994) suggest that relevant questions to ask in this part of the evaluation include: Is the research design congruent with the research questions? Were data collected across the full range of settings? As part of the evaluation of this part of the research, we may thus return to the research questions and the overall research design.

The choice of framing the research questions within the design, use, and implementation of role-oriented ESs has certain ramifications for the reliability of the findings. While including three different domains of IS research in the study has contributed to a holistic understanding of the phenomenon of role-oriented ESs through breadth, it comes at the cost of depth within each of the domains. Focusing on a single domain could thus have increased the depth of understanding of that particular aspect of role-oriented ESs and made the findings more reliable and the theorizing more substantial within that domain. On the other hand, the breadth of addressing design, implementation, and use is the very premise on which the opportunity for theorizing across actors in the ES ecosystem rests.

Turning to the research design, the applicability of GTM and Case Study research for answering the research questions was thoroughly argued as part of the research design (see sections 2.3.1 and 2.3.2), and the usefulness of these methodologies still stands in hindsight. However, combining GTM and Case Study research in one research project has arguably had some implications for the research process as well as the findings. The GTM guideline prescribing a theoretical sampling approach, in which data sources are selected as the analysis and the grounded theory emerge (see section 2.3.2.1), entailed that the partner case companies could not be selected a priori to initiating the collection of data. Instead, partner companies that would on the one hand help saturate the emerging categories of the grounded theory and on the other fit the case study research design were selected on an ongoing basis as the research progressed (see section 2.5). While the application of mixed methods has been encouraged both in ES research (Schlichter and Kræmmergaard 2010) and in research in general (Greene, Benjamin and Goodyear 2001), and mixed methods to some extent provide method triangulation (Creswell 2007), balancing the data collection to “satisfy” both GTM and Case Study may have weakened the execution of both methodologies. Applying GTM in isolation might thus have provided possibilities for pursuing other emerging core categories, while applying Case Study research in isolation might have allowed “deeper” studies of each of the cases.

Second, while being somewhat familiar with elements of the Straussian approach to GTM from previous research projects undertaken during my training as a master’s student, I was admittedly not fully aware of the implications of undertaking a “full” GTM study.

Aware of my novice level of experience with the methodology, I was committed to rigorously following the guidelines proposed by GTM scholars (e.g. Strauss and Corbin 1990; Holton 2007; Urquhart et al. 2010). The mere process of transcribing nearly 300 pages of interview recordings was rather strenuous, and the subsequent process of coding the transcripts line by line was enough to make me question the choice of methodology from time to time. I have since learned that this is termed “flooding” and is apparently rather common among novice GTM researchers (Star 2007, p. 89). Thus, at times, I feared ending up in the category of “researchers [who] simply lack knowledge and competence in conceptualization and, as such, they embrace with enthusiasm but without understanding” (Holton 2007, p. 285). On the other hand, the detailed and rigorous analysis of the interview data gave me confidence that the emerging grounded theory was, if not appealing and elegant, then at least solidly grounded. This grounding was all the more important as the analysis for the GTM studies was conducted solely by me.

The use of the ATLAS.ti software may have been both a curse and a blessing in this context. The software offered significant help in tracking, comparing, and organizing the large amounts of transcribed text, without which I would most likely have succumbed to the process of analyzing the data. However, any inconsistency in relationships between data, concepts, and categories was immediately visible and “begged” for a return to an ordered and structured coherence, which seemed to “stifle” the abduction from the descriptive level of the data and might have limited theoretical abstraction. On the other hand, it gave a certain sense of confidence to see concepts and categories nicely ordered and related. Hence, the “ordered closeness” with the data that resulted from this detailed analysis, and seeing the theory slowly emerge, restored my confidence in doing GTM.

With regard to the setting of the data collection, the choice of collecting data across the different actors in the Microsoft ES ecosystem provided the possibility of triangulating part of the data by comparing data on similar topics between the actors. Conflicting information on topics that were presented as “factual” by the respondents could thus be cross-checked with respondents from other levels of actors in the ecosystem, which helped correct factual errors and identify areas with conflicts of interest. The risk of the study being biased by one group of actors in the ecosystem could thus be partly mitigated. This has strengthened the reliability of the findings.

Another approach to strengthening the reliability of the findings was the collaboration with co-authors from the ES vendors (paper IV). Having co-authors from each of the two vendors gave confidence that the approach of one vendor was not favored over the other and that factual information was double-checked.

Finally, an important limitation for the reliability of the findings is that the study on the use and implementation of role-tailored ESs was primarily conducted through interviews with respondents. This limited the possibilities for triangulating the interview data. Although some observations of the use of the systems were made during visits to the customer sites, the observations were too short and too staged to be considered reliable sources for triangulating the statements in the interviews. This limitation could have been addressed by combining the interview data with data from lab experiments on usability (Rogers et al. 2007). Still, such experiments may not capture the actual work of users in a real-world context (Suchman 1983), and data from use in situ would thus be preferable in future studies on the use of role-oriented ESs. Likewise, studying implementations of role-oriented ESs as they unfolded would have provided more reliable findings for theorizing on implementation approaches.