3.4.3 Synthesis and research gaps

In this section, I shall synthesize the preceding literature review by relating the findings of the reviewed studies to the research questions of the current thesis and by pointing out emerging research gaps.

None of the reviewed studies specifically investigated the extent to which translators choose to accept, reject and revise matches in MT-assisted TM translation (RQ1) or the characteristics of translators’ interaction with the tool in relation to these choices (RQ1a).

However, some of the studies touched on aspects relevant to these research questions. For example, in relation to her finding that significantly more changes were made in 85-94% TM matches than in MT matches, Guerberof Arenas (2012) argued that this reflected the fact that a number of MT segments were of such quality that they could be accepted without changes. Also, O’Brien (2007), Guerberof Arenas (2009) and Guerberof Arenas (2012) all showed that translators worked faster when editing TM and MT matches than when translating from scratch, indicating that it was more productive to revise matches than to reject them. In terms of translators’ interactions with the tool when accepting, rejecting or revising a match, Désilets et al. (2008; 2009) found that translators made use of many different resources and that, when encountering translation problems, they typically made use of corpus-based resources such as TMs. They also found that translators were very competent at scanning a list of potential solutions, and that they were critical when deciding on translation solutions. Karamanis et al. (2010; 2011) found that translators often accepted matches by using keyboard shortcuts, and that the concordance search was typically the first resource used when encountering a translation problem. Similar to Désilets et al., Karamanis et al. found that translation problems were thoroughly researched, especially when their solution involved online searches. Along the same lines as Désilets et al. and Karamanis et al., LeBlanc’s (2013; 2017) studies suggested that the TM is the primary resource for translators, with translators considering it to be a “one-stop shop” and with the TM becoming the sole tool used for decision-making. This perception of the TM was also reflected in Ehrensberger-Dow (2014) and Ehrensberger-Dow and Massey (2014), who found that even simple decisions were checked against, for example, the contents of TMs. Further, they found that translators keep many windows and tabs open during translation.

Interestingly, Ehrensberger-Dow and Heeb (2016) found that the translator studied tended to ignore the suggestions provided by AutoSuggest, even in cases where the translator ended up with a translation equal to the suggestion. Finally, Olohan’s (2011) study showed that, in interacting with the CAT tool, translators experienced resistances which they had to accommodate.

All of the reviewed experimental studies investigated the time spent by translators on translating with an MT-assisted TM tool (RQ2). Some of the studies measured editing speed on a segment level and showed that MT matches were edited faster than TM matches with match values of 75% or more (O’Brien 2007; Guerberof Arenas 2009; Tatsumi 2010; Guerberof Arenas 2012). Other studies measured editing speed on a text level and showed that texts were translated faster when MT was added to a TM environment (Läubli et al. 2013; Skadiņš et al. 2011; Federico et al. 2012). Teixeira (2014b) showed that translators worked faster when they were provided with metadata in an MT-assisted TM environment than when they were not.

Most of the reviewed studies did not investigate whether translators process matches in a linear or non-linear manner in MT-assisted TM translation (RQ3), whether they check their translations, or the nature of the changes implemented in this phase (RQ4).

In fact, some of the studies did not give the translators the possibility of returning to previous segments (Guerberof Arenas 2009; Guerberof Arenas 2012) or instructed the translators to avoid doing so (Tatsumi 2010; Federico et al. 2012). However, Tatsumi (2010) investigated instances where translators “revisited” segments and found that making revisits did not necessarily make a translator slower, and that a revisit generally took half the time of the first visit to a segment. Without specifically analyzing self-revision behaviour, a number of studies suggested that translators performed self-revision if they were given the possibility (Teixeira 2011; Federico et al. 2012; Läubli et al. 2013; Teixeira 2014b). It is worth noting that excluding self-revision from the analysis might influence results on, for example, editing speed: if translators spend a considerable amount of time on self-revision and this time is not taken into account, their productivity will appear higher than it actually is. For example, Moran, Lewis and Saam (2014) found that not letting translators check their own work leads to an overstatement of the utility of MT, at least compared to translation from scratch. Not taking self-revision into account might also affect the amount of editing observed and the quality of the target segments.

However, it seems that the choice of whether or not to include self-revision is often a trade-off between obtaining reliable data on time spent on each segment during editing and prioritizing ecological validity.

Some of the reviewed experimental studies focused on the amount of editing involved in MT-assisted TM translation (RQ5), with differing results. Tatsumi (2010), for example, found that MT matches required more editing than 75-79% TM matches, whereas Guerberof Arenas (2012) found that significantly more changes were made in 85-94% TM matches than in MT matches. Teixeira (2014b) found that the translators made more changes when they were provided with metadata than when they were not.

As mentioned above, the reviewed studies did not investigate review as a natural part of an MT-assisted TM translation process (RQ6). However, the results on quality are linked to this question, since errors found in quality evaluations of translations produced by means of MT-assisted TM are errors which might have been identified in a review phase. For example, Skadiņš et al. (2011) found that when MT is added to a TM environment, more errors are made, whereas Läubli et al. (2013) found that the quality of translations produced by means of MT-assisted TM was consistent with or higher than that of translations produced by means of TM alone. Guerberof Arenas (2009) found that errors were evident in all translators’ texts, in both TM and MT matches as well as in segments translated from scratch, and Guerberof Arenas (2012) found that a similar number of errors were made in MT and TM matches. Bearing in mind that some of the errors in Guerberof Arenas’ studies might have been corrected if the translators had been allowed to check their translations, these findings suggest that translations produced by means of MT-assisted TM should be reviewed.

Some of the reviewed studies provided findings on translators’ attitudes to TCI (RQ7). For example, LeBlanc (2013; 2017) found that translators considered the sentence-by-sentence approach to be a disadvantage of TM systems, since it required them to work with segments instead of whole texts. LeBlanc also found that so-called “enforced recycling” limited translators’ control over the target text and their decision-making authority, and that TM implementation led to a loss of professional autonomy and a decline in professional satisfaction on the part of the translators. Ehrensberger-Dow (2014) and Ehrensberger-Dow and Massey (2014) also suggested that translation tools may limit translators’ autonomy since even simple decisions are checked against e.g. TMs, and their studies also showed that translators complained about limited space on their computer screens. Moreover, Olohan’s (2011) study showed that the technology may pose resistances such as “forgetting” where something is stored, resistances which the translator has to accommodate in order for the interaction to progress. Finally, Karamanis et al. (2010; 2011) found that translators may have a black-box perception of MT technology, not knowing why it acts as it does. A translator also indicated that the use of MT is risky, since MT matches may appear to be acceptable translations, although upon closer reading they are not.

In the above synthesis, a number of research gaps have been identified. These relate specifically to a lack of research delving systematically into how translators actually interact with an MT-assisted TM tool, for example in terms of their choices to accept, reject or revise the proposed matches and in terms of their interaction with the tool in relation to these choices. Also, the literature review demonstrated a lack of research into self-revision and review in an MT-assisted TM context. Further, the experimental studies of MT-assisted TM have generally not taken the context into account, and several of them have involved translators working in ways that do not correspond to typical work practices, either by asking them to work with unfamiliar tools or by imposing unfamiliar requirements or limitations on their ways of working. Finally, none of the reviewed studies dealt with Danish or even Scandinavian languages in the context of MT-assisted TM. Investigating the use of MT-assisted TM in the context of a smaller language like Danish is highly relevant when considering that SMT is data-driven and thus considered to perform better on language pairs for which large volumes of data are available than on language pairs involving smaller languages. MT-assisted TM translation into Danish is also highly relevant in the context of the EU and the EU’s multilingualism policy, with Danish being one of the 24 official EU languages. Based on its theoretical standpoint of viewing translation by means of MT-assisted TM as TCI and as a context-dependent activity, the current thesis aims to contribute to filling these gaps. It does so by means of an embedded mixed methods research design consisting of a workplace study in which a contextual study and, in turn, an experimental study are embedded, as will be explained in the next chapter.

Chapter 4. Methodology

In Chapter 3, TS was described as an interdisciplinary field that borrows theories and methods from other disciplines. Particularly with regard to TPR, we saw that a wide range of methods have been applied to study translation processes, and that methods are typically triangulated. Also, we saw that the field of HCI, like TPR, increasingly recognises that interaction with artefacts should be studied in the social context in which it takes place. The methodology of the current thesis, which is described in this chapter, is informed by these observations.

As suggested by Creswell (2014, p.5), I shall address the interconnection between the philosophical worldview guiding the study, the research design related to this worldview and the specific methods applied to address the research questions. I shall first introduce pragmatism as the worldview guiding the study. The pragmatic worldview is aligned with the thesis’ focus on practice and on building the most suitable research design with which to answer the research questions. Then, I shall introduce mixed methods before describing the mixed methods design of this thesis as well as the specific methods employed to explore professional translators’ interaction with an MT-assisted TM system and their attitudes to this interaction.

4.1 Pragmatism

Pragmatism emerged as a response to the “paradigm wars” between quantitative and qualitative research (Teddlie & Tashakkori 2009, p.14; Feilzer 2010, p.6). It rejected the “incommensurability of paradigms” or “incompatibility thesis” advocated by quantitative and qualitative researchers who adhered to positivist/postpositivist and constructivist/interpretivist paradigms respectively and who proposed that quantitative and qualitative methods could not and should not be combined (Morgan 2007, p.58).

Up to the late 1970s, quantitative research and the associated positivist paradigm dominated the social sciences (Onwuegbuzie & Leech 2005, pp.269–270; Morgan 2007, p.56; Teddlie & Tashakkori 2009, pp.5–6). Simply put, this type of research focuses on the gathering, analysis and interpretation of numerical information in order to describe and explore a phenomenon of interest or look for significant differences between groups or variables (Teddlie & Tashakkori 2009, p.5 ff.). In the latter part of the 20th century, positivism was challenged and criticized by qualitatively oriented researchers who subscribed to the worldview known as constructivism (Teddlie & Tashakkori 2009, p.6), a period referred to by Morgan as the rise of the “metaphysical paradigm” (Morgan 2007).

Qualitative research is concerned with the gathering, analysis and interpretation of narrative information (Teddlie & Tashakkori 2009, p.6). According to Morgan (2007), the metaphysical paradigm relied on the notion of the incommensurability of paradigms, i.e. the idea that differing assumptions about the nature of reality and truth at the ontological level meant that paradigms were incompatible, and furthermore that they were also incompatible at the epistemological, methodological and method levels. From this viewpoint, paradigms thus determined methods in a top-down and unilateral manner, in the sense that a specific ontological standpoint necessarily led to certain epistemological and methodological assumptions and choice of methods (Howe 1988, p.10; Morgan 2007, p.62), as illustrated by Jensen (2013, p.58) in Figure 4.

Figure 4. Top-down approach in the metaphysical paradigm (based on Jensen 2013, p.58)

Pragmatists rejected this top-down approach and the resulting polarization of qualitative and quantitative research, contending that methodological pluralism should be embraced (Onwuegbuzie & Leech 2005, p.272). For pragmatists, “research objectives drive studies, not the paradigm or method” (Onwuegbuzie & Leech 2005, p.278), and pragmatism thus takes a bottom-up approach, letting the research problems determine which methods are appropriate. Accordingly, Tashakkori and Teddlie define pragmatism as:

“a deconstructive paradigm that debunks concepts such as ‘truth’ and ‘reality’ and focuses instead on ‘what works’ as the truth regarding the research questions under investigation. Pragmatism rejects the either/or choices associated with the paradigm wars, advocates for the use of mixed methods in research, and acknowledges that the values of the researcher play a large role in interpretation of the results” (Tashakkori & Teddlie 2003, p.713).

Feilzer even states that “[p]ragmatists do not ‘care’ which methods they use as long as the methods chosen have the potential of answering what it is one wants to know” (2010, p.14). For this reason, Jensen (2013, p.59) places the research interest in the central position in pragmatism in Figure 5.

Figure 5. The central role of the research interest in pragmatism (based on Jensen 2013, p.59)

The central position of the research interest is also reflected in Howe’s (1988) “Compatibility Thesis”, which asserts that combining quantitative and qualitative methods is a good thing and that there should be no forced choice between paradigms or methods. As stated by Creswell, “[i]nstead of focusing on methods, researchers emphasize the research problem and use all approaches available to understand the problem” (2014, p.10).