
2. Methods & Data

In this chapter I present and review the methods used in data collection and analysis for the thesis. At the end of the chapter I also provide an overview of the empirical work that lays the foundation for the following six chapters.

[Figure 2.1: Modeled Regression. Diagram linking the experimental conditions (Condition 1, Condition 2) and gender (men, women) to Outcomes 1–4, with regression coefficients such as B = 1.37, p = 0.03; B = -0.78, p = 0.09; B = 4.31, p = 0.02; and B = 3.41, p = 0.01.]

designated as the referent, that is, the level against which all other levels in the same factor are compared. The referent is denoted by the lines between variables. For example, for the experimental condition, the referent is ‘Condition 2’, and for gender, it is ‘women’. Therefore, for outcome 2 the regression coefficient (B) for condition 1 is 4.31 in comparison to condition 2. Likewise, in outcome 2 the regression coefficient for men is 0.78 in comparison to women.
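To illustrate how such referent (or ‘treatment’) coding works in practice, the sketch below fits a regression with ‘Condition 2’ and ‘women’ declared as referent levels, so that each coefficient B expresses the difference relative to the referent. This is a minimal illustration only, assuming Python with pandas and statsmodels; the simulated data and variable names are not from the thesis.

```python
# A minimal, illustrative sketch (not from the thesis) of referent coding in a
# regression, assuming Python with pandas and statsmodels. The data are
# simulated and the variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "condition": rng.choice(["Condition 1", "Condition 2"], size=n),
    "gender": rng.choice(["men", "women"], size=n),
})
# Simulated outcome; in the thesis the outcomes come from the experiments.
df["outcome2"] = (
    4.31 * (df["condition"] == "Condition 1")
    + 0.78 * (df["gender"] == "men")
    + rng.normal(size=n)
)

# 'Condition 2' and 'women' are set as referents, so each estimated
# coefficient B is the difference relative to the referent level
# (e.g. Condition 1 compared against Condition 2).
model = smf.ols(
    "outcome2 ~ C(condition, Treatment(reference='Condition 2'))"
    " + C(gender, Treatment(reference='women'))",
    data=df,
).fit()
print(model.summary())
```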

2.1.2 Conversation Analysis

Conversation analysis (CA) is the study of social order in interactions between people.

In CA, interaction is seen as jointly organized activity that is accomplished rather than produced. CA excels at uncovering the structure of interaction by looking at the sequential unfolding of events and recurring patterns. One aspect that separates CA from other research methods is that in CA, analysts attempt to understand interactions as participants themselves understand them. In other words, interpretations of events are analyzed in the light of the understandings that participants themselves make observable through their talk or conduct. As a research method, CA enables analysts to understand how people accomplish interaction as it unfolds. This is a quality that is equally relevant in interactions with robots. Generally, CA is used in HRI in two different ways: as a design resource, or as an analytical tool.

CA as a Design Resource

In one way, researchers review CA literature to discover how people engage in social action, and from this extract behaviors that can be implemented in robot designs. This is also very similar to the way psychology has informed HRI and other HCI-related fields. For example, in Pitsch et al. (2009), the authors implement behaviors on their robot based

on two sociological concepts: ‘focused encounters’ (Goffman, 1961) and ‘the first five seconds’ (Schegloff, 1967). In an attempt to manipulate people’s gaze, Kuzuoka et al. (2008) implement ‘restart’ and ‘pause’ behaviors on their robot, with a basis in insights reported by Goodwin (1980). The organization of turn-taking is a central element in CA methodology, and a number of studies have implemented features that relate to turn-taking in one way or another. For example, Yamazaki et al. (2008) implement a nodding feature timed to transition relevance places (TRPs), which are places in the interaction where a speaker change can occur (Sacks et al., 1974). Fischer, Lohan, Saunders, et al. (2013) implement certain contingent features into robots with reference to how contingency contributes to joint action (Schegloff, 1996). Aarestrup, Jensen, and Fischer (2015) draw inspiration from Pillet-Shore (2012) as they test how people respond to lexically and prosodically different robot greetings. In other studies, Oto, Feng, and Imai (2017) investigate how people deal with silence, with a special reference to pauses, gaps, and lapses, in interaction with a robot, and Ohshima, Fujimori, Tokunaga, Kaneko, and Mukawa (2017) have developed a conversational robot that designates next speakers in interaction. The authors modeled the turn-taking behavior of the robot after the turn-taking system documented by Sacks et al. (1974). In an interaction scenario with a virtual agent, Muhl, Nagai, and Sagerer (2007) look at the role of trouble sources in interaction and investigate how people deal with them in interactions with their virtual robot. Other work uses the notion of ‘recipient design’ (Sacks et al., 1974) when designing robot behaviors (Fischer, 2016b; Fischer, Lohan, & Foth, 2012), or uses CA to model turn-taking behavior in a robot system (Fukuda et al., 2016; Linssen et al., 2017; Okuno, Kanda, Imai, Ishiguro, & Hagita, 2009; Rossi, Ferland, & Tapus, 2017).
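As an illustration of how such turn-taking insights can be operationalized in a robot system, the sketch below shows one possible rule of the kind described above: a silence longer than a fixed threshold is treated as a candidate transition relevance place and triggers a nod. This is a hypothetical sketch, not the implementation of any of the cited systems; the threshold and function names are assumptions.

```python
# A hypothetical sketch (not from any of the cited systems) of a simple
# TRP-based rule: treat a silence longer than some threshold as a candidate
# transition relevance place and trigger a nod there. The threshold and the
# callback names are assumptions for illustration.
import time

PAUSE_THRESHOLD_S = 0.7  # assumed silence length taken to signal a candidate TRP

def monitor_turn(robot_nod, is_user_speaking, poll_interval_s=0.05):
    """Call robot_nod() once whenever the user's silence exceeds the threshold."""
    silence_started = None
    while True:
        if is_user_speaking():
            silence_started = None
        else:
            if silence_started is None:
                silence_started = time.monotonic()
            elif time.monotonic() - silence_started >= PAUSE_THRESHOLD_S:
                robot_nod()             # acknowledge at the candidate TRP
                silence_started = None  # wait for the next pause
        time.sleep(poll_interval_s)
```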

CA as an Analytical Tool

A second way in which CA is used is as an analytical tool for making sense of interactions between people and robots. Using CA in this manner is, however, not unproblematic.

Conversation analysis is concerned with how people accomplish social action. The focus here is on the how and the why (Schegloff & Sacks, 1973). That is, in CA we look closely into which methods people use to make sense of the situation they are currently in. These are commonly referred to as the ‘members’ methods’, or ‘ethnomethods’. CA excels at uncovering what participants themselves find relevant and important in any interaction. That is, by looking only at the observable actions performed, and at how they are responded to, the analyst comes to see the social situation from the perspective of the participants, rather than from an a priori understanding of how the interaction should proceed. Membership categories, such as gender, occupation, and age, also only become relevant if participants orient to them through their social conduct. Some conversation analysts might argue that using CA to investigate interactions between people and robots makes little sense, as robots are not ‘members’ in a sociological understanding.¹ However, here it is important to note that as analysts we are not interested in robots’ ‘methods’: these have already been designed, either explicitly through a script or through one or more algorithms. Instead, what we are interested in is how people respond in the moment to the interactions that we design.

One of the first to look at human-machine interaction from an ethnomethodological perspective was Lucy Suchman (1987), who investigated interactions between people and a photocopying machine. What was innovative about her approach was that she looked not only at which resources people had access to and which understandings they made relevant, but also showed which resources the machine had access to and how people’s actions were interpreted by the machine. In other words, she conceptualized the machine as a participant in interaction.

¹ Recent work in CA, for example, also studies human-animal interaction (Mondémé, 2011).

Unlike other methods, CA is data-driven: it is not driven by hypotheses or formal models of how interaction works or should work. Instead, insights are drawn and developed from the data (usually video recordings) at hand, by looking very closely at what participants themselves make relevant. One of the key methodological concepts behind the approach is ‘unmotivated looking’ (Sacks, 1984), which means to look for new phenomena free of preconceptions and hypotheses. Once one or several candidate phenomena have been discovered, the ‘looking’ becomes motivated, as researchers try to find out whether the same phenomenon or social practice can be found under different conditions (i.e. in several cases, with different people). As the work proceeds, findings are gathered and sorted into collections, which also serve as an indicator of how common the phenomenon under investigation is. However, results are never validated statistically in the same way that hypotheses are (in)validated using inferential statistics. Thus, CA studies published in HRI are often labeled ‘case studies’ (e.g. Arend and Sunnen, 2017; Dickerson, Robins, and Dautenhahn, 2013; Pitsch and Koch, 2010; Robins, Dautenhahn, and Dickerson, 2009).

One way to get around this problem is to publish quantitative results alongside the qualitative results. This can be done in (at least) two ways. In one way, the ‘unmotivated’ looking and the subsequent analysis identify phenomena that can be coded and later processed statistically. This is what Gehle, Pitsch, Dankert, and Wrede (2017) refer to as ‘Conversation Analysis with quantification’. For example, they look at when people are engaged in interaction with a robot in a museum. Based on this qualitative analysis, they classify under which circumstances people engage with the robot, taking gaze, body posture, and distance to the robot into account. Similarly, Pitsch et al. (2009) identify cases in which a robot is able to engage people in what they call a “contingent entry”, which they classify as entry into an interaction where the robot responds appropriately and in a timely manner to the user, and the user responds appropriately and in a timely manner to the robot. They then code the number of people who respond to the robot when it is nodding, speaking, whistling, or shifting positions, or whether they leave the interaction prematurely. Finally, they compare the people who entered the interaction contingently with those who did not, using inferential statistics. Similarly, Gehle et al. (2015) qualitatively identify trouble sources in interactions between visitors to a museum and a museum guide robot, which are then processed quantitatively. Similar approaches to combining CA and inferential statistics are also utilized by Opfermann and Pitsch (2017) and Cyra and Pitsch (2017). Finally, Fischer et al. (2015) also start with ‘unmotivated looking’ in a collaborative assembly scenario. Here they identified participant responses, such as nodding, smiling, and gaze behaviors, using conversation analytic methods. Subsequently, these responses were coded and processed with inferential statistics.
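As a rough illustration of the final, quantitative step in ‘Conversation Analysis with quantification’, the sketch below tabulates qualitatively coded responses per condition and compares them with a chi-square test. The counts are made up for illustration and are not data from the studies cited above; Python with scipy is assumed.

```python
# A minimal sketch, with made-up numbers, of the quantification step:
# qualitatively coded responses are tabulated per condition and then
# compared with inferential statistics. Illustrative counts only.
from scipy.stats import chi2_contingency

# rows: contingent entry vs. non-contingent entry; columns: responded / did not respond
coded_counts = [[34, 6],
                [18, 22]]

chi2, p, dof, expected = chi2_contingency(coded_counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```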

2.2 Data collection methods