
8.4 Results

8.4.2 Behavioral Analyses

Analyses show that the behavior of participants in the repair condition changes significantly after the repair action.

Instruction Phase

The degree to which participants use speech to instruct the robot is evaluated using a logistic regression. Before the robot has made any error, the likelihood of instructing the robot verbally is 20% in the repair condition and 26.5% in the no-repair condition (see Figure 8.7). This difference is not statistically significant (B=-0.38, SE=0.44, p=0.39). No effects of participant gender or previous experience are found.
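To make this analysis concrete, the sketch below fits such a logit model in Python with statsmodels on simulated data. It is a minimal illustration only: the column names, sample size, and simulated proportions are assumptions, not the study's data or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60  # illustrative sample size, not the study's actual N

# Hypothetical per-participant table for the pre-error phase
df = pd.DataFrame({
    "condition": rng.choice(["repair", "no_repair"], size=n),
    "experience": rng.integers(1, 6, size=n),  # prior robot experience (1-5)
})
# 1 = participant instructed the robot verbally; probabilities taken from
# the reported 20% (repair) vs. 26.5% (no-repair) likelihoods
df["used_speech"] = rng.binomial(
    1, np.where(df["condition"] == "repair", 0.20, 0.265)
)

# Logit of speech use on condition; covariates and an interaction term
# (e.g. "C(condition) * experience") can be added to the formula the same way
result = smf.logit("used_speech ~ C(condition)", data=df).fit()
print(result.summary())  # reports B (coef), SE, and p for the condition term
```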

Figure 8.7: Speech to Robot (likelihood of using speech in percent, before vs. after the error, by condition: Repair vs. No Repair; the post-error difference is marked **, i.e. p<0.01)

However, after the robot has made an error, the likelihood of using speech is 26.2% in the no-repair condition, while it is only 9.2% in the repair condition. This difference is statistically significant (B=-1.55, SE=0.56, p=0.006). The likelihood is positively correlated with previous experience with robots: the more experience participants have, the more likely they are to use speech to direct the robot. However, this effect is only marginally significant (B=0.58, SE=0.31, p=0.06) and does not interact with the experimental condition. No effects of participant gender are found.

These results show that the repair action does indeed give participants a better understanding of how to instruct the robot, and in particular how not to do so, as evidenced by the sharp drop in speech-based instructions in the repair condition after the robot has made an error and participants have initiated repair.

H4 is therefore supported.

Handover Phase

Participants are initially unsure how to complete the handover phase, as exemplified in Example 8.1.

Initially (1), the participant waits for the robot and does nothing for 3.1 seconds, until he shifts his gaze to the robot (2), which he sustains for 1.3 seconds. He shifts his gaze down again (3) for another 1.3 seconds, before he again looks at the robot (4), which he sustains for 0.6 seconds. After looking down again (5) for 2.8 seconds, he asks the robot to hand over the leg while producing a gesture (6). Finally, the robot transports the leg (7), and the participant grabs the leg as the robot releases its grip. The entire sequence takes 20.1 seconds. Each of the gaze shifts displays an orientation, on behalf of the participant, to what he considers to be the robot's turn to perform an action. When this does not happen, he produces a verbal utterance to have the robot hand over the leg. While this utterance has no effect on the robot, the gesture he produces in synchrony with his speech is recognized by the robot, which subsequently initiates the handover sequence.


1. Participant waits for the robot to lift the leg
2. Participant looks to the robot's face as it stops its motion
3. Participant looks down
4. Participant looks up to the robot again
5. Participant looks down again and raises his eyebrows
6. Participant says "Can you hand it over to me?" while doing a gesture
7. Participant waits while the robot transports the leg
8. Participant grabs the leg as the robot releases

Example 8.1: First Handover

1. Participant waits for the robot to lift the peg
2. Participant looks to the robot's face as it stops its motion
3. Participant says "hand it over to me" and produces a gesture
4. Participant grabs the peg as the robot releases

Example 8.2: Fourth Handover

Over the course of the experiment, participants adjust to the robot and become increasingly adept at interacting with it. Thus, the handover sequence for the fourth leg is much smoother, as demonstrated in Example 8.2.

In the fourth handover shown above, the participant waits for the robot to lift the peg (1), looks to the robot (2) for 2.6 seconds, produces a verbal utterance together with a gesture (3), and grabs the peg as the robot releases its grip (4). While the hesitation in (2) is quite pronounced and displays an orientation to what the participant considers to be the robot's turn to perform an action, the interaction is overall smoother. In comparison, this sequence takes only 14 seconds to complete.

The analysis shows that over the course of the experiment participants learn how to interact with the robot. To capture this phenomenon quantitatively, the pauses between the moment the robot has picked up a leg and is ready for the next command and the moment participants initiate the first handover action are measured. These pauses decrease linearly in duration over the four handover iterations (see Figure 8.8 below).
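A minimal sketch of this pause analysis follows, assuming a long-format table with one row per participant per handover; the column names, simulated durations, and the mixed-model variant are illustrative assumptions rather than the study's actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_handovers = 20, 4

# One row per participant per handover iteration
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_handovers),
    "handover": np.tile(np.arange(1, n_handovers + 1), n_participants),
})
# Simulated pause (seconds) between robot-ready and the participant's first
# handover action, with a decreasing trend plus noise
df["pause_s"] = 6.0 - 1.0 * df["handover"] + rng.normal(0.0, 1.5, len(df))

# Simple OLS trend over handover number; a mixed model with a random
# intercept per participant additionally respects the repeated measures
ols = smf.ols("pause_s ~ handover", data=df).fit()
mixed = smf.mixedlm("pause_s ~ handover", data=df, groups=df["participant"]).fit()
print(ols.params["handover"], mixed.params["handover"])  # negative slope = pauses shrink
```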

However, the handover pauses display a high level of interpersonal variability, as shown by the large standard deviations in Figure 8.8. Further analysis shows that for about one third of the participants the interaction does not develop linearly (as it does in Figure 8.8 and Examples 8.1 and 8.2). While these participants also adjust to the robot over time and become increasingly adept at interacting with it, they are less fluent in the second execution of the task than in the first (see Figure 8.9).

Figure 8.8: Handover Pauses (pause duration in seconds for handovers 1 through 4)

Figure 8.9: Handover Performance (time in seconds by handover number, for the linear and non-linear interaction formats)

Initially, when the robot stops after it has lifted the first leg, participants initiate the next action after a short delay, which indicates that they cannot predict the robot's next action (as seen in Example 8.1). However, in round 2, they hesitate even longer, indicating that they expect the robot to carry out its task autonomously. This is demonstrated in Examples 8.3 and 8.4.

The participant first waits for the robot to pick up the leg (1), during which time he keeps his gaze fixated on the robot's face. This gaze continues until one second after the robot has picked up the leg. Next, the participant looks down (2) for one second, after which he glances quickly toward the robot's face (3), and then back down again (4), before holding out his own hand in front of the robot (5). He sustains the gesture for one second, before he makes a second circular gesture with the same hand (6). Next, he drops his hand down again and waits for 2.1 seconds (7) until the robot starts moving. Finally, he grabs the leg before the robot releases its grip (8). The entire sequence lasts 12.4 seconds.

In contrast to Examples 8.1 and 8.2, in which the participant became more fluent in the interaction with each new handover phase, this participant (as well as many others) is less fluent in the second interaction than in the first.

As with the first handover, the participant in the second handover looks towards the robot's face as it finishes its motion (1). He keeps his gaze fixated on the robot's face for 1.4 seconds, whereafter he looks down toward the green leg (2) and keeps his gaze there for 1 second. He then looks back up, makes a circular gesture (3), looks back down (4), and looks back up to the robot's face (5). These gaze shifts are rapid and last for less than 0.3 seconds. Next, the participant looks down toward the leg again and lets his hand drop down (6). He keeps this posture for 3.1 seconds. He then holds out his arm (7) for 0.4 seconds and again produces a circular gesture before the robot starts moving, and he is able to grasp the leg before the robot releases its hold. The entire sequence is 19.4 seconds long. Thus, rather than becoming more fluent, the participant in Examples 8.3 and 8.4 displays more signs of confusion over time (evidenced, for example, by the numerous and rapid gaze shifts). Moreover, pauses become longer, not shorter.

The longer stretches of inactivity at the beginning of both interactions, as the robot finishes its motion and looks to the participant for the next instruction, are themselves interesting to investigate further. Initially, when the robot stops after it has lifted the first leg, the participant initiates the next action after a short delay (1 second), which indicates that he cannot predict the robot's next action. However, in handover 2, he hesitates even longer, indicating that he expects the robot to carry out its task autonomously. Thus, the participant assumes that the robot understands that the current task is a repetition of the previous one and that it has successfully learned from the previous interaction what the next step will be, namely to hand over the leg after it has picked it up, without being explicitly signalled to do so again. However, it becomes apparent during the second handover that this is not the case, and participants are indeed able to recover from the unfulfilled expectation that the robot learns from interaction in the way that people do.

Over the course of the experiment almost all participants become more fluent in their interaction with the robot, as indicated by Figure 8.8 and by Example 8.5.

In order to validate the above findings quantitatively, and to find out whether these results interact with the experimental condition, these two interaction formats were coded as either 0 (linear) or 1 (non-linear, i.e. the second handover is longer than the first), and


1. Participant looks to the robot's face as it finishes its motion
2. Participant looks down
3. Participant looks back up towards the robot's face
4. Participant looks down again
5. Participant looks toward the robot's face and holds out his hand
6. Participant makes a circular gesture with his hand
7. Participant drops his hand again
8. Participant grabs the peg as the robot releases

Example 8.3: First Handover

1. Participant looks to the robot's face as it finishes its motion
2. Participant looks down toward the peg
3. Participant looks back up towards the robot's face and makes a circular gesture with his hand
4. Participant looks down again (still doing the circular gesture)
5. Participant looks toward the robot's face
6. Participant looks down again towards the peg and drops his hand down
7. Participant holds out his arm (palm facing up)
8. Participant looks up toward the robot's face and makes a circular gesture

Example 8.4: Second Handover
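Returning to the quantitative coding of interaction formats introduced before Examples 8.3 and 8.4, the sketch below illustrates how such a binary code (0 = linear, 1 = non-linear) could be derived from per-participant pause durations and related to the experimental condition with a logit model. The data layout, column names, and the simulated values are assumptions for illustration, since the original analysis code is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20  # illustrative sample size

# Hypothetical wide table: one row per participant, pauses for handovers 1-2
wide = pd.DataFrame({
    "condition": rng.choice(["repair", "no_repair"], size=n),
    "pause_1": rng.normal(5.0, 1.5, n).clip(0.5),  # seconds
    "pause_2": rng.normal(4.5, 2.0, n).clip(0.5),
})
# 1 = non-linear format: the second handover takes longer than the first
wide["non_linear"] = (wide["pause_2"] > wide["pause_1"]).astype(int)

# Does the interaction format depend on the experimental condition?
result = smf.logit("non_linear ~ C(condition)", data=wide).fit()
print(result.summary())
```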