

In the high-aware condition, the human wizard noted down whether the participant expressed agreement, and if that was the case, the robot said:

It should be interesting for you then.

Regardless of the user's response, the robot also said this in the low-aware condition. In the high-aware condition, when the wizard identified a response that expressed disagreement, or if the participant simply did not respond, the robot said:

Then it’s good that you’re here. Maybe you get to enjoy playing with it.

At the end of the interaction, the robot in the high-aware condition recalled what the participant had previously expressed about his or her stance towards Legos. If the participant had expressed agreement, the robot said:

You said before that you liked playing with Legos. Did you enjoy it?

Whereas if he or she had expressed disagreement or had not responded to the question, the robot said:

You said before that you didn’t like playing with Legos. Did you enjoy it?

In the low-aware condition, the robot made no reference to participants' previous response, but said instead:

Playing with Legos is a good exercise for the brain. Did you enjoy it?
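
For illustration, the branching just described amounts to a simple lookup on the condition and on the stance the wizard recorded earlier. The following sketch is not taken from the thesis code; the names Stance and ClosingUtterance are hypothetical, and the actual control flow lived in the robot's pre-scripted behaviors (see Section 4.3.4):

// Hypothetical sketch of the closing-utterance selection.
enum Stance { Agreed, DisagreedOrSilent }

static string ClosingUtterance(bool highAware, Stance stance)
{
    // Low-aware condition: same utterance regardless of the recorded stance.
    if (!highAware)
        return "Playing with Legos is a good exercise for the brain. Did you enjoy it?";

    // High-aware condition: recall what the participant said earlier.
    return stance == Stance.Agreed
        ? "You said before that you liked playing with Legos. Did you enjoy it?"
        : "You said before that you didn't like playing with Legos. Did you enjoy it?";
}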

4.3.2 Participants

Fifty-two participants were recruited from the University of Southern Denmark, Campus Sønderborg. Mean age was 23.6 years (SD = 3.6). Only 26.9% of the participants were women, and these were not evenly distributed between the conditions: nine women interacted with the robot in the high-aware condition, while five interacted with the robot in the low-aware condition. Participants were ethnically very diverse; most came from Denmark or Germany, while others came from Australia, Bulgaria, China, Croatia, Iceland, India, Latvia, Lithuania, Moldova, Norway, Pakistan, Poland, Romania, Slovakia, and Spain. 42.3% of the participants had previous experience with robots, and participants with and without such experience were distributed equally between the conditions.

4.3.3 Robot and Software

The robot used for the experiment was the EZ-Robot Humanoid JD, identical to the robot used in Chapter 3, but with several modifications to its programming. Rather than using the Contingency Spotter as in Chapter 3, I programmed new behaviors for the robot using the EZ-Builder application and the EZ-Robot Software Development Kit (SDK), which are detailed below.


Vision and Gaze System

The robot’s internal VGA camera tracked the relative location of participants’ faces from the center of the 2D video stream. These data were then coupled with the robot’s vertical and horizontal head motor control, to the effect that the robot always gazed towards participants. This functionality, referred to as ‘face tracking’, relies on the OpenCV library (Bradski, 2000) and was turned on in both conditions. It differs from the contingent gaze tracking presented in Chapter 3: the system in Chapter 3 tracks the eyes of the participant and their projected gaze, while the current method uses facial recognition to track the face as a whole. However, probably due to the robot’s relatively low number of degrees of freedom, the two implemented methods produce visually very similar behaviors.
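
The coupling between face position and head motors can be understood as a simple proportional controller: the further the detected face is from the image center, the larger the corrective head movement. The following is a minimal sketch of that idea, not the actual OpenCV-based implementation; MoveHeadServos and the gain value are hypothetical:

// Proportional gaze correction: nudge the head servos so that the
// detected face moves toward the center of the camera image.
const int FrameWidth = 640, FrameHeight = 480;   // VGA stream
const double Gain = 0.05;                        // degrees per pixel of error (hypothetical)

void TrackFace(int faceX, int faceY, ref double pan, ref double tilt)
{
    double errX = faceX - FrameWidth / 2.0;      // horizontal offset from center
    double errY = faceY - FrameHeight / 2.0;     // vertical offset from center
    pan  += Gain * errX;                         // sign depends on servo mounting
    tilt += Gain * errY;
    MoveHeadServos(pan, tilt);                   // hypothetical stand-in for the servo commands
}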

Dialog Management

In order to produce speech, the robot accessed the IBM Watson Text-To-Speech service, which produces speech on the fly². All speech and nonverbal actions were pre-scripted and were to some extent timed by the wizard. This also applies to the feedback the robot supplied to participants as they progressed through the assembly. The feedback options available to the wizard were elicited by running pilot studies of the experiment, first with human participants, in which one person played the ‘robot’, and later also in a setup with a robot similar to the one presented here³.

²https://www.ibm.com/watson/services/text-to-speech/
³This work was done by Anna Kryvous as part of her MA thesis.
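
For context, the sketch below shows one way speech audio can be requested from the Watson Text-To-Speech REST API. The instance URL and API key are placeholders, the JSON escaping is naive, and the robot's actual integration may have used a different wrapper inside EZ-Builder:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch of a Watson Text-To-Speech request over its REST API.
class WatsonTts
{
    static async Task<byte[]> SynthesizeAsync(string text)
    {
        const string url = "https://<instance-url>/v1/synthesize"; // placeholder
        const string apiKey = "<api-key>";                          // placeholder

        using var client = new HttpClient();
        var auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("apikey:" + apiKey));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", auth);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("audio/wav"));

        // Naive JSON body; assumes text contains no characters needing escaping.
        var body = new StringContent("{\"text\": \"" + text + "\"}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync(url, body);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync(); // WAV audio bytes for playback
    }
}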

4.3.4 Wizard-of-Oz Module

I developed an extra module as a plugin for the EZ-Builder program, through which the wizard can easily control the robot (see Figure 4.1). The module, written in C# with WinForms and the EZ-B SDK, consists of two types of elements, which manipulate the robot’s state in various ways. One type sets explicit states for the robot, for example the weather. This was done through a simple ‘set’ command inside an if-statement:

// Store the wizard's dropdown selection in the EZ-Builder variable $weather.
private void ddWeather_SelectedIndexChanged(object sender, EventArgs e)
{
    if (ddWeather.Text == "Great")
    {
        EZ_Builder.Scripting.VariableManager.SetVariable("$weather", "great");
    }

    if (ddWeather.Text == "Bad")
    {
        EZ_Builder.Scripting.VariableManager.SetVariable("$weather", "bad");
    }
}

Source Code 4.1: Weather Control
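
As a design note, the per-value if-statements generalize naturally to a table-driven variant if more states were added. A possible refactoring with identical behavior (assumes using System.Collections.Generic; the SDK call is the same as above):

// Table-driven variant of Source Code 4.1: the dropdown text is mapped
// to the value stored in the EZ-Builder variable $weather.
private static readonly Dictionary<string, string> WeatherValues =
    new Dictionary<string, string> { { "Great", "great" }, { "Bad", "bad" } };

private void ddWeather_SelectedIndexChanged(object sender, EventArgs e)
{
    if (WeatherValues.TryGetValue(ddWeather.Text, out var value))
        EZ_Builder.Scripting.VariableManager.SetVariable("$weather", value);
}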

The other type of element is a single button called ‘intervention’ (see Figure 4.1). This button was used during the experiment by the wizard to time certain events, for example to continue the interaction after a participant response. The button was colored red to indicate to the wizard that the interaction was stalled until he or she pressed the intervention button, after which it would return to its basic gray color. On the plugin side, this was done by incrementing a counter by one every time the button was clicked, setting a variable in EZ-Builder to the same value as the counter, and returning the button to its original color:

// Signal EZ-Builder that the wizard has intervened, and reset the button color.
private void btnInterv_Click(object sender, EventArgs e)
{
    intCount++;
    EZ_Builder.Scripting.VariableManager.SetVariable("$intervention", intCount);
    EZ_Builder.Invokers.SetBackColor(btnInterv, Color.LightGray);
}

Source Code 4.2: Intervention Button Code

On the EZ-Builder side, the script waited for a change in the variable $intervention before proceeding.
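
Conceptually, that wait can be rendered in C# as a polling loop. ReadInterventionCounter below is a hypothetical accessor standing in for a read of $intervention; the real script is written in EZ-Builder's own scripting language rather than C#, and Thread.Sleep requires System.Threading:

// Block until the wizard presses the intervention button,
// i.e. until the $intervention counter changes.
int lastSeen = ReadInterventionCounter();        // hypothetical accessor
while (ReadInterventionCounter() == lastSeen)
{
    Thread.Sleep(50);                            // poll until the counter is incremented
}
// ... proceed with the next scripted robot action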

Figure 4.1: Wizard-of-Oz Module

The entire source code for the plugin can be seen in Appendix B.1.

4.3.5 Assembly Task

4.3.6 Analysis

Questionnaire responses are analyzed using multiple linear regression. Predictor variables include the experimental condition, previous experience with robots, and gender.
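
In model form, with illustrative coefficient names not taken from the thesis, each questionnaire score $y_i$ is regressed on the three predictors:

\[ y_i = \beta_0 + \beta_1\,\text{Condition}_i + \beta_2\,\text{Experience}_i + \beta_3\,\text{Gender}_i + \varepsilon_i \]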

Subjective Measures

Prior to the experiment, participants completed a demographic questionnaire eliciting information about their age, sex, and previous experience with robots. After the experiment,