5.1 Associations with Artificial Intelligence - Younger Adults

5.1.3 Sub-Question Two

“How well do participants know about new image recognition and text analysing services, and which concerns do participants relate to these two AI services?”

Image recognition service (Familiarity): In general, three out of five participants, i.e. all the males, were acquainted with the Watson Platform and the AI service Visual Recognition.

When the image recognition technology behind Visual Recognition was explained, two women (AM) (SI) seemed to have an ‘aha moment’, as they asked whether this technology was also used in services provided by Google and Facebook. One woman (AM) related the image recognition service to Facebook’s feature that recognises faces in pictures to make it easier to tag names to the faces:

“Is that also what Facebook does when one uploads a picture: Is this the girl or what?” (20:30)

Another woman (SI) related the image recognition service to Google’s ‘Reverse Google Image Search’, which can identify pictures that look similar to one a person uploads:

“But is it not like the Reverse Google Image search?” (20:43)

Even though the women did not, before the interview, seem to have considered the actual technology behind these image recognition services, they still demonstrated a good understanding of the technology, as they were able to refer to other providers of the service. This was exemplified by their quest for even more knowledge about how the image recognition service functioned in practice. Their questions addressed relevant points, which demonstrated the women’s good understanding of the technology:

“But well, how many details can it detect? How developed is it?” (23:06 – (AM))

“What kind of pictures is it allowed to analyse through?” (37:36 – (SI))

The rest of the participants, all males, demonstrated very good, at times expert, knowledge of the service. This might be a result of their educational backgrounds. The males had also previously seen demonstrations of the service. One male (WI) even used the service when educating his class about what AI is:

“I use it once in a while in classes to sort of create a picture for the pupils of what AI really is. It is such a sublime example…” (18:42)

Another male (TO) had visited IBM where he had the service demonstrated:

“I visited IBM this summer. There, they talked about both services, and you could go to their website and test them. You could also upload a picture from Google for instance, of a dog, and then it could find, well then it would scan it…” (20:05)

After the image recognition service was presented to the participants, they mentioned only a few applications, such as:

“I have also seen it being used for harvesting… I have seen a video about an apple orchard… when they should sort the apples into different colours… there was a little machine in that machine that was putting the apples in different baskets.” (54:41)

Another male (TO) stated that he had seen the image recognition technology being used for the same purpose, just with tomatoes. (55:00)

Image recognition service (Associated concerns): Next, the issues that the young adults related to the image recognition service were driven by the women, who had several critical questions about privacy rights for pictures:

“Which pictures does [Visual Recognition] have the right to analyse through?”

(37:36 - woman (SI))

“Yes, are there any rights?” (37:41 – woman (AM))

“… does it also search in my pictures on my Instagram profile to analyse those pictures?” (38:13 - woman (SI))

“Even though your [pictures] are private?” (38:26 – woman (AM))

“But do they also save the pictures?” (38:47 – woman (AM))

These questions suggest a lack of knowledge about data ownership rights on free social media sites and how data are shared between different online actors. The women’s questions also indicate a concern about which pictures the image recognition service is allowed to analyse. On a related note, one woman (SI) expressed her concern about the tracking of consumer behaviour online, as it only provides consumers with content of a similar type, which means that people will not have their knowledge horizon broadened:

“… because it gets so personal, targeted and directed that one does not have his/her horizon broadened – just like it is obvious that I am red, politically, and therefore only pop-ups from Politiken arrive on my site, because that is what I have clicked on previously.” (27:56)

The woman’s perceived risk that one’s horizon will not be broadened, as content is targeted at individuals, applies to many different services that track consumers’ online behaviour. This also includes Visual Recognition, which can be used to infer consumers’ preferences by analysing items in pictures (cf. Appendix 3 ‘Expert interview’ for verification of this statement).

Thus, this woman’s (SI) fear of being targeted with online content might be an expression of concern about her online privacy, as she appears to desire that online content be tailored less to her preferences. She seems, however, to feel that the act of ‘tracking data’ is the problem, and not the image recognition service itself.

Also noteworthy is that the males, who knew the Visual Recognition service, did not mention any concerns related to the image recognition technology, whereas the women, who did not know the technology, had several critical questions and seemed uneasy about how the technology can analyse their pictures online.

Text analysing service – (Familiarity): With regard to the text analysing service, one male (TO) knew the service before the interview. Another male (AL) mentioned how he could find the technology relevant for lawyers:

“Well, I think in terms of bigger companies that are dealing with lawyers or whatever, somewhere that needs to send a lot of E-mails and needs to quickly make an assessment of the mood: do I really need to answer this mail quickly? Therefore, I do find it extremely applicable when you need to say, well, there is a person who is mad, I need to respond to this mail now…” (44:28)

In general, the understanding of the technology was good, but participants needed a hint about what it could be used for, such as in politics, before they were able to imagine how it could be used in real life.

As an example, the interviewer suggested:

“And actually, Tone Analyzer was used to analyse who would win the election between Hillary and Trump.” (58:04 – (I))

One woman (SI) seemed enthused by how text analysing services can be used in politics, and she seemed to find the technology interesting for another practical application:

“See, that would be interesting to look at someone like [Dansk Folkeparti], who always receive at least 100,000 more votes than they expect, because people will not say out loud that they vote for [Dansk Folkeparti]… Well, [Dansk Folkeparti] can use this information to determine to what extent people are going to vote for them…” (59:54 – woman (SI))

The interview indicates that the participants had a better understanding of the visual recognition service than of the text analysing service, as they were not able to mention as many examples in which the text analysing service is used. Further, there seemed to be a bit more confusion about how the technology works compared to the visual recognition service; e.g. one informant seemed to confuse Tone Analyzer with a voice recognition service:

“…But there is also something about talking – is there also sound? It is not just text, right?” (57:51 – (AM))

Text analysing service – (Associated concerns): A woman (SI) voiced that she did not find the text analysing service to have risks in the same sense as the image recognition service, as she finds each service to have its own purpose:

“I just think there is a big difference from the one [Visual Recognition] you talked about before… the way in which I am emailing with the insurance company, I do not find it to be a problem if they analyse my E-mail… But when [data] is analysed on an application like Facebook or Instagram, which I use for private things that do not have anything to do with an official errand, then I feel that it is trespassing on – what I feel is my personal data. But if, for instance, I am talking with the local municipality or insurance company or with my doctor for that sake, and it is used internally in organisations, that I have nothing against.” (47:37)

This comment implies a concern about what kind of institution is using the services. As the services have different affordances, the participant is right when stating that Tone Analyzer would be used more for E-mails, whereas Visual Recognition would be used more for targeting, typically on social media (cf. Appendix 3 ‘Expert interview’ for verification of this statement). Hence, it might seem as if the woman feels more secure about Tone Analyzer, as it would typically not be used on social media sites.

The woman (SI) had second thoughts about the text analysing service as the interview went on. She explained how the text analysing service could be misused by those who know that it can be used to get ahead in the queue at customer service, and how this will have a negative effect on communication between individuals:

“…But there is a problem if [text] is getting analysed in order to answer some E-mails faster. Then you might just as well write angry messages all the time to be sure that you get a quick response. Well, it has something to do with the attitude also in relation to the contact with other humans…” “Well, again there is something that is lost due to technology. Just like with online bullying, oh but it is much easier to write some shit to someone over the computer than saying it to the person verbally.” (55:52; 57:14)

One woman (AM) seemed to be concerned that this service could be used in private messages:

“Can I ask about something – can it also be used in messages, private messages?” (41:12)

This suggests that online privacy might be an important issue in relation to text material that is written on Facebook. Or, it might rather be a clarifying question seeking an explanation of how the service can be used.

Further, one male (WI) seemed most concerned about who owns the data that is used with these AI services. He asked about data ownership and seemed to support the services more once he found that the data used together with these services are the customers’ own and not provided by IBM:

“I just have a question. If [customers] bought the right to use these services – will they then get the tool [service] for it or will they also get the underlying data?” (44:57)

“So [IBM] only provides the service for people and then it is up to the person’s own ethics to use…” (45:39)

“If so, just sell it. I love it” (45:22)

The male’s (WI) positive reaction suggests that he does not perceive the issues to be connected to the services themselves; rather, the issue might lie with the underlying data that is used together with these services. Hence, he might not find the services to be of any concern, as he indicates a greater concern for the data.

Summarising the main findings in group one related to the two AI services, there seemed to be a variation in knowledge. The males seemed to have more preconceived ideas about the services than the women, likely due to their educational backgrounds and/or interest in the topic. However, the women did seem able to understand the issue, as their questions were on topic and raised related issues about data management and privacy. In fact, these critical and clarifying questions might denote that the women wanted to learn more about the AI services. Further, these questions also implied that the women had a concern about the privacy of data, as their questions were critical about how pictures and text material are used together with these services. These concerns did not seem to be shared by the males, as they did not express any concerns in relation to the two AI services.