

5.3 Comparison of Perceptions

The aim of this section is to compare Bluefragments’ perception to each of their respective clients’ perceptions. The various technological frames analysed in our findings presented some interesting similarities and contradictions. We have contrasted Bluefragments’ technological frames with those of each client to gain a better understanding of how and where each client’s perception is shared with or distinct from that of Bluefragments. Understanding how the perceptions compare has enabled us to ask better questions in our second interview with Bluefragments about why the company was aligned or misaligned with their clients. This particular (mis)alignment is important to acknowledge in order for us to further analyse how Bluefragments have succeeded in establishing a shared technological frame of reference.

We have chosen to compare the perceptions in Table 5.1 within four topics related to the three domains of technology: the first deals with how intelligence is understood, and the second with how AI is distinguished from other technologies, if at all - these are related to Nature of Technology. The third is how AI is or should be used in an organisation - which is related to Technology Strategy. Finally, the fourth deals with how or whether AI is socio-materialistic - which is then related to AI in Action.


Table 5.1 The alignment and misalignment between Bluefragments and their clients

The table compares each topic across the companies: Bluefragments, IDA, Frontliners and Bestseller.

Topic: AI as intelligent

Bluefragments: Martinsen believes an AI is intelligent when it acts intelligently. In other words, when an AI is able to display actions that we associate with intelligence, we must assume that it is intelligent. What is deceptively simple about this definition is that it is not a definition of intelligence at all - it leaves it to the person observing an act by the AI to judge whether that act was intelligent or not.

IDA (Aligned): Bach Keldsen believes an AI is intelligent when it provides a smart solution. Although Bach Keldsen ultimately believes that intelligence is universally scalable, her way of testing this intelligence requires the same idiosyncratic parameters that Martinsen also believes are necessary. It is understood that there is always a specific problem with certain parameters from which the intelligence of the solution can be approximated - in effect aligning Bach Keldsen and Martinsen with regard to how intelligence can be detected and verified.

Frontliners (Misaligned): Fogel believes that intelligence is when someone or something thinks logically. It is therefore the processing of information, rather than the result of the processing, that defines intelligence. The immediate question with this definition is how to prove that someone or something is thinking logically. Fogel understands intelligence as universally scalable, whereas Martinsen recognises intelligence as a local phenomenon, which can only be measured idiosyncratically - if at all. In fact, the definitions are ultimately contradictory.

Bestseller (Misaligned): Hjørnholm does not believe that AI is genuinely intelligent. He acknowledges that this definition likely makes genuine AI unattainable; if any AI were tested against this definition, it would always fail. To him, AI is a business tool and a method for solving specific problems.


Topic: Distinguishing AI from other technologies - opportunities and limitations

Bluefragments: Martinsen recognises a clear distinction between AI and other technologies, mainly in terms of AI’s operational autonomy. For him, it is mostly a technical issue related to the number of rules you would need to set up in order for the AI to work. The AI is expected - in fact required - to be able to act independently and intelligently. He even describes the autonomy of the AI as having a life of its own. Martinsen recognises AI’s limitation of only being intelligent in one specific area. Therefore, an AI is only as good as the competence with which it navigates its environment.

IDA (Aligned): Bach Keldsen emphasises that AI has the ability to resemble the human mind. AI is able to learn and improve its performance, in contrast to RPA, which does not do more than its instructions tell it to. Martinsen and Bach Keldsen seem to agree that what makes an AI useful is its ability to operate with autonomy within a set number of programmed rules. Bach Keldsen does see that AI has its limitations; in general, many of her considerations about AI are related to the project that IDA is conducting.

Frontliners (Misaligned): Fogel does not see any clear distinction between AI and other technologies, or at least not between AI, robotics and automation. To him, something is AI when it is able to think by itself, but he also believes that there is a lot of software that is AI yet is not able to think by itself. We must therefore conclude that there is probably quite a large difference in the way Fogel and Martinsen distinguish AI from other technologies. Fogel states that he does not believe that AI has any limitations.

Bestseller (Misaligned): Hjørnholm does not believe that AI requires any form of special distinction. To him, AI is a tool just like any other. He does not believe that AI technology will fundamentally change the way we solve intelligent problems, as he does not see AI as genuinely intelligent, whereas Martinsen thinks AI will fundamentally reshape the way we work. Hjørnholm believes AI is only limited by the people using it.

Topic: The organisational purpose of AI

Bluefragments: Martinsen does not believe that their clients develop AI as part of a strategic decision, but mostly because it is a buzzword and because companies want to ‘ride the wave’, even though they are not sure of AI’s capabilities and purpose. Martinsen understands AI to be more of a strategic tool than a strategic decision. He did, however, recognise Frontliners’ use of AI as a strategic decision.

IDA (Misaligned): The decision to implement AI is highly strategic for IDA, as it is part of their IT strategy. IDA had a specific purpose for incorporating AI in the organisation, namely to optimise processes - and will continue to do so throughout the organisation. Therefore, there is a misalignment.

Frontliners (Aligned): Frontliners derives its core business value from an AI-driven platform, and Fogel therefore heavily underlines the strategic decision from his side. AI has been a strategic choice from the beginning, as his company would not exist had AI not been incorporated.

Bestseller (Both): AI was not a strategic decision for Bestseller at first. Bestseller saw AI as a strategic tool in the same way that Bluefragments does, and they are therefore quite aligned in this perception. However, AI became a strategic decision later in the process, when Hjørnholm had more knowledge of the technology, and therefore they are misaligned on this specific point.

Topic: AI’s interactions and its socio-materialistic implications

Bluefragments: Martinsen is of the belief that there is still a need for human interaction in order for the AI to reach a point where it can act more or less on its own. The AI needs a vast amount of data from humans, along with being controlled to ensure that it provides correct answers. The AI is shaped and tailored for the purpose that humans ascribe to it, and the technology in turn influences humans - which is a socio-materialistic perspective.

IDA (Aligned): AI can only be independent to a certain extent because it will always be affected by the humans who programmed it. Therefore, there will always be a degree of human interaction connected to the AI. Humans develop AI for present needs, and the AI will then reflect the ambition or the parameters that humans have given it at the present time. Thereby, she perceives AI as socio-materialistic.

Frontliners (Aligned): AI is still in need of human interaction in order to evolve and become independent, for now. Fogel perceives AI to be used for enhancing humans and is thereby also socio-materialistic in his perception, but to a larger degree compared to Bluefragments.

Bestseller (Misaligned): AI can only be independent to a certain extent because it will always be limited by the humans who programmed it. Therefore, there will always be a degree of human interaction connected to the AI. However, Hjørnholm does not believe that humans are influenced by the AI, and his perspective is therefore not socio-materialistic.


5.3.1 Sub Conclusion

We wanted to know how our interviewees understood what makes an AI intelligent. We expected that there would be overall agreement; however, we have found significant differences in the way our interviewees conceptualise the intelligence of AI. We were surprised that there was no universal agreement on what makes an AI intelligent. As is evident in the table, there is also no unanimous agreement between Bluefragments and their clients about the general distinctions and limitations of AI technology. However, we assume that the specific expectations for each specific project are aligned, since there has been a successful collaboration. Furthermore, Martinsen has knowledge of how to develop an AI solution, of what is presently possible and not possible with AI, and of the resources needed for development and implementation. Therefore, he is focused merely on producing the AI solutions and not on strategically planning the process and purpose for the clients.

Lastly, it is interesting to see that Martinsen, Bach Keldsen and Fogel all have a socio-materialistic perspective, but differ in their opinions on the extent to which there is a socio-materialistic bond between AI and humans. Hjørnholm’s point of view is unique, as he does not have a complete socio-materialistic perspective on AI. He does not see the technology as something which directly influences human behaviour - unless you are unaware that you are interacting with a machine and not a human.