

5. Findings

5.2 Clients’ Perceptions of AI

5.2.2 Frontliners

Nature of Technology

Frontliners’ CEO and founder Danny Fabricius Fogel believes intelligence is primarily an expression of logical thought:

“I think it means something capable of thinking (.) And the more capable you are of thinking, the more logical you think, the more intelligent you are. I think, yeah.” (Appendix 5, l. 250-251).

This statement raises two immediate questions: what is meant by thinking, and what is meant by logical? In this context, we understand thinking as the efficiency of processing information, whereas logic is the extent to which one is capable of clearly relaying a line of reasoning. To Fogel, the more logically you think, the more intelligent you are. He emphasises the underlying processing of information rather than the result of said processing, which suggests a scalability to his understanding of intelligence: you can be more intelligent when you think more logically, and, naturally, that also means you can think less logically and therefore be less intelligent.

Whether more intelligence can be acquired by learning to think more logically remains unclear.

Nonetheless, this understanding of intelligence is largely absolute: to Fogel, intelligence is not a concept relative to perception but an absolute concept, which can be measured on a scale. Fogel’s indicator for such a scale is human beings themselves:

“Well, that it’s a machine and an intelligence that is built, somehow. So, it's not a human intelligence, it's a machine. So, that's it. What people want to gain with this is to make a machine think like a human being, you can say. So, that’s why it’s artificial intelligence.” (Appendix 5, l. 265-267).

Fogel again places emphasis on the ability to think and an AI is then understood to be a simulation of a human's ability to think, since he clearly states that AI is a machine capable of thinking as a human being. However, he also emphasises that a simulation of a human being is something artificial when he explains what distinguishes AI from human intelligence:

“I distinguish it in that way that it’s something not stored in the mind of a human. And that’s almost about it. It’s something that’s facilitated by a human.” (Appendix 5, l. 334-335).

In other words, Fogel recognises no difference in the nature of intelligence between an AI and a human being - it is simply a matter of how the intelligence is being stored. One intelligence is born and the other is facilitated. He later explained what he meant by facilitated:

“It’s ((AI)) something that’s facilitated by someone who actually created it. But the thing is, that within a short time machines will be better at facilitating AI than humans will. And that’s when humans are not needed.” (Appendix 5, l. 339-340).

To Fogel, an AI is something which is created - we assume that this is meant for a specific purpose. In essence, there is in Fogel’s opinion no difference between human intelligence and artificial intelligence, as the latter is simply a simulation of the former. Again, the idea of a scale comes into play, as he believes human beings themselves will in the near future become ‘obsolete’. He believes that the facilitation of intelligence will no longer be overseen by humans:

“Humans are like rats, just more clever. Not compared to the machine. The machine can learn so much faster than the computer ((human)), we’ve seen so many examples. It’s so naive to say we will be necessary forever and ever because we will not be necessary.” (Appendix 5, l. 341-343).

In addition, he does not see that AI has any limits:


“I don’t think it has ((limits)).” (Appendix 5, l. 348).

It should be noted that Fogel made the comparison between rats and humans somewhat jokingly; his point, however, remained the same - that human intelligence is inferior to machine intelligence. He seems to believe that what makes AI superior is the speed at which it is able to process data. He believes that an AI’s ability to process information is essentially limitless; however, he does not seem to acknowledge, or simply does not know, that an AI often has quite strict requirements of contextuality, which are needed to make sense of the data it is analysing. For instance, he does not distinguish between labelled and unlabelled data as a vital input for the AI. He elaborated on his argument that AI is essentially limitless:

“Because it's self-educating. So (.) the limit is if we unplug it. If stop all (.) You need some things: you need data, you need connectivity - those two things are necessary. Because if you don't have data, there's nothing to work with and if you don't have connectivity you’re cut off.” (Appendix 5, l. 352-354).

Fogel does not seem to know that there is a rather large difference between a supervised and an unsupervised method of developing AI. The way he presents his argument seems to suggest that he believes any AI, no matter what type or purpose, would be able to make sense of the vast amount of chaotic data found on the internet. He is of course right that an AI needs large amounts of data, but it also needs varying degrees of supervision to make sense of this data - which, of course, also has to be sorted into labelled or unlabelled. Fogel confidently believes that AI is evolving:

“I think the technology has (.) I don't know if it has changed, I just think that, just like machines are learning, the humans started learning before the machines. So, it’s just regular people who evolved. So, they started working with AI in one way and learned from that, and then they found out there was a smarter way of doing things, and then they learned from that. And it will continue like that.” (Appendix 5, l. 361-364).

Fogel, again, makes a comparison between humans learning and machines learning - consistent with his argument that there is no difference in the nature of intelligence between the two. This similarity leads him to conclude that, just as humans have evolved, machines will follow a similar evolutionary trajectory. It then begs the question: how does Fogel distinguish AI from other technologies? Fogel does not particularly distinguish AI from other technologies, such as automation and robotics:


“I think it's all the same somehow. So, if something can do its own thinking then it's intelligence. If it's not a human or an animal, or whatever, then it’s artificial intelligence. There’s a lot of different software that’s AI, but it has to be able to think by itself.” (Appendix 5, l. 314-316).

Fogel does not make any concrete distinctions between what can and cannot be a specific manifestation of AI - as long as said manifestation has the ability to think. From this, we must also conclude that Fogel distinguishes AI from other technologies by its ability to think, which is analysed as a display of logic or the ability to clearly relay a line of reasoning.

Technology Strategy

For Frontliners, AI technology has been a necessary strategic asset for the company. The whole foundation of their value proposition is based on providing a platform, which is largely made up of AI technology. Fogel strongly emphasises that adopting AI has been a strategic move for the company:

“Absolutely, so I don't think (.) without AI this ((Frontliners)) would not be a project. I don't think we would have done this at all. Because you wouldn't be able to achieve what we want to achieve ((without AI)), basically.” (Appendix 5, l. 548-549).

Since Fogel states that he would not be able to run his business without AI technology, our analysis of Frontliners’ technology strategy is therefore done with the assumption that adopting AI has been an essential strategic decision for the company. We also base this assumption on the fact that Fogel has been waiting for it to become technologically possible for him to pursue the Frontliners project:

“So it's also like that sometimes, you just have to wait for something to happen, before you're able to do what you want. And this is possible ((Frontliners idea of incorporating AI)). I might have been able to do this a year ago, I also started this project almost two years ago. And I did a lot of wrong things. So, I'm glad that I have done those now so I can move forward.” (Appendix 5, l. 555-558).

AI has been a necessary element for the viability of the Frontliners project. However, AI has largely been a new technology to Fogel and therefore something in which he has needed to gain competence. The acceptance or even appreciation of the ‘wrong things’ he has done is, in our understanding, an indication of the underlying strategic importance he assigns to gaining this competence. He is appreciative of having made mistakes, as it has given him strategic insight into the capabilities and challenges associated with using AI. This insight has allowed him to move forward with the Frontliners project.


It is essential that Frontliners has competence with AI, as they are not using the technology to improve their own organisation, but to sell a unique product to customers:

“We’re ((Frontliners)) quite different, because we're not in it to save time for ourselves or be more efficient ourselves. We're in it, because we want to offer small businesses, all stores, around the world a solution, you know, some elements to use AI actually without them even having to engage in it themselves. Because we provide a platform with some things that are made by AI.” (Appendix 5, l. 536-539).

We are not conducting a business analysis; however, we are interested in the assumptions behind the business model. Fogel believes that the extensive data analysis which can be conducted via AI requires certain data-analytical skills that most organisations will have a hard time acquiring. For example, most cafés will have neither the capital nor the organisational capacity to hire a data scientist. Fogel therefore believes that most SMEs would be interested in buying access to a platform which could provide such analysis - in this case, provided by Frontliners.

We must assume that he does not view his company's organisational strength to be that of a software developer yet, since he considers Bluefragments to be the technical experts (Appendix 5, l. 430-436). Fogel understands AI as an essential strategic element in creating an organisation capable of providing unique analytical services. Using AI as an essential element in your organisation creates some strategic challenges due to the extensive need for data. Fogel also saw data as one of Frontliners’ potential challenges:

“I think that data might be a problem - the things that I want to (.) when we talk about AI. And, I want, you know, small coffee shops and so forth to use our system. No coffee shop with 10 employees will have enough data to get something clever from Artificial Intelligence. I’m sorry to say so, but the statistical foundation, of coffee shop that size, is just not big enough. But if I have a 1.000 coffee shops in my system, with five to ten employers each, with of course anonymised data.” (Appendix 5, l. 569-573).

He later added:

“I think that the data might be a challenge, depending on how the customers react to ((the idea)) of shared data.” (Appendix 5, l. 577-578).

Fogel admits that he does not know how his potential clients would react to the idea of sharing their data. Whether Frontliners’ potential clients would actually see it as a problem to share their data in a common system is outside the focus of this project. Nonetheless, we want to highlight that securing a regular data flow will be of strategic importance to Frontliners, as any disruption to this flow will seriously affect Frontliners’ ability to provide the service to their clients.

Another area in which the clients’ opinions matter is with regard to the AI technology itself. Fogel pointed out that, currently, the idea of using AI technology has not created strong opinions - at least not among his clients:

“No, I don’t think that anyone has strong opinions about it ((AI technology)). I think it is a buzzword for ((companies)). I think most people don't care about it. People in big organisations ((however)), they get a lot of, you know, they have Microsoft, Saab, Google etc. Keep telling them “Hello, you need to look into this now! You need to look into this, now! You will lose if you don’t look into this now, you will lose”. So, they ((big corps)) are now starting to look into it, and it takes much longer time than anticipated. It's always like that.” (Appendix 5, l. 642-648).

In Fogel’s opinion, there is a major strategic potential in acquiring competence in the use of AI technology right now. He highlights the core observation of his business project: he recognises a difference in the ability to adopt AI between big and small companies. Fogel points out how big companies are unable to ignore the increasing importance of AI technology, whereas small companies might simply not be aware of its importance. This may be why, at the moment, his clients do not have strong opinions about AI (Appendix 5, l. 642-648). It could be an advantage for Fogel, as he will not have to fight any prejudice about the technology. However, it might also prove to be a disadvantage, as his potential clients might not recognise the potential of acquiring services from an AI platform if they do not understand its importance.

AI in Action

We briefly touched upon Fogel’s idea of machine evolution in the Nature of Technology section, as that was where he first mentioned it during the interview. However, when we asked him many of the questions related to AI in action, he further explained his understanding of this evolution. Fogel does not believe AI will unequivocally undergo an evolution; rather, it depends on the project (Appendix 5, l. 102-116). He elaborated on which projects he saw an AI evolving in:

“What I think is interesting about this, is that we have some human behaviour and we measure that in some ways and measure what is influencing the behaviour and stuff like that. And then an AI makes some decisions based on that, to help people become better at something. So it demands human interaction.” (Appendix 5, l.



In fact, his answer is quite confusing because he seems to be giving two opposing answers at once. In the first instance, he is saying that AI evolves independently, and in the next he is suggesting that it is dependent on human interaction. We understand his point to be that currently AI requires human interaction, but that might not be the case in the future. Essentially, he understands AI and humans currently to be co-evolving, with the only difference being that machine evolution will happen at a much greater speed than human evolution. As a consequence of this belief, he refers to the fact that humans will be unable to keep up and therefore become obsolete:

“The thing is, at some point the machines will be able to do the same things they already do, within things we’re asking it to do it within. But it will also be able to do it regarding programming itself and stuff like that. They are already doing machine learning, it just depends what you tell it the rules are.” (Appendix 5, l. 364-367).

We understand his argument to be that machine evolution will become self-maintained - essentially making machines capable of outcompeting humans in their otherwise unique areas of competence. Fogel is therefore also of the belief that AI can have agency:

“I think so (.) I don't think it's, I don't know if it's there yet. I think it will be capable of doing that ((acting with agency)). But I'm not sure it is there yet. I don't think it's there yet.” (Appendix 5, l. 783-784).

Since Fogel believes AI will at some point be able to program itself and therefore be able to act independently from human input, it follows logically that he does not question the merits of an AI having agency. In fact, to Fogel it seems to be only a matter of time before AI technology can act with agency.

AI to Fogel is no doubt a powerful technology with a lot of potential. Whether this potential is ultimately for good or for evil remains unclear to him. When he defined AI to us, he also added that ‘he was seeing some extremely scary things already’ (Appendix 5, l. 265-277). He elaborated on this point:

“I think that terminator is actually a very realistic scenario for now. I think it’s scary because people are joking about ((it)), you know. One of my friends he has one of those vacuum cleaners - robot vacuum cleaners - and when he makes a joke he says “Well I'm just waiting for the day it discovers who’s actually creating the mess and just makes an end of it, at once. Instead of keep cleaning after you”. You know, it’s funny, but it's kind of true.” (Appendix 5, l. 281-285).


It should be said that Fogel made this comment jokingly; however, the underlying sentiment clearly remains one of suspicion and fear. The original Terminator movie (1984) infamously opens with the image of an animatronic foot crushing a human skull, which, of course, makes for one of the most dystopian predictions of technological development run wild to the detriment of humans. This interpretation of the future of AI technology leaves little hope for anything that is not AI. Such a point of view may leave you wondering why Fogel has engaged in a business venture so heavily reliant on AI when he foresees the future as dystopian. However, Fogel does not believe that his company can be responsible for such a dystopian development:

“The human mind is not able to foresee that ((future development)). It is not able to connect the dots, so I cannot see that what I'm using AI for, is going to lead to that. But I can see that ((happening)) if the wrong people gets a hold of it and use it, by mistake - who knows.” (Appendix 5, l. 862-864).

Fogel believes that AI has negative potential and that it could be used to the detriment of humans. However, he does not see any negative use of the technology as hampering his own use of AI. In his opinion, he will use it for positive improvements of a workplace. At the same time, he implies that he cannot predict the future with accuracy, but he is still convinced that Frontliners’ use of AI will be of benefit.

In fact, Fogel additionally argues for a perspective running parallel to the rather negative one mentioned above. He believes that AI holds a great potential for enhancement of the individual - he talks about human enhancers (Appendix 5, l. 674-675), which we understand to mean personalised AI that would be used on an everyday basis. Of course, this interpretation of AI recognises the technology as something which will be of great benefit to humanity rather than something which will overpower us. This counter-perspective is an even more socio-materialistic one, as AI is understood to grant us abilities that we did not otherwise have; it does not exist on its own, but in unison with whomever is using it. Fogel has additional perceptions of an AI acting intelligently:

“It’s in my opinion, it's just when it ((AI)) solves problems for me. You know, maybe in a more clever way then a calculator, but it's more or less it.” (Appendix 5, l. 716-717).

An AI is intelligent when it can solve a problem for you - presumably a problem for which you could not find the solution on your own. It is therefore no surprise that what makes an AI valuable to Fogel is anything that saves time or provides you with a novel insight (Appendix 5, l. 778). This more positive perspective is socio-materialistic, as the technology is understood to be something which enhances the user. However, the point remains that Fogel is slightly inconsistent in terms of whether the technological developments in AI will have an overall positive or negative impact on the human species. On one hand, he makes a case that the technological developments are already out of control, and that it is only a matter of time before the automatic vacuum cleaner starts to question whether it should carry out the task it is given. On the other hand, Fogel believes that AI technology will help to enhance the individual human being to unprecedented abilities and insights.

Sub Conclusion

To Fogel, intelligence is the ability to think logically, which can, in principle, be done both by a machine and a human. Intelligence is an absolute concept, which can be measured on a scale. He believes that an AI’s ability to process information is essentially limitless. Therefore, AI will eventually have the ability to outsmart humans - even though this is not currently the case.

Fogel’s belief in AI’s absolute intelligence is also somewhat inconsistent, as he believes that AI acts intelligently when it solves problems. The inconsistency lies in the fact that he cannot seem to decide whether intelligence is a quality inherent to thought or whether it is a display of intelligent actions.

Thinking is a process which might not be directly observed, whereas actions are. This begs the question: is it the ability to think or the ability to act that makes you intelligent? Fogel provides no clear answer to this question. The inconsistency of his answers is pronounced by the fact that he also believes that an AI will become intelligent enough to question the tasks it is given, and ultimately give itself its own tasks to solve. According to this line of reasoning, it can be argued that not solving a given problem is equally an act of intelligence, as it displays the ability to think critically about the task one is given. Fogel never definitively answers whether thinking or acting is ultimately intelligence, making the line between thinking intelligently and acting intelligently even more obscure.

Ultimately, it is hard to determine to what extent Fogel understands AI socio-materialistically. On one hand, he argues that AI currently requires human interaction, but that might not be the case in the future - which may be to the detriment or the improvement of the human species. We will argue that Frontliners’ use of AI is socio-materialistic, since the technology is being used to measure and improve human performance in an organisation. The AI platform learns from the people in the organisation and, based on this input, gives suggestions to the organisation’s members on how the organisation can improve itself. It is a clear example of something social influencing the use of the material and vice versa.

However, Fogel's general understanding of AI is, in our opinion, not socio-materialistic. Fogel seems to believe that AI is already, or will shortly become, semi- or fully sentient, which would make our interactions with AI fully social rather than socio-materialistic, as both parties, man and machine, would act with absolute agency. This idea of a soon-to-be fully independent AI also