5. Findings
5.1. Bluefragments’ Perception of AI
5.1.3. AI in Action
is more ‘organic’, as you need the AI model to work with the data and mould itself into something useful. Conventional software development, by contrast, is still very complex, but functional in nature: A leads to B, which gives you C. AI modelling is not as straightforward, as it is based on trial and error:
“We cannot do that with data, we cannot say ‘Well, we just upscale the machine learning model and make it better’. That is hard work, that is based on a good insight into the data. So, that is pretty hard to scale!”
(Appendix 2, l. 571-572).
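To make this contrast concrete, consider the following minimal sketch (our own illustration, not code from Bluefragments; the VAT example and all names are our assumptions). A conventional program encodes a rule that is fully specified up front, whereas a model’s parameter has to be moulded to the data through repeated trial and error:

    # Conventional software: the rule is fully specified up front.
    # A leads to B, which gives you C.
    def gross_price(net_price: float) -> float:
        return net_price * 1.25  # e.g. adding 25% VAT

    # AI modelling: the rule is not known up front; a parameter is
    # moulded to the data through repeated trial and error.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # observed (input, output) pairs
    w = 0.0  # initial guess for the model y = w * x
    for _ in range(1000):  # the trial-and-error loop
        for x, y in data:
            error = w * x - y  # how wrong is the current guess?
            w -= 0.01 * error * x  # nudge the parameter towards the data
    print(round(w, 2))  # roughly 2.0 - learned from the data, never written as a rule

The point of the sketch is that the second half has no guaranteed outcome: if the data is poor, no amount of ‘upscaling’ of the loop makes the model better, which is exactly why Martinsen calls it hard to scale.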
On their website, Bluefragments presents their services as specialised enterprise solutions rather than as strategic services (Bluefragments, 2018a). In fairness, it makes sense for Bluefragments not to present themselves as strategic consultants. A core part of their business model surrounding AI is to present the technology as something that is easy to use, gives immediate operational benefits and requires only minimal re-schooling of existing IT personnel. By not bundling the technology with a big change in strategic philosophy, they are able to sell AI piece-by-piece to any customer. Bluefragments effectively leaves the strategy to the strategist, whoever that may be, in the given client company - such strategists might very well be PwC, Deloitte or EY. By focusing on technical solutions rather than strategic ones, Bluefragments avoids competing with such firms.
Bluefragments expects an intelligent machine to be something that will challenge the status quo, but they also believe that AI can be used in positive conjunction with human action. Martinsen specifically underlines the intelligent machine’s ability to predict, so as to advise its user towards the best possible action in a given situation. We assume that this is in contrast to an ‘unintelligent’ machine, as an unintelligent machine will undertake whatever action it is ordered to undertake for eternity, until it is ordered to stop. What makes an unintelligent machine ‘unintelligent’ is that it in effect exists in a vacuum, unable to react to its in- or output. Contrast that with an intelligent machine, which can change the way it performs an action as a reaction to its in- and output. Such reactions are typically based on a complicated set of predictions, i.e. an algorithm.
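The distinction can be illustrated with a small hypothetical sketch of our own (the thermostat example and all names are our assumptions, not drawn from the interview). The unintelligent machine repeats its ordered action regardless of outcome, while the intelligent one bases its next action on a prediction derived from its input:

    # An 'unintelligent' machine: performs the ordered action for
    # eternity, unable to react to its in- or output.
    def unintelligent_thermostat(temperature: float) -> str:
        return "heat on"  # the same action, no matter the input

    # An 'intelligent' machine: changes the way it performs an action
    # as a reaction to its in- and output, based on predictions.
    class IntelligentThermostat:
        def __init__(self, target: float):
            self.target = target
            self.previous = None  # remembered input, the basis of prediction

        def act(self, temperature: float) -> str:
            # Predict the next reading from the observed trend - a very
            # simple 'set of predictions', i.e. an algorithm.
            trend = 0.0 if self.previous is None else temperature - self.previous
            self.previous = temperature
            predicted_next = temperature + trend
            return "heat on" if predicted_next < self.target else "heat off"

The point is not the thermostat itself but the structure: the second machine’s behaviour is a function of predictions over its own history, whereas the first machine’s behaviour never changes.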
Bluefragments also talks about the maturity of an AI, where maturity is understood as the extent to which an AI is able to deliver useful results. Martinsen understands AI technology to be maturing, not as a whole, but in different areas, at different times and speeds - as an example, he mentions customer service. In the customer service example, the AI is mature since it enhances the service delivered by every customer service employee, which in turn enhances the overall customer experience. As pointed out by Martinsen, the whole company would then have benefitted from more effective communication with its customers (Appendix 2, l. 745-757).
Martinsen also acknowledges that if customer service is an example of a mature area for AI, then there are, of course, also areas where AI is not mature, which is where people’s opinions matter (Appendix 2, l. 762-764). He explained that there are different ways in which people’s opinions can influence the required performance of the AI:
“Of course AI doing something like a haircut, you ((a developer)) cannot do that, because that is really complex.” (Appendix 2, l. 765-766).
We understand ‘complexity’ in this context to mean the difficulty of conducting a successful interaction between man and machine: a person might provide impossibly vague input, yet hold extremely strict output requirements, which any AI would have to interpret. The complexity of making a mechanised hairdresser is thus twofold: making a machine able to understand the unique task it is given by each customer, and making a machine capable of executing said task correctly.
In some cases - even if you manage to get all the complex technical requirements right - the AI might still not be able to execute the task to a satisfactory degree:
“If you're going to a doctor, you can have a machine, an intelligent machine saying, what is wrong with you. But if you have deathly cancer you cannot have a machine telling you that - you need a person to do that, in the correct manner! And that is where it is super difficult to replace a human with a machine.” (Appendix 2, l. 767-769).
It would probably not be overly difficult to make a computer tell you, with a high level of accuracy, that you had cancer. However, it would never be a feasible solution, as people require compassion rather than accuracy in such a situation. Even so, that still leaves a role for AI:
“The combination of the doctor and the machine that can ensure that you (as a doctor) have way better facts and arguments, when you're talking to patient because you know you have gone through 10,000 more images, then you normally would have. And you can specify a way better solution for the cure.” (Appendix 2, l. 771-773).
In effect, Martinsen makes a socio-materialistic point, as the AI grants the doctor abilities that he or she would not otherwise have. In the example, the AI is super intelligent; in fact, it is even more intelligent than the doctor, as it can go through 10,000 more images, faster and more accurately. However, it cannot and should not replace the doctor, according to Martinsen, as it is simply a tool that aids him or her in diagnosing patients and ensuring better care. What makes this socio-materialistic is that only the doctor can provide the social dimension of a diagnosis, whereas only the AI can ensure the modern level of accuracy of said diagnosis.
An important part of socio-materialism is the idea that the way humans interact with materials shapes them. Martinsen also believes such interactions to be important:
“The way that many AI works, is that it grows based on an input, and that's also what we saw a year ago (or) year and a half ago. When we had a chatbot that was released, and it was only learning from what was communicated with it, and within a few hours it was a racist and talked really bad, so they had to shut it down within one day. If you say it in general, ‘will AI learn based on inputs?’ - yeah that is, I guess, the only way for it to learn.” (Appendix 2, l. 800-804).
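The failure mode Martinsen refers to can be reduced to a few lines of hypothetical code (our own sketch; the class and its methods are illustrative assumptions, not the actual chatbot). When every input becomes training material and nothing filters it, the bot’s output inevitably mirrors whatever its users feed it:

    import random

    class NaiveChatbot:
        # Learns only from what is communicated with it - the design
        # Martinsen describes - with no filter on the input.
        def __init__(self):
            self.memory = []

        def learn(self, message: str) -> None:
            self.memory.append(message)  # every input becomes training data

        def reply(self) -> str:
            # Replies are drawn straight from past inputs: if the inputs
            # turn abusive, so do the replies - hence the shutdown.
            return random.choice(self.memory) if self.memory else "Hi!"

    bot = NaiveChatbot()
    bot.learn("Hello there!")       # a benign user
    bot.learn("<abusive message>")  # nothing stops a hostile user
    print(bot.reply())              # may echo either input verbatim

The sketch also shows why ‘learning from inputs’ is inseparable from curating those inputs: the shaping of the material by its users is built into the design itself.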