5.1. Bluefragments’ Perception of AI
5.1.1 Nature of Technology
undertake actions, which you understand to be a display of intelligence. To that extent, if a flower were to define intelligence, the definition would probably be quite different from that of a human being. Martinsen uses the example of an extra-terrestrial encounter as a defining moment for our understanding of our own intelligence. The logic goes that we will only know how intelligent we are when we can meaningfully compare ourselves with someone or something of an equal or superior intelligence to our own (Appendix 2, l. 253-255).
This understanding of intelligence runs parallel to that of the Turing test (Turing, 1950). It is not surprising that someone with a background in software engineering understands intelligence as the result of an action, rather than as an inherent quality of consciousness itself. You can only know what you can prove, and therefore intelligence is defined by what can be directly observed. AI, in Bluefragments’ perspective, is then clearly defined as:
“AI is what we define as a machine doing something that we would normally have an intelligent person to do. So, that is AI in our perspective.” (Appendix 2, l. 264-265).
In other words, the intelligence of an AI is determined by the extent to which the person affected by the AI perceives the machine to undertake actions that equal or surpass those of a human being. Martinsen describes the term as follows:
“AI in itself is actually nothing, it's just a term. But beneath that, we have machine learning, we have neural networks and that is something!” (Appendix 2, l. 270-272).
When Martinsen says AI is nothing, he means that the term does not refer to any specific technology, but to a variety of different technologies. AI is a term used to describe either the desired or the actual outcome of, for instance, an ANN. You could go further and say that AI becomes a term that can be applied as a conclusion when the use of a tool such as an ANN has been executed correctly - ‘correctly’ being the above-mentioned display of intelligent action. AI, then, does not describe the technology, but its effect on humans.
Bluefragments also understands the nature of AI in terms of the operational autonomy of the technology itself. Martinsen distinguishes AI from other technologies by the amount and use of rules in an AI environment:
“We cannot set up the rules for an AI environment. They are (.) I don’t know if it’s the right word to say that, but they have their own life. But it is an impossible task for us for humans to actually set up the number of rules that we need to do, when working with AI.” (Appendix 2, l. 280-283).
Life in this context should not be understood as the AI being a living organism, but rather as a metaphor for the inherent operational autonomy of these systems. The system is expected to be able to act on its own with only a limited set of instructions, whereas traditional software requires every step of a process to be pre-programmed to complete a given task successfully. This autonomy is an essential part of how Bluefragments understands the nature of AI and its increasing popularity. Martinsen continues by explaining that AI allows for solutions that would be unattainable with conventional software.
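Martinsen’s distinction between hand-written rules and systems that “have their own life” can be illustrated with a small sketch. The toy example below (the task, data, and function names are invented for illustration) contrasts conventional software, where the programmer encodes the decision rule explicitly, with a minimal learned system - here a perceptron - which arrives at comparable behaviour from labelled examples rather than from hand-written rules:

```python
def rule_based_is_positive(x, y):
    # Conventional software: the programmer must anticipate the task
    # and encode the decision rule explicitly.
    return x + y > 0


def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Learned system: no classification rule is written by hand; the
    # weights are adjusted from labelled examples instead.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), label in zip(samples, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
            error = label - pred
            w[0] += lr * error * x
            w[1] += lr * error * y
            b += lr * error
    return w, b


# Toy training data: points labelled 1 when x + y > 0, otherwise 0.
samples = [(1, 1), (2, -1), (-1, -2), (-2, 1), (3, 2), (-3, -1)]
labels = [1, 1, 0, 0, 1, 0]
w, b = train_perceptron(samples, labels)


def learned_is_positive(x, y):
    # The "rule" here was never written down; it emerged from the data.
    return w[0] * x + w[1] * y + b > 0


print(rule_based_is_positive(2, 1))  # True, by an explicit rule
print(learned_is_positive(2, 1))     # True, by learned weights
```

The two functions end up making the same decisions, but only the first one contains a rule a human wrote; in the second, the behaviour is a by-product of training - which is the sense in which, for Martinsen, it is impossible to “set up the rules” for an AI environment by hand.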
However, Bluefragments does not understand AI as a fully developed technology. On the contrary, they understand it to be in the very early stages of usability. In fact, Martinsen believes that the number of software developers will decrease in the coming decades, as he reasons:
“Because AI would become so easy to use, so that everybody can use it even kids can ((would be able to)) use it.” (Appendix 2, l. 291).
Not surprisingly, Bluefragments are enthusiastic about the current application of AI, and they see the technology’s potential as far from fulfilled. Nonetheless, they do acknowledge that the technology currently has certain limitations:
“It's really important to keep in mind that AI is actually just as smart as we do it. And right now, many AI engines are more stupid than humans, and the reason for that is that they are super-intelligent in one specific area, but what they cannot do is what we can. Meaning, that we can be intelligent in many different areas and levels. A machine can only be intelligent in one very narrow area - but it can instead be so ((very)) intelligent, more than we can.” (Appendix 2, l. 299-303).
This, again, draws on the idea of intelligence as an action. We understand Google’s AlphaGo as being highly intelligent, but only in the context of playing Go. If you were to ask AlphaGo about the weather, it would not be able to answer you; in fact, you might not even be able to ask the question in the first place, as the AI is not set up for such an input (Silver & Hassabis, 2016). Martinsen addresses this when he talks about super-intelligence in specific areas.
Martinsen’s understanding of intelligence is grounded in what we will describe as relativism - the idea that intelligence is relative to its context. We understand relativism, in this specific context, as a basic term describing a way of thinking about intelligence, rather than as a philosophical term used in a broad theoretical context. Bluefragments’ relativistic interpretation allows for some interesting points about how we currently understand an AI’s intelligence. We are currently willing to accept that a machine’s intelligence is contextual, but we do not apply the same contextuality to human beings. You would most likely describe a person who, like an AI, displayed superintelligence in only one specific area as having some form of limited mental capacity. We expect humans to have an average, but very broad, intelligence. Therefore, according to Martinsen, we understand human and machine intelligence differently, as the two intelligences operate in different contexts.
However, Bluefragments expect that the context of intelligence might change. Martinsen specifically expects that the areas of machine superintelligence will change in the future:
“Yes! There is no doubt that over time it ((AI)) will get a smarter and smarter, of course. And we will see that in many aspects, we as humans will need to redefine ourselves as well, because of that. And ((whether)) it will happen in the next 5-10 years, I don’t know. It is totally impossible to predict. But what we can predict is that it's definitely going to be more intelligent, and even more intelligent than us in many areas.” (Appendix 2, l.
It is clear that Bluefragments expect the reliability and availability of AI to increase in the coming decades, and that this change will have a profound impact. Bluefragments expect that the change will force us to rethink what we expect from ourselves and everyone around us, since everyone will have previously unimagined computing power at their fingertips. This will allow everyone to undertake, in a matter of hours, actions that would previously have required years. This technological development will negatively affect the relevance of a multitude of specialisations, a consequence Martinsen is aware of:
“There is no doubt that we will see the involvement of AI as a threat for humans. Because many people will feel threatened by this, they will see that machines can do stuff we cannot do.” (Appendix 2, l. 315-316).