

Chapter 5: The Personalised Subject

4.3 Algorithmic Capitalism

As of 2017, what seems to matter most in ranking is engagement, with high rank now based on user interaction, or ‘traffic’: clicking on ads and creating ‘network surplus value’, as elucidated in Chapter 3. Users have always clicked on links, but RankBrain now places greater importance on these user signals, as shown in the SEO mock-up diagram below (Figure 46). RankBrain now ostensibly deranks sites that may have good content if users do not click on the results (where before the signals measured keywords relative to content). Moreover, RankBrain incorporates the amount of time a user spends on a page, or ‘dwell time’, which only Google can measure. Once again, clicking is the measurement that determines the value of the web pages returned, constantly reflecting the cycles of user engagement. Traffic, another important factor, diminishes over time if there is no user interaction, and

[m]achine learning then becomes a “layer” on top of this. It becomes the final arbiter of rank––quality control, if you will (Kim 2017).
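RankBrain’s internals are undisclosed, so any formalisation can only be speculative. As a purely hypothetical illustration of how an engagement ‘layer’ might sit on top of a base relevance score––combining click-through rate and dwell time, and decaying as traffic goes stale––consider the following sketch. All signal names, weights and formulas here are my own assumptions, not Google’s actual computation.

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    base_relevance: float       # keyword/content score, 0..1
    click_through_rate: float   # clicks / impressions, 0..1
    avg_dwell_seconds: float    # mean time users stay on the page
    days_since_last_click: int  # staleness of user interaction

def engagement_score(p: PageSignals) -> float:
    """Hypothetical engagement 'layer' on top of a base relevance score.

    Dwell time is squashed to 0..1 (three minutes counts as saturation),
    and the whole engagement term decays as interaction goes stale,
    mirroring the claim that traffic diminishes without user interaction.
    """
    dwell = min(p.avg_dwell_seconds / 180.0, 1.0)
    decay = 0.5 ** (p.days_since_last_click / 30.0)  # halves monthly
    engagement = (0.6 * p.click_through_rate + 0.4 * dwell) * decay
    # The learned layer acts as 'final arbiter': engagement can demote
    # a page with good content that users do not click on.
    return 0.5 * p.base_relevance + 0.5 * engagement

def rank(pages: dict[str, PageSignals]) -> list[str]:
    """Order page identifiers by descending combined score."""
    return sorted(pages, key=lambda u: engagement_score(pages[u]), reverse=True)
```

Under these invented weights, a page with strong content but no clicks sinks below a weaker page that users engage with, which is the deranking behaviour described above, not a reconstruction of Google’s algorithm.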

In 2016 Google admitted that ‘ranking systems are made up of not one, but a whole series of algorithms’. Google constantly tweaks its proprietary algorithm––there were more than 2,400 changes in 2017 and more than 3,200 in 2018 (Grind et al. 2019)––and there are now reportedly ‘more than the 200 signals that Google uses to rank results’ (Weltevrede 2016:117; Sullivan 2010). Over the past twenty years PageRank cum RankBrain has been mythologised, fetishised and commodified because of the undisclosed ‘signals’ that determine ranking, yet its code still remains a corporate secret (Pasquale 2015).

order to communicate.112 In Speaking Code, the software studies theorist Geoff Cox argues that the importance of code, along with speech, should not be underestimated in its centrality to economics and culture as both an aesthetic and political expression. Programme code ‘mirrors the instability inherent in the relationship of speech to language’, with its performative aspects rendering it ‘interpretable in the context of its distribution and network of operations’ (Cox 2012).113 In this way code is understood not only as script but also as performance and, in this sense, resembles spoken language in that it is always ready for action (ibid).

This performativity in the execution of code lies in the calculation of a function: algorithmic procedures produce a value that not only reflects the relation between their inputs and outputs but also facilitates the reification of the code.

The contemporary dominating rationality in technical artefacts such as software, code, algorithms, devices and gadgets is presented as a rational universal whose production process and commodity nature are reified behind smooth technical designs (Bilic 2017:6).

It is this ‘technological rationality’ that brings code and calculation to the forefront of contemporary capitalist production, promoting efficiency and competition that prioritise commodity exchange and, along with this, new business models. ‘On the one hand, it [technics] is a product of human society and social conditions. On the other hand, its objectified existence exerts a specific form of influence over behaviour and consciousness of humans’ (ibid:7).

This type of ‘algorithmic capitalism’ (Bilic 2017) then alters the framework of social relations, incorporating humans and things with ‘technics’; concomitantly it plays a favourable role in commodity exchange, producing high profits––along with control and domination––for those who own it. As Friedrich Kittler proclaimed in his seminal text ‘There Is No Software’, ‘copyright claims for algorithms’ have already occurred: ‘[p]recisely because software does not exist as a machine-independent faculty, software as a commercial or American medium insists all the more’ (1993). He goes on to explain that ‘under these tragic conditions’, German criminal law defined software as a ‘material thing’ instead of upholding ‘software as a mental property’ (ibid). In turn, intellectual property (IP) becomes the defining feature of decision-making through the mathematical work carried out by the algorithm––the processing and ranking of information to make it accessible. In this way capitalism no longer innovates only with the living labour of humans.

As explained in Chapter 3, the ‘general intellect’ encompasses the social knowledge of workers––the living labour of users––who are searching online, yet the end products (user data) of these processes are privatised. The subsumption of labour to capital, Marx’s third stage of the division of labour, takes form through human words; however, the social processing of this configuration is that of humans and the machinic combined. The social force in production is the human labour of the many creating knowledge, which is now often replaced by machines––what Marx envisioned in the Grundrisse with his ‘Fragment on Machines’. Thus, the machine ‘does not produce surplus value but serves to accumulate and augment surplus value based on the exploitation of the general intellect’ (Pasquinelli 2015 cited by Bilic 2017:13). Google’s proprietary algorithm has already processed trillions of users’ search queries and interactions, yet the inputs and outputs of the black box are not transparent. ‘Code is not only invisible but also largely imperceptible in terms of its complex relationship with the economy and political agenda of giant software systems like Google’ (Parikka 2010:118 cited in Soon 2016:73). With algorithmic capitalism, Google’s key business strategy remains a trade secret comprised of patents (Bilic 2017:8) and, along with it, the ‘underlying logic of technological rationality’ (Marcuse 1941; 1964 cited by ibid:6), enacted through practices of ‘visibility management’ (Flyverbom et al. 2016).

112 According to Kittler, everyone should be literate in at least one human language and one programming language (1999).

113 The performativity of algorithms has been much discussed, primarily in the publications of Michel Callon and Donald MacKenzie. John Law and others have written on performative ‘states’ in regard to research methods. In linguistics, John Austin discussed performativity as ‘speech acts’, later built upon by Jacques Derrida and Judith Butler in regard to gender theory and the construction of the subject, or ‘self-making’. However, this is beyond the scope of my thesis.

4.4 (In)visibility Management

As relayed in the beginning of this chapter, AdWords, or Googleconomics, is the backbone of the company’s business model, yet that income is derived from ‘supporting intellectual property rights laws (IPR)’ (Munro 2016:567).

Transparency matters. And yet many companies go out of their way to hide results of their models or even their existence. One common justification is that the algorithm constitutes a “secret sauce” crucial to their business. It’s intellectual property, and it must be defended, if need be, with legions of lawyers and lobbyists. In the case of web giants like Google, Amazon, Facebook, these precisely tailored algorithms alone are worth hundreds of billions of dollars (O’Neil 2016:29).

For these companies, ‘the need to withhold information and protect strategic positions prevails’ (e.g., Sproull and Kiesler 1995 cited by Flyverbom et al. 2017:392); they are not willing to share exactly how this technology works because it constitutes their competitive edge (Noble 2018b).

Under pressure from lawmakers concerning fair use relative to commercial interest, ‘information providers often contend that their algorithms are trade secrets that must not be divulged in a public venue’ (Gillespie 2014:185):

Our patents, trademarks, trade secrets, copyrights, and other intellectual property rights are important assets for us. Various events outside of our control pose a threat to our intellectual property rights, as well as to our products, services and technologies (Google market report cited by Bilic 2017:10).

Since buying DoubleClick in 2007, there have been a series of mergers and acquisitions that have increased Google Search’s market dominance by incorporating the IP rights of other companies. Besides providing enormous amounts of revenue, Google’s patents play a crucial role in withholding knowledge from competitors and the general public. In Chapter 3, Rieder mentioned that originally two patents were filed (1998, 2001) in regard to the coveted PageRank. As of July 2017, 15,073 patents were ascribed to Google (Bilic 2017:10). The protection of the company’s IP corresponds to controlling information and ensuring the scarcity of search services in the market (ibid). This protection of IP has produced not just proprietary software but perhaps the most revenue-generating corporate secret (patent) of all time.

When Brin and Page were academics they started out with a utopian vision of ‘organising all the world’s information and making it accessible’ and criticised the effects of advertising on search results (1998). In their white paper Brin and Page denounced their competitors as guilty of commercialisation and a lack of transparency (ibid). Although Google proclaims itself to be a ‘transparent company’ that improves the lives of the people who use its products (if they share their data), the information that it decides to disclose is regulated and organised.

Thus, if we want to understand how contemporary organizations operate, we need to investigate how they “manage visibilities”; that is, how they make things transparent, keep some things hidden and seek ways to monitor others (Flyverbom et al. 2016:98-99).

Which information is kept hidden by Google is a response to how insight and scrutiny are controlled: ‘keeping certain types of information out of the open while demanding other information be out in the open is what […] management of visibilities bring to the fore’ (ibid:107).

This visibility management reflects Google’s concealment of IP and patents but also the power dynamics inside the corporation. Despite organisational decisions such as forcing employees to sign non-disclosure agreements before starting their jobs, ‘Google, for example, states that it seeks to “share everything, and trust Googlers to keep the information confidential”’ (Flyverbom et al. 2019:397).114 Although Google ‘manages visibilities’ by hiding the workings of its algorithms, the SEO industry and researchers (Feuz et al. 2011, Pasquale 2015, O’Neil 2016, Chun 2016, Weltevrede 2016, Noble 2018) have increasingly played an important role in ‘visualising’ the interior workings of the black box. The focus then might need to shift to visualising secrecy, as power begins with a ‘move from a politics of knowing to a politics of seeing’ (Flyverbom et al. 2016:104). It should be kept in mind, however, that what one sees is not always what one knows.

Regarding the notion of secrecy, Georg Simmel speculated that ‘as the affairs of people at large become more and more public, those of individuals become more and more secret’ (Simmel 1906:468 cited in Beyes and Pias 2019:88).115 Ostensibly the organising of secrecy is reversed––nowadays individuals fight to maintain their secrets and privacy. Yet the habit of searching could alternatively become a new form of (in)visibility management. Instead of being personalised subjects subjected to Google Search and supplying their data, users could hide, control or even delete that data and therefore need not give it away in exchange for a free service. Rather than Google’s ‘trusted user’, they could embody agency, evincing a hacker ethic with the goal of being off the radar and able to decide, just as interfaces do, what to show and what not to. Confronted with Google’s proprietary corporate search algorithms, whose evaluative criteria and code are concealed, they could instead find ways to obfuscate their own online presence. Hidden from the proprietary algorithms that are designed to be obscure and that facilitate obscurity, the ‘trusted user’ could become much more like the algorithms themselves: stealthy and arcane, shrouded in the (onion) layers of the Tor Browser instead of the filter bubble of Google Search.

114 More on Google’s ‘perks’ (2007): https://www.youtube.com/watch?v=XyVDF6BiKtQ. Google also keeps track of all the data it collects on its employees, through ‘living labs’ and ‘nudges’ (2013): https://www.youtube.com/watch?v=9ANgEo40VSE