Examining the differences between human and bot social media accounts: A case study of the Russia-Ukraine War

Abstract

In this study, we examined online conversations on Twitter about the Russia-Ukraine War and investigated differences between bot and non-bot accounts. Using ‘Russia’ and ‘Ukraine’ as keywords, we employed a Twitter API to collect data on Twitter from 17 February to 18 March 2022. We obtained a large dataset of over 3.7 million tweets generated by about one million distinct accounts. We then analyzed one percent of the data using interval sampling for bot detection and found that about 13.4 percent of the accounts were social media bots, responsible for about 16.7 percent of the tweets. We examined the differences between bots and non-bots in online conversations on the Russia-Ukraine War through account analysis, textual analysis, and interaction analysis. The results show that bots existed on both sides: bots from the Ukrainian side contributed a louder voice, while bots on the Russian side demonstrated more effective communication. In addition, there were differences and similarities between bots and non-bots in the behavior of online conversations, but the differences seemed to be relatively weaker than those found in previous studies.

Contents
1. Introduction
2. Methodology
3. Findings
4. Conclusion

1. Introduction

Social media is no longer merely a tool of communication but also a potential ‘weapon’ that can greatly affect human perceptions and opinions (Orabi, et al., 2020). Empirical studies suggest that social media has an impact on opinion formation and transformation (Chen, et al., 2022), emotional contagion (Ferrara and Yang, 2015), and the flow of public opinion (Bradshaw and Howard, 2018; Cheng, et al., 2020). Such effects have substantive implications for international relations and politics (Barnett, et al., 2017). Some individuals and organizations utilize social media to capture improper benefits (Allem, et al., 2020; Orabi, et al., 2020), and the anonymous nature of social media makes the public more susceptible to various forms of manipulation (Tucker, et al., 2017). One of the leading tools capable of such manipulation is social media bots. Woolley and Howard (2016) argued that ‘it has become a nexus for some of the most pressing issues around algorithms, automation, and Internet policy.’ In terms of international politics, researchers proposed that ‘the impact of social media bots should be taken into account in any study of online political dialogue’ [1].

1.1. Social media bots

Social media bots are often defined as a type of ‘automation software that controls an account on a particular OSN (Online Social Network) and has the ability to perform basic activities such as posting a message and sending a connection request’ [2]. This type of definition emphasizes the function of social media bots but often omits their potential effects on human agents. Another type of definition focuses more on the anthropomorphic nature of social media bots and how they affect social systems. For instance, Igawa, et al. [3] noted that ‘on Twitter social robots, called “bots”, pretend to be human beings in order to gain followers and replies from target users and promotes a product or agenda’. After a comprehensive analysis of the various definitions of social media bots, Ferrara, et al. (2016) further explained them as ‘a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior’. In general, Ferrara’s definition of social media bots is more comprehensive and widely recognized. This study followed Ferrara’s definition and conceptualized bots as machine accounts that participated in selected topics by emulating human behaviors (including automatically posting content, engaging in user interactions, etc.) on social media platforms (Ferrara, et al., 2016).

1.2. Social media bots and politics

In recent years, social media bots have been used extensively for various forms of malicious political manipulation (Albadi, et al., 2019). Sometimes, due to human users’ lack of knowledge, social media bots’ influences appear larger (Everett, et al., 2016; Bolsover and Howard, 2019). Researchers have pointed out that political actors and governments worldwide have started using social media bots to muddy political issues (Forelle, et al., 2015). The New York Times and New Yorker also pointed out that social media bots have become a non-negligible political tool (Dubbin, 2013; Urbina, 2013; Woolley, 2016).

Scholars have examined various cases of social media bots interfering in the political sphere. For instance, Ratkiewicz, et al. (2011) studied midterm election discussions on Twitter and found that social media bots infiltrated political conversations by showing support to some candidates while smearing others. In 2012, research revealed that politicians used social media bots to augment the number of followers and to achieve an ‘illusory prosperity’ of account influence (Chu, et al., 2012). In 2014, it was found that militaries, state-contracted firms, and elected officials used social media bots to set agendas by disseminating propaganda and flooding newsfeeds with political spam (Cook, et al., 2014). Bessi and Ferrara [4] found that bots have actively engaged in related conversations on Twitter during the 2016 and 2020 U.S. elections. Ferrara, et al. (2020) found that social media bots can exacerbate users’ consumption of content with the same political stance, thus enhancing existing political echo chambers. Abokhodair and McDonald (2015) examined Syrian social media bots on Twitter and depicted their behavioral patterns, such as mimicking human behaviors, reporting news, and posting misinformation. Howard and Kollanyi (2016) studied Brexit-related computational propaganda during the UK-EU Referendum and found that less than one percent of sampled accounts generated almost a third of all the messages. Albadi, et al. (2019) found that 11 percent of hate speech in the Arabic context was posted by automated agents.

Social media bots can engage in political conversations with different strategies (Ratkiewicz, et al., 2011; Chu, et al., 2012; Ferrara, et al., 2020). The Russia-Ukraine War constitutes an ideal case to investigate the behavior and the influence of social media bots. Since Russia officially declared war on Ukraine on 24 February 2022, the two countries have been engaging in an information war on social media (Bergengruen, 2022). Tim Bajarin, a well-known columnist at Forbes, commented: ‘This is the first major conflict where a true cyberwar is attached to a real war’ (Bajarin, 2022). Evidence shows that within 24 hours of the start of the War, the amount of relevant information generated on social media exceeded that of a week in the Iraq War (Johnson, 2022). Social media bots were found to be busy setting online agendas as they have done in the past (Muscat and Siebert, 2022; Purtill, 2022). However, the intervention strategies and effects of social media bots in the ongoing Russia-Ukraine War remain unknown. Ferrara, et al. (2020) concluded that there are two dimensions of manipulation by social media bots: automation (e.g., the prevalence of bots) and distortion (e.g., manipulation of narratives, injection of conspiracies or rumors).

In this study, we investigate the extent to which bots were used in the Russia-Ukraine War conversations on Twitter, the effects of social media bots, and how they differ from human users. Our study’s contribution lies in two aspects. First, exploring the commonalities in the ‘group behavior’ of social media bots can help better understand the operational logic of social bots. Second, this study could contribute to understanding the roles social media bots played in shaping online public opinion regarding the Russia-Ukraine War. In addition, studies of social media bots in international conflicts are few and far between; therefore, this study can potentially provide some empirical evidence and set pathways for related and follow-up studies. In particular, the following research questions are proposed:

RQ1: To what extent do social media bots interfere with Twitter conversations about the Russia-Ukraine War?

RQ2: What are the account features of the social media bots compared to non-bots concerning the Russia-Ukraine War Twitter discussion?

RQ3: What are the textual features of the tweets posted by bots compared to non-bots concerning the Russia-Ukraine War Twitter discussion?

RQ4: How effective were those social media bots compared to non-bots in attracting likes, comments, and retweets?

2. Methodology

2.1. Data

The current study collected the research data from Twitter, a global social media platform with close to 400 million users (Dean, 2022). Twitter is an outlet for up-to-the-minute status updates, allowing users to respond in real time to news and political events. In addition to hosting a huge amount of online political conversation, Twitter has become a ‘breeding ground’ for social media bots and has been widely used for social media bot studies (Alothali, et al., 2018).

We collected all tweets containing the keywords ‘Russia’ and ‘Ukraine’ between 17 February and 17 March 2022. Due to the limitations of the research tools dealing with non-English languages (Albadi, et al., 2019), this study focused on English tweets. Therefore, non-English tweets in the raw data were removed during data analysis. Although the Russia-Ukraine War started on 24 February 2022, there were many clues and build-ups on social media before the War. Thus, we set the data collection starting date at one week before the War to capture the differences between the pre-War and post-War periods. A crawler algorithm based on ‘tweepy’, an open-source Python package, was adopted for data crawling. The data we obtained included tweet content, likes, retweets, comments, and basic information about user accounts, such as registration time, number of followers, and self-disclosure information. These data made it possible to detect social media bots based on different features of the accounts. In the end, our raw dataset comprised over 3.7 million tweets posted by nearly 0.97 million distinct users.
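For reference, a minimal sketch of this kind of keyword-based collection with tweepy, assuming Twitter API v2 full-archive (academic) access; the bearer token, query string, and field choices are illustrative assumptions, not the authors’ actual crawler settings:

```python
import tweepy

# Placeholder credentials; the authors' exact crawler configuration is not public.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

collected = []
for page in tweepy.Paginator(
        client.search_all_tweets,                      # full-archive search (academic access)
        query="(Russia Ukraine) lang:en",              # illustrative keyword query
        start_time="2022-02-17T00:00:00Z",
        end_time="2022-03-17T23:59:59Z",
        tweet_fields=["created_at", "public_metrics", "lang", "author_id"],
        user_fields=["created_at", "description", "location", "public_metrics"],
        expansions=["author_id"],
        max_results=500):
    collected.extend(page.data or [])                  # tweet objects with engagement counts
```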

2.2. Sampling

Due to computational power constraints, a random sample of the data was used for statistical analysis. We adopted the interval sampling method (as relevant tweets on Twitter are not evenly distributed) and sampled one percent of the data for analysis. As a result, the sample dataset consisted of 37,245 tweets generated by 28,524 distinct accounts.
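A minimal illustration of one-percent interval (systematic) sampling over a time-ordered tweet table, assuming the data sit in a pandas DataFrame sorted by timestamp; the column and file names are placeholders:

```python
import pandas as pd

def interval_sample(df: pd.DataFrame, fraction: float = 0.01) -> pd.DataFrame:
    """Take every k-th row of a time-ordered DataFrame (systematic sampling)."""
    step = int(round(1 / fraction))          # a 1% sample keeps every 100th tweet
    ordered = df.sort_values("created_at")   # preserve the temporal ordering of the stream
    return ordered.iloc[::step]

# Example usage (hypothetical file name):
# tweets = pd.read_csv("ru_ua_tweets.csv", parse_dates=["created_at"])
# sample = interval_sample(tweets, 0.01)
```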

2.3. Bot detection

Identifying social media bot accounts has been an oft-studied topic in the past few years (Ferrara, et al., 2016; Subrahmanian, et al., 2016). There are four common existing bot detection techniques: graph-based, machine learning-based, crowdsourcing-based, and anomaly-based (Orabi, et al., 2020). In this study, we adopted the machine learning approach, which is the most widely used (Chen, et al., 2022). Botometer, developed by Yang, et al. (2022), has been proven to be a relatively reliable tool for social bot detection. Botometer, formerly called BotOrNot, is a machine-learning framework that extracts and analyzes a set of over 1,000 features, including content, network structure, temporal activity, user profile data, and text sentiment, to produce a score indicating the likelihood that the inspected account is a social bot (Bessi and Ferrara, 2016). The closer the bot score is to one, the more likely the account is a social media bot; conversely, the closer it is to zero, the more likely the account is a non-bot [5]. In this study, we tested the 28,524 accounts in the sample using Botometer and plotted the probability distribution of bot scores (Figure 1). According to the graph, most of the cases fall below 0.5, but an obvious bump appears between 0.8 and 1, suggesting that a significant number of accounts exhibit clear bot characteristics (Ferrara, et al., 2016; Davis, et al., 2016).
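A hedged sketch of scoring accounts with the botometer Python package, which wraps the Botometer API via RapidAPI; the credentials and account handles are placeholders, and the exact field used here for thresholding (the English overall raw score) is an assumption rather than the authors’ documented choice:

```python
import botometer

# Placeholder credentials; Botometer is accessed through RapidAPI plus a Twitter app.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "...", "consumer_secret": "...",
    "access_token": "...", "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

bot_flags = {}
for screen_name in ["@example_account_1", "@example_account_2"]:   # sampled accounts
    result = bom.check_account(screen_name)
    # Assumption: the 0-1 overall score is read from raw_scores -> english -> overall.
    score = result["raw_scores"]["english"]["overall"]
    bot_flags[screen_name] = score >= 0.8    # example cut-off; the study's threshold is discussed below
```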

Figure 1: Distribution of the probability density of bot scores.

Different criteria have been proposed in the existing literature to identify bots. Some researchers use a bot score of 0.5 as the threshold for marking social media bots (Badawy, et al., 2018; Ferrara, 2017a; Shao, et al., 2018; Chen, et al., 2022). In this study, we adopted a higher threshold (0.8) for bot identification. This criterion was used by Broniatowski, et al. (2018) in their research on ‘Russian trolls.’ Using this criterion, we detected a total of 5,439 accounts with a bot score higher than the 0.8 threshold. After manual checking, we found 1,623 institution/media accounts labeled as bots. These are verified accounts of organizations, institutions, or public figures. These types of accounts were often treated as social media bots in previous studies, but we argue that they are distinctly different from bots for three reasons. First, as we argued earlier, social media bots should possess ‘anthropomorphic,’ ‘invisible,’ and ‘automated’ characteristics (Boshmaf, et al., 2011; Igawa, et al., 2016), but institution/media accounts differ from social media bots on the ‘anthropomorphic’ and ‘invisibility’ criteria: these accounts usually display their true identities and detailed self-disclosure information (name, self-introduction, geolocation, etc.). Second, in terms of intent, social media bots can be divided into benign and malicious categories (Ferrara, et al., 2016). Stieglitz, et al. [6] noted that ‘Benign bots aggregate content, respond automatically, and perform other useful services. Malicious bots, in contrast, are designed with a purpose to harm.’ Mainstream institutional or media accounts rarely maliciously disrupt the rules and order of online conversations. Finally, in terms of legitimacy, media accounts are legitimate sources of news and information, while social media bots are not (González-Bailón and De Domenico, 2021). Based on these rationales, we excluded these 1,623 institution/media accounts from our analysis. Table 1 presents the distribution of the final Botometer scores across five intervals. Table 2 presents the final outcomes of our bot detection process.

Table 1: Distribution of Botometer scores.

Table 2: Bot detection results.

2.4. Content coding

To obtain a deeper understanding of social media bots’ activity during the Russia-Ukraine War, we examined the political stance of the tweets produced by social media bots in online conversations. Previous studies used hashtags to determine the binary political stance of bots (Bessi and Ferrara, 2016; Ferrara, et al., 2020). However, the hashtags used by bots in the Russia-Ukraine War appear vague in meaning. Therefore, we decided to code the political stance of the sampled tweets through machine learning. Following previous studies, we used the Support Vector Machine (SVM), a stable multi-class classification machine learning model, to classify the stances and attitudes of tweets (Joachims, 1998; Chen, et al., 2022). First, we set up a coding team consisting of postgraduate students. After three training workshops and pilot coding sessions, the research team and the coding team worked together to create seven political attitude categories: ‘Pro-Russia,’ ‘Pro-Ukraine,’ ‘Anti-Russia,’ ‘Anti-Ukraine,’ ‘Pro-Russia and Anti-Ukraine,’ ‘Pro-Ukraine and Anti-Russia,’ and ‘Neutral.’ Examples of typical tweets corresponding to the different attitudes are provided in Table 3. Tweets reflecting pro- or anti- Russia/Ukraine attitudes usually carry words or hashtags with obvious value judgments, such as ‘invasion,’ ‘Nazi,’ ‘#istandwithukraine,’ etc., whereas neutral tweets tend to be news reports, calls for peace, or completely unrelated content. To improve the accuracy of machine learning, we further combined these seven categories into three broader political stances: ‘the Russian side,’ ‘the Ukrainian side,’ and ‘Neutral’ (see Figure 2).


Table 3: Example tweets for political attitude coding.


Figure 2: Seven political attitudes and three political stances.

Two coders were trained to code the three types of stances manually. The Cohen’s Kappa value for the two coders was 0.84, demonstrating good inter-coder reliability. After manual coding, we obtained a data set containing 6,170 tweets with different stances and attitudes. We randomly divided these tweets into a training set and a testing set for machine learning. Among them, 5,995 tweets were in the training set, including 2,230 tweets on the Russian side, 1,883 on the Ukrainian side, and 1,882 neutral. The remaining 175 tweets were used for model validation. After model validation, the accuracy of our machine learning model turned out to be 98.6 percent, which outperformed those of previous studies using the SVM multi-class model (Guo, et al., 2020; Chen, et al., 2022). In addition, we performed an extra validity check on the outcomes of the model. We randomly extracted 200 machine-predicted tweets and returned them to the two coders for validation. The consistency between the coders and the model was 92.0 percent, indicating a high level of accuracy. Finally, we processed 6,200 bot tweets and 27,787 non-bot tweets in our model for prediction. The results showed that 4.77 percent of bot tweets were on the Russian side, 42.39 percent on the Ukrainian side, and the rest, 52.84 percent, were neutral. For non-bot tweets, 11.65 percent were on the Russian side, 43.57 percent on the Ukrainian side, and the rest, 44.78 percent, were neutral (see Table 4).
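A minimal sketch of a TF-IDF plus SVM stance classifier of the kind described above, using scikit-learn; the example texts, label names, and preprocessing choices are illustrative assumptions, since the authors do not report their exact features or kernel:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-coded examples with stances in {"russia", "ukraine", "neutral"}.
train_texts = ["Stop the invasion, stand with Ukraine",
               "NATO expansion provoked this conflict",
               "Peace talks continue in Belarus"]
train_labels = ["ukraine", "russia", "neutral"]
test_texts, test_labels = ["We stand with Ukraine tonight"], ["ukraine"]

# TF-IDF features feeding a linear SVM (multi-class handled one-vs-rest by default).
model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                      LinearSVC())
model.fit(train_texts, train_labels)

print("validation accuracy:", accuracy_score(test_labels, model.predict(test_texts)))
# stances = model.predict(bot_tweet_texts)   # prediction for the unlabeled sample
```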

Table 4: SVM prediction results.

2.5. Data analysis

We took an event-based difference approach, which is widely used in social media bot research (Bessi and Ferrara, 2016; Albadi, et al., 2019; Zelenkauskaite, et al., 2021). This approach provided a method for answering the research questions we raised. More specifically, the data analysis for this study consists of three parts: a) account analysis, b) textual analysis, and c) interaction analysis. All three parts present descriptive statistics to show the similarities and differences between bot and non-bot accounts. The text of the tweets was also included as an essential object of investigation, and we analyzed hashtags, keywords, sentiment, and the co-occurrence network of ‘mentions’ (@) in tweets.

Hashtags and keywords: Since hashtags and keywords are valuable metrics for analyzing tweets, we used the ‘bottom-up’ approach of Zelenkauskaite, et al. (2021) to extract features of tweets through automated programs, supplemented by manual analysis, to describe the profile of the event. In terms of hashtags, we used regular expression operations to identify #hashtags in the texts and performed word frequency analysis. Then the TF-IDF feature algorithm was applied for keyword analysis. The more important a word is for representing a text, the higher its TF-IDF value (Aizawa, 2003). The raw text was first converted to lowercase, URL links were then removed, and finally words were stemmed. We removed words like ‘we,’ ‘is,’ ‘will,’ ‘has,’ ‘now,’ and other functional words to ensure that the keyword list retained unique words representing the meaning of the text.
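A small sketch of the hashtag extraction and TF-IDF keyword scoring steps described above, using a regular expression and scikit-learn; the sample texts, stop word list, and absence of an explicit stemmer are illustrative simplifications:

```python
import re
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["#StandWithUkraine Russia escalates strikes https://example.org",
          "breaking: russia ukraine talks resume"]          # sample texts

# Hashtag frequency: find #tags, lowercase them, and count.
hashtags = Counter(tag.lower() for text in tweets for tag in re.findall(r"#\w+", text))

# Keyword importance: lowercase, strip URLs, then score terms with TF-IDF.
cleaned = [re.sub(r"https?://\S+", "", text.lower()) for text in tweets]
vectorizer = TfidfVectorizer(stop_words="english")           # drops functional words like 'we', 'is'
tfidf = vectorizer.fit_transform(cleaned)
terms = vectorizer.get_feature_names_out()
top_terms = sorted(zip(terms, tfidf.sum(axis=0).A1), key=lambda x: -x[1])[:20]
print(hashtags.most_common(5), top_terms[:5])
```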

Sentiment: To implement sentiment analysis, we used TextBlob, a Python text data processing library. The library provides a simple API for typical natural language processing (NLP) tasks such as lexical tagging, noun phrase extraction, sentiment analysis, classification, translation, etc. (Manguri, et al., 2020). The TextBlob sentiment analysis function returns the sentiment polarity score, ranging from -1.0 to 1.0 (0 indicates neutral; -1.0 indicates negative sentiment; 1.0 indicates positive sentiment) (Gujjar, et al., 2021).
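For illustration, a minimal polarity-scoring pass with TextBlob; the bucketing of scores into negative/neutral/positive classes shown here uses hypothetical cut-offs, since the paper does not report its exact thresholds:

```python
from textblob import TextBlob

def polarity(text: str) -> float:
    """Return TextBlob's sentiment polarity in [-1.0, 1.0]."""
    return TextBlob(text).sentiment.polarity

def polarity_label(score: float) -> str:
    # Hypothetical cut-offs for turning the continuous score into three classes.
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

print(polarity_label(polarity("We call for peace and an end to the war.")))
```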

Network: We performed a network analysis of the tweets using Gephi 0.9.5. Gephi is an open-source software package for network visualization and analysis. It can help researchers reveal network data patterns (Bastian, et al., 2009).

3. Findings


3.1. Bot detection (RQ1)

The first research question (RQ1) intends to explore the extent to which social media bots interfere with online conversations about the Russia-Ukraine War. We first report descriptive statistics for the samples included in our analysis (see Table 5). There were 3,816 bot accounts (13.4 percent) and 22,805 non-bot accounts (80.0 percent), which were responsible for 6,200 (16.7 percent) bot tweets and 27,787 (74.6 percent) non-bot tweets. The data revealed a considerable proportion of social media bots engaging in online conversations about the Russia-Ukraine War.

By extrapolating to the entire data set, we estimate that there were about 126,100 to 133,860 bots in the data set, accounting for roughly 13.0 percent to 13.8 percent of the accounts active in the Russia-Ukraine online conversations, and that they were responsible for about 606,360 to 632,400 tweets, or 16.3 percent to 17.0 percent of the total volume (95 percent confidence level). In addition, because we excluded institutional/media accounts and unknown accounts from our data processing, they do not appear in our subsequent analysis.
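A brief sketch of how such an extrapolation can be reproduced with a normal-approximation (Wald) 95 percent confidence interval for the sample proportion; the exact interval method the authors used is not stated, so this is one plausible reconstruction with the counts reported above:

```python
import math

n_sampled_accounts = 28_524       # accounts scored in the 1% sample
n_bots = 3_816                    # accounts above the 0.8 threshold (excl. institution/media)
population_accounts = 970_000     # approximate number of distinct accounts in the full dataset

p = n_bots / n_sampled_accounts                          # ~0.134
se = math.sqrt(p * (1 - p) / n_sampled_accounts)         # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se                 # 95% Wald interval

print(f"bot share: {low:.3f}-{high:.3f}")                # roughly 0.130-0.138
print(f"estimated bots: {low*population_accounts:,.0f}-{high*population_accounts:,.0f}")
```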

Table 5: Results of extrapolation of samples to the entire dataset.

Note: The ‘Population estimate’ column is based on statistical extrapolation at a 95 percent confidence level.

3.1.1. Bot activity level

The conversation about the Russia-Ukraine War in Twitter space is also part of the confrontation between Russia and Ukraine in cyberspace (Jaitner, 2015). Will the online discussion on social media be affected by the progress of the war between the two sides? Following the work by Zelenkauskaite, et al. (2021), we counted the number of bot tweets and non-bot tweets in each hour and derived two curves. To further explore the contributing factors for the changes in the curves, we linked the volume of tweets generated by bots and non- bots to critical events of the Russia-Ukraine War. According to Al Jazeera’s ‘Timeline: A month of Russia’s war in Ukraine’ and ‘Timeline: The first 100 days of Russia’s war in Ukraine’ (see Appendix), we sorted out the key events of the Russia-Ukraine war and used the relevant information to plot Figure 3.


Figure 3: Time-series distribution of bot and non-bot tweets (combined with offline events).

Note: Larger version of Figure 3 available here.

Overall, the trajectories of the two curves are relatively stable and similar, and the bots and non-bots did not show obvious divergence at any time point. This indicates no significant difference between the temporal trends in the production of tweets by bots and non-bots. Specifically, both bot and non-bot tweeting curves showed an obvious upward trend on 22 February, when the U.S. stepped up sanctions toward Russia and warned of war risks. Next, both curves showed a steep spike on 24 February, when Putin announced the commencement of the special military operation. The tweeting curves of both bots and non-bots spiked again on 26 and 28 February, respectively. The only difference between the two curves is that non-bot tweets show obvious diurnal variation, while such fluctuations are less pronounced in bot tweets.

3.2. Account analysis (RQ2)

RQ2: What are the account features of those social media bots compared to non-bots concerning the Russia-Ukraine War Twitter discussion? To answer this question, we first analyzed the average account age, the average number of daily tweets, and the average following and follower numbers of the accounts.

3.2.1. Basic statistics


Table 6: Basic statistics for bot and non-bot accounts.

For average account age, the results showed that the average account age of bot accounts was 5.4 years, and the average account age of non-bot accounts was 7.5 years, indicating an overall younger profile of bot accounts. This finding is consistent with those of Hagen, et al. (2022). By visualizing the creation time of the accounts (Figure 4), we can see that 41.9 percent of bot accounts were created in the last three years (2020, 2021, 2022).

Figure 4: Distribution of the creation years of bots.

To understand the activity level of those accounts, we divided the total number of tweets by the number of days since the day the account was created to obtain the average daily number of tweets. Results show that bot accounts were much more active (38.9 tweets/day on average) than non-bot accounts (17.6 tweets/day on average) during the observation period.

We also examined the differences between bots and non-bots in terms of their following/follower status. Twitter’s following function can help users receive public posts from targeted users. The number of ‘followers’ refers to how many people are ‘following’ an account (Wald, et al., 2013). Previous studies found that social media bots had a higher number of ‘following’ and fewer ‘followers’ than human users (Stieglitz, et al., 2017). We found a similar pattern: the average ratio of ‘following’ to ‘followers’ for non-bot accounts (1:20) was much higher than that of bots (1:7).

3.2.2. Self-disclosure


We further analyzed the self-disclosure differences between bots and non-bots, including geolocation and lexical features of account self-descriptions. First, we analyzed the geolocation of the accounts. The geolocation of accounts in online conversations has been examined in previous research (Bessi and Ferrara, 2016; Shane, 2017; Zelenkauskaite, et al., 2021). We visualized the geolocation of bot and non-bot accounts separately (Figure 5). It should be noted that the geolocation of Twitter accounts can be edited by users, so the location information does not reflect the precise location of the accounts. According to previous research, however, it is still a valuable indicator (Ferrara, et al., 2016; Subrahmanian, et al., 2016).

The results show that for bot accounts, most of them came from the U.S. (37.4 percent), the U.K. (9.1 percent), and India (6.6 percent), whereas for non-bot accounts, most of them came from the U.S. (38.4 percent), India (12.5 percent), and the U.K. (5.4 percent) (Figure 5). Other countries in our top five lists were Canada, Australia, Japan, and Nigeria.

Figure 5: Geolocation distribution of bots and non-bots (Top 5).

We then analyzed the self-descriptions of bots and non-bots. Twitter users can edit a self-description of no more than 160 characters. These descriptions often show the user’s self-image construction (Ahn, 2011) and their self-identity awareness (Tufekci, 2008). The words used in these texts were analyzed for 3,816 bots and 22,805 non-bots. After removing function words such as adverbs, conjunctions, and emoticons, the top 40 high-frequency words are summarized in Table 7 (bots) and Table 8 (non-bots).

More than half of the high-frequency self-description words were shared by bots and non-bots. Pronouns such as ‘my,’ ‘you,’ and ‘we’ ranked very high, which indicates that bots tended to imitate non-bots and set their self-descriptions through an anthropomorphic, informal tone. Both bots and non-bots used the word ‘follow’ to seek attention and interaction from other users.

However, there were still some differences between bots and non-bots regarding self-descriptions. For bots, one obvious feature is that the term ‘http’ only appeared in the bots list and occupied the second position, indicating that bots usually embed external links in their self-descriptions. In addition, some news-related terms such as ‘breaking,’ ‘information,’ and ‘latest’ were exclusive to bots’ descriptions. Furthermore, politics-related terms were used more often in the self-descriptions of bots. For instance, country names and political figures were mentioned by many bots, such as ‘india,’ ‘ukraine,’ and ‘trump.’ Words concerning political events and social movements were also mentioned frequently, like ‘resist,’ ‘blm’ (Black Lives Matter), and ‘fbr’ (Follow Back Resistance). Finally, the bots’ self-descriptions also included words such as ‘technology,’ ‘tech,’ and ‘crypto,’ which did not appear in the list of non-bots.

While both bots and non-bots often use the word ‘follow’ in their self-descriptions to seek attention and interaction from other users, our manual review of the content revealed significant differences in the way they use ‘follow.’ Specifically, non-bots tend to use phrases such as ‘I follow back’ and ‘if you follow me, I will follow you back’ in their self-descriptions, aiming to boost the number of followers of the account itself. Bots, by contrast, are more inclined to direct other users to follow their ‘ally’ accounts with expressions like ‘move to @*,’ ‘please follow us @*,’ ‘for more news at @*,’ etc. This pattern has also been confirmed several times in previous studies (Ferrara, 2017b; Bastos and Mercea, 2019).

Table 7: High frequency keywords in the self-introduction of bot accounts (Top 40).


Table 8: High frequency keywords in the self-introduction of non-bot accounts (Top 40).

3.3. Textual analysis (RQ3)

RQ3 intends to explore the textual features of the tweets posted by bots compared to non-bots concerning the Russia-Ukraine War discussion.

3.3.1. Topic differences

Twitter tweets often contain hashtags to indicate their relevant conversation topics, which can be used to represent users’ topic preferences (Pöschko, 2011). In this study, we analyzed the topic preferences of the bots and non-bots by calculating the frequency of hashtags used in the sampled tweets, and Table 9 shows the top 20 most used hashtags in their tweets.


Table 9: Frequency of hashtags in bot and non-bot tweets (Top 20).

Further, we calculated the relative popularity of hashtags among bot and non-bot tweets (from Table 9) (Arlt, et al., 2019). The calculation formula is:

P_{h\_relative} = \log\left(\frac{F_{h\_bot} / F_{all\_bot}}{F_{h\_human} / F_{all\_human}}\right)

where F_{h_bot} represents the frequency of hashtag h in bot tweets, and F_{all_bot} refers to the total frequency of the most used hashtags in bot tweets, so F_{h_bot}/F_{all_bot} indicates hashtag h’s relative frequency in bot tweets. Similarly, F_{h_human}/F_{all_human} indicates hashtag h’s relative frequency in non-bot tweets. The ratio of these two values reflects hashtag h’s relative popularity between bot and non-bot tweets, and the log transformation makes the results easier to visualize. A positive value indicates that the hashtag was more likely to appear in bots’ tweets, and a negative value means that the hashtag was more likely to be used by non-bots. User-generated hashtags are often case sensitive (‘Russia’ and ‘russia’ are different strings with the same meaning), so our analysis converted all hashtags to lowercase. Figure 6 visualizes the relative popularity of the top 20 hashtags for bots and non-bots.
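A short sketch of computing this relative popularity from two hashtag frequency tables; the input dictionaries are hypothetical, and zero counts are left unhandled (the authors’ treatment of hashtags absent from one group is not described):

```python
import math

# Hypothetical frequency tables: hashtag -> count among the top hashtags of each group.
bot_freq = {"#ukraine": 1200, "#stoprussia": 300, "#news": 250}
human_freq = {"#ukraine": 5000, "#putin": 900, "#nato": 400}

f_all_bot = sum(bot_freq.values())
f_all_human = sum(human_freq.values())

def relative_popularity(tag: str) -> float:
    """log of (relative frequency among bots / relative frequency among non-bots)."""
    p_bot = bot_freq.get(tag, 0) / f_all_bot
    p_human = human_freq.get(tag, 0) / f_all_human
    return math.log(p_bot / p_human)      # positive -> more popular among bots

print(round(relative_popularity("#ukraine"), 3))
```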

The findings from Table 9 and Figure 6 can be summarized as follows. First, from the perspective of common features, both sides of the vertical axis involve many neutral hashtags that directly describe the Russia-Ukraine War, like ‘#ukrainewar,’ ‘#russiaukrainewar,’ and ‘#ukrainerussiawar.’ Second, regarding differences, non-bots used relatively more hashtags about political leaders: ‘#biden’ and ‘#putin’ were often mentioned. Also, the hashtags ‘#usa’ and ‘#nato’ only appeared on the left side of the vertical axis (non-bots), suggesting that the U.S. and NATO frequently appear in the non-bot narrative. Bots, for their part, were more likely to use hashtags with obvious opinion stances, such as ‘#stoprussia,’ ‘#ukraineunderattack,’ and ‘#helpukraine.’ Finally, bot accounts used hashtags related to newscasts (such as ‘#news’ and ‘#breaking’) more frequently.

Figure 6: The relative popularity of the top 20 hashtags for bots and non-bots.

Notes: The horizontal axis is the P_h_relative value of hashtags. A positive number means the hashtag was used more frequently by bots, and a negative number indicates that non-bots used the hashtag more frequently. The vertical axis is the sum of the frequencies of the hashtag in the tweets of the two types of accounts.


Note: Larger version of Figure 6 available here.

In addition to hashtag analysis, we analyzed the content of tweets to reveal opinions and narrative strategies. A Python word-splitting algorithm and the relative popularity formula mentioned earlier were employed in this analysis. In Figure 7, we visualized the relative popularity of the top 50 most used words in the tweets of bots and non-bots. First, terms directly related to the Russia-Ukraine War occupied a considerably high proportion for both bots and non-bots. Second, regarding differences, words expressing opposition to war and calling for peace (‘peace,’ ‘stop,’ etc.) were more prevalent among non-bots. Non-bots also focused more on terms related to military conflict, such as ‘military,’ ‘forces,’ ‘troops,’ and ‘weapons.’ Bots, in turn, used more media-related terms such as ‘youtube,’ ‘live,’ and ‘news.’ In general, the words most frequently used by bots were relatively concentrated, while the words used by non-bots were more diverse.

Figure 7: The relative popularity of the top 50 most used words in the tweets of bots and non-bots.

Notes: The horizontal axis is the P_h_relative value of words. A positive number means the word was used more frequently by bots, and a negative number means non-bots used the word more frequently. The vertical axis indicates the frequency of words in tweets. We log-transformed the sum of the frequencies for ease of visualization.

Note: Larger version of Figure 7 available here.


3.3.2. Opinion stance

We divided the political stances conveyed by tweets into three categories according to the rules mentioned earlier: the Russian side, the Ukrainian side, and neutral (see the Methodology section). According to the prediction outcomes of our SVM model, for bots, 4.77 percent of tweets belonged to the Russian side, 42.39 percent belonged to the Ukrainian side, and the remaining 52.84 percent were neutral. For non-bots, 11.65 percent of the tweets belonged to the Russian side, 43.57 percent belonged to the Ukrainian side, and the remaining 44.78 percent were neutral. Further, we compared how the stance-bearing tweets (those on the Russian or Ukrainian side) were split between the two sides for bots and non-bots (see Figure 8). The results show that tweets belonging to the Ukrainian side occupy a larger share for both types of accounts, which suggests that pro-Ukrainian voices had an overwhelming advantage over pro-Russian voices in the online conversation about the Russia-Ukraine War on Twitter. In addition, the proportion of stance-bearing tweets speaking for the Ukrainian side was higher among bot tweets (89.88 percent) than among non-bot tweets (78.9 percent). In other words, bots amplified the voice of the Ukrainian side.

Figure 8: Distribution of political stances of bot and non-bot tweets.

We backtracked the accounts responsible for these tweets and investigated the consistency of their political stances. Following previous studies (e.g., Cinelli, et al., 2021), we used the average political stance conveyed by the tweets of a particular account to measure that account’s consistency. For instance, if account i posts a_i tweets in the dataset and the political stance of each tweet j is noted as c_{ij} ∈ {-1, 0, 1} (where the Russian side was assigned a value of -1, neutral was assigned a value of 0, and the Ukrainian side was assigned a value of 1), then the political stance x_i of account i can be expressed as follows:

x_i = \frac{1}{a_i}\sum_{j=1}^{a_i} c_{ij}

Based on the values of x_i, we divided the accounts into five categories (see Table 10): x_i = -1 was defined as a strong supporter of Russia; x_i ∈ (-1, 0) was defined as a moderate supporter of Russia; x_i = 0 was neutral; x_i ∈ (0, 1) was defined as a moderate supporter of Ukraine; and x_i = 1 was defined as a strong supporter of Ukraine.

The proportion of pro-Ukrainian accounts was higher among bots than among non-bots, while the proportions of strong supporters of both Russia and Ukraine were higher among non-bots than among bots.

Table 10: Political stance of accounts by degrees of consistency.

As a follow-up to the geolocation analysis we did earlier, we analyzed the geolocation of bots with different political stances in Table 11. First, about half of the bots in all three political stances chose to disclose their geolocation. Second, the distribution of the identified geographical information is highly concentrated, with the five most mentioned countries in each stance accounting for a considerable proportion. Specifically, on the Ukrainian side, the top five countries were the U.S. (22.2 percent), India (5.6 percent), the U.K. (3.5 percent), Ukraine (3.3 percent), and Canada (1.5 percent). On the Russian side, the top three countries were the U.S. (19.6 percent), India (6.1 percent), and the U.K. (3.7 percent), and the fourth and fifth countries were Nigeria (2.4 percent) and Japan (1.4 percent).


Table 11: Geolocation ranking of bots with different political stances (Top 5 countries).

3.3.3. Sentiment analysis

We analyzed the sentiment of the tweets in the sample using TextBlob. For bots and non-bots (see Figure 9), the trends in the distribution of sentiment polarity of their tweets are generally similar, but there were still some differences. Specifically, tweets posted by non-bots showed relatively more positive or negative sentiment, while bot tweets had a higher proportion of neutral sentiment. This may be because social bots retweet news stories more often.

We also investigated the sentiment polarity distribution of bot tweets and non-bot tweets with different political stances (see Figure 10, Figure 11). In bot tweets, we found that tweets with clear political stances (the Russian side or the Ukrainian side) often showed more positive or negative sentiment. Both the Russian and the Ukrainian sides tweeted more positive than negative sentiment; therefore, tweets were more likely to show political stances by expressing support for their own side rather than opposition to the other side. The distributions of tweets with clear political stances (the Russian side or the Ukrainian side) were very similar across the three sentiment categories. The proportions of neutral sentiment for both bot and non-bot tweets were higher than 50 percent. Compared to bot tweets, non-bot tweets showed more non-neutral sentiment in all three political stances, especially for the Russian side.

Figure 9: Sentiment polarity distribution of bot and non-bot tweets.


Figure 10: Distribution of sentiment polarity of bot tweets with different political stances.

Figure 11: Distribution of sentiment polarity of non-bot tweets with different political stances.

3.4. Interaction analysis (RQ4)

3.4.1. Comments, retweets, and likes

In this part of the analysis, we calculated the average numbers of comments, retweets, and likes of tweets posted by bots and non-bots, respectively. Table 12 shows the findings. The average number of likes received by non-bot tweets was 33.84, whereas bot tweets only received 3.25 likes on average. The findings for the number of comments and the number of retweets were very similar. Tweets posted by bot accounts were less likely to trigger interactive actions from other accounts.

Table 13 presents a crosstabulation of interaction statistics by political stance. Overall, tweets from the Russian side and the Ukrainian side obtained more retweets, comments, and likes than those with a neutral stance. This suggests that opinion drives more interaction on social media. Interestingly, the data also show that bot tweets from the Russian side received higher average retweets, likes, and comments than those from the Ukrainian side, whereas for tweets from non-bot accounts the results are reversed.

Table 12: Interaction statistics for bot and non-bot tweets.

Table 13: Interaction statistics for tweets of different political positions.

Note: * denotes statistical significance at p < 0.05 level.

In terms of temporal patterns, we performed a time-series analysis of the interaction data obtained by bot and non-bot tweets within the sample (Figure 12). What appears evident from observing the upper and lower panels is that both bot and non-bot tweets showed a spike around 24–25 February, which could be a reaction to the start of the War. Likes, comments, and retweets of non-bot tweets all showed a second peak between 15 March and 17 March, but this second peak was not evident in the bot tweets.


Figure 12: The time-series changes in interaction data of tweets.

3.4.2. @Mention: Potential network structure

Twitter allows users to mention other users in their tweets by using the ‘@’ symbol. Mentioning is a type of social interaction that can form a co-occurrence network by connecting the mentioned account with the user who initiated the mention. In this study, we conducted a co-occurrence network analysis of ‘mentions’ in bot and non-bot tweets to explore the differences between the two types of accounts.
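As a small illustration of how such a mention network can be assembled before visualization in Gephi, the sketch below extracts ‘@’ mentions with a regular expression and writes a simple edge list; the sample tweets, column names, and CSV layout are assumptions, not the authors’ exact pipeline:

```python
import csv
import re

# Hypothetical input: (author_screen_name, tweet_text) pairs from the sample.
tweets = [
    ("alice", "Latest on the war @newsoutlet @bob"),
    ("bob", "@alice agreed, see the thread"),
]

edges = []
for author, text in tweets:
    for mentioned in re.findall(r"@(\w+)", text):
        edges.append((author.lower(), mentioned.lower()))   # author -> mentioned account

# Edge list in a format Gephi can import (Source,Target).
with open("mention_edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])
    writer.writerows(edges)
```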

First, the basic statistics of the co-occurrence relations of ‘mentions’ are presented in Table 14. There were 1,368 co-occurrence relations in the 6,200 bot tweets, involving 597 bots and 471 mentioned accounts, and 8,408 co-occurrence relations in the 27,787 non-bot tweets, involving 4,133 non-bot accounts and 3,325 mentioned accounts. The comparison shows that bots were less active than non-bots in using the mention function, indicating that non-bot accounts were more likely to interact with other accounts.

Table 14: Statistics of the ‘mention’ action in tweets.


Figure 13 presents the co-occurrence network of bots and non-bots. In terms of bots, there are five relatively discrete and sparsely interlinked communities. For non-bot accounts, seven densely knit communities could be observed.

Figure 13: The co-occurrence network of bot accounts (left) and non-bot accounts (right).

Note: Larger version of Figure 13 available here.

Prior research suggested that bots may interact with each other to enhance their influence and visibility in online conversations (Howard and Kollanyi, 2016; Duh, et al., 2018). Thus, we further examined the categories of mentioned accounts in the co-occurrence network. The results show that among the 471 accounts mentioned by bots, 108 accounts exist in our sample dataset. Of these 108 accounts, 25 (23.15 percent) were bots and 83 (76.85 percent) were non-bots. Among the 3,325 accounts mentioned by non-bots, 357 were in our sample dataset. Of these 357 accounts, only 3 (0.84 percent) were bots, and 354 (99.16 percent) were non-bot accounts. Therefore, both bots and non-bots were more inclined to mention non-bot accounts.

4. Conclusion

The tools of political discussion have radically changed since the advent of online social media (Harvey, 2013). The popularity of platforms such as Twitter has accelerated the process of political discussion, but the invention of social media bots could bring potential perils associated with the abuse of these platforms (Woolley and Howard, 2016; Shorey and Howard, 2016; Maréchal, 2016). Our study investigated the engagement of social media bots on the issue of the Russia-Ukraine War; we summarize our findings in four aspects as follows.

4.1. Level of bot intervention

We found that bots were extensively involved in online conversations concerning the Russia-Ukraine War. About 3,816 bot accounts (13.4 percent) produced 6,200 tweets in our sampled data. We estimate that at least 126,100 bots were actively engaged in the Russia-Ukraine War conversations and that they were responsible for about 606,360 to 632,400 tweets during the observation period.

We found that many tweets about the Russia-Ukraine War had a clear political stance. Specifically, the percentage of bot tweets from the Ukrainian side was nearly nine times higher than that of the Russian side. In addition, the proportion of tweets supporting the Ukrainian side was about 11 percent higher for bots than for non-bots. It seems that social media bots amplified the voice of the Ukrainian side in the online conversations about the Russia-Ukraine War. Similarly, we found in our geolocation analysis that although Russia was involved in the conflict, very few bot accounts labeled themselves as coming from Russia.

There could be two possible reasons explaining the relatively lower activity of bots on the Russian side. On the one hand, Twitter suspended many bot accounts on the Russian side after the War started (Collins and Korecki, 2022). On the other hand, from 4 March 2022 (within our data collection period), the Russian government blocked its citizens from accessing Twitter (Milmo, 2022). However, the bots on the Russian side were more ‘effective’ in that they performed better than the bots on the Ukrainian side in attracting likes, retweets, and comments. To disentangle this puzzle, we examined the content of pro-Russia tweets and found that bots on the Russian side more often posted controversial content, such as content on the eastward expansion of NATO. The tweets posted by the Ukrainian side were more likely to express condemnation of Russia or support for Ukraine.

4.2. Differences between bots and non-bots

We identified several differences between bots and non-bots. First, bots were typically younger than non-bots, with nearly half of the bot accounts created in the last three years (2020, 2021, 2022). Second, considering that the average daily tweet volume of bots was more than twice that of non-bots, bots seem to have been more active than non-bots on Twitter. However, the influence they brought to the online conversation was not as significant as expected, because bots have far fewer followers on average than non-bots; therefore, the scale of users that bots can reach is limited. Third, in terms of topic preferences, based on the features we extracted from hashtags and word frequencies, bots used more hashtags with strong opinion stances (e.g., #stoprussia, #helpukraine). They also attempted to introduce unrelated discussion topics, such as #cybersecurity and #bitcoin. Fourth, from the perspective of narrative strategy, one of the most salient features was that bots tended to pose as news media accounts and use news stories to exert their influence.

It is worth noting that although bots were significantly weaker than non-bots in capturing interaction through likes, retweets, and comments, they tried to establish more interactions with non-bots to expand their influence on human users. For instance, bots often directed other users to follow their ‘ally’ accounts in their self-introductions by using phrases like ‘move to @*,’ ‘please follow us @*,’ ‘for more news at @*,’ etc. In addition, according to the analysis of the co-occurrence network of ‘mentions,’ 76.85 percent of the accounts mentioned by bots were non-bots, suggesting that bots sought to establish connections with human users.

4.3. Bot and non-bot similarity

There are a few aspects in which bot and non-bot social media accounts demonstrated similarity. First, the tweeting volume curves of both corresponded with off-line events. Second, both bots and non-bots mostly claimed to come from the U.S., the U.K., and India. Third, we found that both bots and non-bots adopted informal tones in their self-descriptions. Finally, we found that most of the tweets were neutral in sentiment, which is quite different from previous findings, where bots were more inclined to produce content with extreme sentiments (Stella, et al., 2018; Albadi, et al., 2019).

Our study revealed that bots and non-bots behaved differently in some aspects but similarly in others. Overall, the differences were generally less prominent than in previous studies. We speculate that there could be several possible explanations. First, we raised the threshold for determining bots to 0.8, so bots with bot scores lower than 0.8 were classified as non-bots, influencing the overall statistics of the non-bot group. Second, we excluded institution/media accounts from our study (González-Bailón and De Domenico, 2021); human and bot differences became less pronounced when these accounts were excluded from the analysis.

Third, political issues of different natures may invite different levels of bot intervention, and the nature of the Russia-Ukraine War may directly affect the extent of bot participation. In previous research, bots have shown strong intervention during elections and referendums. For example, Shao, et al. (2018) estimated that bots accounted for about 31 percent of the content produced during the 2016 presidential election. Stella and Ferrara (2018) found that bots accounted for about 23.6 percent of the content during the Catalan referendum. Active bot participation from both sides of an issue could cause a relatively high percentage of bot involvement, and social media platforms usually show higher tolerance toward political competition within a democracy. However, in the case of the Russia-Ukraine War, the opinion environment on Twitter was one-sided and less controversial. In addition, Russia’s blocking of Twitter further limited the role pro-Russia bots could play (Milmo, 2022).

Our investigation provides another example of the extensive involvement of social media bots in online political conversation. As some scholars have warned, social media bots are gradually being ‘weaponized’ (Jones, 2019; Orabi, et al., 2020). As social media platforms become more important in human life, bots may have greater potential to influence and shape individual opinions and public opinion (Aral and Walker, 2010; Chen, et al., 2022). Especially when social media become a tool for national interest and political propaganda, bots can be used to challenge social and international order. We argue that this challenge is mainly reflected in three aspects. First, previous studies have repeatedly mentioned the ‘power’ of bots in spreading dis(mis)information (Ferrara, 2017a; Shao, et al., 2018; Albadi, et al., 2019), and organic opinions may be concealed to a certain extent. Second, anthropomorphic bots are able to disrupt rational discussions by disseminating extreme emotions. With the continuous evolution of artificial intelligence technology, bots have gradually demonstrated a capacity for emotional contagion (Stella and Ferrara, 2018; Shi, et al., 2020). This provocation of emotions is an underestimated risk, and it is hard to imagine the harm that extreme and irrational emotions could do to public discussions. Third, in an online environment where dis(mis)information is ubiquitous, users’ distrust of information may further translate into general social distrust. Given these considerations, research on the development and influence of social bots should continue in the future, and social media platforms should develop strong policies to prevent undesirable social impacts incurred by bot use.

4.4. Limitations

There are a few limitations to this study that need to be acknowledged. Due to time and cost constraints, we did not perform bot detection and analysis on the entire dataset. In addition, since we adopted a higher threshold for determining bots, the dataset of non-bot accounts may contain bots with less obvious bot characteristics. Furthermore, our data set only includes textual data; however, many tweets participated in online conversations about the Russia-Ukraine War using images or videos, which deserve more attention. Finally and most importantly, as in many social media bot studies, we could only observe the external behavior exhibited by these accounts; the operating motivations and the algorithms associated with these accounts remain largely unknown. Future studies could aim to unravel these issues to better understand social media bots.

About the authors

Fei Shen is an associate professor in the Department of Media and Communication at City University of Hong Kong.

E-mail: feishen [at] cityu [dot] edu [dot] hk

Erkun Zhang is a Ph.D. candidate in the School of Journalism and Communication at Beijing Normal University.

E-mail: 202231021002 [at] mail [dot] bnu [dot] edu [dot] cn

Wujiong Ren is a postgraduate student in the School of Journalism and Communication at Beijing Normal University.


E-mail: wjren [at] mail [dot] bnu [dot] edu [dot] cn

Yuan He is an associate professor in the School of Journalism and Communication at Hebei University.

E-mail: melina_hy [at] qq [dot] com

Quanxin Jia is a Ph.D. candidate in the Department of Communication at the University of Macau.

E-mail: 201921021003 [at] mail [dot] bnu [dot] edu [dot] cn

Hongzhong Zhang is the dean and professor in the Journalism and Communication School at Beijing Normal University.

E-mail: zhanghz9 [at] 126 [dot] com

Acknowledgements

This project was made possible thanks to funding from the New Media Research Center of Beijing Normal University.

Fei Shen, Erkun Zhang, Wujiong Ren, Yuan He, and Quanxin Jia all contributed equally to this work and share first authorship. Hongzhong Zhang is the corresponding author.

Notes

1. Albadi, et al., 2019, p. 3.

2. Boshmaf, et al., 2011, p. 93.

3. Igawa, et al., 2016, p. 73.

4. Bessi and Ferrara, 2016, p. 10.

5. We use the term ‘non-bot’ because we raised the threshold for identifying bots, which will lead to missing some bot accounts with nonsignificant automation characteristics. Therefore, we refer to accounts with scores below 0.8 as “non-bots,” which are overwhelmingly composed of real users.

6. Stieglitz, et al., 2017, p. 4.

References

N. Abokhodair, D. Yoo, and D.W. McDonald, 2015. “Dissecting a social botnet: Growth, content and influence in Twitter,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 839–851.

doi: https://doi.org/10.1145/2675133.2675208, accessed 30 January 2023.

J. Ahn, 2011. “The effect of social network sites on adolescents’ social and academic development: Current theories and controversies,” Journal of the American Society for Information Science and Technology, volume 62, number 8, pp. 1,435–1,445.

doi: https://doi.org/10.1002/asi.21540, accessed 30 January 2023.

A. Aizawa, 2003. “An information-theoretic perspective of tfidf measures,” Information Processing & Management, volume 39, number 1, pp. 45–65.

doi: https://doi.org/10.1016/S0306-4573(02)00021-3, accessed 30 January 2023.

N. Albadi, M. Kurdi, and S. Mishra, 2019. “Hateful people or hateful bots? Detection and characterization of bots spreading religious hatred in Arabic social media,” Proceedings of the ACM on Human-Computer Interaction, volume 3, number CSCW, article number 61, pp. 1–25.

doi: https://doi.org/10.1145/3359163, accessed 30 January 2023.

J.P. Allem, P. Escobedo, and L. Dharmapuri, 2020. “Cannabis surveillance with Twitter data: Emerging topics and social bots,” American Journal of Public Health, volume 110, number 3, pp. 357–362.

doi: https://doi.org/10.2105/AJPH.2019.305461, accessed 30 January 2023.

E. Alothali, N. Zaki, E.A. Mohamed, and H. Alashwal, 2018. “Detecting social bots on Twitter: A literature review,” 2018 International Conference on Innovations in Information Technology (IIT), pp. 175–180.

doi: https://doi.org/10.1109/INNOVATIONS.2018.8605995, accessed 30 January 2023.

S. Aral and D. Walker, 2010. “Creating social contagion through viral product design: A randomized trial of peer influence in networks,” ICIS 2010 Proceedings, at https://aisel.aisnet.org/icis2010_submissions/44/, accessed 30 January 2023.

D. Arlt, A. Rauchfleisch, and M.S. Schäfer, 2019. “Between fragmentation and dialogue. Twitter communities and political debate about the Swiss ‘nuclear withdrawal initiative’,” Environmental Communication, volume 13, number 4, pp. 440–456.

doi: https://doi.org/10.1080/17524032.2018.1430600, accessed 30 January 2023.

A. Badawy, E. Ferrara, and K. Lerman, 2018. “Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign,” 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 258–265.

doi: https://doi.org/10.1109/ASONAM.2018.8508646, accessed 30 January 2023.

T. Bajarin, 2022. “Russian-Ukraine War ... The most broadcast war in history that includes physical and cyber warfare,” Forbes (2 March), at https://www.forbes.com/sites/timbajarin/2022/03/02/ukraine-russian-warthe-most-broadcast-war-in-history/?sh=779eafe15b3b, accessed 2 March 2022.

G.A. Barnett, W.W. Xu, J. Chu, K. Jiang, C. Huh, J.Y. Park, and H.W. Park, 2017. “Measuring international relations in social media conversations,” Government Information Quarterly, volume 34, number 1, pp. 37–44.

doi: https://doi.org/10.1016/j.giq.2016.12.004, accessed 30 January 2023.

M. Bastian, S. Heymann, and M. Jacomy, 2009. “Gephi: An open source software for exploring and manipulating networks,” Proceedings of the International AAAI Conference on Web and Social Media, volume 3, number 1, pp. 361–362.

doi: https://doi.org/10.1609/icwsm.v3i1.13937, accessed 30 January 2023.

M.T. Bastos and D. Mercea, 2019. “The Brexit botnet and user-generated hyperpartisan news,” Social Science Computer Review, volume 37, number 1, pp. 38–54.

doi: https://doi.org/10.1177/0894439317734157, accessed 30 January 2023.

V. Bergengruen, 2022. “Telegram becomes a digital battlefield in Russia-Ukraine war,” Time (21 March), at https://time.com/6158437/telegram-russia-ukraine-information-war/, accessed 10 April 2022.

A. Bessi and E. Ferrara, 2016. “Social bots distort the 2016 U.S. Presidential election online discussion,” First Monday, volume 21, number 11, at https://firstmonday.org/article/view/7090/5653, accessed 30 January 2023.

doi: https://doi.org/10.5210/fm.v21i11.7090, accessed 30 January 2023.

G. Bolsover and P. Howard, 2019. “Chinese computational propaganda: Automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo,” Information, Communication & Society, volume 22, number 14, pp. 2,063–2,080.

doi: https://doi.org/10.1080/1369118X.2018.1476576, accessed 30 January 2023.

Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2011. “The socialbot network: When bots socialize for fame and money,” ACSAC ’11: Proceedings of the 27th Annual Computer Security Applications Conference, pp. 93–102.

doi: https://doi.org/10.1145/2076732.2076746, accessed 30 January 2023.

S. Bradshaw and P.N. Howard, 2018. “The global organization of social media disinformation campaigns,” Journal of International Affairs, volume 71, number 1.5, pp. 23–32, and at https://jia.sipa.columbia.edu/global-organization-social-media-disinformation-campaigns, accessed 30 January 2023.

D.A. Broniatowski, A.M. Jamison, S. Qi, L. AlKulaib, T. Chen, A. Benton, S.C. Quinn, and M. Dredze, 2018. “Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate,” American Journal of Public Health, volume 108, number 10, pp. 1,378–1,384.

doi: https://doi.org/10.2105/AJPH.2018.304567, accessed 30 January 2023.

X. Chen, S. Gao, and X. Zhang, 2022. “Visual analysis of global research trends in social bots based on bibliometrics,” Online Information Review, volume 46, number 6, pp. 1,076–1,094.

doi: https://doi.org/10.1108/OIR-06-2021-0336, accessed 30 January 2023.

C. Cheng, Y. Luo, and C. Yu, 2020. “Dynamic mechanism of social bots interfering with public opinion in network,” Physica A: Statistical Mechanics and its Applications, volume 551, 124163.

doi: https://doi.org/10.1016/j.physa.2020.124163, accessed 30 January 2023.

Z. Chu, S. Gianvecchio, H. Wang, and S. Jajodia, 2012. “Detecting automation of Twitter accounts: Are you a human, bot, or cyborg?” IEEE Transactions on Dependable and Secure Computing, volume 9, number 6, pp. 811–824.

doi: https://doi.org/10.1109/TDSC.2012.75, accessed 30 January 2023.

M. Cinelli, G. De Francisci Morales, A. Galeazzi, and M. Starnini, 2021. “The echo chamber effect on social media,” Proceedings of the National Academy of Sciences, volume 118, number 9 (23 February), e2023301118.

doi: https://doi.org/10.1073/pnas.2023301118, accessed 30 January 2023.

B. Collins and N. Korecki, 2022. “Twitter bans over 100 accounts that pushed #IStandWithPutin,” NBC News (4 March), at https://www.nbcnews.com/tech/internet/twitter-bans-100-accounts-pushed-istandwithputin-rcna18655, accessed 28 October 2022.

D.M. Cook, B. Waugh, M. Abdipanah, O. Hashemi, and S.A. Rahman, 2014. “Twitter deception and influence: Issues of identity, slacktivism, and puppetry,” Journal of Information Warfare, volume 13, number 1, pp. 58–71, and at https://www.jinfowar.com/journal/volume-13-issue-1/twitter-deception-and-influence-issues-identity-slacktivism, accessed 30 January 2023.

C.A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, 2016. “BotOrNot: A system to evaluate social bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274.

doi: https://doi.org/10.1145/2872518.2889302, accessed 30 January 2023.

B. Dean, 2022. “How many people use Twitter in 2022?” (5 January), at https://backlinko.com/twitter-users, accessed 15 April 2022.

R. Dubbin, 2013. “The rise of Twitter bots,” New Yorker (14 November), at http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots, accessed 4 October 2022.

A. Duh, M. Slak Rupnik, and D. Korošak, 2018. “Collective behavior of social bots is encoded in their temporal Twitter activity,” Big Data, volume 6, number 2, pp. 113–123.

doi: https://doi.org/10.1089/big.2017.0041, accessed 30 January 2023.

R.M. Everett, J.R.C. Nurse, and A. Erola, 2016. “The anatomy of online deception: What makes automated text convincing?” SAC ’16: Proceedings of the 31st Annual ACM Symposium on Applied Computing, pp. 1,115–1,120.

doi: https://doi.org/10.1145/2851613.2851813, accessed 30 January 2023.

E. Ferrara, 2017a. “Disinformation and social bot operations in the run up to the 2017 French presidential election,” First Monday, volume 22, number 8, at https://firstmonday.org/article/view/8005/6516, accessed 30 January 2023.

doi: https://doi.org/10.5210/fm.v22i8.8005, accessed 30 January 2023.

E. Ferrara, 2017b. “Contagion dynamics of extremist propaganda in social networks,” Information Sciences, volume 418–419, pp. 1–12.

doi: https://doi.org/10.1016/j.ins.2017.07.030, accessed 30 January 2023.

E. Ferrara and Z. Yang, 2015. “Measuring emotional contagion in social media,” PloS ONE, volume 10, number 11, e0142390.

doi: https://doi.org/10.1371/journal.pone.0142390, accessed 30 January 2023.

E. Ferrara, H. Chang, E. Chen, G. Muric, and J. Patel, 2020. “Characterizing social media manipulation in the 2020 U.S. presidential election,” First Monday, volume 25, number 11, at https://firstmonday.org/article/view/11431/9993, accessed 30 January 2023.

doi: https://doi.org/10.5210/fm.v25i11.11431, accessed 30 January 2023.

E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016. “The rise of social bots,” Communications of the ACM, volume 59, number 7, pp. 96–104.

doi: https://doi.org/10.1145/2818717, accessed 30 January 2023.

M. Forelle, P. Howard, A. Monroy-Hernández, and S. Savage, 2015. “Political bots and the manipulation of public opinion in Venezuela,” at https://ora.ox.ac.uk/objects/uuid:07cbc55b-f9e2-44c3-a6f9-daab377c8f8c, accessed 30 January 2023.

S. González-Bailón and M. De Domenico, 2021. “Bots are less central than verified accounts during contentious political events,” Proceedings of the National Academy of Sciences, volume 118, number 11 (8 March), e2013443118.

doi: https://doi.org/10.1073/pnas.2013443118, accessed 30 January 2023.

J.P. Gujjar and H.P. Kumar, 2021. “Sentiment analysis: Textblob for decision making,” International Journal of Scientific Research & Engineering Trends, volume 7, number 2, pp. 1,097–1,099, and at https://ijsret.com/wp-content/uploads/2021/03/IJSRET_V7_issue2_289.pdf, accessed 30 January 2023.

L. Guo, J.A. Rohde, and H.D. Wu, 2020. “Who is responsible for Twitter’s echo chamber problem? Evidence from 2016 U.S. election networks,” Information, Communication & Society, volume 23, number 2, pp. 234–251.

doi: https://doi.org/10.1080/1369118X.2018.1499793, accessed 30 January 2023.

L. Hagen, S. Neely, T.E. Keller, R. Scharf, and F.E. Vasquez, 2022. “Rise of the machines? Examining the influence of social bots on a political discussion network,” Social Science Computer Review, volume 40, pp. 264–287.

doi: https://doi.org/10.1177/0894439320908190, accessed 30 January 2023.

K. Harvey (editor), 2013. Encyclopedia of social media and politics. Thousand Oaks, Calif.: Sage.

doi: https://dx.doi.org/10.4135/9781452244723, accessed 30 January 2023.

P.N. Howard and B. Kollanyi, 2016. “Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum,” at https://ora.ox.ac.uk/objects/uuid:d7787894-7c41-4c3b-a81d-d7c626b414ad, accessed 30 January 2023.

R.A. Igawa, S. Barbon, Jr., K.C.S. Paulo, G.S. Kido, R.C. Guido, M.L.P. Júnior, and I.N. da Silva, 2016. “Account classification in online social networks with LBCA and wavelets,” Information Sciences, volume 332, pp. 72–83.

doi: https://doi.org/10.1016/j.ins.2015.10.039, accessed 30 January 2023.

M. Jaitner, 2015. “Russian information warfare: Lessons from Ukraine,” In: K. Geers (editor). Cyber war in perspective: Russian aggression against Ukraine. Tallinn: NATO Cooperative Cyber Defence Centre of Excellence, pp. 87–94, and at https://ccdcoe.org/uploads/2018/10/Ch10_CyberWarinPerspective_Jaitner.pdf, accessed 30 January 2023.

T. Joachims, 1998. “Text categorization with Support Vector Machines: Learning with many relevant features,” In: C. Nédellec and C. Rouveirol (editors). Machine learning: ECML-98. Lecture Notes in Computer Science, volume 1398. Berlin: Springer, pp. 137–142.

doi: https://doi.org/10.1007/BFb0026683, accessed 30 January 2023.

D. Johnson, 2022. “Ukraine could be the most documented war in human history,” Slate (24 February), at https://slate.com/technology/2022/02/ukraine-russia-livestream-google-maps.html, accessed 24 February 2022.

M.O. Jones, 2019. “Propaganda, fake news, and fake trends: The weaponization of Twitter bots in the Gulf crisis,” International Journal of Communication, volume 13, at https://ijoc.org/index.php/ijoc/article/view/8994, accessed 30 January 2023.

K.H. Manguri, R.N. Ramadhan, and P.R.M. Amin, 2020. “Twitter sentiment analysis on worldwide COVID-19 outbreaks,” Kurdistan Journal of Applied Research, volume 5, number 3, pp. 54–65.

doi: https://doi.org/10.24017/covid.8, accessed 30 January 2023.

N. Maréchal, 2016. “When bots tweet: Toward a normative framework for bots on social networking sites,” International Journal of Communication, volume 10, at https://ijoc.org/index.php/ijoc/article/view/6180, accessed 30 January 2023.

D. Milmo, 2022. “Russia blocks access to Facebook and Twitter,” Guardian (4 March), at https://www.theguardian.com/world/2022/mar/04/russia-completely-blocks-access-to-facebook-and-twitter, accessed 28 October 2022.

S. Muscat and Z. Siebert, 2022. “Laptop generals and bot armies: The digital front of Russia’s Ukraine war,” Heinrich Böll Stiftung, Brussels office, European Union (1 March), at https://eu.boell.org/en/2022/03/01/laptop-generals-and-bot-armies-digital-front-russias-ukraine-war, accessed 12 April 2022.

M. Orabi, D. Mouheb, Z. Al Aghbari, and I. Kamel, 2020. “Detection of bots in social media: A systematic review,” Information Processing & Management, volume 57, number 4, 102250.

doi: https://doi.org/10.1016/j.ipm.2020.102250, accessed 30 January 2023.

J. Pöschko, 2011. “Exploring Twitter hashtags,” arXiv:1111.6553 (28 November).

doi: https://doi.org/10.48550/arXiv.1111.6553, accessed 30 January 2023.

J. Purtill, 2022. “When it comes to spreading disinformation online, Russia has a massive bot army on its side,” ABC News (29 March), at https://www.abc.net.au/news/science/2022-03-30/ukraine-war-twitter-bot-network-amplifies-russian-disinformation/100944970, accessed 10 April 2022.

J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, A. Flammini, and F. Menczer, 2011. “Detecting and tracking political abuse in social media,” Proceedings of the International AAAI Conference on Web and Social Media, volume 5, number 1, pp. 297–304.

doi: https://doi.org/10.1609/icwsm.v5i1.14127, accessed 30 January 2023.
