Coronavirus Disease 2019, commonly referred to as COVID-19, or simply coronavirus, is a zoonotic infection believed to have emerged from a wet market in Wuhan, China in late 2019. COVID-19 quickly circumnavigated the globe, and by January 2020 the World Health Organization (WHO) declared the outbreak a “Public Health Emergency of International Concern” (Centers for Disease Control and Prevention, 2020). As of May 10, 2021, over 157 million coronavirus infections had resulted in more than 3.2 million deaths worldwide; in the United States alone, over thirty-two million cases contributed to more than 575,000 deaths (World Health Organization, 2021). Millions of others suffer the short- and long-term effects of the illness. The pandemic also prompted surges in global poverty and joblessness, the shuttering of countless businesses, mental health trauma, and food system disruptions resulting in new and worsening hunger crises (Usher et al., 2020; World Health Organization, 2020). Additionally, it inspired a wave of hate and hostility, especially towards those deemed “responsible” for the disease: namely, individuals of Asian descent, especially those from China (Fan et al., 2020).
Indeed, anti-Asian hate speech on Twitter rose 900% in the wake of the coronavirus outbreak, and traffic to sites and posts specifically targeting Asians for hate or harassment likewise ballooned (The Federal, 2020). This type of animus is not confined to the Internet, however (Ren & Feagin, 2020). In fact, more than 6,600 hate crime incidents targeting Asians and Asian Americans were reported in the year following the onset of the pandemic, an increase of more than 150% (Brockell, 2021). Likewise, during the first quarter of 2021 in major U.S. cities, reported hate crimes rose more than 164% relative to the prior year (Center for the Study of Hate and Extremism, 2021).
Extant work linking hateful online material to offline acts of hate-inspired violence (Reitman, 2018; Velásquez et al., 2020) suggests the parallel growth of online and offline hate related to COVID-19 is more than mere correlation. Given the dangers posed by cyberhate, both in cyberspace and the physical world, it is critical to understand this new wave of animosity. This paper seeks to do so using data from Twitter collected between January 1, 2020 and February 28, 2021. Our exploratory analysis proceeds in several steps. We begin by looking at daily trends in tweets containing anti-Asian keywords related to the pandemic. We then evaluate the sentiment of these tweets. Finally, we analyze how discourse on Twitter concerning the coronavirus, China, and individuals of Asian descent has evolved throughout the course of the pandemic.
In the subsequent section we provide an overview of cyberhate, noting its origins and expansion. This is followed by a discussion of hate linked to pandemics. We then detail the methodology of our study, and conclude with an interpretation of our findings, discussion of their implications, and suggestions for building upon our work.
Cyberhate: A Brief Overview
One of the most troubling online trends is the rising tide of hateful content, or cyberhate. Cyberhate is a distinct form of cyberviolence (Wall, 2003) that utilizes computer technology to profess negative attitudes towards others based on a host of group-based characteristics, including race, ethnicity, religion, sex, sexual orientation, gender identity, and national origin, among others (Hawdon et al., 2014). This distinguishes cyberhate from other forms of cyberviolence, such as cyberbullying or cyberharassment, which target individuals in isolation. While cyberhate is increasingly prevalent, it is not new. In fact, hate groups have long utilized the Internet to proselytize their messages of intolerance. Of note, the notorious white supremacist group Stormfront launched its inaugural website in 1995, quickly garnering a sizable and steadfast online following. Stormfront’s successful foray into cyberspace led other hate groups to establish an online presence of their own (Brown, 2009; Chen et al., 2008; Cooper, 2010). The Internet is attractive to hate groups because it allows them to transcend geographic constraints, spreading their messages broadly and efficiently to a large audience and facilitating recruitment. In a quote captured by the website Hate Groups on the Internet, Don Black, a former grand wizard of the Ku Klux Klan, emphasized the importance of the Internet to hate movements, stating:
It’s [the Internet] been a tremendous boon for us. That’s why I dedicate most of my time to this. I feel like I’ve accomplished more on the Web than in my 25 years of political activism. Whereas before, we could reach only people with pamphlets or holding rallies with no more than a few hundred people, now we can reach potentially millions (Hate Groups on the Internet, n.d.).
The advent of social media fundamentally altered and accelerated the spread of hate material online. Indeed, social media has ushered in a veritable cottage industry of cyberhate whereby anyone with access to the Internet can quickly and easily create and share hateful content. Organized hate groups, while still active online, are now far outpaced by individuals who maintain websites dedicated to hate, or simply make hateful comments on social media sites, in comments sections of online articles, or myriad other virtual spaces (Potok, 2011). Consequently, exposure to cyberhate is increasing. Exposure among youth and young adults in America increased 24% over a recent three-year period, from 53% in 2013 to more than 70% in 2016 (Costello et al., 2019; also see Beirich & Buchanan, 2018). Moreover, 24% of Americans report “frequent” exposure to cyberhate, while another 41% claim they encounter cyberhate at least “occasionally,” underscoring the ubiquity of the problem in America (Reichelmann et al., 2020). Other nations are similarly grappling with expanding cyberhate. A study by Reichelmann and colleagues (2020) exploring encounters with cyberhate in five European nations found that recent exposure ranged from a low of 64.8% in France to a high of 78.5% in Finland.
The growth in cyberhate is concerning because exposure can result in numerous ill-effects (Subrahmanyam & Šmahel, 2011), including mood swings, anger, fear (Tynes et al., 2004; Tynes, 2005), social distrust (Nasi et al., 2015), discrimination (Cowan & Mettrick, 2002; Foxman & Wolf, 2013), and the inter-generational perpetuation of extremist ideologies (Perry, 2000; Tynes, 2005). Exposure has also been identified as an initial step towards radicalization (Cowan & Mettrick, 2002; Foxman & Wolf, 2013; Leets, 2002), and in extreme cases, frequent encounters with cyberhate can instigate offline violence (Federal Bureau of Investigation, 2011; for a list of deadly attacks see Freilich et al., 2011; The New America Foundation International Security Program, 2015).
Examples of cyberhate translating to offline violence are myriad. For instance, Dylann Roof, who took the lives of nine African Americans in a church in Charleston, South Carolina, in 2015, used social media to spew hate and connect with others who held similarly racist beliefs. Moreover, Robert Bowers posted anti-Semitic and neo-Nazi hate on the social media site Gab before killing eleven people in 2018 at a synagogue in Pittsburgh, Pennsylvania (Beckett, 2018). One of the deadliest examples comes from 2019, when Brenton Tarrant murdered forty-nine people in two mosques in Christchurch, New Zealand. Prior to doing so, Tarrant uploaded his hate-laden manifesto online, which, in part, discussed the Internet’s role in nurturing and molding his radical belief system (Zadrozny & Collins, 2019). These examples of deadly acts of hate, and numerous others, underscore the potentially devastating effect of cyberhate. Indeed, the violent events detailed above all have an online footprint, involving the creation of, or interaction with, cyberhate. At present, there is grave concern over the effects of the torrent of cyberhate targeting Asians as a result of the pandemic. To understand why Asians are being targeted by hateful attacks, the subsequent section explores the long, troubling, and entangled history of pandemics and violence towards minority groups.
Pandemics and Hate
The COVID-19 outbreak caused sickness and death, protracted economic downturns, mandated lockdowns, and calls to practice social distancing; in turn, this led to fear, anxiety, depression, grief, bereavement, disorientation, anger, and, at times, even hatred (Bartos et al., 2020; Coelho et al., 2020; Duan et al., 2020). Victims of hate and harassment linked to the coronavirus have reported physical and verbal attacks, discrimination, and avoidance (Gover et al., 2020). Asians and Asian Americans have been among the most frequently targeted for this type of hate, with victims being murdered, beaten, stabbed, spat on, and verbally assaulted, among other forms of attacks (Donaghue, 2020; Haynes, 2020; Lu & Sheng 2020; Mullis & Glenn, 2020; Ren & Feagin, 2020).
The recent wave of anti-Asian attacks fits with an extended pattern of hate and hostility towards minority groups following the outbreak of novel diseases. One explanation comes from Eamon (1998, p. 2), who asserts that “outbreaks of new diseases, especially epidemics, are striking visual affirmations of something gone terribly wrong in the wider social sphere.” New diseases strain societies and test their ability to assimilate newness, and a common response to emergent hardships is the displacement of blame. The shifting of blame, or scapegoating, results in “othering”, whereby prejudice and fear lead to the stigmatization and ostracizing of out-group members seen as responsible for the new social strain (Gover et al., 2020; Kim & Sundstrom, 2014). Individuals or groups seen as outsiders, or different in some fundamental, easily distinguishable, and threatening way, are common targets of culpability during catastrophic events (Ginzburg, 1993).
For centuries, immigrants have been miscast in America as sources of disease and contamination, perceived as sickly and dirty (Taylor, 2019). It is therefore unsurprising that immigrants are the most acute targets of erroneous blame during pandemics (Scott, 2020). More than any other group, immigrants represent “otherness,” a link between one’s home and the outside world (Shams, 2020), and are thus depicted as the mechanism bringing threats from beyond a society’s borders. Placing blame on others helps alleviate fear of pandemics as people seek, even if in vain, to shed light on the unknown, allowing them to gain a sense of control (McCauley, 2013). Nelkin and Gilman (2020) contend that “Blaming has always been a means to make mysterious and devastating diseases comprehensible and therefore possibly controllable.”
History is replete with examples of immigrants facing blame for novel illnesses. For instance, the Black Death, which ravaged Europe in the fourteenth century, led to the systematic and violent targeting of Jews and other minorities who were accused of spreading the deadly disease by poisoning wells and food supplies (Cohn, 2012). In 1892, a cholera outbreak led to the forced quarantine of Russian Jewish immigrants in both Europe and the United States. The nineteenth century saw various immigrant groups, including those from Ireland, Italy, and China, blamed for bringing polio and cholera to America, and Haitian refugees were falsely impugned for the AIDS epidemic in the 1980s. The SARS outbreak in America in 2003 was largely blamed on Asian Americans, and the H1N1 pandemic in 2009 on immigrants from Central and South America (Blanding & Solomon, 2020). In 2014, when fears over Ebola rose in America, African immigrants were targeted for stigmatization (Cole, 2020).
The World Health Organization (WHO), conscious of the potential stigmatization, discrimination, and violence that can manifest during pandemics, issued new guidelines in 2015 for naming emerging diseases. The WHO advised that disease names should not contain geographic locations, peoples’ names, or references to culture or populations. The name COVID-19 was chosen specifically to avoid any such references. Nevertheless, a litany of derogatory names for the disease rapidly materialized, chiefly targeting those of Asian descent. Terms and phrases such as Chinese virus, China virus, Wuhan virus, kung flu, and wuflu, among other similarly offensive monikers, have been used to describe the coronavirus. These expressions have become commonplace on social media, inspiring both online and offline hate. We explore the use of derogatory words and phrases for the coronavirus on Twitter in the ensuing sections.
Data and Methods
Data for this study were procured from Twitter between January 1, 2020 and February 28, 2021. We used ten keywords associated with COVID-19 and anti-Asian sentiment to collect tweets. The keywords are: wuhancoronavirus, wuhanvirus, chinesevirus, chinavirus, coronaviruschina, ccpvirus, chinacoronavirus, chinaliedpeopledied, wuflu, and kungflu. These keywords were identified starting from the two most prominent keywords espousing anti-Asian sentiment on Twitter: Chinavirus and Chinesevirus. We collected all tweets containing these two keywords and then identified candidate keywords based on their frequency of use. A candidate was added to our keyword list if it passed two thresholds. First, the number of tweets containing the candidate keyword had to exceed 1% of the total number of tweets collected. Second, we re-collected tweets containing the candidate keyword; if more than 50% of those tweets also contained one of our existing keywords, the candidate was added to the keyword list. In total, we collected 4,471,574 tweets using these ten keywords. After removing retweets and tweets in languages other than English, we were left with 3,537,559 tweets published by 879,672 unique users.
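This snowball procedure can be sketched as follows. Note that `fake_collect` and its tiny corpus are hypothetical stand-ins for the actual Twitter collection step, and the whitespace/substring matching is a simplification for illustration:

```python
from collections import Counter

SEED_KEYWORDS = {"chinavirus", "chinesevirus"}

def expand_keywords(collect_tweets, seed=SEED_KEYWORDS,
                    share_threshold=0.01, overlap_threshold=0.50):
    """Snowball keyword discovery: start from seed terms, then admit a
    candidate word if (1) its frequency exceeds 1% of the collected
    corpus and (2) more than 50% of the tweets it retrieves also
    contain an already-accepted keyword."""
    keywords = set(seed)
    tweets = collect_tweets(keywords)          # tweets matching current keywords
    counts = Counter(w for t in tweets for w in t.lower().split())
    for word, n in counts.items():
        if word in keywords or n <= share_threshold * len(tweets):
            continue                           # fails the 1%-of-corpus test
        candidate_tweets = collect_tweets({word})
        overlap = sum(any(k in t.lower() for k in keywords)
                      for t in candidate_tweets)
        if candidate_tweets and overlap / len(candidate_tweets) > overlap_threshold:
            keywords.add(word)                 # passes the 50% co-occurrence test
    return keywords

# Toy stand-in for the Twitter collection step (hypothetical data):
def fake_collect(keywords):
    corpus = [
        "chinavirus outbreak",
        "chinesevirus wuflu spreading",
        "chinesevirus wuflu again",
        "wuflu only here",
    ]
    return [t for t in corpus if any(k in t for k in keywords)]
```

In practice, each newly accepted keyword would presumably trigger a fresh collection round until no further candidate clears both thresholds.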
The Twitter data served several purposes. First, we tracked the ebb and flow of tweets per day containing at least one of the keywords expressing anti-Asian sentiment over a fourteen-month period. In addition to examining trends, we explored the sentiment of the tweets. Sentiment analysis is a natural language processing technique that can be applied to textual data, such as tweets, to rate text as positive, negative, or neutral. We used the Valence Aware Dictionary and Sentiment Reasoner (VADER) model (Hutto & Gilbert, 2014) to predict the sentiment of tweets. VADER is a lexicon- and rule-based model attuned to social media text, with a sentiment lexicon validated by human raters. The model outputs a compound score between –1 and 1. However, since we used keywords that are explicitly hateful to collect tweets, most of the tweets have a negative value, or a negative sentiment. Thus, when we refer to sentiment scores, we use absolute average values.
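The scoring scheme described above can be illustrated with a short sketch; the list of compound scores here is hypothetical example data, and the ±0.99 cutoff mirrors the high/low-sentiment classification used in our results:

```python
def summarize_sentiment(scores, cutoff=0.99):
    """Summarize a list of VADER-style compound scores (each in [-1, 1]).

    Returns the plain average, the absolute average (the summary used
    in this paper), and the scores classified as high (> cutoff) or
    low (< -cutoff) sentiment."""
    avg = sum(scores) / len(scores)
    abs_avg = sum(abs(s) for s in scores) / len(scores)
    high = [s for s in scores if s > cutoff]
    low = [s for s in scores if s < -cutoff]
    return avg, abs_avg, high, low

# Hypothetical compound scores for four tweets:
avg, abs_avg, high, low = summarize_sentiment([-0.995, -0.4, 0.992, -0.2])
```

Because the corpus skews negative, the plain average here is negative while the absolute average captures the overall intensity of sentiment regardless of sign.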
Finally, we explored the evolution of discourse on the pandemic on Twitter to gain insight into how words are used in relation to other specific words. To do so, we examined the twenty words most similar to the words Chinese, China, Asians, and virus in tweets over an eight-month period. This was accomplished by training weekly word2vec models. A word2vec model represents words as vectors in a high-dimensional space, positioning words used in similar contexts close to one another. Specifically, we used the skip-gram model (Mikolov et al., 2013) with negative sampling, a shallow neural network trained to predict the context in which a word is used. For each week from the week of January 21, 2020 to the week of August 29, 2020, we trained a separate word2vec model.
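The "most similar words" step amounts to ranking the vocabulary by cosine similarity to a query word's vector. A minimal sketch, using a toy hand-made embedding in place of the trained word2vec vectors (the `toy` map and its values are purely illustrative):

```python
import math

def most_similar(query, vectors, topn=3):
    """Rank words by cosine similarity to `query` in a word-to-vector
    map, excluding the query itself. In the actual study, a trained
    weekly word2vec model would supply these vectors."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    q = vectors[query]
    ranked = sorted(((w, cos(q, v)) for w, v in vectors.items() if w != query),
                    key=lambda p: -p[1])
    return ranked[:topn]

# Toy embedding: "wuhan" and "virus" point in roughly the same
# direction as "china"; "pizza" does not.
toy = {
    "china": [1.0, 0.1, 0.0],
    "wuhan": [0.9, 0.2, 0.1],
    "virus": [0.8, 0.0, 0.3],
    "pizza": [0.0, 1.0, 0.9],
}
```

With real weekly models, calling this ranking for Chinese, China, Asians, and virus each week yields the similarity tables discussed below.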
We begin by presenting trends in daily tweets from January 1, 2020 to February 28, 2021 containing at least one of the ten keywords noted above. The results are depicted in Figure 1. Notably, the trend line begins to rise on January 21, 2020, the same day the first case of coronavirus was confirmed in the United States. The daily rate of tweets continues to escalate before receding through the month of February. In mid-March, as the pandemic worsened, daily tweets spiked, particularly on March 18, 2020, when tweets containing one of the ten keywords topped 80,000 for the day. Of note, the World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020. On March 13, 2020, then-President Trump declared the pandemic a National Emergency and instituted a travel ban on non-U.S. citizens traveling from Europe. Moreover, on March 16, 2020, immediately preceding the surge in tweets per day, Trump referred to the coronavirus as the Chinese virus on Twitter. This tweet reached the President’s eighty-eight million Twitter followers, in addition to others who saw the message retweeted or heard about it through other outlets, such as various social media sites or mainstream news organizations. Over the next several days, Trump repeated the phrases Chinese virus and China virus on social media, as well as during a White House press conference on March 19 (Fallows, 2020).
April saw a dip in tweets per day, and the rate remained relatively low until a few upticks in July. The spike on July 8, 2020 coincides with the United States’ decision to begin the process of withdrawing from the World Health Organization on the previous day. Trump routinely criticized the WHO’s response to the pandemic, accusing the agency of being a “puppet of China.” Stating that China has “total control” over the WHO, Trump argued that Chinese officials failed to adequately report information about the coronavirus to the WHO, which in turn he accused of succumbing to Chinese pressure to mislead the world about the deadliness of the virus and how it was transmitted (Holland & Nichols, 2020). The final surge in daily tweets in 2020 comes in early October, immediately following the positive COVID-19 diagnoses of then-President Trump and First Lady Melania Trump, as well as Trump’s subsequent admittance to Walter Reed National Military Medical Center for treatment.
Daily tweets containing anti-Asian sentiment remained low, relatively speaking, following the November 2020 presidential election. Interestingly, the next noticeable spike in daily tweets came on January 27, 2021, when tweets per day surpassed 15,000. This uptick occurred one day after newly elected President Biden signed an executive order on racial equality. It called on federal agencies to ensure official actions and statements, including those related to COVID-19, do not contribute to racist and xenophobic attacks against Asian Americans and Pacific Islanders. The order further noted previous actions by the federal government contributed to xenophobia in America. While the order did not specifically mention the former president, nor derogatory terms Trump used for the coronavirus, it stated that referencing the pandemic by its location of origin contributed to hostility and violence in America towards certain groups (Vercellone, 2021). It thus appears that Twitter users were reacting to an attempt by the incoming Biden administration to diminish anti-Asian animus. Some users likely saw this executive order as an attack on former President Trump and his handling of the pandemic, or an example of political correctness run amok, and therefore used words such as China virus or wuflu defiantly. Other users may have used derogatory terms for COVID-19 in a non-hateful manner following the executive order, perhaps discussing why such terms should no longer be used, for instance. Whatever the case, daily rates of anti-Asian keyword usage on Twitter dipped dramatically following the spike on January 27, 2021 and remained low until the end of the reporting period of this study, February 28, 2021.
Next, the sentiment analysis of the Twitter data demonstrates that, as expected, the tweets we collected are overwhelmingly negative, with an average sentiment score of –0.11. There is variation in sentiment, however. We classify tweets with high sentiment as those with a value of over 0.99. Some examples of tweets that clear this threshold are:
My great great great great great great great great great great great great great great great great great great great great great great great great grand parents survived the Black Death. #Covid_19 #ChinaLiedPeopleDied
It IS a China Virus!! I love all people and pray for everyone to recover BUT this did start in China! Love and respect our great COUNTRY and GREAT PRESIDENT! God bless you and keep you healthy…
We cherish the freedom in USA. But we should not abuse the freedom like rep. Omar does, or even use the freedom to attack this great country, like CCP [Chinese Communist Party] does with CCP virus. USA is far from being perfect, but it is one of the greatest in the whole world since God has blessed her
He won against Hillary.
He won with both Supreme Court picks
He won against impeachment hoax
He won against Iran
He won building the economy
He won rebuilding the military
He won building the wall
He won against Wuhan virus
He wins against Pelosi
He’ll win against Obiden
God Bless America, God Bless Hong Kong, God Bless Japan, God Bless UK, God Bless Thailand, God Bless Taiwan, God Bless Italy, God Bless Spain…. God Damx china!#WuhanVirus #ChineseVirus
Each of these tweets makes reference to COVID-19 in a derogatory manner. The first tweet places blame for the disease on China’s alleged duplicity, using the hashtag ChinaLiedPeopleDied. The second tweet refers to COVID-19 as the China Virus, and the third tweet refers to the coronavirus as the CCP [Chinese Communist Party] virus. The fourth tweet uses the label Wuhan virus to refer to COVID-19. The fifth tweet is more explicitly hateful, not only using several stigmatizing names for the coronavirus, but also specifically condemning China. However, while these tweets are derogatory in content, their sentiment scores do not register as negatively as those we classify as low sentiment. Tweets with low sentiment are defined as those with a score less than -0.99. Some examples of tweets with low sentiment are:
I HATE CHINAI HATE CHINAI HATE CHINA I HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINAI HATE CHINA#ChineseVirus19 #ChineseProductsInDustbin #ChinaLiedPeopleDied #chinalies #WHOlied
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU FUCK YOU
FUCK YOU FUCK YOU
YOURE THE REASON WE CAN’T SAFELY VOTE MR. CHINA VIRUS IS A HOAX
Facing long queues while using fucking mask at airports, it’s all because of fucking #China and #ChinaVirus #ChineseVirus ..Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China Fuck #China
Peace protest is fine. But loot, rob, steal. Was insult, unrespect for rslan and his family. One dead, protest. Nazi dictactor communist evil china killed many americans by ccp virus. No one protest in front evil china embassy. Too scare of evil china shoot, crash by tanks
Chinese is the most racist in the worldThey hate AmericanThey hate AfricanThey hate JapanThey hate TaiwanThey hate Hong KongThey hate ThailandThey hate IndiaThey hate VietnamThey hate all country#ChineseVirus#ChinaMustExplain #ChinaMustPay
The first tweet repeatedly expresses hatred for China while using several disparaging terms for COVID-19. It also impugns the WHO. The second expletive-laden tweet employs repetition to emphasize the user’s discontent, and refers to the coronavirus as the CHINA VIRUS. The third tweet again uses stigmatizing names for COVID-19 and multiple expletives aimed at China, while also blaming China for routine inconveniences associated with the pandemic. The fourth tweet refers to social and political unrest in the United States over the summer months of 2020, while using the phrase ccp virus and calling China evil on three occasions. The final tweet refers to Chinese people as racist and uses the hashtags ChineseVirus, ChinaMustExplain, and ChinaMustPay. These hashtags not only link the outbreak directly to China, but also suggest China must account for the origins of the virus, hinting at malicious intent on the part of China, and that China must somehow make reparations for the damage resulting from the pandemic.
Lastly, we explored how language on Twitter around the words Chinese, China, Asians, and virus evolved since the start of the COVID-19 outbreak. Table 1 shows the twenty words most similar to these four words at the outset of the pandemic. They were obtained from a model trained on our dataset of tweets during the week of January 21, 2020. Several of the words most similar to Chinese appear to reference the pandemic. Namely, Wuhan, epicenter, and epicentre are commonly found in Twitter discussions involving the word Chinese. There are more direct references to COVID-19 when examining words commonly used in conjunction with China. In particular, virus, Wuhan, coronavirus, epicenter, and spreading all relate to the word China. Twitter discussions involving the word virus are, as expected, dominated by talk of COVID-19. Indeed, China is the word most similar to virus on Twitter during this week. Finally, discussion linked to the word Asians revolves around issues of racism and xenophobia. There are also several generally derogatory words associated with the word Asians on Twitter during this particular week. Of course, the context of the Twitter conversations cannot be discerned from this information alone. Thus, we cannot say whether, for instance, people are condemning anti-Asian xenophobia or promoting it.
Twitter discussion involving these same four words changed dramatically by the week of July 29, 2020, as illustrated in Table 2. Again, the twenty words most similar to these keywords were obtained from a model trained on our dataset of tweets during the week of July 29, 2020. Notably, conversations center more on the pandemic. They are also decidedly more acrimonious. Included in the twenty words, phrases, and hashtags most similar to the word Chinese are spies, Wuhan, communists, spreaded, colluded, bioweapon, chinaliedpeopledied, banchineseproducts, biowar, and chinamustpay. Many of these same words and phrases are found to be used in tweets alongside the word China. Words and phrases associated with virus on Twitter during this week include China, Chinese, Wuhan, intentionally, purposely, chinavirus, and kungflu. Words similar to Asians, as in January, remain largely focused on racism, xenophobia, stereotyping, and scapegoating.
The coronavirus outbreak has elicited both goodwill and rancor in America. Representing a common enemy, the pandemic has bonded people from all walks of life. Social distancing, wearing a face covering in public, and quarantining represent forms of unity, acknowledgments that shared sacrifice is needed to benefit the collective. Countless acts of selflessness and empathy have emerged in the wake of the outbreak, ranging from medical professionals and other frontline workers laboring under dangerous and exhausting conditions, to ordinary citizens helping friends and family in need of financial or emotional support. The hashtag #COVIDkindness trended on Twitter shortly after the outbreak, created to highlight the countless caring acts performed during the pandemic. Even so, it remains unclear if the coronavirus outbreak will ultimately bring American society closer together or further divide an already polarized nation (Boxell, Gentzkow, & Shapiro, 2020). Indeed, accounts of benevolence and sacrifice during the pandemic are counterweighted by stories of enmity related to COVID-19. This study focuses on forces that serve to broaden social and cultural fissures during the pandemic, particularly anti-Asian hate on Twitter related to COVID-19. Several important conclusions can be drawn from our work.
Our trend analysis of daily tweets demonstrates how quickly hateful comments can spread on social media platforms. Even before the first confirmed case of COVID-19 in America, inflammatory keywords associated with hateful tweets were already being used by Twitter users to describe the disease. This analysis also points to the ability of powerful and influential individuals to steer social media narratives, as well as the broader public discourse. Of note, a surge in tweets containing one or more of the inflammatory keywords we explored immediately followed former President Trump’s use of the phrase Chinese Virus on Twitter in mid-March 2020. Previously, Trump had referred to the disease as a foreign virus and retweeted supporters’ use of the phrase China Virus; however, this was the first time he produced an original tweet using the phrase. The tweet was quickly met with condemnation by ordinary citizens, medical professionals, and government officials, including the Congressional Asian Pacific American Caucus, but it nevertheless had a profound effect on Twitter activity, and the phrase Chinese virus quickly trended (Yam, 2020). The proliferation of the phrase Chinese virus suggests that derogatory language might appear more acceptable when used by individuals in positions of authority. In addition, with time the repeated use of pejorative language can serve to normalize it, effectively desensitizing people to hateful speech and increasing outgroup prejudice (Soral, Bilewicz, & Winiewski, 2018). Indeed, following Trump’s usage of the phrases China virus and Chinese virus, several other prominent government officials began using similar language to describe the coronavirus, while also aggressively blaming China for the pandemic (Rogers, 2020; Wu, 2020).
Our analysis also points to the cumulative harm of cyberhate. Twitter conversations around the words China, Chinese, Asians, and virus became markedly more negative and hateful between January and July 2020. We found that derogatory words and phrases were increasingly likely to be used in conversations about China, individuals of Asian descent, or viruses in general. This trend is undeniably troubling. It suggests a normalization of hate on Twitter, and also the possibility that hate towards Chinese people and other individuals of Asian descent might become entrenched in social media. The normalization of hate towards a particular group leads to stigmatization and othering, and by explicitly seeking to dehumanize others, hate speech is used to rationalize acts of discrimination and violence. Thus, if online rhetoric regarding Asians continues to become more ubiquitous and extreme, we are likely to continue to witness elevated rates of hateful attacks targeting Asians offline as the pandemic continues, and perhaps even after it subsides.
Indeed, reports of anti-Asian hate crimes have increased in the early months of 2021, even as the pandemic has started to subside, and restrictions related to COVID-19 have been relaxed. Russell Jeung, co-founder of Stop AAPI (Asian American Pacific Islander) Hate, argues that Asians are more vulnerable to attacks as society gradually returns to a state of normalcy. This is because targets of hate were somewhat protected from direct attacks during the period of quarantine. However, as more and more people break quarantine, and interactions between assailants and potential targets are more prevalent, there are concerns that a year’s worth of pent-up anger may result in a swell of attacks against Asians and Asian Americans (Yam 2021).
Feelings of fear, anxiety, or peril can elicit myriad responses in humans. Too often, people have responded to the uncertainty of the novel coronavirus with hate. Our work demonstrates not only the prevalence of anti-Asian hate related to COVID-19 on Twitter, but also its evolution. This study is largely exploratory in nature, though, and we implore researchers to build upon our work. In particular, we examined one specific type of hate related to the pandemic. But, while COVID-19 related hate targeting Asians is most common online, others are targeted based on traits such as age, political ideology, religion, and willingness to take safety precautions to guard against the disease. Exploring these additional types of cyberhate and hostility will offer a more holistic picture of online animus tied to the coronavirus. It is also critical to continue to track trends in hate speech on social media related to the pandemic over a longer period and on other networking platforms. It is yet unclear if the animosity towards Asians on Twitter will remain after the pandemic recedes and, if it does, to what degree. Additionally, we focused on Twitter because it is one of the most commonly used forms of social media, and Twitter data are readily available for research purposes. It is likely, however, that other social networking sites that do not monitor or police speech or otherwise cater to extreme ideas have a greater frequency of COVID-19 related hate material. Examining those sites is therefore important as well, especially since sites like 8kun or fringe forums on Reddit have previously served as incubators of radicalism.
Ultimately, the findings of this study are discouraging. The outbreak of COVID-19 has led to the stigmatization, harassment, and abuse of individuals of Asian descent, clear evidence of which we found by examining Twitter activity. Even so, we hope this work can shed light on three important issues: first, the deleterious effect of linking a disease to a particular segment of the population; second, the interplay between offline and online rhetoric; and third, the potential for cyberhate to provoke violence in the physical world. In doing so, we hope this study encourages individuals to be more cautious and measured in how they talk about novel pandemics in the future, both offline and online.