AU hate crime incident in context: From cybersecurity to dignity

By Tijana Milosevic

As an American University alumna and a scholar of cyberbullying, I was deeply troubled to hear about the series of racially charged events that took place on campus earlier this month. Most readers are familiar with the distressing details by now: bananas were found hanging from noose-shaped strings marked “AKA,” the initials of Alpha Kappa Alpha, the University’s predominantly African-American sorority. Shortly thereafter, the Anti-Defamation League alerted the University to an online post written by a “white supremacist” urging followers to “troll” the newly elected President of the American University Student Government (AUSG), Taylor Dumpson.

It goes without saying that the top priority for the University is to work with authorities to guarantee the safety of President Dumpson and all those on campus. But as the AU community wrestles with the insidious and complex nature of these grave incidents, it is important to put them in context so we can more clearly identify the sources and roots of the problem. 

As a researcher who has been studying cyberbullying and online harassment for the past four years, I will focus here on the Internet-mediated aspects of the incident, while also interrogating common responses, including the tendency to fixate on safety- and security-oriented measures at the expense of broader cultural and social dynamics.

As students of media know all too well, framing plays an incredibly important part in influencing the scope of possible solutions. Rather than concentrating only on safety measures, which can fall short, this incident offers an opportunity to discuss some of the cultural aspects of this problem, which can lead to more effective and far-reaching responses in the long term.

Cyber-harassment, trolling or cyberbullying?

As an overall caveat to discussions involving “cyber” (as in “cyberbullying” or “cyber-harassment”), I find the term itself problematic: it can connote that the digital world is somehow a mysterious place apart from the “real” or “offline” world.

AU administrators referred to the posts targeting President Dumpson as “cyber-harassment,” and I find this term more appropriate for this case than “cyberbullying.” While I have not seen the original posts, from what I have read about the incident, the terms “online hate speech” or “harmful speech online” would also apply.

Although definitions and understandings vary, hate speech usually refers to the specific targeting of minority populations: offensive speech (which can also call for violence) directed against a group and motivated by characteristics of that group (e.g., race, ethnicity, gender, or sexual orientation). As such, “online hate speech” might be the most accurate term to describe the gravity of the incidents that occurred earlier this month.

While the initial AU letter said the author of the hateful post called on others to “troll” President Dumpson, trolling has a specific connotation and culture around it. For the most thoughtful inquiry into trolling and how it fits into mainstream culture, I highly recommend scholar Whitney Phillips’ book “This Is Why We Can’t Have Nice Things.” While the following definition is certainly a generalization (and a simplification), trolling tends to refer to being annoying or purposefully offensive (e.g., “just for lolz”) because you can, for fun. Although trolling can certainly be racially charged, labeling the incident as “trolling” might inadvertently assign it a more benign connotation than it deserves.

“Cyberbullying,” however, is a distinct term. It may be difficult to arrive at an agreed-upon definition, but researchers tend to observe that cyberbullying constitutes some form of repeated harm involving electronic technology. When trying to study and measure it, however, it is not obvious what counts as “repetition.” Is it multiple comments by one person? Is it enough if many people merely see one comment, or do they also need to re-post it or continue commenting on it? And if the behavior needs to happen continuously over time, how long must it last to count as continuous? Some scholars also emphasize that, to be characterized as cyberbullying, a behavior typically needs to involve some form of “power imbalance” between the so-called victim and perpetrator. Cyberbullying is frequently used to refer to peer aggression (among children and youth), and the definition tends to be derived from offline bullying. Authors caution against using the term “bullying” for all instances of interpersonal conflict online, as these incidents can have very different characteristics. Nonetheless, in popular discourse the term tends to be applied to anything and everything, and this, in my view, contributes to the overall confusion around the issue, including the belief that the phenomenon is more widespread than the empirical research suggests.

Bullying and harassment as safety/security or tech problems

Cyberbullying tends to be understood primarily as an “e-safety” problem or a security issue. It is also often presented as a technological problem. In media and public discourse, it is not uncommon for technology, or some features thereof, to be blamed for what is often characterized as an increasing number of harassment incidents. For instance, when harassment is anonymous, the technological affordances of anonymity are said to contribute to the problem. In my view, such discourse tends to ignore the cultural factors that normalize humiliation. This might be evident in the post-election atmosphere in the United States, which is how I would contextualize what happened: a normalizing and implicit sanctioning of racist or sexist behavior. (Note: while I have not conducted an empirical analysis of the post-election discourse and how it might normalize ethnic hatred or racism, initial journalistic investigations into the subject suggest it will continue to be an important and fruitful area of research.)

In my own work, I explain the cultural aspects of the problem by situating bullying within a “dignity theory” framework. I point to factors at the cultural level that encourage humiliation and promote what some dignity scholars term “false dignity.” This concept refers to deriving one’s worth from external insignia of success, which can be anything from money, good looks, and expensive clothes to any other token of success, ranging from toys in childhood to sex in adolescence, all increasingly measured in likes on social media platforms. These less frequently examined cultural assumptions and values are by no means peculiar to youth; they permeate adult interactions as well, a point often lost when technology becomes a scapegoat for wider social problems and media panics. This is well exemplified in cyberbullying cases involving children, especially high-profile incidents that attract significant public attention and in which cyberbullying is in some way linked to a child’s suicide (usually a misleading oversimplification that presents cyberbullying as the cause of the suicide). Under such circumstances, public discourse tends to revolve around blaming the so-called bullies or the online platforms where the bullying happened. Such simplistic binaries and finger-pointing can prevent a constructive discussion of the problem that accounts for its complexity.

As to whether the resources that victims of online harassment have at their disposal are actually helpful: in my own research, I have found very little evidence of the efficacy of social media companies’ anti-harassment enforcement tools (e.g., blocking, filtering, or various forms of reporting). I argue for more transparency in how companies enforce their anti-bullying and anti-harassment policies and in whether and how they measure their effectiveness, as well as for continuous independent evaluation of the effectiveness of companies’ tools (e.g., how efficiently companies respond to reported posts, how satisfied users are, and which solutions that companies are not yet implementing would be helpful). I also call for more funding from both industry and governments for educational initiatives and innovative forms of counseling for those who need it.

You might have seen that documents detailing Facebook’s confidential anti-abuse policies were recently leaked to The Guardian, adding fuel to the debate on the effectiveness of companies’ enforcement mechanisms and suggesting the company “is too lenient on those peddling hate speech.” Consider that, according to the newspaper, Facebook had previously advised its moderators “to ignore certain images mocking people with disabilities.” In my discussions with e-safety experts, Facebook has been characterized as one of the better companies on the market in terms of these policies, one that has developed its policies significantly over time and that can set the standard for the rest of the social media industry, at least when it comes to protecting children from cyberbullying.

Resolving the problem—protection vs. participation?

Many see the problems of cyberbullying, trolling, and hate speech as inherently involving tradeoffs with respect to privacy, security, convenience, or engagement. The proposition is that in order to protect themselves from harassment, students might need to take practical steps to protect their privacy, such as disabling geolocation or opting for more restrictive privacy settings. Participation and engagement are thus pitted against protection, as if more of one requires less of the other. Meanwhile, some students might be happy to see various online platforms proactively crawl shared content to identify harassment cases in real time (without those cases first having to be reported by users). But would students be comfortable if platforms’ moderators were looking into content that they shared privately? Think of harassment in the context of comments on online news sites: whether to do away with comment sections entirely has been an ongoing struggle for many outlets.

I think the issue of response goes back to how the problem is perceived and framed. From what I have seen, the advice provided to students by administrators focuses largely on safety and security, framing the problem in those terms. It is, of course, important to point out these security-related aspects, to give students practical advice on how to protect themselves, and to ensure that everyone is safe. But we should also be careful not to ignore the cultural aspects. While users should certainly familiarize themselves with the tools that online platforms provide to protect their privacy, companies need to do a better job of raising awareness of these tools and of ensuring that they are effective and easy to use.

At the same time, creating a culture, not just on campus but more widely in society, where these hate-related problems are openly talked about is just as important. Following the AU case, those who committed these acts need to be identified and held accountable. But we should be vigilant not to miss an important opportunity to discuss the heart of the problem: the normalization, sanctioning, and pervasiveness of hate and prejudice on campus and beyond, as well as the broader dignity-related aspects of the problem.

About the Author

Tijana Milosevic is a researcher at the Department of Communication, University of Oslo, Norway, and a member of the EU Kids Online research network (about 150 researchers in 33 countries studying children and digital media). She received her PhD from the American University School of Communication. EU Kids Online will soon be collecting data in several countries as part of a new survey on young people’s digital media use. Tijana is also conducting research in Norway with children and teens on whether they find social media platforms’ tools against digital bullying effective. As part of the COST Action project DigiLitEY, she analyzes privacy-related aspects of the Internet of Things, smart toys in particular. You can follow her on Twitter at @TiMilosevic and @EUKIDSONLINE.

Her forthcoming book Protecting Children Online: Cyberbullying Policies of Social Media Companies (MIT Press, 2017) examines these issues in depth, focusing on children and young people in particular.