AU hate crime incident in context: From cybersecurity to dignity

As an American University alumna and scholar of cyberbullying, I was deeply troubled to hear about the series of racially charged events that took place on campus earlier this month. Most readers are familiar with the distressing details by now — bananas were found hanging from noose-shaped strings marked with AKA, an abbreviation for Alpha Kappa Alpha, the University’s predominantly African-American sorority. Shortly thereafter, the Anti-Defamation League alerted the University to an online post written by a “white supremacist” urging followers to “troll” the newly elected President of the American University Student Government (AUSG), Taylor Dumpson.

It goes without saying that the top priority for the University is to work with authorities to guarantee the safety of President Dumpson and all those on campus. But as the AU community wrestles with the insidious and complex nature of these grave incidents, it is important to put them in context so we can more clearly identify the sources and roots of the problem. 

As a researcher who has been studying cyberbullying and online harassment for the past four years, I will focus here on the Internet-mediated aspects of the incident, while interrogating responses to such incidents, including the tendency to fixate on safety and security-oriented responses at the expense of broader cultural and social dynamics.

As students of media know all too well, framing plays an incredibly important part in influencing the scope of possible solutions. Rather than concentrating only on safety measures, which can fall short, this incident offers an opportunity to discuss some of the cultural aspects of this problem, which can lead to more effective and far-reaching responses in the long term.

Cyber-harassment, trolling or cyberbullying?

As an overall caveat to discussions involving “cyber” (as in “cyberbullying” or “cyberharassment”), I find the term itself to be problematic, as it can connote that the digital world is somehow a mysterious place apart from the “real” or “offline” world.

AU administrators referred to the posts targeting President Dumpson as “cyber-harassment,” and I find this term more appropriate for this case than “cyberbullying.” While I have not seen the original posts, from what I have read about the incident, the terms “online hate speech” or “harmful speech online” would also apply.

Although definitions and understandings vary, hate speech usually refers to the specific targeting of minority populations: offensive speech (which can also call for violence) that is directed against a group and motivated by some characteristic of that group (e.g., race, ethnicity, gender, or sexual orientation). As such, “online hate speech” might be the most accurate term to describe the gravity of the incidents that occurred earlier this month.

While the initial AU letter said the author of the hateful post called on others to “troll” President Dumpson, trolling has a specific connotation and culture around it. For the most thoughtful inquiry into trolling and how it fits into mainstream culture, I highly recommend scholar Whitney Phillips’ book “This Is Why We Can’t Have Nice Things.” While the following definition is certainly a generalization (and a simplification), trolling tends to refer to being annoying or purposefully offensive (e.g., “just for lolz”) simply because one can, for fun. Although trolling can certainly be racially charged, labelling the incident as “trolling” might inadvertently assign this particular harassment incident a more benign connotation.

“Cyberbullying,” however, is a distinct term. It may be difficult to arrive at an agreed-upon definition of cyberbullying, but researchers tend to agree that it constitutes some form of repeated harm involving electronic technology. When trying to study and measure cyberbullying, however, it is not always obvious what counts as “repetition.” Is it multiple comments by one person? Is it enough if many people merely see one comment, or do they also need to re-post it or keep commenting on it? If the behavior needs to happen continuously over time in order to be called cyberbullying, how long must it last to be classified as continuous? Some scholars also emphasize that, to be characterized as “cyberbullying,” a behavior typically needs to involve some form of “power imbalance” between the so-called victim and perpetrator. Cyberbullying is most frequently used to refer to peer aggression among children and youth, and the definition tends to be derived from offline bullying. Authors caution against using the term “bullying” for all instances of interpersonal conflict online, as these incidents can have very different characteristics. Nonetheless, in popular discourse the term tends to be applied to anything and everything, and this, in my view, contributes to the overall confusion around the issue, including a belief that the phenomenon is more widespread than the empirical research suggests.

Bullying and harassment as safety/security or tech problems

Cyberbullying tends to be understood primarily as an “e-safety” problem or a security issue. It is also often presented as a technological problem. In media and public discourse, it is not uncommon for technology, or some features thereof, to be blamed for what is often characterized as an increasing number of harassment incidents. For instance, when harassment is anonymous, the technological affordances of anonymity are said to be contributing to the problem. In my view, such discourse tends to ignore the cultural factors that normalize humiliation. This is how I would contextualize what happened: the post-election atmosphere in the United States normalizes and implicitly sanctions racist or sexist behavior. (Note: while I have not conducted an empirical analysis of the post-election discourse and how it might be normalizing ethnic hatred or racism, initial journalistic investigations into the subject suggest it will continue to be an important and fruitful area of research going forward.)

In my own work, I explain the cultural aspects of the problem by contextualizing bullying within a “dignity theory framework.” I point to factors at the cultural level that encourage humiliation and promote what some dignity scholars term “false dignity.” This concept refers to deriving one’s worth from external insignia of success, which can be anything from money, good looks, and expensive clothes to any other token of status, ranging from toys in childhood to sex in adolescence, all measured in likes on social media platforms. These less frequently examined cultural assumptions and values are by no means peculiar to youth; they permeate adult interactions as well, a point often lost when technology becomes a scapegoat for wider social problems and media panics in public discourse. This is well exemplified in cyberbullying cases involving children, especially in high-profile incidents that gather significant public attention and in which cyberbullying is in some way linked to a child’s suicide (usually a misleading oversimplification that presents cyberbullying as the cause of the suicide). Under such circumstances, public discourse tends to revolve around blaming the so-called bullies or the online platforms where the bullying happened. Such simplistic binaries and finger-pointing can prevent a constructive discussion of the problem that accounts for its complexity.

Are the resources that victims of online harassment have at their disposal actually helpful? In my own research, I have found very little evidence of the efficacy of social media companies’ anti-harassment enforcement tools (blocking, filtering, and various forms of reporting). I argue for more transparency in how companies enforce their anti-bullying and anti-harassment policies and in whether and how they measure their effectiveness, as well as for continuous, independent evaluation of the effectiveness of companies’ tools: how quickly companies respond to reported posts, how satisfied users are with the outcomes, and which solutions companies have not yet implemented that would be helpful. I also call for more funding from both industry and governments for educational initiatives and for innovative forms of counseling for those who need it. You might have seen that internal Facebook documents laying out its confidential anti-abuse policies were recently leaked to The Guardian, adding fuel to the debate over the effectiveness of companies’ enforcement mechanisms and suggesting the company “is too lenient on those peddling hate speech.” Consider that, according to the newspaper, Facebook had previously advised its moderators “to ignore certain images mocking people with disabilities.” In my discussions with e-safety experts, Facebook has been characterized as one of the better companies on the market in terms of these policies, one that has developed its policies significantly over time and that can set the standard for the rest of the social media industry, at least when it comes to protecting children from cyberbullying.

Resolving the problem—protection vs. participation?

Many see the problems of cyberbullying, trolling, and hate speech as inherently involving tradeoffs with respect to privacy, security, convenience, or engagement. The proposition is that in order to protect themselves from harassment, students might need to take practical steps to protect their privacy, such as disabling geolocation or opting for more restrictive privacy settings. Participation and engagement are seen as pitted against protection, as if more of one requires less of the other. Meanwhile, some students might be happy to see online platforms proactively crawling shared content in order to identify harassment cases in real time, without those cases first having to be reported by users. But would students be comfortable if platforms’ moderators were looking into content they shared privately? Or consider harassment in the comments sections of online news sites: whether to do away with comments entirely has been an ongoing struggle for many platforms.
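To make the idea of proactive detection concrete, here is a minimal, purely illustrative Python sketch of the kind of automated flagging a platform might run over newly posted content before anyone reports it. The lexicon, threshold, and Post structure are invented for illustration; real systems rely on large machine-learning classifiers and human review, and nothing here reflects any specific platform’s implementation.

```python
# A minimal, purely illustrative sketch of proactive flagging: scan newly
# posted content for a (hypothetical) lexicon of abusive phrases and queue
# matches for human review before anyone has to report them.
from dataclasses import dataclass
from typing import List

# Hypothetical lexicon; a real one would be far larger and context-aware.
ABUSIVE_TERMS = {"nobody wants you here", "go back to your country"}

@dataclass
class Post:
    author: str
    text: str

def flag_for_review(posts: List[Post], threshold: int = 1) -> List[Post]:
    """Return posts containing at least `threshold` lexicon hits."""
    flagged = []
    for post in posts:
        lowered = post.text.lower()
        hits = sum(term in lowered for term in ABUSIVE_TERMS)
        if hits >= threshold:
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    sample = [
        Post("userA", "Congratulations on the election!"),
        Post("userB", "Nobody wants you here, just leave."),
    ]
    for post in flag_for_review(sample):
        print(f"Flagged for moderator review: {post.author}: {post.text!r}")
```

Even a sketch this crude makes the tradeoff visible: catching more harassment proactively means scanning more of what users share, which is precisely the tension between protection and participation discussed above.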

I think the issue of response goes back to how the problem is perceived and framed. From what I have seen, the advice provided to students by administrators focuses to a large extent on safety and security, framing the problem in those terms. It is, of course, important to offer this security-related advice, with practical guidance on how students can protect themselves, and to ensure that everyone is safe. But we should also be careful not to ignore the cultural aspects. While users should certainly familiarize themselves with the tools that online platforms provide to protect their privacy, companies need to do a better job of raising awareness of these tools and of ensuring that they are effective and easy to use.

At the same time, creating a culture, not just on campus but more widely in society, where these hate-related problems are openly talked about is just as important. Following the AU case, those who committed these acts need to be identified and held accountable. But we should be vigilant not to miss an important opportunity to discuss the heart of the problem: the normalization, sanctioning, and pervasiveness of hate and prejudice on campus and beyond, as well as the broader dignity-related aspects of the problem.

About the Author

Tijana Milosevic is a researcher at the Department of Communication, University of Oslo, Norway, and a member of the EU Kids Online research network (about 150 researchers in 33 countries studying children and digital media). She received her PhD from the American University School of Communication. EU Kids Online will soon be collecting data in several countries as part of a new survey on young people’s digital media use. Tijana is also conducting research in Norway with children and teens, asking whether they find social media platforms’ tools against digital bullying to be effective. As part of the COST Action project DigiLitEY, she analyzes privacy-related aspects of the Internet of Things, and of smart toys in particular. You can follow her on Twitter @TiMilosevic and @EUKIDSONLINE and at www.tijanamilosevic.org.

Her forthcoming book Protecting Children Online: Cyberbullying Policies of Social Media Companies (MIT Press, 2017) examines these issues in depth, focusing on children and young people in particular.

AU hosts Facebook Live event on cyber-safety in the wake of campus hate crimes

Following last Monday’s racist incident on campus and subsequent online harassment directed at AU Student Government President Taylor Dumpson, American University hosted a Facebook Live video event on Friday outlining practical steps students can take to protect themselves from online hate.

Watch the video here.

Vice President for Communication Terry Flannery and Assistant Director of Physical Security and Police Technology Doug Pierce touched on several actions the university is taking to protect students, both on campus and online, while also addressing best practices students can adopt to protect themselves in the offline and online worlds.

Mr. Pierce highlighted the importance of actively managing one’s privacy settings on social media platforms like Facebook, Instagram, Twitter, and Snapchat, including disabling geolocation, since sharing one’s location raises the risk that online threats could develop into real-world threats to a victim’s physical safety.

In what was a concerted effort to engage with the university community in an open and transparent manner, Ms. Flannery and Mr. Pierce took questions posted in real time by viewers, many of whom expressed a growing sense of frustration over the larger issues of hate and fear bubbling up on campus and beyond. Echoing these sentiments was a general sense of disillusionment about the efficacy of one-off discussions (no matter how well intended) in response to what many see as merely the latest in a string of incidents reflecting deep-seated divisions on campus.

Addressing the larger political and social context, Ms. Flannery explained, “I think so much has been affected by the current political climate that we’re in. And I’ve seen many people who’ve, in their social media streams, talk[ed] about how they’re taking a break because the heated rhetoric following the election resulted in just the kind of heated emotion that was difficult. But this is different, what we’re talking about now. We’re talking about people who are targeting you personally because of what you represent or particular views, or your identity based on race, or other factors. And so I think it’s a particularly egregious form of hate; and a particularly personal form of hate.”

Identifying strategies for dealing with direct online harassment, Mr. Pierce suggested avoiding engagement with perpetrators and so-called trolls. “Don’t respond to the messages from these people who are trolling you and trying to provoke a reaction. That reaction is exactly what they’re trying to achieve by doing this. So the goal would be to not give them that satisfaction.”

But several students balked at the notion of self-censorship in the face of deplorable expressions of hate. “We shouldn’t have to hide ourselves online or in person,” wrote one commenter, highlighting the difficulty and dissonance many social media users experience when balancing tradeoffs like privacy versus security online.

“We tend to pit participation on these platforms against protection… that more of one requires less of the other,” explains Internet Governance Lab affiliated alumna Dr. Tijana Milosevic, a post-doctoral fellow in the Department of Communication at the University of Oslo who studies online hate and cyberbullying (she received her Ph.D. in Communication Studies from AU in 2015). “From what I have seen, the Facebook talk given by AU administrators focuses to a large extent on safety and security–framing the problem in this way. It is, of course, important to point out these security-related aspects, with advice for students on how to protect themselves and to ensure that everyone is safe and feels safe. However, I think we should be cautious not to forget the cultural aspect of it.”

And while framing the discourse of hate, whether online or offline, is increasingly difficult given the labyrinthine and constantly shifting web of actors seeking to pollute the public sphere with vitriol, Dr. Milosevic argues that combating it requires a more holistic approach. “We tend to forget the aspects of our culture that normalize humiliation. I think this is very evident with the new administration–normalizing and implicitly (or even explicitly!) sanctioning such behavior… Creating a culture (not just on campus but more widely, in the society) where these hate-related problems are openly talked about is very important as well.”

Dr. Fernanda Rosa on ‘Global Internet, Local Governance: A Sociotechnical Approach to Internet Exchange Points (IXPs)’

Last month, AU SOC PhD candidate and Internet Governance Lab fellow Fernanda Rosa successfully defended her dissertation, titled ‘Global Internet, Local Governance: A Sociotechnical Approach to Internet Exchange Points (IXPs)’.

As the physical points through which Internet service providers (ISPs) (e.g., Verizon, AT&T) and content delivery networks (CDNs) (e.g., Akamai, Cloudflare, Amazon) exchange web traffic, IXPs play an incredibly important — though largely unseen — role in delivering content to end-users. As such, these physical sites of interconnection mediate all manner of public interest questions, including efforts to bridge the digital divide, notions of Internet sovereignty and data localization, innovation, interoperability, and more.

As Dr. Rosa explains:

The Internet is an arrangement of interconnected private networks, each of them with different types of technical and political control (Abbate, 1999; Roberts et al., 2011; DeNardis, 2014). Although the network interconnections are not visible to Internet users, they are typical and essential for the Internet. Such connections are physical and logical, and are possible through cables and structures distributed among countries (Nye, 2014). In this context, an important interconnection facility gains relevance within the Internet architecture: the Internet Exchange Points (IXPs). IXPs are physical facilities where these networks can interconnect, including Internet Service Providers (e.g. Comcast), content intermediaries (e.g. Google, Facebook), universities, banks and other networks. The purpose of my dissertation is to illuminate the sociotechnical (Winner, 1986; Latour, 1999) aspects of the IXPs, making visible the controversies behind them and the social and political values at stake. In my dissertation, IXPs’ functions and affordances will be elucidated and contextualized in light of public interest questions, such as the digital divide; sovereignty and infrastructure dependence among countries; Internet surveillance; the Internet economy and the privatization of Internet governance. To do that, I use a mixed methods approach, including qualitative interviews, on-site observations, data collection on IXPs’ websites and analysis of quantitative datasets about Internet routes and traffic.
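For readers who want to see these otherwise invisible interconnection points for themselves, one place to start is PeeringDB, the industry’s public registry of interconnection data. The rough Python sketch below lists the networks recorded as present at a single exchange. It is based on my reading of the public PeeringDB REST API (https://www.peeringdb.com/apidocs/); the endpoint, query parameter, and field names should be verified against the current documentation, and the ix_id used here is only a placeholder.

```python
# Illustrative only: inspect which networks peer at one IXP via PeeringDB.
# Endpoint and field names follow the public PeeringDB API docs as I
# understand them; verify against https://www.peeringdb.com/apidocs/.
import requests

def list_ixp_members(ix_id: int) -> None:
    """Print the networks (by ASN) recorded as connected to one IXP."""
    # Each "netixlan" record represents one network's port on an exchange.
    resp = requests.get(
        "https://www.peeringdb.com/api/netixlan",
        params={"ix_id": ix_id},
        timeout=10,
    )
    resp.raise_for_status()
    for record in resp.json()["data"]:
        print(f"AS{record['asn']}  {record['ipaddr4']}  {record['speed']} Mbps")

if __name__ == "__main__":
    list_ixp_members(ix_id=1)  # placeholder id; look up real ids via /api/ix
```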

We congratulate Dr. Rosa on her defense and look forward to seeing how her research develops going forward.

Facebook looks to counter ‘information operations’

Last November, Facebook CEO Mark Zuckerberg called claims that his company may have influenced the outcome of the U.S. presidential election by enabling the spread of propaganda a “pretty crazy idea.” But with a report published on Thursday by Facebook’s own security team titled “Information Operations and Facebook,” it is clear attitudes at the social network have changed.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the report explains.

In an effort to combat these new forms of social network-mediated propaganda campaigns, Facebook’s security team said it would increase its use of machine learning and “new analytical techniques” to remove fake accounts and disrupt “information (or influence) operations,” defined as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”
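The report does not disclose which signals feed those models, but the general shape of such a system is familiar from supervised machine learning. Below is a deliberately toy Python sketch, with invented features and numbers, of how per-account signals could be scored to route suspicious accounts to human review. It illustrates the technique in the abstract and does not reflect Facebook’s actual features, thresholds, or pipeline.

```python
# Toy sketch of supervised scoring of per-account signals. Features, labels,
# and the threshold are invented for illustration only; they are not
# Facebook's actual signals or methods.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per account:
# [account_age_days, posts_per_day, share_of_posts_that_are_reshares]
X_train = [
    [1200, 2.0, 0.30],   # long-lived, moderate activity  -> authentic
    [900, 1.5, 0.25],    # authentic
    [3, 80.0, 0.95],     # brand new, floods reshares     -> inauthentic
    [7, 60.0, 0.90],     # inauthentic
]
y_train = [0, 0, 1, 1]   # 0 = authentic, 1 = likely inauthentic

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score a new account; high-risk accounts go to human review, not auto-removal.
candidate = [[2, 120.0, 0.98]]
risk = model.predict_proba(candidate)[0][1]
print(f"Estimated inauthenticity risk: {risk:.2f}")
if risk > 0.8:
    print("Queue account for manual review")
```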

Additionally, the report seeks to “untangle” and move away from the term “fake news,” which it (rightly) argues has become a catch-all used to “refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”

Instead, the report identifies three distinct categories of abuse falling under the umbrella of “information operations”:

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Citing the 2016 U.S. presidential election as a case study, the report explains how Facebook security experts “monitored” suspicious activity during the lead-up to the election and found a deluge of false news, false amplifiers, and disinformation (sidebar: if Facebook was monitoring suspicious activity during the run-up to the election, why did Mr. Zuckerberg dismiss such claims as “crazy” after election day?).

The Facebook report comes less than a week after researchers at Oxford published the latest in a series of studies analyzing the role of automated accounts (or “bots”) in disseminating “junk news” on social media in the weeks preceding national elections in the U.S., Germany, and France. According to the study, one-quarter of all political links shared in France prior to last Sunday’s election contained “misinformation,” though the researchers point out that, in general, French voters were sharing better-quality news than Americans during the lead-up to the U.S. presidential election (whether this reflects stronger media literacy in France or more sophisticated propaganda is unclear).

While Facebook’s security team would not confirm the identity of the actors “engaged in false amplification using inauthentic Facebook accounts,” together the Facebook and Oxford reports add to a growing body of evidence — including from U.S. intelligence officials and private cybersecurity firms — attributing the surge in automated accounts and propaganda to a larger information operation orchestrated by Russian intelligence and aimed at influencing elections and/or sowing distrust in political institutions.

Regardless, the notion that governments are targeting social networks to mine intelligence and influence political outcomes seems less “crazy” and more like the new normal in global politics.

 

Q&A with Internet Governance Lab Faculty Fellow Jennifer Daskal

Joining the Internet Governance Lab as a Faculty Fellow, Jennifer Daskal is an Associate Professor of Law at American University Washington College of Law, where she teaches and writes in the fields of criminal, national security, and constitutional law. She is on academic leave from 2016-2017, and has received an Open Society Institute Fellowship to work on issues related to privacy and law enforcement access to data across borders. From 2009-2011, Daskal was counsel to the Assistant Attorney General for National Security at the Department of Justice. Prior to joining DOJ, Daskal was senior counterterrorism counsel at Human Rights Watch, worked as a staff attorney for the Public Defender Service for the District of Columbia, and clerked for the Honorable Jed S. Rakoff. She also spent two years as a national security law fellow and adjunct professor at Georgetown Law Center.

Daskal is a graduate of Brown University, Harvard Law School, and Cambridge University, where she was a Marshall Scholar. Recent publications include Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues (Journal of National Security Law and Policy 2016); The Un-Territoriality of Data (Yale Law Journal 2015); Pre-Crime Restraints: The Explosion of Targeted, Non-Custodial Prevention (Cornell Law Review 2014); and The Geography of the Battlefield: A Framework for Detention and Targeting Outside the ‘Hot’ Conflict Zone (University of Pennsylvania Law Review 2013). Daskal has published op-eds in the New York Times, Washington Post, and International Herald Tribune and has appeared on BBC, C-Span, MSNBC, and NPR, among other media outlets. She is an Executive Editor of and regular contributor to the Just Security blog.

Recently, we discussed her research and some of the many hot topics arising at the intersection of Internet governance and national security law. 

You’ve worked at the Department of Justice, in the NGO space at Human Rights Watch, in the DC Public Defender’s office, and now in academia. How do these varied experiences inform your current work? When it comes to the intersection of Internet governance and national security law, does Miles’s law hold (does where you stand really depend on where you sit)?

The move from Human Rights Watch to the National Security Division at the Department of Justice was quite eye-opening.  I thought I had prepared myself for the shift, but the adage that where you stand depends on where you sit turned out to be even more true than I had imagined.  In many ways, it makes sense.  At Human Rights Watch, the primary goal was to ensure that government pursued its national security policies in ways that protected human rights.  In the government, the primary goal was to protect the American public from perceived national security threats.  Ideally, these two goals work in tandem, and both policy and law are generally at their best when they do.  But the primary starting point is quite different, and that alters the lens through which just about everything is viewed.

Much of your research focuses on law enforcement’s use of online data.  To what extent are law enforcement officials concerned about the risks of fragmentation/balkanization associated with data localization and so-called “Internet sovereignty”? 

That depends a great deal on who you ask (and where you sit).  As Americans, we have long been used to having access to or control over a majority of the world’s data, thanks in large part to the dominance of American service providers.  Fragmentation of the Internet is thus a threat that undermines this dominance. But for many countries, this is not the case.  Mandatory data localization requirements and Internet fragmentation provide a means of ensuring access to sought-after data and asserting control.

From my perspective, these trends are quite concerning.  Mandatory data localization laws are extremely costly for companies that want to operate internationally, often pricing smaller start-ups out of the market.  The trend toward localization also serves as a means for authoritarian governments to limit free speech and assert increased control.

Any early indications as to how the new administration may handle cross-border data requests? Should we expect a more transactional approach, more multilateral cooperation, or a continuation of the status quo? What impacts could such decisions have on privacy and interoperability? 

The new administration hasn’t yet taken a public stance on these issues, but there are two key issues that ought to be addressed in short order.  First is the concerning impact of the Second Circuit decision in the so-called Microsoft Ireland case.  As a result of that decision, U.S. warrants for stored communications (such as emails) do not reach data that is held outside the United States. If the data is outside the United States, the U.S. government must make a mutual legal assistance request for the data to the country where it is located – even if the only foreign government connection to the investigation is simply that the data happens to be held there.  This makes little normative or practical sense, incentivizes the very kind of data localization efforts that the United States ought to be resisting, undercuts privacy, and is stymying law enforcement’s ability to access sought-after data in legitimate investigations.

As numerous Second Circuit judges opined, Congress should weigh in—and the new administration should support an update to the underlying law.  Specifically, Congress should amend the underlying statute to ensure that U.S. law enforcement can access extraterritorially located data pursuant to a warrant based on probable cause, but also ensure that both law enforcement and the courts take into account countervailing foreign government interests.

Conversely, foreign governments are increasingly frustrated by U.S. laws that preclude U.S.-based companies from turning over emails and other stored communications content to foreign governments – even in situations where the foreign governments are seeking access to data about their own citizens in connection with a local crime.  These frustrations are also further spurring data localization requirements, excessively broad assertions of extraterritorial jurisdiction in ways that put U.S. companies in the middle of two conflicting legal obligations, and use of surreptitious means to access sought-after data.  These provisions should likewise be amended to permit, in specified circumstances, foreign governments to access that data directly from U.S.-based companies.  The legislation should specify baseline substantive and procedural standards that must be met in order to benefit from this access – standards that are essential to protecting Americans’ data from overzealous foreign governments.

What role do private companies play in establishing the normative and legal bounds of cross-border data requests? Do you see this role changing going forward?

Private companies play significant roles in numerous different ways.  They are, after all, the recipients of the requests.  They thus decide when to object and when to comply.  They also have a strong policy voice – meeting with government officials in an effort to shape the rules.  And they also exert significant power through a range of technological and business decisions about where to store their data and where to locate their people; these decisions determine whether they are subject to local compulsory process or not.

While the majority of ISPs and content platforms are currently located in the U.S., many have expressed concerns about the long-term impact policies like Trump’s travel ban could have on Silicon Valley. Taking these concerns to their logical conclusion, do you see the geography of ISPs and content platforms changing significantly as a result of these policies, and if so, how might these changes alter the legal landscape vis-a-vis cross-border data requests?

I think it’s a fair assumption that whatever the reason, at some point the share of ISPs and content platforms located in the United States will decrease.  It is, as a result, critically important that the United States think about the broader and long-term implications of the rules it sets.  At some point, it may no longer hold the dominant share of the world’s data and will need the cooperation of foreign partners to access sought-after data.  The rules and policies that are adopted should take these long-term interests into account.

Can you tell us a bit about what you’re currently working on?

I continue to work on issues associated with law enforcement access to data across borders, engaging in a comparative analysis of how some of these key issues are playing out in both the United States and the European Union.  More broadly, I am also examining the increasingly powerful role of the private sector in setting norms, policies, and rules in this space. And I continue to do research and writing on the Fourth Amendment as it applies to the digital age.