Internet Governance Lab Fellow and PhD Candidate Olga Khrustaleva at IETF 99 in Prague

Olga Khrustaleva is a PhD student at American University's School of Communication

By Kenneth Merrill

Most Internet users are familiar with status code 404 “Not Found,” returned when a requested web page is unavailable. Less recognizable, at least until recently, is status code 451, a new HTTP status code standardized in 2016 and used to signal that a requested resource is unavailable “for legal reasons.” A reference to Ray Bradbury’s dystopian novel Fahrenheit 451, the new status code applies to resources made inaccessible for a host of legal considerations, including national security, copyright violations, privacy, and local laws proscribing certain types of content (e.g., hate speech or blasphemy).
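
To make the mechanics concrete, a server declining to serve a page on legal grounds can answer with a response along the following lines. This is a hypothetical example modeled on RFC 7725, which standardized the code; the “blocked-by” link relation points to whoever is imposing the restriction, and the URL shown here is purely illustrative:

    HTTP/1.1 451 Unavailable For Legal Reasons
    Link: <https://authority.example.org/legal-demand>; rel="blocked-by"
    Content-Type: text/html

    <html>
      <head><title>Unavailable For Legal Reasons</title></head>
      <body><p>This resource cannot be provided in your jurisdiction.</p></body>
    </html>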

Last week, SOC Ph.D. Candidate and Internet Governance Lab Fellow Olga Khrustaleva, who this summer is working as an Internet of Rights fellow with Article 19, a London-based digital rights advocacy group, presented research at the 99th meeting of the Internet Engineering Task Force (IETF) in Prague on the implications of status code 451 for human rights, both globally and in various national contexts. The project uses a web crawler that searches top-level resources for content being blocked or otherwise censored; the crawler reports any instances of 451 status codes and analyzes them to see what categories of content are being blocked where.

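As a rough illustration of the approach (not the team’s actual tooling), the following Python sketch probes a list of URLs and records any 451 responses; it assumes the third-party requests library, and the URL list and result format are placeholders:

    # Minimal sketch: probe URLs for HTTP 451 "Unavailable For Legal Reasons" responses.
    # The URL list is illustrative; a real crawler would also categorize the blocked content.
    import requests

    URLS = [
        "https://example.com/some-page",
        "https://example.org/another-page",
    ]

    def check_for_451(url):
        try:
            response = requests.get(url, timeout=10, allow_redirects=True)
        except requests.RequestException as exc:
            return {"url": url, "error": str(exc)}
        result = {"url": url, "status": response.status_code}
        if response.status_code == 451:
            # RFC 7725 suggests a Link header with rel="blocked-by" identifying who is blocking access.
            result["link_header"] = response.headers.get("Link", "not provided")
        return result

    if __name__ == "__main__":
        for url in URLS:
            print(check_for_451(url))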

While the project is in its early stages — the research tools were unveiled at an IETF-sponsored “hackathon” in Prague last week — initial findings focusing on Reddit in Turkey showed that LGBT content was among the largest categories of content returning the status code. Similarly, the tools are being used to investigate occurrences of status code 451 in Russia, where LinkedIn was recently blocked due to the company’s refusal to comply with the country’s data localization law.

The researchers expect to augment these initial findings with data collected from the team’s browser extension, with the goal of extending the project to other national contexts going forward. As Ms. Khrustaleva explains, “status code 451 makes digital censorship more transparent and gives more clarity to the end users who get a better idea why the page they are trying to access is unavailable.” But understanding how the code is used and under what circumstances will help shed light on how content filtering is implemented across national contexts. 

Internet Governance Lab Welcomes Faculty Fellow Dr. Eric Novotny

Dr. Eric Novotny is the Hurst Adjunct Professorial Lecturer in the School of International Service at American University. 

By Kenneth Merrill

Joining the Internet Governance Lab as a Faculty Fellow, Dr. Eric Novotny is the Hurst Adjunct Professorial Lecturer in the School of International Service at American University. He is also Senior Advisor for Democracy and Technology at the U.S. Agency for International Development (USAID). In this position, Dr. Novotny designs and manages a large portfolio of programs that use advanced information and communication technologies (ICTs) to stimulate economic growth, improve democratic processes, and reform governance policies in developing countries. Some of these efforts are stand-alone technology and governance projects, while others embed advanced ICTs in larger development projects in applied areas such as service delivery and critical infrastructure. USAID has assistance programs in 80 countries worldwide. He holds a B.A. in Political Science, an M.A. in Government, and a Ph.D. in International Relations from Georgetown University, as well as an M.Phil. in Management Studies from Oxford.

Dr. Novotny also serves as a faculty coordinator and coach for the Cyber 9/12 Student Challenge (along with Washington College of Law professor Melanie Teplinsky), a global competition designed to “encourage and educate the next generation of foreign policy leaders in cyber security issues.” Sponsored by the Atlantic Council, the annual event features teams of students from universities around the world competing to analyze, synthesize, and respond to the technical, legal, and policy issues involved in a fictional cyber security scenario. In 2017, 46 teams from 35 universities participated, with AU teams winning awards in three of the past five years.

Program Director for the School of International Service master’s program in US Foreign Policy and Security Studies, Dr. Novotny will teach courses in International Communication and Cyber Security Policy in the Fall. In this capacity, he will also continue his research, which focuses broadly on the intersection of Cyber Security and Internet Freedom, including projects titled “Building Anti-Censorship into the Core Internet Architecture,” “Cyber Security Risk Management for Non-governmental Organizations,” and “Cyber Capabilities and Interference in the Electoral Process.”

Internet Governance Lab at Cyber Week in Tel Aviv

By Kenneth Merrill

As a massive cyberattack spread across the globe on Tuesday, cybersecurity experts gathered in Tel Aviv for Cyber Week 2017, an annual conference bringing together scholars, industry leaders, and government officials to share methods and knowledge on a range of topics relevant to cybersecurity.

Among the experts in attendance was American University School of Communication Professor and Internet Governance Lab Co-director Dr. Laura DeNardis, who delivered a presentation titled “Privacy Complications in Cyber Physical Systems,” examining the privacy and security implications of the “Internet of Things.”

Also at the conference was Washington College of Law Professor and Internet Governance Lab Faculty Fellow Jennifer Daskal, who presented her work “Data and Territory: A Round Peg in a Square Hole,” addressing conflicts of law occurring at the intersection of the Internet and jurisdiction.

Both presentations, and indeed the entire conference, could not have been more timely.

On Tuesday ransomware attacks spread from Ukraine across the globe, crippling thousands of systems, including those of a major shipping company, at least one airport, ATMs, and supermarket cash registers. Coming on the heels of a similar attack in May using the WannaCry ransomware, Tuesday’s Petya ransomware attack also used EternalBlue, one of several hacking tools stolen from the National Security Agency and leaked by a group called the Shadow Brokers. And while it is still unclear who may be behind this latest attack (the fact that neither ransomware attack collected much in the way of ransoms is leading some to suggest proxies working on behalf of nation-states), Professor DeNardis’s presentation underscored the extent to which the Internet of Things introduces countless new vectors through which malicious code can spread.

Meanwhile, Professor Daskal’s discussion focusing on the incongruities of territorial sovereignty in cyberspace proved especially salient on Wednesday as Canada’s Supreme Court ruled that it could force Google to remove search results worldwide. Also on Wednesday, Pavel Durov, founder of the controversial messaging app Telegram, agreed to comply with a Russian law that requires information technology companies operating in the country to store data locally, as well as agreeing to hand over information to Russian authorities on request.

Cyber Week 2017 runs through Thursday, June 28th. You can follow along at #CyberWeek.

SOC PhD candidate Fernanda Rosa awarded Columbia University grant to study IXPs

Fernanda Rosa is a PhD student in Communication at American University. 

SOC Ph.D. candidate Fernanda Rosa has been awarded a prestigious grant from Columbia University’s School of International and Public Affairs (SIPA) to fund her dissertation research investigating the role of Internet exchange points (IXPs) in Internet governance. The project, titled “Global Internet, Local Governance: A Sociotechnical Approach to Internet Exchange Points,” examines these important sites in the Internet’s physical infrastructure from a science and technology studies (STS) perspective while making visible their impact on Internet governance and freedom of expression online.

Additionally, Ms. Rosa was awarded a 2017 Google Policy Fellowship, which will take her to Mexico City this summer where she will be conducting research for her dissertation and working with Red en Defensa de los Derechos Digitales (R3D), a Mexican organization dedicated to the defense of human rights in the digital sphere.

We congratulate Fernanda on these impressive accomplishments and look forward to seeing what she discovers as her research proceeds. You can follow her on Twitter @fefe_rosa

Are we coding for the future we want? Dr. Isabelle Zaugg on Digital Standards and Lost Languages

Dr. Isabelle Zaugg received her PhD from American University's School of Communication in 2017. 

American University SOC Ph.D. candidate Isabelle Zaugg successfully defended her dissertation, titled “Digitizing Ethiopic: Coding for Linguistic Continuity in the Face of Digital Extinction,” on Wednesday. Drawing on field work conducted in Ethiopia last year as part of a Fulbright-Hays Doctoral Dissertation Research Abroad Fellowship, Dr. Zaugg’s project investigates the relationship between the growth of information and communication technologies and rapid declines in language diversity.

Abstract: Despite the growing sophistication of digital technologies, it appears they are contributing to language extinction on a par with devastating losses in biodiversity.  With language extinction comes loss of identity, inter-generational cohesion, culture, and a global wealth of knowledge to address future problems facing humanity.  Linguists estimate a 50%-90% loss of language diversity during the 21st century, with the lack of digital support for minority languages and scripts a contributing factor.

Over time, digital design has come to support an increasing number of languages, but this process has been largely market-driven, excluding languages of communities too small or poor to represent viable markets. Lack of support for a language in the digital sphere means that language communities begin using other more dominant or “prestigious” languages for digital communication. This results in “digital extinction,” including the impossibility of raising youth fluent in their mother-tongue. Once the youth in a community have stopped using a language, it is typically on the path to extinction within the next few generations.

This research investigates the role of digital design and governance in including or excluding languages from the digital sphere through the instrumental case study of Ethiopic, a script that supports a number of languages at risk of digital extinction, including the national language of Ethiopia.  Using qualitative and quantitative methods, the dissertation investigates late 20th century efforts to include the Ethiopic script in Unicode-ISO/IEC 10646, the dominant digital sister standards that allow scripts of the world to appear on devices, websites, and software, as well as the ongoing challenges Ethiopic-based languages face for full digital viability in the 21st century.
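
For readers unfamiliar with how a script “appears” on devices at all, the underlying point is that every Ethiopic character has a standardized code point in Unicode-ISO/IEC 10646, which fonts and software can then choose to support (or not). A small, purely illustrative Python snippet below shows the code points behind one short Amharic word written in the Ethiopic script; the word and the output format are just examples, not part of Dr. Zaugg’s study:

    # Illustrative only: print the Unicode code point and official character name
    # for each character of an Amharic word written in the Ethiopic script.
    import unicodedata

    word = "ሰላም"  # "selam," roughly "peace/hello"
    for ch in word:
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch, 'UNNAMED')}")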

Concluding with policy recommendations and best practices for digital design, governance, and advocacy efforts to preserve language diversity, this research sheds light on far-reaching implications for the public good of digital design and governance.  The decisions we make about digital technologies will impact generations to come, and this dissertation asks, “Are we coding for the future we want?”

Please join us in congratulating Dr. Zaugg on her defense and wishing her well as she begins a Mellon-Sawyer Seminar Postdoctoral Fellowship in “Global Language Justice” at the Institute for Comparative Literature and Society at Columbia University in the Fall.

Recap: Will the Internet Fragment? A Conversation with Milton Mueller

By Kenneth Merrill

Following last week’s terror attacks in London, Prime Minister Theresa May stated unequivocally that “enough is enough,” adding that there is “far too much tolerance of extremism” in British society. In particular, Ms. May called out Internet companies to do more to shut down online “safe spaces,” suggesting that her government would look to broker “international agreements to regulate cyber space so that terrorists cannot plan online.”

What such international agreements might look like in practice is unclear, but according to Internet governance scholar Milton Mueller, Prime Minister May’s comments reflect a growing trend, in which nation-states are looking to assert a greater degree of control over global data flows.

“It is an attempt to fit the round peg of global communications into the square hole of territorial states,” explained Dr. Mueller on Tuesday at an event marking the release of his new book Will the Internet Fragment?: Sovereignty, Globalization, and Cyberspace. Hosted by New America’s Open Technology Institute, the event was moderated by Internet Governance Lab Co-Director Dr. Derrick Cogburn and featured Dr. Mueller in conversation with Rebecca MacKinnon, Director of the Ranking Digital Rights project at New America; Tim Maurer, Co-director of the Cyber Policy Initiative at the Carnegie Endowment for International Peace; and Angela McKay, Senior Director of Cybersecurity Policy and Strategy at Microsoft.

A video of the event is available here.

In answering the book’s title question, Dr. Mueller began the discussion by interrogating the concept of “fragmentation,” suggesting that the term “realignment” more precisely captures current efforts to assert notions of territorial sovereignty in cyberspace. In this way, Mueller’s remarks contextualized “efforts to set up gateways to filter content, using data localization to keep internet routing within state borders, and requiring governments and users to use local companies to store data” as attempts to “partition cyberspace in order to subordinate its [the Internet’s] control to sovereign states.”

“Governments are trying to have their cake and eat it too,” explained Rebecca MacKinnon, whose 2012 book Consent of the Networked described new modes of Internet censorship and the ways in which private companies have assumed governance functions formerly reserved for nation-states. But as governments bemoan the inability to regulate content within their borders, many of these same nation-states are happy to extend locally developed policies extraterritorially, explained Ms. MacKinnon, citing the Microsoft/Ireland case and efforts to apply the EU’s “right to be forgotten” globally as examples of this sort of extraterritorial extension.

These cases, along with Prime Minister May’s recent comments, help underscore the fact that efforts to realign the Internet to fit Westphalian notions of territorial sovereignty are no longer merely the Orwellian fantasies of authoritarian states but are gaining legitimacy in more democratic national contexts. In response to these trends, Mueller proposes “a liberation movement for cyberspace, in which we recognize that we’re creating a globally interconnected polity around the Internet,” suggesting that “perhaps it is time for this polity to assert its own identity and own authority and come up with global organizations for Internet governance.”

But as Tim Maurer pointed out, the prospects for such a liberation movement seem increasingly remote given large-scale structural changes to the existing liberal order. As geopolitical developments point towards a more neo-realist order, Maurer argued that we could expect to see more “contested forms of [Internet] governance” as opposed to international agreements and transnational consensus.

Meanwhile, Angela McKay of Microsoft presented several ways in which emerging technologies such as cloud computing and the Internet of Things might present challenges and opportunities for realignment. In particular, Ms. McKay highlighted cloud adoption as an example of a fundamental change in Internet architecture and the way it’s governed, with a more homogeneous set of firms managing a more diffuse, heterogeneous set of end-points. Conversely, with the growth of the Internet of Things, a new set of formerly non-technical industries will be thrust into Internet governance and information technology policy discussions, bringing with them a new set of norms, best practices, and values that will alter the dynamics of existing private-public partnerships and require new modes of Internet governance going forward.

AU School of Communication scholars featured at ICA 2017

By Kenneth Merrill

The International Communication Association (ICA) held its annual conference in San Diego last week and AU’s School of Communication (SOC) was well represented with a dozen scholars presenting research on a wide range of topics.

On Sunday, SOC Professor and Internet Governance Lab Faculty Fellow Aram Sinnreich presented the latest in his work shedding light on the dark money used to influence policy debates over intellectual property rights (IPR) online. Drawing on a chapter from Professor Sinnreich’s forthcoming book (Yale University Press), the presentation, titled “Following the Money Behind Intellectual Property Law,” uses a mix of quantitative and qualitative analysis of public lobbying and campaign finance records to identify patterns of expenditure and agenda setting in the increasingly powerful and opaque IPR lobby. 

On Thursday, SOC Professor, Internet Governance Lab Faculty Fellow, and Director of the Communication Studies Division (currently on sabbatical) Kathryn Montgomery presented a paper titled, “Health Wearables: Ensuring Fairness, Preventing Discrimination, and Promoting Equity in an Emerging Internet-of-Things Environment,” based on an ongoing project investigating the intersection of the Internet of things (IoT) and privacy more broadly.

Also on Thursday, Caty Borum Chattoo, Director of the Center for Media & Social Impact (CMSI), presented her latest work titled “Storytelling for Social Change: Leveraging Documentary and Comedy for Public Engagement with Global Poverty.” Part of CMSI’s Rise Up Media and Social Change Project, the presentation also provided an opportunity to welcome the Center’s new Postdoctoral Fellow, Dr. Amy Henderson Riley, whose work focuses on entertainment-education as a strategy for individual and social change.

Other scholars representing SOC included SOC Assistant Professor Filippo Trevisan presenting his paper “Media Justice: Race, Borders, Disability and Data,” Professor Paula Weissman presenting “Strategic Communication by Health and Medical Organizations: Self-Interest vs. Informed Decision Making”, Assistant Professor Benjamin Stokes, PhD candidate Samantha Dols, and Doctoral Research Assistant and Adjunct Professor Kara Andrade with “Here We Listen: Positioning a Hybrid ‘Listening Station’ to Circulate Marginalized Voices Across Physical and Digital Channels in a Neighborhood,” and Ph.D. candidate David Proper with his presentation “Troubling Republicanism: Carly Fiorina and Conservative Republican Gendered Discourses.”

AU hate crime incident in context: From cybersecurity to dignity

By Tijana Milosevic

As an American University alumna and scholar of cyberbullying, I was extremely troubled to hear about the series of racially charged events that took place on campus earlier this month. Most readers are familiar with the distressing details by now — bananas were found hanging from noose-shaped strings marked with AKA, an abbreviation for the University’s predominantly African-American sorority. Shortly thereafter, the Anti-Defamation League alerted the University to an online post written by a “white supremacist” urging followers to “troll” the newly-elected President of the American University Student Government (AUSG), Taylor Dumpson.

It goes without saying that the top priority for the University is to work with authorities to guarantee the safety of President Dumpson and all those on campus. But as the AU community wrestles with the insidious and complex nature of these grave incidents, it is important to put them in context so we can more clearly identify the sources and roots of the problem. 

As a researcher who has been studying cyberbullying and online harassment for the past four years, I will focus here on the Internet-mediated aspects of the incident, while interrogating responses to such incidents, including the tendency to fixate on safety and security-oriented responses at the expense of broader cultural and social dynamics.

As students of media know all too well, framing plays an incredibly important part in influencing the scope of possible solutions. Rather than concentrating only on safety measures, which can fall short, this incident offers an opportunity to discuss some of the cultural aspects of this problem, which can lead to more effective and far-reaching responses in the long term.

Cyber-harassment, trolling or cyberbullying?

As an overall caveat to discussions involving “cyber” (as in “cyberbullying” or “cyberharassment”), I find the term itself to be problematic, as it can connote that the digital world is somehow a mysterious place apart from the “real” or “offline” worlds. 

AU administrators referred to the posts targeting President Dumpson as “cyber-harassment,” and I find this term to be more appropriate for this case than “cyberbullying.” While I have not seen the original posts, from what I have read about the incident, the terms  “online hate speech” or “harmful speech online” would also apply.

Although definitions and understandings vary, hate speech usually refers to the specific targeting of minority populations: offensive speech (which can also call for violence) that is directed against a group and inspired by some characteristic of the group (e.g. race, ethnicity, gender, sexual orientation, etc.). As such, “online hate speech” might be the most accurate term to describe the gravity of the incidents that occurred earlier this month. While the initial AU letter said the author of the hateful post called on others to “troll” President Dumpson, trolling has a specific connotation and culture around it. For the most thoughtful inquiry into trolling and how it fits into mainstream culture, I highly recommend scholar Whitney Phillips’ book “This is Why We Can’t Have Nice Things.” While the following definition is most certainly a generalization (and simplification), trolling tends to refer to being annoying or purposefully offensive (e.g. “just for lolz”) because you can, for fun. Although trolling can definitely be racially charged, I think that labelling the incident as “trolling” might inadvertently assign this particular harassment a more benign connotation.

“Cyberbullying,” however, is a distinct term. It may be difficult to arrive at an agreed-upon definition of cyberbullying, but researchers tend to observe that it constitutes some form of repeated harm involving electronic technology. However, when trying to study and measure cyberbullying, it may not be obvious what counts as “repetition.” Is it multiple comments by one person? Or is it enough if many people merely see one comment? Or do they also need to re-post it or continue commenting on it? If it needs to happen continuously over time in order to be called cyberbullying, how long does it need to happen for us to classify it as continuous? Some scholars emphasize that, to be characterized as “cyberbullying,” a behavior typically needs to involve some form of “power imbalance” between the so-called victim and perpetrator. Cyberbullying is frequently used to refer to peer aggression (among children and youth), and the definition tends to be derived from offline bullying. Authors caution against using the term “bullying” for all instances of interpersonal conflict online, as these incidents can have very different characteristics. Nonetheless, the term cyberbullying tends to be applied to anything and everything in the popular discourse, and this, in my view, contributes to the overall confusion around the issue, including a belief that the phenomenon is more widespread than the empirical research suggests.

Bullying and harassment as safety/security or tech problems

Cyberbullying tends to be understood primarily as an “e-safety” problem or a security issue. It is also often presented as a technological problem. In the media and public discourse, it is not uncommon to have technology, or some features thereof, blamed for what is often characterized as an increasing number of harassment incidents. For instance, when harassment is anonymous, technological affordances of anonymity are said to be contributing to the problem. In my view, such discourse tends to ignore the cultural factors that normalize humiliation. This might be evident in the post-election atmosphere in the United States, which is how I would contextualize what happened: the normalizing and implicit sanctioning of racist or sexist behavior (Note: while I have not conducted an empirical analysis of the post-election discourse and how it might be normalizing ethnic hatred or racism, initial journalistic investigations into the subject suggest it will continue to be an important and fruitful area of research going forward).

In my own work, I explain cultural aspects of the problem by contextualizing bullying in the “dignity theory framework.” I point to some of the factors at the cultural level that encourage humiliation and promote what some dignity authors term “false dignity.” This concept refers to deriving one’s worth from external insignia of success, which can be anything from money, good looks, and expensive clothes to any other token of success, ranging from toys in childhood to sex in adolescence, all measured in likes on social media platforms. These less frequently examined cultural assumptions and values are by no means peculiar to youth, as they permeate adult interactions as well, a point often lost when technology becomes a scapegoat for wider social problems and media panics in public discourse. This is well exemplified in cyberbullying cases involving children, especially in those high-profile incidents that gather significant public attention and where cyberbullying is in some way linked to a child’s suicide (usually as a misleading oversimplification, presenting cyberbullying as the cause of the suicide). Under such circumstances, public discourse tends to revolve around blaming the so-called bullies or the online platforms where bullying happened. Such simplistic binaries and finger-pointing can prevent a constructive discussion of the problem that accounts for its complexity.

Responding to the question of whether the resources that victims of online harassment have at their disposal are actually helpful: in my own research, I have found very little evidence as to the efficacy of social media companies’ anti-harassment enforcement tools (e.g. blocking, filtering, or various forms of reporting). I make an argument for more transparency in how companies enforce their anti-bullying and anti-harassment policies and whether/how they measure their effectiveness, as well as for continuous independent evaluation of the effectiveness of companies’ tools (e.g. how efficiently companies respond to reported posts, how satisfied users are, and which solutions companies are not yet implementing would be helpful). I also call for more funding from both industry and governments for educational initiatives and innovative forms of counseling for those who need it. You might have seen that internal Facebook documents outlining its confidential anti-abuse policies were recently leaked to The Guardian, adding fuel to the debate on the effectiveness of companies’ enforcement mechanisms and suggesting the company “is too lenient on those peddling hate speech.” Consider that, according to the newspaper, Facebook had previously advised its moderators “to ignore certain images mocking people with disabilities.” In my discussions with e-safety experts, Facebook has been characterized as one of the better companies on the market in terms of these policies, one that has developed its policies significantly over time and that can set standards for the rest of the social media industry, at least when it comes to protecting children from cyberbullying.

Resolving the problem—protection vs. participation?

Many see the problems of cyberbullying, trolling, and hate speech as inherently involving tradeoffs with respect to privacy, security, convenience, or engagement. The proposition is that in order to protect themselves from harassment, students might need to take practical steps to protect their privacy, such as disabling geolocation or opting for more restrictive privacy settings. Participation and engagement are seen as pitted against protection, as if more of one requires less of the other. Meanwhile, some students might be happy to see various online platforms proactively crawling their shared content in order to identify harassment cases in real-time (without these cases having to be reported first by users). However, would students be comfortable if platforms’ moderators were looking into the content that they shared privately? And in the context of comments on online news sites, whether to do away with comments sections entirely has been an ongoing struggle for many outlets.

I think the issue of response goes back to how the problem is perceived and framed. From what I have seen, the advice provided to students by administrators focuses to a large extent on safety and security—framing the problem in this way. It is, of course, important to point out these security-related aspects with practical advice for students on how to protect themselves and to ensure that everyone is safe. But we should also be careful not to ignore the cultural aspects. While users should certainly familiarize themselves with the tools that online platforms provide to protect their privacy, companies need to do a better job of raising awareness of these tools and to ensure that they are effective and easy to use. 

At the same time, creating a culture — not just on campus but more widely, in the society — where these hate-related problems are openly talked about is very important as well. Following the AU case, those who committed these acts need to be identified and held accountable. But we should be vigilant not to miss an important opportunity to discuss the heart of the problem — the normalization, sanctioning and pervasiveness of hate and prejudice on campus and beyond, as well as broader dignity-related aspects of the problem. 

About the Author


Tijana Milosevic is a researcher at the Department of Communication, University of Oslo, Norway, and a member of the EU Kids Online research network (about 150 researchers in 33 countries researching children and digital media). She received her PhD from the American University School of Communication. EU Kids Online will soon be collecting data in several countries as part of a new survey on young people’s digital media use. Tijana is also conducting research in Norway with children and teens inquiring whether they find social media platforms’ tools against digital bullying to be effective. As part of the COST Action project DigiLitEY, she analyzes privacy-related aspects of the Internet of Things and smart toys in particular. You can follow her on Twitter @TiMilosevic and @EUKIDSONLINE and at www.tijanamilosevic.org

Her forthcoming book Protecting Children Online: Cyberbullying Policies of Social Media Companies (MIT Press, 2017) examines these issues in depth, focusing on children and young people in particular.

AU hosts Facebook Live event on cyber-safety in the wake of campus hate crimes

By Kenneth Merrill

Following last Monday’s racist incident on campus and subsequent online harassment directed at AU Student Government President Taylor Dumpson, American University hosted a Facebook Live video event on Friday outlining practical steps students can take to protect themselves from online hate.

Watch the video here.

Vice President for Communication Terry Flannery and Assistant Director of Physical Security and Police Technology Doug Pierce touched on several actions the university is taking to protect students, both on campus and online, while also addressing best practices students can adopt to protect themselves in the offline and online worlds.

Mr. Pierce highlighted the importance of actively managing one’s privacy settings on social media platforms like Facebook, Instagram, Twitter, and Snapchat, including disabling geolocation, since sharing one’s location raises the risk that online threats could develop into real-world threats to a victim’s physical security.

In what was a concerted effort to engage with the university community in an open and transparent manner, Ms. Flannery and Mr. Pierce took questions from users posted in real-time, many of whom expressed a growing sense of frustration over the larger issues of hate and fear bubbling up on campus and beyond. Echoing these sentiments was a general sense of disillusionment as to the efficacy of one-off discussions (no matter how well-intended) in response to what many see as merely the latest in a string of incidents reflecting deep-seated divisions on campus.

Addressing the larger political and social context, Ms. Flannery explained, “I think so much has been affected by the current political climate that we’re in. And I’ve seen many people who’ve, in their social media streams, talk[ed] about how they’re taking a break because the heated rhetoric following the election resulted in just the kind of heated emotion that was difficult. But this is different, what we’re talking about now. We’re talking about people who are targeting you personally because of what you represent or particular views, or your identity based on race, or other factors. And so I think it’s a particularly egregious form of hate; and a particularly personal form of hate.”

Identifying strategies for dealing with direct online harassment, Mr. Pierce suggested avoiding engagement with perpetrators and so-called trolls. “Don’t respond to the messages from these people who are trolling you and trying to provoke a reaction. That reaction is exactly what they’re trying to achieve by doing this. So the goal would be to not give them that satisfaction.”

But several students balked at the notion of self-censorship in the face of deplorable expressions of hate. “We shouldn’t have to hide ourselves online or in person,” wrote one commenter, highlighting the difficulty and dissonance many social media users experience when balancing tradeoffs like privacy versus security online.

“We tend to pit participation on these platforms against protection… that more of one requires less of the other,” explains Internet Governance Lab affiliated alumna Dr. Tijana Milosevic, a post-doctoral fellow in the Department of Communication at the University of Oslo who studies online hate and cyberbullying (she received her Ph.D. in Communication Studies from AU in 2015). “From what I have seen, the Facebook talk given by AU administrators focuses to a large extent on safety and security–framing the problem in this way. It is, of course, important to point out these security-related aspects, with advice for students on how to protect themselves and to ensure that everyone is safe and feels safe. However, I think we should be cautious not to forget the cultural aspect of it.”

And while framing the discourse of hate, whether online or offline, is increasingly difficult given the labyrinthine and constantly shifting web of actors seeking to pollute the public sphere with vitriol, Dr. Milosevic argues that combating it requires a more holistic approach. “We tend to forget the aspects of our culture that normalize humiliation. I think this is very evident with the new administration–normalizing and implicitly (or even explicitly!) sanctioning such behavior… Creating a culture (not just on campus but more widely, in the society) where these hate-related problems are openly talked about is very important as well.”

Facebook looks to counter ‘information operations’

By Kenneth Merrill

Last November, Facebook CEO Mark Zuckerberg called claims that his company may have influenced the outcome of the U.S. presidential election by enabling the spread of propaganda a “pretty crazy idea.” But with a report published on Thursday by Facebook’s own security team titled “Information Operations and Facebook,” it is clear attitudes at the social network have changed.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the report explains.

In an effort to combat these new forms of social network-mediated propaganda campaigns, Facebook’s security team said it would increase its use of machine learning and “new analytical techniques” to remove fake accounts and disrupt “information (or influence) operations,” defined as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”

Additionally, the report seeks to “untangle” and move away from the term “fake news,” which it (rightly) argues has become a catch-all used to “refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”

Instead, the report identifies three distinct categories of abuse falling under the umbrella of “information operations”:

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Citing the 2016 U.S. Presidential election as a case study, the report explains how Facebook security experts “monitored” suspicious activity during the lead-up to the election and found a deluge of false news, false amplifiers, and disinformation (sidebar: if Facebook was monitoring suspicious activity during the run-up to the election why did Mr. Zuckerberg call such activity “crazy” after election day?).

The Facebook report comes less than a week after researchers at Oxford published the latest in a series of studies analyzing the role of automated accounts (or “bots”) in disseminating “junk news” on social media in the weeks preceding national elections in the U.S., Germany, and France. According to the study, one-quarter of all political links shared in France prior to last Sunday’s election contained “misinformation,” though the researchers point out that in general French voters were sharing better quality news than Americans during the lead-up to the U.S. presidential election (whether this reflects stronger media literacy in France or more sophisticated propaganda is unclear).

While Facebook’s security team would not confirm the identity of the actors “engaged in false amplification using inauthentic Facebook accounts,” together the Facebook and Oxford reports add to a growing body of evidence — including from U.S. intelligence officials and private cybersecurity firms — attributing the surge in automated accounts and propaganda to a larger information operation orchestrated by Russian intelligence and aimed at influencing elections and/or sowing distrust in political institutions.

Regardless, the notion that governments are targeting social networks to mine intelligence and influence political outcomes seems less “crazy” and more like the new normal in global politics.