Are we coding for the future we want? Dr. Isabelle Zaugg on Digital Standards and Lost Languages

American University SOC Ph.D. candidate Isabelle Zaugg successfully defended her dissertation, titled "Digitizing Ethiopic: Coding for Linguistic Continuity in the Face of Digital Extinction," on Wednesday. Drawing on fieldwork conducted in Ethiopia last year as part of a Fulbright-Hays Doctoral Dissertation Research Abroad Fellowship, Dr. Zaugg's project investigates the relationship between the growth of information and communication technologies and rapid declines in language diversity.

Abstract: Despite the growing sophistication of digital technologies, it appears they are contributing to language extinction on a par with devastating losses in biodiversity.  With language extinction comes loss of identity, inter-generational cohesion, culture, and a global wealth of knowledge to address future problems facing humanity.  Linguists estimate a 50%-90% loss of language diversity during the 21st century, with the lack of digital support for minority languages and scripts a contributing factor.

Over time, digital design has come to support an increasing number of languages, but this process has been largely market-driven, excluding languages of communities too small or poor to represent viable markets. Lack of digital support for a language means that its speakers begin using other, more dominant or "prestigious" languages for digital communication. This results in "digital extinction," including the impossibility of raising youth fluent in their mother tongue. Once the youth in a community have stopped using a language, it is typically on the path to extinction within the next few generations.

This research investigates the role of digital design and governance in including or excluding languages from the digital sphere through an instrumental case study of Ethiopic, a script that supports a number of languages at risk of digital extinction, including Amharic, the national language of Ethiopia. Using qualitative and quantitative methods, the dissertation examines late 20th-century efforts to include the Ethiopic script in Unicode and ISO/IEC 10646, the dominant sister standards that allow the world's scripts to appear on devices, websites, and software, as well as the ongoing challenges Ethiopic-based languages face in achieving full digital viability in the 21st century.
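
To give a concrete sense of what inclusion in these standards means in practice, here is a minimal sketch in Python (standard library only; the particular characters shown are merely illustrative). Once a script has assigned code points, as Ethiopic does in the block U+1200-U+137F, any Unicode-compliant device can identify, render, and encode its characters:

```python
# Minimal sketch of what Unicode support for a script means in practice.
# Python standard library only; the characters shown are illustrative.
import unicodedata

ha = "\u1200"  # the first code point of the Ethiopic block
print(ha)                    # ሀ
print(unicodedata.name(ha))  # ETHIOPIC SYLLABLE HA
print(ha.encode("utf-8"))    # b'\xe1\x88\x80'

# List the first few characters of the block with their standard names.
for cp in range(0x1200, 0x1206):
    print(f"U+{cp:04X}  {chr(cp)}  {unicodedata.name(chr(cp))}")
```

Before a script is encoded in the standard, none of these operations works interoperably, which is why admission to Unicode-ISO/IEC 10646 matters so much for a language community's digital viability.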

Concluding with policy recommendations and best practices for digital design, governance, and advocacy efforts to preserve language diversity, this research sheds light on the far-reaching public-interest implications of digital design and governance. The decisions we make about digital technologies will impact generations to come, and this dissertation asks, "Are we coding for the future we want?"

Please join us in congratulating Dr. Zaugg on her defense and wishing her well as she begins a Mellon-Sawyer Seminar Postdoctoral Fellowship in “Global Language Justice” at the Institute for Comparative Literature and Society at Columbia University in the Fall.


AU School of Communication scholars featured at ICA 2017

The International Communication Association (ICA) held its annual conference in San Diego last week, and AU's School of Communication (SOC) was well represented, with a dozen scholars presenting research on a wide range of topics.

On Sunday, SOC Professor and Internet Governance Lab Faculty Fellow Aram Sinnreich presented the latest in his work shedding light on the dark money used to influence policy debates over intellectual property rights (IPR) online. Drawing on a chapter from Professor Sinnreich’s forthcoming book (Yale University Press), the presentation, titled “Following the Money Behind Intellectual Property Law,” uses a mix of quantitative and qualitative analysis of public lobbying and campaign finance records to identify patterns of expenditure and agenda setting in the increasingly powerful and opaque IPR lobby. 

On Thursday, SOC Professor, Internet Governance Lab Faculty Fellow, and Director of the Communication Studies Division (currently on sabbatical) Kathryn Montgomery presented a paper titled "Health Wearables: Ensuring Fairness, Preventing Discrimination, and Promoting Equity in an Emerging Internet-of-Things Environment," based on an ongoing project investigating the intersection of the Internet of Things (IoT) and privacy more broadly.

Also on Thursday, Caty Borum Chattoo, Director of the Center for Social Media and Social Impact (CSMI), presented her latest work, titled "Storytelling for Social Change: Leveraging Documentary and Comedy for Public Engagement with Global Poverty." Part of CSMI's Rise Up Media and Social Change Project, the presentation also provided an opportunity to welcome the Center's new Postdoctoral Fellow, Dr. Amy Henderson Riley, whose work focuses on entertainment-education as a strategy for individual and social change.

Other SOC scholars included Assistant Professor Filippo Trevisan, presenting his paper "Media Justice: Race, Borders, Disability and Data"; Professor Paula Weissman, presenting "Strategic Communication by Health and Medical Organizations: Self-Interest vs. Informed Decision Making"; Assistant Professor Benjamin Stokes, Ph.D. candidate Samantha Dols, and Doctoral Research Assistant and Adjunct Professor Kara Andrade, with "Here We Listen: Positioning a Hybrid 'Listening Station' to Circulate Marginalized Voices Across Physical and Digital Channels in a Neighborhood"; and Ph.D. candidate David Proper, with his presentation "Troubling Republicanism: Carly Fiorina and Conservative Republican Gendered Discourses."


AU hate crime incident in context: From cybersecurity to dignity

As an American University alumna and a scholar of cyberbullying, I was extremely troubled to hear about the series of racially charged events that took place on campus earlier this month. Most readers are familiar with the distressing details by now: bananas were found hanging from noose-shaped strings marked with "AKA," the initials of Alpha Kappa Alpha, the University's predominantly African-American sorority. Shortly thereafter, the Anti-Defamation League alerted the University to an online post written by a "white supremacist" urging followers to "troll" the newly elected President of the American University Student Government (AUSG), Taylor Dumpson.

It goes without saying that the top priority for the University is to work with authorities to guarantee the safety of President Dumpson and all those on campus. But as the AU community wrestles with the insidious and complex nature of these grave incidents, it is important to put them in context so we can more clearly identify the sources and roots of the problem. 

As a researcher who has been studying cyberbullying and online harassment for the past four years, I will focus here on the Internet-mediated aspects of the incident, while interrogating responses to such incidents, including the tendency to fixate on safety and security-oriented responses at the expense of broader cultural and social dynamics.

As students of media know all too well, framing plays an incredibly important part in influencing the scope of possible solutions. Rather than concentrating only on safety measures, which can fall short, this incident offers an opportunity to discuss some of the cultural aspects of this problem, which can lead to more effective and far-reaching responses in the long term.

Cyber-harassment, trolling, or cyberbullying?

As an overall caveat to discussions involving "cyber" (as in "cyberbullying" or "cyberharassment"), I find the term itself to be problematic, as it can connote that the digital world is somehow a mysterious place apart from the "real" or "offline" world.

AU administrators referred to the posts targeting President Dumpson as "cyber-harassment," and I find this term more appropriate for this case than "cyberbullying." While I have not seen the original posts, from what I have read about the incident, the terms "online hate speech" or "harmful speech online" would also apply.

Although definitions and understandings vary, hate speech usually refers to the specific targeting of minority populations: offensive speech (which can also call for violence) directed against a group and inspired by characteristics of that group (e.g., race, ethnicity, gender, or sexual orientation). As such, "online hate speech" might be the most accurate term to describe the gravity of the incidents that occurred earlier this month. While the initial AU letter said the author of the hateful post called on others to "troll" President Dumpson, trolling has a specific connotation and culture around it. For the most thoughtful inquiry into trolling and how it fits into mainstream culture, I highly recommend scholar Whitney Phillips' book "This Is Why We Can't Have Nice Things." While the following definition is certainly a generalization (and a simplification), trolling tends to refer to being annoying or purposefully offensive (e.g., "just for the lolz") simply because one can, for fun. Although trolling can certainly be racially charged, labeling the incident as "trolling" might inadvertently assign this particular harassment incident a more benign connotation.

"Cyberbullying," however, is a distinct term. It may be difficult to arrive at an agreed-upon definition of cyberbullying, but researchers tend to observe that it constitutes some form of repeated harm involving electronic technology. When trying to study and measure cyberbullying, however, it may not be obvious what counts as "repetition." Is it multiple comments by one person? Is it enough if many people merely see one comment, or do they also need to re-post it or continue commenting on it? If it needs to happen continuously over time in order to be called cyberbullying, how long must it go on for us to classify it as continuous? Some scholars also emphasize that, to be characterized as "cyberbullying," a behavior typically needs to involve some form of "power imbalance" between the so-called victim and perpetrator. Cyberbullying is frequently used to refer to peer aggression (among children and youth), and the definition tends to be derived from offline bullying. Authors caution against using the term "bullying" for all instances of interpersonal conflict online, as these incidents can have very different characteristics. Nonetheless, in popular discourse the term tends to be applied to anything and everything, and this, in my view, contributes to the overall confusion around the issue, including a belief that the phenomenon is more widespread than the empirical research suggests.

Bullying and harassment as safety/security or tech problems

Cyberbullying tends to be understood primarily as an "e-safety" problem or a security issue. It is also often presented as a technological problem. In media and public discourse, it is not uncommon for technology, or some features thereof, to be blamed for what is often characterized as an increasing number of harassment incidents. For instance, when harassment is anonymous, the technological affordances of anonymity are said to be contributing to the problem. In my view, such discourse tends to ignore the cultural factors that normalize humiliation. This might be evident in the post-election atmosphere in the United States, which is how I would contextualize what happened: the normalizing and implicit sanctioning of racist or sexist behavior. (Note: while I have not conducted an empirical analysis of the post-election discourse and how it might be normalizing ethnic hatred or racism, initial journalistic investigations into the subject suggest it will continue to be an important and fruitful area of research.)

In my own work, I explain the cultural aspects of the problem by contextualizing bullying within a dignity theory framework. I point to factors at the cultural level that encourage humiliation and promote what some dignity scholars term "false dignity." This concept refers to deriving one's worth from external insignia of success, which can be anything from money, good looks, and expensive clothes to any other token of success, ranging from toys in childhood to sex in adolescence, all measured in likes on social media platforms. These less frequently examined cultural assumptions and values are by no means peculiar to youth; they permeate adult interactions as well, a point often lost when technology becomes a scapegoat for wider social problems and media panics in public discourse. This is well exemplified in cyberbullying cases involving children, especially in high-profile incidents that attract significant public attention and where cyberbullying is in some way linked to a child's suicide (usually as a misleading oversimplification that presents cyberbullying as the cause of the suicide). Under such circumstances, public discourse tends to revolve around blaming the so-called bullies or the online platforms where the bullying happened. Such simplistic binaries and finger-pointing can prevent a constructive discussion of the problem that accounts for its complexity.

As to whether the resources that victims of online harassment have at their disposal are actually helpful: in my own research, I have found very little evidence of the efficacy of social media companies' anti-harassment enforcement tools (e.g., blocking, filtering, or various forms of reporting). I argue for more transparency in how companies enforce their anti-bullying and anti-harassment policies and in whether and how they measure their effectiveness, and for continuous independent evaluation of the effectiveness of companies' tools (e.g., how efficiently companies respond to reported posts, how satisfied users are, and which solutions that companies are not yet implementing would be helpful). I also call for more funding from both industry and governments for educational initiatives and innovative forms of counseling for those who need them.

You might have seen that Facebook's internal anti-abuse policy documents were recently leaked to The Guardian, adding fuel to the debate on the effectiveness of companies' enforcement mechanisms and suggesting the company "is too lenient on those peddling hate speech." Consider that, according to the newspaper, Facebook had previously advised its moderators "to ignore certain images mocking people with disabilities." In my discussions with e-safety experts, Facebook has been characterized as one of the better companies on the market in terms of these policies, one that has developed its policies significantly over time and that can set the standard for the rest of the social media industry, at least when it comes to protecting children from cyberbullying.

Resolving the problem—protection vs. participation?

Many see the problems of cyberbullying, trolling, and hate speech as inherently involving tradeoffs with respect to privacy, security, convenience, or engagement. The proposition is that, in order to protect themselves from harassment, students might need to take practical steps to protect their privacy, such as disabling geolocation or opting for more restrictive privacy settings. Participation and engagement are thus pitted against protection, as if more of one requires less of the other. Meanwhile, some students might be happy to see various online platforms proactively crawling their shared content in order to identify harassment cases in real time (without these cases having to be reported first by users). But would students be comfortable if platforms' moderators were looking into content that they shared privately? A related tension plays out in comments on online news platforms, where many outlets have struggled over whether to do away with comment sections entirely.

I think the issue of response goes back to how the problem is perceived and framed. From what I have seen, the advice provided to students by administrators focuses to a large extent on safety and security—framing the problem in this way. It is, of course, important to point out these security-related aspects with practical advice for students on how to protect themselves and to ensure that everyone is safe. But we should also be careful not to ignore the cultural aspects. While users should certainly familiarize themselves with the tools that online platforms provide to protect their privacy, companies need to do a better job of raising awareness of these tools and to ensure that they are effective and easy to use. 

At the same time, creating a culture, not just on campus but more widely in society, where these hate-related problems are openly talked about is just as important. In the AU case, those who committed these acts need to be identified and held accountable. But we should be vigilant not to miss an important opportunity to discuss the heart of the problem: the normalization, sanctioning, and pervasiveness of hate and prejudice on campus and beyond, as well as the broader dignity-related aspects of the problem.

About the Author

Tijana Milosevic is a researcher at the Department of Communication, University of Oslo, Norway, and a member of the EU Kids Online research network (about 150 researchers in 33 countries studying children and digital media). She received her Ph.D. from the American University School of Communication. EU Kids Online will soon be collecting data in several countries as part of a new survey on young people's digital media use. Tijana is also conducting research in Norway with children and teens on whether they find social media platforms' tools against digital bullying effective. As part of the COST Action project DigiLitEY, she analyzes privacy-related aspects of the Internet of Things, and smart toys in particular. You can follow her on Twitter @TiMilosevic and @EUKIDSONLINE.

Her forthcoming book Protecting Children Online: Cyberbullying Policies of Social Media Companies (MIT Press, 2017) examines these issues in depth, focusing on children and young people in particular.

AU hosts Facebook Live event on cyber-safety in the wake of campus hate crimes

Following last Monday’s racist incident on campus and subsequent online harassment directed at AU Student Government President Taylor Dumpson, American University hosted a Facebook Live video event on Friday outlining practical steps students can take to protect themselves from online hate.

Watch the video here.

Vice President for Communication Terry Flannery and Assistant Director of Physical Security and Police Technology Doug Pierce touched on several actions the university is taking to protect students, both on campus and online, while also addressing best practices students can adopt to protect themselves in the offline and online worlds.

Mr. Pierce highlighted the importance of actively managing one's privacy settings on social media platforms like Facebook, Instagram, Twitter, and Snapchat, including disabling geolocation, since sharing one's location raises the risk that online threats could develop into real-world threats to a victim's physical security.

In what was a concerted effort to engage with the university community in an open and transparent manner, Ms. Flannery and Mr. Pierce took questions posted by users in real time, many of whom expressed a growing sense of frustration over the larger issues of hate and fear bubbling up on campus and beyond. Echoing these sentiments was a general sense of disillusionment as to the efficacy of one-off discussions (no matter how well-intended) in response to what many see as merely the latest in a string of incidents reflecting deep-seated divisions on campus.

Addressing the larger political and social context, Ms. Flannery explained, “I think so much has been affected by the current political climate that we’re in. And I’ve seen many people who’ve, in their social media streams, talk[ed] about how they’re taking a break because the heated rhetoric following the election resulted in just the kind of heated emotion that was difficult. But this is different, what we’re talking about now. We’re talking about people who are targeting you personally because of what you represent or particular views, or your identity based on race, or other factors. And so I think it’s a particularly egregious form of hate; and a particularly personal form of hate.”

Identifying strategies for dealing with direct online harassment, Mr. Pierce suggested avoiding engagement with perpetrators and so-called trolls. “Don’t respond to the messages from these people who are trolling you and trying to provoke a reaction. That reaction is exactly what they’re trying to achieve by doing this. So the goal would be to not give them that satisfaction.”

But several students balked at the notion of self-censorship in the face of deplorable expressions of hate. “We shouldn’t have to hide ourselves online or in person,” wrote one commenter, highlighting the difficulty and dissonance many social media users experience when balancing tradeoffs like privacy versus security online.

"We tend to pit participation on these platforms against protection… that more of one requires less of the other," explains Internet Governance Lab affiliated alumna Dr. Tijana Milosevic, a postdoctoral fellow in the Department of Communication at the University of Oslo who studies online hate and cyberbullying (she received her Ph.D. in Communication Studies from AU in 2015). "From what I have seen, the Facebook talk given by AU administrators focuses to a large extent on safety and security–framing the problem in this way. It is, of course, important to point out these security-related aspects, with advice for students on how to protect themselves and to ensure that everyone is safe and feels safe. However, I think we should be cautious not to forget the cultural aspect of it."

And while framing the discourse of hate, whether online or offline, is increasingly difficult given the labyrinthine and constantly shifting web of actors seeking to pollute the public sphere with vitriol, Dr. Milosevic argues that combating it requires a more holistic approach. "We tend to forget the aspects of our culture that normalize humiliation. I think this is very evident with the new administration–normalizing and implicitly (or even explicitly!) sanctioning such behavior… Creating a culture (not just on campus but more widely, in the society) where these hate-related problems are openly talked about is very important as well."

Facebook looks to counter ‘information operations’

Last November, Facebook CEO Mark Zuckerberg called claims that his company may have influenced the outcome of the U.S. presidential election by enabling the spread of propaganda a “pretty crazy idea.” But with a report published on Thursday by Facebook’s own security team titled “Information Operations and Facebook,” it is clear attitudes at the social network have changed.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the report explains.

In an effort to combat these new forms of social network-mediated propaganda campaigns, Facebook’s security team said it would increase its use of machine learning and “new analytical techniques” to remove fake accounts and disrupt “information (or influence) operations,” defined as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”
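
The report does not disclose which models or signals Facebook actually uses, but a hedged sketch can illustrate the general technique it names: training a classifier over account-level behavioral features and flagging likely inauthentic accounts for review. Everything below (the features, the toy data, and the threshold) is invented for illustration, not drawn from the report:

```python
# Hypothetical sketch of fake-account detection as a supervised
# classification problem. Features, data, and threshold are invented;
# Facebook's actual signals and models are not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [posts_per_day, friend_requests_per_day,
#            account_age_days, fraction_of_posts_with_links]
X_train = np.array([
    [120.0, 80.0,   7.0, 0.98],  # bot-like: high volume, new, link-heavy
    [ 90.0, 60.0,  14.0, 0.95],
    [  3.0,  1.0, 900.0, 0.10],  # human-like: low volume, old account
    [  5.0,  2.0, 400.0, 0.20],
])
y_train = np.array([1, 1, 0, 0])  # 1 = inauthentic, 0 = authentic

model = LogisticRegression().fit(X_train, y_train)

candidate = np.array([[100.0, 70.0, 3.0, 0.99]])
p_fake = model.predict_proba(candidate)[0, 1]
if p_fake > 0.9:  # arbitrary review threshold
    print(f"flag account for review (p = {p_fake:.2f})")
```

In practice, signals like these would feed a far larger pipeline, but the sketch captures the basic idea of turning behavioral patterns into a review or removal decision.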

Additionally, the report seeks to “untangle” and move away from the term “fake news,” which it (rightly) argues has become a catch-all used to “refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”

Instead, the report identifies three distinct categories of abuse falling under the umbrella of “information operations”:

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Citing the 2016 U.S. presidential election as a case study, the report explains how Facebook security experts "monitored" suspicious activity during the lead-up to the election and found a deluge of false news, false amplifiers, and disinformation. (Sidebar: if Facebook was monitoring suspicious activity during the run-up to the election, why did Mr. Zuckerberg call such activity "crazy" after election day?)

The Facebook report comes less than a week after researchers at Oxford published the latest in a series of studies analyzing the role of automated accounts (or "bots") in disseminating "junk news" on social media in the weeks preceding national elections in the U.S., Germany, and France. According to the study, one-quarter of all political links shared in France prior to last Sunday's election contained "misinformation," though the researchers point out that, in general, French voters were sharing better-quality news than Americans did during the lead-up to the U.S. presidential election (whether this reflects stronger media literacy in France or more sophisticated propaganda is unclear).

While Facebook’s security team would not confirm the identity of the actors “engaged in false amplification using inauthentic Facebook accounts,” together the Facebook and Oxford reports add to a growing body of evidence — including from U.S. intelligence officials and private cybersecurity firms — attributing the surge in automated accounts and propaganda to a larger information operation orchestrated by Russian intelligence and aimed at influencing elections and/or sowing distrust in political institutions.

Regardless, the notion that governments are targeting social networks to mine intelligence and influence political outcomes seems less “crazy” and more like the new normal in global politics.