Recap: Will the Internet Fragment? A Conversation with Milton Mueller

By Kenneth Merrill

Following last week’s terror attacks in London, Prime Minister Theresa May stated unequivocally that “enough is enough,” adding that there is “far too much tolerance of extremism” in British society. In particular, Ms. May called on Internet companies to do more to shut down online “safe spaces,” suggesting that her government would look to broker “international agreements to regulate cyber space so that terrorists cannot plan online.”

What such international agreements might look like in practice is unclear, but according to Internet governance scholar Milton Mueller, Prime Minister May’s comments reflect a growing trend, in which nation-states are looking to assert a greater degree of control over global data flows.

“It is an attempt to fit the round peg of global communications into the square hole of territorial states,” explained Dr. Mueller on Tuesday at an event marking the release of his new book Will the Internet Fragment?: Sovereignty, Globalization, and Cyberspace. Hosted by New America’s Open Technology Institute, the event was moderated by Internet Governance Lab Co-Director Dr. Derrick Cogburn and featured Dr. Mueller in conversation with Rebecca MacKinnon, Director of the Ranking Digital Rights project at New America; Tim Maurer, Co-Director of the Cyber Policy Initiative at the Carnegie Endowment for International Peace; and Angela McKay, Senior Director of Cybersecurity Policy and Strategy at Microsoft.

A video of the event is available here.

In answering the book’s title question, Dr. Mueller began the discussion by interrogating the concept of “fragmentation,” suggesting that the term “realignment” more precisely captures current efforts to assert notions of territorial sovereignty in cyberspace. In this way, Mueller’s remarks contextualized “efforts to set up gateways to filter content, using data localization to keep internet routing within state borders, and requiring governments and users to use local companies to store data” as attempts to “partition cyberspace in order to subordinate its [the Internet’s] control to sovereign states.”

“Governments are trying to have their cake and eat it too,” explained Rebecca MacKinnon, whose 2012 book Consent of the Networked described new modes of Internet censorship and the ways in which private companies have assumed governance functions formerly reserved for nation-states. But even as governments bemoan their inability to regulate content within their borders, many of these same nation-states are happy to extend locally developed policies extraterritorially, explained Ms. MacKinnon, citing the Microsoft/Ireland case and efforts to apply the EU’s “right to be forgotten” globally as examples of this sort of extraterritorial extension.

These cases, along with Prime Minister May’s recent comments, help underscore the fact that efforts to realign the Internet to fit Westphalian notions of territorial sovereignty are no longer merely the Orwellian fantasies of authoritarian states but are gaining legitimacy in more democratic national contexts. In response to these trends, Mueller proposes “a liberation movement for cyberspace, in which we recognize that we’re creating a globally interconnected polity around the Internet,” suggesting that “perhaps it is time for this polity to assert its own identity and own authority and come up with global organizations for Internet governance.”

But as Tim Maurer pointed out, the prospects for such a liberation movement seem increasingly remote given large-scale structural changes to the existing liberal order. As geopolitical developments point towards a more neo-realist order, Maurer argued, we should expect to see more “contested forms of [Internet] governance” as opposed to international agreements and transnational consensus.

Meanwhile, Angela McKay of Microsoft presented several ways in which emerging technologies like cloud computing and the Internet of Things might present both challenges and opportunities for realignment. In particular, Ms. McKay highlighted cloud adoption as an example of a fundamental change in Internet architecture and the way it’s governed, with a more homogeneous set of firms managing a more diffuse, heterogeneous set of end-points. Conversely, with the growth of the Internet of Things, a new set of formerly non-technical industries will be thrust into Internet governance and information technology policy discussions, bringing with them new norms, best practices, and values that will alter the dynamics of existing public-private partnerships and require new modes of Internet governance going forward.


AU School of Communication scholars featured at ICA 2017

By Kenneth Merrill

The International Communication Association (ICA) held its annual conference in San Diego last week, and AU’s School of Communication (SOC) was well represented, with a dozen scholars presenting research on a wide range of topics.

On Sunday, SOC Professor and Internet Governance Lab Faculty Fellow Aram Sinnreich presented the latest in his work shedding light on the dark money used to influence policy debates over intellectual property rights (IPR) online. Drawing on a chapter from Professor Sinnreich’s forthcoming book (Yale University Press), the presentation, titled “Following the Money Behind Intellectual Property Law,” used a mix of quantitative and qualitative analysis of public lobbying and campaign finance records to identify patterns of expenditure and agenda setting in the increasingly powerful and opaque IPR lobby.

On Thursday, SOC Professor, Internet Governance Lab Faculty Fellow, and Director of the Communication Studies Division (currently on sabbatical) Kathryn Montgomery presented a paper titled “Health Wearables: Ensuring Fairness, Preventing Discrimination, and Promoting Equity in an Emerging Internet-of-Things Environment,” based on an ongoing project investigating the intersection of the Internet of Things (IoT) and privacy more broadly.

Also on Thursday, Caty Borum Chattoo, Director of the Center for Social Media and Social Impact (CSMI), presented her latest work, titled “Storytelling for Social Change: Leveraging Documentary and Comedy for Public Engagement with Global Poverty.” Part of CSMI’s Rise Up Media and Social Change Project, the presentation also provided an opportunity to welcome the Center’s new Postdoctoral Fellow, Dr. Amy Henderson Riley, whose work focuses on entertainment-education as a strategy for individual and social change.

Other scholars representing SOC included Assistant Professor Filippo Trevisan, presenting his paper “Media Justice: Race, Borders, Disability and Data”; Professor Paula Weissman, presenting “Strategic Communication by Health and Medical Organizations: Self-Interest vs. Informed Decision Making”; Assistant Professor Benjamin Stokes, PhD candidate Samantha Dols, and Doctoral Research Assistant and Adjunct Professor Kara Andrade, with “Here We Listen: Positioning a Hybrid ‘Listening Station’ to Circulate Marginalized Voices Across Physical and Digital Channels in a Neighborhood”; and PhD candidate David Proper, with his presentation “Troubling Republicanism: Carly Fiorina and Conservative Republican Gendered Discourses.”


AU hate crime incident in context: From cybersecurity to dignity

By Tijana Milosevic

As an American University alumna and scholar of cyberbullying, I was extremely troubled to hear about the series of racially charged events that took place on campus earlier this month. Most readers are familiar with the distressing details by now: bananas were found hanging from noose-shaped strings marked “AKA,” the initials of Alpha Kappa Alpha, the University’s predominantly African-American sorority. Shortly thereafter, the Anti-Defamation League alerted the University to an online post written by a “white supremacist” urging followers to “troll” the newly elected President of the American University Student Government (AUSG), Taylor Dumpson.

It goes without saying that the top priority for the University is to work with authorities to guarantee the safety of President Dumpson and all those on campus. But as the AU community wrestles with the insidious and complex nature of these grave incidents, it is important to put them in context so we can more clearly identify the sources and roots of the problem. 

As a researcher who has been studying cyberbullying and online harassment for the past four years, I will focus here on the Internet-mediated aspects of the incident, while interrogating the responses to such incidents, including the tendency to fixate on safety- and security-oriented measures at the expense of broader cultural and social dynamics.

As students of media know all too well, framing plays an incredibly important part in influencing the scope of possible solutions. Rather than concentrating only on safety measures, which can fall short, this incident offers an opportunity to discuss some of the cultural aspects of this problem, which can lead to more effective and far-reaching responses in the long term.

Cyber-harassment, trolling, or cyberbullying?

As an overall caveat to discussions involving “cyber” (as in “cyberbullying” or “cyberharassment”), I find the term itself to be problematic, as it can connote that the digital world is somehow a mysterious place apart from the “real” or “offline” worlds. 

AU administrators referred to the posts targeting President Dumpson as “cyber-harassment,” and I find this term to be more appropriate for this case than “cyberbullying.” While I have not seen the original posts, from what I have read about the incident, the terms “online hate speech” or “harmful speech online” would also apply.

Although definitions and understandings vary, hate speech usually refers to the specific targeting of minority populations: offensive speech (which can also call for violence) that is directed against a group and inspired by some characteristic of the group (e.g., race, ethnicity, gender, or sexual orientation). As such, “online hate speech” might be the most accurate term to describe the gravity of the incidents that occurred earlier this month. While the initial AU letter said the author of the hateful post called on others to “troll” President Dumpson, trolling has a specific connotation and culture around it. For the most thoughtful inquiry into trolling and how it fits into mainstream culture, I highly recommend scholar Whitney Phillips’ book “This Is Why We Can’t Have Nice Things.” While the following definition is most certainly a generalization (and a simplification), trolling tends to refer to being annoying or purposefully offensive (e.g., “just for lolz”), because you can, for fun. Although trolling can certainly be racially charged, labeling the incident as “trolling” might inadvertently assign this particular harassment incident a more benign connotation.

“Cyberbullying,” however, is a distinct term. It may be difficult to arrive at an agreed-upon definition of cyberbullying, but researchers tend to agree that it constitutes some form of repeated harm involving electronic technology. When trying to study and measure cyberbullying, however, it may not be obvious what counts as “repetition.” Is it multiple comments by one person? Is it enough if many people merely see one comment? Or do they also need to re-post it or continue commenting on it? If it needs to happen continuously over time in order to be called cyberbullying, how long must it go on before we can classify it as continuous? Some scholars also emphasize that, to be characterized as “cyberbullying,” a behavior typically needs to involve some form of “power imbalance” between the so-called victim and perpetrator. Cyberbullying is frequently used to refer to peer aggression (among children and youth), and the definition tends to be derived from offline bullying. Authors caution against using the term “bullying” for all instances of interpersonal conflict online, as these incidents can have very different characteristics. Nonetheless, in popular discourse the term cyberbullying tends to be applied to anything and everything, and this, in my view, contributes to the overall confusion around the issue, including a belief that the phenomenon is more widespread than the empirical research suggests.

Bullying and harassment as safety/security or tech problems

Cyberbullying tends to be understood primarily as an “e-safety” problem or a security issue. It is also often presented as a technological problem. In the media and public discourse, it is not uncommon to have technology, or some features thereof, blamed for what is often characterized as an increasing number of harassment incidents. For instance, when harassment is anonymous, the technological affordances of anonymity are said to be contributing to the problem. In my view, such discourse tends to ignore the cultural factors that normalize humiliation. This might be evident in the post-election atmosphere in the United States, which is how I would contextualize what happened: an atmosphere that normalizes and implicitly sanctions racist or sexist behavior. (Note: while I have not conducted an empirical analysis of post-election discourse and how it might normalize ethnic hatred or racism, initial journalistic investigations into the subject suggest it will continue to be an important and fruitful area of research going forward.)

In my own work, I explain the cultural aspects of the problem by contextualizing bullying within the “dignity theory framework.” I point to factors at the cultural level that encourage humiliation and promote what some dignity scholars term “false dignity.” This concept refers to deriving one’s worth from external insignia of success: anything from money, good looks, and expensive clothes to other tokens of success, ranging from toys in childhood to sex in adolescence, all measured in likes on social media platforms. These less frequently examined cultural assumptions and values are by no means peculiar to youth; they permeate adult interactions as well, a point often lost when technology becomes a scapegoat for wider social problems and media panics in public discourse. This is well exemplified in cyberbullying cases involving children, especially in high-profile incidents that gather significant public attention and in which cyberbullying is in some way linked to a child’s suicide (usually a misleading oversimplification that presents cyberbullying as the cause of the suicide). Under such circumstances, public discourse tends to revolve around blaming the so-called bullies or the online platforms where the bullying happened. Such simplistic binaries and finger-pointing can prevent a constructive discussion of the problem that accounts for its complexity.

Responding to the question of whether the resources that victims of online harassment have at their disposal are actually helpful: in my own research, I have found very little evidence as to the efficacy of social media companies’ anti-harassment enforcement tools (e.g., blocking, filtering, or various forms of reporting). I make an argument for more transparency in how companies enforce their anti-bullying and anti-harassment policies and whether and how they measure their effectiveness, and also for continuous independent evaluation of the effectiveness of companies’ tools (e.g., how efficiently companies respond to reported posts, how satisfied users are with the outcomes, and which solutions that companies have not yet implemented would be helpful). I also call for more funding from both industry and governments for educational initiatives and innovative forms of counseling for those who need it. You might have seen that internal Facebook documents detailing its confidential anti-abuse policies were recently leaked to the Guardian, adding fuel to the debate over the effectiveness of companies’ enforcement mechanisms and suggesting the company “is too lenient on those peddling hate speech.” Consider that, according to the newspaper, Facebook had previously advised its moderators “to ignore certain images mocking people with disabilities.” In my discussions with some e-safety experts, Facebook has been characterized as one of the better companies on the market in terms of these policies, one that has developed its policies significantly over time and that can set the standard for the rest of the social media industry, at least when it comes to protecting children from cyberbullying.

Resolving the problem—protection vs. participation?

Many see the problems of cyberbullying, trolling, and hate speech as inherently involving tradeoffs with respect to privacy, security, convenience, or engagement. The proposition is that, in order to protect themselves from harassment, students might need to take practical steps to protect their privacy, such as disabling geolocation or opting for more restrictive privacy settings. Participation and engagement are thus pitted against protection, as if more of one requires less of the other. Meanwhile, some students might be happy to see online platforms proactively crawl shared content in order to identify harassment cases in real time (without these cases first having to be reported by users). However, would students be comfortable if platforms’ moderators were looking into the content that they shared privately? The same tension appears in the context of comments on online news platforms, many of which have struggled with whether to do away with comments sections entirely.

I think the issue of response goes back to how the problem is perceived and framed. From what I have seen, the advice provided to students by administrators focuses to a large extent on safety and security—framing the problem in this way. It is, of course, important to point out these security-related aspects with practical advice for students on how to protect themselves and to ensure that everyone is safe. But we should also be careful not to ignore the cultural aspects. While users should certainly familiarize themselves with the tools that online platforms provide to protect their privacy, companies need to do a better job of raising awareness of these tools and to ensure that they are effective and easy to use. 

At the same time, creating a culture, not just on campus but more widely in society, where these hate-related problems are openly talked about is just as important. In the AU case, those who committed these acts need to be identified and held accountable. But we should be vigilant not to miss an important opportunity to discuss the heart of the problem: the normalization, sanctioning, and pervasiveness of hate and prejudice on campus and beyond, as well as the broader dignity-related aspects of the problem.

About the Author


Tijana Milosevic is a researcher at the Department of Communication, University of Oslo, Norway, and a member of the EU Kids Online research network (about 150 researchers in 33 countries studying children and digital media). She received her PhD from the American University School of Communication. EU Kids Online will soon be collecting data in several countries as part of a new survey on young people’s digital media use. Tijana is also conducting research in Norway asking children and teens whether they find social media platforms’ tools against digital bullying effective. As part of the COST Action project DigiLitEY, she analyzes privacy-related aspects of the Internet of Things, smart toys in particular. You can follow her on Twitter @TiMilosevic and @EUKIDSONLINE and at www.tijanamilosevic.org

Her forthcoming book Protecting Children Online: Cyberbullying Policies of Social Media Companies (MIT Press, 2017) examines these issues in depth, focusing on children and young people in particular.


AU hosts Facebook Live event on cyber-safety in the wake of campus hate crimes

By Kenneth Merrill

Following last Monday’s racist incident on campus and subsequent online harassment directed at AU Student Government President Taylor Dumpson, American University hosted a Facebook Live video event on Friday outlining practical steps students can take to protect themselves from online hate.

Watch the video here.

Vice President for Communication Terry Flannery and Assistant Director of Physical Security and Police Technology Doug Pierce touched on several actions the university is taking to protect students, both on campus and online, while also addressing best practices students can adopt to protect themselves in the offline and online worlds.

Mr. Pierce highlighted the importance of actively managing one’s privacy settings on social media platforms like Facebook, Instagram, Twitter, and Snapchat, including disabling geolocation, which can otherwise raise the risk that online threats develop into real-world threats to a victim’s physical security.

In a concerted effort to engage with the university community in an open and transparent manner, Ms. Flannery and Mr. Pierce took questions posted by users in real time, many of whom expressed a growing sense of frustration over the larger issues of hate and fear bubbling up on campus and beyond. Echoing these sentiments was a general sense of disillusionment as to the efficacy of one-off discussions (no matter how well-intended) in response to what many see as merely the latest in a string of incidents reflecting deep-seated divisions on campus.

Addressing the larger political and social context, Ms. Flannery explained, “I think so much has been affected by the current political climate that we’re in. And I’ve seen many people who’ve, in their social media streams, talk[ed] about how they’re taking a break because the heated rhetoric following the election resulted in just the kind of heated emotion that was difficult. But this is different, what we’re talking about now. We’re talking about people who are targeting you personally because of what you represent or particular views, or your identity based on race, or other factors. And so I think it’s a particularly egregious form of hate; and a particularly personal form of hate.”

Identifying strategies for dealing with direct online harassment, Mr. Pierce suggested avoiding engagement with perpetrators and so-called trolls. “Don’t respond to the messages from these people who are trolling you and trying to provoke a reaction. That reaction is exactly what they’re trying to achieve by doing this. So the goal would be to not give them that satisfaction.”

But several students balked at the notion of self-censorship in the face of deplorable expressions of hate. “We shouldn’t have to hide ourselves online or in person,” wrote one commenter, highlighting the difficulty and dissonance many social media users experience when balancing tradeoffs like privacy versus security online.

“We tend to pit participation on these platforms against protection… that more of one requires less of the other,” explains Internet Governance Lab affiliated alumna Dr. Tijana Milosevic, a post-doctoral fellow in the Department of Communication at the University of Oslo who studies online hate and cyberbullying (she received her Ph.D. in Communication Studies from AU in 2015). “From what I have seen, the Facebook talk given by AU administrators focuses to a large extent on safety and security–framing the problem in this way. It is, of course, important to point out these security-related aspects, with advice for students on how to protect themselves and to ensure that everyone is safe and feels safe. However, I think we should be cautious not to forget the cultural aspect of it.”

And while framing the discourse of hate, whether online or offline, is increasingly difficult given the labyrinthine and constantly shifting web of actors seeking to pollute the public sphere with vitriol, Dr. Milosevic argues that combating it requires a more holistic approach. “We tend to forget the aspects of our culture that normalize humiliation. I think this is very evident with the new administration–normalizing and implicitly (or even explicitly!) sanctioning such behavior… Creating a culture (not just on campus but more widely, in the society) where these hate-related problems are openly talked about is very important as well.”


Facebook looks to counter ‘information operations’

By Kenneth Merrill

Last November, Facebook CEO Mark Zuckerberg called claims that his company may have influenced the outcome of the U.S. presidential election by enabling the spread of propaganda a “pretty crazy idea.” But with a report published on Thursday by Facebook’s own security team titled “Information Operations and Facebook,” it is clear attitudes at the social network have changed.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the report explains.

In an effort to combat these new forms of social network-mediated propaganda campaigns, Facebook’s security team said it would increase its use of machine learning and “new analytical techniques” to remove fake accounts and disrupt “information (or influence) operations,” defined as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”

Additionally, the report seeks to “untangle” and move away from the term “fake news,” which it (rightly) argues has become a catch-all used to “refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”

Instead, the report identifies three distinct categories of abuse falling under the umbrella of “information operations”:

False News – News articles that purport to be factual but contain deliberate misstatements of fact intended to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Citing the 2016 U.S. presidential election as a case study, the report explains how Facebook security experts “monitored” suspicious activity during the lead-up to the election and found a deluge of false news, false amplifiers, and disinformation (sidebar: if Facebook was monitoring suspicious activity during the run-up to the election, why did Mr. Zuckerberg call the idea that it influenced the outcome “crazy” after election day?).

The Facebook report comes less than a week after researchers at Oxford published the latest in a series of studies analyzing the role of automated accounts (or “bots”) in disseminating “junk news” on social media in the weeks preceding national elections in the U.S., Germany, and France. According to the study, one-quarter of all political links shared in France prior to last Sunday’s election contained “misinformation,” though the researchers point out that, in general, French voters were sharing better-quality news than Americans did during the lead-up to the U.S. presidential election (whether this reflects stronger media literacy in France or more sophisticated propaganda is unclear).

While Facebook’s security team would not confirm the identity of the actors “engaged in false amplification using inauthentic Facebook accounts,” together the Facebook and Oxford reports add to a growing body of evidence — including from U.S. intelligence officials and private cybersecurity firms — attributing the surge in automated accounts and propaganda to a larger information operation orchestrated by Russian intelligence and aimed at influencing elections and/or sowing distrust in political institutions.

Regardless, the notion that governments are targeting social networks to mine intelligence and influence political outcomes seems less “crazy” and more like the new normal in global politics.


Q&A with Internet Governance Lab Faculty Fellow Jennifer Daskal

By Kenneth Merrill

Joining the Internet Governance Lab as a Faculty Fellow, Jennifer Daskal is an Associate Professor of Law at American University Washington College of Law, where she teaches and writes in the fields of criminal, national security, and constitutional law. She is on academic leave from 2016-2017, and has received an Open Society Institute Fellowship to work on issues related to privacy and law enforcement access to data across borders. From 2009-2011, Daskal was counsel to the Assistant Attorney General for National Security at the Department of Justice. Prior to joining DOJ, Daskal was senior counterterrorism counsel at Human Rights Watch, worked as a staff attorney for the Public Defender Service for the District of Columbia, and clerked for the Honorable Jed S. Rakoff. She also spent two years as a national security law fellow and adjunct professor at Georgetown Law Center.

Daskal is a graduate of Brown University, Harvard Law School, and Cambridge University, where she was a Marshall Scholar. Recent publications include Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues (Journal of National Security Law and Policy 2016); The Un-Territoriality of Data (Yale Law Journal 2015); Pre-Crime Restraints: The Explosion of Targeted, Non-Custodial Prevention (Cornell Law Review 2014); and The Geography of the Battlefield: A Framework for Detention and Targeting Outside the ‘Hot’ Conflict Zone (University of Pennsylvania Law Review 2013). Daskal has published op-eds in the New York Times, Washington Post, and International Herald Tribune and has appeared on BBC, C-Span, MSNBC, and NPR, among other media outlets. She is an Executive Editor of and regular contributor to the Just Security blog.

Recently, we discussed her research and some of the many hot topics arising at the intersection of Internet governance and national security law. 

You’ve worked at the Department of Justice, in the NGO space at Human Rights Watch, in the DC Public Defender’s office, and now in academia. How do these varied experiences inform your current work? When it comes to the intersection of Internet governance and national security law, does Miles’s law hold (does where you stand really depend on where you sit)?

The move from Human Rights Watch to the National Security Division at the Department of Justice was quite eye-opening.  I thought I had prepared myself for the shift, but the adage that where you stand depends on where you sit turned out to be even more true than I had imagined.  In many ways, it makes sense.  At Human Rights Watch, the primary goal was to ensure that government pursued its national security policies in ways that protected human rights.  In the government, the primary goal was to protect the American public from perceived national security threats.  Ideally, these two goals work in tandem, and both policy and law are generally at their best when they do.  But the primary starting point is quite different, and that alters the lens through which just about everything is viewed.

Much of your research focuses on law enforcement’s use of online data.  To what extent are law enforcement officials concerned about the risks of fragmentation/balkanization associated with data localization and so-called “Internet sovereignty”? 

That depends a great deal on who you ask (and where you sit).  As Americans, we have long been used to having access to or control over a majority of the world’s data, thanks in large part to the dominance of American service providers.  Fragmentation of the Internet is thus a threat that undermines this dominance. But for many countries, this is not the case.  Mandatory data localization requirements and Internet fragmentation provide a means of ensuring access to sought-after data and asserting control.

From my perspective, these trends are quite concerning.  Mandatory data localization laws are extremely costly for companies that want to operate internationally, often pricing smaller start-ups out of the market.  The trend toward localization also serves as a means for authoritarian governments to limit free speech and assert increased control.

Any early indications as to how the new administration may handle cross-border data requests? Should we expect a more transactional approach, more multilateral cooperation, or a continuation of the status quo? What impacts could such decisions have on privacy and interoperability? 

The new administration hasn’t yet taken a public stance on these issues, but there are two key issues that ought to be addressed in short order.  First is the concerning impact of the Second Circuit decision in the so-called Microsoft Ireland case.  As a result of that decision, U.S. warrants for stored communications (such as emails) do not reach data that is held outside the United States. If the data is outside the United States, the U.S. government must make a mutual legal assistance request for the data to the country where it is located – even if the only foreign government connection to the investigation is simply that the data happens to be held there.  This makes little normative or practical sense, incentivizes the very kind of data localization efforts that the United States ought to be resisting, undercuts privacy, and stymies law enforcement’s ability to access sought-after data in legitimate investigations.

As numerous Second Circuit judges opined, Congress should weigh in—and the new administration should support an update to the underlying law.  Specifically, Congress should amend the underlying statute to ensure U.S. law enforcement can access extraterritorially located data pursuant to a warrant based on probable cause, but also ensure that both law enforcement and the courts take into account countervailing foreign government interests.

Conversely, foreign governments are increasingly frustrated by U.S. laws that preclude U.S.-based companies from turning over emails and other stored communications content to foreign governments – even in situations where the foreign governments are seeking access to data about their own citizens in connection with a local crime.  These frustrations are further spurring data localization requirements, excessively broad assertions of extraterritorial jurisdiction that put U.S. companies in the middle of conflicting legal obligations, and the use of surreptitious means to access sought-after data.  These laws should likewise be amended to permit, in specified circumstances, foreign governments to access data directly from U.S.-based companies.  The legislation should specify baseline substantive and procedural standards that must be met in order to benefit from this access – standards that are essential to protecting Americans’ data from overzealous foreign governments.

What role do private companies play in establishing the normative and legal bounds of cross-border data requests? Do you see this role changing going forward?

Private companies play significant roles in numerous different ways.  They are, after all, the recipients of the requests.  They thus decide when to object and when to comply.  They also have a strong policy voice – meeting with government officials in an effort to shape the rules.  And they also exert significant power through a range of technological and business decisions about where to store their data and where to locate their people; these decisions determine whether they are subject to local compulsory process or not.

While the majority of ISPs and content platforms are currently located in the U.S., many have expressed concerns about the long-term impact policies like Trump’s travel ban could have on Silicon Valley. Taking these concerns to their logical conclusion, do you see the geography of ISPs and content platforms changing significantly as a result of these policies, and if so, how might these changes alter the legal landscape vis-à-vis cross-border data requests?

I think it’s a fair assumption that whatever the reason, at some point the share of ISPs and content platforms located in the United States will decrease.  It is, as a result, critically important that the United States think about the broader and long-term implications of the rules it sets.  At some point, it may no longer hold the dominant share of the world’s data and will need the cooperation of foreign partners to access sought-after data.  The rules and policies that are adopted should take these long-term interests into account.

Can you tell us a bit about what you’re currently working on?

I continue to work on issues associated with law enforcement access to data across borders, engaging in a comparative analysis of how some of these key issues are playing out in both the United States and the European Union.  More broadly, I am also examining the increasingly powerful role of the private sector in setting norms, policies, and rules in this space. And I continue to research and write on the Fourth Amendment as it applies to the digital age.


Recap — Cybersecurity in an Age of Uncertainty: US-Israel Perspectives

By Kenneth Merrill

Cybersecurity experts from academia, industry, government, and civil society met at American University last week to discuss some of the key challenges and opportunities facing citizens, private enterprise, and policymakers as they wrestle with questions over how best to manage information security threats.

Hosted by the American University (AU) Center for Israel Studies, AU’s Internet Governance Lab, the American Associates of Ben-Gurion University of the Negev, the American University Washington College of Law, AU’s School of International Service, the Kogod School of Business, Kogod’s Cybersecurity Governance Center, and the AU School of Communication, the two-day conference drew on US-Israeli perspectives to address some of the most important cybersecurity issues facing citizens and governments in both countries and beyond.

How can cybersecurity and human rights coexist in the digital era? How should private and public entities in Israel and the United States work to prevent and respond to cybertheft while preserving innovation essential to economic growth and national security? What are the national policies of each country with respect to the protection of cyber infrastructure and the role of cyber operations in national security strategy? At the same time as these global cyber challenges emerge, the United States and Israel have two of the most robust and innovative cybersecurity industry clusters. What cooperation and collaboration is necessary to support the massive cybersecurity industries in these countries?

To begin to answer these questions, the conference kicked off with a keynote address from professor Yuval Elovici of Ben-Gurion University of the Negev’s Department of Information Systems Engineering, director of the University’s Cyber Security Research Center. Titled “The Internet of Things: The New Frontier of Cyber Conflict,” professor Elovici’s remarks provided an accessible yet in-depth assessment of the state of play at the intersection of the Internet of Things (IoT) and cybersecurity. Citing multiple examples of his research team’s experimental approaches to testing various IoT security vulnerabilities, professor Elovici left many in the audience rightly concerned, as Internet-connected devices become increasingly embedded in our homes and lives.

On Tuesday morning the conference began with a networking breakfast and opening remarks by AU President Cornelius Kerwin. This was followed by four panel discussions, the first of which focused on Cybersecurity and Human Rights. “Cybersecurity is the great human rights issue of our time — human security depends on cyber security,” explained AU School of Communication professor and Internet Governance Lab co-director Dr. Laura DeNardis, who moderated the panel.

Left to right: professor Jennifer Daskal, Benjamin Dean, Michael Nelson, Eldar Haber, and Dr. Laura DeNardis.

Joining Dr. DeNardis were Washington College of Law professor and Internet Governance Lab Faculty Fellow Jennifer Daskal, Eldar Haber of the University of Haifa, Michael Nelson of Cloudflare, and Benjamin Dean of the Center for Democracy and Technology. Replying to a question from the audience regarding law enforcement access to data held outside the U.S. (even by U.S.-based companies, as in the recent Microsoft/Ireland case), professor Daskal explained how approaches to jurisdiction that turn on the physical location of data promote data localization, which in turn has paradoxical consequences for human rights at the local level.

Left to right: Stephen Thomas, Eli Ben-Meir, Yuval Elovici, and professor Erran Carmel.

The next panel saw Kogod School of Business professor Erran Carmel moderate a wide-ranging discussion titled “The Cybersecurity Industry.” Participants included Eli Ben-Meir of CyGov, former chief of military intelligence research for the Israel Defense Forces; professor Elovici of Ben-Gurion University of the Negev; and Stephen Thomas of Cyberbit, a cybersecurity company with offices in Israel and Austin, Texas. Highlighting the important role that Israel and the U.S. play as key nodes in the geography of cybersecurity and technological innovation, the panel discussed ways in which each country can learn from the other as governments and the private sector look to cultivate the next generation of cybersecurity professionals.

Following a networking lunch, the conference continued with a panel on “Cybertheft Prevention and Response” moderated by Washington College of Law professor Melanie Teplinsky. Professor Teplinsky was joined by Rebekah Lewis of the Kogod Cybersecurity Governance Center, Jonathan Meyer of Sheppard Mullin Richter & Hampton LLP, Ran Nahmias of Check Point Software Technologies, and Eric Wenger of Cisco. Together the panel discussed varying strategic approaches to cybersecurity in an environment where the volume of attacks and the number of threat vectors continue to increase. The conversation sought to distinguish between headline-grabbing acts of cyberwar and espionage and the sorts of attacks that keep chief information security officers (CISOs) and everyday users up at night.

Left to right: Eric Wenger, Rebekah Lewis, Ran Nahmias, Jonathan Meyer, and professor Melanie Teplinsky.

The conference closed with a panel titled “Cyber Security and National Security Policies — The U.S. and Israel,” moderated by AU School of International Service professor Eric Novotny. Joining professor Novotny were Amir Becker of the Israeli Embassy, Camille Stewart of Deloitte, and Lior Tabansky of the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv University. Placing cybersecurity in a geopolitical context, the panel ranged widely over the national security implications for both the U.S. and Israel, as well as the impact that recent domestic political controversies could have on the global cyber order.

Left to right: Amir Becker, Camille Stewart, Lior Tabansky, and professor Eric Novotny.


Panel celebrates Dr. Derrick Cogburn’s new book “Transnational Advocacy Networks in the Information Society: Partners or Pawns?”

Dr. Cogburn presents work from his new book “Transnational Advocacy Networks in the Information Society: Partners or Pawns?”

By Kenneth Merrill

In 2016, global governance of the Internet took a major step forward with the transition of the so-called IANA functions (the management of the authoritative mapping of domain names and IP address numbers) from US government oversight to a new multistakeholder arrangement housed under the Internet Corporation for Assigned Names and Numbers (ICANN). The product of decades of negotiations and planning by an array of stakeholders from government, industry, and civil society, the IANA transition proceeded even as new organizational forms and ICTs emerged to disrupt the established technosocial order, both within and outside institutions of Internet governance like ICANN, the Internet Governance Forum (IGF), and the Internet Engineering Task Force (IETF). Interrogating the impact of these new organizational forms on multistakeholder approaches to Internet governance, AU School of International Service professor (and Internet Governance Lab co-director) Dr. Derrick Cogburn’s new book, Transnational Advocacy Networks in the Information Society: Partners or Pawns?, provides a timely contribution to an increasingly important field of study.

On Monday the Internet Governance Lab co-hosted a book launch celebrating Dr. Cogburn’s new book and discussing some of the important questions it raises for the future of multistakeholder Internet governance. Following introductory remarks from AU School of International Service Dean Dr. James Goldgeier and AU School of Communication professor and Internet Governance Lab co-director Dr. Laura DeNardis, the event proceeded with a panel discussion led by Ambassador Diana Lady Dougan, Senior Advisor at the Center for Strategic and International Studies (CSIS) and former Assistant Secretary of State at the US Department of State.

Panel participants included Fiona Alexander, Associate Administrator in the Office of International Affairs at the National Telecommunications and Information Administration (NTIA); Mike Nelson of the content delivery network and internet service provider Cloudflare; Andrea Glorioso, Digital Economy and Cyber Counselor with the E.U. Delegation to the U.S.; Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center (EPIC); and Larry Irving, President and CEO of the Irving Information Group.

Among the topics discussed were the role of civil society groups and transnational advocacy networks (TANs) in shaping the multistakeholder governance process and the IANA transition, differing approaches to Internet governance and information policy across various jurisdictional contexts (e.g. competing approaches to privacy taken by the EU and the US), and the role of the private sector (especially tech firms) in contributing to bottom-up governance of the Internet.

Nearly twenty years after political scientists Margaret Keck and Kathryn Sikkink developed the concept of transnational advocacy networks (TANs) and described their role in global political movements like the anti-war, anti-globalization, and environmental movements, professor Cogburn’s book provides a timely adaptation of the concept, using it to contextualize an increasingly important array of civil society and non-state actors and their impact on the field of Internet governance.

Watch the entire discussion here.


Internet Governance Lab organizes symposium on Internet governance research methods

Texas A&M professor Sandra Braman presents at symposium on Internet governance research methods.

By Kenneth Merrill

The Internet Governance Lab hosted a group of scholars on Monday to lay the groundwork for a forthcoming, first-of-its-kind book focused on Internet governance research methods.

Following introductory remarks by AU Dean Jim Goldgeier, contributors engaged in a lively discussion covering a wide range of methods and approaches to studying Internet governance. Participants included Laura DeNardis, Nanette Levinson, and Derrick Cogburn of the Internet Governance Lab; Sandra Braman of Texas A&M; Asvatha Babu of American University’s School of International Service; Farzaneh Badiei of Georgia Tech; the University of Zurich’s Rolf Weber; Eric Jardine of Virginia Tech; Rikke Frank Jorgensen of the Danish Institute for Human Rights; Maya Aguilar of American University; Francesca Musiani and Meryem Marzouki, both of the French National Centre for Scientific Research; Ron Deibert of the University of Toronto’s Citizen Lab; American University School of Communication professor Aram Sinnreich; and AU School of Communication PhD candidate Kenneth Merrill.

While the field of Internet governance continues to grow, both in its size and in its importance for scholars, policymakers, and IT practitioners, there exists little in the way of literature addressing the methods used to study this increasingly important subject area. Relevant methodological traditions include quantitative, qualitative, and descriptive analyses drawing on case studies, interviews, and participant observation at Internet governance fora like the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Governance Forum (IGF). Meanwhile, the proliferation of large data sets, artificial intelligence, and machine learning poses novel opportunities and constraints, including important ethical implications. Highlighting this point, Ron Deibert explained the importance of reverse-engineering and hacking ICTs as a means of identifying potential threats to privacy and free expression online, a practice that can place scholars and research subjects at risk.

Additionally, the symposium addressed the extent to which many methods are inextricably intertwined with theory. Here the field of Science and Technology Studies (STS) is instructive, as approaches like Actor-Network Theory (ANT) operate from a distinct epistemological standpoint that needs to be accounted for when discussing method. To this end, part of the project will seek to trace the history of the field and identify the epistemological nodes around which these disparate methodological approaches coalesce.


How Trump’s war with the news media could impact net neutrality

By Kenneth Merrill

In a stunning attack on the nation’s news media, the President of the United States took to Twitter on Friday to write, “The FAKE NEWS media (failing @nytimes, @NBCNews, @ABC, @CBS, @CNN) is not my enemy, it is the enemy of the American People!” Echoing the well-worn rhetorical agitprop of tyrants past and present, Trump’s tweet was viewed by many, including those in his own party, as an attempt to divert attention from the chaos enveloping his new administration by eroding trust in a fundamental pillar of democracy. But, viewed through the narrower prism of Internet and telecommunications policy, Trump’s latest salvo aimed at the news media sheds light on the complex and increasingly interconnected interests involved in the administration’s efforts to roll back regulations like net neutrality.

Passed in 2015, the Federal Communications Commission’s (FCC) net neutrality provisions were designed to protect the principle that all data should be treated equally as it moves across the Internet. Advocates of net neutrality, including consumer rights groups and most of Silicon Valley, argue that it protects users and encourages innovation in the content space by prohibiting large telecom providers (e.g. AT&T, Verizon, Comcast, etc.) from throttling traffic (either by slowing down data from competitors or creating “fast lanes” for their own approved content). Critics, including Trump’s newly appointed FCC Chairman (and former Verizon lawyer) Ajit Pai, say the net neutrality rules discourage much-needed competition in the telecom infrastructure industry.

Of course, the lack of competition in the telecom space is hardly a new phenomenon — the concentration of ownership in the industry is a problem that predates the FCC’s net neutrality provisions by at least thirty years. Moreover, the few companies that have come to dominate in this exclusive ecosystem have, in large part, done so through the acquisition of content providers, including several of the mainstream news media outlets Trump so abhors. As such, debates over whether or not to keep net neutrality are not as simple as they may seem.

To this point, Klint Finley of Wired explains that several of the largest telecom providers will be required to adhere to the net neutrality rules, per the terms of their merger agreements, regardless of any changes the new Trump-controlled FCC may make. Comcast is contractually obligated to honor net neutrality until 2018 following its merger with NBC Universal (parent company of the “failing” @NBCNews), while Charter Communications must adhere to the provisions until 2023 following its acquisition of Time Warner Cable.

Criticizing the most recent media mega-merger in October, then-candidate Trump called AT&T’s proposed $85 billion acquisition of Time-Warner a deal that would place “too much concentration of power in the hands of too few.” But in prefacing his comments by calling out CNN in particular as a key part of “the power structure I’m fighting,” the president seemed more concerned with punishing his perceived enemies in the press than ensuring fair competition in the industry. And while most expect the AT&T/Time-Warner deal to be approved, some industry experts have expressed concern that the FCC’s net neutrality regulations could remain in place only to become a political cudgel used by the president to inflict pain on his critics in the press. As Harold Feld of the digital rights group Public Knowledge explains to Finley in the aforementioned Wired story:

“he could appoint commissioners who will keep the net neutrality rules on the books, but not enforce them. Then if MSNBC were to offend him he could launch an investigation into its parent company, Comcast, over net neutrality. If the administration approved the AT&T/Time-Warner deal, it would have a similar bludgeon to use against CNN. In other words, we could end up with perhaps the worst of both worlds: a highly consolidated media industry, coupled with a regulatory body that selectively enforces rules for political reasons.”

But if this sort of Machiavellian gambit is the worst-case scenario, the likely alternatives are not much better. For a self-described pro-business president who, during the campaign, promised to cut regulations by as much as “75 percent, maybe more!”, scrapping net neutrality altogether remains the most likely outcome. Others suggest some elements of net neutrality could gain bipartisan support in Congress, although any legislation would almost certainly roll back the so-called Title II provision reclassifying broadband Internet service as a “common carrier,” on par with other utilities like telephone service. It is also likely that limits on “zero rating,” in which Internet/mobile network operators (like AT&T, Verizon, and T-Mobile) provide free data to customers for certain preferred content (often their own content or through partnerships with third-party content providers), will be significantly scaled back under the new FCC.

Perhaps the only silver lining for advocates of net neutrality is that any changes to the provisions will be met with the forceful and increasingly influential dissenting voice of Silicon Valley. The overwhelming majority of large content providers and online platforms like Google, Facebook, and Twitter strongly support net neutrality. And in the wake of Trump’s controversial travel ban, Apple’s fight with the FBI over encryption, the SOPA and PIPA protests, and the industry’s successful lobbying effort against a 2014 FCC proposal calling for the creation of “fast lanes,” it is clear Silicon Valley is ready and able to defend its interests in Washington.

How this all plays out remains to be seen. But it seems clear that in an environment of increasing ownership concentration, both among Internet service providers and online content platforms (recall that European lawmakers continue to pursue an antitrust case against Google), the consequences involved in reversing net neutrality will be far-reaching and difficult to contain.
