Internet Governance Lab organizes symposium on Internet governance research methods

Texas A&M professor Sandra Braman presents at symposium on Internet governance research methods.

By Kenneth Merrill

The Internet Governance Lab hosted a group of scholars on Monday to lay the groundwork for a forthcoming first-of-its-kind book on Internet governance research methods.

Following introductory remarks by AU Dean Jim Goldgeier, contributors engaged in a lively discussion covering a wide range of methods and approaches to studying Internet governance. Participants included Laura DeNardis, Nanette Levinson, and Derrick Cogburn of the Internet Governance Lab; Sandra Braman of Texas A&M; Asvatha Babu of American University’s School of International Service; Farzaneh Badiei of Georgia Tech; the University of Zurich’s Rolf Weber; Eric Jardine of Virginia Tech; Rikke Frank Jorgensen of the Danish Institute for Human Rights; Maya Aguilar of American University; Francesca Musiani and Meryem Marzouki, both of the French National Centre for Scientific Research; Ron Deibert of the University of Toronto’s Citizen Lab; American University School of Communication professor Aram Sinnreich; and AU School of Communication PhD candidate Kenneth Merrill.

While the field of Internet governance continues to grow, both in its size and in its importance for scholars, policymakers, and IT practitioners, there exists little in the way of literature addressing the methods used to study this increasingly important subject area. Relevant methodological traditions include quantitative, qualitative, and descriptive analyses drawing on case studies, interviews, and participant observation at Internet governance fora like the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Governance Forum (IGF). Meanwhile, the proliferation of large data sets, artificial intelligence, and machine learning poses novel opportunities and constraints, including important ethical implications. Highlighting this point, Ron Deibert explained the importance of reverse-engineering and hacking ICTs as a means of identifying potential threats to privacy and free expression online, a practice that can place scholars and research subjects at risk.

Additionally, the symposium addressed the extent to which many methods are inextricably intertwined with theory. Here the field of Science and Technology Studies (STS) is instructive, as approaches like Actor-Network Theory (ANT) operate from a distinct epistemological standpoint that needs to be accounted for when discussing the method. To this end, part of the project will seek to trace the history of the field and identify the epistemological nodes around which these disparate methodological approaches coalesce.

How Trump’s war with the news media could impact net neutrality

By Kenneth Merrill

In a stunning attack on the nation’s news media, the President of the United States took to Twitter on Friday to write, “The FAKE NEWS media (failing @nytimes, @NBCNews, @ABC, @CBS, @CNN) is not my enemy, it is the enemy of the American People!”. Echoing the well-worn rhetorical agitprop of tyrants past and present, Trump’s tweet was viewed by many, including those in his own party, as an attempt to divert attention from the chaos enveloping his new administration by eroding trust in a fundamental pillar of democracy. But, viewed through the narrower prism of Internet and telecommunications policy, Trump’s latest salvo aimed at the news media sheds light on the complex and increasingly interconnected interests involved in the administration’s efforts to roll back regulations like net neutrality.

Passed in 2015, the Federal Communications Commission’s (FCC) net neutrality provisions were designed to protect the principle that all data should be treated equally as it moves across the Internet. Advocates of net neutrality, including consumer rights groups and most of Silicon Valley, argue that it protects users and encourages innovation in the content space by prohibiting large telecom providers (e.g., AT&T, Verizon, and Comcast) from throttling traffic, either by slowing down data from competitors or by creating “fast lanes” for their own approved content. Critics, including Trump’s newly appointed FCC Chairman (and former Verizon lawyer) Ajit Pai, say the net neutrality rules discourage much-needed competition in the telecom infrastructure industry.

Of course, the lack of competition in the telecom space is hardly a new phenomenon — the concentration of ownership in the industry is a problem that predates the FCC’s net neutrality provisions by at least thirty years. Moreover, the few companies that have come to dominate in this exclusive ecosystem have, in large part, done so through the acquisition of content providers, including several of the mainstream news media outlets Trump so abhors. As such, debates over whether or not to keep net neutrality are not as simple as they may seem.

To this point, Klint Finley of Wired explains that several of the largest telecom providers will be required to adhere to the net neutrality rules, per the terms of their merger agreements, regardless of any changes the new Trump-controlled FCC may make. Comcast is contractually obligated to honor net neutrality until 2018 following its merger with NBC Universal (parent company of the “failing” @NBCNews), while Charter Communications must adhere to the provisions until 2023 following its acquisition of Time Warner Cable.

Criticizing the most recent media mega-merger in October, then-candidate Trump called AT&T’s proposed $85 billion acquisition of Time Warner a deal that would place “too much concentration of power in the hands of too few.” But in prefacing his comments by calling out CNN in particular as a key part of “the power structure I’m fighting,” the president seemed more concerned with punishing his perceived enemies in the press than ensuring fair competition in the industry. And while most expect the AT&T/Time Warner deal to be approved, some industry experts have expressed concern that the FCC’s net neutrality regulations could remain in place only to become a political cudgel used by the president to inflict pain on his critics in the press. As Harold Feld of the digital rights group Public Knowledge explains to Finley in the aforementioned Wired story:

“he could appoint commissioners who will keep the net neutrality rules on the books, but not enforce them. Then if MSNBC were to offend him he could launch an investigation into its parent company, Comcast, over net neutrality. If the administration approved the AT&T/Time-Warner deal, it would have a similar bludgeon to use against CNN. In other words, we could end up with perhaps the worst of both worlds: a highly consolidated media industry, coupled with a regulatory body that selectively enforces rules for political reasons.”

But if this sort of Machiavellian gambit is the worst-case scenario, the likely alternatives are not much better. For a self-described pro-business president who, during the campaign, promised to cut regulations by as much as “75 percent, maybe more!”, scrapping net neutrality altogether remains the most likely outcome. Others suggest some elements of net neutrality could gain bipartisan support in Congress, although any legislation would almost certainly roll back the so-called Title II provision reclassifying broadband Internet service as a “common carrier,” on par with other utilities like telephone service. It is also likely that limits on “zero rating,” in which Internet and mobile network operators (like AT&T, Verizon, and T-Mobile) provide free data to customers for certain preferred content (often their own content or content offered through partnerships with third-party content providers), will be significantly scaled back under the new FCC.

Perhaps the only silver lining for advocates of net neutrality is that any changes to the provisions will be met with the forceful and increasingly influential dissenting voice of Silicon Valley. The overwhelming majority of large content providers and online platforms like Google, Facebook, and Twitter strongly support net neutrality. And in the wake of Trump’s controversial travel ban, Apple’s fight with the FBI over encryption, the SOPA and PIPA protests, and the industry’s successful lobbying effort against a 2014 FCC proposal calling for the creation of “fast lanes,” it is clear Silicon Valley is ready and able to defend its interests in Washington.

How this all plays out remains to be seen. But it seems clear that in an environment of increasing ownership concentration, both among Internet service providers and online content platforms (recall that European lawmakers continue to pursue an antitrust case against Google), the consequences involved in reversing net neutrality will be far-reaching and difficult to contain.

Prospects for Cooperation Between Tech and Trump Complicated by Executive Orders

By Kenneth Merrill

In the wake of President Trump’s sweeping executive order restricting entry to the US for refugees and immigrants from seven majority-Muslim countries, and with a nascent anti-Trump movement beginning to coalesce, tech industry executives are struggling to navigate an increasingly politicized environment, in which efforts to engage the new administration are colliding with the demands of politically active users and widespread dismay within Silicon Valley over the administration’s policies.

Reflecting the power of social media to harness popular discontent and underscoring the politically fraught position many tech CEOs now find themselves in, the hashtag #DeleteUber began trending over the weekend after the ride-hailing app was criticized for undercutting New York City taxi drivers staging a work stoppage to protest the immigration order. Seizing on the popular backlash against Uber was the company’s chief competitor Lyft, whose co-founders Logan Green and John Zimmer announced a $1 million donation to the ACLU and issued the following statement sharply criticizing the executive order:

“This weekend, Trump closed the country’s borders to refugees, immigrants, and even documented residents from around the world based on their country of origin. Banning people of a particular faith or creed, race or identity, sexuality or ethnicity, from entering the U.S. is antithetical to both Lyft’s and our nation’s core values. We stand firmly against these actions, and will not be silent on issues that threaten the values of our community.”

And Lyft was not alone. Twitter, Apple, Facebook, Google, Microsoft, Netflix, and Airbnb all released statements over the weekend, ranging from judicious to vociferous. Among the more strongly worded repudiations was that of Aaron Levie, CEO of the cloud company Box, who took to Twitter to write, “On every level – moral, humanitarian, economic, logical, etc – this ban is wrong and completely antithetical to the principles of America.”

Meanwhile, Google co-founder Sergey Brin was spotted at a protest at San Francisco International Airport less than a month after his fellow co-founder, current Alphabet CEO Larry Page, was among a group of tech executives invited to Trump Tower to meet with then President-elect Trump. And while Trump’s meeting with the tech leaders was seen by many as little more than a charm offensive aimed at paving the way for future cooperation with Washington, a new report by Adam Segal of the Council on Foreign Relations provides some context for why such cooperation is necessary.

“The Silicon Valley-Washington rift has real implications for U.S. cybersecurity and foreign policy,” writes Segal, adding, “An ugly fight between the two sides makes it more difficult to share cyber threat information, counter online extremism, foster global technology standards, promote technological innovation, and maintain an open internet.”

As the report explains, the divide between Washington and U.S. tech firms began in earnest over three years ago with the Snowden revelations, which forced global platforms to reckon with an outraged public demanding greater security and privacy protections. Most notably, these new economic and reputational incentives informed Apple’s decision to make end-to-end encryption standard across the company’s products, prompting a protracted fight with law enforcement after authorities were initially unable to access the contents of a cell phone belonging to one of the San Bernardino attackers.

But if debates over encryption, privacy, and net neutrality created the rift between Silicon Valley and Washington, last week’s immigration order left a gaping chasm between the two.

Aside from the obvious constitutional concerns, the immigration restrictions are particularly worrisome for tech companies that recruit some of their top talent from abroad.

On Wednesday Twitter joined Lyft and others, donating over $1 million to the ACLU to help fight the immigration order, while the messaging platform Viber announced it would provide free international calls to the seven countries affected by the executive order. Also on Wednesday, The Hill cited several cybersecurity researchers who are declining to work with law enforcement until the immigration order is revoked.

Meanwhile, Bloomberg reported that an open letter expressing concern over Trump’s immigration policies was circulating through Silicon Valley and beyond, including among CEOs on Wall Street and in the manufacturing, energy, and consumer goods sectors.

Whether or not the combined weight of an overwhelming majority of the tech community is enough to sway the administration’s thinking on immigration (or anything for that matter) remains to be seen. Regardless, critical issues like stepping up cyberdefense, curbing data localization, and protecting a free and open Internet will require some degree of cooperation between Tech and Trump, a prospect that, at the moment, is difficult to imagine.

FTC complaint highlights growing threats from Internet-connected toys

By Kenneth Merrill

For years the Internet of things (IoT) has consistently been cited as one of the next big issues looming on the tech policy horizon. With a recent complaint filed by a group of privacy and consumer protection groups at the Federal Trade Commission (FTC) highlighting risks posed to children by Internet-connected toys, it seems the IoT’s time has come.

According to the complaint filed in December by a coalition of privacy and consumer groups, the My Friend Cayla and i-Que Intelligent Robot dolls, manufactured by U.S.-based Genesis Toys, eavesdrop on children by “recording and collecting the private conversations of young children without any limitations on collection, use, or disclosure of this personal information.” The complaint also charges Massachusetts-based voice recognition company Nuance Communications, which stores and processes the audio conversations, with using the data to market products and services to children as well as selling the data to third parties for behavioral marketing purposes.

The dolls, which are available widely in the U.S. and abroad, instruct customers to download a mobile application that allows parents to listen to and communicate with the child. But as the Norwegian Consumer Council discovered, following an in-depth legal and technical analysis of Internet-connected toys, the Bluetooth-enabled toys also allow strangers to covertly eavesdrop on children, creating “a substantial risk of harm because children may be subject to predatory stalking or physical danger.”

In particular, the complaint argues that the companies are in violation of FTC regulations and the Children’s Online Privacy Protection Act (COPPA), which regulates the collection of children’s personal information by online service operators. Here the privacy groups charge Genesis Toys with failing to provide adequate notice to parents regarding the collection and transmission of children’s audio conversations; failing to obtain consent for recording and collecting conversations; deceiving parents and children as to the nature of the recordings; and failing to comply with deletion and data retention regulations.

“With the growing Internet of Things, American consumers face unprecedented levels of surveillance in their most private spaces, and young children are uniquely vulnerable to these invasive practices,” said Claire T. Gartland, Director of the EPIC Consumer Privacy Project. “The FTC has an obligation here to step in and safeguard the privacy of young children against toys that spy and companies that exploit their very voices for corporate gain.”

But with an incoming president who vowed during the campaign to “cut regulations by 75%,” consumer advocacy groups are drawing on coordinated international consensus in an effort to establish norms regarding the IoT and children. “While it is unclear how the new Trump administration will handle any regulatory issues, we do have a tradition in the U.S. of protecting children from unfair and manipulative practices in the digital environment,” explains Kathryn Montgomery, Professor and Chair of the Communication Studies Department at American University (currently on sabbatical), adding that these protections include COPPA, “a law that has been in place for nearly a decade and that government and industry alike have embraced and continue to support.”

And of course it is not just children who are susceptible to violations of privacy and security at the hands of the ever-expanding IoT market. Last month AU and the Center for Digital Democracy released a major study, funded by the Robert Wood Johnson Foundation, on the privacy and consumer protection concerns raised by the proliferation of health and fitness wearables.

This comes on the heels of a massive distributed denial of service attack in October that harnessed an army of hacked Internet-connected devices, including baby monitors, cameras, and routers, to flood the servers of Dyn, a company that provides domain name resolution services for a host of Internet services, disrupting and in some cases halting traffic to services such as Google Maps, Facebook, and Twitter. As The New York Times wrote following the attack, “It is too early to determine who was behind Friday’s attacks, but it is this type of attack that has election officials concerned. They are worried that an attack could keep citizens from submitting votes.”

Recap: Content Rules?! Newsfeeds, Algorithms, Content Moderation, and Ethics

By Kenneth Merrill

At Donald Trump’s first official press conference since his election, the President-elect engaged in a heated exchange with reporters that culminated in him referring to CNN as “fake news”. Two days before Trump’s inauguration, the Washington, DC and New York chapters of the Internet Society (ISOC), in partnership with the Internet Governance Lab, hosted a panel of experts to interrogate the question: what is “fake news”?

Moderated by AU School of Communication Professor Aram Sinnreich, the panel included AU Professor and Executive Editor of the AU Investigative Reporting Workshop Charles Lewis; Jessa Lingel of the Annenberg School for Communication; Andrew Bridges, Partner at Fenwick & West LLP; and, in New York, Gilad Lotan, head of data science at BuzzFeed; Shuli Hallak, Executive Vice President of ISOC-NY; and Harry Bruinius, author, journalist, and staff writer at The Christian Science Monitor.

“Just as trust in the institutions of government is important for a functioning democracy, so too is trust in the Internet,” explained David Vyorst of ISOC-DC. But how should we design trust into the algorithms that now mediate content for users? Should platforms bear any responsibility for the content spread across their networks? What role does the traditional news media play? And at what point do end-users bear a responsibility to speak up and develop tools and methods to combat fake news?

Grounding the concept in a historical context, AU Professor Charles Lewis began by explaining that “fake news” is not an entirely new phenomenon but fits within the rich and storied tradition of propaganda. “From truth to truthiness to post-truth society to, now, fake news, we’ve had these issues for some time,” argued Lewis. But whereas traditional propaganda models were more top-down, today’s algorithmically negotiated information spaces create distributed networks for the dissemination of propaganda.

Here, data scientist Gilad Lotan presented findings from his own analysis of personalized propaganda spaces. “Even though fake news and propaganda have been around for a while, the fact that these algorithms personalize stories for individuals makes this very different,” explained Lotan, adding that one of the important issues at stake is the distinction between search engine optimization and algorithmic propaganda. On this point Jessa Lingel of the Annenberg School for Communication made the case for increased algorithmic transparency, both to identify how code influences public opinion and to provide individuals with a means of engineering tools, technological and social, to combat the spread of fake news.

Meanwhile Andrew Bridges of the law firm of Fenwick & West suggested that placing responsibility squarely on the shoulders of content platforms ignores larger technosocial dynamics and the important role that algorithms play in “giving us what we want.” And yet, as several audience members pointed out, engineering algorithmic solutions to the spread of propaganda and fake news should balance not only giving users what they want but also what they need, which in some cases may involve hard political and economic choices about the communication technologies we build and use.

Watch the discussion here.

Dr. Sargsyan’s Ph.D. Dissertation on Information Intermediary Privacy


Congratulations to Tatevik Sargsyan, who today successfully defended her dissertation “Exploring Multistakeholderism Through the Evolution of Information Intermediaries’ Privacy Policies.” Her dissertation committee was chaired by Dr. Laura DeNardis; committee members included Dr. Kathryn Montgomery, Dr. Derrick Cogburn, and Dr. Declan Fahy. The external reader was digital privacy expert Dr. Michael Zimmer of the University of Wisconsin – Milwaukee.

New Paper on Cyber Sovereignty v. Distributed Internet Governance


On November 30, 2016, Laura DeNardis, Gordon Goldstein, and Ambassador David A. Gross presented their new paper, “The Rising Geopolitics of Internet Governance: Cyber Sovereignty v. Distributed Governance,” at Columbia University’s School of International and Public Affairs (SIPA). The paper was part of the Columbia SIPA Tech & Policy Initiative, and the panel discussion was moderated by Columbia SIPA Dean Merit Janow.

 Internet governance is at a crossroads. The 21st century has given rise to two incommensurable visions for the global Internet and how it is governed. One envisions a universal network that generally supports the free flow of information and whose governance is distributed across the private sector, governments and new global institutions in an approach that has historically been described as “multistakeholder” governance. This vision has materialized, albeit imperfectly, in how the Internet and its coordination has historically progressed and is an approach advocated by the United States government and many other countries. This is the model of Internet governance that has dominated throughout the past decade. The competing vision advocates for greater multilateral and top-down administration of the Internet in the name of social order, national cyber sovereignty, and tighter control of information flows. China and other countries interested in greater administrative control over the flow of information have been vocal proponents of a more multilateral approach to Internet governance. These visions are often debated using the language of abstract theoretical constructs but they involve actual policy choices that have arisen in particular historical contexts and whose future will have tangible effects on American foreign policy interests, American values of freedom of expression and innovation, the global digital economy, and the stability and resiliency of Internet infrastructure itself. This paper provides some historical context to the rise of distributed Internet governance, describes some of the key geopolitical conflicts that involve incommensurability between the ideology of national sovereignty and the technical topology and transnational characteristics of private Internet infrastructure, and argues for the preservation of private-sector-led multistakeholder governance rather than a shift to greater government control.