
Prospects for Cooperation Between Tech and Trump Complicated by Executive Orders

By Kenneth Merrill

In the wake of President Trump’s sweeping executive order restricting entry to the US for refugees and immigrants from seven majority-Muslim countries, and with a nascent anti-Trump movement beginning to coalesce, tech industry executives are struggling to navigate an increasingly politicized environment, in which efforts to engage the new administration are colliding with the demands of politically active users and widespread dismay within Silicon Valley over the administration’s policies.

Reflecting the power of social media to harness popular discontent, and underscoring the politically fraught position many tech CEOs now find themselves in, the hashtag #DeleteUber began trending over the weekend after the ride-hailing app was criticized for undercutting New York City taxi drivers staging a work stoppage to protest the immigration order. Seizing on the popular backlash against Uber, the company’s chief competitor Lyft announced a $1m donation to the ACLU, with co-founders Logan Green and John Zimmer issuing the following statement sharply criticizing the executive order:

“This weekend, Trump closed the country’s borders to refugees, immigrants, and even documented residents from around the world based on their country of origin. Banning people of a particular faith or creed, race or identity, sexuality or ethnicity, from entering the U.S. is antithetical to both Lyft’s and our nation’s core values. We stand firmly against these actions, and will not be silent on issues that threaten the values of our community.”

And Lyft was not alone. Twitter, Apple, Facebook, Google, Microsoft, Netflix, and Airbnb all released statements over the weekend, ranging from judicious to vociferous. Among the more strongly worded repudiations was that of Aaron Levie, CEO of the cloud company Box, who took to Twitter to write, “On every level – moral, humanitarian, economic, logical, etc – this ban is wrong and completely antithetical to the principles of America.”

Meanwhile, Google co-founder Sergey Brin was spotted at a protest at San Francisco International Airport less than a month after his co-founder and current Alphabet CEO Larry Page was among a group of tech executives invited to Trump Tower to meet with then President-elect Trump. And while Trump’s meeting with the tech leaders was seen by many as little more than a charm offensive aimed at paving the way for future cooperation with Washington, a new report by Adam Segal of the Council on Foreign Relations provides some context for why such cooperation is necessary.

“The Silicon Valley-Washington rift has real implications for U.S. cybersecurity and foreign policy,” writes Segal, adding, “An ugly fight between the two sides makes it more difficult to share cyber threat information, counter online extremism, foster global technology standards, promote technological innovation, and maintain an open internet.”

As the report explains, the divide between Washington and U.S. tech firms began in earnest over three years ago with the Snowden revelations, which forced global platforms to reckon with an outraged public demanding greater security and privacy protections. Most notably, these new economic and reputational incentives informed Apple’s decision to make strong encryption the default across the company’s products, prompting a protracted fight with law enforcement after authorities were initially unable to access the contents of an iPhone belonging to one of the San Bernardino attackers.

But if debates over encryption, privacy, and net neutrality created the rift between Silicon Valley and Washington, last week’s immigration order left a gaping chasm between the two.

Aside from the obvious constitutional concerns, the immigration restrictions are particularly worrisome for tech companies that recruit some of their top talent from abroad.

On Wednesday Twitter joined Lyft and others, donating over $1m to the ACLU to help fight the immigration order, while the messaging platform Viber announced it would provide free international calls to the seven countries affected by the executive order. Also on Wednesday, The Hill cited several cybersecurity researchers who are declining to work with law enforcement until the immigration order is revoked.

Meanwhile, Bloomberg reported that an open letter expressing concern over Trump’s immigration policies was circulating through Silicon Valley and beyond, including among CEOs on Wall Street and in the manufacturing, energy, and consumer goods sectors.

Whether or not the combined weight of an overwhelming majority of the tech community is enough to sway the administration’s thinking on immigration (or anything for that matter) remains to be seen. Regardless, critical issues like stepping up cyberdefense, curbing data localization, and protecting a free and open Internet will require some degree of cooperation between Tech and Trump, a prospect that, at the moment, is difficult to imagine.


FTC complaint highlights growing threats from Internet-connected toys

By Kenneth Merrill

For years the Internet of Things (IoT) has consistently been cited as one of the next big issues looming on the tech policy horizon. With a recent complaint filed with the Federal Trade Commission (FTC) by a coalition of privacy and consumer protection groups highlighting the risks Internet-connected toys pose to children, it seems the IoT’s time has come.

According to the complaint filed in December by a coalition of privacy and consumer groups, the My Friend Cayla and i-Que Intelligent Robot dolls, manufactured by U.S.-based Genesis Toys, eavesdrop on children by “recording and collecting the private conversations of young children without any limitations on collection, use, or disclosure of this personal information.” The complaint also charges Massachusetts-based voice recognition company Nuance Communications, which stores and processes the audio conversations, with using the data to market products and services to children and with selling the data to third parties for behavioral marketing purposes.

The dolls, which are widely available in the U.S. and abroad, instruct customers to download a mobile application that allows parents to listen to and communicate with the child. But as the Norwegian Consumer Council discovered in an in-depth legal and technical analysis of Internet-connected toys, the Bluetooth-enabled toys also allow strangers to covertly eavesdrop on children, creating “a substantial risk of harm because children may be subject to predatory stalking or physical danger.”

In particular, the complaint argues that the companies are in violation of FTC regulations and the Children’s Online Privacy Protection Act (COPPA), which regulates the collection of children’s personal information by online service operators. Here the privacy groups charge Genesis Toys with failing to provide adequate notice to parents regarding the collection and transmission of children’s audio conversations; failing to obtain consent for recording and collecting conversations; deceiving parents and children as to the nature of the recordings; and failing to comply with deletion and data retention regulations.

“With the growing Internet of Things, American consumers face unprecedented levels of surveillance in their most private spaces, and young children are uniquely vulnerable to these invasive practices,” said Claire T. Gartland, Director, EPIC Consumer Privacy Project. “The FTC has an obligation here to step in and safeguard the privacy of young children against toys that spy and companies that exploit their very voices for corporate gain.”

But with an incoming president who vowed during the campaign to “cut regulations by 75%,” consumer advocacy groups are drawing on coordinated international consensus in an effort to establish norms regarding the IoT and children. “While it is unclear how the new Trump administration will handle any regulatory issues, we do have a tradition in the U.S. of protecting children from unfair and manipulative practices in the digital environment,” explains Kathryn Montgomery, Professor and Chair of the Communication Studies Department at American University (currently on sabbatical), adding that these protections include COPPA, “a law that has been in place for nearly a decade and that government and industry alike have embraced and continue to support.”

And of course it is not just children who are susceptible to privacy and security violations at the hands of the ever-expanding IoT market. AU and the Center for Digital Democracy released a major study last month, funded by the Robert Wood Johnson Foundation, on the privacy and consumer protection concerns raised by the proliferation of health and fitness wearables.

This comes on the heels of a massive distributed denial-of-service attack in October that harnessed an army of hacked Internet-connected devices, including baby monitors, cameras, and routers, to flood the servers of Dyn, a company that provides domain name resolution for a host of Internet services, disrupting and in some cases halting traffic to services such as Google Maps, Facebook, and Twitter. As The New York Times wrote following the attack, “It is too early to determine who was behind Friday’s attacks, but it is this type of attack that has election officials concerned. They are worried that an attack could keep citizens from submitting votes.”
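
The mechanics behind such an outage can be sketched in a few lines of Python (our own illustration, not drawn from the reporting): nearly every connection to a web service begins with a DNS lookup that translates a hostname into an IP address. When the resolver that performs that step is knocked offline, the lookup fails and the service appears to be down even though its own servers are untouched.

```python
import socket

def resolve(hostname: str, port: int = 443):
    """Return the IP addresses a hostname resolves to, or [] if resolution fails."""
    try:
        # getaddrinfo performs the hostname-to-address lookup that a DNS
        # provider like Dyn handles for its customers' domains.
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        # Resolution failed: from the user's perspective, the site is "down",
        # even if the site's own servers are running normally.
        return []

# "localhost" resolves locally (via the hosts file), with no external DNS involved.
print(resolve("localhost"))
```

The point of the sketch is the failure mode in the `except` branch: a denial-of-service attack on the resolver breaks this first step for every dependent service at once, which is why a single attack on one DNS provider could disrupt so many unrelated sites.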


Recap: Content Rules?! Newsfeeds, Algorithms, Content Moderation, and Ethics

By Kenneth Merrill

At Donald Trump’s first official press conference since his election, the President-elect engaged in a heated exchange with reporters that culminated in him referring to CNN as “fake news.” Two days before Trump’s inauguration, the Washington, DC and New York chapters of the Internet Society (ISOC), in partnership with the Internet Governance Lab, hosted a panel of experts to interrogate the question: what is “fake news”?

Moderated by AU School of Communication Professor Aram Sinnreich, the panel included AU Professor and Executive Editor of the AU Investigative Reporting Workshop Charles Lewis; Jessa Lingel of the Annenberg School for Communication; Andrew Bridges, Partner at Fenwick & West LLP; and, in New York, Gilad Lotan, head of data science at BuzzFeed; Shuli Hallak, Executive Vice President of ISOC-NY; and Harry Bruinius, author, journalist, and staff writer at The Christian Science Monitor.

“Just as trust in the institutions of government is important for a functioning democracy, so too is trust in the Internet,” explained David Vyorst of ISOC-DC. But how should we design trust into the algorithms that now mediate content for users? Should platforms bear any responsibility for the content spread across their networks? What role does the traditional news media play? And at what point do end-users bear a responsibility to speak up and develop tools and methods to combat fake news?

Grounding the concept in historical context, AU Professor Charles Lewis began by explaining that “fake news” is not an entirely new phenomenon but fits within the rich and storied tradition of propaganda. “From truth to truthiness to post-truth society to, now, fake news, we’ve had these issues for some time,” argued Lewis. But whereas traditional propaganda models were more top-down, today’s algorithmically negotiated information spaces create distributed networks for the dissemination of propaganda.

Here, data scientist Gilad Lotan presented findings from his own analysis of personalized propaganda spaces. “Even though fake news and propaganda have been around for a while, the fact that these algorithms personalize stories for individuals makes this very different,” explained Lotan, adding that one of the important issues at stake is the distinction between search engine optimization and algorithmic propaganda. On this point Jessa Lingel of the Annenberg School for Communication made the case for increased algorithmic transparency, both to identify how code influences public opinion and to give individuals the means to engineer tools, technological and social, to combat the spread of fake news.

Meanwhile Andrew Bridges of the law firm of Fenwick & West suggested that placing responsibility squarely on the shoulders of content platforms ignores larger technosocial dynamics and the important role that algorithms play in “giving us what we want.” And yet, as several audience members pointed out, engineering algorithmic solutions to the spread of propaganda and fake news should balance not only giving users what they want but also what they need, which in some cases may involve hard political and economic choices about the communication technologies we build and use.

Watch the discussion here.


Dr. Sargsyan’s Ph.D. Dissertation on Information Intermediary Privacy

Congratulations to Tatevik Sargsyan, who today successfully defended her dissertation “Exploring Multistakeholderism Through the Evolution of Information Intermediaries’ Privacy Policies.” Her dissertation committee was chaired by Dr. Laura DeNardis; committee members included Dr. Kathryn Montgomery, Dr. Derrick Cogburn, and Dr. Declan Fahy. The external reader was digital privacy expert Dr. Michael Zimmer of the University of Wisconsin – Milwaukee.


New Paper on Cyber Sovereignty v. Distributed Internet Governance

On November 30, 2016, Laura DeNardis, Gordon Goldstein, and Ambassador David A. Gross presented their new paper, “The Rising Geopolitics of Internet Governance: Cyber Sovereignty v. Distributed Governance,” at the Columbia School of International and Public Affairs (SIPA). The paper was part of the Columbia SIPA Tech & Policy Initiative, and the panel discussion was moderated by SIPA Dean Merit Janow.

Internet governance is at a crossroads. The 21st century has given rise to two incommensurable visions for the global Internet and how it is governed. One envisions a universal network that generally supports the free flow of information and whose governance is distributed across the private sector, governments and new global institutions in an approach that has historically been described as “multistakeholder” governance. This vision has materialized, albeit imperfectly, in how the Internet and its coordination has historically progressed and is an approach advocated by the United States government and many other countries. This is the model of Internet governance that has dominated throughout the past decade.

The competing vision advocates for greater multilateral and top-down administration of the Internet in the name of social order, national cyber sovereignty, and tighter control of information flows. China and other countries interested in greater administrative control over the flow of information have been vocal proponents of a more multilateral approach to Internet governance.

These visions are often debated using the language of abstract theoretical constructs but they involve actual policy choices that have arisen in particular historical contexts and whose future will have tangible effects on American foreign policy interests, American values of freedom of expression and innovation, the global digital economy, and the stability and resiliency of Internet infrastructure itself. This paper provides some historical context to the rise of distributed Internet governance, describes some of the key geopolitical conflicts that involve incommensurability between the ideology of national sovereignty and the technical topology and transnational characteristics of private Internet infrastructure, and argues for the preservation of private-sector-led multistakeholder governance rather than a shift to greater government control.
