FTC complaint highlights growing threats from Internet-connected toys

By Kenneth Merrill

For years the Internet of things (IoT) has consistently been cited as one of the next big issues looming on the tech policy horizon. With a recent complaint filed by a group of privacy and consumer protection groups at the Federal Trade Commission (FTC) highlighting risks posed to children by Internet-connected toys, it seems the IoT’s time has come.

According to the complaint filed in December by a coalition of privacy and consumer groups, the My Friend Cayla and i-Que Intelligent Robot dolls, manufactured by U.S.-based Genesis Toys, eavesdrop on children by “recording and collecting the private conversations of young children without any limitations on collection, use, or disclosure of this personal information.” The complaint also charges Massachusetts-based voice recognition company Nuance Communications, which stores and processes the audio conversations, with using the data to market products and services to children and with selling the data to third parties for behavioral marketing purposes.

The dolls, which are widely available in the U.S. and abroad, instruct customers to download a mobile application that allows parents to listen to and communicate with the child. But as the Norwegian Consumer Council discovered following an in-depth legal and technical analysis of Internet-connected toys, the Bluetooth-enabled toys also allow strangers to covertly eavesdrop on children, creating “a substantial risk of harm because children may be subject to predatory stalking or physical danger.”

In particular, the complaint argues that the companies are in violation of FTC regulations and the Children’s Online Privacy Protection Act (COPPA), which regulates the collection of children’s personal information by online service operators. Here the privacy groups charge Genesis Toys with failing to provide adequate notice to parents regarding the collection and transmission of children’s audio conversations; failing to obtain consent for recording and collecting conversations; deceiving parents and children as to the nature of the recordings; and failing to comply with deletion and data retention regulations.

“With the growing Internet of Things, American consumers face unprecedented levels of surveillance in their most private spaces, and young children are uniquely vulnerable to these invasive practices,” said Claire T. Gartland, Director, EPIC Consumer Privacy Project. “The FTC has an obligation here to step in and safeguard the privacy of young children against toys that spy and companies that exploit their very voices for corporate gain.”

But with an incoming president who vowed during the campaign to “cut regulations by 75%,” consumer advocacy groups are drawing on coordinated international consensus in an effort to establish norms regarding the IoT and children. “While it is unclear how the new Trump administration will handle any regulatory issues, we do have a tradition in the U.S. of protecting children from unfair and manipulative practices in the digital environment,” explains Kathryn Montgomery, Professor and Chair of the Communication Studies Department at American University (currently on sabbatical), adding that these protections include COPPA, “a law that has been in place for nearly a decade and that government and industry alike have embraced and continue to support.”

And of course it is not just children who are susceptible to violations of privacy and security at the hands of the ever-expanding IoT market. Last month, AU and the Center for Digital Democracy released a major study, funded by the Robert Wood Johnson Foundation, on the privacy and consumer protection concerns raised by the proliferation of health and fitness wearables.

This comes on the heels of a massive distributed denial-of-service attack in October that harnessed an army of hacked Internet-connected devices, including baby monitors, cameras, and routers, to flood the servers of Dyn, a company that provides domain name resolution services for a host of Internet services. The attack disrupted, and in some cases halted, traffic to services such as Google Maps, Facebook, and Twitter. As The New York Times wrote following the attack, “It is too early to determine who was behind Friday’s attacks, but it is this type of attack that has election officials concerned. They are worried that an attack could keep citizens from submitting votes.”

Recap: Content Rules?! Newsfeeds, Algorithms, Content Moderation, and Ethics

By Kenneth Merrill

At Donald Trump’s first official press conference since his election, the President-elect engaged in a heated exchange with reporters that culminated in him referring to CNN as “fake news.” Two days before Trump’s inauguration, the Washington, DC and New York chapters of the Internet Society (ISOC), in partnership with the Internet Governance Lab, hosted a panel of experts to interrogate the question: what is “fake news”?

Moderated by AU School of Communication Professor Aram Sinnreich, the panel included AU Professor and Executive Editor of the AU Investigative Reporting Workshop Charles Lewis; Jessa Lingel of the Annenberg School for Communication; Andrew Bridges, Partner at Fenwick & West LLP; and, in New York, Gilad Lotan, head of data science at BuzzFeed; Shuli Hallak, Executive Vice President of ISOC-NY; and Harry Bruinius, author, journalist, and staff writer at The Christian Science Monitor.

“Just as trust in the institutions of government is important for a functioning democracy, so too is trust in the Internet,” explained David Vyorst of ISOC-DC. But how should we design trust into the algorithms that now mediate content for users? Should platforms bear any responsibility for the content spread across their networks? What role does the traditional news media play? And at what point do end-users bear a responsibility to speak up and develop tools and methods to combat fake news?

Grounding the concept in a historical context, AU Professor Charles Lewis began by explaining that “fake news” is not an entirely new phenomenon but fits within the rich and storied tradition of propaganda. “From truth to truthiness to post-truth society to, now, fake news, we’ve had these issues for some time,” argued Lewis. But whereas traditional propaganda models were more top-down, today’s algorithmically negotiated information spaces create distributed networks for the dissemination of propaganda.

Here, data scientist Gilad Lotan presented findings from his own analysis of personalized propaganda spaces. “Even though fake news and propaganda have been around for a while, the fact that these algorithms personalize stories for individuals makes this very different,” explained Lotan, adding that one of the important issues at stake is the distinction between search engine optimization and algorithmic propaganda. On this point Jessa Lingel of the Annenberg School for Communication made the case for increased algorithmic transparency, both to identify how code influences public opinion and to provide individuals with a means of engineering tools, technological and social, to combat the spread of fake news.

Meanwhile Andrew Bridges of the law firm of Fenwick & West suggested that placing responsibility squarely on the shoulders of content platforms ignores larger technosocial dynamics and the important role that algorithms play in “giving us what we want.” And yet, as several audience members pointed out, engineering algorithmic solutions to the spread of propaganda and fake news should balance not only giving users what they want but also what they need, which in some cases may involve hard political and economic choices about the communication technologies we build and use.

Watch the discussion here.

Dr. Sargsyan’s Ph.D. Dissertation on Information Intermediary Privacy



Congratulations to Tatevik Sargsyan, who today successfully defended her dissertation “Exploring Multistakeholderism Through the Evolution of Information Intermediaries’ Privacy Policies.” Her dissertation committee was chaired by Dr. Laura DeNardis; committee members included Dr. Kathryn Montgomery, Dr. Derrick Cogburn, and Dr. Declan Fahy. The external reader was digital privacy expert Dr. Michael Zimmer of the University of Wisconsin–Milwaukee.

New Paper on Cyber Sovereignty v. Distributed Internet Governance


On November 30, 2016, Laura DeNardis, Gordon Goldstein, and Ambassador David A. Gross presented their new paper, “The Rising Geopolitics of Internet Governance: Cyber Sovereignty v. Distributed Governance,” at the Columbia School of International and Public Affairs (SIPA). The paper was part of the Columbia SIPA Tech & Policy Initiative, and the panel discussion was moderated by SIPA Dean Merit Janow.

Internet governance is at a crossroads. The 21st century has given rise to two incommensurable visions for the global Internet and how it is governed. One envisions a universal network that generally supports the free flow of information and whose governance is distributed across the private sector, governments, and new global institutions in an approach that has historically been described as “multistakeholder” governance. This vision has materialized, albeit imperfectly, in how the Internet and its coordination have historically progressed, and it is the approach advocated by the United States government and many other countries. This is the model of Internet governance that has dominated throughout the past decade.

The competing vision advocates for greater multilateral and top-down administration of the Internet in the name of social order, national cyber sovereignty, and tighter control of information flows. China and other countries interested in greater administrative control over the flow of information have been vocal proponents of this more multilateral approach to Internet governance.

These visions are often debated using the language of abstract theoretical constructs, but they involve actual policy choices that have arisen in particular historical contexts and whose future will have tangible effects on American foreign policy interests, American values of freedom of expression and innovation, the global digital economy, and the stability and resiliency of Internet infrastructure itself. This paper provides some historical context for the rise of distributed Internet governance, describes some of the key geopolitical conflicts that involve incommensurability between the ideology of national sovereignty and the technical topology and transnational characteristics of private Internet infrastructure, and argues for the preservation of private-sector-led multistakeholder governance rather than a shift to greater government control.