Recap: Content Rules?! Newsfeeds, Algorithms, Content Moderation, and Ethics

By Kenneth Merrill

At Donald Trump’s first official press conference since his election, the President-elect engaged in a heated exchange with reporters that culminated in his referring to CNN as “fake news.” Two days before Trump’s inauguration, the Washington, DC, and New York chapters of the Internet Society (ISOC), in partnership with the Internet Governance Lab, hosted a panel of experts to interrogate the question: what is “fake news”?

Moderated by AU School of Communication Professor Aram Sinnreich, the panel included AU Professor and Executive Editor of the AU Investigative Reporting Workshop Charles Lewis; Jessa Lingel of the Annenberg School for Communication; Andrew Bridges, Partner at Fenwick & West LLP; and, in New York, Gilad Lotan, head of data science at BuzzFeed; Shuli Hallak, Executive Vice President of ISOC-NY; and Harry Bruinius, author, journalist, and staff writer at The Christian Science Monitor.

“Just as trust in the institutions of government is important for a functioning democracy, so too is trust in the Internet,” explained David Vyorst of ISOC-DC. But how should we design trust into the algorithms that now mediate content for users? Should platforms bear any responsibility for the content spread across their networks? What role does the traditional news media play? And at what point do end-users bear a responsibility to speak up and develop tools and methods to combat fake news?

Grounding the concept in historical context, AU Professor Charles Lewis began by explaining that “fake news” is not an entirely new phenomenon but fits within the rich and storied tradition of propaganda. “From truth to truthiness to post-truth society to, now, fake news, we’ve had these issues for some time,” argued Lewis. But whereas traditional propaganda models were more top-down, today’s algorithmically negotiated information spaces create distributed networks for the dissemination of propaganda.

Here, data scientist Gilad Lotan presented findings from his own analysis of personalized propaganda spaces. “Even though fake news and propaganda have been around for a while, the fact that these algorithms personalize stories for individuals makes this very different,” explained Lotan, adding that one of the important issues at stake is the distinction between search engine optimization and algorithmic propaganda. On this point, Jessa Lingel of the Annenberg School for Communication made the case for increased algorithmic transparency, both to identify how code influences public opinion and to give individuals the means to engineer tools, technological and social, to combat the spread of fake news.

Meanwhile, Andrew Bridges of the law firm Fenwick & West suggested that placing responsibility squarely on the shoulders of content platforms ignores larger technosocial dynamics and the important role that algorithms play in “giving us what we want.” And yet, as several audience members pointed out, engineering algorithmic solutions to the spread of propaganda and fake news should balance not only what users want but also what they need, which in some cases may involve hard political and economic choices about the communication technologies we build and use.

Watch the discussion here.
