Facebook looks to counter ‘information operations’

By Kenneth Merrill

Last November, Facebook CEO Mark Zuckerberg called claims that his company may have influenced the outcome of the U.S. presidential election by enabling the spread of propaganda a “pretty crazy idea.” But with a report published on Thursday by Facebook’s own security team titled “Information Operations and Facebook,” it is clear attitudes at the social network have changed.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the report explains.

In an effort to combat these social network-mediated propaganda campaigns, Facebook’s security team said it would increase its use of machine learning and “new analytical techniques” to remove fake accounts and disrupt “information (or influence) operations,” defined as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”
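The report does not describe these machine learning techniques in any detail, but the general shape is familiar: a classifier trained on account-level behavioral features that flags likely-inauthentic accounts for review. The sketch below is a purely illustrative toy, assuming scikit-learn; the features, training data, and threshold are all invented for the example and do not reflect Facebook’s actual systems.

```python
# Hypothetical sketch: flagging likely-inauthentic accounts with a simple
# supervised classifier. Facebook's report does not describe its models;
# every feature, value, and threshold here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-account features: [posts_per_day, follower/following ratio,
# account_age_days, fraction of posts that are shared links]
X_train = np.array([
    [200.0, 0.01,   3, 0.98],   # bot-like: high volume, new, link-heavy
    [150.0, 0.05,  10, 0.95],
    [  2.0, 1.10, 900, 0.20],   # human-like: low volume, established
    [  5.0, 0.80, 400, 0.35],
])
y_train = np.array([1, 1, 0, 0])  # 1 = inauthentic, 0 = authentic

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new account; in practice, accounts above a review threshold
# would be queued for human review rather than removed automatically.
candidate = np.array([[120.0, 0.02, 5, 0.90]])
prob_fake = model.predict_proba(candidate)[0, 1]
if prob_fake > 0.8:
    print(f"flag for review (p={prob_fake:.2f})")
```

In a real system the training labels would come from previously confirmed takedowns, and the feature set would be far richer, but the review-rather-than-remove pattern matches the human-in-the-loop approach platforms typically describe.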

Additionally, the report seeks to “untangle” and move away from the term “fake news,” which it (rightly) argues has become a catch-all used to “refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”

Instead, the report identifies three distinct categories of abuse falling under the umbrella of “information operations”:

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others); a toy sketch of how such coordination might surface follows this list.

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.
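Of the three categories, “false amplifiers” is the one most amenable to automated detection, because coordination leaves behavioral traces. As a purely hypothetical illustration (not Facebook’s actual method), one simple coordination signal is a burst of distinct accounts sharing the same link within a short window; the window size, cluster threshold, and data below are invented for the example.

```python
# Hypothetical sketch of one coordination signal for "false amplifiers":
# clusters of distinct accounts posting the same URL within a narrow
# time window. An illustration of the concept, not a production method.
from collections import defaultdict

# (account_id, url, unix_timestamp) -- fabricated example events
shares = [
    ("acct_1", "http://example.com/story", 1000),
    ("acct_2", "http://example.com/story", 1004),
    ("acct_3", "http://example.com/story", 1007),
    ("acct_4", "http://example.com/other", 5000),
]

WINDOW_SECONDS = 60   # assumed burst window; a real system would tune this
MIN_CLUSTER = 3       # assumed minimum cluster size worth flagging

by_url = defaultdict(list)
for account, url, ts in shares:
    by_url[url].append((ts, account))

for url, events in by_url.items():
    events.sort()
    # slide a window over the timestamps and flag dense bursts
    for start_ts, _ in events:
        window = {a for t, a in events if 0 <= t - start_ts <= WINDOW_SECONDS}
        if len(window) >= MIN_CLUSTER:
            print(f"possible coordinated amplification of {url}: {sorted(window)}")
            break
```

Even a crude signal like this illustrates why the report treats false amplifiers separately from false news: the content being shared may be genuine, and it is the coordinated, inauthentic pattern of sharing that constitutes the abuse.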

Citing the 2016 U.S. presidential election as a case study, the report explains how Facebook security experts “monitored” suspicious activity in the lead-up to the election and found a deluge of false news, false amplifiers, and disinformation (sidebar: if Facebook was monitoring suspicious activity during the run-up to the election, why did Mr. Zuckerberg call claims of its influence “crazy” after election day?).

The Facebook report comes less than a week after researchers at Oxford published the latest in a series of studies analyzing the role of automated accounts (or “bots”) in disseminating “junk news” on social media in the weeks preceding national elections in the U.S., Germany, and France. According to the study, one-quarter of all political links shared in France prior to last Sunday’s election contained “misinformation,” though the researchers point out that, in general, French voters shared better-quality news than Americans did in the lead-up to the U.S. presidential election (whether this reflects stronger media literacy in France or more sophisticated propaganda is unclear).

While Facebook’s security team would not confirm the identity of the actors “engaged in false amplification using inauthentic Facebook accounts,” together the Facebook and Oxford reports add to a growing body of evidence — including from U.S. intelligence officials and private cybersecurity firms — attributing the surge in automated accounts and propaganda to a larger information operation orchestrated by Russian intelligence and aimed at influencing elections and/or sowing distrust in political institutions.

Regardless, the notion that governments are targeting social networks to mine intelligence and influence political outcomes seems less “crazy” and more like the new normal in global politics.
