Facebook Explains How the Bad Guys Use Facebook

[Image: Internet Troll. Source: Flickr (CC0 1.0)]

Facebook’s Threat Intelligence team has just published a report that outlines how agents provocateurs are using the service to spread misinformation, undermine opponents and drown out dissenting voices.

“Information Operations and Facebook” discusses how state and non-state actors are using the platform for “insidious forms of misuse, including attempts to manipulate civic discourse and deceive people” and gives some indication of what Facebook is doing about it.

What I found most helpful:

The document provides good working definitions of what is often subsumed under “Fake News” but is really a set of distinct tactics that need to be addressed separately:

Information (or Influence) Operations – Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts (false amplifiers) aimed at manipulating public opinion.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

What I found most interesting:

  • The discussion of “false amplifiers”, which at times sounded like something straight out of Homeland. A lot of the discussion about false amplification that I have been following focuses on social bots, i.e. software that is used to amplify a message. The Facebook team, however, finds that on their platform “most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts.” I can imagine that this makes false amplification much harder to control, since Facebook needs to be far more careful about shutting down a person’s account than shutting down a bot. Drawing the line between enthusiastic supporter and inauthentic manipulator must be quite difficult at times.
  • Facebook has developed a system that can recognise inauthentic accounts by identifying patterns of activity without accessing the accounts’ content. Ahead of the elections in France, this system removed more than 30,000 accounts from Facebook. (A rough sketch of what such pattern-based detection might look like follows below.)
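
The report gives no technical detail on how this system works, but a content-blind detector would presumably score accounts on behavioural signals alone. Here is a minimal, hypothetical sketch in Python; the features, weights, and thresholds are my own illustration, not Facebook’s actual method:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Behavioural signals only: no post content is inspected."""
    posts_per_hour: float     # sustained posting rate
    repeat_post_ratio: float  # share of posts that are verbatim re-posts
    burst_correlation: float  # how closely posting times track other flagged accounts (0..1)
    account_age_days: int

def inauthenticity_score(a: AccountActivity) -> float:
    """Toy heuristic combining a few activity patterns into one score.
    All weights and thresholds are illustrative assumptions."""
    score = 0.0
    if a.posts_per_hour > 10:           # humans rarely sustain this rate
        score += 0.35
    if a.repeat_post_ratio > 0.5:       # mostly verbatim amplification
        score += 0.25
    score += 0.3 * a.burst_correlation  # coordinated timing with other accounts
    if a.account_age_days < 7:          # throwaway accounts are cheap
        score += 0.1
    return min(score, 1.0)

suspect = AccountActivity(posts_per_hour=25, repeat_post_ratio=0.8,
                          burst_correlation=0.9, account_age_days=2)
if inauthenticity_score(suspect) > 0.7:
    print("flag for review")  # e.g. queue for removal, as with the ~30,000 French accounts
```

The point of the sketch is that every input is activity metadata (rates, timing, repetition); no post text is ever read, which is what would let such a system run without accessing the accounts’ content.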

Where I have my doubts:

When talking about the US election, the report reads as if Facebook is trying very hard to downplay its role and responsibility. The case study looks at how hacked information was misused and amplified by malicious actors. The report then comes to the conclusion that the reach was “marginal compared to the overall volume of civic content shared during the US election.”

Because the case study seemingly limits itself to the misuse of hacked information, the most widely shared pieces of misinformation are out of scope, such as the claim that Pope Francis endorsed Trump. (Jen Weedon, William Nuland, Alex Stamos, please correct me if I’m wrong.)

We also don’t know anything about how “overall volume of civic content shared during the US election” is defined. Does that include every mention of Hillary or Trump on Facebook? Does it include content posted by, and only seen by, people living outside the US?

Facebook gives so little detail about how they came to the conclusion that the role of misinformation was marginal that it sounds like a cop-out. Of course, this is part of a bigger problem, namely that Facebook does not allow independent researchers access to the data that they are collecting day and night.

Download “Information Operations and Facebook”.

What do you think about the report? Please leave a comment below!