Are You Human? How Bots Distort Political Debates

Robot. Image: Willow Garage on Flickr (CC BY-NC 2.0)


One of the promises of the Internet was that it would facilitate debate and enable people to exchange ideas and information. We already know that this is largely not true, because our habits and preferences, as well as social media algorithms, tend to trap us in a filter bubble that only shows us opinions similar to the ones we already hold. That is a problem.

However, the way other ideas and voices penetrate that bubble can be at least as problematic if those voices are not genuine. One of the ways Facebook, Twitter and Google decide whether to show you a piece of content is sheer volume: a message that has been shared or liked more often is assumed to be more interesting and is thus more likely to be shown to a user.
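The volume heuristic described above can be sketched in a few lines. This is a deliberately crude illustration, not any platform's actual ranking algorithm; the field names and data are invented for the example.

```python
# Illustrative sketch: ranking posts purely by engagement volume.
# A coordinated bot network that inflates shares/likes pushes its
# message to the top, regardless of how many real people care.

def rank_by_engagement(posts):
    """Return posts sorted so the most shared/liked appear first."""
    return sorted(posts, key=lambda p: p["shares"] + p["likes"], reverse=True)

feed = [
    {"id": "a", "shares": 10, "likes": 5},
    {"id": "b", "shares": 200, "likes": 90},  # heavily amplified, e.g. by bots
    {"id": "c", "shares": 3, "likes": 40},
]
print([p["id"] for p in rank_by_engagement(feed)])  # ['b', 'c', 'a']
```

The point of the sketch is that a purely volume-driven signal cannot distinguish genuine popularity from manufactured amplification.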

The problem is that more and more of the voices amplifying controversial positions are not human – they are so-called social bots. Social bots are small pieces of software that create social media accounts and like, comment, reply or retweet content at the direction of whoever manages the bot network. In many cases these networks comprise thousands of social media accounts. This is problematic because bots are often used to share extremist positions, and experience shows that moderate voices fall silent when discussions become too polarised.
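To make the mechanics concrete, here is a minimal sketch of how such a bot might behave. Everything here is invented for illustration – the client interface, the keyword matching, the randomised retweeting – and real bot networks driving actual platform APIs are far more sophisticated.

```python
import random

# Illustrative sketch only: a simple social bot that amplifies content
# matching its operator's chosen topics. RecordingClient is a stand-in
# for a real platform API wrapper.

class RecordingClient:
    """Records the actions a real API client would send to the platform."""
    def __init__(self):
        self.liked, self.retweeted = [], []
    def like(self, post_id):
        self.liked.append(post_id)
    def retweet(self, post_id):
        self.retweeted.append(post_id)

class SimpleBot:
    def __init__(self, client, keywords):
        self.client = client      # controlled by the bot-network operator
        self.keywords = keywords  # topics the operator wants amplified

    def step(self, post):
        """Like (and sometimes retweet) posts matching the operator's keywords."""
        if any(k in post["text"].lower() for k in self.keywords):
            self.client.like(post["id"])
            if random.random() < 0.5:  # vary behaviour to evade spam filters
                self.client.retweet(post["id"])

client = RecordingClient()
bot = SimpleBot(client, ["election"])
bot.step({"id": 42, "text": "Election day is coming!"})
print(client.liked)  # [42]
```

Multiply this loop across thousands of accounts and you get the coordinated amplification described above.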

In addition, many of these bots have become very sophisticated. Where in the past bots simply spewed the same message over and over again (sometimes with a changing alphanumerical code appended to make them harder for spam filters to catch), social bots have evolved significantly. In both Ukraine and the US, some social bots deliberately connect with people and share content tailored to influencers of the opposing group, in order to occasionally inject propaganda into their feeds.

  • A study conducted by Oxford University after the first Clinton-Trump debate in the US suggests that up to 580,000 Trump-supporting tweets were sent by social bots, while up to 123,000 tweets that supported Clinton might have come from machines.
  • In Germany, a new far-right party recently emphasised (link in German) that they intend to use social bots during the 2017 general elections.
  • A recent fact sheet produced by the German Konrad Adenauer Stiftung describes a botnet that targeted users in Ukraine. It spanned 15,000 Twitter accounts and posted on average 60,000 messages per day.

Both the Oxford study and the Konrad Adenauer Stiftung fact sheet also mention that the debate prior to the Brexit referendum was heavily saturated with social bots.

Of course, behaviour change is very difficult, and it is unlikely that anyone will radically change their position based on social media alone. This is nevertheless worrying, because bots might change perceptions, and those perceptions might influence what people do or say.

I think this is something that we as a society need to keep an eye on. One good place to start seems to be Political Bots, a project dedicated to “algorithms, computational propaganda, and digital politics”.

What are your thoughts? Please share them below.


  • Feiyang Yuan

    Although this is the first time I have come across the term ‘social bots’, I think I understand what it means. If social bots can be used as a tool to guide or even control the direction of public opinion, the consequences could be quite severe. This makes me think of the privilege the Global North holds in the domain of mass media, and this kind of inequality is becoming more and more obvious.