How and Why We Need to Fix Social Media or Risk Self-Destruction

Image: Depositphotos / porteador

I’m not going to discuss whether Facebook was responsible for Trump winning the election (read this or this if you are interested in that debate), but I want to spend some time thinking about what role social media plays in public discourse and how we can improve the current situation.

I believe that we have a problem: On social media, if you are on the political right, you mainly hear the messages of the political right. If you are on the left, you hear messages from and for the left. This is a real problem since you cannot have a functioning society without the ability to understand and share the feelings of other members of the community. Without an exchange of information, opinions, fears and hopes, society breaks apart and splits into multiple societies.

If “societies are characterised by patterns of relationships (social relations) between individuals” (Wikipedia), then a reduction of these relationships means that the strength and coherence of a society is weakening; and if these social relations are absent altogether, then we are really talking about multiple societies. This in turn directly affects whether large challenges that need the support of the whole society can be overcome, or whether part of the necessary energy is wasted on positioning one social group against the other.

Of course, we have always had different sub-groups in each society that had barely any contact with each other. The fact that people are more likely to internalise information that is consistent with their existing world views is hardly new. That is why FOX has different viewers from MSNBC. Most of us don’t like our opinions to be challenged, and if they are we tend to reject the conflicting information, rather than question our own beliefs.

The difference with a lot of digital media is that it actively reinforces this natural tendency to stick to what you already think you know. Facebook et al want us to feel good and comfortable when visiting their sites, so it makes perfect sense for them to reduce potential discomfort by removing content from our view that we may not “like”. If you haven’t seen it yet, you should watch this TED talk by Eli Pariser about the “filter bubble”, which explains just how much of what you see in your newsfeed – and even on Google – differs from what everybody else sees. (For the record: Facebook published a study saying that the algorithm is not at fault – people are. To this, Pariser published this response.)

Rising Partisan Antipathy

The problem is that the more we live in our own bubble (be it human- or algorithm-induced), the less we are able to empathise with people who live outside it. The Pew Research Center recently released a study showing that more than half of Republicans were afraid of Democrats and vice versa. Think about that: afraid. If that comes as much of a shock to you as it did to me, then that is probably proof that we are living quite deeply in our own bubbles.

The interesting thing is that this is a new phenomenon: while in 1994 only 21% of Republicans and 17% of Democrats viewed the other party as “very unfavourable”, by 2016 the numbers had risen to 58% and 55% respectively. I believe that the fragmentation of the media landscape has played a significant role in this process.

I’m using data from the US here because I don’t have comparable data from Europe. But I’m sure that we could also find similar processes in Europe. Take the Brexit campaign or the far-right in Germany. Fear and anger breed hate and hateful speech has exploded on social media over the last few years.

So what can social media networks do? Here are some suggestions:

Reduce the filter bubble: Show 20% of news in the feed that the algorithm predicts the user will dislike. It’s healthy to be exposed to opposing views.
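As a rough illustration, the kind of feed mixing I have in mind could look like the following minimal Python sketch. The affinity scores, the 20% share and the sampling strategy are all assumptions for the sake of illustration, not how any real network actually ranks its feed:

```python
import random

def compose_feed(posts, feed_size=10, contrary_share=0.2):
    """Compose a feed where roughly `contrary_share` of the slots go to
    posts the ranking model predicts the user will dislike.

    `posts` is a list of (post_id, predicted_affinity) pairs, where
    affinity is a 0..1 estimate that the user will "like" the post.
    """
    # Sort by predicted affinity: agreeable posts first.
    ranked = sorted(posts, key=lambda p: p[1], reverse=True)
    n_contrary = int(feed_size * contrary_share)
    n_agreeable = feed_size - n_contrary

    agreeable = ranked[:n_agreeable]
    # Draw the remaining slots from the bottom half of the ranking,
    # i.e. content the user would normally never get to see.
    low_affinity_pool = ranked[len(ranked) // 2:]
    contrary = random.sample(low_affinity_pool,
                             min(n_contrary, len(low_affinity_pool)))

    feed = agreeable + contrary
    random.shuffle(feed)  # interleave so contrary items aren't buried at the end
    return [post_id for post_id, _ in feed]
```

The exact share is debatable; the point is simply that the mix becomes an explicit, tunable parameter instead of an invisible side effect of engagement optimisation.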

Kill the bots: A lot of misinformation and hate speech is published by bots, small programs that post the same content in many places. I understand that social networks are reluctant to censor people’s speech, but I believe we shouldn’t have qualms about excluding non-human entities from the conversation.
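A very crude version of one such bot signal – flagging accounts that post identical content over and over – could be sketched like this. The data shape and the threshold are hypothetical; real bot detection relies on many more signals than duplicate content:

```python
from collections import Counter
import hashlib

def flag_bot_like_accounts(posts, min_duplicates=20):
    """Return accounts that posted the exact same text at least
    `min_duplicates` times -- one crude bot signal among many.

    `posts` is an iterable of (account_id, text) pairs.
    """
    # Count how often each account posted each exact piece of content.
    counts = Counter(
        (account, hashlib.sha256(text.encode("utf-8")).hexdigest())
        for account, text in posts
    )
    return {account for (account, _), n in counts.items() if n >= min_duplicates}
```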

Enforce community standards: Both Facebook and Twitter have community standards that state clearly that hate speech is not acceptable. However, the German Ministry of Justice found that Facebook deletes less than half of reported posts and Twitter only 1%. Part of the issue seems to be that both platforms only intervene when they perceive something to be a “credible” threat and when it is directed against an individual, rather than against a group.

I realise that the latter is more easily said than done and I understand why Facebook uses the argument of free speech versus societal norms to justify why they do not remove more posts. For them the question is undoubtedly: if we remove images of Nazi posts in Germany, wouldn’t we have to remove the posts of gay rights activists in Russia? After all, both might be against the law in the respective countries. It is obviously much easier to retreat behind the façade of free speech than to deal with these thorny problems.

But to me, this is not about free speech – it is about what kind of place these social networks want to be. Free speech means that the state is not allowed to interfere with your expressions of opinion. It means that you have the right to stand at a street corner and proclaim your thoughts without being arrested, that you can print and distribute your writings, or set up a website and share whatever you think the world needs to hear. The state is not allowed to interfere with you, and this is extremely important. However, social networks are not part of the state – they are digital properties owned by companies. And while the state is not allowed to arrest someone for spewing racist drivel at a street corner, a shop owner has every right to show the door to someone who is doing so inside his or her store.

Neuter – don’t kill – fake news: Facebook, Twitter, Google and Co also frequently like to argue that they are only the channel through which messages pass and cannot be held responsible for the content – similar to the postal service or the phone company. However, that argument does not hold water. After all, the phone company or the postal service does not influence what letters or phone calls I receive, but the algorithms of Facebook and Twitter do influence what I see. While we can discuss the extent and possible bias of this influence, nobody can claim that it does not exist.

I find it worrying that, following the US elections, Zuckerberg called it a “pretty crazy idea” that Facebook might have had any influence on people’s decisions, since that seems to indicate that he is not prepared to give these issues much thought or accept responsibility for the fact that he has built one of the world’s largest media houses. It’s also a rather questionable statement – why should I spend money on ads on Facebook if the CEO doesn’t believe they have any impact? I think this tweet sums it up nicely:

(Update: Rick Webb just published a great article on Facebook’s responsibility and influence called “I’m Sorry Mr. Zuckerberg, But You Are Wrong“)

The Pew Research Center found in another study that 62% of Americans get all or some of their news from social media, with Facebook accounting for the majority (44%). This kind of reach carries responsibility with it – and it’s good to see that some Facebook staff do take it seriously.

A member of the Facebook community team trying to decide what is news and what is satire. Photo: UN Photo/Martine Perret

The problem with fake news is not that it is fake, it is that some people believe it. The Onion and Weekly World News publish fabulous fake news every week and I think it would be sad if these were removed from Facebook.

It would also be very difficult to enforce such a ban, because it would inevitably lead to discussions about what is news and what is satire. Unfortunately, many other fake news sources are not as easy to identify, and that is a problem: if somebody votes for a candidate or a party because he or she honestly believes in the candidate or the policies – fine. But if somebody votes based on blatant lies, then that is unacceptable. While sites like Hoaxmap in the German-speaking countries of Europe are doing their best to counter malicious rumours, it would be a “pretty crazy idea” to think that they can keep up with rumours that are produced for financial profit (I’m looking at you, Macedonia).

I believe that Facebook and Twitter need a kind of system that highlights links that point to questionable sources. I already have some ideas about that and will put them into a separate post soon.
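To give a flavour of what such highlighting could mean, here is a minimal sketch of a labelling step for shared links. The domain lists and label names are placeholders I made up for illustration; in practice they would have to come from independent fact-checking organisations, not be hard-coded:

```python
from urllib.parse import urlparse

# Hypothetical curated lists -- in reality these would be maintained
# externally by fact-checkers, not hard-coded by the platform.
QUESTIONABLE_DOMAINS = {"totally-real-news.example", "daily-truth.example"}
SATIRE_DOMAINS = {"theonion.com", "weeklyworldnews.com"}

def label_link(url):
    """Return a warning label for a shared link, or None if no label applies."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in SATIRE_DOMAINS:
        return "satire"
    if domain in QUESTIONABLE_DOMAINS:
        return "disputed source"
    return None
```

The label would then be shown next to the link in the feed rather than the link being removed, which sidesteps the censorship debate: the content stays, but the reader gets context.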

Long story short: the promise of social media was that it would encourage debate. Instead, it has helped us dig the trenches from which we look suspiciously at the Others while being overwhelmed by a flood of misinformation. But it doesn’t have to be that way. While better social media networks are not a magic potion to eliminate divisions and hate between social groups, the companies that own them can do their bit to avoid making it worse. Facebook’s unofficial motto is “Move fast and break things”. It’s time to start fixing them.

What are your thoughts? Please share them below!
