Last Thursday, Facebook published a blog post in which it provides some insights into how the social network is fighting extremism and terrorism on the platform. What I found most interesting was this section:
“When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.”
The reason I find this so interesting is that earlier this year, one of Facebook’s lawyers had reportedly insisted that this type of monitoring was far too labour intensive and that the company would need a “miracle machine” to do this. Yet now, Facebook says that “in order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of ‘hashes’ — unique digital fingerprints for photos and videos — for content produced by or in support of terrorist organizations.”
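To make the idea of such a “digital fingerprint” database concrete, here is a toy sketch of hash-based matching: reduce an image to a compact fingerprint, then compare fingerprints instead of full files. This is only an illustration of the general principle — it implements a simple “average hash” on a tiny grayscale pixel matrix, and all names and values here are my own; real systems like the one Facebook describes use far more robust perceptual hashing.

```python
def average_hash(pixels):
    """Return a bit string: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a likely match."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 2x2 grayscale "images" (values 0-255), purely illustrative.
original        = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [221, 28]]   # e.g. a re-encoded copy
different       = [[200, 10], [30, 220]]

h0 = average_hash(original)
h1 = average_hash(slightly_edited)
h2 = average_hash(different)

print(hamming_distance(h0, h1))  # 0 -> the re-encoded copy still matches
print(hamming_distance(h0, h2))  # 4 -> a genuinely different image
```

The point is that once a banned image’s hash is in the shared database, every new upload only needs one cheap fingerprint comparison — no human has to look at it.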
I guess that means a miracle has happened!
The case that I am referring to is that of the Syrian refugee Anas Modamani, who in 2015 had taken a selfie with German Chancellor Angela Merkel. Later, his image was repeatedly used in defamatory postings on Facebook that incorrectly claimed that he had been accused of attempted murder and terrorism. Modamani sued Facebook in February of this year because he wanted to get an injunction that would force Facebook to automatically identify and remove instances of his photo being abused in this way. The court dismissed the lawsuit in March because it agreed that Facebook is just a platform provider and neither the originator nor a participant of these defamatory postings.
However, last week’s blog post clearly shows that Facebook and other tech companies have technology in place that can do precisely what Modamani asked for. It seems that in the case of an individual, they simply didn’t want to do it, while in the meantime the political pressure has increased sufficiently to force them to take action.
Personally, I have always found ludicrous the argument that it would be too much work to identify these images. If it is possible to quickly and automatically detect songs, videos and images that violate a rights holder’s copyright, then that has to be possible for other types of content as well.
In addition to defamatory content, this technology could also be applied to various forms of misinformation and propaganda that we have seen during recent elections – though we are of course getting into very murky freedom-of-speech territory here! Whether we want social networks to automatically delete misinformation that is harmful but not criminal is a very good question, and I’m not entirely sure of the answer. However, the excuse that such a system would not be technically feasible has now been disproven once and for all.
What are your thoughts? Please leave a comment below!