Social media in emergencies: is there a responsibility to verify?

I recently took part in an exercise to test Crisistracker, one of a number of promising-looking tools that could help humanitarian organizations get a better idea of what is going on in the field.

With Crisistracker, volunteers are asked to look at tweets and then geotag and categorize them, for example as information related to “People movement”, “Violence”, or “Political events”. The idea is that you can then browse the website according to the type of events you are interested in, or look at events in a certain location.
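
For illustration, here is a minimal sketch of the kind of data model this workflow implies: a report carrying a volunteer-assigned category and geotag, plus a filter for browsing by event type. All names here are hypothetical; this is not Crisistracker’s actual schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Report:
        text: str                    # original tweet text
        category: str                # e.g. "People movement", "Violence"
        lat: Optional[float] = None  # geotag added by a volunteer
        lon: Optional[float] = None

    def by_category(reports, category):
        # Browse only the type of events you are interested in.
        return [r for r in reports if r.category == category]

    reports = [
        Report("Crowds moving north out of the city", "People movement", 33.5, 36.3),
        Report("Clashes reported near the market", "Violence", 33.51, 36.29),
    ]
    print(by_category(reports, "Violence"))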

I think Crisistracker is a great tool and Jakob Rogstadius and his team have done an excellent job. But while taking part in the exercise, I realized that there is a fundamental conundrum that all social media dashboards have to deal with when they are used in a context where information is potentially life-saving and where bad information can have serious consequences.

The question is: should you (re)publish information if you don’t know whether it’s true? On this topic, the Crisistracker team has taken a position that probably holds for many platforms: “The purpose (…) is to create [an] overview of what is being said in social media (including rumors and bias towards specific topics) more than to create a perfect model of the real world.”

Basically their approach is: “we just share information that has been generated by others and we withhold judgement on whether that information is true or false. We trust that people using our service can make that determination themselves.”

Authority and responsibility

Obviously, this is also the case when you use Hootsuite, Radian6, or any other social media monitoring tool: they don’t tell you what is true and what is false. For me, the key difference is whether a tool is automated or whether it relies on human intervention. When I look at the output of a tool that works without human intervention, I instinctively know that I’m looking at a raw data feed.

However, if I know that a real person has looked at a piece of information before adding metadata to it, I automatically assign that information a higher value. I assume that whoever looked at it also judged it to be true. In a way, the volunteer has changed the authority of the information simply by looking at it.

This is a problem if people assume that something is true because it has been “reported” by a site that intends to give them an overview of what is going on in a disaster zone.

The question is: is there a “duty of care” when relaying information? And is there a moral and/or legal responsibility when people make bad decisions based on information that you have provided in good faith but without checking its accuracy?

Volume over quality?

So far, only very few initiatives, such as the Standby Task Force, have solid procedures in place to triangulate and verify information. Obviously, verifying information takes time, and there is a case to be made for tools that can handle a high throughput of potentially rapidly changing information.

Personally, I feel that systems that have a human component should strive to gauge the accuracy of the information they publish. This might be as simple as a clear “verified”/“not verified” flag with each report and a feature to filter by that flag. Without that, I would be worried that I would accidentally mislead people who use the information I processed.
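
As a rough sketch of what I have in mind (field and function names are my own invention, not taken from any existing tool):

    from dataclasses import dataclass

    @dataclass
    class Report:
        text: str
        verified: bool = False  # stays False until a human has checked the content

    def verified_only(reports):
        # The filter feature: let users hide anything not yet verified.
        return [r for r in reports if r.verified]

    feed = [
        Report("Bridge on the main road destroyed", verified=True),
        Report("Rumor of fuel shortage downtown"),  # not verified
    ]
    for report in verified_only(feed):
        print(report.text)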

What is your opinion? Will people understand that information is just processed but not verified? Does it make a difference if you get information from humans or from machines?

  • Jakob Rogstadius (http://ufn.virtues.fi/crisistracker)

    Thank you Timo for your insightful post. We will definitely take these reflections into account when discussing how to best refine the prototype and make it ready for "real-world use".

    It may be worth clarifying that in CrisisTracker you don't work directly with curating tweets. Rather, the tool first clusters similar tweets into stories, and these stories are then the unit of information being curated. Without this first clustering step, the tool would be no different from services like Hootsuite, and manual curation of millions of tweets would be completely impossible. You are free to consider the workload to still be overwhelming, but at least clustering makes it significantly smaller.