I find that most social media studies in the humanitarian sector fall into two categories: those that aim to directly inform programming or policy, and those that are mainly curious to see what they will find. DFID's "Using Social Media for Research, Monitoring and Evaluation in the MENA Region: World Food Programme Case Study" falls firmly into the "let's look and see" category. This is not a bad thing, but it somewhat dampened my enthusiasm as I read the case study.
For the report, DFID looked at how news of WFP's reduction and cancellation of food deliveries for Syrian refugees spread across Twitter in 2014/15. To do that, researchers from the University of Cardiff loaded 24,000 tweets into Cosmos, a free-for-research data analysis tool developed by the university. They then ran frequency analysis, network analysis and similar methods over the sample and tried to classify sentiment, topics and locations of origin. This is all good practice, but it falls short of what I would expect from a report like this. Here is why:
- They essentially looked at how press releases and a high-profile action by a UK-based football player propagated through Twitter. The result, not surprisingly, was that the BBC, 60 Minutes and WFP Twitter accounts were the most important for spreading that information.
- Yahoo! PlaceFinder, the tool they used to identify the locations of Twitter users, can interpret information in 10 languages, but not Arabic. This makes it even harder to separate messages posted by people in the MENA region from those posted outside it.
- The query was very technical and used institutional language rather than the way ordinary people write on Twitter. It would find messages like "WFP suspends food provision for Syrian refugees", but not "UN cuts food deliveries. How are we going to survive?". From the report, it looks like the Arabic queries were even more technical, including terms like "food crisis" and "nutritional collapse".
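To make the query problem concrete, here is a small, purely hypothetical sketch (this is not the report's actual query, and the keyword lists are my own invention) of how a narrow, institutional-phrasing filter compares with a looser keyword match on the two example tweets above:

```python
import re

# Two example tweets: press-release wording vs. colloquial wording.
tweets = [
    "WFP suspends food provision for Syrian refugees",
    "UN cuts food deliveries. How are we going to survive?",
]

# Narrow filter: requires the institutional phrasing almost verbatim.
narrow = re.compile(r"\bWFP\b.*\bsuspends?\b.*\bfood provision\b", re.IGNORECASE)

# Broader filter: any actor term, plus any cut/suspend term, plus "food".
# These word lists are illustrative assumptions, not the study's terms.
actors = {"wfp", "un"}
verbs = {"suspend", "suspends", "cut", "cuts", "halt", "halts"}

def broad_match(text: str) -> bool:
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & actors) and bool(words & verbs) and "food" in words

narrow_hits = [t for t in tweets if narrow.search(t)]
broad_hits = [t for t in tweets if broad_match(t)]
```

The narrow pattern catches only the press-release tweet, while the broader keyword match catches both. A real study would of course need more careful query expansion (and Arabic-language terms), but even this toy version shows how much the choice of vocabulary shapes what a Twitter analysis can see.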
Obviously, the approach to an analysis depends on who you are doing it for. But with this approach, I think only the media department would really benefit from the analysis. I do not think it is helpful for monitoring or evaluating programmes, as the title suggests; for that, it would need to include voices from a much wider range of people. I know that operational social media analysis is not easy (see my case study from Nepal), but to me it feels like they did not try hard enough for this report.
In all fairness, when I contacted DFID, they said this was more a pilot, a way to dip their toes into the water of social media analysis, than a rigorous research project, and that it was primarily meant to familiarise themselves with the opportunities and challenges of social media analysis. So my expectations were probably too high.
However, I have simply seen too many "let's look and see" projects, and I wish more projects would tie social media analysis to concrete operational or policy needs during the design phase. After all, how much can we really learn from projects that only look at the things that are easy to analyse, rather than the things that could improve our work?
What are your thoughts? Please share them below!