How the new Twitter API rules might affect humanitarian response

Angry Twitter birds Photo: Rosaura Ochoa

Twitter’s new restrictions on how tweets can be used might affect whether you can use Twitter during the next emergency. Here is why.

What is an API and why do I care?

An API is a communications protocol that helps different pieces of software talk to each other and exchange data. Foursquare and Yelp, for example, use APIs to overlay their information on Google Maps. Basically, any time somebody builds a service on top of somebody else’s data, an API is involved. The Twitter API enables people to take tweets from Twitter and re-use them in other programmes or on other websites.
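To make the idea concrete, here is a minimal sketch of the exchange: a client asks a service for data and gets back structured JSON, which it can then re-use in its own programme. The field names and payload below are purely illustrative, not Twitter’s actual API schema.

```python
import json

# A hypothetical JSON payload, shaped roughly like what a tweet-search
# API might return. (Illustrative field names, not Twitter's real schema.)
api_response = json.dumps({
    "results": [
        {"user": "alice", "text": "Roads flooded near the bridge"},
        {"user": "bob", "text": "Shelter open at the school"},
    ]
})

def extract_messages(raw_json):
    """Parse the API response so the tweets can be re-used elsewhere."""
    data = json.loads(raw_json)
    return [(item["user"], item["text"]) for item in data["results"]]

for user, text in extract_messages(api_response):
    print(user, "-", text)
```

In a real tool the `api_response` string would arrive over HTTP from the service’s servers; everything after that point — parsing and re-displaying the data — is what the third-party application does with the API.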

What are the changes?

Twitter has recently announced that it will start enforcing certain rules for the use of its API, some of which have existed for a while, some of which are new.

The three rules that are debated most hotly are:

  1. Services that reach 100,000 users now have to negotiate directly with Twitter if they want to continue to use the API.
  2. Services that access Twitter can only request information 60 times per hour instead of 350 times per hour.
  3. Any application that displays tweets also needs to show the Twitter user’s avatar, username, a link to the Twitter profile and include standard Twitter actions such as reply, retweet and favourite.

There is a good article in the New York Times that explains why Twitter is doing all that. In short: to make money and reduce the load that third party providers put on the Twitter infrastructure.

What is the effect on humanitarian response systems?

A lot of web services that use social media for operational awareness could be affected by these changes. In the worst case, your favourite service could stop working when you need it most – for example when you want to use social media to find out what the situation is like after an earthquake or a flood.

I don’t think we have to worry about the limit of 100,000 users, and I’m not even overly worried about the limit of 60 API calls per hour. After all, few people can process information faster than that anyway. Besides, some projects like Ushahidi and Geofeedia have already said that they will take this into account and limit the number of API calls in their next release in order to comply with the rules.
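To illustrate what “limiting the number of API calls” might look like on the client side, here is a sketch of a simple throttle that allows at most 60 calls per rolling hour. This is my own illustration, not the actual code of any of the projects mentioned; the class and parameter names are hypothetical.

```python
import time

class RateLimiter:
    """Client-side throttle: allow at most `max_calls` per `window` seconds."""

    def __init__(self, max_calls=60, window=3600.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.window = window
        self.clock = clock   # injectable, so the behaviour can be tested
        self.calls = []      # timestamps of recent calls

    def allow(self):
        """Return True if another API call may be made right now."""
        now = self.clock()
        # Forget calls that fell out of the rolling window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# With a fake clock we can watch the limit kick in:
fake_time = [0.0]
limiter = RateLimiter(max_calls=3, window=60.0, clock=lambda: fake_time[0])
print([limiter.allow() for _ in range(4)])  # [True, True, True, False]
fake_time[0] = 61.0  # a minute later, the window has rolled over
print(limiter.allow())  # True
```

A monitoring tool would call `allow()` before each request to Twitter and queue or drop requests that exceed the budget, which is roughly what complying with the 60-calls-per-hour rule amounts to.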

What really worries me are the display rules, because many tools that are useful for disaster responders do not follow them. Below is a screenshot of a tweet as it looks in Ushahidi. As you can see, it only includes the message, the name of the author and a link to the author’s profile, but no actions like “retweet” and “reply”. Fortunately, when I contacted Ushahidi in preparation for this post, they said they were already working on it.

Tweet in Ushahidi

Very bare bones: a tweet in Ushahidi.
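To see how far a bare-bones display like this is from the third rule above, here is a hypothetical compliance check. The field names are my own illustration of the required elements (avatar, username, profile link and the standard actions), not Twitter’s actual API field names.

```python
# Display elements that rule 3 says must accompany every rendered tweet.
# (Illustrative names, not Twitter's real field names.)
REQUIRED_FIELDS = {"avatar_url", "username", "profile_url",
                   "reply_link", "retweet_link", "favourite_link"}

def missing_display_fields(tweet):
    """Return which required display elements a rendered tweet lacks."""
    return REQUIRED_FIELDS - set(tweet)

# A bare-bones rendering like the screenshot above: message text,
# author name and a profile link, and nothing else.
bare_tweet = {"text": "Bridge closed", "username": "alice",
              "profile_url": "https://twitter.com/alice"}
print(sorted(missing_display_fields(bare_tweet)))
# ['avatar_url', 'favourite_link', 'reply_link', 'retweet_link']
```

Any tool whose check comes back non-empty — no avatar and no reply, retweet or favourite actions, as in the screenshots in this post — is technically in breach of the display rules.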

The same applies to Geofeedia, which doesn’t display Twitter actions alongside the tweets, either.

A tweet in Geofeedia

A tweet in Geofeedia. As you can see, it does not include all the elements necessary to comply with the Twitter display guidelines.

Potentially this could even affect websites like the ICRC Twitter Dashboard (in public beta). While Gael Hurlimann, the head of the ICRC online unit, pointed out to me that they are only using RSS feeds for their dashboard, I’m afraid this path is fraught with peril as well: after all, Twitter has actively discouraged people from accessing Twitter through RSS since last year (see: The Next Web from June 2011). The reason is simple: if you view Twitter via RSS, you don’t see Twitter’s ads, and earning money is what this is all about.

And of course there are many other small initiatives and projects like Crisis Tracker, Sirenus etc. which might now have to change significantly in order to comply with the new rules.

Tweets in Crisis Tracker

The way tweets are displayed in Crisis Tracker doesn’t meet the display guidelines, either.

Don’t panic – there is good news

The good news is that it looks like Twitter is currently taking decisions to revoke API access manually and on a case-by-case basis, rather than through an automated process. This means that they are concentrating on the services and applications that generate noticeable load on the Twitter servers, and most (all?) humanitarian tools simply don’t have enough users to fall into that category. Secondly, Twitter would have to take an active decision to cut off a tool that is used for humanitarian purposes – and that is extremely unlikely, even when these tools violate the API rules.

However, it could be a problem if you use a commercial product that doesn’t follow the rules for your work, since API access would be revoked for the whole service and not just for individual users.

“All your data are belong to us”

Even if the new Twitter API rules don’t mean your social media monitoring tools fall silent during the next big emergency, this discussion shows just how dependent we have become on a very small number of ICT service providers, and how changes that appear small and technical can have far-reaching consequences.

It’s time we start caring about these issues.

  • http://www.sirenus.net Leith Mudge

    Thanks Timo for this post.

    At Sirenus we are actively looking at the recent changes to the Twitter API rules. I think that it is likely that we will be able to achieve some sort of exemption or accommodation with Twitter for Sirenus which is specifically designed for emergency managers and disaster response organisations. You have a good point that if you are using a general tool for monitoring social media that you may strike issues with these new limits but humanitarian tools (such as Sirenus) are unlikely to have their access to the Twitter API turned off.

    • http://sm4good.com Timoluege

Thanks, Leith. I agree – it would be very bad press for Twitter if they consciously did this in the middle of a disaster and I don't think they would do it. However, it concerns me that tools like Sirenus would basically have to depend on the goodwill and tolerance of Twitter. I'm also worried that in the future some of these rules (maybe not the ones I mentioned here) might be enforced automatically rather than manually, in which case it could have negative consequences.

  • http://twitter.com/bjpaddy @bjpaddy

We currently have a twitter feed on our website homepage which has never functioned properly because of the crankiness of the existing Twitter API. During the height of future emergencies we had planned to give this block even greater prominence and to suck in updates from the front line by creating a curated list of aid workers from DEC member agencies tweeting about their work. We are now canning this idea because 1) Twitter clearly aren't planning to improve the API because really they don't want 3rd parties integrating tweets into other platforms 2) technically we'd be in breach of their guidelines so we have no confidence that any investment in trying to find ways to make this integration work for our users would be worthwhile.

  • http://twitter.com/HeatherLeson @HeatherLeson

    Thanks for your post!

  • isaacgriberg

    Thanks for the post Timo! Interesting and useful information. We'll keep you posted on the development of the ICRC's Twitter Dashboard. At the moment we're pulling in the tweets using RSS and not the API. Take care and hope to see you soon, Isaac

  • http://ufn.virtues.fi/crisistracker Jakob Rogstadius

    Thanks Timo for the post.

    At CrisisTracker the new API rules are a serious risk to the project as the platform works with clusters of tweets rather than individual messages, and clusters don't directly have personal data associated with them. We also have a very small development team (mostly me, a busy PhD student) and the new presentation guidelines are a challenge which we may not have resources to meet. This indeed means that the project can be cut off by Twitter at any time due to violations of the API rules.

    In addition, in humanitarian disaster or conflict anonymity can be a huge issue and presenting content in aggregate form is often far safer for individual sources than to present them along with full information about who posted the information, what they look like and where they can be reached.

    As a researcher, I also wonder if the API rules extend to how qualitative analysis of Twitter content can be reported in papers published in academic conferences and journals.

    • http://sm4good.com Timoluege

Thank you for your thoughts, Jakob. I certainly hope it won't come to that.