How the new Twitter API rules might affect humanitarian response

Angry birds. Photo: Rosaura Ochoa

Twitter’s new restrictions on how tweets can be used might affect whether you can use Twitter during the next emergency. Here is why.

What is an API and why do I care?

An API is a communications protocol that lets different pieces of software talk to each other and exchange data. Foursquare and Yelp, for example, use APIs to overlay their information on Google Maps. Basically, any time somebody builds a service on top of somebody else’s data, an API is involved. The Twitter API enables people to take tweets from Twitter and re-use them in other programmes or on other websites.
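To make this concrete, here is a minimal sketch of such an API call in Python. The endpoint is Twitter’s v1.1 tweet search; the bearer token is a hypothetical placeholder, since a real client first has to register an application and go through Twitter’s full authentication flow.

```python
import requests  # third-party HTTP library

# Hypothetical placeholder -- obtaining a real token requires
# registering an application and authenticating with Twitter.
BEARER_TOKEN = "YOUR-TOKEN-HERE"

def search_tweets(query, count=10):
    """Fetch recent tweets matching a query via the Twitter search API."""
    response = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": query, "count": count},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    response.raise_for_status()
    return response.json()["statuses"]

# e.g. pull tweets about an ongoing emergency into your own tool
for tweet in search_tweets("#earthquake"):
    print(tweet["user"]["screen_name"], tweet["text"])
```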

What are the changes?

Twitter has recently announced that it will start enforcing certain rules for the use of its API, some of which have existed for a while, while others are new.

The three rules that are debated most hotly are:

  1. Services that reach 100,000 users now have to negotiate directly with Twitter if they want to continue to use the API.
  2. Services that access Twitter may only request information 60 times per hour, down from 350 times per hour.
  3. Any application that displays tweets also needs to show the Twitter user’s avatar, username, a link to the Twitter profile and include standard Twitter actions such as reply, retweet and favourite.

There is a good article in the New York Times that explains why Twitter is doing all this. In short: to make money and to reduce the load that third-party providers put on the Twitter infrastructure.

What is the effect on humanitarian response systems?

A lot of web services that use social media for operational awareness could be affected by these changes. In the worst case, your favourite service could stop working when you need it most – for example when you want to use social media to find out what the situation is like after an earthquake or a flood.

I don’t think we have to worry about the 100,000-user limit, and I’m not even overly worried about the limit of 60 API calls per hour. After all, few people can process information faster than that anyway. Besides, some projects like Ushahidi and Geofeedia have already said that they will take this into account and limit the number of API calls in their next releases in order to comply with the rules.
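For illustration: staying under such a cap is easy to enforce on the client side. The sketch below simply spaces polls at least one minute apart; fetch and handle are hypothetical stand-ins for whatever request a monitoring tool makes and whatever it does with the result.

```python
import time

MIN_INTERVAL = 3600.0 / 60  # 60 calls per hour = one call per minute

def rate_limited_poll(fetch, handle):
    """Call fetch() at most once per MIN_INTERVAL seconds, forever."""
    last_call = 0.0
    while True:
        wait = MIN_INTERVAL - (time.time() - last_call)
        if wait > 0:
            time.sleep(wait)  # stay under the hourly budget
        last_call = time.time()
        handle(fetch())
```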

What really worries me are the display rules, because many tools that are useful for disaster responders do not follow them. Below is a screenshot of a tweet as it looks in Ushahidi. As you can see, it only includes the message, the name of the author and a link to the author’s profile, but no actions like “retweet” and “reply”. Fortunately, when I contacted Ushahidi in preparation for this post, they said they were already working on it.

Tweet in Ushahidi

Very bare bones: a tweet in Ushahidi.

The same applies to Geofeedia, which doesn’t display Twitter actions alongside the tweets, either.

A tweet in Geofeedia

A tweet in Geofeedia. As you can see, it does not include all the elements necessary to comply with the Twitter display guidelines.

Potentially, this could even affect websites like the ICRC Twitter Dashboard (in public beta). While Gael Hurlimann, the head of the ICRC online unit, pointed out to me that they only use RSS feeds for their dashboard, I’m afraid this path is fraught with peril as well: Twitter has actively discouraged people from accessing Twitter through RSS since last year (see this piece in The Next Web from June 2011). The reason is simple: if you view Twitter via RSS, you don’t see Twitter’s ads, and earning money is what this is all about.

And of course there are many other small initiatives and projects like Crisis Tracker, Sirenius etc. that might now have to change significantly in order to comply with the new rules.

Tweets in Crisis Tracker

The way tweets are displayed in Crisis Tracker doesn’t meet the display guidelines, either.
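For comparison, here is a rough sketch of the elements a compliant display would have to include: the author’s avatar, username, a link to the profile, and the standard actions. The action links use Twitter’s Web Intents URLs; the surrounding markup is illustrative, and the field names follow the structure of a tweet object as the API returns it. This is a sketch, not a definitive implementation.

```python
# Illustrative HTML template covering the elements the display
# guidelines ask for: avatar, username, profile link and actions.
TEMPLATE = """
<div class="tweet">
  <img src="{avatar}" alt="avatar">
  <a href="https://twitter.com/{screen_name}">@{screen_name}</a>
  <p>{text}</p>
  <a href="https://twitter.com/intent/tweet?in_reply_to={id}">Reply</a>
  <a href="https://twitter.com/intent/retweet?tweet_id={id}">Retweet</a>
  <a href="https://twitter.com/intent/favorite?tweet_id={id}">Favourite</a>
</div>
"""

def render_tweet(tweet):
    """Fill the template from a tweet dict as returned by the API."""
    return TEMPLATE.format(
        avatar=tweet["user"]["profile_image_url"],
        screen_name=tweet["user"]["screen_name"],
        text=tweet["text"],
        id=tweet["id_str"],
    )
```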

Don’t panic – there is good news

The good news is that Twitter currently appears to be revoking API access manually and on a case-by-case basis, rather than through an automated process. This means they are concentrating on the services and applications that generate noticeable load on the Twitter servers, and most (all?) humanitarian tools simply don’t have enough users to fall into that category. Secondly, Twitter would have to take an active decision to cut off a tool that is used for humanitarian purposes – and that is extremely unlikely, even when these tools violate the API rules.

However, it could be a problem if you use a commercial product that doesn’t follow the rules, since API access would be revoked for the whole service and not just for individual users.

“All your data are belong to us”

Even if the new Twitter API rules don’t mean that your social media monitoring tools fall silent during the next big emergency, this discussion shows just how dependent we have become on a very small number of ICT service providers, and that changes which appear small and technical can have far-reaching consequences.

It’s time we start caring about these issues.
