Research
12 December 2017

Can data technology revolutionise humanitarian action?

For Prof. Taithe, enthusiasm for the data vastly outstrips the capacity to meaningfully analyse it.


This common and promising assumption is the subject of an article co-authored by Bertrand Taithe, Visiting Professor in the International History Department of the Graduate Institute and Professor of Cultural History at the University of Manchester. Informed by a historical perspective, as well as by interviews and other materials suggesting that some of the claims made on behalf of technology are exaggerated, the article argues that the enthusiasm for the data vastly outstrips the capacity to meaningfully analyse it. Interview.

In this article, you pose a challenge to the “promise of technology to revolutionise humanitarian action” with the use of data. How and why can such optimism – that borders on “technological determinism” – not only present epistemic problems but also potentially translate into problematic practices?

The article originates from research related to the United Nations and the collection of security data. It broadens the questions to interrogate the assumptions and hopes of the humanitarian and security communities. The modes of data collection by international agencies, especially in places like Darfur, were highly problematic: many such institutions claimed the data did not exist and that, if it did, it could not or should not be used. Meanwhile, it was possible to acquire this data in roundabout ways. If data cannot or should not be used, then why was so much of it collected? The larger claims that need to be highlighted here are that technological “fixes” can be applied to problems on the basis of data and that such solutions are de-politicised. This presents epistemic problems, because it distorts our approaches to solving issues and the ways we think we can solve them. In volatile contexts such as conflicts, data is not neutral and can be used in multiple ways to mobilise violence or further complicate problems among informant networks, international peacekeeping forces and local armed forces.

In practice, some of these data technologies create a climate of dissidence, paranoia and suspicion. They can also create despondency, as humanitarian workers may lose control of situations that are instead managed by security protocols devised elsewhere and liable to become counter-intuitive. The ensuing alienation may prevent them from integrating into their immediate environment and responding effectively. While technologies close the distance between the field and the desk, they are ultimately based on a doctrine of minimising risk-taking that does not actually minimise exposure to danger. Data can be summoned upwards from the field via technologies, but the absorption of this data, the production of an archive and the processing into decisions remain obscure. Feedback to the communities in a form that is useful to them is so far missing and, if anything, as interviews with refugees in Chad have shown, only serves to create serious problems.

The article argues that the data produced for the humanitarian sector is more often “inactionable” than “actionable”. What are the implications of this and what kinds of “closures” can it generate?

Much of the data produced turned out to be unusable in terms of quality and reliability, and there was also a quantitative problem, as far more data was produced than could be used. There are also instances in which the use of data was much more ambiguous. These produce closures because the promises made are neither delivered nor as accurate as claimed. The idea of “real-time data” is hyped inordinately and does not work out in practice. The argument that data technologies can alter power relations was also found to be problematic. That said, technology – and in particular communication technology – can be helpful in some situations, such as natural disasters, and less so in other cases, involving conflict for example. The point is that sole reliance on such technologies as universally applicable solutions needs to be viewed with caution, especially as policy decisions are based on them with a disproportionate faith in their power to affect society and humanitarian work.

In what ways are we able to see “failures” of these data-driven technologies in the sector of humanitarian action? And why does the drive for technological innovation and more data production continue despite those failures?

Our research showed that the mere provision of technology and information did not automatically generate action and the distribution of social justice – especially when sociocultural relations between people and data technology were not taken into account. Our point was not to identify failures, but rather to show the limitations of the hopes on which arguments for such technologies rest. Let’s consider the historical example of the statistical calculation of casualties in the Crimean War of 1854. One could assume that, due to the seemingly “static” nature of operations in this war, where soldiers were shipped to the peninsula and back, the calculation was a simple matter of consulting shipping records and the like. Yet this proved extremely difficult to do, because the complexities of life are not accounted for in such an approach; statisticians of the time were acutely aware of these limitations. The same could be said of the ability to evaluate civilian casualties in recent wars such as the one in Iraq, where political considerations add further difficulty. Such calculations occur by way of approximations and proxies, so any claim that by using an algorithm one can bypass the complexities of life seems exaggerated – and all the more so in post hoc calculations. Proxy data is good for rhetorical purposes, such as advertising the scale of a catastrophe to generate public support or funds. In the Ebola crisis, for example, the Centers for Disease Control and Prevention (CDC) produced charts of the maximal evolution of the disease that were picked up by the media, which in turn led to some resources being promised and delivered. As long as we all know that the data was hypothetical and might serve a political purpose, then there is no problem; but if we start believing it to be the truth, then confusion might arise.

How do we historicise these trends of data and information in humanitarian action and what kinds of policy and rhetorical continuities can we find?

The history of humanitarianism shows that data collection is just one of the things humanitarians do and has always been part of such operations. Even the earliest projects produced very detailed statistics, if only to justify expenditures. This process is carried out mostly post hoc and is part of the production of knowledge, which is nothing new, but the tools and methods change. For instance, today geographic information systems (GIS) allow statistics to be converted into a map almost in real time. It is worrying when people say that data reflects or can replace the “real”. I believe the “real” is mediated and data is one of those mediations; data is thus not neutral, even if correctly collected. Historicising these trends is helpful to identify the mediated processes. One of the continuities we can find is in the language of humanitarianism today, in which recipients of relief are seen as “beneficiaries”. The term comes from the world of insurance, which relies heavily on the probabilistic analysis of risk; this analysis is based upon certain kinds of data that do not always correspond to lived realities, yet serve to generate capital for the industry.

The article builds up the case for a concept of “data hubris”. Could you elaborate more on this term, with particular reference to how we can think of technology and data in times of data-driven “truths”?

We chose this term to try to capture, in a concise yet attentive manner, the elevated status given to a dangerous way of thinking. It is in the end a modest call for the application of the kind of critical lens we use for policy and discourse analysis. Very recently, a humanitarian international organisation issued bonds to invite private capital into hospital projects, where the “success” of these endeavours may bring returns. It is interesting that behind this probabilistic game for the purpose of generating capital lies the assumption that the data coming from the field will be sufficiently good and precise to allow a neutral evaluation at the end of the scheme; it also implies that humanitarian aid can be quantified in the same way as automobiles or the products of any other industry. Sensitivity to these assumptions, and not taking them for granted, could be a way to approach such “truths”.

In the light of this discussion, is it productive for scholarship to maintain the distinctions between data, information and knowledge?

The real problem is when data is taken to be knowledge. Information is often incomplete and not triangulated, given the methodological and resource-based limitations of doing so. Yet decisions are based on incomplete information. So it has to be acknowledged that knowledge cannot be produced in real time, and this should make us a little more modest about what we do and how we do it. A study from Haiti in the aftermath of the earthquake showed the fallacious nature of data and of the claims made on its behalf. We found that the medical data on the number of amputees and the benefits provided, for instance, was shockingly poor because the enumeration was based on many changing definitions of “amputation”. On comparing the compiled medical data with interviews of the listed recipients, we found not only glaring discrepancies but also evidence of incredibly savvy patients who, even though they faced terrible injuries, were nevertheless able to manipulate humanitarian aid resources. This not only deflates claims about the accuracy or reliability of the statistical data, but also challenges the narratives that usually accompany the production of such data. These narratives deny the capacity of recipients to make choices while claiming that humanitarian providers “know best” because they are armed with data. When a person recorded as having “two crushed feet” tells you how she “walked” from one hospital to another, based on her understanding of the hierarchy of resources – say, between a Haitian hospital and an American one – the irony of a view that rests its foundation on data is exposed. A purely quantitative study would have produced a very skewed perception of this reality, whereas including a Haitian patient-centric view makes the exercise of choice and agency part of the narrative and shows that data by itself is not a representation of reality.

Full citation of the article: Read, Róisín, Bertrand Taithe and Roger Mac Ginty. “Data Hubris? Humanitarian Information Systems and the Mirage of Technology.” Third World Quarterly 37, no. 8 (2016): 1314–31. doi:10.1080/01436597.2015.1136208.

Interview by Aditya Kiran Kakati, PhD Candidate in International History