Humanitarian practitioners shouldn’t aim to copy evidence-based medicine
Photo credit: Peter Biro/IRC
Author: Rick Bartoldus, Evidence to Action Officer, International Rescue Committee
We’re at a special point in the history of evidence use in humanitarian and development work: donors, policy-makers, and implementers now consistently talk about the importance of evidence in decision making, including during the Humanitarian Evidence Week organised by Evidence Aid.
At this point, we need to focus on what the future will look like for evidence use. In particular, we need to find new approaches to asking the “when, why, for whom, and how” of effective interventions. To do this, we need to work together to build an evidence ecosystem that supports meaningful use of evidence across all of our organizations.
Why don’t we just copy the medical field?
Over 3 years ago, the International Rescue Committee made a public commitment to using evidence consistently in our work, and developed a dedicated Evidence to Action Team to support these efforts. At first, our goal was to learn as much from evidence-based medicine as possible, and to apply these lessons to our work. We were not unique in this regard: evidence-based medicine is one of the best success stories of evidence use, and is commonly cited as a gold standard.
In general, lessons from evidence-based medicine are often applied to development and humanitarian work as some version of a supply-and-demand metaphor: focus on increasing the supply of evidence, increasing the demand for evidence, and making it as easy as possible for producers to connect to consumers. Often this is paired with a call to action for more ‘rigor’ in decision-making. Unfortunately, there are a few features of humanitarian work that make it difficult, and even misleading, to apply these lessons – and the tools attached to them – directly to our work.
“Did it work?” vs. “When, why, for whom, and how does it work?”
Research methods and tools developed in the medical field evolved to estimate simple causal chains as effectively as possible – if I give this person this pill, do they get better? Unsurprisingly, when these tools get applied to messy and context-affected interventions, problems arise. Impact evaluation research (of which randomized controlled trials are one type) is a great example. This isn’t to say that impact evaluations are bad (at the IRC we strongly believe in their importance, and even help produce them), but there are many issues with the way that impact evaluations tend to be conducted in our work.
In particular, impact evaluations developed for humanitarian and development work tend to treat long causal chains (go to a soft skills training and then, 12 steps later, get income, empowerment, etc.) the same as relatively short causal chains (get medicine, get better). While they can confidently say to what degree the intervention caused a change in the outcome (or failed to cause a change), they often cannot tell you why (or why not). The proliferation of these types of evaluations leads to a situation in which there is more and more information about what has worked in specific contexts, but a lack of knowledge about when, why, for whom, and how the interventions work in general.
We can do better – but what happens then?
As many researchers have noted, well-built impact evaluations can be used to answer questions of when, why, for whom, and how, but such evaluations are currently uncommon. We certainly need to build better evidence – but if we had it, would we even be prepared to use it?
Most advice about using evidence in our work focuses on learning whether an intervention ‘works’ or not. Similarly, organizations that build large universal portals for evidence tend to focus on highlighting whether or not a given intervention ‘worked’ for an outcome. At the moment, neither of these is fully set up to answer questions of when, why, for whom, or how. Even if better evidence were common, we wouldn’t be set up as a sector to use it consistently.
Building infrastructure that can support better evidence
So what would the future look like? There are promising examples, but they are quite young. For example, the BRIDGE Collaborative has just completed a cross-sectoral practitioners’ guide for applying evidence in a way that goes beyond “what works” (and they want you to test it). Similarly, the IRC has developed an Outcomes and Evidence Framework that embeds research results within theories of change, so that you can always engage with research in the context of larger theories.
Promising examples are not enough, though, so how do we continue building toward the future? Right now, our best bet is to collaborate and share more. In particular, we need to move away from the ‘supply and demand’ lessons of evidence-based medicine, and instead move towards ecosystem models of evidence. In ecosystem models, we blur the lines between producers and consumers of evidence, and instead ask which groups fill which functions to ensure that evidence is synthesized, simplified, and shared with the right people at the right time. This framing suggests a few lessons for collaborating and moving forward, some of which we’ve highlighted here.
What we all need to do
The first step is getting to know who’s in your ‘evidence ecosystem.’ Reach out to major organizations that act as bridges between producers, synthesizers, and users of evidence, such as the Campbell Collaboration, International Initiative for Impact Evaluation, ALNAP, Evidence Aid, and BRIDGE. One way to do this is to attend conferences like Evidence Week – and actually follow up with the people you meet!
The second step is learning how to reduce duplication of effort between your organizations – what tools, resources, and databases can you latch onto to make your work easier? Also, avoid making new tools if similar ones exist. Instead, see if you can work with the makers of similar tools to adapt or improve them. Lastly, make sure to connect your colleagues (both inside and outside of your organization) to existing resources.
The third step is making sure we put as much into the evidence ecosystem as we take out. If you are a funder, find ways to support intermediary organizations or collaboration between groups. If you are an implementing organization, share as many of your ideas, tools, and data as possible. At the IRC, we’ve found that the benefits of radical openness can greatly outweigh the costs – which is why many of our key evidence tools are open source and publicly available.
While it will be a long journey, by working together and improving collaboration between our groups, we can build toward a world in which evidence can consistently inform the important questions of our work.
Rick Bartoldus is an Evidence to Action Officer at the International Rescue Committee. In his current position, he helps humanitarian technical staff use research evidence in their work by conducting training on IRC’s evidence products and writing evidence reviews. He also manages the production and maintenance of the IRC’s Evidence Maps and Interactive Outcomes and Evidence Framework, resources that aim to make high-quality and relevant research evidence more accessible for humanitarian practitioners. He holds an M.A. in Development Economics with a certificate in gender analysis and a B.A. in International Service, both from American University.