Summary of the results of a study on the role of scientific, peer-reviewed evidence in humanitarian decision-making

Added November 19, 2018

Authors: Dell Saulnier (Karolinska Institutet [KI]) and Claire Allen (Evidence Aid [EA])

Evidence Aid (EA) and researchers at Karolinska Institutet (KI) conducted a joint study on the role of evidence in decision-making for humanitarian response. The study used an online survey to gather information on how, when, and why decision makers in humanitarian response use scientific, peer-reviewed evidence to make decisions.

Methods

The study was an online cross-sectional survey, open from 20 August to 15 October 2018. The survey was developed jointly by EA and KI. It comprised 15 questions, both open- and closed-ended, covering the participants’ demographics, their experience and role in humanitarian response, and their access to and use of evidence. The survey asked specifically about research evidence, which was defined as ‘information from research studies, done for the purpose of answering a question and in a systematic and transparent way, that often includes examination of its quality by experts who were not involved in the research’.

The survey was conducted through the online database REDCap. A description of the study and a link to the survey were shared through EA’s social media channels (Facebook, LinkedIn and Twitter), website, and the mailing lists of EA and KI (including, but not limited to, HIFA and Disaster_Outreach). Participants self-selected from the audiences approached and consented to take part in the study before beginning the survey. All respondents and their responses were anonymized by REDCap during the survey. After the survey closed, descriptive statistics were used to analyze the data from the closed-ended questions and content analysis was used for the answers to the open-ended questions. The preliminary results of the analysis are presented below. A final analysis by subgroup is planned for early 2019, followed by a full report.
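As a rough illustration of the kind of descriptive analysis used for the closed-ended questions, the minimal sketch below tabulates counts and percentages for one question. The file name “survey_export.csv” and the column name “role” are assumptions made for the example; they are not the study’s actual REDCap export or field names.

    # Minimal sketch (Python/pandas) of the descriptive statistics described above.
    # Assumption: the closed-ended responses have been exported to a CSV file with
    # one row per respondent; "survey_export.csv" and the "role" column are
    # hypothetical placeholders, not the study's actual REDCap field names.
    import pandas as pd

    responses = pd.read_csv("survey_export.csv")

    # Count respondents per role among those who answered the question,
    # and express each count as a percentage of those answers (as in Table 1).
    role_counts = responses["role"].value_counts(dropna=True)
    role_percent = (role_counts / role_counts.sum() * 100).round(1)

    summary = pd.DataFrame({"Number": role_counts, "Percent": role_percent})
    print(summary)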

Preliminary results

Forty-seven people took part in the study, although some answered only some of the questions. The average age of participants was 46 years, with an average of 12 years of experience of working in the humanitarian sector (range: 0–45). Men made up 54% of the respondents, and 91% of the respondents held a Master’s degree or higher.

Most respondents worked in Europe or North America, mostly as humanitarian response program directors or managers, independent consultants, or policy makers (Table 1). None reported that they made decisions whilst carrying out a funding role. All but two respondents reported that they had access to scientific research evidence through their workplace. Research evidence (as defined) was used by over half the respondents at multiple points when making decisions about the content of humanitarian response programs, although less frequently when bidding for funding or when funding programs.

Across the respondents, multiple types of information were considered to be evidence, from “everything” to research, experimental studies, secondary data or literature syntheses, organizational reports, grey literature, and seminars. Four respondents felt that, to be usable for humanitarian response decisions, evidence had to be contextually relevant: applicable to a program, context or population, preferably with recommended solutions or results to support effectiveness or implementation. Value was placed on evidence that took information from the field into account or validated a field experience, although one respondent gave ‘staff know best’ as a rationale for not using evidence in decision-making, echoing a similar statement made during a needs assessment carried out by EA in 2011.

Personal assessment of the quality of the information and trust in the source were equally important in deciding whether information counted as evidence. Personal assessment of quality focused on practices associated with good scientific research, most often transparent and high-quality design, methods and analysis, external evaluation and triangulation, and credibility, validity, clarity, and congruence between results and conclusions. Beyond assessing how a study was conducted, respondents used their trust in the source to help them decide what was good-quality evidence. Accurate and factual data were noted as important reasons to trust a source, but personal trust in peers, experts, partners and collaborators, and organizations also helped respondents decide what counted as evidence.

As for why respondents used evidence when making decisions about humanitarian response, four main reasons were described in the replies to an open-ended question. First, using evidence was generally considered best practice when making decisions. It allowed decision makers to use aid effectively and to maximize their impact, making program content good quality and relevant to the context and population.

Second, using evidence reassured decision makers that they were making the right decisions. Evidence helped decision makers to understand a situation and feel informed, and then to support, justify, and validate choices and recommendations. Evidence also acted as a basis for reassuring others, by enabling decision makers to advocate for support and for the rationale of their program content.

Third, personal and organizational values influenced respondents’ use of evidence. Evidence was felt to contribute to decisions that would be of the most benefit to beneficiaries, and three respondents described evidence as a core value and an ethical obligation for decision-making. Two respondents expressed personal beliefs about the use of evidence, ranging from it being the best way to make decisions to the view that it “is not required to use [evidence to support decisions]”.

Finally, five respondents used evidence as a tool for protection: to protect beneficiaries, by avoiding poor-quality decisions or program content that might lead to harm, and to protect the reputation of the decision maker or their organization, whether during review or auditing processes, by avoiding judgement from others for not using evidence, or by being seen as credible.

Table 1. Summary of the respondents

Country of work (n=46)
  North America – Canada, USA: 16 (34.8%)
  Europe – Greece, Ireland, Italy, Switzerland, Turkey, UK: 14 (30.4%)
  Africa – Cameroon, Eritrea, Kenya, Lesotho, Rwanda, South Africa, Tunisia, Uganda, Zimbabwe: 10 (21.7%)
  Multiple countries: 4 (8.7%)
  Asia – India, Japan: 2 (4.4%)

Role as decision-maker (n=45)
  Program director/program manager: 16 (35.6%)
  Independent consultant: 11 (24.4%)
  Policy maker: 8 (17.8%)
  Top level management: 2 (4.4%)
  Donor: 0 (0%)
  Other (researcher, coordinator, implementation): 5 (11.1%)
  None: 3 (6.7%)

Access to research evidence at work (n=38)
  Access to paid and free scientific journals or databases: 23 (60.5%)
  Access to free scientific journals or databases: 13 (34.2%)
  No access to scientific journals or databases: 2 (5.3%)

Evidence used during (n=46)
  Identifying priority areas: 30 (65.2%)
  Doing needs assessments: 28 (60.9%)
  Designing programs or response: 28 (60.9%)
  Implementing programs or response: 26 (56.5%)
  Evaluating programs or response: 25 (54.3%)
  Monitoring programs or response: 24 (52.2%)
  Bidding for funds: 20 (43.5%)
  Funding programs or response: 13 (28.3%)
  Other (writing reports, research, teaching and training): 4 (8.7%)
  None of the above: 1 (2.2%)