Ari Gandolfo & Abeba Taddese, Results for All
Applications are due by May 14, 2018 via email to firstname.lastname@example.org.
The workshop will:
During the workshop, participants will seek to answer questions such as: What are the most common barriers to effective policy implementation in different government office contexts? What type of evidence is needed to unlock implementation, and how and when should it be considered? What strategies and mechanisms are governments in different countries introducing to improve and integrate evidence use in policy design and implementation? How can we learn from their experiences?
Results for All and AFIDEP invite public officials and policymakers to form a team of three or four individuals who are working together to implement a specific policy, in any sector, and who want to learn how and when to use evidence to overcome policy implementation challenges. A team must include members from at least two government ministries, departments, or agencies, and be approved by senior leadership via the signature at the end of this application. Teams from seven to eight countries will be selected to participate in the workshop.
Teams are encouraged to include:
Teams will be expected to develop a PowerPoint presentation outlining a policy implementation challenge in advance of the workshop, and to engage in follow-up activities to put their roadmaps into practice.
Teams from Brazil, Colombia, Ghana, India, Indonesia, Kenya, Malawi, Mexico, Nigeria, Philippines, Rwanda, South Africa, Tanzania, Uganda, and Zambia are especially encouraged to apply.
Over the last few months, my team at Results for All has been engaged in consultations to assess the demand for a new global evidence network that could bring government policymakers together to exchange innovative ideas and learn from each other to advance evidence use in policymaking.
We have spoken with policymakers in government, members of the research and academic community, and non-governmental partners and initiatives in countries including Colombia, Chile, Finland, Nigeria, South Africa, Tanzania, Uganda, and Zimbabwe, among others. In every conversation, we heard about the importance of building or shifting the culture of evidence use. While we expect organizational culture to differ across contexts, we observed an interesting tendency in the policymaking community to speak about culture and evidence use in a way that suggested some universality across policy areas and levels of government. We also noted that, in the context of evidence use, culture was often described in broad and vague terms, such as “the culture is not developed enough,” “there is no culture of producing data,” or “mid-level technocrats have a lot of influence, and the ability to shift government culture.”
We are curious about the notion of an evidence use culture in government, and believe we must understand this culture better in order to identify strategies that strengthen evidence use.
What is culture?
The challenge in understanding what a culture of evidence use in government looks like begins with the definition of culture itself, a term with many meanings. The first of Merriam-Webster’s six definitions describes culture as a set of attitudes, values, goals, and practices shared across an institution or organization. Matsumoto et al. suggest that while attitudes, values, and goals can be shared by a group, they can also be differentiated at the individual level.
This practical guide on changing culture, developed by Bloomberg Philanthropies’ What Works Cities initiative, offers a definition of culture that gets at norms: “culture is the difference between what you tolerate and don’t tolerate.” According to the guide, culture embodies interactions between the different elements of a system, such as people, beliefs, values, and attitudes. It both shapes and depends on an organization’s knowledge, processes, and systems. It is not a singular thing – an individual or organization can be defined by multiple cultures. And it is both learned and a legacy that can be shaped over time. These conflicting and dynamic elements are what make culture hard to define.
Levels of culture
To understand culture as it relates to evidence use in government, it is helpful to explore the different levels at which culture presents itself in an organization: artifacts, values, and assumptions, captured in a helpful visual here.
The visible and tangible elements of an organization are its artifacts. They are what you see when you walk into an office – desks, chairs, computers, plants, and filing systems. Reports, briefs, databases, and knowledge management systems are also types of artifacts. Artifacts can give a sense of office culture – we might, for example, assume that a brightly colored office with an open floor plan has a creative mission, and sense entrenched bureaucracy in a dark, traditionally furnished office. Or we might expect an office with the technology for collecting and storing data to routinely use evidence to inform policy and programs.
Yet these visual cues about an office’s culture may be misleading if we do not understand the organization’s values and the underlying assumptions that drive the daily work of its leaders and employees. For example, a government office may have the relevant evidence artifacts, such as a knowledge management system or evaluations, but lack shared values to guide and encourage evidence use in decision making. And even when tangible artifacts exist and a government office publicly articulates the value of using evidence in policymaking, if the underlying assumption is that using evidence is too costly or time consuming, the office is unlikely to translate its artifacts and values into systematic use of evidence in policy decisions. The challenge is that it can be hard to uncover hidden assumptions – feelings, perceptions, thoughts, or beliefs – that shape an organization’s visible artifacts and values. Artifacts and values can also be disconnected and even contradictory, most noticeably in government when the financial commitments needed to support desired policies or policymaker behavior do not line up with a government’s stated values.
In the context of evidence-informed policymaking, it is important to build artifacts – the systems and processes governments need to ensure evidence is appropriately sourced and used to inform strategic thinking, policy development, implementation of policy options, and monitoring and evaluation. It is also critical to build and instill a shared, publicly expressed value in using evidence. But to influence behavior change and shift attitudes about evidence use, it is imperative that we consider the basic assumptions that guide how work is done and decisions are made. When what we say (reflecting values) does not align with how we behave (building and using artifacts), it is a sign that we need to dig deeper to understand the assumptions that govern our behavior.
What should governments do to strengthen underlying assumptions and shift the culture toward evidence use?
It takes time and intentional effort to build or change the evidence culture in government. And to truly do so, we will need to scratch beneath the surface to investigate the underlying assumptions that influence whether individuals and organizations actually use evidence in their work. These assumptions determine whether values become empty statements and artifacts gather dust or, ultimately, whether evidence use becomes a cultural norm.
Abeba Taddese is the Executive Director of Results for All, a global initiative of Results for America.
What to Read this Month
Enhancing Evidence-Informed Decision Making: Strategies for Engagement Between Public Health Faculty and Policymakers in Kenya | Nasreen Jessani et al, Evidence & Policy
“It would behove policymakers in the Kenyan government to lead the change with respect to outreach and inclusion of academic researchers in policy deliberation.”
This article explores the interactions between academic knowledge brokers and health policymakers, and concludes that a combination of personal relationships and institutional partnerships is needed to facilitate engagement.
Is Results-Based Aid More Effective than Conventional Aid? Evidence from the Health Sector in El Salvador | Pedro Bernal et al, Inter-American Development Bank
“Using a difference-in-difference approach and national health systems data we find that preventive health services increased by 19.8% in conventional aid municipalities and by 42% in RBA [results-based aid] municipalities compared to national funds, suggesting that the results-based conditionality roughly doubled aid effectiveness.”
This study is one of the first to measure the effects of results-based financing on the quality of public services delivered by municipal governments.
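For readers curious about the method behind estimates like these, the sketch below illustrates the basic difference-in-differences comparison in Python. All numbers, labels, and variable names are invented for illustration; the study itself uses national health systems data and a full regression framework.

```python
# A minimal difference-in-differences sketch with invented numbers, to
# illustrate the comparison behind estimates like the study's; the actual
# paper uses municipality-level panel data and a regression model.

# Hypothetical mean preventive-service coverage rates, before and after aid.
means = {
    ("rba", "before"): 0.50,      # results-based aid municipalities
    ("rba", "after"): 0.71,
    ("control", "before"): 0.52,  # comparison municipalities
    ("control", "after"): 0.55,
}

# Each group's change over time.
change_rba = means[("rba", "after")] - means[("rba", "before")]
change_control = means[("control", "after")] - means[("control", "before")]

# The difference-in-differences estimate nets out the shared time trend.
did = change_rba - change_control
print(f"Change in RBA group: {change_rba:+.2f}")
print(f"Change in control group: {change_control:+.2f}")
print(f"DiD estimate of the RBA effect: {did:+.2f}")
```

With these made-up numbers, the treated group improves by 0.21 and the comparison group by 0.03, so the estimated effect is 0.18; the point is the mechanics of netting out a shared trend, not the figures themselves.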
Policy Relevant Evidence Maps: A Method to Inform Decision Making in the Public Sector (Webinar) | Carin van Zyl & Laurenz Langer, GESI
“DPME’s policy-relevant evidence maps follow a rigorous and transparent research process.”
This webinar outlines the steps used in South Africa’s Department of Planning, Monitoring, and Evaluation (DPME) to construct an evidence map to inform decision making in the area of human settlements, and how the process can be adapted elsewhere.
The What Works Network Five Years On | UK Government
“We have hugely talented public sector leaders, but we can still do more to make the best evidence available to them, and to ensure that the time and money invested in our public services are used to the best possible effect.”
In the last five years, the ten What Works Centres in the UK have produced or commissioned 288 evidence reviews used to improve public services; other activities highlighted in this report include publicizing evidence gaps, creating evidence comparison toolkits, and conducting evidence use audits of government departments.
Increasing the Use of Data and Evidence in Real-World Policy: Stories from J-PAL’s Government Partnership Initiative | Samantha Carter & Claire Walsh, J-PAL
“When governments decide to use data and evidence to improve policy, the results can be powerful.”
During its first two years, J-PAL’s Government Partnership Initiative (GPI) has supported 28 partnerships in 15 countries, helping to scale up effective programs and improve systems for data and evidence use.
Unsure which version of a program works best? Check out this A/B testing tool from ideas42
“A/B testing works by using an experimental procedure that provides different versions of parts of a program – such as a letter, a web site, or step in the process – to people at random. Statistical analysis can confirm which version is working better and by how much.”
Businesses constantly experiment with A/B testing to refine products and services and how they are marketed; increasingly, governments are using the strategy to identify which of two service options citizens prefer, and how to communicate messages and encourage behavior change. This tool can help you prepare the two versions you want to test, decide what to measure, randomly assign the versions between two groups, and analyze the results to determine which version works best.
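To make the randomization and analysis steps concrete, here is a minimal sketch in Python of one common approach, a two-proportion z-test comparing response rates between the two groups. The group sizes, response counts, and names are all invented for illustration and are not part of the ideas42 tool.

```python
import math
import random

# Hypothetical A/B test: randomly assign 2,000 recipients to one of two
# letter versions, then compare response rates. All numbers are invented.
random.seed(42)
recipients = [f"person_{i}" for i in range(2000)]
random.shuffle(recipients)
group_a, group_b = recipients[:1000], recipients[1000:]

# Suppose we observe these response counts after the mailing (illustrative).
responses_a, responses_b = 112, 148

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two response rates."""
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (success_b / n_b - success_a / n_a) / se

z = two_proportion_z(responses_a, len(group_a), responses_b, len(group_b))
lift = responses_b / len(group_b) - responses_a / len(group_a)
print(f"Version B lift: {lift:.1%}, z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```

With these illustrative counts, version B's 3.6 percentage point lift clears the conventional significance threshold, which is the kind of conclusion the tool's analysis step is designed to support.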