Advancing evidence-informed policymaking in Africa: The role of peer learning networks — October 4, 2018



The concept of peer learning has its roots in the classroom and is best described as a reciprocal sharing of knowledge, ideas, and experiences. Outside of a classroom context, peer learning-focused exchanges can take place through formal or informal networks of groups or individuals who have a shared history or purpose. Peer learning networks are not a new phenomenon, but recent years have seen a rise in learning-focused networks that facilitate a sharing of knowledge, tools, resources, and ideas among government policymakers working to advance national development priorities in Africa. This new crop of peer learning-focused networks is symbolic of a broader shift from expert-driven learning approaches to country- and problem-driven learning agendas. The members of these networks are typically drawn together to advance a common objective or goal, for example, promoting open government (Open Government Partnership), improving public financial management (Collaborative Africa Budget Reform Initiative), advancing universal health coverage (Joint Learning Network for Universal Health Coverage), strengthening evaluation systems (Twende Mbele), or increasing knowledge sharing in evidence-informed policymaking (Africa Evidence Network). The broad appeal of these peer networks is the learning that takes place among equals – there is an appreciation and expectation that everyone has something to share and learn – in a space that strengthens social trust and promotes tacit knowledge exchange and practical learn-by-doing approaches.

Peer learning networks and evidence-informed policymaking

In evidence-informed policymaking, decision-makers use the best available evidence to inform government policy and programmes. Evidence can be generated by research such as evaluations and rigorous studies; it can also include contextual evidence drawn from analysis of surveys and administrative data, or experiential evidence based on feedback received from citizens. Although the way the key elements of the policy process are often described – typically some version of agenda setting, policy formulation, implementation, and monitoring and evaluation – suggests a rational and linear process, in reality policymaking unfolds as a complex and messy process involving many different actors. Importantly, beyond evidence, policymaking is influenced by the political, social, and economic context in which decisions are made, such as the degree of openness in government, the pattern of election cycles, the level of citizen participation, and the freedom of journalists.

Networks that support practical learning and the sharing of experiences are particularly suited to the uncertain, complex, and messy dimensions of the policy process, where there is no predefined one-size-fits-all solution to addressing a policy challenge. By facilitating a sharing of lessons learned, ideas and accomplishments in a space that builds trust and a deep sense of community, networks targeting decision-makers and the policy process have the potential to: 1) foster an openness to new strategies and approaches for advancing evidence use, put forth by trusted government peers who are regarded as equals; 2) deepen ownership of and commitment to evidence practices in respective government offices, engendered by belonging to a community of supportive peers who are grappling with similar challenges in integrating evidence into policy; and 3) spread, accelerate and normalise good practices for evidence use in government among members and their institutions.

Peer learning networks appear to take two complementary approaches to strengthening evidence use in government: building champions, and supporting systems change at the organisational and institutional levels. Both are needed to advance the use of evidence in policy.

Building champions

The practical learn-by-doing and problem-based approaches of a peer learning network can help to build policymaker knowledge, skill, confidence, and motivation. Policymakers who are confident in their ability to find, appraise, and use evidence, and who understand the complexities of the policymaking process, are more likely to champion and use evidence in decision-making. Through a network’s ability to function as a platform for sharing ideas, policymakers can be exposed to new ways of thinking that encourage a shift in government culture towards greater evidence use. Network members can become advocates for evidence-informed policymaking, persuading and inspiring others to become better and more systematic at using evidence to inform the decisions that affect the lives of their citizens. For instance, with its new Africa Evidence Leadership Award, the Africa Evidence Network is doing just this: spotlighting the work of champions who are committed to strengthening the use of evidence in policy, to raise awareness about evidence-informed policymaking across the continent. Peer pressure can also serve as a positive motivational force for policymakers to become better at finding and using evidence in policy and to bring new ideas and approaches to their work.

Supporting systems change

The knowledge gained from interactions with peers in a network can inspire policymakers to introduce new government systems and platforms to support evidence use. For example, through participation in the Evidence-Informed Policy Network (EVIPNet), the Ministry of Health in Malawi launched a Knowledge Translation Platform to improve the quality and accessibility of health research and strengthen partnerships between policymakers and the research community. In Sierra Leone, the Africa Cabinet Government Network (ACGN) supported the development of a new Cabinet Manual that requires ministries across the national government to provide evidence to support policy proposals. Policymaker participation in a network can also spur policy reform, such as the revision of Ghana’s national health insurance policy to align Primary Health Care with Universal Health Coverage, spearheaded by practitioners in Ghana who participate in the Joint Learning Network for Universal Health Coverage.

The power of peer learning networks in Africa

While peer learning networks alone cannot address the many constraints policymakers face in generating, sharing, and using evidence, they deserve a featured place in the toolbox of promising approaches for accelerating the spread of evidence practices in governments across Africa. The evidence champions that a network cultivates can play a powerful role in demonstrating and promoting awareness of the value of using evidence, and in advocating for improved evidence use in policy. But as others have noted, it is hard for these champions to translate knowledge into action without organisational systems, institutional leadership, and guidelines to incentivise and govern the use of evidence in policy. In the nascent field of evidence-informed policymaking there is clear demand and room for peer learning networks to support both champion building and systems change in Africa. The enthusiastic and engaged participation of evidence champions from Ghana, Kenya, Malawi, Nigeria, Rwanda, South Africa, and Uganda in a recent workshop in Nairobi, Kenya, to explore evidence use in policy implementation only confirms this demand and validates the potential of peer learning networks for advancing evidence-informed policymaking in Africa.

Abeba Taddese is the Executive Director, Results for All. Contact: abeba@results4all.org
This blog was originally published in AFIDEP’s September 2018 issue of African Development Perspectives and on the AFIDEP blog here.

Setting the Agenda for EIPM in Ghana: Common Trends and Recommendations — September 4, 2018


Kirchuffs Atengble is the founder and Executive Director of PACKS Africa, a think tank based in Accra, Ghana.



For the past six years or so, I have been involved in various ways with the organisation of programmes and events in the evidence-informed policymaking (EIPM) space in Ghana (read more about EIPM here). This July, for instance, I had the opportunity to serve as rapporteur, for the third time, for the biennial international conference organized by the Consortium of Academic and Research Libraries in Ghana (CARLIGH).

One thing I’d like to highlight is the notable momentum around efforts to advance the use of research and evidence in policy in Ghana, which is also reflected across the rest of the Africa region. This is an opportune time to explore challenges and opportunities for strengthening evidence use in policy across the continent. In this blog, I explore three core issues relevant to the EIPM agenda in Ghana.

A common reference point for available evidence

A common issue is the need for clear entry points to access evidence. Libraries in Ghana have been working over the years to promote access to content from their institutions, including academic and research articles and papers. It is common knowledge within academic circles, regularly re-echoed at EIPM forums, that the research produced by such institutions, mostly in the form of theses and dissertations, does not in any way inform policy. The Association of African Universities (AAU) has been working to improve the policy relevance of these documents, but support for this effort has been very minimal.

With respect to government data, the Ghana Open Data Initiative (GODI), launched in 2012, seeks to make data from public agencies widely available. But as it stands, the data available on the platform is limited and out-of-date. A new initiative (the e-Transform project) is now being implemented in partnership with Mobile Web Ghana to revive GODI. A fully functional GODI would be a great resource for stakeholders within the sector.

The initiatives mentioned above are a few of the ongoing projects designed to promote access to evidence for decision making. But how are policymakers to find and navigate these different platforms? I suggest a common platform that consolidates information generated on a range of topics from different sources and serves as a one-stop shop for policymakers. Such a platform would act as a portal, combining different access points (not replicating data collection or producing new studies) and structuring access according to thematic issues for easy retrieval by policymakers, citizens, academics, development partners, and other interested stakeholders. An example of such a portal is the AGORA portal of the Research4life programme.
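To make the design concrete, here is a minimal, purely illustrative sketch in Python of what the portal's core could be: a thematic registry of links to existing access points rather than a new data store. The theme names, source names, and URLs below are hypothetical placeholders, not descriptions of any real system.

    # Hypothetical sketch: a thematic index of existing access points.
    # The portal stores links to sources (open data portals, library
    # repositories, etc.), not copies of their data. All names and URLs
    # here are placeholders.
    EVIDENCE_SOURCES: dict[str, list[tuple[str, str]]] = {
        "health": [
            ("National open data portal", "https://godi.example"),
            ("Academic library repositories", "https://carligh.example"),
        ],
        "education": [
            ("National open data portal", "https://godi.example"),
            ("Theses and dissertations", "https://aau.example"),
        ],
    }

    def sources_for_theme(theme: str) -> list[tuple[str, str]]:
        """Return the registered access points for a thematic issue."""
        return EVIDENCE_SOURCES.get(theme.strip().lower(), [])

    print(sources_for_theme("Health"))  # -> both health access points

The design choice this illustrates is the one argued above: the portal adds value by organizing retrieval around themes, while the underlying data stays with the institutions that produce it.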

Cultivating knowledge brokers

Academics and researchers have a tendency to lament that their studies are not used to inform policy. Often this is because the research is not accessible to policymakers, both physically and intellectually. Policymakers have very limited time available to digest voluminous publications and need information that is presented in a user-friendly and easy-to-understand format.

Different initiatives are helping to develop the capacity of knowledge producers, enabling them to repackage resources for easy assimilation by the targeted beneficiaries. Others are focused on developing the skillsets of policymakers and their support staff to enable them to undertake such activities from the policymaking perspective.

These are all laudable initiatives. But it is important to also consider developing the skillsets of existing intermediaries, such as library professionals, communication specialists, and knowledge/evidence aggregators, who can become knowledge brokers for the sector.

Achieving efficiency and effectiveness in today’s ever-evolving knowledge economy will require specialisation. Knowledge producers should be allowed to concentrate on the core of their work, and policymakers the same. The brokering work should be left to the professionals. Here, I point to ideas like the rapid synthesis services offered by the knowledge translation and systematic review group at Makerere University. Sarah Quarmby shares her experience of this work at the Wales Centre for Public Policy.

Business process re-engineering for public agencies and cultural change

Having described the different initiatives underway to strengthen evidence use in Ghana and offered suggestions for improving evidence practices in government, I would now like to draw attention to the institutional processes of policymaking in the country. The prevailing processes make it difficult for policymakers to engage routinely with evidence. For example, instances have been cited suggesting that the core units involved in EIPM within ministries in Ghana, such as the Research, Statistics, and Information Management (RSIM) and Policy Planning, Monitoring and Evaluation (PPME) directorates, collaborate very little, if at all.

Further, some institutions are staffed by relatives and close associates of influential people in society, who may be more interested in status than in the opportunity to improve policies or programs. This can create a problem for the entire institution, particularly if decisions are based on instinct and the use of evidence is not prioritized, and it increases the risk that policies and programs will be built on the wrong assumptions.

Wouldn’t you want to see the re-structuring of business processes, to enable the consultation of appropriate and adequate evidence from available sources during policymaking – both internally in Ministries, Departments and Agencies, and externally among sector players? It’s difficult to give a perfect example of what this could look like, but the growing collaboration between the Parliament of Uganda’s Department of Research Services (DRS) and the Ugandan National Academy of Sciences (UNAS) is a promising case to highlight.

Conclusion

To end on an encouraging note, it is worth mentioning that the Ghana Health Service, together with other sector players, has developed the very first Research Agenda for the health sector. This is the kind of pace-setting initiative that I would like to see more of to improve the uptake of evidence in policy processes in Ghana.


About the author: Kirchuffs Atengble is the founder and Executive Director of PACKS Africa, a think tank operating from Accra, Ghana to improve the uptake of evidence in policymaking processes across the continent of Africa through information systems research and knowledge management. Mr. Atengble was a partner of the VakaYiko consortium in Ghana, which received funding from the UK’s Department for International Development (DfID) to improve research uptake in Ghana and three other countries – South Africa, Uganda and Zimbabwe.

Supporting Evidence Use by Policymakers in El Salvador: Following Up on MCC Activities 2 Years Later | Part 2 of 2 — August 27, 2018

Supporting Evidence Use by Policymakers in El Salvador: Following Up on MCC Activities 2 Years Later | Part 2 of 2


The June 2017 M&E training. Source: FOMILENIO II

This is the second installment of a two-part blog series by Results for All describing the partnership between MCC and the Government of El Salvador. The first installment described the evidence workshop for policymakers held in July 2016 to discuss and promote the use of findings from evaluations of previous MCC-funded programs in the country, as well as the M&E trainings for public officials held the following year. This second installment highlights interviews conducted with the organizers, trainers, and participants. It considers the impact of the workshop and trainings on Salvadoran policymakers, what has changed in the last two years, and remaining challenges to evidence-informed policymaking in the country. Note that the majority of interviews consisted of written questions and responses. Quotes used here have been translated from Spanish.

“The purpose of the Evidence Workshop was to extract the evidence produced from the MCC evaluations, as well as the findings of the empirical investigations and evaluations of other donors, and to put the results into use in the processes and decisions important for El Salvador, including in policy, strategy development, or project design in the sectors of education, infrastructure, investment climate, and possibly in the security sector.” – FOMILENIO II

What Contributed to Successful Implementation?

The evidence workshop took place within two important contexts. First, MCC committed to implementing evidence workshops in partner countries with compact or threshold programs, and the El Salvador workshop acted as a pilot for this effort. Second, a 2015 reform of the executive branch in El Salvador strengthened the role of the Presidency’s Technical and Planning Secretariat, SETEPLAN, in directing the Five-Year Development Plan, organizing the National Planning System, and operating the Monitoring and Evaluation (M&E) Subsystem. To fulfill the latter mandate, SETEPLAN identified a need to strengthen the M&E skills of key personnel in government, civil society institutions, and academia. The goals of MCC and SETEPLAN thus aligned well, and their strong pre-existing relationship facilitated the planning of the 2016 evidence workshop and 2017 M&E trainings.

Workshop organizers at the MCC El Salvador Country Team, FOMILENIO II, and SETEPLAN further stressed that implementation was facilitated by “MCC’s commitment to promote rigorous evaluations and put lessons learned into practice” as well as the fact that high-level authorities from both MCC and the Government of El Salvador supported the workshop early on and attended in person.

How Did Participants Apply Learnings to Their Work?

One government stakeholder described how the Government of El Salvador identified and established M&E indicators for national planning instruments, such as the Five-Year Development Plan, as a result of the capacity building and knowledge acquired. The government also intends to use these lessons learned to achieve the Sustainable Development Goals and address multidimensional poverty. Another key stakeholder from government stated: “I believe that there have been important advances in the outcome indicators” used to measure government work.

Several of the participants who attended the 2016 evidence workshop came from academia, and highlighted in interviews how they have used the impact evaluations presented at the workshop in the university courses they teach, enriching the classes “with concrete examples.” Another participant from academia mentioned using learning from the 2017 M&E training when conducting a project evaluation for a nongovernmental entity. Another wrote that unemployment hindered her from applying the learning from the events.

Evaluations of the 2017 M&E trainings found that participants especially appreciated modules on Theory of Change for policies and programs, and on when to use different types of evaluations (needs assessments, impact evaluations, process evaluations, and cost-effectiveness analysis) and what questions each can help answer.

When asked what helped or hindered them from applying the learning to their work, most participants with experience in government described obstacles due to the culture and leadership of the institution. One wrote that “It has been necessary to develop a culture around the evidence. The resistance in this sense is quite strong. The systems for capturing information have their limitations, which are very difficult to overcome in times when resources are scarce. In this process it has been important to count on the support of the authorities.” Another put it this way: “Considering that I have been at the front of M&E units, I can tell you that knowledge is easier to apply from an institution that gives importance to evaluation. Many times the obstacles come from the bosses, the superiors who are not interested in the issues, much less the real results and in part this is because they do not know about the subject.” One respondent noted that while the strong technical team within SETEPLAN “is receptive to MCC’s findings, the current political and fiscal climate has hindered full adoption and implementation of these lessons learned.”

What Are the Main Challenges to Using Evidence for Policymaking in El Salvador?

Regarding challenges to evidence use in general, participants spoke overwhelmingly about the need to create an evaluation and learning culture within their institutions. “It is necessary to create the culture of M&E in our professionals […] It is necessary to stop seeing evaluation as a way to measure and punish the employee,” wrote one participant. Similarly, another interviewee explained how “evaluation can be seen as an audit and not as a learning and improvement process.” Another stated that “The information systems that are available do not respond to current needs and the evolution of the programs. Nor is there a culture of evaluation and many of the processes are susceptible to human errors due to lack of systematization.” The frequent turnover of governments and personnel undermines efforts to create an evidence culture, and necessitates “continuously sensitizing decision-makers and managers of public policies, programs and projects on the importance of the generation and use of evidence,” according to another stakeholder from government.

In addition to the lack of a culture of using evidence and evaluations, access to relevant evaluations or updated statistics was a commonly cited challenge, especially for participants from outside government. Other challenges cited include:

  • “lack of knowledge about how the results of an evaluation can be concretized into actions for continuous improvement of the work of government”
  • “availability of quality and timely information in the short term for public decision-making”
  • “to consolidate an effective M&E System that transcends periods of government” and
  • how to ensure that “the information generated by the M&E System is actually used for decision making.”

There may also be an assumption that evaluations are always costly and take a long time. One interviewee explained that policymakers often work with consultants who over-charge for evaluations of sub-par quality. Lastly, one interviewee noted that with El Salvador’s current financial conditions, the government may have knowledge of the highest-impact interventions, yet lack the funds to implement them.

What is the Attitude Toward Evidence Use in El Salvador, and What Has Changed in the Last Two Years?

On the whole, workshop organizers and participants stressed that the mood has shifted in El Salvador, and that while room for improvement remains, significant advances have been made in the last two years to institutionalize evidence use, and to view evidence and evaluations as positive and essential tools for government. One participant described a shift in measuring impact, from purely qualitative to quantitative evaluations using the counterfactual. Another wrote that “there are mechanisms already institutionalized to publicize the results that are available. A lot of attention and resources have also been given to evaluation to be able to continue improving interventions.” Several others highlighted greater interest in or commitment by public institutions to measure results and open themselves to citizen participation. The fact that the M&E trainings had participation from top-level ministerial staff all the way down to program staff is a clear sign of the government’s commitment to use evidence in a more systematic way, declared one stakeholder.

Several participants spoke of remaining challenges, pointing mainly to limited resources for conducting evaluations, and to the political environment. As one stakeholder wrote, “The technical offices within the government are receptive toward the use of evidence-based decision making, but the electoral environment and political leadership of both sides of the aisle favors decision making using a different calculus.” Another put it more frankly: “the bosses do not like to know that things go wrong and that they should be adjusted.” However, as a different interviewee observed, the fear of admitting that things have gone wrong is not unique to El Salvador.

Onwards and Upwards: Catalyzing Continued Culture Change for Evidence Use

Overall, the organizers and participants of the 2016 evidence workshop and 2017 M&E trainings spoke positively of the events and the relationship between MCC and the Government of El Salvador that made them possible. Most participants noted that they were able to apply their learning in some capacity in their work, though how they did so differed widely, from better identifying indicators for national planning to using concrete examples of evaluations in university teaching. Organizers and participants had similar responses when asked about challenges to evidence-informed policymaking in El Salvador: nearly every interviewee emphasized, at some point, the importance of developing institutional cultures of evidence use and learning. That includes sensitizing government authorities, who, participants explained, can present obstacles when they lack familiarity with evidence methodologies or topical issues, when they see evaluation primarily as an audit rather than a learning tool, when they are reluctant to see negative results, or when they devalue evidence compared to political or electoral priorities.

At Results for All, we have studied many other training programs and initiatives that aim to build policymaker knowledge, skill, and motivation to use evidence in government. The challenges cited by stakeholders interviewed for this blog resonate with our research – namely that institutional cultures, political leadership, and staff turnover can impede the effectiveness of isolated training programs or events that focus on individual government personnel. Instead, we find that creating cultures of evidence use, and institutionalizing evidence-informed policymaking in government, requires the right incentives.

In the United States, our colleagues at Results for America help incentivize political leaders to become evidence champions by highlighting their work in the media (policymakers love press) and recruiting them to join peer learning and advocacy networks. At Results for All, we recently hosted a peer learning workshop for teams of government policymakers from nine countries, focused on using evidence to improve policy implementation. We saw how the opportunity to share and learn from their peers motivated teams of government policymakers from around the world to apply for the workshop. We found that policymakers really want to learn about and create guidelines, policies, and frameworks that can incentivize, systematize, and govern evidence use in their institutions. Lastly, we witnessed strong demand for a sustained network of evidence champions, one that could provide a powerful global platform to advocate for and incentivize cultures of evidence-informed policymaking and learning in government.

Over the next few months, we will publish a series of briefs and case studies on mechanisms to incentivize evidence use in government. We are also shaping a strategy for a global peer learning network on evidence-informed policymaking. The network has the potential to unite leaders from governments like El Salvador to highlight, disperse, and deepen existing evidence practices, pair governments grappling with similar challenges, and jointly develop guidelines and tools to govern evidence use and incentivize continued progress.


Ari Gandolfo is Projects and Partnerships Manager at Results for All, a global initiative dedicated to helping policymakers use data and evidence to improve the lives of citizens. We are committed to highlighting and accelerating the spread of good practices for evidence use in government, developing tools and resources to support policymakers, and building innovative partnerships and networks to recognize and promote peer learning among evidence champions in government.

Do you have comments, questions, or ideas for us? Do you have experience with incentives in the public sector that could help strengthen evidence use in policymaking and make it routine? You can reach us at info@results4all.org.


Supporting Evidence Use by Policymakers in El Salvador: Following Up on MCC Activities 2 Years Later | Part 1 of 2 — August 20, 2018

Supporting Evidence Use by Policymakers in El Salvador: Following Up on MCC Activities 2 Years Later | Part 1 of 2

The 2016 evidence workshop. Source: FOMILENIO II

Thanks to a commitment to monitor and evaluate the impact of its investments, and a focus on using data and evidence to get results, the Millennium Challenge Corporation (MCC) generates a lot of evidence on “what works” to improve economic growth and development in its partner countries in Africa, Asia, Eastern Europe, and Latin America. However, do partner country governments use the evidence generated by MCC-funded programs to inform their policies and planning? Or are those valuable lessons on “what works” forgotten as MCC and its partners move on to new priorities? In July 2016, MCC held a workshop in El Salvador intended to make better use of the evidence generated by its own investments in the country, and followed up with a Monitoring and Evaluation training for policymakers one year later. How did participants apply the evidence and learning to their work, and what helped or hindered them from doing so? What has changed in the two years since the initial workshop, and what challenges remain for using evidence in government policy?

At Results for All, we are committed to shining a spotlight on good practice initiatives that promote the systematic use of evidence in government decision-making. We have studied other evidence use training programs, such as this one in Ghana, and interrogated to what extent individual policymakers are able to apply what they learn to their work, noting the challenge of creating lasting impact with these programs. We wanted to help MCC tell the story of their work in El Salvador, see what lessons from this experience resonate with our previous research, and identify what insights could be shared to advance evidence-informed policymaking in other contexts. In this two-part blog series, we describe the programs held in El Salvador in 2016 and 2017, follow up with Salvadoran policymakers and partners to consider the impact of the activities, and discuss implications for furthering evidence-informed policymaking in El Salvador and other countries.

Development Assistance Grounded in Partnership: The MCC Model in El Salvador

The Millennium Challenge Corporation (MCC) is a foreign aid agency of the U.S. government, created by Congress in 2004, which supports economic growth, poverty reduction, and institutional strengthening in developing countries with good governance. By selectively partnering with countries with good policies, and emphasizing country ownership, evidence-based programs, and rigorous monitoring and evaluation, the MCC model helps to ensure that U.S. dollars are well spent to get cost-effective results.

How it works: MCC provides time-limited grants to competitively selected partner countries. To be eligible for the five-year “compacts,” countries must pass the MCC Scorecard, a set of independent (non-MCC) indicators related to good governance, economic freedom, and investing in citizens. Countries close to passing the eligibility criteria can receive small “threshold” grants to support policy and institutional reform, to help improve their policy performance in key areas and work towards future collaboration with MCC. Once selected, partner country governments work with MCC to design programs that align with national development priorities, and refine plans through consultations with civil society and the private sector. Countries must then establish a Millennium Challenge Account (MCA), a local accountable government entity responsible for implementing the grant funds, which are also subject to rigorous and independent monitoring and evaluation to assess the impacts of the MCC-funded programs.

MCC in El Salvador: MCC has awarded two five-year compacts to El Salvador. The first, from 2007 to 2012, invested $461 million in education, public services, agricultural production, rural business development, and transportation infrastructure. The second compact, active from 2015 to 2020, will invest up to $277 million (with an additional $88 million contributed by the Government of El Salvador) in regulatory reforms, education, and logistical infrastructure to increase El Salvador’s productivity and competitiveness in international markets, in order to promote economic growth and private investment in the country.

The compacts are implemented by Fondo del Milenio, commonly referred to as FOMILENIO, the Millennium Challenge Account (MCA) or entity responsible for the MCC investments in El Salvador. For the second compact, FOMILENIO II is led by the Presidency’s Technical and Planning Secretariat (SETEPLAN, for its Spanish acronym) and includes representatives from federal ministries, civil society and academia, the private sector, subnational governments, and MCC.

A Bigger Bang for the Buck: A Workshop to Make Better Use of Evidence Already Generated

Early in the second compact, MCC and FOMILENIO II hosted a workshop (Spanish) entitled “Closing the Gap: Strengthening the Ties between Evaluation and Policy,” which aimed to promote the use of evidence generated through MCC investments in the design and implementation of new government programs and policies. In essence, the workshop sought to take stock of the knowledge acquired through previous MCC-funded work in El Salvador, and help Salvadoran policymakers identify where it could be applied to make further progress on government priorities.

Over 180 policymakers, practitioners, and researchers attended the event, held in San Salvador on July 28, 2016. Plenary sessions discussed MCC and FOMILENIO I achievements and the importance of using impact evaluations for policy, planning, and budgeting purposes. Representatives from MCC, the World Bank, Inter-American Development Bank, Abdul Latif Jameel Poverty Action Lab (J-PAL), USAID, Mathematica Policy Research, and other partners then presented the results of the evaluations of MCC-funded projects to improve education and El Salvador’s investment climate. Participants discussed the findings and ways to incorporate that knowledge into new programs and policies.

“As part of the workshop, participants committed to use the lessons learned to improve education, gender, and legal and regulatory policy to make the business climate more competitive and help ensure that better educated students can find higher paying jobs in El Salvador.” (MCC’s Statistically Speaking Newsletter, January 2017)

The evidence workshop provides a good example of collaboration between MCC and an in-country Millennium Challenge Account and offers a relatively simple strategy to promote evidence-informed policymaking – and a continued return on MCC investments – by jointly brainstorming how to make better use of evidence already generated by MCC-funded programs.


Findings from MCC’s First Compact with El Salvador: Supporting Technical Education

At the July evidence workshop, MCC and FOMILENIO partners presented final evaluation findings from already concluded MCC-funded programs. For example, Mathematica Policy Research presented findings from the evaluation of the Formal Technical Education Sub-activity. The sub-activity included a scholarship program meant to increase graduation rates from post-secondary technical-vocational schools. The results? Boys who received scholarships to the technical-vocational schools were more likely to graduate than boys who did not. Their graduation rates, 80% vs. 63%, differed by a statistically significant 17 percentage points – a relative increase of roughly 27% – suggesting that scholarships are an effective means of increasing boys’ graduation rates. The difference for girls, 77% vs. 76%, was not significant. Why the variation? We can hypothesize that scholarships may be a stronger motivator for boys than for girls because they reduce boys’ incentives to emigrate or to find low-skilled work to provide for their families, both common in El Salvador.
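The two figures describe the same gap in different ways: the absolute difference is 80 − 63 = 17 percentage points, and relative to the comparison group’s 63% rate that gap is about a 27% increase. A minimal back-of-the-envelope sketch in Python (the compare_rates helper is ours, purely illustrative, and not part of the Mathematica evaluation):

    # Absolute (percentage-point) vs. relative differences in graduation rates.
    # The rates come from the evaluation summary above; the helper is illustrative.
    def compare_rates(treatment: float, control: float) -> tuple[float, float]:
        pp_diff = (treatment - control) * 100             # absolute gap, in percentage points
        rel_diff = (treatment - control) / control * 100  # relative increase over control, in %
        return pp_diff, rel_diff

    print(compare_rates(0.80, 0.63))  # boys:  ~17.0 pp gap, ~27.0% relative increase
    print(compare_rates(0.77, 0.76))  # girls: ~1.0 pp gap,  ~1.3% relative increase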

Importantly, the evaluation also found that graduation from a technical-vocational school did not guarantee higher income, sometimes because the skills taught by the program did not match those needed by local employers. Early collaboration with the private sector in the design of technical curriculum is therefore essential.


Enabling Evidence Use: Tiered M&E Training for Public Officials

After the initial evidence workshop, the Government of El Salvador requested a certificate course for relevant ministries and implementing entities, to improve the M&E capacity in government. In response, MCC and FOMILENIO II worked with J-PAL to deliver three sets of trainings (Spanish) from June to August 2017. The trainings aimed to show how impact evaluation findings and other evidence can be used to improve public policies and results for citizens, identify challenges to evidence-informed policymaking and strategies to overcome them, and enhance relevant skills in monitoring and evaluation. The first targeted high-level public officials, principally ministers and directors, while the second brought in mid-level public officials responsible for project design and implementation and included more technical information. The third training was designed for M&E specialists and technical staff, with more emphasis on statistics and econometrics and other skills needed to conduct, coordinate, or supervise the evaluation of public policies and projects.

In addition to enhancing the skills of participants, the training was meant to create a cadre of professionals that could form a potential Monitoring and Evaluation Committee supported by the Presidency’s Technical and Planning Secretariat, a plan that could help to institutionalize the use of evidence in policy and planning decisions.

Beyond Sharing Evidence and Building Skills: Assessing Changes Two Years Later

This blog introduced the partnership between MCC and the Government of El Salvador, and described the evidence workshop held in July 2016 and follow-on trainings held one year later. How did policymakers in El Salvador apply what they learned in the workshop and training to their work in government? What helped or hindered them from doing so? How have their attitudes toward evidence and their motivation to use it changed in the last two years, and why? To answer these questions, the second blog in this series features interviews with policymakers who organized and participated in the workshop and trainings, and discusses what remaining challenges impede policymakers from using evidence in El Salvador, and how MCC and other donors can help.


Ari Gandolfo is Projects and Partnerships Manager at Results for All, a global initiative dedicated to helping policymakers use data and evidence to improve the lives of citizens. We are committed to highlighting and accelerating the spread of good practices for evidence use in government, developing tools and resources to support policymakers, and building innovative partnerships and networks to recognize and promote peer learning among evidence champions in government. You can find other training programs and evidence-informed policymaking initiatives we have studied here. In particular, you may want to look at this case study on evidence use training for civil servants in Ghana, or the many other programs we summarize in this table that aim to build policymaker knowledge, skill, and motivation to use evidence in government.


Where do policymakers go when they need a safe space to engage with each other on evidence use in policy implementation? — August 6, 2018


They come together in a workshop where they can share experiences, lessons, and strategies for using evidence more effectively in the implementation of social policies.


On July 23-25, Results for All and our partners at AFIDEP and IDinsight hosted “Using Evidence to Improve Policy Implementation: A Peer Learning Workshop for Government Policymakers” in Nairobi, Kenya, providing such a space for ten teams of evidence champions and policymakers from nine countries: Chile, Ghana, Kenya, Malawi, Mexico, Nigeria, Rwanda, South Africa, and Uganda. Each government team is tasked with implementing a specific social policy – such as increasing the quality of public education, meeting family planning targets, or supporting the most vulnerable households – and sought to use this opportunity to inform that work. To learn more, take a look at the workshop agenda, and read about the participating teams and the policies they are working on in this set of short policy briefs we wrote together. You can also see photos from the workshop and several video interviews with participants on our Twitter.


Why a focus on policy implementation?

Over the last few months, we’ve been engaged in a series of consultations to assess the demand for a global evidence network and understand how funders are supporting evidence use to inform government priorities. A consistent theme in our conversations has been the lack of attention given to policy implementation or translation of policy to action. We also heard from many participants that the specific focus on evidence use in policy implementation is what drew them to apply for and participate in the workshop.

Policy implementation challenges can occur due to a myriad of factors, including unclear policy goals and outcomes; an absence of political support or financial resources; missing or weak evidence on the effectiveness of an intervention; inadequate skills or motivation among public officials tasked with frontline service delivery; and incorrect assumptions about human behavior and local needs. Addressing these implementation challenges requires a variety of evidence: evidence on how to mobilize political and financial support for the policy; evidence on whether the policy has worked elsewhere and under what conditions; evidence on how to enable and incentivize frontline agents to best implement and track the policy; and evidence from local stakeholders to best tailor the policy to their context and needs.

When implementation, along with monitoring and evaluation activities, is not linked to policy design but instead treated as a distinct downstream activity, the incentive to produce evidence in an ongoing and iterative process to inform policy is weak. This puts evidence-informed policymaking at risk: policymakers can only realize the benefits of evidence-informed policymaking when implementation succeeds. We therefore think it is critical for governments to take a systematic and structured approach to using evidence to bridge the gap between policy design and implementation, to achieve better results for the people they represent and serve.

Initial insights and reflections

We’re still processing our learnings from the workshop and will share a lot more in the weeks to come, but here are some initial takeaways.

1: Common evidence use challenges persist across diverse contexts

We were not surprised to learn of the many common challenges that workshop participants face in using evidence to inform policy implementation, regardless of the specific policy, sector, or country context. These include the lack of a learning and results-oriented evaluation culture; the difficulties associated with integrating and using data across the multitude of agencies working to address complex social problems; the challenge of turning raw data into usable information; the absence of structured partnerships with the research community and media; and a lack of tools and understanding on not only how to engage with citizens, but importantly on how to use the inputs that they provide to improve policy implementation.

2: A safe space for sharing challenges, experiences, and accomplishments is attractive to policymakers, even those with more advanced evidence use

In our consultations we also heard a lot of interest in peer learning and networking between governments, and we think the workshop attracted so many applicants because peer learning was a central theme. We wanted to test whether policymakers from a diverse set of countries, most of whom had never met before, could connect over common missions and challenges, openly share their successes and failures, and provide real value to each other’s work. The verdict thus far is yes: participants have told us over and over what a great opportunity it was to connect with others in government, and to now have a network of international peers with whom they can discuss ideas and share resources. We confess we were skeptical that representatives from some countries – especially Mexico, Chile, and South Africa, with their very advanced evaluation systems – would benefit from being in the room as much as others, but participants have reiterated that the workshop gave them a lot to think about and apply to their work. Some noted that this was a great opportunity to showcase their country’s learning and growth around evidence use, and that they were interested in more forums that provided this platform.

3: Participants want more practical, hands-on tools with immediate applications to their work

Workshop participants were especially keen to use tools, like checklists, behavioral insights tools, and design thinking, that helped them reflect on their own experiences, map out connections, and chart a way forward. This interest signals the potential for future network activities focused on jointly developing model policies, frameworks, or guidelines for evidence production and use, which participants could then adapt to their own contexts.


4: Connecting government and civil society is a valuable function for a network or community of practice

We designed this workshop as a forum primarily for government policymakers to learn from each other. However, one of the most popular sessions was a ‘marketplace of citizen engagement solutions,’ where we invited eight non-governmental organizations from Nairobi to set up booths and showcase their work to participants. The organizations – Africa’s Voices Foundation, Code for Africa, Local Development Research Institute, Map Kibera Trust, Muungano wa Wanavijiji, Open Institute, Twaweza East Africa, and Well Told Story – are each using technology and innovative approaches to collect and analyze citizen perspectives, feedback, and ideas in order to identify social problems, point to improvements in public programs, and spark behavior change and collective action. Participants told us they relished the opportunity to speak with these organizations and learn of new and innovative tools to collaborate with communities to collect data and source solutions, and we heard also from presenters that they appreciated the opportunity to interact with such highly engaged policymakers. Overall, this marketplace taught us that in addition to satisfying demand for peer learning among governments, a network or community of practice focused on evidence use can also be a powerful bridge between government and civil society.


5: A thematic or sector focus could deepen the conversation on institutionalizing evidence use

We did not expect to solve huge, complex social problems in a 2.5-day conference. Rather, we were interested in exploring whether we could have a meaningful conversation about strengthening institutional practices and processes for evidence use in policy implementation, across the different policies and contexts represented in the workshop, and whether this approach would be valuable to policymakers. There was general agreement that a conversation about evidence practices and processes is critical to strengthening evidence use in policy implementation and that there are many lessons to learn from different policy areas, but workshop participants also indicated that working groups with a thematic or sector focus could help provide deeper, richer insights and value to support their work. For some, this means a focus on themes like data collection, evaluation capacity, and statistical systems, while others felt that sectoral working groups could be helpful in addressing contextual factors that are specific to sectors. As a follow-up to the workshop, we will be conducting a series of short surveys to identify the issues and themes that would be most helpful for policymakers to engage with through a network, and we will continue to shape a network strategy in collaboration with partners.


“Using Evidence to Improve Policy Implementation: A Peer Learning Workshop for Government Policymakers” was an opportunity for a global community of committed evidence champions to share real-world experiences in implementation, discuss common challenges, and collectively shine a spotlight on good practices for using evidence to improve the implementation of policies and generate results for their populations. We heard from participants and partners that the workshop was a resounding success, giving all of us new ideas and questions to take back to our work, connections and partnerships to continue to grow, and the inspiration to continue advancing the use of evidence to get results in government. We know that here at Results for All, the workshop gave us a lot to think about, and we’ll be sharing more insights, reflections, and takeaways in our forthcoming report later this month.



Evidence-Informed Policymaking Reading List | July 2018 — July 3, 2018


Our Network Mapping Report is Here!
 
Peer learning networks are powerful platforms for bringing countries together to exchange information, learn, and collaborate. Results for All conducted research to assess whether there is demand in government for a peer learning network that could help policymakers share experiences and amplify solutions to advance the use of evidence in policy, to achieve better outcomes for citizens. To inform a network strategy and avoid potential overlap with existing initiatives, we engaged in a comprehensive process of identifying, researching, and mapping active networks with similar missions. The resulting report identifies and classifies more than 50 networks for peer learning and evidence use in government; describes 8 modes of engaging network members; and synthesizes research and key informant interviews into 13 lessons on network organization, engagement, and measurement. It then matches select networks against 5 criteria and concludes that a new network premised on these criteria could support evidence-informed policymaking and add value to current initiatives.

We hope this report will be useful for a variety of actors seeking to support evidence-informed policymaking, and that it will help identify opportunities to enhance collaboration and fill gaps in this important field.

Results for All Network Mapping Report


What to Read this Month

“An accurate, concise and unbiased synthesis of the available evidence is arguably one of the most valuable contributions a research community can offer decision-makers.”
The article identifies four principles that can make it easier for evidence producers and users to commission, appraise, share, and use evidence in policy. These four principles – inclusive, rigorous, transparent, and accessible – should apply to every evidence synthesis, which, if done well, becomes a global public good.

“The innovative contribution of the current study is the development and validation of a set of measurable indicators specifically devoted to assess and support EIPM in the field of public health, intended to be jointly used by governmental policy-makers and researchers, but also by other stakeholders involved in various stages of the policy-making cycle.”
The article describes a Delphi study that led to the development of indicators that can be used to assess the extent to which policies are informed by evidence. The indicators cover issues related to the skill and experience of staff working on policies, the documentation of evidence in policy documents, communication and participation with key stakeholders, and procedures for monitoring and evaluating evidence use in policy. The indicators could also help to encourage the establishment of routine processes for advancing evidence use in policy.

“There’s a limited evidence base about knowledge brokers, but preliminary findings suggest that they do have the potential to improve the uptake of evidence.”
Insights from the Wales Centre for Public Policy on the role it plays as a knowledge broker in evidence-informed policymaking – helping to build an understanding of evidence needs and questions, improve access to evidence, promote interaction between evidence users and producers, and strengthen capacity to engage with research. The Centre is refining its theory of change to better understand approaches that work, and to take a more systematic approach to facilitating evidence use in policymaking.

“Just as a journalist is trained to tell a compelling story so that an audience’s attention is captured and held so that facts of a story can be relayed to a reader or viewer, so too do scientists or policy experts need to capture attention and communicate both the importance and complexity of issues to their audiences.”
A useful article describing how storytelling influences the policy process and offering key steps to help policy actors build a better narrative.

“Guerrero, a Harvard-educated surgeon-turned-epidemiologist, understood violence as an epidemic transmitted from person to person. As with any epidemic, he tried to map the outbreak and understand its transmission. Data came first.”
A great story about data-driven policing and violence prevention in Colombian cities.

What We’re Working On

Later this month, we’re convening teams of government policymakers from nine countries to share experiences, challenges, and lessons on how to use different types of evidence to overcome roadblocks, create political buy-in, engage with stakeholders, and mobilize financial resources to support and improve policy implementation. “Using Evidence to Improve Policy Implementation: A Peer Learning Workshop for Government Policymakers” will take place from July 23-25 in Nairobi, Kenya, in partnership with the African Institute for Development Policy (AFIDEP) and IDinsight. We’ll share profiles of the participating teams and the policies they are working on in the coming weeks on Twitter (@resultsforall), so be sure to follow us there!
Open Call for Applications: Peer Learning Workshop on Policy Implementation — April 24, 2018

Open Call for Applications: Peer Learning Workshop on Policy Implementation


Applications are due by May 14, 2018 via email to info@results4all.org.

Download the application here.

The Workshop:

Results for All and the African Institute for Development Policy (AFIDEP) will host a two-and-a-half-day workshop for policymakers from around the world to:

  • Discuss the challenges governments face in effectively implementing policies; and
  • Share experiences and strategies for using evidence to improve policy implementation.

The workshop will:

  • Facilitate dialogue, exchange, and active engagement among participants, to more deeply understand policy implementation challenges and lessons learned in different contexts; and
  • Introduce tools and approaches for improving implementation using various types of evidence.

During the workshop, participants will seek to answer questions such as: What are the most common barriers to effective policy implementation in different government office contexts? What type of evidence is needed to unlock implementation, and how and when should it be considered? What strategies and mechanisms are governments in different countries introducing to improve and integrate evidence use in policy design and implementation? And how can we learn from their experiences?

Workshop Outcomes:

  • Participants will learn from and interact with peers leading policy implementation activities from 7-8 national governments.
  • Participants will work in country teams to diagnose root causes of policy implementation challenges and create solution-based roadmaps.
  • Participants will provide feedback and shape future collaboration, including a potential global network for government leaders to advance the use of evidence in public policy.

Who Should Participate?

Results for All and AFIDEP invite public officials and policymakers to form a team of three or four individuals who are working together to implement a specific policy, in any sector, and who want to learn how and when to use evidence to overcome policy implementation challenges. A team must include members from at least two government ministries / departments / agencies, and be approved by senior leadership via the signature at the end of this application. Teams from seven to eight countries will be selected for participation in the workshop.

Teams are encouraged to include:

  • A program manager or director in a ministry / department / agency, who oversees the implementation of the policy in question.
  • A public official or practitioner at the national or subnational level, who has a role in operationalizing the policy, or collecting operational data and evidence.
  • An analyst, manager, or director from a national finance or planning ministry / department, who has a coordinating role in managing or evaluating policy.
  • A technical expert from a research or evaluation unit or statistical office, who has a role in producing or sourcing evidence to inform policy options or implementation strategies.

Teams will be expected to develop a PowerPoint presentation outlining a policy implementation challenge in advance of the workshop, and to engage in follow-up activities to put their roadmaps into practice.

Teams from Brazil, Colombia, Ghana, India, Indonesia, Kenya, Malawi, Mexico, Nigeria, Philippines, Rwanda, South Africa, Tanzania, Uganda, and Zambia are especially encouraged to apply.

Download the application and apply here.

Advancing evidence-informed policymaking: What’s culture got to do with it? — March 29, 2018

Advancing evidence-informed policymaking: What’s culture got to do with it?

Over the last few months my team at Results for All has been engaged in consultations to assess the demand for a new global evidence network that could bring government policymakers together to exchange innovative ideas and learn from each other to advance evidence use in policymaking.

We have spoken to policymakers in government, members of the research and academic community, and non-governmental partners and initiatives in countries including Colombia, Chile, Finland, Nigeria, South Africa, Tanzania, Uganda, and Zimbabwe, among many others. In every conversation, we heard about the importance of building or shifting the culture of evidence use. While we expect organizational culture to differ across contexts, we observed an interesting tendency in the policymaking community to speak about culture and evidence use in a way that suggested some universality across policy areas and levels of government. We noted further that in the context of evidence use, culture was often spoken of in broad and vague terms, such as “the culture is not developed enough,” “there is no culture of producing data,” or “mid-level technocrats have a lot of influence, and the ability to shift government culture.”

We are curious about the notion of an evidence use culture in government, and believe it is essential to better understand this culture so we can identify strategies to help strengthen it.

What is culture?

The challenge in understanding what a culture of evidence use in government looks like begins with the definition of culture itself, a term with many meanings. The first of Merriam-Webster’s six definitions of culture describes it as a set of attitudes, values, goals, and practices shared across an institution or organization. Matsumoto et al. suggest that while attitudes, values, and goals can be shared by a group, they can also be differentiated at an individual level.

This practical guide on changing culture, developed by Bloomberg Philanthropies’ What Works Cities initiative, offers a definition of culture that gets at norms: “culture is the difference between what you tolerate and don’t tolerate.” According to the guide, culture embodies interactions between the different elements of a system, such as people, beliefs, values, and attitudes. It both shapes and depends on an organization’s knowledge, processes, and systems. It is not a singular thing – an individual or organization can be defined by multiple cultures. And it is both learned and a legacy that can be shaped over time. These conflicting and dynamic elements are what make culture hard to define.

Levels of culture

To understand culture as it relates to evidence use in government, it is helpful to explore the different levels at which culture presents itself in an organization: artifacts, values, and assumptions, captured in a helpful visual here.

The visible and tangible elements of an organization are its artifacts. They are what you see when you walk into an office – desks, chairs, computers, plants, and filing systems. Reports, briefs, databases, and knowledge management systems are also types of artifacts. Artifacts can give a sense of office culture – we might, for example, assume that a brightly colored office with an open floor plan has a creative mission, and sense entrenched bureaucracy in a dark, traditionally furnished office. Or we might expect an office with the technology for collecting and storing data to routinely use evidence to inform policy and programs.

Yet these visual cues about an office’s culture may be misleading if we do not understand the organization’s values and the underlying assumptions that drive the daily work of its leaders and employees. For example, a government office may have the relevant evidence artifacts, such as a knowledge management system or evaluations, but lack shared values to guide and encourage evidence use in decision making. And even when tangible artifacts exist and a government office publicly articulates the value of using evidence in policymaking, if the underlying assumption is that using evidence is too costly or time-consuming, the office is unlikely to translate its artifacts and values into systematic use of evidence in policy decisions. The challenge is that it can be hard to uncover the hidden assumptions – feelings, perceptions, thoughts, or beliefs – that shape an organization’s visible artifacts and values. Artifacts and values can also be disconnected and even contradictory, most noticeably in government when the financial commitments needed to support desired policies or policymaker behavior do not line up with a government’s stated values.

In the context of evidence-informed policymaking, it is important to build artifacts – the systems and processes governments need to ensure evidence is appropriately sourced and used to inform strategic thinking, policy development, implementation of policy options, and monitoring and evaluation. It is also critical to build and instill a shared and publicly expressed value in using evidence. But to influence behavior change and shift attitudes about evidence use, it is imperative that we consider the basic assumptions that guide how work is done and decisions are made. When what we say (reflecting values) does not align with how we behave (building and using artifacts), it is a sign that we need to dig deeper to understand the assumptions that govern our behavior.

What should governments do to strengthen underlying assumptions and shift the culture toward evidence use?

    1. Take time to know the office – For many government offices, a conversation to understand barriers and challenges that inhibit evidence use, and clarify performance expectations and intended outcomes of policies, is a good starting point for those who would like to see greater use of evidence in policymaking. Build the communications skills to hold these conversations. A needs assessment can help to diagnose the gaps in knowledge, awareness, and capacity that can influence assumptions around what it takes to find, understand, and use evidence.
    2. Invest in leaders and champions – Strong role models who demonstrate the importance of using evidence through their actions can inspire others and help to change behavior patterns. Highlighting respected leaders who support innovation, change, and learning can positively influence other public officials’ assumptions and attitudes toward evidence use.
    3. Build knowledge and awareness – Policymakers who are confident in their ability to find, appraise, and synthesize evidence, and who understand the complexities of the policymaking process, are more likely to use evidence in their decision making process. Training courses or events such as dedicated research weeks can raise awareness about the value of using evidence and change assumptions that using evidence is too intimidating or complex.
    4. Create a compelling narrative – Ruth Levine gets at a moral argument for evidence-informed policymaking here and here. Moving from a compliance and monitoring mindset to a compelling narrative that points to failed outcomes for citizens when we do not use evidence can be a way to shift attitudes and behavior toward evidence use. Make responsible allocation and spending of limited government resources about doing right by citizens – achieving healthier populations, delivering quality education for all, accelerating financial empowerment for women.
    5. Promote networks and relationships – Whether formal or informal, peer interactions can help policymakers strengthen technical skills and shift attitudes and assumptions by exposing them to new ideas. As an organization, this could mean giving staff the time and space to connect with each other to share information, lessons, and experiences.
    6. Recognize and reward desired behavior – Different strategies can be used to motivate policymakers to use evidence in decision making, ranging from financial performance incentives to less resource-intensive award and recognition programs. Governments can use these strategies to promote and reward desired behavior, nudging policymakers to shift their assumptions and actions to align with organizational values.

It takes time and intentional effort to build or change the evidence culture in government. And to truly do so, we will need to scratch beneath the surface to investigate the underlying assumptions that influence whether individuals and organizations actually use evidence in their work. These assumptions determine whether values become empty statements and artifacts gather dust or, ultimately, whether evidence use becomes a cultural norm.

Abeba Taddese is the Executive Director of Results for All, a global initiative of Results for America.

How do governments use evidence for policy? 100+ mechanisms and a short survey — October 18, 2017

How do governments use evidence for policy? 100+ mechanisms and a short survey

Results for All is currently assessing whether a global evidence network that facilitates collaboration and an exchange of experiences between policymakers could help to advance and institutionalize the use of evidence in government. We invite you to participate by taking our short survey here.

The survey will take less than 10 minutes, and will close on October 31.

If you are not a government policymaker, you can still click on the link above and provide input in the space provided. Additionally, we encourage you to contact us at any time to learn more about our work.

We appreciate your support in forwarding the survey to other government policymakers who can help us to assess the demand for a global evidence network.

The Global Landscape Review is here! — July 25, 2017

The Global Landscape Review is here!

Results for All’s just-released “100+ Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review” and case studies on Ghana, Kenya, and Canada can be downloaded here.

By Abeba Taddese, Executive Director, Results for All

For the last 18 months, Results for America’s global Results for All initiative has been engaged in a landscape review to understand the different approaches governments are taking to create formal strategies and mechanisms – specifically, policies, programs, platforms, systems and operational practices – to advance and institutionalize the use of data and evidence in decision making.

We’ve had a fulfilling year of learning from government leaders, experts and citizens around the world, and we are eager to share some of our insights here:

  • The last 5 to 7 years have been a busy time for governments. Outside of the long-established evaluation systems in countries like Mexico, Colombia and Chile, we observe that many of the formal structures governments are putting in place to support evidence-informed policymaking (EIP) are quite recent. Separately, we note a growing body of literature on evidence-informed policymaking, notably exploring constraints or barriers to EIP, and factors that enable data- and evidence-driven decision making.                                                                                                                           
  • While institutional strategies and mechanisms are necessary and often a precondition for routine and consistent use of data and evidence in policy and programs, they aren’t enough on their own. There is widespread agreement among policymakers and evidence producers alike that policymaking is complex, multi-dimensional, and influenced by many factors. It is far from a linear “evidence in, policy out” process. Contextual factors ranging from leadership, commitment, and allocation of resources to political climate, values, and belief systems are critical influences in any policy process.
  • Governments are taking different, context-specific approaches to creating formal strategies and mechanisms. And they are sharing information about their processes and learning from each other. The study tours to Mexico, Colombia and the United States that helped to inform South Africa’s monitoring and evaluation system, the data-driven community safety approach in Saskatchewan, Canada (Hub) adapted from Scotland’s Violence Reduction Unit model, and the collaboration between the Office of the Prime Minister in Uganda and Malaysia’s Performance Management and Delivery Unit (PEMANDU) are a few examples that stand out.                                                                                                                                        
  • Ultimately, EIP isn’t about a specific approach or type of evidence, but rather about finding context-appropriate ways to make better use of data and evidence in real-life policy and program decisions. This last point is worth underscoring, in the spirit of ensuring that we don’t end up with a jargon-laden theoretical field that distracts the EIP community – whether government actors, nongovernmental organization (NGO) partners, or the philanthropic community – from the end goal of achieving better outcomes for populations.
  • There appears to be an emphasis in government on creating structures and systems to improve access to data and evidence, while NGOs are playing a more central role in facilitating partnerships between policymakers and evidence producers as well as building the individual capacity of policymakers. Our review is not exhaustive or definitive, so we can’t say for certain why this might be the case. But we surmise that there may be a “build it and they will come” approach to the use of data and evidence in government policymaking, and that governments may prioritize spending finite resources on tangible infrastructure. For NGOs, partnership building and training activities often offer less bureaucratic and politicized entry points for supporting government efforts to advance EIP.

The landscape review is accompanied by resource tables and a series of case studies on evidence-informed policymaking training in Ghana, demographic dividend policies in Kenya, and a community safety strategy in Canada. Our goal for this body of work, which identifies more than 100 strategies and mechanisms for advancing the use of data and evidence in government policy and practice, is to promote a sharing of experiences and lessons learned among leaders in government, NGOs, and other partners.

We’ll be building on this work in the months ahead, and close with a few questions we hope to explore further:

  • How effective are government strategies and mechanisms in promoting the use of data and evidence? Are there approaches that are more effective than others in improving the use of evidence, and that ultimately have the greatest impact in achieving development objectives?                                                                        
  • How can governments be best supported in their efforts to institutionalize the use of data and evidence? Could structured joint learning and networking approaches help to accelerate the adoption of strategies and mechanisms for advancing the use of data and evidence?

We are grateful to the experts interviewed for this review, who contributed their time and input (you can find many of them listed in Appendix 2 of the report), and to the William and Flora Hewlett Foundation for generously supporting this work.

We encourage you to continue visiting the Evidence in Action blog for updates. If you have questions or would like more information, please contact me at Abeba@results4all.org. And please share your feedback with us by tweeting at @resultsforall with the hashtag #GlobalLandscapeReview.

Thanks for reading!