Canada’s Feminist International Assistance: Can Bad Policy be Well Implemented? (Part 2)


In Part 1 of this blog post, I discussed why Canada’s Feminist International Assistance Policy is likely to fail. However, even a bad policy can be implemented in ways that make failure less likely and create opportunities to learn. In this blog, I put forward 10 suggestions for doing this.

  1. Define empowerment. The policy, strangely enough, provides no definition of “empowerment.” It should. Without conceptual clarity about the intended impact, it won’t be possible to track progress. Defining empowerment is not easy; most scholars agree that it is a multi-dimensional concept (see, for example, Naila Kabeer’s classic text). These dimensions include the economic (some measure of financial autonomy), the political (being represented and making decisions), knowledge (education), and the psychological (the sense that one has value and deserves a good and fair existence).
  2. Acknowledge that the dimensions of empowerment are not necessarily aligned. An intervention, for example, may have a positive impact on one dimension and a negative one on another, such as providing more economic resources to women but making them more vulnerable to violence. Not only must the impacts of interventions be measured on more than one dimension, but also we must anticipate that an intervention may have both positive and negative impacts. An ethical decision about whether or not to continue an intervention can only be made once we acknowledge this and then do our best to measure impacts.
  3. Increase available resources for evaluation. The policy wants to be bold in trying something new. But where real lives are at stake, we must carefully monitor and evaluate intended and unintended, positive and negative impacts. If Canada really wants to allocate 95% of its international assistance to what is basically an experiment, then at least 30% of the funds should be used for monitoring and evaluation.
  4. Don’t leave monitoring and evaluation to the implementers. This is obvious. Much too often large funders trust the “self-assessment” of their implementing partners, who have little incentive (and often few resources) to invest in honest, solid evaluations. Independent evaluators are therefore essential.
  5. Be creative, persistent, and rigorous about evaluation. Include a team of independent evaluators in the design and implementation of a number of “flagship” projects and programs. Give them enough time and resources to collect baseline data. Keep that team in the field for as long as it takes to measure impacts, which often materialize long after a project has officially ended. Implement similar projects in different locations and contexts to learn about the impact of these variables (rural versus urban, for example). Test different approaches to a similar problem (for example, what works best for retaining girls in school — better sanitary installations, better curricula, or a conditional cash transfer to mothers?). In short, step up your evaluation game.
  6. Develop different strategies for different contexts. The success of a project often depends on its context. A project may work in a secular middle-income country (Vietnam) but not in a poor, religious, conflict-ridden country (Afghanistan). Develop a list of sectors and approaches that are suited (or not) for a given context. This must happen at the strategic level; it cannot be left to implementers. There is always an implementer happy to carry out even the most inappropriate project. A recent example is “Promoting Women’s Political Participation in Afghanistan,” implemented by the National Democratic Institute, which predictably failed completely because, in the Afghan context, all democracy promotion projects are bound to fail. Another example is USAID’s “Promote” program — a $216 million, 5-year program to improve the status of more than 75,000 women in all levels of Afghan society — which has had virtually no impact.
  7. Focus. According to the policy, Canada’s aid is to be disbursed almost everywhere (with 50% earmarked for sub-Saharan African countries), and in almost all sectors (the six action areas are really only broad labels under which every sector of development aid is subsumed). Canada is a small donor (see Stephen Brown’s blog), and given the experimental character of this new aid policy, focusing on a carefully selected subset of sectors would be useful. This means more attention and more resources for a given sector, which leads to better learning and, in the end, to better results.
  8. Put more emphasis on “do no harm.” The current policy is silent on this issue, which is a glaring omission for an “activist and transformative” approach. In many contexts, “activist and transformative” means “intrusive and conflictual” (rapid social change always is). A commitment to “do no harm” principles should be made at the policy level and demanded from implementers.
  9. Commit to learning and let others learn. Suggestions 3, 4, and 5 refer to evaluation. Conducting solid evaluations is difficult and costly (but not as costly as uninformed development). This hard-won knowledge must be shared. Commit to more transparency. Do not bury the data; publish good and bad results. This way other donors can learn as well.
  10. Embrace experiment. Chances are this new policy will be little more than a branding exercise, which at the implementation level will lead to little change beyond re-labelling projects. Should the policy lead to substantial changes in funding priorities and approaches to development, however, then GAC should acknowledge the experimental character of the policy — and embrace it. This would include acknowledging that most of the policy’s causal claims are untested, and that we have a lot to learn about what works, where, why, and how. It would also mean accepting uncertainty — experiments may end in failure or success. The only way to make sense of failure is to ensure that everyone can learn from it. Embracing experiment, essentially, means prioritizing curiosity over ideology. To the benefit of everyone.

The CIPS Blog is written only by subject-matter experts.
CIPS blogs are protected by the Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)