As Stephen Brown noted in a recent blog, much of the media commentary on the latest Peer Review of Canada’s development co-operation has focused on its recommendations to increase spending and implement the Feminist International Assistance Policy. Brown reminds us that the Peer Review makes other useful recommendations to improve the quality of Canadian aid and its coherence with private investments.
This blog highlights two issues that did not make it into the DAC’s recommendations: Canada’s involvement in fragile and conflict-affected situations (FCAS), and its approach to evaluation and institutional learning.
On the first, Canada’s peers concluded that “Involving all relevant government bodies when a crisis strikes ensures that Canada’s response is coherent. Canada also demonstrates flexibility in using appropriate instruments to respond to people’s needs in crisis, and to help build and create stability.” They also noted that “Canada has begun … use of integrated conflict analysis” and suggested that its “Peace and Stabilization Operations Programme is a good model for a whole-of-government approach to fragility and crisis.”
It is hard to disagree that the systematic analysis of conflict dynamics in FCAS could improve Canada’s involvement in those complicated contexts. Yet it is fair to ask what evidence the reviewers found to justify their conclusions about Canada’s whole-of-government responses and about the Peace and Stabilization Operations Programme (PSOP).
In 2015–2016, Global Affairs Canada (GAC) released evaluations of bilateral co-operation in its largest whole-of-government efforts in FCAS at the time: Afghanistan, Haiti, and the West Bank and Gaza. All three reviews concluded that Canadian co-operation had been relevant to FCAS priorities and had led to some short- and medium-term results. However, shifting Canadian priorities and insufficient investment in national institutions jeopardized the sustainability of changes on the ground.
The subsequent evaluation of programming by the Stabilization and Reconstruction Taskforce (START), which preceded PSOP, concluded that START had supported relevant projects in many FCAS. Yet it also noted that shifting Canadian priorities and a weakened field presence jeopardized the sustainability of those initiatives — notably in Afghanistan and South Sudan, which had sunk back into violent conflict. Given those official conclusions and the fact that there has not yet been an evaluation of PSOP programming, it seems incongruous to conclude that Canada’s whole-of-government responses to crises are “coherent” or that PSOP has overcome its predecessor’s shortcomings.
The second issue is Canada’s approach to evaluation and institutional learning. There, peer reviewers noted that GAC “is strengthening its in-house evaluation function in an effort to enhance efficiency, quality and usefulness of evaluations.” They also suggested that new “directives for decentralised evaluations — planned for 2018 — provide an opportunity to improve transparency … while promoting learning across branches.”
Again, it is difficult to disagree with observations about ongoing changes. Yet Canada’s peers could have provided evidence-based analysis of Canada’s actual evaluation practices, particularly in FCAS, which have been the site of Canada’s largest international engagements since 9/11.
As Nipa Banerjee has shown, GAC’s 2015 evaluation of Canada–Afghanistan co-operation was not methodologically rigorous. Nor were key lessons applied at the time. My experience with Canada–Haiti co-operation suggests that GAC’s evaluation of that program was more rigorous and that many of its recommendations were integrated into Canada’s new co-operation strategy. The evaluation of Canada–South Sudan co-operation, released in 2017, was more innovative since it was the first joint evaluation of humanitarian, development, and stabilization activities. Yet its conclusions were not applied, given the new civil war on the ground.
So GAC’s evaluations of co-operation in FCAS have varied in methodological rigour and in the extent to which they actually led to programmatic changes. Moreover, Canada did not involve its FCAS partners in those evaluation processes, missing opportunities to foster the shared learning and mutual accountability that are supposed to be central to our partnerships with FCAS.
New opportunities to address the limitations of Canada’s past practices are emerging. A joint evaluation of Canada’s whole-of-government engagement in Colombia is due for release in late 2018. If it is based on a more methodologically robust assessment, it may reveal deeper insights into what is possible in the context of a nationally owned peacebuilding process. A rigorous evaluation of Canada–Mali co-operation, due in 2019, could help us understand what is possible in a less permissive environment marked by weak governance and ongoing conflict.

Those exercises could be stepping-stones to a meta-evaluation, in 2020, of what has been learned from Canada’s involvement in FCAS. Some of our FCAS partners could be involved in those evaluations so that they might also learn from our joint efforts. By that time, a cumulative assessment of what has and has not been possible and why, in different fragile and conflict-affected situations, will be long overdue. It might even contribute to the informed national conversation that we could not have before, about what Canada has been doing in FCAS since the heady days after September 11, 2001.