TLDR

Evidence on strengthening institutions is scarce because of limited exogenous variation, measurement difficulties, and the temptation to cherry‑pick among many indicators. The study evaluates a governance program that imposes participation requirements for marginalized groups to make local institutions more democratic and egalitarian, and tests for learning‑by‑doing effects. The authors exploit the random assignment of a "community‑driven development" program in Sierra Leone, develop innovative real‑world outcome measures, and use a preanalysis plan to guard against data mining. The program produced positive short‑run effects on local public goods and economic outcomes but no lasting changes in collective action, decision making, or the participation of marginalized groups, and the authors show that without the preanalysis plan they could have reached two divergent, equally erroneous interpretations of the program's institutional impacts.

Abstract

Despite their importance, there is limited evidence on how institutions can be strengthened. Evaluating the effects of specific reforms is complicated by the lack of exogenous variation in institutions, the difficulty of measuring institutional performance, and the temptation to “cherry pick” estimates from among the large number of indicators required to capture this multifaceted subject. We evaluate one attempt to make local institutions more democratic and egalitarian by imposing participation requirements for marginalized groups (including women) and test for learning-by-doing effects. We exploit the random assignment of a governance program in Sierra Leone, develop innovative real-world outcome measures, and use a preanalysis plan (PAP) to bind our hands against data mining. The intervention studied is a “community-driven development” program, which has become a popular strategy for foreign aid donors. We find positive short-run effects on local public goods and economic outcomes, but no evidence for sustained impacts on collective action, decision making, or the involvement of marginalized groups, suggesting that the intervention did not durably reshape local institutions. We discuss the practical trade-offs faced in implementing a PAP and show how in its absence we could have generated two divergent, equally erroneous interpretations of program impacts on institutions.
