
TLDR

Recent studies of manpower training programs show that different nonexperimental estimators yield widely varying impact estimates, prompting calls for experimental evaluation and highlighting a lack of systematic guidance for selecting among estimators. This paper investigates whether simple specification tests can guide the choice of a suitable nonexperimental estimator. Reanalysis of the National Supported Work data demonstrates that a simple testing procedure eliminates those nonexperimental estimators whose results are inconsistent with the experimental estimates of program impact.

Abstract

The recent literature on evaluating manpower training programs demonstrates that alternative nonexperimental estimators of the same program produce an array of estimates of program impact. These findings have led to the call for experiments to be used to perform credible program evaluations. Missing in all of the recent pessimistic analyses of nonexperimental methods is any systematic discussion of how to choose among competing estimators. This article explores the value of simple specification tests in selecting an appropriate nonexperimental estimator. A reanalysis of the National Supported Work Demonstration data previously analyzed by proponents of social experiments reveals that a simple testing procedure eliminates the range of nonexperimental estimators at variance with the experimental estimates of program impact.
