Cristian Gugiu – Interdisciplinary Evaluation Doctoral Student, WMU
October 30, 2007
One of the cornerstones of methodology is that "a weak design yields unreliable conclusions." While this principle is certainly true, the constraints of conducting evaluations in real-world settings often necessitate less-than-ideal designs. To date, no quantitative or qualitative method exists for estimating the impact of sampling error, measurement error, and design on the precision of an evaluative conclusion. Consequently, evaluators formulate recommendations, and decision makers implement program and policy changes, without full knowledge of the robustness of the underlying evaluative conclusion. In light of the billions of dollars spent annually on evaluations and the millions of lives affected, the cost of decision error can be disastrous. This paper introduces an analytical method for estimating the degree of confidence that can be placed in an evaluative conclusion and discusses the factors that affect the precision of a summative conclusion.
View the slides for this presentation: