April 29th, 2014

Presenter(s): Dr. Michael Scriven - Distinguished Professor, Claremont Graduate University and Co-Director, Claremont Evaluation Center

Abstract: Professional, scientifically acceptable evaluation requires a wide range of competencies from the social science methodology toolkit, but that's not enough. Unlike the old-time social scientists, you've got to get to an evaluative conclusion—that's what it means to say you're doing evaluation—and to do that you apparently have to have some evaluative premises. Where do those come from, and how can you validate them against attack by people who don't like your conclusion? The way most evaluators—and the texts—do this is by relying on common agreement or intuitions about what value claims are correct, but of course that won't work when there is deep disagreement, e.g., about abortion, suicide hotlines, creationism in the science curriculum, 'natural' medicine, healthcare for the poor, torture, spying, and war. Evaluation-specific methodology covers the selection and verification/refutation of all value claims we encounter in professional evaluation; how to integrate them with data claims (and data syntheses) by inferences from and to them; and how to represent the integrated result in an evaluation report. Important sub-topics include: rubrics, needs assessment, the measurement of values, crowd-sourced evaluation, and the special case of ethical value claims. (Not to mention a completely new philosophy of science.)

The views expressed in this presentation are those of the individual presenters and do not necessarily reflect the views of The Evaluation Center or Western Michigan University.

Link to view other Evaluation Cafés or to learn about upcoming Cafés: wmich.edu/evalctr/evaluation-cafe/
