Invited talk at the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention, Hyattsville, Maryland, October 2011.
Speaker: Daniel Oberski, PhD (University of Maryland, College Park)
Introduction: Aaron Maitland, PhD (NCHS)
Slides are at: daob.org/media/homepage/files/Oberski-NCHS-2011.zip
Abstract
It is well known that design characteristics of survey questions, such as the number of answer categories, full versus partial labeling of answer scales, and the linguistic complexity of the request, can influence the responses obtained. Although each question's design must be tailored to the intended measure, there is also evidence that some question designs are better than others in general (Dijkstra & van der Zouwen 1982, Alwin & Krosnick 1991, Alwin 2007).
I report findings from several large cross-national surveys in which the response reliability and validity of 3011 questions could be estimated from built-in Multitrait-Multimethod (MTMM) experiments. For each of the 3011 questions analyzed, a team of coders coded a large number of design characteristics. These design codes were then related to the estimated reliabilities and validities in a predictive meta-analysis (see Saris & Gallhofer 2007 for an early analysis of a much smaller dataset).
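As background, here is a minimal sketch of the true-score MTMM measurement model commonly used to obtain such estimates (the notation is illustrative and may differ from the exact parameterization used in the talk):

    y_{ij} = r_{ij} \, T_{ij} + e_{ij}     % observed response to trait i by method j: true score plus random error
    T_{ij} = v_{ij} \, F_i + m_{ij} \, M_j % true score: trait factor plus method factor
    q_{ij} = r_{ij} \, v_{ij}              % quality coefficient: reliability times validity

Here r_{ij} is the reliability coefficient, v_{ij} the validity coefficient, and m_{ij} the method effect; the squared quality q_{ij}^2 is the proportion of observed variance attributable to the trait of interest.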
I built the predictive meta-analysis and coding scheme into a web application called the "Survey Quality Predictor" (SQP2). SQP2 forecasts a question's reliability and validity from its design characteristics.
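To make the idea of such a forecast concrete, here is a minimal illustrative sketch of a predictive meta-analysis: coded design characteristics are used to train a regression model that predicts a question's estimated reliability. The column names, toy data, and the choice of a random-forest regressor are assumptions for illustration only, not the actual SQP2 implementation.

    # Sketch: predict a question's MTMM-estimated reliability from design codes.
    # All variable names and data values below are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical coded dataset: one row per MTMM-evaluated question.
    questions = pd.DataFrame({
        "n_categories":  [5, 7, 11, 4],             # number of answer categories
        "fully_labeled": [1, 0, 0, 1],              # 1 = all categories labeled
        "n_words":       [12, 25, 40, 9],           # proxy for linguistic complexity
        "reliability":   [0.80, 0.72, 0.65, 0.85],  # MTMM estimate (outcome)
    })

    X = questions.drop(columns="reliability")
    y = questions["reliability"]

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, y)

    # Forecast the reliability of a newly coded question.
    new_question = pd.DataFrame(
        {"n_categories": [7], "fully_labeled": [1], "n_words": [15]}
    )
    print(model.predict(new_question))

In practice the training set would contain the full set of coded characteristics for all 3011 questions rather than the handful of toy rows shown here.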
In the talk I will discuss the approach taken to estimate the reliability and validity of survey questions, present some results of the predictive meta-analysis, and demonstrate the alpha version of the new SQP2 web application. The demonstration shows how a given survey question can be coded on its design characteristics to obtain an estimate of its reliability and internal validity.