Garbarski, Dana; Dykema, Jennifer; Schaeffer, Nora Cate; Jones, Cameron P; Neman, Tiffany S; Edwards, Dorothy Farrar
doi: 10.1093/poq/nfad028; pmid: 37705920
Interviewers’ postinterview evaluations of respondents’ performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents’ and interviewers’ sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process—various respondents’ behaviors and response quality indicators—are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents’ and interviewers’ sociodemographic characteristics. Our results show that both respondents’ behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers’ gender. Further, IEPs were associated with respondents’ education and ethnoracial identity, net of respondents’ behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
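A minimal sketch of the kind of multilevel setup this abstract implies, assuming respondents are nested within interviewers and IEPs are regressed on respondent behaviors and response-quality indicators with an interviewer random intercept. Variable names and the simulated data are hypothetical, not the authors' materials.

```python
# Illustrative sketch (not the authors' code): IEPs modeled with fixed effects
# for features of the response process and a random intercept for interviewers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, n_per = 30, 40
interviewer = np.repeat(np.arange(n_interviewers), n_per)
interviewer_effect = rng.normal(0, 0.5, n_interviewers)[interviewer]

df = pd.DataFrame({
    "interviewer_id": interviewer,
    "resp_behavior": rng.normal(size=n_interviewers * n_per),  # e.g., count of adequate answers
    "resp_quality": rng.normal(size=n_interviewers * n_per),   # e.g., item-nonresponse index
})
df["iep"] = (3 + 0.4 * df["resp_behavior"] - 0.3 * df["resp_quality"]
             + interviewer_effect + rng.normal(0, 1, len(df)))

# Random intercept captures interviewer-specific evaluation "frameworks";
# fixed effects capture what transpires during the interview.
result = smf.mixedlm("iep ~ resp_behavior + resp_quality", df,
                     groups=df["interviewer_id"]).fit()
print(result.summary())

# Share of IEP variance attributable to interviewers (intraclass correlation).
var_interviewer = result.cov_re.iloc[0, 0]
icc = var_interviewer / (var_interviewer + result.scale)
print(f"Interviewer-level share of variance: {icc:.2f}")
```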
Gummer, Tobias; Kunz, Tanja; Rettig, Tobias; Höhne, Jan Karem
doi: 10.1093/poq/nfad027
When answering political knowledge questions in web surveys, respondents can look up the correct answer on the Internet. This behavior artificially inflates political knowledge scores that are supposed to measure fact-based information. In the present study, we address gaps left by previous research on looking up answers to political knowledge questions in web surveys. We conducted an experimental study based on the German Internet Panel, a large-scale population survey with a probability-based sample. Based on this experiment, we show that instructions help reduce the number of lookups for knowledge questions in web surveys. We provide further evidence that looking up answers yields more correct answers to knowledge questions and, thus, inflated political knowledge scores. Finally, our findings illustrate the challenges and benefits of using self-reported or paradata-based lookup measures, as well as a combined measure that draws on both, to detect lookups to political knowledge questions in web surveys.
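A minimal sketch of how a combined lookup indicator might be formed from a self-reported flag and a paradata-based flag (for example, the survey page losing focus). Column names and values are hypothetical; the article's actual measures may differ.

```python
# Illustrative sketch: combine self-report and paradata into a single lookup flag.
import pandas as pd

df = pd.DataFrame({
    "self_report_lookup": [0, 1, 0, 0, 1],   # respondent admits looking up the answer
    "page_defocus_lookup": [1, 1, 0, 0, 0],  # paradata: respondent left the survey tab
})

# A respondent is flagged as a likely lookup if either source indicates one.
df["combined_lookup"] = (
    df["self_report_lookup"].astype(bool) | df["page_defocus_lookup"].astype(bool)
).astype(int)

print(df["combined_lookup"].mean())  # estimated lookup rate under the combined measure
```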
Bollinger, Christopher Robert; Tasseva, Iva Valentinova
doi: 10.1093/poq/nfad025; pmid: 37705921
We use a unique panel of household survey data—the Austrian version of the European Union Statistics on Income and Living Conditions (SILC) for 2008–2011—that has been linked to individual administrative records on both state unemployment benefits and earnings. We assess the extent and structure of misreporting across similar benefits and between benefits and earnings. We document that many respondents fail to report participation in one or more of the unemployment programs. Moreover, they inflate earnings for periods when they are unemployed but receiving unemployment compensation. To demonstrate the impact of income-source confusion on estimators, we estimate standard Mincer wage equations. Since unemployment is associated with lower education, reporting unemployment benefits as earnings biases the estimated returns to education downward. Failure to report unemployment benefits also leads to substantial sample bias when selecting on these benefits, as one might when estimating the returns to job training.
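For reference, the standard textbook form of the Mincer wage equation the abstract refers to; the authors' exact specification may differ.

```latex
% Standard Mincer wage equation (textbook form):
\ln w_i = \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^2 + \varepsilon_i
% w_i: earnings; S_i: years of schooling; X_i: labor-market experience.
% If less-educated respondents report unemployment benefits as earnings,
% measured w_i is inflated at low S_i, biasing \beta_1 (the estimated
% return to education) toward zero.
```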
West, Brady T; Andridge, Rebecca R
doi: 10.1093/poq/nfad018; pmid: 37705923
Among the numerous explanations that have been offered for recent errors in pre-election polls, selection bias due to non-ignorable partisan nonresponse, where the probability of responding to a poll is a function of the candidate preference that the poll is attempting to measure (even after conditioning on other relevant covariates used for weighting adjustments), has received relatively little attention in the academic literature. Under this type of selection mechanism, estimates of candidate preferences based on individual or aggregated polls may be subject to significant bias, even after standard weighting adjustments. Until recently, methods for measuring and adjusting for this type of non-ignorable selection bias have been unavailable. Fortunately, recent developments in the methodological literature have provided political researchers with easy-to-use measures of non-ignorable selection bias. In this study, we apply a new measure developed specifically for estimated proportions to this challenging problem. We analyze data from 18 pre-election polls: nine telephone polls conducted in eight states prior to the 2020 US presidential election, and nine polls conducted either online or by telephone in Great Britain prior to the 2015 general election. We rigorously evaluate the ability of this new measure to detect and adjust for selection bias in estimates of the proportion of likely voters who will vote for a specific candidate, using official outcomes from each election as benchmarks and alternative data sources for estimating key characteristics of the likely voter populations in each context.
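The underlying logic can be seen in the basic pattern-mixture decomposition for a proportion; this is a general identity, not the specific measure the article applies.

```latex
% Pattern-mixture decomposition of a population proportion:
p = r\,p_R + (1 - r)\,p_M
% p: proportion of likely voters supporting a candidate; r: response rate;
% p_R, p_M: proportions among responders and non-responders.
% The bias of the respondent-only estimate is therefore
p_R - p = (1 - r)\,(p_R - p_M)
% which is nonzero whenever preference differs between responders and
% non-responders, even after weighting on observed covariates.
```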
Henninger, Felix; Kieslich, Pascal J; Fernández-Fontelo, Amanda; Greven, Sonja; Kreuter, Frauke
doi: 10.1093/poq/nfad034; pmid: 37705922
Survey participants’ mouse movements provide a rich, unobtrusive source of paradata, offering insight into the response process beyond the observed answers. However, the use of mouse tracking may require participants’ explicit consent for their movements to be recorded and analyzed. Thus, the question arises of how its presence affects participants’ willingness to take part in a survey at all: if prospective respondents are reluctant to complete a survey when additional measures are recorded, collecting paradata may do more harm than good. Previous research has found that other paradata collection modes reduce the willingness to participate, and that this decrease may be influenced by the specific motivation given to participants for collecting the data. However, the effects of mouse movement collection on survey consent and participation have not yet been addressed. In a vignette experiment, we show that reported willingness to participate in a survey decreased when mouse tracking was part of the overall consent. However, a larger proportion of the sample indicated willingness to both take part and provide mouse-tracking data when these decisions were combined, compared to an independent opt-in to paradata collection, separated from the decision to complete the study. This suggests that survey practitioners may face a trade-off between maximizing their overall participation rate and maximizing the number of participants who also provide mouse-tracking data. Explaining motivations for paradata collection did not have a positive effect and, in some cases, even reduced participants’ reported willingness to take part in the survey.
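A minimal sketch, with made-up counts, of how the trade-off described above could be quantified: comparing the share of invitees who both participate and provide mouse-tracking data under a combined consent versus a separate opt-in. The figures and condition labels are hypothetical, not the study's results.

```python
# Illustrative sketch: two-proportion comparison across consent conditions.
from statsmodels.stats.proportion import proportions_ztest

# (participate AND provide mouse-tracking data, total invited) per condition
combined_consent = (312, 500)   # single decision covering survey + mouse tracking
separate_optin = (268, 500)     # participation first, paradata opt-in afterwards

stat, pvalue = proportions_ztest(
    count=[combined_consent[0], separate_optin[0]],
    nobs=[combined_consent[1], separate_optin[1]],
)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
```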
Struminskaya, Bella; Sakshaug, Joseph W
doi: 10.1093/poq/nfad030; pmid: 37705919
Survey researchers frequently use supplementary data sources, such as paradata, administrative data, and contextual data to augment surveys and enhance substantive and methodological research capabilities. While these data sources can be beneficial, integrating them with surveys can give rise to ethical and data privacy issues that have not been completely resolved. In this research synthesis, we review ethical considerations and empirical evidence on how privacy concerns impact participation in studies that collect these novel data sources to supplement surveys. We further discuss potential approaches for safeguarding participants’ data privacy during data collection and dissemination that may assuage their concerns. Finally, we conclude with open questions and suggested avenues for future research.