Abstract

Although writing clear questions is an accepted goal in surveys, procedures to ensure that each key term is consistently understood are not routine. Researchers who do not adequately test respondents' understanding of questions must assume that ambiguity will not have a large or systematic effect on their results. Seven questions drawn from those used in national health surveys were subjected to special pretest procedures and found to contain one or more poorly defined terms. When the questions were revised to clarify the definitions of key terms, significantly different estimates resulted. The implication is that unclear terms are likely to produce biased estimates. The results indicate that evaluating survey questions to identify key terms that are not consistently understood, and defining unclear terms, are ways to reduce systematic error in survey measurement.

© 1992, the American Association for Public Opinion Research
Public Opinion Quarterly – Oxford University Press
Published: Jan 1, 1992