Race of Interviewer Effects in Telephone Surveys Preceding the 2008 U.S. Presidential Election

Abstract

Race of interviewer effects are presumed to occur in surveys because respondents answer questions differently depending on interviewer race. This article explored an alternative explanation: differential respondent recruitment. Data from telephone interviews conducted during the 2008 U.S. Presidential election campaign by major survey organizations (ABC News/Washington Post, CBS News/New York Times, and Gallup) indicate that African-American interviewers were more likely to elicit statements of the intent to vote for Barack Obama than were White interviewers. But this effect occurred because African-American interviewers were more likely than White interviewers to elicit survey participation by African-American respondents, and/or White interviewers were more likely to elicit participation by White respondents. Thus, differences between interviewers in the responses they obtain are not necessarily attributable to respondent lying.

During most of the 2008 U.S. Presidential nomination and general election campaign season, Barack Obama consistently led John McCain and other Republican candidates in preelection polls (Gallup, 2008). But doubt was often expressed before that election because of concern about what was dubbed the "Bradley effect": a presumed tendency for respondents to overstate their support for an African-American candidate running against a White candidate (Berman, 2008; Carroll, 2008; Cillizza, 2008; Liasson, 2008; Zernike, 2008). The close resemblance of the final national preelection poll results that year to the government's vote totals for the 2008 election led observers to conclude that the Bradley effect did not occur (Frankovic, 2009; Langer et al., 2009; Zernike & Sussman, 2008), a finding consistent with careful analysis of polls done preceding other recent elections involving African-American candidates (Hopkins, 2009).

A much larger body of research has explored another way that race might have affected the results of preelection polls that year and other surveys as well: race of interviewer effects. In this literature, researchers documented the influence of an interviewer's race on the answers that people give to survey questions (West & Blom, 2017). This literature generally pointed to misreporting as the main mechanism behind this effect, attributing the difference in responses collected by interviewers of different races to respondents altering their answers depending on who asks the questions (Anderson, Silver, & Abramson, 1988; Cotter, Cohen, & Coulter, 1982; Davis, 1997a, 1997b; Finkel, Guterbock, & Borg, 1991; Schaeffer, 1980). In this article, we explore whether such effects were apparent in data collected before the 2008 U.S. Presidential election, and we explore an alternative explanation for such effects: the impact of interviewer race on respondents' decisions about whether to participate in a survey in the first place. Specifically, we explore whether African-American interviewers were more likely than White interviewers to elicit expressions of the intent to vote for Barack Obama, and whether this might have occurred because African-American interviewers elicited survey participation by African-American respondents at a higher rate than did White interviewers, and/or White interviewers elicited survey participation by White respondents at a higher rate than did African-American interviewers.
This alternative explanation fits well with recent work that has more generally examined the effects of interviewer characteristics on various components of error in the total survey error framework (Groves et al., 2009; see West & Blom, 2017; West, Kreuter, & Jaenichen, 2013; West & Olson, 2010). This line of work showed that between-interviewer variance is at least partly attributable to varying nonresponse error (e.g., different interviewers recruiting respondents with various demographic characteristics at different rates) in addition to measurement error (e.g., respondents adjusting their answers differently for different interviewers; West & Olson, 2010). This article adds to this line of work by exploring whether race of interviewer might cause interviewer-related nonresponse bias. We begin below by reviewing past studies of interviewer effects, and we propose theoretical accounts that could explain such effects. Then, we outline the explanation on which we focus and test it using data from 36 telephone surveys conducted before the 2008 U.S. Presidential election by ABC/Washington Post, CBS/New York Times, and the Gallup Organization.

Past Studies on the Effects of Interviewer Race

Evidence of interviewer effects was published early in the history of survey research (Cantril, 1944), and since then, more than 100 studies have explored the effects of interviewer race. Past studies found that assessments of attitudes, behaviors, political knowledge, and other constructs appear to have been affected by the race of an interviewer (Cotter, Cohen, & Coulter, 1982; Davis & Silver, 2003; Schaeffer, 1980). Many such illustrations involved questions that explicitly mentioned race. For example, African-American interviewers were shown to elicit more racially tolerant responses than did White interviewers (Hatchett & Schuman, 1975). Likewise, respondents expressed warmer feelings toward a racial group and said that they trusted members of that racial group more when interviewed by a member of that group than when interviewed by a member of a different racial group (Anderson, Silver, & Abramson, 1988; Schuman & Converse, 1971), and respondents were more likely to say they supported a political candidate when the interviewer shared the candidate's race than when the interviewer did not (Davis, 1997b; Finkel, Guterbock, & Borg, 1991).

Other studies suggest that the race of an interviewer can also affect answers to questions that do not explicitly mention racial groups (Campbell, 1981; Cotter, Cohen, & Coulter, 1982; Schuman & Converse, 1971; Weeks & Moore, 1981). For example, when naming their favorite actors, African-American respondents listed more African-Americans and fewer Whites when interviewed by an African-American interviewer than when interviewed by White interviewers (Schuman & Converse, 1971). Furthermore, African-American respondents reported warmer feelings toward Whites when interviewed by Whites than when interviewed by an African-American (Anderson, Silver, & Abramson, 1988). Conversely, African-Americans were more likely to express pro-African-American attitudes and higher racial consciousness when interviewed by African-Americans than when interviewed by Whites (Davis, 1997a).

Possible Mechanisms of Interviewer Race Effects on Survey Responses

A number of possible explanations might account for these effects of interviewer race. One mechanism involves the rules of conversational politeness (Campbell, 1981; Kane & Macaulay, 1993; Schuman & Converse, 1971).
The interaction between a respondent and an interviewer is a conversation, and respondents approach it as if the usual rules of conversations apply (Holbrook et al., 2000). One such rule, especially in conversations with strangers, is to be respectful and polite and not to insult one's conversational partner (Leech, 1983). Respondents might feel that it would be impolite to express an attitude or intended behavior that may signal disapproval of a social group to which the interviewer belongs. For example, if respondents assume that most African-Americans support most African-American candidates running for public office, then expressing a negative evaluation of such a candidate, or an intention to vote against such a candidate, might be perceived as impolite when talking to an African-American interviewer. This might lead a respondent to decline to answer a question (e.g., by saying that he/she has not yet made up his/her mind about for whom to vote) or perhaps to express an intention to vote for an African-American candidate when interviewed by an African-American interviewer.

Alternatively, interviewer effects might occur because of automatic aspects of the cognitive process by which survey responses are constructed. When respondents are asked a question to which they do not have a prestored answer in long-term memory, they can retrieve pieces of information from memory to build an answer (Tourangeau, Rips, & Rasinski, 2000; Zaller & Feldman, 1992). This process can be affected by the social context in which the respondent is asked the question, including the interviewer's characteristics (Nelson & Norton, 2005; Rasinski et al., 2005; Srull & Wyer, 1989). For example, interacting with a professional, competent, and friendly African-American interviewer might prime favorable images of African-Americans in the mind of a respondent during the interview. These favorable images might enhance a respondent's inclination to express favorable opinions of African-Americans generally and increase the likelihood of reporting an intent to vote for an African-American candidate when interviewed by an African-American interviewer rather than a White interviewer (Huddy et al., 1997; Krysan & Couper, 2003).

An Alternative Explanation

All of the above explanations assert that the race of an interviewer can influence the process by which a respondent generates his or her answer to a question. However, interviewer race may appear to influence answers to race-related questions not by influencing the response process but instead by altering the composition of the participating sample (Jones & Lang, 1980). Put differently, what have appeared to be interviewer effects may have resulted from nonresponse bias attributable to the race of the interviewer (West & Olson, 2010). Because people are more cooperative with others who appear more similar to themselves (Dohrenwend, Colombotos, & Dohrenwend, 1968), potential survey respondents might be more willing to agree to an interview with a same-race interviewer. That is, African-Americans might be more willing to be interviewed by an African-American than by a White interviewer, and vice versa. Indeed, several studies have found that recruitment success increased when interviewer and respondent race were matched (Johnson et al., 2000; Webster, 1996).
This will naturally produce an imbalance in the racial composition of the respondent pools of White and African-American interviewers: the pool of individuals interviewed by African-American interviewers may contain a greater proportion of African-American respondents than the pool interviewed by White interviewers, and vice versa. Although not specifically about interviewer race, the analysis by West and Olson (2010) showed that for some questions, what appeared to be interviewer error could be traced back to nonresponse error. To the extent that African-American respondents, answering honestly and accurately, give systematically different answers to survey questions than do White respondents, differential recruitment by interviewers of different races will lead to the collection of different responses by those interviewers, even in the absence of any respondents distorting their answers. For example, African-American respondents were more likely than White respondents to intend to vote for Barack Obama. Consequently, the percentage of respondents who expressed an intent to vote for Mr Obama may have been greater among interviews done by African-American interviewers than among interviews done by White interviewers.

One might imagine that race of interviewer effects would be weaker during telephone interviews than during interviews done face-to-face (which have been the focus of many past studies), because race may seem difficult to detect over the phone, and people may be more comfortable answering questions in ways that might insult telephone interviewers, who are far away. But in fact, as compared with respondents interviewed face-to-face, respondents interviewed by telephone are more likely to provide socially desirable answers (Holbrook et al., 2003; West, Kreuter, & Jaenichen, 2013). Moreover, according to work by linguists such as Baugh (2000; Purnell, Idsardi, & Baugh, 1999), more than 80% of listeners are able to correctly identify accents in English associated with particular races (e.g., African-American Vernacular English, Chicano English) based on hearing the single word "Hello." Therefore, race of interviewer may indeed affect the cooperation decisions of potential respondents during telephone interviews.

In this article, we explore the effect of interviewer race on data collected via telephone interviewing during the 2008 U.S. Presidential election campaign by three leading survey organizations: ABC/Washington Post, CBS/New York Times, and the Gallup Organization. We hypothesized that reported intention to vote for Mr Obama would be greater among respondents interviewed by African-American interviewers than among those interviewed by White interviewers and that this effect would be especially pronounced among African-American respondents. Second, we explored whether African-American interviewers recruited a greater proportion of African-American respondents than did White interviewers. Finally, we assessed whether the relation between interviewer race and intention to vote for Mr Obama was mediated by what we call "recruitment effects": the tendency of African-American interviewers to recruit a greater proportion of African-American respondents than White interviewers. This logic is founded on the possibility that respondents offer honest reports of their voting intentions regardless of interviewer race. The sketch below illustrates this arithmetic.
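To make this logic concrete, here is a minimal numerical sketch in Python. All numbers are invented round figures for illustration, not estimates from our data; it simply shows how differential recruitment alone can shift the percentages collected by interviewers of different races:

```python
# Hypothetical illustration: honest reporting plus differential recruitment
# can still produce an apparent race-of-interviewer "effect" in collected data.
# All numbers below are invented for illustration only.

support = {"white_resp": 0.40, "black_resp": 0.90}  # honest Obama support by respondent race

# Hypothetical racial composition of each interviewer group's respondent pool
pool_white_iwer = {"white_resp": 0.85, "black_resp": 0.15}
pool_black_iwer = {"white_resp": 0.78, "black_resp": 0.22}

def obama_share(pool):
    """Expected percent intending to vote for Obama, given honest answers."""
    return sum(share * support[race] for race, share in pool.items())

print(f"White interviewers: {obama_share(pool_white_iwer):.1%}")  # 47.5%
print(f"Black interviewers: {obama_share(pool_black_iwer):.1%}")  # 51.0%
```

Every respondent in this sketch answers honestly, yet the two interviewer groups record different Obama percentages, purely because their respondent pools differ.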
Effects of race on survey participation may be the result of conscious and intentional decision-making by interviewers or respondents or may be the result of unintentional, nonconscious decision-making.1 We also tested whether race-of-interviewer effects were stronger earlier in the preelection season. This hypothesis grows from the speculation that people may be more certain about their candidate preferences later in a campaign, after they have gathered more information about the candidates. As a result, when asked for whom they would vote, most respondents might retrieve a strong cue from their long-term memories, which would not be easily distorted by the characteristics of the interviewer. But long before Election Day, respondents might be less certain of their candidate preferences, so circumstantial factors at the time of the interview might more easily distort responses. Thus, time of interview offered a vehicle for testing whether the effect of interviewer race on collected data might be the result of response distortion.

Method

We compiled 36 data sets from the three leading survey organizations (11 from ABC/WP, 16 from CBS/NYT, and 9 from Gallup). The data were collected between March and November of 2008, and respondents were asked whether they would vote for Barack Obama or John McCain if the 2008 presidential election were being held on the day of the survey interview (see Table 1 for the fielding dates and sample size for each survey, and see Figure 1 for the responses collected by African-American and White interviewers over time).

Table 1
Field Dates and Sample Sizes of Individual Samples by Survey Organization

Organization                     Field dates                 Sample size
Gallup (Daily Tracking Polls)    March 1–March 31            30,361
Gallup (Daily Tracking Polls)    April 1–April 30            30,311
Gallup (Daily Tracking Polls)    May 1–May 31                29,313
Gallup (Daily Tracking Polls)    June 1–June 30              27,298
Gallup (Daily Tracking Polls)    July 1–July 31              30,347
Gallup (Daily Tracking Polls)    August 1–August 31          31,387
Gallup (Daily Tracking Polls)    September 1–September 30    30,572
Gallup (Daily Tracking Polls)    October 1–October 31        31,359
Gallup (Daily Tracking Polls)    November 1–November 2       2,037
  Total N                                                    242,985
  Study Na                                                   196,706
ABC/Washington Post              February 28 to March 2      1,126
ABC/Washington Post              April 10–April 13           1,197
ABC/Washington Post              May 8–May 11                1,122
ABC/Washington Post              June 12–June 15             1,125
ABC/Washington Post              July 10–July 13             1,119
ABC/Washington Post              August 19–August 22         1,298
ABC/Washington Post              September 5–September 7     1,133
ABC/Washington Post              September 19–September 22   1,082
ABC/Washington Post              September 27–September 29   1,271
ABC/Washington Post              October 8–October 11        1,101
ABC/Washington Post              October 16 to November 3    10,213
  Total N                                                    21,787
  Study Na                                                   14,778
CBS News                         March 15–March 18           1,073
CBS News                         March 20                    542
CBS/New York Times               March 28 to April 2         1,368
CBS/New York Times               April 25–April 29           1,065
CBS/New York Times               May 1–May 3                 671
CBS News                         May 30 to June 3            1,038
CBS/New York Times               October 3–October 5         957
CBS/New York Times               October 10–October 13       1,070
CBS/New York Times               October 17–October 19       518
CBS/New York Times               October 19–October 22       1,152
CBS/New York Times               October 25–October 29       1,439
CBS News                         October 28–October 30       833
CBS News                         October 29 to November 1    1,167
CBS News                         October 29–October 31       1,390
CBS News                         October 31 to November 2    1,051
CBS News                         November 1–November 3       1,091
  Total N                                                    16,425
  Study Na                                                   11,936

aStudy N refers to the number of respondents who were registered to vote and interviewed by either a White or Black interviewer.

Figure 1. Percent of respondents who said they would vote for Barack Obama by race of interviewer during the 2008 presidential election season (unweighted percentages of registered voters, N = 223,420).

All analyses reported here were done using data from respondents who completed their interviews (i.e., no partials or breakoffs) and answered the voting intention question (typically asked early in the interview) and the respondent race question (typically asked toward the end of the interview). We do not have data from people who refused to participate, broke off their interviews midway through, or did not answer either the voting intention or race question. All analyses were performed on the subset of respondents who reported that they were registered to vote2 and who were interviewed by a White interviewer or an African-American interviewer. The analyses relied on interviewers' reports of their races, not on respondents' perceptions of the interviewers' races (Davis & Silver, 2003).

Gallup Data Collection

The nine Gallup data sets came from daily tracking polls in which approximately 1,000 U.S. adults were interviewed each day (March 1 to November 2, 2008). Dual-frame random digit dial (RDD) sampling was implemented, including cell phones and landlines (Gallup, 2008).
The combined full sample size was 242,985; 196,706 people said they were registered to vote and were interviewed by an African-American interviewer or a White interviewer.

Measures

Vote Choice. Respondents were asked: "Suppose the presidential election were held today, and it included Barack Obama and Joe Biden as the Democratic candidates and John McCain and Sarah Palin as the Republican candidates. Would you vote for (Barack Obama and Joe Biden, the Democrats) or (John McCain and Sarah Palin, the Republicans)?" The order of the response options in the second sentence only (in parentheses) was randomized. People who said they would vote for Mr Obama were coded 1, and all other respondents—including those who said they would vote for Mr McCain, volunteered other candidate names, said neither candidate, refused to answer, or indicated that they did not know—were coded 0.3

Interviewer Race. Interviewers were asked to indicate their race at the time of hiring by answering this question: "What is your race? Are you White, African-American, Asian or some other race?" Among registered voters in the Gallup data sets, 85.5% of respondents were interviewed by a White interviewer, and 9.7% were interviewed by an African-American interviewer. Interviewer race was coded 1 for respondents interviewed by an African-American interviewer and 0 for respondents interviewed by a White interviewer.

Respondent Race. Respondents were asked, "Are you, yourself, of Hispanic origin or descent, such as Mexican, Puerto Rican, Cuban, or other Spanish background?" and "What is your race? Are you White, African-American, Asian, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or refusal to answer. We created three dummy variables identifying the last three of these racial groups, treating non-Hispanic Whites as the omitted group.

Demographics. Sex, age, education, income, region, and party identification were included in the analyses (see the Supplementary Materials for question wordings).

ABC/WP Data Collection

We analyzed data from RDD surveys done by ABC/WP via landline telephones between February 28 and November 3, 2008.4 Interviewing was conducted by TNS (Roper Center, 2012a). The 11 surveys included 21,787 respondents in total; 14,778 respondents said they were registered to vote and were contacted by either an African-American interviewer or a White interviewer.

Measures

Vote Choice. Respondents were asked: "If the 2008 presidential election were being held today and the candidates were (Barack Obama and Joe Biden, the Democrats,) and (John McCain and Sarah Palin, the Republicans,) for whom would you vote?" Respondents were randomly assigned to hear the names of the Democratic candidates first or the names of the Republican candidates first. Respondents who said they would vote for Mr Obama were coded 1, and all other respondents—those who said they would vote for Mr McCain, gave other candidate names, said neither or indicated they would not vote, refused to answer, or indicated that they did not know—were coded 0.5

Interviewer Race. Interviewer race was obtained from administrative data collected by TNS from the interviewers. Interviewers specified their race by selecting White, Black, Hispanic, Native American, or other race.
Of the respondents registered to vote, 50.5% were interviewed by a White interviewer, and 37.4% were interviewed by a Black interviewer.6 Interviewer race was represented by a dummy variable coded 1 for people interviewed by an African-American interviewer and 0 for people interviewed by a White interviewer.

Respondent Race. Respondents were asked: "Are you of Hispanic origin or background?" and were then asked: "Are you White, Black, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or refusal to answer. We created three dummy variables identifying the last three of these racial groups, treating non-Hispanic Whites as the omitted group.

Demographics. The wordings of the questions measuring sex, age, education, income, region, and party identification are available in the Supplementary Materials.

CBS/NYT Data Collection

CBS/NYT conducted RDD interviewing via cell phones and landlines (Roper Center, 2012b). We used data they collected between March 15 and November 3, 2008. A total of 16,425 respondents were interviewed; 11,936 said they were registered to vote and were interviewed by either a White or Black interviewer.

Measures

Vote Choice. Respondents were asked: "If the 2008 presidential election were being held today and the candidates were (Barack Obama for President and Joe Biden for Vice President, the Democrats), and (John McCain for President and Sarah Palin for Vice President, the Republicans), would you vote for (Barack Obama and Joe Biden) or (John McCain and Sarah Palin)?" The order of candidate names in parentheses was randomly determined, with the Republicans either first or second in both sentences. Stated intentions to vote for Barack Obama were coded 1, and other responses—including stated intentions to vote for John McCain, volunteered other candidate names, saying they would not vote, saying that it depends or that they were undecided, refusing to answer, or indicating that they did not know—were coded 0.7

Interviewer Race. Interviewer race was determined by observation of the interviewers by the staff of the survey organization and by interviewer reports when a person's race was not clear from observation alone. Interviewers could select Caucasian, African-American, Hispanic, or other race. We used data collected by Caucasian or African-American interviewers. Of the respondents who said they were registered to vote, 61.9% were interviewed by a Caucasian interviewer, and 18.7% were interviewed by an African-American interviewer.8 Interviewer race was coded 1 for respondents interviewed by an African-American interviewer and 0 for respondents interviewed by a White interviewer.

Respondent Race. Respondents were asked, "Are you of Hispanic origin or descent, or not?" and "Are you White, Black, Asian, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or a refusal to answer. Three dummy variables identified the last three of these racial groups; non-Hispanic Whites were the omitted group.

Demographics. Sex, age, education, income, region, and party identification were also included in the analysis (see the Supplementary Materials for the question wordings). A sketch of this coding scheme appears below.
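The coding rules described above can be sketched as follows. The data frame and column names here are hypothetical, and this is an illustration of the coding logic rather than the organizations' actual processing code:

```python
import pandas as pd

# Hypothetical raw survey records; column names are invented for illustration.
df = pd.DataFrame({
    "vote":     ["Obama", "McCain", "DK", "Obama", "Other"],
    "hispanic": ["No", "No", "Yes", "No", "No"],
    "race":     ["White", "Black", "White", "Black", "DK"],
})

# Vote choice: 1 = intends to vote for Obama, 0 = everything else
# (McCain, other candidates, neither, don't know, refused).
df["vote_obama"] = (df["vote"] == "Obama").astype(int)

# Collapse the Hispanic-origin and race items into four categories,
# then build dummies with non-Hispanic White as the omitted group.
def race_group(row):
    if row["race"] in ("DK", "Refused"):
        return "unspecified"
    if row["hispanic"] == "Yes" or row["race"] not in ("White", "Black"):
        return "other"
    return "nh_white" if row["race"] == "White" else "nh_black"

df["race_group"] = df.apply(race_group, axis=1)
dummies = pd.get_dummies(df["race_group"]).drop(columns="nh_white")
df = pd.concat([df, dummies], axis=1)
print(df)
```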
Analysis

Chi-square tests assessed whether the distribution of reported voting intentions differed depending on whether they were collected by African-American or White interviewers. Chi-square tests also assessed whether the distribution of respondent race varied depending on whether the interviewer was African-American or White. We estimated the parameters of linear probability models (LPMs) via ordinary least squares (OLS) to assess the influence of interviewer race on reported voting intentions, controlling for respondent race.9 The indirect effect of interviewer race on reported voting intentions, mediated by recruitment of respondents, was tested using 1,000 bootstrapped samples, a method that avoids the normality assumption made by other methods (Hayes, 2009). All analyses were conducted without weights. (A code sketch of these analysis steps appears after Table 3, below.)

Results

As expected, the proportion of respondents who said that they would vote for Mr Obama was greater among people interviewed by African-American interviewers than among people interviewed by White interviewers (see Table 2). When talking to an African-American interviewer, 43.9% of respondents said they would vote for Mr Obama, whereas only 42.0% of respondents interviewed by a White interviewer said so. Likewise, the proportion of respondents who said that they would vote for Mr McCain was greater among people interviewed by a White interviewer (43.6%) than among people interviewed by an African-American interviewer (42.4%), χ2(3) = 49.74, p < .001.

Table 2
Voting Intention by Interviewer Race

                     Interviewer race
Candidate            White (%)   Black (%)
Obama                42.0        43.9
McCain               43.6        42.4
Other candidate      0.7         0.5
Neither/DK/refused   13.7        13.2
Total                100.0       100.0
N                    194,369     29,051

Note. χ2(3) = 49.74 (p < .001). DK = don't know.

Also as expected, interviewer race affected respondent recruitment (see Table 3). Among respondents interviewed by a White interviewer, the proportion of White respondents (83.7%) was greater than the corresponding proportion among respondents interviewed by an African-American interviewer (81.4%). Among respondents interviewed by an African-American interviewer, the proportions of African-American respondents (8.2%) and other race respondents (8.2%) were greater than the corresponding proportions among respondents interviewed by a White interviewer (6.7% and 7.6%, respectively). The effect of interviewer race on the distribution of respondent race was statistically significant, χ2(3) = 120.18, p < .001.

Table 3
Respondent Race by Interviewer Race

                             Interviewer race
Respondent race              White (%)   Black (%)
White                        83.7        81.4
Black                        6.7         8.2
Other (including Hispanic)   7.6         8.2
DK/refused                   2.0         2.1
Total                        100.0       100.0
N                            194,369     29,051

Note. χ2(3) = 120.18 (p < .001). DK = don't know.
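The two main analysis steps (chi-square tests and LPMs estimated via OLS) can be sketched as follows on simulated data. Every variable and distributional choice below is hypothetical; this is our illustration of the described procedure, not the actual analysis code:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Simulated stand-in for the pooled analysis file (invented parameters).
rng = np.random.default_rng(0)
n = 5000
black_iwer = rng.binomial(1, 0.13, n)  # 1 = Black interviewer
# Differential recruitment: Black interviewers reach slightly more Black respondents.
nh_black = rng.binomial(1, np.where(black_iwer == 1, 0.082, 0.067))
# Honest answers that differ by respondent race, not by interviewer race.
vote_obama = rng.binomial(1, np.where(nh_black == 1, 0.90, 0.40))
df = pd.DataFrame({"black_iwer": black_iwer, "nh_black": nh_black,
                   "vote_obama": vote_obama})

# Step 1: chi-square test of respondent race by interviewer race.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["nh_black"], df["black_iwer"]))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Step 2: linear probability models fit by OLS, without and with the
# respondent-race control (cf. Models 1 and 2 in Table 4).
m1 = smf.ols("vote_obama ~ black_iwer", data=df).fit()
m2 = smf.ols("vote_obama ~ black_iwer + nh_black", data=df).fit()
print(m1.params["black_iwer"], m2.params["black_iwer"])
```

In data generated this way, the interviewer-race coefficient shrinks once the respondent-race dummy enters the model, mirroring the recruitment account.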
The effect of interviewer race on voting intentions was significantly mediated by the effect of interviewer race on recruitment (see Tables 4 and 5; a bootstrap sketch appears after Table 4, below). The direct effect of interviewer race was statistically significant when respondent race was not controlled (b = .007, p = .012; see Model 1, Table 4),10 and the effect of interviewer race disappeared completely when controlling for respondent race (b = .003, p = .209; see Model 2, Table 4). Bootstrapping (Hayes, 2009) to assess whether interviewer effects were mediated by recruitment effects (see Table 5) revealed that 78% of the total effect of interviewer race on voting for Mr Obama (total effect 95% confidence interval, CI = [.004, .017]) is explained by the indirect effect through disproportional recruitment based on respondent race (indirect effect 95% CI = [.006, .009]).

Table 4
Effects of Interviewer Race and Respondent Race on Reported Intention to Vote for Mr Obama

Predictor                                     Model 1     Model 2
Black interviewer                             .007**      .003
Black respondent (non-Hispanic)               –           .285****
Other race respondent (including Hispanic)    –           .054****
Respondent race unspecified                   –           .048****
Female                                        .024****    .024****
Age: 25–34 years                              −.095****   −.091****
Age: 35–44 years                              −.136****   −.128****
Age: 45–54 years                              −.145****   −.132****
Age: 55–64 years                              −.153****   −.132****
Age: 65+ years                                −.195****   −.166****
Age: DK/refused                               −.209****   −.196****
Education: High school graduation             −.010**     .003
Education: Some college                       .022****    .032****
Education: College graduation                 .063****    .078****
Education: Post graduation                    .125****    .141****
Education: DK/refused                         .026**      .018
Income: $20,000–$34,999a                      −.013****   −.000
Income: $35,000–$49,999a                      −.022****   −.006
Income: $50,000–$74,999a                      −.026****   −.007*
Income: $75,000–$99,999a                      −.033****   −.012**
Income: Above $100,000a                       −.028****   −.005
Income: DK/refused                            −.071****   −.056****
Region: Midwest                               −.001       .002
Region: South                                 −.066****   −.080****
Region: West                                  .021****    .024****
Party identification: Republican              −.303****   −.291****
Party identification: Democrat                .372****    .342****
Party identification: DK/refused              −.155****   −.160****
Source: ABC/WP                                .044****    .041****
Source: CBS/NYT                               .046****    .044****
Constant                                      .525****    .464****
R2                                            .35         .37
N                                             223,420     223,420

Note. Cell entries are unstandardized ordinary least squares regression coefficients. DK = don't know.
****p < .001; ***p < .01; **p < .05; *p < .10.
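The bootstrapped indirect-effect test summarized in Table 5 (below) can be sketched as a generic percentile bootstrap of the product of coefficients, in the spirit of Hayes (2009). The data are simulated exactly as in the previous sketch; none of this reproduces the actual estimation code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data, as in the previous sketch (all parameters invented).
rng = np.random.default_rng(1)
n = 5000
black_iwer = rng.binomial(1, 0.13, n)
nh_black = rng.binomial(1, np.where(black_iwer == 1, 0.082, 0.067))
vote_obama = rng.binomial(1, np.where(nh_black == 1, 0.90, 0.40))
df = pd.DataFrame({"black_iwer": black_iwer, "nh_black": nh_black,
                   "vote_obama": vote_obama})

def indirect_effect(d):
    # a path: interviewer race -> probability the respondent is Black.
    a = smf.ols("nh_black ~ black_iwer", data=d).fit().params["black_iwer"]
    # b path: respondent race -> Obama vote, holding interviewer race constant.
    b = smf.ols("vote_obama ~ black_iwer + nh_black", data=d).fit().params["nh_black"]
    return a * b

# Percentile bootstrap with 1,000 resamples, avoiding normality assumptions.
boot = [indirect_effect(df.sample(len(df), replace=True, random_state=i))
        for i in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(df):.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```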
Table 5
Mediation Analysis Using Bootstrapping Methods

Effect                                                  Coefficient   SE     95% CI
Total effect of interviewer race on support for Obama   .010          .003   [.004, .017]
Indirect effect through recruitment of respondents      .008          .001   [.006, .009]
Proportion of total effect mediated                     78%

Note. Indirect effect calculated based on 1,000 bootstrapped samples (Hayes, 2009). Fixed effects for survey organization were included in the model.

Also as expected, the effect of interviewer race on vote choice was stronger in the months well before Election Day (see Table 6, Column 1). According to the parameter estimates of an equation in which month of the survey and its interaction with interviewer race were predictors, African-American interviewers were more likely than White interviewers to elicit reports of intent to vote for Mr Obama long before Election Day (b = .019, p = .012; see Column 1, Row 1, Table 6), and this effect grew marginally significantly weaker as the election approached (b = −.002, p = .067; see Row 2, Table 6). A sketch of this specification appears below, before Table 6.
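The over-time specification can be sketched as an OLS model with a Black interviewer × month interaction. The data below are simulated and the built-in coefficients are merely illustrative; month runs from 3 (March) to 11 (November), as in the Table 6 note:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with a race-of-interviewer gap that decays over the campaign.
rng = np.random.default_rng(2)
n = 5000
month = rng.integers(3, 12, n)                 # 3 = March ... 11 = November
black_iwer = rng.binomial(1, 0.13, n)
p = 0.42 + (0.019 - 0.002 * (month - 3)) * black_iwer  # illustrative decay
vote_obama = rng.binomial(1, p)
df = pd.DataFrame({"month": month, "black_iwer": black_iwer,
                   "vote_obama": vote_obama})

# "black_iwer * month" expands to both main effects plus the interaction.
fit = smf.ols("vote_obama ~ black_iwer * month", data=df).fit()
# A negative black_iwer:month coefficient indicates a weakening
# race-of-interviewer effect as Election Day approaches.
print(fit.params[["black_iwer", "black_iwer:month"]])
```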
Table 6
Assessing Change Over Time in the Effect of Interviewer Race and Respondent Race on Reported Intention to Vote for Mr Obama

                                              Dependent variable
Predictor                                     Vote for Obama   Black respondent   Vote for Obama
Black interviewer                             .019**           .017***            –
Black interviewer × Month                     −.002*           .000               –
Month                                         .006****         .001****           .006****
Black respondent (non-Hispanic)               –                –                  .342****
Black respondent × Month                      –                –                  −.008****
Other race respondent (including Hispanic)    –                –                  .055****
Race unspecified respondent                   –                –                  .042****
Female                                        .024****         –                  .024****
Age: 25–34 years                              −.094****        –                  −.091****
Age: 35–44 years                              −.136****        –                  −.128****
Age: 45–54 years                              −.145****        –                  −.131****
Age: 55–64 years                              −.153****        –                  −.132****
Age: 65+ years                                −.194****        –                  −.166****
Age: DK/refused                               −.203****        –                  −.190****
Education: High school graduation             −.010**          –                  .003
Education: Some college                       .021****         –                  .031****
Education: College graduation                 .063****         –                  .078****
Education: Post graduation                    .125****         –                  .141****
Education: DK/refused                         .023**           –                  .019*
Income: $20,000–$34,999a                      −.013***         –                  .000
Income: $35,000–$49,999a                      −.023****        –                  −.006
Income: $50,000–$74,999a                      −.026****        –                  −.008*
Income: $75,000–$99,999a                      −.034****        –                  −.012**
Income: Above $100,000a                       −.028****        –                  −.006
Income: DK/refused                            −.071****        –                  −.056****
Region: Midwest                               −.001            –                  .002
Region: South                                 −.066****        –                  −.080****
Region: West                                  .021****         –                  .025****
Party identification: Republican              −.303****        –                  −.291****
Party identification: Democrat                .372****         –                  .343****
Party identification: DK/refused              −.156****        –                  −.161****
Source: ABC/WP                                .033****         .009****           .031****
Source: CBS/NYT                               .038****         .010****           .036****
Constant                                      .482****         .059****           .422****
R2                                            .35              .00                .37
N                                             223,420          223,420            223,420

Note. Cell entries are unstandardized ordinary least squares regression coefficients. Month is a number ranging from 3 to 11. DK = don't know.
****p < .001; ***p < .01; **p < .05; *p < .10.
On the other hand, the effect of interviewer race on respondent race (i.e., the "recruitment effect") was equally strong across the 9-month period (see Table 6, Column 2). African-American interviewers were significantly more likely than White interviewers to recruit African-American respondents (b = .017, p < .01; see Column 2, Row 1, Table 6), and this effect did not change over time (b = −.000, p = .436; see Column 2, Row 2, Table 6).

If recruitment effects were consistently strong throughout the campaign period, why did the effect of interviewer race on voting intentions decrease? Additional analysis showed that this was attributable partly to a decrease in the impact of respondent race on voting intentions over time (see Table 6, Column 3). African-American respondents were significantly more likely than White respondents to report the intention to vote for Mr Obama long before the election (b = .342, p < .001; see Column 3, Row 4, Table 6). However, this preference for Mr Obama among African-American respondents (as compared with White respondents) grew slightly but significantly weaker as Election Day approached (b = −.008, p < .001; see Column 3, Row 5, Table 6). Because the indirect effect is the product of the recruitment effect and the effect of respondent race on voting intentions, a decline in the latter shrinks the indirect effect even when the former stays constant. This gradual decline in the impact of respondent race on voting intentions may therefore account for the decline in race of interviewer effects overall, even though recruitment effects remained constant.

Discussion

These data suggest that interviewer race affected reported voting intentions in preelection polls before the 2008 U.S. Presidential election. The proportion of voters expressing an intention to vote for Mr Obama was approximately 2 percentage points greater in interviews conducted by African-American interviewers than in interviews conducted by White interviewers. This effect occurred because White interviewers were more likely than African-American interviewers to recruit White respondents. These results are consistent with prior evidence that interviewer effects are often attributable more to nonresponse error variance than to measurement issues (West & Olson, 2010). In our study, more than 70% of the total race of interviewer effect was attributable to a difference in the distribution of respondent race across interviewers.

The effect of interviewer race on measured voting intentions was stronger earlier in the election cycle and decreased as the election approached. This decline occurred because the relation of respondent race to voting intentions weakened as the election approached. The effect of interviewer race has usually been thought of as a result of respondents lying to interviewers (e.g., "social desirability bias") or as a result of nonconscious influence of interviewer race on the cognitive process by which respondents generate predictions of their voting behavior. Our findings suggest caution before making such claims, as the actual mechanism may be an entirely different one, one that has nothing to do with social desirability bias in responses. We found that an interviewer's race can exert an effect long before the moment when a respondent reports his or her answer to a question. The impact of interviewer race may occur within just seconds of the beginning of a phone call with a potential respondent, when that individual is deciding whether to hang up or to continue the conversation.
At the very least, we should recognize that alternative mechanisms could be at work, and understanding the operation of these mechanisms will help survey practitioners take precautions.

Survey organizations might be tempted to see our results as having implications for how to compose their interviewing staffs. For example, perhaps the chronic overrepresentation of women among survey respondents is a result of the predominance of women among interviewers. So perhaps stacking interviewing staffs with people who share the demographics of those typically underrepresented in survey samples (e.g., men and young people, in addition to racial minorities) would enhance the cooperation rates of such individuals and improve sample representativeness. Likewise, interviewers living in a particular region of the country might be especially successful at recruiting potential respondents who live in that region (e.g., perhaps telephone numbers located in the South should be called by interviewers with southern accents).

If the effect of interviewer race is confined to the recruiting process, then it is tempting to imagine that poststratification weighting that includes race as a factor will solve the problem. Such weighting is routinely done to adjust the racial makeup of a participating sample to match that of the population (a sketch of such weighting appears at the end of this section). If our results are correct, the more interviews are done by African-American interviewers, the larger the unweighted proportion of participating African-American respondents will be, and the less severe the weighting will need to be. Proper weighting should bring that proportion into line with the population, so any biasing effect of interviewer race seems likely to be eliminated as a result. This is a significant observation: if race of interviewer effects are simply effects on participation decisions, conventional weighting should eliminate any impact of the "arbitrary" proportion of interviews done by African-American interviewers.

However, this sort of weighting may not be sufficient to eliminate race of interviewer effects, because some systematic measurement error variance may still exist and may indeed be exacerbated. For example, imagine that a survey's participating sample underrepresents African-Americans, as is typical. Furthermore, imagine that African-American respondents in the participating sample were more likely to have been interviewed by an African-American interviewer than were White respondents, as was true in the data we analyzed. Weighting up the African-American respondents would therefore increase the apparent proportion of weighted data points collected by African-American interviewers and would enhance any distortion in reports caused by interviewer race (see Supplementary Tables A2 and A3 for an exploration). This is a complex issue, and future studies should certainly investigate it further.

The results reported here also help explain so-called "house effects" in preelection poll results. The coefficients for fixed effects of survey organizations in Tables 4 and 5 indicate that the proportion of respondents who were African-American was significantly smaller for the Gallup Organization than for CBS/NYT and ABC/WP. Furthermore, the proportion of respondents reporting an intention to vote for Mr Obama was significantly lower for the Gallup Organization than for CBS/NYT and ABC/WP. This pattern has been noted by other observers of such polls (Blumenthal, 2012).
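Returning to the weighting point above: here is a minimal sketch of cell-based poststratification on respondent race. The population shares are hypothetical placeholders rather than census figures, and real polls typically rake on several variables jointly; this single-variable version only illustrates why weighting on race counteracts a recruitment imbalance:

```python
import pandas as pd

# Minimal cell-based poststratification on respondent race.
# Population shares below are hypothetical placeholders, not census figures.
population_share = {"nh_white": 0.69, "nh_black": 0.12, "other": 0.19}

# A toy participating sample that underrepresents Black respondents.
sample = pd.DataFrame({
    "race_group": ["nh_white"] * 84 + ["nh_black"] * 7 + ["other"] * 9,
})

sample_share = sample["race_group"].value_counts(normalize=True)

# Weight = population share / sample share for the respondent's race cell;
# underrepresented groups (here, Black respondents) are weighted up.
sample["weight"] = sample["race_group"].map(
    lambda g: population_share[g] / sample_share[g]
)
print(sample.groupby("race_group")["weight"].first())
```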
The present findings add to the body of evidence on such house effects and raise the possibility that race of interviewer effects on respondent recruitment may contribute to these between-organization differences in results. The difference between Gallup and the other organizations observed here might be attributable to the composition of their interviewing staffs. In the data examined here, 9.7% of Gallup's interviews were conducted by an African-American interviewer, in contrast to 18.7% of interviews done by CBS/NYT and 37.4% of interviews done by ABC/WP. These differences in the racial composition of interviewing staffs are, of course, largely a function of the locations of the organizations' calling stations (e.g., Omaha, Nebraska, for Gallup; New York City for CBS/NYT; Horsham, Pennsylvania, for ABC/WP). This pattern might seem consistent with the conclusion that Gallup obtained fewer reports of intentions to vote for Mr Obama because fewer of its respondents were interviewed by African-American interviewers. However, if race of interviewer affected respondent recruitment and not self-reports of voting intentions, and if poststratification of Gallup's data was done optimally, then these effects should have disappeared in the firms' reports of poststratified results.

Limitations

This article examined only one interviewer characteristic: race. We have not examined other interviewer characteristics that could have influenced differences in cooperation rates across interviewers, such as interviewer age, gender, personality, or experience. As more interviewing is done via interactive voice response or via visual presentation on the Internet or smartphones, interviewer characteristics will become less relevant. But many of the world's most important surveys continue to be conducted by telephone and face-to-face, so our results remain relevant. In this study, we were not able to consider the race of the interviewers who made earlier call attempts to a respondent before the call that led to a completed interview. If the interviewer who gained cooperation by scheduling a callback was different from the interviewer who conducted the interview, our estimates of the impact of interviewer race are likely to be attenuated. Future studies might use data on interviewer characteristics for all call attempts to fully capture the dynamic process that may lead to nonresponse bias. One potential confound in this study was interviewer shift. Because we did not have access to information about interviewers' shifts, this factor could not be controlled in our analyses. However, we see no particular reason why African-American and White interviewers would have worked systematically different shifts. Furthermore, given that there is little evidence of differential assignment of cases to interviewers before interviews take place (West & Olson, 2010), interviewer shift probably was not substantially correlated with interviewer race. We hope that future studies can explore this issue. The effects reported here were generally small, with the main effect of interviewer race amounting to about 1–2 percentage points. Although these effects are small in absolute terms, they can be consequential in a close election. Also, because interviewer effects were stronger in the earlier months of the campaign season and can be compounded when the interviewer pool is highly skewed in terms of race, some situations may produce notably stronger interviewer effects.
We assumed that respondent race was measured without error. However, we cannot conclusively rule out the possibility that respondents misreported their race depending on the race of the interviewer. If interviewer race in fact affected both respondents' reports of their own race and their voting intentions, the findings here may overstate the impact of interviewer race on respondents' participation decisions. The arguments that race is part of a person's deep-rooted self-identity developed from life experiences (Helms, 1990) and that people thus place "value and emotional significance" on belonging to such social categories (Tajfel, 1982) suggest that systematic measurement error in measurements of respondent race is unlikely to be substantial. Still, this is a possibility that we cannot rule out.

Conclusion

Taken together, these results suggest that race of interviewer effects were alive and well (though small) in 2008 but that their mechanism was different from the one conventionally assumed. Whereas researchers have been quick to attribute such effects to respondents distorting their reports, the effects may instead result from the impact of interviewer race on the decision about whether to participate in a survey in the first place. We look forward to more research on the effects of interviewer race, especially reconsideration of whether past studies really revealed effects on reporting or instead revealed recruitment effects.

Supplementary Data

Supplementary Data are available at IJPOR online.

Nuri Kim is an Assistant Professor at the Wee Kim Wee School of Communication and Information at Nanyang Technological University, Singapore. Jon Krosnick is a Professor of Communication, Political Science, and Psychology at Stanford University and University Fellow at Resources for the Future. Yphtach Lelkes is an Assistant Professor of Political Communication at the Annenberg School for Communication, University of Pennsylvania.

Acknowledgments

The authors thank ABC News, the Washington Post, CBS News, the New York Times, and the Gallup Organization for making their data available, and Gary Langer, Frank Newport, and Sarah Dutton for their advice and help.

Footnotes

1. One might imagine testing an interaction between interviewer race and respondent race, on the assumption that matching and mismatching yield different consequences (see West & Blom, 2017). But this study treats respondent race as a dependent variable, affected by interviewer race. Therefore, it would not be sensible to explore an interaction between those variables.

2. To maximize comparability of results across months, all reported analyses were conducted with respondents who were registered to vote, because only these individuals were asked the candidate choice question in some months. People registered to vote were identified by the following questions: "Are you registered to vote at your present address, or not?" (ABC/WP); "Are you registered to vote in the precinct or election district where you now live, or aren't you?" (CBS/NYT); and "Are you now registered to vote in your precinct or election district, or not?" (Gallup). Results were essentially the same when using all available data from registered and unregistered respondents.

3. Respondents who said they would vote for neither candidate, refused to answer, or said they did not know were asked a follow-up question probing toward whom they leaned. We analyzed only responses to the initial vote choice question and ignored answers to the leaning question. We coded all non-Obama votes as 0 because we wanted to preserve the sample without item nonresponse issues. If we had coded only the Obama and McCain votes, respondents who indicated a non-Obama, non-McCain vote (e.g., undecided, neither, unsure, volunteered another candidate name, do not know) would have been dropped from our analysis (approximately 13% of the study sample). We wanted to retain these respondents so that all analyses were based on comparable samples of respondents over time.

4. Each survey included interviews with an oversample of African-Americans who were recruited separately; these respondents were not included in the analyses reported here.

5. Respondents who said they would vote for neither candidate, refused to answer, or said that they did not know were asked a follow-up question probing toward whom they leaned. Respondents who said that they would not vote were not asked this follow-up question. We analyzed only responses to the initial vote choice question and ignored answers to the leaning question.

6. Hispanic interviewers interviewed 5.8% of respondents who were registered to vote, and interviewers of other races interviewed another 2.7%. Race of interviewer was unspecified for 3.5% of respondents who said they were registered to vote.

7. People who said that they would vote for another candidate, said that it depends or that they were undecided, refused to answer, or said that they did not know were asked a follow-up question probing toward whom they leaned. People who stated that they would not vote were not asked this follow-up question. We analyzed only responses to the initial vote choice question.

8. In total, 2.9% of respondents were interviewed by a Hispanic interviewer, and <1% were interviewed by an interviewer of another race. The interviewer's race was unspecified for 16.1% of the respondents who were registered to vote.

9. Although logit or probit has often been recommended for predicting binary dependent variables, the LPM estimated by OLS is popular in econometric analyses because it produces unbiased estimates of effects that are readily interpretable (Angrist & Pischke, 2009; Freedman, 2008). When we estimated the parameters of the models using logistic regression, the results were essentially equivalent (see Supplementary Materials, Table A1).

10. Two dummy variables capturing fixed effects of survey organizations (ABC/WP and CBS/NYT) were included in all regression equations; the Gallup Organization was the omitted category.

References

Anderson, B., Silver, B., & Abramson, P. (1988). The effects of the race of the interviewer on measures of electoral participation by Blacks in SRC National Election Studies. Public Opinion Quarterly, 52, 53–83. doi:10.1086/269108
Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Baugh, J. (2000). Beyond ebonics: Linguistic pride and racial prejudice. New York, NY: Oxford University Press.
Berman, J. (2008, October 14). Will the Bradley effect be Obama's downfall? ABC News. Retrieved from http://abcnews.go.com/Politics/story?id=6031233&page=1
Blumenthal, M. (2012, June 17). Race matters: Why Gallup poll finds less support for President Obama. HuffingtonPost.com. Retrieved from http://www.huffingtonpost.com/2012/06/17/gallup-poll-race-barack-obama_n_1589937.html
Campbell, B. (1981). Race-of-interviewer effects among southern adolescents. Public Opinion Quarterly, 45, 231–234. doi:10.1086/268654
Cantril, H. (1944). Gauging public opinion. Princeton, NJ: Princeton University Press.
Carroll, J. (2008, October 14). Will Obama suffer from the 'Bradley effect'? CNN. Retrieved from http://www.cnn.com/2008/POLITICS/10/13/obama.bradley.effect/
Cillizza, C. (2008, June 10). Race, polling and the 'Bradley effect'. The Washington Post. Retrieved from http://voices.washingtonpost.com/thefix/eye-on-2008/the-race-issue.html
Cotter, P., Cohen, J., & Coulter, P. (1982). Race-of-interviewer effects in telephone interviews. Public Opinion Quarterly, 46, 278–284. doi:10.1086/268719
Davis, D. W. (1997a). Nonrandom measurement error and race of interviewer effects among African Americans. Public Opinion Quarterly, 61, 183–207. doi:10.1086/297792
Davis, D. W. (1997b). The direction of race of interviewer effects among African Americans: Donning the Black mask. American Journal of Political Science, 41, 309–330. doi:10.2307/2111718
Davis, D. W., & Silver, B. D. (2003). Stereotype threat and race of interviewer effects in a survey on political knowledge. American Journal of Political Science, 47, 33–45. doi:10.1111/1540-5907.00003
Dohrenwend, B., Colombotos, J., & Dohrenwend, B. (1968). Social distance and interviewer effects. Public Opinion Quarterly, 32, 410–422. doi:10.1086/267624
Finkel, S., Guterbock, T., & Borg, M. (1991). Race-of-interviewer effects in a presidential poll: Virginia 1989. Public Opinion Quarterly, 55, 313–330. doi:10.1086/269264
Frankovic, K. (2009, June 18). Time to move beyond "the Bradley effect"? CBS News. Retrieved from http://www.cbsnews.com/stories/2008/10/13/opinion/pollpositions/main4519166.shtml
Freedman, D. A. (2008). Randomization does not justify logistic regression. Statistical Science, 23, 237–249. doi:10.1214/08-STS262
Gallup. (2008, June 9). Gallup daily: Obama takes lead over McCain, 48% to 42%. Gallup.com. Retrieved from http://www.gallup.com/poll/107764/gallup-daily-Obama-takes-lead-over-McCain-48-42.aspx
Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology. Hoboken, NJ: John Wiley and Sons.
Hatchett, S., & Schuman, H. (1975). White respondents and race-of-interviewer effects. Public Opinion Quarterly, 39, 523–528. doi:10.1086/268249
Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76, 408–420. doi:10.1080/03637750903310360
Helms, J. E. (1990). Black and White racial identity: Theory, research, and practice. Westport, CT: Praeger.
Holbrook, A., Green, M., & Krosnick, J. (2003). Telephone versus face-to-face interviewing of national probability samples with long questionnaires. Public Opinion Quarterly, 67, 79–125.
Holbrook, A., Krosnick, J., Carson, R., & Mitchell, R. (2000). Violating conversational conventions disrupts cognitive processing of attitude questions. Journal of Experimental Social Psychology, 36, 465–494. doi:10.1006/jesp.1999.1411
Hopkins, D. (2009). No more Wilder effect, never a Whitman effect: When and why polls mislead about black and female candidates. Journal of Politics, 71, 769–781. doi:10.1017/s0022381609090707
Huddy, L., Billig, J., Bracciodieta, J., Hoeffler, L., Moynihan, P., & Pugliani, P. (1997). The effect of interviewer gender on the survey response. Political Behavior, 19, 197–220. doi:10.1023/a:1024882714254
Johnson, T., Fendrich, M., Shaligram, C., Garcy, A., & Gillespie, S. (2000). An evaluation of the effects of interviewer characteristics in an RDD telephone survey of drug use. Journal of Drug Issues, 30, 77–102.
Jones, W., & Lang, J. (1980). Sample composition bias and response bias in a mail survey: A comparison of inducement methods. Journal of Marketing Research, 17, 69–76. doi:10.2307/3151119
Kane, E., & Macaulay, L. (1993). Interviewer gender and gender attitudes. Public Opinion Quarterly, 57, 1–28. doi:10.1086/269352
Krysan, M., & Couper, M. (2003). Race in the live and the virtual interview: Racial deference, social desirability, and activation effects in attitude surveys. Social Psychology Quarterly, 66, 364–383. doi:10.2307/1519835
Langer, G., Craighill, P., Moynihan, P., Cohen, J., Agiesta, J., & Lambert, D. (2009). Best practices in pre-election polling: Lessons from the field. Paper presented at the AAPOR annual conference, Hollywood, FL.
Leech, G. (1983). Principles of pragmatics. New York, NY: Longman.
Liasson, M. (2008, October 14). As Obama leads polls, Bradley effect examined. NPR. Retrieved from http://www.npr.org/templates/story/story.php?storyId=95702879
Nelson, L., & Norton, M. (2005). From student to superhero: Situational primes shape future helping. Journal of Experimental Social Psychology, 41, 425–430. doi:10.1016/j.jesp.2004.08.003
Purnell, T., Idsardi, W., & Baugh, J. (1999). Perceptual and phonetic experiments on American English dialect identification. Journal of Language and Social Psychology, 18, 10–30.
Rasinski, K., Visser, P., Zagatsky, M., & Rickett, E. (2005). Using implicit goal priming to improve the quality of self-report data. Journal of Experimental Social Psychology, 41, 321–327. doi:10.1016/j.jesp.2004.07.001
Roper Center. (2012a). ABC News/Washington Post poll: October monthly—2008 Presidential election [Codebook]. Retrieved from http://webapps.ropercenter.uconn.edu/
Roper Center. (2012b). CBS News poll: 2008 Presidential election [Codebook]. Retrieved from http://webapps.ropercenter.uconn.edu/
Schaeffer, N. C. (1980). Evaluating race-of-interviewer effects in a national survey. Sociological Methods and Research, 8, 400–419. doi:10.1177/004912418000800403
Schuman, H., & Converse, J. (1971). The effects of black and white interviewers on black responses in 1968. Public Opinion Quarterly, 35, 44–68. doi:10.1086/267866
Srull, T., & Wyer, R. (1989). Person memory and judgment. Psychological Review, 96, 58–83. doi:10.1037/0033-295x.96.1.58
Tajfel, H. (1982). Social psychology of intergroup relations. Annual Review of Psychology, 33, 1–39. doi:10.1146/annurev.ps.33.020182.000245
Tourangeau, R., Rips, L., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.
Webster, C. (1996). Hispanic and Anglo interviewer and respondent ethnicity and gender: The impact on survey response quality. Journal of Marketing Research, 33, 62–72. doi:10.2307/3152013
Weeks, M., & Moore, R. (1981). Ethnicity-of-interviewer effects on ethnic respondents. Public Opinion Quarterly, 45, 245–249. doi:10.1086/268655
West, B. T., & Blom, A. G. (2017). Explaining interviewer effects: A research synthesis. Journal of Survey Statistics and Methodology, 5, 175–211.
West, B. T., Kreuter, F., & Jaenichen, U. (2013). "Interviewer" effects in face-to-face surveys: A function of sampling, measurement error, or nonresponse? Journal of Official Statistics, 29, 277–297.
West, B. T., & Olson, K. (2010). How much of interviewer variance is really nonresponse error variance? Public Opinion Quarterly, 74, 1004–1026.
Zaller, J., & Feldman, S. (1992). A simple theory of survey response: Answering questions versus revealing preferences. American Journal of Political Science, 36, 579–616. doi:10.2307/2111583
Zernike, K. (2008, October 11). Do polls lie about race? The New York Times. Retrieved from http://www.nytimes.com/2008/10/12/weekinreview/12zernike.html
Zernike, K., & Sussman, D. (2008, November 5). For pollsters, the racial effect that wasn't. The New York Times. Retrieved from http://www.nytimes.com/2008/11/06/us/politics/06poll.html

© The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved.

This alternative explanation fits well with recent work that has more generally examined the effects of interviewer characteristics on the various components of error in the total survey error framework (Groves et al., 2009; see West & Blom, 2017; West, Kreuter, & Jaenichen, 2013; West & Olson, 2010). This line of work showed that between-interviewer variance is at least partly attributable to varying nonresponse error (e.g., different interviewers recruiting respondents with various demographic characteristics at different rates) in addition to measurement error (e.g., respondents adjusting their answers differently for different interviewers; West & Olson, 2010). This article adds to this line of work by exploring whether race of interviewer might cause interviewer-related nonresponse bias. We begin below by reviewing past studies of interviewer effects, and we propose theoretical accounts that could explain such effects. Then, we outline the explanation on which we focus and test it using data from 36 telephone surveys conducted before the 2008 U.S. Presidential election by ABC/Washington Post, CBS/New York Times, and the Gallup Organization.

Past Studies on the Effects of Interviewer Race

Evidence of interviewer effects was published early in the history of survey research (Cantril, 1944), and since then, >100 studies have explored the effects of interviewer race. Past studies found that assessments of attitudes, behaviors, political knowledge, and other constructs appear to have been affected by the race of an interviewer (Cotter, Cohen, & Coulter, 1982; Davis & Silver, 2003; Schaeffer, 1980). Many such illustrations involved questions that explicitly mentioned race. For example, African-American interviewers were shown to elicit more racially tolerant responses than did White interviewers (Hatchett & Schuman, 1975). Likewise, respondents expressed warmer feelings toward a racial group and said that they trusted members of that racial group more when interviewed by a member of that group than when interviewed by a member of a different racial group (Anderson, Silver, & Abramson, 1988; Schuman & Converse, 1971), and respondents were more likely to say they supported a political candidate when the interviewer shared the candidate's race than when the interviewer did not (Davis, 1997b; Finkel, Guterbock, & Borg, 1991). Other studies suggest that the race of an interviewer can also affect answers to questions that do not explicitly mention racial groups (Campbell, 1981; Cotter, Cohen, & Coulter, 1982; Schuman & Converse, 1971; Weeks & Moore, 1981). For example, when naming their favorite actors, African-American respondents interviewed by an African-American interviewer listed more African-Americans and fewer Whites than those interviewed by White interviewers (Schuman & Converse, 1971). Furthermore, when interviewed by Whites, African-American respondents reported warmer feelings toward Whites than when interviewed by an African-American (Anderson, Silver, & Abramson, 1988). Conversely, African-Americans were more likely to express pro-African-American attitudes and higher racial consciousness when interviewed by African-Americans than when interviewed by Whites (Davis, 1997a).

Possible Mechanisms of Interviewer Race Effects on Survey Responses

A number of possible explanations might account for these effects of interviewer race. One mechanism involves the rules of conversational politeness (Campbell, 1981; Kane & Macaulay, 1993; Schuman & Converse, 1971).
The interaction between a respondent and an interviewer is a conversation, and respondents approach it as if the usual rules of conversation apply (Holbrook et al., 2000). One such rule, especially in conversations with strangers, is to be respectful and polite and not to insult one's conversational partner (Leech, 1983). Respondents might feel that it would be impolite to express an attitude or intended behavior that could signal disapproval of a social group to which the interviewer belongs. For example, if respondents assume that most African-Americans support most African-American candidates running for public office, then expressing a negative evaluation of such a candidate, or an intention to vote against such a candidate, might be perceived as impolite when talking to an African-American interviewer. This might lead a respondent to decline to answer a question (e.g., by saying that he/she has not yet made up his/her mind about for whom to vote) or to express an intention to vote for an African-American candidate when interviewed by an African-American interviewer. Alternatively, interviewer effects might occur because of automatic aspects of the cognitive process by which survey responses are constructed. When respondents are asked a question to which they do not have a prestored answer in long-term memory, they can retrieve pieces of information from memory to build an answer (Tourangeau, Rips, & Rasinski, 2000; Zaller & Feldman, 1992). This process can be affected by the social context in which the respondent is asked the question, including the interviewer's characteristics (Nelson & Norton, 2005; Rasinski et al., 2005; Srull & Wyer, 1989). For example, interacting with a professional, competent, and friendly African-American interviewer might prime favorable images of African-Americans in the mind of a respondent during the interview. These favorable images might enhance a respondent's inclination to express favorable opinions of African-Americans generally and increase the likelihood of an expressed intention to vote for an African-American candidate when interviewed by an African-American interviewer rather than a White interviewer (Huddy et al., 1997; Krysan & Couper, 2003).

An Alternative Explanation

All of the above explanations assert that the race of an interviewer can influence the process by which a respondent generates his or her answer to a question. However, interviewer race may appear to influence answers to race-related questions not by affecting the response process but by altering the composition of the participating sample (Jones & Lang, 1980). Put differently, what have appeared to be interviewer effects may have resulted from nonresponse bias attributable to the race of the interviewer (West & Olson, 2010). Because people are more cooperative with others who appear more similar to themselves (Dohrenwend, Colombotos, & Dohrenwend, 1968), potential survey respondents might be more willing to begin an interview with a same-race interviewer. That is, African-Americans might be more willing to be interviewed by an African-American than by a White interviewer, and vice versa. Indeed, several studies have found that recruitment success increased when interviewer and respondent race were matched (Johnson et al., 2000; Webster, 1996).
This will naturally result in an imbalance in the racial composition of the respondent pools of White and African-American interviewers: the pool of individuals interviewed by African-American interviewers may contain a greater proportion of African-American respondents than the pool of individuals interviewed by White interviewers, and vice versa. Although not specifically about interviewer race, the analysis by West and Olson (2010) showed that for some questions, what appeared to be interviewer error could be traced back to nonresponse error. To the extent that African-American respondents, answering honestly and accurately, give systematically different answers to survey questions than do White respondents, differential recruitment by different-race interviewers will lead to the collection of different responses by different-race interviewers, even in the absence of any respondents distorting their answers. For example, African-American respondents were more likely than White respondents to intend to vote for Barack Obama. Consequently, the percentage of respondents who expressed an intent to vote for Mr Obama may have been greater among interviews done by African-American interviewers than among interviews done by White interviewers. One might imagine that race of interviewer effects would be weaker during telephone interviews than during interviews done face-to-face (which have been the focus of many past studies) because race may seem difficult to detect over the phone, and people may be more comfortable answering questions in ways that might insult telephone interviewers, who are far away. But in fact, as compared with respondents interviewed face-to-face, respondents interviewed by telephone are more likely to provide socially desirable answers (Holbrook et al., 2003; West, Kreuter, & Jaenichen, 2013). Moreover, according to work by linguists such as Baugh (2000; Purnell, Idsardi, & Baugh, 1999), >80% of listeners are able to correctly identify accents in English associated with particular races (e.g., African-American Vernacular English, Chicano English) after hearing the single word "Hello." Therefore, race of interviewer may indeed have an impact on the cooperation decisions of potential respondents during telephone interviews. In this article, we explore the effect of interviewer race on data collected via telephone interviewing during the 2008 U.S. Presidential election campaign by three leading survey organizations: ABC/Washington Post, CBS/New York Times, and the Gallup Organization. We hypothesized that reported intention to vote for Mr Obama would be greater among respondents interviewed by African-American interviewers than among those interviewed by White interviewers and that this effect would be especially pronounced among African-American respondents. Second, we explored whether African-American interviewers recruited a greater proportion of African-American respondents than did White interviewers. Finally, we assessed whether the relation between interviewer race and intention to vote for Mr Obama was mediated by what we call "recruitment effects": the tendency of African-American interviewers to recruit a greater proportion of African-American respondents than White interviewers. This logic is founded on the possibility that respondents offer honest reports of their voting intentions regardless of interviewer race.
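The following minimal simulation (our own sketch; the cooperation and support rates are assumed values, not estimates from this article) illustrates how same-race cooperation alone can generate an apparent race of interviewer effect even when every respondent answers honestly.

```python
# Simulate call attempts: cooperation depends on interviewer-respondent race
# match; answers are honest and identical regardless of interviewer race.
import random

random.seed(0)

P_BLACK_POP = 0.12      # hypothetical population share of Black adults
P_OBAMA_BLACK = 0.90    # hypothetical honest support among Black respondents
P_OBAMA_WHITE = 0.40    # hypothetical honest support among White respondents
COOP_MATCH = 0.40       # cooperation rate when interviewer race matches
COOP_MISMATCH = 0.30    # cooperation rate when interviewer race differs

def simulate(interviewer_black: bool, n_calls: int = 200_000) -> float:
    """Percent intending to vote for Obama among completed interviews."""
    obama = completes = 0
    for _ in range(n_calls):
        resp_black = random.random() < P_BLACK_POP
        match = resp_black == interviewer_black
        if random.random() >= (COOP_MATCH if match else COOP_MISMATCH):
            continue                       # call refused: no data collected
        completes += 1
        support = P_OBAMA_BLACK if resp_black else P_OBAMA_WHITE
        obama += random.random() < support  # honest answer either way
    return 100 * obama / completes

print(f"Black interviewers: {simulate(True):.1f}% for Obama")   # ~47.7%
print(f"White interviewers: {simulate(False):.1f}% for Obama")  # ~44.6%
# The gap arises purely from who agrees to be interviewed, with no
# misreporting anywhere in the data-generating process.
```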
Effects of race on survey participation may be the result of conscious and intentional decision-making by interviewers or respondents, or may be the result of unintentional, nonconscious decision-making.1 We also tested whether race of interviewer effects were stronger earlier in the preelection season. This hypothesis grows from the speculation that people may be more certain about their candidate preferences later in a campaign, after they have gathered more information about the candidates. As a result, when asked for whom they would vote, most respondents might retrieve a strong cue from long-term memory, which would not be easily distorted by the characteristics of the interviewer. But long before Election Day, respondents might be less certain of their candidate preferences, so circumstantial factors at the time of the interview might more easily distort responses. Thus, time of interview offered a vehicle for testing whether the effect of interviewer race on collected data might be the result of response distortion.

Method

We compiled 36 data sets from three leading survey organizations (11 from ABC/WP, 16 from CBS/NYT, and 9 from Gallup). The data were collected between March and November of 2008, and respondents were asked whether they would vote for Barack Obama or John McCain if the 2008 presidential election were being held on the day of the survey interview (see Table 1 for the fielding dates and sample size of each survey, and see Figure 1 for the responses collected by African-American and White interviewers over time).

Table 1
Field Dates and Sample Sizes of Individual Samples by Survey Organization

Organization                     Field dates                 Sample size
Gallup (Daily Tracking Polls)    March 1–March 31            30,361
Gallup (Daily Tracking Polls)    April 1–April 30            30,311
Gallup (Daily Tracking Polls)    May 1–May 31                29,313
Gallup (Daily Tracking Polls)    June 1–June 30              27,298
Gallup (Daily Tracking Polls)    July 1–July 31              30,347
Gallup (Daily Tracking Polls)    August 1–August 31          31,387
Gallup (Daily Tracking Polls)    September 1–September 30    30,572
Gallup (Daily Tracking Polls)    October 1–October 31        31,359
Gallup (Daily Tracking Polls)    November 1–November 2       2,037
  Total N                                                    242,985
  Study N(a)                                                 196,706
ABC/Washington Post              February 28–March 2         1,126
ABC/Washington Post              April 10–April 13           1,197
ABC/Washington Post              May 8–May 11                1,122
ABC/Washington Post              June 12–June 15             1,125
ABC/Washington Post              July 10–July 13             1,119
ABC/Washington Post              August 19–August 22         1,298
ABC/Washington Post              September 5–September 7     1,133
ABC/Washington Post              September 19–September 22   1,082
ABC/Washington Post              September 27–September 29   1,271
ABC/Washington Post              October 8–October 11        1,101
ABC/Washington Post              October 16–November 3       10,213
  Total N                                                    21,787
  Study N(a)                                                 14,778
CBS News                         March 15–March 18           1,073
CBS News                         March 20                    542
CBS/New York Times               March 28–April 2            1,368
CBS/New York Times               April 25–April 29           1,065
CBS/New York Times               May 1–May 3                 671
CBS News                         May 30–June 3               1,038
CBS/New York Times               October 3–October 5         957
CBS/New York Times               October 10–October 13       1,070
CBS/New York Times               October 17–October 19       518
CBS/New York Times               October 19–October 22       1,152
CBS/New York Times               October 25–October 29       1,439
CBS News                         October 28–October 30       833
CBS News                         October 29–November 1       1,167
CBS News                         October 29–October 31       1,390
CBS News                         October 31–November 2       1,051
CBS News                         November 1–November 3       1,091
  Total N                                                    16,425
  Study N(a)                                                 11,936

(a) Study N refers to the number of respondents who were registered to vote and who were interviewed by either a White or a Black interviewer.

Figure 1. Percent of respondents who said they would vote for Barack Obama by race of interviewer during the 2008 presidential election season (unweighted percentages of registered voters, N = 223,420).

All analyses reported here were done using data from respondents who completed their interviews (i.e., no partials or breakoffs) and answered both the voting intention question (typically asked early in the interview) and the respondent race question (typically asked toward the end of the interview). We do not have data from people who refused to participate, broke off their interviews midway through, or did not answer the voting intention or race question. All analyses were performed on the subset of respondents who reported that they were registered to vote2 and who were interviewed by a White interviewer or an African-American interviewer. The analyses relied on interviewers' reports of their own races, not on respondents' perceptions of the interviewers' races (Davis & Silver, 2003).

Gallup Data Collection

The nine Gallup data sets came from daily tracking polls in which approximately 1,000 U.S. adults were interviewed each day (March 1 to November 2, 2008). Dual-frame random digit dial (RDD) sampling was implemented, including cell phones and landlines (Gallup, 2008).
The combined full sample size was 242,985; 196,706 people said they were registered to vote and were interviewed by an African-American interviewer or a White interviewer.

Measures

Vote Choice. Respondents were asked: "Suppose the presidential election were held today, and it included Barack Obama and Joe Biden as the Democratic candidates and John McCain and Sarah Palin as the Republican candidates. Would you vote for (Barack Obama and Joe Biden, the Democrats) or (John McCain and Sarah Palin, the Republicans)?" The order of the response options in the second sentence (in parentheses) was randomized. People who said they would vote for Mr Obama were coded 1, and all other respondents (including those who said they would vote for Mr McCain, volunteered other candidate names, said neither candidate, refused to answer, or indicated that they did not know) were coded 0.3

Interviewer Race. Interviewers were asked to indicate their race at the time of hiring by answering this question: "What is your race? Are you White, African-American, Asian or some other race?" Among registered voters in the Gallup data sets, 85.5% of respondents were interviewed by a White interviewer, and 9.7% were interviewed by an African-American interviewer. Interviewer race was coded 1 for respondents interviewed by an African-American interviewer and 0 for respondents interviewed by a White interviewer.

Respondent Race. Respondents were asked, "Are you, yourself, of Hispanic origin or descent, such as Mexican, Puerto Rican, Cuban, or other Spanish background?" and "What is your race? Are you White, African-American, Asian, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or refusal to answer. We created three dummy variables identifying the last three of these racial groups, treating non-Hispanic Whites as the omitted group.

Demographics. Sex, age, education, income, region, and party identification were included in the analyses (see the Supplementary Materials for question wordings).

ABC/WP Data Collection

We analyzed data from RDD surveys done by ABC/WP via landline telephones between February 28 and November 3, 2008.4 Interviewing was conducted by TNS (Roper Center, 2012a). Across the 11 surveys, a total of 21,787 respondents were interviewed; 14,778 said they were registered to vote and were interviewed by either an African-American interviewer or a White interviewer.

Measures

Vote Choice. Respondents were asked: "If the 2008 presidential election were being held today and the candidates were (Barack Obama and Joe Biden, the Democrats) and (John McCain and Sarah Palin, the Republicans), for whom would you vote?" Respondents were randomly assigned to hear the names of the Democratic candidates first or the names of the Republican candidates first. Respondents who said they would vote for Mr Obama were coded 1, and all other respondents (those who said they would vote for Mr McCain, gave other candidate names, said neither or indicated they would not vote, refused to answer, or indicated that they did not know) were coded 0.5

Interviewer Race. Interviewer race was obtained from administrative data collected by TNS from the interviewers. Interviewers specified their race by selecting White, Black, Hispanic, Native American, or other race.
Of the respondents registered to vote, 50.5% were interviewed by a White interviewer, and 37.4% were interviewed by a Black interviewer.6 Interviewer race was represented by a dummy variable coded 1 for people interviewed by an African-American interviewer and 0 for people interviewed by a White interviewer.

Respondent Race. Respondents were asked: "Are you of Hispanic origin or background?" and were then asked: "Are you White, Black, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or refusal to answer. We created three dummy variables identifying the last three of these racial groups, treating non-Hispanic Whites as the omitted group.

Demographics. The wordings of the questions measuring sex, age, education, income, region, and party identification are available in the Supplementary Materials.

CBS/NYT Data Collection

CBS/NYT conducted RDD interviewing via cell phones and landlines (Roper Center, 2012b). We used data collected between March 15 and November 3, 2008. A total of 16,425 respondents were interviewed; 11,936 said they were registered to vote and were interviewed by either a White or a Black interviewer.

Measures

Vote Choice. Respondents were asked: "If the 2008 presidential election were being held today and the candidates were (Barack Obama for President and Joe Biden for Vice President, the Democrats) and (John McCain for President and Sarah Palin for Vice President, the Republicans), would you vote for (Barack Obama and Joe Biden) or (John McCain and Sarah Palin)?" The order of candidate names in parentheses was randomly determined, with the Republicans either first or second in both sentences. Stated intentions to vote for Barack Obama were coded 1, and other responses (including stated intentions to vote for John McCain, volunteered other candidate names, saying they would not vote, saying that it depends or that they were undecided, refusing to answer, or indicating that they did not know) were coded 0.7

Interviewer Race. Interviewer race was determined by observation of the interviewers by the staff of the survey organization and by interviewer reports when a person's race was not clear from observation alone. Interviewers could be classified as Caucasian, African-American, Hispanic, or other race. We used data collected by Caucasian or African-American interviewers. Of the respondents who said they were registered to vote, 61.9% were interviewed by a Caucasian interviewer, and 18.7% were interviewed by an African-American interviewer.8 Interviewer race was coded 1 for respondents interviewed by an African-American interviewer and 0 for respondents interviewed by a White interviewer.

Respondent Race. Respondents were asked, "Are you of Hispanic origin or descent, or not?" and "Are you White, Black, Asian, or some other race?" Responses to the two questions were used to assign respondents to one of the following categories: non-Hispanic White, non-Hispanic Black, other race (including Hispanics), and race unspecified because of a "don't know" response or a refusal to answer. Three dummy variables identified the last three of these racial groups; non-Hispanic Whites were the omitted group.

Demographics. Sex, age, education, income, region, and party identification were also included in the analysis (see the Supplementary Materials for the question wordings).
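A brief sketch of the coding rules described above, applied to a toy harmonized data set (the column names and category labels are hypothetical; the actual survey files differ across organizations):

```python
# Recode vote choice, respondent race, and interviewer race as described in
# the Measures sections, using illustrative data.
import pandas as pd

df = pd.DataFrame({
    "vote_choice": ["Obama", "McCain", "Don't know", "Other", "Obama"],
    "hispanic": ["No", "No", "Yes", "No", "No"],
    "race": ["White", "Black", "White", "Don't know", "Black"],
    "interviewer_race": ["Black", "White", "White", "Black", "Black"],
})

# Vote intention: 1 = Obama, 0 = every other response (McCain, other
# candidates, neither, don't know, refused), so no cases are dropped.
df["obama"] = (df["vote_choice"] == "Obama").astype(int)

# Respondent race: dummies for non-Hispanic Black, other race (including
# Hispanics), and unspecified; non-Hispanic White is the omitted category.
df["resp_black"] = ((df["race"] == "Black") & (df["hispanic"] == "No")).astype(int)
df["resp_other"] = ((df["hispanic"] == "Yes") | (df["race"] == "Other")).astype(int)
df["resp_unspec"] = df["race"].isin(["Don't know", "Refused"]).astype(int)

# Interviewer race: 1 = African-American interviewer, 0 = White interviewer.
df["int_black"] = (df["interviewer_race"] == "Black").astype(int)
```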
Analysis

Chi-square tests were computed to assess whether the distribution of reported voting intentions differed depending on whether the data were collected by African-American or White interviewers. Chi-square tests were also computed to assess whether the distribution of respondent race varied depending on whether the interviewer was African-American or White. We estimated the parameters of linear probability models (LPMs) via ordinary least squares (OLS) to assess the influence of interviewer race on reported voting intentions, controlling for respondent race.9 The indirect effect of interviewer race on reported voting intentions, mediated by recruitment of respondents, was tested using 1,000 bootstrapped samples, a method that avoids the normality assumption made by other methods (Hayes, 2009). All analyses were conducted without weights.

Results

As expected, the proportion of respondents who said that they would vote for Mr Obama was greater among people interviewed by African-American interviewers than among people interviewed by White interviewers (see Table 2). When talking to an African-American interviewer, 43.9% of respondents said they would vote for Mr Obama, whereas only 42.0% of respondents interviewed by a White interviewer said so. Likewise, the proportion of respondents who said that they would vote for Mr McCain was greater among people interviewed by a White interviewer (43.6%) than among people interviewed by an African-American interviewer (42.4%), χ2(3) = 49.74, p < .001.

Table 2
Voting Intention by Interviewer Race

                       Interviewer race
Candidate              White (%)   Black (%)
Obama                  42.0        43.9
McCain                 43.6        42.4
Other candidate        0.7         0.5
Neither/DK/refused     13.7        13.2
Total                  100.0       100.0
N                      194,369     29,051

Note. χ2(3) = 49.74 (p < .001). DK = don't know.

Also as expected, interviewer race affected respondent recruitment (see Table 3). Among respondents interviewed by a White interviewer, the proportion of White respondents (83.7%) was greater than the corresponding proportion among respondents interviewed by an African-American interviewer (81.4%). Among respondents interviewed by an African-American interviewer, the proportions of African-American respondents (8.2%) and other-race respondents (8.2%) were greater than the corresponding proportions among respondents interviewed by a White interviewer (6.7% and 7.6%, respectively). The effect of interviewer race on the distribution of respondent race was statistically significant, χ2(3) = 120.18, p < .001.

Table 3
Respondent Race by Interviewer Race

                              Interviewer race
Respondent race               White (%)   Black (%)
White                         83.7        81.4
Black                         6.7         8.2
Other (including Hispanic)    7.6         8.2
DK/refused                    2.0         2.1
Total                         100.0       100.0
N                             194,369     29,051

Note. χ2(3) = 120.18 (p < .001). DK = don't know.
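The two analysis steps just described can be sketched as follows. The chi-square test below uses cell counts reconstructed from Table 3's rounded percentages, so they are approximate; the LPM is fit to simulated data in place of the survey files, and statsmodels and scipy are assumed to be available.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Respondent race (rows) by interviewer race (columns). Cells are derived
# from rounded percentages, so column totals are only approximate.
counts = np.array([
    [162_687, 23_648],   # White respondents
    [13_023, 2_382],     # Black respondents
    [14_772, 2_382],     # Other race (including Hispanic)
    [3_887, 610],        # DK/refused
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")

# Linear probability model: OLS of the 0/1 Obama-intention indicator on
# interviewer race (toy data with an assumed 2-point gap).
rng = np.random.default_rng(1)
int_black = rng.binomial(1, 0.13, size=50_000)
obama = rng.binomial(1, 0.42 + 0.02 * int_black)
X = sm.add_constant(int_black)
print(sm.OLS(obama, X).fit(cov_type="HC1").summary().tables[1])
```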
Table 3 Respondent Race by Interviewer Race Respondent race Interviewer race White (%) Black (%) White 83.7 81.4 Black 6.7 8.2 Other (including Hispanic) 7.6 8.2 DK/refused 2.0 2.1 Total 100.0 100.0 N 194,369 29,051 Respondent race Interviewer race White (%) Black (%) White 83.7 81.4 Black 6.7 8.2 Other (including Hispanic) 7.6 8.2 DK/refused 2.0 2.1 Total 100.0 100.0 N 194,369 29,051 Note. χ2(3) = 120.18 (p < .001). DK; Don’t Know. The effect of interviewer race on voting intentions was significantly mediated by the effect of interviewer race on recruitment (see Tables 4 and 5). The direct effect of interviewer race was statistically significant when respondent race was not controlled (b = .007, p = .012, see Model 1, Table 4),10 and the effect of interviewer race disappeared completely when controlling for respondent race (b = .003, p = .209, see Model 2, Table 4). Bootstrapping (Hayes, 2009) assessing whether interviewer effects were mediated by recruitment effects (see Table 5) revealed that 78% of the total effect of interviewer race on voting for Mr Obama (Total Effect 95% confidence interval, CI = [.004, .017]) is explained by the indirect effect through disproportional recruitment based on respondent race (Indirect Effect 95% CI = [.006, .009]). Table 4 Effects of Interviewer Race and Respondent Race on Reported Intention to Vote for Mr Obama Predictor Model 1 Model 2 Black Interviewer .007** .003 Black Respondent (non-Hispanic) – .285**** Other Race Respondent (including Hispanic) – .054**** Respondent Race Unspecified – .048**** Female .024**** .024**** Age: 25–34 years −.095**** −.091**** Age: 35–44 years −.136**** −.128**** Age: 45–54 years −.145**** −.132**** Age: 55–64 years −.153**** −.132**** Age: 65+ years −.195**** −.166**** Age: DK/Reference −.209**** −.196**** Education: High school graduation −.010** .003 Education: Some college .022**** .032**** Education: College graduation .063**** .078**** Education: Post graduation .125**** .141**** Education: DK/Reference .026** .018 Income: $20,000–$34,999a −.013**** −.000 Income: $35,000–$49,999a −.022**** −.006 Income: $50,000–$74,999a −.026**** −.007* Income: $75,000–$99,999a −.033**** −.012** Income: Above $100,000a −.028**** −.005 Income: DK/Reference −.071**** −.056**** Region: Midwest −.001 .002 Region: South −.066**** −.080**** Region: West .021**** .024**** Party identification: Republican −.303**** −.291**** Party identification: Democrat .372**** .342**** Party identification: DK/Reference −.155**** −.160**** Source: ABC/WP .044**** .041**** Source: CBS/NYT .046**** .044**** Constant .525**** .464**** R2 .35 .37 N 223,420 223,420 Predictor Model 1 Model 2 Black Interviewer .007** .003 Black Respondent (non-Hispanic) – .285**** Other Race Respondent (including Hispanic) – .054**** Respondent Race Unspecified – .048**** Female .024**** .024**** Age: 25–34 years −.095**** −.091**** Age: 35–44 years −.136**** −.128**** Age: 45–54 years −.145**** −.132**** Age: 55–64 years −.153**** −.132**** Age: 65+ years −.195**** −.166**** Age: DK/Reference −.209**** −.196**** Education: High school graduation −.010** .003 Education: Some college .022**** .032**** Education: College graduation .063**** .078**** Education: Post graduation .125**** .141**** Education: DK/Reference .026** .018 Income: $20,000–$34,999a −.013**** −.000 Income: $35,000–$49,999a −.022**** −.006 Income: $50,000–$74,999a −.026**** −.007* Income: $75,000–$99,999a −.033**** −.012** Income: Above $100,000a −.028**** −.005 Income: DK/Reference −.071**** −.056**** 
Table 4. Effects of Interviewer Race and Respondent Race on Reported Intention to Vote for Mr Obama

Predictor                                     Model 1      Model 2
Black Interviewer                             .007**       .003
Black Respondent (non-Hispanic)               –            .285****
Other Race Respondent (including Hispanic)    –            .054****
Respondent Race Unspecified                   –            .048****
Female                                        .024****     .024****
Age: 25–34 years                              −.095****    −.091****
Age: 35–44 years                              −.136****    −.128****
Age: 45–54 years                              −.145****    −.132****
Age: 55–64 years                              −.153****    −.132****
Age: 65+ years                                −.195****    −.166****
Age: DK/Ref                                   −.209****    −.196****
Education: High school graduation             −.010**      .003
Education: Some college                       .022****     .032****
Education: College graduation                 .063****     .078****
Education: Postgraduate                       .125****     .141****
Education: DK/Ref                             .026**       .018
Income: $20,000–$34,999a                      −.013****    −.000
Income: $35,000–$49,999a                      −.022****    −.006
Income: $50,000–$74,999a                      −.026****    −.007*
Income: $75,000–$99,999a                      −.033****    −.012**
Income: Above $100,000a                       −.028****    −.005
Income: DK/Ref                                −.071****    −.056****
Region: Midwest                               −.001        .002
Region: South                                 −.066****    −.080****
Region: West                                  .021****     .024****
Party identification: Republican              −.303****    −.291****
Party identification: Democrat                .372****     .342****
Party identification: DK/Ref                  −.155****    −.160****
Source: ABC/WP                                .044****     .041****
Source: CBS/NYT                               .046****     .044****
Constant                                      .525****     .464****
R2                                            .35          .37
N                                             223,420      223,420

Note. Cell entries are unstandardized ordinary least squares regression coefficients. DK = Don't Know.
****p < .001; ***p < .01; **p < .05; *p < .10.
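The logic of the two models in Table 4 can be illustrated with a small simulation. In the sketch below the data are synthetic: the recruitment rates come from Table 3, but the candidate-preference gap by respondent race is invented for illustration, and all variable names are ours. Because interviewer race affects only who participates, the interviewer-race coefficient is positive in Model 1 and shrinks once respondent race is controlled, mirroring the pattern in Table 4; as noted in footnote 9, a logit specification yields the same qualitative conclusion.

```python
# A sketch of the Table 4 linear probability models on synthetic data.
# Data-generating process: interviewer race affects only WHO participates
# (recruitment, using the Table 3 rates), never WHAT respondents say.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200_000
black_interviewer = rng.binomial(1, 0.13, n)
# Recruitment: Black interviewers reach more Black respondents (8.2% vs. 6.7%)
black_respondent = rng.binomial(1, np.where(black_interviewer == 1, 0.082, 0.067))
# Invented preference gap by respondent race (illustrative values only)
vote_obama = rng.binomial(1, np.where(black_respondent == 1, 0.85, 0.40))
df = pd.DataFrame({"black_interviewer": black_interviewer,
                   "black_respondent": black_respondent,
                   "vote_obama": vote_obama})

# Model 1: interviewer race alone picks up a small positive coefficient,
# of similar magnitude to Table 4 given the recruitment gap built in above
m1 = smf.ols("vote_obama ~ black_interviewer", data=df).fit()
# Model 2: controlling for respondent race absorbs that coefficient
m2 = smf.ols("vote_obama ~ black_interviewer + black_respondent", data=df).fit()
print(m1.params["black_interviewer"], m2.params["black_interviewer"])

# Footnote 9: the logit analogue gives the same qualitative answer
m3 = smf.logit("vote_obama ~ black_interviewer + black_respondent", data=df).fit(disp=0)
```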
Table 5. Mediation Analysis Using Bootstrapping Methods

Effect                                                   Coefficient   SE     95% CI
Total effect of interviewer race on support for Obama    .010          .003   [.004, .017]
Indirect effect through recruitment of respondents       .008          .001   [.006, .009]
Proportion of total effect mediated                      78%

Note. Indirect effect calculated based on 1,000 bootstrapped samples (Hayes, 2009). Fixed effects for survey organization were included in the model.
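The bootstrap test of the indirect effect (Hayes, 2009) can be sketched as follows, reusing the synthetic data frame from the previous illustration. The indirect effect is the product of the recruitment path (interviewer race to respondent race) and the preference path (respondent race to vote intention); its confidence interval is taken from the percentiles of the bootstrap distribution, so no normality assumption is required:

```python
# Percentile-bootstrap test of the indirect (recruitment) effect, in the
# spirit of Hayes (2009); reuses the synthetic `df` from the sketch above.
# (1,000 resamples matches the paper; shrink n for a quicker run.)
import numpy as np
import statsmodels.formula.api as smf

def effects(data):
    a = smf.ols("black_respondent ~ black_interviewer",
                data=data).fit().params["black_interviewer"]   # recruitment path
    fit = smf.ols("vote_obama ~ black_respondent + black_interviewer",
                  data=data).fit()
    b = fit.params["black_respondent"]                         # preference path
    direct = fit.params["black_interviewer"]
    return a * b, a * b + direct                               # indirect, total

rng = np.random.default_rng(1)
boot = np.array([effects(df.sample(len(df), replace=True,
                                   random_state=int(rng.integers(2**32))))[0]
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
indirect, total = effects(df)
print(f"indirect = {indirect:.4f}, 95% CI [{lo:.4f}, {hi:.4f}], "
      f"proportion mediated = {indirect / total:.0%}")
```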
Also as expected, the effect of interviewer race on vote choice was stronger in the months further from Election Day (see Table 6, Column 1). According to the parameter estimates of an equation in which month of the survey and its interaction with interviewer race were predictors, African-American interviewers were more likely than White interviewers to elicit reports of an intent to vote for Mr Obama long before Election Day (b = .019, p = .012; see Column 1, Row 1, Table 6), and this effect grew marginally significantly weaker as the election approached (b = −.002, p = .067; see Row 2, Table 6).

Table 6. Assessing Change Over Time in the Effects of Interviewer Race and Respondent Race on Reported Intention to Vote for Mr Obama

Predictor                                     (1) Vote for Obama   (2) Black Respondent   (3) Vote for Obama
Black Interviewer                             .019**               .017***                –
Black Interviewer × Month                     −.002*               .000                   –
Month                                         .006****             .001****               .006****
Black Respondent (non-Hispanic)               –                    –                      .342****
Black Respondent × Month                      –                    –                      −.008****
Other Race Respondent (including Hispanic)    –                    –                      .055****
Race Unspecified Respondent                   –                    –                      .042****
Female                                        .024****             –                      .024****
Age: 25–34 years                              −.094****            –                      −.091****
Age: 35–44 years                              −.136****            –                      −.128****
Age: 45–54 years                              −.145****            –                      −.131****
Age: 55–64 years                              −.153****            –                      −.132****
Age: 65+ years                                −.194****            –                      −.166****
Age: DK/Ref                                   −.203****            –                      −.190****
Education: High school graduation             −.010**              –                      .003
Education: Some college                       .021****             –                      .031****
Education: College graduation                 .063****             –                      .078****
Education: Postgraduate                       .125****             –                      .141****
Education: DK/Ref                             .023**               –                      .019*
Income: $20,000–$34,999a                      −.013***             –                      .000
Income: $35,000–$49,999a                      −.023****            –                      −.006
Income: $50,000–$74,999a                      −.026****            –                      −.008*
Income: $75,000–$99,999a                      −.034****            –                      −.012**
Income: Above $100,000a                       −.028****            –                      −.006
Income: DK/Ref                                −.071****            –                      −.056****
Region: Midwest                               −.001                –                      .002
Region: South                                 −.066****            –                      −.080****
Region: West                                  .021****             –                      .025****
Party identification: Republican              −.303****            –                      −.291****
Party identification: Democrat                .372****             –                      .343****
Party identification: DK/Ref                  −.156****            –                      −.161****
Source: ABC/WP                                .033****             .009****               .031****
Source: CBS/NYT                               .038****             .010****               .036****
Constant                                      .482****             .059****               .422****
R2                                            .35                  .00                    .37
N                                             223,420              223,420                223,420

Note. Cell entries are unstandardized ordinary least squares regression coefficients. Month is a number ranging from 3 to 11. DK = Don't Know.
****p < .001; ***p < .01; **p < .05; *p < .10.
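The Table 6 specifications add a month counter (3 through 11) and its interaction with the key predictor. The sketch below shows the two regressions on the same synthetic data; the month variable here is generated at random, so both interactions come out near zero, unlike in the real data, but the specification is the point:

```python
# Time-trend specifications from Table 6: does the effect shrink by month?
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df["month"] = rng.integers(3, 12, len(df))   # March..November, as in the paper

# Column 1: interviewer-race effect on vote intention, allowed to vary by month
col1 = smf.ols("vote_obama ~ black_interviewer * month", data=df).fit()
# Column 2: the recruitment effect itself, allowed to vary by month
col2 = smf.ols("black_respondent ~ black_interviewer * month", data=df).fit()

# A negative interaction in col1 with a near-zero interaction in col2 would
# reproduce the paper's pattern; here both are ~0 because no trend was built in.
print(col1.params["black_interviewer:month"],
      col2.params["black_interviewer:month"])
```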
On the other hand, the effect of interviewer race on respondent race (i.e., the "recruitment effect") was equally strong across the 9-month period (see Table 6, Column 2). African-American interviewers were significantly more likely than White interviewers to recruit African-American respondents (b = .017, p < .01; see Column 2, Row 1, Table 6), and this effect did not change over time (b = −.000, p = .436; see Column 2, Row 2, Table 6).

If recruitment effects were consistently strong throughout the campaign period, why did the effect of interviewer race on voting intentions decline? Additional analysis showed that this was attributable partly to a decrease over time in the impact of respondent race on voting intentions (see Table 6, Column 3). African-American respondents were significantly more likely than White respondents to report the intention to vote for Mr Obama long before the election (b = .342, p < .001; see Column 3, Row 4, Table 6). However, this preference for Mr Obama among African-American respondents (as compared with White respondents) grew slightly but significantly weaker as Election Day approached (b = −.008, p < .001; see Column 3, Row 5, Table 6). This gradual decline in the impact of respondent race on voting intentions may account for the overall decline in race of interviewer effects even while recruitment effects remained constant.

Discussion

These data suggest that interviewer race affected reported voting intentions in preelection polls before the 2008 U.S. Presidential election. The proportion of voters expressing an intention to vote for Mr Obama was approximately 2 percentage points greater in interviews conducted by African-American interviewers than in interviews conducted by White interviewers. This effect occurred because White interviewers were more likely than African-American interviewers to recruit White respondents. These results are consistent with prior evidence that interviewer effects are more often attributable to nonresponse error variance than to measurement error (West & Olson, 2010). In our study, more than 70% of the total race of interviewer effect was attributable to a difference in the distribution of respondent race across interviewers.

The effect of interviewer race on measured voting intentions was stronger earlier in the election cycle and decreased as the election approached. This decline occurred because the relation of respondent race to voting intentions weakened as the election approached.

The effect of interviewer race has usually been thought of as a result of respondents lying to interviewers (e.g., "social desirability bias") or as a result of the nonconscious influence of interviewer race on the cognitive process by which respondents generate predictions of their voting behavior. Our findings suggest caution before drawing such conclusions, because the actual mechanism may be an entirely different one, one that has nothing to do with social desirability bias in responses. We found that an interviewer's race can exert an effect long before the moment when a respondent reports his or her answer to a question. The impact of interviewer race may occur within the first seconds of a phone call with a potential respondent, when that individual is deciding whether to hang up or to continue the conversation.
At the very least, we should recognize that alternative mechanisms could be at work, and understanding the operation of these mechanisms will help survey practitioners take precautions.

Survey organizations might be tempted to see our results as having implications for how to compose their interviewing staffs. For example, perhaps the chronic overrepresentation of women among survey respondents is a result of the predominance of women among interviewers. If so, staffing interviewing teams with people who share the demographics of groups typically underrepresented in survey samples (e.g., men and young people, in addition to racial minorities) might enhance the cooperation rates of such individuals and improve sample representativeness. Likewise, interviewers living in a particular region of the country might be especially successful at recruiting potential respondents who live in that region (e.g., perhaps telephone numbers located in the South should be called by interviewers with southern accents).

If the effect of interviewer race is confined to the recruiting process, it is tempting to imagine that poststratification weighting that includes race as a factor will solve the problem. Such weighting is routinely done to adjust the racial makeup of a participating sample to match that of the population. If our results are correct, the more interviews that are done by African-American interviewers, the larger the unweighted proportion of participating African-American respondents will be, and the less severe the weighting will need to be. Proper weighting should bring that proportion into line with the population, so any biasing effect of interviewer race seems likely to be eliminated as a result. This is a significant observation: if race of interviewer effects are simply effects on participation decisions, conventional weighting should eliminate any impact of the "arbitrary" proportion of interviews done by African-American interviewers.

However, this sort of weighting may not be sufficient to eliminate race of interviewer effects, because some systematic measurement error variance may still exist and may indeed be exacerbated. For example, imagine that a survey's participating sample underrepresents African-Americans, as is typical. Furthermore, imagine that African-American respondents in the participating sample were more likely than White respondents to have been interviewed by an African-American interviewer, as was true in the data we analyzed. Weighting up the African-American respondents would therefore increase the apparent proportion of weighted data points collected by African-American interviewers and would amplify any distortion in reports caused by interviewer race (see Supplementary Tables A2 and A3 for an exploration). This is a complex issue, and future studies should certainly investigate it further; a concrete illustration of the weighting side effect appears in the sketch below.
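The sketch applies race-only poststratification to the synthetic data from the earlier illustrations, using an assumed population benchmark (the 12% share is our illustration, not a figure from the article). Upweighting the underrepresented African-American respondents mechanically raises the weighted share of interviews conducted by African-American interviewers, which is the side effect described above:

```python
# Race-only poststratification and its side effect on the weighted share of
# interviews done by Black interviewers. Population shares are illustrative
# assumptions; reuses the synthetic `df` from the earlier sketches.
import numpy as np

pop_share = {1: 0.12, 0: 0.88}                       # assumed benchmark
samp_share = df["black_respondent"].value_counts(normalize=True)

# Weight = population share / sample share within each race cell
df["weight"] = df["black_respondent"].map(lambda r: pop_share[r] / samp_share[r])

unweighted = df["black_interviewer"].mean()
weighted = np.average(df["black_interviewer"], weights=df["weight"])
print(f"share interviewed by Black interviewers: "
      f"{unweighted:.3f} unweighted -> {weighted:.3f} weighted")
```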
The results reported here also help to explain so-called "house effects" in preelection poll results. The coefficients for the fixed effects of survey organizations in Tables 4 and 6 indicate that the proportion of respondents who were African-American was significantly smaller for the Gallup Organization than for CBS/NYT and for ABC/WP. Furthermore, the proportion of respondents reporting an intention to vote for Mr Obama was significantly lower for the Gallup Organization than for CBS/NYT and for ABC/WP. This pattern has been noted by other observers of such polls (Blumenthal, 2012). The present findings contribute to the body of evidence documenting such house effects and raise the possibility that race of interviewer effects on respondent recruitment contributed to these between-organization differences in results.

The difference between Gallup and the other organizations observed here might be attributable to the composition of their interviewing staffs. In the data examined here, 9.7% of Gallup's interviews were conducted by an African-American interviewer, in contrast to 18.7% of interviews done by CBS/NYT and 37.4% of interviews done by ABC/WP. This difference in the racial composition of interviewing staffs may simply reflect the locations of the organizations' calling stations (e.g., Omaha, Nebraska, for Gallup; New York City for CBS/NYT; Horsham, Pennsylvania, for ABC/WP). The pattern might seem consistent with the conclusion that Gallup obtained fewer reports of intentions to vote for Mr Obama because fewer of its respondents were interviewed by African-American interviewers. However, if race of interviewer affected respondent recruitment and not self-reports of voting intentions, and if poststratification of Gallup's data was done optimally, then these effects should have disappeared in the firms' reports of poststratified results.

Limitations

This article examined only one interviewer characteristic: race. We did not examine other interviewer characteristics that could have influenced differences in cooperation rates across interviewers, such as interviewer age, gender, personality, or experience. As more interviewing is done via interactive voice response or visual presentation via the Internet and smartphones, interviewer characteristics will become less relevant. But many of the world's most important surveys continue to be conducted by telephone and face-to-face, so our results retain relevance.

In this study, we were not able to consider the race of the interviewers who made earlier call attempts to a respondent before the call that led to a completed interview. If the interviewer who gained cooperation by scheduling a callback was different from the interviewer who conducted the interview, our estimates of the impact of interviewer race are likely to be attenuated. Future studies might use data on interviewer characteristics for all call attempts to fully capture the dynamic process that may lead to nonresponse bias.

One potential confound in this study was interviewer shift. Because we did not have access to information about interviewers' shifts, this factor could not be controlled in our data. However, we see no particular reason why African-American and White interviewers would have worked systematically different shifts. Furthermore, given that there is little evidence of differential assignment of cases to interviewers before interviews take place (West & Olson, 2010), interviewer shift probably was not substantially correlated with interviewer race. We hope that future studies can explore this issue.

The effects reported here were generally small, with the main effect of interviewer race amounting to about 1–2 percentage points. Although these effects are small in absolute terms, they can be consequential in a close election. Moreover, because interviewer effects were stronger in the earlier months of the campaign season and can be compounded when the interviewer pool is highly skewed in terms of race, some situations may produce notably stronger interviewer effects.
We assumed that respondent race was measured without error. However, we cannot conclusively rule out the possibility that respondents misreported their race depending on the race of the interviewer. If interviewer race in fact affected both respondents' reports of their own race and their voting intentions, the findings here may overstate the apparent strength of the impact of interviewer race on respondents' participation decisions. The arguments that race is part of a person's deep-rooted self-identity developed from life experiences (Helms, 1990) and that people thus place "value and emotional significance" on belonging to such social categories (Tajfel, 1982) suggest that systematic measurement error is unlikely to be substantial in measurements of respondent race. Still, this is a possibility that we cannot rule out.

Conclusion

Taken together, these results suggest that race of interviewer effects were alive and well (though small) in 2008, but that their mechanism was different from the one conventionally assumed. Whereas researchers have been quick to attribute such effects to respondents distorting their reports, the effects may instead be due to the impact of interviewer race on the decision about whether to participate in a survey at all. We look forward to more research on the effects of interviewer race, especially reconsideration of whether past studies really revealed effects on reporting or instead revealed recruitment effects.

Supplementary Data

Supplementary data are available at IJPOR online.

Nuri Kim is an Assistant Professor at the Wee Kim Wee School of Communication and Information at Nanyang Technological University, Singapore. Jon Krosnick is a Professor of Communication, Political Science, and Psychology at Stanford University and University Fellow at Resources for the Future. Yphtach Lelkes is an Assistant Professor of Political Communication at the Annenberg School for Communication, University of Pennsylvania.

Acknowledgments

The authors thank ABC News, the Washington Post, CBS News, the New York Times, and the Gallup Organization for making their data available, and Gary Langer, Frank Newport, and Sarah Dutton for their advice and help.

Footnotes

1. One might imagine testing an interaction between interviewer race and respondent race, on the assumption that matching and mismatching yield different consequences (see West & Blom, 2017). But this study treats respondent race as a dependent variable, affected by interviewer race. Therefore, it would not be sensible to explore an interaction between those variables.

2. To maximize comparability of results across months, all reported analyses were conducted with respondents who were registered to vote, because only these individuals were asked the candidate choice question in some months. People registered to vote were identified by the following questions: "Are you registered to vote at your present address, or not?" (ABC/WP); "Are you registered to vote in the precinct or election district where you now live, or aren't you?" (CBS/NYT); and "Are you now registered to vote in your precinct or election district, or not?" (Gallup). Results were essentially the same when using all available data from registered and unregistered respondents.

3. Respondents who said they would vote for neither candidate, refused to answer, or said they did not know were asked a follow-up question probing toward whom they leaned.
We analyzed only responses to the initial vote choice question and ignored answers to the leaning question. We coded all non-Obama responses as 0 because we wanted to preserve the sample without item nonresponse issues. If we had coded only the Obama and McCain votes, respondents who gave a non-Obama, non-McCain response (e.g., undecided, neither, unsure, volunteered another candidate's name, don't know) would have been dropped from our analysis (approximately 13% of the study sample). We wanted to keep these respondents so that all analyses were based on comparable samples of respondents over time.

4. Each survey included interviews with an oversample of African-Americans who were recruited separately; these respondents were not included in the analyses reported here.

5. Respondents who said they would vote for neither candidate, refused to answer, or said that they did not know were asked a follow-up question probing toward whom they leaned. Respondents who said that they would not vote were not asked this follow-up question. We analyzed only responses to the initial vote choice question and ignored answers to the leaning question.

6. Hispanic interviewers interviewed 5.8% of respondents who were registered to vote, and interviewers of other races interviewed another 2.7%. Race of interviewer was unspecified for 3.5% of respondents who said they were registered to vote.

7. People who said that they would vote for another candidate, said that it depends or that they were undecided, refused to answer, or said that they did not know were asked a follow-up question probing toward whom they leaned. People who stated that they would not vote were not asked this follow-up question. We analyzed only responses to the initial vote choice question.

8. In total, 2.9% of respondents were interviewed by a Hispanic interviewer, and less than 1% were interviewed by an interviewer of another race. The interviewer's race was unspecified for 16.1% of respondents who were registered to vote.

9. Although logit and probit models have often been recommended for binary dependent variables, the LPM estimated by OLS is popular in econometric analyses because it produces unbiased estimates of effects that are more readily interpretable (Angrist & Pischke, 2009; Freedman, 2008). When we estimated the models using logistic regression, the results were essentially equivalent (see Supplementary Materials, Table A1).

10. Two dummy variables capturing fixed effects of survey organizations (ABC/WP and CBS/NYT) were included in all regression equations; the Gallup Organization was the omitted category.

References

Anderson, B., Silver, B., & Abramson, P. (1988). The effects of the race of the interviewer on measures of electoral participation by Blacks in SRC National Election Studies. Public Opinion Quarterly, 52, 53–83. doi:10.1086/269108

Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.

Baugh, J. (2000). Beyond Ebonics: Linguistic pride and racial prejudice. New York, NY: Oxford University Press.

Berman, J. (2008, October 14). Will the Bradley effect be Obama's downfall? ABC News. Retrieved from http://abcnews.go.com/Politics/story?id=6031233&page=1#.T6iqZ59YvUg
Blumenthal, M. (2012, June 17). Race matters: Why Gallup poll finds less support for President Obama. HuffingtonPost.com. Retrieved from http://www.huffingtonpost.com/2012/06/17/gallup-poll-race-barack-obama_n_1589937.html

Campbell, B. (1981). Race-of-interviewer effects among southern adolescents. Public Opinion Quarterly, 45, 231–234. doi:10.1086/268654

Cantril, H. (1944). Gauging public opinion. Princeton, NJ: Princeton University Press.

Carroll, J. (2008, October 14). Will Obama suffer from the 'Bradley effect'? CNN. Retrieved from http://www.cnn.com/2008/POLITICS/10/13/obama.bradley.effect/

Cillizza, C. (2008, June 10). Race, polling and the 'Bradley effect'. The Washington Post. Retrieved from http://voices.washingtonpost.com/thefix/eye-on-2008/the-race-issue.html

Cotter, P., Cohen, J., & Coulter, P. (1982). Race-of-interviewer effects in telephone interviews. Public Opinion Quarterly, 46, 278–284. doi:10.1086/268719

Davis, D. W. (1997a). Nonrandom measurement error and race of interviewer effects among African Americans. Public Opinion Quarterly, 61, 183–207. doi:10.1086/297792

Davis, D. W. (1997b). The direction of race of interviewer effects among African Americans: Donning the Black mask. American Journal of Political Science, 41, 309–330. doi:10.2307/2111718

Davis, D. W., & Silver, B. D. (2003). Stereotype threat and race of interviewer effects in a survey on political knowledge. American Journal of Political Science, 47, 33–45. doi:10.1111/1540-5907.00003

Dohrenwend, B., Colombotos, J., & Dohrenwend, B. (1968). Social distance and interviewer effects. Public Opinion Quarterly, 32, 410–422. doi:10.1086/267624

Finkel, S., Guterbock, T., & Borg, M. (1991). Race-of-interviewer effects in a presidential poll: Virginia 1989. Public Opinion Quarterly, 55, 313–330. doi:10.1086/269264

Frankovic, K. (2009, June 18). Time to move beyond "the Bradley effect"? CBS News. Retrieved from http://www.cbsnews.com/stories/2008/10/13/opinion/pollpositions/main4519166.shtml

Freedman, D. A. (2008). Randomization does not justify logistic regression. Statistical Science, 23, 237–249. doi:10.1214/08-STS262

Gallup. (2008, June 9). Gallup daily: Obama takes lead over McCain, 48% to 42%. Gallup.com. Retrieved from http://www.gallup.com/poll/107764/gallup-daily-Obama-takes-lead-over-McCain-48-42.aspx

Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology. Hoboken, NJ: John Wiley and Sons.

Hatchett, S., & Schuman, H. (1975). White respondents and race-of-interviewer effects. Public Opinion Quarterly, 39, 523–528. doi:10.1086/268249

Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76, 408–420. doi:10.1080/03637750903310360

Helms, J. E. (1990). Black and White racial identity: Theory, research, and practice. Westport, CT: Praeger.

Holbrook, A., Green, M., & Krosnick, J. (2003). Telephone versus face-to-face interviewing of national probability samples with long questionnaires. Public Opinion Quarterly, 67, 79–125.
Holbrook, A., Krosnick, J., Carson, R., & Mitchell, R. (2000). Violating conversational conventions disrupts cognitive processing of attitude questions. Journal of Experimental Social Psychology, 36, 465–494. doi:10.1006/jesp.1999.1411

Hopkins, D. (2009). No more Wilder effect, never a Whitman effect: When and why polls mislead about Black and female candidates. Journal of Politics, 71, 769–781. doi:10.1017/s0022381609090707

Huddy, L., Billig, J., Bracciodieta, J., Hoeffler, L., Moynihan, P., & Pugliani, P. (1997). The effect of interviewer gender on the survey response. Political Behavior, 19, 197–220. doi:10.1023/a:1024882714254

Johnson, T., Fendrich, M., Shaligram, C., Garcy, A., & Gillespie, S. (2000). An evaluation of the effects of interviewer characteristics in an RDD telephone survey of drug use. Journal of Drug Issues, 30, 77–102.

Jones, W., & Lang, J. (1980). Sample composition bias and response bias in a mail survey: A comparison of inducement methods. Journal of Marketing Research, 17, 69–76. doi:10.2307/3151119

Kane, E., & Macaulay, L. (1993). Interviewer gender and gender attitudes. Public Opinion Quarterly, 57, 1–28. doi:10.1086/269352

Krysan, M., & Couper, M. (2003). Race in the live and the virtual interview: Racial deference, social desirability, and activation effects in attitude surveys. Social Psychology Quarterly, 66, 364–383. doi:10.2307/1519835

Langer, G., Craighill, P., Moynihan, P., Cohen, J., Agiesta, J., & Lambert, D. (2009). Best practices in pre-election polling: Lessons from the field. Paper presented at the AAPOR annual conference, Hollywood, FL.

Leech, G. (1983). Principles of pragmatics. New York, NY: Longman.

Liasson, M. (2008, October 14). As Obama leads polls, Bradley effect examined. NPR. Retrieved from http://www.npr.org/templates/story/story.php?storyId=95702879

Nelson, L., & Norton, M. (2005). From student to superhero: Situational primes shape future helping. Journal of Experimental Social Psychology, 41, 425–430. doi:10.1016/j.jesp.2004.08.003

Purnell, T., Idsardi, W., & Baugh, J. (1999). Perceptual and phonetic experiments on American English dialect identification. Journal of Language and Social Psychology, 18, 10–30.

Rasinski, K., Visser, P., Zagatsky, M., & Rickett, E. (2005). Using implicit goal priming to improve the quality of self-report data. Journal of Experimental Social Psychology, 41, 321–327. doi:10.1016/j.jesp.2004.07.001

Roper Center. (2012a). ABC News/Washington Post poll: October monthly—2008 Presidential election [Codebook]. Retrieved from http://webapps.ropercenter.uconn.edu/

Roper Center. (2012b). CBS News poll: 2008 Presidential election [Codebook]. Retrieved from http://webapps.ropercenter.uconn.edu/

Schaeffer, N. C. (1980). Evaluating race-of-interviewer effects in a national survey. Sociological Methods and Research, 8, 400–419. doi:10.1177/004912418000800403

Schuman, H., & Converse, J. (1971). The effects of black and white interviewers on black responses in 1968. Public Opinion Quarterly, 35, 44–68. doi:10.1086/267866
Srull, T., & Wyer, R. (1989). Person memory and judgment. Psychological Review, 96, 58–83. doi:10.1037/0033-295x.96.1.58

Tajfel, H. (1982). Social psychology of intergroup relations. Annual Review of Psychology, 33, 1–39. doi:10.1146/annurev.ps.33.020182.000245

Tourangeau, R., Rips, L., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.

Webster, C. (1996). Hispanic and Anglo interviewer and respondent ethnicity and gender: The impact on survey response quality. Journal of Marketing Research, 33, 62–72. doi:10.2307/3152013

Weeks, M., & Moore, R. (1981). Ethnicity-of-interviewer effects on ethnic respondents. Public Opinion Quarterly, 45, 245–249. doi:10.1086/268655

West, B. T., & Blom, A. G. (2017). Explaining interviewer effects: A research synthesis. Journal of Survey Statistics and Methodology, 5, 175–211.

West, B. T., Kreuter, F., & Jaenichen, U. (2013). "Interviewer" effects in face-to-face surveys: A function of sampling, measurement error, or nonresponse? Journal of Official Statistics, 29, 277–297.

West, B. T., & Olson, K. (2010). How much of interviewer variance is really nonresponse error variance? Public Opinion Quarterly, 74, 1004–1026.

Zaller, J., & Feldman, S. (1992). A simple theory of survey response: Answering questions versus revealing preferences. American Journal of Political Science, 36, 579–616. doi:10.2307/2111583

Zernike, K. (2008, October 11). Do polls lie about race? The New York Times. Retrieved from http://www.nytimes.com/2008/10/12/weekinreview/12zernike.html

Zernike, K., & Sussman, D. (2008, November 5). For pollsters, the racial effect that wasn't. The New York Times. Retrieved from http://www.nytimes.com/2008/11/06/us/politics/06poll.html

© The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved.
