“Fear-of-the-State Bias” in Survey Data

Abstract

Public attitude surveys provide invaluable data for assessing how people view their countries’ democratic progress and government performance, in addition to a range of other outcomes. Yet, these data are vulnerable to substantial biases deriving from interviewer effects. Apart from social desirability bias resulting from a (non)coethnic interviewer, this article demonstrates that the perception of a government interviewer is another crucial mechanism that generates bias in the African context. The evidence suggests that fear of the state, rather than social desirability, leads people in less open societies to provide more positive assessments of democratic and government performance and to underreport corruption. In identifying this new source of bias, the article discusses potential improvements to survey protocols and modes of administration.

Survey data collected through face-to-face interviews are susceptible to different types of response (and nonresponse) bias. For example, the literature has shown biased responses resulting from the gender, race, religion, and ethnicity of the interviewer (Beatty & Herrmann, 2002; Groves, 2004; Groves et al., 2009; Tourangeau & Yan, 2007). Evidence for these interviewer effects can be found in a variety of contexts in both the developed and the developing world. In the African context, recent work has demonstrated substantial biases resulting from the effect of an interviewer-respondent ethnic (mis)match (Adida, Ferree, Posner, & Robinson, 2016; Dionne, 2014; Samii, 2013). This finding is not surprising, given the social salience of ethnicity in many African countries.

However, little (if any) work has theorized or investigated response biases that result from the respondent’s perception of a government interviewer and the fear that this perception activates. This dearth of research is surprising, given that survey respondents in many African countries are likely to be fearful of the state and thus far less likely to provide honest responses to government interviewers. Not taking this response bias into account may lead to incorrect inferences about citizen preferences for democracy as well as evaluations of government performance and corruption.

This article makes two contributions with respect to evaluating survey data and improving survey methodology in sensitive contexts. First, it is, to the best of my knowledge, the first study to demonstrate the bias resulting from the perception of a government interviewer. It investigates how this bias affects conclusions for a range of politically sensitive survey questions relating to democratic progress, government performance, corruption, and trust in political parties. Empirically, this is achieved using a multilevel model with a range of control variables. Even after controlling for respondent characteristics and country-level freedom, the effect of perceiving a government interviewer is substantial. These respondents systematically overstate their endorsement of patronage activities and the supply of (and satisfaction with) democracy while understating their preference for democracy. In addition, they are less likely to criticize their governments or point out corruption. Similarly, they overstate their trust in the ruling party and its performance. In contrast, they understate their trust in the opposition.
The second contribution of the article is to articulate a new theory—fear-of-the-state bias—that seeks to explain the significant response biases apparent in the data. In contrast to the social desirability thesis, the biases resulting from perceiving a government interviewer are more in line with respondents being fearful of government or party reprisals. Building on this theory, the article unpacks the pooled cross-country regression results to shed light on some of the conditions that lead to a fearful response bias. I draw on the illustrative case of Mozambique to show that a fear-of-the-state bias is particularly likely in countries with a repressive political climate, where the dominant party has power over the legislature and a far-reaching grip on the media and other aspects of economic life, including the provision of government jobs. Contrasting results from more open democratic countries such as Cabo Verde and South Africa provide further empirical support for this argument.

Types of Survey Response Bias

There is an extensive literature on bias in survey response in sociology and psychology, and increasingly in political science (Adida et al., 2016; DeMaio, 1984; Krumpal, 2013). In addition to well-known respondent errors (or biases) deriving from event encoding, question comprehension, and memory, interviewer effects are an important source of bias that can be ameliorated by survey methodology (Groves, 2004). Many of these interviewer effects are described as a form of social desirability bias based on a particular characteristic, such as the race, religion, or ethnicity of the interviewer (Beatty & Herrmann, 2002; Tourangeau & Yan, 2007). Social desirability bias refers to “answers which [sic] reflect an attempt to enhance some socially desirable characteristics or minimize the presence of some socially undesirable characteristics” (U.S. Department of Health, Education, and Welfare, 1977, p. 47, cited in DeMaio, 1984, p. 257).

In the African context, social desirability bias has increasingly been deployed to explain interviewer coethnicity effects (Adida et al., 2016; Dionne, 2014; Samii, 2013). The recent upsurge in these studies is not surprising, as ethnicity is thought to be highly socially salient across Sub-Saharan Africa. Studies, however, have produced divergent results. In rural Malawi, Dionne (2014) found that nonresponse to most questions was not associated with the interviewer-respondent ethnic match.1 In contrast, Samii (2013) found less nonresponse for coethnic matches in Burundi. Using a large sample of African countries, Adida et al. (2016) found that the average response bias generated by a noncoethnic interviewer-respondent match was substantial for certain questions. They concluded, “In almost all cases, the direction of this effect is consistent with stronger social desirability bias in ethnically mismatched interviews” (p. 14).

This article acknowledges that there is a coethnic effect for certain questions in many African countries and that this may be because of social desirability. However, this article argues that for certain questions (and for particular countries), the coethnic effect because of social desirability may be absent or may be less important than the effect of other factors. More specifically, this article shows that the perception of a government interviewer generates more frequent biases for politically sensitive questions.
This article’s results do not easily conform to the social desirability thesis, a finding that warrants further theorizing and testing of other explanations. Apart from social desirability bias, the literature points to a few other reasons for misreporting (or nonresponse). One reason is the threat of disclosure and the related consequences stemming from another party (Tourangeau et al., 2000). Beatty and Herrmann (2002, p. 72) term this a “conflict of interest” that occurs when respondents think that the information could be used against them, or when they do not trust the interviewer. This form of sensitivity is most relevant to the questions analyzed in this article, wherein the threat of disclosure is nearly certain because the respondent often believes that she is directly disclosing the sensitive information to the third party (Tourangeau & Yan, 2007).

In light of this, it is important to ask: For which types of questions are respondents more likely to fear third-party repercussions? The literature has generally focused on questions about illegal activities such as crime or drug use, for which respondents may have an adequate response but choose not to provide it because it could put them at risk (Beatty & Herrmann, 2002, p. 75). However, questions that relate to legal activities or legitimate views may also generate fear in respondents, particularly in repressive societies. For example, reporting on poor government performance or corruption may put people at risk, even if they are not involved. In this case, respondents have what Beatty and Herrmann (2002) term “negative communicative intent,” or a lack of motivation to provide the solicited information.

This leaves respondents with a few options. They can choose to say they “do not know” (DK) or “refuse to answer” (RA), which are errors of omission. Alternatively, they can provide an answer they know to be incorrect (an error of commission). An error of commission is more likely in some cases, especially when a DK or an RA may in itself reveal information to the interviewer. Although the focus in survey design is generally on reducing errors of omission (and increasing the effective sample size), the reasons for nonresponse or misreporting should be carefully considered in assessing which type of error is preferable (Beatty & Herrmann, 2002). For example, in this article, fear of a government interviewer may be the underlying factor driving nonresponse and misreporting. Thus, a reduction in the effective sample size because of DK or RA responses may be preferable to the biases that result from systematic misreporting (Dionne, 2014, p. 475).

This article examines the effect of a respondent perceiving a government interviewer on misreporting for a range of outcome variables. Fortunately, the Afrobarometer (in contrast to other opinion polls) includes a question at the end of its survey that asks the respondent: “Who do you think sent us to do this interview?” (Afrobarometer Codebook Rounds 3 and 4).2 Table 1 presents a descriptive breakdown of responses for the aggregated categories and a few of the important subcategories. Notably, 55% of respondents in the full sample believe that the interviewer is a government representative, while the remainder either perceive a nonstate sponsor (28%) or are recorded as missing, DK, or RA (17%).
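To make the aggregation underlying Table 1 concrete, the snippet below is a minimal sketch of how the raw sponsor responses might be collapsed into the three aggregate categories (see footnote 2 for the full coding scheme). The data frame, column name, and category labels are hypothetical placeholders rather than the actual Afrobarometer variable codes.

```python
import pandas as pd

# Hypothetical raw answers to "Who do you think sent us to do this interview?"
# Labels are illustrative placeholders, not the actual Afrobarometer codes.
df = pd.DataFrame({
    "sponsor_raw": [
        "government (general)", "president/pm's office", "ngo",
        "research company", "university/school", "dk",
        "government (general)", "intelligence",
    ]
})

STATE = {
    "government (general)", "national government", "provincial government",
    "local government", "president/pm's office",
    "parliament/national assembly", "intelligence",
    "political party/politicians",
}
NONSTATE = {
    "census/statistics office", "ngo", "research company",
    "university/school", "newspapers/media", "private company",
}

def aggregate(raw: str) -> str:
    """Collapse detailed sponsor codes into the three aggregate categories."""
    if raw in STATE:
        return "state"
    if raw in NONSTATE:
        return "nonstate"
    return "dk/ra/missing"

df["sponsor_cat"] = df["sponsor_raw"].map(aggregate)
# Percentage breakdown, analogous to the bottom row of Table 1.
print(df["sponsor_cat"].value_counts(normalize=True).mul(100).round(1))
```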
Table 1 Breakdown of Interviewer Perceptions for Full Sample (%)

| Categories | State | Nonstate | DK, RA, missing |
| --- | --- | --- | --- |
| Government (general) | 39.12 | | |
| President/PM's office | 5.66 | | |
| Intelligence | 0.32 | | |
| Census/statistics office | | 1.94 | |
| NGO | | 6.20 | |
| Research company | | 10.99 | |
| University/school | | 2.47 | |
| DK | | | 16.41 |
| Total average | 55 | 28 | 17 |

Note: Unweighted percentages for some of the 28 total options within the aggregated state and nonstate categories. Pooled for 2005 and 2008. DK = do not know; RA = refuse to answer.

I argue that respondents in unstable countries who believe the government sent the interviewer are more likely to misreport in a way that is theoretically consistent with a fear of the state. Here, it is important to emphasize that the posited mechanism generating bias is not fear per se, but the perception of a state interviewer that activates any fear that a respondent may already have. As the data in Table 2 show, the percentage perceiving a state interviewer can be high in both fearful and unfearful countries (e.g., 79% in Mozambique compared with 73% in South Africa).

Table 2 Breakdown of Interviewer Perceptions by Country (%)

| Country | State | Nonstate |
| --- | --- | --- |
| Benin | 60 | 40 |
| Botswana | 77 | 23 |
| Burkina Faso | 73 | 27 |
| Cape Verde | 68 | 32 |
| Ghana | 70 | 30 |
| Kenya | 55 | 45 |
| Lesotho | 72 | 28 |
| Liberia | 46 | 54 |
| Madagascar | 66 | 34 |
| Malawi | 74 | 26 |
| Mali | 73 | 27 |
| Mozambique | 79 | 21 |
| Namibia | 54 | 46 |
| Nigeria | 76 | 24 |
| Senegal | 67 | 33 |
| South Africa | 73 | 27 |
| Tanzania | 75 | 25 |
| Uganda | 59 | 41 |
| Zambia | 56 | 44 |
| Zimbabwe | 47 | 53 |
| Total average | 55 | 28 |

Note: In contrast to the country breakdowns, in the total breakdown, I do not exclude responses that are missing, DK, and RA. Pooled for 2005 and 2008. DK = do not know; RA = refuse to answer.
Meanwhile, Table 3 shows that repression of the media and individual expression is high on average, with a few notable exceptions (e.g., South Africa and Cape Verde).

Table 3 Individual and Media Repression (2005 and 2008)

| Country | FH CL and PR (1–7) | FH expression (0–16) | RSF media (0–100) |
| --- | --- | --- | --- |
| Mozambique | 3.25 | 11.0 | 15.5 |
| South Africa | 1.75 | 15.0 | 7.25 |
| Cape Verde | 1.0 | 15.0 | 7.0 |
| Total average | 3.07 | 12.49 | 19.10 |

Note: Freedom House (FH) civil liberties (CL) and political rights (PR) scores, with one denoting the most freedom. Reporters Sans Frontieres (RSF) World Press Freedom Index, with lower scores denoting more media freedom.

On misreporting, the literature has generally neglected the possibility that respondents provide systematically different answers to politically sensitive questions when questioned by perceived state representatives.3 Here, I argue that the social desirability bias thesis is generally less plausible than fear of state repercussions or fear of losing out on state benefits. My argument is made by analyzing biased responses that do not comport well with the social desirability thesis and by using country case studies to delve deeper into the underlying conditions that could be driving the biases apparent in the data. For some of the survey questions, one could maintain that social desirability is an equally plausible explanation for bias. However, for that to be valid, one would also have to argue that social desirability is activated by the perception of a government interviewer.
This seems less likely to be a mechanism that would trigger socially desirable responses, compared with other mechanisms such as a coethnic interviewer or interviewers from institutions that can provide greater social proof (e.g., religious and traditional leaders).

Findings: The Effect of Fear-of-the-State

In this section, I use multilevel (random effects) modeling based on two pooled waves of Afrobarometer data (Rounds 3 and 4) to estimate the effect of perceiving a government interview sponsor.4 In the Afrobarometer survey, the individual-level observations (obtained from an individual within a visited household) are nested within clusters (i.e., countries). A multilevel model can be used to account for this clustering, given that country-level attributes are theoretically likely to affect the individual outcome variables of interest. The model assumes that unobserved contextual factors at the country level (which are captured in the random intercepts) are uncorrelated with the included independent variables; a violation of this assumption could result in heterogeneity bias.5 I closely follow Adida et al.’s (2016) model specification, which controls for age, gender, education, urban or rural residence, minority ethnic group membership, ethnic group size and its square, a wave dummy, and whether the interview was conducted in one’s home language.6 At the country level, I control for media freedom using the Reporters Sans Frontieres (RSF) World Press Freedom Index.7 I also control for individual freedom of expression and belief using Freedom House’s (FH) measure. Importantly, this measure captures, inter alia, “the ability of the people to engage in private (political) discussions without fear of harassment or arrest by the authorities” (Dahlberg, Holmberg, Rothstein, Khomenko, & Svensson, 2016, p. 66).8

The first column of Table 4 presents the results for the full sample for the effect of perceiving a government interviewer on a range of political outcomes. The results show statistically significant effects that are consistent with the fear-of-the-state theory.9 Respondents who think that a government representative is interviewing them are more likely to assert that leaders should help their home communities as opposed to not favoring their own family or group. This is arguably a statement showing support for (or a lack of criticism of) politicians engaging in nepotistic or patronage practices. It is also a likely response for people who are fearful of state repression in countries where politicians thrive on patronage. Social desirability does not seem to be a plausible explanation, unless people view patronage as socially desirable. Indeed, Adida et al. (2016) find no social desirability bias effect (i.e., no noncoethnic match effect) for this question.

For a similar reason, it seems unlikely that social desirability would help explain why respondents report a lower preference for democracy. Here, Adida et al. (2016) find that the social desirability effect results in a higher reported preference for democracy. Given this, it seems more likely that underreporting a preference for democracy aims to appease the state and its nondemocratic practices. This is an important finding. The literature has documented that an increasing number of “Africans”10 prefer democracy as their system of government. This result suggests that Africans prefer it even more than the data have historically shown.
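To make the estimation strategy concrete, the following is a minimal sketch of a country random-intercept specification of the kind described above, written in Python with statsmodels. The prepared data file and all variable names are hypothetical placeholders, not the actual Afrobarometer variable codes.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch: individuals nested within countries, with a country-level
# random intercept. All column names below are hypothetical placeholders.
df = pd.read_csv("afrobarometer_pooled.csv")  # hypothetical prepared data set

model = smf.mixedlm(
    # Outcome regressed on the perceived-government-interviewer dummy plus
    # the individual- and country-level controls named in the text.
    "trust_ruling_party ~ perceived_gov + age + female + education + urban"
    " + minority + group_size + I(group_size**2) + round4_wave"
    " + home_language + rsf_media_freedom + fh_expression",
    data=df,
    groups=df["country"],  # defines the country random intercepts
)
result = model.fit()
print(result.summary())
```

For the pooled ordinary least squares robustness check described in footnote 5, the same formula could instead include explicit `C(country)` and wave dummy terms in place of the `groups=` argument.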
Another important finding is that respondents overstate the extent of democracy and their satisfaction with democracy, probably because they fear criticizing the government’s performance on this dimension. The literature has also shown that the reported supply of democracy often exceeds the demand for democracy (Bratton & Houessou, 2014). These two findings taken together suggest one reason why this paradox exists. A fear of the state may also be driving other results that cannot easily be explained by the social desirability thesis. For example, Table 4 shows that respondents are less likely to report that party competition causes conflict (statistically significant at the 90% level).

For other survey questions, social desirability seems like a credible explanation for response biases. For example, social desirability might explain why respondents report to the perceived government interviewer that they are far more likely to trust the ruling party and less likely to trust the opposition. In a dominant party system, these are arguably socially desirable responses, given the probability that the government representative is from the ruling party. However, both biases also align well with the theory that fearful respondents are more likely to vocalize support for the ruling party and distrust of the opposition when they think a government (i.e., ruling party) representative is at their door. In the Adida et al. (2016) study, there was no social desirability effect on the trust-in-opposition question.

Turning to evaluations of government officials at different levels, respondents are considerably more likely to approve of the President’s and MPs’ performance. In contrast, there is no statistically significant difference for their evaluation of local government’s performance. This may suggest that citizens are less fearful of local government officials, who are generally far lower in the state’s power hierarchy than officials at the provincial and national level. Again, it is difficult to argue that higher ratings of elected officials are socially desirable, particularly in light of the insignificant difference on local government’s performance. One would expect that if social desirability were the mechanism activating bias, respondents would be even more likely to approve of local government officials, as these are people who generally come from, and live within, the community.

Table 4 shows that the differences for questions about corruption are statistically significant for all types of government officials, from the offices of the President or Prime Minister all the way down to local government officials and councilors. Although corruption has been a prominent element in the African politics literature, I am not familiar with empirical work demonstrating that Africans systematically underreport the extent of corruption when they believe they are being asked by a government representative. Moreover, this study also shows that fearful respondents provide more positive ratings of how well the government handles corruption. For corruption questions, social desirability is clearly a plausible reason for biased responses, particularly if one argues that a respondent becomes more aware of what is socially desirable when asked by a government representative.
However, in addition to the social desirability of less corruption, I contend that the respondent is also worried that the perceived government interviewer or his colleagues will jail, fine, or punish the respondent for reporting corruption. This argument is based largely on interviews and anecdotal accounts during field research in Mozambique in late 2015/early 2016.

Examining Underlying Conditions: Country Case Studies

In what contexts are respondents more likely to be fearful of government reprisals when they are questioned by a state interviewer? This section explores this question through country-level analysis of the Afrobarometer data as well as expert data on media freedom and on political and individual freedoms. I purposely select cases from three groups of countries to illustrate the importance of context. The first case, Mozambique, is classified as “fearful” based on government restrictions on media and individual freedom, as well as its volatile political arena. The second case, Cabo Verde, is classified as “unfearful,” given its high scores for media and individual freedom and its allowance for open multiparty competition. The data will show that responses here are generally unaffected by the perceived identity of the interviewer. In fact, responses to a few questions suggest that Cabo Verdeans are taking the government to task. In parallel to Cabo Verde, similar levels of freedom and stable competition characterize the third case, South Africa. However, South Africa is classified as a “critical” case because it also has an active climate of protest and criticism of the government. In contrast to fearful countries, perceiving a state interviewer there corresponds to overreporting corruption.

Why does the perceived government affiliation of the interviewer matter in the fearful (and critical) countries? What are the features of these countries that make it more likely for respondents to misreport in a particular way when they believe the enumerator is from the government? The next sections attempt to answer these questions by examining the sociopolitical conditions and trajectories of particular countries.

Mozambique

Mozambique is the emblematic “fearful” case. In fact, the fear-of-the-state hypothesis was informed by a cumulative year of research and work experience in Mozambique from 2013 to 2016. From late 2015 to mid-2016, I conducted over 100 interviews with government officials, local leaders, and NGO staff, in addition to conversing with other residents in several cities and rural areas throughout the country. The interviews featured questions about ethnicity, political party affiliation, and other topics. Across all interviews, there was one common source of angst, fear, or reticence to offer answers. It was not ethnicity. In not one of my interviews did a respondent seem uncomfortable reporting her mother tongue and ethnicity, in succession. Moreover, when prompted, government officials spoke openly about working with people from different ethnic groups as if it were a nonissue. In contrast, in almost every one of my interviews, discussing one’s party affiliation and the link between political party competition and development were patently sensitive issues. On several occasions, the senior official at the government office who granted me permission to interview his staff told them they could speak freely because my project was not political, but academic research.
The respondents who were willing to comment on the link between political competition and development provided examples of the unwillingness of local people to participate in government projects because of a general fear of violence. This was especially evident after people heard of recent clashes between the two parties and of recent assassination attempts on the leader of the main opposition party, Afonso Dhlakama.11 Opposition party members and regular citizens also spoke of a fear of losing access to jobs and other state benefits if they did not support the ruling party. Furthermore, the party ID card of the ruling party, FRELIMO (Frente de Libertação de Moçambique), was frequently cited as a necessary condition for obtaining employment at a state institution.12

Apart from these accounts, Mozambique’s history makes it highly likely that people would be fearful of violence or government repression for criticizing its performance. To begin with, Mozambicans from all walks of life were deeply affected by a 17-year postindependence civil war from 1975 to 1992. Since multiparty democracy began in 1994, there have been numerous clashes between FRELIMO and the main opposition party, RENAMO (Resistência Nacional Moçambicana), often around elections, and the real threat of a return to full-blown civil war has persisted. Every election since 1994 has seen deadly violence, which “has always been targeted, localised [sic], and intentional” (Pereira & Nhanale, 2014, p. 22). RENAMO has boycotted elections amid allegations of the state manipulating electoral rules and committing electoral fraud, claims that international observers have repeatedly echoed. In November 2000, more than 40 people were killed in rioting at RENAMO protests over the 1999 elections (Manning, 2001, p. 164). During the same year, prominent journalist and activist Carlos Cardoso was killed while investigating corruption linked to the privatization of the state's largest bank, Banco Comercial de Moçambique (BCM).

Although these events are illustrative of the general political climate in Mozambique, it is important to draw on additional expert data corresponding to the time period under study. To that end, RSF’s World Press Freedom Index as well as FH’s Freedom in the World rankings provide evidence in support of the claim that the fear-of-the-state bias will play out in countries where there are restrictions on media as well as on political rights and civil liberties. The World Press Freedom Index ranges from 0 to 100, with lower scores denoting more media freedom. For the 2 years included in the sample, Mozambique’s average media freedom ranking was 15.5. Table 3 shows that this ranking is in the middle of the distribution for the sample, although I am highly skeptical about the validity of this ranking, particularly in light of Cardoso’s assassination. With respect to other freedoms, I again draw on FH’s freedom of expression and belief measure, which ranges from 0 to 16, with higher scores denoting more freedom. I also draw on the widely used FH political rights and civil liberties indices, which rate countries on a scale of 1–7, with one denoting the most freedom.13 Although many people have described Mozambique as a postconflict democratization success story, Mozambique has never been classified as free. Since its transition to multiparty democracy in 1994, FH has ranked it as partly free (scores of 3 or 4) for both political rights and civil liberties, and its average freedom of expression score for 2005 and 2008 was 11 (see Table 3).
These scores place it at the less free end of the distribution for the sample. Against the backdrop of the political context described above, the country-level regression results provide further empirical support for the fear-of-the-state hypothesis.14 Column 2 in Table 4 shows substantial differences on several outcome variables, all of which align with the fear-of-the-state bias theory. First, Mozambicans overreport the extent of, and satisfaction with, democracy when they think they are being interviewed by the state. Similarly, they overreport trust in the ruling party. In terms of government performance, they overreport approval of their MP’s and local government’s performance. Finally, they underreport corruption among the President, MPs, and national and local government officials, while they overreport how well the government is fighting it. In sum, for 10 politically sensitive outcome variables, survey responses are considerably biased by the perception that the interviewer is a government representative. Moreover, these biases are aligned with fear of state repercussions.

Cabo Verde

The first case, Mozambique, was illustrative of a fearful country. Cabo Verde, in contrast, helps to shed light on variation outside the dominant pattern. One of the more robust democracies on the continent, Cabo Verde is classified as an unfearful case. Although it may be rare for the sample, it provides evidence for the theory that when citizens are not fearful of the state, they provide more honest responses when they think the government is interviewing them. In stark contrast to the other countries in the data set, Cabo Verde has experienced several peaceful elections and transitions in government since multiparty elections were introduced in 1991. Party competition has thus been relatively healthy and stable. Among the countries ranked free by FH, Cabo Verde is the only one in the Afrobarometer sample to continuously receive the highest freedom score of one for both political rights (since 1992) and civil liberties (since 2004). Its freedom of expression score of 15 places it at the top end of the distribution alongside South Africa and a few other countries. Moreover, its average media freedom score of seven is something to aspire to on the continent. Cabo Verde’s democratic and peaceful history in the postindependence period underpins the lack of fear among survey respondents.

For the first set of political questions (presented in Column 3), the only statistically significant difference is underreporting support for patronage. This result is in contrast to the aggregate result of overreporting support for patronage and cannot be explained by either fear or social desirability. Instead, this result suggests that respondents may be calling out the government for patronage practices. Similarly, Cabo Verdeans also report to perceived state representatives that they have less trust in the ruling party. This finding is contrary to the pooled result of reporting greater trust and shows again that respondents in Cabo Verde are not afraid of voicing concerns about trust when they think the government is interviewing them. On the performance questions, Cabo Verde provides a sharp contrast to the aggregate results and the fearful country case. While there is no difference in approval of the President or the MP, respondents underrate approval of local government.
This finding cannot be explained by fear or social desirability, but perhaps by the respondent’s desire to communicate to the government official that local government needs to be improved. For the corruption questions, Cabo Verdeans do not change their ratings of corruption toward a government representative. Somewhat surprisingly, they overreport how well government is handling corruption (significant at the 90% level). Earlier I had argued that this aggregate association was suggestive of the fear-of-the-state bias. In the case of Cabo Verdeans, I think this result could also be because of their desire to commend the government’s performance on corruption. The Worldwide Governance Indicators’ (WGI) control of corruption measure shows that Cabo Verde climbed significantly from 1999 to 2008.15 In sum, the results in Cabo Verde provide further support for the fear-of-the-state hypothesis while undermining the social desirability thesis; in a country where citizens are not likely to be fearful, the perceived-state-interviewer mechanism does not generate differences on most outcome variables. For the few variables where there are differences, respondents generally seem to be challenging the ruling party and its performance, rather than appealing to social norms and values.

South Africa

South Africa is similar to Cabo Verde, but it has an active climate of protest and government criticism and is thus classified as a “critical” case. Moreover, its press has been able to conduct investigative journalism of government corruption and lampoon leaders from the ruling party for their deficiencies. Its average media freedom (7.25) and freedom of expression scores (15) lie at the high end of the sample. It has also scored either 1 or 2 for political rights and civil liberties since the end of apartheid. While South Africa has not seen party turnover since the end of apartheid and the introduction of multiparty elections in 1994, its elections have been widely regarded as free and fair, with little electoral violence between parties or suppression of the opposition. The ruling party, the African National Congress (ANC), has won all elections with over 60% of the vote (with the exception of the 2016 local elections) and has never faced a real threat of losing its majority in parliament. Still, opposition parties campaign in the open throughout the country, and even members of the ruling party have been quick to criticize its leadership or split off to form new parties.

In dozens of interviews (and informal conversations) carried out in South Africa in 2016, regular citizens and elected officials from several parties spoke openly about what they viewed as corrupt or inappropriate activities in government, as well as about local government officials being removed from their posts. In several municipalities, ruling party councilors recommended that I speak to their opposition counterparts from the Democratic Alliance (DA). This is a noteworthy divergence from Mozambique, where FRELIMO municipal councilors generally told me that the councilors from the opposition party, the Movimento Democrático de Moçambique (MDM), were unavailable or uninterested in answering my questions. In light of this, it seems that South Africa’s climate allows for critical appraisals of government performance, as people are unafraid to hold corrupt politicians to account. The Afrobarometer data bear this out.
On performance questions, South Africans are identical to Cabo Verdeans in that they underreport approval of local government's performance when they think the government is sponsoring the interview. However, in contrast to Cabo Verdeans, South Africans overreport corruption among MPs, the President, and (unspecified) government officials (the latter two significant at the 90% level). This result is unique in the data set (with the exception of Nigeria) and demonstrates the relatively higher willingness of South Africans to point out corruption compared with other countries in Sub-Saharan Africa. Moreover, this result shows that social desirability is clearly not driving differences, because government representatives do not view more corruption as desirable. Finally, one somewhat surprising result is that South Africans understate trust in the opposition (significant at the 90% level). This is in line with the aggregate results but contrary to what one would expect in a critical country. However, in contrast to fearful countries, South Africans do not overstate support for the ruling party.

In sum, South Africa is an important case that provides further evidence for the fear-of-the-state hypothesis. In contrast to most countries in the sample, South Africa’s open political climate enables ordinary citizens and opposition parties to criticize without fear of retribution. As a result, the perceived identity of a government interviewer in South Africa does not generate differences that align with fear or social desirability. Instead, the perception of a state interviewer generates a bias that stems from a lack of fear as well as a desire to be critical of the government.

Implications and Recommendations for Improving Survey Methodology

As described earlier, the literature has thoroughly documented the high potential for bias affecting responses to sensitive questions about racial and religious attitudes, sexual behavior, drug use, and voter turnout (Gonzalez-Ocantos et al., 2012, p. 204). For those survey questions, researchers have created social desirability scales and used indirect methods to obtain more accurate estimates and minimize nonresponse (DeMaio, 1984; Nederhof, 1985). Yet, there is little research on how the perceived institutional affiliation of the interviewer generates bias, particularly in unsafe contexts where respondents may fear reprisals by the state. In light of this, future work could test and use different methods to measure these politically sensitive ratings and behaviors.

One first step may be to experiment with new approaches to reduce the percentage of people who believe the interviewer comes from the state. For example, the Afrobarometer survey has an introductory script in the interview protocol stating that the interviewer comes from an independent research organization and does not represent the government or a political party. Moreover, the interviewer affirms that all information will remain confidential. Despite this, just over a quarter of the sample is convinced by the protocol that the interviewer is independent. Given this, the protocol could be strengthened to provide greater reassurance of confidentiality to the respondent (Beatty & Herrmann, 2002). Perhaps the presentation of additional credentials and proof of nongovernmental affiliation could be helpful in this regard. In addition, partnering with organizations that are widely known and viewed to be separate from the state could increase the probability of obtaining more accurate responses.
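Indirect question formats, discussed in the next paragraph, offer another route. To make one of them concrete, the snippet below is a minimal sketch of the standard difference-in-means estimator for a list experiment; the values are fabricated toy data for illustration only.

```python
import pandas as pd

# Minimal sketch of the list experiment (item count technique) estimator.
# Respondents are randomized: controls see a list of innocuous items, the
# treatment group sees the same list plus one sensitive item, and everyone
# reports only HOW MANY items apply to them. The difference in mean counts
# estimates the prevalence of the sensitive item without any individual
# directly revealing it. The data below are illustrative toy values.
df = pd.DataFrame({
    "treated": [0, 0, 0, 0, 1, 1, 1, 1],     # 1 = list included sensitive item
    "item_count": [1, 2, 0, 1, 2, 2, 0, 2],  # number of items endorsed
})

treated_mean = df.loc[df["treated"] == 1, "item_count"].mean()
control_mean = df.loc[df["treated"] == 0, "item_count"].mean()
prevalence = treated_mean - control_mean  # estimated share holding the item
print(f"Estimated prevalence of the sensitive item: {prevalence:.2f}")
```

Endorsement experiments and randomized response designs follow the same logic of shielding the individual answer while preserving an aggregate estimate.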
In addition, indirect modes of administration such as endorsement experiments, list experiments (also called the item count or unmatched count technique), or randomized response techniques could also be tested (Coutts & Jann, 2011; Glynn, 2013; Rosenfeld, Imai, & Shapiro, 2015). Despite their limitations, these indirect methods may improve the accuracy of data collected from sensitive questions. For example, Blair, Imai, and Lyall (2014) combined list and endorsement experiments in Afghanistan to elicit more credible data and produce more efficient estimates. Another advantage of these experiments is that they are easier to implement than self-administration in low-literacy contexts. In contexts where literacy is higher, self-administration (which eliminates the face-to-face interaction with the state representative) may be preferable, because respondents have been shown to be more willing to answer sensitive questions when surveys are self-administered (Groves et al., 2009; Krumpal, 2013). Although there has been significant testing of these different methods, further research in politically sensitive contexts would be helpful in understanding whether these methods obtain more reliable estimates.

Conclusions

This article has aimed to shed light on a neglected source of response bias in African public opinion survey data. In contrast to previous studies that have highlighted a social desirability bias based on coethnicity, this article describes a fear-of-the-state bias in which respondents who believe the interviewer comes from the government are systematically more likely to rate democratic progress and government performance favorably while underreporting corruption and trust in the opposition. This article focused on sensitive political outcome variables, for which theory would suggest a biased response based on fear of government reprisals or fear of losing state benefits. However, this bias would likely affect other outcome variables in other parts of the world where media are restricted and attacks on dissenting political views are prevalent. Moreover, the results of this article suggest the need for greater caution in drawing strong conclusions from public opinion data in sensitive contexts. To measure bias in contexts outside Africa, other surveys such as Latinobarómetro, Arab Barometer, and Pew polls would do well to include a question on the perceived institutional affiliation of the interviewer.

This article used multilevel modeling along with country case studies to investigate when and how the perception of a state interviewer will generate biased survey responses. Restricted media freedom and political repression are just a few of the factors that are likely driving the fear-of-the-state bias. In countries with a freer democratic climate, survey respondents are less influenced by the perceived government identity of the interviewer and can even be critical of government performance and corruption. Beyond this article’s findings, the analytical approach sought to demonstrate the need for unpacking cross-country regression results. There is far too much (important) variation across African countries, which must be investigated and brought to bear to sharpen country-level policymaking and scholarly knowledge. Finally, while existing quantitative survey data can be helpful in generating this knowledge, so can a qualitative understanding of the contexts in which surveys take place.
In this article, interviews and informal conversations helped inform specific doubts about the quantitative data and lines of inquiry that could be further examined. They also helped shape the selection of country cases and the interpretation of the survey data results. While these qualitative data are anecdotal and suggestive, this article argues that data collected outside of surveys, from the people who are most knowledgeable about day-to-day sociopolitical realities, should be an invaluable resource for understanding survey data and their limitations.

Zack Zimbalist is a PhD candidate in the Department of African Studies at the School of Advanced International Studies (SAIS) at Johns Hopkins University. His research interests are centered on the political economy of development and democratization in Sub-Saharan Africa. He holds a Master’s in Development Studies from the University of KwaZulu-Natal, Durban, South Africa, and a BA in International Studies-Economics from UCSD.

Appendix

The main text of this article analyzed how respondents intentionally misreport answers to particular questions. However, the respondent could also choose not to respond. Although the literature often overlaps in discussing the reasons behind a respondent’s choice to misreport or not respond, there are differences that should be highlighted. In contrast to misreporting, a nonresponse decision could also derive from the unavailability of the information requested or from a mismatch between the accuracy the respondent believes she can provide and what she thinks is required (Beatty & Herrmann, 2002). The main source of overlap between misreporting and nonresponse is thus the respondent’s motivation to provide the information asked. In the context of this study, it is likely that in addition to misreporting, fearful respondents also provide more DK and RA responses to state interviewers. To the best of my knowledge, nonresponse has not been treated as a choice that is driven by a fear-of-the-state bias.

Figure 1 shows item nonresponse rates for political questions, by perceived government affiliation.16 Of the 16 variables presented, 14 have nonresponse rates that are higher among people who think that the government is interviewing them. The only exceptions to this pattern are opinions on support for patronage and contacting a government official. Apart from these variables, respondents are more likely to not respond to questions on democracy, government performance, and corruption.

Figure 1. Nonresponse by perceived interviewer affiliation (%).

Footnotes

1. He did find higher refusals to questions about sexual behavior when one is interviewed by a coethnic.

2. No response options are provided and the response is coded to fit into one of 28 categories, which did not change across the survey rounds. I code the following responses as coming from the government: government (general), national government, provincial government, local government, President/PM’s (Prime Minister) office, Parliament/National Assembly, National Intelligence/Secret Service, and political party/politicians.
The following responses are classified as non-government: no one, government census/statistics office, education or social affairs department/ministry, tax or finance department, health department, other government department, constitutional commission, electoral commission, national planning commission, public utility company, NGO (non-governmental organization), research company, newspapers/media, university/school/college, private company, international organization, god or a religious organization, town hall, and village chief. As with responses to all other variables, I code those who refuse to answer or don’t know the interviewer’s affiliation as “missing,” so as not to make any assumptions about whether those respondents are systematically not reporting an answer.

3. A few single-country case studies have highlighted this possibility. In a study on Zimbabwe, Chikwana et al. (2005, p. 100) concluded that “If people think they are talking to agents of the central government, they are much more cautious about what they say.” In Ethiopia, Mattes and Teka (2016, p. 2) find substantial political fear and suspicion among respondents, which are jointly associated with nonresponse and overly positive views of government. Also see Mattes and Shenga (2013) on Mozambique.

4. Round 3 took place in 2005/2006 and Round 4 was carried out in 2008/2009. For more precise dates of the surveys, see: http://www.afrobarometer.org/surveys-and-methods/survey-schedule. The countries included in the pooled data set are Benin, Botswana, Burkina Faso, Cabo Verde, Ghana, Kenya, Lesotho, Liberia, Madagascar, Malawi, Mali, Mozambique, Namibia, Nigeria, Senegal, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. In each wave, roughly 1,200 households are interviewed in each country, with the exceptions of Nigeria, South Africa, and Uganda (wherein roughly 2,400 households are interviewed). Burkina Faso and Liberia were only added in Round 4 and thus make up half the average number of observations per country in the pooled data set.

5. As a robustness check, I also run pooled ordinary least squares regressions with country and year dummy variables. This model is not susceptible to the heterogeneity bias threat discussed in the random effects model because, by adding the country dummies, the model controls out the unobserved heterogeneity at the higher level. The results are similar and available on request.

6. There are other plausible Afrobarometer individual-level control variables that I exclude because they are biased based on the perception of a government interviewer. To preserve observations, I do not control for the (non)coethnic match between interviewer and respondent or enumerator ethnic group size. This approach avoids dropping a large number of observations and, more importantly, entire countries from the sample for which data on enumerator ethnicity were not available (i.e., Botswana, Cabo Verde, Lesotho, Liberia, Madagascar, and Tanzania). This approach resulted in preserving 53,125 observations from 20 countries compared with 38,381 observations from 14 countries.

7. For information on their methodology, see: https://rsf.org/en/world-press-freedom-index

8. The data are taken from the QoG 2016 Standard data set (Dahlberg et al., 2016).

9. I do not present the estimates for the other control variables. However, to give a sense of the relative magnitude of these effects, only the urban resident predictor and sometimes the Round 4 wave dummy are larger.
The coefficients on education, media freedom, and freedom of expression are not directly comparable because they are not dummy variables. The controls for gender, ethnic group size, minority group, and home language are generally insignificant. Results available on request.

Table 4 Effect of a Perceived Government Interview Sponsor on Political Outcomes

| Dependent variables | Full sample (I) | Mozambique (II) | Cabo Verde (III) | South Africa (IV) |
| --- | --- | --- | --- | --- |
| Support for patronage | 0.02*** (0.01) | | −0.06* (0.03) | |
| Preference for democracy | −0.02*** (0.01) | | | |
| Extent of democracy | 0.07*** (0.01) | 0.15*** (0.05) | | |
| Satisfaction with democracy | 0.06*** (0.01) | 0.44*** (0.07) | | |
| Party competition leads to conflict | −0.02* (0.01) | | | |
| Trust ruling party | 0.12*** (0.012) | 0.20*** (0.07) | −0.18*** (0.07) | |
| Trust opposition parties | −0.03** (0.01) | | | −0.07* (0.04) |
| President’s/PM’s performance | 0.08*** (0.01) | | | |
| MPs’ performance | 0.05*** (0.01) | 0.14*** (0.05) | | |
| Local government councilors’ performance | 0.02 (0.01) | 0.12** (0.05) | −0.12** (0.05) | −0.14*** (0.04) |
| Presidential office corruption | −0.05*** (0.01) | −0.14*** (0.06) | | 0.07* (0.04) |
| Corruption among MPs | −0.06*** (0.01) | −0.18*** (0.06) | | 0.07** (0.03) |
| Corruption among local government councilors | −0.04*** (0.01) | | | |
| Corruption among government officials (unspecified) (R4 only) | −0.07*** (0.01) | | | 0.09* (0.05) |
| Corruption among national government officials (R3 only) | −0.07*** (0.02) | −0.30*** (0.08) | | |
| Corruption among local government officials (R3 only) | −0.09*** (0.02) | −0.22*** (0.08) | | |
| How well government fights corruption | 0.07*** (0.01) | 0.18*** (0.06) | 0.11* (0.06) | |
| Number of observations (households) | 30,000–37,500 | 1,287–1,694 | 1,294–1,543 | 2,809–3,642 |

Notes: Column 1 presents unstandardized coefficients from multilevel linear models with individuals as the lower-level unit of analysis and country-level random intercepts. Columns 2–4 present country-specific regressions. The numbers of observations for questions asked in only one wave range from 15,211 to 16,766. The data are unweighted. Standard errors in parentheses. *p < 0.10, **p < 0.05, ***p < 0.01.
10. I use the term Africans for ease of exposition. Attempting to generalize to a continental population of over 1.2 billion people residing in 54 different countries is misplaced.

Because the results presented here do not conform easily to the social desirability thesis, further theorizing and testing of alternative explanations are warranted. Apart from social desirability bias, the literature points to several other reasons for misreporting (or nonresponse).
One reason is the threat of disclosure and the related consequences stemming from another party (Tourangeau et al., 2000). Beatty and Herrmann (2002, p. 72) term this a “conflict of interest” that occurs when respondents think that the information could be used against them, or when they do not trust the interviewer. This form of sensitivity is most relevant to the questions analyzed in this article, wherein the threat of disclosure is nearly certain because the respondent often believes that she is directly disclosing the sensitive information to the third party (Tourangeau & Yan, 2007).

In light of this, it is important to ask: For which types of questions are respondents more likely to fear third-party repercussions? The literature has generally focused on questions about illegal activities such as crime or drug use, for which respondents may have an adequate response but choose not to provide it because it could put them at risk (Beatty & Herrmann, 2002, p. 75). However, questions that relate to legal activities or legitimate views may also generate fear in respondents, particularly in repressive societies. For example, reporting on poor government performance or corruption may put people at risk, even if they are not involved. In this case, respondents have what Beatty and Herrmann (2002) term “negative communicative intent,” or a lack of motivation to provide the solicited information.

This leaves respondents with a few options. They can say they “do not know” (DK) or “refuse to answer” (RA), which are errors of omission. Alternatively, they can provide an answer they know to be incorrect (an error of commission). An error of commission is more likely in some cases, especially when a DK or an RA may in itself reveal information to the interviewer. Although the focus in survey design is generally on reducing errors of omission (and thereby increasing the effective sample size), the reasons for nonresponse or misreporting should be carefully considered in assessing which type of error is preferable (Beatty & Herrmann, 2002). In this article, for example, fear of a government interviewer may be the underlying factor driving both nonresponse and misreporting; a reduction in the effective sample size because of DK or RA responses may therefore be preferable to the biases that result from systematic misreporting (Dionne, 2014, p. 475).

This article examines the effect of a respondent perceiving a government interviewer on misreporting for a range of outcome variables. Fortunately, the Afrobarometer (in contrast to other opinion polls) includes a question at the end of its survey that asks the respondent: “Who do you think sent us to do this interview?” (Afrobarometer Codebook Rounds 3 and 4).2 Table 1 presents a descriptive breakdown of responses for the aggregated categories and a few of the important subcategories. Notably, 55% of respondents in the full sample believe that the interviewer is a government representative; the remainder divide into 28% who name a nongovernment sponsor and 17% whose responses are missing, DK, or refused.
Table 1. Breakdown of Interviewer Perceptions for Full Sample (%)

Category                    State    Nonstate    DK, RA, missing
Government (general)        39.12
President/PM's office        5.66
Intelligence                 0.32
Census/statistics office              1.94
NGO                                   6.20
Research company                     10.99
University/school                     2.47
DK                                                16.41
Total average               55       28          17

Note: Unweighted percentages for some of the 28 total options within the aggregated state and nonstate categories. Pooled for 2005 and 2008. DK = do not know; RA = refuse to answer.

I argue that respondents in unstable countries who believe the government sent the interviewer are more likely to misreport in a way that is theoretically consistent with a fear of the state. Here, it is important to emphasize that the posited mechanism generating bias is not fear per se, but the perception of a state interviewer that activates any fear that a respondent may already have. As the data in Table 2 show, the percentage perceiving a state interviewer can be high in both fearful and unfearful countries (e.g., 79% in Mozambique compared with 73% in South Africa).

Table 2. Breakdown of Interviewer Perceptions by Country (%)

Country         State   Nonstate
Benin           60      40
Botswana        77      23
Burkina Faso    73      27
Cabo Verde      68      32
Ghana           70      30
Kenya           55      45
Lesotho         72      28
Liberia         46      54
Madagascar      66      34
Malawi          74      26
Mali            73      27
Mozambique      79      21
Namibia         54      46
Nigeria         76      24
Senegal         67      33
South Africa    73      27
Tanzania        75      25
Uganda          59      41
Zambia          56      44
Zimbabwe        47      53
Total average   55      28

Note: In contrast to the country breakdowns, the total averages do not exclude responses that are missing, DK, and RA. Pooled for 2005 and 2008. DK = do not know; RA = refuse to answer.
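For readers working with the public Afrobarometer files, breakdowns in the spirit of Tables 1 and 2 can be reproduced with a few lines of code. The sketch below is illustrative only: the file name, column names, and category labels are hypothetical placeholders rather than the official codebook variables, and only a subset of the 28 sponsor categories is mapped.

```python
import pandas as pd

# Hypothetical extract of the pooled Round 3/4 data; the file name,
# column names, and labels below are illustrative placeholders, not
# the official Afrobarometer codebook variables.
df = pd.read_csv("afrobarometer_r3_r4.csv")

# Collapse the 28 perceived-sponsor categories into aggregates,
# mirroring the coding described in the text (only a subset of the
# categories is mapped here).
STATE = {"government (general)", "president/pm's office", "intelligence"}
NONSTATE = {"census/statistics office", "ngo", "research company",
            "university/school"}

def classify(sponsor):
    """Map a perceived-sponsor label to state/nonstate/missing."""
    if pd.isna(sponsor):
        return "missing/DK/RA"
    label = str(sponsor).strip().lower()
    if label in STATE:
        return "state"
    if label in NONSTATE:
        return "nonstate"
    return "missing/DK/RA"

df["perceived"] = df["who_sent_interviewer"].map(classify)

# Table 1 analogue: unweighted full-sample percentages.
print(df["perceived"].value_counts(normalize=True).mul(100).round(1))

# Table 2 analogue: state/nonstate split by country, DK/RA excluded.
known = df[df["perceived"] != "missing/DK/RA"]
print(pd.crosstab(known["country"], known["perceived"], normalize="index")
      .mul(100).round(0))
```

Consistent with the guidance cited in footnote 16, such cross-tabulations between the perceived identity of the interviewer and other variables are deliberately left unweighted.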
Meanwhile, Table 3 shows that repression of the media and of individual expression is high on average across the sample, with a few notable exceptions (e.g., South Africa and Cabo Verde).

Table 3. Individual and Media Repression (2005 and 2008)

Country         FH CL and PR (1–7)   FH expression (0–16)   RSF media (0–100)
Mozambique      3.25                 11.0                   15.5
South Africa    1.75                 15.0                    7.25
Cabo Verde      1.0                  15.0                    7.0
Total average   3.07                 12.49                  19.10

Note: Freedom House (FH) civil liberties (CL) and political rights (PR) scores, with one denoting the most freedom. Reporters Sans Frontières (RSF) World Press Freedom Index, with lower scores denoting more media freedom.

On misreporting, the literature has generally neglected the possibility that respondents provide systematically different answers to politically sensitive questions when questioned by perceived state representatives.3 Here, I argue that the social desirability thesis is generally less plausible than fear of state repercussions or fear of losing out on state benefits. I make this argument by analyzing biased responses that do not comport well with the social desirability thesis and by using country case studies to delve deeper into the underlying conditions that could be driving the biases apparent in the data. For some of the survey questions, one could maintain that social desirability is an equally plausible explanation for bias. However, for that to be valid, one would also have to argue that social desirability is activated by the perception of a government interviewer.
This seems less likely to be a mechanism that would trigger socially desirable responses than, for example, a coethnic interviewer or interviewers from institutions that carry greater social proof (e.g., religious and traditional leaders).

Findings: The Effect of Fear-of-the-State

In this section, I use multilevel (random effects) models based on two pooled waves of Afrobarometer data (Rounds 3 and 4) to estimate the effect of perceiving a government interview sponsor.4 In the Afrobarometer survey, individual-level observations (obtained from an individual within a visited household) are nested within clusters (i.e., countries). A multilevel model can account for this clustering, given that country-level attributes are theoretically likely to affect the individual outcome variables of interest. The model assumes that unobserved contextual factors at the country level (captured in the random intercepts) are uncorrelated with the included independent variables; a violation of this assumption could result in heterogeneity bias.5

I closely follow Adida et al.'s (2016) model specification, which controls for age, gender, education, urban or rural residence, minority ethnic group membership, ethnic group size and its square, a wave dummy, and whether the interview was conducted in the respondent's home language.6 At the country level, I control for media freedom using the Reporters Sans Frontières (RSF) World Press Freedom Index.7 I also control for individual freedom of expression and belief using Freedom House's (FH) measure. Importantly, this measure captures, inter alia, “the ability of the people to engage in private (political) discussions without fear of harassment or arrest by the authorities” (Dahlberg, Holmberg, Rothstein, Khomenko, & Svensson, 2016, p. 66).8

The first column of Table 4 presents the results for the full sample for the effect of perceiving a government interviewer on a range of political outcomes. The results show statistically significant effects that are consistent with the fear-of-the-state theory.9 Respondents who think that a government representative is interviewing them are more likely to assert that leaders should help their home communities, as opposed to endorsing the alternative statement that leaders should not favor their own family or group. This is arguably a statement of support for (or at least a withholding of criticism of) politicians engaging in nepotistic or patronage practices. It is also a likely response for people who are fearful of state repression in countries where politicians thrive on patronage. Social desirability does not seem to be a plausible explanation, unless people view patronage as socially desirable; indeed, Adida et al. (2016) find no social desirability (i.e., noncoethnic match) effect for this question.

For a similar reason, it seems unlikely that social desirability would explain why respondents report a lower preference for democracy. Here, Adida et al. (2016) find that the social desirability effect results in a higher reported preference for democracy. It therefore seems more likely that underreporting a preference for democracy is meant to appease the state and its nondemocratic practices. This is an important finding: the literature has documented that an increasing number of “Africans”10 prefer democracy as their system of government, and this result suggests that they prefer it even more than the data have historically shown.
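Before turning to the remaining results, the estimation strategy described at the start of this section can be sketched in a few lines of Python. This is a minimal illustration of a linear multilevel model with country random intercepts, assuming a hypothetical pooled data file and illustrative variable names (the actual Afrobarometer variable codes differ); it is not the article's replication code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled Round 3/4 extract; variable names are illustrative.
df = pd.read_csv("afrobarometer_r3_r4.csv")

# Outcome: e.g., trust in the ruling party. Key regressor: a dummy for
# perceiving a government interview sponsor. Individual-level controls
# follow Adida et al.'s (2016) specification; press_freedom (RSF) and
# fh_expression (FH) are the country-level controls described above.
formula = (
    "trust_ruling_party ~ gov_interviewer + age + female + education"
    " + urban + minority + group_size + I(group_size**2)"
    " + home_language + round4 + press_freedom + fh_expression"
)

# Country random intercepts absorb unobserved country-level context.
model = smf.mixedlm(formula, data=df, groups=df["country"])
result = model.fit()
print(result.summary())
```

As footnote 5 notes, pooled ordinary least squares with country and year dummies offers a fixed-effects-style robustness check when the exogeneity assumption behind the random intercepts is in doubt.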
Another important finding is that respondents overstate the extent of democracy and their satisfaction with democracy, probably because they fear criticizing the government's performance on this dimension. The literature has also shown that the reported supply of democracy often exceeds the demand for democracy (Bratton & Houessou, 2014). Taken together, these two findings suggest one reason why this paradox exists.

A fear of the state may also be driving other results that cannot easily be explained by the social desirability thesis. For example, Table 4 shows that respondents are less likely to report that party competition causes conflict (statistically significant at the 90% confidence level).

For other survey questions, social desirability seems like a credible explanation for response biases. For example, it might explain why respondents report to the perceived government interviewer that they are far more likely to trust the ruling party and less likely to trust the opposition. In a dominant party system, these are arguably socially desirable responses, given the probability that the government representative is from the ruling party. However, both biases also align well with the theory that fearful respondents are more likely to vocalize support for the ruling party and distrust of the opposition when they think a government (i.e., ruling party) representative is at their door. Notably, in the Adida et al. (2016) study, there was no social desirability effect on the trust-in-opposition question.

Turning to evaluations of government officials at different levels, respondents are considerably more likely to approve of the President's and MPs' performance. In contrast, there is no statistically significant difference in their evaluation of local government's performance. This may suggest that citizens are less fearful of local government officials, who are generally far lower in the state's power hierarchy than officials at the provincial and national levels. Again, it is difficult to argue that higher ratings of elected officials are socially desirable, particularly in light of the insignificant difference on local government performance. If social desirability were the mechanism activating bias, one would expect respondents to be even more likely to approve of local government officials, as these are people who generally come from, and live within, the community.

Table 4 shows that the differences for questions about corruption are statistically significant for all types of government officials, from the offices of the President or Prime Minister all the way down to local government officials and councilors. Although corruption has been a prominent theme in the African politics literature, I am not aware of empirical work demonstrating that Africans systematically underreport the extent of corruption when they believe a government representative is asking. Moreover, this study shows that fearful respondents also provide more positive ratings of how well the government handles corruption. For the corruption questions, social desirability is clearly a plausible reason for biased responses, particularly if one argues that a respondent becomes more aware of what is socially desirable when asked by a government representative.
However, in addition to the social desirability of reporting less corruption, I contend that the respondent also worries that the perceived government interviewer or his colleagues will jail, fine, or otherwise punish the respondent for reporting corruption. This argument is based largely on interviews and anecdotal accounts gathered during field research in Mozambique in late 2015 and early 2016.

Examining Underlying Conditions: Country Case Studies

In what contexts are respondents more likely to be fearful of government reprisals when they are questioned by a state interviewer? This section explores this question through country-level analysis of the Afrobarometer data, as well as expert data on media freedom and on political and individual freedoms. I purposely select cases from three groups of countries to illustrate the importance of context. The first case, Mozambique, is classified as “fearful” based on government restrictions on media and individual freedom, as well as its volatile political arena. The second case, Cabo Verde, is classified as “unfearful,” given its high scores for media and individual freedom and its allowance for open multiparty competition. The data will show that responses there are generally unaffected by the perceived identity of the interviewer; in fact, responses to a few questions suggest that Cabo Verdeans are taking the government to task. Similar levels of freedom and stable competition characterize the third case, South Africa. However, South Africa is classified as a “critical” case because it also has an active climate of protest and criticism of the government. In contrast to fearful countries, perceiving a state interviewer there corresponds to overreporting corruption.

Why does the perceived government affiliation of the interviewer matter in the fearful (and critical) countries? What features of these countries make it more likely that respondents misreport in a particular way when they believe the enumerator is from the government? The next sections attempt to answer these questions by examining the sociopolitical conditions and trajectories of the three countries.

Mozambique

Mozambique is the emblematic “fearful” case. In fact, the fear-of-the-state hypothesis was informed by a cumulative year of research and work experience in Mozambique from 2013 to 2016. From late 2015 to mid-2016, I conducted over 100 interviews with government officials, local leaders, and NGO staff, in addition to conversing with other residents in several cities and rural areas throughout the country. The interviews featured questions about ethnicity, political party affiliation, and other topics. Across all interviews, there was one common source of angst, fear, or reticence, and it was not ethnicity. In not one of my interviews did a respondent appear uncomfortable reporting her mother tongue and ethnicity, one after the other. Moreover, when prompted, government officials spoke openly about working with people from different ethnic groups as if it were a nonissue. In contrast, in almost every interview, one's party affiliation and the link between political party competition and development were patently sensitive topics. On several occasions, the senior official who granted me permission to interview his staff told them they could speak freely because my project was academic research, not politics.
The respondents who were willing to comment on the link between political competition and development provided examples of local people's unwillingness to participate in government projects because of a general fear of violence. This was especially evident after people heard of recent clashes between the two main parties and of recent assassination attempts on the leader of the main opposition party, Afonso Dhlakama.11 Opposition party members and ordinary citizens also spoke of a fear of losing access to jobs and other state benefits if they did not support the ruling party. Furthermore, the party ID card of the ruling party, FRELIMO (Frente de Libertação de Moçambique), was frequently cited as a necessary condition for obtaining employment at a state institution.12

Apart from these accounts, Mozambique's history makes it highly likely that people would fear violence or government repression for criticizing its performance. To begin with, Mozambicans from all walks of life were deeply affected by a 17-year postindependence civil war from 1975 to 1992. Since multiparty democracy began in 1994, there have been numerous clashes between FRELIMO and the main opposition party, RENAMO (Resistência Nacional Moçambicana), often around elections, and the real threat of a return to full-blown civil war has persisted. Every election since 1994 has seen deadly violence, which “has always been targeted, localised [sic], and intentional” (Pereira & Nhanale, 2014, p. 22). RENAMO has boycotted elections amid allegations of electoral fraud and state manipulation of electoral rules, claims repeatedly echoed by international observers. In November 2000, more than 40 people were killed in rioting at RENAMO protests over the 1999 elections (Manning, 2001, p. 164). In the same year, the prominent journalist and activist Carlos Cardoso was killed while investigating corruption linked to the privatization of the state's largest bank, Banco Comercial de Moçambique (BCM).

Although these events are illustrative of the general political climate in Mozambique, it is important to draw on additional expert data corresponding to the period under study. To that end, RSF's World Press Freedom Index and FH's Freedom in the World rankings support the claim that the fear-of-the-state bias will play out in countries with restrictions on the media as well as on political rights and civil liberties. The World Press Freedom Index ranges from 0 to 100, with lower scores denoting more media freedom. For the 2 years included in the sample, Mozambique's average media freedom score was 15.5. Table 3 shows that this score is in the middle of the distribution for the sample, although I am highly skeptical about its validity, particularly in light of Cardoso's assassination. With respect to other freedoms, I again draw on FH's freedom of expression and belief measure, which ranges from 0 to 16, with higher scores denoting more freedom. I also draw on the widely used FH political rights and civil liberties indices, which rate countries on a scale of 1–7, with one denoting the most freedom.13 Although many observers have described Mozambique as a postconflict democratization success story, it has never been classified as free. Since its transition to multiparty democracy in 1994, FH has ranked it as partly free (scores of 3 or 4) for both political rights and civil liberties, and its average freedom of expression score for 2005 and 2008 was 11 (see Table 3).
These scores place it at the less free end of the distribution for the sample. Against the backdrop of the political context described above, the country-level regression results provide further empirical support for the fear-of-the-state hypothesis.14 Column 2 of Table 4 shows substantial differences on several outcome variables, all of which align with the fear-of-the-state theory. First, Mozambicans overreport the extent of, and their satisfaction with, democracy when they think they are being interviewed by the state. Similarly, they overreport trust in the ruling party. In terms of government performance, they overreport approval of their MPs' and local government's performance. Finally, they underreport corruption among the President, MPs, and national and local government officials, while they overreport how well the government is fighting it. In sum, for 10 politically sensitive outcome variables, survey responses are considerably biased by the perception that the interviewer is a government representative, and these biases align with fear of state repercussions.

Cabo Verde

The first case, Mozambique, illustrated a fearful country. Cabo Verde, in contrast, helps to shed light on variation outside the dominant pattern. One of the more robust democracies on the continent, Cabo Verde is classified as an unfearful case. Although such cases may be rare in the sample, it provides evidence for the complement of the theory: when citizens are not fearful of the state, they provide more honest responses even when they think the government is interviewing them. In stark contrast to the other countries in the data set, Cabo Verde has experienced several peaceful elections and transitions in government since multiparty elections were introduced in 1991. Party competition has thus been relatively healthy and stable. Cabo Verde is the only country in the Afrobarometer sample to continuously receive FH's highest freedom score of one for both political rights (since 1992) and civil liberties (since 2004). Its freedom of expression score of 15 places it at the top end of the distribution alongside South Africa and a few other countries, and its average media freedom score of 7 is something for the rest of the continent to aspire to. Cabo Verde's democratic and peaceful postindependence history underpins the lack of fear among survey respondents.

For the first set of political questions (presented in Column 3), the only statistically significant difference is underreporting of support for patronage. This result contrasts with the aggregate result of overreporting support for patronage and cannot be explained by either fear or social desirability. Instead, it suggests that respondents may be calling out the government for patronage practices. Similarly, Cabo Verdeans report to perceived state representatives that they have less trust in the ruling party. This finding is contrary to the pooled result of greater reported trust and again shows that respondents in Cabo Verde are not afraid to voice concerns when they think the government is interviewing them. On the performance questions, Cabo Verde provides a sharp contrast to the aggregate results and to the fearful country case: while there is no difference in approval of the President or MPs, respondents underreport approval of local government.
This finding cannot be explained by fear or social desirability, but perhaps by the respondent's desire to communicate to the government official that local government needs to improve. On the corruption questions, Cabo Verdeans do not change their ratings of corruption for a perceived government representative. Somewhat surprisingly, they overreport how well the government is handling corruption (significant at the 90% confidence level). Earlier, I argued that this aggregate association was suggestive of the fear-of-the-state bias. In the case of Cabo Verdeans, the result could instead reflect a desire to commend the government's performance on corruption: the Worldwide Governance Indicators' (WGI) control of corruption measure shows that Cabo Verde improved significantly from 1999 to 2008.15

In sum, the results in Cabo Verde provide further support for the fear-of-the-state hypothesis while undermining the social desirability thesis. In a country where citizens are not likely to be fearful, the perception of a state interviewer does not generate differences on most outcome variables, and for the few variables where differences appear, respondents generally seem to be challenging the ruling party and its performance rather than appealing to social norms and values.

South Africa

South Africa is similar to Cabo Verde, but it has an active climate of protest and government criticism and is thus classified as a “critical” case. Moreover, its press has been able to conduct investigative journalism on government corruption and to lampoon ruling party leaders for their deficiencies. Its average media freedom (7.25) and freedom of expression (15) scores lie at the high end of the sample, and it has scored either 1 or 2 for political rights and civil liberties since the end of apartheid. While South Africa has not seen party turnover since the end of apartheid and the introduction of multiparty elections in 1994, its elections have been widely regarded as free and fair, with little electoral violence between parties or suppression of the opposition. The ruling party, the African National Congress (ANC), has won every election with over 60% of the vote (with the exception of the 2016 local elections) and has never faced a real threat of losing its majority in parliament. Still, opposition parties campaign openly throughout the country, and even members of the ruling party have been quick to criticize its leadership or split off to form new parties.

In dozens of interviews (and informal conversations) carried out in South Africa in 2016, ordinary citizens and elected officials from several parties spoke openly about what they viewed as corrupt or inappropriate activities in government, as well as about local government officials being removed from their posts. In several municipalities, ruling party councilors recommended that I also speak to their fellow councilors from the opposition Democratic Alliance (DA). This is a noteworthy divergence from Mozambique, where FRELIMO municipal councilors generally told me that councilors from the opposition party, the Movimento Democrático de Moçambique (MDM), were unavailable or uninterested in answering my questions. In light of this, it seems that South Africa's climate allows for critical appraisals of government performance, as people are unafraid to hold corrupt politicians to account. The Afrobarometer data bear this out.
On the performance questions, South Africans are identical to Cabo Verdeans in that they underreport approval of local government's performance when they think the government is sponsoring the interview. In contrast to Cabo Verdeans, however, South Africans overreport corruption among MPs, the President, and (unspecified) government officials (the latter two significant at the 90% confidence level). This result is nearly unique in the data set (shared only with Nigeria) and demonstrates South Africans' comparatively high willingness to point out corruption. It also shows that social desirability is clearly not driving the differences, because a government representative would hardly view reports of more corruption as desirable. Finally, one somewhat surprising result is that South Africans understate trust in the opposition (significant at the 90% confidence level). This is in line with the aggregate results but contrary to what one would expect in a critical country. However, in contrast to fearful countries, South Africans do not overstate support for the ruling party.

In sum, South Africa is an important case that provides further evidence for the fear-of-the-state hypothesis. In contrast to most countries in the sample, South Africa's open political climate enables ordinary citizens and opposition parties to criticize without fear of retribution. As a result, the perceived identity of a government interviewer does not generate differences that align with fear or social desirability; instead, it generates a bias that stems from a lack of fear and a desire to be critical of the government.

Implications and Recommendations for Improving Survey Methodology

As described earlier, the literature has thoroughly documented the high potential for bias in responses to sensitive questions about racial and religious attitudes, sexual behavior, drug use, and voter turnout (Gonzalez-Ocantos et al., 2012, p. 204). For those survey questions, researchers have created social desirability scales and used indirect methods to obtain more accurate estimates and minimize nonresponse (DeMaio, 1984; Nederhof, 1985). Yet, there is little research on how the perceived institutional affiliation of the interviewer generates bias, particularly in unsafe contexts where respondents may fear reprisals by the state. In light of this, future work could test and use different methods to measure these politically sensitive attitudes and behaviors.

A first step may be to experiment with new approaches to reduce the percentage of people who believe the interviewer comes from the state. For example, the Afrobarometer survey has an introductory script in the interview protocol stating that the interviewer comes from an independent research organization and does not represent the government or a political party; the interviewer also affirms that all information will remain confidential. Despite this, just over a quarter of the sample is convinced that the interviewer is independent. The protocol could therefore be strengthened to provide greater reassurance of confidentiality to the respondent (Beatty & Herrmann, 2002). The presentation of additional credentials and proof of nongovernmental affiliation could be helpful in this regard. In addition, partnering with organizations that are widely known and viewed to be separate from the state could increase the probability of obtaining more accurate responses.
In addition, indirect modes of administration, such as endorsement experiments, list experiments (also known as the item count or unmatched count technique), and randomized response techniques, could be tested (Coutts & Jann, 2011; Glynn, 2013; Rosenfeld, Imai, & Shapiro, 2015). Despite their limitations, these indirect methods may improve the accuracy of data collected from sensitive questions. For example, Blair, Imai, and Lyall (2014) combined list and endorsement experiments in Afghanistan to elicit more credible data and produce more efficient estimates. Another advantage of these designs is that they are easier to implement than self-administration in low-literacy contexts. In contexts where literacy is higher, self-administration (which eliminates the face-to-face interaction with the perceived state representative) may be preferable, because respondents have been shown to be more willing to answer sensitive questions when surveys are self-administered (Groves et al., 2009; Krumpal, 2013). Although these methods have been tested extensively, further research in politically sensitive contexts would help establish whether they yield more reliable estimates. The sketch below illustrates the basic logic of one of them, the list experiment.
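The following is a minimal, hypothetical illustration of the standard difference-in-means estimator for a list experiment, in the spirit of Glynn (2013). The data frame, column names, and simulated prevalence are all invented for the example; this is not an analysis conducted in this article.

```python
import numpy as np
import pandas as pd

def list_experiment_estimate(df):
    """Difference-in-means estimator for a list experiment.

    Respondents are randomly assigned either a control list of J
    innocuous items or a treatment list that adds one sensitive item
    (e.g., fearing reprisals for criticizing the government). Each
    respondent reports only the COUNT of items that apply, so no one
    directly admits to the sensitive item. The columns 'treated'
    (0/1 assignment) and 'count' (reported count) are hypothetical.
    """
    treated = df.loc[df["treated"] == 1, "count"]
    control = df.loc[df["treated"] == 0, "count"]
    estimate = treated.mean() - control.mean()
    # Standard error for a difference in means (unequal variances).
    se = np.sqrt(treated.var(ddof=1) / len(treated)
                 + control.var(ddof=1) / len(control))
    return estimate, se

# Tiny simulation: 20% of respondents hold the sensitive trait.
rng = np.random.default_rng(0)
n = 2000
assignment = rng.integers(0, 2, n)             # randomized treatment
innocuous = rng.binomial(3, 0.5, n)            # J = 3 control items
sensitive = rng.binomial(1, 0.2, n)            # latent sensitive item
counts = innocuous + assignment * sensitive
data = pd.DataFrame({"treated": assignment, "count": counts})
est, se = list_experiment_estimate(data)
print(f"Estimated prevalence: {est:.3f} (SE {se:.3f})")
```

The protection the design offers comes at a cost in statistical efficiency, which is one reason Blair, Imai, and Lyall (2014) combine list and endorsement experiments.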
Conclusions

This article has aimed to shed light on a neglected source of response bias in African public opinion survey data. In contrast to previous studies that have highlighted a social desirability bias based on coethnicity, this article describes a fear-of-the-state bias in which respondents who believe the interviewer comes from the government are systematically more likely to rate democratic progress and government performance favorably while underreporting corruption and trust in the opposition. The article focused on sensitive political outcome variables, for which theory would predict biased responses based on fear of government reprisals or of losing state benefits. However, this bias would likely affect other outcome variables in other parts of the world where the media are restricted and attacks on dissenting political views are prevalent. The results therefore suggest the need for greater caution in drawing strong conclusions from public opinion data in sensitive contexts. To measure this bias in contexts outside Africa, other surveys, such as the Latinobarómetro, the Arab Barometer, and Pew polls, would do well to include a question on the perceived institutional affiliation of the interviewer.

This article used multilevel modeling along with country case studies to investigate when and how the perception of a state interviewer generates biased survey responses. Restricted media freedom and political repression are just a few of the factors likely driving the fear-of-the-state bias. In countries with a freer democratic climate, survey respondents are less influenced by the perceived government identity of the interviewer and can even be critical of government performance and corruption. Beyond these findings, the analytical approach sought to demonstrate the need to unpack cross-country regression results: there is far too much important variation across African countries, and it must be investigated and brought to bear to sharpen country-level policymaking and scholarly knowledge. Finally, while existing quantitative survey data can help generate this knowledge, so can a qualitative understanding of the contexts in which surveys take place.
In this article, interviews and informal conversations helped inform specific doubts about the quantitative data and suggested lines of inquiry for further examination. They also helped shape the selection of country cases and the interpretation of the survey results. While these qualitative data are anecdotal and suggestive, this article argues that data collected outside of surveys, from the people who are most knowledgeable about day-to-day sociopolitical realities, are an invaluable resource for understanding survey data and their limitations.

Zack Zimbalist is a PhD candidate in the Department of African Studies at the School of Advanced International Studies (SAIS) at Johns Hopkins University. His research interests are centered on the political economy of development and democratization in Sub-Saharan Africa. He holds a Master's in Development Studies from the University of KwaZulu-Natal, Durban, South Africa, and a BA in International Studies-Economics from UCSD.

Appendix

The main text of this article analyzed how respondents intentionally misreport answers to particular questions. However, the respondent could also choose not to respond. Although the literature often overlaps in discussing the reasons behind a respondent's choice to misreport or not respond, there are differences that should be highlighted. In contrast to misreporting, a nonresponse decision could also derive from the unavailability of the requested information or from a mismatch between the accuracy the respondent believes she can provide and what she thinks is required (Beatty & Herrmann, 2002). The main source of overlap between misreporting and nonresponse is thus the respondent's motivation to provide the information asked. In the context of this study, it is likely that, in addition to misreporting, fearful respondents also provide more DK and RA responses to perceived state interviewers. To the best of my knowledge, nonresponse has not previously been treated as a choice driven by a fear-of-the-state bias.

Figure 1 shows item nonresponse rates for political questions, by perceived government affiliation.16 Of the 16 variables presented, 14 have nonresponse rates that are higher among people who think that the government is interviewing them. The only exceptions to this pattern are support for patronage and contacting a government official. Apart from these variables, respondents are more likely to not respond to questions on democracy, government performance, and corruption.

Figure 1. Nonresponse by perceived interviewer affiliation (%)
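A tabulation of this kind is straightforward to sketch. The snippet below is again purely illustrative, with a hypothetical file and placeholder column names: it computes, for each outcome question, the share of DK/RA (or missing) answers among respondents who did and did not perceive a state sponsor.

```python
import pandas as pd

# Hypothetical pooled extract in which 'perceived' already holds the
# state/nonstate coding; the outcome column names are placeholders.
df = pd.read_csv("afrobarometer_r3_r4_coded.csv")

questions = ["extent_democracy", "trust_ruling_party",
             "corruption_mps", "fight_corruption"]
for q in questions:
    # Treat DK/RA (and missing) answers as item nonresponse.
    nr = df[q].isin(["DK", "RA"]) | df[q].isna()
    rates = (df.assign(nonresponse=nr)
               .groupby("perceived")["nonresponse"]
               .mean().mul(100).round(1))
    print(f"{q}:\n{rates}\n")
```

Under the fear-of-the-state account, higher nonresponse among respondents who perceive a state sponsor is the error-of-omission counterpart to the misreporting documented in the main text.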
Footnotes

1. Dionne (2014) did find higher refusal rates for questions about sexual behavior when the respondent was interviewed by a coethnic.

2. No response options are provided, and the response is coded to fit into one of 28 categories, which did not change across the survey rounds. I code the following responses as coming from the government: government (general), national government, provincial government, local government, President/PM's (Prime Minister) office, Parliament/National Assembly, National Intelligence/Secret Service, and political party/politicians.
The following responses are classified as nongovernment: no one, government census/statistics office, education or social affairs department/ministry, tax or finance department, health department, other government department, constitutional commission, electoral commission, national planning commission, public utility company, NGO (nongovernmental organization), research company, newspapers/media, university/school/college, private company, international organization, god or a religious organization, town hall, and village chief. As with responses to all other variables, I code those who refuse to answer or do not know the interviewer's affiliation as “missing,” so as not to make any assumptions about whether those respondents are systematically withholding an answer.

3. A few single-country case studies have highlighted this possibility. In a study on Zimbabwe, Chikwana et al. (2005, p. 100) concluded that “If people think they are talking to agents of the central government, they are much more cautious about what they say.” In Ethiopia, Mattes and Teka (2016, p. 2) find substantial political fear and suspicion among respondents, which are jointly associated with nonresponse and overly positive views of government. Also see Mattes and Shenga (2013) on Mozambique.

4. Round 3 took place in 2005/2006 and Round 4 in 2008/2009. For more precise survey dates, see http://www.afrobarometer.org/surveys-and-methods/survey-schedule. The countries included in the pooled data set are Benin, Botswana, Burkina Faso, Cabo Verde, Ghana, Kenya, Lesotho, Liberia, Madagascar, Malawi, Mali, Mozambique, Namibia, Nigeria, Senegal, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe. In each wave, roughly 1,200 households are interviewed in each country, with the exceptions of Nigeria, South Africa, and Uganda (where roughly 2,400 households are interviewed). Burkina Faso and Liberia were only added in Round 4 and thus make up half the average number of observations per country in the pooled data set.

5. As a robustness check, I also run pooled ordinary least squares regressions with country and year dummy variables. This model is not susceptible to the heterogeneity bias threat discussed for the random effects model, because the country dummies control out the unobserved heterogeneity at the higher level. The results are similar and available on request.

6. There are other plausible Afrobarometer individual-level control variables that I exclude because they are themselves biased by the perception of a government interviewer. To preserve observations, I do not control for the (non)coethnic match between interviewer and respondent or for enumerator ethnic group size. This avoids dropping a large number of observations and, more importantly, entire countries for which data on enumerator ethnicity were not available (i.e., Botswana, Cabo Verde, Lesotho, Liberia, Madagascar, and Tanzania). The approach preserves 53,125 observations from 20 countries, compared with 38,381 observations from 14 countries.

7. For information on the methodology, see https://rsf.org/en/world-press-freedom-index

8. The data are taken from the QoG 2016 Standard data set (Dahlberg et al., 2016).

9. I do not present the estimates for the other control variables. However, to give a sense of relative magnitudes, only the urban residence predictor, and sometimes the Round 4 wave dummy, has a larger effect.
Table 4. Effect of a Perceived Government Interview Sponsor on Political Outcomes

Dependent variable | Full sample (I) | Mozambique (II) | Cabo Verde (III) | South Africa (IV)
--- | --- | --- | --- | ---
Support for patronage | 0.02*** (0.01) | | −0.06* (0.03) |
Preference for democracy | −0.02*** (0.01) | | |
Extent of democracy | 0.07*** (0.01) | 0.15*** (0.05) | |
Satisfaction with democracy | 0.06*** (0.01) | 0.44*** (0.07) | |
Party competition leads to conflict | −0.02* (0.01) | | |
Trust ruling party | 0.12*** (0.012) | 0.20*** (0.07) | −0.18*** (0.07) |
Trust opposition parties | −0.03** (0.01) | | | −0.07* (0.04)
President’s/PM’s performance | 0.08*** (0.01) | | |
MPs’ performance | 0.05*** (0.01) | 0.14*** (0.05) | |
Local government councilors’ performance | 0.02 (0.01) | 0.12** (0.05) | −0.12** (0.05) | −0.14*** (0.04)
Presidential office corruption | −0.05*** (0.01) | −0.14*** (0.06) | | 0.07* (0.04)
Corruption among MPs | −0.06*** (0.01) | −0.18*** (0.06) | | 0.07** (0.03)
Corruption among local government councilors | −0.04*** (0.01) | | |
Corruption among government officials (unspecified) (R4 only) | −0.07*** (0.01) | | | 0.09* (0.05)
Corruption among national government officials (R3 only) | −0.07*** (0.02) | −0.30*** (0.08) | |
Corruption among local government officials (R3 only) | −0.09*** (0.02) | −0.22*** (0.08) | |
How well government fights corruption | 0.07*** (0.01) | 0.18*** (0.06) | 0.11* (0.06) |
Number of observations (households) | 30,000–37,500 | 1,287–1,694 | 1,294–1,543 | 2,809–3,642

Notes: Column I presents unstandardized coefficients from multilevel linear models with individuals as the lower-level unit of analysis and country-level random intercepts; Columns II–IV present country-specific regressions. The number of observations for questions asked in only one wave ranges from 15,211 to 16,766. The data are unweighted. Standard errors in parentheses. *p < 0.10, **p < 0.05, ***p < 0.01.
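As a concrete illustration of the Column I specification described in the table notes, the following sketch fits a linear mixed model with a country-level random intercept in statsmodels. Variable names remain hypothetical stand-ins for the Afrobarometer items.

```python
import statsmodels.formula.api as smf

# Multilevel linear model: individual-level predictors with a random
# intercept for each country (Table 4, Column I). Rows with missing
# values on the model variables are dropped before fitting.
ml_fit = smf.mixedlm(
    "trust_ruling_party ~ perceived_govt_interviewer + age + urban + education",
    data=ab,
    groups="country",   # country-level random intercepts
    missing="drop",
).fit()
print(ml_fit.summary())
```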
10. I use the term Africans for ease of exposition; attempting to generalize to a continental population of over 1.2 billion people residing in 54 different countries would be misplaced.
11. Interviews with district officials in Manica province, where Dhlakama's convoy had recently been attacked.

12. See Cho and Bratton (2006, p. 737) for similar findings in Lesotho.

13. For more information, see https://freedomhouse.org/report-types/freedom-world.

14. In the cases of Mozambique and Cabo Verde, I removed ethnic group controls based on the respondent's ethnicity because ethnic group data were missing for a large percentage of the sample.

15. See http://databank.worldbank.org/data/reports.aspx?source=worldwide-governance-indicators.

16. I follow guidance from the Round 6 Afrobarometer survey manual and do not weight the cross-tabs between the perceived identity of the interviewer and the outcome variables. See http://www.afrobarometer.org/sites/default/files/survey_manuals/ab_r6_survey_manual_en.pdf.

Acknowledgments

Thanks are due to Amanda Kerrigan, Carlos Shenga, and two anonymous reviewers for their suggestions. Some of the research and writing was completed under the auspices of a George L. Abernethy Doctoral Research Fellowship.

References

Adida, C. L., Ferree, K. E., Posner, D. N., & Robinson, A. L. (2016). Who's asking? Interviewer coethnicity effects in African survey data. Comparative Political Studies, 49, 1–31. doi:10.1177/0010414016633487
Beatty, P., & Herrmann, D. (2002). To answer or not to answer: Decision processes related to survey item nonresponse. In D. A. Dillman, J. L. Eltinge, R. M. Groves, & R. J. Little (Eds.), Survey nonresponse (pp. 71–86). New York, NY: John Wiley and Sons.
Blair, G., Imai, K., & Lyall, J. (2014). Comparing and combining list and endorsement experiments: Evidence from Afghanistan. American Journal of Political Science, 58, 1043–1063. doi:10.1111/ajps.12086
Bratton, M., & Houessou, R. (2014). Demand for democracy is rising in Africa, but most political leaders fail to deliver (Policy Paper No. 11). Retrieved from the Afrobarometer website: http://afrobarometer.org/publications
Chikwana, A., Sithole, T., & Bratton, M. (2005). Propaganda and public opinion in Zimbabwe. Journal of Contemporary African Studies, 23(1), 77–108. doi:10.1080/0258900042000329466
Cho, W., & Bratton, M. (2006). Electoral institutions, partisan status, and political support in Lesotho. Electoral Studies, 25, 731–750. doi:10.1016/j.electstud.2005.12.001
Coutts, E., & Jann, B. (2011). Sensitive questions in online surveys: Experimental results for the randomized response technique (RRT) and the unmatched count technique (UCT). Sociological Methods & Research, 40(1), 169–193. doi:10.1177/0049124110390768
Dahlberg, S., Holmberg, S., Rothstein, B., Khomenko, A., & Svensson, R. (2016). The quality of government standard dataset (basic codebook), version Jan16. University of Gothenburg: The Quality of Government Institute. doi:10.18157/QoGBasJan16. Retrieved from http://www.qog.pol.gu.se
DeMaio, T. (1984). Social desirability and survey measurement: A review. In C. Turner & E. Martin (Eds.), Surveying subjective phenomena (Vol. 2, pp. 257–281). New York, NY: Russell Sage Foundation.
Dionne, K. (2014). The politics of local research production: Surveying in a context of ethnic competition. Politics, Groups, and Identities, 2, 459–480. doi:10.1080/21565503.2014.930691
Glynn, A. N. (2013). What can we learn with statistical truth serum? Design and analysis of the list experiment. Public Opinion Quarterly, 77, 159–172. doi:10.1093/poq/nfs070
Gonzalez-Ocantos, E., De Jonge, C. K., Melendez, C., Osorio, J., & Nickerson, D. W. (2012). Vote buying and social desirability bias: Experimental evidence from Nicaragua. American Journal of Political Science, 56(1), 202–217. doi:10.1111/j.1540-5907.2011.00540.x
Groves, R. M. (2004). Survey errors and survey costs. Hoboken, NJ: John Wiley & Sons.
Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology. Hoboken, NJ: John Wiley & Sons.
Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: A literature review. Quality and Quantity, 47, 2025–2047. doi:10.1007/s11135-011-9640-9
Manning, C. (2001). Competition and accommodation in post-conflict democracy: The case of Mozambique. Democratization, 8, 140–168. doi:10.1080/714000204
Mattes, R., & Shenga, C. (2013). Uncritical citizenship: Mozambicans in comparative perspective. In M. Bratton (Ed.), Voting and democratic citizenship in Africa: Where next (pp. 159–177). Boulder, CO: Lynne Rienner.
Mattes, R., & Teka, M. (2016). Ethiopians' views of democratic government: Fear, ignorance, or unique understanding of democracy? (Working Paper No. 164). Retrieved from www.afrobarometer.org
Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15, 263–280. doi:10.1002/ejsp.2420150303
Pereira, J. C., & Nhanale, E. (2014). The 2014 general elections in Mozambique: Analysis of fundamental questions. Johannesburg, South Africa: Open Society Initiative for Southern Africa. Retrieved from http://www.eldis.org/go/home&id=69467&type=Document#.WCHIpJNpG8o
Rosenfeld, B., Imai, K., & Shapiro, J. N. (2015). An empirical validation study of popular survey methodologies for sensitive questions. American Journal of Political Science, 60, 783–802. doi:10.1111/ajps.12205
Samii, C. (2013). Perils or promise of ethnic integration? Evidence from a hard case in Burundi. American Political Science Review, 107(3), 558–573. doi:10.1017/S0003055413000282
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge, England: Cambridge University Press. doi:10.1017/cbo9780511819322
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859–883. doi:10.1037/0033-2909.133.5.859

© The Author 2017. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved.
