Advance Letters in a Telephone Survey on Domestic Violence: Effect on Unit Nonresponse and Reporting

Abstract

Advance letters (ALs) are one tool for improving response rates. However, it is not sufficiently clear whether ALs affect nonresponse bias and how their effect relates to the study topic. Examined here are (1) the effect of ALs on outcome rates, (2) recruitment effort, (3) their differential effect on subgroups, and (4) their effect on reporting on sensitive topics. The data stem from a split-ballot experiment implemented in a telephone survey on “Violence against Men.” The study comprises responses from approximately 950 men aged 21–70 years. The results indicate a positive effect of ALs on response, cooperation, and contact rates, and higher response rates among older respondents. Self-reports on sensitive topics were not affected by the ALs.

Introduction

Telephone surveys are a key mode of data collection, but deteriorating response rates challenge their leading position (De Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007). Thus, efforts have been made to reduce the number of nonrespondents and increase response rates through methods such as the mailing of advance letters (ALs). In this article, the effects of ALs (in combination with interviewers referring to the AL during recruitment) on response to a telephone survey on domestic violence are under scrutiny. Although studies on the effect of ALs have been conducted during the past 20–30 years, continual assessment is important because the environment of computer-assisted telephone interview (CATI) surveys is changing: not only are communication technologies changing but so are communication habits; for example, the coverage of telephone books is decreasing. Furthermore, existing research neglects certain aspects of measurement and selection biases.
Critical for assessing the effect and effectiveness of ALs is not merely looking at outcome rates but also considering potential nonresponse and measurement bias. Before the findings are reported, ALs are embedded in the methodological discussion about nonresponse, nonresponse bias, and measurement error. The main purpose of ALs is to increase response rates. Low response rates are problematic, first, because high rates of unit nonresponse increase survey costs, for example, because of additional call attempts, refusal conversion, follow-up mailings, incentives, and ALs (Davern et al., 2010; Groves & Peytcheva, 2008). However, these greater efforts can still fail to obtain interviews from sample members who differ from prior respondents (Curtin, Presser, & Singer, 2000; Keeter, Miller, Kohut, Groves, & Presser, 2000; Peytchev, 2012). According to Davern (2013), expensive efforts to increase response rates have been harmful because the money would have been better spent on either increasing the sample size or conducting rigorous nonresponse bias analyses (Davern, 2013, pp. 907–908). This situation leads to the second problem: nonresponse bias, which may systematically distort descriptive and inferential statistics and thus make unbiased estimates of population characteristics almost impossible (Berinsky, 2008; Groves et al., 2004; Groves & Lyberg, 2001). Unfortunately, nonresponse bias is hard to measure because it requires comparing data from the population and the sample. Thus, to reduce nonresponse bias, reducing nonresponse is the most common recommendation (Groves, 2006, p. 647). The underlying assumption is that higher response rates reduce nonresponse bias by recruiting a more diverse group of respondents and achieving an overall more balanced representation of the target population (Groves, Singer, & Corning, 2000).
However, research suggests that changes in response rates do not necessarily alter survey estimates, and substantial robustness can be achieved even with relatively low response rates (Curtin, Presser, & Singer, 2005; Davern et al., 2010; Groves, 2006; Holle, Hochadel, Reitmeir, Meisinger, & Wichmann, 2006; Keeter et al., 2000). Furthermore, indiscriminate attempts to improve response rates can reduce data quality in terms of nonresponse bias and measurement error (Fricker & Tourangeau, 2011). Thus, the nonresponse rate is in itself not a measure of survey quality; it is simply easier to measure than other indicators. Instead, research needs to focus on reducing nonresponse bias (Groves & Couper, 1998; Link & Mokdad, 2005), taking the totality of survey quality indicators into account (Biemer, 2011; Groves & Lyberg, 2011; Hox, De Leeuw, & Chang, 2012). We should ask: Do means of improving response rates improve survey quality? Or do they introduce bias through sampling, nonresponse, and potentially measurement? To theorize potential effects of ALs, several reasons for nonresponse can be categorized. Generally, nonresponse is a function of design characteristics, interviewer, social environment, household, and individual characteristics (Groves & Couper, 1998); on a design level, nonresponse is influenced by sample design, survey mode, fieldwork protocols like frequency and timing of contacts, duration of data collection periods, topic, sponsors of the survey, and so forth (Dillman, Gallegos, & Frey, 1976; Groves et al., 2004; Groves, 2006).
These survey design specifics are correlated with personal characteristics apparent in different levels of reachability: older people (Groves & Couper, 1998), larger households (Freeth, 2004; Groves & Couper, 1998), households with older people and/or children (Groves & Couper, 1998), work-from-home individuals (Groves et al., 2004), and people without a migration background (Lynn, Clarke, Martin, & Sturgis, 2002) are easier to contact. Moreover, lifestyle and leisure time activities influence reachability (Durrant & Steele, 2009; Groves & Couper, 1998; Guzy, 2015). Leverage-salience theory explains survey participation by taking design features into account. It assumes varying effects on subgroups (because of different leverages) and across designs (because of different salience exhibited during the survey request) (Groves et al., 2000, p. 302). Thus, this theory highlights a potentially different composition of respondents if design features like ALs or study topic exert different leverage and/or salience in subgroups. In addition, interviewer training (Groves & Couper, 1998) and interviewers’ strategies, tactics (Dijkstra & Smit, 2002; Snijkers & De Leeuw, 1999), and attitude (Hox & De Leeuw, 2002; Weidmann, Schmich, & Schiller-Born, 2008) have a potential influence on nonresponse—both independently and in interaction with other factors. Design and interviewer characteristics interact with the societal level. Increasing nonresponse rates might be because of individualization, lack of free time, changing leisure time activities, increased mobility, and increasing concerns about privacy and confidentiality (Singer & Presser, 2007) as well as increased skepticism about surveys and new communication technologies, call-blocking devices, active call monitoring, or answering machines (Dillman, 2007; von der Lippe, Schmich, & Lange, 2011). 
On a personal level, nonresponse is influenced in particular by gender and age, socioeconomic status, and ethnicity (Freeth, 2004; Groves, 2004; Groves et al., 2004; Smith, 1983; Stoop, 2005), but also by interest in the study topic and by being personally affected by it (Aust & Schröder, 2009). Especially in surveys on sensitive topics, such as victimization in this case, participation is likely to be closely linked to the personal level; for example, victims might participate more frequently than nonvictims because they are “eager to tell” (Stangeland, 1996). ALs are interventions on the survey design level with effects on the personal and interviewer levels. On the interviewer level, an AL offers the interviewer support and credibility in the recruitment effort and increases their confidence (Baulne & Courtemanche, 2008; Groves & Snowden, 1987). Interviewers find ALs helpful in the initial contact with households because they allay the initial suspicions of respondents, who value ALs as a more professional way of seeking an interview (Collins, Sykes, Wilson, & Blackshaw, 2001).1 On the personal level, prenotifications could stimulate interest, boost both intrinsic and extrinsic motivation, suggest legitimacy and credibility, provide reassurance about the confidentiality of the study, and thus increase trust and evoke principles of social exchange and reciprocity (Baulne & Courtemanche, 2008; Collins et al., 2001)—assuming the respondent has received and read the AL. Following this assumption, it is also plausible that there might be an effect on reporting—implying a link between nonresponse and measurement error. Survey researchers speculate that converted refusers make poor reporters (Fricker & Tourangeau, 2011; Sakshaug, Yan, & Tourangeau, 2011; Tourangeau, Groves, & Redline, 2010), and they assume common causes for response propensities and response accuracy.
Consequently, if an AL increases the propensity to participate, it might also affect the reporting behavior, particularly on sensitive topics. The relation between nonresponse and measurement is described as “The same sample members who are likely to misreport because they are in the socially undesirable category may also be likely to refuse to take part in the survey because they find the topic sensitive” (Tourangeau et al., 2010, p. 415). So far, little research has explored the interrelation of ALs’ effect on measurement and nonresponse. In general, a literature review suggests a positive effect of ALs on response rates (Camburn, Lavrakas, Battaglia, Massey, & Wright, 1995; Smith, Chey, Jalaludin, Salked, & Capon, 1995; Traugott, Groves, & Lepkowski, 1987). In their meta-analysis, De Leeuw and colleagues (2007) identified an increase in the cooperation rate of about 11% and in the response rate of about 8% after sending ALs. They concluded: “ALs are a general tool that can be applied in many different types of surveys” (De Leeuw et al., 2007, p. 425). The positive effect is higher if the AL is personalized, the conducting institution is a university, the study is described in detail, the social utility of the respondents’ cooperation is emphasized, confidentiality of data is promised, and sampling techniques are described (Berinsky, 2008; De Leeuw et al., 2007; Engel & Pötschke, 2004; Groves, 2004). However, Woodruff, Mayer, and Clapp (2006) and Singer, van Hoewyk, and Maher (2000) did not find a difference in response and cooperation rates. Groves (2004) found a potentially negative effect of prenotifications if the prospective respondent regarded the topic as unpleasant. ALs might also make it easier for selected persons to prepare their refusal. 
In sum, most research looks at the average impact of ALs (Vogl, Parsons, Owens, & Lavrakas, forthcoming), but there might be differential effects: there might be a double or even triple bias when using ALs in telephone surveys,2 which might offset positive effects. First, sending ALs for telephone surveys requires both telephone numbers and mail addresses. In random digit dialing (RDD) samples, telephone numbers have to be matched with addresses. Only respondents for whom a telephone number can be successfully matched to a mailing address will receive an AL or be part of the study.3 In 2007, De Leeuw and colleagues estimated the average matching rate for an RDD sample at approximately 40%, but they also noticed cross-country variation (De Leeuw et al., 2007), and a decrease over time is likely. In Germany, in 2009, only about half the households had a published landline number (Häder, 2015). For mobile phone numbers, only about 2% are listed, most of which are business contacts (Häder, 2015). In list-based samples, matching telephone numbers have to be identified. In both cases, this matching process entails dangers of coverage error: people with published telephone numbers have different sociodemographic backgrounds than those without. People without published numbers in directories are younger, single or divorced, and employed; live in cities; and have young children (Häder, 2015; von der Lippe et al., 2011). Some results suggest that listed households “may be more cooperative to telephone survey requests in part because of demographic differences in the two populations” (Parsons, Owens, & Skogan, 2002, p. 2). The effect of ALs can therefore be ambiguous: they help recruit respondents who are already easier to reach (von der Lippe et al., 2011, pp. 112–113). Second, beyond the sampling issue, the effect of ALs depends on someone in the household (preferably the selected sample member) receiving and reading the letter (Link & Mokdad, 2005).
However, a substantial number of the letters are considered to be junk and are not read (Couper, Mathiowetz, & Singer, 1995; Link & Mokdad, 2005). In a study on the effect of ALs in RDD samples, only 61% of 1,000 respondents remembered seeing the AL (Groves & Snowden, 1987; Link & Mokdad, 2005). Beyond this disappointing result, some social groups are more likely to read it than others are. Women read the letter more frequently than men and lower-income groups more frequently than higher-income groups (Groves, 2004; Groves & Snowden, 1987). Third, ALs can have differential effects on respondent groups’ cooperation after they read them. Goldstein and Jennings (2002) and Groves and Snowden (1987) analyzed the effect of ALs in telephone surveys and concluded that notification letters were most successful among older participants who generally exhibit low response rates. However, in their meta-analysis, De Leeuw et al. (2007) could not find any indication that ALs have different effects on different population groups. Fourth, persuading reluctant respondents by referring to an AL could produce data with increased measurement error (Biemer, 2001; Groves & Couper, 1998). From research with various refusal conversion efforts, we know that a potential link exists between nonresponse and measurement error in that reluctant respondents are likely to give poor information; however, evidence for this link is still limited (Tourangeau et al., 2010). Olson (2006) assumed that response propensity and response accuracy have common causes but found the relationship between nonresponse bias, measurement error, and propensity to respond to be specific to the type of nonresponse. Kaminska, McCutcheon, and Billiet (2011) found a relationship between the two factors that is fully explained by cognitive ability. 
In sum, although ALs can potentially increase response, they can also have differential effects on subgroups and thereby contribute to nonresponse bias (Link & Mokdad, 2005); for example, some people are more likely to respond because the letter legitimizes the study, whereas for others, the letter is a warning that lets them prepare their refusal tactics. The same or similar causes of differential participation among subgroups can also affect response behavior and thus contribute to measurement error. Another potential implication of ALs is increased survey costs. The heterogeneity of results indicates variation in the effectiveness of ALs, which calls for further investigation.

Research Questions

Based on these considerations and results from previous research, a split-ballot experiment on the effect of ALs in CATI was implemented for the study “Violence against Men,” conducted in Germany in 2007. The aim was to research (a) the effect of a personalized AL (with a salutation by name and the interviewer referring to it during telephone contact) on outcome rates, (b) recruitment effort, (c) potentially different effects on sociodemographic subgroups, and (d) reporting on sensitive topics. Technically speaking, the comparison between the AL and non-AL groups concerns (1) outcome statistics, (2) the propensity to complete an interview or refuse, and (3) the number of call attempts. In another step, (4) sociodemographics in the population are compared with those of the two respondent groups, with and without ALs. Furthermore, the AL and non-AL groups are compared among the respondents to determine effects on (5) reporting of violence. In brief, the assumption is that ALs increase trust, credibility, and response rates, with the frequency of reported victimization being higher through self-selection and/or changes in reporting.
Research Design

Data Collection and Sampling Strategy

In principle, two strategies for random sampling in CATI studies exist: telephone directories and RDD. Similar to the United States, German public telephone directories are an inadequate source for telephone samples because they are neither complete nor up to date; a growing share of telephone numbers is no longer published (Häder, 2015). Although potentially every household with a telephone can be captured by RDD designs, postal addresses are usually unknown, which largely prohibits the mailing of ALs. Thus, a different sampling strategy was used. The sample was based on addresses from official registration databases. For random samples, registration office data seem to be the most complete and up-to-date source; German residents are legally obligated to register their home address at a local registration office. Compared with telephone samples, sampling from registration data is based on individuals rather than households. Thus, addressing an AL personally to a specific person is straightforward, and the probability that the AL will be read improves. Because a complete list of inhabitants does not exist in Germany, all Bavarian administrative units were first stratified into “big city” (>100,000 inhabitants), “town” (40,000–100,000 inhabitants), and “rural area” (on average <3,000 inhabitants per municipality). Then, 1 big city, 3 towns, and 20 rural municipalities from four rural administrative districts within Bavaria were randomly selected. Next, 1,500 addresses of men aged 21–70 with German citizenship were randomly selected per stratum (big city, town, rural area) from these official registration databases (4,500 addresses in total). Information obtainable from registration offices for scientific research includes name, title, address, year of birth, nationality, and sex.
The most important information for telephone surveys—telephone numbers—is not available there. In a third step, telephone number entries in public directories were matched with the selected names; telephone numbers could be identified for 2,539 of 4,473 sample members. Half the potential respondents were randomly assigned to the AL group and received a personalized AL with information about the study, the conducting institution, and the data collection, together with an assurance of confidentiality (see Supplementary Material). The non-AL group did not receive any prenotification. Finally, approximately 940 men aged 21–70 years were surveyed on their experience of intimate partner violence. In total, 374 of the respondents had not received an AL and 562 had. Up to 20 call attempts were made per number, with an average of 3.7. Callback times alternated in terms of weekday/weekend and afternoon/evening, with flexible appointments following respondents’ requests. The field phase lasted 3 weeks. The data were collected through CATI from February to March 2007 at the telephone survey facility of a Bavarian university. The calls were made without telephone number suppression, so respondents could associate the displayed number with the university—if they had received and read the AL. The interviewers knew about the AL experiment and referred to the AL during recruitment if the contact was marked as having received a prenotification.4 The introduction text was: “Good afternoon. My name is XX. I am calling from the University of Eichstaett. Recently, you have received a letter from us. We are conducting a survey on conflicts in relationships and would like to ask you to participate. Your participation is voluntary. All data remain anonymous and we follow data protection guidelines.” The sentence referring to the letter (“Recently, you have received a letter from us.”) was omitted for the non-AL group. The sampling strategy introduced a potential bias in that people without published telephone numbers could not participate in the study.
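The pipeline described above (stratified address draw, directory matching, random half/half split) can be sketched in a few lines of Python. All names and data here are illustrative stand-ins, not the study's actual records or code:

```python
import random

def draw_split_ballot(strata, per_stratum=1500, seed=2007):
    """Sketch of the study's design: draw addresses per stratum, keep only
    sample members with a matched directory number, then randomly split
    into AL and non-AL groups. `strata` maps a stratum name to a list of
    (person_id, phone_or_None) tuples; phone is None when no directory
    entry could be matched."""
    rng = random.Random(seed)
    sample = []
    for pool in strata.values():
        # Stratified simple random sample of addresses.
        sample.extend(rng.sample(pool, min(per_stratum, len(pool))))
    # Directory matching: only persons with an identifiable number stay in.
    matched = [person for person in sample if person[1] is not None]
    # Split-ballot randomization into the two experimental conditions.
    rng.shuffle(matched)
    half = len(matched) // 2
    return matched[:half], matched[half:]  # (AL group, non-AL group)
```

Because the matching step comes before randomization, listed-number coverage limits both conditions equally; the AL/non-AL contrast itself stays a clean experiment, while coverage error affects the whole sample.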
People with listed telephone numbers differ in certain characteristics from those with unlisted numbers (Deutschmann & Häder, 2002; Otte, 2002; Parsons et al., 2002). In this sample, people without a published telephone number tended to be younger than both people with published numbers and the target population (see Table 2). Three potential explanations exist for this age effect. Until the 1990s, publishing telephone numbers in public directories was obligatory (Häder, 2015); thus, younger people with newer telephone lines often have no directory entry because they were never obliged to create one. Additionally, younger people tend to be more mobile, and a change of residence implies a change of landline number. Often, younger people have no landline phone at all, the so-called mobile-only population, and mobile telephone numbers are listed far less frequently than landline numbers (Häder, 2015). Furthermore, the share of persons without a published telephone number in the sample increased with the size of the place of residence: 54.3% in cities, 45.3% in towns, and 29.6% in rural areas. These percentages could be related to the age structure of the population. To evaluate the effect of ALs, two approaches were used. First, the outcome statistics of the AL and non-AL groups were compared, and the proportion of respondents with various sociodemographic characteristics was evaluated, testing for statistically significant differences. Second, logistic regression equations predicting completion of an interview for eligible sample units with theoretically sensible predictors were used, testing for interactions of ALs with demographic variables to gauge whether the predictors were more strongly related to the dependent variable under the AL condition than under the non-AL condition. The same analytical steps were taken for reporting on sensitive topics.
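The second analytic step can be illustrated with a minimal sketch: a logistic regression predicting interview completion from the AL condition, a demographic variable, and their interaction. The data are simulated with invented effect sizes, and `fit_logit` is our own helper, not the study's analysis code:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression fitted by Newton-Raphson.
    X must already contain an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        gradient = X.T @ (y - p)                   # score vector
        hessian = (X * (p * (1 - p))[:, None]).T @ X  # information matrix
        beta += np.linalg.solve(hessian, gradient)
    return beta

# Simulated data: randomized AL condition, standardized age, interaction.
rng = np.random.default_rng(0)
n = 4000
al = rng.integers(0, 2, n).astype(float)
age = rng.uniform(21, 70, n)
age_c = (age - age.mean()) / age.std()
# Invented truth: ALs raise completion odds, more so at higher ages.
true_logit = -0.5 + 0.6 * al + 0.2 * age_c + 0.3 * al * age_c
completed = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), al, age_c, al * age_c])
beta = fit_logit(X, completed)  # [intercept, AL, age, AL x age]
```

In this setup, the coefficient on the AL x age product term is what gauges whether the letter works differently across age groups: a nonzero interaction means the AL effect is not uniform over the demographic variable, which is exactly the differential-subgroup question posed above.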
Before presenting the results, a word of caution is necessary. Talking about the effect of ALs in this context is a simplification of a complex system of potential interrelations between different factors, only one of which is sending an AL. In other words, effects cannot necessarily be exclusively ascribed to ALs. In principle, differential interviewer behavior could be a confounder because interviewers were not blind to the experimental condition (although there is no trace of a statistical effect or evidence from supervising fieldwork; see also Footnote 4). Furthermore, it is not clear whether the AL was received, read, or remembered by respondents. Thus, making inferences about the effects of ALs refers to many factors that are potentially associated with ALs, not to the experimental isolation of effects exclusively caused by ALs. At the same time, it is often the case in research practice that interviewers cannot be taken out of the equation, and therefore, they may interact with other design features.

Results on Effect of ALs

Effect of ALs on Outcome Rates

To make the results comparable with the international literature on survey nonresponse, final disposition codes were categorized according to the AAPOR (2016) guidelines, and outcome rates were calculated.5 In this experiment, ALs did not have any impact on refusal rates (non-AL: 39.3%; AL: 38.8%; see Table 1), but they positively influenced cooperation (non-AL: 43.0%; AL: 53.2%) and response rates (non-AL: 30.8%; AL: 46.9%). In other words, more potential respondents could be reached after receiving an AL, and they were then more likely to participate in the survey. The contact rate was also higher under the AL condition (non-AL: 71.5%; AL: 88.1%; see Table 1). Cases in which the line was always busy (non-AL: 2.7%; AL: 0.7%), nobody ever answered (non-AL: 13.0%; AL: 4.6%), or only an answering machine responded (non-AL: 9.8%; AL: 2.3%) were less frequent for men who had received an AL.
This result does not make immediate sense, unless respondents could identify the displayed telephone number as being related to a scientific study rather than an unknown caller. However, this explanation remains speculative.

Table 1: Outcome Rates According to the AAPOR (2016) Definition**

Outcome and outcome rates                                             Non-AL           AL
I = Interviews                                                        29.4% (374)      44.4% (563)
R = Refusal and break-off (2.1)                                       37.4% (478)      36.7% (466)
   Refusal                                                            29.7% (377)      29.6% (375)
   Break-off                                                          8.0% (101)       7.2% (91)
NC = Noncontact (2.2)                                                 1.7% (21)        3.7% (47)
   Noncontact                                                         0.4% (5)         0.4% (5)
   Respondent never available                                         1.3% (16)        3.3% (42)
O = Other (2.0, 2.3)                                                  1.3% (17)        1.5% (29)
   Physically or mentally unable/incompetent                          0.6% (7)         1.2% (15)
   Language problem                                                   0.6% (7)         0.6% (7)
   Miscellaneous                                                      0.2% (3)         0.6% (7)
e*                                                                    0.942            0.942
UH = Unknown household                                                25.6% (325)      7.6% (96)
   Not attempted or worked                                            0.1% (1)         0% (0)
   Always busy                                                        2.7% (34)        0.7% (9)
   No answer                                                          13.0% (165)     4.6% (58)
   Answering machine                                                  9.8% (124)       2.3% (29)
Not eligible                                                          4.3% (55)        5.4% (68)
   Fax                                                                0.2% (3)         0.3% (4)
   Nonworking/disconnected                                            0.1% (1)         0.3% (4)
   Nonworking number                                                  0.3% (4)         0.1% (1)
   Business, other organization                                       0.3% (4)         0% (0)
   No eligible respondent                                             3.4% (43)        4.6% (59)
Response Rate 1: I/[(I + P) + (R + NC + O) + (UH + UO)]               30.8%            46.9%
Cooperation Rate 1: I/(I + P + R + O)                                 43.0%            53.2%
Refusal Rate 1: R/[(I + P) + (R + NC + O) + (UH + UO)]                39.3%            38.8%
Contact Rate 1: [(I + P) + R + O]/[(I + P) + R + O + NC + (UH + UO)]  71.5%            88.1%

*e is the estimated proportion of cases of unknown eligibility that are eligible, based on the proportion of eligible units among all units in the sample for which a definitive determination of status was obtained (a conservative estimate). For guidance on how to compute other estimates of e, see AAPOR’s 2016 Eligibility Estimates. AL = advance letter.
**The difference between the outcome rates in the AL versus non-AL group is statistically significant (χ2 = 164.698, p < .001).
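The Rate-1 definitions in Table 1 follow directly from the disposition counts. A short sketch (counts transcribed from Table 1; the function name is ours, not AAPOR's) reproduces the published figures:

```python
def aapor_rates(I, R, NC, O, UH, P=0, UO=0):
    """Rate-1 definitions from AAPOR (2016) as used in Table 1:
    I interviews, R refusals and break-offs, NC noncontacts, O other,
    UH unknown household, P partial interviews, UO unknown other.
    Here P and UO are zero, matching the table."""
    denominator = (I + P) + (R + NC + O) + (UH + UO)
    return {
        "response": I / denominator,                 # RR1
        "cooperation": I / (I + P + R + O),          # COOP1
        "refusal": R / denominator,                  # REF1
        "contact": ((I + P) + R + O) / denominator,  # CON1
    }

# Disposition counts transcribed from Table 1.
non_al = aapor_rates(I=374, R=478, NC=21, O=17, UH=325)
al = aapor_rates(I=563, R=466, NC=47, O=29, UH=96)
```

Rounded to one decimal, this reproduces the table exactly (e.g., response rates of 30.8% vs 46.9%). Note that the Rate-1 formulas do not discount the unknown-eligibility cases by e; that adjustment enters only the Rate-3/Rate-4 variants of the AAPOR definitions.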
Table 2. Socio-Demographics in the Population and for Respondents With and Without an AL

Socio-demographics | Population (%)* | CATI sample (%) | AL (%) | Non-AL (%)
Gender: male | 49 | 100 | 100 | 100
Ethnicity: German citizenship** | 89 | 100 | 100 | 100
Marital status
   Single | 34.50 | – | 20.3 (114) | 19.3 (72)
   Married | 55.00 | – | 76.3 (429) | 72.5 (271)
   Divorced | 9.00 | – | 2.5 (14) | 7.2 (27)
   Widowed | 1.60 | – | 0.9 (5) | 1.1 (4)
Age in years**
   21–30 | 17.70 | 12.9 (327) | 13.5 (76) | 14.2 (53)
   31–40 | 23.30 | 17.0 (432) | 15.9 (89) | 16.6 (62)
   41–50 | 24.30 | 27.5 (697) | 28.2 (158) | 35.6 (133)
   51–60 | 17.80 | 20.9 (532) | 21.9 (123) | 18.7 (70)
   61–70 | 17.00 | 21.7 (552) | 20.5 (115) | 15.0 (56)
Size of place of residency**
   City (< 500,000) | 22 | 27.4 (695) | 27.2 (153) | 26.5 (99)
   Town (20,000–100,000) | 27 | 31.8 (806) | 30.1 (169) | 33.2 (124)
   Rural area (< 2,000) | 14 | 40.9 (1,037) | 42.7 (240) | 40.4 (151)
Income (monthly, net)
   < 500€ | – | – | 3.4 (18) | 5.9 (20)
   500–999€ | – | – | 6.0 (32) | 7.4 (25)
   1,000–1,499€ | – | – | 17.1 (91) | 16.0 (54)
   1,500–1,999€ | – | – | 25.2 (134) | 24.3 (82)
   2,000–2,499€ | – | – | 15.8 (84) | 16.0 (54)
   2,500–2,999€ | – | – | 11.8 (63) | 10.4 (35)
   3,000€+ | – | – | 20.7 (110) | 19.9 (67)
Size of household
   1 | – | – | 7.7 (43) | 11.8 (44)
   2 | – | – | 37.4 (210) | 30.2 (113)
   3 | – | – | 17.8 (100) | 21.1 (79)
   4 | – | – | 26.5 (149) | 25.1 (94)
   5+ | – | – | 10.7 (60) | 11.8 (44)
Educational attainment
   No formal degree | – | – | 0.5 (3) | 0.9 (5)
   Hauptschule | – | – | 37.4 (207) | 31.9 (181)
   Mittlere Reife | – | – | 25.3 (140) | 29.3 (166)
   Abitur/Fachabitur | – | – | 12.6 (70) | 11.8 (67)
   University degree | – | – | 24.2 (134) | 26.1 (148)

*Source: Statistisches Landesamt Bayern 2008.
**Refers to male population only.

Logistic regression models were developed to assess, for eligible persons, the likelihood of completing an interview depending on whether an AL had been received.6 Although the AL significantly improved the odds of completing an interview for eligible sample members, the effect was rather small (Nagelkerke pseudo R2 = 0.011). Without an AL, the odds of completing an interview were reduced by a factor of about 0.7 (β = −0.360; SE = 0.091; Wald = 15.730; df = 1; p < .01; Exp(B) = 0.698; 95% confidence interval [CI] = 0.584–0.834; baseline category: eligible, noninterview). Furthermore, demographic variables were introduced into the regression model, with interview versus eligible noninterview as the dependent variable and AL as the independent variable of interest; respondents' age and place of residency were included as covariates. When respondent age and the population density of the place of residency were controlled for, the odds of completing an interview without an AL were again reduced by a factor of about 0.7 (β = −0.364; SE = 0.091; Wald = 15.893; df = 1; p < .01; Exp(B) = 0.695; 95% CI = 0.581–0.831; baseline category: eligible, noninterview). In a further step, an interaction between AL and respondent age was added, but it did not change the explained variation. In sum, response, cooperation, and contact rates improved when an AL had been sent, whereas refusal rates remained consistent. With the exception of age, there were no significant interaction effects between sociodemographics and the effect of ALs on the response rate.

Recruitment Effort

One aspect of the recruitment effort is the number of call attempts. The number of call attempts needed to reach a final disposition code was significantly higher for the AL group than for the non-AL group (F(1) = 48.4; p < .01; eta2 = 0.019).
On average, sample units who had received an AL required 3.8 call attempts to reach a final disposition code (e.g., complete interview, final refusal, noneligible), whereas men without an AL required only 2.9 attempts. In the logistic regression model, neither AL, nor the number of call attempts, nor their interaction improved the prediction of whether eligible people completed an interview. This outcome seems counterintuitive, and it contradicts findings obtained by Hembroff, Rusz, Rafferty, McGee, and Ehrlicher (2005, p. 240). However, the call attempts in the AL group resulted in a significantly higher number of completed interviews. It is likely that a higher willingness to cooperate after receiving an AL resulted in a higher number of callbacks, which increased the total number of attempts. In other words, cold calls receive more immediate hang-ups or refusals, whereas recipients of an AL are more likely to eventually complete an interview in a later call attempt. This is not a downside of ALs. As for cost implications, the principal cost elements are interviewer and supervisor labor, telephone charges, and postage. The biggest cost factor is the number of call attempts interviewers needed to make to obtain a completed interview. The higher number of call attempts in the AL group as such implies higher costs, but this is counterbalanced by a higher success rate. In the AL group, 2.26 sampled phone numbers were required per completed interview; in the non-AL group, 3.40. Extrapolating to 1,000 completed interviews, 3,396 telephone numbers would have been required without an AL compared with 2,260 with an AL. Furthermore, completing 1,000 interviews would have required 8,678 calls with an AL versus 9,983 calls without one. At the average of 14.14 call attempts per hour observed in this study, this translates into 614 interviewer hours with an AL and 706 without an AL.
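The extrapolation above is straightforward arithmetic on the observed per-interview effort; a short sketch reproducing the reported figures:

```python
# Reproduce the cost extrapolation from the observed recruitment effort.
# Per completed interview: 2.26 sampled phone numbers with an AL, ~3.40 without.
TARGET_INTERVIEWS = 1000
CALLS_PER_HOUR = 14.14                        # average call attempts per interviewer hour

numbers_al = 2.26 * TARGET_INTERVIEWS         # 2,260 phone numbers needed
numbers_no_al = 3.396 * TARGET_INTERVIEWS     # ~3,396 (3.40 as rounded in the text)

calls_al, calls_no_al = 8678, 9983            # extrapolated call attempts
hours_al = calls_al / CALLS_PER_HOUR          # ~614 interviewer hours
hours_no_al = calls_no_al / CALLS_PER_HOUR    # ~706 interviewer hours

print(round(hours_al), round(hours_no_al))    # 614 706
```

The savings thus come from needing fewer sampled numbers and fewer total calls per completed interview, not from fewer attempts per sampled number.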
This cost calculation does not account for the cost of matching addresses with telephone numbers, which can, depending on the matching method, be substantial and somewhat counterbalance the comparative cost advantage of ALs.

Effects in Subgroups and Potential Sampling Bias

Several studies indicate that nonrespondents differ from respondents in their sociodemographics (Freeth, 2004; Groves & Couper, 1998; von der Lippe et al., 2011) and are likely to differ in other characteristics as well (Guzy, 2015; Stoop, 2004). Introducing ALs potentially limits the amount of nonresponse, ideally without distorting relevant sample characteristics. Comparing the distribution of sociodemographic variables in the experimental and the control group and, as far as possible, with the target population permits a more detailed picture of the effect of ALs on different subgroups. Unfortunately, most demographic characteristics and the victimization experience of the people who declined to participate remain unknown. To estimate potential biases associated with the use of an AL, the sociodemographics of the two respondent groups are compared with each other and, where available, with population figures from official statistics.7 Most likely because of the topic of the study (life and conflicts in intimate relationships), the proportion of married men among respondents was higher than in the general population, whereas single, divorced, and widowed men were underrepresented (see Table 2; χ2 = 12.108; df = 3; p < .01). A possible explanation could be that to these men the topic of the study, intimate relationships, did not seem relevant. Comparing the AL and non-AL groups, the proportion of divorced men among respondents was particularly low when a prenotification had been sent. Again, this outcome is likely to be associated with the topic of the study.
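As a worked example, the marital-status counts in Table 2 can be tested with a Pearson chi-square statistic; a pure-Python sketch comparing the AL and non-AL respondent distributions (a 2 x 4 contingency table) reproduces a value matching the reported χ2:

```python
# Pearson chi-square test of independence on marital status
# (single, married, divorced, widowed) by experimental group,
# using the respondent counts from Table 2.
observed = [
    [114, 429, 14, 5],   # AL group (n = 562)
    [72, 271, 27, 4],    # non-AL group (n = 374)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# with expected = row_total * col_total / n.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(4)
)
df = (2 - 1) * (4 - 1)

print(f"chi2 = {chi2:.3f}, df = {df}")  # chi2 ≈ 12.11, df = 3
```

The largest cell contributions come from the divorced category, consistent with the observation that divorced men were particularly scarce in the AL group.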
Earlier we noted that owing to the sampling procedure younger men are underrepresented in the CATI sample because their telephone numbers are less frequently listed in public directories. This underrepresentation remained in both respondent groups. In other words, the ALs did not have an overall significant effect on the grouped age variable (T = −1.275; df = 1; p = .20). However, with the ungrouped age variable, a correlation exists between age and AL (eta = 0.267): with increasing respondent age, ALs were more effective, that is, the proportion of older respondents was incrementally higher in the AL group. Male respondents aged 50 years and older were more likely to complete a telephone interview with an AL. ALs thus seemed to partially counterbalance higher refusal rates among older respondents. In this respect, ALs potentially improved the demographic accuracy of the survey, notwithstanding the problem of unpublished telephone numbers: compared with the general population, older age groups were more likely to participate in the study. Our original sample of addresses was stratified by population density, with one third of addresses from each stratum. After telephone numbers were matched, rural areas had a share of 40.9% (1,038) in the CATI sample; towns, 31.8% (807); and the city, 27.4% (696). The larger the place of residency, the higher the number of people with unpublished numbers. The AL and the non-AL groups showed a similar distribution across these three categories; in other words, ALs did not have a differential effect across the three categories of size of place of residency (χ2 = 1.022; df = 2; p = .60). Unpublished telephone numbers thus remained a potential source of bias, and ALs did not alter this (see Table 2). In terms of other sociodemographic variables, the respondents from the AL and non-AL groups did not differ: educational background (p = .53), income (p = .63), and size of household (p = .23) were not significantly different between the two groups.
These variables were not available for the original sample or for the general population; therefore, the bias through unpublished numbers or nonrespondents cannot be estimated for them. Entering all the sociodemographic variables (age, marital status, persons in the household, educational level, income) into a logistic regression model predicting being a respondent with versus without an AL explained only 4.6% of the variation, with age and marital status having the only significant main effects. Every year of age increased the odds of being a respondent with a prenotification marginally (β = 0.015; SE = 0.007; Wald = 4.816; df = 1; p = .03; Exp(B) = 1.016; 95% CI = 1.002–1.030). Marital status had a significant main effect, with "widowed" significantly decreasing the odds of participating in the survey with an AL (β = −1.493; SE = 0.404; Wald = 13.634; df = 1; p < .01; Exp(B) = 0.225; 95% CI = 0.102–0.496). No category of formal education or number of people in the household had a significant effect on the odds of completing an interview with an AL. Furthermore, for men with a monthly income between 1,000 and 1,999€ or between 2,500 and 2,999€, the odds of completing an interview with an AL were more than doubled (income of 1,000–1,499€: β = 0.816; SE = 0.394; Wald = 4.302; df = 1; p = .038; Exp(B) = 2.263; 95% CI = 1.046–4.894; income of 1,500–1,999€: β = 0.831; SE = 0.383; Wald = 4.716; df = 1; p = .030; Exp(B) = 2.295; 95% CI = 1.084–4.858; income of 2,500–2,999€: β = 0.870; SE = 0.419; Wald = 4.307; df = 1; p = .038; Exp(B) = 2.387; 95% CI = 1.050–5.430). To summarize, the AL did not distort the sample regarding sociodemographic variables. The only noticeable effects are rather small: ALs decreased the proportion of widowed respondents and increased the proportion of older respondents (those over 50 years of age).
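The reported coefficients translate into odds ratios with Wald confidence intervals by exponentiation; for instance, the "widowed" coefficient above:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio (Exp(B)) with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# "Widowed" main effect as reported: beta = -1.493, SE = 0.404
or_, lo, hi = odds_ratio_ci(-1.493, 0.404)
print(f"Exp(B) = {or_:.3f}, 95% CI = {lo:.3f}-{hi:.3f}")
# Exp(B) = 0.225, 95% CI = 0.102-0.496
```

The same conversion applied to β = −0.360 (SE = 0.091) from the earlier model reproduces the reported Exp(B) = 0.698 with 95% CI = 0.584–0.834.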
Effect of ALs on Reporting on Sensitive Topics

Measurement error is difficult to assess because for most attitudinal or behavioral questions we do not know the true value and have no reliable benchmark. For lack of alternatives, we assumed that higher rates, that is, more frequent reporting of sensitive experiences, indicate better measurement. The hypothesis was that an AL increases trust in, and the credibility of, the survey institution and could thus increase self-reports of experiences with intimate partner violence. Nevertheless, the results imply that ALs do not affect self-reports on victimization in any significant way (see Table 3). The reported frequency of the 10 items on experience of domestic violence did not differ significantly between the AL and non-AL groups. However, after controlling for respondent age and whether the respondent and his partner lived in a joint household, there was a significant difference between the AL and non-AL groups for two variables, "shouted at me" (p < .01) and "ignored me" (p < .01), with the AL group reporting a higher frequency of these instances.
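The item comparisons reported in Table 3 are one-way ANOVAs on the 4-point frequency scores. With two groups, such an ANOVA reduces to a squared two-sample t test; a minimal pure-Python sketch on made-up response vectors (the study's individual-level data are not reproduced here):

```python
# One-way ANOVA for a single 4-point item (1 = often ... 4 = never),
# comparing AL and non-AL respondents. The scores below are hypothetical,
# purely for illustration of the computation.

def f_oneway_two_groups(a, b):
    """F statistic of a one-way ANOVA with exactly two groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    grand = (sum(a) + sum(b)) / (na + nb)
    ss_between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2  # df = 1
    ss_within = (sum((x - ma) ** 2 for x in a)
                 + sum((x - mb) ** 2 for x in b))
    df_within = na + nb - 2
    return (ss_between / 1) / (ss_within / df_within)

al_scores = [4, 3, 4, 2, 3, 4, 4, 3]      # hypothetical AL group
non_al_scores = [4, 4, 3, 4, 4, 3, 4, 4]  # hypothetical non-AL group
f = f_oneway_two_groups(al_scores, non_al_scores)
print(f"F(1, {len(al_scores) + len(non_al_scores) - 2}) = {f:.2f}")
```

Because the response scale runs from 1 = often to 4 = never, a lower group mean in Table 3 indicates more frequent reporting of that experience.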
Table 3. Reporting on Questions on Experience of Domestic Violence^a

My partner … | AL | Non-AL
… insulted or verbally abused me | 3.25 | 3.32
… shouted at me | 3.05 | 3.12**
… ignored me | 3.15 | 3.11
… accused me of being a bad lover | 3.78 | 3.76
… threatened to leave me | 3.67 | 3.73
… deliberately damaged my belongings | 3.93 | 3.91
… pushed or shoved me | 3.83 | 3.87
… slapped me | 3.98 | 3.94*
… threatened me with a knife or a weapon | 4.00 | 4.00
… hit me with an object that could have injured me | 3.99 | 3.99

^a Mean values, results from ANOVA; 1 = often, 2 = sometimes, 3 = rarely, 4 = never.
*p < .05; **p < .01. ANOVA = analysis of variance.

In conclusion, ALs did not have an overall effect on reporting on sensitive topics; only after controlling for age did 2 of the 10 items show a significant difference. The AL therefore seems more relevant for recruiting than for reporting.

Conclusion and Discussion

In the existing literature, the average outcome rate is often the only efficiency criterion for ALs. This article adds new dimensions to the analysis of AL effects: (a) recruitment effort and its cost implications, (b) potential distortions during recruitment in subgroups, and (c) potential for measurement error. It describes a split-ballot experiment and evaluates multiple facets of the problem. The objective is to detect potential distortions introduced through ALs during sampling, recruitment, and measurement. The particular strengths of this article lie in the total survey error perspective, in bias estimation through comparisons of population, sample, and respondents (with and without an AL), and in the analysis of differential reporting. Previous research has mostly neglected these aspects. In sum, the results regarding outcome rates include positive effects of ALs on contact, cooperation, and response rates but no effect on the refusal rate. Unexpectedly, ALs increased contact rates, and the AL group required higher recruitment effort. This means that more AL recipients could be reached by telephone, giving more opportunities to recruit respondents, but they required more call attempts to reach a final disposition code than non-AL recipients did. Thus, contrary to Groves's (2004) finding, ALs did not seem to serve as a prewarning enabling respondents to prepare their refusal. With regard to selection effects, ALs increased the participation of older age groups (50+). Thus, ALs can potentially counterbalance higher refusal rates among older respondents.
These findings are in line with previous research (Goldstein & Jennings, 2002; Groves & Snowden, 1987). Beyond that, ALs did not affect sociodemographic groups differentially, confirming results from the meta-analysis of De Leeuw and colleagues (2007). From a total survey error perspective, nonresponse bias and measurement error are crucial, rather than mere response or nonresponse rates. In this respect, the challenge of using ALs lies in the sampling procedure: to send ALs in a telephone survey, both telephone numbers and mailing addresses are required. The necessary matching procedures can introduce bias, in this case against persons whose telephone numbers are not listed in public directories. Thus, older persons and persons in rural areas were overrepresented. Taking these two results together (recruiting more older people with an AL and overrepresenting rural areas and older people in the sample), the sample's accuracy may have suffered more from the sampling bias than it benefited from the recruitment effect. Regarding measurement error, the assumption was that ALs increase the credibility and trustworthiness of a study and might thus reveal higher rates of victimization.8 However, both experimental groups reported the same frequency of domestic violence; the AL did not seem to affect the measurement of victimization experience. Nonetheless, the higher response rates in the AL condition can facilitate the study of sensitive topics by increasing the number of cases available for detailed analysis. The scope of these results is restricted by the geographical limitation to Bavaria, by the target population of men only, and by domestic violence being a specific sensitive survey topic. Another limitation is that interviewers were not blind to the experimental condition (see also Footnote 4). Thus, they could have behaved differently toward the experimental and control groups and confounded the findings.
As a consequence, the observed effects cannot be ascribed solely to the AL; moreover, we can never be certain that the AL was received and read. The effects could also partly reflect interviewer behavior. For practical applications, however, the interplay of interviewer behavior and ALs is natural and desirable, so this fact does not diminish the practical value of this research. Yet caution is required in making inferences. If future research aims to disentangle interviewer behavior and the effect of ALs, a more complex experimental design is required, with four groups (no AL; AL and blind interviewer; AL and informed interviewer; no AL but with the interviewer believing an AL has been sent) and with random assignment of interviewers to groups. Furthermore, the analysis should include measurement of, or controls for, variance between interviewers. With that said, and as elaborated in the literature review, the effects of ALs are embedded in a complex social system and are the product of multiple factors, all of which are difficult to control. Nonresponse is a function of design characteristics, interviewer, social environment, household, and individual respondent characteristics, and the same likely holds for the effects of ALs. Interviewing is always a process of data construction rather than mere data gathering or collection. The relevance of these different factors should be disentangled in future research. Despite these limitations, the design and the results can inform future research and help to further explore the effects of ALs on nonresponse and measurement bias. One lesson learnt is that obtaining both telephone numbers and mailing addresses can be a challenge, practically and in terms of sample accuracy. Nevertheless, registration data have the advantage of providing additional sociodemographic information for all sample members, which facilitates nonresponse analysis and the development of survey weights.
What we need is further investigation of how the survey topic and the design features of ALs interact and how their effects differ among subgroups. We should strive to understand the causes of higher response rates and particularly the effect of ALs on interviewer strategies. In addition, measurement error is worth further research. Overall, the total survey error perspective is valuable for a better understanding of the differential effects of ALs (and of other means of increasing response rates) and should be used more consistently in research. Psychographic data on respondents could also help to compare the two experimental samples and could indicate potential differences between the two groups.

Funding

This work was supported by the Faculty of Social Sciences at the University of Vienna, Austria, and the Catholic University of Eichstaett-Ingolstadt, Germany.

Biographical Notes

Susanne Vogl is a postdoc researcher at the Institute of Sociology at the University of Vienna. Her research focuses on interview methodology in qualitative, quantitative, and mixed methods research and with special populations.

Footnotes

1 It is often not clear whether a positive effect of ALs on response rates can be ascribed exclusively to the ALs, to interviewer behavior, or to a combination of both. Thus, in many studies, like the one presented here, a combination of ALs and interviewer behavior is assessed.

2 This holds for landline as well as mobile telephone surveys, although to a different extent. ALs can also be used in face-to-face or Web surveys (De Leeuw et al., 2007; Dillman, 2009).

3 In Germany, the so-called Gabler–Häder design is commonly used instead of RDD because of Germany's complicated numbering system and an RDD hit rate of <0.5% (Häder, 2015, p. 2). The design is based on a pool of all presumed working landline telephone numbers, which serves as a sampling frame for samples stratified by region and other known variables.
4 When interviewers administer both conditions in a survey experiment, their behavior can become a confounding factor if they behave differently in one group in a way that contributes to the findings. If interviewer behavior is a confounder and ALs affect interviewer behavior, interviewer behavior could mediate the effect of the AL on the outcome. Tests for interviewer effects yielded a significant effect of the AL and a significant effect of the interviewer on the outcome (interview or refusal): some interviewers were better at recruiting than others. This effect remained significant in both the treatment and the control group. Furthermore, there was no interaction effect between interviewer and treatment (F = 0.846; p = .718), whereas AL and interviewer had separate effects (AL: F = 6.631; p = .008; interviewer: F = 2.488; p < .001). If interviewers had changed their behavior depending on AL or non-AL, one would expect an interaction effect. Introducing interviewer as a covariate in an ANOVA also suggested that the interviewer does not exert a confounding effect (F = 0.524; p = .469), whereas the effect of the AL remains significant (F = 18.554; p < .001). Because telephone numbers were randomly assigned to interviewers, and in the light of these results, I do not assume that interviewer behavior is a confounding factor.

5 Although the AAPOR guidelines apply to RDD samples, they can also be applied to this study's registry-based sample. Although most criteria for eligibility could be judged during the selection process in the registration offices, some needed to be checked in a screening at the beginning of the interview. Therefore, not every person with a listed telephone number was necessarily eligible. People without published telephone numbers, in turn, were subsumed under Category 2 (eligible, noninterview).
Consequently, Response Rates 3 and 4 (because of changes in e), Cooperation Rates 1 and 2, Refusal Rates 2 and 3, and Contact Rates 1, 2, and 3 would differ from the approach presented here: Contact Rates would be higher, Refusal and Cooperation Rates lower, and the Response Rate about the same.

6 Only eligible people are included in the model.

7 Unfortunately, except for age and place of residency, no information on the original sample before matching telephone numbers is available. Because of the random sampling procedure, discrepancies between the population and the respondents can be caused by differences between people with and without listed numbers as well as between nonrespondents and respondents.

8 Owing to the lack of a true value, the assumption is "more is better."

References

AAPOR. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th ed.). Lenexa, KS: AAPOR.

Aust, F., & Schröder, H. (2009). Sinkende Stichprobenausschöpfung in der Umfrageforschung: Ein Bericht aus der Praxis [Declining response rates in survey research: A report from the field]. In M. Weichbold, J. Bacher, & C. Wolf (Eds.), Umfrageforschung: Herausforderungen und Grenzen (Österreichische Zeitschrift für Soziologie, Sonderheft 9, pp. 195–212). Wiesbaden: VS Verlag für Sozialwissenschaften.

Baulne, J., & Courtemanche, R. (2008). Is there really any benefit in sending out introductory letters in Random Digit Dialling (RDD) surveys? In Statistics Canada (Ed.), Proceedings of Statistics Canada Symposium 2008: Data collection: Challenges, achievements and new directions. http://www.statcan.gc.ca/pub/11-522-x/2008000/article/11001-eng.pdf

Berinsky, A. J. (2008). Survey non-response. In W. Donsbach & M. W. Traugott (Eds.), Public opinion research (pp. 309–321). Los Angeles: Sage.

Biemer, P. P. (2001). Nonresponse bias and measurement bias in a comparison of face-to-face and telephone interviewing. Journal of Official Statistics, 17, 295–320.

Biemer, P. P. (2011). Total survey error: Design, implementation, and evaluation. Public Opinion Quarterly, 74, 817–848. https://doi.org/10.1093/poq/nfq058

Camburn, D., Lavrakas, P. J., Battaglia, M. P., Massey, J. T., & Wright, R. A. (1995). Using advance respondent letters in Random-Digit-Dialing telephone surveys. In Proceedings of the 50th Annual Conference of the American Association for Public Opinion Research (pp. 969–974). Alexandria, VA. Retrieved from http://www.amstat.org/sections/srms/proceedings/

Collins, M., Sykes, W., Wilson, P., & Blackshaw, N. (2001). Nonresponse: The UK experience. In R. M. Groves, P. Biemer, L. Lyberg, J. T. Massey, W. L. Nicholls, & J. Waksberg (Eds.), Telephone survey methodology (pp. 213–231). New York, NY: Wiley.

Couper, M., Mathiowetz, N., & Singer, E. (1995). Related households, mail handling and returns to the 1990 census. International Journal of Public Opinion Research, 7, 172–177.

Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64, 413–428.

Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69, 87–98. doi:10.1093/poq/nfi002

Davern, M. (2013). Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Services Research, 48, 905–912. https://doi.org/10.1111/1475-6773.12070

Davern, M., McAlpine, D., Beebe, T. J., Ziegenfuss, J., Rockwood, T., & Thiede Call, K. (2010). Are lower response rates hazardous to your health survey? An analysis of three state telephone health surveys. Health Services Research, 45, 1324–1344.

De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 71, 413–443.

Deutschmann, M., & Häder, S. (2002). Nicht-Eingetragene in CATI-Surveys [Unlisted numbers in CATI surveys]. In S. Gabler & S. Häder (Eds.), Telefonstichproben: Methodische Innovationen und Anwendungen in Deutschland (pp. 68–84). Münster: Waxmann.

Dijkstra, W., & Smit, J. H. (2002). Persuading reluctant recipients in telephone surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 121–134). New York, NY: Wiley.

Dillman, D. A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken: Wiley.

Dillman, D. A. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken: Wiley.

Dillman, D. A., Gallegos, J. G., & Frey, J. H. (1976). Reducing refusal rates for telephone interviews. Public Opinion Quarterly, 40, 66–78.

Durrant, G. B., & Steele, F. (2009). Multilevel modelling of refusal and non-contact in household surveys: Evidence from six UK Government surveys. Journal of the Royal Statistical Society, 172, 361–381. https://doi.org/10.1111/j.1467-985X.2008.00565.x

Engel, U., & Pötschke, M. (2004). Nonresponse und Stichprobenqualität: Ausschöpfung in Umfragen der Markt- und Sozialforschung [Nonresponse and sample quality: Response rates in market and social research surveys]. Frankfurt am Main: Verlagsgruppe Deutscher Fachverlag.

Freeth, S. (2004). The labour force survey: Report of the 2001 census-linked study of survey nonresponse. Working Paper. London: Office for National Statistics.

Fricker, S., & Tourangeau, R. (2011). Examining the relationship between nonresponse propensity and data quality in two national household surveys. Public Opinion Quarterly, 74, 934–955. https://doi.org/10.1093/poq/nfq064

Goldstein, K. M., & Jennings, M. K. (2002). The effect of advance letters on cooperation in a list sample telephone survey. Public Opinion Quarterly, 66, 608–617.

Groves, R. M. (2004). Survey errors and survey costs. Hoboken: Wiley.

Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646–675.

Groves, R. M., & Couper, M. P. (1998). Nonresponse in household interview surveys. New York, NY: Wiley.

Groves, R. M., Fowler, F., Couper, M., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken: Wiley.

Groves, R. M., & Lyberg, L. (2001). An overview of nonresponse issues in telephone surveys. In R. M. Groves, P. Biemer, L. Lyberg, J. T. Massey, W. L. Nicholls, & J. Waksberg (Eds.), Telephone survey methodology (pp. 191–211). New York, NY: Wiley.

Groves, R. M., & Lyberg, L. (2011). Total survey error: Past, present, and future. Public Opinion Quarterly, 74, 849–879. https://doi.org/10.1093/poq/nfq065

Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72, 167–189.

Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation. Public Opinion Quarterly, 64, 299–308.

Groves, R. M., & Snowden, C. (1987). The effects of advanced letters on response rates in linked telephone surveys. In Proceedings of the American Statistical Association, Survey Research Methods Section. http://ww2.amstat.org/sections/srms/Proceedings/papers/1987_113.pdf

Guzy, N. (2015). Nonresponse Bias in telefonischen Opferbefragungen [Nonresponse bias in telephone victimization surveys]. In J. Schupp & C. Wolf (Eds.), Nonresponse bias (pp. 161–207). Wiesbaden: Springer Fachmedien.

Häder, S. (2015). Stichproben in der Praxis [Sampling in practice]. Mannheim: GESIS Survey Guidelines. doi:10.15465/gesis-sg_014

Hembroff, L. A., Rusz, D., Rafferty, A., McGee, H., & Ehrlicher, N. (2005). The cost-effectiveness of alternative advance mailings in a telephone survey. Public Opinion Quarterly, 69, 232–245.

Holle, R., Hochadel, M., Reitmeir, P., Meisinger, C., & Wichmann, H. E. (2006). Prolonged recruitment efforts in health surveys: Effects on response, costs, and potential bias. Epidemiology, 17, 639–643. https://doi.org/10.1097/01.ede.0000239731.86975.7f

Hox, J., & De Leeuw, E. (2002). The influence of interviewers' attitude and behaviour in household survey nonresponse: An international comparison. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 103–120). New York, NY: Wiley.

Hox, J., De Leeuw, E. D., & Chang, H.-T. (2012). Nonresponse versus measurement error: Are reluctant respondents worth pursuing? Bulletin de Méthodologie Sociologique, 113, 5–19. https://doi.org/10.1177/0759106311426987

Kaminska, O., McCutcheon, A. L., & Billiet, J. (2011). Satisficing among reluctant respondents in a cross-national context. Public Opinion Quarterly, 74, 956–984. https://doi.org/10.1093/poq/nfq062

Keeter, S., Miller, C., Kohut, A., Groves, R. M., & Presser, S. (2000). Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly, 64, 125–148.

Link, M. W., & Mokdad, A.
( 2005 ). Advance letters as a means of improving respondent cooperation in random digit dial studies . Public Opinion Quarterly , 69 , 572 – 587 . Google Scholar Crossref Search ADS Lynn P. , Clarke P. , Martin J. , Sturgis P. ( 2002 ). The effects of extended interviewer efforts on nonresponse bias. In Groves R. M. , Dillman D. A. , Eltinge J. L. , Little R. J. A. (Eds.), “A Wiley-Interscience publication”. Survey nonresponse (pp. 135 – 147 ). New York, NY : Wiley . Olson K. ( 2006 ). Survey participation, nonresponse bias, measurment error bias, and total bias . Public Opinion Quarterly , 70 , 737 – 758 . Google Scholar Crossref Search ADS Otte G. ( 2002 ). Erfahrungen mit zufallsgenerierten Telefonstichproben in drei lokalen Umfragen. In Gabler S. , Häder S. (Eds.), Telefonstichproben. Methodische innovationen und anwendungen in deutschland (pp. 85 – 110 ). Münster : Waxmann . Parsons J. , Owens L. , Skogan W. ( 2002 ). Using advance letters in RDD surveys: Results of two experiments . Survey Reserach , 33 ( 1 ), 1 – 2 . Peytchev A. ( 2012 ). Consequences of survey nonresponse . The ANNALS of the American Academy of Political and Social Science , 645 , 88 – 111 . https://doi.org/10.1177/0002716212461748 Google Scholar Crossref Search ADS Sakshaug J. W. , Yan T. , Tourangeau R. ( 2011 ). Nonresponse error, measurement error, and mode of data collection: Tradeoffs in a multi-mode survey of sensitive and non-sensitive items . Public Opinion Quarterly , 74 , 907 – 933 . https://doi.org/10.1093/poq/nfq057 Google Scholar Crossref Search ADS Singer E. , Presser S. ( 2007 ). Privacy, confidentiality, and respondent burden as factors in telephone survey nonresponse. In Lepkowski J. M. , Tucker C. , Brick J. M. , De Leeuw E. D. , Japec L. , Lavrakas P. J. , Sangster R. L. (Eds.), Wiley series in survey methodology. Advances in telephone survey methodology (pp. 449 – 470 ). Hoboken : Wiley . Singer E. , van Hoewyk J. , Maher M. P. ( 2000 ). 
Experiments with incentives in telephone surveys . Public Opinion Quarterly , 64 , 171 – 188 . https://doi.org/10.1086/317761 Google Scholar Crossref Search ADS PubMed Smith F. ( 1983 ). On the validity of inferences from non-random samples . Journal of the Royal Statistical Society, Series A 146 , 394 – 403 . doi: 10.2307/2981454 Google Scholar Crossref Search ADS Smith W. , Chey T. , Jalaludin B. , Salked G. , Capon T. ( 1995 ). Increasing response rates in telephone surveys: A randomised trial . Journal of Public Healtz Medicine , 17 , 33 – 38 . Snijkers G. , De Leeuw D. ( 1999 ). Interviewers' tactics for fighting survey nonresponse . Journal of Official Statistics , 15 , 185 – 198 . Stangeland P. ( 1996 ). The effect of interview methods and response rate on victim survey crime rates. In Australian Institute of Criminology (Ed.), AIC Conference Proceedings (pp. 141–148) http://www.aic.gov.au/media_library/publications/proceedings/27/stangeland.pdf Stoop I. ( 2004 ). Surveying nonrespondents . Field Methods , 16 , 23 – 54 . doi: 10.1177/1525822X03259479 Google Scholar Crossref Search ADS Stoop I. ( 2005 ). Nonresponse in sample surveys: The hunt for the last respondent . The Hague : Social and Cultural Planning Office . Tourangeau R. , Groves R. M. , Redline C. D. ( 2010 ). Sensitive topics and reluctant respondents: Demonstrating a link between nonresponse bias and measurement error . Public Opinion Quarterly , 74 , 413 – 432 . https://doi.org/10.1093/poq/nfq004 Google Scholar Crossref Search ADS Traugott M. W. , Groves R. M. , Lepkowski J. M. ( 1987 ). Using dual frame designs to reduce nonresponse in telephone surveys . Public Opinion Quarterly , 51 , 522 – 539 . https://doi.org/10.1086/269055 Google Scholar Crossref Search ADS Vogl S. , Parsons J. A. , Owens L. K. , Lavrakas P. J. ( forthcoming ). Experiments on the Effects of Advance Letters in Surveys. In Lavrakas P. J. , Traugott M. W. , Kennedy C. , Holbrook A. L. , De Leeuw E. D. , West B. T. 
(Eds.), Experimental methods in survey research: techniques that combine random sampling with random assignment. Wiley series in survey methodology . Hoboken : Wiley . von der Lippe E. , Schmich P. , Lange C. ( 2011 ). Advance letters as a way of reducing non-response in a National Health Telephone Survey: Differences between listed and unlisted numbers . Survey Research Methods , 5 , 103 – 116 . Weidmann C. , Schmich P. , Schiller-Born S. ( 2008 ). Der Einfluss von Kontrollüberzeugungen der Interviewer auf die Teilnahme an telefonischen Befragungen . Methoden, Daten, Analysen (mda) , 2 , 125 – 147 . Woodruff S. I. , Mayer J. A. , Clapp E. ( 2006 ). Effects of an introductory letter on response rates to a teen/parent telephone health survey . Evaluation Review , 30 , 817 – 823 . https://doi.org/10.1177/0193841X05285662 Google Scholar Crossref Search ADS PubMed © The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png International Journal of Public Opinion Research Oxford University Press

Advance Letters in a Telephone Survey on Domestic Violence: Effect on Unit Nonresponse and Reporting

Publisher: Oxford University Press
ISSN: 0954-2892
eISSN: 1471-6909
DOI: 10.1093/ijpor/edy006

Abstract

Advance letters (ALs) are one tool for improving response rates. However, it is not sufficiently clear whether ALs affect nonresponse bias, and how their effect relates to the study topic. Examined here are (1) the effect of ALs on outcome rates, (2) recruitment effort, (3) their differential effect on subgroups, and (4) their effect on reporting on sensitive topics. The data stem from a split-ballot experiment implemented in a telephone survey on “Violence against Men.” The study comprises responses from approximately 950 men aged 21–70 years. The results indicate a positive effect of ALs on response, cooperation, and contact rates, and higher response rates among older respondents. Self-reports on sensitive topics were not affected by the ALs.

Introduction

Telephone surveys are a key mode of data collection, but deteriorating response rates challenge their leading position (De Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007). Thus, efforts have been made to reduce the number of nonrespondents and increase response rates through methods such as the mailing of advance letters (ALs). In this article, the effects of ALs (in combination with interviewers referring to the AL during recruitment) on response to a telephone survey on domestic violence are under scrutiny.

Although studies on the effect of ALs have been conducted over the past 20–30 years, continual assessment is important because the environment of computer-assisted telephone interview (CATI) surveys is changing: not only are communication technologies changing but so are communication habits; for example, the coverage of telephone books is decreasing. Furthermore, existing research neglects certain aspects of measurement and selection biases. Critical for assessing the effect and effectiveness of ALs is not merely to look at outcome rates but also to consider potential nonresponse and measurement bias.
Before reporting findings, ALs will be embedded in the methodological discussion of nonresponse, nonresponse bias, and measurement error. The main purpose of ALs is to increase response rates. Low response rates are problematic, first, because high rates of unit nonresponse increase survey costs, for example, through additional call attempts, refusal conversion, follow-up mailings, incentives, and ALs (Davern et al., 2010; Groves & Peytcheva, 2008). However, these greater efforts can still fail to obtain interviews from sample members who differ from prior respondents (Curtin, Presser, & Singer, 2000; Keeter, Miller, Kohut, Groves, & Presser, 2000; Peytchev, 2012). According to Davern (2013), expensive efforts to increase response rates have been harmful because the money would have been better spent on either increasing the sample size or conducting a rigorous nonresponse bias analysis (Davern, 2013, pp. 907–908).

This situation leads to the second problem: nonresponse bias, which may systematically distort descriptive and inferential statistics and thus make unbiased estimates of population characteristics almost impossible (Berinsky, 2008; Groves et al., 2004; Groves & Lyberg, 2001). Unfortunately, nonresponse bias is hard to measure because it requires comparing data from the population and the sample. Thus, to reduce nonresponse bias, reducing nonresponse is the most common recommendation (Groves, 2006, p. 647). The underlying assumption is that higher response rates reduce nonresponse bias by recruiting a more diverse group of respondents and achieving an overall more balanced representation of the target population (Groves, Singer, & Corning, 2000).
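The dependence of nonresponse bias on both the nonresponse rate and the respondent–nonrespondent difference can be made explicit with the standard deterministic decomposition for a sample mean (a textbook formulation added here for clarity; it is not taken from this article):

```latex
% Nonresponse bias of the respondent mean \bar{y}_r relative to the
% full-sample mean \bar{y}_n, for a sample of n members of whom m
% do not respond:
\[
\operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n
\;=\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr)
\]
% \bar{y}_m is the (unobserved) mean among nonrespondents. Bias is the
% product of the nonresponse rate m/n and the respondent-nonrespondent
% difference, so a high response rate alone does not guarantee low bias.
```

This is why reducing nonresponse only helps if the newly recruited respondents actually narrow the gap between respondents and nonrespondents.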
However, research suggests that changes in response rates do not necessarily alter survey estimates, and substantial robustness can be achieved even with relatively low response rates (Curtin, Presser, & Singer, 2005; Davern et al., 2010; Groves, 2006; Holle, Hochadel, Reitmeir, Meisinger, & Wichmann, 2006; Keeter et al., 2000). Furthermore, indiscriminate attempts to improve response rates can reduce data quality in terms of nonresponse bias and measurement error (Fricker & Tourangeau, 2011). The nonresponse rate is thus not in itself a measure of survey quality; it is simply easier to measure than other indicators. Research therefore needs to focus on reducing nonresponse bias (Groves & Couper, 1998; Link & Mokdad, 2005), taking the totality of survey quality indicators into account (Biemer, 2011; Groves & Lyberg, 2011; Hox, De Leeuw, & Chang, 2012). We should ask: Do means of improving response rates improve survey quality? Or do they introduce bias through sampling, nonresponse, and potentially measurement?

To theorize the potential effects of ALs, several reasons for nonresponse can be categorized. Generally, nonresponse is a function of design characteristics, interviewer, social environment, household, and individual characteristics (Groves & Couper, 1998); on the design level, nonresponse is influenced by sample design, survey mode, fieldwork protocols such as frequency and timing of contacts, duration of data collection periods, topic, survey sponsors, and so forth (Dillman, Gallegos, & Frey, 1976; Groves et al., 2004; Groves, 2006).
These survey design specifics are correlated with personal characteristics apparent in different levels of reachability: older people (Groves & Couper, 1998), larger households (Freeth, 2004; Groves & Couper, 1998), households with older people and/or children (Groves & Couper, 1998), people who work from home (Groves et al., 2004), and people without a migration background (Lynn, Clarke, Martin, & Sturgis, 2002) are easier to contact. Moreover, lifestyle and leisure time activities influence reachability (Durrant & Steele, 2009; Groves & Couper, 1998; Guzy, 2015).

Leverage-salience theory explains survey participation by taking design features into account. It assumes varying effects on subgroups (because of different leverages) and across designs (because of different salience exhibited during the survey request) (Groves et al., 2000, p. 302). This theory thus highlights a potentially different composition of respondents if design features like ALs or the study topic exert different leverage and/or salience in subgroups. In addition, interviewer training (Groves & Couper, 1998) and interviewers' strategies, tactics (Dijkstra & Smit, 2002; Snijkers & De Leeuw, 1999), and attitudes (Hox & De Leeuw, 2002; Weidmann, Schmich, & Schiller-Born, 2008) potentially influence nonresponse, both independently and in interaction with other factors.

Design and interviewer characteristics interact with the societal level. Increasing nonresponse rates may be attributable to individualization, lack of free time, changing leisure time activities, increased mobility, and growing concerns about privacy and confidentiality (Singer & Presser, 2007), as well as to increased skepticism about surveys and to new communication technologies, call-blocking devices, active call monitoring, and answering machines (Dillman, 2007; von der Lippe, Schmich, & Lange, 2011).
On the personal level, nonresponse is influenced in particular by gender and age, socioeconomic status, and ethnicity (Freeth, 2004; Groves, 2004; Groves et al., 2004; Smith, 1983; Stoop, 2005), but also by interest in and personal affectedness by the study topic (Aust & Schröder, 2009). Especially in surveys on sensitive topics, such as victimization in this case, participation is likely to be closely linked to the personal level; for example, victims might participate more frequently than nonvictims because they are “eager to tell” (Stangeland, 1996).

ALs are interventions on the survey design level with effects on the personal and interviewer levels. On the interviewer level, an AL offers interviewers support and credibility in their recruitment effort and increases their confidence (Baulne & Courtemanche, 2008; Groves & Snowden, 1987). Interviewers find ALs helpful in the initial contact with households because they allay respondents' initial suspicions; respondents, in turn, value ALs as a more professional way of seeking an interview (Collins, Sykes, Wilson, & Blackshaw, 2001).1 On the personal level, prenotifications can stimulate interest, boost both intrinsic and extrinsic motivation, suggest legitimacy and credibility, provide reassurance about the confidentiality of the study, and thus increase trust and evoke principles of social exchange and reciprocity (Baulne & Courtemanche, 2008; Collins et al., 2001), assuming the respondent has received and read the AL. Following this assumption, it is also plausible that there might be an effect on reporting, implying a link between nonresponse and measurement error. Survey researchers speculate that converted refusers make poor reporters (Fricker & Tourangeau, 2011; Sakshaug, Yan, & Tourangeau, 2011; Tourangeau, Groves, & Redline, 2010), and they assume common causes for response propensities and response accuracy.
Consequently, if an AL increases the propensity to participate, it might also affect reporting behavior, particularly on sensitive topics. The relation between nonresponse and measurement has been described as follows: “The same sample members who are likely to misreport because they are in the socially undesirable category may also be likely to refuse to take part in the survey because they find the topic sensitive” (Tourangeau et al., 2010, p. 415). So far, little research has explored the interrelation of ALs' effects on measurement and nonresponse.

In general, the literature suggests a positive effect of ALs on response rates (Camburn, Lavrakas, Battaglia, Massey, & Wright, 1995; Smith, Chey, Jalaludin, Salkeld, & Capon, 1995; Traugott, Groves, & Lepkowski, 1987). In their meta-analysis, De Leeuw and colleagues (2007) identified an increase in the cooperation rate of about 11% and in the response rate of about 8% after sending ALs. They concluded: “ALs are a general tool that can be applied in many different types of surveys” (De Leeuw et al., 2007, p. 425). The positive effect is larger if the AL is personalized, the conducting institution is a university, the study is described in detail, the social utility of the respondents' cooperation is emphasized, confidentiality of the data is promised, and the sampling techniques are described (Berinsky, 2008; De Leeuw et al., 2007; Engel & Pötschke, 2004; Groves, 2004). However, Woodruff, Mayer, and Clapp (2006) and Singer, van Hoewyk, and Maher (2000) did not find a difference in response and cooperation rates. Groves (2004) found a potentially negative effect of prenotifications if the prospective respondent regarded the topic as unpleasant. ALs might also make it easier for selected persons to prepare their refusal.
In sum, most research looks at the average impact of ALs (Vogl, Parsons, Owens, & Lavrakas, forthcoming), but there might be differential effects: there might be a double or even triple bias in using ALs in telephone surveys,2 which might offset the positive effects.

First, sending ALs for telephone surveys requires both telephone numbers and mail addresses. In random digit dialing (RDD) samples, telephone numbers have to be matched with addresses. Only respondents for whom a telephone number can be successfully matched to a mailing address will receive an AL or be part of the study.3 In 2007, De Leeuw and colleagues estimated the average matching rate for an RDD sample at approximately 40%, but they also noticed cross-country variation (De Leeuw et al., 2007), and a decrease over time is likely. In Germany, in 2009, only about half the households had a published landline number (Häder, 2015). Of mobile phone numbers, only about 2% are listed, most of which are business contacts (Häder, 2015). In list-based samples, matching telephone numbers must be identified. In both cases, this matching process entails a risk of coverage error: people with published telephone numbers have different sociodemographic backgrounds than those without. People without published addresses in directories are younger, single or divorced, and employed; live in cities; and have young children (Häder, 2015; von der Lippe et al., 2011). Some results suggest that listed households “may be more cooperative to telephone survey requests in part because of demographic differences in the two populations” (Parsons, Owens, & Skogan, 2002, p. 2). The effect of ALs can therefore be ambiguous: they help recruit respondents who are already easier to reach (von der Lippe et al., 2011, pp. 112–113).

Second, beyond the sampling issue, the effect of ALs depends on someone in the household (preferably the selected sample member) receiving and reading the letter (Link & Mokdad, 2005).
However, a substantial number of the letters are treated as junk mail and not read (Couper, Mathiowetz, & Singer, 1995; Link & Mokdad, 2005). In a study on the effect of ALs in RDD samples, only 61% of 1,000 respondents remembered seeing the AL (Groves & Snowden, 1987; Link & Mokdad, 2005). Beyond this disappointing result, some social groups are more likely to read the letter than others: women read it more frequently than men, and lower-income groups more frequently than higher-income groups (Groves, 2004; Groves & Snowden, 1987).

Third, ALs can have differential effects on respondent groups' cooperation after they read them. Goldstein and Jennings (2002) and Groves and Snowden (1987) analyzed the effect of ALs in telephone surveys and concluded that notification letters were most successful among older participants, who generally exhibit low response rates. However, in their meta-analysis, De Leeuw et al. (2007) could not find any indication that ALs have different effects on different population groups.

Fourth, persuading reluctant respondents by referring to an AL could produce data with increased measurement error (Biemer, 2001; Groves & Couper, 1998). From research with various refusal conversion efforts, we know that a potential link exists between nonresponse and measurement error in that reluctant respondents are likely to give poor information; however, evidence for this link is still limited (Tourangeau et al., 2010). Olson (2006) assumed that response propensity and response accuracy have common causes but found the relationship between nonresponse bias, measurement error, and propensity to respond to be specific to the type of nonresponse. Kaminska, McCutcheon, and Billiet (2011) found a relationship between the two factors that is fully explained by cognitive ability.
In sum, although ALs can potentially increase response, they can also have differential effects on subgroups and thereby contribute to nonresponse bias (Link & Mokdad, 2005); for example, some people are more likely to respond because the letter legitimizes the study, whereas for others the letter is a warning that lets them prepare their refusal tactics. The same or similar causes of differential participation among subgroups can also affect response behavior and thus contribute to measurement error. Another potential implication of ALs is increased survey costs. The heterogeneity of results indicates variation in the effectiveness of ALs, which calls for further investigation.

Research Questions

Based on these considerations and results from previous research, a split-ballot experiment on the effect of ALs in CATI was implemented for the study “Violence against Men” conducted in Germany in 2007. The aim was to examine (a) the effect on outcome rates of a personalized AL, with a salutation by name and the interviewer referring to it during telephone contact, and (b) recruitment effort, as well as (c) potentially different effects on sociodemographic subgroups and (d) effects on the reporting of sensitive topics. Technically speaking, the AL and non-AL groups are compared regarding (1) outcome statistics, (2) the propensity to complete an interview or refuse, and (3) the number of call attempts. In another step, (4) sociodemographics in the population are compared with those of the two respondent groups, with and without ALs. Furthermore, the AL and non-AL groups are compared among respondents to determine effects on (5) the reporting of violence. In brief, the assumption is that ALs increase trust, credibility, and response rates, with the frequency of reported victimization being higher through self-selection and/or changes in reporting.
Research Design

Data Collection and Sampling Strategy

In principle, two strategies for random sampling in CATI studies exist: telephone directories and RDD. Similar to the United States, German public telephone directories are an inadequate source of telephone samples because they are neither complete nor up to date, and a growing share of telephone numbers is no longer published (Häder, 2015). Although potentially every household with a telephone can be captured in RDD designs, postal addresses are usually unknown, which largely precludes the mailing of ALs. Thus, a different sampling strategy was used.

The sample was based on addresses from official registration databases. For random samples, registration office data seem to be the most complete and up-to-date source: German residents are legally obligated to register their home address at a local registration office. Compared with telephone samples, sampling from registration data is based on individuals rather than households. Thus, addressing an AL personally to a specific person is straightforward, and the probability that the AL will be read improves.

Because a complete list of inhabitants does not exist in Germany, all Bavarian administrative units were first stratified into “big city” (>100,000 inhabitants), “town” (40,000–100,000 inhabitants), and “rural area” (on average <3,000 inhabitants per municipality). Then, 1 big city, 3 towns, and 20 rural municipalities from four rural administrative districts within Bavaria were randomly selected. Next, for each of the three categories of municipality size (big city, town, rural area), 1,500 addresses of men aged 21–70 with German citizenship were randomly selected per stratum from the official registration databases (4,500 addresses in total). Information obtainable from registration offices for scientific research includes name, title, address, year of birth, nationality, and sex.
The most important information for telephone surveys, the telephone numbers, is not available from the registers. Therefore, in a third step, telephone number entries in public directories were matched with the selected names; telephone numbers could be identified for 2,539 of 4,473 sample members. Half the potential respondents were randomly assigned to the AL group and received a personalized AL with information about the study, the conducting institution, and the data collection, as well as an assurance of confidentiality (see Supplementary Material). The non-AL group did not receive any prenotification.

Finally, approximately 940 men aged 21–70 years were surveyed on their experience of intimate partner violence. In total, 374 of the respondents had not received an AL and 562 had. Up to 20 call attempts were made per number, with an average of 3.7. Callback times alternated between weekdays/weekend and afternoon/evening, with flexible appointments following respondents' requests. The field phase lasted 3 weeks. The data were collected through CATIs from February to March 2007 at the telephone survey facilities of a Bavarian university. The calls were made without telephone number suppression, so respondents who had received and read the AL could associate the displayed number with the university. The interviewers knew about the AL experiment and referred to the AL during recruitment if the contact was marked as having received a prenotification.4 The introduction text was: “Good afternoon. My name is XX. I am calling from the University of Eichstaett. Recently, you have received a letter from us. We are conducting a survey on conflicts in relationships and would like to ask you to participate. Your participation is voluntary. All data remain anonymous and we follow data protection guidelines.” The sentence referring to the letter was omitted for the non-AL group.

The sampling strategy introduced a potential bias in that people without published telephone numbers could not participate in the study.
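The matching and random-assignment steps just described can be sketched as follows. This is an illustrative reconstruction, not the original fieldwork code: the sample-member ids, the helper name `assign_split_ballot`, and the seed are assumptions.

```python
# Illustrative sketch of the study's split-ballot assignment: register-sample
# members whose directory lookup yielded a telephone number are split at
# random, half into the advance-letter (AL) condition and half into the
# no-prenotification condition.
import random

def assign_split_ballot(matched_ids, seed=2007):
    """Randomly split matched sample members into AL and non-AL groups."""
    rng = random.Random(seed)
    ids = list(matched_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return set(ids[:half]), set(ids[half:])

# 2,539 of the 4,473 sample members had a matchable telephone number.
matched = range(2539)
al_group, non_al_group = assign_split_ballot(matched)
```

Randomizing only within the matched subset mirrors the study's constraint: sample members without a published number never enter either experimental condition, which is exactly the coverage issue discussed next.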
People with listed telephone numbers differ in certain characteristics from those with unlisted numbers (Deutschmann & Häder, 2002; Otte, 2002; Parsons et al., 2002). In this sample, people without a published telephone number tended to be younger than both people with published numbers and the target population (see Table 2). Three potential explanations exist for this age effect. First, until the 1990s, publishing telephone numbers in public directories was obligatory in Germany (Häder, 2015); younger people with newer telephone lines therefore often have no directory entry because they were never obliged to have one. Second, younger people tend to be more mobile, and a change of residency implies a change of landline number. Third, younger people often have no landline phone at all (the so-called mobile-only population), and mobile telephone numbers are listed far less frequently than landline numbers (Häder, 2015). Furthermore, the share of persons without a published telephone number in the sample increased with the size of the place of residency: 54.3% of people in cities had no published telephone number, compared with 45.3% in towns and 29.6% in rural areas. These percentages could be related to the age structure of the population.

To evaluate the effect of ALs, two approaches were used. First, the outcome statistics of the AL and non-AL groups were compared and the proportions of respondents with various sociodemographic characteristics were evaluated, testing for statistically significant differences. Second, logistic regressions predicting completion of an interview for eligible sample units from theoretically sensible predictors were estimated, testing for interactions of the AL condition with demographic variables to gauge whether the predictors were more strongly related to the dependent variable under the AL condition than under the non-AL condition. The same analytical steps were taken for reporting on sensitive topics.
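A minimal sketch of this second analytic step, fitting a logistic regression with an AL-by-demographics interaction. The data are simulated, and the variable names, effect sizes, and plain Newton-Raphson fitting routine are illustrative assumptions, not the study's actual model or estimates:

```python
# Logistic regression of interview completion on AL assignment, age, and
# their interaction, fit by Newton-Raphson (no external stats package).
import numpy as np

rng = np.random.default_rng(7)
n = 2000
al = rng.integers(0, 2, n)            # 1 = received an advance letter
age = rng.uniform(21, 70, n)          # respondent age, as in the sample frame
age_c = (age - age.mean()) / age.std()

# Assumed data-generating process: ALs help overall, and more so at older ages.
logit = -0.5 + 0.6 * al + 0.2 * age_c + 0.3 * al * age_c
complete = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([np.ones(n), al, age_c, al * age_c])

beta = np.zeros(X.shape[1])
for _ in range(25):                    # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (complete - p)        # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X   # observed information
    beta += np.linalg.solve(hess, grad)

# beta[3] is the AL x age coefficient: does the AL effect grow with age?
print(dict(zip(["intercept", "AL", "age", "AL_x_age"], beta.round(2))))
```

A positive interaction coefficient would correspond to the pattern reported later in the article, where ALs raised response rates most among older respondents.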
Before presenting the results, a word of caution is necessary. Talking about the effect of ALs in this context is a simplification of a complex system of potential interrelations between different factors, only one of which is sending an AL. In other words, effects cannot necessarily be exclusively ascribed to ALs. In principle, differential interviewer behavior could be a confounder because interviewers were not blind to the experimental condition (although there is no trace of a statistical effect or evidence from supervising fieldwork; see also Footnote 4). Furthermore, it is not clear whether the AL was received, read, or remembered by respondents. Thus, inferences about the effects of ALs refer to a bundle of factors that are potentially associated with ALs, not to the experimental isolation of effects caused exclusively by ALs. At the same time, it is often the case in research practice that interviewers cannot be taken out of the equation, and therefore, they may interact with other design features.

Results on Effect of ALs

Effect of ALs on Outcome Rates

To make the results comparable with the international literature on survey nonresponse, final disposition codes were categorized according to the AAPOR (2016) guidelines, and outcome rates were calculated.5 In this experiment, ALs did not have any impact on refusal rates (non-AL: 39.3%; AL: 38.8%; see Table 1), but they positively influenced cooperation (non-AL: 43.0%; AL: 53.2%) and response rates (non-AL: 30.8%; AL: 46.9%). In other words, more potential respondents could be reached after receiving an AL, and they were then more likely to participate in the survey. Also, the contact rate was higher under the AL condition (non-AL: 71.5%; AL: 88.1%; see Table 1). The cases in which the line was always busy (non-AL: 2.7%; AL: 0.7%), nobody ever answered (non-AL: 13.0%; AL: 4.6%), or only an answering machine responded (non-AL: 9.8%; AL: 2.3%) were less frequent for men who had received an AL.
This result does not make immediate sense, unless respondents could identify the displayed telephone number as being related to a scientific study rather than an unknown caller. However, this explanation remains a speculation.

Table 1 Outcome Rates According to AAPOR (2016) Definition**

Outcome and outcome rates                           Non-AL            AL
I = Interviews                                      29.4% (374)       44.4% (563)
R = Refusal and break-off (2.1)                     37.4% (478)       36.7% (466)
   Refusal                                          29.7% (377)       29.6% (375)
   Break-off                                         8.0% (101)        7.2% (91)
NC = Noncontact (2.2)                                1.7% (21)         3.7% (47)
   Noncontact                                        0.4% (5)          0.4% (5)
   Respondent never available                        1.3% (16)         3.3% (42)
O = Other (2.0, 2.3)                                 1.3% (17)         1.5% (29)
   Physically or mentally unable/incompetent         0.6% (7)          1.2% (15)
   Language problem                                  0.6% (7)          0.6% (7)
   Miscellaneous                                     0.2% (3)          0.6% (7)
e*                                                   0.942             0.942
UH = Unknown household                              25.6% (325)        7.6% (96)
   Not attempted or worked                           0.1% (1)          0% (0)
   Always busy                                       2.7% (34)         0.7% (9)
   No answer                                        13.0% (165)        4.6% (58)
   Answering machine                                 9.8% (124)        2.3% (29)
Not eligible                                         4.3% (55)         5.4% (68)
   Fax                                               0.2% (3)          0.3% (4)
   Nonworking/disconnected                           0.1% (1)          0.3% (4)
   Nonworking number                                 0.3% (4)          0.1% (1)
   Business, other organization                      0.3% (4)          0% (0)
   No eligible respondent                            3.4% (43)         4.6% (59)
Response Rate 1 = I/[(I + P) + (R + NC + O) + (UH + UO)]        30.8%      46.9%
Cooperation Rate 1 = I/(I + P + R + O)                          43.0%      53.2%
Refusal Rate 1 = R/[(I + P) + (R + NC + O) + (UH + UO)]         39.3%      38.8%
Contact Rate 1 = [(I + P) + R + O]/[(I + P) + R + O + NC + (UH + UO)]   71.5%   88.1%

*e is the estimated proportion of cases of unknown eligibility that are eligible, based on the proportion of eligible units among all units in the sample for which a definitive determination of status was obtained (a conservative estimate). For guidance about how to compute other estimates of e, see AAPOR's 2016 Eligibility Estimates. AL = advance letter.
**The difference between the outcome rates in the AL versus non-AL group is statistically significant (χ2 = 164.698, p < .001).
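The Rate-1 formulas beneath Table 1 apply directly to the disposition counts. The following sketch reproduces the reported rates (the helper function name is my own; P = partial interviews and UO = unknown other are zero in this study, and e is ignored because the Rate-1 definitions count all unknown-eligibility cases as eligible):

```python
def aapor_rates(I, R, NC, O, UH, P=0, UO=0):
    """AAPOR (2016) Rate-1 definitions: all unknown-eligibility
    cases (UH + UO) are treated as eligible."""
    denom = (I + P) + (R + NC + O) + (UH + UO)
    return {
        "response_rate_1":    I / denom,
        "cooperation_rate_1": I / (I + P + R + O),
        "refusal_rate_1":     R / denom,
        "contact_rate_1":     ((I + P) + R + O) / ((I + P) + R + O + NC + (UH + UO)),
    }

# Disposition counts for the non-AL and AL groups from Table 1.
non_al = aapor_rates(I=374, R=478, NC=21, O=17, UH=325)
al     = aapor_rates(I=563, R=466, NC=47, O=29, UH=96)
print(f"RR1 non-AL: {non_al['response_rate_1']:.1%}, AL: {al['response_rate_1']:.1%}")
# -> RR1 non-AL: 30.8%, AL: 46.9%
```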
Table 2 Sociodemographics in the Population and for Respondents With and Without an AL

                                   Population (%)*   CATI sample (%)   AL (%)        Non-AL (%)
Gender: male                       49                100               100           100
Ethnicity: German citizenship**    89                100               100           100
Marital status
   Single                          34.5              –                 20.3 (114)    19.3 (72)
   Married                         55.0              –                 76.3 (429)    72.5 (271)
   Divorced                         9.0              –                  2.5 (14)      7.2 (27)
   Widowed                          1.6              –                  0.9 (5)       1.1 (4)
Age in years**
   21–30                           17.7              12.9 (327)        13.5 (76)     14.2 (53)
   31–40                           23.3              17.0 (432)        15.9 (89)     16.6 (62)
   41–50                           24.3              27.5 (697)        28.2 (158)    35.6 (133)
   51–60                           17.8              20.9 (532)        21.9 (123)    18.7 (70)
   61–70                           17.0              21.7 (552)        20.5 (115)    15.0 (56)
Size of place of residency**
   City (< 500,000)                14                27.4 (695)        27.2 (153)    26.5 (99)
   Town (20,000–100,000)           22                31.8 (806)        30.1 (169)    33.2 (124)
   Rural area (< 2,000)            27                40.9 (1,037)      42.7 (240)    40.4 (151)
Income (monthly, net)
   < 500€                          –                 –                  3.4 (18)      5.9 (20)
   500–999€                        –                 –                  6.0 (32)      7.4 (25)
   1,000–1,499€                    –                 –                 17.1 (91)     16.0 (54)
   1,500–1,999€                    –                 –                 25.2 (134)    24.3 (82)
   2,000–2,499€                    –                 –                 15.8 (84)     16.0 (54)
   2,500–2,999€                    –                 –                 11.8 (63)     10.4 (35)
   3,000€+                         –                 –                 20.7 (110)    19.9 (67)
Size of household
   1                               –                 –                  7.7 (43)     11.8 (44)
   2                               –                 –                 37.4 (210)    30.2 (113)
   3                               –                 –                 17.8 (100)    21.1 (79)
   4                               –                 –                 26.5 (149)    25.1 (94)
   5+                              –                 –                 10.7 (60)     11.8 (44)
Educational attainment
   No formal degree                –                 –                  0.5 (3)       0.9 (5)
   Hauptschule                     –                 –                 37.4 (207)    31.9 (181)
   Mittlere Reife                  –                 –                 25.3 (140)    29.3 (166)
   Abitur/Fachabitur               –                 –                 12.6 (70)     11.8 (67)
   University degree               –                 –                 24.2 (134)    26.1 (148)

*Source: Statistisches Landesamt Bayern 2008.
**Refers to male population only.

Logistic regression models were developed to assess the likelihood of completing an interview for eligible persons based on receiving versus not receiving an AL.6 Although the AL significantly improved the odds of completing an interview for eligible sample members, the effect was rather small (Nagelkerke pseudo R2 = 0.011). Without an AL, the odds of completing an interview were multiplied by 0.7 (β = −0.360; SE = 0.091; Wald = 15.730; df = 1; p < .01; Exp(B) = 0.698; 95% confidence interval [CI] = 0.584–0.834; baseline category: eligible, noninterview). Furthermore, demographic variables were introduced into the regression model, with interview versus eligible, noninterview being the dependent variable and AL being the independent variable of interest; respondents' age and place of residency were included as covariates. In this model, when respondent age and population density of the place of residency were controlled for, the odds of completing an interview without an AL were again multiplied by 0.7 (β = −0.364; SE = 0.091; Wald = 15.893; df = 1; p < .01; Exp(B) = 0.695; 95% CI = 0.581–0.831; baseline category: eligible, noninterview). In another step, an interaction effect between AL and respondent age was taken into account. However, there was no change in the explained variation. In sum, response, cooperation, and contact rates improved when an AL had been sent. Refusal rates remained consistent. There were no significant interaction effects between sociodemographics and the effect of ALs on the response rate, with age being an exception.

Recruitment Effort

One aspect of the recruitment effort was the number of call attempts. To reach a final disposition code, the number of call attempts was significantly higher for the AL group than for the non-AL group (F(1) = 48.4; p < .01; eta2 = 0.019).
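The reported odds ratios and confidence intervals follow from the logit coefficients as Exp(B) = e^β, with a 95% CI of e^(β ± 1.96·SE). A quick sketch checking the AL coefficient of the first model (the helper function name is my own):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic regression
    coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Coefficient for 'no AL' in the first model: beta = -0.360, SE = 0.091.
or_, lo, hi = odds_ratio_ci(-0.360, 0.091)
print(f"Exp(B) = {or_:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# -> Exp(B) = 0.698, 95% CI = [0.584, 0.834]
```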
On average, sample units who had received an AL required 3.8 call attempts to reach a final disposition code (e.g., complete interview, final refusal, noneligible), whereas men without an AL required only 2.9 attempts. In the logistic regression model, AL and number of call attempts, or the interaction between the two, did not improve the prediction of whether eligible people completed an interview. This outcome seems counterintuitive, and it contradicts findings obtained by Hembroff, Rusz, Rafferty, McGee, and Ehrlicher (2005, p. 240). However, the call attempts in the AL group resulted in a significantly higher number of completed interviews. It is likely that a higher willingness to cooperate after receiving an AL resulted in a higher number of callbacks, which increased the total number of attempts. In other words, cold calls receive more immediate hang-ups or refusals, whereas recipients of an AL are more likely to eventually complete an interview in a later call attempt. This is not a downside of ALs. As for cost implications, the principal cost elements are interviewer and supervisor labor, telephone charges, and postage. The biggest cost factor is the number of call attempts interviewers needed to make to obtain a completed interview. The higher number of call attempts in the AL group as such implies higher costs, but it is counterbalanced by a higher success rate. In the AL group, 2.26 telephone numbers from the sample were required to complete one interview; in the non-AL group, it was 3.40. Extrapolating to 1,000 completed interviews, 3,396 telephone numbers would have been required without an AL compared with 2,260 with an AL. The corresponding number of call attempts would have been 8,678 with an AL versus 9,983 without. With an average of 14.14 call attempts per hour in this study, this translates into 614 interviewer hours with an AL and 706 without.
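The extrapolation can be reproduced with a few lines of arithmetic (figures from the text; the unrounded 3.396 numbers per interview for the non-AL group is implied by the reported total of 3,396):

```python
# Telephone numbers per completed interview and total call attempts
# for 1,000 interviews, as reported in the text.
numbers_per_interview = {"AL": 2.26, "non-AL": 3.396}
calls_for_1000 = {"AL": 8678, "non-AL": 9983}
calls_per_hour = 14.14  # average call attempts per interviewer hour

for group in ("AL", "non-AL"):
    numbers_needed = round(numbers_per_interview[group] * 1000)
    hours = round(calls_for_1000[group] / calls_per_hour)
    print(f"{group}: {numbers_needed} numbers, {hours} interviewer hours")
# -> AL: 2260 numbers, 614 interviewer hours
# -> non-AL: 3396 numbers, 706 interviewer hours
```

The interviewer-hour figures exclude the cost of matching addresses to telephone numbers, which the following paragraph notes can offset part of the AL's advantage.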
This cost calculation does not account for the costs of matching addresses with telephone numbers, which can, depending on the matching method, be significant and somewhat counterbalance the comparative advantage of ALs in terms of survey costs.

Effects in Subgroups and Potential Sampling Bias

Several studies indicate that nonrespondents differ from respondents in their sociodemographics (Freeth, 2004; Groves & Couper, 1998; von der Lippe et al., 2011) and are likely to differ in other characteristics as well (Guzy, 2015; Stoop, 2004). Introducing ALs potentially limits the amount of nonresponse, ideally without distorting relevant sample characteristics. Comparing the distribution of sociodemographic variables in the experimental and the control group and, as far as possible, with the target population permits a more detailed picture of the effect of ALs on different subgroups. Unfortunately, most demographic characteristics and the victimization experience of the people who declined to participate remain unknown. To estimate potential biases associated with the use of an AL, the sociodemographics of both respondent groups are compared with each other and, where available, with population figures from official statistics.7 Most likely because of the topic of the study, life and conflicts in intimate relationships, the proportion of married men among respondents was higher than in the general population, whereas single, divorced, and widowed men were underrepresented (see Table 2; χ2 = 12.108; df = 3; p < .01). A possible explanation could be that to these men the topic of the study, "intimate relationships," did not seem relevant. Comparing the AL and non-AL groups, the proportion of divorced men among respondents was particularly low when a prenotification had been sent. Again, this outcome is likely to be associated with the topic of the study.
Earlier, we noted that owing to the sampling procedure, younger men are underrepresented in the CATI sample because their telephone numbers are less frequently listed in public directories. This underrepresentation remained in both respondent groups. In other words, the ALs did not have an overall significant effect on age groups (T = −1.275; df = 1; p = .20). However, with the ungrouped variable for age, a correlation exists between age and AL (eta = 0.267). With increasing respondent age, ALs were overall more effective; in other words, the proportion of older respondents was incrementally higher in the AL group. Male respondents aged 50 years and older were more likely to complete a telephone interview with an AL. ALs thus seemed to partially counterbalance higher refusal rates among older respondents. In this respect, ALs potentially improved the demographic accuracy of the survey, notwithstanding the problem of unpublished telephone numbers: compared with the general population, older age groups were more likely to participate in the study. The original sample of addresses was stratified by population density, with one third of addresses from each stratum. After telephone numbers were matched, rural areas had a share of 40.9% (1,038) in the CATI sample; towns, 31.8% (807); and the city, 27.4% (696). The larger the place of residency, the higher the share of people with unpublished numbers. The AL and non-AL groups showed a similar distribution across these three categories. In other words, ALs did not have a differential effect across the three categories of size of place of residency (χ2 = 1.022; df = 2; p = .60). Again, the potential bias stems from unpublished telephone numbers, and ALs did not alter it (see Table 2). In terms of other sociodemographic variables, the respondents from the AL and non-AL groups did not differ: educational background (p = .53), income (p = .63), and size of household (p = .23) were not significantly different between the two groups.
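The eta reported for the association between ungrouped age and the AL condition is the correlation ratio, the square root of the between-group over the total sum of squares. A minimal sketch with numpy on simulated ages, since the raw study data are not reproduced here (the function name is my own):

```python
import numpy as np

def correlation_ratio(groups):
    """Correlation ratio (eta) between a categorical grouping and a
    numeric variable: sqrt(SS_between / SS_total)."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    return np.sqrt(ss_between / ss_total)

# Illustrative ages for AL (n = 562) and non-AL (n = 374) respondents;
# simulated, not the study data.
rng = np.random.default_rng(7)
al_ages = rng.integers(21, 71, size=562)
non_al_ages = rng.integers(21, 71, size=374)
eta = correlation_ratio([al_ages, non_al_ages])
print(f"eta = {eta:.3f}")
```

With two groups, eta is equivalent to the point-biserial correlation in absolute value; values near 0 indicate similar age distributions across conditions.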
These variables were not available for the original sample or for the general population. Therefore, the bias through unpublished numbers or nonrespondents cannot be estimated for them. Bringing all the sociodemographic variables (age, marital status, persons in the household, educational level, income) into a logistic regression model and predicting being a respondent based on receiving or not receiving an AL explained only 4.6% of the variation, with age and marital status having the only significant main effects. Every year of age marginally increased the odds of being a respondent with a prenotification (β = 0.015; SE = 0.007; Wald = 4.816; df = 1; p = .03; Exp(B) = 1.016; 95% CI = 1.002–1.030). Marital status had a significant main effect, with "widowed" significantly decreasing the odds of participating in the survey with an AL (β = −1.493; SE = 0.404; Wald = 13.634; df = 1; p < .01; Exp(B) = 0.225; 95% CI = 0.102–0.496). No category of formal education or number of people in the household had a significant effect on the odds of completing an interview with an AL. Furthermore, the odds of completing an interview with an AL were increased by a factor of more than two for men with a monthly income between 1,000 and 1,999€ and between 2,500 and 2,999€ (income of 1,000–1,499€: β = 0.816; SE = 0.394; Wald = 4.302; df = 1; p = .038; Exp(β) = 2.263; 95% CI = 1.046–4.894; income of 1,500–1,999€: β = 0.831; SE = 0.383; Wald = 4.716; df = 1; p = .030; Exp(β) = 2.295; 95% CI = 1.084–4.858; income of 2,500–2,999€: β = 0.870; SE = 0.419; Wald = 4.307; df = 1; p = .038; Exp(β) = 2.387; 95% CI = 1.050–5.430). To summarize, the AL did not distort the sample regarding sociodemographic variables. The only noticeable effects are rather small: ALs decreased the proportion of widowed respondents and increased the proportion of older respondents (those over 50 years of age).
Effect of ALs on Reporting on Sensitive Topics

Measurement error is difficult to assess because for most attitudinal or behavioral questions, we do not know the true value or have a reliable benchmark. For lack of alternatives, we assumed that higher rates or more frequent reporting on sensitive topics was better. The hypothesis was that an AL increases trust in, and the credibility of, the survey institution and could thus increase self-reports of experiences of intimate partner violence. Nevertheless, the results imply that ALs do not affect self-reports of victimization in any significant way (see Table 3). The reported frequency of the 10 items on experience of domestic violence did not differ significantly between the AL and non-AL groups. However, after controlling for respondent age and whether the respondent and his partner lived in a joint household, there was a significant difference between the AL and non-AL groups for two variables, "shouted at me" (p < .01) and "ignored me" (p < .01), with the AL group reporting a higher frequency of these instances.
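Each row of Table 3 compares item means between the two groups via one-way ANOVA; with two groups this is equivalent to a two-sample t-test. A sketch with scipy on simulated responses (1 = often to 4 = never; illustrative, not the study data). Controlling for age and joint household, as in the analysis above, would instead require an ANCOVA or an OLS model with covariates:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Illustrative item responses for one item (1 = often ... 4 = never);
# the probabilities are assumptions, not estimates from the study.
al_group = rng.choice([1, 2, 3, 4], size=562, p=[0.05, 0.10, 0.25, 0.60])
non_al_group = rng.choice([1, 2, 3, 4], size=374, p=[0.05, 0.10, 0.25, 0.60])

f_stat, p_value = f_oneway(al_group, non_al_group)
print(f"mean AL = {al_group.mean():.2f}, mean non-AL = {non_al_group.mean():.2f}, "
      f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

Running this once per item reproduces the structure of Table 3: a pair of group means and an ANOVA significance test for each of the 10 violence items.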
Table 3 Reporting on Questions on Experience of Domestic Violencea

My partner …                                           AL      Non-AL
… insulted or verbally abused me                       3.25    3.32
… shouted at me                                        3.05    3.12**
… ignored me                                           3.15    3.11
… accused me of being a bad lover                      3.78    3.76
… threatened to leave me                               3.67    3.73
… deliberately damaged my belongings                   3.93    3.91
… pushed or shoved me                                  3.83    3.87
… slapped me                                           3.98    3.94*
… threatened me with a knife or a weapon               4.00    4.00
… hit me with an object that could have injured me     3.99    3.99

aMean values, results from ANOVA; 1 = often, 2 = sometimes, 3 = rarely, 4 = never.
* p < .05; ** p < .01. ANOVA = analysis of variance.

In conclusion, ALs did not have an overall effect on reporting on sensitive topics; only after controlling for age did 2 of 10 items produce a significant difference. Therefore, the AL seems more relevant for recruiting than for reporting.

Conclusion and Discussion

In the existing literature, the average outcome rate is often the only efficiency criterion for ALs. This article adds new dimensions to the analysis of the effect of ALs: it additionally analyzes (a) recruitment effort with cost implications, (b) potential distortions during recruitment in subgroups, and (c) potential for measurement error. It describes a split-ballot experiment and evaluates multiple facets of the problem. The objective is to detect potential distortions introduced through ALs during sampling, recruitment, and measurement. The particular strengths of this article lie in the total survey error perspective, in bias estimation through comparisons of population, sample, and respondents (with and without an AL), and in the analysis of differential reporting. Previous research has mostly neglected these aspects. In sum, the results regarding outcome rates include positive effects of ALs on contact, cooperation, and response rates but no effect on the refusal rate. Unexpectedly, ALs increased contact rates, and the AL group required higher recruitment effort. This means that more AL recipients could be reached by telephone, giving more opportunities to recruit respondents, but they required more call attempts to reach a final disposition code than non-AL recipients did. Thus, contrary to Groves's (2004) finding, ALs did not seem to serve as a prewarning for respondents to prepare their refusal. With regard to selection effects, ALs increased the participation of older age groups (50+). Thus, ALs can potentially counterbalance higher refusal rates among older respondents.
These findings are in line with previous research (Goldstein & Jennings, 2002; Groves & Snowden, 1987). Beyond that, ALs did not affect sociodemographic groups differentially, confirming results from the meta-analysis of De Leeuw and colleagues (2007). From a total survey error perspective, nonresponse bias and measurement error are crucial, rather than mere response or nonresponse rates. In this respect, the challenge of using ALs lies in the sampling procedure: to send ALs in a telephone survey, both telephone numbers and mail addresses are required. Matching procedures therefore become necessary, which can introduce bias, in this case against persons without telephone numbers in public directories. Thus, older persons and persons in rural areas were overrepresented. Taking these two results together (recruiting more older people with an AL and overrepresenting rural areas and older people in the sample), the sample accuracy might have suffered more from the sampling bias than it benefited from the recruitment effect. Regarding measurement error, the assumption was that ALs increase the credibility and trustworthiness of a study and thus might reveal higher rates of victimization.8 However, both experimental groups reported the same frequency of domestic violence. The AL did not seem to have an impact on the measurement of victimization experience. However, the higher response rates in the AL condition can facilitate the study of sensitive topics by increasing the number of cases available for detailed analysis. The scope of these results is restricted by the geographical limitation to Bavaria, the use of only men as the target population, and domestic violence being a specifically sensitive survey topic. Another limitation is that interviewers were not blind to the experimental condition (see also Footnote 4). Thus, they could have behaved differently toward the experimental and control groups and confounded the findings.
As a consequence, the observed effects cannot be solely ascribed to the AL; further, we can never be certain that the AL was received and read. The effects could potentially also be due to the behavior of the interviewers. However, for practical applications, the interplay of interviewer behavior and ALs is natural and desirable. Thus, this fact does not hamper the practical value of this research, yet caution is required in making inferences. If future research aims to disentangle interviewer behavior and the effect of ALs, a more complex experimental design is required, with four groups (no AL; AL and blind interviewer; AL and informed interviewer; no AL but with the interviewer believing an AL has been sent) and with random assignment of interviewers to groups. Furthermore, the analysis should include measurement of, or controls for, variance between interviewers. With that said, and as elaborated in the literature review, the effects of ALs are embedded in a complex social system and are the product of multiple factors, all of which are difficult to control. Nonresponse is a function of design characteristics, interviewer, social environment, household, and individual respondent characteristics. It is likely that the same holds for the effects of ALs. Interviewing is always a data construction process, not mere data gathering or collection. The relevance of these different factors should be disentangled in future research. Despite these limitations, the design and the results can inform future research and help to explore the effects of ALs on nonresponse and measurement bias further. One lesson learnt is that obtaining both telephone numbers and mail addresses can be a challenge, practically and in terms of sample accuracy. Nevertheless, registration data have the advantage of providing additional sociodemographic information for all sample members, which facilitates nonresponse analysis and the development of survey weights.
Further research should investigate how the survey topic and the design features of ALs interact, and how their effects differ among subgroups. We should strive to understand the causes of higher response rates and, in particular, the effect of ALs on interviewer strategies. Measurement error is also worth further research. Overall, the total survey error perspective is valuable for gaining a better understanding of the differential effects of ALs (and of other means of increasing response rates) and should be used more consistently in research. Psychographic data on respondents could also help to compare the two experimental samples and indicate potential differences between the groups.
Funding
This work was supported by the Faculty of Social Sciences at the University of Vienna, Austria, and the Catholic University of Eichstaett-Ingolstadt, Germany.
Biographical Notes
Susanne Vogl is a postdoctoral researcher at the Institute of Sociology at the University of Vienna. Her research focuses on interview methodology in qualitative, quantitative, and mixed methods research and with special populations.
Footnotes
1 In fact, it is often unclear whether a positive effect on response rates can be ascribed exclusively to the AL, to interviewer behavior, or to a combination of both. Thus, in many studies—like the one presented here—a combination of the AL and interviewer behavior is assessed.
2 This holds for landline as well as mobile telephone surveys, although to a different extent. Face-to-face and Web surveys can also use ALs (De Leeuw et al., 2007; Dillman, 2009).
3 In Germany, the so-called Gabler–Häder design is commonly used instead of RDD because of Germany's complicated numbering system and an RDD hit rate of <0.5% (Häder, 2015, p. 2). The design is based on a pool of all presumed working landline telephone numbers, which serves as a sampling frame for samples stratified by region and other known variables.
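The number-generation idea behind the Gabler–Häder design described in Footnote 3 can be sketched as follows. This is a deliberately simplified illustration of the principle (listed numbers seed 100-blocks from which both listed and unlisted numbers can be drawn); the function names, block size, and number format are assumptions, not the actual GESIS implementation.

```python
import random

def hundred_block(listed_number: str) -> list:
    """Expand one directory-listed number into its 100-block:
    same prefix, with the last two digits running 00-99."""
    prefix = listed_number[:-2]
    return [f"{prefix}{d:02d}" for d in range(100)]

def draw_sample(listed_numbers, n, seed=1):
    """Pool the 100-blocks seeded by listed numbers (deduplicated),
    then draw n candidate numbers at random from the pool.
    Unlisted numbers inside a seeded block are covered too."""
    pool = sorted({num for listed in listed_numbers for num in hundred_block(listed)})
    rng = random.Random(seed)
    return rng.sample(pool, n)
```

Because unlisted numbers that fall inside a generated block enter the pool, the design reaches households absent from public directories, which plain directory sampling (and hence AL matching) cannot.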
4 When interviewers administer both conditions in a survey experiment, their behavior can become a confounding factor if they behave differently in one group in a way that contributes to the findings. If interviewer behavior is a confounder and ALs affect interviewer behavior, interviewer behavior could mediate the effect of the AL on the outcome. Tests for interviewer effects yielded a significant effect of the AL and a significant effect of the interviewer on the outcome (interview or refusal)—some interviewers were better at recruiting than others. This effect remained significant in both the treatment and the control group. Furthermore, there was no interaction effect between interviewer and treatment (F = 0.846; p = .718); rather, AL and interviewer had separate main effects (AL: F = 6.631; p = .008; interviewer: F = 2.488; p < .001). If interviewers had changed their behavior depending on the AL condition, one would expect an interaction effect. Introducing the interviewer as a covariate in an ANOVA also suggested that the interviewer does not exert a confounding effect (F = 0.524; p = .469), whereas the effect of the AL remains significant (F = 18.554; p < .001). Because telephone numbers were randomly assigned to interviewers, and in light of these results, I do not assume that interviewer behavior is a confounding factor.
5 Although the AAPOR guidelines apply to RDD samples, they can be used for this study, which is based on a registry sample. Although most criteria for eligibility could be judged during the selection process in the registration offices, some needed to be checked in a screening at the beginning of the interview. Therefore, not every person with a listed telephone number was necessarily eligible. Alternatively, people without published telephone numbers could be subsumed under Category 2 (eligible, noninterview).
Consequently, Response Rates 3 and 4 (because of changes in e), Cooperation Rates 1 and 2, Refusal Rates 2 and 3, and Contact Rates 1, 2, and 3 would differ from the approach presented here: Contact Rates would be higher, Refusal and Cooperation Rates lower, and Response Rates about the same.
6 Only eligible people are included in the model.
7 Unfortunately, except for age and place of residency, no information is available on the original sample before telephone numbers were matched. Because of the random sampling procedure, discrepancies between the population and the respondents can be caused by differences between people with and without listed numbers as well as between nonrespondents and respondents.
8 Owing to the lack of a true value, the assumption is "more is better."
References
AAPOR. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th ed.). Lenexa, KS: AAPOR.
Aust, F., & Schröder, H. (2009). Sinkende Stichprobenausschöpfung in der Umfrageforschung – ein Bericht aus der Praxis. In Weichbold, M., Bacher, J., & Wolf, C. (Eds.), Umfrageforschung. Herausforderungen und Grenzen: Österreichische Zeitschrift für Soziologie/Sonderheft (Vol. 9, pp. 195–212). Wiesbaden: VS Verlag für Sozialwissenschaften.
Baulne, J., & Courtemanche, R. (2008). Is there really any benefit in sending out introductory letters in Random Digit Dialling (RDD) surveys? In Statistics Canada (Ed.), Proceedings of Statistics Canada Symposium 2008: Data Collection: Challenges, Achievements and New Directions. http://www.statcan.gc.ca/pub/11-522-x/2008000/article/11001-eng.pdf
Berinsky, A. J. (2008). Survey non-response. In Donsbach, W., & Traugott, M. W. (Eds.), Public opinion research (pp. 309–321). Los Angeles: Sage.
Biemer, P. P. (2001). Nonresponse bias and measurement bias in a comparison of face-to-face and telephone interviewing. Journal of Official Statistics, 17, 295–320.
Biemer, P. P. (2011). Total survey error: Design, implementation, and evaluation. Public Opinion Quarterly, 74, 817–848. https://doi.org/10.1093/poq/nfq058
Camburn, D., Lavrakas, P. J., Battaglia, M. P., Massey, J. T., & Wright, R. A. (1995). Using advance respondent letters in Random-Digit-Dialing telephone surveys. In American Statistical Association (Ed.), Proceedings of the 50th Annual Conference of the American Association for Public Opinion Research (pp. 969–974). Alexandria, VA. http://www.amstat.org/sections/srms/proceedings/
Collins, M., Sykes, W., Wilson, P., & Blackshaw, N. (2001). Nonresponse: The UK experience. In Groves, R. M., Biemer, P., Lyberg, L., Massey, J. T., Nicholls, W. L., & Waksberg, J. (Eds.), Telephone survey methodology (pp. 213–231). New York, NY: Wiley.
Couper, M., Mathiowetz, N., & Singer, E. (1995). Related households, mail handling and returns to the 1990 census. International Journal of Public Opinion Research, 7, 172–177.
Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64, 413–428.
Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69, 87–98. doi:10.1093/poq/nfi002
Davern, M. (2013). Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Services Research, 48, 905–912. https://doi.org/10.1111/1475-6773.12070
Davern, M., McAlpine, D., Beebe, T. J., Ziegenfuss, J., Rockwood, T., & Thiede Call, K. (2010). Are lower response rates hazardous to your health survey? An analysis of three state telephone health surveys. Health Services Research, 45, 1324–1344.
De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 71, 413–443.
Deutschmann, M., & Häder, S. (2002). Nicht-Eingetragene in CATI-Surveys [Unlisted numbers in CATI surveys]. In Gabler, S., & Häder, S. (Eds.), Telefonstichproben: Methodische Innovationen und Anwendungen in Deutschland (pp. 68–84). Münster: Waxmann.
Dijkstra, W., & Smit, J. H. (2002). Persuading reluctant recipients in telephone surveys. In Groves, R. M., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (Eds.), Survey nonresponse (pp. 121–134). New York, NY: Wiley.
Dillman, D. A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley.
Dillman, D. A. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: Wiley.
Dillman, D. A., Gallegos, J. G., & Frey, J. H. (1976). Reducing refusal rates for telephone interviews. Public Opinion Quarterly, 40, 66–78.
Durrant, G. B., & Steele, F. (2009). Multilevel modelling of refusal and non-contact in household surveys: Evidence from six UK Government surveys. Journal of the Royal Statistical Society, 172, 361–381. https://doi.org/10.1111/j.1467-985X.2008.00565.x
Engel, U., & Pötschke, M. (2004). Nonresponse und Stichprobenqualität: Ausschöpfung in Umfragen der Markt- und Sozialforschung. Frankfurt am Main: Verlagsgruppe Deutscher Fachverlag.
Freeth, S. (2004). The labour force survey. Report of the 2001 census-linked study of survey nonresponse. Working Paper. London: Office for National Statistics.
Fricker, S., & Tourangeau, R. (2011). Examining the relationship between nonresponse propensity and data quality in two national household surveys. Public Opinion Quarterly, 74, 934–955. https://doi.org/10.1093/poq/nfq064
Goldstein, K. M., & Jennings, M. K. (2002). The effect of advance letters on cooperation in a list sample telephone survey. Public Opinion Quarterly, 66, 608–617.
Groves, R. M. (2004). Survey errors and survey costs. Hoboken, NJ: Wiley.
Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646–675.
Groves, R. M., & Couper, M. P. (1998). Nonresponse in household interview surveys. New York, NY: Wiley.
Groves, R. M., Fowler, F., Couper, M., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: Wiley.
Groves, R. M., & Lyberg, L. (2001). An overview of nonresponse issues in telephone surveys. In Groves, R. M., Biemer, P., Lyberg, L., Massey, J. T., Nicholls, W. L., & Waksberg, J. (Eds.), Telephone survey methodology (pp. 191–211). New York, NY: Wiley.
Groves, R. M., & Lyberg, L. (2011). Total survey error: Past, present, and future. Public Opinion Quarterly, 74, 849–879. https://doi.org/10.1093/poq/nfq065
Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72, 167–189.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation. Public Opinion Quarterly, 64, 299–308.
Groves, R. M., & Snowden, C. (1987). The effects of advanced letters on response rates in linked telephone surveys. In Proceedings of the American Statistical Association, Survey Research Methods Section. http://ww2.amstat.org/sections/srms/Proceedings/papers/1987_113.pdf
Guzy, N. (2015). Nonresponse Bias in telefonischen Opferbefragungen. In Schupp, J., & Wolf, C. (Eds.), Nonresponse Bias (pp. 161–207). Wiesbaden: Springer Fachmedien Wiesbaden.
Häder, S. (2015). Stichproben in der Praxis. Mannheim: GESIS Survey Guidelines. doi:10.15465/gesis-sg_014
Hembroff, L. A., Rusz, D., Rafferty, A., McGee, H., & Ehrlicher, N. (2005). The cost-effectiveness of alternative advance mailings in a telephone survey. Public Opinion Quarterly, 69, 232–245.
Holle, R., Hochadel, M., Reitmeir, P., Meisinger, C., & Wichmann, H. E. (2006). Prolonged recruitment efforts in health surveys: Effects on response, costs, and potential bias. Epidemiology, 17, 639–643. https://doi.org/10.1097/01.ede.0000239731.86975.7f
Hox, J., & De Leeuw, E. (2002). The influence of interviewers' attitude and behaviour in household survey nonresponse: An international comparison. In Groves, R. M., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (Eds.), Survey nonresponse (pp. 103–120). New York, NY: Wiley.
Hox, J., De Leeuw, E. D., & Chang, H.-T. (2012). Nonresponse versus measurement error: Are reluctant respondents worth pursuing? Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 113, 5–19. https://doi.org/10.1177/0759106311426987
Kaminska, O., McCutcheon, A. L., & Billiet, J. (2011). Satisficing among reluctant respondents in a cross-national context. Public Opinion Quarterly, 74, 956–984. https://doi.org/10.1093/poq/nfq062
Keeter, S., Miller, C., Kohut, A., Groves, R. M., & Presser, S. (2000). Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly, 64, 125–148.
Link, M. W., & Mokdad, A. (2005). Advance letters as a means of improving respondent cooperation in random digit dial studies. Public Opinion Quarterly, 69, 572–587.
Lynn, P., Clarke, P., Martin, J., & Sturgis, P. (2002). The effects of extended interviewer efforts on nonresponse bias. In Groves, R. M., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (Eds.), Survey nonresponse (pp. 135–147). New York, NY: Wiley.
Olson, K. (2006). Survey participation, nonresponse bias, measurement error bias, and total bias. Public Opinion Quarterly, 70, 737–758.
Otte, G. (2002). Erfahrungen mit zufallsgenerierten Telefonstichproben in drei lokalen Umfragen. In Gabler, S., & Häder, S. (Eds.), Telefonstichproben: Methodische Innovationen und Anwendungen in Deutschland (pp. 85–110). Münster: Waxmann.
Parsons, J., Owens, L., & Skogan, W. (2002). Using advance letters in RDD surveys: Results of two experiments. Survey Research, 33(1), 1–2.
Peytchev, A. (2012). Consequences of survey nonresponse. The ANNALS of the American Academy of Political and Social Science, 645, 88–111. https://doi.org/10.1177/0002716212461748
Sakshaug, J. W., Yan, T., & Tourangeau, R. (2011). Nonresponse error, measurement error, and mode of data collection: Tradeoffs in a multi-mode survey of sensitive and non-sensitive items. Public Opinion Quarterly, 74, 907–933. https://doi.org/10.1093/poq/nfq057
Singer, E., & Presser, S. (2007). Privacy, confidentiality, and respondent burden as factors in telephone survey nonresponse. In Lepkowski, J. M., Tucker, C., Brick, J. M., De Leeuw, E. D., Japec, L., Lavrakas, P. J., & Sangster, R. L. (Eds.), Advances in telephone survey methodology (pp. 449–470). Hoboken, NJ: Wiley.
Singer, E., van Hoewyk, J., & Maher, M. P. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64, 171–188. https://doi.org/10.1086/317761
Smith, F. (1983). On the validity of inferences from non-random samples. Journal of the Royal Statistical Society, Series A, 146, 394–403. doi:10.2307/2981454
Smith, W., Chey, T., Jalaludin, B., Salkeld, G., & Capon, T. (1995). Increasing response rates in telephone surveys: A randomised trial. Journal of Public Health Medicine, 17, 33–38.
Snijkers, G., & De Leeuw, D. (1999). Interviewers' tactics for fighting survey nonresponse. Journal of Official Statistics, 15, 185–198.
Stangeland, P. (1996). The effect of interview methods and response rate on victim survey crime rates. In Australian Institute of Criminology (Ed.), AIC Conference Proceedings (pp. 141–148). http://www.aic.gov.au/media_library/publications/proceedings/27/stangeland.pdf
Stoop, I. (2004). Surveying nonrespondents. Field Methods, 16, 23–54. doi:10.1177/1525822X03259479
Stoop, I. (2005). Nonresponse in sample surveys: The hunt for the last respondent. The Hague: Social and Cultural Planning Office.
Tourangeau, R., Groves, R. M., & Redline, C. D. (2010). Sensitive topics and reluctant respondents: Demonstrating a link between nonresponse bias and measurement error. Public Opinion Quarterly, 74, 413–432. https://doi.org/10.1093/poq/nfq004
Traugott, M. W., Groves, R. M., & Lepkowski, J. M. (1987). Using dual frame designs to reduce nonresponse in telephone surveys. Public Opinion Quarterly, 51, 522–539. https://doi.org/10.1086/269055
Vogl, S., Parsons, J. A., Owens, L. K., & Lavrakas, P. J. (forthcoming). Experiments on the effects of advance letters in surveys. In Lavrakas, P. J., Traugott, M. W., Kennedy, C., Holbrook, A. L., De Leeuw, E. D., & West, B. T. (Eds.), Experimental methods in survey research: Techniques that combine random sampling with random assignment. Hoboken, NJ: Wiley.
von der Lippe, E., Schmich, P., & Lange, C. (2011). Advance letters as a way of reducing non-response in a National Health Telephone Survey: Differences between listed and unlisted numbers. Survey Research Methods, 5, 103–116.
Weidmann, C., Schmich, P., & Schiller-Born, S. (2008). Der Einfluss von Kontrollüberzeugungen der Interviewer auf die Teilnahme an telefonischen Befragungen. Methoden, Daten, Analysen (mda), 2, 125–147.
Woodruff, S. I., Mayer, J. A., & Clapp, E. (2006). Effects of an introductory letter on response rates to a teen/parent telephone health survey. Evaluation Review, 30, 817–823. https://doi.org/10.1177/0193841X05285662
© The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)

International Journal of Public Opinion Research, Oxford University Press. Published: June 1, 2019.