Using Informational Video to Elicit Participation in Online Survey Research: A Randomized Controlled Trial

Survey research is a widely used method for collecting data about a population of interest (Edwards et al., 2002, 2009). A high survey response rate—the proportion of individuals in a sample population participating in a survey—mitigates concerns about nonresponse bias and is thus a significant component of the external validity of survey-based findings (Dillman, Smyth, & Christian, 2009; Groves et al., 2009; Singer, 2006). However, response rates to surveys have been in general decline in recent decades (Curtin, Presser, & Singer, 2005; de Leeuw & de Heer, 2002), and survey data are increasingly collected online (Callegaro et al., 2014; Dillman et al., 2009). Although web-based surveys represent a fast and inexpensive way to collect information from people (Dillman et al., 2009; Wright, 2005), this development amplifies the problem of low response rates: response rates in online surveys are typically lower than those in surveys using other data collection methods (Daikeler, Bošnjak, & Manfreda, 2020; Petchenik & Watermolen, 2011; Tourangeau, Couper, & Steiger, 2003). Identifying strategies for optimizing response rates in online surveys is thus important.

Researchers have examined a range of approaches for increasing survey response rates (Edwards et al., 2002, 2009; Fan & Yan, 2010). For example, research has tested the effects of monetary incentives (Beebe, Davern, McAlpine, Call, & Rockwood, 2005; Göritz, 2006; Göritz & Luthe, 2013a, 2013b, 2013c; Rose, Sidle, & Griffith, 2007; Singer & Ye, 2013) and donation incentives (Deutskens, de Ruyter, Wetzels, & Oosterveld, 2004; Gattellari & Ward, 2001), different forms of text or picture appeal in the survey invitation letter (Dillman, Singer, Clark, & Treat, 1996; Kropf & Blair, 2005; Pedersen & Nielsen, 2016), and the use of survey gaming techniques (Bailey, Pritchard, & Kernohan, 2015; Mavletova, 2015).
Similarly, research finds that both advance notification letters (Aizpurua et al., 2018; de Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007) and personalized greetings in survey invitation letters (Griggs et al., 2018; Heerwegh & Loosveldt, 2006) are effective means of promoting survey response rates. Yet one approach remains understudied: the use of informational video. Embedded in the display or email inviting people to participate in a survey, a video may elicit participation by means of audio and visual cues. As we discuss later, research supports the idea that video may affect survey responses by motivating and priming people to respond (Grant, 2007, 2008; Haan, Ongena, Vannieuwenhuyze, & De Glopper, 2017; Tourangeau, Couper, & Steiger, 2003). However, no research appears to have tested, directly and robustly, the effectiveness of embedding informational video in the survey invitation letter.

This article examines the effects of embedding a link to an informational video in an email inviting people to participate in an online survey. A sample of 28,510 Danish parents of 15,000 children received an email invitation to participate in a nationwide survey about their child and family. Clustered at the individual child level, we randomly assigned each parent couple to either a control group (CG) or one of two treatment groups. Parents in the CG received a typical survey invitation email. Parents in the two treatment groups received the same email—but with one important distinction: the email also included a link to an informational video of a researcher emphasizing the social significance of the survey and the value of the individual's participation. Parents in the two treatment groups (TG1 and TG2) were exposed to different videos. As we elaborate later, the videos were identical in audio content and length, but they differed in visual content. The TG1 video represents a low-cost video.
Survey investigators may themselves produce a video similar to the TG1 video with little effort. In contrast, the TG2 video represents a high-cost video: producing a video similar to the TG2 video requires a substantial time investment and technical know-how, and hiring somebody to produce such a video is substantially costlier. By comparing the effects of the two videos, we seek leverage on questions about the content- and cost-effectiveness of informational videos in survey research. Does imagery seeking to invoke and increase respondents' motivation and perceived significance of survey participation (in this case, footage of children) have greater effects than footage of a person? Does a high-cost video promote survey response rates relative to a cheaper and easier-to-produce video? We examine the intention-to-treat (ITT) effect of the two videos on the survey response rate—that is, differences across the three experiment groups (CG, TG1, and TG2). Moreover, we perform additional analyses among the respondents who decided to participate. For this subpopulation, we examine ITT effects on survey initiation time, survey dropout rate, and survey completion time.

Effects of Video on Survey Participation

Informational video provides audio and visual stimuli that may prime viewers' attitudes and emotions, and thus affect their judgments and decision-making. Similar to the ways that information campaigns have long been recognized to influence individuals' behavior (Weiss & Tschirhart, 1994), embedding an informational video in the invitation to participate in a survey may positively affect an individual's inclination to participate. Many surveys implement various forms of text appeals or visual cues in the survey invitation to motivate people to respond.
A survey invitation may state how the survey connects to a socially salient phenomenon or question, thus linking survey response to something that people care about and find important (Dillman et al., 1996; Kropf & Blair, 2005; Pedersen & Nielsen, 2016). Similarly, a survey invitation may seek to compel people to respond by emphasizing the value of participation to the survey investigators (Gaski, 2004; Porter & Whitcomb, 2005), or by showing the investigators or sponsor behind the survey (Dommeyer & Ruggiero, 1996; Porter & Whitcomb, 2005; Sloan, Kreiger, & James, 1997). As we elaborate below, a video may elicit survey participation through psychological mechanisms similar to those explaining the effectiveness of these more conventional approaches to increasing response rates.

The response rate effect of an informational video depends on its content—the specific audio and visual stimuli in the video. Drawing on insights on human motivation from social psychology research (Grant, 2007, 2008) and social interface theory (Haan et al., 2017; Tourangeau, Couper, & Steiger, 2003), a video can involve at least two types of cues that may promote viewers' interest in the survey and thus entice them to respond. First, a video provides a medium for activating individuals' prosocial motivation—a desire to protect and promote the well-being of others (Grant & Berg, 2011) relating to a basic human need to feel connected to and experience care for others (Deci & Ryan, 2000). By highlighting how a survey connects to a socially salient phenomenon or question, a video can promote the perceived prosocial impact of survey participation, thus increasing individuals' willingness to respond.
By providing audio and visual cues priming the prosocial importance and value of the survey, a video may improve survey response rates and response quality.1 Second, a video provides a medium for “humanizing” the survey—showing the investigator(s) behind the survey (Haan et al., 2017; Tourangeau, Couper, & Steiger, 2003). By seeing and hearing the person or people behind the survey, viewers' immediate responses become more similar to responses in face-to-face human interactions (Tourangeau, Couper, & Steiger, 2003)—that is, viewers become more likely to accept the survey invitation. Psychological mechanisms referring to human compassion, sympathy, and a basic need for relatedness (Deci & Ryan, 2000) explain why “humanizing cues” may have positive effects on survey response rates and quality.

Data

We conducted our study as part of a nationwide, longitudinal survey examining questions related to child development and well-being. Our sample population is Danish citizens who are parents of an infant or toddler. Our sample comprises 28,510 parents, each of whom received an invitation to participate in a nationwide online (multi-device) survey about their child and family in November or December 2017. The survey was administered by Statistics Denmark, a government institution under the Danish Ministry of Economic Affairs and the Interior.2 The parents received the survey invitation in e-Boks, a digital mailbox system providing all Danish citizens with a private email account tied to their social security number. Danish public agencies use e-Boks as a secure platform for digital communication with citizens (see www.e-boks.com). Sample parents were given 45 days to respond. Nonresponding parents received up to two survey reminders (after 7 and 14 days). The study was approved by the Internal Review Board at VIVE—The Danish Center for Social Science Research, and preregistered at the AEA RCT registry (AEARCTR-0002541).
In line with Danish guidelines for ethics in social science research, the first page of the survey provided participants with informed-consent information.

Design

Our study is a three-arm, parallel cluster randomized controlled trial. With the individual child constituting the randomization unit, each child's parents receive identical treatment.3 Each parent couple was randomly assigned to one of three experimental groups: a CG or one of two treatment groups (TG1 or TG2). Parents in the CG received a typical survey invitation email. The email subject line was “Personal letter from Statistics Denmark,” the survey invitation addressed the respondent by name, the invitation text was 280 words long, and the survey link (URL) was placed at the middle of the letter. All of these features are in line with research recommendations suggesting that survey invitation letters should have personalized greetings (Griggs et al., 2018; Heerwegh & Loosveldt, 2006), a subject line with a request from an authority figure, longer text, and a URL placed away from the top of the invitation (Kaplowitz, Lupi, Couper, & Thorp, 2012). The full survey invitation appears in Supplementary Figure S1. Parents in the two treatment groups received the same email as those in the CG, but the email also included a link to an informational video. Clicking the link opened a different video for parents in TG1 and TG2. Both videos had the same length (2 min, 17 s) and identical audio content. The two videos offered distinct visual stimuli: for TG1 parents, the video showed a researcher speaking directly to the viewer in front of a white background. The TG2 video contained excerpts of the same footage as the TG1 video, but its main part showed various clips of children (looking directly at the viewer, interacting with each other, playing with toys, etc.).
In terms of audio content, the researcher presented herself (to “humanize” the survey) and briefly elaborated on the survey content (questions about the child and the family). However, the main focus was on the purpose of the survey and the importance of survey participation. Priming the prosocial value of the survey, the audio track used language such as “Your help is needed for gaining insights into how to best ensure a good and safe upbringing for small children,” “Knowledge on why some children are not thriving as well as others is needed,” and “We really hope for your participation. The answers we receive provide insights into how to best support children and their families in Denmark.” A test group of parents viewed and commented on both videos. Everyone provided positive feedback, especially for the TG2 video, thus supporting our a priori theoretical expectations. The videos are available at https://www.youtube.com/embed/UD_PdmLlo5E (TG1) and https://www.youtube.com/embed/1o72L2zkYls (TG2). Both videos were directed, recorded, and edited by a communication firm specializing in visual communication. The total production cost was approximately $8,500 (55,213 DKK).4 Produced on its own, the TG1 video would have cost approximately $3,650 (23,800 DKK) and the TG2 video approximately $8,000 (51,800 DKK). The distinction between the two videos provides leverage on questions about the content- and cost-effectiveness of informational videos in survey research. Based on insights from social psychology (Grant, 2007, 2008), the additional exposure to clips of children in the TG2 video may prime the perceived prosocial importance and value of the survey to a greater extent than showing only the researcher behind the survey, thus resulting in a higher survey response rate for TG2 than for TG1. Our setup allows us to test this notion and, in the case of differences, estimate the cost-effectiveness of the videos.
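The clustered assignment procedure described in the Design section (randomization at the child level, with both parents of a child receiving the same condition) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, fixed seed, and parent/child ID scheme are assumptions for the example.

```python
import random

def assign_arms(child_ids, arms=("CG", "TG1", "TG2"), seed=2017):
    """Randomly assign each child (the randomization unit) to one arm."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    return {child: rng.choice(arms) for child in child_ids}

# Each parent couple inherits the child's assignment, so the parents of a
# given child always receive identical treatment (clustered assignment).
child_arm = assign_arms(range(15000))
parent_arm = {(parent, child): arm
              for child, arm in child_arm.items()
              for parent in (f"P1-{child}", f"P2-{child}")}
```

With simple (unblocked) randomization like this, arm sizes vary slightly by chance, consistent with the unequal group sizes reported later in Table 1 (9,618 / 9,392 / 9,500).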
Statistics Denmark did not have the technical capacity to connect each treatment respondent to data on whether that individual clicked the video link and viewed the video. Such data would have allowed us to supplement the estimated ITT effects with estimates of treatment-on-the-treated effects, providing added leverage on the potential cause of our findings. However, we were able to track the view count for each video. The survey response rate is our primary outcome measure, operationalized as a binary variable signifying whether the sample parent accessed the survey and answered one or more items. Moreover, we use the subsample of parents who accessed the survey and answered one or more items to examine effects on three secondary outcomes: (a) survey initiation time, (b) survey dropout rate, and (c) survey completion time. Survey initiation time and survey completion time are both captured using numerical variables (the number of days from survey invitation to survey response, and the number of minutes from start to completion of the survey). The survey dropout rate is captured using a binary variable signifying completion of the survey.

Statistical Analysis

Descriptive sample statistics are estimated and reported using percentages, counts, and chi-square tests. Descriptive statistics for the outcome measures are computed using percentages (for categorical variables) and means and standard deviations (for numerical variables). We examine the effects of the two video treatments on the binary outcomes—the survey response rate and the dropout rate—using linear probability model (LPM) regression. Given its advantages in terms of interpretation and usefulness (Deke, 2014), we use the LPM instead of logit regression.5 For robustness testing, however, we show how the estimated linear probability p-values align with the p-values from conventional logit regression.
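The binary-outcome analysis just described can be sketched as follows. This is a minimal illustration on simulated data, not the authors' code: the variable names and the use of the `statsmodels` formula interface are assumptions for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: one row per parent, two parents clustered within each child.
rng = np.random.default_rng(0)
n_children = 200
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), 2),
    "arm": np.repeat(rng.choice(["CG", "TG1", "TG2"], n_children), 2),
})
df["responded"] = rng.integers(0, 2, len(df)) * 100  # binary outcome rescaled to 0/100

# Linear probability model: OLS on the rescaled binary outcome, with
# standard errors corrected for clustering of parents within children.
lpm = smf.ols("responded ~ C(arm, Treatment('CG'))", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["child_id"]})
print(lpm.params)  # TG1 and TG2 coefficients are ITT effects vs. CG, in percentage points
```

The same `fit` call with `smf.logit` in place of `smf.ols` would give the logit p-values used here as a robustness check.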
For numerical outcomes—survey initiation time and survey completion time—we use ordinary least squares (OLS) regression. As a robustness check, given potential concerns about the normal distribution of the time scales, we show how the estimated OLS p-values align with nonparametric p-values from ordered logit regression. In all model specifications, standard errors are corrected for clustering of parents by children.

Results

Descriptive Statistics

Table 1 shows descriptive sample statistics (full sample and by experiment group). Chi-square tests for differences in means suggest that the three experiment groups balance well on observable characteristics. Only in terms of child gender do we observe slight imbalance (a higher proportion of girls in TG2 relative to CG and TG1 at p = .04).

Table 1. Descriptive Sample Statistics, Full Sample and by Experiment Group

                                  Full Sample       CG              TG1             TG2            p (χ2)
                                  %      n          %      n        %      n        %      n
Trial month                                                                                        0.13
  November                        49.9   14,220     50.72  4,878    49.5   4,645    49.4   4,697
  December                        50.1   14,290     49.28  4,740    50.5   4,747    50.6   4,803
Child age group                                                                                    0.24
  9 months                        33.2   9,472      33.37  3,210    32.7   3,070    33.6   3,192
  2 years                         33.4   9,519      32.69  3,144    34.1   3,203    33.4   3,172
  3 years                         33.4   9,519      33.94  3,264    33.2   3,119    33.0   3,136
Child status                                                                                       0.86
  Not “at risk”                   41.6   11,858     41.43  3,985    41.8   3,927    41.5   3,946
  “At risk”                       58.4   16,652     58.57  5,633    58.2   5,465    58.5   5,554
Child, gender                                                                                      0.04
  Boy                             51.5   14,685     51.87  4,989    52.2   4,904    50.4   4,792
  Girl                            48.5   13,825     48.13  4,629    47.8   4,488    49.6   4,708
Parent, gender                                                                                     0.95
  Female                          52.4   14,943     52.33  5,033    52.4   4,918    52.6   4,992
  Male                            47.6   13,567     47.67  4,585    47.6   4,474    47.5   4,508
Parent, age                                                                                        0.45
  Below 25                        4.3    1,229      4.08   392      4.5    420      4.4    417
  Age 25–29                       17.6   5,023      17.74  1,706    17.1   1,605    18.0   1,712
  Age 30–34                       32.0   9,130      32.19  3,096    31.8   2,989    32.1   3,045
  Age 35+                         46.1   13,128     46     4,424    46.6   4,378    45.5   4,326
Parent, education                                                                                  0.27
  Primary school                  22.2   6,111      22.89  2,129    21.5   1,949    22.1   2,033
  High school/technical college   35.9   9,917      34.97  3,252    37.0   3,358    35.9   3,307
  Short tertiary                  5.2    1,429      5.1    471      5.4    486      5.1    472
  Tertiary                        19.4   5,342      19.5   1,811    19.3   1,751    19.3   1,780
  Academic degree                 15.6   4,305      15.8   1,468    15.3   1,388    15.7   1,449
  PhD                             1.8    488        1.8    169      1.7    150      1.8    169
Parent, employment status                                                                          0.35
  Self-employed                   3.7    1,021      4.1    376      3.4    310      3.7    335
  Salaried worker                 69.0   18,909     69.2   6,385    69.2   6,252    68.6   6,272
  Unemployed or on leave          19.7   5,402      19.4   1,788    19.5   1,764    20.2   1,850
  Student                         6.3    1,723      6.2    568      6.5    588      6.2    567
  Retired                         1.3    355        1.3    115      1.3    117      1.3    123
N                                 28,510            9,618           9,392           9,500

Note. Percentages, counts, and chi-squared tests.
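The balance checks reported in Table 1 can be reproduced from the table's cell counts. A minimal sketch for the trial-month row follows; the use of `scipy` is an assumption about tooling, not the authors' software.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts of November/December invitations per experiment group (from Table 1).
counts = np.array([
    [4878, 4645, 4697],   # November: CG, TG1, TG2
    [4740, 4747, 4803],   # December: CG, TG1, TG2
])

# Chi-square test of independence: does trial month differ across groups?
chi2, p, dof, expected = chi2_contingency(counts)
print(round(p, 2))  # → 0.13, matching the p-value reported in Table 1
```

Repeating this for each characteristic (a 2×3, 3×3, … contingency table per row block) yields the p-value column of Table 1.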
Table 2 shows descriptive statistics for the outcome measures (full sample and by experiment group). Responses from 12,375 parents were obtained, yielding a survey response rate of 43.41%. Among the 12,375 responders, 14.59% initiated but did not complete the survey, the average time from survey invitation send-out to survey response was about 10 days, and the average time spent on responding was about 28 min. Chi-square tests comparing the 12,375 responders with the 16,135 nonresponders on the characteristics reported in Table 1 show that responding parents are more likely to be female, older, and more educated, to have a younger child and a child not “at risk,” and are less likely to be self-employed, unemployed or on leave, or retired (at p < .05). These differences do not compromise the internal validity of our study, but our results should be interpreted in this light. Our tracking of the number of views for each video reveals that about one fifth of the treatment respondents clicked the video link (TG1: 1,807 of 9,392 = 19.24%; TG2: 2,073 of 9,500 = 21.82%). We return to the implications of the view counts for the interpretation of our findings.

Table 2. Descriptive Statistics for Outcome Measures, Full Sample and by Experiment Group

                           Full Sample        CG               TG1              TG2
                           %       n          %       n        %       n        %       n
Response rate
  No response              56.59   16,135     56.23   5,408    56.98   5,352    56.58   5,375
  Response                 43.41   12,375     43.77   4,210    43.02   4,040    43.42   4,125
Dropout rate
  No dropout               85.40   10,569     84.54   3,559    86.21   3,483    85.50   3,527
  Dropout                  14.59   1,806      15.46   651      13.79   557      14.50   598

                           Mean (SD) [Min/Max] n
Initiation time (days)     10.18 (8.86) [0/45] 12,375     10.24 (8.86) [0/45] 4,210     10.01 (8.76) [0/45] 4,040     10.28 (8.97) [0/45] 4,125
Completion time (minutes)  27.87 (20.56) [0/150] 12,375   27.85 (20.40) [0/150] 4,210   27.92 (20.51) [0/150] 4,040   27.84 (20.78) [0/150] 4,125

Note. Percentages, counts, means, standard deviations (in parentheses), and minimum and maximum values (in brackets).

Treatment Effects

Table 3, model 1, shows the ITT effect of the two video treatments on the survey response rate. Assignment to the video treatment does not affect the survey response rate: we find no evidence of any difference in the survey response rate between the three experimental groups (TG1 vs. CG, TG2 vs. CG, and TG2 vs. TG1) at p < .05.

Table 3. Effects of the Video Treatments on the Survey Response Rate, Initiation Time, Dropout Rate, and Completion Time

              Survey Response Rate (Model 1)   Initiation Time (Model 2)    Dropout Rate (Model 3)       Completion Time (Model 4)
              β (SE) [95% CI]     p (logit)    β (SE) [95% CI]  p (ologit)  β (SE) [95% CI]   p (logit)  β (SE) [95% CI]  p (ologit)
TG1           −0.76 (0.77)        0.33 (0.33)  −0.22 (0.20)     0.27 (0.29) −1.68* (0.81)     0.04 (0.04) 0.06 (0.47)     0.89 (0.97)
              [−2.27, 0.75]                    [−0.62, 0.17]                [−3.26, −0.10]                [−0.86, 0.98]
TG2           −0.34 (0.77)        0.65 (0.65)  0.04 (0.20)      0.84 (0.91) −0.97 (0.81)      0.23 (0.23) −0.01 (0.46)    0.99 (0.93)
              [−1.85, 1.15]                    [−0.35, 0.44]                [−2.55, 0.62]                 [−0.92, 0.91]
TG2 vs. TG1   0.41 (0.77)         0.60 (0.60)  0.26 (0.20)      0.19 (0.34) 0.71 (0.80)       0.37 (0.37) −0.07 (0.48)    0.88 (0.95)
              [−1.10, 1.92]                    [−0.13, 0.66]                [−0.85, 2.27]                 [−1.01, 0.86]
n             28,510                           12,375                       12,375                        12,375

Note. Betas, standard errors, confidence intervals, and p-values. Baseline: CG. The variables for survey response rate and dropout rate are rescaled to percentages (0, 100) to facilitate interpretation. The p-value columns show two values: the p-value for the reported beta and standard error, and (in parentheses) the p-value from robustness testing using logit/ordered logit regression instead of linear probability model (LPM)/ordinary least squares (OLS) regression. Confirming the robustness of our estimates, the logit/ordered logit p-values are near-identical to the LPM/OLS p-values.

* p < .05.
Did the videos affect the behavior of the subsample of parents who provided a complete or partial response? Table 3, models 2–4, shows the ITT effect of the two video treatments on survey initiation time, survey dropout rate, and survey completion time. We find no evidence of significant differences in survey initiation time or survey completion time across the three experimental groups. The dropout rate appears lower for TG1 relative to the CG (at p < .05). However, the reduction in dropout amounts to only 1.68 percentage points and is thus small in size. Moreover, given our sample size and the number of regressions conducted, the p-value of .04 calls for caution in extrapolating this result.

Heterogeneous Effects

The preceding analyses suggest that use of informational video is an ineffective strategy for increasing survey response rates. Still, the video treatments could have effects for particular subpopulations of the sample parents.6 We test for heterogeneous effects by re-estimating our survey response rate model specification (Table 3, model 1) stratified by demographic and socioeconomic characteristics: child gender, parent gender, child age, parent age, “at risk” status, educational attainment, and employment status. We also estimate the difference in treatment effects for all subgroups. The results reveal no heterogeneous effects for any subgroup (at p < .05) (see Supplementary Table S1).

Conclusion

What is the survey response rate effect of embedding a link to an informational video in the email inviting people to participate in a survey? While theory suggests that such use of video may promote survey response rates, this article finds no support for this notion. Using a three-arm, parallel cluster randomized controlled trial among a sample of 28,510 Danish citizens, we do not identify any differences in survey response rates across experimental conditions.
The null result is robust across sample subgroups. Similarly, we find no significant effects on survey initiation time or survey completion time among those who provided a complete or partial response. Although we detect a small difference in dropout rates, the effect is small in size and of questionable precision (p = .04) given our sample size and the number of regressions conducted. In sum, this article offers no clear evidence of any positive or negative effects of embedding a link to an informational video in the email inviting people to participate in a survey. What may explain this null finding? The relatively modest video view counts (one-fifth of the treatment respondents clicked the video link) point to what we theorize is the most likely explanation: Respondents' decision to participate is often made upon reading the email invitation letter (or prior to that). Nonresponders may have tended not to click the video link, whereas some responders may have answered the survey without watching the video. We also emphasize how the features of our research design frame the inference of our (null) findings. First, we cannot dismiss the possibility that our findings are a product of ineffective video design, that is, that the content of the videos failed to substantively appeal to and motivate the viewers. Relatedly, all email invitation letters, including those to the treated parents, briefly described the overall aim of the survey—information that may have diminished the motivational effects of the videos. Having said that, we believe that omitting this information from the invitation letter would artificially diverge from typical survey procedures. Second, all email invitation letters were sent to e-Boks—an official digital mailbox system used by public agencies for digital communication with citizens.
Moreover, two well-known and reputable organizations were listed as the survey sponsors, the email invitations were personalized, and survey completion involved participation in a cash prize lottery. These factors may have positively affected the parents' inclination to respond, thereby diminishing the effects of the video treatments. Third, we emphasize that our findings are based on ITT estimates: We identify the effects of embedding a link to an informational video in the survey invitation letter, not the effects of watching an informational video. Moreover, we recognize that embedding the video directly in the survey invitation letter (rather than as a link) could have a positive effect. In light of these limitations, we encourage more research on the use of informational video in online survey invitations. Such efforts should precede any full dismissal of the use of informational video for promoting survey response rates. Similarly, future research should examine effects on other outcomes. For example, informational videos may have positive effects on survey response quality, and use of informational video may be viewed as appropriate or even expected by certain target groups. Until then, however, our findings call for caution and restraint. Informational video in online survey invitations imposes video production costs that may not translate into net benefits in terms of survey participation and responses. Resources may be better spent elsewhere.

Supplementary Data

Supplementary data are available at IJPOR online.

Funding

This work was supported by the Novo Nordisk Foundation (grant number FF17SH0026396).

Conflict of Interest

Mogens Jin Pedersen, PhD, is an associate professor in the Department of Political Science at University of Copenhagen, Denmark, and an affiliate senior researcher at VIVE—The Danish Center for Social Science Research. Anders Bo Bojesen is an analyst at VIVE—The Danish Center for Social Science Research.
Signe Boe Rayce is a senior researcher at VIVE—The Danish Center for Social Science Research. Maiken Pontoppidan is a senior researcher at VIVE—The Danish Center for Social Science Research.

Footnotes

1. In particular, a video may involve audio and visual stimuli resonating with normative and affective motives for engaging in prosocial behavior (Knoke & Wright-Isak, 1982). For example, a video may provide the narrative of a societal issue that we need to understand and learn more about, thus appealing to a person's normative prosocial motives to participate (what one thinks one ought to do in the public interest). Similarly, a video may provide the narrative of a group of people in need, thus appealing to a person's affective prosocial motives to participate (what one is compelled to do out of feelings of compassion or sympathy).

2. In each of November and December 2017, Statistics Denmark pulled a sample of 7,500 children from their registers on the full Danish population having turned 9, 24, or 36 months in the previous month (about 2,500 children in each child age group). Each sample involved oversampling of “at risk” children: Each child age group comprised about 1,500 “at risk” children and a random sample of about 1,000 children from the remaining population. “At risk” children were defined by whether the child's mother satisfied one or more of the following four criteria at the time of the child's birth: (a) age 21 or less, (b) lower secondary education as the highest completed education, (c) fit to work but unemployed 1, 2, and/or 3 years before the child's date of birth, or (d) not cohabiting with the child's father. The 28,510 sample parents represent the parents of the 15,000 children whom Statistics Denmark was able to identify as viable for survey purposes (besides deceased parents, some mothers and fathers were missing (“unknown”) in the Statistics Denmark registers; parents who no longer saw their child were also excluded from our examination).

3. Randomization was stratified by trial month (November and December 2017), child age group (9, 24, and 36 months), and child status (“at risk” and “not at risk”).

4. Denmark has a nondeductible value added tax (VAT) of 25%. Costs exclude VAT. The researcher shown and speaking in the videos was the principal investigator of the research project. The child extras in the TG2 video were unpaid.

5. In contrast to logistic regression, the LPM produces parameter estimates that are directly interpretable as the mean marginal effect of covariates on the outcome. Moreover, the LPM yields unbiased estimates when treatment status is binary or categorical (meaning that functional form is irrelevant) and randomly assigned (Deke, 2014).

6. For example, gender research finds evidence for stereotype beliefs that females are more emotional and caring than males (Haines, Deaux, & Lofaro, 2016), and mothers are more likely than fathers to be the primary child caregiver in the family (Galinsky, Aumann, & Bond, 2013). Therefore, the TG2 video could prime the female parents' inclination to respond, and thus be an effective means for eliciting survey response among females.

References

Aizpurua, E., Park, K. H., Avery, M., Wittrock, J., Muilenburg, R., & Losch, M. E. (2018). The impact of advance letters on cellphone responses in a statewide dual-frame survey. Survey Practice, 11, 1–9. doi:10.29115/SP-2018-0011

Bailey, P., Pritchard, G., & Kernohan, H. (2015). Gamification in market research: Increasing enjoyment, participant engagement and richness of data, but what of data validity? International Journal of Market Research, 57, 17–28.

Beebe, T. J., Davern, M. E., McAlpine, D. D., Call, K. T., & Rockwood, T. H. (2005). Increasing response rates in a survey of Medicaid enrollees: The effect of a prepaid monetary incentive and mixed modes (mail and telephone). Medical Care, 43, 411–414.

Callegaro, M., Baker, R. P., Bethlehem, J., Göritz, A. S., Krosnick, J. A., & Lavrakas, P. J. (2014). Online panel research: A data quality perspective. Hoboken, NJ: Wiley.

Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69, 87–98.

Daikeler, J., Bošnjak, M., & Manfreda, K. L. (2020). Web versus other survey modes: An updated and extended meta-analysis comparing response rates. Journal of Survey Statistics and Methodology, 8, 513–539. doi:10.1093/jssam/smz008

Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11, 227–268.

Deke, J. (2014). Using the linear probability model to estimate impacts on binary outcomes in randomized controlled trials. Evaluation Technical Assistance Brief for OAH & ACYF Teenage Pregnancy Prevention Grantees, 6, 1–5.

de Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 71, 413–443.

de Leeuw, E., & de Heer, W. (2002). Trends in household survey nonresponse: A longitudinal and international comparison. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 41–54). New York, NY: John Wiley.

Deutskens, E., Ruyter, K. D., Wetzels, M., & Oosterveld, P. (2004). Response rate and response quality of internet-based surveys: An experimental study. Marketing Letters, 15, 21–36.

Dillman, D. A., Singer, E., Clark, J. R., & Treat, J. B. (1996). Effects of benefits appeals, mandatory appeals, and variations in statements of confidentiality on completion rates for census questionnaires. Public Opinion Quarterly, 60, 376–389.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Mail and internet surveys: The tailored design method (3rd ed.). New York, NY: John Wiley.

Dommeyer, C. J., & Ruggiero, L. A. (1996). The effects of a photograph on mail survey response. Marketing Bulletin, 7, 51–77.

Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Pratap, S., Wentz, R., & Kwan, I. (2002). Increasing response rates to postal questionnaires: Systematic review. British Medical Journal, 324, 1183–1195.

Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Wentz, R., Kwan, I., & Pratap, S. (2009). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews, 2009, Art. No.: MR000008. doi:10.1002/14651858.MR000008.pub4

Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26, 132–139.

Galinsky, E., Aumann, K., & Bond, J. T. (2013). Times are changing: Gender and generation at work and at home in the USA. In S. Poelmans, J. H. Greenhaus, & M. Las Heras Maestro (Eds.), Expanding the boundaries of work-family research (pp. 279–296). New York, NY: Palgrave Macmillan.

Gaski, J. F. (2004). Efficacy of a mail survey appeal for a dissertation. Perceptual & Motor Skills, 99, 1295–1298.

Gattellari, M., & Ward, J. E. (2001). Will donations to their learned college increase surgeons’ participation in surveys? A randomized trial. Journal of Clinical Epidemiology, 54, 645–649.

Göritz, A. S. (2006). Cash lotteries as incentives in online panels. Social Science Computer Review, 24, 445–459.

Göritz, A. S., & Luthe, S. C. (2013a). How do lotteries and study results influence response behavior in online panels? Social Science Computer Review, 31, 371–385.

Göritz, A. S., & Luthe, S. C. (2013b). Lotteries and study results in market research online panels. International Journal of Market Research, 55, 611–626.

Göritz, A. S., & Luthe, S. C. (2013c). Effects of lotteries on response behavior in online panels. Field Methods, 25, 219–237.

Grant, A. M. (2007). Relational job design and the motivation to make a prosocial difference. Academy of Management Review, 32, 393–417.

Grant, A. M. (2008). The significance of task significance: Job performance effects, relational mechanisms, and boundary conditions. Journal of Applied Psychology, 93, 108–124.

Grant, A. M., & Berg, J. M. (2011). Prosocial motivation at work: When, why, and how making a difference makes a difference. In G. M. Spreitzer & K. S. Cameron (Eds.), The Oxford handbook of positive organizational scholarship (pp. 28–44). New York, NY: Oxford University Press.

Griggs, A. K., Berzofsky, M. E., Shook-Sa, B. E., Lindquist, C. H., Enders, K. P., Krebs, C. P., & Langton, L. (2018). The impact of greeting personalization on prevalence estimates in a survey of sexual assault victimization. Public Opinion Quarterly, 82, 366–378.

Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.

Haan, M., Ongena, Y. P., Vannieuwenhuyze, J. T. A., & De Glopper, K. (2017). Response behavior in video-web survey: A mode comparison study. Journal of Survey Statistics and Methodology, 5, 48–69.

Haines, E. L., Deaux, K., & Lofaro, N. (2016). The times they are a-changing … or are they not? A comparison of gender stereotypes, 1983–2014. Psychology of Women Quarterly, 40, 353–363.

Heerwegh, D., & Loosveldt, G. (2006). Personalizing e-mail contacts: Its influence on web survey response rate and social desirability. International Journal of Public Opinion Research, 19, 258–268.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Knoke, D., & Wright-Isak, C. (1982). Individual motives and organizational incentive systems. Research in the Sociology of Organizations, 1, 209–254.

Kropf, M. E., & Blair, J. (2005). Eliciting survey cooperation: Incentives, self-interest, and norms of cooperation. Evaluation Review, 29, 559–575.

Mavletova, A. (2015). A gamification effect in longitudinal web surveys among children and adolescents. International Journal of Market Research, 57, 413–438.

Pedersen, M. J., & Nielsen, C. V. (2016). Improving survey response rates in online panels: Effects of low-cost incentives and cost-free text appeal interventions. Social Science Computer Review, 34, 229–243.

Petchenik, J., & Watermolen, D. J. (2011). A cautionary note on using the internet to survey recent hunter education graduates. Human Dimensions of Wildlife, 16, 216–218.

Porter, S. R., & Whitcomb, M. E. (2005). E-mail subject lines and their effect on web survey viewing and response. Social Science Computer Review, 23, 280–287.

Rose, D. S., Sidle, S. D., & Griffith, K. H. (2007). A penny for your thoughts: Monetary incentives improve response rates for company-sponsored employee surveys. Organizational Research Methods, 10, 225–240.

Singer, E. (2006). Introduction: Nonresponse bias in household surveys. Public Opinion Quarterly, 70, 637–645.

Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. Annals of the American Academy of Political and Social Science, 645, 112–141.

Sloan, M., Kreiger, N., & James, B. (1997). Improving response rates among doctors: Randomised trial. The BMJ, 315, 1136.

Tourangeau, R., Couper, M. P., & Steiger, D. M. (2003). Humanizing self-administered surveys: Experiments on social pressure in web and IVR surveys. Computers in Human Behavior, 19, 1–24.

Weiss, J. A., & Tschirhart, M. (1994). Public information campaigns as policy instruments. Journal of Policy Analysis and Management, 13, 82–119.

Wright, K. B. (2005). Researching internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. Journal of Computer-Mediated Communication, 10. doi:10.1111/j.1083-6101.2005.tb00259.x

© The Author(s) 2020. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)

Using Informational Video to Elicit Participation in Online Survey Research: A Randomized Controlled Trial



Publisher
Oxford University Press
ISSN
0954-2892
eISSN
1471-6909
DOI
10.1093/ijpor/edaa023

Abstract

Survey research is a widely used method for collecting data about a population of interest (Edwards et al., 2002, 2009). A high survey response rate—the proportion of individuals in a sample population participating in a survey—mitigates concerns about nonresponse bias and is thus a significant component of the external validity of survey-based findings (Dillman, Smyth, & Christian, 2009; Groves et al., 2009; Singer, 2006). However, response rates to surveys have been in general decline in recent decades (Curtin, Presser, & Singer, 2005; de Leeuw & de Heer, 2002), and survey data are increasingly collected online (Callegaro et al., 2014; Dillman et al., 2009). Although web-based surveys represent a fast and inexpensive way to collect information from people (Dillman et al., 2009; Wright, 2005), this development amplifies the problem of low response rates. Response rates in online surveys are typically lower than those in surveys using other data collection methods (Daikeler, Bošnjak, & Manfreda, 2020; Petchenik & Watermolen, 2011; Tourangeau, Couper, & Steiger, 2003). Identifying strategies for optimizing response rates in online surveys is thus important. Researchers have examined a range of approaches for increasing survey response rates (Edwards et al., 2002, 2009; Fan & Yan, 2010). For example, research has tested the effects of monetary incentives (Beebe, Davern, McAlpine, Call, & Rockwood, 2005; Göritz, 2006; Göritz & Luthe, 2013a, 2013b, 2013c; Rose, Sidle, & Griffith, 2007; Singer & Ye, 2013) and donation incentives (Deutskens, Ruyter, Wetzels, & Oosterveld, 2004; Gattellari & Ward, 2001), different forms of text or picture appeal in the survey invitation letter (Dillman, Singer, Clark, & Treat, 1996; Kropf & Blair, 2005; Pedersen & Nielsen, 2016), and use of survey gaming techniques (Bailey, Pritchard, & Kernohan, 2015; Mavletova, 2015).
Similarly, research finds that both advance notification letters (Aizpurua et al., 2018; de Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007) and personalized greetings in survey invitation letters (Griggs et al., 2018; Heerwegh & Loosveldt, 2006) are effective means for promoting survey response rates. Yet one approach remains understudied: use of informational video. Embedded in the display or email inviting people to participate in a survey, a video may elicit participation by means of audio and visual cues. As we discuss later, research supports the idea that video may affect survey responses by motivating and priming people to respond (Grant, 2007, 2008; Haan, Ongena, Vannieuwenhuyze, & De Glopper, 2017; Tourangeau, Couper, & Steiger, 2003). However, no research appears to have tested, directly and robustly, the effectiveness of embedding informational video in the survey invitation letter. This article examines the effects of embedding a link to an informational video in an email inviting people to participate in an online survey. A sample of 28,510 Danish parents of 15,000 children received an email invitation to participate in a nationwide survey about their child and family. With randomization clustered at the individual child level, we randomly assigned each parent couple to either a control group (CG) or one of two treatment groups. Parents in the CG received a typical survey invitation email. Parents in the two treatment groups received the same email—but with one important distinction: The email also included a link to an informational video of a researcher emphasizing the social significance of the survey and the value of the individual's participation. Parents in the two treatment groups (TG1 and TG2) were exposed to different videos. As we elaborate later, the videos had identical audio content and length but differed in visual content. The TG1 video represents a low-cost video.
Survey investigators may themselves produce a video similar to the TG1 video with little effort. In contrast, the TG2 video represents a high-cost video. Producing a video similar to the TG2 video requires a substantial time investment and technical know-how, and hiring somebody to produce such a video is substantially costlier. By comparing the effects of the two videos, we seek leverage on questions about the content- and cost-effectiveness of informational videos in survey research. Does imagery seeking to invoke and increase respondents' motivation and perceived significance of survey participation (in this case, footage of children) have greater effects than footage of a person? Does a high-cost video promote survey response rates relative to a cheaper and easier-to-produce video? We examine the intention-to-treat (ITT) effect of the two videos on the survey response rate—that is, differences across the three experimental groups (CG, TG1, and TG2). Moreover, we perform additional analyses among the respondents who decided to participate. For this subpopulation, we examine ITT effects on survey initiation time, survey dropout rate, and survey completion time.

Effects of Video on Survey Participation

Informational video provides audio and visual stimuli that may prime viewers' attitudes and emotions, and thus affect their judgments and decision-making. Similar to the ways that information campaigns have long been recognized to influence individuals' behavior (Weiss & Tschirhart, 1994), embedding an informational video in the invitation to participate in a survey may positively affect an individual's inclination to participate. Many surveys implement various forms of text appeals or visual cues in the survey invitation to motivate people to respond.
A survey invitation may state how the survey connects to a socially salient phenomenon or question, thus linking survey response to something that people care about and find important (Dillman et al., 1996; Kropf & Blair, 2005; Pedersen & Nielsen, 2016). Similarly, a survey invitation may seek to compel people to respond by emphasizing the value of participation to the survey investigators (Gaski, 2004; Porter & Whitcomb, 2005), or by showing the investigators or sponsor behind the survey (Dommeyer & Ruggiero, 1996; Porter & Whitcomb, 2005; Sloan, Kreiger, & James, 1997). As we elaborate below, a video may elicit survey participation through psychological mechanisms similar to those explaining the effectiveness of these more conventional approaches for increasing response rates. The response rate effect of an informational video depends on its content—the specific audio and visual stimuli in the video. Drawing on insights on human motivation from social psychology research (Grant, 2007, 2008) and social interface theory (Haan et al., 2017; Tourangeau, Couper, & Steiger, 2003), we identify at least two types of cues by which a video may promote viewers' interest in the survey and thus entice them to respond. First, a video provides a medium for activating individuals' prosocial motivation—a desire to protect and promote the well-being of others (Grant & Berg, 2011) relating to a basic human need for feeling connected to and experiencing care for others (Deci & Ryan, 2000). By highlighting how a survey connects to a socially salient phenomenon or question, a video can promote the perceived prosocial impact of survey participation, thus increasing individuals' willingness to respond.
By providing audio and visual cues priming the prosocial importance and value of the survey, a video may improve survey response rates and response quality.1 Second, a video provides a medium for “humanizing” the survey—showing the investigator(s) behind the survey (Haan et al., 2017; Tourangeau, Couper, & Steiger, 2003). By seeing and hearing the person or people behind the survey, viewers' immediate responses become more similar to responses in face-to-face human interactions (Tourangeau, Couper, & Steiger, 2003)—that is, viewers become more likely to accept the survey invitation. Psychological mechanisms relating to human compassion, sympathy, and a basic need for relatedness (Deci & Ryan, 2000) explain why “humanizing cues” may have positive effects on survey response rates and quality.

Data

We conducted our study as part of a nationwide, longitudinal survey examining questions related to child development and well-being. Our sample population comprises Danish citizens who are parents of an infant or toddler. Our sample involves 28,510 parents, each receiving an invitation to participate in a nationwide online survey (multi-device) about their child and family in November or December 2017. The survey was administered by Statistics Denmark, a government institution under the Danish Ministry of Economic Affairs and the Interior.2 The parents received the survey invitation in e-Boks, a digital mailbox system providing all Danish citizens with a private email account tied to their social security number. Danish public agencies use e-Boks as a secure platform for digital communication with citizens (see www.e-boks.com). Sample parents were given 45 days to respond. Nonresponding parents received up to two survey reminders (after 7 and 14 days). The study was approved by the Internal Review Board at VIVE—The Danish Center for Social Science Research, and preregistered at the AEA RCT registry (AEARCTR-0002541).
In line with Danish guidelines for ethics in social sciences research, the first page of the survey provided participants with informed consent information.

Design

Our study design involves a three-arm, parallel cluster randomized controlled trial. With the individual child constituting the randomization unit, each child's parents received identical treatment.3 Each parent couple was randomly assigned to one of three experimental groups: a CG or one of two treatment groups (TG1 or TG2). Parents in the CG received a typical survey invitation email. The email subject line was “Personal letter from Statistics Denmark,” the survey invitation addressed the respondent by name, the invitation text length was 280 words, and the survey link (URL) was placed in the middle of the letter. All of these features are in line with research recommendations suggesting that survey invitation letters should have personalized greetings (Griggs et al., 2018; Heerwegh & Loosveldt, 2006), a subject line with a request from an authority figure, longer text, and no URL at the top of the invitation (Kaplowitz, Lupi, Couper, & Thorp, 2012). The full survey invitation appears in Supplementary Figure S1. Parents in the two treatment groups received the same email as those in the CG, but the email also included a link to an informational video. Clicking the link opened a different video for parents in TG1 and TG2. Both videos had the same length (2 min, 17 s) and identical audio content. The two videos offered distinct visual stimuli: For TG1 parents, the video showed a researcher speaking directly to the viewer in front of a white background. The TG2 video contained excerpts of the same footage, but its main part showed various clips of children (looking directly at the viewer, interacting with each other, playing with toys, etc.).
In terms of audio content, the researcher introduced herself (to “humanize” the survey) and briefly elaborated on the survey content (questions about the child and the family). The main focus, however, was on the purpose of the survey and the importance of survey participation. Priming the prosocial value of the survey, the audio track used language such as “Your help is needed for gaining insights into how to best ensure a good and safe upbringing for small children,” “Knowledge on why some children are not thriving as well as others is needed,” and “We really hope for your participation. The answers we receive provide insights into how to best support children and their families in Denmark.” A test group of parents viewed and commented on both videos. All provided positive feedback, especially for the TG2 video, supporting our a priori theoretical expectations. The videos are available at https://www.youtube.com/embed/UD_PdmLlo5E (TG1) and https://www.youtube.com/embed/1o72L2zkYls (TG2). Both videos were directed, recorded, and edited by a communication firm specializing in visual communication. The total production cost was approximately $8,500 (55,213 DKK).4 The cost of producing only the TG1 video would have been $3,650 (23,800 DKK); the TG2 video alone would have cost $8,000 (51,800 DKK). The distinction between the two videos provides leverage on questions about the content- and cost-effectiveness of informational videos in survey research. Based on insights from social psychology (Grant, 2007, 2008), the additional exposure to video clips of children in the TG2 video may prime the perceived prosocial importance and value of the survey to a greater extent than showing only the researcher behind the survey, thus resulting in a higher survey response rate for TG2 than for TG1. Our setup allows us to test this notion and, in the case of differences, estimate the cost-effectiveness of the videos.
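The cost-effectiveness question can be made concrete with simple arithmetic: the extra cost of the TG2 production divided by the number of additional responses it would generate at a given response-rate gain. The production costs below are those reported in the text; the 1-percentage-point gain is a purely hypothetical figure of ours.

```python
SAMPLE_PER_ARM = 9_500   # approximate size of each treatment arm
COST_TG1_ONLY = 3_650    # would-be cost of producing only the TG1 video (USD)
COST_TG2_ONLY = 8_000    # would-be cost of producing only the TG2 video (USD)

def cost_per_extra_response(extra_cost, response_rate_gain, n):
    """USD per additional response bought by the costlier video.

    `response_rate_gain` is a hypothetical gain in proportion terms
    (e.g., 0.01 = 1 percentage point).
    """
    extra_responses = response_rate_gain * n
    return extra_cost / extra_responses

# If the TG2 video raised the response rate 1 pp over TG1 (hypothetical):
delta = COST_TG2_ONLY - COST_TG1_ONLY   # $4,350 extra production cost
print(round(cost_per_extra_response(delta, 0.01, SAMPLE_PER_ARM), 2))  # → 45.79
```

At that hypothetical gain, each extra response would cost roughly $46; smaller gains price each response proportionally higher.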
Statistics Denmark did not have the technical capacity to connect each treatment respondent to data on whether that individual clicked the video link and viewed the video. Such data would have allowed us to supplement the estimated intention-to-treat (ITT) effects with estimates of treatment-on-the-treated effects, providing added leverage on the potential cause of our findings. However, we were able to track the view count for each video. The survey response rate is our primary outcome measure, operationalized as a binary variable signifying whether the sample parent accessed the survey and answered one or more items. Moreover, we use the subsample of parents who accessed the survey and answered one or more items to examine effects on three secondary outcomes: (a) survey initiation time, (b) survey dropout rate, and (c) survey completion time. Survey initiation time and survey completion time are both captured using numerical variables (the number of days from survey invitation to survey response and the number of minutes from start to completion of the survey). The survey dropout rate is captured using a binary variable signifying completion of the survey.

Statistical Analysis

Descriptive sample statistics are estimated and reported using percentages, counts, and chi-square tests. Descriptive statistics for the outcome measures are computed using percentages (for categorical variables) and means and standard deviations (for numerical variables). We examine the effects of the two video treatments on binary outcomes—the survey response rate and the dropout rate—using linear probability model (LPM) regression. Drawing on its advantages in terms of interpretation and usefulness (Deke, 2014), we use the LPM instead of logit regression.5 For robustness testing, however, we show how the estimated LPM p-values align with the p-values from conventional logit regression.
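With randomly assigned treatment dummies, the LPM coefficient is simply the treatment–control difference in mean response. The minimal sketch below respects the clustering of parents within children by collapsing to child-level means before comparing arms; this is a simple stand-in for, not a reproduction of, the cluster-robust sandwich standard errors the paper reports, and the function name and data layout are our own.

```python
from statistics import mean, variance

def itt_effect(treat_clusters, ctrl_clusters):
    """ITT effect of a treatment dummy on a binary outcome, clustered by child.

    Each argument is a list of per-child outcome lists (one 0/1 entry per
    parent; 1 = responded). Collapsing to child-level means before comparing
    arms respects the within-family clustering; it approximates an LPM with
    cluster-robust standard errors.
    """
    mt = [mean(c) for c in treat_clusters]
    mc = [mean(c) for c in ctrl_clusters]
    effect = mean(mt) - mean(mc)  # equals the LPM coefficient when clusters are equal-sized
    se = (variance(mt) / len(mt) + variance(mc) / len(mc)) ** 0.5
    return effect, se
```

With the outcome rescaled to 0–100, as in the paper's reporting, `effect` reads directly in percentage points.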
For numerical outcomes—survey initiation time and survey completion time—we use ordinary least squares (OLS) regression. As a robustness check, given potential concerns about the normal distribution of time scales, we show how the estimated OLS p-values align with nonparametric p-values from ordered logit regression. In all model specifications, standard errors are corrected for clustering of parents by children.

Results

Descriptive Statistics

Table 1 shows descriptive sample statistics (full sample and by experiment group). Chi-square tests for differences in means suggest that the three experiment groups balance well on observable characteristics. Only for child gender do we observe slight imbalance (a higher proportion of girls in TG2 relative to CG and TG1 at p = .04).

Table 1. Descriptive Sample Statistics, Full Sample and by Experiment Group

                                   Full Sample      CG             TG1            TG2           p (χ2)
                                   %      n         %      n      %      n       %      n
Trial month
  November                         49.9   14,220    50.72  4,878  49.5   4,645   49.4   4,697
  December                         50.1   14,290    49.28  4,740  50.5   4,747   50.6   4,803   0.13
Child age group
  9 months                         33.2   9,472     33.37  3,210  32.7   3,070   33.6   3,192
  2 years                          33.4   9,519     32.69  3,144  34.1   3,203   33.4   3,172
  3 years                          33.4   9,519     33.94  3,264  33.2   3,119   33.0   3,136   0.24
Child status
  Not “at risk”                    41.6   11,858    41.43  3,985  41.8   3,927   41.5   3,946
  “At risk”                        58.4   16,652    58.57  5,633  58.2   5,465   58.5   5,554   0.86
Child, gender
  Boy                              51.5   14,685    51.87  4,989  52.2   4,904   50.4   4,792
  Girl                             48.5   13,825    48.13  4,629  47.8   4,488   49.6   4,708   0.04
Parent, gender
  Female                           52.4   14,943    52.33  5,033  52.4   4,918   52.6   4,992
  Male                             47.6   13,567    47.67  4,585  47.6   4,474   47.5   4,508   0.95
Parent, age
  Below 25                         4.3    1,229     4.08   392    4.5    420     4.4    417
  Age 25–29                        17.6   5,023     17.74  1,706  17.1   1,605   18.0   1,712
  Age 30–34                        32.0   9,130     32.19  3,096  31.8   2,989   32.1   3,045
  Age 35+                          46.1   13,128    46     4,424  46.6   4,378   45.5   4,326   0.45
Parent, education
  Primary school                   22.2   6,111     22.89  2,129  21.5   1,949   22.1   2,033
  High school and technical
  college                          35.9   9,917     34.97  3,252  37.0   3,358   35.9   3,307
  Short tertiary                   5.2    1,429     5.1    471    5.4    486     5.1    472
  Tertiary                         19.4   5,342     19.5   1,811  19.3   1,751   19.3   1,780
  Academic degree                  15.6   4,305     15.8   1,468  15.3   1,388   15.7   1,449
  PhD                              1.8    488       1.8    169    1.7    150     1.8    169     0.27
Parent, employment status
  Self-employed                    3.7    1,021     4.1    376    3.4    310     3.7    335
  Salaried worker                  69.0   18,909    69.2   6,385  69.2   6,252   68.6   6,272
  Unemployed or on leave           19.7   5,402     19.4   1,788  19.5   1,764   20.2   1,850
  Student                          6.3    1,723     6.2    568    6.5    588     6.2    567
  Retired                          1.3    355       1.3    115    1.3    117     1.3    123     0.35
N                                         28,510           9,618         9,392          9,500

Note. Percentages, counts, and chi-square tests.
Table 2 shows descriptive statistics for the outcome measures (full sample and by experiment group). Responses from 12,375 parents were obtained, yielding a survey response rate of 43.41%. Among the 12,375 responders, 14.59% initiated but did not complete the survey, the average time from survey invitation send-out to survey response was about 10 days, and the average time spent on survey responses was about 28 min. Chi-square tests comparing the 12,375 responders with the 16,135 nonresponders on the characteristics reported in Table 1 show that responding parents are more likely to be female, older, more educated, to have a younger child and a child not “at risk,” and are less likely to be self-employed, unemployed or on leave, or retired (at p < .05). These differences do not compromise the internal validity of our study, but our results should be interpreted in this light. Our tracking of the number of views for each video reveals that about one-fifth of the treatment respondents clicked the video link (TG1: 1,807 of 9,392 = 19.24%; TG2: 2,073 of 9,500 = 21.82%). We return to the implications of the view counts for the interpretation of our findings.

Table 2. Descriptive Statistics for Outcome Measures, Full Sample and by Experiment Group

                          Full Sample      CG             TG1            TG2
                          %      n         %      n      %      n       %      n
Response rate
  No response             56.59  16,135    56.23  5,408  56.98  5,352   56.58  5,375
  Response                43.41  12,375    43.77  4,210  43.02  4,040   43.42  4,125
Dropout rate
  No dropout              85.40  10,569    84.54  3,559  86.21  3,483   85.50  3,527
  Dropout                 14.59  1,806     15.46  651    13.79  557     14.50  598

Initiation time (days), mean (SD) [min/max], n:
  Full sample             10.18 (8.86) [0/45]     12,375
  CG                      10.24 (8.86) [0/45]     4,210
  TG1                     10.01 (8.76) [0/45]     4,040
  TG2                     10.28 (8.97) [0/45]     4,125
Completion time (minutes), mean (SD) [min/max], n:
  Full sample             27.87 (20.56) [0/150]   12,375
  CG                      27.85 (20.40) [0/150]   4,210
  TG1                     27.92 (20.51) [0/150]   4,040
  TG2                     27.84 (20.78) [0/150]   4,125

Note. Percentages, counts, means, standard deviations (in parentheses), and minimum and maximum values (in brackets).

Treatment Effects

Table 3, model 1, shows the ITT effect of the two video treatments on the survey response rate. Assignment to the video treatment does not affect the survey response rate: we find no evidence of any difference in the survey response rate between the three experimental groups (TG1 vs. CG, TG2 vs. CG, and TG2 vs. TG1) at p < .05.

Table 3. Effects of the Video Treatments on the Survey Response Rate, Initiation Time, Dropout Rate, and Completion Time

Model 1: Survey response rate (n = 28,510)
  TG1          β = −0.76 (SE 0.77) [95% CI: −2.27, 0.75]   p = .33 (logit: .33)
  TG2          β = −0.34 (0.77) [−1.85, 1.15]              p = .65 (.65)
  TG2 vs. TG1  β = 0.41 (0.77) [−1.10, 1.92]               p = .60 (.60)

Model 2: Initiation time (n = 12,375)
  TG1          β = −0.22 (0.20) [−0.62, 0.17]              p = .27 (ologit: .29)
  TG2          β = 0.04 (0.20) [−0.35, 0.44]               p = .84 (.91)
  TG2 vs. TG1  β = 0.26 (0.20) [−0.13, 0.66]               p = .19 (.34)

Model 3: Dropout rate (n = 12,375)
  TG1          β = −1.68* (0.81) [−3.26, −0.10]            p = .04 (logit: .04)
  TG2          β = −0.97 (0.81) [−2.55, 0.62]              p = .23 (.23)
  TG2 vs. TG1  β = 0.71 (0.80) [−0.85, 2.27]               p = .37 (.37)

Model 4: Completion time (n = 12,375)
  TG1          β = 0.06 (0.47) [−0.86, 0.98]               p = .89 (ologit: .97)
  TG2          β = −0.01 (0.46) [−0.92, 0.91]              p = .99 (.93)
  TG2 vs. TG1  β = −0.07 (0.48) [−1.01, 0.86]              p = .88 (.95)

Note. Betas, standard errors, confidence intervals, and p-values. Baseline: CG. The variables for survey response rate and dropout rate are rescaled to percentages (0–100) to facilitate interpretation. Each model reports two p-values: the p-value for the reported beta and standard error, and (in parentheses) the p-value from robustness testing using logit/ordered logit regression instead of linear probability model (LPM)/ordinary least squares (OLS) regression. Confirming the robustness of our estimates, the logit/ordered logit p-values are near-identical to the LPM/OLS p-values.
* p < .05.
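Because only about one-fifth of treated parents clicked the video link, the ITT estimates in Table 3 can be translated into the implied effect of actually being exposed to a video via a back-of-the-envelope Bloom adjustment (ITT divided by the uptake rate). This calculation is ours, not the authors': it assumes the video had no effect on parents who never clicked.

```python
def bloom_tot(itt, uptake_rate):
    """Implied treatment-on-the-treated effect under the assumption that the
    video cannot affect parents who never watched it: TOT = ITT / uptake."""
    return itt / uptake_rate

# TG1 ITT on the response rate: -0.76 percentage points; uptake ~19.2%.
print(round(bloom_tot(-0.76, 0.192), 2))  # → -3.96
```

Even this rescaled estimate remains statistically indistinguishable from zero, since the confidence interval scales by the same factor.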
Did the videos affect the behavior of the subsample of parents who provided a complete or partial response? Table 3, models 2–4, shows the ITT effects of the two video treatments on survey initiation time, survey dropout rate, and survey completion time. We find no evidence of significant differences in survey initiation time or survey completion time across the three experimental groups. The dropout rate appears to be lower for TG1 relative to the CG (at p < .05). However, the reduction in dropout amounts to 1.68 percentage points and is thus small in size. Moreover, the p-value of .04 calls, in the context of both our sample size and the number of regressions conducted, for caution in extrapolating this result.

Heterogeneous Effects

The preceding analyses suggest that use of informational video is an ineffective strategy for increasing survey response rates. Still, the video treatments could have effects for particular subpopulations of the sample parents.6 We test for heterogeneous effects by re-estimating our survey response rate model specification (Table 3, model 1) stratified by demographic and socioeconomic characteristics: child gender, parent gender, child age, parent age, “at-risk” status, educational attainment, and employment status. We also estimate the difference in treatment effects for all subgroups. The results reveal no heterogeneous effects for any subgroup at p < .05 (see Supplementary Table S1).

Conclusion

What is the survey response rate effect of embedding a link to an informational video in the email inviting people to participate in a survey? While theory suggests that such use of video may promote survey response rates, this article finds no support for this notion. Using a three-arm, parallel cluster randomized controlled trial among a sample of 28,510 Danish citizens, we do not identify any differences in survey response rates across experimental conditions.
The null result is robust across sample subgroups. Similarly, we find no significant effects on survey initiation time or survey completion time among those who provided a complete or partial response. Although we detect a small difference in dropout rates, the effect is small in size and marked by a questionable level of precision (p = .04) given our sample size and the number of regressions conducted. In sum, this article offers no clear evidence of any positive or negative effects of embedding a link to an informational video in the email inviting people to participate in a survey. What may explain this null finding? The relatively modest video view counts (one-fifth of the treatment respondents clicked the video link) point to what we theorize is the most likely explanation: respondents’ decision to participate is often made upon reading the email invitation letter (or before that). Nonresponders may have tended not to click the video link, whereas some responders may have answered the survey without watching the video. We also emphasize how the features of our research design affect and frame the inference of our (null) findings. First, we cannot rule out that our findings are a product of ineffective video design, that is, that the content of the videos failed to substantively appeal to and motivate the viewers. Relatedly, all email invitation letters, including those to the treated parents, briefly described the overall aim of the survey—information that may have diminished the motivational effects of the videos. Having said that, we believe that not disclosing this information in the invitation letter would artificially diverge from typical survey procedures. Second, all email invitation letters were sent to e-Boks—an official digital mailbox system used by public agencies for digital communication with citizens.
Moreover, two well-known and reputable organizations were listed as the survey sponsors, the email invitations were personalized, and survey completion involved participation in a cash prize lottery. These factors may have positively affected the parents’ inclination to respond, thereby diminishing the effects of the video treatments. Third, we emphasize that our findings are based on ITT estimates: We identify the effects of embedding a link to an informational video in the survey invitation letter, not the effects of watching an informational video. Moreover, we recognize that embedding the video directly in the survey invitation letter (rather than a link) could have a positive effect. In light of these limitations, we encourage more research on the use of informational video in online survey invitations. Such efforts should precede any full dismissal of the use of informational video for promoting survey response rates. Similarly, future research should examine effects on other outcomes. For example, informational videos may have positive effects on survey response quality, and use of informational video could be viewed as appropriate or even expected by certain target groups. Until then, however, our findings call for caution and restraint. Informational video in online survey invitations imposes video production costs that may not translate into net benefits in terms of survey participation and responses. Resources may be better spent elsewhere.

Supplementary Data

Supplementary data are available at IJPOR online.

Funding

This work was supported by the Novo Nordisk Foundation (grant number FF17SH0026396).

Biographical Notes

Mogens Jin Pedersen, PhD, is an associate professor in the Department of Political Science at the University of Copenhagen, Denmark, and an affiliate senior researcher at VIVE—The Danish Center for Social Science Research. Anders Bo Bojesen is an analyst at VIVE—The Danish Center for Social Science Research.
Signe Boe Rayce is a senior researcher at VIVE—The Danish Center for Social Science Research. Maiken Pontoppidan is a senior researcher at VIVE—The Danish Center for Social Science Research.

Footnotes

1 In particular, a video may involve audio and visual stimuli resonating with normative and affective motives for engaging in prosocial behavior (Knoke & Wright-Isak, 1982). For example, a video may provide the narrative of a societal issue that we need to understand and learn more about, thus appealing to a person’s normative prosocial motives to participate (what one thinks one ought to do in the public interest). Similarly, a video may provide the narrative of a group of people in need, thus appealing to a person’s affective prosocial motives to participate (what one is compelled to do out of feelings of compassion or sympathy).

2 In each of November and December 2017, Statistics Denmark pulled a sample of 7,500 children from its registers on the full Danish population who had turned 9, 24, or 36 months in the previous month (about 2,500 children in each child age group). Each sample involved oversampling of “at risk” children: Each child age group comprised about 1,500 “at risk” children and a random sample of about 1,000 children from the remaining population. “At risk” children were defined by whether the child’s mother satisfied one or more of the following four criteria at the time of the child’s birth: (a) age 21 or less, (b) lower secondary education as the highest completed education, (c) fit to work but unemployed 1, 2, and/or 3 years before the child’s date of birth, or (d) not cohabiting with the child’s father. The 28,510 sample parents are the parents of the 15,000 children whom Statistics Denmark was able to identify as viable for survey purposes (besides deceased parents, some mothers and fathers were missing (“unknown”) in the Statistics Denmark registers.
Parents who no longer saw their child were also excluded from our examination.)

3 Randomization was stratified by trial month (November and December 2017), child age group (9, 24, and 36 months), and child status (“at risk” and “not at risk”).

4 Denmark has a nondeductible value added tax (VAT) of 25%. Costs exclude VAT. The researcher shown and speaking in the videos was the principal investigator of the research project. The child extras in the TG2 video were unpaid.

5 In contrast to logistic regression, the LPM produces parameter estimates that are directly interpretable as the mean marginal effect of a covariate on the outcome. Moreover, the LPM yields unbiased estimates when treatment status is binary or categorical (meaning that functional form is irrelevant) and randomly assigned (Deke, 2014).

6 For example, gender research finds evidence of stereotype beliefs that females are more emotional and caring than males (Haines, Deaux, & Lofaro, 2016), and mothers are more likely than fathers to be the primary child caregiver in the family (Galinsky, Aumann, & Bond, 2013). The TG2 video could therefore prime female parents’ inclination to respond, and thus be an effective means of eliciting survey response among females.

References

Aizpurua, E., Park, K. H., Avery, M., Wittrock, J., Muilenburg, R., & Losch, M. E. (2018). The impact of advance letters on cellphone responses in a statewide dual-frame survey. Survey Practice, 11, 1–9. doi:10.29115/SP-2018-0011

Bailey, P., Pritchard, G., & Kernohan, H. (2015). Gamification in market research: Increasing enjoyment, participant engagement and richness of data, but what of data validity? International Journal of Market Research, 57, 17–28.

Beebe, T. J., Davern, M. E., McAlpine, D. D., Call, K. T., & Rockwood, T. H. (2005).
Increasing response rates in a survey of Medicaid enrollees: The effect of a prepaid monetary incentive and mixed modes (mail and telephone) . Medical Care , 43 , 411 – 414 . Google Scholar Crossref Search ADS PubMed WorldCat Callegaro M. Baker R. P. Bethlehem J. Göritz A. S. Krosnick J. A. Lavrakas P. J. ( 2014 ). Online panel research: A data quality perspective . Hoboken, NJ : Wiley . Google Scholar Crossref Search ADS Google Preview WorldCat COPAC Curtin R. Presser S. Singer E. ( 2005 ). Changes in telephone survey nonresponse over the past quarter century . Public Opinion Quarterly , 69 , 87 – 98 . Google Scholar Crossref Search ADS WorldCat Daikeler J. Mosnjak M., Manfreda K. L. ( 2020 ). Web versus other survey modes: An updated and extended meta-analysis comparing response rates . Journal of Survey Statistics and Methodology , 8 , 513 – 539 . doi:org/10.1093/jssam/smz008 Google Scholar Crossref Search ADS WorldCat Deci E. L. Ryan R. M. ( 2000 ). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior . Psychological Inquiry , 11 , 227 – 268 . Google Scholar Crossref Search ADS WorldCat Deke J. ( 2014 ). Using the linear probability model to estimate impacts on binary outcomes in randomized controlled trials . Evaluation Technical Assistance Brief for OAH & ACYF Teenage Pregnancy Prevention Grantees 6, 1 – 5 . Google Scholar OpenURL Placeholder Text WorldCat de Leeuw E. Callegaro E. Hox J. Korendijk E. Lensvelt-Mulders G. ( 2007 ). The influence of advance letters on response in telephone surveys: A meta-analysis . Public Opinion Quarterly 71, 413 –4 43 . Google Scholar Crossref Search ADS WorldCat de Leeuw E. de Heer W. ( 2002 ). Trends in household survey nonresponse: A longitudinal and international comparison. In Groves R. M. Dillman D. A. Eltinge J. L. Little R. J. A. (Eds.), Survey nonresponse (pp. 41 – 54 ). New York, NY : John Wiley . Google Scholar Google Preview OpenURL Placeholder Text WorldCat COPAC Deutskens E. 
Ruyter, K. D., Wetzels, M., & Oosterveld, P. (2004). Response rate and response quality of internet-based surveys: An experimental study. Marketing Letters, 15, 21–36.
Dillman, D. A., Singer, E., Clark, J. R., & Treat, J. B. (1996). Effects of benefits appeals, mandatory appeals, and variations in statements of confidentiality on completion rates for census questionnaires. Public Opinion Quarterly, 60, 376–389.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Mail and internet surveys: The tailored design method (3rd ed.). New York, NY: John Wiley.
Dommeyer, C. J., & Ruggiero, L. A. (1996). The effects of a photograph on mail survey response. Marketing Bulletin, 7, 51–77.
Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Pratap, S., Wentz, R., & Kwan, I. (2002). Increasing response rates to postal questionnaires: Systematic review. British Medical Journal, 324, 1183–1195.
Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Wentz, R., Kwan, I., & Pratap, S. (2009). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews, 2009, Art. No. MR000008. doi:10.1002/14651858.MR000008.pub4
Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26, 132–139.
Galinsky, E., Aumann, K., & Bond, J. T. (2013). Times are changing: Gender and generation at work and at home in the USA. In S. Poelmans, J. H. Greenhaus, & M. Las Heras Maestro (Eds.), Expanding the boundaries of work-family research (pp. 279–296). New York, NY: Palgrave Macmillan.
Gaski, J. F. (2004). Efficacy of a mail survey appeal for a dissertation. Perceptual & Motor Skills, 99, 1295–1298.
Gattellari, M., & Ward, J. E. (2001). Will donations to their learned college increase surgeons’ participation in surveys? A randomized trial. Journal of Clinical Epidemiology, 54, 645–649.
Göritz, A. S. (2006). Cash lotteries as incentives in online panels. Social Science Computer Review, 24, 445–459.
Göritz, A. S., & Luthe, S. C. (2013a). How do lotteries and study results influence response behavior in online panels? Social Science Computer Review, 31, 371–385.
Göritz, A. S., & Luthe, S. C. (2013b). Lotteries and study results in market research online panels. International Journal of Market Research, 55, 611–626.
Göritz, A. S., & Luthe, S. C. (2013c). Effects of lotteries on response behavior in online panels. Field Methods, 25, 219–237.
Grant, A. M. (2007). Relational job design and the motivation to make a prosocial difference. Academy of Management Review, 32, 393–417.
Grant, A. M. (2008). The significance of task significance: Job performance effects, relational mechanisms, and boundary conditions. Journal of Applied Psychology, 93, 108–124.
Grant, A. M., & Berg, J. M. (2011). Prosocial motivation at work: When, why, and how making a difference makes a difference. In G. M. Spreitzer & K. S. Cameron (Eds.), The Oxford handbook of positive organizational scholarship (pp. 28–44). New York, NY: Oxford University Press.
Griggs, A. K., Berzofsky, M. E., Shook-Sa, B. E., Lindquist, C. H., Enders, K. P., Krebs, C. P., & Langton, L. (2018). The impact of greeting personalization on prevalence estimates in a survey of sexual assault victimization. Public Opinion Quarterly, 82, 366–378.
Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Haan, M., Ongena, Y. P., Vannieuwenhuyze, J. T. A., & De Glopper, K. (2017). Response behavior in video-web survey: A mode comparison study. Journal of Survey Statistics and Methodology, 5, 48–69.
Haines, E. L., Deaux, K., & Lofaro, N. (2016). The times they are a-changing … or are they not? A comparison of gender stereotypes, 1983–2014. Psychology of Women Quarterly, 40, 353–363.
Heerwegh, D., & Loosveldt, G. (2006). Personalizing e-mail contacts: Its influence on web survey response rate and social desirability. International Journal of Public Opinion Research, 19, 258–268.
Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.
Knoke, D., & Wright-Isak, C. (1982). Individual motives and organizational incentive systems. Research in the Sociology of Organizations, 1, 209–254.
Kropf, M. E., & Blair, J. (2005). Eliciting survey cooperation: Incentives, self-interest, and norms of cooperation. Evaluation Review, 29, 559–575.
Mavletova, A. (2015).
A gamification effect in longitudinal web surveys among children and adolescents. International Journal of Market Research, 57, 413–438.
Pedersen, M. J., & Nielsen, C. V. (2016). Improving survey response rates in online panels: Effects of low-cost incentives and cost-free text appeal interventions. Social Science Computer Review, 34, 229–243.
Petchenik, J., & Watermolen, D. J. (2011). A cautionary note on using the internet to survey recent hunter education graduates. Human Dimensions of Wildlife, 16, 216–218.
Porter, S. R., & Whitcomb, M. E. (2005). E-mail subject lines and their effect on web survey viewing and response. Social Science Computer Review, 23, 280–287.
Rose, D. S., Sidle, S. D., & Griffith, K. H. (2007). A penny for your thoughts: Monetary incentives improve response rates for company-sponsored employee surveys. Organizational Research Methods, 10, 225–240.
Singer, E. (2006). Introduction: Nonresponse bias in household surveys. Public Opinion Quarterly, 70, 637–645.
Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. Annals of the American Academy of Political and Social Science, 645, 112–141.
Sloan, M., Kreiger, N., & James, B. (1997). Improving response rates among doctors: Randomised trial. The BMJ, 315, 1136.
Tourangeau, R., Couper, M. P., & Steiger, D. M. (2003). Humanizing self-administered surveys: Experiments on social pressure in web and IVR surveys. Computers in Human Behavior, 19, 1–24.
Weiss, J. A., & Tschirhart, M. (1994). Public information campaigns as policy instruments.
Journal of Policy Analysis and Management, 13, 82–119.
Wright, K. B. (2005). Researching internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. Journal of Computer-Mediated Communication, 10. doi:10.1111/j.1083-6101.2005.tb00259.x

© The Author(s) 2020. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)


International Journal of Public Opinion Research, Oxford University Press

Published: Nov 26, 2020
