Do Incentives Matter for the Diffusion of Financial Knowledge? Experimental Evidence from Uganda

Abstract

Many development interventions involve training of beneficiaries, based on the assumption that knowledge and skills will spread 'automatically' among a wider target population. However, diffusion of knowledge or innovations can be slow and incomplete. We use a randomised field experiment in Uganda to assess the impact of providing incentives for knowledge diffusion, and pay trained individuals a fee if they share knowledge obtained during a financial literacy training. Our main results are that incentives increase knowledge sharing, and that it may be cost-effective to provide such incentives. We also document an absence of assortative matching in the social learning process.

1. Introduction

Knowledge diffusion is a key topic in economics, and a major determinant of economic performance at the macro as well as the micro level. This insight has motivated extension and training interventions aimed at promoting knowledge transfer. Development practitioners typically assume that diffusion of knowledge and skills will leverage the impact of such interventions—knowledge is supposed to flow freely within social networks, eventually reaching a much larger group of agents than the sub-group initially reached by the training.1 While training interventions are often costly, they may pass economic cost–benefit tests if transferred knowledge and skills spread to a sufficiently large number of other households or firms in the target population, affecting their poverty status as well as that of the initially trained sub-group.

It is well known that there are barriers to the diffusion of technology and knowledge. The existence of such barriers to the diffusion of technology across international borders is perhaps not surprising. Technologies may not fit conditions elsewhere, and there are often costs associated with adoption (as in the case of improved seeds or fertiliser) which may be difficult to overcome if capital markets are imperfect. But even knowledge, which supposedly may spread at relatively low cost, sometimes does not travel easily—not even within organisations, local communities, or tightly-knit social groups. In developing countries, incomplete diffusion has been documented in a range of important domains, including agricultural innovations (Feder et al., 1985), health (Chami et al., 2016) and financial literacy (Sayinzoga et al., 2016).

Even if individuals are connected via social networks, the spreading of knowledge beyond initial 'seed nodes' typically requires time and effort on both the supply and the demand side (see below). Hence, the spreading of knowledge can be viewed as an economic process, involving an investment decision by those possessing the knowledge and those seeking to access it. Viewed in this light, it seems natural to ask whether the diffusion of knowledge can be promoted by economic incentives.

We study the effect of incentives on the diffusion of financial literacy knowledge with a randomised controlled trial (RCT) in peri-urban Uganda. The objective of this paper is twofold. First, and most importantly, we test whether incentivizing seed nodes fosters the diffusion of knowledge within social networks.
We follow BenYishay and Mobarak (2016), discussed below, who pioneered the use of an experimental approach in this context, but consider the diffusion of financial knowledge rather than an agricultural innovation.2 Second, we probe the individual characteristics of seed nodes and their peers, and the alignment of these characteristics across them, to gain a better understanding of the factors promoting diffusion. Our main results are that (i) incentivizing individuals has a large effect on the diffusion of knowledge, and (ii) providing small monetary incentives is a cost-effective approach to foster the spread of information (compared to expanding the intervention along the extensive margin through additional trainings). We also find that (iii) social proximity does not foster social learning within the context we study.

The focus on financial literacy is timely and important. Evidence suggests the impact of microfinance (interventions) varies with levels of human capital among recipients, so many microfinance institutions and NGOs have embraced financial literacy trainings as a tool to support development (the so-called 'Finance-Plus' strategy).3 Financial literacy captures consumers' awareness, skills and knowledge enabling them to make informed, effective decisions about financial resources. Studies across a range of countries have shown that levels of financial literacy tend to be low (Lusardi and Mitchell, 2007, 2008). Focusing on developing countries, a small number of studies have probed how financial literacy affects economic behaviour, including insurance adoption (Giné et al., 2013), savings (Tustin, 2010; Bruhn et al., 2013; Landerretche and Martínez, 2013), bank account ownership (Cole et al., 2011), and business practices and outcomes (Sayinzoga et al., 2016).

There is very little evidence on the diffusion of financial knowledge beyond trained individuals, and the little evidence that is available is inconsistent. While Sayinzoga et al. (2016) find no evidence of financial knowledge spillovers beyond trained village bank representatives in rural Rwanda, Berge et al. (2015) find that training content spread within borrowing groups in Tanzania. A key difference in context between these studies was that, in Tanzania, limited liability within groups implied the seed node had a direct incentive to train his peers, to reduce exposure to bad financial decisions of these peers. This insight suggests diffusion processes may be partly driven by economic considerations.

The paper is organised as follows. In the next section, we provide background to the topic of knowledge adoption and diffusion, summarising part of the relevant literature. In Section 3, we describe the details of our experiment and explain our sampling strategy. In Section 4 we introduce our data and outline our simple identification strategy. In Section 5, we present our empirical results and attempt to unravel the factors that influence knowledge diffusion. Discussion and conclusions are presented in Section 6.

2. Adoption and diffusion

A large literature studies the diffusion and adoption of technologies in developing countries. Much of this literature focuses on the spreading of agricultural innovations, such as improved seeds and fertiliser, or bundles of agricultural activities (e.g., Feder et al., 1985; Knowler and Bradshaw, 2007). This focus seems appropriate in light of the importance of the agricultural sector in developing countries.
However, other relevant domains have been studied as well, including health (e.g., bed nets, deworming pills), hygiene (water purifiers, menstrual cups), and fertility (contraceptives). A recent survey of the adoption literature is provided in Foster and Rosenzweig (2010). The diffusion of technologies that are supposed to improve human welfare is imperfect, which may be explained by a range of factors including imperfect capital markets (credit and insurance), incomplete tenure arrangements (impeding the uptake of long-term investments), and perhaps behavioural factors (Duflo et al., 2009).

Diffusion of innovations fits in the broader literature on peer effects. This literature distinguishes between three different types of effects: pure imitation, social learning, and (behavioural) complementarities or external effects. The learning literature is perhaps the largest, in spite of well-known challenges to proper identification based on observational data. Manski (1993) coined the term 'reflection problem' to describe the difficulty of disentangling peer effects from contextual effects in social interaction models. Peers influence each other, and peer groups do not form randomly but are likely the result of homophilous peer selection. The presence of strategic considerations further complicates matters (e.g., Foster and Rosenzweig, 1995; Bandiera and Rasul, 2006; Breza, 2015). Think of learning spillovers, inviting free-riding on peers' experimental efforts, or considerations in the context of competition for scarce resources, rival goods, or contested markets.

Notwithstanding these challenges, the social learning literature has advanced considerably. The evidence suggests Bayesian learning through social networks can be effective and rapid when innovations have large payoffs for large swaths of the population, are easily observed, and can be applied homogeneously across space. In early stages of the green revolution, for example, the adoption of new technology spread very quickly across large parts of India. Similarly, there is evidence of rapid social learning and diffusion in the domain of advantageous health and sanitary innovations benefitting large groups of people more or less alike (e.g., Dupas, 2014; Oster and Thornton, 2012). However, these conditions for Bayesian learning are not always met, and the evidence of widespread social learning is therefore 'decidedly mixed' (Breza, 2015). Diffusion of knowledge varies with the structure of social networks and the position of innovators within that network (e.g., Banerjee et al., 2013; Beaman et al., 2015; Cai et al., 2015). Moreover, interventions may have heterogeneous payoffs varying with individual attributes (e.g., Suri, 2011), so information acquired by one farmer may be uninformative for his neighbour (Munshi, 2004). In this context, individuals should carefully target whose behaviour and outcomes to observe, paying special attention to people doing unexpectedly well or who are comparable to themselves (Conley and Udry, 2010; BenYishay and Mobarak, 2016).

Finally, social learning will be incomplete and diffusion will be slow if individuals cannot easily observe the experiences of their peers. In this case, information does not automatically flow from one person to another. Instead, this will only occur if both parties invest sufficient time and effort into the knowledge-sharing process. In many cases, however, the innovator stands to gain little from spreading knowledge.
Since actively engaging in the sharing of knowledge typically entails a cost, investing a lot of effort into this process is unlikely to be privately optimal. Can innovators or early adopters be incentivized to share information with their peers—internalising learning externalities? This important issue was first analysed in the field by BenYishay and Mobarak (2016), who study the diffusion of agricultural innovations in rural Malawi: pit-planting and composting. They select different types of 'communicators' (seed nodes) and expose them to the new technology. A random subsample of these communicators receives a performance-based incentive (a bag of seeds), where performance is based on co-villagers' knowledge about (and adoption of) the new technologies. The main lessons from the study are that co-villagers are more likely to learn from communicators comparable to themselves (see above); that communicators invest more time and effort in learning about the new technology when they are incentivized to share knowledge later; and, most importantly, that providing incentives to communicators increases the flow of information and fosters knowledge levels and adoption by co-villagers. Transmission of information is not automatic, and can be manipulated via economic incentives.

The issue of incentivizing individuals to share knowledge speaks to a broader literature on intrinsic and extrinsic motives to engage in prosocial actions. The thrust of this literature is that the effect of incentives on prosocial behaviour may be complex. Bénabou and Tirole (2006) develop a behavioural theory that combines altruism with concerns for social reputation and self-respect. Since material or image-related incentives create doubt about the underlying motives for which good deeds are done, they may partially or fully 'crowd out' prosocial behaviour. This is referred to as the 'over-justification effect.' If respondents receive a reward for engaging in information sharing—an act of prosocial behaviour—co-villagers are unsure about the motives for sharing: was the respondent driven by altruism or by desire for the reward? The former may give an impetus to somebody's status or reputation in the village, the latter presumably would not.4 Self-image concerns suggest a similar logic applies in the absence of reputational concerns (Bénabou and Tirole, 2011). If respondents value conformity between actions and values or identity, then the self-image value of engaging in prosocial deeds is undermined by incentives. See, for example, Fehr and Falk (2002) or Bowles and Polanía-Reyes (2012) for extensive discussions of these and related issues. On theoretical grounds, the effect of incentivizing individuals to engage in prosocial activities such as information sharing is therefore ambiguous. Ultimately it is an empirical and presumably context-specific matter whether extrinsic motives promote or discourage the diffusion of knowledge. We now turn to our experiment.

3. Experimental description

To test whether monetary incentives have a positive impact on knowledge diffusion we organised an RCT in Uganda. We explore the diffusion of information, and do not consider whether the training content was applied by our subjects in practice.5 The experiment was conducted in conjunction with CBS-PEWOSA, a social responsibility section of Central Broadcasting Services (CBS) radio, affiliated with the Buganda kingdom.
CBS-PEWOSA aims to encourage and facilitate the formation of homogeneous self-help groups of 15–30 members in communities in the Kingdom, and then to empower these groups with skills transfer programmes, income-generating activities, food security projects, and savings programmes. Groups evolve endogenously, and villagers may self-select into a group if they wish, provided they are accepted by other group members. Multiple groups may form in one village. CBS-PEWOSA has proven able to engage effectively with a large number of communities. Their modus operandi, via self-formed groups, creates a useful context for our study as it provides a natural reference group of peers in which to study the diffusion of knowledge.

CBS-PEWOSA self-help groups receive training and support in various forms. We include 50 of these groups in an RCT with two treatment arms and one control arm. The only difference between the two treatment arms was that group members in one arm received the conventional training programme, while group members in the other arm received the training as well as an incentive for diffusion. Specifically, these group members were promised a monetary reward in case sufficient knowledge diffused within their own self-help group. Our RCT involved two survey waves for the trained group members from the two treatment arms.

Baseline data for trained respondents were collected in October 2014, after we randomly selected 50 groups to enrol in the experiment out of a sample frame of 153 community groups partnering with CBS-PEWOSA. All groups in the sample are from different villages. We randomly assigned 20 groups to the incentive treatment (outlined below), 20 groups to the conventional treatment arm (without incentives), and the remaining 10 groups to the control arm. From the two types of treatment groups we randomly selected several members per group to participate in a financial literacy training. Following CBS-PEWOSA practice, we used a training ratio of 4:1 (rounding to the nearest integer), so that we would, for example, train 6 members out of a group of 22 members. In total, we invited 274 respondents to the training, of which 136 belonged to the incentive treatment arm. Respondents were informed about their treatment status before the training began, so we leave open the possibility that treated beneficiaries work harder during the training to become more effective knowledge communicators after the training.

To incentivize diffusion, we promised participants in the incentive treatment arm that they would receive a payment of UGS 35,000 (approximately USD 10) in case they managed to share enough of the training content with their untrained CBS-PEWOSA group members. This is a significant incentive because, according to CBS-PEWOSA data, the average daily wage of their members is only UGS 6,000. Specifically, a participant qualified for the payment in case a randomly selected group member (i) passed a financial literacy test, and (ii) identified that particular participant as her primary source of information (i.e., as the communicator). We did not inform training participants about the number of CBS-PEWOSA group members that would be tested, nor about the content of the test or the relevant knowledge threshold.6 Henceforth, we will refer to group members who did not participate in the training themselves as 'other group members' or untrained members. A stylised sketch of this payout rule is given below.
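As an illustration of the payout rule, here is a minimal sketch in Python. The data structures, function name and passing threshold are hypothetical (the actual threshold was not disclosed to participants); only the qualification logic, including the pay-at-most-once rule, follows the description above.

```python
# Hypothetical sketch of the performance-fee rule; names and threshold are illustrative.
REWARD_UGS = 35_000

def performance_fees(trained_ids, surveyed_untrained, pass_threshold=70):
    """Determine which trained members of an incentivized group earn the one-off fee.

    trained_ids: set of ids of the trained members in the group.
    surveyed_untrained: list of dicts for the randomly selected untrained members,
        each with a test 'score' (0-100) and 'named_source' (the trained member the
        respondent identified as her primary source of information, or None).
    """
    rewarded = set()
    for respondent in surveyed_untrained:
        passed = respondent["score"] >= pass_threshold        # condition (i)
        source = respondent.get("named_source")
        if passed and source in trained_ids:                  # condition (ii)
            rewarded.add(source)  # each trained member is paid at most once
    return {member: REWARD_UGS for member in rewarded}

# Example: two surveyed members pass and name member 'A'; 'A' is paid the fee once.
print(performance_fees({"A", "B"},
                       [{"score": 80, "named_source": "A"},
                        {"score": 75, "named_source": "A"},
                        {"score": 40, "named_source": "B"}]))
```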
The financial literacy training was conducted in January 2015 by CBS-PEWOSA field officers using the CBS-PEWOSA financial literacy manual. The training consisted of six sessions, lasting from 9:00 a.m. to 3:00 p.m. (with a one-hour lunch break). The training covered basic topics such as keeping financial records, budgeting, savings, and loan management. An outline of the contents of the training sessions is provided in Appendix 1. During the training eight subjects dropped from the sample, four from the incentivized arm and four from the conventional arm, leaving a sample of 266 trained subjects. Attrition status was not correlated with most of the baseline characteristics, nor with the treatment dummies (results not shown but available on request).7 Individuals dropping out of the study were not replaced because the training programme had typically already progressed a few sessions.

After the training, participants in both experimental arms took a financial literacy test to gauge knowledge levels. The test was based on Sayinzoga et al. (2016) and Lusardi and Mitchell (2007), and is included in Appendix 2. We construct a knowledge index score by awarding 1 point to correct answers, ½ point to partially correct answers, and 0 points to wrong answers (and then convert the score into an outcome on a scale from 0 to 100).8 All participants were encouraged to share the learned knowledge with other members of their CBS-PEWOSA group, and were informed that we would revisit the CBS-PEWOSA group after some 10 months to measure the extent to which knowledge sharing had actually occurred (the exact date of the return visit had not been set yet and was not announced).9

In October 2015, we revisited the 40 CBS-PEWOSA groups, and organised a follow-up survey among a random subsample of untrained members. All self-help groups still existed, so there was no attrition at the group level. We randomly selected 10 members per group, and 394 untrained members participated in the second wave of the study for a cross-section analysis—we did not contact these households at the baseline (nor did we measure their baseline financial literacy). Of these other group members, 200 were from groups with incentivized peers and 194 from groups with non-incentivized peers. Logistical reasons prevented us from engaging six members from two of the conventional training groups. We have no reason to believe these six drop-outs are different from the respondents in the sample. During this survey wave we collected some data on demographics as well as the level of financial literacy. To measure knowledge levels we used the same test as the one administered to the trained individuals. We also asked untrained group members to identify the person who shared training content with them. If the untrained member passed the test, the payment was provided to the trained peer identified as the individual engaged in knowledge diffusion. We did not receive complaints from trained group members about the payment stage, suggesting that untrained group members correctly identified the individuals engaged in sharing information with them. If a trained group member was identified more than once, we still paid the fee only once. We paid the performance fee to 47% of the incentivized group members.

Finally, at the endline we also collected financial knowledge data among members of other groups that did not participate in either of the treatment arms (i.e., from the control arm).
Specifically, we visited the remaining 10 groups that did not participate in the training intervention, and surveyed 10 random members per self-help group to assess their financial knowledge (and collected data on some simple demographic variables). Data from this control group provides the benchmark knowledge level against which knowledge gains and the cost-effectiveness of the training and incentive interventions will be assessed.

4. Data and identification strategy

In Table 1a we summarise basic demographic information of trained group members, distinguishing between respondents from incentivized and non-incentivized training groups. There are no statistically significant differences between the groups (P-values from simple t-tests are reported). On average, respondents are less than 40 years old, and the majority of them are married, employed in the private sector, and have completed lower secondary school. Table 1b does the same for the samples of untrained group members, and includes the group of respondents from the control arm. Observe that for the untrained group members we collected data on fewer variables. Again, there are no significant differences across the treatment arms. We include controls in some specifications to increase the precision of our regression estimates.

Table 1a: Summary of the Data for the Trained Group Members

Variable | Incentivized group | N | Non-incentivized group | N | Difference | P-value
Age | 37.515 | 132 | 37.649 | 134 | 0.134 | 0.763
Gender (male) | 0.205 | 132 | 0.209 | 134 | 0.004 | 0.930
Married/Engaged | 0.621 | 132 | 0.619 | 134 | −0.002 | 0.976
Education | 3.258 | 132 | 3.269 | 134 | 0.011 | 0.897
Tribe (Muganda) | 0.818 | 132 | 0.858 | 134 | 0.040 | 0.377
Religion (Non-Christians) | 0.182 | 132 | 0.172 | 134 | 0.010 | 0.829
Own land | 0.833 | 132 | 0.851 | 134 | −0.017 | 0.698
Budget before spending | 0.379 | 132 | 0.313 | 134 | 0.065 | 0.264
Take notes of expenses | 0.288 | 132 | 0.231 | 134 | 0.057 | 0.295
Take notes of savings | 0.462 | 132 | 0.500 | 134 | −0.038 | 0.538
Deposit in a bank | 0.265 | 132 | 0.201 | 134 | 0.064 | 0.221
Savings | 745,750 | 132 | 709,873 | 134 | 35,877 | 0.864
Borrowing | 479,697 | 132 | 527,612 | 134 | −47,915 | 0.620
Loan use | 0.515 | 132 | 0.522 | 134 | −0.007 | 0.906
Initiated agricultural activity | 0.455 | 132 | 0.403 | 134 | 0.052 | 0.397

Notes: This table summarises the control variables of the trained individuals, collected at baseline. The P-values in the final column are the outcome of a simple t-test comparing respondents from the two treatment arms.
Table 1b: Summary of Data for the Untrained Group Members

Variable | Incentivized group (1), N = 200 | Non-incentivized group (2), N = 194 | Control (3), N = 100 | P-value 1 = 2 | P-value 1 = 3 | P-value 2 = 3
Age | 37.000 | 38.165 | 37.660 | 0.372 | 0.666 | 0.734
Gender (male) | 0.235 | 0.237 | 0.220 | 0.961 | 0.772 | 0.743
Married/Engaged | 0.615 | 0.686 | 0.710 | 0.143 | 0.105 | 0.668
Education | 3.295 | 3.046 | 3.200 | 0.126 | 0.624 | 0.437
Tribe (Muganda) | 0.810 | 0.825 | 0.850 | 0.706 | 0.394 | 0.584

Notes: This table summarises the control variables of the untrained individuals from the three experimental arms, collected at endline. The P-values in the final three columns are the outcome of simple t-tests comparing respondents across the treatment arms.

Turning to the analysis, we first test whether the provision of incentives affects the effort that respondents invest in the training, and compare the financial literacy test scores of (trained) respondents across the two treatment groups. Specifically, we first regress the index score of trained respondent i (i = 1,…,7) in group j (j = 1,…,40) on the incentive dummy $D_j$ and on vectors of individual controls and group variables, $X_{ij}$ and $Z_j$ respectively:

$$\text{Score}_{ij} = \alpha + \beta D_j + \delta X_{ij} + \gamma Z_j + \varepsilon_{ij} \qquad (1)$$

If respondents engage more intensively with the training content when they are incentivized to share knowledge, then we will find $\beta > 0$. Following BenYishay and Mobarak (2016), we hypothesise that incentivized respondents work harder because they view the training as an investment opportunity. We use OLS to estimate model (1) and cluster standard errors at the group level.10 To account for the censored nature of our dependent variable we also estimate Tobit models, and to account for the ordinal nature of our dependent variable (index scores taking values 0,…,7) we also estimate ordered Probit models.

Next, we turn to our main research questions and use the endline data to test whether incentives affect knowledge diffusion. For this analysis we include the individuals from the control group. We regress the index score of untrained group member k (k = 1,…,10) in group z (z = 1,…,50) on the same variables as above:

$$\text{Score}_{kz} = \alpha + \beta_1 D_{1z} + \beta_2 D_{2z} + \delta X_{kz} + \gamma Z_z + \varepsilon_{kz}, \qquad (2)$$

where $D_1$ is a dummy taking the value one for members of any treatment group, and $D_2$ is a dummy taking the value one for members of the incentive group (so $D_2$ identifies a sub-group of $D_1$). Members of the control group are the omitted category. The estimated coefficient $\beta_1$ captures the effect of the conventional training intervention on untrained members, and the coefficient $\beta_2$ captures the additional effect of incentivizing group members to share knowledge (so that the total sharing effect for the incentivized group amounts to $\beta_1 + \beta_2$). As robustness tests we again estimate Tobit and ordered Probit models. Note that our experimental design allows us to pick up the total effect of performance fees on diffusion, which may take place via two channels: (i) additional effort of the trained group members during the training (as discussed above and captured in (1)), and (ii) additional effort of the trained group members after the training (time spent teaching their peers).11
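To fix ideas, the following is a minimal sketch of this estimation strategy in Python using statsmodels: models (1) and (2) as OLS with standard errors clustered at the group level. File names, column names and the data layout are hypothetical, and the Tobit and ordered Probit robustness checks are omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data files: one row per trained respondent (post-training test)
# and one row per untrained respondent (endline test), with a group identifier.
trained = pd.read_csv("trained_posttest.csv")
untrained = pd.read_csv("endline_untrained.csv")

# Rescale the raw index score (0-7) to a 0-100 scale, as in the tables.
trained["score_pct"] = trained["raw_score"] / 7 * 100
untrained["score_pct"] = untrained["raw_score"] / 7 * 100

# Model (1): effect of the incentive on trained respondents' post-training scores.
m1 = smf.ols("score_pct ~ incentive + age + education + group_size + group_age",
             data=trained).fit(cov_type="cluster",
                               cov_kwds={"groups": trained["group_id"]})

# Model (2): diffusion to untrained members. D1 = 1 for any treatment group,
# D2 = 1 for the incentive groups (a subset of D1); control groups are omitted.
m2 = smf.ols("score_pct ~ D1 + D2 + age + education + group_size + group_age",
             data=untrained).fit(cov_type="cluster",
                                 cov_kwds={"groups": untrained["group_id"]})

print(m1.summary())
print(m2.summary())
```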
Finally, we try to probe the knowledge diffusion process in a bit more depth. Group members are free to approach each other and to invest time in either teaching the other, or learning from the other. What sort of 'matching' occurs in the setting we study? We are especially interested in establishing whether social proximity fosters diffusion, and therefore ask whether assortative matching occurs in the experiment. We assess the extent to which peer teaching occurs along certain demographic lines, and ask whether incentivizing trained respondents affects their propensity to teach peers with whom they share fewer characteristics. The characteristics we consider are age (young versus old), gender, education level and tribal affiliation.

5. Empirical results

5.1 Incentives and accumulation of knowledge: effort during the training

We first ask whether the promise of performance-based fees affects the effort of respondents during the training. This would be consistent with the finding of BenYishay and Mobarak (2016), who documented that incentivized farmers are more likely to adopt the innovation themselves. To explore whether performance-based incentives affect effort during the training, we compare knowledge scores of trained respondents in the incentive treatment arm and the conventional training arm. These data were collected shortly after finalising the training, so they should only reflect the effect of the incentive on the accumulation of knowledge. Regression results of model (1) are reported in Table 2.
Table 2: Incentives and Knowledge Accumulation of Trained Group Members
Dependent variable: scores (percent)

Variable | (1) OLS | (2) OLS | (3) OLS | (4) Tobit | (5) Tobit | (6) Tobit | (7) Ordered probit | (8) Ordered probit | (9) Ordered probit
Incentive dummy | 15.652*** (2.542) | 15.192*** (2.591) | 15.254*** (2.580) | 15.759*** (2.533) | 15.299*** (2.547) | 15.357*** (2.526) | 0.121*** (0.026) | 0.126*** (0.0265) | 0.127*** (0.026)
Age (Trained) | — | −0.181 (0.365) | −0.172 (0.368) | — | −0.178 (0.362) | −0.169 (0.364) | — | −0.0005 (0.003) | −0.0004 (0.003)
Education (Trained) | — | 6.308*** (1.611) | 6.249*** (1.656) | — | 6.301*** (1.603) | 6.240*** (1.643) | — | 0.059*** (0.016) | 0.059*** (0.016)
Budget before spending | — | 0.450 (0.722) | 0.446 (0.736) | — | 0.425 (0.717) | 0.419 (0.729) | — | 0.004 (0.006) | 0.004 (0.007)
Take notes of expenses | — | −1.771** (0.784) | −1.765** (0.775) | — | −1.781** (0.775) | −1.775** (0.763) | — | −0.015** (0.007) | −0.015** (0.007)
Take notes of savings | — | 0.213 (0.819) | 0.210 (0.818) | — | 0.216 (0.814) | 0.216 (0.810) | — | 0.002 (0.007) | 0.002 (0.007)
Savings (in logs) | — | 1.367 (0.955) | 1.387 (0.949) | — | 1.337 (0.949) | 1.356 (0.940) | — | 0.010 (0.008) | 0.010 (0.008)
Initiated agricultural activity | — | −1.497 (2.492) | −1.531 (2.506) | — | −1.627 (2.489) | −1.665 (2.497) | — | −0.006 (0.021) | −0.006 (0.021)
Group size | — | — | 0.153 (0.404) | — | — | 0.141 (0.396) | — | — | 0.001 (0.003)
Group age | — | — | 0.341 (1.462) | — | — | 0.382 (1.446) | — | — | 0.003 (0.012)
Constant | 50.906*** (1.763) | 26.209 (19.856) | 20.343 (24.080) | 50.800*** (1.753) | 26.591 (19.672) | 20.924 (23.746) | — | — | —
N | 266 | 261 | 261 | 266 | 261 | 261 | 266 | 261 | 261
R-squared | 0.120 | 0.176 | 0.177 | — | — | — | — | — | —
Pseudo-R2 | — | — | — | 0.014 | 0.021 | 0.021 | 0.036 | 0.055 | 0.055

Notes: Scores in models 1–6 are calculated as $\text{score}_{ij}/7 \times 100$, and in columns 7–9 we construct ordered categories for the (rounded) scores. Clustered standard errors at the group level are reported in brackets. ***P < 0.01, **P < 0.05 and *P < 0.1.
In columns (1–3) we present the results of OLS models, in columns (4–6) we present results based on the Tobit estimator, and in columns (7–9) we use the ordered Probit estimator. For each estimator we first consider a parsimonious specification, and then estimate models including the vectors of controls (respondent and group variables, respectively). Across all nine models we find positive coefficients associated with the incentive dummy, and in all models these coefficients are significantly different from zero at the 1% level. The estimated coefficient is stable across specifications, which is of course what we would expect (given that, by design, treatment status is uncorrelated with individual or group characteristics). These models reveal that the promise of a performance-based incentive increases the effort (attention) that training participants devote to grasping the training content. The impact of the fee on effort is also economically meaningful: OLS and Tobit estimates of the parsimonious models suggest that incentives increase post-training test scores by about 30%. Not surprisingly, perhaps, we also find that better-educated respondents achieve higher knowledge scores.

5.2 Incentives and diffusion of knowledge

How well did the untrained respondents do on the various questions in our financial literacy test? For individuals from the control group (i.e., those without trained peers), the test questions were difficult and the average score was only 43%. Observe that the knowledge level of respondents from the control group is tightly estimated. We now analyse how incentives affect the diffusion of knowledge by comparing financial knowledge levels of 'other group members' across the two treatment arms and the control arm. The reduced-form models we estimate capture the joint impact of incentives on the effort of trained respondents during the training (discussed above) as well as their additional effort after the training—sharing the content of the training with other group members.

Before presenting our results in a regression framework, we first present histograms displaying the number of other group members achieving each test score, split out between untrained group members from incentivized and non-incentivized groups. Figure 1 suggests that members of the control group performed worse than members of the treatment groups, and moreover that other members from groups with incentivized respondents (Panel A) tend to answer more questions correctly than group members from the conventional training arm. For example, the number of other group members scoring 6.5/7 (or 7/7) equals 16 (18) from the incentivized group, and only 9 (12) from the non-incentivized group. The number of other group members scoring 2/7 or worse equals 39 for the incentivized group, and 60 for the non-incentivized group. These patterns in the data are also evident from the regression analysis.

Figure 1: Performance on the Knowledge Test for the Three Experimental Arms.
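A minimal sketch of how the Figure 1 comparison could be reproduced is shown below; it assumes a hypothetical endline file with one row per untrained respondent and columns named score (0–7, in half-point steps) and arm.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical endline file: one row per untrained respondent, with the raw test
# score (0-7, half-point steps) and the experimental arm of the respondent's group.
endline = pd.read_csv("endline_untrained.csv")

bins = [x / 2 for x in range(0, 15)]  # bin edges 0, 0.5, ..., 7
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, arm in zip(axes, ["incentivized", "non-incentivized", "control"]):
    ax.hist(endline.loc[endline["arm"] == arm, "score"], bins=bins)
    ax.set_title(arm.capitalize())
    ax.set_xlabel("Test score (out of 7)")
axes[0].set_ylabel("Number of untrained members")
plt.tight_layout()
plt.show()
```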
Estimating model (2) yields the results reported in Table 3. Across all three sets of outcomes (OLS, Tobit, ordered Probit), we again consider a parsimonious specification first, and then estimate more 'complete' models including the vectors of controls, $X_{kz}$ and $Z_z$. The first thing to observe is that the training dummy $D_1$, associated with the two treatment arms, is consistently positive and significant across all specifications. We document significant sharing of knowledge within self-help groups: according to the parsimonious OLS and Tobit specifications, untrained group members in the treatment arms achieve knowledge scores that are some 15% higher than those of their peers in control groups. Indeed, it appears as if the sharing of knowledge within self-help groups is almost complete. According to the parsimonious OLS model, untrained members from the conventional training arm achieve knowledge scores of (43.071 + 6.450 =) 49.521, which is statistically identical to the knowledge level of trained group members (50.906, the constant in Table 2).

The second thing to observe is that, across all nine models, we find positive coefficients associated with the incentive dummy $D_2$. Moreover, these coefficients are significantly different from zero, albeit only at the 10% level in most specifications. This, we believe, is our main result: untrained members in groups with incentivized respondents accumulate more financial knowledge than untrained members from the conventional training group (and, of course, much more than members from the control group). This difference in learning across experimental arms is relatively large. When comparing the incentive effect to the knowledge level of the conventional training group, our parsimonious OLS and Tobit estimates reveal that incentives increase knowledge sharing by some 15%.12 As before, we find that knowledge sharing within the subsample of incentivized group members is fairly complete: the knowledge scores obtained by other (untrained) group members lag only slightly behind the scores of the trained group members. Specifically, from Table 3 we learn that the average untrained group member has a knowledge score of 57.1 (43.071 + 7.586 + 6.450), which should be compared to the average score of trained (and incentivized) group members (50.906 + 15.652 = 66.558, from column 1 in Table 2). In other words, untrained group members achieve scores that, on average, are no less than 85% of the scores of trained individuals. We conjecture that the slight differences in the extent to which sharing is complete across the two treatment arms may be due to increasing marginal costs (or diminishing marginal returns) to teaching and learning within self-help groups.
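The 85% figure follows directly from the point estimates just cited (all numbers are taken from column 1 of Tables 2 and 3):

```latex
\[
\frac{\text{untrained, incentive arm}}{\text{trained, incentive arm}}
  = \frac{43.071 + 6.450 + 7.586}{50.906 + 15.652}
  = \frac{57.107}{66.558} \approx 0.86 .
\]
```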
Table 3: Incentives, Training and Knowledge Diffusion of Untrained Group Members
Dependent variable: scores (percent)

Variable | (1) OLS | (2) OLS | (3) OLS | (4) Tobit | (5) Tobit | (6) Tobit | (7) Ordered probit | (8) Ordered probit | (9) Ordered probit
Training dummy | 6.450** (3.171) | 8.058*** (2.665) | 6.126** (2.958) | 6.425* (3.418) | 8.226*** (2.853) | 6.001* (3.134) | 0.024* (0.012) | 0.044*** (0.015) | 0.034** (0.017)
Training + incentives dummy | 7.586** (3.597) | 4.867* (2.624) | 4.215* (2.506) | 7.982** (3.845) | 2.853* (2.766) | 4.460* (2.602) | 0.026* (0.014) | 0.026* (0.016) | 0.023* (0.015)
Age (Untrained) | — | −0.335*** (0.084) | −0.328*** (0.083) | — | −0.396*** (0.095) | −0.387*** (0.093) | — | −0.002*** (0.001) | −0.002*** (0.001)
Education (Untrained) | — | 9.367*** (0.655) | 9.403*** (0.651) | — | 9.736*** (0.689) | 9.781*** (0.684) | — | 0.051*** (0.008) | 0.051*** (0.008)
Group size | — | — | −0.679* (0.352) | — | — | −0.710* (0.374) | — | — | −0.004** (0.002)
Group age | — | — | −1.492 (1.367) | — | — | −1.761 (1.430) | — | — | −0.008 (0.008)
Constant | 43.071*** (1.720) | 25.722*** (4.317) | 51.703*** (11.054) | 42.124*** (1.833) | 25.826*** (4.580) | 53.865*** (11.968) | — | — | —
N | 494 | 494 | 494 | 494 | 494 | 494 | 494 | 494 | 494
R-squared | 0.035 | 0.389 | 0.394 | — | — | — | — | — | —
Pseudo-R2 | — | — | — | 0.004 | 0.053 | 0.054 | 0.009 | 0.126 | 0.129

Notes: Columns 1–3 report results of an ordinary least squares regression, columns 4–6 report results of a Tobit regression, and columns 7–9 report ordered probit marginal effects. Scores in models 1–6 are calculated as $\text{score}_{kz}/7 \times 100$, and in columns 7–9 we construct ordered categories for the scores. The Training dummy takes a value of 1 if the respondent belongs to either of the trained groups, and the Training + Incentives dummy takes a value of 1 if the respondent belongs to a group that was trained and incentivized. Clustered standard errors at the group level are reported in brackets. ***P < 0.01, **P < 0.05 and *P < 0.1.
Next, we turn to the other covariates. Not surprisingly, we again find that more educated group members tend to perform better on the knowledge test. We also find that the age of the respondent matters—younger respondents score somewhat better, but this effect is not very large. Group size matters only weakly: the estimated coefficient is negative but only marginally significant. This presumably reflects that we selected a fixed number of other group members for the endline survey, which implies that the probability of picking any specific untrained group member goes down as the group gets larger. From the perspective of trained group members, this reduces the expected payoff of investing time and effort in training specific individuals.13

5.3 Who learns from whom?
We asked untrained group members to identify the individual who shared training content with them most—if anybody.14 Obviously, matching occurs endogenously, and is not based on experimental variation. We ask whether assortative matching occurs in the self-help groups, that is, whether individuals are more likely to learn from group members with similar demographic characteristics. We then compare outcomes across the treatment arms to explore whether matching processes are affected by incentives.

Table 4 reports the extent to which matching on such variables occurs. For each untrained individual we create four dummy variables capturing whether the group member that 'trained' her shared the same age category, gender, education level, and tribal affiliation. We then average the value of these binary variables across respondents to obtain a (treatment-arm-specific) measure of assortative matching intensity. We compare this actual intensity level to the predicted level of matching that would occur if untrained and trained group members matched randomly within the self-help group (a computational sketch of this comparison follows the table).

Table 4: Actual Matches Versus Possible Matches

Matches | Incentivized: Actual | Incentivized: Random | Incentivized: P-value | Non-incentivized: Actual | Non-incentivized: Random | Non-incentivized: P-value | Pooled: Actual | Pooled: Random | Pooled: P-value
Same sex | 0.670 | 0.677 | 0.731 | 0.608 | 0.588 | 0.191 | 0.640 | 0.633 | 0.591
Same age (young) | 0.380 | 0.535 | 0.000 | 0.335 | 0.497 | 0.000 | 0.358 | 0.516 | 0.000
Same education | 0.210 | 0.215 | 0.818 | 0.237 | 0.212 | 0.241 | 0.223 | 0.213 | 0.504
Same tribe | 0.715 | 0.678 | 0.115 | 0.680 | 0.669 | 0.658 | 0.698 | 0.674 | 0.151

Notes: Respondents are classified as 'young' if they are younger than the average respondent (38 years).
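To make the construction of Table 4 concrete, the sketch below computes the actual match rate and a random-matching benchmark for one demographic trait at a time. File and column names are hypothetical, and it assumes the benchmark is the within-group share of trained members who share the trait; the paper does not spell out the exact computation, so this is only an illustration.

```python
import pandas as pd

# Hypothetical files: endline untrained respondents (with the named teacher) and
# trained respondents (indexed by member id), both carrying the demographic traits.
untrained = pd.read_csv("endline_untrained.csv")   # group_id, teacher_id, traits
trained = pd.read_csv("trained_posttest.csv").set_index("member_id")

def match_rates(trait):
    """Return (actual, benchmark) match rates for one demographic trait."""
    actual, benchmark = [], []
    for _, row in untrained.dropna(subset=["teacher_id"]).iterrows():
        peers = trained[trained["group_id"] == row["group_id"]]  # trained members of her group
        if peers.empty:
            continue
        actual.append(int(trained.loc[row["teacher_id"], trait] == row[trait]))
        benchmark.append((peers[trait] == row[trait]).mean())    # expected rate under random matching
    return sum(actual) / len(actual), sum(benchmark) / len(benchmark)

for trait in ["sex", "age_group", "education_level", "tribe"]:
    print(trait, match_rates(trait))
```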
Most of the social learning occurs for pairs with the same gender—this is the case for 67% of the matches. However, this degree of assortative matching is not statistically different from the degree of matching that would occur if group members were randomly matched. Similarly, there is no evidence of assortative matching based on education level or tribal affiliation. The single demographic variable for which actual and random matching intensity differ is age, and there the opposite of assortative matching appears to occur. Closer inspection of the data revealed that older members learned from the young, who on average performed better on the knowledge tests. Interestingly, the same patterns emerge in the data for the incentivized and the non-incentivized treatment arm. We do not observe that incentives cause trained individuals to single out fellow group members who are more like themselves (i.e., more assortative matching), nor that they become less 'picky' about whom to spend time with (less assortative matching). If we regress the measured assortative matching intensity on an incentive dummy, we consistently find no significant correlation between matching intensity and the treatment dummy (results not shown, but available on request).

To some extent this is an artefact of the social context within which we study social learning. Self-help groups are not composed of randomly selected villagers, but consist of individuals who have self-selected into the group. Social capital levels within the group are likely to be high, and we expect considerable willingness to help fellow group members. This also implies the extent of social learning in these groups may not be representative of the intensity of knowledge sharing occurring in settings where individuals cannot choose their peers, such as in natural villages.

5.4 Cost effectiveness of incentivizing diffusion

Does it make economic sense to incentivize trained individuals to share knowledge with their peers? A full-blown cost–benefit analysis is beyond the scope of the current paper, as it requires a comparison of the costs of the training and incentives with the economic gains for beneficiaries—data that are currently unavailable. However, it is possible to compare the cost of raising knowledge scores across treatment arms. That is, we can use our survey data to compute the cost effectiveness of the conventional training approach and of the alternative modality that includes incentives for knowledge sharing.

Consider an 'average' self-help group in our sample, consisting of 24 group members. Of this self-help group, six members are invited to participate in the training, and the remaining 18 members remain untrained (by CBS-PEWOSA). If this group is allocated to a conventional training programme, the estimated implementation cost of the training amounts to UGS 1.1 million (according to CBS-PEWOSA data). If, instead, the group is assigned to an incentive programme, the total training costs (now including performance fees) amount to UGS 1.2 million.15 We ask whether the additional expenditure of UGS 100,000 increases or decreases the per-unit cost of knowledge transfer.

The conventional training arm achieves the following gains in terms of knowledge scores: six trained members gain an additional 7.835 points per member (compare the constant terms in column 1 of Tables 2 and 3), and sharing effects imply that 18 untrained members gain 6.450 knowledge points each. The total gain in knowledge, according to our estimates, equals 163.1 index points (or (6 × 7.835) + (18 × 6.450)), so the average cost per unit of knowledge gained amounts to UGS 6,744. We can do the same exercise for the training modality that includes incentives. We find the same training now produces a total increase in knowledge equal to 393.5 index points, with an associated average per-unit cost of only UGS 3,049.
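These back-of-the-envelope figures can be reproduced directly from the point estimates in column 1 of Tables 2 and 3; the short sketch below does so under the assumptions stated above (an average group of 24 members, 6 of whom are trained, and programme costs of UGS 1.1 million and UGS 1.2 million respectively).

```python
# Reproduce the cost-effectiveness calculation from the column 1 estimates.
GROUP_SIZE, N_TRAINED = 24, 6
N_UNTRAINED = GROUP_SIZE - N_TRAINED            # 18 untrained members

control_level = 43.071                          # constant, Table 3 column 1
trained_conventional = 50.906                   # constant, Table 2 column 1
trained_incentivized = 50.906 + 15.652          # constant + incentive dummy, Table 2
untrained_conventional_gain = 6.450             # training dummy, Table 3 column 1
untrained_incentivized_gain = 6.450 + 7.586     # training + incentive dummies, Table 3

# Conventional arm: cost UGS 1.1 million.
gain_conv = (N_TRAINED * (trained_conventional - control_level)
             + N_UNTRAINED * untrained_conventional_gain)
print(round(gain_conv, 1), round(1_100_000 / gain_conv))    # ~163.1 points, ~UGS 6,744 per point

# Incentive arm: cost UGS 1.2 million (training plus expected performance fees).
gain_incent = (N_TRAINED * (trained_incentivized - control_level)
               + N_UNTRAINED * untrained_incentivized_gain)
print(round(gain_incent, 1), round(1_200_000 / gain_incent))  # ~393.6 points (393.5 in the text,
                                                              # up to rounding), ~UGS 3,049 per point
```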
In words, providing incentives for diffusion approximately cuts the average cost of knowledge transfer in half. It therefore appears to be an attractive option for NGOs or governments with binding budget constraints—promoting diffusion is less expensive than scaling up teaching interventions. Observe that our performance fee of UGS 35,000 was chosen in a rather arbitrary fashion, so additional efficiency gains may be possible by optimising the size of the reward.

6. Discussion and conclusions

Diffusion of knowledge and innovations often seems to occur at rates that are 'too low.' As a result, advantageous behaviours and production techniques may remain limited to pockets of the overall population, with adverse effects on (economic) outcomes. Limited social learning also undermines the cost effectiveness of development interventions, possibly eroding the economic rationale for such interventions. It is important to improve our understanding of how knowledge spreads in target populations in order to enhance the efficiency and effectiveness of trainings and projects.

In this paper, we examine whether the diffusion of knowledge can be promoted by the provision of (monetary) incentives. We study social learning in the context of NGO-facilitated self-help groups into which individuals can self-select. Endogenous membership presumably implies these groups have high levels of social cohesion and social capital, or provide a setting where social learning is given the best chance to succeed. Care must be taken when extending the main insights of this study to other contexts, such as villages, where membership is (more) exogenous and inter-personal ties are presumably looser.

Our first result is evidence of knowledge diffusion even in the absence of incentives. Indeed, we find that nearly all the knowledge gained by (randomly selected) group members spills over to peers. Our second result is that incentivizing individuals to share knowledge with their group members encourages these individuals to study harder and accumulate more knowledge during the trainings. The additional increase in knowledge for trained members of the incentivized group is large: incentives raise post-training test scores by about 30% relative to those of respondents from the conventional training arm. Our third result is that incentivizing individuals has a large effect on the diffusion of knowledge. Indeed, we demonstrate that the magnitude of the sharing gain implies that the provision of incentives is a cost-effective approach to promoting knowledge diffusion.

Our design does not allow us to cleanly identify the mechanism that causes the extra knowledge sharing. The reduced-form result may be due to extra effort by the trained group members during the training (i.e., additional learning) and to extra effort after the training (i.e., additional sharing). However, our results suggest that knowledge spillovers within the conventional training groups are nearly complete: untrained group members score about as well on the knowledge test as their trained peers. This implies that a necessary condition for (further) increasing diffusion is additional effort during the training, improving the knowledge level of trained group members—which is indeed what we document. In contrast, the evidence for additional effort during the sharing stage is weaker, as untrained group members in the incentivized groups achieve, on average, only 85% of the score of their trained counterparts—rather than 100% as in the conventional group.
Future research should include an additional treatment arm to isolate the exact contributions of additional learning and additional sharing as channels driving diffusion. The result that people respond to incentives is perhaps not surprising to most economists. Nevertheless, we believe it is important to confirm and emphasise this insight in specific development settings, such as the one studied here. Too often, decision-makers assume that knowledge will spread automatically, or that poor villagers are keen to provide public goods for free. This may simply not be realistic. Examples include many extension initiatives in the domain of agriculture (e.g., farmer field schools), but also programmes in the public health sector. For example, it is estimated that this sector suffers from a shortage of 7 million professional health workers (WHO, 2013). To address this problem, so-called community health workers or community medicine distributors have been recruited in many developing countries. Such individuals typically receive a short training but no compensation for their time or effort. It should be no surprise that this lack of incentives translates into disappointing outcomes (Chami et al., 2016). We speculate that the introduction of incentives—monetary or otherwise—may help to improve the performance of development interventions across a swath of relevant domains. While it is comforting to observe that social learning can leverage the effectiveness of training interventions, and that the extent of social learning can be manipulated by incentives, it is evident that major challenges remain for practitioners seeking to put the lessons from this analysis into practice. Specifically, there are many cases and contexts where knowledge diffusion should not stop after a single round of social learning. Individuals benefitting from the knowledge imparted to them by their peers should, in turn, share this knowledge with other villagers—and so on, until the entire target population has been reached. Affecting the behaviour of 'downstream' beneficiaries via individual incentives may be far from straightforward. Exploring efficient and effective designs that promote diffusion downstream of initially trained beneficiaries is left for future research.
Acknowledgements
We would like to thank CBS-PEWOSA for their cooperation and support during the experiment.
Funding
This work was supported by a grant from the Netherlands Organization for Scientific Research (N.W.O.) (grant number 453-10-001).
Footnotes
1 This is akin to assumptions underlying models of endogenous growth, premised on the idea that knowledge is a public good so that innovations readily spread within and across countries.
2 For non-experimental work pointing to the importance of incentives for diffusion, refer to Alavi and Leidner (2001), Sorenson and Fleming (2004), and Lucas and Ogilvie (2006).
3 Financial literacy trainings are oftentimes part of a broader training agenda, aiming to promote modern business practices and entrepreneurship (e.g., Bjorvatn and Tungodden, 2010; Karlan and Valdivia, 2011; Berge et al., 2015; Giné and Mansuri, 2014). The literature on the impact of business and entrepreneurship trainings on business practices and outcomes has produced mixed results, and is summarised by McKenzie and Woodruff (2014).
4 In a lab experiment, Ariely et al. (2009) find that monetary incentives reduce the image value of prosocial behaviour and the effort committed to such behaviour by respondents.
5 We also do not know how useful the shared information is for subjects in their daily lives. However, earlier studies from other countries suggest that financial literacy may affect economic knowledge, practices and outcomes (e.g., Sayinzoga et al., 2016).
6 Because the content of the test was unknown to the training participants, they could not collude with their peers and 'teach to the test' in order to qualify for the performance fee. We assume there is no difference across arms in the incentives to exert effort when answering the test questions (i.e., that untrained individuals did not exert extra effort because they made side deals with trained individuals on splitting the reward). However, even if such collusion occurred, it would not affect the main finding that incentives affect diffusion.
7 To be precise, the probability of attrition was not correlated with 13 out of the 15 baseline variables we collected, but we do find that it was slightly lower for older respondents and slightly higher for respondents who recently initiated a new agricultural activity. We control for these variables in some of the regression models below.
8 Grading the tests was done blindly—without knowledge of the treatment arm to which the respondent belonged.
9 Since trained peers had some idea about when we would return to test their group members, they could try to target their teaching or diffusion efforts to the period just before the announced visit. While this does not invalidate our analysis, different results might obtain if the timing of the return visit were completely unknown.
10 Note that the number of clusters is rather small (40). It is well-known that standard asymptotic tests can over-reject with few clusters, and Cameron et al. (2008) refer to 'few' as five to thirty. While our number of clusters exceeds this range, all of our results are robust (in a qualitative sense) to using the cluster bootstrap-t procedure. The remaining regression analyses are based on 50 clusters, which we assume to be sufficiently large for standard asymptotic tests to be valid.
11 To identify the magnitude of these effects separately, one could include an additional treatment arm where trained individuals receive an incentive to diffuse knowledge, but where this incentive is announced only after the training has been completed (so that effort during the training session is unaffected by the incentive). We did not include such an arm in our experiment.
12 Wald tests on the coefficients in columns 1–3 and columns 4–6 indicate that the explanatory variables have different effects on the dependent variables; the associated F-statistics are significant.
13 Observe that group ties within larger groups may be weaker, so that (on average) the altruistic incentive to train fellow group members may also be lower. This will be the case in both treatment arms.
14 Unfortunately, we did not collect data on cases where multiple persons were involved in diffusing knowledge to untrained group members.
15 Recall that in the incentive group, 47% of the trained individuals receive a payment of UGS 35,000.
References
Alavi M., Leidner D. (2001) 'Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues', MIS Quarterly, 25: 107–36.
Ariely D., Bracha A., Meier S. (2009) 'Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially', American Economic Review, 99(1): 544–55.
Bandiera O., Rasul I. (2006) 'Social Networks and Technology Adoption in Northern Mozambique', Economic Journal, 116: 869–902.
Banerjee A., Chandrasekhar A., Duflo E., Jackson M. (2013) 'The Diffusion of Microfinance', Science, 341(6144): 363–72.
Beaman L., BenYishay A., Magruder J., Mobarak A. (2015) 'Can Network Theory Based Targeting Increase Technology Adoption?' Northwestern University: Working Paper.
Bénabou R., Tirole J. (2006) 'Incentives and Prosocial Behavior', The American Economic Review, 96: 1652–78.
Bénabou R., Tirole J. (2011) 'Identity, Morals, and Taboos: Beliefs as Assets', The Quarterly Journal of Economics, 126: 805–55.
BenYishay A., Mobarak A. M. (2016) Social Learning and Communication. National Bureau of Economic Research.
Berge L. I. O., Bjorvatn K., Tungodden B. (2015) 'Human and Financial Capital for Microenterprise Development: Evidence From a Field and Lab Experiment', Management Science, 61(4): 707–22.
Bjorvatn K., Tungodden B. (2010) 'Teaching Business in Tanzania: Evaluating Participation and Performance', Journal of the European Economic Association, 8(2/3): 561–70.
Bowles S., Polonia-Reyes S. (2012) 'Economic Incentives and Social Preferences: Substitutes or Complements?', Journal of Economic Literature, 50(2): 368–425.
Breza E. (2015) Field Experiments, Social Networks, and Development. Columbia Business School: Discussion Paper.
Bruhn M., Lara Ibarra G., McKenzie D. (2013) Why is Voluntary Financial Education so Unpopular? Experimental Evidence from Mexico. The World Bank, Development Research Group.
Cai J., De Janvry A., Sadoulet E. (2015) 'Social Networks and the Decision to Insure', American Economic Journal: Applied Economics, 7: 81–108.
Cameron A. C., Gelbach J., Miller D. (2008) 'Bootstrap-Based Improvements for Inference with Clustered Errors', Review of Economics and Statistics, 90(3): 414–27.
Chami G., Dunne D., Fenwick A., Bulte E., Clements M., Fleming F., Muheki E., Kontoleon A. (2016) 'Profiling Non-Recipients of Mass Drug Administration for Schistosomiasis and Hookworm Infections: A Comprehensive Analysis of Praziquantel and Albendazole Coverage in Community-Directed Treatment in Uganda', Clinical Infectious Diseases, 62(2): 200–7.
Cole S., Sampson T., Zia B. (2011) 'Prices or Knowledge? What Drives Demand for Financial Services in Emerging Markets?', The Journal of Finance, 66(6): 1933–67.
Conley T., Udry C. (2010) 'Learning About a New Technology: Pineapple in Ghana', The American Economic Review, 100: 35–69.
Duflo E., Kremer M., Robinson J. (2009) 'Nudging Farmers to Use Fertilizer: Theory and Experimental Evidence from Kenya', American Economic Review, 101(6): 2350–90.
Dupas P. (2014) 'Short-run Subsidies and Long-run Adoption of New Health Products: Evidence From a Field Experiment', Econometrica: Journal of the Econometric Society, 82(1): 197–228.
Feder G., Just R., Zilberman D. (1985) 'Adoption of Agricultural Innovations in Developing Countries: Survey', Economic Development and Cultural Change, 33: 255–98.
Fehr E., Falk A. (2002) 'Psychological Foundations of Incentives', European Economic Review, 46(4): 687–724.
Foster A., Rosenzweig M. (1995) 'Learning by Doing and Learning from Others: Human Capital and Technical Change in Agriculture', Journal of Political Economy, 103(6): 1176–209.
Foster A. D., Rosenzweig M. (2010) 'Microeconomics of Technology Adoption', Annual Review of Economics, 2: 395–424.
Giné X., Karlan D., Ngatia M. (2013) Social Networks, Financial Literacy and Index Insurance. Working Paper.
Giné X., Mansuri G. (2014) Money or Ideas? A Field Experiment on Constraints to Entrepreneurship in Rural Pakistan. Policy Research Working Paper 6959. Washington DC: World Bank.
Karlan D., Valdivia M. (2011) 'Teaching Entrepreneurship: Impact of Business Training on Microfinance Clients and Institutions', Review of Economics and Statistics, 93(2): 510–27.
Knowler D., Bradshaw B. (2007) 'Farmers' Adoption of Conservation Agriculture: A Review and Synthesis of Recent Research', Food Policy, 32(1): 25–48.
Landerretche O. M., Martínez C. (2013) 'Voluntary Savings, Financial Behavior, and Pension Finance Literacy: Evidence from Chile', Journal of Pension Economics and Finance, 12(3): 251–97.
Lucas L. M., Ogilvie D. (2006) 'Things are Not Always What They Seem: How Reputations, Culture, and Incentives Influence Knowledge Transfer', The Learning Organization, 13(1): 7–24.
Lusardi A., Mitchell O. S. (2007) 'Baby Boomer Retirement Security: The Roles of Planning, Financial Literacy, and Housing Wealth', Journal of Monetary Economics, 54(1): 205–24.
Lusardi A., Mitchell O. S. (2008) 'Planning and Financial Literacy: How Do Women Fare?', American Economic Review: Papers and Proceedings, 98(2): 413–7.
Manski C. (1993) 'Identification of Endogenous Social Effects: The Reflection Problem', The Review of Economic Studies, 60: 531–42.
McKenzie D., Woodruff C. (2014) 'What Are We Learning From Business Training and Entrepreneurship Evaluations Around the Developing World?', The World Bank Research Observer, 29(1): 48–82.
Munshi K. (2004) 'Social Learning in a Heterogeneous Population: Technology Diffusion in the Indian Green Revolution', Journal of Development Economics, 73: 185–213.
Oster E., Thornton R. (2012) 'Determinants of Technology Adoption: Private Value and Peer Effects in Menstrual Cup Take-up', Journal of the European Economic Association, 10(6): 1263–93.
Sayinzoga A., Bulte E., Lensink B. (2016) 'Financial Literacy and Financial Behavior: Experimental Evidence From Rural Rwanda', Economic Journal, 126: 1571–99.
Sorenson O., Fleming L. (2004) 'Science and the Diffusion of Knowledge', Research Policy, 33(10): 1615–34.
Suri T. (2011) 'Selection and Comparative Advantage in Technology Adoption', Econometrica: Journal of the Econometric Society, 79(1): 159–209.
Tustin D. H. (2010) 'An Impact Assessment of a Prototype Financial Literacy Flagship Programme in a Rural South African Setting', African Journal of Business Management, 4(9): 1894–902.
WHO (2013) A Universal Truth: No Health Without a Workforce. Geneva: World Health Organization.
© The Author(s) 2018. Published by Oxford University Press on behalf of the Centre for the Study of African Economies, all rights reserved. For Permissions, please email: journals.permissions@oup.com.

However, other relevant domains have been studied as well, including health (e.g., bed nets, deworming pills), hygiene (water purifiers, menstrual cups), and fertility (contraceptives). A recent survey of the adoption literature is provided in Foster and Rosenzweig (2010). The diffusion of technologies that are supposed to improve human welfare is imperfect, which may be explained by a range of factors including imperfect capital markets (credit and insurance), incomplete tenure arrangements (impeding the uptake of long-term investments), and perhaps behavioural factors (Duflo et al., 2009). Diffusion of innovations fits into the broader literature on peer effects. This literature distinguishes between three types of effects: pure imitation, social learning, and (behavioural) complementarities or external effects. The learning literature is perhaps the largest, in spite of well-known challenges to proper identification based on observational data. Manski (1993) coined the term 'reflection problem' to describe the difficulty of disentangling peer effects from contextual effects in social interaction models. Peers influence each other, and peer groups do not form randomly but are likely the result of homophilous peer selection. The presence of strategic considerations further complicates matters (e.g., Foster and Rosenzweig, 1995; Bandiera and Rasul, 2006; Breza, 2015). Think of learning spillovers, inviting free-riding on peers' experimental efforts, or of considerations arising in the context of competition for scarce resources, rival goods, or contested markets. Notwithstanding these challenges, the social learning literature has advanced considerably. The evidence suggests Bayesian learning through social networks can be effective and rapid when innovations have large payoffs for large swaths of the population, are easily observed, and can be applied homogeneously across space. In the early stages of the green revolution, for example, the adoption of new technology spread very quickly across large parts of India. Similarly, there is evidence of rapid social learning and diffusion in the domain of advantageous health and sanitary innovations benefitting large groups of people more or less alike (e.g., Dupas, 2014; Oster and Thornton, 2012). However, these conditions for Bayesian learning are not always met, and the evidence of widespread social learning is therefore 'decidedly mixed' (Breza, 2015). Diffusion of knowledge varies with the structure of social networks and with the position of innovators within that network (e.g., Banerjee et al., 2013; Beaman et al., 2015; Cai et al., 2015). Moreover, interventions may have heterogeneous payoffs varying with individual attributes (e.g., Suri, 2011), so information acquired by one farmer may be uninformative for his neighbour (Munshi, 2004). In this context, individuals should carefully target whose behaviour and outcomes to observe, paying special attention to people who are doing unexpectedly well or who are comparable to themselves (Conley and Udry, 2010; BenYishay and Mobarak, 2016). Finally, social learning will be incomplete and diffusion will be slow if individuals cannot easily observe the experiences of their peers. In this case, information does not automatically flow from one person to another. Instead, this will only occur if both parties invest sufficient time and effort into the knowledge-sharing process. In many cases, however, the innovator stands to gain little from spreading knowledge.
Since actively engaging in the sharing of knowledge typically entails a cost, investing a lot of effort into this process is unlikely to be privately optimal. Can innovators or early adopters be incentivized to share information with their peers—internalising learning externalities? This important issue was first analysed in the field by BenYishay and Mobarak (2016), who study the diffusion of agricultural innovations in rural Malawi: pit-planting and composting. They select different types of 'communicators' (seed nodes) and expose them to the new technology. A random subsample of these communicators receives a performance-based incentive (a bag of seeds), where performance is based on co-villagers' knowledge about (and adoption of) the new technologies. The main lessons from the study are that co-villagers are more likely to learn from communicators comparable to themselves (see above); that communicators invest more time and effort in learning about the new technology when they are incentivized to share knowledge later; and, most importantly, that providing incentives to communicators increases the flow of information and raises knowledge levels and adoption among co-villagers. Transmission of information is not automatic, and can be manipulated via economic incentives. The issue of incentivizing individuals to share knowledge speaks to a broader literature on intrinsic and extrinsic motives to engage in prosocial actions. The thrust of this literature is that the effect of incentives on prosocial behaviour may be complex. Bénabou and Tirole (2006) develop a behavioural theory that combines altruism with concerns for social reputation and self-respect. Since material or image-related incentives create doubt about the underlying motives for which good deeds are done, they may partially or fully 'crowd out' prosocial behaviour. This is referred to as the 'over-justification effect.' If respondents receive a reward for engaging in information sharing—an act of prosocial behaviour—co-villagers are unsure about the motives for sharing: was the respondent driven by altruism or by desire for the reward? The former may boost somebody's status or reputation in the village; the latter presumably would not.4 Self-image concerns suggest a similar logic applies in the absence of reputational concerns (Bénabou and Tirole, 2011). If respondents value conformity between actions and values or identity, then the self-image value of engaging in prosocial deeds is undermined by incentives. See, for example, Fehr and Falk (2002) or Bowles and Polonia-Reyes (2012) for extensive discussions of these and related issues. On theoretical grounds, the effect of incentivizing individuals to engage in prosocial activities such as information sharing is therefore ambiguous. Ultimately, it is an empirical and presumably context-specific matter whether extrinsic motives promote or discourage the diffusion of knowledge. We now turn to our experiment.
3. Experimental description
To test whether monetary incentives have a positive impact on knowledge diffusion, we organised an RCT in Uganda. We explore the diffusion of information, and do not consider whether the training content was applied by our subjects in practice.5 The experiment was conducted in conjunction with CBS-PEWOSA, a social-responsibility section of Central Broadcasting Services (CBS) radio affiliated with the Buganda kingdom.
CBS-PEWOSA aims to encourage and facilitate the formation of homogeneous self-help groups of 15–30 members in communities in the Kingdom, and then to empower these groups with skills-transfer programmes, income-generating activities, food security projects, and savings programmes. Groups evolve endogenously, and villagers may self-select into a group if they wish, provided they are accepted by other group members. Multiple groups may form in one village. CBS-PEWOSA has proven able to engage effectively with a large number of communities. Their modus operandi, via self-formed groups, creates a useful context for our study as it provides a natural reference group of peers in which to study the diffusion of knowledge. CBS-PEWOSA self-help groups receive training and support in various forms. We include 50 of these groups in an RCT with two treatment arms and one control arm. The only difference between the two treatment arms was that group members in one arm received the conventional training programme, whereas group members in the other arm received the training as well as an incentive for diffusion. Specifically, these group members were promised a monetary reward in case sufficient knowledge diffused within their own self-help group. Our RCT involved two survey waves for the trained group members from the two treatment arms. Baseline data for trained respondents were collected in October 2014, after we randomly selected 50 groups to enrol in the experiment out of a sampling frame of 153 community groups partnering with CBS-PEWOSA. All groups in the sample are from different villages. We randomly assigned 20 groups to the incentive treatment (outlined below), 20 groups to the conventional treatment arm (without incentives), and the remaining 10 groups to the control arm. From the two types of treatment groups we randomly selected several members per group to participate in a financial literacy training. Following CBS-PEWOSA practice, we used a training ratio of 4:1 (rounding to the nearest integer), so that we would, for example, train six members in a group of 22 members. In total, we invited 274 respondents to the training, of whom 136 belonged to the incentive treatment arm. Respondents were informed about their treatment status before the training began, so our design leaves open the possibility that treated beneficiaries work harder during the training in order to become more effective knowledge communicators afterwards. To incentivize diffusion, we promised participants in the incentive arm that they would receive a payment of UGS 35,000 (approximately USD 10) if they managed to share enough of the training content with their untrained CBS-PEWOSA group members. This is a significant incentive because, according to CBS-PEWOSA data, the average daily wage of their members is only UGS 6,000. Specifically, a participant qualified for the payment if a randomly selected group member (i) passed a financial literacy test, and (ii) identified that particular participant as her primary source of information (i.e., as the communicator). We did not inform training participants about the number of CBS-PEWOSA group members that would be tested, nor about the content of the test or the relevant knowledge threshold.6 Henceforth, we will refer to group members who did not participate in the training themselves as 'other group members' or untrained members. The financial literacy training was conducted in January 2015 by CBS-PEWOSA field officers using the CBS-PEWOSA financial literacy manual.
The training consisted of six sessions, lasting from 9:00 a.m. to 3:00 p.m. (with a one-hour lunch break). The training covered basic topics such as keeping financial records, budgeting, savings, and loan management. An outline of the contents of the training sessions is provided in Appendix 1. During the training, eight subjects dropped out of the sample (four from the incentivized arm and four from the conventional arm), leaving us with 266 trained subjects. Attrition status was not correlated with most of the baseline characteristics, nor with the treatment dummies (results not shown but available on request).7 Individuals dropping out of the study were not replaced because the training programme had typically already progressed several sessions. After the training, participants in both experimental arms took a financial literacy test to gauge knowledge levels. The test was based on Sayinzoga et al. (2016) and Lusardi and Mitchell (2007), and is included in Appendix 2. We construct a knowledge index score by awarding 1 point to correct answers, ½ point to partially correct answers, and 0 points to wrong answers (and then convert the score into an outcome on a scale from 0 to 100).8 All participants were encouraged to share what they had learned with other members of their CBS-PEWOSA group, and were informed that we would revisit the CBS-PEWOSA group after some 10 months to measure the extent to which knowledge-sharing had actually occurred (the exact date of the return visit had not yet been set and was not announced).9 In October 2015, we revisited the 40 CBS-PEWOSA groups and organised a follow-up survey among a random subsample of untrained members. All self-help groups still existed, so there was no attrition at the group level. We randomly selected 10 members per group, and 394 untrained members participated in the second wave of the study for a cross-section analysis—we did not contact these households at the baseline (nor did we measure their baseline financial literacy). Of these other group members, 200 were from groups with incentivized peers and 194 from groups with non-incentivized peers. Logistical reasons prevented us from engaging six members from two of the conventional training groups. We have no reason to believe these six drop-outs are different from the respondents in the sample. During this survey wave we collected some data on demographics as well as the level of financial literacy. To measure knowledge we used the same test that was administered to the trained individuals. We also asked untrained group members to identify the person who shared training content with them. If the untrained member passed the test, the payment was provided to the trained peer identified as the individual engaged in knowledge diffusion. We did not receive complaints from trained group members about the payment stage, suggesting that untrained group members correctly identified the individuals who shared information with them. Trained group members who were identified more than once were still paid the fee only once. We paid the performance fee to 47% of the incentivized group members. Finally, at the endline we also collected financial knowledge data among members of other groups that did not participate in either of the treatment arms (i.e., from 'the control arm').
Specifically, we visited the remaining 10 groups that did not participate in the training intervention, and surveyed 10 random members per self-help group to assess their financial knowledge (and collected data on some simple demographic variables). Data from this control group provide the benchmark knowledge level against which knowledge gains and the cost-effectiveness of the training and incentive interventions will be assessed.
4. Data and identification strategy
In Table 1a we summarise basic demographic information on trained group members, distinguishing between respondents from incentivized and non-incentivized training groups. There are no statistically significant differences between the groups (P-values from simple t-tests are reported). On average, respondents are less than 40 years old, and the majority of them are married, employed in the private sector, and have completed lower secondary school. Table 1b does the same for the samples of untrained group members, and includes the group of respondents from the control arm. Observe that for the untrained group members we collected data on fewer variables. Again, there are no significant differences across the treatment arms. We include controls in some specifications to increase the precision of our regression estimates.
Table 1a: Summary of the Data for the Trained Group Members
(Each row reports: incentivized group mean, N; non-incentivized group mean, N; difference; P-value.)
Age: 37.515, 132; 37.649, 134; 0.134; 0.763
Gender (male): 0.205, 132; 0.209, 134; 0.004; 0.930
Married/Engaged: 0.621, 132; 0.619, 134; −0.002; 0.976
Education: 3.258, 132; 3.269, 134; 0.011; 0.897
Tribe (Muganda): 0.818, 132; 0.858, 134; 0.040; 0.377
Religion (Non-Christians): 0.182, 132; 0.172, 134; 0.010; 0.829
Own land: 0.833, 132; 0.851, 134; −0.017; 0.698
Budget before spending: 0.379, 132; 0.313, 134; 0.065; 0.264
Take notes of expenses: 0.288, 132; 0.231, 134; 0.057; 0.295
Take notes of savings: 0.462, 132; 0.500, 134; −0.038; 0.538
Deposit in a bank: 0.265, 132; 0.201, 134; 0.064; 0.221
Savings: 745,750, 132; 709,873, 134; 35,877; 0.864
Borrowing: 479,697, 132; 527,612, 134; −47,915; 0.620
Loan use: 0.515, 132; 0.522, 134; −0.007; 0.906
Initiated agricultural activity: 0.455, 132; 0.403, 134; 0.052; 0.397
Notes: This table summarises the control variables of the trained individuals, collected at baseline. The P-values in the final column are the outcome of a simple t-test comparing respondents from the two treatment arms.
Table 1b: Summary of Data for the Untrained Group Members
(Each row reports means for the incentivized group (N = 200), the non-incentivized group (N = 194) and the control group (N = 100), followed by P-values for the pairwise comparisons 1 = 2, 1 = 3 and 2 = 3.)
Age: 37.000; 38.165; 37.660; 0.372; 0.666; 0.734
Gender (male): 0.235; 0.237; 0.220; 0.961; 0.772; 0.743
Married/Engaged: 0.615; 0.686; 0.710; 0.143; 0.105; 0.668
Education: 3.295; 3.046; 3.200; 0.126; 0.624; 0.437
Tribe (Muganda): 0.810; 0.825; 0.850; 0.706; 0.394; 0.584
Notes: This table summarises the control variables of the untrained individuals from the three experimental arms, collected at endline. The P-values in the final three columns are the outcome of a simple t-test comparing respondents across the treatment arms.
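For transparency about what these balance checks involve, a minimal sketch is given below. The data frame and column names are hypothetical placeholders; the snippet simply runs the kind of pairwise t-tests summarised in Tables 1a and 1b.

```python
# Minimal sketch of the balance checks in Tables 1a and 1b: simple two-sample
# t-tests comparing characteristics across two experimental arms.
# Column names ('arm', 'age', ...) are hypothetical, not from the study's data files.
import pandas as pd
from scipy import stats

def balance_table(df, arm_col, group_a, group_b, variables):
    """Return means, differences and t-test P-values for each variable."""
    rows = []
    for var in variables:
        a = df.loc[df[arm_col] == group_a, var].dropna()
        b = df.loc[df[arm_col] == group_b, var].dropna()
        res = stats.ttest_ind(a, b)  # simple t-test, as in the notes to Table 1a
        rows.append({
            "variable": var,
            f"mean_{group_a}": a.mean(),
            f"mean_{group_b}": b.mean(),
            "difference": b.mean() - a.mean(),
            "p_value": res.pvalue,
        })
    return pd.DataFrame(rows)

# Example (hypothetical): balance_table(baseline, "arm", "incentivized",
#                                       "non_incentivized", ["age", "male", "education"])
```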
Turning to analysis, we first test whether the provision of incentives affects the effort that respondents invest in the training, and compare the financial literacy test scores of (trained) respondents across the two treatment groups. Specifically, we first regress the index score of trained respondent i (i = 1,…,7) in group j (j = 1,…,40) on the incentive dummy D_j and on vectors of individual controls and group-level variables, X_{ij} and Z_j respectively:

Score_{ij} = \alpha + \beta D_j + \delta X_{ij} + \gamma Z_j + \varepsilon_{ij}   (1)

If respondents engage more intensively with the training content when they are incentivized to share knowledge, then we will find \beta > 0. Following BenYishay and Mobarak (2016), we hypothesise that incentivized respondents work harder because they view the training as an investment opportunity. We use OLS to estimate model (1) and cluster standard errors at the group level; a sketch of this estimation approach is given below.10 To account for the censored nature of our dependent variable we also estimate Tobit models, and to account for its ordinal nature (index scores taking values 0,…,7) we also estimate ordered Probit models. Next, we turn to our main research questions and use the endline data to test whether incentives affect knowledge diffusion. For this analysis we include the individuals from the control group. We regress the index score of untrained group member k (k = 1,…,10) in group z (z = 1,…,50) on the same variables as above:

Score_{kz} = \alpha + \beta_1 D_{1z} + \beta_2 D_{2z} + \delta X_{kz} + \gamma Z_z + \varepsilon_{kz},   (2)

where D_1 is a dummy taking the value one for members of any treatment group, and D_2 is a dummy taking the value one for members of the incentive group (so D_2 identifies a sub-group of D_1). Members of the control group are the omitted category. The estimated coefficient \beta_1 captures the effect of the conventional training intervention on untrained members, and the coefficient \beta_2 captures the additional effect of incentivizing group members to share knowledge (so that the total sharing effect for the incentivized group amounts to \beta_1 + \beta_2). As robustness tests we again estimate Tobit and ordered Probit models. Note that our experimental design allows us to pick up the total effect of performance fees on diffusion, which may take place via two channels: (i) additional effort of the trained group members during the training (as discussed above and captured in (1)), and (ii) additional effort of the trained group members after the training (time spent teaching their peers).11 Finally, we try to probe the knowledge diffusion process in a bit more depth.
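The following minimal sketch shows how specifications (1) and (2) could be estimated by OLS with group-clustered standard errors; the data frames and variable names are hypothetical placeholders, and this is an illustration rather than the authors' own code.

```python
# Sketch of estimating models (1) and (2) by OLS with standard errors clustered
# at the group level. Variable names (score, incentive, training, group_id, ...)
# are hypothetical; data frames are assumed to have no missing values.
import statsmodels.formula.api as smf

def fit_model_1(trained_df):
    """Model (1): test scores of trained members on the incentive dummy plus controls."""
    m = smf.ols("score ~ incentive + age + education + group_size", data=trained_df)
    return m.fit(cov_type="cluster", cov_kwds={"groups": trained_df["group_id"]})

def fit_model_2(endline_df):
    """Model (2): scores of untrained members on training (D1) and incentive (D2) dummies."""
    m = smf.ols("score ~ training + training_incentive + age + education + group_size",
                data=endline_df)
    return m.fit(cov_type="cluster", cov_kwds={"groups": endline_df["group_id"]})

# The Tobit and ordered probit robustness checks would replace the OLS call with the
# corresponding estimators (e.g., an ordered model via statsmodels' OrderedModel).
```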
Group members are free to approach each other and invest time in either teaching the other or learning from the other. What sort of 'matching' occurs in the setting we study? We are especially interested in establishing whether social proximity fosters diffusion, and therefore ask whether assortative matching occurs in the experiment. We assess the extent to which peer-teaching occurs along certain demographic lines, and ask whether incentivizing trained respondents affects their propensity to teach peers with whom they share fewer characteristics. The characteristics we consider are age (young versus old), gender, education level and tribal affiliation.
5. Empirical results
5.1 Incentives and accumulation of knowledge: effort during the training
We first ask whether the promise of performance-based fees affects the effort of respondents during the training. This would be consistent with the findings of BenYishay and Mobarak (2016), who document that incentivized farmers are more likely to adopt the innovation themselves. To explore whether performance-based incentives affect effort during the training, we compare knowledge scores of trained respondents in the incentive treatment and conventional training arm. These data were collected shortly after the training was completed, so they should only reflect the effect of the incentive on the accumulation of knowledge. Regression results of model (1) are reported in Table 2.
Table 2: Incentives and Knowledge Accumulation of Trained Group Members
Dependent variable: knowledge score (per cent). Columns (1)–(3): OLS; columns (4)–(6): Tobit; columns (7)–(9): ordered probit. Each cell reports the coefficient with the clustered standard error in parentheses.
Incentive dummy: (1) 15.652*** (2.542); (2) 15.192*** (2.591); (3) 15.254*** (2.580); (4) 15.759*** (2.533); (5) 15.299*** (2.547); (6) 15.357*** (2.526); (7) 0.121*** (0.026); (8) 0.126*** (0.0265); (9) 0.127*** (0.026)
Age (Trained): (2) −0.181 (0.365); (3) −0.172 (0.368); (5) −0.178 (0.362); (6) −0.169 (0.364); (8) −0.0005 (0.003); (9) −0.0004 (0.003)
Education (Trained): (2) 6.308*** (1.611); (3) 6.249*** (1.656); (5) 6.301*** (1.603); (6) 6.240*** (1.643); (8) 0.059*** (0.016); (9) 0.059*** (0.016)
Budget before spending: (2) 0.450 (0.722); (3) 0.446 (0.736); (5) 0.425 (0.717); (6) 0.419 (0.729); (8) 0.004 (0.006); (9) 0.004 (0.007)
Take notes of expenses: (2) −1.771** (0.784); (3) −1.765** (0.775); (5) −1.781** (0.775); (6) −1.775** (0.763); (8) −0.015** (0.007); (9) −0.015** (0.007)
Take note of savings: (2) 0.213 (0.819); (3) 0.210 (0.818); (5) 0.216 (0.814); (6) 0.216 (0.810); (8) 0.002 (0.007); (9) 0.002 (0.007)
Savings (in logs): (2) 1.367 (0.955); (3) 1.387 (0.949); (5) 1.337 (0.949); (6) 1.356 (0.940); (8) 0.010 (0.008); (9) 0.010 (0.008)
Initiated agricultural activity: (2) −1.497 (2.492); (3) −1.531 (2.506); (5) −1.627 (2.489); (6) −1.665 (2.497); (8) −0.006 (0.021); (9) −0.006 (0.021)
Group size: (3) 0.153 (0.404); (6) 0.141 (0.396); (9) 0.001 (0.003)
Group age: (3) 0.341 (1.462); (6) 0.382 (1.446); (9) 0.003 (0.012)
Constant: (1) 50.906*** (1.763); (2) 26.209 (19.856); (3) 20.343 (24.080); (4) 50.800*** (1.753); (5) 26.591 (19.672); (6) 20.924 (23.746)
N: 266; 261; 261; 266; 261; 261; 266; 261; 261
R-squared: (1) 0.120; (2) 0.176; (3) 0.177
Pseudo-R2: (4) 0.014; (5) 0.021; (6) 0.021; (7) 0.036; (8) 0.055; (9) 0.055
Notes: Scores in models 1–6 are calculated as Score_{ij}/7 × 100, and in columns 7–9 we construct ordered categories for (rounded) scores. Clustered standard errors at the group level are reported in parentheses. ***P < 0.01, **P < 0.05 and *P < 0.1.
In columns (1)–(3) we present the results of OLS models, in columns (4)–(6) we present results based on the Tobit estimator, and in columns (7)–(9) we use the ordered Probit estimator. Across all models we first consider a parsimonious specification, and then estimate models including vectors of controls (respondent and group variables, respectively). Across all nine models we find positive coefficients associated with the incentive dummy, and in all models these coefficients are significantly different from zero at the 1% level. The estimated coefficient is stable across specifications, which is of course what we would expect (given that, by design, treatment status is uncorrelated with individual or group characteristics). These models reveal that the promise of a performance-based incentive increases the effort (attention) that training participants devote to grasping the training content. The impact of the fee on effort is also economically meaningful: OLS and Tobit estimates of the parsimonious models suggest that incentives increase post-training test scores by about 30%. Not surprisingly, perhaps, we also find that better-educated respondents achieve higher knowledge scores.
5.2 Incentives and diffusion of knowledge
How well did the untrained respondents do on the various questions in our financial literacy test? For individuals from the control group (i.e., without trained peers), the test questions were difficult and the average score was only 43%. Observe that the knowledge level of respondents from the control group is tightly estimated. We now analyse how incentives affect the diffusion of knowledge by comparing financial knowledge levels of 'other group members' across the experimental arms and the control arm. The reduced form models we estimate capture the joint impact of incentives on the effort of trained respondents during the training (discussed above) as well as additional effort after the training—sharing the content of the training with other group members. Before presenting our results in a regression framework, we first present histograms displaying the number of other group members achieving each test score, split between untrained group members from incentivized and non-incentivized groups. Figure 1 suggests that members of the control group performed worse than members of the treatment groups, and moreover that other members from groups with incentivized respondents (Panel A) tend to answer more questions correctly than group members from the conventional training arm. For example, the number of other group members scoring 6.5/7 (or 7/7) equals 16 (18) in the incentivized group, and only 9 (12) in the non-incentivized group. The number of other group members scoring 2/7 or worse equals 39 for the incentivized group, and 60 for the non-incentivized group. These patterns in the data are also evident from the regression analysis.
Figure 1: Performance on the Knowledge Test for the Three Experimental Arms.
Estimating model (2) provides the following results (Table 3). Across all three sets of outcomes (OLS, Tobit, ordered Probit), we again consider a parsimonious specification, and then estimate more complete models including the vectors of controls, X_{kz} and Z_z. The first thing to observe is that the training dummy D_1, associated with the two treatment arms, is consistently positive and significant across all specifications. We document significant sharing of knowledge within self-help groups: according to the parsimonious OLS and Tobit specifications, untrained group members achieve knowledge scores that are some 15% higher than those of their peers in control groups. Indeed, it appears as if the sharing of knowledge within self-help groups is almost complete. According to the parsimonious OLS model, untrained members from the conventional training arm achieve knowledge scores of (43.071 + 6.450 =) 49.521, which is statistically indistinguishable from the knowledge level of trained group members (50.906, the constant in Table 2). The second thing to observe is that, across all nine models, we find positive coefficients associated with the incentive dummy D_2. Moreover, these coefficients are significantly different from zero, albeit only at the 10% level in most specifications. This, we believe, is our main result: untrained members in groups with incentivized respondents accumulate more financial knowledge than untrained members from the conventional training group (and, of course, much more than members from the control group). This difference in learning across experimental arms is relatively large. When comparing the incentive effect to the knowledge level of the conventional training group, our parsimonious OLS and Tobit estimates reveal that incentives increase knowledge sharing by some 15%.12 As before, we find that knowledge sharing within the subsample of incentivized group members is fairly complete: the knowledge scores obtained by other (untrained) group members lag only slightly behind the scores of the trained group members. Specifically, from Table 3 we learn that the average untrained group member has a knowledge score of 57.1 (43.071 + 7.586 + 6.450), which should be compared to the average score of trained (and incentivized) group members (50.906 + 15.652 = 66.558, from column 1 in Table 2). In other words, untrained group members achieve scores that, on average, are no less than 85% of the scores of trained individuals; the short calculation below reproduces these figures. We conjecture that slight differences in the extent to which sharing is complete across the two treatment arms may be due to increasing marginal costs (or diminishing marginal returns) to teaching and learning within self-help groups.
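To make these back-of-the-envelope comparisons easy to verify, the sketch below recomputes them from the column-1 point estimates in Tables 2 and 3 (small rounding differences aside); it is an illustration, not the authors' code.

```python
# Reproducing the sharing-completeness comparison in the text from the column-1
# point estimates of Tables 2 and 3 (a back-of-the-envelope check).
control_mean           = 43.071                     # constant, Table 3, column 1
untrained_conventional = control_mean + 6.450       # + training dummy
untrained_incentivized = control_mean + 6.450 + 7.586
trained_conventional   = 50.906                     # constant, Table 2, column 1
trained_incentivized   = trained_conventional + 15.652

print(f"untrained vs trained, conventional arm: {untrained_conventional:.1f} vs "
      f"{trained_conventional:.1f}")
print(f"untrained vs trained, incentive arm: {untrained_incentivized:.1f} vs "
      f"{trained_incentivized:.1f} "
      f"({untrained_incentivized / trained_incentivized:.0%} of the trained score)")
print(f"incentive effect on trained scores: {15.652 / trained_conventional:.0%}")
```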
Table 3: Incentives, Training and Knowledge Diffusion of Untrained Group Members

Scores (percent)
                           OLS                                  Tobit                                Ordered probit
                           (1)        (2)         (3)           (4)        (5)         (6)           (7)        (8)         (9)
Training dummy             6.450**    8.058***    6.126**       6.425*     8.226***    6.001*        0.024*     0.044***    0.034**
                           (3.171)    (2.665)     (2.958)       (3.418)    (2.853)     (3.134)       (0.012)    (0.015)     (0.017)
Training + incentives      7.586**    4.867*      4.215*        7.982**    2.853*      4.460*        0.026*     0.026*      0.023*
  dummy                    (3.597)    (2.624)     (2.506)       (3.845)    (2.766)     (2.602)       (0.014)    (0.016)     (0.015)
Age (untrained)                       −0.335***   −0.328***                −0.396***   −0.387***                −0.002***   −0.002***
                                      (0.084)     (0.083)                  (0.095)     (0.093)                  (0.001)     (0.001)
Education (untrained)                 9.367***    9.403***                 9.736***    9.781***                 0.051***    0.051***
                                      (0.655)     (0.651)                  (0.689)     (0.684)                  (0.008)     (0.008)
Group size                                        −0.679*                              −0.710*                              −0.004**
                                                  (0.352)                              (0.374)                              (0.002)
Group age                                         −1.492                               −1.761                               −0.008
                                                  (1.367)                              (1.430)                              (0.008)
Constant                   43.071***  25.722***   51.703***     42.124***  25.826***   53.865***
                           (1.720)    (4.317)     (11.054)      (1.833)    (4.580)     (11.968)
N                          494        494         494           494        494         494           494        494         494
R-squared                  0.035      0.389       0.394
Pseudo-R2                                                       0.004      0.053       0.054         0.009      0.126       0.129

Notes: Columns 1–3 report ordinary least squares estimates, columns 4–6 report Tobit estimates, and columns 7–9 report ordered probit marginal effects. Scores in models 1–6 are calculated as score_ij/7 × 100, and in columns 7–9 we construct ordered categories for the scores. The Training dummy takes a value of 1 if the respondent belongs to either of the trained groups, and the Training + Incentives dummy takes a value of 1 if the respondent belongs to a group that was trained and incentivized. Clustered standard errors at the group level are reported in brackets. ***P < 0.01, **P < 0.05 and *P < 0.1.
Next, consider the other covariates. Not surprisingly, we again find that more educated group members tend to perform better on the knowledge test. We also find that the age of the respondent matters: younger respondents score somewhat better, although this effect is not very large. Interestingly, group size also matters, if only weakly: untrained members of larger groups obtain somewhat lower scores. This presumably reflects that we selected a fixed number of other group members for the endline survey, which implies that the probability of picking any specific untrained group member goes down as the group gets larger. From the perspective of trained group members, this reduces the expected payoff of investing time and effort in training specific individuals.13

5.3 Who learns from whom?
We asked untrained group members to identify the individual who shared training content with them most, if anybody.14 Obviously, matching occurs endogenously and is not based on experimental variation. We ask whether assortative matching occurs in the self-help groups, that is, whether individuals are more likely to learn from group members with similar demographic characteristics. We then compare outcomes across the treatment arms to explore whether matching processes are affected by incentives.

Table 4 reports the extent to which matching on demographic variables occurs. For each untrained individual we create four dummy variables capturing whether the group member who 'trained' her shared the same age, gender, education level, and tribal affiliation. We then average the value of these binary variables across respondents to obtain a (treatment-arm-specific) measure of assortative matching intensity. We compare this actual intensity level to the predicted level of matching that would occur if untrained and trained group members matched randomly within the self-help group.

Table 4: Actual Matches Versus Possible Matches

                      Incentivized                   Non-incentivized               Pooled
Matches           Actual   Random   P-value      Actual   Random   P-value      Actual   Random   P-value
Same sex          0.670    0.677    0.731        0.608    0.588    0.191        0.640    0.633    0.591
Same age (young)  0.380    0.535    0.000        0.335    0.497    0.000        0.358    0.516    0.000
Same education    0.210    0.215    0.818        0.237    0.212    0.241        0.223    0.213    0.504
Same tribe        0.715    0.678    0.115        0.680    0.669    0.658        0.698    0.674    0.151

Notes: 'Actual' and 'Random' refer to actual and random matching, respectively. Respondents are classified as 'young' if they are younger than the average respondent (38 years).

Most of the social learning occurs for pairs with the same gender; this is the case for about two-thirds of the matches. However, this degree of assortative matching is not statistically different from the degree of matching that would occur if group members were randomly matched. Similarly, there is no evidence of assortative matching based on education level or tribal affiliation.
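The 'random matching' benchmark in Table 4 can be operationalised by repeatedly re-matching each untrained member to a randomly drawn trained member of her own self-help group and recording the share of same-attribute pairs. The exact procedure behind the reported P-values is not spelled out here, so the following Python sketch (with hypothetical data frames and column names) is only an illustration of the idea.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def random_match_shares(untrained, trained, attribute, n_sim=10_000):
    """Simulated distribution of the share of same-attribute pairs when each
    untrained member is matched to a random trained member of her own group."""
    pools = {g: d[attribute].to_numpy() for g, d in trained.groupby("group_id")}
    groups = untrained["group_id"].to_numpy()
    attrs = untrained[attribute].to_numpy()
    shares = np.empty(n_sim)
    for s in range(n_sim):
        draws = np.array([rng.choice(pools[g]) for g in groups])
        shares[s] = np.mean(draws == attrs)
    return shares

# Example usage (hypothetical column names):
# sim = random_match_shares(untrained_df, trained_df, "sex")
# actual = (untrained_df["sex"] == untrained_df["teacher_sex"]).mean()
# p_two_sided = np.mean(np.abs(sim - sim.mean()) >= abs(actual - sim.mean()))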
The single demographic variable for which actual and random matching intensity differ is age, and here the opposite of assortative matching appears to occur. Closer inspection of the data revealed that older members learned from younger members, who on average performed better on the knowledge tests. Interestingly, the same patterns emerge in the data for the incentivized and the non-incentivized treatment arm. We do not observe that incentivizing causes trained individuals to single out fellow group members who are more like themselves (i.e., more assortative matching), nor that they become less 'picky' about whom to spend time with (less assortative matching). If we regress the measured assortative matching intensity on an incentive dummy, we find no significant correlation between matching intensity and the treatment dummy (results not shown, but available on request). To some extent this is an artefact of the social context within which we study social learning. Self-help groups are not composed of randomly selected villagers, but consist of individuals who have self-selected into the group. Social capital levels within the group are likely to be high, and we expect considerable willingness to help fellow group members. This also implies the extent of social learning in these groups may not be representative of the intensity of knowledge sharing in settings where individuals cannot choose their peers, such as natural villages.

5.4 Cost effectiveness of incentivizing diffusion

Does it make economic sense to incentivize trained individuals to share knowledge with their peers? A full-blown cost–benefit analysis is beyond the scope of the current paper, as it requires a comparison of the costs of the training and incentives to the economic gains for beneficiaries; such data are currently unavailable. However, it is possible to compare the cost of raising knowledge scores across treatment arms. That is, we can use our survey data to compute the cost effectiveness of the conventional training approach and the alternative modality that includes incentives for knowledge sharing.

Consider an 'average' self-help group in our sample, consisting of 24 group members. Of this self-help group, six members are invited to participate in the training, and the remaining 18 members remain untrained (by CBS-PEWOSA). If this group is allocated to a conventional training programme, the estimated implementation cost of the training amounts to UGS 1.1 million (according to CBS-PEWOSA data). If, instead, the group is assigned to an incentive programme, the total training costs (now including performance fees) amount to UGS 1.2 million.15 We ask whether the additional expenditure of UGS 100,000 increases or decreases the per-unit cost of knowledge transfer. The conventional training arm achieves the following gains in terms of knowledge scores: six trained members gain an additional 7.835 points per member (compare the constant terms in column 1 of Tables 2 and 3), and sharing effects imply that 18 untrained members gain 6.450 knowledge points each. The total gain in knowledge, according to our estimates, equals 163.1 index points (or (6 × 7.835) + (18 × 6.450)), so the average cost per unit of knowledge gain amounts to UGS 6,744. We can do the same exercise for the training modality that includes incentives. We find the same training now produces a total increase in knowledge equal to 393.5 index points, with an associated average per-unit cost of only UGS 3,049.
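The per-unit cost figures above follow mechanically from the coefficient estimates in column 1 of Tables 2 and 3 and the CBS-PEWOSA cost data; a short check of the arithmetic:

# Average 24-member group: 6 trained, 18 untrained members.
trained_gain_conv     = 7.835            # 50.906 - 43.071 (constants, Tables 2 and 3)
trained_gain_incent   = 7.835 + 15.652   # plus the incentive effect on trained members
untrained_gain_conv   = 6.450            # training dummy, Table 3
untrained_gain_incent = 6.450 + 7.586    # training + incentives, Table 3

gain_conv   = 6 * trained_gain_conv   + 18 * untrained_gain_conv    # about 163.1 points
gain_incent = 6 * trained_gain_incent + 18 * untrained_gain_incent  # about 393.5 points

cost_conv, cost_incent = 1_100_000, 1_200_000   # UGS per group, incl. performance fees
print(cost_conv / gain_conv, cost_incent / gain_incent)  # about 6,744 vs 3,049 UGS per point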
In words, providing incentives for diffusion approximately cuts the average cost of knowledge transfer in half. It therefore appears to be an attractive option for NGOs or governments facing binding budget constraints: promoting diffusion is less expensive than upscaling teaching interventions. Observe that our performance fee of UGS 35,000 was chosen in a rather arbitrary fashion, so additional efficiency gains may be possible by optimising the amount of the reward.

6. Discussion and conclusions

Diffusion of knowledge and innovations often seems to occur at rates that are 'too low'. As a result, advantageous behaviours and production techniques may remain limited to pockets of the overall population, with adverse effects for (economic) outcomes. Limited social learning also undermines the cost effectiveness of development interventions, possibly eroding the economic rationale for such interventions. It is important to improve our understanding of how knowledge spreads in target populations in order to enhance the efficiency and effectiveness of trainings and projects.

In this paper, we examine whether the diffusion of knowledge can be promoted by the provision of (monetary) incentives. We study social learning in the context of NGO-founded self-help groups into which individuals self-select. Endogenous membership presumably implies these groups have high levels of social cohesion and social capital, and thus provide a setting where social learning is given the best chance to succeed. Care must be taken when extending the main insights of this study to other contexts, such as villages, where membership is (more) exogenous and inter-personal ties are presumably looser.

Our first result is evidence of knowledge diffusion even in the absence of incentives. Indeed, we find that nearly all the knowledge gained by (randomly selected) group members spills over to peers. Our second result is that incentivizing individuals to share knowledge with their group members encourages these individuals to study harder and accumulate more knowledge during the trainings. The additional increase in knowledge for members of the incentivized arm is large: their post-training scores are roughly 30% higher than those of respondents from the conventional training arm. Our third result is that incentivizing individuals has a large effect on the diffusion of knowledge. Indeed, we demonstrate that the magnitude of the sharing gain implies that the provision of incentives is a cost-effective approach to promoting knowledge diffusion.

Our design does not allow us to cleanly identify the mechanism that causes the extra knowledge sharing. The reduced-form result may be due to extra effort by trained group members during the training (i.e., additional learning) or to extra effort after the training (i.e., additional sharing). However, our results suggest that knowledge spillovers within the conventional training groups are nearly complete: untrained group members score about as well on the knowledge test as their trained peers. This implies that a necessary condition for (further) increasing diffusion is additional effort during the training, improving the knowledge level of trained group members, which is indeed what we document. In contrast, the evidence for additional effort during the sharing stage is weaker, as untrained group members in the incentivized groups achieve, on average, only about 85% of the score of their trained counterparts, rather than 100% as in the conventional arm.
Future research should include an additional treatment arm to isolate the exact contributions of additional learning and additional sharing as possible channels accentuating diffusion.

The result that people respond to incentives is perhaps not surprising to most economists. Nevertheless, we believe it is important to confirm and emphasise this insight in specific development settings, such as the one studied here. Too often, decision-makers assume that knowledge will spread automatically, or that poor villagers are keen to provide public goods for free. This may simply not be realistic. Examples include many extension initiatives in the domain of agriculture (e.g., farmer field schools), but also programmes in the public health sector. For example, it is estimated that this sector suffers from a shortage of 7 million professional health workers (WHO, 2013). To address this problem, so-called community health workers or community medicine distributors have been recruited in many developing countries. Such individuals typically receive a short training but no compensation for their time or effort. It should be no surprise that this lack of incentives translates into disappointing outcomes (Chami et al., 2016). We speculate that the introduction of incentives, monetary or otherwise, may help to improve the performance of development interventions across a swath of relevant domains.

While it is comforting to observe that social learning can leverage the effectiveness of training interventions, and that the extent of social learning can be manipulated by incentives, it is evident that major challenges remain for practitioners seeking to put the lessons from this analysis into practice. Specifically, there are many cases and contexts where knowledge diffusion should not stop after a single round of social learning. Individuals benefitting from the knowledge imparted to them by their peers should, in turn, share this knowledge with other villagers, and so on, until the entire target population has been reached. Affecting the behaviour of 'downstream' beneficiaries via individual incentives may be far from straightforward. Exploring efficient and effective designs that promote diffusion downstream of initially trained beneficiaries is left for future research.

Acknowledgements

We would like to thank CBS-PEWOSA for their cooperation and support during the experiment.

Funding

This work was supported by a grant from the Netherlands Organization for Scientific Research (N.W.O.) (grant number 453-10-001).

Footnotes

1 This is akin to assumptions underlying models of endogenous growth, premised on the idea that knowledge is a public good so that innovations readily spread within and across countries.

2 For non-experimental work pointing at the importance of incentives for diffusion, refer to Alavi and Leidner (2001), Sorenson and Fleming (2004), and Lucas and Ogilvie (2006).

3 Financial literacy trainings are oftentimes part of a broader training agenda, aiming to promote modern business practices and entrepreneurship (e.g., Bjorvatn and Tungodden, 2010; Karlan and Valdivia, 2011; Berge et al., 2015; Giné and Mansuri, 2014). The literature on the impact of business and entrepreneurship trainings on business practices and outcomes has produced mixed results, and is summarised by McKenzie and Woodruff (2014).

4 In a lab experiment, Ariely et al. (2009) find that monetary incentives reduce the image value of prosocial behaviour and the effort committed to such behaviour by respondents.
5 We also do not know how useful the shared information is for subjects in their daily lives. However, earlier studies from other countries suggest that financial literacy may affect economic knowledge, practices and outcomes (e.g., Sayinzoga et al., 2016).

6 Because the content of the test was unknown to the training participants, they could not collude with their peers and 'teach to the test' in order to qualify for the performance fee. We assume there is no difference in the incentives to exert effort in answering the test questions (or that untrained individuals did not exert extra effort because they made side deals with trained individuals on splitting any reward). However, even if such collusion occurs, it would not affect the main story that incentives affect diffusion.

7 To be precise, the probability of attrition was not correlated with 13 out of 15 baseline variables we collected, but we do find that it was slightly lower for older respondents and slightly higher for respondents who had recently initiated a new agricultural activity. We control for these variables in some of the regression models below.

8 Grading of the tests was done blindly, without knowledge of the treatment arm to which the respondent belonged.

9 Since trained peers had some idea of when we would return to test their group members, they could try to target their teaching or diffusion efforts to the period just before the announced visit. While this does not invalidate our analysis, different results might obtain if the timing of the return visit were completely unknown.

10 Note that the number of clusters is rather small (40). It is well known that standard asymptotic tests can over-reject with few clusters, and Cameron et al. (2008) refer to 'few' as five to thirty. While our number of clusters exceeds the minimum value implied by this interpretation, all of our results are robust (in a qualitative sense) to using the cluster bootstrap-t procedure. The remaining regression analyses are based on 50 clusters, which we assume to be sufficiently large for standard asymptotic tests to be valid.

11 To identify the magnitude of these effects separately, one could include an additional treatment arm where trained individuals receive an incentive to diffuse knowledge, but where this incentive is announced after the training has been completed (so that effort during the training session is unaffected by the incentive). We have not done this in our experiment.

12 The Wald tests on the coefficients in columns 1–3 and columns 4–6 indicate that the explanatory variables have different effects on the dependent variables. This is evident from the F-statistics, which are significant.

13 Observe that group ties within larger groups may be weaker, so that (on average) the altruistic incentive to train fellow group members may also be lower. This will be the case in both treatment arms.

14 Unfortunately, we did not collect data on multiple persons involved in diffusing knowledge to untrained group members.

15 Recall that in the incentive group, 47% of the trained individuals receive a payment of UGS 35,000.

References

Alavi M., Leidner D. (2001) 'Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues', MIS Quarterly, 25: 107–36.

Ariely D., Bracha A., Meier S. (2009) 'Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially', American Economic Review, 99(1): 544–55.
Bandiera O., Rasul I. (2006) 'Social Networks and Technology Adoption in Northern Mozambique', Economic Journal, 116: 869–902.

Banerjee A., Chandrasekhar A., Duflo E., Jackson M. (2013) 'The Diffusion of Microfinance', Science, 341(6144): 363–72.

Beaman L., BenYishay A., Magruder J., Mobarak A. (2015) 'Can Network Theory Based Targeting Increase Technology Adoption?' Working Paper, Northwestern University.

Benabou R., Tirole J. (2011) 'Identity, Morals, and Taboos: Beliefs as Assets', The Quarterly Journal of Economics, 126: 805–55.

Bénabou R., Tirole J. (2006) 'Incentives and Prosocial Behavior', The American Economic Review, 96: 1652–78.

BenYishay A., Mobarak A. M. (2016) Social Learning and Communication. National Bureau of Economic Research.

Berge L. I. O., Bjorvatn K., Tungodden B. (2015) 'Human and Financial Capital for Microenterprise Development: Evidence From a Field and Lab Experiment', Management Science, 61(4): 707–22.

Bjorvatn K., Tungodden B. (2010) 'Teaching Business in Tanzania: Evaluating Participation and Performance', Journal of the European Economic Association, 8(2/3): 561–70.

Breza E. (2015) Field Experiments, Social Networks, and Development. Discussion Paper, Columbia Business School.

Bruhn M., Lara Ibarra G., McKenzie D. (2013) Why is Voluntary Financial Education so Unpopular? Experimental Evidence from Mexico. The World Bank, Development Research Group.

Bowles S., Polonia-Reyes S. (2012) 'Economic Incentives and Social Preferences: Substitutes or Complements?', Journal of Economic Literature, 50(2): 368–425.

Cai J., De Janvry A., Sadoulet E. (2015) 'Social Networks and the Decision to Insure', American Economic Journal: Applied Economics, 7: 81–108.

Cameron A. C., Gelbach J., Miller D. (2008) 'Bootstrap-Based Improvements for Inference with Clustered Errors', Review of Economics and Statistics, 90(3): 414–27.

Chami G., Dunne D., Fenwick A., Bulte E., Clements M., Fleming F., Muheki E., Kontoleon A. (2016) 'Profiling Non-Recipients of Mass Drug Administration for Schistosomiasis and Hookworm Infections: A Comprehensive Analysis of Praziquantel and Albendazole Coverage in Community-Directed Treatment in Uganda', Clinical Infectious Diseases, 62(2): 200–7.

Cole S., Sampson T., Zia B. (2011) 'Prices or Knowledge? What Drives Demand for Financial Services in Emerging Markets?', The Journal of Finance, 66(6): 1933–67.

Conley T., Udry C. (2010) 'Learning About a New Technology: Pineapple in Ghana', The American Economic Review, 100: 35–69.

Duflo E., Kremer M., Robinson J. (2009) 'Nudging Farmers to Use Fertilizer: Theory and Experimental Evidence from Kenya', American Economic Review, 101(6): 2350–90.

Dupas P. (2014) 'Short-run Subsidies and Long-run Adoption of New Health Products: Evidence From a Field Experiment', Econometrica: Journal of the Econometric Society, 82(1): 197–228.
Feder G., Just R., Zilberman D. (1985) 'Adoption of Agricultural Innovations in Developing Countries: A Survey', Economic Development & Cultural Change, 33: 255–98.

Fehr E., Falk A. (2002) 'Psychological Foundations of Incentives', European Economic Review, 46(4): 687–724.

Foster A., Rosenzweig M. (1995) 'Learning by Doing and Learning from Others: Human Capital and Technical Change in Agriculture', Journal of Political Economy, 103(6): 1176–209.

Foster A. D., Rosenzweig M. (2010) 'Microeconomics of Technology Adoption', Annual Review of Economics, 2: 395–424.

Giné X., Karlan D., Ngatia M. (2013) Social Networks, Financial Literacy and Index Insurance. Working Paper.

Giné X., Mansuri G. (2014) Money or Ideas? A Field Experiment on Constraints to Entrepreneurship in Rural Pakistan. Policy Research Working Paper 6959. Washington DC: World Bank.

Karlan D., Valdivia M. (2011) 'Teaching Entrepreneurship: Impact of Business Training on Microfinance Clients and Institutions', Review of Economics and Statistics, 93(2): 510–27.

Knowler D., Bradshaw B. (2007) 'Farmers' Adoption of Conservation Agriculture: A Review and Synthesis of Recent Research', Food Policy, 32(1): 25–48.

Landerretche O. M., Martínez C. (2013) 'Voluntary Savings, Financial Behavior, and Pension Finance Literacy: Evidence from Chile', Journal of Pension Economics and Finance, 12(3): 251–97.

Lucas L. M., Ogilvie D. (2006) 'Things are Not Always What They Seem: How Reputations, Culture, and Incentives Influence Knowledge Transfer', The Learning Organization, 13(1): 7–24.

Lusardi A., Mitchell O. S. (2007) 'Baby Boomer Retirement Security: The Roles of Planning, Financial Literacy, and Housing Wealth', Journal of Monetary Economics, 54(1): 205–24.

Lusardi A., Mitchell O. S. (2008) 'Planning and Financial Literacy: How Do Women Fare?', American Economic Review: Papers and Proceedings, 98(2): 413–7.

Manski C. (1993) 'Identification of Endogenous Social Effects: The Reflection Problem', The Review of Economic Studies, 60: 531–42.

McKenzie D., Woodruff C. (2014) 'What Are We Learning From Business Training and Entrepreneurship Evaluations Around the Developing World?', The World Bank Research Observer, 29(1): 48–82.

Munshi K. (2004) 'Social Learning in a Heterogeneous Population: Technology Diffusion in the Indian Green Revolution', Journal of Development Economics, 73: 185–213.

Oster E., Thornton R. (2012) 'Determinants of Technology Adoption: Private Value and Peer Effects in Menstrual Cup Take-up', Journal of the European Economic Association, 10(6): 1263–93.

Sayinzoga A., Bulte E., Lensink B. (2016) 'Financial Literacy and Financial Behavior: Experimental Evidence From Rural Rwanda', Economic Journal, 126: 1571–99.

Sorenson O., Fleming L. (2004) 'Science and the Diffusion of Knowledge', Research Policy, 33(10): 1615–34.

Suri T. (2011) 'Selection and Comparative Advantage in Technology Adoption', Econometrica: Journal of the Econometric Society, 79(1): 159–209.

Tustin D. H. (2010) 'An Impact Assessment of a Prototype Financial Literacy Flagship Programme in a Rural South African Setting', African Journal of Business Management, 4(9): 1894–1902.

WHO (2013) A Universal Truth: No Health Without a Workforce. Geneva: World Health Organization.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Centre for the Study of African Economies, all rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).
