Review of Finance, Volume Advance Article – Nov 7, 2017

37 pages


- Publisher: Oxford University Press
- Copyright: © The Authors 2017. Published by Oxford University Press on behalf of the European Finance Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
- ISSN: 1572-3097
- eISSN: 1573-692X
- DOI: 10.1093/rof/rfx052

Abstract

We conduct a series of forecasting experiments to examine how people update their beliefs upon observing others’ forecasts. Subjects exhibit “cursedness,” that is, a propensity to underestimate the link between others’ forecasts and others’ information, which causes subjects to underreact. The behavior of sophisticated subjects is not affected by the framing of information, but unsophisticated subjects switch from underreaction to overreaction when they are only provided qualitative (rather than quantitative) forecast information. Our results have important implications for the way that financial analysts aggregate information and the way that financial institutions present forecasts to their clients.

1. Introduction

How do people update their beliefs upon observing others’ forecasts? Two features of information environments complicate this task. First, people must assess the likelihood that the forecaster correctly interpreted the information he has observed; if the forecast is pure noise, then people should treat it as such and ignore it. Second, people must assess the likelihood that the forecaster has observed the same information as him; because independent signals are more informative than correlated ones, a person should update his beliefs less (more) if the forecaster’s information source is the same as (different than) the person’s.

There are reasons to expect people to underreact to the information content of others’ forecasts. Eyster and Rabin (2005) show that “cursedness”—the tendency for people to underestimate the correlation between others’ information and others’ actions—explains economic behavior in a wide variety of settings. In the case of interpreting others’ forecasts, a cursed person will underestimate the likelihood that the forecast is based on useful information, and he will therefore underreact to it.
In addition, psychologists have documented a tendency for people to overestimate their similarity to others (the “false consensus effect”). Consider a client (C) and a forecaster (F), both of whom have observed information about a variable of interest. Since the incremental informativeness of F’s forecast is decreasing in the likelihood that F is basing his forecast on the same information that C has already observed, a false consensus effect should cause C to underreact to F’s forecast.1

The purpose of this study is to analyze whether, and why, people underreact to the information contained in others’ forecasts. In the case of financial analysts, it is well known that they underreact to other analysts’ forecasts (Bernhardt, Campello, and Kutsoati, 2006) and recent changes in stock prices (Lys and Sohn, 1990; Abarbanell, 1991). However, determining whether this underreaction is driven by behavioral biases is difficult because analysts can have rational incentives to shade their forecasts away from their true beliefs (Trueman, 1994; Ottaviani and Sørensen, 2006). We choose to study behavior in a controlled laboratory environment because it provides two significant advantages. First, we can determine how subjects are compensated and incentivize them to report their true beliefs. Hence, if we discover underreaction, we can conclude that it is due to behavioral biases. Second, if we find underreaction, we can alter the experimental design to determine why the underreaction occurs.

The context of our experiment is earnings forecasts for a hypothetical firm. In our baseline treatment, subjects choose which of two information sources to observe, and they issue initial forecasts. After issuing an initial forecast, subjects observe the initial forecast of a randomly chosen subject, and they issue revised forecasts.
Both cursedness and the false consensus effect predict that subjects will underreact to the information contained in other subjects’ forecasts: cursedness predicts that subjects will underappreciate the link between others’ forecasts and the information that others have observed, while the false consensus effect predicts that subjects will believe others chose to observe the same information source as them. We confirm the prediction: subjects do in fact underreact to the information contained in others’ forecasts. Some of the underreaction in our baseline treatment might be driven by risk-aversion.2 To determine what is driving the underreaction, we create additional treatments by altering the experimental design. To eliminate the effects of cursedness, we create a second treatment that is identical to our baseline treatment except that subjects are not informed of another subject’s forecast; rather, they are informed whether another subject observed “good news” or “bad news,” and they are informed of what the other subject’s forecast would be if that subject were optimizing his expected compensation given the information he observed. Since subjects do not observe others’ actions in this treatment, cursedness cannot affect their behavior. We find that removing the effects of cursedness reduces underreaction by roughly 50%, but that the remaining underreaction is still statistically significant. We design a third treatment to determine whether the remaining underreaction is caused by people overestimating the likelihood that the other subject chooses to observe the same information source as them. This third treatment is identical to the second treatment except that instead of allowing subjects to choose which information source to observe (as in the second treatment), subjects are randomly and independently assigned to observe one information source or the other, each with 50% probability. 
Because subjects are informed of this assignment rule, a false consensus effect cannot affect behavior in this treatment. Nevertheless, there is underreaction in this treatment, and its magnitude is not significantly different than in the second treatment, suggesting that the false consensus effect does not explain a significant part of the underreaction that we document in the baseline treatment. In each of these first three treatments, the underreaction is concentrated in the situations in which the subject and the other forecaster observe news that is qualitatively similar (both observe good news or both observe bad news). When the subjects observe qualitatively dissimilar news (one observes good news, one observes bad news), there is no underreaction. To motivate our final treatment, consider two financial analysts covering a firm at time t, each of whom expects the quarter t + 2 earnings to be $2 per share. Suppose when the firm announces its quarter t + 1 earnings, both of the analysts become more optimistic about the quarter t + 2 earnings, for example, one revises his beliefs to $2.10 and the other revises his beliefs to $2.20. When the analysts observe each other's forecasts, how will they update their beliefs? The analysts could be tempted to simply average their forecasts, that is, for them to converge to a posterior of $2.15. We expect people to be more likely to use this “averaging” heuristic when the quantitative nature of others’ forecasts is salient; if each analyst were only informed that the other analyst also became more optimistic, it is reasonable to expect that they would each revise their beliefs upward, in which case the average of their posteriors would be greater than $2.15. 
In other words, we hypothesize that subjects’ propensity to underreact will be more severe when the quantitative aspect of their partner’s initial forecast is made salient than when only the qualitative nature of their partner’s initial forecast (“good news” or “bad news”) is made salient.3 To test this, we design a fourth treatment that is identical to the third treatment except that instead of informing subjects of a randomly chosen other subject’s optimal initial numerical forecast, we simply inform subjects whether a randomly chosen other subject observed “good news” or “bad news.” Since this information is also presented to subjects in the third treatment, the only difference between the two treatments is that we present subjects with a numerical forecast in the third treatment but not in the fourth treatment.

We find mixed support for our hypothesis. When analyzing subject behavior in the aggregate, there is no underreaction (or overreaction) in this treatment. Hence, if one ignores subject heterogeneity, it appears that the underreaction is completely accounted for. However, when we divide subjects based on their level of sophistication, a different pattern emerges. We classify subjects as sophisticated or unsophisticated based on their performance in the initial forecasting stage of the experiment: subjects whose initial forecasts are almost always optimal (given their information) are considered sophisticated, whereas the rest are classified as unsophisticated. We find that the behavior of sophisticated subjects is the same in the third and fourth treatments, whereas the behavior of unsophisticated subjects differs significantly: unsophisticated subjects underreact in the third treatment (which provides quantitative forecasts), but overreact in the fourth treatment (where no quantitative information is provided).
Hence, unsophisticated subjects can be induced to underreact by providing them with both quantitative and qualitative information, and they can be induced to overreact by providing them with only qualitative information. Our experimental environment shares many features with the real world: people observe others’ forecasts, people are unsure whether forecasters are properly interpreting the information they have observed, people are unsure of forecasters’ information sources, and forecasts can be qualitative or quantitative in nature. For example, macroeconomic forecasters issue explicit inflation and GDP growth forecasts as well as qualitative forecasts such as “inflation will rise” or “GDP growth will decline”; financial analysts provide guidance that is quantitative in nature (“Earnings per share will be $2.50” and “Our price target is $50 per share”) as well as qualitative in nature (“We have a strong buy recommendation for this stock”); sports pundits make predictions that are qualitative in nature (“I think the Warriors will win”) as well as quantitative in nature (“I think the Celtics will win by five points”). Our results suggest that people will underreact to others’ forecasts due to cursedness (Eyster and Rabin, 2005), and though sophisticated people will underreact regardless of how the information is conveyed, unsophisticated people will underreact less when the forecasts are conveyed in a qualitative rather than quantitative manner (indeed, they might even overreact to qualitative forecasts). Our finding that unsophisticated subjects underreact to quantitative forecast information but overreact to qualitative forecast information has important implications for the way that financial institutions convey information to their clients. 
For example, to the extent that brokerages want to generate trading volume, our findings might explain why brokerages generally emphasize the qualitative features of analysts’ forecasts (“buy” versus “hold” versus “sell”) rather than the quantitative features of analysts’ forecasts (price targets and EPS forecasts). In addition, our results have implications beyond sequential forecasting environments. By documenting cursedness within a financial environment, our paper provides additional support for the fundamental assumptions of Eyster, Rabin, and Vayanos (2017), who analyze the implications of cursedness in financial markets.

The paper is organized as follows. We discuss the related literature in Section 2. We analyze behavior in our baseline treatment in Section 3, and we describe our additional treatments and analyze behavior across treatments in Section 4. Section 5 concludes.

2. Related Literature

Our study is broadly related to the literature on information aggregation. Early experimental economists examined the ability of markets to aggregate information. See, for example, Forsythe, Palfrey, and Plott (1982); Plott and Sunder (1982); and Plott and Sunder (1988). These studies document that the rational expectations equilibrium provides an accurate prediction of prices and behavior when markets are complete or preferences are homogeneous, but not when markets are incomplete and preferences are heterogeneous.

Our paper is more closely related to studies that examine how information is aggregated when people directly observe others’ actions. Anderson and Holt (1997) examine information cascades. In their study, subjects observe private signals drawn from one of two unobserved urns; all subjects’ signals are drawn from the same urn. Each urn contains two types of marbles, and the urns differ in their composition of marbles.
The private signals that subjects observe correspond to the color of the marble that is drawn.4 Subjects sequentially predict which urn they believe the marbles were drawn from, and subjects observe other subjects’ predictions. An “information cascade” arises when subjects ignore their own private information and follow others’ predictions. According to rational expectations, an information cascade will arise whenever the first few subjects make the same prediction. Anderson and Holt (1997) document that information cascades do arise, but they arise more slowly than predicted by rational expectations. Weizsäcker (2010) uses a large meta-dataset to examine subject behavior in cascades experiments. His study differs from others in that he employs reduced-form (as opposed to structural) econometric techniques, and he confirms that subjects are overly reliant on their own private signal vis-à-vis others’ actions.

There are several differences between our experiments and the cascades literature. For example, in cascades experiments, subjects always receive signals from the same information source (the same urn), whereas in our experiment, subjects may or may not receive signals from the same source (i.e., earnings component). Moreover, in cascades experiments, subjects provide non-quantitative predictions (“Urn A” or “Urn B”), whereas in our experiments, subjects provide quantitative earnings forecasts. In addition, in cascades experiments, it is obvious that a subject should choose the same urn as others have chosen when her private signal is consistent with others’ predictions. The interesting scenario arises when a subject’s private signal disagrees with others’ predictions. In contrast, in our setting, subjects’ task is most difficult when their signals are similar to others’ forecasts. In these scenarios, subjects must assess the likelihood that their partner observed the same news as them.
While the cascades literature documents that people are overly reliant on their own private signals, we are able to examine why subjects underreact to others’ actions in sequential, quantitative forecasting environments. We document that roughly half of the underreaction can be explained by cursedness.

Our paper is also related to the literature on people’s perceptions of the correlation between their information sources and others’.5 Most economists who address this issue rely on indirect psychological and legal evidence to argue that people underestimate the correlation of news sources. Glaeser and Sunstein (2009) develop a model in which “credulous Bayesians” overestimate the extent to which social signals represent new information. They model this by assuming people underestimate the amount of common noise within a standard differential information framework. Glaeser and Sunstein (2009) motivate their model based on the evidence of group polarization, which is the tendency for people’s views to become more extreme when they are gathered into groups (e.g., juries, participants of psychological studies, etc.).6 Ortoleva and Snowberg (2015) assume people underestimate the degree to which information sources are correlated (correlation neglect) and show that this naturally leads to overconfidence and ideological extremeness. Levy and Razin (2015) show that correlation neglect can actually improve welfare by causing voters to vote more on information and less on preferences. By conducting laboratory experiments, we can directly examine how people update their beliefs under information source uncertainty in a controlled environment.7

Our paper is also related to the literature on learning in social networks. DeMarzo, Vayanos, and Zwiebel (2003) develop a model of “persuasion bias” in which people are unable to properly adjust for information repetitions that may occur due to the nature of the network.
More specifically, in their model, information may be inefficiently aggregated because well-connected members’ information can be given too much weight relative to the optimum.8 Other researchers have conducted laboratory experiments to analyze learning in social networks.9 Unlike this literature, our focus is not on learning or information diffusion within social networks.

Finally, our study is related to the analyst forecasting literature. Our findings suggest that financial analysts may have difficulty combining other analysts’ forecasts with their own private information. Brown et al. (2015) survey analysts and document that analysts pay attention to other analysts’ earnings forecasts. Moreover, when analysts notice that others’ forecasts significantly differ from their own, they try to determine why the other analysts’ forecasts differ from their own. Our findings suggest that cursedness will cause analysts to underreact to the information content of other analysts’ forecasts. Empirically, this bias should yield testable predictions that are similar to a false consensus effect, for which Williams (2013) finds supporting evidence.

3. Baseline Treatment

We analyze decision making under information source uncertainty in a controlled laboratory environment. Outside of the laboratory, it is difficult to understand why behavior differs from standard theory because behavior that appears irrational can be optimal if the agents face incentives that the researcher does not observe and cannot control. For example, there is a large literature suggesting that forecasters have incentives to shade their forecasts away from their true beliefs.10

3.1 Experimental Environment

We begin by describing the experimental environment and procedures in our baseline treatment (Treatment A). All subjects participated in a forecasting decision process for thirty independent periods. In each period, subjects were randomly matched with a partner.
Each pair of matched subjects faced the same task of forecasting a hypothetical firm’s earnings, X. The firm’s earnings were determined by its uncertain revenue, R, cost, C, and a random noise component, ϵ, as follows: X = R − C + ϵ. Revenue was distributed according to Pr(R = $20) = 0.5 and Pr(R = $10) = 0.5, cost was distributed according to Pr(C = $0) = 0.5 and Pr(C = $10) = 0.5, and ϵ was distributed uniformly on [−$1, $1]. Revenue, cost, and ϵ were independently distributed across pairs and across periods, so each firm’s unconditional expected earnings was $10: E[X] = E[R] − E[C] + E[ϵ] = $15 − $5 + $0 = $10.

Each independent experimental period proceeded in a series of two distinct stages: an initial forecast stage and a revised forecast stage. Subjects first chose whether to observe the firm’s revenue or cost.11 Subjects were then informed of their chosen components of the earnings realization in the following manner: they were told that they had observed either “good news” or “bad news,” and they were told the exact realization associated with such news. In the case of revenue, good news was associated with high revenue (R = $20), and bad news was associated with low revenue (R = $10). For cost, good news was associated with low cost (C = $0), and bad news was associated with high cost (C = $10). After observing the realization of the earnings component, subjects issued their initial forecasts. Given the information structure, the expectation of earnings in the initial stage depends only on whether the subject has observed good news or bad news (and not on whether revenue or cost was observed):

E[X | good news] = $15, and (1)
E[X | bad news] = $5. (2)

After issuing his initial forecast, f0, each subject was shown the initial forecast of his randomly chosen partner but not whether the partner chose to observe revenue or cost.
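As a quick sanity check on this environment (our own illustration, not part of the experiment or the paper), a short Monte Carlo simulation reproduces the unconditional expected earnings of $10 and the conditional expectation of $15 after good revenue news:

```python
import random

random.seed(0)

def draw_earnings():
    """One realization of the experimental firm's earnings X = R - C + eps."""
    r = random.choice([20, 10])      # Pr(R = $20) = Pr(R = $10) = 0.5
    c = random.choice([0, 10])       # Pr(C = $0)  = Pr(C = $10) = 0.5
    eps = random.uniform(-1, 1)      # noise, uniform on [-$1, $1]
    return r, c, r - c + eps

draws = [draw_earnings() for _ in range(200_000)]

# Unconditional mean should be close to E[X] = 15 - 5 + 0 = 10.
mean_x = sum(x for _, _, x in draws) / len(draws)

# Conditional on good revenue news (R = 20), the mean should be close to 15.
good = [x for r, _, x in draws if r == 20]
mean_good = sum(good) / len(good)
```

With 200,000 draws, both sample means land within a few hundredths of the theoretical values.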
Subjects then issued their revised forecast, fR.12 At the end of each forecast period, subjects were informed of the actual earnings realization and their potential compensation (given by Equation (3)) from both the initial and revised forecasting stages. They were not informed of the compensation, earnings component observed, or identity of their partner. Subjects were then randomly rematched with a partner and began the next period with a new, independent earnings realization. This proceeded for thirty periods, and the duration of the experiment was common knowledge among the subjects. At the end of the thirty periods, one period for each subject was randomly chosen for compensation based on his initial forecast that period, and one period for each subject was randomly chosen for compensation based on his revised forecast. We chose to pay for one random period for each forecasting stage to minimize potential period choice dependencies and the possibility of hedging across stages. Subjects were compensated based on their accuracy in the randomly chosen compensation periods:

Compensation = $6 − 0.04(f − X)², (3)

where f is the subject’s forecast. Since the mean is the best predictor under squared loss, subjects could maximize their expected compensation by issuing forecasts equal to their conditional expectations of X.13 All sessions were conducted at the Pennsylvania State University in the Laboratory for Economics Management and Auctions. All experiments were conducted utilizing the z-Tree software (Fischbacher, 2007); a copy of the instructions provided to the subjects at the beginning of the experiment is included in Appendix B.

3.2 Theory

Deriving the optimal initial forecast is a straightforward task. Expected revenue is 15, and expected cost is 5.
If a subject observes that revenue is high (R = 20) or that cost is low (C = 0), the firm’s expected earnings conditional on the subject’s information is 15 (20 − 5 = 15 − 0 = 15), and if a subject observes that revenue is low (R = 10) or that cost is high (C = 10), the firm’s expected earnings conditional on the subject’s information is 5 (10 − 5 = 15 − 10 = 5). It follows that a subject’s optimal initial forecast perfectly reveals whether the subject observed good news or bad news but not necessarily whether the subject observed revenue or cost. More formally,

f0* = 15 if the subject observes good news (R = 20 or C = 0),
f0* = 5 if the subject observes bad news (R = 10 or C = 10). (4)

Deriving the optimal revised forecasts is more complicated. Suppose a subject sees good news prior to his initial forecast so that the expected earnings conditional on this news is 15. If his partner’s initial forecast is 15, then it is likely that his partner also observed good news.14 However, in this scenario, the partner’s initial forecast does not reveal which earnings component he observed; it is possible the partner saw the same piece of good news as the subject, but it is also possible that he saw that the other earnings component was good. Another complication is that the partner might issue suboptimal initial forecasts, in which case it is possible that he observed bad news even though his initial forecast was 15. In general, the optimal revised forecast must take into account the likelihood that the partner chooses to observe cost/revenue as well as the likelihood that the partner makes mistakes in his forecasts. We formally express the optimal revised forecast (fR*) in Equations (9)–(15) in Appendix A. All that remains is to specify the optimal choice of observing revenue versus cost. Since they are equally informative, this choice is irrelevant for the initial forecast stage.
However, it matters for the revised forecast—subjects can minimize their expected forecast error by choosing to observe the earnings component that their partner is less likely to observe. If the partner is equally likely to observe revenue as cost, then it is irrelevant which component the subject chooses to observe.

In equilibrium, everyone plays optimal strategies. It is easily verified that there exists a unique symmetric Nash equilibrium. Recall that subjects optimize their compensation in the revised forecast stage by choosing to observe the earnings component that their partner is less likely to observe. It follows that in any symmetric equilibrium, subjects are 50% likely to choose to observe revenue and 50% likely to observe cost. When issuing an initial forecast, others’ behavior is irrelevant, and the equilibrium initial forecast, f0NE, is identical to the optimal forecast in Equation (4):

f0NE = f0* = 15 if the subject observes good news (R = 20 or C = 0),
f0NE = f0* = 5 if the subject observes bad news (R = 10 or C = 10). (5)

Since everyone issues optimal initial forecasts in equilibrium, the partner’s initial forecast perfectly reveals whether he saw good news or bad news. We verify in Appendix A that equilibrium revised forecasts are given by:15

fRNE = 16⅔ if the subject sees good news and f0p = 15,
fRNE = 10 if the subject sees good (bad) news and f0p = 5 (f0p = 15),
fRNE = 3⅓ if the subject sees bad news and f0p = 5, (6)

where f0p is the partner’s initial forecast.

The Nash equilibrium described in Equations (5) and (6) is derived assuming risk-neutrality. Risk-aversion does not affect Equation (5), since the distribution of the firm’s earnings conditional on the subject’s information in the initial forecasting stage is symmetric.
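The 16⅔ value in Equation (6) follows from a short Bayesian computation. The sketch below (our own illustration; variable names are ours) assumes the equilibrium behavior just described: the partner observes revenue or cost with probability ½ each and issues an optimal initial forecast:

```python
from fractions import Fraction as F

# Subject sees good news, say R = $20; the partner's equilibrium initial
# forecast is 15, so the partner must also have seen good news.

# Posterior probability that the partner observed the SAME component (revenue):
#   revenue branch: chosen w.p. 1/2, good news then certain given R = 20;
#   cost branch:    chosen w.p. 1/2, good news (C = 0) occurs w.p. 1/2.
p_same = (F(1, 2) * 1) / (F(1, 2) * 1 + F(1, 2) * F(1, 2))   # = 2/3
p_other = 1 - p_same                                          # = 1/3

# Same component: partner's forecast adds nothing, E[X] = 20 - 5 = 15.
# Other component good (C = 0): both components known, E[X] = 20 - 0 = 20.
revised = p_same * 15 + p_other * 20
print(revised)  # 50/3, i.e. 16 2/3, matching Equation (6)
```

The bad-news case is symmetric and yields 3⅓ by the same argument.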
However, in the revised forecast stage, the distribution of earnings conditional on the subject’s information is asymmetric when the subject and partner observe qualitatively similar news (both good or both bad), and risk aversion would push the optimal revised forecasts toward 5 (and away from 3⅓) in the case where both subjects observe bad news, and toward 15 (and away from 16⅔) in the case where both subjects observe good news. Although the presence of risk aversion might cause us to overestimate the amount of underreaction in our baseline treatment, our main takeaways follow from our cross-treatment comparisons of behavior. Since the asymmetry described above is present in all four of our treatments, our cross-treatment comparisons are not affected by our assumption of risk-neutrality.

3.3 Hypotheses

We do not expect subject behavior to conform to the equilibrium prediction. First, the equilibrium revised forecasts described in Equation (6) are not even optimal when others make mistakes with positive probability in their initial forecasts. Since some subjects will make mistakes in their initial forecasts, we expect subjects to discount their partner’s forecast, which would be consistent with optimal subject behavior in the revised forecast stage.16 In addition, we expect to find systematic deviations from optimal behavior. There are three aspects of our baseline treatment that we expect to cause subjects to underreact to the information content of their partner’s initial forecast: (i) subjects are informed of their partner’s forecast but not whether their partner observed good news or bad news, (ii) subjects do not know the likelihood that their partner will choose to observe the same information source as them, and (iii) the partner’s numerical forecast is salient.

3.3.a. Relationship between the Partner’s Information and the Partner’s Forecast

In equilibrium, the partner’s initial forecast perfectly reveals whether he saw good news or bad news: an initial forecast of 15 reveals good news, while an initial forecast of 5 reveals bad news. However, some subjects will not issue optimal initial forecasts, and it is optimal for subjects to account for this possibility. Although it is optimal for subjects to account for the possibility that their partner makes mistakes, we expect subjects to be overly skeptical of the informational content of their partner’s initial forecast. “Cursedness” (Eyster and Rabin, 2005) refers to people’s tendency to underestimate the correlation between others’ actions and others’ information. In our setting, this corresponds to people underestimating the likelihood that the partner actually observed good (bad) news when his initial forecast is 15 (5), which would cause subjects to underreact in their forecast revisions relative to the optimum response. Cursedness can arise if people underestimate others’ ability to process information.17

3.3.b. Likelihood That the Partner Chooses to Observe the Same Information as the Subject

The false consensus effect refers to people’s tendency to overestimate their similarity to others (Ross, Greene, and House, 1977). More specifically, people have a tendency to overestimate the likelihood that others engage in the same activities, have the same opinions and beliefs, and have the same preferences as them. See Krueger (1998, 2000) for more comprehensive reviews of the literature. Williams (2013) shows that financial analysts’ earnings forecasts are consistent with the false consensus effect. However, analysts’ objective functions are poorly understood, which is a complication we can avoid in the laboratory. In our baseline treatment, subjects are given the choice of which earnings component to observe.
A false consensus might cause subjects to naïvely assume that their partner chooses to observe the same information as them; if a subject chooses to observe revenue, a false consensus would cause him to overestimate the likelihood that his partner also chooses to observe revenue.18 If a subject believes his partner chooses the same information as him, he would insufficiently revise his forecast upon learning that his partner observed qualitatively similar information (both good or both bad), because he would believe his partner saw the same information as him. That is, a false consensus effect should cause subjects to erroneously believe there is no incremental informational content provided in the revised forecast stage, especially when the partner’s qualitative information is consistent with the subject’s (both good or both bad).

3.3.c. Salience of the Numerical Value of the Partner’s Forecast

An intuitive, but generally suboptimal, way of aggregating information is to simply take the average of forecasts. This heuristic has a long history. For example, Galton (1907) showed that in a weight-judging contest, the median weight estimate for an ox was close to its actual weight. More recently, Surowiecki (2004) argues in his popular science book that the accuracy of average forecasts demonstrates the “wisdom of crowds.”19 In our environment, averaging forecasts corresponds to a subject taking the average of his initial forecast and his partner’s initial forecast. Such a heuristic is optimal if subjects do not make mistakes in their initial forecasts and the subjects see qualitatively dissimilar news—that is, when one subject observes good (bad) news and his partner observes bad (good) news. In these situations, subjects can infer that the other subject must have observed the other earnings component (revenue versus cost), and they can infer that earnings is equal to 10, which is the average of the two optimal initial forecasts.
However, the heuristic is not optimal when subjects observe qualitatively similar news, that is, when they both see bad news or they both see good news. Recalling Equation (6), the equilibrium revised forecasts in these scenarios are 3⅓ and 16⅔, respectively, whereas the averaging heuristic yields revised forecasts of 5 and 15, which are exactly equal to the subject's optimal initial forecast and would thus result in underreaction.20 We hypothesize that the quantitative nature of the partner's initial forecast will make it likely that subjects rely on the averaging heuristic, resulting in underreaction, especially when the news they observe is qualitatively similar to the news their partner observes.

3.3.d. Formal Hypotheses

To formalize our hypotheses, we develop a measure of underreaction in the revised forecast stage. Recall that fR denotes a subject's revised forecast, and f0* and fR* denote a subject's optimal initial and revised forecasts, respectively. If a subject observes good news in the revised forecast stage (fR* > f0*), he should revise his forecast upward. He "underreacts" in the revised forecast stage if fR < fR*, and fR* − fR is the magnitude of the underreaction. Conversely, if a subject observes bad news in the revised forecast stage (fR* < f0*), he should revise his forecast downward. He underreacts in the revised forecast stage if fR > fR*, and fR − fR* is the magnitude of the underreaction. Our measure of underreaction, U, formalizes these arguments:

U = (fR* − fR) · sgn(fR* − f0*),  (7)

where sgn(·) is the sign function that maps positive numbers to 1 and negative numbers to −1. Positive values of U indicate that subjects underreact in the revised forecast stage relative to the optimal revised forecast, whereas negative values indicate that they overreact. If subjects' revised forecasts are equal to the optimal forecast plus noise, then U should have a mean that is not statistically different from 0.
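To make the arithmetic behind these equilibrium revisions and the underreaction measure concrete, the following sketch (ours, not the authors' code) reproduces them under the structure described in the text: prior expected earnings of 10, two components (revenue and cost) each independently good or bad with probability 1/2, and each piece of good (bad) news shifting expected earnings by +5 (−5), so that optimal initial forecasts are 15 or 5.

```python
from fractions import Fraction

def equilibrium_revised_forecast(own_good: bool, partner_good: bool) -> Fraction:
    """Expected earnings after observing one's own news and the partner's
    truthfully revealed news, when the partner's component was chosen
    independently with probability 1/2 (a sketch of Equation (6))."""
    if own_good != partner_good:
        # Qualitatively dissimilar news: the partner must have seen the
        # other component, and the +5 and -5 shifts cancel.
        return Fraction(10)
    # Qualitatively similar news: matching news arises with probability 1
    # from the same component but only 1/2 from the other component, so
    # P(same component | match) = (1/2) / (1/2 + 1/4) = 2/3.
    p_same = Fraction(2, 3)
    one_shift = Fraction(15) if own_good else Fraction(5)
    two_shifts = Fraction(20) if own_good else Fraction(0)
    return p_same * one_shift + (1 - p_same) * two_shifts

def underreaction(f_R, f_R_star, f_0_star):
    """U from Equation (7): U = (fR* - fR) * sgn(fR* - f0*)."""
    sign = 1 if f_R_star > f_0_star else -1
    return (f_R_star - f_R) * sign
```

When both partners see good news, the equilibrium revision is Fraction(50, 3), that is, 16⅔, while the averaging heuristic leaves the forecast at 15, giving U = 5/3 > 0 (underreaction).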
The three features of our baseline experiment described in the previous three subsections all lead to the following hypothesis:

Hypothesis 1. Subjects' revised forecasts will be systematically biased toward the expected earnings conditional on their own private information, that is, U > 0.

This hypothesis is consistent with the empirical evidence on analysts' forecasts: Bernhardt, Campello, and Kutsoati (2006) and Chen and Jiang (2006) document that analysts issue forecasts that are too far from the consensus forecast, that is, they overweight their own private information. However, because analysts' objective functions are poorly understood, it is difficult to determine whether they suffer from behavioral biases such as cursedness or overconfidence. Trueman (1994) and Ottaviani and Sørensen (2006) show that under reasonable assumptions, analysts shade their forecasts away from their true beliefs. In fact, Chen and Jiang (2006) argue their results are not driven by analyst overconfidence. An advantage of the laboratory is that we can easily incentivize subjects to issue forecasts that are consistent with their beliefs, so any systematic biases are unlikely to be due to strategic behavior. While cursedness predicts underreaction in all scenarios, the false consensus effect and the salience of the partner's numerical forecast suggest that underreaction should be especially pronounced when a subject and his partner observe qualitatively similar news (both good or both bad).

Hypothesis 2. The underreaction bias described in Hypothesis 1 will be most severe when subjects see good (bad) news and their partner's initial forecast is 15 (5).

3.4 Initial Forecasts

Although the focus of our study is how subjects combine the news they observe with others' forecasts, we begin by analyzing subject behavior in the initial forecast stage of the experiment.
Regarding the choice of information source, subjects choose to observe revenue 61.6% of the time, which is marginally significantly different from the equilibrium prediction of 50% (t = 1.82).21 As for the accuracy of the initial forecasts, the majority of them (51.9%) are exactly equal to the optimal initial forecast (defined in Equation (4)), and most of the suboptimal forecasts are near the optimal initial forecast. We plot the distribution of initial forecasts for each of the four possible pieces of news in Figure 1. From the figure, it is clear that subjects' initial forecasts are informative, but noisy, signals about the news that they observed. Hence, as in Bloomfield and Hales (2009), our environment is sufficiently complex that subjects' forecasts often differ even when they are presented with identical information.

Figure 1. Distribution of initial forecasts, by observed news. Notes: For our baseline treatment, we plot the distribution of initial forecasts for each of the four possible pieces of news: low revenue, high revenue, high cost, and low cost.

3.5 Revised Forecasts

Recall the timing of our baseline treatment. Subjects first choose whether to observe revenue or cost. Then, they observe revenue or cost, and they issue initial forecasts. Next, they are shown the initial forecast of a randomly chosen subject (their "partner"), and finally, they issue a revised forecast. Optimal initial forecasts are either 15 or 5, depending on whether the subject observes good news or bad news (recall Equation (4)). Equilibrium revised forecasts are either 3⅓, 10, or 16⅔ (recall Equation (6)).
It follows that in equilibrium, subjects revise their forecasts by either 1⅔ or 5 units. In the laboratory, the average distance between the initial forecast and the revised forecast is 2.10, as reported in Row 1 of Table I.

Table I. Revised forecasts, baseline treatment

We report the averages of various measures of forecast revision activity in our baseline treatment. f0 is the subject's initial forecast, f0* is the subject's optimal initial forecast, f0p is the partner's initial forecast, fR is the subject's revised forecast, fR* is the subject's optimal revised forecast (defined in Equations (9)–(15)), and U represents the degree to which subjects underreact in their forecast revisions (defined in Equation (7)). In Row 1, we report unconditional means. In Rows 2–10, we restrict attention to the forecasts in which the partner's initial forecast is equal to 5 or 15, the two values that optimal initial forecasts can take. In Rows 3–4 and 7–10, we divide the sample based on subjects' level of sophistication; we classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. In Rows 5–10, we divide the sample based on whether the partner's initial forecast is equal to the subject's optimal initial forecast (Rows 5, 7, and 9) or signals that he saw qualitatively dissimilar news (Rows 6, 8, and 10). We conduct our statistical analysis of U by treating the subject as the unit of observation: we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively.

Row  f0p∈{5,15}?     Subject sophistication  f0p=f0*?        |fR−f0|  |fR−fR*|  1{fR=f0}  U
1    No restriction  No restriction          No restriction  2.10     .         0.58      .
2    f0p∈{5,15}      No restriction          No restriction  1.98     2.86      0.63      1.11***
3    f0p∈{5,15}      Sophisticated           No restriction  1.53     1.97      0.61      0.91**
4    f0p∈{5,15}      Unsophisticated         No restriction  2.20     3.29      0.64      1.21***
5    f0p∈{5,15}      No restriction          f0p=f0*         1.43     2.91      0.70      1.32***
6    f0p∈{5,15}      No restriction          f0p≠f0*         3.67     2.71      0.42      0.49
7    f0p∈{5,15}      Sophisticated           f0p=f0*         1.02     1.88      0.70      0.85**
8    f0p∈{5,15}      Sophisticated           f0p≠f0*         3.36     2.29      0.28      1.12
9    f0p∈{5,15}      Unsophisticated         f0p=f0*         1.63     3.44      0.70      1.56***
10   f0p∈{5,15}      Unsophisticated         f0p≠f0*         3.79     2.87      0.47      0.24

Next, we turn our attention to the relationship between revised forecasts and the optimal revised forecast, fR*, which is formally defined in Equations (9)–(15) in Appendix A. In order to estimate fR*, we must estimate the likelihood that the partner observed good or bad news conditional on his initial forecast. Most initial forecasts are equal to 5 or 15 (Figure 1), and we lack the data to reasonably estimate this likelihood except when the partner's initial forecast equals 5 or 15. Thus, we restrict attention to such observations when analyzing fR* and U.22 In our sample, the likelihood of a partner's initial forecast being optimal conditional on it equaling 5 or 15 is 96.3%, and we use 0.95 as our estimate for this likelihood throughout our analysis.23 The optimal revised forecast is also affected by the ex ante likelihood that the partner chooses to observe revenue or cost. Recall from Section 3.4 that this probability is close to 50%, so we assume this throughout.24 Since fR* is only defined when the partner's initial forecast equals 5 or 15, we restrict our sample to such observations in Row 2 of Table I. There, we report that the average distance between revised forecasts and optimal revised forecasts is 2.86. In Section 3.3, we argued that we expect subjects to underreact in their forecast revisions.
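The adjustment for partner mistakes can be illustrated with a stylized sketch. This is our own illustration, not a reproduction of the paper's Equations (9)–(15): it assumes that with probability p = 0.95 a partner forecast of 5 or 15 truthfully reveals his news, that an erroneous forecast carries no information (so the subject falls back on his own-information forecast of 15 or 5), and the same ±5 news-shift structure around a prior of 10 described in the text.

```python
def optimal_revised(own_good: bool, partner_signals_good: bool, p: float = 0.95) -> float:
    """Stylized fR*: a mixture of the full-information revision (probability p,
    the partner's 5/15 forecast truthfully reveals his news) and the subject's
    own-information forecast (probability 1 - p, the forecast is uninformative).

    Assumptions (ours): prior expected earnings of 10; each piece of good (bad)
    news shifts the expectation by +5 (-5); the partner's component is chosen
    independently with probability 1/2."""
    own_only = 15.0 if own_good else 5.0
    if own_good != partner_signals_good:
        # Dissimilar news: the partner saw the other component; shifts cancel.
        full_info = 10.0
    else:
        # Similar news: P(same component | matching news) = 2/3.
        full_info = (2 / 3) * own_only + (1 / 3) * (20.0 if own_good else 0.0)
    return p * full_info + (1 - p) * own_only
```

With p = 1 this collapses to the equilibrium revisions of 3⅓, 10, and 16⅔; with p = 0.95 each revision is pulled slightly back toward the subject's own-information forecast.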
A particularly severe form of underreaction occurs whenever a subject's revised forecast exactly equals his initial forecast. Such revisions are optimal only if (i) subjects are certain their partner is unable to process news or (ii) they are certain that their partner chose to observe the same earnings component as them.25 Regarding (i), we reported in Figure 1 that most subjects' initial forecasts are informative signals about the news they observed. Regarding (ii), we noted that in equilibrium, subjects randomly choose whether to see revenue or cost, and empirically, subjects choose to observe revenue 61.6% of the time.26 Nevertheless, subjects frequently act as though their partner's initial forecast is completely uninformative: 58% of revised forecasts exactly equal the subject's initial forecast (Row 1), and 63% do so when the partner's forecast equals 5 or 15 (Row 2). Hypothesis 1 provides a more specific prediction: that U will be significantly greater than 0, where U is defined in Equation (7). We confirm this prediction in Row 2: the average value of U is 1.11, which is highly statistically significant (t = 4.40). In other words, subjects underreact to the informational content of their partner's initial forecast by issuing revised forecasts that are too close to the expected earnings conditional on their own private information. This suggests that subjects underreact to the information content of others' forecasts even when there are no strategic reasons for doing so.27 So far, we have treated all revised forecasts and subjects as homogeneous. There are at least two dimensions along which we expect subject behavior to differ: the degree to which a subject is sophisticated, and whether the partner's initial forecast is consistent or inconsistent with the news the subject observed.

3.5.a.
Subject Sophistication

Whether it is due to inattention, an inability to compute expected values, or some other cause, we expect some subjects to perform poorly relative to others. The initial forecast provides an opportunity to measure subject sophistication, since it does not involve strategic interaction or updating; all subjects are identically incentivized to truthfully report the expected value given their information. The setting is sufficiently simple that many subjects should be able to do this optimally. To examine this, we compute the number of times each subject's initial forecast is within $1 of the optimal initial forecast. We plot this distribution in Figure 2. It is apparent from the figure that there is substantial heterogeneity in initial forecasting performance. Seven of the forty-four subjects issue initial forecasts that are within $1 of the optimal initial forecast in all thirty periods. Three subjects do not issue such a forecast in any of the thirty periods. We classify subjects as "sophisticated" (unsophisticated) if they issue initial forecasts that are within $1 of the optimum in at least (fewer than) twenty-five of the thirty periods. Fourteen subjects are classified as sophisticated, and the other thirty are classified as unsophisticated.

Figure 2. Distribution of the number of times the initial forecast is within $1 of the optimum. Notes: For each n = 1, …, 30, we plot the number of subjects in the baseline treatment whose initial forecast is within $1 of the optimal initial forecast in exactly n of the 30 periods.
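The classification rule is mechanical enough to express in a few lines. The sketch below is our illustration of the stated rule, not the authors' scripts; the array names are ours.

```python
import numpy as np

def classify_sophisticated(initial, optimal, tol=1.0, cutoff=25):
    """Return a boolean array marking subjects whose initial forecast is
    within `tol` dollars of the optimal initial forecast in at least
    `cutoff` of the periods (25 of 30 in the paper).

    initial, optimal: arrays of shape (n_subjects, n_periods)."""
    close = np.abs(np.asarray(initial, float) - np.asarray(optimal, float)) <= tol
    return close.sum(axis=1) >= cutoff
```

For example, a subject who hits the optimum in all 30 periods is classified as sophisticated, while one who misses by more than $1 in 10 periods (only 20 hits) is not.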
In Rows 3 and 4 of Table I, we restrict attention to the forecast revisions of sophisticated and unsophisticated subjects, respectively. Compared with unsophisticated subjects, sophisticated subjects issue revised forecasts that are closer to both their initial forecast (1.53 versus 2.20) and the optimal revised forecast (1.97 versus 3.29). Although sophisticated subjects exhibit less underreaction than unsophisticated subjects (0.91 versus 1.21), their underreaction remains statistically significant (t = 2.20). In summary, Hypothesis 1 is confirmed for both sophisticated and unsophisticated subjects.

3.5.b. Qualitatively Similar versus Dissimilar Information

Consider the relationship between the news the subject observes prior to his initial forecast and the partner's initial forecast. When these are qualitatively dissimilar, for example, when the subject observes good news (so that his optimal initial forecast is 15) and his partner's initial forecast is 5 (signaling bad news), the subject can reasonably infer that the partner likely observed the other earnings component, and his revised forecast should be close to 10. The task is more complicated when the partner's initial forecast signals qualitatively similar news, for example, when the subject observes good news and the partner's initial forecast is 15. In this situation, two scenarios are likely: (i) the subject and partner observed the exact same news, so there is no incremental information content in the partner's initial forecast, and (ii) the subject and partner observed different pieces of good news, so the partner's initial forecast is incrementally informative.28 According to Hypothesis 2, underreaction should be more pronounced when the partner's initial forecast signals that the partner's news is qualitatively similar to the news that the subject observed.
In Rows 5 and 6 of Table I, we restrict attention to the cases where the partner’s initial forecast signals information that is qualitatively similar and dissimilar, respectively, to the news that the subject observed. That is, Row 5 considers the cases where the subject observes good (bad) news and the partner’s initial forecast is 15 (5), while Row 6 considers the cases where the subject observes good (bad) news and the partner’s initial forecast is 5 (15). Not surprisingly, when the partner’s initial forecast signals qualitatively dissimilar news, the average revision distance is larger (3.67 versus 1.43), and subjects are less likely to issue revised forecasts that exactly equal their initial forecasts (42% versus 70%). More importantly, we find support for Hypothesis 2: U is significantly positive when the partner’s initial forecast signals qualitatively similar news ( U=1.32, t = 4.91) but not when the partner’s initial forecast signals qualitatively dissimilar news ( U=0.49, t = 1.27). In other words, subject behavior does not systematically deviate from optimal behavior when the partner’s initial forecast signals that his news was qualitatively different (good or bad) than the subject’s; underreaction is concentrated in the cases in which the subject sees good (bad) news and the partner’s initial forecast is 15 (5), consistent with Hypothesis 2. In Rows 7–10, we double-sort on subject sophistication and the relationship between the partner’s initial forecast and the news the subject observed. These sorts reveal that there is little difference between sophisticated and unsophisticated subjects regarding Hypothesis 2: when subjects see good (bad) news and the partner’s initial forecast is 15 (5) (Rows 7 and 9), there is significant underreaction, but when subjects see good (bad) news and the partner’s initial forecast is 5 (15) (Rows 8 and 10), there is not significant underreaction. 4. 
Additional Treatments

We found support for Hypotheses 1 and 2 in our baseline treatment. Because these hypotheses were motivated by several different features of the baseline treatment (described in Sections 3.3.a–3.3.c), it is unclear what conclusions can be drawn from the baseline treatment alone. We thus design three additional treatments in order to isolate the effects of the various features of the baseline treatment.

4.1 Experimental Environment

4.1.a. Treatment B

Cursedness (Eyster and Rabin, 2005) should affect behavior in our baseline treatment because subjects are informed of their partner's initial forecast: if subjects underestimate the relationship between the partner's initial forecast and the news the partner observed, they will discount the informational content of their partner's forecast and issue forecasts that are too close to the expected earnings conditional on their own private information. Our second treatment (Treatment B) is designed to eliminate the effects of cursedness. It is identical to Treatment A except that we did not inform subjects of their partner's initial forecast; rather, we informed subjects whether their partner observed "good news" about the firm, which can represent either high revenue or low cost, or "bad news" about the firm, which can represent either low revenue or high cost.29 In addition, in order to make the setting as similar as possible to Treatment A, we informed subjects of the initial forecast that would be optimal based on the news the partner observed. Hence, subjects' beliefs about the correlation between others' actions and others' information are irrelevant in this treatment, and cursedness cannot affect subject behavior.

4.1.b.
Treatment C

We expect the false consensus effect to affect behavior in both Treatments A and B because subjects were given the choice of which earnings component to observe in both treatments; if a subject chose to observe revenue, a false consensus would cause him to overestimate the likelihood that his partner also chose to observe revenue. Our third treatment (Treatment C) is identical to Treatment B except that we did not allow subjects to choose which earnings component to observe; rather, we randomly and independently assigned subjects to observe either revenue or cost, each with 50% probability. This probability was common knowledge to the subjects, and their assignment to observe cost or revenue was accomplished through an initial stage that was identical to the original initial stage except that the buttons allowing a choice of cost or revenue were not presented. Although this treatment is designed to eliminate the influence of the false consensus effect by removing the choice of information source, it does not rule out more general versions of the false consensus effect. For example, a false consensus effect could cause people to have difficulty recognizing that others' experiences can differ from their own: if two people each independently flip a coin and one person's coin turns up heads, a false consensus could cause that person to project his experience onto the other person and overestimate the likelihood that the other person's flip was also heads. In the context of our experiments, this would correspond to subjects having difficulty envisioning the partner observing information that is different from their own, even when there is no choice.30 Treatment C does not eliminate this form of the false consensus effect.

4.1.c.
Treatment D

We expect the salience of numerical information to affect behavior in Treatments A–C because in each of those treatments, subjects were given explicit, salient numerical forecasts related to the news their partner observed.31 In Treatment A, subjects were informed of their partner's initial forecast, whereas in Treatments B and C, subjects were informed of what their partner's optimal initial forecast would be given the news that the partner observed. Our fourth treatment (Treatment D) is identical to Treatment C except that we do not explicitly inform subjects of the optimal initial forecast based on the news the partner observed. Rather, subjects are only informed whether their partner observed good news (which can represent high revenue or low cost) or bad news (which can represent low revenue or high cost).32

4.1.d. Optimal Revised Forecasts in Treatments B–D

Equations (9)–(15) in Appendix A, which formally define the optimal revised forecast (fR*), and Equation (7), which defines our measure of underreaction (U), still apply in Treatments B–D.33 Moreover, the equilibrium described in Section 3.2 is also the unique symmetric equilibrium in Treatments B–D. Table II contains a brief overview of our four treatments.

Table II. Overview of treatments

We present an overview of our four experimental treatments. f0p* refers to the optimal initial forecast given the news that the partner observed (i.e., what the partner's optimal initial forecast would be given the information the partner observed).
                                                      Treatment A  Treatment B  Treatment C  Treatment D
Number of subjects                                    44           38           46           48
Number of sessions                                    2            2            2            2
Subjects choose information source (revenues/costs)   Yes          Yes          No           No
Subjects observe partner's initial forecast           Yes          No           No           No
Subjects informed whether partner saw good/bad news   No           Yes          Yes          Yes
Subjects explicitly informed of f0p*                  No           Yes          Yes          No

Table III. Initial forecasts (all treatments)

f0 is a subject's initial forecast, and f0* is the optimal (i.e., compensation-maximizing) initial forecast defined in Equation (4). In Panel A, we report means, and in Panel B, we report t-statistics for the difference in means across treatments.
For our statistical analysis, we compute the mean of each variable at the subject level, and then we compare the distribution of these subject-level means across treatments. (The number of observations in each treatment is thus the number of subjects in that treatment.)

Treatment   |f0−f0*|   (f0−f0*)²   1{f0=f0*}

Panel A: Means
A           2.15       12.41       0.52
B           1.80       12.42       0.63
C           2.17       12.97       0.52
D           2.72       14.32       0.41

Panel B: t-Statistics (difference of means)
B−A         −0.32      0.12        0.99
C−A         0.36       0.30        −0.16
D−A         1.37       0.69        −1.30
C−B         0.81       0.12        −1.38
D−B         1.97       0.47        −2.61
D−C         1.34       0.48        −1.33

4.2 Initial Forecasts

Before focusing on forecast revision activity, we first verify that there are no major differences in subjects' initial forecasts across the four treatments. Since this task is the same in all treatments, there should not be significant differences across them. As reported in Table III, the average distance between the initial forecast and the optimal initial forecast ranges from 1.80 (Treatment B) to 2.72 (Treatment D), the average squared distance between the initial forecast and the optimal initial forecast ranges from 12.41 (Treatment A) to 14.32 (Treatment D), and the likelihood of the initial forecast exactly equaling the optimal initial forecast ranges from 41% (Treatment D) to 63% (Treatment B). Of the eighteen possible pairwise comparisons, only two are statistically significant: the difference in the average distance between the initial forecast and the optimum between Treatments B and D (t = 1.97) and the difference in the likelihood of the initial forecast exactly equaling the optimum between Treatments B and D (t = 2.61). Since sixteen of the eighteen t-statistics are insignificant, we conclude that any differences in initial forecasts across treatments are likely spurious.

4.3 Revised Forecasts

4.3.a.
Treatment B

Treatments A and B are identical except that in Treatment A, subjects are informed of their partner's initial forecast prior to issuing their revised forecasts, whereas in Treatment B, subjects are informed of (i) whether their partner observed good news (which can represent high revenue or low cost) or bad news (which can represent low revenue or high cost) and (ii) the optimal forecast based on the news their partner observed. The correlation between the partner's information and the partner's action (forecast) is relevant in Treatment A but not in Treatment B. Hence, cursedness should affect behavior in Treatment A but not Treatment B. Recall that the variable U accounts for the fact that the partner's initial forecast is sometimes wrong.34 Informing subjects of the partner's optimal initial forecast (Treatment B) rather than the partner's actual initial forecast (Treatment A) causes U to decrease from 1.11 (Row 2 of Table I) to 0.51 (Row 1 of Table IV). Although U remains significantly positive in Treatment B (t = 2.92), the difference in U between Treatments A and B is statistically significant (t = 1.97). In terms of economic significance, U declines by more than 50%, suggesting that cursedness explains over half of the underreaction that we document in the baseline treatment.

Table IV. Revised forecasts, Treatment B

We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment B. f0p* refers to the partner's optimal initial forecast, which is revealed to subjects prior to their revised forecast. We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. We conduct our statistical analysis of U by treating the subject as the unit of observation.
That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments A and B. When conducting the t-tests reported in Rows 4, 6, and 8, we consider the subsample in Treatment A in which f0p=f0*. When conducting the t-tests reported in Rows 5, 7, and 9, we consider the subsample in Treatment A in which f0p≠f0*. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment B t-test, B−A 1 No restriction No restriction 0.51*** –1.97 2 Sophisticated No restriction 0.61*** –0.70 3 Unsophisticated No restriction 0.39 –1.77 4 No restriction f0p*=f0* 0.65*** –2.01 5 No restriction f0p*≠f0* 0.09 –0.84 6 Sophisticated f0p*=f0* 0.69*** –0.48 7 Sophisticated f0p*≠f0* 0.36 –0.68 8 Unsophisticated f0p*=f0* 0.60* –1.79 9 Unsophisticated f0p*≠f0* –0.16 –0.86 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment B t-test, B−A 1 No restriction No restriction 0.51*** –1.97 2 Sophisticated No restriction 0.61*** –0.70 3 Unsophisticated No restriction 0.39 –1.77 4 No restriction f0p*=f0* 0.65*** –2.01 5 No restriction f0p*≠f0* 0.09 –0.84 6 Sophisticated f0p*=f0* 0.69*** –0.48 7 Sophisticated f0p*≠f0* 0.36 –0.68 8 Unsophisticated f0p*=f0* 0.60* –1.79 9 Unsophisticated f0p*≠f0* –0.16 –0.86 Table IV. Revised forecasts, Treatment B We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment B. f0p* refers to the partner’s optimal initial forecast, which is revealed to subjects prior to their revised forecast. We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. 
We conduct our statistical analysis of U by treating the subject as the unit of observation. That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments A and B. When conducting the t-tests reported in Rows 4, 6, and 8, we consider the subsample in Treatment A in which f0p=f0*. When conducting the t-tests reported in Rows 5, 7, and 9, we consider the subsample in Treatment A in which f0p≠f0*. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment B t-test, B−A 1 No restriction No restriction 0.51*** –1.97 2 Sophisticated No restriction 0.61*** –0.70 3 Unsophisticated No restriction 0.39 –1.77 4 No restriction f0p*=f0* 0.65*** –2.01 5 No restriction f0p*≠f0* 0.09 –0.84 6 Sophisticated f0p*=f0* 0.69*** –0.48 7 Sophisticated f0p*≠f0* 0.36 –0.68 8 Unsophisticated f0p*=f0* 0.60* –1.79 9 Unsophisticated f0p*≠f0* –0.16 –0.86 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment B t-test, B−A 1 No restriction No restriction 0.51*** –1.97 2 Sophisticated No restriction 0.61*** –0.70 3 Unsophisticated No restriction 0.39 –1.77 4 No restriction f0p*=f0* 0.65*** –2.01 5 No restriction f0p*≠f0* 0.09 –0.84 6 Sophisticated f0p*=f0* 0.69*** –0.48 7 Sophisticated f0p*≠f0* 0.36 –0.68 8 Unsophisticated f0p*=f0* 0.60* –1.79 9 Unsophisticated f0p*≠f0* –0.16 –0.86 In Rows 2–9 of Table IV, we examine the differences in U between Treatments A and B based on subjects’ level of sophistication (Rows 2 and 3), the relationship between the subject’s optimal initial forecast and the information he learns about his partner’s news (Rows 4 and 5), and double sorts of these variables (Rows 6–9). 
The difference in underreaction is most pronounced when the news the subject observes prior to his initial forecast is consistent with the information he observes prior to his revised forecast (Rows 4, 6, and 8). In other words, cursedness appears to distort behavior most when the partner’s initial forecast is consistent with the news that the subject observed. 4.3.b. Treatment C Treatments B and C are identical except for the fact that in Treatment B, subjects choose which earnings component to observe, whereas in Treatment C, subjects are independently assigned to observe revenue or cost (each with 50% probability). If subjects overestimate the likelihood that other subjects choose to observe the same earnings component as them, we should expect to find more underreaction in Treatment B than in Treatment C. In Table V, we report the average U in Treatment C, as well as the t-statistic for the difference in means between Treatments B and C. The average value of U actually rises from B to C (0.51 to 0.68), although this difference is not significant (t = 0.57). In Rows 2–9 of Table V, we examine the differences in U between Treatments B and C based on subjects’ level of sophistication (Rows 2 and 3), the relationship between the subject’s optimal initial forecast and the information he learns about his partner’s news (Rows 4 and 5), and double sorts of these variables (Rows 6–9). None of the nine t-statistics reported in Table V are significant, suggesting that none of the underreaction documented in the baseline treatment is due to subjects overestimating the likelihood that their partner chooses to observe the same earnings component as him. Table V. Revised forecasts, Treatment C We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment C. f0p* refers to the partner’s optimal initial forecast, which is revealed to subjects prior to their revised forecast. 
We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. We conduct our statistical analysis of U by treating the subject as the unit of observation. That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments B and C. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment C t-test, C−B 1 No restriction No restriction 0.68*** 0.57 2 Sophisticated No restriction 0.68** 0.22 3 Unsophisticated No restriction 0.68* 0.58 4 No restriction f0p*=f0* 0.87*** 0.66 5 No restriction f0p*≠f0* 0.10 0.24 6 Sophisticated f0p*=f0* 0.91*** 0.74 7 Sophisticated f0p*≠f0* 0.00 –1.18 8 Unsophisticated f0p*=f0* 0.84** 0.38 9 Unsophisticated f0p*≠f0* 0.16 0.84 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment C t-test, C−B 1 No restriction No restriction 0.68*** 0.57 2 Sophisticated No restriction 0.68** 0.22 3 Unsophisticated No restriction 0.68* 0.58 4 No restriction f0p*=f0* 0.87*** 0.66 5 No restriction f0p*≠f0* 0.10 0.24 6 Sophisticated f0p*=f0* 0.91*** 0.74 7 Sophisticated f0p*≠f0* 0.00 –1.18 8 Unsophisticated f0p*=f0* 0.84** 0.38 9 Unsophisticated f0p*≠f0* 0.16 0.84 Table V. Revised forecasts, Treatment C We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment C. f0p* refers to the partner’s optimal initial forecast, which is revealed to subjects prior to their revised forecast. We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. 
We conduct our statistical analysis of U by treating the subject as the unit of observation. That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments B and C. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment C t-test, C−B 1 No restriction No restriction 0.68*** 0.57 2 Sophisticated No restriction 0.68** 0.22 3 Unsophisticated No restriction 0.68* 0.58 4 No restriction f0p*=f0* 0.87*** 0.66 5 No restriction f0p*≠f0* 0.10 0.24 6 Sophisticated f0p*=f0* 0.91*** 0.74 7 Sophisticated f0p*≠f0* 0.00 –1.18 8 Unsophisticated f0p*=f0* 0.84** 0.38 9 Unsophisticated f0p*≠f0* 0.16 0.84 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment C t-test, C−B 1 No restriction No restriction 0.68*** 0.57 2 Sophisticated No restriction 0.68** 0.22 3 Unsophisticated No restriction 0.68* 0.58 4 No restriction f0p*=f0* 0.87*** 0.66 5 No restriction f0p*≠f0* 0.10 0.24 6 Sophisticated f0p*=f0* 0.91*** 0.74 7 Sophisticated f0p*≠f0* 0.00 –1.18 8 Unsophisticated f0p*=f0* 0.84** 0.38 9 Unsophisticated f0p*≠f0* 0.16 0.84 4.3.c. Treatment D In Treatment C, prior to making revised forecasts, subjects are informed (i) whether their partner observed good news (which can represent high revenue or low cost) or bad news (which can represent low revenue or high cost), and (ii) the partner’s optimal initial forecast (5 if the partner observed bad news, and 15 if the partner observed good news). Treatment D is identical to Treatment C except that subjects are not informed of the partner’s optimal initial forecast. 
According to standard theory, there should be no difference in subject behavior between Treatments C and D, because the information that can be inferred in the two treatments is identical.35 If, however, subject behavior is affected by salient numerical information, there will likely be a difference between Treatments C and D because in Treatment C, the partner’s optimal initial forecast is made salient, whereas in Treatment D, it is not. The average value of U in Treatment D is now −0.12, which is not significantly different than zero (t = 0.62). The difference in U between Treatments C and D is highly significant (t = 2.68) indicating that on average the degree of underreaction has declined significantly under this treatment. While it appears that cursedness (Treatment B) and the numerical salience of the partner’s initial forecast (Treatment D) account for all of the aggregate underreaction documented in the baseline treatment, there are important differences in the reaction to this by our subjects. Rows 2 and 3 in Table VI document that subject sophistication is an important factor in response to Treatment D. Sophisticated subjects, those who issued an optimal initial forecast, continue to underreact ( U=0.79, t = 3.46), and there is no significant difference in their behavior between Treatments C and D (t = 0.33). Unsophisticated subjects, on the other hand, actually appear to overreact in Treatment D; their average U is –0.54, which is statistically significantly different than zero (t = 2.34). The difference in their behavior across treatments is also highly significant (t = 3.05). In other words, in Treatment D, underreaction disappears in aggregate because unsophisticated subjects overreact, and this overreaction offsets the continued underreaction by sophisticated subjects. We discuss this result further in the following section. Table VI. 
Revised forecasts, Treatment D We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment D. f0p* refers to the partner’s optimal initial forecast, which is not directly revealed to subjects in Treatment D, but it can be inferred prior to subjects’ revised forecast. We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. We conduct our statistical analysis of U by treating the subject as the unit of observation. That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments B and C. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment D t-test, D−C 1 No restriction No restriction –0.12 –2.68 2 Sophisticated No restriction 0.79*** 0.33 3 Unsophisticated No restriction –0.54** –3.05 4 No restriction f0p*=f0* –0.38* –3.52 5 No restriction f0p*≠f0* 0.64*** 1.36 6 Sophisticated f0p*=f0* 0.80*** –0.45 7 Sophisticated f0p*≠f0* 0.77* 1.82 8 Unsophisticated f0p*=f0* –0.93*** –3.64 9 Unsophisticated f0p*≠f0* 0.58** 0.69 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment D t-test, D−C 1 No restriction No restriction –0.12 –2.68 2 Sophisticated No restriction 0.79*** 0.33 3 Unsophisticated No restriction –0.54** –3.05 4 No restriction f0p*=f0* –0.38* –3.52 5 No restriction f0p*≠f0* 0.64*** 1.36 6 Sophisticated f0p*=f0* 0.80*** –0.45 7 Sophisticated f0p*≠f0* 0.77* 1.82 8 Unsophisticated f0p*=f0* –0.93*** –3.64 9 Unsophisticated f0p*≠f0* 0.58** 0.69 Table VI. 
Revised forecasts, Treatment D We report the average value of our measure of underreaction, U (defined in Equation (7)), across various subsamples of Treatment D. f0p* refers to the partner’s optimal initial forecast, which is not directly revealed to subjects in Treatment D, but it can be inferred prior to subjects’ revised forecast. We classify a subject as sophisticated (unsophisticated) if his initial forecast is within $1 of the optimal initial forecast in at least (fewer than) twenty-five of the thirty periods. We conduct our statistical analysis of U by treating the subject as the unit of observation. That is, we compute the average U for each subject, and then analyze the distribution of subject-level estimates of U. *, **, and *** denote statistical significance of our estimate for U at the 10%, 5%, and 1% level, respectively. In the last column, we report t-tests for the difference in means between Treatments B and C. Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment D t-test, D−C 1 No restriction No restriction –0.12 –2.68 2 Sophisticated No restriction 0.79*** 0.33 3 Unsophisticated No restriction –0.54** –3.05 4 No restriction f0p*=f0* –0.38* –3.52 5 No restriction f0p*≠f0* 0.64*** 1.36 6 Sophisticated f0p*=f0* 0.80*** –0.45 7 Sophisticated f0p*≠f0* 0.77* 1.82 8 Unsophisticated f0p*=f0* –0.93*** –3.64 9 Unsophisticated f0p*≠f0* 0.58** 0.69 Sample restrictions U Row Subject sophistication f0p=f0*? Mean, Treatment D t-test, D−C 1 No restriction No restriction –0.12 –2.68 2 Sophisticated No restriction 0.79*** 0.33 3 Unsophisticated No restriction –0.54** –3.05 4 No restriction f0p*=f0* –0.38* –3.52 5 No restriction f0p*≠f0* 0.64*** 1.36 6 Sophisticated f0p*=f0* 0.80*** –0.45 7 Sophisticated f0p*≠f0* 0.77* 1.82 8 Unsophisticated f0p*=f0* –0.93*** –3.64 9 Unsophisticated f0p*≠f0* 0.58** 0.69 4.4 Discussion Our cross-treatment analysis can be summarized as follows. 
Roughly half of the underreaction documented in our baseline disappears in Treatment B, where cursedness cannot affect subject behavior. There is no discernible difference in subject behavior between Treatments B and C, suggesting that the false consensus effect does not significantly impact underreaction in the baseline treatment. Although there is no underreaction in aggregate in Treatment D, where subjects are only provided with qualitative forecast information about their partner, it is not accurate to describe the treatment as removing all of the bias in our experiments. There is no perceptible difference in the behavior of sophisticated subjects between Treatments C and D, but unsophisticated subjects actually switch from underreaction in Treatment C to overreaction in Treatment D. Why do sophisticated subjects exhibit underreaction in Treatments B−D? Although we describe the behavior as “underreaction,” recall that our equilibrium predictions were derived under the assumption of risk-neutrality. Hence, it is possible that the underreaction we observe in these treatments is due to risk-aversion, since risk-aversion would push subjects’ revised forecasts toward 15 (and away from 1623) when both subjects observe good news, and toward 5 (and away from 313) when both subjects observe bad news; in other words, risk aversion could cause subjects to exhibit positive U when their news is qualitatively similar to the partner’s news. However, it is worth noting that risk-aversion cannot explain the fact that sophisticated subjects underreact more significantly in Treatment A than in Treatment B, nor can it explain why unsophisticated subjects overreact in Treatment D but underreact in Treatment C. Hence, our main findings that subjects exhibit cursedness and that unsophisticated subjects react more strongly to qualitative than quantitative information are not driven by risk-aversion. 
Why do unsophisticated subjects switch from underreaction in Treatment C to overreaction in Treatment D? When information about others’ information is presented in a quantitative manner (Treatment C), unsophisticated subjects treat it as though it has little incremental informational value, perhaps because they naïvely assume the partner observed the same news as they did. However, when the information is presented in a qualitative manner (Treatment D), subjects actually overreact; this may be because they treat the new information as though it is independent of the information that they already observed. While this interpretation is plausible, it is the topic of future research to more firmly identify this explanation.36 This finding has important implications for the way that financial institutions present forecasts to their clients, as it might explain why brokerages generally emphasize the qualitative features of their analysts’ forecasts rather than the quantitative features. One source of value that financial analysts provide their employers is that they generate trading volume in the firms they cover (Irvine, 2004). Our finding that unsophisticated subjects react more strongly to qualitative forecasts than for quantitative forecasts suggests that analysts can generate more trading activity by making their qualitative recommendations (buy/sell/hold) salient rather than their quantitative forecasts (earnings forecasts and price targets).37 5. Conclusion We conducted a series of controlled laboratory experiments to examine how people update their beliefs upon observing others’ forecasts. We found a strong tendency for subjects to underreact, especially when they observe forecasts that are qualitatively similar (good news or bad news) to the news they had already observed. We designed additional treatments to determine why subjects underreact. 
Roughly half of the underreaction is explained by cursedness (Eyster and Rabin, 2005), the tendency for people to underestimate the relationship between others’ forecasts and the news that others have observed. We found no evidence that underreaction was driven by people’s tendency to overestimate the likelihood that others choose to observe the same information sources as them, as predicted by a false consensus effect (Williams, 2013). The remaining aggregate underreaction disappeared when we presented subjects with forecast information in a qualitative rather than quantitative format, even though these two information formats conveyed the exact same information. However, the format of information affected subjects differently depending on their level of sophistication; sophisticated subjects reacted the same regardless of the way the information was presented (they underreact equally to both types of forecasts), whereas unsophisticated subjects underreacted to quantitative forecasts but overreacted to qualitative forecasts. The comparison of behavior of these two subject types has important implications for both behavioral theories of information processing and analysis of empirical data and policies. On the one hand, we have sophisticated subjects (indicated by their ability to issue rational initial forecasts) who respond to changes in the noisiness of their partner’s forecast by underreacting. However, once that issue is removed (Treatment B and beyond), they do not respond to changes in experimental design which should have no theoretical impact on their decision making [e.g., clearer elicitation of the proportions picking various types of information when the proportions are already expected in equilibrium (Treatment C) and changes to the framing of the information (Treatment D)]. Why then do these subjects continue to underreact? 
While these experiments cannot precisely determine their motivations, it is possible that simple risk aversion may account for a good portion of the apparent underreaction to the risk neutral prediction. The second class of subjects is the unsophisticated ones who cannot even issue an initial forecast that is optimal despite the seeming simplicity of the problem. These subjects exhibit important behavioral patterns. First, there are times when they still appear to be completely rational (e.g., when the second forecast clearly resolves the state of the world). Second, they are responsive to changes in experimental design that should not affect behavior (Treatment D), indicating that even simple framing effects may impact their choices. In fact, we demonstrate in Treatment D that changes in these conditions are enough to swing behavior in the opposite direction (from underreaction to overreaction). Recognizing that not all participants in markets and experiments can be placed under a single behavioral theory is important since aggregate results can hide important heterogeneity. For example, in decision-making settings where most of the participants are likely to be sophisticated (such as institutional investors), minor changes to the way information is presented may not matter so policymakers may wish to focus on how to eliminate uncertainty created by the noisy actions of others. In decision-making settings where many of the participants are less sophisticated, then even small interventions may drastically impact behavior. Appendix A: Proofs In this section, we provide formal proofs for various claims made earlier in the paper. Proof: (Optimal Revised Forecasts) In Treatment A, the optimal revised forecast is a mapping from the information that subject has at the revised forecast stage (the realization of the earnings component he chose to observe and his partner’s initial forecast) to R. 
However, we lack sufficient data to estimate the likelihood that the partner observed good or bad news whenever his forecast does not equal 5 or 15. Hence, to formally express the optimal revised forecast, we will restrict attention to the cases in which the partner’s initial forecast equals 5 or 15. Suppose a subject observes that revenue is high (R = 20) and that his partner’s initial forecast ( f0p) is 15. The subject’s optimal revised forecast, fR*, is given by fR*=ω1(10)+ω2(15)+ω3(20), (8) where ω1 is the conditional likelihood that the partner saw bad news (C = 10) but issued a sub-optimal initial forecast (i.e., 15 instead of the optimal initial forecast, 5), ω2 is the conditional likelihood that the partner observed R = 20, and ω3 is the conditional likelihood that the partner observed C = 0. Let q denote the ex ante likelihood that the partner chose to observe revenue, and let ρ denote the likelihood that the partner’s initial forecast is optimal conditional on it equaling 5 or 15. To solve for (ω1,ω2,ω3), first note that it is impossible that the partner observed that revenue is low (R = 10), and there are therefore six possible states of the world: The partner observes that revenue = 20 and issues an initial forecast of 15. The optimal revised forecast in this state of the world is 15 since the fact that revenue is 20 provides no incremental information. The likelihood of this state is qρ, since the partner chooses revenue with likelihood q, revenue is 100% likely to be high (since the subject observed that R = 20), and the partner’s initial forecast is optimal with likelihood ρ. The partner observes that revenue = 20 and issues an initial forecast of 5. The optimal revised forecast in this state of the world is 15 since the fact that revenue is twenty provides no incremental information. 
The likelihood of this state is q(1−ρ), since the partner chooses revenue with likelihood q, revenue is 100% likely to be high (since the subject observed that R = 20), and the partner’s initial forecast is suboptimal with likelihood 1−ρ. The partner observes that cost = 10 and issues an initial forecast of 5. The optimal revised forecast in this state of the world is 10 (since 20−10=10). The likelihood of this state is (1−q)12ρ, since the partner chooses cost with likelihood 1−q, cost is 50% likely to be high, and the partner’s initial forecast is optimal with likelihood ρ. The partner observes that cost = 10 and issues an initial forecast of 15. The optimal revised forecast in this state of the world is 10 (since 20−10=10). The likelihood of this state is (1−q)12(1−ρ), since the partner chooses cost with likelihood 1−q, cost is 50% likely to be high, and the partner’s initial forecast is suboptimal with likelihood 1−ρ. The partner observes that cost = 0 and issues an initial forecast of 15. The optimal revised forecast in this state of the world is 20 (since 20−0=20). The likelihood of this state is (1−q)12ρ, since the partner chooses cost with likelihood 1−q, cost is 50% likely to be high, and the partner’s initial forecast is optimal with likelihood ρ. The partner observes that cost = 0 and issues an initial forecast of 5. The optimal revised forecast in this state of the world is 20 (since 20−0=20). The likelihood of this state is (1−q)12(1−ρ), since the partner chooses cost with likelihood 1−q, cost is 50% likely to be high, and the partner’s initial forecast is suboptimal with likelihood 1−ρ. Applying Bayes rule, it follows that ω1,ω2, and ω3 are given by: ω1=(1−q)(1−ρ)1−q+2qρ (9) ω2=2qρ1−q+2qρ (10) ω3=(1−q)ρ1−q+2qρ. (11) Analogous reasoning can be used to solve for the optimal revised forecast in the other cases. 
Letting f0p denote the partner’s initial forecast, and letting λ1, λ2, and λ3 be defined as λ1=(1−q)ρ1+q−2qρ (12) λ2=2q(1−ρ)1+q−2qρ (13) λ3=(1−q)(1−ρ)1+q−2qρ, (14) it follows that the subject’s optimal revised forecast, fR*, is given by fR*={ω1(10)+ω2(15)+ω3(20) if the subject observes good news and f0p=15λ1(10)+λ2(15)+λ3(20) if the subject observes good news and f0p=5λ1(10)+λ2(5)+λ3(0) if the subject observes bad news and f0p=15ω1(10)+ω2(5)+ω3(0) if the subject observes bad news and f0p=5, (15) whenever f0p∈{5,15}. □ Proof: (Nash Equilibrium). Since subjects can minimize their forecast errors in the revised forecast stage by choosing to observe the earnings component that their partner is less likely to observe, in any symmetric equilibrium, subjects randomize between observing revenue and cost, each with 50% probability. Moreover, subjects’ initial forecasts are always optimal in equilibrium. In the notation of Equations (9)–(14), this implies that in any symmetric equilibrium, q = 0.5 and ρ = 1. Equation (6) trivially follows by plugging these values into Equations (9)–(15). □ In the rest of this section we establish some properties of incentives created by the scoring rule examined in the experiment. Throughout let G be the distribution of the random variable X, when there is the possibility of confusion we denote the expectation of X with respect to this distribution by EG(X). We assume agents have twice differentiable, strictly increasing, concave Bernoulli utility functions u with u′>0 and u″≤0. Upon receiving some information, agents are asked to provide a forecast f of X and are compensated according to the proper scoring rule: s(f,x)=α−β(f−x)2, where α,β>0 and x is the realized value of the random variable. Note that the partial derivatives of the scoring function are given by sf(f,x)=−2β(f−x) and sx(f,x)=2β(f−x) which differ only in sign. We begin by establishing two rather obvious results related to the scoring rule. 
Proposition 1 If a decision maker is risk neutral, then the optimal forecast f*=E(X). Proof: If a decision maker is risk neutral they will want to select a forecast to maximize the expected value of the scoring rule yielding the following first-order condition: E(sf(f*,X))=0E(−2β(f*−X))=0E(−2βf*)−E(−2βX)=0−2βf*+2βE(X)=0f*=E(X). □ When there is no uncertainty, any risk averse decision maker should prefer to report the known value of X. Proposition 2 Consider a decision maker with preferences that are strictly increasing in payoffs ( u′>0). If the distribution of X is degenerate at x, then the optimal forecast is f*=x. Proof: The agent will select a forecast to maximize the (certain) utility from the scoring rule yielding the following first-order condition: u′(s(f*,x))sf(f*,x)=0−u′(s(f*,x))2β(f*−x)=0 which is only satisfied if f*=x. It is straightforward to verify that the second-order condition at the optimal solution is given by −2βu′(s(f*,x)) so u′>0 ensures this is a maximum. □ Finally, we demonstrate that shifts in the distribution in terms of first-order stochastic dominance will result in more optimistic forecasts by any risk averse decision maker. A distribution H is said to first order stochastically dominate G if for all x, G(x)≥H(x) meaning that higher values are “always” more likely for the distribution H than G. We use fH and fG to denote the forecast made by the decision maker under the different distributions. Proposition 3 If H first-order stochastically dominates G, then for any risk averse decision maker the optimal forecasts will be such that fH*≥fG*. Proof: Prove by contradiction. Assume the forecasts are optimal but fG*>fH*. In order for each forecast to be optimal, they must satisfy the following first-order conditions: EH(u′(s(fH*,X))sf(fH*,X))=0EG(u′(s(fG*,X))sf(fG*,X))=0. Also, consider the derivative of the function inside the expectation with respect to x, which is given by −u″(s(f,x))(2β)2(f−x)2+u′(s(f,x))2β. 
Since u″≤0 and u′>0, we have this function is strictly positive, or the function inside the expectation is an increasing function of x. Similarly, it follows that (taking the derivative of the function with respect to f) the function inside the expectation is a strictly decreasing function of f. Assuming optimality of fH* we have EH(u′(s(fH*,X))sf(fH*,X))=0 but since H first-order stochastically dominates G and the function inside the expectation is increasing, it must be that EH(u′(s(fH*,X))sf(fH*,X))≥EG(u′(s(fH*,X))sf(fH*,X)) and, since fG*>fH* and the function is strictly decreasing in f EG(u′(s(fH*,X))sf(fH*,X))>EG(u′(s(fG*X))sf(fG*,X)). However, putting these three inequalities together we have that EG(u′(s(fG*X))sf(fG*,X))<0, which contradicts the assumption that fG* is optimal. □ Appendix B: Experimental Instructions Instructions for Treatment A You are about to participate in an experiment in economics of individual decision-making. If you follow these instructions carefully and make good decisions you will earn additional money that will be paid to you in cash at the end of the session. If you have a question at any time, please raise your hand and the experimenter will answer it. We ask that you not talk with one another for the duration of the experiment. The experiment will continue for a number of periods. Each period is independent of the other in the sense that the outcomes of one period do not directly influence the outcomes of another. In each period you will be participate in a forecasting exercise described below. How you earn money This is an experiment on financial forecasting. Each period, you will forecast a firm’s earnings and you will be compensated based on your forecast accuracy—the more accurate you are, the more money you will earn. Your earnings from any one forecast is given by: Compensation =$6−0.04( Your Forecast− Actual Earnings)2. 
You can maximize your expected compensation by issuing forecasts that are equal to the expected (average) earnings given the information you have observed. For example, if you think it is 50% likely a firm's earnings are $10, and 50% likely the firm's earnings are $20, you would maximize your expected compensation by issuing a forecast equal to $15 (because 0.5($10) + 0.5($20) = $15).

How firm earnings are determined

A firm's earnings are equal to its revenues minus its costs, plus a random noise term. In other words,

Actual Earnings = Revenues − Costs + Noise.

At the time of issuing your forecast, the firm's actual earnings will not be known to you. However, you will have the opportunity to obtain information that might help you issue a more accurate forecast. Revenues are either $10 or $20, each with 50% probability. Costs are either $0 or $10, each with 50% probability. The noise term can take any value between −$1.00 and $1.00, and each value is equally likely. Note that on average, the noise term will equal $0.

Revenues and costs are independent of each other. In other words, regardless of whether the firm's revenues are $10 or $20, there's a 50/50 chance its costs will be $0, and regardless of whether the firm's costs are $0 or $10, there's a 50/50 chance its revenues will be $20. Revenues and costs are also independent across time. In other words, there's always a 50/50 chance costs will be $0, and a 50/50 chance revenues will be $20, regardless of previous periods' costs and revenues. The noise term is independent of revenues and costs. That is, whether revenues and costs are high or low does not affect whether the noise term is unusually high or low.

Practice Periods

In each of these periods, you will be shown the firm's revenues and its costs but not the noise. You will then be asked to provide your Initial Forecast for the firm's earnings given that information. This will continue for three periods.
You will not be compensated for the decisions you make in these periods.

Compensation Periods

At the beginning of each period, you must choose whether to observe the firm's revenues or costs—you may observe one, but not both. After observing that information, you will be asked to place your Initial Forecast.

Revised forecast opportunity

Each period, you will be randomly assigned a partner. After issuing your Initial Forecast, you will be shown the Initial Forecast your partner issued that period. (Similarly, your partner will see the Initial Forecast that you issue that period.) After observing your partner's forecast, you will be asked to issue a Revised Forecast for the firm's earnings that period.

Your partner observes information about the same firm as you. In other words, the revenues and costs are the same for your firm as they are for his or her firm. (Subjects who are not your partner, e.g., your neighbor, may observe revenues or costs from a different firm.) Note that you will generally be assigned a different random partner each period.

Results and earnings determination

At the end of each period, the firm's actual earnings will be announced and your forecast earnings from both your Initial Forecast and Revised Forecast will be displayed. After all the periods have been completed, one period will be randomly chosen as your Initial Forecast compensation period (i.e., you will be compensated based on the accuracy of your initial forecast that period), and one period will be randomly chosen as your Revised Forecast compensation period (i.e., you will be compensated based on the accuracy of the forecast you issue after observing your partner's initial forecast). Your total earnings will be the compensation from these two forecasts and the $5 show-up fee.

Instructions for Treatment B

You are about to participate in an experiment in the economics of individual decision-making.
If you follow these instructions carefully and make good decisions, you will earn additional money that will be paid to you in cash at the end of the session. If you have a question at any time, please raise your hand and the experimenter will answer it. We ask that you not talk with one another for the duration of the experiment.

The experiment will continue for a number of periods. Each period is independent of the others in the sense that the outcomes of one period do not directly influence the outcomes of another. In each period you will participate in the forecasting exercise described below.

How you earn money

This is an experiment on financial forecasting. Each period, you will forecast a firm's earnings and you will be compensated based on your forecast accuracy—the more accurate you are, the more money you will earn. Your compensation from any one forecast is given by:

Compensation = $6 − $0.04 × (Your Forecast − Actual Earnings)².

You can maximize your expected compensation by issuing forecasts that are equal to the expected (average) earnings given the information you have observed. For example, if you think it is 50% likely a firm's earnings are $10, and 50% likely the firm's earnings are $20, you would maximize your expected compensation by issuing a forecast equal to $15 (because 0.5($10) + 0.5($20) = $15).

How firm earnings are determined

A firm's earnings are equal to its revenues minus its costs, plus a random noise term. In other words,

Actual Earnings = Revenues − Costs + Noise.

At the time of issuing your forecast, the firm's actual earnings will not be known to you. However, you will have the opportunity to obtain information that might help you issue a more accurate forecast. Revenues are either $10 or $20, each with 50% probability. High revenues ($20) are considered "good news," and low revenues ($10) are considered "bad news." Costs are either $0 or $10, each with 50% probability.
Low costs ($0) are considered "good news," and high costs ($10) are considered "bad news." The noise term can take any value between −$1.00 and $1.00, and each value is equally likely. Note that on average, revenues are $15, costs are $5, and the noise term is $0.

Revenues and costs are independent of each other. In other words, regardless of whether the firm's revenues are good or bad, there's a 50% chance its costs will be good, and regardless of whether the firm's costs are good or bad, there's a 50% chance its revenues will be good. Revenues and costs are also independent across time. In other words, in every period there is a 50% chance revenues will be good, and a 50% chance costs will be good, regardless of previous periods' costs and revenues. The noise term is independent of revenues and costs. That is, whether revenues and costs are high or low does not affect whether the noise term is unusually high or low.

Initial Forecasts

At the beginning of each period, you must choose whether to observe the firm's revenues or costs—you may observe one, but not both. After observing that information, you will be asked to place your Initial Forecast.

Revised forecast opportunity

Each period, you will be randomly assigned a partner. After issuing your Initial Forecast, you will be informed whether your partner observed "good news" or "bad news." Recall that "good news" corresponds to either high revenues ($20) or low costs ($0), and "bad news" corresponds to either low revenues ($10) or high costs ($10). (Similarly, your partner will see whether the news you observed was "good" or "bad.") You will not be told whether your partner chose to observe revenues or costs. After observing whether your partner observed good news or bad news, you will be asked to issue a Revised Forecast for the firm's earnings that period.

Your partner observes information about the same firm as you.
In other words, the revenues and costs are the same for your firm as they are for his or her firm. (Subjects who are not your partner, e.g., your neighbor, may observe revenues or costs from a different firm.) Note that you will generally be assigned a different random partner each period.

Results and earnings determination

At the end of each period, the firm's actual earnings will be announced and your forecast compensation from both your Initial Forecast and Revised Forecast will be displayed. After all the periods have been completed, one period will be randomly chosen as your Initial Forecast compensation period (i.e., you will be compensated based on the accuracy of your initial forecast that period), and one period will be randomly chosen as your Revised Forecast compensation period (i.e., you will be compensated based on the accuracy of the forecast you issue after observing your partner's initial forecast). Your total compensation will be the sum of your compensation from these two forecasts and the $5 show-up fee.

Instructions for Treatment C

You are about to participate in an experiment in the economics of individual decision-making. If you follow these instructions carefully and make good decisions, you will earn additional money that will be paid to you in cash at the end of the session. If you have a question at any time, please raise your hand and the experimenter will answer it. We ask that you not talk with one another for the duration of the experiment.

The experiment will continue for a number of periods. Each period is independent of the others in the sense that the outcomes of one period do not directly influence the outcomes of another. In each period you will participate in the forecasting exercise described below.

How you earn money

This is an experiment on financial forecasting. Each period, you will forecast a firm's earnings and you will be compensated based on your forecast accuracy—the more accurate you are, the more money you will earn.
Your compensation from any one forecast is given by:

Compensation = $6 − $0.04 × (Your Forecast − Actual Earnings)².

You can maximize your expected compensation by issuing forecasts that are equal to the expected (average) earnings given the information you have observed. For example, if you think it is 50% likely a firm's earnings are $10, and 50% likely the firm's earnings are $20, you would maximize your expected compensation by issuing a forecast equal to $15 (because 0.5($10) + 0.5($20) = $15).

How firm earnings are determined

A firm's earnings are equal to its revenues minus its costs, plus a random noise term. In other words,

Actual Earnings = Revenues − Costs + Noise.

At the time of issuing your forecast, the firm's actual earnings will not be known to you. However, you will obtain information that might help you issue a more accurate forecast. Revenues are either $10 or $20, each with 50% probability. High revenues ($20) are considered "good news," and low revenues ($10) are considered "bad news." Costs are either $0 or $10, each with 50% probability. Low costs ($0) are considered "good news," and high costs ($10) are considered "bad news." The noise term can take any value between −$1.00 and $1.00, and each value is equally likely. Note that on average, revenues are $15, costs are $5, and the noise term is $0.

Revenues and costs are independent of each other. In other words, regardless of whether the firm's revenues are good or bad, there's a 50% chance its costs will be good, and regardless of whether the firm's costs are good or bad, there's a 50% chance its revenues will be good. Revenues and costs are also independent across time. In other words, in every period there is a 50% chance revenues will be good, and a 50% chance costs will be good, regardless of previous periods' costs and revenues. The noise term is independent of revenues and costs. That is, whether revenues and costs are high or low does not affect whether the noise term is unusually high or low.
Initial Forecasts

At the beginning of each period, the computer will randomly choose whether to display the firm's costs or revenues. It is equally likely that the computer will choose to display revenues as costs (each has 50% probability). After observing that information, you will be asked to place your Initial Forecast.

Revised forecast opportunity

Each period, you will be randomly assigned a partner. Your partner is also shown the firm's revenues or costs, but not both. Whether your partner is shown revenues or costs is independent of whether you are shown revenues or costs. That is, the likelihood that your partner observes revenues is 50% regardless of whether the computer shows you the firm's revenues or costs. After issuing your Initial Forecast, you will be informed whether your partner observed "good news" or "bad news." Recall that "good news" corresponds to either high revenues ($20) or low costs ($0), and "bad news" corresponds to either low revenues ($10) or high costs ($10). (Similarly, your partner will see whether the news you observed was "good" or "bad.") You will not be told whether your partner was shown the firm's revenues or costs. After observing whether your partner observed good news or bad news, you will be asked to issue a Revised Forecast for the firm's earnings that period.

Your partner observes information about the same firm as you. In other words, the revenues and costs are the same for your firm as they are for his or her firm. (Subjects who are not your partner, e.g., your neighbor, may observe revenues or costs from a different firm.) Note that you will generally be assigned a different random partner each period.

Results and earnings determination

At the end of each period, the firm's actual earnings will be announced and your forecast compensation from both your Initial Forecast and Revised Forecast will be displayed.
After all the periods have been completed, one period will be randomly chosen as your Initial Forecast compensation period (i.e., you will be compensated based on the accuracy of your initial forecast that period), and one period will be randomly chosen as your Revised Forecast compensation period (i.e., you will be compensated based on the accuracy of the forecast you issue after observing your partner's initial forecast). Your total compensation will be the sum of your compensation from these two forecasts and the $5 show-up fee.

Footnotes

1. See Williams (2013) for a more detailed discussion of why a false consensus effect should lead to underreaction.

2. Our main results are based on comparisons of subject behavior across treatments, so the primary identified effects are not directly impacted by risk aversion. We discuss risk aversion in more detail in Section 3.2.

3. Although the averaging heuristic can be optimal if the analysts are equally rational and they are focusing on the same pieces of information, it can lead to underreaction if the analysts are focusing on different pieces of information. For example, suppose one analyst focused on the implications of the quarter t + 1 earnings announcement for one of the firm's segments' t + 2 performance, while the other analyst focused on the implications of the quarter t + 1 earnings for the t + 2 performance of a different segment. In this scenario, rather than averaging the posteriors, the analysts should reason that the expected earnings of one of the firm's segments rose by $0.10, while the expected earnings of the firm's other segment rose by $0.20. Hence, the analysts should revise their beliefs and converge to $2.30 in this scenario. The averaging heuristic essentially assigns 0 probability to this scenario, so it naturally leads to underreaction whenever that scenario is possible.

4. Each subject observes a different draw, and the marbles are drawn with replacement.
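The arithmetic in Footnote 3 can be made explicit with a short sketch. The $2.00 prior is an assumption implied by the footnote's $2.30 figure, and the variable names are ours:

```python
# Footnote 3's example: two analysts observe independent revisions to
# different segments of the same firm. Prior consensus forecast assumed $2.00.
prior = 2.00
revision_a = 0.10  # analyst 1: segment A's expected earnings rose $0.10
revision_b = 0.20  # analyst 2: segment B's expected earnings rose $0.20

posterior_a = prior + revision_a  # analyst 1's posterior, $2.10
posterior_b = prior + revision_b  # analyst 2's posterior, $2.20

# Rational aggregation: the signals concern different segments, so both
# revisions should be applied on top of the prior.
rational = prior + revision_a + revision_b
print(round(rational, 2))   # 2.3, i.e., the $2.30 in the footnote

# The averaging heuristic instead splits the difference between the two
# posteriors, discarding half of each analyst's information: underreaction.
heuristic = (posterior_a + posterior_b) / 2
print(round(heuristic, 2))  # 2.15
```

The gap between $2.30 and $2.15 is exactly the half of each independent revision that averaging throws away.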
5. Questions that are somewhat similar to, but fundamentally distinct from, this issue are: "Do people understand correlations of asset returns?" and "Do people recognize the differences between correlated and uncorrelated signals?" These questions have been addressed by Eyster and Weizsäcker (2011) and Enke and Zimmerman (2016), respectively. Both studies conclude that subjects are generally unable to correctly adjust for the effects of correlations.

6. As long as people learn relevant information, group polarization is consistent with the standard Bayesian model in which more precise information leads to more extreme posterior beliefs. However, polarization occurs even when no one reveals relevant information during the discussions.

7. We are obviously not the first to use the laboratory to directly test models' underlying assumptions. For example, Bloomfield and Hales (2002) utilize the laboratory to test, and find support for, the regime-shifting beliefs posited by Barberis, Shleifer, and Vishny (1998).

8. For a thorough review of the theoretical literature on learning in social networks, see Acemoglu et al. (2011).

9. Corazzini et al. (2012) conduct laboratory experiments to test DeMarzo, Vayanos, and Zwiebel (2003), and they argue that their results suggest that a person's influence depends not only on how many people listen to him, but also on how many people he listens to. They argue that the persuasion bias model of DeMarzo, Vayanos, and Zwiebel (2003) is an extreme case of a more general boundedly rational updating rule. Brandts, Giritligil, and Weber (2015) conduct laboratory experiments that are more closely related to DeMarzo, Vayanos, and Zwiebel (2003), and they argue that their results are consistent with persuasion bias but not with the updating rule proposed by Corazzini et al. (2012).
Chandrasekhar, Larreguy, and Xandri (2016) and Mobius, Phan, and Szeidl (2015) also test whether people overweight the information of influential members in social networks. Chandrasekhar, Larreguy, and Xandri (2016) find that people mistakenly overweight influential members' information, whereas Mobius, Phan, and Szeidl (2015) find that people do not make this mistake. As Mobius, Phan, and Szeidl (2015) note, these seemingly contradictory findings can be reconciled upon observing that in Mobius, Phan, and Szeidl (2015), people can "tag" their information sources, whereas in Chandrasekhar, Larreguy, and Xandri (2016), they cannot.

10. See, for example, Trueman (1994); Hong, Kubik, and Solomon (2000); Hong and Kubik (2003); and Ottaviani and Sørensen (2006).

11. This restriction—that subjects can observe revenue or cost, but not both—also captures the idea that financial analysts can receive private signals about the earnings quality of various earnings components following an earnings announcement. That is, some analysts might be able to infer the earnings quality of one earnings component, which will affect their one-period-ahead earnings forecasts, while other analysts might be able to infer the quality of a different earnings component. The order of the buttons for selecting either revenue or cost was randomized on the computer screen to avoid potential biases toward selecting the top button.

12. Subjects were not required to change their forecast but were required to enter a new (potentially identical) forecast number.

13. Since the object of this study was not compliance with a proper scoring rule, the experimental instructions provided guidance in this regard. Subjects were informed that they would maximize their expected compensation by issuing forecasts equal to what they think earnings would be on average given all the information they have observed. We formally verify the incentive properties of the scoring rule in Appendix A.
14. Assuming the partner's initial forecasts are optimal, an initial forecast of 15 (5) reveals that he observed good (bad) news.

15. Technically, the equilibrium revised forecasts should specify the forecasts that subjects issue for any initial forecast given by their partner, including off-equilibrium forecasts. However, since subjects' responses (i.e., revised forecasts) in such situations have no impact on their partner's compensation, such responses to off-equilibrium play can be safely ignored in our setting.

16. Anderson and Holt (1997) find that in an information cascades experiment subjects may rely more on their own private signal because others' choices might have been error prone.

17. Such a bias is consistent with the "better than average effect," that is, the tendency for people to overestimate the likelihood that they are better than average at a given task. For example, Svenson (1981) documents that 82% of American drivers believe they are in the top 30% of drivers. See Odean (1998) for a review of this literature.

18. A fully rational subject should try to observe the opposite earnings component from his partner. For example, if he believed his partner would choose to observe revenue, he would optimally choose to observe cost, as this would minimize his expected forecast error in the revised forecast stage of the experiment.

19. As Manski (2011) notes, there is no insight to be learned from the empirical regularity that mean forecasts outperform the typical individual forecast—it follows from Jensen's inequality that with squared loss functions, the loss of the mean forecast can never exceed the mean loss of the individual forecasts.

20. Using the terminology of Larrick and Soll (2006), the forecasts "bracket" the true earnings when the subjects see qualitatively dissimilar news (one good, one bad), and the forecasts do not "bracket" the true earnings when both see good news or both see bad news.
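Footnotes 19 and 20 can be illustrated together with hypothetical numbers: under squared loss, the mean forecast's loss never exceeds the mean of the individual losses (Jensen's inequality), and the improvement is largest when the forecasts bracket the truth. A minimal sketch, with forecast values of our own choosing:

```python
# Squared loss of the mean forecast vs. mean of the individual squared losses
# (Footnote 19's Jensen's-inequality point), with hypothetical forecasts.
def sq_loss(forecast, truth):
    return (forecast - truth) ** 2

def compare(forecasts, truth):
    mean_forecast = sum(forecasts) / len(forecasts)
    loss_of_mean = sq_loss(mean_forecast, truth)
    mean_of_losses = sum(sq_loss(f, truth) for f in forecasts) / len(forecasts)
    return loss_of_mean, mean_of_losses

# Forecasts bracket the truth (Footnote 20): averaging helps dramatically.
print(compare([5, 15], truth=10))   # (0.0, 25.0)

# Forecasts do not bracket the truth: averaging still never hurts.
print(compare([12, 18], truth=10))  # (25.0, 34.0)
```

In both cases the first number (loss of the mean forecast) is no larger than the second (mean of the losses), as the inequality guarantees.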
21. Throughout the paper, we conduct our statistical analysis by treating the subject as the unit of observation. That is, for each variable, we compute the mean of the variable for each subject and then compare the distribution of subject-level means across treatments; our number of observations in each treatment is the number of subjects in that treatment.

22. While Anderson and Holt (1997) have examined models that are somewhat similar in the context of information cascades, the fact that subjects' initial forecasts are continuous makes formulation of a quantal-response-type model analytically difficult in our setting.

23. Ninety-five percent is a conservative estimate for this probability. In our three treatments (described in the next section), the initial forecasting task is the same, and the probability is over 98% in each of those treatments.

24. Our qualitative results are not sensitive to this assumption. Given the assumptions in this paragraph, it follows from Equations (9) to (15) in Appendix A that the optimal revised forecasts are given by:

fR* = 16.55 if the subject observes good news and the partner's initial forecast is 15;
fR* = 10.91 if the subject observes good news and the partner's initial forecast is 5;
fR* = 9.09 if the subject observes bad news and the partner's initial forecast is 15;
fR* = 3.45 if the subject observes bad news and the partner's initial forecast is 5.

25. Risk and/or ambiguity aversion can only explain this behavior if subjects are infinitely risk or ambiguity averse.

26. In addition, if a subject were certain that his partner would choose to observe revenue (cost), then he should have chosen to observe cost (revenue), as this would have reduced his expected forecast error in the revised forecast stage.

27. Recall that it is difficult to determine whether financial analysts are overconfident by analyzing their earnings forecasts because their objective functions are unknown.
28. Of course, it is also possible the partner saw bad news but issued a suboptimal initial forecast. This rarely occurs, but subjects should account for it in the manner described in Footnote 24.

29. Subjects are not informed which earnings component their partner observed.

30. We thank an anonymous referee for pointing this out.

31. See Section 3.3.3 for more discussion of why the salience of the partner's numerical forecast might affect the subject's behavior in the revised forecast stage.

32. This information was also provided to subjects in Treatments B and C—in those treatments, they were given this information plus the numerical forecasts. Hence, subjects should be able to infer the exact same information in each of Treatments B, C, and D.

33. The only differences concern the parameters ρ and q in Equations (9)–(14), which represent, respectively, the likelihood that the partner's initial forecast is optimal conditional on it equaling 5 or 15, and the ex ante likelihood that the partner observes the same earnings component as the subject. In our baseline treatment, ρ < 1 and subjects do not know the values of ρ or q. In Treatment B, ρ = 1, and subjects know this, but they do not know q. In Treatments C and D, ρ = 1 and q = 0.5, and subjects are informed of this. In Treatments B–D, ρ = 1 and q = 0.5, and it follows from Equations (9) to (15) in Appendix A that the optimal revised forecasts in these treatments are given by:

fR* = 16 2/3 if the subject and partner both observe good news;
fR* = 10 if the subject observes good (bad) news and the partner observes bad (good) news;
fR* = 3 1/3 if the subject and partner both observe bad news.

(Recall that the optimal revised forecasts in Treatment A are specified in Footnote 24.)

34. Specifically, U is defined in terms of fR*, which takes into account the possibility that the partner makes mistakes in his initial forecast. See Footnotes 7 and 24, and Appendix A.
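The Treatment B–D forecasts in Footnote 33 follow from straightforward Bayesian updating when ρ = 1 and q = 0.5. A minimal sketch of that calculation (the enumeration approach and function name are ours, not the paper's Equations (9)–(15)): with all eight (revenue, cost, partner's component) combinations equally likely, condition on the subject's own signal and the partner's good/bad label, then average earnings over the surviving states.

```python
from itertools import product
from fractions import Fraction

# Bayesian revised forecast assuming rho = 1 (partner's report is reliable)
# and q = 0.5 (partner's component independent of the subject's). Noise has
# mean zero, so expected earnings = revenue - cost over the posterior states.
def revised_forecast(own_component, own_value, partner_news):
    posterior = []
    for rev, cost, partner_sees in product((10, 20), (0, 10), ("rev", "cost")):
        if (rev if own_component == "rev" else cost) != own_value:
            continue  # state inconsistent with the subject's own signal
        partner_value = rev if partner_sees == "rev" else cost
        good = (partner_value == 20) if partner_sees == "rev" else (partner_value == 0)
        if ("good" if good else "bad") != partner_news:
            continue  # state inconsistent with the partner's reported news
        posterior.append(rev - cost)
    return Fraction(sum(posterior), len(posterior))

print(revised_forecast("rev", 20, "good"))  # 50/3, i.e., 16 2/3
print(revised_forecast("rev", 20, "bad"))   # 10
print(revised_forecast("cost", 10, "bad"))  # 10/3, i.e., 3 1/3
```

Uniform weighting over the surviving states is valid here because every (revenue, cost, component) combination has the same prior probability of 1/8; the three printed values match the footnote's 16 2/3, 10, and 3 1/3.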
35. Recall that if the partner observes good (bad) news, his optimal initial forecast is 15 (5).

36. The optimal response is to appropriately weight these possibilities based on their relative likelihoods.

37. It is unlikely that institutional investors would be affected by such framing, but our experiments suggest that less sophisticated retail investors would be affected.

References

Abarbanell, J. (1991): Do analysts' earnings forecasts incorporate information in prior stock price changes?, Journal of Accounting and Economics 14, 147–165.

Acemoglu, D., Dahleh, M., Lobel, I., and Ozdaglar, A. (2011): Bayesian learning in social networks, The Review of Economic Studies 78, 1201–1236.

Anderson, L. R. and Holt, C. A. (1997): Information cascades in the laboratory, The American Economic Review 87, 847–862.

Barberis, N., Shleifer, A., and Vishny, R. (1998): A model of investor sentiment, Journal of Financial Economics 49, 307–343.

Bernhardt, D., Campello, M., and Kutsoati, E. (2006): Who herds?, Journal of Financial Economics 80, 657–675.

Bloomfield, R. and Hales, J. (2002): Predicting the next step of a random walk: experimental evidence of regime-shifting beliefs, Journal of Financial Economics 65, 397–414.

Bloomfield, R. and Hales, J. (2009): An experimental investigation of the positive and negative effects of mutual observation, The Accounting Review 84, 331–354.

Brandts, J., Giritligil, A. E., and Weber, R. (2015): An experimental study of persuasion bias and social influence in networks, European Economic Review 80, 214–229.

Brown, L., Call, A., Clement, M., and Sharp, N. (2015): Inside the "black box" of sell-side financial analysts, Journal of Accounting Research 53, 1–47.
Chandrasekhar, A., Larreguy, H., and Xandri, J. P. (2016): Testing models of social learning on networks: evidence from a framed field experiment. Working paper.

Chen, Q. and Jiang, W. (2006): Analysts' weighting of private and public information, The Review of Financial Studies 19, 319–355.

Corazzini, L., Pavesi, F., Petrovich, B., and Stanca, L. (2012): Influential listeners: an experiment in persuasion bias in social networks, European Economic Review 56, 1276–1288.

DeMarzo, P., Vayanos, D., and Zwiebel, J. (2003): Persuasion bias, social influence, and unidimensional opinions, The Quarterly Journal of Economics 118, 909–968.

Enke, B. and Zimmerman, F. (2017): Correlation neglect in belief formation, The Review of Economic Studies, forthcoming.

Eyster, E. and Rabin, M. (2005): Cursed equilibrium, Econometrica 73, 1623–1672.

Eyster, E., Rabin, M., and Vayanos, D. (2017): Financial markets where traders neglect the informational content of prices, The Journal of Finance, forthcoming.

Eyster, E. and Weizsäcker, G. (2011): Correlation neglect in financial decision-making. Working paper, London School of Economics, DIW Berlin, and University College London.

Fischbacher, U. (2007): z-Tree: Zurich toolbox for ready-made economic experiments, Experimental Economics 10, 171–178.

Forsythe, R., Palfrey, T., and Plott, C. (1982): Asset valuation in an experimental market, Econometrica 50, 537–567.

Galton, F. (1907): Vox populi, Nature 75, 450–451.

Glaeser, E. and Sunstein, C. (2009): Extremism and social learning, Journal of Legal Analysis 1, 263–321.

Hong, H. and Kubik, J.
(2003): Analyzing the analysts: career concerns and biased earnings forecasts, The Journal of Finance 58, 313–351.

Hong, H., Kubik, J., and Solomon, A. (2000): Security analysts' career concerns and herding of earnings forecasts, RAND Journal of Economics 31, 121–144.

Irvine, P. (2004): Analysts' forecasts and brokerage-firm trading, The Accounting Review 79, 125–149.

Krueger, J. (1998): On the perception of social consensus, in: Zanna, M. P. (ed.), Advances in Experimental Social Psychology, Vol. 30, Academic Press, pp. 163–240, https://doi.org/10.1016/S0065-2601(08)60384-6.

Krueger, J. (2000): The projective perception of the social world, in: Suls, J. and Wheeler, L. (eds.), Handbook of Social Comparison: Theory and Research, Springer, Boston, MA, pp. 323–351, https://link.springer.com/chapter/10.1007/978-1-4615-4237-7_16.

Larrick, R. and Soll, J. (2006): Intuitions about combining opinions: misappreciation of the averaging principle, Management Science 52, 111–127.

Levy, G. and Razin, R. (2015): Correlation neglect, voting behavior, and information aggregation, The American Economic Review 105, 1634–1645.

Lys, T. and Sohn, S. (1990): The association between revisions of financial analysts' earnings forecasts and security-price changes, Journal of Accounting and Economics 13, 341–363.

Manski, C. (2011): Interpreting and combining heterogeneous survey forecasts, in: Clements, M. P. and Hendry, D. F. (eds.), Oxford Handbook of Economic Forecasting, Chapter 16, Oxford University Press, UK, pp. 457–472, http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780195398649.001.0001/oxfordhb-9780195398649-e-17.

Mobius, M., Phan, T., and Szeidl, A.
(2015): Treasure hunt: social learning in the field. Working paper.

Odean, T. (1998): Volume, volatility, price, and profit when all traders are above average, The Journal of Finance 53, 1887–1934.

Ortoleva, P. and Snowberg, E. (2015): Overconfidence in political behavior, The American Economic Review 105, 504–535.

Ottaviani, M. and Sørensen, P. N. (2006): The strategy of professional forecasting, Journal of Financial Economics 81, 441–466.

Plott, C. and Sunder, S. (1982): Efficiency of experimental security markets with insider information: an application of rational-expectations models, Journal of Political Economy 90, 663–698.

Plott, C. and Sunder, S. (1988): Rational expectations and the aggregation of diverse information in laboratory security markets, Econometrica 56, 1085–1118.

Ross, L., Greene, D., and House, P. (1977): The "false consensus effect": an egocentric bias in social perception and attribution processes, Journal of Experimental Social Psychology 13, 279–301.

Surowiecki, J. (2004): The Wisdom of Crowds, Random House, New York.

Svenson, O. (1981): Are we all less risky and more skillful than our fellow drivers?, Acta Psychologica 47, 143–148.

Trueman, B. (1994): Analyst forecasts and herding behavior, The Review of Financial Studies 7, 97–124.

Weizsacker, G. (2010): Do we follow others when we should? A simple test of rational expectations, The American Economic Review 100, 2340–2360.

Williams, J. (2013): Financial analysts and the false consensus effect, Journal of Accounting Research 51, 855–907.
Published by Oxford University Press on behalf of the European Finance Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

Review of Finance – Oxford University Press

**Published:** Nov 7, 2017
