Does Competition Affect Truth Telling? An Experiment with Rating Agencies

Abstract

We use an experimental approach to study the effect of market structure on the incidence of misreporting by credit rating agencies. In the game, agencies receive a signal regarding the type of asset held by the seller and issue a report. The seller then presents the asset, with the report if one is solicited, to the buyer for purchase. We find that competition among rating agencies significantly reduces the likelihood of misreporting.

1. Introduction

Following the financial crisis of 2008, the Financial Crisis Inquiry Commission wrote that many failures within the financial markets could be attributed to the fact that “major firms and investors blindly relied on credit rating agencies (CRAs) as their arbiters of risk”. Such heavy reliance on these reports by investors is problematic due to the inherent conflict of interest between those who pay for the reports (issuers of debt) and those who produce them (credit rating agencies, or CRAs hereafter). Therefore, there is always a possibility that this conflict of interest leads to ratings inflation by the CRAs. In fact, the legislation that followed the financial crisis, the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010, recognized the possibility that incentives may be misaligned and included a section meant to improve the regulation of CRAs and increase investor protection (Title IX, Subtitle C). To become a CRA, US regulations mandate special accreditation by the government as a “nationally recognized statistical rating organization” (NRSRO). As of 2014, there were ten such companies in existence, with three of them (Moody’s, Standard & Poor’s and Fitch) controlling 95% of the market.
Over the past few decades, the actual number of NRSROs has varied considerably, falling to six in the 1990s before rising again.¹ In the wake of the financial crisis, when much of the blame was placed upon the presumably unscrupulous actions of the CRAs, the European Parliament began to advocate for an increase in competition in the CRA market.² In this article, we conduct an experiment to analyze the effect of increased competition within the CRA market. An experimental approach has a number of advantages. We can control the environment of our participants and vary a number of parameters, thus allowing us to isolate and identify key factors, such as competition, that influence behavior. Furthermore, the laboratory environment is unique because it allows us to measure factors that are not readily observable through empirical methods. For example, we can estimate buyer sophistication and observe directly whether any CRAs are misreporting. In turn, this permits us to determine when ratings inflation is sustainable and when it is not. To study ratings inflation across market formats, we motivate our game via the theoretical work of Bolton, Freixas, and Shapiro (2012), BFS12 hereafter. The authors suggest three potential sources of conflict: (1) the CRAs’ desire to understate risk to attract business; (2) the ability of sellers to purchase the most favorable ratings; and (3) buyer sophistication. According to the model, these conflicts can result in two distortions: (1) a reduction in efficiency due to ratings shopping; and (2) higher ratings inflation during booms. Ratings shopping occurs when an issuer of debt (a seller) is not required to accept a report that he or she does not like. Therefore, if one CRA issues a negative report, the seller can search, or essentially shop, for a better one. Relatedly, Faure-Grimaud, Peyrache, and Quesada (2009) find that an increase in competition between CRAs results in less information disclosure.
In terms of varying levels of inflation over the course of an economic cycle, an experimental study by Kluger and Slezak (2015) finds that misreporting is more likely during economic downturns. It should be noted that they do not study the effect of competition on player strategies, and their design does not include an intermediary.³ Although our study is motivated by the theoretical work of BFS12, our experimental design differs in a few ways. We allow only one report to be purchased in the competitive treatment, and instead of the seller posting the terms, we have buyers bid for the assets. The constraint of one report makes the competition between the CRAs more acute and allows us to avoid the complication that comes with not knowing the actual value of the additional report. It is not clear that both reports should or will be viewed as providing the same value to market participants. We also believe that having buyers bid based on the reports available (if any) can signal how naive or trusting the buyers are. Our findings suggest that ratings inflation is costlier when the market is competitive. According to our estimations, subjects in the monopoly treatment are 1.7 times more likely to misreport, at the 10% significance level, than those in the competitive treatment.⁴ Additionally, we find that the buyers are relatively sophisticated in the competitive treatment. Introducing competition to the CRA market puts downward pressure on fees, making ratings inflation costly. Bar-Isaac and Shapiro (2013) use a theoretical model with endogenous reputation and find that ratings are less accurate when fees are high. Therefore, if competition can lower fees, then it is possible that ratings inflation and ratings shopping are less likely to occur. In another theoretical work, Manso (2013) finds that an increase in competition leads to rating downgrades (tougher ratings), an increase in default frequency and a reduction in welfare.
Furthermore, he suggests that the CRAs should consider long-term borrower survival, which means that a soft-rating equilibrium (higher ratings) may be preferable. Thus, while an increase in competition can make ratings more accurate, such an outcome is not necessarily preferable if it decreases the odds of survival of the borrowers. Using an evolutionary approach, Hirth (2014) analyzes CRAs and competition, where investors are either sophisticated or naive. He finds that CRAs are forced to be honest when there are enough sophisticated investors in the market, so that the reputational concern is real. In our experimental design, and given our parameter values, the minimum share of naive investors needed to make ratings inflation feasible is about 72%. Furthermore, Hirth determines that there is a critical number of CRAs in the market, above which the reputation costs are high enough to guarantee at least temporary honest behavior. Becker and Milbourn (2011) use an empirical approach to show that an increase in competition due to the entrance of Fitch into the ratings market leads to lower-quality ratings from incumbents. Quality is measured along two dimensions: (1) the ability of ratings to transmit information to investors; and (2) the ability of ratings to classify risk. Another recent empirical work, by Baghai, Servaes, and Tamayo (2014), analyzes the changes in the standards of the CRAs over time. They find that for corporate debt, CRAs have actually become more conservative. That is, an AAA firm from 1985 would be rated AA today. From a different standpoint, Cole and Cooley (2014) argue that the real problem is not who pays for ratings, since reputational concerns help ensure ratings quality. The actual problem, according to the authors, is the regulatory reliance on ratings and the increasing importance of risk-weighted capital in regulation, which lead to distorted ratings.
Information distortion exists because those who purchase ratings do not necessarily need to reveal them. Thus, the ultimate impact of ratings is actually quite complex. According to a number of studies, ratings have a dual role: they provide information to investors and they are used for regulatory purposes (Ashcraft et al., 2011; Kisgen and Strahan, 2010). Ratings can also affect market price through regulation, independent of the information they provide about the actual asset. In turn, rating-contingent regulation increases the volume of highly rated securities (Opp, Opp, and Harris, 2013). In terms of the experimental literature, perhaps the most closely related work is by Mayhew, Schatzberg, and Sevcik (2001), who examine whether accounting uncertainty (signal precision) impacts auditor objectivity. The results of this study suggest that accounting uncertainty impacts auditor objectivity despite the damage to auditor reputation, and that in the absence of uncertainty auditors remain objective. In a subsequent study extending the 2001 design, Mayhew and Pike (2004) analyze whether investor selection of auditors can improve auditor independence and find that transferring the power to hire and fire the auditor from managers to investors significantly decreases the proportion of violations and increases the overall economic surplus.⁵ Other important experimental work that can help us understand the dynamics of the CRA market studies sender–receiver games and cheap talk. Forsythe, Lundholm, and Rietz (1999) analyze a market where only the sellers know the true value of the asset. When cheap talk is allowed, they find that sellers make fraudulent announcements 47% of the time (with a standard deviation of 19). Similarly, Sheremeta and Shields (2013) find that receivers are prone to deception.
In a study where subjects play both roles, they find that in the role of the sender, 60% of the subjects adopt deceptive strategies by sending a favorable message when the true state of nature is unfavorable. As receivers, nearly 70% of the subjects invest when the message is favorable.⁶ Adding a competition aspect to the sender–receiver game, Cassar and Rigdon (2011) find that including an additional sender increases trust and trustworthiness. However, this outcome requires an environment of complete information. In addition to laboratory experiments, there are also field experiments that consider the role and impact of CRAs. Duflo et al. (2013) conduct a randomized field experiment and provide evidence on how conflicts of interest can undermine information provision by third-party auditors. Their results show that a set of correct incentives or reforms can lead to greater accuracy and improve compliance with regulation. To control cheating, or ratings inflation, we can introduce punishment, change the market format, include reputation costs, alter the compensation scheme or add some other mechanism with the hope of realigning player incentives. Some studies have already looked at punishment (Sánchez-Pagés and Vorsatz, 2007), while others included reputation costs in the analysis, both endogenously and exogenously (Mayhew, Schatzberg, and Sevcik, 2001; Mayhew and Pike, 2004; BFS12).
In addition, Gill, Prowse, and Vlassopoulos (2013) and Faravelli, Friesen, and Gangadharan (2015) show that different compensation schemes, such as bonuses and winner-take-all tournaments, may contribute to misreporting.⁷ We introduce competition and provide experimental evidence that the choice of market structure can either mitigate or exacerbate the conflict of interest that is inherent in issuer-pay models.⁸ The rest of the article is organized as follows: Section 2 describes the general environment of the credit ratings game; Section 3 details the laboratory procedures; Section 4 presents the results; and lastly, Section 5 discusses our main findings. Appendix A elaborates on the theoretical predictions of our game, Appendix B includes the instructions used in the experimental sessions, and Appendix C shows screenshots of the user interface.

2. The Environment

The environment in our experiment is motivated by the work of BFS12. We study the interaction of three player types: sellers, CRAs and buyers. In the market, there are two types of widgets that can be sold, either blue (b) or red (r), ω ∈ {b, r}. A buyer’s valuation for each widget type is summarized by Vb and Vr, with Vb > Vr. Ex ante, the buyer does not know the type of widget sold in the market. The buyer, however, may have access to a report suggesting the widget type, if the seller chooses to purchase it from the CRA. The CRA has access to a research team which receives an informative signal regarding the widget type, θ ∈ {bb, rr}. The private signal has the following informational content about ω: Pr(θ = bb | ω = b) = Pr(θ = rr | ω = r) = e, where e ∈ (1/2, 1) is the precision of the signal. In our experiment, we use a high-precision signal, e = 0.90.⁹ Before receiving the signal, the CRA must post a fee φ at which a report can be purchased by the seller. When the seller approaches the CRA, the CRA retrieves the signal θ and produces a report.
The CRA does not have to report the signal suggested by the research team; it can produce a red or blue report, regardless of the signal. After reading the report, the seller can either: (1) accept the report and pay φ; or (2) reject the report, in which case the CRA does not receive a payment. It is not possible to have unsolicited ratings. Therefore, when a report is rejected by the seller, the buyer must guess or deduce the value of the widget. The published report (if any) is a message suggesting the widget type, m = bbb (“blue”) or m = rrr (“red”), that is observable to buyers. The agency also faces a fixed penalty cost, equivalent to the endowment ρ, which is incurred when the report is bbb, given that the research team announced rr and the widget is r. The loss of the endowment can be viewed as an exogenous reputation cost, litigation cost, or any other cost associated with the consequence of distributing purposefully inaccurate or inappropriate information to market participants. The buyers observe the report, if one is published, and then bid for the widget. The profits for each player are computed as follows:

  π_buyers  = Vω − bid   if winner
            = 0          otherwise                                          (1)

  π_CRAs    = ρ + φ      if the published report is truthful
            = ρ + φ      if the published report is not truthful and the state is blue
            = φ          if the published report is not truthful and the state is red
            = ρ          if the report is rejected (not published)          (2)

  π_sellers = bid − φ    if the report is accepted
            = bid        otherwise                                          (3)

BFS12 also assume that there are different types of buyers, which we account for with the parameter α ∈ [0, 1]. Type α = 1 bids the highest possible valuation regardless of the report received, while type α = 0 processes information rationally and bids according to the message received. α can also be interpreted as the fraction of naive, trusting or less sophisticated buyers in a given population.
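The payoff rules in Equations (1)–(3) can be sketched in a few lines. The function and variable names below are ours, the parameter values (Vb = 120, Vr = 20, ρ = 10) are those used in the experiment, and the penalty condition follows the bbb-report/rr-signal/red-widget case described above:

```python
RHO = 10                   # CRA endowment (exogenous penalty cost)
V = {"b": 120, "r": 20}    # widget values by state (blue, red)

def buyer_profit(state, bid, winner):
    # Equation (1): the winning buyer pays the bid and receives V_omega
    return V[state] - bid if winner else 0

def cra_profit(report, signal, state, fee, accepted):
    # Equation (2): the endowment is lost only when a blue report follows
    # a red signal and the widget turns out to be red
    if not accepted:
        return RHO
    penalized = report == "b" and signal == "r" and state == "r"
    return fee if penalized else RHO + fee

def seller_profit(bid, fee, accepted):
    # Equation (3): the seller pays the fee only for an accepted report
    return bid - fee if accepted else bid
```

For example, a CRA that publishes a blue report after a red signal keeps its fee but loses the endowment if the widget is red, while a rejected report leaves it with the endowment alone.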
We define the value of the widget for type α = 0 buyers as:

  Wb = eVb + (1 − e)Vr
  Wr = (1 − e)Vb + eVr
  W0 = (1/2)Vb + (1/2)Vr

where Wb, Wr and W0 denote the value of the widget for the buyer based on the report observed: blue, red and no report, respectively. The theoretical predictions of the monopolistic CRA market structure suggested by BFS12 are summarized in Proposition 1.

Proposition 1. The two resulting equilibria of the game are:

1. The rating agency always reports blue (bbb). Ratings inflation occurs if and only if the fee satisfies the condition φ = αWb − W0 > eρ.
2. The rating agency reports truthfully. This results in a fee such that φ = min[Wb − max[αW0, Wr], eρ].

In Appendix A, we describe in detail the original proof by BFS12. We emphasize that the minimum value of α (denoted ᾱ) required to achieve the inflationary equilibrium is 0.72, using the following parameter values: Pr(ω = b) = 0.50, Vb = 120, Vr = 20, e = 0.90 and ρ = 10.¹⁰ Therefore, as long as α ≥ ᾱ, the fee is equal to the marginal benefit of a blue report versus the alternative of not providing relevant information to trusting investors. Additionally, the fee has to be greater than eρ because reporting blue when the signal (with accuracy e) and the state are red will result in a penalty. In the case that α < ᾱ, the CRA reports truthfully and the fee is bounded by the expected penalty cost eρ and the benefit of providing accurate information to the market. In addition to the monopolistic case, we introduce another market environment where two CRAs compete to sell their reports. In our environment, only one report can be published. BFS12 work with a different duopoly structure. They allow the seller to accept both reports, and therefore their theoretical analysis accounts for the value of the additional report. In our design, the competition between CRAs is more acute, since only one report can be purchased.
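Under the experiment's parameter values, the buyer valuations and the threshold value of α (0.72) can be verified in a few lines; this is a sketch with variable names of our own choosing:

```python
e, Vb, Vr, rho = 0.90, 120, 20, 10

Wb = e * Vb + (1 - e) * Vr      # value given a blue report (110)
Wr = (1 - e) * Vb + e * Vr      # value given a red report (30)
W0 = 0.5 * Vb + 0.5 * Vr        # value with no report (70)

# Inflation requires phi = alpha * Wb - W0 > e * rho, so the smallest
# alpha sustaining the inflationary equilibrium solves alpha * Wb = W0 + e * rho:
alpha_bar = (W0 + e * rho) / Wb
print(round(alpha_bar, 2))      # -> 0.72
```

Solving αWb − W0 > eρ for α gives α > (70 + 9)/110 ≈ 0.72, the value reported above.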
Furthermore, when sellers are able to accept both reports, reject one or reject both, it becomes harder to evaluate strategic behavior without knowing the value of this additional report. The CRA market in our game can be characterized as Bertrand competition, where firms simultaneously set prices and compete to sell undifferentiated products. The theoretical predictions of the competitive environment are summarized by the following proposition:

Proposition 2. In a competitive environment, where there are two CRAs in the market and where only one report is published, there are two possible equilibria:

1. If and only if α ≥ ᾱ, then there is a unique symmetric mixed-strategy equilibrium in which each CRA chooses φ ∈ {0, eρ} with probability equal to 0.5. When a CRA sets a low fee it reports truthfully, and when it sets a high fee it may lie.
2. If and only if α < ᾱ, then both CRAs report truthfully and the fees approach zero.

The description of our proof is included in Appendix A. We show that there is no profitable deviation from the mixed-strategy equilibrium when α ≥ 0.72. The second equilibrium puts downward pressure on fees, since ratings inflation is not feasible. Which equilibrium do we expect in our experiment? As we noted earlier, the value of α plays an important role in determining whether we observe ratings inflation. However, we do not know the value of α ex ante. Instead, we claim that αCT ≤ αMT, which leads to two possibilities: (1) αCT = αMT, where the sophistication level of buyers evolves similarly across treatments; or (2) αCT < αMT, where buyers in the CT become more sophisticated relative to those in the MT, or where αCT evolves toward zero because more reliable information is available. We narrow our predictions to the following:

Prediction 1. Ratings inflation in a competitive market is not greater than under monopoly.

In the case that α < 0.72, the CRAs will report truthfully in both market environments.
However, when α ≥ 0.72, the monopolist CRA has an incentive to inflate ratings, while in the competitive environment we expect that, with some probability, both CRAs will report truthfully. Thus, ratings inflation under competition is equal to or lower than under monopoly. It is important to highlight that the outcomes in the monopolistic environment can be viewed as similar to sender–receiver game outcomes. In these games (Forsythe, Lundholm, and Rietz, 1999; Sánchez-Pagés and Vorsatz, 2007), truth telling is fairly common and highly variable.¹¹ Therefore, we should not be surprised if we observe a significant amount of truth telling develop over the course of the game.

Prediction 2. When the CRA inflates ratings, φ > eρ.

A CRA that is not truth telling will charge higher fees to compensate for the cost associated with being caught. The expected cost is eρ, which is based on the signal precision and the endowment value.

Prediction 3. The fee under competition is lower than under monopoly.

The competition between the CRAs puts downward pressure on report fees, since only one report can be purchased by a seller. Rivalry in prices, or report fees in our case, was investigated extensively by the early experimental literature (see the overview by Davis and Holt, 1993), which found that an increase in competition decreases prices.

Prediction 4. No report is viewed similarly to a red report when buyers are sophisticated.

In the case that buyers are sophisticated and respond appropriately to the different reports published in the market, we expect that a buyer facing no report will value the widget as much as when facing a red report.

3. Laboratory Procedures

We employed a total of 144 subjects in 16 pits¹² at the CEED laboratory at Ball State University. Participants, comprising undergraduate students from all fields, were recruited online via ORSEE (Greiner, 2004). All subjects were assigned to play one of three roles: sellers, buyers or CRAs.
The treatments were designed with the goal of analyzing the impact of competition on the behavior of CRAs. We run a completely between-subject design, with each experimental session consisting of either one or two independent pits. Table I presents an overview of our sessions.

Table I. Sessions overview

                             Sellers   Rating agencies   Buyers
Monopoly
  Count                      16        16                32
  Average profit ($)         13.0      9.9               8.3
  Average show-up fee ($)    5.4       5.3               8.3
Competition
  Count                      16        32                32
  Average profit ($)         12.4      8.5               8.7
  Average show-up fee ($)    5.4       5.8               7.3

Note: Each pit has two sellers, four buyers and, depending on the treatment, either two or four CRAs. Overall, we have data for eight pits in each treatment and therefore sixteen possible market transactions.

In each monopoly treatment, we use a pit with two sellers, two CRAs and four buyers that is then split into two groups to form two separate markets. Thus, in each market, the CRA is a monopolist.
These two groups are then reshuffled every period for a total of 24 periods so that each resulting group formation is unique. Note that there are no practice rounds in our experiment, so the encounters are unique within groups. In the competitive treatment, the number of CRAs is increased to four per pit, while the number of sellers, buyers and periods remains constant. Therefore, in this treatment, each market has two CRAs as opposed to one. In the monopoly treatment, every session proceeds as follows:

Stage 1: The CRA enters the fee for writing a report. Figure 2 in Appendix C shows the user interface of Stage 1, designed in z-Tree (Fischbacher, 2007).
Stage 2: The CRA receives information from the research team, that is, an informative signal regarding the value of the widget (red or blue), and then selects whether to report red or blue.
Stage 3: The seller observes the report and the fee and then decides whether to accept or reject the report.
Stage 4: Buyers observe the report, if any, and the fee associated with the report. Buyers then bid for the widget; a bid can be any value between zero and 120.
Final stage: Profits are computed using Equations (1)–(3). All players observe whether the CRA was penalized for misreporting a red signal as blue when the state was red. The information displayed to all players includes the value of the widget in the blue state, the value in the red state and the accuracy of the research team (90%). The final stage also includes the history of past outcomes.

The competition environment is similar to the monopoly, except that in Stage 3 there are now two rating agencies, randomly labeled Alpha and Sigma. Each CRA receives its own independent signal, which can be wrong 10% of the time. Therefore, there is always a possibility that the CRAs are not lying when the reports are mixed. The sellers then observe the fees and the reports from both agencies, and decide which report, if any, to accept.
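The stage sequence of the monopoly treatment can be sketched as a short simulation. The function names, the strategy arguments and the bidding rule (a fully rational α = 0 buyer bidding the conditional expected value) are our assumptions for illustration, not part of the experimental software:

```python
import random

e, Vb, Vr = 0.90, 120, 20

def draw_state_and_signal():
    # The widget is blue or red with equal probability; the research
    # team's signal matches the true state with probability e = 0.90
    state = random.choice(["b", "r"])
    flipped = {"b": "r", "r": "b"}
    signal = state if random.random() < e else flipped[state]
    return state, signal

def one_period(fee, report_rule, accept_rule):
    state, signal = draw_state_and_signal()     # input to Stage 2
    report = report_rule(signal)                # Stage 2: CRA decision
    accepted = accept_rule(report, fee)         # Stage 3: seller decision
    # Stage 4: a sophisticated buyer bids the conditional expected value
    if accepted:
        bid = e * Vb + (1 - e) * Vr if report == "b" else (1 - e) * Vb + e * Vr
    else:
        bid = 0.5 * (Vb + Vr)                   # no report published
    return state, signal, report, accepted, bid

# Example: a truthful CRA facing a seller who only accepts blue reports
state, signal, report, accepted, bid = one_period(
    fee=30, report_rule=lambda s: s, accept_rule=lambda r, f: r == "b")
```

Passing different `report_rule` and `accept_rule` functions reproduces the strategic choices available to the CRA and the seller in each period.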
The instructions read to the participants at the beginning of each session are included in Appendix B. We also provide screenshots of the user interface in Appendix C. Subjects were paid for six random periods in the session at the rate of $1 per 38 points. We also paid an additional minimum show-up fee of $5, and we reserved the right to modify the payout in favor of participants to improve earnings.¹³ We would like to emphasize that we never mentioned to participants whether this applied to some players or a specific group. Therefore, we do not believe that this statement resulted in an inadvertent incentive to behave otherwise.¹⁴ On average, sessions lasted just under 50 min. We summarize average earnings per player type in Table I.

4. Results

We begin our analysis with a graphical overview of the posted fees and ratings inflation over time. The data are pooled by treatment and period. Panel (a) of Figure 1 shows that the mean posted fee in the monopoly treatment (MT) is greater than the one in the competition treatment (CT). Furthermore, the fees in both treatments follow a downward trend. This downward adjustment of fees is consistent with other posted-offer format experiments.

Figure 1. Panels (a) and (b) display results for posted fees and ratings inflation, respectively, by treatment over time.

Panel (b) of Figure 1 shows the average ratings inflation in each treatment over the course of the game. For periods 1 through 6, there is not much difference in ratings inflation between the two treatments. However, in later periods, ratings inflation in the MT occurs more often than in the CT.
This divergence in ratings inflation by treatment suggests that CRAs across both treatments have similar prior dispositions and that the institutions governing each treatment shape their incentives and play.¹⁵ Table II presents a brief overview of the summary statistics, according to player type. The top section of the table displays the summary statistics for the CRAs. We can see that ratings inflation is much more prevalent in the MT. In fact, the mean ratings inflation of 26% in the MT is significantly higher than the mean of 17% in the CT. Ratings inflation also has a higher variance under a monopolistic market structure. The fee for the report is also higher in the MT than in the CT, 54 versus 30 points, which is in line with our expectation that prices are lower in competitive markets.

Table II. Summary statistics

                               Monopoly (MT)     Competition (CT)
                               Mean      SD      Mean      SD
Rating agency (CRA)
  Fee posted                   54        13      30        14
  Ratings inflation (%)        26        21      17        15
Seller, acceptance rate (%)
  At least one                 43        16      76        10
  Blue                         56        23      80        9
  Red                          19        13      37        12
  Blue (mixed)                 –         –       59        21
  Red (mixed)                  –         –       12        14
Buyer, bids
  Blue                         78        15      71        11
  Red                          22        7       23        7
  No report                    41        9       24        10

Note: “Mixed” refers to the scenario in which the CRAs offer conflicting reports (blue and red).

The middle section of Table II breaks down the seller acceptance rate of the CRA reports. We first present summary statistics for the acceptance rate of at least one report and then classify the acceptance rate according to report type. Therefore, we present acceptance rates of at least one report that is blue, at least one report that is red, and lastly, in the case of competition, when the reports are mixed (blue and red). Note that for the MT in the “at least one” category, there is only one report in the market. The results suggest that the acceptance rate is higher under the CT regardless of report classification. For example, 37% of red reports are accepted in the CT, compared to 19% in the MT. This result may be driven by the lower fees in the CT. The acceptance rate for blue reports is 80% in the CT versus 56% in the MT. In the case of conflicting or mixed reports, possible only in the CT, sellers accept blue reports more than half of the time (59%) and red reports just 12% of the time. The lower section of Table II provides an overview of buyer behavior. Buyers behave more rationally in the CT than in the MT when no report is issued: on average, their bids for the asset are much lower (24 versus 41).
However, bids in the MT may be higher because fewer reports are published overall, and therefore the buyers are uncertain about the widget type and the report accuracy, or lack thereof. Furthermore, bids when red reports are issued are relatively accurate in both treatments (22 in the MT and 23 in the CT), while bids when blue reports are issued (78 in the MT and 71 in the CT) are lower than the actual (120) or expected (110) asset value under the specified signal precision. Next, we present the results according to our main findings.

Result 1. Ratings inflation is more likely in the MT.

To obtain our primary result, we use regression analysis to evaluate CRA behavior under the two treatments: MT and CT. Table III presents estimations of three different specifications, using subject random effects and standard errors clustered by independent pit. In specification (I), we estimate a logit regression with ratings inflation as the dependent variable. We incorporate the treatment effect with the variable MT, which takes the value of one when the CRA is a monopolist and zero otherwise. We also control for the learning process, or the adjustment throughout the sessions, by incorporating the Period variable in all of our regressions. We find that in the MT the odds of misreporting are 1.72 (= exp(0.543)) times higher than in the CT. This coefficient is significant at the 10% level (p ≤ 0.10), which is not surprising given the high variance of ratings inflation in the MT. Recall from our review of earlier works that Forsythe, Lundholm, and Rietz (1999) find the standard deviation for fraudulent announcements, which occur 47% of the time, to be 19. Therefore, our results are in line with previous work. A low incidence of ratings inflation can also be indicative of a low α, or that the population of buyers is sophisticated.

Table III. CRA decision

                 (I)               (II)         (III)
                 Inflate ratings   Fee          Fee
Constant         −0.901***         37.469***    37.222***
                 (0.340)           (4.642)      (4.692)
MT               0.543*            23.479***    22.773***
                 (0.298)           (5.449)      (5.495)
Inflate          –                 –            11.044**
                                                (2.474)
Inflate × MT     –                 –            2.150
                                                (3.867)
Period           0.003             −0.561***    −0.602***
                 (0.014)           (0.214)      (0.223)
R2               –                 0.134        0.143
N                758               1,152        1,152

Note: All models are estimated using subject random effects and clustered standard errors at the pit level (using bootstrap). Specification (I) is a logit model and constrains the sample to red signals only, while specifications (II) and (III) are OLS. ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.

Result 2. The fee when the CRA inflates ratings is higher than when it does not inflate ratings. However, we also observe that the posted fee when the CRA reports truthfully can be above eρ.

The last two specifications of Table III analyze the factors that may affect the fees posted by the CRAs.
The treatment effect is quite strong (p ≤ 0.01) and suggests that less competition leads to an increase in the posted fee of about 24 points. In specification (III), we check whether the CRAs that inflate ratings set higher fees. The results indicate that inflation is associated with an increase in the posted fee of about 11 points (p ≤ 0.05). We can interpret this as evidence that CRAs intending to inflate post higher fees before seeing the signal. This increment is similar to the exogenous cost of misreporting (10 points). Note that in both treatments, the posted fee is above the exogenous cost of misreporting. Lastly, we also observe a negative and significant coefficient on the Period variable in specifications (II) and (III), which is consistent with the negative trend presented in Figure 1, panel (a). Result 3. The (accepted) report fee in the CT is about 20 points lower than in the MT, and reports are also accepted at a higher rate in the CT relative to the MT. To better understand which factors influence the seller decision, we estimate five different specifications, summarized in Table IV. The first two specifications combine data from both treatments, while specifications (III) through (V) focus on each treatment separately. All specifications include subject random effects and standard errors clustered at the pit level. In specification (I), we treat the rejection of reports as a censoring problem. Therefore, we estimate this specification using a Tobit model, in which the dependent variable is the accepted fee. The results show that a blue report is accepted with a fee of about 10 (= 17.06 − 12.5 × 0.49 − 0.99) points in the CT. The accepted fee increases by about 20 points in the MT. We omit the interaction between the treatment effect and the blue report because it is not significant.

Table IV. Seller decision
All specifications are estimated using subject random effects and standard errors clustered at the pit level (using bootstrap).
All specifications are estimated using a logit model, except (I), which is a Tobit model. Censoring occurs when the report is rejected at the fee posted. ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.

              (I)        (II)       (III)        (IV)            (V)
              Fee        Accept     Accept       Accept          Accept
              [all]      [all]      [monopoly]   [competition]   [competition]
Constant      −0.99      1.49***    0.18         0.51*           −0.18
              (4.53)     (0.28)     (0.58)       (0.27)          (0.49)
MT            21.63***   −1.65***   –            –               –
              (4.01)     (0.31)
Blue          17.06***   –          2.03***      1.56***         2.68***
              (2.85)                (0.74)       (0.24)          (0.59)
Same          –          –          –            –               0.97**
                                                                 (0.39)
Blue × Same   –          –          –            –               −1.49***
                                                                 (0.46)
Fee           –          –          −0.03**      −0.06***        −0.06***
                                    (0.01)       (0.01)          (0.01)
Period        −0.49***   −0.01      −0.04**      −0.03***        −0.03***
              (0.18)     (0.01)     (0.02)       (0.01)          (0.01)
Wald χ²       74.75      28.57      23.30        53.10           59.37
Prob. > χ²    0.00       0.00       0.00         0.00            0.00
N             1,152      1,152      384          768             768
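The back-of-the-envelope accepted-fee calculation quoted in the text can be reproduced from the specification (I) coefficients. This is our own arithmetic sketch; the 12.5 mid-session value for Period follows the calculation shown in the text.

```python
# Accepted-fee prediction from Table IV, specification (I) (Tobit).
const, mt, blue, period_coef = -0.99, 21.63, 17.06, -0.49
mid_period = 12.5  # mid-session Period value, as used in the text

fee_blue_ct = const + blue + period_coef * mid_period  # blue report, CT
fee_blue_mt = fee_blue_ct + mt                         # blue report, MT
print(round(fee_blue_ct, 1), round(fee_blue_mt, 1))
```

This yields roughly 10 points for a blue report in the CT and roughly 20 points more in the MT, matching the discussion of Result 3.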
Table V. Buyer decision
The linear model Y = BX, where X includes an intercept and dummies, is estimated using subject random effects and standard errors clustered at the pit level (using bootstrap). ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.
              (I)         (II)        (III)       (IV)
              Bid         Bid         Surplus     Surplus
Constant      21.39***    9.88***     −20.10***   15.19***
              (3.63)      (3.56)      (5.58)      (5.51)
Blue          49.94***    46.24***    67.22***    –
              (3.13)      (3.01)      (4.52)
Red           −1.62       −1.62       –           –
              (1.59)      (1.62)
MT            18.52***    18.61***    –           −5.75
              (3.46)      (3.50)                  (4.79)
Blue × MT     −9.09       −9.20       3.88        –
              (6.57)      (6.57)      (6.62)
Red × MT      −19.21***   −19.22***   −14.66***   –
              (3.97)      (3.90)      (4.81)
Winner        –           22.70***    –           –
                          (1.40)
Period        0.25*       0.25*       −0.29**     −0.43*
              (0.14)      (0.14)      (0.14)      (0.23)
R²            0.44        0.61        0.61        0.01
N             1,536       1,536       768         768
In specification (II), we analyze the acceptance decision using a logit model. We look at the probability of accepting at least one report and find that in the CT the odds of accepting a report are 5.21 (= 1/exp(−1.65)) times higher than in the MT. This result is in line with the higher fees observed in the MT. High fees cause sellers to reject regardless of report type. We also estimate the impact of report type and fee on the acceptance rate by treatment, with specification (III) presenting results for the MT and specifications (IV) and (V) for the CT. Our findings suggest that blue reports are accepted at a higher rate relative to red reports in both treatments (2.03 and 1.56, in log odds, for the MT and CT, respectively). The lower estimated coefficient in the CT captures the fact that only one of the two issued reports can be accepted.
That is, a seller in the competitive environment may observe two blue reports but can only accept one, which lowers the overall likelihood of acceptance for blue reports. In the case of fees, a 10 point decrease (slightly smaller than the standard deviation in both treatments in Table I) increases the log odds of accepting a report by 0.3 (0.6) in the MT (CT). This indicates that fees are relatively more important in the CT. To determine which reports, if any, are favored by sellers, we introduce a new variable, Same. This is a dummy variable that takes the value of one when the two reports are the same and zero when they conflict. In specification (V), we see that blue reports are accepted at a higher rate than red reports (2.68 higher, in log odds, for a blue report). Note that the acceptance rate of a red report when the two reports agree is 0.97 higher (in log odds) than the acceptance rate of a red report when the two reports conflict. This could be interpreted as sellers showing a preference for inflated ratings. The negative coefficient on the interaction between Blue and Same stems from the fact that only one report can be published in the market. To better understand the impact of fees on the acceptance rate, we also performed a mediation analysis (Baron and Kenny, 1986) using the approach of Imai, Keele, and Tingley (2010). The dependent variable, in our case, is the acceptance of at least one report, and the fee is the mediating variable between MT (the treatment) and the acceptance rate. We find that the proportion of the total effect that is mediated is 0.39, and that the indirect (or mediated) effect is statistically different from zero (p ≤ 0.01). Result 4. In the CT, bids from buyers when no report is available are similar to bids when a red report is posted, indicating that buyer sophistication may have evolved over the course of the treatment. We do not find evidence of this in the MT.
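The proportion-mediated figure can be illustrated with a hand-rolled Baron and Kenny (1986) product-of-coefficients calculation. The sketch below uses simulated data with hypothetical effect sizes and a linear-probability stand-in for the acceptance decision; it is not our experimental data, and the paper's estimate uses the Imai, Keele, and Tingley (2010) approach rather than this simple decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data generating process: the treatment (MT) raises the fee,
# a higher fee lowers acceptance, and there is also a direct treatment effect.
treat = rng.integers(0, 2, n).astype(float)
fee = 37.0 + 23.0 * treat + rng.normal(0, 5, n)
accept = 2.0 - 0.04 * fee - 0.5 * treat + rng.normal(0, 0.5, n)

def ols_coef(y, x, controls=()):
    """OLS coefficient on x, after an intercept and any controls."""
    X = np.column_stack([np.ones(n), x, *controls])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = ols_coef(accept, treat)            # c: total treatment effect
direct = ols_coef(accept, treat, [fee])    # c': effect holding the fee fixed
prop_mediated = (total - direct) / total   # share transmitted through the fee
print(round(prop_mediated, 2))
```

The share of the treatment effect running through the fee is recovered as (c − c′)/c; in our experimental data this proportion is 0.39.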
Table V presents an analysis of the buyer decision via four different specifications. In the first two specifications, we estimate the impact of report type and market format on the buyer's bid. The estimation results of specification (I) suggest that when buyers do not observe a report in the CT, they bid about 21 points for the widget. Bids in the MT, by contrast, are 18 points higher when no report is published. This is consistent with fewer reports being published due to high fees, which increases uncertainty regarding the true widget value. It also indicates that the α of buyers in the CT may be evolving as they are exposed to more reliable information. Note that the interaction of the treatment dummy with the red report has a negative coefficient that cancels out the added value of no report in the MT. That is, buyers in the MT bid approximately 19 points lower when a red report is issued than when no report is issued. Therefore, in the MT the buyers do not view the lack of a report as similar to a red report. In the case of a blue report in the CT, the bid increases by approximately 50 points, and in the case of a red report, the bid decreases by 2 points, though this decrease is not significant. This means that buyers value no report the same as a red report, or that the buyers are relatively sophisticated in the CT. To see how the winning bids differ, we add a dummy variable to our next specification. On average, the winner bids 23 points higher. We omit all interaction terms involving the Winner variable because they are not significant. Result 5. When the state is red, buyers are better off in the CT. When the state is blue, buyers are better off in the MT. In the final two specifications, we attempt to determine which market format is more efficient. In our environment, the loss of efficiency stems from the penalty incurred by the CRAs when they misreport.
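The bid patterns discussed above can be turned into predicted bid levels for each report/treatment cell. The following is our own sketch using the specification (I) coefficients from Table V; the small Period trend is ignored, and the function and labels are illustrative, not part of the experiment software.

```python
# Coefficients from Table V, specification (I) (buyer bids); Period is ignored.
coef = {"const": 21.39, "blue": 49.94, "red": -1.62,
        "mt": 18.52, "blue_mt": -9.09, "red_mt": -19.21}

def predicted_bid(report, monopoly):
    """Linear prediction of a buyer's bid given the published report and treatment."""
    bid = coef["const"]
    if report in ("blue", "red"):
        bid += coef[report]
        if monopoly:
            bid += coef[f"{report}_mt"]
    if monopoly:
        bid += coef["mt"]
    return round(bid, 2)

for mono in (False, True):
    for rep in ("none", "blue", "red"):
        print("MT" if mono else "CT", rep, predicted_bid(rep, mono))
```

With the mid-session Period trend (about 0.25 × 12.5 ≈ 3 points) added back, the red-report cells land near the raw averages (22 in the MT, 23 in the CT) quoted earlier.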
The total numbers of periods in which penalties were levied in the MT and the CT are 22 and 24, respectively. We believe the relatively low number of penalties in the MT is due to the rejection of reports because of high fees. Recall that if the seller rejects the report, whether or not the CRA inflated ratings becomes irrelevant. In the MT, the buyers often do not observe any reports, and therefore no penalty is imposed on the CRAs, even if they misreport. An alternative variable of interest that can measure efficiency is buyer welfare. Columns (III) and (IV) in Table V summarize the surplus of buyers who purchase the widget. Note that when analyzing buyer surplus, blue refers to the state (widget type) rather than the report, as in columns (I) and (II). We observe that in the red state, a buyer in the MT receives about 15 points less than a buyer in the CT. When the state is blue, a buyer in the MT receives 4 points more than a buyer in the CT; however, this number is not statistically different from zero. This indicates that competition among buyers may lower surplus, particularly when the information regarding widget type is reliable. Intuitively, if buyers believe that a report is accurate, then their bids should approach the underlying widget value, which decreases their possible surplus. Overall, buyers in the blue state receive about 43 points (= 67 − 20 − 0.29 × 12.5). If we omit the dummies for asset type (specification (IV)) and only analyze the treatment effect, then we find that the surplus in the MT is about 6 points lower, though this difference is not statistically significant. 5. Discussion The experimental design presented in this article is motivated by the theoretical work of Bolton, Freixas, and Shapiro (2012), who rely on an exogenous cost to address reputational concerns that arise when issuing fraudulent reports. Our results indicate that the lower fees observed in the competitive treatment reduce the incentive to inflate ratings.
In other words, the combination of the exogenously imposed reputation cost and lower fees makes ratings inflation costly for the CRAs. Therefore, in our economic experiment the effect of competition on the morals of markets is not deleterious, and may even be beneficial. Whether or not competition in the ratings market actually leads to a better or more optimal outcome may depend on a number of factors. For example, when the ratings market becomes more competitive, there is a possibility of ratings shopping. That is, if the seller is not pleased with the report offered by a particular CRA, he can continue to search for a better report. This is inefficient not only because of the loss of information, since reports that are not purchased remain unpublished, but also because it creates an incentive for the CRAs to inflate ratings in order to sell reports. On the other hand, increased competition among CRAs can drive down report fees, making ratings inflation costly. In conjunction with sophisticated buyers (recall that the ratings inflation equilibrium requires a large population of naive buyers), this provides the correct incentives for truth telling, as CRAs become more concerned about reputation. There are a number of ways to extend this work and improve our understanding of CRA market dynamics. For example, the level of α is clearly important in determining actual buyer behavior, yet it is a difficult parameter to evaluate. Our experiment implies that sophistication is not necessarily static and may depend on market structure. Another possibility is to introduce endogenous costs (e.g., see Mathis, McAndrews, and Rochet, 2009), and then include communication between sellers and CRAs. Such an experiment could provide more information on whether our initial approach is correct and whether the competition effect holds when the experimental design is modified.
Furthermore, there may be a threshold number of CRAs (as suggested by Hirth, 2014) that minimizes the conflict of interest. We only provide evidence in support of competition, but do not quantify the optimal level of competition. Moreover, whether the existence of a conflict of interest between the sellers of debt and the issuers of ratings is actually important may depend on firm structure. For example, it is possible that the conflict of interest was always there and was exacerbated when the CRAs became publicly traded corporations, which in effect diluted the value of reputation. Therefore, a number of factors may be at play here, and it is important that we study them before drawing conclusions. Lastly, from what we know of the existing literature, there is no clear guide to the behavior of market participants during business cycles, which can be interpreted as good or bad asset states in our game. For example, there is some evidence that in a downturn people are more likely to default when volatility is high (e.g., see Rabanal, 2014), as well as empirical evidence of a “boom bias” in ratings due to conflict of interest (Dilly and Mählmann, 2016). However, collectively, there is still much to be studied in this area. We believe that additional research can provide deeper insights regarding the behavior and motivation of agents during business cycles, as well as the role of rating agencies in the resulting outcomes. Footnotes * For very helpful comments and suggestions, we are indebted to an anonymous referee, the editor Burton Hollifield, Aleksandr Alekseev, Yin-Wong Cheung, Dan Friedman, David Gill, John Horowitz, Luba Petersen, and participants of our presentations at ESA 2015 Europe, University of Maine, ESA 2015 North America Dallas, University of the Basque Country, and the 2016 Society of Experimental Finance Conference. Daniel Mowan and Andie Cole provided excellent assistance in running the experimental sessions.
This research was supported by funds granted by the Miller College of Business, Ball State University. 1 White (2010) provides an excellent overview of how the financial regulatory structure affected the behavior of rating agencies by increasing their market power via the legislation that established NRSROs, and also discusses the change from the investor-pay to the issuer-pay model currently in use. 2 http://www.europarl.europa.eu/news/en/news-room/content/20111219IPR34550/html/Credit-rating-agencies-MEPs-want-less-reliance-on-big-three. 3 One possible explanation for the increase in ratings inflation during economic downturns is a change in the incentive structure on the seller side of the market, as buyer behavior (trust) remains relatively stable. 4 Our results, which show high variance for misreporting, are in line with cheap talk experiments. We discuss this in greater detail below. 5 Moore et al. (2006) focus on the morality behind ratings inflation and propose a number of strategies to reduce the conflict of interest, such as hiring CRAs long-term regardless of their reports. 6 The authors note that the investment behavior of receivers cannot be explained by risk preferences or as a best response to the subject's own behavior in the sender's role. 7 This line of literature uses real effort tasks to study misreporting. Subjects are asked to perform a task and are paid according to a predetermined scheme. There is no monitoring and there are no material costs for misreporting. Cadsby, Song, and Tapon (2010) show that target-based compensation decreases truth telling compared to piece-rate and tournament-based schemes. When a task is randomized in nature, Fischbacher and Föllmi-Heusi (2013) find that most subjects lie partially. That is, they do not report the highest outcome, but do report higher than the actual outcome. Conrads et al.
(2014) introduce a competitive aspect by increasing the payoff difference between winners (those that report the highest outcome) and losers and find that this fosters misreporting. 8 We should also consider whether truth telling is the desired behavior. According to Cassar, Friedman, and Schneider (2009), cheating facilitates trade by increasing the overall volume, though it also decreases cross-market trade significantly. In their environment, contracts can only be enforced domestically, where cheating is not possible, but not internationally, where cheating is possible. This causes high surplus traders to leave international markets for domestic ones. 9 Mayhew, Schatzberg, and Sevcik (2001) study the impact of signal certainty on the objectivity of auditors and find that there are fewer violations when the signal is more precise. 10 These parameters lead to the following widget values: Wb = 110, W0 = 70, and Wr = 30. 11 Sánchez-Pagés and Vorsatz (2007) also find that punishment only minimally increases truth telling. Conversely, truth telling can also be viewed as deceptive if the sender chooses the true message with the expectation that the receiver will behave in a way contrary to the message (Sutter, 2009). 12 The term pit refers to markets formed by buyers, sellers, and CRAs. In essence, each pit is a silo. 13 Due to the experimental design, the buyer is always at a disadvantage. Even when acting rationally, buyers will lose surplus to other players. We did not want the payout to reflect only the show-up fee, which would negatively affect their impressions of future experiments. We anticipated this and wanted to state that the actual payout in the end may differ, but only in the positive direction. 14 We do want to acknowledge the possibility that some subjects may have misinterpreted the instructions.
However, since we always state and write the conversion rate on the board at the beginning of each session, we think that it is unlikely to have influenced subject behavior. 15 Recall that our experimental design omits practice rounds; therefore, in the early periods of the game the subjects may be learning about the institutions. APPENDIX A Proofs: We closely follow the work of BFS12, who focus on pure strategies, and then add a pure and mixed strategy analysis for the competitive environment of our game. We begin with the monopolist CRA. Note that there is no reason for a seller to purchase a red report because there is no state in which a red report would increase the valuation by sophisticated buyers above their ex ante valuation W0. On the contrary, it is possible that a red report could actually decrease the valuation by trusting buyers below W0. Therefore, conditional on receiving a red signal (rr), the (net) payoff to the CRA for reporting red (m = rrr) is π(rrr | rr) − π(bbb | rr) = −φ + eρ. This yields two possible information regimes (Lemma 1 of BFS12): if φ ≥ eρ, the CRA will always report blue (bbb); and if 0 < φ < eρ, the CRA will always report truthfully. Proposition 1 (BFS12). If the CRA always reports blue (bbb), then the seller should be willing to purchase the report as long as the fee does not exceed αWb − W0, which is the incremental profit based on the buyer value of widget types, and where α refers to the fraction of naive buyers in a given population. Always reporting blue is feasible when the fee is greater than the (expected) punishment cost: αWb − W0 > eρ. If the CRA reports truthfully, the blue report (m = bbb) will induce a high valuation by all buyers, while the red report (m = rrr) will result in a low valuation by sophisticated buyers and the ex ante valuation by naive buyers. The maximum fee is subject to the following constraint: φ ≤ Wb − max[αW0, Wr]. For the CRA to report truthfully, the fee should not be greater than eρ.
Therefore, φ = min[Wb − max[αW0, Wr], eρ]. The predicted equilibrium depends on the parameter values. We chose the following parameters, Vb = 120, Vr = 20, e = 0.90, and ρ = 10, and found that the minimum level of α (denoted as α¯) that supports the equilibrium in which the CRA always reports blue is 0.72. Therefore, if α ≥ α¯, then the CRA will always report blue, and if α < α¯, then the CRA will report truthfully. Proposition 2. Suppose there are two CRAs (labeled k, −k) in the market and that only one report is published. If α < α¯, then the CRAs must be truth telling and the fees must satisfy the condition φk,−k < eρ. Just as in a Bertrand (price) model, each CRA has an incentive to undercut the competitor's fee. If φk < φ−k, and reports are viewed as homogeneous, then the seller will accept the report from CRA k, and thus CRA −k will also have an incentive to decrease its fee. The equilibrium prediction is that both CRAs will report truthfully and that φk,−k will approach zero. Inflationary Rating Equilibrium. An inflationary rating (always report blue) equilibrium occurs when α ≥ α¯. Below we present the cases for three pure strategies and finally a mixed strategy approach. Pure Strategy 1: φk,−k < eρ. A deviator can always obtain a higher profit by increasing his fee and choosing to always report blue, because the seller will purchase the report as long as φ ≤ αWb − W0. Therefore, this fee strategy cannot be an equilibrium. Pure Strategy 2: φk,−k > eρ. A CRA will always have an incentive to undercut fees. If φk < φ−k, then the seller will accept the report from CRA k, and CRA −k will have an incentive to decrease its fee. Therefore, this fee strategy cannot be an equilibrium. Pure Strategy 3: φk,−k = eρ. A CRA will always have an incentive to slightly undercut the fee by some value ϵ and obtain greater profits. When that occurs, the CRA will report truthfully and obtain positive profits only when the state is blue.
Therefore, this fee strategy also does not support an inflationary equilibrium. Mixed Strategy. Since the CRA must set its fee before receiving a signal, when it sets a low fee it will report truthfully, and when it sets a high fee it may lie: φk,−k = eρ with probability 0.5, and φk,−k = 0 with probability 0.5. The CRA should only lie if it receives a red signal, which occurs with probability P(θ = rr) = P(θ = rr | ω = b)P(ω = b) + P(θ = rr | ω = r)P(ω = r) = (0.1)(0.5) + (0.9)(0.5) = 0.5. Recall that under a NE, there is no profitable deviation. Case 1. If CRA k sets φ = 0 with probability greater than 0.5 and −k with probability equal to 0.5, then k will not sell its report when the signal is red and the state is blue. Therefore, it is profitable for CRA k to decrease the probability with which it sets a low fee (lie more), and this cannot be a NE. Case 2. If CRA k sets φ = 0 with probability less than 0.5 and −k with probability equal to 0.5, then k will not sell its report when the signal is blue. Neither CRA misreports when the signal is blue, and in this case CRA −k will have a lower φ. Therefore, it is profitable for CRA k to increase the probability with which it sets a low fee, and this cannot be a NE. Thus, the only mixed strategy NE is when the CRAs choose φ ∈ {0, eρ} with probability equal to 0.5. APPENDIX B: Instructions (Competition Treatment) Welcome! You are participating in an economics experiment at the CEED Lab. In this experiment, you will participate in a market game. If you read these instructions carefully and make appropriate decisions, you may earn a considerable amount of money that will be immediately paid out to you in cash at the end of the experiment. Each participant is paid $5 for attending. Throughout this experiment, you will also earn points based on the decisions you make. The rate at which we exchange your points into cash will be explained to you shortly. We reserve the right to improve this rate in your favor if average payoffs are lower than expected.
Please turn off all cell phones and other communication devices. During the experiment you are not allowed to communicate with other participants. If you have any questions, the experimenter will be glad to answer them privately. If you do not comply with these instructions, you will be excluded from the experiment and deprived of all payments aside from the minimum payment of $5 for attending. The experiment you will participate in involves interaction in a market setting. In this market, there are buyers, sellers, and intermediaries. Once you begin playing, you will be assigned a specific role that you will keep throughout the duration of the experiment. In the market, sellers will be selling a good called “widgets” to buyers. These widgets have an uncertain color. Blue widgets are highly valuable to buyers, while red widgets are worth considerably less. Intermediaries are tasked with evaluating the color of the widgets. You will be playing a series of rounds. Each round will consist of decisions made by intermediaries, sellers, and buyers. In the instructions below, we explain how your decisions as a buyer/seller/intermediary will affect your points and total earnings. The Experiment The experiment will feature a number of rounds. In each round, you will be assigned to a market that consists of 1 seller, 2 intermediaries, and 2 buyers. While the markets you interact in will change throughout the course of the experiment, your role will remain the same. Each round, a seller will have a single widget to sell. Neither the seller nor the buyers know the color for certain before making a transaction. They do, however, know that the widget is blue with a probability of 50% and red with a probability of 50%. The intermediaries each have access to research teams that can observe the correct color with 90% probability.
The seller observes both intermediaries' requested fees and their reported colors for the widget, and can hire up to one intermediary to issue its report to buyers about the color of the widget. Each round will consist of five stages: Stage 1 (10 s): Each intermediary will privately decide how much to charge for his or her report. This amount can be any number between 0 and 120. Stage 2 (10 s): Each intermediary will then receive costless information from its research team on the color of the seller's widget. This information is accurate 90% of the time. Each intermediary receives an independent draw of information from its research team (thus it is possible that two intermediaries assessing the same good receive different information). Each intermediary must then decide what color to report to the seller (blue or red). Stage 3 (10 s): The seller receives the reports and fees of both intermediaries. Notice that the fee asked by the intermediary is set before receiving any information from the research team. The seller must then decide either to (1) accept only one of the two fees and publish the report of the selected intermediary for buyers to view or (2) reject both reports and fees. If the seller rejects both reports, buyers will be notified “No report is available”. Stage 4 (10 s): The buyers observe either (1) a blue report, (2) a red report, or (3) “No report is available”. The buyers will also be informed about the fee associated with the report (if any). They must then decide individually how much to bid on the widget. The buyer who submits the highest bid will pay her bid and receive the widget. The winning buyer receives 120 points if the widget is blue and 20 points if it is red. The other buyer will not pay anything and will not receive any points. If both buyers submit the same bid, the computer will randomly decide with 50–50 probability which buyer to award the widget to. Stage 5 (10 s): All players observe the outcome of the round.
They will learn what the winning bid was, the actual color of the widget, and their own earnings. In each round, all intermediaries are endowed with 10 points. The endowment can be taken away from the hired intermediary if it reports that the widget is blue when its research team indicated it was red AND the widget is revealed to be red. Earnings Your earnings will be computed according to the formula for your role:
Sellers: Earnings of a Seller = Winning Bid − Fee Paid to Hired Intermediary (if one was hired)
Buyers: Earnings of the Winning Buyer = Value of the Widget − Winning Bid, where the Value of the Widget is 120 if the widget is blue and 20 if the widget is red. Earnings of the Other Buyer = 0
Intermediaries: Earnings of the Hired Intermediary = Report Fee + Endowment, where the endowment will be 0 if the hired intermediary is found to report blue when its research team indicated the widget was likely red AND the widget is actually red, and 10 otherwise. Earnings of the Other Intermediary = Endowment
There are twenty participants in this session. There will be four markets at any point. Every round you will be rematched with four strangers from your group only. While you will not know who you are playing with, you will end up interacting with players more than once. No two markets you participate in will have exactly the same people. The points you earn from six randomly selected rounds will be added up, exchanged into dollars, and paid to you, along with your show-up fee, in cash at the end of the experiment. Your exchange rate is written on the board. Can I earn negative points? Yes, all players can potentially earn negative points based on their decisions and the decisions of others. If, at the end of the experiment, your total points are negative, we will deduct the required amount from your show-up fee, up to $5. APPENDIX C: User Interface (Monopoly Treatment) Figure C1: Intermediary's interface.
Figure C1: Intermediary's interface.

Figure C2: (a) Seller's and (b) buyers' interfaces.

Figure C3: Results interface.

References

Ashcraft, A., Goldsmith-Pinkham, P., Hull, P., and Vickery, J. (2011) Credit ratings and security prices in the subprime MBS market, American Economic Review (Papers and Proceedings) 101, 115–119.
Baghai, R. P., Servaes, H., and Tamayo, A. (2014) Have rating agencies become more conservative? Implications for capital structure and debt pricing, Journal of Finance 69, 1961–2005.
Bar-Isaac, H. and Shapiro, J. (2013) Ratings quality over the business cycle, Journal of Financial Economics 108, 62–78.
Baron, R. M. and Kenny, D. A. (1986) The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations, Journal of Personality and Social Psychology 51, 1173–1182.
Becker, B. and Milbourn, T. (2011) How did increased competition affect credit ratings?, Journal of Financial Economics 101, 493–514.
Bolton, P., Freixas, X., and Shapiro, J. (2012) The credit ratings game, Journal of Finance 67, 85–111.
Cadsby, C. B., Song, F., and Tapon, F. (2010) Are you paying your employees to cheat? An experimental investigation, B.E. Journal of Economic Analysis & Policy 10, 30.
Cantor, R. and Packer, F. (1995) The credit rating industry, Journal of Fixed Income 5(3), 10–34.
Cassar, A. and Rigdon, M. (2011) Trust and trustworthiness in networked exchange, Games and Economic Behavior 71, 282–303.
Cassar, A., Friedman, D., and Schneider, P. (2009) Cheating in markets: a laboratory experiment, Journal of Economic Behavior and Organization 72, 240–259.
Cole, H. L. and Cooley, T. F. (2014) Rating agencies. NBER Working Paper No. 19972, National Bureau of Economic Research.
Conrads, J., Irlenbusch, B., Rilke, R. M., Schielke, A., and Walkowitz, G. (2014) Honesty in tournaments, Economics Letters 123, 90–93.
Davis, D. D. and Holt, C. A. (1993) Experimental Economics, Princeton University Press.
Dilly, M. and Mählmann, T. (2016) Is there a "boom bias" in agency ratings?, Review of Finance 20, 979–1011.
Duflo, E., Greenstone, M., Pande, R., and Ryan, N. (2013) Truth-telling by third-party auditors and the response of polluting firms: experimental evidence from India, Quarterly Journal of Economics 128, 1499–1545.
Faravelli, M., Friesen, L., and Gangadharan, L. (2015) Selection, tournaments, and dishonesty, Journal of Economic Behavior & Organization 110, 160–175.
Faure-Grimaud, A., Peyrache, E., and Quesada, L. (2009) The ownership of ratings, The RAND Journal of Economics 40, 234–257.
Fischbacher, U. (2007) z-Tree: Zurich toolbox for ready-made economic experiments, Experimental Economics 10, 171–178.
Fischbacher, U. and Föllmi-Heusi, F. (2013) Lies in disguise: an experimental study on cheating, Journal of the European Economic Association 11, 525–547.
Forsythe, R., Lundholm, R., and Rietz, T. (1999) Cheap talk, fraud, and adverse selection in financial markets: some experimental evidence, Review of Financial Studies 12, 481–518.
Gill, D., Prowse, V., and Vlassopoulos, M. (2013) Cheating in the workplace: an experimental study of the impact of bonuses and productivity, Journal of Economic Behavior & Organization 96, 120–134.
Greiner, B. (2004) An online recruitment system for economic experiments, in: K. Kremer and V. Macho (eds.), Forschung und wissenschaftliches Rechnen, GWDG Bericht 63, Ges. für Wiss. Datenverarbeitung, Göttingen, pp. 79–93.
Hirth, S. (2014) Credit rating dynamics and competition, Journal of Banking and Finance 49, 100–112.
Imai, K., Keele, L., and Tingley, D. (2010) A general approach to causal mediation analysis, Psychological Methods 15, 309–334.
Kisgen, D. J. and Strahan, P. E. (2010) Do regulations based on credit ratings affect a firm's cost of capital?, The Review of Financial Studies 23, 4324–4347.
Kluger, B. D. and Slezak, S. L. (2015) Fraudulent misreporting and the business cycle: an experimental investigation. Mimeo.
Manso, G. (2013) Feedback effects of credit ratings, Journal of Financial Economics 109, 535–548.
Mathis, J., McAndrews, J., and Rochet, J. C. (2009) Rating the raters: are reputation concerns powerful enough to discipline rating agencies?, Journal of Monetary Economics 56, 657–674.
Mayhew, B. W. and Pike, J. E. (2004) Does investor selection of auditors enhance auditor independence?, The Accounting Review 79, 797–822.
Mayhew, B. W., Schatzberg, J. W., and Sevcik, G. R. (2001) The effect of accounting uncertainty and auditor reputation on auditor objectivity, Auditing: A Journal of Practice & Theory 20, 49–70.
Moore, D. A., Tetlock, P. E., Tanlu, L., and Bazerman, M. H. (2006) Conflicts of interest and the case of auditor independence: moral seduction and strategic issue cycling, Academy of Management Review 31, 10–29.
Opp, C. C., Opp, M. M., and Harris, M. (2013) Rating agencies in the face of regulation, Journal of Financial Economics 108, 46–61.
Rabanal, J. P. (2014) Strategic default with social interactions: a laboratory experiment, in: S. M. Collins, R. M. Isaac, and D. A. Norton (eds.), Experiments in Financial Economics, Research in Experimental Economics, Volume 16, Emerald Group Publishing Limited, pp. 31–52.
Sánchez-Pagés, S. and Vorsatz, M. (2007) An experimental study of truth-telling in a sender–receiver game, Games and Economic Behavior 61, 86–112.
Sheremeta, R. and Shields, T. (2013) Do liars believe? Beliefs and other-regarding preferences in sender–receiver games, Journal of Economic Behavior and Organization 94, 268–277.
Sutter, M. (2009) Deception through telling the truth?! Experimental evidence from individuals and teams, The Economic Journal 119, 47–60.
White, L. J. (2010) The credit rating agencies, The Journal of Economic Perspectives 24, 211–226.

© The Authors 2017. Published by Oxford University Press on behalf of the European Finance Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

Review of Finance, Advance Article, March 16, 2017
Publisher: Oxford University Press
ISSN: 1572-3097; eISSN: 1573-692X
DOI: 10.1093/rof/rfx012

In terms of varying levels of inflation over the course of an economic cycle, an experimental study by Kluger and Slezak (2015) finds that misreporting is more likely during economic downturns. It should be noted that they do not study the effect of competition on player strategies, and their design does not include an intermediary.3 Although our study is motivated by the theoretical work of BFS12, our experimental design differs in a few ways. We allow for only one report to be purchased in the competitive treatment, and instead of the seller posting the terms, we have buyers bid for the assets. The constraint of one report makes the competition between the CRAs more acute and allows us to avoid the complication that comes with not knowing the actual value of the additional report: it is not clear that both reports should or will be viewed as providing the same value to market participants. We also believe that having buyers bid based on the reports available (if any) can signal how naive or trusting the buyers are. Our findings suggest that ratings inflation is costlier when the market is competitive. According to our estimations, subjects in the monopoly treatment are 1.7 times more likely to misreport than those in the competitive treatment (significant at the 10% level).4 Additionally, we find that the buyers are relatively sophisticated in the competitive treatment. Introducing competition to the CRA market puts downward pressure on fees, making ratings inflation costly. Bar-Isaac and Shapiro (2013) use a theoretical model with endogenous reputation and find that ratings are less accurate when fees are high. Therefore, if competition can lower fees, then ratings inflation and ratings shopping may be less likely to occur. In another theoretical work, Manso (2013) finds that an increase in competition leads to rating downgrades (tougher ratings), an increase in default frequency, and a reduction in welfare.
Furthermore, he suggests that the CRAs should consider long-term borrower survival, which means that a soft-rating equilibrium (higher ratings) may be preferable. Thus, while an increase in competition can make ratings more accurate, such an outcome is not necessarily preferable if it decreases the odds of survival of the borrowers. Using an evolutionary approach, Hirth (2014) analyzes CRAs and competition, where investors are either sophisticated or naive. He finds that CRAs are forced to be honest when there are enough sophisticated investors in the market, so that the reputational concern is real. In our experimental design, and given our parameter values, the minimum share of naive investors needed to make ratings inflation feasible is about 72%. Furthermore, Hirth determines that there is a critical number of CRAs in the market, above which the reputation costs are high enough to guarantee at least temporarily honest behavior. Becker and Milbourn (2011) use an empirical approach to show that the increase in competition due to the entrance of Fitch into the ratings market led to lower-quality ratings from incumbents. Quality was measured along two dimensions: (1) the ability of ratings to transmit information to investors; and (2) the ability of ratings to classify risk. Another recent empirical work, by Baghai, Servaes, and Tamayo (2014), analyzes changes in the standards of the CRAs over time. They find that for corporate debt, CRAs have actually become more conservative: an AAA firm from 1985 would be rated AA today. From a different standpoint, Cole and Cooley (2014) argue that the real problem is not who pays for ratings, since reputational concerns help keep ratings honest. The actual problem, according to the authors, is the regulatory reliance on ratings and the increasing importance of risk-weighted capital in regulation, which lead to distorted ratings.
Information distortion exists because those who purchase ratings do not necessarily need to reveal them. Thus, the ultimate impact of ratings is actually quite complex. According to a number of studies, ratings have a dual role: they provide information to investors and they are used for regulatory purposes (Ashcraft et al., 2011; Kisgen and Strahan, 2010). Ratings can also affect market prices through regulation, independent of the information they provide about the actual asset. In turn, rating-contingent regulation increases the volume of highly rated securities (Opp, Opp, and Harris, 2013). In terms of the experimental literature, perhaps the most closely related work is by Mayhew, Schatzberg, and Sevcik (2001), who examine whether accounting uncertainty (signal precision) impacts auditor objectivity. Their results suggest that accounting uncertainty impacts auditor objectivity despite damage to auditor reputation, and that in the absence of uncertainty auditors remain objective. In a subsequent study extending the 2001 design, Mayhew and Pike (2004) analyze whether investor selection of auditors can improve auditor independence, and find that transferring the power to hire and fire the auditor from managers to investors significantly decreases the proportion of violations and increases the overall economic surplus.5 Other important experimental work that can help us understand the dynamics of the CRA market studies sender–receiver games and cheap talk. Forsythe, Lundholm, and Rietz (1999) analyze a market where only the sellers know the true value of the asset. When cheap talk is allowed, they find that sellers make fraudulent announcements 47% of the time (with a standard deviation of 19). Similarly, Sheremeta and Shields (2013) find that receivers are prone to deception.
In a study where subjects play both roles, they find that in the role of the sender, 60% of subjects adopt deceptive strategies by sending a favorable message when the true state of nature is unfavorable. As receivers, nearly 70% of subjects invest when the message is favorable.6 Adding a competition aspect to the sender–receiver game, Cassar and Rigdon (2011) find that including an additional sender increases trust and trustworthiness; however, this outcome requires an environment of complete information. In addition to laboratory experiments, there are also field experiments that consider the role and impact of CRAs. Duflo et al. (2013) conduct a randomized field experiment and provide evidence on how a conflict of interest can undermine information provision by third-party auditors. Their results show that a set of correct incentives, or reforms, can lead to greater accuracy and improve compliance with regulation. To control cheating, or ratings inflation, we can introduce punishment, change the market format, include reputation costs, alter the compensation scheme, or add some other mechanism with the hope of realigning player incentives. Some studies have already looked at punishment (Sánchez-Pagés and Vorsatz, 2007), while others have included reputation costs in the analysis, both endogenously and exogenously (Mayhew, Schatzberg, and Sevcik, 2001; Mayhew and Pike, 2004; BFS12).
In addition, Gill, Prowse, and Vlassopoulos (2013) and Faravelli, Friesen, and Gangadharan (2015) show that different compensation schemes, such as bonuses and winner-take-all tournaments, may contribute to misreporting.7 We introduce competition and provide experimental evidence that the choice of market structure can either mitigate or exacerbate the conflict of interest that is inherent in issuer-pay models.8 The rest of the article is organized as follows: Section 2 describes the general environment of the credit ratings game; Section 3 details the laboratory procedures; Section 4 presents the results; and lastly, Section 5 discusses our main findings. Appendix A elaborates on the theoretical predictions of our game, Appendix B includes the instructions used in experimental sessions, and Appendix C shows screenshots of the user interface.

2. The Environment

The environment in our experiment is motivated by the work of BFS12. We study the interaction of three player types: sellers, CRAs, and buyers. In the market, there are two types of widgets that can be sold, either blue (b) or red (r), ω ∈ {b, r}. A buyer's valuation for each widget type is summarized by V_b and V_r, with V_b > V_r. Ex ante, the buyer does not know the type of widget sold in the market. The buyer, however, may have access to a report suggesting the widget type, if the seller chooses to purchase it from the CRA. The CRA has access to a research team which receives an informative signal regarding the widget type, θ ∈ {bb, rr}. Given the state ω, the private signal has the following informational content:

Pr(θ = bb | ω = b) = Pr(θ = rr | ω = r) = e,

where e ∈ (1/2, 1) is the precision of the signal. In our experiment we use a high-precision signal, e = 0.90. Before receiving the signal, the CRA must post a fee φ at which a report can be purchased by the seller. When the seller approaches the CRA, it retrieves a signal θ and produces a report.
The CRA does not have to report the signal suggested by the research team; it can produce a red or blue report regardless of the signal. After reading the report, the seller can either: (1) accept the report and pay φ, or (2) reject the report, in which case the CRA will not receive a payment. It is not possible to have unsolicited ratings. Therefore, when a report is rejected by the seller, the buyer must guess or deduce the value of the widget. The published report (if any) is a message suggesting the widget type, m = bbb ("blue") or m = rrr ("red"), that is observable to buyers. The agency also faces a fixed penalty cost, equivalent to the endowment ρ, which is incurred when the report is bbb, given that the research team announced rr and the widget is r. The loss of the endowment can be viewed as an exogenous reputation cost, litigation cost, or any other cost associated with the consequence of distributing purposefully inaccurate or inappropriate information to market participants. The buyers observe a report, if one is published, and then bid for the widget. The profits for each player are computed as follows:

π_buyer = V_ω − bid if winner; 0 otherwise. (1)

π_CRA = ρ + φ if the published report is truthful; ρ + φ if the published report is not truthful and the state is blue; φ if the published report is not truthful and the state is red; ρ if the report is rejected (not published). (2)

π_seller = bid − φ if the report is accepted; bid otherwise. (3)

BFS12 also assume that there are different types of buyers, which we account for with the parameter α ∈ [0, 1]. A type α = 1 buyer bids the highest possible valuation regardless of the report received, while a type α = 0 buyer processes information rationally and bids according to the message received. α can also be interpreted as the fraction of naive, trusting, or less sophisticated buyers in a given population.
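As an illustration, the profit rules in Equations (1)–(3) can be written out as a short script (our own sketch, not the software used in the experiment; point values follow the design, with V_b = 120, V_r = 20, and ρ = 10):

```python
# Profit rules of the ratings game, following Equations (1)-(3).
# V[w] is the widget's value in state w; phi is the report fee; rho the endowment.
V = {"b": 120, "r": 20}

def buyer_profit(winner, state, bid):
    """Equation (1): the winning buyer pays her bid and receives V_w; others earn 0."""
    return V[state] - bid if winner else 0

def cra_profit(published, truthful, state, phi, rho=10):
    """Equation (2): a hired CRA earns the fee plus the endowment, forfeiting
    the endowment only for an untruthful report when the state is red."""
    if not published:
        return rho
    if truthful or state == "b":
        return rho + phi
    return phi

def seller_profit(accepted, bid, phi):
    """Equation (3): the seller keeps the winning bid, net of any fee paid."""
    return bid - phi if accepted else bid

# Example: an inflated report is published for phi = 30, the state turns out
# to be red, and the winning bid is 70.
print(cra_profit(True, False, "r", 30))  # 30 (endowment forfeited)
print(seller_profit(True, 70, 30))       # 40
print(buyer_profit(True, "r", 70))       # -50
```

The example shows why the endowment acts as a penalty: the untruthful CRA keeps only the fee when caught in the red state.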
We define the value of the widget for type α = 0 buyers as:

W_b = (1 − (1 − e))·V_b + (1 − e)·V_r
W_r = (1 − e)·V_b + (1 − (1 − e))·V_r
W_0 = (1/2)·V_b + (1/2)·V_r

where W_b, W_r, and W_0 denote the value of the widget for the buyer based on the report observed: blue, red, and no report, respectively. The theoretical predictions for the monopolistic CRA market structure suggested by BFS12 are summarized in Proposition 1.

Proposition 1. The two resulting equilibria of the game are:

1. The rating agency always reports blue (bbb). Ratings inflation occurs if and only if the fee satisfies the condition φ = αW_b − W_0 > eρ.

2. The rating agency reports truthfully. This results in a fee such that φ = min[W_b − max[αW_0, W_r], eρ].

In Appendix A, we describe in detail the original proof by BFS12. We emphasize that the minimum value of α (denoted α¯) required to achieve the inflationary equilibrium is 0.72, using the following parameter values: Pr(ω = b) = 0.50, V_b = 120, V_r = 20, e = 0.90, and ρ = 10. Therefore, as long as α ≥ α¯, the fee is equal to the marginal benefit of a blue report versus the alternative of not providing relevant information to trusting investors. Additionally, the fee has to be greater than eρ because reporting blue when the signal (with accuracy e) and the state are red will result in a penalty. In the case that α < α¯, the CRA reports truthfully and the fee is bounded by the expected penalty cost eρ and the benefit of providing accurate information to the market. In addition to the monopolistic case, we introduce another market environment where two CRAs compete to sell their reports. In our environment, only one report can be published. BFS12 work with a different duopoly structure: they allow the seller to accept both reports, and therefore their theoretical analysis accounts for the value of the additional report. In our design, the competition between CRAs is more acute, since only one report can be purchased.
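Before turning to the duopoly, the threshold α¯ ≈ 0.72 can be checked numerically under the stated parameter values (a quick sketch of our own, not the authors' code):

```python
# Parameter values from the design: Pr(b) = 0.5, V_b = 120, V_r = 20,
# signal precision e = 0.90, endowment rho = 10.
e, Vb, Vr, rho = 0.90, 120, 20, 10

Wb = e * Vb + (1 - e) * Vr  # rational valuation after a blue report: 110
Wr = (1 - e) * Vb + e * Vr  # rational valuation after a red report: 30
W0 = 0.5 * Vb + 0.5 * Vr    # rational valuation with no report: 70

# Ratings inflation requires alpha * Wb - W0 > e * rho, so the minimum
# fraction of trusting buyers is:
alpha_bar = (W0 + e * rho) / Wb
print(round(alpha_bar, 2))  # 0.72
```

With these parameters the threshold is (70 + 9)/110 ≈ 0.718, matching the 0.72 figure quoted in the text.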
Furthermore, when sellers are able to accept both reports, reject one, or reject both, it becomes harder to evaluate strategic behavior without knowing the value of this additional report. The CRA market in our game can be characterized as a Bertrand competition, where firms simultaneously set prices and compete to sell undifferentiated products. The theoretical predictions for the competitive environment are summarized by the following proposition:

Proposition 2. In a competitive environment, where there are two CRAs in the market and only one report can be published, there are two possible equilibria:

1. If and only if α ≥ α¯, there is a unique symmetric mixed-strategy equilibrium in which each CRA chooses φ ∈ {0, eρ}, each with probability 0.5. When a CRA sets a low fee it will report truthfully, and when it sets a high fee it may lie.

2. If and only if α < α¯, both CRAs report truthfully and the fees approach zero.

The description of our proof is included in Appendix A. We show that there is no profitable deviation from the mixed-strategy equilibrium when α ≥ 0.72. The second equilibrium puts downward pressure on fees, since ratings inflation is not feasible. Which equilibrium do we expect in our experiment? As noted earlier, the value of α plays an important role in determining whether we observe ratings inflation. However, we do not know the value of α ex ante. Instead, we claim that α_CT ≤ α_MT, which leads to two possibilities: (1) α_CT = α_MT, where the sophistication level of buyers evolves similarly across treatments, or (2) α_CT < α_MT, where buyers in the CT become more sophisticated relative to those in the MT, or α_CT evolves toward zero because more reliable information is available. We narrow our predictions to the following:

Prediction 1. Ratings inflation in a competitive market is not greater than under monopoly.

In the case that α < 0.72, the CRAs will report truthfully in both market environments.
However, when α ≥ 0.72, the monopolist CRA has an incentive to inflate ratings, while in the competitive environment we expect that, with some probability, both CRAs will report truthfully. Thus, ratings inflation under competition is equal to or lower than under monopoly. It is important to highlight that the outcomes in the monopolistic environment can be viewed as similar to sender–receiver game outcomes. In these games (Forsythe, Lundholm, and Rietz, 1999; Sánchez-Pagés and Vorsatz, 2007), truth telling is fairly common and highly variable.11 Therefore, we should not be surprised if we observe a significant amount of truth telling develop over the course of the game.

Prediction 2. When the CRA inflates ratings, φ > eρ.

A CRA that is not truth telling will charge higher fees to compensate for the cost associated with being caught. The expected cost is eρ, which is based on the signal precision and the endowment value.

Prediction 3. The fee under competition is lower than under monopoly.

Competition between the CRAs puts downward pressure on report fees, since only one report can be purchased by a seller. Rivalry in prices, or report fees in our case, has been investigated extensively in the early experimental literature (see the overview by Davis and Holt, 1993), which found that an increase in competition decreases prices.

Prediction 4. No report is viewed similarly to a red report when buyers are sophisticated.

In the case that buyers are sophisticated and respond appropriately to the different reports published in the market, we expect that a buyer facing no report will value the widget as much as when facing a red report.

3. Laboratory Procedures

We employed a total of 144 subjects in 16 pits12 at the CEED laboratory at Ball State University. Participants, comprising undergraduate students from all fields, were recruited online via ORSEE (Greiner, 2004). All subjects were assigned to play one of three roles: sellers, buyers, or CRAs.
The treatments were designed with the goal of analyzing the impact of competition on the behavior of CRAs. We run a completely between-subjects design, with each experimental session consisting of either one or two independent pits. Table I presents an overview of our sessions.

Table I. Sessions overview
Note: Each pit has two sellers, four buyers and, depending on treatment, either two or four CRAs. Overall, we have data for eight pits in each treatment and therefore sixteen possible market transactions.

Treatment    Variable                  Sellers   Rating agencies   Buyers
Monopoly     Count                     16        16                32
             Average profit ($)        13.0      9.9               8.3
             Average show-up fee ($)   5.4       5.3               8.3
Competition  Count                     16        32                32
             Average profit ($)        12.4      8.5               8.7
             Average show-up fee ($)   5.4       5.8               7.3

In each monopoly treatment, we use a pit with two sellers, two CRAs, and four buyers that is then split into two groups to form two separate markets. Thus, in each market, the CRA is a monopolist.
These two groups are then reshuffled every period, for a total of 24 periods, so that each resulting group formation is unique. Note that there are no practice rounds in our experiment, so the encounters are unique within groups. In the competitive treatment, the number of CRAs is increased to four per pit, while the number of sellers, buyers, and periods remains constant. Therefore, in this treatment, each market has two CRAs as opposed to one. In the monopoly treatment, every session proceeds as follows:

Stage 1: The CRA enters the fee for writing a report. Figure 2 in Appendix C shows the user interface of Stage 1, designed in z-Tree (Fischbacher, 2007).

Stage 2: The CRA receives information from the research team, that is, an informative signal regarding the value of the widget (red or blue), and then selects whether to report red or blue.

Stage 3: The seller observes the report and the fee and then decides whether to accept or reject the report.

Stage 4: Buyers observe the report, if any, and the fee associated with the report. Buyers then bid for the widget; a bid can be any value between zero and 120.

Final stage: Profits are computed using Equations (1)–(3). All players observe whether the CRA was penalized for misreporting a red signal as blue when the state was red. The information displayed to all players includes the value of the widget in the blue state, the value in the red state, and the accuracy of the research team (90%). The final stage also includes the history of past outcomes.

The competitive environment is similar to the monopoly, except that in Stage 3 there are now two rating agencies, randomly labeled Alpha and Sigma. Each CRA receives its own independent signal, which can be wrong 10% of the time. Therefore, there is always a possibility that the CRAs are not lying when the reports are mixed. The sellers then observe the fees and the reports from both agencies, and decide which report, if any, to accept.
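To gauge how often honest agencies can still disagree: with independent draws at precision e = 0.90, two truthful CRAs issue conflicting reports exactly when one of the two signals is wrong. A back-of-the-envelope sketch (ours, not part of the experimental software):

```python
import random

e = 0.90  # signal precision

# Analytically: conflicting reports from two truthful CRAs occur whenever
# exactly one of the two independent signals is wrong, i.e. 2 * e * (1 - e).
p_conflict = 2 * e * (1 - e)
print(p_conflict)  # ≈ 0.18

# Monte Carlo confirmation over many simulated rounds.
random.seed(1)
trials = 100_000
conflicts = sum(
    (random.random() < e) != (random.random() < e)  # exactly one signal correct
    for _ in range(trials)
)
print(round(conflicts / trials, 2))  # ≈ 0.18
```

So even with fully honest CRAs, sellers should see mixed reports in roughly one round out of five, which is why mixed reports alone cannot reveal lying.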
The instructions read to the participants at the beginning of each session are included in Appendix B. We also provide screenshots of the user interface in Appendix C. Subjects were paid for six random periods in the session at the rate of $1 per 38 points. We also paid a minimum show-up fee of $5, and we reserved the right to modify the payout in favor of participants to improve earnings.13 We emphasize that we never mentioned to participants whether this applied to some players or a specific group. Therefore, we do not believe that this statement resulted in an inadvertent incentive to behave otherwise.14 On average, sessions lasted just under 50 min. We summarize average earnings per player type in Table I.

4. Results

We begin our analysis with a graphical overview of the posted fees and ratings inflation over time. The data are pooled by treatment and period. Panel (a) of Figure 1 shows that the mean posted fee in the monopoly treatment (MT) is greater than the one in the competition treatment (CT). Furthermore, the fees in both treatments follow a downward trend. This downward adjustment of fees is consistent with other posted-offer format experiments.

Figure 1. Panels (a) and (b) display results for posted fees and ratings inflation, respectively, by treatment over time.

Panel (b) of Figure 1 shows the average ratings inflation in each treatment over the course of the game. For periods 1 through 6, there is not much difference in ratings inflation between the two treatments. However, in later periods, ratings inflation in the MT occurs more often than in the CT.
This early similarity followed by divergence in ratings inflation suggests that CRAs across both treatments enter with comparable prior dispositions, and that the institutions governing each treatment then shape their incentives and play.15 Table II presents a brief overview of the summary statistics, according to player type.

The top section of the table displays the summary statistics for the CRA. We can see that ratings inflation is much more prevalent in the MT. In fact, the mean ratings inflation of 26% in the MT is significantly higher than the mean of 17% in the CT. Ratings inflation also has a higher variance under a monopolistic market structure. The fee for the report is also higher in the MT than in the CT (54 and 30, respectively), which is in line with our expectations that prices are lower in competitive markets.

Table II. Summary statistics

                              Monopoly (MT)        Competition (CT)
                              Mean      SD         Mean      SD
Rating agency (CRA)
  Fee posted                  54        13         30        14
  Ratings inflation (%)       26        21         17        15
Seller, acceptance rate (%)
  At least one                43        16         76        10
  Blue                        56        23         80        9
  Red                         19        13         37        12
  Blue (mixed)                –         –          59        21
  Red (mixed)                 –         –          12        14
Buyer, bids
  Blue                        78        15         71        11
  Red                         22        7          23        7
  No report                   41        9          24        10

Note: “Mixed” refers to the scenario in which CRAs offer conflicting reports (blue and red).

The middle section of Table II breaks down the seller acceptance rate of the CRA reports. We first present summary statistics for the acceptance rate of at least one report and then classify the acceptance rate according to report type. Therefore, we present acceptance rates of at least one report that is blue, at least one report that is red, and lastly, in the case of competition, when the reports are mixed (blue and red). Note that for the MT in the “at least one” category, there is only one report in the market. The results suggest that the acceptance rate is higher under the CT regardless of report classification. For example, 37% of red reports are accepted in the CT, compared to 19% in the MT. This result may be driven by lower fees in the CT. The acceptance rate for blue reports is 80% in the CT versus 56% in the MT. In the case of conflicting or mixed reports, possible only in the CT, sellers accept blue reports more than half of the time (59%) and red reports only 12% of the time. The lower section of Table II provides an overview of buyer behavior. Buyers behave more rationally in the CT compared to the MT when no report is issued, with average bids for the asset that are much lower (24 versus 41).
However, bids in the MT may be higher because fewer reports are published overall, and therefore the buyers are uncertain about the widget type and report accuracy, or lack thereof. Furthermore, bids when red reports are issued are relatively accurate in both treatments (22 in the MT and 23 in the CT), while bids when blue reports are issued (78 in the MT and 71 in the CT) are lower than the actual (120) or expected (110) asset value under the specified signal precision. Next, we present the results according to our main findings.

Result 1. Ratings inflation is more likely in the MT.

To obtain our primary result, we use regression analysis to evaluate CRA behavior under the two treatments: MT and CT. Table III presents estimations of three different specifications, using subject random effects and clustered standard errors for each independent pit. In specification (I), we estimate a logit regression with ratings inflation as the dependent variable. We incorporate the treatment effect with the variable MT, which takes the value of one when the CRA is a monopolist and zero otherwise. We also control for learning, or adjustment throughout the sessions, by incorporating the Period variable in all of our regressions. We find that in the MT the odds of misreporting are 1.72 (= exp(0.54)) times higher than in the CT. This coefficient is significant at the 10% level (p ≤ 0.10), which is not surprising given the high variance of ratings inflation in the MT. Recall from our review of earlier work that Forsythe, Lundholm, and Rietz (1999) find the standard deviation for fraudulent announcements, which occur 47% of the time, to be 19. Therefore, our results are in line with previous work. A low incidence of ratings inflation can also be indicative of a low α, that is, a sophisticated population of buyers.

Table III. CRA decision. All models are estimated using subject random effects and clustered standard errors at the pit level (using bootstrap).
Specification (I) is a logit model and constrains the sample to red signals only, while specifications (II) and (III) are OLS. ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.

                 (I)               (II)          (III)
                 Inflate ratings   Fee           Fee
Constant         −0.901***         37.469***     37.222***
                 (0.340)           (4.642)       (4.692)
MT               0.543*            23.479***     22.773***
                 (0.298)           (5.449)       (5.495)
Inflate          –                 –             11.044**
                                                 (2.474)
Inflate × MT     –                 –             2.150
                                                 (3.867)
Period           0.003             −0.561***     −0.602***
                 (0.014)           (0.214)       (0.223)
R²               –                 0.134         0.143
N                758               1,152         1,152

Result 2. The fee when the CRA inflates ratings is higher than when it does not inflate ratings. However, we also observe that the posted fee when the CRA reports truthfully can be above eρ.

The last two specifications of Table III analyze the factors that may affect the fees posted by the CRAs.
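As a quick numerical check of the odds interpretation in Result 1 (this is only arithmetic on the reported coefficient, not a re-estimation of the model):

```python
# Exponentiating a logit coefficient gives the multiplicative change in odds.
# 0.54 is the (rounded) MT coefficient from Table III, specification (I).
import math

beta_mt = 0.54
odds_ratio = math.exp(beta_mt)
print(round(odds_ratio, 2))  # 1.72: misreporting odds in MT relative to CT
```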
The treatment effect is quite strong (p ≤ 0.01) and suggests that less competition leads to an increase in the posted fee of about 24 points. In specification (III), we check whether the CRAs that inflate ratings set higher fees. The results indicate that inflation is associated with an increase in the posted fee of about 11 points (p ≤ 0.05). We can interpret this as CRAs that intend to inflate posting higher fees before seeing the signal. This increment is similar to the exogenous cost of misreporting (10 points). Note that in both treatments, the posted fee is above the exogenous cost of misreporting. Lastly, we also observe a negative and significant coefficient on the Period variable in specifications (II) and (III), which is consistent with the negative trend presented in Figure 1, panel (a).

Result 3. The (accepted) report fee in the CT is about 20 points lower than in the MT, and reports are also accepted at a higher rate in the CT relative to the MT.

To better understand which factors influence the seller decision, we estimate five different specifications, summarized in Table IV. The first two specifications combine data from both treatments, while specifications (III) through (V) focus on each treatment separately. All specifications include subject random effects and clustered standard errors at the pit level. In specification (I), we treat the rejection of reports as a censoring problem. Therefore, we estimate this specification using a Tobit model, in which the dependent variable is the accepted fee. The results show that a blue report is accepted with a fee of about 10 (= 17.06 − 12.5 × 0.49 − 0.99) points in the CT. The accepted fee increases by about 20 points in the MT. We omit the interaction between the treatment effect and the blue report because it is not significant.

Table IV. Seller decision. All specifications are estimated using subject random effects and clustered standard errors at the pit level (using bootstrap).
All specifications are estimated using a logit model, except (I), which is a Tobit model. Censoring occurs when the report is rejected at the posted fee. ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.

                (I)        (II)       (III)         (IV)             (V)
                Fee        Accept     Accept        Accept           Accept
                [all]      [all]      [monopoly]    [competition]    [competition]
Constant        −0.99      1.49***    0.18          0.51*            −0.18
                (4.53)     (0.28)     (0.58)        (0.27)           (0.49)
MT              21.63***   −1.65***   –             –                –
                (4.01)     (0.31)
Blue            17.06***   –          2.03***       1.56***          2.68***
                (2.85)                (0.74)        (0.24)           (0.59)
Same            –          –          –             –                0.97**
                                                                     (0.39)
Blue × Same     –          –          –             –                −1.49***
                                                                     (0.46)
Fee             –          –          −0.03**       −0.06***         −0.06***
                                      (0.01)        (0.01)           (0.01)
Period          −0.49***   −0.01      −0.04**       −0.03***         −0.03***
                (0.18)     (0.01)     (0.02)        (0.01)           (0.01)
Wald χ²         74.75      28.57      23.30         53.10            59.37
Prob. > χ²      0.00       0.00       0.00          0.00             0.00
N               1,152      1,152      384           768              768

Table V. Buyer decision. The linear model Y = BX, where X includes an intercept and dummies, is estimated using subject random effects and clustered standard errors at the pit level (using bootstrap). ***p ≤ 0.01, **p ≤ 0.05, *p ≤ 0.1.
                (I)          (II)         (III)        (IV)
                Bid          Bid          Surplus      Surplus
Constant        21.39***     9.88***      −20.10***    15.19***
                (3.63)       (3.56)       (5.58)       (5.51)
Blue            49.94***     46.24***     67.22***     –
                (3.13)       (3.01)       (4.52)
Red             −1.62        −1.62        –            –
                (1.59)       (1.62)
MT              18.52***     18.61***     –            −5.75
                (3.46)       (3.50)                    (4.79)
Blue × MT       −9.09        −9.20        3.88         –
                (6.57)       (6.57)       (6.62)
Red × MT        −19.21***    −19.22***    −14.66***    –
                (3.97)       (3.90)       (4.81)
Winner          –            22.70***     –            –
                             (1.40)
Period          0.25*        0.25*        −0.29**      −0.43*
                (0.14)       (0.14)       (0.14)       (0.23)
R²              0.44         0.61         0.61         0.01
N               1,536        1,536        768          768

In specification (II), we analyze the acceptance decision using a logit model. We look at the probability of accepting at least one report and find that in the CT, the odds of accepting a report are 5.21 (= 1/exp(−1.65)) times higher than in the MT. This result is in line with the higher fees observed in the MT: high fees cause sellers to reject regardless of report type. We also estimate the impact of report type and fee on the acceptance rate by treatment, with specifications (III) through (V) presenting results for the MT and the CT, respectively. Our findings suggest that blue reports are accepted at a higher rate relative to red reports in both treatments (2.03 and 1.56, in log odds, for the MT and the CT, respectively). The lower estimated coefficient in the CT captures the fact that only one of the two issued reports can be accepted.
That is, a seller in the competitive environment may observe two blue reports but can only accept one, which lowers the likelihood of acceptance computed over all blue reports. In the case of fees, a 10-point decrease (slightly smaller than the fee standard deviation for both treatments in Table II) increases the log odds of accepting a report by 0.3 (0.6) in the MT (CT). This indicates that fees are relatively more important in the CT. To determine which reports, if any, are favored by sellers, we introduce a new variable, Same. This is a dummy variable that takes the value of one when the two reports are the same and zero when the reports are conflicting. In specification (V), we see that blue reports are accepted at a higher rate than red reports (2.68 higher, in log odds, for a blue report). Note that the acceptance rate of a red report when the two reports agree is 0.97 log odds higher than the acceptance rate of a red report when the two reports are conflicting. This could be interpreted as sellers showing a preference for inflated ratings. The negative coefficient on the interaction between Blue and Same stems from the fact that only one report can be published in the market. To better understand the impact of fees on the acceptance rate, we also performed a mediation analysis (Baron and Kenny, 1986) using the approach of Imai, Keele, and Tingley (2010). The dependent variable, in our case, is the acceptance of at least one report, and the fee is the mediating variable between MT (the treatment) and the acceptance rate. We find that the proportion of the total effect that is mediated is 0.39, and that the indirect (or mediated) effect is statistically different from zero (p ≤ 0.01).

Result 4. In the CT, bids from buyers when no report is available are similar to bids when a red report is posted, indicating that buyer sophistication may have evolved over the course of the treatment. We do not find evidence of this in the MT.
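The mediation analysis just described can be illustrated with the classic product-of-coefficients logic. The sketch below uses simulated data and a linear-probability simplification of the acceptance equation; the coefficients and the resulting proportion mediated are illustrative, not the paper's estimates:

```python
# Baron-Kenny style mediation sketch: MT -> fee (mediator) -> acceptance.
# All data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
mt = rng.integers(0, 2, n).astype(float)            # treatment dummy (1 = monopoly)
fee = 30 + 24 * mt + rng.normal(0, 10, n)           # mediator: posted fee
accept = 0.9 - 0.006 * fee - 0.10 * mt + rng.normal(0, 0.1, n)  # outcome

def ols(y, *cols):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack((np.ones(n),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(fee, mt)[1]            # treatment -> mediator
b = ols(accept, mt, fee)[2]    # mediator -> outcome, controlling for treatment
c = ols(accept, mt)[1]         # total treatment effect
proportion_mediated = a * b / c
print(round(proportion_mediated, 2))
```

With these simulated coefficients, roughly half of the treatment effect runs through the fee; the paper's estimate of the mediated share is 0.39.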
Table V presents an analysis of the buyer decision via four different specifications. In the first two specifications, we estimate the impact of report type and market format on the buyer’s bid. The estimation results of specification (I) suggest that when buyers do not observe a report in the CT, they bid about 21 points for the widget. Bids in the MT, by contrast, are 18 points higher when no report is published. This is consistent with fewer reports being published due to high fees, which increases uncertainty regarding the true widget value. It also indicates that the α of buyers in the CT may be evolving as they are exposed to more reliable information. Note that the interaction of the treatment dummy with the red report has a negative coefficient that cancels out the added value of no report in the MT. That is, in the MT, buyers bid approximately 19 points lower when a red report is issued than when no report is issued. Therefore, in the MT the buyers do not view the lack of a report as similar to a red report. In the case of a blue report in the CT, the bid increases by approximately 50 points, and in the case of a red report, the bid decreases by 2 points, though this decrease is not significant. This means that buyers value no report the same as a red report, or that the buyers in the CT are relatively sophisticated. To see how the winning bids differ, we add a dummy variable to our next specification. On average, the winner bids 23 points higher. We omit all interaction terms involving the winner variable because they are not significant.

Result 5. When the state is red, buyers are better off in the CT. When the state is blue, buyers are better off in the MT.

In the final two specifications, we attempt to determine which market format is more efficient. In our environment, the loss of efficiency stems from the penalty incurred by the CRAs when they misreport.
The total numbers of periods in which penalties were levied in the MT and the CT are 22 and 24, respectively. We believe the low number of penalties in the MT is due to the rejection of reports because of high fees. Recall that if the seller rejects the report, whether or not the CRA inflated ratings becomes irrelevant: in the MT, the buyers often do not observe any reports, and therefore no penalty is imposed on the CRAs even if they misreport. An alternative variable of interest that can measure efficiency is buyer welfare. Columns (III) and (IV) in Table V summarize the surplus of buyers who purchase the widget. Note that when analyzing buyer surplus, blue refers to the state (widget type) rather than the report, as in columns (I) and (II). We observe that in the red state, a buyer in the MT receives about 15 points less than a buyer in the CT. When the state is blue, a buyer in the MT receives 4 points more than a buyer in the CT; however, this number is not statistically different from zero. This indicates that competition among buyers may lower surplus, particularly when the information regarding widget type is reliable. Intuitively, if buyers believe that a report is accurate, then their bids should approach the underlying widget value, which decreases their possible surplus. Overall, buyers in the blue state receive about 43 points (= 67 − 20 − 0.29 × 12.5). If we omit the dummies for asset type (specification (IV)) and only analyze the treatment effect, then we find that the surplus in the MT is about 6 points lower, though this difference is not statistically significant.

5. Discussion

The experimental design presented in this article is motivated by the theoretical work of Bolton, Freixas, and Shapiro (2012), who rely on an exogenous cost to address reputational concerns that arise when issuing fraudulent reports. Our results indicate that the lower fees observed in the competitive treatment reduce the incentive to inflate ratings.
In other words, the combination of the exogenously imposed reputation cost and lower fees makes ratings inflation costly for the CRAs. Therefore, in our economic experiment, the effect of competition on the morals of markets is not deleterious, and may even be beneficial. Whether competition in the ratings market actually leads to a better or more optimal outcome may depend on a number of factors. For example, when the ratings market becomes more competitive, there is a possibility of ratings shopping. That is, if the seller is not pleased with the report offered by a particular CRA, he can continue to search for a better report. This is inefficient not only due to the loss of information, since reports that are not purchased remain unpublished, but also because it creates an incentive for the CRAs to inflate ratings in order to sell reports. On the other hand, increased competition among CRAs can drive down report fees, making ratings inflation costly. In conjunction with sophisticated buyers (recall that the ratings inflation equilibrium requires a large population of naive buyers), this provides the correct incentives for truth telling, as CRAs become more concerned about reputation. There are a number of ways to extend this work and improve our understanding of CRA market dynamics. For example, the level of α is clearly important in determining actual buyer behavior, yet it is a difficult parameter to evaluate. Our experiment implies that sophistication is not necessarily static and may be subject to market structure. Another possibility is to introduce endogenous costs (e.g., see Mathis, McAndrews, and Rochet, 2009) and then include communication between sellers and CRAs. Such an experiment could provide more information on whether our initial approach is correct and whether the competition effect holds when the experimental design is modified.
Furthermore, there may be a threshold number of CRAs (as suggested by Hirth, 2014) that minimizes the conflict of interest. We only provide evidence in support of competition, but do not quantify the optimal level of competition. Moreover, whether the existence of a conflict of interest between the sellers of debt and the issuers of ratings is actually important may depend on firm structure. For example, it is possible that the conflict of interest was always there and was exacerbated when the CRAs became publicly traded corporations, which in effect diluted the value of reputation. Therefore, a number of factors may be at play here, and it is important that we study them before drawing conclusions. Lastly, from what we know of the existing literature, there is no clear guide to the behavior of market participants during business cycles, which can be interpreted as good or bad asset states in our game. For example, there is some evidence that in a downturn, people are more likely to default when volatility is high (e.g., see Rabanal, 2014), as well as empirical evidence of “boom bias” in ratings due to conflict of interest (Dilly and Mählmann, 2016). However, collectively, there is still much to be studied in this area. We believe that additional research can provide deeper insights regarding the behavior and motivation of agents during business cycles, as well as the role of rating agencies in the resulting outcomes.

Footnotes

* For very helpful comments and suggestions, we are indebted to an anonymous referee, the editor Burton Hollifield, Aleksandr Alekseev, Yin-Wong Cheung, Dan Friedman, David Gill, John Horowitz, Luba Petersen, and participants of our presentations at ESA 2015 Europe, University of Maine, ESA 2015 North America Dallas, University of Basque Country, and the 2016 Society of Experimental Finance Conference. Daniel Mowan and Andie Cole provided excellent assistance in running the experimental sessions.
This research was supported by funds granted by the Miller College of Business, Ball State University.

1 White (2010) provides an excellent overview of how the financial regulatory structure affected the behavior of rating agencies by increasing their market power via the legislation that established NRSROs, and also discusses the change from the investor-pay to the issuer-pay model currently in use.

2 http://www.europarl.europa.eu/news/en/news-room/content/20111219IPR34550/html/Credit-rating-agencies-MEPs-want-less-reliance-on-big-three.

3 One possible explanation for the increase in ratings inflation during economic downturns is a change in the incentive structure on the seller side of the market, as buyer behavior (trust) remains relatively stable.

4 Our results, which show high variance for misreporting, are in line with cheap talk experiments. We discuss this in greater detail below.

5 Moore et al. (2006) focus on the morality behind ratings inflation and propose a number of strategies to reduce the conflict of interest, such as hiring CRAs long-term regardless of their reports.

6 The authors note that the investment behavior of receivers cannot be explained by risk preferences or as a best response to the subject’s own behavior in the sender’s role.

7 This line of literature uses real-effort tasks to study misreporting. Subjects are asked to perform a task and are paid according to a predetermined scheme. There is no monitoring and there are no material costs for misreporting. Cadsby, Song, and Tapon (2010) show that target-based compensation decreases truth telling compared to piece-rate and tournament-based schemes. When a task is randomized in nature, Fischbacher and Föllmi-Heusi (2013) find that most subjects lie partially. That is, they do not report the highest outcome but do report a higher-than-actual outcome. Conrads et al.
(2014) introduce a competitive aspect by increasing the payoff difference between winners (those who report the highest outcome) and losers, and find that this fosters misreporting.

8 We should also consider whether truth telling is the desired behavior. According to Cassar, Friedman, and Schneider (2009), cheating facilitates trade by increasing the overall volume, though it also decreases cross-market trade significantly. In their environment, contracts can only be enforced domestically, where cheating is not possible, but not internationally, where cheating is possible. This causes high-surplus traders to leave international markets for domestic ones.

9 Mayhew, Schatzberg, and Sevcik (2001) study the impact of signal certainty on the objectivity of auditors and find that there are fewer violations when the signal is more precise.

10 These parameters lead to the following widget values: Wb = 110, W0 = 70, and Wr = 30.

11 Sánchez-Pagés and Vorsatz (2007) also find that punishment only minimally increases truth telling. Conversely, truth telling can also be viewed as deceptive if the sender chooses the true message with the expectation that the receiver will behave in a way contrary to the message (Sutter, 2009).

12 The term pit refers to markets formed by buyers, sellers, and CRAs. In essence, each pit is a silo.

13 Due to the experimental design, the buyer is always at a disadvantage. Even when acting rationally, buyers will lose surplus to other players. We did not want the payout to reflect only the show-up fee, which would negatively affect their impressions of future experiments. We anticipated this and wanted to state that the actual payout in the end may differ, but only in the positive direction.

14 We do want to acknowledge the possibility that some subjects may have misinterpreted the instructions.
However, since we always state and write the conversion rate on the board at the beginning of each session, we think it is unlikely to have influenced subject behavior.

15 Recall that our experimental design omits practice rounds; therefore, in the early periods of the game the subjects may be learning about the institutions.

APPENDIX A

Proofs: We closely follow the work of BFS12, who focus on pure strategies, and then add a pure and mixed strategy analysis for the competitive environment of our game. We begin with the monopolist CRA. Note that there is no reason for a seller to purchase a red report because there is no state in which a red report would increase the valuation by sophisticated buyers above their ex ante valuation W0. On the contrary, it is possible that a red report could actually decrease the valuation by trusting buyers below W0. Therefore, conditional on receiving a red signal (θ = r), the (net) payoff to the CRA for reporting red (m = r) rather than blue is

π(m = r | θ = r) − π(m = b | θ = r) = −φ + eρ.

This yields two possible information regimes (Lemma 1 of BFS12): if φ ≥ eρ, the CRA will always report blue (m = b); and if 0 < φ < eρ, the CRA will always report truthfully.

Proposition 1 (BFS12). If the CRA always reports blue (m = b), then the seller should be willing to purchase the report as long as the fee does not exceed αWb − W0, which is the incremental profit based on buyer values of the widget types, and where α refers to the fraction of naive buyers in a given population. Always reporting blue is feasible when the fee is greater than the (expected) punishment cost:

αWb − W0 > eρ.

If the CRA reports truthfully, a blue report (m = b) will induce a high valuation by all buyers, while a red report (m = r) will result in a low valuation by sophisticated buyers and the ex ante valuation by naive buyers. The maximum fee is subject to the following constraint:

φ ≤ Wb − max[αW0, Wr].

For the CRA to report truthfully, the fee should not be greater than eρ.
Therefore, φ = min[Wb − max[αW0, Wr], eρ]. The predicted equilibrium depends on the parameter values. We chose Vb = 120, Vr = 20, e = 0.90, and ρ = 10, and found that the minimum level of α (denoted α¯) that supports the equilibrium in which the CRA always reports blue is 0.72. Therefore, if α ≥ α¯, the CRA will always report blue, and if α < α¯, the CRA will report truthfully.

Proposition 2. Suppose there are two CRAs (labeled k, −k) in the market and that only one report is published. If α < α¯, then the CRAs must be truth telling and the fees must satisfy φk,−k < eρ. Just as in a Bertrand (price) model, each CRA has an incentive to undercut the competitor's fee. If φk < φ−k, and reports are viewed as homogeneous, then the seller will accept the report from CRA k, so CRA −k also has an incentive to decrease its fee. The equilibrium prediction is that both CRAs will report truthfully and that φk,−k will approach zero.

Inflationary Rating Equilibrium

An inflationary rating (always report blue) equilibrium occurs when α ≥ α¯. Below we present the cases for three pure strategies and then a mixed strategy approach.

Pure Strategy 1: φk,−k < eρ. A deviator can always obtain a higher profit by increasing its fee and choosing to always report blue, because the seller will purchase the report as long as φ ≤ αWb − W0. Therefore, this fee strategy cannot be an equilibrium.

Pure Strategy 2: φk,−k > eρ. A CRA will always have an incentive to undercut fees. If φk < φ−k, then the seller will accept the report from CRA k, and CRA −k will have an incentive to decrease its fee. Therefore, this fee strategy cannot be an equilibrium.

Pure Strategy 3: φk,−k = eρ. A CRA will always have an incentive to slightly undercut the fee by some value ϵ and obtain greater profits. When that occurs, the CRA will report truthfully and obtain positive profits only when the state is blue.
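The undercutting logic in Pure Strategy 3 can be illustrated with a short numerical check. The modeling details below are our own simplifications, not BFS12's: we assume the seller breaks ties between identical blue offers with a fair coin, never buys a red report, and that a hired always-blue CRA loses its endowment exactly when its signal was red and the state is red.

```python
# Candidate profile: both CRAs charge phi = e*rho and always report blue.
# Deviation: one CRA charges e*rho - eps and reports truthfully.
# Assumptions (ours, for illustration): 50-50 tie-breaking between identical
# blue offers; the seller never buys a red report.
e, rho, eps = 0.90, 10, 0.1
phi = e * rho                                  # 9.0

p_red_signal_and_red_state = e * 0.5           # 0.45: lie is caught here
p_blue_signal = 0.5                            # (0.9)(0.5) + (0.1)(0.5)

# Candidate profile: hired with prob 0.5; when hired, earn the fee but lose
# the endowment with prob 0.45.
hired_value = phi + rho * (1 - p_red_signal_and_red_state)   # 14.5
candidate = 0.5 * hired_value + 0.5 * rho                    # 12.25

# Deviator: hired whenever its signal is blue (its cheaper blue report wins);
# with a red signal it reports red, is not hired, and keeps the endowment.
deviator = p_blue_signal * (phi - eps) + rho                 # 14.45

print(candidate < deviator)   # True: undercutting is profitable
```

Under these assumptions the truthful undercutter earns about 14.45 against the candidate's 12.25, so charging exactly eρ cannot be sustained, consistent with the argument above.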
Thus, Pure Strategy 3 also does not support an inflationary equilibrium.

Mixed Strategy. Since the CRA must set its fee before receiving a signal, when it sets a low fee it will report truthfully, and when it sets a high fee it may lie:

φk,−k = eρ with probability 0.5, and φk,−k = 0 with probability 0.5.

The CRA should lie only if it receives a red signal, which occurs with probability

P(θ = r) = P(θ = r | ω = b)P(ω = b) + P(θ = r | ω = r)P(ω = r) = (0.1)(0.5) + (0.9)(0.5) = 0.5.

Recall that under a Nash equilibrium (NE), there is no profitable deviation.

Case 1. If CRA k sets φ = 0 with probability greater than 0.5, and −k with probability equal to 0.5, then k will not sell its report when the signal is red and the state is blue. Therefore, it is profitable for CRA k to decrease the probability with which it sets a low fee (lie more), and this cannot be a NE.

Case 2. If CRA k sets φ = 0 with probability less than 0.5, and −k with probability equal to 0.5, then k will not sell its report when the signal is blue, since neither CRA misreports when the signal is blue and in this case CRA −k will have a lower φ. Therefore, it is profitable for CRA k to increase the probability with which it sets a low fee, and this cannot be a NE.

Thus, the only mixed strategy NE is when the CRAs choose φ ∈ {0, eρ}, each with probability 0.5.

APPENDIX B: Instructions (Competition Treatment)

Welcome! You are participating in an economics experiment at CEED Lab. In this experiment, you will participate in a market game. If you read these instructions carefully and make appropriate decisions, you may earn a considerable amount of money that will be paid to you in cash immediately at the end of the experiment. Each participant is paid $5 for attending. Throughout this experiment, you will also earn points based on the decisions you make. The rate at which we exchange your points into cash will be explained to you shortly. We reserve the right to improve this rate in your favor if average payoffs are lower than expected.
Please turn off all cell phones and other communication devices. During the experiment you are not allowed to communicate with other participants. If you have any questions, the experimenter will be glad to answer them privately. If you do not comply with these instructions, you will be excluded from the experiment and deprived of all payments aside from the minimum payment of $5 for attending.

The experiment you will participate in involves interaction in a market setting. In this market, there are buyers, sellers, and intermediaries. Once you begin playing, you will be assigned a specific role that you will keep throughout the duration of the experiment. In the market, sellers will be selling a good called "widgets" to buyers. These widgets have an uncertain color: blue widgets are highly valuable to buyers, while red widgets are worth considerably less. Intermediaries are tasked with evaluating the color of the widgets. You will play a series of rounds, each consisting of decisions made by intermediaries, sellers, and buyers. In the instructions below, we explain how your decisions as a buyer/seller/intermediary will affect your points and total earnings.

The Experiment

The experiment will feature a number of rounds. In each round, you will be assigned to a market that consists of 1 seller, 2 intermediaries, and 2 buyers. While the markets you interact in will change throughout the course of the experiment, your role will remain the same. Each round, a seller will have a single widget to sell. Neither the seller nor the buyers know its color for certain before making a transaction. They do, however, know that the widget is blue with probability 50% and red with probability 50%. The intermediaries each have access to research teams that observe the correct color with 90% probability.
The seller observes both intermediaries' requested fees and their reported colors for the widget, and can hire at most one intermediary to issue its report to buyers about the color of the widget. Each round consists of five stages:

Stage 1 (10 s): Each intermediary privately decides how much to charge for his or her report. This amount can be any number between 0 and 120.

Stage 2 (10 s): Each intermediary then receives costless information from its research team on the color of the seller's widget. This information is accurate 90% of the time. Each intermediary receives an independent draw of information from its research team (thus it is possible that two intermediaries assessing the same good receive different information). Each intermediary must then decide what color to report to the seller (blue or red).

Stage 3 (10 s): The seller receives the reports and fees of both intermediaries. Notice that the fee asked by an intermediary is set before it receives any information from its research team. The seller must then decide either to (1) accept only one of the two fees and publish the report of the selected intermediary for buyers to view, or (2) reject both reports and fees. If the seller rejects both reports, buyers will be notified "No report is available".

Stage 4 (10 s): The buyers observe either (1) a blue report, (2) a red report, or (3) "No report is available". The buyers will also be informed about the fee associated with the report (if any). They must then individually decide how much to bid on the widget. The buyer who submits the highest bid pays her bid and receives the widget. The winning buyer receives 120 points if the widget is blue and 20 points if it is red. The other buyer pays nothing and receives no points. If both buyers submit the same bid, the computer randomly decides with 50–50 probability which buyer is awarded the widget.

Stage 5 (10 s): All players observe the outcome of the round.
They will learn what the winning bid was, the actual color of the widget, and their own earnings.

In each round, all intermediaries are endowed with 10 points. The endowment can be taken away from the hired intermediary if it reports that the widget is blue when its research team indicated it was red AND the widget is revealed to be red.

Earnings

Your earnings will be computed according to the formula for your role:

Sellers: Earnings of a Seller = Winning Bid − Fee Paid to Hired Intermediary (if one was hired)

Buyers: Earnings of the Winning Buyer = Value of the Widget − Winning Bid, where the Value of the Widget is 120 if the widget is blue and 20 if it is red. Earnings of the Other Buyer = 0

Intermediaries: Earnings of the Hired Intermediary = Report Fee + Endowment, where the Endowment is 0 if the hired intermediary reported blue when its research team indicated the widget was likely red AND the widget is actually red, and 10 otherwise. Earnings of the Other Intermediary = Endowment

There are twenty participants in this session, and there will be four markets at any point. Every round you will be rematched with four strangers from your group only. While you will not know who you are playing with, you will end up interacting with players more than once. No two markets you participate in will have exactly the same people. The points you earn from six randomly selected rounds will be added up, exchanged into dollars, and paid to you, along with your show-up fee, in cash at the end of the experiment. Your exchange rate is written on the board.

Can I earn negative points? Yes, all players can potentially earn negative points based on their decisions and the decisions of others. If, at the end of the experiment, your total points are negative, we will deduct the required amount from your show-up fee, up to $5.

APPENDIX C: User Interface (Monopoly Treatment)

Figure C1: Intermediary's interface.
Figure C2: (a) Seller's and (b) buyer's interfaces.

Figure C3: Results interface.

References

Ashcraft A., Goldsmith-Pinkham P., Hull P., Vickery J. (2011) Credit ratings and security prices in the subprime MBS market, American Economic Review (Papers and Proceedings) 101, 115–119.
Baghai R. P., Servaes H., Tamayo A. (2014) Have rating agencies become more conservative? Implications for capital structure and debt pricing, Journal of Finance 69, 1961–2005.
Bar-Isaac H., Shapiro J. (2013) Ratings quality over the business cycle, Journal of Financial Economics 108, 62–78.
Baron R. M., Kenny D. A. (1986) The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations, Journal of Personality and Social Psychology 51, 1173–1182.
Becker B., Milbourn T. (2011) How did increased competition affect credit ratings?, Journal of Financial Economics 101, 493–514.
Bolton P., Freixas X., Shapiro J. (2012) The credit ratings game, Journal of Finance 67, 85–111.
Cadsby C. B., Song F., Tapon F. (2010) Are you paying your employees to cheat? An experimental investigation, B.E. Journal of Economic Analysis & Policy 10, 30.
Cantor R., Packer F. (1995) The credit rating industry, Journal of Fixed Income 5(3), 10–34.
Cassar A., Friedman D., Schneider P. (2009) Cheating in markets: a laboratory experiment, Journal of Economic Behavior and Organization 72, 240–259.
Cassar A., Rigdon M. (2011) Trust and trustworthiness in networked exchange, Games and Economic Behavior 71, 282–303.
Cole H. L., Cooley T. F. (2014) Rating agencies, NBER Working Paper No. 19972, National Bureau of Economic Research.
Conrads J., Irlenbusch B., Rilke R. M., Schielke A., Walkowitz G. (2014) Honesty in tournaments, Economics Letters 123, 90–93.
Davis D. D., Holt C. A. (1993) Experimental Economics, Princeton University Press.
Dilly M., Mählmann T. (2016) Is there a "boom bias" in agency ratings?, Review of Finance 20, 979–1011.
Duflo E., Greenstone M., Pande R., Ryan N. (2013) Truth-telling by third-party auditors and the response of polluting firms: experimental evidence from India, Quarterly Journal of Economics 128, 1499–1545.
Faravelli M., Friesen L., Gangadharan L. (2015) Selection, tournaments, and dishonesty, Journal of Economic Behavior & Organization 110, 160–175.
Faure-Grimaud A., Peyrache E., Quesada L. (2009) The ownership of ratings, The RAND Journal of Economics 40, 234–257.
Fischbacher U. (2007) z-Tree: Zurich toolbox for ready-made economic experiments, Experimental Economics 10, 171–178.
Fischbacher U., Föllmi-Heusi F. (2013) Lies in disguise: an experimental study on cheating, Journal of the European Economic Association 11, 525–547.
Forsythe R., Lundholm R., Rietz T. (1999) Cheap talk, fraud, and adverse selection in financial markets: some experimental evidence, Review of Financial Studies 12, 481–518.
Gill D., Prowse V., Vlassopoulos M. (2013) Cheating in the workplace: an experimental study of the impact of bonuses and productivity, Journal of Economic Behavior & Organization 96, 120–134.
Greiner B. (2004) An online recruitment system for economic experiments, in: Kremer K., Macho V. (eds.), Forschung und wissenschaftliches Rechnen, GWDG Bericht 63, Ges. für Wiss. Datenverarbeitung, Göttingen, pp. 79–93.
Hirth S. (2014) Credit rating dynamics and competition, Journal of Banking and Finance 49, 100–112.
Imai K., Keele L., Tingley D. (2010) A general approach to causal mediation analysis, Psychological Methods 15, 309–334.
Kisgen D. J., Strahan P. E. (2010) Do regulations based on credit ratings affect a firm's cost of capital?, The Review of Financial Studies 23, 4324–4347.
Kluger B. D., Slezak S. L. (2015) Fraudulent misreporting and the business cycle: an experimental investigation, mimeo.
Manso G. (2013) Feedback effects of credit ratings, Journal of Financial Economics 109, 535–548.
Mathis J., McAndrews J., Rochet J. C. (2009) Rating the raters: are reputation concerns powerful enough to discipline rating agencies?, Journal of Monetary Economics 56, 657–674.
Mayhew B. W., Pike J. E. (2004) Does investor selection of auditors enhance auditor independence?, The Accounting Review 79, 797–822.
Mayhew B. W., Schatzberg J. W., Sevcik G. R. (2001) The effect of accounting uncertainty and auditor reputation on auditor objectivity, Auditing: A Journal of Practice & Theory 20, 49–70.
Moore D. A., Tetlock P. E., Tanlu L., Bazerman M. H. (2006) Conflicts of interest and the case of auditor independence: moral seduction and strategic issue cycling, Academy of Management Review 31, 10–29.
Opp C. C., Opp M. M., Harris M. (2013) Rating agencies in the face of regulation, Journal of Financial Economics 108, 46–61.
Rabanal J. P. (2014) Strategic default with social interactions: a laboratory experiment, in: Collins S. M., Isaac R. M., Norton D. A. (eds.), Experiments in Financial Economics (Research in Experimental Economics, Volume 16), Emerald Group Publishing Limited, pp. 31–52.
Sánchez-Pagés S., Vorsatz M. (2007) An experimental study of truth-telling in a sender–receiver game, Games and Economic Behavior 61, 86–112.
Sheremeta R., Shields T. (2013) Do liars believe? Beliefs and other-regarding preferences in sender–receiver games, Journal of Economic Behavior and Organization 94, 268–277.
Sutter M. (2009) Deception through telling the truth?! Experimental evidence from individuals and teams, The Economic Journal 119, 47–60.
White L. J. (2010) The credit rating agencies, The Journal of Economic Perspectives 24, 211–226.

© The Authors 2017. Published by Oxford University Press on behalf of the European Finance Association. All rights reserved.

Journal: Review of Finance (Oxford University Press)
Published: March 16, 2017
