Ranking and Rating: Neglected Biases in Factor Analysis of Postmaterialist Values

Abstract

The pros and cons of ranking and rating scales have been much discussed. It is widely accepted that factor analysis based on ranking scales suffers from a negativity bias, while factor analysis based on rating scales suffers in turn from a positivity bias. Recent decades have witnessed increasingly sophisticated techniques for neutralizing these biases in factor analysis. Using data from an experimental survey of postmaterialist values conducted in Japan, we applied these techniques and compared the dimensions extracted from ranking and rating scales. The results suggest that the postmaterialist and materialist dimensions are in fact positively correlated, in contrast to the common view that they are opposed to each other. Implications for postmaterialist theory are discussed.

The pros and cons of ranking and rating scales have been much discussed in the field of survey methodology. Some suggest that rating is the preferable answer format for opinion data, as rating scales are easier to administer and allow respondents to evaluate each question independently on an interval scale (Elig and Frieze, 1979; Groves and Kahn, 1979; Munson and McIntyre, 1979). Others suggest that ranking is the more effective format, as rating suffers from the problem of nondifferentiating respondents, and the reliability and validity of ranking data are greater than those of rating data (Krosnick, 1999; Rankin and Grube, 1980; Reynolds and Jolly, 1980). These debates are mainly concerned with the practical and administrative aspects of surveys: whether ranking or rating should be adopted as a question format. In contrast, according to Krosnick and Alwin (1988, p. 527), "relatively little work has compared the correlational structures of ranking and rating data." We share this view and here investigate the analytic and correlational aspects of ranking and rating data: how the formats shape the latent structure of values and concepts. To this end, we review below how factor analyses of ranking data can suffer from negativity bias, while those of rating data can, on the contrary, suffer from positivity bias. We then discuss the latest techniques for correcting such biases and apply them to data from an experimental survey.

Correcting Negativity and Positivity Bias

Ranked items and rated items are known to produce biased results in factor analysis. Ranked items are not suitable for factor analysis because of the ipsative nature entailed in their measurement (Dunlap and Cornwell, 1994, p. 115). Warnings abound in the early psychometric literature against the use of factor analysis on a set of variables with ipsative properties (Guilford, 1954; Hicks, 1970; Horst, 1965), and discussions continued in later decades (Ten Berge, 1999). Ipsativity refers to measures, such as rank-ordered or rescaled items, whose sum is constant across respondents. In such a setup, the ranking of one item is not independent of the other items, as prior rankings constrain the relative ranks of the remaining items. For example, in a battery of four items, the ranking of the first item constrains the ranks of the remaining three items, which automatically generates negative correlations between them of about −0.33, i.e., −1/(k − 1), where k is the number of measures.
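The size of this induced negativity is easy to verify by simulation. The following sketch is our own illustration, not part of the original analysis: it ranks independently drawn evaluations of a four-item battery and shows that the average correlation between the resulting ranks falls near −1/(k − 1) = −0.33.

```python
# Minimal sketch (ours): ipsatizing independent evaluations induces negative
# correlations of roughly -1/(k - 1) among the k ranked items.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 4                                     # respondents, items per battery

ratings = rng.normal(size=(n, k))                    # independent "preipsative" evaluations
ranks = ratings.argsort(axis=1).argsort(axis=1) + 1  # within-respondent ranks 1..k

corr = np.corrcoef(ranks, rowvar=False)
print(corr[~np.eye(k, dtype=bool)].mean())           # approximately -0.33
```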
This negativity in the correlation matrices is incongruent with the methodological premises of factor analysis. A warning by Dunlap and Cornwell (1994, p. 122) merits our attention: "principal components factor analysis of ipsative data will produce bipolar factors that result, not from the true underlying relationships between the variables, but from negative relationships induced solely by the ipsative nature of the measures."

According to its critics, the rating method is preferable to the ranking method for measuring value dimensions (Bean and Papadakis, 1994; Van Deth, 1983). The rating method in survey questionnaires allows respondents to evaluate each item independently of the others, and the absolute values obtained from ratings are immune to the negativity bias associated with relative rank orders. However, proponents of the ranking method argue that the rating method potentially suffers from response set bias, whereby respondents answer in a uniform manner regardless of the questions asked (Zeller and Carmines, 1980, p. 94). If such response behaviors exist, "spuriously positive correlations" (Alwin and Krosnick, 1985, p. 537) among rated items can prevail, and factor analysis would thus suffer from positivity bias.

In a nutshell, researchers conducting factor analysis face a dilemma between negativity bias in the ranking format and positivity bias in the rating format. We discuss below the ways to correct these biases. After correcting the biases that accrue from the measurement format, whether ranking or rating, we can properly assess the nature of values and concepts free of such noise. To this end, we propose a set of schemes to neutralize biases in the factor analysis of both rank-ordered data and rating data by applying techniques available in the field. We first review techniques for correcting the negativity bias of ranking scales and then turn to the positivity bias of rating scales, in the context of factor analysis or covariance structure analysis.

Ranking

As ipsative data produce a negatively biased correlation matrix, routine factor analysis is known to be flawed, as discussed above. However, the common factor model for ipsative measures proposed by Jackson and Alwin (1980) enables us to correct such negativity biases to a significant extent. Assuming that true evaluations or ratings of the items exist in preipsative data, Jackson and Alwin devised a scheme to calculate the coefficient matrices of a factor model that would be free from negativity bias. Jackson and Alwin (ibid., p. 222) write the factor pattern coefficient matrix (factor loading matrix) for x (i.e., ipsative data) as

$$\Lambda_x = \Lambda_y - \mathbf{1}\bar{\lambda}' \qquad (1)$$

where $\Lambda_y$ is the factor pattern coefficient matrix (factor loading matrix) for y (i.e., preipsative data), $\mathbf{1}$ is a (k × 1) unit vector, and $\bar{\lambda}$ contains the average values of the coefficients in the columns of $\Lambda_y$. The scheme imposes a set of constraints generating negative correlations among the error terms inherent in ipsative data (Alwin and Jackson, 1982; see Sacchi, 1998 for an application with a Monte Carlo simulation).

However, Jackson and Alwin's common factor model suffers from a limitation. As Jackson and Alwin (1980, p. 225) acknowledge, the interpretation of the ipsative factor loading matrix $\Lambda_x$ remains indefinite, as the values of $\bar{\lambda}$ are unknown in the estimation (see Equation 1). As a consequence, we are left with no definitive interpretation of the original factor loadings on the preipsative data $\Lambda_y$. Chan and Bentler (1993) make an important contribution to overcoming this limitation. To make it possible to estimate the factor loadings of the preipsative data $\Lambda_y$, Chan and Bentler (ibid., p. 229) extend Equation 1 postulated by Jackson and Alwin as follows:

$$
\Lambda_x = \Lambda_y - \mathbf{1}\bar{\lambda}' =
\begin{bmatrix}
\alpha_1 & 0 \\ \alpha_2 & 0 \\ \alpha_3 & 0 \\ 0 & \beta_1 \\ 0 & \beta_2 \\ 0 & \beta_3
\end{bmatrix}
-
\begin{bmatrix}
\bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta}
\end{bmatrix}
=
\begin{bmatrix}
\alpha_1 - \bar{\alpha} & -\bar{\beta} \\
\alpha_2 - \bar{\alpha} & -\bar{\beta} \\
\alpha_3 - \bar{\alpha} & -\bar{\beta} \\
-\bar{\alpha} & \beta_1 - \bar{\beta} \\
-\bar{\alpha} & \beta_2 - \bar{\beta} \\
-\bar{\alpha} & \beta_3 - \bar{\beta}
\end{bmatrix}, \qquad (2)
$$

where $\bar{\alpha} = (\alpha_1 + \alpha_2 + \alpha_3 + 0 + 0 + 0)/6$ and $\bar{\beta} = (0 + 0 + 0 + \beta_1 + \beta_2 + \beta_3)/6$ are the average factor loadings for the first and second factors. Instead of the one-factor exploratory factor-analytic model assumed in the Jackson and Alwin method, Chan and Bentler reformulate the model with two or more factors through a confirmatory factor analytic (CFA) model (ibid., p. 220). This is the major innovation of Chan and Bentler (1993), as the estimation of $\mathbf{1}\bar{\lambda}'$ is now made possible by fixing parameters for both factors while imposing k constraints, where k is the difference in the number of free parameters between the preipsative model and the ipsative model (ibid., pp. 229–231, 240–241). By further developing Jackson and Alwin's ipsative factor method in the context of plural CFA models, Chan and Bentler made an advance in estimating the factor loadings of ipsative items.

Despite this contribution, Chan and Bentler's CFA model remained inconvenient for the final estimation of the preipsative model: the estimated parameters of the ipsative data $\Lambda_x$ had to be transformed back into the preipsative model. Following up on Chan and Bentler's innovations, Cheung (2004) formulated a Direct Estimation method, which allows the final estimates of preipsative factor loadings and their standard errors to be obtained directly from standard software programs. This method applies a restricted second-order CFA model to Chan and Bentler's original idea of a first-order CFA model. In this approach, parameter estimates and standard errors can be calculated without the k within-group constraints that were necessary in the original CFA model.

To recapitulate, the negativity bias associated with rank-ordered items in factor analysis or latent structure analysis can now be corrected using the exploratory common factor model conceived by Jackson and Alwin (1980), as modified through the CFA model of Chan and Bentler (1993) and the optimal solution of the Direct Estimation method of Cheung (2004). The analyses in this study adopt the strategies of the above-cited literature and apply the Direct Estimation method proposed by Cheung (2004) to estimate the factor loadings of the preipsative items that are latent in the available rank-ordered ipsative data.
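As a numerical illustration of Equations 1 and 2, the sketch below is ours and uses hypothetical loadings rather than estimates from this article. It applies the ipsative transformation to a six-item, two-factor preipsative loading matrix and reproduces the pattern on the right-hand side of Equation 2, including the induced cross-loadings −ᾱ and −β̄.

```python
# Sketch (ours, hypothetical loadings): Equation (1) applied to the six-item,
# two-factor example of Equation (2).
import numpy as np

Lambda_y = np.zeros((6, 2))
Lambda_y[:3, 0] = [0.8, 0.7, 0.6]      # alpha_1..alpha_3: items 1-3 load on factor 1
Lambda_y[3:, 1] = [0.5, 0.6, 0.7]      # beta_1..beta_3: items 4-6 load on factor 2

ones = np.ones((6, 1))
lambda_bar = Lambda_y.mean(axis=0, keepdims=True)   # row vector (alpha_bar, beta_bar)

Lambda_x = Lambda_y - ones @ lambda_bar             # ipsative loading matrix
print(Lambda_x)                                     # matches the last matrix in Equation (2)
```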
Rating

The rating format can also generate bias. In the words of Krosnick and Alwin (1988, p. 529), "respondents presumably minimize the effort they expend in reporting their values by simply rating all qualities as equally and highly desirable." Unlike the ranking method, which intrinsically entails negativity bias, the rating method suffers from spuriously positive correlations among the evaluated items. To control for this positivity bias, three approaches have been adopted.

The first approach is to minimize response set bias by inserting rating items worded in opposite directions that measure the same values and concepts. Through this so-called item reversals method, respondents who illogically choose "very important" and "not important at all" for the same item would be identified as nondifferentiators, and one could obtain a latent dimension on which such nondifferentiators would be placed in the middle (Altemeyer, 1996; Hellevik, 1994; Paulhus, 1991; see Hellevik, 1993 for an application). Although this is a possible way to correct response set bias, it requires blocking such "yea-sayers" at the moment of administering the survey.

The second approach is to remove nondifferentiators from the analysis. Respondents who are not motivated to scrutinize the content of each rating item minimize their response cost by giving all items the same value on the scale. Krosnick and Alwin (1988) identified about 10% of respondents as nondifferentiators, who gave the same answer to all 13 items presented in a five-point scale format, and proposed removing them in their analysis of the 1980 General Social Survey. By silencing these respondents, i.e., neutralizing the positivity bias, Krosnick and Alwin demonstrate that the results for the rating items become closer to those obtained from the rank-ordered items. It is also possible, however, that some respondents rate all items equally after genuine deliberation; in such cases, the positivity bias is overcorrected.
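For concreteness, the sketch below is our own illustration of this second approach, using synthetic ratings: respondents who give exactly the same rating to every item in the battery are flagged and set aside before factor analysis.

```python
# Sketch (ours, synthetic data): flag respondents who rate every item identically.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
ratings = pd.DataFrame(rng.integers(0, 11, size=(1000, 12)),        # 11-point scale, 12 items
                       columns=[f"x{i:02d}" for i in range(1, 13)])
ratings.iloc[:80] = 10                        # simulate a block of nondifferentiators

nondiff = ratings.nunique(axis=1) == 1        # same answer on all 12 items
print(f"share of nondifferentiators: {nondiff.mean():.2f}")
analysis_sample = ratings.loc[~nondiff]       # drop them before factor analysis
```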
The last approach, proposed by Alwin and Krosnick (1985), is to include a General Method Factor in the factor analysis. This additional factor has identical factor loadings for all items and is uncorrelated with the other factors. Alwin and Krosnick demonstrated in their CFA that the loadings on this added factor are statistically significant and that including it significantly improves the goodness-of-fit measures (ibid., pp. 544–546). More importantly, in Alwin and Krosnick's example of parental values for child qualities, the correlation between the substantive latent factors changes once the General Method Factor is incorporated, i.e., once the effects of the positivity bias entailed in the rating items are controlled for. With the General Method Factor approach, we can control for this built-in positive correlation and correct the positivity bias generated by the process of evaluating rating items. The analyses below adopt this strategy.¹
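The logic of the General Method Factor can be previewed with a small simulation. The sketch below is ours, uses synthetic data, and is not the model actually estimated later in this study: when a response-style factor loads equally on every item, all inter-item correlations are pushed upward even though the substantive factors are uncorrelated, which is precisely the component the added factor is meant to absorb.

```python
# Sketch (ours, synthetic data): a uniform "method" factor inflates every
# inter-item correlation, illustrating the positivity bias that a General
# Method Factor is meant to absorb.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
f1, f2 = rng.normal(size=(2, n))                    # uncorrelated substantive factors
method = rng.normal(size=n)                         # response-style factor

load1 = np.array([0.7, 0.6, 0.6, 0.5, 0.6, 0.5])    # hypothetical loadings, items 1-6
load2 = np.array([0.6, 0.5, 0.7, 0.6, 0.6, 0.5])    # hypothetical loadings, items 7-12
gamma = 0.5                                         # identical method loading on all items

items = np.concatenate([np.outer(f1, load1), np.outer(f2, load2)], axis=1)
items += gamma * method[:, None] + 0.5 * rng.normal(size=(n, 12))

corr = np.corrcoef(items, rowvar=False)
print(round(corr[~np.eye(12, dtype=bool)].min(), 2))  # smallest correlation is still positive
```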
Research Design

To compare the ranking and rating formats, we used survey questions related to postmaterialism. Postmaterialism has been studied extensively since the seminal work of Inglehart (1977), who argued that generations who grow up in an era of peace and affluence attain postmaterialist values, striving for self-actualization and more aesthetic needs. Van Deth (1983) and Bean and Papadakis (1994) compared the ranking and rating formats for batteries on postmaterialism. Their questionnaires asked the same respondents to both rate and rank the battery items, one format after the other. Both studies had respondents answer the rating questions first and then the ranking questions, to avoid forcing them to rate items they had already ranked (Bean and Papadakis, 1994, p. 272). However, because both formats were administered to the same respondents, contamination effects of the preceding rating format on the subsequent ranking decisions cannot be ruled out. To eliminate such contamination effects, we randomly assigned the ranking and rating formats to respondents in an experimental survey.

In the survey wave conducted after the 2007 House of Councilors election in Japan, respondents were randomly split between Group A and Group B.² Group A was presented with a battery of 12 items on a country's goals in a rank-order format, while Group B was presented with the same 12 items in a rating format with an 11-point scale. In total, 848 respondents were assigned to Group A and 865 respondents to Group B.³ The questions asked were as follows.

The respondents in Group A were asked: "Which goal do you think is important for Japan to aim for in the next 10 years? For each of Section 1 through Section 3, please choose the two most important items. [Section 1] First, among these items, which item is the most important? Which item is the second most important?" The list consisted of four items, two related to postmaterialism and two associated with materialism. Section 2 and Section 3 were presented in the same way. After naming the most important and second most important items in each of the three sections, respondents were asked the following overall question: "Among the 12 goals mentioned in Section 1 through Section 3, which is the most important? Which is the second most important? Lastly, which item is the least important?"

The respondents in Group B were instead asked: "Which goals do you think are important for Japan to aim for in the next ten years? For each goal (a) through (l), please answer by choosing a number on the scale, with 10 as the maximum." The question was thus asked for each of the 12 items.

Data and Analysis

Two sets of data were used in our analysis: the ranking data from Group A and the rating data from Group B. The ranking scale for Group A largely followed Inglehart (1977, p. 43), from the highest score of 6 (the most important among the 12 items), through 5 (the second most important among the 12 items), 4 (the most important among the 4 items), 3 (the second most important among the 4 items), and 2 (items not chosen), to the lowest score of 1 (the least important among the 12 items).⁴ In theory, the sum of the 12 ranked items should amount to 35 if respondents reply in a completely logical manner.⁵ Yet we found in practice that not all respondents complete their responses in a transitive manner.⁶ To rectify this quasi-ipsative nature of the ranking scale and arrive at completely ipsative ranked items, we analyzed the respondents who met the following four conditions: (1) answering both the "most important" and "second most important" items in Section 1, Section 2, and Section 3; (2) answering all of the "most important," "second most important," and "least important" items among the 12 items; (3) choosing the "most important" item among the 12 items from the pool of the three "most important" items chosen in Section 1, Section 2, and Section 3; and (4) choosing the "least important" item from the pool of the six items not chosen as the "most important" or the "second most important" item in Section 1, Section 2, and Section 3. Instead of the original six-point scale, we reformulated the scores into a five-point scale ranging from 5 (the most important among the 12 items), through 4 (the most important among the 4 items), 3 (the second most important among the 4 items), and 2 (items not chosen), to 1 (the least important among the 12 items), which should give rise to an additive value of 29 for all respondents.⁷ The number of respondents considered for our analysis of the ranked items was N = 522.⁸
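To make the scoring rule and the transitivity requirement concrete, the sketch below is our illustration with a hypothetical set of answers rather than actual survey responses: it assigns the six-point scores described above and verifies that a fully transitive response pattern sums to 35.

```python
# Sketch (ours): apply the six-point scoring rule to one hypothetical, fully
# transitive response and check that the 12 scores sum to 35.
def score_items(section_first, section_second, overall_first, overall_second, least):
    scores = {item: 2 for item in range(12)}   # default: item not chosen
    for item in section_first:                 # most important within a section
        scores[item] = 4
    for item in section_second:                # second most important within a section
        scores[item] = 3
    scores[overall_second] = 5                 # second most important among all 12
    scores[overall_first] = 6                  # most important among all 12
    scores[least] = 1                          # least important among all 12
    return scores

# hypothetical answers; sections contain items 0-3, 4-7, and 8-11
scores = score_items(section_first={0, 4, 8}, section_second={1, 5, 9},
                     overall_first=0, overall_second=4, least=11)
print(sum(scores.values()))                    # 35 for a transitive response
```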
For the rating data, respondents who answered all 12 items were considered for our analysis. The number of respondents who fulfilled this requirement was N = 812.⁹

The analyses proceeded as follows. First, we compared the uncorrected models for the ranked items [Model 1-1] and the rated items [Model 2-1]. Second, we compared the corrected models: the preipsative model of ranked items, controlling for negativity bias through the Direct Estimation method (Cheung, 2004) [Model 1-2], and the model of rated items, controlling for positivity bias through the General Method Factor approach (Alwin and Krosnick, 1985) [Model 2-2]. Before presenting the results, let us preview the nuts and bolts of our analyses.

[Model 1-1] For the ipsative model without correction, a confirmatory factor analysis was conducted assuming one latent factor underlying all 12 items. The variance of this latent factor was fixed to 1 for the model to be properly identified.

[Model 1-2] For the preipsative model estimated through the Direct Estimation method, a confirmatory factor analysis with two factors was conducted. For the model to be identified, the variances of the two latent factors were fixed to 1.¹⁰

[Model 2-1] For the rating model without correction, a confirmatory factor analysis was conducted assuming one latent factor underlying all 12 items, in the same manner as Model 1-1. The variance of this latent factor was fixed to 1 for the model to be properly identified.

[Model 2-2] For the rating model correcting for positivity bias with the General Method Factor, a confirmatory factor analysis with two substantive factors was conducted to allow a direct comparison with Model 1-2, the preipsative model of the ranking data. The third factor, the General Method Factor, was set to have identical loadings for all 12 items, and the covariances between Factor 1 and Factor 3 and between Factor 2 and Factor 3 were set to 0. For the model to be identified, the variances of the three latent factors were fixed to 1.

Results

Tables 1 and 2 report the results of Models 1-1 and 1-2 for the ranking data and of Models 2-1 and 2-2 for the rating data, respectively.
Table 1. Direct Estimation Method With Ranking Data

Model 1-1 = ranking without correction; Model 1-2 = ranking with the Direct Estimation method. Cells report standardized loadings with t-values in parentheses.

Item                          | Model 1-1: Factor 1 | Model 1-1: Error variance | Model 1-2: Factor 1 | Model 1-2: Factor 2 | Model 1-2: Error variance
X01: Economic growth          | 0.53* (11.01)   | 0.72* (13.83) | −0.83* (−16.58) |                 | 0.32* (16.04)
X02: Strong defense forces    | 0.20* (3.98)    | 0.96* (15.90) | −0.49* (−7.00)  |                 | 0.76* (14.03)
X03: Maintain order           | −0.01 (−0.29)   | 1.00* (16.14) | −0.31* (−4.33)  |                 | 0.90* (13.86)
X04: Keep prices stable       | 0.41* (8.29)    | 0.83* (14.98) | −0.50* (−7.79)  |                 | 0.75* (14.54)
X05: Stable economy           | 0.74* (15.90)   | 0.45* (8.93)  | −0.73* (−13.06) |                 | 0.46* (16.04)
X06: Fight crime              | −0.03 (−0.64)   | 1.00* (16.13) | −0.19* (−2.60)  |                 | 0.96* (13.30)
X07: More say on job          | −0.40* (−8.20)  | 0.84* (15.01) |                 | −0.64* (−9.44)  | 0.59* (12.47)
X08: More beautiful cities    | −0.17* (−3.43)  | 0.97* (15.96) |                 | −0.11 (−1.32)   | 0.99* (12.68)
X09: More say in government   | −0.31* (−6.20)  | 0.90* (15.53) |                 | −0.78* (−12.27) | 0.39* (7.02)
X10: Freedom of speech        | −0.19* (−3.77)  | 0.96* (15.92) |                 | −0.46* (−5.71)  | 0.79* (13.15)
X11: Less impersonal society  | −0.55* (−11.40) | 0.70* (13.60) |                 | −0.57* (−8.60)  | 0.68* (13.72)
X12: Ideas count              | −0.39* (−7.93)  | 0.85* (15.09) |                 | −0.51* (−6.76)  | 0.74* (13.85)

Correlation between Factors 1 and 2 (Model 1-2): 0.37* (4.36)
Fit statistics (Model 1-1 / Model 1-2): N = 522 / 522; X²/df = 20.02 / 15.03; RMSEA = 0.19 / 0.16; AGFI = 0.63 / 0.72.

Note: Empty cells indicate that the item was not specified to load on that factor in the confirmatory factor analysis. *p < .01; standardized loadings are reported; t-values are in parentheses.
Table 2. General Method Factor Approach With Rating Data

Model 2-1 = rating without correction; Model 2-2 = rating with the General Method Factor. Cells report standardized loadings with t-values in parentheses.

Item                          | Model 2-1: Factor 1 | Model 2-1: Error variance | Model 2-2: Factor 1 | Model 2-2: Factor 2 | Model 2-2: Error variance
X01: Economic growth          | 0.55* (16.44)  | 0.69* (19.29) | 0.04 (0.90)     |                 | 0.66* (18.01)
X02: Strong defense forces    | 0.38* (10.82)  | 0.85* (19.81) | −0.07 (−1.48)   |                 | 0.72* (17.37)
X03: Maintain order           | 0.75* (24.22)  | 0.44* (17.69) | 0.28* (6.06)    |                 | 0.51* (18.53)
X04: Keep prices stable       | 0.74* (23.88)  | 0.45* (17.80) | 0.48* (10.88)   |                 | 0.39* (15.55)
X05: Stable economy           | 0.81* (27.14)  | 0.35* (16.49) | 0.59* (13.65)   |                 | 0.23* (9.18)
X06: Fight crime              | 0.71* (22.68)  | 0.49* (18.14) | 0.38* (8.55)    |                 | 0.42* (17.55)
X07: More say on job          | 0.53* (15.64)  | 0.72* (19.38) |                 | −0.03 (−0.57)   | 0.62* (16.19)
X08: More beautiful cities    | 0.59* (17.58)  | 0.66* (19.13) |                 | −0.01 (−0.24)   | 0.61* (16.37)
X09: More say in government   | 0.72* (22.91)  | 0.48* (18.08) |                 | 0.26* (5.15)    | 0.52* (18.20)
X10: Freedom of speech        | 0.64* (19.81)  | 0.59* (18.77) |                 | 0.35* (6.98)    | 0.55* (17.60)
X11: Less impersonal society  | 0.78* (25.64)  | 0.40* (17.17) |                 | 0.46* (9.20)    | 0.36* (12.57)
X12: Ideas count              | 0.47* (13.45)  | 0.78* (19.61) |                 | 0.21* (4.01)    | 0.66* (19.08)

General Method Factor (Model 2-2): 0.54* (30.05)
Correlation between Factors 1 and 2 (Model 2-2): 0.79* (14.39)
Fit statistics (Model 2-1 / Model 2-2): N = 812 / 812; X²/df = 8.06 / 5.64; RMSEA = 0.09 / 0.08; AGFI = 0.88 / 0.92.

Note: Empty cells indicate that the item was not specified to load on that factor in the confirmatory factor analysis. *p < .01; standardized loadings are reported; t-values are in parentheses.

The results of Model 1-1 largely replicated Inglehart's finding of a one-dimensional value cleavage between the postmaterialist and materialist camps, except for the items "Maintain order" and "Fight crime," whose loadings were insignificant.
All six postmaterialist items loaded negatively on this factor, in descending order of the magnitude of their loadings: "Less impersonal society," "More say on job," "Ideas count," "More say in government," "Freedom of speech," and "More beautiful cities." The remaining materialist items, "Stable economy," "Economic growth," "Keep prices stable," and "Strong defense forces," loaded positively on the same factor. In Inglehart's exploratory factor analysis, a single dimension was extracted on which postmaterialist items loaded positively and materialist items loaded negatively. The finding in Model 1-1 is thus largely identical to Inglehart's famous value dimension (Inglehart, 1977, p. 46; Inglehart, 1979, pp. 314, 316; see also Bean and Papadakis, 1994, p. 275).

Model 2-1 produced contrasting results. All 12 items had positive loadings and were no longer polarized on the same dimension. Applying rating data to postmaterialism, Bean and Papadakis (1994, p. 278) extracted a similar dimension on which the 12 items were aligned in the same direction.¹¹ In sum, our data generally confirmed the earlier findings of a polarized dimension for ranking data and a uniform dimension for rating data.

However, this picture changes dramatically once negativity and positivity biases are controlled for. The results of the Direct Estimation method in Model 1-2 suggest that all six materialist items load negatively on Factor 1, and the postmaterialist items likewise load negatively on Factor 2 (with "More beautiful cities" being insignificant). Moreover, the two factors are significantly and positively correlated (.370). This may come as a surprise, as the common understanding of the value dimension is that materialism and postmaterialism are opposed to each other and negatively correlated.¹² It implies that the value space is multidimensional rather than unidimensional: materialism and postmaterialism do not constitute a single dimension but are two separate dimensions that can be positively correlated.

The corrected model of the rating data in Model 2-2 also shows a different result from the uncorrected Model 2-1. Four items, "Economic growth," "Strong defense forces," "More say on job," and "More beautiful cities," are no longer statistically significant in Model 2-2 after controlling for positivity bias, but the remaining eight items remain significant with their signs unchanged. An important finding is that the two factors in Model 2-2 are strongly correlated, with a correlation coefficient of .794. This suggests that materialist and postmaterialist items are positively correlated even after the biases are removed from the analysis.

Discussion

The above results are intriguing in that the corrected ranking model and the corrected rating model show a similar picture. After controlling for negativity bias, the ranking model revealed that materialist and postmaterialist items constitute two separate dimensions that are positively correlated. Based on the same rank-ordered items, Inglehart maintained that postmaterialism and materialism are polarized on a single dimension and proposed the value cleavage hypothesis. Inglehart's presentation of a unidimensional cleavage is misleading, as the underlying factor analysis entails a negativity bias because of the ipsative properties inherent in rank-ordered items.
Once the biases associated with rank-ordered items were eliminated through the Direct Estimation method, the two dimensions appeared to be positively correlated with each other. The positive correlation between postmaterialist and materialist items is further confirmed by the analysis of the rated items: even after controlling for potential response set bias, the two dimensions of postmaterialism and materialism show a high correlation.

Based on an exploratory factor analysis of rated items, Bean and Papadakis (1994) argued that postmaterialist and materialist items are hardly located at the two poles of the same value dimension. Our analyses join their criticism of Inglehart's conceptualization of a unidimensional value cleavage. Bean and Papadakis extracted two rotated factors representing postmaterialism and materialism, respectively. Given that the two rotated factors are orthogonal and hence uncorrelated by construction, Bean and Papadakis (1994, p. 278) concluded that postmaterialism and materialism are independent and not negatively correlated as conceptualized in Inglehart's value dimension. Our analyses advance Bean and Papadakis's argument further by demonstrating that the two factors are positively correlated with each other. Inglehart (1994) replied to Bean and Papadakis by pointing out that the two factors were found to be uncorrelated precisely because varimax rotation was used in the factor analysis. Inglehart (1982, p. 458; 1990, p. 144) has argued on other occasions that the use of varimax rotation for extracting dimensions is "an analytic fallacy" of "reductio ad varimax." Our analyses do not suffer from this "analytic fallacy" but still lend support to multidimensionality in the value cleavage. Inglehart's extraction of a polarized value dimension rather suffers from an analytic fallacy of reductio ad ipse.

Although we argue that both the corrected ranking model and the corrected rating model show a positive correlation between postmaterialism and materialism, the following caveat applies. A closer look at the factor loadings of each item reveals that the content of the factors differs. In particular, the postmaterialist factor (Factor 2) in the corrected ranking model (Model 1-2) was driven by "More say in government" and "More say on job," items related to the needs for self-esteem and participation, whereas that in the corrected rating model (Model 2-2) was characterized by "Less impersonal society" and "Freedom of speech," items that tap into the aesthetic and intellectual needs of Maslow's hierarchy of needs, on which the theory of postmaterialism is based. Strictly speaking, then, the ranking and rating scales did not produce identical factors even after controlling for negativity and positivity bias. Despite these variations in item weights, our finding is consistent: the factors of postmaterialism and materialism are positively correlated. This is congruent with the original hypothesis of a hierarchy of needs, according to which people can pursue higher needs only after basic needs are fulfilled. In other words, postmaterialist needs build on materialist needs.

Airo Hino is Professor at the Faculty of Political Science and Economics, Waseda University in Tokyo, where he teaches research methodology. His main research topics are survey research and political communication in advanced democracies. Ryosuke Imai is Professor at the Center for Education and Innovation, Sojo University in Kumamoto. His main research topics are voting behavior and electoral politics in Japan.
Acknowledgments

The authors would like to thank Arthur Lupia, André Blais, and David Howell for their helpful comments on an earlier version of this note, and the anonymous reviewers for their suggestions to strengthen our analyses.

Footnotes

1. Apart from the above three approaches, centering responses around each respondent's mean score can also be an effective approach (Schwartz, 1994). To implement this, one simply calculates the mean score of the relevant items for each respondent and defines the new item score as the difference between the original score and that mean.
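As a minimal illustration of this centering approach (ours; the column names and toy values are hypothetical):

```python
# Sketch (ours): within-respondent mean-centering of rating items (cf. Schwartz, 1994).
import pandas as pd

ratings = pd.DataFrame({"x01": [10, 5, 8],      # toy ratings for three respondents
                        "x02": [9, 5, 2],
                        "x03": [8, 5, 5]})
centered = ratings.sub(ratings.mean(axis=1), axis=0)   # subtract each respondent's mean
print(centered)
```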
2. The survey analyzed in this study, Waseda-CASI&PAPI2007, was conducted as a two-wave panel before and after the 2007 House of Councilors election in Japan (June 16 to July 11 for the first wave and August 25 to September 17 for the second wave), using two modes: Paper-and-Pencil Interviewing (PAPI) and Computer-Assisted Self-Interviewing (CASI). The response rate was 42.3% for the preelection survey (N = 1,553) and 58.9% for the postelection survey (N = 1,713), in which 1,380 individuals were added as a supplementary sample. The samples were drawn from the nationwide electorate by random probability sampling. For the PAPI component, the treatments were randomized in the split sampling by stratifying on interviewers. For the CASI component, respondents were randomized by the CASI programs using the starting time of each interview as the seed.

3. The two groups are comparable, as respondents were randomly assigned to either group, generating two groups identical on all dimensions except the questionnaire format. The two groups are in fact not significantly different in any of the sociodemographic variables, such as age, gender, and educational level (see Appendix). An alternative design is to apply the two formats to the same respondents. In that case, the respondents are perfectly "identical," but their reactions to the different questionnaire formats cannot be properly measured, as a question asked earlier (e.g., in the rating format) is bound to contaminate the way respondents react to a later question (e.g., in the ranking format).

4. The order is reversed from the Inglehart scale to maintain the same direction as the rating scales.

5. The expected additive value of the 12 ranked items is 35 (6 × 1 + 5 × 1 + 4 × 1 + 3 × 3 + 2 × 5 + 1 × 1), as there should be only one item chosen as the most important among the 12 items (6), one as the second most important among the 12 items (5), one as the third most important, i.e., the item chosen as the most important at the "section stage" but not chosen as the most important or second most important at the "final stage" (4), and one as the least important among the 12 items (1). Among the remaining eight items, three should be chosen as the second most important at the section stage (3), and the rest are not important but not the least important (2).

6. In our data, only a quarter of the respondents fall into the "complete" category, in which the sum of the 12 items equals 35. For the rest, the additive values range from a minimum of 33 to a maximum of 39.

7. The transformation gives rise to "additive ipsative data" (AID), which is identical in nature to the data used in Chan and Bentler (1993) and Cheung (2004).

8. An alternative approach is to convert the ranking data into dummy variables, including missing data (Maydeu-Olivares and Böckenholt, 2005). Missing data can then be handled using multiple imputation techniques (Brown and Maydeu-Olivares, 2012). With this approach, respondents who give intransitive answers can still be kept in the analyses.

9. We tested how the dropping of respondents affected the composition of the two groups. In both Group A and Group B, the dropped respondents are older and less educated than those who remained in the analyses. This reflects that older and less educated respondents tended to give intransitive answers (in Group A) and to choose "Don't know" for one or more items (in Group B). In Group B, the dropped respondents also included more women. We also tested whether the remaining respondents are still comparable between Group A and Group B. There is no significant difference in gender or educational level between the two groups. Group B respondents are slightly older (by 2.71 years), given that more older respondents dropped out of Group A than out of Group B. There was no statistical difference in terms of political knowledge. The results of these tests of sociodemographic variables are reported in the Appendix.

10. The error variance of the fifth item, "Stable economy," is set to the same value as that of the first item for the purpose of achieving model identification. The equations for the structural and measurement models are as follows:
$$z_i = \alpha_i f_1 + e_i \quad (1 \le i \le 6), \qquad z_j = \alpha_j f_2 + e_j \quad (7 \le j \le 12),$$
$$X_i = .917\, z_i - .083 \sum_{\substack{j=1 \\ j \neq i}}^{11} z_j - .083\, z_{12} \quad (1 \le i \le 11).$$

11. To further validate the similarity between our findings and Bean and Papadakis (1994), an exploratory factor analysis of the rated items used in their study was conducted. As in Bean and Papadakis, we extracted a single dimension (eigenvalue = 5.63, variance explained = 46.9%) on which all 12 items loaded positively.

12. The exception is Bean and Papadakis (1994, p. 279), who find a positive correlation (r = .33) between their rating-based materialism and postmaterialism scales.

References

Altemeyer B. (1996). The authoritarian specter. Cambridge, MA: Harvard University Press.

Alwin D. F., Jackson D. J. (1982). Adult values for children: An application of factor analysis to ranked preference data. In Hauser R. M., Mechanic D., Haller A. O., Hauser T. S. (Eds.), Social structure and behavior: Essays in honor of William Hamilton Sewell (pp. 311–329). New York, NY: Academic Press.

Alwin D. F., Krosnick J. A. (1985). The measurement of values in surveys: A comparison of ratings and rankings. The Public Opinion Quarterly, 49, 535–552. doi:10.1086/268949

Bean C., Papadakis E. (1994). Polarized priorities or flexible alternatives? Dimensionality in Inglehart's materialism-postmaterialism scale. International Journal of Public Opinion Research, 6, 264–288. doi:10.1093/ijpor/6.3.264

Brown A., Maydeu-Olivares A. (2012). Fitting a Thurstonian IRT model to forced-choice data using Mplus. Behavior Research Methods, 44, 1135–1147. doi:10.3758/s13428-012-0217-x

Chan W., Bentler P. M. (1993). The covariance structure analysis of ipsative data. Sociological Methods and Research, 22, 214–247. doi:10.1177/0049124193022002003

Cheung M. W. L. (2004). A direct estimation method on analyzing ipsative data with Chan and Bentler's (1993) method. Structural Equation Modeling, 11, 217–243. doi:10.1207/s15328007sem1102_5
Dunlap W. P., Cornwell J. M. (1994). Factor analysis of ipsative measures. Multivariate Behavioral Research, 29, 115–126. doi:10.1207/s15327906mbr2901_4

Elig T. W., Frieze I. H. (1979). Measuring causal attributions for success and failure. Journal of Personality and Social Psychology, 37, 621–634. doi:10.1037/0022-3514.37.4.621

Groves R. M., Kahn R. L. (1979). Surveys by telephone: A national comparison with personal interviews. New York, NY: Academic Press.

Guilford J. P. (1954). Psychometric methods (2nd ed.). New York, NY: McGraw-Hill.

Hellevik O. (1993). Postmaterialism as a dimension of cultural change. International Journal of Public Opinion Research, 5, 211–233. doi:10.1093/ijpor/5.3.211

Hellevik O. (1994). Measuring value orientation: Rating versus ranking. International Journal of Public Opinion Research, 6, 292–295.

Hicks L. E. (1970). Some properties of ipsative, normative, and forced-choice normative measures. Psychological Bulletin, 74, 167–184. doi:10.1037/h0029780

Horst P. (1965). Factor analysis of data matrices. New York, NY: Holt, Rinehart & Winston.

Inglehart R. (1977). The silent revolution: Changing values and political styles among western publics. Princeton, NJ: Princeton University Press.

Inglehart R. (1979). Value priorities and socioeconomic change. In Barnes S. H., Kaase M. (Eds.), Political action: Mass participation in five western democracies (pp. 305–342). Beverly Hills, CA: Sage Publications.

Inglehart R. (1982). Changing values in Japan and the West. Comparative Political Studies, 14, 445–479.

Inglehart R. (1990). Culture shift in advanced industrial society. Princeton, NJ: Princeton University Press.

Inglehart R. (1994). Polarized priorities or flexible alternatives? Dimensionality in Inglehart's materialism-postmaterialism scale: A comment. International Journal of Public Opinion Research, 6, 289–297. doi:10.1093/ijpor/6.3.289-a

Jackson D. J., Alwin D. F. (1980). The factor analysis of ipsative measures. Sociological Methods and Research, 9, 218–238. doi:10.1177/004912418000900206

Krosnick J. A. (1999). Maximizing questionnaire quality. In Robinson J. P., Shaver P. R., Wrightsman L. S. (Eds.), Measures of political attitudes: Volume 2 of measures of social psychological attitudes (pp. 37–57). San Diego, CA: Academic Press.

Krosnick J. A., Alwin D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. The Public Opinion Quarterly, 52, 526–538. doi:10.1086/269128

Maydeu-Olivares A., Böckenholt U. (2005). Structural equation modeling of paired-comparison and ranking data. Psychological Methods, 10, 285–304. doi:10.1037/1082-989X.10.3.285

Munson J. M., McIntyre S. H. (1979). Developing practical procedures for the measurement of personal values in cross-cultural marketing. Journal of Marketing Research, 16, 48–52. doi:10.2307/3150873

Paulhus D. L. (1991). Measurement and control of response bias. In Robinson J. P., Shaver P. R., Wrightsman L. S. (Eds.), Measures of personality and social psychological attitudes: Volume 1 in measures of social psychological attitudes series (pp. 17–59). San Diego, CA: Academic Press.
Rankin W. L., Grube J. W. (1980). A comparison of ranking and rating procedures for value system measurement. European Journal of Social Psychology, 10, 233–246. doi:10.1002/ejsp.2420100303

Reynolds T. J., Jolly J. P. (1980). Measuring personal values: An evaluation of alternative methods. Journal of Marketing Research, 17, 531–536. doi:10.2307/3150506

Sacchi S. (1998). The dimensionality of postmaterialism: An application of factor analysis to ranked preference data. European Sociological Review, 14, 151–175.

Schwartz S. H. (1994). Are there universal aspects in the structure and content of human values? Journal of Social Issues, 50, 19–45. doi:10.1111/j.1540-4560.1994.tb01196.x

Ten Berge J. M. F. (1999). A legitimate case of component analysis of ipsative measures, and partialling the mean as an alternative to ipsatization. Multivariate Behavioral Research, 34, 89–102. doi:10.1207/s15327906mbr3401_4

Van Deth J. W. (1983). Ranking the ratings: The case of materialist and post-materialist value orientations. Political Methodology, 9, 407–431.

Zeller R. A., Carmines E. G. (1980). Measurement in the social sciences: The link between theory and data. Cambridge: Cambridge University Press.

Appendix

Table A1. Difference of Sociodemographic Variables and Political Knowledge Between Groups

Compared groups                                                 | Gender      | Age         | Education   | Political knowledge
Group A (all, N = 848) versus Group B (all, N = 865)            | −0.81 (.21) | 1.13 (.13)  | −0.53 (.30) | 0.76 (.23)
Group A: incomplete (N = 326) versus complete (N = 522)         | 0.08 (.47)  | −4.74 (.00) | −4.54 (.00) | −2.03 (.02)
Group B: incomplete (N = 53) versus complete (N = 812)          | 2.58 (.01)  | −5.40 (.00) | −5.94 (.00) | −4.38 (.00)
Group A (complete, N = 522) versus Group B (complete, N = 812)  | −0.35 (.36) | 2.35 (.01)  | 0.80 (.21)  | 0.96 (.17)

Note: t-values are reported (p-values for the null hypothesis of no difference in brackets).
Table A1 Difference of Sociodemographic Variables and Political Knowledge Between Groups Compared groups  Gender  Age  Education  Education political knowledge  Group A (all, N = 848) versus Group B (all, N = 865)  −0.81 (.21)  1.13 (.13)  −0.53 (.30)  0.76 (.23)  Group A: Incomplete (N = 326) versus complete (N = 522)  0.08 (.47)  −4.74 (.00)  −4.54 (.00)  −2.03 (.02)  Group B: Incomplete (N = 53) versus complete (N = 812)  2.58 (.01)  −5.40 (.00)  −5.94 (.00)  −4.38 (.00)  Group A (complete, N = 522) versus Group B (complete, N = 812)  −0.35 (.36)  2.35 (.01)  0.80 (.21)  0.96 (.17)  Compared groups  Gender  Age  Education  Education political knowledge  Group A (all, N = 848) versus Group B (all, N = 865)  −0.81 (.21)  1.13 (.13)  −0.53 (.30)  0.76 (.23)  Group A: Incomplete (N = 326) versus complete (N = 522)  0.08 (.47)  −4.74 (.00)  −4.54 (.00)  −2.03 (.02)  Group B: Incomplete (N = 53) versus complete (N = 812)  2.58 (.01)  −5.40 (.00)  −5.94 (.00)  −4.38 (.00)  Group A (complete, N = 522) versus Group B (complete, N = 812)  −0.35 (.36)  2.35 (.01)  0.80 (.21)  0.96 (.17)  Note: t-values are reported (p-values for null-hypotheses of no difference are reported in brackets). © The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png International Journal of Public Opinion Research Oxford University Press

Ranking and Rating: Neglected Biases in Factor Analysis of Postmaterialist Values

Loading next page...
 
/lp/ou_press/ranking-and-rating-neglected-biases-in-factor-analysis-of-YbyZNiNQla
Publisher
Oxford University Press
Copyright
© The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research.
ISSN
0954-2892
eISSN
1471-6909
D.O.I.
10.1093/ijpor/edy007
Publisher site
See Article on Publisher Site

Abstract

Abstract The pros and cons of ranking and rating scales have been much discussed. It is widely accepted that factor analysis based on ranking scales suffers from a negativity bias, while that based on rating scales in turn suffers from a positivity bias. Recent decades have witnessed an increasing sophistication in techniques of neutralizing these biases in factor analysis. Using data from an experimental survey of postmaterialist values conducted in Japan, we applied these techniques and compared the dimensions extracted using ranking and rating scales. The results suggest that the postmaterialist and materialist dimensions are in fact positively correlated, in contrast to the common view that they are opposed to each other. Implications for postmaterialist theory are discussed. The pros and cons of ranking and rating scales have been much discussed in the field of survey methodology. Some suggest that rating is a preferable answer format for opinion data, as rating scales are easier to administer, in addition to the fact that it allows respondents to evaluate each question independently on an interval scale (Elig and Frieze, 1979; Groves and Kahn, 1979; Munson and McIntyre, 1979). Others suggest that ranking is a more effective format, as rating suffers from the problem of nondifferentiating respondents, and the reliability and validity of ranking data are greater than rating data (Krosnick, 1999; Rankin and Grube, 1980; Reynolds and Jolly, 1980). These debates are mainly concerned with the practical and administrative aspects of surveys: whether ranking or rating should be adopted as a question format. In contrast, according to Krosnick and Alwin (1988, p. 527), “relatively little work has compared the correlational structures of ranking and rating data.” We share this view and here investigated the analytic and correlational aspects of ranking and rating data: how the formats determine the latent structure of values and concepts. To this end, we review below how factor analyses of ranking data can suffer from negativity bias, while those of rating data can suffer on the contrary from positivity bias. We then discuss the latest techniques to correct such biases and apply them to data derived from an experimental survey. Correcting Negativity and Positivity Bias Ranked items and rated items are known to produce biased results in factor analysis. Ranked items are not suitable for factor analysis because of the ipsative nature entailed in their measurement (Dunlap and Cornwell, 1994, p. 115). Warnings abound in the early psychometric literature against the use of factor analysis on a set of variables with ipsative properties (Guilford, 1954; Hicks, 1970; Horst, 1965) and discussions continued in later decades (Ten Berge, 1999). The term ipsativity describes a sum of rank-ordered items or rescaled measures that are constant across respondents. In such a setup, the ranking of one item is not independent of other items, as prior rankings constrain the relative ranks of remaining items. For example, in a battery of four items, the ranking of the first item determines the ranks of remaining three items, which automatically generates negative correlations between them at about −0.33, i.e., −1/(k − 1), where k is the number of measures. This negativity in the correlation matrices is incongruent with the methodological premises of factor analysis. A warning by Dunlap and Cornwell (1994, p. 
122) merits our attention: "principal components factor analysis of ipsative data will produce bipolar factors that result, not from the true underlying relationships between the variables, but from negative relationships induced solely by the ipsative nature of the measures."

Critics of the ranking format therefore prefer the rating method for measuring value dimensions (Bean and Papadakis, 1994; Van Deth, 1983). The rating method allows respondents in a survey questionnaire to evaluate each item independently of the others, and the absolute values it yields are immune to the negativity bias associated with relative ranking orders. However, proponents of the ranking method argue that the rating method potentially suffers from response set bias, whereby respondents answer in a uniform manner regardless of the questions asked (Zeller and Carmines, 1980, p. 94). If such response behaviors exist, "spuriously positive correlations" (Alwin and Krosnick, 1985, p. 537) among rated items can prevail, and factor analysis would thus suffer from positivity bias.

In a nutshell, researchers conducting factor analysis face a dilemma between negativity bias in the ranking format and positivity bias in the rating format. We discuss below ways to correct these biases. Once the biases accrued from the measurement format, whether ranking or rating, are corrected, we can properly assess the nature of values and concepts free of such noise. To this end, we propose a set of schemes to neutralize biases in the factor analysis of both rank-ordered and rating-evaluated data by applying techniques available in the field. We first review techniques for correcting the negativity bias of ranking scales and then those for the positivity bias of rating scales in conducting factor analysis or covariance structure analysis.

Ranking

As ipsative data produce a negatively biased correlation matrix, routine factor analysis is known to be flawed, as discussed above. However, the common factor model for ipsative measures proposed by Jackson and Alwin (1980) enables us to correct such negativity biases to a significant extent. Assuming that true evaluations or ratings of the items exist in preipsative data, Jackson and Alwin devised a scheme to calculate the coefficient matrices of a factor model that would be free from negativity bias. Jackson and Alwin (ibid., p. 222) write the factor pattern coefficient matrix (factor loading matrix) for x (i.e., the ipsative data) as:

  Λx = Λy − 1λ̄′,   (1)

where Λy is the factor pattern coefficient matrix (factor loading matrix) for y (i.e., the preipsative data), 1 is a (k × 1) vector of ones, and λ̄ holds the average values of the coefficients in the columns of Λy. The scheme imposes a set of constraints generating negative correlations among the error terms inherent in ipsative data (Alwin and Jackson, 1982; see Sacchi, 1998 for an application with a Monte Carlo simulation).

However, Jackson and Alwin's common factor model suffers from a limitation. As Jackson and Alwin (1980, p. 225) acknowledge, the interpretation of the ipsative factor loading matrix Λx remains indefinite, because the values of λ̄ are unknown in the estimation (see Equation 1). As a consequence, we are left with no definitive interpretation of the original factor loadings Λy of the preipsative data.
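As an illustration of the two points above, the following sketch (ours, not the authors' code; all numbers are hypothetical) centers simulated, independent ratings within each respondent, which induces inter-item correlations of roughly −1/(k − 1), and then applies the column-mean subtraction of Equation 1 to an arbitrary one-factor loading matrix.

```python
# Minimal illustration of ipsativity bias and Equation 1 (hypothetical numbers only).
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 4
y = rng.normal(size=(n, k))                       # "preipsative" scores, independent items
x = y - y.mean(axis=1, keepdims=True)             # ipsatize: each row now sums to zero

print(np.corrcoef(x, rowvar=False).round(2))      # off-diagonals near -1/(k - 1) = -0.33

# Equation 1: Lambda_x = Lambda_y - 1 * lambda_bar'
Lambda_y = np.array([[0.8], [0.6], [0.7], [0.5]])     # an arbitrary one-factor loading matrix
lambda_bar = Lambda_y.mean(axis=0)                    # column average(s) of Lambda_y
Lambda_x = Lambda_y - np.ones((k, 1)) * lambda_bar    # every loading shifted by that average
print(Lambda_x.ravel().round(3))
```

The second half makes the limitation concrete: without knowing λ̄, the shift applied to every loading cannot be undone from the ipsative side alone.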
Chan and Bentler (1993) made an important contribution to overcoming this limitation. To make the factor loadings of the preipsative data Λy estimable, Chan and Bentler (1993, p. 229) extend Equation 1 as follows for a six-item, two-factor case:

\[
\Lambda_x = \Lambda_y - \mathbf{1}\bar{\lambda}' =
\begin{bmatrix} \alpha_1 & 0 \\ \alpha_2 & 0 \\ \alpha_3 & 0 \\ 0 & \beta_1 \\ 0 & \beta_2 \\ 0 & \beta_3 \end{bmatrix}
-
\begin{bmatrix} \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \\ \bar{\alpha} & \bar{\beta} \end{bmatrix}
=
\begin{bmatrix} \alpha_1-\bar{\alpha} & -\bar{\beta} \\ \alpha_2-\bar{\alpha} & -\bar{\beta} \\ \alpha_3-\bar{\alpha} & -\bar{\beta} \\ -\bar{\alpha} & \beta_1-\bar{\beta} \\ -\bar{\alpha} & \beta_2-\bar{\beta} \\ -\bar{\alpha} & \beta_3-\bar{\beta} \end{bmatrix},
\quad (2)
\]

where ᾱ = (α1 + α2 + α3 + 0 + 0 + 0)/6 and β̄ = (0 + 0 + 0 + β1 + β2 + β3)/6 are the average factor loadings for the first and second factors. Instead of the one-factor exploratory factor-analytic model assumed in the Jackson and Alwin method, Chan and Bentler reformulate the model with two or more factors through a confirmatory factor analytic (CFA) model (Chan and Bentler, 1993, p. 220). This is their major innovation: the estimation of 1λ̄′ becomes possible by fixing parameters for both factors while imposing k constraints, where k is the difference in the number of free parameters between the preipsative and ipsative models (ibid., pp. 229–231, 240–241). By developing Jackson and Alwin's ipsative factor method further in the context of plural CFA models, Chan and Bentler advanced the estimation of the factor loadings of ipsative items.

Despite this contribution, Chan and Bentler's CFA model still entailed an inconvenience in the final estimation of the preipsative model: the estimated parameters of the ipsative data Λx had to be transformed back into the preipsative model. Following up on Chan and Bentler's innovations, Cheung (2004) formulated a Direct Estimation method, which allows the final estimates of the preipsative factor loadings and their standard errors to be obtained directly from standard software. The method applies a restricted second-order CFA model to Chan and Bentler's original idea of a first-order CFA model; parameter estimates and standard errors can then be calculated without the k within-group constraints that were necessary in the original CFA model.

To recapitulate, the negativity bias associated with rank-ordered items in factor analysis or latent structure analysis can now be corrected using the exploratory common factor model conceived by Jackson and Alwin (1980), modified through the CFA model of Chan and Bentler (1993), and optimally solved with the Direct Estimation method of Cheung (2004). The analyses in this study follow this literature and apply Cheung's (2004) Direct Estimation method to estimate the factor loadings of the preipsative items latent in the available rank-ordered ipsative data.
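The following numeric sketch (ours, with arbitrary α and β values) reproduces the arithmetic of Equation 2 for a six-item, two-factor loading matrix: subtracting the column averages, taken over all six rows, yields the ipsative pattern shown on the right-hand side of Equation 2.

```python
# Numeric check of Equation 2 with arbitrary loadings (illustration only).
import numpy as np

alpha = np.array([0.8, 0.7, 0.6])     # hypothetical loadings on factor 1
beta = np.array([0.5, 0.6, 0.7])      # hypothetical loadings on factor 2

Lambda_y = np.zeros((6, 2))
Lambda_y[:3, 0] = alpha               # first three items load on factor 1 only
Lambda_y[3:, 1] = beta                # last three items load on factor 2 only

lambda_bar = Lambda_y.mean(axis=0)    # (alpha_bar, beta_bar), averaged over all 6 rows
Lambda_x = Lambda_y - lambda_bar      # broadcasting subtracts each column mean

print(Lambda_x.round(3))
# First three rows: (alpha_i - alpha_bar, -beta_bar); last three: (-alpha_bar, beta_j - beta_bar),
# exactly the structure that the CFA constraints in Chan and Bentler (1993) encode.
```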
Rating

The rating format can also generate bias. In the words of Krosnick and Alwin (1988, p. 529), "respondents presumably minimize the effort they expend in reporting their values by simply rating all qualities as equally and highly desirable." Unlike the ranking method, which intrinsically entails negativity bias, the rating method suffers from spuriously positive correlations among the evaluated items. To control for this positivity bias, the following three approaches have been adopted.

The first approach is to minimize response set bias by inserting rating items in opposite directions that measure the same values and concepts. Through this so-called item reversals method, respondents who illogically choose "very important" and "not important at all" for the same item would be identified as nondifferentiators, and one could obtain a latent dimension on which nondifferentiators are placed in the middle (Altemeyer, 1996; Hellevik, 1994; Paulhus, 1991; see Hellevik, 1993 for an application). Although this is a possible way to correct response set bias, such "yea-sayers" must be blocked at the moment of administering the survey.

The second approach is to remove nondifferentiators from the analysis. Respondents who are not motivated to scrutinize the content of each rating item minimize their response cost by giving all items the same value on the scale. Krosnick and Alwin (1988) identify about 10% of respondents as nondifferentiators, rating all 13 items at the same point of the five-point scale, and propose removing them in their analysis of the 1980 General Social Survey. By silencing these respondents, that is, by neutralizing the positivity bias, Krosnick and Alwin demonstrate that the results for the rating items come closer to those obtained from the rank-ordered items. It is also possible, however, that some respondents rate all items equally after deliberation; in such cases, the positivity bias is overcorrected.

The last approach, proposed by Alwin and Krosnick (1985), is to include a General Method Factor in the factor analysis. This additional factor is given identical factor loadings for all items and is uncorrelated with the other factors. Alwin and Krosnick demonstrated that the loadings on this added factor are statistically significant and that including it significantly improves the goodness-of-fit measures of their CFA (ibid., pp. 544–546). More importantly, in Alwin and Krosnick's example of parental values concerning child qualities, the correlation between the substantive latent factors changes after the incorporation of the General Method Factor, that is, after controlling for the positivity bias entailed in the rating items. With this General Method Factor approach, we can control for the built-in positive correlation and correct the positivity bias generated by the process of evaluating rating items. The analyses below adopt this strategy.1
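As a brief illustration of the problem these approaches target, the following simulation (ours, not drawn from any survey) adds a respondent-level response-style component to otherwise unrelated items; every inter-item correlation turns positive, and respondents who give every item the same score can be flagged as nondifferentiators.

```python
# Small simulation of positivity bias from a common response-style component.
import numpy as np

rng = np.random.default_rng(1)
n, k = 3000, 12
true_scores = rng.normal(size=(n, k))          # substantive item components, unrelated by design
method = rng.normal(size=(n, 1))               # respondent-level response-style component
ratings = true_scores + method                 # every item now shares the method component

corr = np.corrcoef(ratings, rowvar=False)
off_diag = corr[np.triu_indices(k, 1)]
print(off_diag.min().round(2), off_diag.max().round(2))   # all positive, clustering around +0.5

# In real, discrete rating data, nondifferentiators would be rows with zero variance:
nondiff = ratings.std(axis=1) == 0             # flags respondents who rate every item identically
print(int(nondiff.sum()))                      # none here, since the simulated ratings are continuous
```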
Research Design

To compare the ranking and rating formats, we used survey questions related to postmaterialism. Postmaterialism has been studied extensively since the seminal work of Inglehart (1977), who argued that generations who grow up in an era of peace and affluence attain postmaterialist values, striving for self-actualization and more aesthetic needs. Van Deth (1983) and Bean and Papadakis (1994) compared the ranking and rating formats for batteries on postmaterialism. Their questionnaires asked the same respondents to both rate and rank the battery items, one format after the other. Both studies had respondents answer the rating questions first and the ranking questions second, to avoid forcing them to rate items they had already ranked (Bean and Papadakis, 1994, p. 272). Because both formats are administered to the same respondents, however, contamination effects of the preceding rating format on subsequent ranking decisions cannot be ruled out. To eliminate such contamination effects, we randomly assigned the ranking and rating formats to respondents in an experimental survey.

In the wave of surveys conducted after the 2007 election of the House of Councilors in Japan, respondents were randomly split between Group A and Group B.2 Group A received a battery of 12 rank-ordered items on the topic of the country's goals, while Group B received the same 12 items in a rating format with an 11-point scale. In total, 848 respondents were assigned to Group A, whereas 865 respondents were assigned to Group B.3 The questions asked were as follows.

The respondents in Group A were asked: "Which goal do you think is important for Japan to aim for the next 10 years? For each of Section 1 through Section 3, please choose two most important items. [Section 1] First, among these items, which item is the most important? Which item is the second most important?" The list consisted of four items, two related to postmaterialism and two associated with materialism. Section 2 and Section 3 were presented in the same way. After choosing the most important and second most important items in each of the three sections, respondents were asked the following overall question: "Among the 12 goals mentioned in Section 1 through Section 3, which is the most important? Which is the second most important? Lastly, which item is the least important?"

The respondents in Group B were instead asked: "Which goals do you think it is important for Japan to aim for in the next ten years? For each goal (a) through (l), please answer by choosing a number on the scale, with 10 as the maximum"; this question was thus asked for each of the 12 items.

Data and Analysis

Two sets of data were used in our analysis: the ranking data from Group A and the rating data from Group B. The ranking scale for Group A largely followed Inglehart (1977, p. 43), from the highest score of 6 (the most important among the 12 items), to 5 (the second most important among the 12 items), 4 (the most important among the 4 items), 3 (the second most important among the 4 items), 2 (items not chosen), and the lowest score of 1 (the least important among the 12 items).4 In theory, the sum of the 12 ranked items should amount to 35 if respondents reply in a completely logical manner.5 In practice, however, not all respondents completed their responses in a transitive manner.6 To rectify this quasi-ipsative nature of the ranking scale and to arrive at completely ipsative ranked items, we analyzed the respondents who met the following four conditions: (1) answering both the "most important" and "second most important" items in Section 1, Section 2, and Section 3; (2) answering all of the "most important," "second most important," and "least important" items among the 12 items; (3) choosing the "most important" item among the 12 items from the pool of the three "most important" items chosen in Section 1, Section 2, and Section 3; and (4) choosing the "least important" item from the pool of the six items not chosen as the "most important" or "second most important" item in Section 1, Section 2, and Section 3. Instead of the original six-point scale, we reformulated the scores on a five-point scale ranging from 5 (the most important among the 12 items), through 4 (the most important among the 4 items), 3 (the second most important among the 4 items), and 2 (items not chosen), to 1 (the least important among the 12 items), which should give rise to an additive value of 29 for all respondents.7 The number of respondents retained for our analysis of the ranked items was N = 522.8 For the rating data, respondents who answered all 12 items were retained for our analysis; the number of respondents who fulfilled this requirement was N = 812.9
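The screening just described can be summarized in a short sketch (ours; the column names are illustrative, not the survey's actual variable codes). It retains Group A respondents whose recoded scores share the constant row total implied by fully transitive answers, taken here as the modal total, and Group B respondents with no missing ratings on the 12 items.

```python
# Hypothetical screening sketch; X01..X12 are illustrative item columns.
import pandas as pd

ITEMS = [f"X{i:02d}" for i in range(1, 13)]

def screen_ranking(scores: pd.DataFrame) -> pd.DataFrame:
    """Keep Group A respondents whose 12 recoded ranking scores sum to the same constant,
    the defining property of (additive) ipsative data."""
    totals = scores[ITEMS].sum(axis=1)
    target = totals.mode().iat[0]          # the constant implied by fully consistent answers
    return scores[totals == target]

def screen_rating(ratings: pd.DataFrame) -> pd.DataFrame:
    """Keep Group B respondents who rated all 12 items (no missing values)."""
    return ratings.dropna(subset=ITEMS)
```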
The analyses proceeded as follows. First, we compared the uncorrected models for the ranked items [Model 1-1] and the rated items [Model 2-1]. Second, we compared the corrected models: the preipsative model of the ranked items, controlling for negativity bias through the Direct Estimation method (Cheung, 2004) [Model 1-2], and the model of the rated items, controlling for positivity bias through the General Method Factor approach (Alwin and Krosnick, 1985) [Model 2-2]. Before presenting the results, we preview the nuts and bolts of the analyses below.

[Model 1-1] For the ipsative model without correction, a confirmatory factor analysis was conducted assuming one latent factor underlying all 12 items. The variance of this latent factor was fixed to 1 for the model to be properly identified.

[Model 1-2] For the preipsative model estimated through the Direct Estimation method, a confirmatory factor analysis with two factors was conducted. For the model to be identified, the variances of the two latent factors were fixed to 1.10

[Model 2-1] For the rating model without correction, a confirmatory factor analysis was conducted assuming one latent factor underlying all 12 items, in the same manner as Model 1-1. The variance of this latent factor was fixed to 1 for the model to be properly identified.

[Model 2-2] For the rating model correcting for positivity bias with the General Method Factor, a confirmatory factor analysis with two factors was conducted to allow a direct comparison with Model 1-2, the preipsative model of the ranking data. A third factor, the General Method Factor, was set to have identical loadings for all 12 items, and the covariances between Factor 1 and Factor 3 and between Factor 2 and Factor 3 were set to 0. For the model to be identified, the variances of the three latent factors were fixed to 1.
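For readers who wish to reproduce this kind of specification, the sketch below shows how the two-substantive-factor part of these models might be written in Python, assuming the semopy package and illustrative column names X01–X12 with placeholder data. It is a sketch, not the authors' code: the General Method Factor with equal loadings and the fixed latent variances used in Models 1-2 and 2-2 require additional constraints not shown here, and semopy's default identification (fixing the first loading of each factor) differs from the paper's (fixing latent variances to 1).

```python
# Hedged CFA sketch assuming the semopy package; data and names are placeholders.
import numpy as np
import pandas as pd
import semopy

ITEMS = [f"X{i:02d}" for i in range(1, 13)]
rng = np.random.default_rng(2)
ratings = pd.DataFrame(rng.normal(size=(200, 12)), columns=ITEMS)  # stand-in for screened Group B data

# F1: the six materialist items; F2: the six postmaterialist items; estimate their covariance.
MODEL_DESC = """
F1 =~ X01 + X02 + X03 + X04 + X05 + X06
F2 =~ X07 + X08 + X09 + X10 + X11 + X12
F1 ~~ F2
"""

model = semopy.Model(MODEL_DESC)
model.fit(ratings)
print(model.inspect())   # loadings, error variances, and the F1-F2 covariance
```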
Results

Tables 1 and 2 report the results of Models 1-1 and 1-2 for the ranking data and of Models 2-1 and 2-2 for the rating data, respectively.

Table 1. Direct Estimation Method With Ranking Data
(Model 1-1 = ranking without correction; Model 1-2 = ranking, Direct Estimation method)

| Item | 1-1: Factor 1 | 1-1: Error variance | 1-2: Factor 1 | 1-2: Factor 2 | 1-2: Error variance |
|---|---|---|---|---|---|
| X01: Economic growth | 0.53* (11.01) | 0.72* (13.83) | −0.83* (−16.58) | | 0.32* (16.04) |
| X02: Strong defense forces | 0.20* (3.98) | 0.96* (15.90) | −0.49* (−7.00) | | 0.76* (14.03) |
| X03: Maintain order | −0.01 (−0.29) | 1.00* (16.14) | −0.31* (−4.33) | | 0.90* (13.86) |
| X04: Keep prices stable | 0.41* (8.29) | 0.83* (14.98) | −0.50* (−7.79) | | 0.75* (14.54) |
| X05: Stable economy | 0.74* (15.90) | 0.45* (8.93) | −0.73* (−13.06) | | 0.46* (16.04) |
| X06: Fight crime | −0.03 (−0.64) | 1.00* (16.13) | −0.19* (−2.60) | | 0.96* (13.30) |
| X07: More say on job | −0.40* (−8.20) | 0.84* (15.01) | | −0.64* (−9.44) | 0.59* (12.47) |
| X08: More beautiful cities | −0.17* (−3.43) | 0.97* (15.96) | | −0.11 (−1.32) | 0.99* (12.68) |
| X09: More say in government | −0.31* (−6.20) | 0.90* (15.53) | | −0.78* (−12.27) | 0.39* (7.02) |
| X10: Freedom of speech | −0.19* (−3.77) | 0.96* (15.92) | | −0.46* (−5.71) | 0.79* (13.15) |
| X11: Less impersonal society | −0.55* (−11.40) | 0.70* (13.60) | | −0.57* (−8.60) | 0.68* (13.72) |
| X12: Ideas count | −0.39* (−7.93) | 0.85* (15.09) | | −0.51* (−6.76) | 0.74* (13.85) |
| Correlation between Factors 1 and 2 | | | 0.37* (4.36) | | |
| N | 522 | | 522 | | |
| χ²/df | 20.02 | | 15.03 | | |
| RMSEA | 0.19 | | 0.16 | | |
| AGFI | 0.63 | | 0.72 | | |

Note: Empty cells indicate that the item was not specified to load on that factor in the confirmatory factor analysis. *p < .01; standardized loading scores are reported; t-values are in parentheses.
Table 2. General Method Factor Approach With Rating Data
(Model 2-1 = rating without correction; Model 2-2 = rating with General Method Factor)

| Item | 2-1: Factor 1 | 2-1: Error variance | 2-2: Factor 1 | 2-2: Factor 2 | 2-2: Error variance |
|---|---|---|---|---|---|
| X01: Economic growth | 0.55* (16.44) | 0.69* (19.29) | 0.04 (0.90) | | 0.66* (18.01) |
| X02: Strong defense forces | 0.38* (10.82) | 0.85* (19.81) | −0.07 (−1.48) | | 0.72* (17.37) |
| X03: Maintain order | 0.75* (24.22) | 0.44* (17.69) | 0.28* (6.06) | | 0.51* (18.53) |
| X04: Keep prices stable | 0.74* (23.88) | 0.45* (17.80) | 0.48* (10.88) | | 0.39* (15.55) |
| X05: Stable economy | 0.81* (27.14) | 0.35* (16.49) | 0.59* (13.65) | | 0.23* (9.18) |
| X06: Fight crime | 0.71* (22.68) | 0.49* (18.14) | 0.38* (8.55) | | 0.42* (17.55) |
| X07: More say on job | 0.53* (15.64) | 0.72* (19.38) | | −0.03 (−0.57) | 0.62* (16.19) |
| X08: More beautiful cities | 0.59* (17.58) | 0.66* (19.13) | | −0.01 (−0.24) | 0.61* (16.37) |
| X09: More say in government | 0.72* (22.91) | 0.48* (18.08) | | 0.26* (5.15) | 0.52* (18.20) |
| X10: Freedom of speech | 0.64* (19.81) | 0.59* (18.77) | | 0.35* (6.98) | 0.55* (17.60) |
| X11: Less impersonal society | 0.78* (25.64) | 0.40* (17.17) | | 0.46* (9.20) | 0.36* (12.57) |
| X12: Ideas count | 0.47* (13.45) | 0.78* (19.61) | | 0.21* (4.01) | 0.66* (19.08) |
| General Method Factor | | | 0.54* (30.05) | | |
| Correlation between Factors 1 and 2 | | | 0.79* (14.39) | | |
| N | 812 | | 812 | | |
| χ²/df | 8.06 | | 5.64 | | |
| RMSEA | 0.09 | | 0.08 | | |
| AGFI | 0.88 | | 0.92 | | |

Note: Empty cells indicate that the item was not specified to load on that factor in the confirmatory factor analysis. *p < .01; standardized loading scores are reported; t-values are in parentheses.
The results for Model 1-1 largely replicated Inglehart's finding of a one-dimensional value cleavage between the postmaterialist and materialist camps, except for the items "Maintain order" and "Fight crime," whose loadings were not significant.
All six postmaterialist items loaded negatively on this factor, in descending order of the size of their negative loadings: "Less impersonal society," "More say on job," "Ideas count," "More say in government," "More beautiful cities," and "Freedom of speech." The materialist items "Stable economy," "Economic growth," "Keep prices stable," and "Strong defense forces" loaded positively on the same factor. In Inglehart's exploratory factor analysis, a single dimension was extracted on which postmaterialist items loaded positively and materialist items loaded negatively. The finding in Model 1-1 is thus largely identical to Inglehart's famous value dimension (Inglehart, 1977, p. 46; Inglehart, 1979, pp. 314, 316; see also Bean and Papadakis, 1994, p. 275).

Model 2-1 arrived at contrasting results. All 12 items had positive loadings and were no longer polarized on a single dimension. Applying rating data to postmaterialism, Bean and Papadakis (1994, p. 278) extracted a similar dimension on which the 12 items were aligned in the same direction.11 In sum, our data generally confirmed the earlier findings of a polarized dimension for ranking data and a uniform dimension for rating data.

However, this picture changes dramatically once negativity and positivity biases are controlled for. The results of the Direct Estimation method in Model 1-2 show that all six materialist items load negatively on Factor 1, and the postmaterialist items also load negatively on Factor 2 (except for "More beautiful cities"). Moreover, the two factors are significantly and positively correlated (.370). This may come as a surprise, as the common understanding of the value dimension is that materialism and postmaterialism are opposed to each other and thus negatively correlated.12 The result implies that the value space is multidimensional rather than unidimensional: materialism and postmaterialism do not constitute a single dimension but are two distinct dimensions that can be positively correlated.

The corrected model for the rating data, Model 2-2, also differs from the uncorrected Model 2-1. Four items, "Economic growth," "Strong defense forces," "More say on job," and "More beautiful cities," are no longer statistically significant in Model 2-2 after controlling for positivity bias, but the remaining eight items remain significant with their signs unchanged. An important finding is that the two factors in Model 2-2 are strongly correlated, with a correlation coefficient of .794. This suggests that the materialist and postmaterialist items are positively correlated even after the biases are removed from the analysis.

Discussion

The above results are intriguing in that the corrected ranking model and the corrected rating model show a similar picture. After controlling for negativity bias, the ranking model revealed that materialist and postmaterialist items constitute two distinct dimensions that are positively correlated. Based on the same rank-ordered items, Inglehart maintained that postmaterialism and materialism are polarized on the same dimension and proposed the value cleavage hypothesis. Inglehart's presentation of a unidimensional cleavage is misleading, as the underlying factor analysis entails a negativity bias arising from the ipsative properties inherent in rank-ordered items.
Once the biases associated with the rank-ordered items were eliminated through the Direct Estimation method, the two dimensions appeared to be positively correlated with each other. The positive correlation between postmaterialist and materialist items can be further confirmed through the analysis of the rated items: even after controlling for potential response set biases, the two dimensions of postmaterialism and materialism show a high correlation.

Based on an exploratory factor analysis of rated items, Bean and Papadakis (1994) argued that postmaterialist and materialist items are hardly located at the two poles of the same value dimension. Our analyses join their criticism of Inglehart's conceptualization of a unidimensional value cleavage. Bean and Papadakis (1994) extracted two rotated factors representing postmaterialism and materialism, respectively. Given that the two rotated factors are orthogonal and uncorrelated by construction, Bean and Papadakis (1994, p. 278) demonstrated that postmaterialism and materialism are independent and not negatively correlated, as conceptualized in Inglehart's value dimension. Our analyses take Bean and Papadakis's argument a step further by demonstrating that the two factors are positively correlated with each other.

Inglehart (1994) replied to Bean and Papadakis by pointing out that the two factors were found to be uncorrelated because of the use of varimax rotation in the factor analysis. Inglehart (1982, p. 458; 1990, p. 144) has argued on other occasions that the use of varimax rotation for extracting dimensions is an "analytic fallacy" of "reductio ad varimax." Our analyses do not suffer from this "analytic fallacy" but still lend support to multidimensionality in the value cleavage. Inglehart's extraction of a polarized value dimension rather suffers from an analytic fallacy of reductio ad ipse.

Although we argue that both the corrected ranking model and the corrected rating model show a positive correlation between postmaterialism and materialism, the following caveat applies. A closer look at the factor loadings of each item reveals that the content of the factors differed. In particular, the postmaterialist factor (Factor 2) in the corrected ranking model (Model 1-2) was driven by "More say in government" and "More say on job," items related to the needs for self-esteem and participation, whereas that in the corrected rating model (Model 2-2) was characterized by "Less impersonal society" and "Freedom of speech," items that tap the aesthetic and intellectual needs of Maslow's hierarchy of needs, on which the theory of postmaterialism is based. Strictly speaking, the ranking and rating scales did not produce identical factors even after controlling for negativity and positivity bias. Despite these variations in the weights of the postmaterialist and materialist items, our finding is consistent in that the postmaterialism and materialism factors are positively correlated. This is congruent with the original hypothesis of a hierarchy of needs, in which people can pursue higher needs only after basic needs are fulfilled. In other words, postmaterialist needs build on materialist needs.

Airo Hino is Professor at the Faculty of Political Science and Economics, Waseda University, Tokyo, where he teaches research methodology. His main research topics are survey research and political communication in advanced democracies. Ryosuke Imai is Professor at the Center for Education and Innovation, Sojo University, Kumamoto. His main research topics are voting behavior and electoral politics in Japan.
Acknowledgments

The authors would like to thank Arthur Lupia, André Blais, and David Howell for their helpful comments on an earlier version of this note, and the anonymous reviewers for their suggestions to strengthen our analyses.

Footnotes

1. Apart from the above three approaches, centering responses around each respondent's mean score can also be effective (Schwartz, 1994). To implement this, one simply calculates the mean score for each respondent over the relevant items and defines the new item score as the difference between the original score and that mean (a minimal sketch of this centering follows the footnotes below).

2. The survey analyzed in this study, Waseda-CASI&PAPI2007, was conducted as a two-wave panel before and after the 2007 House of Councilors election in Japan (June 16 to July 11 for the first wave and August 25 to September 17 for the second wave), using two modes: Paper-and-Pencil Interview (PAPI) and Computer-Assisted Self-Interview (CASI). The response rate of the preelection survey was 42.3% (N = 1,553) and that of the postelection survey was 58.9% (N = 1,713), to which 1,380 individuals were added as a supplementary sample. The samples were drawn from the nationwide electorate based on random probability sampling. For the PAPI component, the treatments were randomized in the split sampling by stratifying on interviewers. For the CASI component, respondents were randomized within the CASI programs using the starting time of each interview as the seed.

3. The two groups are comparable, as respondents were randomly assigned to either group, generating two groups identical on all dimensions but the questionnaire format. The two groups indeed do not differ significantly in any of the sociodemographic variables such as age, gender, and educational level (see Appendix). An alternative design is to apply the two formats to the same respondents. In that case, respondents are perfectly "identical," but their reactions to the different questionnaire formats cannot be properly measured, as a question asked earlier (e.g., in rating format) is bound to contaminate the way respondents react to a later question (e.g., in ranking format).

4. The order is reversed from the Inglehart scale to maintain the same direction as the rating scales.

5. The expected additive value of the 12 ranked items is 35 (6 × 1 + 5 × 1 + 4 × 1 + 3 × 3 + 2 × 5 + 1 × 1), as there should be only one item chosen as the most important among the 12 items (6), one as the second most important among the 12 items (5), one as the third most important, i.e., the item chosen as the most important at the "section stage" but not chosen as the most or second most important in the "final stage" among the 12 items (4), and one as the least important among the 12 items (1). Among the remaining eight items, three should be chosen as the second most important at the section stage (3), and the rest should be not important but not the least important (2).

6. In our data, only a quarter of the respondents fall into the "complete" category in which the sum of the 12 items equals 35. For the rest, the additive values range from a minimum of 33 to a maximum of 39.

7. The transformation gives rise to "additive ipsative data" (AID), identical in nature to the data used in Chan and Bentler (1993) and Cheung (2004).

8. An alternative approach is to convert the ranking data into dummy variables, including missing data (Maydeu-Olivares and Böckenholt, 2005).
Missing data can then be handled using multiple imputation techniques (Brown and Maydeu-Olivares, 2012). With this approach, respondents who give intransitive answers can still be kept in the analyses.

9. We tested how dropping respondents would affect the composition of the two groups. In both Group A and Group B, the dropped respondents are older and less educated than those who remained in the analyses, reflecting that older and less educated respondents tended to give intransitive answers (in Group A) and to choose "Don't know" for one or more items (in Group B). In Group B, there were also more women among the dropped respondents. We further tested whether the remaining respondents are still comparable between Group A and Group B. There is no significant difference in gender or educational level between the two groups. Group B has slightly older respondents (by 2.71 years), given that more older respondents dropped out of Group A than out of Group B. There was no statistical difference in political knowledge. The results of these tests of sociodemographic variables are reported in the Appendix.

10. The error variance of the fifth item, "Stable economy," is set to the same value as that of the first item for the purpose of model identification. The equations for the structural model and measurement model are as follows:

  z_i = α_i f_1 + e_i  (1 ≤ i ≤ 6),   z_j = α_j f_2 + e_j  (7 ≤ j ≤ 12),
  X_i = .917 z_i − .083 Σ_{1 ≤ j ≤ 11, j ≠ i} z_j − .083 z_12  (1 ≤ i ≤ 11).

11. To further validate the similarity of our finding with Bean and Papadakis (1994), an exploratory factor analysis of the rated items used in their study was conducted. Similarly to Bean and Papadakis, we extracted a single dimension (eigenvalue = 5.63, variance explained = 46.9%) on which all 12 items loaded positively.

12. The exception is Bean and Papadakis (1994, p. 279), who find a positive correlation (r = .33) between their rating-based materialism and postmaterialism scales.
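A minimal sketch of the within-person centering mentioned in footnote 1 (ours; the column names are illustrative):

```python
# Within-person centering: express each item as a deviation from the respondent's own mean.
import pandas as pd

ITEMS = [f"X{i:02d}" for i in range(1, 13)]

def center_within_respondent(df: pd.DataFrame) -> pd.DataFrame:
    """Return item scores as deviations from each respondent's mean over the 12 items."""
    return df[ITEMS].sub(df[ITEMS].mean(axis=1), axis=0)
```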
References

Altemeyer, B. (1996). The authoritarian specter. Cambridge, MA: Harvard University Press.

Alwin, D. F., & Jackson, D. J. (1982). Adult values for children: An application of factor analysis to ranked preference data. In R. M. Hauser, D. Mechanic, A. O. Haller, & T. S. Hauser (Eds.), Social structure and behavior: Essays in honor of William Hamilton Sewell (pp. 311–329). New York, NY: Academic Press.

Alwin, D. F., & Krosnick, J. A. (1985). The measurement of values in surveys: A comparison of ratings and rankings. The Public Opinion Quarterly, 49, 535–552. doi:10.1086/268949

Bean, C., & Papadakis, E. (1994). Polarized priorities or flexible alternatives? Dimensionality in Inglehart's materialism–postmaterialism scale. International Journal of Public Opinion Research, 6, 264–288. doi:10.1093/ijpor/6.3.264

Brown, A., & Maydeu-Olivares, A. (2012). Fitting a Thurstonian IRT model to forced-choice data using Mplus. Behavior Research Methods, 44, 1135–1147. doi:10.3758/s13428-012-0217-x

Chan, W., & Bentler, P. M. (1993). The covariance structure analysis of ipsative data. Sociological Methods and Research, 22, 214–247. doi:10.1177/0049124193022002003

Cheung, M. W. L. (2004). A direct estimation method on analyzing ipsative data with Chan and Bentler's (1993) method. Structural Equation Modeling, 11, 217–243. doi:10.1207/s15328007sem1102_5

Dunlap, W. P., & Cornwell, J. M. (1994). Factor analysis of ipsative measures. Multivariate Behavioral Research, 29, 115–126. doi:10.1207/s15327906mbr2901_4

Elig, T. W., & Frieze, I. H. (1979). Measuring causal attributions for success and failure. Journal of Personality and Social Psychology, 37, 621–634. doi:10.1037/0022-3514.37.4.621

Groves, R. M., & Kahn, R. L. (1979). Surveys by telephone: A national comparison with personal interviews. New York, NY: Academic Press.

Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York, NY: McGraw-Hill.

Hellevik, O. (1993). Postmaterialism as a dimension of cultural change. International Journal of Public Opinion Research, 5, 211–233. doi:10.1093/ijpor/5.3.211

Hellevik, O. (1994). Measuring value orientation: Rating versus ranking. International Journal of Public Opinion Research, 6, 292–295.

Hicks, L. E. (1970). Some properties of ipsative, normative, and forced-choice normative measures. Psychological Bulletin, 74, 167–184. doi:10.1037/h0029780

Horst, P. (1965). Factor analysis of data matrices. New York, NY: Holt, Rinehart & Winston.

Inglehart, R. (1977). The silent revolution: Changing values and political styles among western publics. Princeton, NJ: Princeton University Press.

Inglehart, R. (1979). Value priorities and socioeconomic change. In S. H. Barnes & M. Kaase (Eds.), Political action: Mass participation in five western democracies (pp. 305–342). Beverly Hills, CA: Sage Publications.

Inglehart, R. (1982). Changing values in Japan and the West. Comparative Political Studies, 14, 445–479.

Inglehart, R. (1990). Culture shift in advanced industrial society. Princeton, NJ: Princeton University Press.

Inglehart, R. (1994). Polarized priorities or flexible alternatives? Dimensionality in Inglehart's materialism–postmaterialism scale: A comment. International Journal of Public Opinion Research, 6, 289–297. doi:10.1093/ijpor/6.3.289-a

Jackson, D. J., & Alwin, D. F. (1980). The factor analysis of ipsative measures. Sociological Methods and Research, 9, 218–238. doi:10.1177/004912418000900206

Krosnick, J. A. (1999). Maximizing questionnaire quality. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of political attitudes: Volume 2 of measures of social psychological attitudes (pp. 37–57). San Diego, CA: Academic Press.

Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. The Public Opinion Quarterly, 52, 526–538. doi:10.1086/269128

Maydeu-Olivares, A., & Böckenholt, U. (2005). Structural equation modeling of paired-comparison and ranking data. Psychological Methods, 10, 285–304. doi:10.1037/1082-989X.10.3.285

Munson, J. M., & McIntyre, S. H. (1979). Developing practical procedures for the measurement of personal values in cross-cultural marketing. Journal of Marketing Research, 16, 48–52. doi:10.2307/3150873
Paulhus, D. L. (1991). Measurement and control of response bias. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes: Volume 1 in measures of social psychological attitudes series (pp. 17–59). San Diego, CA: Academic Press.

Rankin, W. L., & Grube, J. W. (1980). A comparison of ranking and rating procedures for value system measurement. European Journal of Social Psychology, 10, 233–246. doi:10.1002/ejsp.2420100303

Reynolds, T. J., & Jolly, J. P. (1980). Measuring personal values: An evaluation of alternative methods. Journal of Marketing Research, 17, 531–536. doi:10.2307/3150506

Sacchi, S. (1998). The dimensionality of postmaterialism: An application of factor analysis to ranked preference data. European Sociological Review, 14, 151–175.

Schwartz, S. H. (1994). Are there universal aspects in the structure and content of human values? Journal of Social Issues, 50, 19–45. doi:10.1111/j.1540-4560.1994.tb01196.x

Ten Berge, J. M. F. (1999). A legitimate case of component analysis of ipsative measures, and partialling the mean as an alternative to ipsatization. Multivariate Behavioral Research, 34, 89–102. doi:10.1207/s15327906mbr3401_4

Van Deth, J. W. (1983). Ranking the ratings: The case of materialist and post-materialist value orientations. Political Methodology, 9, 407–431.

Zeller, R. A., & Carmines, E. G. (1980). Measurement in the social sciences: The link between theory and data. Cambridge: Cambridge University Press.

Appendix

Table A1. Difference of Sociodemographic Variables and Political Knowledge Between Groups

| Compared groups | Gender | Age | Education | Political knowledge |
|---|---|---|---|---|
| Group A (all, N = 848) vs. Group B (all, N = 865) | −0.81 (.21) | 1.13 (.13) | −0.53 (.30) | 0.76 (.23) |
| Group A: incomplete (N = 326) vs. complete (N = 522) | 0.08 (.47) | −4.74 (.00) | −4.54 (.00) | −2.03 (.02) |
| Group B: incomplete (N = 53) vs. complete (N = 812) | 2.58 (.01) | −5.40 (.00) | −5.94 (.00) | −4.38 (.00) |
| Group A (complete, N = 522) vs. Group B (complete, N = 812) | −0.35 (.36) | 2.35 (.01) | 0.80 (.21) | 0.96 (.17) |

Note: t-values are reported; p-values for the null hypothesis of no difference are in parentheses.
© The Author(s) 2018. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com.

International Journal of Public Opinion Research (Oxford University Press), doi:10.1093/ijpor/edy007. Published: April 27, 2018.
