# Aggregation of preference information using neural networks in group multiattribute decision analysis

## Abstract

In this paper, we deal with group multiattribute utility analysis incorporating the preferences of multiple interested individuals. Since it is difficult to repeatedly ask these individuals questions for determining the parameters of a multiattribute utility function, we gather their preference information by asking questions that are not difficult to answer, and develop a method for selecting an alternative consistent with the preference information. Assuming that the multiattribute utility function has the multiplicative form and the corresponding single-attribute utility functions are already identified, we evaluate the trade-offs between attributes by utilizing neural networks whose inputs and outputs are, respectively, the preference information elicited from the interested individuals and the scaling constants of the multiattribute utility function. By performing computational experiments, we verify that the proposed method can generate scaling constants properly. Furthermore, using the preference relations of two groups with different degrees of preferential heterogeneity, we examine the effectiveness of the proposed method, and show that the results of the numerical applications are reasonable and proper.

## 1. Introduction

In multiattribute utility analysis, the preference of a decision maker is revealed by asking a series of questions, and a multiattribute utility function of the decision maker is then identified based on that preference (Keeney & Raiffa, 1976). By using the multiattribute utility function, the most preferred alternative for the decision maker is selected.
Multiattribute utility analysis has been applied to evaluating and selecting public policies, such as the policy evaluation of Lisbon (Bana e Costa, 1988), management of nuclear waste from power plants (Keeney & Winterfeldt, 1994), economic evaluation in South Korea (Seo et al., 1999), policy selection and financing for forest preservation (Hayashida et al., 2010) and evaluation of collaborative circulating farming (Hayashida et al., 2012). For such decision problems in public sectors, or decision problems with multiple economic agents, the selected policy influences not only the relevant organizations but also a number of interested individuals and parties. A utility function reflecting the preferences of multiple decision makers is called a social welfare function, and studies from various perspectives have accumulated (Harsanyi, 1955; Arrow, 1963; Fishburn, 1969; Keeney & Kirkwood, 1975; Keeney & Raiffa, 1976; Dyer & Sarin, 1979; Limayem & DeSanctis, 2000; Baucells & Sarin, 2003; Baucells & Shapley, 2008). From a perspective different from that of social welfare functions, we develop a multiattribute decision model able to incorporate the diversified intentions of multiple interested individuals. We interviewed interested individuals in application studies of multiattribute utility analysis on policy selections for forest preservation (Hayashida et al., 2010) and collaborative farming (Hayashida et al., 2012). Although the decision maker in the study on forest preservation (Hayashida et al., 2010) is a representative of the non-profit organization operating the project, he or she needs to interview local residents and business persons for the assessment of measures and policies for financing and forest preservation, because the project involves some public aspects.
In the decision problem of collaborative farming (Hayashida et al., 2012), the representative of the collaborative farming organization is eligible for decision making, and he or she should pay adequate attention to the intentions of local farmers, given the nature of the agricultural business. In these studies, the single decision makers eventually construct multiattribute utility functions consistent with their own preferences in the light of the intentions of the interested individuals. In particular, a partial multiattribute utility function corresponding to a part of the objective hierarchy is determined on the basis of the preferences of the interested individuals. Depending on the type of decision problem, it may be required to formulate a multiattribute utility function directly reflecting the preferences of the multiple interested individuals. In general, in order to identify a multiattribute utility function, it is necessary to derive indifference points and subjective probabilities from a decision maker (Keeney & Raiffa, 1976). However, it may be difficult for some decision makers to answer such questions, depending on circumstances. In such cases, decision analysts should support the decision maker in selecting an appropriate alternative by deriving partial or incomplete information about the decision maker's preference through questions which are easy to answer.
In multiattribute utility analysis with incomplete information, through pairwise comparison between alternatives and ordering of the scaling constants, the preference of the decision maker is derived and the most preferred alternative is selected from the set of all alternatives (Jacquet-Lagreze & Siskos, 1982; Kirkwood & Sarin, 1985; Weber, 1985; Wang & Malakooti, 1992; Barron & Barrett, 1996; Park & Kim, 1997; Mousseau & Slowinski, 1998; Rios-Insua & Mateos, 1998; Holloway & White, 2003; Chen et al., 2007; Wallenius et al., 2008; Sarabando & Dias, 2009; Nishizaki et al., 2010; Vetschera et al., 2010; Doumpos & Zopounidis, 2011). In this paper, we deal with multiattribute utility analysis for decision problems in which the diversified intentions of multiple interested individuals should be taken into account, such as decision problems in public sectors, rather than problems with a single decisive leader. To identify a multiattribute utility function, namely, to determine single-attribute utility functions and then to evaluate the trade-offs between multiple attributes, it is necessary to derive indifference points and subjective probabilities from a decision maker. However, it is difficult to interview multiple interested individuals and elicit such detailed information about their preferences. By asking questions that are relatively easy to answer compared with indifference questions, we gather a wide range of preference information from the multiple interested individuals, and we develop a method for selecting an alternative that is consistent with the gathered preference information as far as possible. Assuming that the single-attribute utility functions are already identified, and that the multiattribute utility function to be identified is in the multiplicative form, we evaluate the trade-offs between attributes by using neural networks.
To be precise, the preference information of the multiple interested individuals is treated as the input to neural networks, and the outputs of the neural networks are regarded as sets of scaling constants. By employing the framework of genetic algorithms, the population of neural networks learns and evolves so as to become consistent with the preference information gathered from the multiple interested individuals.

The rest of this paper is organized as follows. Section 2 briefly reviews the fundamentals of multiattribute utility analysis, and Section 3 is devoted to reviewing group decision making based on multiattribute utility analysis. In Section 4, assuming that the multiattribute utility function is in the multiplicative form and the corresponding single-attribute utility functions are already identified, we propose a method for selecting the most preferred alternative by utilizing neural networks whose inputs and outputs are the preference information and the scaling constants of the multiattribute utility function, respectively. In Section 5, we verify by computational experiments that the proposed method can generate scaling constants properly. Moreover, we demonstrate the proposed method on a simple problem, and verify its effectiveness utilizing the preference information actually gathered in an application study (Hayashida et al., 2010). Finally, in Section 6, we give some concluding remarks.

## 2. Multiattribute utility function

Let $$X_1, \ldots, X_n$$ be $$n$$ attributes, and assume that the decision maker has a monotone increasing single-attribute utility function for each attribute. Suppose that there are $$m$$ alternatives denoted by $$\mathbf{x}^1=(x_1^1,\ldots,x_n^1), \dots, \mathbf{x}^m=(x_1^m,\ldots,x_n^m)$$.
If the set of attributes $$\{X_1, \ldots, X_n\}$$ is mutually utility independent, then for any vector of attribute values $$\mathbf{x}=(x_1,\ldots,x_n)$$, the multiattribute utility function $$U(x_1,\ldots,x_n)$$ can be represented in the multiplicative form

$$U(x_1,\ldots,x_n;k_1,\ldots,k_n)=\frac{1}{K}\left[\prod_{i=1}^n \{1+Kk_iu_i(x_i)\}-1\right], \tag{1}$$

where $$u_i$$ is a single-attribute utility function for the $$i$$th attribute $$X_i$$, $$k_i$$ is a scaling constant corresponding to $$X_i$$, which satisfies $$0 \leq k_i \leq 1$$, and $$K$$ is an additional scaling constant satisfying $$1+K = \prod_{i=1}^n (1+Kk_i)$$. If indifference points and subjective probabilities can be derived from the decision maker successfully, the vector $$\mathbf{k}=(k_1,\ldots,k_n)$$ of scaling constants is calculated in the following way (Keeney & Raiffa, 1976), and the alternatives can then be ordered by the multiattribute utility function (1). First, we derive from the decision maker the value $$x_i'$$ such that $$(x_j^*,\mathbf{x}_{\bar j}^0)=(x_1^0,\ldots,x_{j-1}^0,x_j^*,x_{j+1}^0,\ldots,x_{n}^0)$$ is indifferent to $$(x_i',\mathbf{x}_{\bar i}^0)=(x_1^0,\ldots,x_{i-1}^0,x_i',x_{i+1}^0,\ldots,x_{n}^0)$$, that is,

$$(x_j^*,\mathbf{x}_{\bar j}^0) \sim (x_i',\mathbf{x}_{\bar i}^0). \tag{2}$$

Since $$U(x_j^*,\mathbf{x}_{\bar j}^0;\mathbf{k})=U(x_i',\mathbf{x}_{\bar i}^0;\mathbf{k})$$, we obtain the relation

$$k_j=k_iu_i(x_i'), \tag{3}$$

where $$(x_j^*,\mathbf{x}_{\bar j}^0)$$ is the vector of attribute values in which the attribute $$X_j$$ takes its best value $$x_j^*$$ and the other attributes $$X_1,\ldots,X_{j-1},X_{j+1},\ldots,X_{n}$$ take their worst values $$(x_1^0,\ldots,x_{j-1}^0,x_{j+1}^0,\ldots,x_{n}^0)$$; $$(x_i',\mathbf{x}_{\bar i}^0)$$ is defined similarly.
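The condition $$1+K = \prod_{i=1}^n (1+Kk_i)$$ has no closed form for general $$n$$, but $$K$$ can be found numerically. The following Python sketch (our illustration, not part of the original paper; function names are our own) computes $$K$$ by bisection and evaluates the multiplicative utility (1), assuming linear single-attribute utilities $$u_i(x_i)=x_i$$ for simplicity.

```python
def solve_K(k, iters=200):
    """Solve 1 + K = prod_i(1 + K*k_i) for the scaling constant K by bisection.

    If sum(k) > 1 the nonzero root lies in (-1, 0); if sum(k) < 1 it is
    positive; if sum(k) == 1 the utility reduces to the additive form (K -> 0).
    Assumes at least two of the k_i are nonzero when sum(k) != 1.
    """
    s = sum(k)
    if abs(s - 1.0) < 1e-12:
        return 0.0

    def f(K):
        p = 1.0
        for ki in k:
            p *= 1.0 + K * ki
        return p - (1.0 + K)

    if s > 1.0:
        lo, hi = -1.0 + 1e-12, -1e-12      # f(lo) > 0, f(hi) < 0
    else:
        lo, hi = 1e-12, 1.0
        while f(hi) < 0.0:                 # expand until the bracket holds the root
            hi *= 2.0
    flo = f(lo)
    for _ in range(iters):                 # plain bisection on the sign change
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)


def utility(x, k):
    """Multiplicative multiattribute utility (1) with linear u_i(x_i) = x_i."""
    K = solve_K(k)
    if abs(K) < 1e-9:                      # additive limit
        return sum(ki * xi for ki, xi in zip(k, x))
    p = 1.0
    for ki, xi in zip(k, x):
        p *= 1.0 + K * ki * xi
    return (p - 1.0) / K
```

For instance, with $$\mathbf k=(0.8,0.64)$$ (a pair used later in Section 5), `solve_K` returns $$K=-0.859375$$, and the utility of the best outcome $$(1,1)$$ equals 1, as the normalization $$U(\mathbf x^*)=1$$ requires.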
Additionally, eliciting from the decision maker the probability $$\pi_i$$ such that $$(x_i^*,\mathbf{x}_{\bar i}^0)$$ is indifferent to the lottery $$(\pi_i, \mathbf{x}^*;1-\pi_i, \mathbf{x}^0)$$, that is,

$$(x_i^*,\mathbf{x}_{\bar i}^0) \sim (\pi_i,\mathbf{x}^*;1-\pi_i,\mathbf{x}^0), \tag{4}$$

we have $$U(x_i^*,\mathbf{x}_{\bar i}^0;\mathbf{k})=\pi_i U(\mathbf{x}^*;\mathbf{k})+(1-\pi_i)U(\mathbf{x}^0;\mathbf{k})$$, and it follows that

$$k_i=\pi_i, \tag{5}$$

where $$(\pi_i, \mathbf{x}^*;1-\pi_i, \mathbf{x}^0)$$ is the lottery that pays the vector $$\mathbf{x}^*$$ of best attribute values with probability $$\pi_i$$ and the vector $$\mathbf{x}^0$$ of worst attribute values with probability $$1-\pi_i$$. By identifying the relations (5) and (3) for all $$j\not= i$$, the vector $$\mathbf{k}$$ of scaling constants is determined. In general, then, determining the scaling constants requires deriving indifference points and subjective probabilities from the decision maker. For the group decision making dealt with in this paper, however, it is generally difficult to derive appropriate answers to such questions from multiple interested individuals, owing to the time constraints of interview assessment and the interviewees' limited knowledge of decision theory. From these viewpoints, we develop a group decision method in which multiple interested individuals are asked questions that are relatively easy to answer, and are not required to answer when a question is difficult.

## 3. Group decision making

As for the aggregation of individuals' utilities in decision making under certainty, Arrow's Impossibility Theorem (1963) is famous and influential. Given five fundamental assumptions, Arrow proves that no method of aggregating individuals' utilities satisfies all of the assumptions.
Keeney & Raiffa (1976) point out that this result stems from not assuming interpersonal comparison of utilities, and consider a decision problem with a single decision maker, called a supra decision maker, who takes into account the values and affections of the related group of individuals. They provide conditions under which the value function of the supra decision maker can be represented by the sum of the value functions of the members of a group. Furthermore, Keeney & Kirkwood (1975) examine a similar situation under uncertainty, and give conditions for the additive and multiplicative utility functions in the context of group decision making. Before their study was published, Harsanyi (1955) gave somewhat different conditions for the additive group utility function. For group decision making under certainty, Fleming (1952) considers an additive value function, and Dyer & Sarin (1979) examine a measurable additive value function. In the present century, using the theoretical results of Baucells & Shapley (2000, 2008), Baucells & Sarin (2003) show that a group multiattribute utility function can be represented as the weighted sum of the additive utility functions of the members of the group, assuming a bilateral agreement between pairs of members on a single attribute. Against the background of the above-mentioned theoretical development of group decision making, methods for pragmatically dealing with real problems have been developed. In a method proposed by Vetschera (1991), a group additive value function is formulated as the sum of the additive value functions of the members of the group, and then the ranking order of the group and those of its members are compared. By repeatedly adjusting the weights of the criteria and incorporating the evaluation of the group into the value functions of the members, the method leads the evaluations of the members to become consistent with that of the group.
Iz & Gardiner (1993) extensively review various methods related to the three fields of multiple criteria decision making, group decision making and decision support systems. Salo (1995) proposes an interactive approach that reduces the number of non-dominated alternatives by calculating intervals of the alternative evaluations, which are interpreted as imprecise preference information provided by members and subgroups of members of the group. Kim et al. (1999) study a method of multiple attribute group decision making in which incomplete information from members of a group is derived by using a range-type utility representation. In their method, after the preference information of the members is aggregated, the additive value function of the group is constructed, and then agreeable dominance relations among the alternatives can be derived by revising the preference information of the members. Matsatsinis et al. (2005) present a method for evaluating the preferences of members of a group using a technique related to the UTA method (Jacquet-Lagreze & Siskos, 1982). In their method, after relative utilities are calculated, the group additive utility, with weights reflecting the decision powers of the members, is formulated. Furthermore, criteria for the group are specified, and the global satisfaction is calculated. Finally, the acceptance of the obtained solution is judged by using upper and lower limits for the measurement of the group's satisfaction. In a method by Mateos et al. (2006), the number of members of the group is supposed to be relatively small. At the beginning, each member of the group evaluates attribute values, utilities and weights as intervals. After probability distributions are identified, depending on the decision powers of the members and on cooperative or non-cooperative situations, the attribute values, utilities and weights are generated by Monte Carlo simulation techniques, and the ranking of the alternatives is derived. Huang et al.
(2013) develop a multiattribute additive utility group decision model using preferential differences and preferential priorities. Bose et al. (1997) review application studies of group multiattribute utility analysis. Recently, Papamichail & French (2013) have applied group multiattribute utility methodology to nuclear emergency management, and Ondrus et al. (2015) develop foresight support systems applying group multiattribute utility methods. In all the above-mentioned group multiattribute decision methods, the number of members of a group is relatively small and the members are clearly fixed. We find no attempt to develop a methodology for decision making problems with a number of interested individuals and parties, such as decision problems in public sectors, where the selected policy influences not only the relevant organizations but also the interested individuals and parties. In such problems, in order to take their opinions and intentions into account, it is necessary to interview them, and it is desirable to gather a wide range of preference information from the multiple interested individuals. However, it is difficult to elicit detailed information about their preferences. To overcome this difficulty, we develop a new group decision method for decision making problems with a number of interested individuals and parties, in which questions that are relatively easy to answer are employed, and the preference information gathered from the individuals is used to determine the scaling constants of the multiattribute utility function.

## 4. Group multiattribute utility analysis using neural networks

In this paper, we assume that the single-attribute utility functions $$u_i$$, $$i=1,\ldots,n$$ for the attributes $$X_i$$, $$i=1,\ldots,n$$ are already identified. Furthermore, we assume that the set of attributes is mutually utility independent, and therefore the multiattribute utility function is in the multiplicative form.
We propose a method for generating multiattribute utility functions, namely, scaling constants consistent with the preferences of the multiple interested individuals as far as possible, and for selecting the most preferred alternative. In order to assess the preferences of the multiple interested individuals, we provide questions that are easy for them to answer. In our method, we employ the questions used in multiattribute utility analysis under incomplete information (Nishizaki et al., 2014), in which a decision maker is asked about the strict preference relation between hypothetical outcomes that differ in only two attribute values. The following two hypothetical outcomes are presented to each interested individual; only two attribute values differ, and the other $$(n-2)$$ attribute values are the same in the two outcomes. Let $$X_i$$ and $$X_j$$ be the two attributes with different values in the given pair of hypothetical outcomes, and let $$X_i$$ be the basis attribute. The first hypothetical outcome is

$$A^i=(x_1^0,\ldots,x_{i-1}^0,x_i^*,x_{i+1}^0,\ldots,x_n^0), \tag{6}$$

where the value of attribute $$X_i$$ is the best and those of the other attributes $$X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n$$ are the worst. The other is

$$A^j=(x_1^0,\ldots,x_{j-1}^0,x_j^*,x_{j+1}^0,\ldots,x_n^0), \tag{7}$$

where the value of attribute $$X_j$$ is the best and those of the other attributes $$X_1,\ldots,X_{j-1},X_{j+1},\ldots,X_n$$ are the worst. The interested individuals are asked which outcome they prefer, $$A^i$$ or $$A^j$$; that is, they are asked to choose between $$A^i \succ A^j$$ and $$A^j \succ A^i$$. If it is difficult for some interested individuals to choose, they do not have to answer. If the answer is $$A^i \succ A^j$$, the interested individual is then asked to find the minimum, denoted by $$x_i^{*-}$$, of the attribute value of $$X_i$$, by decreasing the attribute value from $$x_i^*$$ under the condition that this preference relation still holds.
Let the outcome with $$x_i^{*-}$$ substituted for $$x_i^{*}$$ be

$$A^{i*-}=(x_1^0,\ldots,x_{i-1}^0,x_i^{*-},x_{i+1}^0,\ldots,x_n^0). \tag{8}$$

It follows that we obtain the preference relation

$$A^{i*-}\succ A^j \tag{9}$$

as a substitute for $$A^i \succ A^j$$. Conversely, if the answer is $$A^j \succ A^i$$, the interested individual is asked to find the minimum, denoted by $$x_j^{*-}$$, of the attribute value of $$X_j$$ in a similar way. Let the outcome with $$x_j^{*-}$$ substituted for $$x_j^{*}$$ be

$$A^{j*-}=(x_1^0,\ldots,x_{j-1}^0,x_j^{*-},x_{j+1}^0,\ldots,x_n^0). \tag{10}$$

It follows that we obtain the preference relation

$$A^{j*-}\succ A^i \tag{11}$$

as a substitute for $$A^j \succ A^i$$. The preference relations (9) and (11) are related to the equality conditions

$$u(A^{i*-};\mathbf{k})-u(A^j;\mathbf{k})=\delta_{ij}^+, \tag{12}$$

$$u(A^{j*-};\mathbf{k})-u(A^i;\mathbf{k})=\delta_{ij}^-. \tag{13}$$

By asking similar questions for all pairs of attributes, the equality conditions (12) and (13) are obtained through the preference relations (9) and (11). Let $$I^+$$ and $$I^-$$ be the index sets corresponding to (12) and (13), respectively. The following mathematical programming problem can then be formulated:

$$\text{maximize}\quad \Delta=\min\left\{\min_{ij\in I^-}\delta_{ij}^-,\ \min_{ij\in I^+}\delta_{ij}^+\right\} \tag{14a}$$

$$\text{subject to}\quad u(A^{i*-};\mathbf{k})-u(A^j;\mathbf{k})=\delta_{ij}^+,\quad ij\in I^+, \tag{14b}$$

$$u(A^{j*-};\mathbf{k})-u(A^i;\mathbf{k})=\delta_{ij}^-,\quad ij\in I^-, \tag{14c}$$

$$\mathbf{k}\in C^K, \tag{14d}$$

where $$C^K$$ is the set of vectors $$\mathbf{k}=(k_1,\ldots,k_n)$$ of scaling constants, that is,

$$C^K=\{\mathbf{k}\in\mathbb{R}^n \mid 0\leq k_i\leq 1,\ i=1,\ldots,n\}. \tag{15}$$

Let $$\Delta^*$$ be the optimal value of problem (14). If $$\Delta^*\geq 0$$, there exist vectors of scaling constants satisfying all of the preference relations (9) and (11) elicited from the interested individuals; otherwise, no such vector of scaling constants exists (Nishizaki et al., 2014). When $$\Delta^*\geq 0$$, an optimal solution $$\mathbf k^*$$ of problem (14) satisfies all of the elicited preference relations (9) and (11), and it also maximizes the minimum difference between the utilities of the outcomes in the elicited preference relations.
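Problem (14) is a small nonconvex program over the box $$C^K$$. As an illustrative check (ours, not the authors' implementation), for two attributes with linear single-attribute utilities the sign of $$\Delta^*$$ can be estimated by a coarse grid search over $$(k_1,k_2)$$; the multiplicative form simplifies here because $$Kk_1k_2=1-k_1-k_2$$ follows from $$1+K=(1+Kk_1)(1+Kk_2)$$.

```python
def delta_star(relations, step=0.05):
    """Approximate the optimal value of problem (14) by grid search.

    `relations` is a list of pairs (a, b) meaning outcome a is strictly
    preferred to outcome b; each outcome is a pair (x1, x2) in [0,1]^2.
    Returns (Delta, k): the best minimum utility margin found and the
    maximizing scaling constants. Delta >= 0 indicates that some scaling
    constants satisfy every elicited preference relation.
    """
    def u(x, k):
        # Two-attribute multiplicative utility with linear u_i:
        # U = k1*x1 + k2*x2 + K*k1*k2*x1*x2, where K*k1*k2 = 1 - k1 - k2.
        return k[0] * x[0] + k[1] * x[1] + (1 - k[0] - k[1]) * x[0] * x[1]

    best_delta, best_k = float("-inf"), None
    steps = int(round(1 / step))
    for i in range(steps + 1):
        for j in range(steps + 1):
            k = (i * step, j * step)
            delta = min(u(a, k) - u(b, k) for a, b in relations)
            if delta > best_delta:
                best_delta, best_k = delta, k
    return best_delta, best_k
```

For example, the single relation $$(1,0)\succ(0,1)$$ gives margin $$k_1-k_2$$, so the search returns $$\Delta^*=1$$ at $$(k_1,k_2)=(1,0)$$, whereas the contradictory pair of relations $$(1,0)\succ(0,1)$$ and $$(0,1)\succ(1,0)$$ caps $$\Delta^*$$ at $$0$$.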
However, such a solution does not always represent the comprehensive preference of all the interested individuals, and some axiomatic approach might be needed to justify employing it. In this paper, we do not take such an approach; instead, we propose a method that generates a number of sets of scaling constants satisfying the preference relations (9) and (11). Before describing the implementation of the proposed method with neural networks, we remark on the derivation of the preference information from the interested individuals. In the application studies of multiattribute utility analysis on policy selections for forest preservation (Hayashida et al., 2010) and collaborative farming (Hayashida et al., 2012), we interviewed interested individuals, and a partial multiattribute utility function corresponding to a part of the objective hierarchy was identified on the basis of their preferences. To do so, we asked them to specify indifference points and subjective probabilities (Keeney & Raiffa, 1976). However, it was very difficult for some interested individuals to provide such information directly. We therefore first asked them which alternative they preferred, namely, we elicited preference relations such as (9) and (11) from them. As a result, we could gather a long list of such preference relations. In group multiattribute decision making, we think it is natural and effective to use them. From this perspective, in this paper, we develop a method with neural networks in which preference relations such as (9) and (11) are used as the input data of the neural networks.
In the proposed method, by utilizing neural networks in which the preference relations of the interested individuals are the inputs and the outputs are regarded as scaling constants, we generate diversified sets of scaling constants that satisfy the preference relations of the interested individuals if such scaling constants exist, or minimize the violation of the relations if they do not. A number of neural networks evolve through a genetic algorithm so that the multiattribute utility functions with the scaling constants output by the neural networks are consistent with the preferences of the interested individuals, and diversified sets of scaling constants can then be obtained within an adequate number of generations. This idea is expressed graphically in Fig. 1.

Fig. 1. Conceptual scheme.

We illustrate the structure of a neural network for a three-attribute utility function in Fig. 2. As seen in the figure, $$x_1^1, x_2^1, x_3^1, x_1^2, x_2^2, x_3^2$$ are the inputs of the neural network, which means that a preference relation $$(x_1^1,x_2^1,x_3^1) \succ (x_1^2,x_2^2,x_3^2)$$ is one of the elicited preference relations. The three outputs of the neural network correspond to a set of scaling constants $$(k_1,k_2,k_3)$$. By repeatedly inputting all of the preference relations (9) and (11) elicited from the interested individuals into the neural networks, the neural networks learn and evolve so as to be consistent with the preferences of the interested individuals, namely, to output sets of scaling constants of the multiattribute utility function satisfying the preference relations of the interested individuals as far as possible.

Fig. 2. Structure of neural networks.

Let the number of attributes be $$n$$.
The number of inputs is then $$2n$$, and the number of outputs is $$n$$. We assign $$3n$$ nodes to the hidden layer of the neural networks. Suppose that the outputs of a neural network are a set of scaling constants $$(\hat k_1,\ldots, \hat k_n)$$. If, for a given preference relation

$$(x_1^1,\ldots,x_n^1)\succ(x_1^2,\ldots,x_n^2), \tag{16}$$

the inequality

$$u(x_1^1,\ldots,x_n^1;\hat k_1,\ldots,\hat k_n)>u(x_1^2,\ldots,x_n^2;\hat k_1,\ldots,\hat k_n) \tag{17}$$

holds, we say that the preference relation (16) is consistent with the set of scaling constants $$(\hat k_1,\ldots, \hat k_n)$$. As the method for learning the neural networks, we employ a genetic algorithm. A neural network in the population of the artificial genetic system is a string composed of the synaptic weights and thresholds identifying the neural network. A fitness function is defined so as to become large if the preference relations are consistent with the set of scaling constants output by the neural network. Let $$m$$ be the number of interested individuals, and let $$l_i$$ be the number of preference relations (9) and (11) elicited from interested individual $$i$$. Let $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$ be the $$j$$th preference relation of individual $$i$$, and let $$\hat{\mathbf k}^l_{ij}$$ be the output (scaling constants) of the $$l$$th neural network for the input $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$. Let $$p^{l,st}_{ij}=1$$ if the preference relation $$(x_1^{st1},\ldots,x_n^{st1}) \succ (x_1^{st2},\ldots,x_n^{st2})$$ is consistent with the set of scaling constants $$\hat{\mathbf k}^l_{ij}$$, and let $$p^{l,st}_{ij}=0$$ otherwise. The fitness function of the $$l$$th neural network is then defined as

$$f^l=\frac{1}{\sum_{i=1}^m l_i}\sum_{i=1}^m\sum_{j=1}^{l_i}\frac{\sum_{s=1}^m\sum_{t=1}^{l_s}p^{l,st}_{ij}}{\sum_{s=1}^m l_s}, \tag{18}$$

which is the ratio of consistency over all the preference relations. The algorithm of the proposed method is summarized as follows.

Step 1. Elicit the preference relations (9) and (11) from all the interested individuals.
Step 2. By solving problem (14), check whether or not there exists a set of scaling constants satisfying all the preference relations obtained in Step 1. If the optimal value of (14) is non-negative, that is $$\Delta^* \geq 0$$, such a set of scaling constants exists; otherwise, it does not.

Step 3. To initialize the neural networks in the population of the artificial genetic system, randomly generate fixed-length binary-encoded strings composed of the synaptic weights and thresholds identifying the neural networks.

Step 4. Reconstruct the neural networks with the decoded synaptic weights and thresholds corresponding to the strings in the population.

Step 5. Perform the following operation for all neural networks. Input the data of preference relation $$j$$, $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$, of interested individual $$i$$ into neural network $$l$$, and obtain the outputs $$\hat{\mathbf k}^l_{ij}$$, which are interpreted as a set of scaling constants. If preference relation $$t$$ of interested individual $$s$$ is consistent with the set of scaling constants $$\hat{\mathbf k}^l_{ij}$$, namely, if the inequality $$u(x_1^{st1},\ldots,x_n^{st1}; \hat{\mathbf k}^l_{ij}) > u(x_1^{st2},\ldots,x_n^{st2}; \hat{\mathbf k}^l_{ij})$$ holds, let $$p^{l,st}_{ij}=1$$; otherwise, let $$p^{l,st}_{ij}=0$$. After repeatedly performing this operation for all the preference relations, calculate the fitness value (18) of neural network $$l$$.

Step 6. For the population, until the final generation is reached, perform the genetic operations, including reproduction by roulette wheel selection with elitism, exponential scaling, uniform crossover and bit-reverse mutation.
Step 7. If there exists a set of scaling constants satisfying all the preference relations, namely, $$\Delta^* \geq 0$$ in Step 2, evaluate the alternatives for all sets of scaling constants corresponding to the neural networks whose fitness value equals 1, that is $$f^l=1$$, and terminate the procedure. Otherwise, that is if $$\Delta^* < 0$$, proceed to Step 8.

Step 8. Since no set of scaling constants satisfies all the preference relations, select the neural networks with relatively large fitness values; that is, with reference to the largest fitness value, select the neural networks whose fitness values exceed $$p$$% of the maximum. After evaluating the alternatives for all sets of scaling constants corresponding to the selected neural networks, terminate the procedure.

## 5. Validation and numerical applications

### 5.1 Computational experiment for validation

To assess the validity of the proposed method, we construct eight populations, each consisting of two subgroups with preferential heterogeneity, varying the degree of the preferential heterogeneity. We verify whether the proposed method can generate scaling constants appropriately for the eight populations. We consider a simple decision problem with only two attributes $$X_1$$ and $$X_2$$. We assume that the interested individuals are composed of two groups, and that the preference within each group is homogeneous. After specifying a specific utility function for each group, we generate the preference relations of the two groups in accordance with the utility functions. We assume that the set of attributes is mutually utility independent for each group, and therefore the two-attribute utility functions $$U^1(x_1, x_2)$$ of group 1 and $$U^2(x_1, x_2)$$ of group 2 are in the multiplicative form. For the sake of simplicity, we also assume that the single-attribute utility functions of both groups are linear, and we let the domains of all the functions be the interval $$[0,1]$$.
That is, the single-attribute utility functions of group $$j$$, $$j=1,2$$ are

$$u_i^j(x_i)=x_i,\quad u_i^j(0)=0,\quad u_i^j(1)=1,\quad i=1,2.$$

The two-attribute utility functions $$U^j(x_1, x_2)$$, $$j=1,2$$ are then represented by

$$U^j(x_1,x_2)=k_1^ju_1^j(x_1)+k_2^ju_2^j(x_2)+K^jk_1^jk_2^ju_1^j(x_1)u_2^j(x_2)=k_1^jx_1+k_2^jx_2+K^jk_1^jk_2^jx_1x_2.$$

Assuming that the scaling constants of groups 1 and 2 for the first attribute $$X_1$$ are $$k_1^1=k_1^2=0.8$$, we determine the scaling constants of groups 1 and 2 for the second attribute $$X_2$$ by using the indifference relation

$$(\hat x_1^j,x_{2,\min})\sim(x_{1,\min},x_{2,\max}),$$

where $$x_{1, \min}=0$$, $$x_{2, \min}=0$$ and $$x_{2, \max}=1$$, and $$\hat x_1^j$$ is specified as follows. For group 1, let $$\hat x_1^1$$ be the fixed value $$0.8$$, and for group 2, let $$\hat x_1^2$$ range from $$0.8$$ to $$0.1$$, that is,

$$\hat x_1^1=0.8,\qquad \hat x_1^2=0.8,0.7,\ldots,0.1.$$

We define the difference between the two values as $$D = \hat x_1^1 - \hat x_1^2$$. If $$D=0$$, groups 1 and 2 are homogeneous, and as the value of $$D$$ increases, the degree of heterogeneity increases. The scaling constants calculated from the above indifference relations are shown in Table 1.
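The entries of Table 1 follow directly from the indifference relation above: with linear utilities, $$U^j(\hat x_1^j, 0)=k_1^j\hat x_1^j$$ and $$U^j(0,1)=k_2^j$$, so $$k_2^j=k_1^j\hat x_1^j$$, and for two attributes $$K^j=(1-k_1^j-k_2^j)/(k_1^jk_2^j)$$. A short Python check (our own sketch, not code from the study) reproduces the table values:

```python
def group_constants(x1_hat, k1=0.8):
    """Scaling constants implied by the indifference (x1_hat, 0) ~ (0, 1).

    With linear single-attribute utilities, U(x1_hat, 0) = k1*x1_hat and
    U(0, 1) = k2, hence k2 = k1*x1_hat; K is the nonzero root of
    1 + K = (1 + K*k1)(1 + K*k2), i.e. K = (1 - k1 - k2)/(k1*k2).
    """
    k2 = k1 * x1_hat
    K = (1.0 - k1 - k2) / (k1 * k2)
    return k1, k2, K

# Reproduce the group-2 rows of Table 1 for x1_hat = 0.8, 0.7, ..., 0.1.
for x1_hat in (0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1):
    k1, k2, K = group_constants(x1_hat)
    print(f"x1_hat={x1_hat:.1f}  k2={k2:.2f}  K={K}")
```

The computed values of $$k_2^2$$ and $$K^2$$ agree, up to rounding, with those listed in Table 1.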
Table 1 Scaling constants of groups 1 and 2

| Difference $$D$$ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 |
|---|---|---|---|---|---|---|---|---|
| $$\hat x_1^1$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| $$\hat x_1^2$$ | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |
| $$k_1^1$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| $$k_2^1$$ | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 |
| $$K^1$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ |
| $$k_1^2$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| $$k_2^2$$ | 0.64 | 0.56 | 0.48 | 0.40 | 0.32 | 0.24 | 0.16 | 0.08 |
| $$K^2$$ | $$-0.859$$ | $$-0.804$$ | $$-0.729$$ | $$-0.625$$ | $$-0.469$$ | $$-0.208$$ | $$0.313$$ | $$1.875$$ |

Utilizing the two-attribute utility functions with the scaling constants given in Table 1, we generate the preference relations for given pairs of outcomes in accordance with the two-attribute utility functions. Fifty pairs of outcomes are randomly drawn from the set of grid points $$\{(0,0), (0,0.1), \dots, (0,1), (0.1,0), (0.1,0.1), \dots , (1,0.9), (1,1)\}$$ at intervals of 0.1 in the attribute space $$[0,1]^2$$, and the chosen pairs are shown in Table 2.

Table 2 Pairs of outcomes

| # | Pair of outcomes | # | Pair of outcomes |
|---|---|---|---|
| 1 | (0.7,0.0) (0.1,0.2) | 26 | (0.8,0.2) (0.4,0.5) |
| 2 | (0.6,0.0) (0.2,0.6) | 27 | (0.8,0.8) (0.9,0.4) |
| 3 | (0.2,0.7) (0.9,0.4) | 28 | (1.0,0.3) (0.6,0.8) |
| 4 | (0.0,1.0) (0.7,0.2) | 29 | (1.0,0.2) (0.7,0.8) |
| 5 | (0.1,0.9) (0.2,0.6) | 30 | (0.8,0.4) (0.6,0.8) |
| 6 | (0.3,0.6) (0.1,0.9) | 31 | (1.0,0.1) (0.2,0.5) |
| 7 | (0.8,0.2) (0.6,0.3) | 32 | (0.0,0.7) (0.1,0.3) |
| 8 | (0.7,0.0) (0.2,0.3) | 33 | (0.0,1.0) (0.5,0.8) |
| 9 | (0.0,1.0) (0.9,0.1) | 34 | (0.4,0.2) (0.3,0.9) |
| 10 | (0.9,0.5) (0.7,0.7) | 35 | (0.5,0.6) (0.3,1.0) |
| 11 | (0.0,0.2) (0.9,0.1) | 36 | (1.0,0.5) (0.5,1.0) |
| 12 | (0.8,0.2) (0.4,0.8) | 37 | (0.2,0.7) (0.4,0.1) |
| 13 | (0.3,0.2) (1.0,0.0) | 38 | (0.3,0.5) (0.4,0.0) |
| 14 | (0.3,0.1) (0.2,0.4) | 39 | (0.0,0.2) (0.4,0.1) |
| 15 | (0.3,0.2) (0.0,0.4) | 40 | (0.8,0.4) (0.5,0.7) |
| 16 | (0.4,0.6) (1.0,0.3) | 41 | (0.5,0.7) (0.8,0.3) |
| 17 | (0.1,1.0) (0.8,0.6) | 42 | (0.3,0.9) (0.9,0.3) |
| 18 | (0.3,0.8) (0.8,0.4) | 43 | (0.4,0.4) (0.6,0.1) |
| 19 | (0.1,1.0) (0.9,0.4) | 44 | (0.9,0.7) (0.8,0.8) |
| 20 | (0.7,1.0) (0.9,0.6) | 45 | (0.2,1.0) (0.8,0.6) |
| 21 | (0.1,0.2) (0.0,0.9) | 46 | (0.8,0.3) (0.7,0.8) |
| 22 | (0.4,0.8) (0.9,0.1) | 47 | (1.0,0.7) (0.9,0.8) |
| 23 | (0.5,0.8) (0.8,0.3) | 48 | (0.7,0.6) (0.8,0.3) |
| 24 | (0.7,0.5) (0.6,0.6) | 49 | (0.9,0.0) (0.1,0.5) |
| 25 | (0.3,0.5) (0.4,0.2) | 50 | (0.9,0.0) (0.1,0.5) |

The preference relations for the pairs of outcomes given in Table 2 are determined in accordance with the two-attribute utility functions $$U^1(x_1, x_2)$$ and $$U^2(x_1, x_2)$$ of groups 1 and 2.
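The generation of the preference relations, and the evaluation of a candidate set of scaling constants against them, can be sketched as follows. We assume here that the fitness of a neural network is the fraction of the input preference relations its output scaling constants reproduce, so that a fitness of one means full consistency; the names are ours, not from the paper's implementation:

```python
def U(x, k1, k2, K):
    """Two-attribute multiplicative utility with linear single-attribute
    utilities: U(x1, x2) = k1*x1 + k2*x2 + K*k1*k2*x1*x2."""
    x1, x2 = x
    return k1 * x1 + k2 * x2 + K * k1 * k2 * x1 * x2

def generate_relations(pairs, k1, k2, K):
    """Orient each pair (a, b) so that the first outcome is the preferred one."""
    return [(a, b) if U(a, k1, k2, K) > U(b, k1, k2, K) else (b, a)
            for a, b in pairs]

def fitness(relations, k1, k2, K):
    """Fraction of relations (a, b), read as 'a preferred to b', that the
    candidate scaling constants reproduce; 1 means full consistency."""
    ok = sum(1 for a, b in relations if U(a, k1, k2, K) > U(b, k1, k2, K))
    return ok / len(relations)
```

For example, in the case $$D=0.7$$, the relation generated for pair 2 by group 1's constants $$(0.8, 0.64, -0.859)$$ is reversed under group 2's constants $$(0.8, 0.08, 1.875)$$, so its fitness against group 2 is zero.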
For each group, 50 preference relations are generated, so there are 100 preference relations in total. Consider the cases of $$D=0$$ and $$D=0.7$$ to illustrate the generation of preference relations. For the case of $$D=0$$, the two-attribute utility functions of groups 1 and 2 are

$$U^1(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\, x_1 x_2,\quad U^2(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\, x_1 x_2,$$

and the two groups have the same utility function. Therefore, the same set of preference relations is generated for the two groups. For example, for the pair $$(0.7,0.0)$$ and $$(0.1,0.2)$$, we have $$U^1(0.7,0.0)=U^2(0.7,0.0)=0.56$$ and $$U^1(0.1,0.2)=U^2(0.1,0.2)=0.199$$, and then the same preference relation $$(0.7,0.0)\succ (0.1,0.2)$$ is obtained for the two groups. For the case of $$D=0.7$$, the two-attribute utility functions of groups 1 and 2 are

$$U^1(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\, x_1 x_2,\quad U^2(x_1,x_2)=0.8x_1+0.08x_2+1.875\cdot 0.8\cdot 0.08\, x_1 x_2,$$

and the two groups have different utility functions. Therefore, different preference relations for the same pair of outcomes can be generated for the two groups. For example, for the pair $$(0.7,0.0)$$ and $$(0.1,0.2)$$, we have $$U^1(0.7,0.0)=U^2(0.7,0.0)=0.56$$, $$U^1(0.1,0.2)=0.199$$ and $$U^2(0.1,0.2)=0.098$$, and then the same preference relation $$(0.7,0.0)\succ (0.1,0.2)$$ is obtained for the two groups. For the pair $$(0.6,0.0)$$ and $$(0.2,0.6)$$, however, we have $$U^1(0.6,0.0)=U^2(0.6,0.0)=0.48$$, $$U^1(0.2,0.6)=0.491$$ and $$U^2(0.2,0.6)=0.222$$. Therefore, while the preference relation $$(0.6,0.0)\prec (0.2,0.6)$$ is obtained for group 1, the reverse preference relation $$(0.6,0.0)\succ (0.2,0.6)$$ holds for group 2. The preference relations obtained in this manner are treated as inputs to the neural networks. Let the maximum generation in the genetic algorithm be 10000; for each of the eight cases given in Table 1, the procedure of our method shown in the previous section is executed ten times. The numerical result is shown in Fig.
3, where the horizontal axis is the difference $$D$$, the left vertical axis is the fitness value, and the right vertical axis is the variance of the fitness values.

Fig. 3. Fitness values (mean, maximum, minimum and variance).

As seen in Fig. 3, the maximum of the fitness values reaches one when $$D=0$$. Since the preference relations of group 1 completely coincide with those of group 2 and no preference relation has a reverse counterpart when $$D=0$$, there exist sets of scaling constants which are consistent with all the preference relations. Moreover, since the fitness value reaches one, it follows that some neural networks output scaling constants which are consistent with all the preference relations. As the difference $$D$$ increases, the fitness value decreases because the number of inconsistent preference relations increases. The variance of the fitness values, which is less than 0.006, is adequately small owing to the sufficiently long learning period. In Fig. 4, the scaling constants $$(k_1,k_2)$$ output by the neural networks are shown, together with the scaling constants $$(k_1^1,k_2^1)$$ and $$(k_1^2,k_2^2)$$ of groups 1 and 2 calculated through the indifference relations, which are depicted as dotted lines. As for the scaling constants $$(k_1,k_2)$$ from the neural networks, we show the means, maxima and minima of the outputs of the neural networks with fitness values larger than 90% of the maximum in the whole population.

Fig. 4. Scaling constants.

As seen in Fig.
4, for the case of $$D=0$$, although the scaling constants $$(k_1,k_2)$$ from the neural networks are consistent with the preference relations because the fitness value reaches one, their means are slightly larger than the scaling constants $$(k_1^1=k_1^2=0.8,\ k_2^1=k_2^2=0.64)$$ obtained from the indifference relations. This propensity can also be found in the cases of $$D=0.1$$ to $$D=0.7$$. For the scaling constant $$k_1$$ of the first attribute $$X_1$$, although the means of $$k_1$$ from the neural networks are slightly larger than the value $$k_1^1=k_1^2=0.8$$ obtained from the indifference relations, as mentioned above, they are close to 0.8. For the scaling constant $$k_2$$ of the second attribute $$X_2$$, in the cases of $$D=0.1$$ to $$D=0.7$$, $$k_2^1$$ and $$k_2^2$$ from the indifference relations are not the same because the indifference points of group 1 differ from those of group 2. Since the preference relations of groups 1 and 2 are merged and used as the inputs to the neural networks, the values of $$k_2$$ from the neural networks are distributed between $$k_2^1$$ and $$k_2^2$$ from the indifference relations, while the means of $$k_2$$ are slightly large, similarly to $$k_1$$. We show the region of the scaling constants which are completely consistent with the preference relations for the case of $$D=0$$ in Fig. 5. As seen in the figure, the mean $$(0.87,0.73)$$ of the scaling constants from the neural networks lies in the center of the region, while the point $$(0.8,0.64)$$ of the scaling constants from the indifference relations is located at the lower left of the region. For this reason, although the pairs of outcomes used to generate the preference relations are randomly chosen, the range of the scaling constants $$(k_1,k_2)$$ from the neural networks lies slightly above the scaling constants $$(k_1^1=k_1^2, k_2^1=k_2^2)$$ from the indifference relations as a whole. Fig. 5.
Region of consistent scaling constants in the case of $$D=0$$.

These computational results demonstrate that the proposed method can generate scaling constants consistent with the preference relations used as the inputs to the neural networks.

5.2 Numerical applications

Using the actual preference information which we obtained in the multiattribute utility analysis on environmental policy selection (Hayashida et al., 2010), we demonstrate the effectiveness of the proposed method. In this decision problem, in order to preserve the water resource in the city of Higashi-Hiroshima, which is selected as one of the three most excellent sake (rice wine) brewing regions in Japan, we compared 20 alternatives for financing and activities for the preservation of the water resource by using a multiattribute utility function with 27 attributes. In particular, we interviewed the interested individuals, including the sake brewers, the local residents and the business people working in the city, and their preference information was utilized to identify a part of the multiattribute utility function. In this paper, for the sake of simplicity, we deal with two types of fund-raising methods, the eco-labelled goods and the tax, and three types of allocation rules for the fund: the allocation giving priority to planting trees (water volume), the allocation giving priority to drainpipe construction (water quality), and the allocation distributing the fund equally. The combination of the two fund-raising methods and the three allocation rules yields six alternatives.

5.2.1 Case 1: Interested individuals with similar preferences

We consider the part of the multiattribute utility function related to the sake brewers, who are supposed to have similar preferences because they are all makers of Japanese liquor.
Although the five-layered hierarchy of objectives in the decision-making problem consists of 27 attributes (Hayashida et al., 2010), the partial multiattribute utility function with respect to the sake brewers has three attributes $$X_{L1}$$, $$X_{L2}$$ and $$X_{L3}$$, which represent the quantity of the groundwater (Quantity), the quality of the groundwater (Quality) and the investment for the preservation of the forest (Investment), respectively. We assume that the corresponding single-attribute utility functions are already identified as the following constant-risk utility functions:

$$u_{L1}(x_{L1})=1.01-1.01\exp(-0.0455 x_{L1}),$$
$$u_{L2}(x_{L2})=1.42-1.93\exp(-0.300 x_{L2}),$$
$$u_{L3}(x_{L3})=1.01-0.0107\exp(0.000455 x_{L3}).$$

The attribute values for the six alternatives are shown in Table 3.

Table 3 Attribute values of alternatives $$(x_{L1},x_{L2},x_{L3})$$

| | Eco-labelled goods | Tax |
|---|---|---|
| Water quality | $$(50, 1, 2500)$$ | $$(90, 3, 7500)$$ |
| Water quantity | $$(10, 3, 2500)$$ | $$(50, 5, 2500)$$ |
| Equally allocating | $$(30, 2, 2500)$$ | $$(70, 4, 2500)$$ |

The preference relations elicited from three interested individuals of the sake brewers are as follows:

$$p_{L11}:\ (70,1,10000) \succ (0,1,0)$$, $$p_{L12}:\ (0,1,0) \succ (15,1,10000)$$, $$p_{L21}:\ (40,1,10000) \succ (0,5,10000)$$, $$p_{L22}:\ (0,5,10000) \succ (5,1,10000)$$, $$p_{L23}:\ (60,1,10000) \succ (0,1,0)$$, $$p_{L24}:\ (0,1,0) \succ (10,1,10000)$$, $$p_{L31}:\ (35,1,10000) \succ (0,5,10000)$$, $$p_{L32}:\ (0,5,10000) \succ (0,1,10000)$$, $$p_{L33}:\ (55,1,10000) \succ (0,1,0)$$, $$p_{L34}:\ (0,1,0) \succ (5,1,10000)$$.

From the three interested individuals, 2, 4 and 4 preference relations are elicited, respectively, and we have 10 preference relations in total.
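As a quick sanity check on the single-attribute utility functions above, their values at the apparent endpoints of the attribute domains are approximately 0 at the worst level and 1 at the best level, with small deviations due to the rounded coefficients; the domain ranges of roughly $$[0,100]$$ for the quantity, $$[1,5]$$ for the quality and $$[0,10000]$$ for the investment are our assumption, suggested by the preference relations. Note that the investment is a cost, so $$u_{L3}$$ decreases in $$x_{L3}$$:

```python
from math import exp

# Single-attribute utility functions of the sake brewers, copied from the text.
u_L1 = lambda x: 1.01 - 1.01 * exp(-0.0455 * x)     # Quantity: increasing
u_L2 = lambda x: 1.42 - 1.93 * exp(-0.300 * x)      # Quality: increasing
u_L3 = lambda x: 1.01 - 0.0107 * exp(0.000455 * x)  # Investment: decreasing (a cost)

# Assumed (worst, best) attribute levels; utilities are ~0 at worst, ~1 at best.
for u, worst, best in [(u_L1, 0, 100), (u_L2, 1, 5), (u_L3, 10000, 0)]:
    assert abs(u(worst)) < 0.02 and abs(u(best) - 1) < 0.02
```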
We solve problem (14) and find that the optimal value is positive, that is, $$\Delta^* > 0$$. From this fact, it follows that there exist sets of scaling constants consistent with all the preference relations elicited from the three interested individuals. Let the population size and the maximum generation be 100 and 10000, respectively. Using the above preference relations as the inputs to the neural networks, we execute the proposed procedure given in the previous section, and the fitness values of 59 neural networks out of 100 reach one. The distribution of the fitness values is shown in Fig. 6, where the neural networks are arranged in ascending order of their fitness values along the horizontal axis.

Fig. 6. Distribution of fitness values in Case 1.

The sets $$\mathbf k^l_{L}=(k^l_{L1}, k^l_{L2}, k^l_{L3})$$, $$l=1,\ldots,59$$ of scaling constants corresponding to the neural networks with fitness values of one are thought to have similar characteristics. As seen in Table 4, the means of the scaling constants decrease in the order of $$k_{L1}$$, $$k_{L3}$$ and $$k_{L2}$$, and their variances are small.
Table 4 Scaling constants for Case 1

| | $$k_{L1}$$ | $$k_{L2}$$ | $$k_{L3}$$ |
|---|---|---|---|
| Mean | 0.86 | 0.38 | 0.61 |
| Variance | 0.0039 | 0.012 | 0.0065 |

The alternative with the maximum utility is the same for all the multiattribute utility functions

$$u_L(x_{L1},x_{L2},x_{L3}; k^l_{L1},k^l_{L2},k^l_{L3}) = \frac{1}{K^l}\Bigl[\{1+K^l k^l_{L1} u_{L1}(x_{L1})\}\{1+K^l k^l_{L2} u_{L2}(x_{L2})\}\{1+K^l k^l_{L3} u_{L3}(x_{L3})\}-1\Bigr],\quad l=1,\ldots,59,$$

and it is the alternative in which the fund-raising method is the tax and the allocation rule gives priority to drainpipe construction (water quality). For group decision making with interested individuals having similar preferences, our method can generate a number of sets of scaling constants which are consistent with the preference relations elicited from the interested individuals, and the multiattribute utility functions corresponding to the selected neural networks assign the maximum utility to the same alternative.

5.2.2 Case 2: Interested individuals with dissimilar preferences

In the second case, we deal with a multiattribute utility function related to the general residents, who are supposed to have dissimilar preferences because they do not share common interests in forest preservation. Although in the application study (Hayashida et al., 2010) the residents are classified into three categories, the residents using groundwater, the residents at the foot of the mountain and the residents not using groundwater, we merge them in order to cover interested individuals with diversified preferences.
The partial multiattribute utility function with respect to the general residents has three attributes $$X_{R1}$$, $$X_{R2}$$ and $$X_{R3}$$, which represent the quantity of the groundwater (Quantity), the quality of the groundwater (Quality) and the investment for the preservation of the forest (Investment), respectively. We assume that the corresponding single-attribute utility functions are already identified as the following constant-risk utility functions:

$$u_{R1}(x_{R1})=1.00-1.00\exp(-0.139 x_{R1}),$$
$$u_{R2}(x_{R2})=1.41-1.92\exp(-0.310 x_{R2}),$$
$$u_{R3}(x_{R3})=1.15-0.15\exp(0.00204 x_{R3}).$$

The attribute values for the six alternatives are shown in Table 5. It should be noted that the attribute values $$x_{R3}$$ of the investment are small compared with the values $$x_{L3}$$ of the sake brewers in Case 1.

Table 5 Attribute values of alternatives $$(x_{R1},x_{R2},x_{R3})$$

| | Eco-labelled goods | Tax |
|---|---|---|
| Water quality | $$(50, 1, 300)$$ | $$(90, 3, 800)$$ |
| Water quantity | $$(10, 3, 300)$$ | $$(50, 5, 800)$$ |
| Equally allocating | $$(30, 2, 300)$$ | $$(70, 4, 800)$$ |

The preference relations elicited from 10 interested individuals of the general residents are as follows:
$$p_{R01}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R02}:$$ $$(0,5,1000)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R03}:$$ $$(7.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R04}:$$ $$(0,1,0)$$ $$\succ$$ $$(2.5,1,1000)$$, $$p_{R11}:$$ $$(10,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R12}:$$ $$(0,5,1000)$$ $$\succ$$ $$(5,1,1000)$$, $$p_{R13}:$$ $$(10,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R14}:$$ $$(0,1,0)$$ $$\succ$$ $$(5,1,1000)$$, $$p_{R21}:$$ $$(0,1,325)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R22}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,1,375)$$, $$p_{R23}:$$ $$(0,1,675)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R24}:$$ $$(0,5,1000)$$ $$\succ$$ $$(0,1,725)$$, $$p_{R31}:$$ $$(0,2.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R32}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,2.1,1000)$$, $$p_{R33}:$$ $$(0,3.2,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R34}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,3,1000)$$, $$p_{R41}:$$ $$(7.5,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R42}:$$ $$(0,5,1000)$$ $$\succ$$ $$(2.5,1000)$$, $$p_{R43}:$$ $$(52.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R44}:$$ $$(0,1,0)$$ $$\succ$$ $$(47.5,1,1000)$$, $$p_{R51}:$$ $$(0,3.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R52}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,3.1,1000)$$, $$p_{R53}:$$ $$(0,3.4,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R54}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,3.2,1000)$$, $$p_{R61}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R62}:$$ $$(0,5,1000)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R63}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R64}:$$ $$(0,1,0)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R71}:$$ $$(0,2.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R72}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,2.1,1000)$$, $$p_{R73}:$$ $$(0,4.5,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R74}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,4.3,1000)$$, $$p_{R81}:$$ $$(0,1,775)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R82}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,1,825)$$, $$p_{R83}:$$ $$(0,1,775)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R84}:$$ $$(0,5,1000)$$ $$\succ$$ $$(0,1,825)$$, $$p_{R91}:$$ $$(12.5,1,1000)$$ 
$$\succ$$ $$(0,5,1000)$$, $$p_{R92}:$$ $$(0,5,1000)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R93}:$$ $$(82.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R94}:$$ $$(0,1,0)$$ $$\succ$$ $$(77.5,1,1000)$$. $$p_{R01}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R02}:$$ $$(0,5,1000)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R03}:$$ $$(7.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R04}:$$ $$(0,1,0)$$ $$\succ$$ $$(2.5,1,1000)$$, $$p_{R11}:$$ $$(10,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R12}:$$ $$(0,5,1000)$$ $$\succ$$ $$(5,1,1000)$$, $$p_{R13}:$$ $$(10,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R14}:$$ $$(0,1,0)$$ $$\succ$$ $$(5,1,1000)$$, $$p_{R21}:$$ $$(0,1,325)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R22}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,1,375)$$, $$p_{R23}:$$ $$(0,1,675)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R24}:$$ $$(0,5,1000)$$ $$\succ$$ $$(0,1,725)$$, $$p_{R31}:$$ $$(0,2.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R32}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,2.1,1000)$$, $$p_{R33}:$$ $$(0,3.2,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R34}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,3,1000)$$, $$p_{R41}:$$ $$(7.5,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R42}:$$ $$(0,5,1000)$$ $$\succ$$ $$(2.5,1000)$$, $$p_{R43}:$$ $$(52.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R44}:$$ $$(0,1,0)$$ $$\succ$$ $$(47.5,1,1000)$$, $$p_{R51}:$$ $$(0,3.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R52}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,3.1,1000)$$, $$p_{R53}:$$ $$(0,3.4,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R54}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,3.2,1000)$$, $$p_{R61}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,5,1000)$$, $$p_{R62}:$$ $$(0,5,1000)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R63}:$$ $$(12.5,1,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R64}:$$ $$(0,1,0)$$ $$\succ$$ $$(7.5,1,1000)$$, $$p_{R71}:$$ $$(0,2.3,1000)$$ $$\succ$$ $$(100,1,1000)$$, $$p_{R72}:$$ $$(100,1,1000)$$ $$\succ$$ $$(0,2.1,1000)$$, $$p_{R73}:$$ $$(0,4.5,1000)$$ $$\succ$$ $$(0,1,0)$$, $$p_{R74}:$$ $$(0,1,0)$$ $$\succ$$ $$(0,4.3,1000)$$, $$p_{R81}:$$ $$(0,1,775)$$ $$\succ$$ $$(100,1,1000)$$, 
$$p_{R82}: (100,1,1000) \succ (0,1,825)$$, $$p_{R83}: (0,1,775) \succ (0,5,1000)$$, $$p_{R84}: (0,5,1000) \succ (0,1,825)$$, $$p_{R91}: (12.5,1,1000) \succ (0,5,1000)$$, $$p_{R92}: (0,5,1000) \succ (7.5,1,1000)$$, $$p_{R93}: (82.5,1,1000) \succ (0,1,0)$$, $$p_{R94}: (0,1,0) \succ (77.5,1,1000)$$. From each of the 10 interested individuals, 4 preference relations are elicited, and we have 40 preference relations in total. Solving problem (14), we find that the optimal value is negative, that is, $$\Delta^* < 0$$. It follows that there does not exist any set of scaling constants consistent with all the preference relations elicited from the interested individuals, and hence the fitness value of no neural network reaches one in the genetic algorithm. Since the number of preference relations in Case 2 is four times that of Case 1, we set the maximum generation at four times that of Case 1 and keep the population size the same; namely, the maximum generation is 40000 and the population size is 100. The result is shown in Fig. 7 in the same form as Fig. 6.

Fig. 7. Distribution of fitness values in Case 2.

As seen in Fig. 7, the maximum fitness in the population is 0.65. Let the fraction of the selected neural networks in Step 8 of the procedure given in the previous section be $$p=0.9$$.
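The selection rule in Step 8 can be sketched as follows; the function name `select_networks` and the toy fitness values are illustrative stand-ins, not taken from the paper:

```python
def select_networks(fitness_values, p=0.9):
    """Keep the networks whose fitness exceeds p times the population maximum."""
    threshold = p * max(fitness_values)
    return [i for i, f in enumerate(fitness_values) if f > threshold]

# Toy population of 10 fitness values whose maximum is 0.65, as in Case 2;
# with p = 0.9, networks with fitness above 0.9 * 0.65 = 0.585 survive.
fitness = [0.52, 0.61, 0.65, 0.63, 0.55, 0.60, 0.64, 0.59, 0.62, 0.58]
selected = select_networks(fitness, p=0.9)
```

With $$p=0.9$$ this reproduces the rule stated above: only networks whose fitness exceeds 90% of the population maximum are retained.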
Thus, the neural networks with fitness values larger than 90% of the maximum in the population are selected. The number of selected neural networks is 81, and we give the statistical data of the sets $$\mathbf k^i_{R}$$, $$i=1,\ldots,81$$ of scaling constants for the selected neural networks in Table 6. The means of the scaling constants decrease in the order of $$k_{R1}$$, $$k_{R2}$$ and $$k_{R3}$$, and their variances are small.

Table 6. Scaling constants for Case 2

|          | $$k_{R1}$$ | $$k_{R2}$$ | $$k_{R3}$$ |
|----------|------------|------------|------------|
| Mean     | 0.86       | 0.63       | 0.49       |
| Variance | 0.00035    | 0.00072    | 0.00520    |

The alternative with the maximum utility is the same for all the multiattribute utility functions $$u_R(x_{R1},x_{R2},x_{R3};k^i_{R1},k^i_{R2},k^i_{R3}) = \frac{1}{K^i}\bigl[\{1+K^i k^i_{R1} u_{R1}(x_{R1})\}\{1+K^i k^i_{R2} u_{R2}(x_{R2})\}\{1+K^i k^i_{R3} u_{R3}(x_{R3})\}-1\bigr]$$, $$i=1,\ldots,81$$, and it is the alternative in which the fund-raising method is the tax and the allocation rule gives priority to planting trees (water volume). For group decision making with interested individuals holding dissimilar preferences, even though no set of scaling constants is consistent with all the preference relations elicited from the interested individuals, our method can generate scaling constants consistent with as many preference relations as possible, and the multiattribute utility functions corresponding to the selected neural networks maximize the utility value of the same alternative. 5.2.3 Discussion By the computational experiment for validation, we demonstrate that the proposed method functions successfully over a wide range of populations for group decision making.
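The claim that every selected utility function ranks the same alternative first can be checked mechanically. A minimal sketch, assuming illustrative linear single-attribute utilities, hypothetical alternatives, and three stand-in scaling-constant sets near the Table 6 means (the paper's 81 sets, actual attribute ranges and real policy alternatives are not reproduced here):

```python
def solve_K(k, iters=100):
    """Bisection for the nonzero K solving 1 + K = prod(1 + K*k_i);
    when sum(k) > 1, as for the Table 6 means, the root lies in (-1, 0)."""
    def f(K):
        p = 1.0
        for ki in k:
            p *= 1 + K * ki
        return p - 1 - K
    lo, hi = -1 + 1e-9, -1e-9
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) > 0:
            lo = mid          # root lies in [mid, hi]
        else:
            hi = mid          # root lies in [lo, mid]
    return (lo + hi) / 2

def utility(x, k, u_single):
    """Multiplicative multiattribute utility of outcome x given constants k."""
    K = solve_K(k)
    p = 1.0
    for xi, ki, ui in zip(x, k, u_single):
        p *= 1 + K * ki * ui(xi)
    return (p - 1) / K

# Illustrative linear single-attribute utilities on assumed ranges
# [0, 100], [1, 5] and [0, 1000] (assumptions, not from the paper).
u_single = [lambda v: v / 100, lambda v: (v - 1) / 4, lambda v: v / 1000]
alternatives = [(100, 1, 0), (50, 3, 500), (0, 5, 1000)]    # hypothetical
k_sets = [(0.86, 0.63, 0.49), (0.85, 0.64, 0.50), (0.87, 0.62, 0.48)]
bests = {max(alternatives, key=lambda x: utility(x, ks, u_single))
         for ks in k_sets}
# all scaling-constant sets agree on the best alternative
# exactly when len(bests) == 1
```

In this toy setting the best alternative coincides for all three stand-in sets, mirroring the behaviour reported for the 81 selected networks.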
Moreover, the two numerical applications show the applicability of the proposed method to real-world decision problems. To be more precise, in Case 1 of the numerical applications, the proposed method generates scaling constants from a group in which the degree of preferential heterogeneity is low. The population size of the artificial genetic system is 100. After the final generation, the fitness values of 59 individuals out of 100 reach the maximum of 1. As seen in Fig. 6, the fitness values are larger than 0.6, and the population appears to have evolved appropriately. According to the procedure presented in Section 4, the individuals representing the targeted group correspond to the set of individuals with the maximum fitness value of 1, accounting for about 60% of the population. The multiattribute utility functions with the obtained scaling constants are completely consistent with the preference relations of the interested individuals, and the utility value of the same alternative is maximal for all the multiattribute utility functions. In Case 2, the proposed method generates scaling constants from a group in which the degree of preferential heterogeneity is relatively high. As seen in Fig. 7, at the final generation the maximum fitness in the population is 0.65, and the fitness values are larger than 0.5. For this case too, we consider that the population has evolved appropriately. Let $$p=0.9$$; according to the procedure, the individuals whose fitness values exceed 90% of the maximum in the population can be interpreted as providing scaling constants representing the targeted group. Since there does not exist any multiattribute utility function completely consistent with the preference relations of the interested individuals, the maximum fitness does not reach 1.
However, it is thought that the proposed method can generate scaling constants which properly represent the target group, because the variance of each scaling constant is small, as seen in Table 6. As a result, the utility value of the same alternative is maximized for all the multiattribute utility functions with the obtained scaling constants. The computational experiment and the numerical applications demonstrate that the proposed method can reflect the diverse preferences of a number of individuals engaged in decision problems and then select alternatives appropriate for group multiattribute decision making. 6. Conclusions In this paper, we deal with group multiattribute utility analysis utilizing the preference relations elicited from interested individuals. It is difficult to interview a number of interested individuals and elicit from them detailed answers about their preferences, such as indifference points and subjective probabilities. By asking questions that are relatively easy to answer compared with the indifference questions, we gather a wide range of preference information from the interested individuals, and we develop a method for selecting an alternative that is consistent with the gathered preference information as much as possible. We utilize a large population of neural networks to represent the diversified preferences of the interested individuals. The preference information of the interested individuals is used as the input data of the neural networks, and the outputs of the neural networks are regarded as the scaling constants. The neural networks evolve through a genetic algorithm so that the multiattribute utility functions with the scaling constants output by the neural networks become consistent with the preference relations of the interested individuals, and we thereby obtain a number of sets of scaling constants reflecting the preferences of the interested individuals.
After verifying the validity of the proposed method, we demonstrate its effectiveness by applying it to the environmental policy selection problem (Hayashida et al., 2010). The preference relations used in the proposed method are relatively easy to gather, as illustrated by the instances in Section 5.2. Owing to this favourable property, it is expected that, by employing the proposed method for real-world decision problems in public sectors or decision problems with multiple economic agents, the intentions of the many related individuals and parties can be properly reflected in the selected policy. Funding This work was supported by JSPS KAKENHI Grant No. 26282086. References
Arrow, K. J. (1963) Social Choice and Individual Values, 2nd edn. New York: Wiley.
Bana e Costa, C. A. (1988) A methodology for sensitivity analysis in three-criteria problems: a case study in municipal management. Eur. J. Oper. Res., 3, 159–173.
Barron, F. H. & Barrett, B. E. (1996) Decision quality using ranked attribute weights. Manag. Sci., 42, 1515–1523.
Baucells, M. & Sarin, R. K. (2003) Group decisions with multiple criteria. Manag. Sci., 4, 1105–1118.
Baucells, M. & Shapley, L. S. (2000) Multiperson Utility. Los Angeles, CA: The Anderson School at UCLA.
Baucells, M. & Shapley, L. S. (2008) Multiperson utility. Games Econ. Behav., 6, 329–347.
Bose, U., Davey, A. M. & Olson, D. L. (1997) Multi-attribute utility methods in group decision making: past applications and potential for inclusion in GDSS. Omega, 2, 691–706.
Chen, Y., Hipel, K. W. & Kilgour, D. M. (2007) Multiple criteria sorting using case-based distance models with an application in water resources management. IEEE Trans. Syst. Man Cybern., Part A, 37, 680–691.
Doumpos, M. & Zopounidis, C. (2011) Preference disaggregation and statistical learning for multicriteria decision support: a review. Eur. J. Oper. Res., 209, 203–214.
Dyer, J. S. & Sarin, R. (1979) Measurable multiattribute value functions. Oper. Res., 2, 810–822.
Fishburn, P. C. (1969) Preferences, summation, and social welfare functions. Manag. Sci., 1, 179–186.
Fleming, M. (1952) A cardinal concept of welfare. Q. J. Econ., 6, 366–384.
Greco, S., Matarazzo, B. & Slowinski, R. (2001) Rough sets theory for multicriteria decision analysis. Eur. J. Oper. Res., 129, 1–47.
Harsanyi, J. C. (1955) Cardinal welfare, individualistic ethics, and interpersonal comparison of utility. J. Polit. Econ., 6, 309–321.
Hayashida, T., Nishizaki, I. & Ueda, Y. (2010) Multiattribute utility analysis for policy selection and financing for the preservation of the forest. Eur. J. Oper. Res., 2, 833–843.
Hayashida, T., Nishizaki, I., Ueda, Y. & Honda, H. (2012) Multi-criteria evaluation for collaborative circulating farming with collective operations between arable and cattle farmers. J. Multi-Crit. Decis. Anal., 1, 227–245.
Holloway, H. A. & White, C. C. III (2003) Question selection for multiattribute decision-aiding. Eur. J. Oper. Res., 1, 525–533.
Huang, Y.-S., Chang, W.-C., Li, W.-H. & Lin, Z.-L. (2013) Aggregation of utility-based individual preferences for group decision-making. Eur. J. Oper. Res., 2, 462–469.
Iz, P. H. & Gardiner, L. R. (1993) Analysis of multiple criteria decision support systems for cooperative groups. Group Decis. Negot., 2, 61–79.
Jacquet-Lagreze, E. & Siskos, Y. (1982) Assessing a set of additive utility functions for multicriteria decision-making, the UTA method. Eur. J. Oper. Res., 1, 151–164.
Keeney, R. L. & Kirkwood, C. (1975) Group decision making using cardinal social welfare functions. Manag. Sci., 2, 430–437.
Keeney, R. L. & Raiffa, H. (1976) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.
Keeney, R. L. & von Winterfeldt, D. (1994) Managing nuclear waste from power plants. Risk Anal., 1, 107–130.
Kim, S. H., Choi, S. H. & Kim, J. K. (1999) An interactive procedure for multiple attribute group decision making with incomplete information: range-based approach. Eur. J. Oper. Res., 1, 139–152.
Kirkwood, C. W. & Sarin, R. K. (1985) Ranking with partial information: a method and an application. Oper. Res., 3, 38–48.
Limayem, M. & DeSanctis, G. (2000) Providing decisional guidance for multicriteria decision making in groups. Inf. Syst. Res., 1, 386–401.
Mateos, A., Jiménez, A. & Ríos-Insua, S. (2006) Monte Carlo simulation techniques for group decision making with incomplete information. Eur. J. Oper. Res., 1, 1842–1864.
Matsatsinis, N. F., Grigoroudis, E. & Samaras, A. (2005) Aggregation and disaggregation of preferences for collective decision-making. Group Decis. Negot., 1, 217–232.
Mousseau, V. & Slowinski, R. (1998) Inferring an ELECTRE-TRI model from assignment examples. J. Global Optim., 1, 157–174.
Nishizaki, I., Hayashida, T. & Ohmi, M. (2014) Multiattribute decision analysis using strict preference relations. Ann. Oper. Res., https://doi.org/10.1007/s10479-014-1680-9.
Nishizaki, I., Katagiri, H. & Hayashida, T. (2010) Sensitivity analysis incorporating fuzzy evaluation for scaling constants of multiattribute utility functions. Cent. Eur. J. Oper. Res., 1, 383–396.
Ondrus, J., Bui, T. & Pigneur, Y. (2015) A foresight support system using MCDM methods. Group Decis. Negot., 2, 333–358.
Papamichail, K. N. & French, S. (2013) 25 years of MCDA in nuclear emergency management. IMA J. Manag. Math., 2, 481–503.
Park, K. S. & Kim, S. H. (1997) Tools for interactive multiattribute decision making with incompletely identified information. Eur. J. Oper. Res., 9, 111–123.
Rios-Insua, S. & Mateos, A. (1998) The utility efficient set and its interactive reduction. Eur. J. Oper. Res., 1, 581–593.
Salo, A. A. (1995) Interactive decision aiding for group decision support. Eur. J. Oper. Res., 8, 134–149.
Sarabando, P. & Dias, L. C. (2009) Multiattribute choice with ordinal information: a comparison of different decision rules. IEEE Trans. Syst. Man Cybern., Part A, 39, 545–554.
Seo, F., Nishizaki, I. & Park, S. H. (1999) Multiattribute utility analysis using object-oriented programming MAP and its application. J. Bus. Admin. Inf., 7, Setsunan University, 27–57.
Vetschera, R. (1991) Integrating databases and preference evaluations in group decision support: a feedback-oriented approach. Decis. Support Syst., 7, 67–77.
Vetschera, R., Chen, Y., Hipel, K. & Kilgour, D. (2010) Robustness and information levels in case-based multiple criteria sorting. Eur. J. Oper. Res., 2, 841–852.
Wallenius, J., Dyer, J. S., Fishburn, P. C., Steuer, R. E., Zionts, S. & Deb, K. (2008) Multiple criteria decision making, multiattribute utility theory: recent accomplishments and what lies ahead. Manag. Sci., 5, 1336–1349.
Wang, J. & Malakooti, B. (1992) A feedforward neural network for multiple criteria decision making. Comput. Oper. Res., 1, 151–167.
Weber, M. (1985) A method of multiattribute decision making with incomplete information. Manag. Sci., 3, 1365–1371.
© The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.


Advance Article, published Sep 28, 2016
22 pages
Publisher: Oxford University Press
ISSN: 1471-678X
eISSN: 1471-6798
DOI: 10.1093/imaman/dpw019

In the decision problem of the collaborative farming (Hayashida et al., 2012), the representative of the collaborative farming organization is eligible for decision making, and he or she should pay adequate attention to the intentions of local farmers, given the nature of the agricultural business. In these studies, the single decision makers eventually construct multiattribute utility functions consistent with their own preferences in the light of the intentions of the interested individuals. In particular, a partial multiattribute utility function corresponding to a part of the objective hierarchy is determined on the basis of the preferences of the interested individuals. Depending on the type of decision problem, it may be required to formulate a multiattribute utility function directly reflecting the preferences of the multiple interested individuals. In general, in order to identify a multiattribute utility function, it is necessary to derive indifference points and subjective probabilities from a decision maker (Keeney & Raiffa, 1976). However, it may be difficult for some decision makers to answer such questions, depending on the circumstances. In such cases, decision analysts should support the decision maker in selecting an appropriate alternative by deriving partial or incomplete information about the decision maker's preference through questions which are easy to answer.
In multiattribute utility analysis with incomplete information, the preference of the decision maker is derived through pairwise comparisons between alternatives and orderings of the scaling constants, and the most preferred alternative is selected from the set of all alternatives (Jacquet-Lagreze & Siskos, 1982; Kirkwood & Sarin, 1985; Weber, 1985; Wang & Malakooti, 1992; Barron & Barrett, 1996; Park & Kim, 1997; Mousseau & Slowinski, 1998; Rios-Insua & Mateos, 1998; Holloway & White, 2003; Chen et al., 2007; Wallenius et al., 2008; Sarabando & Dias, 2009; Nishizaki et al., 2010; Vetschera et al., 2010; Doumpos & Zopounidis, 2011). In this paper, we deal with multiattribute utility analysis for decision problems in which the diversified intentions of multiple interested individuals should be taken into account, such as decision problems in public sectors, rather than problems with a single decisive leader. To identify a multiattribute utility function, namely, to determine single-attribute utility functions and then to evaluate the trade-offs between multiple attributes, it is necessary to derive indifference points and subjective probabilities from a decision maker. However, it is difficult to interview multiple interested individuals and elicit such detailed information about their preferences. By asking questions that are relatively easy to answer compared with the indifference questions, we gather a wide range of preference information from the multiple interested individuals, and we develop a method for selecting an alternative that is consistent with the gathered preference information as much as possible. Assuming that the single-attribute utility functions are already identified and that the multiattribute utility function to be identified is in the multiplicative form, we evaluate the trade-offs between attributes by using neural networks.
To be precise, the preference information of the multiple interested individuals is treated as the inputs to neural networks, and the outputs of the neural networks are regarded as a set of scaling constants. By employing the framework of genetic algorithms, the population of neural networks is trained and evolved so as to be consistent with the preference information gathered from the multiple interested individuals. The rest of this paper is organized as follows. Section 2 briefly reviews the fundamentals of multiattribute utility analysis, and Section 3 reviews group decision making based on multiattribute utility analysis. In Section 4, assuming that the multiattribute utility function is in the multiplicative form and the corresponding single-attribute utility functions are already identified, we propose a method for selecting the most preferred alternative by utilizing neural networks whose inputs and outputs are the preference information and the scaling constants for the multiattribute utility function, respectively. In Section 5, we verify by computational experiments that the proposed method can generate scaling constants properly. Moreover, we demonstrate the proposed method on a simple problem, and verify its effectiveness by utilizing the preference information actually gathered in an application study (Hayashida et al., 2010). Finally, in Section 6, we give some concluding remarks. 2. Multiattribute utility function Let $$X_1, \ldots, X_n$$ be $$n$$ attributes, and assume that the decision maker has a monotonically increasing single-attribute utility function for each attribute. Suppose that there are $$m$$ alternatives denoted by $$\mathbf{x}^1=(x_1^1,\ldots,x_n^1), \dots , \mathbf{x}^m=(x_1^m,\ldots,x_n^m)$$.
If the set of attributes $$\{X_1, \ldots, X_n\}$$ is mutually utility independent, then for any vector of attribute values $$\mathbf{x}=(x_1,\ldots,x_n)$$, a multiattribute utility function $$U(x_1,\ldots,x_n)$$ can be represented in the multiplicative form $$U(x_1,\ldots,x_n;k_1,\ldots,k_n)=\frac{1}{K}\left[\prod_{i=1}^n \{1+Kk_iu_i(x_i)\}-1\right], \qquad (1)$$ where $$u_i$$ is the single-attribute utility function for the $$i$$th attribute $$X_i$$, $$k_i$$ is a scaling constant corresponding to $$X_i$$, which satisfies $$0 \leq k_i \leq 1$$, and $$K$$ is an additional scaling constant satisfying $$1+K = \prod_{i=1}^n (1+Kk_i)$$. If indifference points and subjective probabilities can be derived from the decision maker successfully, the vector $$\mathbf{k}=(k_1,\ldots,k_n)$$ of scaling constants is calculated in the following way (Keeney & Raiffa, 1976), and then the multiple alternatives can be ordered by the multiattribute utility function (1). Deriving from the decision maker the value $$x_i'$$ such that $$(x_j^*,\mathbf{x}_{\bar j}^0)=(x_1^0,\ldots,x_{j-1}^0,x_j^*,x_{j+1}^0,\ldots,x_{n}^0)$$ is indifferent to $$(x_i',\mathbf{x}_{\bar i}^0)=(x_1^0,\ldots,x_{i-1}^0,x_i',x_{i+1}^0,\ldots,x_{n}^0)$$, that is, $$(x_j^*,\mathbf{x}_{\bar j}^0)\sim(x_i',\mathbf{x}_{\bar i}^0), \qquad (2)$$ and noting that $$U(x_j^*,\mathbf{x}_{\bar j}^0;\mathbf{k})=U(x_i',\mathbf{x}_{\bar i}^0;\mathbf{k})$$, we obtain the relation $$k_j=k_iu_i(x_i'), \qquad (3)$$ where $$(x_j^*,\mathbf{x}_{\bar j}^0)$$ is a vector of attribute values such that the value of attribute $$X_j$$ is the best value $$x_j^*$$ and those of the other attributes $$X_1 ,\ldots,X_{j-1},X_{j+1},\ldots,X_{n}$$ are the worst values $$(x_1^0,\ldots,x_{j-1}^0,x_{j+1}^0,\ldots,x_{n}^0)$$; $$(x_i',\mathbf{x}_{\bar i}^0)$$ is defined similarly.
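The constant $$K$$ in (1) has no closed form in general, but it can be found numerically from the constraint $$1+K=\prod_{i=1}^n(1+Kk_i)$$. A minimal sketch by bisection, with an illustrative function name and sample values: when $$\sum_i k_i>1$$ the nonzero root lies in $$(-1,0)$$, when $$\sum_i k_i<1$$ it is positive, and when $$\sum_i k_i=1$$ the utility function reduces to the additive form with $$K\to 0$$.

```python
def normalization_constant(k, tol=1e-12):
    """Find the nonzero K with 1 + K = prod(1 + K*k_i) by bisection;
    returns 0.0 in the additive case sum(k) == 1."""
    s = sum(k)
    if abs(s - 1) < 1e-9:
        return 0.0
    def f(K):
        p = 1.0
        for ki in k:
            p *= 1 + K * ki
        return p - 1 - K
    if s > 1:                      # root lies in (-1, 0)
        lo, hi = -1 + tol, -tol
    else:                          # root is positive: grow hi until f(hi) > 0
        lo, hi = tol, 1.0
        while f(hi) < 0:
            hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

K = normalization_constant((0.4, 0.3, 0.2))   # sum(k) = 0.9 < 1, so K > 0
```

Bisection suffices here because $$f(K)=\prod_i(1+Kk_i)-1-K$$ changes sign exactly once on each bracketing interval above.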
Additionally, eliciting from the decision maker the probability $$\pi_i$$ such that $$(x_i^*,\mathbf{x}_{\bar i}^0)$$ is indifferent to $$(\pi_i, \mathbf{x}^*;1-\pi_i, \mathbf{x}^0)$$, that is, $$(x_i^*,\mathbf{x}_{\bar i}^0)\sim(\pi_i,\mathbf{x}^*;1-\pi_i,\mathbf{x}^0), \qquad (4)$$ we have $$U(x_i^*,\mathbf{x}_{\bar i}^0;\mathbf{k})=\pi_i U(\mathbf{x}^*;\mathbf{k})+(1-\pi_i)U(\mathbf{x}^0;\mathbf{k})$$, and then it follows that $$k_i=\pi_i, \qquad (5)$$ where $$(\pi_i, \mathbf{x}^*;1-\pi_i, \mathbf{x}^0)$$ is a lottery that yields the vector $$\mathbf{x}^*$$ of the best attribute values with probability $$\pi_i$$ and the vector $$\mathbf{x}^0$$ of the worst attribute values with probability $$1-\pi_i$$. By identifying relation (5) and relations (3) for all $$j\not= i$$, the vector $$\mathbf{k}$$ of scaling constants is determined. In general, in order to determine the scaling constants, it is necessary to derive indifference points and subjective probabilities from the decision maker. For the group decision making dealt with in this paper, however, it is generally difficult to derive appropriate answers to such questions from multiple interested individuals, owing to the time constraints of interview assessment and the interviewees' limited knowledge of decision theory. From these viewpoints, we develop a group decision method in which the multiple interested individuals are asked questions that are relatively easy to answer, and they are not required to answer a question they find difficult. 3. Group decision making As for the aggregation of utilities of individuals in decision making under certainty, Arrow's impossibility theorem (1963) is famous and influential. Providing five fundamental assumptions, he proves that there does not exist any method of aggregating the utilities of individuals that satisfies all the assumptions.
Keeney & Raiffa (1976) point out that this result is ascribed to the absence of interpersonal utility comparison, and consider a decision problem with a single decision maker, called a supra decision maker, who takes into account the values and affections of the related group of individuals. They provide conditions under which the value function of the supra decision maker can be represented as the sum of the value functions of the members of the group. Furthermore, Keeney & Kirkwood (1975) examine a similar situation under uncertainty, and give conditions for the additive and multiplicative utility functions in the context of group decision making. Before their study was published, Harsanyi (1955) had also given somewhat different conditions for the additive group utility function. For group decision making under certainty, Fleming (1952) considers an additive value function, and Dyer & Sarin (1979) examine a measurable additive value function. In the present century, using the theoretical results of Baucells & Shapley (2000, 2008), Baucells & Sarin (2003) show that a group multiattribute utility function can be represented as the weighted sum of the additive utility functions of the members of the group, assuming a bilateral agreement among pairs of members on a single attribute. Against the background of the above-mentioned theoretical development of group decision making, methods to deal pragmatically with real problems have been developed. In a method proposed by Vetschera (1991), a group additive value function is formulated as the sum of the additive value functions of the members of the group, and then the ranking order of the group and those of its members are compared. By repeatedly adjusting the weights for the criteria and adding the evaluation of the group into the value functions of the members, the method leads the evaluations of the members to be consistent with that of the group.
Iz & Gardiner (1993) extensively review various methods related to three fields of multiple criteria decision making, group decision making and decision support systems. Salo (1995) proposes an interactive approach reducing the number of non-dominated alternatives, calculating intervals of the alternative evaluations which are interpreted as imprecise preference information provided from members and subgroups of members in the group. Kim et al. (1999) study a method of multiple attribute group decision making where incomplete information of members in a group is derived by using the range-type utility representation. In their method, after the preference information of the members is aggregated, the additive value function of the group is constructed, and then agreeable dominance relations among the alternatives can be derived by revising the preference information of the members. Matsatsinis et al. (2005) present a method evaluating the preferences of members in a group by using a technique related to the UTA method (Jacquet-Lagreze & Siskos, 1982). In their method, after calculating relative utilities, the group additive utility with weighs of decision powers of the members is formulated. Furthermore, criteria for the group are specified, and then the global satisfaction is calculated. Finally, the acceptance of the obtained solution is judged by using an upper and a lower limits for the measurement of the group’s satisfaction. In a method by Mateos et al. (2006), it is supposed that the number of members in a group is relative small. At the beginning, each member in the group evaluates attribute values, utilities and weights as intervals. After identifying probability distributions depending on decision powers of the members and cooperative or non-cooperative situations, the attribute values, utilities and weights are generated based on Monte Carlo simulation techniques, and the ranking of the alternatives is derived. Huang et al. 
(2013) develop a multiattribute additive utility group decision model using preferential differences and preferential priorities. Bose et al. (1997) review application studies of group multiattribute utility analysis. Recently, Papamichail & French (2013) apply group multiattribute utility methodology to nuclear emergency management, and Ondrus et al. (2015) develop foresight support systems applying group multiattribute utility methods. In all the above-mentioned group multiattribute decision methods, the number of members in a group is relatively small and the members are clearly fixed. We find no attempt to develop a methodology for decision making problems with a number of interested individuals and parties, such as decision problems in public sectors where the selected policy influences not only the relevant organizations but also the interested individuals and parties. In such problems, in order to take their opinions and intentions into account, it is necessary to interview them, and it is desirable to gather a wide range of preference information from the multiple interested individuals. However, it is difficult to elicit detailed information about their preferences. To overcome this difficulty, we develop a new group decision method for decision making problems with a number of interested individuals and parties, in which questions that are relatively easy to answer are employed and the gathered preference information is utilized to determine the scaling constants of the multiattribute utility function. 4. Group multiattribute utility analysis using neural networks In this paper, we assume that single-attribute utility functions $$u_i$$, $$i=1,\ldots,n$$ for attributes $$X_i$$, $$i=1,\ldots,n$$ are already identified. Furthermore, we assume that the attributes are mutually utility independent, and therefore the multiattribute utility function is in the multiplicative form. 
We propose a method for generating multiattribute utility functions, namely, scaling constants that are consistent with the preferences of multiple interested individuals as much as possible, and for selecting the most preferred alternative. In order to assess the preferences of the multiple interested individuals, we provide questions that are easy for them to answer. In our method, we employ the questions used in multiattribute utility analysis under incomplete information (Nishizaki et al., 2014), in which a decision maker is asked about the strict preference relation between hypothetical outcomes which differ in only two attribute values. The following two hypothetical outcomes are presented to each interested individual. Only two attribute values are different, and the other $$(n-2)$$ attribute values are the same in the two outcomes. Let $$X_i$$ and $$X_j$$ be the two attributes with different values for the given pair of hypothetical outcomes, and let $$X_i$$ be the basis attribute. The first hypothetical outcome is $$A^i=(x_1^0,\ldots,x_{i-1}^0,x_i^*,x_{i+1}^0,\ldots,x_n^0), \quad (6)$$ where the value of attribute $$X_i$$ is the best and those of the other attributes $$X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n$$ are the worst. The other one is $$A^j=(x_1^0,\ldots,x_{j-1}^0,x_j^*,x_{j+1}^0,\ldots,x_n^0), \quad (7)$$ where the value of attribute $$X_j$$ is the best and those of the other attributes $$X_1,\ldots,X_{j-1},X_{j+1},\ldots,X_n$$ are the worst. The interested individuals are asked which outcome they prefer, $$A^i$$ or $$A^j$$. That is, they are asked to choose between $$A^i \succ A^j$$ and $$A^j \succ A^i$$. If it is difficult for some interested individuals to choose, they do not have to answer. In particular, if the answer is $$A^i \succ A^j$$, the interested individual is asked to find the minimum, denoted by $$x_i^{*-}$$, of the attribute value of $$X_i$$ by decreasing the attribute value from $$x_i^*$$ under the condition that this preference relation holds. 
Let an outcome with $$x_i^{*-}$$ substituted for $$x_i^{*}$$ be $$A^{i*-}=(x_1^0,\ldots,x_{i-1}^0,x_i^{*-},x_{i+1}^0,\ldots,x_n^0). \quad (8)$$ It follows that we obtain a preference relation $$A^{i*-} \succ A^j \quad (9)$$ as a substitute for $$A^i \succ A^j$$. Conversely, if the answer is $$A^j \succ A^i$$, the interested individual is asked to find the minimum, denoted by $$x_j^{*-}$$, of the attribute value of $$X_j$$ in a similar way. Let an outcome with $$x_j^{*-}$$ substituted for $$x_j^{*}$$ be $$A^{j*-}=(x_1^0,\ldots,x_{j-1}^0,x_j^{*-},x_{j+1}^0,\ldots,x_n^0). \quad (10)$$ It follows that we obtain a preference relation $$A^{j*-} \succ A^i \quad (11)$$ as a substitute for $$A^j \succ A^i$$. The preference relations (9) and (11) are related to the equality conditions $$u(A^{i*-};\mathbf{k})-u(A^j;\mathbf{k})=\delta_{ij}^+, \quad (12)$$ $$u(A^{j*-};\mathbf{k})-u(A^i;\mathbf{k})=\delta_{ij}^-. \quad (13)$$ By asking similar questions for all pairs of attributes, the equality conditions (12) and (13) are obtained through the preference relations (9) and (11). Let $$I^+$$ and $$I^-$$ be index sets corresponding to (12) and (13), respectively. The following mathematical programming problem can be formulated: $$\text{maximize} \quad \Delta=\min\Big\{\min_{ij\in I^-}\delta_{ij}^-,\ \min_{ij\in I^+}\delta_{ij}^+\Big\} \quad (14a)$$ $$\text{subject to} \quad u(A^{i*-};\mathbf{k})-u(A^j;\mathbf{k})=\delta_{ij}^+, \ ij\in I^+ \quad (14b)$$ $$u(A^{j*-};\mathbf{k})-u(A^i;\mathbf{k})=\delta_{ij}^-, \ ij\in I^- \quad (14c)$$ $$\mathbf{k}\in C^K, \quad (14d)$$ where $$C^K$$ is the set of vectors $$\mathbf{k}=(k_1,\ldots,k_n)$$ of the scaling constants, that is, $$C^K=\{\mathbf{k}\in\mathbb{R}^n \mid 0\leq k_i\leq 1, \ i=1,\ldots,n\}. \quad (15)$$ Let $$\Delta^*$$ be the optimal value of problem (14). If $$\Delta^*\geq 0$$, there exist vectors of the scaling constants satisfying all of the preference relations (9) and (11) elicited from the interested individuals; otherwise, no such vector of the scaling constants exists (Nishizaki et al., 2014). When $$\Delta^*\geq 0$$, an optimal solution $$\mathbf k^*$$ to problem (14) satisfies all of the preference relations (9) and (11) elicited from the interested individuals, and it also maximizes the minimum difference between the utilities of outcomes in the elicited preference relations. 
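Problem (14) is nonlinear because $$u(\cdot;\mathbf k)$$ is multiplicative, but its feasibility ($$\Delta^*\geq 0$$) can be checked approximately by sampling $$C^K$$. The sketch below assumes linear single-attribute utilities; the helper `master_constant` solves $$1+K=\prod_i(1+Kk_i)$$ for the nonzero master constant $$K$$ of the multiplicative form. The sampling scheme and the helper names are our illustration, not the exact solution method used for problem (14).

```python
import math
import random

def master_constant(k):
    """Solve 1 + K = prod_i(1 + K * k_i) for the nonzero root K of the
    multiplicative form; K -> 0 when sum(k) == 1 (additive case)."""
    s = sum(k)
    if abs(s - 1.0) < 1e-9:
        return 0.0
    g = lambda K: math.prod(1.0 + K * ki for ki in k) - (1.0 + K)
    if s > 1.0:                       # the root lies in (-1, 0)
        lo, hi = -1.0 + 1e-12, -1e-12
    else:                             # the root is positive; grow hi until bracketed
        lo, hi = 1e-12, 1.0
        while g(hi) < 0.0:
            hi *= 2.0
    for _ in range(200):              # plain bisection on the bracket
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def utility(x, k, K):
    """Multiplicative utility with linear single-attribute utilities u_i(x_i) = x_i."""
    if K == 0.0:
        return sum(ki * xi for ki, xi in zip(k, x))
    return (math.prod(1.0 + K * ki * xi for ki, xi in zip(k, x)) - 1.0) / K

def max_min_margin(relations, n, samples=2000, seed=0):
    """Sample k in C^K and keep the vector maximising the minimum utility margin
    over the elicited relations; a positive margin indicates Delta* >= 0."""
    rng = random.Random(seed)
    best_delta, best_k = -math.inf, None
    for _ in range(samples):
        k = [rng.random() for _ in range(n)]
        K = master_constant(k)
        delta = min(utility(a, k, K) - utility(b, k, K) for a, b in relations)
        if delta > best_delta:
            best_delta, best_k = delta, k
    return best_delta, best_k
```

For example, for the two relations $$(0.7,0.0)\succ(0.1,0.2)$$ and $$(0.1,0.9)\succ(0.2,0.6)$$ the sampled margin is positive, so scaling constants consistent with both relations exist.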
However, such a solution does not always represent the comprehensive preference of all the interested individuals. To justify employing this solution, some axiomatization approach might be needed. In this paper, we do not take such an approach, but propose a method generating a number of sets of scaling constants satisfying the preference relations (9) and (11). Before describing the implementation of the proposed method with neural networks, we remark on the derivation of the preference information from the interested individuals. In application studies of multiattribute utility analysis on policy selections for forest preservation (Hayashida et al., 2010) and collaborative farming (Hayashida et al., 2012), we interviewed interested individuals, and a partial multiattribute utility function corresponding to a part of the objective hierarchy was identified on the basis of the preferences of the interested individuals. To do so, we asked them to specify indifference points and subjective probabilities (Keeney & Raiffa, 1976). However, it was very difficult for some interested individuals to provide such information directly. We therefore first asked them which alternative is preferred, namely, we elicited preference relations such as (9) and (11) from them. As a result, we can gather a long list of such preference relations. In group multiattribute decision making, we think that it is natural and effective to use them. From this perspective, in this paper, we develop a method with neural networks where preference relations such as (9) and (11) are used as the input data of the neural networks. 
In the proposed method, by utilizing neural networks in which the preference relations of the interested individuals are the inputs and the outputs are regarded as the scaling constants, we generate diversified sets of scaling constants which satisfy the preference relations of the interested individuals if such scaling constants exist, or minimize the violation of the relations if they do not. A number of neural networks evolve through a genetic algorithm so that the multiattribute utility functions with the scaling constants output by the neural networks are consistent with the preferences of the interested individuals, and then we can obtain diversified sets of scaling constants in an adequate number of generations. This idea is graphically expressed in Fig. 1. Fig. 1. Conceptual scheme. We illustrate the structure of a neural network for a three-attribute utility function in Fig. 2. As seen in the figure, $$x_1^1, x_2^1, x_3^1, x_1^2, x_2^2, x_3^2$$ are the inputs of the neural network, which means that a preference relation $$(x_1^1,x_2^1,x_3^1) \succ (x_1^2,x_2^2,x_3^2)$$ is one of the elicited preference relations. The three outputs of the neural network correspond to a set of scaling constants $$(k_1,k_2,k_3)$$. By repeatedly inputting all of the preference relations (9) and (11) elicited from the interested individuals into the neural networks, the neural networks learn and evolve so as to be consistent with the preferences of the interested individuals, namely, to output sets of scaling constants of the multiattribute utility function satisfying the preference relations of the interested individuals as much as possible. Fig. 2. Structure of neural networks. Let the number of attributes be $$n$$. 
The number of inputs then is $$2n$$, and the number of outputs is $$n$$. We assign $$3n$$ to the number of nodes in the hidden layer of the neural networks. Suppose that the outputs of a neural network are a set of scaling constants $$(\hat k_1,\ldots, \hat k_n)$$. If, for a given preference relation $$(x_1^1,\ldots,x_n^1)\succ(x_1^2,\ldots,x_n^2), \quad (16)$$ the inequality $$u(x_1^1,\ldots,x_n^1;\hat k_1,\ldots,\hat k_n)>u(x_1^2,\ldots,x_n^2;\hat k_1,\ldots,\hat k_n) \quad (17)$$ holds, we say that the preference relation (16) is consistent with the set of scaling constants $$(\hat k_1,\ldots, \hat k_n)$$. As the learning method for the neural networks, we employ a genetic algorithm. A neural network in the population of the artificial genetic system is a string composed of the synaptic weights and the thresholds identifying the neural network. A fitness function is defined so as to become large if the preference relations are consistent with the set of scaling constants output by the neural network. Let $$m$$ be the number of interested individuals, and $$l_i$$ be the number of preference relations (9) and (11) elicited from interested individual $$i$$. Let $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$ be the $$j$$-th preference relation of individual $$i$$, and $$\hat{\mathbf k}^l_{ij}$$ be the output (scaling constants) of the $$l$$-th neural network for the input of $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$. Let $$p^{l,st}_{ij}=1$$ if a preference relation $$(x_1^{st1},\ldots,x_n^{st1}) \succ (x_1^{st2},\ldots,x_n^{st2})$$ is consistent with the set of scaling constants $$\hat{\mathbf k}^l_{ij}$$; otherwise, let $$p^{l,st}_{ij}=0$$. The fitness function of the $$l$$-th neural network is then defined as $$f^l=\frac{1}{\sum_{i=1}^m l_i}\sum_{i=1}^m\sum_{j=1}^{l_i}\frac{\sum_{s=1}^m\sum_{t=1}^{l_s}p^{l,st}_{ij}}{\sum_{s=1}^m l_s}, \quad (18)$$ which is the ratio of consistency over all the preference relations. An algorithm of the proposed method is summarized as follows. Step 1 Elicit the preference relations (9) and (11) from all the interested individuals. 
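The network mapping and the fitness (18) used in the following steps can be sketched as below. The single hidden layer of $$3n$$ nodes matches the text; the sigmoid activations (which keep each output $$\hat k_i$$ in $$(0,1)$$), the random initialisation, and the pooling of all individuals' relations into one list (equivalent to the double indexing in (18)) are our assumptions for the sketch. The utility evaluator is passed in as a callable.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class ScalingNet:
    """Feedforward net: 2n inputs (a preferred outcome and a dispreferred one),
    3n hidden nodes, n outputs read as scaling constants in (0, 1)."""
    def __init__(self, n, rng):
        self.n = n
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2 * n)] for _ in range(3 * n)]
        self.b1 = [rng.uniform(-1, 1) for _ in range(3 * n)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(3 * n)] for _ in range(n)]
        self.b2 = [rng.uniform(-1, 1) for _ in range(n)]

    def scaling_constants(self, preferred, other):
        x = list(preferred) + list(other)
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        return [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
                for row, b in zip(self.w2, self.b2)]

def fitness(net, relations, util):
    """Fitness (18): feed every relation through the net, and for each resulting
    k-hat count the fraction of all relations it satisfies; average the fractions."""
    total = 0.0
    for a, b in relations:
        k_hat = net.scaling_constants(a, b)
        consistent = sum(1 for c, d in relations if util(c, k_hat) > util(d, k_hat))
        total += consistent / len(relations)
    return total / len(relations)
```

With an additive stand-in utility such as `lambda x, k: sum(ki * xi for ki, xi in zip(k, x))`, the fitness of any network on a list of relations is a value in $$[0,1]$$, reaching 1 only when every output set of scaling constants satisfies every relation.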
Step 2 By solving problem (14), check whether or not there exists a set of scaling constants satisfying all the preference relations obtained in Step 1. If the optimal value of (14) is larger than or equal to $$0$$, that is, $$\Delta^* \geq 0$$, such a set of scaling constants exists; otherwise, it does not. Step 3 To initialize the neural networks in the population of the artificial genetic system, randomly generate fixed-length binary-encoded strings which are composed of the synaptic weights and the thresholds identifying the neural networks. Step 4 Reconstruct the neural networks with the decoded synaptic weights and thresholds corresponding to the strings in the population. Step 5 Perform the following operation for all neural networks. Input the data of preference relation $$j$$, $$(x_1^{ij1},\ldots,x_n^{ij1}) \succ (x_1^{ij2},\ldots,x_n^{ij2})$$, of interested individual $$i$$ into neural network $$l$$, and then obtain the outputs $$\hat{\mathbf k}^l_{ij}$$, which are interpreted as a set of scaling constants. If preference relation $$t$$ of interested individual $$s$$ is consistent with the set of scaling constants $$\hat{\mathbf k}^l_{ij}$$ for neural network $$l$$ with the inputs of preference relation $$j$$ of interested individual $$i$$, namely, the inequality $$u(x_1^{st1},\ldots,x_n^{st1}; \hat{\mathbf k}^l_{ij}) > u(x_1^{st2},\ldots,x_n^{st2}; \hat{\mathbf k}^l_{ij})$$ holds, let $$p^{l,st}_{ij}=1$$; otherwise, let $$p^{l,st}_{ij}=0$$. After repeatedly performing this operation for all the preference relations, calculate the fitness value (18) of neural network $$l$$. Step 6 For the population, until the final generation is reached, perform the genetic operations including reproduction by roulette-wheel selection with elitism, exponential scaling, uniform crossover and bit-reverse mutation. 
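The genetic operations of Step 6 can be sketched as one generation of evolution over bit strings; the crossover rate, mutation rate and scaling coefficient `beta` below are illustrative values, not the parameters used in the experiments.

```python
import math
import random

def evolve(pop, fitnesses, rng, pc=0.8, pm=0.01, beta=2.0):
    """One generation of Step 6: elitism, roulette-wheel selection on
    exponentially scaled fitness, uniform crossover and bit mutation.
    pop is a list of equal-length 0/1 lists (binary-encoded networks)."""
    n = len(pop)
    elite = pop[max(range(n), key=lambda i: fitnesses[i])]
    scaled = [math.exp(beta * f) for f in fitnesses]   # exponential scaling
    total = sum(scaled)

    def roulette():
        r = rng.random() * total
        acc = 0.0
        for ind, s in zip(pop, scaled):
            acc += s
            if acc >= r:
                return ind
        return pop[-1]

    children = [list(elite)]             # elitism: the best string survives intact
    while len(children) < n:
        p1, p2 = roulette(), roulette()
        if rng.random() < pc:            # uniform crossover: each bit from either parent
            child = [b1 if rng.random() < 0.5 else b2 for b1, b2 in zip(p1, p2)]
        else:
            child = list(p1)
        child = [1 - b if rng.random() < pm else b for b in child]  # bit mutation
        children.append(child)
    return children
```

Calling `evolve` repeatedly until the final generation, with the fitness (18) recomputed from the decoded networks each time, realises Steps 4 to 6.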
Step 7 If there exists a set of scaling constants satisfying all the preference relations, namely, $$\Delta^* \geq 0$$ in Step 2, evaluate the alternatives for all sets of scaling constants corresponding to the neural networks whose fitness value is equal to 1, that is, $$f^l=1$$, and terminate the procedure. Otherwise, that is, if $$\Delta^* < 0$$, proceed to Step 8. Step 8 Since a set of scaling constants satisfying all the preference relations does not exist, select neural networks with relatively large fitness values. That is, with reference to the largest fitness value, select the neural networks whose fitness values are larger than $$p$$% of the maximum. After evaluating the alternatives for all sets of scaling constants corresponding to the selected neural networks, terminate the procedure. 5. Validation and numerical applications 5.1 Computational experiment for validation To assess the validity of the proposed method, we construct eight populations, each consisting of two subgroups with preferential heterogeneity, varying the degree of the preferential heterogeneity. We verify whether the proposed method can generate scaling constants appropriately for the eight populations. We consider a simple decision problem with only two attributes $$X_1$$ and $$X_2$$. We assume that the interested individuals are composed of two groups, and that the preference within each group is homogeneous. After specifying a specific utility function for each group, we generate the preference relations of the two groups in accordance with the utility functions. We assume that the attributes are mutually utility independent for each group, and therefore the two-attribute utility functions $$U^1(x_1, x_2)$$ of group 1 and $$U^2(x_1, x_2)$$ of group 2 are in the multiplicative form. For the sake of simplicity, we also assume that the single-attribute utility functions of both groups are linear, and let all the domains of the functions be the interval $$[0,1]$$. 
That is, the single-attribute utility functions of group $$j$$, $$j=1,2$$ are $$u_i^j(x_i)=x_i, \ i=1,2, \qquad u_i^j(0)=0, \ u_i^j(1)=1, \ i=1,2.$$ The two-attribute utility functions $$U^j(x_1, x_2)$$, $$j=1,2$$ are then represented by $$U^j(x_1,x_2)=k_1^j u_1^j(x_1)+k_2^j u_2^j(x_2)+K^j k_1^j k_2^j u_1^j(x_1)u_2^j(x_2)=k_1^j x_1+k_2^j x_2+K^j k_1^j k_2^j x_1 x_2.$$ Assuming that the scaling constants of groups 1 and 2 for the first attribute $$X_1$$ are $$k_1^1=k_1^2=0.8$$, we determine the scaling constants of groups 1 and 2 for the second attribute $$X_2$$ by using the indifference relation $$(\hat x_1^j, x_{2,\min}) \sim (x_{1,\min}, x_{2,\max}),$$ where $$x_{1, \min}=0$$, $$x_{2, \min}=0$$ and $$x_{2, \max}=1$$, and $$\hat x_1^j$$ is specified as follows. For group 1, let $$\hat x_1^1$$ be the fixed value $$0.8$$, and for group 2, let $$\hat x_1^2$$ range from $$0.8$$ down to $$0.1$$, that is, $$\hat x_1^1=0.8, \qquad \hat x_1^2=0.8,0.7,\ldots,0.1.$$ We define the difference between the two values as $$D = \hat x_1^1 - \hat x_1^2$$. If $$D=0$$, groups 1 and 2 are homogeneous, and as the value of $$D$$ increases, the degree of heterogeneity increases. The scaling constants calculated from the above indifference relations are shown in Table 1. 
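With linear single-attribute utilities, this indifference relation gives $$k_2^j=k_1^j\hat x_1^j$$, and the normalisation $$U^j(1,1)=1$$ gives $$K^j=(1-k_1^j-k_2^j)/(k_1^jk_2^j)$$, so the Table 1 entries can be checked in a few lines (the helper name is ours):

```python
def scaling_from_indifference(k1, x1_hat):
    """From (x1_hat, 0) ~ (0, 1): U(x1_hat, 0) = k1 * x1_hat equals U(0, 1) = k2,
    and U(1, 1) = k1 + k2 + K * k1 * k2 = 1 pins down the master constant K."""
    k2 = k1 * x1_hat
    K = (1.0 - k1 - k2) / (k1 * k2)
    return k2, K

# reproduce the group 2 rows of Table 1 for each indifference point
rows = [scaling_from_indifference(0.8, x) for x in (0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1)]
```

`rows` reproduces $$k_2^2=0.64,0.56,\ldots,0.08$$ and $$K^2=-0.859,\ldots,1.875$$ up to the rounding used in Table 1.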
Table 1 Scaling constants of groups 1 and 2

| $$D$$ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 |
|---|---|---|---|---|---|---|---|---|
| $$\hat x_1^1$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| $$\hat x_1^2$$ | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |
| Group 1: $$k_1^1$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| Group 1: $$k_2^1$$ | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 | 0.64 |
| Group 1: $$K^1$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ | $$-0.859$$ |
| Group 2: $$k_1^2$$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| Group 2: $$k_2^2$$ | 0.64 | 0.56 | 0.48 | 0.40 | 0.32 | 0.24 | 0.16 | 0.08 |
| Group 2: $$K^2$$ | $$-0.859$$ | $$-0.804$$ | $$-0.729$$ | $$-0.625$$ | $$-0.469$$ | $$-0.208$$ | $$0.313$$ | $$1.875$$ |

Utilizing the two-attribute utility functions with the scaling constants given in Table 1, we generate the preference relations for given pairs of outcomes in accordance with the two-attribute utility functions. Fifty pairs of outcomes are randomly taken from the set of grid points $$\{(0,0), (0,0.1), \dots, (0,1), (0.1,0), (0.1,0.1), \dots , (1,0.9), (1,1)\}$$ at 0.1 length intervals in the attribute space $$[0,1]^2$$, and the chosen pairs are shown in Table 2.

Table 2 Pairs of outcomes

| # | Pair of outcomes | # | Pair of outcomes | # | Pair of outcomes | # | Pair of outcomes | # | Pair of outcomes |
|---|---|---|---|---|---|---|---|---|---|
| 1 | (0.7,0.0) (0.1,0.2) | 11 | (0.0,0.2) (0.9,0.1) | 21 | (0.1,0.2) (0.0,0.9) | 31 | (1.0,0.1) (0.2,0.5) | 41 | (0.5,0.7) (0.8,0.3) |
| 2 | (0.6,0.0) (0.2,0.6) | 12 | (0.8,0.2) (0.4,0.8) | 22 | (0.4,0.8) (0.9,0.1) | 32 | (0.0,0.7) (0.1,0.3) | 42 | (0.3,0.9) (0.9,0.3) |
| 3 | (0.2,0.7) (0.9,0.4) | 13 | (0.3,0.2) (1.0,0.0) | 23 | (0.5,0.8) (0.8,0.3) | 33 | (0.0,1.0) (0.5,0.8) | 43 | (0.4,0.4) (0.6,0.1) |
| 4 | (0.0,1.0) (0.7,0.2) | 14 | (0.3,0.1) (0.2,0.4) | 24 | (0.7,0.5) (0.6,0.6) | 34 | (0.4,0.2) (0.3,0.9) | 44 | (0.9,0.7) (0.8,0.8) |
| 5 | (0.1,0.9) (0.2,0.6) | 15 | (0.3,0.2) (0.0,0.4) | 25 | (0.3,0.5) (0.4,0.2) | 35 | (0.5,0.6) (0.3,1.0) | 45 | (0.2,1.0) (0.8,0.6) |
| 6 | (0.3,0.6) (0.1,0.9) | 16 | (0.4,0.6) (1.0,0.3) | 26 | (0.8,0.2) (0.4,0.5) | 36 | (1.0,0.5) (0.5,1.0) | 46 | (0.8,0.3) (0.7,0.8) |
| 7 | (0.8,0.2) (0.6,0.3) | 17 | (0.1,1.0) (0.8,0.6) | 27 | (0.8,0.8) (0.9,0.4) | 37 | (0.2,0.7) (0.4,0.1) | 47 | (1.0,0.7) (0.9,0.8) |
| 8 | (0.7,0.0) (0.2,0.3) | 18 | (0.3,0.8) (0.8,0.4) | 28 | (1.0,0.3) (0.6,0.8) | 38 | (0.3,0.5) (0.4,0.0) | 48 | (0.7,0.6) (0.8,0.3) |
| 9 | (0.0,1.0) (0.9,0.1) | 19 | (0.1,1.0) (0.9,0.4) | 29 | (1.0,0.2) (0.7,0.8) | 39 | (0.0,0.2) (0.4,0.1) | 49 | (0.9,0.0) (0.1,0.5) |
| 10 | (0.9,0.5) (0.7,0.7) | 20 | (0.7,1.0) (0.9,0.6) | 30 | (0.8,0.4) (0.6,0.8) | 40 | (0.8,0.4) (0.5,0.7) | 50 | (0.9,0.0) (0.1,0.5) |

The preference relations for the pairs of outcomes given in Table 2 are determined in accordance with the two-attribute utility functions $$U^1(x_1, x_2)$$ and $$U^2(x_1, x_2)$$ of groups 1 and 2. 
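This determination can be sketched directly in code; the constants below are those of Table 1 (group 1, and group 2 at $$D=0.7$$) with the unrounded master constants, and `prefers` is an illustrative helper name:

```python
def U(x1, x2, k1, k2, K):
    """Two-attribute multiplicative utility with linear single-attribute utilities."""
    return k1 * x1 + k2 * x2 + K * k1 * k2 * x1 * x2

def prefers(pair, k1, k2, K):
    """Order a pair of outcomes so that the preferred one comes first."""
    a, b = pair
    return (a, b) if U(*a, k1, k2, K) > U(*b, k1, k2, K) else (b, a)

g1 = (0.8, 0.64, -0.859375)   # group 1 (the same for every D)
g2 = (0.8, 0.08, 1.875)       # group 2 at D = 0.7
```

For pair 2 of Table 2, $$(0.6,0.0)$$ versus $$(0.2,0.6)$$, the two groups disagree: group 1 prefers $$(0.2,0.6)$$ while group 2 prefers $$(0.6,0.0)$$.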
For each group, 50 preference relations are generated, and there are 100 preference relations in total. To illustrate the generation of the preference relations, consider the cases of $$D=0$$ and $$D=0.7$$. For the case of $$D=0$$, the two-attribute utility functions of groups 1 and 2 are $$U^1(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\,x_1x_2,$$ $$U^2(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\,x_1x_2,$$ and the two groups have the same utility function. Therefore, the same set of preference relations is generated for the two groups. For example, for the pair $$(0.7,0.0)$$ and $$(0.1,0.2)$$, we have $$U^1(0.7,0.0)=U^2(0.7,0.0)=0.56$$ and $$U^1(0.1,0.2)=U^2(0.1,0.2)=0.199$$, and then the same preference relation $$(0.7,0.0)\succ (0.1,0.2)$$ is obtained for the two groups. For the case of $$D=0.7$$, the two-attribute utility functions of groups 1 and 2 are $$U^1(x_1,x_2)=0.8x_1+0.64x_2+(-0.859)\cdot 0.8\cdot 0.64\,x_1x_2,$$ $$U^2(x_1,x_2)=0.8x_1+0.08x_2+1.875\cdot 0.8\cdot 0.08\,x_1x_2,$$ and the two groups have different utility functions. Therefore, different preference relations for the same pair of outcomes can be generated for the two groups. For example, for the pair $$(0.7,0.0)$$ and $$(0.1,0.2)$$, we have $$U^1(0.7,0.0)=U^2(0.7,0.0)=0.56$$ and $$U^1(0.1,0.2)=0.199$$, $$U^2(0.1,0.2)=0.098$$, and then the same preference relation $$(0.7,0.0)\succ (0.1,0.2)$$ is obtained for the two groups. For the pair $$(0.6,0.0)$$ and $$(0.2,0.6)$$, however, we have $$U^1(0.6,0.0)=U^2(0.6,0.0)=0.48$$ and $$U^1(0.2,0.6)=0.491$$, $$U^2(0.2,0.6)=0.222$$. Therefore, while the preference relation $$(0.6,0.0)\prec (0.2,0.6)$$ is obtained for group 1, the reverse preference relation $$(0.6,0.0)\succ (0.2,0.6)$$ holds in group 2. The preference relations obtained in this manner are treated as inputs to the neural networks. Let the maximum generation in the genetic algorithm be 10000; for each of the eight cases given in Table 1, the procedure of our method shown in the previous section is executed ten times. The numerical result is shown in Fig. 
3, where the horizontal axis is the difference $$D$$, the left-hand vertical axis is the fitness value, and the right-hand vertical axis is the variance of the fitness values. Fig. 3. Fitness values (mean, maximum, minimum and variance). As seen in Fig. 3, the maximum of the fitness values reaches one when $$D=0$$. Since the preference relations of group 1 completely coincide with those of group 2 and no reverse relation exists for any preference relation when $$D=0$$, there exist sets of scaling constants which are consistent with all the preference relations. Moreover, since the fitness value reaches one, it follows that some neural networks output scaling constants which are consistent with all the preference relations. As the difference $$D$$ increases, the fitness value decreases because the number of inconsistent preference relations increases. It is also found that the variance of the fitness values, which is less than 0.006, is adequately small owing to the sufficiently long learning period. In Fig. 4, the scaling constants $$(k_1,k_2)$$ output by the neural networks are shown, together with the scaling constants $$(k_1^1,k_2^1)$$, $$(k_1^2,k_2^2)$$ of groups 1 and 2 calculated through the indifference relations, which are depicted as dotted lines. As for the scaling constants $$(k_1,k_2)$$ from the neural networks, we show the means, maxima and minima of the outputs of the neural networks whose fitness values are larger than 90% of the maximum in the whole population. Fig. 4. Scaling constants. As seen in Fig. 
4, for the case of $$D=0$$, although the scaling constants $$(k_1,k_2)$$ from the neural networks are consistent with the preference relations because the fitness value reaches one, their means are slightly larger than the scaling constants $$(k_1^1=k_1^2=0.8, k_2^1=k_2^2=0.64)$$ obtained from the indifference relations. This propensity can also be found in the cases of $$D=0.1$$ to $$D=0.7$$. For the scaling constant $$k_1$$ for the first attribute $$X_1$$, although the means of $$k_1$$ from the neural networks are slightly larger than the value $$k_1^1=k_1^2=0.8$$ obtained from the indifference relations, as mentioned above, they are close to 0.8. For the scaling constant $$k_2$$ for the second attribute $$X_2$$, in the cases of $$D=0.1$$ to $$D=0.7$$, $$k_2^1$$ and $$k_2^2$$ from the indifference relations are not the same because the indifference points of group 1 are different from those of group 2. Since the preference relations of groups 1 and 2 are merged and used as the inputs of the neural networks, the values of $$k_2$$ from the neural networks are distributed between $$k_2^1$$ and $$k_2^2$$ from the indifference relations, while the means of $$k_2$$ are slightly large, similarly to $$k_1$$. We show the region of the scaling constants which are completely consistent with the preference relations for the case of $$D=0$$ in Fig. 5. As seen in the figure, the mean $$(0.87,0.73)$$ of the scaling constants from the neural networks lies in the center of the region, while the point $$(0.8,0.64)$$ of the scaling constants from the indifference relations is located at the lower left of the region. For this reason, although the pairs of outcomes used to generate the preference relations are randomly chosen, the range of the scaling constants $$(k_1,k_2)$$ from the neural networks lies slightly above the scaling constants $$(k_1^1=k_1^2, k_2^1=k_2^2)$$ from the indifference relations as a whole. Fig. 5. 
Region of consistent scaling constants in the case of $$D=0$$. These computational results demonstrate that the proposed method can generate scaling constants consistent with the preference relations used as the inputs to the neural networks. 5.2 Numerical applications Using the actual preference information which we obtained in the multiattribute utility analysis on environmental policy selection (Hayashida et al., 2010), we demonstrate the effectiveness of the proposed method. In this decision problem, in order to preserve the water resource in the city of Higashi-Hiroshima, which is selected as one of the three most excellent sake (rice wine) brewing regions in Japan, we compared 20 alternatives for financing and activities for the preservation of the water resource by using a multiattribute utility function with 27 attributes. In particular, we interviewed the interested individuals, including the sake brewers, the local residents and the business people working in the city, and their preference information was utilized to identify a part of the multiattribute utility function. In this paper, for the sake of simplicity, we deal with two types of fund-raising methods, eco-labelled goods and a tax, and three types of allocation rules for the fund: the allocation giving priority to planting trees (water volume), the allocation giving priority to drainpipe construction (water quality), and the allocation distributing the fund equally. The combination of two fund-raising methods and three allocation rules makes six alternatives. 5.2.1 Case 1: Interested individuals with similar preferences We consider the part of the multiattribute utility function related to the sake brewers, who are supposed to have similar preferences because they are all makers of Japanese liquor. 
Although the five-layered hierarchy of objectives in the decision-making problem consists of 27 attributes (Hayashida et al., 2010), the partial multiattribute utility function with respect to the sake brewers has three attributes $$X_{L1}$$, $$X_{L2}$$ and $$X_{L3}$$, which represent the quantity of the groundwater (Quantity), the quality of the groundwater (Quality) and the investment for the preservation of the forest (Investment), respectively. We assume that the corresponding single-attribute utility functions are already identified as the following constant risk utility functions:

$$u_{L1}(x_{L1}) = 1.01 - 1.01\exp(-0.0455 x_{L1}),$$
$$u_{L2}(x_{L2}) = 1.42 - 1.93\exp(-0.300 x_{L2}),$$
$$u_{L3}(x_{L3}) = 1.01 - 0.0107\exp(0.000455 x_{L3}).$$

The attribute values for the six alternatives are shown in Table 3.

Table 3 Attribute values of alternatives $$(x_{L1},x_{L2},x_{L3})$$

                     Eco-labelled goods    Tax
Water quality        $$(50, 1, 2500)$$    $$(90, 3, 7500)$$
Water quantity       $$(10, 3, 2500)$$    $$(50, 5, 2500)$$
Equally allocating   $$(30, 2, 2500)$$    $$(70, 4, 2500)$$

The preference relations elicited from the three interested individuals of the sake brewers are as follows:

$$p_{L11}: (70,1,10000) \succ (0,1,0)$$
$$p_{L12}: (0,1,0) \succ (15,1,10000)$$
$$p_{L21}: (40,1,10000) \succ (0,5,10000)$$
$$p_{L22}: (0,5,10000) \succ (5,1,10000)$$
$$p_{L23}: (60,1,10000) \succ (0,1,0)$$
$$p_{L24}: (0,1,0) \succ (10,1,10000)$$
$$p_{L31}: (35,1,10000) \succ (0,5,10000)$$
$$p_{L32}: (0,5,10000) \succ (0,1,10000)$$
$$p_{L33}: (55,1,10000) \succ (0,1,0)$$
$$p_{L34}: (0,1,0) \succ (5,1,10000)$$

From each of the three interested individuals, 2, 4 and 4 preference relations are elicited, respectively, and we have 10 preference relations in total.
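The single-attribute utility functions and the ten preference relations listed above can be checked numerically. The sketch below assumes the fitness of a candidate set of scaling constants is the fraction of preference relations it reproduces (consistent with the statement that a fitness of one means full consistency; the precise definition is in Section 4), and solves for the normalizing constant $$K$$ of the multiplicative form by bisection. The candidate constants used in the demonstration are illustrative.

```python
import math

# Single-attribute utility functions of the sake brewers (Case 1),
# taken from the text above.
def u_L1(x): return 1.01 - 1.01 * math.exp(-0.0455 * x)     # quantity
def u_L2(x): return 1.42 - 1.93 * math.exp(-0.300 * x)      # quality
def u_L3(x): return 1.01 - 0.0107 * math.exp(0.000455 * x)  # investment

def solve_K(k):
    """Nonzero root K of 1 + K = prod_j(1 + K*k_j), found by bisection.
    K lies in (-1, 0) when sum(k) > 1 and in (0, inf) when sum(k) < 1."""
    s = sum(k)
    if abs(s - 1.0) < 1e-9:
        return 0.0  # additive case
    def g(K):
        p = 1.0
        for kj in k:
            p *= 1.0 + K * kj
        return p - (1.0 + K)
    lo, hi = (-1.0 + 1e-9, -1e-9) if s > 1.0 else (1e-9, 1e6)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def u_multi(x, k):
    """Multiplicative multiattribute utility at outcome x = (x1, x2, x3)."""
    K = solve_K(k)
    us = (u_L1(x[0]), u_L2(x[1]), u_L3(x[2]))
    if K == 0.0:
        return sum(kj * uj for kj, uj in zip(k, us))
    p = 1.0
    for kj, uj in zip(k, us):
        p *= 1.0 + K * kj * uj
    return (p - 1.0) / K

# The ten preference relations p_L11, ..., p_L34 listed above,
# each as (preferred outcome, less preferred outcome).
relations = [
    ((70, 1, 10000), (0, 1, 0)),     ((0, 1, 0), (15, 1, 10000)),
    ((40, 1, 10000), (0, 5, 10000)), ((0, 5, 10000), (5, 1, 10000)),
    ((60, 1, 10000), (0, 1, 0)),     ((0, 1, 0), (10, 1, 10000)),
    ((35, 1, 10000), (0, 5, 10000)), ((0, 5, 10000), (0, 1, 10000)),
    ((55, 1, 10000), (0, 1, 0)),     ((0, 1, 0), (5, 1, 10000)),
]

def fitness(k):
    """Fraction of the preference relations reproduced by the scaling
    constants k; a value of one means full consistency."""
    return sum(u_multi(a, k) > u_multi(b, k) for a, b in relations) / len(relations)

print(fitness((0.86, 0.38, 0.61)))  # e.g. the mean constants reported in Table 4
```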
We solve the problem (14) and find that the optimal value is positive, that is, $$\Delta^* > 0$$. From this fact, it follows that there exist sets of scaling constants consistent with all of the preference relations elicited from the three interested individuals. Let the population size and the maximum generation be 100 and 10000, respectively. Using the above preference relations as the inputs to the neural networks, we execute the proposed procedure given in the previous section, and the fitness values of 59 neural networks out of 100 reach one. The distribution of the fitness values is shown in Fig. 6, where the neural networks are arranged in ascending order of their fitness values along the horizontal axis.

Fig. 6. Distribution of fitness values in Case 1.

The sets $$\mathbf k^l_{L}=(k^l_{L1}, k^l_{L2}, k^l_{L3})$$, $$l=1,\ldots,59$$, of scaling constants correspond to the neural networks with fitness values of one, which are thought to have similar characteristics. As seen in Table 4, the means of the scaling constants decrease in the order of $$k_{L1}$$, $$k_{L3}$$ and $$k_{L2}$$, and their variances are small.
Table 4 Scaling constants for Case 1

           $$k_{L1}$$   $$k_{L2}$$   $$k_{L3}$$
Mean       0.86         0.38         0.61
Variance   0.0039       0.012        0.0065

The alternative with the maximum utility is the same for all the multiattribute utility functions
$$u_L(x_{L1},x_{L2},x_{L3};k_{L1}^i,k_{L2}^i,k_{L3}^i)=\frac{1}{K^i}\left[\{1+K^i k_{L1}^i u_{L1}(x_{L1})\}\{1+K^i k_{L2}^i u_{L2}(x_{L2})\}\{1+K^i k_{L3}^i u_{L3}(x_{L3})\}-1\right],\quad i=1,\ldots,59,$$
and it is the alternative in which the fund-raising method is the tax and the allocation rule gives priority to the drainpipe construction (water quality). For group decision making with interested individuals having similar preferences, our method can generate a number of sets of scaling constants consistent with the preference relations elicited from the interested individuals, and the multiattribute utility functions corresponding to the selected neural networks maximize the utility value of the same alternative.

5.2.2 Case 2: Interested individuals with dissimilar preferences

In the second case, we deal with a multiattribute utility function related to the general residents, who are supposed to have dissimilar preferences because they do not share common interests in forest preservation. Although in the application study (Hayashida et al., 2010) the residents are classified into three categories, the residents using groundwater, the residents at the foot of the mountain and the residents not using groundwater, we merge them in order to cover interested individuals with diversified preferences.
The partial multiattribute utility function with respect to the general residents has three attributes $$X_{R1}$$, $$X_{R2}$$ and $$X_{R3}$$, which represent the quantity of the groundwater (Quantity), the quality of the groundwater (Quality) and the investment for the preservation of the forest (Investment), respectively. We assume that the corresponding single-attribute utility functions are already identified as the following constant risk utility functions:

$$u_{R1}(x_{R1}) = 1.00 - 1.00\exp(-0.139 x_{R1}),$$
$$u_{R2}(x_{R2}) = 1.41 - 1.92\exp(-0.310 x_{R2}),$$
$$u_{R3}(x_{R3}) = 1.15 - 0.15\exp(0.00204 x_{R3}).$$

The attribute values for the six alternatives are shown in Table 5. It should be noted that the attribute values $$x_{R3}$$ of the investment are small compared with the values $$x_{L3}$$ for the sake brewers in Case 1.

Table 5 Attribute values of alternatives $$(x_{R1},x_{R2},x_{R3})$$

                     Eco-labelled goods   Tax
Water quality        $$(50, 1, 300)$$    $$(90, 3, 800)$$
Water quantity       $$(10, 3, 300)$$    $$(50, 5, 800)$$
Equally allocating   $$(30, 2, 300)$$    $$(70, 4, 800)$$

The preference relations elicited from the 10 interested individuals of the general residents are as follows:
$$p_{R01}: (12.5,1,1000) \succ (0,5,1000)$$
$$p_{R02}: (0,5,1000) \succ (7.5,1,1000)$$
$$p_{R03}: (7.5,1,1000) \succ (0,1,0)$$
$$p_{R04}: (0,1,0) \succ (2.5,1,1000)$$
$$p_{R11}: (10,1,1000) \succ (0,5,1000)$$
$$p_{R12}: (0,5,1000) \succ (5,1,1000)$$
$$p_{R13}: (10,1,1000) \succ (0,1,0)$$
$$p_{R14}: (0,1,0) \succ (5,1,1000)$$
$$p_{R21}: (0,1,325) \succ (100,1,1000)$$
$$p_{R22}: (100,1,1000) \succ (0,1,375)$$
$$p_{R23}: (0,1,675) \succ (0,5,1000)$$
$$p_{R24}: (0,5,1000) \succ (0,1,725)$$
$$p_{R31}: (0,2.3,1000) \succ (100,1,1000)$$
$$p_{R32}: (100,1,1000) \succ (0,2.1,1000)$$
$$p_{R33}: (0,3.2,1000) \succ (0,1,0)$$
$$p_{R34}: (0,1,0) \succ (0,3,1000)$$
$$p_{R41}: (7.5,1,1000) \succ (0,5,1000)$$
$$p_{R42}: (0,5,1000) \succ (2.5,1,1000)$$
$$p_{R43}: (52.5,1,1000) \succ (0,1,0)$$
$$p_{R44}: (0,1,0) \succ (47.5,1,1000)$$
$$p_{R51}: (0,3.3,1000) \succ (100,1,1000)$$
$$p_{R52}: (100,1,1000) \succ (0,3.1,1000)$$
$$p_{R53}: (0,3.4,1000) \succ (0,1,0)$$
$$p_{R54}: (0,1,0) \succ (0,3.2,1000)$$
$$p_{R61}: (12.5,1,1000) \succ (0,5,1000)$$
$$p_{R62}: (0,5,1000) \succ (7.5,1,1000)$$
$$p_{R63}: (12.5,1,1000) \succ (0,1,0)$$
$$p_{R64}: (0,1,0) \succ (7.5,1,1000)$$
$$p_{R71}: (0,2.3,1000) \succ (100,1,1000)$$
$$p_{R72}: (100,1,1000) \succ (0,2.1,1000)$$
$$p_{R73}: (0,4.5,1000) \succ (0,1,0)$$
$$p_{R74}: (0,1,0) \succ (0,4.3,1000)$$
$$p_{R81}: (0,1,775) \succ (100,1,1000)$$
$$p_{R82}: (100,1,1000) \succ (0,1,825)$$
$$p_{R83}: (0,1,775) \succ (0,5,1000)$$
$$p_{R84}: (0,5,1000) \succ (0,1,825)$$
$$p_{R91}: (12.5,1,1000) \succ (0,5,1000)$$
$$p_{R92}: (0,5,1000) \succ (7.5,1,1000)$$
$$p_{R93}: (82.5,1,1000) \succ (0,1,0)$$
$$p_{R94}: (0,1,0) \succ (77.5,1,1000)$$

From each of the 10 interested individuals, four preference relations are elicited, and we have 40 preference relations in total. We solve the problem (14) and find that the optimal value is negative, that is, $$\Delta^* < 0$$. From this fact, it follows that there does not exist any set of scaling constants consistent with all the preference relations elicited from the 10 interested individuals. Thus, the fitness value of no neural network can reach one in the genetic algorithm. Since the number of preference relations in Case 2 is four times that of Case 1, we set the maximum generation at four times that of Case 1 while keeping the population size the same; namely, the maximum generation is 40000 and the population size is 100. The result is shown in Fig. 7 in the same form as Fig. 6.

Fig. 7. Distribution of fitness values in Case 2.

As seen in Fig. 7, the maximum fitness in the population is 0.65. Let the fraction for selecting neural networks in Step 8 of the procedure given in the previous section be $$p=0.9$$.
Thus, the neural networks with fitness values of at least 90% of the maximum in the population are selected. The number of selected neural networks is 81, and we give the statistics of the sets $$\mathbf k^i_{R}$$, $$i=1,\ldots,81$$, of scaling constants for the selected neural networks in Table 6. The means of the scaling constants decrease in the order of $$k_{R1}$$, $$k_{R2}$$ and $$k_{R3}$$, and their variances are small.

Table 6 Scaling constants for Case 2

           $$k_{R1}$$   $$k_{R2}$$   $$k_{R3}$$
Mean       0.86         0.63         0.49
Variance   0.00035      0.00072      0.00520

The alternative with the maximum utility is the same for all the multiattribute utility functions
$$u_R(x_{R1},x_{R2},x_{R3};k_{R1}^i,k_{R2}^i,k_{R3}^i)=\frac{1}{K^i}\left[\{1+K^i k_{R1}^i u_{R1}(x_{R1})\}\{1+K^i k_{R2}^i u_{R2}(x_{R2})\}\{1+K^i k_{R3}^i u_{R3}(x_{R3})\}-1\right],\quad i=1,\ldots,81,$$
and it is the alternative in which the fund-raising method is the tax and the allocation rule gives priority to planting trees (water quantity). For group decision making with interested individuals having dissimilar preferences, although no set of scaling constants is consistent with all the preference relations elicited from the interested individuals, our method can generate scaling constants consistent with as many preference relations as possible, and the multiattribute utility functions corresponding to the selected neural networks maximize the utility value of the same alternative.

5.2.3 Discussion

The computational experiment for validation demonstrates that the proposed method functions successfully over a wide range of populations for group decision making.
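The Step 8 selection with $$p=0.9$$ amounts to thresholding the fitness values at $$p$$ times the population maximum; a minimal sketch with an illustrative five-network population (the paper's populations have 100 networks):

```python
def select(fitness_values, p=0.9):
    """Keep the indices of neural networks whose fitness is at least
    p times the maximum fitness in the population (Step 8 style selection)."""
    threshold = p * max(fitness_values)
    return [i for i, f in enumerate(fitness_values) if f >= threshold]

# Illustrative population whose maximum fitness is 0.65, as in Case 2;
# with p = 0.9 the selection threshold is 0.585.
fits = [0.60, 0.59, 0.65, 0.50, 0.62]
print(select(fits))
```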
Moreover, the two numerical applications show the applicability of the proposed method to real-world decision problems. To be more precise, in Case 1 of the numerical applications, the proposed method generates scaling constants from a group in which the degree of preferential heterogeneity is low. The population size of the artificial genetic system is 100. After the final generation, the fitness values of 59 individuals out of 100 reach the maximum of 1. As seen in Fig. 6, all the fitness values are larger than 0.6, and it seems that the population has evolved appropriately. According to the procedure presented in Section 4, the individuals representing the targeted group correspond to the set of individuals with the maximum fitness value of 1, accounting for about 60% of the population. The multiattribute utility functions with the obtained scaling constants are completely consistent with the preference relations of the interested individuals, and the utility value of the same alternative is maximal for all the multiattribute utility functions. In Case 2, the proposed method generates scaling constants from a group in which the degree of preferential heterogeneity is relatively high. As seen in Fig. 7, at the final generation the maximum fitness in the population is 0.65, and all the fitness values are larger than 0.5. For this case, we also consider that the population has evolved appropriately. Let $$p=0.9$$; then, according to the procedure, the individuals with fitness values of at least 90% of the maximum in the population can be interpreted as providing scaling constants representing the targeted group. Since there does not exist any multiattribute utility function completely consistent with the preference relations of the interested individuals, the maximum fitness cannot reach 1.
However, it is thought that the proposed method can generate scaling constants which properly represent the target group because the variance of each scaling constant is small, as seen in Table 6. As a result, the utility value of the same alternative is maximized for all the multiattribute utility functions with the obtained scaling constants. The computational experiment and the numerical applications demonstrate that the proposed method can reflect the diverse preferences of a number of individuals engaged in decision problems and then select alternatives appropriate for group multiattribute decision making.

6. Conclusions

In this paper, we deal with group multiattribute utility analysis utilizing the preference relations elicited from interested individuals. It is difficult to interview a number of interested individuals and elicit detailed answers about their preferences, such as indifference points and subjective probabilities, from them. By asking questions that are relatively easy to answer compared with the indifference questions, we gather a wide range of preference information from the interested individuals and develop a method for selecting an alternative consistent with the gathered preference information as much as possible. We utilize a large population of neural networks to represent the diversified preferences of the interested individuals. The preference information of the interested individuals is used as the input data of the neural networks, and the outputs of the neural networks are regarded as the scaling constants. The neural networks evolve through a genetic algorithm so that the multiattribute utility functions with the scaling constants output by the neural networks become consistent with the preference relations of the interested individuals, and then we obtain a number of sets of scaling constants reflecting the preferences of the interested individuals.
After verifying the validity of the proposed method, we demonstrate its effectiveness by applying it to the environmental policy selection problem (Hayashida et al., 2010). It is relatively easy to gather the preference relations used in the proposed method, as illustrated by the instances given in Section 5.2. From this favourable property, it is expected that, by employing the proposed method for real-world decision problems in public sectors or decision problems with multiple economic agents, the intentions of the many related individuals and parties can be properly reflected in the selected policy.

Funding

This work was supported by JSPS KAKENHI Grant No. 26282086.

References

Arrow, K. J. (1963) Social Choice and Individual Values, 2nd edn. New York: Wiley.
Bana e Costa, C. A. (1988) A methodology for sensitivity analysis in three-criteria problems: a case study in municipal management. Eur. J. Oper. Res., 3, 159-173.
Barron, F. H. & Barrett, B. E. (1996) Decision quality using ranked attribute weights. Manag. Sci., 42, 1515-1523.
Baucells, M. & Sarin, R. K. (2003) Group decisions with multiple criteria. Manag. Sci., 4, 1105-1118.
Baucells, M. & Shapley, L. S. (2000) Multiperson Utility. Los Angeles, CA: The Anderson School at UCLA.
Baucells, M. & Shapley, L. S. (2008) Multiperson utility. Games Econ. Behav., 6, 329-347.
Bose, U., Davey, A. M. & Olson, D. L. (1997) Multi-attribute utility methods in group decision making: past applications and potential for inclusion in GDSS. Omega, 2, 691-706.
Chen, Y., Hipel, K. W. & Kilgour, D. M. (2007) Multiple criteria sorting using case-based distance models with an application in water resources management. IEEE Trans. Syst. Man Cybern., Part A, 37, 680-691.
Doumpos, M. & Zopounidis, C. (2011) Preference disaggregation and statistical learning for multicriteria decision support: a review. Eur. J. Oper. Res., 209, 203-214.
Dyer, J. S. & Sarin, R. (1979) Measurable multiattribute value functions. Oper. Res., 2, 810-822.
Fishburn, P. C. (1969) Preferences, summation, and social welfare functions. Manag. Sci., 1, 179-186.
Fleming, M. (1952) A cardinal concept of welfare. Q. J. Econ., 6, 366-384.
Greco, S., Matarazzo, B. & Slowinski, R. (2001) Rough sets theory for multicriteria decision analysis. Eur. J. Oper. Res., 129, 1-47.
Harsanyi, J. C. (1955) Cardinal welfare, individualistic ethics, and interpersonal comparison of utility. J. Polit. Econ., 6, 309-321.
Hayashida, T., Nishizaki, I. & Ueda, Y. (2010) Multiattribute utility analysis for policy selection and financing for the preservation of the forest. Eur. J. Oper. Res., 2, 833-843.
Hayashida, T., Nishizaki, I., Ueda, Y. & Honda, H. (2012) Multi-criteria evaluation for collaborative circulating farming with collective operations between arable and cattle farmers. J. Multi-Crit. Decis. Anal., 1, 227-245.
Holloway, H. A. & White, C. C., III (2003) Question selection for multiattribute decision-aiding. Eur. J. Oper. Res., 1, 525-533.
Huang, Y.-S., Chang, W.-C., Li, W.-H. & Lin, Z.-L. (2013) Aggregation of utility-based individual preferences for group decision-making. Eur. J. Oper. Res., 2, 462-469.
Iz, P. H. & Gardiner, L. R. (1993) Analysis of multiple criteria decision support systems for cooperative groups. Group Decis. Negot., 2, 61-79.
Jacquet-Lagreze, E. & Siskos, Y. (1982) Assessing a set of additive utility functions for multicriteria decision-making, the UTA method. Eur. J. Oper. Res., 1, 151-164.
Keeney, R. L. & Kirkwood, C. (1975) Group decision making using cardinal social welfare functions. Manage. Sci., 2, 430-437.
Keeney, R. L. & Raiffa, H. (1976) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.
Keeney, R. L. & von Winterfeldt, D. (1994) Managing nuclear waste from power plants. Risk Anal., 1, 107-130.
Kim, S. H., Choi, S. H. & Kim, J. K. (1999) An interactive procedure for multiple attribute group decision making with incomplete information: range-based approach. Eur. J. Oper. Res., 1, 139-152.
Kirkwood, C. W. & Sarin, R. K. (1985) Ranking with partial information: a method and an application. Oper. Res., 3, 38-48.
Limayem, M. & De Sanctis, G. (2000) Providing decisional guidance for multicriteria decision making in groups. Inf. Syst. Res., 1, 386-401.
Mateos, A., Jiménez, A. & Ríos-Insua, S. (2006) Monte Carlo simulation techniques for group decision making with incomplete information. Eur. J. Oper. Res., 1, 1842-1864.
Matsatsinis, N. F., Grigoroudis, E. & Samaras, A. (2005) Aggregation and disaggregation of preferences for collective decision-making. Group Decis. Negot., 1, 217-232.
Mousseau, V. & Slowinski, R. (1998) Inferring an ELECTRE-TRI model from assignment examples. J. Global Optim., 1, 157-174.
Nishizaki, I., Hayashida, T. & Ohmi, M. (2014) Multiattribute decision analysis using strict preference relations. Ann. Oper. Res., https://doi.org/10.1007/s10479-014-1680-9.
Nishizaki, I., Katagiri, H. & Hayashida, T. (2010) Sensitivity analysis incorporating fuzzy evaluation for scaling constants of multiattribute utility functions. Cent. Eur. J. Oper. Res., 1, 383-396.
Ondrus, J., Bui, T. & Pigneur, Y. (2015) A foresight support system using MCDM methods. Group Decis. Negot., 2, 333-358.
Papamichail, K. N. & French, S. (2013) 25 years of MCDA in nuclear emergency management. IMA J. Manage. Math., 2, 481-503.
Park, K. S. & Kim, S. H. (1997) Tools for interactive multiattribute decision making with incompletely identified information. Eur. J. Oper. Res., 9, 111-123.
Rios-Insua, S. & Mateos, A. (1998) The utility efficient set and its interactive reduction. Eur. J. Oper. Res., 1, 581-593.
Salo, A. A. (1995) Interactive decision aiding for group decision support. Eur. J. Oper. Res., 8, 134-149.
Sarabando, P. & Dias, L. C. (2009) Multiattribute choice with ordinal information: a comparison of different decision rules. IEEE Trans. Syst. Man Cybern., Part A, 39, 545-554.
Seo, F., Nishizaki, I. & Park, S. H. (1999) Multiattribute utility analysis using object-oriented programming MAP and its application. J. Bus. Admin. Inf., 7, Setsunan University, 27-57.
Vetschera, R. (1991) Integrating databases and preference evaluations in group decision support: a feedback-oriented approach. Decis. Support Syst., 7, 67-77.
Vetschera, R., Chen, Y., Hipel, K. & Kilgour, D. (2010) Robustness and information levels in case-based multiple criteria sorting. Eur. J. Oper. Res., 2, 841-852.
Wallenius, J., Dyer, J. S., Fishburn, P. C., Steuer, R. E., Zionts, S. & Deb, K. (2008) Multiple criteria decision making, multiattribute utility theory: recent accomplishments and what lies ahead. Manage. Sci., 5, 1336-1349.
Wang, J. & Malakooti, B. (1992) A feed forward neural network for multiple criteria decision making. Comput. Oper. Res., 1, 151-167.
Weber, M. (1985) A method of multiattribute decision making with incomplete information. Manage. Sci., 3, 1365-1371.

© The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

Journal: IMA Journal of Management Mathematics, Oxford University Press
Published: Sep 28, 2016
