A multi-criteria approach to approximate solution of multiple-choice knapsack problem

Comput Optim Appl (2018) 70:889–910
https://doi.org/10.1007/s10589-018-9988-z

Ewa M. Bednarczuk¹,² · Janusz Miroforidis¹ · Przemysław Pyzel³

¹ Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland
² Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland
³ PhD Programme, Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland

E-mail: Ewa.Bednarczuk@ibspan.waw.pl, Janusz.Miroforidis@ibspan.waw.pl, PPyzel@ibspan.waw.pl

Received: 24 February 2017 / Published online: 1 March 2018
© The Author(s) 2018. This article is an open access publication

Abstract We propose a method for finding approximate solutions to multiple-choice knapsack problems. To this aim we transform the multiple-choice knapsack problem into a bi-objective optimization problem whose solution set contains solutions of the original multiple-choice knapsack problem. The method relies on solving a series of suitably defined linearly scalarized bi-objective problems. The novelty which makes the method attractive from the computational point of view is that we are able to solve those linearly scalarized bi-objective problems explicitly, with the help of closed-form formulae. The method is computationally analyzed on a set of large-scale problem instances (test problems) of two categories: uncorrelated and weakly correlated. Computational results show that after solving, on average, 10 scalarized bi-objective problems, the optimal value of the original knapsack problem is approximated with an accuracy comparable to the accuracies obtained by the greedy algorithm and an exact algorithm. More importantly, the respective approximate solution to the original knapsack problem (for which the approximate optimal value is attained) can be found without resorting to dynamic programming. In the test problems, the number of multiple-choice constraints ranges up to hundreds, with hundreds of variables in each constraint.

Keywords Knapsack · Multi-objective optimization · Multiple-choice knapsack · Linear scalarization

1 Introduction

The multi-dimensional multiple-choice knapsack problem (MMCKP) and the multiple-choice knapsack problem (MCKP) are classical generalizations of the knapsack problem (KP) and are applied to modeling many real-life problems, e.g., in project (investments) portfolio selection [21,29], capital budgeting [24], advertising [27], component selection in IT systems [16,25], computer network management [17], adaptive multimedia systems [14], and others.

The multiple-choice knapsack problem (MCKP) is formulated as follows. Given are $k$ sets $N_1, N_2, \ldots, N_k$ of items, of cardinality $|N_i| = n_i$, $i = 1, \ldots, k$. Each item of each set has been assigned a real-valued nonnegative 'profit' $p_{ij} \ge 0$ and 'cost' $c_{ij} \ge 0$, $i = 1, \ldots, k$, $j = 1, \ldots, n_i$.

The problem consists in choosing exactly one item from each set $N_i$ so that the total cost does not exceed a given $b \ge 0$ and the total profit is maximized. Let $x_{ij}$, $i = 1, \ldots, k$, $j = 1, \ldots, n_i$, be defined as

$$x_{ij} = \begin{cases} 1 & \text{if item } j \text{ from set } N_i \text{ is chosen,} \\ 0 & \text{otherwise.} \end{cases}$$
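To make the formulation concrete, consider an illustrative micro-instance (the numbers are ours, not taken from the paper). Let $k = 2$, $N_1 = \{(p_{11}, c_{11}) = (6, 3),\ (p_{12}, c_{12}) = (9, 7)\}$, $N_2 = \{(p_{21}, c_{21}) = (4, 2),\ (p_{22}, c_{22}) = (8, 5)\}$, and $b = 9$. Choosing exactly one item per set gives four candidate vectors; the choice $x = (1, 0, 0, 1)^T$ (item 1 from $N_1$, item 2 from $N_2$) has total profit $6 + 8 = 14$ and total cost $3 + 5 = 8 \le 9$, and it is optimal here, since the only combination with a larger profit, $(9, 7)$ together with $(8, 5)$, costs $12 > 9$.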
Note that all $x_{ij}$ form a vector $x$ of length $n = \sum_{i=1}^{k} n_i$, $x \in \mathbb{R}^n$, and we write

$$x := (x_{11}, x_{12}, \ldots, x_{1n_1}, x_{21}, \ldots, x_{2n_2}, \ldots, x_{k1}, x_{k2}, \ldots, x_{kn_k})^T.$$

In this paper, we adopt the convention that a vector $x$ is a column vector, and hence the transpose of $x$, denoted by $x^T$, is a row vector. Problem (MCKP) is of the form

$$\begin{array}{ll} \max & \sum_{i=1}^{k} \sum_{j=1}^{n_i} p_{ij} x_{ij} \\ \text{subject to} & \sum_{i=1}^{k} \sum_{j=1}^{n_i} c_{ij} x_{ij} \le b, \\ & (x_{ij}) \in X := \{(x_{ij}) \mid \sum_{j=1}^{n_i} x_{ij} = 1,\ x_{ij} \in \{0,1\},\ i = 1, \ldots, k,\ j = 1, \ldots, n_i\}. \end{array} \quad (MCKP)$$

By using the above notations, problem (MCKP) can be equivalently rewritten in the vector form

$$\max\ p^T x \quad \text{subject to} \quad c^T x \le b,\ x = (x_{ij}) \in X, \qquad (MCKP)$$

where $p$ and $c$ are vectors from $\mathbb{R}^n$,

$$p := (p_{11}, p_{12}, \ldots, p_{1n_1}, p_{21}, \ldots, p_{2n_2}, \ldots, p_{k1}, p_{k2}, \ldots, p_{kn_k})^T,$$
$$c := (c_{11}, c_{12}, \ldots, c_{1n_1}, c_{21}, \ldots, c_{2n_2}, \ldots, c_{k1}, c_{k2}, \ldots, c_{kn_k})^T,$$

and for any vectors $u, v \in \mathbb{R}^n$, the scalar product $u^T v$ is defined in the usual way as $u^T v := \sum_{i=1}^{n} u_i v_i$.

The feasible set $F$ of problem (MCKP) is defined by a single linear inequality constraint and the constraint $x \in X$, i.e.,

$$F := \{x \in \mathbb{R}^n \mid c^T x \le b,\ x \in X\},$$

and finally

$$\max\ p^T x \quad \text{subject to} \quad x \in F. \qquad (MCKP)$$

The optimal value of problem (MCKP) is equal to $\max_{x \in F} p^T x$, and the solution set $S^*$ is given as

$$S^* := \{\bar{x} \in F \mid p^T \bar{x} = \max_{x \in F} p^T x\}.$$

Problem (MCKP) is NP-hard. The approaches to solving (MCKP) can be: heuristics [1,12], exact methods providing upper bounds for the optimal value of the profit together with the corresponding approximate solutions [26], and exact methods providing solutions [18]. There are algorithms that efficiently solve (MCKP) without sorting and reduction [8,28] or with sorting and reduction [4]. Solving (MCKP) with a linear relaxation (by neglecting the constraints $x_{ij} \in \{0,1\}$, $i = 1, \ldots, k$, $j = 1, \ldots, n_i$) gives upper bounds on the value of the optimal profit. Upper bounds can also be obtained with the help of the Lagrange relaxation. These facts and other features of (MCKP) are described in detail in the monographs [13,19].

Exact branch-and-bound methods [6] (integer programming), even those using commercial optimization software (e.g., LINGO, CPLEX), can have trouble with solving large (MCKP) problems. A branch-and-bound algorithm with a quick solution of the relaxation of reduced problems was proposed by Sinha and Zoltners [27]. Dudziński and Walukiewicz proposed an algorithm with pseudo-polynomial complexity [5].

Algorithms that use dynamic programming require integer values of data and, for large-scale problems, require a large amount of memory for backtracking (finding solutions in set $X$); see also the monograph [19]. The algorithm we propose does not need the data to be integer numbers.

Heuristic algorithms, based on solving the linear (or continuous) relaxation of (MCKP) and dynamic programming [7,22,24], are reported to be fast, but have limitations typical for dynamic programming. The most recent approach, "reduce and solve" [2,10], is based on reducing the problem by proposed pseudo cuts and then solving the reduced problems by a Mixed Integer Programming (MIP) solver.
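Before describing the method, it may help to fix a data layout for (MCKP). The following is a minimal sketch in C (the language of the implementations discussed later); the type and function names are ours and not part of the authors' code.

#include <stddef.h>

/* Sketch of an (MCKP) instance stored class by class: class i holds n[i]
   items with profits p[i][j] and costs c[i][j]; a candidate solution
   x in X is encoded compactly as pick[i], the item index chosen in class i. */
typedef struct {
    size_t k;      /* number of classes N_1, ..., N_k        */
    size_t *n;     /* n[i]: number of items in class i       */
    double **p;    /* p[i][j]: profit of item j of class i   */
    double **c;    /* c[i][j]: cost of item j of class i     */
    double b;      /* budget, right-hand side of c^T x <= b  */
} mckp_t;

/* Total profit p^T x of a one-item-per-class selection. */
static double mckp_profit(const mckp_t *ins, const size_t *pick) {
    double s = 0.0;
    for (size_t i = 0; i < ins->k; ++i) s += ins->p[i][pick[i]];
    return s;
}

/* Feasibility test x in F, i.e., c^T x <= b. */
static int mckp_feasible(const mckp_t *ins, const size_t *pick) {
    double cost = 0.0;
    for (size_t i = 0; i < ins->k; ++i) cost += ins->c[i][pick[i]];
    return cost <= ins->b;
}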
In the present paper, we propose a new exact (not heuristic) method which provides approximate optimal profits together with the corresponding approximate solutions. The method is based on multi-objective optimization techniques. Namely, we start by formulating a linear bi-objective problem (BP) related to the original problem (MCKP). After investigating the relationships between the problems (MCKP) and (BP), we propose an algorithm for solving (MCKP) via a series of scalarized linear bi-objective problems (BS(λ)).

The main advantage of the proposed method is that the scalarized linear bi-objective problems (BS(λ)) can be explicitly solved by exploiting the structure of the set $X$. Namely, these scalarized problems can be decomposed into $k$ independent subproblems, the solutions of which are given by simple closed-form formulas. This feature of our method is particularly suitable for parallelization. It allows solutions of the scalarized problems to be generated in an efficient and fast way.

The experiments show that the method we propose generates very quickly an outcome $\hat{x} \in F$ which is an approximate solution to (MCKP). Moreover, a lower bound (LB) and an upper bound (UB) for the optimal profit are provided. The obtained approximate solution $\hat{x} \in F$ could serve as a good starting point for other, e.g., heuristic or exact, algorithms for finding an optimal solution to problem (MCKP).

The organization of the paper is as follows. In Sect. 2, we provide preliminary facts on multi-objective optimization problems and we formulate a bi-objective optimization problem (BP) associated with (MCKP). In Sect. 3, we investigate the relationships between the problem (BP) and the original problem (MCKP). In Sect. 4, we formulate scalarized problems (BS(λ)) for the bi-objective problem (BP) and we provide closed-form formulae for solutions to problems (BS(λ)) by decomposing them into $k$ independent subproblems (BS(λ))_i, $i = 1, \ldots, k$. In Sect. 5, we present our method (together with the pseudo-code), which provides a lower bound (LB) for the optimal profit together with the corresponding approximate feasible solution $\hat{x} \in F$ to (MCKP) for which the bound (LB) is attained. In Sect. 6, we report on the results of numerical experiments. The last section concludes.

2 Multi-objective optimization problems

Let $f_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, k$, be functions defined on $\mathbb{R}^n$ and let $\Omega \subset \mathbb{R}^n$ be a subset of $\mathbb{R}^n$. The multi-objective optimization problem is defined as

$$\text{Vmax}\ (f_1(x), \ldots, f_k(x)) \quad \text{subject to} \quad x \in \Omega, \qquad (P)$$

where the symbol Vmax means that solutions to problem (P) are understood in the sense of Pareto efficiency defined in Definition 2.1. Let

$$\mathbb{R}^k_+ := \{x = (x_1, \ldots, x_k) \in \mathbb{R}^k : x_i \ge 0,\ i = 1, \ldots, k\}.$$

Definition 2.1 A point $x^* \in \Omega$ is a Pareto efficient (Pareto maximal) solution to (P) if

$$f(\Omega) \cap (f(x^*) + \mathbb{R}^k_+) = \{f(x^*)\}.$$

In other words, $x^* \in \Omega$ is a Pareto efficient solution to (P) if there is no $\bar{x} \in \Omega$ such that

$$f_i(\bar{x}) \ge f_i(x^*) \ \text{for } i = 1, \ldots, k \quad \text{and} \quad f_\ell(\bar{x}) > f_\ell(x^*) \ \text{for some } \ell,\ 1 \le \ell \le k. \qquad (1)$$

The problem (P) where all the functions $f_i$, $i = 1, \ldots, k$, are linear is called a linear multi-objective optimization problem.

Remark 2.1 The bi-objective problem

$$f_1(x) \to \max, \quad f_2(x) \to \min \quad \text{subject to} \quad x \in \Omega,$$

with Pareto solutions $x^* \in \Omega$ defined as

$$(f_1(\Omega), f_2(\Omega)) \cap [(f_1(x^*), f_2(x^*)) + \mathbb{R}^2_{+-}] = \{(f_1(x^*), f_2(x^*))\}, \qquad (2)$$

where

$$\mathbb{R}^2_{+-} := \{x = (x_1, x_2) \in \mathbb{R}^2 : x_1 \ge 0,\ x_2 \le 0\},$$

is equivalent to the problem

$$f_1(x) \to \max, \quad -f_2(x) \to \max \quad \text{subject to} \quad x \in \Omega$$

in the sense that the Pareto efficient solution sets (as subsets of the feasible set $\Omega$) coincide, and the Pareto elements (the images in $\mathbb{R}^2$ of Pareto efficient solutions) differ in sign in the second component.
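For the bi-objective setting used below, the dominance test behind Definition 2.1, in the max/max form of Remark 2.1, reduces to a two-component comparison of outcome vectors. A minimal sketch, with our own naming:

/* Returns 1 if outcome (f1_y, f2_y) dominates (f1_x, f2_x) in the
   (max, max) sense of Remark 2.1: at least as good in both components
   and strictly better in at least one. Below, f1 = p^T x, f2 = (-c)^T x. */
static int dominates(double f1_y, double f2_y, double f1_x, double f2_x) {
    return f1_y >= f1_x && f2_y >= f2_x && (f1_y > f1_x || f2_y > f2_x);
}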
2.1 A bi-objective optimization problem related to (MCKP)

In relation to the original multiple-choice knapsack problem (MCKP), we consider the linear bi-objective binary optimization problem (BP1) of the form

$$\sum_{i=1}^{k} \sum_{j=1}^{n_i} p_{ij} x_{ij} \to \max, \quad \sum_{i=1}^{k} \sum_{j=1}^{n_i} c_{ij} x_{ij} \to \min \quad \text{subject to} \quad (x_{ij}) \in X. \qquad (BP1)$$

In this problem, the left-hand side of the linear inequality constraint $c^T x \le b$ of (MCKP) becomes a second criterion, and the constraint set reduces to the set $X$.

The motivation for considering the bi-objective problem (BP1) is two-fold. The first motivation comes from the fact that in (MCKP) the inequality

$$\sum_{i=1}^{k} \sum_{j=1}^{n_i} c_{ij} x_{ij} \le b$$

is usually seen as a budget (in general: a resource) constraint, with the left-hand side preferably not greater than a given available budget $b$. In the bi-objective problem (BP1), this requirement is represented through the minimization of $\sum_{i=1}^{k} \sum_{j=1}^{n_i} c_{ij} x_{ij}$. In Theorem 3.1 of Sect. 3, we show that under relatively mild conditions, among the solutions of the bi-objective problem (BP1) [or the equivalent problem (BP)] there are solutions to problem (MCKP).

The second motivation is important from the algorithmic point of view and is related to the fact that in the proposed algorithm we are able to efficiently exploit the specific structure of the constraint set $X$, which contains $k$ linear equality constraints (each one referring to a different group of variables) and the binary conditions only. More precisely, the set $X$ can be represented as the Cartesian product

$$X = X^1 \times X^2 \times \cdots \times X^k \qquad (3)$$

of the sets $X^i$, where $X^i := \{x^i \in \mathbb{R}^{n_i} \mid \sum_{j=1}^{n_i} x_{ij} = 1,\ x_{ij} \in \{0,1\},\ j = 1, \ldots, n_i\}$, $i = 1, \ldots, k$, and

$$x = (\underbrace{x_{11}, \ldots, x_{1n_1}}_{x^1}, \underbrace{x_{21}, \ldots, x_{2n_2}}_{x^2}, \ldots, \underbrace{x_{k1}, \ldots, x_{kn_k}}_{x^k})^T, \qquad (4)$$

i.e., $x = (x^1, \ldots, x^k)^T$ and $x^i = (x_{i1}, \ldots, x_{in_i})$. Accordingly, $p = (p^1, \ldots, p^k)^T$ and $c = (c^1, \ldots, c^k)^T$. Note that due to the presence of the budget inequality constraint, the feasible set $F$ of problem (MCKP) cannot be represented in a form analogous to (3).

According to Remark 2.1, problem (BP1) can be equivalently reformulated as

$$\text{Vmax}\ (p^T x, (-c)^T x) \quad \text{subject to} \quad x \in X. \qquad (BP)$$

3 The relationships between (BP) and (MCKP)

Starting from the multiple-choice knapsack problem (MCKP) of the form

$$\max\ p^T x \quad \text{subject to} \quad x \in F,$$

in the present section we analyse the relationships between problems (MCKP) and (BP). We start with a basic observation. Recall first that (MCKP) is solvable, i.e., the feasible set $F$ is nonempty, if $b \ge \min_{x \in X} c^T x$. On the other hand, if $b \ge \max_{x \in X} c^T x$, then (MCKP) is trivially solvable. Thus, in the sequel we assume that

$$C_{min} := \min_{x \in X} c^T x \le b < C_{max} := \max_{x \in X} c^T x. \qquad (5)$$

Let $P_{max} := \max_{x \in X} p^T x$, i.e., $P_{max}$ is the maximal value of the function $p^T x$ on the set $X$. The following observations are essential for further considerations.

1. First, among the elements of $X$ which realize the maximal value $P_{max}$, there exists at least one which is feasible for (MCKP), i.e., there exists $x_p \in X$, $p^T x_p = P_{max}$, such that $c^T x_p \le b$, i.e.,

$$C_{min} \le c^T x_p \le b < C_{max}. \qquad (6)$$

Then, clearly, $x_p$ solves (MCKP).

2. Second, none of the elements which realize the maximal value $P_{max}$ is feasible for (MCKP), i.e., for every $x_p \in X$ with $p^T x_p = P_{max}$ we have $c^T x_p > b$; that is, any $x_p$ realizing the maximal value $P_{max}$ is infeasible for (MCKP), i.e.,
$$C_{min} \le b < c^T x_p \le C_{max}. \qquad (7)$$

In the sequel, we concentrate on Case 2, characterized by (7). This case is related to problem (BP). To see this, let us introduce some additional notation. Let $x_{cmin} \in X$ and $x_{pmax} \in X$ be defined as

$$c^T x_{cmin} = C_{min} \quad \text{and} \quad p^T x_{cmin} = \max_{c^T x = C_{min}} p^T x,$$
$$p^T x_{pmax} = P_{max} \quad \text{and} \quad c^T x_{pmax} = \min_{p^T x = P_{max}} c^T x.$$

Let $S_{bo}$ be the set of all Pareto solutions to the bi-objective problem (BP),

$$S_{bo} := \{x \in X : (p^T, (-c)^T)(X) \cap [(p^T x, (-c)^T x) + \mathbb{R}^2_+] = \{(p^T x, (-c)^T x)\}\}$$

(c.f. Definition 2.1). The following lemma holds.

Lemma 3.1 Assume that we are in Case 2, i.e., condition (7) holds. Then there exists a Pareto solution to the bi-objective optimization problem (BP), $\bar{x} \in S_{bo}$, which is feasible for problem (MCKP), i.e., $c^T \bar{x} \le b$, which amounts to $\bar{x} \in F$.

Proof According to Definition 2.1, both $x_{pmax} \in X$ and $x_{cmin} \in X$ are Pareto efficient solutions to (BP); i.e., there is no $x \in X$ such that $(p^T x, c^T x) \ne (p^T x_{pmax}, c^T x_{pmax})$ and

$$p^T x \ge p^T x_{pmax} \quad \text{and} \quad c^T x \le c^T x_{pmax},$$

and there is no $x \in X$ such that $(p^T x, c^T x) \ne (p^T x_{cmin}, c^T x_{cmin})$ and

$$p^T x \ge p^T x_{cmin} \quad \text{and} \quad c^T x \le c^T x_{cmin}.$$

Moreover, by (7),

$$C_{min} = c^T x_{cmin} \le b < c^T x_{pmax}. \qquad (8)$$

In view of (8), $\bar{x} = x_{cmin} \in S_{bo}$ and $\bar{x} = x_{cmin} \in F$ ($c^T x_{cmin} \le b$), which means that $\bar{x}$ is feasible for problem (MCKP), which concludes the proof. □

Now we are ready to formulate the result establishing the relationship between solutions of (MCKP) and Pareto efficient solutions of (BP) in the case where condition (7) holds.

Theorem 3.1 Suppose we are given a problem (MCKP) satisfying condition (7). Let $x^* \in X$ be a Pareto solution to (BP) such that

$$b - c^T x^* = \min_{x \in S_{bo},\ b - c^T x \ge 0} (b - c^T x). \qquad (*)$$

Then $x^*$ solves (MCKP).

Proof Observe first that, by Lemma 3.1, there exist $x \in S_{bo}$ satisfying the constraint $c^T x \le b$, i.e., condition (*) is not dummy.

By contradiction, suppose that a feasible element $x^* \in F$, i.e., $x^* \in X$, $c^T x^* \le b$, is not a solution to (MCKP), i.e., there exists an $x_1 \in X$ such that

$$c^T x_1 \le b \quad \text{and} \quad p^T x_1 > p^T x^*. \qquad (**)$$

We show that $x^*$ cannot satisfy condition (*). If $c^T x_1 \le c^T x^*$, then $x^*$ is not a Pareto solution to (BP), i.e., $x^* \notin S_{bo}$, and $x^*$ does not satisfy condition (*). Otherwise, $c^T x_1 > c^T x^*$, i.e.,

$$b - c^T x^* > b - c^T x_1. \qquad (9)$$

Fig. 1 Illustration of the content of Theorem 3.1; black dots—outcomes of Pareto efficient solutions to (BP), star—the Pareto efficient outcome of (BP) which solves (MCKP)

If $x_1 \in S_{bo}$, then, according to (9), $x^*$ cannot satisfy condition (*). If $x_1 \notin S_{bo}$, there exists $x_2 \in S_{bo}$ which dominates $x_1$, i.e., $(p^T x_2, (-c)^T x_2) \in (p^T x_1, (-c)^T x_1) + \mathbb{R}^2_+$. Again, if $c^T x_2 \le c^T x^*$, then $x^*$ is not a Pareto solution to (BP), i.e., $x^*$ cannot satisfy condition (*). Otherwise, if $c^T x_2 > c^T x^*$, then either $x^* \notin S_{bo}$, and consequently $x^*$ cannot satisfy condition (*), or $x^* \in S_{bo}$, in which case $b - c^T x^* > b - c^T x_2 \ge 0$ and $x^*$ does not satisfy condition (*): a contradiction, which completes the proof. □

Theorem 3.1 says that under condition (7), any solution to (BP) satisfying condition (*) solves problem (MCKP). General relations between constrained optimization and multi-objective programming were investigated in [15].

Basing ourselves on Theorem 3.1,
in Sect. 5 we provide an algorithm for finding $x \in S_{bo}$, a Pareto solution to (BP), which is feasible for problem (MCKP) and for which condition (*) is either satisfied or is, in some sense, as close as possible to being satisfied. In the latter case, the algorithm provides upper and lower bounds for the optimal value of (MCKP) (see Fig. 1).

4 Decomposition of the scalarized bi-objective problem (BP)

In the present section, we consider problem (BS(λ₁, λ₂)) defined by (10), which is a linear scalarization of problem (BP). In our algorithm BISSA, presented in Sect. 5, we obtain an approximate feasible solution to (MCKP) by solving a (usually very small) number of problems of the form (BS(λ₁, λ₂)). The main advantage of basing our algorithm on problems (BS(λ₁, λ₂)) is that they are explicitly solvable by the simple closed-form expressions (17).

For problem (BP) the following classical scalarization result holds.

Theorem 4.1 [9,20] If there exist $\lambda_\ell > 0$, $\ell = 1, 2$, such that $x^* \in X$ is a solution to the scalarized problem

$$\max_{x \in X}\ \lambda_1 p^T x + \lambda_2 (-c)^T x, \qquad (BS(\lambda_1, \lambda_2)) \quad (10)$$

then $x^*$ is a Pareto efficient solution to problem (BP).

Without loss of generality we can assume that $\sum_{\ell=1}^{2} \lambda_\ell = 1$. In the sequel, we consider, for $0 < \lambda < 1$, scalarized problems of the form

$$\max_{x \in X}\ \lambda p^T x + (1 - \lambda)(-c)^T x. \qquad (BS(\lambda)) \quad (11)$$

Remark 4.1 According to Theorem 4.1, solutions to the problems

$$\max_{x \in X} p^T x, \quad \max_{x \in X} (-c)^T x \qquad (12)$$

need not be Pareto efficient, because the weights are not both positive. However, there exist Pareto efficient solutions to (BP) among the solutions to these problems. Namely, there exist $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$ such that solutions to the problems

$$\max_{x \in X}\ p^T x + \varepsilon_1 (-c)^T x \qquad (P1)$$

and

$$\max_{x \in X}\ (-c)^T x + \varepsilon_2 p^T x \qquad (P2)$$

are Pareto efficient solutions to problems (12), respectively. Suitable $\varepsilon_1$ and $\varepsilon_2$ will be determined in the next section.

4.1 Decomposition

Due to the highly structured form of the set $X$ and the possibility of representing $X$ in the form (3),

$$X = X^1 \times X^2 \times \cdots \times X^k,$$

we can provide explicit formulae for solving problems (BS(λ)). To this aim, we decompose problems (BS(λ)) as follows. Recall that by using the notation (4) we can put any $x \in X$ in the form

$$x := (x^1, x^2, \ldots, x^k)^T,$$

where $x^i = (x_{i1}, \ldots, x_{in_i})$, $i = 1, \ldots, k$, and $\sum_{j=1}^{n_i} x_{ij} = 1$. Let $0 < \lambda < 1$. According to (3), we have

$$X^i := \{x^i = (x_{i1}, \ldots, x_{in_i}) \in \mathbb{R}^{n_i} : \sum_{j=1}^{n_i} x_{ij} = 1,\ x_{ij} \in \{0,1\},\ j = 1, \ldots, n_i\}$$

for $i = 1, \ldots, k$. Consider problems (BS(λ))_i, $i = 1, \ldots, k$, of the form

$$\max_{x^i \in X^i}\ [\lambda (p^i)^T x^i + (1 - \lambda)(-c^i)^T x^i]. \qquad (BS(\lambda))_i \quad (13)$$

By solving problems (BS(λ))_i, $i = 1, \ldots, k$, we find their solutions $\bar{x}^i$. We shall show that

$$\bar{x} := (\bar{x}^1, \ldots, \bar{x}^k)^T$$

solves (BS(λ)). Thus, problem (11) is decomposed into $k$ subproblems (13), the solutions of which form solutions to (11). Note that similar decomposed problems, with feasible sets $X^i$ and other objective functions, have already been considered in [3] in relation to multi-dimensional multiple-choice knapsack problems.

Now we give closed-form formulae for the solutions of (BS(λ))_i. For $i = 1, \ldots, k$, let

$$V_i := \max\{\lambda p_{ij} + (1 - \lambda)(-c_{ij}) : 1 \le j \le n_i\}, \qquad (14)$$

and let $1 \le j_i^* \le n_i$ be the index number for which the value $V_i$ is attained, i.e.,

$$V_i = \lambda p_{ij_i^*} + (1 - \lambda)(-c_{ij_i^*}). \qquad (15)$$

We show that

$$\bar{x}^i := (0, \ldots, 0, 1, 0, \ldots, 0) \quad \text{(with the 1 in position } j_i^*\text{)} \qquad (16)$$

is a solution to (BS(λ))_i and that

$$\bar{x}^* := (\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^k)^T \qquad (17)$$

is a solution to (BS(λ)). The optimal value of (BS(λ)) is

$$V := V_1 + \cdots + V_k. \qquad (18)$$
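Formulas (14)–(17) translate directly into a linear-time routine. The sketch below is ours (it reuses the hypothetical mckp_t layout introduced earlier); ties are resolved here by keeping the first maximizer, whereas the algorithm of Sect. 5 needs the whole tie set:

/* Solves (BS(lambda)) by the decomposition (13): in every class i, pick
   the item maximizing lambda*p_ij + (1-lambda)*(-c_ij), whose value is
   V_i of (14)-(15); pick[i] encodes the unit vector (16). Returns the
   optimal value V = V_1 + ... + V_k of (18). Runs in O(n) time overall. */
static double solve_bs_lambda(const mckp_t *ins, double lambda, size_t *pick) {
    double V = 0.0;
    for (size_t i = 0; i < ins->k; ++i) {
        size_t jstar = 0;
        double best = lambda * ins->p[i][0] - (1.0 - lambda) * ins->c[i][0];
        for (size_t j = 1; j < ins->n[i]; ++j) {
            double v = lambda * ins->p[i][j] - (1.0 - lambda) * ins->c[i][j];
            if (v > best) { best = v; jstar = j; }
        }
        pick[i] = jstar;   /* solution of (BS(lambda))_i, formula (16) */
        V += best;         /* accumulate (18) */
    }
    return V;
}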
The following proposition makes these claims precise.

Proposition 4.1 Any element $\bar{x}^i \in \mathbb{R}^{n_i}$ given by (16) solves (BS(λ))_i for $i = 1, \ldots, k$, and any $\bar{x}^* \in \mathbb{R}^n$ given by (17) solves problem (BS(λ)).

Proof Clearly, each $\bar{x}^i$ is feasible for (BS(λ))_i, $i = 1, \ldots, k$, because $\bar{x}^i$ is of the form (16) and hence belongs to the set $X^i$, which is the constraint set of (BS(λ))_i. Consequently, $\bar{x}^*$ defined by (17) is feasible for (BS(λ)), because all the components are binary and the linear equality constraints

$$\sum_{j=1}^{n_i} x_{ij} = 1, \quad i = 1, 2, \ldots, k,$$

are satisfied.

To see that the $\bar{x}^i$ are also optimal for (BS(λ))_i, $i = 1, \ldots, k$, suppose, on the contrary, that there exist $1 \le i \le k$ and an element $y^i \in \mathbb{R}^{n_i}$ which is feasible for (BS(λ))_i with the value of the objective function strictly greater than the value at $\bar{x}^i$, i.e.,

$$\sum_{j=1}^{n_i} [\lambda p_{ij} + (1 - \lambda)(-c_{ij})]\, y^i_j > \sum_{j=1}^{n_i} [\lambda p_{ij} + (1 - \lambda)(-c_{ij})]\, \bar{x}^i_j.$$

This, however, would mean that there exists an index $1 \le j \le n_i$ such that

$$\lambda p_{ij} + (1 - \lambda)(-c_{ij}) > \lambda p_{ij_i^*} + (1 - \lambda)(-c_{ij_i^*}),$$

contrary to the definition of $j_i^*$.

To see that $\bar{x}^*$ is optimal for (BS(λ)), suppose, on the contrary, that there exists an element $y \in \mathbb{R}^n$ which is feasible for (BS(λ)) and for which the value of the objective function at $y$ is strictly greater than the value at $\bar{x}^*$, i.e.,

$$\lambda p^T y + (1 - \lambda)(-c)^T y > \lambda p^T \bar{x}^* + (1 - \lambda)(-c)^T \bar{x}^*.$$

In the same way as previously, we get a contradiction with the definition of the components of $\bar{x}^*$ given by (17). □

Let us observe that each optimization problem (BS(λ))_i can be solved in time $O(n_i)$; hence problem (BS(λ)) can be solved in time $O(n)$, where $n = \sum_{i=1}^{k} n_i$. Clearly, one can have more than one solution to (BS(λ))_i, $i = 1, \ldots, k$. In the next section, according to Theorem 3.1, from among all the solutions of (BS(λ)) we choose the one for which the value of the second criterion is greater than, and as close as possible to, $-b$.

Note that by using Proposition 4.1 one can easily solve the problems (12) underlying (P1) and (P2) of Remark 4.1: by applying (18) we immediately get

$$F_1 := \max_{x \in X} p^T x, \quad F_2 := \max_{x \in X} (-c)^T x,$$

their optimal values, and by (17) we find their solutions $\bar{x}_1$ and $\bar{x}_2$, respectively.

Proposition 4.1 and formula (17) allow us to find $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$ as defined in Remark 4.1. By (17), it is easy to find elements $\bar{x}_1, \bar{x}_2 \in X$ such that

$$F_1 = p^T \bar{x}_1, \quad F_2 = (-c)^T \bar{x}_2.$$

Put

$$\bar{F}_1 := p^T \bar{x}_2, \quad \bar{F}_2 := (-c)^T \bar{x}_1,$$

and let

$$\bar{V}_1 := F_1 - decr(p), \quad \bar{V}_2 := F_2 - decr(-c),$$

where $decr(p)$ and $decr(-c)$ denote the smallest nonzero decrease on $X$ of the functions $p$ and $(-c)$ from $F_1$ and $F_2$, respectively. Note that $decr(p)$ and $decr(-c)$ can easily be found.

Remark 4.2 The following formulas describe $decr(p)$ and $decr(-c)$:

$$decr(p) := \min_{1 \le i \le k} (p^i_{max} - p^i_{submax}), \quad decr(-c) := \min_{1 \le i \le k} ((-c)^i_{max} - (-c)^i_{submax}), \qquad (19)$$

where $p^i$ and $c^i$, $i = 1, \ldots, k$, are defined by (4), and $p^i_{submax}$, $(-c)^i_{submax}$, $i = 1, \ldots, k$, are the submaximal values of the functions $(p^i)^T x^i$, $((-c)^i)^T x^i$, $x^i \in X^i$, $i = 1, \ldots, k$. For any $1 \le i \le k$, the submaximal values of a linear function $(d^i)^T x^i$ on $X^i$ can be found by first ordering the coefficients of the function $d^i$ decreasingly,

$$(d^i)^{j_1} > (d^i)^{j_2} \ge \cdots \ge (d^i)^{j_m},$$

and next observing that the submaximal (i.e., smaller than the maximal but as close as possible to the maximal) value of $d^i$ on $X^i$ is attained for $\bar{x}^i := (0, \ldots, 0, 1, 0, \ldots, 0)$ with the 1 in position $j_2$.
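Remark 4.2 can be implemented without sorting by tracking the largest and second-largest coefficient of each class in one pass. A sketch (our code; for $decr(-c)$, call it with d[i][j] = -c[i][j]; classes whose two best coefficients tie contribute no nonzero decrease and are skipped, matching the strict first inequality in the ordering above):

#include <math.h>

/* Sketch of (19): for each class i, find the maximal and submaximal
   values of the linear objective d on X^i in a single pass, and return
   the smallest nonzero gap max - submax over all classes. */
static double smallest_decrease(size_t k, const size_t *n, double **d) {
    double decr = HUGE_VAL;
    for (size_t i = 0; i < k; ++i) {
        double max = d[i][0], submax = -HUGE_VAL;
        for (size_t j = 1; j < n[i]; ++j) {
            if (d[i][j] > max)         { submax = max; max = d[i][j]; }
            else if (d[i][j] > submax) { submax = d[i][j]; }
        }
        double gap = max - submax;
        if (submax > -HUGE_VAL && gap > 0.0 && gap < decr) decr = gap;
    }
    return decr;   /* HUGE_VAL signals that every class is fully tied */
}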
Based on Remark 4.2, one can find the values of $p^i_{submax}$ and $(-c)^i_{submax}$ in time $O(n_i)$, $i = 1, \ldots, k$, even without any sorting. It can be done, for a given $i$, by finding a maximal value among all $p_{ij}$ ($c_{ij}$), $j = 1, \ldots, n_i$, except $p^i_{max}$ ($c^i_{max}$). Therefore, the computational cost of calculating $decr(p)$ and $decr(-c)$ is $O(n)$. We have the following fact.

Proposition 4.2 Let $F_1, F_2, \bar{F}_1, \bar{F}_2, \bar{V}_1, \bar{V}_2$ be as defined above. The problems

$$\max_{x \in X}\ p^T x + \varepsilon_1 (-c)^T x \qquad (P1)$$

and

$$\max_{x \in X}\ (-c)^T x + \varepsilon_2 p^T x, \qquad (P2)$$

where

$$\varepsilon_1 := \frac{F_1 - \bar{V}_1}{F_2 - \bar{F}_2}, \quad \varepsilon_2 := \frac{F_2 - \bar{V}_2}{F_1 - \bar{F}_1}, \qquad (20)$$

give Pareto efficient solutions to problem (BP), $\bar{x}_1$ and $\bar{x}_2$, respectively. Moreover,

$$f_1(\bar{x}_1) = F_1 \quad \text{and} \quad f_2(\bar{x}_2) = F_2,$$

i.e., $\bar{x}_1$ and $\bar{x}_2$ solve problems (12), respectively.

Fig. 2 Construction of $\varepsilon_1$ and $\varepsilon_2$

Proof Follows immediately from the adopted notation; see Fig. 2. For instance, the objective of problem (P1) is represented by the straight line passing through the points $(F_1, \bar{F}_2)$ and $(\bar{V}_1, F_2)$, i.e.,

$$F_1 + \varepsilon_1 \bar{F}_2 = \bar{V}_1 + \varepsilon_1 F_2,$$

which gives (20). The choice of $F_1, \bar{F}_2$ and $\bar{V}_1, F_2$ guarantees that $\bar{x}_1$ solves (P1) (and analogously for $\bar{x}_2$, which solves (P2)). □

5 Bi-objective approximate solution search algorithm (BISSA) for solving (MCKP)

In this section, we propose the bi-objective approximate solution search algorithm BISSA for finding an element $\hat{x} \in F$ which is an approximate solution to (MCKP). The algorithm relies on solving a series of problems (BS(λ)) defined by (11) for $0 < \lambda < 1$ chosen in such a way that the Pareto solutions $x(\lambda)$ to (BS(λ)) are feasible for (MCKP), with $(-c)^T x(\lambda) + b \ge 0$, and $(-c)^T x(\lambda) + b$ diminishes for subsequent $\lambda$. According to Theorem 4.1, each solution to (BS(λ)) solves the linear bi-objective optimization problem (BP),

$$\text{Vmax}\ (p^T x, (-c)^T x) \quad \text{subject to} \quad x \in X. \qquad (BP)$$

According to Theorem 3.1, any Pareto efficient solution $x^*$ to problem (BP) which is feasible for (MCKP), i.e., $(-c)^T x^* \ge -b$, and satisfies condition (*), i.e.,

$$(-c)^T x^* + b = \min_{x \in S_{bo},\ (-c)^T x + b \ge 0} ((-c)^T x + b), \qquad (*)$$

solves problem (MCKP). Since problems (BS(λ)) are defined with the help of linear scalarization, we are not able, in general, to enumerate all $x \in S_{bo}$ such that $(-c)^T x + b \ge 0$ in order to find an $x^*$ which satisfies condition (*). On the other hand, by using linear scalarization, we are able to decompose and easily solve problems (BS(λ)).

Fig. 3 Outcome $f(\hat{x})$ and bounds derived by the BISSA algorithm; $x^*$—the solution to problem (MCKP)

The BISSA algorithm aims at finding a Pareto efficient solution $\hat{x} \in X$ to (BP) which is feasible for (MCKP), i.e., $c^T \hat{x} \le b$, and for which the value of $b - c^T \hat{x}$ is as small as possible (but not necessarily minimal), approaching condition (*) of Theorem 3.1 as closely as possible.

Here, we give a description of the BISSA algorithm. The first step of the algorithm (lines 1–5) is to find solutions to problems (P1) and (P2) as well as their outcomes. These solutions are the extreme Pareto solutions to problem (BP); the corresponding points, named $(a_1, b_1)$ and $(a_2, b_2)$, are presented in Fig. 3. Then (lines 6–9), in order to assert whether a solution to problem (MCKP) exists or not, a basic check is made against the value $-b$. If the algorithm reaches line 10, no solution has been found yet, and we can begin the exploration of the search space. We calculate $\lambda$ according to line 13.
The value of $\lambda$ is the slope of the straight line joining $(a_1, b_1)$ and $(a_2, b_2)$. At the same time, it is the scalarization parameter defining the problem (BS(λ)) (formula (11)). The outcome of the solution to problem (BS(λ)) cannot lie below the straight line determined by the points $(a_1, b_1)$ and $(a_2, b_2)$; it must lie on or above this line, as it is a Pareto efficient solution to problem (BP). Then, problem (BS(λ)) is solved (line 14) by using formulae (16) and (17). Next, in lines 15–27 of the repeat–until loop, a scan of the search space is conducted to find solutions to problem (BP) which are feasible for problem (MCKP). If there exist solutions with outcomes lying above the straight line determined by $\lambda$ (the condition in line 15 is true), either the search space is narrowed (by determining new points $(a_1, b_1)$ and $(a_2, b_2)$; see Fig. 3 and the points with upper index equal to 1) and the loop continues, or the solution to problem (MCKP) is derived. If not, the solution $x$ from set $S$ whose outcome lies above the line determined by $-b$ [a feasible solution to problem (MCKP)] and for which the value $f_2(x) + b$ is minimal in this set is taken as an approximate solution ($\hat{x}$) to problem (MCKP), and the loop terminates. Finally (line 28), the upper bound $f_1(\hat{x}) + u$ on the profit value of the exact solution to problem (MCKP) is calculated.

Algorithm 1 BISSA – approximate solution search for (MCKP)

1: Calculate ε₁, ε₂ according to (20)
2: Assume that f₁(x) = pᵀx and f₂(x) = (−c)ᵀx
3: Solve (P1) according to (18) and (17)  {x₁ a solution to (P1)}
4: Solve (P2) according to (17) and (18)  {x₂ a solution to (P2)}
5: a₁ := f₁(x₁), b₁ := f₂(x₁), a₂ := f₁(x₂), b₂ := f₂(x₂)
6: if (a₁, b₁) = (a₂, b₂) and b₂ ≥ −b then x₂ solves (MCKP) and STOP end if
7: if b₁ ≥ −b then x₁ solves (MCKP) and STOP end if
8: if b₂ = −b then x₂ solves (MCKP) and STOP end if
9: if b₂ < −b then no solution to (MCKP) and STOP end if
10: (a₁, b₁) ≠ (a₂, b₂) and b₁ < −b < b₂. Explore the search space
11: loop := TRUE
12: repeat
13:   λ := (b₂ − b₁)/((a₁ − a₂) + (b₂ − b₁)), α := λa₁ + (1 − λ)b₁  {0 < λ < 1}
14:   Solve (BS(λ)) according to (16) and (17)  {x a solution, opt the optimal value, S the solution set of (BS(λ))}
15:   if opt > α then
16:     if f₂(x) > −b then
17:       a₂ := f₁(x), b₂ := f₂(x)
18:     else if f₂(x) < −b then
19:       a₁ := f₁(x), b₁ := f₂(x)
20:     else
21:       x solves (MCKP) and STOP
22:     end if
23:   else  {opt = α}
24:     x̂ := arg min_{x ∈ S, f₂(x) ≥ −b} f₂(x)
25:     loop := FALSE
26:   end if
27: until ¬loop  {x̂ is an approximate solution to (MCKP); f₁(x̂) is a lower bound (LB) for (MCKP)}
28: u := (a₁ − f₁(x̂))(f₂(x̂) + b)/(f₂(x̂) − b₁)  {f₁(x̂) + u is an upper bound (UB) for (MCKP)}

The BISSA algorithm finds either an exact solution to problem (MCKP), or (after reaching line 27) a lower bound (LB) with its solution $\hat{x}$ and an upper bound (UB) (see Fig. 3). A solution found by the algorithm is, in general, only an approximate solution to problem (MCKP), because a triangle (called further the triangle of uncertainty) determined by the points $(f_1(\hat{x}), f_2(\hat{x}))$, $(f_1(\hat{x}) + u, -b)$, $(f_1(\hat{x}), -b)$ may contain other Pareto outcomes [candidates for outcomes of exact solutions to problem (MCKP)] which the proposed algorithm is not able to derive. The reason is that we use a scalarization technique based on weighted sums of the criteria functions to obtain Pareto solutions to problem (BP).
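To connect the pseudo-code with the earlier routines, here is a condensed sketch (ours) of the repeat–until loop in lines 12–27. It reuses the hypothetical mckp_t and solve_bs_lambda() from the previous sketches; the enumeration of the whole tie set S needed in line 24 and floating-point tolerances for the comparisons are deliberately elided, so this is an outline of the control flow rather than a full implementation:

/* Outline of BISSA lines 12-27: (a1, b1) and (a2, b2) are the current
   extreme outcomes bracketing -b (b1 < -b < b2); pick[] receives the
   selection whose outcome is the approximate solution. */
static void bissa_loop(const mckp_t *ins, double a1, double b1,
                       double a2, double b2, size_t *pick) {
    for (;;) {
        double lambda = (b2 - b1) / ((a1 - a2) + (b2 - b1));  /* line 13 */
        double alpha  = lambda * a1 + (1.0 - lambda) * b1;
        double opt    = solve_bs_lambda(ins, lambda, pick);   /* line 14 */
        double f1 = 0.0, f2 = 0.0;            /* outcome of pick[]      */
        for (size_t i = 0; i < ins->k; ++i) {
            f1 += ins->p[i][pick[i]];
            f2 -= ins->c[i][pick[i]];
        }
        if (opt > alpha) {                                    /* line 15 */
            if (f2 > -ins->b)      { a2 = f1; b2 = f2; }  /* narrow from above */
            else if (f2 < -ins->b) { a1 = f1; b1 = f2; }  /* narrow from below */
            else return;   /* f2 == -b: pick[] solves (MCKP), line 21   */
        } else {
            /* opt == alpha (line 23): scan the solution set S of
               (BS(lambda)) for the feasible outcome closest to -b
               (line 24) -- elided in this sketch. */
            return;
        }
    }
}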
Let us recall that each instance of the optimization problem (BS(λ)) can be solved in time $O(n)$, but the number of these instances solved by the proposed algorithm depends on the size of the problem (the values $k$ and $n_i$) and on the data.

6 Computational experiments

Most publicly available test instances refer not to the (MCKP) problem (let us recall that there is only one inequality, or budget, constraint in the problem we consider) but to multi-dimensional knapsack problems. Due to this fact, we generated new random instances (available from the authors on request). However, to compare solutions obtained by the BISSA algorithm with exact solutions, we used the minimal algorithm for the multiple-choice knapsack problem [22], which we call EXACT, and its implementation in C [23]. The EXACT algorithm gives the profit value of the optimal solution as well as the solution obtained by the greedy algorithm for the (MCKP) problem, so the quality of the BISSA algorithm's approximate solutions can be assessed in terms of the difference, or relative difference, between the profit values of approximate and exact solutions.

Since the difficulty of knapsack problems (see, e.g., the monograph [19]) depends on the correlation between profits and weights of items, we conducted two computational experiments: Experiment 1 with uncorrelated data instances (easy to solve) and Experiment 2 with weakly correlated data instances (more difficult to solve) (c.f. [11]). The explanation of why weakly correlated problems are more difficult for the BISSA algorithm than uncorrelated ones is given later.

To prepare the test problems (data instances) we used a method proposed in [22] and our own procedure for calculating total cost values. The BISSA algorithm has been implemented in C. The implementation of the BISSA algorithm was run on an off-the-shelf laptop (2 GHz AMD processor, Windows 10), and the implementation of the EXACT algorithm was run on a PC machine (4 × 3.2 GHz Intel processor, Linux). The running time of the BISSA and EXACT algorithms for each of the test problems was below one second.

The contents of the table columns containing the experiment results are as follows:

1. Problem no.
2. Profit of the exact solution found by the EXACT algorithm.
3. Profit of the approximate solution found by the BISSA algorithm.
4. Difference between 2 and 3.
5. Relative (%) difference between 2 and 3.
6. Upper bound for (MCKP) found by the BISSA algorithm.
7. Difference between the upper bound and the profit of the approximate solution.
8. Relative difference between the upper bound and the profit of the approximate solution.
9. Upper bound for (MCKP) found by the greedy algorithm.
10. Number of (BS(λ)) problems solved by the BISSA algorithm.

Experiment 1: uncorrelated data (unc) instances

We generated 10 test problems assuming that k = 10 and n_i = 1000, i = 1, …, k (problem set (unc, 10, 1000)); 10 test problems assuming that k = 100 and n_i = 100, i = 1, …, k (problem set (unc, 100, 100)); and 10 test problems assuming that k = 1000 and n_i = 10, i = 1, …, k (problem set (unc, 1000, 10)). For each test problem, the profits (p_ij) and costs (c_ij) of items were randomly distributed (according to the uniform distribution) in [1, R], R = 10000. Profits and costs of items were integers.
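For concreteness, the uncorrelated generator described above can be sketched as follows (our code; rand() stands in for whatever random number generator the authors used, and the budget b is set afterwards by the rule given next):

#include <stdlib.h>

/* Sketch of the uncorrelated instance generator: integer profits and
   costs drawn uniformly from [1, R] with R = 10000; assumes the p and c
   arrays of ins have already been allocated. */
static void gen_uncorrelated(mckp_t *ins, int R) {
    for (size_t i = 0; i < ins->k; ++i)
        for (size_t j = 0; j < ins->n[i]; ++j) {
            ins->p[i][j] = 1 + rand() % R;   /* profit in [1, R] */
            ins->c[i][j] = 1 + rand() % R;   /* cost   in [1, R] */
        }
}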
For each test problem, the total cost $b$ was set to either $\bar{c} + random(0, \frac{1}{2}\bar{c})$ or $\bar{c} - random(0, \frac{1}{2}\bar{c})$ (randomly selected, each with probability 0.5), where $\bar{c} = \sum_{i=1}^{k} \frac{1}{2}(\min_{j=1,\ldots,n_i} c_{ij} + \max_{j=1,\ldots,n_i} c_{ij})$, and $random(0, r)$ denotes a randomly selected (according to the uniform distribution) integer from $[0, r]$.

The results for the problem sets (unc, 10, 1000), (unc, 100, 100) and (unc, 1000, 10) are given, respectively, in Tables 1, 2 and 3.

Table 1 Obtained results for Experiment 1, problem set (unc, 10, 1000)

No. | Exact profit | BISSA profit | Diff. | Rel. diff. (%) | BISSA UB | UB − profit | Rel. (%) | Greedy UB | #(BS(λ))
1  | 99,873 | 99,849 | 24 | 0.024 | 99,887.011 | 38.011 | 0.038 | 99,887 | 7
2  | 99,894 | 99,889 |  5 | 0.005 | 99,899.061 | 10.061 | 0.010 | 99,899 | 8
3  | 99,861 | 99,861 |  0 | 0.000 | 99,866.141 |  5.141 | 0.005 | 99,866 | 7
4  | 99,832 | 99,832 |  0 | 0.000 | 99,836.262 |  4.262 | 0.004 | 99,836 | 6
5  | 99,854 | 99,854 |  0 | 0.000 | 99,856.485 |  2.485 | 0.002 | 99,856 | 6
6  | 99,827 | 99,808 | 19 | 0.019 | 99,835.986 | 27.986 | 0.028 | 99,835 | 6
7  | 99,860 | 99,841 | 19 | 0.019 | 99,863.302 | 22.302 | 0.022 | 99,863 | 6
8  | 99,883 | 99,883 |  0 | 0.000 | 99,895.311 | 12.311 | 0.012 | 99,895 | 6
9  | 99,881 | 99,881 |  0 | 0.000 | 99,883.419 |  2.419 | 0.002 | 99,883 | 7
10 | 99,702 | 99,702 |  0 | 0.000 | 99,724.825 | 22.825 | 0.023 | 99,724 | 6

Table 2 Obtained results for Experiment 1, problem set (unc, 100, 100)

No. | Exact profit | BISSA profit | Diff. | Rel. diff. (%) | BISSA UB | UB − profit | Rel. (%) | Greedy UB | #(BS(λ))
1  | 983,045 | 982,946 |  99 | 0.010 | 983,059.387 | 113.387 | 0.012 | 983,059 | 10
2  | 980,483 | 980,433 |  50 | 0.005 | 980,492.589 |  59.589 | 0.006 | 980,492 | 11
3  | 984,106 | 983,999 | 107 | 0.011 | 984,130.851 | 131.851 | 0.013 | 984,130 | 8
4  | 982,980 | 982,684 | 296 | 0.030 | 983,021.172 | 337.172 | 0.034 | 983,021 | 10
5  | 981,421 | 981,421 |   0 | 0.000 | 981,426.965 |   5.965 | 0.001 | 981,426 | 10
6  | 983,059 | 982,968 |  91 | 0.009 | 983,080.841 | 112.841 | 0.011 | 983,080 | 10
7  | 984,059 | 984,001 |  58 | 0.006 | 984,071.849 |  70.849 | 0.007 | 984,071 | 10
8  | 987,210 | 987,158 |  52 | 0.005 | 987,228.022 |  70.022 | 0.007 | 987,228 | 9
9  | 980,999 | 980,944 |  55 | 0.006 | 981,035.911 |  91.911 | 0.009 | 981,035 | 9
10 | 982,142 | 982,060 |  82 | 0.008 | 982,176.615 | 116.615 | 0.012 | 982,176 | 9

Table 3 Obtained results for Experiment 1, problem set (unc, 1000, 10)

No. | Exact profit | BISSA profit | Diff. | Rel. diff. (%) | BISSA UB | UB − profit | Rel. (%) | Greedy UB | #(BS(λ))
1  | 8,421,950 | 8,420,352 | 1598 | 0.019 | 8,421,964.411 | 1612.411 | 0.019 | 8,421,964 | 12
2  | 8,770,359 | 8,768,966 | 1393 | 0.016 | 8,770,370.988 | 1404.988 | 0.016 | 8,770,370 | 11
3  | 8,959,068 | 8,958,820 |  248 | 0.003 | 8,959,085.071 |  265.071 | 0.003 | 8,959,085 | 12
4  | 8,848,233 | 8,847,518 |  715 | 0.008 | 8,848,270.055 |  752.055 | 0.008 | 8,848,270 | 11
5  | 8,807,777 | 8,806,990 |  787 | 0.009 | 8,807,787.139 |  797.139 | 0.009 | 8,807,787 | 12
6  | 8,881,946 | 8,881,649 |  297 | 0.003 | 8,881,976.338 |  327.338 | 0.004 | 8,881,976 | 11
7  | 8,927,815 | 8,927,065 |  750 | 0.008 | 8,927,826.311 |  761.311 | 0.009 | 8,927,826 | 13
8  | 8,742,270 | 8,740,874 | 1396 | 0.016 | 8,742,284.668 | 1410.668 | 0.016 | 8,742,284 | 12
9  | 8,693,221 | 8,690,669 | 2552 | 0.029 | 8,693,245.349 | 2576.349 | 0.030 | 8,693,245 | 12
10 | 8,411,809 | 8,411,350 |  459 | 0.005 | 8,411,859.566 |  509.566 | 0.006 | 8,411,859 | 12

Experiment 2: weakly correlated (wco) data instances

We generated 10 test problems assuming that k = 20 and n_i = 20, i = 1, …, k (problem set (wco, 20, 20)). For each test problem, the costs (c_ij) of items in set N_i were randomly distributed (according to the uniform distribution) in [1, R], R = 10000, and the profits (p_ij) of items in this set were randomly distributed in [c_ij − 10, c_ij + 10], such that p_ij ≥ 1. Profits and costs of items were integers. For each test problem, the total cost b was calculated as in Experiment 1.

The results for problem set (wco, 20, 20) are given in Table 4.

Table 4 Obtained results for Experiment 2, problem set (wco, 20, 20)

No. | Exact profit | BISSA profit | Diff. | Rel. diff. (%) | BISSA UB | UB − profit | Rel. (%) | Greedy UB | #(BS(λ))
1  | 113,664 | 113,584 |   80 | 0.070 | 113,665.988 |   81.988 | 0.072 | 113,665 | 8
2  | 102,060 | 102,049 |   11 | 0.011 | 102,061.000 |   12.000 | 0.012 | 102,061 | 5
3  |  91,399 |  89,864 | 1535 | 1.679 |  91,400.223 | 1536.223 | 1.681 |  91,400 | 8
4  | 121,378 | 118,231 | 3147 | 2.593 | 121,380.379 | 3149.379 | 2.595 | 121,380 | 8
5  | 100,029 |  96,907 | 3122 | 3.121 | 100,032.112 | 3125.112 | 3.124 | 100,032 | 8
6  |  97,145 |  97,145 |    0 | 0.000 |  97,146.000 |    1.000 | 0.001 |  97,146 | 5
7  | 103,176 |  97,340 | 5836 | 5.656 | 103,178.131 | 5838.131 | 5.658 | 103,178 | 7
8  |  82,942 |  82,832 |  110 | 0.133 |  82,944.000 |  112.000 | 0.135 |  82,944 | 5
9  |  86,132 |  86,132 |    0 | 0.000 |  86,132.000 |    0.000 | 0.000 |  86,132 | 6
10 |  80,322 |  80,194 |  128 | 0.159 |  80,325.000 |  131.000 | 0.163 |  80,325 | 6
In the case of uncorrelated data instances, the BISSA algorithm was able to find approximate solutions (and profit values) to problems with 10,000 binary variables in reasonable time. The relative difference between the profit values of the exact and approximate solutions is small for each of the test problems. Upper bounds found by the BISSA algorithm are almost the same as the upper bounds found by the greedy algorithm for (MCKP). Even for the problem set (unc, 1000, 10), the number of (BS(λ)) problems solved by the BISSA algorithm is relatively small with regard to the number of decision variables.

In the case of weakly correlated data instances, the BISSA algorithm solved problems with 400 binary variables in reasonable time. The relative difference between the profit values of the exact and approximate solutions is, on average, greater than for the uncorrelated test problems. As one can see in Table 4, the upper bounds found by the BISSA algorithm are almost the same as the upper bounds found by the greedy algorithm for (MCKP). The reason why the BISSA algorithm solves, in reasonable time, weakly correlated instances with a significantly smaller number of variables than uncorrelated ones is as follows. In line 24 of the BISSA algorithm, in order to find an element $\hat{x}$, we have to go through the solution set $S$ of the problem (BS(λ)) [a complete scan of set $S$ according to the values of the second objective function of problem (BP)]. For weakly correlated data instances, the cardinality of the set $S$ may be large even for problems belonging to class (wco, 30, 30). We conducted experiments for problem class (wco, 30, 30): for the most difficult test problem in this class, the cardinality of the solution set $S$ of the problem (BS(λ)) was 199,065,600. For larger weakly correlated problems that number may be even greater.

7 Conclusions and future works

A new approximate method of solving multiple-choice knapsack problems by replacing the budget constraint with a second objective function has been presented. Such a relaxation of the original problem allows for a smart scan of the decision space by quickly solving a binary linear optimization problem (this is possible thanks to the decomposition of this problem into independently solved, easy subproblems). Let us note that our method can also be used for finding an upper bound for the multi-dimensional multiple-choice knapsack problem (MMCKP) via the relaxation obtained by summing up all the linear inequality constraints [1]. The method can be compared to the greedy algorithm for multiple-choice knapsack problems, which also finds, in general, an approximate solution and an upper bound.

Two preliminary computational experiments have been conducted to check how the proposed algorithm behaves for easy-to-solve (uncorrelated) instances and hard-to-solve (weakly correlated) instances. The results have been compared to the results obtained by the exact state-of-the-art algorithm for multiple-choice knapsack problems [22].
For weakly correlated problems, the number of solution outcomes which have to be checked in order to derive the triangle of uncertainty (and hence also an approximate solution to the problem and its upper bound) grows fast with the size of the problem. Therefore, for weakly correlated problems we are able to solve, in reasonable time, smaller problem instances than for uncorrelated problem instances.

It is worth underlining that in the proposed method the profits and costs of items, as well as the total cost, can be real numbers. This could be of value when one wants to solve multiple-choice knapsack problems without changing real numbers into integers (as one has to do for dynamic programming methods).

Further work will cover investigations of how the algorithm behaves for weakly and strongly correlated instances, as well as the issue of finding a better solution by a smart "scanning" of the triangle of uncertainty.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Akbar, M.M., Rahman, M.S., Kaykobad, M., Manning, E.G., Shoja, G.C.: Solving the multidimensional multiple-choice knapsack problem by constructing convex hulls. Comput. Oper. Res. 33(5), 1259–1273 (2006). https://doi.org/10.1016/j.cor.2004.09.016
2. Chen, Y., Hao, J.K.: A reduce and solve approach for the multiple-choice multidimensional knapsack problem. Eur. J. Oper. Res. 239(2), 313–322 (2014). https://doi.org/10.1016/j.ejor.2014.05.025
3. Cherfi, N., Hifi, M.: A column generation method for the multiple-choice multi-dimensional knapsack problem. Comput. Optim. Appl. 46(1), 51–73 (2010). https://doi.org/10.1007/s10589-008-9184-7
4. Dudzinski, K., Walukiewicz, S.: A fast algorithm for the linear multiple-choice knapsack problem. Oper. Res. Lett. 3(4), 205–209 (1984). https://doi.org/10.1016/0167-6377(84)90027-0
5. Dudzinski, K., Walukiewicz, S.: Exact methods for the knapsack problem and its generalizations. Eur. J. Oper. Res. 28(1), 3–21 (1987). https://doi.org/10.1016/0377-2217(87)90165-2
6. Dyer, M., Kayal, N., Walker, J.: A branch and bound algorithm for solving the multiple-choice knapsack problem. J. Comput. Appl. Math. 11(2), 231–249 (1984). https://doi.org/10.1016/0377-0427(84)90023-2
7. Dyer, M., Riha, W., Walker, J.: A hybrid dynamic programming/branch-and-bound algorithm for the multiple-choice knapsack problem. J. Comput. Appl. Math. 58(1), 43–54 (1995). https://doi.org/10.1016/0377-0427(93)E0264-M
8. Dyer, M.E.: An O(n) algorithm for the multiple-choice knapsack linear program. Math. Program. 29(1), 57–63 (1984). https://doi.org/10.1007/BF02591729
9. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)
10. Gao, C., Lu, G., Yao, X., Li, J.: An iterative pseudo-gap enumeration approach for the multidimensional multiple-choice knapsack problem. Eur. J. Oper. Res. (2016). https://doi.org/10.1016/j.ejor.2016.11
11. Han, B., Leblet, J., Simon, G.: Hard multidimensional multiple choice knapsack problems, an empirical study. Comput. Oper. Res. 37(1), 172–181 (2010). https://doi.org/10.1016/j.cor.2009.04.006
12. Hifi, M., Michrafy, M., Sbihi, A.: Heuristic algorithms for the multiple-choice multidimensional knapsack problem. J. Oper. Res. Soc. 55(12), 1323–1332 (2004). https://doi.org/10.1057/palgrave.jors
13. Kellerer, H., Pferschy, U., Pisinger, D.: Knapsack Problems. Springer (2004)
14. Khan, M.S.: Quality adaptation in a multisession multimedia system: model, algorithms, and architecture. Ph.D. thesis, AAINQ36645, University of Victoria, Victoria (1998)
15. Klamroth, K., Tind, J.: Constrained optimization using multiple objective programming. J. Glob. Optim. 37(3), 325–355 (2007). https://doi.org/10.1007/s10898-006-9052-x
16. Kwong, C., Mu, L., Tang, J., Luo, X.: Optimization of software components selection for component-based software system development. Comput. Ind. Eng. 58(4), 618–624 (2010). https://doi.org/10.1016/j.cie.2010.01.003
17. Lee, C., Lehoczky, J., Rajkumar, R.R., Siewiorek, D.: On quality of service optimization with discrete QoS options. In: Proceedings of the IEEE Real-Time Technology and Applications Symposium, pp. 276–286 (1999)
18. Martello, S., Pisinger, D., Toth, P.: New trends in exact algorithms for the 0–1 knapsack problem. Eur. J. Oper. Res. 123(2), 325–332 (2000). https://doi.org/10.1016/S0377-2217(99)00260-X
19. Martello, S., Toth, P.: Knapsack Problems: Algorithms and Computer Implementations. Wiley, New York (1990)
20. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers (1999). https://doi.org/10.1007/978-1-4615-5563-6
21. Nauss, R.M.: The 0–1 knapsack problem with multiple choice constraints. Eur. J. Oper. Res. 2(2), 125–131 (1978). https://doi.org/10.1016/0377-2217(78)90108-X
22. Pisinger, D.: A minimal algorithm for the multiple-choice knapsack problem. Eur. J. Oper. Res. 83(2), 394–410 (1995). https://doi.org/10.1016/0377-2217(95)00015-I
23. Pisinger, D.: Program code in C (1995). http://www.diku.dk/~pisinger/minknap.c (downloaded in 2016)
24. Pisinger, D.: Budgeting with bounded multiple-choice constraints. Eur. J. Oper. Res. 129(3), 471–480 (2001). https://doi.org/10.1016/S0377-2217(99)00451-8
25. Pyzel, P.: Propozycja metody oceny efektywnosci systemow MIS [A proposal of a method for assessing the effectiveness of MIS systems]. In: Myslinski, A. (ed.) Techniki Informacyjne – Teoria i Zastosowania, Wybrane Problemy, vol. 2(14), pp. 59–70. Instytut Badan Systemowych PAN, Warszawa (2012)
26. Sbihi, A.: A best first search exact algorithm for the multiple-choice multidimensional knapsack problem. J. Comb. Optim. 13(4), 337–351 (2007). https://doi.org/10.1007/s10878-006-9035-3
27. Sinha, P., Zoltners, A.A.: The multiple-choice knapsack problem. Oper. Res. 27(3), 503–515 (1979). https://doi.org/10.1287/opre.27.3.503
28. Zemel, E.: An O(n) algorithm for the linear multiple choice knapsack problem and related problems. Inf. Process. Lett. 18(3), 123–128 (1984). https://doi.org/10.1016/0020-0190(84)90014-0
29. Zhong, T., Young, R.: Multiple choice knapsack problem: example of planning choice in transportation. Eval. Program Plan. 33(2), 128–137 (2010). https://doi.org/10.1016/j.evalprogplan.2009.06.007

A multi-criteria approach to approximate solution of multiple-choice knapsack problem

Free
22 pages
Loading next page...
 
/lp/springer_journal/a-multi-criteria-approach-to-approximate-solution-of-multiple-choice-uClTAoKkk9
Publisher
Springer US
Copyright
Copyright © 2018 by The Author(s)
Subject
Mathematics; Optimization; Operations Research, Management Science; Operations Research/Decision Theory; Statistics, general; Convex and Discrete Geometry
ISSN
0926-6003
eISSN
1573-2894
D.O.I.
10.1007/s10589-018-9988-z
Publisher site
See Article on Publisher Site

Abstract

Comput Optim Appl (2018) 70:889–910 https://doi.org/10.1007/s10589-018-9988-z A multi-criteria approach to approximate solution of multiple-choice knapsack problem 1,2 1 Ewa M. Bednarczuk · Janusz Miroforidis · Przemysław Pyzel Received: 24 February 2017 / Published online: 1 March 2018 © The Author(s) 2018. This article is an open access publication Abstract We propose a method for finding approximate solutions to multiple-choice knapsack problems. To this aim we transform the multiple-choice knapsack problem into a bi-objective optimization problem whose solution set contains solutions of the original multiple-choice knapsack problem. The method relies on solving a series of suitably defined linearly scalarized bi-objective problems. The novelty which makes the method attractive from the computational point of view is that we are able to solve explicitly those linearly scalarized bi-objective problems with the help of the closed-form formulae. The method is computationally analyzed on a set of large- scale problem instances (test problems) of two categories: uncorrelated and weakly correlated. Computational results show that after solving, in average 10 scalarized bi-objective problems, the optimal value of the original knapsack problem is approxi- mated with the accuracy comparable to the accuracies obtained by the greedy algorithm and an exact algorithm. More importantly, the respective approximate solution to the original knapsack problem (for which the approximate optimal value is attained) can Przemysław Pyzel PPyzel@ibspan.waw.pl Ewa M. Bednarczuk Ewa.Bednarczuk@ibspan.waw.pl Janusz Miroforidis Janusz.Miroforidis@ibspan.waw.pl Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland PhD Programme, Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland 123 890 E. M. Bednarczuk et al. be found without resorting to the dynamic programming. In the test problems, the number of multiple-choice constraints ranges up to hundreds with hundreds variables in each constraint. Keywords Knapsack · Multi-objective optimization · Multiple-choice knapsack · Linear scalarization 1 Introduction The multi-dimensional multiple-choice knapsack problem (MMCK P) and the multiple-choice knapsack problem (MC K P) are classical generalizations of the knap- sack problem (KP) and are applied to modeling many real-life problems, e.g., in project (investments) portfolio selection [21,29], capital budgeting [24], advertising [27], component selection in IT systems [16,25], computer networks management [17], adaptive multimedia systems [14], and other. The multiple-choice knapsack problem (MC K P) is formulated as follows. Given are k sets N , N ,..., N of items, of cardinality |N |= n , i = 1,..., k. Each item 1 2 k i i of each set has been assigned real-valued nonnegative ‘profit’ p ≥ 0 and ‘cost’ ij c ≥ 0, i = 1,..., k, j = 1,..., n . ij i The problem consists in choosing exactly one item from each set N so that the total cost does not exceed a given b ≥ 0 and the total profit is maximized. Let x , i = 1,..., k, j = 1,..., n , be defined as ij i 1 if item j from set N is chosen x = ij 0 otherwise. k n Note that all x form a vector x of length n = n , x ∈ R , and we write ij i i =1 x := (x , x ,..., x , x ,..., x ,..., x , x ,..., x ) . 
11 12 1n 21 2n k1 k2 kn 1 2 k In this paper, we adopt the convention that a vector x is a column vector, and hence the transpose of x, denoted by x , is a row vector. Problem (MC K P) is of the form k n max p x ij ij i =1 j =1 subject to k i (MC K P) c x ≤ b ij ij i =1 j =1 (x ) ∈ X := {(x ) | x = 1, ij ij ij j =1 x ∈{0, 1} i = 1,..., k, j = 1,..., n }. ij i By using the above notations, problem (MC K P) can be equivalently rewritten in the vector form 123 A multi-criteria approach to approximate solution of … 891 max p x subject to (MC K P) c x ≤ b x = (x ) ∈ X, ij where p and c are vectors from R , p := (p , p ,..., p , p ,..., p ,..., p , p ,..., p ) 11 12 1n 21 2n k1 k2 kn 1 2 k c := (c , c ,..., c , c ,..., c ,..., c , c ,..., c ) , 11 12 1n 21 2n k1 k2 kn 1 2 k n T and for any vectors u,v ∈ R , the scalar product u v is defined in the usual way as u v := u v . i i i =1 The feasible set F to problem (MC K P) is defined by a single linear inequality constraint and the constraint x ∈ X, i.e., n T F := {x ∈ R | c x ≤ b, x ∈ X } and finally max p x (MC K P) subject to x ∈ F. The optimal value of problem (MC K P) is equal to max p x and the solution set x ∈F S is given as ∗ T T S := x¯ ∈ F | p x¯ = max p x . x ∈F Problem (MC K P) is NP-hard. The approaches to solving (MC K P) can be: heuristics [1,12], exact methods providing upper bounds for the optimal value of the profit together with the corresponding approximate solutions [26], exact methods providing solutions [18]. There are algorithms that efficiently solve (MC K P) without sorting and reduction [8,28] or with sorting and reduction [4]. Solving (MC K P) with a linear relaxation (by neglecting the constraints x ∈{0, 1}, i = 1,..., k, j = ij 1,..., n ) gives upper bounds on the value of optimal profit. Upper bounds can be also obtained with the help of the Lagrange relaxation. These facts and other features of (MC K P) are described in details in monographs [13,19]. Exact branch-and-bound methods [6] (integer programming), even those using commercial optimization software (e.g., LINGO, CPLEX) can have troubles with solving large (MC K P) problems. A branch-and-bound algorithm with a quick solu- tion of the relaxation of reduced problems was proposed by Sinha and Zoltners [27]. Dudzinski ´ and Walukiewicz proposed an algorithm with pseudo-polynomial complex- ity [5]. Algorithms that use dynamic programming require integer values of data and for large-scale problems require large amount of memory for backtracking (finding solu- tions in set X), see also the monograph [19]. The algorithm we propose does not need the data to be integer numbers. 123 892 E. M. Bednarczuk et al. Heuristic algorithms, based on solving linear (or continuous) relaxation of (MC K P) and dynamic programming [7,22,24] are reported to be fast, but have limitations typical for dynamic programming. The most recent approach “reduce and solve” [2,10] is based on reducing the problem by proposed pseudo cuts and then solving the reduced problems by a Mixed Integer Programming (MI P) solver. In the present paper, we propose a new exact (not heuristic) method which provides approximate optimal profits together with the corresponding approximate solutions. The method is based on multi-objective optimization techniques. Namely, we start by formulating a linear bi-objective problem (BP) related to the original problem (MC K P). 
After investigating the relationships between (MC K P) and (BP) prob- lems, we propose an algorithm for solving (MC K P) via a series of scalarized linear bi-objective problems (BS(λ)). The main advantage of the proposed method is that the scalarized linear bi-objective problems (BS(λ)) can be explicitly solved by exploiting the structure of the set X. Namely, these scalarized problems can be decomposed into k independent subprob- lems the solutions of which are given by simple closed-form formulas. This feature of our method is particularly suitable for parallelization. It allows to generate solutions of scalarized problems in an efficient and fast way. The experiments show that the method we propose generates very quickly an out- come xˆ ∈ F which is an approximate solution to (MC K P). Moreover, lower bound (LB) and upper bound (UB) for the optimal profit are provided. The obtained approximate solution xˆ ∈ F could serve as a good starting point for other, e.g., heuristic or exact algorithms for finding an optimal solution to the problem (MC K P). The organization of the paper is as follows. In Sect. 2, we provide preliminary facts on multi-objective optimization problems and we formulate a bi-objective optimization problem (BP) associated with (MC K P). In Sect. 3, we investigate the relationships between the problem (BP) and the original problem (MC K P). In Sect. 4,wefor- mulate scalarized problems (BS(λ)) for bi-objective problem (BP) and we provide closed-form formulae for solutions to problems (BS(λ)) by decomposing them into k independent subproblems (BS(λ)) , i = 1,..., k. In Sect. 5, we present our method (together with the pseudo-code) which provides a lower bound (LB) for the opti- mal profit together with the corresponding approximate feasible solution xˆ ∈ F to (MC K P) for which the bound (LB) is attained. In Sect. 6, we report on the results of numerical experiments. The last section concludes. 2 Multi-objective optimization problems n n n Let f : R → R, i = 1,..., k, be functions defined on R and Ω ⊂ R be a subset in R . The multi-objective optimization problem is defined as Vmax ( f (x),..., f (x )) 1 k (P) subject to x ∈ Ω, 123 A multi-criteria approach to approximate solution of … 893 where the symbol Vmax means that solutions to problem (P) are understood in the sense of Pareto efficiency defined in Definition 2.1. Let k k R := {x = (x ,..., x ) ∈ R : x ≥ 0, i = 1,..., k}. 1 k i Definition 2.1 A point x ∈ Ω is a Pareto efficient (Pareto maximal) solution to (P) if ∗ k ∗ f (Ω) ∩ ( f (x ) + R ) ={ f (x )}. In other words, x ∈ Ω is a Pareto efficient solution to (P) if there is no x¯ ∈ Ω such that f (x¯ ) ≥ f (x ) for i = 1,..., k and i i (1) f (x¯)> f (x ) for some l 1 ≤  ≤ k. The problem (P) where all the functions f , i = 1,..., k are linear is called a linear multi-objective optimization problem. Remark 2.1 The bi-objective problem f (x ) → max, f (x ) → min 1 2 subject to x ∈ Ω with Pareto solutions x ∈ Ω defined as ∗ ∗ 2 ∗ ∗ ( f (Ω), f (Ω)) ∩[( f (x ), f (x )) + R ]={( f (x ), f (x ))} (2) 1 2 1 2 1 2 +− where 2 2 R := {x = (x , x ) ∈ R : x ≥ 0, x ≤ 0} 1 2 1 2 +− is equivalent to the problem f (x ) → max, − f (x ) → max 1 2 subject to x ∈ Ω in the sense that Pareto efficient solution sets (as subsets of the feasible set Ω) coincide and Pareto elements (the images in R of Pareto efficient solutions) differ in sign in the second component. 
2.1 A bi-objective optimization problem related to (MCKP)

In relation to the original multiple-choice knapsack problem (MCKP), we consider the linear bi-objective binary optimization problem (BP1) of the form

$$\sum_{i=1}^{k}\sum_{j=1}^{n_i} p_{ij}x_{ij} \to \max, \quad \sum_{i=1}^{k}\sum_{j=1}^{n_i} c_{ij}x_{ij} \to \min \quad \text{subject to} \quad (x_{ij}) \in X. \qquad (\mathrm{BP1})$$

In this problem, the left-hand side of the linear inequality constraint $c^T x \le b$ of (MCKP) becomes a second criterion, and the constraint set reduces to the set $X$.

The motivation for considering the bi-objective problem (BP1) is two-fold. The first motivation comes from the fact that in (MCKP) the inequality $\sum_{i=1}^{k}\sum_{j=1}^{n_i} c_{ij}x_{ij} \le b$ is usually seen as a budget (in general: a resource) constraint, with the left-hand side preferably not greater than a given available budget $b$. In the bi-objective problem (BP1), this requirement is represented through the minimization of $\sum_{i=1}^{k}\sum_{j=1}^{n_i} c_{ij}x_{ij}$. In Theorem 3.1 of Sect. 3, we show that under relatively mild conditions, among the solutions of the bi-objective problem (BP1) [or the equivalent problem (BP)] there are solutions to problem (MCKP).

The second motivation is important from the algorithmic point of view and is related to the fact that in the proposed algorithm we are able to exploit efficiently the specific structure of the constraint set $X$, which contains $k$ linear equality constraints (each one referring to a different group of variables) and the binary conditions only. More precisely, the set $X$ can be represented as the Cartesian product

$$X = X^1 \times X^2 \times \cdots \times X^k \qquad (3)$$

of the sets $X^i$, where

$$X^i := \Big\{ x^i \in \mathbb{R}^{n_i} \ \Big|\ \sum_{j=1}^{n_i} x_{ij} = 1,\ x_{ij} \in \{0,1\},\ j = 1,\ldots,n_i \Big\}, \quad i = 1,\ldots,k,$$

and

$$x = (\underbrace{x_{11},\ldots,x_{1n_1}}_{x^1}, \underbrace{x_{21},\ldots,x_{2n_2}}_{x^2}, \ldots, \underbrace{x_{k1},\ldots,x_{kn_k}}_{x^k})^T, \qquad (4)$$

i.e., $x = (x^1,\ldots,x^k)^T$ with $x^i = (x_{i1},\ldots,x_{in_i})$. Accordingly, $p = (p^1,\ldots,p^k)^T$ and $c = (c^1,\ldots,c^k)^T$. Note that, due to the presence of the budget inequality constraint, the feasible set $F$ of problem (MCKP) cannot be represented in a form analogous to (3).

According to Remark 2.1, problem (BP1) can be equivalently reformulated in the form

$$\text{Vmax} \ (p^T x, (-c)^T x) \quad \text{subject to} \quad x \in X. \qquad (\mathrm{BP})$$

3 The relationships between (BP) and (MCKP)

Starting from the multiple-choice knapsack problem (MCKP) of the form

$$\max \ p^T x \quad \text{subject to} \quad x \in F, \qquad (\mathrm{MCKP})$$

in the present section we analyse the relationships between problems (MCKP) and (BP). We start with a basic observation. Recall first that (MCKP) is solvable, i.e., the feasible set $F$ is nonempty, if $b \ge \min_{x \in X} c^T x$. On the other hand, if $b \ge \max_{x \in X} c^T x$, then (MCKP) is trivially solvable. Thus, in the sequel we assume that

$$C_{\min} := \min_{x \in X} c^T x \ \le\ b \ <\ C_{\max} := \max_{x \in X} c^T x. \qquad (5)$$

Let $P_{\max} := \max_{x \in X} p^T x$, i.e., $P_{\max}$ is the maximal value of the function $p^T x$ on the set $X$. The following observations are essential for further considerations.

1. First, suppose that among the elements of $X$ which realize the maximal value $P_{\max}$ there exists at least one which is feasible for (MCKP), i.e., there exists $x_p \in X$, $p^T x_p = P_{\max}$, such that $c^T x_p \le b$, i.e.,

$$C_{\min} \le c^T x_p \le b < C_{\max}. \qquad (6)$$

Then, clearly, $x_p$ solves (MCKP).

2. Second, suppose that no element which realizes the maximal value $P_{\max}$ is feasible for (MCKP), i.e., for every $x_p \in X$ with $p^T x_p = P_{\max}$ we have $c^T x_p > b$; that is, any $x_p$ realizing the maximal value $P_{\max}$ is infeasible for (MCKP), i.e.,

$$C_{\min} \le b < c^T x_p \le C_{\max}. \qquad (7)$$
In the sequel, we concentrate on Case 2, characterized by (7). This case is related to problem (BP). To see this, let us introduce some additional notation. Let $x_{cmin} \in X$ and $x_{pmax} \in X$ be defined as

$$c^T x_{cmin} = C_{\min} \quad \text{and} \quad p^T x_{cmin} = \max_{c^T x = C_{\min}} p^T x,$$
$$p^T x_{pmax} = P_{\max} \quad \text{and} \quad c^T x_{pmax} = \min_{p^T x = P_{\max}} c^T x.$$

Let $S_{bo}$ be the set of all Pareto solutions to the bi-objective problem (BP),

$$S_{bo} := \{ x \in X : (p^T, (-c)^T)(X) \cap [ (p^T x, (-c)^T x) + \mathbb{R}^2_+ ] = \{ (p^T x, (-c)^T x) \} \}$$

(cf. Definition 2.1). The following lemma holds.

Lemma 3.1 Assume that we are in Case 2, i.e., condition (7) holds. Then there exists a Pareto solution $\bar{x} \in S_{bo}$ to the bi-objective optimization problem (BP) which is feasible for problem (MCKP), i.e., $c^T \bar{x} \le b$, which amounts to $\bar{x} \in F$.

Proof According to Definition 2.1, both $x_{pmax} \in X$ and $x_{cmin} \in X$ are Pareto efficient solutions to (BP): there is no $x \in X$ such that $(p^T x, c^T x) \ne (p^T x_{pmax}, c^T x_{pmax})$ and

$$p^T x \ge p^T x_{pmax} \quad \text{and} \quad c^T x \le c^T x_{pmax},$$

and there is no $x \in X$ such that $(p^T x, c^T x) \ne (p^T x_{cmin}, c^T x_{cmin})$ and

$$p^T x \ge p^T x_{cmin} \quad \text{and} \quad c^T x \le c^T x_{cmin}.$$

Moreover, by (7),

$$C_{\min} = c^T x_{cmin} \le b < c^T x_{pmax}. \qquad (8)$$

In view of (8), $\bar{x} = x_{cmin} \in S_{bo}$ and $\bar{x} = x_{cmin} \in F$ (since $c^T x_{cmin} \le b$), which means that $\bar{x}$ is feasible for problem (MCKP). This concludes the proof. □

Now we are ready to formulate the result establishing the relationship between solutions of (MCKP) and Pareto efficient solutions of (BP) in the case where condition (7) holds.

Theorem 3.1 Suppose we are given problem (MCKP) satisfying condition (7). Let $x^* \in X$ be a Pareto solution to (BP) such that

$$b - c^T x^* = \min_{x \in S_{bo},\ b - c^T x \ge 0} \ (b - c^T x). \qquad (*)$$

Then $x^*$ solves (MCKP).

Proof Observe first that, by Lemma 3.1, there exists $x \in S_{bo}$ satisfying the constraint $c^T x \le b$, i.e., condition (∗) is not dummy.

By contradiction, suppose that a feasible element $x^* \in F$, i.e., $x^* \in X$, $c^T x^* \le b$, is not a solution to (MCKP), i.e., there exists an $x_1 \in X$ such that

$$c^T x_1 \le b \quad \text{and} \quad p^T x_1 > p^T x^*. \qquad (**)$$

We show that $x^*$ cannot satisfy condition (∗). If $c^T x_1 \le c^T x^*$, then $x^*$ is not a Pareto solution to (BP), i.e., $x^* \notin S_{bo}$, and $x^*$ does not satisfy condition (∗). Otherwise, $c^T x_1 > c^T x^*$, i.e.,

$$b - c^T x^* > b - c^T x_1. \qquad (9)$$

If $x_1 \in S_{bo}$, then, according to (9), $x^*$ cannot satisfy condition (∗). If $x_1 \notin S_{bo}$, there exists $x_2 \in S_{bo}$ which dominates $x_1$, i.e., $(p^T x_2, (-c)^T x_2) \in (p^T x_1, (-c)^T x_1) + \mathbb{R}^2_+$. Again, if $c^T x_2 \le c^T x^*$, then $x^*$ is not a Pareto solution to (BP), i.e., $x^*$ cannot satisfy condition (∗). Otherwise, if $c^T x_2 > c^T x^*$, then either $x^* \notin S_{bo}$, and consequently $x^*$ cannot satisfy condition (∗), or $x^* \in S_{bo}$, in which case $b - c^T x^* > b - c^T x_2$ and $x^*$ does not satisfy condition (∗); a contradiction which completes the proof. □

[Fig. 1 Illustration of Theorem 3.1; black dots: outcomes of Pareto efficient solutions to (BP); star: the Pareto efficient outcome of (BP) which solves (MCKP).]

Theorem 3.1 says that under condition (7) any solution to (BP) satisfying condition (∗) solves problem (MCKP). General relations between constrained optimization and multi-objective programming were investigated in [15].

Basing ourselves on Theorem 3.1, in Sect. 5 we provide an algorithm for finding $x \in S_{bo}$, a Pareto solution to (BP), which is feasible for problem (MCKP) and for which condition (∗) is either satisfied or is, in some sense, as close as possible to being satisfied. In the latter case, the algorithm provides upper and lower bounds for the optimal value of (MCKP) (see Fig. 1).
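Condition (∗) has a direct computational reading: among the Pareto outcomes of (BP) that respect the budget, take the one with the largest cost. The sketch below is ours and purely illustrative (the finite outcome representation and names are assumptions, not the authors' code):

```c
#include <stddef.h>

/* Illustrative sketch of condition (*) in Theorem 3.1: given the cost
   values cv[t] = c^T x_t of m Pareto outcomes of (BP), pick a feasible
   one (cv[t] <= b) minimizing the slack b - cv[t], i.e., maximizing
   cv[t]. Returns the index of the selected outcome, or -1 if no
   outcome is feasible. */
static ptrdiff_t select_star(const double *cv, size_t m, double b) {
    ptrdiff_t best = -1;
    for (size_t t = 0; t < m; ++t)
        if (cv[t] <= b && (best < 0 || cv[t] > cv[best]))
            best = (ptrdiff_t)t;
    return best;
}
```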
4 Decomposition of the scalarized bi-objective problem (BP)

In the present section, we consider problem $(BS(\lambda_1, \lambda_2))$ defined by (10), which is a linear scalarization of problem (BP). In our algorithm BISSA, presented in Sect. 5, we obtain an approximate feasible solution to (MCKP) by solving a (usually very small) number of problems of the form $(BS(\lambda_1, \lambda_2))$. The main advantage of basing our algorithm on problems $(BS(\lambda_1, \lambda_2))$ is that they are explicitly solvable by the simple closed-form expressions (17).

For problem (BP) the following classical scalarization result holds.

Theorem 4.1 [9,20] If there exist $\lambda_\ell > 0$, $\ell = 1, 2$, such that $x^* \in X$ is a solution to the scalarized problem

$$\max_{x \in X} \ \lambda_1 p^T x + \lambda_2 (-c)^T x, \qquad (BS(\lambda_1, \lambda_2)) \qquad (10)$$

then $x^*$ is a Pareto efficient solution to problem (BP).

Without loss of generality we can assume that $\sum_{\ell=1}^{2} \lambda_\ell = 1$. In the sequel, we consider, for $0 < \lambda < 1$, scalarized problems of the form

$$\max_{x \in X} \ \lambda p^T x + (1-\lambda)(-c)^T x. \qquad (BS(\lambda)) \qquad (11)$$

Remark 4.1 According to Theorem 4.1, solutions to the problems

$$\max_{x \in X} \ p^T x, \qquad \max_{x \in X} \ (-c)^T x \qquad (12)$$

need not be Pareto efficient, because the weights are not both positive. However, there exist Pareto efficient solutions to (BP) among the solutions to these problems. Namely, there exist $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$ such that solutions to the problems

$$(\mathrm{P1}) \quad \max_{x \in X} \ p^T x + \varepsilon_1 (-c)^T x$$

and

$$(\mathrm{P2}) \quad \max_{x \in X} \ (-c)^T x + \varepsilon_2 p^T x$$

are Pareto efficient solutions to problems (12), respectively. Suitable $\varepsilon_1$ and $\varepsilon_2$ will be determined in the next section.

4.1 Decomposition

Due to the highly structured form of the set $X$ and the possibility of representing $X$ in the form (3),

$$X = X^1 \times X^2 \times \cdots \times X^k,$$

we can provide explicit formulae for solving problems (BS(λ)). To this aim we decompose problems (BS(λ)) as follows. Recall that, by using notation (4), we can put any $x \in X$ in the form $x = (x^1, x^2, \ldots, x^k)^T$, where $x^i = (x_{i1},\ldots,x_{in_i})$, $i = 1,\ldots,k$, and $\sum_{j=1}^{n_i} x_{ij} = 1$.

Let $0 < \lambda < 1$. According to (3), we have

$$X^i := \Big\{ x^i = (x_{i1},\ldots,x_{in_i}) \in \mathbb{R}^{n_i} : \sum_{j=1}^{n_i} x_{ij} = 1,\ x_{ij} \in \{0,1\},\ j = 1,\ldots,n_i \Big\}$$

for $i = 1,\ldots,k$. Consider problems $(BS(\lambda))_i$, $i = 1,\ldots,k$, of the form

$$\max_{x^i \in X^i} \ \big[ \lambda (p^i)^T x^i + (1-\lambda)(-c^i)^T x^i \big]. \qquad (BS(\lambda))_i \qquad (13)$$

By solving problems $(BS(\lambda))_i$, $i = 1,\ldots,k$, we find their solutions $\bar{x}^i$. We shall show that $\bar{x} := (\bar{x}^1,\ldots,\bar{x}^k)^T$ solves (BS(λ)). Thus, problem (11) is decomposed into $k$ subproblems (13), the solutions of which form solutions to (11). Note that similar decomposed problems, with feasible sets $X^i$ and other objective functions, have already been considered in [3] in relation to multi-dimensional multiple-choice knapsack problems.

Now we give closed-form formulae for solutions of $(BS(\lambda))_i$. For $i = 1,\ldots,k$, let

$$V_i := \max \{ \lambda p_{ij} + (1-\lambda)(-c_{ij}) : 1 \le j \le n_i \}, \qquad (14)$$

and let $1 \le j_i^* \le n_i$ be the index number for which the value $V_i$ is attained, i.e.,

$$V_i = \lambda p_{ij_i^*} + (1-\lambda)(-c_{ij_i^*}). \qquad (15)$$

We show that

$$\bar{x}^i := (0,\ldots,0,\underset{(j_i^*)}{1},0,\ldots,0) \qquad (16)$$

(the vector with 1 at position $j_i^*$ and 0 elsewhere) is a solution to $(BS(\lambda))_i$ and that

$$\bar{x}^* := (\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^k)^T \qquad (17)$$

is a solution to (BS(λ)). The optimal value of (BS(λ)) is

$$V := V_1 + \cdots + V_k. \qquad (18)$$
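The closed-form formulae (14)–(17) translate directly into code. The following sketch is ours, not the authors' implementation; the array layout and names are assumptions. It solves (BS(λ)) in O(n) time by treating each class independently, exactly as in the decomposition above:

```c
#include <stddef.h>

/* Sketch of formulae (14)-(17): solve (BS(lambda)) by decomposition.
   p[i][j], c[i][j] are the profit/cost of item j in class i (layout
   assumed); jstar[i] receives the chosen index j_i*, which encodes the
   unit vector (16). Returns V = V_1 + ... + V_k, formula (18). */
static double solve_bs_decomposed(double lambda, size_t k, const size_t *n,
                                  double **p, double **c, size_t *jstar) {
    double V = 0.0;
    for (size_t i = 0; i < k; ++i) {          /* subproblem (BS(lambda))_i */
        size_t best = 0;
        double Vi = lambda * p[i][0] - (1.0 - lambda) * c[i][0];
        for (size_t j = 1; j < n[i]; ++j) {   /* scan for V_i, formula (14) */
            double v = lambda * p[i][j] - (1.0 - lambda) * c[i][j];
            if (v > Vi) { Vi = v; best = j; }
        }
        jstar[i] = best;
        V += Vi;
    }
    return V;
}
```

Because the $k$ class loops are independent, they can be run in parallel, which is the parallelization opportunity mentioned in the introduction.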
Formally, the following proposition holds.

Proposition 4.1 Any element $\bar{x}^i \in \mathbb{R}^{n_i}$ given by (16) solves $(BS(\lambda))_i$, $i = 1,\ldots,k$, and any $\bar{x}^* \in \mathbb{R}^n$ given by (17) solves problem (BS(λ)).

Proof Clearly, each $\bar{x}^i$ is feasible for $(BS(\lambda))_i$, $i = 1,\ldots,k$, because $\bar{x}^i$ is of the form (16) and hence belongs to the set $X^i$, which is the constraint set of $(BS(\lambda))_i$. Consequently, $\bar{x}^*$ defined by (17) is feasible for (BS(λ)) because all its components are binary and the linear equality constraints $\sum_{j=1}^{n_i} x_{ij} = 1$, $i = 1,2,\ldots,k$, are satisfied.

To see that the $\bar{x}^i$ are also optimal for $(BS(\lambda))_i$, $i = 1,\ldots,k$, suppose to the contrary that there exist $1 \le i \le k$ and an element $y^i \in \mathbb{R}^{n_i}$ which is feasible for $(BS(\lambda))_i$ with the value of the objective function strictly greater than the value at $\bar{x}^i$, i.e.,

$$\sum_{j=1}^{n_i} [\lambda p_{ij} + (1-\lambda)(-c_{ij})]\, y^i_j > \sum_{j=1}^{n_i} [\lambda p_{ij} + (1-\lambda)(-c_{ij})]\, \bar{x}^i_j.$$

This, however, would mean that there exists an index $1 \le j \le n_i$ such that

$$\lambda p_{ij} + (1-\lambda)(-c_{ij}) > \lambda p_{ij_i^*} + (1-\lambda)(-c_{ij_i^*}),$$

contrary to the definition of $j_i^*$.

To see that $\bar{x}^*$ is optimal for (BS(λ)), suppose to the contrary that there exists an element $y \in \mathbb{R}^n$ which is feasible for (BS(λ)) and whose objective value is strictly greater than that at $\bar{x}^*$, i.e.,

$$\lambda p^T y + (1-\lambda)(-c)^T y > \lambda p^T \bar{x}^* + (1-\lambda)(-c)^T \bar{x}^*.$$

In the same way as previously, we get a contradiction with the definition of the components of $\bar{x}^*$ given by (17). □

Let us observe that each optimization problem $(BS(\lambda))_i$ can be solved in time $O(n_i)$; hence problem (BS(λ)) can be solved in time $O(n)$, where $n = \sum_{i=1}^{k} n_i$. Clearly, one can have more than one solution to $(BS(\lambda))_i$, $i = 1,\ldots,k$. In the next section, according to Theorem 3.1, from among all the solutions of (BS(λ)) we choose the one for which the value of the second criterion is greater than, and as close as possible to, $-b$.

Note that by using Proposition 4.1 one can easily solve problems (P1) and (P2) defined in Remark 4.1: by applying (18) we immediately get

$$F_1 := \max_{x \in X} \ p^T x, \qquad F_2 := \max_{x \in X} \ (-c)^T x,$$

the optimal values of (P1) and (P2), and by (17) we find their solutions $\bar{x}_1$ and $\bar{x}_2$, respectively.

Proposition 4.1 and formula (17) allow us to find $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$ as defined in Remark 4.1. By (17), it is easy to find elements $\bar{x}_1, \bar{x}_2 \in X$ such that

$$F_1 = p^T \bar{x}_1, \qquad F_2 = (-c)^T \bar{x}_2.$$

Put

$$\bar{F}_1 := p^T \bar{x}_2, \qquad \bar{F}_2 := (-c)^T \bar{x}_1$$

and let

$$V_1 := F_1 - \mathrm{decr}(p), \qquad V_2 := F_2 - \mathrm{decr}(-c),$$

where $\mathrm{decr}(p)$ and $\mathrm{decr}(-c)$ denote the smallest nonzero decrease on $X$ of the functions $p$ and $(-c)$ from $F_1$ and $F_2$, respectively. Note that $\mathrm{decr}(p)$ and $\mathrm{decr}(-c)$ can easily be found.

Remark 4.2 The following formulas describe $\mathrm{decr}(p)$ and $\mathrm{decr}(-c)$:

$$\mathrm{decr}(p) := \min_{1 \le i \le k} \big( p^i_{\max} - p^i_{submax} \big), \qquad \mathrm{decr}(-c) := \min_{1 \le i \le k} \big( (-c)^i_{\max} - (-c)^i_{submax} \big), \qquad (19)$$

where $p^i$ and $c^i$, $i = 1,\ldots,k$, are defined by (4), and $p^i_{submax}$, $(-c)^i_{submax}$, $i = 1,\ldots,k$, are the submaximal values of the functions $(p^i)^T x^i$, $((-c)^i)^T x^i$ on $x^i \in X^i$, $i = 1,\ldots,k$.

For any $1 \le i \le k$, the submaximal value of a linear function $(d^i)^T x^i$ on $X^i$ can be found by first ordering the coefficients of $d^i$ decreasingly,

$$(d^i)_{j_1} > (d^i)_{j_2} \ge \cdots \ge (d^i)_{j_m},$$

and next observing that the submaximal (i.e., smaller than maximal but as close as possible to the maximal) value of $(d^i)^T x^i$ on $X^i$ is attained for $\bar{x}^i := (0,\ldots,0,\underset{(j_2)}{1},0,\ldots,0)$.
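For Remark 4.2, the maximal and submaximal coefficient values of a class can be found in a single pass. A minimal sketch, ours and with assumed names; it returns the largest value strictly below the maximum, matching the "smallest nonzero decrease" used in (19):

```c
#include <math.h>
#include <stddef.h>

/* One O(m) pass over d[0..m-1]: find the maximal value and the largest
   value strictly below it, without sorting. decr of (19) is then the
   minimum over i = 1,...,k of (dmax - dsubmax). dsubmax is -INFINITY
   if all entries are equal (no nonzero decrease exists in the class). */
static void max_and_submax(const double *d, size_t m,
                           double *dmax, double *dsubmax) {
    double best = d[0], second = -INFINITY;
    for (size_t j = 1; j < m; ++j) {
        if (d[j] > best)                       { second = best; best = d[j]; }
        else if (d[j] < best && d[j] > second) { second = d[j]; }
    }
    *dmax = best;
    *dsubmax = second;
}
```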
Based on Remark 4.2, one can find the values $p^i_{submax}$ and $(-c)^i_{submax}$ in time $O(n_i)$, $i = 1,\ldots,k$, even without any sorting: for a given $i$ it suffices to find the maximal value among all $p_{ij}$ ($c_{ij}$), $j = 1,\ldots,n_i$, except $p^i_{\max}$ ($c^i_{\max}$). Therefore the computational cost of calculating $\mathrm{decr}(p)$ and $\mathrm{decr}(-c)$ is $O(n)$.

We have the following fact.

Proposition 4.2 Let $F_1$, $F_2$, $\bar{F}_1$, $\bar{F}_2$, $V_1$, $V_2$ be as defined above. The problems

$$(\mathrm{P1}) \quad \max_{x \in X} \ p^T x + \varepsilon_1 (-c)^T x$$

and

$$(\mathrm{P2}) \quad \max_{x \in X} \ (-c)^T x + \varepsilon_2 p^T x,$$

where

$$\varepsilon_1 := \frac{F_1 - V_1}{F_2 - \bar{F}_2}, \qquad \varepsilon_2 := \frac{F_2 - V_2}{F_1 - \bar{F}_1}, \qquad (20)$$

give Pareto efficient solutions $\bar{x}_1$ and $\bar{x}_2$, respectively, to problem (BP). Moreover,

$$f_1(\bar{x}_1) = F_1 \quad \text{and} \quad f_2(\bar{x}_2) = F_2,$$

i.e., $\bar{x}_1$, $\bar{x}_2$ solve problems (12), respectively.

Proof Follows immediately from the adopted notation; see Fig. 2. For instance, the objective of problem (P1) is represented by the straight line passing through the points $(F_1, \bar{F}_2)$ and $(V_1, F_2)$, i.e.,

$$F_1 + \varepsilon_1 \bar{F}_2 = V_1 + \varepsilon_1 F_2,$$

which gives (20). The choice of $F_1, \bar{F}_2$ and $V_1, F_2$ guarantees that $\bar{x}_1$ solves (P1) (and analogously for $\bar{x}_2$, which solves (P2)). □

[Fig. 2 Construction of $\varepsilon_1$ and $\varepsilon_2$.]

5 Bi-objective approximate solution search algorithm (BISSA) for solving (MCKP)

In this section, we propose the bi-objective approximate solution search algorithm BISSA for finding an element $\hat{x} \in F$ which is an approximate solution to (MCKP). The algorithm relies on solving a series of problems (BS(λ)) defined by (11) for $0 < \lambda < 1$, chosen in such a way that the Pareto solutions $x(\lambda)$ to (BS(λ)) are feasible for (MCKP), with $(-c)^T x(\lambda) + b \ge 0$ and $(-c)^T x(\lambda) + b$ diminishing for subsequent $\lambda$.

According to Theorem 4.1, each solution to (BS(λ)) solves the linear bi-objective optimization problem (BP),

$$\text{Vmax} \ (p^T x, (-c)^T x) \quad \text{subject to} \quad x \in X. \qquad (\mathrm{BP})$$

According to Theorem 3.1, any Pareto efficient solution $x^*$ to problem (BP) which is feasible for (MCKP), i.e., $(-c)^T x^* \ge -b$, and satisfies condition (∗), i.e.,

$$(-c)^T x^* + b = \min_{x \in S_{bo},\ (-c)^T x + b \ge 0} \ \big( (-c)^T x + b \big), \qquad (*)$$

solves problem (MCKP). Since problems (BS(λ)) are defined with the help of linear scalarization, we are not able, in general, to enumerate all $x \in S_{bo}$ such that $(-c)^T x + b \ge 0$ in order to find an $x^*$ which satisfies condition (∗). On the other hand, by using linear scalarization we are able to decompose and easily solve problems (BS(λ)).

[Fig. 3 Outcome $f(\hat{x})$ and bounds derived by the BISSA algorithm; $x^*$: the solution to problem (MCKP).]

The BISSA algorithm aims at finding a Pareto efficient solution $\hat{x} \in X$ to (BP) which is feasible for (MCKP), i.e., $c^T \hat{x} \le b$, for which the value of $b - c^T \hat{x}$ is as small as possible (but not necessarily minimal) and which approaches condition (∗) of Theorem 3.1 as closely as possible.

Here we give a description of the BISSA algorithm. The first step of the algorithm (lines 1–5) is to find solutions to problems (P1) and (P2) as well as their outcomes. These solutions are the extreme Pareto solutions to problem (BP); the corresponding points, named $(a_1^0, b_1^0)$ and $(a_2^0, b_2^0)$, are presented in Fig. 3. Then (lines 6–9), in order to assert whether a solution to problem (MCKP) exists or not, a basic check is made against the value $-b$. If the algorithm reaches line 10, no solution has been found yet, and we can begin the exploration of the search space. We calculate $\lambda$ according to line 13.
The value of $\lambda$ is the slope of the straight line joining $(a_1, b_1)$ and $(a_2, b_2)$; at the same time it is the scalarization parameter defining the problem (BS(λ)) (formula (11)). The outcome of the solution to problem (BS(λ)) cannot lie below the straight line determined by the points $(a_1, b_1)$ and $(a_2, b_2)$: it must lie on or above this line, as it is the Pareto efficient outcome of problem (BP). Then problem (BS(λ)) is solved (line 14) using formulae (16) and (17). Next, in lines 15–27 of the repeat-until loop, a scan of the search space is conducted to find solutions to problem (BP) which are feasible for problem (MCKP). If there exist solutions with outcomes lying above the straight line determined by $\lambda$ (the condition in line 15 is true), then either the search space is narrowed (by determining new points $(a_1, b_1)$ and $(a_2, b_2)$; see Fig. 3 and the points with upper index equal to 1) and the loop continues, or the solution to problem (MCKP) is derived. If not, the solution $x$ from the set $S$ whose outcome lies above the line determined by $-b$ [a feasible solution to problem (MCKP)] and for which the value $f_2(x) + b$ is minimal in this set is taken as the approximate solution $\hat{x}$ to problem (MCKP), and the loop terminates. Finally (line 28), the upper bound $f_1(\hat{x}) + u$ on the profit value of the exact solution to problem (MCKP) is calculated.

Algorithm 1 BISSA — approximate solution search for (MCKP)

 1: Calculate $\varepsilon_1$, $\varepsilon_2$ according to (20)
 2: Assume that $f_1(x) = p^T x$ and $f_2(x) = (-c)^T x$
 3: Solve (P1) according to (18) and (17)  ▷ $x_1$ a solution to (P1)
 4: Solve (P2) according to (17) and (18)  ▷ $x_2$ a solution to (P2)
 5: $a_1 := f_1(x_1)$, $b_1 := f_2(x_1)$, $a_2 := f_1(x_2)$, $b_2 := f_2(x_2)$
 6: if $(a_1, b_1) = (a_2, b_2)$ and $b_2 \ge -b$ then $x_2$ solves (MCKP) and STOP end if
 7: if $b_1 \ge -b$ then $x_1$ solves (MCKP) and STOP end if
 8: if $b_2 = -b$ then $x_2$ solves (MCKP) and STOP end if
 9: if $b_2 < -b$ then no solution to (MCKP) and STOP end if
10: ▷ $(a_1, b_1) \ne (a_2, b_2)$ and $b_1 < -b < b_2$; explore the search space
11: $loop := \mathrm{TRUE}$
12: repeat
13:   $\lambda := \dfrac{b_2 - b_1}{(a_1 - a_2) + (b_2 - b_1)}$, $\alpha := \lambda a_1 + (1-\lambda) b_1$  ▷ $0 < \lambda < 1$
14:   Solve (BS(λ)) according to (16) and (17)  ▷ $x$ a solution, $opt$ the optimal value, $S$ the solution set of (BS(λ))
15:   if $opt > \alpha$ then
16:     if $f_2(x) > -b$ then
17:       $a_2 := f_1(x)$, $b_2 := f_2(x)$
18:     else if $f_2(x) < -b$ then
19:       $a_1 := f_1(x)$, $b_1 := f_2(x)$
20:     else
21:       $x$ solves (MCKP) and STOP
22:     end if
23:   else  ▷ $opt = \alpha$
24:     $\hat{x} := \arg\min_{x \in S,\ f_2(x) \ge -b} f_2(x)$
25:     $loop := \mathrm{FALSE}$
26:   end if
27: until $\neg loop$  ▷ $\hat{x}$ is an approximate solution to (MCKP); $f_1(\hat{x})$ is a lower bound (LB) for (MCKP)
28: $u := \dfrac{(a_1 - f_1(\hat{x}))(f_2(\hat{x}) + b)}{f_2(\hat{x}) - b_1}$  ▷ $f_1(\hat{x}) + u$ is an upper bound (UB) for (MCKP)

The BISSA algorithm finds either an exact solution to problem (MCKP) or (after reaching line 27) a lower bound (LB) with its solution $\hat{x}$ and an upper bound (UB) (see Fig. 3). A solution found by the algorithm is, in general, only an approximate solution to problem (MCKP), because the triangle (called further the triangle of uncertainty) determined by the points $(f_1(\hat{x}), f_2(\hat{x}))$, $(f_1(\hat{x}) + u, -b)$, $(f_1(\hat{x}), -b)$ may contain other Pareto outcomes [candidates for outcomes of exact solutions to problem (MCKP)] which the proposed algorithm is not able to derive. The reason is that we use a scalarization technique based on weighted sums of the criteria to obtain Pareto solutions to problem (BP).
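To make the control flow of lines 11–28 concrete, the following sketch is ours, not the authors' implementation. It assumes a routine solve_bs wrapping the decomposition of Sect. 4 that returns one optimizer's outcome $(f_1(x), f_2(x))$ and the optimal value; the scan of the whole solution set $S$ in line 24 is abbreviated to a comment (we assume solve_bs already returns the representative selected there), and the exact comparison $opt = \alpha$ would need a tolerance in floating point:

```c
typedef struct { double f1, f2; } outcome;   /* (p^T x, (-c)^T x) */

/* Assumed interface, not from the paper: solves (BS(lambda)) via the
   closed-form decomposition and reports the optimal value in *opt. */
extern outcome solve_bs(double lambda, double *opt);

/* Sketch of Algorithm 1, lines 11-28. Inputs: extreme Pareto outcomes
   (a1,b1), (a2,b2) with b1 < -b < b2. Returns UB = f1(xhat) + u;
   the lower bound LB is xhat->f1. */
static double bissa_loop(double a1, double b1, double a2, double b2,
                         double b, outcome *xhat) {
    for (;;) {
        double lambda = (b2 - b1) / ((a1 - a2) + (b2 - b1)); /* line 13 */
        double alpha  = lambda * a1 + (1.0 - lambda) * b1;
        double opt;
        outcome x = solve_bs(lambda, &opt);                  /* line 14 */
        if (opt > alpha) {                                   /* line 15 */
            if (x.f2 > -b)      { a2 = x.f1; b2 = x.f2; }    /* 16-17 */
            else if (x.f2 < -b) { a1 = x.f1; b1 = x.f2; }    /* 18-19 */
            else { *xhat = x; break; } /* f2(x) = -b: exact, u = 0 below */
        } else {
            /* opt = alpha: line 24. The full algorithm scans all of S
               and picks the outcome with f2 >= -b minimizing f2; here
               we assume solve_bs returned that representative. */
            *xhat = x;
            break;
        }
    }
    double u = (a1 - xhat->f1) * (xhat->f2 + b) / (xhat->f2 - b1); /* 28 */
    return xhat->f1 + u;
}
```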
Let us recall that each instance of the optimization problem (BS(λ)) can be solved in time $O(n)$; the number of these instances solved by the proposed algorithm, however, depends on the size of the problem (the values $k$ and $n_i$) and on the data.

6 Computational experiments

Most publicly available test instances refer not to the (MCKP) problem (recall that there is only one inequality, or budget, constraint in the problem we consider) but to multi-dimensional knapsack problems. For this reason we generated new random instances (available from the authors on request). To compare the solutions obtained by the BISSA algorithm with exact solutions, we used the minimal algorithm for the multiple-choice knapsack problem [22], which we call EXACT, and its implementation in C [23]. The EXACT algorithm gives the profit value of the optimal solution as well as the solution obtained by the greedy algorithm for the (MCKP) problem, so the quality of the approximate solutions produced by BISSA can be assessed in terms of the difference, or relative difference, between the profit values of approximate and exact solutions.

Since the difficulty of knapsack problems (see, e.g., the monograph [19]) depends on the correlation between profits and weights of items, we conducted two computational experiments: Experiment 1 with uncorrelated data instances (easy to solve) and Experiment 2 with weakly correlated data instances (more difficult to solve) (cf. [11]). We explain later why weakly correlated problems are more difficult for the BISSA algorithm than uncorrelated ones.

To prepare the test problems (data instances) we used a method proposed in [22] and our own procedure for calculating total cost values. The BISSA algorithm has been implemented in C. The implementation of the BISSA algorithm was run on an off-the-shelf laptop (2 GHz AMD processor, Windows 10), and the implementation of the EXACT algorithm was run on a PC (4 × 3.2 GHz Intel processor, Linux). The running time of the BISSA and EXACT algorithms was below one second for each of the test problems.

The columns of the tables containing the experiment results are as follows.

1. Problem number.
2. Profit of the exact solution found by the EXACT algorithm.
3. Profit of the approximate solution found by the BISSA algorithm.
4. Difference between 2 and 3.
5. Relative (%) difference between 2 and 3.
6. Upper bound for (MCKP) found by the BISSA algorithm.
7. Difference between the upper bound and the profit of the approximate solution.
8. Relative (%) difference between the upper bound and the profit of the approximate solution.
9. Upper bound for (MCKP) found by the greedy algorithm.
10. Number of (BS(λ)) problems solved by the BISSA algorithm.

Experiment 1: uncorrelated (unc) data instances

We generated 10 test problems with $k = 10$ and $n_i = 1000$, $i = 1,\ldots,k$ (problem set (unc, 10, 1000)); 10 test problems with $k = 100$ and $n_i = 100$, $i = 1,\ldots,k$ (problem set (unc, 100, 100)); and 10 test problems with $k = 1000$ and $n_i = 10$, $i = 1,\ldots,k$ (problem set (unc, 1000, 10)). For each test problem, the profits $p_{ij}$ and costs $c_{ij}$ of items were randomly distributed (according to the uniform distribution) in $[1, R]$, $R = 10000$. Profits and costs of items were integers.
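For reference, the generation of profits and costs just described can be sketched as follows (our illustration; rand() merely stands in for whatever uniform generator was actually used, and the budget b, described next, is computed separately):

```c
#include <stdlib.h>

/* Sketch of the uncorrelated instance generation described above:
   profits and costs are uniform integers in [1, R], R = 10000. */
#define R 10000

static int uniform_int(int lo, int hi) {      /* uniform on [lo, hi] */
    return lo + rand() % (hi - lo + 1);       /* modulo bias ignored */
}

static void gen_uncorrelated(size_t k, const size_t *n,
                             int **p, int **c) {
    for (size_t i = 0; i < k; ++i)
        for (size_t j = 0; j < n[i]; ++j) {
            p[i][j] = uniform_int(1, R);
            c[i][j] = uniform_int(1, R);
        }
}
```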
For each test problem, the total cost $b$ was set to either $\bar{c} + \mathrm{random}(0, \tfrac{1}{2}\bar{c})$ or $\bar{c} - \mathrm{random}(0, \tfrac{1}{2}\bar{c})$, each selected with the same probability 0.5, where

$$\bar{c} = \sum_{i=1}^{k} \tfrac{1}{2} \Big( \min_{j=1,\ldots,n_i} c_{ij} + \max_{j=1,\ldots,n_i} c_{ij} \Big),$$

and $\mathrm{random}(0, r)$ denotes an integer selected randomly (according to the uniform distribution) from $[0, r]$.

The results for the problem sets (unc, 10, 1000), (unc, 100, 100) and (unc, 1000, 10) are given in Tables 1, 2 and 3, respectively.

Table 1 Results for Experiment 1, problem set (unc, 10, 1000); columns as described above

No. | Exact  | BISSA  | Diff. | Diff. (%) | UB (BISSA) | UB−BISSA | UB−BISSA (%) | UB (greedy) | #(BS(λ))
 1  | 99,873 | 99,849 | 24    | 0.024     | 99,887.011 | 38.011   | 0.038        | 99,887      | 7
 2  | 99,894 | 99,889 | 5     | 0.005     | 99,899.061 | 10.061   | 0.010        | 99,899      | 8
 3  | 99,861 | 99,861 | 0     | 0.000     | 99,866.141 | 5.141    | 0.005        | 99,866      | 7
 4  | 99,832 | 99,832 | 0     | 0.000     | 99,836.262 | 4.262    | 0.004        | 99,836      | 6
 5  | 99,854 | 99,854 | 0     | 0.000     | 99,856.485 | 2.485    | 0.002        | 99,856      | 6
 6  | 99,827 | 99,808 | 19    | 0.019     | 99,835.986 | 27.986   | 0.028        | 99,835      | 6
 7  | 99,860 | 99,841 | 19    | 0.019     | 99,863.302 | 22.302   | 0.022        | 99,863      | 6
 8  | 99,883 | 99,883 | 0     | 0.000     | 99,895.311 | 12.311   | 0.012        | 99,895      | 6
 9  | 99,881 | 99,881 | 0     | 0.000     | 99,883.419 | 2.419    | 0.002        | 99,883      | 7
10  | 99,702 | 99,702 | 0     | 0.000     | 99,724.825 | 22.825   | 0.023        | 99,724      | 6

Table 2 Results for Experiment 1, problem set (unc, 100, 100); columns as described above

No. | Exact   | BISSA   | Diff. | Diff. (%) | UB (BISSA)  | UB−BISSA | UB−BISSA (%) | UB (greedy) | #(BS(λ))
 1  | 983,045 | 982,946 | 99    | 0.010     | 983,059.387 | 113.387  | 0.012        | 983,059     | 10
 2  | 980,483 | 980,433 | 50    | 0.005     | 980,492.589 | 59.589   | 0.006        | 980,492     | 11
 3  | 984,106 | 983,999 | 107   | 0.011     | 984,130.851 | 131.851  | 0.013        | 984,130     | 8
 4  | 982,980 | 982,684 | 296   | 0.030     | 983,021.172 | 337.172  | 0.034        | 983,021     | 10
 5  | 981,421 | 981,421 | 0     | 0.000     | 981,426.965 | 5.965    | 0.001        | 981,426     | 10
 6  | 983,059 | 982,968 | 91    | 0.009     | 983,080.841 | 112.841  | 0.011        | 983,080     | 10
 7  | 984,059 | 984,001 | 58    | 0.006     | 984,071.849 | 70.849   | 0.007        | 984,071     | 10
 8  | 987,210 | 987,158 | 52    | 0.005     | 987,228.022 | 70.022   | 0.007        | 987,228     | 9
 9  | 980,999 | 980,944 | 55    | 0.006     | 981,035.911 | 91.911   | 0.009        | 981,035     | 9
10  | 982,142 | 982,060 | 82    | 0.008     | 982,176.615 | 116.615  | 0.012        | 982,176     | 9

Table 3 Results for Experiment 1, problem set (unc, 1000, 10); columns as described above

No. | Exact     | BISSA     | Diff. | Diff. (%) | UB (BISSA)    | UB−BISSA | UB−BISSA (%) | UB (greedy) | #(BS(λ))
 1  | 8,421,950 | 8,420,352 | 1598  | 0.019     | 8,421,964.411 | 1612.411 | 0.019        | 8,421,964   | 12
 2  | 8,770,359 | 8,768,966 | 1393  | 0.016     | 8,770,370.988 | 1404.988 | 0.016        | 8,770,370   | 11
 3  | 8,959,068 | 8,958,820 | 248   | 0.003     | 8,959,085.071 | 265.071  | 0.003        | 8,959,085   | 12
 4  | 8,848,233 | 8,847,518 | 715   | 0.008     | 8,848,270.055 | 752.055  | 0.008        | 8,848,270   | 11
 5  | 8,807,777 | 8,806,990 | 787   | 0.009     | 8,807,787.139 | 797.139  | 0.009        | 8,807,787   | 12
 6  | 8,881,946 | 8,881,649 | 297   | 0.003     | 8,881,976.338 | 327.338  | 0.004        | 8,881,976   | 11
 7  | 8,927,815 | 8,927,065 | 750   | 0.008     | 8,927,826.311 | 761.311  | 0.009        | 8,927,826   | 13
 8  | 8,742,270 | 8,740,874 | 1396  | 0.016     | 8,742,284.668 | 1410.668 | 0.016        | 8,742,284   | 12
 9  | 8,693,221 | 8,690,669 | 2552  | 0.029     | 8,693,245.349 | 2576.349 | 0.030        | 8,693,245   | 12
10  | 8,411,809 | 8,411,350 | 459   | 0.005     | 8,411,859.566 | 509.566  | 0.006        | 8,411,859   | 12

Experiment 2: weakly correlated (wco) data instances

We generated 10 test problems with $k = 20$ and $n_i = 20$, $i = 1,\ldots,k$ (problem set (wco, 20, 20)). For each test problem, the costs $c_{ij}$ of items in set $N_i$ were randomly distributed (according to the uniform distribution) in $[1, R]$, $R = 10000$, and the profits $p_{ij}$ in this set were randomly distributed in $[c_{ij} - 10, c_{ij} + 10]$, subject to $p_{ij} \ge 1$. Profits and costs of items were integers. For each test problem, the total cost $b$ was calculated as in Experiment 1.

The results for problem set (wco, 20, 20) are given in Table 4.

Table 4 Results for Experiment 2, problem set (wco, 20, 20); columns as described above

No. | Exact   | BISSA   | Diff. | Diff. (%) | UB (BISSA)  | UB−BISSA | UB−BISSA (%) | UB (greedy) | #(BS(λ))
 1  | 113,664 | 113,584 | 80    | 0.070     | 113,665.988 | 81.988   | 0.072        | 113,665     | 8
 2  | 102,060 | 102,049 | 11    | 0.011     | 102,061.000 | 12.000   | 0.012        | 102,061     | 5
 3  | 91,399  | 89,864  | 1535  | 1.679     | 91,400.223  | 1536.223 | 1.681        | 91,400      | 8
 4  | 121,378 | 118,231 | 3147  | 2.593     | 121,380.379 | 3149.379 | 2.595        | 121,380     | 8
 5  | 100,029 | 96,907  | 3122  | 3.121     | 100,032.112 | 3125.112 | 3.124        | 100,032     | 8
 6  | 97,145  | 97,145  | 0     | 0.000     | 97,146.000  | 1.000    | 0.001        | 97,146      | 5
 7  | 103,176 | 97,340  | 5836  | 5.656     | 103,178.131 | 5838.131 | 5.658        | 103,178     | 7
 8  | 82,942  | 82,832  | 110   | 0.133     | 82,944.000  | 112.000  | 0.135        | 82,944      | 5
 9  | 86,132  | 86,132  | 0     | 0.000     | 86,132.000  | 0.000    | 0.000        | 86,132      | 6
10  | 80,322  | 80,194  | 128   | 0.159     | 80,325.000  | 131.000  | 0.163        | 80,325      | 6

In the case of uncorrelated data instances, the BISSA algorithm was able to find approximate solutions (and profit values) to problems with 10,000 binary variables in reasonable time. The relative difference between the profit values of exact and approximate solutions is small for each of the test problems. The upper bounds found by the BISSA algorithm are almost the same as the upper bounds found by the greedy algorithm for (MCKP). Even for the problem set (unc, 1000, 10), the number of (BS(λ)) problems solved by the BISSA algorithm is small relative to the number of decision variables.

In the case of weakly correlated data instances, the BISSA algorithm solved problems with 400 binary variables in reasonable time. The relative difference between the profit values of exact and approximate solutions is, on average, greater than for the uncorrelated test problems. As one can see in Table 4, the upper bounds found by the BISSA algorithm are almost the same as the upper bounds found by the greedy algorithm for (MCKP). The reason why the BISSA algorithm can solve, in reasonable time, only weakly correlated instances with a significantly smaller number of variables than uncorrelated ones is as follows. In line 24 of the BISSA algorithm, in order to find the element $\hat{x}$, we have to go through the solution set $S$ of the problem (BS(λ)) [a complete scan of the set $S$ according to the values of the second objective function of problem (BP)]. For weakly correlated data instances, the cardinality of the set $S$ may be large even for problems of class (wco, 30, 30). We conducted experiments for problem class (wco, 30, 30); for the most difficult test problem in this class, the cardinality of the solution set $S$ of the problem (BS(λ)) was 199,065,600. For larger weakly correlated problems that number may be even greater.

7 Conclusions and future works

A new approximate method for solving multiple-choice knapsack problems, based on replacing the budget constraint with a second objective function, has been presented. Such a relaxation of the original problem allows a smart scan of the decision space by quickly solving a binary linear optimization problem (which is possible thanks to the decomposition of this problem into independently solved, easy subproblems). Let us note that our method can also be used for finding an upper bound for the multi-dimensional multiple-choice knapsack problem (MMCKP) via the relaxation obtained by summing up all the linear inequality constraints [1]. The method can be compared to the greedy algorithm for multiple-choice knapsack problems, which also finds, in general, an approximate solution and an upper bound.

Two preliminary computational experiments were conducted to check how the proposed algorithm behaves on easy-to-solve (uncorrelated) instances and hard-to-solve (weakly correlated) instances. The results were compared with the results obtained by the exact state-of-the-art algorithm for multiple-choice knapsack problems [22].
For weakly correlated problems, the number of solution outcomes which have to be checked in order to derive the triangle of uncertainty (and hence also an approximate solution to the problem and its upper bound) grows fast with the size of the problem. Therefore, for weakly correlated problems we are able to solve, in reasonable time, only smaller problem instances than for uncorrelated ones.

It is worth underlining that in the proposed method the profits and costs of items, as well as the total cost, can be real numbers. This can be of value when one wants to solve multiple-choice knapsack problems without converting real numbers into integers (as one has to do for dynamic programming methods).

Further work will cover investigations of how the algorithm behaves for weakly and strongly correlated instances, as well as the issue of finding a better solution by a smart "scanning" of the triangle of uncertainty.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Akbar, M.M., Rahman, M.S., Kaykobad, M., Manning, E.G., Shoja, G.C.: Solving the multidimensional multiple-choice knapsack problem by constructing convex hulls. Comput. Oper. Res. 33(5), 1259–1273 (2006). https://doi.org/10.1016/j.cor.2004.09.016
2. Chen, Y., Hao, J.K.: A reduce and solve approach for the multiple-choice multidimensional knapsack problem. Eur. J. Oper. Res. 239(2), 313–322 (2014). https://doi.org/10.1016/j.ejor.2014.05.025
3. Cherfi, N., Hifi, M.: A column generation method for the multiple-choice multi-dimensional knapsack problem. Comput. Optim. Appl. 46(1), 51–73 (2010). https://doi.org/10.1007/s10589-008-9184-7
4. Dudziński, K., Walukiewicz, S.: A fast algorithm for the linear multiple-choice knapsack problem. Oper. Res. Lett. 3(4), 205–209 (1984). https://doi.org/10.1016/0167-6377(84)90027-0
5. Dudziński, K., Walukiewicz, S.: Exact methods for the knapsack problem and its generalizations. Eur. J. Oper. Res. 28(1), 3–21 (1987). https://doi.org/10.1016/0377-2217(87)90165-2
6. Dyer, M., Kayal, N., Walker, J.: A branch and bound algorithm for solving the multiple-choice knapsack problem. J. Comput. Appl. Math. 11(2), 231–249 (1984). https://doi.org/10.1016/0377-0427(84)90023-2
7. Dyer, M., Riha, W., Walker, J.: A hybrid dynamic programming/branch-and-bound algorithm for the multiple-choice knapsack problem. J. Comput. Appl. Math. 58(1), 43–54 (1995). https://doi.org/10.1016/0377-0427(93)E0264-M
8. Dyer, M.E.: An O(n) algorithm for the multiple-choice knapsack linear program. Math. Program. 29(1), 57–63 (1984). https://doi.org/10.1007/BF02591729
9. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)
10. Gao, C., Lu, G., Yao, X., Li, J.: An iterative pseudo-gap enumeration approach for the multidimensional multiple-choice knapsack problem. Eur. J. Oper. Res. (2016). https://doi.org/10.1016/j.ejor.2016.11
11. Han, B., Leblet, J., Simon, G.: Hard multidimensional multiple choice knapsack problems, an empirical study. Comput. Oper. Res. 37(1), 172–181 (2010). https://doi.org/10.1016/j.cor.2009.04.006
12.
Hifi, M., Michrafy, M., Sbihi, A.: Heuristic algorithms for the multiple-choice multidimensional knapsack problem. J. Oper. Res. Soc. 55(12), 1323–1332 (2004). https://doi.org/10.1057/palgrave.jors
13. Kellerer, H., Pferschy, U., Pisinger, D.: Knapsack Problems. Springer (2004)
14. Khan, M.S.: Quality adaptation in a multisession multimedia system: model, algorithms, and architecture. Ph.D. thesis, AAINQ36645, University of Victoria, Victoria (1998)
15. Klamroth, K., Tind, J.: Constrained optimization using multiple objective programming. J. Glob. Optim. 37(3), 325–355 (2007). https://doi.org/10.1007/s10898-006-9052-x
16. Kwong, C., Mu, L., Tang, J., Luo, X.: Optimization of software components selection for component-based software system development. Comput. Ind. Eng. 58(4), 618–624 (2010). https://doi.org/10.1016/j.cie.2010.01.003
17. Lee, C., Lehoczky, J., Rajkumar, R.R., Siewiorek, D.: On quality of service optimization with discrete QoS options. In: Proceedings of the IEEE Real-Time Technology and Applications Symposium, pp. 276–286 (1999)
18. Martello, S., Pisinger, D., Toth, P.: New trends in exact algorithms for the 0–1 knapsack problem. Eur. J. Oper. Res. 123(2), 325–332 (2000). https://doi.org/10.1016/S0377-2217(99)00260-X
19. Martello, S., Toth, P.: Knapsack Problems: Algorithms and Computer Implementations. Wiley, New York (1990)
20. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers (1999). https://doi.org/10.1007/978-1-4615-5563-6
21. Nauss, R.M.: The 0–1 knapsack problem with multiple choice constraints. Eur. J. Oper. Res. 2(2), 125–131 (1978). https://doi.org/10.1016/0377-2217(78)90108-X
22. Pisinger, D.: A minimal algorithm for the multiple-choice knapsack problem. Eur. J. Oper. Res. 83(2), 394–410 (1995). https://doi.org/10.1016/0377-2217(95)00015-I
23. Pisinger, D.: Program code in C (1995). http://www.diku.dk/~pisinger/minknap.c (downloaded in 2016)
24. Pisinger, D.: Budgeting with bounded multiple-choice constraints. Eur. J. Oper. Res. 129(3), 471–480 (2001). https://doi.org/10.1016/S0377-2217(99)00451-8
25. Pyzel, P.: Propozycja metody oceny efektywnosci systemow MIS [A proposal of a method for evaluating the effectiveness of MIS systems]. In: Myslinski, A. (ed.) Techniki Informacyjne – Teoria i Zastosowania, Wybrane Problemy, vol. 2(14), pp. 59–70. Instytut Badan Systemowych PAN, Warszawa (2012)
26. Sbihi, A.: A best first search exact algorithm for the multiple-choice multidimensional knapsack problem. J. Comb. Optim. 13(4), 337–351 (2007). https://doi.org/10.1007/s10878-006-9035-3
27. Sinha, P., Zoltners, A.A.: The multiple-choice knapsack problem. Oper. Res. 27(3), 503–515 (1979). https://doi.org/10.1287/opre.27.3.503
28. Zemel, E.: An O(n) algorithm for the linear multiple choice knapsack problem and related problems. Inf. Process. Lett. 18(3), 123–128 (1984). https://doi.org/10.1016/0020-0190(84)90014-0
29. Zhong, T., Young, R.: Multiple choice knapsack problem: example of planning choice in transportation. Eval. Program Plan. 33(2), 128–137 (2010). https://doi.org/10.1016/j.evalprogplan.2009.06.007
