Solving TSP by Considering Processing Time: Meta-Heuristics and Fuzzy Approaches

Fuzzy Inf. Eng. (2011) 4: 359-378
DOI 10.1007/s12543-011-0091-8

ORIGINAL ARTICLE

S.H. Nasseri · M.H. Khaviari

Received: 30 January 2011 / Revised: 17 July 2011 / Accepted: 31 October 2011
© Springer-Verlag Berlin Heidelberg and Fuzzy Information and Engineering Branch of the Operations Research Society of China

Abstract In this paper, general properties of the traveling salesman problem (TSP) are illustrated, and a model is introduced that minimizes the make-span (the time interval within which all jobs are completed) while considering sequence-dependent setup times and processing times. Furthermore, fuzzy sets and their characteristics are applied to a hard-to-solve traveling salesman problem with processing times. The TSP is NP-hard, so solving instances of large dimension exactly is unmanageable; at the same time, such problems are unavoidable in the real world, which is where local searches prove their importance. Although these meta-heuristic methods may not reveal the exact optimal solution, they yield a near-optimal solution in considerably reduced computational time. Here, some of the most famous local searches are explained in their common, standard form: Genetic Algorithm (GA), Tabu Search (TS), Simulated Annealing (SA), and Ant Colony System (ACO). A comprehensive comparison of these methods is then presented; keeping the methods in their standard form helps the article reach unbiased, acceptable judgments. Finally, an analysis of the methods and their parameter tuning is given.

Keywords Traveling salesman problem · Fuzzy numbers · Genetic algorithm · Tabu search · Simulated annealing · Ant colony system

S.H. Nasseri, Department of Mathematics, University of Mazandaran, Babolsar, Iran; email: nasseri@umz.ac.ir
M.H. Khaviari, Science and Technology University of Mazandaran, Babolsar, Iran; email: khaviari@ustmb.ac.ir

1. Introduction

The traveling salesman problem (TSP) finds a routing of a salesman who starts from a home location, visits a prescribed set of cities, and returns to the original location in such a way that the total distance traveled is minimized and each city is visited exactly once. In general, the TSP represents a typical 'hard' combinatorial optimization problem. The TSP is known variously in many fields of study; e.g., Karl Menger (a mathematician) called it the messenger problem. In the field of scheduling, the TSP is used to model a sequence-dependent setup time problem on a single machine. In many realistic problems, setup times depend on the type of job just completed as well as on the type of the job about to be processed. In that situation, it is not valid to absorb the setup times of jobs into their processing times, and an explicit modification must be made. The time interval in which job j occupies the machine is S_{ij} + p_j, where i is the job that precedes j in the current sequence, S_{ij} is the setup time required for job j after job i is completed, and p_j is the amount of direct processing time required to complete job j. In the basic single machine problem, C_max is a constant. With sequence-dependent setup times, however, C_max depends on which sequence is chosen, as follows [1]:

C_{[1]} = F_{[1]} = S_{0,[1]} + p_{[1]},
C_{[2]} = F_{[2]} = F_{[1]} + S_{[1],[2]} + p_{[2]},
...
C_{[n]} = F_{[n]} = F_{[n-1]} + S_{[n-1],[n]} + p_{[n]} = C_max,     (1)

where state zero corresponds to an initial state, usually an idle state, and C_{[i]} is the completion time of the i-th job in the sequence. Also, if we define the state [n+1] as a terminal state (perhaps identical to state 0), then C_max becomes [1]:

C_max = F_{[n]} + S_{[n],[n+1]} = \sum_{j=1}^{n+1} S_{[j-1],[j]} + \sum_{j=1}^{n} p_j.     (2)

The second summation is constant; therefore, if the first part is minimized, the whole sum is optimized.
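The recursion (1)-(2) is easy to compute directly. The following sketch is ours, not the paper's; the names makespan, setup, and proc are illustrative, and the optional terminal setup S_{[n],[n+1]} is ignored:

```python
# Illustration of Eqs. (1)-(2): makespan of a single-machine sequence with
# sequence-dependent setup times. All names here are ours, not the paper's.

def makespan(setup, proc, seq):
    """C_max of a job sequence (terminal setup S_[n],[n+1] ignored).

    setup[i][j] -- setup time before job j when job i precedes it
                   (row 0 stands for the initial idle state)
    proc[j]     -- processing time of job j (index 0 is unused padding)
    seq         -- processing order of the jobs, e.g. [1, 2, 3]
    """
    t, prev = 0, 0                      # state 0 = idle start
    for j in seq:
        t += setup[prev][j] + proc[j]   # F_[k] = F_[k-1] + S + p
        prev = j
    return t

setup = [
    [0, 2, 4, 1],   # from the idle state
    [0, 0, 3, 2],   # after job 1
    [0, 1, 0, 5],   # after job 2
    [0, 2, 2, 0],   # after job 3
]
proc = [0, 7, 6, 5]

print(makespan(setup, proc, [1, 2, 3]))  # (2+7)+(3+6)+(5+5) = 28
print(makespan(setup, proc, [1, 3, 2]))  # (2+7)+(2+5)+(2+6) = 24
```

Since the processing-time sum is constant (18 in this toy example), only the setup part differs between sequences: [1, 3, 2] accumulates 6 time units of setup against 10 for [1, 2, 3].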
This sum represents the total non-productive time in the full sequence, beginning and ending in the idle state. After this brief characterization of the TSP, it must be said that crisp data are rarely found in real-life problems, so exploring the properties of the TSP over uncertain numbers shows its importance. The TSP is classified as NP-hard; nevertheless, state-of-the-art algorithms are capable of solving very large instances, e.g., problems with thousands of cities.

2. Problem Modeling

As explained above, it is desired to minimize C_max together with the satisfaction-grade average.

2.1. Characterizing the Problem's Fuzzy Numbers

The fuzzy set was proposed by [17]. A fuzzy set A can be announced as a fuzzy number when it has the following properties:

(a) It is normal, i.e., ∃x ∈ R : μ(x) = 1.
(b) It is a convex set.
(c) Support(A) is bounded.
(d) A_α for α ∈ (0, 1] is a closed interval, i.e., all α-cuts are closed subsets of the real numbers.

A fuzzy number takes its name from its membership function; the triangular and trapezoidal fuzzy numbers are both used in this article, so they are illustrated as follows. First, the membership function of a triangular fuzzy number can be written as follows, and Figure 1 shows its shape:

μ_Ã(t) = { 1 - (a - t)/α,   a - α ≤ t ≤ a,
         { 1 - (t - a)/β,   a < t ≤ a + β,          (3)
         { 0,               otherwise.

Fig. 1 A shape of a triangular fuzzy number

The following gives the general form of a trapezoidal fuzzy number and the shape of its membership function:

μ_Ã(t) = { (t - (a - α))/α,  a - α ≤ t ≤ a,
         { 1,                a < t ≤ b,
         { (b - t + β)/β,    b < t ≤ b + β,          (4)
         { 0,                otherwise.

Fig. 2 General shape of a trapezoidal fuzzy number

Note: The number whose membership function value equals 1 is called the mode of a fuzzy number in triangular form. For modeling the TSP in the fuzzy approach, the special characteristics of the fuzzy numbers must first be prescribed.
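The membership functions (3) and (4) translate directly into code; the following is a minimal sketch with our own function names (it assumes positive spreads α, β):

```python
# Membership functions of the triangular (3) and trapezoidal (4) fuzzy
# numbers; a minimal sketch, assuming alpha > 0 and beta > 0.

def tri_membership(t, a, alpha, beta):
    """Triangular fuzzy number: mode a, left spread alpha, right spread beta."""
    if a - alpha <= t <= a:
        return 1 - (a - t) / alpha
    if a < t <= a + beta:
        return 1 - (t - a) / beta
    return 0.0

def trap_membership(t, a, b, alpha, beta):
    """Trapezoidal fuzzy number: core [a, b], spreads alpha and beta."""
    if a - alpha <= t <= a:
        return (t - (a - alpha)) / alpha
    if a < t <= b:
        return 1.0
    if b < t <= b + beta:
        return (b + beta - t) / beta
    return 0.0

print(tri_membership(5, 5, 2, 3))    # at the mode -> 1.0
print(tri_membership(6.5, 5, 2, 3))  # halfway down the right slope -> 0.5
```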
As said before, it is better to consider setup times and processing times as fuzzy numbers.

1) Fuzzy setup times are supposed to be triangular; although many models exist, such as trapezoidal and normal, the triangular one seems most suitable.

2) To define the fuzzy processing time of job j, the job is assigned an integer processing time P_j drawn uniformly from [1, 100]. Then α and β are defined as random integers in the interval [0, 0.1 × P_j] [14].

3) To characterize a fuzzy setup time (S_{ij}, α, β), the setup time required for job j when it immediately follows job i in the sequence is a randomly selected integer in [1, 0.1 × n], where n is the number of jobs, and α, β are randomly selected integers in [0, 0.2 × S_{ij}].

2.2. Minimizing the Make-span (Objective Function)

The make-span, or C_max, formula was shown before, but it can also be written as:

C_max = \sum_{i=1}^{n+1} S_{[i-1],[i]} + \sum_{i=1}^{n} p_i.     (5)

3. Artificial Intelligence Tools

Several heuristic and meta-heuristic tools have evolved in recent decades by taking inspiration from natural intelligence; these are the so-called artificial intelligence methods. They facilitate solving optimization problems that were previously difficult or impossible to solve by exact methods. This article illustrates some of the most powerful and famous methods proposed in recent decades: Simulated Annealing, Tabu Search, Ant Colony System, and Genetic Algorithm. Recently, these heuristic tools have been combined among themselves and with knowledge elements, as well as with more traditional approaches such as statistical analysis, to solve extremely challenging problems, and their applications appear in many case studies. In particular, for the Genetic Algorithm, Ling-Ning and colleagues (2008) improved a hybrid GA with some optimization strategies for asymmetric TSPs, and Y. Marinakis and A. Migdalas (2005) proposed a GA hybridized with a randomized greedy search procedure.
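Rules 1)-3) above suggest a simple instance generator. The sketch below is our illustration (the paper gives no code); each fuzzy quantity is stored as a (mode, α, β) triple:

```python
# A possible generator for the fuzzy instances of rules 1)-3); names and
# structure are our own assumptions, the paper gives no code.
import random

def fuzzy_instance(n, seed=None):
    rng = random.Random(seed)
    proc = []
    for _ in range(n):                       # rule 2: P_j ~ U{1..100}
        p = rng.randint(1, 100)
        proc.append((p, rng.randint(0, int(0.1 * p)),
                        rng.randint(0, int(0.1 * p))))
    setup = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                     # rule 3: S_ij ~ U{1..0.1n}
            s = rng.randint(1, max(1, int(0.1 * n)))
            setup[i][j] = (s, rng.randint(0, int(0.2 * s)),
                              rng.randint(0, int(0.2 * s)))
    return proc, setup

proc, setup = fuzzy_instance(52, seed=1)
```

For n = 52 the setup modes fall in {1, ..., 5}; note that for very small n the range [1, 0.1 × n] degenerates to the single value 1.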
For Tabu search, F. Choobineh, E. Mohebbi, and H. Khoo (2006) worked on a multi-objective tabu search for single-machine problems with sequence-dependent setup times, formulated as a mixed integer program, and Stecco, Cordeau, and Moretti (2009) published on solving the sequence-dependent setup time single machine problem. In addition, for Simulated Annealing, Meer (2007) presented a comparison of simulated annealing versus the Metropolis method. Finally, for the Ant Colony System, B. Bontoux (2008) submitted an article about handling the TSP by ant colony, and A. Ugur and D. Aydin (2009) explained an iterative software for applying ant colony to the TSP.

Solving problems with these tools offers two major benefits:

(i) The system is very robust, being relatively resilient to missing and/or noisy data.
(ii) CPU time is much shorter than with more traditional (exact) approaches.

Each of the introduced methods works locally and iteratively: in each iteration, it searches the neighborhood of the current solution and repeats this from the solution chosen in the last iteration. The neighborhood relation is defined as follows: "Two solutions are neighbors if and only if one can be reached from the other by a single move (transmission)."

3.1. Tabu Search

Tabu Search (TS) was originally proposed by Fred Glover (1986). TS is basically a gradient-descent search with a special memory. The memory preserves a number of previously visited states along with a number of states that might be considered unwanted; this long-term memory serves to keep the search away from solutions TS has previously visited, or from their attributes. Precisely, in each iteration, a predefined number of neighbors around the current best solution are sorted from worst to best according to the objective function; then, if the selected solution is not tabu (banned), or it is tabu but its objective value is better than that of S* (the best solution found), it replaces S*.
The tabu memory helps avoid cycling; even so, when tabu restraints are enforced, cycling may still occur if the number of moves k during which a tabu stays active is relatively small. Excessively large values of k, on the other hand, can make the search inefficient, as visits to certain attractive configurations may be delayed (Y. L. Kwang and M. A. El-Sharkawi, 2008). Moreover, to make the algorithm more efficient, some sub-functions have been added: diversification, intensification, and illumination. Diversification keeps the search over a vast area and tries to escape local optima; to stay well focused on the current neighborhood, the intensification function performs a simple local random search around the current solution at each recurrence; and illumination controls the tabu list size. The stop condition of the algorithm is a predefined number of iterations, related to the instance size, its complexity, and the difficulty of the parameters. For computational reasons, there is another stop condition that monitors the optimality condition and the number of non-improved iterations. Figure 3 illustrates the pseudo-code of TS.

Tabu Search(f)
    initialize(N, tabu_length, iterations)
    s* <- initial_solution()
    while (q <= iterations) and (non_improved_iterations < max_non)
        for i = 1 to N
            list[i] <- random_neighbour_select()
        sort(list)
        for i = 1 to N
            if list[i] is not tabu, or (list[i] is tabu and f(s*) > f(list[i]))
                s* <- list[i]
        tabu_add(s*)
        intensification(s*)
        diversification(s*)
        illumination(tabu long-term memory)
    end while
END

Fig. 3 Pseudo-code of Tabu Search

3.2. Simulated Annealing

In statistical mechanics, a physical process called annealing is often performed in order to relax the system to a state of minimum free energy. In the annealing process, a solid in a heat bath is heated by increasing the temperature of the bath until the solid melts into liquid; then the temperature is decreased slowly. In the liquid phase, all particles of the solid arrange themselves randomly; in the ground state, the particles are arranged in a highly structured lattice, and the energy of the system is minimal (Y. L. Kwang and M. A. El-Sharkawi, 2008). The Simulated Annealing (SA) search method was introduced by S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi (1983). SA performs a recurring search for better solutions, although there is also a chance of accepting a worse solution, which allows SA to escape local optima. With Δ = f(s') − f(s*), a worse neighbor s' (Δ > 0) is accepted with probability exp(−Δ/T), where T is the current temperature; for Δ ≤ 0 it is accepted outright. In every iteration, the method chooses neighbors from N(s*). The temperature and the number of neighborhood solutions sampled are reduced in each iteration; the process of reducing the temperature at each iteration is called the cooling schedule. Figure 4 presents the pseudo-code of simulated annealing. The initial temperature can be obtained in various ways; one famous formula is

T_0 = −μ Δf / ln(φ),     (6)

where Δf is the difference between the best and worst possible solutions, φ is the desired probability of accepting worse solutions, and μ is the hill-climbing percentage, a constant for tuning the initial temperature.

Simulated Annealing(f)
    initialize(N, T, iterations)
    s* <- initial_solution()
    while (q < iterations) and (non_improved_iterations < max_non)
        for i = 1 to N
            s <- random choose from N(s*)
            if f(s) - f(s*) <= 0
                s* <- s
            else
                s* <- s with probability exp(-(f(s) - f(s*))/T)
        recalculate neighbourhood length N
        T <- cooling_schedule(T)
    end while
END

Fig. 4 Pseudo-code of simulated annealing

In addition, a parallel mechanism can be utilized: a system with several levels is established, and at every level several SA runs are launched in parallel, the best result being transferred to the next level down. The number of SA runs per level can be predefined or descending, and each level uses the output of the level above it (level one uses the initial individual). Figure 5 illustrates the form of parallel SA.

Fig. 5 Structure of parallel SA

3.3. Ant Colony System

Ant colony search algorithms mimic the behavior of real ants. Ant Colony System Optimization (ACO) is a meta-heuristic approach for combinatorial optimization problems. As is well known, real ants are capable of finding the shortest path from food sources to the nest without using visual cues. They are also capable of adapting to changes in the environment, for example finding a new shortest path once the old one is no longer feasible because of a new obstacle [9]. Ant colony algorithms (ACOAs) have been introduced as powerful tools to solve a diverse set of optimization problems, such as the traveling salesman problem and the quadratic assignment problem [13]. The first ACO algorithm was proposed by Dorigo (1991). ACO belongs to the biologically inspired heuristic algorithms. Something very interesting happens when an ant inspecting for food finds a food source: on the path back, the ant releases a special chemical called pheromone. In further iterations, each ant follows, with highest probability, the path carrying the largest amount of pheromone, and whenever an ant chooses a path, that path's pheromone is increased. During this accumulation, paths that are no longer used, or used rarely, lose their pheromone through evaporation, and their chance of being chosen by ants is reduced.
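Before turning to the pheromone model, the SA acceptance rule and geometric cooling of Section 3.2 can be sketched in code. This is a simplified single-neighbor-per-iteration version; the swap neighborhood, parameter values, and function names are our assumptions, not the paper's tuned setup:

```python
# A simplified sketch of the SA loop from Section 3.2: swap neighbourhood,
# geometric cooling, Metropolis acceptance. Parameters are illustrative.
import math
import random

def simulated_annealing(cost, n, t0=100.0, cooling=0.65, iters=200, seed=0):
    """Minimise cost(sequence) over permutations of range(n)."""
    rng = random.Random(seed)
    s = list(range(n))
    rng.shuffle(s)
    best, best_f = s[:], cost(s)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)       # one "transmission": a swap
        cand = s[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(s)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            s = cand                         # accept (possibly worse) move
            if cost(s) < best_f:
                best, best_f = s[:], cost(s)
        t *= cooling                         # geometric cooling schedule
    return best, best_f

# Toy usage: 6 "cities" on a line, cost = length of the visiting path.
dist = [[abs(i - j) for j in range(6)] for i in range(6)]
tour_cost = lambda s: sum(dist[s[k]][s[k + 1]] for k in range(len(s) - 1))
best, best_f = simulated_annealing(tour_cost, 6, seed=1)
```

With cooling coefficient 0.65, the temperature drops quickly, so late iterations behave like pure descent; larger coefficients keep the hill-climbing chance alive longer.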
To decrease the possibility of falling into local optima, all edges are initialized with a random pheromone amount in (0, 1). The jobs and their relations form a graph whose vertices are the jobs and whose edges are the transitions between them. When an ant returns to its nest, the pheromone amount on each path changes as follows, with τ_{ij} the amount of pheromone on edge (i, j):

τ_{ij} <- (1 − ρ) τ_{ij} + ρ Δτ_{ij},     (7)

where ρ is a real number within (0, 1). On the right-hand side there are two parts: the first models evaporation, with τ_{ij} the previous amount of pheromone, while in the second part Δτ_{ij} is the accumulation (deposit) term, which can be derived from the cost of the edge. The left-hand side of (7) determines the amount of pheromone in the next iteration. Considering this process of pheromone change along a complete path, the question arises of how a path is built according to the pheromone. Each step of constructing a path divides the jobs into a scheduled set and an unscheduled set J, so in the i-th construction step of the k-th ant the next job j is selected as follows:

j = argmax_{j ∈ J} {(τ_{ij})^α (η_{ij})^β}   if q ≤ q_0,
j = c                                        otherwise,     (8)

where α and β control the emphasis placed on the old pheromone τ_{ij} and on the edge cost heuristic η_{ij}, q_0 is a predefined number in (0, 1), q is chosen randomly at each step, and c is selected as follows: for j ∈ J, the probability of choosing the direction i to j is

p(i, j) = (τ_{ij})^α (η_{ij})^β / Σ_{j ∈ J} (τ_{ij})^α (η_{ij})^β.     (9)

Finally, the pheromone along the best solution found in each iteration is reinforced, in order to put appropriate emphasis on promising answers.
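Equations (7)-(9) can be sketched as follows; this is an illustration with our own names, and the defaults for ρ, α, β, and q_0 are assumptions, not the paper's tuned settings:

```python
# Sketch of the pheromone update (7) and the pseudo-random-proportional job
# choice (8)-(9). Defaults for rho, a, b, q0 are illustrative assumptions.
import random

def update_pheromone(tau, i, j, deposit, rho=0.1):
    """Eq. (7): evaporate the old trail, then deposit on edge (i, j)."""
    tau[i][j] = (1 - rho) * tau[i][j] + rho * deposit

def choose_next(tau, eta, i, unscheduled, rng, a=1.0, b=2.0, q0=0.9):
    """Eqs. (8)-(9): greedy with probability q0, else roulette on p(i, j)."""
    weights = {j: (tau[i][j] ** a) * (eta[i][j] ** b) for j in unscheduled}
    if rng.random() < q0:
        return max(weights, key=weights.get)       # exploitation, Eq. (8)
    total = sum(weights.values())                  # biased exploration, Eq. (9)
    r = rng.uniform(0, total)
    acc = 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j
```

With q_0 = 1 the rule is purely greedy: from node i, the unscheduled job with the largest (τ_{ij})^α (η_{ij})^β is always chosen.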
Ant Colony Algorithm
    initialize(ant_number, iterations)
    pheromone_initialize(τ)
    f <- max_integer
    while (q < iterations) and (non_improved < max)
        for i = 1 to ant_number
            path[i] <- path_construction(i)
        for i = 1 to ant_number
            if cost(path[i]) is better than f
                s* <- path[i]
                pheromone_update()
        non_improved_check()
        global_optima_check()
        pheromone_update()
    end while
END

Fig. 6 Pseudo-code of Ant Colony System

3.4. Genetic Algorithm

The Genetic Algorithm (GA) is a search algorithm based on the conjecture of natural selection and genetics, first introduced by J.H. Holland (1975). The features of the Genetic Algorithm differ from other search techniques in several aspects. First, the algorithm is a multi-path search that explores many peaks in parallel, reducing the possibility of being trapped in a local minimum. Second, the GA works with a coding of the parameters instead of the parameters themselves, which cuts computational time rapidly. Third, the GA evaluates the fitness of each string to guide its search instead of using the optimization function directly. GA operates on a population of individuals: after an initial population is generated randomly or heuristically, the algorithm evolves the population through sequential and iterative application of three operators: selection, crossover, and mutation.

The Genetic Algorithm
    initialize(pop_size, iterations)
    f <- fitness_function()
    p_m <- mutation probability
    init_pop <- initial_population_generator()
    sort_pop <- sort(init_pop)
    s* <- sort_pop(1)
    pop2 <- init_pop
    while (q < iterations) and (z < max_non_developed)
        for i = 1 to pop_size/2
            if rand(0, 1) < p_m
                pop1 <- mutation(mating pool)
            else
                pop1 <- crossover(mating pool)
        pop1(1) <- pop2(1)            // elitism
        pop2 <- pop1
        stopping_criteria_check()
    end while
END

Fig. 7 Pseudo-code of the Genetic Algorithm

A new generation is formed at the end of each iteration. The selection operator serves the elitism concept.
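The loop of Fig. 7 can be sketched for a permutation-encoded sequence as follows. This is a minimal illustration: the order-crossover and swap-mutation operators, the tournament size, and the parameter defaults are our assumptions rather than the paper's exact setup:

```python
# A minimal permutation GA following the select / crossover / mutate loop of
# Fig. 7 with elitism. Operators and defaults are illustrative choices.
import random

def ga(cost, n, pop_size=30, pm=0.2, iters=100, seed=0):
    rng = random.Random(seed)

    def ox(p1, p2):
        """Order crossover: keep a slice of p1, fill the rest in p2's order."""
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        rest = [g for g in p2 if g not in child]
        for k in range(n):
            if child[k] is None:
                child[k] = rest.pop(0)
        return child

    def mutate(p):
        i, j = rng.sample(range(n), 2)     # swap mutation (diversification)
        p[i], p[j] = p[j], p[i]
        return p

    def tournament(pop):
        return min(rng.sample(pop, 3), key=cost)

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(iters):
        elite = min(pop, key=cost)         # elitism: best survives unchanged
        children = []
        for _ in range(pop_size - 1):
            if rng.random() < pm:
                children.append(mutate(tournament(pop)[:]))
            else:
                children.append(ox(tournament(pop), tournament(pop)))
        pop = [elite] + children
    best = min(pop, key=cost)
    return best, cost(best)
```

Here cost would be the (defuzzified) make-span of Section 2.2, so that fitter sequences are those with smaller C_max.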
It means every new population is at least as good as the previous population; this is achieved by copying the best solution of each population exactly into the next generation [9]. Crossover is applied by selecting two parents, mainly according to their fitness function values. Reproduction happens in a mating pool containing the parents, and in each crossover the offspring inherit from their parents. Several attitudes for selecting the parents who perform the crossover exist in the literature, such as roulette wheel, random selection, tournament, and scaling selection probability. The mutation operator serves to escape local optima; it is applied with a very low probability so as not to interrupt the convergence of the algorithm. Crossover and mutation follow intensification and diversification, respectively: increasing the crossover probability pays more attention to intensification, while increasing the mutation probability produces more diversification. By performing these operators in each iteration, a new population is generated and the algorithm continues until it reaches convergence or a stopping criterion. Figure 7 shows the pseudo-code of GA.

Note: To make sure the comparison is held under equal conditions, the authors performed the simple form of each algorithm, to give credit to the whole analysis. The machine used for this article was: CPU Pentium T4300, RAM 4 GB (Matlab, version R2009b).

4. Computational Results

4.1. Example

Table 1 shows randomly produced setup times for a problem with 7 jobs, which was solved by total enumeration (7! = 5040); it revealed the sequence "1, 6, 4, 7, 3, 5, 2" with optimal objective value 40. The optimal value is defuzzified in order to evaluate and compare the fuzzy objectives within the search structure.

Table 1: Randomly produced fuzzy setup times.
(Columns: modes of the fuzzy setup times, α's (left spreads), and β's (right spreads); the table entries are not legible in the source.)

In this part, the authors illustrate the meta-heuristic methods through some standard, classified instances: examples with 52, 100, and 200 cities were taken from J.E. Beasley (1990). The examples taken from the OR-Library are crisp, so, to make them suitable for this article, they were fuzzified according to the attitude above. The results are then drawn into tables to permit a comprehensive analysis; Table 2 shows the results of the methods used on the fuzzy TSPs.

4.2. Hypotheses of Table 2

i. GA population sizes are, for 52 cities: 100; for 100 cities: 200; for 200 cities: 400. Pm = 0.6, Pc = 0.3.
ii. The tournament method was chosen for parent selection.
iii. In ACO, the ant numbers are, for 52 cities: 50; for 100 cities: 100; for 200 cities: 200.
iv. Tabu list sizes are, for 52 cities: 100; for 100 cities: 200; for 200 cities: 400.
v. The cooling coefficient in SA equals 0.65, and the initial temperature is 100.
vi. Iterations: 200; maximum unimproved tolerance: 20.

Table 2: Computational results of TS, ACO, SA, and GA for the selected instances (columns: objective values and CPU times per method and instance size; the table entries are not legible in the source).
* In SA, due to the parallel system, the maximum iteration count is used at each level and part, and the total number of iterations is computed cumulatively.

As described above, according to Table 2, for the 52-city problem SA operates better than the others in average and best objective values, as well as in consumed computational time. After SA, GA performs well on the objective function. Not only does SA work fastest for the 52-dimension problem, it also reveals solutions of apparently higher quality. For 100 cities, SA harvests all the records too, working convincingly in every aspect (average result, best result, and time).
Finally, for the large-size problems, TS and GA operate similarly, while SA still works better and faster than both. Moreover, as the problem size increases, the computational-time growth of SA and TS is much more acceptable. Table 3 shows the ranking of the methods over several aspects.

Table 3: Ranking of methods by best solution, average solution, and CPU time for the 52-, 100-, and 200-city problems (the table entries are not legible in the source).

4.3. Analysis & Parameter Tuning

In this section, a comprehensive analysis is held for a better understanding of the characteristics of the algorithms used. These analyses support an in-depth study of the behavior of each method and its tuning. To perform an acceptable comparison between the algorithms, all of them should be tuned if we want to compare them neutrally; thus a satisfying tuning must be done, finding which values tune the parameters appropriately, before a wise comparison for adjusting the search methods can be held.

4.3.1. Tabu Search

Tabu search is an attitude based on the forbidden (tabu) list and its form, through whose control the rest of the search is handled. In tuning TS, the tabu list size and the size of the running list (i.e., the number of individuals selected from the current solution) are guessed to be the most effective parameters, so Table 4 shows a test focused on them. The tabu list acts as an automatic adjuster and tuner during the search iterations and is revised whenever its length exceeds a predefined size, so the tabu list is kept up to date throughout the procedure by the tabu-update and illumination functions.

Fig. 8 Objective behavior under TS inspection (best result versus iteration)

In addition, Figure 8 illustrates the behavior of the objective function under TS exploration (100 cities, tabu list: 50, running list: 50). As can be seen in Table 4, from the first rows the method's performance progresses toward the middle, but close to the final rows a lack of improvement, or even a setback in efficiency, appears. According to the concept of tabu list size, there is a relation between the tabu list size and the output results in Table 4: a short tabu list can interrupt convergence and limit exploration, while a long tabu list can cause premature convergence. In addition, this may happen because one objective value corresponds to several solutions.

Table 4: Parameter tuning of TS for the 52-city problem (columns: tabu list size, best solution, average solution over 5 runs, consumed time, average iterations; the table entries are not legible in the source).

4.3.2. Ant Colony System

The ant algorithm is based on a very clever natural intelligence explored in recent decades; this chemical way of finding paths by ants has penetrated the problem-solving world. During problem solving, several parameters take effect on the final result, such as the number of ants, the initial pheromone, and the evaporation pace, but the number of ants seems to be the most important factor. Therefore, an analysis of the number of ants and its effects on the output, iterations, and computational time is pointed out in Table 5; moreover, Figure 9 shows the behavior of the objective under the ants' search. This part analyzes the effect of escalating the number of ants on the objective result and the consumed CPU time. Increasing the number of ants brings some improvement in the values, but beyond one threshold a further increase is no longer logical: from 100 ants onward, the improvement in the result is not worth the increase in consumed CPU time.
Therefore, an integration of 50 ants with a well-tuned iteration number can handle the problem productively.

Table 5: Ant Colony System behavior for the 52-city problem ("–" marks ant counts not legible in the source).

Ants   Best result   Ave. result   CPU time (s)   Ave. iteration*
 5       4242.10       4755.01        56.44          176.4
 –       4224.60       4837.58        65.37          175
 –       4229.98       4774.83        69.79          159.4
 –       4218.38       4727.59        82.34          163.8
10       4373.65       4742.82       114.08          182.1
12       4093.43       4504.82       140.55          185.5
 –       3557.47       4399.60       176.82          199
 –       3969.27       4469.55       182.43          180.7
 –       3926.84       4333.22       308.35          246.8
25       4036.45       4360.24       344.21          220
30       4130.65       4463.94       389.16          208.4
 –       3910.81       4153.24       766.63          250
 –       3905.22       4105.28      1575.47          250
 –       3981.26       4024.41      1878.97          250
 –       3827.20       4075.55      2256.87          247.4

Fig. 9 Ant Colony System behavior (number of ants: 5, problem size: 52; best objective versus iteration)

4.3.3. Simulated Annealing

Annealing denotes the procedure of gradually cooling down a melted solid, so that the cooling schedule arranges and aligns the molecules with better discipline. This physical intelligence can be applied by suitable formulation and coding, followed by an expedient tuning. Some integral factors are the initial temperature and the cooling pace; Table 6 gives an analysis of these factors, and an example of SA search behavior is illustrated in Figure 10.

Table 6: Analysis of SA under cooling coefficient and initial temperature for 52 cities.

Cooling coeff.   Init. temp.   Best result   Ave. result   CPU time (s)   Ave. iteration
0.5                    0         3031.05       3234.70        94.40         13921.3
0.5                   10         2899.61       3299.88        88.73         13127.9
0.5                   25         3021.34       3323.76        93.69         13257.1
0.5                   50         3071.57       3204.79        90.29         13156.1
0.5                  100         2892.32       3137.84        86.25         12726.8
0.5                  500         2892.83       3191.76        82.21         12061.2
0.5                2E+03         2992.15       3249.58        77.42         11455.2
0.5                5E+03         3129.15       3303.55        79.41         11616.7
0.5                1E+04         3107.95       3433.71        77.46         11455
0.5                5E+04         3216.45       3472.17        77.60         11505.7
0.5                1E+05         3275.61       3551.64        78.50         11656.4
0.5                1E+07         3421.32       3537.96        76.21         11571.6
0.05                 100         2996.32       3266.82        79.05         11607.8
0.1                  100         2858.62       3223.21        88.99         13117.6
0.2                  100         2929.85       3228.98        85.87         12734.6
0.3                  100         2922.75       3057.98        86.69         12685.9
0.4                  100         2871.28       3211.72        86.60         12834.9
0.5                  100         2976.01       3221.38        87.76         12967.8
0.6                  100         2900.95       3134.64        90.89         13374.4
0.65                 100         2630.40       3089.18        83.47         12319.6
0.7                  100         2828.12       3102.42        88.03         12937.6
0.8                  100         2736.53       3174.83        88.09         13094.2
0.9                  100         2954.19       3179.97        80.16         11973.6
1                    100         3651.23       3922.16        56.16          8433.6

Fig. 10 SA search behavior for 52 cities (best result versus level)

According to Table 6, after testing the search method over several parameters, it seems that increasing the initial temperature from zero degrees improves the final output; however, this progression stops at one threshold (100 degrees), after which the condition worsens. So 100 degrees is an appropriate initial temperature. On the other side, the cooling procedure is a significant element: increasing the cooling coefficient up to 0.65 raises efficiency, but beyond 0.65 inefficiency grows, and the condition keeps worsening up to the value 1.

4.3.4. Genetic Algorithm

The genetic algorithm stands on the intelligence existing in societies. A society of beings improves by producing generations, and the well-fitted genes of each generation can enter the mating pool and produce the next. In this procedure, the chances of mutation, crossover, and reproduction, together with the population size, adjust the searching properties.
In Table 7, a comprehensive analysis of the GA's characteristics and its behavior is given; the mutation probability and the population size seem to be the most effective parameters.

Table 7: GA operation by mutation probability and population size (columns: mutation probability, population size, best result, average result, CPU time, average iterations; the table entries are not legible in the source).

From Table 7 it can be pointed out that increasing the population size improves the final result, even though beyond one level (100 members) this increase shows an extraordinary accumulation of computational time; thus the iteration count, mutation probability, and number of unimproved iterations may need retuning. In addition, it can be seen that the mutation probability is in some way crucial for escaping local optima, though a large value might postpone acceptable convergence.

In Table 8, the article tries to perform a deterministic examination of the parent-selection attitudes. First is the roulette-wheel method: it is probabilistic and assigns each individual a weight corresponding to its fitness function value, which defines the member's chance of being chosen. Then random selection is applied, based on a purely random choice. The tournament method selects a random sub-population; a competition among the competitors (selected individuals) ranks them, and the best are separated.
Finally, scaling selection probability operates with weights as the roulette wheel does, but with a changing emphasis on the differences between solutions: it follows the linear formula af + b, where a sets the emphasis, f is the fitness value, and b is a moderating constant. We see that tournament and scaling are the better operators in both average and best results, but tournament also makes the algorithm much faster than its rival, which gives it the overall advantage.

Table 8: Analysis of parent-selection strategies (roulette wheel, random, tournament, scaling), comparing best result, average result, CPU time, and iterations.

The analysis carried out here for the 52-city problem could naturally be repeated for the other instances, but because of the similarity of the results, and to keep the article general, we omit them.

5. Conclusion
The traveling salesman problem is one of the most famous classes of OR problems: a salesman travels to several cities, and his goal is to minimize the total traveled distance. The concept has been carried over to several other subjects, including sequence-dependent setup times in scheduling. Under fuzzification, this field covers realistic problems that managers and decision makers actually face. The fuzzy distance between two points is taken to be a triangular fuzzy number because of its characteristics and nature. The TSP is known to be NP-hard, so applying an exact algorithm to obtain the absolute optimum is inefficient because of time and space constraints; hence the significance of near-optimal algorithms. Meta-heuristics are heuristics of a more general kind that can be applied to many problems through suitable formulation and coding. Here, Ant Colony System (ACO), Genetic Algorithm (GA), Tabu Search (TS), and Simulated Annealing (SA) have been explained and compared. GA and ACO spring from natural intelligence.
TS is based on a tabu list, escaping already-visited areas toward the vast unexplored region. Finally, SA is related to physical congealing (crystallization) and offers a chance of accepting a worse solution as well as a better one.

In this article, to stay neutral and avoid favoring any method, we used the normal, common form of each approach. Although several standard problems of small, medium, and large size were used to obtain reliable results, fuzzy benchmark instances could not be found, so real instances were fuzzified to suit the article. According to the results, SA's outputs are much more acceptable in both best and average results, and it also runs fastest. After SA, on 52 cities, GA, TS, and ACO perform progressively worse, in that order. It can be concluded that SA is a fast and efficient method for fuzzy TSP problems. (The following conclusions are derived from the 52-city problem.) From the analysis and parameter tuning, SA's effectiveness has a direct relation with the initial temperature up to 100 and with the cooling coefficient up to 0.65. In GA, increasing the population up to 100 members yields considerable improvement, but beyond that threshold the CPU time becomes disastrous; furthermore, a mutation probability of about 0.6 helps the algorithm perform well. In ACO, choosing the number of ants (the path explorers) wisely, about 50 ants, gives the best efficiency, whereas too many ants even out the choice probabilities of the next steps by disturbing evaporation and the pheromone update. Finally, for TS, increasing the tabu memory up to 100 aids progress, but beyond this threshold it prevents escaping local optima and discards otherwise useful solutions.

6. Future Research
For future research we suggest working on other methods, such as magnetic search, immune search, and gut-feeling search as newer methods, and comparing them with this article. Working on vehicle routing, a more general form of the TSP, and on multi-objective versions could be an interesting and suitable research area. Finally, hybrid algorithms can take advantage of their sub-algorithms and may prove efficient.

Acknowledgments
The authors thank the Research Center of Algebraic Hyperstructures and Fuzzy Mathematics, Babolsar, Iran and the National Elite Foundation, Tehran, Iran for their support. The authors also appreciate the anonymous referee's suggestions and comments, which improved an earlier version of this work.

References
1. Baker K R, Trietsch D (2009) Principles of sequencing and scheduling. New Jersey: John Wiley and Sons, Inc
2. Bontoux B (2008) Ant colony optimization for the traveling purchaser problem. Computers and Operations Research 35: 628-637
3. Beasley J E (1990) OR-Library: distributing test problems by electronic mail. Journal of the Operational Research Society 41: 1069-1072
4. Dorigo M, Maniezzo V, Colorni A (1991) Positive feedback as a search strategy. Technical Report 91-016, Politecnico di Milano, Italy
5. Choobineh F F, Mohebbi E, Khoo H (2006) A multi-objective tabu search for a single-machine scheduling problem with sequence-dependent setup times. European Journal of Operational Research 175: 318-337
6. Glover F (1986) Future paths for integer programming and links to artificial intelligence. Computers and Operations Research 13(8): 533-549
7. Holland J H (1975) Adaptation in natural and artificial systems. Cambridge, MA: MIT
8. Kirkpatrick S, Gelatt C D, Vecchi M P (1983) Optimization by simulated annealing. Science 220: 671-680
9. Kwang Y L, El-Sharkawi M A (2008) Modern heuristic optimization techniques. New Jersey: IEEE Press, John Wiley and Sons
10. Xing L N, Chen Y W, Yang K W, Hou F, Shen X S, Cai H P (2008) A hybrid approach combining an improved genetic algorithm and optimization strategies for the asymmetric traveling salesman problem. Engineering Applications of Artificial Intelligence 21: 1370-1380
11. Marinakis Y, Migdalas A (2005) A hybrid genetic-GRASP algorithm using Lagrangean relaxation for the traveling salesman problem. Journal of Combinatorial Optimization 10: 311-326
12. Meer K (2007) Simulated annealing versus metropolis for a TSP instance. Information Processing Letters 104(6): 216-219
13. Mouhoub M, Wang Zh (2006) Ant colony with stochastic local search for the quadratic assignment problem. Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence 6: 127-131
14. Saidi Mehrabad, Pahlavani (2009) A fuzzy multi-objective programming for scheduling of weighted jobs on a single machine. International Journal of Advanced Manufacturing Technology 45: 122-139
15. Stecco G, Cordeau J, Moretti E (2009) A tabu search heuristic for a sequence-dependent and time-dependent scheduling problem on a single machine. Journal of Scheduling 12: 3-16
16. Ugur A, Aydin D (2009) An interactive simulation and analysis software for solving TSP using Ant Colony Optimization algorithms. Advances in Engineering Software 40: 341-349
17. Zadeh L A (1965) Fuzzy sets. Information and Control 8: 338-353

Solving TSP by Considering Processing Time: Meta-Heuristics and Fuzzy Approaches

Fuzzy Information and Engineering, Volume 3 (4): 20, Dec 1, 2011


Publisher: Taylor & Francis. Copyright: © 2011 Taylor and Francis Group, LLC. ISSN: 1616-8666. eISSN: 1616-8658. DOI: 10.1007/s12543-011-0091-8

Khaviari () Science and Technology University of Mazandaran, Babolsar, Iran email: khaviari@ustmb.ac.ir

The traveling salesman problem (TSP) finds a route for a salesman who starts from a home location, visits a prescribed set of cities, and returns to the original location, such that the total distance traveled is minimized and each city is visited exactly once. In general, the TSP represents a typical 'hard' combinatorial optimization problem. The TSP is known under various names in many fields of study; e.g., the mathematician Karl Menger called it the messenger problem. In scheduling, the TSP is used to model a sequence-dependent setup-time problem on a single machine. In many realistic problems, setup times depend on the type of job just completed as well as on the type of the job about to be processed. In that situation it is not valid to absorb the setup times of jobs into their processing times, and an explicit modification must be made. The time interval in which job j occupies the machine is expressed as S_{ij} + p_j, where i is the job that precedes j in the current sequence, S_{ij} is the setup time required for job j after job i is completed, and p_j is the direct processing time required to complete job j. In the basic single-machine problem, C_max is a constant. With sequence-dependent setup times, however, C_max depends on which sequence is chosen, as follows [1]:

C_{[1]} = F_{[1]} = S_{0,[1]} + p_{[1]},
C_{[2]} = F_{[2]} = F_{[1]} + S_{[1],[2]} + p_{[2]},
...
C_{[n]} = F_{[n]} = F_{[n-1]} + S_{[n-1],[n]} + p_{[n]} = C_max,    (1)

where state zero corresponds to an initial state, usually an idle state, and C_{[i]} is the completion time of the i-th job in the sequence. Also, if we define state [n+1] as a terminal state (perhaps identical to state 0), then C_max becomes [1]:

C_max = F_{[n]} + S_{[n],[n+1]} = \sum_{j=1}^{n+1} S_{[j-1],[j]} + \sum_{j=1}^{n} p_j.    (2)

The second summation is constant; therefore if the first part is minimized, the whole summation is optimized.
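Equation (2) is straightforward to evaluate for a given sequence. The sketch below is our own illustration (the helper name and the 3-job instance are hypothetical): it computes C_max from a setup-time matrix and processing times, treating index 0 as the idle state.

```python
def makespan(sequence, setup, proc):
    """C_max for a job sequence on one machine with sequence-dependent setups.

    setup[i][j] = setup time before job j when job i precedes it
    (index 0 is the initial idle state; jobs are numbered 1..n),
    proc[j] = processing time of job j.
    """
    total, prev = 0, 0  # start in the idle state
    for j in sequence:
        total += setup[prev][j] + proc[j]
        prev = j
    return total

# hypothetical 3-job instance
setup = [[0, 2, 4, 1],
         [0, 0, 3, 2],
         [0, 5, 0, 2],
         [0, 1, 6, 0]]
proc = [0, 7, 4, 5]  # proc[0] unused
print(makespan([1, 3, 2], setup, proc))  # (2+7) + (2+5) + (6+4) = 26
```

Since the processing-time sum is sequence-independent, minimizing the setup part alone optimizes the whole expression, which is exactly the TSP structure exploited in the text.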
This sum represents the total non-productive time in the full sequence, beginning and ending in the idle state. After this brief characterization of the TSP, it must be said that crisp data are rarely found in real-life problems, so exploring the properties of the TSP over uncertain numbers is important. The TSP is NP-hard; however, state-of-the-art algorithms are capable of solving very large problems, e.g., problems with thousands of cities.

2. Problem Modeling
As explained above, we wish to minimize C_max together with the average satisfaction grade.

2.1. Characterizing the Problem's Fuzzy Numbers
Fuzzy sets were proposed by [17]. A fuzzy set A can be called a fuzzy number when it has the following properties:

(a) It is normal, i.e., ∃x ∈ R : μ(x) = 1.
(b) It is a convex set.
(c) Support(A) is bounded.
(d) A_α for α ∈ (0, 1] is a closed interval, i.e., all α-cuts are closed subsets of the real numbers.

A fuzzy number is named after the shape of its membership function, e.g., triangular and trapezoidal fuzzy numbers; both are used in this article, so we illustrate them here. First, the membership function of a triangular fuzzy number is

μ_Ã(t) = { 1 − (a − t)/α,  a − α ≤ t ≤ a;
           1 − (t − a)/β,  a < t ≤ a + β;
           0,              otherwise,    (3)

and Figure 1 shows its shape.

Fig. 1 Shape of a triangular fuzzy number

The general form of a trapezoidal fuzzy number's membership function is

μ_Ã(t) = { (t − a + α)/α,  a − α ≤ t ≤ a;
           1,              a < t ≤ b;
           (b − t + β)/β,  b < t ≤ b + β;
           0,              otherwise.    (4)

Fig. 2 General shape of a trapezoidal fuzzy number

Note: The point whose membership value equals 1 is called the mode of a triangular fuzzy number. To model the TSP in the fuzzy approach, the special characteristics of the fuzzy numbers must first be prescribed.
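The triangular membership function (3) can be written directly; this is our own sketch, with the mode a and the left/right spreads α, β as in the text:

```python
def tri_membership(t, a, alpha, beta):
    """Membership grade of t in the triangular fuzzy number (a, alpha, beta),
    following equation (3): linear rise on [a-alpha, a], linear fall on
    (a, a+beta], and zero elsewhere."""
    if a - alpha <= t <= a:
        return 1 - (a - t) / alpha
    if a < t <= a + beta:
        return 1 - (t - a) / beta
    return 0.0

print(tri_membership(5, 5, 2, 3))  # the mode has grade 1.0
print(tri_membership(4, 5, 2, 3))  # halfway up the left side: 0.5
```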
As was said before, it is better to consider setup times and processing times as fuzzy numbers.

1) Fuzzy setup times are supposed to be triangular; many other models exist, such as trapezoidal and normal, but the triangular one seems most suitable.
2) To define the fuzzy processing time of job j, the job is assigned an integer processing time p_j drawn uniformly from [1, 100]. Then α and β are defined as random integers in the interval [0, 0.1 × p_j] [14].
3) To characterize the fuzzy setup time (S_ij, α, β), the setup time of job j preceded by job i in the sequence is a randomly selected integer in [1, 0.1 × n], where n is the number of jobs, and α, β are randomly selected integers in [0, 0.2 × S_ij].

2.2. Minimizing Makespan (Objective Function)
The makespan (C_max) formula was shown before, but it can also be written as:

C_max = \sum_{i=1}^{n+1} S_{[i-1],[i]} + \sum_{i=1}^{n} p_i.    (5)

3. Artificial Intelligence Tools
Several heuristic and meta-heuristic tools have evolved in the last decades by drawing inspiration from natural intelligence; these are the so-called artificial intelligence methods. They facilitate solving optimization problems that were previously difficult or impossible to solve by exact methods. This article illustrates some of the most powerful and famous methods proposed in recent decades: Simulated Annealing, Tabu Search, Ant Colony System, and Genetic Algorithm. Recently, these new heuristic tools have been combined among themselves, with knowledge elements, and with more traditional approaches such as statistical analysis to solve extremely challenging problems, and their applications appear in many case studies. In particular, for the Genetic Algorithm, L.-N. Xing and co-workers (2008) improved a hybrid GA with optimization strategies for asymmetric TSPs, and Y. Marinakis and A. Migdalas (2005) proposed a GA hybridized with a randomized greedy search procedure.
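The three generation rules above can be sketched as follows. This is our own illustration of the sampling scheme (the function name is ours); each triple (mode, alpha, beta) denotes a triangular fuzzy number:

```python
import random

def fuzzy_instance(n, rng=random):
    """Random fuzzy processing and setup times following rules 1)-3)."""
    proc = {}
    for j in range(1, n + 1):
        p = rng.randint(1, 100)                       # integer p_j ~ U[1, 100]
        proc[j] = (p,
                   rng.randint(0, int(0.1 * p)),      # left spread alpha
                   rng.randint(0, int(0.1 * p)))      # right spread beta
    setup = {}
    for i in range(0, n + 1):                         # 0 is the idle state
        for j in range(1, n + 1):
            if i == j:
                continue
            s = rng.randint(1, max(1, int(0.1 * n)))  # mode of S_ij
            setup[i, j] = (s,
                           rng.randint(0, int(0.2 * s)),
                           rng.randint(0, int(0.2 * s)))
    return proc, setup

proc, setup = fuzzy_instance(7)
assert all(1 <= p <= 100 for p, _, _ in proc.values())
```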
For Tabu Search, F. Choobineh, E. Mohebbi, and H. Khoo (2006) worked on a multi-objective tabu search for single-machine sequence-dependent setup times and formulated it as a mixed-integer program; Stecco, Cordeau, and Moretti (2009) also published on solving the sequence-dependent-setup-time single-machine problem. For Simulated Annealing, Meer (2007) presented a comparison of simulated annealing versus the Metropolis method. Finally, for the Ant Colony System, B. Bontoux (2008) published an article on handling the TSP with ant colonies, and A. Ugur and D. Aydin (2009) described an interactive software tool for applying ant colony optimization to the TSP. Solving problems with these tools offers two major benefits:

(i) The system is very robust, being relatively insensitive to missing and/or noisy data.
(ii) CPU time is much shorter than with more traditional (exact) approaches.

Each of the methods introduced here works locally and iteratively: in each iteration it searches the neighborhood of the current solution and repeats this from the solution chosen in the last iteration. The neighborhood relation is defined as follows: "Two solutions are neighbors if and only if one can be reached from the other by a single move (transposition)."

3.1. Tabu Search
Tabu Search (TS) was originally proposed by Fred Glover (1986). TS is basically a gradient-descent search with a special memory. The memory preserves a number of previously visited states along with a number of states that are considered unwanted. This long-term memory keeps the search away from solutions, or attributes of solutions, that TS visited before. Precisely, in each iteration a predefined number of neighbors of the current best solution are sorted from worst to best according to the objective function; then, if the selected solution is not tabu (banned), or it is tabu but its objective value is better than that of S* (the best solution found), it replaces S*.
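The single-transposition neighborhood and the TS candidate step just described can be sketched as follows. This is our own helper code, not the paper's; the candidate-list size is hypothetical, and the aspiration criterion and tabu-list updates from the text are omitted for brevity:

```python
import random

def random_neighbor(tour, rng=random):
    """One transposition move: swap two random cities of the tour."""
    i, j = rng.sample(range(len(tour)), 2)
    neighbor = tour[:]
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

def best_non_tabu(tour, cost, tabu, n_candidates=20):
    """TS step: sample candidate neighbors, sort them by cost,
    and return the best one that is not on the tabu list."""
    candidates = sorted((random_neighbor(tour) for _ in range(n_candidates)),
                        key=cost)
    for cand in candidates:
        if tuple(cand) not in tabu:
            return cand
    return candidates[0]  # all candidates tabu: fall back to the best one
```

A full TS would then add the chosen move to the tabu list and apply the diversification and intensification steps described below.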
Tabu memory can avoid cycling; yet even when tabu restraints are enforced, cycling may occur if the number of moves k for which a tabu stays active is relatively small. Excessively large values of k, on the other hand, can make the search inefficient, as visits to certain attractive configurations may be delayed [9]. Moreover, to make the algorithm more efficient, some sub-functions have been added: diversification, intensification, and illumination. Diversification keeps the search over a wide area and tries to escape local optima; intensification, applied in each round to stay well focused on the current neighbors, performs a simple local random search around the current solution; illumination controls the tabu-list size. The stop condition of the algorithm is a predefined number of iterations, related to the instance size, its complexity, and the difficulty of the parameters. For computational reasons there is a second stop condition that monitors optimality and the number of non-improved iterations. Figure 3 shows the pseudo-code of TS.

3.2. Simulated Annealing
In statistical mechanics, a physical process called annealing is often performed in order to relax the system to a state with minimum free energy. In the annealing process, a solid in a heat bath is heated by increasing the temperature of the bath until the solid melts into liquid; then the temperature is decreased slowly. In the liquid phase, all particles of the solid arrange themselves randomly; in the ground state, the particles are arranged in a highly structured lattice and the energy of the system is minimal [9]. The Simulated Annealing (SA) search method was introduced by S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi (1983). SA repeatedly tries to find a better solution while keeping a chance of accepting a worse one; accepting worse solutions lets SA escape local optima. This acceptance chance equals exp(−Δ/T), where Δ = f(s') − f(s) compares the candidate s' with the current solution s and T is the current temperature; a candidate with Δ ≤ 0 is always accepted. In every iteration the method chooses neighbors from N(s). The temperature and the neighborhood sample size are reduced in each iteration; the process of reducing the temperature is called the cooling schedule. Figure 4 shows the pseudo-code of simulated annealing. The initial temperature can be obtained in various ways; one famous formula is

T_0 = −μ Δf / ln(φ),    (6)

where μ is the percentage of hill climbing, Δf is the difference between the best and worst possible solutions, and φ is the percentage of accepted worse solutions; μ acts as a constant for tuning the initial temperature.

Tabu Search(f);
  Initializing(N, tabu-length, iteration);
  s* ← initial solution();
  While (q ≤ iteration) && (non-improved iterations < max-non)
    For i = 1 to N
      List ← Random_neighbour_select();
    End for
    Sort(List);
    For i = 1 to N
      If [List(i) is not tabu] or [List(i) is tabu and f(s*) > f(List(i))]
        s* ← List(i);
      End if
    End for
    Tabu_add(s*);
    Intensification(s*);
    Diversification(s*);
    Illumination(tabu long-term memory);
  End while
END

Fig. 3 Pseudo-code of Tabu Search

Simulated Annealing Algorithm(f);
  Initializing: instance(N, T, iteration);
  s* ← initial solution();
  While (q < iteration) && (non-improved iterations < max-non)
    For i = 1 to N
      s' ← random choice from N(s*);
    End for
    For i = 1 to N
      If Δ ≤ 0, accept s';
      Else accept s' with probability exp(−Δ/T);
    End for
    Calculate neighbourhood length(N);
    s* ← s;
    Cooling schedule(T);
  End while
END

Fig. 4 Pseudo-code of simulated annealing
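The acceptance rule and a geometric cooling schedule can be sketched as follows. This is a minimal illustration of our own, assuming generic cost and neighbor functions; the default T0 = 100 and cooling coefficient 0.65 follow the tuning reported later in the paper:

```python
import math
import random

def sa_accept(delta, T, rng=random):
    """Accept an improving move always; a worse one with prob. exp(-delta/T)."""
    return delta <= 0 or rng.random() < math.exp(-delta / T)

def anneal(cost, neighbor, s, T0=100.0, cooling=0.65, iters=200, rng=random):
    """Basic SA loop: propose a neighbor, apply the acceptance rule,
    then cool the temperature geometrically."""
    best = s
    T = T0
    for _ in range(iters):
        cand = neighbor(s)
        if sa_accept(cost(cand) - cost(s), T, rng):
            s = cand
            if cost(s) < cost(best):
                best = s
        T = max(cooling * T, 1e-9)  # geometric cooling schedule
    return best
```

For the fuzzy TSP, cost would defuzzify and sum the fuzzy setup times along the tour, and neighbor would apply the transposition move.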
In addition, a parallel mechanism can be utilized: a system with several levels is established, and at each level several SA runs are launched in parallel, with the best result transferred down to the next level. The number of SA runs per level can be predefined or decreasing, and each level uses the output of the level above it (level one uses the initial individual). Figure 5 illustrates the structure of this parallel SA.

Fig. 5 Structure of the parallel SA

3.3. Ant Colony System
Ant colony search algorithms mimic the behavior of real ants. Ant Colony System Optimization (ACO) is a meta-heuristic approach for combinatorial optimization problems. As is well known, real ants are capable of finding the shortest path from food sources to the nest without using visual cues. They are also capable of adapting to changes in the environment, for example finding a new shortest path once the old one is no longer feasible because of a new obstacle [9]. Ant colony algorithms (ACOAs) have recently been introduced as powerful tools for a diverse set of optimization problems, such as the traveling salesman problem and the quadratic assignment problem [13]. The first ACO algorithm was proposed by Dorigo (1991). ACO belongs to the biologically inspired heuristic algorithms. Something very interesting happens when an ant inspecting for food finds a source: on the path back, the ant releases a special chemical called pheromone. In later iterations, every ant most probably follows the path holding the larger amount of pheromone, and whenever an ant chooses a path, that path's pheromone is increased. During this accumulation, paths that are not used, or are used rarely, lose their pheromone by evaporation, and their chance of being chosen by ants is reduced.
To decrease the chance of falling into local optima, all edges are initialized with a random pheromone in (0, 1). The jobs and their relations form a graph whose vertices are the jobs and whose edges are their relations. When an ant returns to its nest, the pheromone on the edges of its path changes as follows, with T_ij denoting the amount of pheromone on edge (i, j):

T_ij ← (1 − ρ) × T_ij + ρ × ΔT_ij,    (7)

where ρ is a random real number in (0, 1). The right-hand side has two parts: the first models the evaporation process, with T_ij the previous amount of pheromone, and in the second part ΔT_ij is the pheromone accumulation term, which can be taken as the cost of the edge. The left-hand side of (7) determines the amount of pheromone in the next iteration. Given this pheromone-update process along a complete path, the question arises of how a path is built according to the pheromone. Each step of path construction divides the jobs into a scheduled set and an unscheduled set J, so in the construction process of the k-th ant, at the i-th step the next job j is selected as:

j = { arg max_{j∈J} {(T_ij)^α (η_ij)^β},  if q < q_0;
      c,                                  otherwise,    (8)

where α and β are randomly selected in (0, 1) and weigh the old pheromone T_ij against the cost term η_ij of each edge; q_0 is a predefined number in (0, 1), q is chosen randomly at each step, and c is selected as follows: for i ∈ J, the probability of choosing the direction from i to j is

p(i, j) = (T_ij)^α (η_ij)^β / Σ_{j∈J} (T_ij)^α (η_ij)^β.    (9)

Finally, the pheromone on the best solution found in each iteration is reinforced, putting appropriate emphasis on promising answers.
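Transition rules (8) and (9) can be sketched as follows. This is our own illustration (the function name is ours); T and eta are dictionaries over edges, and the exponents follow the notation above:

```python
import random

def next_job(i, unscheduled, T, eta, alpha, beta, q0, rng=random):
    """ACS transition rule: exploit the best edge with probability q0 (rule 8),
    otherwise sample a job with probability proportional to
    (T_ij^alpha) * (eta_ij^beta) (rule 9)."""
    weight = lambda j: (T[i, j] ** alpha) * (eta[i, j] ** beta)
    if rng.random() < q0:                        # exploitation
        return max(unscheduled, key=weight)
    total = sum(weight(j) for j in unscheduled)  # biased exploration
    r, acc = rng.random() * total, 0.0
    for j in unscheduled:
        acc += weight(j)
        if acc >= r:
            return j
    return j
```

With q0 close to 1 the colony mostly follows the strongest pheromone trail; lowering q0 shifts the balance toward exploration.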
Ant Colony Algorithm
  Initializing(ant number, iteration);
  Pheromone-initializing(T);
  f ← max integer;
  While (q < iteration) && (non-improved < max)
    For i = 1 to ant number
      path(i) ← path-construction(i);
    End for
    For i = 1 to ant number
      If path(i) cost is better than f
        S ← path(i);
        Pheromone-update();
      End if
    End for
    Non-improved check();
    Global optima check();
    Pheromone-update();
  End while
END

Fig. 6 Pseudo-code of Ant Colony System

3.4. Genetic Algorithm
The Genetic Algorithm (GA) is a search algorithm based on the conjecture of natural selection and genetics, first introduced by J. H. Holland (1975). The features of the GA differ from other search techniques in several aspects. First, the algorithm is a multi-path search that explores many peaks in parallel, reducing the possibility of being trapped at a local minimum. Second, the GA works with a coding of the parameters instead of the parameters themselves, which reduces computational time considerably. Third, the GA evaluates the fitness of each string to guide its search instead of the optimization function. The GA operates on a population of individuals: after an initial population is generated randomly or heuristically, the algorithm evolves the population through sequential and iterative application of three operators: selection, crossover, and mutation.

The Genetic Algorithm
  Initializing problem(Popsize, iteration);
  f ← fitness function();
  p_m ← mutation probability;
  Int Pop ← Initial-Population-Generator();
  Sort Pop ← Sorting(Int Pop);
  S* ← Sort Pop(1);
  Pop2 ← Int Pop;
  While (q < iteration) && (z < max non-developed)
    For i = 1 to Popsize/2
      If rand(0, 1) < p_m
        Pop1 ← mutation(mating pool);
      Else
        Pop1 ← crossover(mating pool);
      End if
    End for
    Pop1 ← Pop2(1);   // Elitism
    Pop2 ← Pop1;
    Stopping criteria check();
  End while
END

Fig. 7 Pseudo-code of the Genetic Algorithm

A new generation is formed at the end of each iteration. The selection operator serves the elitism concept.
This means every new population is at least as good as the previous one, which is achieved by copying the best solution of each population directly into the next generation [9]. Crossover is applied by selecting two parents, mainly according to their fitness values. Reproduction happens in the mating pool, which contains the parents; in each crossover the offspring inherit from their parents. Several attitudes exist in the literature for selecting the parents that perform the crossover, such as roulette wheel, random selection, tournament, and scaling selection probability. The mutation operator exists in order to escape local optima; it is applied with a very low probability so as not to disrupt the convergence of the algorithm. Crossover and mutation correspond to intensification and diversification respectively: increasing the crossover probability pays more attention to intensification, while increasing the mutation probability produces more diversification. By performing these operators, a new population is generated at each iteration and the algorithm continues until it reaches convergence or a stopping criterion. Figure 7 shows the pseudo-code of the GA.

Note: To make sure the comparison is held under equal conditions, we used the simple form of each algorithm, which lends credit to the whole analysis. The machine used for this article was: CPU Pentium T4300, RAM 4 GB (Matlab, version R2009b).

4. Computational Results
4.1. Example
Table 1 shows randomly produced setup times for a problem with 7 jobs, which was solved by total enumeration (7! = 5040 sequences); this revealed the sequence "1, 6, 4, 7, 3, 5, 2" with optimal objective value 40 (the optimal value is defuzzified in order to evaluate and compare fuzzy objectives inside the search structure).

Table 1: Randomly produced fuzzy setup times.
(The table lists the modes of the fuzzy setup times together with their α (left-hand) and β (right-hand) spreads.)

In this part we illustrate the meta-heuristic methods on standard, classified instances: examples with 52, 100, and 200 cities were taken from J. E. Beasley (1990). The examples from the OR-Library are crisp, so to make them suitable for this article they were fuzzified according to the attitude described above. The results are then drawn into tables for a comprehensive analysis. Table 2 shows the results of the methods used on the fuzzy TSPs.

4.2. Hypotheses for Table 2
i. GA population sizes: 52 cities: 100; 100 cities: 200; 200 cities: 400. P_m = 0.6, P_c = 0.3.
ii. The tournament method was chosen for parent selection.
iii. In ACO, the number of ants is: 52 cities: 50; 100 cities: 100; 200 cities: 200.
iv. Tabu list size: 52 cities: 100; 100 cities: 200; 200 cities: 400.
v. The cooling coefficient in SA is 0.65 and the initial temperature is 100.
vi. Iterations: 200; maximum unimproved tolerance: 20.

Table 2: Computational results of TS, ACO, SA, and GA (objective value and time) for the selected instances.
* In SA, because of the parallel system, the maximum iteration count was used at each level and part, and the total iteration count was accumulated.

As described above, according to Table 2, for the 52-city problem SA does better than the others in average and best results as well as in consumed computational time. After SA, GA performs well on the objective function. Not only does SA work fastest on the 52-city problem, it also finds the most qualified-looking solutions. For 100 cities, SA again takes all the records; it is convincing in every aspect (average result, best result, and time).
Finally, for the large instances, TS and GA operate similarly, while SA still works better and faster than both. Moreover, as the problem size grows, the computational-time growth of SA and TS remains much more acceptable. Table 3 ranks the methods from several aspects.

Table 3: Ranking of the methods by best result, average result, and CPU time for the 52-, 100-, and 200-city problems.

4.3. Analysis & Parameter Tuning
In this section a comprehensive analysis is presented for a better understanding of the characteristics of the algorithms used. These analyses support an in-depth study of the behavior of each method and its tuning. To perform an acceptable, neutral comparison between the algorithms, all of them should be tuned: a satisfying tuning must be done, determining which values tune the parameters appropriately, so that a sound comparison can finally be held between the adjusted search methods.

4.3.1. Tabu Search
Tabu search is built around its forbidden list, through whose control the rest of the search is handled. In tuning TS, the tabu list size and the size of the running list (the number of individuals selected from the current solution) are guessed to be the most effective parameters, so Table 4 shows a test focused on them. The tabu list acts as an auto-adjusting mechanism during the search iterations and is revised when its length exceeds a predefined size, so the tabu list is kept up to date throughout the procedure by the tabu-updating and illumination functions.

Fig. 8 Objective behavior under the TS algorithm (best result per iteration)
In addition, Figure 8 illustrates the behavior of the objective function under TS exploration (100 cities, tabu list: 50, running list: 50). As can be seen in Table 4, performance improves from the first rows to the middle rows, but toward the final rows improvement stalls or efficiency even falls back. This agrees with the concept of tabu list size, and there is a relation between the tabu list size and the output results in Table 4: a short tabu list can interrupt convergence and limit exploration, while a long one can cause premature convergence. In addition, this may happen because a single objective value can correspond to several solutions.

Table 4: Parameter tuning of TS for the 52-city problem (tabu list size; best solution; average solution over 5 runs; consumed time; average iterations). [The body of Table 4 is garbled in this copy and cannot be reliably reconstructed.]

4.3.2. Ant Colony System

The ant algorithm is based on a clever natural intelligence that has been explored only in recent decades; the chemical way ants find paths has penetrated the problem-solving world. While solving a problem, several parameters affect the final result, such as the number of ants, the initial pheromone, and the evaporation pace, but the number of ants seems to be the most important factor. An analysis of the number of ants and its effect on output, iterations, and computational time is therefore reported in Table 5; moreover, Figure 9 shows the behavior of the objective value during the ants' search. This part analyzes the effect of escalating the number of ants on the objective result and the consumed CPU time. Increasing the number of ants does improve the values, but beyond a threshold further growth of the population is no longer logical: from about 100 ants onward, the improvement in results is not worth the increase in CPU time.
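To illustrate the ants-number trade-off discussed above, here is a minimal ant-system sketch; the parameter values (`alpha`, `beta`, evaporation `rho`) are illustrative defaults, not the paper's tuned settings:

```python
import random

def aco_tsp(dist, n_ants=50, iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Basic ant system: pheromone-guided tour construction plus evaporation."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # uniform initial pheromone
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in visited]
                # weight = pheromone^alpha * heuristic (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                j = rng.choices(choices, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best, best_len = tour, length
        # evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for length, tour in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best, best_len
```

Each added ant multiplies the per-iteration construction cost, which is exactly the CPU-time escalation visible in Table 5.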
Therefore, an integration of 50 ants with a well-tuned iteration number can handle the problem productively.

Table 5: Ant Colony System behavior for the 52-city problem. [The ant counts in some rows are lost in this copy and are marked with a dash.]

Ants   Best result   Ave. result   CPU Time(s)   Ave. Iteration
5      4242.10       4755.01         56.44       176.4
—      4224.60       4837.58         65.37       175
—      4229.98       4774.83         69.79       159.4
—      4218.38       4727.59         82.34       163.8
10     4373.65       4742.82        114.08       182.1
12     4093.43       4504.82        140.55       185.5
—      3557.47       4399.60        176.82       199
—      3969.27       4469.55        182.43       180.7
—      3926.84       4333.22        308.35       246.8
25     4036.45       4360.24        344.21       220
30     4130.65       4463.94        389.16       208.4
—      3910.81       4153.24        766.63       250
—      3905.22       4105.28       1575.47       250
—      3981.26       4024.41       1878.97       250
—      3827.20       4075.55       2256.87       247.4

[Figure 9: Ant Colony System behavior (5 ants, 52-city problem): objective value versus iteration.]

4.3.3. Simulated Annealing

Annealing names the procedure of gradually cooling a melted solid, so that the cooling schedule lets the molecules arrange and align themselves with better discipline. This physical intelligence can be applied through suitable formulation and coding, followed by an expedient tuning. The integral factors include the initial temperature and the cooling pace; Table 6 analyzes these factors, and Figure 10 illustrates an example of SA search behavior.

Table 6: Analysis of SA under cooling coefficient and initial temperature for the 52-city problem.
Cooling   Init.    Best      Ave.      CPU       Ave.
coeff.    Temp.    result    result    Time(s)   Iteration
0.5       0        3031.05   3234.70   94.40     13921.3
0.5       10       2899.61   3299.88   88.73     13127.9
0.5       25       3021.34   3323.76   93.69     13257.1
0.5       50       3071.57   3204.79   90.29     13156.1
0.5       100      2892.32   3137.84   86.25     12726.8
0.5       500      2892.83   3191.76   82.21     12061.2
0.5       2E+03    2992.15   3249.58   77.42     11455.2
0.5       5E+03    3129.15   3303.55   79.41     11616.7
0.5       1E+04    3107.95   3433.71   77.46     11455
0.5       5E+04    3216.45   3472.17   77.60     11505.7
0.5       1E+05    3275.61   3551.64   78.50     11656.4
0.5       1E+07    3421.32   3537.96   76.21     11571.6
0.05      100      2996.32   3266.82   79.05     11607.8
0.1       100      2858.62   3223.21   88.99     13117.6
0.2       100      2929.85   3228.98   85.87     12734.6
0.3       100      2922.75   3057.98   86.69     12685.9
0.4       100      2871.28   3211.72   86.60     12834.9
0.5       100      2976.01   3221.38   87.76     12967.8
0.6       100      2900.95   3134.64   90.89     13374.4
0.65      100      2630.40   3089.18   83.47     12319.6
0.7       100      2828.12   3102.42   88.03     12937.6
0.8       100      2736.53   3174.83   88.09     13094.2
0.9       100      2954.19   3179.97   80.16     11973.6
1         100      3651.23   3922.16   56.16     8433.6

[Figure 10: SA search behavior for the 52-city problem (best result versus level) under the cooling and initial-temperature analysis.]

According to Table 6, after testing the search method over several parameters, it appears that increasing the initial temperature from zero degrees improves the final output; however, this progression stops at a threshold (100 degrees), after which results worsen. So 100 degrees is an appropriate initial temperature. On the other side, the cooling procedure is a significant element: increasing the cooling coefficient up to 0.65 raises efficiency, but beyond 0.65 inefficiency grows, and results keep worsening up to the coefficient value 1.

4.3.4. Genetic Algorithm

The genetic algorithm rests on the intelligence existing in societies: any society of beings improves by producing generations. Well-adapted genes in each generation can enter the mating pool and produce the next one. In this procedure the chances of mutation, crossover, and reproduction, together with the population size, adjust the search properties.
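The GA knobs just described (population size, mutation and crossover chance, mating pool) can be sketched with a minimal permutation GA; ordered crossover and swap mutation are common TSP choices assumed here, not necessarily the paper's operators:

```python
import random

def ga_tsp(dist, pop_size=100, generations=200, pm=0.6, seed=0):
    """Minimal permutation GA: tournament selection, ordered crossover, swap mutation."""
    rng = random.Random(seed)
    n = len(dist)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def tournament(pop, k=3):
        # mating pool entry: best of k randomly drawn individuals
        return min(rng.sample(pop, k), key=length)

    def ordered_crossover(p1, p2):
        # copy a slice of p1, fill the rest in p2's order
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b + 1] = p1[a:b + 1]
        rest = [g for g in p2 if g not in child]
        for i in range(n):
            if child[i] is None:
                child[i] = rest.pop(0)
        return child

    pop = []
    for _ in range(pop_size):
        t = list(range(n))
        rng.shuffle(t)
        pop.append(t)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            child = ordered_crossover(tournament(pop), tournament(pop))
            if rng.random() < pm:  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=length)
    return best, length(best)
```

The `pop_size` and `pm` arguments are exactly the two factors examined in Table 7.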
In Table 7, a comprehensive analysis of GA characteristics is given, focusing on its behavior under the mutation probability and the population size, which appear to be the most influential factors.

Table 7: GA performance (best and average objective results, CPU time, average iterations) under varying mutation probability and population size. [The body of Table 7 is garbled in this copy and cannot be reliably reconstructed.]

From Table 7 it can be seen that increasing the population size improves the final result, although beyond one level (100 members) it causes an extraordinary accumulation of computational time; so the iteration count, mutation probability, and number of unimproved iterations may need retuning. It can also be seen that the mutation probability is crucial for escaping local optima, although too large a value can postpone acceptable convergence.

In Table 8, the article performs a deterministic examination of parent-selection attitudes. First, the roulette wheel method is considered: it is probabilistic and assigns each individual a weight corresponding to its fitness, which defines its chance of being chosen. Then random selection is applied, which is purely random. The tournament method selects a random sub-population; the selected individuals then compete, are ranked, and the best are separated.
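The roulette-wheel and tournament schemes just described can be sketched as follows (written for maximization; for TSP the fitness would have to be an inverted tour length, an assumption of this sketch):

```python
import random

rng = random.Random(0)

def roulette(pop, fitness):
    """Roulette wheel: selection probability proportional to fitness weight."""
    weights = [fitness(ind) for ind in pop]
    return rng.choices(pop, weights=weights)[0]

def tournament(pop, fitness, k=3):
    """Tournament: draw k individuals at random, keep the fittest."""
    return max(rng.sample(pop, k), key=fitness)
```

Tournament avoids computing and normalizing weights over the whole population, which is one reason it tends to run faster than its rivals, as observed below.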
Finally, scaling selection is applied, which operates through a weighted process like the roulette wheel but with a changed emphasis on the differences between solutions, i.e., following the linear formula af + b, with a putting the emphasis, f the fitness function, and b a moderating term. Tournament and scaling appear to be the better operators at finding the average and also the best result, but tournament makes the algorithm much faster than its rival, and this point lets it dominate the others.

Table 8: Analysis of the parent-selection attitudes (best result, time, iterations). [The body of Table 8 is garbled in this copy and cannot be reliably reconstructed.]

The analysis held here for the 52-city problem could naturally be repeated for the other instances, but because of their similarity, and to keep the article general, the authors omit them.

5. Conclusion

The traveling salesman problem is one of the most famous categories among classified OR problems: a person who works as a retailer travels to several cities, and his goal is to minimize the entire traveling distance. This concept has been carried over to several subjects, including sequence-dependent setup times in scheduling. Under fuzzification, this field of study reflects the realm of real problems that managers and decision makers face. The fuzzy distance between two points is supposed to be a triangular fuzzy number, owing to its characteristics and nature. The TSP is known to be NP-hard, so applying an exact algorithm to obtain the absolute answer seems inefficient because of time and space constraints; the significance of near-exact algorithms thus proves itself. Meta-heuristic methods are heuristics of a more general kind that can be applied to many problems through suitable formulation and coding. Here, Ant Colony System (ACO), Genetic Algorithm (GA), Tabu Search (TS), and Simulated Annealing (SA) have been explained and compared. GA and ACO spring from natural intelligence.
TS is based on a tabu list and on escaping from visited areas toward a vast, unexpectedly unexplored area. Finally, SA is related to physical congealing (crystallization), which offers a chance of accepting a worse solution as well as a better one. In this article, to preserve neutrality and avoid taking sides in the comparison, the authors have used the normal, common form of each attitude. Although several standard problems of small, medium, and large size have been used to obtain reliable results, fuzzy instances could not be found, so real instances have been fuzzified to suit the article. According to the results, SA's performance is much more acceptable in both best and average results, and it also operates the fastest; after SA, on 52 cities, GA, TS, and ACO perform progressively worse, in that order. It can be concluded that SA is a fast and efficient method for fuzzy TSP problems. (The following observations are derived from the 52-city problem.) According to the analysis and parameter tuning, SA's effectiveness appears to grow directly with the initial temperature up to 100 and with the cooling coefficient up to 0.65. In GA, increasing the population up to 100 members yields considerable improvement, but beyond that threshold the CPU time becomes disastrous; furthermore, a mutation probability of about 0.6 helps the algorithm perform well. In ACO, wisely assigning the number of ants (the path explorers) to 50 yields greater efficiency, whereas too many ants create a nearly uniform chance among the next steps by interfering with evaporation and pheromone updating. Finally, for TS, it can be claimed that increasing the tabu memory up to 100 aids progress, but going beyond that threshold prevents escaping from local optima and discards other solutions with similar objective values.
6. Future Research

For future research, the authors suggest working on other, newer methods, such as magnetic search, immune search, and gut-feeling search, and comparing them with this article. Working on vehicle routing, as a more general form of the TSP, and on multi-objective variants could be an interesting and suitable research area. Finally, hybrid algorithms can take advantage of their sub-algorithms and could prove efficient.

Acknowledgments

The authors thank the Research Center of Algebraic Hyperstructures and Fuzzy Mathematics, Babolsar, Iran, and the National Elite Foundation, Tehran, Iran, for their support. The authors also appreciate the anonymous referee for his suggestions and comments to improve an earlier version of this work.

References

1. Baker K R, Trietsch D (2009) Principles of sequencing and scheduling. New Jersey: John Wiley and Sons
2. Bontoux B (2008) Ant colony optimization for the traveling purchaser problem. Computers and Operations Research 35: 628-637
3. Beasley J E (1990) OR-Library: distributing test problems by electronic mail. Journal of the Operational Research Society 41: 1069-1072
4. Dorigo M, Maniezzo V, Colorni A (1991) Positive feedback as a search strategy. Technical Report 91-016, Politecnico di Milano, Italy
5. Choobineh F F, Mohebbi E, Khoo H (2006) A multi-objective tabu search for a single-machine scheduling problem with sequence-dependent setup times. European Journal of Operational Research 175: 318-337
6. Glover F (1986) Future paths for integer programming and links to artificial intelligence. Computers and Operations Research 13(8): 533-549
7. Holland J H (1975) Adaptation in natural and artificial systems. Cambridge, MA: MIT Press
8. Kirkpatrick S, Gelatt C D, Vecchi M P (1983) Optimization by simulated annealing. Science 220: 671-680
9. Lee K Y, El-Sharkawi M A (2008) Modern heuristic optimization techniques. New Jersey: IEEE Press and John Wiley and Sons
10. Xing L N, Chen Y W, Yang K W, Hou F, Shen X S, Cai H P (2008) A hybrid approach combining an improved genetic algorithm and optimization strategies for the asymmetric traveling salesman problem. Engineering Applications of Artificial Intelligence 21: 1370-1380
11. Marinakis Y, Migdalas A (2005) A hybrid genetic-GRASP algorithm using Lagrangean relaxation for the traveling salesman problem. Journal of Combinatorial Optimization 10: 311-326
12. Meer K (2007) Simulated annealing versus metropolis for a TSP instance. Information Processing Letters 104(6): 216-219
13. Mouhoub M, Wang Z (2006) Ant colony with stochastic local search for the quadratic assignment problem. Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence: 127-131
14. Saidi Mehrabad, Pahlavani (2009) A fuzzy multi-objective programming for scheduling of weighted jobs on a single machine. International Journal of Advanced Manufacturing Technology 45: 122-139
15. Stecco G, Cordeau J, Moretti E (2009) A tabu search heuristic for a sequence-dependent and time-dependent scheduling problem on a single machine. Journal of Scheduling 12: 3-16
16. Ugur A, Aydin D (2009) An interactive simulation and analysis software for solving TSP using Ant Colony Optimization algorithms. Advances in Engineering Software 40: 341-349
17. Zadeh L A (1965) Fuzzy sets. Information and Control 8: 338-353
