Abstract Cloud computing has emerged as an advanced form of distributed computing, grid computing, utility computing and virtualization. Efficient task scheduling algorithms help reduce the number of virtual machines used, which in turn reduces cost and improves the fitness function. Accordingly, a new multi-objective function is proposed that combines load utilization, energy consumption, migration cost and time. Using this objective function, we propose a hybrid algorithm, the Genetic Gray Wolf Optimization algorithm (GGWO), which combines the Gray Wolf Optimizer (GWO) and the Genetic Algorithm (GA). The performance of the algorithm is analyzed using several evaluation measures, with the standard GWO and GA algorithms taken as baselines for the comparative analysis. To strengthen the performance analysis, the evaluation covers five common scientific workflows: LIGO, Montage, Epigenomics, SIPHT and CyberShake. Experiments show that GGWO improves task scheduling over standard GWO and GA, achieving minimum computation time, migration cost and energy consumption with maximum load utilization. 1. INTRODUCTION A collection of interconnected computers provisioned as one or more unified computing resources is known as a cloud. Over recent years, the growth of cloud computing has stimulated the rapid deployment of inter-data-center networks that interconnect geographically distributed data centers to offer high-quality, dependable services [1]. Today, cloud computing has become an efficient paradigm for offering computational capabilities as services on a ‘pay-per-use’ basis [2]. Cloud computing brings conformity and change to the IT industry. With its growing application and promotion, cloud computing offers tremendous opportunities but also confronts many challenges in its development [3, 4].
Recently, cloud computing has risen as a new Internet-based model for enabling convenient, on-demand network access to a shared pool of configurable resources that can be rapidly provisioned and released with minimal management effort or cloud-provider interaction [5]. This technology brings many benefits to the market in terms of time, cost, load balancing and storage. With it, all applications run on a virtual platform and all resources are distributed among virtual machines; each application is distinct and independent. In cloud computing, scheduling is a relatively new concern that has risen to prominence with the growth of the cloud. The essence of scheduling is the mapping of pending tasks to resources on the basis of their attributes, taking care that every available resource is used properly [6–8]. Designing an energy-efficient task scheduler is a basic step in the cloud-assisted approach: it decides which tasks should be executed in the cloud rather than on the mobile devices. Since task allocation on cloud computing resources is a well-known NP-hard problem in its general form, and run-time schedulers face tight time constraints, executing a time-consuming exact optimization algorithm is not appropriate. For a real-time scheduler, it is helpful to devise heuristic procedures that yield imperfect solutions, so as to obtain a scheduling algorithm that runs in polynomial time without performing an exhaustive search [9, 10]. Meanwhile, without considering task scheduling, researchers have explored multicast routing and related resource-allocation schemes for optical networks that can be applied to support inter-data-center communications [1].
Finally, tasks are assigned by the task scheduler to suitable resources in distributed systems, and task scheduling algorithms decide the execution order of these tasks. Because assigning tasks to processors is a challenging problem, various algorithms have been proposed with the goal of minimizing the makespan of applications. Since task scheduling on data centers is an NP-complete problem, various studies have sought near-optimal solutions to it [4, 11]. GWO is a swarm intelligence algorithm inspired by the leadership hierarchy and hunting mechanism of gray wolves. GWO has proved to be competitive with, or better than, other traditional metaheuristics such as differential evolution and particle swarm optimization [12, 13]. Moreover, GWO is a very efficient metaheuristic owing to its high convergence speed and simple mathematical model; it only requires the population size and the maximum number of iterations to be defined [14, 15]. The objective of this paper is to develop optimal task scheduling using a hybrid approach combining GWO and GA in the cloud computing environment. Here, the GA is embedded in the GWO to improve performance and to speed up the optimization process. Four parameters, migration cost, total time, load utilization and energy consumption, are used to achieve the maximum fitness value. Our proposed GGWO hybrid algorithm is presented with the derivation of a multi-objective fitness function and an efficient solution representation, along with the usual GWO and genetic operators. The rest of the paper is organized as follows: a brief review of the literature on task scheduling in cloud computing is presented in Section 2. The problem formulation of our proposed method is given in Section 3.
The proposed hybrid multi-objective optimization technique for task allocation in virtual machines using GGWO is given in Section 4. The experimental results and the performance evaluation are discussed in Section 5. Finally, the conclusions are summarized in Section 6. 2. LITERATURE SURVEY The literature presents several works related to task scheduling in cloud computing; here, we review some recent ones. Zuo et al. [16] proposed a resource-cost model that characterizes the demand a task places on a resource in detail and reflects the relationship between the user’s resource costs and budget costs. Based on this model, they proposed a multi-objective optimization scheduling method that treats the makespan and the user’s budget costs as constraints of the optimization problem, achieving multi-objective optimization of both performance and cost. An improved ant colony algorithm was proposed to solve this problem, with two constraint functions used to evaluate and provide feedback on performance and budget cost. Hua et al. [17] formulated the task scheduling problem and then advanced a task scheduling scheme to obtain optimal resource utilization, task completion time, average cost and average energy consumption; to maintain particle diversity, an adaptive acceleration coefficient is adopted. Similarly, Liu et al. [18] developed energy-efficient scheduling of tasks in which the mobile device offloads suitable tasks to the cloud through a Wi-Fi access point. The scheduling aims to minimize the energy consumption of the mobile device for one application under a constraint on total completion time.
This task scheduling problem is reformulated as a constrained shortest-path problem, and the LARAC technique is applied to obtain an approximate optimal solution. In [19], Zhu et al. developed a resource model of the cloud data center and a dynamic power model of the physical machine, and then proposed a three-dimensional virtual resource scheduling method for energy saving in cloud computing (TVRSM), in which the virtual resource scheduling process is divided into three stages: virtual resource allocation, virtual resource scheduling and virtual resource optimization; for the distinct objective of each stage, they designed three different algorithms. Lin et al. [20] investigated the problem of scheduling tasks in the MCC environment. More precisely, the scheduling problem involves the following steps: (i) determining the tasks to be offloaded onto the cloud, (ii) mapping the remaining tasks onto local cores in the mobile device, (iii) determining the frequencies for executing local tasks and (iv) scheduling all tasks on the cores and the wireless communication channels such that the task-precedence requirements and the application completion-time constraint are satisfied while the total energy dissipation in the mobile device is minimized. Moreover, Li [21] designed and examined the performance of heuristic algorithms that use the equal-speed technique. Pre-power-determination and post-power-determination algorithms were developed for both energy-constrained and time-constrained scheduling of precedence-constrained parallel tasks on multiple many-core processors with continuous or discrete speed levels. Additionally, Abdullahi et al. [22] presented a Discrete Symbiotic Organism Search (DSOS) algorithm for the optimal scheduling of tasks on cloud resources. Symbiotic Organism Search (SOS) is a recently developed metaheuristic optimization technique for solving numerical optimization problems.
SOS mimics the symbiotic relationships exhibited by organisms in an ecosystem. Simulation results revealed that DSOS outperforms Particle Swarm Optimization, one of the most popular heuristic optimization techniques used for task scheduling problems. In [23], Xu et al. developed two resource pre-allocation algorithms based on the ‘shut down the redundant, turn on the requested’ strategy. First, a green cloud computing model is presented, abstracting the task scheduling problem to a virtual machine placement problem via virtualization technology. Second, the future workloads of the system are predicted: a cubic exponential smoothing algorithm based on a conservative control strategy is proposed, combined with the current state and resource distribution of the system, to forecast the resource demand of the next period of task requests. 3. PROBLEM WITH SOLUTION FRAMEWORK The resource allocation framework is shown in Fig. 1. The cloud provides physical machines and virtual machines behind a public interface within its proprietary infrastructure. Through the User Interface component, application (task) requests are received from the cloud consumer. The Request Manager receives the tasks from the user interface and collects and manages all accepted user requests. The cloud resource pool (CPU, memory and storage) is monitored by the Resource Monitor. The Scheduler component schedules tasks to the cloud so as to maximize the fitness function; this scheduling is constrained by the performance of the virtual machines. The Scheduler collects data from the Request Manager and the Resource Monitor and then decides how to allocate each task to a virtual machine. Figure 1. Architecture of proposed technique.
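As a minimal sketch of the Scheduler’s decision (with hypothetical class and field names, not part of the paper’s implementation), the component can be viewed as choosing, for each incoming task, the virtual machine whose monitored state currently yields the highest fitness:

```java
import java.util.*;

public class SchedulerSketch {
    // Simplified VM state as it might be reported by the Resource Monitor.
    static class VmState {
        final int id;
        final double fitness; // fitness of placing the next task on this VM
        VmState(int id, double fitness) { this.id = id; this.fitness = fitness; }
    }

    // The Scheduler assigns the task to the VM with the maximum fitness value.
    static int schedule(List<VmState> pool) {
        return Collections.max(pool, Comparator.comparingDouble(v -> v.fitness)).id;
    }

    public static void main(String[] args) {
        List<VmState> pool = Arrays.asList(
            new VmState(1, 0.42), new VmState(2, 0.61), new VmState(3, 0.35));
        System.out.println("Task assigned to VM" + schedule(pool));
    }
}
```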
In physical machine (PM) scheduling, two types of problems exist: the routing problem and the sequencing problem. Assigning each task to a corresponding virtual machine (VM) gives the routing problem, while ordering the tasks on the virtual machines, so as to decrease the overall migration cost, total time and energy consumption and increase load utilization, gives the sequencing problem. Let us consider the user tasks Ta, where each task may be executed on any available virtual machine VMi. It is assumed that there are p tasks Ta=(T1,T2,…,Tp), r virtual machines VMi=(VM1,VM2,…,VMr) and s physical machines PMi=(PM1,PM2,…,PMs) in the current cloud computing scheme. A virtual machine in the cloud has a CPU, memory and storage. A task is allocated to one virtual machine based on its performance over the total time, with the given resources assumed to be continuously available; each task must be executed on exactly one VM instance. Task scheduling in cloud computing can then be stated as follows. Maximize FitnessFunction=Max[λ1(1−Tt)+λ2(1−MC)+λ3(LU)+λ4(1−EC)], (1) where Tt is the total time, MC is the migration cost, LU is the load utilization and EC is the energy consumption. Our objective is to maximize this fitness value. In our approach, we take λ1+λ2+λ3+λ4=1 and 0≤λi≤1, ∀i, 1≤i≤4. The objective function is thus based on four parameters, total time, migration cost, load utilization and energy consumption, and each task has different values for them. To formulate the problem, control parameters λ1, λ2, λ3 and λ4 in the range [0, 1] are defined, and the problem is formulated using Equation (1). The first parameter considered is the total time (Tt), the time taken to execute the overall process; the total time taken to allocate a task to a virtual machine should be minimal.
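As a minimal sketch, the weighted sum of Equation (1) can be computed as below; the metric values and weights in the example are illustrative only, assuming all four metrics are already normalized into [0, 1].

```java
public class Fitness {
    // Equation (1): weighted multi-objective fitness.
    static double fitness(double tt, double mc, double lu, double ec,
                          double l1, double l2, double l3, double l4) {
        // The control parameters must satisfy l1 + l2 + l3 + l4 = 1, each in [0, 1].
        assert Math.abs(l1 + l2 + l3 + l4 - 1.0) < 1e-9;
        // Smaller time, migration cost and energy raise the fitness;
        // higher load utilization raises the fitness.
        return l1 * (1 - tt) + l2 * (1 - mc) + l3 * lu + l4 * (1 - ec);
    }

    public static void main(String[] args) {
        // e.g. Tt = 0.3, MC = 0.2, LU = 0.6, EC = 0.4 with equal weights
        System.out.println(fitness(0.3, 0.2, 0.6, 0.4, 0.25, 0.25, 0.25, 0.25));
    }
}
```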
The total time taken to complete the tasks, Tt, is given by the following equation: Tt=∑TR+∑TP+∑TW, (2) where the total time includes the processing time of the tasks TP, the receiving time of the tasks TR and the waiting time TW. The second scheduling parameter is the migration cost (MC): the extra cost of executing the whole task set that arises when execution time grows and costly resources are allocated in place of more efficient ones. The migration cost should also be minimized in order to maximize the fitness function. The total migration cost is calculated using the following equation: MC=(MF+CF)/2, (3) where MF=(1/PM)[∑i=1VMi(Number of movements/Total VM)] and CF=∑i=1VMi[(Cost to run×Memory of task)/(VM×PM)]; the movement factor (MF) and cost factor (CF) are computed to find the minimum migration-cost value. The third parameter of the objective function is load utilization (LU), the process of distributing workloads in a cloud computing environment. It allows enterprises to manage workload by allocating tasks among multiple virtual machines, spreads workloads over numerous computing resources, decreases the costs associated with document-management systems and maximizes the achievable fitness: LU=1/(PMi×VMi)[∑i=1PMi∑j=1VMi(1/3)(CPUUtilized/CPUij+MemoryUtilized/Memoryij+TimeUtilized/Timeij)], (4) where CPUij is the total number of CPUs available in a virtual machine, Memoryij is the total memory available in a virtual machine and Timeij is the total duration of time in the virtual machine. The final parameter is the amount of energy consumed during task scheduling in the cloud computing environment.
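Equations (2)–(4) can be sketched as follows, assuming the per-task time components and the per-machine utilization ratios are supplied as plain arrays; the method and variable names are ours, not taken from the paper’s implementation.

```java
public class Metrics {
    // Eq. (2): total time = receiving + processing + waiting times.
    static double totalTime(double[] tr, double[] tp, double[] tw) {
        double sum = 0;
        for (double t : tr) sum += t;
        for (double t : tp) sum += t;
        for (double t : tw) sum += t;
        return sum;
    }

    // Eq. (3): migration cost as the mean of movement and cost factors.
    static double migrationCost(double mf, double cf) {
        return (mf + cf) / 2.0;
    }

    // Eq. (4): mean over all PM/VM pairs of the averaged CPU, memory
    // and time utilization ratios (each ratio assumed already in [0, 1]).
    static double loadUtilization(double[][] cpuRatio, double[][] memRatio,
                                  double[][] timeRatio) {
        int pm = cpuRatio.length, vm = cpuRatio[0].length;
        double sum = 0;
        for (int i = 0; i < pm; i++)
            for (int j = 0; j < vm; j++)
                sum += (cpuRatio[i][j] + memRatio[i][j] + timeRatio[i][j]) / 3.0;
        return sum / (pm * vm);
    }

    public static void main(String[] args) {
        System.out.println(totalTime(new double[]{1, 2}, new double[]{3}, new double[]{4}));
    }
}
```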
To attain the maximum fitness, energy consumption (EC) is also an important parameter: EC=1/(PMi×VMi)[∑i=1PMi∑j=1VMi(UijDmax+(1−Uij)φijDmax)], (5) where Uij,Dmax∈[0,1] and φij=(1/3)[(CPUUtilized/CPUij)+(MemoryUtilized/Memoryij)+(TimeUtilized/Timeij)]. Equation (1) specifies the objective function of our research: its first term is the total time of the tasks, the second the migration cost, the third the load utilization and the fourth the energy consumption. From this formulation, the problem is a task scheduling one; solving such problems with a mathematical-programming approach would take a large amount of computational time for large problem sizes. In this paper, the above objective function is maximized using the proposed algorithm. The parameters used in the proposed method are given in Table 1. Table 1. Parameters used in task scheduling. Symbol Definition Ta Task a, 1≤a≤p VMi Virtual machine i, 1≤i≤r PMi Physical machine i, 1≤i≤s Tp Number of tasks VMr Number of virtual machines PMs Number of physical machines λ1,λ2,λ3,λ4 Control parameters 4.
TASK SCHEDULING USING HYBRID GGWO The main intention of this paper is to optimize task and resource scheduling using the gray wolf optimizer [24, 25] and to improve the optimization process using a genetic-algorithm-based [26, 27] hybrid approach (GGWO), built on the proposed migration-cost, time, load-utilization and energy-consumption models in the cloud computing environment. To optimize the virtual machine assignment, we use the multi-objective function based on the migration-cost and total-time models of the proposed approach: the migration cost comprises the movement factor and the cost factor, while the total time includes the receiving, processing and waiting times. Good virtual machine scheduling decreases the total running time, migration cost and energy consumption while maximizing load utilization [28, 29]. 4.1. Scheduling optimization model based on multi-objective function Here, we design parallel machine scheduling based on a multi-objective function using the Genetic Gray Wolf Optimizer. The GWO algorithm mimics the leadership hierarchy and hunting mechanism of gray wolves in nature; to improve the performance of the scheme, we combine it with a genetic algorithm (Table 2). Table 2. Parameters described. Parameters used Physical Machine (PM) VM1 VM2 VM3 VM4 VM5 VMN Total time ti1 ti2 ti3 ti4 ti5 tin Migration cost MC1 MC2 MC3 MC4 MC5 MCn Load utilization L1 L2 L3 L4 L5 Ln Energy consumption EC1 EC2 EC3 EC4 EC5 ECn This scheduling method is able to obtain an optimal or suboptimal result with minimum migration cost, total time and energy consumption and maximum load utilization. The step-by-step process of the proposed parallel machine scheduling is explained below; a flowchart of the proposed virtual machine scheduling is shown in Fig. 2. Figure 2. Flowchart for proposed GGWO. Step 1: Solution encoding: In this work, the solution consists of two components: tasks and virtual machines. At first, we randomly assign each task to one virtual machine. For example, consider five tasks and two physical machines (PM1 and PM2), where PM1 hosts two virtual machines (VM1, VM2) and PM2 hosts three (VM3, VM4 and VM5). We take an initial solution (S) over VM1, VM2, VM3, VM4 and VM5 as in the example below. The objective is to schedule these five tasks to the corresponding five virtual machines optimally based on their performance; the further steps are explained below. Solution representation: Each application contains many tasks, so every application can be considered a set of tasks. The tasks are then scheduled on virtual machines, and the solution dimensions are given by D=(VM1,VM2,VM3,VM4,VM5). The position of each virtual machine is set based on its specifications; the position value of each dimension is given by P=(0.1,0.4,0.2,0.9,0.8).
The positions are converted to priorities as shown in Table 3; thus, we obtain a priority sequence of tasks, Priority={1,3,2,5,4}. Table 3. Particle coding and decoding. PM1 PM2 Dimensions VM1 VM2 VM3 VM4 VM5 Position 0.1 0.4 0.2 0.9 0.8 Priority 1 3 2 5 4 Step 2: Fitness calculation: Once the initial solution is generated, the fitness value of each individual is evaluated and stored for future reference. The fitness function is defined by Equation (1): the maximum fitness value corresponds to minimum total time, minimum migration cost, maximum load utilization and minimum energy consumption. Total time, migration cost, load utilization and energy consumption are calculated using Equations (2)–(5): FitnessFunction=Max[λ1(1−Tt)+λ2(1−MC)+λ3(LU)+λ4(1−EC)]. Based on these equations, we calculate the fitness for each virtual machine assignment; the assignment attaining the maximum fitness is taken as the best solution, and the remaining candidates are updated toward it in the search for the best solution. Step 3: Calculating α, β, δ and ω: After calculating the fitness function, we determine α, β, δ and ω. In GGWO, mirroring the social hierarchy of wolves, the alpha (α) is deemed the fittest solution; the second and third best solutions are named beta (β) and delta (δ), respectively, and the remaining candidate solutions are regarded as omegas (ω). Let the best, second best and third best fitness solutions be Sα, Sβ and Sδ.
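The position-to-priority decoding of Table 3 and the leader selection of Step 3 can be sketched as below; the method names are illustrative, and priority is simply the rank of each VM’s position when positions are sorted in ascending order.

```java
import java.util.*;

public class Decode {
    // Rank positions: the smallest position gets priority 1, and so on.
    static int[] toPriorities(double[] pos) {
        Integer[] idx = new Integer[pos.length];
        for (int i = 0; i < pos.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> pos[i]));
        int[] prio = new int[pos.length];
        for (int rank = 0; rank < idx.length; rank++) prio[idx[rank]] = rank + 1;
        return prio;
    }

    // Indices of the three fittest solutions: alpha, beta and delta.
    static int[] topThree(double[] fitness) {
        Integer[] idx = new Integer[fitness.length];
        for (int i = 0; i < fitness.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(fitness[b], fitness[a]));
        return new int[]{idx[0], idx[1], idx[2]};
    }

    public static void main(String[] args) {
        // The positions from Table 3 decode to the priority sequence {1, 3, 2, 5, 4}.
        System.out.println(Arrays.toString(toPriorities(
            new double[]{0.1, 0.4, 0.2, 0.9, 0.8})));
    }
}
```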
Step 4: Encircling prey: The hunting is guided by α, β and δ, and the ω wolves follow these three candidates. In order to hunt, the prey is first encircled: S(t+1)=S(t)+X→⋅K→, (6) K→=|Y→⋅S(t+1)−S(t)|, (7) X→=2a→r1−a→ and Y→=2r2. (8) Step 5: Hunting: We record the three best results achieved so far and require the other search agents to update their positions according to the positions of the best search agents. At each iteration, the new solution S(t+1) is computed using the formulae below: K→α=|Y→1⋅Sα−S|, K→β=|Y→2⋅Sβ−S|, K→δ=|Y→3⋅Sδ−S|, (9) S1=Sα−X→1⋅(K→α), S2=Sβ−X→2⋅(K→β), S3=Sδ−X→3⋅(K→δ), (10) S(t+1)=(S1+S2+S3)/3. (11) It can be seen that the final location lies at a random place within a circle defined by the positions of alpha, beta and delta in the search space. Step 6: Solution update using crossover and mutation: In our work, the crossover and mutation operators of the GA are used to improve the performance of the GWO algorithm and to speed up the optimization process. 4.2. Crossover operation During task scheduling with the GWO algorithm, we treat the better 50% of solutions as the best set and the remaining 50% as the worst set; the worst 50% is updated through crossover to obtain better solutions. For example, five tasks are considered and coded as integer chromosomes as shown in Fig. 2. Suppose the first chromosome selected from the worst population (obtained using GWO) is denoted worst Chromosome 1 (w1) and the second chromosome selected from the worst population is denoted worst Chromosome 2 (w2). The left segment of new Chromosome 1 or new Chromosome 2 is inherited from the corresponding segment of worst Chromosome 1 (0.1, 0.4, 0.6, 0.3, 0.8) or worst Chromosome 2 (0.5, 0.2, 0.9, 0.7, 0.6), respectively. Tasks can be deleted from the topological order and it remains a topological order, so the precedence constraints are not violated.
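The leader-guided update of Steps 4 and 5 (Equations (8)–(11)) can be sketched as follows; the fixed random seed and the way the decaying parameter a is passed in are illustrative assumptions, not details prescribed by the paper.

```java
import java.util.Random;

public class GwoUpdate {
    static final Random RNG = new Random(42);

    // Eq. (8): coefficients X = 2*a*r1 - a and Y = 2*r2, r1, r2 uniform in [0, 1).
    static double coeffX(double a) { return 2 * a * RNG.nextDouble() - a; }
    static double coeffY()        { return 2 * RNG.nextDouble(); }

    // Eqs. (9)-(11): move solution s toward the leaders alpha, beta and delta.
    static double[] update(double[] s, double[] sa, double[] sb, double[] sd, double a) {
        double[] next = new double[s.length];
        for (int d = 0; d < s.length; d++) {
            double s1 = sa[d] - coeffX(a) * Math.abs(coeffY() * sa[d] - s[d]); // via K_alpha
            double s2 = sb[d] - coeffX(a) * Math.abs(coeffY() * sb[d] - s[d]); // via K_beta
            double s3 = sd[d] - coeffX(a) * Math.abs(coeffY() * sd[d] - s[d]); // via K_delta
            next[d] = (s1 + s2 + s3) / 3.0; // Eq. (11)
        }
        return next;
    }

    public static void main(String[] args) {
        double[] next = update(new double[]{0.5}, new double[]{0.3},
                               new double[]{0.6}, new double[]{0.9}, 1.5);
        System.out.println(next[0]);
    }
}
```

Note that when a = 0 the X coefficients vanish and the update collapses to the mean of the three leaders, which is the exploitation limit of the search.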
For example, the second and third positions of w2 are crossed over with the fourth and fifth positions of w1. Finally, new Chromosome 1 (0.1, 0.4, 0.6, 0.2, 0.9) and new Chromosome 2 (0.5, 0.2, 0.4, 0.3, 0.8) form the updated solution. A detailed description of the crossover operation is given in Fig. 3. Figure 3. Crossover operation for worst chromosomes. Mutation operation: In the mutation process, the crossover solution is taken and mutated. The same five tasks are considered and the mutation technique is applied to the new chromosome using single-point mutation: a single point is chosen and mutated with another point, as shown in Fig. 4. Figure 4. Mutated chromosome. Step 7: Attacking prey and search for prey: The adaptive values of the parameters a and X allow GGWO to transition smoothly between exploration and exploitation. As a declines, half of the iterations are devoted to exploration (|X|≥1) and the other half to exploitation (|X|<1). GWO has only two main parameters to be tuned (a and Y). The procedure runs until the maximum number of iterations is reached; ultimately, the optimal outcomes are chosen on the basis of the fitness value. Step 8: Termination criteria: Once the best fitness is achieved by the GGWO algorithm, the chosen task assignment is applied to the cloud computing process and execution terminates (Table 4). Table 4. Pseudo code of proposed parallel machine scheduling. Input: The parameters of the GGWO algorithm; the parameters of virtual machine scheduling Output: A scheduled task Assumption: Input solution Si, fitness function, migration cost, total time, load utilization and energy consumption. Initialization: Initialize the number of tasks Ti, number of virtual machines VMi, number of physical machines PMi and coefficient vectors X, Y and a. Start: Generate the initial population Si, i=1,2,…,n for all Si do Evaluate the fitness function of the population using (1) end for Set iteration t to 1 Repeat Select the best search agent Sα Select the second best agent Sβ Select the third best agent Sδ While (t < max number of iterations) for each search agent Update the position of the current search agent using Equation (11) end for Update a, X, Y Calculate fitness of all search agents Update Sα, Sβ and Sδ Apply crossover and mutation to update the worst chromosomes // Crossover for i = 1, 2, …, N do Randomly select two worst chromosomes w1 and w2 from the population Generate w11 and w12 by one-point crossover Save w11 and w12 to the new solution wiNew end for // Mutation for i = 1, 2, …, N do Select the new solution from the crossover Mutate each bit of wiNew Save the mutated solution as the new solution end for t = t + 1 end while return Sα end stop 5. SIMULATION RESULT AND DISCUSSION In this section, we examine the results obtained from the proposed GGWO-based task scheduling procedure. We implemented the proposed task scheduling in Java (JDK 1.6) with the CloudSim toolkit, and a series of experiments was performed on a 2 GHz dual-core PC with 4 GB RAM running a 64-bit version of Windows 7. 5.1. Parameter setup The basic idea of our proposed methodology is virtual machine scheduling using the GGWO algorithm. First, we assign N tasks and M resources, and the tasks are scheduled based on the migration-cost, total-time, load-utilization and energy-consumption functions (Table 5). Table 5. Parameter description.
Parameter            Problem Instance 1   Problem Instance 2   Problem Instance 3
Physical machines    50                   100                  200
Virtual machines     126                  294                  589
Tasks                100                  200                  400
CPUs                 126                  294                  589
Memory               100 MB–1 GB          100 MB–1 GB          100 MB–1 GB
Time                 1000 ms–100 000 ms   1000 ms–100 000 ms   1000 ms–100 000 ms

In this work, we consider three problem instances: low-range parameters in problem Instance 1, middle-range parameters in problem Instance 2 and high-range parameters in problem Instance 3. The performance analysis is then carried out on each of them: (i) performance analysis with 50 physical machines and 126 virtual machines (problem Instance 1), (ii) performance analysis with 100 physical machines and 294 virtual machines (problem Instance 2) and (iii) performance analysis with 200 physical machines and 589 virtual machines (problem Instance 3).
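The three problem instances in Table 5 can be captured in a small configuration structure. A minimal sketch in Python with hypothetical field names (the paper's implementation is in Java with CloudSim, so this encoding is purely illustrative):

```python
# Hypothetical encoding of Table 5's three problem instances; the key
# names are illustrative, not taken from the paper's implementation.
PROBLEM_INSTANCES = {
    1: {"physical_machines": 50,  "virtual_machines": 126, "tasks": 100,
        "cpus": 126, "memory_mb": (100, 1024), "time_ms": (1000, 100_000)},
    2: {"physical_machines": 100, "virtual_machines": 294, "tasks": 200,
        "cpus": 294, "memory_mb": (100, 1024), "time_ms": (1000, 100_000)},
    3: {"physical_machines": 200, "virtual_machines": 589, "tasks": 400,
        "cpus": 589, "memory_mb": (100, 1024), "time_ms": (1000, 100_000)},
}
```

Note that in every instance the number of CPUs equals the number of virtual machines, as stated in the setup below.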
Here, 100, 200 and 400 tasks were used in the scheduling process to allocate to the virtual machines: 100 tasks are considered in problem Instance 1, 200 tasks in problem Instance 2 and 400 tasks in problem Instance 3. The total number of CPUs utilized equals the total number of virtual machines used in each problem instance. The range used for memory is 100 MB–1 GB and for load 1000 ms–100 000 ms.

5.2. Performance comparisons
Table 6 shows the performance comparison of the problem instances in terms of load utilization. The GGWO values in Tables 6–9 indicate that the proposed algorithm gives better objective values than the other existing GWO and GA algorithms.

Table 6. Performance analysis for problem instances in terms of load utilization.

Iterations   Problem Instance 1      Problem Instance 2      Problem Instance 3
             GWO    GA    GGWO       GWO    GA    GGWO       GWO    GA    GGWO
5            0.39   0.38  0.40       0.32   0.31  0.33       0.25   0.24  0.26
10           0.39   0.38  0.40       0.33   0.32  0.34       0.25   0.24  0.26
15           0.40   0.38  0.41       0.33   0.32  0.34       0.26   0.24  0.26
20           0.40   0.39  0.41       0.33   0.32  0.35       0.26   0.24  0.27
Table 7. Performance analysis for problem instances in terms of migration cost.

Iterations   Problem Instance 1       Problem Instance 2       Problem Instance 3
             GWO     GA     GGWO      GWO     GA     GGWO      GWO     GA     GGWO
5            0.019   0.020  0.017     0.015   0.017  0.014     0.013   0.015  0.011
10           0.019   0.020  0.017     0.015   0.017  0.014     0.013   0.015  0.011
15           0.019   0.019  0.017     0.015   0.017  0.014     0.013   0.015  0.011
20           0.019   0.019  0.017     0.015   0.017  0.014     0.013   0.015  0.011
Table 8. Performance analysis for problem instances in terms of energy consumption.

Iterations   Problem Instance 1      Problem Instance 2      Problem Instance 3
             GWO    GA    GGWO       GWO    GA    GGWO       GWO    GA    GGWO
5            0.41   0.38  0.42       0.25   0.25  0.26       0.22   0.23  0.21
10           0.41   0.38  0.42       0.25   0.26  0.27       0.22   0.23  0.21
15           0.43   0.39  0.44       0.26   0.26  0.27       0.23   0.23  0.21
20           0.43   0.40  0.44       0.26   0.26  0.28       0.23   0.25  0.21
Table 9. Performance analysis for problem instances in terms of processing time (ms).

Iterations   Problem Instance 1             Problem Instance 2           Problem Instance 3
             GWO      GA         GGWO       GWO      GA       GGWO       GWO      GA       GGWO
5            256 646  2 826 416  233 646    378 456  396 545  356 562    674 514  686 494  648 946
10           295 165  3 165 454  251 444    389 561  401 654  366 664    682 664  696 798  654 970
15           314 548  3 346 574  278 454    394 451  425 454  385 165    689 116  699 787  659 641
20           321 541  3 452 544  294 512    400 025  445 484  391 464    693 264  700 064  669 614
The results shown in Table 6 suggest that the GGWO algorithm provides considerably better load utilization: 40.5% for problem Instance 1, 34% for problem Instance 2 and 26.25% for problem Instance 3, compared with the existing GWO and GA. Table 7 presents the performance comparison of the existing GWO and GA and the proposed GGWO algorithms in terms of migration cost. For an effective objective function, the migration cost of the proposed algorithm must be minimal. From Table 7, it is deduced that the proposed GGWO performs better than the existing swarm-based techniques; as a result, the proposed algorithm provides better objective values in the majority of the three problem instances. Compared with GWO and GA, the migration cost of the proposed GGWO algorithm is 1.7% for problem Instance 1, 1.4% for problem Instance 2 and 1.1% for problem Instance 3. Table 8 shows the performance comparison of the results obtained by the proposed GGWO algorithm in terms of energy consumption.
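The four objectives reported in Tables 6–9 are combined into the single fitness value of Equation (1), which the scheduler maximizes. Since Equation (1) is not reproduced in this section, the following is only a hedged sketch of one plausible linear combination; the `fitness` name, the equal weights and the assumption that all inputs are normalized to [0, 1] are illustrative, not the paper's formulation.

```python
def fitness(load_utilization, migration_cost, total_time, energy,
            weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical combined objective: reward load utilization and
    penalise the normalised migration cost, total time and energy.
    The weights and the linear form are assumptions, not Equation (1)."""
    w1, w2, w3, w4 = weights
    return (w1 * load_utilization
            - w2 * migration_cost
            - w3 * total_time
            - w4 * energy)
```

Under any such formulation, a schedule with higher load utilization and lower cost, time and energy scores a higher fitness value, which is the behavior the comparisons below rely on.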
The results presented in Table 8 suggest that the proposed GGWO algorithm provides notably better objective values in terms of energy consumption: 4.3% for problem Instance 1, 2.7% for problem Instance 2 and 2.1% for problem Instance 3. The objective value comparison of the proposed GGWO and the existing GWO and GA in terms of processing time, measured in milliseconds (ms), is presented in Table 9. The processing time of the proposed GGWO algorithm must be minimal compared with the existing GWO and GA algorithms, and it is indeed lower than that of the existing GWO and GA for all three problem instances. Therefore, the GGWO algorithm performs better on the task scheduling problem in distributed systems than the other existing algorithms with respect to the multiple objectives and the complexity of the algorithms.

5.3. Comparative analysis for problem instances in terms of fitness function
The proposed GGWO technique achieves lower migration cost, total time and energy consumption and higher load utilization than the existing GWO and GA models on problem Instance 1 (50 physical machines, 126 virtual machines and 100 tasks). As per our objective function, the fitness value should be maximal; hence, minimizing the total time, migration cost and energy consumption and maximizing the load utilization together achieve the maximum fitness value. The comparative analysis graphs clearly show that the fitness value of the proposed method is higher than that of the GWO and GA techniques. Figures 5–7 show the fitness function results for the three problem instances.
Figure 5. Comparative analysis for problem Instance 1 in terms of fitness function (50 PM and 126 VM).
Figure 6.
Comparative analysis for problem Instance 2 in terms of fitness function (100 PM and 294 VM).
Figure 7. Comparative analysis for problem Instance 3 in terms of fitness function (200 PM and 589 VM).
Figure 5 shows the comparative analysis of the results obtained by the proposed GGWO algorithm in terms of the fitness function. From Fig. 5, it is deduced that the proposed GGWO gives a better fitness value than the existing GWO and GA techniques: for problem Instance 1, the proposed GGWO achieves 65.12%, while GWO achieves 64.44% and GA 60.76%. In Fig. 6, the fitness comparison for problem Instance 2 shows that the proposed GGWO achieves 55.64%, while GWO achieves 54.7% and GA 54.77%. Figure 7 compares the fitness values for problem Instance 3 between the proposed GGWO and the existing GA and GWO techniques: GGWO reaches 53.66%, whereas GWO reaches 52.98% and GA 52%. Hence, across the three problem instances, our proposed GGWO algorithm performs better than the existing GWO and GA techniques. Analyzing the above results for problem Instances 1, 2 and 3, the total time, migration cost and energy consumption are minimal and the load utilization is maximal for the proposed GGWO compared with the existing GWO and GA. Hence, the proposed GGWO algorithm gives a better objective function value than the existing ones, which shows that the proposed hybrid GGWO algorithm is effective for task scheduling in cloud computing.
5.4.
Performance analysis on real-world datasets
In this section, simulations were conducted using five common scientific workflows, LIGO, Montage, Epigenomics, SIPHT and Cybershake, each of the generated workflows consisting of 1000 tasks. To process the real-world datasets, we used problem Instance 2 (100 physical machines, 294 virtual machines and 200 tasks) for the execution. The load utilization of each algorithm is displayed in Fig. 8, where the X-axis shows the load utilization and the Y-axis the number of iterations used for scheduling the tasks. As the number of iterations increases, the load utilization is higher for the proposed GGWO algorithm than for the GWO and GA algorithms. From Fig. 8, LIGO and SIPHT give higher performance than the other datasets.
Figure 8. Performance analysis on real-world datasets in terms of load utilization: (a) LIGO, (b) Montage, (c) Epigenomics, (d) SIPHT and (e) Cybershake.
Finally, in Fig. 9, the migration cost versus the number of iterations for the existing GWO and GA and the proposed GGWO algorithms is the most significant basis for evaluating their performance. For example, in Montage, nearly 200 parallel tasks out of 1000 in the first two levels need to be scheduled. As expected, the experimental results in Fig. 9 show that the migration cost of the task scheduling is almost constant for the GGWO algorithm even as the number of iterations increases, and the cost is lower than that of the existing GWO and GA.
Figure 9.
Performance analysis on real-world datasets in terms of migration cost: (a) LIGO, (b) Montage, (c) Epigenomics, (d) SIPHT and (e) Cybershake.
The best performance of GGWO is obtained on Epigenomics and Cybershake, which show the lowest energy consumption across all iterations. The experimental results in Fig. 10 show that, as the number of iterations increases, the energy consumption of the proposed GGWO algorithm remains lower than that of the existing GWO and GA algorithms.
Figure 10. Performance analysis on real-world datasets in terms of energy consumption: (a) LIGO, (b) Montage, (c) Epigenomics, (d) SIPHT and (e) Cybershake.
The overall processing time for the real-world datasets is shown in Fig. 11. The processing time is reduced by using the GGWO algorithm for scheduling the tasks on the VMs. Hence, on the real-world datasets, the proposed GGWO algorithm performs better than the existing techniques.
Figure 11. Performance analysis on real-world datasets in terms of processing time: (a) LIGO, (b) Montage, (c) Epigenomics, (d) SIPHT and (e) Cybershake.

6. CONCLUSION
In this paper, a multi-objective task scheduling method was proposed based on a hybrid genetic gray wolf optimization (GGWO) approach. The multi-objective optimization approach improves the scheduling performance compared with a single objective function. Here, a multi-objective function comprising total time, migration cost, load utilization and energy consumption was used for better performance.
In GGWO, each dimension of a solution represents a task, and a solution as a whole represents the priorities of all tasks. Here, GA improves the performance of the GWO algorithm while optimizing the assignment of tasks to virtual machines. This approach is able to obtain a high-quality schedule by adaptively applying crossover and mutation updating strategies to update each task. The experiments were conducted on three problem instances. Finally, the performance on five common scientific workflows, LIGO, Montage, Epigenomics, SIPHT and Cybershake, was compared for our proposed GGWO technique. The results show that our proposed multi-objective task scheduling is better than the other approaches: it outperforms the standard GWO and GA algorithms, and the experimental results demonstrate that the GGWO-based scheduling approach is well suited to the task scheduling problem.

REFERENCES
1 Wu, K., Lu, P. and Zhu, Z. (2016) Distributed online scheduling and routing of multicast-oriented tasks for fitness function-driven cloud computing. IEEE Commun. Lett., 20, 684–687.
2 Zhu, X., Chen, C., Yang, L.T. and Xiang, Y. (2015) ANGEL: agent-based scheduling for real-time tasks in virtualized clouds. IEEE Trans. Comput., 64, 3389–3403.
3 Cheng, C., Li, J. and Wang, Y. (2015) An energy-saving task scheduling strategy based on vacation queuing theory in cloud computing. Tsinghua Sci. Technol., 20, 28–39.
4 Lovász, G., Niedermeier, F. and De Meer, H. (2013) Performance tradeoffs of energy-aware virtual machine consolidation. Cluster Comput., 16, 481–496.
5 Keshanchi, B., Souri, A. and Navimipour, N.J. (2017) An improved genetic algorithm for task scheduling in the cloud environments using the priority queues: formal verification, simulation, and statistical testing. J. Syst. Softw.
, 124, 1–21.
6 Bansal, N., Maurya, A., Kumar, T., Singh, M. and Bansal, S. (2015) Cost performance of QoS driven task scheduling in cloud computing. Procedia Comput. Sci., 57, 126–130.
7 Ali, H.G.E.D.H., Saroit, I.A. and Kotb, A.M. (2016) Grouped tasks scheduling algorithm based on QoS in cloud computing network. Egypt. Inform. J., 18, 11–19.
8 Patel, R. and Mer, H. (2013) A survey of various QoS-based task scheduling algorithms in cloud computing environment. Int. J. Sci. Technol. Res., 2, 109–112.
9 Juarez, F., Ejarque, J. and Badia, R.M. (2016) Dynamic energy-aware scheduling for parallel task-based application in cloud computing. Future Gener. Comput. Syst., in press.
10 Ismail, L. and Fardoun, A. (2016) EATS: energy-aware tasks scheduling in cloud computing systems. Procedia Comput. Sci., 83, 870–877.
11 Dasgupta, K., Mandal, B., Dutta, P., Mandal, J.K. and Dam, S. (2013) A genetic algorithm (GA) based load balancing strategy for cloud computing. Procedia Technol., 10, 340–347.
12 Lu, C., Gao, L., Li, X. and Xiao, S. (2017) A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng. Appl. Artif. Intell., 57, 61–79.
13 Lu, C., Xiao, S., Li, X. and Gao, L. (2016) An effective multi-objective discrete grey wolf optimizer for a real-world scheduling problem in welding production. Adv. Eng. Softw., 99, 161–176.
14 Guha, D., Roy, P.K. and Banerjee, S. (2016) Load frequency control of large scale power system using quasi-oppositional grey wolf optimization algorithm. Eng. Sci. Technol. Int. J., 19, 1693–1713.
15 Guha, D., Roy, P.K. and Banerjee, S. (2016) Quasi-oppositional differential search algorithm applied to load frequency control. Eng. Sci. Technol. Int. J., 19, 1635–1654.
16 Zuo, L., Shu, L., Dong, S., Zhu, C. and Hara, T. (2015) A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing. IEEE Access, 3, 2687–2699.
17 He, H., Xu, G., Pang, S. and Zhao, Z. (2016) AMTS: adaptive multi-objective task scheduling strategy in cloud computing. China Commun., 13, 162–171.
18 Liu, T., Chen, F., Ma, Y. and Xie, Y. (2016) An energy-efficient task scheduling for mobile devices based on cloud assistant. Future Gener. Comput. Syst., 61, 1–12.
19 Zhu, W., Zhuang, Y. and Zhang, L. (2017) A three-dimensional virtual resource scheduling method for energy saving in cloud computing. Future Gener. Comput. Syst., 69, 66–74.
20 Lin, X., Wang, Y., Xie, Q. and Pedram, M. (2015) Task scheduling with dynamic voltage and frequency scaling for energy minimization in the mobile cloud computing environment. IEEE Trans. Serv. Comput., 8, 175–186.
21 Li, K. (2017) Scheduling parallel tasks with energy and time constraints on multiple manycore processors in a cloud computing environment. Future Gener. Comput. Syst., 1–41.
22 Abdullahi, M. and Ngadi, M.A. (2016) Symbiotic organism search optimization based task scheduling in cloud computing environment. Future Gener. Comput. Syst., 56, 640–650.
23 Xu, X., Cao, L. and Wang, X. (2016) Resource pre-allocation algorithms for low-energy task scheduling of cloud computing. J. Syst. Eng. Electron., 27, 457–469.
24 Zhu, A., Xu, C., Li, Z., Wu, J. and Liu, Z. (2015) Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J. Syst. Eng. Electron., 26, 317–328.
25 Kishor, A. and Singh, P.K. (2016) Empirical study of grey wolf optimizer. In Proc. Fifth Int. Conf. Soft Computing for Problem Solving, 1037–1049.
26 Naznin, F., Sarker, R. and Essam, D. (2012) Progressive alignment method using genetic algorithm for multiple sequence alignment. IEEE Trans. Evol. Comput., 16, 615–631.
27 Whitley, D. (2002) Genetic algorithms and evolutionary computing. Van Nostrand's Scientific Encyclopedia.
28 Kohli, M. and Arora, S. (2017) Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng., 1–33.
29 Mehdi, A., Rashidib, H. and Alizadehc, S.H. (2017) An enhanced genetic algorithm with new operators for task scheduling in heterogeneous computing systems. Eng. Appl. Artif. Intell., 61, 35–46.

© The British Computer Society 2018. All rights reserved.
The Computer Journal – Oxford University Press
Published: Oct 1, 2018