Some Single-machine Scheduling Problems with Actual Time and Position Dependent Learning Effects

Fuzzy Inf. Eng. (2009) 2:161-177, DOI 10.1007/s12543-009-0013-1
ORIGINAL ARTICLE

Kai-biao Sun · Hong-xing Li
Received: 18 March 2009 / Revised: 24 April 2009 / Accepted: 10 May 2009
© Springer and Fuzzy Information and Engineering Branch of the Operations Research Society of China

Abstract In this paper we study single-machine scheduling problems with learning effects in which the actual processing time of a job is a function of the total actual processing times of the jobs already processed and of its scheduled position. We show by examples that the optimal schedules for the classical versions of the problems are not optimal under this actual time and position dependent learning effect model for the following objectives: makespan, sum of the kth power of the completion times, total weighted completion time, maximum lateness and number of tardy jobs. Under certain conditions, however, we show that the shortest processing time (SPT) rule, the weighted shortest processing time (WSPT) rule, the earliest due date (EDD) rule and a modified Moore's Algorithm still construct optimal schedules for these objective functions, respectively.

Keywords Scheduling · Actual time-dependent · Position-dependent · Learning effect · Single-machine

1. Introduction

Scheduling problems have received considerable attention since the middle of the last century. However, most researchers assume that job processing times are known and fixed throughout processing. Recent empirical studies in several industries have demonstrated that the time needed to produce a single unit decreases continuously as additional units are processed and that, owing to the declining processing times, the cost per unit declines as well [22].
This phenomenon is well known as the "learning effect" in the literature [2]. The learning effect has attracted growing attention in the scheduling community on account of its significance. For instance, Biskup [3] pointed out that repeated processing of similar tasks improves workers' skills and introduced a scheduling model with learning effects in which the actual processing time of a job is a function of its position in the schedule. Mosheiov [14] found that under Biskup's learning effect model the optimal schedules for some classical scheduling problems remain valid, but they require much greater computational effort to obtain. Cheng and Wang [6] considered a single-machine scheduling problem with a volume-dependent, piecewise linear processing time function to model the learning effect. Lee et al. [11] studied a bi-criterion scheduling problem on a single machine. Lin [12] gave complexity results for single-machine scheduling with positional learning effects.

Besides the job-position-dependent learning effect model, there are several other learning effect scheduling models in the literature. Alidaee and Womer [1] presented a review and extensions of scheduling with time-dependent processing times. Cheng et al. [5] gave a concise survey of scheduling with time-dependent processing times. Wang and Cheng [17] studied a model in which the job processing times are a function of their starting times and of their positions in the sequence. Recently, Kuo and Yang [9,10] introduced a time-dependent learning effect which is a function of the total normal processing times of the jobs previously scheduled. Wang et al. [20] considered the same learning effect model as that in Kuo and Yang [9].

Kai-biao Sun · Hong-xing Li: School of Electronic and Information Engineering, Dalian University of Technology, Dalian 116024, P.R. China; e-mail: [email protected], [email protected]
They proved that the weighted shortest processing time (WSPT) rule, the earliest due date (EDD) rule and the modified Moore-Hodgson algorithm can, under certain conditions, construct optimal schedules for the problems of minimizing the total weighted completion time, the maximum lateness and the number of tardy jobs, respectively. Koulamas and Kyparisis [8] introduced a general sum-of-job-processing-times dependent learning effect model in which employees learn more if they perform a job with a longer processing time. Wang [19] investigated a scheduling model that considers setup times and a time-dependent learning effect simultaneously. For a state-of-the-art review of scheduling models and problems with learning effects, the reader is referred to [4].

Biskup [4] classified learning models into two types, namely position-dependent learning and sum-of-processing-time-dependent learning. He further claimed that position-dependent learning assumes that learning takes place through processing-time-independent operations such as setting up machines. This seems a realistic assumption when the actual processing time of a job is mainly machine-driven and involves little or no human interference. On the other hand, sum-of-processing-time-dependent learning takes into account the experience gained from producing the same or similar jobs over time. This might, for example, be the case for offset printing, where running the press itself is a highly complicated and error-prone process. Recently, Wu and Lee [21] considered a learning model in which the learning effect is a function of the total normal processing times of the jobs already processed and of the job's scheduled position. Let $p_{jr}$ denote the actual processing time of job $J_j$ scheduled in the $r$th position; then
$$p_{jr} = p_j \Big(1 + \sum_{k=1}^{r-1} p_{[k]}\Big)^{a_1} r^{a_2},$$
where $p_{[k]}$ is the normal processing time of the job scheduled in the $k$th position of the sequence and $a_1 \le 0$, $a_2 \le 0$.
They showed that the SPT sequence is optimal for minimizing the makespan and the total completion time on a single machine under the proposed learning model. In addition, they showed that the WSPT sequence is optimal for minimizing the sum of the weighted completion times if the jobs have agreeable weights. Cheng et al. [7] considered another learning model, in which the learning effect is given by
$$p_{jr} = p_j \left(1 - \frac{\sum_{k=1}^{r-1} p_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2},$$
where $a_1 \ge 1$ and $a_2 \le 0$. Under this learning model they obtained the same results as Wu and Lee [21]. Further, they showed that the EDD sequence is optimal for the maximum lateness if the jobs have agreeable due dates. Besides, they presented polynomial-time optimal solutions for some special cases of the $m$-machine flowshop makespan and total completion time minimization problems.

From the above two learning models it is not hard to see that the actual processing time of, say, the $r$th job is influenced by the sum of the normal processing times of the previously scheduled $r-1$ jobs. These models rest on the assumption that the learning effect depends on repetitions of the same operations across jobs. In many situations, however, the operating processes of jobs differ, for example in car repair or maintenance and in patient diagnosis and treatment: the conditions of cars or patients differ, so there are no identical repetitions of operating processes across jobs. Nevertheless, a certain learning effect still arises after operating on each job. In such situations the learning effect is due to the experience accumulated in operating jobs, i.e., to the total actual processing time of the jobs.
In this paper, therefore, we study a new learning effect scheduling model in which the learning effect is a function of the sum of the actual processing times of the jobs already processed and of the job's scheduled position. The remainder of this paper is organized as follows. In Section 2 we define the problem formally. Solution procedures for the single-machine problems of minimizing the makespan, the sum of the kth power of completion times, the total weighted completion time, the maximum lateness and the number of tardy jobs under the proposed model are presented in Section 3. We conclude the paper in the last section.

2. Problem Formulation

Suppose that there are $n$ jobs to be scheduled on a single machine. Each job $J_j$ has a normal processing time $p_j$, a weight $w_j$ and a due date $d_j$ ($1 \le j \le n$). Let $p_{[k]}$ and $p^A_{[k]}$ be the normal processing time and the actual processing time, respectively, of the job scheduled in the $k$th position of a sequence, and let $p^A_{ir}$ be the actual processing time of job $J_i$ when it is scheduled in position $r$. Then
$$p^A_{ir} = p_i \left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2}, \qquad (1)$$
where $p^A_{[1]} = p_{[1]}$, $p^A_{[s]} = p_{[s]}\left(1 - \frac{\sum_{k=1}^{s-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} s^{a_2}$ for $1 < s \le r-1$, $a_1 \ge 1$ and $a_2 \le 0$. For convenience, we denote the actual time and position dependent learning effect defined in Eq. (1) by $LE_{at\text{-}p}$.

For a given schedule $S$, $C_j = C_j(S)$ represents the completion time of $J_j$ in $S$. Let $C_{\max} = \max\{C_1, C_2, \cdots, C_n\}$ denote the makespan. Let $\sum w_j C_j$, $\sum C_j^k$ ($k > 0$), $L_{\max} = \max\{C_j - d_j \mid j = 1, \cdots, n\}$ and $\sum U_j$, where $U_j = 0$ if $C_j \le d_j$ and $U_j = 1$ otherwise, represent the sum of weighted completion times, the sum of the kth power of completion times, the maximum lateness and the number of tardy jobs, respectively.
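As a concrete illustration of Eq. (1), the actual processing times of a sequence can be computed recursively. The following Python sketch is our own illustration (not part of the original paper); it assumes processing starts at time 0 with no idle time, so the completion time of the $r$th job equals the sum of the actual processing times of the first $r$ jobs.

```python
def completion_times(p_normal, order, a1, a2):
    """Completion times under the LE_{at-p} model of Eq. (1).

    p_normal: normal processing times p_j, indexed by job.
    order:    job indices in their processing sequence.
    a1 >= 1, a2 <= 0: the learning indices of Eq. (1).
    """
    P = sum(p_normal)              # total normal processing time
    done = 0.0                     # sum of actual times already processed
    C = []
    for r, j in enumerate(order, start=1):
        actual = p_normal[j] * (1.0 - done / P) ** a1 * r ** a2
        done += actual
        C.append(done)             # no idle time: completion = cumulative actual time
    return C

# Data of Example 1 in Section 3.1: p = (1, 2, 57), a1 = 3, a2 = -0.5.
print(completion_times([1, 2, 57], [0, 1, 2], 3, -0.5)[-1])  # SPT makespan, approx. 31.5444
print(completion_times([1, 2, 57], [1, 0, 2], 3, -0.5)[-1])  # approx. 31.3940
```

With the data of Example 1 in Section 3.1 this recursion reproduces the two makespans quoted there, 31.5444 and 31.3940.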
Using the traditional three-field notation, we refer to the problem as $1|LE_{at\text{-}p}|f$ if the criterion is to minimize $f$, where $f \in \{C_{\max}, \sum C_j^k, \sum w_j C_j, L_{\max}, \sum U_j\}$.

3. Several Single-machine Scheduling Problems

In this section we consider several single-machine scheduling problems under this learning effect model. Suppose that $S_1$ and $S_2$ are two job schedules that differ by a pairwise interchange of two adjacent jobs $J_i$ and $J_j$, i.e., $S_1 = (\sigma, J_i, J_j, \sigma')$ and $S_2 = (\sigma, J_j, J_i, \sigma')$, where $\sigma$ and $\sigma'$ denote the partial sequences of $S_1$ (or $S_2$) before and after $J_i$ and $J_j$, respectively; $\sigma$ (or $\sigma'$) may be empty. Furthermore, we assume that there are $r-1$ jobs in $\sigma$. In addition, let $C_\sigma$ denote the completion time of the last job in $\sigma$ and let $J_k$ be the first job in $\sigma'$. Under $S_1$, the completion times of jobs $J_i$ and $J_j$ are, respectively,
$$C_i(S_1) = C_\sigma + p_i\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2} \qquad (2)$$
and
$$C_j(S_1) = C_\sigma + p_i\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2} + p_j\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{ir}}{\sum_{k=1}^{n} p_k}\right)^{a_1} (r+1)^{a_2}. \qquad (3)$$
Similarly, the completion times of jobs $J_j$ and $J_i$ under $S_2$ are, respectively,
$$C_j(S_2) = C_\sigma + p_j\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2} \qquad (4)$$
and
$$C_i(S_2) = C_\sigma + p_j\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{a_1} r^{a_2} + p_i\left(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{jr}}{\sum_{k=1}^{n} p_k}\right)^{a_1} (r+1)^{a_2}. \qquad (5)$$
3.1 The Makespan Criterion

In the classical single-machine makespan minimization problem, the makespan is sequence-independent. This is no longer true when a learning effect is considered. Wu and Lee [21] and Cheng et al. [7] showed that the SPT rule is optimal when learning is based on the sum of the normal processing times of the jobs previously scheduled and on the job's scheduled position. The following example shows, however, that the SPT order does not yield an optimal schedule under the proposed model.

Example 1 Let $n = 3$, $p_1 = 1$, $p_2 = 2$, $p_3 = 57$, $a_1 = 3$ and $a_2 = -0.5$. The SPT schedule $(J_1, J_2, J_3)$ yields the makespan
$$C_{\max} = 1 + 2\Big(1-\tfrac{1}{60}\Big)^3 2^{-0.5} + 57\left(1-\frac{1+2\big(1-\frac{1}{60}\big)^3 2^{-0.5}}{60}\right)^3 3^{-0.5} = 31.5444,$$
whereas the sequence $(J_2, J_1, J_3)$ yields the optimal value
$$C_{\max} = 2 + 1\Big(1-\tfrac{2}{60}\Big)^3 2^{-0.5} + 57\left(1-\frac{2+1\big(1-\frac{2}{60}\big)^3 2^{-0.5}}{60}\right)^3 3^{-0.5} = 31.3940.$$

In order to solve the problem approximately, we still use the SPT rule as a heuristic for the problem $1|LE_{at\text{-}p}|C_{\max}$. The performance of the SPT heuristic is evaluated by its worst-case error bound.

Theorem 1 Let $S^*$ be an optimal schedule and $S_1$ an SPT schedule for the problem $1|LE_{at\text{-}p}|C_{\max}$. Then
$$\rho_1 = \frac{C_{\max}(S_1)}{C_{\max}(S^*)} \le \left(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\right)^{a_1},$$
where $p_{\min} = \min\{p_j \mid j = 1, 2, \cdots, n\}$.

Proof Without loss of generality, suppose that $p_1 \le p_2 \le \cdots \le p_n$ and let $P = \sum_{k=1}^{n} p_k$. Then
$$C_{\max}(S_1) = p^A_1 + p^A_2 + \cdots + p^A_n = p_1 + p_2\Big(1-\frac{p^A_1}{P}\Big)^{a_1} 2^{a_2} + \cdots + p_n\Big(1-\frac{p^A_1+\cdots+p^A_{n-1}}{P}\Big)^{a_1} n^{a_2} \le \sum_{k=1}^{n} p_k k^{a_2}$$
and
$$C_{\max}(S^*) = p^A_{[1]} + p^A_{[2]} + \cdots + p^A_{[n]} \ge \sum_{k=1}^{n} p_{[k]}\Big(1-\frac{p_{[1]}+\cdots+p_{[n-1]}}{P}\Big)^{a_1} k^{a_2} \ge \Big(\frac{p_{\min}}{P}\Big)^{a_1}\sum_{k=1}^{n} p_{[k]} k^{a_2} \ge \Big(\frac{p_{\min}}{P}\Big)^{a_1}\sum_{k=1}^{n} p_k k^{a_2},$$
where the middle inequality uses $1-\frac{p_{[1]}+\cdots+p_{[n-1]}}{P} = \frac{p_{[n]}}{P} \ge \frac{p_{\min}}{P}$ and the last inequality uses the rearrangement inequality (the SPT order pairs the smallest processing times with the largest positional factors $k^{a_2}$). Hence
$$\rho_1 = \frac{C_{\max}(S_1)}{C_{\max}(S^*)} \le \left(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\right)^{a_1}.$$

When $a_1 = 0$ we have $\rho_1 = 1$, which is consistent with the fact that the SPT schedule is optimal when the learning effect depends only on the job's scheduled position [3].

Although the SPT order does not provide an optimal schedule under the proposed learning model in general, it still yields an optimal solution when the processing times of the jobs satisfy a certain condition. We first give two useful lemmas.

Lemma 1 $1 - \lambda + \lambda(1-x)^{a}\alpha - (1-\lambda x)^{a}\alpha \le 0$ for $\lambda \ge 1$, $0 \le x \le \frac{1}{\lambda}$, $a \ge 1$ and $0 \le \alpha \le 1$.

Proof Let $F(x) = 1-\lambda+\lambda(1-x)^{a}\alpha-(1-\lambda x)^{a}\alpha$. Taking the first derivative of $F(x)$ with respect to $x$, we have
$$F'(x) = -\lambda\alpha a(1-x)^{a-1} + a\lambda\alpha(1-\lambda x)^{a-1} = a\lambda\alpha\big[(1-\lambda x)^{a-1} - (1-x)^{a-1}\big].$$
Since $\lambda \ge 1$ and $0 \le x \le \frac{1}{\lambda}$, it follows that $F'(x) \le 0$, so $F$ is non-increasing on $0 \le x \le \frac{1}{\lambda}$. Therefore $F(x) \le F(0) = (1-\lambda)(1-\alpha) \le 0$ for $\lambda \ge 1$, $a \ge 1$ and $0 \le \alpha \le 1$.

Lemma 2 For any $0 \le y \le x \le 1$ and $a \ge 1$, $(1-x+y)^{a} - (1-x)^{a} \le ay$.

Proof Let $f(u) = u^{a}$ ($u > 0$). By the mean value theorem, for any $u > 0$ and $u_0 > 0$ there exists a point $\xi$ between $u_0$ and $u$ such that $f(u) = f(u_0) + f'(\xi)(u - u_0)$. Since $f'(u) = au^{a-1}$ and $f''(u) = a(a-1)u^{a-2} \ge 0$, $f'$ is non-decreasing. Taking $u = 1-x+y$ and $u_0 = 1-x$, we obtain
$$(1-x+y)^{a} - (1-x)^{a} = a\xi^{a-1} y \le a(1-x+y)^{a-1} y \le ay$$
for $0 \le y \le x \le 1$.

Theorem 2 For the makespan problem $1|LE_{at\text{-}p}|C_{\max}$, if $p_l \le \frac{\sum_{k=1}^{n} p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of $p_j$ (i.e., by the SPT rule).

Proof Suppose $p_i \le p_j$. To show that $S_1$ dominates $S_2$, it suffices to show $C_j(S_1) \le C_i(S_2)$ and $C_l(S_1) \le C_l(S_2)$ for every $J_l$ in $\sigma'$. Taking the difference between Eq. (3) and Eq.
(5), we have
$$C_j(S_1) - C_i(S_2) = (p_i - p_j)\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}}{\sum_{k=1}^{n}p_k}\right)^{a_1} r^{a_2} + p_j\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}+p^A_{ir}}{\sum_{k=1}^{n}p_k}\right)^{a_1}(r+1)^{a_2} - p_i\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}+p^A_{jr}}{\sum_{k=1}^{n}p_k}\right)^{a_1}(r+1)^{a_2}.$$
Let $x = 1 - \frac{\sum_{k=1}^{r-1}p^A_{[k]}}{\sum_{k=1}^{n}p_k}$. Then $p^A_{ir} = p_i x^{a_1} r^{a_2}$ and $p^A_{jr} = p_j x^{a_1} r^{a_2}$, and hence
$$\frac{C_j(S_1) - C_i(S_2)}{x^{a_1} r^{a_2}} = p_i - p_j + p_j\left(1-\frac{p_i x^{a_1-1} r^{a_2}}{\sum_{k=1}^{n}p_k}\right)^{a_1}\left(\frac{r+1}{r}\right)^{a_2} - p_i\left(1-\frac{p_j x^{a_1-1} r^{a_2}}{\sum_{k=1}^{n}p_k}\right)^{a_1}\left(\frac{r+1}{r}\right)^{a_2}. \qquad (6)$$
Let $t = \frac{p_i x^{a_1-1} r^{a_2}}{\sum_{k=1}^{n}p_k}$, $\lambda = \frac{p_j}{p_i}$ and $\alpha = \left(\frac{r+1}{r}\right)^{a_2}$. Clearly $t \ge 0$, $\lambda \ge 1$ and $0 \le \alpha \le 1$. Substituting $t$, $\lambda$ and $\alpha$ into Eq. (6), it simplifies to
$$\frac{C_j(S_1) - C_i(S_2)}{x^{a_1} r^{a_2}} = p_i\big\{1 - \lambda + \lambda(1-t)^{a_1}\alpha - (1-\lambda t)^{a_1}\alpha\big\}.$$
By Lemma 1, we have $C_j(S_1) \le C_i(S_2)$.

Since $J_k$ is the first job in $\sigma'$, it is scheduled in the $(r+2)$th position. Then
$$C_k(S_1) = C_j(S_1) + p_k\left(1-\frac{C_j(S_1)}{\sum_{h=1}^{n}p_h}\right)^{a_1}(r+2)^{a_2} \qquad (7)$$
and
$$C_k(S_2) = C_i(S_2) + p_k\left(1-\frac{C_i(S_2)}{\sum_{h=1}^{n}p_h}\right)^{a_1}(r+2)^{a_2}. \qquad (8)$$
Let $\Delta = C_i(S_2) - C_j(S_1) \ge 0$. Taking the difference between Eq. (7) and Eq.
(8), we have
$$\begin{aligned} C_k(S_1) - C_k(S_2) &= C_j(S_1) - C_i(S_2) + p_k\left[\left(1-\frac{C_j(S_1)}{\sum_{h=1}^{n}p_h}\right)^{a_1} - \left(1-\frac{C_i(S_2)}{\sum_{h=1}^{n}p_h}\right)^{a_1}\right](r+2)^{a_2} \\ &= -\Delta + p_k\left[\left(1-\frac{C_i(S_2)}{\sum_{h=1}^{n}p_h}+\frac{\Delta}{\sum_{h=1}^{n}p_h}\right)^{a_1} - \left(1-\frac{C_i(S_2)}{\sum_{h=1}^{n}p_h}\right)^{a_1}\right](r+2)^{a_2} \qquad (9) \\ &\le -\Delta + p_k a_1 (r+2)^{a_2}\,\frac{\Delta}{\sum_{h=1}^{n}p_h} \qquad \text{(by Lemma 2)} \\ &= \frac{\Delta}{\sum_{h=1}^{n}p_h}\left[p_k a_1 (r+2)^{a_2} - \sum_{h=1}^{n}p_h\right]. \end{aligned}$$
For any job $J_l$ ($l = 1, 2, \cdots, n$), if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$, then, since $(r+2)^{a_2} \le 3^{a_2}$ for $a_2 \le 0$, the right-hand side of Eq. (9) is not larger than zero, i.e., $C_k(S_1) \le C_k(S_2)$. Similarly, $C_h(S_1) \le C_h(S_2)$ for every $J_h$ in $\sigma'$. Thus the makespan of $S_1 = (\sigma, J_i, J_j, \sigma')$ is not larger than that of $S_2 = (\sigma, J_j, J_i, \sigma')$. Repeating this interchange argument for all jobs not sequenced in SPT order yields the theorem.

3.2 The Sum of the kth Power of Completion Times Criterion

Townsend [16] showed that the problem $1||\sum C_j^2$ can be solved by the SPT rule. When learning and deterioration of jobs are considered, Wang [18] showed that the problem $1|p_{jr} = p_j\alpha(t) + wr^{a}|\sum C_j^2$ can also be solved by the SPT rule. In this section we consider a more general measure, the sum of the kth power of completion times, and again use the SPT rule as a heuristic for the problem $1|LE_{at\text{-}p}|\sum C_j^k$.

Theorem 3 Let $S^*$ be an optimal schedule and $S_1$ an SPT schedule for the problem $1|LE_{at\text{-}p}|\sum C_j^k$. Then
$$\rho_2 = \frac{\sum_{j=1}^{n} C_j^k(S_1)}{\sum_{j=1}^{n} C_j^k(S^*)} \le \left(\frac{\sum_{l=1}^{n} p_l}{p_{\min}}\right)^{k a_1},$$
where $p_{\min} = \min\{p_j \mid j = 1, 2, \cdots, n\}$.

Proof Without loss of generality, suppose that $p_1 \le p_2 \le \cdots \le p_n$ and let $P = \sum_{k=1}^{n} p_k$. Then
$$\sum_{j=1}^{n} C_j^k(S_1) = \sum_{j=1}^{n}\big(p^A_1 + \cdots + p^A_j\big)^k \le \sum_{j=1}^{n}\left(\sum_{l=1}^{j} p_l l^{a_2}\right)^k
and
$$\sum_{j=1}^{n} C_j^k(S^*) = \sum_{j=1}^{n}\big(p^A_{[1]} + \cdots + p^A_{[j]}\big)^k \ge \left(\frac{p_{\min}}{P}\right)^{k a_1}\sum_{j=1}^{n}\left(\sum_{l=1}^{j} p_{[l]} l^{a_2}\right)^k.$$
Hence
$$\rho_2 = \frac{\sum_{j=1}^{n} C_j^k(S_1)}{\sum_{j=1}^{n} C_j^k(S^*)} \le \left(\frac{\sum_{l=1}^{n} p_l}{p_{\min}}\right)^{k a_1}.$$

If the job processing times satisfy a certain condition, we have the following.

Theorem 4 For the problem $1|LE_{at\text{-}p}|\sum C_j^k$, where $k$ is a positive real number, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then there exists an optimal schedule in which the job sequence is determined by the SPT rule.

Proof Suppose $p_i \le p_j$. We prove the theorem by an adjacent pairwise interchange argument. Comparing Eq. (2) and Eq. (4), it is easy to see that $C_i(S_1) \le C_j(S_2)$. On the other hand, by the proof of Theorem 2 we have $C_j(S_1) \le C_i(S_2)$ and $C_l(S_1) \le C_l(S_2)$ for any job $J_l$ ($l \ne i, j$). Therefore, since $k$ is a positive real number, $\sum_{j=1}^{n} C_j^k(S_1) \le \sum_{j=1}^{n} C_j^k(S_2)$. This completes the proof.

Corollary 1 For the total completion time minimization problem $1|LE_{at\text{-}p}|\sum C_j$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then there exists an optimal schedule in which the job sequence is determined by the SPT rule.

3.3 The Total Weighted Completion Time Criterion

Smith [15] showed that sequencing jobs according to the weighted shortest processing time (WSPT) rule, i.e., in non-decreasing order of $p_j/w_j$, where $w_j$ is the weight of job $J_j$, provides an optimal schedule for the classical total weighted completion time problem. However, the WSPT order does not yield an optimal schedule under the proposed learning model, as the following example shows.

Example 2 Let $n = 2$, $p_1 = 1$, $p_2 = 2$, $w_1 = 10$, $w_2 = 30$, $a_1 = 1$ and $a_2 = -0.5$. The WSPT schedule $(J_2, J_1)$ yields the value
$$\sum w_j C_j = 30 \times 2 + 10 \times \Big[2 + 1 \times \Big(1-\tfrac{2}{3}\Big) \times 2^{-0.5}\Big] = 82.3570.$$
Obviously, the sequence $(J_1, J_2)$ yields the smaller (optimal) value
$$\sum w_j C_j = 10 \times 1 + 30 \times \Big[1 + 2 \times \Big(1-\tfrac{1}{3}\Big) \times 2^{-0.5}\Big] = 68.2843.$$

Although the WSPT order does not provide an optimal schedule under the proposed learning model in general, it still gives an optimal solution if the processing times and the weights of the jobs satisfy certain conditions. Before presenting the main result, we give two useful lemmas.

Lemma 3 $1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha \ge 0$ for $0 \le t \le 1$, $0 \le \alpha \le 1$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$.

Proof Let $F(t, \lambda_1, \lambda_2) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha$. Then $\frac{\partial F}{\partial \lambda_1} = -(1-t)^{a_1}\alpha \le 0$ and $\frac{\partial F}{\partial \lambda_2} = -a_1 t(1-t)^{a_1-1}\alpha \le 0$. Thus
$$F(t, \lambda_1, \lambda_2) \ge F(t, 1, 1) = 1 - (1-t)^{a_1}\alpha - a_1 t(1-t)^{a_1-1}\alpha.$$
Let $\varphi(t) = F(t, 1, 1)$. Then $\varphi'(t) = a_1(a_1-1)t(1-t)^{a_1-2}\alpha \ge 0$, hence $\varphi$ is non-decreasing in $t$ and $\varphi(t) \ge \varphi(0) = 1 - \alpha \ge 0$ for $t \ge 0$. Therefore $F(t, \lambda_1, \lambda_2) \ge 0$ for $0 \le t \le 1$, $0 \le \alpha \le 1$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$. This completes the proof.

Lemma 4 $\lambda[1 - \lambda_1(1-t)^{a_1}\alpha] - [1 - \lambda_2(1-\lambda t)^{a_1}\alpha] \ge 0$ for $\lambda \ge 1$, $0 \le t \le \frac{1}{\lambda}$, $a_1 \ge 1$, $0 \le \alpha \le 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$.

Proof Let
$$H(\lambda) = \lambda[1 - \lambda_1(1-t)^{a_1}\alpha] - [1 - \lambda_2(1-\lambda t)^{a_1}\alpha]. \qquad (10)$$
Taking the first and second derivatives of Eq. (10) with respect to $\lambda$, we have
$$H'(\lambda) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-\lambda t)^{a_1-1}\alpha$$
and
$$H''(\lambda) = \lambda_2 a_1(a_1-1)t^2(1-\lambda t)^{a_1-2}\alpha \ge 0.$$
Hence $H'(\lambda)$ is non-decreasing in $\lambda$, i.e., $H'(\lambda) \ge H'(1)$ for $\lambda \ge 1$. By Lemma 3, $H'(1) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha \ge 0$, so $H'(\lambda) \ge 0$ and $H(\lambda)$ is non-decreasing in $\lambda$. Therefore $H(\lambda) \ge H(1) = \alpha(\lambda_2 - \lambda_1)(1-t)^{a_1} \ge 0$ for $\lambda \ge 1$, $0 \le \alpha \le 1$, $0 \le t \le \frac{1}{\lambda}$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$. This completes the proof.
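Returning briefly to Example 2, its two objective values can be verified numerically from the recursion of Eq. (1). The following Python sketch is our own illustration, not part of the paper:

```python
def total_weighted_completion(p, w, order, a1, a2):
    """Sum of w_j * C_j under the LE_{at-p} model of Eq. (1)."""
    P = sum(p)
    done = 0.0       # cumulative actual processing time = current completion time
    total = 0.0
    for r, j in enumerate(order, start=1):
        done += p[j] * (1.0 - done / P) ** a1 * r ** a2
        total += w[j] * done
    return total

# Example 2: p = (1, 2), w = (10, 30), a1 = 1, a2 = -0.5.
print(total_weighted_completion([1, 2], [10, 30], [1, 0], 1, -0.5))  # WSPT (J2, J1): approx. 82.357
print(total_weighted_completion([1, 2], [10, 30], [0, 1], 1, -0.5))  # (J1, J2): approx. 68.284
```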
Theorem 5 For the total weighted completion time problem $1|LE_{at\text{-}p}|\sum w_j C_j$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$ and the jobs have reversely agreeable weights, i.e., $p_i < p_j$ implies $w_i \ge w_j$ for all jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by sequencing the jobs according to the WSPT rule.

Proof We use the pairwise job interchange technique. Suppose $p_i/w_i \le p_j/w_j$; this also implies $p_i \le p_j$ because the weights are reversely agreeable.

Since $C_l(S_1) \le C_l(S_2)$ for any job $J_l$, $l \ne i, j$, by the proof of Theorem 2, it suffices to show that $w_i C_i(S_1) + w_j C_j(S_1) \le w_i C_i(S_2) + w_j C_j(S_2)$. From Eqs. (2)-(5) we have
$$w_i C_i(S_1) + w_j C_j(S_1) = (w_i+w_j)C_\sigma + (w_i+w_j)p_i\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}}{\sum_{k=1}^{n}p_k}\right)^{a_1} r^{a_2} + w_j p_j\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}+p^A_{ir}}{\sum_{k=1}^{n}p_k}\right)^{a_1}(r+1)^{a_2}$$
and
$$w_i C_i(S_2) + w_j C_j(S_2) = (w_i+w_j)C_\sigma + (w_i+w_j)p_j\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}}{\sum_{k=1}^{n}p_k}\right)^{a_1} r^{a_2} + w_i p_i\left(1-\frac{\sum_{k=1}^{r-1}p^A_{[k]}+p^A_{jr}}{\sum_{k=1}^{n}p_k}\right)^{a_1}(r+1)^{a_2}.$$
Let $x = 1 - \frac{\sum_{k=1}^{r-1}p^A_{[k]}}{\sum_{k=1}^{n}p_k}$. Then
$$\frac{w_i C_i(S_2) + w_j C_j(S_2) - w_i C_i(S_1) - w_j C_j(S_1)}{(w_i+w_j)x^{a_1}r^{a_2}} = p_j\left[1 - \frac{w_j}{w_i+w_j}\left(1-\frac{p_i x^{a_1-1}r^{a_2}}{\sum_{k=1}^{n}p_k}\right)^{a_1}\left(\frac{r+1}{r}\right)^{a_2}\right] - p_i\left[1 - \frac{w_i}{w_i+w_j}\left(1-\frac{p_j x^{a_1-1}r^{a_2}}{\sum_{k=1}^{n}p_k}\right)^{a_1}\left(\frac{r+1}{r}\right)^{a_2}\right]. \qquad (11)$$
Let $\lambda = \frac{p_j}{p_i}$, $\lambda_1 = \frac{w_j}{w_i+w_j}$, $\lambda_2 = \frac{w_i}{w_i+w_j}$, $t = \frac{p_i x^{a_1-1}r^{a_2}}{\sum_{k=1}^{n}p_k}$ and $\alpha = \left(\frac{r+1}{r}\right)^{a_2}$. Clearly $\lambda \ge 1$, $0 \le \lambda_1 \le \lambda_2 \le 1$, $t \ge 0$ and $0 \le \alpha \le 1$. Substituting $\alpha$, $\lambda$, $\lambda_1$, $\lambda_2$ and $t$ into Eq. (11), it simplifies to
$$\frac{w_i C_i(S_2) + w_j C_j(S_2) - w_i C_i(S_1) - w_j C_j(S_1)}{(w_i+w_j)x^{a_1}r^{a_2}} = p_i\big\{\lambda[1-\lambda_1(1-t)^{a_1}\alpha] - [1-\lambda_2(1-\lambda t)^{a_1}\alpha]\big\}.$$
Thus, by Lemma 4, $w_i C_i(S_1) + w_j C_j(S_1) \le w_i C_i(S_2) + w_j C_j(S_2)$. Repeating this interchange argument for all jobs not sequenced in WSPT order completes the proof.

Corollary 2 For the problem $1|LE_{at\text{-}p}, p_j = p|\sum w_j C_j$, an optimal schedule can be obtained by sequencing the jobs in non-increasing order of the weights $w_j$.
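Corollary 2 lends itself to a brute-force check on a small instance: when all normal processing times are equal, the completion time of each position is sequence-independent, so the non-increasing-weight order should attain the minimum over all permutations. The Python sketch below is our own illustration; the data are arbitrary and chosen to satisfy the condition of Theorem 5.

```python
from itertools import permutations

def total_weighted_completion(p, w, order, a1, a2):
    """Sum of w_j * C_j under the LE_{at-p} model of Eq. (1)."""
    P = sum(p)
    done, total = 0.0, 0.0
    for r, j in enumerate(order, start=1):
        done += p[j] * (1.0 - done / P) ** a1 * r ** a2
        total += w[j] * done
    return total

p = [5.0, 5.0, 5.0, 5.0]            # equal normal processing times (p_j = p)
w = [3.0, 10.0, 1.0, 6.0]           # hypothetical weights
a1, a2 = 2.0, -0.3                  # a1 >= 1, a2 <= 0

best = min(total_weighted_completion(p, w, s, a1, a2)
           for s in permutations(range(4)))
by_weight = sorted(range(4), key=lambda j: -w[j])    # non-increasing weights
assert abs(total_weighted_completion(p, w, by_weight, a1, a2) - best) < 1e-9
```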
If $w_j = k/p_j$ ($k > 0$) for $j = 1, 2, \cdots, n$, then the reversely agreeable condition is satisfied, and we have the following.

Corollary 3 For the problem $1|LE_{at\text{-}p}, w_j = k/p_j|\sum w_j C_j$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then an optimal schedule can be obtained by sequencing the jobs according to the WSPT rule.

3.4 The Maximum Lateness Criterion

For the classical maximum lateness problem, the earliest due date (EDD) rule provides an optimal schedule. However, this policy is not optimal under the proposed learning model, as shown by the following example.

Example 3 Let $n = 2$, $p_1 = 10$, $p_2 = 20$, $d_1 = 23$, $d_2 = 21$, $a_1 = 1$ and $a_2 = -1$. The EDD schedule $(J_2, J_1)$ yields $L_2 = 20 - 21 = -1$ and $L_1 = 20 + 10 \times (1-\frac{20}{30}) \times 2^{-1} - 23 = -\frac{4}{3}$, so $L_{\max} = -1$, while the sequence $(J_1, J_2)$ yields the optimal value $L_{\max} = L_2 = 10 + 20 \times (1-\frac{10}{30}) \times 2^{-1} - 21 = -\frac{13}{3}$.

In order to solve the problem approximately, we still use the EDD rule as a heuristic for the problem $1|LE_{at\text{-}p}|L_{\max}$. To develop a worst-case performance ratio for a heuristic, we have to avoid cases involving non-positive $L_{\max}$. Following Cheng and Wang [6], the worst-case error bound is defined as
$$\rho_3 = \frac{L_{\max}(S_1) + d_{\max}}{L_{\max}(S^*) + d_{\max}},$$
where $S_1$ and $L_{\max}(S_1)$ denote the heuristic schedule and its maximum lateness, $S^*$ and $L_{\max}(S^*)$ denote an optimal schedule and its maximum lateness, and $d_{\max} = \max\{d_j \mid j = 1, 2, \cdots, n\}$.

Theorem 6 Let $S^*$ be an optimal schedule and $S_1$ an EDD schedule for the problem $1|LE_{at\text{-}p}|L_{\max}$. Then
$$\rho_3 = \frac{L_{\max}(S_1) + d_{\max}}{L_{\max}(S^*) + d_{\max}} \le \frac{\left(\sum_{k=1}^{n} p_k\right)^{a_1}\sum_{k=1}^{n} p_k k^{a_2}}{(p_{\min})^{a_1}\, C_{\max}(SPT)}.$$

Proof Without loss of generality, suppose that $d_1 \le d_2 \le \cdots \le d_n$ and let $P = \sum_{k=1}^{n} p_k$.
Then
$$L_{\max}(S_1) = \max_{j=1,2,\cdots,n}\big\{p^A_1 + p^A_2 + \cdots + p^A_j - d_j\big\} \le \max_{j=1,2,\cdots,n}\Big\{\sum_{i=1}^{j} p_i i^{a_2} - d_j\Big\},$$
since every actual processing time satisfies $p^A_i \le p_i i^{a_2}$. On the other hand,
$$L_{\max}(S^*) = \max_{j=1,2,\cdots,n}\Big\{\sum_{i=1}^{j} p^A_{[i]} - d_{[j]}\Big\} \ge \max_{j=1,2,\cdots,n}\Big\{\sum_{i=1}^{j} p_{[i]} i^{a_2} - d_{[j]}\Big\} - \sum_{i=1}^{n} p_{[i]} i^{a_2} + C_{\max}(S^*) \ge \max_{j=1,2,\cdots,n}\Big\{\sum_{i=1}^{j} p_i i^{a_2} - d_j\Big\} - \sum_{k=1}^{n} p_k k^{a_2} + C_{\max}(S^*),$$
where the first inequality uses $\sum_{i=1}^{j} p^A_{[i]} \ge \sum_{i=1}^{j} p_{[i]} i^{a_2} - \sum_{i=1}^{n} p_{[i]} i^{a_2} + \sum_{i=1}^{n} p^A_{[i]}$ and $\sum_{i=1}^{n} p^A_{[i]} = C_{\max}(S^*)$. Hence
$$L_{\max}(S_1) - L_{\max}(S^*) \le \sum_{k=1}^{n} p_k k^{a_2} - C_{\max}(S^*),$$
and so, since $L_{\max}(S^*) + d_{\max} \ge C_{\max}(S^*)$ and, by Theorem 1, $C_{\max}(S^*) \ge \left(\frac{p_{\min}}{P}\right)^{a_1} C_{\max}(SPT)$,
$$\rho_3 \le 1 + \frac{\sum_{k=1}^{n} p_k k^{a_2} - C_{\max}(S^*)}{L_{\max}(S^*) + d_{\max}} \le \frac{\sum_{k=1}^{n} p_k k^{a_2}}{C_{\max}(S^*)} \le \frac{\left(\sum_{k=1}^{n} p_k\right)^{a_1}\sum_{k=1}^{n} p_k k^{a_2}}{(p_{\min})^{a_1}\, C_{\max}(SPT)}.$$

Although the EDD rule cannot provide an optimal schedule for the maximum lateness problem under the proposed model, the problem remains polynomially solvable under certain conditions. Let EDD-SPT denote the restricted EDD rule in which ties are broken in favor of jobs with smaller processing times, i.e., by the SPT rule.

Theorem 7 For the maximum lateness problem $1|LE_{at\text{-}p}|L_{\max}$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$ and the jobs have agreeable due dates, i.e., $p_i < p_j$ implies $d_i \le d_j$ for all jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by sequencing the jobs according to the EDD-SPT rule.

Proof Suppose $d_i \le d_j$. If $d_i < d_j$, then $p_i \le p_j$ because the due dates are agreeable; if $d_i = d_j$, we assume $p_i \le p_j$. By the proof of Theorem 2, $C_h(S_1) \le C_h(S_2)$ for any job $J_h$, $h \ne i, j$, and thus $L_h(S_1) \le L_h(S_2)$. In order to show that the EDD-SPT rule is optimal, it is therefore sufficient to show that $\max\{L_i(S_1), L_j(S_1)\} \le \max\{L_i(S_2), L_j(S_2)\}$.
Note that $L_i(S_1) = C_i(S_1) - d_i < C_j(S_1) - d_i \le C_i(S_2) - d_i = L_i(S_2)$ and $L_j(S_1) = C_j(S_1) - d_j \le C_i(S_2) - d_j \le C_i(S_2) - d_i = L_i(S_2)$. Therefore $\max\{L_i(S_1), L_j(S_1)\} \le \max\{L_i(S_2), L_j(S_2)\}$. This completes the proof.

Corollary 4 For the problem $1|LE_{at\text{-}p}, d_j = d|L_{\max}$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then an optimal schedule can be obtained by sequencing the jobs according to the SPT rule.

Corollary 5 For the problem $1|LE_{at\text{-}p}, d_j = kp_j|L_{\max}$, if $p_l \le \frac{\sum_{k=1}^{n}p_k}{a_1 3^{a_2}}$ for $l = 1, 2, \cdots, n$, then an optimal schedule can be obtained by sequencing the jobs according to the EDD rule or the SPT rule.

3.5 The Number of Tardy Jobs Criterion

As is well known, the problem $1||\sum U_j$ can be solved by Moore's Algorithm [13]. In the following we show that, under certain conditions, the problem $1|LE_{at\text{-}p}|\sum U_j$ can be solved by a slightly modified Moore's Algorithm. First, we give a formal description of Moore's Algorithm.

Moore's Algorithm (see Moore [13])

Step 1: Order the jobs according to the EDD rule and call the resulting ordering $S_{EDD} = (J_{i_1}, J_{i_2}, \cdots, J_{i_n})$ the current sequence.

Step 2: Using the current sequence, find the first late job, say $J_{i_q}$, and go to Step 3. If no such job is found, the algorithm terminates with an optimal schedule obtained by ordering the jobs in the current sequence followed by the rejected jobs in any order.

Step 3: Reject the job with the largest processing time among $\{J_{i_1}, J_{i_2}, \cdots, J_{i_q}\}$ and remove it from the sequence. Go to Step 2 with the resulting sequence as the current sequence.

To solve our problem, we modify Moore's Algorithm by using the EDD-SPT rule instead of the EDD rule in the first step. We call the resulting algorithm Moore-SPT.
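Steps 1-3 above can be sketched compactly in Python; the tie-break by processing time in the sort key turns the classical algorithm into the Moore-SPT variant. This sketch is our own illustration with classical (fixed) processing times and hypothetical job data, not the paper's code.

```python
def moore_spt(jobs):
    """Moore's Algorithm with EDD-SPT ordering (ties broken by SPT).

    jobs: list of (p_j, d_j) pairs with fixed processing times.
    Returns (on_time, rejected) as lists of job indices.
    """
    # Step 1: EDD order, ties broken in favor of smaller processing time.
    order = sorted(range(len(jobs)), key=lambda j: (jobs[j][1], jobs[j][0]))
    current, rejected, t = [], [], 0
    for j in order:
        current.append(j)
        t += jobs[j][0]
        if t > jobs[j][1]:                 # Step 2: first late job found
            # Step 3: reject the longest job scheduled so far.
            k = max(current, key=lambda i: jobs[i][0])
            current.remove(k)
            rejected.append(k)
            t -= jobs[k][0]
    return current, rejected               # len(rejected) = number of tardy jobs

# Hypothetical instance of (p_j, d_j) pairs:
on_time, rejected = moore_spt([(7, 9), (8, 17), (4, 18), (6, 19), (6, 21)])
print(len(rejected))  # 2
```

Because removing a job only shifts the remaining jobs earlier, jobs that were on time stay on time, which is why the single forward pass suffices.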
Theorem 8 For the problem $1|LE_{at\text{-}p}|\sum U_j$, if $p_l \le \sum_{j=1}^{n} p_j/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$ and jobs have agreeable due dates, i.e., $p_i < p_j$ implies $d_i \le d_j$ for jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by the Moore-SPT algorithm.

Proof Assume that jobs are indexed according to the EDD-SPT rule, i.e., $S_{EDD\text{-}SPT} = (J_1, J_2, \dots, J_n)$. We denote a schedule by $S = (E, T)$, where $E$ is the set of early jobs and $T$ is the set of late jobs. Let $J_l$ and $J_j$ be the first late job and the first rejected job in the Moore-SPT algorithm, respectively. We first show that there exists an optimal schedule $S^* = (E^*, T^*)$ with $J_j \in T^*$.

Let $S_1 = (E_1, T_1)$ be an optimal schedule with $J_j \in E_1$. Since jobs have agreeable due dates, the jobs in $S_{EDD\text{-}SPT}$ are also in the SPT order. Thus there exists at least one job $J_k$ in $T_1$ with $p_k \le p_j$ (otherwise, $J_i \in E_1$ for every job $J_i$ with $p_i \le p_j$; by Theorem 7, the EDD-SPT schedule is then optimal for the jobs in $E_1$ when the objective is to minimize the maximum lateness, which contradicts the fact that $J_l$ is late). By interchanging the jobs $J_j$ and $J_k$, we obtain a new schedule $S_2 = (E_2, T_2)$ with $J_j \in T_2$. By the proof of Theorem 2, we have $|E_2| \ge |E_1|$. Thus we get an optimal schedule $S^* = S_2$ with $J_j \in T^*$.

The theorem can now be proved by induction on the number $n$ of jobs. Clearly, the theorem is correct if $n = 1$. Assume that the theorem is correct for all problems with $n-1$ jobs. Let $S_{Moore\text{-}SPT} = (E, T)$ be the schedule constructed by the Moore-SPT algorithm and let $S^* = (E^*, T^*)$ be an optimal schedule with $J_j \in T^*$. By optimality, we have $|E| \le |E^*|$. If we apply the Moore-SPT algorithm to the set of jobs $\{J_1, J_2, \dots, J_{j-1}, J_{j+1}, \dots, J_n\}$, we get an optimal schedule of the form $(E, T \setminus \{J_j\})$. Because $(E^*, T^* \setminus \{J_j\})$ is feasible for the reduced problem, we have $|E^*| \le |E|$. Thus $|E| = |E^*|$ and $S_{Moore\text{-}SPT} = (E, T)$ is optimal.

Based on Theorem 8, we have the following results.
Theorem 9 For the problem $1|LE_{at\text{-}p}, p_j = p|\sum U_j$, there exists an optimal schedule in which jobs are sequenced by the EDD rule.

Theorem 10 For the problem $1|LE_{at\text{-}p}, d_j = d|\sum U_j$, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$, then there exists an optimal schedule in which the jobs are sequenced by the SPT rule.

Corollary 6 For the problem $1|LE_{at\text{-}p}, d_j = kp_j|\sum U_j$, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$, then an optimal schedule can be obtained by the Moore-SPT algorithm.

4. Conclusions

In this paper, we introduced an actual time-dependent and position-dependent learning effect model. The learning effect of a job is assumed to be a function of the sum of the actual processing times of the jobs previously scheduled and of its scheduled position. We showed by examples that the makespan minimization problem, the sum of the kth powers of completion times minimization problem, the weighted sum of completion times minimization problem, the maximum lateness minimization problem and the number of tardy jobs minimization problem cannot be optimally solved by the corresponding classical scheduling rules, but that for some special cases these problems can be solved in polynomial time. We also used the classical rules as heuristic algorithms for the general cases and analyzed their worst-case bounds. Further research may focus on determining the computational complexity of these problems, which remains open, or on proposing more sophisticated heuristics. It is also clearly worthwhile for future research to investigate the actual time-dependent learning effect in other scheduling environments, such as parallel-machine scheduling and flow-shop scheduling.

Acknowledgments

We are grateful to the anonymous referees for their helpful comments on an earlier version of this paper. This research was supported in part by the National Natural Science Foundation of China (60774049).

References
1. Alidaee B, Womer NK (1999) Scheduling with time dependent processing times: Review and extensions. Journal of the Operational Research Society 50:711-720
2. Badiru AB (1992) Computational survey of univariate and multivariate learning curve models. IEEE Transactions on Engineering Management 39:176-188
3. Biskup D (1999) Single-machine scheduling with learning considerations. European Journal of Operational Research 115:173-178
4. Biskup D (2008) A state-of-the-art review on scheduling with learning effects. European Journal of Operational Research 188:315-329
5. Cheng TCE, Ding Q, Lin BMT (2004) A concise survey of scheduling with time-dependent processing times. European Journal of Operational Research 152:1-13
6. Cheng TCE, Wang G (2000) Single machine scheduling with learning effect considerations. Annals of Operations Research 98:273-290
7. Cheng TCE, Wu CC, Lee WC (2008) Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects. Information Sciences 178:2476-2487
8. Koulamas C, Kyparisis GJ (2007) Single-machine and two-machine flowshop scheduling with general learning functions. European Journal of Operational Research 178:402-407
9. Kuo WH, Yang DL (2006) Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. European Journal of Operational Research 174:1184-
10. Kuo WH, Yang DL (2007) Single-machine scheduling problems with the time-dependent learning effect. Computers and Mathematics with Applications 53:1733-1739
11. Lee WC, Wu CC, Sung HJ (2004) A bi-criterion single-machine scheduling problem with learning considerations. Acta Informatica 40:303-315
12. Lin BMT (2007) Complexity results for single-machine scheduling with positional learning effects. Journal of the Operational Research Society 58:1099-1102
13. Moore JM (1968) An n job, one machine sequencing algorithm for minimizing the number of late jobs. Management Science 15:102-109
14. Mosheiov G (2001) Scheduling problems with a learning effect. European Journal of Operational Research 132:687-693
15. Smith WE (1956) Various optimizers for single-stage production. Naval Research Logistics Quarterly 3:59-66
16. Townsend W (1978) The single machine problem with quadratic penalty function of completion times: a branch-and-bound solution. Management Science 24:530-534
17. Wang X, Cheng TCE (2007) Single-machine scheduling with deteriorating jobs and learning effects to minimize the makespan. European Journal of Operational Research 178:57-70
18. Wang JB (2007) Single-machine scheduling problems with the effects of learning and deterioration. Omega 35:397-402
19. Wang JB (2008) Single-machine scheduling with past-sequence-dependent setup times and time-dependent learning effect. Computers and Industrial Engineering 55:584-591
20. Wang JB, Ng CT, Cheng TCE, Lin LL (2008) Single-machine scheduling with a time-dependent learning effect. International Journal of Production Economics 111:802-811
21. Wu CC, Lee WC (2008) Single-machine scheduling problems with a learning effect. Applied Mathematical Modelling 32:1191-1197
22. Yelle LE (1979) The learning curve: Historical review and comprehensive survey. Decision Sciences 10:302-328

This phenomenon is well known as the "learning effect" in the literature [2].
The learning effect has attracted growing attention in the scheduling community on account of its significance. For instance, Biskup [3] pointed out that repeated processing of similar tasks improves workers' skills and introduced a scheduling model with learning effects in which the actual processing time of a job is a function of its position in the schedule. Mosheiov [14] found that under Biskup's learning effect model the optimal schedules for some classical scheduling problems remain valid, but they require much greater computational effort to obtain. Cheng and Wang [6] considered a single-machine scheduling problem with a volume-dependent, piecewise linear processing time function to model the learning effect. Lee et al. [11] studied a bi-criterion scheduling problem on a single machine. Lin [12] gave complexity results for single-machine scheduling with positional learning effects.

Kai-biao Sun · Hong-xing Li (✉), School of Electronic and Information Engineering, Dalian University of Technology, Dalian 116024, P.R. China; e-mail: [email protected], [email protected]

Besides the job-position-dependent learning effect scheduling model, there are several other learning effect scheduling models in the literature. Alidaee and Womer [1] presented a review and extensions of scheduling with time-dependent processing times. Cheng et al. [5] gave a concise survey of scheduling with time-dependent processing times. Wang and Cheng [17] studied a model in which the job processing times are a function of their starting times and positions in the sequence. Recently, Kuo and Yang [9,10] introduced a time-dependent learning effect which is a function of the total normal processing times of the jobs previously scheduled. Wang et al. [20] considered the same learning effect model as that in Kuo and Yang [9].
They proved that the weighted shortest processing time (WSPT) rule, the earliest due date (EDD) rule and the modified Moore–Hodgson algorithm can, under certain conditions, construct the optimal schedule for the problems of minimizing the following three objectives: total weighted completion time, maximum lateness and number of tardy jobs, respectively. Koulamas and Kyparisis [8] introduced a general sum-of-job-processing-times dependent learning effect model in which employees learn more if they perform a job with a longer processing time. Wang [19] investigated a scheduling model that considers setup times and a time-dependent learning effect at the same time. For a state-of-the-art review of scheduling models and problems with learning effects, the reader is referred to [4].

Biskup [4] classified learning models into two types, namely position-dependent learning and sum-of-processing-time-dependent learning. He further claimed that position-dependent learning assumes that learning takes place through processing-time-independent operations such as setting up machines. This seems to be a realistic assumption for the case where the actual processing time of a job is mainly machine-driven and involves little or no human interference. On the other hand, sum-of-processing-time-dependent learning takes into account the experience gained from producing the same or similar jobs over time. This might, for example, be the case for offset printing, where running the press itself is a highly complicated and error-prone process.

Recently, Wu and Lee [21] considered a learning model in which the learning effect is a function of the total normal processing times of the jobs already processed and of the job's scheduled position. Let $p_{jr}$ denote the actual processing time of job $J_j$ scheduled in the $r$th position; then
$$p_{jr} = p_j\Big(1 + \sum_{k=1}^{r-1} p_{[k]}\Big)^{a_1} r^{a_2},$$
where $p_{[k]}$ is the normal processing time of the $k$th job in a sequence and $a_1 \le 0$, $a_2 \le 0$.
They showed that the SPT sequence is optimal for minimizing the makespan and the total completion time on a single machine under the proposed learning model. In addition, they showed that the WSPT sequence is optimal for minimizing the sum of the weighted completion times if jobs have agreeable weights. Cheng et al. [7] considered another learning model, in which the learning effect is given as
$$p_{jr} = p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2},$$
where $a_1 \ge 1$, $a_2 \le 0$. Under this learning model, they obtained the same results as those in Wu and Lee [21]. Further, they showed that the EDD sequence is optimal for the maximum lateness problem if jobs have agreeable due dates. Besides, they presented polynomial-time optimal solutions for some special cases of the $m$-machine flowshop makespan and total completion time minimization problems.

From the above two learning models, it is not hard to see that the actual processing time of, say, the $r$th job is influenced by the sum of the normal processing times of the previously scheduled $r-1$ jobs. This learning effect model is based on the assumption that the number of operating processes in a job depends on the repetitions of the same operation; that is, the learning effect depends on the repetitions of the same operation in a job. However, in many situations the operating processes of a job are different, for example in car repair or maintenance and in patient diagnosis and treatment. The conditions of cars or patients differ, so there are no identical repetitions of operating processes in the job. Nevertheless, there still exists a certain learning effect after operating the job. In such situations, the learning effect is due to the experience of operating jobs, i.e., to the total actual processing time of the jobs.
Therefore, in this paper we study a new learning effect scheduling model in which the learning effect is assumed to be a function of the sum of the actual processing times of the jobs already processed and of the job's scheduled position.

The remainder of this paper is organized as follows. In Section 2 we define the problem formally. Solution procedures for the single-machine problems of minimizing the makespan, the sum of the kth powers of completion times, the total weighted completion time, the maximum lateness and the number of tardy jobs under the proposed model are presented in the next section. We conclude the paper in the last section.

2. Problem Formulation

Suppose that there are $n$ jobs to be scheduled on a single machine. Each job $J_j$ has a normal processing time $p_j$, a weight $w_j$ and a due date $d_j$ ($1 \le j \le n$). Let $p_{[k]}$ and $p^A_{[k]}$ be the normal processing time and the actual processing time, respectively, of the job scheduled in the $k$th position in a sequence, and let $p^A_{ir}$ be the actual processing time of job $J_i$ when it is scheduled in position $r$ in a sequence. Then
$$p^A_{ir} = p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2}, \qquad (1)$$
where $p^A_{[1]} = p_{[1]}$, $p^A_{[s]} = p_{[s]}\big(1 - \sum_{k=1}^{s-1} p^A_{[k]}/\sum_{k=1}^{n} p_k\big)^{a_1} s^{a_2}$ ($1 < s \le r-1$), $a_1 \ge 1$ and $a_2 \le 0$. For convenience, we denote the actual time and position dependent learning effect defined in Eq. (1) by $LE_{at\text{-}p}$.

For a given schedule $S$, $C_j = C_j(S)$ represents the completion time of $J_j$ in $S$. Let $C_{\max} = \max\{C_1, C_2, \dots, C_n\}$ denote the makespan. Let $\sum w_j C_j$, $\sum C_j^k$ ($k > 0$), $L_{\max} = \max\{C_j - d_j \mid j = 1,\dots,n\}$ and $\sum U_j$, where $U_j = 0$ if $C_j \le d_j$ and $U_j = 1$ otherwise, represent the sum of weighted completion times, the sum of the kth powers of completion times, the maximum lateness and the number of tardy jobs, respectively.
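To make Eq. (1) concrete, the completion times of a given sequence can be obtained by accumulating the actual processing times. The following Python sketch (our own illustration; the function name and the sample data are ours) evaluates the model on a three-job instance:

```python
def actual_schedule(seq, p, a1, a2):
    """Actual processing times and completion times under LE_at-p (Eq. (1)).

    seq -- job indices in processing order
    p   -- normal processing times
    a1 >= 1, a2 <= 0 -- learning indices
    """
    P = sum(p)
    done = 0.0                  # total actual time of jobs already processed
    actual, completion = [], []
    for r, i in enumerate(seq, start=1):
        pa = p[i] * (1.0 - done / P) ** a1 * r ** a2   # Eq. (1)
        done += pa
        actual.append(pa)
        completion.append(done)
    return actual, completion

# e.g. three jobs with p = (1, 2, 57), a1 = 3, a2 = -0.5
_, C = actual_schedule([0, 1, 2], [1, 2, 57], a1=3, a2=-0.5)
cmax = C[-1]   # makespan of this order, roughly 31.54
```

Note that the learning factor is built from the *actual* times already accumulated, not the normal times, which is exactly what distinguishes this model from the Wu-Lee and Cheng et al. models above.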
Using the traditional notation, we refer to the problem as $1|LE_{at\text{-}p}|f$ if the criterion is to minimize $f$, where $f \in \{C_{\max}, \sum C_j^k, \sum w_j C_j, L_{\max}, \sum U_j\}$.

3. Several Single-machine Scheduling Problems

In this section, we consider several single-machine scheduling problems with this learning effect model. Suppose that $S_1$ and $S_2$ are two job schedules. The difference between $S_1$ and $S_2$ is a pairwise interchange of two adjacent jobs $J_i$ and $J_j$, i.e., $S_1 = (\sigma, J_i, J_j, \sigma')$ and $S_2 = (\sigma, J_j, J_i, \sigma')$, where $\sigma$ and $\sigma'$ denote the partial sequences of $S_1$ (or $S_2$) before and after $J_i$ and $J_j$, respectively; $\sigma$ (or $\sigma'$) may be empty. Furthermore, we assume that there are $r-1$ jobs in $\sigma$. In addition, let $C_\sigma$ denote the completion time of the last job in $\sigma$ and let $J_k$ be the first job in $\sigma'$. Under $S_1$, the completion times of jobs $J_i$ and $J_j$ are, respectively,
$$C_i(S_1) = C_\sigma + p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} \qquad (2)$$
and
$$C_j(S_1) = C_\sigma + p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} + p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{ir}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} (r+1)^{a_2}. \qquad (3)$$
Similarly, the completion times of jobs $J_j$ and $J_i$ under $S_2$ are, respectively,
$$C_j(S_2) = C_\sigma + p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} \qquad (4)$$
and
$$C_i(S_2) = C_\sigma + p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} + p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{jr}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} (r+1)^{a_2}. \qquad (5)$$
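Equations (2)-(5) describe what happens when two adjacent jobs are interchanged; the key inequality used repeatedly in what follows is that placing the shorter of the two jobs first never delays the completion of the pair, i.e., $C_j(S_1) \le C_i(S_2)$ when $p_i \le p_j$. A quick randomized check of this claim (our own sketch; it recomputes completion times by direct simulation of Eq. (1) rather than via the closed forms) might read:

```python
import random

def cmax_pair(order, p, a1, a2):
    """Completion time of the last job of `order` under Eq. (1)."""
    P, done = sum(p), 0.0
    for r, i in enumerate(order, start=1):
        done += p[i] * (1.0 - done / P) ** a1 * r ** a2
    return done

random.seed(0)
for _ in range(200):
    n = random.randint(2, 6)
    p = [random.uniform(0.5, 5.0) for _ in range(n)]
    a1 = random.uniform(1.0, 3.0)          # a1 >= 1
    a2 = -random.uniform(0.0, 1.0)         # a2 <= 0
    order = list(range(n))
    random.shuffle(order)
    # pick two adjacent positions and put the shorter job first in s1
    r = random.randrange(n - 1)
    i, j = order[r], order[r + 1]
    if p[i] > p[j]:
        i, j = j, i
    s1 = order[:r] + [i, j] + order[r + 2:]
    s2 = order[:r] + [j, i] + order[r + 2:]
    # completion of the second job of the pair: C_j(S1) <= C_i(S2)
    assert cmax_pair(s1[:r + 2], p, a1, a2) <= cmax_pair(s2[:r + 2], p, a1, a2) + 1e-12
```

This only probes the inequality numerically; the formal argument is given via Lemma 1 in the proof of Theorem 2 below.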
3.1 The Makespan Criterion

In the classical single-machine makespan minimization problem, the makespan value is sequence-independent. However, this may change when a learning effect is considered. Wu and Lee [21] and Cheng et al. [7] showed that the SPT rule is optimal when the learning is based on the sum of the normal processing times of the jobs previously scheduled and the job's scheduled position. But the following example shows that the SPT order does not yield an optimal schedule under the proposed model.

Example 1 Let $n = 3$, $p_1 = 1$, $p_2 = 2$, $p_3 = 57$, $a_1 = 3$ and $a_2 = -0.5$. The SPT schedule $(J_1, J_2, J_3)$ yields the makespan
$$C_{\max} = 1 + 2\Big(1-\tfrac{1}{60}\Big)^3 2^{-0.5} + 57\Big(1 - \tfrac{1 + 2(1-\frac{1}{60})^3 2^{-0.5}}{60}\Big)^3 3^{-0.5} = 31.5444.$$
Obviously, the sequence $(J_2, J_1, J_3)$ yields the optimal value
$$C_{\max} = 2 + 1\Big(1-\tfrac{2}{60}\Big)^3 2^{-0.5} + 57\Big(1 - \tfrac{2 + 1(1-\frac{2}{60})^3 2^{-0.5}}{60}\Big)^3 3^{-0.5} = 31.3940.$$

In order to solve the problem approximately, we use the SPT rule as a heuristic for the problem $1|LE_{at\text{-}p}|C_{\max}$. The performance of the SPT algorithm is evaluated by its worst-case error bound.

Theorem 1 Let $S^*$ be an optimal schedule and $S$ be an SPT schedule for the problem $1|LE_{at\text{-}p}|C_{\max}$. Then
$$\rho_1 = \frac{C_{\max}(S)}{C_{\max}(S^*)} \le \Big(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\Big)^{a_1},$$
where $p_{\min} = \min\{p_j \mid j = 1,2,\dots,n\}$.

Proof Without loss of generality, we can suppose that $p_1 \le p_2 \le \cdots \le p_n$. Let $P = \sum_{k=1}^{n} p_k$. Then
$$C_{\max}(S) = p^A_1 + p^A_2 + \cdots + p^A_n = p_1 + p_2\Big(1-\tfrac{p_1}{P}\Big)^{a_1} 2^{a_2} + \cdots + p_n\Big(1-\tfrac{p_1+\cdots+p_{n-1}}{P}\Big)^{a_1} n^{a_2} \le \sum_{k=1}^{n} p_k k^{a_2}$$
and
$$C_{\max}(S^*) = p^A_{[1]} + p^A_{[2]} + \cdots + p^A_{[n]} = p_{[1]} + p_{[2]}\Big(1-\tfrac{p_{[1]}}{P}\Big)^{a_1} 2^{a_2} + \cdots + p_{[n]}\Big(1-\tfrac{p_{[1]}+\cdots+p_{[n-1]}}{P}\Big)^{a_1} n^{a_2} \ge \Big(1-\tfrac{p_{[1]}+\cdots+p_{[n-1]}}{P}\Big)^{a_1}\sum_{k=1}^{n} p_{[k]} k^{a_2} \ge \Big(\frac{p_{\min}}{P}\Big)^{a_1}\sum_{k=1}^{n} p_k k^{a_2}.$$
Hence
$$\rho_1 = \frac{C_{\max}(S)}{C_{\max}(S^*)} \le \Big(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\Big)^{a_1}.$$

When $a_1 = 0$ we have $\rho_1 = 1$, which is consistent with the fact that the SPT schedule is optimal when the learning effect is based only on the job's scheduled position [3].

Although the SPT order does not provide an optimal schedule under the proposed learning model in general, it still gives an optimal solution when the processing times of the jobs satisfy a certain condition. We first give two useful lemmas before presenting the results.

Lemma 1 $1 - \lambda + \lambda(1-x)^a\alpha - (1-\lambda x)^a\alpha \le 0$ for $\lambda \ge 1$, $0 \le x \le \frac{1}{\lambda}$, $a \ge 1$ and $0 \le \alpha \le 1$.

Proof Let $F(x) = 1 - \lambda + \lambda(1-x)^a\alpha - (1-\lambda x)^a\alpha$. Taking the first derivative of $F(x)$ with respect to $x$, we have
$$F'(x) = -\lambda\alpha a(1-x)^{a-1} + \alpha a\lambda(1-\lambda x)^{a-1} = a\lambda\alpha\big[(1-\lambda x)^{a-1} - (1-x)^{a-1}\big].$$
Since $\lambda \ge 1$ and $0 \le x \le \frac{1}{\lambda}$, it follows that $F'(x) \le 0$, so $F(x)$ is non-increasing on $0 \le x \le \frac{1}{\lambda}$. Therefore, $F(x) \le F(0) = (1-\lambda)(1-\alpha) \le 0$ for $\lambda \ge 1$, $0 \le x \le \frac{1}{\lambda}$, $a \ge 1$ and $0 \le \alpha \le 1$.

Lemma 2 For any $0 \le y \le x \le 1$ and $a \ge 1$, $(1-x+y)^a - (1-x)^a \le ay$.

Proof Let $f(u) = u^a$ ($u > 0$). According to the mean value theorem, for any $u > 0$ and $u_0 > 0$ there exists a point $\xi$ between $u$ and $u_0$ such that $f(u) = f(u_0) + f'(\xi)(u - u_0)$. Taking the first and second derivatives of $f(u)$ with respect to $u$, we obtain $f'(u) = au^{a-1}$ and $f''(u) = a(a-1)u^{a-2} \ge 0$, hence $f'(u)$ is non-decreasing in $u$. Letting $u = 1-x+y$ and $u_0 = 1-x$, we have
$$(1-x+y)^a - (1-x)^a = ay\,\xi^{a-1} \le ay(1-x+y)^{a-1} \le ay$$
for $0 \le y \le x \le 1$.

Theorem 2 For the makespan problem $1|LE_{at\text{-}p}|C_{\max}$, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$, then an optimal schedule can be obtained by sequencing jobs in non-decreasing order of $p_j$ (i.e., by the SPT rule).

Proof Suppose $p_i \le p_j$. To show that $S_1$ dominates $S_2$, it suffices to show $C_j(S_1) \le C_i(S_2)$ and $C_l(S_1) \le C_l(S_2)$ for any $J_l$ in $\sigma'$. Taking the difference between Eq. (3) and Eq. (5), we have
$$C_j(S_1) - C_i(S_2) = (p_i - p_j)\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} + p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{ir}}{\sum_{k=1}^{n} p_k}\Big)^{a_1}(r+1)^{a_2} - p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{jr}}{\sum_{k=1}^{n} p_k}\Big)^{a_1}(r+1)^{a_2}.$$

Let $x = 1 - \sum_{k=1}^{r-1} p^A_{[k]}/\sum_{k=1}^{n} p_k$. Then $p^A_{ir} = p_i x^{a_1} r^{a_2}$ and $p^A_{jr} = p_j x^{a_1} r^{a_2}$. Thus
$$\frac{C_j(S_1) - C_i(S_2)}{x^{a_1} r^{a_2}} = p_i - p_j + p_j\Big(1 - \frac{p_i x^{a_1-1} r^{a_2}}{\sum_{k=1}^{n} p_k}\Big)^{a_1}\Big(\frac{r+1}{r}\Big)^{a_2} - p_i\Big(1 - \frac{p_j x^{a_1-1} r^{a_2}}{\sum_{k=1}^{n} p_k}\Big)^{a_1}\Big(\frac{r+1}{r}\Big)^{a_2}. \qquad (6)$$

Let $t = p_i x^{a_1-1} r^{a_2}/\sum_{k=1}^{n} p_k$, $\lambda = p_j/p_i$ and $\alpha = \big(\frac{r+1}{r}\big)^{a_2}$. Clearly $t \ge 0$, $\lambda \ge 1$ and $0 \le \alpha \le 1$. By substituting $t$, $\lambda$ and $\alpha$ into Eq. (6), it simplifies to
$$\frac{C_j(S_1) - C_i(S_2)}{x^{a_1} r^{a_2}} = p_i\big\{1 - \lambda + \lambda(1-t)^{a_1}\alpha - (1-\lambda t)^{a_1}\alpha\big\}.$$
By Lemma 1, we have $C_j(S_1) \le C_i(S_2)$.

Note that $J_k$, the first job in $\sigma'$, is scheduled in the $(r+2)$th position. Then we have
$$C_k(S_1) = C_j(S_1) + p_k\Big(1 - \frac{C_j(S_1)}{\sum_{k=1}^{n} p_k}\Big)^{a_1}(r+2)^{a_2} \qquad (7)$$
and
$$C_k(S_2) = C_i(S_2) + p_k\Big(1 - \frac{C_i(S_2)}{\sum_{k=1}^{n} p_k}\Big)^{a_1}(r+2)^{a_2}. \qquad (8)$$

Let $\Delta = C_i(S_2) - C_j(S_1) \ge 0$. Taking the difference between Eq. (7) and Eq. (8), we have
$$C_k(S_1) - C_k(S_2) = C_j(S_1) - C_i(S_2) + p_k\Big[\Big(1 - \frac{C_j(S_1)}{\sum_{k=1}^{n} p_k}\Big)^{a_1} - \Big(1 - \frac{C_i(S_2)}{\sum_{k=1}^{n} p_k}\Big)^{a_1}\Big](r+2)^{a_2} = -\Delta + p_k\Big[\Big(1 - \frac{C_i(S_2)}{\sum_{k=1}^{n} p_k} + \frac{\Delta}{\sum_{k=1}^{n} p_k}\Big)^{a_1} - \Big(1 - \frac{C_i(S_2)}{\sum_{k=1}^{n} p_k}\Big)^{a_1}\Big](r+2)^{a_2} \le -\Delta + p_k a_1 \frac{\Delta}{\sum_{k=1}^{n} p_k}(r+2)^{a_2} \;\text{(by Lemma 2)}\; = \frac{\Delta}{\sum_{k=1}^{n} p_k}\Big[p_k a_1 (r+2)^{a_2} - \sum_{k=1}^{n} p_k\Big]. \qquad (9)$$

For any job $J_l$ ($l = 1,2,\dots,n$), if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$, then, since $(r+2)^{a_2} \le 3^{a_2}$, Eq. (9) is not larger than zero, i.e., $C_k(S_1) \le C_k(S_2)$. Similarly, we have $C_h(S_1) \le C_h(S_2)$ for any other job $J_h$ in $\sigma'$. Thus the makespan of $S_1 = (\sigma, J_i, J_j, \sigma')$ is not larger than that of $S_2 = (\sigma, J_j, J_i, \sigma')$. Repeating this interchange argument for all jobs not sequenced by the SPT rule yields the theorem.

3.2 The Sum of the kth Powers of Completion Times Criterion

Townsend [16] showed that the problem $1||\sum C_j^2$ can be solved by the SPT rule. When learning and deterioration of jobs are considered, Wang [18] showed that the problem $1|p_{jr} = p_j\alpha(t) + wr^{a_1}|\sum C_j^2$ can also be solved by the SPT rule. In this section, we consider a more general measure, the sum of the kth powers of completion times. We again use the SPT rule as a heuristic for the problem $1|LE_{at\text{-}p}|\sum_{j=1}^{n} C_j^k$.

Theorem 3 Let $S^*$ be an optimal schedule and $S$ be an SPT schedule for the problem $1|LE_{at\text{-}p}|\sum_{j=1}^{n} C_j^k$. Then
$$\rho_2 = \frac{\sum_{j=1}^{n} C_j^k(S)}{\sum_{j=1}^{n} C_j^k(S^*)} \le \Big(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\Big)^{ka_1},$$
where $p_{\min} = \min\{p_j \mid j = 1,2,\dots,n\}$.

Proof Without loss of generality, we can suppose that $p_1 \le p_2 \le \cdots \le p_n$. Let $P = \sum_{k=1}^{n} p_k$. Then
$$\sum_{j=1}^{n} C_j^k(S) = \sum_{j=1}^{n}\big(p^A_1 + \cdots + p^A_j\big)^k \le \sum_{j=1}^{n}\Big(\sum_{l=1}^{j} p_l l^{a_2}\Big)^k$$
and
$$\sum_{j=1}^{n} C_j^k(S^*) = \sum_{j=1}^{n}\big(p^A_{[1]} + \cdots + p^A_{[j]}\big)^k \ge \sum_{j=1}^{n}\Big(\Big(1-\tfrac{p_{[1]}+\cdots+p_{[n-1]}}{P}\Big)^{a_1}\sum_{l=1}^{j} p_{[l]} l^{a_2}\Big)^k \ge \Big(\frac{p_{\min}}{P}\Big)^{ka_1}\sum_{j=1}^{n}\Big(\sum_{l=1}^{j} p_l l^{a_2}\Big)^k.$$
Hence
$$\rho_2 = \frac{\sum_{j=1}^{n} C_j^k(S)}{\sum_{j=1}^{n} C_j^k(S^*)} \le \Big(\frac{\sum_{k=1}^{n} p_k}{p_{\min}}\Big)^{ka_1}.$$

If the job processing times satisfy a certain condition, we have the following.

Theorem 4 For the problem $1|LE_{at\text{-}p}|\sum C_j^k$, where $k$ is a positive real number, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$, then there exists an optimal schedule in which the job sequence is determined by the SPT rule.

Proof Suppose $p_i \le p_j$. We prove the theorem by an adjacent pairwise interchange argument. By comparing Eq. (2) and Eq. (4), it is easy to see that $C_i(S_1) \le C_j(S_2)$. On the other hand, by the proof of Theorem 2, we have $C_j(S_1) \le C_i(S_2)$ and $C_l(S_1) \le C_l(S_2)$ for any job $J_l$ ($l \ne i,j$). Therefore, since $k$ is a positive real number, we have $\sum_j C_j^k(S_1) \le \sum_j C_j^k(S_2)$. This completes the proof.

Corollary 1 For the total completion time minimization problem $1|LE_{at\text{-}p}|\sum C_j$, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$, then there exists an optimal schedule in which the job sequence is determined by the SPT rule.

3.3 The Total Weighted Completion Time Criterion

Smith [15] showed that sequencing jobs according to the weighted shortest processing time (WSPT) rule, i.e., in non-decreasing order of $p_j/w_j$, where $w_j$ is the weight of job $J_j$, provides an optimal schedule for the classical total weighted completion time problem. However, the WSPT order does not yield an optimal schedule under the proposed learning model, as the following example shows.

Example 2 Let $n = 2$, $p_1 = 1$, $p_2 = 2$, $w_1 = 10$, $w_2 = 30$, $a_1 = 1$ and $a_2 = -0.5$. The WSPT schedule $(J_2, J_1)$ yields the value
$$\sum w_j C_j = 30 \times 2 + 10 \times \Big[2 + 1 \times \Big(1-\tfrac{2}{3}\Big) \times 2^{-0.5}\Big] \approx 82.36.$$
Obviously, the optimal sequence $(J_1, J_2)$ yields the smaller value
$$\sum w_j C_j = 10 \times 1 + 30 \times \Big[1 + 2 \times \Big(1-\tfrac{1}{3}\Big) \times 2^{-0.5}\Big] \approx 68.28.$$

Although the WSPT order does not provide an optimal schedule under the proposed learning model, it still gives an optimal solution if the processing times and the weights of the jobs satisfy certain conditions. Before presenting the main result, we first give two useful lemmas.

Lemma 3 $1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha \ge 0$ for $0 \le t \le 1$, $0 \le \alpha \le 1$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$.

Proof Let $F(t, \lambda_1, \lambda_2) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha$. Then
$$\frac{\partial F}{\partial \lambda_1} = -(1-t)^{a_1}\alpha \le 0, \qquad \frac{\partial F}{\partial \lambda_2} = -a_1 t(1-t)^{a_1-1}\alpha \le 0.$$
Thus we have $F(t, \lambda_1, \lambda_2) \ge F(t, 1, 1) = 1 - (1-t)^{a_1}\alpha - a_1 t(1-t)^{a_1-1}\alpha$. Let $\varphi(t) = F(t, 1, 1)$. Then $\varphi'(t) = a_1(a_1-1)t(1-t)^{a_1-2}\alpha \ge 0$, hence $\varphi(t)$ is non-decreasing in $t$ and $\varphi(t) \ge \varphi(0) = 1 - \alpha \ge 0$ for $t \ge 0$. Therefore, $F(t, \lambda_1, \lambda_2) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha \ge 0$ for $0 \le t \le 1$, $0 \le \alpha \le 1$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$. This completes the proof.

Lemma 4 $\lambda[1 - \lambda_1(1-t)^{a_1}\alpha] - [1 - \lambda_2(1-\lambda t)^{a_1}\alpha] \ge 0$ for $\lambda \ge 1$, $t \ge 0$, $a_1 \ge 1$, $0 \le \alpha \le 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$.

Proof Let
$$H(\lambda) = \lambda[1 - \lambda_1(1-t)^{a_1}\alpha] - [1 - \lambda_2(1-\lambda t)^{a_1}\alpha]. \qquad (10)$$
Taking the first and second derivatives of Eq. (10) with respect to $\lambda$, we have
$$H'(\lambda) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-\lambda t)^{a_1-1}\alpha$$
and
$$H''(\lambda) = \lambda_2 a_1(a_1-1)t^2(1-\lambda t)^{a_1-2}\alpha \ge 0.$$
Hence $H'(\lambda)$ is non-decreasing, i.e., $H'(\lambda) \ge H'(1)$ for $\lambda \ge 1$. By Lemma 3, we have $H'(1) = 1 - \lambda_1(1-t)^{a_1}\alpha - \lambda_2 a_1 t(1-t)^{a_1-1}\alpha \ge 0$, and therefore $H'(\lambda) \ge H'(1) \ge 0$. Hence $H(\lambda)$ is non-decreasing in $\lambda$, i.e., $H(\lambda) \ge H(1) = \alpha(\lambda_2 - \lambda_1)(1-t)^{a_1} \ge 0$ for $\lambda \ge 1$, $0 \le \alpha \le 1$, $0 \le t \le 1$, $a_1 \ge 1$ and $0 \le \lambda_1 \le \lambda_2 \le 1$. This completes the proof.
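Lemma 4 is the workhorse of the next theorem; a numerical spot-check of its inequality over a grid of admissible parameters can be written as follows (our own sketch, not a substitute for the proof; we additionally keep $\lambda t \le 1$ so that the power is well defined, as holds where the lemma is applied):

```python
def H(lam, lam1, lam2, t, alpha, a1):
    """H from Lemma 4: lam*[1 - lam1*(1-t)**a1*alpha] - [1 - lam2*(1-lam*t)**a1*alpha]."""
    return (lam * (1.0 - lam1 * (1.0 - t) ** a1 * alpha)
            - (1.0 - lam2 * (1.0 - lam * t) ** a1 * alpha))

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
ok = all(
    H(lam, lam1, lam2, t, alpha, a1) >= -1e-12
    for lam in (1.0, 1.5, 2.0)                    # lambda >= 1
    for lam1 in grid for lam2 in grid if lam1 <= lam2
    for t in grid if lam * t <= 1.0               # keep (1 - lam*t) >= 0
    for alpha in grid
    for a1 in (1.0, 2.0, 3.5)                     # a1 >= 1
)
```

On this grid, `ok` evaluates to `True`; equality is attained, for instance, at $\lambda = 1$ with $\lambda_1 = \lambda_2$ and $\alpha = 1$.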
Theorem 5 For the total weighted completion time problem $1|LE_{at\text{-}p}|\sum w_j C_j$, if $p_l \le \sum_{k=1}^{n} p_k/(a_1 3^{a_2})$ for $l = 1,2,\dots,n$ and jobs have reversely agreeable weights, i.e., $p_i < p_j$ implies $w_i \ge w_j$ for all jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by sequencing jobs by the WSPT rule.

Proof By the pairwise job interchange technique. Suppose $p_i/w_i \le p_j/w_j$; this also implies $p_i \le p_j$ by the reversely agreeable weights of the jobs. In order to prove that the WSPT schedule is optimal, it is sufficient to show $w_i C_i(S_1) + w_j C_j(S_1) \le w_i C_i(S_2) + w_j C_j(S_2)$, since $C_l(S_1) \le C_l(S_2)$ for any job $J_l$, $l \ne i,j$, by the proof of Theorem 2. From Eqs. (2)-(5), we have
$$w_i C_i(S_1) + w_j C_j(S_1) = (w_i + w_j)C_\sigma + (w_i + w_j)p_i\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]}}{\sum_{k=1}^{n} p_k}\Big)^{a_1} r^{a_2} + w_j p_j\Big(1 - \frac{\sum_{k=1}^{r-1} p^A_{[k]} + p^A_{ir}}{\sum_{k=1}^{n} p_k}\Big)^{a_1}(r+1)^{a_2}$$
and
A ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ p ⎟ ⎜ p + p ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ jr ⎟ ⎥ [k] [k] ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ k=1 ⎟ ⎜ k=1 ⎟ ⎥ a a ⎢ ⎜ ⎟ 2 ⎜ ⎟ 2⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ +w C + p 1− r + p 1− (r+ 1) i ⎢ σ j ⎜ ⎟ i ⎜ ⎟ ⎥ n n ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎜ ⎟ ⎥ ⎢ ⎜ p ⎟ ⎜ p ⎟ ⎥ ⎢ ⎜ k ⎟ ⎜ k ⎟ ⎥ ⎣ ⎝ ⎠ ⎝ ⎠ ⎦ k=1 k=1 ⎛ ⎞ ⎛ ⎞ a a 1 1 r−1 r−1 ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ A ⎟ ⎜ A A ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ p ⎟ ⎜ p + p ⎟ ⎜ ⎟ ⎜ ⎟ [k] [k] jr ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ k=1 k=1 ⎜ ⎟ ⎜ ⎟ a a ⎜ ⎟ 2 ⎜ ⎟ 2 ⎜ ⎟ ⎜ ⎟ = (w + w )C + (w + w )p 1− r + w p 1− (r+ 1) . i j σ i j j ⎜ ⎟ i i ⎜ ⎟ n n ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ p p ⎜ k ⎟ ⎜ k ⎟ ⎝ ⎠ ⎝ ⎠ k=1 k=1 172 Kai-biao Sun · Hong-xing Li (2009) r−1 k=1 [k] Let x = 1−  . Then we have k=1 w C (S )+ w C (S )− w C (S )− w C (S ) i i 2 j j 2 i i 1 j j 1 a a 1 2 (w + w )x r i j ⎡ ⎛ ⎞ ⎤ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ a −1 2 ⎢ ⎜ 1 ⎟ ⎥ ⎢ w ⎜ ⎟ ⎥ p x r+ 1 ⎢ j ⎜ i ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ = p 1− 1− j ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ n ⎟ ⎥ ⎢ w + w ⎜ ⎟ r ⎥ ⎢ i j ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ p ⎟ ⎥ ⎣ ⎝ k ⎠ ⎦ k=1 ⎡ ⎛ ⎞ ⎤ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ a ⎥ a −1 2 ⎢ ⎜ 1 ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ w p x r+ 1 ⎢ i ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ −p 1− 1− . (11) i ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ n ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ w + w r ⎢ i j ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎢ ⎜ ⎟ ⎥ ⎣ ⎝ ⎠ ⎦ k=1 a −1 2 w p w p x r+ 1 j i j i Let λ = ,λ = , λ = , t =  and α = . Clearly 1 2 w + w w + w p p r i j i j i k k=1 λ  1, 0  λ  λ  1, t > 0 and 0  α  1. By substituting α, λ , λ and t into Eq. 1 2 1 2 (11), it is simplified to w C (S )+ w C (S )− w C (S )− w C (S ) i i 2 j j 2 i i 1 j j 1 a a 1 2 (w + w )x r i j a a = p{λ[1−λ (1− t) α]− [1−λ (1−λt) α]}. i 1 2 Thus, by Lemma 4, we have w C (S ) + w C (S )  w C (S ) + w C (S ). Thus, i i 2 j j 2 i i 1 j j 1 repeating this interchanging argument for all the jobs not sequenced in the WSPT rule, we completes the proof. Corollary 2 For the problem 1|LE , p = p| w C , an optimal schedule can be at-p j j j obtained by sequencing jobs in non-increasing order of the weights w . 
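The interchange argument above can be checked numerically. The following sketch is our own illustration, not code from the paper: the function names and the instance data are ours, while the actual-time formula implements the paper's model $p^A_{jr}=p_j\bigl(1-\sum_{k=1}^{r-1}p^A_{[k]}/\sum_{k=1}^{n}p_k\bigr)^{a_1}r^{a_2}$. It compares the WSPT sequence against all permutations on a small instance satisfying the conditions of Theorem 5.

```python
from itertools import permutations

def actual_times(seq, p, a1, a2):
    # p_{j,r}^A = p_j * (1 - (sum of earlier actual times)/(sum of all p_k))^a1 * r^a2
    P = sum(p.values())
    done, out = 0.0, []
    for r, j in enumerate(seq, start=1):
        pa = p[j] * (1.0 - done / P) ** a1 * r ** a2
        out.append(pa)
        done += pa
    return out

def total_weighted_completion(seq, p, w, a1, a2):
    # Sum of w_j * C_j along the given sequence, jobs starting at time zero.
    t, total = 0.0, 0.0
    for j, pa in zip(seq, actual_times(seq, p, a1, a2)):
        t += pa
        total += w[j] * t
    return total

# A small instance (our own numbers) with reversely agreeable weights
# (smaller p_j goes with larger w_j) and p_l <= sum(p_k)/a1, a1 >= 1, a2 <= 0.
p = {1: 2.0, 2: 3.0, 3: 5.0}
w = {1: 5.0, 2: 3.0, 3: 2.0}
a1, a2 = 1.0, -0.2

wspt = tuple(sorted(p, key=lambda j: p[j] / w[j]))
best = min(permutations(p), key=lambda s: total_weighted_completion(s, p, w, a1, a2))
```

On this instance the WSPT order $(J_1, J_2, J_3)$ attains the minimum over all six sequences, as Theorem 5 predicts.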
If $w_j = k/p_j$ $(k > 0)$ for $j = 1, 2,\cdots, n$, then the reversely agreeable condition is satisfied, and thus we have

Corollary 3  For the problem $1|LE_{at\text{-}p}, w_j = k/p_j|\sum w_jC_j$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$, then an optimal schedule can be obtained by sequencing the jobs in the WSPT rule.

3.4 The Maximum Lateness Criterion

For the classical maximum lateness problem, the earliest due date (EDD) rule provides an optimal schedule. However, this policy is not optimal under the proposed learning model, as the following example shows.

Example 3  Let $n = 2$, $p_1 = 10$, $p_2 = 20$, $d_1 = 23$, $d_2 = 21$, $a_1 = 1$ and $a_2 = -1$. The EDD schedule $(J_2, J_1)$ yields $C_2 = 20$ and $C_1 = 20 + 10\times(1-\frac{20}{30})\times 2^{-1} = \frac{65}{3}$, so $L_{\max} = \max\{20-21, \frac{65}{3}-23\} = -1$, while the sequence $(J_1, J_2)$ yields $C_1 = 10$ and $C_2 = 10 + 20\times(1-\frac{10}{30})\times 2^{-1} = \frac{50}{3}$, so $L_{\max} = \max\{10-23, \frac{50}{3}-21\} = -\frac{13}{3} < -1$. Hence the EDD schedule is not optimal.

In order to solve the problem approximately, we still use the EDD rule as a heuristic for the problem $1|LE_{at\text{-}p}|L_{\max}$. To develop a worst-case performance ratio for a heuristic, we have to avoid cases involving non-positive $L_{\max}$. Similar to Cheng and Wang [6], the worst-case error bound is defined as follows:
$$\rho = \frac{L_{\max}(S)+d_{\max}}{L_{\max}(S^*)+d_{\max}},$$
where $S$ and $L_{\max}(S)$ denote the heuristic schedule and the corresponding maximum lateness, respectively, while $S^*$ and $L_{\max}(S^*)$ denote an optimal schedule and the corresponding maximum lateness, respectively, and $d_{\max} = \max\{d_j \mid j = 1, 2,\cdots, n\}$.

Theorem 6  Let $S^*$ be an optimal schedule and $S$ be an EDD schedule for the problem $1|LE_{at\text{-}p}|L_{\max}$. Then
$$\rho = \frac{L_{\max}(S)+d_{\max}}{L_{\max}(S^*)+d_{\max}} \le \frac{\bigl(\sum_{k=1}^{n}p_k\bigr)^{a_1}\sum_{k=1}^{n}p_kk^{a_2}}{(p_{\min})^{a_1}\,C_{\max}(SPT)}.$$

Proof  Without loss of generality, suppose that $d_1 \le d_2 \le \cdots \le d_n$. Let $P = \sum_{k=1}^{n}p_k$. Then
$$
\begin{aligned}
L_{\max}(S) &= \max_{j=1,2,\cdots,n}\bigl\{p^A_1+p^A_2+\cdots+p^A_j-d_j\bigr\}\\
&= \max_{j=1,2,\cdots,n}\Bigl\{p_1 + p_2\Bigl(1-\frac{p^A_1}{P}\Bigr)^{a_1}2^{a_2}+\cdots+p_j\Bigl(1-\frac{p^A_1+\cdots+p^A_{j-1}}{P}\Bigr)^{a_1}j^{a_2}-d_j\Bigr\}\\
&\le \max_{j=1,2,\cdots,n}\bigl\{p_1+p_22^{a_2}+\cdots+p_jj^{a_2}-d_j\bigr\},
\end{aligned}
$$
and
$$
\begin{aligned}
L_{\max}(S^*) &= \max_{j=1,2,\cdots,n}\bigl\{p^A_{[1]}+p^A_{[2]}+\cdots+p^A_{[j]}-d_{[j]}\bigr\}\\
&= \max_{j=1,2,\cdots,n}\Bigl\{\sum_{i=1}^{j}p_{[i]}i^{a_2}-d_{[j]}-\sum_{i=1}^{j}p_{[i]}i^{a_2}+\sum_{i=1}^{j}p_{[i]}\Bigl(1-\frac{\sum_{k=1}^{i-1}p^A_{[k]}}{P}\Bigr)^{a_1}i^{a_2}\Bigr\}\\
&\ge \max_{j=1,2,\cdots,n}\Bigl\{\sum_{i=1}^{j}p_{[i]}i^{a_2}-d_{[j]}\Bigr\}-\sum_{i=1}^{n}p_{[i]}i^{a_2}+\sum_{i=1}^{n}p_{[i]}\Bigl(1-\frac{\sum_{k=1}^{i-1}p^A_{[k]}}{P}\Bigr)^{a_1}i^{a_2}\\
&\ge L_{\max}(S)-\sum_{k=1}^{n}p_kk^{a_2}+C^*_{\max}.
\end{aligned}
$$
Hence
$$L_{\max}(S)-L_{\max}(S^*) \le \sum_{k=1}^{n}p_kk^{a_2}-C^*_{\max},$$
and so
$$\rho = \frac{L_{\max}(S)+d_{\max}}{L_{\max}(S^*)+d_{\max}} \le 1+\frac{\sum_{k=1}^{n}p_kk^{a_2}-C^*_{\max}}{L_{\max}(S^*)+d_{\max}} \le \frac{\sum_{k=1}^{n}p_kk^{a_2}}{C^*_{\max}} \le \frac{\bigl(\sum_{k=1}^{n}p_k\bigr)^{a_1}\sum_{k=1}^{n}p_kk^{a_2}}{(p_{\min})^{a_1}\,C_{\max}(SPT)}.$$

Although the EDD rule cannot provide an optimal schedule for the maximum lateness problem under the proposed model, the problem remains polynomially solvable under certain conditions. Let EDD-SPT denote the restricted EDD rule where ties are broken in favor of jobs with smaller processing times, i.e., by the SPT rule.

Theorem 7  For the maximum lateness problem $1|LE_{at\text{-}p}|L_{\max}$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$ and the jobs have agreeable due dates, i.e., $p_i < p_j$ implies $d_i \le d_j$ for all jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by sequencing the jobs in the EDD-SPT rule.

Proof  Suppose $d_i \le d_j$. If $d_i < d_j$, this implies $p_i \le p_j$ due to the agreeable due dates of the jobs. If $d_i = d_j$, we assume $p_i \le p_j$.

By the proof of Theorem 2, we have $C_h(S_1) \le C_h(S_2)$ for any job $J_h$, $h \ne i, j$. Thus $L_h(S_1) \le L_h(S_2)$. In order to show that the EDD-SPT rule is optimal, it is sufficient to show $\max\{L_i(S_1), L_j(S_1)\} \le \max\{L_i(S_2), L_j(S_2)\}$.
Note that $L_i(S_1) = C_i(S_1)-d_i < C_j(S_1)-d_i \le C_i(S_2)-d_i = L_i(S_2)$ and $L_j(S_1) = C_j(S_1)-d_j \le C_i(S_2)-d_j \le C_i(S_2)-d_i = L_i(S_2)$. Therefore we have $\max\{L_i(S_1), L_j(S_1)\} \le \max\{L_i(S_2), L_j(S_2)\}$. This completes the proof.

Corollary 4  For the problem $1|LE_{at\text{-}p}, d_j = d|L_{\max}$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$, then an optimal schedule can be obtained by sequencing the jobs in the SPT rule.

Corollary 5  For the problem $1|LE_{at\text{-}p}, d_j = kp_j|L_{\max}$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$, then an optimal schedule can be obtained by sequencing the jobs in the EDD rule or the SPT rule.

3.5 The Number of Tardy Jobs Criterion

As is well known, the problem $1||\sum U_j$ can be solved by Moore's Algorithm [13]. In the following, we show that under certain conditions the problem $1|LE_{at\text{-}p}|\sum U_j$ can be solved by Moore's Algorithm with a slight modification. First, we give a formal description of Moore's Algorithm.

Moore's Algorithm (see Moore [13])

Step 1: Order the set of jobs according to the EDD rule and call the resulting ordering $S_{EDD} = (J_{i_1}, J_{i_2},\cdots, J_{i_n})$ the current sequence.

Step 2: Using the current sequence, find the first late job, denoted by $J_{i_q}$, and go to Step 3. If no such job is found, the algorithm terminates with an optimal schedule obtained by ordering the jobs in the current sequence, followed by the set of jobs that have been rejected, in any order.

Step 3: Reject the job with the largest processing time among $\{J_{i_1}, J_{i_2},\cdots, J_{i_q}\}$ and remove it from the sequence. Now go to Step 2 with the resulting sequence as the current sequence.

In order to solve our problem, we modify Moore's Algorithm by using the EDD-SPT rule instead of the EDD rule in the first step. We call the resulting algorithm Moore-SPT.
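Moore-SPT can be sketched in code as follows. This is an illustrative implementation under our reading of the model, not the paper's own code: the data layout and names are ours, and we assume jobs start at time zero with the rejected jobs processed last, so that rejections do not change the actual processing times of the remaining early jobs.

```python
def moore_spt(jobs, a1, a2):
    # jobs: id -> (p_j, d_j); learning model
    # p_{j,r}^A = p_j * (1 - (sum of earlier actual times)/(sum of all p_k))^a1 * r^a2
    P = sum(pj for pj, _ in jobs.values())
    # Step 1: EDD order with ties broken by SPT (the EDD-SPT rule).
    seq = sorted(jobs, key=lambda j: (jobs[j][1], jobs[j][0]))
    rejected = []
    while True:
        # Step 2: scan the current sequence for the first late job.
        t, first_late = 0.0, None
        for r, j in enumerate(seq, start=1):
            t += jobs[j][0] * (1.0 - t / P) ** a1 * r ** a2
            if t > jobs[j][1]:
                first_late = r  # position of the first late job
                break
        if first_late is None:
            # No late job: current sequence followed by the rejected jobs.
            return seq + rejected
        # Step 3: reject the longest job among the first `first_late` jobs.
        drop = max(seq[:first_late], key=lambda j: jobs[j][0])
        seq.remove(drop)
        rejected.append(drop)

result = moore_spt({1: (2.0, 3.0), 2: (3.0, 4.0), 3: (4.0, 6.0)}, 1.0, 0.0)
```

For instance, with the agreeable due dates above (our own numbers) and $a_1 = 1$, $a_2 = 0$, job $J_2$ is the first late job and is rejected, giving the sequence $(J_1, J_3, J_2)$ with one late job.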
Theorem 8  For the problem $1|LE_{at\text{-}p}|\sum U_j$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$ and the jobs have agreeable due dates, i.e., $p_i < p_j$ implies $d_i \le d_j$ for jobs $J_i$ and $J_j$, then an optimal schedule can be obtained by the Moore-SPT algorithm.

Proof  Assume that the jobs are indexed according to the EDD-SPT rule, i.e., $S_{EDD\text{-}SPT} = (J_1, J_2,\cdots, J_n)$. We denote a schedule by $S = (E, L)$, where $E$ is the set of early jobs and $L$ is the set of late jobs. Let $J_l$ and $J_j$ be the first late job and the first rejected job in the Moore-SPT algorithm, respectively.

We first show that there exists an optimal schedule $S^* = (E^*, L^*)$ with $J_j \in L^*$. Let $S_1 = (E_1, L_1)$ be an optimal schedule with $J_j \in E_1$. Since the jobs have agreeable due dates, the jobs in $S_{EDD\text{-}SPT}$ are also in SPT order. Thus there exists at least one job $J_k$ in $L_1$ with $p_k \ge p_j$ (otherwise, for any job $J_i$ with $p_i \ge p_j$, we would have $J_i \in E_1$; by Theorem 7, the EDD-SPT schedule is optimal for the jobs in $E_1$ when the objective is to minimize the maximum lateness, which contradicts the fact that $J_l$ is late). By interchanging the jobs $J_j$ and $J_k$, we obtain a new schedule $S_2 = (E_2, L_2)$ with $J_j \in L_2$. By the proof of Theorem 2, we have $|E_2| \ge |E_1|$. Thus we get an optimal schedule $S^* = S_2$ with $J_j \in L^*$.

The theorem can now be proved by induction on the number $n$ of jobs. Clearly, the theorem is correct if $n = 1$. Assume the theorem is correct for all problems with $n-1$ jobs. Let $S_{Moore\text{-}SPT} = (E, L)$ be the schedule constructed by the Moore-SPT algorithm and let $S^* = (E^*, L^*)$ be an optimal schedule with $J_j \in L^*$. Then, by optimality, we have $|E| \le |E^*|$. If we apply the Moore-SPT algorithm to the set of jobs $\{J_1, J_2,\cdots, J_{j-1}, J_{j+1},\cdots, J_n\}$, we get an optimal schedule of the form $(E, L\setminus\{J_j\})$. Because $(E^*, L^*\setminus\{J_j\})$ is feasible for the reduced problem, we have $|E^*| \le |E|$. Thus $|E| = |E^*|$ and $S_{Moore\text{-}SPT} = (E, L)$ is optimal.

Based on Theorem 8, we have the following results.
Theorem 9  For the problem $1|LE_{at\text{-}p}, p_j = p|\sum U_j$, there exists an optimal schedule in which the jobs are sequenced in the EDD rule.

Theorem 10  For the problem $1|LE_{at\text{-}p}, d_j = d|\sum U_j$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$, then there exists an optimal schedule in which the jobs are sequenced in the SPT rule.

Corollary 6  For the problem $1|LE_{at\text{-}p}, d_j = kp_j|\sum U_j$, if $p_l \le \sum_{k=1}^{n}p_k/a_1$ for $l = 1, 2,\cdots, n$, then an optimal schedule can be obtained by the Moore-SPT algorithm.

4. Conclusions

In this paper, we introduce an actual-time-dependent and position-dependent learning effect model. The learning effect of a job is assumed to be a function of the sum of the actual processing times of the jobs previously scheduled and of its scheduled position. We show by examples that the makespan minimization problem, the sum of kth power of completion times minimization problem, the weighted sum of completion times minimization problem, the maximum lateness minimization problem and the number of tardy jobs minimization problem cannot be optimally solved by the corresponding classical scheduling rules. For some special cases, however, the problems can be solved in polynomial time. We also use the classical rules as heuristic algorithms for these problems in the general cases and analyze their worst-case bounds, respectively. Further research may focus on determining the computational complexity of these problems, which remains open, or on proposing more sophisticated heuristics. It is also clearly worthwhile for future research to investigate the actual-time-dependent learning effect in the context of other scheduling environments, such as parallel-machine scheduling and flow-shop scheduling.

Acknowledgments

We are grateful to the anonymous referees for their helpful comments on an earlier version of this paper. This research was supported in part by the National Natural Science Foundation of China (60774049).

References

1.
Alidaee B, Womer NK (1999) Scheduling with time dependent processing times: Review and extensions. Journal of the Operational Research Society 50:711-720
2. Badiru AB (1992) Computational survey of univariate and multivariate learning curve models. IEEE Transactions on Engineering Management 39:176-188
3. Biskup D (1999) Single-machine scheduling with learning considerations. European Journal of Operational Research 115:173-178
4. Biskup D (2008) A state-of-the-art review on scheduling with learning effects. European Journal of Operational Research 188:315-329
5. Cheng TCE, Ding Q, Lin BMT (2004) A concise survey of scheduling with time-dependent processing times. European Journal of Operational Research 152:1-13
6. Cheng TCE, Wang G (2000) Single machine scheduling with learning effect considerations. Annals of Operations Research 98:273-290
7. Cheng TCE, Wu CC, Lee WC (2008) Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects. Information Sciences 178:2476-2487
8. Koulamas C, Kyparisis GJ (2007) Single-machine and two-machine flowshop scheduling with general learning functions. European Journal of Operational Research 178:402-407
9. Kuo WH, Yang DL (2006) Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. European Journal of Operational Research 174:1184-
10. Kuo WH, Yang DL (2007) Single-machine scheduling problems with the time-dependent learning effect. Computers and Mathematics with Application 53:1733-1739
11. Lee WC, Wu CC, Sung HJ (2004) A bi-criterion single-machine scheduling problem with learning considerations. Acta Informatica 40:303-315
12. Lin BMT (2007) Complexity results for single-machine scheduling with positional learning effects. Journal of the Operational Research Society 58:1099-1102
13. Moore JM (1968) An n job one machine sequencing algorithm for minimizing the number of late jobs.
Management Science 15:102-109
14. Mosheiov G (2001) Scheduling problems with a learning effect. European Journal of Operational Research 132:687-693
15. Smith WE (1956) Various optimizers for single-stage production. Naval Research Logistics Quarterly 3:59-66
16. Townsend W (1978) The single machine problem with quadratic penalty function of completion times: a branch-and-bound solution. Management Science 24:530-534
17. Wang X, Cheng TCE (2007) Single-machine scheduling with deteriorating jobs and learning effects to minimize the makespan. European Journal of Operational Research 178:57-70
18. Wang JB (2007) Single-machine scheduling problems with the effects of learning and deterioration. Omega 35:397-402
19. Wang JB (2008) Single-machine scheduling with past-sequence-dependent setup times and time-dependent learning effect. Computers and Industrial Engineering 55:584-591
20. Wang JB, Ng CT, Cheng TCE, Lin LL (2008) Single-machine scheduling with a time-dependent learning effect. International Journal of Production Economics 111:802-811
21. Wu CC, Lee WC (2008) Single machine scheduling problems with a learning effect. Applied Mathematical Modelling 32:1191-1197
22. Yelle LE (1979) The learning curve: Historical review and comprehensive survey. Decision Sciences 10:302-328


Fuzzy Information and Engineering, Taylor & Francis

Published: Jun 1, 2009

