Semi-Markov Reliability Models with Recurrence Times and Credit Rating Applications

Journal of Applied Mathematics and Decision Sciences, Volume 2009 (2009), Article ID 625712, 17 pages. doi:10.1155/2009/625712. Research Article.

Guglielmo D'Amico,1 Jacques Janssen,2 and Raimondo Manca3

1 Dipartimento di Scienze del Farmaco, Università "G. D'Annunzio", Via dei Vestini, 66100 Chieti, Italy
2 Jacan and EURIA, Université de Bretagne Occidentale, 6 avenue le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
3 Dipartimento di Matematica per le Decisioni Economiche, Finanziarie ed Assicurative, Università "La Sapienza", Via del Castro Laurenziano 9, 00161 Roma, Italy

Received 15 May 2009; Accepted 9 September 2009. Academic Editor: Mark Bebbington.

Copyright © 2009 Guglielmo D'Amico et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. We show how to construct efficient duration-dependent semi-Markov reliability models by considering recurrence time processes. We define generalized reliability indexes and show how they can be computed. Finally, we describe a possible application to the study of credit rating dynamics, treating credit rating migration as a reliability problem.

1. Introduction

Homogeneous semi-Markov processes (HSMPs) were defined in [1, 2]; a detailed analysis of HSMPs is given in [3, 4]. Engineers have used these processes to analyse mechanical, transportation, and information systems. One of the most important engineering applications of HSMPs is reliability; see [3, 5, 6]. Generalized semi-Markov processes have also been proposed, and the insensitivity phenomenon displayed by stochastic models from reliability and telephone engineering has been investigated; see [7–10]. In its basic form, a reliability problem consists of analysing the performance of a system that moves randomly in time among a set of possible states. The state set is partitioned into two subsets: the first contains the states in which the system is working, the second all states in which the system is down. If the next state depends only on the current one (the future depends only on the present), the problem can be treated by means of Markov processes. In a discrete time Markov chain the distribution functions (d.f.) of the waiting times between transitions are geometric, while in continuous time they are negative exponential. In practice, however, the random durations between transitions are generally not described by memoryless distribution functions (exponential or geometric).
This is the reason why the HSMP fits reliability problems better than the Markov process: it allows the use of any distribution function. Semi-Markov processes have been proposed for the reliability and performability evaluation of systems; see [6, 11, 12]. In [13] it is shown how to apply homogeneous and nonhomogeneous semi-Markov processes to reliability problems. In this paper, in order to study duration dependence, we attach the backward and forward recurrence time processes to the HSMP and consider them simultaneously at the beginning and at the end of the considered time interval. These processes have been analyzed by several authors; see [4, 5, 14–18]. They give complete information on the waiting time scenarios. In fact: (i) initial backward times take into account the time at which the system entered the state, even if the arrival time is before the beginning of the studied time horizon; (ii) initial forward times record the time at which the first transition after the beginning of the studied horizon will happen; (iii) final backward times take into account the time at which the last transition before the end of the considered interval occurred; (iv) final forward times record the time at which the system will exit from the state occupied at the final time. In this way, the use of the initial and final backward and forward processes makes it possible to construct all the waiting time scenarios that can occur in the neighbourhoods of the initial and final observation times. Since the HSMP does not rely on memoryless d.f., different values of the recurrence time processes change the transition probabilities of the HSMP. Consequently it is possible to define generalized reliability measures which depend on the values assumed by the HSMP and by the recurrence time processes at the starting and ending times. The usefulness of the results is illustrated in the applied section on credit rating dynamics, one of the most important problems in the financial literature. Fundamentally, it consists of computing the default probability of a firm that issues debt. The literature on this topic is very wide; the reader can refer to [19–22]. In order to evaluate credit risk, large international agencies such as Fitch, Moody's, and Standard & Poor's assign ratings to firms which agree to be evaluated. Each firm receives a "rating" representing an evaluation of the "reliability" of its capacity to reimburse its debt. Clearly, the lower the rating, the higher the interest rate the evaluated firm has to pay. The rating level changes over time, and one way to model it is by means of Markov processes as in [23]. In this environment Markov models are called "migration models." The poor fit of Markov processes in the credit risk environment was outlined in [24–26]. In our opinion, the credit rating problem can be included in the more general problem of the reliability of a stochastic system, as already highlighted in [27–30]. Indeed, rating agencies, through the assessment of a rating, estimate the reliability of the firm issuing a bond. In this paper we generalize the results of [27, 28], and another step is made towards establishing a link between credit rating and reliability models. Moreover, the duration dependence of the rating evolution can be fully captured by means of recurrence times. Section 2 presents a short description of HSMPs.
In Section 3 the backward and forward recurrence time processes are presented and their general distributions are determined. Section 4 is devoted to the initial and final backward semi-Markov reliability models. Section 5 presents the credit risk model with complete information on the waiting times. In the last section some concluding remarks are given.

2. Homogeneous Semi-Markov Processes

We follow the notation given in [4]. On a complete probability space $(\Omega, \mathfrak{F}, P)$ we define two random variables:

(i) $J_n$, $n \in \mathbb{N}$, with state space $I = \{1, \dots, m\}$, representing the state at the $n$th transition;
(ii) $T_n$, $n \in \mathbb{N}$, with state space $\mathbb{R}^+$, representing the time of the $n$th transition.

The process $(J_n, T_n)$ is supposed to be a homogeneous Markov renewal process (HMRP) of kernel $\mathbf{Q} = [Q_{ij}(t)]$, so that

$$Q_{ij}(t) = P[J_{n+1} = j,\, T_{n+1} - T_n \le t \mid J_n = i]. \quad (2.1)$$

We know that

$$p_{ij} \doteq P[J_n = j \mid J_{n-1} = i] = \lim_{t \to \infty} Q_{ij}(t), \quad i, j \in I,\ t \in \mathbb{R}^+, \quad (2.2)$$

so $\mathbf{P} = [p_{ij}]$ is the transition matrix of the embedded Markov chain $\{J_n\}$. Furthermore, it is necessary to introduce the probability that the process leaves state $i$ within a time $t$:

$$H_i(t) \doteq P[T_{n+1} - T_n \le t \mid J_n = i] = \sum_{j \in I} Q_{ij}(t). \quad (2.3)$$

It is possible to define the distribution function of the waiting time in each state $i$, given that the state $j$ occupied next is known:

$$G_{ij}(t) \doteq P[T_{n+1} - T_n \le t \mid J_n = i, J_{n+1} = j] = \begin{cases} Q_{ij}(t)/p_{ij}, & \text{if } p_{ij} \neq 0, \\ 1, & \text{if } p_{ij} = 0. \end{cases} \quad (2.4)$$

The main difference between a continuous time Markov process and an HSMP lies in the distribution functions $G_{ij}(t)$. In a Markov environment this function has to be a negative exponential of parameter $\lambda_{ij}$; in the semi-Markov case the distribution functions $G_{ij}(t)$ can be of any type. This means that duration effects can be taken into account through the functions $G_{ij}(t)$. The HSMP $Z = (Z(t), t \in \mathbb{R}^+)$ represents, at each time, the state occupied by the system, that is,

$$Z(t) = J_{N(t)}, \quad \text{where } N(t) = \max\{n : T_n \le t\}. \quad (2.5)$$

The HSMP transition probabilities are defined in the following way:

$$\phi_{ij}(t) = P[Z(t) = j \mid Z(0) = i, T_{N(0)} = 0]. \quad (2.6)$$

They are obtained by solving the following evolution equations (see, among others, [4]):

$$\phi_{ij}(t) = \delta_{ij}\bigl(1 - H_i(t)\bigr) + \sum_{\beta \in I} \int_0^t \dot{Q}_{i\beta}(\vartheta)\, \phi_{\beta j}(t - \vartheta)\, d\vartheta, \quad (2.7)$$

where $\delta_{ij}$ represents the Kronecker symbol. The first part of formula (2.7), $\delta_{ij}(1 - H_i(t))$, gives the probability that the system has no transition up to time $t$, given that it entered state $i$ at time 0. In the second part, $\sum_{\beta=1}^m \int_0^t \dot{Q}_{i\beta}(\vartheta)\, \phi_{\beta j}(t - \vartheta)\, d\vartheta$, the term $\dot{Q}_{i\beta}(\vartheta)\, d\vartheta$ represents the probability that the system makes its next transition in $[\vartheta, \vartheta + d\vartheta)$, moving from state $i$ to state $\beta$. After the transition, the system reaches state $j$ following one of all possible trajectories that go from state $\beta$ to state $j$ in a time $t - \vartheta$.
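To make the evolution equation (2.7) concrete, the following is a minimal numerical sketch, not taken from the paper, that solves its discrete-time counterpart by replacing the integral with a convolution sum over kernel increments. The kernel array Q, the integer time grid, and the function name solve_hsmp are illustrative assumptions.

```python
import numpy as np

def solve_hsmp(Q):
    """Discrete-time counterpart of the evolution equation (2.7).

    Q : array of shape (T+1, m, m); Q[t, i, j] approximates the kernel Q_ij(t)
        on the integer grid t = 0, 1, ..., T (nondecreasing in t, Q[0] = 0).
    Returns phi with phi[t, i, j] ~ P[Z(t) = j | Z(0) = i, entered i at time 0].
    """
    T1, m, _ = Q.shape
    H = Q.sum(axis=2)                                    # H_i(t) = sum_j Q_ij(t), cf. (2.3)
    q = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))  # increments q_ij(t) ~ dQ_ij
    phi = np.zeros_like(Q)
    for t in range(T1):
        phi[t] = np.diag(1.0 - H[t])         # no transition up to t (only when i = j)
        for tau in range(1, t + 1):
            phi[t] += q[tau] @ phi[t - tau]  # jump to some state at tau, then reach j in t - tau
    return phi

# Toy usage with a hypothetical 2-state kernel on t = 0..50.
t = np.arange(51)
Q = np.zeros((51, 2, 2))
Q[:, 0, 1] = 1 - np.exp(-0.05 * t)   # any d.f. is admissible in the semi-Markov setting
Q[:, 1, 0] = 1 - np.exp(-0.20 * t)
phi = solve_hsmp(Q)
print(phi[50])                       # approximate transition probabilities at t = 50
```

By construction each row of phi[t] sums to one, which is a useful check on the discretization.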
3. Backward and Forward Recurrence Time Processes

In this section we present some generalizations of the HSMP transition probabilities (2.7) using the recurrence time processes. Recurrence time processes have been investigated by many authors. For example, in [14, 15] the backward process at the starting time was used to determine the asymptotic distribution of an ergodic HSMP. In [5] the backward processes were considered both at the starting and the arriving times in the transition probabilities, but, to the authors' knowledge, a complete study such as the one given in the next subsection has never been presented. Given the HMRP $(J_n, T_n)$, we define the following stochastic processes of recurrence times:

$$B(t) = t - T_{N(t)}, \qquad F(t) = T_{N(t)+1} - t. \quad (3.1)$$

The process $B(t)$ is called the backward recurrence time (or age) process and $F(t)$ the forward (or residual time) recurrence time process (see [4]). The recurrence time processes complement the semi-Markov process to a Markov process with respect to $\mathfrak{F}_t^+ \equiv \sigma\{Z(\tau), F(\tau),\ \tau \in [0, t]\}$, $t \ge 0$, and for this reason they are often called auxiliary processes. Then, for any bounded $I \times \mathbb{R}^+$-measurable function $f(x, t)$ and $s \le t$, it holds that

$$E\bigl[f(Z(t), F(t)) \mid \mathfrak{F}_s^+\bigr] = E\bigl[f(Z(t), F(t)) \mid Z(s), F(s)\bigr]. \quad (3.2)$$

For (3.2) see [16]. Figure 1 presents an HSMP trajectory on which the recurrence time processes are reported. At time $s$ the HSMP is in state $Z(s) = i$; it entered this state with its last transition at time $T_n = s - v$, so the initial backward value is $B(s) = v$. The process makes its next transition at time $T_{n+1} = s + u$, so the initial forward value is $F(s) = u$. At time $t + s$ the HSMP is in state $Z(t+s) = j$; it entered this state with its last transition at time $T_{h-1} = t + s - v'$, so the final backward value is $B(t+s) = v'$. The process makes its next transition at time $T_h = t + s + u'$, so the final forward value is $F(t+s) = u'$.

Figure 1: Trajectory of the backward and forward SMP.

Our objective, in this section, is to define and compute transition probabilities that are constrained at the initial time $s$ and at the final time $t+s$ by the recurrence time processes. To be more precise, given the information $(Z(s) = i, B(s) = v, F(s) = u)$, we want to compute the probability of having $(Z(t+s) = j, B(t+s) = v', F(t+s) = u')$. In order to clarify the presentation, we first show some particular cases.

(a) Let ${}^{b}\phi_{ij}(v; t) = P[Z(t) = j \mid Z(0) = i, B(0) = v]$ be the transition probability with initial backward value $v$. It denotes the probability of being in state $j$ after $t$ periods given that at present the process is in state $i$ and it entered this state with its last transition $v$ periods before. Using the relation

$$\{Z(t) = i, B(t) = v\} = \{J_{N(t)} = i,\ T_{N(t)} = t - v,\ T_{N(t)+1} > t\}, \quad (3.3)$$

it can be proved that

$${}^{b}\phi_{ij}(v; t) = \delta_{ij}\,\frac{1 - H_i(t + v)}{1 - H_i(v)} + \sum_{k \in I} \int_0^t \frac{\dot{Q}_{ik}(\tau + v)}{1 - H_i(v)}\, \phi_{kj}(t - \tau)\, d\tau. \quad (3.4)$$

An explanation of (3.4) can be provided. The term $(1 - H_i(t+v))/(1 - H_i(v))$ represents the probability of remaining in state $i$ for $t + v$ periods given that the process has already stayed in state $i$ for $v$ periods. This probability contributes to ${}^{b}\phi_{ij}(v; t)$ only if $i = j$. The second term, $\sum_{k \in I} \int_0^t \bigl(\dot{Q}_{ik}(\tau + v)/(1 - H_i(v))\bigr)\, \phi_{kj}(t - \tau)\, d\tau$, expresses the probability of a trajectory providing for the entrance into state $k$ after $\tau$ periods, given that the process has remained in state $i$ for $v$ periods; the transition to state $j$ in the remaining time $t - \tau$ from state $k$ must then be carried out. This holds for all states $k \in I$ and times $\tau \in [0, t]$. Note that if $v = 0$, then ${}^{b}\phi_{ij}(v; t) = \phi_{ij}(t)$, recovering the ordinary HSMP transition probabilities (2.7).
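Continuing the earlier sketch, a discrete analogue of (3.4) can be obtained by conditioning the kernel increments on survival in the starting state for v grid steps. This is again an illustrative assumption-laden sketch, not the paper's algorithm: it reuses the hypothetical solve_hsmp helper and integer-grid kernel introduced above, and it assumes $1 - H_i(v) > 0$.

```python
def backward_phi(Q, v):
    """Discrete counterpart of (3.4): transition probabilities with
    initial backward value v (an integer number of grid steps).
    Reuses solve_hsmp and the same hypothetical kernel layout as above.
    """
    T1, m, _ = Q.shape
    H = Q.sum(axis=2)
    q = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    phi = solve_hsmp(Q)                    # ordinary HSMP probabilities (2.7)
    surv = 1.0 - H[v]                      # 1 - H_i(v); assumed strictly positive
    b_phi = np.zeros((T1 - v, m, m))
    for t in range(T1 - v):
        b_phi[t] = np.diag((1.0 - H[t + v]) / surv)              # stay in i up to t + v
        for tau in range(1, t + 1):
            b_phi[t] += (q[tau + v] / surv[:, None]) @ phi[t - tau]  # leave i at tau, then k -> j
    return b_phi

# e.g. b_phi = backward_phi(Q, v=10); b_phi[t] approximates {}^b phi_ij(10; t).
```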
(b) Let ${}^{f}\phi_{ij}(u; t) = P[Z(t) = j \mid Z(0) = i, F(0) = u]$ be the transition probability with fixed initial forward value $u$. It denotes the probability of being in state $j$ after a time $t$ given that, at time zero, the process entered state $i$ and it makes its next transition exactly at time $u$, into whatever state:

$${}^{f}\phi_{ij}(u; t) = \sum_{k \in I} \frac{dQ_{ik}(u)}{dH_i(u)}\, \phi_{kj}(t - u). \quad (3.5)$$

The term $dQ_{ik}(u)/dH_i(u)$ is the Radon-Nikodym derivative of $Q_{ik}(\cdot)$ with respect to $H_i(\cdot)$. It expresses $dQ_{ik}(u)/dH_i(u) = P[J_{n+1} = k \mid J_n = i, T_{n+1} - T_n = u]$. In this way, in (3.5) the entrance into all possible states $k$ with the next transition at time $u$ is considered; then, from that state, the process has to occupy state $j$ at time $t$.

(c) Let ${}^{F}\phi_{ij}(u; t) = P[Z(t) = j \mid Z(0) = i, F(0) > u]$ be the transition probability with initial certain waiting value $u$. It denotes the probability of being in state $j$ at time $t$ given that, at time zero, the process entered state $i$ and it makes its next transition after time $u$, into any state; then we are sure that, up to time $u$, the process is still in state $i$:

$${}^{F}\phi_{ij}(u; t) = \delta_{ij}\,\frac{1 - H_i(t)}{1 - H_i(u)} + \sum_{k \in I} \int_0^{t-u} \frac{\dot{Q}_{ik}(\tau + u)}{1 - H_i(u)}\, \phi_{kj}(t - u - \tau)\, d\tau. \quad (3.6)$$

The first term, $\delta_{ij}(1 - H_i(t))/(1 - H_i(u))$, gives the probability of remaining in state $i$ up to time $t$ given that the process stays in that state at least up to time $u$. The second term considers the possibility of evolving with the next transition into any state at any time $\tau \in (u, t]$, given no movement up to time $u$. After the transition at time $\tau$, the system will be in state $j$ at time $t$ following one of the possible trajectories.
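For case (b), a discrete sketch of (3.5) is short: the Radon-Nikodym derivative $dQ_{ik}(u)/dH_i(u)$ becomes the ratio of one-step kernel increments, and the result is propagated with the ordinary probabilities. As before, this is an illustrative sketch under the same grid and helper-name assumptions, requiring a positive increment of $H_i$ at step $u$.

```python
def forward_phi(Q, u):
    """Discrete counterpart of (3.5): fixed initial forward value u (grid steps).
    The ratio q_ik(u) / h_i(u) plays the role of the Radon-Nikodym derivative
    dQ_ik(u)/dH_i(u) = P[J_{n+1} = k | J_n = i, T_{n+1} - T_n = u].
    """
    T1, m, _ = Q.shape
    q = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    h = q.sum(axis=2)                  # h_i(t): increment of H_i at step t
    J = q[u] / h[u][:, None]           # conditional jump distribution at duration u (h_i(u) > 0 assumed)
    phi = solve_hsmp(Q)                # ordinary HSMP probabilities (2.7)
    f_phi = np.zeros((T1 - u, m, m))
    for t in range(u, T1):
        f_phi[t - u] = J @ phi[t - u]  # jump at u to some k, then k -> j in the remaining t - u
    return f_phi                       # f_phi[t - u] ~ {}^f phi_ij(u; t) for t >= u
```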
(d) Let $\phi^{B}_{ij}(\,; v', t) = P[Z(t) = j, B(t) \le t - v' \mid Z(0) = i]$ be the transition probability with final backward value $v'$. It denotes the probability of being in state $j$ at time $t$, the process having entered state $j$ with its last transition within the interval $[v', t]$, given that it entered state $i$ at time zero:

$$\phi^{B}_{ij}(\,; v', t) = \delta_{ij}\,\bigl(1 - H_i(t)\bigr)\,\mathbf{1}_{\{v' = 0\}} + \sum_{k \in I} \int_0^t \dot{Q}_{ik}(\tau)\, \phi^{B}_{kj}(\,; v' - \tau, t - \tau)\, d\tau. \quad (3.7)$$

The term $(1 - H_i(t))$ gives the probability of remaining in state $i$ from time zero up to time $t$; this probability contributes to $\phi^{B}_{ij}(\,; v', t)$ only if $i = j$ and $v' = 0$. In fact, the system can be in state $j$ at time $t$ without any transition in the interval $[0, t]$ only if $i = j$; in that case the backward value at time $t$ has, necessarily, to be equal to the length of the interval, that is, $B(t) = t$, which implies $B(t) = t - v' = t \Leftrightarrow v' = 0$. The second term considers the possibility of evolving with the next transition into any state at any time $\tau \in [0, t]$; we then have to consider all possible trajectories that bring the system into state $j$ in the remaining time $t - \tau$, with its last transition into state $j$ at a time belonging to the interval $[v' - \tau, t - \tau]$. In this way, the final backward value will be less than or equal to $t - \tau - (v' - \tau) = t - v'$, as required in relation (3.7).

(e) Let $\phi^{F}_{ij}(\,; t, u') = P[Z(t) = j, F(t) \le u' - t \mid Z(0) = i]$ be the transition probability with final forward value $u'$. It denotes the probability of being in state $j$ at time $t$ and of making the next transition in the time interval $(t, u']$, given that the process entered state $i$ at time zero:

$$\phi^{F}_{ij}(\,; t, u') = \delta_{ij}\,\bigl(H_i(u') - H_i(t)\bigr) + \sum_{k \in I} \int_0^t \dot{Q}_{ik}(\tau)\, \phi^{F}_{kj}(\,; t - \tau, u' - \tau)\, d\tau. \quad (3.8)$$

The first term, $\delta_{ij}(H_i(u') - H_i(t))$, gives the probability of exiting for the first time from state $i$ in the time interval $(t, u']$; this probability contributes to $\phi^{F}_{ij}(\,; t, u')$ only if $i = j$. The second term considers the possibility of evolving with the next transition into any state at any time $\tau \in [0, t]$; we then have to consider all possible trajectories that bring the system into state $j$ in the remaining time $t - \tau$ with a final forward value of $u' - \tau$.

3.1. The General Distributions of the Auxiliary Processes

There are many cases in which combinations of the previous equations can be considered. In this subsection, we give the equations for the two general cases.

(f) Let ${}^{bf}\phi^{BF}_{ij}(v, u; v', t, u') = P[Z(t) = j, B(t) \le t - v', F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) = u]$ be the transition probability with initial and final backward and forward values $v, u, v', u'$. It denotes the probability of being in state $j$ at time $t$, of having entered that state in the time interval $[v', t]$, and of making the next transition in $(t, u']$, given that at present the process is in state $i$, that it entered this state with its last transition $v$ periods before, and that it remains there until time $u$, when a transition takes place. It results that

$${}^{bf}\phi^{BF}_{ij}(v, u; v', t, u') = \sum_{k \in I} \frac{dQ_{ik}(v + u)}{dH_i(v + u)}\, \phi^{BF}_{kj}(\,; v' - u, t - u, u' - u), \quad (3.9)$$

where

$$\phi^{BF}_{ij}(\,; v', t, u') = P\bigl[Z(t) = j, B(t) \le t - v', F(t) \le u' - t \mid Z(0) = i\bigr] = \delta_{ij}\,\bigl(H_i(u') - H_i(t)\bigr)\,\mathbf{1}_{\{v' = 0\}} + \sum_{k \in I} \int_0^t \dot{Q}_{ik}(\tau)\, \phi^{BF}_{kj}(\,; v' - \tau, t - \tau, u' - \tau)\, d\tau. \quad (3.10)$$

Equation (3.10) is composed of two parts. The first expresses the probability of exiting for the first time from state $i$ in the time interval $(t, u']$; this contributes to $\phi^{BF}_{ij}(\,; v', t, u')$ only when $i = j$, and then the backward value at time $t$ must equal $t$ periods, that is, $B(t) \equiv t - v' = t \Leftrightarrow v' = 0$. The second term considers the entrance into any state $k$ at any time $\tau \in (0, t]$, after which the system has to be in state $j$ at time $t$ with final backward and forward values of at most $t - v'$ and $u' - t$, respectively.
Consequently, relation (3.9) states that to compute ${}^{bf}\phi^{BF}_{ij}(v, u; v', t, u')$ it is enough to consider the probability $\phi^{BF}_{kj}(\,; v' - u, t - u, u' - u)$ with a random starting distribution represented by the Radon-Nikodym derivative $dQ_{ik}(v + u)/dH_i(v + u)$.

(g) Let ${}^{bF}\phi^{BF}_{ij}(v, u; v', t, u') = P[Z(t) = j, B(t) \le t - v', F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) > u]$ be the transition probability with initial backward, starting certain waiting forward, and final backward and forward values $v, u, v', u'$. It denotes the probability of being in state $j$ at time $t$, of having entered that state in the time interval $[v', t]$, and of making the next transition in $(t, u']$, given that at present the process is in state $i$, that it entered this state with its last transition $v$ periods before, and that it remains in this state at least until time $u$. It results that

$${}^{bF}\phi^{BF}_{ij}(v, u; v', t, u') = \delta_{ij}\left[\frac{H_i(u' + v) - H_i(t + v)}{1 - H_i(u + v)}\right]\mathbf{1}_{\{v' = -v\}} + \sum_{k \in I} \int_0^{t-u} \frac{\dot{Q}_{ik}(\tau + u + v)}{1 - H_i(u + v)}\, \phi^{BF}_{kj}(\,; v' - u - \tau, t - u - \tau, u' - u - \tau)\, d\tau. \quad (3.11)$$

The term $\bigl(H_i(u' + v) - H_i(t + v)\bigr)/\bigl(1 - H_i(u + v)\bigr)$ gives the probability of exiting from state $i$ for the first time in the interval $(t, u']$, given that at present the process is in state $i$, that it entered this state with its last transition $v$ periods before, and that up to time $u$ (i.e., for $u + v$ periods) no new transition has been carried out by the process. This probability is part of ${}^{bF}\phi^{BF}_{ij}(v, u; v', t, u')$ only if $i = j$; consequently, the backward process at time $t$ has to be equal to $t + v$, that is, $B(t) \equiv t - v' = t + v \Leftrightarrow v' = -v$. The other term considers the entrance into state $k$ at a time $\tau \in (u, t]$, given that at present the process is in state $i$, that it entered this state with its last transition $v$ periods before, and that there was no movement from that state up to time $u$; then all possible trajectories that bring the system into state $j$ at time $t$ with a final backward and forward value of, respectively, $t - v'$ and $u' - t$, given the entrance into state $k$ at time $\tau$, have to be considered.

4. Continuous Time Homogeneous Waiting Times Complete Knowledge Semi-Markov Reliability Model

There are many semi-Markov models in reliability theory; see, for example, [5, 6, 11–13, 31]. The nonhomogeneous case was presented in [13]. More recently, in [28] a nonhomogeneous backward semi-Markov reliability model was presented. In this section, we generalize these reliability models by taking into account the initial and final backward and forward processes all together in a homogeneous environment. Let us consider a reliability system that at every time $t$ can be in one of the states of $I = \{1, \dots, m\}$. The stochastic process of the successive states of the system is denoted by $Z = \{Z(t), t \ge 0\}$. The state set is partitioned into sets $U$ and $D$, so that

$$I = U \cup D, \quad U \cap D = \emptyset, \quad U \neq \emptyset, \quad U \neq I. \quad (4.1)$$

The subset $U$ contains all "good" states, in which the system is working, and the subset $D$ all "bad" states, in which the system is not working well or has failed. The classic indicators used in reliability theory are the following ones.
(i) The pointwise availability function $A$, giving the probability that the system is working at time $t$ regardless of what happened on $(0, t]$:

$$A(t) = P[Z(t) \in U]. \quad (4.2)$$

(ii) The reliability function $R$, giving the probability that the system was always working from time 0 to time $t$:

$$R(t) = P\bigl[Z(u) \in U\ \forall u \in (0, t]\bigr]. \quad (4.3)$$

(iii) The maintainability function $M$, giving the probability that the system leaves the set $D$ within the time $t$, being in $D$ at time 0:

$$M(t) = 1 - P\bigl[Z(u) \in D\ \forall u \in (0, t]\bigr]. \quad (4.4)$$

Considering the generalizations presented in the previous section, we give the following new definitions.

(i') The pointwise homogeneous waiting times complete knowledge availability function with fixed initial forward, ${}^{bf}A_i^{BF}$, giving the probability that the system is working at time $t$, given that it entered state $Z(0) = i$ with $v$ as initial backward time and that the process moves from this state exactly at time $u$. Furthermore, we require the system to enter the working state $Z(t)$ at a time $T_{N(t)} \ge v'$ and to remain in this state until the time of the next transition, $T_{N(t)+1} \le u'$:

$${}^{bf}A_i^{BF}(v, u; v', t, u') = P\bigl[Z(t) \in U,\ B(t) \le t - v',\ F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) = u\bigr]. \quad (4.5)$$

(i'') The pointwise homogeneous waiting times complete knowledge availability function with starting certain waiting forward time, ${}^{bF}A_i^{BF}$, giving the probability that the system is working at time $t$, conditioned on the entrance into state $Z(0) = i$ with $v$ as initial backward time and on the process not moving from this state up to time $u$. Furthermore, we require the system to enter the working state $Z(t)$ at a time $t \ge T_{N(t)} \ge v'$ and to remain in this state until the time of the next transition, $t < T_{N(t)+1} \le u'$:

$${}^{bF}A_i^{BF}(v, u; v', t, u') = P\bigl[Z(t) \in U,\ B(t) \le t - v',\ F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) > u\bigr]. \quad (4.6)$$

(ii') The homogeneous waiting times complete knowledge reliability function with fixed initial forward, ${}^{bf}R_i^{BF}$, giving the probability that the system was always working for a time $t$, given that the entrance into state $Z(0) = i$ was $v$ time periods before 0 and the next transition is at time $u$. Furthermore, it is assumed that the system made the $N(t)$th transition at a time $t \ge T_{N(t)} \ge v'$ and the $(N(t)+1)$th at a time $t < T_{N(t)+1} \le u'$:

$${}^{bf}R_i^{BF}(v, u; v', t, u') = P\bigl[Z(h) \in U\ \forall h \in (0, t],\ B(t) \le t - v',\ F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) = u\bigr]. \quad (4.7)$$

(ii'') The homogeneous waiting times complete knowledge reliability function with starting certain waiting forward time, ${}^{bF}R_i^{BF}$, giving the probability that the system was always working for a time $t$, given that the entrance into state $Z(0) = i$ was $v$ time periods before 0 and the next transition is after time $u$. Furthermore, it is assumed that the system made the $N(t)$th transition at a time $t \ge T_{N(t)} \ge v'$ and the $(N(t)+1)$th at a time $t < T_{N(t)+1} \le u'$:

$${}^{bF}R_i^{BF}(v, u; v', t, u') = P\bigl[Z(h) \in U\ \forall h \in (0, t],\ B(t) \le t - v',\ F(t) \le u' - t \mid Z(0) = i, B(0) = v, F(0) > u\bigr]. \quad (4.8)$$
(iii') The homogeneous waiting times complete knowledge maintainability function ${}^{bf}M_i^{BF}$, giving the probability that the system leaves the set $D$, going into an up state at least once within the time $t$, with $t \ge T_{N(t)} \ge v'$ and $t < T_{N(t)+1} \le u'$, given that at present the process is in state $i \in D$, that it entered this state with its last transition $v$ periods before, and that at time $u$ a new transition occurs:

$${}^{bf}M_i^{BF}(v, u; v', t, u') = P\bigl[T_{N(t)} \ge v',\ T_{N(t)+1} \le u',\ \exists h \in (0, t] : Z(h) \in U \mid Z(0) = i \in D, B(0) = v, F(0) = u\bigr]. \quad (4.9)$$

(iii'') The homogeneous waiting times complete knowledge maintainability function with starting certain waiting forward time, ${}^{bF}M_i^{BF}$, giving the probability that the system leaves the set $D$, going into an up state within the time $t$, with $t \ge T_{N(t)} \ge v'$ and $t < T_{N(t)+1} \le u'$, given that at present the process is in state $i \in D$, that it entered this state with its last transition $v$ periods before, and that after time $u$ a new transition occurs:

$${}^{bF}M_i^{BF}(v, u; v', t, u') = P\bigl[T_{N(t)} \ge v',\ T_{N(t)+1} \le u',\ \exists h \in (0, t] : Z(h) \in U \mid Z(0) = i \in D, B(0) = v, F(0) > u\bigr]. \quad (4.10)$$

The probabilities (4.5), (4.6), (4.7), (4.8), (4.9), and (4.10) can be computed as follows.

(i) The pointwise availability functions (4.5) and (4.6) are, respectively,

$${}^{bf}A_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF}_{ij}(v, u; v', t, u'), \quad (4.11)$$

$${}^{bF}A_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF}_{ij}(v, u; v', t, u'). \quad (4.12)$$

(ii) The reliability functions (4.7) and (4.8) are, respectively,

$${}^{bf}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF\,R}_{ij}(v, u; v', t, u'), \quad (4.13)$$

$${}^{bF}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF\,R}_{ij}(v, u; v', t, u'), \quad (4.14)$$

where ${}^{bf}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ and ${}^{bF}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ are the solutions of (3.9) and (3.11) with all the states in $D$ made absorbing. To compute these probabilities, all the states of the subset $D$ are changed into absorbing states through the following transformation of the semi-Markov kernel:

$$p_{ij} = \delta_{ij} \quad \text{if } i \in D. \quad (4.15)$$

(iii) The maintainability functions (4.9) and (4.10) are, respectively,

$${}^{bf}M_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF\,M}_{ij}(v, u; v', t, u'), \quad (4.16)$$

$${}^{bF}M_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF\,M}_{ij}(v, u; v', t, u'), \quad (4.17)$$

where ${}^{bf}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ and ${}^{bF}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ are, respectively, the solutions of (3.9) and (3.11) with all the states in $U$ made absorbing. In this case, all the states of the subset $U$ are changed into absorbing states through the transformation of the semi-Markov kernel:

$$p_{ij} = \delta_{ij} \quad \text{if } i \in U. \quad (4.18)$$

The reliability indexes defined here can assess different probabilities depending on the backward and forward process values. This makes it possible to obtain, for example, complete knowledge of the variability of the survival probabilities of the system depending on the waiting time scenario.
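The absorbing-state transformation can be illustrated with a minimal sketch, not taken from the paper, of the classic (non-recurrence-time) reliability computation implied by (4.15): the down states are made absorbing in the kernel and the resulting transition probabilities are summed over the up states. It reuses the hypothetical solve_hsmp helper and integer-grid kernel layout introduced earlier; the function name reliability and the choice of giving each absorbing down state a self-loop with its original waiting-time d.f. are illustrative assumptions.

```python
def reliability(Q, up, down):
    """Sketch of R_i(t) = sum_{j in U} phi^R_ij(t), where phi^R solves the
    evolution equation for the kernel with the down states made absorbing
    (the analogue of transformation (4.15), here without recurrence times).
    `up` and `down` are lists of state indices partitioning I.
    """
    T1, m, _ = Q.shape
    QR = Q.copy()
    for d in down:
        QR[:, d, :] = 0.0                      # once down, no transition to any other state
        QR[:, d, d] = Q[:, d, :].sum(axis=1)   # keep the same waiting-time d.f., but loop in d
    phiR = solve_hsmp(QR)
    return phiR[:, :, up].sum(axis=2)          # R[t, i] = sum over up states j of phi^R_ij(t)

# Toy usage with the hypothetical 2-state kernel above (state 0 up, state 1 down):
# R = reliability(Q, up=[0], down=[1]); R[t, 0] ~ probability of never failing up to t.
```

Because a trajectory of the modified process can be in an up state at time t only if it has never entered a down state, summing over U gives the survival probability rather than the mere availability.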
5. The Homogeneous Waiting Time with Complete Knowledge for Semi-Markov Reliability Credit Risk Models

The rating migration problem can be situated in the reliability environment. The rating process, carried out by the rating agency, states the degree of reliability of a firm's bond. The problem of the poor fit of Markov processes in a credit risk environment was outlined in [24–26]. The duration effect in rating transitions is one of the main phenomena demonstrating the inadequacy of Markov models: the probability of changing rating depends on the time a firm has remained in the same rating class (see [32]). This problem can be addressed satisfactorily by means of HSMPs; see [27, 28, 33]. In fact, as already explained, in an HSMP the transition probabilities are a function of the waiting time spent in a state of the system. The knowledge of the waiting times around the beginning and the end of the considered interval is of fundamental relevance in credit rating migration modelling. Indeed, the solutions of the evolution equations (3.9) and (3.11) take into account the duration time inside the starting and the arriving states. In the next two subsections, we present two semi-Markov reliability credit risk models. To construct a semi-Markov model, it is necessary to construct the embedded Markov chain (2.2) and to find the d.f. of the waiting times (2.4). The embedded Markov chain constructed from real data of the Standard & Poor's rating agency was given in [34] and is reported in the next subsection. This matrix is aperiodic and irreducible and has two down states, D and NR. In the following subsection the case in which the default state is supposed to be absorbing is studied and the No Rating state is not considered. Under these hypotheses, the embedded Markov chain is mono-unireducible; see [35].

5.1. The Irreducible Case with Two Down States

The rating agency Standard & Poor's, for example, considers 9 different states, including the No Rating state, so we have the following set of states:

$$I_1 = \{\text{AAA, AA, A, BBB, BB, B, CCC, D, NR}\}. \quad (5.1)$$

The ratings express the creditworthiness of the rated firm. Creditworthiness is highest for the rating AAA, assigned to firms that are extremely reliable with regard to financial obligations, and decreases towards the rating D, which expresses the occurrence of a payment default on some financial obligation. A table showing the financial meaning of the Standard & Poor's rating categories is reported in [21]. For example, the rating B is assigned to a firm that is vulnerable to changes in economic conditions but currently shows the ability to meet its financial obligations. The first 7 states are working states (good states) and the last two are nonworking states. The two subsets are the following:

$$U = \{\text{AAA, AA, A, BBB, BB, B, CCC}\}, \qquad D_1 = \{\text{D, NR}\}. \quad (5.2)$$

By solving the different evolution equations we obtain the following results.

(1.1') ${}^{bf}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$.

(1.1'') ${}^{bF}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and a starting certain forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$.
Both results take into account the different probabilities of changing state as a function of all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time $u$; in the second case, after a time $u$.

(1.2') ${}^{bf}A_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time).

(1.2'') ${}^{bF}A_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time).

(2.1') ${}^{bf}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$, given that the two down states are considered absorbing.

(2.1'') ${}^{bF}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i \in U$ with an initial backward time $v$ and a starting certain forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$, given that the two down states are considered absorbing.

(2.2') ${}^{bf}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ represents the probability that the system was always up in the time interval $(0, t]$, given that it entered state $i \in U$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time), considering the two down states as absorbing.

(2.2'') ${}^{bF}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF\,R}_{ij}(v, u; v', t, u')$ represents the probability that the system was always up in the time interval $(0, t]$, given that it entered state $i \in U$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time), considering the two down states as absorbing.

(3.1') ${}^{bf}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i \in D$ with an initial backward time $v$ and an initial forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$, given that all the up states are considered absorbing.

(3.1'') ${}^{bF}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i \in D$ with an initial backward time $v$ and a starting certain forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$, given that all the up states are considered absorbing.

(3.2') ${}^{bf}M_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i \in D$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time), given that all the up states are considered absorbing.
(3.2'') ${}^{bF}M_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF\,M}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i \in D$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time), given that all the up states are considered absorbing.

The maintainability function $M$ has a precise financial meaning. It assesses the probability that a firm leaves the set $D$ within time $t$ through reorganization. In fact, if the firm is reorganized, the rating agency will assign a new rating which evaluates the new financial situation. In the No Rating state, the re-entrance of the firm into the bond market will imply a new rating evaluation. Table 1 gives the embedded Markov chain obtained by considering all the transitions in the historical database of S&P. This matrix was presented in [34].

Table 1: Embedded Markov chain obtained from the S&P data.

5.2. The Default as Absorbing Case

In many credit risk migration models, the No Rating state is ignored and the default is considered an absorbing state. Under these hypotheses, the embedded Markov chain of the semi-Markov process has only two classes of states: the first is a transient class and the second is an absorbing class. The absorbing class consists of only one state, and all the elements of the main diagonal of the matrix are always greater than zero. This kind of matrix (and the corresponding process) is called mono-unireducible; see [35]. The set of states becomes

$$I_2 = \{\text{AAA, AA, A, BBB, BB, B, CCC, D}\}, \quad (5.3)$$

and the partition into up and down states is

$$U = \{\text{AAA, AA, A, BBB, BB, B, CCC}\}, \qquad D_2 = \{\text{D}\}. \quad (5.4)$$

The embedded Markov chain is reported in Table 2.

Table 2: S&P embedded Markov chain with default as an absorbing state.

In this case, reliability and availability coincide. Indeed, the only down state is absorbing, and if the system is available at time $t$ it means that it never went into the default state, so it remained in an up state for the whole observation time, which is the definition of reliability. Furthermore, maintainability does not make sense because it is not possible to exit from the default state. The following results can be obtained.

(1.1') ${}^{bf}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$.

(1.1'') ${}^{bF}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and a starting certain forward time $u$, with $v' \le T_{N(t)} \le t < T_{N(t)+1} \le u'$.

Both results take into account the different probabilities of changing state during the permanence of the system in the same state, considering all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time $u$; in the second case, after a time $u$.

(1.2') ${}^{bf}A_i^{BF}(v, u; v', t, u') = {}^{bf}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bf}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i \in U$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time).

(1.2'') ${}^{bF}A_i^{BF}(v, u; v', t, u') = {}^{bF}R_i^{BF}(v, u; v', t, u') = \sum_{j \in U} {}^{bF}\phi^{BF}_{ij}(v, u; v', t, u')$ represents the probability that the system has an up rating at time $t$, given that it entered state $i \in U$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time).
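To show how a term structure of default probabilities follows from the reliability indicators in the default-absorbing case, here is a purely illustrative sketch reusing the hypothetical solve_hsmp and reliability helpers from the earlier sketches. The state set matches (5.3) and (5.4), but the kernel Q8 below is an invented toy, not the S&P kernel of Tables 1 and 2, and the sketch ignores the recurrence-time constraints.

```python
import numpy as np

# Illustrative only: the transition structure and numbers below are hypothetical
# placeholders, not the S&P kernel of Tables 1 and 2.
states = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D"]   # the set I_2 of (5.3)
up, down = list(range(7)), [7]                              # U and D_2 of (5.4)

m, T = len(states), 120
t_grid = np.arange(T + 1)
waiting = 1 - np.exp(-0.03 * t_grid)                 # any waiting-time d.f. is admissible
Q8 = np.zeros((T + 1, m, m))
for i in range(7):                                   # toy kernel: one notch up/down or default
    up_p = 0.45 if i > 0 else 0.0
    if i > 0:
        Q8[:, i, i - 1] = up_p * waiting             # upgrade by one notch
    Q8[:, i, min(i + 1, 6)] += (0.90 - up_p) * waiting   # downgrade (or virtual transition for CCC)
    Q8[:, i, 7] += 0.10 * waiting                    # direct default
Q8[:, 7, 7] = waiting                                # default is absorbing

R = reliability(Q8, up, down)     # from the Section 4 sketch: P[no default up to t]
PD = 1.0 - R[:, :7]               # term structure of default probabilities per starting rating
print(dict(zip(states[:7], np.round(PD[60], 3))))    # e.g. at t = 60 grid steps
```

Since default is the only down state and it is absorbing, reliability and availability coincide here, and the default probability is simply the complement of the reliability.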
Remark 5.1. We wish to mention that the Markov matrices given yearly in the Standard & Poor's publications always have larger elements on the main diagonal than the matrices presented in this paper. The reason is that, in a semi-Markov environment, a transition is recorded only when there is a real assessment of the state. In a credit risk environment this means that a transition is counted if and only if the rating agency assigns a new rating, given that the firm already has a rating. In the S&P transition Markov chain, if in a given year there was no rating evaluation of a firm, it is supposed that the firm is still in the same state. The rating agency, in the construction of the transition matrix, then counts a "virtual" transition (a transition from that state into the same state). This implies that the number of virtual transitions is very high and the Markov chain becomes, almost everywhere, diagonally dominant. In the embedded Markov chain of the SMP, virtual transitions are possible, but they happen only when the rating agency assigns a new rating that is equal to the previous one. We think another reason for the superior performance of the semi-Markov environment, as compared to the Markov one, is that the former considers only the transitions of the rating process which actually occurred.

6. Conclusions

This paper introduces, for the first time to the authors' knowledge, initial and final backward and forward processes considered simultaneously in a continuous time homogeneous semi-Markov environment. By means of this new approach a generalization of the transition probabilities of an HSMP is given, and we show how it is possible to take into account the time spent by the system in the starting state and in the final state. The waiting time inside the starting state is managed by means of the initial backward and forward times. The time spent in the last state of the considered horizon is studied by means of the final backward and forward times. The obtained results are used to derive generalized reliability measures, and we show how it is possible to compute them. An application to credit risk problems, considered as a particular aspect of the more general context of the reliability of a system, is illustrated. In this way, the paper may also serve the purpose of inviting stochastic modelling engineers into a new field. The model could, however, also be useful for solving other reliability problems. In the last part of the paper, the Markov chains embedded in the homogeneous semi-Markov processes obtained from the historical Standard & Poor's database are presented. The difference between the obtained transition matrices and the ones provided by the Standard & Poor's agency is outlined, and the authors explain why the matrices obtained here are more reliable than those of Standard & Poor's. Future work includes the construction of (i) the discrete time version of this model, (ii) the related algorithm and computer program, (iii) the nonhomogeneous model, and (iv) the related algorithm and computer program.
Furthermore, we hope to apply the models to the mechanical reliability context in the near future.

References

[1] P. Lévy, "Processus semi-markoviens," in Proceedings of the International Congress of Mathematicians, P. Erven and N. V. Noordhoff, Eds., pp. 416–426, Groningen, The Netherlands, 1956.
[2] W. L. Smith, "Regenerative stochastic processes," Proceedings of the Royal Society of London A, vol. 232, pp. 6–31, 1955.
[3] R. Howard, Dynamic Probabilistic Systems, vol. 2, John Wiley & Sons, New York, NY, USA, 1971.
[4] J. Janssen and R. Manca, Applied Semi-Markov Processes, Springer, New York, NY, USA, 2006.
[5] N. Limnios and G. Oprişan, Semi-Markov Processes and Reliability Modelling, Statistics for Industry and Technology, Birkhäuser, Boston, Mass, USA, 2001.
[6] J. Janssen and R. Manca, Semi-Markov Risk Models for Finance, Insurance and Reliability, Springer, New York, NY, USA, 2007.
[7] R. Schassberger, "Insensitivity of steady-state distributions of generalized semi-Markov processes: part 1," The Annals of Probability, vol. 5, no. 1, pp. 87–99, 1977.
[8] A. D. Barbour, "Generalized semi-Markov schemes and open queueing networks," Journal of Applied Probability, vol. 19, no. 2, pp. 469–474, 1982.
[9] P. Taylor, "Insensitivity in processes with zero speeds," Advances in Applied Probability, vol. 21, no. 3, pp. 612–628, 1989.
[10] P. G. Taylor, "Algebraic criteria for extended product form in generalised semi-Markov processes," Stochastic Processes and Their Applications, vol. 42, no. 2, pp. 269–282, 1992.
[11] N. Limnios and G. Oprişan, "A unified approach for reliability and performability evaluation of semi-Markov systems," Applied Stochastic Models in Business and Industry, vol. 15, no. 4, pp. 353–368, 1999.
[12] V. Barbu, M. Boussemart, and N. Limnios, "Discrete-time semi-Markov model for reliability and survival analysis," Communications in Statistics, Theory and Methods, vol. 33, no. 11-12, pp. 2833–2868, 2004.
[13] A. Blasi, J. Janssen, and R. Manca, "Numerical treatment of homogeneous and non-homogeneous semi-Markov reliability models," Communications in Statistics, Theory and Methods, vol. 33, no. 3, pp. 697–714, 2004.
[14] J. Yackel, "Limit theorems for semi-Markov processes," Transactions of the American Mathematical Society, vol. 123, pp. 402–424, 1966.
[15] E. Çinlar, "Markov renewal theory," Advances in Applied Probability, vol. 1, pp. 123–187, 1969.
[16] V. Korolyuk and A. Swishchuk, Semi-Markov Random Evolutions, vol. 308 of Mathematics and Its Applications, Kluwer Academic Publishers, New York, NY, USA, 1995.
[17] M. Zelen, "Forward and backward recurrence times and length biased sampling: age specific models," Lifetime Data Analysis, vol. 10, no. 4, pp. 325–334, 2004.
[18] F. Stenberg, R. Manca, and D. Silvestrov, "Semi-Markov reward models for disability insurance," Theory of Stochastic Processes, vol. 12, no. 28, pp. 239–254, 2006.
[19] D. Lando, Credit Risk Modelling, Princeton University Press, Princeton, NJ, USA, 2004.
[20] D. Duffie and K. J. Singleton, Credit Risk, Princeton University Press, Princeton, NJ, USA, 2003.
[21] C. Bluhm, L. Overbeck, and C. Wagner, An Introduction to Credit Risk Management, Chapman & Hall, London, UK, 2002.
[22] M. Crouhy, D. Galai, and R. Mark, "A comparative analysis of current credit risk models," Journal of Banking and Finance, vol. 24, pp. 59–117, 2000.
[23] R. A. Jarrow, D. Lando, and S. M. Turnbull, "A Markov model for the term structure of credit risk spreads," The Review of Financial Studies, vol. 10, pp. 481–523, 1997.
[24] E. I. Altman, "The importance and subtlety of credit rating migration," Journal of Banking and Finance, vol. 22, pp. 1231–1247, 1998.
[25] P. Nickell, W. Perraudin, and S. Varotto, "Stability of rating transitions," Journal of Banking and Finance, vol. 24, pp. 203–227, 2000.
[26] D. Lando and T. M. Skodeberg, "Analyzing rating transitions and rating drift with continuous observations," Journal of Banking and Finance, vol. 26, pp. 423–444, 2002.
[27] G. D'Amico, J. Janssen, and R. Manca, "Homogeneous semi-Markov reliability models for credit risk management," Decisions in Economics and Finance, vol. 28, no. 2, pp. 79–93, 2005.
[28] G. D'Amico, J. Janssen, and R. Manca, "Non-homogeneous backward semi-Markov reliability approach to downward migration credit risk problem," in Proceedings of the 8th Italian–Spanish Meeting on Financial Mathematics, Verbania, Italy, 2005.
[29] G. D'Amico, J. Janssen, and R. Manca, "Valuing credit default swap in a semi-Markovian rating-based model," Computational Economics, vol. 29, pp. 119–138, 2007.
[30] G. D'Amico, "A semi-Markov maintenance model with credit rating application," IMA Journal of Management Mathematics, vol. 20, no. 1, pp. 51–58, 2009.
[31] S. Osaki, Stochastic System Reliability Modeling, vol. 5 of Series in Modern Applied Mathematics, World Scientific, Singapore, 1985.
[32] L. Carty and J. Fons, "Measuring changes in corporate credit quality," The Journal of Fixed Income, vol. 4, pp. 27–41, 1994.
[33] A. Vasileiou and P. C.-G. Vassiliou, "An inhomogeneous semi-Markov model for the term structure of credit risk spreads," Advances in Applied Probability, vol. 38, no. 1, pp. 171–198, 2006.
[34] G. Di Biase, J. Janssen, and R. Manca, "A real data semi-Markov reliability model," in Proceedings of the 4th International Workshop in Applied Probability (IWAP '08), Compiègne, France, 2008.
[35] G. D'Amico, J. Janssen, and R. Manca, "The dynamic behavior of non-homogeneous single unireducible Markov and semi-Markov chains," in Networks, Topology and Dynamics, A. K. Naimzada, S. Stefani, and A. Torriero, Eds., vol. 613 of Lecture Notes in Economics and Mathematical Systems, pp. 195–211, Springer, Berlin, Germany.
