Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation

Methodol Comput Appl Probab, https://doi.org/10.1007/s11009-018-9644-1

O. J. Boxma (1) · E. J. Cahen (2) · D. Koops (3) · M. Mandjes (3)

Received: 7 March 2017 / Revised: 16 May 2018 / Accepted: 22 May 2018
© The Author(s) 2018

Abstract  We consider a linear stochastic fluid network under Markov modulation, with a focus on the probability that the joint storage level attains a value in a rare set at a given point in time. The main objective is to develop efficient importance sampling algorithms with provable performance guarantees. For linear stochastic fluid networks without modulation, we prove that the number of runs needed (so as to obtain an estimate with a given precision) increases polynomially (whereas the probability under consideration decays essentially exponentially); for networks operating in the slow modulation regime, our algorithm is asymptotically efficient. Our techniques are in the tradition of the rare-event simulation procedures that were developed for the sample mean of i.i.d. one-dimensional light-tailed random variables, and intensively use the idea of exponential twisting. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks.

Keywords  Linear networks · Stochastic processes · Queues · Rare events · Importance sampling

Mathematics Subject Classification (2010)  60K25 · 60F10 · 65C05

Corresponding author: M. Mandjes, m.r.h.mandjes@uva.nl
(1) EURANDOM and Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands
(2) CWI, Science Park 123, 1098 XG Amsterdam, The Netherlands
(3) Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Science Park 105-107, 1098 XG Amsterdam, The Netherlands

1 Introduction

Linear stochastic fluid networks, as introduced in Kella and Whitt (1999), can be informally described as follows. Consider a network consisting of L stations. Jobs, whose sizes are i.i.d. samples from some general L-dimensional distribution, arrive at the stations according to a Poisson process. At each of the nodes, in between arrivals the storage level decreases exponentially. Processed traffic is either transferred to the other nodes or leaves the network (according to a given routing matrix). In addition to this basic version of the linear stochastic fluid network, there is also its Markov-modulated counterpart (Kella and Stadje 2002), in which the arrival rate, the distribution of the job sizes, and the routing matrix depend on the state of an external, autonomously evolving finite-state continuous-time Markov chain (usually referred to as the background process).

Linear stochastic fluid networks can be seen as natural fluid counterparts of corresponding infinite-server queues. As such, they inherit several nice properties of those infinite-server queues. In particular, separate infinitesimally small fluid particles, moving through the network, do not interfere, and are therefore mutually independent.
Essentially due to this property, linear stochastic fluid networks allow explicit analysis; in particular, the joint Laplace transform of the storage levels at a given point in time can be expressed in closed form as a function of the arrival rate, the Laplace transform of the job sizes and the routing matrix (Kella and Whitt 1999, Thm. 5.1).

When Markov modulation is imposed, the analysis becomes substantially harder. Conditional on the path of the background process, again explicit expressions can be derived, cf. (Kella and Stadje 2002, Thm. 1). Unconditioning, however, cannot be done in a straightforward manner. As a consequence the results found are substantially less explicit than for the non-modulated linear stochastic fluid network. In Kella and Stadje (2002) also a system of ordinary differential equations has been set up that provides the transform of the stationary storage level; in addition, conditions are identified that guarantee the existence of such a stationary distribution.

In this paper we focus on rare events for Markov-modulated linear stochastic fluid networks. More specifically, in a particular scaling regime (parameterized by n) we analyze the probability p_n that at a given point in time the network storage vector is in a given rare set. By scaling the arrival rate as well as the rare set (which amounts to multiplying them by a scaling parameter n), the event of interest becomes increasingly rare. More specifically, under a Cramér-type assumption on the job-size distribution, application of large-deviations theory yields that p_n decays (roughly) exponentially. As p_n can be characterized only asymptotically, one could consider the option of using simulation to obtain precise estimates. The effectiveness, however, of such an approach is limited due to the rarity of the event under consideration: in order to get a reliable estimate, one needs sufficiently many runs in which the event occurs. This is the reason why one often resorts to simulation using importance sampling (or: change of measure). This is a variance reduction technique in which one replaces the actual probability measure by an alternative measure under which the event under consideration is not rare; correcting the simulation output with appropriate likelihood ratios yields an unbiased estimate. The crucial issue when setting up an importance sampling procedure concerns the choice of the alternative measure: one would like to select one that provides a substantial variance reduction, or is even (in some sense) optimal. The objective of this paper is to develop a change of measure which performs provably optimally.

Our ultimate goal is to obtain an efficient simulation procedure for Markov-modulated linear stochastic fluid networks. We do so by (i) first considering a single node without modulation, (ii) then multi-node systems, still without modulation, and (iii) finally modulated multi-node systems. There are two reasons for this step-by-step setup:
◦ For the non-modulated models we have more refined results than for the modulated models. More specifically, for the non-modulated models we have developed estimates for the number of runs required to obtain an estimate with predefined precision (showing that this number grows polynomially in the rarity parameter n), whereas for modulated models we can just prove that it grows subexponentially.
◦ In addition, this approach allows the reader to get gradually familiar with the concepts used in this paper.
The construction and analysis of our importance sampling methodology is based on the ideas developed in Blom and Mandjes (2013); there the focus was on addressing similar issues for a single-node Markov-modulated infinite-server system. In line with Blom and Mandjes (2013), we consider the regime in which the background process is 'slow': while we (linearly) speed up the driving Poisson process, we leave the rates of the Markovian background process unaltered.

A traditional, thoroughly examined, importance sampling problem concerns the sample mean S_n of n i.i.d. light-tailed random variables X_1, ..., X_n; the objective there is to estimate P(S_n ≥ a) for a > E X_1 and n large. As described in (Asmussen and Glynn 2007, Section VI.2), in this situation importance sampling (i.e., sampling under an alternative measure, and translating the simulation output back by applying appropriate likelihood ratios) works extremely well. To this end, the distribution of the X_i should be exponentially twisted. As it turns out, in our setup, the probability of our interest can be cast in terms of this problem. Compared to the standard setup of sample means of one-dimensional random variables, however, there are a few complications: (i) in our case it is not a priori clear how to sample from the exponentially twisted distributions, (ii) we consider multi-dimensional distributions (i.e., rare-event probabilities that concern the storage levels of all individual buffers in the network), (iii) we impose Markov modulation. We refer to e.g. Glasserman and Juneja (2008) and Kuhn et al. (2017) for earlier work on similar problems. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks (where the results in Kella and Stadje (2002) are restricted to just the first two stationary moments at epochs that the background process jumps).

The single-node model without modulation falls in the class of (one-dimensional) shot-noise models, for which efficient rare-event simulation techniques have been developed over the past, say, two decades. Asmussen and Nielsen (1995) and Ganesh et al. (2007) consider the probability that a shot-noise process decreased by a linear drift ever exceeds some given level. Relying on sample-path large-deviations results, an asymptotically efficient importance sampling algorithm is developed, under the same scaling as the one we consider in our paper. The major difference with our model (apart from the fact that we deal with considerably more general models, as we focus on networks and allow modulation) is that we focus on a rare-event probability that relates to the position of the process at a fixed point in time; in this setting we succeed in finding accurate estimates of the number of runs needed to get an estimate of given precision.

There is a vast body of literature related to the broader area of rare-event simulation for queueing systems. We refer to the literature overviews (Blanchet and Mandjes 2009; Juneja et al. 2006); interesting recent papers include (Asmussen and Kortschak 2015; Cahen et al. 2017; Sezer 2009).

This paper is organized as follows.
In Section 2 the focus is on a single-node network, without Markov modulation (addressing complication (i) above), Section 3 addresses the extension to multi-node systems (addressing complication (ii)), and in Section 4 the feature of modulation is added (addressing complication (iii)). In each of these three sections, we propose a change of measure, quantify its performance, and demonstrate its efficiency through simulation experiments. In Section 4.1 we include the explicit expressions for the moments in Markov-modulated linear stochastic fluid networks. A discussion and concluding remarks are found in Section 5.

2 Single Resource, No Modulation

To introduce the concepts we work with in this paper, we analyze in this section a linear stochastic fluid network consisting of a single node, in which the input is just compound Poisson (so no Markov modulation is imposed). More precisely, in the model considered, jobs arrive according to a Poisson process with rate λ, bring along i.i.d. amounts of work (represented by the sequence of i.i.d. random variables (B_1, B_2, ...)), and the workload level decays exponentially at a rate r > 0. This model belongs to the class of shot-noise processes. As mentioned in the introduction, we gradually extend the model in the next sections.

2.1 Preliminaries

We first present a compact representation for the amount of work in the system at time t, which we denote by X(t), through its moment generating function. To this end, let N(t) denote a Poisson random variable with mean λt, and (U_1, U_2, ...) i.i.d. uniformly distributed random variables (on the interval [0, t]). Assume in addition that the random objects (B_1, B_2, ...), N(t), and (U_1, U_2, ...) are independent. Then it is well known that the value of our shot-noise process at time t can be expressed as
\[
X(t) = \sum_{j=1}^{N(t)} B_j\, e^{-r(t-U_j)} \stackrel{\rm d}{=} \sum_{j=1}^{N(t)} B_j\, e^{-rU_j}, \tag{1}
\]
where the distributional equality is a consequence of the fact that the distribution of U_j is symmetric on the interval [0, t]. It is easy to compute the moment generating function (MGF) of X(t), by conditioning on the value of N(t):
\[
M(\vartheta) := {\mathbb E}\, e^{\vartheta X(t)}
= \sum_{k=0}^{\infty} e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}\left({\mathbb E}\exp\bigl(\vartheta B e^{-rU}\bigr)\right)^{k}
= \exp\left(\lambda\int_0^t\bigl(\beta(e^{-ru}\vartheta)-1\bigr)\,{\rm d}u\right), \tag{2}
\]
where β(·) is the MGF corresponding to B (throughout assumed to exist). By differentiating and inserting ϑ = 0, it follows immediately that
\[
{\mathbb E}\, X(t) = \frac{\lambda}{r}\bigl(1-e^{-rt}\bigr)\,{\mathbb E}\, B =: m(t).
\]
Higher moments can be found by repeated differentiation. We note that, as t is held fixed throughout the document, we often write N rather than N(t).

2.2 Tail Probabilities, Change of Measure

The next objective is to consider the asymptotics of the random variable X(t) under a particular scaling. In this scaling we let the arrival rate be nλ rather than just λ, for n ∈ N. The value of the shot-noise process is now given by
\[
Y_n(t) := \sum_{i=1}^{n} X_i(t),
\]
with the vector (X_1(t), ..., X_n(t)) consisting of i.i.d. copies of the random variable X(t) introduced above; here the infinite divisibility of a compound Poisson distribution is used. Our goal is to devise techniques to analyze the tail distribution of Y_n(t). Standard theory now provides us with the asymptotics of
\[
p_n(a) = {\mathbb P}\bigl(Y_n(t) \geq na\bigr)
\]
for some a > m(t); we are in the classical 'Cramér setting' (Dembo and Zeitouni 1998, Section 2.2) if it is assumed that M(ϑ) is finite in a neighborhood around the origin (which requires that the same property is satisfied by β(·)).
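To make the representation in Eq. 1 concrete, the following minimal sketch (not taken from the paper; exponential job sizes are assumed purely for illustration) simulates X(t) and compares the sample mean with m(t).

```python
import numpy as np

# Minimal sketch, not from the paper: simulate the shot-noise value X(t) of Eq. 1,
# assuming exponential job sizes for concreteness, and check E X(t) = m(t).
rng = np.random.default_rng(1)

def sample_X(t, lam, r, mu, size):
    """Draw i.i.d. copies of X(t) = sum_j B_j exp(-r U_j), with U_j ~ Unif[0, t]."""
    out = np.empty(size)
    for i in range(size):
        k = rng.poisson(lam * t)               # number of shots, N(t)
        u = rng.uniform(0.0, t, size=k)        # (time-reversed) arrival epochs
        b = rng.exponential(1.0 / mu, size=k)  # job sizes with mean 1/mu
        out[i] = np.sum(b * np.exp(-r * u))
    return out

t, lam, r, mu = 1.0, 1.0, 1.0, 1.0
x = sample_X(t, lam, r, mu, size=100_000)
m_t = lam / r * (1.0 - np.exp(-r * t)) / mu    # m(t) = (lambda/r)(1 - e^{-rt}) E B
print(np.mean(x), m_t)                         # the two numbers should nearly coincide
```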
Let I(a) and ϑ* ≡ ϑ*(a), respectively, be defined as
\[
I(a) := \sup_{\vartheta}\bigl(\vartheta a - \log M(\vartheta)\bigr), \qquad
\vartheta^\star := \arg\sup_{\vartheta}\bigl(\vartheta a - \log M(\vartheta)\bigr),
\]
with M(·) as above. Using 'Cramér', we obtain that, under mild conditions,
\[
\lim_{n\to\infty}\frac{1}{n}\log p_n(a) = -I(a) = -\vartheta^\star a + \log M(\vartheta^\star).
\]
More refined asymptotics are available as well; we get back to this issue in Section 2.3.

As these results apply in the regime that n is large, a relevant issue concerns the development of efficient techniques to estimate p_n(a) through simulation. An important rare-event simulation technique is importance sampling, relying on the commonly used exponential twisting technique. We now investigate how to construct the exponentially twisted version Q (with twist ϑ*) of the original probability measure P. The main idea is that under Q the X_i(t) have mean a, such that under the new measure the event under study is not rare anymore. More concretely, exponential twisting with parameter ϑ* means that under the new measure Q, the X_i(t) should have the MGF
\[
{\mathbb E}_{\mathbb Q}\, e^{\vartheta X(t)} = \frac{{\mathbb E}\, e^{(\vartheta+\vartheta^\star)X(t)}}{{\mathbb E}\, e^{\vartheta^\star X(t)}} = \frac{M(\vartheta+\vartheta^\star)}{M(\vartheta^\star)}; \tag{3}
\]
under this choice the random variable has the desired mean:
\[
{\mathbb E}_{\mathbb Q}\, X(t) = \frac{M'(\vartheta^\star)}{M(\vartheta^\star)} = a.
\]
The question is now: how to sample a random variable that has this MGF? To this end, notice that M(ϑ) = exp(−λt + λt E exp(ϑBe^{−rU})) and
\[
M(\vartheta+\vartheta^\star) = e^{-\lambda t}\sum_{k=0}^{\infty}\frac{\bigl(\lambda t\,{\mathbb E}\exp(\vartheta^\star Be^{-rU})\bigr)^k}{k!}
\left(\frac{{\mathbb E}\exp((\vartheta+\vartheta^\star)Be^{-rU})}{{\mathbb E}\exp(\vartheta^\star Be^{-rU})}\right)^{k},
\]
such that Eq. 3 equals
\[
\exp\bigl(-\lambda t\,{\mathbb E}\exp(\vartheta^\star Be^{-rU})\bigr)\sum_{k=0}^{\infty}\frac{\bigl(\lambda t\,{\mathbb E}\exp(\vartheta^\star Be^{-rU})\bigr)^k}{k!}
\left(\frac{{\mathbb E}\exp((\vartheta+\vartheta^\star)Be^{-rU})}{{\mathbb E}\exp(\vartheta^\star Be^{-rU})}\right)^{k}.
\]
From this expression we can see how to sample the X_i(t) under Q, as follows. In the first place we conclude that under Q the number of arrivals becomes Poisson with mean
\[
\lambda t\,{\mathbb E}\exp\bigl(\vartheta^\star Be^{-rU}\bigr) = \lambda\int_0^t \beta\bigl(e^{-ru}\vartheta^\star\bigr)\,{\rm d}u, \tag{4}
\]
rather than λt (which is an increase). Likewise, it entails that under Q the distribution of the Be^{−rU} should be twisted by ϑ*, in the sense that these random variables should have under Q the MGF
\[
{\mathbb E}_{\mathbb Q}\exp\bigl(\vartheta Be^{-rU}\bigr) = \frac{{\mathbb E}\exp\bigl((\vartheta+\vartheta^\star)Be^{-rU}\bigr)}{{\mathbb E}\exp\bigl(\vartheta^\star Be^{-rU}\bigr)}.
\]
We now point out how such a random variable should be sampled. To this end, observe that
\[
{\mathbb E}\exp\bigl((\vartheta+\vartheta^\star)Be^{-rU}\bigr) = \frac{1}{t}\int_0^t \frac{\beta\bigl(e^{-ru}(\vartheta+\vartheta^\star)\bigr)}{\beta\bigl(e^{-ru}\vartheta^\star\bigr)}\,\beta\bigl(e^{-ru}\vartheta^\star\bigr)\,{\rm d}u,
\]
so that
\[
{\mathbb E}_{\mathbb Q}\exp\bigl(\vartheta Be^{-rU}\bigr) = \int_0^t \frac{\beta\bigl(e^{-ru}(\vartheta+\vartheta^\star)\bigr)}{\beta\bigl(e^{-ru}\vartheta^\star\bigr)}\,
\frac{\beta\bigl(e^{-ru}\vartheta^\star\bigr)}{\int_0^t\beta\bigl(e^{-rv}\vartheta^\star\bigr)\,{\rm d}v}\,{\rm d}u.
\]
From this representation two conclusions can be drawn. In the first place, supposing there are k arrivals, then the arrival epochs U_1, ..., U_k are i.i.d. under Q, with the density given by
\[
f^{\mathbb Q}_U(u) = \frac{\beta\bigl(e^{-ru}\vartheta^\star\bigr)}{\int_0^t\beta\bigl(e^{-rv}\vartheta^\star\bigr)\,{\rm d}v}.
\]
In the second place, given that the k-th arrival occurs at time u, the density of the corresponding job size B_k should be exponentially twisted by e^{−ru}ϑ* (where each of the job sizes is sampled independently of everything else).
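The recipe above is easy to evaluate numerically. The sketch below (not part of the paper; exponential job sizes are assumed, so that β(x) = μ/(μ − x)) locates ϑ* from the mean condition M′(ϑ*)/M(ϑ*) = a by a one-dimensional root search and evaluates the Q-arrival rate of Eq. 4 by quadrature; for the parameters of Section 2.4 it should return approximately 0.2918 and 1.2315.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch, not from the paper: compute the twist theta* and the Poisson rate under Q (Eq. 4),
# assuming exponential(mu) job sizes so that beta(x) = mu/(mu - x) for x < mu.
t, lam, r, mu, a = 1.0, 1.0, 1.0, 1.0, 1.0

beta  = lambda x: mu / (mu - x)           # MGF of B
dbeta = lambda x: mu / (mu - x) ** 2      # derivative of the MGF

def twisted_mean(theta):
    # (d/d theta) log M(theta) = lambda * int_0^t beta'(e^{-ru} theta) e^{-ru} du
    return lam * quad(lambda u: dbeta(np.exp(-r * u) * theta) * np.exp(-r * u), 0.0, t)[0]

# root bracket assumes the solution lies below 0.99*mu (true for these parameters)
theta_star = brentq(lambda th: twisted_mean(th) - a, 0.0, 0.99 * mu)

q_rate = lam * quad(lambda u: beta(np.exp(-r * u) * theta_star), 0.0, t)[0]  # Eq. 4
print(theta_star, q_rate)   # approx 0.2918 and 1.2315 for these parameters
```

For job-size distributions where the twisted epoch distribution cannot be inverted in closed form, one can still sample the epochs by acceptance–rejection against the uniform density, since f_U^Q is bounded on [0, t] whenever β is finite at ϑ*.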
Now that we know how to sample from Q it is straightforward to implement the importance sampling. Before we describe its complexity (in terms of the number of runs required to obtain an estimate with given precision), we first provide an example in which we demonstrate how the change of measure can be performed.

Example 1  In this example we consider the case that the B_i are exponentially distributed with mean μ^{−1}. Applying the transformation w := e^{−ru}ϑ/μ, it is first seen that
\[
\int_0^s \beta\bigl(e^{-ru}\vartheta\bigr)\,{\rm d}u = \int_0^s \frac{\mu}{\mu-e^{-ru}\vartheta}\,{\rm d}u
= \frac{1}{r}\int_{e^{-rs}\vartheta/\mu}^{\vartheta/\mu}\frac{1}{1-w}\,\frac{1}{w}\,{\rm d}w
= \frac{1}{r}\left[\log\frac{w}{1-w}\right]_{e^{-rs}\vartheta/\mu}^{\vartheta/\mu}
= \frac{1}{r}\log\frac{\mu e^{rs}-\vartheta}{\mu-\vartheta}.
\]
As ϑ* solves the equation M′(ϑ*)/M(ϑ*) = a, we obtain the quadratic equation
\[
m(t) = a\left(1-\frac{\vartheta}{\mu}\right)\left(1-\frac{\vartheta}{\mu}\,e^{-rt}\right),
\]
leading to
\[
\vartheta^\star = \frac{\mu e^{rt}}{2}\left(\bigl(1+e^{-rt}\bigr) - \sqrt{\bigl(1-e^{-rt}\bigr)^2 + 4e^{-rt}\,\frac{m(t)}{a}}\right)
\]
(where it is readily checked that ϑ* ∈ (0, μ)). Now we compute what the alternative measure Q amounts to. In the first place, the number of arrivals should become Poisson with parameter
\[
\frac{\lambda}{r}\log\frac{\mu e^{rt}-\vartheta^\star}{\mu-\vartheta^\star}
\]
(which is larger than λt). In addition, we can check that
\[
F^{\mathbb Q}(u) := {\mathbb Q}(U\leq u) = \log\left(\frac{\mu e^{ru}-\vartheta^\star}{\mu-\vartheta^\star}\right)\Bigg/\log\left(\frac{\mu e^{rt}-\vartheta^\star}{\mu-\vartheta^\star}\right)
\]
(rather than u/t). The function F^Q(u) has the value 0 for u = 0 and the value 1 for u = t, and is concave. This concavity reflects that the arrival epochs of the shots tend to be closer to 0 under Q than under P. This is because we identified each of the U_j with t minus the actual corresponding arrival epoch in Eq. 1; along the most likely path of Y_n(t) itself the shots will typically be closer to t under Q. Observe that one can sample U under Q using the classical inverse distribution function method (Asmussen and Glynn 2007, Section II.2a): with H denoting a uniform number on [0, 1), we obtain such a sample by
\[
\frac{1}{r}\log\left(\left(e^{rt}-\frac{\vartheta^\star}{\mu}\right)^{H}\left(1-\frac{\vartheta^\star}{\mu}\right)^{1-H}+\frac{\vartheta^\star}{\mu}\right).
\]
Also, conditional on a U_i having attained the value u, the jobs B_i should be sampled from an exponential distribution with mean (μ − e^{−ru}ϑ*)^{−1}.

Remark 1  In the model we study in this section, the input of the linear stochastic fluid network is a compound Poisson process. As pointed out in Kella and Whitt (1999) the class of inputs can be extended to the more general class of increasing Lévy processes in a straightforward manner.
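A possible implementation of the resulting estimator of p_n(a) for Example 1 is sketched below (not the authors' code); it uses the closed-form ϑ*, the inverse-distribution-function sampler for the arrival epochs, and the likelihood ratio appearing in the next subsection, with the parameter values of Section 2.4.

```python
import numpy as np

# Sketch, not the authors' code: importance sampling estimator of p_n(a) for Example 1
# (exponential jobs), with the parameters of Section 2.4 (t = r = lambda = mu = 1, a = 1).
rng = np.random.default_rng(2)
t, lam, r, mu, a = 1.0, 1.0, 1.0, 1.0, 1.0

m_t = lam / r * (1.0 - np.exp(-r * t)) / mu
theta = mu * np.exp(r * t) / 2.0 * (
    (1.0 + np.exp(-r * t))
    - np.sqrt((1.0 - np.exp(-r * t)) ** 2 + 4.0 * np.exp(-r * t) * m_t / a)
)                                                  # closed-form twist theta*, approx 0.2918
log_ratio = np.log((mu * np.exp(r * t) - theta) / (mu - theta))
logM = lam / r * log_ratio - lam * t               # log M(theta*)

def one_run(n):
    """One run under Q: returns L * 1{Y_n(t) >= n a}."""
    k = rng.poisson(n * lam / r * log_ratio)       # Poisson number of arrivals under Q
    h = rng.uniform(size=k)
    u = np.log((np.exp(r * t) - theta / mu) ** h
               * (1.0 - theta / mu) ** (1.0 - h) + theta / mu) / r   # inverse-CDF epochs
    b = rng.exponential(1.0 / (mu - np.exp(-r * u) * theta))         # twisted job sizes
    y = np.sum(b * np.exp(-r * u))
    return np.exp(-theta * y + n * logM) if y >= n * a else 0.0

n, runs = 20, 20_000
print(np.mean([one_run(n) for _ in range(runs)]))  # unbiased estimate of p_n(a)
```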
2.3 Efficiency Properties of Importance Sampling Procedure

In this subsection we analyze the performance of the procedure introduced in the previous section. The focus is on a characterization of the number of runs needed to obtain an estimate with a given precision (at a given confidence level). In every run Y_n(t) is sampled under Q, as pointed out above. As Q is an implementation of an exponential twist (with twist ϑ*), the likelihood ratio (of sampling Y_n(t) under P relative to Q) is given by
\[
L = \frac{{\rm d}{\mathbb P}}{{\rm d}{\mathbb Q}} = e^{-\vartheta^\star Y_n(t)}\, e^{n\log M(\vartheta^\star)}.
\]
In addition, define I as the indicator function of the event {Y_n(t) ≥ na}. Clearly, E_Q(LI) = p_n(a). We keep generating samples LI (under Q), and estimate p_n(a) by the corresponding sample mean, until the ratio of the half-width of the confidence interval (with critical value T) and the estimator drops below some predefined ε (say, 10%).

Under P the number of runs needed is effectively inversely proportional to p_n(a), hence exponentially increasing in n. We now focus on quantifying the reduction of the number of runs when using the importance sampling procedure we described above, i.e., the one based on the measure Q. Using a Normal approximation, it is a standard reasoning that when performing N runs the ratio of the half-width of the confidence interval and the estimator is approximately
\[
\frac{1}{p_n(a)}\cdot\frac{T}{\sqrt{N}}\,\sqrt{{\rm Var}_{\mathbb Q}(LI)},
\]
and hence the number of runs needed is roughly
\[
\Sigma_n := \frac{T^2}{\varepsilon^2}\,\frac{{\rm Var}_{\mathbb Q}(LI)}{(p_n(a))^2}.
\]
We now analyze how Σ_n behaves as a function of the 'rarity parameter' n. Due to the Bahadur–Rao result (Bahadur and Rao 1960), with f_n ∼ g_n denoting f_n/g_n → 1 as n → ∞,
\[
p_n(a) = {\mathbb E}_{\mathbb Q}(LI) \sim \frac{1}{\vartheta^\star\sqrt{2\pi\tau n}}\,e^{-nI(a)}, \qquad
\tau := \left.\frac{{\rm d}^2}{{\rm d}\vartheta^2}\log M(\vartheta)\right|_{\vartheta=\vartheta^\star}. \tag{5}
\]
Using the same proof technique as in Bahadur and Rao (1960), it can be shown that
\[
{\mathbb E}_{\mathbb Q}(L^2 I) \sim \frac{1}{2\vartheta^\star\sqrt{2\pi\tau n}}\,e^{-2nI(a)}; \tag{6}
\]
see Appendix A for the underlying computation. It also follows that E_Q(L²I) ∼ Var_Q(LI). We can use these asymptotics to conclude that under Q the number of runs required grows slowly in n. More specifically, Σ_n is essentially proportional to √n for n large. This leads to the following result; cf. (Blanchet et al. 2008, Section 2) for related findings in a more general context.

Proposition 1  As n → ∞,
\[
\Sigma_n \sim \alpha\sqrt{n}, \qquad \alpha = \frac{T^2}{\varepsilon^2}\cdot\frac{\vartheta^\star}{2}\,\sqrt{2\pi\tau}. \tag{7}
\]

2.4 Simulation Experiments

In this subsection we present numerical results for the single-node model without Markov modulation. We focus on the case of exponential jobs, as in Example 1. We simulate until the estimate has reached the precision ε = 0.1, with confidence level 0.95 (such that the critical value is T = 1.96). The parameters chosen are: t = 1, r = 1, λ = 1, and μ = 1. We set a = 1 (which is larger than m(t) = 1 − e^{−1}). As it turns out, ϑ* = 0.2918 and
\[
\tau = \frac{\lambda}{r}\left(\frac{1}{(\mu-\vartheta^\star)^2}-\frac{1}{(\mu e^{rt}-\vartheta^\star)^2}\right) = 1.8240.
\]
The top-left panel of Fig. 1 confirms the exponential decay of the probability of interest, as a function of n. In the top-right panel we verify that the number of runs indeed grows proportionally to √n; the value of α, as defined in Eq. 7, is 198.7, which is depicted by the horizontal line. The bottom-left panel shows the density of the arrival epochs, which confirms that the arrival epochs tend to be closer to 0 under Q than under P; recall that under P these epochs are uniformly distributed on [0, t]. Recall that we reversed time in Eq. 1: for the actual shot-noise system that we are considering, it means that in order to reach the desired level at time t, the arrival epochs tend to be closer to t under Q than under P. The bottom-right panel presents the rate of the exponential job sizes as a function of u. Using (4), the arrival rate under Q turns out to be 1.2315.

Fig. 1  Numerical results for Section 2.4

3 Multi-node Systems, No Modulation

In this section we consider multi-node linear stochastic fluid networks, of the type analyzed in the work by Kella and Whitt (1999). It is instructive to first consider the simplest multi-node system: a tandem network without external input in the downstream node and no traffic leaving after having been served by the upstream node (and rate r_ℓ for node ℓ, ℓ = 1, 2); later we extend the ideas developed to general linear stochastic fluid networks.

3.1 Preliminaries

As mentioned above, we first consider the two-node tandem. The content of the first node is, as before,
\[
X^{(1)}(t) = \sum_{j=1}^{N} B_j\, e^{-r_1(t-U_j)}
\]
The state of this tandem system can be uniquely characterized in terms of its (bivariate) moment generating function. The technique to derive an explicit expression is by relying on the above distributional equality (8). Again, the key step is to condition on the number of shots that have arrived in the interval [0,t ]: with ϑ = (ϑ ,ϑ ), 1 2 (1) (2) ϑ X (t )+ϑ X (t ) 1 2 M(ϑ ) := E e (λt ) r −λt −r U −r U −r U 1 2 1 = e E exp ϑ Be + ϑ B e − e 1 2 k! r − r 1 2 k=0 k t (λt ) 1 r −λt −r u −r u −r u 1 2 1 = e E exp ϑ Be + ϑ B e − e du 1 2 k! t r − r 0 1 2 k=0 ∞ k k t (λt ) 1 r −λt −r u −r u −r u 1 2 1 = e β e ϑ + e − e ϑ du 1 2 k! t r − r 0 1 2 k=0 −r u −r u −r u 1 2 1 = exp λ β e ϑ + e − e ϑ − 1 du . (9) 1 2 r − r 1 2 The above computation is for the two-node tandem system, but the underlying procedure can be extended to the case of networks with more than 2 nodes, and external input in each of the nodes. To this end, we consider the following network consisting of L nodes. Jobs are generated according to a Poisson process. At an arrival epoch, an amount is added to the content of each of the resources ∈{1,...,L}, where the amount added to resource is ( ) L distributed as the (non-negative) random variable B ; β(ϑ ), with ϑ ∈ R , is the joint MGF (1) (L) of B up to B (note that the components are not assumed independent). In addition, let the traffic level at node decay exponentially with rate r (i.e., the value of the output rate is linear in the current level, with proportionality constant r ). A deterministic fraction p   0( = ) is then fed into node , whereas a fraction p  0 leaves the network (with p = 1). We denote r := r p . As an aside we mention that this general =1 model covers models in which some arrivals (of the Poisson process with parameter λ) (1) (L) actually lead to arrivals at only a subset of the L queues (since the job sizes B ,...,B are allowed to equal 0). We now point out how the joint buffer content process can be analyzed. Again our objec- tive is to evaluate the moment generating function. Define the matrix R as follows: its ( , )-th entry is r +  r , whereas its ( , )-th entry (with = )is −r .Wehave, = Methodol Comput Appl Probab according to Kella and Whitt (1999), with N again Poisson with mean λt, the following distributional equality: for any ∈{1,...,L}, L N ( ) ( ) −R(t −U ) X (t ) = B (e )  . j =1 =1 (1) (L) It means we can compute the joint MGF of X (t ) up to X (t ) as follows, cf. (Kella and Whitt 1999,Thm.5.1): ( ) M(ϑ ) := E exp ϑ X (t ) =1 ∞ L L (λt ) −λt ( ) −R(t −U) = e E exp ϑ B (e ) k! k=0 =1 =1 ∞ L L k t (λt ) 1 −λt ( ) −Ru = e E exp ϑ B (e )  du k! t k=0 =1 =1 ∞ L L k t (λt ) 1 −λt −Ru −Ru = e β (e ) ϑ ,..., (e ) ϑ du 1 L k! t k=0 =1 =1 L L −Ru −Ru = exp −λt + λ β (e ) ϑ ,..., (e ) ϑ du 1 L =1 =1 −Ru = exp λ β e ϑ − 1 du , which is the matrix/vector-counterpart of the expression (2) that we found in the single-node case; for the two-node case the special form (9) applies. 3.2 Tail Probabilities, Change of Measure In this subsection we introduce the change of measure that we use in our importance sam- pling approach. Many of the concepts are analogous to concepts used for the single-node case in Section 2. Define (in self-evident notation) (1) (L) p (a) := P Y (t )  na ,...,Y (t )  na . n 1 L n n Due to the multivariate version of Cramer’ ´ s theorem, with A := [a , ∞) × ··· × [a , ∞), 1 L lim log p (a) =− inf I(b), where I(b) := sup ( ϑ , b − log M(ϑ )) . (10) n→∞ n b∈A More refined asymptotics than the logarithmic asymptotics of Eq. 
10 are available as well, but these are not yet relevant in the context of the present subsection; we return to these ‘exact asymptotics’ in Section 3.3. We assume that the set A is ‘rare’, in the sense that ∂M(ϑ ) m(t ) ∈ A, with m (t ) := . ∂ϑ ϑ =0 Methodol Comput Appl Probab Let us now construct the importance sampling measure. Let ϑ be the optimizing ϑ in the decay rate of p (a). Mimicking the reasoning we used in the single-node case, the number of arrivals becomes Poisson with mean −r u  −r u −r u 1 2 1 λ β e ϑ + e − e ϑ du 1 2 r − r 0 1 2 (rather than λt). The density of U under Q becomes −r u  −r u −r u 1 2 1 β e ϑ + e − e ϑ 1 2 r − r Q 1 2 f (u) =   . U t −r v  −r v −r v 1 2 1 β e ϑ + e − e ϑ dv 1 2 r − r 0 1 2 Given a sample from this distribution attains the value u, the distribution of the correspond- ing random variable B should be twisted by −r u  −r u −r u 1 2 1 e ϑ + e − e ϑ . 1 2 r − r 1 2 Analogously to what we found above in the two-node tandem, we can identify Q for gen- eral linear stochastic fluid networks. We find that under Q the number of arrivals becomes Poisson with parameter −Ru λ β e ϑ du. The arrival epochs should be drawn using the density −Ru β e ϑ f (u) =  . U t −Rv β e ϑ dv (1) (L) Given an arrival at time u, (B ,...,B ) should be exponentially twisted by −Ru  −Ru (e ϑ ) ,...,(e ϑ ) . 1 L 3.3 Efficiency Properties of Importance Sampling Procedure We now consider the efficiency properties of the change of measure proposed in the previous subsection. To this end, we first argue that the vector ϑ generally has some (at least one) strictly positive entries, whereas the other entries equal 0; i.e., there are no negative entries. To this end, we first denote by b the ‘most likely point’ in A: b := arg inf I(b), b∈A so that ϑ = ϑ (b ). It is a standard result from convex optimization that ∂I (b) = ϑ (b). (11) ∂b Suppose now that ϑ (b )< 0. Increasing the i-th component of the b (while leaving all other components unchanged) would lead to a vector that is still in A, but that by virtue of Eq. 11 corresponds to a lower value of the objective function I(·), thus yielding that b was not the optimizer; we have thus found a contradiction. Similarly, when ϑ (b ) = 0wehave that b >a (as otherwise a reduction of the objective function value would be possible, which contradicts b being minimizer). Methodol Comput Appl Probab Now define as the subset of i ∈{1,...,L} such that ϑ > 0, and let D ∈{1,...,L} the number of elements of . We now argue that the number of runs needed to obtain an D/2 estimate of predefined precision scales as n . Relying on the results from Chaganthy and Sethuraman (1996) (in particular their Thm. 3.4), it follows that p (a) behaves (for n −D/2  2 large) proportionally to n exp(−nI (b )); using the same machinery, E (L I) behaves −D/2 proportionally to n exp(−2nI (b )). Mimicking the line of reasoning of Section 2.3, D/2 we conclude that the number of runs needed is essentially proportional to n . The formal statement is as follows; in Appendix A we comment on the underlying computations. Proposition 2 As n →∞, √ D T 1 D/2 ∼ αn ,α = ϑ · 2π τ, (12) 2 D ε 2 i∈D where τ is the determinant of the Hessian of log M(ϑ ) in ϑ . We further illustrate the ideas and intuition behind the qualitative result described in the above proposition by considering the case L = 2. It is noted that three cases may arise: (i) ={1, 2}, (ii) ={1}, (iii) ={2}; as case (iii) can be dealt with in the same way as case (ii), we concentrate on the cases (i) and (ii) only. 
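As a concrete illustration of the quantities entering these asymptotics, the sketch below (not the authors' code; exponential job sizes and the parameters of the first experiment of Section 3.4 are assumed) computes, for the event that only the downstream queue is large, the twist ϑ*, the variance constant τ and the √n-growth constant α; the paper reports 0.8104, 1.4774 and 474.3 for this setting.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch, not the authors' code: two-node tandem, downstream queue only (a = (0, a2)),
# exponential(mu) jobs; parameter values of the first experiment in Section 3.4.
t, lam, mu = 1.0, 1.0, 1.0
r1, r2 = 2.0, 1.0
a2, T, eps = 1.0, 1.96, 0.1

g = lambda u: r1 / (r1 - r2) * (np.exp(-r2 * u) - np.exp(-r1 * u))  # weight of a shot in node 2
db  = lambda x: mu / (mu - x) ** 2        # beta'(x)
ddb = lambda x: 2.0 * mu / (mu - x) ** 3  # beta''(x)

def d_logM(theta2):
    # partial derivative of log M(0, theta2) with respect to theta2
    return lam * quad(lambda u: db(theta2 * g(u)) * g(u), 0.0, t)[0]

theta2 = brentq(lambda th: d_logM(th) - a2, 0.0, 1.9)  # 1.9 < mu / max_u g(u) = 2, so beta' stays finite
tau = lam * quad(lambda u: ddb(theta2 * g(u)) * g(u) ** 2, 0.0, t)[0]
alpha = (T / eps) ** 2 * theta2 / 2.0 * np.sqrt(2.0 * np.pi * tau)
print(theta2, tau, alpha)
```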
In case (i), where D = 2, the necessary condition (Chaganthy and Sethuraman 1996, Eqn. (3.4)) is fulfilled as ϑ > 0 componentwise. As in addition the conditions A–C of (Chaganthy and Sethuraman 1996) are in place, it is concluded that (Chaganthy and Sethuraman 1996, Thm. 3.4) can be applied, leading to b = a, and 1 1 −nI (a) p (a) ∼ √ e , n ϑ ϑ · 2π τ 1 2 where τ is the determinant of the Hessian of log M(ϑ ) in ϑ . Along the same lines, it can be shown that 1 1 2 −2nI (a) E (L I) ∼ e . n 4ϑ ϑ · 2π τ 1 2 It now follows that  is roughly linear in n: with ε and T as introduced in Section 2.3, T π τ = αn, α := ϑ ϑ · . (13) 1 2 ε 2 In case (ii), we do not have that ϑ > 0 componentwise, and hence (Chaganthy and Sethura- man 1996, Thm. 3.4) does not apply; in the above terminology, D = 1 < 2 = L.Observe (1) (2) that in this case the exponential decay rate of the event {Y (t )  na ,Y (t ) < na } 1 2 n n (1) strictly majorizes that of {Y (t )  na } (informally: the former event is substantially less n 1 likely than the latter). It thus follows that b = a and b >a ,and 1 2 1 2 (1) (1) (2) p (a) = P Y (t )  na − P Y (t )  na ,Y (t ) < na n 1 1 2 n n n 1 1  d (1) −2nI (b ) ∼ P Y (t )  na ∼ √ √ e ,τ := log M(ϑ, 0) , n dϑ ϑ 2πτ ϑ =ϑ and in addition 1 1 2 −2nI (b ) E (L I) ∼ √ √ e . 2ϑ 2πτ 1 Methodol Comput Appl Probab As a consequence in this regime  grows essentially proportional to n for n large: √ T 1 ∼ α n, α := ϑ · 2πτ . ε 2 In case (iii)  behaves proportionally to n as well. 3.4 Simulation Experiments We conclude this section by providing a few numerical illustrations. In the first set we focus on the downstream queue only (i.e., we analyze p (0,a )), whereas in the second set we n 2 consider the joint exceedance probability p (a). The precision and confidence have been chosen as in Example 1. Throughout we take t = 1, r = 2, r = 1, λ = 1, and μ = 1. 1 2 In the first set of experiments we take a = 0and a = 1. Elementary numerical analysis 1 2 yields that ϑ = 0.8104 and τ = 1.4774, leading to α, as defined in Eq. 13, equalling 474.3. For graphs on the behavior of p (1) asafunctionof n and the number of runs needed, we refer to (Boxma et al. 2018, Fig. 2). The two panels of Fig. 2 should be interpreted as the bottom panels of Fig. 1. Interestingly, the left panel indicates that in the tandem system it does not pay off to let jobs arrive right before t (as they first have to go through the first resource to end up in the second resource), as reflected by the shape of the density of the arrival epochs under Q; to this end, recall that we reversed time in Eq. 8,sothatalow density at u = 0 in the graph corresponds to a high density at u = t in the actual system. The arrival rate under Q is 1.5103. In the second set of experiments we take a = 1.2and a = 1.1; all other parameters 1 2 are the same as in the first set. As mentioned above, we now consider the joint exceedance probability. As it turns out, ϑ = 0.1367 and ϑ = 0.2225. For graphs describing the 1 2 behavior of p (1.2, 1.1) asafunctionof n and the number of runs needed, we refer to (Boxma et al. 2018, Fig. 3); the latter graph reveals that for this specific parameter setting /n converges to the limiting constant rather slowly. Concerning the left panel of Fig. 3, note that in Section 2 we saw that to make sure the first queue gets large it helps to have arrivals at the end of the interval, whereas above we observed that to make the second queue large arrivals should occur relatively early. 
We now focus on the event that both queues are large, and consequently the arrival distribution becomes relatively uniform again, as shown in the left panel of Fig. 3. The arrival rate under Q is 2.3478. IS IS MC MC 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 u u Fig. 2 Numerical results for Section 3.4: downstream queue only density 0.0 0.4 0.8 1.2 rate 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab IS IS MC MC 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 u u Fig. 3 Numerical results for Section 3.4: both queues 4 Multi-node Systems Under Markov Modulation In this section consider the networks analyzed in the previous section, but now in a random environment. More specifically, the type of random environment we focus on here is known as Markov modulation: the system dynamics are affected by the state of an external finite- state irreducible Markov process J(·) with generator matrix Q = (q ) . When this jj j,j =1 Markov process (usually referred to as the background process) is in state j, arrivals occur according to a Poisson process with rate λ ,the MGF of the job size is β (ϑ ), and the routing j j matrix is R . Analogously to the definitions used in the case without Markov modulation, this routing matrix’ (i, i)-th entry is (j ) (j ) (R ) := r + r , j ii ii ii =i which can be interpreted as the rate at which fluid leaves server i when the background process is in j. Likewise, for i = i , (j ) (R )  := −r , j ii ii which is the rate at which fluid flows from server i to i when the background process is in j. Below we assume that J(0) = j for a fixed state j ∈{1,...,d}; it is seen that all 0 0 results generalize to an arbitrary initial distribution in a straightforward manner. The structure of the section is as follows: we consecutively describe general results for the model under consideration (extending earlier stationary results from Kella and Stadje (2002) to their transient counterpart), propose an importance sampling measure, establish efficiency properties of the corresponding estimator, and present a number of numerical experiments. Note that the setup of this section slightly differs from that of the previous sections. For the models covered in Sections 2 and 3, already detailed explicit analysis is available; see e.g. the results in terms of transforms and moments in Kella and Whitt (1999). Such a complete analysis is lacking for the model featuring in the present section. With the results of our paper added to the literature, the situation has become ‘uniform’: for all three setups (i.e., Sections 2, 3,and 4), one has results on transient transforms, transient moments, as well as recipes for efficient rare-event simulation. density 0.0 0.4 0.8 1.2 rate 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab 4.1 Exact Expressions for Moments Before focusing on simulation-based techniques, this subsection (which can be read inde- pendently of the rest of the section) shows that various moment-related quantities can be computed in closed form. Multi-node systems under Markov modulation have been studied in detail by Kella and Stadje (2002). We start this subsection by providing a compact derivation of a PDE charac- terizing the system’s transient behavior, which was not included in that paper. To this end, we define, for j ∈{1,...,d}, ( ) (ϑ,t) := E exp ϑ X (t ) 1 (t ) , j j =1 with 1 (t ) the indicator function of the event that J(t) = j. 
Using the standard ‘Markov machinery’,  (ϑ,t + t ) equals (up to o(t ) terms) the sum of a contribution λ t  (ϑ,t)β (ϑ ) j j j due to the scenario that an arrival occurs between t and t + t, a contribution q t  (ϑ,t) j j j =j due to the scenario that the modulating Markov process jumps between t and t + t,anda contribution L L ( ) 1 − λ t − q t E exp ϑ − ϑ  (R )  t X (t ) 1 (t ) , j j j j =1 =1 with q := −q ; regarding the last term, observe that when the background process is in j jj state j, and no new job arrives between t and t + t, ( ) ( ) ( ) ( ) X (t + t ) = X (t ) − (R ) t X (t ) − (R ) t X (t ). j j We thus find that (ϑ,t + t ) = λ t β (ϑ) (ϑ,t) + q t  (ϑ,t) + j j j j j j j =j 1 − λ t − q t  ϑ − R ϑ t, t + o(t ). j j j j This immediately leads to (by subsequently subtracting  (ϑ,t) from both sides, dividing by t, and letting t ↓ 0) d L ∂ ∂ (ϑ,t) = λ β (ϑ ) − 1  (ϑ,t) + q    (ϑ,t) − R ϑ  (ϑ,t). j j j j j j j j j ∂t ∂ϑ j =1 =1 (14) Let us now compactly summarize the relation (14), in vector/matrix notation. This notation will prove practical when computing higher moments; in other (but related) contexts, similar procedures have been proposed in e.g. Huang et al. (2016) and Rabehasaina (2006). Let n ×n 1 2 M be the set of R-valued matrices of dimension n × n (for generic n ,n ∈ N). 1 2 1 2 Methodol Comput Appl Probab In addition, I is the identity matrix of dimension n ∈ N. We introduce the following three d×d d×d Ld×Ld matrices in M , M ,and M , respectively: ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ λ ... 0 β (ϑ) ... 0 R ... 0 1 1 1 ⎜ . . ⎟ ⎜ . . ⎟ ⎜ . . ⎟ . . . . . . . . . . . . := ,B(ϑ ) := ,R := . ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ . . . . . . . . . 0 ... λ 0 ... β (ϑ ) 0 ... R d d d We use the conventional notation ⊗ for the Kronecker product. Recall that the Kronecker product is bilinear, associative and distributive with respect to addition; these properties we will use in the sequel without mentioning. It also satisfies the mixed product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD). Furthermore, note that I ⊗ I = I . n n n n 1 2 1 2 We now consider some differentiation rules for matrix-valued functions which will allow us to iteratively evaluate moments. In the first place we define the operator ∇ for ϑ ∈ R ; to keep notation compact, we often suppress the subscript ϑ, and write just ∇.Let f ≡ f(ϑ ) L n ×n n L×n 1 2 1 2 be a mapping of R to M .Then ∇f ≡∇f(ϑ ) ∈ M is defined by ⎛ ⎞ ⎛ ⎞ ∇f ∇f ··· ∇f ∂ f 11 12 1n 1 ij ⎜ ⎟ ⎜ ⎟ ∇f ∇f ··· ∇f ∂ f 21 22 2n 2 ij ⎜ ⎟ ⎜ ⎟ := ∇f = ⎜ ⎟ , where ∇f ⎜ ⎟ . . . . ij . . . . . . ⎝ ⎠ ⎝ ⎠ . . . . ∇f ∇f ··· ∇f ∂ f n 1 n 2 n n L ij 1 1 1 2 In the above definition ∇f ≡∇f (ϑ ) is to be understood as the usual gradient; the symbol ij ij ∂ is used to denote the partial derivative with respect to the i-th variable, in the sense of ∂ f := f (ϑ ). i ij ij ∂ϑ k k k−1 0 Furthermore, we define inductively ∇ f ≡∇ f(ϑ ) := ∇(∇ f), k ∈ N, with ∇ f := k L L n ×n 1 2 f . It is checked that ∇ f(ϑ ) is a mapping of R to M . In the sequel we use a couple of differentiation rules, that we have listed below. Let A(·) L n ×n 1 2 be a matrix-valued function from R to M ,and B(·) a matrix-valued function from L n ×n 2 3 R to M ,and let I be a q × q identity matrix (for some q ∈ N). Then, – Product rule: ∇ A(ϑ)B(ϑ ) = (∇ A(ϑ )) B(ϑ ) + (A(ϑ ) ⊗ I ) ∇ B(ϑ ); ϑ ϑ L ϑ Ln ×n 1 3 being an element of M . – Differentiation of Kronecker product (1): ∇ (I ⊗ A(ϑ )) = I ⊗ (∇ A(ϑ )). 
ϑ q q ϑ – Differentiation of Kronecker product (2): ∇ (A(ϑ ) ⊗ I ) = (K ⊗ I )(I ⊗ (∇ A(ϑ )))K ϑ q n ,q L q ϑ q,n 1 2 = (K ⊗ I )K (∇ A(ϑ ) ⊗ I ), n ,q L q,n ϑ q 1 2 where K is the commutation matrix defined by m,n m n K := (H ⊗ H ), m,n ij ij i=1 j =1 m×n and H ∈ M denotes a matrix with a 1 at its (i, j )-th position and zeros elsewhere, ij cf. Magnus and Neudecker (1979). Methodol Comput Appl Probab The first rule can be checked componentwise and the second rule is trivial. The third rule fol- lows from the first and second rule in combination with the fact that the Kronecker product commutes after a correction with the commutation matrices. Moreover, we use the prop- −1 erty K = K . An overview of the properties of commutation matrices can be found in n,m m,n Magnus and Neudecker (1979). In the introduced terminology, it follows that Eq. 14 can be written as T T T (ϑ,t) =  B(ϑ ) − I (ϑ,t) + Q (ϑ,t) − I ⊗ ϑ R ∇ (ϑ,t). (15) d d ϑ ∂t We now point out how (transient and stationary) moments can be evaluated; note that Kella and Stadje (2002) focuses on the first two stationary moments at epochs that the background process jumps. We throughout use the notation z (t ) for the i-th derivative of (ϑ,t) in (0,t),for t  0: i L d×d z (t ) := ∇ (ϑ,t) ∈ M , ϑ =0 for i ∈ N. Note that, with π (t ) = (exp(Qt )) , j j ,j (ϑ , 0) = e , (0,t) = π (t ) ≡ (π (t), ...,π (t )). j 1 d ◦ We start by characterizing the first moments. Applying the operator ∇≡ ∇ to the differential equation (15) yields ∇ (ϑ,t) = ( ⊗ I )(∇ B(ϑ )) (ϑ,t) + ϑ L ϑ ∂t T T Q ⊗ I + (B(ϑ ) − I ) ⊗ I − R ∇ (ϑ,t) − L d L ϑ T T 2 ((I ⊗ ϑ )R ) ⊗ I ∇ (ϑ,t), (16) d L using standard properties of the Kronecker product in combination with T T ∇ (I ⊗ ϑ ) = I ⊗ (∇ ϑ ) = I ⊗ (e ,..., e ) = I ⊗ I = I , ϑ d d ϑ d 1 L d L dL where e denotes the L-dimensional column vector in which component i equals 1 and all other components are 0. Then, inserting ϑ = 0 into Eq. 16 yields the system of (non- homogeneous) linear differential equations T T z (t ) = ( ⊗ I ) ∇B(0) π (t ) + (Q ⊗ I ) − R z (t ). (17) L L 1 In the stationary case, we obtain −1 T T z (∞) = R − (Q ⊗ I ) ( ⊗ I ) ∇B(0) π , (18) 1 L L T T with π := lim π (t ) (being the unique non-negative solution of π Q = 0 such that t →∞ the entries of π sum to 1). ◦ We now move to second moments. Applying the ∇ -operator to Eq. 16, 2 2 ∇ (ϑ,t) = ( ⊗ I 2 )(∇ B(ϑ ))(ϑ,t) + ϑ L ϑ ∂t (( ⊗ I )∇ B(ϑ )) ⊗ I ∇ (ϑ,t) + L ϑ L ϑ ∇ (B(ϑ ) ⊗ I )∇ (ϑ,t) + ϑ L ϑ T T 2 Q ⊗ I 2 + (B(ϑ ) − I ) ⊗ I 2 − R ⊗ I ∇ (ϑ,t) − d L L L ϑ T T 3 (((I ⊗ ϑ )R ) ⊗ I ) ∇ (ϑ,t) − L ϑ T T 2 ∇ (((I ⊗ ϑ )R ) ⊗ I ) ∇ (ϑ,t), ϑ d L ϑ Methodol Comput Appl Probab in which the factor ∇ (B(ϑ ) ⊗ I ) can be expressed more explicitly as ϑ L (K ⊗ I )K ((( ⊗ I )∇ B(ϑ )) ⊗ I ), d,L L L,dL L ϑ L T T T and the factor ∇ (((I ⊗ ϑ )R ) ⊗ I ) simplifies to (K ⊗ I )K (R ⊗ I ). Inserting ϑ d L d,L L L,dL L ϑ = 0 yields the system of linear differential equations z (t ) = ( ⊗ I )(∇ B(0)) π (t ) + 2 L T T (Q ⊗ I 2 − ((K ⊗ I )K + I 2 )(R ⊗ I )) z (t ) + d,L L L,dL L 2 L dL (( ⊗ I )(∇B(0))) ⊗ I z (t ) + L L 1 (K ⊗ I )K ((( ⊗ I )∇B(0)) ⊗ I )z (t ) d,L L L,dL L L 1 where z (t ) solves (17). As before, the stationary quantities can be easily derived (by equat- ing z (t ) to 0). One has to keep in mind, however, that some of the mixed partial derivatives occur multiple times in z ,for k ∈{2, 3,...}, and therefore the solution will only be unique after removing the corresponding redundant rows. 
Alternatively, the system can be completed by including equations which state that these mixed partial derivatives are equal. ◦ It now follows that higher moments can be found recursively, using the three differentiation rules that we stated above. Remark 2 Various variants of our model can be dealt with similarly. In this remark we consider the slightly adapted model in which shots only occur simultaneously with a jump in the modulating Markov chain. Then (up to o(t ) terms)  (ϑ,t + t ) is the sum of a contribution q  t   (ϑ,t)β (ϑ ) j j j j =j due to the scenario that there is a jump in the modulating chain in the interval [t, t + t ] (which also induces a shot), and a contribution of L d ( ) (1 − q t ) E exp ϑ − ϑ  (R )  t X (t ) 1 (t ) , j j j =1 =1 with q := −q , in the scenario that there is no jump. Performing the same steps as above, j jj we obtain d L ∂ ∂ (ϑ,t)=q (β (ϑ )−1) (ϑ,t)+ q  (ϑ,t)β (ϑ )− (R ϑ )  (ϑ,t), j j j j j j j j j j j ∂t ∂ϑ j =1 j =1 which has a similar structure as Eq. 14. It follows that the moments can be found as before. With Q := diag{q ,...,q }, it turns out that the transient means are given by 1 d T T T z (t ) =∇B(0)(Q + Q)π (t ) + (Q ⊗ I ) − R z (t ). L 1 In particular, the stationary first moment equals −1 T T T z (∞) = R − (Q ⊗ I ) ∇B(0)(Q + Q)π . 1 L Consider the following numerical example for the computation of the expected values and variances, in which the technique described above is illustrated. Example 2 In this example, we choose the parameters in such a way that we see non- monotonic behavior. Our example corresponds to a case in which the system does not start Methodol Comput Appl Probab Steady state EX (t) EX (t) Steady state VarX (t) VarX (t) 01234567 01234567 t t Fig. 4 Transient expected values and variances of Example 2 empty, which is dealt with by imposing suitable starting conditions. We consider a two- dimensional (L = 2) queueing system, with a two-dimensional state space of the Markov modulating process (d = 2). We pick q = q = 1, λ = λ = 1, E B = E B = 12 21 1 2 1 2 2 2 E B = E B = 1, J(0) = 1, (X (0), X (0)) = (3, 3), and the rate matrices 1 2 1 2 2 −1 1 −1 R = ,R = . 1 2 −11 −12 Due to the symmetry in the choice of the parameters, one can expect that for both states of the background process expected value tends (as t grows large) to the same steady-state value; the same is anticipated for the stationary variance. This is confirmed by Fig. 4.For t small, the two queues behave differently due to J(0) = 1, which implies that queue 1 drains faster. Note that E X (t ) even increases for t small, due to the fact that the flow from node 1 to 2 equals the flow from 2 to 1, constituting a net flow of zero, so that the additional contribution of external output to node 2 leads to a net increase of E X (t ). The transient correlation is plotted in Fig. 5. At time t = 0 the queues are perfectly correlated, since the starting state is deterministic. Then the correlation decreases due to the asymmetric flow rates until around t = 1, which is when the Markov chain J is expected to switch, after which the correlation monotonously tends to the steady state. Steady state Cor(X (t), X (t)) 1 2 Fig. 5 Transient correlation between X (t ), X (t ) of Example 2 1 2 2.2 2.4 2.6 2.8 3.0 0.6 0.7 0.8 0.9 1.0 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab 4.2 Tail Probabilities, Change of Measure We now characterize the decay rate of the rare-event probability under study, and we pro- pose a change of measure to efficiently estimate it. 
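The moment recursion of this subsection is straightforward to evaluate numerically. The sketch below (not the authors' code; variable names such as Lam and gradB0 are ad-hoc) integrates the first-moment ODE of Eq. 17 for the parameters of Example 2, starting from (X_1(0), X_2(0)) = (3, 3) and J(0) = 1; the resulting curves can be compared with the expected-value panel of Fig. 4.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Sketch, not the authors' code: transient first moments for Example 2 via Eq. 17.
# The state vector z stacks E[X^(l)(t) 1{J(t)=j}] per background state j (blocks of length L).
d, L = 2, 2
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
Lam = np.diag([1.0, 1.0])                   # arrival rates per background state
R = np.zeros((d * L, d * L))                # block-diagonal with R_1 and R_2
R[0:2, 0:2] = [[2.0, -1.0], [-1.0, 1.0]]
R[2:4, 2:4] = [[1.0, -1.0], [-1.0, 2.0]]
gradB0 = np.zeros((d * L, d))               # gradient of B(.) at 0: all mean job sizes equal 1
gradB0[0:2, 0] = 1.0
gradB0[2:4, 1] = 1.0

A = np.kron(Q.T, np.eye(L)) - R.T           # homogeneous part of Eq. 17
C = np.kron(Lam, np.eye(L)) @ gradB0        # coefficient of the source term

def rhs(s, z):
    pi = expm(Q * s)[0, :]                  # P(J(s) = j | J(0) = 1)
    return C @ pi + A @ z

z0 = np.array([3.0, 3.0, 0.0, 0.0])         # encodes X(0) = (3, 3) and J(0) = 1
sol = solve_ivp(rhs, (0.0, 7.0), z0, t_eval=np.linspace(0.0, 7.0, 15), rtol=1e-8)

EX = sol.y[0:2, :] + sol.y[2:4, :]          # E X^(l)(t) = sum over background states j
print(EX.T)                                 # columns: E X^(1)(t), E X^(2)(t)
```

Equating the right-hand side of the ODE to zero recovers the stationary value of Eq. 18.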
In the notation we have been using so far, we again focus on (1) (L) p (a) := P Y (t )  na ,...,Y (t )  na = P (Y (t ) ∈ A) , n 1 L n n n (1) (L) where Y (t ) = (Y (t ), . . . , Y (t )). It is stressed that, following (Blom and Mandjes n n n 2013), we consider the regime in which the background process is ‘slow’. In concrete terms, this means that we linearly speed up the driving Poisson process (i.e., we replace the arrival rates λ by nλ ), but leave the rates of the Markovian background process unaltered. j j First we find an alternative characterization of the state of the system at time t.Let F denote the set of all functions from [0,t ] onto the states {1,...,d}. Consider a path f ∈ F . Let f have K(f ) jumps between 0 and t, whose epochs we denote by t (f ) up to t (f ) 1 K(f ) (and in addition t (f ) := 0and t (f ) := t). Let 0 K(f )+1 j (f ) := lim f(t) t ↓t (f ) (i.e., the state of f immediately after the i-th jump). We also introduce D (u, f ) := exp −(t (f ) − u) R ,D (f ) := exp −(t (f ) − t (f )) R . i i+1 j (f ) i i+1 i j (f ) i i Suppose now that the Markov process J(·) follows the path f ∈ F . Then the contribution to the MGF of X(t ) due to shots that arrived between t (f ) and t (f ) is, mimicking the i i+1 arguments that we used in Section 3.2 for non-modulated networks, t (f ) i+1 ψ (f, ϑ ) := exp λ β D (u, f )D (f ) ··· D (f ) ϑ − 1 du . i j (f ) j (f ) i i+1 K(f ) i i t (f ) MGF of X(t) given the path f is As a consequence, the K(f ) M (ϑ ) := ψ (f, ϑ ). f i i=0 First conditioning on the path of J(·) ∈ F between 0 and t and then unconditioning, it then immediately follows that the MGF of X(t ) is given by M(ϑ ) = E M (ϑ ). Then, precisely as is shown in Blom and Mandjes (2013) for a related stochastic system, the decay rate can be characterized as follows: lim log p (a) =− inf I (a), I (a) := inf sup ϑ , b − log M (ϑ ) . (19) n f f f n→∞ n f ∈F b∈A The argumentation to show this is analogous to the one in (Blom and Mandjes 2013,Thm. 1), and can be summarized as follows. In the first place, let f be the optimizing path in Eq. 19. Then, as J(·) does not depend on n, we can choose a ‘ball’ B (f ) around f such that the decay rate of the probability of J(·) being in that ball is 0. The lower bound follows by only taking into account the contribution due to paths in B (f ). The upper bound follows by showing that the contribution of all f ∈ F \ B (f ) is negligible. t t Methodol Comput Appl Probab Informally, the path f has the interpretation of the most likely path of J(·) given that the rare event under consideration happens. To make sure that the event under consideration is rare, we assume that for all f ∈ F ∂ ∂ M (ϑ ) ,..., M (ϑ ) ∈ A. f f ∂ϑ ∂ϑ 1 L ϑ =0 ϑ =0 The change of measure we propose is the following. In every run we first sample the path J(s) for s ∈[0,t ] under the original measure P (i.e., with J(0) = j , and then using the generator matrix Q). We call the resulting path f ∈ F . For this path, define ϑ  0 as the optimizing ϑ in the definition of I(f ) in Eq. 19; b ∈ A is the optimizing b. Conditional on the path f of the background process, under the new measure Q the number of external arrivals between t (f ) and t (f ) is Poisson with parameter i i+1 t (f ) i+1 λ β P (u, f ) ϑ du, j (f ) j (f ) i i i f t (f ) where P (u, f ) := D (u, f )D (f ) ··· D (f ). The arrival epochs between t (f ) and i i i+1 K(f ) i t (f ) should be drawn using the density i+1 β P (u, f ) ϑ j (f ) i f (u) = . 
t (f ) i+1 β P (v, f ) ϑ dv j (f ) i f t (f ) (1) (L) Given an arrival at time u between t (f ) and t (f ), the job sizes (B ,...,B ) should i i+1 be sampled from a distribution with MGF β (ϑ ), but then exponentially twisted by j (f ) P (u, f ) ϑ ,..., P (u, f ) ϑ . i i f f 1 L Remark 3 As mentioned above, the background process is sampled under the original mea- sure, whereas an alternative measure is used for the number of arrivals, the arrival epochs, and the job sizes. The intuition behind this, is that the rare event under consideration is caused by two effects: ◦ In the first place, samples of the background process J should be close to f . Under P a reasonable fraction ends up close to f — more precisely, the event of J being close to f does not become increasingly rare when n grows. As a consequence, no change of measure is needed here. ( ) ◦ In the second place, given the path of the background process, the Y (t ) should exceed the values na ,for = 1,...,L. This event does become exponentially rare as n grows, so importance sampling is to be applied here. 4.3 Efficiency Properties of Importance Sampling Procedure We now analyze the speed up realized by the change of measure introduced in the previous subsection. Unlike our results for the non-modulated systems, now we cannot find the pre- cise rate of growth of  .What is possible though, is proving asymptotic efficiency (also sometimes referred to as logarithmic efficiency), in the sense that we can show that 1 2 lim log E (L I) = lim log p (a) =−2inf inf sup ϑ , b − log M (ϑ ) n f n→∞ n→∞ n n f ∈F b∈A ϑ Methodol Comput Appl Probab (where the second equality is a consequence of Eq. 19). This equality is proven as follows. 2 2 2 As by Jensen’s inequality E (L I)  (E (LI )) = (p (a)) , we are left to prove the Q Q upper bound: 1 2 lim log E (L I)  lim log p (a). Q n n→∞ n→∞ n n If the path of J(·) equals f ∈ F , it follows by an elementary computation that we have constructed the measure Q such that dP L = = exp − ϑ , Y (t ) + n log M (ϑ ) . n f f f dQ =1 The fact that ϑ is componentwise non-negative, in combination with the fact that Y (t ) a when I = 1, entails that −n I (a) LI  exp −n ϑ , a +n log M (ϑ ) =exp −n ϑ , b +n log M (ϑ ) =e , f f f f f f f noting that a and b may only differ if the corresponding entry of ϑ equals 0 (that is, f f a − b , ϑ = 0). The upper bound thus follows: with f the minimizing path in Eq. 19, f f recalling that J(·) is sampled under P, 2 −2n I (a) −2n I  (a) J f E (L I)  E e  e . We have established the following result. Proposition 3 As n →∞, the proposed importance sampling procedure is asymptotically efficient. This means that the number of runs needed grows subexponentially: lim log  = 0. n→∞ Remark 4 In the scaling considered, for both the logarithmic asymptotics of p (a) and our importance sampling algorithm, the precise transition rates q do not matter; the only ij crucial element is that the background process is irreducible. Observe that, even though the logarithmic asymptotics of p (a) do not depend on the actual values of the transition rates q , the probability p (a) itself and its exact asymptotics do depend on those rates. We refer ij n to Blom et al. (2017) for the exact asymptotics of a related infinite-server model; it is noted that the derivation of such precise asymptotics is typically highly involved. 
The above reasoning indicates that the proposed procedure remains valid under more general conditions: the ideas carry over to any situation in which the rates are piecewise constant along the most likely path. 4.4 Simulation Experiments We performed experiments featuring a single-node system under Markov modulation. In our example the job sizes stem from an exponential distribution. When the background process is in state i, the arrival rate is λ , the job-size distribution is exponential with parameter μ , i i and the rate at which the storage level decays is r ,for i ∈{1,...,d}. The change of measure is then implemented as follows. As pointed out in Section 4.2,per run a path f of the background process is sampled under the original measure P. Suppose along this path there are K transitions (remarking that, for compactness, we leave out the argument f here), say at times t up to t ; with t = 0and t = t, the state between t 1 K 0 K+1 i Methodol Comput Appl Probab and t is denoted by j ,for i = 0,...,K. Per run a specific change of measure is to be i+1 i computed, parametrized by the t and j , as follows. i i We define −r (t −t ) r u −r t j j j i +1 i ¯ ¯ i+1 i i i P (u) := P e , P := e e ; i i i i =i+1 the product in this expression should be interpreted as 1 if i + 1 >K. It is readily checked that i+1 P (u) ϑ M(ϑ) = exp λ du . μ − P (u) ϑ t j i i i i=0 Let ϑ be the maximizing argument of ϑa − log M(ϑ). We can now provide the alternative measure Q for this path of the background pro- cess. The number of arrivals between t and t (for i = 0,...,K) becomes Poisson with i i+1 parameter r t i+1 ¯ j i μ λ μ − P e ϑ j j j i i i i λ du = log −r (t −t ) r t j i j i i+1 ¯ μ − P (u) ϑ r i i j i j μ e − P e ϑ i i i j i r t j i λ μ − P e ϑ j j i i i = log + λ (t − t ). j i+1 i r t j i+1 r i μ − P e ϑ i j i 0 1020304050 010 20 30 40 50 scale n scale n IS IS MC MC 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 u u Fig. 6 Numerical results for Section 4.4:first example p (3) density 0 2468 10 12 1e−14 1e−08 1e−02 rate number of runs 0.0 0.2 0.4 0.6 0.8 1.0 0 4000 8000 12000 Methodol Comput Appl Probab (where it is noted that this expression is larger than λ (t − t ), which was the parameter j i+1 i under P). The density of each of the arrivals between t and t becomes i i+1 i+1 1 1 dv μ − P (u) ϑ μ − P (v) ϑ j i t j i i i i r t j i μ 1 μ − P e ϑ j j i i i = log −r (t −t ) r t j i+1 i j i μ − P (u) ϑ r i i j i j μ e − P e ϑ i i j i (rather than a uniform distribution, as was the case under P); sampling from this distribution is easy, since the inverse distribution function can be determined in closed form. Given an arrival that takes place at time u between t and t , the job size is exponential with i i+1 parameter μ − P (u) ϑ (rather than exponential with parameter μ ). j i j i i We now describe two examples in which the dimension of the background process is d = 2, q = q = 2, and t = 1. In the first example we fix a = 3, λ = (2, 1), 12 21 −1 μ = ( , 1), and r = (5, 1), in the second example a = 0.8, λ = (0.9, 1), μ = (0.9 , 1), and r = (0.3, 0.6). As before, we simulate until the precision of the estimate has reached ε = 0.1. The top two panels in Figs. 6–7 should be read as those in Figs. 1–3; the bottom two panels correspond to the density of the arrival epochs and the rate of the exponential job sizes, respectively, for f the ‘empirical maximizer’ of I (a) (i.e., the maximizer of I (a) f f over all paths f of the background process that were sampled in the simulation experiment). 
In the first example the thus obtained 'optimal path' subsequently visits states 1, 2, and 1, where the corresponding jump times are $t_1=0.654$ and $t_2=0.739$, and the decay rate is $0.573$. The mean numbers of arrivals in the three parts of the optimal path are $1.392$, $0.090$ and $0.963$, respectively, whereas for Monte Carlo sampling these are $1.308$, $0.085$ and $0.522$, respectively.

In the second example the optimal path subsequently visits states 2 and 1, where the corresponding jump time is $t_1=0.790$. In this case the decay rate has the value $0.000806$. The mean numbers of arrivals in the two parts of the optimal path are $0.812$ and $0.195$, respectively, which are slightly higher than the corresponding values under Monte Carlo sampling ($0.790$ and $0.189$, respectively). Observe that in this example the difference between the two measures is relatively small, which is also reflected by the small value of the decay rate; the event under consideration technically qualifies as 'rare' in that $p_n(0.8)\to0$ as $n\to\infty$, but it has a relatively high likelihood (e.g. as compared to the first example). As a consequence of the fact that both measures almost coincide, the two densities in the bottom-left panel can hardly be distinguished.

Fig. 7 Numerical results for Section 4.4: second example (panels as in Fig. 6)

We observe that the top panels confirm that in both examples (i) $p_n(a)$ decays roughly exponentially in $n$, and (ii) the number of runs needed grows roughly linearly in $n$ (in the first example slightly sublinearly).

5 Discussion and Concluding Remarks

In this paper we have considered the probability of attaining a value in a rare set $A$ at a fixed point in time $t$: with $A=[a_1,\infty)\times\cdots\times[a_L,\infty)$,
$$p_n(a)=\mathbb P\bigl(Y^{(1)}_n(t)\ge na_1,\ldots,Y^{(L)}_n(t)\ge na_L\bigr).$$
A relevant related quantity is the probability of having reached the set $A$ before $t$:
$$\mathbb P\bigl(\exists s\le t: Y^{(1)}_n(s)\ge na_1,\ldots,Y^{(L)}_n(s)\ge na_L\bigr);\qquad(20)$$
observe that this probability increases to $1$ as $t\to\infty$. Alternatively, one could study the probability that all $a_\ell$ (for $\ell=1,\ldots,L$) are exceeded before $t$, but not necessarily at the same time:
$$\mathbb P\bigl(\exists s_1\le t: Y^{(1)}_n(s_1)\ge na_1,\ldots,\exists s_L\le t: Y^{(L)}_n(s_L)\ge na_L\bigr).\qquad(21)$$
Powerful novel sample-path large-deviations results by Budhiraja and Nyquist (2015), which deal with a general class of multi-dimensional shot-noise processes, may facilitate the development of efficient importance sampling algorithms for non-modulated linear stochastic fluid networks. The results in Budhiraja and Nyquist (2015) do not cover Markov modulation, though. In the current setup of Section 4 the speed of the background process is kept fixed, i.e., not scaled by $n$. For modulated diffusions a sample-path large deviation principle has recently been established in Huang et al. (2016) for the case that the background process is sped up by a factor $n$ (which amounts to multiplying the generator matrix $Q$ by $n$); the rate function decouples into (i) a part concerning the rare-event behavior of the background process and (ii) a part concerning the rare-event behavior of the diffusion (conditional on the path of the background process).
With a similar result for the Markov-modulated linear stochastic fluid networks that we have studied in this paper, one could potentially set up an efficient importance sampling procedure for the probabilities (20) and (21) under this scaling.

Acknowledgments The research of O. Boxma, D. Koops and M. Mandjes was partly funded by the NWO Gravitation Project NETWORKS, Grant Number 024.002.003. The research of O. Boxma was also partly funded by the Belgian Government, via the IAP Bestcom project. The research of E. Cahen was funded by an NWO grant, Grant Number 613.001.352.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A

We here point out how (6) can be established; the line of reasoning is precisely the same as in the derivation of (5) in (Dembo and Zeitouni 1998, Thm. 3.7.4). First write
$$\mathbb E_{\mathbb Q}\bigl(L^2I\bigr)=\mathbb E_{\mathbb Q}\bigl(e^{-2\vartheta^\star Y_n(t)}\,e^{2n\log M(\vartheta^\star)}\,1_{\{Y_n(t)\ge na\}}\bigr)=e^{-2nI(a)}\,\mathbb E_{\mathbb Q}\bigl(e^{-2\vartheta^\star(Y_n(t)-na)}\,1_{\{Y_n(t)\ge na\}}\bigr),$$
which, with $Z_n:=(Y_n(t)-na)/\sqrt n$, equals
$$e^{-2nI(a)}\,\mathbb E_{\mathbb Q}\bigl(e^{-2\vartheta^\star Z_n\sqrt n}\,1_{\{Z_n\ge0\}}\bigr).$$
Observe that $\mathbb E_{\mathbb Q}Y_n(t)=na$, due to the very choice of $\mathbb Q$. This entails that $Z_n$ converges in distribution to a centered Normal random variable; as can be verified, the corresponding variance is $\tau$ (where $\tau$ is defined in Eq. 5). Using the Berry-Esseen-based justification presented in (Dembo and Zeitouni 1998, page 111), we conclude that, as $n\to\infty$,
$$\mathbb E_{\mathbb Q}\bigl(e^{-2\vartheta^\star Z_n\sqrt n}\,1_{\{Z_n\ge0\}}\bigr)\sim\int_0^\infty e^{-2\vartheta^\star\sqrt n\,x}\,\frac1{\sqrt{2\pi\tau}}\,e^{-x^2/(2\tau)}\,\mathrm dx.$$
Completing the square, the right-hand side of the previous display equals, with $N(M,V)$ a Normal random variable with mean $M$ and variance $V$,
$$e^{2(\vartheta^\star)^2n\tau}\,\mathbb P\bigl(N(-2\vartheta^\star\sqrt n\,\tau,\tau)>0\bigr)=e^{2(\vartheta^\star)^2n\tau}\,\mathbb P\bigl(N(0,1)>2\vartheta^\star\sqrt{n\tau}\bigr).$$
Now we use the standard equivalence (as $x\to\infty$)
$$\mathbb P\bigl(N(0,1)>x\bigr)\sim\frac1x\,\frac1{\sqrt{2\pi}}\,e^{-x^2/2},$$
to obtain
$$\int_0^\infty e^{-2\vartheta^\star\sqrt n\,x}\,\frac1{\sqrt{2\pi\tau}}\,e^{-x^2/(2\tau)}\,\mathrm dx\sim\frac1{2\vartheta^\star\sqrt n}\,\frac1{\sqrt{2\pi\tau}}.$$
Combining the above, we derive the claim:
$$\mathbb E_{\mathbb Q}\bigl(L^2I\bigr)\sim\frac1{2\vartheta^\star\sqrt n}\,\frac1{\sqrt{2\pi\tau}}\,e^{-2nI(a)}.$$

We now proceed with the computations underlying (12). To this end, first observe that
$$L=\frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q}=e^{\,n\log M(\vartheta^\star)-\langle\vartheta^\star,Y_n(t)\rangle}.$$
As a consequence, in line with the above computation for the one-dimensional case,
$$\mathbb E_{\mathbb Q}(LI)=e^{-nI(b^\star)}\,\mathbb E_{\mathbb Q}\bigl(e^{-\langle\vartheta^\star,Y_n(t)-na\rangle}\,1_{\{Y_n(t)\in A\}}\bigr),\qquad
\mathbb E_{\mathbb Q}\bigl(L^2I\bigr)=e^{-2nI(b^\star)}\,\mathbb E_{\mathbb Q}\bigl(e^{-2\langle\vartheta^\star,Y_n(t)-na\rangle}\,1_{\{Y_n(t)\in A\}}\bigr).$$
It was proven in (Chaganthy and Sethuraman 1996, Thm. 3.4) that
$$p_n(a)=\mathbb E_{\mathbb Q}(LI)\sim\Bigl(\prod_{i\in\mathcal D}\vartheta^\star_i\Bigr)^{-1}\frac1{\sqrt\tau}\,(2\pi n)^{-D/2}\,e^{-nI(b^\star)},$$
while at the same time
$$\mathbb E_{\mathbb Q}\bigl(L^2I\bigr)\sim\Bigl(\prod_{i\in\mathcal D}2\vartheta^\star_i\Bigr)^{-1}\frac1{\sqrt\tau}\,(2\pi n)^{-D/2}\,e^{-2nI(b^\star)},$$
where, as in Proposition 2, $\tau$ denotes the determinant of the Hessian of $\log M(\vartheta)$ in $\vartheta^\star$. This immediately leads to Eq. 12.
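As a small numerical illustration of the quantities appearing in (5) and (6), given here as a sketch and not as material from the original article, the snippet below evaluates $\vartheta^\star$, $I(a)$ and $\tau$ in closed form for the single-node example with exponential job sizes from Section 2.4 ($\lambda=r=\mu=t=1$, $a=1$) and prints the corresponding approximations of $p_n(a)$ and $\mathbb E_{\mathbb Q}(L^2I)$ for a few values of $n$; the ratio in the last column grows like $\sqrt n$, in line with the discussion of the number of runs in Section 2.3.

```python
import numpy as np

# Single-node example with exponential jobs (Sections 2.3-2.4): lambda = r = mu = t = 1, a = 1.
lam, r, mu, t, a = 1.0, 1.0, 1.0, 1.0, 1.0

m_t = lam / (r * mu) * (1.0 - np.exp(-r * t))     # E X(t)
q = np.exp(-r * t)
# Optimizing twist theta*, from the quadratic equation m(t) = a (1 - theta/mu)(1 - q theta/mu).
theta = mu * np.exp(r * t) / 2.0 * ((1.0 + q) - np.sqrt((1.0 - q) ** 2 + 4.0 * q * m_t / a))
logM = lam / r * np.log((mu - q * theta) / (mu - theta))   # log M(theta*)
I_a = theta * a - logM                                      # decay rate I(a)
tau = lam / r * (1.0 / (mu - theta) ** 2 - 1.0 / (mu * np.exp(r * t) - theta) ** 2)

print(f"theta* = {theta:.3f}, I(a) = {I_a:.3f}, tau = {tau:.3f}")
for n in (10, 100, 1000):
    p_n = np.exp(-n * I_a) / (theta * np.sqrt(2 * np.pi * n * tau))        # approximation (5)
    m2 = np.exp(-2 * n * I_a) / (2 * theta * np.sqrt(2 * np.pi * n * tau))  # approximation (6)
    print(n, p_n, m2, m2 / p_n ** 2)   # the last ratio grows like sqrt(n)
```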
References

Asmussen S, Glynn P (2007) Stochastic simulation. Springer, New York
Asmussen S, Kortschak D (2015) Error rates and improved algorithms for rare event simulation with heavy Weibull tails. Methodol Comput Appl Probab 17:441–461
Asmussen S, Nielsen H (1995) Ruin probabilities via local adjustment coefficients. J Appl Probab 33:736–
Bahadur R, Rao RR (1960) On deviations of the sample mean. Ann Math Stat 31:1015–1027
Blanchet J, Leder K, Glynn P (2008) Strongly efficient algorithms for light-tailed random walks: an old folk song sung to a faster new tune. In: L'Ecuyer P, Owen A (eds) Proceedings of the Eighth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2008). Springer, Berlin, pp 227–248
Blanchet J, Mandjes M (2009) Rare event simulation for queues. In: Rubino G, Tuffin B (eds) Rare event simulation using Monte Carlo methods. Wiley, Chichester, pp 87–124
Blom J, De Turck K, Mandjes M (2017) Refined large deviations asymptotics for Markov-modulated infinite-server systems. Eur J Oper Res 259:1036–1044
Blom J, Mandjes M (2013) A large-deviations analysis of Markov-modulated infinite-server queues. Oper Res Lett 41:220–225
Boxma O, Cahen E, Koops D, Mandjes M (2018) Linear networks: rare-event simulation and Markov modulation. arXiv:1705.10273
Budhiraja A, Nyquist P (2015) Large deviations for multidimensional state-dependent shot-noise processes. J Appl Probab 52:1097–1114
Cahen E, Mandjes M, Zwart B (2017) Rare event analysis and efficient simulation for a multi-dimensional ruin problem. Probab Eng Inform Sci 31:265–283
Chaganthy N, Sethuraman J (1996) Multidimensional strong large deviation theorems. J Stat Plan Inference 55:265–280
Dembo A, Zeitouni O (1998) Large deviations techniques and applications, 2nd ed. Springer, New York
Ganesh A, Macci C, Torrisi G (2007) A class of risk processes with reserve-dependent premium rate: sample path large deviations and importance sampling. Queueing Syst 55:83–94
Glasserman P, Juneja S (2008) Uniformly efficient importance sampling for the tail distribution of sums of random variables. Math Oper Res 33:36–50
Huang G, Jansen HM, Mandjes M, Spreij P, De Turck K (2016) Markov-modulated Ornstein-Uhlenbeck processes. Adv Appl Probab 48:235–254
Huang G, Mandjes M, Spreij P (2016) Large deviations for Markov-modulated diffusion processes with rapid switching. Stoch Process Appl 126:1785–1818
Juneja S, Shahabuddin P (2006) Rare event simulation techniques: an introduction and recent advances. In: Henderson S, Nelson B (eds) Handbook in operations research and management sciences, volume 13: Simulation, pp 291–350
Kella O, Stadje W (2002) Markov modulated linear fluid networks with Markov additive input. J Appl Probab 39:413–420
Kella O, Whitt W (1999) Linear stochastic fluid networks. J Appl Probab 36:244–260
Kuhn J, Mandjes M, Taimre T (2017) Exact asymptotics of sample-mean related rare-event probabilities. Probab Eng Inform Sci, to appear
Magnus J, Neudecker H (1979) The commutation matrix: some properties and applications. Ann Stat 7:381–
Rabehasaina L (2006) Moments of a Markov-modulated irreducible network of fluid queues. J Appl Probab 43:510–522
Sezer A (2009) Importance sampling for a Markov modulated queuing network. Stoch Process Appl 119:491–517

Methodol Comput Appl Probab https://doi.org/10.1007/s11009-018-9644-1 Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation 1 2 3 3 O. J. Boxma · E. J. Cahen · D. Koops · M. Mandjes Received: 7 March 2017 / Revised: 16 May 2018 / Accepted: 22 May 2018 © The Author(s) 2018 Abstract We consider a linear stochastic fluid network under Markov modulation, with a focus on the probability that the joint storage level attains a value in a rare set at a given point in time. The main objective is to develop efficient importance sampling algorithms with provable performance guarantees. For linear stochastic fluid networks without modulation, we prove that the number of runs needed (so as to obtain an estimate with a given precision) increases polynomially (whereas the probability under consideration decays essentially exponentially); for networks operating in the slow modulation regime, our algorithm is asymptotically efficient. Our techniques are in the tradition of the rare-event simulation procedures that were developed for the sample-mean of i.i.d. one-dimensional light-tailed random variables, and intensively use the idea of exponential twisting. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks. Keywords Linear networks · Stochastic processes · Queues · Rare events · Importance sampling Mathematics Subject Classification (2010) 60K25 · 60F10 · 65C05 M. Mandjes m.r.h.mandjes@uva.nl EURANDOM and Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands CWI, Science Park 123, 1098 XG Amsterdam, The Netherlands Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Science Park 105-107, 1098 XG Amsterdam, The Netherlands Methodol Comput Appl Probab 1 Introduction Linear stochastic fluid networks, as introduced in Kella and Whitt (1999), can be informally described as follows. Consider a network consisting of L stations. Jobs, whose sizes are i.i.d. samples from some general L-dimensional distribution, arrive at the stations according to a Poisson process. At each of the nodes, in between arrivals the storage level decreases exponentially. Processed traffic is either transferred to the other nodes or leaves the network (according to a given routing matrix). In addition to this basic version of the linear stochastic fluid network, there is also its Markov modulated counterpart (Kella and Stadje 2002), in which the arrival rate, the distribution of the job sizes, and the routing matrix depend on the state of an external, autonomously evolving finite-state continuous-time Markov chain (usually referred to as the background process). Linear stochastic fluid networks can be seen as natural fluid counterparts of corre- sponding infinite-server queues. As such, they inherit several nice properties of those infinite-server queues. In particular, separate infinitesimally small fluid particles, moving through the network, do not interfere, and are therefore mutually independent. Essentially due to this property, linear stochastic fluid networks allow explicit analysis; in particular, the joint Laplace transform of the storage levels at a given point in time can be expressed in closed form as a function of the arrival rate, the Laplace transform of the job sizes and the routing matrix (Kella and Whitt 1999,Thm.5.1). 
When Markov modulation is imposed, the analysis becomes substantially harder. Condi- tional on the path of the background process, again explicit expressions can be derived, cf. (Kella and Stadje 2002, Thm. 1). Unconditioning, however, cannot be done in a straightfor- ward manner. As a consequence the results found are substantially less explicit than for the non-modulated linear stochastic fluid network. In Kella and Stadje (2002) also a system of ordinary differential equations has been set up that provides the transform of the stationary storage level; in addition, conditions are identified that guarantee the existence of such a stationary distribution. In this paper we focus on rare events for Markov-modulated linear stochastic fluid net- works. More specifically, in a particular scaling regime (parameterized by n)weanalyze the probability p that at a given point in time the network storage vector is in a given rare set. By scaling the arrival rate as well as the rare set (which amounts to multiply- ing them by a scaling parameter n), the event of interest becomes increasingly rare. More specifically, under a Cramer ´ -type assumption on the job-size distribution, application of large-deviations theory yields that p decays (roughly) exponentially. As p can be char- n n acterized only asymptotically, one could consider the option of using simulation to obtain precise estimates. The effectiveness, however, of such an approach is limited due to the rar- ity of the event under consideration: in order to get a reliable estimate, one needs sufficiently many runs in which the event occurs. This is the reason why one often resorts to simulation using importance sampling (or: change of measure). This is a variance reduction technique in which one replaces the actual probability measure by an alternative measure under which the event under consideration is not rare; correcting the simulation output with appropriate likelihood ratios yields an unbiased estimate. The crucial issue when setting up an importance sampling procedure concerns the choice of the alternative measure: one would like to select one that provides a substantial variance reduction, or is even (in some sense) optimal. The objective of this paper is to develop a change of measure which performs provably optimally. Our ultimate goal is to obtain an efficient simulation procedure for Markov-modulated linear stochastic fluid networks. We do so by (i) first considering a single node without Methodol Comput Appl Probab modulation, (ii) then multi-node systems, still without modulation, and (iii) finally modu- lated multi-node systems. There are two reasons for this step-by-step setup: ◦ For the non-modulated models we have more refined results than for the modulated models. More specifically, for the non-modulated models we have developed estimates for the number of runs  required to obtain an estimate with predefined precision (showing that  grows polynomially in the rarity parameter n), whereas for modulated models we can just prove that  grows subexponentially. ◦ In addition, this approach allows the reader to get gradually familiar with the concepts used in this paper. The construction and analysis of our importance sampling methodology is based on the ideas developed in Blom and Mandjes (2013); there the focus was on addressing similar issues for a single-node Markov modulated infinite-server system. 
In line with Blom and Mandjes (2013), we consider the regime in which the background process is ‘slow’: while we (linearly) speed up the driving Poisson process, we leave the rates of the Markovian background process unalterned. A traditional, thoroughly examined, importance sampling problem concerns the sam- ple mean S of n i.i.d. light-tailed random variables X ,...,X ; the objective there is to n 1 n estimate P(S  a) for a> EX and n large. As described in (Asmussen and Glynn n 1 2007, Section VI.2), in this situation importance sampling (i.e., sampling under an alterna- tive measure, and translating the simulation output back by applying appropriate likelihood ratios) works extremely well. To this end, the distribution of the X s should be exponen- tially twisted. As it turns out, in our setup, the probability of our interest can be cast in terms of this problem. Compared to the standard setup of sample means of one-dimensional random variables, however, there are a few complications: (i) in our case it is not a priori clear how to sample from the exponentially twisted distributions, (ii) we consider multi- dimensional distributions (i.e., rare-event probabilities that concern the storage levels of all individual buffers in the network), (iii) we impose Markov modulation. We refer to e.g. Glasserman and Juneja (2008) and Kuhn et al. (2017) for earlier work on similar problems. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks (where the results in Kella and Stadje (2002) are restricted to just the first two stationary moments at epochs that the background process jumps). The single-node model without modulation falls in the class of (one-dimensional) shot- noise models, for which efficient rare-event simulation techniques have been developed over the past, say, two decades. Asmussen and Nielsen (1995) and Ganesh et al. (2007) consider the probability that a shot-noise process decreased by a linear drift ever exceeds some given level. Relying on sample-path large deviations results, an asymptotically efficient impor- tance sampling algorithm is developed, under the same scaling as the one we consider in our paper. The major difference with our model (apart from the fact that we deal with con- siderably more general models, as we focus on networks and allow modulation) is that we focus on a rare-event probability that relates to the position of the process at a fixed point in time; in this setting we succeed in finding accurate estimates of the number of runs needed to get an estimate of given precision. There is a vast body of literature related to the broader area of rare-event simulation for queueing systems. We refer to the literature overviews (Blanchet and Mandjes 2009; Juneja et al. 2006); interesting recent papers include (Asmussen and Kortschak 2015; Cahen et al. 2017; Sezer 2009). Methodol Comput Appl Probab This paper is organized as follows. In Section 2 the focus is on a single-node net- work, without Markov modulation (addressing complication (i) above), Section 3 addresses the extension to multi-node systems (addressing complication (ii)), and in Section 4 the feature of modulation is added (addressing complication (iii)). In each of these three sec- tions, we propose a change of measure, quantify its performance, and demonstrate its efficiency through simulation experiments. 
In Section 4.1 we include the explicit expres- sions for the moments in Markov-modulated linear stochastic fluid networks. A discussion and concluding remarks are found in Section 5. 2 Single Resource, No Modulation To introduce the concepts we work with in this paper, we analyze in this section a linear stochastic fluid network consisting of a single node, in which the input is just compound Poisson (so no Markov modulation is imposed). More precisely, in the model considered, jobs arrive according to a Poisson process with rate λ, bring along i.i.d. amounts of work (represented by the sequence of i.i.d. random variables (B ,B ,...)), and the workload 1 2 level decays exponentially at a rate r> 0. This model belongs to the class of shot-noise processes. As mentioned in the introduction, we gradually extend the model in the next sections. 2.1 Preliminaries We first present a compact representation for the amount of work in the system at time t, whichwedenoteby X(t), through its moment generating function. To this end, let N(t) denote a Poisson random variable with mean λt,and (U ,U ,...) i.i.d. uniformly dis- 1 2 tributed random variables (on the interval [0,t ]). Assume in addition that the random objects (B ,B ,...), N(t),and (U ,U ,...) are independent. Then it is well-known that the value 1 2 1 2 of our shot-noise process at time t can be expressed as N(t) N(t) −r(t −U ) −rU j j X(t) = B e = B e , (1) j j j =1 j =1 where the distributional equality is a consequence of the fact that the distribution of U is symmetric on the interval [0,t ]. It is easy to compute the moment generating function (MGF) of X(t), by conditioning on the value of N(t): (λt ) ϑX(t) −λt −rU M(ϑ) := E e = e E exp(ϑ B e ) k! k=0 −ru = exp λ β(e ϑ) − 1 du , (2) where β(·) is the MGF corresponding to B (throughout assumed to exist). By differentiating and inserting ϑ = 0, it follows immediately that −rt E X(t) = (1 − e ) E B =: m(t ). Higher moments can be found by repeated differentiation. We note that, as t is held fixed throughout the document, we often write N rather than N(t). Methodol Comput Appl Probab 2.2 Tail Probabilities, Change of Measure The next objective is to consider the asymptotics of the random variable X(t) under a par- ticular scaling. In this scaling we let the arrival rate be nλ rather than just λ,for n ∈ N.The value of the shot-noise process is now given by Y (t ) := X (t ), n i i=1 with the vector (X (t), ...,X (t )) consisting of i.i.d. copies of the random variable X(t) 1 n introduced above; here the infinite divisibility of a Compound Poisson distribution is used. Our goal is to devise techniques to analyze the tail distribution of Y (t ). Standard theory now provides us with the asymptotics of p (a) = P(Y (t )  na) n n for some a> m(t); we are in the classical ‘Cramer ´ setting’ (Dembo and Zeitouni 1998, Section 2.2) if it is assumed that M(ϑ) is finite in a neighborhood around the origin (which requires that the same property is satisfied by β(·)). Let I(a) and ϑ ≡ ϑ (a), respectively, be defined as I(a) := sup ϑa − log M(ϑ) ,ϑ := arg sup ϑa − log M(ϑ) , ϑ ϑ with M(·) as above. Using ‘Cramer’, ´ we obtain that, under mild conditions, lim log p (a) =−I(a) =−ϑ a + log M(ϑ ). n→∞ More refined asymptotics are available as well; we get back to this issue in Section 2.3. As these results apply in the regime that n is large, a relevant issue concerns the develop- ment of efficient techniques to estimate p (a) through simulation. 
An important rare-event simulation technique is importance sampling, relying on the commonly used exponential twisting technique. We now investigate how to construct the exponentially twisted version Q (with twist ϑ ) of the original probability measure P. The main idea is that under Q the X (t ) have mean a, such that under the new measure the event under study is not rare anymore. More concretely, exponential twisting with parameter ϑ means that under the new measure Q,the X (t ) should have the MGF (ϑ +ϑ )X(t ) E e M(ϑ + ϑ ) ϑX(t) E e = = ; (3) ϑ X(t) E e M(ϑ ) under this choice the random variable has the desired mean: M (ϑ ) E X(t) = = a. M(ϑ ) The question is now: how to sample a random variable that has this MGF? To this end, notice −rU that M(ϑ) = exp(−λt + λt E exp(ϑ Be )) and ∞ k −rU k  −rU (λt E exp(ϑ B e )) E exp((ϑ + ϑ )B e ) −λt M(ϑ + ϑ ) = e , −rU k! E exp(ϑ B e ) k=0 Methodol Comput Appl Probab such that Eq. 3 equals ∞ k −rU k  −rU (λt E exp(ϑ B e )) E exp((ϑ + ϑ )B e ) −rU exp(−λt E exp(ϑ B e )) . −rU k! E exp(ϑ B e ) k=0 From this expression we can see how to sample the X (t ) under Q,asfollows.Inthefirst place we conclude that under Q the number of arrivals becomes Poisson with mean −rU −ru λt E exp(ϑ B e ) = λ β(e ϑ )du, (4) rather than λt (which is an increase). Likewise, it entails that under Q the distribution of −rU the B e should be twisted by ϑ , in the sense that these random variables should have under Q the MGF −rU E exp((ϑ + ϑ )B e ) −rU E exp((ϑ + ϑ )B e ) = . −rU E exp(ϑ B e ) We now point out how such a random variable should be sampled. To this end, observe that t −ru β(e (ϑ + ϑ )) 1 −rU −ru E exp((ϑ + ϑ )B e ) = β(e ϑ )du, −ru β(e ϑ ) t so that t −ru  −ru β(e (ϑ + ϑ )) β(e ϑ ) −rU E exp((ϑ + ϑ )B e ) =  du. −ru β(e ϑ ) −rv β(e ϑ )dv From this representation two conclusions can be drawn. In the first place, supposing there are k arrivals, then the arrival epochs U ,...,U are i.i.d. under Q, with the density given 1 k by −ru β(e ϑ ) f (u) =  . U t −rv β(e ϑ ) dv In the second place, given that the k-th arrival occurs at time u, the density of the corre- −ru sponding job size B should be exponentially twisted by e ϑ (where each of the job sizes is sampled independently of everything else). Now that we know how to sample from Q it is straightforward to implement the impor- tance sampling. Before we describe its complexity (in terms of the number of runs required to obtain an estimate with given precision), we first provide an example in which we demonstrate how the change of measure can be performed. Example 1 In this example we consider the case that the B are exponentially distributed −1 −ru with mean μ . Applying the transformation w := e ϑ/μ, it is first seen that s s ϑ/μ μ 1 1 1 −ru β(e ϑ)du = du = dw −ru μ − e ϑ r −rs 1 − w w 0 0 e ϑ/μ ϑ/μ rs 1 w 1 μe − ϑ = log = log . r 1 − w r μ − ϑ −rs e ϑ/μ As ϑ solves the equation M (ϑ )/M(ϑ ) = a, we obtain the quadratic equation ϑ ϑ −rt m(t ) = a 1 − 1 − e , μ μ Methodol Comput Appl Probab leading to rt μe m(t ) −rt −rt 2 −rt ϑ = (1 + e ) − (1 − e ) + 4e 2 a (where it is readily checked that ϑ ∈ (0,μ)). Now we compute what the alternative measure Q amounts to. In the first place, the number of arrivals should become Poisson with parameter rt λ μe − ϑ log r μ − ϑ (which is larger than λt). In addition, we can check that ru  rt μe − ϑ μe − ϑ F (u) := Q(U  u) = log log μ − ϑ μ − ϑ (rather than u/t). The function F (u) has the value 0 for u = 0 and the value 1 for u = t, and is concave. 
This concavity reflects that the arrival epochs of the shots tend to be closer to 0 under Q than under P. This is because we identified each of the U with t minus the actual corresponding arrival epoch in Eq. 1; along the most likely path of Y (t ) itself the shots will be typically closer to t under Q. Observe that one can sample U under Q using the classical inverse distribution function method (Asmussen and Glynn 2007, Section II.2a): with H denoting a uniform number on [0, 1), we obtain such a sample by H 1−H 1 ϑ ϑ ϑ rt log e − 1 − + . r μ μ μ Also, conditional on a U having attained the value u, the jobs B should be sampled from i i −ru  −1 an exponential distribution with mean (μ − e ϑ ) . Remark 1 In the model we study in this section, the input of the linear stochastic fluid network is a compound Poisson process. As pointed out in Kella and Whitt (1999)the class of inputs can be extended to the more general class of increasing Levy ´ processes in a straightforward manner. 2.3 Efficiency Properties of Importance Sampling Procedure In this subsection we analyze the performance of the procedure introduced in the previous section. The focus is on a characterization of the number of runs needed to obtain an estimate with a given precision (at a given confidence level). In every run Y (t ) is sampled under Q, as pointed out above. As Q is an implementation of an exponential twist (with twist ϑ ), the likelihood ratio (of sampling Y (t ) under P relative to Q)isgiven by dP −ϑ Y (t ) n log M(ϑ ) L = = e e . dQ In addition, define I as the indicator function of the event {Y (t )  na}. Clearly, E (LI ) = n Q p (a). We keep generating samples LI (under Q), and estimate p (a) by the corresponding n n sample mean, until the ratio of the half-width of the confidence interval (with critical value T ) and the estimator drops below some predefined ε (say, 10%). Under P the number of runs needed is effectively inversely proportional to p (a), hence exponentially increasing in n. We now focus on quantifying the reduction of the number of runs when using the importance sampling procedure we described above, i.e., the one based on the measure Q. Methodol Comput Appl Probab Using a Normal approximation, it is a standard reasoning that when performing N runs the ratio of the half-width of the confidence interval and the estimator is approximately 1 T · √ Var (L I), p (a) n N and hence the number of runs needed is roughly 2 2 T Var (L I) := . 2 2 ε (p (a)) We now analyze how  behaves as a function of the ‘rarity parameter’ n. Due to the Bahadur-Rao result (Bahadur and Rao 1960), with f ∼ g denoting f /g → 1as n → n n n n ∞, 1 1 d −nI (a) p (a) = E (LI ) ∼ √ √ e ,τ := log M(ϑ) . (5) n Q n dϑ ϑ 2πτ ϑ =ϑ Using the same proof technique as in Bahadur and Rao (1960), it can be shown that 1 1 2 −2nI (a) E (L I) ∼ √ √ e ; (6) 2ϑ 2πτ see Appendix A for the underlying computation. It also follows that E (L I) ∼ Var (L I). We can use these asymptotics, to conclude that under Q the number of runs required grows slowly in n. More specifically,  is essentially proportional to n for n large. This leads to the following result; cf. (Blanchet et al. 2008, Section 2) for related findings in a more general context. Proposition 1 As n →∞, T 1 ∼ α n, α = ϑ · 2πτ . (7) ε 2 2.4 Simulation Experiments In this subsection we present numerical results for the single-node model without Markov modulation. We focus on the case of exponential jobs, as in Example 1. 
We simulate until the estimate has reached the precision ε = 0.1, with confidence level 0.95 (such that the critical value is T = 1.96). The parameters chosen are: t = 1, r = 1, λ = 1, and μ = 1. −1 We set a = 1 (which is larger than m(t ) = 1 − e ). As it turns out, ϑ = 0.2918 and λ 1 1 τ = − = 1.8240. 2 rt  2 r (μ − ϑ ) (μe − ϑ ) The top-left panel of Fig. 1 confirms the exponential decay of the probability of interest, as a function of n. In the top-right panel we verify that the number of runs indeed grows proportionally to n;thevalueof α, as defined in Eq. 7, is 198.7, which is depicted by the horizontal line. The bottom-left panel shows the density of the arrival epochs, which confirms that the arrival epochs tend to be closer to 0 under Q than under P; recall that under P these epochs are uniformly distributed on [0,t ]. Recall that we reversed time in Eq. 1: for the actual shot-noise system that we are considering, it means that in order to reach the Methodol Comput Appl Probab Fig. 1 Numerical results for Section 2.4 desired level at time t, the arrival epochs tend to be closer to t under Q than under P.The bottom-right panel presents the rate of the exponential job sizes as a function of u.Using (4), the arrival rate under Q turns out to be 1.2315. 3 Multi-node Systems, No Modulation In this section we consider multi-node stochastic fluid linear stochastic fluid networks, of the type analyzed in the work by Kella and Whitt (1999). It is instructive to first consider the simplest multi-node system: a tandem network without external input in the downstream node and no traffic leaving after having been served by the upstream node (and rate r for node , = 1, 2); later we extend the ideas developed to general linear stochastic fluid networks. 3.1 Preliminaries As mentioned above, we first consider the two-node tandem. The content of the first node is, as before, (1) −r (t −U ) 1 j X (t ) = B e j =1 Methodol Comput Appl Probab (with N having a Poisson distribution with mean λt), but it can be argued that the content of the second node satisfies a similar representation. More specifically, using the machinery developed in Kella and Whitt (1999), it turns out that N N r r 1 d 1 (2) −r (t −U ) −r (t −U ) −r U −r U 2 j 1 j 2 j 1 j X (t ) = B e − e = B e − e . j j r − r r − r 1 2 1 2 j =1 j =1 (8) As before, perform the scaling by n, meaning that the arrival rate λ is inflated by a (1) (1) (2) (2) factor n. It leads to the random vectors (X (t),...,X (t )) and (X (t ), . . . , X (t )). n n 1 1 (1) (2) With these vectors we can define Y (t ) and Y (t ), analogously to how this was done in n n the single-node case; these two random quantities represent the contents of the upstream resource and the downstream resource, respectively. The state of this tandem system can be uniquely characterized in terms of its (bivariate) moment generating function. The technique to derive an explicit expression is by relying on the above distributional equality (8). Again, the key step is to condition on the number of shots that have arrived in the interval [0,t ]: with ϑ = (ϑ ,ϑ ), 1 2 (1) (2) ϑ X (t )+ϑ X (t ) 1 2 M(ϑ ) := E e (λt ) r −λt −r U −r U −r U 1 2 1 = e E exp ϑ Be + ϑ B e − e 1 2 k! r − r 1 2 k=0 k t (λt ) 1 r −λt −r u −r u −r u 1 2 1 = e E exp ϑ Be + ϑ B e − e du 1 2 k! t r − r 0 1 2 k=0 ∞ k k t (λt ) 1 r −λt −r u −r u −r u 1 2 1 = e β e ϑ + e − e ϑ du 1 2 k! t r − r 0 1 2 k=0 −r u −r u −r u 1 2 1 = exp λ β e ϑ + e − e ϑ − 1 du . 
(9) 1 2 r − r 1 2 The above computation is for the two-node tandem system, but the underlying procedure can be extended to the case of networks with more than 2 nodes, and external input in each of the nodes. To this end, we consider the following network consisting of L nodes. Jobs are generated according to a Poisson process. At an arrival epoch, an amount is added to the content of each of the resources ∈{1,...,L}, where the amount added to resource is ( ) L distributed as the (non-negative) random variable B ; β(ϑ ), with ϑ ∈ R , is the joint MGF (1) (L) of B up to B (note that the components are not assumed independent). In addition, let the traffic level at node decay exponentially with rate r (i.e., the value of the output rate is linear in the current level, with proportionality constant r ). A deterministic fraction p   0( = ) is then fed into node , whereas a fraction p  0 leaves the network (with p = 1). We denote r := r p . As an aside we mention that this general =1 model covers models in which some arrivals (of the Poisson process with parameter λ) (1) (L) actually lead to arrivals at only a subset of the L queues (since the job sizes B ,...,B are allowed to equal 0). We now point out how the joint buffer content process can be analyzed. Again our objec- tive is to evaluate the moment generating function. Define the matrix R as follows: its ( , )-th entry is r +  r , whereas its ( , )-th entry (with = )is −r .Wehave, = Methodol Comput Appl Probab according to Kella and Whitt (1999), with N again Poisson with mean λt, the following distributional equality: for any ∈{1,...,L}, L N ( ) ( ) −R(t −U ) X (t ) = B (e )  . j =1 =1 (1) (L) It means we can compute the joint MGF of X (t ) up to X (t ) as follows, cf. (Kella and Whitt 1999,Thm.5.1): ( ) M(ϑ ) := E exp ϑ X (t ) =1 ∞ L L (λt ) −λt ( ) −R(t −U) = e E exp ϑ B (e ) k! k=0 =1 =1 ∞ L L k t (λt ) 1 −λt ( ) −Ru = e E exp ϑ B (e )  du k! t k=0 =1 =1 ∞ L L k t (λt ) 1 −λt −Ru −Ru = e β (e ) ϑ ,..., (e ) ϑ du 1 L k! t k=0 =1 =1 L L −Ru −Ru = exp −λt + λ β (e ) ϑ ,..., (e ) ϑ du 1 L =1 =1 −Ru = exp λ β e ϑ − 1 du , which is the matrix/vector-counterpart of the expression (2) that we found in the single-node case; for the two-node case the special form (9) applies. 3.2 Tail Probabilities, Change of Measure In this subsection we introduce the change of measure that we use in our importance sam- pling approach. Many of the concepts are analogous to concepts used for the single-node case in Section 2. Define (in self-evident notation) (1) (L) p (a) := P Y (t )  na ,...,Y (t )  na . n 1 L n n Due to the multivariate version of Cramer’ ´ s theorem, with A := [a , ∞) × ··· × [a , ∞), 1 L lim log p (a) =− inf I(b), where I(b) := sup ( ϑ , b − log M(ϑ )) . (10) n→∞ n b∈A More refined asymptotics than the logarithmic asymptotics of Eq. 10 are available as well, but these are not yet relevant in the context of the present subsection; we return to these ‘exact asymptotics’ in Section 3.3. We assume that the set A is ‘rare’, in the sense that ∂M(ϑ ) m(t ) ∈ A, with m (t ) := . ∂ϑ ϑ =0 Methodol Comput Appl Probab Let us now construct the importance sampling measure. Let ϑ be the optimizing ϑ in the decay rate of p (a). Mimicking the reasoning we used in the single-node case, the number of arrivals becomes Poisson with mean −r u  −r u −r u 1 2 1 λ β e ϑ + e − e ϑ du 1 2 r − r 0 1 2 (rather than λt). The density of U under Q becomes −r u  −r u −r u 1 2 1 β e ϑ + e − e ϑ 1 2 r − r Q 1 2 f (u) =   . 
U t −r v  −r v −r v 1 2 1 β e ϑ + e − e ϑ dv 1 2 r − r 0 1 2 Given a sample from this distribution attains the value u, the distribution of the correspond- ing random variable B should be twisted by −r u  −r u −r u 1 2 1 e ϑ + e − e ϑ . 1 2 r − r 1 2 Analogously to what we found above in the two-node tandem, we can identify Q for gen- eral linear stochastic fluid networks. We find that under Q the number of arrivals becomes Poisson with parameter −Ru λ β e ϑ du. The arrival epochs should be drawn using the density −Ru β e ϑ f (u) =  . U t −Rv β e ϑ dv (1) (L) Given an arrival at time u, (B ,...,B ) should be exponentially twisted by −Ru  −Ru (e ϑ ) ,...,(e ϑ ) . 1 L 3.3 Efficiency Properties of Importance Sampling Procedure We now consider the efficiency properties of the change of measure proposed in the previous subsection. To this end, we first argue that the vector ϑ generally has some (at least one) strictly positive entries, whereas the other entries equal 0; i.e., there are no negative entries. To this end, we first denote by b the ‘most likely point’ in A: b := arg inf I(b), b∈A so that ϑ = ϑ (b ). It is a standard result from convex optimization that ∂I (b) = ϑ (b). (11) ∂b Suppose now that ϑ (b )< 0. Increasing the i-th component of the b (while leaving all other components unchanged) would lead to a vector that is still in A, but that by virtue of Eq. 11 corresponds to a lower value of the objective function I(·), thus yielding that b was not the optimizer; we have thus found a contradiction. Similarly, when ϑ (b ) = 0wehave that b >a (as otherwise a reduction of the objective function value would be possible, which contradicts b being minimizer). Methodol Comput Appl Probab Now define as the subset of i ∈{1,...,L} such that ϑ > 0, and let D ∈{1,...,L} the number of elements of . We now argue that the number of runs needed to obtain an D/2 estimate of predefined precision scales as n . Relying on the results from Chaganthy and Sethuraman (1996) (in particular their Thm. 3.4), it follows that p (a) behaves (for n −D/2  2 large) proportionally to n exp(−nI (b )); using the same machinery, E (L I) behaves −D/2 proportionally to n exp(−2nI (b )). Mimicking the line of reasoning of Section 2.3, D/2 we conclude that the number of runs needed is essentially proportional to n . The formal statement is as follows; in Appendix A we comment on the underlying computations. Proposition 2 As n →∞, √ D T 1 D/2 ∼ αn ,α = ϑ · 2π τ, (12) 2 D ε 2 i∈D where τ is the determinant of the Hessian of log M(ϑ ) in ϑ . We further illustrate the ideas and intuition behind the qualitative result described in the above proposition by considering the case L = 2. It is noted that three cases may arise: (i) ={1, 2}, (ii) ={1}, (iii) ={2}; as case (iii) can be dealt with in the same way as case (ii), we concentrate on the cases (i) and (ii) only. In case (i), where D = 2, the necessary condition (Chaganthy and Sethuraman 1996, Eqn. (3.4)) is fulfilled as ϑ > 0 componentwise. As in addition the conditions A–C of (Chaganthy and Sethuraman 1996) are in place, it is concluded that (Chaganthy and Sethuraman 1996, Thm. 3.4) can be applied, leading to b = a, and 1 1 −nI (a) p (a) ∼ √ e , n ϑ ϑ · 2π τ 1 2 where τ is the determinant of the Hessian of log M(ϑ ) in ϑ . Along the same lines, it can be shown that 1 1 2 −2nI (a) E (L I) ∼ e . n 4ϑ ϑ · 2π τ 1 2 It now follows that  is roughly linear in n: with ε and T as introduced in Section 2.3, T π τ = αn, α := ϑ ϑ · . 
(13) 1 2 ε 2 In case (ii), we do not have that ϑ > 0 componentwise, and hence (Chaganthy and Sethura- man 1996, Thm. 3.4) does not apply; in the above terminology, D = 1 < 2 = L.Observe (1) (2) that in this case the exponential decay rate of the event {Y (t )  na ,Y (t ) < na } 1 2 n n (1) strictly majorizes that of {Y (t )  na } (informally: the former event is substantially less n 1 likely than the latter). It thus follows that b = a and b >a ,and 1 2 1 2 (1) (1) (2) p (a) = P Y (t )  na − P Y (t )  na ,Y (t ) < na n 1 1 2 n n n 1 1  d (1) −2nI (b ) ∼ P Y (t )  na ∼ √ √ e ,τ := log M(ϑ, 0) , n dϑ ϑ 2πτ ϑ =ϑ and in addition 1 1 2 −2nI (b ) E (L I) ∼ √ √ e . 2ϑ 2πτ 1 Methodol Comput Appl Probab As a consequence in this regime  grows essentially proportional to n for n large: √ T 1 ∼ α n, α := ϑ · 2πτ . ε 2 In case (iii)  behaves proportionally to n as well. 3.4 Simulation Experiments We conclude this section by providing a few numerical illustrations. In the first set we focus on the downstream queue only (i.e., we analyze p (0,a )), whereas in the second set we n 2 consider the joint exceedance probability p (a). The precision and confidence have been chosen as in Example 1. Throughout we take t = 1, r = 2, r = 1, λ = 1, and μ = 1. 1 2 In the first set of experiments we take a = 0and a = 1. Elementary numerical analysis 1 2 yields that ϑ = 0.8104 and τ = 1.4774, leading to α, as defined in Eq. 13, equalling 474.3. For graphs on the behavior of p (1) asafunctionof n and the number of runs needed, we refer to (Boxma et al. 2018, Fig. 2). The two panels of Fig. 2 should be interpreted as the bottom panels of Fig. 1. Interestingly, the left panel indicates that in the tandem system it does not pay off to let jobs arrive right before t (as they first have to go through the first resource to end up in the second resource), as reflected by the shape of the density of the arrival epochs under Q; to this end, recall that we reversed time in Eq. 8,sothatalow density at u = 0 in the graph corresponds to a high density at u = t in the actual system. The arrival rate under Q is 1.5103. In the second set of experiments we take a = 1.2and a = 1.1; all other parameters 1 2 are the same as in the first set. As mentioned above, we now consider the joint exceedance probability. As it turns out, ϑ = 0.1367 and ϑ = 0.2225. For graphs describing the 1 2 behavior of p (1.2, 1.1) asafunctionof n and the number of runs needed, we refer to (Boxma et al. 2018, Fig. 3); the latter graph reveals that for this specific parameter setting /n converges to the limiting constant rather slowly. Concerning the left panel of Fig. 3, note that in Section 2 we saw that to make sure the first queue gets large it helps to have arrivals at the end of the interval, whereas above we observed that to make the second queue large arrivals should occur relatively early. We now focus on the event that both queues are large, and consequently the arrival distribution becomes relatively uniform again, as shown in the left panel of Fig. 3. The arrival rate under Q is 2.3478. IS IS MC MC 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 u u Fig. 2 Numerical results for Section 3.4: downstream queue only density 0.0 0.4 0.8 1.2 rate 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab IS IS MC MC 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 u u Fig. 3 Numerical results for Section 3.4: both queues 4 Multi-node Systems Under Markov Modulation In this section consider the networks analyzed in the previous section, but now in a random environment. 
More specifically, the type of random environment we focus on here is known as Markov modulation: the system dynamics are affected by the state of an external finite- state irreducible Markov process J(·) with generator matrix Q = (q ) . When this jj j,j =1 Markov process (usually referred to as the background process) is in state j, arrivals occur according to a Poisson process with rate λ ,the MGF of the job size is β (ϑ ), and the routing j j matrix is R . Analogously to the definitions used in the case without Markov modulation, this routing matrix’ (i, i)-th entry is (j ) (j ) (R ) := r + r , j ii ii ii =i which can be interpreted as the rate at which fluid leaves server i when the background process is in j. Likewise, for i = i , (j ) (R )  := −r , j ii ii which is the rate at which fluid flows from server i to i when the background process is in j. Below we assume that J(0) = j for a fixed state j ∈{1,...,d}; it is seen that all 0 0 results generalize to an arbitrary initial distribution in a straightforward manner. The structure of the section is as follows: we consecutively describe general results for the model under consideration (extending earlier stationary results from Kella and Stadje (2002) to their transient counterpart), propose an importance sampling measure, establish efficiency properties of the corresponding estimator, and present a number of numerical experiments. Note that the setup of this section slightly differs from that of the previous sections. For the models covered in Sections 2 and 3, already detailed explicit analysis is available; see e.g. the results in terms of transforms and moments in Kella and Whitt (1999). Such a complete analysis is lacking for the model featuring in the present section. With the results of our paper added to the literature, the situation has become ‘uniform’: for all three setups (i.e., Sections 2, 3,and 4), one has results on transient transforms, transient moments, as well as recipes for efficient rare-event simulation. density 0.0 0.4 0.8 1.2 rate 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab 4.1 Exact Expressions for Moments Before focusing on simulation-based techniques, this subsection (which can be read inde- pendently of the rest of the section) shows that various moment-related quantities can be computed in closed form. Multi-node systems under Markov modulation have been studied in detail by Kella and Stadje (2002). We start this subsection by providing a compact derivation of a PDE charac- terizing the system’s transient behavior, which was not included in that paper. To this end, we define, for j ∈{1,...,d}, ( ) (ϑ,t) := E exp ϑ X (t ) 1 (t ) , j j =1 with 1 (t ) the indicator function of the event that J(t) = j. Using the standard ‘Markov machinery’,  (ϑ,t + t ) equals (up to o(t ) terms) the sum of a contribution λ t  (ϑ,t)β (ϑ ) j j j due to the scenario that an arrival occurs between t and t + t, a contribution q t  (ϑ,t) j j j =j due to the scenario that the modulating Markov process jumps between t and t + t,anda contribution L L ( ) 1 − λ t − q t E exp ϑ − ϑ  (R )  t X (t ) 1 (t ) , j j j j =1 =1 with q := −q ; regarding the last term, observe that when the background process is in j jj state j, and no new job arrives between t and t + t, ( ) ( ) ( ) ( ) X (t + t ) = X (t ) − (R ) t X (t ) − (R ) t X (t ). j j We thus find that (ϑ,t + t ) = λ t β (ϑ) (ϑ,t) + q t  (ϑ,t) + j j j j j j j =j 1 − λ t − q t  ϑ − R ϑ t, t + o(t ). 
j j j j This immediately leads to (by subsequently subtracting  (ϑ,t) from both sides, dividing by t, and letting t ↓ 0) d L ∂ ∂ (ϑ,t) = λ β (ϑ ) − 1  (ϑ,t) + q    (ϑ,t) − R ϑ  (ϑ,t). j j j j j j j j j ∂t ∂ϑ j =1 =1 (14) Let us now compactly summarize the relation (14), in vector/matrix notation. This notation will prove practical when computing higher moments; in other (but related) contexts, similar procedures have been proposed in e.g. Huang et al. (2016) and Rabehasaina (2006). Let n ×n 1 2 M be the set of R-valued matrices of dimension n × n (for generic n ,n ∈ N). 1 2 1 2 Methodol Comput Appl Probab In addition, I is the identity matrix of dimension n ∈ N. We introduce the following three d×d d×d Ld×Ld matrices in M , M ,and M , respectively: ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ λ ... 0 β (ϑ) ... 0 R ... 0 1 1 1 ⎜ . . ⎟ ⎜ . . ⎟ ⎜ . . ⎟ . . . . . . . . . . . . := ,B(ϑ ) := ,R := . ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ . . . . . . . . . 0 ... λ 0 ... β (ϑ ) 0 ... R d d d We use the conventional notation ⊗ for the Kronecker product. Recall that the Kronecker product is bilinear, associative and distributive with respect to addition; these properties we will use in the sequel without mentioning. It also satisfies the mixed product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD). Furthermore, note that I ⊗ I = I . n n n n 1 2 1 2 We now consider some differentiation rules for matrix-valued functions which will allow us to iteratively evaluate moments. In the first place we define the operator ∇ for ϑ ∈ R ; to keep notation compact, we often suppress the subscript ϑ, and write just ∇.Let f ≡ f(ϑ ) L n ×n n L×n 1 2 1 2 be a mapping of R to M .Then ∇f ≡∇f(ϑ ) ∈ M is defined by ⎛ ⎞ ⎛ ⎞ ∇f ∇f ··· ∇f ∂ f 11 12 1n 1 ij ⎜ ⎟ ⎜ ⎟ ∇f ∇f ··· ∇f ∂ f 21 22 2n 2 ij ⎜ ⎟ ⎜ ⎟ := ∇f = ⎜ ⎟ , where ∇f ⎜ ⎟ . . . . ij . . . . . . ⎝ ⎠ ⎝ ⎠ . . . . ∇f ∇f ··· ∇f ∂ f n 1 n 2 n n L ij 1 1 1 2 In the above definition ∇f ≡∇f (ϑ ) is to be understood as the usual gradient; the symbol ij ij ∂ is used to denote the partial derivative with respect to the i-th variable, in the sense of ∂ f := f (ϑ ). i ij ij ∂ϑ k k k−1 0 Furthermore, we define inductively ∇ f ≡∇ f(ϑ ) := ∇(∇ f), k ∈ N, with ∇ f := k L L n ×n 1 2 f . It is checked that ∇ f(ϑ ) is a mapping of R to M . In the sequel we use a couple of differentiation rules, that we have listed below. Let A(·) L n ×n 1 2 be a matrix-valued function from R to M ,and B(·) a matrix-valued function from L n ×n 2 3 R to M ,and let I be a q × q identity matrix (for some q ∈ N). Then, – Product rule: ∇ A(ϑ)B(ϑ ) = (∇ A(ϑ )) B(ϑ ) + (A(ϑ ) ⊗ I ) ∇ B(ϑ ); ϑ ϑ L ϑ Ln ×n 1 3 being an element of M . – Differentiation of Kronecker product (1): ∇ (I ⊗ A(ϑ )) = I ⊗ (∇ A(ϑ )). ϑ q q ϑ – Differentiation of Kronecker product (2): ∇ (A(ϑ ) ⊗ I ) = (K ⊗ I )(I ⊗ (∇ A(ϑ )))K ϑ q n ,q L q ϑ q,n 1 2 = (K ⊗ I )K (∇ A(ϑ ) ⊗ I ), n ,q L q,n ϑ q 1 2 where K is the commutation matrix defined by m,n m n K := (H ⊗ H ), m,n ij ij i=1 j =1 m×n and H ∈ M denotes a matrix with a 1 at its (i, j )-th position and zeros elsewhere, ij cf. Magnus and Neudecker (1979). Methodol Comput Appl Probab The first rule can be checked componentwise and the second rule is trivial. The third rule fol- lows from the first and second rule in combination with the fact that the Kronecker product commutes after a correction with the commutation matrices. Moreover, we use the prop- −1 erty K = K . An overview of the properties of commutation matrices can be found in n,m m,n Magnus and Neudecker (1979). In the introduced terminology, it follows that Eq. 
14 can be written as T T T (ϑ,t) =  B(ϑ ) − I (ϑ,t) + Q (ϑ,t) − I ⊗ ϑ R ∇ (ϑ,t). (15) d d ϑ ∂t We now point out how (transient and stationary) moments can be evaluated; note that Kella and Stadje (2002) focuses on the first two stationary moments at epochs that the background process jumps. We throughout use the notation z (t ) for the i-th derivative of (ϑ,t) in (0,t),for t  0: i L d×d z (t ) := ∇ (ϑ,t) ∈ M , ϑ =0 for i ∈ N. Note that, with π (t ) = (exp(Qt )) , j j ,j (ϑ , 0) = e , (0,t) = π (t ) ≡ (π (t), ...,π (t )). j 1 d ◦ We start by characterizing the first moments. Applying the operator ∇≡ ∇ to the differential equation (15) yields ∇ (ϑ,t) = ( ⊗ I )(∇ B(ϑ )) (ϑ,t) + ϑ L ϑ ∂t T T Q ⊗ I + (B(ϑ ) − I ) ⊗ I − R ∇ (ϑ,t) − L d L ϑ T T 2 ((I ⊗ ϑ )R ) ⊗ I ∇ (ϑ,t), (16) d L using standard properties of the Kronecker product in combination with T T ∇ (I ⊗ ϑ ) = I ⊗ (∇ ϑ ) = I ⊗ (e ,..., e ) = I ⊗ I = I , ϑ d d ϑ d 1 L d L dL where e denotes the L-dimensional column vector in which component i equals 1 and all other components are 0. Then, inserting ϑ = 0 into Eq. 16 yields the system of (non- homogeneous) linear differential equations T T z (t ) = ( ⊗ I ) ∇B(0) π (t ) + (Q ⊗ I ) − R z (t ). (17) L L 1 In the stationary case, we obtain −1 T T z (∞) = R − (Q ⊗ I ) ( ⊗ I ) ∇B(0) π , (18) 1 L L T T with π := lim π (t ) (being the unique non-negative solution of π Q = 0 such that t →∞ the entries of π sum to 1). ◦ We now move to second moments. Applying the ∇ -operator to Eq. 16, 2 2 ∇ (ϑ,t) = ( ⊗ I 2 )(∇ B(ϑ ))(ϑ,t) + ϑ L ϑ ∂t (( ⊗ I )∇ B(ϑ )) ⊗ I ∇ (ϑ,t) + L ϑ L ϑ ∇ (B(ϑ ) ⊗ I )∇ (ϑ,t) + ϑ L ϑ T T 2 Q ⊗ I 2 + (B(ϑ ) − I ) ⊗ I 2 − R ⊗ I ∇ (ϑ,t) − d L L L ϑ T T 3 (((I ⊗ ϑ )R ) ⊗ I ) ∇ (ϑ,t) − L ϑ T T 2 ∇ (((I ⊗ ϑ )R ) ⊗ I ) ∇ (ϑ,t), ϑ d L ϑ Methodol Comput Appl Probab in which the factor ∇ (B(ϑ ) ⊗ I ) can be expressed more explicitly as ϑ L (K ⊗ I )K ((( ⊗ I )∇ B(ϑ )) ⊗ I ), d,L L L,dL L ϑ L T T T and the factor ∇ (((I ⊗ ϑ )R ) ⊗ I ) simplifies to (K ⊗ I )K (R ⊗ I ). Inserting ϑ d L d,L L L,dL L ϑ = 0 yields the system of linear differential equations z (t ) = ( ⊗ I )(∇ B(0)) π (t ) + 2 L T T (Q ⊗ I 2 − ((K ⊗ I )K + I 2 )(R ⊗ I )) z (t ) + d,L L L,dL L 2 L dL (( ⊗ I )(∇B(0))) ⊗ I z (t ) + L L 1 (K ⊗ I )K ((( ⊗ I )∇B(0)) ⊗ I )z (t ) d,L L L,dL L L 1 where z (t ) solves (17). As before, the stationary quantities can be easily derived (by equat- ing z (t ) to 0). One has to keep in mind, however, that some of the mixed partial derivatives occur multiple times in z ,for k ∈{2, 3,...}, and therefore the solution will only be unique after removing the corresponding redundant rows. Alternatively, the system can be completed by including equations which state that these mixed partial derivatives are equal. ◦ It now follows that higher moments can be found recursively, using the three differentiation rules that we stated above. Remark 2 Various variants of our model can be dealt with similarly. In this remark we consider the slightly adapted model in which shots only occur simultaneously with a jump in the modulating Markov chain. Then (up to o(t ) terms)  (ϑ,t + t ) is the sum of a contribution q  t   (ϑ,t)β (ϑ ) j j j j =j due to the scenario that there is a jump in the modulating chain in the interval [t, t + t ] (which also induces a shot), and a contribution of L d ( ) (1 − q t ) E exp ϑ − ϑ  (R )  t X (t ) 1 (t ) , j j j =1 =1 with q := −q , in the scenario that there is no jump. 
Performing the same steps as above, j jj we obtain d L ∂ ∂ (ϑ,t)=q (β (ϑ )−1) (ϑ,t)+ q  (ϑ,t)β (ϑ )− (R ϑ )  (ϑ,t), j j j j j j j j j j j ∂t ∂ϑ j =1 j =1 which has a similar structure as Eq. 14. It follows that the moments can be found as before. With Q := diag{q ,...,q }, it turns out that the transient means are given by 1 d T T T z (t ) =∇B(0)(Q + Q)π (t ) + (Q ⊗ I ) − R z (t ). L 1 In particular, the stationary first moment equals −1 T T T z (∞) = R − (Q ⊗ I ) ∇B(0)(Q + Q)π . 1 L Consider the following numerical example for the computation of the expected values and variances, in which the technique described above is illustrated. Example 2 In this example, we choose the parameters in such a way that we see non- monotonic behavior. Our example corresponds to a case in which the system does not start Methodol Comput Appl Probab Steady state EX (t) EX (t) Steady state VarX (t) VarX (t) 01234567 01234567 t t Fig. 4 Transient expected values and variances of Example 2 empty, which is dealt with by imposing suitable starting conditions. We consider a two- dimensional (L = 2) queueing system, with a two-dimensional state space of the Markov modulating process (d = 2). We pick q = q = 1, λ = λ = 1, E B = E B = 12 21 1 2 1 2 2 2 E B = E B = 1, J(0) = 1, (X (0), X (0)) = (3, 3), and the rate matrices 1 2 1 2 2 −1 1 −1 R = ,R = . 1 2 −11 −12 Due to the symmetry in the choice of the parameters, one can expect that for both states of the background process expected value tends (as t grows large) to the same steady-state value; the same is anticipated for the stationary variance. This is confirmed by Fig. 4.For t small, the two queues behave differently due to J(0) = 1, which implies that queue 1 drains faster. Note that E X (t ) even increases for t small, due to the fact that the flow from node 1 to 2 equals the flow from 2 to 1, constituting a net flow of zero, so that the additional contribution of external output to node 2 leads to a net increase of E X (t ). The transient correlation is plotted in Fig. 5. At time t = 0 the queues are perfectly correlated, since the starting state is deterministic. Then the correlation decreases due to the asymmetric flow rates until around t = 1, which is when the Markov chain J is expected to switch, after which the correlation monotonously tends to the steady state. Steady state Cor(X (t), X (t)) 1 2 Fig. 5 Transient correlation between X (t ), X (t ) of Example 2 1 2 2.2 2.4 2.6 2.8 3.0 0.6 0.7 0.8 0.9 1.0 0.0 0.4 0.8 1.2 Methodol Comput Appl Probab 4.2 Tail Probabilities, Change of Measure We now characterize the decay rate of the rare-event probability under study, and we pro- pose a change of measure to efficiently estimate it. In the notation we have been using so far, we again focus on (1) (L) p (a) := P Y (t )  na ,...,Y (t )  na = P (Y (t ) ∈ A) , n 1 L n n n (1) (L) where Y (t ) = (Y (t ), . . . , Y (t )). It is stressed that, following (Blom and Mandjes n n n 2013), we consider the regime in which the background process is ‘slow’. In concrete terms, this means that we linearly speed up the driving Poisson process (i.e., we replace the arrival rates λ by nλ ), but leave the rates of the Markovian background process unaltered. j j First we find an alternative characterization of the state of the system at time t.Let F denote the set of all functions from [0,t ] onto the states {1,...,d}. Consider a path f ∈ F . Let f have K(f ) jumps between 0 and t, whose epochs we denote by t (f ) up to t (f ) 1 K(f ) (and in addition t (f ) := 0and t (f ) := t). 
Let

j_i(f) := lim_{u ↓ t_i(f)} f(u)

(i.e., the state of f immediately after the i-th jump). We also introduce

D_i(u, f) := exp( −(t_{i+1}(f) − u) R_{j_i(f)} ),    D_i(f) := exp( −(t_{i+1}(f) − t_i(f)) R_{j_i(f)} ).

Suppose now that the Markov process J(·) follows the path f ∈ F_t. Then the contribution to the MGF of X(t) due to shots that arrived between t_i(f) and t_{i+1}(f) is, mimicking the arguments that we used in Section 3.2 for non-modulated networks,

ψ_i(f, ϑ) := exp( λ_{j_i(f)} ∫_{t_i(f)}^{t_{i+1}(f)} [ β_{j_i(f)}( D_i(u, f) D_{i+1}(f) ··· D_{K(f)}(f) ϑ ) − 1 ] du ).

As a consequence, the MGF of X(t) given the path f is

M_f(ϑ) := Π_{i=0}^{K(f)} ψ_i(f, ϑ).

By first conditioning on the path of J(·) in F_t between 0 and t and then unconditioning, it immediately follows that the MGF of X(t) is given by M(ϑ) = E M_f(ϑ), the expectation being over the random path f of J(·). Then, precisely as is shown in Blom and Mandjes (2013) for a related stochastic system, the decay rate can be characterized as follows:

lim_{n→∞} (1/n) log p_n(a) = − inf_{f ∈ F_t} I_f(a),    I_f(a) := inf_{b ∈ A} sup_ϑ ( ⟨ϑ, b⟩ − log M_f(ϑ) ).    (19)

The argumentation to show this is analogous to the one in (Blom and Mandjes 2013, Thm. 1), and can be summarized as follows. In the first place, let f⋆ be the optimizing path in Eq. 19. Then, as J(·) does not depend on n, we can choose a 'ball' B(f⋆) around f⋆ such that the decay rate of the probability of J(·) being in that ball is 0. The lower bound follows by only taking into account the contribution due to paths in B(f⋆). The upper bound follows by showing that the contribution of all f ∈ F_t \ B(f⋆) is negligible.

Informally, the path f⋆ has the interpretation of the most likely path of J(·) given that the rare event under consideration happens. To make sure that the event under consideration is rare, we assume that for all f ∈ F_t

( ∂M_f(ϑ)/∂ϑ_1 |_{ϑ=0}, ..., ∂M_f(ϑ)/∂ϑ_L |_{ϑ=0} ) ∉ A.

The change of measure we propose is the following. In every run we first sample the path J(s) for s ∈ [0, t] under the original measure P (i.e., with J(0) = j_0, and then using the generator matrix Q). We call the resulting path f ∈ F_t. For this path, define ϑ_f ≥ 0 as the optimizing ϑ in the definition of I_f(a) in Eq. 19; b_f ∈ A is the optimizing b. Conditional on the path f of the background process, under the new measure Q the number of external arrivals between t_i(f) and t_{i+1}(f) is Poisson with parameter

∫_{t_i(f)}^{t_{i+1}(f)} λ_{j_i(f)} β_{j_i(f)}( P_i(u, f) ϑ_f ) du,

where P_i(u, f) := D_i(u, f) D_{i+1}(f) ··· D_{K(f)}(f). The arrival epochs between t_i(f) and t_{i+1}(f) should be drawn using the density

u ↦ β_{j_i(f)}( P_i(u, f) ϑ_f ) / ∫_{t_i(f)}^{t_{i+1}(f)} β_{j_i(f)}( P_i(v, f) ϑ_f ) dv.

Given an arrival at time u between t_i(f) and t_{i+1}(f), the job sizes (B^{(1)}, ..., B^{(L)}) should be sampled from a distribution with MGF β_{j_i(f)}(ϑ), but then exponentially twisted by (P_i(u, f) ϑ_f)_1, ..., (P_i(u, f) ϑ_f)_L.
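To indicate how these quantities can be evaluated numerically, the sketch below (again an illustration under stated assumptions, not the authors' implementation) computes P_i(u, f) by matrix exponentials and the above Poisson parameter by numerical quadrature; the job-size MGFs β_j and the twist ϑ_f are assumed to be supplied by the user, the latter obtained, for instance, by numerically maximizing ⟨ϑ, b⟩ − log M_f(ϑ). The arrival epochs can subsequently be drawn from the stated density by, e.g., acceptance–rejection on [t_i(f), t_{i+1}(f)).

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad

    def P_matrix(i, u, epochs_full, states, R):
        """P_i(u, f) = D_i(u, f) D_{i+1}(f) ... D_K(f), for t_i <= u <= t_{i+1}.

        epochs_full = [t_0 = 0, t_1, ..., t_K, t_{K+1} = t]; states[i] is the
        state on [t_i, t_{i+1}); R[j] is the L x L rate matrix in state j.
        """
        K = len(states) - 1
        P = expm(-(epochs_full[i + 1] - u) * R[states[i]])          # D_i(u, f)
        for k in range(i + 1, K + 1):                               # D_{i+1} ... D_K
            P = P @ expm(-(epochs_full[k + 1] - epochs_full[k]) * R[states[k]])
        return P

    def poisson_parameter(i, epochs_full, states, R, lam, beta, theta_f):
        """Mean number of arrivals on [t_i, t_{i+1}) under Q, i.e. the integral of
        lam[j_i] * beta[j_i](P_i(u, f) theta_f) over that interval."""
        j = states[i]
        def integrand(u):
            return lam[j] * beta[j](P_matrix(i, u, epochs_full, states, R) @ theta_f)
        value, _ = quad(integrand, epochs_full[i], epochs_full[i + 1])
        return value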
Remark 3 As mentioned above, the background process is sampled under the original measure, whereas an alternative measure is used for the number of arrivals, the arrival epochs, and the job sizes. The intuition behind this is that the rare event under consideration is caused by two effects:

◦ In the first place, samples of the background process J should be close to f⋆. Under P a reasonable fraction ends up close to f⋆ — more precisely, the event of J being close to f⋆ does not become increasingly rare when n grows. As a consequence, no change of measure is needed here.
◦ In the second place, given the path of the background process, the Y_n^{(ℓ)}(t) should exceed the values na_ℓ, for ℓ = 1, ..., L. This event does become exponentially rare as n grows, so importance sampling is to be applied here.

4.3 Efficiency Properties of Importance Sampling Procedure

We now analyze the speed-up realized by the change of measure introduced in the previous subsection. Unlike our results for the non-modulated systems, we now cannot identify the precise rate of growth of the number of runs required. What is possible, though, is proving asymptotic efficiency (also sometimes referred to as logarithmic efficiency), in the sense that we can show that

lim_{n→∞} (1/n) log E_Q(L² I) = 2 lim_{n→∞} (1/n) log p_n(a) = −2 inf_{f ∈ F_t} inf_{b ∈ A} sup_ϑ ( ⟨ϑ, b⟩ − log M_f(ϑ) )

(where the second equality is a consequence of Eq. 19). This equality is proven as follows. As, by Jensen's inequality, E_Q(L² I) ≥ (E_Q(L I))² = (p_n(a))², we are left to prove the upper bound:

lim_{n→∞} (1/n) log E_Q(L² I) ≤ 2 lim_{n→∞} (1/n) log p_n(a).

If the path of J(·) equals f ∈ F_t, it follows by an elementary computation that we have constructed the measure Q such that

L = dP/dQ = exp( −⟨ϑ_f, Y_n(t)⟩ + n log M_f(ϑ_f) ).

The fact that ϑ_f is componentwise non-negative, in combination with the fact that Y_n^{(ℓ)}(t) ≥ na_ℓ (for ℓ = 1, ..., L) when I = 1, entails that

L I ≤ exp( −n ⟨ϑ_f, a⟩ + n log M_f(ϑ_f) ) = exp( −n ⟨ϑ_f, b_f⟩ + n log M_f(ϑ_f) ) = e^{−n I_f(a)},

noting that a and b_f may only differ in coordinates for which the corresponding entry of ϑ_f equals 0 (that is, ⟨a − b_f, ϑ_f⟩ = 0). The upper bound thus follows: with f⋆ the minimizing path in Eq. 19, and recalling that J(·) is sampled under P,

E_Q(L² I) ≤ E e^{−2n I_J(a)} ≤ e^{−2n I_{f⋆}(a)}.

We have established the following result.

Proposition 3 As n → ∞, the proposed importance sampling procedure is asymptotically efficient. This means that the number of runs needed grows subexponentially: its logarithm, divided by n, vanishes as n → ∞.

Remark 4 In the scaling considered, for both the logarithmic asymptotics of p_n(a) and our importance sampling algorithm, the precise transition rates q_{ij} do not matter; the only crucial element is that the background process is irreducible. Observe that, even though the logarithmic asymptotics of p_n(a) do not depend on the actual values of the transition rates q_{ij}, the probability p_n(a) itself and its exact asymptotics do depend on those rates. We refer to Blom et al. (2017) for the exact asymptotics of a related infinite-server model; it is noted that the derivation of such precise asymptotics is typically highly involved. The above reasoning indicates that the proposed procedure remains valid under more general conditions: the ideas carry over to any situation in which the rates are piecewise constant along the most likely path.

4.4 Simulation Experiments

We performed experiments featuring a single-node system under Markov modulation. In our example the job sizes stem from an exponential distribution: when the background process is in state i, the arrival rate is λ_i, the job-size distribution is exponential with parameter μ_i, and the rate at which the storage level decays is r_i, for i ∈ {1, ..., d}.

The change of measure is then implemented as follows. As pointed out in Section 4.2, per run a path f of the background process is sampled under the original measure P. Suppose that along this path there are K transitions (remarking that, for compactness, we leave out the argument f here), say at times t_1 up to t_K; with t_0 = 0 and t_{K+1} = t, the state between t_i and t_{i+1} is denoted by j_i, for i = 0, ..., K.
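Before detailing the per-interval quantities, the following schematic driver (a sketch only; the three helper routines are hypothetical and would be implemented along the lines of the sketches above and further below) indicates how the individual runs are combined into an estimate. As an assumption, the 'precision' of the estimate is interpreted here as the half-width of an asymptotic 95% confidence interval relative to the running estimate.

    import numpy as np

    def is_estimate(n, a, params, eps=0.1, min_runs=100, max_runs=10**6, seed=1):
        """Schematic importance-sampling estimator of p_n(a) for the single-node
        Markov-modulated example.  Hypothetical helpers:
          sample_background_path(params, rng)                  -- path of J under P
          compute_twist(path, a, params)                       -- (theta_star, log M(theta_star))
          sample_storage_under_Q(path, theta, n, params, rng)  -- Y_n(t) under Q
        """
        rng = np.random.default_rng(seed)
        samples = []
        while len(samples) < max_runs:
            path = sample_background_path(params, rng)            # under P
            theta_star, logM = compute_twist(path, a, params)
            Y = sample_storage_under_Q(path, theta_star, n, params, rng)
            lr = np.exp(-theta_star * Y + n * logM)               # dP/dQ for this run
            samples.append(lr * float(Y >= n * a))
            if len(samples) >= min_runs:
                est = np.mean(samples)
                half_width = 1.96 * np.std(samples, ddof=1) / np.sqrt(len(samples))
                if est > 0 and half_width <= eps * est:
                    break
        return np.mean(samples), len(samples)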
Per run a specific change of measure is to be computed, parametrized by the t_i and the j_i, as follows. We define

P̄_i(u) := P̄_i e^{r_{j_i} u},    P̄_i := e^{−r_{j_i} t_{i+1}} Π_{k=i+1}^{K} e^{−r_{j_k} (t_{k+1} − t_k)};

the product in this expression should be interpreted as 1 if i + 1 > K. It is readily checked that

M(ϑ) = exp( Σ_{i=0}^{K} λ_{j_i} ∫_{t_i}^{t_{i+1}} P̄_i(u) ϑ / (μ_{j_i} − P̄_i(u) ϑ) du ).

Let ϑ⋆ be the maximizing argument of ϑ a − log M(ϑ).

We can now provide the alternative measure Q for this path of the background process. The number of arrivals between t_i and t_{i+1} (for i = 0, ..., K) becomes Poisson with parameter

∫_{t_i}^{t_{i+1}} λ_{j_i} μ_{j_i} / (μ_{j_i} − P̄_i(u) ϑ⋆) du
    = (λ_{j_i} / r_{j_i}) log( (μ_{j_i} − P̄_i e^{r_{j_i} t_i} ϑ⋆) / (μ_{j_i} e^{−r_{j_i} (t_{i+1} − t_i)} − P̄_i e^{r_{j_i} t_i} ϑ⋆) )
    = (λ_{j_i} / r_{j_i}) log( (μ_{j_i} − P̄_i e^{r_{j_i} t_i} ϑ⋆) / (μ_{j_i} − P̄_i e^{r_{j_i} t_{i+1}} ϑ⋆) ) + λ_{j_i} (t_{i+1} − t_i)

(where it is noted that this expression is larger than λ_{j_i} (t_{i+1} − t_i), which was the parameter under P). The density of each of the arrivals between t_i and t_{i+1} becomes

u ↦ ( 1 / (μ_{j_i} − P̄_i(u) ϑ⋆) ) / ∫_{t_i}^{t_{i+1}} dv / (μ_{j_i} − P̄_i(v) ϑ⋆)
    = ( μ_{j_i} / (μ_{j_i} − P̄_i(u) ϑ⋆) ) · r_{j_i} / log( (μ_{j_i} − P̄_i e^{r_{j_i} t_i} ϑ⋆) / (μ_{j_i} e^{−r_{j_i} (t_{i+1} − t_i)} − P̄_i e^{r_{j_i} t_i} ϑ⋆) )

(rather than a uniform distribution, as was the case under P); sampling from this distribution is easy, since the inverse distribution function can be determined in closed form. Given an arrival that takes place at time u between t_i and t_{i+1}, the job size is exponential with parameter μ_{j_i} − P̄_i(u) ϑ⋆ (rather than exponential with parameter μ_{j_i}).
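A minimal Python sketch of this per-path sampling step — an illustration under stated assumptions, not the authors' code — is given below. It takes the epochs t_0 = 0 < t_1 < ··· < t_K < t_{K+1} = t and the states j_0, ..., j_K of the sampled path, the parameters λ, μ, r (for the scaled system one passes nλ_i in place of λ_i), and a twist ϑ⋆ that is assumed to have been computed beforehand (e.g. by numerically maximizing ϑa − log M(ϑ) over the interval on which M is finite); it returns the arrival epochs and job sizes drawn under Q, using the closed-form inverse of the arrival-epoch distribution function.

    import numpy as np

    def sample_arrivals_under_Q(epochs_full, states, lam, mu, r, theta, rng):
        """Sample arrival epochs and job sizes under Q for a single node with
        exponential job sizes, given a background path.

        epochs_full = [0, t_1, ..., t_K, t]; states[i] is the state on
        [t_i, t_{i+1}).  Requires mu[j_i] - Pbar_i(u) * theta > 0 throughout.
        """
        K = len(states) - 1
        arrivals, sizes = [], []
        for i in range(K + 1):
            ti, ti1, j = epochs_full[i], epochs_full[i + 1], states[i]
            # Pbar_i = exp(-r_{j_i} t_{i+1}) * prod_{k > i} exp(-r_{j_k}(t_{k+1} - t_k))
            Pbar = np.exp(-r[j] * ti1)
            for k in range(i + 1, K + 1):
                Pbar *= np.exp(-r[states[k]] * (epochs_full[k + 1] - epochs_full[k]))
            c = Pbar * theta              # so that Pbar_i(u) * theta = c * exp(r_j u)
            # Poisson parameter: (lam_j/r_j) log((mu_j e^{-r_j t_i} - c)/(mu_j e^{-r_j t_{i+1}} - c))
            par = lam[j] / r[j] * np.log((mu[j] * np.exp(-r[j] * ti) - c)
                                         / (mu[j] * np.exp(-r[j] * ti1) - c))
            N = rng.poisson(par)
            # inverse-CDF sampling of arrival epochs: density proportional to 1/(mu_j - c e^{r_j u})
            def G(s):
                return -np.log(mu[j] * np.exp(-r[j] * s) - c) / (mu[j] * r[j])
            g = G(ti) + rng.random(N) * (G(ti1) - G(ti))
            u = -np.log((c + np.exp(-mu[j] * r[j] * g)) / mu[j]) / r[j]
            # twisted job sizes: exponential with parameter mu_j - Pbar_i(u) * theta
            b = rng.exponential(1.0 / (mu[j] - c * np.exp(r[j] * u)))
            arrivals.extend(u.tolist())
            sizes.extend(b.tolist())
        return np.array(arrivals), np.array(sizes)

Note that a job of size B arriving at time u in the i-th interval contributes B·P̄_i(u) to the storage level at time t, so summing these contributions yields Y_n(t), to which the likelihood ratio of Section 4.3 is then applied.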
We now describe two examples in which the dimension of the background process is d = 2, q_{12} = q_{21} = 2, and t = 1. In the first example we fix a = 3, λ = (2, 1), μ = ( · , 1), and r = (5, 1); in the second example a = 0.8, λ = (0.9, 1), μ = (0.9^{−1}, 1), and r = (0.3, 0.6). As before, we simulate until the precision of the estimate has reached ε = 0.1. The top two panels in Figs. 6–7 should be read as those in Figs. 1–3; the bottom two panels correspond to the density of the arrival epochs and the rate of the exponential job sizes, respectively, for f the 'empirical maximizer' of I_f(a) (i.e., the maximizer of I_f(a) over all paths f of the background process that were sampled in the simulation experiment).

Fig. 6 Numerical results for Section 4.4: first example, p_n(3) (top: estimate and number of runs against the scale n, for IS and MC; bottom: density of the arrival epochs and rate of the job sizes against u)

Fig. 7 Numerical results for Section 4.4: second example, p_n(0.8) (panels as in Fig. 6)

In the first example the thus obtained 'optimal path' subsequently visits states 1, 2, and 1, where the corresponding jump times are t_1 = 0.654 and t_2 = 0.739, and the decay rate is 0.573. The mean numbers of arrivals in the three parts of the optimal path are 1.392, 0.090 and 0.963, respectively, whereas for Monte Carlo sampling these are 1.308, 0.085 and 0.522, respectively.

In the second example the optimal path subsequently visits states 2 and 1, where the corresponding jump time is t_1 = 0.790. In this case the decay rate has the value 0.000806. The mean numbers of arrivals in the two parts of the optimal path are 0.812 and 0.195, respectively, which are slightly higher than the corresponding values under Monte Carlo sampling (0.790 and 0.189, respectively). Observe that in this example the difference between the two measures is relatively small, which is also reflected by the small value of the decay rate; the event under consideration technically qualifies as 'rare', in that p_n(0.8) → 0 as n → ∞, but it has a relatively high likelihood (e.g. as compared to the first example). As a consequence of the fact that both measures almost coincide, the two densities in the bottom-left panel can hardly be distinguished. We observe that the top panels confirm that in both examples (i) p_n(a) decays roughly exponentially in n, and (ii) the number of runs needed grows roughly linearly in n (in the first example slightly sublinearly).

5 Discussion and Concluding Remarks

In this paper we have considered the probability of attaining a value in a rare set A at a fixed point in time t: with A = [a_1, ∞) × ··· × [a_L, ∞),

p_n(a) = P( Y_n^{(1)}(t) ≥ na_1, ..., Y_n^{(L)}(t) ≥ na_L ).

A relevant related quantity is the probability of having reached the set A before t:

P( ∃s ≤ t : Y_n^{(1)}(s) ≥ na_1, ..., Y_n^{(L)}(s) ≥ na_L );    (20)

observe that this probability increases to 1 as t → ∞. Alternatively, one could study the probability that all a_ℓ (for ℓ = 1, ..., L) are exceeded before t, but not necessarily at the same time:

P( ∃s_1 ≤ t : Y_n^{(1)}(s_1) ≥ na_1, ..., ∃s_L ≤ t : Y_n^{(L)}(s_L) ≥ na_L ).    (21)

Powerful novel sample-path large-deviations results by Budhiraja and Nyquist (2015), which deal with a general class of multi-dimensional shot-noise processes, may facilitate the development of efficient importance sampling algorithms for non-modulated linear stochastic fluid networks. The results in Budhiraja and Nyquist (2015) do not cover Markov modulation, though.

In the current setup of Section 4 the speed of the background process is kept fixed, i.e., not scaled by n. For modulated diffusions a sample-path large deviation principle has recently been established in Huang et al. (2016) for the case that the background process is sped up by a factor n (which amounts to multiplying the generator matrix Q by n); the rate function decouples into (i) a part concerning the rare-event behavior of the background process and (ii) a part concerning the rare-event behavior of the diffusion (conditional on the path of the background process). With a similar result for the Markov-modulated linear stochastic fluid networks that we have studied in this paper, one could potentially set up an efficient importance sampling procedure for the probabilities (20) and (21) under this scaling.
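To make the distinction between the fixed-time probability p_n(a) and the first-passage probability (20) concrete, the following crude Monte Carlo sketch — a toy illustration with hypothetical parameters, not an efficient estimator and not part of the methodology developed above — estimates both quantities for a single non-modulated node; recall that for L = 1 the probabilities (20) and (21) coincide, and note that between arrivals the storage level only decreases, so the running maximum over [0, t] is attained immediately after arrival epochs.

    import numpy as np

    def crude_mc_fixed_vs_before(n, a, lam, mu, r, t, runs, seed=1):
        """Crude Monte Carlo for a single non-modulated node: estimates
        P(Y_n(t) >= n a) and P(exists s <= t: Y_n(s) >= n a), cf. (20).
        Arrivals are Poisson with rate n * lam, job sizes exponential with
        parameter mu, and the storage level decays at rate r."""
        rng = np.random.default_rng(seed)
        hit_fixed = hit_before = 0
        for _ in range(runs):
            N = rng.poisson(n * lam * t)
            s = np.sort(rng.uniform(0.0, t, N))        # arrival epochs
            B = rng.exponential(1.0 / mu, N)           # job sizes
            hit_fixed += np.sum(B * np.exp(-r * (t - s))) >= n * a
            level = prev = peak = 0.0                  # running maximum over [0, t]
            for sk, bk in zip(s, B):
                level = level * np.exp(-r * (sk - prev)) + bk
                prev, peak = sk, max(peak, level)
            hit_before += peak >= n * a
        return hit_fixed / runs, hit_before / runs

    # e.g. crude_mc_fixed_vs_before(n=20, a=0.8, lam=1.0, mu=1.0, r=0.6, t=1.0, runs=10**4)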
Acknowledgments The research of O. Boxma, D. Koops and M. Mandjes was partly funded by the NWO Gravitation Project NETWORKS, Grant Number 024.002.003. The research of O. Boxma was also partly funded by the Belgian Government, via the IAP Bestcom project. The research of E. Cahen was funded by an NWO grant, Grant Number 613.001.352.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A

We here point out how (6) can be established; the line of reasoning is precisely the same as in the derivation of (5) in (Dembo and Zeitouni 1998, Thm. 3.7.4). First write

E_Q(L² I) = E_Q( e^{−2ϑ⋆ Y_n(t)} e^{2n log M(ϑ⋆)} 1{Y_n(t) ≥ na} ) = e^{−2n I(a)} E_Q( e^{−2ϑ⋆ (Y_n(t) − na)} 1{Y_n(t) ≥ na} ),

which, with Z_n := (Y_n(t) − na)/√n, equals

e^{−2n I(a)} E_Q( e^{−2ϑ⋆ Z_n √n} 1{Z_n ≥ 0} ).

Observe that E_Q Y_n(t) = na, due to the very choice of Q. This entails that Z_n converges in distribution to a centered normal random variable; as can be verified, the corresponding variance is τ (where τ is defined in Eq. 5). Using the Berry–Esseen-based justification presented in (Dembo and Zeitouni 1998, page 111), we conclude that, as n → ∞,

E_Q( e^{−2ϑ⋆ Z_n √n} 1{Z_n ≥ 0} ) ∼ ∫_0^∞ e^{−2ϑ⋆ √n x} (2πτ)^{−1/2} e^{−x²/(2τ)} dx.

Completing the square, the right-hand side of the previous display equals, with N(M, V) a normal random variable with mean M and variance V,

e^{2(ϑ⋆)² nτ} P( N(−2ϑ⋆ √n τ, τ) > 0 ) = e^{2(ϑ⋆)² nτ} P( N(0, 1) > 2ϑ⋆ √(nτ) ).

Now we use the standard equivalence (as x → ∞)

P( N(0, 1) > x ) ∼ (1/x) (2π)^{−1/2} e^{−x²/2},

to obtain

∫_0^∞ e^{−2ϑ⋆ √n x} (2πτ)^{−1/2} e^{−x²/(2τ)} dx ∼ 1 / (2ϑ⋆ √(2πnτ)).

Combining the above, we derive the claim:

E_Q(L² I) ∼ 1 / (2ϑ⋆ √(2πnτ)) · e^{−2n I(a)}.

We now proceed with the computations underlying (12). To this end, first observe that

L = dP/dQ = e^{n log M(ϑ⋆) − ⟨ϑ⋆, Y_n(t)⟩}.

As a consequence, in line with the above computation for the one-dimensional case,

E_Q(L I) = e^{−n I(b⋆)} E_Q( e^{−⟨ϑ⋆, Y_n(t) − na⟩} 1{Y_n(t) ∈ A} ),
E_Q(L² I) = e^{−2n I(b⋆)} E_Q( e^{−2⟨ϑ⋆, Y_n(t) − na⟩} 1{Y_n(t) ∈ A} ).

It was proven in (Chaganthy and Sethuraman 1996, Thm. 3.4) that

p_n(a) = E_Q(L I) ∼ ( det ∇² log M(ϑ⋆) )^{−1/2} Π_{i∈D} (ϑ_i⋆)^{−1} (2πn)^{−D/2} e^{−n I(b⋆)},

while at the same time

E_Q(L² I) ∼ ( det ∇² log M(ϑ⋆) )^{−1/2} Π_{i∈D} (2ϑ_i⋆)^{−1} (2πn)^{−D/2} e^{−2n I(b⋆)}.

This immediately leads to Eq. 12.

References

Asmussen S, Glynn P (2007) Stochastic simulation. Springer, New York
Asmussen S, Kortschak D (2015) Error rates and improved algorithms for rare event simulation with heavy Weibull tails. Methodol Comput Appl Probab 17:441–461
Asmussen S, Nielsen H (1995) Ruin probabilities via local adjustment coefficients. J Appl Probab 33:736–
Bahadur R, Rao RR (1960) On deviations of the sample mean. Ann Math Stat 31:1015–1027
Blanchet J, Leder K, Glynn P (2008) Strongly efficient algorithms for light-tailed random walks: an old folk song sung to a faster new tune. In: L'Ecuyer P, Owen A (eds) Proceedings of the Eighth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2008). Springer, Berlin, pp 227–248
Blanchet J, Mandjes M (2009) Rare event simulation for queues. In: Rubino G, Tuffin B (eds) Rare event simulation using Monte Carlo methods. Wiley, Chichester, pp 87–124
Blom J, De Turck K, Mandjes M (2017) Refined large deviations asymptotics for Markov-modulated infinite-server systems. Eur J Oper Res 259:1036–1044
Blom J, Mandjes M (2013) A large-deviations analysis of Markov-modulated infinite-server queues. Oper Res Lett 41:220–225
Boxma O, Cahen E, Koops D, Mandjes M (2018) Linear networks: rare-event simulation and Markov modulation. arXiv:1705.10273
Budhiraja A, Nyquist P (2015) Large deviations for multidimensional state-dependent shot-noise processes. J Appl Probab 52:1097–1114
Cahen E, Mandjes M, Zwart B (2017) Rare event analysis and efficient simulation for a multi-dimensional ruin problem. Probab Eng Inform Sci 31:265–283
Chaganthy N, Sethuraman J (1996) Multidimensional strong large deviation theorems. J Stat Plan Inference 55:265–280
Dembo A, Zeitouni O (1998) Large deviations techniques and applications, 2nd edn. Springer, New York
Ganesh A, Macci C, Torrisi G (2007) A class of risk processes with reserve-dependent premium rate: sample path large deviations and importance sampling. Queueing Syst 55:83–94
Glasserman P, Juneja S (2008) Uniformly efficient importance sampling for the tail distribution of sums of random variables. Math Oper Res 33:36–50
Huang G, Jansen HM, Mandjes M, Spreij P, De Turck K (2016) Markov-modulated Ornstein–Uhlenbeck processes. Adv Appl Probab 48:235–254
Huang G, Mandjes M, Spreij P (2016) Large deviations for Markov-modulated diffusion processes with rapid switching. Stoch Process Appl 126:1785–1818
Juneja S, Shahabuddin P (2006) Rare event simulation techniques: an introduction and recent advances. In: Henderson S, Nelson B (eds) Handbook in operations research and management sciences, volume 13: Simulation, pp 291–350
Kella O, Stadje W (2002) Markov modulated linear fluid networks with Markov additive input. J Appl Probab 39:413–420
Kella O, Whitt W (1999) Linear stochastic fluid networks. J Appl Probab 36:244–260
Kuhn J, Mandjes M, Taimre T (2017) Exact asymptotics of sample-mean related rare-event probabilities. Probability in the Engineering and Informational Sciences, to appear
Magnus J, Neudecker H (1979) The commutation matrix: some properties and applications. Ann Stat 7:381–
Rabehasaina L (2006) Moments of a Markov-modulated irreducible network of fluid queues. J Appl Probab 43:510–522
Sezer A (2009) Importance sampling for a Markov modulated queuing network. Stoch Process Appl 119:491–517
