1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<209::AID-ASM284>3.0.CO;2-T
The authors consider an irreducible Markov renewal process (MRP) with a finite number of states. Their aim is to derive estimators for an MRP with a finite number of states censored either at a fixed time T or at the Nth jump. The estimators given here are of the Kaplan‐Meier type. The asymptotic properties of these estimators are given. The reliability of a semi‐Markov system model is examined numerically using these estimators, and a comparison is made with estimators obtained by Lagakos et al. © 1996 John Wiley & Sons, Ltd.
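The abstract refers to estimators of the Kaplan‐Meier type. As background, here is a minimal sketch of the classical Kaplan‐Meier product‐limit estimator for right‐censored lifetimes (function name and data are illustrative; the paper's MRP estimators are more general than this i.i.d. setting):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: observed times; events: 1 = event observed, 0 = censored."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events, dtype=int)[order]
    at_risk = len(t)
    surv = 1.0
    out = []                       # list of (time, S_hat(time))
    for ti in np.unique(t):
        mask = t == ti
        deaths = d[mask].sum()
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk   # product-limit update
        out.append((ti, surv))
        at_risk -= mask.sum()      # events and censorings leave the risk set
    return out

# Example: five units, one censored at time 3
est = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
```

The censored observation at time 3 leaves the survival curve unchanged at that point but still shrinks the risk set for later jumps.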
Bar‐Lev, S. K.; Parlar, M.; Perry, D.
1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<221::AID-ASM285>3.0.CO;2-B
In this paper we analyse a stochastic production/inventory problem with compound Poisson demand and state (i.e. inventory level) dependent production rates. Customers arrive according to a Poisson process, and the amount demanded by each customer is assumed to have a general distribution. When the inventory W(t) falls below a critical level m, production is started at a rate of r[W(t)], i.e. the production rate changes dynamically as a function of the inventory level. Production continues until a level M (> m) is reached. Excess demand is assumed to be lost. We identify a dam content process X that is a dual for the inventory level W and develop the stationary distribution for the X process. To achieve this we use tools from renewal and level crossing theories. The two‐sided (m, M) policy is optimized using the expected cost obtained from the stationary density of W and a conditional (on w) expected cost function for this process. For a special case, we obtain explicit results for all the relevant expressions. Numerical examples are provided for several test problems. © 1996 John Wiley & Sons, Ltd.
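The (m, M) policy described above can be illustrated with a Monte Carlo sketch. This is not the paper's analytical (level-crossing) treatment: it uses a constant production rate r (a special case of the state-dependent r[W(t)]) and exponential demand sizes, and all names and parameter values are illustrative:

```python
import random

def simulate_inventory(m, M, arrival_rate, mean_demand, r, horizon, seed=0):
    """Simulate the two-sided (m, M) policy: production at constant rate r
    starts when inventory drops below m and stops on reaching M;
    excess demand is lost. Returns the time-average inventory level."""
    rng = random.Random(seed)
    t, w = 0.0, M              # start full, production off
    producing = False
    area = 0.0                 # time-integral of the inventory path
    while t < horizon:
        dt = min(rng.expovariate(arrival_rate), horizon - t)
        if producing:
            t_to_M = (M - w) / r          # time until level M is hit
            if t_to_M <= dt:
                # trapezoid up to M, then flat at M until the next arrival
                area += (w + M) * t_to_M / 2 + M * (dt - t_to_M)
                w, producing = M, False
            else:
                area += (2 * w + r * dt) * dt / 2   # linear rise at rate r
                w += r * dt
        else:
            area += w * dt                # inventory flat between demands
        t += dt
        if t >= horizon:
            break
        demand = rng.expovariate(1.0 / mean_demand)
        w = max(w - demand, 0.0)          # excess demand is lost
        if w < m:
            producing = True
    return area / horizon

avg = simulate_inventory(m=2.0, M=10.0, arrival_rate=1.0,
                         mean_demand=1.0, r=3.0, horizon=10_000.0)
```

Such a simulation gives a quick numerical check on the stationary density of W that the paper derives analytically.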
1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<239::AID-ASM286>3.0.CO;2-K
In considering the strength of association of particular variables, we cannot ignore the effects of confounding factors that cause Simpson's paradox. Many methods for adjusting these effects have been proposed, and a great deal of effort has been devoted to statistical tests. Apart from the statistical tests, the aim of the present study is to examine the strength of association of two categorical variables without reference to any explicit confounding factors. In other words, our aim is to specify the conditions under which Simpson's paradox does not occur, where the idea of classifying the original universe into groups is adopted. Let us begin by focusing our attention on a 2 × 2 contingency table (cross‐classification table) and considering the association of X with Y, where X and Y denote dichotomous variables with classes A and B for X and classes + and − for Y. To examine the strength of association between these variables, the index k = q/p is used, where p denotes the proportion of A + in A and q denotes that of B + in B. Using the maximum and minimum values of the index k obtained by numerical calculation, the strength of association is examined. The results are discussed and examples given. © 1996 John Wiley & Sons, Ltd.
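The index k = q/p from the abstract is straightforward to compute from a 2 × 2 table. A minimal sketch (function name and counts are illustrative):

```python
def association_index(a_plus, a_total, b_plus, b_total):
    """k = q/p, where p is the proportion of '+' in class A
    and q is the proportion of '+' in class B."""
    p = a_plus / a_total
    q = b_plus / b_total
    return q / p

# Illustrative 2x2 table: class A has 60 '+' out of 100,
# class B has 30 '+' out of 100
k = association_index(60, 100, 30, 100)
```

Here k < 1 indicates that '+' is relatively more frequent in class A; k = 1 corresponds to no association. Simpson's paradox is precisely the situation where subgroup tables and the aggregate table yield values of k on opposite sides of 1.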
Gupta, Ramesh C.; Akman, Olcay
1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<255::AID-ASM287>3.0.CO;2-R
The coefficient of variation is an important parameter in many physical, biological and medical sciences. In this paper we study the estimation of the square of the coefficient of variation in a weighted inverse Gaussian model which is a mixture of the inverse Gaussian and the length biased inverse Gaussian distribution. This represents a rich family of distributions for different values of the mixing parameter and can be used for modelling various life testing situations. The maximum likelihood as well as the Bayes estimates of the parameters are obtained. These estimates are used to derive the estimates of the square of the coefficient of variation of the model under study. Several important data sets are analysed to illustrate the results. © 1996 John Wiley & Sons, Ltd.
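The paper derives maximum likelihood and Bayes estimates under the weighted inverse Gaussian model; as a baseline, the squared coefficient of variation itself is just variance over squared mean. A plain moment-based sketch (names and data illustrative, not the paper's model-based estimator):

```python
def squared_cv(data):
    """Plug-in estimate of the squared coefficient of variation,
    CV^2 = variance / mean^2 (sample variance with n - 1 divisor)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return var / mean ** 2

cv2 = squared_cv([1.0, 2.0, 3.0, 4.0])
```

The model-based estimators in the paper instead express CV^2 through the fitted inverse Gaussian parameters, which is more efficient when the model holds.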
1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<265::AID-ASM288>3.0.CO;2-N
Stochastic volatility models (SVMs), together with ARCH‐type models, represent an important framework for the analysis of financial time series data; but unlike the latter, the former cannot, at least from the statistical point of view, rely on exact inference, in particular with regard to maximum likelihood estimates for the parameters of interest. For SVMs, usually only approximate results can be obtained, unless particularly sophisticated estimation strategies such as exact non‐Gaussian filtering methods or simulation techniques are employed. In this paper we review SVMs and present a new characterization for them, called ‘generalized bilinear stochastic volatility’. © 1996 John Wiley & Sons, Ltd.
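For reference, the canonical log-normal SVM that this literature builds on can be simulated in a few lines. This is the standard discrete-time formulation, not the paper's 'generalized bilinear' characterization, and all names and parameter values are illustrative:

```python
import math
import random

def simulate_sv(n, phi=0.95, sigma_eta=0.2, mu=-1.0, seed=1):
    """Simulate the canonical log-normal stochastic volatility model:
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t
        y_t = exp(h_t / 2) * eps_t,
    with eta_t, eps_t independent standard normals.
    The latent log-volatility h_t is what makes exact likelihood
    evaluation intractable."""
    rng = random.Random(seed)
    h = mu                              # start at the stationary mean
    ys = []
    for _ in range(n):
        h = mu + phi * (h - mu) + sigma_eta * rng.gauss(0.0, 1.0)
        ys.append(math.exp(h / 2) * rng.gauss(0.0, 1.0))
    return ys

ys = simulate_sv(5000)
```

Because h_t is never observed, the likelihood of y_1,…,y_n is an n-dimensional integral over the latent path, which is why the abstract points to non-Gaussian filtering or simulation-based estimation.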
1996 Applied Stochastic Models and Data Analysis
doi: 10.1002/(SICI)1099-0747(199612)12:4<281::AID-ASM1293>3.0.CO;2-1