The stopped clock model

1 Introduction

Let $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ be stationary sequences of real random variables on the probability space $(\Omega,\mathcal{A},P)$, with $P(U_n\in\{0,1\})=1$. We define, for $n\ge 1$,

$$Y_n=\begin{cases}X_n, & U_n=1,\\ Y_{n-1}, & U_n=0.\end{cases}\qquad(1)$$

The sequence $\{Y_n\}_{n\ge 1}$ corresponds to a model in which failed records of $\{X_n\}_{n\in\mathbb{Z}}$ are replaced by the last available record, observed at some random past instant, if we interpret $n$ as time. Thus, if, for example, $\{U_1=1,U_2=0,U_3=1,U_4=0,U_5=0,U_6=0,U_7=1\}$ occurs, we will have $\{Y_1=X_1,\ Y_2=X_1,\ Y_3=X_3,\ Y_4=X_3,\ Y_5=X_3,\ Y_6=X_3,\ Y_7=X_7\}$. This constancy of some variables of $\{X_n\}_{n\in\mathbb{Z}}$ over random periods of time motivates the designation "stopped clock model" for the sequence $\{Y_n\}_{n\ge 1}$.

Failure models studied in the literature from the point of view of extremal behavior do not cover the stopped clock model (Hall and Hüsler [4]; Ferreira et al. [3] and references therein).

The model we will study can also be represented by $\{X_{N_n}\}_{n\ge 1}$, where $\{N_n\}_{n\ge 1}$ is a sequence of positive integer random variables representable by

$$N_n=nU_n+\sum_{n-1\ge i\ge 1}\left(\prod_{j=0}^{i-1}(1-U_{n-j})\right)U_{n-i}(n-i),\qquad n\ge 1.$$

We can also state a recursive formulation for $\{Y_n\}_{n\ge 1}$ through

$$Y_n=X_nU_n+\sum_{n-1\ge i\ge 1}\left(\prod_{j=0}^{i-1}(1-U_{n-j})\right)U_{n-i}X_{n-i}+\prod_{i\ge 0}(1-U_{n-i})\,Y_{n-\kappa},\qquad n\ge 1,\ \kappa\ge 1.$$

Under any of the three possible representations (failure model, random index sequence, or recursive sequence), we are not aware of any study of the extremal behavior of $\{Y_n\}_{n\ge 1}$ in the literature.

Our departure hypotheses about the base sequence $\{X_n\}_{n\in\mathbb{Z}}$ and the sequence $\{U_n\}_{n\in\mathbb{Z}}$ are as follows:

(1) $\{X_n\}_{n\in\mathbb{Z}}$ is a stationary sequence of almost surely distinct random variables and, without loss of generality, such that $F_{X_n}(x):=F(x)=\exp(-1/x)$, $x>0$, i.e., standard Fréchet distributed.

(2) $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ are independent.

(3) $\{U_n\}_{n\in\mathbb{Z}}$ is stationary and $p_{n_1,\ldots,n_s}(i_1,\ldots,i_s):=P(U_{n_1}=i_1,\ldots,U_{n_s}=i_s)$, $i_j\in\{0,1\}$, $j=1,\ldots,s$, is such that $p_{n,n+1,\ldots,n+\kappa-1}(0,\ldots,0)=0$ for some $\kappa\ge 1$.

The trivial case $\kappa=1$ corresponds to $Y_n=X_n$, $n\ge 1$. Hypothesis (3) means that we assume it is almost impossible to lose $\kappa$ or more consecutive values of $\{X_n\}_{n\in\mathbb{Z}}$. We remark that, throughout the paper, summations, products and intersections are taken to be nonexistent whenever the end of the counter is smaller than its beginning. We will also use the notation $a\vee b=\max(a,b)$.
Example 1.1. Consider an independent and identically distributed sequence $\{W_n\}_{n\in\mathbb{Z}}$ of real random variables on $(\Omega,\mathcal{A},P)$ and a Borel set $A$. Let $p=P(A_n)$, where $A_n=\{W_n\in A\}$, $n\in\mathbb{Z}$. The sequence of Bernoulli random variables

$$U_n=\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}}+\left(1-\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}}\right)\mathbf{1}_{\{A_n\}},\qquad n\in\mathbb{Z},\qquad(2)$$

where $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function, defined for some fixed $\kappa\ge 2$, is such that $p_{n,n+1,\ldots,n+\kappa-1}(0,\ldots,0)=0$, i.e., it is almost sure that, after $\kappa-1$ consecutive variables equal to zero, the next variable takes the value one. We also have

$$p_n(0)=P\left(\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}}=0,\ \mathbf{1}_{\{A_n\}}=0\right)=P\left(\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\overline{A}_n\right)=P(\overline{A}_n)-P\left(\bigcap_{i=0}^{\kappa-1}\overline{A}_{n-i}\right)=1-p-(1-p)^{\kappa},$$

since the independence of the random variables $W_n$ implies the independence of the events $A_n$, and, for $\kappa>2$,

$$\begin{aligned}p_{n-1,n}(0,0)&=P\left(\left(\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\overline{A}_n\right)\cap\left(\bigcup_{i=1}^{\kappa-1}A_{n-1-i}\cap\overline{A}_{n-1}\right)\right)\\ &=P(\overline{A}_n\cap\overline{A}_{n-1})-P\left(\overline{A}_n\cap\overline{A}_{n-1}\cap\left(\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\cup\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-1-i}\right)\right)\\ &=P(\overline{A}_n)P(\overline{A}_{n-1})-P\left(\bigcap_{i=0}^{\kappa-1}\overline{A}_{n-i}\right)=(1-p)^{2}-(1-p)^{\kappa},\end{aligned}$$

$$p_{n-1,n}(1,0)=p_n(0)-p_{n-1,n}(0,0)=p(1-p).$$

In Figure 1, we illustrate with a particular example based on independent standard Fréchet $\{X_n\}_{n\in\mathbb{Z}}$, $\{W_n\}_{n\in\mathbb{Z}}$ with standard exponential marginals, $A=\left]0,1/2\right]$, and thus $p=0.3935$, considering $\kappa=3$. Therefore, $p_{n,n+1,n+2}(0,0,0)=0$ and $p_{n,n+1,n+2}(1,0,0)=p_{n,n+1}(0,0)=p(1-p)^{2}$.

Figure 1: Sample path of 100 observations simulated from $\{Y_n\}$ defined in (1), based on independent standard Fréchet $\{X_n\}$ and on $\{U_n\}$ given in (2), where the random variables $\{W_n\}$ are standard exponential, $A=\left]0,1/2\right]$, and thus $p=0.3935$, considering $\kappa=3$.
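For illustration, the following Python sketch (ours, not part of the paper) simulates a path of $\{Y_n\}$ under the setting of Example 1.1 and Figure 1: independent standard Fréchet $\{X_n\}$, standard exponential $\{W_n\}$, $A=\left]0,1/2\right]$ and $\kappa=3$. The initialization of the first $\kappa-1$ values of $U_n$ and of $Y_1$ is a convention of the sketch, since the model is defined over $\mathbb{Z}$.

```python
import numpy as np

def simulate_stopped_clock(n, kappa=3, seed=1):
    """Simulate Y_1,...,Y_n from the stopped clock model of Example 1.1.

    X_n: independent standard Frechet; W_n: independent standard exponential;
    U_n as in (2) with A = (0, 1/2], so at most kappa-1 consecutive values
    of X can be lost."""
    rng = np.random.default_rng(seed)
    # standard Frechet by inverse transform: X = -1/log(V), V ~ Uniform(0,1)
    X = -1.0 / np.log(rng.uniform(size=n))
    W = rng.exponential(scale=1.0, size=n)
    A = (W <= 0.5)                      # event A_n = {W_n in (0, 1/2]}
    U = np.zeros(n, dtype=int)
    Y = np.empty(n)
    for t in range(n):
        # U_t = 1 if the previous kappa-1 events A all failed, otherwise U_t = 1{A_t}
        prev_all_failed = (t >= kappa - 1) and not A[t - kappa + 1:t].any()
        U[t] = 1 if prev_all_failed else int(A[t])
        # stopped clock: keep the last available record whenever U_t = 0
        Y[t] = X[t] if (U[t] == 1 or t == 0) else Y[t - 1]
    return X, U, Y

X, U, Y = simulate_stopped_clock(100)
print(U[:10], Y[:10])
```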
In the next section, we propose an estimator for the probabilities $p_{n,\ldots,n+s}(1,0,\ldots,0)$, $0\le s<\kappa-1$. In Section 3, we analyze the existence of the extremal index of $\{Y_n\}_{n\ge 1}$, an important measure of the tendency of its high values to occur in clusters (see, e.g., Kulik and Soulier [6] and references therein). A characterization of the tail dependence is presented in Section 4. The results are illustrated with an ARMAX sequence.

For the sake of simplicity, we will omit the range of $n$ in the sequence notation whenever there is no ambiguity, keeping the designation $\{Y_n\}$ for the stopped clock model and $\{X_n\}$ and $\{U_n\}$ for the sequences that generate it.

2 Inference on $\{U_n\}$

Assuming that $\{U_n\}$ is not observable, as well as the values of $\{X_n\}$ that are lost, it is of interest to retrieve information about these sequences from the available sequence $\{Y_n\}$. Since, for $n\ge 1$ and $s\ge 1$, we have

$$p_n(1)=E(\mathbf{1}_{\{Y_n\ne Y_{n-1}\}}),\qquad p_n(0)=E(\mathbf{1}_{\{Y_n=Y_{n-1}\}})\qquad\text{and}\qquad p_{n-s,n-s+1,\ldots,n}(1,0,\ldots,0)=E(\mathbf{1}_{\{Y_{n-s-1}\ne Y_{n-s}=Y_{n-s+1}=\ldots=Y_n\}}),$$

we propose to estimate these probabilities by the respective empirical counterparts of a random sample $(\hat{Y}_1,\hat{Y}_2,\ldots,\hat{Y}_m)$ from $\{Y_n\}$, i.e.,

$$\widehat{p}_n(1)=\frac{1}{m}\sum_{i=2}^{m}\mathbf{1}_{\{\hat{Y}_i\ne\hat{Y}_{i-1}\}},\qquad \widehat{p}_n(0)=\frac{1}{m}\sum_{i=2}^{m}\mathbf{1}_{\{\hat{Y}_i=\hat{Y}_{i-1}\}}\qquad\text{and}\qquad \widehat{p}_{n-s,n-s+1,\ldots,n}(1,0,\ldots,0)=\frac{1}{m}\sum_{i=s+2}^{m}\mathbf{1}_{\{\hat{Y}_{i-s-1}\ne\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=\ldots=\hat{Y}_i\}},$$

which are consistent by the weak law of large numbers. The value of $\kappa$ can be inferred from

$$\widehat{\kappa}=\bigvee_{s\ge 1}\ \bigvee_{i=s+2}^{m}s\,\mathbf{1}_{\{\hat{Y}_{i-s-1}\ne\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=\ldots=\hat{Y}_i\}}.$$
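A direct empirical implementation of these estimators might look as follows (a Python sketch; the function names and the bound s_max are ours). In the sketch, $\kappa$ is recovered as the length of the longest constant stretch of the observed series, i.e., the initial record plus its repeats, which is how we read the $\widehat{\kappa}$ formula above.

```python
import numpy as np

def estimate_U_probabilities(Y, s_max=3):
    """Empirical estimates of p_n(0), p_n(1) and p_{n-s,...,n}(1,0,...,0)
    from an observed sample Y of the stopped clock model."""
    Y = np.asarray(Y, dtype=float)
    m = len(Y)
    eq = (Y[1:] == Y[:-1])                  # 1{Y_i = Y_{i-1}}, i = 2,...,m
    p0_hat, p1_hat = eq.sum() / m, (~eq).sum() / m
    run_probs = {}
    for s in range(1, s_max + 1):
        # 1{Y_{i-s-1} != Y_{i-s} = ... = Y_i}; here i is a 0-based index
        count = 0
        for i in range(s + 1, m):
            if Y[i - s - 1] != Y[i - s] and np.all(Y[i - s:i + 1] == Y[i - s]):
                count += 1
        run_probs[s] = count / m
    return p0_hat, p1_hat, run_probs

def estimate_kappa(Y):
    """Estimate kappa as the longest run of equal consecutive values of Y."""
    Y = np.asarray(Y, dtype=float)
    longest = current = 1
    for i in range(1, len(Y)):
        current = current + 1 if Y[i] == Y[i - 1] else 1
        longest = max(longest, current)
    return longest

# example usage with the path simulated in the previous sketch:
# p0, p1, runs = estimate_U_probabilities(Y); kappa_hat = estimate_kappa(Y)
```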
In order to evaluate the finite-sample behavior of the aforementioned estimators, we simulated 1,000 independent replicas of sizes $m=100$, 1,000 and 5,000 from the model in Example 1.1. The absolute bias (abias) and root mean squared error (rmse) are presented in Table 1. The results reveal a good performance of the estimators, even for the smaller sample sizes. The parameter $\kappa$ was always estimated without error.

Table 1: Absolute bias (abias) and rmse obtained from 1,000 simulated samples of sizes $m=100$, 1,000 and 5,000 from the model in Example 1.1.

                                              abias     rmse
  $\widehat{p}_n(0)$                m = 100    0.0272    0.0335
                                    m = 1,000  0.0087    0.0108
                                    m = 5,000  0.0039    0.0048
  $\widehat{p}_{n-1,n}(1,0)$        m = 100    0.0199    0.0253
                                    m = 1,000  0.0065    0.0080
                                    m = 5,000  0.0030    0.0037
  $\widehat{p}_{n-2,n-1,n}(1,0,0)$  m = 100    0.0160    0.0200
                                    m = 1,000  0.0051    0.0064
                                    m = 5,000  0.0022    0.0028

3 The extremal index of $\{Y_n\}$

The sequence $\{Y_n\}$ is stationary because the sequences $\{X_n\}$ and $\{U_n\}$ are stationary and independent of each other. In addition, the common distribution of $Y_n$, $n\ge 1$, is also standard Fréchet, as is the common distribution of $X_n$, since

$$F_{Y_n}(x)=\sum_{i=1}^{\kappa-1}P(X_{n-i}\le x,U_{n-i}=1,U_{n-i+1}=0=\ldots=U_n)+P(X_n\le x)P(U_n=1)=F(x)\left(p_n(1)+\sum_{i=1}^{\kappa-1}p_{n-i,\ldots,n}(1,0,\ldots,0)\right)=F(x).$$

For any $\tau>0$, if we define $u_n\equiv u_n(\tau)=n/\tau$, $n\ge 1$, it turns out that $E\left(\sum_{i=1}^{n}\mathbf{1}_{\{Y_i>u_n\}}\right)=nP(Y_1>u_n)\to\tau$ and $nP(X_1>u_n)\to\tau$, as $n\to\infty$, so we refer to these levels $u_n$ as normalized levels for $\{Y_n\}$ and $\{X_n\}$.
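The claim that $Y_n$ keeps the standard Fréchet marginal can also be checked empirically. The short sketch below (ours) compares the empirical distribution function of a long simulated path with $F(x)=\exp(-1/x)$, reusing simulate_stopped_clock from the earlier sketch.

```python
import numpy as np

# assumes simulate_stopped_clock from the earlier sketch is in scope
X, U, Y = simulate_stopped_clock(200_000, kappa=3, seed=2)

# empirical c.d.f. of Y against the standard Frechet c.d.f. F(x) = exp(-1/x)
for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    ecdf = np.mean(Y <= x)
    print(f"x = {x:5.1f}   F_hat(x) = {ecdf:.4f}   exp(-1/x) = {np.exp(-1.0 / x):.4f}")
```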
In this section, in addition to the general assumptions about the model presented in Section 1, we start by assuming that $\{X_n\}$ and $\{U_n\}$ have dependence structures under which variables sufficiently far apart can be considered approximately independent. Concretely, we assume that $\{U_n\}$ satisfies the strong-mixing condition (Rosenblatt [9]) and $\{X_n\}$ satisfies condition $D(u_n)$ (Leadbetter [7]) for normalized levels $u_n$.

Proposition 3.1. If $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies condition $D(u_n)$, then $\{Y_n\}$ also satisfies condition $D(u_n)$.

Proof. For any choice of $p+q$ integers, $1\le i_1<\ldots<i_p<j_1<\ldots<j_q\le n$ such that $j_1\ge i_p+l$, we have

$$\left|P\left(\bigcap_{s=1}^{p}X_{i_s}\le u_n,\bigcap_{s=1}^{q}X_{j_s}\le u_n\right)-P\left(\bigcap_{s=1}^{p}X_{i_s}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s}\le u_n\right)\right|\le\alpha_{n,l},$$

with $\alpha_{n,l_n}\to 0$, as $n\to\infty$, for some sequence $l_n=o(n)$, and

$$|P(A\cap B)-P(A)P(B)|\le g(l),$$

with $g(l)\to 0$, as $l\to\infty$, where $A$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=1,\ldots,i_p\}$ and $B$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=j_1,j_1+1,\ldots\}$. Thus, for any choice of $p+q$ integers, $1\le i_1<\ldots<i_p<j_1<\ldots<j_q\le n$ such that $j_1\ge i_p+l+\kappa$, we will have

$$\left|P\left(\bigcap_{s=1}^{p}Y_{i_s}\le u_n,\bigcap_{s=1}^{q}Y_{j_s}\le u_n\right)-P\left(\bigcap_{s=1}^{p}Y_{i_s}\le u_n\right)P\left(\bigcap_{s=1}^{q}Y_{j_s}\le u_n\right)\right|\le\sum_{i_s-\kappa<i_s^{*}\le i_s,\ j_s-\kappa<j_s^{*}\le j_s}\left|P\left(\bigcap_{s=1}^{p}X_{i_s^{*}}\le u_n,\bigcap_{s=1}^{q}X_{j_s^{*}}\le u_n\right)P(A^{*}\cap B^{*})-P\left(\bigcap_{s=1}^{p}X_{i_s^{*}}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s^{*}}\le u_n\right)P(A^{*})P(B^{*})\right|,$$

where $A^{*}=\bigcap_{s=1}^{p}\{U_{i_s}=0=\ldots=U_{i_s^{*}+1},U_{i_s^{*}}=1\}$, $B^{*}=\bigcap_{s=1}^{q}\{U_{j_s}=0=\ldots=U_{j_s^{*}+1},U_{j_s^{*}}=1\}$ and $j_1^{*}>j_1-\kappa\ge i_p^{*}+l$. Therefore, the above summation is bounded above by

$$\sum_{i_s-\kappa<i_s^{*}\le i_s,\ j_s-\kappa<j_s^{*}\le j_s}\left(\left|P\left(\bigcap_{s=1}^{p}X_{i_s^{*}}\le u_n,\bigcap_{s=1}^{q}X_{j_s^{*}}\le u_n\right)-P\left(\bigcap_{s=1}^{p}X_{i_s^{*}}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s^{*}}\le u_n\right)\right|+|P(A^{*}\cap B^{*})-P(A^{*})P(B^{*})|\right)\le\sum_{i_s-\kappa<i_s^{*}\le i_s,\ j_s-\kappa<j_s^{*}\le j_s}(\alpha_{n,l}+g(l)),$$

which allows us to conclude that $D(u_n)$ holds for $\{Y_n\}$ with $l_n^{(Y)}=l_n+\kappa$. □

The tendency of the values of $\{Y_n\}$ to cluster above $u_n$ depends on the same tendency within $\{X_n\}$ and on the propensity of $\{U_n\}$ for consecutive null values. The clustering tendency can be assessed through the extremal index (Leadbetter [7]). More precisely, $\{X_n\}$ is said to have extremal index $\theta_X\in(0,1]$ if

$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\le n/\tau\right)=e^{-\theta_X\tau}.\qquad(3)$$

If $D(u_n)$ holds for $\{X_n\}$, we have

$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\le u_n\right)=\lim_{n\to\infty}P^{k_n}\left(\bigvee_{i=1}^{[n/k_n]}X_i\le u_n\right)$$

for any integer sequence $\{k_n\}$ such that

$$k_n\to\infty,\qquad k_n l_n/n\to 0\qquad\text{and}\qquad k_n\alpha_{n,l_n}\to 0,\qquad\text{as }n\to\infty.\qquad(4)$$

We can therefore say that

$$\theta_X\tau=\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}X_i>u_n\right).$$
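This block characterization also suggests a simple way of estimating an extremal index from a sample: split the series into blocks of length $r\approx n/k_n$ and compare the number of blocks containing at least one exceedance of $u$ with the total number of exceedances. The sketch below implements this standard blocks-type estimator (it is not taken from the paper) and can be applied to $\{X_n\}$ or $\{Y_n\}$.

```python
import numpy as np

def blocks_extremal_index(series, u, block_length):
    """Naive blocks estimator of the extremal index at threshold u:
    (# blocks with at least one exceedance of u) / (total # exceedances of u)."""
    x = np.asarray(series, dtype=float)
    n = (len(x) // block_length) * block_length   # drop the incomplete last block
    blocks = x[:n].reshape(-1, block_length)
    n_exceedances = np.sum(blocks > u)
    n_clusters = np.sum(blocks.max(axis=1) > u)
    return np.nan if n_exceedances == 0 else n_clusters / n_exceedances

# example usage with the simulated path Y from the earlier sketches:
# u = np.quantile(Y, 0.99); theta_hat = blocks_extremal_index(Y, u, block_length=50)
```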
Now we compare the local behavior of the sequences $\{X_n\}$ and $\{Y_n\}$, i.e., of $X_i$ and $Y_i$ for $i\in\{(j-1)[n/k_n]+1,\ldots,j[n/k_n]\}$, $j=1,\ldots,k_n$, with regard to the oscillations of their values around $u_n$. To this end, we will use the local dependence conditions $D^{(s)}(u_n)$. We say that $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge 2$, whenever

$$\lim_{n\to\infty}n\sum_{j=s}^{[n/k_n]}P(X_1>u_n,X_j\le u_n<X_{j+1})=0,$$

for some integer sequence $\{k_n\}$ satisfying (4). Condition $D^{(1)}(u_n)$ translates into

$$\lim_{n\to\infty}n\sum_{j=2}^{[n/k_n]}P(X_1>u_n,X_j>u_n)=0.$$

Observe that if $D^{(s)}(u_n)$ holds for some $s\ge 1$, then $D^{(m)}(u_n)$ also holds for $m>s$. Condition $D^{(1)}(u_n)$ is known as $D'(u_n)$ (Leadbetter et al. [8]) and corresponds to a unit extremal index, i.e., to the absence of clustering of extreme values. In particular, this is the case for independent variables. Even if $\{X_n\}$ satisfies $D'(u_n)$, this condition is not in general valid for $\{Y_n\}$. Observe that

$$n\sum_{j=2}^{[n/k_n]}P(Y_1>u_n,Y_j>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{[n/k_n]}\sum_{j^{*}=i\vee(j-\kappa+1)}^{j}P(X_i>u_n,X_{j^{*}}>u_n)\cdot p_{i,i+1,\ldots,1,j^{*},j^{*}+1,\ldots,j}(1,0,\ldots,0,1,0,\ldots,0).$$

For $i=1$ and $j=\kappa$, we have $j^{*}=1$ and the corresponding term becomes $nP(X_1>u_n)\to\tau>0$, as $n\to\infty$; this is why, in general, $\{Y_n\}$ does not satisfy $D'(u_n)$ even when $\{X_n\}$ satisfies it.

Proposition 3.2. The following statements hold:

(i) If $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge 2$, then $\{X_n\}$ satisfies $D^{(s)}(u_n)$.

(ii) If $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge 2$, then $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$.

(iii) If $\{X_n\}$ satisfies $D'(u_n)$, then $\{Y_n\}$ satisfies $D^{(2)}(u_n)$.

Proof. Consider $r_n=[n/k_n]$.
We have that

$$\begin{aligned}n\sum_{j=s}^{r_n}P(Y_1>u_n,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}P(X_i>u_n,Y_j\le u_n<X_{j+1},U_i=1,U_{i+1}=0=\ldots=U_1,U_{j+1}=1)\\ &=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}\sum_{j^{*}=(i+1)\vee(j-\kappa+1)}^{j}P(X_i>u_n,X_{j^{*}}\le u_n<X_{j+1})\cdot p_{i,i+1,\ldots,1,j^{*},j^{*}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1).\end{aligned}\qquad(5)$$

Since $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, with $s\ge 2$, the first summation in (5) converges to zero as $n\to\infty$, and hence all the terms in the last summations also converge to zero. In particular, for $i=1$ and $j^{*}=j$, we obtain $n\sum_{j=s}^{r_n}P(X_1>u_n,X_j\le u_n<X_{j+1})\to 0$, as $n\to\infty$, which proves (i).

Conversely, writing the first summation in (5) with $j$ starting at $s+\kappa-1$, we have

$$\begin{aligned}n\sum_{j=s+\kappa-1}^{r_n}P(Y_1>u_n,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j^{*}=j-\kappa+1}^{j}P(X_i>u_n,X_{j^{*}}\le u_n<X_{j+1})\cdot p_{i,i+1,\ldots,1,j^{*},j^{*}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1)\\ &=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j^{*}=j-\kappa+1}^{j}\sum_{i^{*}=j^{*}}^{j}P(X_i>u_n,X_{j^{*}}\le u_n,\ldots,X_{i^{*}}\le u_n,X_{j+1}>u_n)\cdot p_{i,i+1,\ldots,1,j^{*},j^{*}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1),\end{aligned}\qquad(6)$$

where the smallest distance between $i$ and $i^{*}$ corresponds to the case $i=1$ and $i^{*}=j^{*}=s$.
Therefore, if $\{X_n\}$ satisfies $D^{(s)}(u_n)$ for some $s\ge 2$, then each term of (6) converges to zero as $n\to\infty$, and thus $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$, proving (ii).

As for (iii), observe that

$$\begin{aligned}n\sum_{j=2}^{r_n}P(Y_1>u_n,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,Y_j\le u_n<X_{j+1},U_i=1,U_{i+1}=0=\ldots=U_1,U_{j+1}=1)\\ &\le\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,X_{j+1}>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_1>u_n,X_{j-i+2}>u_n)\\ &\le\kappa\,n\sum_{j=2}^{r_n}P(X_1>u_n,X_j>u_n).\end{aligned}\qquad(7)$$

If $\{X_n\}$ satisfies $D'(u_n)$, then (7) converges to zero as $n\to\infty$, and $D^{(2)}(u_n)$ holds for $\{Y_n\}$. □

Under conditions $D(u_n)$ and $D^{(s)}(u_n)$ with $s\ge 2$, we can also compute the extremal index $\theta_X$ defined in (3) by (Chernick et al. [1], Corollary 1.3)

$$\theta_X=\lim_{n\to\infty}P(X_2\le u_n,\ldots,X_s\le u_n\mid X_1>u_n).\qquad(8)$$

If $\{X_n\}$ and $\{Y_n\}$ have extremal indices $\theta_X$ and $\theta_Y$, respectively, then $\theta_Y\le\theta_X$, since $P(\bigvee_{i=1}^{n}X_i\le n/\tau)\le P(\bigvee_{i=1}^{n}Y_i\le n/\tau)$. This is what one intuitively expects, recalling that the possible repetition of the variables $X_n$ leads to larger clusters of values above $u_n$. In the following result, we establish a relationship between $\theta_X$ and $\theta_Y$.

Proposition 3.3. Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D^{(s)}(u_n)$, $s\ge 2$, for normalized levels $u_n\equiv u_n(\tau)$. If $\{X_n\}$ has extremal index $\theta_X$, then $\{Y_n\}$ has extremal index $\theta_Y$ given by

$$\theta_Y=\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,$$

where

$$\beta_j=\lim_{n\to\infty}P(X_{s+j}>u_n\mid X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s).$$

Proof. By Proposition 3.1, $\{Y_n\}$ also satisfies condition $D(u_n)$.
Thus, we have

$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}Y_i\le u_n\right)=\exp\left\{-\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)\right\}$$

and

$$\begin{aligned}\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)&=\lim_{n\to\infty}k_nP\left(Y_1\le u_n,\bigvee_{i=1}^{[n/k_n]}\{Y_i>u_n\}\right)\\ &=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\le u_n<Y_{i+1}\}\right)\\ &=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\le u_n<X_{i+1},U_{i+1}=1\}\right)\\ &=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_{i-j}\le u_n<X_{i+1},U_{i-j}=1,U_{i-j+1}=0=\ldots=U_i,U_{i+1}=1\}\right)\\ &=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_i\le u_n<X_{i+j+1},U_i=1,U_{i+1}=0=\ldots=U_{i+j},U_{i+j+1}=1\}\right)\\ &=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\le u_n,\ldots,X_i\le u_n<X_{i+1},X_{i+j+1}>u_n)\cdot p_{i,i+1,\ldots,i+j,i+j+1}(1,0,\ldots,0,1)\\ &=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\le u_n,\ldots,X_i\le u_n<X_{i+1},X_{i+j+1}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1),\end{aligned}\qquad(9)$$

since $\{X_n\}$ satisfies condition $D^{(s)}(u_n)$ for some $s\ge 2$.
The stationarity of $\{X_n\}$ leads to

$$\begin{aligned}&\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\le u_n,\ldots,X_i\le u_n<X_{i+1},X_{i+j+1}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\ &\quad=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s,X_{s+j}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\ &\quad=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s,X_{s+j}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\ &\quad=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)P(X_{s+j}>u_n\mid X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\ &\quad=\tau\,\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,\end{aligned}$$

where the last step follows from (8). □

Observe that

$$\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)=p_n(1)=P(U_n=1),$$

and thus $\theta_Y\le\theta_X p_n(1)\le\theta_X$, as expected.

Proposition 3.4. Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D'(u_n)$, for normalized levels $u_n\equiv u_n(\tau)$.
Then $\{Y_n\}$ has extremal index $\theta_Y$ given by $\theta_Y=p_{1,2}(1,1)$.

Proof. By condition $D'(u_n)$, the only term to consider in (9) corresponds to $j=0$, and we obtain

$$\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}P(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\,p_{1,2}(1,1)=\lim_{n\to\infty}nP(X_s>u_n)\,p_{1,2}(1,1)=\tau\,p_{1,2}(1,1).\qquad\square$$

Observe that we can also obtain this result by applying Proposition 3.2 (iii) and computing directly $\tau\theta_Y=\lim_{n\to\infty}nP(Y_1\le u_n<Y_2)$. More precisely, $\{Y_n\}$ satisfies $D^{(2)}(u_n)$, and by applying (8), we obtain

$$\begin{aligned}\tau\theta_Y&=\lim_{n\to\infty}nP(Y_1\le u_n<Y_2)=\lim_{n\to\infty}nP(Y_1\le u_n<X_2,U_2=1)\\ &=\lim_{n\to\infty}nP\left(\bigcup_{j=0}^{\kappa-1}\{X_{1-j}\le u_n<X_2,U_{1-j}=1,U_{1-j+1}=0=\ldots=U_1,U_2=1\}\right)\\ &=\lim_{n\to\infty}n\sum_{j=0}^{\kappa-1}P(X_{2-\kappa}\le u_n,\ldots,X_{1-j}\le u_n<X_{2-j},X_2>u_n)\cdot p_{1-j,1-j+1,\ldots,1,2}(1,0,\ldots,0,1)\\ &=\lim_{n\to\infty}nP(X_1\le u_n<X_2)\,p_{1,2}(1,1)=\lim_{n\to\infty}nP(X_2>u_n)\,p_{1,2}(1,1)=\tau\,p_{1,2}(1,1).\end{aligned}$$

The same result can also be seen as a particular case of Proposition 3.3 where, taking $s=1$, we have $\beta_j=0$ for $j\ne 0$, and we obtain $\theta_Y=\theta_X\beta_0\,p_{1,2}(1,1)=p_{1,2}(1,1)$, since $\beta_0=1$ and, under $D'(u_n)$, $\theta_X=1$.

Example 3.1. Consider $\{Y_n\}$ such that $\{X_n\}$ is an ARMAX sequence, i.e., $X_n=\phi X_{n-1}\vee(1-\phi)Z_n$, $n\ge 1$, where $\{Z_n\}$ is an independent sequence of random variables with standard Fréchet marginal distribution and $Z_n$ is independent of $X_{n-1}$ for each $n$.
The sequence $\{X_n\}$ also has standard Fréchet marginal distribution, satisfies condition $D^{(2)}(u_n)$ and has extremal index $\theta_X=1-\phi$ (see, e.g., Ferreira and Ferreira [2] and references therein). Observe that, for normalized levels $u_n\equiv n/\tau$, $\tau>0$, we have

$$\begin{aligned}\beta_1&=\lim_{n\to\infty}P(X_3>u_n\mid X_1\le u_n<X_2)\\ &=\lim_{n\to\infty}\frac{P(X_1\le u_n)-P(X_1\le u_n,X_2\le u_n)-P(X_1\le u_n,X_3\le u_n)+P(X_1\le u_n,X_2\le u_n,X_3\le u_n)}{P(X_1\le u_n)-P(X_1\le u_n,X_2\le u_n)}\\ &=\lim_{n\to\infty}\frac{1-\frac{\tau}{n}-\left(1-\frac{\tau}{n}(2-\phi)\right)-\left(1-\frac{\tau}{n}(2-\phi^{2})\right)+1-\frac{\tau}{n}(3-2\phi)}{1-\frac{\tau}{n}-\left(1-\frac{\tau}{n}(2-\phi)\right)}=\phi.\end{aligned}$$

Analogous calculations lead to $\beta_2=\phi^{2}$. Considering $\kappa=3$, we have

$$\theta_Y=(1-\phi)\left(p_{1,2}(1,1)+\phi\,p_{1,2,3}(1,0,1)+\phi^{2}p_{1,2,3,4}(1,0,0,1)\right).$$
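Combining Example 3.1 with the $\{U_n\}$ of Example 1.1 (so that $\kappa=3$), the formula above can be compared with a simulation-based blocks estimate of $\theta_Y$. The sketch below (ours, with illustrative parameter values) estimates the probabilities $p_{1,2}(1,1)$, $p_{1,2,3}(1,0,1)$ and $p_{1,2,3,4}(1,0,0,1)$ empirically from the simulated $U$'s; the two numbers should be of the same order, not identical, since both the threshold and the block length are finite.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi, kappa = 500_000, 0.5, 3

# ARMAX sequence: X_t = max(phi * X_{t-1}, (1 - phi) * Z_t), Z_t standard Frechet
Z = -1.0 / np.log(rng.uniform(size=n))
X = np.empty(n)
X[0] = Z[0]
for t in range(1, n):
    X[t] = max(phi * X[t - 1], (1.0 - phi) * Z[t])

# U_t as in Example 1.1 with standard exponential W_t and A = (0, 1/2], kappa = 3
W = rng.exponential(size=n)
A = (W <= 0.5)
U = np.zeros(n, dtype=int)
for t in range(n):
    prev_all_failed = (t >= kappa - 1) and not A[t - kappa + 1:t].any()
    U[t] = 1 if prev_all_failed else int(A[t])

# stopped clock sequence
Y = np.empty(n)
Y[0] = X[0]
for t in range(1, n):
    Y[t] = X[t] if U[t] == 1 else Y[t - 1]

def joint_prob(pattern):
    """Empirical probability of a consecutive 0/1 pattern in U."""
    windows = np.lib.stride_tricks.sliding_window_view(U, len(pattern))
    return np.mean(np.all(windows == np.asarray(pattern), axis=1))

# theta_Y from Example 3.1 (kappa = 3): (1-phi)(p(1,1) + phi p(1,0,1) + phi^2 p(1,0,0,1))
theta_formula = (1 - phi) * (joint_prob([1, 1])
                             + phi * joint_prob([1, 0, 1])
                             + phi ** 2 * joint_prob([1, 0, 0, 1]))

# naive blocks estimate of theta_Y at a high threshold
u = np.quantile(Y, 0.999)
r = 1000                                              # block length
blocks = Y[: (n // r) * r].reshape(-1, r)
theta_blocks = np.sum(blocks.max(axis=1) > u) / np.sum(blocks > u)

print(f"theta_Y: formula = {theta_formula:.3f}, blocks estimate = {theta_blocks:.3f}")
```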
The observed sequence is $\{Y_n\}$, and therefore results that allow retrieving information about the extremal behavior of the initial sequence $\{X_n\}$, subject to the failures determined by $\{U_n\}$, may be of interest. If we assume that $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, then $\{X_n\}$ also satisfies $D^{(s)}(u_n)$ by Proposition 3.2 (i), and hence

$$\begin{aligned}\tau\theta_X&=\lim_{n\to\infty}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\\ &=\lim_{n\to\infty}nP(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid U_1=\ldots=U_s=1)\\ &=\lim_{n\to\infty}nP(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid Y_0\ne Y_1\ne\ldots\ne Y_s).\end{aligned}$$

Thereby, we can write

$$\theta_X=\lim_{n\to\infty}\frac{P(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid Y_0\ne Y_1\ne\ldots\ne Y_s)}{P(Y_1>u_n)}.$$

4 Tail dependence

Now we analyze the effect of this failure mechanism on the dependence between two variables, $Y_n$ and $Y_{n+m}$, $m\ge 1$. More precisely, we evaluate the lag-$m$ tail dependence coefficient

$$\lambda(Y_{n+m}\mid Y_n)=\lim_{x\to\infty}P(Y_{n+m}>x\mid Y_n>x),$$

which incorporates the tail dependence between $X_n$ and $X_{n+j}$, with $j$ regulated by the maximum number of failures $\kappa-1$ and by the relation between $m$ and $\kappa$. In particular, independent variables have null tail dependence coefficients. For $m=1$, we obtain the tail dependence coefficient of Joe [5]. For results related to lag-$m$ tail dependence in the literature, see, e.g., Zhang [10,11].

Proposition 4.1. The sequence $\{Y_n\}$ has lag-$m$ tail dependence coefficient, with $m\ge 1$,

$$\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{*}=0}^{\kappa-1}\lambda(X_{n+i+i^{*}}\mid X_n)\cdot p_{1,2,\ldots,i^{*}+1,i^{*}+1+i,i^{*}+2+i,\ldots,i^{*}+1+m}(1,0,\ldots,0,1,0,\ldots,0),\qquad(10)$$

provided all the coefficients $\lambda(X_{n+i+i^{*}}\mid X_n)$ exist.

Proof. Observe that

$$\begin{aligned}P(Y_n>x,Y_{n+m}>x)&=P(Y_n>x,U_{n+1}=0=\ldots=U_{n+m})\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}P(Y_n>x,X_{n+i}>x,U_{n+i}=1,U_{n+i+1}=0=\ldots=U_{n+m})\\ &=\sum_{i=0}^{\kappa-1-m}P(X_{n-i}>x)\,p_{n-i,n-i+1,\ldots,n+m}(1,0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}\\ &\quad+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{*}=0}^{\kappa-1}P(X_{n-i^{*}}>x,X_{n+i}>x)\,p_{n-i^{*},n-i^{*}+1,\ldots,n,n+i,n+i+1,\ldots,n+m}(1,0,\ldots,0,1,0,\ldots,0)\end{aligned}$$

and $\sum_{i=0}^{\kappa-1-m}p_{1,2,\ldots,m+i+1}(1,0,\ldots,0)=p_{1,\ldots,m}(0,\ldots,0)$. □

Taking $m=1$ in (10), we obtain the tail dependence coefficient

$$\lambda(Y_{n+1}\mid Y_n)=p_n(0)+\sum_{i=0}^{\kappa-1}\lambda(X_{n+1+i}\mid X_n)\,p_{1,2,\ldots,i+1,i+2}(1,0,\ldots,0,1),$$

provided all the coefficients $\lambda(X_{n+1+i}\mid X_n)$ exist. If $\{X_n\}$ is lag-$m^{*}$ tail independent for every integer $m^{*}\ge 1\vee(m-\kappa+1)$, then $\lambda(X_{n+i+i^{*}}\mid X_n)=0$ in the second term of (10), and thus $\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}$ and $\{Y_n\}$ is lag-$m$ tail independent for every integer $m\ge\kappa$.
Example 4.1. Consider again $\{Y_n\}$ based on the ARMAX sequence $\{X_n\}$ of Example 3.1. The sequence $\{X_n\}$ has lag-$m$ tail dependence coefficient $\lambda(X_{n+m}\mid X_n)=\phi^{m}$ (Ferreira and Ferreira [2]), and thus

$$\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{*}=0}^{\kappa-1}\phi^{i+i^{*}}\,p_{1,2,\ldots,i^{*}+1,i^{*}+1+i,i^{*}+2+i,\ldots,i^{*}+1+m}(1,0,\ldots,0,1,0,\ldots,0).$$
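As a rough empirical counterpart of the lag-$m$ coefficient, one can estimate $P(Y_{n+m}>x\mid Y_n>x)$ at a high empirical quantile of a simulated path; the sketch below (ours) reuses the $Y$ series generated in the previous sketch.

```python
import numpy as np

def lag_m_tail_dependence(series, m, q=0.995):
    """Empirical estimate of lambda(Y_{n+m} | Y_n) = lim P(Y_{n+m} > x | Y_n > x),
    using the empirical quantile of order q as the (finite) threshold x."""
    y = np.asarray(series, dtype=float)
    x = np.quantile(y, q)
    lead, lag = y[m:] > x, y[:-m] > x
    n_cond = lag.sum()
    return np.nan if n_cond == 0 else np.sum(lead & lag) / n_cond

# example usage with the Y path simulated above (ARMAX X, kappa = 3):
# for m in (1, 2, 3, 5):
#     print(m, lag_m_tail_dependence(Y, m))
```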

The stopped clock model

Dependence Modeling , Volume 10 (1): 10 – Jan 1, 2022

Loading next page...
 
/lp/de-gruyter/the-stopped-clock-model-4YkgqG00Zk
Publisher
de Gruyter
Copyright
© 2022 Helena Ferreira and Marta Ferreira, published by De Gruyter
ISSN
2300-2298
eISSN
2300-2298
DOI
10.1515/demo-2022-0101
Publisher site
See Article on Publisher Site

Abstract

1IntroductionLet {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}and {Un}n∈Z{\left\{{U}_{n}\right\}}_{n\in {\mathbb{Z}}}be stationary sequences of real random variables on the probability space (Ω,A,P)\left(\Omega ,{\mathcal{A}},P)and P(Un∈{0,1})=1P\left({U}_{n}\in \left\{0,1\right\})=1. We define, for n≥1n\ge 1, (1)Yn=Xn,Un=1Yn−1,Un=0.{Y}_{n}=\left\{\phantom{\rule[-1.15em]{}{0ex}}\begin{array}{ll}{X}_{n},& {U}_{n}=1\\ {Y}_{n-1},& {U}_{n}=0.\end{array}\right.Sequence {Yn}n≥1{\left\{{Y}_{n}\right\}}_{n\ge 1}corresponds to a model of failures on records of {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}replaced by the last available record, which occurs in some random past instant, if we interpret nnas time. Thus, if, for example, it occurs {U1=1,U2=0,U3=1,U4=0,U5=0,U6=0,U7=1}\left\{{U}_{1}=1,{U}_{2}=0,{U}_{3}=1,{U}_{4}=0,{U}_{5}=0,{U}_{6}=0,{U}_{7}=1\right\}, we will have {Y1=X1,Y2=X1,Y3=X3,Y4=X3,Y5=X3,Y6=X3,Y7=X7}\{{Y}_{1}={X}_{1},{Y}_{2}={X}_{1},{Y}_{3}={X}_{3},{Y}_{4}={X}_{3},{Y}_{5}={X}_{3},{Y}_{6}={X}_{3},{Y}_{7}={X}_{7}\}. This constancy of some variables of {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}for random periods of time motivates the designation of “stopped clock model” for sequence {Yn}n≥1{\left\{{Y}_{n}\right\}}_{n\ge 1}.Failure models studied in the literature from the point of view of extremal behavior do not consider the stopped clock model (Hall and Hüsler [4]; Ferreira et al. [3] and references therein).The model we will study can also be represented by {XNn}n≥1{\left\{{X}_{{N}_{n}}\right\}}_{n\ge 1}, where {Nn}n≥1{\left\{{N}_{n}\right\}}_{n\ge 1}is a sequence of positive integer variables representable by Nn=nUn+∑n−1≥i≥1∏j=0i−1(1−Un−j)Un−i(n−i),n≥1.{N}_{n}=n{U}_{n}+\sum _{n-1\ge i\ge 1}\left(\mathop{\prod }\limits_{j=0}^{i-1}\left(1-{U}_{n-j})\right){U}_{n-i}\left(n-i),\hspace{1em}n\ge 1.We can also state a recursive formulation for {Yn}n≥1{\left\{{Y}_{n}\right\}}_{n\ge 1}through Yn=XnUn+∑n−1≥i≥1∏j=0i−1(1−Un−j)Un−iXn−i+∏i≥0(1−Un−i)Yn−κ,n≥1,κ≥1.{Y}_{n}={X}_{n}{U}_{n}+\sum _{n-1\ge i\ge 1}\left(\mathop{\prod }\limits_{j=0}^{i-1}\left(1-{U}_{n-j})\right){U}_{n-i}{X}_{n-i}+\prod _{i\ge 0}\left(1-{U}_{n-i}){Y}_{n-\kappa },\hspace{1em}n\ge 1,\hspace{0.33em}\kappa \ge 1.Under any of the three possible representations (failures model, random index sequence or recursive sequence), we are not aware of an extremal behavior study of {Yn}n≥1{\left\{{Y}_{n}\right\}}_{n\ge 1}in the literature.Our departure hypotheses about the base sequence {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}and about sequence {Un}n∈Z{\left\{{U}_{n}\right\}}_{n\in {\mathbb{Z}}}are as follows: (1){Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}is a stationary sequence of random variables almost surely distinct and, without loss of generality, such that FXn(x)≔F(x)=exp(−1/x){F}_{{X}_{n}}\left(x):= F\left(x)=\exp \left(-1\hspace{0.1em}\text{/}\hspace{0.1em}x), x>0x\gt 0, i.e., standard Fréchet distributed.(2){Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}and {Un}n∈Z{\left\{{U}_{n}\right\}}_{n\in {\mathbb{Z}}}are independent.(3){Un}n∈Z{\left\{{U}_{n}\right\}}_{n\in {\mathbb{Z}}}is stationary and pn1,…,ns(i1,…,is)≔P(Un1=i1,…,Uns=is){p}_{{n}_{1},\ldots ,{n}_{s}}\hspace{0.33em}\left({i}_{1},\ldots ,{i}_{s}):= P\left({U}_{{n}_{1}}={i}_{1},\ldots ,{U}_{{n}_{s}}={i}_{s}), ij∈{0,1}{i}_{j}\in \left\{0,1\right\}, j=1,…,sj=1,\ldots ,s, is such that pn,n+1,…,n+κ−1(0,…,0)=0{p}_{n,n+1,\ldots ,n+\kappa -1}\hspace{0.33em}\left(0,\ldots ,0)=0, for some κ≥1\kappa \ge 1.The trivial case κ=1\kappa 
=1corresponds to Yn=Xn{Y}_{n}={X}_{n}, n≥1n\ge 1. Hypothesis (3) means that we are assuming that it is almost impossible to lose κ\kappa or more consecutive values of {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}. We remark that, along the paper, the summations, produts and intersections is considered to be nonexistent whenever the end of the counter is less than the beginning. We will also use notation a∨b=max(a,b)a\vee b=\max \left(a,b).Example 1.1Consider an independent and identically distributed sequence {Wn}n∈Z{\left\{{W}_{n}\right\}}_{n\in {\mathbb{Z}}}of real random variables on (Ω,A,P)\left(\Omega ,{\mathcal{A}},P)and a Borelian set AA. Let p=P(An)p=P\left({A}_{n}), where An={Wn∈A}{A}_{n}=\left\{{W}_{n}\in A\right\}, n∈Zn\in {\mathbb{Z}}. The sequence of Bernoulli random variables (2)Un=1{⋂i=1κ−1A¯n−i}+(1−1{⋂i=1κ−1A¯n−i})1{An},n∈Z,{U}_{n}={{\bf{1}}}_{\{{\bigcap }_{i=1}^{\kappa -1}{\overline{A}}_{n-i}\}}+\left(1-{{\bf{1}}}_{\{{\bigcap }_{i=1}^{\kappa -1}{\overline{A}}_{n-i}\}}){{\bf{1}}}_{\left\{{A}_{n}\right\}},\hspace{1em}n\in {\mathbb{Z}},where 1{⋅}{{\bf{1}}}_{\left\{\cdot \right\}}denotes the indicator function, defined for some fixed κ≥2\kappa \ge 2, is such that pn,n+1,…,n+κ−1(0,…,0)=0{p}_{n,n+1,\ldots ,n+\kappa -1}\hspace{0.08em}\left(0,\ldots ,0)=0, i.e., it is almost sure that after κ−1\kappa -1consecutive variables equal to zero, the next variable takes value one. We also have pn(0)=P(1{⋂i=1κ−1A¯n−i}=0,1{An}=0)=P⋃i=1κ−1An−i∩A¯n=P(A¯n)−P⋂i=0κ−1A¯n−i=1−p−(1−p)κ,\begin{array}{rcl}{p}_{n}\left(0)& =& P({{\bf{1}}}_{\{{\bigcap }_{i=1}^{\kappa -1}{\overline{A}}_{n-i}\}}=0,{{\bf{1}}}_{\left\{{A}_{n}\right\}}=0)=P\left(\mathop{\bigcup }\limits_{i=1}^{\kappa -1}{A}_{n-i}\cap {\overline{A}}_{n}\right)\\ & =& P\left({\overline{A}}_{n})-P\left(\mathop{\bigcap }\limits_{i=0}^{\kappa -1}{\overline{A}}_{n-i}\right)=1-p-{\left(1-p)}^{\kappa },\end{array}since the independence of random variables Wn{W}_{n}implies the independence of events An{A}_{n}, and, for κ>2\kappa \gt 2, pn−1,n(0,0)=P⋃i=1κ−1An−i∩A¯n∩⋃i=1κ−1An−1−i∩A¯n−1=P(A¯n∩A¯n−1)−PA¯n∩A¯n−1∩⋂i=1κ−1A¯n−i∪⋂i=1κ−1A¯n−1−i=P(A¯n)P(A¯n−1)−P⋂i=0κ−1A¯n−i=(1−p)2−(1−p)κ,\begin{array}{rcl}{p}_{n-1,n}\left(0,0)& =& P\left(\left(\mathop{\bigcup }\limits_{i=1}^{\kappa -1}{A}_{n-i}\cap {\overline{A}}_{n}\right)\cap \left(\mathop{\bigcup }\limits_{i=1}^{\kappa -1}{A}_{n-1-i}\cap {\overline{A}}_{n-1}\right)\right)\\ & =& P\left({\overline{A}}_{n}\cap {\overline{A}}_{n-1})-P\left({\overline{A}}_{n}\cap {\overline{A}}_{n-1}\cap \left(\mathop{\bigcap }\limits_{i=1}^{\kappa -1}{\overline{A}}_{n-i}\cup \mathop{\bigcap }\limits_{i=1}^{\kappa -1}{\overline{A}}_{n-1-i}\right)\right)\\ & =& P\left({\overline{A}}_{n})P\left({\overline{A}}_{n-1})-P\left(\mathop{\bigcap }\limits_{i=0}^{\kappa -1}{\overline{A}}_{n-i}\right)={\left(1-p)}^{2}-{\left(1-p)}^{\kappa },\end{array}pn−1,n(1,0)=pn(0)−pn−1,n(0,0)=p(1−p){p}_{n-1,n}\left(1,0)={p}_{n}\left(0)-{p}_{n-1,n}\left(0,0)=p\left(1-p).In Figure 1, we illustrate with a particular example based on independent standard Fréchet {Xn}n∈Z{\left\{{X}_{n}\right\}}_{n\in {\mathbb{Z}}}, {Wn}n∈Z{\left\{{W}_{n}\right\}}_{n\in {\mathbb{Z}}}with standard exponential marginals, A=]0,1/2]A=]0,1\hspace{0.1em}\text{/}\hspace{0.1em}2]and thus, p=0.3935p=0.3935and considering κ=3\kappa =3. 
Therefore, pn,n+1,n+2(0,0,0)=0{p}_{n,n+1,n+2}\left(0,0,0)=0, pn,n+1,n+2(1,0,0)=pn,n+1(0,0)=p(1−p)2{p}_{n,n+1,n+2}\left(1,0,0)={p}_{n,n+1}\left(0,0)=p{\left(1-p)}^{2}.Figure 1Sample path of 100 observations simulated from {Yn}\left\{{Y}_{n}\right\}defined in (1) based on independent standard Fréchet {Xn}\left\{{X}_{n}\right\}and on {Un}\left\{{U}_{n}\right\}given in (2) where we take random variables {Wn}\left\{{W}_{n}\right\}standard exponential distributed, A=]0,1/2]A=]0,1\hspace{0.1em}\text{/}\hspace{0.1em}2], and thus, p=0.3935p=0.3935and considering κ=3\kappa =3.In the next section, we propose an estimator for probabilities pn,…,n+s(1,0,…,0){p}_{n,\ldots ,n+s}\left(1,0,\ldots ,0), 0≤s<κ−10\le s\lt \kappa -1. In Section 3, we analyze the existence of the extremal index for {Yn}n≥1{\left\{{Y}_{n}\right\}}_{n\ge 1}, an important measure to evaluate the tendency to occur clusters of its high values (see, e.g., Kulik and Solier [6], and references therein). A characterization of the tail dependence will be presented in Section 4. The results are illustrated with an ARMAX sequence.For the sake of simplicity, we will omit the variation of nnin sequence notation whenever there is no doubt, taking into account that we will keep the designation {Yn}\left\{{Y}_{n}\right\}for the stopped clock model and {Xn}\left\{{X}_{n}\right\}and {Un}\left\{{U}_{n}\right\}for the sequences that generate it.2Inference on {Un}\left\{{U}_{n}\right\}Assuming that {Un}\left\{{U}_{n}\right\}is not observable, as well as the values of {Xn}\left\{{X}_{n}\right\}that are lost, it is of interest to retrieve information about these sequences from the available sequence {Yn}\left\{{Y}_{n}\right\}.Since, for n≥1n\ge 1and s≥1s\ge 1, we have pn(1)=E(1{Yn≠Yn−1}),pn(0)=E(1{Yn=Yn−1})andpn−s,n−s+1,…,n(1,0,…,0)=E(1{Yn−s−1≠Yn−s=Yn−s+1=…=Yn}),{p}_{n}\left(1)=E({{\bf{1}}}_{\left\{{Y}_{n}\ne {Y}_{n-1}\right\}}),\hspace{1em}{p}_{n}\left(0)=E({{\bf{1}}}_{\left\{{Y}_{n}={Y}_{n-1}\right\}})\hspace{1em}{\rm{and}}\hspace{1em}{p}_{n-s,n-s+1,\ldots ,n}\left(1,0,\ldots ,0)=E({{\bf{1}}}_{\left\{{Y}_{n-s-1}\ne {Y}_{n-s}={Y}_{n-s+1}=\ldots ={Y}_{n}\right\}}),we propose to estimate these probabilities from the respective empirical counterparts of a random sample (Yˆ1,Yˆ2,…,Yˆm)\left({\hat{Y}}_{1},{\hat{Y}}_{2},\ldots ,{\hat{Y}}_{m})from {Yn}\left\{{Y}_{n}\right\}, i.e., p^n(1)=1m∑i=2m1{Yˆi≠Yˆi−1},p^n(0)=1m∑i=2m1{Yˆi=Yˆi−1}andp^n−s,n−s+1,…,n(1,0,…,0)=1m∑i=s+2m1{Yˆi−s−1≠Yˆi−s=Yˆi−s+1=…=Yˆi},\begin{array}{l}{\widehat{p}}_{n}\left(1)=\frac{1}{m}\mathop{\displaystyle \sum }\limits_{i=2}^{m}{{\bf{1}}}_{\left\{{\hat{Y}}_{i}\ne {\hat{Y}}_{i-1}\right\}},\hspace{1em}{\widehat{p}}_{n}\left(0)=\frac{1}{m}\mathop{\displaystyle \sum }\limits_{i=2}^{m}{{\bf{1}}}_{\left\{{\hat{Y}}_{i}={\hat{Y}}_{i-1}\right\}}\hspace{1em}{\rm{and}}\\ {\widehat{p}}_{n-s,n-s+1,\ldots ,n}\left(1,0,\ldots ,0)=\frac{1}{m}\mathop{\displaystyle \sum }\limits_{i=s+2}^{m}{{\bf{1}}}_{\left\{{\hat{Y}}_{i-s-1}\ne {\hat{Y}}_{i-s}={\hat{Y}}_{i-s+1}=\ldots ={\hat{Y}}_{i}\right\}},\end{array}which are consistent by the weak law of large numbers. 
The value of κ\kappa can be inferred from κ^=⋁i=s+2m⋁s≥1s1{Yˆi−s−1≠Yˆi−s=Yˆi−s+1=…=Yˆi}.\widehat{\kappa }=\underset{i=s+2}{\overset{m}{\bigvee }}\hspace{0.33em}\mathop{\bigvee }\limits_{s\ge 1}s\hspace{0.33em}{{\bf{1}}}_{\left\{{\hat{Y}}_{i-s-1}\ne {\hat{Y}}_{i-s}={\hat{Y}}_{i-s+1}=\ldots ={\hat{Y}}_{i}\right\}}.In order to evaluate the finite sample behavior of the aforementioned estimators, we have simulated 1,000 independent replicas with size m=100m=100, 1,000, 5,000 of the model in Example 1.1. The absolute bias (abias) and root mean squared error (rmse) are presented in Table 1. The results reveal a good performance of the estimators, even in the case of smaller sample sizes. Parameter κ\kappa was always estimated with no error.Table 1The absolute bias (abias) and rmse obtained from 1,000 simulated samples with size m=100m=100, 1,000, 5,000 of the model in Example 1.1abiasrmsep^n(0){\widehat{p}}_{n}\left(0)m=100m=1000.02720.0335m=1,000m=\hspace{0.1em}\text{1,000}\hspace{0.1em}0.00870.0108m=5,000m=\hspace{0.1em}\text{5,000}\hspace{0.1em}0.00390.0048p^n−1,n(1,0){\widehat{p}}_{n-1,n}\left(1,0)m=100m=1000.01990.0253m=1,000m=\hspace{0.1em}\text{1,000}\hspace{0.1em}0.00650.0080m=5,000m=\hspace{0.1em}\text{5,000}\hspace{0.1em}0.00300.0037p^n−2,n−1,n(1,0,0){\widehat{p}}_{n-2,n-1,n}\left(1,0,0)m=100m=1000.01600.0200m=1,000m=\hspace{0.1em}\text{1,000}\hspace{0.1em}0.00510.0064m=5,000m=\hspace{0.1em}\text{5,000}\hspace{0.1em}0.00220.00283The extremal index of {Yn}\left\{{Y}_{n}\right\}The sequence {Yn}\left\{{Y}_{n}\right\}is stationary because the sequences {Xn}\left\{{X}_{n}\right\}and {Un}\left\{{U}_{n}\right\}are stationary and independent from each other. In addition, the common distribution for Yn{Y}_{n}, n≥1n\ge 1, is also standard Fréchet, as is the common distribution for Xn{X}_{n}, since FYn(x)=∑i=1κ−1P(Xn−i≤x,Un−i=1,Un−i+1=0=…=Un)+P(Xn≤x)P(Un=1)=F(x)pn(1)+∑i=1κ−1pn−i,…,n(1,0,…,0)=F(x).\begin{array}{rcl}{F}_{{Y}_{n}}\left(x)& =& \mathop{\displaystyle \sum }\limits_{i=1}^{\kappa -1}P\left({X}_{n-i}\le x,{U}_{n-i}=1,{U}_{n-i+1}=0=\ldots ={U}_{n})+P\left({X}_{n}\le x)P\left({U}_{n}=1)\\ & =& F\left(x)\left({p}_{n}\left(1)+\mathop{\displaystyle \sum }\limits_{i=1}^{\kappa -1}{p}_{n-i,\ldots ,n}\left(1,0,\ldots ,0)\right)=F\left(x).\end{array}For any τ>0\tau \gt 0, if we define un≡un(τ)=n/τ{u}_{n}\equiv {u}_{n}\left(\tau )=n\hspace{0.1em}\text{/}\hspace{0.1em}\tau , n≥1n\ge 1, it turns out that E∑i=1n1{Yi>un}=nP(Y1>un)⟶n→∞τE\left({\sum }_{i=1}^{n}{{\bf{1}}}_{\left\{{Y}_{i}\gt {u}_{n}\right\}}\right)=nP\left({Y}_{1}\gt {u}_{n})\mathop{\longrightarrow }\limits_{n\to \infty }\tau and nP(X1>un)⟶n→∞τnP\left({X}_{1}\gt {u}_{n})\mathop{\longrightarrow }\limits_{n\to \infty }\tau , so we refer to these levels un{u}_{n}by normalized levels for {Yn}\left\{{Y}_{n}\right\}and {Xn}\left\{{X}_{n}\right\}.In this section, in addition to the general assumptions about the model presented in Section 1, we start by assuming that {Xn}\left\{{X}_{n}\right\}and {Un}\left\{{U}_{n}\right\}present dependency structures such that variables sufficiently apart can be considered approximately independent. 
3 The extremal index of $\{Y_n\}$

The sequence $\{Y_n\}$ is stationary because the sequences $\{X_n\}$ and $\{U_n\}$ are stationary and independent of each other. In addition, the common distribution of $Y_n$, $n\ge1$, is also standard Fréchet, as is the common distribution of $X_n$, since
$$F_{Y_n}(x)=\sum_{i=1}^{\kappa-1}P(X_{n-i}\le x,\,U_{n-i}=1,\,U_{n-i+1}=0=\ldots=U_n)+P(X_n\le x)P(U_n=1)=F(x)\left(p_n(1)+\sum_{i=1}^{\kappa-1}p_{n-i,\ldots,n}(1,0,\ldots,0)\right)=F(x).$$

For any $\tau>0$, if we define $u_n\equiv u_n(\tau)=n/\tau$, $n\ge1$, it turns out that $E\left(\sum_{i=1}^{n}\mathbf{1}_{\{Y_i>u_n\}}\right)=nP(Y_1>u_n)\to\tau$, as $n\to\infty$, and $nP(X_1>u_n)\to\tau$, so we refer to these levels $u_n$ as normalized levels for $\{Y_n\}$ and $\{X_n\}$.

In this section, in addition to the general assumptions about the model presented in Section 1, we start by assuming that $\{X_n\}$ and $\{U_n\}$ have dependence structures under which variables sufficiently far apart can be considered approximately independent. Concretely, we assume that $\{U_n\}$ satisfies the strong-mixing condition (Rosenblatt [9]) and $\{X_n\}$ satisfies condition $D(u_n)$ (Leadbetter [7]) for normalized levels $u_n$.

Proposition 3.1 If $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies condition $D(u_n)$, then $\{Y_n\}$ also satisfies condition $D(u_n)$.

Proof. For any choice of $p+q$ integers, $1\le i_1<\ldots<i_p<j_1<\ldots<j_q\le n$ such that $j_1\ge i_p+l$, we have
$$\left|P\left(\bigcap_{s=1}^{p}X_{i_s}\le u_n,\bigcap_{s=1}^{q}X_{j_s}\le u_n\right)-P\left(\bigcap_{s=1}^{p}X_{i_s}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s}\le u_n\right)\right|\le\alpha_{n,l},$$
with $\alpha_{n,l_n}\to0$, as $n\to\infty$, for some sequence $l_n=o(n)$, and
$$|P(A\cap B)-P(A)P(B)|\le g(l),$$
with $g(l)\to0$, as $l\to\infty$, where $A$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=1,\ldots,i_p\}$ and $B$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=j_1,j_1+1,\ldots\}$. Thus, for any choice of $p+q$ integers, $1\le i_1<\ldots<i_p<j_1<\ldots<j_q\le n$ such that $j_1\ge i_p+l+\kappa$, we will have
$$\left|P\left(\bigcap_{s=1}^{p}Y_{i_s}\le u_n,\bigcap_{s=1}^{q}Y_{j_s}\le u_n\right)-P\left(\bigcap_{s=1}^{p}Y_{i_s}\le u_n\right)P\left(\bigcap_{s=1}^{q}Y_{j_s}\le u_n\right)\right|\le\sum_{\substack{i_s-\kappa<i_s^{\ast}\le i_s\\ j_s-\kappa<j_s^{\ast}\le j_s}}\left|P\left(\bigcap_{s=1}^{p}X_{i_s^{\ast}}\le u_n,\bigcap_{s=1}^{q}X_{j_s^{\ast}}\le u_n\right)P(A^{\ast}\cap B^{\ast})-P\left(\bigcap_{s=1}^{p}X_{i_s^{\ast}}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s^{\ast}}\le u_n\right)P(A^{\ast})P(B^{\ast})\right|,$$
where $A^{\ast}=\bigcap_{s=1}^{p}\{U_{i_s}=0=\ldots=U_{i_s^{\ast}+1},\,U_{i_s^{\ast}}=1\}$, $B^{\ast}=\bigcap_{s=1}^{q}\{U_{j_s}=0=\ldots=U_{j_s^{\ast}+1},\,U_{j_s^{\ast}}=1\}$ and $j_1^{\ast}>j_1-\kappa\ge i_p^{\ast}+l$. Therefore, the aforementioned summation is bounded above by
$$\sum_{\substack{i_s-\kappa<i_s^{\ast}\le i_s\\ j_s-\kappa<j_s^{\ast}\le j_s}}\left(\left|P\left(\bigcap_{s=1}^{p}X_{i_s^{\ast}}\le u_n,\bigcap_{s=1}^{q}X_{j_s^{\ast}}\le u_n\right)-P\left(\bigcap_{s=1}^{p}X_{i_s^{\ast}}\le u_n\right)P\left(\bigcap_{s=1}^{q}X_{j_s^{\ast}}\le u_n\right)\right|+|P(A^{\ast}\cap B^{\ast})-P(A^{\ast})P(B^{\ast})|\right)\le\sum_{\substack{i_s-\kappa<i_s^{\ast}\le i_s\\ j_s-\kappa<j_s^{\ast}\le j_s}}(\alpha_{n,l}+g(l)),$$
which allows us to conclude that $D(u_n)$ holds for $\{Y_n\}$ with $l_n^{(Y)}=l_n+\kappa$. □

The tendency for clustering of values of $\{Y_n\}$ above $u_n$ depends on the same tendency within $\{X_n\}$ and on the propensity of $\{U_n\}$ for consecutive null values. The clustering tendency can be assessed through the extremal index (Leadbetter [7]). More precisely, $\{X_n\}$ is said to have extremal index $\theta_X\in(0,1]$ if
$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\le n/\tau\right)=e^{-\theta_X\tau}.\tag{3}$$
If $D(u_n)$ holds for $\{X_n\}$, we have
$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\le u_n\right)=\lim_{n\to\infty}P^{k_n}\left(\bigvee_{i=1}^{[n/k_n]}X_i\le u_n\right)$$
for any sequence of integers $\{k_n\}$ such that
$$k_n\to\infty,\qquad k_n l_n/n\to0\qquad\text{and}\qquad k_n\alpha_{n,l_n}\to0,\qquad\text{as }n\to\infty.\tag{4}$$
We can therefore write
$$\theta_X\tau=\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}X_i>u_n\right).$$
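This last expression suggests a crude numerical reading of (3), sketched below (this is not an estimator proposed in the paper): split a long simulated path into blocks of length $b$, take $u=b/\tau$ as the normalized level, and read off $\theta$ as $-\log$ of the proportion of blocks whose maximum does not exceed $u$, divided by $\tau$. We try it on the ARMAX sequence $X_n=\phi X_{n-1}\vee(1-\phi)Z_n$ used in Example 3.1 below, for which $\theta_X=1-\phi$.

```python
import numpy as np

rng = np.random.default_rng(1)

def armax_path(n, phi):
    """ARMAX path X_i = max(phi*X_{i-1}, (1-phi)*Z_i) with iid standard Frechet Z_i."""
    z = 1.0 / -np.log(rng.random(n))
    x = np.empty(n)
    x[0] = z[0]                               # initialization effect washes out quickly
    for i in range(1, n):
        x[i] = max(phi * x[i - 1], (1 - phi) * z[i])
    return x

def theta_blocks(series, b, tau=1.0):
    """Read theta from (3): block maxima of length b compared with the normalized level b/tau."""
    u = b / tau
    block_max = series[: (len(series) // b) * b].reshape(-1, b).max(axis=1)
    return -np.log(np.mean(block_max <= u)) / tau

phi = 0.5
x = armax_path(1_000_000, phi)
print("block estimate:", round(theta_blocks(x, b=10_000), 3), "  theoretical 1 - phi:", 1 - phi)
```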
Now we compare the local behavior of the sequences $\{X_n\}$ and $\{Y_n\}$, i.e., of $X_i$ and $Y_i$ for $i\in\{(j-1)[n/k_n]+1,\ldots,j[n/k_n]\}$, $j=1,\ldots,k_n$, with regard to the oscillations of their values around $u_n$. To this end, we use the local dependence conditions $D^{(s)}(u_n)$. We say that $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge2$, whenever
$$\lim_{n\to\infty}n\sum_{j=s}^{[n/k_n]}P(X_1>u_n,\,X_j\le u_n<X_{j+1})=0,$$
for some sequence of integers $\{k_n\}$ satisfying (4). Condition $D^{(1)}(u_n)$ translates into
$$\lim_{n\to\infty}n\sum_{j=2}^{[n/k_n]}P(X_1>u_n,\,X_j>u_n)=0.$$
Observe that if $D^{(s)}(u_n)$ holds for some $s\ge1$, then $D^{(m)}(u_n)$ also holds for every $m>s$. Condition $D^{(1)}(u_n)$ is known as $D'(u_n)$ (Leadbetter et al. [8]) and corresponds to a unit extremal index, i.e., to the absence of clustering of extreme values; this is the case, in particular, of independent variables. Even if $\{X_n\}$ satisfies $D'(u_n)$, this condition is not, in general, valid for $\{Y_n\}$. Observe that
$$n\sum_{j=2}^{[n/k_n]}P(Y_1>u_n,\,Y_j>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{[n/k_n]}\sum_{j^{\ast}=i\vee(j-\kappa+1)}^{j}P(X_i>u_n,\,X_{j^{\ast}}>u_n)\cdot p_{i,\ldots,1,j^{\ast},j^{\ast}+1,\ldots,j}(1,0,\ldots,0,1,0,\ldots,0).$$
For $i=1$ and $j=\kappa$, we have $j^{\ast}=1$ and the corresponding term becomes $nP(X_1>u_n)\to\tau>0$, as $n\to\infty$, which is why, in general, $\{Y_n\}$ does not satisfy $D'(u_n)$ even if $\{X_n\}$ satisfies it.

Proposition 3.2 The following statements hold:
(i) If $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge2$, then $\{X_n\}$ satisfies $D^{(s)}(u_n)$.
(ii) If $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\ge2$, then $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$.
(iii) If $\{X_n\}$ satisfies $D'(u_n)$, then $\{Y_n\}$ satisfies $D^{(2)}(u_n)$.

Proof. Consider $r_n=[n/k_n]$.
We have
$$\begin{aligned}
n\sum_{j=s}^{r_n}P(Y_1>u_n,\,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}P(X_i>u_n,\,Y_j\le u_n<X_{j+1},\,U_i=1,\,U_{i+1}=0=\ldots=U_1,\,U_{j+1}=1)\\
&=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}\sum_{j^{\ast}=(i+1)\vee(j-\kappa+1)}^{j}P(X_i>u_n,\,X_{j^{\ast}}\le u_n<X_{j+1})\cdot p_{i,i+1,\ldots,1,j^{\ast},j^{\ast}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1).
\end{aligned}\tag{5}$$
If $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, with $s\ge2$, the first summation in (5) converges to zero, as $n\to\infty$, and thus all the terms in the last summations also converge to zero. In particular, for $i=1$ and $j^{\ast}=j$, we obtain $n\sum_{j=s}^{r_n}P(X_1>u_n,\,X_j\le u_n<X_{j+1})\to0$, as $n\to\infty$, which proves (i).

Conversely, writing the first summation in (5) with $j$ starting at $s+\kappa-1$, we have
$$\begin{aligned}
n\sum_{j=s+\kappa-1}^{r_n}P(Y_1>u_n,\,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j^{\ast}=j-\kappa+1}^{j}P(X_i>u_n,\,X_{j^{\ast}}\le u_n<X_{j+1})\cdot p_{i,i+1,\ldots,1,j^{\ast},j^{\ast}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1)\\
&=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j^{\ast}=j-\kappa+1}^{j}\sum_{i^{\ast}=j^{\ast}}^{j}P(X_i>u_n,\,X_{j^{\ast}}\le u_n,\ldots,X_{i^{\ast}}\le u_n,\,X_{j+1}>u_n)\cdot p_{i,i+1,\ldots,1,j^{\ast},j^{\ast}+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1),
\end{aligned}\tag{6}$$
where the smallest distance between $i$ and $i^{\ast}$ corresponds to the case $i=1$ and $i^{\ast}=j^{\ast}=s$.
Therefore, if $\{X_n\}$ satisfies $D^{(s)}(u_n)$ for some $s\ge2$, then each term of (6) converges to zero, as $n\to\infty$, and thus $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$, proving (ii).

As for (iii), observe that
$$\begin{aligned}
n\sum_{j=2}^{r_n}P(Y_1>u_n,\,Y_j\le u_n<Y_{j+1})&=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,\,Y_j\le u_n<X_{j+1},\,U_i=1,\,U_{i+1}=0=\ldots=U_1,\,U_{j+1}=1)\\
&\le\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,\,X_{j+1}>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_1>u_n,\,X_{j-i+2}>u_n)\\
&\le\kappa\,n\sum_{j=2}^{r_n}P(X_1>u_n,\,X_j>u_n).
\end{aligned}\tag{7}$$
If $\{X_n\}$ satisfies $D'(u_n)$, then (7) converges to zero, as $n\to\infty$, and $D^{(2)}(u_n)$ holds for $\{Y_n\}$. □

Under conditions $D(u_n)$ and $D^{(s)}(u_n)$ with $s\ge2$, we can also compute the extremal index $\theta_X$ defined in (3) by (Chernick et al. [1], Corollary 1.3)
$$\theta_X=\lim_{n\to\infty}P(X_2\le u_n,\ldots,X_s\le u_n\mid X_1>u_n).\tag{8}$$
If $\{X_n\}$ and $\{Y_n\}$ have extremal indices $\theta_X$ and $\theta_Y$, respectively, then $\theta_Y\le\theta_X$, since $P\left(\bigvee_{i=1}^{n}X_i\le n/\tau\right)\le P\left(\bigvee_{i=1}^{n}Y_i\le n/\tau\right)$. This agrees with intuition: the possible repetition of the variables $X_n$ leads to larger clusters of values above $u_n$. In the following result, we establish a relationship between $\theta_X$ and $\theta_Y$.

Proposition 3.3 Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D^{(s)}(u_n)$, $s\ge2$, for normalized levels $u_n\equiv u_n(\tau)$. If $\{X_n\}$ has extremal index $\theta_X$, then $\{Y_n\}$ has extremal index $\theta_Y$ given by
$$\theta_Y=\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,$$
where
$$\beta_j=\lim_{n\to\infty}P(X_{s+j}>u_n\mid X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s).$$

Proof. By Proposition 3.1, $\{Y_n\}$ also satisfies condition $D(u_n)$.
Thus, we have
$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}Y_i\le u_n\right)=\exp\left\{-\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)\right\}$$
and
$$\begin{aligned}
\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)&=\lim_{n\to\infty}k_nP\left(Y_1\le u_n,\bigvee_{i=1}^{[n/k_n]}\{Y_i>u_n\}\right)\\
&=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\le u_n<Y_{i+1}\}\right)\\
&=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\le u_n<X_{i+1},\,U_{i+1}=1\}\right)\\
&=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_{i-j}\le u_n<X_{i+1},\,U_{i-j}=1,\,U_{i-j+1}=0=\ldots=U_i,\,U_{i+1}=1\}\right)\\
&=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_i\le u_n<X_{i+j+1},\,U_i=1,\,U_{i+1}=0=\ldots=U_{i+j},\,U_{i+j+1}=1\}\right)\\
&=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\le u_n,\ldots,X_i\le u_n<X_{i+1},\,X_{i+j+1}>u_n)\cdot p_{i,i+1,\ldots,i+j,i+j+1}(1,0,\ldots,0,1)\\
&=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\le u_n,\ldots,X_i\le u_n<X_{i+1},\,X_{i+j+1}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1),
\end{aligned}\tag{9}$$
since $\{X_n\}$ satisfies condition $D^{(s)}(u_n)$ for some $s\ge2$.
The stationarity of $\{X_n\}$ leads to
$$\begin{aligned}
&\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\le u_n,\ldots,X_i\le u_n<X_{i+1},\,X_{i+j+1}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\
&\quad=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s,\,X_{s+j}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\
&\quad=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s,\,X_{s+j}>u_n)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\
&\quad=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)P(X_{s+j}>u_n\mid X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\cdot p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\\
&\quad=\tau\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,
\end{aligned}$$
where the last step follows from (8). □

Observe that
$$\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)=p_n(1)=P(U_n=1),$$
since, by stationarity, the events "$U_n=1$ preceded by a run of exactly $j$ zeros and then a value 1", $j=0,\ldots,\kappa-1$, are disjoint and their union is $\{U_n=1\}$ (the run of zeros preceding any instant has length at most $\kappa-1$). Thus, $\theta_Y\le\theta_X p_n(1)\le\theta_X$, as expected.

Proposition 3.4 Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D'(u_n)$, for normalized levels $u_n\equiv u_n(\tau)$.
Then, $\{Y_n\}$ has extremal index $\theta_Y$ given by $\theta_Y=p_{1,2}(1,1)$.

Proof. By condition $D'(u_n)$, the only term to consider in (9) corresponds to $j=0$, and we obtain
$$\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}P(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\,p_{1,2}(1,1)=\lim_{n\to\infty}nP(X_s>u_n)\,p_{1,2}(1,1)=\tau p_{1,2}(1,1).\qquad\square$$

Observe that we can obtain the aforementioned result by applying Proposition 3.2 (iii) and calculating directly $\tau\theta_Y=\lim_{n\to\infty}nP(Y_1\le u_n<Y_2)$. More precisely, $\{Y_n\}$ satisfies $D^{(2)}(u_n)$ and, applying (8), we obtain
$$\begin{aligned}
\tau\theta_Y&=\lim_{n\to\infty}nP(Y_1\le u_n<Y_2)\\
&=\lim_{n\to\infty}nP(Y_1\le u_n<X_2,\,U_2=1)\\
&=\lim_{n\to\infty}nP\left(\bigcup_{j=0}^{\kappa-1}\{X_{1-j}\le u_n<X_2,\,U_{1-j}=1,\,U_{1-j+1}=0=\ldots=U_1,\,U_2=1\}\right)\\
&=\lim_{n\to\infty}n\sum_{j=0}^{\kappa-1}P(X_{2-\kappa}\le u_n,\ldots,X_{1-j}\le u_n<X_{2-j},\,X_2>u_n)\cdot p_{1-j,1-j+1,\ldots,1,2}(1,0,\ldots,0,1)\\
&=\lim_{n\to\infty}nP(X_1\le u_n<X_2)\,p_{1,2}(1,1)\\
&=\lim_{n\to\infty}nP(X_2>u_n)\,p_{1,2}(1,1)=\tau p_{1,2}(1,1).
\end{aligned}$$
The same result can also be seen as a particular case of Proposition 3.3: taking $s=1$, we have $\beta_j=0$ for $j\ne0$, and we obtain $\theta_Y=\theta_X\beta_0\,p_{1,2}(1,1)=p_{1,2}(1,1)$, since $\beta_0=1$ and, under $D'(u_n)$, $\theta_X=1$.

Example 3.1 Consider $\{Y_n\}$ such that $\{X_n\}$ is an ARMAX sequence, i.e., $X_n=\phi X_{n-1}\vee(1-\phi)Z_n$, $n\ge1$, where $\{Z_n\}$ is an independent sequence of random variables with standard Fréchet marginal distribution, with $Z_n$ independent of $X_{n-1}$.
It can be shown that $\{X_n\}$ also has standard Fréchet marginal distribution, satisfies condition $D^{(2)}(u_n)$, and has extremal index $\theta_X=1-\phi$ (see, e.g., Ferreira and Ferreira [2] and references therein). Observe that, for normalized levels $u_n\equiv n/\tau$, $\tau>0$, we have
$$\begin{aligned}
\beta_1&=\lim_{n\to\infty}P(X_3>u_n\mid X_1\le u_n<X_2)\\
&=\lim_{n\to\infty}\frac{P(X_1\le u_n)-P(X_1\le u_n,X_2\le u_n)-P(X_1\le u_n,X_3\le u_n)+P(X_1\le u_n,X_2\le u_n,X_3\le u_n)}{P(X_1\le u_n)-P(X_1\le u_n,X_2\le u_n)}\\
&=\lim_{n\to\infty}\frac{1-\frac{\tau}{n}-\left(1-\frac{\tau}{n}(2-\phi)\right)-\left(1-\frac{\tau}{n}(2-\phi^{2})\right)+1-\frac{\tau}{n}(3-2\phi)}{1-\frac{\tau}{n}-\left(1-\frac{\tau}{n}(2-\phi)\right)}=\phi.
\end{aligned}$$
Analogous calculations lead to $\beta_2=\phi^{2}$. Considering $\kappa=3$, we have
$$\theta_Y=(1-\phi)\left(p_{1,2}(1,1)+\phi\,p_{1,2,3}(1,0,1)+\phi^{2}\,p_{1,2,3,4}(1,0,0,1)\right).$$

The observed sequence is $\{Y_n\}$, and therefore results that allow retrieving information about the extreme behavior of the initial sequence $\{X_n\}$, subject to the failures determined by $\{U_n\}$, may be of interest. If we assume that $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, then $\{X_n\}$ also satisfies $D^{(s)}(u_n)$ by Proposition 3.2 (i), and thus
$$\begin{aligned}
\tau\theta_X&=\lim_{n\to\infty}nP(X_1\le u_n,\ldots,X_{s-1}\le u_n<X_s)\\
&=\lim_{n\to\infty}nP(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid U_1=\ldots=U_s=1)\\
&=\lim_{n\to\infty}nP(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid Y_0\ne Y_1\ne\ldots\ne Y_s).
\end{aligned}$$
Hence, we can write
$$\theta_X=\lim_{n\to\infty}\frac{P(Y_1\le u_n,\ldots,Y_{s-1}\le u_n<Y_s\mid Y_0\ne Y_1\ne\ldots\ne Y_s)}{P(Y_1>u_n)}.$$
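The following Monte Carlo sketch checks the Example 3.1 formula for $\theta_Y$ numerically. The 0/1 sequence $U$ used below (Bernoulli indicators with a forced 1 after two consecutive zeros, so that $\kappa=3$) is our own construction, not the paper's Example 1.1, and the block-maxima reading of (3) is only a crude estimate, so the two printed values should agree only roughly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi, p, kappa = 1_000_000, 0.5, 0.4, 3

# ARMAX base sequence X_i = max(phi*X_{i-1}, (1-phi)*Z_i) with iid standard Frechet Z_i
z = 1.0 / -np.log(rng.random(n))
x = np.empty(n)
x[0] = z[0]
for i in range(1, n):
    x[i] = max(phi * x[i - 1], (1 - phi) * z[i])

# hypothetical U (our assumption): Bernoulli(p), with a forced 1 whenever the
# previous kappa-1 values are 0, so that kappa consecutive zeros never occur
u = (rng.random(n) < p).astype(int)
u[0] = 1
for i in range(1, n):
    if i >= kappa - 1 and u[i - kappa + 1:i].sum() == 0:
        u[i] = 1

# stopped clock sequence as in definition (1)
y = np.empty(n)
y[0] = x[0]
for i in range(1, n):
    y[i] = x[i] if u[i] == 1 else y[i - 1]

def pat(*bits):
    """Empirical probability of observing the 0/1 pattern `bits` at consecutive instants of U."""
    w = np.lib.stride_tricks.sliding_window_view(u, len(bits))
    return np.mean(np.all(w == np.array(bits), axis=1))

formula = (1 - phi) * (pat(1, 1) + phi * pat(1, 0, 1) + phi ** 2 * pat(1, 0, 0, 1))

# crude block-maxima reading of (3) applied to Y
b, tau = 5_000, 1.0
block_max = y[: (n // b) * b].reshape(-1, b).max(axis=1)
theta_y_hat = -np.log(np.mean(block_max <= b / tau)) / tau
print("formula value:", round(formula, 3), "  block estimate:", round(theta_y_hat, 3))
```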
4 Tail dependence

Now we analyze the effect of this failure mechanism on the dependence between two variables, $Y_n$ and $Y_{n+m}$, $m\ge1$. More precisely, we evaluate the lag-$m$ tail dependence coefficient
$$\lambda(Y_{n+m}\mid Y_n)=\lim_{x\to\infty}P(Y_{n+m}>x\mid Y_n>x),$$
which incorporates the tail dependence between $X_n$ and $X_{n+j}$, with $j$ regulated by the maximum number of failures $\kappa-1$ and by the relation between $m$ and $\kappa$. In particular, independent variables present null tail dependence coefficients. If $m=1$, we obtain the tail dependence coefficient of Joe [5]. For results related to lag-$m$ tail dependence in the literature, see, e.g., Zhang [10,11].

Proposition 4.1 The sequence $\{Y_n\}$ has lag-$m$ tail dependence coefficient, for $m\ge1$,
$$\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{\ast}=0}^{\kappa-1}\lambda(X_{n+i+i^{\ast}}\mid X_n)\cdot p_{1,2,\ldots,i^{\ast}+1,i^{\ast}+1+i,i^{\ast}+2+i,\ldots,i^{\ast}+1+m}(1,0,\ldots,0,1,0,\ldots,0),\tag{10}$$
provided all the coefficients $\lambda(X_{n+i+i^{\ast}}\mid X_n)$ exist.

Proof. Observe that
$$\begin{aligned}
P(Y_n>x,\,Y_{n+m}>x)&=P(Y_n>x,\,U_{n+1}=0=\ldots=U_{n+m})\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}P(Y_n>x,\,X_{n+i}>x,\,U_{n+i}=1,\,U_{n+i+1}=0=\ldots=U_{n+m})\\
&=\sum_{i=0}^{\kappa-1-m}P(X_{n-i}>x)\,p_{n-i,n-i+1,\ldots,n+m}(1,0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{\ast}=0}^{\kappa-1}P(X_{n-i^{\ast}}>x,\,X_{n+i}>x)\,p_{n-i^{\ast},n-i^{\ast}+1,\ldots,n,n+i,n+i+1,\ldots,n+m}(1,0,\ldots,0,1,0,\ldots,0)
\end{aligned}$$
and $\sum_{i=0}^{\kappa-1-m}p_{1,2,\ldots,m+i+1}(1,0,\ldots,0)=p_{1,\ldots,m}(0,\ldots,0)$. Dividing by $P(Y_n>x)=P(X_n>x)$ and letting $x\to\infty$ yields (10). □

Taking $m=1$ in (10), we obtain the tail dependence coefficient
$$\lambda(Y_{n+1}\mid Y_n)=p_n(0)+\sum_{i=0}^{\kappa-1}\lambda(X_{n+1+i}\mid X_n)\,p_{1,2,\ldots,i+1,i+2}(1,0,\ldots,0,1),$$
provided all the coefficients $\lambda(X_{n+1+i}\mid X_n)$ exist.

If $\{X_n\}$ is lag-$m^{\ast}$ tail independent for every integer $m^{\ast}\ge1\vee(m-\kappa+1)$, we have $\lambda(X_{n+i+i^{\ast}}\mid X_n)=0$ in the second term of (10), and thus $\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}$; in particular, $\{Y_n\}$ is lag-$m$ tail independent for every integer $m\ge\kappa$.

Example 4.1 Consider again $\{Y_n\}$ based on the ARMAX sequence $\{X_n\}$ of Example 3.1.
We have that $\{X_n\}$ has lag-$m$ tail dependence coefficient $\lambda(X_{n+m}\mid X_n)=\phi^{m}$ (Ferreira and Ferreira [2]), and thus,
$$\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,\mathbf{1}_{\{m\le\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i^{\ast}=0}^{\kappa-1}\phi^{i+i^{\ast}}\,p_{1,2,\ldots,i^{\ast}+1,i^{\ast}+1+i,i^{\ast}+2+i,\ldots,i^{\ast}+1+m}(1,0,\ldots,0,1,0,\ldots,0).$$
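As a quick numerical illustration (a sketch, not a computation from the paper), the ARMAX ingredient used above, $\lambda(X_{n+m}\mid X_n)=\phi^{m}$, can be checked by replacing the limit $x\to\infty$ with a high sample quantile; these are the coefficients that enter (10) and the displayed expression for $\lambda(Y_{n+m}\mid Y_n)$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi = 1_000_000, 0.6

# ARMAX path X_i = max(phi*X_{i-1}, (1-phi)*Z_i) with iid standard Frechet innovations
z = 1.0 / -np.log(rng.random(n))
x = np.empty(n)
x[0] = z[0]
for i in range(1, n):
    x[i] = max(phi * x[i - 1], (1 - phi) * z[i])

u = np.quantile(x, 0.999)                    # high threshold standing in for x -> infinity
exc = x > u
for m in (1, 2, 3):
    lam_hat = np.mean(exc[:-m] & exc[m:]) / np.mean(exc)
    print(f"m={m}:  empirical {lam_hat:.3f}   phi^m = {phi ** m:.3f}")
```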
